\begin{document}
\selectlanguage{english}
\title{A note on comonotonicity and positivity of the control components of decoupled quadratic FBSDE}
\begin{abstract} In this small note we are concerned with the solution of Forward-Backward Stochastic Differential Equations (FBSDE) with drivers that grow quadratically in the control component (quadratic growth FBSDE or qgFBSDE). The main theorem is a comparison result that allows comparing componentwise the signs of the control processes of two different qgFBSDE. As a byproduct one obtains conditions that allow establishing the positivity of the control process. \end{abstract}
{\bf 2010 AMS subject classifications:} Primary: 60H30. Secondary: 60H07, 60J60.\\
{\bf Key words and phrases:} BSDE, forward-backward SDE, quadratic growth, comparison, positivity, stochastic calculus of variations, Malliavin calculus, Feynman-Kac formula.
\section{Introduction} This small note is concerned with forward-backward stochastic differential equations (FBSDEs) in the Brownian framework, i.e. equations following, for some measurable functions $b$, $\sigma$, $f$ and $g$, the dynamics \begin{align*} X_s^{t,x}&=x+\int_t^s b(r,X^{t,x}_r)\mathrm{d} r+\int_t^s \sigma(r,X^{t,x}_r)\mathrm{d} W_r,\\ Y^{t,x}_s &=g(X^{t,x}_T) +\int_s^T f(r,X^{t,x}_r,Y^{t,x}_r,Z^{t,x}_r)\mathrm{d} r-\int_s^T Z^{t,x}_r\mathrm{d} W_r, \end{align*} where $W$ is a $d$-dimensional Brownian motion, $(t,x)\in[0,T]\times\mathbb{R}^m$ and $s\in[t,T]$. The function $f$ is called the generator or driver while $g$ is called the terminal condition function. The solution of the FBSDE is the triple of adapted processes $(X,Y,Z)$; $Z$ is called the control process.
In the last 30 years much attention has been given to this type of equation due to its importance in the fields of optimal control and finance. The standard theory of FBSDE is formulated under the canonical Lipschitz assumption (see for example \cite{EPQ} and the references therein), but in many financial problems drivers $f$ with quadratic growth in the control component appear, i.e.~drivers $f$
satisfying a growth condition of the type $|f(t,x,y,z)|\leq C(1+|y|+|z|^2)$. The close relation between FBSDE with drivers of quadratic growth in the control component (qgFBSDE) and the fields of finance, stochastic control and parabolic PDEs is illustrated by the works \cite{HIM2005}, \cite{HPdR10}, \cite{EPQ} and the references therein.
One of the fundamental results in BSDE or FBSDE theory is the so-called comparison theorem, which allows one to compare the $Y$ components of the solutions of two BSDEs. Roughly speaking, given a terminal condition function $g^i$, a driver $f^i$ and the corresponding FBSDE solution $(X,Y^i,Z^i)$ for $i\in\{1,2\}$, if $g^1$ dominates $g^2$ and $f^1$ dominates $f^2$ in some sense, then this order relation is expected to carry over to the $Y$ components, i.e. $Y^1$ dominates $Y^2$ in some sense. Such a result is, however, not possible for the control components $Z^i$. In this short note we give a type of comparison result for the control components $Z$, a so-called comonotonicity result. This result allows one to compare the signs of the control processes $Z^1$ and $Z^2$ componentwise, and as a side product one finds sufficient conditions that establish the positivity of the control process for a single FBSDE.
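For instance, already in the linear case $f^1=f^2\equiv 0$ with $m=d=1$, $b\equiv 0$ and $\sigma\equiv 1$ (so that $X_s=x+W_s$), the choice $g^1\equiv 1$ and $g^2=\sin$ gives $Y^1\equiv 1\geq Y^2_s=e^{-(T-s)/2}\sin(X_s)$, while $Z^1\equiv 0$ and $Z^2_s=e^{-(T-s)/2}\cos(X_s)$ changes sign; hence no pointwise order between $Z^1$ and $Z^2$ can hold in general, and statements about the (componentwise) signs of the control processes are the natural substitute.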
This type of result can be useful in several situations, for instance in the numerics for such equations, since it allows one to establish a priori heuristics that can improve the quality of the numerical approximation. This point of view is pertinent as the applications of FBSDE extend to the field of fluid mechanics (see \cite{freidosreis2011}).
A possible application of the results presented in this note lies in the problem of showing the existence (and smoothness) of marginal laws of $Y$ that are absolutely continuous with respect to the Lebesgue measure. This type of analysis involves showing the strict positivity of the Malliavin variance (roughly, the $Z$ component) of the solution of the FBSDE (see e.g. \cite{MR2134722}). The results in \cite{MR2134722} were established for FBSDE whose driver function satisfies a standard Lipschitz condition in its spatial components, and it is not possible to adapt the proof to cover the qgFBSDE setting of this work.
From another point of view, the comonotonicity result is of interest in the context of economic models of equilibrium pricing when analyzed in the qgFBSDE framework. In such a framework the equilibrium market price of risk can be characterized in terms of the control process of the solution to a qgFBSDE. The difficulty is that the individual optimization problems underlying the characterization of the equilibrium require the equilibrium volatility (the $Z$ component of the solution to a certain qgFBSDE) to satisfy an exponential integrability condition as well as a positivity condition. Since the results of \cite{MR2134722} cannot be applied or adapted to the qgFBSDE setting, the comonotonicity result presented here (and its corollary) provides conditions that ensure the positivity of the relevant process and hence may prove to be very useful in equilibrium analysis. An example of this type of problem can be found in \cite{HPdR10}.
The results of this work originate in \cite{05CKW}, where the authors give a comonotonicity result for FBSDE satisfying a standard Lipschitz condition and where the driver function is independent of the diffusion process $X$. In \cite{reis2011} the author extended the results of \cite{05CKW} to the qgFBSDE setting but was not able to include the dependence on $X$ in the driver. The dependence of $f$ on $X$ is quite common in the financial framework, and its absence makes the applicability of \cite{reis2011} limited. This short note presents a full generalization of the results of \cite{reis2011} in which the driver is now allowed to depend on $X$. This makes the conditions and the analysis more involved, but it also makes the result general enough that it can now be ``broadly'' applied to the standard financial setting, where the driver $f$ almost always depends on the underlying diffusion $X$.
The note is organized as follows: In Section 2 we introduce some notation and recall some known results. The main results are then stated and proved in Section 3.
\section{Preliminaries}
Throughout fix $T>0$. We work on a canonical Wiener space $(\Omega, \mathcal{F}, \mathbb{P})$ carrying a $d$-dimensional Wiener process $W = (W^1,\cdots, W^d)$ restricted to the time interval $[0,T]$ and we denote by $\mathcal{F}=(\mathcal{F}_t)_{t\in[0,T]}$ its natural filtration enlarged in the usual way by the $\mathbb{P}$-zero sets.
Let $p\geq 2$, then we denote by $\mathcal{S}^p(\mathbb{R}^m)$ the space of all measurable processes $(Y_t)_{t\in[0,T]}$ with values in $\mathbb{R}^m$ normed by $\| Y \|_{\mathcal{S}^p} = \mathbb{E}[\sup_{t \in [0,T]}|Y_t|^p ]^{{1}/{p}}$ and by $\mathcal{S}^\infty(\mathbb{R}^m)$ its subspace of bounded measurable processes. We also denote by $\mathcal{H}^p(\mathbb{R}^m)$ the space of all progressively measurable processes $(Z_t)_{t\in[0,T]}$ with values in $\mathbb{R}^m$ normed by $\|Z\|_{\mathcal{H}^p} = \mathbb{E}[\big( \int_0^T |Z_s|^2 \mathrm{d} s \big)^{p/2} ]^{{1}/{p}}$.
For vectors $x = (x^1,\cdots, x^m)\in \mathbb{R}^m$ we write $|x| = (\sum_{i=1}^m (x^i)^2)^{\frac{1}{2}}$. $\nabla$ denotes the canonical gradient operator and for a function $h(x,y):\mathbb{R}^m\times\mathbb{R}^d\to \mathbb{R}$ we write $\nabla_x h$ or $\nabla_y h$ to refer to the first derivatives with relation to $x$ and $y$ respectively.
We work with decoupled systems of forward and backward stochastic differential equations (FBSDE) for $(t,x)\in[0,T]\times\mathbb{R}^m$ and $s\in[t,T]$ \begin{align} \label{sde} X_s^{t,x}&=x+\int_t^s b(r,X^{t,x}_r)\mathrm{d} r+\int_t^s \sigma(r,X^{t,x}_r)\mathrm{d} W_r,\\ \label{bsde} Y^{t,x}_s &=g(X^{t,x}_T) +\int_s^T f(r,X^{t,x}_r,Y^{t,x}_r,Z^{t,x}_r)\mathrm{d} r-\int_s^T Z^{t,x}_r\mathrm{d} W_r, \end{align} for some measurable functions $b$, $\sigma$, $g$ and $f$.
We now state our assumptions. \begin{assump}\label{H1}
The functions $b:[0,T]\times\mathbb{R}^m\to \mathbb{R}^m$ and $\sigma:[0,T]\times\mathbb{R}^m\to \mathbb{R}^{m\times d}$ are continuously differentiable in space with derivatives uniformly bounded by a constant $K$ and are $\frac12$-H\"older continuous in time. $\sigma$ is uniformly elliptic, and $|b(\cdot,0)|$ and $|\sigma(\cdot,0)|$ are uniformly bounded.
$g:\mathbb{R}^m\to\mathbb{R}$ is bounded and continuously differentiable with bounded derivatives. $f$ is continuously differentiable in its spatial variables, uniformly continuous in the time variable and satisfies, for some $M>0$ and all $(t,x,y,z)\in[0,T]\times \mathbb{R}^m\times \mathbb{R}\times \mathbb{R}^d$, $|f(t,x,y,z)|\leq M (1+|y|+|z|^2)$ as well as \begin{align*}
|\nabla_x f(t,x,y,z)|\leq M (1+|y|+|z|^2),\quad
|\nabla_y f(t,x,y,z)|\leq M, \quad
|\nabla_z f(t,x,y,z)|\leq M (1+|z|). \end{align*} \end{assump}
\begin{assump} \label{H2} The spatial derivatives $\nabla b$, $\nabla \sigma$ and $\nabla g$ satisfy a standard Lipschitz condition in their spatial variables with Lipschitz constant $K$.
$\nabla_y f$ satisfies a standard Lipschitz condition with Lipschitz constant $K$ and for all $t\in[0,T]$, $x,x'\in\mathbb{R}^m$, $y,y'\in\mathbb{R}$ and $z,z'\in\mathbb{R}^d$ it holds that \begin{align*}
&|\nabla_x f(t,x,y,z)-\nabla_x f(t,x',y',z')| \\ &\hspace{1cm}
\leq K\big(1+|z|+|z'|\big)\big\{
\big(1+|z|+|z'|\big)|x-x'|+|y-y'|+|z-z'|\big\},\\
&|\nabla_z f(t,x,y,z)-\nabla_z f(t,x',y',z')| \\ &\hspace{1cm}
\leq K \big\{\big(1+|z|+|z'|\big) |x-x'|+|y-y'|+|z-z'|\big\}. \end{align*}
\end{assump} The next theorem compiles several results found throughout \cite{AIdR07}, \cite{IdR2010} and \cite{reis2011}. \begin{theo}\label{compilationtheorem} Let Assumption \ref{H1} hold. Then for any $p\geq 2$ and $(t,x)\in[0,T]\times\mathbb{R}^m$ there exists a unique solution $\Theta^{t,x}=(X^{t,x},Y^{t,x},Z^{t,x})$ of FBSDE \eqref{sde}-\eqref{bsde} in the space $\mathcal{S}^p\times\mathcal{S}^\infty\times\mathcal{H}^p$ and\footnote{BMO refers to the class of bounded mean oscillation martingales, see \cite{IdR2010} or \cite{kazamaki} for more details.} $\int_0^\cdot Z\mathrm{d} W \in BMO$.
The variational process of $\Theta^{t,x}$ exists and satisfies for $s\in[t,T]$ \begin{align} \label{nablasde} \nabla_x X_s^{t,x}&=I_m+\int_t^s \nabla_x b(r,X^{t,x}_r)\nabla_x X^{t,x}_r\mathrm{d} r+\int_t^s \nabla_x\sigma(r,X^{t,x}_r)\nabla_x X^{t,x}_r\mathrm{d} W_r,\\ \label{nablabsde} \nabla_x Y^{t,x}_s &=\nabla_x g(X^{t,x}_T)\nabla_x X_T^{t,x} +\int_s^T \langle (\nabla f)(r,\Theta^{t,x}_r),\nabla_x \Theta^{t,x}_r \rangle\mathrm{d} r-\int_s^T \nabla_x Z^{t,x}_r\mathrm{d} W_r. \end{align} The triple $\Theta^{t,x}$ is Malliavin differentiable and its Malliavin derivatives are given by $D \Theta^{t,x} = (D X^{t,x},DY^{t,x},DZ^{t,x})$. The process $(Z_s^{t,x})_{s\in[t,T]}$ has continuous paths, $Z^{t,x}\in\mathcal{S}^p$ and for $0\leq t\leq u\leq s\leq T$ the following representation holds \begin{align} \label{representation} D_s Y^{t,x}_s = Z^{t,x}_s,\ \mathbb{P}\text{-}a.s. \quad \textrm{ and } \quad D_u Y^{t,x}_s = \nabla_x Y^{t,x}_s (\nabla_x X^{t,x}_u)^{-1} \sigma(u,X^{t,x}_u),\ \mathbb{P}\text{-}a.s. \end{align} There exists a continuous function $u:[0,T]\times\mathbb{R}^m\to\mathbb{R}$ such that for all $(t,x)\in[0,T]\times \mathbb{R}^m$ and $s\in[t,T]$ it holds that $Y^{t,x}_s=u(s,X_s^{t,x})$ $\mathbb{P}$-a.s.
Under Assumption \ref{H2} the function $u$ is continuously differentiable in its spatial variables and $Z^{t,x}_s=(\nabla_x u)(s,X_s^{t,x})\sigma(s,X^{t,x}_s)$ $\mathbb{P}$-a.s. for all $0\leq t\leq s\leq T$ and $x\in\mathbb{R}^m$. \end{theo} \begin{proof} Existence and uniqueness of the solution are quite standard, both for the SDE (e.g. \cite{Protter2005}) and for the BSDE (see e.g. Theorem 1.2.12 and Lemma 1.2.13 in \cite{reis2011}).
The variational differentiability and representation formulas as well as the path continuity of $Z$ follow from Theorems 2.8, 2.9 and 5.2 in \cite{IdR2010} (or Theorems 3.1.9, 3.2.4 and 4.3.2 of \cite{reis2011}). We emphasize that due to the continuity of the involved processes, the representation formulas \eqref{representation} hold $\mathbb{P}$-a.s. for all $t\in[0,T]$ and not just $\mathbb{P}\otimes \textrm{Leb}$-a.a.
Lastly, the Markov property of the $Y$ process is rather standard (see Theorem 4.1.1 of \cite{reis2011}). The differentiability assumptions on the driver and terminal condition function (Assumption \ref{H2}) ensure that the function $u$ is continuously differentiable in the spatial variables. A detailed proof of this can be found either in Theorem 7.7 in \cite{10AIdR} or Theorem 4.1.2 in \cite{reis2011}. \end{proof}
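\begin{remark} As a simple sanity check of the above representation formulas, consider the linear case $f\equiv 0$ (which satisfies Assumptions \ref{H1} and \ref{H2} trivially): then $Y^{t,x}_s=\mathbb{E}[g(X^{t,x}_T)\mid\mathcal{F}_s]$, the function $u(t,x)=\mathbb{E}[g(X^{t,x}_T)]$ is the classical Feynman--Kac representation, and $Z^{t,x}_s=(\nabla_x u)(s,X^{t,x}_s)\sigma(s,X^{t,x}_s)$ is precisely the integrand appearing in the martingale representation of $g(X^{t,x}_T)$. \end{remark}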
\section{A comonotonicity result for quadratic FBSDE}
In this section we work with a $d$-dimensional Brownian motion $W$ on the time interval $[0,T]$ for some positive finite $T$. Throughout let $(t,x)\in [0,T]\times \mathbb{R}^m$. Our standing assumption for this section is as follows. \begin{assump}\label{H} Let Assumptions \ref{H1} and \ref{H2} hold. Assume that $m=1$ and $d\geq 1$. \end{assump} \begin{remark} \label{caseofmdiff1} We note that it is possible to write the results of this section for multidimensional SDE systems (i.e.~when $m\geq 1$) under the assumption that $\sigma$ is a square diagonal matrix and the system of forward equations is fully decoupled. There are many applications where such an assumption holds (e.g. \cite{HPdR10}). We write these results with $m=1$ to simplify the presentation of this short note. \end{remark}
For each $i\in\{1,2\}$ we define the SDE \eqref{sde} with $b_i$ and $\sigma_i$ and BSDE \eqref{bsde} with terminal condition and driver given by $g_i$ and $f_i$. We denote the respective solution of the system by $(X^{t,x,i}_s,Y^{t,x,i}_s,Z^{t,x,i}_s)_{s\in[t,T]}$ valued in $\mathbb{R}\times\mathbb{R}\times\mathbb{R}^d$ for $(t,x,i)\in[0,T]\times\mathbb{R}\times\{1,2\}$.
We define the vector-product operator ``$\odot$'' as the map $\odot:\mathbb{R}^d\times\mathbb{R}^d\to\mathbb{R}^d$ given by \begin{align}\label{odotoperator} a\odot b= (a_1b_1, \ldots, a_d b_d),\qquad \textrm{for any } a=(a_1,\cdots,a_d),\,b=(b_1,\cdots,b_d)\in \mathbb{R}^d. \end{align} We use the convention that $a\odot b\geq 0$ means that $a_i b_i\geq 0$ for each $i\in\{1,\ldots,d\}$.
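For instance, for $a=(1,-2,0)$ and $b=(3,5,-7)$ in $\mathbb{R}^3$ one has $a\odot b=(3,-10,0)$, so that $a\odot b\geq 0$ fails because of the second coordinate, whereas $a\odot a=(1,4,0)\geq 0$; in particular $a\odot a\geq 0$ always holds.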
The aim of this section is to explore conditions under which the following statement holds \[Z^{t,x,1}_s \odot Z_s^{t,x,2} \geq 0,\quad \mathbb{P}\text{-a.s.},\quad \textrm{for any } (t,x)\in[0,T]\times\mathbb{R}\textrm{ and }s\in[t,T]. \]
\begin{defi}[Comonotonic functions] We say that two measurable functions $g,h:\mathbb{R}\to\mathbb{R}$ are comonotonic if they are \emph{monotone} and \emph{have the same type of monotonicity}, i.e.~if $g$ is increasing or decreasing then $h$ is also increasing or decreasing, respectively. We say that $g$ and $h$ are strictly comonotonic if they are comonotonic and strictly monotonic. \end{defi} We now state our main theorem. \begin{theo}\label{comono-theo-1} Let Assumption \ref{H} hold and for $(t,x)\in[0,T]\times\mathbb{R}$ define $(X^{t,x,i},Y^{t,x,i},Z^{t,x,i})$ as the unique solution of FBSDE (\ref{sde})-(\ref{bsde}) for $i\in\{1,2\}$. Suppose that $x\mapsto g_i(x)$ and $x\mapsto f_i(\cdot,x,\cdot,\cdot)$ are comonotonic for all $i\in\{1,2\}$ and further, that $g_1,g_2$ are also comonotonic\footnote{This implies that $x\mapsto f_1(\cdot,x,\cdot,\cdot)$ and $x\mapsto f_2(\cdot,x,\cdot,\cdot)$ are comonotonic as well.}. If it holds for all $s\in[t,T]$ that \begin{align} \label{sigmaineq} \sigma_1(s,X^{t,x,1}_s)\odot \sigma_2(s,X^{t,x,2}_s)\geq 0,\quad \mathbb{P}\text{-}a.s., \end{align} then \begin{align} \label{eq:ZZineq} Z^{t,x,1}_s \odot Z^{t,x,2}_s \geq 0,\quad \mathbb{P}\text{-}a.s.,\quad \textrm{for any } (t,x)\in[0,T]\times\mathbb{R}\textrm{ and }s\in[t,T]. \end{align} If $g_1,\,g_2$ are strictly comonotonic and inequality \eqref{sigmaineq} holds strictly then \eqref{eq:ZZineq} is also strict. \end{theo} \begin{proof} Throughout take $t\in[0,T]$, $x\in\mathbb{R}$ and let $i\in\{1,2\}$. According to Theorem \ref{compilationtheorem}, for each $i\in\{1,2\}$ there exists a measurable, deterministic function $u_i:[0,T]\times \mathbb{R}\to \mathbb{R}$, continuously differentiable in its spatial variable, such that $Y_s^{t,x,i}=u_i(s,X_s^{t,x,i})$ and $Z_s^{t,x,i}= (\nabla_x u_i) (s,X_s^{t,x,i})\sigma_i(s,X_s^{t,x,i})$ $\mathbb{P}$-a.s. We have then $\mathbb{P}$-a.s. that for any $s\in[t,T]$ (recall that $\sigma_i$ is a vector and $\nabla_x u_i$ a scalar) \begin{align} \nonumber Z^{t,x,1}_s\odot Z^{t,x,2}_s &= \Big( (\nabla_x u_1)(s,X^{t,x,1}_s)\ \sigma_1(s,X^{t,x,1}_s) \Big) \odot \Big( (\nabla_x u_2) (s,X^{t,x,2}_s)\ \sigma_2(s,X^{t,x,2}_s)\Big) \\ \label{comono-zodotz} &= \Big(\sigma_1(s,X^{t,x,1}_s)\odot \sigma_2(s,X^{t,x,2}_s)\Big) (\nabla_x u_1)(s,X^{t,x,1}_s) (\nabla_x u_2)(s,X^{t,x,2}_s). \end{align} A standard comparison theorem for SDEs (see \cite{Protter2005}) yields that for any fixed $t$ and $T$ the mappings $x\mapsto X^{t,x,i}_T$ are increasing. This, along with the fact that $g_1$ and $g_2$ are comonotonic functions, implies that for fixed $t$ and $T$ it holds that $x\mapsto g_1(X^{t,x,1}_T)$ and $x\mapsto g_2(X^{t,x,2}_T)$ are a.s.~comonotonic. A similar argument implies the same conclusion for the drivers $f_i$, i.e.~$x\mapsto f_1(\cdot,X^{t,x,1}_\cdot,\cdot,\cdot)$ and $x\mapsto f_2(\cdot,X^{t,x,2}_\cdot,\cdot,\cdot)$ are a.s. comonotonic.
Using the comparison theorem for quadratic BSDE (see e.g.~Theorem 2.6 in \cite{00Kob}) and the monotonicity (and comonotonicity) of $x\mapsto g_i(X^{t,x,i}_T)$ and $x\mapsto f_i(\cdot,X^{t,x,i}_\cdot,\cdot,\cdot)$ we can conclude that $x\mapsto Y^{t,x,i}$ is also a.s.~monotone. Furthermore, since $x\mapsto g_1(X^{t,x,1}_T)$, $x\mapsto g_2(X^{t,x,2}_T)$, $x\mapsto f_1(\cdot,X^{t,x,1}_\cdot,\cdot,\cdot)$ and $x\mapsto f_2(\cdot,X^{t,x,2}_\cdot,\cdot,\cdot)$ are comonotonic, the same comparison theorem yields that the mappings $x\mapsto Y^{t,x,1}$ and $x\mapsto Y^{t,x,2}$ are also a.s.~comonotonic. Equivalently, one can write for any $(t,x)\in[0,T]\times \mathbb{R}$ that (notice that $\nabla_x u$ exists according to Theorem \ref{compilationtheorem}) \begin{align} \label{aux-for-strict} \big\langle (\nabla_x u_1)(t,x), (\nabla_x u_2) (t,x)\big\rangle \geq 0. \end{align} Therefore, combining \eqref{aux-for-strict} with \eqref{sigmaineq} in (\ref{comono-zodotz}) we easily obtain \[Z^{t,x,1}_s(\omega)\odot Z^{t,x,2}_s(\omega)\geq 0, \quad \mathbb{P}\text{-}a.s.\ \omega\in \Omega,\ (t,x)\in[0,T]\times\mathbb{R},\quad s\in[t,T]. \] Under the assumption that $g_1$ and $g_2$ are strictly comonotonic it is clear that inequality \eqref{aux-for-strict} is also strict. Furthermore, if one also assumes that the inequality in \eqref{sigmaineq} holds strictly for any $(t,x)\in[0,T]\times\mathbb{R}$ then \eqref{eq:ZZineq} also holds strictly. \end{proof} Unfortunately, it does not seem possible to weaken the assumptions of the previous theorem. The key factor is the representation of $Z^{t,x}$ via the function $Y^{t,x}_t=u(t,x)$, which needs to be continuously differentiable in the spatial variable, and for that one needs Assumption \ref{H2} to hold.
An interesting consequence of the previous result is obtained if we interpret the forward diffusion of the system as a backward equation. In terms of applications (as mentioned in the introduction), it is the next result that gives a condition allowing the user to conclude the positivity or negativity of the control process.
In the next result we focus on just one FBSDE, so we fix $i=1$ and omit this index. \begin{coro}\label{comono-theo-2} Let the assumptions of Theorem \ref{comono-theo-1} hold (with $i=1$). Take $(t,x)\in[0,T]\times\mathbb{R}$ and let $(X,Y,Z)$ be the unique solution of the FBSDE \begin{align} \label{loc-19122008-1} X_t&=x+\int_0^t b(s,X_s)\mathrm{d} s+\int_0^t \sigma(s,X_s)\mathrm{d}W_s,\\ \label{loc-19122008-2} Y_t &=g(X_T) +\int_t^T f(s,X_s,Y_s,Z_s)\mathrm{d}s-\int_t^T Z_s\mathrm{d}W_s. \end{align}
If $x\mapsto g(x)$ and $x\mapsto f(\cdot,x,\cdot,\cdot)$ are increasing (respectively decreasing) functions, then $Z_t \odot \sigma(t,X_t)$ is $\mathbb{P}$-a.s.~positive (respectively negative) for all $t\in[0,T]$. In particular, if the monotonicity of $g$ and $f$ (in $x$) is strict and if $\sigma$ is strictly positive, then $Z$ is either strictly positive or strictly negative (according to the monotonicity of $g$ and $f$). \end{coro} \begin{proof} Throughout let $x\in\mathbb{R}$ and $t\in[0,T]$. We prove the statement for the case of $g(x)$ and $f(\cdot,x,\cdot,\cdot)$ being increasing functions (in the spatial variable $x$) and we give a sketch of the proof for the decreasing case. Rewriting SDE \eqref{loc-19122008-1} as a BSDE leads to $X_t=X_T-\int_t^T b(s,X_s)\mathrm{d} s-\int_t^T \sigma(s,X_s)\mathrm{d} W_s$. In fact we can rewrite the above equation in a more familiar way, namely \begin{align}\label{loc-19112008-3} \tilde{Y}_t=\tilde{g}(X_T)+\int_t^T \tilde{f}(s,\tilde{Y}_s)\mathrm{d} s-\int_t^T \tilde{Z}_s\mathrm{d} W_s, \end{align} where $\tilde{Y}_s=X_s$ and $\tilde{Z}_s=\sigma(s,X_s)$ for $s\in[0,T]$, $\tilde{g}(x)=x$ and $\tilde{f}(t,x,y,z)=- b(t,y)$.
At this stage we need to clarify the identification $\tilde{Z}_\cdot=\sigma(\cdot,X_\cdot)$. Let us write explicitly the dependence on the parameter $x$ of the solution of the FBSDE (\ref{loc-19122008-1}), (\ref{loc-19112008-3}), i.e. we write $(X^x,\tilde{Y}^x,\tilde{Z}^x)$ to denote $(X,\tilde{Y},\tilde{Z})$. Note that the solution of the BSDE (\ref{loc-19112008-3}) is the solution of SDE (\ref{loc-19122008-1}), which is a Markov process. We can then write $\tilde{Y}_\cdot= X_\cdot=\tilde{u}(\cdot,X_\cdot)$, where $\tilde{u}$ is the identity function in the spatial variable (infinitely differentiable). Under Assumption \ref{H1} both $X$ and $\tilde{Y}$ are differentiable as functions of $x$ (see Theorem \ref{compilationtheorem}), and we have then $\tilde{Z}_\cdot=(\nabla_x \tilde{u}) (\cdot,X_\cdot)\sigma(\cdot,X_\cdot)$. Since $\tilde{u}$ is the identity function in the spatial variable, with derivative the constant function $1$, it follows immediately that $\tilde{Z}_\cdot=\sigma(\cdot,X_\cdot)$.
Our aim is to deduce this result from the previous theorem, so we only have to check that its assumptions are satisfied. Comparing the terminal conditions of (\ref{loc-19122008-2}) and (\ref{loc-19112008-3}), i.e. comparing $x\mapsto g(X^x_T)$ with $x\mapsto \tilde{g}(X^x_T)= X^x_T$, it is clear that both functions are almost surely increasing. Further, the driver function $\tilde{f}$ of BSDE \eqref{loc-19112008-3} is given by $\tilde{f}(t,x,y,z)=\tilde{f}(t,y)=-b(t,y)$, which is independent of $x$. Clearly $x\mapsto \tilde{f}(\cdot,x,\cdot,\cdot)$ and $x\mapsto f(\cdot,x,\cdot,\cdot)$ are comonotonic.
Theorem \ref{comono-theo-1} applies and we conclude immediately that \begin{align} \label{Zodotsigma} Z_t\odot \tilde{Z}_t=Z_t\odot \sigma(t,X_t)\geq 0,~\mathbb{P}\text{-}a.s.\quad t\in[0,T]. \end{align}
For the other case, when $g$ is a decreasing function, the approach is very similar. We rewrite the SDE (\ref{loc-19122008-1}) in the following way, \[ -X_t=-X_T+\int_t^T b(s,X_s)\mathrm{d} s-\int_t^T\big[ -\sigma(s,X_s)\big]\mathrm{d} W_s,\quad t\in[0,T]. \] The terminal condition of the above BSDE is given by $x\mapsto \tilde{g}(x)=-x$ evaluated at $x=X_T$, and the driver is $\tilde{f}(t,x,y,z)=\tilde{f}(t,y)=b(t,-y)$, which is independent of $x$. Since $\tilde{g}$ is a decreasing function, we obtain our result by comparing the above BSDE with (\ref{loc-19122008-2}) and applying the previous theorem. \end{proof} The above corollary allows one to conclude, in particular, the strict positivity of the control process. If one is only interested in establishing positivity (ignoring strictness), then one can indeed lower the strength of the assumptions. \begin{lemma} Let Assumption \ref{H1} hold and $m=1$. If $x\mapsto g(x)$ and $x\mapsto f(\cdot,x,0,0)$ are both monotone increasing, then $ Z_t\odot \sigma(t,X_t)\geq 0$, $\mathbb{P}$-a.s. for all $t\in[0,T]$. If $x\mapsto g(x)$ and $x\mapsto f(\cdot,x,0,0)$ are both monotone decreasing, then $Z_t\odot \sigma(t,X_t)\leq 0$, $\mathbb{P}$-a.s. for all $t\in[0,T]$. \end{lemma} \begin{remark} Again, as in Remark \ref{caseofmdiff1}, it is possible to state and prove the same result for $m\geq 1$. One needs to impose that $\sigma$ is a square diagonal matrix and that the SDE is a decoupled system. \end{remark} \begin{remark} It is possible to weaken the assumptions of this lemma, as was done for Theorem 4.3.6 in \cite{reis2011} or Corollary 2 in \cite{IdRZ2010}. Namely, the conditions are weakened to Lipschitz-type conditions with the appropriate Lipschitz ``constant''; one then argues similarly, combining the argument below with a regularization procedure. \end{remark} \begin{proof} Throughout let $t\in[0,T]$ and $x\in\mathbb{R}$. Then due to the representation formulas in \eqref{representation} we have $\mathbb{P}$-a.s. that \begin{align} \label{trick} Z_t\odot\sigma(t,X_t) = D_t Y_t\odot \sigma(t,X_t) = \nabla_x Y_t (\nabla_x X_t)^{-1} \sigma(t,X_t) \odot\sigma(t,X_t). \end{align} It is trivial to verify that $\sigma(t,X_t) \odot\sigma(t,X_t)\geq 0$. It remains to establish a result concerning the sign of $\nabla_x Y$ and $(\nabla_x X)^{-1}$.
Under the assumptions it is easy to verify that the solution of \eqref{nablasde} is positive. Indeed, $\nabla_x X$ is essentially a positive geometric Brownian motion with nonlinear drift and volatility, which in turn implies that $(\nabla_x X)^{-1}$ is also positive. If we manage to deduce a result concerning the sign of $\nabla_x Y$ we are then able to obtain a weaker version of Corollary \ref{comono-theo-2}.
The methodology developed to deduce moment and a priori estimates for quadratic BSDE, illustrated in Lemmas 3.1 and 3.2 of \cite{IdR2010} (or Chapter 2 in \cite{reis2011}), allows for the following equality \begin{align} \label{simplifiedeq} \nabla_x Y_t = \mathbb{E}^{\widehat{\mathbb{P}}}\big[ e_T (e_t)^{-1} \nabla_x g(X_T) \nabla_x X_T +\int_t^T [e_r e_t^{-1} (\nabla_x f)(r,X_r,0,0)\nabla_x X_r] \mathrm{d} r
\big|\mathcal{F}_t\big], \end{align} where the process $e$ and the measure $\widehat{\mathbb{P}}$ are defined as \[ e_t=\exp\Big\{ \int_0^t \frac{f(r,X_r,Y_r,Z_r)-f(r,X_r,0,Z_r)}{Y_r}\mathbbm{1}_{Y_r\neq 0} \mathrm{d} r \Big\}, \] and $\widehat{\mathbb{P}}$ is a probability measure with Radon-Nikodym density given by \[
\frac{\mathrm{d} \widehat{\mathbb{P}}}{\mathrm{d} \mathbb{P}}=M_T=\mathcal{E}\Big( \int_0^T \frac{f(r,X_r,0,Z_r)-f(r,X_r,0,0)}{|Z_r|^2}Z_r\mathbbm{1}_{|Z_r|\neq 0} \mathrm{d} W_r \Big). \] Both $(e_t)_{t\in[0,T]}$ and $M$ are well defined. The first is well defined because $y\mapsto f(\cdot,\cdot,y,\cdot)$ is assumed to be uniformly Lipschitz, and hence $e$ is bounded from above and bounded away from zero. The second follows from a combination of the growth assumptions on $\nabla_z f$ and the fact that $\int Z \mathrm{d} W$ is a bounded mean oscillation martingale\footnote{This observation is key in many results for quadratic BSDE. The stochastic exponential of a BMO martingale is uniformly integrable and defines a proper density. This type of reasoning can be found ubiquitously in \cite{AIdR07} or \cite{IdR2010} for example.} (BMO).
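For the reader's convenience we recall that, for a continuous local martingale $N$, the stochastic (Dol\'eans-Dade) exponential used above is given by $\mathcal{E}(N)_t=\exp\big\{N_t-\tfrac12\langle N\rangle_t\big\}$; combined with the uniform integrability recalled in the footnote, this ensures that $\widehat{\mathbb{P}}$ is indeed a probability measure.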
We have already seen that $\nabla_x X$ is positive, and it is also trivial to conclude that the process $e$ is positive. Given that $g$ and $f$ are differentiable, saying that these functions are monotonic (in $x$) boils down to making a statement on the sign of $(\nabla_x g)(x)$ and $(\nabla_x f)(\cdot,x,0,0)$. If one assumes that $g$ and $f(\cdot,x,0,0)$ are monotone increasing in $x$, then $(\nabla_x g)(x)\geq 0$ and $(\nabla_x f)(\cdot,x,0,0)\geq 0$ for all $x$. Hence from \eqref{simplifiedeq} (and the remarks above) we conclude that $\nabla_x Y$ is also positive. Returning to \eqref{trick} we then have that $Z_t \odot \sigma(t,X_t)\geq 0$, which proves our result.
The arguments are similar for the case when $g(x)$ and $f(\cdot,x,0,0)$ are decreasing functions. \end{proof}
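\begin{remark} The conclusions above can be verified by hand in the purely quadratic case, which may serve as a sanity check. Take $m=d=1$, $b\equiv 0$, $\sigma\equiv 1$, $f(t,x,y,z)=\tfrac12 z^2$ (so that $f$ does not depend on $x$) and $g$ bounded, increasing and continuously differentiable with bounded derivative. The exponential (Cole--Hopf) transformation $P_t=e^{Y_t}$ turns \eqref{loc-19122008-2} into the driverless equation $\mathrm{d}P_t=P_tZ_t\mathrm{d}W_t$, so that $Y_t=\log\mathbb{E}\big[e^{g(X_T)}\,\big|\,\mathcal{F}_t\big]=u(t,X_t)$ with $u(t,x)=\log\mathbb{E}\big[e^{g(x+W_T-W_t)}\big]$, and hence \[ Z_t=(\nabla_x u)(t,X_t) =\frac{\mathbb{E}\big[g'(x+W_T-W_t)\,e^{g(x+W_T-W_t)}\big]}{\mathbb{E}\big[e^{g(x+W_T-W_t)}\big]}\bigg|_{x=X_t}\geq 0, \] in agreement with Corollary \ref{comono-theo-2} and the lemma above (with strict positivity if $g'>0$). \end{remark}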
{\bf Acknowledgments:} The first author would like to thank Peter Imkeller and Ulrich Horst for their comments. The first author gratefully acknowledges the partial support from the CMA/FCT/UNL through project PEst-OE/MAT/UI0297/2011.
This work was partially supported by the project SANAF UTA\_{}CMU/MAT/0006/2009.
\end{document}
\begin{document}
\makeatletter \def\@fnsymbol#1{\ifcase#1\or * \or 1 \or 2 \else\@ctrerr\fi\relax}
\let\mytitle\@title \chead{\small\itshape D. Dikranjan and G. Luk\'acs / LCA groups admitting quasi-convex null sequences} \fancyhead[RO,LE]{\small \thepage} \makeatother
\title{Locally compact abelian groups admitting\\ non-trivial quasi-convex
null sequences\thanks{{\em 2000 Mathematics Subject Classification}}}
\def\thanks#1{}
\thispagestyle{empty}
\begin{abstract} \noindent In this paper, we show that for every locally compact abelian group $G$, the following statements are equivalent:
\begin{myromanlist}
\item $G$ contains no sequence $\{x_n\}_{n=0}^\infty$ such that \mbox{$\{0\}\cup \{\pm x_n \mid n \in \mathbb{N} \}$} is infinite and quasi-convex in $G$, and \mbox{$x_n \longrightarrow 0$};
\item one of the subgroups $\{g \in G \mid 2g=0\}$ and $\{g \in G \mid 3g=0\}$ is open in $G$;
\item $G$ contains an open compact subgroup of the form $\mathbb{Z}_2^\kappa$ or $\mathbb{Z}_3^\kappa$ \ for some cardinal $\kappa$.
\end{myromanlist} \end{abstract}
\section{Introduction}
\label{sect:intro}
One of the main sources of inspiration for the theory of topological groups is the theory of topological vector spaces, where the notion of convexity plays a prominent role. In this context, the reals $\mathbb{R}$ are replaced with the circle group $\mathbb{T}=\mathbb{R}/\mathbb{Z}$, and linear functionals are replaced by {\em characters}, that is, continuous homomorphisms to $\mathbb{T}$. By making substantial use of characters, Vilenkin introduced the notion of quasi-convexity for abelian topological groups as a counterpart of convexity in topological vector spaces (cf. \cite{Vilenkin}). The counterpart of locally convex spaces are the locally quasi-convex groups. This class includes all locally compact abelian groups and locally convex topological vector spaces (cf.~\cite{Banasz}).
According to the celebrated Mackey-Arens theorem (cf.~\cite{MackeyCTLS} and~\cite{ArensDLin}), every locally convex topological vector space $(V,\tau)$ admits a so-called {\em Mackey topology}, that is, a locally convex vector space topology $\tau_\mu$ that is finest with respect to the property of having the same set of continuous linear functionals (i.e., \mbox{$(V,\tau_\mu)^*=(V,\tau)^*$}). Moreover, $\tau_\mu$ can be described as the topology of uniform convergence of the sets of an appropriate family of convex weakly compact sets of $(V,\tau)^*$. A counterpart of this notion in the class of locally quasi-convex abelian groups, the so-called {\em Mackey group topology}, was proposed in \cite{ChaMarTar}. It seems reasonable to expect to describe the Mackey topology of a locally quasi-convex abelian group $G$ as the topology $\tau_\mathfrak{S}$ of uniform convergence on members of an appropriate family $\mathfrak{S}$ of quasi-convex compact sets of the Pontryagin dual~$\widehat G$, where each set in $\mathfrak{S}$ is equipped with the weak topology. This underscores the importance of the compact quasi-convex sets in locally quasi-convex abelian groups. It was proven by Hern\'andez, and independently, by Bruguera and Mart\'{\i}n-Peinador, that a metrizable locally quasi-convex abelian group $G$ is complete if and only if the quasi-convex hull of every compact subset of $G$ is compact (cf.~\cite{Hern2} and~\cite{BrugMar2}). Luk\'acs extended this result, and proved that a~metrizable abelian group $A$ is MAP and has the quasi-convex compactness property if and only if it is locally quasi-convex and complete; he also showed that such groups are characterized by the property that the evaluation map $\alpha_A\colon A \rightarrow \hat{\hat A}$ is a closed embedding (cf.~\cite[I.34]{GLdualtheo}).
Let \mbox{$\pi\colon \mathbb{R} \rightarrow \mathbb{T}$} denote the canonical projection. Since the restriction
\mbox{$\pi_{|[0,1)}\colon [0,1) \rightarrow \mathbb{T}$} is a~bijection, we often identify in the sequel, {\em par abus de langage}, a~number \mbox{$a\in [0,1)$} with its image (coset) \mbox{$\pi(a)=a+\mathbb{Z}\in \mathbb{T}$}. We put \mbox{$\mathbb{T}_m:=\pi([-\frac{1}{4m},\frac{1}{4m}])$} for all \mbox{$m\in \mathbb{N}\backslash\{0\}$}. According to standard notation in this area, we use $\mathbb{T}_+$ to denote $\mathbb{T}_1$. For an abelian topological group $G$, we denote by $\widehat{G}$ the {\em Pontryagin dual} of $G$, that is, the group of all characters of $G$ endowed with the compact-open topology.
\begin{definition}\label{def:into:qc} For $E\subseteq G$ and $A \subseteq \widehat{G}$, the {\em polars} of $E$ and $A$ are defined as \begin{align} E^\triangleright=\{\chi\in \widehat{G} \mid \chi(E) \subseteq \mathbb{T}_+\} \quad \text{and} \quad A^\triangleleft=\{ x \in G \mid \forall \chi \in A, \chi(x) \in \mathbb{T}_+ \}. \end{align} The set $E$ is said to be {\em quasi-convex} if $E=E^{\triangleright\triangleleft}$. We say that $E$ is {\em $qc$-dense} if $G=E^{\triangleright\triangleleft}$. \end{definition}
Obviously, \mbox{$E\subseteq E^{\triangleright\triangleleft}$} holds for every \mbox{$E\subseteq G$}. Thus, $E$ is quasi-convex if and only if for every \mbox{$x\in G\backslash E$} there exists \mbox{$\chi\in E^\triangleright$} such that \mbox{$\chi(x)\not\in \mathbb{T}_+$}. The set \mbox{$Q_{G}(E):=E^{\triangleright\triangleleft}$} is the smallest quasi-convex set of $G$ that contains $E$, and it is called the {\em quasi-convex hull} of $E$.
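To illustrate these notions with a small example, consider $G=\mathbb{Z}_5$ and $E=\{0,\pm 1\}$. Every character of $\mathbb{Z}_5$ is of the form $\chi_a(x)=\frac{ax}{5}+\mathbb{Z}$ with $a\in\{0,\ldots,4\}$, and $\chi_a\in E^\triangleright$ precisely when $\frac{a}{5}+\mathbb{Z}\in\mathbb{T}_+$, that is, when $a\in\{0,\pm1\}$. In turn, $E^{\triangleright\triangleleft}=\{x\in\mathbb{Z}_5\mid \frac{x}{5}+\mathbb{Z}\in\mathbb{T}_+\}=\{0,\pm1\}=E$, so $E$ is quasi-convex in $\mathbb{Z}_5$ (cf.~\cite[7.8]{Aussenhofer}).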
\begin{definition} A sequence $\{x_n\}_{n=0}^\infty \subseteq G$ is said to be {\em quasi-convex} if \mbox{$S=\{0\} \cup \{\pm x_n \mid n \in \mathbb{N}\}$} is quasi-convex in $G$. We say that $\{x_n\}_{n=0}^\infty$ is {\em non-trivial} if the set $S$ is infinite, and it is a {\em null sequence} if $x_n \longrightarrow 0$. \end{definition}
\begin{example} \label{ex:intro:TJ23R} Each of the compact groups $\mathbb{T}$, $\mathbb{J}_2$ ($2$-adic integers), and $\mathbb{J}_3$ ($3$-adic integers), and the locally compact group $\mathbb{R}$ admits a non-trivial quasi-convex null sequence (cf.~\cite[1.2-1.4]{DikLeo} and~\cite[A-D]{DikGL1}). \end{example}
In this paper, we characterize the locally compact abelian groups $G$ that admit a non-trivial quasi-convex null sequence.
\begin{samepage} \begin{Ltheorem} \label{thm:qcs:main} For every locally compact abelian group $G$, the following statements are equivalent:
\begin{myromanlist}
\item $G$ admits no non-trivial quasi-convex null sequences;
\item one of the subgroups $G[2]=\{g \in G \mid 2g=0\}$ and $G[3]=\{g \in G \mid 3g=0\}$ is open in $G$;
\item $G$ contains an open compact subgroup of the form $\mathbb{Z}_2^\kappa$ or $\mathbb{Z}_3^\kappa$ \ for some cardinal $\kappa$.
{\parindent -25pt Furthermore, if $G$ is compact, then these conditions are also equivalent to: }
\item $G \cong \mathbb{Z}_2^\kappa \times F$ or $G\cong \mathbb{Z}_3^\kappa \times F$, where $\kappa$ is some cardinal and $F$ is a finite abelian group;
\item one of the subgroups $2G$ and $3G$ is finite.
\end{myromanlist} \end{Ltheorem} \end{samepage}
\pagebreak[2]
Theorem~\ref{thm:qcs:main} answers a question of L. Aussenhofer in the case of compact abelian groups. The proof of Theorem~\ref{thm:qcs:main} is presented in \S\ref{sect:qcs}. One of its main ingredients is the next theorem, which (together with the results mentioned in Example~\ref{ex:intro:TJ23R}) implies that for \mbox{every} prime $p$,~the compact group $\mathbb{J}_p$ of $p$-adic integers admits a non-trivial quasi-convex null sequence.
\begin{Ltheorem} \label{thm:Jp:main} Let $p \geq 5$ be a prime, $\underline a=\{a_n\}_{n=0}^\infty$ an increasing sequence of non-negative integers, and put \mbox{$y_n = p^{a_n}$}. The set $L_{\underline a,p} = \{0\} \cup \{\pm y_n \mid n \in \mathbb{N}\}$ is quasi-convex in~$\mathbb{J}_p$. \end{Ltheorem}
The proof of Theorem~\ref{thm:Jp:main} is presented in \S\ref{sect:Jp}. Theorem~\ref{thm:Jp:main} does not hold for \mbox{$p \not\geq 5$} (see Example~\ref{ex:Jp:p23}). Dikranjan and de Leo gave sufficient conditions on the sequence $\underline a$ to ensure that $L_{\underline a, 2}$ is quasi-convex in $\mathbb{J}_2$ (cf.~\cite[1.4]{DikLeo}). For the case where $p=3$, Dikranjan and Luk\'acs gave a complete characterization of those sequences $\underline a$ such that $L_{\underline a, 3}$ is quasi-convex (cf.~\cite[Theorem~D]{DikGL1}).
The arguments and techniques developed in the proof of Theorem B are applicable {\em mutatis mutandis} to sequences with $p$-power denominators in $\mathbb{T}$. This is done in the next theorem, whose proof is given in \S\ref{sect:Tp}. Theorem~\ref{thm:Tp:main} is a continuation of our study of non-trivial quasi-convex null sequences in $\mathbb{T}$ (cf.~\cite{DikGL1}).
\begin{Ltheorem} \label{thm:Tp:main} Let $p \geq 5$ be a prime, $\underline a=\{a_n\}_{n=0}^\infty$ an increasing sequence of non-negative integers, and put \mbox{$x_n = p^{-(a_n+1)}$}. The set $K_{\underline a,p} = \{0\} \cup \{\pm x_n \mid n \in \mathbb{N}\}$ is quasi-convex in~$\mathbb{T}$. \end{Ltheorem}
Theorem~\ref{thm:Tp:main} does not hold for \mbox{$p \not\geq 5$} (see Example~\ref{ex:Tp:p23}). Dikranjan and Luk\'acs gave complete characterizations of those sequences $\underline a$ such that $K_{\underline a, 2}$ and $K_{\underline a, 3}$ are quasi-convex in $\mathbb{T}$ (cf.~\cite[Theorem~A, Theorem~C]{DikGL1}).
We are leaving open the following three problems. The first two are motivated by the observation that certain parts of Theorem~\ref{thm:qcs:main} hold for a class larger than that of (locally) compact abelian groups. Indeed, one can prove, for instance, that items (i), (ii), and (v) of Theorem~\ref{thm:qcs:main} remain equivalent for so-called $\omega$-bounded abelian groups. (A topological group $G$ is called {\em $\omega$-bounded} if every countable subset of $G$ is contained in some compact subgroup of $G$.)
\begin{problem} Is it possible to replace the class of locally compact abelian groups in Theorem~\ref{thm:qcs:main} with a~different class of abelian topological groups that contains all compact abelian groups? \end{problem}
\begin{problem} Is it possible to obtain a characterization for countably compact abelian groups that admit no non-trivial quasi-convex null sequences in terms of their structures, in the spirit of Theorem~\ref{thm:qcs:main}? \end{problem}
\begin{problem} Let $H$ be an infinite cyclic subgroup of $\mathbb{T}$. Does $H$ admit a non-trivial quasi-convex null sequence? \end{problem}
\section{Preliminaries: Exotic tori and abelian pro-finite groups}
\label{sect:prel}
In this section, we provide a few well-known definitions and results that we rely on later. We chose to isolate these in order to improve the flow of arguments in \S\ref{sect:qcs}.
\begin{definition} (\cite{DikProET}, \cite[p.~141]{DikProSto}) A compact abelian group is an {\em exotic torus} if it contains no subgroup that is topologically isomorphic to $\mathbb{J}_p$ for some prime $p$. \end{definition}
The notion of exotic torus was introduced by Dikranjan and Prodanov in \cite{DikProET}, who also provided, among other things, the following characterization for such groups.
\pagebreak[2]
\begin{ftheorem}[{\cite{DikProET}}] \label{thm:prel:ET} A compact abelian group $K$ is an exotic torus if and only if it contains a~closed subgroup $B$ such that
\begin{myromanlist}
\item $K/B \cong \mathbb{T}^n$ for some $n \in \mathbb{N}$, and
\item $B = \prod\limits_{p} B_p$, where each $B_p$ is a compact bounded abelian $p$-group.
\end{myromanlist} Furthermore, if $K$ is connected, then each $B_p$ is finite. \end{ftheorem}
\pagebreak[2]
Since \cite{DikProET} is not easily accessible to most readers, we provide a sketch of the proof of Theorem~\ref{thm:prel:ET} for the sake of completeness. The main idea of the proof is to pass to the Pontryagin dual, and then to establish that an abelian group is the dual of an exotic torus if and only if it satisfies conditions that are dual to (i) and (ii). To that end, we recall that an abelian group $X$ is said to be {\em strongly non-divisible} if there is no surjective homomorphism \mbox{$X \rightarrow \mathbb{Z}(p^\infty)$} for any prime $p$ (cf.~\cite{DikProET}).
\begin{proof} It follows from Pontryagin duality that a compact abelian group $K$ contains no subgroup that is topologically isomorphic to $\mathbb{J}_p$ (i.e., $K$ is an exotic torus) if and only if its Pontryagin dual \mbox{$X=\widehat K$} has no quotient that is isomorphic to \mbox{$\mathbb{Z}(p^\infty)\cong \widehat{\mathbb{J}}_p$} (cf.~\cite[Theorem~54]{Pontr}), that is, $X$ is strongly non-divisible. Similarly, by Pontryagin duality (cf.~\cite[Theorem~37 and~54]{Pontr}), a compact abelian group $K$ satisfies (i) and (ii) if and only if its Pontryagin dual \mbox{$X=\widehat K$} contains a subgroup $F$ such that \begin{myromanlist}
\item[(i$^\prime$)] $F \cong \mathbb{Z}^n$ for some $n \in \mathbb{N}$, and
\item[(ii$^\prime$)] $X/F \cong \bigoplus\limits_p T_p$, where each $T_p$ is a bounded abelian $p$-group.
\end{myromanlist} Hence, we proceed by showing that a discrete abelian group $X$ is strongly non-divisible if and only if $X$ contains a subgroup $F$ that satisfies (i$^\prime$) and (ii$^\prime$).
\pagebreak[3]
Suppose that $X$ is strongly non-divisible. If $X$ contains a free abelian group $F_0$ of infinite rank, then there is a surjective homomorphism \mbox{$f\colon F_0 \rightarrow \mathbb{Z}(2^\infty)$}, and $f$ extends to a surjective homomorphism \mbox{$\bar f \colon X \rightarrow \mathbb{Z}(2^\infty)$}, because $\mathbb{Z}(2^\infty)$ is divisible. Since $X$ is strongly non-divisible, this is impossible. Thus, $X$ contains a free abelian subgroup $F$ of finite rank $n$ such that $X/F$ is a~torsion group (i.e., $r_0(X)=n$). Clearly, $F\cong \mathbb{Z}^n$, and so (i$^\prime$) holds. Let \mbox{$X/F = \bigoplus\limits_p T_p$} be the primary decomposition of $X/F$ (cf.~\cite[8.4]{Fuchs}), fix a prime $p$, and let $A_p$ be a subgroup of $T_p$ such that $A_p$ is a direct sum of cyclic subgroups, and $T_p/A_p$ is divisible (e.g., take a $p$-basic subgroup of~$T_p$; cf.~\cite[32.3]{Fuchs}). One has $T_p/A_p \cong \bigoplus\limits_{\kappa_p} \mathbb{Z}(p^\infty)$ (cf.~\cite[23.1]{Fuchs}). Since $T_p/A_p$ is a~homomorphic image of the strongly non-divisible group $X$, this implies that $\kappa_p=0$, and so $T_p=A_p$. Therefore, $T_p$ is a direct sum of cyclic groups. Finally, if $T_p$ is not bounded, then it admits a surjective homomorphism \mbox{$T_p \rightarrow \mathbb{Z}(p^\infty)$}. This, however, is impossible, because $T_p$ is a homomorphic image of $X$. Hence, each $T_p$ is a bounded $p$-group, and (ii$^\prime$) holds.
Conversely, suppose that $X$ contains a subgroup $F$ that satisfies (i$^\prime$) and (ii$^\prime$). Assume that $X$ is not strongly non-divisible, that is, there is a prime $p$ such that there exists a surjective homomorphism \mbox{$f\colon X \rightarrow \mathbb{Z}(p^\infty)$}. Since $\mathbb{Z}(p^\infty)$ is not finitely generated, the finitely generated image $f(F)$ is a proper subgroup of $\mathbb{Z}(p^\infty)$, and so \mbox{$\mathbb{Z}(p^\infty)/f(F)\cong \mathbb{Z}(p^\infty)$}. Thus, by replacing $f$ with its composition with the canonical projection \mbox{$\mathbb{Z}(p^\infty)\rightarrow \mathbb{Z}(p^\infty)/f(F)$}, we may assume that \mbox{$F \hspace{-2pt}\subseteq\hspace{-1pt} \ker f$}. Therefore, $f$ induces a surjective homomorphism \mbox{$g\colon X/F \rightarrow \mathbb{Z}(p^\infty)$}. Since $g(\bigoplus\limits_{q\neq p} T_q) = \{0\}$, surjectivity of $g$ implies
that $g_{|T_p}$ is surjective. This, however, is a contradiction, because by (ii$^\prime$),~$T_p$~is bounded. Hence, $X$ is strongly non-divisible.
It remains to be seen that if $K$ is a connected exotic torus, then each $B_p$ in (ii) is finite. In terms of the Pontryagin dual \mbox{$X=\widehat K$}, this is equivalent to the statement that if $X$ satisfies (i$^\prime$) and (ii$^\prime$), and $X$ is torsion free, then each $T_p$ is finite (cf.~\cite[Theorem~46]{Pontr}). We proceed by showing this latter implication. By (i$^\prime)$, $X$ contains a~subgroup $F$ such that \mbox{$F\cong \mathbb{Z}^n$} for some \mbox{$n \in \mathbb{N}$}. We have already seen that there is a~maximal $n$ with respect to this property. Since $X$ is torsion free, maximality of $n$ implies that $F$ meets non-trivially every non-zero subgroup of $X$ (i.e., $F$ is an {\em essential} subgroup). Let \mbox{$i\colon F \rightarrow \mathbb{Q}^n$} be a~monomorphism such that \mbox{$i(F)=\mathbb{Z}^n$}. Since $\mathbb{Q}^n$ is divisible, $i$ can be extended to a~homomorphism \mbox{$j\colon X \rightarrow \mathbb{Q}^n$}, and $j$ is a~monomorphism, because \mbox{$F \hspace{-2pt} \cap\hspace{-1.5pt} \ker j = \ker i$} is trivial (and $F$ is an essential subgroup). Therefore, $j$ induces a~monomorphism \mbox{$\bar j \colon X/F\rightarrow \mathbb{Q}^n/\mathbb{Z}^n$}. Since \mbox{$\mathbb{Q}^n/\mathbb{Z}^n\cong \bigoplus\limits_p \mathbb{Z}(p^\infty)^n$}, the image $\bar j(T_p)$ is isomorphic to a~bounded subgroup of $\mathbb{Z}(p^\infty)^n$ for every prime $p$. Hence, each $T_p$ is finite, because all bounded subgroups of $\mathbb{Z}(p^\infty)^n$ are finite. This completes the proof. \end{proof}
Recall that a topological group is {\em pro-finite} if it is the (projective) limit of finite groups, or equivalently, if it is compact and zero-dimensional. For a prime $p$, a topological group $G$ is called a {\em pro-$p$-group} if it is the (projective) limit of finite $p$-groups, or equivalently, if it is pro-finite and $x^{p^n} \longrightarrow e$ for every $x \in G$ (or, in the abelian case, $p^n x \longrightarrow 0$).
\begin{ftheorem}[{\cite{ArmacostLCA}, \cite[Corollary 8.8(ii)]{HofMor}, \cite[4.1.3]{DikProSto}}] \label{thm:prel:profin} Let $G$ be a pro-finite group. Then \mbox{$G\hspace{-2pt} =\hspace{-2pt} \prod\limits_{p} \hspace{-2pt} G_p$}, where each $G_p$ is a pro-$p$-group. \end{ftheorem}
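\begin{example} For every prime $p$, the group $\mathbb{J}_p$ of $p$-adic integers is a pro-$p$-group, being the projective limit of the finite cyclic groups $\mathbb{Z}_{p^n}$; accordingly, the pro-finite group $\prod\limits_p \mathbb{J}_p$ (the profinite completion of $\mathbb{Z}$) decomposes exactly as in Theorem~\ref{thm:prel:profin}, with $G_p=\mathbb{J}_p$. \end{example}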
\section{LCA groups that admit a non-trivial quasi-convex null sequence} \label{sect:qcs}
In this section, we prove Theorem~\ref{thm:qcs:main} by using two intermediate steps: First, we consider direct products of finite cyclic groups, and then we show that Theorem~\ref{thm:qcs:main} holds for pro-finite groups. We start off with a lemma that allows us to relate non-trivial quasi-convex null sequences in closed (and open) subgroups to those in the ambient group.
\begin{lemma} \label{lemma:qcs:subgr} Let $G$ be a locally compact abelian group, and $H$ a closed subgroup.
\begin{myalphlist}
\item If $H$ admits a non-trivial quasi-convex null sequence, then so does $G$.
\item If $H$ is open in $G$, then $H$ admits a non-trivial quasi-convex null sequence if and only if $G$ does.
\end{myalphlist} \end{lemma}
In order to prove Lemma~\ref{lemma:qcs:subgr}, we rely on the following general property of the quasi-convex hull, which will also be used in the proof of Theorem~\ref{thm:qcs:cyclic} below.
\begin{fproposition}[{\cite[I.3(e)]{GLdualtheo}, \cite[2.7]{DikLeo}}] \label{prop:qcs:qc-hom} If $f\colon G\to H$ is a continuous homomorphism of abelian topological groups, and $E\subseteq G$, then $f(Q_G(E))\subseteq Q_H(f(E))$. \end{fproposition}
\begin{proof}[Proof of Lemma~\ref{lemma:qcs:subgr}.] (a) Let $\{x_n\}_{n=0}^\infty$ be a non-trivial quasi-convex null sequence in $H$, and put \mbox{$S=\{0\}\cup\{\pm x_n \mid n\in\mathbb{N}\}$}. Let \mbox{$\iota\colon H \rightarrow G$} denote the inclusion. By Proposition~\ref{prop:qcs:qc-hom}, \begin{align} Q_H(S) \subseteq \iota^{-1}(Q_G(S))=Q_G(S)\cap H. \end{align} On the other hand, since $H$ is a subgroup, $H^\triangleright = H^\perp$, where \mbox{$H^\perp = \{\chi \hspace{-2pt}\in\hspace{-2pt} \widehat G \mid\chi(H)=\{0\}\}$} is the annihilator of $H$ in $\widehat G$. Thus, by Pontryagin duality (cf.~\cite[Theorems~37 and~54]{Pontr}), \begin{align} Q_G(H)=(H^\perp)^\triangleleft = H^{\perp\perp}=H, \end{align} and so \mbox{$Q_G(S) \subseteq Q_G(H)=H$}. Furthermore, \mbox{$Q_G(S) \subseteq Q_H(S)$}, because by Pontryagin duality, every character of $H$ extends to a character of $G$ (cf.~\cite[Theorem~54]{Pontr}). Hence \mbox{$Q_G(S)=Q_H(S)=S$}, that is, $\{x_n\}_{n=0}^\infty$ is a non-trivial quasi-convex null sequence in $G$.
\pagebreak[2]
(b) The necessity of the condition follows from (a). In order to show sufficiency, let $\{x_n\}_{n=0}^\infty$ be a non-trivial quasi-convex null sequence in $G$, and put $S=\{0\}\cup \{\pm x_n \mid n \in \mathbb{N}\}$. Let \mbox{$\iota\colon H \rightarrow G$} denote the inclusion. Then, by Proposition~\ref{prop:qcs:qc-hom}, \mbox{$\iota^{-1}(S)=S\cap H$} is quasi-convex in $H$. Since $H$ is a neighborhood of $0$ and $\{x_n\}_{n=0}^\infty$ is a non-trivial null sequence, the intersection \mbox{$S\cap H$} is infinite. Therefore, the subsequence $\{x_{n_k}\}_{k=0}^\infty$ of $\{x_n\}_{n=0}^\infty$ consisting of the members that belong to $H$ is a~non-trivial quasi-convex null sequence in $H$. \end{proof}
\begin{lemma} \label{lemma:qcs:Z23} Let $G$ be an abelian topological group of exponent $2$ or $3$. Then $G$ admits no \mbox{non-trivial} quasi-convex null sequences. In particular, the groups $\mathbb{Z}_2^\kappa$ and $\mathbb{Z}_3^\kappa$ admit no non-trivial quasi-convex null sequences for any cardinal $\kappa$. \end{lemma}
\begin{proof} Observe that if $x$ is an element of order $2$ or $3$ in an abelian topological group~$G$,~and \mbox{$\chi(x) \in \mathbb{T}_+$}, then \mbox{$\chi(x)=0$}, and so \mbox{$\chi \in \langle x \rangle^\perp$} (indeed, $\chi(x)$ belongs to $\{0,\tfrac12\}$ or to $\{0,\pm\tfrac13\}$, and neither $\tfrac12$ nor $\pm\tfrac13$ lies in $\mathbb{T}_+$). Thus, if \mbox{$S \subseteq G$} and $S$ consists of elements of order at most~$3$,~then \mbox{$S^\triangleright = S^\perp=\langle S\rangle^\perp$} is a closed subgroup of $\widehat G$. Consequently, $Q_G(S)=(\langle S\rangle^{\perp})^\perp$ is a~subgroup of $G$. Therefore, if $S$ is a non-trivial null sequence, then $S \subsetneq Q_G(S)$, because every topological group is homogeneous, whereas in the infinite compact set $S$ the point $0$ is not isolated while every non-zero element is. Hence, no abelian topological group contains non-trivial quasi-convex sequences consisting of elements of order at most~$3$. Since each element in $\mathbb{Z}_2^\kappa$ and $\mathbb{Z}_3^\kappa$ has order at most $3$, this completes the proof. \end{proof}
Lemma~\ref{lemma:qcs:Z23} combined with Lemma~\ref{lemma:qcs:subgr}(b) yields the following consequence.
\begin{corollary} \label{cor:qcs:Z23} Let $G$ be a locally compact abelian group, and $\kappa$ a cardinal. If $G$ contains an open subgroup that is topologically isomorphic to $\mathbb{Z}_2^\kappa$ or $\mathbb{Z}_3^\kappa$, then $G$ admits no non-trivial quasi-convex null sequences. \qed \end{corollary}
\begin{theorem} \label{thm:qcs:cyclic} Let $\{m_k\}_{k=0}^\infty$ be a sequence of integers such that \mbox{$m_k \geq 4$} \ for every \mbox{$k\in\mathbb{N}$}. Then the product $P=\prod\limits_{k=0}^\infty \mathbb{Z}_{m_k}$ admits a non-trivial quasi-convex null sequence. \end{theorem}
\begin{proof} Let $\pi_k\colon P \rightarrow \mathbb{Z}_{m_k}$ denote the canonical projection for each $k \in \mathbb{N}$, and let $e_n \in P$ be such that $\pi_k(e_n)=0$ if $k\neq n$, and $\pi_k(e_k)$ generates $\mathbb{Z}_{m_k}$. Clearly, $\{e_n\}_{n=0}^\infty$ is a non-trivial null sequence, and so it remains to be seen that it is quasi-convex. To that end, put $S=\{0\} \cup \{\pm e_n\mid n \in \mathbb{N}\}$. Since $\pi_k(S) = \{0,\pm \pi_k(e_k)\}$ is quasi-convex in $\mathbb{Z}_{m_k}$ for every $k \in \mathbb{N}$ (cf.~\cite[7.8]{Aussenhofer}), by Proposition~\ref{prop:qcs:qc-hom}, \begin{align} \label{eq:qcs:QPS} Q_P(S) \subseteq \bigcap\limits_{k\in \mathbb{N}} \pi_k^{-1}(Q_{\mathbb{Z}_{m_k}}(\pi_k(S))) = \bigcap\limits_{k\in \mathbb{N}} \pi_k^{-1}(\{0,\pm \pi_k(e_k)\}) = \prod\limits_{k\in \mathbb{N}} \{0,\pm \pi_k(e_k)\}. \end{align} Every element in the compact group $P$ can be expressed in the form \mbox{$x=\sum\limits_{k \in \mathbb{N}} c_k e_k$}, where \mbox{$c_k \in \mathbb{Z}_{m_k}$} for every \mbox{$k \in \mathbb{N}$}. For each \mbox{$k \in \mathbb{N}$}, let \mbox{$\chi_k\colon P \rightarrow \mathbb{T}$} denote the continuous character defined by \mbox{$\chi_k(x) = \frac{c_k}{m_k} + \mathbb{Z}$}, and put $l_k = \lfloor \frac {m_k} 4 \rfloor$. (As $\chi_k$ factors through $\pi_k$, it is indeed continuous.) Then \begin{align} l_k\chi_k(e_n) = \begin{cases} \frac{l_k}{m_k} & n=k\\ 0 & n \neq k. \end{cases} \end{align} Consequently, \mbox{$(l_{k_1} \chi_{k_1} \pm l_{k_2} \chi_{k_2})(e_n) \in \mathbb{T}_+$} for every \mbox{$k_1 \neq k_2$}, and thus \mbox{$l_{k_1} \chi_{k_1} \pm l_{k_2} \chi_{k_2} \in S^\triangleright$}. Let \mbox{$x \in Q_P(S)\backslash\{0\}$}. By (\ref{eq:qcs:QPS}), \mbox{$x=\sum\limits_{k \in \mathbb{N}} c_k e_k$}, where \mbox{$c_k\in \{0,1,-1\}$}. Let \mbox{$k_1 \in \mathbb{N}$} be the smallest index such that \mbox{$c_{k_1} \neq 0$}. By replacing $x$ with $-x$ if necessary, we may assume that \mbox{$c_{k_1}=1$}. Let $k_2 \in \mathbb{N}$ be such that $k_2 \neq k_1$. By what we have shown so far, $l_{k_1}\chi_{k_1} + c_{k_2}l_{k_2}\chi_{k_2} \in S^\triangleright$. Therefore, \begin{align} \label{eq:qcs:k12} \tfrac{l_{k_1}}{m_{k_1}} + \tfrac{l_{k_2} c_{k_2}^2}{m_{k_2}} + \mathbb{Z} = (l_{k_1}\chi_{k_1} + c_{k_2}l_{k_2}\chi_{k_2})(x) \in \mathbb{T}_+. \end{align} Since $m_k \geq 4$ for all $k \in \mathbb{N}$, one has $\frac17\leq\frac{l_k}{m_k}\leq\frac14$, so the left-hand side of (\ref{eq:qcs:k12}) lies in $[\frac27,\frac12]+\mathbb{Z}$ whenever $c_{k_2}\neq 0$, which is disjoint from $\mathbb{T}_+$; thus (\ref{eq:qcs:k12}) implies that $c_{k_2}=0$. Hence, $c_k = 0$ for all $k \neq k_1$. This shows that $x \in S$, and $S$ is quasi-convex, as desired. \end{proof}
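The finite-dimensional core of the above argument can also be checked by brute force. The following short script (an illustrative sketch, not part of the proof; the group $\mathbb{Z}_5\times\mathbb{Z}_7$ and all identifiers are chosen purely for the example) enumerates the characters $\chi_{(a,b)}(x_0,x_1)=\frac{a x_0}{5}+\frac{b x_1}{7}+\mathbb{Z}$ and verifies that $S=\{0,\pm e_0,\pm e_1\}$ coincides with its bipolar, i.e.~that $S$ is quasi-convex in $\mathbb{Z}_5\times\mathbb{Z}_7$.
\begin{verbatim}
from fractions import Fraction
from itertools import product

# Brute-force check that S = {0, +-e0, +-e1} equals its bipolar in Z_5 x Z_7.
m = (5, 7)

def in_T_plus(value):
    # True if the coset value + Z lies in T_+ = [-1/4, 1/4] + Z.
    frac = value % 1
    return min(frac, 1 - frac) <= Fraction(1, 4)

def chi(a, x):
    # Character indexed by a = (a0, a1) evaluated at x = (x0, x1).
    return Fraction(a[0] * x[0], m[0]) + Fraction(a[1] * x[1], m[1])

group = list(product(range(m[0]), range(m[1])))   # elements of Z_5 x Z_7
characters = group                                # dual group, same index set
S = {(0, 0), (1, 0), (m[0] - 1, 0), (0, 1), (0, m[1] - 1)}

polar = [a for a in characters if all(in_T_plus(chi(a, x)) for x in S)]
bipolar = {x for x in group if all(in_T_plus(chi(a, x)) for a in polar)}

print(bipolar == S)   # expected output: True
\end{verbatim}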
\begin{corollary} \label{cor:qcs:nosub} Let $G$ be a locally compact abelian group that admits no non-trivial quasi-convex null sequences. Then $G$ has no subgroups that are topologically isomorphic to:
\begin{myalphlist}
\item $\mathbb{J}_p$ for some prime $p$,
\item $\mathbb{Z}_{p^2}^\omega$ \ for some prime $p$,
\item $\mathbb{Z}_p^\omega$ \ for $p > 3$,
\item $\mathbb{T}$, or
\item $\mathbb{R}$.
\end{myalphlist} \end{corollary}
\begin{proof} By Example~\ref{ex:intro:TJ23R} and Theorem~\ref{thm:Jp:main}, for every prime $p$, the group $\mathbb{J}_p$ admits a non-trivial quasi-convex null sequence. By Theorem~\ref{thm:qcs:cyclic}, for every prime $p$, the countable product $\mathbb{Z}_{p^2}^\omega$ admits a non-trivial quasi-convex null sequence, and if \mbox{$p > 3$}, then so does the group $\mathbb{Z}_{p}^\omega$. Finally, by Example~\ref{ex:intro:TJ23R}, $\mathbb{T}$ and $\mathbb{R}$ admit a non-trivial quasi-convex null sequence. Hence, all five statements follow by Lemma~\ref{lemma:qcs:subgr}(a). \end{proof}
\begin{corollary} \label{cor:qcs:pro-p} Let $p$ be a prime, and $G$ an abelian pro-$p$-group. Then $G$ admits no non-trivial quasi-convex null sequences if and only if
\begin{myromanlist}
\item $p> 3$ and $G$ is finite, or
\item $p \leq 3$ and $G \cong \mathbb{Z}_p^\kappa \times F$ for some cardinal $\kappa$ and finite group $F$.
\end{myromanlist} \end{corollary}
\begin{proof} Suppose that $G$ admits no non-trivial quasi-convex null sequences. Then, by Corollary~\ref{cor:qcs:nosub}(a), $G$ contains no subgroup that is topologically isomorphic to $\mathbb{J}_p$, and so $G$ is an exotic torus. Let $B$ be a closed subgroup of $G$ provided by Theorem~\ref{thm:prel:ET}. Since $G$ is a pro-$p$-group, it has no non-trivial connected quotients. So, $n=0$, and $G=B$. Thus, $G=B_p$ is a compact bounded $p$-group (as $G$ contains no elements of order coprime to $p$). Consequently, $G$ is topologically isomorphic to a~product of finite cyclic groups (cf.~\cite[4.2.2]{DikProSto}). Hence, $G\cong \prod\limits_{i=1}^N \mathbb{Z}_{p^i}^{\kappa_i}$ for some cardinals $\{\kappa_i\}_{i=1}^N$.~By Corollary~\ref{cor:qcs:nosub}(b), $G$ contains no subgroup that is topologically isomorphic to $\mathbb{Z}_{p^2}^\omega$. Therefore, $\kappa_i$~is finite for \mbox{$i \geq 2$}, and $G \cong \mathbb{Z}_p^{\kappa_1} \times F$ for some finite group $F$. Hence, by Corollary~\ref{cor:qcs:nosub}(c), $\kappa_1 < \omega$ or $p \leq 3$, as desired.
Conversely, if $G$ is finite, then it contains no non-trivial sequences, and if (ii) holds, then $G$ contains an open subgroup $H$ that is topologically isomorphic to $\mathbb{Z}_2^\kappa$ or $\mathbb{Z}_3^\kappa$, so the statement follows by Corollary~\ref{cor:qcs:Z23}. \end{proof}
\begin{proposition} \label{prop:qcs:profin} Let $G$ be an abelian pro-finite group. Then $G$ admits no non-trivial quasi-convex null sequences if and only if $G\cong \mathbb{Z}_2^\kappa \times F$ or $G\cong \mathbb{Z}_3^\kappa \times F$ for some cardinal $\kappa$ and finite abelian group $F$. \end{proposition}
\begin{proof} Suppose that $G$ admits no non-trivial quasi-convex null sequences. By Lemma~\ref{lemma:qcs:subgr}(a), no closed subgroup of $G$ admits a non-trivial quasi-convex null sequence. By Theorem~\ref{thm:prel:profin}, \mbox{$G\hspace{-1pt}= \hspace{-1pt}\prod\limits_{p} G_p$}, where each $G_p$ is a pro-$p$-group. Since each $G_p$ is a closed subgroup, $G_p$ admits no non-trivial quasi-convex null sequences. Therefore, by Corollary~\ref{cor:qcs:pro-p}, for each \mbox{$p>3$}, the subgroup $G_p$ is finite. Put \mbox{$F_0 = \prod\limits_{p>3} G_p$}. If $F_0$ is infinite, then there are infinitely many primes \mbox{$p_k > 3$} such that \mbox{$G_{p_k} \neq 0$}. Consequently, $F_0$ (and thus $G$) contains a subgroup that is topologically isomorphic to the product \mbox{$P=\prod\limits_{k=1}^\infty \mathbb{Z}_{p_k}$}. However, by Theorem~\ref{thm:qcs:cyclic}, $P$ does admit a non-trivial quasi-convex null sequence, contrary to our assumption (and Lemma~\ref{lemma:qcs:subgr}(a)). This contradiction shows that $F_0$ is finite. Since $G_2$ and $G_3$ contain no non-trivial quasi-convex null sequences, by Corollary~\ref{cor:qcs:pro-p}, $G_2\cong\mathbb{Z}_2^{\kappa_2}\times F_2$ and $G_3 \cong \mathbb{Z}_3^{\kappa_3} \times F_3$ for some cardinals $\kappa_2$ and $\kappa_3$, and finite groups $F_2$ and $F_3$. Thus, \begin{align} G = \prod\limits_{p} G_p = G_2 \times G_3 \times F_0 \cong \mathbb{Z}_2^{\kappa_2} \times \mathbb{Z}_3^{\kappa_3} \times F_0\times F_2 \times F_3, \end{align} and it remains to be seen that at least one of $\kappa_2$ and $\kappa_3$ is finite. Assume the contrary. Then $G$ contains a (closed) subgroup that is topologically isomorphic to \mbox{$\mathbb{Z}_2^\omega \times \mathbb{Z}_3^\omega \cong \mathbb{Z}_6^\omega$}, which does admit a non-trivial quasi-convex null sequence by Theorem~\ref{thm:qcs:cyclic}, contrary to our assumption (and Lemma~\ref{lemma:qcs:subgr}(a)). Therefore, at least one of $\mathbb{Z}_2^{\kappa_2} \times F_0\times F_2 \times F_3$ and $\mathbb{Z}_3^{\kappa_3} \times F_0\times F_2 \times F_3$ is finite.
Conversely, if $G\cong \mathbb{Z}_2^\kappa \times F$ or $G\cong \mathbb{Z}_3^\kappa \times F$ where $F$ is finite, then $G$ contains an open subgroup $H$ that is topologically isomorphic to $\mathbb{Z}_2^\kappa$ or $\mathbb{Z}_3^\kappa$, so the statement follows by Corollary~\ref{cor:qcs:Z23}. \end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:qcs:main}.] We first consider the special case where $G$ is compact. Clearly, in this case, (ii) $\Leftrightarrow$ (v) and (iii) $\Leftrightarrow$ (iv).
(i) $\Rightarrow$ (v): Suppose that $G$ admits no non-trivial quasi-convex null sequences. Let $K$ denote the connected component of $G$. By Lemma~\ref{lemma:qcs:subgr}(a), $K$ admits no non-trivial quasi-convex null sequences. Thus, by Corollary~\ref{cor:qcs:nosub}(a), $K$ is an exotic torus, and by Theorem~\ref{thm:prel:ET}, $K$ contains a~closed subgroup $B$ such that $B = \prod\limits_{p} B_p$, where each $B_p$ is a finite $p$-group, and $K/B \cong \mathbb{T}^n$ for some $n \in \mathbb{N}$. The group $B$ is pro-finite, and by Lemma~\ref{lemma:qcs:subgr}(a), it admits no non-trivial quasi-convex null sequences. Therefore, by Proposition~\ref{prop:qcs:profin}, $B\cong \mathbb{Z}_2^\kappa \times F$ or $B\cong \mathbb{Z}_3^\kappa \times F$, where $\kappa$ is some cardinal, and $F$ is a finite abelian group. Since $B_2$ and $B_3$ are finite, this implies that $B$ itself is finite. Consequently, by Pontryagin duality, $\widehat B \cong \widehat K / B^\perp$ is finite (cf.~\cite[Theorem~54]{Pontr}), and $B^\perp \cong \widehat{K/B} \cong \mathbb{Z}^n$ (cf.~\cite[Theorem~37]{Pontr}). This implies that $\widehat K$ is finitely generated. On the other hand, $\widehat K$ is torsion free, because $K$ is connected (cf.~\cite[Example~73]{Pontr}), which means that $\widehat K \cong \mathbb{Z}^n$ and $K \cong \mathbb{T}^n$. By Corollary~\ref{cor:qcs:nosub}(d), $K$ contains no subgroup that is topologically isomorphic to $\mathbb{T}$. Hence, $n=0$, and $K=0$. This shows that (i) implies that $G$ is pro-finite (i.e., a zero-dimensional compact group). The statement follows now by Proposition~\ref{prop:qcs:profin}.
(v) $\Rightarrow$ (iv) is obvious, because (v) implies that $G$ is a compact bounded group; therefore,~$G$~is a direct product of finite cyclic groups (cf.~\cite[4.2.2]{DikProSto}).
(iv) $\Rightarrow$ (i): It follows from (iv) that $G$ is pro-finite, and thus the statement is a consequence of Proposition~\ref{prop:qcs:profin}.
\pagebreak[2]
Suppose now that $G$ is a locally compact abelian group.
(i) $\Rightarrow$ (ii): Suppose that $G$ admits no non-trivial quasi-convex null sequences. There is \mbox{$n\in\mathbb{N}$} and a closed subgroup $M$ such that $M$ has an open compact subgroup $K$, and \mbox{$G \cong \mathbb{R}^n \times M$} (cf.~\cite[3.3.10]{DikProSto}). By Corollary~\ref{cor:qcs:nosub}(e), \mbox{$n=0$}, and so \mbox{$G=M$} and $K$ is open in $G$. By Lemma~\ref{lemma:qcs:subgr}(a), $K$ admits no non-trivial quasi-convex null sequences. Thus, by what we have shown so far, $K$ has an open compact subgroup $O$ that is topologically isomorphic to $\mathbb{Z}_2^\kappa$ or $\mathbb{Z}_3^\kappa$ for some cardinal $\kappa$. Since $K$ is open in $G$, it follows that $O$ is open in $G$. Therefore, one of $G[2]$ and $G[3]$ is open in $G$, because $O \subseteq G[2]$ or $O \subseteq G[3]$.
(ii) $\Rightarrow$ (iii): Let $L$ be a bounded locally compact abelian group. Then there is $n\in\mathbb{N}$ and a~closed subgroup $M$ such that $M$ has an open compact subgroup $K$, and \mbox{$L \cong \mathbb{R}^n \times M$} (cf.~\cite[3.3.10]{DikProSto}). Since $L$ is bounded, $n=0$, and thus $L$ admits an open compact bounded subgroup $K$. Consequently, $K$ is a direct product of finite cyclic groups (cf.~\cite[4.2.2]{DikProSto}). By this argument, for every locally compact abelian group $G$, the subgroups $G[2]$ and $G[3]$ contain open subgroups $O_2$ and $O_3$, respectively, such that $O_2 \cong \mathbb{Z}_2^{\kappa_2}$ and $O_3 \cong \mathbb{Z}_3^{\kappa_3}$ for some cardinals $\kappa_2$ and $\kappa_3$. If one of $G[2]$ and $G[3]$ is open in $G$, then $O_2$ or $O_3$ is open in $G$, as desired.
(iii) $\Rightarrow$ (i) follows by Corollary~\ref{cor:qcs:Z23}. \end{proof}
\section{Sequences of the form $\boldsymbol{\{p^{a_n}\}_{n=1}^\infty}$ in $\boldsymbol{\mathbb{J}_p}$}
\label{sect:Jp}
In this section, we present the proof of Theorem~\ref{thm:Jp:main}. We start off by establishing a few preliminary facts concerning representations of elements in $\mathbb{T}$, which are recycled and reused in \S\ref{sect:Tp}, where Theorem~\ref{thm:Tp:main} is proven. We identify points of $\mathbb{T}$ with $(-1/2,1/2]$. Let \mbox{$p>2$} be a prime. Recall that every \mbox{$y \in (-1/2,1/2]$} can be written in the form \begin{align} \label{eq:Jp:repy} y=\sum\limits_{i=1}^\infty \frac{c_i}{p^i} = \frac{c_1}{p} + \frac{c_2}{p^2} + \cdots + \frac{c_s}{p^s}+\cdots, \end{align}
where $c_i\in\mathbb{Z}$ and $|c_i| \leq \frac{p-1}{2}$ for all $i \in \mathbb{N}$.
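For instance, for $p=5$ and $y=\tfrac{7}{25}$, one such representation is \begin{align} y=\frac{1}{5}+\frac{2}{25}, \end{align} that is, $c_1=1$, $c_2=2$ and $c_i=0$ for $i\geq 3$, and indeed $|c_i|\leq \frac{p-1}{2}=2$ for all $i\in\mathbb{N}$.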
\begin{theorem} \label{thm:Jp:c1} Let \mbox{$p>2$} be a prime, and \mbox{$y=\sum\limits_{i=1}^\infty \frac{c_i}{p^i} \in \mathbb{T}$}, where \mbox{$c_i\in\mathbb{Z}$} and
\mbox{$|c_i| \leq \frac{p-1}{2}$} for all \mbox{$i \in \mathbb{N}$}. \begin{myalphlist}
\item If $y \in \mathbb{T}_+$, then $|c_1| \leq \lfloor\tfrac{p+2}{4} \rfloor$.
\item If $my \in \mathbb{T}_+$ for all $m=1,\ldots, \lceil\tfrac{p}{2} \rceil$, then $c_1=0$.
\item If $my \in \mathbb{T}_+$ for all $m=1,\ldots, \lceil\tfrac{p}{6} \rceil$, then $c_1\in \{-1,0,1\}$.
\end{myalphlist} \end{theorem}
\begin{proof}
Since $|c_i| \leq \tfrac{p-1}{2}$ for all $i \in \mathbb{N}$, one has \begin{align} \label{eq:Jp:estimate}
\left|\sum\limits_{i=2}^\infty \dfrac{c_i}{p^i} \right| \leq
\sum\limits_{i=2}^\infty \dfrac{|c_i|}{p^i} \leq \dfrac{p-1}{2}\left(\sum\limits_{i=2}^\infty \dfrac{1}{p^i}\right) = \dfrac{p-1}{2} \cdot \dfrac{1}{p^2}\cdot \dfrac{p}{p-1}=\dfrac{1}{2p}. \end{align}
(a) If $y \in \mathbb{T}_+$, then by (\ref{eq:Jp:estimate}), one obtains that \begin{align}
\left| \dfrac{c_1}{p} \right| \leq
|y| + \left|\sum\limits_{i=2}^\infty \dfrac{c_i}{p^i} \right| \leq \dfrac 1 4 + \dfrac 1 {2p} = \dfrac{p+2}{4p}. \end{align}
Therefore, $|c_1| \leq \tfrac{p+2}{4}$, and hence
$|c_1| \leq \lfloor \tfrac{p+2}{4} \rfloor$.
\pagebreak[2]
(b) Put $l= \lceil \tfrac p 2 \rceil$. If $my \in \mathbb{T}_+$ for all $m=1,\ldots,l$, then $y \in \{1,\ldots,l\}^\triangleleft =\mathbb{T}_l$, and so
$|y| \leq \tfrac 1 {4l}$. Since $p$ is odd, $\tfrac p 2 < l$. Thus, $|y| \leq \tfrac 1 {4l} < \tfrac{1} {2p}$. Therefore, by (\ref{eq:Jp:estimate}), one obtains that \begin{align}
\left| \dfrac{c_1}{p} \right| \leq
|y| + \left|\sum\limits_{i=2}^\infty \dfrac{c_i}{p^i} \right| < \dfrac 1{2p} + \dfrac 1 {2p} = \dfrac 1 {p}. \end{align} Hence, $c_1=0$, as required.
\pagebreak[2]
(c) Put $l= \lceil \tfrac p 6 \rceil$. If $my \in \mathbb{T}_+$ for all $m=1,\ldots,l$, then $y \in \{1,\ldots,l\}^\triangleleft =\mathbb{T}_l$, and so
$|y| \leq \tfrac 1 {4l}$. Since $p$ is odd, $\tfrac p 6 < l$. Thus, $|y| \leq \tfrac 1 {4l} < \tfrac{3} {2p}$. Therefore, by (\ref{eq:Jp:estimate}), one obtains that \begin{align}
\left| \dfrac{c_1}{p} \right| \leq
|y| + \left|\sum\limits_{i=2}^\infty \dfrac{c_i}{p^i} \right| < \dfrac 3{2p} + \dfrac 1 {2p} = \dfrac 2 {p}. \end{align}
Hence, $|c_1| \leq 1$, as required. \end{proof}
\begin{corollary} \label{cor:Jp:c1} Let \mbox{$p\geq 5$} be a prime such that \mbox{$p \neq 7$}, and \mbox{$y=\sum\limits_{i=1}^\infty \frac{c_i}{p^i} \in \mathbb{T}$}, where \mbox{$c_i\in\mathbb{Z}$} and
\mbox{$|c_i| \leq \frac{p-1}{2}$} for all \mbox{$i \in \mathbb{N}$}. If $my \in \mathbb{T}_+$ for all $m=1,\ldots, \lfloor\tfrac{p}{4} \rfloor$, then $c_1\in \{-1,0,1\}$. \end{corollary}
\begin{proof} If \mbox{$p \hspace{-1pt}= \hspace{-1pt} 5$}, then \mbox{$y \hspace{-1pt} \in \hspace{-1pt} \mathbb{T}_+$}, and by Theorem~\ref{thm:Jp:c1}(a),
\mbox{$|c_1| \hspace{-1pt}\leq \hspace{-1pt} \lfloor \frac 7 4\rfloor \hspace{-1pt}= \hspace{-1pt}1$}. If \mbox{$p \hspace{-1pt} \geq \hspace{-1.5pt} 11$}, then \mbox{$\lceil \tfrac p 6 \rceil \hspace{-1pt}\leq \hspace{-1pt}
\lfloor \tfrac p 4 \rfloor$}, and the statement follows by Theorem~\ref{thm:Jp:c1}(c). \end{proof}
\begin{example} Let $y=\tfrac 2 7 - \tfrac{3}{49} + \mathbb{Z} = \tfrac{11}{49}+ \mathbb{Z}$. Since $\tfrac{11}{49} < \tfrac{12}{48}=\tfrac{1}{4}$, clearly $y \in \mathbb{T}_+$, and thus $my \in \mathbb{T}_+$ for all $1 \leq m \leq \lfloor \tfrac 7 4 \rfloor =1$. However, $c_1 =2$ and $c_2 = -3$. This shows that the assumption that $p\neq 7$ cannot be omitted in Corollary~\ref{cor:Jp:c1}. Nevertheless, a slightly weaker statement does hold for all primes $p \geq 5$, including $p=7$. \end{example}
\begin{corollary} \label{cor:Jp:p-1c1} Let \mbox{$p\geq 5$} be a prime, and
\mbox{$y=\sum\limits_{i=1}^\infty \frac{c_i}{p^i} \in \mathbb{T}$}, where \mbox{$c_i\in\mathbb{Z}$} and \mbox{$|c_i| \leq \frac{p-1}{2}$} for all \mbox{$i \in \mathbb{N}$}. If $my \in \mathbb{T}_+$ for all $m=1,\ldots, \lfloor\tfrac{p}{4} \rfloor$ and $(p-1)y \in \mathbb{T}_+$, then $c_1\in \{-1,0,1\}$. \end{corollary}
\begin{proof} In light of Corollary~\ref{cor:Jp:c1}, it remains to be seen that the statement holds for \mbox{$p=7$}. In this case, it is given that $y,6y \in \mathbb{T}_+$, which means that \begin{align} y \in \{1,6\}^\triangleleft = \mathbb{T}_6 \cup (-\tfrac 1 6+\mathbb{T}_6) \cup (\tfrac 1 6+\mathbb{T}_6). \end{align}
Thus, $|y| \leq \tfrac 5 {24}$. Therefore, by (\ref{eq:Jp:estimate}), one obtains that \begin{align}
\left| \dfrac{c_1}{7} \right| \leq
|y| + \left|\sum\limits_{i=2}^\infty \dfrac{c_i}{7^i} \right| \leq \dfrac{5}{24} + \dfrac{1}{14} = \dfrac{47}{168} < \dfrac{48}{168} = \dfrac{2}{7}. \end{align}
Hence, $|c_1| \leq 1$, as desired. \end{proof}
We turn now to investigating the set $L_{\underline a,p}$, its polar, and its quasi-convex hull. Recall that the Pontryagin dual $\widehat {\mathbb{J}}_p$ of $\mathbb{J}_p$ is the Pr\"ufer group $\mathbb{Z}(p^\infty)$. For \mbox{$k \in \mathbb{N}$}, let \mbox{$\zeta_k\colon \mathbb{J}_p \rightarrow \mathbb{T}$} denote the continuous character defined by \mbox{$\zeta_k(1)=p^{-(k+1)}$}. For \mbox{$m \in \mathbb{N}$}, put \mbox{$J_{\underline{a},p,m} = \{ k \in \mathbb{N} \mid m\zeta_k \hspace{-1pt}\in \hspace{-2pt} L_{\underline a,p}^\triangleright \}$} and \mbox{$Q_{\underline{a},p,m} = \{m\zeta_ k \mid k \hspace{-1pt}\in \hspace{-2pt} J_{\underline{a},p,m}\}^\triangleleft$}.
\begin{lemma} \label{lemma:Jp:J12p} For $1 \leq m \leq p-1$, \begin{align} J_{\underline{a},p,m} = \begin{cases} \mathbb{N} & \text{if }\ \tfrac m p \in \mathbb{T}_+\\ \mathbb{N} \backslash \underline{a}
& \text{if }\ \tfrac m p \not\in \mathbb{T}_+. \end{cases} \end{align} \end{lemma}
\begin{proof} For $1 \leq m \leq p-1$ and $i \in \mathbb{Z}$, \begin{align} \label{eq:Zp:pmi} m\cdot p^{i} \in \mathbb{T}_+ \quad & \Longleftrightarrow \quad i \neq -1 \vee (i=-1 \wedge \tfrac m p \in \mathbb{T}_+)\\
& \Longleftrightarrow \quad i \neq -1 \vee \tfrac m p \in \mathbb{T}_+. \end{align} For $k,n \in \mathbb{N}$, one has $m\zeta_k(y_n) = m \cdot p^{a_n-k-1}$. Thus, by (\ref{eq:Zp:pmi}) applied to $i=a_n-k-1$, \begin{align} m\zeta_k(y_n) \in \mathbb{T}_+ \quad & \Longleftrightarrow \quad a_n-k-1 \neq -1 \vee \tfrac m p \in \mathbb{T}_+ \\ & \Longleftrightarrow \quad k \neq a_n \vee \tfrac m p \in \mathbb{T}_+. \end{align} Consequently, $m\zeta_k \in L_{\underline a,p}^\triangleright$ if and only if $k \neq a_n$ for all $n \in \mathbb{N}$, or $\tfrac m p \in \mathbb{T}_+$. \end{proof}
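To illustrate Lemma~\ref{lemma:Jp:J12p}, take $p=5$: since $\tfrac{1}{5},\tfrac{4}{5}\in\mathbb{T}_+$ while $\tfrac{2}{5},\tfrac{3}{5}\notin\mathbb{T}_+$, one has \begin{align} J_{\underline{a},5,1}=J_{\underline{a},5,4}=\mathbb{N} \quad\text{and}\quad J_{\underline{a},5,2}=J_{\underline{a},5,3}=\mathbb{N}\backslash \underline{a}. \end{align}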
\begin{theorem} \label{thm:Jp:Q12p} If $p>2$, then \mbox{$Q_{\underline{a},p,1} \cap \cdots \cap Q_{\underline{a},p,p-1} \subseteq \{ \sum\limits_{n=0}^\infty \varepsilon_n y_n \mid (\forall n \in \mathbb{N}) (\varepsilon_n \in \{-1,0,1\})\}$}. \end{theorem}
\begin{proof} Recall that every element $x \in \mathbb{J}_p$ can be written in the form $x= \sum\limits_{i=0}^\infty c_i \cdot p^i$,
where $c_i \in \mathbb{Z}$ and $|c_i| \leq \frac{p-1}{2}$. For $x$ represented in this form, for every $k \in \mathbb{N}$, one has \begin{align} \label{eq:Jp:zetak} \zeta_k(x) = \sum\limits_{i=0}^\infty c_i \cdot p^{i-k-1} \equiv_1 \sum\limits_{i=0}^k c_i \cdot p^{i-k-1} = \sum\limits_{i=1}^{k+1} \frac{c_{k-i+1}}{p^{i}}. \end{align} Let $x \in Q_{\underline{a},p,1} \cap \cdots\cap Q_{\underline{a},p,p-1}$. If $k\neq a_n$ for any $n \in\mathbb{N}$, then by Lemma~\ref{lemma:Jp:J12p}, \begin{align} k\in \mathbb{N}\backslash \underline a = J_{\underline{a},p,1} = \cdots=J_{\underline{a},p,p-1}. \end{align} Thus, for $y=\zeta_k(x)$, one has $my \in \mathbb{T}_+$ for $m=1,\ldots,p-1$. Therefore, by Theorem~\ref{thm:Jp:c1}(b), the coefficient of $\tfrac 1 p$ in (\ref{eq:Jp:zetak}) is zero. Hence, $c_{k}=0$ for every $k$ such that $k\neq a_n$ for all $n\in\mathbb{N}$. For $p=3$, this already completes the proof, and so we may assume that $p\geq 5$. If $k=a_n$ for some $n \in \mathbb{N}$, then by Lemma~\ref{lemma:Jp:J12p}, \begin{align} k\in \mathbb{N} = J_{\underline{a},p,1} = \cdots= J_{\underline{a},p,{\lfloor \tfrac p 4\rfloor}}= J_{\underline{a},p,p-1}. \end{align} Consequently, for $y=\zeta_k(x)$, one has $my \in \mathbb{T}_+$ for $m=1,\ldots,\lfloor\tfrac p 4\rfloor$ and $(p-1)y\in\mathbb{T}_+$. Therefore, by Corollary~\ref{cor:Jp:p-1c1}, the coefficient of $\tfrac 1 p$ in (\ref{eq:Jp:zetak}) belongs to $\{-1,0,1\}$. So, \mbox{$c_{k}=c_{a_n} \in \{-1,0,1\}$}. Hence, $x$ has the form \begin{align} x= \sum\limits_{n=0}^\infty c_{a_n}\cdot p^{a_n} = \sum\limits_{n=0}^\infty \varepsilon_n y_n, \end{align} where $\varepsilon_n = c_{a_n} \in \{-1,0,1\}$. \end{proof}
Put $L_p=L_{\mathbb{N},p}$, that is, the special case of $L_{\underline{a},p}$, where $\underline a$ is the sequence $a_n=n$. First, we show that if $p \geq 5$, then $L_p$ is quasi-convex.
\begin{lemma} \label{lemma:Jp:polar} Let $p \geq 5$ be a prime. Then \mbox{$m_1 \zeta_{k_1} + \cdots + m_l \zeta_{k_l} \in L_p^\triangleright$} \ for every \mbox{$0 \leq k_1 < \cdots < k_l$} and \mbox{$m_1,\ldots,m_l$} such that
\mbox{$|m_i| \leq \lfloor \tfrac p 4 \rfloor$} for all \mbox{$1 \leq i \leq l$}. \end{lemma}
\pagebreak[2]
\begin{proof} Let $p^n \in L_p$. We may assume that \mbox{$n \leq k_1$}, because \mbox{$\zeta_{k_i}(p^n)=0$} for every \mbox{$i\in\mathbb{N}$} such that \mbox{$k_i<n$}. Since \mbox{$0\leq k_1 < \cdots < k_l$}, \begin{align} \frac{1}{p^{k_1+1}} + \cdots + \frac{1}{p^{k_l+1}} \leq \frac{1}{p^{k_1+1}} + \frac{1}{p^{k_1+2}}+ \cdots + \frac{1}{p^{k_1+l}} < \sum\limits_{i=0}^\infty \frac{1}{p^{k_1+i+1}} = \dfrac{1}{p^{k_1}(p-1)}. \end{align}
One has $|m_i| \leq \lfloor \tfrac p 4 \rfloor \leq \tfrac{p-1}{4}$ for each $i$, because $p$ is odd. Thus, \begin{align}
|(m_1 \zeta_{k_1} + \cdots + m_l \zeta_{k_l})(p^n)| & \leq
|m_1||\zeta_{k_1}(p^n)| + \cdots + |m_l||\zeta_{k_l}(p^n)| \\ & \leq \dfrac{p-1}4 \left(\frac{1}{p^{k_1+1}} + \cdots + \frac{1}{p^{k_l+1}} \right) p^n \\ & < \dfrac{p-1}4 \cdot \dfrac{1}{p^{k_1}(p-1)} \cdot p^n = \dfrac{1}{4 p^{k_1 -n}} \leq \dfrac 1 4. \end{align} Therefore, $(m_1 \zeta_{k_1} + \cdots + m_l \zeta_{k_l})(p^n)\in \mathbb{T}_+$ for all $n \in \mathbb{N}$. Hence, \mbox{$m_1\zeta_{k_1} + \cdots + m_l \zeta_{k_l} \in L_p^\triangleright$}, as desired. \end{proof}
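As a concrete instance of Lemma~\ref{lemma:Jp:polar}, let $p=5$, so that $\lfloor \tfrac{p}{4}\rfloor=1$, and consider $\zeta_0+\zeta_1$. For the elements $5^n \in L_5$ one has \begin{align} (\zeta_0+\zeta_1)(5^n)=5^{n-1}+5^{n-2} \equiv_1 \begin{cases} \tfrac{6}{25} & n=0\\ \tfrac{1}{5} & n=1\\ 0 & n\geq 2, \end{cases} \end{align} and all of these values lie in $\mathbb{T}_+$, so $\zeta_0+\zeta_1\in L_5^\triangleright$, in accordance with the lemma.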
\begin{proposition} \label{prop:Jp:Lp} Let $p \geq 5$ be a prime. Then $L_p$ is quasi-convex. \end{proposition}
\begin{proof} Let \mbox{$x \in Q_{\mathbb{J}_p}(L_p)\backslash\{0\}$}. Then \mbox{$x \in Q_{\mathbb{N},p,1} \cap \cdots\cap Q_{\mathbb{N},p,p-1}$}. Consequently, by Theorem~\ref{thm:Jp:Q12p}, \mbox{$x=\sum\limits_{n=0}^\infty \varepsilon_n p^{n}$}, where \mbox{$\varepsilon_n \in \{-1,0,1\}$} for all \mbox{$n \in \mathbb{N}$}. Observe that for $k\in \mathbb{N}$, \begin{align} \zeta_k(x) = \sum\limits_{n=0}^\infty \dfrac{\varepsilon_n}{p^{-n+k+1}} \equiv_1 \sum\limits_{n=0}^{k} \dfrac{\varepsilon_n}{p^{-n+k+1}} = \sum\limits_{i=1}^{k+1} \frac{\varepsilon_{k-i+1}}{p^{i}}. \end{align} Let $k_1$ denote the smallest index such that $\varepsilon_{k_1}\neq 0$, and let $k_2 \in \mathbb{N}$ be such that $k_1 < k_2$. In order to prove that $x \in L_p$, it remains to be seen that $\varepsilon_{k_2}=0$. In order to simplify notations, we set $\varepsilon_n=0$ for all $n \in \mathbb{Z}\backslash\mathbb{N}$. Put \begin{align} \label{eq:Jp:y+} y_+ &=(\zeta_{k_1}+\zeta_{k_2})(x) = \sum\limits_{i=1}^{k_1+1} \frac{\varepsilon_{k_1-i+1}}{p^{i}} + \sum\limits_{i=1}^{k_2+1} \frac{\varepsilon_{k_2-i+1}}{p^{i}} = \sum\limits_{i=1}^{k_2+1} \frac{\varepsilon_{k_1-i+1}+\varepsilon_{k_2-i+1}}{p^{i}}, \text{ and}\\ \label{eq:Jp:y-} y_- &=(\zeta_{k_1} - \zeta_{k_2})(x) = \sum\limits_{i=1}^{k_1+1} \frac{\varepsilon_{k_1-i+1}}{p^{i}} - \sum\limits_{i=1}^{k_2+1} \frac{\varepsilon_{k_2-i+1}}{p^{i}} = \sum\limits_{i=1}^{k_2+1} \frac{\varepsilon_{k_1-i+1}-\varepsilon_{k_2-i+1}}{p^{i}}. \end{align} By Lemma~\ref{lemma:Jp:polar}, $m\zeta_{k_1}+m\zeta_{k_2}, m\zeta_{k_1}-m\zeta_{k_2} \in L_p^\triangleright$ for $m=1,\ldots,\lfloor \tfrac p 4 \rfloor$. Thus, for $m=1,\ldots,\lfloor \tfrac p 4 \rfloor$, \begin{align} \label{eq:Jp:my+} my_+ & = m (\zeta_{k_1}+\zeta_{k_2})(x) \in \mathbb{T}_+, \text{ and}\\ \label{eq:Jp:my-} my_- & = m (\zeta_{k_1}-\zeta_{k_2})(x) \in \mathbb{T}_+. \end{align} One has
$|\varepsilon_{k_1-i+1} + \varepsilon_{k_2-i+1}| \leq 2 \leq \tfrac{p-1}2$ and
$|\varepsilon_{k_1-i+1} - \varepsilon_{k_2-i+1}| \leq 2 \leq \tfrac{p-1}2$, for all $i \in \mathbb{Z}$, because
\mbox{$|\varepsilon_n| \leq 1$} for all \mbox{$n \in \mathbb{Z}$}. By replacing $x$ with $-x$ if necessary, we may assume that \mbox{$\varepsilon_{k_1}\hspace{-2pt}=1$}. Put \mbox{$\rho=\varepsilon_{k_2}$}. We may apply Corollaries~\ref{cor:Jp:c1} and~\ref{cor:Jp:p-1c1}, but we must distinguish between three (overlapping) cases:
1. If $p\neq 7$, then by Corollary~\ref{cor:Jp:c1}, the coefficients of $\tfrac 1 p$ in (\ref{eq:Jp:y+}) and (\ref{eq:Jp:y-}) are $-1$, $0$, or
$1$. In other words, $|1 + \rho| \leq 1$ and $|1 - \rho| \leq 1$. Since $\rho \in \{-1,0,1\}$, this implies that $\rho=0$, as required.
\pagebreak[2]
2. If $k_1+1 < k_2$, then by Lemma~\ref{lemma:Jp:polar}, \begin{align} (p-1)(\zeta_{k_1}+\zeta_{k_2}) & = \zeta_{k_1-1}-\zeta_{k_1} +\zeta_{k_2-1}-\zeta_{k_2} \in L_p^\triangleright, \text{ and} \\ (p-1)(\zeta_{k_1}-\zeta_{k_2}) & = \zeta_{k_1-1}-\zeta_{k_1} -\zeta_{k_2-1}+\zeta_{k_2} \in L_p^\triangleright. \end{align} (If $k_1=0$, then of course, $\zeta_{k_1-1} = \zeta_{-1} =0$.) Thus, in addition to (\ref{eq:Jp:my+}) and (\ref{eq:Jp:my-}), one also has \begin{align} (p-1)y_+ & = (p-1) (\zeta_{k_1}+\zeta_{k_2})(x) \in \mathbb{T}_+, \text{ and}\\ (p-1)y_- & = (p-1) (\zeta_{k_1}-\zeta_{k_2})(x) \in \mathbb{T}_+. \end{align} Therefore, by Corollary~\ref{cor:Jp:p-1c1}, the coefficients of $\tfrac 1 p$ in (\ref{eq:Jp:y+}) and (\ref{eq:Jp:y-}) are $-1$, $0$, or $1$.
In other words, $|1 + \rho| \leq 1$ and $|1 - \rho| \leq 1$. Since $\rho \in \{-1,0,1\}$, this implies that $\rho=0$, as required.
3. If $p=7$ and $k_2=k_1+1$, then by what we have shown so far, \begin{align} x = 7^{k_1} +\rho\cdot 7^{k_1+1} = (1+7\rho)\cdot 7^{k_1}, \end{align} where $\rho \in \{-1,0,1\}$. By Lemma~\ref{lemma:Jp:polar}, \begin{align} (7\rho +1)\zeta_{k_1+1} = \rho\zeta_{k_1} + \zeta_{k_1+1} \in L_7^\triangleright. \end{align} Therefore, \begin{align} \dfrac{2\rho}{7} + \dfrac{1}{49} = \dfrac{14\rho +1 }{49} \equiv_1 \dfrac{(7\rho +1)^2}{49} = (7\rho +1)\zeta_{k_1+1}(x) \in \mathbb{T}_+. \end{align} Hence, $\rho=0$, as required. \end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:Jp:main}.] By Proposition~\ref{prop:Jp:Lp}, $L_p$ is quasi-convex. Thus, $L_{\underline{a},p} \subseteq L_p$ implies that $Q_{\mathbb{J}_p}(L_{\underline{a},p}) \subseteq L_p$. On the other hand, by Theorem~\ref{thm:Jp:Q12p}, \begin{align} Q_{\mathbb{J}_p}(L_{\underline{a},p}) \subseteq Q_{\underline{a},p,1} \cap \cdots\cap Q_{\underline{a},p,p-1}\subseteq \{ \sum\limits_{n=0}^\infty \varepsilon_n y_n \mid (\forall n \in \mathbb{N}) (\varepsilon_n \in \{-1,0,1\})\}. \end{align} Therefore, \begin{align} Q_{\mathbb{J}_p}(L_{\underline{a},p}) \subseteq L_p \cap \{\sum\limits_{n=0}^\infty \varepsilon_n y_n \mid (\forall n \in \mathbb{N}) (\varepsilon_n \in \{-1,0,1\})\} = L_{\underline{a},p}, \end{align} as desired. \end{proof}
\begin{example}\label{ex:Jp:p23} If $p=2$ or $p=3$, then $Q_{\mathbb{J}_p}(L_p)=\mathbb{J}_p$, that is, $L_p$ is $qc$-dense in $\mathbb{J}_p$ (cf.~\cite[4.6(c)]{DikLeo}). In particular, $L_2$ and $L_3$ are not quasi-convex in $\mathbb{J}_2$ and $\mathbb{J}_3$, respectively. Thus, Proposition~\ref{prop:Jp:Lp} fails if $p \not\geq 5$. Therefore, Theorem~\ref{thm:Jp:main} does not hold for $p \not\geq 5$. \end{example}
\section{Sequences of the form $\boldsymbol{\{p^{-(a_n+1)}\}_{n=1}^\infty}$ in $\boldsymbol{\mathbb{T}}$}
\label{sect:Tp}
In this section, we prove Theorem~\ref{thm:Tp:main}. Recall that the Pontryagin dual $\widehat{\mathbb{T}}$ of $\mathbb{T}$ is $\mathbb{Z}$. For \mbox{$k \in \mathbb{N}$}, let \mbox{$\eta_k\colon \mathbb{T} \rightarrow \mathbb{T}$} denote the continuous character defined by \mbox{$\eta_k(x)=p^{k}\cdot x$}. For \mbox{$m \in \mathbb{N}$}, put \begin{align} J_{\underline{a},p,m} = \{ k \in \mathbb{N} \mid m\eta_k \in K_{\underline a,p}^\triangleright \} \quad \text{ and } \quad Q_{\underline{a},p,m} = \{m\eta_ k \mid k \in J_{\underline{a},p,m}\}^\triangleleft. \end{align}
\begin{lemma} \label{lemma:Tp:J12p} For $1 \leq m \leq p-1$, \begin{align} J_{\underline{a},p,m} = \begin{cases} \mathbb{N} & \text{if }\ \tfrac m p \in \mathbb{T}_+\\ \mathbb{N} \backslash \underline{a}
& \text{if }\ \tfrac m p \not\in \mathbb{T}_+. \end{cases} \end{align} \end{lemma}
\begin{proof} For $1 \leq m \leq p-1$ and $i \in \mathbb{Z}$, \begin{align} \label{eq:Tp:pmi} m\cdot p^{i} \in \mathbb{T}_+ \quad & \Longleftrightarrow \quad i \neq -1 \vee (i=-1 \wedge \tfrac m p \in \mathbb{T}_+)\\
& \Longleftrightarrow \quad i \neq -1 \vee \tfrac m p \in \mathbb{T}_+. \end{align} For $k,n \in \mathbb{N}$, one has $m\eta_k(x_n) = m \cdot p^{k-a_n-1}$. Thus, by (\ref{eq:Tp:pmi}) applied to $i=k-a_n-1$, \begin{align} m\eta_k(x_n) \in \mathbb{T}_+ \quad & \Longleftrightarrow \quad k-a_n-1 \neq -1 \vee \tfrac m p \in \mathbb{T}_+ \\ & \Longleftrightarrow \quad k \neq a_n \vee \tfrac m p \in \mathbb{T}_+. \end{align} Consequently, $m\eta_k \in K_{\underline a,p}^\triangleright$ if and only if $k \neq a_n$ for all $n \in \mathbb{N}$, or $\tfrac m p \in \mathbb{T}_+$. \end{proof}
\begin{theorem} \label{thm:Tp:Q12p} If $p >2$, then \mbox{$Q_{\underline{a},p,1} \cap \cdots \cap Q_{\underline{a},p,p-1} \subseteq \{ \sum\limits_{n=0}^\infty \varepsilon_n x_n \mid (\forall n \in \mathbb{N}) (\varepsilon_n \in \{-1,0,1\})\}$}. \end{theorem}
\begin{proof} Let \mbox{$x=\sum\limits_{i=1}^\infty \dfrac{c_i}{p^i}$} be a representation of \mbox{$x\in \mathbb{T}$} as in (\ref{eq:Jp:repy}), that is, with $c_i\in\mathbb{Z}$ and $|c_i| \leq \frac{p-1}{2}$ for all $i\in\mathbb{N}$. Observe that for every \mbox{$k\in \mathbb{N}$}, one has \begin{align} \label{eq:Tp:etak} \eta_{k}(x) = p^k x = \sum\limits_{i=1}^\infty \dfrac{c_i}{p^{i-k}} \equiv_1 \sum\limits_{i=k+1}^\infty \dfrac{c_i}{p^{i-k}} = \sum\limits_{i=1}^\infty \dfrac{c_{k+i}}{p^i}. \end{align} Let $x \in Q_{\underline{a},p,1} \cap \cdots\cap Q_{\underline{a},p,p-1}$. If
$k\neq a_n$ for any $n \in\mathbb{N}$, then by Lemma~\ref{lemma:Tp:J12p}, \begin{align} k\in \mathbb{N}\backslash \underline a = J_{\underline{a},p,1} = \cdots=J_{\underline{a},p,p-1}. \end{align} Thus, for $y=\eta_k(x)$, one has $my \in \mathbb{T}_+$ for $m=1,\ldots,p-1$.
Arguing as in the proof of Theorem~\ref{thm:Jp:Q12p}, we conclude that \mbox{$c_{k+1}=0$} for every $k$ such that $k\neq a_n$ for all \mbox{$n\in\mathbb{N}$}. For $p=3$, this already completes the proof, and so we may assume that $p\geq 5$. If $k=a_n$ for some $n \in \mathbb{N}$, then by Lemma~\ref{lemma:Tp:J12p}, \begin{align} k\in \mathbb{N} = J_{\underline{a},p,1} = \cdots= J_{\underline{a},p,{\lfloor \tfrac p 4\rfloor}}= J_{\underline{a},p,p-1}. \end{align} Arguing further as in the proof of Theorem~\ref{thm:Jp:Q12p} (but making use of the characters $\eta_k$ instead of $\zeta_k$), we conclude that \mbox{$c_{k+1}=c_{a_n+1} \in \{-1,0,1\}$}.
Hence, $x$ has the form \begin{align} x= \sum\limits_{n=0}^\infty \dfrac{c_{a_n+1}}{p^{a_n+1}} = \sum\limits_{n=0}^\infty \varepsilon_n x_n, \end{align} where $\varepsilon_n = c_{a_n+1} \in \{-1,0,1\}$. \end{proof}
Put $K_p =K_{\mathbb{N},p}$, that is, the special case of $K_{\underline{a},p}$, where $\underline a$ is the sequence $a_n=n$. First, we show that if $p \geq 5$, then $K_p$ is quasi-convex.
\begin{lemma} \label{lemma:Tp:polar} Let $p \geq 5$ be a prime. Then \mbox{$m_1 \eta_{k_1} + \cdots + m_l \eta_{k_l} \in K_p^\triangleright$} \ for every \mbox{$0\leq k_1 < \cdots < k_l$} and \mbox{$m_1,\ldots,m_l$} such that
\mbox{$|m_i| \leq \lfloor \tfrac p 4 \rfloor$} for all \mbox{$1 \leq i \leq l$}. \end{lemma}
\begin{proof} Let $\tfrac 1 {p^{n+1}} \in K_p$. We may assume that \mbox{$k_l \leq n$}, because \mbox{$\eta_{k_i}(\tfrac 1 {p^{n+1}})=0$} for every \mbox{$i\in\mathbb{N}$} such that \mbox{$n<k_i$}. Since \mbox{$0\leq k_1 < \cdots < k_l$}, \begin{align} p^{k_1} + \cdots +p^{k_l} \leq 1 + p + \cdots + p^{k_l} = \dfrac{p^{k_l+1}-1}{p-1}. \end{align}
One has $|m_i| \leq \lfloor \tfrac p 4 \rfloor \leq \tfrac{p-1}{4}$ for each $i$, because $p$ is odd. Thus, \begin{align}
|(m_1 \eta_{k_1} + \cdots + m_l \eta_{k_l})(\tfrac{1}{p^{n+1}})| & \leq
|m_1||\eta_{k_1}(\tfrac{1}{p^{n+1}})| + \cdots +
|m_l||\eta_{k_l}(\tfrac{1}{p^{n+1}})| \\ & \leq \dfrac{p-1}4 (p^{k_1} + \cdots +p^{k_l}) \dfrac{1}{p^{n+1}} \\ & \leq \dfrac{p-1}4 \cdot \dfrac{p^{k_l+1}-1}{p-1} \cdot \dfrac{1}{p^{n+1}} = \dfrac{p^{k_l+1}-1}{4p^{n+1}} < \dfrac 1 4. \end{align} Therefore, $(m_1 \eta_{k_1} + \cdots + m_l \eta_{k_l})(\tfrac{1}{p^{n+1}}) \in \mathbb{T}_+$ for all $n \in \mathbb{N}$. Hence, \mbox{$m_1 \eta_{k_1} + \cdots + m_l \eta_{k_l} \in K_p^\triangleright$}, as desired. \end{proof}
\begin{proposition} \label{prop:Tp:Kp} Let $p \geq 5$ be a prime. Then $K_p$ is quasi-convex. \end{proposition}
\begin{proof} Let \mbox{$x \in Q_\mathbb{T}(K_p)\backslash\{0\}$}. Then \mbox{$x \in Q_{\mathbb{N},p,1} \cap \cdots\cap Q_{\mathbb{N},p,p-1}$}. Consequently, by Theorem~\ref{thm:Tp:Q12p}, \mbox{$x=\sum\limits_{n=0}^\infty \tfrac{\varepsilon_n}{p^{n+1}}$}, where \mbox{$\varepsilon_n \in \{-1,0,1\}$} for all \mbox{$n \in \mathbb{N}$}. Observe that for $k\in \mathbb{N}$, \begin{align} \eta_k(x) = p^k x = \sum\limits_{n=0}^\infty \dfrac{\varepsilon_n}{p^{n-k+1}} \equiv_1 \sum\limits_{n=k}^\infty \dfrac{\varepsilon_n}{p^{n-k+1}} = \sum\limits_{i=1}^\infty \frac{\varepsilon_{k+i-1}}{p^{i}}. \end{align} Let $k_1$ denote the smallest index such that $\varepsilon_{k_1}\neq 0$, and let $k_2 \in \mathbb{N}$ be such that $k_1 < k_2$. In order to prove that $x \in K_p$, it remains to be seen that $\varepsilon_{k_2}=0$. Put \begin{align} \label{eq:Tp:y+} y_+ &=(\eta_{k_1}+\eta_{k_2})(x) = \sum\limits_{i=1}^\infty \frac{\varepsilon_{k_1+i-1}}{p^{i}}+ \sum\limits_{i=1}^\infty \frac{\varepsilon_{k_2+i-1}}{p^{i}} = \sum\limits_{i=1}^\infty \frac{\varepsilon_{k_1+i-1}+\varepsilon_{k_2+i-1}}{p^{i}}, \text{ and}\\ \label{eq:Tp:y-} y_- &=(\eta_{k_1} - \eta_{k_2})(x) = \sum\limits_{i=1}^\infty \frac{\varepsilon_{k_1+i-1}}{p^{i}} - \sum\limits_{i=1}^\infty \frac{\varepsilon_{k_2+i-1}}{p^{i}} = \sum\limits_{i=1}^\infty \frac{\varepsilon_{k_1+i-1} - \varepsilon_{k_2+i-1}}{p^{i}}. \end{align} By Lemma~\ref{lemma:Tp:polar}, $m\eta_{k_1}+m\eta_{k_2}, m\eta_{k_1}-m\eta_{k_2} \in K_p^\triangleright$ for $m=1,\ldots,\lfloor \tfrac p 4 \rfloor$. Thus, for $m=1,\ldots,\lfloor \tfrac p 4 \rfloor$, \begin{align} \label{eq:Tp:my+} my_+ & = m (\eta_{k_1}+\eta_{k_2})(x) \in \mathbb{T}_+, \text{ and}\\ \label{eq:Tp:my-} my_- & = m (\eta_{k_1}-\eta_{k_2})(x) \in \mathbb{T}_+. \end{align} One has
$|\varepsilon_{k_1+i-1} + \varepsilon_{k_2+i-1}| \leq 2 \leq \tfrac{p-1}2$ and
$|\varepsilon_{k_1+i-1} - \varepsilon_{k_2+i-1}| \leq 2 \leq \tfrac{p-1}2$, for all $i \in \mathbb{N}$, because
\mbox{$|\varepsilon_n| \leq 1$} for all \mbox{$n \in \mathbb{N}$}. By replacing $x$ with $-x$ if necessary, we may assume that \mbox{$\varepsilon_{k_1}\hspace{-2pt}= 1$}. Put \mbox{$\rho=\varepsilon_{k_2}$}. We may apply Corollaries~\ref{cor:Jp:c1} and~\ref{cor:Jp:p-1c1}, but we must distinguish between three (overlapping) cases:
1. If $p\neq 7$, then by Corollary~\ref{cor:Jp:c1}, the coefficients of $\tfrac 1 p$ in (\ref{eq:Tp:y+}) and (\ref{eq:Tp:y-}) are $-1$, $0$, or
$1$. In other words, $|1 + \rho| \leq 1$ and $|1 - \rho| \leq 1$. Since $\rho \in \{-1,0,1\}$, this implies that $\rho=0$, as required.
2. If $k_1+1 < k_2$, then by Lemma~\ref{lemma:Tp:polar}, \begin{align} (p-1)(\eta_{k_1}+\eta_{k_2}) & = -\eta_{k_1}+\eta_{k_1+1} -\eta_{k_2}+\eta_{k_2+1} \in K_p^\triangleright, \text{ and} \\ (p-1)(\eta_{k_1}-\eta_{k_2}) & = -\eta_{k_1}+\eta_{k_1+1} +\eta_{k_2}-\eta_{k_2+1} \in K_p^\triangleright. \end{align} Thus, in addition to (\ref{eq:Tp:my+}) and (\ref{eq:Tp:my-}), one also has \begin{align} (p-1)y_+ & = (p-1) (\eta_{k_1}+\eta_{k_2})(x) \in \mathbb{T}_+, \text{ and}\\ (p-1)y_- & = (p-1) (\eta_{k_1}-\eta_{k_2})(x) \in \mathbb{T}_+. \end{align} Therefore, by Corollary~\ref{cor:Jp:p-1c1}, the coefficients of $\tfrac 1 p$ in (\ref{eq:Tp:y+}) and (\ref{eq:Tp:y-}) are $-1$, $0$, or $1$.
In other words, $|1 + \rho| \leq 1$ and $|1 - \rho| \leq 1$. Since $\rho \in \{-1,0,1\}$, this implies that $\rho=0$, as required.
3. If $p=7$ and $k_2=k_1+1$, then by what we have shown so far, \begin{align} x = \frac{1}{7^{k_1+1}} +\frac{\rho}{7^{k_1+2}} = \dfrac{7+\rho}{7^{k_1+2}}, \end{align} where $\rho \in \{-1,0,1\}$. By Lemma~\ref{lemma:Tp:polar}, \begin{align} (7+\rho)\eta_{k_1} = \rho \eta_{k_1} + \eta_{k_1+1} \in K_7^\triangleright. \end{align} Therefore, \begin{align} \dfrac{2\rho}{7} + \dfrac{\rho^2}{49} = \dfrac{14\rho +\rho^2}{49} \equiv_1 \dfrac{(7+\rho)^2}{49} = (7+\rho)\eta_{k_1}(x) \in \mathbb{T}_+. \end{align} Hence, $\rho=0$, as required. \end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:Tp:main}.] By Proposition~\ref{prop:Tp:Kp}, $K_p$ is quasi-convex. Thus, $K_{\underline{a},p} \subseteq K_p$ implies that $Q_\mathbb{T}(K_{\underline{a},p}) \subseteq K_p$. On the other hand, by Theorem~\ref{thm:Tp:Q12p}, \begin{align} Q_\mathbb{T}(K_{\underline{a},p}) \subseteq Q_{\underline{a},p,1} \cap \cdots\cap Q_{\underline{a},p,p-1}\subseteq \{ \sum\limits_{n=0}^\infty \varepsilon_n x_n \mid (\forall n \in \mathbb{N}) (\varepsilon_n \in \{-1,0,1\})\}. \end{align} Therefore, \begin{align} Q_\mathbb{T}(K_{\underline{a},p}) \subseteq K_p \cap \{ \sum\limits_{n=0}^\infty \varepsilon_n x_n \mid (\forall n \in \mathbb{N}) (\varepsilon_n \in \{-1,0,1\})\} = K_{\underline{a},p}, \end{align} as desired. \end{proof}
\begin{example}\label{ex:Tp:p23} If $p=2$ or $p=3$, then $Q_{\mathbb{T}}(K_p)=\mathbb{T}$, that is, $K_p$ is $qc$-dense in $\mathbb{T}$ (cf.~\cite[4.4]{DikLeo}). In particular, $K_2$ and $K_3$ are not quasi-convex in $\mathbb{T}$. Thus, Proposition~\ref{prop:Tp:Kp} fails if $p \not\geq 5$. Therefore, Theorem~\ref{thm:Tp:main} does not hold for $p \not\geq 5$. \end{example}
\section*{Acknowledgments}
We are grateful to Karen Kipper for her kind help in proof-reading this paper for grammar and punctuation.
\nocite{Aussenhofer} \nocite{Banasz} \nocite{BeiLeoDikSte}
\nocite{deLeoPhD} \nocite{DikLeo} \nocite{Fuchs}
\nocite{Pontr} \nocite{Vilenkin} \nocite{GLdualtheo}
{\footnotesize
}
\begin{samepage}
\noindent \begin{tabular}{l @{\hspace{1.8cm}} l} Department of Mathematics and Computer Science & Department of Mathematics\\ University of Udine & University of Manitoba\\ Via delle Scienze, 208 -- Loc. Rizzi, 33100 Udine
& Winnipeg, Manitoba, R3T 2N2 \\ Italy & Canada \\ & \\ \em e-mail: [email protected] & \em e-mail: [email protected] \end{tabular}
\end{samepage}
\end{document}
\begin{document}
\title{The Independence of Distinguishability and the Dimension of the System}
\author{Hao Shu}
\institute{Hao Shu \at
College of Mathematics, South China University of Technology, Guangzhou, 510641, P. R. China
\\
\email{$Hao\_ B\_ [email protected]$}
}
\date{}
\maketitle
\begin{abstract}
There are substantial studies on the distinguishability, especially the local distinguishability, of quantum states. It is known that a necessary condition for a set of states to be locally distinguishable is that the total Schmidt rank is not larger than the system dimension. However, if the states are viewed in a larger system, this restriction no longer applies. Hence, a natural question is whether indistinguishable states can become distinguishable by viewing them in a larger system without employing extra resources. In this paper, we consider this problem for (perfect or unambiguous) LOCC$_{1}$, PPT and SEP distinguishabilities. We demonstrate that if a set of states is indistinguishable in $\otimes _{k=1}^{K} C^{d _{k}}$, then it remains indistinguishable when viewed in $\otimes _{k=1}^{K} C^{d _{k}+h _{k}}$, where $K, d _{k}\geqslant2, h _{k}\geqslant0$ are integers. This shows that such distinguishabilities are properties of the states themselves and are independent of the dimension of the quantum system. Our result gives the maximal numbers of LOCC$_{1}$ distinguishable states and can be employed to construct a LOCC indistinguishable product basis in general systems. Our result applies to general states in general systems. For further discussion, we define the local-global indistinguishable property and present a conjecture.
\\
\keywords{LOCC \and SEP \and Local-global indistinguishability \and PPT \and Mixed states \and Nonlocality \and Multipartite system}
\end{abstract}
\section{Introduction}
\qquad In quantum information theory, the distinguishability of states is of central importance. If general POVMs are allowed, states can be distinguished if and only if they are orthogonal\cite{1}. However, in realistic tasks, multipartite states are often shared by separated owners, who cannot implement general POVMs. Fortunately, classical communication technologies are well developed and can be employed easily. In this spirit, distinguishing states by local operations and classical communication (LOCC) becomes both feasible and significant. On the other hand, since distinguishability via LOCC POVMs implies distinguishability via SEP POVMs, which in turn implies distinguishability via PPT POVMs, and since PPT and SEP POVMs have simpler properties than LOCC ones, these classes are also worth considering.
There is substantial research on distinguishability, especially LOCC distinguishability. It has been shown that two orthogonal pure states are LOCC distinguishable\cite{1}. An innocent intuition might be that the more entanglement a set has, the harder it is to distinguish by LOCC; this, however, is not true in general. Although entanglement indeed gives bounds on LOCC distinguishability\cite{3,4}, there exists a LOCC indistinguishable set of nine orthogonal product states in $C^3 \otimes C^3$\cite{2}, which of course contains no entanglement.
The local distinguishability of maximally entangled states is perhaps the most studied case. In $C^3 \otimes C^3$, three orthogonal maximally entangled states are LOCC distinguishable\cite{5}, while there exist three LOCC$_{1}$ indistinguishable orthogonal maximally entangled states in $C^d \otimes C^d$ for $d\geqslant4$ even or $d=3k+2$\cite{6}. The result was weakened but extended to all $d\geqslant4$: there are 4 LOCC$_{1}$ indistinguishable orthogonal maximally entangled states in $C^d \otimes C^d$ when $d\geqslant4$\cite{7}. More results were given for Bell states and generalized Bell states. In $C^2 \otimes C^2$, three Bell states are LOCC indistinguishable\cite{8}, while in $C^d \otimes C^d$ with $d\geqslant3$, three generalized Bell states are LOCC distinguishable\cite{9}. Another result shows that if $d$ is a prime and $l(l-1)\leqslant 2d$, then $l$ generalized Bell states are LOCC distinguishable\cite{10}. Note that $d+1$ maximally entangled states in $C^d \otimes C^d$ are LOCC indistinguishable\cite{4}. Interestingly, a set of generalized Bell states becomes LOCC distinguishable when two copies are available\cite{11}.
On the other hand, the LOCC distinguishability of orthogonal product states is also interesting. It has been shown that an unextendible product basis is LOCC indistinguishable\cite{12}, while the LOCC distinguishability of a complete product basis is equivalent to its LPCC distinguishability\cite{19}. The construction of LOCC indistinguishable product states has also been studied\cite{13,14,15,16,17}. We mention that in $C^3 \otimes C^2$, there are four LOCC$_{1}$ indistinguishable orthogonal product states when Alice goes first, while in $C^3 \otimes C^3$, there are five LOCC$_{1}$ indistinguishable orthogonal product states no matter who goes first\cite{18}. Other methods include \cite{20,21,22,23}.
As for LOCC indistinguishable sets, auxiliary resources might be employed for the discrimination\cite{24,25,26,27,28}. Another scheme is discrimination with an inconclusive outcome allowed, known as unambiguous discrimination\cite{23,29,30,31,32,33,34,35}. Finally, there are works on asymptotic LOCC distinguishability\cite{36,37}.
Instead of considering states in a fixed system, as most previous works do, we consider the natural question of whether a set of indistinguishable states can become distinguishable by viewing the states in a larger system without employing other resources. This question is worth considering for at least three reasons. Firstly, the local distinguishability of states is bounded by the dimension of the system. In a bipartite system, a necessary condition for a set to be locally distinguishable is that its total Schmidt rank is not larger than the system dimension\cite{4}; this restriction, however, disappears once the states are viewed in a larger system. Hence, it is unclear whether the states remain indistinguishable. Secondly, by employing extra resources such as entanglement, a locally indistinguishable set may become distinguishable\cite{24,25,26,27,28}, while a universal resource might only exist in a larger system\cite{27}. This suggests that the local indistinguishability of states might depend on the system. Finally, the distinguishability of points in a Hilbert space could be described by distances, for example, by setting the distance between two distinct points to be 1 and otherwise 0. However, the distance between points may depend on the chosen space. For instance, the distance between two diagonally opposite vertices of a unit square is 2 in the 1-dimensional space consisting of the edges of the square, while it is $\sqrt 2$ in the 2-dimensional space consisting of the plane of the square. Hence, it is worth suspecting that the distinguishability of states may depend on the chosen space.
In this paper, we demonstrate that LOCC$_{1}$, unambiguous LOCC, PPT and SEP distinguishabilities are properties of the states themselves, namely independent of the system dimension, by proving that an indistinguishable set in $\otimes _{k=1}^{K} C^{d _{k}}$ remains indistinguishable via POVMs of the same kind even when viewed in $\otimes _{k=1}^{K} C^{d _{k}+h _{k}}$. Our result settles the problem of finding the maximal number of locally distinguishable states once and for all and provides a LOCC indistinguishable product basis in general systems. Note that the result applies to general states in general systems and to both perfect and unambiguous discrimination.
\section{Setting}
\qquad In mathematics, a quantum system shared by $K$ owners, namely the parties $A^{(s)}, s=1,2,...,K$, can be described as a Hilbert space $\otimes _{k=1}^{K} C^{d _{k}}$, while a general state can be described as a density operator (positive semi-definite with unit trace). A state set $S$ is LOCC (LOCC$_{1}$, PPT, SEP) distinguishable if there is a LOCC (LOCC$_{1}$, PPT, SEP) POVM $\left\{\ M _{j}\right \}_{j=1,2,...,J}$ such that for every $j$, $Tr(M_{j}\rho)\neq 0$ for at most one state $\rho$ in $S$.
A LOCC$_{r}$ POVM is described as follows. The parties $A^{(s)}$, $s=1,2,...,K$, perform local measurements on their subsystems, each depending on the previously published outcomes, and publish their outcomes, in order. A round is completed when every party has measured and published the outcome once. A POVM $\left\{\ M _{j}\right \}_{j=1,2,...,J}$ generated by such a procedure after $r$ rounds is defined to be a LOCC$_{r}$ POVM. For example, a LOCC$_{1}$ POVM is obtained as follows. $A^{(1)}$ performs a POVM $\left\{\ A^{(1)} _{j} \right \}_{j=1,2,...,J}$ on his subsystem and obtains an outcome $j _{1}$. For $s=2,3,...,K$, each $A^{(s)}$ performs a POVM $\left\{\ A^{(s)} _{j _{1},j _{2},...,j _{(s-1)},j} \right \}_{j=1,2,...,J _{j _{1},j _{2},...,j _{(s-1)}}}$ on his subsystem, depending on the classical communication of the parties who measured before him, and obtains an outcome $j _{s}$. Hence, the LOCC$_{1}$ POVM is $\left\{\ M _{j _{1},j _{2},...,j _{(K-1)},j _{K}} \right \}_{1\leqslant j _{s}\leqslant J _{j _{1},j _{2},...,j_ {(s-1)}}}$, where $M _{j _{1},j _{2},...,j _{(K-1)},j _{K}}=\otimes _{s=1}^{K} A^{(s)} _{j _{1},j _{2},...,j _{(s-1)},j _{s}}$ and $J _{j _{1},j _{2},...,j_ {(s-1)}}=J$ for $s=1$. On the other hand, $\left\{\ M _{j}\right \}_{j=1,2,...,J}$ is said to be a PPT POVM if every $M _{j}$ is positive semi-definite after a partial transposition, while it is said to be a SEP POVM if every $M _{j}$ can be written as a tensor product of local operators.
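For instance, in the bipartite case $K=2$ (with $A^{(1)}$ measuring first), a LOCC$_{1}$ POVM consists of the elements
\begin{equation*}
M_{j_{1},j_{2}}=A^{(1)}_{j_{1}}\otimes A^{(2)}_{j_{1},j_{2}},
\end{equation*}
where $\left\{\ A^{(1)}_{j_{1}}\right \}_{j_{1}=1,2,...,J}$ is a POVM on the subsystem of $A^{(1)}$ and, for each outcome $j_{1}$, $\left\{\ A^{(2)}_{j_{1},j_{2}}\right \}_{j_{2}=1,2,...,J_{j_{1}}}$ is a POVM on the subsystem of $A^{(2)}$.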
There is a natural embedding from $C^{d}$ to $C^{d+h}$, which allows a state in $\otimes _{k=1}^{K} C^{d _{k}}$ to be viewed as a state in $\otimes _{k=1}^{K} C^{d _{k}+h _{k}}$. Precisely, a computational basis in $\otimes _{k=1}^{K} C^{d _{k}}$ can be extended to a computational basis in $\otimes _{k=1}^{K} C^{d _{k}+h _{k}}$, with respect to which all operators are written in matrix form. A density matrix (state) $\rho$ in $\otimes _{k=1}^{K} C^{d _{k}}$ can then be viewed as the density matrix (state)
$\widetilde{\rho}=
\begin{pmatrix}
\rho & 0\\
0 & 0
\end{pmatrix}$ in $\otimes _{k=1}^{K} C^{d _{k}+h _{k}}$.
This identification will be used throughout the rest of the paper.
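For instance, take $K=2$, $d_{1}=d_{2}=2$, $h_{1}=1$ and $h_{2}=0$, and order the extended computational basis of $C^{3}\otimes C^{2}$ as $|0\rangle|0\rangle, |0\rangle|1\rangle, |1\rangle|0\rangle, |1\rangle|1\rangle, |2\rangle|0\rangle, |2\rangle|1\rangle$, so that the original basis vectors come first. Then a state $\rho$ of the $C^{2}\otimes C^{2}$ system is viewed as the $6\times 6$ density matrix
\begin{equation*}
\widetilde{\rho}=
\begin{pmatrix}
\rho & 0\\
0 & 0
\end{pmatrix},
\end{equation*}
where the zero blocks are of sizes $4\times 2$, $2\times 4$ and $2\times 2$.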
\section{Result}
\begin{Theorem}
Let $\left\{\ \rho _{i} |i=1,2,...,N \right \}$ be a set of states (pure or mixed), written in density matrix form, in $\otimes _{k=1}^{K} C^{d _{k}}$, where $N$ is a finite positive integer. If they are indistinguishable via LOCC$_{1}$ (PPT, SEP, global) or unambiguous LOCC (PPT, SEP, global) POVMs in $\otimes _{k=1}^{K} C^{d _{k}}$, then they are indistinguishable via LOCC$_{1}$ (PPT, SEP, global) or unambiguous LOCC (PPT, SEP, global) POVMs in $\otimes _{k=1}^{K} C^{d _{k}+h _{k}}$ (viewed as $\widetilde{\rho_{i}}=
\begin{pmatrix}
\rho _{i} & 0\\
0 & 0
\end{pmatrix}$), respectively, where $h_{k}$ are non-negative integers.
\end{Theorem}
Note that unambiguous LOCC distinguishability is equivalent to unambiguous SEP distinguishability\cite{23}.
The following corollaries, which provide maximal numbers of distinguishable states, generalize results in special systems to general systems and thus illustrate the strength of the theorem.
For general pure states, since there exist three LOCC$_{1}$ indistinguishable orthogonal states in $C^2 \otimes C^2$, for example three orthogonal Bell states, Theorem 1 together with the result in \cite{1} implies the following:
\begin{Corollary}
In any non-trivial system, the maximal number T such that any T orthogonal pure states are LOCC$_{1}$ distinguishable is 2.
\end{Corollary}
For orthogonal product states, four LOCC$_{1}$ indistinguishable states were constructed in $C^3 \otimes C^2$, provided the measurement order is fixed, while five LOCC$_{1}$ indistinguishable states were constructed in $C^3 \otimes C^3$, provided the measurement order can be chosen. Results in bipartite systems also show that three orthogonal product states are LOCC$_{1}$ distinguishable in any order, while four orthogonal product states are LOCC$_{1}$ distinguishable in a suitable order\cite{18}.
Therefore, as a consequence of Theorem 1, we have:
\begin{Corollary}
In $C^m \otimes C^n$, the maximal number P such that any P orthogonal product states are LOCC$_{1}$ distinguishable in a fixed measurement order is 3, where $m\geqslant 3, n\geqslant 2$, while the maximal number Q such that any Q orthogonal product states are LOCC$_{1}$ distinguishable in a suitable order is 4, where $m, n\geqslant 3$.
\end{Corollary}
For orthogonal product bases, a LOCC indistinguishable basis in $C^3 \otimes C^3$ was constructed\cite{2}. Combining the result in \cite{19} with Theorem 1, a LOCC indistinguishable orthogonal product basis in $C^m \otimes C^n$ can be constructed, where $m, n\geqslant 3$.
\begin{Corollary}
The nine Domino states in \cite{2}, together with the states $|i\rangle|j\rangle$ for $0\leqslant i\leqslant m-1$, $0\leqslant j\leqslant n-1$ and $\max(i,j)\geqslant 3$, form a LOCC indistinguishable complete orthogonal product basis in $C^m \otimes C^n$, where $m, n\geqslant 3$.
\end{Corollary}
The above basis is LOCC indistinguishable, not only LOCC$_{1}$ indistinguishable. Details are provided as follows.
The nine Domino states (unnormalized) in \cite{2} form an orthogonal product basis of $C^3 \otimes C^3$. They are $|0\rangle |0\pm 1\rangle, |0\pm 1\rangle|2\rangle, |2\rangle |1\pm 2\rangle, |1\pm 2\rangle|0\rangle, |1\rangle|1\rangle$. It is easy to see that in $C^m \otimes C^n$ with $m, n\geqslant 3$, these states together with $|i\rangle|j\rangle$, where $0\leqslant i\leqslant m-1$, $0\leqslant j\leqslant n-1$ and $\max(i,j)\geqslant 3$, form a complete orthogonal product basis. We will demonstrate that it is LOCC indistinguishable.
We need a lemma which is an easy corollary of the result in \cite{19}.
\begin{Lemma} An orthogonal product basis in a multipartite system is LOCC distinguishable if and only if it is LOCC$_{1}$ distinguishable.
\end{Lemma}
By Theorem 1, the above orthogonal basis is LOCC$_{1}$ indistinguishable: the Domino states are LOCC (and thus LOCC$_{1}$) indistinguishable in $C^3 \otimes C^3$, so by Theorem 1 they remain LOCC$_{1}$ indistinguishable in $C^m \otimes C^n$, and any set containing an indistinguishable subset is itself indistinguishable. As we are considering an orthogonal product basis, the above lemma shows that it is LOCC indistinguishable.
The construction can be generalized to multipartite systems by taking the tensor product of the above basis (after normalization) with an orthonormal basis of the remaining parties.
\section{Proof of the result}
We only prove the theorem for perfect discriminations. The proofs for unambiguous discriminations are similar.
We will show that the up-left blocks of a POVM in $\otimes _{k=1}^{K} C^{d _{k}+h _{k}}$ form a POVM of the same kind (LOCC$_{1}$, PPT, SEP or global) in $\otimes _{k=1}^{K} C^{d _{k}}$, while the distinguishability condition in the lower dimensional system is the same as in the larger dimensional system, since the embedded states $\widetilde{\rho_{i}}$ vanish outside the up-left block.
\begin{Lemma}
Let $\left\{\ M_{j} \right \}_{j=1,2,...,J}$ be a (LOCC$_{1}$, PPT, SEP, global) POVM of $\otimes _{k=1}^{K} C^{d _{k}+h _{k}}$ such that
$M _{j}=\begin{pmatrix}
M_{j1} & M_{j2}\\
M_{j3} & M_{j4}
\end{pmatrix}$
written in block form, where $M _{j1}$ is a $(\prod _{k=1}^{K}d _{k})\times(\prod _{k=1}^{K}d _{k})$ matrix. Then $\left\{\ M_{j1} \right \}_{j=1,2,...,J}$ is a (LOCC$_{1}$, PPT, SEP, global) POVM of $\otimes _{k=1}^{K} C^{d _{k}}$.
\end{Lemma}
\noindent \textbf{Proof:}
For a POVM $\left\{\ M_{j} \right \}_{j=1,2,...,J}$ in $\otimes _{k=1}^{K} C^{d _{k}+h _{k}}$, $M_{j}$ can be written as
$M _{j}=\sum _{i} (\otimes _{s=1}^{K}A^{(s)}_{ji})$, where the sum is finite. Write
$A^{(s)} _{ji}=\begin{pmatrix}
A^{(s)}_{ji1} & A^{(s)}_{ji2}\\
A^{(s)}_{ji3} & A^{(s)}_{ji4}
\end{pmatrix}$,
$M _{j}=\begin{pmatrix}
M_{j1} & M_{j2}\\
M_{j3} & M_{j4} \end{pmatrix}$
in block form, where $A^{(s)}_{ji1}$ is a $d _{s}\times d _{s}$ matrix and $M_{j1}$ is a $\prod _{s=1}^{K}d _{s}\times \prod _{s=1}^{K}d _{s}$ matrix. Then $M _{j1}=\sum _{i} \otimes _{s=1}^{K}(A^{(s)} _{ji1})$. Since $\left\{\ M _{j} \right \}_{j=1,2,...,J}$ is a POVM, $\sum _{j}M _{j}=I_{\prod _{s=1}^{K}(d _{s}+h _{s})}$ and each $M _{j}$ is positive semi-definite. Therefore, $\sum _{j}M_{j1}=I_{\prod _{s=1}^{K} d _{s}}$ and each $M _{j1}$, being a principal submatrix of a positive semi-definite matrix, is positive semi-definite. Hence, $\left\{\ M _{j1} \right \}_{j=1,2,...,J}$ is a POVM.
That $\left\{\ M _{j} \right \}_{j=1,2,...,J}$ is a $PPT$ POVM means that for every $j$, $M _{j}$ is positive semi-definite after a partial transposition. Without loss of generality, for $j$, assume that the partial transposition is on $A^{(1)}$. Hence, $\sum _{i} [(A^{(1)} _{ji})^{T}\otimes \otimes _{s=2}^{K}(A^{(s)} _{ji})]$ is positive semi-definite, and thus $\sum _{i} [(A^{(1)} _{ji1})^{T}\otimes \otimes _{s=2}^{K}(A^{(s)} _{ji1})]$, being its compression to $\otimes _{s=1}^{K} C^{d _{s}}$, is positive semi-definite, which implies that $M _{j1}$ is a $PPT$ operator.
That $\left\{\ M_{j} \right \}_{j=1,2,...,J}$ is a $SEP$ POVM means that $M_{j}$ can be written as
\noindent $M _{j}=\otimes _{s=1}^{K}A^{(s)} _{j}$, for every $j$. Hence, $M _{j1}=\otimes _{s=1}^{K}A^{(s)} _{j1}$ is separable, which implies that $\left\{\ M _{j1} \right \}_{j=1,2,...,J}$ is a $SEP$ POVM of $\otimes _{s=1}^{K} C^{d _{s}}$, where, similar to the above,
$A^{(s)} _{j}=\begin{pmatrix}
A^{(s)}_{j1} & A^{(s)}_{j2}\\
A^{(s)}_{j3} & A^{(s)}_{j4}
\end{pmatrix}$.
On the other hand, as in the setting section, let $\left\{\ M _{j _{1},j _{2},...,j _{(K-1)},j _{K}} \right \}_{1\leqslant j _{s}\leqslant J _{j _{1},j _{2},...,j_ {(s-1)}}}$ be a $LOCC_{1}$ POVM, where $M _{j _{1},j _{2},...,j _{(K-1)},j _{K}}=\otimes _{s=1}^{K} A^{(s)} _{j _{1},j _{2},...,j _{(s-1)},j _{s}}$. Write
\noindent $M _{j _{1},j _{2},...,j _{(K-1)},j _{K}}=
\begin{pmatrix}
M _{j _{1},j _{2},...,j _{(K-1)},j _{K}1} &
M _{j _{1},j _{2},...,j _{(K-1)},j _{K}2}\\
M _{j _{1},j _{2},...,j _{(K-1)},j _{K}3} &
M _{j _{1},j _{2},...,j _{(K-1)},j _{K}4}
\end{pmatrix}$,
\noindent $A^{(s)} _{j _{1},j _{2},...,j _{(s-1)},j _{s}}=\begin{pmatrix}
A^{(s)}_{j _{1},j _{2},...,j _{(s-1)},j _{s}1} & A^{(s)}_{j _{1},j _{2},...,j _{(s-1)},j _{s}2}\\
A^{(s)}_{j _{1},j _{2},...,j _{(s-1)},j _{s}3} & A^{(s)}_{j _{1},j _{2},...,j _{(s-1)},j _{s}4} \end{pmatrix}$,
\noindent where $M_{j _{1},j _{2},...,j _{(K-1)},j _{K}1}$ is a $\prod _{s=1}^{K}d _{s}\times \prod _{s=1}^{K}d _{s}$ matrix, $A^{(s)}_{j _{1},j _{2},...,j _{(s-1)},j _{s}1}$ is a $d _{s}\times d _{s}$ matrix. Hence, $M _{j _{1},j _{2},...,j _{(K-1)},j _{K}1}=\otimes _{s=1}^{K} A^{(s)} _{j _{1},j _{2},...,j _{(s-1)},j _{s}1}$. Since $\left\{\ A^{(s)} _{j _{1},j _{2},...,j _{(s-1)},j _{s}} \right \}_{j _{s}=1,2,...,J _{j _{1},j _{2},...,j_ {(s-1)}}}$ is a local POVM of party $A^{(s)}$,
\noindent $\left\{\ A^{(s)} _{j _{1},j _{2},...,j _{(s-1)},j _{s}1} \right \}_{j _{s}=1,2,...,J _{j_{1},j_{2},...,j_{(s-1)}}}$ is a local POVM of $A^{(s)}$. Therefore, $\left\{\ M _{j _{1},j _{2},...,j _{(K-1)},j _{K}1} \right \}_{1\leqslant j _{s}\leqslant J _{j _{1},j _{2},...,j_ {(s-1)}}}$ is a LOCC$_{1}$ POVM.
\\
\noindent \textbf{Proof of Theorem 1}:
Let $\left\{\ \rho _{i} |i=1,2,...,N \right \}$ be a set of states (pure or mixed) in $\otimes _{s=1}^{K} C^{d _{s}}$. Assume that it is distinguishable via a $LOCC _{1}$ (PPT, SEP, global) POVM $\left\{\ M _{j} \right \}_{j=1,2,...,J}$ in $\otimes _{s=1}^{K} C^{d _{s}+h _{s}}$, namely, for every $j$, $Tr(M _{j}\widetilde{\rho _{i}})\neq 0$ for at most one $i$. Let us prove that it is then distinguishable via a $LOCC _{1}$ (PPT, SEP, global) POVM in $\otimes _{s=1}^{K} C^{d _{s}}$.
Write
$M _{j}=\begin{pmatrix}
M_{j1} & M_{j2}\\
M_{j3} & M_{j4}
\end{pmatrix}$
in block form, where $M_ {j1}$ is a $\prod _{s=1}^{K}d _{s}\times \prod _{s=1}^{K}d _{s}$ matrix. Since $\left\{\ M _{j} \right \}_{j=1,2,...,J}$ is a $LOCC _{1}$(PPT, SEP, global) POVM, $\left\{\ M _{j1} \right \}_{j=1,2,...,J}$ is a $LOCC _{1}$(PPT, SEP, global) POVM of $\otimes _{s=1}^{K} C^{d _{s}}$, by the above Lemma.
Note that $Tr(M _{j1}{\rho _{i}})=Tr(M _{j}\widetilde{\rho _{i}})$: indeed, $M _{j}\widetilde{\rho _{i}}$ has $M_{j1}\rho _{i}$ as its up-left block and zero blocks in its right column, so the two traces coincide. Hence, at most one $\rho _{i}$ satisfies $Tr(M _{j1}{\rho _{i}})\neq 0$, which means that the states are $LOCC _{1}$ (PPT, SEP, global) distinguishable via the POVM $\left\{\ M _{j1} \right \}_{j=1,2,...,J}$ in $\otimes _{s=1}^{K} C^{d _{s}}$.
$
\blacksquare$
\section{Discussion}
\qquad The analogous result for other kinds of indistinguishability may also hold. However, the method in this paper may not work. For example, the up-left block of a LOCC (LPCC) POVM in $\otimes _{k=1}^{K} C^{d _{k}+h _{k}}$ may not be a LOCC (LPCC) POVM in $\otimes _{k=1}^{K} C^{d _{k}}$. The reason may be related to the fact that, when measuring a state in a larger dimensional system, the collapsed state may not lie in the original lower dimensional system. We construct a projective POVM on $C^4$ (which may be regarded as one party's share of $C^4 \otimes C^4$) whose up-left $3 \times 3$ blocks do not form a projective POVM, as follows.
\begin{equation*}
P_{1}=
\begin{pmatrix}
\quad\frac{1}{4} & \quad\frac{1}{4} & \quad\frac{1}{4} & \quad\frac{1}{4}\\
\quad\frac{1}{4} & \quad\frac{1}{4} & \quad\frac{1}{4} & \quad\frac{1}{4}\\
\quad\frac{1}{4} & \quad\frac{1}{4} & \quad\frac{1}{4} & \quad\frac{1}{4}\\
\quad\frac{1}{4} & \quad\frac{1}{4} & \quad\frac{1}{4} & \quad\frac{1}{4}
\end{pmatrix}
P_{2}=
\begin{pmatrix}
\quad\frac{1}{4} & -\frac{1}{4} & \quad\frac{1}{4} & -\frac{1}{4}\\
-\frac{1}{4} & \quad\frac{1}{4} & -\frac{1}{4} & \quad\frac{1}{4}\\
\quad\frac{1}{4} & -\frac{1}{4} & \quad\frac{1}{4} & -\frac{1}{4}\\
-\frac{1}{4} & \quad\frac{1}{4} & -\frac{1}{4} & \quad\frac{1}{4}
\end{pmatrix}
\end{equation*}
\begin{equation*}
P_{3}=
\begin{pmatrix}
\quad\frac{1}{4} & \quad\frac{1}{4} & -\frac{1}{4} & -\frac{1}{4}\\
\quad\frac{1}{4} & \quad\frac{1}{4} & -\frac{1}{4} & -\frac{1}{4}\\
-\frac{1}{4} & -\frac{1}{4} & \quad\frac{1}{4} & \quad\frac{1}{4}\\
-\frac{1}{4} & -\frac{1}{4} & \quad\frac{1}{4} & \quad\frac{1}{4}
\end{pmatrix}
P_{4}=
\begin{pmatrix}
\quad\frac{1}{4} & -\frac{1}{4} & -\frac{1}{4} & \quad\frac{1}{4}\\
-\frac{1}{4} & \quad\frac{1}{4} & \quad\frac{1}{4} & -\frac{1}{4}\\
-\frac{1}{4} & \quad\frac{1}{4} & \quad\frac{1}{4} & -\frac{1}{4}\\
\quad\frac{1}{4} & -\frac{1}{4} & -\frac{1}{4} & \quad\frac{1}{4}
\end{pmatrix}
\end{equation*}
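Indeed, $P_{i}=|v_{i}\rangle\langle v_{i}|$, where $|v_{1}\rangle=\frac{1}{2}(1,1,1,1)^{T}$, $|v_{2}\rangle=\frac{1}{2}(1,-1,1,-1)^{T}$, $|v_{3}\rangle=\frac{1}{2}(1,1,-1,-1)^{T}$ and $|v_{4}\rangle=\frac{1}{2}(1,-1,-1,1)^{T}$ are orthonormal, so that $P_{1}+P_{2}+P_{3}+P_{4}=I_{4}$ and $P_{i}^{2}=P_{i}$ for every $i$. However, the up-left $3\times 3$ block of $P_{1}$ is $\frac{1}{4}J_{3}$, where $J_{3}$ denotes the all-ones $3\times 3$ matrix, and $(\frac{1}{4}J_{3})^{2}=\frac{3}{4}\cdot\frac{1}{4}J_{3}\neq \frac{1}{4}J_{3}$, so the up-left blocks are not projections.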
However, it would not be surprising if the result could be extended to other cases. Note that for a complete or unextendible product basis, the result holds for LOCC distinguishability\cite{19,36}. Hence, we have the following definitions and conjecture:
\begin{Definition}\textbf{(Local-global indistinguishable property)}
Let $\left\{\ d_{k} \right \}_{k=1,2,...,K}$ be integers with $d_{k}\geq 2$. For a set of states $S$ in $\otimes _{k=1}^{K} C^{d _{k}}$ and two kinds of indistinguishability $M$, $M'$:
(1) $S$ is said to satisfy the $M\rightarrow M'$ local-global indistinguishable property if indistinguishability of $S$ via $M$ in $\otimes _{k=1}^{K} C^{d _{k}}$ implies indistinguishability of $S$ via $M'$ in $\otimes _{k=1}^{K} C^{d _{k}+ h_{k}}$ for all non-negative integers $h _{k}$.
(2) If every state set $S$ in $\otimes _{k=1}^{K} C^{d _{k}}$ satisfies the $M\rightarrow M'$ local-global indistinguishable property, then $\otimes _{k=1}^{K} C^{d _{k}}$ is said to satisfy the $M\rightarrow M'$ local-global indistinguishable property.
(3) If every state set $S$ in every system satisfies the $M\rightarrow M'$ local-global indistinguishable property, then $M\rightarrow M'$ is said to satisfy the local-global indistinguishable property.
\end{Definition}
\begin{Conjecture}
If M=(perfect or unambiguous) LOCC(LPCC, LOCC$_{r}$, LPCC$_{r}$, PPT, SEP, global, projective) distinguishability, then $M\rightarrow M$ satisfies the local-global indistinguishable property.
\end{Conjecture}
This gives a framework of local-global indistinguishability, which asserts that the indistinguishability of states is independent of the dimension of the ambient system. Now, Theorem 1 can be restated as: for M=$LOCC _{1}$ (unambiguous LOCC, (perfect or unambiguous) PPT, SEP, global) distinguishability, the conjecture is true.
\section{Conclusion}
\qquad In this paper, we consider the natural problem of whether the indistinguishability of states depends on the dimension of the system. We demonstrate that LOCC$_{1}$, PPT and SEP indistinguishabilities, both perfect and unambiguous, are properties of the states themselves and independent of the choice of dimension. More precisely, we show that if states are LOCC$_{1}$ (or unambiguous LOCC, (perfect or unambiguous) PPT, SEP) indistinguishable in a lower dimensional system, then they are LOCC$_{1}$ (or unambiguous LOCC, (perfect or unambiguous) PPT, SEP) indistinguishable in any dimensionally extended system. The result holds for both bipartite and multipartite systems and for both pure and mixed states.
Combined with previous results, Theorem 1 gives the maximal numbers of locally distinguishable states and can be employed to construct LOCC indistinguishable orthogonal product bases in general systems, except for one or two small-dimensional ones. Note that the corollaries apply even to multipartite systems.
As a direction for further work, we define the local-global indistinguishable property and present a conjecture. Both proving its validity and searching for counter-examples could be interesting.
\section*{Acknowledgments}
We wish to thank Professor Zhu-Jun Zheng, who gave advice and helped to check the paper.
\section*{Declarations}
\subsection*{\textbf{Funding}} No funding.
\subsection*{\textbf{Conflicts of interest/Competing interests}} The author declares there are no conflicts of interest.
\subsection*{\textbf{Availability of data and material}} All data supporting the finding can be found in the article.
\subsection*{\textbf{Code availability}} Not applicable.
\subsection*{\textbf{Authors' contributions}} The article has a unique author.
\end{document}
\begin{document}
\title[ mappings connected with parallel addition ] {On the mappings connected with parallel addition of nonnegative operators}
\author[Yury Arlinski\u{\i}]{Yu.M. Arlinski\u{\i}} \address{Department of Mathematical Analysis \\ East Ukrainian National University \\ Prospect Radyanskii, 59-a, Severodonetsk, 93400, Ukraine\\ and Department of Mathematics, Dragomanov National Pedagogical University, Kiev, Pirogova 9, 01601, Ukraine}
\email{[email protected]} \subjclass[2010] {47A05, 47A64, 46B25} \keywords{Parallel sum, iterates, fixed point}
\begin{abstract} We study a mapping $\tau_G$ of the cone ${\mathbf B}^+({\mathcal H})$ of bounded nonnegative self-adjoint operators in a complex Hilbert space ${\mathcal H}$ into itself. This mapping is defined as a strong limit of iterates of the mapping ${\mathbf B}^+({\mathcal H})\ni X\mapsto\mu_G(X)=X-X:G\in{\mathbf B}^+({\mathcal H})$, where $G\in{\mathbf B}^+({\mathcal H})$ and $X:G$ is the parallel sum. We find explicit expressions for $\tau_G$ and establish its properties. In particular, it is shown that $\tau_G$ is sub-additive, homogeneous of degree one, and its image coincides with the set of its fixed points, which is the subset of ${\mathbf B}^+({\mathcal H})$ consisting of all $Y$ such that ${\rm ran\,} Y^{1/2}\cap{\rm ran\,} G^{1/2}=\{0\}$. Relationships between $\tau_G$ and the Lebesgue type decomposition of a nonnegative self-adjoint operator are established, and applications to the properties of unbounded self-adjoint operators with trivial intersections of their domains are given. \end{abstract} \maketitle \tableofcontents \section{Introduction} We will use the following notations: ${\rm dom\,} A$, ${\rm ran\,} A$, and ${\xker\,} A$ are the domain, the range, and the kernel of a linear operator $A$, ${\rm \overline{ran}\,} A$ and ${\rm clos\,}{\cL}$ denote the closure of ${\rm ran\,} A$ and of the set $\cL$, respectively. A linear operator $A$ in a Hilbert space $\cH$ is called \begin{itemize}
\item bounded from below if $(A f,f)\ge m||f||^2 $ for all $f\in{\rm dom\,} A$ and some real number $m$,
\item positive definite if $m>0$,
\item nonnegative if $(A f,f)\ge 0 $ for all $f\in{\rm dom\,} A.$ \end{itemize}
We denote by $\bB^+(\cH)$ the cone of all bounded self-adjoint non-negative operators in a complex Hilbert space $\cH$, and we let $\bB^{+}_0(\cH)$ be the subset of operators from $\bB^+(\cH)$ with trivial kernels. If $A,B\in \bB^+(\cH)$ and $C=ABA$, then by the Douglas theorem \cite{Doug} one has ${\rm ran\,} C^{1/2}=A\,{\rm ran\,} B^{1/2}$. If $\cK$ is a subspace (closed linear manifold) in $\cH$, then $P_\cK$ is the orthogonal projection in $\cH$ onto $\cK$, and $\cK^\perp\stackrel{def}{=}\cH\ominus\cK$.
Let $X,G\in\bB^+(\cH)$.
The \textit{parallel sum} $X:G$ is defined by the quadratic form: \[ \left((X:G)h,h\right)\stackrel{def}{=}\inf_{f,g \in \cH}\left\{\,\left(Xf,f\right)+\left(Gg,g\right):\,
h=f+g \,\right\} \ , \] see \cite{AD}, \cite{FW}, \cite{K-A}. One can establish for $X:G$ the following equivalent definition \cite{AT}, \cite{PSh} \[ X:G=s-\lim\limits_{\varepsilon\downarrow 0}\, X\left(X+G+\varepsilon I\right)^{-1}G. \] Then for positive definite bounded self-adjoint operators $X$ and $G$ we obtain \[ X:G=(X^{-1}+G^{-1})^{-1} \ . \] As is known \cite{PSh}, $X:G$ can be calculated as follows \[ X:G=X-\left((X+G)^{-1/2}X\right)^*\left((X+G)^{-1/2}X\right). \] Here for $A\in\bB^+(\cH)$ by $A^{-1}$ we denote the Moore--Penrose pseudo-inverse. The operator $X:G$ belongs to $\bB^+(\cH)$ and, as established in \cite{AT}, the equality \begin{equation} \label{ukfdyj} {\rm ran\,} (X:G)^{1/2}={\rm ran\,} X^{1/2}\cap{\rm ran\,} G^{1/2} \end{equation} holds true. If $T$ is a bounded operator in $\cH$, then in general $$T^*(A:B)T\le (T^*AT):(T^*BT)$$ for $A,B\in\bB^+(\cH)$, but, see \cite{Ar2}, \begin{multline} \label{trans}{\xker\,} T^*\cap{\rm ran\,} (A+B)^{1/2}=\{0\} \\ \Longrightarrow T^*(A:B)T= (T^*AT):(T^*BT). \end{multline} Besides, if $A'\le A''$, $B'\le B''$, then $A':B'\le A'':B''$ and, moreover \cite{PSh}, \begin{equation} \label{monotcon} A_n\downarrow A\quad\mbox{and}\quad B_n\downarrow B\quad\mbox{strongly}\Rightarrow A_n:B_n\downarrow A:B\quad\mbox{strongly}. \end{equation} Let $X,G\in\bB^+(\cH)$. Since $X\le X+G$ and $G\le X+G$, one gets \begin{multline} \label{fg2}
X=(X+G)^{1/2}M(X+G)^{1/2},\\
G=(X+G)^{1/2}(I-M)(X+G)^{1/2} \end{multline} for some non-negative contraction $M$ on $\cH$ with ${\rm ran\,} M\subset{\rm \overline{ran}\,}(X+G)$.
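For positive definite matrices the three expressions for $X:G$ recalled above can be compared directly; the following sketch (Python with NumPy; the random matrices and their size are chosen only for illustration) does so.
\begin{verbatim}
# The three formulas for the parallel sum X:G agree for positive definite X, G.
import numpy as np

rng = np.random.default_rng(1)
n = 4

def rand_pd(k):
    A = rng.normal(size=(k, k))
    return A @ A.T + k * np.eye(k)                 # positive definite

X, G = rand_pd(n), rand_pd(n)

P1 = np.linalg.inv(np.linalg.inv(X) + np.linalg.inv(G))   # (X^-1 + G^-1)^-1
P2 = X @ np.linalg.inv(X + G + 1e-12 * np.eye(n)) @ G     # X (X+G+eps I)^-1 G, small eps

w, V = np.linalg.eigh(X + G)
S = (V @ np.diag(1.0 / np.sqrt(w)) @ V.T) @ X              # (X+G)^{-1/2} X
P3 = X - S.T @ S                                           # X - ((X+G)^{-1/2}X)*((X+G)^{-1/2}X)

print(np.allclose(P1, P2) and np.allclose(P1, P3))         # True
\end{verbatim}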
\begin{lemma} {\rm \cite{Ar2}} \label{yu1} Suppose $X, G\in \bB^+(\cH)$ and let $M$ be as in \eqref{fg2}. Then \[
X:G=(X+G)^{1/2}(M-M^2)(X+G)^{1/2}. \] \end{lemma} Since $$ {\rm ran\,} M^{1/2}\cap{\rm ran\,} (I-M)^{1/2}={\rm ran\,} (M-M^2)^{1/2}, $$ the next proposition is an immediate consequence of Lemma \ref{yu1}, cf. \cite{FW}, \cite{PSh}. \begin{proposition} \label{root} 1) ${\rm ran\,} (X:G)^{1/2}={\rm ran\,} X^{1/2}\cap{\rm ran\,} G^{1/2}$.
2) The following statements are equivalent: \begin{enumerate} \def\labelenumi{\rm (\roman{enumi})} \item
$X:G=0$;
\item $M^2=M$, i.e., the operator $M$ in \eqref{fg2} is an orthogonal projection in ${\rm \overline{ran}\,}(X+G)$; \item ${\rm ran\,} X^{1/2}\cap{\rm ran\,} G^{1/2}=\{0\}$. \end{enumerate} \end{proposition}
Fix $G\in\bB^+(\cH)$ and define a mapping \begin{equation} \label{mapmu} \bB^+(\cH)\ni X\mapsto\mu_G(X)\stackrel{def}{=}X-X:G\in \bB^+(\cH).
\end{equation} Then \begin{enumerate} \item $0\le\mu_G(X)\le X$, \item $\mu_G(X)=X\iff X:G=0\iff{\rm ran\,} X^{1/2}\cap {\rm ran\,} G^{1/2}=\{0\}$. \end{enumerate} Therefore, if $G$ is positive definite, then the set of fixed points of $\mu_G$ consists of a unique element, the trivial operator. Denote by $\mu^{[n]}_G$ the $n$th iteration of the mapping $\mu_G$, i.e., for $X\in\bB^+(\cH)$ \begin{multline*} \mu^{[2]}_G(X)=\mu_G(\mu_G(X)),\;\mu^{[3]}_G(X)=\mu_G(\mu^{[2]}_G(X)),\cdots,\\ \mu^{[n]}_G(X)=\mu_G(\mu^{[n-1]}_G(X)). \end{multline*} Since \[ X\ge\mu_G(X)\ge \mu^{[2]}_G(X)\ge\cdots\ge\mu^{[n]}_G(X)\ge \cdots, \] the strong limit of $\{\mu^{[n]}_G(X)\}_{n=0}^\infty$ exists for an arbitrary $X\in\bB^+(\cH)$ and is an operator from $\bB^+(\cH)$. In this paper we study the mapping \[ \bB^+(\cH)\ni X\mapsto\tau_G(X)\stackrel{def}{=}s-\lim\limits_{n\to\infty}\mu^{[n]}_G(X)\in\bB^+(\cH). \] We show that the range and the set of fixed points of $\tau_G$ coincide with the cone \begin{multline*} \bB^+_G(\cH)=\left\{Y\in\bB^+(\cH): {\rm ran\,} Y^{1/2}\cap{\rm ran\,} G^{1/2}=\{0\}\right\}\\ =\left\{Y\in \bB^+(\cH), \;Y:G=0\right\}. \end{multline*} We find explicit expressions for $\tau_G$ and establish its properties. In particular, we show that $\tau_G$ is homogeneous and sub-additive, i.e., $\tau_G(\lambda X)=\lambda\tau_G(X)$ and $\tau_G(X+Y)\le \tau_G(X)+\tau_G(Y)$ for arbitrary operators $X, Y\in\bB^+(\cH)$ and an arbitrary positive number $\lambda$. It turns out that
$$\tau_G(X)=\tau_{\widetilde G}(X)=\tau_G(\widetilde G+X)$$ for all $X\in\bB^+(\cH)$, where $\widetilde G\in\bB^+(\cH)$ is an arbitrary operator such that ${\rm ran\,} \widetilde G^{1/2}={\rm ran\,} G^{1/2}$. We prove the equality $\tau_G(X)=X-[G]X,$ where the mapping \[ \bB^+(\cH)\ni X\mapsto[G]X\stackrel{def}{=}s-\lim\limits_{n\to\infty}(nG:X)\in\bB^+(\cH) \] has been defined and studied by T.~Ando \cite{Ando_1976} and then in \cite{Pek_1978}, \cite{Kosaki_1984}, and \cite{E-L}. In the last Section \ref{applll} we apply the mappings $\{\mu^{[n]}_G\}$ and $\tau_G$ to the problem of the existence of a self-adjoint operator whose domain has trivial intersection with the domain of a given unbounded self-adjoint operator \cite{Neumann}, \cite{Dix}, \cite{FW}, \cite{ES}. Given an unbounded self-adjoint operator $A$, in Theorem \ref{ytcgjl} we suggest several assertions equivalent to the existence of a unitary operator $U$ possessing the property $U{\rm dom\,} A\cap{\rm dom\,} A=\{0\}$. J.~von Neumann \cite[Satz 18]{Neumann} established that such $U$ always exists for an arbitrary unbounded self-adjoint $A$ acting in a separable Hilbert space. In a nonseparable Hilbert space there always exists an unbounded self-adjoint operator $A$ such that for any unitary $U$ the relation $U{\rm dom\,} A\cap{\rm dom\,} A\ne\{0\}$ holds, see \cite{ES}. \section{The mapping $\mu_G$ and strong limits of its orbits}
\begin{lemma} \label{vspm} Let $F_0\in\bB^+(\cH)$. Define the orbit \[ F_1=\mu_G(F_0),\; F_2=\mu_G(F_1),\ldots,F_{n+1}=\mu_G(F_n),\ldots. \] Then the sequence $\{F_n\}$ is non-increasing: $$F_0\ge F_1\ge\cdots \ge F_n\ge F_{n+1}\ge\cdots,$$ and the strong limit \[ F\stackrel{def}{=}s-\lim\limits_{n\to\infty} F_n \] is a fixed point of $\mu_G$, i.e., satisfies the condition \[ F:G=0. \] \end{lemma} \begin{proof} Since $\mu_G(X)\le X$ for all $X\in\bB^+(\cH)$, the sequence $\{F_n\}$ is non-increasing. Therefore, there exists a strong limit $F=s-\lim\limits_{n\to\infty} F_n.$ On the other hand, because the sequence $\{F_n\}$ is non-increasing, the sequence $\{F_n:G\}$ is non-increasing as well and property \eqref{monotcon} of parallel addition leads to \[ s-\lim\limits_{n\to\infty}(F_n:G)=F:G. \] Besides, the equalities \[ F_n:G=F_n-F_{n+1}, \;n=0,1,\ldots \] yield $F:G=0.$
Thus, $F=\mu_G(F)$, i.e., $F$ is a fixed point of the mapping $\mu_G$. \end{proof}
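A concrete finite-dimensional illustration of the lemma (a sketch in Python with NumPy; the diagonal matrices are chosen only so that the limit can be written down explicitly): for $G=\mathrm{diag}(1,1,0)$ and $F_0=\mathrm{diag}(2,3,5)$ the orbit decreases to $F=\mathrm{diag}(0,0,5)$, which indeed satisfies $F:G=0$.
\begin{verbatim}
# Iterating F_{n+1} = mu_G(F_n) = F_n - F_n:G, with X:G = X - X (X+G)^+ X.
import numpy as np

def parallel_sum(X, G):
    return X - X @ np.linalg.pinv(X + G) @ X       # ^+ is the Moore-Penrose inverse

G = np.diag([1.0, 1.0, 0.0])
F = np.diag([2.0, 3.0, 5.0])
for _ in range(50):
    F = F - parallel_sum(F, G)                     # F_{n+1} = mu_G(F_n)

print(np.round(F, 6))                              # ~ diag(0, 0, 5)
print(np.allclose(parallel_sum(F, G), 0, atol=1e-6))   # the limit satisfies F:G = 0
\end{verbatim}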
For $G,F_0\in\bB^+(\cH)$ define subspaces \begin{equation} \label{prosm}\begin{array}{l} \Omega\stackrel{def}{=}{\rm{clos}}\left\{f\in\cH:(G+F_0)^{1/2}f\in{\rm ran\,} G^{1/2}\right\}, \\
{\mathfrak M}\stackrel{def}{=}\cH\ominus\Omega.
\end{array} \end{equation} Note that if a linear operator ${\mathcal V}$ is defined by \begin{equation} \label{contrv} \left\{\begin{array}{l}x=(G+F_0)^{1/2}f+g\\ {\mathcal V} x=G^{1/2}f,\; f\in\cH,\; g\in{\xker\,}(G+F_0) \end{array} \right., \end{equation} then ${\rm dom\,} {\mathcal V}={\rm ran\,}(G+F_0)^{1/2}\oplus{\xker\,}(G+F_0)$ is a linear manifold dense in $\cH$ and ${\mathcal V}$ is a contraction. Let $\overline{{\mathcal V}}$ be the continuation of ${\mathcal V}$ on $\cH$. Clearly $\overline{{\mathcal V}}={\mathcal V}^{**}$. If we denote by $(G+F_0)^{-1/2}$ the Moore-Penrose pseudo-inverse to $(G+F_0)^{1/2}$, then from \eqref{contrv} one can get that \begin{equation} \label{opercv} \begin{array}{l} {\mathcal V}(G+F_0)^{1/2}=G^{1/2}=(G+F_0)^{1/2}{\mathcal V}^*,\\ {\mathcal V}^*=(G+F_0)^{-1/2}G^{1/2},\;{\rm ran\,} {\mathcal V}^*\subseteq{\rm \overline{ran}\,}(G+F_0),\\ {\mathcal V} g=G^{1/2}(G+F_0)^{-1/2}g,\; g\in{\rm ran\,}(G+F_0)^{1/2}. \end{array} \end{equation} Moreover, \begin{equation} \label{11} \Omega={\rm \overline{ran}\,} {\mathcal V}^*\oplus{\xker\,}(G+F_0), \; {\mathfrak M}={\xker\,} \left(\overline{{\mathcal V}}{\upharpoonright\,}{\rm \overline{ran}\,}(G+F_0)\right). \end{equation} Besides we define the following contractive linear operator \begin{equation} \label{contrw} \left\{\begin{array}{l}x=(G+F_0)^{1/2}f+g\\ \cW x=F_0^{1/2}f,\; f\in\cH,\; g\in{\xker\,}(G+F_0). \end{array} \right. \end{equation} The operator $\cW$ is defined on ${\rm dom\,} \cW={\rm ran\,}(G+F_0)^{1/2}\oplus{\xker\,}(G+F_0)$ and \begin{equation} \label{opwa} \begin{array}{l} \cW(G+F_0)^{1/2}=F^{1/2}_0=(G+F_0)^{1/2}\cW^*,\\ \cW^*=(G+F_0)^{-1/2}F^{1/2}_0,\;{\rm ran\,} \cW^*\subseteq{\rm \overline{ran}\,} (G+F_0),\\ \cW h=F^{1/2}_0(G+F_0)^{-1/2}h,\; h\in{\rm ran\,} (G+F_0)^{1/2}. \end{array} \end{equation}
Let $\overline\cW=\cW^{**}$ be the continuation of $\cW$ on $\cH$. Clearly, $\overline\cW^*=\cW^*.$
Note that
\[
{\mathcal V}^*\overline{\mathcal V} h+\cW^*\overline \cW h=h,\;h\in{\rm \overline{ran}\,}(G+F_0).
\] Set \begin{equation} \label{prosn} \sN\stackrel{def}{=}{\xker\,}(I-\overline{\cW}\,\overline{\cW}^*). \end{equation} Since ${\xker\,} \cW^*={\xker\,} F_0$, the subspace $\sN$ is contained in ${\rm \overline{ran}\,} F_0$. \begin{proposition} \label{singar} The equalities \begin{multline} \label{equival1} {\rm ran\,} (I-\overline{\cW}\,\overline{\cW}^*)^{1/2}=\left\{f\in \cH: F^{1/2}_0f\in{\rm ran\,} G^{1/2}\right\}\\ =\left\{f\in \cH: F^{1/2}_0f\in{\rm ran\,} (F_0:G)^{1/2}\right\} \end{multline} hold. \end{proposition} \begin{proof} Set $\cH_0\stackrel{def}{=}{\rm \overline{ran}\,}(G+F_0)$. Note that ${\xker\,} (G+F_0)={\xker\,} G\cap{\xker\,} F_0$. Define \begin{equation} \label{mo} M_0\stackrel{def}{=}\overline\cW^*\overline\cW{\upharpoonright\,}\cH_0. \end{equation} Then $M_0\in\bB^+(\cH_0)$ and \begin{equation} \label{cvop} \overline{{\mathcal V}}^*\overline{{\mathcal V}}{\upharpoonright\,}\cH_0=I_{\cH_0}-M_0=I_{\cH_0}-\overline\cW^*\overline\cW{\upharpoonright\,}\cH_0. \end{equation} From \eqref{opercv} and \eqref{opwa} \begin{multline} \label{equiv11} F^{1/2}_0f=G^{1/2}h\iff (G+F_0)^{1/2}\cW^*f=(G+F_0)^{1/2}{\mathcal V}^* h\\ \iff \cW^*f={\mathcal V}^* h \end{multline}
Equality \eqref{cvop} yields \[ {\rm ran\,} {\mathcal V}^*={\rm ran\,} (I_{\cH_0}-\overline\cW^*\overline\cW{\upharpoonright\,}\cH_0)^{1/2} \] Hence \eqref{equiv11} is equivalent to the inclusion $f\in {\rm ran\,} (I-\overline \cW\overline\cW^*)^{1/2}.$ Application of \eqref{ukfdyj} completes the proof. \end{proof} Thus from \eqref{prosn} and \eqref{equiv11} we get \begin{equation} \label{nob1} \sN=\cH\ominus{\left\{{\rm clos}\left\{g\in\cH:F^{1/2}_0g\in{\rm ran\,} G^{1/2}\right\}\right\}}.
\end{equation}
\begin{theorem} \label{form1} Let $G\in\bB^+(\cH)$, $F_0\in\bB^+(\cH)$, $F_n\stackrel{def}{=}\mu_G(F_{n-1})$, $n\ge 1$, $F\stackrel{def}{=}s-\lim_{n\to \infty}F_n$. Then \begin{equation} \label{prosm1} F=(G+F_0)^{1/2}P_{\mathfrak M}(G+F_0)^{1/2} \end{equation} and \begin{equation} \label{prosn1} F=F^{1/2}_0P_\sN F^{1/2}_0, \end{equation} where ${\mathfrak M}$ and $\sN$ are given by \eqref{prosm} and \eqref{nob1}, respectively. \end{theorem} \begin{proof} From \eqref{contrv}, \eqref{contrw}, \eqref{mo}, \eqref{cvop}, \eqref{11} we have \[ \begin{array}{l} F_0=(G+F_0)^{1/2}M_0(G+F_0)^{1/2},\\
G=(G+F_0)^{1/2}(I_{\cH_0}-M_0)(G+F_0)^{1/2}, \end{array} \] \[
{\xker\,} (I_{\cH_0}-M_0)={\mathfrak M},\;{\rm \overline{ran}\,} (I-M_0)=\cH_0\ominus{\mathfrak M}=\Omega\ominus{\xker\,} (G+F_0). \] Then by Lemma \ref{yu1} \begin{multline*}
F_0:G=(G+F_0)^{1/2}(M_0:(I_{\cH_0}-M_0))(G+F_0)^{1/2}\\ =(G+F_0)^{1/2}(M_0-M^2_0)(G+F_0)^{1/2}. \end{multline*} It follows that \[ F_1=\mu_G(F_0)=F_0-F_0:G=(G+F_0)^{1/2}M^2_0(G+F_0)^{1/2}. \] Then (further $I=I_{\cH_0}$ is the identity operator) from \eqref{trans} \begin{multline*} F_1:G=(G+F_0)^{1/2}\left((I-M_0):M^2_0\right)(G+F_0)^{1/2}\\=(G+F_0)^{1/2}\left((I-M_0)M^2_0(I-M_0+M^2_0)^{-1}\right)(G+F_0)^{1/2}, \end{multline*} \begin{multline*} F_2\stackrel{def}{=}\mu_G(F_1)=F_1-F_1:G\\ =(G+F_0)^{1/2}\left(M_0^2-(I-M_0)M^2_0(I-M_0+M^2_0)^{-1}\right)(G+F_0)^{1/2}\\ =(G+F_0)^{1/2}M^4_0(I-M_0+M^2_0)^{-1}(G+F_0)^{1/2}. \end{multline*} Let us show by induction that for all $n\in\dN$ $$F_n\stackrel{def}{=}\mu_G(F_{n-1})=(G+F_0)^{1/2}M_n(G+F_0)^{1/2}\quad\mbox{for all} \quad n\in\dN,$$
where \begin{enumerate} \item $\{M_n\}$ is a non-increasing sequence from $\bB^+(\cH_0)$, \item $I-M_0+M_n$ is positive definite, \item $M_n$ commutes with $M_0$, \item $M_{n+1}=(I-M_0+M_n)^{-1}M^2_n.$ \end{enumerate} All statements are already established for $n=1$ and for $n=2$. Suppose that all statements are valid for some $n$. Further, using the equality $M_0M_n=M_nM_0$, we have \begin{multline*} I-M_0+M_{n+1}=I-M_0+(I-M_0+M_n)^{-1}M^2_n\\ =(I-M_0+M_n)^{-1}\left((I-M_0+M_n)(I-M_0)+M^2_n\right)\\
=(I-M_0+M_n)^{-1}\left((I-M_0)^2+M_n(I-M_0)+M^2_n\right)\\ =(I-M_0+M_n)^{-1}\left(\left((I-M_0)+\frac{1}{2}M_n\right)^2+\frac{3}{4}M^2_n\right). \end{multline*} Since \[ (I-M_0)+\frac{1}{2}M_n\ge \frac{1}{2}\left(I-M_0+M_n\right), \] and $I-M_0+M_n$ is positive definite, we get that the operator $I-M_0+M_{n+1}$ is positive definite. \begin{multline*} M_0M_{n+1}=M_0(I-M_0+M_n)^{-1}M^2_n\\ =(I-M_0+M_n)^{-1}M^2_n M_0=M_{n+1}M_0. \end{multline*} From \eqref{trans} we have \begin{multline*} F_{n+1}:G=(G+F_0)^{1/2}\left((I-M_0):M_{n+1}\right)(G+F_0)^{1/2}\\ =(G+F_0)^{1/2}(I-M_0)M_{n+1}(I-M_0+M_{n+1})^{-1}(G+F_0)^{1/2}, \end{multline*} and \begin{multline*} F_{n+2}=\mu_G(F_{n+1})=F_{n+1}-F_{n+1}:G\\ =(G+F_0)^{1/2}\left(M_{n+1}-(I-M_0)M_{n+1}(I-M_0+M_{n+1})^{-1}\right)(G+F_0)^{1/2}\\ =(G+F_0)^{1/2}(I-M_0+M_{n+1})^{-1}M^2_{n+1}(G+F_0)^{1/2}\\ =(G+F_0)^{1/2}M_{n+2}(G+F_0)^{1/2}. \end{multline*} One can prove by induction that inequality $I-M_n\ge 0$ and the equalities $M_{n+1}=(I-M_0+M_n)^{-1}M^2_n$ for all $n\in\dN$ imply $${\xker\,}(I-M_n)={\xker\,}(I-M_0),\;n\in\dN.$$
Let $M=\lim\limits_{n\to\infty} M_n$. Then $F=(G+F_0)^{1/2}M(G+F_0)^{1/2}$. Since $M_{n+1}(I-M_0+M_{n})=M^2_n,$ we get $(I-M_0)M=0$. Thus, ${\rm ran\,} M\subseteq{\xker\,}(I-M_0).$ Since $M{\upharpoonright\,}{\xker\,}(I-M_0)=I$, we get $M=P_{{\xker\,}(I-M_0)}.$ It follows that \eqref{prosm1} holds true.
The inequalities $0\le \mu_G(X)\le X$ yield $F_n=F^{1/2}_0N_nF^{1/2}_0,$ where $\{N_n\}$ is a non-increasing sequence from $\bB^+(\cH)$, $0\le N_n\le I$ for all $n\in\dN$, and ${\xker\,} N_n\supseteq{\xker\,} F_0$. Let $ N=s-\lim_{n\to \infty}N_n$. Then $F=F^{1/2}_0 NF^{1/2}_0$. From \eqref{contrw} we have \[ F^{1/2}_0=\cW(G+F_0)^{1/2}=(G+F_0)^{1/2}\cW^*. \] Since $M_0=\overline\cW^*\overline\cW{\upharpoonright\,}\cH_0$, we get $\overline\cW=VM^{1/2}_0$, where $V$ is an isometry from ${\rm \overline{ran}\,} M_0$ onto ${\rm \overline{ran}\,} F_0$. Thus \[ F^{1/2}_0=VM^{1/2}_0(G+F_0)^{1/2},\; M^{1/2}_0(G+F_0)^{1/2}=V^*F^{1/2}_0. \] Because $P_{{\mathfrak M}}=M^{1/2}_0P_{{\mathfrak M}} M^{1/2}_0$ we get from $F=(G+F_0)^{1/2}P_{\mathfrak M}(G+F_0)^{1/2}$: \[ F=F^{1/2}_0VP_{{\mathfrak M}} V^*F^{1/2}_0. \] The operator $VP_{{\mathfrak M}} V^*$ is an orthogonal projection in ${\rm \overline{ran}\,} F_0$. Denote $\sN_0={\rm ran\,} VP_{{\mathfrak M}} V^*=V{\rm ran\,} P_{{\mathfrak M}}.$ From $ (G+F_0)^{1/2}M^{1/2}_0h=F^{1/2}_0Vh$, for all $h\in{\rm \overline{ran}\,} M_0$ we obtain \[ (G+F_0)^{1/2}\varphi=F^{1/2}_0V\varphi,\; \varphi\in{\mathfrak M}={\xker\,}(I_{\cH_0}-M_0), \] and then \[ \varphi=(G+F_0)^{-1/2}F^{1/2}_0V\varphi. \] Hence \[ (G+F_0)^{-1/2}F^{1/2}_0g=V^*g,\; g=V\varphi\in\sN_0. \] On the other hand \[ (G+F_0)^{-1/2}F^{1/2}_0x=\overline{\cW}^*x\quad\mbox{for all}\quad x\in\cH. \] It follows that $\overline{\cW}^*g=V^*g$ for all $g\in\sN_0$. So \[
g\in\sN_0\iff ||\overline{\cW}^*g||=||g||\iff g\in{\xker\,} (I-\overline{\cW}\,\overline{\cW}^*). \] Thus, $\sN_0$ coincides with $\sN$ defined in \eqref{prosn}, and \eqref{prosn1} holds true. \end{proof} \begin{corollary}\label{commute} Suppose $F_0$ commutes with $G$. Then $\sN$ defined in \eqref{prosn} takes the form $\sN={\xker\,} G\cap{\rm \overline{ran}\,} F_0.$ In particular, \begin{enumerate} \item if ${\xker\,} F_0\supseteq{\xker\,} G$, then $F=0$, \item if $F_0=G$, then $F=0$, \item if ${\xker\,} G=\{0\}$, then $F=0$. \end{enumerate} \end{corollary} \begin{proof} If $F_0G=GF_0$, then $F^{1/2}_0(G+F_0)^{-1/2}f=(G+F_0)^{-1/2}F^{1/2}_0f$ for all $f\in{\rm ran\,} (G+F_0)^{1/2}$. Hence, $\cW^*=\overline\cW=\cW^{**}$ and $\overline \cW$ is a nonnegative contraction. It follows from \eqref{prosn} that \[ \sN={\xker\,}(I-\overline{\cW}^2)={\xker\,} (I-\cW^*) ={\xker\,} (I-(G+F_0)^{-1/2}F_0^{1/2}). \] Clearly \[ f\in {\xker\,} (I-(G+F_0)^{-1/2}F_0^{1/2})\iff f\in{\xker\,} G\cap{\rm \overline{ran}\,} F_0. \] Furthermore, applying \eqref{prosn1} we get the implications \[ \begin{array}{l} {\xker\,} F_0\supseteq{\xker\,} G\Longrightarrow \sN=\{0\},\\ {\xker\,} G=\{0\}\Longrightarrow\sN=\{0\}. \end{array} \] \end{proof} \begin{corollary} \label{new1}
If $G\in\bB^+_0(\cH)$ and if $F_0$ is positive definite, then $F=0$.
\end{corollary} \begin{proof}
In the case when $F_0$ is positive definite the subspace ${\mathfrak M}$ defined in \eqref{prosm} can be described as follows: ${\mathfrak M}= (G+F_0)^{1/2}{\xker\,} G$. Hence, if ${\xker\,} G=\{0\}$, then ${\mathfrak M}=\{0\}$ and \eqref{prosm1} gives $F=0$.
\end{proof}
\begin{theorem} Let $G\in\bB^+(\cH)$, $F_0 \in \bB^+(\cH)$, $F_{n+1}=\mu_G(F_n)$, $n\ge 0$, $F=\lim_{n\to \infty}F_n$. \begin{enumerate}
\item If ${\rm ran\,} F^{1/2}_0\subseteq{\rm ran\,} G^{1/2}$, then $F=0$. \item If ${\rm ran\,} F^{1/2}_0={\rm ran\,} G^{\alpha}$, where $\alpha<1/2$, then $F=0$. \end{enumerate} \end{theorem} \begin{proof} (1) Let ${\rm ran\,} F^{1/2}_0\subseteq{\rm ran\,} G^{1/2}$. Then $ F^{1/2}_0\cH\subseteq{\rm ran\,} G^{1/2} $. From \eqref{nob1} and \eqref{prosn1} it follows $F=0$.
(2) Suppose ${\rm ran\,} F^{1/2}_0={\rm ran\,} G^{\alpha},$ where $\alpha< 1/2$. Then by the Douglas theorem \cite{Doug} the operator $F_0$ is of the form \[ F_0=G^{\alpha}Q_0G^{\alpha}, \] where $Q_0$ is positive definite in $\cH_0={\rm \overline{ran}\,} G$. Hence, $G+G^{\alpha}Q_0G^{\alpha}=G^{\alpha}(G^{1-2\alpha}+Q_0)G^{\alpha}$, and \begin{multline*} \mu_G(F_0)= \left((G+G^{\alpha}Q_0G^{\alpha})^{-1/2}G^{\alpha}Q_0G^{\alpha} \right)^*(G+G^{\alpha}Q_0G^{\alpha})^{-1/2}G^{\alpha}Q_0G^{\alpha}\\ =G^{\alpha}Q_0(G^{1-2\alpha}+Q_0)^{-1}Q_0G^{\alpha}=G^{\alpha}\mu_{G^{1-2\alpha}}(Q_0)G^{\alpha}. \end{multline*} Note that $Q_1\stackrel{def}{=}\mu_{G^{1-2\alpha}}(Q_0)$ is positive definite. Therefore $F_1=\mu_G(F_0)$ possesses the property ${\rm ran\,} F^{1/2}_1={\rm ran\,} G^{\alpha}$. By induction we can prove that \[ F_{n+1}=\mu_G(F_n)=G^{\alpha}\mu_{G^{1-2\alpha}}(Q_n)G^{\alpha} =G^{\alpha}Q_{n+1}G^{\alpha}. \] Using that $Q_0$ is positive definite and applying Corollary \ref{new1}, we get $\lim_{n\to\infty}Q_n=0$. Hence \[ F=\lim\limits_{n\to\infty}F_n=\lim\limits_{n\to\infty}G^{\alpha}Q_nG^{\alpha}=0. \] \end{proof}
\begin{corollary} \label{mnogo} Let $\lambda>0$. Define a subspace \[ {\mathfrak M}_\lambda=\cH\ominus{\left\{{\rm clos}\left\{g\in\cH:(\lambda G+F_0)^{1/2}g\in{\rm ran\,} G^{1/2}\right\}\right\}}
\] Then
\[ (\lambda G+F_0)^{1/2}P_{{\mathfrak M}_\lambda}(\lambda G+F_0)^{1/2} =F^{1/2}_0P_{\sN} F^{1/2}_0, \]
where $\sN$ is given by \eqref{nob1}.
\end{corollary} \begin{proof} Replace $G$ by $\lambda G$ and consider a sequence $$F_0, F_1=\mu_{\lambda G}(F_0),\;F_{n}=\mu_{\lambda G}(F_{n-1}),\ldots.$$ Clearly \begin{multline*} \cH\ominus{\left\{{\rm clos}\left\{g\in\cH:F^{1/2}_0g\in{\rm ran\,} (\lambda G)^{1/2}\right\}\right\}}\\= \cH\ominus{\left\{{\rm clos}\left\{g\in\cH:F^{1/2}_0g\in{\rm ran\,} G^{1/2}\right\}\right\}}=\sN. \end{multline*} By Theorem \ref{form1} \[ s-\lim\limits_{n\to\infty}F_n=F^{1/2}_0P_\sN F^{1/2}_0. \] On the other hand, the application of \eqref{prosm1} gives \[ s-\lim\limits_{n\to\infty}F_n=(\lambda G+F_0)^{1/2}P_{{\mathfrak M}_\lambda}(\lambda G+F_0)^{1/2}. \]
\end{proof}
\begin{theorem} \label{interzero} Let $G\in\bB^+_0(\cH)$, ${\rm ran\,} G\ne \cH$. Let $F_0\in\bB^+(\cH)$, $F_n\stackrel{def}{=}\mu_G(F_{n-1})$, $n\ge 1$, $F\stackrel{def}{=}s-\lim_{n\to \infty}F_n$. Then \begin{multline*}
F\in\bB^+_0(\cH)\Longrightarrow \left\{\begin{array}{l}F_0\in\bB^+_0(\cH),\\ {\rm ran\,}(G+F_0)\cap{\rm ran\,} G^{1/2}=\{0\}\end{array}\right.\\ \iff \left\{\begin{array}{l}F_0\in\bB^+_0(\cH),\\ {\rm ran\,} F_0\cap{\rm ran\,} G^{1/2}=\{0\}\end{array}\right.. \end{multline*} Moreover, the following conditions are equivalent: \begin{enumerate} \def\labelenumi{\rm (\roman{enumi})} \item $F\in\bB^+_0(\cH)$, \item ${\rm ran\,} (G+F_0)^{1/2}\cap {\rm \overline{ran}\,} (G+F_0)^{-1/2}G^{1/2}=\{0\}$, \item for each converging sequence $\{y_n\}\subset{\rm ran\,} G^{1/2}$ such that $$\lim_{n\to\infty}y_n\in{\rm ran\,} F_0$$ it follows that the sequence $\{(G+F_0)^{-1/2}y_n\}$ diverges, \item ${\rm ran\,} F^{1/2}_0\cap{\rm clos\,}\left\{F^{-1/2}_0\left({\rm ran\,} F^{1/2}_0\cap {\rm ran\,} G^{1/2}\right)\right\}=\{0\}$, \item for each converging sequence $\{z_n\}\subset {\rm ran\,} F^{1/2}_0\cap{\rm ran\,} G^{1/2}$ such that $$\lim_{n\to\infty}z_n\in{\rm ran\,} F_0$$ it follows that the sequence $\{F^{-1/2}_0 z_n\}$ diverges. \end{enumerate} \end{theorem} \begin{proof}
Clearly $F\in\bB^+_0(\cH)\iff{\xker\,} F=\{0\}$. Since ${\xker\,} (G+F_0)=\{0\},$ from \eqref{prosm1}, \eqref{prosm}, \eqref{contrv}, \eqref{opercv} we obtain the equivalences \begin{multline*}
{\xker\,} F=\{0\}\iff\Omega\cap {\rm ran\,} (G+F_0)^{1/2}=\{0\}\\ \iff{\rm ran\,} (G+F_0)^{1/2}\cap {\rm \overline{ran}\,} (G+F_0)^{-1/2}G^{1/2}=\{0\}. \end{multline*} So (i)$\iff$(ii). In particular $$ {\xker\,} F=\{0\}\Longrightarrow {\rm ran\,} (G+F_0)^{1/2}\cap {\rm ran\,} (G+F_0)^{-1/2}G^{1/2}=\{0\}.$$ Hence \begin{equation} \label{inters} {\rm ran\,} (G+F_0)\cap {\rm ran\,} G^{1/2}=\{0\}. \end{equation} Assume that ${\rm ran\,} G^{1/2}\cap{\rm ran\,} F_0\ne\{0\}$. Then $F_0x=G^{1/2}y$ for some $x,y\in \cH$. Set $z\stackrel{def}{=}y+G^{1/2}x$. Then $F_0x=G^{1/2}(z-G^{1/2}x)$ and $(G+F_0)x=G^{1/2}z$, which contradicts \eqref{inters}.
Conversely, if ${\rm ran\,} (G+F_0)\cap {\rm ran\,} G^{1/2}\ne\{0\}$, then ${\rm ran\,} G^{1/2}\cap{\rm ran\,} F_0\ne\{0\}$. So, \eqref{inters} is equivalent to ${\rm ran\,} G^{1/2}\cap{\rm ran\,} F_0=\{0\}.$ Note that the latter is equivalent to $F^2_0:G=0.$
Suppose ${\rm ran\,} (G+F_0)^{1/2}\cap {\rm \overline{ran}\,} (G+F_0)^{-1/2}G^{1/2}\ne\{0\}.$ Then there is a sequence $\{x_n\}\subset \cH$ and a vector $f\in\cH$ such that \[ (G+F_0)^{1/2}f=\lim\limits_{n\to\infty}(G+F_0)^{-1/2}G^{1/2}x_n \] Hence $\lim\limits_{n\to\infty}G^{1/2}x_n=(G+F_0)f.$ Let $y_n=G^{1/2}(x_n-G^{1/2}f),$ $n\in\dN$. Then $\{y_n\}\subset{\rm ran\,} G^{1/2}$, $\lim\limits_{n\to\infty}y_n=F_0f,$ and \[ \lim\limits_{n\to\infty}(G+F_0)^{-1/2}y_n=(G+F_0)^{1/2}f-(G+F_0)^{-1/2}Gf. \] Conversely, if there is converging sequence $\{y_n=G^{1/2}z_n\}$ such that $$\lim_{n\to\infty}y_n= F_0f$$ and the sequence $\{(G+F_0)^{-1/2}y_n\}$ converges as well, then from $$\lim_{n\to\infty}G^{1/2}(z_n+G^{1/2}f)=(G+ F_0)f$$ and because the operator $(G+F_0)^{-1/2}$ is closed, we get \begin{multline*} (G+F_0)^{1/2}f=(G+F_0)^{-1/2}(G+F_0)f\\ =\lim\limits_{n\to\infty}(G+F_0)^{-1/2}G^{1/2}(z_n+G^{1/2}f). \end{multline*} This means that ${\rm ran\,} (G+F_0)^{1/2}\cap {\rm \overline{ran}\,} (G+F_0)^{-1/2}G^{1/2}\ne\{0\}.$ Thus, conditions (i) and (ii) are equivalent.
Using \eqref{ukfdyj}, \eqref{contrw}, \eqref{opwa}, \eqref{equival1}, \eqref{prosn1}, and Theorem \ref{form1}, the equivalences (i)$\iff$(iv)$\iff$(v) can be proved similarly. \end{proof}
\section{The mapping $\tau_G$} Recall that the mapping $\mu_G$ is defined by \eqref{mapmu} and by $\mu^{[n]}_G$ we denote the $n$th iteration of the mapping $\mu_G$. Note that \[ \mu^{[n+1]}_G(X)=\mu^{[n]}_G(X)-\mu^{[n]}_G(X):G,\; n\ge 0. \] Hence \begin{equation} \label{rec} \sum\limits_{k=0}^n\left(\mu^{[k]}_G(X):G\right)=X-\mu^{[n+1]}_G(X). \end{equation} Clearly \[ X\ge \mu_G(X)\ge \mu^{[2]}_G(X)\ge\cdots\ge \mu^{[n]}_G(X)\ge\cdots. \] Therefore, the mapping \[ \bB^+(\cH)\ni X\mapsto\tau_G(X)\stackrel{def}{=}s-\lim\limits_{n\to\infty}\mu^{[n]}_G(X)\in\bB^+(\cH) \] is well defined. Besides, using \eqref{rec} and the monotonicity of parallel sum, we see that \begin{enumerate} \item $ \mu^{[n]}_G(X):G\ge \mu^{[n+1]}_G(X):G$ for all $n\in\dN_0,$ \item the series $\sum\limits_{n=0}^\infty \left(\mu^{[n]}_G(X):G\right)$ is converging in the strong sense and \begin{equation} \label{ryad} \sum\limits_{n=0}^\infty \left(\mu^{[n]}_G(X):G\right)=X-\tau_G(X). \end{equation} \end{enumerate} Hence the mapping $\tau_G$ can be defined as follows: \[ \tau_G(X)\stackrel{def}{=}X-\sum\limits_{n=0}^\infty \left(\mu^{[n]}_G(X):G\right). \]
Most of the following properties of the mapping $\tau_G$ are already established in the statements above. \begin{theorem} \label{propert} The mapping $\tau_G$ possesses the following properties: \begin{enumerate} \item $\tau_G(\mu_G(X))=\tau_G(X)$ for all $X\in\bB^+(\cH),$ therefore,\\ $\tau_G(\mu_G^{[n]}(X))=\tau_G(X)$ for all natural $n$;
\item $\tau_G(X):G=0$ for all $X\in\bB^+(\cH);$ \item $\tau_G(X)\le X$ for all $X\in\bB^+(\cH)$ and $\tau_G(X)=X$ $\iff$ $X:G=0$ $\iff$ ${\rm ran\,} X^{1/2}\cap{\rm ran\,} G^{1/2}=\{0\}$; \item $\tau_G(X)=\tau_G(\tau_G(X))$ for an arbitrary $X\in\bB^+(\cH)$; \item define a subspace \begin{equation} \label{ghjcn1} {\mathfrak M}:=\cH\ominus{\rm{clos}}\left\{f\in\cH,\;(G+X)^{1/2}f\in{\rm ran\,} G^{1/2}\right\}, \end{equation} then \begin{equation} \label{formula11} \tau_G(X)=(G+X)^{1/2}P_{\mathfrak M}(G+X)^{1/2}; \end{equation} \item define a contraction $\cT=(G+X)^{-1/2}X^{1/2}$ and subspace
\[ \sL\stackrel{def}{=}{\xker\,}(I-\cT^*\cT), \] then \begin{equation} \label{ghjcn2} \sL=\cH\ominus{\left\{{\rm clos}\left\{g\in\cH,\;X^{1/2}g\in{\rm ran\,} G^{1/2}\right\}\right\}} \end{equation} and \begin{equation} \label{formula111} \tau_G(X)=X^{1/2}P_\sL X^{1/2}; \end{equation} in particular, if $X$ is positive definite, then $\sL=X^{1/2}{\xker\,} G$; \item $XG=GX$ $\Longrightarrow$ $\tau_G(X)=X^{1/2}P_{\sN}X^{1/2}$, where $\sN$ takes the form $\sN={\xker\,} G\cap{\rm \overline{ran}\,} X$; \item $\tau_G(G)=0$; \item ${\rm ran\,} X^{1/2}\subseteq{\rm ran\,} G^{1/2}$ $\Longrightarrow$ $\tau_G(X)=0;$ in particular, $$\tau_G\left(X:G\right)=0$$
for every $X\in\bB^+(\cH)$; \item ${\rm ran\,} X^{1/2}={\rm ran\,} G^{\alpha}$, $\alpha<1/2$ $\Longrightarrow$ $\tau_G(X)=0;$ \item $\tau_G(\lambda G+X)=\tau_{\eta G}(X)=\tau_G(X)$ for all $\lambda>0$ and $\eta>0$; \item $\tau_G(\xi X)=\xi \tau_G(X),$ $\xi>0;$ \item if ${\rm ran\,} G^{1/2}_1={\rm ran\,} G^{1/2}_2$, then \[ \tau_{G_1}(X)=\tau_{G_2}(X)= \tau_{G_1}(G_2+X)=\tau_{G_2}(G_1+X) \] for all $X\in\bB^+(\cH)$; \item if ${\rm ran\,} G^{1/2}_1\subseteq{\rm ran\,} G^{1/2}_2$, then $\tau_{G_1}(X)\ge \tau_{G_2} (X)$ for all $X\in\bB^+(\cH)$; \item $\tau_G(X)\in \bB^+_0(\cH)$ $\Longrightarrow$ $X\in\bB^+_0(\cH)$ and $X^2:G=0$; \item the following conditions are equivalent: \begin{enumerate} \def\labelenumi{\rm (\roman{enumi})} \item $\tau_G(X)\in\bB^+_0(\cH)$, \item $ X\in \bB^+_0(\cH)$ and ${\rm ran\,} (G+X)^{1/2}\cap{\rm clos\,}\{(G+X)^{-1/2}{\rm ran\,} G^{1/2}\}=\{0\}$, \item $ X\in \bB^+_0(\cH)$ and for each converging sequence $\{y_n\}\subset{\rm ran\,} G^{1/2}$ such that $$\lim_{n\to\infty}y_n\in{\rm ran\,} X$$ it follows that the sequence $\{(G+X)^{-1/2}y_n\}$ diverges, \item $ X\in \bB^+_0(\cH)$ and ${\rm ran\,} X^{1/2}\cap{\rm clos\,}\left\{X^{-1/2}\left({\rm ran\,} X^{1/2}\cap {\rm ran\,} G^{1/2}\right)\right\}=\{0\}$, \item$ X\in \bB^+_0(\cH)$ and for each converging sequence $\{z_n\}\subset {\rm ran\,} X^{1/2}\cap{\rm ran\,} G^{1/2}$ such that $$\lim_{n\to\infty}z_n\in{\rm ran\,} X$$ it follows that the sequence $\{X^{-1/2} z_n\}$ diverges; \end{enumerate} \item if $X$ is a compact operator, then $\tau_G(X)$ is a compact operator as well; moreover, if $X$ belongs to the Schatten--von Neumann class $S_p$ \cite{GK}, then $\tau_G(X)\in S_p.$
\end{enumerate} \end{theorem} \begin{proof} Equalities in (6) follow from \eqref{contrw}, Proposition \ref{singar} and Theorem \ref{form1}; (11) follows from Corollary \ref{mnogo}. If $\xi>0$, then \begin{multline*} \tau_G(\xi X)=(G+\xi X)^{1/2}P_{{\mathfrak M}_{1/\xi}} (G+\xi X)^{1/2}\\ =\xi ((1/\xi)G+X)^{1/2}P_{{\mathfrak M}_{1/\xi}} ((1/\xi)G+X)^{1/2}\\ =\xi\tau_G(X). \end{multline*} This proves (12).
If ${\rm ran\,} G^{1/2}_1={\rm ran\,} G^{1/2}_2$, then \[ X^{1/2}g\in{\rm ran\,} G^{1/2}_1\iff X^{1/2}g\in{\rm ran\,} G^{1/2}_2. \] Now from property (6) follows the equality $\tau_{G_1}(X)=\tau_{G_2}(X)$. Using (11) we get \begin{multline*} \tau_{G_1}(G_2+X)=\tau_{G_2}(G_2+X)=\tau_{G_2}(X)\\ =\tau_{G_1}(X)=\tau_{G_1}(G_1+X)=\tau_{G_2}(G_1+X). \end{multline*}
So, property (13) is proved. If ${\rm ran\,} G^{1/2}_1\subseteq{\rm ran\,} G^{1/2}_2$, then
$$X^{1/2}g\in{\rm ran\,} G^{1/2}_1\Longrightarrow X^{1/2}g\in{\rm ran\,} G^{1/2}_2.$$ Hence \begin{multline*} \sL_1=\cH\ominus{\left\{{\rm clos}\left\{g\in\cH:X^{1/2}g\in{\rm ran\,} G^{1/2}_1\right\}\right\}}\\ \supseteq \sL_2=\cH\ominus{\left\{{\rm clos}\left\{g\in\cH:X^{1/2}g\in{\rm ran\,} G^{1/2}_2\right\}\right\}}, \end{multline*} and \[ \tau_{G_1}(X)=X^{1/2}P_{\sL_1}X^{1/2}\ge X^{1/2}P_{\sL_2}X^{1/2}=\tau_{G_2}(X). \] If $X$ is a compact operator, then from $\tau_G(X)=X^{1/2}P_\sL X^{1/2}$ it follows that $\tau_G(X)$ is a compact operator. If $X\in S_p$, where $p\ge 1$ and $S_p$ is the Schatten--von Neumann ideal, then from $X^{1/2},P_\sL X^{1/2}\in S_{2p}$ it follows that $X^{1/2}P_\sL X^{1/2}\in S_p$ \cite[page 92]{GK}. \end{proof} \begin{remark} Let $G\in\bB^+(\cH)$ be given. All $\widetilde G\in\bB^+(\cH)$ such that ${\rm ran\,} \widetilde G^{1/2}={\rm ran\,} G^{1/2}$ are of the form \[ \widetilde G=G^{1/2}Q G^{1/2}, \] where $Q,Q^{-1}\in\bB^+({\rm \overline{ran}\,} G)$. \end{remark} \begin{remark} \label{extr} Let $G,\widetilde G\in\bB^+(\cH)$ and ${\rm ran\,} G^{1/2}={\rm ran\,}\widetilde G^{1/2}$.
The equalities
$$\tau_G(\widetilde G+X)=(\widetilde G+X)^{1/2}\widetilde P(\widetilde G+X)^{1/2}=\tau_G(X)=X^{1/2}P_\sL X^{1/2},$$
where $\widetilde P$ is the orthogonal projection onto the subspace
\[
\cH\ominus{\rm{clos}}\left\{f\in\cH:(\widetilde G+X)^{1/2}f\in{\rm ran\,} G^{1/2}\right\},
\]
see \eqref{ghjcn1} and \eqref{ghjcn2},
show that $\tau_G (X)$ is an extreme point of the operator interval $[0,X]$ and operator intervals $[0, \widetilde G+X]$ cf. \cite{Ando_1996}. \end{remark}
\begin{remark} \label{osta} Let $G,X\in\bB_0^+(\cH)$, ${\rm ran\,} G^{1/2}\cap {\rm ran\,} X^{1/2}=\{0\}$. From properties (13) and (16) in Theorem \ref{propert} it follows that if the equality \[
{\rm ran\,} (G+X)^{1/2}\cap{\rm \overline{ran}\,}((G+X)^{-1/2}G^{1/2})=\{0\} \] holds true, then it remains valid if $G$ is replaced by $\widetilde G$ such that ${\rm ran\,} \widetilde G^{1/2}={\rm ran\,} G^{1/2}.$ \end{remark}
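Formula \eqref{formula111} can also be tested numerically against the iterative definition of $\tau_G$. The following sketch (Python with NumPy; a random non-commuting pair of a positive definite $X$ and a singular $G$, chosen only for illustration) compares the strong limit of the iterates with $X^{1/2}P_\sL X^{1/2}$.
\begin{verbatim}
# tau_G(X) via the iterates of mu_G versus X^{1/2} P_L X^{1/2},
# where L is the orthogonal complement of X^{-1/2} ran G^{1/2}.
import numpy as np

rng = np.random.default_rng(2)
n, r = 5, 3

B = rng.normal(size=(n, r)); G = B @ B.T                # PSD of rank r, ker G nontrivial
C = rng.normal(size=(n, n)); X = C @ C.T + np.eye(n)    # positive definite

def parallel_sum(P, Q):
    return P - P @ np.linalg.pinv(P + Q) @ P

T = X.copy()
for _ in range(2000):                              # tau_G(X) as the limit of mu_G^{[n]}(X)
    T = T - parallel_sum(T, G)
    T = (T + T.T) / 2                              # keep the iterate symmetric

w, V = np.linalg.eigh(X)
Xh = V @ np.diag(np.sqrt(w)) @ V.T                 # X^{1/2}
Qo, _ = np.linalg.qr(np.linalg.solve(Xh, B))       # ONB of X^{-1/2} ran G^{1/2}
P_L = np.eye(n) - Qo @ Qo.T                        # projection onto L

print(np.allclose(T, Xh @ P_L @ Xh, atol=1e-6))    # True (up to the iteration tolerance)
\end{verbatim}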
\begin{proposition} \label{polez} 1) Assume $G\in \bB^+(\cH)$. a) If $X:G\ne 0,$ then $\left(\mu^{[n]}_G(X)\right):G\ne 0$ for all $n$.
b) If $X\in\bB^+_0(\cH)$, then $\mu^{[n]}_G(X)\in\bB^+_0(\cH)$ for all $n$. Moreover, if ${\rm ran\,} X^{1/2}\supseteq{\rm ran\,} G^{1/2},$ then ${\rm ran\,}\left(\mu^{[n]}_G(X)\right)^{1/2}={\rm ran\,} X^{1/2}$ for all $n$.\\ \noindent 2) If $G\in\bB^+_0(\cH)$ and $\tau_G(X)\in\bB^+_0(\cH)$, then $\mu^{[n]}_G(X)\in\bB^+_0(\cH)$ and \begin{equation} \label{dobav2} {\rm ran\,} \left(\mu^{[n]}_G(X)\right)^{1/2}\cap{\rm clos\,}\left\{\left(\mu^{[n]}_G(X)\right)^{-1/2}{\rm ran\,} G^{1/2}\right\}=\{0\}, \end{equation} in particular, $\left(\mu^{[n]}_G(X)\right)^2:G=0$ ($\iff {\rm ran\,}\mu^{[n]}_G(X)\cap{\rm ran\,} G^{1/2}=\{0\}$) for all $n$.
Clearly \begin{multline*} {\rm ran\,} X^{1/2}\supseteq{\rm ran\,} G^{1/2}\iff {\rm ran\,} M^{1/2}\supseteq{\rm ran\,} (I-M)^{1/2}\\ \iff{\rm ran\,} M=\cH_0. \end{multline*} Hence \begin{multline*} {\rm ran\,}\left(\mu_G(X)\right)^{1/2}=(G+X)^{1/2}{\rm ran\,} M={\rm ran\,} (G+X)^{1/2}\\ ={\rm ran\,} X^{1/2}\supseteq{\rm ran\,} G^{1/2}. \end{multline*}
If ${\xker\,} X=\{0\}$, then ${\xker\,}(G+X)=\{0\}$ and ${\rm ran\,}(G+X)^{1/2}\cap{\xker\,} M =\{0\}$. It follows that ${\rm ran\,}(G+X)^{1/2}\cap{\xker\,} M^2 =\{0\}$. Hence ${\xker\,}\mu_G(X)=\{0\}$.
Since $\tau_G(\mu_G(X))=\tau_G(X)$ and $\tau_G(X)\in\bB^+_0(\cH)$ implies ${\xker\,} X=\{0\}$ and $X^2:G=0$, see Theorem \ref{interzero}, we get $$\tau_G(X)\in\bB^+_0(\cH)\Longrightarrow {\xker\,}\mu_G(X)=\{0\},\; \left(\mu_G(X)\right)^2:G=0.$$ \end{proof} \begin{remark} \label{dobav} Let $G\in\bB^+_0(\cH)$. Assume that ${\rm ran\,} X^{1/2}\supset {\rm ran\,} G^{1/2}$ and $\tau_G(X)\in\bB^+_0(\cH)$. Denoting ${\mathfrak M}} \def\sN{{\mathfrak N}} \def\sO{{\mathfrak O}_n={\rm clos\,}\left\{\left(\mu^{[n]}_G(X)\right)^{-1/2}{\rm ran\,} G^{1/2}\right\}$, one obtains from \eqref{dobav2} that \[ {\mathfrak M}} \def\sN{{\mathfrak N}} \def\sO{{\mathfrak O}_n\cap{\rm ran\,} X^{1/2}={\mathfrak M}} \def\sN{{\mathfrak N}} \def\sO{{\mathfrak O}_n^\perp\cap{\rm ran\,} X^{1/2}=\{0\}\;\forall n\in\dN. \] These relations yield \[ {\mathfrak M}} \def\sN{{\mathfrak N}} \def\sO{{\mathfrak O}_n\cap{\rm ran\,} G^{1/2}={\mathfrak M}} \def\sN{{\mathfrak N}} \def\sO{{\mathfrak O}_n^\perp\cap{\rm ran\,} G^{1/2}=\{0\}\;\forall n\in\dN. \] If $J_n=P_{{\mathfrak M}} \def\sN{{\mathfrak N}} \def\sO{{\mathfrak O}_n}-P_{{\mathfrak M}} \def\sN{{\mathfrak N}} \def\sO{{\mathfrak O}^\perp_n}=2P_{{\mathfrak M}} \def\sN{{\mathfrak N}} \def\sO{{\mathfrak O}_n}-I,$ $n\in\dN$, then $J_n=J_n^*=J^{-1}_n$ ($J_n$ is a fundamental symmetry in $\cH$ for each natural number $n$), and \[ {\rm ran\,} (J_nG^{1/2}J_n)\cap{\rm ran\,} G^{1/2}=\{0\}\; \forall n\in\dN, \] cf. \cite{Arl_ZAg_IEOT_2015}, \cite{schmud}. \end{remark} Let $G\in\bB^+(\cH)$. Set \[ \bB^+_G(\cH)=\left\{Y\in\bB^+(\cH): {\rm ran\,} Y^{1/2}\cap{\rm ran\,} G^{1/2}=\{0\}\right\}. \] Observe that $Y\in\bB^+_G(\cH)\Longrightarrow Y^{1/2}QY^{1/2}\in \bB^+_G(\cH)$ for an arbitrary $Q\in\bB^+(\cH)$. The cone $\bB^+_G(\cH)$ is the set of all fixed points of the mappings $\mu_G$ and $\tau_G$. In addition \[ \bB^+_G(\cH)=\tau_G(\bB^+(\cH)). \] Actually, property (13) in Theorem \ref{propert} shows that
if $Y\in\bB^+_G(\cH)$, then for each $\widetilde G\in\bB^+(\cH)$ such that ${\rm ran\,} \widetilde G^{1/2}={\rm ran\,} G^{1/2}$, the operator $Y+\widetilde G$ is contained in the pre-image $\tau_G^{-1}\{Y\}$, i.e., the equality \[
\tau_G(\widetilde G+Y)=Y=(\widetilde G+Y)^{1/2}P_{{\mathfrak M}_{\widetilde G }}(\widetilde G+Y)^{1/2} \] holds, where $${\mathfrak M}_{\widetilde G}=\cH\ominus\{g\in\cH:(\widetilde G+Y)^{1/2}g\in{\rm ran\,} G^{1/2}\}.$$ In particular,
\[
\tau_G(\widetilde G+\tau_G(X))=\tau_G(X),\;\forall X\in\bB^+(\cH).
\]
Thus, the operator $\widetilde G+Y$ is contained in the \textit{basin of attraction} of the fixed point $Y$ of the mapping $\mu_G$ for an arbitrary $\widetilde G\in\bB^+(\cH)$ such that ${\rm ran\,} \widetilde G^{1/2}={\rm ran\,} G^{1/2}$. In addition since ${\rm ran\,}(\widetilde G+Y)^{1/2}={\rm ran\,} G^{1/2}\dot+{\rm ran\,} Y^{1/2}$, the statement 1 b) of Proposition \ref{polez} yields that
$${\rm ran\,} \left(\mu^{[n]}_G(\widetilde G+Y)\right)^{1/2}=const\supset{\rm ran\,} G^{1/2}\;\forall n\in\dN.$$
\section{Lebesgue type decomposition of nonnegative operators and the mapping $\tau_G$} Let $A\in\bB^+(\cH)$. T.~Ando in \cite{Ando_1976} introduced and studied the mapping \[ \bB^+(\cH)\ni B\mapsto [A]B\stackrel{def}{=}s-\lim\limits_{n\to\infty}(nA:B)\in\bB^+(\cH). \] The decomposition \[ B=[A]B+(B-[A]B) \] provides the representation of $B$ as the sum of $A$-\textit{absolutely continuous} ($[A]B$) and $A$-\textit{singular} ($B-[A]B$) parts of $B$ \cite{Ando_1976}. An operator $C\in\bB^+(\cH)$ is called $A$-absolutely continuous \cite{Ando_1976} if there exists a nondecreasing sequence $\{C_n\}\subset\bB^+(\cH)$ such that $C=s-\lim_{n\to\infty}C_n$ and $C_n\le \alpha_n A$ for some $\alpha_n$, $n\in\dN$ ($\iff {\rm ran\,} C^{1/2}_n\subseteq{\rm ran\,} A^{1/2}$ $\forall n\in\dN$). An operator $C\in\bB^+(\cH)$ is called $A$-singular if the intersection of the operator intervals $[0,C]$ and $[0,A]$ is the trivial operator ($[0,C]\cap [0,A]=0$). Moreover, the operator $[A]B$ is maximal among all $A$-absolutely continuous nonnegative operators $C$ with $C\le B$. The decomposition of $B$ into $A$-absolutely continuous and $A$-singular parts is generally non-unique. Ando in \cite{Ando_1976} proved that uniqueness holds if and only if ${\rm ran\,}([A]B)^{1/2}\subseteq{\rm ran\,} A^{1/2}$. Set \begin{equation} \label{omtuf} \Omega_{A}^B\stackrel{def}{=}{\rm{clos}}\left\{f\in\cH:B^{1/2}f\in{\rm ran\,} A^{1/2}\right\}. \end{equation} It is established in \cite{Ando_1976} that the following conditions are equivalent: \begin{enumerate} \def\labelenumi{\rm (\roman{enumi})} \item $B$ is $A$-absolutely continuous, \item $[A]B=B,$ \item $\Omega_A^B=\cH$. \end{enumerate} In \cite{Pek_1978} (see also \cite{Kosaki_1984}) the formula \begin{equation} \label{formu} [A]B=B^{1/2}P_{\Omega^B_{A}}B^{1/2} \end{equation} has been established. Hence the operator $[A]B$ possesses the following property, see \cite{Pek_1978}: \begin{multline*} \max\left\{Y\in\bB^+(\cH):0\le Y\le B,\;{\rm{clos}}\{Y^{-1/2}({\rm ran\,} A^{1/2})\}=\cH\right\}\\ =[A]B. \end{multline*} The notation $B_{{\rm ran\,} A^{1/2}}$ and the name \textit{convolution on the operator domain} was used for $[A]B$ in \cite{Pek_1978}. Notice that from \eqref{formu} one obtains the equalities \begin{multline*} {\rm ran\,}\left([A]B\right)^{1/2}=B^{1/2}\Omega_A^B,\\ B-[A]B=B^{1/2}(I-P_{\Omega^B_{A}})B^{1/2},\\ [A]B:(B-[A]B)=0,\; A:(B-[A]B)=0. \end{multline*} In addition due to \eqref{ukfdyj}, \eqref{omtuf}, and \eqref{formu}: \begin{enumerate} \item $[A](\lambda B)=\lambda\left([A]B\right),$ $\lambda>0,$ \item ${\rm ran\,} \widetilde A^{1/2}={\rm ran\,} A^{1/2}$ $\Longrightarrow$ $[\widetilde A]B=[A]B $ for all $B\in\bB^+(\cH),$ \item $[A:B]B=[A]B$. \end{enumerate}
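The relation between $[G]X$ and $\tau_G$ established below (Theorem \ref{singp}) can be illustrated numerically. The following sketch (Python with NumPy) uses the diagonal pair $G=\mathrm{diag}(1,1,0)$, $X=\mathrm{diag}(2,3,5)$, for which $[G]X=\mathrm{diag}(2,3,0)$ is the $G$-absolutely continuous part and $\tau_G(X)=\mathrm{diag}(0,0,5)$; the matrices are chosen only so that the answer is explicit.
\begin{verbatim}
# [G]X approximated by (nG):X for large n, and the identity tau_G(X) = X - [G]X.
import numpy as np

def parallel_sum(P, Q):
    return P - P @ np.linalg.pinv(P + Q) @ P

G = np.diag([1.0, 1.0, 0.0])
X = np.diag([2.0, 3.0, 5.0])

AGX = parallel_sum(X, 1e8 * G)            # (nG):X with n = 1e8 approximates [G]X
print(np.round(AGX, 6))                   # ~ diag(2, 3, 0), the G-absolutely continuous part

T = X.copy()                              # tau_G(X) via the iterates of mu_G
for _ in range(50):
    T = T - parallel_sum(T, G)

print(np.allclose(T, X - AGX, atol=1e-6)) # tau_G(X) = X - [G]X: True
\end{verbatim}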
\begin{theorem} \label{singp} \begin{enumerate} \item
Let $G\in\bB^+(\cH)$. Then for each $X\in\bB^+(\cH)$ the equality \begin{equation} \label{razn} \tau_G(X)=X-[G]X \end{equation} holds. Therefore, $\tau_G(X)=0$ if and only if $X$ is $G$-absolutely continuous. In addition $\tau_G([G]X)=0$ for all $X\in\bB^+(\cH)$.
If ${\rm ran\,} \widetilde G^{1/2}={\rm ran\,} G^{1/2}$ for some $\widetilde G \in\bB^+(\cH)$, then \begin{equation} \label{cbyuek} \tau_G(X)=X-[\widetilde G] X=\widetilde G+X-[G](\widetilde G+X). \end{equation} Hence \begin{equation} \label{cnhfyyj} \widetilde G=[G](\widetilde G +X)-[G](X), \end{equation} and \begin{equation} \label{tot} X-\tau_G(X)=[G](\widetilde G+X)-\widetilde G. \end{equation} In addition \begin{equation} \label{izm} \sum\limits_{n=0}^\infty \left(\mu^{[n]}_G(X):G\right)=[G]X,\; \forall X\in\bB^+(\cH). \end{equation}
\item The following inequality is valid for an arbitrary $X_1, X_2\in\bB^+(\cH)$: \begin{equation} \label{ytjblf} \tau_G(X_1+X_2)\le \tau_G(X_1)+\tau_G(X_2). \end{equation}
\item the following statements are equivalent: \begin{enumerate} \def\labelenumi{\rm (\roman{enumi})} \item $\tau_G(X)\in\bB^+_0(\cH)$, \item $X\in\bB^+_0(\cH)\quad\mbox{and}\quad\left([G]X\right):X^2=0,$ \item $G+X\in\bB^+_0(\cH)$ and $[G](G+X):( G+X)^{2}=0.$ \end{enumerate} \end{enumerate} \end{theorem} \begin{proof} (1) From \eqref{omtuf}, \eqref{formu}, and Theorem \ref{propert} we get the equalities \begin{multline*} \tau_G(X)=X^{1/2}(I-P_{\Omega^X_{G}})X^{1/2}=X-[G]X,\\ \tau_G(\widetilde G+X)=(\widetilde G+X)^{1/2}(I-P_{\Omega^{\widetilde G+X}_{G}})(\widetilde G+X)^{1/2}=\widetilde G+X-[G](\widetilde G+X). \end{multline*} Then \eqref{cbyuek}, \eqref{cnhfyyj}, and \eqref{tot} follow from the equalities $\tau_G(X)= \tau_{\widetilde G}(X)=\tau_G(\widetilde G+X)$.
Since $[G]([G]X)=[G]X$, we get $\tau_G\left([G]X\right)=0.$
Note that using the equality $[G](X+\alpha G)=[G]X+\alpha G$ \cite[Lemma 1]{Nishio}
and the equality
\[
\tau_G(\alpha G+X)=\alpha G+X-[G](\alpha G+X),
\]
we get $\tau_G(\alpha G+X)=X-[G]X=\tau_G(X)$.
Equation \eqref{izm} follows from \eqref{ryad} and \eqref{razn}.
(2) Inequality \eqref{ytjblf} follows from the inequality, see \cite{E-L}, $$[G](X_1+X_2)\ge [G]X_1+[G]X_2$$
and equality \eqref{razn}.
(3) From \eqref{omtuf} and statements (16a) and (16d) of Theorem \ref{propert} we obtain \begin{multline*} \tau_G(X)\in\bB^+_0(\cH)\iff X\in\bB^+_0(\cH)\quad\mbox{and}\quad\Omega_G^X\cap{\rm ran\,} X^{1/2}=\{0\}\\ \iff X\in\bB^+_0(\cH)\quad\mbox{and}\quad X^{1/2}\Omega_G^X\cap{\rm ran\,} X=\{0\}\\ \iff X\in\bB^+_0(\cH)\quad\mbox{and}\quad{\rm ran\,}\left([G]X\right)^{1/2}\cap{\rm ran\,} X=\{0\}\\ \iff X\in\bB^+_0(\cH)\quad\mbox{and}\quad\left([G]X\right):X^2=0. \end{multline*} Further we use the equality $\tau_G(X)=\tau_G(G+X)$, see statement (13) of Theorem \ref{propert}. \end{proof} \section{The mappings $\{\mu_G^{[n]}\},$ $\tau_G,$ and intersections of domains of unbounded self-adjoint operators} \label{applll} Let $A$ be an unbounded self-adjoint operator in an infinite dimensional Hilbert space $\cH$. J.~von Neumann \cite[Satz 18]{Neumann} established that if $\cH$ is \textit{separable}, then there is a self-adjoint operator unitarily equivalent to $A$ such that its domain has trivial intersection with the domain of $A$. Another proof of this result was proposed by J.~Dixmier in \cite{Dix}, see also \cite[Theorem 3.6]{FW}. In the case of a \textit{nonseparable} Hilbert space, an example of an unbounded self-adjoint operator $A$ such that for any unitary $U$ one has ${\rm dom\,} (U^*AU)\cap{\rm dom\,} A\ne \{0\}$ is constructed in \cite{ES}. So, in general, the von Neumann theorem does not hold. It is established in \cite[Theorem 4.6]{ES} that the following are equivalent for a dense operator range $\cR$ (the image of a bounded nonnegative self-adjoint operator in $\cH$ \cite{FW}) in an infinite-dimensional Hilbert space: \begin{enumerate} \def\labelenumi{\rm (\roman{enumi})} \item there is a unitary operator $U$ such that $U\cR\cap\cR=\{0\}$; \item for every subspace (closed linear manifold) $\cK\subset \cR$ one has ${\rm dim\,} \cK\le{\rm dim\,} \cK^\perp$. \end{enumerate}
In the theorem below we suggest several other statements equivalent to von Neumann's theorem. \begin{theorem} \label{ytcgjl} Let $\cH$ be an infinite-dimensional complex Hilbert space and let $A$ be an unbounded self-adjoint operator in $\cH$. Then the following assertions are equivalent: \begin{enumerate} \item there exists a unitary operator $U$ in $\cH$ such that $${\rm dom\,}(U^*AU)\cap{\rm dom\,} A=\{0\};$$ \item there exists an unbounded self-adjoint operator $S$ in $\cH$ such that $${\rm dom\,} S\cap{\rm dom\,} A=\{0\};$$ \item there exists a fundamental symmetry $J$ in $\cH$ ($J=J^*=J^{-1}$) such that $${\rm dom\,}(JAJ)\cap{\rm dom\,} A=\{0\};$$ \item there exists a subspace ${\mathfrak M}$ in $\cH$ such that $${\mathfrak M}\cap{\rm dom\,} A={\mathfrak M}^\perp\cap{\rm dom\,} A=\{0\};$$ \item there exists a positive definite self-adjoint operator $B$ in $\cH$ such that $${\rm dom\,} B\supset{\rm dom\,} A\quad\mbox{and}\quad{\rm clos\,}\left\{B{\rm dom\,} A\right\}\cap {\rm dom\,} B=\{0\},$$ \item there exists a closed densely defined restriction $A_0$ of $A$ such that ${\rm dom\,} (AA_0)=\{0\}$ (this yields, in particular, ${\rm dom\,} A^2_0=\{0\}$).
\end{enumerate} \end{theorem} \begin{proof}
Let $|A|=\sqrt{A^2}$. Set $G=\left(|A|+I\right)^{-2}$. Then $G\in\bB^+_0(\cH)$ and ${\rm ran\,} G^{1/2}={\rm dom\,} A.$
According to \cite[Proposition 3.1.]{Arl_ZAg_IEOT_2015} the following assertions for an operator range $\cR$ are equivalent \begin{enumerate} \def\labelenumi{\rm (\roman{enumi})} \item There exists in $\cH$ an orthogonal projection $P$ such that \[ {\rm ran\,} P\cap\cR=\{0\} \ \ \ {\rm{and}} \ \ \ {\rm ran\,} (I-P)\cap\cR=\{0\} \ . \] \item There exists in $\cH$ a fundamental symmetry $J$ such that \[ J\cR\cap\cR=\{0\} \ . \] \end{enumerate} Now we will prove that (2)$\Longrightarrow$(1), (3), (4), (5). The existence of a self-adjoint $S$ with the property ${\rm dom\,} S\cap{\rm dom\,} A=\{0\}$ implies the existence of
$F\in\bB^+_0(\cH)$ such that ${\rm ran\,} F^{1/2}\cap{\rm ran\,} G^{1/2}=\{0\}$ (for example, take $F=(|S|+I)^{-2}$). Then the equality $F:G=0$ yields, see Proposition \ref{root}, that \[ G=(G+F)^{1/2}P(G+F)^{1/2},\; F=(G+F)^{1/2}(I-P)(G+F)^{1/2}, \] where $P$ is an orthogonal projection in $\cH$. The equalities ${\xker\,} G={\xker\,} F=\{0\}$ imply $${\rm ran\,} P\cap{\rm ran\,} (G+F)^{1/2}={\rm ran\,} (I-P)\cap{\rm ran\,} (G+F)^{1/2}=\{0\}.$$ Since ${\rm ran\,} G^{1/2}\subset{\rm ran\,} (G+F)^{1/2}$, we get $${\rm ran\,} P\cap{\rm ran\,} G^{1/2}={\rm ran\,} (I-P)\cap{\rm ran\,} G^{1/2}=\{0\}.$$ Let ${\mathfrak M}={\rm ran\,} P$; then (4) holds. Put $J=P-(I-P)=2P-I$. The operator $J$ is a fundamental symmetry and $J{\rm ran\,} G^{1/2}\cap{\rm ran\,} G^{1/2}=\{0\}$. This gives (3).
Since ${\xker\,} F=\{0\}$ and $F=\tau_G(F)=\tau_G(G+F)$, using Theorem \ref{propert}, equalities \eqref{ghjcn1}, \eqref{formula11}, and Theorem \ref{interzero} we obtain \[ {\rm ran\,} (G+F)^{1/2}\cap {\rm \overline{ran}\,} (G+F)^{-1/2}G^{1/2}=\{0\}. \] Denoting $B=(G+F)^{-1/2}$, we arrive at (5).
Let us prove (5)$\Longrightarrow$(2). Set $X=B^{-2}$. Then ${\rm ran\,} X^{1/2}\supset{\rm ran\,} G^{1/2}$ and \[ X\in \bB^+_0(\cH),\; {\rm ran\,} X^{1/2}\cap{\rm clos\,}\left\{X^{-1/2}{\rm ran\,} G^{1/2}\right\}=\{0\}. \] The equivalence of conditions (16a) and (16d) of Theorem \ref{propert} implies ${\xker\,}\tau_G(X)=\{0\}.$ Since the operator $Y=\tau_G(X)$ possesses the property ${\rm ran\,} Y^{1/2}\cap{\rm ran\,} G^{1/2}=\{0\}$, we get for $S=Y^{-2}$ that ${\rm dom\,} S\cap{\rm dom\,} A=\{0\}$.
Now we are going to prove $(4)\iff (6)$. Suppose (6) is valid, i.e., $A_0$ is a closed densely defined restriction of $A$ such that ${\rm dom\,} (AA_0)=\{0\}$. Let
$$\cU=(A-iI)(A+iI)^{-1}$$
be the Cayley transform of $A$. $\cU$ is a unitary operator and
\[
A=i(I+\cU)(I-\cU)^{-1},\; {\rm dom\,} A={\rm ran\,} (I-\cU),{\rm ran\,} A={\rm ran\,} (I+\cU).
\]
Let $\cU_0=(A_0-iI)(A_0+iI)^{-1}$ be the Cayley transform of $A_0$. Set ${\mathfrak M}} \def\sN{{\mathfrak N}} \def\sO{{\mathfrak O}\stackrel{def}{=}{\rm ran\,} (A_0+iI)$. Then $\cU_0=\cU{\upharpoonright\,}{\mathfrak M}} \def\sN{{\mathfrak N}} \def\sO{{\mathfrak O}$, \begin{multline*} {\rm dom\,} A_0={\rm ran\,} (I-\cU_0)=(I-\cU){\mathfrak M}} \def\sN{{\mathfrak N}} \def\sO{{\mathfrak O},\\
{\rm ran\,} A_0={\rm ran\,} (I+\cU_0)=(I+\cU){\mathfrak M}} \def\sN{{\mathfrak N}} \def\sO{{\mathfrak O}. \end{multline*} Because ${\rm dom\,} A_0$ is dense in $\cH$, we get ${\mathfrak M}} \def\sN{{\mathfrak N}} \def\sO{{\mathfrak O}^\perp\cap{\rm dom\,} A=\{0\}$. The equality ${\rm dom\,} (AA_0)=\{0\}$ is equivalent to \[ \left\{\begin{array}{l}{\rm ran\,} A_0\cap{\rm dom\,} A=\{0\},\\ {\xker\,} A_0=\{0\}\end{array} \right.. \] The latter two equalities are equivalent to ${\mathfrak M}} \def\sN{{\mathfrak N}} \def\sO{{\mathfrak O}\cap{\rm dom\,} A=\{0\}$. Thus (4) holds. If (4) holds, then define the symmetric restriction $A_0$ as follows \[
{\rm dom\,} A_0=(I-\cU){\mathfrak M},\; A_0=A{\upharpoonright\,}{\rm dom\,} A_0
\]
Then ${\rm dom\,}(AA_0)=\{0\}$.
The proof is complete.
\end{proof} Let us make a few remarks. \begin{remark} \label{ljd1} If (5) is true, then \begin{enumerate} \item a simpler proof of the implication (5)$\Rightarrow$(2) is the observation that (5) implies ${\rm dom\,} B^2\cap {\rm dom\,} A=\{0\}$; \item taking into account that $B^{-1}$ is bounded and ${\rm dom\,} A$ is dense in $\cH$, we get
\[
\left(\cH\ominus{\rm clos\,}\left\{B{\rm dom\,} A\right\}\right)\cap {\rm dom\,} B=\{0\},
\] if we set ${\mathfrak M}\stackrel{def}{=}{\rm clos\,}\left\{B{\rm dom\,} A\right\}$, we see that the inclusion ${\rm dom\,} A\subset{\rm dom\,} B$ implies (4), i.e., this is one more way to prove (5)$\Longrightarrow$(4) and (5)$\Longrightarrow$(6); \item using the proof of Theorem \ref{ytcgjl} and equalities \eqref{ghjcn2} and \eqref{formula111}, we see that the operator $S\stackrel{def}{=}\left(B^{-1}P_{{\mathfrak M}^\perp}B^{-1}\right)^{-1}$ is well defined, self-adjoint positive definite, and ${\rm dom\,} S\cap{\rm dom\,} A=\{0\}$; \item denoting $B_0=B{\upharpoonright\,}{\rm dom\,} A$ and taking the closure of $B_0,$ we get the closed densely defined positive definite symmetric operator $\bar{B}_0$ (a closed restriction of $B$) such that \[ {\rm dom\,} (B\bar{B}_0)=\{0\}. \] \end{enumerate}
\end{remark} \begin{remark}
In the case when the infinite-dimensional Hilbert space is separable, K.~Schm\"{u}dgen in \cite[Theorem 5.1]{schmud} established the validity of assertion (4) for an arbitrary $A$. In \cite{Arl_ZAg_IEOT_2015}, using parallel addition of operators, it is shown that the validity of (2) for an arbitrary unbounded self-adjoint $A$ implies (4).
The first construction of a densely defined closed symmetric operator $T$ such that ${\rm dom\,} T^2=\{0\}$ was given by M.A.~Naimark \cite{Naimark1}, \cite{Naimark2}. In \cite{Chern} P.~Chernoff gave an example of a symmetric operator $T$, semi-bounded from below, whose square has a trivial domain. K.~Schm\"{u}dgen in \cite[Theorem 5.2]{schmud} proved that each unbounded self-adjoint operator $H$ has two closed densely defined restrictions $H_1$ and $H_2$ such that \[ {\rm dom\,} H_1\cap{\rm dom\,} H_2=\{0\}\quad\mbox{and}\quad{\rm dom\,} H^2_1={\rm dom\,} H^2_2=\{0\}. \]
In \cite{ArlKov_2013} an abstract approach to the construction of examples of nonnegative self-adjoint operators $\cL$ and their closed densely defined restrictions $\cL_0$ such that ${\rm dom\,}(\cL\cL_0)=\{0\}$ was proposed. In \cite[Theorem 3.33]{Arl_ZAg_IEOT_2015} it is established that each unbounded self-adjoint $A$ has two closed densely defined restrictions $A_1$ and $A_2$ possessing the properties \begin{multline*} {\rm dom\,} A_1\dot+{\rm dom\,} A_2={\rm dom\,} A,\; {\rm dom\,} (AA_1)={\rm dom\,} (AA_2)=\{0\}, \\ {\rm dom\,} A_1\cap{\rm dom\,} A^2={\rm dom\,} A_2\cap{\rm dom\,} A^2=\{0\}. \end{multline*}
M.~Sauter, in e-mail communication with the author, suggested another proof of the equivalence of (1) and (2) in Theorem \ref{ytcgjl}. His proof relies essentially on the methods developed in the paper \cite{ES}. \end{remark} We conclude this paper with a theorem related to assertions (2) and (5) of Theorem \ref{ytcgjl}. The proof is based on the properties of the mappings $\{\mu^{[n]}_G\}$ and $\tau_G$.
\begin{theorem} \label{bynth} Let $\cH$ be an infinite-dimensional separable Hilbert space and let $A$ be an unbounded self-adjoint operator in $\cH$. Then for each positive definite self-adjoint operator $S$ such that ${\rm dom\,} S\cap {\rm dom\,} A=\{0\}$ there exists a sequence $\{S_n\}$ of positive definite operators possessing the following properties: \begin{itemize} \item ${\rm dom\,} S_n={\rm dom\,} S\dot+{\rm dom\,} A$ $\forall n$,
\item ${\rm clos\,}\left\{S_n{\rm dom\,} A\right\}\cap {\rm dom\,} S_n=\{0\}$ $\forall n$, \item ${\rm dom\,} S^2_n\cap{\rm dom\,} A=\{0\}$ $\forall n,$ \item if $\sL_n=\cH\ominus {\rm clos\,}\left\{S_n{\rm dom\,} A\right\}$, then $S=\left(S^{-1}_nP_{\sL_n}S^{-1}_n\right)^{-1}$ $\forall n$,
\item for each $f\in {\rm dom\,} S\dot+{\rm dom\,} A$ the sequence $\{||S_nf||\}_{n=1}^\infty$ is nondecreasing,
\item ${\rm dom\,} S=\left\{f: \sup\limits_{n\ge 1}||S_n f||<\infty\right\},$ \item ${\rm s-R}-\lim\limits_{n\to\infty}S_n=S$, where ${\rm s-R}$ is the strong resolvent limit of operators \cite[Chapter 8, \S 1]{Ka}.
\end{itemize} \end{theorem} \begin{proof}
Let $G\stackrel{def}{=}(|A|+I)^{-2}$, $F\stackrel{def}{=}S^{-2}$. Then ${\rm ran\,} G^{1/2}={\rm dom\,} A,$ ${\rm ran\,} F^{1/2}={\rm dom\,} S$. According to Theorem \ref{propert} the equalities \[ F=\tau_G(F)=\tau_G(G+F) \] are valid. Set \[ F_n=\mu_G^{[n]}(G+F),\quad n=0,1,\ldots. \] Then $\{F_n\}$ is a non-increasing sequence of operators, $\tau_G(F_n)=F$, and $$s-\lim\limits_{n\to\infty}F_n=F.$$
Due to the L\"{o}wner-Heinz inequality we have that the sequence of operators $\{F^{1/2}_n\}_{n=1}^\infty$ is non-increasing. In addition
\[
s-\lim\limits_{n\to\infty}F^{1/2}_n=F^{1/2}.
\] Since ${\rm ran\,} F^{1/2}_0={\rm ran\,} G^{1/2}\dot+{\rm ran\,} F^{1/2}$, Proposition \ref{polez} yields ${\rm ran\,} F^{1/2}_n={\rm ran\,} G^{1/2}\dot+{\rm ran\,} F^{1/2}$ for all natural numbers $n$. Now define \[ S_n=F^{-1/2}_n,\; n=0,1,\ldots. \] Then for all $n$: $${\rm dom\,} S_n={\rm ran\,} F^{1/2}_n={\rm ran\,} G^{1/2}\dot+{\rm ran\,} F^{1/2}={\rm dom\,} A\dot+{\rm dom\,} S,$$ the sequences of unbounded nonnegative self-adjoint operators $\{S^2_n\}$ and $\{S_n\}$ are non-decreasing, \[ \lim\limits_{n\to\infty} S^{-1}_n=S^{-1},\;\lim\limits_{n\to\infty} S^{-2}_n=S^{-2}. \] The latter means that \[ {\rm s-R}-\lim\limits_{n\to\infty}S_n=S,\; {\rm s-R}-\lim\limits_{n\to\infty}S^2_n=S^2. \] Taking into account that $\tau_G(F_n)=F$ and using statement 2) of Proposition \ref{polez} we conclude that the equality \[ {\rm \overline{ran}\,} F^{1/2}_n \cap {\rm clos\,}\{F^{-1/2}_n{\rm ran\,} G^{1/2}\}=\{0\} \] holds for each $n\in\dN$. Hence ${\rm clos\,}\left\{S_n{\rm dom\,} A\right\}\cap {\rm dom\,} S_n=\{0\}$ and ${\rm dom\,} S^2_n\cap{\rm dom\,} A=\{0\}$ for all natural numbers $n.$ Set \[ \sL_n:=\cH\ominus {\rm clos\,}\left\{S_n{\rm dom\,} A\right\} =\cH\ominus{\rm clos\,}\{F^{-1/2}_n{\rm ran\,} G^{1/2}\}. \] Taking into account the equality (see \eqref{ghjcn2} and \eqref{formula111}) \[ F=\tau_G(F_n)=F^{1/2}_nP_{\sL_n}F^{1/2}_n, \] we get $S=\left(S^{-1}_nP_{\sL_n}S^{-1}_n\right)^{-1}$ for all $n\in\dN$.
Let $f\in{\rm dom\,} S={\rm ran\,} F^{1/2}$. Since $F_n\ge F$ for all $n\in\dN$, we have $F^{-1}_n\le F^{-1}$, i.e., $||S_n f||\le ||S f||$ for all $n$.
Suppose that $||S_n f||\le C$ for all $n$. Then there exists a subsequence of vectors $\{S_{n_k}f\}_{k=1}^\infty$ that converges weakly to some vector $\varphi$ in $\cH$, i.e., \[ \lim\limits_{k\to\infty} (S_{n_k} f, h)=(\varphi, h) \quad\mbox{for all} \quad h\in\cH. \] Further, for all $g\in \cH$, \begin{multline*} (f, g)=(F^{1/2}_{n_k}S_{n_k}f,g)=(S_{n_k}f,F^{1/2}_{n_k}g)\\ =(S_{n_k}f,F^{1/2}g)+ (S_{n_k}f,F^{1/2}_{n_k}g-F^{1/2}g)\rightarrow (\varphi,F^{1/2}g)=(F^{1/2}\varphi, g).
\end{multline*} It follows that $f=F^{1/2}\varphi$, and hence $f\in{\rm ran\,} F^{1/2}={\rm dom\,} S$.
Thus, we arrive at the equality
${\rm dom\,} S=\left\{f: \sup\limits_{n\ge 1}||S_n f||<\infty\right\}.$ The proof is complete. \end{proof}
\end{document}
\begin{document}
\begin{frontmatter} \title{Latent Space Approaches to Community Detection in Dynamic Networks}
\runtitle{Latent Space Approaches to Community Detection in Dynamic Networks}
\begin{aug} \author{\fnms{Daniel K.} \snm{Sewell}\thanksref{addr1}\ead[label=e1]{[email protected]}} \and \author{\fnms{Yuguo} \snm{Chen}\thanksref{addr2}\ead[label=e2]{[email protected]}}
\runauthor{Sewell and Chen}
\address[addr1]{Department of Biostatistics, University of Iowa, Iowa City, IA 52242, \printead{e1}} \address[addr2]{Department of Statistics, University of Illinois at Urbana-Champaign, Champaign, IL 61820, \printead{e2}}
\end{aug}
\begin{abstract} Embedding dyadic data into a latent space has long been a popular approach to modeling networks of all kinds. While clustering has been done using this approach for static networks, this paper gives two methods of community detection within dynamic network data, building upon the distance and projection models previously proposed in the literature. Our proposed approaches capture the time-varying aspect of the data, can model directed or undirected edges, inherently incorporate transitivity and account for each actor's individual propensity to form edges. We provide Bayesian estimation algorithms, and apply these methods to a ranked dynamic friendship network and world export/import data. \end{abstract}
\begin{keyword} \kwd{clustering} \kwd{longitudinal data} \kwd{Markov chain Monte Carlo} \kwd{mixture model} \kwd{P\'olya-Gamma distribution} \kwd{variational Bayes} \end{keyword}
\end{frontmatter}
\section{Introduction} Researchers are often interested in detecting communities within dyadic data. These dyadic data are represented as networks with a certain number of actors which can form amongst themselves relationships/connections called edges. Some examples of such data include social networks, collaboration networks, biological networks, food-webs, power grids, and linguistic networks. These dyadic data can have directed or undirected edges, can have zero-one or weighted edges, and can come in the form of static or dynamic (time-varying) networks. Clustering these data into communities can lead to a better understanding of the organization of the objects in the network, and, for dynamic networks, how this organization evolves over time.
\cite{xing2010state} developed a dynamic mixed membership stochastic blockmodel. This work builds on the stochastic blockmodel \citep{holland1983stochastic}, further developed into the mixed membership blockmodel \citep{airoldi2005latent}. In the work of \cite{xing2010state}, each actor has an individual membership probability (time-varying) vector and, based on this probability vector, can choose certain roles with which to interact with other actors. A different approach to be taken in this paper begins with the work of \cite{hoff2002latent} where the actors are embedded either within a latent Euclidean space, referred to as the distance model, or within a hypersphere, referred to as the projection model. \cite{handcock2007model} used their distance model and performed community detection on the latent actor positions. Further, the distance model of \cite{hoff2002latent} was extended by \cite{sewell2014latent} and \cite{durante2014nonparametric} to include dynamic network data, and \cite{sewell2014analysis,sewell2014weighted} extended their dynamic model to allow for various types of weighted edges.
Applying a latent space model has distinct advantages over other approaches, such as blockmodeling. Using a latent space approach allows the user to capture local and global structures. The output yields meaningful visualization of the data, providing rich qualitative information. Transitivity and reciprocity, two important features of many networks, are inherently incorporated in the model. In our proposed methodology, the variation in individual edge propensities, often described by their degree distributions, is accounted for. Finally, homophily can be easily incorporated into the model just as in latent space approaches for static networks. That is, exogenous actor attributes can be incorporated into the linear modeling; these covariates may also be added by extending the hierarchical model to predict cluster assignments \citep[see][]{gormley2010mixture}.
This work provides advances beyond the existing literature on latent space network models by constructing mechanisms to perform community detection on dynamic network data and providing Bayesian estimation methods. Specifically, the primary goals of our proposed methodology are to determine what communities exist in the network, which actors belong to these communities, and how these actors change communities over time. The proposed methodology accomplishes these clustering goals while maintaining a very flexible framework that can handle directed or undirected dyads and virtually any type of weighted edges, e.g., ranked dynamic network data. Information is borrowed across time to obtain more accurate clustering estimates. In addition, we present clustering models based on the two common geometries used in the latent space literature, Euclidean spaces and hyperspheres. To the authors' knowledge there is no existing latent space methodology that achieves these community detection goals for dynamic networks with either geometry, and no such methodology, even for static networks, that utilizes the hypersphere geometry.
The remainder of the paper is as follows. Section \ref{Models} gives the model and methodology. Section \ref{Estimation} gives estimation methods. Section \ref{SimulationStudy} describes a simulation study. Section \ref{DataAnalysis} reports the results from analyzing Newcomb's fraternity data \citep{newcomb1956prediction} and world trade data. Section \ref{Discussion} gives a discussion.
\section{Models} \label{Models} The data we will analyze are of the form $({\cal N},\{{\cal E}_t: t\in \{1,2,\ldots,T\}\})$, where ${\cal N}$ is the set of all actors (also called nodes or vertices by some authors), and ${\cal E}_t\subseteq \{\{i,j\}, i,j\in {\cal N}, i\neq j\}$ is the set of edges at time $t$. The edges ${\cal E}_t$ can be viewed as an adjacency matrix $Y_t$ with entries $y_{ijt}$ denoting the edge from actor $i$ to actor $j$ at time $t$. The latent space approach to modeling networks assumes that there is, for each actor at each time point, a latent position within a network space which represents unobserved actor attributes. We will assume that at each time point, each actor belongs to one of a fixed number $G$ of clusters; this cluster assignment may change over time. We will denote the latent position of actor $i$ at time $t$ as ${\bf X}_{it}$ and the cluster assignment for actor $i$ at time $t$ as $\boldsymbol{Z}_{it}$, a $G$-dimensional vector in which one element is 1 and the others are zero. We will also let ${\cal X}_t=(\boldsymbol{X}_{1t}',\ldots,\boldsymbol{X}_{nt}')'$ and ${\cal Z}_t=(\boldsymbol{Z}_{1t}',\ldots,\boldsymbol{Z}_{nt}')'$. While the dependency structure of the model may vary, we assume throughout the paper that given the latent positions ${\cal X}_t$, $Y_t$ and $Y_s$, $s\neq t$, are conditionally independent; in many cases (such as binary networks) this assumption can be further extended such that $y_{ijt}$ and $y_{i'j't}$ are conditionally independent given ${\cal X}_t$.
In community detection within a latent space approach, we use the decomposition \begin{equation}
\pi(\{Y_t,{\cal X}_t,{\cal Z}_t\}_{t=1}^T)=\pi(\{Y_t\}_{t=1}^T|\{{\cal X}_t\}_{t=1}^T)\pi(\{{\cal X}_t,{\cal Z}_t\}_{t=1}^T). \end{equation} The idea here is that the edge probabilities are determined by some underlying attributes which are captured in the latent variables. Thus if we detect a community in the network it is because there is a corresponding cluster of attributes. For example, if we see in a social network a group of close friends, this close group, or community, exists because these friends are similar in some fundamental ways, i.e., they have attributes that are clustered together.
\subsection{Distance model} \label{DistanceModel}
Within the context of the distance model, the network is embedded within a latent Euclidean space, where the probability of edge formation increases as the Euclidean distance between actors decreases. Let $D({{\cal X}}_t)$ denote the $n\times n$ distance matrix constructed such that $\big(D({{\cal X}}_t)\big)_{ij} \triangleq d_{ijt} = \|\boldsymbol{X}_{it}-\boldsymbol{X}_{jt}\|$. In general we will assume that the density of $Y_t$ can be written as a function of the distance matrix $D({\cal X}_t)$ and some set of likelihood parameters, which we will denote as $\theta_{\ell}$. For example, the original likelihood for binary networks in \cite{hoff2002latent} is \begin{equation}
\pi(y_{ijt}|{\cal X}_t,\theta_{\ell})=\frac{\exp\{y_{ijt}\eta_{ijt}\}}{1+\exp\{\eta_{ijt}\}}, \hspace{2pc} \eta_{ijt}=\alpha-d_{ijt}, \label{distLik} \end{equation} where in this context $\theta_{\ell}=\{\alpha\}$. Variants of this likelihood have been proposed, such as in \cite{sarkar2005dynamic}, \cite{krivitsky2009representing}, and \cite{sewell2014latent}. This last model was then extended to account for a wide range of weighted networks in \cite{sewell2014weighted}. Other likelihoods may be better suited for various other types of weighted edges \citep[see, e.g.,][]{sewell2014analysis}.
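For concreteness, the following minimal NumPy sketch shows the mapping from latent positions to edge probabilities under the logistic link in (\ref{distLik}); the function name and the vectorized form are ours and serve only as an illustration, not as part of the model specification.
\begin{verbatim}
import numpy as np

def edge_probs_distance(X_t, alpha):
    """P(y_ijt = 1) = expit(alpha - d_ijt) at a single time point.
    X_t: (n, p) array of latent positions; alpha: scalar intercept."""
    diff = X_t[:, None, :] - X_t[None, :, :]
    D = np.linalg.norm(diff, axis=-1)          # pairwise distances d_ijt
    P = 1.0 / (1.0 + np.exp(-(alpha - D)))     # logistic link
    np.fill_diagonal(P, 0.0)                   # no self-edges
    return P
\end{verbatim}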
\cite{handcock2007model} clustered static network data by clustering the latent positions via a normal mixture model. This cannot be directly applied to dynamic network data since the latent positions must have some sort of temporal dependency imposed. Therefore we propose applying the model-based longitudinal clustering model given by \cite{sewell2014model} to the latent positions. Our focus here is the modeling of the latent positions, which can then be used for whatever likelihood formulation is most appropriate to the data. We will now describe this model for the latent variables.
We make two assumptions on the latent positions and the cluster assignments. First, the cluster assignments are assumed to follow a Markov process, i.e., $$
\boldsymbol{Z}_{it}|\boldsymbol{Z}_{i1},\ldots,\boldsymbol{Z}_{i(t-1)} \stackrel{{\cal D}}{=}\boldsymbol{Z}_{it}|\boldsymbol{Z}_{i(t-1)}. $$ Second, given the current cluster assignment and all previous cluster assignments and latent positions, we assume the current latent positions depend only on the previous latent positions and the current cluster assignments, i.e., $$
\boldsymbol{X}_{it}|\boldsymbol{X}_{i1},\ldots,\boldsymbol{X}_{i(t-1)},\boldsymbol{Z}_{i1},\ldots,\boldsymbol{Z}_{it} \stackrel{ {\cal D}}{=}\boldsymbol{X}_{it}|\boldsymbol{X}_{i(t-1)},\boldsymbol{Z}_{it}. $$
The joint density of the latent positions and the cluster assignments is given as \begin{align}\nonumber &\pi(\{{\cal X}_t\}_{t=1}^T,\{{\cal Z}_t\}_{t=1}^T)&\\ &=\prod_{i=1}^n\prod_{g=1}^G\left[
\beta_{0g}N(\boldsymbol{X}_{i1}|\boldsymbol\mu_g,\Sigma_g) \right]^{Z_{i1g}} \prod_{t=2}^T \prod_{h=1}^G\left[ \prod_{k=1}^G\left[
\beta_{hk}N(\boldsymbol{X}_{it}|\lambda\boldsymbol\mu_k+(1-\lambda)\boldsymbol{X}_{i(t-1)},\Sigma_k) \right]^{Z_{itk}} \right]^{Z_{i(t-1)h}},&
\label{jointXZDist} \end{align}
where $N(\boldsymbol{X}|\boldsymbol\mu,\Sigma)$ is the normal density with mean vector $\boldsymbol\mu$ and covariance matrix $\Sigma$ evaluated at $\boldsymbol{X}$. Thus the communities are each modeled as a multivariate normal distribution in the latent space with mean $\boldsymbol\mu_g$ and covariance matrix $\Sigma_g$. Since these refer to the location and shape of the $g^{th}$ community in the latent network space, we will refer to $\boldsymbol\mu_g$ and $\Sigma_g$ as the $g^{th}$ community location and community shape respectively. The mean of the latent position $\boldsymbol{X}_{it}$ is then modeled as $\lambda\boldsymbol\mu_g+(1-\lambda)\boldsymbol{X}_{i(t-1)}$, $\lambda\in(0,1)$, which is a blending of the current cluster effect $\boldsymbol\mu_g$ with the individual temporal effect $\boldsymbol{X}_{i(t-1)}$. Hence we will refer to $\lambda$ as the blending coefficient. The $\beta_{0g}$'s determine the probability of initially belonging to the $g^{th}$ community and the $\beta_{hk}$'s determine the probability of transitioning from the $h^{th}$ community to the $k^{th}$ community. We will therefore refer to the vectors $\boldsymbol\beta_0=(\beta_{01},\ldots,\beta_{0G})$ and $\boldsymbol\beta_{h}=(\beta_{h1},\ldots,\beta_{hG})$, $h=1,\ldots,G$, respectively as the initial clustering parameter and the transition parameter for group $h$.
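To make the generative structure in (\ref{jointXZDist}) concrete, the following sketch simulates the cluster assignments and latent positions forward in time. It is only an illustrative forward sampler under the stated Markov assumptions; the function name and argument layout are ours, and it plays no role in the estimation procedure.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def simulate_latent_paths(n, T, mu, Sigma, beta0, beta, lam):
    """Forward simulation of cluster labels Z and latent positions X.
    mu: (G, p) community locations; Sigma: (G, p, p) community shapes;
    beta0: (G,) initial clustering parameter; beta: (G, G) transition
    parameters (rows sum to one); lam: blending coefficient in (0, 1)."""
    G, p = mu.shape
    Z = np.zeros((n, T), dtype=int)
    X = np.zeros((n, T, p))
    for i in range(n):
        Z[i, 0] = rng.choice(G, p=beta0)
        X[i, 0] = rng.multivariate_normal(mu[Z[i, 0]], Sigma[Z[i, 0]])
        for t in range(1, T):
            Z[i, t] = rng.choice(G, p=beta[Z[i, t - 1]])
            mean = lam * mu[Z[i, t]] + (1.0 - lam) * X[i, t - 1]
            X[i, t] = rng.multivariate_normal(mean, Sigma[Z[i, t]])
    return Z, X
\end{verbatim}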
\subsection{Projection model} \label{ProjectionModel} \cite{cox1991multidimensional} and \cite{banerjee2005clustering} gave many contexts in which there has been empirical evidence that embedding data onto a hypersphere and/or using cosine distances is preferable to Euclidean space/distances. Here we continue this tradition by embedding dynamic network data onto the hypersphere. In this section we assume the more specific, but most commonly encountered, context of directed binary edges (the model to be proposed can be simplified for undirected edges). In the projection model, every actor is embedded within some latent hypersphere; the probability of an edge forming between two actors depends on the angle, rather than the Euclidean distance, between them. Thus it is the angle between any two actors that represents the ``closeness'' of the actors. Though the latent space is strictly a Euclidean space rather than a hypersphere, it is more helpful to think of the positions within $\Re^p$ as scaled unit vectors, with directions lying on a $(p-1)$-dimensional hypersphere and individual edge propensities reflected in the magnitudes of the latent positions.
Our proposed likelihood of the adjacency matrices adapts the likelihood of the projection model originally proposed by \cite{hoff2002latent}, and extends \cite{durante2014nonparametric} to allow for directed edges. The specific form of the likelihood is given as \begin{align}
\pi(\{Y_t\}_{t=1}^T|\{{\cal X}_t\}_{t=1}^T,\theta_{\ell})&=\prod_{t=1}^T\prod_{i\neq j} \frac{\exp\{y_{ijt}\eta_{ijt}\}}{1+\exp\{\eta_{ijt}\}},&\label{projLik1}\\ \eta_{ijt}&=\alpha+s_j\boldsymbol{X}_{it}'\boldsymbol{X}_{jt}& \label{projLik2}\\
&=\alpha +\|\boldsymbol{X}_{it}\|\cdot(s_j\|\boldsymbol{X}_{jt}\|)\cdot\cos(\phi_{ijt}),&\label{projLikAlt} \end{align} where $\phi_{ijt}$ is the angle between $\boldsymbol{X}_{it}$ and $\boldsymbol{X}_{jt}$; in this context $\theta_{\ell}=\{\alpha,\boldsymbol{s}\}$, where $\alpha$ reflects a baseline edge propagation rate and $\boldsymbol{s}=(s_1,\ldots,s_n)$ is a vector of actor specific parameters that reflect how the tendency of the actors to receive edges relates to the tendency to send edges. We therefore refer to $\boldsymbol{s}$ as the receiver \underline{s}caling parameters. While (\ref{projLik2}) is simpler, (\ref{projLikAlt}) makes it clear how the probability of an edge from $i$ to $j$ is made up of some constant plus the product of the sending effect of $i$, the receiving effect of $j$, and the closeness between $i$ and $j$ in the latent space as measured by the cosine of the angle between the two actors.
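Analogously, the next few lines illustrate how (\ref{projLik1})--(\ref{projLik2}) map the latent positions, the receiver scaling parameters $\boldsymbol{s}$, and $\alpha$ to edge probabilities; the vectorized form is again ours and is intended only as an illustration.
\begin{verbatim}
import numpy as np

def edge_probs_projection(X_t, s, alpha):
    """Edge probabilities from eta_ijt = alpha + s_j <X_it, X_jt>.
    X_t: (n, p) latent positions; s: (n,) receiver scaling parameters."""
    inner = X_t @ X_t.T                  # ||X_i|| ||X_j|| cos(phi_ij)
    eta = alpha + inner * s[None, :]     # column j scaled by s_j
    P = 1.0 / (1.0 + np.exp(-eta))
    np.fill_diagonal(P, 0.0)
    return P
\end{verbatim}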
The question remains as to how to perform clustering. With the projection model the latent positions are embedded within a hypersphere, and thus the clustering must be done in a fundamentally different way than that done for the distance model. Since we would expect a group of highly connected actors to have small angles between them all, we propose clustering based on the angles of the actors' latent positions.
We first assume that the latent positions follow a hidden Markov model, with the cluster assignments as the hidden states. That is, the cluster assignments follow a Markov process (i.e., given $\boldsymbol{Z}_{i(t-1)}$, $\boldsymbol{Z}_{it}$ is conditionally independent of $\boldsymbol{Z}_{i(t-s)}$ for any $s>1$), and given the cluster assignments ${\cal Z}_t$, the latent positions $\boldsymbol{X}_t$ are assumed to be conditionally independent of $\boldsymbol{X}_s$ for any $s\neq t$.
The joint density on the latent positions and cluster assignments is given as \begin{align}\nonumber &\pi(\{{\cal X}_t\}_{t=1}^T,\{{\cal Z}_t\}_{t=1}^T)&\\ &=\prod_{i=1}^n\prod_{g=1}^G\left[
\beta_{0g}N(\boldsymbol{X}_{i1}|r_i\boldsymbol{u}_g,\tau_i^{-1}I_p) \right]^{Z_{i1g}} \prod_{t=2}^T \prod_{h=1}^G\left[ \prod_{k=1}^G\left[
\beta_{hk}N(\boldsymbol{X}_{it}|r_i\boldsymbol{u}_k,\tau_i^{-1}I_p) \right]^{Z_{itk}} \right]^{Z_{i(t-1)h}},&
\label{jointXZProj} \end{align} where $I_p$ is the $p\times p$ identity matrix. As with the distance model of Section \ref{DistanceModel}, the communities are modeled as multivariate normal distributions within the latent space. Here $\boldsymbol{r}=(r_1,\ldots,r_n)$, the \underline{r}adii of the means of the $\boldsymbol{X}_{it}$'s, are individual effects representing the individual propensities to send edges; hence we refer to $\boldsymbol{r}$ as the sender propensities. $\boldsymbol{u}_g$ is the unit vector corresponding to the direction of the $g^{th}$ community, and hence we refer to the $\boldsymbol{u}_g$'s as the community directions. $\boldsymbol\tau=(\tau_1,\ldots,\tau_n)$ are the precision parameters, and $\boldsymbol\beta_0=(\beta_{01},\ldots,\beta_{0G})$ and $\boldsymbol\beta_{h}=(\beta_{h1},\ldots,\beta_{hG})$, $h=1,\ldots,G$, are again respectively the initial clustering parameter and the transition parameter for group $h$.
From (\ref{jointXZProj}) we can see how the different aspects of the network are captured in the joint density of $\{{\cal X}_t\}_{t=1}^T$ and $\{{\cal Z}_t\}_{t=1}^T$. The clusters are completely determined by the community directions $\boldsymbol{u}_g$. Thus if two actors belong to the same cluster then they have the same mean direction, and therefore the model will deem these two actors as similar (based on the cosine of their angle). The permanence and transience of the clusters are captured in the transition parameters $\boldsymbol\beta_h$, $h=1,\ldots,G$. The individual effects are captured by the sender propensities $\boldsymbol{r}$ and the receiver scaling parameters $\boldsymbol{s}$. To see this more clearly, notice that the square of the individual sending effect (and the scaled individual receiving effect), $\|\boldsymbol{X}_{it}\|^2$, has mean $p\tau_i^{-1}+r_i^2$; under the quite reasonable assumption that an increase in $r_i$ does not imply a decrease in $\tau_i^{-1}$ (we would expect the opposite to occur), we see that $r_i$ has a direct effect on the individual effect. The difference in individual $i$'s sending and receiving effect is given by the $i^{th}$ receiver scaling parameter $s_i$.
Note that the parameterization (\ref{projLik2}) of the likelihood (\ref{projLik1}) is not identifiable, as $\boldsymbol{s}$ and ${\cal X}_t$ can be scaled arbitrarily. The estimation is done within a Bayesian framework, however, and thus by fixing the hyperparameters corresponding to the priors on the unknown parameters, the posterior distribution is identifiable.
\section{Estimation} \label{Estimation}
Our estimation is done within the Bayesian framework, with the goal of finding the maximum {\it a posteriori} (MAP) estimators of the unknown parameters and latent positions.
\subsection{MCMC for the distance model} \label{MCMC} We propose a Markov chain Monte Carlo (MCMC) method to obtain posterior modes to estimate the latent positions and model parameters of the distance model given in Section \ref{DistanceModel}. Specifically, we implement a Metropolis-Hastings (MH) within Gibbs sampler.
We assign the following priors: \begin{eqnarray} \lambda&\sim& N_{(0,1)}(\nu_{\lambda},\xi_{\lambda}),\\ \boldsymbol\mu_g&\sim& N({\bf 0},\tau^2I_p) \hspace{1pc}\mbox{for $g=1,\ldots,G$},\\ \Sigma_g&\sim& W^{-1}(p+1,diag(\gamma_1,\ldots,\gamma_p)) \hspace{1pc}\mbox{for $g=1,\ldots,G$},\\ \tau^2&\sim&\Gamma^{-1}(a,b),\\ \gamma_{\ell}&\sim&\Gamma(c,1/d) \hspace{1pc}\mbox{for $\ell=1,\ldots,p$},\\ \boldsymbol\beta_h&\sim&Dir(1,\ldots,1) \hspace{1pc}\mbox{for $h=0,1,\ldots,G$}, \end{eqnarray} where $N_{(0,1)}(\mu,\sigma^2)$ indicates the normal distribution with mean $\mu$ and variance $\sigma^2$ truncated to the range of $(0,1)$, $W^{-1}(a,B)$ indicates the inverse Wishart distribution with degrees of freedom $a$ and scale matrix $B$, $diag(d_1,\ldots,d_K)$ indicates a $K\times K$ diagonal matrix with $d_1,\ldots, d_K$ on the diagonal, $Dir(a_1,\ldots,a_K)$ indicates the Dirichlet distribution with parameters $a_1$ to $a_K$, $\Gamma^{-1}(a,b)$ indicates the inverse gamma distribution with shape and scale parameters $a$ and $b$ respectively, and $\Gamma(a,b)$ indicates the gamma distribution with shape and scale parameters $a$ and $b$ respectively. Additionally, there will be some prior $\pi(\theta_{\ell})$ on the likelihood parameters $\theta_{\ell}$ that will depend on the formulation of the likelihood.
With the exception of the latent positions and $\theta_{\ell}$, these priors are conjugate with respect to the full conditional distributions; these distributions are given in the supplementary material. For the latent positions, MH steps are necessary. The context specific form of the likelihood will determine whether the likelihood parameters $\theta_{\ell}$ can be sampled directly or whether the user needs to implement MH steps here as well (see Sections \ref{methodEvaluation} and \ref{Fraternity} for examples).
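As an indication of what such an MH step looks like, the following generic random-walk update is included as a sketch; the Gaussian random-walk proposal, the proposal scale, and the function names are illustrative assumptions rather than the exact proposal used in our sampler.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def mh_update(x_cur, log_post, prop_sd=0.1):
    """One random-walk Metropolis-Hastings update of a latent position.
    log_post(x) returns the log full conditional density up to a constant."""
    x_prop = x_cur + prop_sd * rng.standard_normal(x_cur.shape)
    log_ratio = log_post(x_prop) - log_post(x_cur)
    if np.log(rng.uniform()) < log_ratio:
        return x_prop, True    # accepted
    return x_cur, False        # rejected
\end{verbatim}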
\subsection{Variational Bayesian inference for the projection model} \label{VB} \cite{polson2013bayesian} gave a data augmentation scheme for logistic models by utilizing the P\'olya-Gamma distribution. This scheme starts by introducing a random variable $\omega_{ijt}$ which, given $\eta_{ijt}$, follows $PG(1,\eta_{ijt})$, where $PG(b,c)$ denotes the P\'olya-Gamma distribution with parameters $b>0$ and $c\in\Re$. This auxiliary variable $\omega_{ijt}$ is conditionally independent of $y_{ijt}$ given $\eta_{ijt}$. Polson et al. show that the conditional joint density of $y_{ijt}$ and $\omega_{ijt}$ can be written as \begin{equation}
\pi(y_{ijt},\omega_{ijt}|\eta_{ijt})= \frac12e^{(y_{ijt}-1/2)\eta_{ijt}}
e^{-\omega_{ijt}\eta_{ijt}^2/2}PG(\omega_{ijt}|1,0), \label{PGJoint} \end{equation}
where $PG(\omega|b,c)$ is the P\'olya-Gamma density with parameters $b$ and $c$ evaluated at $\omega$. This data augmentation leads to tractable forms for the full conditional distributions of the model parameters and latent positions, leading to efficient and accurate estimation for binary data using Gibbs sampling \citep{choi2013polya}, the EM algorithm \citep{scott2013expectation} and, as we will show here, variational Bayes (VB) approaches.
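In mean field schemes built on this augmentation, the quantity that typically enters the updates is the conditional mean $E[\omega]=\frac{b}{2c}\tanh(c/2)$ for $\omega\sim PG(b,c)$. The short function below evaluates this moment, treating $c=0$ by its limit $b/4$; it is offered purely as an illustration and is not taken from our implementation.
\begin{verbatim}
import numpy as np

def pg_mean(c, b=1.0):
    """E[omega] for omega ~ PG(b, c): b/(2c) * tanh(c/2), with limit b/4 at c = 0.
    c may be a scalar or an array of linear predictors eta_ijt."""
    c = np.atleast_1d(np.asarray(c, dtype=float))
    out = np.full_like(c, b / 4.0)
    nz = np.abs(c) > 1e-9
    out[nz] = b * np.tanh(c[nz] / 2.0) / (2.0 * c[nz])
    return out
\end{verbatim}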
Using Polson et al.'s work we may either implement a Gibbs sampler, as each full conditional distribution belongs to a well known family from which we can sample, or alternatively we may implement a mean field VB algorithm. Unlike an MCMC approach, which obtains samples approximately from the posterior distribution, the VB algorithm here iteratively finds an approximation to the posterior density $\pi(\{{\cal X}_t,{\cal Z}_t\}_{t=1}^T,\theta_{\ell},\theta_{p}|\{Y_t\}_{t=1}^T)$, where $\theta_{p}$ denotes all the remaining model parameters corresponding to the prior on $\{{\cal X}_t,{\cal Z}_t\}_{t=1}^T$. Using the mean field VB implies that we are finding a factorized approximation $Q$ of the posterior which minimizes the Kullback-Leibler divergence between the true posterior and $Q$. This factorized form will be given shortly.
VB procedures have been gaining popularity in large part due to their greatly decreased computational cost in comparison with most sampling methods. \cite{salter2013variational} applied VB to the static latent space cluster model for networks given by \cite{handcock2007model} (which is a static form of the distance model). Within this iterative scheme, the factorized distributions of the latent positions and many of the model parameters required numerical optimization techniques, as a closed form analytical solution was unavailable. By utilizing the projection model as described in Section \ref{ProjectionModel}, however, we can find closed form solutions for each iteration, thereby reducing the computational cost involved in the estimation algorithm.
We assign the following priors: \begin{align} \omega_{ijt}&\sim PG(1,0) \hspace{1pc}\mbox{for $t=1,\ldots,T$, $1\leq i \neq j \leq n$},&\\ s_i&\sim Exp(1)\hspace{1pc}\mbox{for $i=1,\ldots,n$},&\\
r_i|\tau_i&\sim \Gamma(1,c\tau_i^{-1}) \hspace{1pc}\mbox{for $i=1,\ldots,n$},&\\ \tau_i&\sim \Gamma(a_2^*,b_2^*) \hspace{1pc}\mbox{for $i=1,\ldots,n$},&\\ \alpha&\sim N(0,b_3^*), &\\ \pi(\boldsymbol{u}_g)&=\frac{\Gamma(p/2)}{2\pi^{p/2}}\hspace{1pc}\mbox{for $g=1,\ldots,G$},&\\ \boldsymbol\beta_h&\sim Dir(\boldsymbol\gamma_h^*) \hspace{1pc}\mbox{for $h=0,1,\ldots,G$}.& \end{align}
To estimate the posterior $\pi(\{{\cal X}_t,{\cal Z}_t\}_{t=1}^T,\theta_{\ell},\theta_p|\{Y_t\}_{t=1}^T)$, we use the factorized approximation $Q$, which looks like \begin{align}\nonumber &Q(\Omega,\{{\cal X}_t\}_{t=1}^T,\{{\cal Z}_t\}_{t=1}^T,\alpha,\boldsymbol{s},\boldsymbol{r},\boldsymbol\tau,\boldsymbol{u},\{\boldsymbol\beta_h\}_{h=0}^G)&\\ &=q(\Omega)q(\{{\cal X}_t\}_{t=1}^T)q(\{{\cal Z}_t\}_{t=1}^T)q(\alpha)q(\boldsymbol{s})q(\boldsymbol{r})q(\boldsymbol\tau)q(\boldsymbol{u})q(\{\boldsymbol\beta_h\}_{h=0}^G),& \label{factorized} \end{align} where $\Omega=\{\omega_{ijt}\}_{t,i\neq j}$. Using the priors given above, the factorized distributions on the right hand side of (\ref{factorized}) all belong to well known families of distributions. The exact forms are given in the supplementary material.
Of interest is the computational time required for our proposed methods, and in particular how the VB algorithm decreases the computational time required. We recorded the times required to implement both our VB approach (500 iterations) and the corresponding Gibbs sampler (50,000 samples drawn), letting $n$ be 100, 200, 400, 600, 800, and 1,000. The times are given graphically in Figure \ref{compTime}. From this we can see that the VB algorithm shows drastic reduction in computational cost. We will see in Section \ref{SimulationStudy}, however, that the performance of the VB and Gibbs sampler are very similar.
\begin{figure}
\caption{Run time in minutes for 50,000 draws using the MCMC algorithm (dashed line, squares) and 500 iterations of the VB algorithm (solid line, circles)}
\label{compTime}
\end{figure}
\subsection{Initialization} Our context involves a high dimensional estimation problem, and so how we initialize the MCMC or the VB algorithm plays a non-negligible role in the performance. We performed a small sensitivity analysis of the starting conditions of our algorithms, the details of which can be found in the supplementary material. The results indicated that under some conditions the VB algorithm for the projection model can be sensitive to the initialization scheme, though it did not appear that either of the MCMC algorithms (the Gibbs sampler for the projection model and the MH within Gibbs sampler for the distance model) were particularly sensitive. The full details on how we initialized the algorithms are given in the supplementary material.
\subsection{Number of communities} \label{NumberOfClusters} An implicit challenge underlying the previous discourse is that in practice we do not in general know the number of communities $G$. We found the strategy given by \cite{handcock2007model} to be quite successful in our simulation study (see Section \ref{BICsimstudy}). We briefly summarize this method and refer the interested reader to the original source for more details.
Rather than estimating the integrated likelihood $\pi(\{Y_t\}_{t=1}^T|G)$ as would typically be done, we instead consider the joint distribution of the observed network data and unobserved latent positions, using our MAP estimator as the fixed values of the latent positions, i.e., $\pi(\{Y_t\}_{t=1}^T,\{\widehat{{\cal X}}_t\}_{t=1}^T|G)$, where $\{\widehat{{\cal X}}_t\}_{t=1}^T$ is the MAP estimators of the latent positions. We can rewrite this as \begin{equation}
\pi(\{Y_t\}_{t=1}^T,\{\widehat{{\cal X}}_t\}_{t=1}^T|G)=\int \pi(\{Y_t\}_{t=1}^T|\{\widehat{{\cal X}}_t\}_{t=1}^T,\theta_{\ell})\pi(\theta_{\ell})d\theta_{\ell} \int \pi(\{\widehat{{\cal X}}_t\}_{t=1}^T|\theta_p)\pi(\theta_p)d\theta_p, \label{intLik} \end{equation}
where all distributions are implicitly conditioning on $G$. The two integrals on the right hand side of (\ref{intLik}) can each be estimated via the Bayesian information criterion (BIC), thus allowing us to find the BIC approximation of $2\log(\pi(\{Y_t\}_{t=1}^T,\{\widehat{{\cal X}}_t\}_{t=1}^T|G))$ as \begin{equation*} \mbox{BIC}=\mbox{BIC}_1+\mbox{BIC}_2, \end{equation*} where \begin{align} \nonumber
\mbox{BIC}_1&=2\log(\pi(\{Y_t\}_{t=1}^T|\{\widehat{{\cal X}}_t\}_{t=1}^T,\hat\theta_{\ell})) - \mbox{dim}(\theta_{\ell})\log\Big(\sum_{t,i\neq j} y_{ijt}\Big), &\\ \nonumber
\mbox{BIC}_2&= 2\log(\pi(\{\widehat{{\cal X}}_t\}_{t=1}^T|\hat\theta_p)) - \mbox{dim}(\theta_p)\log(nT).& \end{align}
Rather than using maximum likelihood estimators for $\hat\theta_{\ell}$ and $\hat\theta_{p}$ in computing the BIC's, we used the MAP estimators, as was also done in, e.g., \cite{fraley2007bayesian}. We remark that for the projection model, since the posterior modes found by the VB and the Gibbs sampler perform comparably (see Section \ref{methodEvaluation}), this BIC model selection method is still valid for the VB estimates. This is because we only need the posterior mode, and hence any inaccuracies in the posterior variances/covariances of the parameters induced by approximating the posterior distribution with the VB factorized distribution will not affect the BIC criterion. One last note is that we utilized recursive relations identical or similar to those given in \cite{sewell2014model} in order for the number of terms required to compute $\pi(\{{\cal X}_t\}_{t=1}^T|\hat\theta_p)$ to be linear, rather than exponential, with respect to $T$.
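As an illustration of the bookkeeping involved, the sketch below combines the two BIC terms; the argument names are ours, and the log-likelihood and log-density values evaluated at the MAP estimates are assumed to be computed elsewhere.
\begin{verbatim}
import numpy as np

def bic_criterion(loglik_Y, dim_theta_l, total_edges, logdens_X, dim_theta_p, n, T):
    """BIC approximation of 2 log pi({Y_t}, {X_hat_t} | G) at the MAP estimates.
    loglik_Y:  log pi({Y_t} | {X_hat_t}, theta_l_hat)
    logdens_X: log pi({X_hat_t} | theta_p_hat)"""
    bic1 = 2.0 * loglik_Y - dim_theta_l * np.log(total_edges)  # sample size: sum of edges
    bic2 = 2.0 * logdens_X - dim_theta_p * np.log(n * T)       # sample size: nT
    return bic1 + bic2
\end{verbatim}
The model is then fitted for each candidate $G$, and the value of $G$ yielding the largest criterion is retained.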
\section{Simulation study} \label{SimulationStudy} \subsection{Method evaluation} \label{methodEvaluation} We simulated 200 binary networks, each with $n=100$ actors and $T=10$ time points. These 200 data sets were subdivided evenly in two different ways. First, half of the data sets were generated according to the distance model, the other half via the projection model. Second, half of the data sets had sticky cluster transition probabilities, letting the $\beta_{hh}$'s take large values (recall that $\boldsymbol\beta_h$ is the transition parameter for group $h$), while the other half had more transitory transition probabilities, letting the $\beta_{hh}$'s take more moderate values. In summary, we had 50 data sets from the distance model with sticky transition probabilities, 50 from the distance model with transitory transitions, 50 from the projection model with sticky transition probabilities, and 50 from the projection model with transitory transitions. Details on how the data were generated will be given shortly.
We compared various methods in four ways. The first was to evaluate how well the model explains the data used to fit the model. To this end we obtained in-sample edge predictions and computed the AUC (area under the receiver operating characteristic curve); a value of one implies a perfect fit, whereas a value of 0.5 implies that the predictions are random. As a good in-sample fit may be due to overfitting the data, we also looked at one step ahead predictions. We obtained one step ahead predicted probabilities and computed the correlation with the true one step ahead probabilities. We aim to stress, however, that prediction is not the primary purpose of this methodology, but rather to accurately recover hidden communities in the network object. We thus compared the true clustering assignments with the estimated clustering assignments using two methods. The first is the corrected Rand index (CRI), which can be viewed as a measure of misclassification. Values close to 1 indicate nearly identical clustering assignments and values near zero indicate what one might expect with two random clustering assignments. Second, we computed the variation of information (VI) \citep{meilua2003comparing}. The VI is a true metric, and hence a smaller VI value implies that the two clusterings being compared are closer to being identical.
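These criteria can be computed with standard tools; the sketch below assumes integer vectors of true and estimated cluster labels (flattened over actors and time points), treats the corrected Rand index as the adjusted Rand index of scikit-learn, and computes VI from entropies and mutual information. It is an illustration only, not the code used for the study.
\begin{verbatim}
import numpy as np
from scipy.stats import entropy
from sklearn.metrics import adjusted_rand_score, mutual_info_score, roc_auc_score

def variation_of_information(z_true, z_est):
    """VI = H(true) + H(est) - 2 I(true, est), computed in nats from label vectors."""
    h1 = entropy(np.bincount(z_true))     # entropy() normalises the counts
    h2 = entropy(np.bincount(z_est))
    return h1 + h2 - 2.0 * mutual_info_score(z_true, z_est)

# Hypothetical usage with flattened label vectors and predicted probabilities:
# auc = roc_auc_score(y_obs.ravel(), p_hat.ravel())   # in-sample fit
# cri = adjusted_rand_score(z_true, z_est)            # corrected (adjusted) Rand index
# vi  = variation_of_information(z_true, z_est)
\end{verbatim}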
For each of the 200 simulations we compared six methods. The first two are the VB algorithm and the Gibbs sampler for the projection model. The third is the distance model. Here we used the likelihood formulation found in the dynamic latent space model of \cite{sewell2014latent}. This likelihood is given as \begin{equation}
\mbox{logit}(\mathbb{P}(y_{ijt}=1|{\cal X}_t,\beta_{IN},\beta_{OUT},s_i,s_j))=\beta_{IN}\Big(1-\frac{d_{ijt}}{s_j}\Big)+\beta_{OUT}\Big(1-\frac{d_{ijt}}{s_i}\Big), \label{SandC2014Likelihood} \end{equation} where $\beta_{IN}$ and $\beta_{OUT}$ are global parameters that reflect the relative importance of popularity and activity respectively, the $s_i$'s are actor specific parameters that reflect the tendency to send and receive edges, and $d_{ijt}$ is the distance between actors $i$ and $j$ within the latent Euclidean space at time $t$. Estimation is done by putting a bivariate normal prior on $\beta_{IN}$ and $\beta_{OUT}$, a Dirichlet prior on the $s_i$'s, and incorporating these parameters in the MH within Gibbs MCMC algorithm of Section \ref{MCMC}. The fourth and fifth methods were the clustering models of \cite{handcock2007model} and of \cite{krivitsky2009representing}, implemented in the latentnet R package \citep{krivitsky2008fitting,krivitsky2015latentnet}. These latter two models cluster static networks via a latent space approach; to apply them to dynamic networks, clustering was performed at each time point and then combined sequentially using the relabeling algorithm given in \cite{papastamoulis2010artificial}. Note that these two methods, being static models, cannot be used to perform one step ahead predictions. Lastly we used the temporal exponential random graph model (TERGM) \citep{hanneke2010discrete}, as implemented in the btergm R package \citep{leifeld2015xergm}. The terms we specified for the TERGM were the total number of edges in the network, the number of reciprocated edges, the number of transitive triples, the number of cyclic triples, in-degrees and out-degrees, the number of lagged reciprocated edges, and the stability of the network. Note that this method can be used to determine in-sample predictions and one step ahead predictions, but has no functionality for determining cluster assignments. All MCMC methods were used to obtain 50,000 samples.
For the data sets generated according to the distance model, we set the blending coefficient $\lambda=0.8$, the dimension of the latent space $p=2$, the total number of clusters $G=6$, and the likelihood parameters $\beta_{IN}=0.3$ and $\beta_{OUT}=0.7$. We set the community locations $\boldsymbol\mu_g$ to be $(-0.03,0)$, $(-0.01,0)$, $(0.01,0)$, $(0.03,0)$, $(0,0.02)$, and $(0,-0.02)$. We drew the community shapes $\Sigma_g$, $g=1,\ldots,G$, from $W^{-1}(13,(1\times10^{-5}) I_2)$, the initial clustering parameter $\boldsymbol\beta_0\sim Dir(10,\ldots,10)$, and for $h=1,\ldots,6$, the transition parameter $\boldsymbol\beta_h$ for group $h$ was set to be proportional to \begin{equation}\nonumber
\left(\frac{1}{\|\boldsymbol\mu_1 -\boldsymbol\mu_h\|},\ldots,\frac{1}{\|\boldsymbol\mu_{h-1} -\boldsymbol\mu_h\|},\mbox{const}\times \max_{k\neq h}\left\{\frac{1}{\|\boldsymbol\mu_k -\boldsymbol\mu_h\|}\right\},\frac{1}{\|\boldsymbol\mu_{h+1} -\boldsymbol\mu_h\|},\ldots,\frac{1}{\|\boldsymbol\mu_G -\boldsymbol\mu_h\|} \right). \end{equation} For sticky transition probabilities we set the constant in the above equation equal to 20, which yields probabilities from 0.82 to 0.87 of remaining in the same cluster, and for transitory transition probabilities we set the constant equal to 10, which yields probabilities from 0.70 to 0.77 of remaining in the same cluster.
The cluster assignments $\{{\cal Z}_t\}_{t=1}^T$ and latent positions $\{{\cal X}_t\}_{t=1}^T$ were drawn according to (\ref{jointXZDist}), and the actor specific parameters $(s_1,\ldots,s_n)\sim Dir\Big(100\frac{1/\|X_{1,1}\|}{\max_j(1/\|X_{j,1}\|)},\ldots,100\frac{1/\|X_{100,1}\|}{\max_j(1/\|X_{j,1}\|)}\Big)$. Finally, the adjacency matrices were simulated according to (\ref{SandC2014Likelihood}). This led to an average density of the simulated networks (taken over all time points of all simulations) of 0.221 and 0.222 for sticky and transitory transition probabilities respectively. The average modularity (again averaged over all time points of all simulations) was 0.299 and 0.287 for sticky and transitory transition probabilities respectively, giving a measure of how well separated the clusters are. Specifically, the modularity \citep[originally defined by][for undirected networks]{clauset2004finding} as implemented in the igraph package \citep{igraphPackage} is $$ \frac{1}{2S_t}\sum_{i\neq j}\left(Y^*_{ijt}-\frac{k_{it}k_{jt}}{2S_t}\right)1_{\{ \boldsymbol{Z}_{it}'\boldsymbol{Z}_{jt}=1 \}}, $$ where $S_t$ is the number of edges in the network at time $t$, $Y_t^*$ is the $n\times n$ symmetric adjacency matrix constructed by setting $Y_{ijt}^*=Y_{jit}^*=(Y_{ijt}+Y_{jit})/2$, and $k_{it}$ is the average of the in-degree and out-degree for actor $i$ at time $t$. For comparison, an Erd\H{o}s-R\'enyi graph with comparable density has on average a modularity of 0.076, and a network consisting of 5 fully connected subgraphs that are completely disconnected from one another has a modularity of 0.8 (and a density of 0.19).
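For reference, the modularity displayed above can be computed directly from a directed binary adjacency matrix and a vector of cluster labels as in the sketch below, which is a literal transcription of the formula (the function name is ours) rather than a call to the igraph implementation.
\begin{verbatim}
import numpy as np

def directed_modularity(Y_t, z_t):
    """Modularity at time t for a directed binary adjacency matrix Y_t and
    integer cluster labels z_t, following the symmetrised definition above."""
    Y_star = (Y_t + Y_t.T) / 2.0           # Y*_ij = (Y_ij + Y_ji) / 2
    np.fill_diagonal(Y_star, 0.0)
    k = Y_star.sum(axis=1)                 # average of in- and out-degree
    two_S = 2.0 * Y_t.sum()                # 2 S_t, with S_t the number of edges
    same = (z_t[:, None] == z_t[None, :])  # indicator of a shared cluster
    np.fill_diagonal(same, False)          # the sum runs over i != j
    return ((Y_star - np.outer(k, k) / two_S) * same).sum() / two_S
\end{verbatim}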
For the data sets generated according to the projection model, we set the total number of clusters $G=6$, the dimension of the latent space $p=3$, the baseline propagation rate $\alpha=-5$, the initial clustering parameter $\boldsymbol\beta_0 = (1/6,\ldots,1/6)$ and the community directions $$ (\boldsymbol{u}_1,\ldots,\boldsymbol{u}_6) = \left[\begin{array}{cccccc} -15&30&60&105&45&45 \\ 0 &0& 0& 0&60&-60 \end{array}\right], $$ where $\boldsymbol{u}_g$ are given in the spherical coordinate angles in degrees. For $h=1,\ldots,6$, the transition parameter $\boldsymbol\beta_h$ was set to be proportional to $(\exp(\mbox{const}\cdot\boldsymbol{u}_h'\boldsymbol{u}_1),\ldots,\exp(\mbox{const}\cdot\boldsymbol{u}_h'\boldsymbol{u}_6))$. For sticky transition probabilities we set the constant above equal to 8 which yields probabilities from 0.68 to 0.96 of remaining in the same cluster, and for transitory transition probabilities we set the constant equal to 5 which yields probabilities from 0.52 to 0.83 of remaining in the same cluster.
For $i=1,\ldots,100$, we simulated the receiver scaling parameters $s_i\sim N(1,0.15)$, the sender propensities $r_i\sim N(2.3,0.05^2)$, and set the precision parameters $\tau_i=175/r_i^2$. The cluster assignments $\{{\cal Z}_t\}_{t=1}^T$ and latent positions $\{{\cal X}_t\}_{t=1}^T$ were drawn according to (\ref{jointXZProj}). Finally, the adjacency matrices were simulated according to (\ref{projLik1}) and (\ref{projLik2}). This led to an average modularity of 0.305 and 0.279 for sticky and transitory transition probabilities respectively. The average density of the simulated networks was 0.183 and 0.191 for sticky and transitory transition probabilities respectively.
Table \ref{simTable} gives the simulation results. The AUC values show that the TERGM fits the data poorly, but all the other methods fit rather comparably. However, looking at the CRI and VI we see that the static methods are overfitting the model; that is, they are providing good predicted probabilities for the observed data used to fit the model but are not doing so well at capturing the underlying truth. The correlation between the estimated one step ahead probabilities and the true probabilities is much higher for our methods than for the TERGM. Note that both the projection model and the distance model provide good predictive performance regardless of the true geometry of the latent space and regardless of the cluster transition probability matrix. Once we start looking at the CRI and the VI, which are of primary importance with respect to the goals of the proposed work, we notice several things. First, when using the projection model, the VB and the Gibbs sampler yield very similar performance. Second, when the geometry of the latent space is misspecified, our proposed models still perform quite well and in fact perform similarly to the correctly specified model. Lastly, we note that the performance of the static methods deteriorates when the probabilities of changing clusters increase. The VI values are also given graphically in Figure \ref{simVI}, visually demonstrating the performance disparities between the dynamic and static methods. \begin{landscape} \begin{table}[p] \centering \footnotesize \begin{tabular}{lllllll} \hline
True model & Transitions & Fitted model & AUC (in sample) & Correlation (one step ahead) & CRI & VI \\ \hline
Projection & Sticky & Projection (VB) & 0.889 (0.00579) & 0.987 (0.00376) & 0.987 (0.00967) & 0.0560 (0.0284) \\
Projection & Sticky & Projection (MCMC) & 0.885 (0.00601) & 0.975 (0.00401) & 0.984 (0.00889) & 0.0676 (0.0265) \\
Projection & Sticky & Distance & 0.875 (0.00623) & 0.933 (0.0155) & 0.954 (0.0182) & 0.150 (0.0491) \\
Projection & Sticky & Handcock et al. & 0.876 (0.00664) & NA & 0.799 (0.0893) & 0.518 (0.192) \\
Projection & Sticky & Krivitsky et al. & 0.899 (0.00543) & NA & 0.806 (0.0866) & 0.485 (0.185) \\
Projection & Sticky & TERGM & 0.619 (0.0154) & 0.270 (0.114) & NA & NA \\
\ & \ & \ & \ & \ & \ & \ \\
Projection & Transitory& Projection (VB) & 0.884 (0.00444) & 0.980 (0.0130) & 0.981 (0.0767) & 0.0741 (0.193) \\
Projection & Transitory & Projection (MCMC) & 0.880 (0.00458) & 0.962 (0.0153) & 0.977 (0.0743) & 0.0862 (0.187) \\
Projection & Transitory & Distance & 0.870 (0.00458) & 0.922 (0.0222) & 0.944 (0.0692) & 0.183 (0.170) \\
Projection & Transitory & Handcock et al. & 0.871 (0.00497) & NA & 0.520 (0.109) & 1.36 (0.328) \\
Projection & Transitory & Krivitsky et al. & 0.895 (0.00392) & NA & 0.528 (0.113) & 1.31 (0.335) \\
Projection & Transitory & TERGM & 0.618 (0.0144) & 0.261 (0.0749) & NA & NA \\
\ & \ & \ & \ & \ & \ & \ \\
Distance & Sticky& Projection (VB) & 0.862 (0.00523) & 0.858 (0.0287) & 0.876 (0.0404) & 0.436 (0.102) \\
Distance & Sticky & Projection (MCMC) & 0.855 (0.00585) & 0.863 (0.0275) & 0.903 (0.0428) & 0.349 (0.104) \\
Distance & Sticky & Distance & 0.863 (0.00575) & 0.928 (0.0303) & 0.981 (0.0542) & 0.0821 (0.129) \\
Distance & Sticky & Handcock et al. & 0.861 (0.00518) & NA & 0.733 (0.141) & 0.798 (0.406) \\
Distance & Sticky & Krivitsky et al. & 0.879 (0.00520) & NA & 0.719 (0.1334) & 0.861 (0.391) \\
Distance & Sticky & TERGM & 0.601 (0.0146) & 0.293 (0.0663) & NA & NA \\
\ & \ & \ & \ & \ & \ & \ \\
Distance & Transitory & Projection (VB) & 0.853 (0.00667) & 0.845 (0.0338) & 0.820 (0.0751) & 0.583 (0.202) \\
Distance & Transitory & Projection (MCMC) & 0.846 (0.00702) & 0.834 (0.0272) & 0.864 (0.0731) & 0.455 (0.197) \\
Distance & Transitory & Distance & 0.851 (0.00644) & 0.882 (0.0351) & 0.889 (0.122) & 0.364 (0.295) \\
Distance & Transitory & Handcock et al. & 0.851 (0.00572) & NA & 0.418 (0.115) & 1.78 (0.371) \\
Distance & Transitory & Krivitsky et al. & 0.872 (0.00585) & NA & 0.421 (0.102) & 1.73 (0.311) \\
Distance & Transitory & TERGM & 0.597 (0.0140) & 0.224 (0.0378) & NA & NA \\ \end{tabular} \normalsize \caption{Simulation results from data generated according to the distance and projection models with both sticky and transitory cluster transition probabilities. The median values are reported, with standard deviations in parentheses. The AUC corresponds to the data used to fit the model, the Correlation (one step ahead) values correspond to the correlation between the estimated probabilities and the true probabilities, the CRI and VI are the corrected Rand index and variation of information respectively between the true and estimated cluster assignments.} \label{simTable} \end{table} \end{landscape}
\begin{figure}
\caption{The Variation of Information (VI) values from the simulation study are given here graphically, separated by the underlying true geometry (distance/projection) and the type of transition (sticky/transitory).}
\label{simVI}
\end{figure}
\subsection{Sensitivity study} It is not obvious how to choose the values of the hyperparameters from Section \ref{Estimation}. In the above simulation study as well as in Section \ref{DataAnalysis}, we used an automatic selection method for these hyperparameters, the details of which can be found in the supplementary material. It is important, however, to determine how sensitive the estimation procedures are to the choice of hyperparameters. To this end we analyzed 100 data sets simulated according to the projection model and 100 according to the distance model, in each case fitting the data using the model with the correct geometry. Each set of 100 data sets was evenly divided between sticky and transitory cluster transition probabilities. For each simulation we evaluated the clustering performance using CRI and VI.
For each simulation we set the hyperparameters in the following way. For the distance model, we drew $\nu_\lambda\sim Unif(0.5,1)$ and fixed $\xi_{\lambda}=1$, fixed $a=3$ and drew $b\sim Unif(0.01,0.05)$, fixed $c=1.001$ and drew $d\sim N(10,2.5^2)$. For the projection model, we drew $c\sim \Gamma(20,0.5)$, $a_2^*\sim N(600,100^2)$, and $b_2^*\sim \Gamma(1,0.05)$, and fixed $b_3^*=100$.
Table \ref{priorSensTab} provides the results from this sensitivity analysis. From this we see that the projection models still perform quite well, although the Gibbs sampler for the projection model has a larger standard deviation of the performance measures. What we should immediately notice is the appalling performance of the distance model when $\xi_\lambda=1$. Upon closer inspection we noticed that the parameter estimates of the blending coefficient $\lambda$ were in nearly all cases very close to zero, which means that the model was not using much of the cluster information to predict the latent positions. As a remedy, we altered this part of the sensitivity analysis, drawing $\nu_\lambda\sim Unif(0.7,0.95)$ and fixing $\xi_\lambda=5\times10^{-4}$, thereby setting a very low prior probability that $\lambda$ is small. With this alteration we see from Table \ref{priorSensTab} that the clustering performance is quite satisfactory. In summary, the estimation methods are not particularly sensitive to the selection of hyperparameters with the exception of those associated with $\lambda$.
\begin{table}[htb] \centering \begin{tabular}{ l l l l } \hline
Transitions & Fitted model & CRI & VI \\ \hline
Sticky & Distance ($\xi_{\lambda}=1$) & 0.0169 (0.0104) & 3.42 (0.0756) \\
Sticky & Distance ($\xi_{\lambda}=5\times10^{-4}$) & 0.981 (0.0427) & 0.0921 (0.113) \\
Sticky & Projection (VB) & 0.989 (0.00844) & 0.0477 (0.0263) \\
Sticky & Projection (MCMC) & 0.976 (0.194) & 0.0904 (0.635) \\
\ & \ & \ & \ \\
Transitory & Distance ($\xi_{\lambda}=1$) & 0.0218 (0.0341) & 3.382 (0.129) \\
Transitory & Distance ($\xi_{\lambda}=5\times10^{-4}$) & 0.917 (0.139) & 0.322 (0.362) \\
Transitory & Projection (VB) & 0.980 (0.136) & 0.0734 (0.353) \\
Transitory & Projection (MCMC) & 0.962 (0.219) & 0.126 (0.681) \\ \end{tabular} \caption{Simulation results testing prior sensitivity for data generated according to the distance and projection models with both sticky and transitory cluster transition probabilities. The median values are reported, with standard deviations in parentheses.} \label{priorSensTab} \end{table}
\subsection{BIC model selection} \label{BICsimstudy} The last simulation study evaluates the BIC method described in Section \ref{NumberOfClusters}. Due to the increased computational cost to fit the model for several values of $G$, we generated 15 data sets each from the distance model and the projection model (30 total). We fitted both the distance and projection models to each data set for $G\in\{3,\ldots,9\}$, and selected the $G$ with the optimal BIC value.
One important comment is that the BIC method of Section \ref{NumberOfClusters} is not appropriate to select the geometry of the latent space, i.e., choose whether we should use the distance or the projection model. Instead we used the deviance information criterion (DIC) \citep{spiegelhalter2002bayesian} to make this distinction. We originally attempted to use DIC to choose both the geometry and the number of clusters, but DIC performed extremely poorly at determining $G$. DIC was, however, perfect at selecting the geometry (in this simulation study) once the optimal number of clusters had been chosen (via BIC). Therefore based on this simulation study, we recommend to the practitioner the admittedly inelegant procedure of first using the BIC (as described in Section \ref{NumberOfClusters}) to choose $G$ for each geometry, and then using DIC to compare these two models with differing geometries.
Figure \ref{BICFigure} provides the results. As mentioned above, DIC perfectly selected the geometry, and so we only present the BIC values for the model with the correctly specified geometry for varying $G$. Specifically, Figure \ref{BICFigure} gives the average ranking of the BIC values, where low rankings indicate better BIC values. From this we see that the true number of clusters (6) is frequently chosen as the optimal number of clusters, and values of $G$ far from the truth rank poorly.
\begin{figure}
\caption{Simulation results testing the BIC method of selecting the number of clusters $G$ (horizontal axis). The vertical axis represents the average rankings over 15 simulations (for each model), where low values indicate better BIC values. The true number of clusters is 6.}
\label{BICFigure}
\end{figure}
\section{Data analysis} \label{DataAnalysis} \subsection{Newcomb's fraternity data} \label{Fraternity} \cite{newcomb1956prediction} discussed data collected on 17 male college students who were previously unknown to each other. These 17 students, as part of Newcomb's study, agreed to live together for sixteen weeks (though the data set excludes the ninth week due to school vacation). For each week, every student ranks the other 16 students from 1 (most favored) to 16 (least favored).
In this context $Y_t$ is the $t^{th}$ $n\times n$ adjacency matrix whose $i^{th}$ row, denoted ${\bf y}_{it}$, records how the $i^{th}$ actor ranks the other $n-1$ actors. Without loss of generality, assume that the rankings go, in order of most favored to least favored, from 1 to $n-1$. Then we let $\boldsymbol{o}_{it}=(o_{i1t},o_{i2t},\ldots,o_{i(n-1)t})$ denote the $(n-1)\times1$ vector which is the ordering of the rank vector ${\bf y}_{it}$ (e.g., if ${\bf y}_{1t}=(0,4,3,1,2)$ then $\boldsymbol{o}_{1t}=(4,5,3,2)$). We assume that, conditional on $({\cal X}_t,\boldsymbol\Psi)$, ${\bf y}_{it}$ is independent of ${\bf y}_{i't}$, $i\neq i'$.
The likelihood we will use is that used by \cite{sewell2014analysis}, given as \begin{equation}
\mathbb{P}(Y_t|{\cal X}_t,\boldsymbol{s}) =\prod_{i=1}^n\prod_{j=1}^{n-1}\frac{s_{o_{ijt}}\exp(-d_{io_{ijt}t}) }{\sum_{\ell=j}^{n-1}s_{o_{i\ell t}}\exp(-d_{io_{i\ell t}t})}, \label{PL2} \end{equation} where again $\boldsymbol{s}=(s_1,\ldots,s_n)$ are actor-specific parameters indicating each actor's social reach, and for identifiability $\sum_{i=1}^ns_i=1$. This is a Plackett-Luce model \citep{plackett1975analysis}, and as such satisfies Luce's Choice axiom, which can be characterized by having actor $i$ rank actor $j$ over actor $k$ with the same probability whether or not actor $\ell$ is included in the set to be ranked. See \cite{sewell2014analysis} for further motivation and details of this model. As this likelihood depends on the latent positions through the pairwise distances $D({\cal X}_t)$, we implement the distance model of Section \ref{DistanceModel}. This flexible framework allows us to detect communities through the latent positions of the students. Estimation is done by putting a Dirichlet prior on $\boldsymbol{s}$ and incorporating these parameters in the MH within Gibbs MCMC algorithm of Section \ref{MCMC}.
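To make the structure of \eqref{PL2} concrete, the following Python sketch evaluates the contribution of a single actor $i$ at one time point; it is our own illustrative code with hypothetical variable names, and the full log-likelihood is obtained by summing such terms over all actors and time points.
\begin{verbatim}
# Illustrative sketch of the Plackett-Luce term in (PL2) for one actor.
# `order` is the vector o_it (alters listed from most to least favored),
# `dist` maps alter -> latent distance d_{ijt}, `s` maps alter -> social reach.
import math

def plackett_luce_loglik(order, dist, s):
    loglik = 0.0
    for j in range(len(order)):
        num = s[order[j]] * math.exp(-dist[order[j]])
        # denominator runs over the alters not yet ranked at step j
        den = sum(s[k] * math.exp(-dist[k]) for k in order[j:])
        loglik += math.log(num / den)
    return loglik

# toy example: actor i ranks alter 2 first, then 3, then 1
order = [2, 3, 1]
dist = {1: 2.0, 2: 0.5, 3: 1.0}
s = {1: 0.2, 2: 0.5, 3: 0.3}
print(plackett_luce_loglik(order, dist, s))
\end{verbatim}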
For $G=2,\ldots,9$, we ran 100,000 iterations of the MCMC algorithm of Section \ref{MCMC}, thus having a maximum of nine clusters. For each of the 8 chains, we used a short MCMC chain (the same chain for each $G$) following the model with no clustering of \cite{sewell2014analysis} to initialize the latent positions $\{{\cal X}_t\}_{t=1}^T$ and the actor specific likelihood parameters $\boldsymbol{s}$, and for the remaining prior parameters we used the generalized EM algorithm given by \cite{sewell2014model}.
The BIC method described in Section \ref{NumberOfClusters} led us to choose five communities. These BIC values ranged from $-13,531$ to $-13,066$. The MCMC chain converged relatively quickly, as is seen in Figure \ref{fratTrace}, which provides a trace plot of the posterior value for all 100,000 samples. Adjacent in Figure \ref{fratACF} is the ACF plot, which shows that the correlation decays at a reasonable rate and, together with Figure \ref{fratTrace}, indicates that we had good mixing. Geweke's diagnostic test, as implemented in the coda R package \citep{codaRpackage}, yielded a $p$-value of 0.611 using a burn-in of 5,000, implying convergence. \begin{figure}
\caption{Trace plot of the posterior value for each iteration of the MCMC algorithm.}
\label{fratTrace}
\caption{Autocorrelation function (ACF) plot.}
\label{fratACF}
\caption{Diagnostic plots for MCMC estimation corresponding to the fraternity data.}
\end{figure}
The goodness of fit was evaluated using the pseudo-$R^2$ value described in \cite{sewell2014analysis}. The pseudo-$R^2$ takes values in the interval $[0,1)$, where a higher value implies a better fit of the data. After analyzing the data, we obtained a pseudo-$R^2$ value of 0.575. This is slightly less than the value of 0.622 obtained by Sewell and Chen, which we find satisfactory: although the clustering imposes more structure on the prior of the latent positions, we lose little in terms of model fit.
This data set has been analyzed many times since its genesis, and several of these analyses have focused at least in part on community detection. \cite{nakao1993longitudinal}, when analyzing Newcomb's fraternity data, created similarity matrices for each time point and then performed multidimensional scaling to obtain latent network positions, visually determining the communities. \cite{moody2005dynamic} used various visualization methods and also commented on clustering patterns noticed via visual inspection. \cite{sewell2014analysis} provided a detailed analysis of Newcomb's fraternity data, which included a post-hoc analysis of the subgroup formation.
An important advantage of our proposed approach over these ad hoc or post hoc methods is the ability to compute the posterior probabilities of pairwise membership in the same cluster; that is, we can quantify the uncertainty of our hard clustering assignments. With the MCMC output these quantities can easily be computed, and hence we can determine if the previously described results are reasonable according to our analysis. Figure \ref{fratProbs} depicts the pairwise posterior probabilities of two actors belonging to the same cluster at week 7 (chosen for a stabilized representation of the dynamic cluster memberships). Dark shaded regions indicate high probabilities, and light regions indicate low probabilities. If a method estimates that two actors belong to the same cluster, then a square (our proposed method), triangle (Nakao and Romney), circle (Moody et al.), or an asterisk (Sewell and Chen) is given in the appropriate cell. Note that, unlike our proposed method, the other methods do not assign every actor in the network to a cluster. From this figure we see that the methods usually agree that a pair belongs to the same cluster when the pairwise posterior probability is very high, and usually agree that a pair belongs to different clusters when that probability is low. For the numerical values of the pairwise posterior probabilities for week 7 as well as for all other weeks, see the supplementary material.
\begin{figure}
\caption{Pairwise probabilities of actors belonging to the same cluster at week 7. Actors (rows/columns) are ordered according to the MAP estimates of the communities. Different methods' estimates are given by the methods' corresponding shapes in the appropriate cell. Shown are our proposed MAP estimates (square) as well as those from Nakao and Romney (triangle), Moody et al. (circle), and Sewell and Chen (asterisk). Note that all methods other than the proposed do not assign clusters to all actors in the network.}
\label{fratProbs}
\end{figure}
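The pairwise probabilities displayed in Figure \ref{fratProbs} are simple functions of the posterior draws; a minimal sketch (our own illustrative code, not the implementation used for the analysis) is given below, where \texttt{samples} holds the sampled label vectors for a fixed week.
\begin{verbatim}
# Illustrative sketch: posterior probability that actors i and j share a
# cluster, estimated from MCMC samples of the label vector at one week.
import numpy as np

def pairwise_coclustering(samples):
    samples = np.asarray(samples)          # shape (n_draws, n_actors)
    n_draws, n_actors = samples.shape
    prob = np.zeros((n_actors, n_actors))
    for z in samples:
        prob += (z[:, None] == z[None, :]) # indicator that i and j share a label
    return prob / n_draws

toy_draws = [[0, 0, 1, 1], [0, 0, 1, 2], [0, 1, 1, 1]]
print(pairwise_coclustering(toy_draws))
\end{verbatim}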
Figure \ref{frat_overall} shows the latent space with the MAP estimators of the latent positions, thus showing the overall structure of the subgroups of the network. All actors at all time points are shown here. Figure \ref{frat_snapshots} shows the latent positions at weeks 1, 7, and 15. The community structure stabilized at around week 4, after which it did not change at all until week 12, and changed only slightly until week 14. We can characterize our five communities, referencing these groups using the shapes given in Figures \ref{frat_overall} and \ref{frat_snapshots}. The $\Box$ community matches well with communities discovered by Nakao and Romney and by Sewell and Chen, as well as with the main community discovered by Moody et al. Once all the members eventually joined this community within the first few weeks (none departed the community), it remained constant for the remainder of the study until student 14 joined in the final week. The \textbullet\hspace{0.1pc} community seemed to be the opposite, in that it was the most transient. Similar to the \textbullet\hspace{0.1pc} community, the $\bigcirc$ community was also fairly transient, with many students leaving and some joining throughout the study. The + community was characterized by students joining and remaining in the community, and in this manner was similar to the $\Box$ community. The + community was also the most popular group in terms of rankings received, unlike the $\Box$ community, which was more isolated and not very popular, and matches well with a community discovered by Sewell and Chen. The $\triangle$ community evolved into the least popular group (until the least popular student, 16, formed his own community in the last two weeks), consisting of several of those students Nakao and Romney termed ``outliers,'' and several of the students that Sewell and Chen described as having departed the main communities.
\begin{figure}
\caption{Latent positions of all actors at all time points in the fraternity data. The contour lines correspond to the normal distributions which characterize the five communities. The symbols correspond to the community assignments given.}
\label{frat_overall}
\end{figure}
\begin{figure}
\caption{Week 1}
\caption{Week 7}
\caption{Week 15}
\caption{Latent positions of the fraternity data at weeks 1, 7, and 15. The contour lines correspond to the normal distributions which characterize the five communities. The symbols correspond to the community assignments given.}
\label{frat_snapshots}
\end{figure}
As the network was only beginning to form in the first week, it is hardly a surprise that quite a number of actors switch communities, especially during the beginning of the study. Our model was able to capture this evolution of the network, unlike clustering algorithms that assume constant cluster assignments over time. In all there were 15 transitions, 8 of which were during the first three transition periods, and 5 of which were during the last two transition periods. This implies that the subgroup formation of the social network was fairly stable after week four, though the stability of the network faltered at the end of the semester; this last comment regarding the deterioration of the network stability also corroborates statements made by various other researchers \cite[e.g.,][]{nakao1993longitudinal,krivitsky2012rank,sewell2014analysis}.
\subsection{World trade data} \label{WorldTradeData} We consider world trade data with the goals of determining trade blocs and gleaning what information we can from these blocs. We look at annual export and import data between countries during the years 1964 to 1976 (so $T=13$). A (directed) trade relation is established from country $i$ to country $j$, i.e., $Y_{ijt}=1$, if country $i$ exports some non-negligible amount of goods to country $j$ during year $t$. The set of countries is not constant over this period (countries appear and disappear for a variety of reasons), so we include only the $n=111$ countries that exist throughout the entire study period. Thus we have thirteen $111\times111$ binary adjacency matrices. As this is primarily a pedagogical example, we chose these years to strike a balance between a large number of time points and a large number of countries. The data we used were obtained through the Economic Web Institute at {\it http://www.economicswebinstitute.org/worldtrade.htm}, originally obtained through the IMF Direction of Trade Yearbook.
To detect trade blocs within the binary trade relations data, we implemented both the distance model and the projection model, letting $G$ take values from 2 to 9. Using the procedure described in Sections \ref{NumberOfClusters} and \ref{BICsimstudy}, we selected the projection geometry with four clusters; the BIC values for the projection model ranged from $-35,838$ to $-34,448$. Figure \ref{WTTrace} provides a trace plot of the posterior value of all 100,000 samples, and Figure \ref{WTACF} provides the ACF plot. From these we see evidence of convergence and good mixing. Geweke's diagnostic test yielded a $p$-value of 0.210 using a burn in of 35,000, implying convergence. \begin{figure}
\caption{Trace plot of the posterior value for each iteration of the MCMC algorithm.}
\label{WTTrace}
\caption{Autocorrelation function (ACF) plot.}
\label{WTACF}
\caption{Diagnostic plots for MCMC estimation corresponding to the world trade data.}
\end{figure}
Figure \ref{worldTradeFig} shows the posterior mode of the latent positions of all countries at all time points ($nT$ points plotted), where the four communities have been labeled along segments from the origin to the communities' centers. For ease of viewing we have plotted the countries based only on their directional unit vectors, disregarding the magnitudes of the vectors which correspond to the individual effects.
\begin{figure}
\caption{Estimates of latent locations (plotting the unit vectors indicating direction and ignoring the magnitude of the vectors that correspond to individual effects) of countries in the international export/import data. The four communities have been labeled along segments from the origin to the communities' centers.}
\label{worldTradeFig}
\end{figure}
Most of the blocs are relatively densely interconnected, as seen in Table \ref{WTDensities}. The exception is Bloc 1, a global trade bloc with nations representing all inhabited continents, which is loosely interconnected. This community is also the most transitory, as seen in Table \ref{WTBetas}, which gives the estimated values of $\boldsymbol\beta_0$ and $\boldsymbol\beta_h$, $h=1,\ldots,4$. It is intuitive that these two features should coincide, in that blocs whose member nations are not actively trading with one another should be more likely to lose member nations to other trade blocs. Bloc 2 is the largest bloc, averaging 49 nations per year, and involves, with very few exceptions, only eastern hemisphere nations, indicating that geography may be playing a role in the formation of trade blocs. Bloc 3 consists of the U.S.S.R., several eastern European countries, and most of Latin America. This gives quantitative evidence in favor of claims of close ties between the U.S.S.R. and Latin America and of Soviet influence in the western hemisphere \citep[e.g.,][]{blasier1988giant}. Bloc 4 is a community that is indicative of a very interesting vestigial effect from French colonization. France and her former colonies constitute $2/3$ of the countries that belonged to Bloc 4. French colonial policy required her colonies to import only from or through France, export only to France, and to ship using French vessels \citep{grier1999colonial}. That France and her former colonies behave similarly as participants in world trade gives evidence that colonial policy established a longer-term trend.
\begin{table}[t] \centering
\begin{tabular}{r|rrrr}
& 1 & 2 & 3 & 4 \\
\hline 1 & 0.10 & 0.14 & 0.08 & 0.07 \\
2 & 0.14 & 0.25 & 0.15 & 0.12 \\
3 & 0.08 & 0.15 & 0.24 & 0.05 \\
4 & 0.07 & 0.12 & 0.05 & 0.19 \\ \end{tabular} \caption{Densities within each of the four communities (diagonal entries) and between each pair of communities (off-diagonal entries), averaged over all time points. These densities are computed by dividing the total number of edges by the total possible number of edges.} \label{WTDensities} \end{table}
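The densities in Table \ref{WTDensities} follow directly from the adjacency matrices and the estimated bloc memberships; the sketch below (our own illustrative code) computes one such density for a single year. It treats the between-bloc case directionally, so our assumption is that the two directions are pooled before averaging over years to produce the symmetric off-diagonal entries of the table.
\begin{verbatim}
# Illustrative sketch: density of edges from bloc g to bloc h in one year,
# given a directed adjacency matrix Y (no self-loops) and bloc labels z.
import numpy as np

def bloc_density(Y, z, g, h):
    Y, z = np.asarray(Y), np.asarray(z)
    rows = np.where(z == g)[0]
    cols = np.where(z == h)[0]
    sub = Y[np.ix_(rows, cols)]
    possible = rows.size * cols.size
    if g == h:                      # exclude the diagonal within a bloc
        sub = sub - np.diag(np.diag(sub))
        possible -= rows.size
    return sub.sum() / possible

Y = np.array([[0, 1, 1, 0], [1, 0, 0, 0], [0, 0, 0, 1], [1, 0, 1, 0]])
z = np.array([1, 1, 2, 2])
print(bloc_density(Y, z, 1, 1), bloc_density(Y, z, 1, 2))
\end{verbatim}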
\begin{table}[t] \centering
\begin{tabular}{r|rrrr} &\multicolumn{4}{c}{$g$}\\
$h$ & 1 & 2 & 3 & 4 \\
\hline 0 & 0.305 & 0.394 & 0.217 & 0.084 \\
1 & 0.952 & 0.034 & 0.013 & 0.001 \\
2 & 0.002 & 0.983 & 0.009 & 0.006 \\
3 & 0.011 & 0.005 & 0.983 & 0.002 \\
4 & 0.004 & 0.004 & 0.004 & 0.989 \\ \end{tabular} \caption{Estimates of the initial clustering parameters $\boldsymbol\beta_0$ (first row, $h=0$) and the transition parameters $\boldsymbol\beta_h$, $h=1,\ldots,4$ (last four rows).} \label{WTBetas} \end{table}
\section{Discussion} \label{Discussion} Community detection is an important topic in network analysis. We have extended the commonly used distance and projection latent space models to incorporate clustering of dynamic network data, utilizing the temporal information to build the model. This model can handle directed or undirected dynamic network data, and can also be used to model a wide range of weighted network data. We have also given the first, to our knowledge, clustering model corresponding to the projection model in \cite{hoff2002latent}, \cite{durante2014nonparametric}, and others. This model can also handle directed or undirected dynamic network data, and the VB algorithm we have described provides computationally fast estimation of the model.
While the VB algorithm using the projection model for binary networks is relatively fast, the corresponding Gibbs sampler we have also implemented is time intensive for larger networks, as seen in Figure \ref{compTime}. However, this burden could potentially be alleviated by adapting the likelihood approximation method first derived by \cite{raftery2012fast} for binary networks. For the distance model, we expect that creating a VB algorithm would be non-trivial and context specific; we therefore leave that for future research.
In this paper we have discussed a method of selecting the number of clusters and the latent space geometry. However, a difficult topic we have not yet addressed is the selection of the dimension of the latent space. \cite{durante2014nonparametric} developed a non-parametric approach to this problem in a simpler setting, which may inspire similar strategies for selecting the dimensionality of the latent space in our context. A very useful area of future research would then be to construct a unifying model selection method to determine the latent space geometry, the dimension of the latent space, and the number of clusters.
One last comment is that the clustering models that have been proposed are based on the assumption that actors within a cluster are more likely to form edges than actors in different clusters. While this is, we expect, the most common context, there may be certain scenarios in which this is not the case. Instead, actors may take on varying roles, and these roles do not necessitate dense connections within a role; that is, actors in the same community may not be densely connected to each other. In such a case a blockmodel approach would be more appropriate for modeling the data.
\end{document}
\begin{document}
\title {\bf Explicit Relations between Kaneko--Yamamoto Type Multiple Zeta Values and Related Variants} \author{ {Ce Xu${}^{a,}$\thanks{Email: [email protected]}\quad and Jianqiang Zhao${}^{b,}$\thanks{Email: [email protected]}}\\[1mm] \small a. School of Mathematics and Statistics, Anhui Normal University, Wuhu 241000, PRC\\ \small b. Department of Mathematics, The Bishop's School, La Jolla, CA 92037, USA \\ [5mm] Dedicated to Professor Masanobu Kaneko on the occasion of his 60th birthday}
\date{} \maketitle \noindent{\bf Abstract.} In this paper we first establish several integral identities. These integrals are of the form \[\int_0^1 x^{an+b} f(x)\,dx\quad (a\in\{1,2\},\ b\in\{-1,-2\})\] where $f(x)$ is a single-variable multiple polylogarithm function or $r$-variable multiple polylogarithm function or Kaneko--Tsumura A-function (this is a single-variable multiple polylogarithm function of level two). We find that these integrals can be expressed in terms of multiple zeta (star) values and their related variants (multiple $t$-values, multiple $T$-values, multiple $S$-values etc.), and multiple harmonic (star) sums and their related variants (multiple $T$-harmonic sums, multiple $S$-harmonic sums etc.), which are closely related to some special types of Schur multiple zeta values and their generalization. Using these integral identities, we prove many explicit evaluations of Kaneko--Yamamoto multiple zeta values and their related variants. Further, we derive some relations involving multiple zeta (star) values and their related variants.
\noindent{\bf Keywords}: Multiple zeta (star) values, multiple $t$-values, multiple $T$-values, multiple $M$-values, Kaneko--Yamamoto multiple zeta values, Schur multiple zeta values.
\noindent{\bf AMS Subject Classifications (2020):} 11M06, 11M32, 11M99, 11G55.
\section{Introduction and Notations} \subsection{Multiple zeta values (MZVs) and Schur MZVs} We begin with some basic notations. A finite sequence $\bfk \equiv {\bfk_r}:= (k_1,\dotsc, k_r)$ of positive integers is called a \emph{composition}. We put
\[|\bfk|:=k_1+\dotsb+k_r,\quad {\rm dep}(\bfk):=r,\] and call them the weight and the depth of $\bfk$, respectively.
For $0\leq j\leq i$, we adopt the following notations: \begin{align*} &{\bfk}_i^j:=(\underbrace{k_{i+1-j},k_{i+2-j},\dotsc,k_i}_{j\ \text{components}}) \end{align*} and \begin{align*} &{\bfk}_i\equiv{\bfk}_i^i:=(k_1,k_2,\dotsc,k_i), \end{align*} where ${\bfk}_i^0:=\emptyset$\quad $(i\geq 0)$. If $i<j$, then ${\bfk}_i^j:=\emptyset$.
For a composition $\bfk_r=(k_1,\dotsc,k_r)$ and positive integer $n$, the multiple harmonic sums (MHSs) and multiple harmonic star sums (MHSSs) are defined by \begin{align*} \zeta_n(k_1,\dotsc,k_r):=\sum\limits_{0<m_1<\cdots<m_r\leq n } \frac{1}{m_1^{k_1}m_2^{k_2}\cdots m_r^{k_r}} \end{align*} and \begin{align*} \zeta^\star_n(k_1,\dotsc,k_r):=\sum\limits_{0<m_1\leq \cdots\leq m_r\leq n} \frac{1}{m_1^{k_1}m_2^{k_2}\cdots m_r^{k_r}}, \end{align*} respectively. If $n<r$ then ${\zeta_n}(\bfk_r):=0$ and ${\zeta _n}(\emptyset )={\zeta^\star_n}(\emptyset ):=1$. The multiple zeta values (abbr. MZVs) and the multiple zeta-star values (abbr. MZSVs) are defined by \begin{equation*} \zeta(\bfk):=\lim_{n\to\infty } \zeta_n(\bfk) \qquad\text{and}\qquad \zeta^\star(\bfk):=\lim_{n\to\infty } \zeta_n^\star(\bfk), \end{equation*} respectively. These series converge if and only if $k_r\ge2$, so we call a composition $\bfk_r$ \emph{admissible} if this is the case.
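As a purely numerical illustration of this notation (a Python sketch, not needed for anything that follows), truncated sums such as $\zeta_n(1,2)$ can be computed directly and compared with known limits; for instance $\zeta(1,2)=\zeta(3)$ by Euler's classical identity, and the truncated values approach this limit slowly, with an error of order $(\log n)/n$.
\begin{verbatim}
# Illustrative numerical check: zeta_n(1,2) -> zeta(1,2) = zeta(3).
def mhs(ks, n):
    """Multiple harmonic sum zeta_n(k_1,...,k_r): sum over 0 < m_1 < ... < m_r <= n."""
    r = len(ks)
    def rec(start, depth):
        if depth == r:
            return 1.0
        return sum(rec(m + 1, depth + 1) / m ** ks[depth] for m in range(start, n + 1))
    return rec(1, 0)

zeta3 = sum(1.0 / m ** 3 for m in range(1, 100000))  # truncated zeta(3) = 1.2020569...
print(mhs((1, 2), 1000), zeta3)                      # roughly 1.19 versus 1.20206
\end{verbatim}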
The systematic study of multiple zeta values began in the early 1990s with the works of Hoffman \cite{H1992} and Zagier \cite{DZ1994}. Due to their surprising and sometimes mysterious appearance in the study of many branches of mathematics and theoretical physics, these special values have attracted a lot of attention and interest in the past three decades (for example, see the book by the second author \cite{Z2016}). A common generalization of the MZVs and MZSVs is given by the Schur multiple zeta values \cite{MatsumotoNakasuji2020,NPY2018}, which are defined using skew Young tableaux. For example, for integers $a,b,d,e,f\geq 1$ and $c,g \geq 2$, the following sum is an example of a Schur multiple zeta value \begin{equation}\label{equ:SchurEg}
\begin{ytableau}
\none & a& b & c \\
d& e & \none\\
f & g & \none
\end{ytableau}}\ \right)
= \sum_{{\scriptsize
\arraycolsep=1.4pt\def0.8{0.8}
\begin{array}{ccccccc}
&&m_a&\leq&m_b&\leq& m_c \\
&&\vsmall&& && \\
m_d&\leq&m_e&&&& \\
\vsmall&&\vsmall&&& \\
m_f&\leq&m_g&&&&
\end{array} }} \frac{1}{m_a^{\,a} \,\, m_b^b \,\, m_c^c \,\, m_d^d \,\, m_e^e \,\, m_f^f \,\, m_g^g} \,. \end{equation}
In this paper, we shall study some families of variations of MZVs. First, consider the following special form of the Schur multiple zeta values. \begin{defn} (cf. \cite{KY2018}) For any two compositions of positive integers $\bfk=(k_1,\dotsc,k_r)$ and $\bfl=(l_1,\dotsc,l_s)$, define \begin{align}\label{equ:KYMZVs} \zeta(\bfk\circledast{\bfl^\star}) :=&\sum\limits_{0<m_1<\cdots<m_r=n_s\geq \cdots \geq n_1 > 0} \frac{1}{m_1^{k_1}\cdots m_r^{k_r}n_1^{l_1}\cdots n_s^{l_s}} \\ =& \sum\limits_{n=1}^\infty \frac{\zeta_{n-1}(k_1,\dotsc,k_{r-1})\zeta^\star_n(l_1,\dotsc,l_{s-1})}{n^{k_r+l_s}}. \notag \end{align} We call them \emph{Kaneko--Yamamoto multiple zeta values} (K--Y MZVs for short). \end{defn}
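For instance, in the simplest case $r=s=1$ we have $\zeta_{n-1}(\emptyset)=\zeta^\star_n(\emptyset)=1$, so \eqref{equ:KYMZVs} reduces to $\zeta((k_1)\circledast(l_1)^\star)=\sum_{n=1}^\infty n^{-k_1-l_1}=\zeta(k_1+l_1)$; the interesting phenomena therefore only appear when at least one of the two compositions has depth greater than one.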
The K--Y MZVs defined by \eqref{equ:KYMZVs} are the special cases of the Schur multiple zeta values $\zeta_\gl({\boldsymbol{\sl{s}}})$ given by the following skew Young tableaux of anti-hook type \[ {\boldsymbol{\sl{s}}}={\footnotesize \ytableausetup{centertableaux, boxsize=1.8em} \begin{ytableau}
\none & \none & \none & \tikznode{a1}{\scriptstyle k_1} \\
\none & \none & \none & \vdots \\
\none & \none & \none & \scriptstyle k_{r-1} \\
\tikznode{a2}{\scriptstyle l_1} & \cdots & \scriptstyle l_{s-1} & \tikznode{a3}{\scriptstyle x} \end{ytableau}} \] where $x=k_r+l_s$ and $\gl$ is simply the Young diagram underlying the above tableau.
\subsection{Variations of MZVs with even/odd summation indices} One may modify the definition MZVs by restricting the summation indices to even/odd numbers. These values are apparently NOT in the class of Schur multiple zeta values. For instance, in recent papers \cite{KanekoTs2018b,KanekoTs2019}, Kaneko and Tsumura introduced a new kind of multiple zeta values of level two, called \emph{multiple T-values} (MTVs), defined for admissible compositions $\bfk=(k_1,\dotsc,k_r)$ by \begin{align} T(\bfk):&=2^r \sum_{0<m_1<\cdots<m_r\atop m_i\equiv i\ {\rm mod}\ 2} \frac{1}{m_1^{k_1}m_2^{k_2}\cdots m_r^{k_r}}\nonumber\\ &=2^r\sum\limits_{0<n_1<\cdots<n_r} \frac{1}{(2n_1-1)^{k_1}(2n_2-2)^{k_2}\cdots (2n_r-r)^{k_r}}. \end{align} This is in contrast to Hoffman's \emph{multiple $t$-values} (MtVs) defined in \cite{H2019} as follows: for admissible compositions $\bfk=(k_1,\dotsc,k_r)$ \begin{align*} t(\bfk):=\sum_{0<m_1<\cdots<m_r\atop \forall m_i: odd} \frac{1}{m_1^{k_1}m_2^{k_2}\cdots m_r^{k_r}} =\sum\limits_{0<n_1<\cdots<n_r} \frac{1}{(2n_1-1)^{k_1}(2n_2-1)^{k_2}\cdots (2n_r-1)^{k_r}}. \end{align*} Moreover, in \cite{H2019} Hoffman also defined its star version, called \emph{multiple $t$-star value} (MtSVs), as follows: \begin{align*} t^\star(\bfk):=\sum_{0<m_1\leq \cdots\leq m_r\atop \forall m_i: odd} \frac{1}{m_1^{k_1}m_2^{k_2}\cdots m_r^{k_r}} =\sum\limits_{0<n_1\leq \cdots\leq n_r} \frac{1}{(2n_1-1)^{k_1}(2n_2-1)^{k_2}\cdots (2n_r-1)^{k_r}}. \end{align*} Very recently, the authors have defined another variant of multiple zeta values in \cite{XZ2020}, called \emph{multiple mixed values} or \emph{multiple $M$-values} (MMVs for short). For $\bfeps=(\varepsilon_1, \dots, \varepsilon_r)\in\{\pm 1\}^r$ and admissible compositions $\bfk=(k_1,\dotsc,k_r)$, \begin{align} M(\bfk;\bfeps):&=\sum_{0<m_1<\cdots<m_r} \frac{(1+\varepsilon_1(-1)^{m_1}) \cdots (1+\varepsilon_r(-1)^{m_r})}{m_1^{k_1} \cdots m_r^{k_r}}\nonumber\\
&=\sum_{0<m_1<\cdots<m_r\atop 2| m_j\ \text{if}\ \varepsilon_j=1\ \text{and}\ 2\nmid m_j\ \text{if}\ \varepsilon_j=-1} \frac{2^r}{m_1^{k_1}m_2^{k_2} \cdots m_r^{k_r}}. \end{align} For brevity, we put a check on top of the component $k_j$ if $\varepsilon_j=-1$. For example, \begin{align*} M(1,2,\check{3})=&\, \sum_{0<m_1<m_2<m_3} \frac{(1+(-1)^{m_1}) (1+(-1)^{m_2}) (1-(-1)^{m_3})}{m_1 m_2^{2}m_3^{3}}\\ =&\, \sum_{0<\ell<m<n} \frac{8}{(2\ell) (2m)^{2} (2n-1)^{3}}. \end{align*} It is obvious that MtVs satisfy the series stuffle relation; however, it is nontrivial to see that MTVs can be expressed using iterated integrals and satisfy both the duality relations (see \cite[Theorem 3.1]{KanekoTs2019}) and the integral shuffle relations (see \cite[Theorem 2.1]{KanekoTs2019}). Similar to MZVs, MMVs satisfy both series stuffle relations and the integral shuffle relations. Moreover, in \cite{XZ2020}, we also introduced and studied a class of MMVs that is opposite to MTVs, called \emph{multiple S-values} (MSVs). For admissible compositions $\bfk=(k_1,\dotsc,k_r)$, \begin{align} S(\bfk):&=2^r \sum_{0<m_1<\cdots<m_r\atop m_i\equiv i-1\ {\rm mod}\ 2} \frac{1}{m_1^{k_1}m_2^{k_2}\cdots m_r^{k_r}}\nonumber\\ &=2^r \sum_{0<n_1<\cdots<n_r} \frac{1}{(2n_1)^{k_1}(2n_2-1)^{k_2}\cdots (2n_r-r+1)^{k_r}}. \end{align} It is clear that every MMV can be written as a linear combination of alternating MZVs (also referred to as Euler sums or colored multiple zeta values) defined as follows. For $\bfk\in\mathbb{N}^r$ and $\bfeps\in\{\pm 1\}^r$, if $(k_r,\eps_r)\ne(1,1)$ (called the \emph{admissible} case) then \begin{equation*}
\zeta(\bfk;\bfeps):=\sum\limits_{0<m_1<\cdots<m_r} \frac{\eps_1^{m_1}\cdots \eps_r^{m_r} }{ m_1^{k_1}\cdots m_r^{k_r}}. \end{equation*} We may compactly indicate the presence of an alternating sign as follows. Whenever $\eps_j=-1$, we place a bar over the corresponding integer exponent $k_j$. For example, \begin{equation*} \zeta(\bar 2,3,\bar 1,4)=\zeta( 2,3,1,4;-1,1,-1,1). \end{equation*} Similarly, a star-version of alternating MZVs (called \emph{alternating multiple zeta-star values}) is defined by \begin{equation*}
\zeta^\star(\bfk;\bfeps):=\sum\limits_{0<m_1\leq\cdots\leq m_r} \frac{\eps_1^{m_1}\cdots \eps_r^{m_r} }{ m_1^{k_1}\cdots m_r^{k_r}}. \end{equation*} Deligne showed that the dimension of the rational space generated by alternating MZVs of weight $w$ is bounded by the Fibonacci number $F_w$, where $F_0=F_1=1$. In \cite[Theorem 7.1]{XZ2020} we showed that the dimension of the rational space generated by MMVs of weight $w$ is bounded by $F_w-1$. The missing piece is the one-dimensional space generated by $\ln^w 2$.
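As a quick numerical illustration of these level-two values (a Python sketch, included only as a sanity check), in depth one the definitions give $T(k)=2t(k)$ and $t(2)=\sum_{m\ \mathrm{odd}}m^{-2}=\pi^2/8$:
\begin{verbatim}
# Illustrative check: t(2) = pi^2/8 and, in depth one, T(k) = 2*t(k).
import math

def t_depth1(k, terms=200000):
    return sum(1.0 / (2 * n - 1) ** k for n in range(1, terms + 1))

print(t_depth1(2), math.pi ** 2 / 8)   # both approximately 1.2337
print(2 * t_depth1(2))                 # T(2) = 2*t(2) = pi^2/4
\end{verbatim}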
\subsection{Variations of Kaneko--Yamamoto MZVs with even/odd summation indices} Now, we introduce the $T$-variant of Kaneko--Yamamoto MZVs. For positive integers $m$ and $n$ such that $n\ge m$, we define \begin{align*} &D_{n,m} := \left\{
\begin{array}{ll} \Big\{(n_1,n_2,\dotsc,n_m)\in\mathbb{N}^{m} \mid 0<n_1\leq n_2< n_3\leq \cdots \leq n_{m-1}<n_{m}\leq n \Big\},\phantom{\frac12}\ & \hbox{if $2\nmid m$;} \\ \Big\{(n_1,n_2,\dotsc,n_m)\in\mathbb{N}^{m} \mid 0<n_1\leq n_2< n_3\leq \cdots <n_{m-1}\leq n_{m}<n \Big\},\phantom{\frac12}\ & \hbox{if $2\mid m$,}
\end{array} \right. \\ &E_{n,m} := \left\{
\begin{array}{ll} \Big\{(n_1,n_2,\dotsc,n_{m})\in\mathbb{N}^{m}\mid 1\leq n_1<n_2\leq n_3< \cdots< n_{m-1}\leq n_{m}< n \Big\},\phantom{\frac12}\ & \hbox{if $2\nmid m$;} \\ \Big\{(n_1,n_2,\dotsc,n_{m})\in\mathbb{N}^{m}\mid 1\leq n_1<n_2\leq n_3< \cdots \leq n_{m-1}< n_{m}\leq n \Big\}, \phantom{\frac12}\ & \hbox{if $2\mid m$.}
\end{array} \right. \end{align*}
\begin{defn} (\cite[Defn. 1.1]{XZ2020}) For positive integer $m$, define \begin{align} &T_n({\bfk_{2m-1}}):= \sum_{\bfn\in D_{n,2m-1}} \frac{2^{2m-1}}{(\prod_{j=1}^{m-1} (2n_{2j-1}-1)^{k_{2j-1}}(2n_{2j})^{k_{2j}})(2n_{2m-1}-1)^{k_{2m-1}}},\label{MOT}\\ &T_n({\bfk_{2m}}):= \sum_{\bfn\in D_{n,2m}} \frac{2^{2m}}{\prod_{j=1}^{m} (2n_{2j-1}-1)^{k_{2j-1}}(2n_{2j})^{k_{2j}}},\label{MET}\\ &S_n({\bfk_{2m-1}}):= \sum_{\bfn\in E_{n,2m-1}} \frac{2^{2m-1}}{(\prod_{j=1}^{m-1} (2n_{2j-1})^{k_{2j-1}}(2n_{2j}-1)^{k_{2j}})(2n_{2m-1})^{k_{2m-1}}},\label{MOS}\\ &S_n({\bfk_{2m}}):= \sum_{\bfn\in E_{n,2m}} \frac{2^{2m}}{\prod_{j=1}^{m} (2n_{2j-1})^{k_{2j-1}}(2n_{2j}-1)^{k_{2j}}},\label{MES} \end{align} where $T_n({\bfk_{2m-1}}):=0$ if $n<m$, and $T_n({\bfk_{2m}})=S_n({\bfk_{2m-1}})=S_n({\bfk_{2m}}):=0$ if $n\leq m$. Moreover, for convenience sake, we set $T_n(\emptyset)=S_n(\emptyset):=1$. We call \eqref{MOT} and \eqref{MET} \emph{multiple $T$-harmonic sums} ({\rm MTHSs} for short), and call \eqref{MOS} and \eqref{MES} \emph{multiple $S$-harmonic sums} ({\rm MSHSs} for short). \end{defn} In \cite{XZ2020}, we used the MTHSs and MSHSs to define the convoluted $T$-values $T({\bfk}\circledast {\bfl})$, which can be regarded as a $S$- or $T$-variant of K--Y MZVs.
\begin{defn} (\cite[Defn. 1.2]{XZ2020}) For positive integers $m$ and $p$, the \emph{convoluted $T$-values} are defined by \begin{align}\label{equ:schur1} T({\bfk_{2m}}\circledast{\bfl_{2p}})=&\,2\sum\limits_{n=1}^\infty \frac{T_n({\bfk_{2m-1}})T_n({\bfl_{2p-1}})}{(2n)^{k_{2m}+l_{2p}}},\\ T({\bfk_{2m-1}}\circledast{\bfl_{2p-1}})=&\,2\sum\limits_{n=1}^\infty \frac{T_n({\bfk_{2m-2}})T_n({\bfl_{2p-2}})}{(2n-1)^{k_{2m-1}+l_{2p-1}}},\\ T({\bfk_{2m}}\circledast{\bfl_{2p-1}})=&\,2\sum\limits_{n=1}^\infty \frac{T_n({\bfk_{2m-1}})S_n({\bfl_{2p-2}})}{(2n)^{k_{2m}+l_{2p-1}}},\\ T({\bfk_{2m-1}}\circledast{\bfl_{2p}})=&\,2\sum\limits_{n=1}^\infty \frac{T_n({\bfk_{2m-2}})S_n({\bfl_{2p-1}})}{(2n-1)^{k_{2m-1}+l_{2p}}}.\label{equ:schur4} \end{align} We may further define the \emph{convoluted $S$-values} by \begin{align}\label{equ:schur5} S({\bfk_{2m}}\circledast{\bfl_{2p}})=&\,2\sum\limits_{n=1}^\infty \frac{S_n({\bfk_{2m-1}})S_n({\bfl_{2p-1}})}{(2n-1)^{k_{2m}+l_{2p}}},\\ S({\bfk_{2m-1}}\circledast{\bfl_{2p-1}})=&\,2\sum\limits_{n=1}^\infty \frac{S_n({\bfk_{2m-2}})S_n({\bfl_{2p-2}})}{(2n)^{k_{2m-1}+l_{2p-1}}}.\label{equ:schur6} \end{align} \end{defn} In view of the interpretation of K--Y MZVs as special Schur MZVs, one may wonder if Schur MZVs can be generalized so that the convoluted $S$- and $T$-values are special cases.
\subsection{Schur MZVs modulo $N$} We now generalize the concept of Schur multiple zeta functions (resp. values) to Schur multiple zeta functions (resp. values) modulo any positive integer $N$, the case $N=2$ of which contains all the MMVs as special cases.
It turns out that when $N=2$ the only difference between these values and the Schur MZVs is that each box in the Young diagram is decorated by either ``0'' or ``1'' at its upper left corner so that the running index appearing in that box must be either even or odd. For example, a variation of the example in \eqref{equ:SchurEg} can be given as follows: \begin{equation*}
\begin{ytableau}
\none & {}^{\text{0}} a& {}^{\text{0}}b & {}^{\text{1}}c \\
{}^{\text{1}}d& {}^{\text{1}}e & \none\\
{}^{\text{0}}f & {}^{\text{1}}g & \none
\end{ytableau}}\ \right)
:= \sum_{{\scriptsize
\arraycolsep=1.4pt\def0.8{0.8}
\begin{array}{cccccccl}
&&m_a&\leq&m_b&\leq& m_c \quad &\ 2|m_a,2|m_b,2\nmid m_c\\
&&\vsmall&& &&&\ \\
m_d&\leq&m_e&&&& &\ 2\nmid m_d,2\nmid m_e\\
\vsmall&&\vsmall&&&& \\
m_f&\leq&m_g&&&& &\ 2|m_f,2\nmid m_g
\end{array} }} \frac{2^7}{m_a^{\,a} \,\, m_b^b \,\, m_c^c \,\, m_d^d \,\, m_e^e \,\, m_f^f \,\, m_g^g} \,. \end{equation*}
We now briefly describe this idea in general. For a skew Young diagram $\gl$ with $n$ boxes (denoted by $\sharp(\gl)=n$), let $T(\gl, X)$ be the set of all Young tableaux of shape $\gl$ over a set $X$. Let $D(\gl)=\{(i,j): 1\le i\le r, \ga_i\le j\le \gb_i\}=\{(i,j): 1\le j\le s, a_j\le i\le b_j\}$ be the skew Young diagram of $\gl$ so that $(i,j)$ refers to the box on the $i$th row and $j$th column of $\gl$. Fixing any positive integer $N$, we may decorate $D(\gl)$ by putting a residue class $\pi_{ij}$ mod $N$ at the upper left corner of the $(i,j)$-th box. We call such a decorated diagram a Young diagram modulo $N$, denoted by $\gl^\pi$. Further, we define the set of semi-standard skew Young tableaux of shape $\gl^\pi$ by $$
\SSYT(\gl^\pi):=\left\{(m_{i,j})\in T(\gl, \mathbb{N})\left|
\aligned
& m_{i,\ga_i}\le m_{i,\ga_i+1}\le \dotsm \le m_{i,\gb_i},\ \ m_{a_j,j}< m_{a_j+1,j}<\dotsm<m_{b_j,j},\\
& m_{i,j}\equiv \pi_{i,j} \pmod{N} \ \ \forall 1\le i\le r, \ga_i\le j\le \gb_i
\endaligned \right.\right\}. $$ For ${\boldsymbol{\sl{s}}} = (s_{i,j} )\in T(\gl,\mathbb{C})$, the \emph{Schur multiple zeta function} \emph{modulo} $N$ associated with $\gl^\pi$ is defined by the series \begin{equation*} \zeta_{\gl^\pi}({\boldsymbol{\sl{s}}}):=\sum_{M\in \SSYT(\gl^\pi)} \frac{2^{\sharp(\gl)} }{M^{\boldsymbol{\sl{s}}}} \end{equation*} where $M^{\boldsymbol{\sl{s}}}=(m_{i,j})^{\boldsymbol{\sl{s}}}:=\prod{}_{(i,j)\in D(\gl)} m_{i,j}^{s_{i,j}}$. Similar to \cite[Lemma 2.1]{NPY2018}, it is not too hard to prove that the above series converges absolutely whenever ${\boldsymbol{\sl{s}}}\in W_\gl$ where $$
W_\gl:=\left\{{\boldsymbol{\sl{s}}}=(s_{i,j})\in T(\gl, \mathbb{C}) \left|
\aligned
& \Re(s_{i,j})\ge 1 \ \forall (i,j)\in D(\gl)\setminus C(\gl) \\
& \Re(s_{i,j})> 1 \ \forall (i,j)\in C(\gl)
\endaligned \right.\right\}, $$ where $C(\gl)$ is the set of all corners of $\gl$. But this domain of convergence is not ideal. To define the most accurate domain of convergence we need the following terminology. Given any two boxes $B_1$ and $B_2$ in $\gl$ we define $B_1\preceq B_2$ if $B_1$ is to the left or above $B_2$, namely, $B_1$ must be in the gray area in the picture \begin{tikzpicture}[scale=0.05]
\filldraw [gray!30!white] (0,1) -- (3,1) -- (3,4) -- (8,4) -- (8,6) -- (0,6) -- (0,1); \fill[black] (3,3) -- (4,3) -- (4,4) -- (3,4) -- (3,3);
\end{tikzpicture} where $B_2$ is the black box.
An \emph{allowable move} along a path from box $B$ is a move to a box $C$ such that $B\preceq C$ and all boxes above $C$ and to the left of $C$, if there are any, are already covered by the previous moves along the path. An \emph{allowable path} in a skew Young diagram is a sequence of allowable moves covering all the boxes without backtracking. Then the domain of convergence of $\zeta_{\gl^\pi}({\boldsymbol{\sl{s}}})$ is the subset of $W_\gl$ defined by the condition that $\Re( \sum_{s_{ij}\in\mathcal{P}_\ell} s_{ij})>\ell$ for all allowable paths $\mathcal{P}$ and all $\ell\ge 1$, where $\mathcal{P}_\ell$ is the sub-path of $\mathcal{P}$ covering the last $\ell$ boxes (ending at a corner). For example, the diagram $ { \ytableausetup{centertableaux, boxsize=.5em}\begin{ytableau}
\none & \none & \scriptscriptstyle 1 & \scriptscriptstyle 4 & \scriptscriptstyle 6 \\
\scriptscriptstyle 2 & \scriptscriptstyle 5 & \scriptscriptstyle 7 & \none & \none \\
\scriptscriptstyle 3 & \none & \none & \none & \none \\
\end{ytableau}} $ ($1\to 2\to \cdots \to 7$) shows an allowable path in a skew Young diagram.
Similar to K--Y MZVs, the above convoluted $S$- and $T$-values are all special cases of Schur MZVs modulo 2 corresponding to anti-hook type Young diagrams. The six convoluted $S$- or $T$-values in \eqref{equ:schur1}-\eqref{equ:schur6} are all given by mod 2 Schur MZVs $\zeta_{\gl_j^{\pi_j}}$ ($1\le j\le 6$) below, respectively: $$ \aligned & \gl_1^{\pi_1}={ \ytableausetup{centertableaux, boxsize=1.8em}\begin{ytableau}
\none & \none & \none & \none & \tikznode{a1}{~} \\
\none & \none & \none & \none & \tikznode{a2}{~} \\
\none & \none & \none & \none & \vdots \\
\none & \none & \none & \none & \tikznode{a3}{~} \\
\tikznode{a5}{~} &\tikznode{a4}{~}& \cdots & \tikznode{a7}{~} & \tikznode{a6}{~} \end{ytableau}},\qquad \gl_2^{\pi_2}={ \ytableausetup{centertableaux, boxsize=1.8em}\begin{ytableau}
\none & \none & \none & \none & \tikznode{b1}{~} \\
\none & \none & \none & \none & \tikznode{b2}{~} \\
\none & \none & \none & \none & \vdots \\
\none & \none & \none & \none & \tikznode{b8}{~} \\
\tikznode{b5}{~} &\tikznode{b4}{~}& \cdots & \tikznode{b6}{~} & \tikznode{b7}{~} \end{ytableau}},\qquad \gl_3^{\pi_3}={ \ytableausetup{centertableaux, boxsize=1.8em}\begin{ytableau}
\none & \none & \none & \none & \tikznode{c1}{~} \\
\none & \none & \none & \none & \tikznode{c2}{~} \\
\none & \none & \none & \none & \vdots \\
\none & \none & \none & \none & \tikznode{c3}{~} \\
\tikznode{c4}{~} &\tikznode{c5}{~} & \cdots & \tikznode{c7}{~}& \tikznode{c6}{~} \end{ytableau}},\\ &\gl_4^{\pi_4}={ \ytableausetup{centertableaux, boxsize=1.8em}\begin{ytableau}
\none & \none & \none & \none & \tikznode{d1}{~} \\
\none & \none & \none & \none & \tikznode{d2}{~} \\
\none & \none & \none & \none & \vdots \\
\none & \none & \none & \none & \tikznode{d8}{~} \\
\tikznode{d4}{~} &\tikznode{d5}{~} & \cdots &\tikznode{d6}{~} & \tikznode{d7}{~} \end{ytableau}},\qquad \gl_5^{\pi_5}={ \ytableausetup{centertableaux, boxsize=1.8em}\begin{ytableau}
\none & \none & \none & \none & \tikznode{e1}{~} \\
\none & \none & \none & \none & \tikznode{e2}{~} \\
\none & \none & \none & \none & \vdots \\
\none & \none & \none & \none & \tikznode{e3}{~} \\
\tikznode{e5}{~} &\tikznode{e4}{~}& \cdots & \tikznode{e7}{~} & \tikznode{e6}{~} \end{ytableau}},\qquad \gl_6^{\pi_6}={ \ytableausetup{centertableaux, boxsize=1.8em}\begin{ytableau}
\none & \none & \none & \none & \tikznode{f1}{~} \\
\none & \none & \none & \none & \tikznode{f2}{~} \\
\none & \none & \none & \none & \vdots \\
\none & \none & \none & \none & \tikznode{f8}{~} \\
\tikznode{f4}{~} &\tikznode{f5}{~} & \cdots & \tikznode{f6}{~}& \tikznode{f7}{~} \end{ytableau}} \endaligned $$ \tikz[overlay,remember picture]{ \draw[decorate,decoration={brace},thick] ([yshift=0.5em,xshift=-0.5em]a1.north west) -- ([yshift=0.5em,xshift=-0.5em]a1.north west) node[midway]{${}^\text{1}$}; \draw[decorate,decoration={brace},thick] ([yshift=0mm,xshift=0mm]a1.center) -- ([yshift=0mm,xshift=0mm]a1.center) node[midway]{$k_1$}; \draw[decorate,decoration={brace},thick] ([yshift=0.5em,xshift=-0.5em]a2.north west) -- ([yshift=0.5em,xshift=-0.5em]a2.north west) node[midway]{${}^\text{0}$}; \draw[decorate,decoration={brace},thick] ([yshift=0mm,xshift=0mm]a2.center) -- ([yshift=0mm,xshift=0mm]a2.center) node[midway]{$k_2$}; \draw[decorate,decoration={brace},thick] ([yshift=0.5em,xshift=-0.5em]a3.north west) -- ([yshift=0.5em,xshift=-0.5em]a3.north west) node[midway]{${}^\text{1}$}; \draw[decorate,decoration={brace},thick] ([yshift=0mm,xshift=0mm]a3.center) -- ([yshift=0mm,xshift=0mm]a3.center) node[midway]{$k_{m'}$}; \draw[decorate,decoration={brace},thick] ([yshift=0.5em,xshift=-0.5em]a4.north west) -- ([yshift=0.5em,xshift=-0.5em]a4.north west) node[midway]{${}^\text{0}$}; \draw[decorate,decoration={brace},thick] ([yshift=0mm,xshift=0mm]a4.center) -- ([yshift=0mm,xshift=0mm]a4.center) node[midway]{$l_2$}; \draw[decorate,decoration={brace},thick] ([yshift=0.5em,xshift=-0.5em]a5.north west) -- ([yshift=0.5em,xshift=-0.5em]a5.north west) node[midway]{${}^\text{1}$}; \draw[decorate,decoration={brace},thick] ([yshift=0mm,xshift=0mm]a5.center) -- ([yshift=0mm,xshift=0mm]a5.center) node[midway]{$l_1$}; \draw[decorate,decoration={brace},thick] ([yshift=0.5em,xshift=-0.5em]a6.north west) -- ([yshift=0.5em,xshift=-0.5em]a6.north west) node[midway]{${}^\text{0}$}; \draw[decorate,decoration={brace},thick] ([yshift=0mm,xshift=0mm]a6.center) -- ([yshift=0mm,xshift=0mm]a6.center) node[midway]{$x_1$}; \draw[decorate,decoration={brace},thick] ([yshift=0.5em,xshift=-0.5em]a7.north west) -- ([yshift=0.5em,xshift=-0.5em]a7.north west) node[midway]{${}^\text{1}$}; \draw[decorate,decoration={brace},thick] ([yshift=0mm,xshift=0mm]a7.center) -- ([yshift=0mm,xshift=0mm]a7.center) node[midway]{$l_{p'}$}; \draw[decorate,decoration={brace},thick] ([yshift=0.5em,xshift=-0.5em]b1.north west) -- ([yshift=0.5em,xshift=-0.5em]b1.north west) node[midway]{${}^\text{1}$}; \draw[decorate,decoration={brace},thick] ([yshift=0mm,xshift=0mm]b1.center) -- ([yshift=0mm,xshift=0mm]b1.center) node[midway]{$k_1$}; \draw[decorate,decoration={brace},thick] ([yshift=0.5em,xshift=-0.5em]b2.north west) -- ([yshift=0.5em,xshift=-0.5em]b2.north west) node[midway]{${}^\text{0}$}; \draw[decorate,decoration={brace},thick] ([yshift=0mm,xshift=0mm]b2.center) -- ([yshift=0mm,xshift=0mm]b2.center) node[midway]{$k_2$}; \draw[decorate,decoration={brace},thick] ([yshift=0.5em,xshift=-0.5em]b8.north west) -- ([yshift=0.5em,xshift=-0.5em]b8.north west) node[midway]{${}^\text{0}$}; \draw[decorate,decoration={brace},thick] ([yshift=0mm,xshift=0mm]b8.center) -- ([yshift=0mm,xshift=0mm]b8.center) node[midway]{$k_{m''}$}; \draw[decorate,decoration={brace},thick] ([yshift=0.5em,xshift=-0.5em]b4.north west) -- ([yshift=0.5em,xshift=-0.5em]b4.north west) node[midway]{${}^\text{0}$}; \draw[decorate,decoration={brace},thick] ([yshift=0mm,xshift=0mm]b4.center) -- ([yshift=0mm,xshift=0mm]b4.center) node[midway]{$l_2$}; \draw[decorate,decoration={brace},thick] ([yshift=0.5em,xshift=-0.5em]b5.north west) -- ([yshift=0.5em,xshift=-0.5em]b5.north west) 
node[midway]{${}^\text{1}$}; \draw[decorate,decoration={brace},thick] ([yshift=0mm,xshift=0mm]b5.center) -- ([yshift=0mm,xshift=0mm]b5.center) node[midway]{$l_1$}; \draw[decorate,decoration={brace},thick] ([yshift=0.5em,xshift=-0.5em]b6.north west) -- ([yshift=0.5em,xshift=-0.5em]b6.north west) node[midway]{${}^\text{0}$}; \draw[decorate,decoration={brace},thick] ([yshift=0mm,xshift=0mm]b6.center) -- ([yshift=0mm,xshift=0mm]b6.center) node[midway]{$l_{p''}$}; \draw[decorate,decoration={brace},thick] ([yshift=0.5em,xshift=-0.5em]b7.north west) -- ([yshift=0.5em,xshift=-0.5em]b7.north west) node[midway]{${}^\text{1}$}; \draw[decorate,decoration={brace},thick] ([yshift=0mm,xshift=0mm]b7.center) -- ([yshift=0mm,xshift=0mm]b7.center) node[midway]{$x_2$}; \draw[decorate,decoration={brace},thick] ([yshift=0.5em,xshift=-0.5em]c1.north west) -- ([yshift=0.5em,xshift=-0.5em]c1.north west) node[midway]{${}^\text{1}$}; \draw[decorate,decoration={brace},thick] ([yshift=0mm,xshift=0mm]c1.center) -- ([yshift=0mm,xshift=0mm]c1.center) node[midway]{$k_1$}; \draw[decorate,decoration={brace},thick] ([yshift=0.5em,xshift=-0.5em]c2.north west) -- ([yshift=0.5em,xshift=-0.5em]c2.north west) node[midway]{${}^\text{0}$}; \draw[decorate,decoration={brace},thick] ([yshift=0mm,xshift=0mm]c2.center) -- ([yshift=0mm,xshift=0mm]c2.center) node[midway]{$k_2$}; \draw[decorate,decoration={brace},thick] ([yshift=0.5em,xshift=-0.5em]c3.north west) -- ([yshift=0.5em,xshift=-0.5em]c3.north west) node[midway]{${}^\text{1}$}; \draw[decorate,decoration={brace},thick] ([yshift=0mm,xshift=0mm]c3.center) -- ([yshift=0mm,xshift=0mm]c3.center) node[midway]{$k_{m'}$}; \draw[decorate,decoration={brace},thick] ([yshift=0.5em,xshift=-0.5em]c4.north west) -- ([yshift=0.5em,xshift=-0.5em]c4.north west) node[midway]{${}^\text{0}$}; \draw[decorate,decoration={brace},thick] ([yshift=0mm,xshift=0mm]c4.center) -- ([yshift=0mm,xshift=0mm]c4.center) node[midway]{$l_1$}; \draw[decorate,decoration={brace},thick] ([yshift=0.5em,xshift=-0.5em]c5.north west) -- ([yshift=0.5em,xshift=-0.5em]c5.north west) node[midway]{${}^\text{1}$}; \draw[decorate,decoration={brace},thick] ([yshift=0mm,xshift=0mm]c5.center) -- ([yshift=0mm,xshift=0mm]c5.center) node[midway]{$l_2$}; \draw[decorate,decoration={brace},thick] ([yshift=0.5em,xshift=-0.5em]c6.north west) -- ([yshift=0.5em,xshift=-0.5em]c6.north west) node[midway]{${}^\text{0}$}; \draw[decorate,decoration={brace},thick] ([yshift=0mm,xshift=0mm]c6.center) -- ([yshift=0mm,xshift=0mm]c6.center) node[midway]{$x_3$}; \draw[decorate,decoration={brace},thick] ([yshift=0.5em,xshift=-0.5em]c7.north west) -- ([yshift=0.5em,xshift=-0.5em]c7.north west) node[midway]{${}^\text{1}$}; \draw[decorate,decoration={brace},thick] ([yshift=0mm,xshift=0mm]c7.center) -- ([yshift=0mm,xshift=0mm]c7.center) node[midway]{$l_{p''}$}; \draw[decorate,decoration={brace},thick] ([yshift=0.5em,xshift=-0.5em]d1.north west) -- ([yshift=0.5em,xshift=-0.5em]d1.north west) node[midway]{${}^\text{1}$}; \draw[decorate,decoration={brace},thick] ([yshift=0mm,xshift=0mm]d1.center) -- ([yshift=0mm,xshift=0mm]d1.center) node[midway]{$k_1$}; \draw[decorate,decoration={brace},thick] ([yshift=0.5em,xshift=-0.5em]d2.north west) -- ([yshift=0.5em,xshift=-0.5em]d2.north west) node[midway]{${}^\text{0}$}; \draw[decorate,decoration={brace},thick] ([yshift=0mm,xshift=0mm]d2.center) -- ([yshift=0mm,xshift=0mm]d2.center) node[midway]{$k_2$}; \draw[decorate,decoration={brace},thick] ([yshift=0.5em,xshift=-0.5em]d8.north west) -- 
([yshift=0.5em,xshift=-0.5em]d8.north west) node[midway]{${}^\text{0}$}; \draw[decorate,decoration={brace},thick] ([yshift=0mm,xshift=0mm]d8.center) -- ([yshift=0mm,xshift=0mm]d8.center) node[midway]{$k_{m''}$}; \draw[decorate,decoration={brace},thick] ([yshift=0.5em,xshift=-0.5em]d4.north west) -- ([yshift=0.5em,xshift=-0.5em]d4.north west) node[midway]{${}^\text{0}$}; \draw[decorate,decoration={brace},thick] ([yshift=0mm,xshift=0mm]d4.center) -- ([yshift=0mm,xshift=0mm]d4.center) node[midway]{$l_1$}; \draw[decorate,decoration={brace},thick] ([yshift=0.5em,xshift=-0.5em]d5.north west) -- ([yshift=0.5em,xshift=-0.5em]d5.north west) node[midway]{${}^\text{1}$}; \draw[decorate,decoration={brace},thick] ([yshift=0mm,xshift=0mm]d5.center) -- ([yshift=0mm,xshift=0mm]d5.center) node[midway]{$l_2$}; \draw[decorate,decoration={brace},thick] ([yshift=0.5em,xshift=-0.5em]d6.north west) -- ([yshift=0.5em,xshift=-0.5em]d6.north west) node[midway]{${}^\text{0}$}; \draw[decorate,decoration={brace},thick] ([yshift=0mm,xshift=0mm]d6.center) -- ([yshift=0mm,xshift=0mm]d6.center) node[midway]{$l_{p'}$}; \draw[decorate,decoration={brace},thick] ([yshift=0.5em,xshift=-0.5em]d7.north west) -- ([yshift=0.5em,xshift=-0.5em]d7.north west) node[midway]{${}^\text{1}$}; \draw[decorate,decoration={brace},thick] ([yshift=0mm,xshift=0mm]d7.center) -- ([yshift=0mm,xshift=0mm]d7.center) node[midway]{$x_4$}; \draw[decorate,decoration={brace},thick] ([yshift=0.5em,xshift=-0.5em]e1.north west) -- ([yshift=0.5em,xshift=-0.5em]e1.north west) node[midway]{${}^\text{0}$}; \draw[decorate,decoration={brace},thick] ([yshift=0mm,xshift=0mm]e1.center) -- ([yshift=0mm,xshift=0mm]e1.center) node[midway]{$k_1$}; \draw[decorate,decoration={brace},thick] ([yshift=0.5em,xshift=-0.5em]e2.north west) -- ([yshift=0.5em,xshift=-0.5em]e2.north west) node[midway]{${}^\text{1}$}; \draw[decorate,decoration={brace},thick] ([yshift=0mm,xshift=0mm]e2.center) -- ([yshift=0mm,xshift=0mm]e2.center) node[midway]{$k_2$}; \draw[decorate,decoration={brace},thick] ([yshift=0.5em,xshift=-0.5em]e3.north west) -- ([yshift=0.5em,xshift=-0.5em]e3.north west) node[midway]{${}^\text{0}$}; \draw[decorate,decoration={brace},thick] ([yshift=0mm,xshift=0mm]e3.center) -- ([yshift=0mm,xshift=0mm]e3.center) node[midway]{$k_{m'}$}; \draw[decorate,decoration={brace},thick] ([yshift=0.5em,xshift=-0.5em]e4.north west) -- ([yshift=0.5em,xshift=-0.5em]e4.north west) node[midway]{${}^\text{1}$}; \draw[decorate,decoration={brace},thick] ([yshift=0mm,xshift=0mm]e4.center) -- ([yshift=0mm,xshift=0mm]e4.center) node[midway]{$l_2$}; \draw[decorate,decoration={brace},thick] ([yshift=0.5em,xshift=-0.5em]e5.north west) -- ([yshift=0.5em,xshift=-0.5em]e5.north west) node[midway]{${}^\text{0}$}; \draw[decorate,decoration={brace},thick] ([yshift=0mm,xshift=0mm]e5.center) -- ([yshift=0mm,xshift=0mm]e5.center) node[midway]{$l_1$}; \draw[decorate,decoration={brace},thick] ([yshift=0.5em,xshift=-0.5em]e6.north west) -- ([yshift=0.5em,xshift=-0.5em]e6.north west) node[midway]{${}^\text{1}$}; \draw[decorate,decoration={brace},thick] ([yshift=0mm,xshift=0mm]e6.center) -- ([yshift=0mm,xshift=0mm]e6.center) node[midway]{$x_1$}; \draw[decorate,decoration={brace},thick] ([yshift=0.5em,xshift=-0.5em]e7.north west) -- ([yshift=0.5em,xshift=-0.5em]e7.north west) node[midway]{${}^\text{0}$}; \draw[decorate,decoration={brace},thick] ([yshift=0mm,xshift=0mm]e7.center) -- ([yshift=0mm,xshift=0mm]e7.center) node[midway]{$l_{p'}$}; \draw[decorate,decoration={brace},thick] 
([yshift=0.5em,xshift=-0.5em]f1.north west) -- ([yshift=0.5em,xshift=-0.5em]f1.north west) node[midway]{${}^\text{0}$}; \draw[decorate,decoration={brace},thick] ([yshift=0mm,xshift=0mm]f1.center) -- ([yshift=0mm,xshift=0mm]f1.center) node[midway]{$k_1$}; \draw[decorate,decoration={brace},thick] ([yshift=0.5em,xshift=-0.5em]f2.north west) -- ([yshift=0.5em,xshift=-0.5em]f2.north west) node[midway]{${}^\text{1}$}; \draw[decorate,decoration={brace},thick] ([yshift=0mm,xshift=0mm]f2.center) -- ([yshift=0mm,xshift=0mm]f2.center) node[midway]{$k_2$}; \draw[decorate,decoration={brace},thick] ([yshift=0.5em,xshift=-0.5em]f8.north west) -- ([yshift=0.5em,xshift=-0.5em]f8.north west) node[midway]{${}^\text{1}$}; \draw[decorate,decoration={brace},thick] ([yshift=0mm,xshift=0mm]f8.center) -- ([yshift=0mm,xshift=0mm]f8.center) node[midway]{$k_{m''}$}; \draw[decorate,decoration={brace},thick] ([yshift=0.5em,xshift=-0.5em]f4.north west) -- ([yshift=0.5em,xshift=-0.5em]f4.north west) node[midway]{${}^\text{1}$}; \draw[decorate,decoration={brace},thick] ([yshift=0mm,xshift=0mm]f4.center) -- ([yshift=0mm,xshift=0mm]f4.center) node[midway]{$l_2$}; \draw[decorate,decoration={brace},thick] ([yshift=0.5em,xshift=-0.5em]f5.north west) -- ([yshift=0.5em,xshift=-0.5em]f5.north west) node[midway]{${}^\text{0}$}; \draw[decorate,decoration={brace},thick] ([yshift=0mm,xshift=0mm]f5.center) -- ([yshift=0mm,xshift=0mm]f5.center) node[midway]{$l_1$}; \draw[decorate,decoration={brace},thick] ([yshift=0.5em,xshift=-0.5em]f6.north west) -- ([yshift=0.5em,xshift=-0.5em]f6.north west) node[midway]{${}^\text{1}$}; \draw[decorate,decoration={brace},thick] ([yshift=0mm,xshift=0mm]f6.center) -- ([yshift=0mm,xshift=0mm]f6.center) node[midway]{$l_{p''}$}; \draw[decorate,decoration={brace},thick] ([yshift=0.5em,xshift=-0.5em]f7.north west) -- ([yshift=0.5em,xshift=-0.5em]f7.north west) node[midway]{${}^\text{0}$}; \draw[decorate,decoration={brace},thick] ([yshift=0mm,xshift=0mm]f7.center) -- ([yshift=0mm,xshift=0mm]f7.center) node[midway]{$x_2$}; } where $m'=2m-1,m''=2m-2,p'=2p-1,p''=2p-2$, $x_1=k_{2m}+l_{2p}$, $x_2=k_{2m-1}+l_{2p-1}$, $x_3=k_{2m}+l_{2p-1}$, and $x_4=k_{2m-1}+l_{2p}$.
The primary goals of this paper are to study the explicit relations of K--Y MZVs $\zeta(\bfk\circledast{\bfl^\star})$ and their related variants, such as $T$-variants $T(\bfk\circledast{\bfl})$. Then using these explicit relations, we establish some explicit formulas of multiple zeta (star) values and their related variants.
The remainder of this paper is organized as follows. In Section \ref{sec2}, we first establish the explicit evaluations of integrals $\int_0^1 x^{n-1}{\rm Li}_{\bfk}(x)\,dx$ and $\int_0^1 x^{2n+b}{\rm A}(\bfk;x)\,dx$ for all positive integers $n$ and $b\in\{-1,-2\}$, where ${\rm Li}_{\bfk}(x)$ is the single-variable multiple polylogarithm (see \eqref{equ:singleLi}) and ${\rm A}(\bfk;x)$ is the Kaneko--Tsumura A-function (see \eqref{equ:defnA}). Then, for all compositions $\bfk$ and $\bfl$, using the explicit formulas obtained and considering the two kinds of integrals \[I_L(\bfk;\bfl):=\int_0^1 \frac{{\rm Li}_{\bfk}(x){\rm Li}_{\bfl}(x)}{x}\,dx\quad\text{and}\quad I_A(\bfk;\bfl):=\int_0^1 \frac{{\rm A}(\bfk;x){\rm A}(\bfl;x)}{x} \, dx,\] we establish some explicit relations of $\zeta(\bfk\circledast{\bfl^\star})$ and $T(\bfk\circledast{\bfl})$. Further, we express the integrals $I_L(\bfk;\bfl)$ and $I_A(\bfk;\bfl)$ in terms of multiple integrals associated with 2-labeled posets, following the idea of Yamamoto \cite{Y2014}.
In Section \ref{sec3}, we first define a variation of the classical multiple polylogarithm function in $r$ variables, $\gl_{\bfk}(x_1,x_2,\dotsc,x_r)$ (see \eqref{equ:gl}), and give the explicit evaluation of the integral $$ \int_0^1 x^{n-1} \gl_{\bfk}(\sigma_1x,\sigma_2x,\dotsc,\sigma_rx)\,dx, \quad \sigma_j\in\{\pm 1\}.$$ Then we will consider the integral \[I_\gl((\bfk;\bfsi),(\bfl;\bfeps)):= \int_0^1 \frac{\gl_{\bfk_r}(\sigma_1x,\dotsc,\sigma_rx)\gl_{\bfl_s}(\varepsilon_1x,\dotsc,\varepsilon_sx)}{x}\,dx \] to find some explicit relations of alternating Kaneko--Yamamoto MZVs $\zeta((\bfk;\bfsi)\circledast(\bfl;\bfeps)^\star)$. Further, we will find some relations involving alternating MZVs. Finally, we express the integrals $I_\gl((\bfk;\bfsi),(\bfl;\bfeps))$ in terms of multiple integrals associated with 3-labeled posets.
In Section \ref{sec4}, we define the multiple $t$-harmonic (star) sums and the function $t(\bfk;x)$ related to multiple $t$-values. Further, we establish some relations involving multiple $t$-star values.
\section{Formulas of Kaneko--Yamamoto MZVs and $T$-Variants}\label{sec2} In this section we will prove several explicit formulas of Kaneko--Yamamoto MZVs and $T$-variants, and find some explicit relations among MZ(S)Vs and MTVs.
\subsection{Some relations of Kaneko--Yamamoto MZVs} \begin{thm}\label{Thm1} Let $r,n\in \mathbb{N}$ and ${\bfk}_r:=(k_1,\dotsc,k_r)\in \mathbb{N}^r$. Then \begin{align}\label{a1}
\int_0^1 x^{n-1}{\rm Li}_{{\bfk}_r}(x)dx&=\sum_{j=1}^{k_r-1} \frac{(-1)^{j-1}}{n^j}\zeta\left({\bfk}_{r-1},k_r+1-j\right)+\frac{(-1)^{|{\bfk}_r|-r}}{n^{k_r}}\zeta^\star_n\left(1,{\bfk}_{r-1}\right)\nonumber\\
&\quad+\sum_{l=1}^{r-1} (-1)^{|{\bfk}_r^l|-l} \sum_{j=1}^{k_{r-l}-1}\frac{(-1)^{j-1}}{n^{k_r}} \zeta^\star_n\left(j,{\bfk}_{r-1}^{l-1}\right)\zeta\left({\bfk}_{r-l-1},k_{r-l}+1-j\right), \end{align} where ${{\rm Li}}_{{{k_1},\dotsc,{k_r}}}(z)$ is the single-variable multiple polylogarithm function defined by \begin{align}\label{equ:singleLi} &{{\rm Li}}_{{{k_1},\dotsc,{k_r}}}(z): = \sum\limits_{0< {n_1} < \cdots < {n_r}} {\frac{{{z^{{n_r}}}}}{{n_1^{{k_1}}\cdots n_r^{{k_r}}}}},\quad z \in \left[ { - 1,1} \right). \end{align} \end{thm} \begin{proof} It's well known that multiple polylogarithms can be expressed by the iterated integral \begin{align*} {\rm Li}_{k_1,\dotsc,k_r}(x)=\int_0^x \frac{dt}{1-t}\left(\frac{dt}{t}\right)^{k_1-1} \dotsm\frac{dt}{1-t}\left(\frac{dt}{t}\right)^{k_r-1}, \end{align*} where for 1-forms $\ga_1(t)=f_1(t)\, dt,\dotsc,\ga_\ell(t)=f_\ell(t)\, dt$, we define iteratively \begin{equation*}
\int_a^b \ga_1(t) \cdots \ga_\ell(t) = \int_a^b \left(\int_a^y \ga_1(t)\cdots\ga_{\ell-1}(t)\right) f_\ell(y)\, dy. \end{equation*} Using integration by parts, we deduce the recurrence relation \[ \int_0^1 x^{n-1}{\rm Li}_{{\bfk}_r}(x)dx=\sum_{j=1}^{k_r-1} \frac{(-1)^{j-1}}{n^j}\zeta({\bfk}_{r-1},k_r+1-j)+\frac{(-1)^{k_r-1}}{n^{k_r}}\sum_{j=1}^n \int_0^1 x^{j-1}{\rm Li}_{{\bfk}_{r-1}}(x)dx. \] Thus, we arrive at the desired formula by a direct calculation. \end{proof}
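For instance, in the depth-one case $r=1$, formula \eqref{a1} reduces to
\begin{align*}
\int_0^1 x^{n-1}{\rm Li}_{k}(x)\,dx=\sum_{j=1}^{k-1} \frac{(-1)^{j-1}}{n^j}\zeta(k+1-j)+\frac{(-1)^{k-1}}{n^{k}}\zeta^\star_n(1),
\end{align*}
where $\zeta^\star_n(1)=\sum_{m=1}^n 1/m$ is the $n$-th harmonic number. In particular, $k=1$ gives the classical evaluation $\int_0^1 x^{n-1}{\rm Li}_1(x)\,dx=\zeta^\star_n(1)/n$, and $k=2$ gives $\int_0^1 x^{n-1}{\rm Li}_2(x)\,dx=\zeta(2)/n-\zeta^\star_n(1)/n^2$; both can be checked directly by integration by parts.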
For any string $\{s_1,\dots,s_d\}$ and $r\in \mathbb{N}$, we denote by $\{s_1,\dots,s_d\}_r$ the concatenated string obtained by repeating $\{s_1,\dots,s_d\}$ exactly $r$ times. \begin{cor}\label{cor-I2}\emph{(cf. \cite{Xu2017})} For positive integers $n$ and $r$, \begin{align*} \int_0^1 x^{n-1}\log^r(1-x)dx=(-1)^rr!\frac{\zeta^\star_n(\{1\}_{r})}{n}. \end{align*} \end{cor}
For any nontrivial compositions $\bfk$ and $\bfl$, we consider the integral \[ I_L(\bfk;\bfl):=\int_0^1 \frac{{\rm Li}_{\bfk}(x){\rm Li}_{\bfl}(x)}{x}\,dx \] and use \eqref{a1} to find some explicit relations of K--Y MZVs. We prove the following theorem.
\begin{thm}\label{thm-KY} For compositions ${\bfk}_r=(k_1,\dotsc,k_r)\in \mathbb{N}^r$ and ${\bfl}_s=(l_1,l_2,\dotsc,l_s)\in \mathbb{N}^s$, \begin{align}\label{a2}
&\sum_{j=1}^{k_r-1} (-1)^{j-1}\zeta\left({\bfk}_{r-1},k_r+1-j \right)\zeta\left({\bfl}_{s-1},l_s+j\right)+(-1)^{|{\bfk}_r|-r}\zeta\left({\bfl}_s\circledast\Big(1,{\bfk}_r\Big)^\star\right)\nonumber\\
&+\sum_{i=1}^{r-1} (-1)^{|{\bfk}_r^i|-i}\sum_{j=1}^{k_{r-i}-1}(-1)^{j-1} \zeta\left({\bfk}_{r-i-1},k_{r-i}+1-j\right)\zeta\left({\bfl}_s\circledast\Big(j,{\bfk}_r^i\Big)^\star\right)\nonumber\\
&=\sum_{j=1}^{l_s-1} (-1)^{j-1}\zeta\left({\bfl}_{s-1},l_s+1-j \right)\zeta\left({\bfk}_{r-1},k_r+j\right)+(-1)^{|{\bfl}_s|-s}\zeta\left({\bfk}_r\circledast\Big(1,{\bfl}_s\Big)^\star\right)\nonumber\\
&\quad+\sum_{i=1}^{s-1} (-1)^{|{\bfl}_s^i|-i}\sum_{j=1}^{l_{s-i}-1}(-1)^{j-1} \zeta\left({\bfl}_{s-i-1},l_{s-i}+1-j\right)\zeta\left({\bfk}_r\circledast\Big(j,{\bfl}_s^i\Big)^\star\right). \end{align} \end{thm} \begin{proof} According to the definition of the multiple polylogarithm, we have \begin{align*} \int_0^1 \frac{{\rm Li}_{\bfk_r}(x){\rm Li}_{\bfl_s}(x)}{x}\,dx&= \sum\limits_{n=1}^\infty \frac{\zeta_{n-1}(\bfl_{s-1})}{n^{l_s}} \int_0^1 x^{n-1} {\rm Li}_{\bfk_r}(x)dx\\ &= \sum\limits_{n=1}^\infty \frac{\zeta_{n-1}(\bfk_{r-1})}{n^{k_r}} \int_0^1 x^{n-1} {\rm Li}_{{\bfl}_s}(x)dx. \end{align*} Then using \eqref{a1} with a direct calculation, we may deduce the desired evaluation. \end{proof}
The formula in Theorem \ref{thm-KY} seems to be related to the harmonic product of Schur MZVs of anti-hook type in \cite[Theorem 3.2]{MatsumotoNakasuji2020} and the general harmonic product formula in \cite[Lemma 2.2]{BachmannYamasaki2018}. However, it does not seem to follow from them easily.
As a special case, setting $r=2,s=1$ in \eqref{a2} and noting the fact that \[\zeta(l_1\circledast(1,k_1,k_2)^\star)=\zeta^\star(1,k_1,k_2+l_1)\] and \[\zeta(l_1\circledast(j,k_2)^\star)=\zeta^\star(j,l_1+k_2)\] we find that \begin{align}\label{a3} &\sum_{j=1}^{k_2-1}(-1)^{j-1} \zeta(k_1,k_2+1-j)\zeta(l_1+j)+(-1)^{k_1+k_2}\zeta^\star (1,k_1,k_2+l_1)\nonumber\\ &+(-1)^{k_2-1}\sum_{j=1}^{k_2-1} (-1)^{j-1} \zeta(k_1+1-j)\zeta^\star(j,l_1+k_2)\nonumber\\ &=\sum_{j=1}^{l_1-1}(-1)^{j-1}\zeta(l_1+1-j)\zeta(k_1,k_2+j)+(-1)^{l_1-1}\zeta((k_1,k_2)\circledast(1,l_1)^\star). \end{align} On the other hand, from the definition of K--Y MZVs, it is easy to find that \[\zeta((k_1,k_2)\circledast(1,l_1)^\star)=\zeta^\star(k_1,1,k_2+l_1)+\zeta^\star(1,k_1,k_2+l_1)-\zeta^\star(k_1+1,k_2+l_1)-\zeta^\star(1,k_1+k_2+l_1).\] Hence, we can get the following corollary. \begin{cor} For positive integers $k_1,k_2$ and $l_1$, \begin{align}\label{a4} &((-1)^{l_1-1}+(-1)^{k_1+k_2-1}) \zeta^\star(1,k_1,k_2+l_1)+(-1)^{l_1-1}\zeta^\star(k_1,1,k_2+l_1)\nonumber\\ &=\sum_{j=1}^{k_2-1}(-1)^{j-1} \zeta(k_1,k_2+1-j)\zeta(l_1+j)-(-1)^{k_2}\sum_{j=1}^{k_2-1} (-1)^{j-1} \zeta(k_1+1-j)\zeta^\star(j,l_1+k_2)\nonumber\\ &\quad-\sum_{j=1}^{l_1-1}(-1)^{j-1}\zeta(l_1+1-j)\zeta(k_1,k_2+j)+(-1)^{l_1-1}\zeta^\star(k_1+1,k_2+l_1)\nonumber\\&\quad+(-1)^{l_1-1}\zeta^\star(1,k_1+k_2+l_1). \end{align} \end{cor}
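As a quick illustration of \eqref{a4}, take $k_1=k_2=l_1=1$. Then the first term on the left-hand side has coefficient $(-1)^{l_1-1}+(-1)^{k_1+k_2-1}=0$ and all sums on the right-hand side are empty, so \eqref{a4} reduces to
\begin{align*}
\zeta^\star(1,1,2)=\zeta^\star(2,2)+\zeta^\star(1,3),
\end{align*}
which is easily confirmed from the known evaluations $\zeta^\star(1,1,2)=3\zeta(4)$, $\zeta^\star(2,2)=\tfrac{7}{4}\zeta(4)$ and $\zeta^\star(1,3)=\tfrac{5}{4}\zeta(4)$.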
Next, we establish an identity involving the
\emph{Arakawa--Kaneko zeta function} (see \cite{AM1999}) which is defined by \begin{align} \xi(k_1,\dotsc,k_r;s):=\frac{1}{\Gamma(s)} \int\limits_{0}^\infty \frac{t^{s-1}}{e^t-1}\oldLi_{k_1,\dotsc,k_r}(1-e^{-t})dt\quad (\Re(s)>0). \end{align} Setting variables $1-e^{-t}=x$ and $s=p+1\in \mathbb{N}$, we deduce \begin{align*} \xi(k_1,\dotsc,k_r;p+1)&=\frac{(-1)^{p}}{p!}\int\limits_{0}^1 \frac{\log^{p}(1-x){\mathrm{Li}}_{{{k_1},{k_2}, \cdots ,{k_r}}}\left( x \right)}{x}dx\\ &=\sum\limits_{n=1}^\infty \frac{\zeta_{n-1}(k_1,\dotsc,k_{r-1})\zeta^\star_n(\{1\}_{p})}{n^{k_r+1}}=\zeta({\bfk}\circledast (\{1\}_{p+1})^\star), \end{align*} where we have used Corollary \ref{cor-I2}. Clearly, the Arakawa--Kaneko zeta value is a special case of integral $I_L(\bfk;\bfl)$. Further, setting $l_1=l_2=\cdots=l_s=1$ in Theorem \ref{thm-KY} yields \begin{align*} &\xi(k_1,\dotsc,k_r;s+1)=\zeta({\bfk}\circledast (\{1\}_{s+1})^\star)\\
&=\sum_{j=1}^{k_r-1} (-1)^{j-1}\zeta\left({\bfk}_{r-1},k_r+1-j \right)\zeta\left(\{1\}_{s-1},1+j\right)+(-1)^{|{\bfk}_r|-r}\zeta\left(\{1\}_s\circledast\Big(1,{\bfk}\Big)^\star\right)\nonumber\\
&\quad+\sum_{i=1}^{r-1} (-1)^{|{\bfk}_r^i|-i}\sum_{j=1}^{k_{r-i}-1}(-1)^{j-1} \zeta\left({\bfk}_{r-i-1},k_{r-i}+1-j\right)\zeta\left(\{1\}_s\circledast\Big(j,{\bfk}_r^i\Big)^\star\right). \end{align*}
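For example, for $\bfk=(1)$ the defining integral can be evaluated in closed form: since ${\rm Li}_1(1-e^{-t})=t$, we have
\begin{align*}
\xi(1;s)=\frac{1}{\Gamma(s)}\int_0^\infty \frac{t^{s}}{e^t-1}\,dt=s\zeta(s+1),
\end{align*}
so that $\zeta(1\circledast(\{1\}_{p+1})^\star)=\sum_{n=1}^\infty \zeta^\star_n(\{1\}_p)/n^{2}=(p+1)\zeta(p+2)$; the case $p=1$ is Euler's classical sum $\sum_{n=1}^\infty \zeta^\star_n(1)/n^2=2\zeta(3)$.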
We end this section by the following theorem and corollary. \begin{thm} For any positive integer $m$ and composition $\bfk=(k_1,\dotsc,k_r)$, \begin{align}\label{czt} &2\sum_{j=0}^{m-1} {\bar \zeta}(2m-1-2j) \sum\limits_{n=1}^\infty \frac{\zeta_{n-1}(\bfk_{r-1})T_n(\{1\}_{2j+1})}{n^{k_r+1}}+\sum\limits_{n=1}^\infty \frac{\zeta_{n-1}(\bfk_{r-1})S_n(\{1\}_{2m})}{n^{k_r+1}}\nonumber\\
&=\sum_{j=1}^{k_r-1} (-1)^{j-1}2^j \zeta(\bfk_{r-1},k_r+1-j)T(\{1\}_{2m-1},j+1)+(-1)^{|\bfk|-r}\sum\limits_{n=1}^\infty \frac{T_n(\{1\}_{2m-1})\zeta^\star_n(1,\bfk_{r-1})}{n^{k_r+1}}\nonumber\\
&\quad+\sum_{l=1}^{r-1} (-1)^{|\bfk_r^l|-l}\sum_{j=1}^{k_{r-l}-1}(-1)^{j-1} \zeta(\bfk_{r-l-1},k_{r-l}+1-j)\sum\limits_{n=1}^\infty \frac{T_n(\{1\}_{2m-1})\zeta^\star_n\Big(j,\bfk_{r-1}^{l-1}\Big)}{n^{k_r+1}}. \end{align} \end{thm} \begin{proof} On the one hand, in \cite[Theorem 3.6]{XZ2020}, we proved that \begin{align*} \int_{0}^1 \frac{1}{x}\cdot \oldLi_{\bfk}(x^2)\log^{2m}\left(\frac{1-x}{1+x} \right)\, dx=\frac{(2m)!}{2}\times[\text{The left-hand side of \eqref{czt}}]. \end{align*} On the other hand, we note that \begin{align*} \int_{0}^1 \frac{1}{x}\cdot \oldLi_{\bfk}(x^2)\log^{2m}\left(\frac{1-x}{1+x} \right)\, dx&=(2m)!\sum\limits_{n=1}^\infty \frac{T_n(\{1\}_{2m-1})}{n} \int_0^1 x^{2n-1} {\rm Li}_{\bfk}(x^2)dx\\ &=(2m)!\sum\limits_{n=1}^\infty \frac{T_n(\{1\}_{2m-1})}{2n} \int_0^1 x^{n-1} {\rm Li}_{\bfk}(x)dx. \end{align*} Then using (\ref{a1}) with an elementary calculation, we have \begin{align*} \int_{0}^1 \frac{1}{x}\cdot \oldLi_{\bfk}(x^2)\log^{2m}\left(\frac{1-x}{1+x} \right)\, dx=\frac{(2m)!}{2}\times[\text{The right-hand side of \eqref{czt}}]. \end{align*} Thus, formula \eqref{czt} holds. \end{proof}
In particular, setting $\bfk=(\{1\}_{r-1},k)$ we obtain \cite[Theorem 3.9]{XZ2020}. Setting $\bfk=(\{2\}_{r-1},k)$ we get the following corollary.
\begin{cor} For any positive integers $k,m$ and $r$, \begin{multline} \label{cztb}
2\sum_{j=0}^{m-1} {\bar \zeta}(2m-1-2j) \sum\limits_{n=1}^\infty \frac{\zeta_{n-1}(\{2\}_{r-1})T_n(\{1\}_{2j+1})}{n^{k+1}}+\sum\limits_{n=1}^\infty \frac{\zeta_{n-1}(\{2\}_{r-1})S_n(\{1\}_{2m})}{n^{k+1}} \\
=\sum_{j=1}^{k-1} (-1)^{j-1}2^j \zeta(\{2\}_{r-1},k+1-j)T(\{1\}_{2m-1},j+1) \\
+\sum_{l=1}^{r} (-1)^{l+k} \zeta(\{2\}_{r-l})\sum\limits_{n=1}^\infty \frac{T_n(\{1\}_{2m-1})\zeta^\star_n(1,\{2\}_{l-1})}{n^{k+1}}. \end{multline}
\end{cor}
\subsection{Some relations of $T$-variants of Kaneko--Yamamoto MZVs} Recall that the Kaneko--Tsumura A-function ${{\rm A}}(k_1,\dotsc,k_r;z)$ (see \cite{KanekoTs2018b}) is defined by \begin{align}\label{equ:defnA} &{{\rm A}}(k_1,\dotsc,k_r;z): = 2^r\sum\limits_{1 \le {n_1} < \cdots < {n_r}\atop n_i\equiv i\ {\rm mod}\ 2} {\frac{{{z^{{n_r}}}}}{{n_1^{{k_1}} \cdots n_r^{{k_r}}}}},\quad z \in \left[ { - 1,1} \right). \end{align} In this subsection, we present a series of results concerning this function. \begin{thm} For positive integers $m$ and $n$, \begin{align}
&\int_0^1 x^{2n-2} {\rm A}(\bfk_{2m};x)\, dx=\sum_{j=1}^{k_{2m}-1} \frac{(-1)^{j-1}}{(2n-1)^j} T(\bfk_{2m-1},k_{2m}+1-j)+\frac{(-1)^{|\bfk_{2m}|}}{(2n-1)^{k_{2m}}}T_n(1,\bfk_{2m-1})\nonumber\\
&\quad+\frac{1}{(2n-1)^{k_{2m}}}\sum_{i=1}^{m-1} (-1)^{|\bfk_{2m}^{2i}|} \sum_{j=1}^{k_{2m-2i}-1} (-1)^{j-1} T(\bfk_{2m-2i-1},k_{2m-2i}+1-j)T_n(j,\bfk_{2m-1}^{2i-1})\nonumber\\
&\quad-\frac{1}{(2n-1)^{k_{2m}}}\sum_{i=0}^{m-1} (-1)^{|\bfk_{2m}^{2i+1}|} \sum_{j=1}^{k_{2m-2i-1}-1} (-1)^{j-1} T(\bfk_{2m-2i-2},k_{2m-2i-1}+1-j)S_n(j,\bfk_{2m-1}^{2i})\nonumber\\
&\quad -\frac{1}{(2n-1)^{k_{2m}}} \sum_{i=0}^{m-1} (-1)^{|\bfk_{2m}^{2i+1}|} \left(\int_0^1 {\rm A}(\bfk_{2m-2i-1},1;x)dx\right) T_n(\bfk_{2m-1}^{2i}),\label{a5}\\
&\int_0^1 x^{2n-1} {\rm A}(\bfk_{2m+1};x)\, dx=\sum_{j=1}^{k_{2m+1}-1} \frac{(-1)^{j-1}}{(2n)^j} T(\bfk_{2m},k_{2m+1}+1-j)-\frac{(-1)^{|\bfk_{2m+1}|}}{(2n)^{k_{2m+1}}}T_n(1,\bfk_{2m})\nonumber\\
&\quad-\frac{1}{(2n)^{k_{2m+1}}}\sum_{i=0}^{m-1} (-1)^{|\bfk_{2m+1}^{2i+1}|} \sum_{j=1}^{k_{2m-2i}-1} (-1)^{j-1} T(\bfk_{2m-2i-1},k_{2m-2i}+1-j)T_n(j,\bfk_{2m}^{2i})\nonumber\\
&\quad+\frac{1}{(2n)^{k_{2m+1}}}\sum_{i=0}^{m-1} (-1)^{|\bfk_{2m+1}^{2i+2}|} \sum_{j=1}^{k_{2m-2i-1}-1} (-1)^{j-1} T(\bfk_{2m-2i-2},k_{2m-2i-1}+1-j)S_n(j,\bfk_{2m}^{2i+1})\nonumber\\
&\quad +\frac{1}{(2n)^{k_{2m+1}}} \sum_{i=0}^{m-1} (-1)^{|\bfk_{2m+1}^{2i+2}|} \left(\int_0^1 {\rm A}(\bfk_{2m-2i-1},1;x)dx\right) T_n(\bfk_{2m}^{2i+1}),\label{a6}\\
&\int_0^1 x^{2n-2} {\rm A}(\bfk_{2m+1};x)\,dx=\sum_{j=1}^{k_{2m+1}-1} \frac{(-1)^{j-1}}{(2n-1)^j} T(\bfk_{2m},k_{2m+1}+1-j)-\frac{(-1)^{|\bfk_{2m+1}|}}{(2n-1)^{k_{2m+1}}}S_n(1,\bfk_{2m})\nonumber\\
&\quad+\frac{1}{(2n-1)^{k_{2m+1}}}\sum_{i=1}^{m} (-1)^{|\bfk_{2m+1}^{2i}|} \sum_{j=1}^{k_{2m+1-2i}-1} (-1)^{j-1} T(\bfk_{2m-2i},k_{2m+1-2i}+1-j)T_n(j,\bfk_{2m}^{2i-1})\nonumber\\
&\quad-\frac{1}{(2n-1)^{k_{2m+1}}}\sum_{i=0}^{m-1} (-1)^{|\bfk_{2m+1}^{2i+1}|} \sum_{j=1}^{k_{2m-2i}-1} (-1)^{j-1} T(\bfk_{2m-2i-1},k_{2m-2i}+1-j)S_n(j,\bfk_{2m}^{2i})\nonumber\\
&\quad -\frac{1}{(2n-1)^{k_{2m+1}}} \sum_{i=0}^{m} (-1)^{|\bfk_{2m+1}^{2i+1}|} \left(\int_0^1 {\rm A}(\bfk_{2m-2i},1;x)dx\right) T_n(\bfk_{2m}^{2i}),\label{a7}\\
&\int_0^1 x^{2n-1} {\rm A}(\bfk_{2m};x)\, dx=\sum_{j=1}^{k_{2m}-1} \frac{(-1)^{j-1}}{(2n)^j} T(\bfk_{2m-1},k_{2m}+1-j)+\frac{(-1)^{|\bfk_{2m}|}}{(2n)^{k_{2m}}}S_n(1,\bfk_{2m-1})\nonumber\\
&\quad-\frac{1}{(2n)^{k_{2m}}}\sum_{i=1}^{m} (-1)^{|\bfk_{2m}^{2i-1}|} \sum_{j=1}^{k_{2m+1-2i}-1} (-1)^{j-1} T(\bfk_{2m-2i},k_{2m+1-2i}+1-j)T_n(j,\bfk_{2m-1}^{2i-2})\nonumber\\
&\quad+\frac{1}{(2n)^{k_{2m}}}\sum_{i=1}^{m-1} (-1)^{|\bfk_{2m}^{2i}|} \sum_{j=1}^{k_{2m-2i}-1} (-1)^{j-1} T(\bfk_{2m-2i-1},k_{2m-2i}+1-j)S_n(j,\bfk_{2m-1}^{2i-1})\nonumber\\
&\quad +\frac{1}{(2n)^{k_{2m}}} \sum_{i=1}^{m} (-1)^{|\bfk_{2m}^{2i}|} \left(\int_0^1 {\rm A}(\bfk_{2m-2i},1;x)dx\right) T_n(\bfk_{2m-1}^{2i-1}),\label{a8} \end{align} where we allow $m=0$ in \eqref{a6} and \eqref{a7}. \end{thm} \begin{proof} It is easy to see that the A-function can be expressed by an iterated integral: \begin{align*} {\rm A}(k_1,\dotsc,k_r;x)=\int_0^x \frac{2dt}{1-t^2}\left(\frac{dt}{t}\right)^{k_1-1} \cdots\frac{2dt}{1-t^2}\left(\frac{dt}{t}\right)^{k_r-1}. \end{align*} Using integration by parts, we deduce the recurrence relation \begin{align*} \int_0^1 x^{2n-2} {\rm A}(\bfk_r;x)\, dx&=\sum_{j=1}^{k_r-1}\frac{(-1)^{j-1}}{(2n-1)^j} T(\bfk_{r-1},k_r+1-j)+\frac{(-1)^{k_r-1}}{(2n-1)^{k_r}} \int_0^1 {\rm A}(\bfk_{r-1},1;x)\, dx\\ &\quad+\frac{(-1)^{k_r-1}}{(2n-1)^{k_r}}2\sum_{k=1}^{n-1} \int_0^1 x^{2k-1} {\rm A}(\bfk_{r-1};x)\, dx, \end{align*} and \begin{align*} \int_0^1 x^{2n-1} {\rm A}(\bfk_r;x) \, dx&=\sum_{j=0}^{k_r-2}\frac{(-1)^{j}}{(2n)^{j+1}} T(\bfk_{r-1},k_r-j)+\frac{(-1)^{k_r-1}}{(2n)^{k_r}}2\sum_{k=1}^{n} \int_0^1 x^{2k-2} {\rm A}(\bfk_{r-1};x)\, dx. \end{align*} Hence, using the recurrence formulas above, we may deduce the four desired evaluations after an elementary but rather tedious computation, which we leave to the interested reader. \end{proof}
\begin{lem}\label{equ:Aones} For any positive integer $r$ we have \begin{equation*} \int_0^1 {\rm A}(\{1\}_{r};x) \, dx = -2 \zeta(\bar r)= \left\{
\begin{array}{ll} \phantom{\frac12} 2\log 2, & \hbox{if $r=1$;} \\ 2(1-2^{1-r}) \zeta(r), \qquad \ & \hbox{if $r\ge 2$.}
\end{array} \right. \end{equation*} \end{lem}
\begin{proof} Consider the generating function \begin{equation*} G(u):=1+\sum_{r=1}^\infty \left(\int_0^1 {\rm A}(\{1\}_{r};x) \, dx \right) (-u)^r. \end{equation*} Since ${\rm A}(\{1\}_{r};x)=\int_0^x \left(\frac{2\,dt}{1-t^2}\right)^r$, we have \begin{align*} G(u) =\, & 1+\sum_{r=1}^\infty (-2u)^r \int_0^1 \int_0^{x} \left(\frac{dt}{1-t^2} \right)^r \, dx \\ =\, & 1+\sum_{r=1}^\infty \frac{(-2u)^r}{r!} \int_0^1 \left( \int_0^{x} \frac{dt}{1-t^2} \right)^r \, dx \\ =\, & 1+\int_0^1 \left( \sum_{r=1}^\infty \frac{1}{r!} \left(-u\log \left(\frac{1+x}{1-x}\right)\right)^r \right) \, dx \\ =\, & \int_0^1 \left(\frac{1-x}{1+x}\right)^{u} \, dx . \end{align*} Taking $a=u,b=1,c=u+2$ and $t=-1$ in the formula \begin{equation*} {}_2F_1\left(\left.{
a,b \atop c}\right|t \right)=\frac{\Gamma(c)}{\Gamma(b)\Gamma(c-b)} \int_0^1 v^{b-1} (1-v)^{c-b-1} (1-vt)^{-a} \,dv, \end{equation*} we obtain \begin{align*} G(u)=\, &\frac{1}{u+1} \sum_{k\ge 0} \frac{u(u+1)}{(u+k)(u+k+1)} (-1)^k \\ =\, &u\sum_{k\ge0} (-1)^k \left(\frac{1}{u+k}-\frac{1}{u+k+1}\right)\\ =\, & 1+\sum_{k\ge 1} 2 (-1)^k \frac{u}{u+k} \\ =\, & 1-\sum_{k\ge 1} 2 (-1)^k \sum_{r\ge 0} \left(\frac{-u}{k}\right)^{r+1} \\ =\, & 1-2 \sum_{r\ge 1} \zeta(\bar r)(-u)^r. \end{align*} The lemma follows immediately. \end{proof}
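For example, the first few values given by Lemma \ref{equ:Aones} are
\begin{align*}
\int_0^1 {\rm A}(1;x)\,dx=2\log 2,\qquad \int_0^1 {\rm A}(1,1;x)\,dx=\zeta(2),\qquad \int_0^1 {\rm A}(1,1,1;x)\,dx=\frac{3}{2}\zeta(3).
\end{align*}
The first value can also be checked directly:
$\int_0^1 {\rm A}(1;x)\,dx=\int_0^1\log\frac{1+x}{1-x}\,dx=\big[(1+x)\log(1+x)+(1-x)\log(1-x)\big]_0^1=2\log 2$.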
\begin{thm}\label{thm-IA} For composition $\bfk=(k_1,\dotsc,k_r)$, the integral \[\int_0^1 {\rm A}(k_1,\dotsc,k_r,1;x)dx\] can be expressed as a $\mathbb{Q}$-linear combination of alternating MZVs. \end{thm} \begin{proof} It suffices to prove the integral can be expressed in terms of $\log 2$ and MMVs since these values generate the same $\mathbb{Q}$-vector space as that by alternating MZVs as shown in \cite{XZ2020}. Suppose $k_r>1$. Then \begin{align*} \,& \int_0^1 {\rm A}(k_1,\dotsc,k_r,1;x)\, dx\\ =\,& 2^{r+1}\sum_{\substack{ 0<n_1<\cdots<n_r<n_{r+1} \\ n_i\equiv i \pmod{2} }} \frac{1}{n_1^{k_1}\cdots n_r^{k_r}} \left(\frac1{n_{r+1}}- \frac1{n_{r+1}+1} \right) \\ =\,& \left\{
\begin{array}{ll}
M_*(\breve{k_1},k_2,\breve{k_3},\dotsc, \breve{k_r},1) - M_*(\breve{k_1},k_2,\dotsc, \breve{k_r},\breve{1} ),& \qquad \hbox{if $2\nmid r$;} \\
M_*(\breve{k_1},k_2,\breve{k_3},\dotsc, k_r,\breve{1}) - M_*(\breve{k_1},k_2,\dotsc, k_r,1), & \qquad \hbox{if $2\mid r$,}
\end{array} \right. \\ =\,& \left\{
\begin{array}{ll} M(\breve{k_1},k_2,\breve{k_3},\dotsc, \breve{k_r})\big(M_*(1)-M_*(\breve{1})\big) & \pmod{MMV}, \qquad \hbox{if $2\nmid r$;} \\ M(\breve{k_1},k_2,\breve{k_3},\dotsc, k_r)\big(M_*(\breve{1})-M_*(1)\big) & \pmod{MMV},\qquad \hbox{if $2\mid r$,}
\end{array} \right. \\ =\,& \left\{
\begin{array}{ll} -2M(\breve{k_1},k_2,\breve{k_3},\dotsc, \breve{k_r})\log 2 & \pmod{MMV} ,\qquad \hbox{if $2\nmid r$;} \\ 2M(\breve{k_1},k_2,\breve{k_3},\dotsc, k_r)\log 2 & \pmod{MMV},\qquad \hbox{if $2\mid r$.}
\end{array} \right. \end{align*} which can be expressed as a $\mathbb{Q}$-linear combination of MMVs by \cite[Theorem 7.1]{XZ2020}.
In general, we may assume $k_r>1$ and consider $\int_0^1 {\rm A}(k_1,\dotsc,k_r,\{1\}_\ell;x)\, dx$. By induction on $\ell$, we see that \begin{align*} &\, \int_0^1 {\rm A}(k_1,\dotsc,k_r,\{1\}_\ell;x)\, dx\\ =&\, \left\{
\begin{array}{ll} M(\breve{k_1},k_2,\breve{k_3},\dotsc, \breve{k_r})\big(M_*(\bfu,1)-M_*(\bfu,\breve{1})\big) & \pmod{MMV} , \qquad \hbox{if $2\nmid r$, $2\nmid \ell$;} \\ M(\breve{k_1},k_2,\breve{k_3},\dotsc, \breve{k_r})\big(M_*(\bfu',1,\breve{1})-M_*(\bfu',1,1)\big) &\pmod{MMV} , \qquad \hbox{if $2\nmid r$, $2\mid\ell$;} \\ M(\breve{k_1},k_2,\breve{k_3},\dotsc, k_r)\big(M_*(\bfv,\breve{1})-M_*(\bfv,1)\big) & \pmod{MMV},\qquad \hbox{if $2\mid r$, $2\nmid\ell$;}\\ M(\breve{k_1},k_2,\breve{k_3},\dotsc, k_r)\big(M_*(\bfv',\breve{1},1)-M_*(\bfv',\breve{1},\breve{1})\big) & \pmod{MMV}, \qquad\hbox{if $2\mid r$, $2\mid\ell$,}
\end{array} \right. \end{align*} where $\bfu=\{1,\breve{1}\}_{(\ell-1)/2},\bfu'=\{1,\breve{1}\}_{(\ell-2)/2}, \bfv=\{\breve{1},1\}_{(\ell-1)/2}, \bfv'=\{\breve{1},1\}_{(\ell-2)/2}.$ By Lemma \ref{equ:Aones} we see that $M_*(\cdots,1)-M_*(\cdots,\breve{1})=\mp 2\zeta(\bar\ell)$. This finishes the proof of the theorem. \end{proof}
\begin{exa}\label{exa-A} Applying the idea in the proof of Theorem~\ref{thm-IA} we can find that for any positive integer $k$, \begin{align*} \int_0^1 {\rm A}(k,1;x)\,dx&=M_*(\breve{k},1)-M_*(\breve{k},\breve{1})\\ &=M(\breve{k})(M_*(1)-M_*(\breve{1}))+M(\breve{1},\breve{k})-M(1,\breve{k})+2M\big((k+1)\breve{\, }\big). \end{align*} Observing that $M_*(\breve{1})-M_*(1)=2\log(2),\ M(\breve{k})=T(k),\ M(\breve{1},\breve{k})=4t(1,k)$ and $M(1,\breve{k})=S(1,k)$, we obtain \begin{align*} \int_0^1 {\rm A}(k,1;x)\,dx=-2\log(2)T(k)+2T(k+1)+4t(1,k)-S(1,k). \end{align*} \end{exa}
{}From Lemma \ref{equ:Aones} we can get the following corollary, which was proved in \cite{XZ2020}. \begin{cor}\label{cor-II}\emph{(\cite[Theorem 3.1]{XZ2020})} For positive integers $m$ and $n$, the following identities hold. \begin{align} &\begin{aligned} \int_{0}^1 t^{2n-2} \log^{2m}\left(\frac{1-t}{1+t} \right) dt&= \frac{2(2m)!}{2n-1} \sum_{j=0}^m {\bar \zeta}(2j)T_n(\{1\}_{2m-2j}),\label{ee} \end{aligned}\\ &\begin{aligned} \int_{0}^1 t^{2n-2} \log^{2m-1}\left(\frac{1-t}{1+t} \right) dt&= -\frac{(2m-1)!}{2n-1} \left(2\sum_{j=1}^{m} {\bar \zeta}(2j-1)T_n(\{1\}_{2m-2j}) + S_n(\{1\}_{2m-1}) \right),\label{eo} \end{aligned}\\ &\begin{aligned} \int_{0}^1 t^{2n-1} \log^{2m}\left(\frac{1-t}{1+t} \right) dt&=\frac{(2m)!}{n} \left(\sum_{j=1}^{m} {\bar \zeta}(2j-1)T_n(\{1\}_{2m-2j+1})+ S_n(\{1\}_{2m})\right),\label{oe} \end{aligned}\\ &\begin{aligned} \int_{0}^1 t^{2n-1} \log^{2m-1}\left(\frac{1-t}{1+t} \right) dt&= -\frac{(2m-1)!}{n} \sum_{j=0}^{m-1} {\bar \zeta}(2j)T_n(\{1\}_{2m-2j-1}),\label{oo} \end{aligned} \end{align} where ${\bar \zeta}(m):=-\zeta(\overline{ m})$, and ${\bar \zeta}(0)$ should be interpreted as $1/2$ wherever it occurs. \end{cor}
We now derive some explicit relations about $T$-variant of K--Y MZV $T(\bfk\circledast\bfl)$ by considering the integral \[ I_A(\bfk;\bfl):=\int_0^1 \frac{{\rm A}(\bfk;x){\rm A}(\bfl;x)}{x} \, dx. \]
\begin{thm} \label{thm:S2Ts} For positive integers $k$ and $l$, we have \begin{multline*} ((-1)^l-(-1)^k)S(1,k+l) =\sum_{j=1}^l (-1)^{j-1} T(l+1-j)T(k+j)+\sum_{j=1}^k (-1)^{j} T(k+1-j)T(l+j), \end{multline*} where $T(1):=2\log(2)$. \end{thm} \begin{proof} One may deduce the formula by a straightforward calculation of the integral \begin{align*} \int_0^1 \frac{{\rm A}(k;x){\rm A}(l;x)}{x}\, dx. \end{align*} We leave the details to the interested reader. \end{proof}
For example, setting $k=1$ and $l=2p\ (p\in\mathbb{N})$ in Theorem \ref{thm:S2Ts} yields \begin{align*} S(1,2p+1)=\sum_{j=0}^{p-1} (-1)^{j-1} T(2p+1-j)T(j+1)-\frac{(-1)^p}{2}T^2(p+1). \end{align*}
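In particular, taking $p=1$ and using the depth-one evaluations $T(2)=\frac{\pi^2}{4}$, $T(3)=\frac{7}{4}\zeta(3)$ and $T(1)=2\log 2$, we obtain the closed form
\begin{align*}
S(1,3)=\frac{1}{2}T^2(2)-T(1)T(3)=\frac{\pi^4}{32}-\frac{7}{2}\zeta(3)\log 2.
\end{align*}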
\begin{thm}\label{thm-TT2} For positive integers $k_1,k_2$ and $l$, \begin{align}\label{b17} &(-1)^{l-1}T((k_1,k_2)\circledast(1,l))+(-1)^{k_1+k_2-1}T(1,k_1,k_2+l)\nonumber\\ &=\sum_{j=1}^{k_2-1} (-1)^{j-1}T(k_1,k_2+1-j)T(l+j)-\sum_{j=1}^{l-1} (-1)^{j-1} T(l+1-j)T(k_1,k_2+j)\nonumber\\ &\quad-(-1)^{k_2}\sum_{j=1}^{k_1-1}(-1)^{j-1} T(k_1+1-j)S(j,k_2+l)-(-1)^{k_2}T(k_2+l)\int_0^1 {\rm A}(k_1,1;x) \, dx, \end{align} where $\int_0^1 {\rm A}(k,1;x) \, dx$ is given by Example \ref{exa-A}. \end{thm} \begin{proof} From \eqref{a5} and \eqref{a6}, we deduce \begin{align*} \int_0^1 x^{2n-1}{\rm A}(k;x)\,dx=\sum_{j=1}^{k-1} \frac{(-1)^{j-1}}{(2n)^j}T(k+1-j)+\frac{(-1)^{k-1}}{(2n)^k}T_n(1) \end{align*} and \begin{multline*} \int_0^1 x^{2n-2}{\rm A}(k_1,k_2;x)\,dx=\sum_{j=1}^{k_2-1} \frac{(-1)^{j-1}}{(2n-1)^j}T(k_1,k_2+1-j)+\frac{(-1)^{k_1+k_2}}{(2n-1)^{k_2}}T_n(1,k_1)\\ +\frac{(-1)^{k_2-1}}{(2n-1)^{k_2}}\sum_{j=1}^{k_1-1} (-1)^{j-1} T(k_1+1-j)S_n(j) +\frac{(-1)^{k_2-1}}{(2n-1)^{k_2}}\int_0^1 {\rm A}(k_1,1,;x)\, dx. \end{multline*} According to the definitions of A-functions, MTVs and MSVs, on the one hand, we have \begin{align*} &\int_0^1 \frac{{\rm A}(k_1,k_2;x){\rm A}(l;x)}{x}\, dx=2\sum\limits_{n=1}^\infty\frac{1}{(2n-1)^l} \int_0^1 x^{2n-2}{\rm A}(k_1,k_2;x)\,dx\\ &=\sum_{j=1}^{k_2-1} (-1)^{j-1}T(k_1,k_2+1-j)T(l+j)-(-1)^{k_2}\sum_{j=1}^{k_1-1}(-1)^{j-1} T(k_1+1-j)S(j,k_2+l)\\ &\quad-(-1)^{k_2}T(k_2+l)\int_0^1 {\rm A}(k_1,1;x) \, dx+(-1)^{k_1+k_2}T(1,k_1,k_2+l). \end{align*} On the other hand, \begin{multline*} \int_0^1 \frac{{\rm A}(k_1,k_2;x){\rm A}(l;x)}{x}\, dx=2\sum\limits_{n=1}^\infty\frac{T_n(k_1)}{(2n)^{k_2}} \int_0^1 x^{2n-1}{\rm A}(l;x)\,dx\\ =\sum_{j=1}^{l-1} (-1)^{j-1} T(l+1-j)T(k_1,k_2+j)+(-1)^{l-1} T((k_1,k_2)\circledast(1,l)). \end{multline*} Hence, combining two identities above, we obtain the desired evaluation. \end{proof}
\begin{thm}\label{thm-TT3} For positive integers $k_1,k_2$ and $l_1,l_2$, we have \begin{align*} &(-1)^{k_1+k_2}T((l_1,l_2)\circledast(1,k_1,k_2)) -(-1)^{l_1+l_2}T((k_1,k_2)\circledast(1,l_1,l_2))\\ &=\sum_{j=1}^{k_2-1} (-1)^{j} T(k_1,k_2+1-j)T(l_1,l_2+j)-\sum_{j=1}^{l_2-1} (-1)^{j} T(l_1,l_2+1-j)T(k_1,k_2+j) \\ &\quad-(-1)^{k_2}\sum_{j=1}^{k_1} (-1)^{j} T(k_1+1-j)T((l_1,l_2)\circledast(j,k_2)) \\ &\quad+(-1)^{l_2}\sum_{j=1}^{l_1} (-1)^{j} T(l_1+1-j)T((k_1,k_2)\circledast(j,l_2)), \end{align*} where $T(1):=2\log(2).$ \end{thm} \begin{proof} Consider the integral \[\int_0^1 \frac{{\rm A}(k_1,k_2;x){\rm A}(l_1,l_2;x)}{x}\, dx.\] By a similar argument used in the proof of Theorem \ref{thm-TT2}, we can prove Theorem \ref{thm-TT3}. \end{proof}
Moreover, according to the definitions of Kaneko--Tsumura $\psi$-function and Kaneko--Tsumura A-function (which is a single-variable multiple polylogarithm function of level two) \cite{KanekoTs2018b,KanekoTs2019}, \begin{align}\label{a14} \psi(k_1,\dotsc,k_r;s):=\frac{1}{\Gamma(s)} \int\limits_{0}^\infty \frac{t^{s-1}}{\sinh(t)}{\rm A}({k_1,\dotsc,k_r};\tanh(t/2))dt\quad (\Re(s)>0) \end{align} and \begin{align}\label{a15} &{\rm A}(k_1,\dotsc,k_r;z): = 2^r\sum\limits_{1 \le {n_1} < \cdots < {n_r}\atop n_i\equiv i\ {\rm mod}\ 2} {\frac{{{z^{{n_r}}}}}{{n_1^{{k_1}} \cdots n_r^{{k_r}}}}},\quad z \in \left[ { - 1,1} \right). \end{align} Setting $\tanh(t/2)= x$ and $s =p+1\in\mathbb{N}$, we have \begin{align}\label{cc8} \psi(k_1,\dotsc,k_r;p+1)&=\frac{(-1)^{p}}{p!}\int\limits_{0}^1 \frac{\log^{p}\left(\frac{1-x}{1+x}\right){\rm A}(k_1,\dotsc,k_r;x)}{x}dx\nonumber\\ &=\int\limits_{0}^1 \frac{{\rm A}(\{1\}_p;x){\rm A}(k_1,\dotsc,k_r;x)}{x}dx, \end{align} where we have used the relation \begin{align*} {\rm A}({\{1\}_r};x)=\frac{1}{r!}({\rm A}(1;x))^r=\frac{(-1)^r}{ r!}\log^r\left(\frac{1-x}{1+x}\right). \end{align*}
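In the depth-one case the $\psi$-function can be evaluated in closed form. Since ${\rm A}(1;\tanh(t/2))=\log\frac{1+\tanh(t/2)}{1-\tanh(t/2)}=t$, we have
\begin{align*}
\psi(1;s)=\frac{1}{\Gamma(s)}\int_0^\infty \frac{t^{s}}{\sinh(t)}\,dt
=\frac{2}{\Gamma(s)}\sum_{k\geq 0}\int_0^\infty t^{s}e^{-(2k+1)t}\,dt
=2s\sum_{k\geq 0}\frac{1}{(2k+1)^{s+1}}
\end{align*}
for $\Re(s)>0$; in particular, $\psi(1;p+1)=(p+1)\,T(p+2)$ for $p\in\mathbb{N}$, the level two analogue of $\xi(1;s)=s\zeta(s+1)$.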
We remark that the Kaneko--Tsumura $\psi$-values can be regarded as a special case of the integral $I_A(\bfk;\bfl)$. So, one can prove \cite[Theorem 3.3]{XZ2020} by considering the integrals $I_A(\bfk;\bfl)$.
\subsection{Multiple integrals associated with 2-labeled posets} According to their iterated integral expressions, ${\rm Li}_{\bfk}(x)$ and ${\rm A}(\bfk;x)$ satisfy the shuffle product relation. In this subsection, we express the integrals $I_L(\bfk;\bfl)$ and $I_A(\bfk;\bfl)$ in terms of multiple integrals associated with 2-labeled posets, which implies that the integrals $I_L(\bfk;\bfl)$ and $I_A(\bfk;\bfl)$ can be expressed as linear combinations of MZVs (or MTVs, respectively). The key properties of such integrals were first studied by Yamamoto in \cite{Y2014}.
\begin{defn} A \emph{$2$-poset} is a pair $(X,\delta_X)$, where $X=(X,\leq)$ is a finite partially ordered set and $\delta_X$ is a map from $X$ to $\{0,1\}$. We often omit $\delta_X$ and simply say ``a 2-poset $X$''. The $\delta_X$ is called the \emph{label map} of $X$.
A 2-poset $(X,\delta_X)$ is called \emph{admissible} if $\delta_X(x)=0$ for all maximal elements $x\in X$ and $\delta_X(x)=1$ for all minimal elements $x\in X$. \end{defn}
\begin{defn} For an admissible 2-poset $X$, we define the associated integral \begin{equation}\label{4.1} I_j(X)=\int_{\Delta_X}\prod_{x\in X}\om^{(j)}_{\delta_X(x)}(t_x), \qquad j=1,2, \end{equation} where
\[\Delta_X=\bigl\{(t_x)_x\in [0,1]^X \bigm| t_x<t_y \text{ if } x<y\bigr\}\] and \[\om^{(1)}_0(t)=\om^{(2)}_0(t)=\frac{dt}{t}, \quad \om^{(1)}_1(t)=\frac{dt}{1-t}, \quad \om^{(2)}_1(t)=\frac{2dt}{1-t^2}. \] \end{defn}
For the empty 2-poset, denoted $\emptyset$, we put $I_j(\emptyset):=1\ (j=1,2)$.
\begin{pro}\label{prop:shuffl2poset} For non-comparable elements $a$ and $b$ of a $2$-poset $X$, $X^b_a$ denotes the $2$-poset that is obtained from $X$ by adjoining the relation $a<b$. If $X$ is an admissible $2$-poset, then the $2$-poset $X^b_a$ and $X^a_b$ are admissible and \begin{equation}\label{4.2} I_j(X)=I_j(X^b_a)+I_j(X^a_b)\quad (j=1,2). \end{equation} \end{pro}
Note that the admissibility of a 2-poset corresponds to the convergence of the associated integral. We use Hasse diagrams to indicate 2-posets, with vertices $\circ$ and $\bullet$ corresponding to $\delta(x)=0$ and $\delta(x)=1$, respectively. For example, the diagram \[\begin{xy} {(0,-4) \ar @{{*}-o} (4,0)}, {(4,0) \ar @{-{*}} (8,-4)}, {(8,-4) \ar @{-o} (12,0)}, {(12,0) \ar @{-o} (16,4)} \end{xy} \] represents the 2-poset $X=\{x_1,x_2,x_3,x_4,x_5\}$ with order $x_1<x_2>x_3<x_4<x_5$ and label $(\delta_X(x_1),\dotsc,\delta_X(x_5))=(1,0,1,0,0)$. This 2-poset is admissible. To describe the corresponding diagram, we introduce an abbreviation: For a sequence $\bfk_r=(k_1,\dotsc,k_r)$ of positive integers, we write \[\begin{xy} {(0,-3) \ar @{{*}.o} (0,3)}, {(1,-3) \ar @/_1mm/ @{-} _{\bfk_r} (1,3)} \end{xy}\] for the vertical diagram \[\begin{xy} {(0,-24) \ar @{{*}-o} (0,-20)}, {(0,-20) \ar @{.o} (0,-14)}, {(0,-14) \ar @{-} (0,-10)}, {(0,-10) \ar @{.} (0,-4)}, {(0,-4) \ar @{-{*}} (0,0)}, {(0,0) \ar @{-o} (0,4)}, {(0,4) \ar @{.o} (0,10)}, {(0,10) \ar @{-{*}} (0,14)}, {(0,14) \ar @{-o} (0,18)}, {(0,18) \ar @{.o} (0,24)}, {(1,-24) \ar @/_1mm/ @{-} _{k_1} (1,-14)}, {(4,-3) \ar @{.} (4,-11)}, {(1,0) \ar @/_1mm/ @{-} _{k_{r-1}} (1,10)}, {(1,14) \ar @/_1mm/ @{-} _{k_r} (1,24)} \end{xy}.\] Hence, for admissible composition $\bfk$, using this notation of multiple associated integral, one can verify that
\begin{equation*} \zeta(\bfk)=I_1\left(\ \begin{xy} {(0,-3) \ar @{{*}.o} (0,3)}, {(1,-3) \ar @/_1mm/ @{-} _\bfk (1,3)} \end{xy}\right)\quad\text{and}\quad T(\bfk)=I_2\left(\ \begin{xy} {(0,-3) \ar @{{*}.o} (0,3)}, {(1,-3) \ar @/_1mm/ @{-} _\bfk (1,3)} \end{xy}\right). \end{equation*}
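For example, for $\bfk=(1,2)$ the corresponding 2-poset is totally ordered with labels $(1,1,0)$ from bottom to top, and the two integrals read
\begin{align*}
\zeta(1,2)=\int_{0<t_1<t_2<t_3<1}\frac{dt_1}{1-t_1}\,\frac{dt_2}{1-t_2}\,\frac{dt_3}{t_3}=\zeta(3)
\quad\text{and}\quad
T(1,2)=\int_{0<t_1<t_2<t_3<1}\frac{2\,dt_1}{1-t_1^2}\,\frac{2\,dt_2}{1-t_2^2}\,\frac{dt_3}{t_3}.
\end{align*}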
Therefore, according to the definitions of $I_L(\bfk;\bfl)$ and $I_A(\bfk;\bfl)$, and using this notation of multiple associated integral, we can get the following theorem. \begin{thm}\label{thm-ILA} For compositions $\bfk$ and $\bfl$, we have \begin{equation*} I_L(\bfk;\bfl)=I_1\left(\xybox{ {(0,-9) \ar @{{*}-o} (0,-4)}, {(0,-4) \ar @{.o} (0,4)}, {(0,4) \ar @{-o} (5,9)},
{(10,-9) \ar @{{*}-o} (10,-4)}, {(10,-4) \ar @{.o} (10,4)}, {(10,4) \ar @{-} (5,9)},
{(-1,-9) \ar @/^1mm/ @{-} ^\bfk (-1,4)}, {(11,-9) \ar @/_1mm/ @{-} _{\bfl} (11,4)}, }\ \right)\quad\text{\rm and}\quad I_A(\bfk;\bfl)=I_2\left(\xybox{ {(0,-9) \ar @{{*}-o} (0,-4)}, {(0,-4) \ar @{.o} (0,4)}, {(0,4) \ar @{-o} (5,9)},
{(10,-9) \ar @{{*}-o} (10,-4)}, {(10,-4) \ar @{.o} (10,4)}, {(10,4) \ar @{-} (5,9)},
{(-1,-9) \ar @/^1mm/ @{-} ^\bfk (-1,4)}, {(11,-9) \ar @/_1mm/ @{-} _{\bfl} (11,4)}, }\ \right). \end{equation*} \end{thm} \begin{proof}This follows immediately from the definitions of $I_L(\bfk;\bfl)$ and $I_A(\bfk;\bfl)$. We leave the detail to the interested reader. \end{proof}
It is clear that, using Theorem \ref{thm-ILA}, the integrals $I_L(\bfk;\bfl)$ (or $I_A(\bfk;\bfl)$) can be expressed in terms of MZVs (or MTVs). In particular, for any positive integer $s$ the integrals $I_L(\bfk;\{1\}_s)$ and $I_A(\bfk;\{1\}_s)$ become the Arakawa--Kaneko zeta values and Kaneko--Tsumura $\psi$-values, respectively. Moreover, Kawasaki--Ohno \cite{KO2018} and Xu--Zhao \cite{XZ2020} have used multiple integrals associated with 2-posets to prove explicit formulas for all Arakawa--Kaneko zeta values and Kaneko--Tsumura $\psi$-values.
Now, we end this section by the following duality relations. For any $n\in\mathbb{N}$ and composition $\bfk=(k_1,\dotsc,k_r)$, set \begin{equation*}
\bfk_{+n}:=(k_1,\dotsc,k_{r-1},k_r+n). \end{equation*}
\begin{thm}\label{thmDFILA} For any $p\in\mathbb{N}$ and compositions of positive integers $\bfk$, $\bfl$, we have \begin{equation*} I_L(\bfk_{+(p-1)};\bfl)+(-1)^p I_L(\bfk;\bfl_{+(p-1)}) =\sum_{j=1}^{p-1} (-1)^{j-1} \zeta(\bfk_{+(p-j)})\zeta(\bfl_{+j}) \end{equation*} and \begin{equation*} I_A(\bfk_{+(p-1)};\bfl)+(-1)^p I_A(\bfk;\bfl_{+(p-1)}) =\sum_{j=1}^{p-1} (-1)^{j-1} T(\bfk_{+(p-j)})T(\bfl_{+j}). \end{equation*} \end{thm} \begin{proof} This follows easily from the definitions of $I_L(\bfk;\bfl)$ and $I_A(\bfk;\bfl)$ by using integration by parts. We leave the detail to the interested reader. \end{proof}
Setting $\bfk=(\{1\}_r)$ and $\bfl=(\{1\}_s)$ in Theorem \ref{thmDFILA} and noting the duality relations $\zeta(\{1\}_{r-1},s+1)=\zeta(\{1\}_{s-1},r+1)$ and $T(\{1\}_{r-1},s+1)=T(\{1\}_{s-1},r+1)$, we obtain the following two well-known duality formulas for Arakawa--Kaneko zeta values and Kaneko--Tsumura $\psi$-values (see \cite{AM1999,KanekoTs2018b}): \begin{align*} &\xi(\{1\}_{r-1},p;s+1)+(-1)^p\xi(\{1\}_{s-1},p;r+1)=\sum\limits_{j=0}^{p-2} (-1)^j \zeta(\{1\}_{r-1},p-j) \zeta(\{1\}_j,s+1) \end{align*} and \begin{align*} &\psi(\{1\}_{r-1},p;s+1)+(-1)^p\psi(\{1\}_{s-1},p;r+1)=\sum\limits_{j=0}^{p-2} (-1)^j T(\{1\}_{r-1},p-j) T(\{1\}_j,s+1). \end{align*}
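For instance, taking $r=s=1$ and $p=2$ in the first of these formulas gives $2\xi(2;2)=\zeta(2)^2$, that is,
\begin{align*}
\xi(2;2)=\zeta(2\circledast(\{1\}_2)^\star)=\sum_{n=1}^\infty\frac{\zeta^\star_n(1)}{n^{3}}=\frac{1}{2}\zeta(2)^2=\frac{5}{4}\zeta(4),
\end{align*}
which recovers another classical Euler sum.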
\section{Alternating variant of Kaneko--Yamamoto MZVs}\label{sec3}
\subsection{Integrals of the multiple polylogarithm function in $r$ variables} For any composition $\bfk_r=(k_1,\dotsc,k_r)\in\mathbb{N}^r$, we define the \emph{classical multiple polylogarithm function} in $r$ variables by \begin{align*} \oldLi_{\bfk_r}(x_1,\dotsc,x_r):=\sum_{0<n_1<n_2<\dotsb<n_r} \frac{x_1^{n_1}\dotsm x_r^{n_r}}{n_1^{k_1}\dotsm n_r^{k_r}} \end{align*}
which converges if $|x_j\cdots x_r|<1$ for all $j=1,\dotsc,r$. It can be analytically continued to a multi-valued meromorphic function on $\mathbb{C}^r$ (see \cite{Zhao2007d}). We also consider the following two variants. The first one is the star version: \begin{align*} {\rm Li}^\star_{\bfk_r}(x_1,\dotsc,x_r):=\sum_{0<n_1\leq n_2\leq \dotsb\leq n_r} \frac{x_1^{n_1}\dotsm x_r^{n_r}}{n_1^{k_1}\dotsm n_r^{k_r}}. \end{align*} The second is the most useful when we need to apply the technique of iterated integrals: \begin{align} \gl_{\bfk_r}(x_1,\dotsc,x_{r-1},x_r):=&\, \oldLi_{\bfk_r}(x_1/x_2,\dotsc,x_{r-1}/x_r,x_r) \notag\\ =&\,\sum_{0<n_1<n_2<\dotsb<n_r} \frac{(x_1/x_2)^{n_1}\dotsm (x_{r-1}/x_r)^{n_{r-1}}x_r^{n_r}}{n_1^{k_1}\dotsm n_{r-1}^{k_{r-1}}n_r^{k_r}}\label{equ:gl} \end{align}
which converges if $|x_j|<1$ for all $j=1,\dotsc,r$. Namely, \begin{equation}\label{equ:glInteratedInt} \gl_{\bfk_r}(x_1,\dotsc,x_r)= \int_0^1 \left(\frac{x_1\, dt}{1-x_1t}\right)\left(\frac{dt}{t}\right)^{k_1-1}\cdots \left(\frac{x_r\, dt}{1-x_r t}\right)\left(\frac{dt}{t}\right)^{k_r-1}. \end{equation}
Similarly, the parametric multiple harmonic sums and parametric multiple harmonic star sums in $r$ variables are defined by \begin{align*} \zeta_n(k_1,\dotsc,k_r;x_1,\dotsc,x_r):=\sum\limits_{0<m_1<\cdots<m_r\leq n } \frac{x_1^{m_1}\cdots x_r^{m_r}}{m_1^{k_1}\cdots m_r^{k_r}} \end{align*} and \begin{align*} \zeta^\star_n(k_1,\dotsc,k_r;x_1,\dotsc,x_r):=\sum\limits_{0<m_1\leq \cdots\leq m_r\leq n} \frac{x_1^{m_1}\cdots x_r^{m_r}}{m_1^{k_1}\cdots m_r^{k_r}}, \end{align*} respectively. Obviously, \begin{align*} \lim_{n\rightarrow \infty} \zeta_n(k_1,\dotsc,k_r;x_1,\dotsc,x_r)=\oldLi_{k_1,\dotsc,k_r}(x_1,\dotsc,x_r) \end{align*} and \begin{align*} \lim_{n\rightarrow \infty} \zeta^\star_n(k_1,\dotsc,k_r;x_1,\dotsc,x_r)={\rm Li}^\star_{k_1,\dotsc,k_r}(x_1,\dotsc,x_r). \end{align*} \begin{defn} For any two compositions of positive integers $\bfk=(k_1,\dotsc,k_r)$, $\bfl=(l_1,\dotsc,l_s)$, $\bfsi:=(\sigma_1,\dotsc,\sigma_r)\in\{\pm 1\}^r$ and $\bfeps:=(\varepsilon_1,\dotsc,\varepsilon_s)\in\{\pm 1\}^s$, define \begin{align} &\zeta((\bfk;\bfsi)\circledast(\bfl;\bfeps)^\star)\equiv\zeta((k_1,\dotsc,k_r;\sigma_1,\dotsc,\sigma_r)\circledast (l_1,\dotsc,l_s;\varepsilon_1,\dotsc,\varepsilon_s)^\star)\nonumber\\ &:=\sum\limits_{n=1}^\infty \frac{\zeta_{n-1}(k_1,\dotsc,k_{r-1};\sigma_1,\dotsc,\sigma_{r-1}) \zeta^\star_n(l_1,\dotsc,l_{s-1};\varepsilon_1,\dotsc,\varepsilon_{s-1})}{n^{k_r+l_s}}(\sigma_r\varepsilon_s)^n. \end{align} We call them \emph{alternating Kaneko--Yamamoto MZVs}. \end{defn}
\begin{thm} For $n\in\mathbb{N}$, $\bfk_r=(k_1,\dotsc,k_r)\in\mathbb{N}^r$ and $\bfsi_r:=(\sigma_1,\dotsc,\sigma_r)\in\{\pm 1\}^r$, we have \begin{align*} &\int_0^1 x^{n-1} \gl_{k_1,\dotsc,k_r}(\sigma_1x,\dotsc,\sigma_rx)\,dx\nonumber\\ &=\sum_{j=1}^{k_r-1} \frac{(-1)^{j-1}}{n^{j}} \gl_{\bfk_{r-1},k_r+1-j}(\bfsi_r) +\frac{(-1)^{k_r}}{n^{k_r}}(\sigma_r^n-1)\gl_{\bfk_{r-1},1}(\bfsi_r)\nonumber\\
&\quad-\frac{\sigma_r^n}{n^{k_r}} \sum_{l=1}^{r-1} (-1)^{|\bfk_r^l|-l}\sum_{j=1}^{k_{r-l}-1}(-1)^{j}\zeta^\star_n\Big(j,\bfk_{r-1}^{l-1};\sigma_{r-l+1},(\bfsi_{r}\bfsi_{r-1})^{l-1}\Big) \gl_{\bfk_{r-l-1},k_{r-l}+1-j}(\bfsi_{r-l})\nonumber\\
&\quad-\frac{\sigma_r^n}{n^{k_r}} \sum_{l=1}^{r-1} (-1)^{|\bfk_{r}^{l+1}|-l}\gl_{\bfk_{r-l-1},1}(\bfsi_{r-l})\Big(\zeta^\star_n\big(\bfk_{r-1}^l;\sigma_{r-l+1},(\bfsi_{r}\bfsi_{r-1})^{l-1}\big)-\zeta^\star_n\big(\bfk_{r-1}^l;(\bfsi_{r}\bfsi_{r-1})^{l}\big) \Big)\nonumber\\
&\quad +(-1)^{|\bfk|-r}\frac{\sigma_r^n}{n^{k_r}} \zeta^\star_n\Big(1,\bfk_{r-1};\sigma_1,(\bfsi_r\bfsi_{r-1})^{r-1}\Big), \end{align*} where $(\bfsi_{r}\bfsi_{r-1})^{l}:=(\sigma_{r-l+1}\sigma_{r-l},\sigma_{r-l+2}\sigma_{r-l+1},\dotsc,\sigma_r\sigma_{r-1})$ and $(\bfsi_{r}\bfsi_{r-1})^{0}:=\emptyset$. If $\sigma_r=1$ then $(\sigma_r^n-1)\gl_{\bfk_{r-1},1}(\bfsi_r):=0$, and if $\sigma_{r-l}=1$ then \[\gl_{\bfk_{r-l-1},1}(\bfsi_{r-l})\Big(\zeta^\star_n\big(\bfk_{r-1}^l;\sigma_{r-l+1},(\bfsi_{r}\bfsi_{r-1})^{l-1}\big)-\zeta^\star_n\big(\bfk_{r-1}^l;(\bfsi_{r}\bfsi_{r-1})^{l}\big)\Big):=0.\] \end{thm} \begin{proof} According to definition, \begin{align*} \frac{d}{dx}\gl_{k_1,\dotsc,k_r}(\sigma_1x,\dotsc,\sigma_{r-1}x,\sigma_rx)= \left\{ \begin{array}{ll} \frac{1}{x} \gl_{k_1,\dotsc,k_{r-1},k_r-1}(\sigma_1x,\dotsc,\sigma_{r-1}x,\sigma_rx),
&\quad \hbox{if $k_r\geq 2$}; \\
\frac{\sigma_r}{1-\sigma_rx}\gl_{k_1,\dotsc,k_{r-1}}(\sigma_1x,\dotsc,\sigma_{r-1}x), &\quad\hbox{if $k_r = 1$}. \\ \end{array} \right. \end{align*} Hence, using identity above we can get the following recurrence relation \begin{align*} &\int_0^1 x^{n-1} \gl_{k_1,\dotsc,k_r}(\sigma_1x,\sigma_2x,\dotsc,\sigma_rx)\,dx\\ &=\sum_{j=1}^{k_r-1} \frac{(-1)^{j-1}}{n^j} \gl_{k_1,\dotsc,k_{r-1},k_r+1-j}(\sigma_1,\sigma_2,\dotsc,\sigma_r) +\frac{(-1)^{k_r}}{n^{k_r}}(\sigma_r^n-1)\gl_{k_1,\dotsc,k_{r-1},1}(\sigma_1,\dotsc,\sigma_r)\\ &\quad-\frac{(-1)^{k_r}}{n^{k_r}}\sigma_r^n\sum_{k=1}^{n}\sigma_r^k \int_0^1 x^{k-1} \gl_{k_1,k_2,\dotsc,k_{r-1}}(\sigma_1x,\sigma_2x,\dotsc,\sigma_{r-1}x)\,dx. \end{align*} Thus, we obtain the desired formula by using the recurrence relation above. \end{proof}
Letting $r=1$ and $2$, we can get the following two corollaries. \begin{cor}\label{cor-IL} For positive integers $n,k$ and $\sigma\in\{\pm 1\}$, \begin{align*} \int_0^1 x^{n-1}\gl_k(\sigma x)\,dx=\frac{(-1)^{k}}{n^k}(\sigma^n-1)\gl_1(\sigma) -(-1)^{k}\frac{\sigma^n}{n^k}\zeta^\star_n(1;\sigma)-\sum_{j=1}^{k-1} \frac{(-1)^{j}}{n^j}\gl_{k+1-j}(\sigma). \end{align*} \end{cor}
\begin{cor}\label{cor-IIL} For positive integers $n,k_1,k_2$ and $\sigma_1,\sigma_2\in\{\pm 1\}$, \begin{align*} &\int_0^1 x^{n-1}\gl_{k_1,k_2}(\sigma_1 x,\sigma_2 x)\,dx\\ &=\sum_{j=1}^{k_2-1}\frac{(-1)^{j-1}}{n^j} \gl_{k_1,k_2+1-j}(\sigma_1,\sigma_2) +(-1)^{k_2}\frac{\sigma_2^n}{n^{k_2}} \sum_{j=1}^{k_1-1}(-1)^{j}\gl_{k_1+1-j}(\sigma_1)\zeta^\star_n(j;\sigma_2)\\ &\quad+(-1)^{k_2}\frac{\sigma_2^n-1}{n^{k_2}}\gl_{k_1,1}(\sigma_1,\sigma_2)+(-1)^{k_1+k_2}\gl_1(\sigma_1)\frac{\sigma_2^n}{n^{k_2}}\Big(\zeta^\star_n(k_1;\sigma_2)-\zeta^\star_n(k_1;\sigma_2\sigma_1) \Big)\\ &\quad+(-1)^{k_1+k_2}\frac{\sigma_2^n}{n^{k_2}}\zeta^\star_n(1,k_1;\sigma_1,\sigma_2\sigma_1). \end{align*} \end{cor}
Clearly, setting $\sigma_1=\sigma_2=\cdots=\sigma_r=1$ gives the formula \eqref{a1}.
\subsection{Explicit formulas for alternating Kaneko--Yamamoto MZVs}
Obviously, we can consider the integral \[I_\gl((\bfk;\bfsi),(\bfl;\bfeps)):=\int_0^1 \frac{\gl_{\bfk_r}(\sigma_1x,\dotsc,\sigma_rx)\gl_{\bfl_s}(\varepsilon_1x,\dotsc,\varepsilon_sx)}{x}\,dx\] to find some explicit relations of $\zeta((\bfk;\bfsi)\circledast(\bfl;\bfeps)^\star)$. We have the following theorems.
\begin{thm} For positive integers $k,l$ and $\sigma,\varepsilon\in\{\pm 1\}$, \begin{align} &(-1)^k{\rm Li}^\star_{1,k+l}(\sigma,\sigma\varepsilon)-(-1)^l{\rm Li}^\star_{1,k+l}(\varepsilon,\sigma\varepsilon)\nonumber\\ &=\sum_{j=1}^{k-1} (-1)^{j-1} \gl_{k+1-j}(\sigma)\gl_{l+j}(\varepsilon)-\sum_{j=1}^{l-1} (-1)^{j-1} \gl_{l+1-j}(\varepsilon)\gl_{k+j}(\sigma)\nonumber\\ &\quad+(-1)^l\gl_1(\varepsilon)(\gl_{k+l}(\sigma)-\gl_{k+l}(\sigma\varepsilon)) -(-1)^k\gl_1(\sigma)(\gl_{k+l}(\varepsilon)-\gl_{k+l}(\sigma\varepsilon)), \end{align} where if $\sigma=1$ then $\gl_1(\sigma)(\gl_{k+l}(\varepsilon)-\gl_{k+l}(\sigma\varepsilon)):=0$. Similarly, if $\varepsilon=1$ then $\gl_1(\varepsilon)(\gl_{k+l}(\sigma)-\gl_{k+l}(\sigma\varepsilon)):=0$. \end{thm} \begin{proof} Considering the integral $\int_0^1 \frac{\gl_k(\sigma x)\gl_l(\varepsilon x)}{x}\,dx$ and using Corollary \ref{cor-IL} with an elementary calculation, we obtain the formula. \end{proof}
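For instance, taking $k=l=1$, $\sigma=-1$ and $\varepsilon=1$, both sums on the right-hand side are empty and the first of the remaining terms vanishes by the convention for $\varepsilon=1$, so the theorem gives
\begin{align*}
{\rm Li}^\star_{1,2}(1,-1)-{\rm Li}^\star_{1,2}(-1,-1)=\gl_1(-1)\big(\gl_2(1)-\gl_2(-1)\big)
=-\log 2\left(\zeta(2)+\frac{\pi^2}{12}\right)=-\frac{\pi^2}{4}\log 2,
\end{align*}
which agrees with the known evaluations of the weight three alternating double zeta (star) values.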
\begin{thm} For positive integers $k_1,k_2,l$ and $\sigma_1,\sigma_2,\varepsilon\in\{\pm 1\}$, \begin{align}\label{c7} &\sum_{j=1}^{l-1} (-1)^{j-1} \gl_{l+1-j}(\varepsilon)\oldLi_{k_1,k_2+j}(\sigma_1\sigma_2,\sigma_2)
-(-1)^l \zeta((k_1,k_2;\sigma_1\sigma_2,\sigma_2)\circledast(1,l;\varepsilon,\varepsilon)^\star)\nonumber\\ &\quad-(-1)^l\gl_{1}(\varepsilon)\Big(\oldLi_{k_1,k_2+l}(\sigma_1\sigma_2,\sigma_2)-\oldLi_{k_1,k_2+l}(\sigma_1\sigma_2,\sigma_2\varepsilon)\Big)\nonumber\\ &=\sum_{j=1}^{k_2-1} (-1)^{j-1} \gl_{k_1,k_2+1-j}(\sigma_1,\sigma_2)\gl_{l+j}(\varepsilon) -(-1)^{k_2}\sum_{j=1}^{k_1-1}(-1)^{j-1} \gl_{k_1+1-j}(\sigma_1){\rm Li}^\star_{j,k_2+l}(\sigma_2,\varepsilon\sigma_2)\nonumber\\ &\quad-(-1)^{k_2}\gl_{k_1,1}(\sigma_1,\sigma_2)\Big(\gl_{k_2}(\varepsilon)-\gl_{k_2}(\varepsilon\sigma_2) \Big) +(-1)^{k_1+k_2} {\rm Li}^\star_{1,k_1,k_2+l}(\sigma_1,\sigma_2\sigma_1,\sigma_2\varepsilon), \nonumber\\ &\quad+(-1)^{k_1+k_2} \gl_1(\sigma_1)\Big({\rm Li}^\star_{k_1,k_2+l}(\sigma_2,\varepsilon\sigma_2)-{\rm Li}^\star_{k_1,k_2+l}(\sigma_2\sigma_1,\varepsilon\sigma_2) \Big)\end{align} where if $\varepsilon=1$ then $\gl_{1}(\varepsilon)\Big(\oldLi_{k_1,k_2+l}(\sigma_1\sigma_2,\sigma_2)-\oldLi_{k_1,k_2+l}(\sigma_1\sigma_2,\sigma_2\varepsilon)\Big):=0$; if $\sigma_1=1$ then $\gl_1(\sigma_1)\Big({\rm Li}^\star_{k_1,k_2+l}(\sigma_2,\varepsilon\sigma_2)-{\rm Li}^\star_{k_1,k_2+l}(\sigma_2\sigma_1,\varepsilon\sigma_2) \Big):=0$ and if $\sigma_2=1$ then $\gl_{k_1,1}(\sigma_1,\sigma_2)\Big(\gl_{k_2}(\varepsilon)-\gl_{k_2}(\varepsilon\sigma_2) \Big):=0$. \end{thm} \begin{proof} Similarly, considering the integral $\int_0^1 \frac{\gl_{k_1,k_2}(\sigma_1 x,\sigma_1 x)\gl_l(\varepsilon x)}{x}\,dx$ and using Corollary \ref{cor-IIL} with an elementary calculation, we prove the formula. \end{proof}
On the other hand, according to definition, we have \begin{align*} &\zeta((k_1,k_2;\sigma_1\sigma_2,\sigma_2)\circledast(1,l;\varepsilon,\varepsilon)^\star)=\sum\limits_{n=1}^\infty \frac{\zeta_{n-1}(k_1;\sigma_1\sigma_2)\zeta^\star_n(1;\varepsilon)}{n^{k_2+l}}(\sigma_2\varepsilon)^n\\ &={\rm Li}^\star_{k_1,1,k_2+l}(\sigma_1\sigma_2,\varepsilon,\sigma_2\varepsilon)+{\rm Li}^\star_{1,k_1,k_2+l}(\varepsilon,\sigma_1\sigma_2,\sigma_2\varepsilon)-{\rm Li}^\star_{k_1+1,k_2+l}(\sigma_1\sigma_2\varepsilon,\sigma_2\varepsilon) -{\rm Li}^\star_{1,k_1+k_2+l}(\varepsilon,\sigma_1\varepsilon). \end{align*} Substituting it into \eqref{c7} yields the following corollary. \begin{cor}\label{cor:c8} For positive integers $k_1,k_2,l$ and $\sigma_1,\sigma_2,\varepsilon\in\{\pm 1\}$, \begin{align*} &(-1)^{l}{\rm Li}^\star_{k_1,1,k_2+l}(\sigma_1\sigma_2,\varepsilon,\sigma_2\varepsilon) +(-1)^{l}{\rm Li}^\star_{1,k_1,k_2+l}(\varepsilon,\sigma_1\sigma_2,\sigma_2\varepsilon) +(-1)^{k_1+k_2} {\rm Li}^\star_{1,k_1,k_2+l}(\sigma_1,\sigma_2\sigma_1,\sigma_2\varepsilon) \\ &=\sum_{j=1}^{k_2-1} (-1)^{j} \gl_{k_1,k_2+1-j}(\sigma_1,\sigma_2)\gl_{l+j}(\varepsilon) -(-1)^{k_2}\sum_{j=1}^{k_1-1}(-1)^{j} \gl_{k_1+1-j}(\sigma_1){\rm Li}^\star_{j,k_2+l}(\sigma_2,\varepsilon\sigma_2) \\ &\quad-\sum_{j=1}^{l-1} (-1)^{j} \gl_{l+1-j}(\varepsilon)\oldLi_{k_1,k_2+j}(\sigma_1\sigma_2,\sigma_2) +(-1)^{k_2}\gl_{k_1,1}(\sigma_1,\sigma_2)\Big(\gl_{k_2}(\varepsilon)-\gl_{k_2}(\varepsilon\sigma_2) \Big) \\ &\quad-(-1)^{k_1+k_2} \gl_1(\sigma_1)\Big({\rm Li}^\star_{k_1,k_2+l}(\sigma_2,\varepsilon\sigma_2)-{\rm Li}^\star_{k_1,k_2+l}(\sigma_2\sigma_1,\varepsilon\sigma_2) \Big) \\ &\quad-(-1)^l\gl_{1}(\varepsilon)\Big(\oldLi_{k_1,k_2+l}(\sigma_1\sigma_2,\sigma_2)-\oldLi_{k_1,k_2+l}(\sigma_1\sigma_2,\sigma_2\varepsilon)\Big) \\ &\quad+(-1)^{l}{\rm Li}^\star_{k_1+1,k_2+l}(\sigma_1\sigma_2\varepsilon,\sigma_2\varepsilon) +(-1)^{l}{\rm Li}^\star_{1,k_1+k_2+l}(\varepsilon,\sigma_1\varepsilon). \end{align*} \end{cor}
Clearly, setting $\sigma_1=\sigma_2=\varepsilon=1$ in Corollary \ref{cor:c8} gives the formula \eqref{a4}. We also find numerous explicit relations involving alternating MZVs. For example, letting $k_1=k_2=l=2$ and $\sigma_1=\eps=-1, \sigma_2=1$, we have \begin{align*}
\zeta^\star(\bar 2,\bar 1,\bar 4)+2\zeta^\star(\bar 1,\bar 2,\bar 4) &=3 {\rm Li}_4\left(\frac{1}{2}\right) \zeta (3)-\frac{7 \pi ^4 \zeta (3)}{128}+\frac{61 \pi ^2 \zeta (5)}{192}-\frac{105 \zeta (7)}{128}+\frac{1}{8} \zeta (3) \log ^4(2)\\&\quad-\frac{1}{8} \pi ^2 \zeta (3) \log ^2(2)+\frac{63}{16} \zeta (3)^2 \log (2)-\frac{61 \pi ^6 \log (2)}{10080}, \end{align*} where we used Au's Mathematica package \cite{Au2020}.
\subsection{Multiple integrals associated with 3-labeled posets}
In this subsection, we introduce the multiple integrals associated with 3-labeled posets, and express the integrals $I_\gl((\bfk;\bfsi),(\bfl;\bfeps))$ in terms of multiple integrals associated with 3-labeled posets.
\begin{defn} A \emph{$3$-poset} is a pair $(X,\delta_X)$, where $X=(X,\leq)$ is a finite partially ordered set and $\delta_X$ is a map from $X$ to $\{-1,0,1\}$. We often omit $\delta_X$ and simply say ``a 3-poset $X$''. The $\delta_X$ is called the \emph{label map} of $X$.
Similar to the 2-poset case, a 3-poset $(X,\delta_X)$ is called \emph{admissible} if $\delta_X(x) \ne 1$ for all maximal elements $x\in X$ and $\delta_X(x) \ne 0$ for all minimal elements $x \in X$. \end{defn}
\begin{defn} For an admissible $3$-poset $X$, we define the associated integral \begin{equation} I(X)=\int_{\Delta_X}\prod_{x\in X}\omega_{\delta_X(x)}(t_x), \end{equation} where
\[\Delta_X=\bigl\{(t_x)_x\in [0,1]^X \bigm| t_x<t_y \text{ if } x<y\bigr\}\] and \[\omega_{-1}(t)=\frac{dt}{1+t},\quad \omega_0(t)=\frac{dt}{t}, \quad \omega_1(t)=\frac{dt}{1-t}.\] \end{defn}
For the empty 3-poset, denoted $\emptyset$, we put $I(\emptyset):=1$.
\begin{pro}\label{prop:shuffl3poset} For non-comparable elements $a$ and $b$ of a $3$-poset $X$, $X^b_a$ denotes the $3$-poset that is obtained from $X$ by adjoining the relation $a<b$. If $X$ is an admissible $3$-poset, then the $3$-poset $X^b_a$ and $X^a_b$ are admissible and \begin{equation} I(X)=I(X^b_a)+I(X^a_b). \end{equation} \end{pro}
Note that the admissibility of a $3$-poset corresponds to the convergence of the associated integral. We use Hasse diagrams to indicate $3$-posets, with vertices $\circ$ and ``$\bullet\ \sigma$" corresponding to $\delta(x)=0$ and $\delta(x)=\sigma\ (\sigma\in\{\pm 1\})$, respectively. For convenience, if $\sigma=1$, replace ``$\bullet\ 1$" by $\bullet$ and if $\sigma=-1$, replace ``$\bullet\ -1$" by ``$\bullet\ {\bar1}$". For example, the diagram \[\begin{xy} {(0,-4) \ar @{{*}-o} (4,0)}, {(4,0) \ar @{-{*}} (8,-4)}, {(8,-4) \ar @{-o}_{\bar 1} (12,0)}, {(12,0) \ar @{-o} (16,4)}, {(16,4) \ar @{-{*}} (24,-4)}, {(24,-4) \ar @{-o}_{\bar 1} (28,0)}, {(28,0) \ar @{-o} (32,4)} \end{xy} \] represents the $3$-poset $X=\{x_1,x_2,x_3,x_4,x_5,x_6,x_7,x_8\}$ with order $x_1<x_2>x_3<x_4<x_5>x_6<x_7<x_8$ and label $(\delta_X(x_1),\dotsc,\delta_X(x_8))=(1,0,-1,0,0,-1,0,0)$. For composition $\bfk=(k_1,\dotsc,k_r)$ and $\bfsi\in\{\pm 1\}^r$ (admissible or not), we write \[\begin{xy} {(0,-3) \ar @{{*}.o} (0,3)}, {(1,-3) \ar @/_1mm/ @{-} _{(\bfk,\bfsi)} (1,3)} \end{xy}\] for the `totally ordered' diagram: \[\begin{xy} {(0,-24) \ar @{{*}-o}_{\sigma_1} (4,-20)}, {(4,-20) \ar @{.o} (10,-14)}, {(10,-14) \ar @{-} (14,-10)}, {(14,-10) \ar @{.} (20,-4)}, {(20,-4) \ar @{-{*}} (24,0)}, {(24,0) \ar @{-o}_{\sigma_{r-1}}(28,4)}, {(28,4) \ar @{.o} (34,10)}, {(34,10) \ar @{-{*}} (38,14)}, {(38,14) \ar @{-o}_{\sigma_r} (42,18)}, {(42,18) \ar @{.o} (48,24)}, {(0,-23) \ar @/^2mm/ @{-}^{k_1} (9,-14)}, {(24,1) \ar @/^2mm/ @{-}^{k_{r-1}} (33,10)}, {(38,15) \ar @/^2mm/ @{-}^{k_r} (47,24)} \end{xy} \] If $k_i=1$, we understand the notation $\begin{xy} {(0,-5) \ar @{{*}-o}_{\sigma_i} (4,-1)}, {(4,-1) \ar @{.o} (10,5)}, {(0,-4) \ar @/^2mm/ @{-}^{k_i} (9,5)} \end{xy}$ as a single $\bullet\ {\sigma_i}$. We see from \eqref{equ:glInteratedInt} \begin{align}\label{5.19} I\left(\ \begin{xy} {(0,-3) \ar @{{*}.o} (0,3)}, {(1,-3) \ar @/_1mm/ @{-} _{(\bfk,\bfsi)} (1,3)} \end{xy}\right)=\frac{\gl_{k_1,\dotsc,k_r}(\sigma_1,\sigma_2,\dotsc,\sigma_r)}{\sigma_1\sigma_2\cdots \sigma_r}. \end{align}
Therefore, according to the definition of $I_\gl((\bfk;\bfsi),(\bfl;\bfeps))$, and using this notation of multiple associated integral, we can get the following theorem. \begin{thm}\label{thm-ILA-} For compositions $\bfk\equiv \bfk_r$ and $\bfl\equiv\bfl_s$ with $\bfsi\in\{\pm 1\}^r$ and $\bfeps\in\{\pm 1\}^s$, \begin{equation*} I_\gl((\bfk;\bfsi),(\bfl;\bfeps))=I\left(\xybox{ {(0,-9) \ar @{{*}-o} (0,-4)}, {(0,-4) \ar @{.o} (0,4)}, {(0,4) \ar @{-o} (5,9)},
{(10,-9) \ar @{{*}-o} (10,-4)}, {(10,-4) \ar @{.o} (10,4)}, {(10,4) \ar @{-} (5,9)},
{(-1,-9) \ar @/^1mm/ @{-} ^{(\bfk,\bfsi)} (-1,4)}, {(11,-9) \ar @/_1mm/ @{-} _{(\bfl,\bfeps)} (11,4)}, }\ \right). \end{equation*} \end{thm} \begin{proof}This follows immediately from the definition of $I_\gl((\bfk;\bfsi),(\bfl;\bfeps))$. We leave the details to the interested reader. \end{proof}
Finally, we end this section with the following theorem, which extends \cite[Theorem 4.1]{KY2018} to level two. \begin{thm} For any non-empty compositions $\bfk_r$, $\bfl_s$ and $\bfsi_r\in\{\pm 1\}^r$, we have \begin{align}\label{5.21} I\left( \raisebox{16pt}{\begin{xy} {(-3,-18) \ar @{{*}-}_{\sigma_1'} (0,-15)}, {(0,-15) \ar @{{o}.} (3,-12)}, {(3,-12) \ar @{{o}.} (9,-6)}, {(9,-6) \ar @{{*}-}_{\sigma_r'} (12,-3)}, {(12,-3) \ar @{{o}.} (15,0)}, {(15,0) \ar @{{o}-} (18,3)},
{(18,3) \ar @{{o}-} (21,6)}, {(21,6) \ar @{{o}.} (24,9)}, {(24,9) \ar @{{o}-} (27,3)}, {(27,3) \ar @{{*}-} (30,6)}, {(30,6) \ar @{{o}.} (33,9)}, {(33,9) \ar @{{o}-} (35,5)}, {(37,6) \ar @{.} (41,6)}, {(42,3) \ar @{{*}-} (45,6)}, {(45,6) \ar @{{o}.{o}} (48,9)},
{(-3,-17) \ar @/^1mm/ @{-}^{k_1} (2,-12)}, {(9,-5) \ar @/^1mm/ @{-}^{k_r} (14,0)}, {(18,4) \ar @/^1mm/ @{-}^{l_s} (23,9)}, {(28,3) \ar @/_1mm/ @{-}_{l_{s-1}} (33,8)}, {(43,3) \ar @/_1mm/ @{-}_{l_1} (48,8)}, \end{xy}} \right)=\frac{\zeta((\bfk_r;\bfsi_r)\circledast(\bfl_s;\{1\}_s)^\star)}{\sigma_1'\sigma_2'\cdots\sigma_r'}, \end{align} where $\sigma_j'=\sigma_j\sigma_{j+1}\cdots\sigma_r$, and $\bullet\ \sigma_j'$ corresponding to $\delta(x)=\sigma_j'$. \end{thm} \begin{proof} The proof is done straightforwardly by computing the multiple integral on the left-hand side of \eqref{5.21} as a repeated integral ``from left to right'' using the key ideas in the proof of \eqref{equ:glInteratedInt} and \cite[Corollary 3.1]{Y2014}. \end{proof}
Letting all $\sigma_i=1\ (i=1,2,\dotsc,r)$, we obtain the ``integral-series'' relation of Kaneko--Yamamoto \cite{KY2018}.
From Proposition \ref{prop:shuffl3poset} and (\ref{5.19}), it is clear that the left-hand side of (\ref{5.21}) can be expressed in terms of a linear combination of alternating multiple zeta values. Hence, we can find many linear relations of alternating multiple zeta values from (\ref{5.21}). For example, \begin{align} &2\gl_{1,1,3}(\sigma_1',\sigma_2',1) +2\gl_{1,1,3}(\sigma_1',1,\sigma_2')+2\gl_{1,1,3}(1,\sigma_1',\sigma_2')\nonumber\\&\quad+\gl_{1,2,2}(\sigma_1',1,\sigma_2')+\gl_{1,2,2}(1,\sigma_1',\sigma_2')+\gl_{2,1,2}(1,\sigma_1',\sigma_2')\nonumber\\ &=\zeta(2,1,2;1,\sigma_1,\sigma_2)+\zeta(1,2,2;\sigma_1,1,\sigma_2)+\zeta(3,2;\sigma_1,\sigma_2)+\zeta(1,4;\sigma_1,\sigma_2). \end{align} If $(\sigma_1,\sigma_2)=(1,1)$ and $(1,-1)$, then we get the following two cases \begin{align*} &6\zeta(1,1,3)+2\zeta(1,2,2)+\zeta(2,1,2)=\zeta(1,2,2)+\zeta(2,1,2)+\zeta(3,2)+\zeta(1,4), \end{align*} and \begin{align*} &2\zeta(1,{\bar 1},3)+2\zeta({\bar 1},{\bar 1},{\bar 3})+2\zeta({\bar 1},{ 1},{\bar 3})+\zeta({\bar 1},{\bar 2},{\bar 2})+\zeta({\bar 1},{ 2},{\bar 2})+\zeta({\bar 2},1,{\bar 2})\\ &\quad=\zeta(2,1,{\bar 2})+\zeta(1,2,{\bar 2})+\zeta(3,{\bar 2})+\zeta(1,{\bar 4}). \end{align*}
\section{Integrals about multiple $t$-harmonic (star) sums}\label{sec4} Similar to MHSs and MHSSs, we can define their $t$-versions as follows. \begin{defn} For any $n, r\in\mathbb{N}$ and composition $\bfk:=(k_1,\dotsc,k_r)\in\mathbb{N}^r$, \begin{align} &t_n(k_1,\dotsc,k_r):=\sum_{0<n_1<n_2<\dotsb<n_r\leq n} \frac{1}{(2n_1-1)^{k_1}(2n_2-1)^{k_2}\dotsm (2n_r-1)^{k_r}},\label{t-1}\\ &t^\star_n(k_1,\dotsc,k_r):=\sum_{0<n_1\leq n_2\leq \dotsb\leq n_r\leq n} \frac{1}{(2n_1-1)^{k_1}(2n_2-1)^{k_2}\dotsm (2n_r-1)^{k_r}},\label{t-2} \end{align} where \eqref{t-1} and \eqref{t-2} are called the multiple $t$-harmonic sums and multiple $t$-harmonic star sums, respectively. If $n<r$ then ${t_n}(\bfk):=0$ and ${t_n}(\emptyset )={t^\star _n}(\emptyset ):=1$. \end{defn}
For composition $\bfk:=(k_1,\dotsc,k_r)$, define \begin{align*} L(k_1,\dotsc,k_r;x):=\frac{1}{2^{k_1+\dotsb+k_r}} {\rm Li}_{k_1,\dotsc,k_r}(x^2) \end{align*} where $L(\emptyset;x):=1$. Set $L(k_1,\dotsc,k_r):=L(k_1,\dotsc,k_r;1)$. Similarly, define \begin{align*} t(k_1,\dotsc,k_r;x):&=\sum_{0<n_1<n_2<\dotsb<n_r} \frac{x^{2n_r-1}}{(2n_1-1)^{k_1}(2n_2-1)^{k_2}\dotsm (2n_r-1)^{k_r}}\\ &=\sum\limits_{n=1}^\infty \frac{t_{n-1}(k_1,\dotsc,k_{r-1})}{(2n-1)^{k_r}}x^{2n-1}, \end{align*} where $t(\emptyset;x):=1/x$. Note that $t(k_1,\dotsc,k_r;1)=t(k_1,\dotsc,k_r)$.
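In the depth-one case these functions have simple closed forms, namely $L(1;x)=-\frac{1}{2}\log(1-x^2)$ and
\begin{align*}
t(1;x)=\sum_{n=1}^\infty \frac{x^{2n-1}}{2n-1}=\frac{1}{2}\log\frac{1+x}{1-x},
\end{align*}
and at $x=1$ we have $t(k;1)=t(k)=(1-2^{-k})\zeta(k)$ for $k\geq 2$.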
\begin{thm} For composition $\bfk:=(k_1,\dotsc,k_r)$ and positive integer $n$, \begin{align}\label{d3} &\int_0^1 x^{2n-2} L(\bfk_r;x)\,dx=\sum_{j=1}^{k_r-1} \frac{(-1)^{j-1}}{(2n-1)^j}L(\bfk_{r-1},k_r+1-j)
+ \frac{(-1)^{|\bfk|-r}}{(2n-1)^{k_r}} t^\star_n(1,\bfk_{r-1})\nonumber\\
&\quad+\frac{1}{(2n-1)^{k_r}} \sum_{l=1}^{r-1} (-1)^{|\bfk_r^l|-l}\sum_{j=1}^{k_{r-l}-1} (-1)^{j-1}L(\bfk_{r-l-1},k_{r-l}+1-j)t^\star_n(j,\bfk_{r-1}^{l-1})\nonumber\\
&\quad-\frac{1}{(2n-1)^{k_r}}\sum_{l=0}^{r-1} (-1)^{|\bfk_r^{l+1}|-l-1}\left(\int_0^1 \frac{L(\bfk_{r-l-1},1;x)}{x^2}\,dx\right)t^\star_n(\bfk_{r-1}^l). \end{align} \end{thm} \begin{proof} By the simple substitution $t\to t^2/x^2$ in \eqref{equ:glInteratedInt} we see quickly that \begin{align*} L(k_1,\dotsc,k_r;x)=\int_{0}^x \frac{tdt}{1-t^2}\left(\frac{dt}{t}\right)^{k_1-1}\dotsm \frac{tdt}{1-t^2}\left(\frac{dt}{t}\right)^{k_r-1}. \end{align*} By an elementary calculation, we deduce the recurrence relation \begin{align*} \int_0^1 x^{2n-2} L(\bfk_r;x)\,dx&=\sum_{j=1}^{k_r-1} \frac{(-1)^{j-1}}{(2n-1)^j}L(\bfk_{r-1},k_r+1-j) -\frac{(-1)^{k_r-1}}{(2n-1)^{k_r}} \int_0^1 \frac{L(\bfk_{r-1},1;x)}{x^2}dx\\ &\quad+\frac{(-1)^{k_r-1}}{(2n-1)^{k_r}}\sum_{l=1}^n \int_0^1 x^{2l-2}L(\bfk_{r-1};x)dx. \end{align*} Hence, using the recurrence relation, we obtain the desired evaluation by direct calculations. \end{proof}
\begin{thm} For composition $\bfk:=(k_1,\dotsc,k_r)$ and positive integer $n$, \begin{align}
&\int_0^1 x^{2n-2} t(\bfk_r;x)\,dx=\sum_{j=1}^{k_r-1} \frac{(-1)^{j-1}}{(2n-1)^j}t(\bfk_{r-1},k_r+1-j) + \frac{(-1)^{|\bfk|-r}}{(2n-1)^{k_r}} s^\star_n(1,\bfk_{r-1})\nonumber\\
&\quad+\frac{1}{(2n-1)^{k_r}} \sum_{l=1}^{r-1} (-1)^{|\bfk_r^l|-l}\sum_{j=1}^{k_{r-l}-1} (-1)^{j-1}t(\bfk_{r-l-1},k_{r-l}+1-j)\widehat{t}^\star_n(j,\bfk_{r-1}^{l-1})\nonumber\\
&\quad+\frac{1}{(2n-1)^{k_r}}\sum_{l=0}^{r-1} (-1)^{|\bfk_r^{l+1}|-l-1}\left(\int_0^1t(\bfk_{r-l-1},1;x)\,dx\right)\widehat{t}^\star_n(\bfk_{r-1}^l), \end{align} where \begin{align*} &\widehat{t}^\star_n(k_1,\dotsc,k_r):=\sum_{2\leq n_1\leq n_2\leq \dotsb\leq n_r\leq n} \frac{1}{(2n_1-1)^{k_1}(2n_2-1)^{k_2}\cdots(2n_r-1)^{k_r}},\\ &s^\star_n(k_1,\dotsc,k_r):=\sum_{2\leq n_1\leq n_2\leq \dotsb\leq n_r\leq n} \frac{1}{(2n_1-2)^{k_1}(2n_2-1)^{k_2}\cdots(2n_r-1)^{k_r}}. \end{align*} \end{thm} \begin{proof} By definition we have \begin{align*} \frac{d}{dx}t({{k_1}, \cdots ,k_{r-1},{k_r}}; x)= \left\{ {\begin{array}{*{20}{c}} \frac{1}{x} t({{k_1}, \cdots ,{k_{r-1}},{k_r-1}};x)
{\ \ (k_r\geq 2),} \\
{\frac{x}{1-x^2}t({{k_1}, \cdots ,{k_{r-1}}};x)\;\;\;\ \ \ (k_r = 1),} \\ \end{array} } \right. \end{align*} where $t(\emptyset;x):=1/x$. Hence, we obtain the iterated integral \begin{align*} t(k_1,\dotsc,k_r;x)=\int_{0}^x \frac{dt}{1-t^2}\left(\frac{dt}{t}\right)^{k_1-1}\frac{tdt}{1-t^2}\left(\frac{dt}{t}\right)^{k_{2}-1} \cdots \frac{tdt}{1-t^2}\left(\frac{dt}{t}\right)^{k_r-1}. \end{align*} By an elementary calculation, we deduce the recurrence relation \begin{align*} \int_0^1 x^{2n-2} t(\bfk_r;x)\,dx&=\sum_{j=1}^{k_r-1} \frac{(-1)^{j-1}}{(2n-1)^j}t(\bfk_{r-1},k_r+1-j) +\frac{(-1)^{k_r-1}}{(2n-1)^{k_r}} \int_0^1 t(\bfk_{r-1},1;x)\,dx\\ &\quad+\frac{(-1)^{k_r-1}}{(2n-1)^{k_r}}\sum_{l=2}^n \int_0^1 x^{2l-2}t(\bfk_{r-1};x)dx. \end{align*} Hence, using the recurrence relation, we obtain the desired evaluation by direct calculations. \end{proof}
\begin{thm}\label{thm:L1111} For any positive integer $r$, $\int_0^1 \frac{L(\{1\}_{r};x)}{x^2} \, dx$ can be expressed as a $\mathbb{Q}$-linear combinations of products of $\log 2$ and Riemann zeta values. More precisely, we have \begin{equation*} 1-\sum_{r\ge 1} \left(\int_0^1 \frac{L(\{1\}_{r};x)}{x^2} \, dx\right) u^r =\exp\left( \sum_{n=1}^\infty \frac{\zeta(\bar n)}{n}u^n\right) =\exp\left(-\log(2)u-\sum_{n=2}^\infty \frac{1-2^{1-n}}{n}\zeta(n)u^n\right). \end{equation*} \end{thm} \begin{proof} Consider the generating function \begin{equation*} F(u):=1-\sum_{r=1}^\infty 2^r\left(\int_0^1 \frac{L(\{1\}_{r};x)}{x^2} \, dx\right) u^r. \end{equation*} By definition \begin{align*} F(u) =\, & 1-\sum_{r=1}^\infty u^r \int_0^1 \int_0^{x^2} \left(\frac{dt}{1-t} \right)^r \frac{dx}{x^2} \\\ =\, & 1-\sum_{r=1}^\infty \frac{u^r}{r!} \int_0^1 \left( \int_0^{x^2} \frac{dt}{1-t} \right)^r \frac{dx}{x^2} \\ =\, & 1- \int_0^1 \left( \sum_{r=1}^\infty \frac{(-u\log(1-x^2))^r}{r!} \right) \frac{dx}{x^2} \\ =\, & 1+\int_0^1 \Big( (1-x^2)^{-u}-1\Big) d(x^{-1})
= \frac{\Gamma(1-u) \Gamma(1/2)}{\Gamma(1/2-u)} \end{align*} by integration by parts followed by the substitution $x=\sqrt{t}$. Using the expansion \begin{align*}
\Gamma(1-u)=\exp\left(\gamma u+\sum_{n=2}^\infty \frac{\zeta(n)}{n}u^n\right)\qquad(|u|<1) \end{align*} and setting $x=1/2-u$ in the duplication formula $\Gamma(x)\Gamma(x+1/2)=2^{1-2x}\sqrt{\pi}\Gamma(2x)$, we obtain \begin{align*}
\log\Gamma(1/2-u)=\frac{\log\pi}{2}+\gamma u+2u\log(2)+\sum_{n=2}^\infty \frac{(2^n-1)\zeta(n)}{n}u^n\qquad(|u|<1/2). \end{align*} Therefore \begin{equation*} F(u)=\exp\left(-2\log(2)u-\sum_{n=2}^\infty \frac{2^n-2}{n}\zeta(n)u^n\right)= \exp\left( \sum_{n=1}^\infty \frac{2^n}{n}\zeta(\bar n)u^n\right) \end{equation*} by the facts that $\zeta(\bar1)=-\log 2$ and $2^n \zeta(\bar n)=(2-2^n)\zeta(n)$ for $n\ge 2$. The theorem follows immediately. \end{proof}
Clearly, $\int_0^1 \frac{L(\{1\}_{r};x)}{x^2}\,dx\in\mathbb{Q}[\log(2),\zeta(2),\zeta(3),\zeta(4),\ldots]$. For example, \begin{align} \label{equ:Lxdepth=1} &\int_0^1 \frac{L(1;x)}{x^2}\,dx=\log(2),\\ &\int_0^1 \frac{L(1,1;x)}{x^2}\,dx=\frac{1}{4}\zeta(2)-\frac1{2}\log^2(2), \notag\\ &\int_0^1 \frac{L(1,1,1;x)}{x^2}\,dx=\frac1{4}\zeta(3)+\frac1{6}\log^3(2)-\frac1{4}\zeta(2)\log(2). \notag \end{align}
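These evaluations are straightforward to confirm numerically. The following short Python sketch is an illustrative check of ours, not part of the argument; it uses the \texttt{mpmath} library and the closed form $L(\{1\}_{r};x)=\frac{1}{2^{r}r!}\bigl(-\log(1-x^2)\bigr)^{r}$, which follows from the iterated integral representation because all $r$ one-forms coincide.
\begin{verbatim}
from math import factorial
from mpmath import mp, quad, log, zeta

mp.dps = 30  # working precision

def L_ones(r, x):
    # L({1}_r; x) = (-log(1-x^2))^r / (2^r * r!)
    return (-log(1 - x**2))**r / (2**r * factorial(r))

def lhs(r):
    # numerical value of int_0^1 L({1}_r; x) / x^2 dx
    return quad(lambda x: L_ones(r, x) / x**2, [0, 1])

log2 = log(2)
rhs = {1: log2,
       2: zeta(2)/4 - log2**2/2,
       3: zeta(3)/4 + log2**3/6 - zeta(2)*log2/4}
for r in (1, 2, 3):
    print(r, lhs(r), rhs[r])
\end{verbatim}
The two printed columns should agree to the working precision.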
More generally, using an argument similar to the one in the proof of Theorem \ref{thm-IA}, we can prove the following result. \begin{thm}\label{thm:LandtIntegrals} Let $r,n$ be two non-negative integers and $\bfk_r=(k_1,\dotsc,k_r)\in\mathbb{N}^r$ with $\bfk_0=\emptyset$. Then one can express all of the integrals \begin{equation*} \int_0^1 \frac{L(k_1,\dotsc,k_r,1;x)}{x^n}\,dx\ \quad (0\le n\le 2r+2) \end{equation*} and \begin{equation*}
\int_0^1 \frac{t(k_1,\dotsc,k_r,1;x)}{x^n}\,dx\ \quad(0\le n\le 2r+1) \end{equation*} as $\mathbb{Q}$-linear combinations of alternating MZVs (and the number $1$ for $\int_0^1 L(\bfk_r,1;x)\,dx$). \end{thm}
\begin{proof} The case $n=1$ is trivial as both integrals are clearly already MMVs after the integration.
If $n=0$ then we have \begin{equation*}
\int_0^1 t(k_1,\dotsc,k_r,1;x) \,dx =\int_0^1 \sum_{0<n_1<\dots<n_r<m} \frac{x^{2m-1} \,dx}{(2n_1-1)^{k_1}\cdots (2n_r-1)^{k_r}(2m-1)} =\frac{\lim_{N\to \infty} c_N}{2^{r+1}} \end{equation*} where \begin{align} c_N =\, &\sum_{0<n_1<\dots<n_r<m\le N} \frac{2^{r+1}}{(2n_1-1)^{k_1}\cdots (2n_r-1)^{k_r}(2m-1)(2m)} \notag\\ =\,& \sum_{0<n_1<\dots<n_r<m\le 2N} \frac{(1-(-1)^{n_1})\cdots (1-(-1)^{n_r})}{n_1^{k_1}\cdots n_r^{k_r}} \left(\frac{1-(-1)^m}{m}-\frac{1-(-1)^m}{m+1}\right) \notag\\ =\,&- 2\sum_{0<n_1<\dots<n_r<m\le 2N} \frac{(1-(-1)^{n_1})\cdots (1-(-1)^{n_r})(-1)^m}{n_1^{k_1}\cdots n_r^{k_r}m}\notag\\ +\,&\sum_{0<n_1<\dots<n_r\le 2N} \frac{(1-(-1)^{n_1})\cdots (1-(-1)^{n_r})}{n_1^{k_1}\cdots n_r^{k_r}} \sum_{m=n_r+1}^{2N} \left(\frac{1+(-1)^m}{m}-\frac{1-(-1)^m}{m+1}\right) \notag\\ =\,&- 2\sum_{0<n_1<\dots<n_r<m\le 2N} \frac{(1-(-1)^{n_1})\cdots (1-(-1)^{n_r})(-1)^m}{n_1^{k_1}\cdots n_r^{k_r}m}\notag\\ +\,&\sum_{0<n_1<\dots<n_r\le 2N} \frac{(1-(-1)^{n_1})\cdots (1-(-1)^{n_r})}{n_1^{k_1}\cdots n_r^{k_r}}
\left(\frac{1-(-1)^{n_r}}{n_r+1}\right). \label{equ:tintInductionStep} \end{align} By partial fraction decomposition \begin{equation*} \frac1{n^k(n+1)}=\sum_{j=2}^k \frac{(-1)^{k-j}}{n^j}-(-1)^k\left(\frac{1}{n}-\frac{1}{n+1}\right) \end{equation*} setting $n=n_r$, $k=k_r+1$ and taking $N\to\infty$ in the above, we may assume $k_r=1$ without loss of generality. Now if $r=1$ then by \eqref{equ:tintInductionStep} \begin{equation*} c_N = 2\sum_{0<n<m\le 2N}\frac{(-1)^{n+m}-(-1)^m}{nm} + \sum_{0<n\le 2N} \frac{(1-(-1)^n)^2}{n(n+1)}. \end{equation*} Hence \begin{equation}\label{equ:tIntr=1}
\int_0^1 t(1,1;x)\,dx=\frac14 \big(2\zeta(\bar1,\bar1)-2\zeta(1,\bar1)+4\log 2\big)=\log 2-\frac14 \zeta(2) \end{equation} by \cite[Proposition 14.2.5]{Z2016}. If $r>1$ then \begin{align*} \sum_{n=m}^{2N} \frac{1-(-1)^{n}}{n(n+1)} = \sum_{n=m}^{2N} \frac{1}{n(n+1)} - \sum_{n=m}^{2N} \frac{(-1)^{n}}{n}+\sum_{n=m}^{2N} \frac{(-1)^{n}}{n+1} = \frac{1+(-1)^{m}}{m} - 2\sum_{n=m}^{2N} \frac{(-1)^{n}}{n}. \end{align*} Taking $m=n_{r-1}+1$ and $n=n_r$ in \eqref{equ:tintInductionStep} we get \begin{align*} c_N =\,&- 2\sum_{0<n_1<\dots<n_r<m\le 2N} \frac{(1-(-1)^{n_1})\cdots (1-(-1)^{n_r})(-1)^m}{n_1^{k_1}\cdots n_r^{k_r}m} \\ +\,&2\sum_{j=2}^{k_r} (-1)^{k_r-j} \sum_{0<n_1<\dots<n_r<m\le 2N} \frac{(1-(-1)^{n_1})\cdots (1-(-1)^{n_r})}{n_1^{k_1}\cdots n_{r-1}^{k_{r-1}}n_r^{j}} \\ -\,& 4(-1)^{k_r} \sum_{0<n_1<\dots<n_r<2N} \frac{(1-(-1)^{n_1})\cdots (1-(-1)^{n_{r-1}})}{n_1^{k_1}\cdots n_{r-1}^{k_{r-1}}}
\left(\frac{1}{n_{r-1}+1} - \sum_{n_r=n_{r-1}+1}^{2N} \frac{(-1)^{n_r}}{n_r} \right). \end{align*} Here, when $r=1$ the last line above degenerates to $4(-1)^{k_r} \sum_{n_1=1}^{2N} \frac{(-1)^{n_1}}{n_1}$. Taking $N\to\infty$ and using induction on $r$, we see that the claim for $\int_0^1 t(\bfk_r,1;x)\,dx$ in the theorem follows.
The computation of $\int_0^1 L(\bfk_r,1;x)\,dx$ is completely similar to that of $\int_0^1 t(\bfk_r,1;x)\,dx$. Thus we can get \begin{align*} \int_0^1 L(\bfk_r,1;x)\,dx= \,& \frac{1}{2^r} \sum_{0<n_1<\dots<n_r<m} \frac{(1+(-1)^{n_1})\cdots (1+(-1)^{n_r})(-1)^m}{n_1^{k_1}\cdots n_r^{k_r}m} \\ +\,&\frac{1}{2^r} \sum_{j=2}^{k_r} (-1)^{k_r-j} \sum_{0<n_1<\dots<n_r } \frac{(1+(-1)^{n_1})\cdots (1+(-1)^{n_r})}{n_1^{k_1}\cdots n_{r-1}^{k_{r-1}} n_r^{j}} \\ -\,& \frac{2(-1)^{k_r}}{2^r} \sum_{0<n_1<\dots<n_r} \frac{(1+(-1)^{n_1})\cdots (1+(-1)^{n_{r-1}})(-1)^{n_r}}{n_1^{k_1}\cdots n_{r-1}^{k_{r-1}} n_r}\\ -\,& \frac{2(-1)^{k_r}}{2^r} \sum_{0<n_1<\dots<n_r} \frac{(1+(-1)^{n_1})\cdots (1+(-1)^{n_{r-1}})}{n_1^{k_1}\cdots n_{r-1}^{k_{r-1}}(n_{r-1}+1)}. \end{align*} Here, when $r=1$ the last line above degenerates to $-(-1)^{k_1}$. So by induction on $r$ we see that the claim for $\int_0^1 L(\bfk_r,1;x)\,dx$ is true.
Similarly, if $n=2$ then we can apply the same technique as above to get \begin{equation*}
\int_0^1 \frac{L(k_1,\dotsc,k_r,1;x)}{x^2}\,dx =\int_0^1 \frac{1}{2^{k_1+\dots+k_r+1}} \sum_{0<n_1<\dots<n_r<m} \frac{x^{2m-2}\,dx}{n_1^{k_1}\cdots n_r^{k_r}m} =\frac{\lim_{N\to \infty} d_N}{2^{r+1}} \end{equation*} where \begin{align*} d_N=\,& \sum_{0<n_1<\dots<n_r<m\le N} \frac{2^{r+1}}{(2n_1)^{k_1}\cdots (2n_r)^{k_r}2m(2m-1)} \\ =\,& \sum_{0<n_1<\dots<n_r<m\le 2N} \frac{(1+(-1)^{n_1})\cdots (1+(-1)^{n_r})}{n_1^{k_1}\cdots n_r^{k_r}} \left(\frac{1+(-1)^m}{m-1}-\frac{1+(-1)^m}{m}\right) \\ =\,& \sum_{0<n_1<\dots<n_r<m\le 2N} \frac{(1+(-1)^{n_1})\cdots (1+(-1)^{n_r})}{n_1^{k_1}\cdots n_r^{k_r}} \left[\left(\frac{1-(-1)^m}{m}-\frac{1+(-1)^m}{m}\right) \right. \\ \,& \hskip7cm + \left. \left(\frac{1+(-1)^m}{m-1}-\frac{1-(-1)^m}{m}\right)\right]\\ =\,&- 2\sum_{0<n_1<\dots<n_r<m\le 2N} \frac{(1+(-1)^{n_1})\cdots (1+(-1)^{n_r})(-1)^m}{n_1^{k_1}\cdots n_r^{k_r}m}\\ +\,&\sum_{0<n_1<\dots<n_r<2N} \frac{(1+(-1)^{n_1})\cdots (1+(-1)^{n_r})}{n_1^{k_1}\cdots n_r^{k_r}} \sum_{m=n_r+1}^{2N} \left(\frac{1+(-1)^m}{m-1}-\frac{1-(-1)^m}{m}\right) \\ =\,& -2\sum_{0<n_1<\dots<n_r<m\le 2N} \frac{(1+(-1)^{n_1})\cdots (1+(-1)^{n_r})(-1)^m}{n_1^{k_1}\cdots n_r^{k_r}m}\\ + \,&\sum_{0<n_1<\dots<n_r<2N} \frac{(1+(-1)^{n_1})\cdots (1+(-1)^{n_r})}{n_1^{k_1}\cdots n_r^{k_r}}
\left(\frac{1-(-1)^{n_r}}{n_r}\right) \\ =\,& -2\sum_{0<n_1<\dots<n_r<m\le 2N} \frac{(1+(-1)^{n_1})\cdots (1+(-1)^{n_r})(-1)^m}{n_1^{k_1}\cdots n_r^{k_r}m}\\ \to \,& -2\sum_{\eps_j=\pm1,1\le j\le r} \zeta(k_1,\dotsc,k_r,1;\eps_1,\dotsc,\eps_r,-1) \end{align*} as $N\to\infty$. Hence, \begin{equation} \label{equ:LintInductionStep}
\int_0^1 \frac{L(k_1,\dotsc,k_r,1;x)}{x^2}\,dx =\frac{-1}{2^r}\sum_{\eps_j=\pm1,1\le j\le r} \zeta(k_1,\dotsc,k_r,1;\eps_1,\dotsc,\eps_r,-1). \end{equation}
By exactly the same approach as above, we find that \begin{align} \notag \int_0^1 \frac{t(k_1,\dotsc,k_r,1;x)}{x^2}\,dx =&\,\frac{-1}{2^r}\sum_{0<n_1<\dots<n_r<m } \frac{(1-(-1)^{n_1})\cdots (1-(-1)^{n_r})(-1)^m}{n_1^{k_1}\cdots n_r^{k_r}m}\\ =&\,\frac{-1}{2^r}\sum_{\eps_j=\pm1,1\le j\le r} \eps_1\cdots\eps_r\zeta(k_1,\dotsc,k_r,1;\eps_1,\dotsc,\eps_r,-1). \label{equ:tintoverx2} \end{align}
More generally, for larger values of $n$ we may use the partial fraction technique and an argument similar to the one above to express the integrals in Theorem~\ref{thm:LandtIntegrals} as $\mathbb{Q}$-linear combinations of alternating MZVs. We leave the details to the interested reader. This finishes the proof of the theorem. \end{proof}
\begin{exa} For a positive integer $k>1$, by the computation in the $n=0$ case in the proof of Theorem~\ref{thm:LandtIntegrals} we get \begin{align*} \int_0^1 t(k,1;x)\,dx =\, & \frac12 \big(\zeta(\bar{k},\bar1)-\zeta(k,\bar1) \big) -(-1)^k \log 2+\frac12\sum_{j=2}^k (-1)^{k-j}\big(\zeta(j)-\zeta(\bar{j})\big),\\ \int_0^1 L(k,1;x)\,dx =\, & \frac12 \big(\zeta(\bar{k},\bar1)+\zeta(k,\bar1) \big)-(-1)^k +(-1)^k \log 2+\frac12\sum_{j=2}^k (-1)^{k-j}\big(\zeta(j)+\zeta(\bar{j})\big) \\ =\, & \frac12 \big(\zeta(\bar{k},\bar1)+\zeta(k,\bar1) \big)-(-1)^k +(-1)^k \log 2+\sum_{j=2}^k \frac{(-1)^{k-j}}{2^j} \zeta(j). \end{align*} \end{exa}
\begin{exa} For positive integer $k>1$, we see from \eqref{equ:LintInductionStep} and \eqref{equ:tintoverx2} that \begin{align}\label{equ:Lr=1} \int_0^1 \frac{L(k,1;x)}{x^2}\,dx=&\, -\frac12\big( \zeta(k,\bar1)+\zeta(\bar{k},\bar1)\big),\\ \int_0^1 \frac{t(k,1;x)}{x^2}\,dx=&\, \frac12\big( \zeta(k,\bar1)-\zeta(\bar{k},\bar1)\big). \notag \end{align} Taking $r=2$ in \eqref{equ:LintInductionStep} and \eqref{equ:tintoverx2} we get \begin{align*} \int_0^1 \frac{L(k_1,k_2,1;x)}{x^2}\,dx=-\frac14\big(\zeta(k_1,k_2,\bar1)+ \zeta(k_1,\bar{k_2},\bar1)+\zeta(\bar{k_1},k_2,\bar1)+\zeta(\bar{k_1},\bar{k_2},\bar1)\big),\\ \int_0^1 \frac{t(k_1,k_2,1;x)}{x^2}\,dx=-\frac14\big(\zeta(k_1,k_2,\bar1)- \zeta(k_1,\bar{k_2},\bar1)-\zeta(\bar{k_1},k_2,\bar1)+\zeta(\bar{k_1},\bar{k_2},\bar1)\big). \end{align*} \end{exa}
\begin{re} It is possible to give an induction proof of Theorem~\ref{thm:LandtIntegrals} using the regularized values of MMVs as defined by \cite[Definition 3.2]{YuanZh2014b}. However, the general formula for the integral of $L(\bfk,1;x)$ would be implicit. To illustrate the idea for computing $\int_0^1 t(\bfk,1;x)\,dx$, we consider the case $\bfk=k\in \mathbb{N}$. Notice that \begin{align*} \sum_{0<m<n<N} \frac{1}{(2m-1)(2n-1)2n}=&\, \sum_{\substack{0<m<n<2N\\ m,n\ \text{odd}}} \frac{1}{m}\left(\frac{1}{n}-\frac{1}{n+1} \right) \\ =&\, \sum_{\substack{0<m<n<2N\\ m,n\ \text{odd}}} \frac{1}{mn} - \sum_{\substack{0<m<n\le 2N\\ m \ \text{odd}, n \ \text{even}}} \frac{1}{mn}+ \sum_{\substack{0<m<2N\\ m \ \text{odd} }} \frac{1}{m(m+1)}. \end{align*} By using regularized values, we see that \begin{align*} \int_0^1 t(1,1;x)\,dx =\, & \sum_{0<m<n} \frac{1}{(2m-1)(2n-1)2n}=\frac14\big(M_*(\breve{1},\breve{1})-M_*(\breve{1},1)\big) +\log 2. \end{align*} We have \begin{equation*} M_*(\breve{1},\breve{1})=\frac12\big(M_*(\breve{1})^2-2M_*(\breve{2})\big)=\frac12\big((T+2\log 2)^2-2M(\breve{2})\big). \end{equation*} Since $2M(\breve{2})=3\zeta(2)$, \begin{equation*} M_\shuffle(\breve{1},\breve{1})= \frac12\rho\big((T+2\log 2)^2-3\zeta(2)\big)=\frac12\big((T+\log 2)^2-2\zeta(2)\big) \end{equation*} by \cite[Theorem 2.7]{Z2016}. On the other hand, \begin{equation*} \rho\big(M_*(\breve{1},1)\big)=M_\shuffle(\breve{1},1)= \frac12 M_\shuffle(\breve{1})^2=\frac12(T+\log 2)^2. \end{equation*} Since $\rho$ is an $\mathbb{R}$-linear map, \begin{equation*} \int_0^1 t(1,1;x)\,dx =\log 2+\frac14\rho\big(M_*(\breve{1},\breve{1})-M_*(\breve{1},1)\big)=\log 2-\frac14\zeta(2), \end{equation*} which agrees with \eqref{equ:tIntr=1}. \end{re}
Similarly, by considering some related integrals we can establish many relations involving multiple $t$-star values. For example, from \eqref{d3} we have \begin{align*} \int_0^1 x^{2n-2} L(k_1,k_2;x)dx&=\sum_{j=1}^{k_2-1} \frac{(-1)^{j-1}}{(2n-1)^j}L(k_1,k_2+1-j)+\frac{(-1)^{k_2}}{(2n-1)^{k_2}} \int_0^1 \frac{L(k_1,1;x)}{x^2}dx\\ &\quad-\frac{(-1)^{k_2}}{(2n-1)^{k_2}} \sum_{j=1}^{k_1-1} (-1)^{j-1} L(k_1+1-j)t^\star_n(j)\\ &\quad-\frac{(-1)^{k_1+k_2}}{(2n-1)^{k_2}} \log(2)t^\star_n(k_1)+\frac{(-1)^{k_1+k_2}}{(2n-1)^{k_2}}t^\star_n(1,k_1). \end{align*} Hence, considering the integral $\int_0^1 \frac{{\rm A}(l;x)L(k_1,k_2;x)}{x}\,dx$ or $\int_0^1 \frac{t(l;x)L(k_1,k_2;x)}{x}\,dx$, we can get the following theorem. \begin{thm} For positive integers $k_1,k_2$ and $l$, \begin{align} &\sum_{j=1}^{k_2-1} (-1)^{j-1} L(k_1,k_2+1-j)T(l+j)+(-1)^{k_2}T(k_2+l) \int_0^1 \frac{L(k_1,1;x)}{x^2}\,dx\nonumber\\ &-(-1)^{k_2} 2\sum_{j=1}^{k_1-1}(-1)^{j-1} L(k_1+1-j)t^\star(j,k_2+l)\nonumber\\ &-(-1)^{k_1+k_2}2\log(2)t^\star(k_1,k_2+l)+(-1)^{k_1+k_2}2t^\star(1,k_1,k_2+l)\nonumber\\ &=\frac{1}{2^{k_1+k_2}} \sum_{j=1}^{l-1} \frac{(-1)^{j-1}}{2^j} T(l+1-j)\zeta(k_1,k_2+j)-\frac{(-1)^l}{2^{k_1+k_2+l}} \sum\limits_{n=1}^\infty \frac{\zeta_{n-1}(k_1)T_n(1)}{n^{k_2+l}}, \end{align} where $\int_0^1 \frac{L(k_1,1;x)}{x^2}\,dx$ is given by \eqref{equ:Lr=1}. \end{thm}
{\bf Acknowledgments.} The first author is supported by the Scientific Research Foundation for Scholars of Anhui Normal University and the University Natural Science Research Project of Anhui Province (Grant No. KJ2020A0057).
\end{document}
\begin{document}
\title{A real algebra perspective on multivariate tight wavelet frames}
\author{
Maria Charina\footnoteremember{myfootnote}{Fakult\"at f\"ur Mathematik, TU Dortmund, D--44221 Dortmund, Germany, [email protected]}, \ Mihai Putinar \footnote{Department of Mathematics, University of California at Santa Barbara, CA 93106-3080, USA},\\
Claus Scheiderer \footnote{Fachbereich Mathematik und Statistik, Universit\"at Konstanz, D--78457 Konstanz, Germany} \ and \ Joachim St\"ockler \footnoterecall{myfootnote} }
\maketitle
\begin{abstract} Recent results from real algebraic geometry and the theory of polynomial optimization are related in a new framework to the existence question of multivariate tight wavelet frames whose generators have at least one vanishing moment. Namely, several equivalent formulations of the so-called Unitary Extension Principle (UEP) from \cite{RS95} are interpreted in terms of hermitian sums of squares of certain nonnegative trigonometric polynomials and in terms of semi-definite programming. The latter together with the results in \cite{LP,sch:mz} answer affirmatively the long standing open question of the existence of such tight wavelet frames in dimension $d=2$; we also provide numerically efficient methods for checking their existence and actual construction in any dimension. We exhibit a class of counterexamples in dimension $d=3$ showing that, in general, the UEP property is not sufficient for the existence of tight wavelet frames. On the other hand we provide stronger sufficient conditions for the existence of tight wavelet frames in dimension $d \ge 3$ and illustrate our results by several examples. \end{abstract}
\noindent {\bf Keywords:} multivariate wavelet frame, real algebraic geometry, torus, hermitian square, polynomial optimization, trigonometric polynomial.\\
\noindent{\bf Math. Sci. Classification 2000:} 65T60, 12D15, 90C26, 90C22.
\section{Introduction}
Several fundamental results due to two groups of authors (I. Daubechies, B. Han, A. Ron, Z. Shen \cite{DHRS03} and, respectively, C. Chui, W. He, J. St\"ockler \cite{CHS04, CHS05}) lie at the foundation of the theory of tight wavelet frames and also provide their characterizations. These characterizations allow one, on the one hand, to establish the connection between frame constructions and the challenging algebraic problem of existence of sums of squares representations (sos) of non-negative trigonometric polynomials. On the other hand, the same characterizations provide methods, however unsatisfactory from the practical point of view, for constructing tight wavelet frames.
The existence and effective
construction of tight frames, together with good estimates on the number of
frame generators, are still open problems. One can easily be discouraged by a general result by Scheiderer in \cite{S99}, which implies that not all non-negative trigonometric polynomials in dimension $d \ge 3$ possess sos representations. However, our main focus is on dimension $d=2$ and on special non-negative trigonometric polynomials. This motivates us to pursue the issue of existence of sos representations further.
It has been observed in \cite{CoifmanDonoho1995} that redundancy of wavelet frames has advantages for applications in signal denoising: if the data is redundant, then losing some data during transmission does not necessarily affect the reconstruction of the original signal. Shen et al. \cite{Shen2011} use the tight wavelet frame decomposition to recover a clear image from a single motion-blurred image. In \cite{JoBra} the authors show how to use multiresolution wavelet filters $p$ and $q_j$ to construct irreducible representations for the Cuntz algebra and, conversely, how to recover wavelet filters from these representations. Wavelet and frame decompositions for subdivision surfaces are one of the basic tools, e.g., for progressive compression of 3D meshes or interactive surface viewing \cite{CS07,KSS,VisualComp}. Adaptive numerical methods based on wavelet frame discretizations have produced very promising results \cite{CDD1,CDD2} when applied to a large class of operator equations, in particular, PDEs and integral equations. We list some existing constructions of compactly supported MRA wavelet tight frames of $L_2({\mathbb R}^d)$ \cite{CH,CHS01,DHRS03,HanMo2005,LS06,RS95, Selesnick2001} that employ the Unitary Extension Principle. For any dimension and in the case of a general expansive dilation matrix, the existence of tight wavelet frames is always ensured by \cite{CCH,CS} if the coefficients of the associated refinement equation are real and nonnegative. A few other compactly supported multi-wavelet tight frames are circulating nowadays, see \cite{CCH, CS07,GGL}.
The main goal of this paper is to relate the existence of multivariate tight wavelet frames to recent advances in
real algebraic geometry and the theory of moment problems. The starting point of our study is the so-called Unitary Extension Principle (UEP) from \cite{RS95}, a special case of the above-mentioned characterizations in \cite{CHS04, CHS05, DHRS03}. In section \ref{sec:UEP} we first list several equivalent well-known formulations of the UEP from the wavelet and frame literature, but state them in the novel algebraic terminology. It has already been observed in \cite{LS06} that a sufficient condition for the existence of tight wavelet frames satisfying UEP can be expressed in terms of sums of squares representations of a certain nonnegative trigonometric polynomial. In \cite[Theorem 3.4]{LS06}, the authors also provide an algorithm for the actual construction of the corresponding frame generators. In subsection \ref{subsec:sumsofsquares}, we extend the result of \cite[Theorem 3.4]{LS06} and obtain another equivalent formulation of UEP, which, combined with the results from \cite{sch:mz}, guarantees the existence of UEP tight wavelet frames in the two-dimensional case, see subsection \ref{subsec:existence}. We also exhibit there a class of three-dimensional counterexamples showing that, in general, the UEP conditions are not sufficient for the existence of tight wavelet frames. In those examples, however, we make a rather strong assumption on the underlying refinable function, which leaves hope that in certain other cases we will be able to show the existence of such tight wavelet frames. The novel, purely algebraic equivalent formulation of the UEP in Theorem \ref{th:UEP_hermitian} is aimed at better understanding the structure of tight wavelet frames. The constructive method in \cite[Theorem 3.4]{LS06} yields frame generators of support twice as large as that of the underlying refinable function. Theorem \ref{th:UEP_hermitian} leads to a numerically efficient method for frame constructions with no such restriction on the size of their support. Namely, in subsection \ref{subsec:semi-definite}, we show how to reformulate Theorem \ref{th:UEP_hermitian} equivalently as a problem of semi-definite programming. This establishes a connection between constructions of tight wavelet frames and moment problems, see \cite{HP, lasserrebook, LP} for details.
In section \ref{subsec:sufficient}, we give sufficient conditions for the existence of tight wavelet frames in dimension $d \ge 3$ and illustrate our results by several examples of three-dimensional subdivision. In section \ref{subsec:construction}, we discuss an elegant method that sometimes simplifies the frame construction and allows to determine the frame generators analytically. We illustrate this method on the example of the so-called butterfly scheme from \cite{GDL}.
{\bf Acknowledgements.} The authors are grateful to the Mathematical Institute at Oberwolfach for offering optimal working conditions through the Research In Pairs program in 2011. The second author was partially supported by the National Science Foundation Grant DMS-10-01071.
\section{Background and Notation}
\subsection{Real algebraic geometry} Let $d\in{\mathbb N}$, let $T$ denote the $d$-dimensional anisotropic real (algebraic) torus, and let ${\mathbb R}[T]$ denote the (real) affine coordinate ring of $T$ $$
{\mathbb R}[T]\>=\>{\mathbb R}\bigl[x_j,\,y_j\colon j=1,\dots,d\bigr]\big/
\bigl(x_j^2+y_j^2-1\colon j=1,\dots,d\bigr). $$ In other words, $T$ is the subset of ${\mathbb R}^{2d}$ defined by the equations $x_j^2+y_j^2=1, 1 \leq j \leq d,$ and endowed with additional algebraic structure which will become apparent in the following pages. Rather than working with the above description, we will mostly employ the complexification of $T$, together with its affine coordinate ring ${\mathbb C}[T]={\mathbb R}[T]\otimes_{\mathbb R}{\mathbb C}$. This coordinate ring comes with a natural involution $*$ on ${\mathbb C}[T]$, induced by complex conjugation. Namely, $$
{\mathbb C}[T]\>=\>{\mathbb C}[z_1^{\pm1},\dots,z_d^{\pm1}] $$ is the ring of complex Laurent polynomials, and $*$ sends $z_j$ to $z_j^{-1}$ and is complex conjugation on coefficients. The real coordinate ring ${\mathbb R}[T]$ consists of the $*$-invariant polynomials in ${\mathbb C}[T]$, i.e. $\displaystyle p=\sum_{\alpha \in {\mathbb Z}^d} p(\alpha) z^\alpha \in {\mathbb R}[T]$ if and only if $p(-\alpha)=\overline{p(\alpha)}$.
The group of ${\mathbb C}$-points of $T$ is $T({\mathbb C})=({\mathbb C}^*)^d={\mathbb C}^*\times\cdots \times{\mathbb C}^*$. In this paper we often denote the group of ${\mathbb R}$-points of $T$ by ${\mathbb T}^d$. Therefore, $$
{\mathbb T}^d\>=\>T({\mathbb R})\>=\>\{(z_1,\dots,z_d)\in({\mathbb C}^*)^d\colon|z_1|=\cdots=
|z_d|=1\} $$ is the direct product of $d$ copies of the circle group $S^1$. The neutral element of this group we denote by ${\boldsymbol{1}}=(1,\dots,1)$.
Via the exponential map $\hbox{exp}$, the coordinate ring ${\mathbb C}[T]={\mathbb C}[z_1^{\pm1}, \dots,z_d^{\pm1}]$ of $T$ is identified with the algebra of (complex) trigonometric polynomials. Namely, $\hbox{exp}$ identifies $(z_1, \dots, z_d)$ with ${\boldsymbol{e}}^{-i \omega}:=( e^{-i\omega_1}, \dots, e^{-i\omega_d})$. In the same way, the real coordinate ring ${\mathbb R}[T]$ is identified with the ring of real trigonometric polynomials, i.e.\ polynomials with real coefficients in $\cos(\omega_j)$ and $\sin(\omega_j)$, $j=1,\dots,d$.
Let $M\in{\mathbb Z}^{d\times d}$ be a matrix with $\det(M)\ne0$, and write
$m:=|\det M|$. The finite abelian group \begin{equation}\label{def:G} G:=2\pi M^{-T}{\mathbb Z}^d/ 2\pi {\mathbb Z}^d \end{equation} is (via exp) a subgroup of ${\mathbb T}^d=T({\mathbb R})$. It is isomorphic to
${\mathbb Z}^d/M^T{\mathbb Z}^d$ and has order $|G|=m$. Its character group is $G'= {\mathbb Z}^d/ M{\mathbb Z}^d$, via the natural pairing $$
G\times G'\to{\mathbb C}^*,\quad\bil\sigma\chi=e^{i\sigma\cdot\chi},
\quad \sigma\in G, \quad \chi\in G'. $$ Here $\sigma\cdot\chi$ is the ordinary inner product on ${\mathbb R}^d$, and $\bil\sigma \chi$ is a root of unity of order dividing $m$. Note that the group $G$ acts on the coordinate ring ${\mathbb C}[T]$ via multiplication on the torus \begin{equation}\label{def:psigma}
p\mapsto p^\sigma({\boldsymbol{e}}^{-i\omega}):= p({\boldsymbol{e}}^{-i(\omega+\sigma)}), \quad \sigma\in
G, \quad \omega \in {\mathbb R}^d. \end{equation} The group action commutes with the involution~$*$, that is, $(p^*) ^\sigma=(p^\sigma)^*$ holds for $p\in{\mathbb C}[T]$ and $\sigma\in G$.
From the action of the group $G$ we get an associated direct sum decomposition of ${\mathbb C}[T]$ into the eigenspaces of this action $$
{\mathbb C}[T]\>=\>\bigoplus_{\chi\in G'}{\mathbb C}[T]_\chi, $$ where ${\mathbb C}[T]_\chi$ consists of all $p\in{\mathbb C}[T]$ satisfying $p^\sigma= \bil\sigma\chi\,p$ for all $\sigma\in G$. For $\chi\in G'$ and $p\in {\mathbb C}[T]$, we denote by $p_\chi$ the weight $\chi$ isotypical component of~$p$. Thus, $$
p_\chi=\frac1m\sum_{\sigma\in G}\bil\sigma\chi\,p^{\sigma} $$ lies in ${\mathbb C}[T]_\chi$, and we have $\displaystyle p=\sum_{\chi\in G'}p_\chi$. For every $\chi\in G'$, we choose a lift $\alpha_\chi\in{\mathbb Z}^d$ such that \begin{equation}\label{def:p_polyphase} \tilde p_\chi:=z^{-\alpha_\chi}p_\chi \end{equation}
is $G$-invariant. The components $\tilde p_\chi$ are called
{\em polyphase components} of
$p$, see
\cite{StrangNguyen}.
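To make the grouping into isotypical components concrete, the following small Python sketch (our own illustration; the helper names and the quincunx example data are ours) sorts the coefficients of a Laurent polynomial $p=\sum_\alpha p(\alpha)z^\alpha$ according to the class of $\alpha$ modulo $M{\mathbb Z}^d$, which is exactly the decomposition into the components $p_\chi$.
\begin{verbatim}
import numpy as np

def residue(alpha, M, Minv):
    # canonical representative of the class alpha + M Z^d: alpha - M*floor(M^{-1} alpha)
    a = np.asarray(alpha, dtype=float)
    k = np.floor(Minv @ a + 1e-9)          # small shift guards against round-off
    rep = a - np.asarray(M, dtype=float) @ k
    return tuple(int(round(r)) for r in rep)

def isotypical_components(coeffs, M):
    # group the coefficients {alpha: p(alpha)} of p into the components p_chi
    Minv = np.linalg.inv(np.asarray(M, dtype=float))
    comps = {}
    for alpha, c in coeffs.items():
        chi = residue(alpha, M, Minv)
        comps.setdefault(chi, {})[alpha] = c
    return comps

# Example: d = 2 and the quincunx matrix M; here m = |det M| = 2, so two classes occur.
M = [[1, 1], [1, -1]]
p = {(0, 0): 0.5, (1, 0): 0.25, (0, 1): 0.25}
for chi, comp in isotypical_components(p, M).items():
    print(chi, comp)
\end{verbatim}
The choice of representatives corresponds to a choice of lifts $\alpha_\chi$ as above.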
\subsection{Wavelet tight frames} A wavelet tight frame is a structured system of functions, generated by applying translations and dilations to a finite set of functions $\psi_j \in L_2({\mathbb R}^d)$, $ 1 \le j \le N$. More precisely, let $M\in{\mathbb Z}^{d\times d}$ be a general expansive matrix, i.e.
$\rho(M^{-1})<1$, or equivalently, all eigenvalues of $M$ are strictly larger than $1$ in modulus, and let $m=|\det M|$.
We define translation operators $T_\alpha$ on $L_2({\mathbb R}^d)$ by $T_\alpha f=f(\cdot-\alpha) $, $\alpha\in {\mathbb Z}^d$, and dilation (homothety) $U_M$ by $U_M f=m^{1/2} f(M\cdot)$. Note that these operators are isometries on $L_2({\mathbb R}^d)$.
\begin{Definition} \label{def:wavelet_tight_frame} Let $\{\psi_j \ : \ 1 \le j \le N \}\subseteq L_2({\mathbb R}^d)$. The family $$
\Psi=\{U_M^\ell T_\alpha \psi_j \ : \ 1\le j \le N, \
\ell \in {\mathbb Z}, \ \alpha \in {\mathbb Z}^d \} $$ is a wavelet tight frame of $L_2({\mathbb R}^d)$, if \begin{equation}\label{def:parseval}
\|f\|^2_{L_2}=\sum_{{1 \le j \le N, \ell \in {\mathbb Z},} \atop
\alpha \in {\mathbb Z}^d} |\langle f,U_M^\ell T_\alpha \psi_j\rangle|^2 \quad \hbox{for all} \quad f \in L_2({\mathbb R}^d). \end{equation} \end{Definition}
The foundation for the construction of a multiresolution wavelet basis or a wavelet tight frame is a compactly supported function $\phi \in L_2({\mathbb R}^d)$ with the following properties.
\begin{itemize}
\item[(i)] $\phi$ is refinable, i.e. there exists a finitely
supported sequence $p=\left( p(\alpha)\right)_{\alpha \in {\mathbb Z}^d}$, $p(\alpha)\in {\mathbb C}$, such that
$\phi$ satisfies
\begin{equation} \label{eq:refinement_equation}
\phi(x)= m^{1/2} \sum_{\alpha \in {\mathbb Z}^d} p(\alpha) U_M T_\alpha \phi(x),
\quad x \in {\mathbb R}^d.
\end{equation}
Taking the Fourier transform
$$
\widehat{\phi}(\omega)=\int_{{\mathbb R}^d} \phi(x) e^{-i\omega \cdot x}
dx
$$
of both sides of \eqref{eq:refinement_equation} leads to its equivalent form
\begin{equation} \label{eq:F_refinement_equation}
\widehat{\phi}(M^T\omega)=p({\boldsymbol{e}}^{-i\omega})
\widehat{\phi}(\omega), \quad
\omega \in {\mathbb R}^d,
\end{equation}
where the trigonometric polynomial $p \in {\mathbb C}[T]$ is given by
$$
p({\boldsymbol{e}}^{-i\omega})= \sum_{\alpha \in {\mathbb Z}^d} p(\alpha) e^{-i\alpha\omega},
\qquad \omega\in{\mathbb R}^d.
$$ The isotypical components $p_\chi$ of $p$ are given by \begin{equation}\label{def:isotypical}
p_\chi({\boldsymbol{e}}^{-i\omega})= \sum_{\alpha\equiv \chi~{\rm mod}~M{\mathbb Z}^d} p(\alpha)
e^{-i\alpha\omega},\qquad \chi\in G'. \end{equation}
\item[(ii)] One usually assumes that $\widehat{\phi}(0)=1$ by proper normalization. This
assumption on $\widehat{\phi}$ and \eqref{eq:F_refinement_equation} allow us to
read all properties of $\phi$
from the polynomial $p$, since the refinement equation
\eqref{eq:F_refinement_equation} then implies
$$
\widehat{\phi}(\omega)=\prod_{\ell=1}^\infty p({\boldsymbol{e}}^{-i(M^T)^{-\ell} \omega}),
\quad \omega \in {\mathbb R}^d.
$$ The uniform convergence of this infinite product on compact sets is guaranteed by $p({\boldsymbol{1}})=1$; a short numerical illustration is given right after this list.
\item[(iii)] One of the approximation properties of $\phi$ is the requirement that the translates $T_\alpha \phi$, $\alpha\in{\mathbb Z}^d$, form a partition of unity. Then \begin{equation}\label{identity:isotypical_at_one}
p_\chi({\boldsymbol{1}})=m^{-1} ,\qquad \chi\in G'. \end{equation} \end{itemize}
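To illustrate item (ii), the following minimal Python sketch (our own illustration, not part of the theory) checks numerically, for the Haar data $d=1$, $M=2$, $p(z)=(1+z)/2$, that a truncation of the infinite product reproduces the Fourier transform $\widehat{\phi}(\omega)=(1-e^{-i\omega})/(i\omega)$ of the scaling function $\phi=\chi_{[0,1)}$.
\begin{verbatim}
import numpy as np

# Haar data: d = 1, M = 2, p(z) = (1+z)/2; the refinable function is the
# indicator of [0,1), whose Fourier transform is (1 - e^{-i w}) / (i w).
p = lambda z: (1 + z) / 2

def phi_hat(w, L=40):
    # truncated infinite product  prod_{l=1}^{L} p(e^{-i w / 2^l})
    val = 1.0 + 0.0j
    for l in range(1, L + 1):
        val *= p(np.exp(-1j * w / 2**l))
    return val

for w in (0.7, 2.3, 5.1):
    exact = (1 - np.exp(-1j * w)) / (1j * w)
    assert abs(phi_hat(w) - exact) < 1e-10
print("truncated product matches the Haar scaling function's Fourier transform")
\end{verbatim}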
\noindent The functions $\psi_j$, $j=1, \dots, N$, are assumed to be of the form \begin{equation}\label{def:psij}
\widehat{\psi}_j(M^T\omega)=q_j({\boldsymbol{e}}^{-i\omega}) \widehat{\phi}(\omega), \end{equation} where $q_j \in {\mathbb C}[T]$. These assumptions imply that $\psi_{j}$ have compact support and, as in \eqref{eq:refinement_equation}, are finite linear combinations of $U_M T_\alpha\phi$.
\section{Equivalent formulations of UEP} \label{sec:UEP}
In this section we first recall the method called UEP (unitary extension principle) that allows us to determine the trigonometric polynomials $q_j$, $1\le j \le N$, such that the family $\Psi$ in Definition \ref{def:wavelet_tight_frame} is a wavelet tight frame of $L_2({\mathbb R}^d)$, see \cite{DHRS03,RS95}.
We also give several equivalent formulations of UEP to link frame constructions with problems in algebraic geometry and semi-definite programming.
We assume throughout this section that $\phi\in L_2({\mathbb R}^d)$ is a refinable function with respect to the expansive matrix $M\in {\mathbb Z}^{d\times d}$, with trigonometric polynomial $p$ in \eqref{eq:F_refinement_equation} and $\hat \phi(0)=1$, and
the functions $\psi_j$ are defined as in \eqref{def:psij}. We also make use of the definitions \eqref{def:G} for $G$ and \eqref{def:psigma} for $p^\sigma$, $\sigma\in G$.
\subsection{Formulations of UEP in wavelet frame literature}
Most formulations of the UEP are given in terms of identities for trigonometric polynomials, see \cite{DHRS03,RS95}.
\begin{Theorem} \label{th:UEP} (UEP) Let the trigonometric polynomial $p \in {\mathbb C}[T]$ satisfy $p({\boldsymbol{1}})=1$. If the trigonometric polynomials $q_j \in {\mathbb C}[T]$, $1 \le j \le N$, satisfy the identities \begin{equation}\label{id:UEP}
\delta_{\sigma,\tau}-p^{\sigma*}p^{\tau}=
\sum_{j=1}^N q_j^{\sigma*}q_j^{\tau},\qquad
\sigma,\tau \in G, \end{equation} then the family $\Psi$ is a wavelet tight frame of $L_2({\mathbb R}^d)$. \end{Theorem}
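The identities \eqref{id:UEP} are easy to test numerically for concrete masks. The following Python sketch (a sanity check of ours, not part of the theory) verifies them for the Haar data $d=1$, $M=2$, $p(z)=(1+z)/2$, $q_1(z)=(1-z)/2$, where the action of $G\simeq\{0,\pi\}$ amounts to $z\mapsto\pm z$ and $p^{\sigma*}$ is evaluated on the torus as the complex conjugate of $p^\sigma$.
\begin{verbatim}
import numpy as np

p = lambda z: (1 + z) / 2          # Haar low-pass symbol
q = lambda z: (1 - z) / 2          # Haar high-pass symbol
shifts = [1.0, -1.0]               # action of G = {0, pi}: z -> z and z -> -z

rng = np.random.default_rng(0)
for omega in rng.uniform(0, 2 * np.pi, 5):
    z = np.exp(-1j * omega)
    for s, zs in enumerate(shifts):
        for t, zt in enumerate(shifts):
            lhs = (1.0 if s == t else 0.0) - np.conj(p(zs * z)) * p(zt * z)
            rhs = np.conj(q(zs * z)) * q(zt * z)
            assert abs(lhs - rhs) < 1e-12
print("UEP identities verified for the Haar mask at 5 random points")
\end{verbatim}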
We next state another equivalent formulation of the Unitary Extension Principle in Theorem \ref{th:UEP} in terms of the isotypical components $p_\chi$, $q_{j,\chi}$ of the polynomials $p$, $q_j$. In the wavelet and frame literature, see e.g. \cite{StrangNguyen}, this equivalent formulation of UEP is usually given in terms of the polyphase components in \eqref{def:p_polyphase} of $p$ and $q_j$. The proof we present here serves as an illustration of the algebraic structure behind wavelet and tight wavelet frame constructions.
\begin{Theorem} \label{th:UEP_polyphase} Let the trigonometric polynomial $p \in {\mathbb C}[T]$ satisfy $p({\boldsymbol{1}})=1$. The identities \eqref{id:UEP} are equivalent to \begin{equation}\label{id:equiv_UEP_1} \begin{array}{rcl}
&& \displaystyle p_{\chi}^* p_\chi+\sum_{j=1}^N q_{j,\chi}^* q_{j,\chi}=m^{-1},
\quad \chi \in G', \\[12pt]
&& \displaystyle p_{\chi}^* p_{\eta}+\sum_{j=1}^N q_{j,\chi}^* q_{j,\eta}=0\qquad
\chi,\eta \in G', \quad \chi \not=\eta.
\end{array} \end{equation} \end{Theorem}
\begin{proof}
Recall that
$ \displaystyle{
p=\sum_{\chi \in G'} p_{\chi}}$ and $ \displaystyle{
p_\chi=m^{-1} \sum_{\sigma \in G}\langle\sigma, \chi \rangle p^{\sigma}}$ imply $$
p^*=\sum_{\chi \in G'} p^*_{\chi} \quad \hbox{and} \quad p^{\sigma*}=\sum_{\chi \in G'}
(p^*_{\chi})^\sigma = \sum_{\chi \in G'} \langle\sigma, \chi\rangle
p^*_{\chi}. $$
Thus, with $\eta'=\chi+\eta$ in the next identity, we get $$
p^{\sigma *} p = \sum_{\chi, \eta' \in G'} \langle \sigma, \chi\rangle p^*_\chi
p_{\eta'}=\sum_{\eta \in G'} \sum_{\chi \in G'}
\langle\sigma, \chi\rangle p^*_\chi p_{\chi+\eta}. $$ Note that the isotypical components of $p^{\sigma *} p$ are given by \begin{equation}\label{id:thmUEPpolyphase}
(p^{\sigma *} p)_\eta = \sum_{\chi \in G'}
\langle\sigma, \chi\rangle p^*_\chi p_{\chi+\eta},\qquad \eta\in G'. \end{equation} Similarly for $q_j$. Therefore, we get that the identities \eqref{id:UEP} for $\tau=0$ are equivalent to \begin{equation} \label{id:equiv_UEP_1_aux}
\sum_{\chi \in G'} \langle\sigma, \chi\rangle \left( p^*_\chi p_{\chi+\eta}+ \sum_{j=1}^N q^*_{j,\chi} q_{j,\chi+\eta}
\right)=\delta_{\sigma,0} \delta_{\eta,0}, \quad \eta \in G', \quad \sigma \in G. \end{equation} Note that the identities \eqref{id:UEP} for $\tau \in G$ are redundant and it suffices to consider only those for $\tau=0$. For fixed $\eta \in G'$, \eqref{id:equiv_UEP_1_aux} is a system of $m$ equations indexed by $\sigma \in G$ in $m$ unknowns $\displaystyle p_\chi^* p_{\chi+\eta}+ \sum_{j=1}^N q^*_{j,\chi} q_{j,\chi+\eta}$, $\chi \in G'$. The corresponding system matrix $A=(\langle \sigma, \chi \rangle)_{\sigma \in G, \chi \in G'}$ is invertible and $\displaystyle A^{-1}=m^{-1} A^*$. Thus, \eqref{id:equiv_UEP_1_aux} is equivalent to \eqref{id:equiv_UEP_1}.
\end{proof}
It is easy to see that the identities in Theorem \ref{th:UEP} and in Theorem \ref{th:UEP_polyphase} have equivalent matrix formulations.
\begin{Theorem}\label{th:matrixUEP} The identities \eqref{id:UEP} are equivalent to \begin{equation}\label{identity:UEPmatrixform}
U^*U=I_{m} \end{equation} with $$
U^*=\left[
\begin{matrix} p^{\sigma*}& q_1^{\sigma*} &\cdots& q_N^{\sigma*}\end{matrix}
\right]_{\sigma\in G} \in M_{m \times (N+1)}({\mathbb C}[T]), $$ and are also equivalent to \begin{equation}\label{identity:UEPmatrixformpoly}
\widetilde{U}^*\widetilde{U}=m^{-1}I_{m}, \end{equation} with $$
\widetilde{U}^*=\left[
\begin{matrix} \tilde{p}^*_{\chi}&
\tilde{q}_{1,\chi}^{*} &\cdots& \tilde{q}_{N,\chi}^{*}\end{matrix}
\right]_{\chi \in G'} \in M_{m \times (N+1)}({\mathbb C}[T]). $$ \end{Theorem}
\begin{Remark} The identities \eqref{identity:UEPmatrixform} and \eqref{identity:UEPmatrixformpoly} connect the construction of $q_1,\ldots,q_N$ to the following matrix extension problem: extend the first row $(p^\sigma)_{\sigma\in G}$ of the polynomial matrix $U$ (or $(\widetilde p_\chi)_{\chi\in G'}$ of $\widetilde U$) to a rectangular $(N+1)\times m$ polynomial matrix satisfying \eqref{identity:UEPmatrixform} (or \eqref{identity:UEPmatrixformpoly}). There are two major differences between the identities \eqref{identity:UEPmatrixform} and \eqref{identity:UEPmatrixformpoly}. While the first column $(p,q_1,\ldots,q_N)$ of $U$ determines all other columns of $U$ as well, the columns of the matrix $\widetilde U$ can be chosen independently, see \cite{StrangNguyen}. All entries of $\widetilde U$,
however, are forced to be $G$-invariant trigonometric polynomials. \end{Remark}
The following simple consequence of the above results provides a necessary condition for the existence of UEP tight wavelet frames.
\begin{Corollary}\label{cor:UEP_matrix} Let the trigonometric polynomial $p \in {\mathbb C}[T]$ satisfy $p({\boldsymbol{1}})=1$. For the existence of trigonometric polynomials $q_j$ satisfying \eqref{id:UEP}, it is necessary that the sub-QMF condition \begin{equation}\label{id:subQMF} 1-\sum_{\sigma\in G}p^{\sigma*}p^\sigma\>\ge\>0 \end{equation} holds on ${\mathbb T}^d$. In particular, it is necessary that $1-p^*p$ is non-negative on ${\mathbb T}^d$. \end{Corollary}
Next, we give an example of a trigonometric polynomial $p$ satisfying $p({\boldsymbol{1}})=1$, but for which the sub-QMF condition \eqref{id:subQMF} fails, i.e., $1-\sum_{\sigma\in G}p^{\sigma*}p^{\sigma}$ is negative for some $\omega \in {\mathbb R}^3$.
\begin{Example} Consider \begin{eqnarray*}
p(z_1,z_2,z_3)&=&6z_1z_2z_3 \left(\frac{1+z_1}{2}\right)^2\left(\frac{1+z_2}{2}\right)^2\left(\frac{1+z_3}{2}\right)^2 \left(\frac{1+z_1z_2z_3}{2}\right)^2 - \\ && \frac{5}{4}z_1 \left(\frac{1+z_1}{2}\right)\left(\frac{1+z_2}{2}\right)^3\left(\frac{1+z_3}{2}\right)^3 \left(\frac{1+z_1z_2z_3}{2}\right)^3 -\\ && \frac{5}{4}z_2 \left(\frac{1+z_1}{2}\right)^3\left(\frac{1+z_2}{2}\right)\left(\frac{1+z_3}{2}\right)^3 \left(\frac{1+z_1z_2z_3}{2}\right)^3 -\\ && \frac{5}{4}z_3 \left(\frac{1+z_1}{2}\right)^3\left(\frac{1+z_2}{2}\right)^3\left(\frac{1+z_3}{2}\right) \left(\frac{1+z_1z_2z_3}{2}\right)^3 -\\ && \frac{5}{4}z_1 z_2z_3\left(\frac{1+z_1}{2}\right)^3\left(\frac{1+z_2}{2}\right)^3\left(\frac{1+z_3}{2}\right)^3 \left(\frac{1+z_1z_2z_3}{2}\right). \end{eqnarray*} The associated refinable function is continuous as the corresponding subdivision scheme is uniformly convergent, but $p$ does not satisfy the sub-QMF condition, as $$
1-\sum_{\sigma\in G}|p^\sigma({\boldsymbol{e}}^{-i \omega})|^2<0 \quad \hbox{for} \quad
\omega=\left( \frac{\pi}{6},0,0\right). $$ \end{Example}
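This failure is easy to confirm numerically. The following Python sketch is our own illustration; it assumes the dyadic dilation $M=2I_3$, so that $G\simeq\{0,\pi\}^3$, and evaluates $1-\sum_{\sigma\in G}|p^\sigma({\boldsymbol{e}}^{-i\omega})|^2$ at $\omega=(\pi/6,0,0)$, where it prints a negative value.
\begin{verbatim}
import numpy as np
from itertools import product

def p(z1, z2, z3):
    a1, a2, a3 = (1 + z1) / 2, (1 + z2) / 2, (1 + z3) / 2
    a4 = (1 + z1 * z2 * z3) / 2
    return (6 * z1 * z2 * z3 * a1**2 * a2**2 * a3**2 * a4**2
            - 1.25 * z1 * a1 * a2**3 * a3**3 * a4**3
            - 1.25 * z2 * a1**3 * a2 * a3**3 * a4**3
            - 1.25 * z3 * a1**3 * a2**3 * a3 * a4**3
            - 1.25 * z1 * z2 * z3 * a1**3 * a2**3 * a3**3 * a4)

omega = np.array([np.pi / 6, 0.0, 0.0])
total = 0.0
for sigma in product([0.0, np.pi], repeat=3):   # G = {0, pi}^3 for M = 2*I
    z = np.exp(-1j * (omega + np.array(sigma)))
    total += abs(p(*z))**2
print("1 - sum_sigma |p^sigma|^2 =", 1 - total)  # negative at this point
\end{verbatim}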
\subsection{Sums of squares} \label{subsec:sumsofsquares}
Next we give another equivalent formulation of the UEP in terms of a sums of squares problem for the Laurent polynomial \begin{equation}\label{def:f}
f:=1-\sum_{\sigma\in G}p^{\sigma*}p^{\sigma}. \end{equation} We say that $f \in {\mathbb C}[T]$ is a {\it sum of hermitian squares} if there exist $h_1,\ldots,h_r\in {\mathbb C}[T]$ such that $\displaystyle f=\sum_{j=1}^r h_j^*h_j$. We start with the following auxiliary lemma.
\begin{Lemma}\label{lem:subQMFiso} Let $p\in {\mathbb C}[T]$ with isotypical components $p_\chi$, $\chi\in G'$. \begin{itemize} \item[(a)] $ \displaystyle \sum_{\sigma\in G}p^{\sigma*}p^\sigma \>=\>m\cdot\sum_{\chi\in G'} p_\chi^*p_\chi$ is a $G$-invariant Laurent polynomial in ${\mathbb R}[T]$. \item[(b)] If $f$ in \eqref{def:f} is a sum of hermitian squares \begin{equation}\label{id:tildeH} f\>=\>\sum_{j=1}^rh_j^*h_j, \end{equation} with $h_j\in {\mathbb C}[T]$, then \begin{equation}\label{id:H} f\>=\>\sum_{j=1}^r\sum_{\chi \in G'}\tilde h_{j,\chi}^*\tilde h_{j,\chi}, \end{equation} with the $G$-invariant polyphase components $\tilde h _{j,\chi}\in {\mathbb C}[T]$. \end{itemize} \end{Lemma}
\begin{proof} Similar computations as in the proof of Theorem \ref{th:UEP_polyphase} yield the identity in (a). The $G$-invariance and invariance by involution are obvious. For (b) we observe that the left-hand side of \eqref{id:tildeH} is $G$-invariant as well. Therefore, \eqref{id:tildeH} implies $$
1-\sum_{\sigma\in G}p^{\sigma*}p^{\sigma}= m^{-1}\sum_{j=1}^r \sum_{\sigma\in G}
h_j^{\sigma*}h_j^{\sigma}. $$ Using the result in (a) we get $$
m^{-1}\sum_{j=1}^r \sum_{\sigma\in G}h_j^{\sigma*}h_j^{\sigma}
\>=\>\sum_{j=1}^r\sum_{\chi\in G'}h_{j,\chi}^*h_{j,\chi}. $$ The polyphase component $\tilde h_{j,\chi}=z^{-\alpha_\chi}h_{j,\chi}$, with $\alpha_\chi\in{\mathbb Z}^d$ and $\alpha_\chi\equiv \chi$ mod~$M{\mathbb Z}^d$, is $G$-invariant
and satisfies $\tilde h_{j,\chi}^*\tilde h_{j,\chi}= h_{j,\chi}^*h_{j,\chi}$. \end{proof}
The results in \cite{LS06} imply that having a sum of hermitian squares decomposition of \[
f=1-\sum_{\sigma\in G}p^{\sigma*}p^{\sigma}
=\sum_{j=1}^rh_j^*h_j\in {\mathbb R}[T], \] with $G$-invariant polynomials $h_j\in{\mathbb C}[T]$, is sufficient for the existence of the polynomials $q_j$ in Theorem~\ref{th:UEP}. The authors in \cite{LS06} also provide a method for the construction of $q_j$ from a sum of squares decomposition of the trigonometric polynomial $f$. Lemma \ref{lem:subQMFiso} shows that one does not need to require $G$-invariance of the $h_j$ in such a decomposition. Moreover, it is not mentioned in \cite{LS06} that the existence of the sos decomposition of $f$ is
also a necessary condition, and, therefore, provides another equivalent formulation
of the UEP conditions \eqref{id:UEP}. We state the following extension of \cite[Theorem 3.4]{LS06}.
\begin{Theorem}\label{th:LaiSt} For any $p \in {\mathbb C}[T]$, with $p({\boldsymbol{1}})=1$, the following conditions are equivalent.
\begin{itemize}
\item[(i)] There exist trigonometric polynomials $h_1,\ldots,h_r \in {\mathbb C}[T]$ such that $f$ in \eqref{def:f} satisfies $\displaystyle f=\sum_{j=1}^r h_j^*h_j$. \item[(ii)] There exist trigonometric polynomials $q_1,\ldots, q_N \in {\mathbb C}[T]$ satisfying \eqref{id:UEP}. \end{itemize} \end{Theorem}
\begin{proof} Assume that $(i)$ is satisfied. Let $\chi_k$ be the elements of $G'\simeq\{\chi_1,\ldots,\chi_m\}$. For $1\le j\le r$ and $1\le k\le m$, we define the polyphase components $\tilde h_{j,\chi_k}$ of $h_j$ and set $\alpha_\chi\in{\mathbb Z}^d$, $\alpha_\chi\equiv \chi$ mod~$M{\mathbb Z}^d$, as in Lemma \ref{lem:subQMFiso}. The constructive method in the proof of \cite[Theorem 3.4]{LS06} yields the explicit form of $q_1,\ldots,q_N$, with $N=m(r+1)$, satisfying \eqref{id:UEP}, namely \begin{eqnarray}
q_k&=& m^{-1/2}z^{\alpha_{\chi_k}}
(1-mpp_{\chi_k}^*), \qquad 1\le k\le m,\label{def:LSqk1}\\
q_{mj+k}&=& p
\tilde h_{j,\chi_k}^*
, \qquad 1\le k\le m, \qquad 1 \le j\le r.\label{def:LSqk2} \end{eqnarray} Conversely, if $(ii)$ is satisfied, we obtain from \eqref{identity:UEPmatrixform} \[
I_m-
\left[\begin{matrix} \vdots\\p^{\sigma*}\\ \vdots\end{matrix}
\right]_{\sigma\in G}\cdot
\left[\begin{matrix} \cdots & p^{\sigma}& \cdots\end{matrix}
\right]_{\sigma\in G}= \sum_{j=1}^N
\left[\begin{matrix} \vdots\\q_j^{\sigma*}\\ \vdots\end{matrix}
\right]_{\sigma\in G}\cdot
\left[\begin{matrix} \cdots & q_j^{\sigma}& \cdots\end{matrix}
\right]_{\sigma\in G}. \] The determinant of the matrix on the left-hand side is equal to $f$ in \eqref{def:f}, and, by the Cauchy-Binet formula, the determinant of the matrix on the right-hand side is a sum of squares. \end{proof}
\begin{Remark}\label{rem:matrix-extension} The constructive method in \cite{LS06} yields $N=m(r+1)$ trigonometric polynomials $q_j$ in \eqref{id:UEP}, where $r$ is the number of trigonometric polynomials $h_j$ in \eqref{def:f}. Moreover, the degree of some $q_j$ in \eqref{def:LSqk1} and \eqref{def:LSqk2} is at least twice as high as the degree of $p$. \end{Remark}
Next, we give an equivalent formulation of the UEP condition in terms of hermitian sums of squares, derived from the identities \eqref{id:equiv_UEP_1}
in Theorem \ref{th:UEP_polyphase}. Our goal is to improve the constructive method in \cite{LS06} and to give an algebraic equivalent formulation that directly delivers the trigonometric polynomials $q_j$ in Theorem~\ref{th:UEP}, avoiding any extra computations as in \eqref{def:LSqk1} and \eqref{def:LSqk2}. To this end we write $A\>=\>{\mathbb C}[T]$ and consider $$A\otimes_{\mathbb C} A={\mathbb C}[T\times T].$$ So $A$ is the ring of Laurent polynomials in $d$ variables $z_1,\dots, z_d$. We may identify $A\otimes A$ with the ring of Laurent polynomials in $2d$ variables $u_1,\dots,u_d$ and $v_1,\dots,v_d$, where $u_j=z_j\otimes1$ and $v_j=1\otimes z_j$, $j=1,\dots,d$. On $A$ we have already introduced the $G'$-grading $A=\bigoplus_{\chi\in G'} A_\chi$ and the involution $*$ satisfying $z_j^*=z_j^{-1}$. On $A\otimes A$ we consider the involution $*$ defined by $(p\otimes q) ^*=q^*\otimes p^*$ for $p,q\in A$. Thus $u_j^*=v_j^{-1}$ and $v_j^*= u_j^{-1}$. An element $f\in A\otimes A$ will be called \emph{hermitian} if $f=f^*$. We say that $f$ is a sum of hermitian squares if there are finitely many $q_1,\dots,q_r\in A$ with $\displaystyle f=\sum _{k=1}^rq_k^*\otimes q_k$. On $A\otimes A$ we consider the grading $$A\otimes A\>=\>\bigoplus_{\chi,\eta\in G'}A_\chi\otimes A_\eta.$$ So $A_\chi\otimes A_\eta$ is spanned by the monomials $u^\alpha v^\beta$ with $\alpha+M{\mathbb Z}^d=\chi$ and $\beta+M{\mathbb Z}^d=\eta$. Note that $(A_\chi\otimes A_\eta)^*=A_{-\eta}\otimes A_{-\chi}$.
The multiplication homomorphism $\mu\colon A\otimes A\to A$ (with $\mu(p\otimes q)=pq$) is compatible with the involutions. Let $I= \ker(\mu)$, the ideal in $A\otimes A$ that is generated by $u_j-v_j$ with $j=1,\dots,d$. We also need to consider the smaller ideal $$J\>:=\>\bigoplus_{\chi,\eta\in G'}\Bigl(I\cap\bigl(A_\chi\otimes A_\eta\bigr)\Bigr)$$ of $A\otimes A$. The ideal $J$ is $*$-invariant. Note that the inclusion $J\subseteq I$ is proper since, for example, $u_j-v_j\notin J$.
\begin{Theorem}\label{th:UEP_hermitian} Let $p\in A={\mathbb C}[T]$ satisfy $p({\boldsymbol{1}})=1$. The following conditions are equivalent. \begin{itemize} \item[(i)] The Laurent polynomial $$f=1-\sum_{\sigma\in G}p^{\sigma*}p^\sigma$$
is a sum of hermitian squares in $A$; that is, there exist $h_1,\dots,h_r\in A$ with $\displaystyle f= \sum_{j=1}^r h_j^*h_j$. \item[(ii)] For any hermitian elements $h_\chi=h_\chi^*$ in $A_{-\chi}\otimes A_{\chi}$, with $\mu(h_\chi)=\frac1m$ for all $\chi\in G'$, the element $$g\>:=\>\sum_{\chi\in G'}h_\chi-p^*\otimes p$$ is a sum of hermitian squares in $A\otimes A$ modulo~$J$; that is, there exist $q_1,\dots,q_N\in A$ with $\displaystyle g- \sum_{j=1}^N q_j^*q_j\in J$. \item[(iii)] $p$ satisfies the UEP condition \eqref{id:UEP} for suitable $q_1, \dots,q_N\in A$. \end{itemize} \end{Theorem}
\begin{proof} By Theorem \ref{th:LaiSt}, $(i)$ is equivalent to $(iii)$. In $(ii)$, let hermitian elements $h_\chi\in A_{-\chi}\otimes A_{\chi}$ be given with $\mu(h_\chi)=\frac1m$. Then $(ii)$ is equivalent to the existence of $q_1,\dots,q_N\in A$ with \begin{equation}\label{id:gmodJ}
\sum_{\chi\in G'}h_\chi-p^*\otimes p-\sum_{j=1}^Nq_j^*\otimes q_j\in J. \end{equation} We write $\displaystyle p=\sum_{\chi \in G'}p_\chi$ and $q_j$ as the sum of its isotypical components and observe that \eqref{id:gmodJ} is equivalent to \begin{equation}\label{id:gchieta}
\delta_{\chi,\eta}h_\chi -p_\chi^*\otimes p_\eta-
\sum_{j=1}^Nq_{j,\chi}^*\otimes q_{j,\eta}
\in I\quad\hbox{for all}\quad \chi,\eta\in G'. \end{equation} Due to $\mu(h_\chi)=\frac1m$, the relation \eqref{id:gchieta} is an equivalent reformulation of equations \eqref{id:equiv_UEP_1} in Theorem \ref{th:UEP_polyphase}, and therefore equivalent to equations \eqref{id:UEP}. \end{proof}
\begin{Remark}\label{rem:UEP_hermitian} \begin{itemize} \item[$(i)$] The proof of Theorem \ref{th:UEP_hermitian} does not depend on the choice of the hermitian elements $h_\chi\in A_{-\chi}\otimes A_{\chi}$ in $(ii)$. Thus, it suffices to choose particular hermitian elements satisfying $\mu(h_\chi)=\frac1m$. For example, if $p_\chi({\boldsymbol{1}})=m^{-1}$ is satisfied for all $\chi\in G'$, we can choose \begin{equation}\label{def:hchi}
h_\chi = \sum_{\alpha\equiv \chi}{\rm Re}(p(\alpha))u^{-\alpha}v^\alpha, \end{equation} where $p(\alpha)$ are the coefficients of the Laurent polynomial $p$. \item[$(ii)$] The same Laurent polynomials $q_1,\ldots,q_N$ can be chosen in Theorem~\ref{th:UEP_hermitian} $(ii)$ and $(iii)$. This is the main advantage of working with the condition $(ii)$ rather than with $(i)$. \end{itemize} \end{Remark}
\subsection{Semi-definite programming} \label{subsec:semi-definite}
We next devise a constructive method for determining the Laurent polynomials $q_j$ in \eqref{id:UEP}. This method is based on $(ii)$ of Theorem \ref{th:UEP_hermitian} and $(i)$ of Remark \ref{rem:UEP_hermitian}.
For a Laurent polynomial $ p=\sum_\alpha p(\alpha)z^\alpha$, let ${\cal N}\subseteq {\mathbb Z}^d$ contain $\{\alpha \in {\mathbb Z}^d \ : \ p(\alpha) \not=0\}$. We also define the tautological (column) vector $$
{\boldsymbol{x}}=\left[z^\alpha \ : \ \alpha \in {\cal N} \right]^T, $$
and the orthogonal projections $E_\chi \in {\mathbb R}^{|{\cal N}| \times |{\cal N}|}$ to be diagonal matrices with diagonal entries given by $$
E_\chi(\alpha,\alpha)=\left\{ \begin{array}{cc} 1, & \alpha \equiv \chi \ \hbox{mod} \ M{\mathbb Z}^d, \\
0, & \hbox{otherwise}, \end{array}\right. \alpha \in {\cal N}. $$
\begin{Theorem} \label{th:UEP_semidefinite} Let \begin{equation}\label{def:p_coeff}
p={\boldsymbol{p}}\cdot {\boldsymbol{x}}\in A={\mathbb C}[T], \qquad {\boldsymbol{p}}=[p(\alpha) \ : \ \alpha \in {\cal N}]\in {\mathbb C}^{|{\cal N}|}, \end{equation} satisfy $p_\chi({\boldsymbol{1}})=m^{-1}$ for all $\chi\in G'$. The following conditions are equivalent. \begin{itemize}
\item[(i)] There exist row vectors ${\boldsymbol{q}}_j=[q_j(\alpha) \ : \ \alpha \in {\cal N}]\in {\mathbb C}^{|{\cal N}|}$,
$1 \le j \le N$, satisfying the identities \begin{eqnarray} \label{id:equiv_UEP_2}
{\boldsymbol{x}}^*E_\chi \left( {\rm diag}({\rm Re}\,{\boldsymbol{p}})-{\boldsymbol{p}}^* {\boldsymbol{p}} -\sum_{j=1}^N {\boldsymbol{q}}_j^* {\boldsymbol{q}}_j \right) E_\eta
{\boldsymbol{x}}=0\quad\hbox{for all}\quad
\chi,\eta \in G'. \end{eqnarray} \item[(ii)] $p$ satisfies the UEP condition \eqref{id:UEP} with $$
q_j={\boldsymbol{q}}_j\cdot {\boldsymbol{x}}\in {\mathbb C}[T], \qquad j=1,\ldots,N, $$
and suitable row vectors ${\boldsymbol{q}}_j\in {\mathbb C}^{|{\cal N}|}$. \end{itemize} \end{Theorem}
\begin{proof} Define $$
{\boldsymbol{v}}=\left[1\otimes z^\alpha \ : \ \alpha \in {\cal N} \right]^T \in (A\otimes A)^{|{\cal N}|}. $$ Note that ${\boldsymbol{p}}{\boldsymbol{v}}=1\otimes p$ and the definition of $E_\chi$ gives ${\boldsymbol{p}} E_\chi{\boldsymbol{v}}=1\otimes p_\chi$. Therefore, we have $$
{\boldsymbol{v}}^* E_\chi {\boldsymbol{p}}^* {\boldsymbol{p}} E_\eta {\boldsymbol{v}}=p_\chi^*\otimes p_\eta
\quad\hbox{for all}\quad \chi,\eta\in G', $$ and the analogue for $q_{j,\chi}^*\otimes q_{j,\eta}$. Moreover, we have $$
{\boldsymbol{v}}^* E_\chi \hbox{diag}({\rm Re}\,{\boldsymbol{p}}) E_\eta {\boldsymbol{v}}=\delta_{\chi,\eta}
\sum_{\alpha\equiv \chi}{\rm Re}(p(\alpha))u^{-\alpha}v^\alpha. $$ Due to $p_\chi({\boldsymbol{1}})=m^{-1}$ and by Remark \ref{rem:UEP_hermitian} we choose $h_\chi={\boldsymbol{v}}^* E_\chi \hbox{diag}({\rm Re}\, {\boldsymbol{p}}) E_\chi {\boldsymbol{v}}$ as the hermitian elements in Theorem \ref{th:UEP_hermitian}$(ii)$, and
the relation \eqref{id:gchieta} is equivalent to $$
{\boldsymbol{v}}^*E_\chi \left( \hbox{diag}({\rm Re}\,{\boldsymbol{p}})-{\boldsymbol{p}}^* {\boldsymbol{p}}
-\sum_{j=1}^N {\boldsymbol{q}}_j^* {\boldsymbol{q}}_j \right) E_\eta {\boldsymbol{v}} \in I
\quad\hbox{for all}\quad \chi,\eta\in G'. $$ Due to $\mu({\boldsymbol{v}})={\boldsymbol{x}}$, the claim follows from the equivalence of $(ii)$ and $(iii)$ in Theorem \ref{th:UEP_hermitian}. \end{proof}
We suggest the following constructive method based on Theorem \ref{th:UEP_semidefinite}. Given the trigonometric polynomial $p$ and the vector ${\boldsymbol{p}}$ in \eqref{def:p_coeff}, define the matrix \begin{equation} \label{def:R}
R=\hbox{diag}({\rm Re}\,{\boldsymbol{p}})-{\boldsymbol{p}}^*{\boldsymbol{p}}\in {\mathbb C}^{|{\cal N}| \times |{\cal N}|}.
\end{equation}
Then the task of constructing tight wavelet frames can be formulated as the following problem of {\bf semi-definite programming}: find a matrix $O\in {\mathbb C}^{|{\cal N}| \times |{\cal N}|}$ such that \begin{equation}\label{id:Sposdef}
S:=R+O\quad\hbox{is positive semi-definite} \end{equation} subject to the constraints \begin{equation}\label{id:null-matrices}
{\boldsymbol{x}}^* E_\chi \, O \, E_\eta {\boldsymbol{x}}=0 \quad \hbox{for all}\quad
\chi, \eta \in G'. \end{equation} If such a matrix $O$ exists, we determine the trigonometric polynomials $q_j={\boldsymbol{q}}_j {\boldsymbol{x}}\in {\mathbb C}[T]$ by choosing any decomposition of the form $$
S=\sum_{j=1}^N{\boldsymbol{q}}_j^* {\boldsymbol{q}}_j $$ with standard methods from linear algebra.
If the semi-definite programming problem does not have a solution, we can enlarge the set ${\cal N}$ and start over. Note that the identities \eqref{id:null-matrices} are equivalent to the following linear constraints on the matrices $O$, which we call null-matrices, $$
\sum_{\alpha \equiv \chi, \beta \equiv \eta}
O_{\alpha,\beta} z^{\beta-\alpha}=0 \quad\hbox{for all}\quad \chi, \eta
\in G', $$ or, equivalently, $$
\sum_{\alpha \equiv \chi} O_{\alpha,
\alpha+\tau}=0 \quad \hbox{for all}\quad \chi \in G' \quad\hbox{and}\quad
\tau \in \{\beta-\alpha \ : \ \alpha, \beta \in
{\cal N}\}. $$
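A minimal computational sketch of this feasibility problem is given below. It is our own illustration: it uses the CVXPY modelling package (an assumption of ours, not software referred to in this paper) and restricts, for brevity, to a univariate mask with real coefficients supported on ${\cal N}=\{0,\dots,n-1\}$ with $M=2$.
\begin{verbatim}
import numpy as np
import cvxpy as cp

def tight_frame_masks(pcoef, M=2):
    """Feasibility SDP: find a null-matrix O with S = R + O positive semi-definite.

    pcoef : real coefficients of p on the exponents {0, ..., n-1} (simplifying assumption).
    Returns rows q_j of a decomposition S = sum_j q_j^* q_j, or None if infeasible.
    """
    pcoef = np.asarray(pcoef, dtype=float)
    n = len(pcoef)
    R = np.diag(pcoef) - np.outer(pcoef, pcoef)          # R = diag(Re p) - p^* p
    O = cp.Variable((n, n), symmetric=True)
    constraints = [R + O >> 0]
    for chi in range(M):                                  # residues mod M
        for tau in range(-(n - 1), n):                    # possible shifts beta - alpha
            idx = [a for a in range(n) if a % M == chi and 0 <= a + tau < n]
            if idx:
                constraints.append(sum(O[a, a + tau] for a in idx) == 0)
    prob = cp.Problem(cp.Minimize(0), constraints)
    prob.solve()
    if prob.status not in ("optimal", "optimal_inaccurate"):
        return None
    S = R + O.value
    w, V = np.linalg.eigh(S)                              # factor S (up to solver tolerance)
    return [np.sqrt(lam) * V[:, k] for k, lam in enumerate(w) if lam > 1e-8]

# Daubechies D4 mask (see the example below)
s3 = np.sqrt(3.0)
qs = tight_frame_masks(np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / 8)
print(qs)
\end{verbatim}
Any factorization of the resulting matrix $S$ yields (up to solver accuracy) coefficient vectors ${\boldsymbol{q}}_j$ and hence generators $q_j={\boldsymbol{q}}_j\cdot{\boldsymbol{x}}$ as in Theorem \ref{th:UEP_semidefinite}.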
\begin{Example} To illustrate the concept of null-matrices, we consider first a very prominent one-dimensional example of a Daubechies wavelet. Let $$
p={\boldsymbol{p}} \cdot {\boldsymbol{x}}, \quad
{\boldsymbol{p}}=\frac{1}{8} \left[\begin{array}{cccc} 1+\sqrt{3} & 3+\sqrt{3} & 3-\sqrt{3}& 1-\sqrt{3}
\end{array}\right], $$ and ${\boldsymbol{x}}=\left[1,z,z^2,z^3\right]^T$. In this case $M=m=2$, $G\simeq\{0,\pi\}$, $G'\simeq\{0,1\}$ and the orthogonal projections $E_\chi \in {\mathbb R}^{4 \times 4}$, $\chi \in G'$, are given by $$
E_0=\hbox{diag}[1,0,1,0] \quad \hbox{and} \quad E_1=\hbox{diag}[0,1,0,1]. $$ By \eqref{def:R}, we have $$
R=\frac{1}{64} \left[\begin{array}{rrrr}4+6\sqrt{3}& -6-4\sqrt{3}& -2\sqrt{3}&2\\
-6-4\sqrt{3}& 12+2\sqrt{3} & -6& 2\sqrt{3} \\ -2\sqrt{3} & -6 & 12-2\sqrt{3} & -6+4\sqrt{3} \\
2& 2\sqrt{3} & -6+4\sqrt{3} & 4-6\sqrt{3} \end{array} \right], $$ which is not positive semi-definite. Define $$
O=\frac{1}{64} \left[\begin{array}{rrrr}-8\sqrt{3}& 8\sqrt{3}& 0&0\\
8\sqrt{3}& -8\sqrt{3} & 0& 0 \\ 0 & 0 & 8\sqrt{3} & -8\sqrt{3} \\
0& 0 & -8\sqrt{3} & 8\sqrt{3} \end{array} \right] $$ satisfying \eqref{id:null-matrices}. Then $S=R+O$ is positive semi-definite, of rank one, and yields the well-known Daubechies wavelet from \cite{Daub}, defined by $$
q_1= \frac{1}{8} \left[ \begin{array}{cccc}
1-\sqrt{3} & -3+\sqrt{3} & 3+\sqrt{3}& -1-\sqrt{3}
\end{array}\right] \cdot {\boldsymbol{x}}. $$
\end{Example}
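The claimed properties of this example are also easy to verify numerically. The short Python check below (our own illustration) confirms that $O$ satisfies the constraints \eqref{id:null-matrices}, that $S=R+O$ is positive semi-definite of rank one, and that $S={\boldsymbol{q}}_1^{*}{\boldsymbol{q}}_1$ for the stated $q_1$.
\begin{verbatim}
import numpy as np

s3 = np.sqrt(3.0)
p  = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / 8
q1 = np.array([1 - s3, -3 + s3, 3 + s3, -1 - s3]) / 8

R = np.diag(p) - np.outer(p, p)
O = (s3 / 8) * np.array([[-1,  1,  0,  0],
                         [ 1, -1,  0,  0],
                         [ 0,  0,  1, -1],
                         [ 0,  0, -1,  1]])
S = R + O

# null-matrix constraints: sum over alpha = chi (mod 2) of O[alpha, alpha+tau] vanishes
for chi in range(2):
    for tau in range(-3, 4):
        idx = [a for a in range(4) if a % 2 == chi and 0 <= a + tau < 4]
        assert abs(sum(O[a, a + tau] for a in idx)) < 1e-14

assert np.all(np.linalg.eigvalsh(S) > -1e-14)        # S is positive semi-definite
assert np.linalg.matrix_rank(S, tol=1e-12) == 1      # ... of rank one
assert np.allclose(S, np.outer(q1, q1))              # S = q1^* q1
print("Daubechies example verified")
\end{verbatim}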
A two-dimensional example of a possible choice of an appropriate null-matrix satisfying \eqref{id:null-matrices} is given in Example \ref{ex:butterfly}.
\begin{Remark} A very similar way of working with null-matrices was already pursued in \cite{CS08}. \end{Remark}
\section{Existence and constructions of tight wavelet frames} \label{sec:algebra}
In this section we use results from algebraic geometry and Theorem \ref{th:LaiSt} to resolve the problem of existence of tight wavelet frames. Theorem \ref{th:LaiSt} allows us to reduce the problem of existence of $q_j$ in \eqref{id:UEP} to the problem of existence of an sos decomposition of a single nonnegative polynomial \begin{equation}
f=1-\sum_{\sigma \in G} p^{\sigma*} p^{\sigma} \in {\mathbb R}[T]. \notag \end{equation} In subsection \ref{subsec:existence}, for dimension $d=2$, we show that the polynomials $q_1,\dots,q_N \in {\mathbb C}[T]$ as in Theorem \ref{th:UEP} always exist. This result is based on recent progress in real algebraic geometry. We also include an example of a three-dimensional trigonometric polynomial $p$, satisfying the sub-QMF condition \eqref{id:subQMF}, but for which trigonometric polynomials $q_1,\ldots,q_N$ as in Theorem \ref{th:UEP} do not exist. In subsection \ref{subsec:sufficient}, we give sufficient conditions for the existence of the $q_j$'s in the multidimensional case and give several explicit constructions of tight wavelet frames in section \ref{subsec:construction}.
\subsection{Existence of tight wavelet frames} \label{subsec:existence}
In this section we show that in the two-dimensional case ($d=2$) the question of existence of a wavelet tight frame can be positively answered using the results from \cite{sch:surf}. Thus, Theorem \ref{th:existence2dim} answers a long-standing open question about the existence of tight wavelet frames as in Theorem \ref{th:UEP}. The result of Theorem \ref{th:noUEP} states that in dimension $d \ge 3$, for a given trigonometric polynomial $p$ satisfying $p({\boldsymbol{1}})=1$ and the sub-QMF condition \eqref{id:subQMF}, one cannot always determine trigonometric polynomials $q_j$ as in Theorem \ref{th:UEP}.
\begin{Theorem}\label{th:existence2dim} Let $d=2$, $p\in{\mathbb C}[T]$ satisfy $p({\boldsymbol{1}})=1$ and $\displaystyle \sum_{\sigma\in G}p^{\sigma *} p^\sigma \le1$ on ${\mathbb T}^2=T({\mathbb R})$. Then there exist $N\in{\mathbb N}$ and trigonometric polynomials $q_1,\dots,q_N\in{\mathbb C}[T]$ satisfying \begin{equation}\label{aux:th:existence2dim}
\delta_{\sigma,\tau}\>=\>p^{\sigma*}p^{\tau}+\sum_{j=1}^N
q_j^{\sigma*}q_j^{\tau},\qquad \sigma,\,\tau\in G. \end{equation} \end{Theorem}
\begin{proof} The torus $T$ is a non-singular affine algebraic surface over ${\mathbb R}$, and $T({\mathbb R})$ is compact. The polynomial $f$ in \eqref{def:f} is in ${\mathbb R}[T]$ and is nonnegative on $T({\mathbb R})$ by assumption. By Corollary~3.4 of \cite{sch:surf}, there exist $h_1,\dots,h_r\in{\mathbb C}[T]$ satisfying $\displaystyle f=\sum_{j=1}^r h_j^* h_j$. According to Lemma \ref{lem:subQMFiso} part $(b)$, the polynomials $h_j$ can be taken to be $G$-invariant. Thus, by Theorem \ref{th:LaiSt}, there exist polynomials $q_1,\dots,q_N$ satisfying \eqref{aux:th:existence2dim}. \end{proof}
The question arises whether there exists a trigonometric polynomial $p$ that satisfies $p({\boldsymbol{1}})=1$ and the sub-QMF condition $\displaystyle \sum_{\sigma\in G} p^{\sigma*}p^\sigma \le 1$ on ${\mathbb T}^d$, but for which there exists no UEP tight frame as in Theorem \ref{th:UEP}. Or, due to Corollary \ref{cor:UEP_matrix}, can we find such a $p$ for which the nonnegative trigonometric polynomial $1-p^{*}p$ is not a sum of hermitian squares of trigonometric polynomials?
\begin{Theorem}\label{th:noUEP} There exists $p\in{\mathbb C}[T]$ satisfying $p({\boldsymbol{1}})=1$ and the sub-QMF condition on ${\mathbb T}^3$, such that $1-p^*p$ is not a sum of hermitian squares in ${\mathbb R}[T]$. \end{Theorem}
The proof is constructive. The following example defines a family of trigonometric polynomials with the properties stated in Theorem \ref{th:noUEP}. We make use of the following local-global result from algebraic geometry: if the Taylor expansion of $f\in {\mathbb R}[T]$ at one of its roots has, in local coordinates, a homogeneous part of lowest degree which is not sos of real algebraic polynomials, then $f$ is not sos in ${\mathbb R}[T]$.
\begin{Example} Denote $z_j=e^{-i\omega_j}$, $j=1,2,3$. We let \[
p(z)=\Big(1-c \cdot m(z) \Big) a(z),\quad z \in T, \quad 0<c\le \frac{1}{3}, \] where \[
m(z)= y_1^4y_2^2+y_1^2y_2^4+y_3^6-3y_1^2y_2^2y_3^2 \in {\mathbb R}[T],\qquad
y_j=\sin\omega_j. \] In the local coordinates $(y_1,y_2,y_3)$ at $z={\boldsymbol{1}}$, $m$ is the well-known Motzkin polynomial in ${\mathbb R}[y_1,y_2,y_3]$; i.e.
$m$ is not sos in ${\mathbb R}[y_1,y_2,y_3]$. Moreover, $a\in {\mathbb R}[T]$ is chosen
such that \begin{equation}\label{eq:propA}
D^\alpha a({\boldsymbol{1}})=\delta_{0,\alpha},\quad
D^\alpha a(\sigma)=0,\quad 0\le |\alpha|< 8, \quad
\sigma\in G\setminus \{{\boldsymbol{1}}\}, \end{equation} and $\displaystyle \sum_{\sigma \in G} a^{\sigma*}a^\sigma \le 1$. Such $a$ can be, for example, any scaling symbol of a 3-D orthonormal wavelet with 8 vanishing moments; in particular, the tensor product Daubechies symbol $a(z)=m_8(z_1) m_8(z_2) m_8(z_3)$ with $m_8$ in \cite{Daub} satisfies conditions \eqref{eq:propA} and $\displaystyle \sum_{\sigma \in G} a^{\sigma *}a^\sigma = 1$. The properties of $m$ and $a$ imply that \begin{itemize} \item[1.] $p$ satisfies the sub-QMF condition on ${\mathbb T}^3$, since $m$ is $G$-invariant and $0\le 1-c \cdot m \le 1$ on ${\mathbb T}^3$, \item[2.] $p$ satisfies sum rules of order at least $6$, \item[3.] the Taylor expansion of $1-p^*p$ at $z={\boldsymbol{1}}$, in local coordinates $(y_1,y_2,y_3)$, has $2\cdot c \cdot m$ as its homogeneous part of lowest degree. \end{itemize} Therefore, $1-p^*p$ is not sos of trigonometric polynomials in ${\mathbb R}[T]$. By Corollary \ref{cor:UEP_matrix}, the corresponding nonnegative trigonometric polynomial $f$ in \eqref{def:f} has no sos decomposition. \end{Example}
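For the reader's convenience we include a short justification of property 1 (it is elementary and not part of the construction): since the three monomials $y_1^4y_2^2$, $y_1^2y_2^4$, $y_3^6$ are nonnegative, the arithmetic-geometric mean inequality gives $$ \frac{y_1^4y_2^2+y_1^2y_2^4+y_3^6}{3}\;\ge\;\sqrt[3]{y_1^6y_2^6y_3^6}\;=\;y_1^2y_2^2y_3^2, $$ so $m\ge 0$ on ${\mathbb T}^3$; moreover, $m\le y_1^4y_2^2+y_1^2y_2^4+y_3^6\le 3$ since $|y_j|\le 1$, and hence $0\le 1-c \cdot m\le 1$ for every $0<c\le \frac{1}{3}$.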
\subsection{Sufficient conditions for existence of tight wavelet frames} \label{subsec:sufficient}
In the general multivariate case $d \ge 2$, in Theorem \ref{th:existence-ddim}, we provide a sufficient condition for the existence of a sums of squares decomposition of $f$ in \eqref{def:f}. This condition is based on the properties of the Hessian of $f \in {\mathbb R}[T]$ $$
{\rm Hess}(f)=\left( D^\mu f \right)_{\mu \in {\mathbb N}_0^d, |\mu|=2}, $$
where $f$ is a trigonometric polynomial in $\omega \in {\mathbb R}^d$ and $D^\mu$ denotes the $\mu$-th partial derivative with respect to $\omega \in {\mathbb R}^d$.
\begin{Theorem}\label{hessiancrit} Let $V$ be a non-singular affine ${\mathbb R}$-variety for which $V({\mathbb R})$ is compact, and let $f\in{\mathbb R}[V]$ with $f\ge0$ on $V({\mathbb R})$. For every $\xi \in V({\mathbb R})$ with $f(\xi)=0$, assume that the Hessian of $f$ at $\xi$ is strictly positive definite. Then $f$ is a sum of squares in ${\mathbb R}[V]$. \end{Theorem}
\begin{proof} The hypotheses imply that $f$ has only finitely many zeros in $V({\mathbb R})$. Therefore the claim follows from \cite[Corollary 2.17, Example 3.18]{sch:mz}. \end{proof}
Theorem \ref{hessiancrit} implies the following result.
\begin{Theorem}\label{th:existence-ddim} Let $p\in{\mathbb C}[T]$ satisfy $p({\boldsymbol{1}})=1$ and $\displaystyle f=1-\sum_{\sigma\in G}p^{\sigma*}p^\sigma \ge 0$ on $T({\mathbb R})={\mathbb T}^d$. If the Hessian of $f$ is positive definite at every zero of $f$ in ${\mathbb T}^d$, then there exist $N\in{\mathbb N}$ and polynomials $q_1,\dots,q_N\in{\mathbb C}[T]$ satisfying \eqref{id:UEP}. \end{Theorem}
\begin{proof} By Theorem \ref{hessiancrit}, $f$ is a sum of squares in ${\mathbb R}[T]$. The claim then follows as in the proof of Theorem \ref{th:existence2dim}, via Lemma \ref{lem:subQMFiso} part $(b)$ and Theorem \ref{th:LaiSt}. \end{proof}
Due to $p({\boldsymbol{1}})=1$, $z={\boldsymbol{1}}$ is obviously a zero of $f$. We show next how to express the Hessian of $f$ at ${\boldsymbol{1}}$ in terms of the gradient $\nabla p({\boldsymbol{1}})$ and the Hessian of $p$ at ${\boldsymbol{1}}$, if $p$ additionally satisfies the so-called sum rules of order $2$, or, equivalently, satisfies the zero conditions of order $2$. We say that $p \in {\mathbb C}[T]$ satisfies zero conditions of order $k$, if $$
D^\mu p({\boldsymbol{e}}^{-i\sigma})=0, \quad \mu \in {\mathbb N}_0^d, \quad |\mu|<k, \quad \sigma \in G\setminus\{0\}, $$ see \cite{JePlo,JiaJiang} for details. The assumptions that $p$ satisfies sum rules of order $2$ and that $p({\boldsymbol{1}})=1$ are necessary for the continuity of the corresponding refinable function $\phi$.
\begin{Lemma}\label{lem:Hessianf_HessianP} Let $p\in{\mathbb C}[T]$ with real coefficients satisfy the sum rules of order $2$ and $p({\boldsymbol{1}})=1$. Then the Hessian of $\displaystyle f=1-\sum_{\sigma\in G}p^{\sigma*}p^\sigma$ at ${\boldsymbol{1}}$ is equal to $$-2\,{\rm Hess}(p)({\boldsymbol{1}})-2\,\nabla p({\boldsymbol{1}})^*\nabla p({\boldsymbol{1}}).$$ \end{Lemma}
\begin{proof} We expand the trigonometric polynomial $p$ in a neighborhood of
${\boldsymbol{1}}$ and get $$
p({\boldsymbol{e}}^{-i\omega})=1+ \nabla p({\boldsymbol{1}}) \omega+
\frac{1}{2}\omega^T \mbox{Hess}(p)({\boldsymbol{1}}) \omega
+\mathcal{O}(|\omega|^3). $$ Note that, since the coefficients of $p$ are real, the row vector $v=\nabla p({\boldsymbol{1}})$ is purely imaginary and $\mbox{Hess}(p)({\boldsymbol{1}})$ is real and symmetric. The sum rules of order $2$ are equivalent to $$
p^\sigma({\boldsymbol{1}})= 0,\quad
\nabla p^\sigma({\boldsymbol{1}})=0\qquad
\mbox{for all}\quad
\sigma\in G\setminus\{ 0\}. $$
Thus, we have $p^\sigma({\boldsymbol{e}}^{-i\omega})=\mathcal{O}(|\omega|^2)$ for all $\sigma\in G\setminus\{0\}$. Simple computation yields \begin{eqnarray*}
|p({\boldsymbol{e}}^{-i\omega})|^2&=&1+ (v+\overline{v}) \omega+ \omega^T (\mbox{Hess}(p)({\boldsymbol{1}})+v^*v) \omega
+\mathcal{O}(|\omega|^3) \\&=&1+ \omega^T (\mbox{Hess}(p)({\boldsymbol{1}})+v^*v) \omega
+\mathcal{O}(|\omega|^3). \end{eqnarray*} Thus, the claim follows. \end{proof}
\begin{Remark} Note that ${\rm Hess}(f)$ is the zero matrix if $p$ is the symbol of an interpolatory subdivision scheme, i.e., $$
p=m^{-1}+ m^{-1}\sum_{\chi \in G' \setminus \{0\}} p_\chi, $$ and $p$ satisfies zero conditions of order at least $3$. This property of ${\rm Hess}(f)$ follows directly from the equivalent formulation of zero conditions of order $k$, see \cite{Cabrelli}. Examples of $p$ with these properties are the butterfly scheme in Example \ref{ex:butterfly} and the three-dimensional interpolatory scheme in Example \ref{ex:3D_butterfly}. \end{Remark}
\begin{Remark} The sufficient condition of Theorem \ref{hessiancrit} can be generalized to cases when the order of vanishing of $f$ is larger than two. Namely, let $V$ and $f$ be as in Theorem \ref{hessiancrit}, and let $\xi\in V({\mathbb R})$ be a zero of $f$. Fix a system $x_1,\dots,x_n$ of local (analytic) coordinates on $V$ centered at $\xi$. Let $2d>0$ be the order of vanishing of $f$ at $\xi$, and let $F_\xi(x_1,\dots,x_n)$ be the homogeneous part of degree $2d$ in the Taylor expansion of $f$ at $\xi$. Let us say that $f$ is \emph{strongly sos at~$\xi$} if there exists a linear basis $g_1,\dots,g_N$ of the space of homogeneous polynomials of degree $d$ in $x_1,\dots,x_n$ such that $F_\xi=g_1^2+\cdots+g_N^2$. (Equivalently, if $F_\xi$ lies in the interior of the sums of squares cone in degree~$2d$.) If $2d=2$, this condition is equivalent to the Hessian of $f$ at $\xi$ being positive definite.
Then the following holds: if $f$ is strongly sos at each of its zeros in $V({\mathbb R})$, then $f$ is a sum of squares in ${\mathbb R}[V]$. For a proof we refer to \cite{sch:future}. As a result, we get a corresponding generalization of Theorem \ref{th:existence-ddim}: if $f$ as in Theorem \ref{th:existence-ddim} is strongly sos at each of its zeros in ${\mathbb T}^d$, then the conclusion of Theorem \ref{th:existence-ddim} holds. \end{Remark}
For simplicity of presentation, we start by applying the result of Theorem \ref{th:existence-ddim} to the $2-$dimensional polynomial $f$ derived from the symbol of the three-directional piecewise linear box spline. This example also motivates the statements of Remark \ref{remark:box_and_cosets_sec2.2}.
\begin{Example} \label{example:b111-sec2.2} The three-directional piecewise linear box spline is defined by its associated trigonometric polynomial $$
p({\boldsymbol{e}}^{-i\omega})= e^{-i(\omega_1+\omega_2)} \cos\left(\frac{\omega_1}{2}\right)
\cos\left(\frac{\omega_2}{2}
\right)\cos\left(\frac{\omega_1+\omega_2}{2}\right), \quad
\omega \in {\mathbb R}^2. $$ Note that $$
\cos\left(\frac{\omega_1}{2}\right)
\cos\left(\frac{\omega_2}{2}
\right)\cos\left(\frac{\omega_1+\omega_2}{2}\right)
=1-\frac{1}{8}\omega^T \left(\begin{matrix} 2&1\\1&2\end{matrix}\right) \omega +
\mathcal{O}(|\omega|^4). $$ Therefore, as the trigonometric polynomial $p$ satisfies sum rules of order $2$, we get $$
f({\boldsymbol{e}}^{-i\omega})=\frac{1}{4}\omega^T \left(\begin{matrix} 2&1\\1&2\end{matrix}\right) \omega +
\mathcal{O}(|\omega|^4). $$ Thus, the Hessian of $f$ at ${\boldsymbol{1}}$ is positive definite.
To determine the other zeroes of $f$, by Lemma \ref{lem:subQMFiso} part $(a)$, we can use either one of the representations \begin{eqnarray*}
f({\boldsymbol{e}}^{-i\omega})&=&1-\sum_{\sigma \in \{0,\pi\}^2} \prod_{\theta \in \{0,1\}^2 \setminus \{0\}}
\cos^2\left(\frac{(\omega+ \sigma)\cdot \theta}{2}\right)\\
&=& \frac{1}{4} \sum_{\chi \in \{0,1\}^2}(1-\cos^2(\omega \cdot \chi)). \end{eqnarray*} It follows that the zeros of $f$ are the points $\omega \in \pi {\mathbb Z}^2$ and, by periodicity of $f$ with period $\pi$ in both coordinate directions, we get that $$
\mbox{Hess}(f)({\boldsymbol{e}}^{-i\omega})=\mbox{Hess}(f)({\boldsymbol{1}}), \quad \omega \in \pi {\mathbb Z}^2, $$ is positive definite at all zeros of $f$. \end{Example}
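The computations in this example are elementary but easy to mistype. The following short Python sketch (added here purely as an illustration; it is not part of the construction and its names are ad hoc) checks the closed form of $f$ at random frequencies and approximates ${\rm Hess}(f)({\boldsymbol{1}})$ by central differences.
\begin{verbatim}
import numpy as np

# Symbol of the three-directional piecewise linear box spline.
def p(w):
    w1, w2 = w
    return np.exp(-1j*(w1 + w2))*np.cos(w1/2)*np.cos(w2/2)*np.cos((w1 + w2)/2)

# f(omega) = 1 - sum over the shifts sigma in {0,pi}^2 of |p(omega+sigma)|^2.
def f(w):
    shifts = [(0.0, 0.0), (np.pi, 0.0), (0.0, np.pi), (np.pi, np.pi)]
    return 1.0 - sum(abs(p(w + np.array(s)))**2 for s in shifts)

rng = np.random.default_rng(0)
for _ in range(5):
    w = rng.uniform(-np.pi, np.pi, 2)
    closed = (np.sin(w[0])**2 + np.sin(w[1])**2 + np.sin(w[0] + w[1])**2)/4
    assert abs(f(w) - closed) < 1e-12   # agrees with the closed form above

# Central-difference Hessian of f at omega = 0.
h = 1e-4
e = np.eye(2)
H = [[(f(h*(e[i] + e[j])) - f(h*(e[i] - e[j]))
       - f(h*(-e[i] + e[j])) + f(-h*(e[i] + e[j])))/(4*h*h)
      for j in range(2)] for i in range(2)]
print(np.round(H, 3))   # approximately [[1, 0.5], [0.5, 1]]
\end{verbatim}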
\begin{Remark} \label{remark:box_and_cosets_sec2.2}
\noindent $(i)$ The result of \cite[Theorem 2.4]{CS07} implies the existence of tight frames for multivariate box-splines. According to the notation in \cite[p.~127]{deBoor}, the corresponding trigonometric polynomial is given by $$
p({\boldsymbol{e}}^{-i\omega})=\prod_{j=1}^n \frac{1+{\boldsymbol{e}}^{-i\omega \cdot \xi^{(j)}}}{2},
\quad \omega \in {\mathbb R}^d, $$ where $\Xi=(\xi^{(1)},\ldots,\xi^{(n)})\in{\mathbb Z}^{d\times n}$ is unimodular and has rank $d$. (Unimodularity means that all $d\times d$-submatrices have determinant $0,1$, or $-1$.) Moreover, $\Xi$ has the property that leaving out any column $\xi^{(j)}$ does not reduce its rank. (This property guarantees continuity of the box-spline and that the corresponding polynomial $p$ satisfies at least sum rules of order $2$.) Then one can show that $$
f= 1-\sum_{\sigma \in G}p^{\sigma*}p^\sigma \ge 0 \quad \hbox{on} \ {\mathbb T}^d, $$ the zeros of $f$ are at $\omega \in \pi {\mathbb Z}^d$ and the Hessian of $f$ at these zeros is positive definite. This yields an alternative proof for \cite[Theorem 2.4]{CS07} in the case of box splines.
\noindent $(ii)$ If the summands $m^{-2}-p_\chi^* p_\chi$ are nonnegative on ${\mathbb T}^d$, then it can be easier to determine the zeros of $f$ by determining the common zeros of all of these summands. \end{Remark}
\begin{Example} \label{ex:3D_butterfly} An attempt to define an interpolatory scheme for 3-D subdivision with dilation matrix $2I_3$ was made in \cite{CMQ}. There are several inconsistencies in \cite{CMQ}, and we give a correct description of the trigonometric polynomial $p$, the so-called subdivision mask. Note that the scheme we present is an extension of the 2-D butterfly scheme to 3-D data in the following sense: if the data are constant along one of the coordinate directions (or along the main diagonal in ${\mathbb R}^3$), then the subdivision procedure preserves this property and is identical with the 2-D butterfly scheme.
We describe the trigonometric polynomial $p$ associated with this 3-D scheme by defining its isotypical components. The isotypical components, in terms of $z_k=e^{-i\omega_k}$, $k=1,2,3$, are given by \small \begin{eqnarray*}
p_{0,0,0}(z_1,z_2,z_3)&=&1/8,\\[12pt]
p_{1,0,0}(z_1,z_2,z_3) &=& \frac{1}{8} \cos\omega_1 +
\frac{\lambda}{4} \Big(\cos(\omega_1+2\omega_2)+\cos(\omega_1+2\omega_3)\\&+&\cos(\omega_1+2\omega_2+2\omega_3)\Big)
- \frac{\lambda}{4} \Big(\cos(\omega_1-2\omega_2)+\cos(\omega_1-2\omega_3)\\&+&
\cos(3\omega_1+2\omega_2+2\omega_3)\Big),\\[12pt]
p_{0,1,0}(z_1,z_2,z_3) &=& p_{1,0,0}(z_2,z_1,z_3),\
p_{0,0,1}(z_1,z_2,z_3) = p_{1,0,0}(z_3,z_1,z_2),\\
p_{1,1,1}(z_1,z_2,z_3) &=& p_{1,0,0}(z_1z_2z_3,z_1^{-1},z_2^{-1}),\\[12pt]
p_{1,1,0}(z_1,z_2,z_3) &=& \Big(\frac{1}{8}-\lambda \Big) \cos(\omega_1+\omega_2) +
\lambda \Big(\cos(\omega_1-\omega_2)+\cos(\omega_1+\omega_2+2\omega_3)\Big)\\
&-&\frac{\lambda}{4} \Big(\cos(\omega_1-\omega_2+2\omega_3)+\cos(\omega_1-\omega_2-2\omega_3)+ \\
&& \cos(3\omega_1+\omega_2+2\omega_3)+\cos(\omega_1+3\omega_2+2\omega_3)\Big),\\[12pt]
p_{1,0,1}(z_1,z_2,z_3) &=& p_{1,1,0}(z_1,z_3,z_2),\qquad
p_{0,1,1}(z_1,z_2,z_3) = p_{1,1,0}(z_2,z_3,z_1), \end{eqnarray*} \normalsize where $\lambda$ is the so-called tension parameter.
The polynomial $p$ also satisfies $$
p(z_1,z_2,z_3)=\frac{1}{16} (1+z_1)(1+z_2)(1+z_3)(1+z_1z_2z_3)q(z_1,z_2,z_3), \quad q({\boldsymbol{1}})=1, $$ which implies sum rules of order $2$.
\begin{itemize} \item[$(a)$] For $\lambda=0$, we have $q(z_1,z_2,z_3)=1/(z_1z_2z_3)$. Hence, $p$ is the scaling symbol of the trivariate
box spline with the direction set $(1,0,0)$, $(0,1,0)$, $(0,0,1)$, $(1,1,1)$ and whose support center is shifted to the origin.
\item[$(b)$] For $0\le \lambda< 1/16$, the corresponding subdivision scheme converges and has a continuous limit function. The only zeros of the associated nonnegative trigonometric polynomial $f$ are at $\pi {\mathbb Z}^3$, and the Hessian of $f$ at these zeros is given by $$
\hbox{Hess}(f)({\boldsymbol{1}})=\hbox{Hess}(f)({\boldsymbol{e}}^{-i\omega})=\left( \begin{array}{ccc} 1-16\lambda &\frac{1}{2}-8\lambda&\frac{1}{2}-8\lambda\\
\frac{1}{2}-8\lambda&1-16\lambda&\frac{1}{2}-8\lambda \\\frac{1}{2}-8\lambda&\frac{1}{2}-8\lambda&1-16\lambda \end{array}\right) $$ for all $\omega \in \pi {\mathbb Z}^3$. The existence of the sos decomposition of $f$ is guaranteed by Theorem \ref{th:existence-ddim} and one possible decomposition of $f$ is computed as follows.
\begin{itemize}
\item[$(b_1)$] Denote $u:=\cos(\omega_1+\omega_2)$, $v:=\cos(\omega_1+\omega_3)$,
$w:=\cos(\omega_2+\omega_3)$, and $\tilde u:=\sin(\omega_1+\omega_2)$, $\tilde v:=\sin(\omega_1+\omega_3)$,
$\tilde w:=\sin(\omega_2+\omega_3)$.
Simple computations yield
\[
p_{1,1,0} =\frac18 -(1-u)(\frac18-\lambda v^2-\lambda w^2)-\lambda (v-w)^2,
\] and \begin{eqnarray*}
\frac{1}{64}&-&|p_{1,1,0}|^2=
\lambda^2 (v^2-w^2)^2 + \Big(\Big(\frac1{16}-\lambda v^2\Big) \\&+&
\Big(\frac1{16}-\lambda w^2\Big)\Big)
\left(\frac1{8}\tilde u^2+\lambda (v-uw)^2+\lambda(w-uv)^2\right). \end{eqnarray*}
Therefore, $\frac{1}{64}-|p_{1,1,0}|^2$ has an sos decomposition with $7$ summands $h_j$, and each $h_j$ has only one nonzero isotypical component.
\item[$(b_2)$] The isotypical component $p_{1,0,0}$ is not bounded by $1/8$; consider, for example, $p_{1,0,0}({\boldsymbol{e}}^{-i\omega})$ at the point $\omega=\left( -\frac{\pi}{6}, -\frac{2\pi}{3}, -\frac{2\pi}{3}\right)$. Yet we obtain, by simple computations, \[
p_{1,0,0} = \frac18\cos\omega_1+\frac{\lambda}{2} A \sin\omega_1,\
A:=\sin 2(\omega_1+\omega_2+\omega_3)-\sin 2\omega_2-\sin 2\omega_3, \] and \[
\frac{1}{16}-|p_{1,0,0}|^2-|p_{0,1,0}|^2-|p_{0,0,1}|^2-|p_{1,1,1}|^2 =
E_{1,0,0}+E_{0,1,0}+E_{0,0,1}+E_{1,1,1},
\] where \[
E_{1,0,0}= \frac{3\lambda}{16} \sin^4\omega_1 +
\frac{\lambda}{64} (2\sin\omega_1- A \cos\omega_1)^2 +
\frac{1-16\lambda}{64} \sin^2\omega_1 (1+\lambda A^2) ; \]
the other $E_{i,j,k}$ are given by the same coordinate transformations as $p_{i,j,k}$. Hence, for $ \frac{1}{16}-|p_{1,0,0}|^2-|p_{0,1,0}|^2-|p_{0,0,1}|^2-|p_{1,1,1}|^2$, we obtain an sos decomposition with $12$ summands $g_j$, each of which has only one nonzero isotypical component. \end{itemize}
Thus, for the trivariate interpolatory subdivision scheme with tension parameter $0\le\lambda<1/16$, by Theorem~\ref{th:LaiSt}, we have explicitly constructed a tight frame with 41 generators $q_j$ as in Theorem \ref{th:UEP}.
\item[c)] For $\lambda=1/16$, the sum rules of order $4$ are satisfied.
In this particular case,
the scheme is $C^1$ and the Hessian of $f$ at ${\boldsymbol{1}}$ is the zero-matrix,
thus the result
of Theorem~\ref{th:existence-ddim} is not applicable. Nevertheless, the sos decomposition
of
$1-\sum p^{\sigma *} p^\sigma$ in b), with further simplifications
for $\lambda=1/16$, gives a tight frame with 31 generators for the trivariate interpolatory subdivision scheme. \end{itemize} \end{Example}
\subsection{Constructions of tight wavelet frames} \label{subsec:construction}
Lemma \ref{lem:subQMFiso} part $(a)$ sometimes yields an elegant method for determining the sum of squares decomposition of the polynomial $f$ in \eqref{def:f} and, thus, constructing the trigonometric polynomials $q_j$ in Theorem \ref{th:UEP}. Note that \begin{equation} \label{idea:method}
f\>=1-\sum_{\sigma \in G} p^{\sigma *} p^\sigma \>=\>1-m\sum_{\chi\in G'} p_\chi^* p_\chi \>=\>m\sum_{\chi\in G'}
\Bigl(\frac1{m^2}-p_\chi^* p_\chi \Bigr). \end{equation} So it suffices to find an sos decomposition for each of the polynomials $m^{-2}-p_\chi^* p_\chi$, provided that they are all nonnegative. This nonnegativity assumption is satisfied, for example, for the special case when all coefficients $p(\alpha)$ of $p$ are nonnegative. This is due to the simple fact that for nonnegative $p(\alpha)$ we get $$
p^*_\chi p_\chi\>\le\>|p_\chi({\boldsymbol{1}})|^2\>=\>m^{-2} $$ on ${\mathbb T}^d$, for all $\chi\in G'$.
The last equality in \eqref{idea:method} allows us to simplify the construction of frame generators considerably. In Example \ref{example:b111_2_construction} we apply this method to the three-directional piecewise linear box spline. Example \ref{ex:butterfly} illustrates the advantage of the representation in \eqref{idea:method} for the butterfly scheme \cite{GDL}, an interpolatory subdivision method with the corresponding mask $p \in {\mathbb C}[T]$ of a larger support, some of whose coefficients are negative. Example \ref{ex:jiang_oswald} shows that our method is also applicable for at least one of the interpolatory $\sqrt{3}-$subdivision schemes studied in \cite{JO}. For the three-dimensional example that also demonstrates our constructive approach see Example \ref{ex:3D_butterfly} part $(b1)$.
\begin{Example}\label{example:b111_2_construction} Consider the three-directional piecewise linear box spline with the symbol $$p(z_1,z_2)\>=\>\frac18\,(1+z_1)(1+z_2)(1+z_1z_2),\quad z_j= e^{-i\omega_j}.$$ The sos decomposition for the isotypical components yields $$
f\>=\>1-m\sum_{\chi\in G'} p_\chi^* p_\chi \>=\>\frac14\sin^2(\omega_1)+
\frac14\sin^2(\omega_2)+\frac14\sin^2({\omega_1+\omega_2}). $$ Thus, in \eqref{id:tildeH} we have a decomposition with $r=3$. Since each of $h_1$, $h_2$, $h_3$ has only one isotypical component, we get a representation $f=\tilde h_1^2+\tilde h_2^2+\tilde h_3^2$ with $3$ $G$-invariant polynomials $\tilde h_j$. By Theorem \ref{th:LaiSt} we get $7$ frame generators. Note that the method in \cite[Example 2.4]{LS06} yields $6$ generators of slightly larger support. The method in \cite[Section 4]{CH} based on properties of the Kronecker product leads to $7$ frame generators whose support is the same as the one of $p$. One can also employ the technique discussed in \cite[Section]{GR} and get $7$ frame generators. \end{Example}
Another prominent example of a subdivision scheme is the so-called butterfly scheme. This example shows the real advantage of treating the isotypical components of $p$ separately for $p$ with larger support.
\begin{Example} \label{ex:butterfly} The butterfly scheme describes an interpolatory subdivision scheme that generates a smooth regular surface interpolating a given set of points \cite{GDL}. The trigonometric polynomial $p$ associated with the butterfly scheme is given by \begin{eqnarray*} p(z_1,z_2) & = & \frac{1}{4}+\frac{1}{8}\Bigl(z_1+z_2+z_1z_2+z_1^{-1}+z_2^{-1}+
z_1^{-1}z_2^{-1}\Bigr) \\ && +\frac{1}{32}\Bigl(z_1^2z_2+z_1z_2^2+z_1z_2^{-1}+z_1^{-1}z_2+z_1^{-2}
z_2^{-1}+z_1^{-1}z_2^{-2}\Bigr) \\ && -\frac1{64}\Bigl(z_1^3z_2+z_1^3z_2^2+z_1^2z_2^3+z_1z_2^3+z_1^2
z_2^{-1}+z_1z_2^{-2} \\ && +z_1^{-1}z_2^2+z_1^{-2}z_2+z_1^{-3}z_2^{-1}+z_1^{-3}z_2^{-2}+
z_1^{-2}z_2^{-3}+z_1^{-1}z_2^{-3}\Bigr). \end{eqnarray*} Its first isotypical component is $p_{0,0}=\frac14$, which is the case for every interpolatory subdivision scheme. The other isotypical components, in terms of $z_k=e^{-i\omega_k}$, $k=1,2$, are given by $p_{1,0} (z_1,z_2)=\frac14\cos(\omega_1)+\frac1{16}\cos(\omega_1+2\omega_2) -\frac1{32}\cos(3\omega_1+2\omega_2)-\frac1{32}\cos(\omega_1 -2\omega_2)$, i.e., $$
p_{1,0}(z_1,z_2)\>=\>\frac14\cos(\omega_1)+\frac18\sin^2(\omega_1)
\cos(\omega_1+2\omega_2), $$ and $p_{0,1}(z_1,z_2)=p_{1,0}(z_2,z_1)$, $p_{1,1}(z_1,z_2)=p_{1,0} (z_1z_2,z_2^{-1})$. Note that on ${\mathbb T}^2$ $$
|p_\chi|\le\frac14 \quad \hbox{for all} \quad \chi\in G', $$ thus, our method is applicable. Simple computation shows that \begin{eqnarray*}
1-16\,\bigl|p_{1,0}(z_1,z_2)
\bigr|^2&=&1-\cos^2(\omega_1)-\cos(\omega_1)\sin^2(\omega_1)\cos (\omega_1+2\omega_2)\\&-&\frac14\sin^4(\omega_1)\cos^2(\omega_1+ 2\omega_2). \end{eqnarray*} Setting $u_j:=\sin(\omega_j)$, $j=1,2$, $v:= \sin(\omega_1+\omega_2)$, $v':=\sin(\omega_1-\omega_2)$, $w:=\sin (\omega_1+2\omega_2)$ and $w':=\sin (2\omega_1+\omega_2)$, we get
$$1-16\,\bigl|p_{1,0}(z_1,z_2)\bigr|^2\>=\>\frac14\,u_1^2\Bigl(w^2+ (u_2^2+v^2)^2+2u_2^2+2v^2\Bigr).$$ Therefore, \begin{eqnarray*} f=1-\sum_{\sigma\in G} p^{\sigma *} p^\sigma & = & \frac14\Bigl(u_1^2u_2^2+u_1^2
v^2+u_2^2v^2\Bigr)+\frac1{16}\Bigl(u_1^2w^2+u_2^2w'^2+v^2v'^2
\Bigr) \\ && +\frac1{16}\Bigl(u_1^2(u_2^2+v^2)^2+u_2^2(u_1^2+v^2)^2+v^2(u_1^2
+u_2^2)^2\Bigr). \end{eqnarray*} This provides a decomposition $\displaystyle f= \sum_{j=1}^9 h_j^* h_j$ into a sum of $9$ squares. As in the previous example, each $h_j$ has only one nonzero isotypical component $h_{j,\chi_j}$. Thus, by part $(b)$ of Lemma \ref{lem:subQMFiso} and by Theorem \ref{th:LaiSt}, there exists a tight frame with $13$ generators. Namely, as in the proof of Theorem \ref{th:LaiSt}, we get \begin{eqnarray*}
q_1(z_1,z_2)&=&\frac{1}{2}-\frac{1}{2}p(z_1,z_2), \quad q_2(z_1,z_2)=
\frac{1}{2}z_1-2 p(z_1,z_2)p_{(1,0)}^*(z_1,z_2) \\
q_3(z_1,z_2)&=&q_2(z_2,z_1), \quad q_4(z_1,z_2)=q_2(z_1z_2,z_2^{-1})\\
q_{4+j}(z_1,z_2)&=&p(z_1,z_2) \widetilde{h}^*_{j,\chi_j}, \quad j=1, \dots, 9, \end{eqnarray*} where $\widetilde{h}_{j,\chi_j}$ are the lifted isotypical components defined as in Lemma \ref{lem:subQMFiso}. Let ${\cal N}=\{0, \dots ,7\}^2$, $p={\boldsymbol{p}} \cdot {\boldsymbol{x}}$ and $q_j={\boldsymbol{q}}_j \cdot {\boldsymbol{x}}$ with ${\boldsymbol{x}}= [ z^\alpha \ : \ \alpha \in {\cal N}]^T$. The corresponding null-matrix $O \in {\mathbb R}^{64 \times 64}$ satisfying \eqref{id:null-matrices} is given by $$
{\boldsymbol{x}}^* \cdot O \cdot {\boldsymbol{x}}= {\boldsymbol{x}}^* \left[
\sum_{j=1}^{13} {\boldsymbol{q}}_j^T {\boldsymbol{q}}_j -\hbox{diag}({\boldsymbol{p}})+{\boldsymbol{p}}^T {\boldsymbol{p}} \right] {\boldsymbol{x}}. $$ Note that other factorizations of the positive semi-definite matrix $\hbox{diag}({\boldsymbol{p}})-{\boldsymbol{p}}^T {\boldsymbol{p}}+O$ of rank $13$ lead to other possible tight frames with at least $13$ frame generators. An advantage of using semi-definite programming techniques is that it can possibly yield $q_j$ of smaller degree and reduce the rank of $\hbox{diag}({\boldsymbol{p}})-{\boldsymbol{p}}^T {\boldsymbol{p}}+O$.
Using the technique of semi-definite programming the authors in \cite{CS08} constructed numerically a tight frame for the butterfly scheme with 18 frame generators. The advantage of our construction is that the frame generators are determined analytically. The disadvantage is that their support is approximately twice as large as that of the frame generators in \cite{CS08}. \end{Example}
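The trigonometric identity for $1-16\,|p_{1,0}|^2$ used above can be double-checked numerically. The following short Python sketch (ours, for illustration only) does so at random frequencies.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
for _ in range(10):
    w1, w2 = rng.uniform(-np.pi, np.pi, 2)
    # isotypical component p_{1,0} of the butterfly symbol
    p10 = 0.25*np.cos(w1) + 0.125*np.sin(w1)**2*np.cos(w1 + 2*w2)
    u1, u2 = np.sin(w1), np.sin(w2)
    v, w = np.sin(w1 + w2), np.sin(w1 + 2*w2)
    rhs = 0.25*u1**2*(w**2 + (u2**2 + v**2)**2 + 2*u2**2 + 2*v**2)
    assert abs(1 - 16*p10**2 - rhs) < 1e-12
print("factorization of 1 - 16|p_{1,0}|^2 confirmed at 10 random points")
\end{verbatim}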
The next example is one of the family of interpolatory $\sqrt{3}-$subdivision studied in \cite{JO}. The associated dilation matrix is $\displaystyle{M=\left[\begin{array}{rr} 1&2\\-2&-1 \end{array} \right]}$ and $m=3$.
\begin{Example} \label{ex:jiang_oswald} The symbol of the scheme is given by $$
p(z_1,z_2)= p_{(0,0)}(z_1,z_2)+ p_{(1,0)}(z_1,z_2)+ p_{(0,1)}(z_1,z_2) $$ with isotypical components $p_{(0,0)}=\frac{1}{3}$, \begin{eqnarray*}
p_{(0,1)}(z_1,z_2)=\frac{4}{27}(z_2+z_1^{-1}+z_1z_2^{-1})-\frac{1}{27}(z_1^{-2}z_2^{2}+z_1^2+z_2^{-2}) \end{eqnarray*} and $ p_{(1,0)}(z_1,z_2)=p_{(0,1)}(z_2,z_1)$. We have by Lemma
\ref{lem:subQMFiso} and due to the equality $|p_{(0,1)}(z_1,z_2)|^2=|p_{(1,0)}(z_1,z_2)|^2$ $$
1-\sum_{\sigma \in G} p^{\sigma *} p^\sigma=6\left( \frac19-p_{(0,1)}^*p_{(0,1)}\right), $$ thus it suffices to consider only \begin{eqnarray*}
\frac{1}{9}&-&|p_{(0,1)}(z_1,z_2)|^2=3^{-2}-27^{-2} \Big(51+16\cos(\omega_1+\omega_2)+16\cos(2\omega_1-\omega_2) \\ &+&16\cos(\omega_1-2\omega_2)+2\cos(2\omega_1+2\omega_2)+2\cos(2\omega_1-4\omega_2)\\&+&2\cos(4\omega_1-2\omega_2) -8\cos(3\omega_1)-8\cos(3\omega_2)-8\cos(3\omega_1-3\omega_2)\Big). \end{eqnarray*} Numerical tests show that this polynomial is nonnegative. \end{Example}
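One possible form of the numerical test mentioned above is the following Python sketch (ours): it samples $\frac19-|p_{(0,1)}|^2$ on a fine grid of the torus and reports its minimum, which turns out to be nonnegative up to rounding errors.
\begin{verbatim}
import numpy as np

# Isotypical component p_{(0,1)} of the interpolatory sqrt(3)-scheme.
def p01(w1, w2):
    z1, z2 = np.exp(-1j*w1), np.exp(-1j*w2)
    return (4/27)*(z2 + 1/z1 + z1/z2) - (1/27)*(z2**2/z1**2 + z1**2 + 1/z2**2)

w = np.linspace(-np.pi, np.pi, 601)
W1, W2 = np.meshgrid(w, w)
vals = 1/9 - np.abs(p01(W1, W2))**2
print(vals.min())   # observed minimum is ~0, i.e. nonnegative up to rounding
\end{verbatim}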
\end{document}
\begin{document}
\title{A geometric one-sided inequality for zero-viscosity limits}
\titlerunning{A geometric one-sided inequality} \author{Yong-Jung Kim
}
\institute{Department of Mathematical Sciences, KAIST \\ 291 Daehak-ro, Yuseong-gu, Daejeon 305-701, Republic of Korea\\ \\ Fax: +82-42-350-5710\\ \\ email: [email protected]}
\date{\today}
\maketitle
\begin{abstract} The Oleinik inequality for conservation laws and Aronson-Benilan type inequalities for porous medium or p-Laplacian equations are one-sided inequalities that provide the fundamental features of the solution such as the uniqueness and sharp regularity. In this paper such one-sided inequalities are unified and generalized for a wide class of first and second order equations in the form of $$ u_t=\sigma(t,u,u_x,u_{xx}),\quad u(x,0)=u^0(x)\ge0,\quad t>0,\,x\in\mbox{\boldmath$R$}, $$ where the non-strict parabolicity ${\partial\over\partial q} \sigma(t,z,p,q)\ge0$ is assumed. The generalization or unification of one-sided inequalities is given in a geometric statement that the zero level set $$ A(t;m,x_0):=\{x:\rho_m(x-x_0,t)-u(x,t)>0\} $$ is connected for all $t,m>0$ and $x_0\in\mbox{\boldmath$R$}$, where $\rho_m$ is the fundamental solution with mass $m>0$. This geometric statement is shown to be equivalent to the previously mentioned one-sided inequalities and used to obtain uniqueness and TV boundedness of conservation laws without convexity assumption. Multi-dimensional extension for the heat equation is also given. \end{abstract}
\tableofcontents
\section{Introduction}
Studies on partial differential equations, or PDEs for brevity, are mostly focused on finding properties of PDEs within a specific discipline and on developing a technique specialized to them. However, finding a common structure over different disciplines and unifying theories from different subjects into a generalized theory is the direction that mathematics should go in. The purpose of this paper is to develop geometric arguments to combine Oleinik or Aronson-Benilan type one-sided estimates that arise from various disciplines from hyperbolic to parabolic problems. This unification of existing theories from different disciplines will provide a true generalization of such theories to a wider class of PDEs. It is clear that algebraic or analytic formulas and estimates that depend on the specific PDE cannot provide such a unified theory and we need a different approach. In this paper we will see that a geometric structure of solutions may provide an excellent alternative in doing such a unification.
The main example of this paper is the entropy solution of an initial value problem of a scalar conservation law, \begin{equation}\label{c law} \partial_t u+\partial_x f(u)=0,\ u(x,0)=u^0(x),\quad t>0,\ x\in\mbox{\boldmath$R$}. \end{equation} Dafermos \cite{MR0481581} and Hoff \cite{MR688972} showed that, if the flux $f$ is convex, the entropy solution satisfies the Oleinik inequality, \begin{equation}\label{OleinikInequality} \partial_x f'(u)\big(=f''(u)u_x\big)\le {1\over t},\quad t>0,\ x\in\mbox{\boldmath$R$}, \end{equation} in a weak sense. This is a sharp version of a one-sided inequality obtained by Oleinik \cite{MR0094541} for a uniformly convex flux case. This inequality provides a uniqueness criterion and the sharp regularity for the admissible weak solution. However, if the flux is not convex, then the Oleinik estimate fails.
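In the convex case, equality in (\ref{OleinikInequality}) is attained by fundamental solutions. For instance, for the Burgers flux $f(u)=u^2/2$, the nonnegative source-type solution with mass $m>0$, $$ \rho_m(x,t)=\left\{\begin{array}{ll} x/t, & 0<x<\sqrt{2mt},\\ 0, & \mbox{otherwise}, \end{array}\right. $$ satisfies $\partial_x f'(\rho_m)=\partial_x \rho_m={1\over t}$ on its support.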
One may find a similar theory in a different discipline of PDEs, nonlinear diffusion equations, \begin{equation}\label{NONLINEARDIFFUSION} \partial_t u=\nabla\cdot(\phi(u,\nabla u)\nabla u),\ u(x,0)=u^0(x),\quad t>0,\ x\in\mbox{\boldmath$R$}^n. \end{equation} This equation is called the porous medium equation (PME) if $\phi=qu^{q-1}$ with $q>1$, the fast diffusion equation (FDE) with $q<1$, and the heat equation if $q=1$. Aronson and B\'{e}nilan \cite{MR524760} showed that, for $q\ne1$, its solution satisfies a one-sided inequality \begin{equation}\label{ABInequality} \Delta \wp(u)\ge -{k\over t},\quad k:={n\over n(q-1)+2},\ \wp(u):={q\over q-1}u^{q-1}. \end{equation}
This inequality played the same key role in the development of nonlinear diffusion theory that the Oleinik inequality did for conservation laws. The equation (\ref{NONLINEARDIFFUSION}) is called the $p$-Laplacian equation (PLE) if $\phi=|\nabla u|^{p-2}$ with $p>1$ and its solution satisfies a similar one-sided inequality. However, these inequalities depend on the homogeneity of the function $\phi$.
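In the model case of the PME in one space dimension ($n=1$, so that $k={1\over q+1}$), equality in (\ref{ABInequality}) is attained by the source-type Barenblatt solution, whose pressure is the parabolic profile $$ \wp(\rho_m)(x,t)={\big(r(t)^2-x^2\big)_+\over 2(q+1)t},\qquad r(t)=c\, t^{1/(q+1)}, $$ where $(\cdot)_+$ denotes the positive part and the constant $c>0$ is determined by the mass $m$ and by $q$; hence $\partial_x^2\, \wp(\rho_m)=-{1\over (q+1)t}$ on its support.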
The Oleinik inequality for hyperbolic conservation laws and the Aronson-Benilan type inequalities for porous medium or p-Laplacian equations are one-sided inequalities that provide key features of solutions such as the uniqueness and sharp regularity. Even though these key inequalities come from different disciplines of PDEs, they reflect the very same phenomenon. However, such one-sided estimates do not hold without a convexity or homogeneity assumption on the problem. Such a key estimate for the general situation has long been the missing ingredient for theoretical progress in the related disciplines.
The purpose of this paper is to present a unified and generalized version of such one-sided inequalities for a general first or second order differential equation in the form of \begin{equation}\label{EQN} u_t=\sigma(t,u,u_x,u_{xx}),\quad u(x,0)=u^0(x)\ge0,\quad t>0,\,x\in\mbox{\boldmath$R$}, \end{equation} where the subscripts denote partial derivatives and the initial value $u^0$ is nonnegative and bounded. The main hypothesis on $\sigma$ is the parabolicity, \begin{equation}\label{positivity} 0\le {\partial\over\partial q}\sigma(t,z,p,q)\le C<\infty, \end{equation} which is not necessarily uniform. Here, we denote $u,u_x$ and $u_{xx}$ by $z,p$ and $q$, respectively.
The solution of (\ref{EQN}) is not unique in general. For example, the conservation law (\ref{c law}) is in this form with $\sigma(t,z,p,q)=f'(z)p$ and its weak solution is not unique. However, it is well known that the zero viscosity limit of a conservation law is the entropy solution. For the wide class of problems in (\ref{EQN}), one can still consider zero-viscosity limits as follows. Let ${\varepsilon}>0$ be small and $u^\eps (x,t)$ be the solution to a perturbed problem \begin{equation}\label{EQNxperturbed} \partial_t u^\eps ={\sigma^{\varepsilon}}(t,u^\eps ,u^\eps _x,u^\eps _{xx}),\quad u^\eps (x,0)=u^{\eps0}(x),\quad t>0,\,x\in\mbox{\boldmath$R$}, \end{equation} where ${u^\eps }^0$ and ${\sigma^{\varepsilon}}$ are smooth perturbations of $u^0$ and $\sigma$, respectively, and ${\sigma^{\varepsilon}}$ satisfies \begin{equation}\label{positivity_perturbed} {\varepsilon}\le{\partial\over\partial q}{\sigma^{\varepsilon}}(t,z,p,q)\le{\cal C}<\infty. \end{equation} If $\sigma$ is smooth, one may simply put ${\sigma^{\varepsilon}}=\sigma+{\varepsilon} q$. In fact ${\sigma^{\varepsilon}}$ is required to be $C^2$ with respect to $q$, $C^1$ with respect to $p$, and $C^0$ with respect to $z$. For a conservation law case, $\sigma=f'(z)p$ is already smooth enough and such a perturbation is standard. The convergence of the perturbed problem is known for many cases including PME and PLE. The focus of this paper is the structure of the limit of $u^{\varepsilon}$ as ${\varepsilon}\to0$ and hence we assume such a convergence.
Let $\rho_m$ be a nonnegative fundamental solution of (\ref{EQN}) with mass $m>0$, i.e., $$ \rho_m(x,t)\to m\delta(x)\mbox{~~as~~} t\to0\mbox{~~in~~}L^1(\mbox{\boldmath$R$}). $$ The idea for the unification of the one-sided inequalities comes from the observation that they are actually comparisons with fundamental solutions, where a fundamental solution satisfies the equality. For example, the Oleinik inequality can be written as $\partial_x f'(u(x,t))\le \partial_x f'(\rho_m(x,t))$ for all $x\in{\rm supp}\thinspace(\rho_m(t))$. Similarly, the Aronson-B\'{e}nilan inequality becomes $\Delta(\wp(u))\ge\Delta(\wp(\rho_m))$ for all $x\in{\rm supp}\thinspace(\rho_m(t))$. This observation indicates that the unified version of such one-sided inequalities should be a comparison with fundamental solutions.
The unification process is to find the basic common feature, which will be given in terms of the geometric concept of connectedness of a level set. First we introduce the connectedness of the zero level set in a modified way. \begin{definition}\label{Def.Connectedness} The zero level set $A:=\{x\in\mbox{\boldmath$R$}:e(x)>0\}$ is connectable by adding zeros, or simply connectable, if there exists a connected set $B$ such that $A\subset B\subset\{x\in\mbox{\boldmath$R$}:e(x)\ge0\}$. \end{definition} In other words, if we can connect the zero level set $A:=\{x\in\mbox{\boldmath$R$}:e(x)>0\}$ by adding a part of the zeros of the function $e(x)$, we call it connectable or simply connected. For example, if the graph of $e$ is as given in Figure \ref{fig0}, its zero level set is connectable by adding zeros. In other words, we are actually interested in sign changes. In this paper the connectedness of the level set is always understood in this sense. Notice that, in the uniformly parabolic case ${\partial\over\partial q}\sigma>{\varepsilon}>0$, the usual connectedness is enough. However, to include the case ${\partial\over\partial q}\sigma\ge0$, we need to generalize the connectedness as in the definition. \begin{figure}\label{fig0}
\end{figure}
Finally, we are ready to present the unified version of the one-sided inequalities. \begin{theorem}[Geometric one-sided inequality] \label{thm.generalization} Let $u(x,t)$ be the nonnegative zero-viscosity solution of (\ref{EQN})-(\ref{positivity}), $\rho_m(x,t)$ be the fundamental solution with mass $m>0$, and $e_{m,x_0}(x,t):=\rho_m(x-x_0,t)-u(x,t)$. Then, the zero level set $A(t;m,x_0):=\{x\in\mbox{\boldmath$R$}:e_{m,x_0}(x,t)>0\}$ is connectable for all $m,t>0$ and $x_0\in\mbox{\boldmath$R$}$. \end{theorem}
The proof of Theorem \ref{thm.generalization} is given in Section \ref{Sect.ProofOfMainTheorem} using the zero set theory (see \cite{MR953678,MR672070}). Main parts of this paper come after the proof. It is shown that the connectedness of the level set is equivalent to the Oleinik inequality for a conservation law with a convex flux, Theorem \ref{thm.equi1}, and the Aronson-Benilan inequality for the porous medium and the fast diffusion equation, Theorem \ref{thm.equi2}. In this way we may see that the connectedness of the level set is a true unification of the one-sided inequalities from different disciplines.
Estimates for solutions of PDEs are usually obtained by using analytical relations, not geometrical ones. However, the geometric approach in this paper will show that the latter are equally useful and convenient. In fact, in certain situations, geometric relations provide a simple and intuitive way to estimate solutions. One of the purposes of this paper is to develop geometric approaches to estimate solutions of PDEs. In Section \ref{Sect.Steepness}, the connectivity of the zero level set in Theorem \ref{thm.generalization} is developed to obtain geometric steepness estimates of the solution. A bounded solution is compared to fundamental solutions in Theorem \ref{thm.steepness}. The steepness comparison can be considered as an estimate of solution gradients. However, in a delicate situation as in the theorem, geometric arguments may provide a relatively simple and intuitive way to estimate solutions which is not available by the usual analytical approaches.
The steepness comparison is used as a key to show the uniqueness of the solution to a conservation law without convexity in Theorem \ref{thm.uniquenessCLAW}. It is shown in the theorem that, if the zero level set in Theorem \ref{thm.generalization} is connected for all fundamental solutions, then the solution is the unique entropy solution even without the convexity assumption. This steepness comparison also shows that the total variation of the solution is uniformly bounded for any given time and bounded domain in Theorem \ref{thm.TVB}. Such an estimate is well known for the convex flux case, where the Oleinik inequality is the key for the estimate. Hence it is not surprising that the geometric generalization of the Oleinik inequality gives a similar TV estimate without the convexity assumption.
The challenge of the geometric approach of this paper is to extend it to multi-dimensions. The main difficulty is that there is no multi-dimensional version of a lap number theory nor a zero set theory. In fact, the number of connected components of zero level set has no monotonicity property, which is the reason why there is no multi-dimensional version of such theories. However, our interest is a special case of comparing a general bounded solution $u(x,t)$ to a fundamental solution $\rho_m(x,t)$. Such connectedness will provide the steepness comparison for multi-dimensions. In fact, it is proved in Theorem \ref{thm.HeatEqn} that the level set is convex for the heat equation. A brief discussion for an extension of theory to multi-dimensions is given in Section \ref{sect.Rn}.
\section{Proof of the geometric one-sided inequality} \label{Sect.ProofOfMainTheorem} In this section we prove Theorem \ref{thm.generalization}. The proof depends on the monotone decrease of the lap number or the number of zero points (see \cite{MR953678,MR672070,Sturm}). For an equation in a divergence form, the lap number theory is more convenient. Since our equation (\ref{EQN}) is in a non-divergence form, the zero set theory is more convenient. The following lemma is a simplified version of Angenent \cite[Theorem B]{MR953678}. \begin{lemma}[zero set theory] \label{lem.angenent88} Let $e(x,t)$ be a nontrivial bounded solution to $$ e_t=a(x,t)e_{xx}+b(x,t)e_x+c(x,t)e,\quad e(x,0)=e_0(x), $$ where $a\ge{\varepsilon}$ for some ${\varepsilon}>0$ and $a,a^{-1},a_t,a_x,a_{xx},b,b_t,b_x,c$ are bounded. Then, for all $t>0$, the zeros of $e(x,t)$ are discrete and the number of zeros are decreasing as $t\to\infty$. \end{lemma}
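Before applying the lemma, we illustrate its conclusion in the simplest case $a\equiv1$, $b\equiv c\equiv0$ by a rough finite-difference experiment (the following Python sketch is ours and is only meant as an illustration; it uses periodic boundary conditions for simplicity).
\begin{verbatim}
import numpy as np

# Explicit finite differences for e_t = e_xx on a periodic grid; the number
# of sign changes of the discrete solution is recorded at a few times and
# is observed to be non-increasing, in accordance with the zero set theory.

N = 200
x = np.linspace(0, 2*np.pi, N, endpoint=False)
dx = x[1] - x[0]
dt = 0.4*dx**2                      # r = dt/dx^2 <= 1/2 for stability
e = np.sin(x) - 0.9*np.sin(5*x)     # initial datum with many sign changes

def sign_changes(v):
    s = np.sign(v)
    s = s[s != 0]
    return int(np.sum(s[1:]*s[:-1] < 0))

counts = []
for n in range(2001):
    if n % 500 == 0:
        counts.append(sign_changes(e))
    e = e + dt/dx**2*(np.roll(e, -1) - 2*e + np.roll(e, 1))
print(counts)    # e.g. a non-increasing sequence ending at 2
\end{verbatim}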
We will apply this lemma to the difference $e(x,t):=\rho_m(x,t)-u(x,t)$ and show that the zero level set $A:=\{x:e(x,t)>0\}$ is connected (or connectable) for all $t>0$. The number of sign changes of $e(x,t)$ for a small $t>0$ is at most two since $\rho_m(x,t)$ is a delta sequence as $t\to0$ and $u(x,0)$ is bounded and nonnegative. Hence what we need is a special case of the zero set theory. Notice that the zero set is just the boundary of the zero level set and hence the theory can be written in terms of the number of connected components of the zero level set. This is the idea that can be naturally extended to multi-dimensions. The following corollary is the one we need for our purpose.
\begin{corollary} \label{cor.zeroset} Under the same assumptions as in Lemma \ref{lem.angenent88}, $A(t):=\{x\in\mbox{\boldmath$R$}: e(x,t)>0\}$ is connected for all $t>0$ if $A(0)$ is connected. \end{corollary} \begin{proof} If $A(0)=\mbox{\boldmath$R$}$ or $A(0)=\emptyset$, then the initial value $e_0(x)$ has no zero point. Therefore, $e(x,t)$ has no zero for all $t>0$ by the zero set theory (or by the maximum principle) and hence $A(t)$ is also connected. Suppose that $A(0)$ is a half real line. Then, $e(x,t)$ has at most one zero and hence $A(t)$ is also connected. Suppose that $A(0)$ is a bounded interval. Then, $e(x,t)$ has at most two zero points. If $e(x,t)$ has no or a single zero, then $A(t)$ is connected. Suppose that $e(x,t)$ has two zeros, $x_1(t)<x_2(t)$. Then, the zero set theory implies that $e(x,\tau)$ has two zeros $x_1(\tau)<x_2(\tau)$ for all $0<\tau<t$. Since $e(x,t)$ has the same sign on the domain bounded by $\tau=0$, $\tau=t$, $x=x_1(\tau)$ and $x=x_2(\tau)$, $A(t)$ is an interval and hence connected. $
\qed$\end{proof}
Consider a solution $u(x,t)$ of \begin{equation}\label{EQNx} \partial_t u={\tilde \sigma}(x,t,u,u_x,u_{xx}),\ u(x,0)=u^0(x)\ge0,\ t>0,\,x\in\mbox{\boldmath$R$}, \end{equation} where the initial value $u^0$ is nonnegative and bounded. We consider a parabolic case that satisfies \begin{equation}\label{positivityx} 0\le{\partial\over\partial q}{\tilde \sigma}(x,t,z,p,q)\le C. \end{equation} In this notation, ${\tilde \sigma}$ is allowed to have $x$ dependency and hence is a generalized form of (\ref{EQN})--(\ref{positivity}). For the uniqueness of the problem we consider the zero-viscosity limit as usual. Let ${\varepsilon}>0$ be small and $u^\eps (x,t)$ be the solution to a perturbed problem \begin{equation}\label{EQNxperturbed} \partial_t u^\eps ={\tilde \sigma^{\varepsilon}}(x,t,u^\eps ,u^\eps _x,u^\eps _{xx}),\ u^\eps (x,0)=u^{\eps0}(x),\ t>0,\,x\in\mbox{\boldmath$R$}, \end{equation} where ${u^\eps }^0$ and ${\tilde \sigma^{\varepsilon}}$ are smooth perturbations of $u^0$ and ${\tilde \sigma}$, respectively, and ${\tilde \sigma^{\varepsilon}}$ satisfies \begin{equation}\label{positivity_perturbed} {\varepsilon}\le{\partial\over\partial q}{\tilde \sigma^{\varepsilon}}(x,t,z,p,q)\le{\cal C}<\infty. \end{equation} Notice that the perturbed problem is a special case of the original problem. Hence the properties of the solutions of (\ref{EQNx}) hold true for solutions of perturbed problem. However, certain properties hold for the perturbed problem only, which will be discussed below.
The regularity of the solution $u^\eps $ of the perturbed problem, the convergence to a weak solution $u^\eps \to u$ as ${\varepsilon}\to0$, and the uniqueness of the limit are known for several cases such as conservation laws, porous medium equations, and $p$-Laplacian equations. We will call this limit the zero-viscosity limit. However, there is no such theory under the generality of (\ref{EQNx})--(\ref{positivityx}). Therefore, to complete the theory for an individual equation, such a zero-viscosity limit should be obtained first. The following study concerns the structure of such a zero-viscosity limit when it does exist.
Let $u(x,t)$ be the solution of (\ref{EQNx}) given as a zero-viscosity limit of solutions $u^\eps $ of the perturbed problem (\ref{EQNxperturbed}), i.e., $$ u^\eps \to u\mbox{~~~a.e.~~~as~~~}{\varepsilon}\to0. $$ The fundamental solution $\rho_m(x,t)$ is also obtained as a zero-viscosity limit from the same perturbation process. Hence we assume that $u^{\varepsilon}$ and $\rho_m^{\varepsilon}$ are smooth solutions of the perturbed problem that converge to $u$ and $\rho_m$ as ${\varepsilon}\to0$, respectively. These limits are solutions of the original problem.
\begin{theorem}\label{thm.generalization x} Let $u(x,t)$ be the nonnegative zero-viscosity solution of (\ref{EQNx})--(\ref{positivityx}), $\rho_{m,x_0}(x,t)$ be a fundamental solution with $\rho_{m,x_0}(x,0)=m\delta(x-x_0)$, and $e_{m,x_0}(x,t):=\rho_{m,x_0}(x,t)-u(x,t)$. Then, the zero level set $A(t;m,x_0):=\{x\in\mbox{\boldmath$R$}:e_{m,x_0}(x,t)>0\}$ is connectable for all $m,t>0$ and $x_0\in\mbox{\boldmath$R$}$. \end{theorem} \begin{proof} Let $u^\eps $ be the smooth solution of (\ref{EQNxperturbed}) that converges to $u$ as ${\varepsilon}\to0$. Similarly, let $\rho^\eps _{m,x_0}$ be the smooth solution that converges to $\rho_{m,x_0}$ as ${\varepsilon}\to0$. The proof of the theorem consists of two steps. The first step is to show that the zero level set of the perturbed problem, $$A^{\varepsilon}(t;m,x_0):=\{x\in\mbox{\boldmath$R$}:\rho^\eps _{m,x_0}(x,t)-u^\eps (x,t)>0\},$$ is connected. Let $e_{m,x_0}^{\varepsilon}(x,t):=\rho^\eps _{m,x_0}(x,t)-u^\eps (x,t)$. Then, subtracting (\ref{EQNxperturbed}) from the corresponding equation for $\rho^\eps _{m,x_0}$ gives $$ \partial_t e^{\varepsilon}_{m,x_0} ={\tilde \sigma}^{\varepsilon}(x,t,\rho^\eps _m,\partial_x \rho^\eps _{m,x_0},\partial^2_{x}\rho^\eps _{m,x_0}) -{\tilde \sigma}^{\varepsilon}(x,t,u^\eps ,\partial_xu^\eps ,\partial_x^2u^\eps ). $$ One may rewrite it as $$ \partial_t e^{\varepsilon}_{m,x_0}=a(x,t)\partial_x^2e^{\varepsilon}_{m,x_0} +b(x,t)\partial_x e^{\varepsilon}_{m,x_0}+c(x,t)e_{m,x_0}^{\varepsilon}, $$ where \begin{eqnarray*} &&a(x,t)\\ &&\ :={{\tilde \sigma}^{\varepsilon}(x,t,\rho^\eps _{m,x_0}, \partial_x\rho^\eps _{m,x_0},\partial_x^2\rho^\eps _{m,x_0}) -{\tilde \sigma}^{\varepsilon}(x,t,\rho^\eps _{m,x_0}, \partial_x\rho^\eps _{m,x_0},u^\eps _{xx})\over \partial_x^2\rho^\eps _{m,x_0}-u^\eps _{xx}},\\ &&b(x,t):={{\tilde \sigma}^{\varepsilon}(x,t,\rho^\eps _{m,x_0}, \partial_x\rho^\eps _{m,x_0},u^\eps _{xx}) -{\tilde \sigma}^{\varepsilon}(x,t,\rho^\eps _{m,x_0},u^\eps _x,u^\eps _{xx})\over \partial_x\rho^\eps _{m,x_0}-u^\eps _{x}},\\ &&c(x,t):={{\tilde \sigma}^{\varepsilon}(x,t,\rho^\eps _{m,x_0},u^\eps _x,u^\eps _{xx}) -{\tilde \sigma}^{\varepsilon}(x,t,u^\eps ,u^\eps _x,u^\eps _{xx})\over \rho^\eps _{m,x_0}-u^\eps }. \end{eqnarray*} The regularity of ${\tilde \sigma^{\varepsilon}}$, the smoothness of the solutions $u^\eps $ and $\rho^\eps _{m,x_0}$, and the uniform parabolicity in(\ref{positivity_perturbed}) imply that $a\ge{\varepsilon},a^{-1},a_t,a_x,a_{xx},b,b_x $ and $c$ are bounded. It is clear that the number of connected components of the zero level set $A^{\varepsilon}(t;m,x_0):=\{x\in\mbox{\boldmath$R$}:\rho^\eps _{m,x_0}(x,t)-u^\eps (x,t)>0\}$ is one for $t>0$ small since $\rho^\eps _{m,x_0}(x,t)$ is a delta-sequence as $t\to0$ and the initial value $u^{\eps0}(x)$ is bounded and smooth. Therefore, Corollary \ref{cor.zeroset} implies that the set $A^{\varepsilon}(t;m,x_0)$ is connected for all $m,t>0$ and $x_0\in\mbox{\boldmath$R$}$.
Next we show that the zero level set $A(t;m,x_0):=\{x\in\mbox{\boldmath$R$}:e_{m,x_0}(x,t)>0\}$ is connectable. The advantage of the use of the connectability in Definition \ref{Def.Connectedness} is that such a geometric structure is preserved after the above limiting process. Suppose that $A(t;m,x_0)$ is not connectable. Then $A(t;m,x_0)$ has two disjoint components that cannot be connected by simply adding zeros of $e_{m,x_0}:=\rho_{m,x_0}(x,t)-u(x,t)$. In other words there is a negative point of $e_{m,x_0}$ between two components of $A(t;m,x_0)$. Therefore, there are three points $x_1<x_2<x_3$ such that $e_{m,x_0}(x_1,t)>0$, $e_{m,x_0}(x_2,t)<0$ and $e_{m,x_0}(x_3,t)>0$. Since $e_{m,x_0}^{\varepsilon}\to e_{m,x_0}$ pointwise as ${\varepsilon}\to0$, there exists ${\varepsilon}_0>0$ such that $e^{{\varepsilon}_0}_m(x_1,t),e^{{\varepsilon}_0}_{m,x_0}(x_3,t)>0$ and $e^{{\varepsilon}_0}_{m,x_0}(x_2,t)<0$, i.e., $A^{{\varepsilon}_0}(t;m,x_0)$ is disconnected. However, it contradicts the previous result and we may conclude that $$A(t;m,x_0):=\{x\in\mbox{\boldmath$R$}:e_{m,x_0}(x,t)>0\}$$ is connectable.$
\qed$ \end{proof}
Theorem \ref{thm.generalization} is an immediate corollary of Theorem \ref{thm.generalization x}. All we have to show is $\rho_{m,x_0}(x,t)=\rho_m(x-x_0,t)$.
\noindent {\bf Proof of Theorem \ref{thm.generalization}:}\ \ Let $u(x,t)$ be the solution of (\ref{EQN}), i.e., $$ \partial_t u=\sigma(t,u,u_x,u_{xx}),\quad u(x,0)=u^0(x),\quad t>0,\,x\in\mbox{\boldmath$R$}, $$ and $\rho_{m,x_0}$ be the fundamental solution with $\rho_{m,x_0}(x,0)=m\delta(x-x_0)$. Then, since the equation is autonomous with respect to the space variable, one may easily see that $\rho_{m,x_0}(x,t)=\rho_m(x-x_0,t)$, where $\rho_m(x,t)$ is the fundamental solution with $\rho_m(x,0)=m\delta(x)$. Therefore, the zero level set $$ \{x:\rho_m(x-x_0,t)-u(x,t)>0\} =\{x:\rho_{m,x_0}(x,t)-u(x,t)>0\} $$ is connectable for all $m,t>0$ and $x_0\in\mbox{\boldmath$R$}$. $
\qed$
The connectedness of the level set $A(t;m,x_0)$ has two parameters, $m$ and $x_0$. One may freely choose the size and place of the fundamental solution $\rho_{m,x_0}(x,t)$ using two parameters $m>0$ and $x_0$. These free parameters provide sharp estimates of a solution $u$ in terms of the fundamental solution.
Before considering the implications of Theorem \ref{thm.generalization}, we show certain uniqueness property of the perturbed problem (\ref{EQNxperturbed})--(\ref{positivity_perturbed}) using the arguments in the proof of Theorem \ref{thm.generalization x} and the zero set theory given in Lemma \ref{lem.angenent88}. \begin{theorem}\label{thm.uniquness} Let $u^{\varepsilon}$ and $v^{\varepsilon}$ be smooth bounded solutions to a regularized problem, \begin{equation}\label{eqnPerturbedNox} \partial_t u^\eps ={\tilde \sigma^{\varepsilon}}(x,t,u^\eps ,u^\eps _x,u^\eps _{xx}),\quad {\varepsilon}\le {\partial\over\partial q}{\tilde \sigma^{\varepsilon}}(t,u,p,q)\le {\cal C}, \end{equation} where $t>0$, $x\in\mbox{\boldmath$R$}$, and ${\tilde \sigma^{\varepsilon}}$ is smooth. Then, \begin{enumerate} \item If $u^{\varepsilon}(x,t_0)=v^{\varepsilon}(x,t_0)$ in an interval $I\subset \mbox{\boldmath$R$}$ for a given $t_0>0$, then $u^{\varepsilon}\equiv v^{\varepsilon}$ on $\mbox{\boldmath$R$}\times\mbox{\boldmath$R$}^+$. \item If ${\tilde \sigma^{\varepsilon}}$ is autonomous with respect to the space variable $x$ and $u^{\varepsilon}(\cdot,t_0)$ is constant in an interval $I\subset \mbox{\boldmath$R$}$ for a given $t_0>0$, then $u^{\varepsilon}(x,t)=\alpha(t)$, where $\alpha(t)$ is a solution of a ordinary differential equation $\alpha'(t)={\tilde \sigma^{\varepsilon}}(t,\alpha(t),0,0)$. \end{enumerate} \end{theorem} \begin{proof} Let $e^{\varepsilon}=v^{\varepsilon}-u^{\varepsilon}$. Then, $e^{\varepsilon}$ satisfies \begin{eqnarray}\label{equation for e} &&e^{\varepsilon}_t=a(x,t)e^{\varepsilon}_{xx}+b(x,t)e^{\varepsilon}_x+c(x,t)e^{\varepsilon},\\ &&e(x,0)=v^{\varepsilon}(x,0)-u^{\varepsilon}(x,0),\nonumber \end{eqnarray} where the coefficients, \begin{eqnarray*} &&a(x,t):={{\tilde \sigma}^{\varepsilon}(x,t,v^{\varepsilon},v^{\varepsilon}_x,v^{\varepsilon}_{xx}) -{\tilde \sigma}^{\varepsilon}(x,t,v^{\varepsilon},v^{\varepsilon}_x,u^\eps _{xx})\over v^{\varepsilon}_{xx}-u^\eps _{xx}},\\ &&b(x,t):={{\tilde \sigma}^{\varepsilon}(x,t,v^{\varepsilon},v^{\varepsilon}_x,u^\eps _{xx}) -{\tilde \sigma}^{\varepsilon}(x,t,v^{\varepsilon},u^\eps _x,u^\eps _{xx})\over v_x^{\varepsilon}-u^\eps _x},\\ &&c(x,t):={{\tilde \sigma}^{\varepsilon}(x,t,v^{\varepsilon},u^\eps _x,u^\eps _{xx}) -{\tilde \sigma}^{\varepsilon}(x,t,u^\eps ,u^\eps _x,u^\eps _{xx})\over v^{\varepsilon}-u^\eps }, \end{eqnarray*} satisfy the conditions in Lemma \ref{lem.angenent88}. If $u^{\varepsilon}(x,t_0)=v^{\varepsilon}(x,t_0)$ in an interval $I\subset \mbox{\boldmath$R$}$ for a given $t_0>0$, then $e^{\varepsilon}$ should be a trivial one since the zero set of $e^{\varepsilon}(\cdot,t_0)$ is not discrete. Therefore, $v^{\varepsilon}\equiv u^{\varepsilon}$ and the first part of the theorem is obtained.
For the second part of the theorem, we suppose that $u^{\varepsilon}(x,t_0)$ is constant for $x\in[a,b]=I$. Consider an ordinary differential equation $$ \alpha'(t)={\tilde \sigma^{\varepsilon}}(t,\alpha(t),0,0), \quad\alpha(t_0)=u(a,t_0)\in\mbox{\boldmath$R$}. $$ Since a smooth perturbation ${\tilde \sigma^{\varepsilon}}(t,z,p,q)$ is assumed, ${\partial{\tilde \sigma^{\varepsilon}}(t,z,0,0)\over\partial z}$ is continuous and the classical ordinary differential equation theory gives a unique solution for all $t\ge0$. Clearly, $v(x,t)=\alpha(t)$ is a solution of (\ref{eqnPerturbedNox}), which agrees with $u$ on $I\times{t_0}$. Therefore, the first part of the theorem implies that $u(x,t)=\alpha(t)$ from the beginning. $
\qed$\end{proof} Note that the theorem does not hold without the uniform parabolicity. The finite speed of propagation of a conservation law allows us to construct a counter example easily. For example, if two initial values agree on an interval, such an agreement persists at least certain finite time due to the finite speed of propagation. Therefore, the support of a fundamental solution $\rho_m(x,t)$ is not the whole real line for a given $t>0$ in general. However, under the uniform parabolicity of the perturbed problem, the theorem gives the well-known phenomenon that the support of the solution is the whole real line, i.e., ${\rm supp}\thinspace(\rho^\eps _m)=\mbox{\boldmath$R$}$. As a result we have the following lemma. \begin{lemma}\label{lem.inm} Let $\rho^\eps _m(x,t)$ be the fundamental solution of (\ref{eqnPerturbedNox}). If $m_1<m_2$, then $\rho^\eps _{m_1}(x,t)<\rho^\eps _{m_2}(x,t)$ for all $x\in\mbox{\boldmath$R$}$ and $t>0$. Furthermore, for any $m_0>0$, $x\in\mbox{\boldmath$R$}$ and $t>0$ fixed, $\rho^{\varepsilon}_m(x,t)\to\rho^{\varepsilon}_{m_0}(x,t)$ as $m\to m_0$. \end{lemma} \begin{proof} Let $e^{\varepsilon}(x,t)=\rho^\eps _{m_2}(x,t)-\rho^\eps _{m_1}(x,t)$ with $m_1<m_2$. Then $e^{\varepsilon}$ satisfies (\ref{equation for e}) which is uniformly parabolic with an initial value $(m_2-m_1)\delta$. Hence the solution becomes strictly positive for all $x\in\mbox{\boldmath$R$}$ and $t>0$. Therefore, $\rho^\eps _{m_1}(x,t)<\rho^\eps _{m_2}(x,t)$ for all $x\in\mbox{\boldmath$R$}$ and $t>0$, and $$
\|\rho^\eps _{m_1}(t)-\rho^\eps _{m_2}(t)\|_{L^1}=|m_1-m_2|. $$ Since the problem is uniformly parabolic, the fundamental solution $\rho_m^{\varepsilon}(x,t)$ is continuous for all $t>0$. Hence the $L^1$ convergence implies the point-wise convergence and the proof is complete. $
\qed$\end{proof} The lemma holds true for a perturbed problem which is uniformly parabolic. One may expect a non-strict inequality $\rho_{m_1}(x,t)\le\rho_{m_2}(x,t)$ for $m_1<m_2$ without the uniform parabolicity. The point-wise convergence $\rho_m(x,t)\to\rho_{m_0}(x,t)$ as $m\to m_0$ may fail if $\rho_{m_0}$ is discontinuous at the given point.
Theorem \ref{thm.generalization} is about a comparison between $u(x,t)$ and $\rho_m(x-x_0,t)$. Since the fundamental solution itself is also a bounded solution for all given $t>0$, one may compare two fundamental solutions using the theorem. We first obtain the shape of the fundamental solution of (\ref{EQN}) by comparing it to its space translation. The following corollary says that the fundamental solution $\rho_m(x,t)$ changes its monotonicity only once. \begin{corollary}[Fundamental solutions have no wrinkles] \label{cor.mono} Let $\rho_m$ be the fundamental solution of (\ref{EQN}). Then there exists $\bar x=\bar x(t)\in\mbox{\boldmath$R$}$ such that $\rho_m(\cdot,t)$ is increasing for $x<\bar x$ and decreasing for $x>\bar x$. \end{corollary}
\begin{proof} The fundamental solution is nonnegative and $\rho_m(x,t)\to0$ as $|x|\to\infty$. Therefore, $\rho_m(\cdot,t)$ has either infinitely many monotonicity changes or an odd number of them. Suppose that $\rho_m(\cdot,t)$ has $2n-1$ monotonicity changes. Then, $A(t):=\{x\in\mbox{\boldmath$R$}:\rho_m(x,t)-\rho_m(x-x_0,t)>0\}$ has $n$ components for $x_0>0$ small enough. Similarly, if the monotonicity of $\rho_m(\cdot,t)$ changes infinitely many times, then the set $A(t)$ is still disconnected for $x_0$ small enough. Therefore, Theorem \ref{thm.generalization} implies that $n=1$ and hence $\rho_m(\cdot,t)$ changes its monotonicity only once. $
\qed$\end{proof}
\begin{lemma}\label{lem.inx_0} Let $\rho^\eps _m(x,t)$ be the fundamental solution of the regularized problem (\ref{eqnPerturbedNox}). If $\bar x=\bar x(t)$ is the maximum point of $\rho^\eps _m(\cdot,t)$, then $\rho^\eps _m(\cdot,t)$ is strictly increasing on $(-\infty,\bar x)$ and strictly decreasing on $(\bar x,\infty)$. \end{lemma} \begin{proof} Suppose that the monotonicity of $\rho^\eps _m(x,t)$ given in Corollary \ref{cor.mono} is not strict on $x<\bar x$. Then, there exist $a<b<\bar x$ such that $\rho^\eps _m(a,t)=\rho^\eps _m(b,t)$ and hence $\rho^\eps _m(x,t)$ is constant on the interval $[a,b]$. Theorem \ref{thm.uniquness} implies that $\rho^\eps _m(\cdot,t)=\alpha(t)$, which can not be a delta-sequence as $t\to0$. Therefore, the monotonicity of $\rho^\eps _m(\cdot,t)$ is strict on $(-\infty,\bar x)$. Similarly, the fundamental solution is strictly decreasing on $(\bar x,\infty)$. $
\qed$\end{proof}
\begin{remark} The strict monotonicity of the fundamental solution $\rho_m(\cdot,t)$ in Lemmas \ref{lem.inm} and \ref{lem.inx_0} is not expected for the general case (\ref{EQN})--(\ref{positivity}). The fundamental solutions of the hyperbolic conservation law in Section \ref{sect.C-law}, (\ref{N-wave}), provide such examples. \end{remark}
\section{Steepness as a geometric interpretation} \label{Sect.Steepness}
The Oleinik and the Aronson-B\'{e}nilan one-sided inequalities have another geometric interpretation: fundamental solutions are steeper than any other bounded solution. The purpose of this section is to show that the connectedness of the level set given in Theorem \ref{thm.generalization} provides the same steepness comparison for the general case. This steepness comparison can be considered as a geometric version of gradient estimates for solutions.
First we recall and introduce some notation. Let $u(x,t)$ be a bounded solution of (\ref{EQN}) and $\rho_m(x,t)$ be the fundamental solution of mass $m>0$. The steepness of the solution $u$ at a point $x=x_1$ is compared to that of the fundamental solution $\rho_m$ at the point $x=x_2$ with the same value, i.e., $$ u(x_1,t)=\rho_m(x_2,t), $$ and with the same monotonicity. The existence and the uniqueness of such a point follow from Lemma \ref{lem.inx_0} if the problem is uniformly parabolic. Then, by letting $$ \rho_{m,x_0}(x,t):=\rho_m(x-x_0,t)\mbox{~~~with~~~}x_0:=x_1-x_2, $$ we have $\rho_{m,x_0}(x_1,t)=\rho_m(x_2,t)$, i.e., the graph of $u(x,t)$ intersects the graph of $\rho_{m,x_0}(x,t)$ at $x=x_1$. However, if the problem is not uniformly parabolic, one needs to state the intersection condition a little more generally due to non-uniqueness and the possible appearance of discontinuities. Hence, at an intersection point, we only require \begin{eqnarray} &&[\min u(x_1\pm,t),\max u(x_1\pm,t)]\cap [\min \rho_{m,x_0}(x_1\pm,t),\max \rho_{m,x_0}(x_1\pm,t)]\nonumber\\ &&\ne\emptyset, \label{intersection} \end{eqnarray} where $\min v(x\pm,t)$ and $\max v(x\pm,t)$ respectively denote the minimum and maximum of the left- and right-hand limits for the given time $t$ and point $x$. Of course, if $u$ and $\rho_m$ are continuous, then (\ref{intersection}) implies that $$u(x_1,t)=\rho_{m,x_0}(x_1,t)$$ and the arguments in the following proof become simpler.
In the rest of this section we let $[a,b]$ be the maximal interval including $x_1$ such that the relation (\ref{intersection}) is satisfied. We employ the notational convention that $[a,b]:=\{a\}$ if $a=b$. Note that Theorem \ref{thm.uniquness} implies that $a=b=x_1$ for perturbed problems. However, it is possible that $a\ne b$ for a problem without uniform parabolicity; an inviscid conservation law is a good example. There are four possible scenarios for the intersection of the two graphs (see Figure \ref{fig.fourcases}). When $\rho_{m,x_0}$ and $u$ are discontinuous at $x=x_1$, the corresponding four scenarios are given in Figure \ref{fig.fourcasesShock}. In the figures, only the cases in which $\rho_{m,x_0}$ and $u$ increase at the intersection point are shown. The other cases, in which $u$ and $\rho_{m,x_0}$ decrease, are analogous. \begin{figure}
\caption{Four possible scenarios at the intersection point when the solutions are continuous. Solid lines are graphs of $\rho_{m,x_0}(\cdot,t)$ and dotted ones are of $u(\cdot,t)$.}
\label{fig.fourcases}
\caption{Four possible scenarios at the intersection point when the solutions are discontinuous. Solid lines are graphs of $\rho_{m,x_0}(\cdot,t)$ and dotted ones are of $u(\cdot,t)$.}
\label{fig.fourcasesShock}
\end{figure}
In the rest of this section we will show which scenarios are allowed and which are not. The proofs are solely based on the connectedness of the level set in Theorem \ref{thm.generalization} and are good examples that explain how to use geometric arguments instead of analytic estimates. The proof is intuitively clear. For example, if it is the case in Figure \ref{fig.fourcases}(d), then, after shifting $\rho_{m,x_0}$ to the right a little, we can make the zero level set $\{x\in\mbox{\boldmath$R$}:\rho_m(x-x_0-\epsilon,t)-u(x,t)>0\}$ disconnected. Hence, the case is never allowed. If it is the case in Figures \ref{fig.fourcases}(b) or \ref{fig.fourcases}(c) and $m$ is large enough to satisfy $\|u(t)\|_\infty\le\|\rho_{m,x_0}(t)\|_\infty$, then the level set becomes disconnected before or after shifting $\rho_{m,x_0}$ to the left a little. Hence, these two cases are not allowed at least for $m>0$ large. In the following theorem we state and prove this observation formally. \begin{theorem}[Fundamental solution is the steepest.]\label{thm.steepness} Let $u(x,t)$ be a bounded solution of (\ref{EQN}), $\rho_m$ be the fundamental solution of mass $m>0$, and (\ref{intersection}) be satisfied for all $a\le x_1\le b$. \begin{enumerate} \item Suppose that both $u(\cdot,t)$ and $\rho_{m,x_0}(\cdot,t)$ are nonconstant increasing functions on $(a-{\varepsilon},a)$. Then, \begin{enumerate} \item If there exists ${\varepsilon}>0$ such that $u(x,t)>\rho_{m,x_0}(x,t)$ on $(a-{\varepsilon},a)$ and $\rho_{m,x_0}(x,t)<u(x,t)$ on $(b,b+{\varepsilon})$, then $\rho_{m,x_0}(x,t)\le u(x,t)$ for all $x>b$. \label{case1a} \item If there exists ${\varepsilon}>0$ such that $u(x,t)<\rho_{m,x_0}(x,t)$ on $(a-{\varepsilon},a)$, then $\rho_{m,x_0}(x,t)\le u(x,t)$ for all $x>b$.\label{case1b} \end{enumerate} \item Suppose that both $u(\cdot,t)$ and $\rho_{m,x_0}(\cdot,t)$ are nonconstant decreasing functions on $(b,b+{\varepsilon})$. Then, \begin{enumerate} \item If there exists ${\varepsilon}>0$ such that $u(x,t)>\rho_{m,x_0}(x,t)$ on $(a-{\varepsilon},a)$ and $\rho_{m,x_0}(x,t)<u(x,t)$ on $(b,b+{\varepsilon})$, then $\rho_{m,x_0}(x,t)\le u(x,t)$ for all $x<a$. \item If there exists ${\varepsilon}>0$ such that $u(x,t)<\rho_{m,x_0}(x,t)$ on $(b,b+{\varepsilon})$, then $\rho_{m,x_0}(x,t)\le u(x,t)$ for all $x<a$. \end{enumerate} \end{enumerate} \end{theorem} \begin{proof} The second part is the dual statement of the first one and we prove the first part only. We may assume without loss of generality that both $u$ and $\rho_{m,x_0}$ strictly increase on $(a-{\varepsilon},a)$ after rearranging $x_0$ if needed. (This step is not needed for the perturbed problem due to Lemma \ref{lem.inx_0}.) To show (\ref{case1a}), we assume that there exists $\alpha>b$ such that $\rho_{m,x_0}(\alpha,t)>u(\alpha,t)$ and derive a contradiction. Recall that $\rho_{m,x_0}(b+{\varepsilon},t)<u(b+{\varepsilon},t)$. We may assume $u(\cdot,t)$ and $\rho_{m,x_0}(\cdot,t)$ are continuous at $\alpha$ and $b+{\varepsilon}$ by rearranging $\alpha$ and ${\varepsilon}$ if needed. Then, the continuity of $\rho_{m,x_0}$ and $u$ at $\alpha$ and $b+{\varepsilon}$ implies that there exists small $0<\tau<{\varepsilon}$ such that $\rho_{m,x_0}(\alpha+\tau,t) > u(\alpha,t)$ and $u(b+{\varepsilon},t)>\rho_{m,x_0}(b+{\varepsilon}+\tau,t)$. Therefore, the zero level set $A:=\{x\in\mbox{\boldmath$R$}:e(x,t)>0\}$ with $e(x,t):=\rho_{m,x_0-\tau}(x,t)-u(x,t)$ is not connectable since $e(a,t)>0$, $e(b+{\varepsilon},t)<0$ and $e(\alpha,t)>0$, which contradicts Theorem \ref{thm.generalization}. 
Therefore, there is no such $\alpha>b$ and hence $\rho_{m,x_0}(x,t)\le u(x,t)$ for all $x>b$.
The proof of (\ref{case1b}) is similar to that of (\ref{case1a}). The difference lies in the comparison points. We similarly suppose that there exists $\alpha>b$ such that $\rho_{m,x_0}(\alpha,t)>u(\alpha,t)$. Recall that $\rho_{m,x_0}(a-{\varepsilon},t)>u(a-{\varepsilon},t)$. We assume $u(\cdot,t)$ and $\rho_{m,x_0}(\cdot,t)$ are continuous at $\alpha$ and $a-{\varepsilon}$ by rearranging $\alpha$ and ${\varepsilon}$ if needed. Then, the continuity of $\rho_{m,x_0}$ and $u$ at $\alpha$ and $a-{\varepsilon}$ implies that there exists small $0<\tau<{\varepsilon}$ such that $\rho_{m,x_0}(\alpha+\tau,t) > u(\alpha,t)$ and $u(a-{\varepsilon},t)<\rho_{m,x_0}(a-{\varepsilon}+\tau,t)$. We also have $u(a,t)>\rho_{m,x_0}(a+\tau,t)$. Therefore, the zero level set $A:=\{x\in\mbox{\boldmath$R$}:e(x,t)>0\}$ with $e(x,t):=\rho_{m,x_0-\tau}(x,t)-u(x,t)$ is not connectable since $e(a-{\varepsilon},t)>0$, $e(a,t)<0$ and $e(\alpha,t)>0$, which contradicts Theorem \ref{thm.generalization}. Therefore, there is no such $\alpha>b$ and hence $\rho_{m,x_0}(x,t)\le u(x,t)$ for all $x>b$. $
\qed$ \end{proof}
The previous theorem compares the steepness of a general bounded solution $u$ to that of the fundamental solution $\rho_m$, and one may obtain information or estimates on $u$ from a fundamental solution. For example, if the fundamental solution is continuous, then the general solution should be continuous; if not, one can easily construct a situation such as Figure \ref{fig.fourcases}(b) which violates Theorem \ref{thm.steepness}. If the fundamental solution contains decreasing discontinuities only, we can say that an increasing discontinuity of a weak solution is not admissible. The entropy condition of hyperbolic conservation laws is exactly such a case. In certain cases, the fundamental solution is given explicitly and hence the corresponding one-sided inequality is explicit; the Oleinik and Aronson-B\'{e}nilan type inequalities are such examples. However, even if there are no such explicit inequalities, the steepness comparison in Theorem \ref{thm.steepness} may provide equally useful estimates for a general solution.
\begin{remark} Theorem \ref{thm.steepness}(\ref{case1a}) handles the case in Figure \ref{fig.fourcases}(b). Since $u$ is a bounded solution, there exists $m>0$ such that $\|\rho_m(t)\|_\infty>\|u(t)\|_\infty$. In that case, $u(x,t)$ cannot be bigger than or equal to $\rho_m(x,t)$ for all $x>a$. In other words, such a case is possible only for $m>0$ small. In Section \ref{sect.C-law}, we will see that such a case is not possible at all, even for small $m$, in the case of a convex conservation law. However, the case is possible for small $m$ if the convexity assumption is dropped. Theorem \ref{thm.steepness}(\ref{case1b}) handles the cases in Figures \ref{fig.fourcases}(c) and \ref{fig.fourcases}(d). First, the case in Figure \ref{fig.fourcases}(d) is excluded completely. The other case, in Figure \ref{fig.fourcases}(c), can be possible for $m>0$ large. \end{remark}
\begin{remark} The theorem does not exclude the case in Figure \ref{fig.fourcases}(a), which is the typical, if not the only, scenario. This relation shows that the fundamental solution $\rho_m$ is steeper than the general solution $u$, and such a comparison should be made between two points of the same value. If the graph of the solution $u$ could touch the graph of the fundamental solution $\rho_m$ as in Figure \ref{fig.fourcases}(d), it would imply that $u$ is more concave than the fundamental solution $\rho_m$. However, such a case is excluded and hence we may say that the fundamental solution is more concave than any other solution, which is another interpretation of the steepness. \end{remark}
\section{Scalar conservation laws} \label{sect.C-law}
In this section we consider a scalar conservation law with a smooth flux, \begin{equation}\label{c law4} \partial_t u+\partial_xf(u)=0,\ u(x,0)=u^0(x)\ge0,\quad t>0,\ x\in\mbox{\boldmath$R$}. \end{equation} The flux $f$ is assumed without loss to satisfy \begin{equation}\label{hypoC} f(0)=f'(0)=0. \end{equation} This conservation law is in the form of (\ref{EQN}) with $\sigma(x,z,p,q)=-f'(z)p$, where (\ref{positivity}) is satisfied with $\partial_q\sigma=0$.
The scalar conservation law serves two purposes. Its solution gives a concrete example with which to review the steepness theory developed in the previous section; the fundamental solution of a conservation law has a rich structure and is an excellent prototype of the general case. This nonlinear hyperbolic equation is also used to show that the theory of this paper is more or less optimal, and that one cannot expect more under the generality considered in this paper.
The dynamics of solutions to the conservation law is well understood if the flux is convex. However, for the general case without the convexity assumption, the theory is limited even in the scalar case. The main obstacle to developing a theory without the convexity assumption is that the Oleinik inequality does not hold in this case. However, the geometric version of such one-sided inequalities obtained in this paper holds true. We will apply it to hyperbolic conservation laws without the convexity assumption and show that the solution with connectable zero level set is unique and is the entropy solution. We will also apply the theory to obtain TV boundedness of a solution without the convexity assumption. This indicates that the connectivity of the zero level set is the true generalization of the Oleinik one-sided inequality.
\subsection{Structure of fundamental solutions}
The solution of an initial value problem for an autonomous linear equation is given as the convolution of the initial value with the fundamental solution. Unfortunately, there is no such nice scenario for nonlinear problems. However, the connectedness of the zero level set given in Theorem \ref{thm.generalization} can be successfully used to obtain key estimates of a general solution by comparing it to a fundamental solution. In fact, we have obtained a steepness estimate in Section \ref{Sect.Steepness} using the connectedness of the zero level set and will obtain more of them in the following sections.
In this section we survey the structure of the nonnegative fundamental solution $\rho_m(x,t)$ of mass $m>0$ that satisfies \begin{equation} \partial_t\rho_m=-\partial_xf(\rho_m),\ \rho_m(x,0)=m\delta(x),\quad m,t>0,\ x\in\mbox{\boldmath$R$}. \end{equation} First, one may easily check that the fundamental solution satisfies \begin{equation}\label{SimilarityInSize} \rho_m(mx,mt)=\rho_1(x,t),\quad x\in\mbox{\boldmath$R$},\ t>0. \end{equation} This relation shows that it is enough to consider the case with $m=1$. One can also see that solutions of different sizes live in different time scales; the larger one lives in a slower time scale. \begin{remark} The similarity structure is well known for several cases including hyperbolic conservation laws. A similarity structure is a relation between the time and the space variables. For example, $\rho_m(x,t)$ can be obtained from its profile at $t=1$ using an invariance relation. The relation in (\ref{SimilarityInSize}) shows a different kind of similarity structure among fundamental solutions of different sizes. \end{remark}
We first consider a convex flux that $f''(u)\ge0$ in a weak sense. Then the fundamental solution is explicitly given by \begin{equation}\label{N-wave} \rho_m(x,t)=\left\{\begin{array}{ccc} g(x/t)&,&\ 0<x<a_m(t),\\ 0 &,& {\rm otherwise,}\\ \end{array}\right. \end{equation} where $g$ is called the rarefaction profile and is given by the inverse relation of the derivative of the flux, i.e., \begin{equation}\label{g(x)} f'(g(x))=x. \end{equation} The support of the fundamental solution is given by the equal area rule \begin{equation} \int_0^{a_m(t)}g(x/t)dx=m \end{equation} (see Dafermos \cite{MR2574377}).
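As an illustration, the formulas above become completely explicit for the inviscid Burgers equation, where $f(u)=u^2/2$. In this case $f'(u)=u$, so the rarefaction profile is $g(x)=x$, and the equal area rule gives $\int_0^{a_m(t)}{x\over t}\,dx={a_m(t)^2\over 2t}=m$, i.e., $a_m(t)=\sqrt{2mt}$. Hence
$$
\rho_m(x,t)=\left\{\begin{array}{ccc}
x/t&,&\ 0<x<\sqrt{2mt},\\
0 &,& {\rm otherwise,}\\
\end{array}\right.
$$
which is the classical one-sided N-wave.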
Since $g$ is the inverse of an increasing function $f'$, this rarefaction profile $g$ is also an increasing function. Therefore, one can clearly see that the fundamental solution $\rho_m(x,t)$ has the monotonicity structure given in Corollary \ref{cor.mono} with $\bar x(t)=a_m(t)$. In particular, the decreasing part of the fundamental solution is simply the single discontinuity from the maximum to the zero value. However, if $f'$ has a discontinuity, then $g$ is not strictly monotone. Hence the strict monotonicity in Lemma \ref{lem.inx_0} fails in this case. Let $m_1<m_2$. Then it is clear that $\rho_{m_1}(x,t)\le\rho_{m_2}(x,t)$ and $\rho_{m_1}(x,t)=\rho_{m_2}(x,t)$ for $0<x<a_{m_1}(t)$. Hence, the strict monotonicity in Lemma \ref{lem.inm} also fails. Suppose that $f'(u)$ is constant in an interval. Then, $g$ has a discontinuity and hence the fundamental solution may have an increasing discontinuity. Therefore, the strict monotonicity in Lemmas \ref{lem.inm} and \ref{lem.inx_0} holds for the perturbed problems only, and Corollary \ref{cor.mono} is all one may expect for a general case without the uniform parabolicity.
The steepness comparison in Section \ref{Sect.Steepness} shows that the cases in Figures \ref{fig.fourcases}(b,c) and \ref{fig.fourcasesShock}(b,c) are not allowed for $m$ large. However, we can clearly see that those cases are not allowed even for small $m$ under the convexity assumption. For example, since $\rho_{m_1}(x,t)=\rho_{m_2}(x,t)$ for $0<x<\min(a_{m_1}(t),a_{m_2}(t))$, such a case is not allowed for any $m>0$ if it is not allowed for large $m$. On the other hand, we will observe in the rest of this section that a conservation law without the convexity assumption provides examples in which such cases may happen for small $m$. We start with a brief review of the structure of the fundamental solution.
\begin{figure}
\caption{Envelopes and corresponding fundamental solution}
\label{fig.envelopes}
\end{figure}
The explicit formula (\ref{N-wave}) is valid only under the convexity assumption. The fundamental solution without it is given in \cite{HaKim,KimLee}. We briefly review its structure as an example illustrating the general theory. Using the convex-concave envelopes of the flux, one may find the left and right side limits of a discontinuity of a fundamental solution, where the maximum of the fundamental solution is used as a parameter. Let $h(u;\bar u)$ be the lower convex envelope of $f$ on the interval $[0,\bar u]$, which is the supremum of convex functions $\eta$ such that $\eta(u)\le f(u)$ on the interval. This envelope consists of pieces that are either linear or identical to $f(u)$ (see Figure \ref{fig.envelopes}(a)). It is shown in \cite{HaKim} that, if the convex envelope $h(u;\bar u)$ has a linear part that connects two values, say $0$ and $u_3$ as in Figure \ref{fig.envelopes}(a), then the fundamental solution has an increasing discontinuity that connects $0$ and $u_3$, as in Figure \ref{fig.envelopes}(b), at the moment when $\bar u$ is the maximum of the fundamental solution $\rho_m(\cdot,t)$.
The upper concave envelope $k(u;\bar u)$ is the infimum of the concave functions $\eta$ such that $\eta(u)\ge f(u)$. Similarly, if the concave envelope $k(u;\bar u)$ has a linear part connecting two values, say $0$ and $u_1$ or $u_2$ and $\bar u$ as in Figure \ref{fig.envelopes}(a), then the fundamental solution has decreasing discontinuities connecting $0$ and $u_1$ or $u_2$ and $\bar u$, as in Figure \ref{fig.envelopes}(b). The exact location of the discontinuities and the profile of the continuous part depend on the dynamics of the envelopes at earlier times. However, the exact size of each shock can be found from the envelopes at the moment when the maximum is a given $\bar u>0$. At a later time, when the maximum $\bar u$ of the fundamental solution is like the one in Figure \ref{fig.envelopes}(c), the convex envelope is identical to $f$ and the concave envelope is linear. Then the fundamental solution at that moment is like the one in Figure \ref{fig.envelopes}(d).
Now we consider an example of the case in Figure \ref{fig.fourcases}(b) for $m$ large. Let Figures \ref{fig.envelopes}(b) and \ref{fig.envelopes}(d) be respectively the graphs of $\rho_m(x,t_1)$ and $\rho_m(x,t_2)$ with $t_1<t_2$. First rewrite the relation in (\ref{SimilarityInSize}) as $$\rho_{ma}(ax,at)=\rho_m(x,t).$$ Then, we have $$ \rho_m(x,t_2)=\rho_{mt_1/t_2}(t_1x/t_2,t_1). $$ In other words, $\rho_{mt_1/t_2}(x,t_1)$ has the shape of Figure \ref{fig.envelopes}(d) after shrinking it in the $x$ direction by a ratio of $t_1/t_2$. If $\rho_m(x,t_1)$ plays the role of $u(x,t_1)$ and $\rho_{mt_1/t_2}(x,t_1)$ that of the comparing fundamental solution, then it will give the scenario of Figure \ref{fig.fourcases}(b). Hence such a case is really possible for a general case with a large $m$. This observation also indicates that the well-known similarity structure of the fundamental solution is valid only with the convexity assumption.
\subsection{Equivalence to the Oleinik inequality}
In this section we show that the connectedness of the zero level set in Theorem \ref{thm.generalization} is equivalent to the one-sided Oleinik inequality (\ref{OleinikInequality}) which is valid only with a convex flux.
\begin{theorem}\label{thm.equi1} Let $f''(u)>0$ and $\rho_m(x,t)$ be given by (\ref{N-wave}). Then a non-negative bounded function $u(x)$ satisfies the Oleinik inequality \begin{equation}\label{OleinikInequality2} {f'(u(x))-f'(u(y))\over x-y}\le {1\over t},\quad t>0,\ x,y\in\mbox{\boldmath$R$} \end{equation} if and only if the zero level set $$A(t;m,x_0):=\{x\in\mbox{\boldmath$R$}:\rho_m(x-x_0,t)-u(x)>0\}$$ is connected (or connectable) for all $x_0\in\mbox{\boldmath$R$}$ and $m>0$. \end{theorem} \begin{proof} In the following the time $t>0$ is fixed and we will drop the time variable from $\rho_m$ for brevity. First, note that $\rho_m(x)$ satisfies $$ {f'(\rho_m(x))-f'(\rho_m(y))\over x-y} ={f'(g(x/t))-f'(g(y/t))\over x-y}={1\over t} $$ for all $0<x,y<a_m(t)$. Since $f''>0$, $f'$ is increasing and hence we have $A(t;m,x_0)=\{x\in\mbox{\boldmath$R$}:f'(\rho_m(x-x_0))-f'(u(x))>0\}$. Suppose that the set $A$ is not connected for some $m>0$ and $x_0\in\mbox{\boldmath$R$}$. After a translation of $u$, we may assume $x_0=0$. Then, there exist three points $x_1<x_2<x_3$ such that $f'(\rho_m(x_1))>f'(u(x_1))$, $f'(\rho_m(x_2))<f'(u(x_2))$ and $f'(\rho_m(x_3))>f'(u(x_3))$. Therefore, $x_1,x_2\in{\rm supp}\thinspace(\rho_m(t))$ and $$ {f'(u(x_1))-f'(u(x_2))\over x_1-x_2}>{f'(\rho_m(x_1))-f'(\rho_m(x_2))\over x_1-x_2}={1\over t}. $$ Hence the Oleinik inequality fails.
Now suppose that the Oleinik inequality fails. Then, there exist $x_1<x_2$ such that $$ {f'(u(x_2))-f'(u(x_1))\over x_2-x_1}>{1\over t}. $$ Let $$x_0:=(x_1+x_2)/2-t[f'(u(x_1))+f'(u(x_2))]/2$$ and $m$ be so large that $a_m(t)>t\,\sup(f'(u(x)))$. Then, for $x_3:=x_0+a_m(t)-\epsilon$ with a small $\epsilon>0$, we have \begin{eqnarray*} &&f'(\rho_m(x_1-x_0))-f'(u(x_1)) ={f'(u(x_2))-f'(u(x_1))\over2}-{x_2-x_1\over 2t}>0,\\ &&f'(\rho_m(x_2-x_0))-f'(u(x_2)) ={x_2-x_1\over 2t}-{f'(u(x_2))-f'(u(x_1))\over2}<0,\\ &&f'(\rho_m(x_3-x_0))-f'(u(x_3))=(a_m(t)-\epsilon)/t-f'(u(x_3))>0. \end{eqnarray*} In other words the zero level set $A(t;m,x_0)$ is disconnected. $
\qed$\end{proof}
Notice that the function $u$ is not necessarily a solution of the conservation law for the equivalence relation in the theorem. The time variable $t$ in the inequality (\ref{OleinikInequality2}) is related to the fundamental solution $\rho_m(x,t)$ only.
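For instance, for the Burgers flux $f(u)=u^2/2$ the condition (\ref{OleinikInequality2}) reads
$$
{u(x)-u(y)\over x-y}\le{1\over t},\quad x\ne y,
$$
which is the classical Oleinik one-sided estimate; the theorem thus identifies this analytic bound with the purely geometric requirement that the sets $A(t;m,x_0)$ be connected.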
\subsection{Uniqueness without convexity}
In this section we consider a conservation law (\ref{c law4}) with a nonconvex flux. Let $u(x,t)$ be a nonnegative bounded solution and have a discontinuity at a point $x_0$ such that $\displaystyle\lim_{x\to x_0+}u(x,t)=u_r$ and $\displaystyle\lim_{x\to x_0-}u(x,t)=u_l$. For an illustration, consider the graph of nonconvex flux given in Figure \ref{fig_walk}. \begin{figure}
\caption{An illustration to explain the Oleinik entropy condition. For a simpler illustration we have returned to the original case without the assumption in (\ref{hypoC}). }
\label{fig_walk}
\end{figure} Suppose that you are moving from the left limit $(u_l,f(u_l))$ to the right limit $(u_r,f(u_r))$ along the line connecting the two points. If the graph of the flux $f(u)$ lies always on your left side, then the discontinuity is admissible. For example, if the left and the right side limit pair is $(u_l,u_r)=(c,0)$ as in Figure \ref{fig_walk}, then the discontinuity is admissible. However, if $(u_l,u_r)=(c,a)$ as in Figure \ref{fig_walk}, then the graph of the flux $f(u)$ is on your right side for $a<u<b$ and hence the discontinuity is not admissible. This admissibility criterion is called the Oleinik entropy condition. If discontinuities of a weak solution satisfy the Oleinik entropy condition, then the weak solution is called the entropy solution. It is well known that the entropy solution is unique and identical to the zero-viscosity limit of its perturbed problem.
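In analytic form, this geometric criterion is the classical Oleinik condition: a discontinuity with left and right limits $u_l$ and $u_r$, propagating with the Rankine--Hugoniot speed $s={f(u_l)-f(u_r)\over u_l-u_r}$, is admissible if
$$
{f(u)-f(u_l)\over u-u_l}\ \ge\ s\ \ge\ {f(u)-f(u_r)\over u-u_r}
\quad\mbox{for all $u$ between $u_l$ and $u_r$},
$$
which expresses precisely the geometric description above: the graph of $f$ between the two states lies entirely on one side of the chord connecting $(u_l,f(u_l))$ and $(u_r,f(u_r))$.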
Suppose that the zero level set \begin{equation}\label{Atmx0} A(t;m,x_0):=\{x\in\mbox{\boldmath$R$}:\rho_m(x-x_0,t)- u(x,t)>0\} \end{equation} is connected for all $t,m>0$ and $x_0$. In this section we will show that such a weak solution is the entropy solution if the flux has a single inflection point. However, for a general nonconvex flux, it can be a non-entropy solution. For example, let $u$ have a discontinuity. Then, the steepness comparison in the previous section implies that for $m>0$ sufficiently large, $\rho_m(x,t)$ should have a larger discontinuity of the same monotonicity since the case in Figure \ref{fig.fourcasesShock}(a) is the only possible one for $m$ large. Of course, discontinuities of the fundamental solution are admissible ones since they are given by convex-concave envelopes. Therefore, if the flux has the property that a jump smaller than an admissible one with the same monotonicity is always admissible, then $u$ should be the entropy solution. For example, if the flux has a single inflection point, one can easily check that this is the case.
For a general case, the story is quite different. For example, if the flux is given as in Figure \ref{fig_walk}, the discontinuity $(u_l,u_r)=(c,a)$ is not admissible even though a larger one $(u_l,u_r)=(c,0)$ is admissible. Furthermore, one may easily check that \begin{equation}\label{CounterExample} u(x,t)=\left\{\begin{array}{cc} c, \quad&x<\sigma t,\\a, \quad&x>\sigma t,\\ \end{array}\right.\quad \sigma={f(c)-f(a)\over c-a}, \end{equation} is a weak solution that makes the set $A(t;m,x_0)$ connected for all $t,m,x_0$. Unfortunately, this is not an entropy solution and hence the connectedness of the zero level set is not enough to single out the entropy solution. However, we have the following lemma which gives a clue to obtaining uniqueness. \begin{lemma}\label{lemma.0shock} Let $u(x)$ be a nonnegative bounded function and the zero level set $A(t;m,x_0)$ in (\ref{Atmx0}) be connected for all $t,m>0$ and $x_0\in\mbox{\boldmath$R$}$. Then any discontinuity of $u$ that connects $u=0$ is admissible. \end{lemma} \begin{proof} Let $u(x)$ have a discontinuity at $x=x_1$ with left side limit $u_l>0$ and right side limit $u_r=0$. Suppose that the discontinuity is not admissible. Then, since a part of the graph of the flux is above the line connecting $(0,0)$ and $(u_l,f(u_l))$, the concave envelope of the flux on the interval $(0,u_l+{\varepsilon}_0)$ is not a line for a small ${\varepsilon}_0>0$. Let $\rho_m(x,t)$ be the fundamental solution with the maximum $u_l+{\varepsilon}_0$ at time $t>0$ and $\bar x$ be the maximum point. Let $x_0=x_1-\bar x- {\varepsilon}_1$. Then, it is clear that the set $A(t;m,x_0)$ becomes disconnected for a small ${\varepsilon}_1>0$. A diagram that shows the relation is given in Figure \ref{fig.admissibility}. \begin{figure}
\caption{Envelopes and corresponding fundamental solution}
\label{fig.admissibility}
\end{figure} If $u_l=0$ and $u_r>0$, then one may consider the convex envelope and obtain the nonconnectedness of level set $A$ similarly. $
\qed$\end{proof}
The connectedness of the set $A(t;m,x_0)$ allows us to single out the zero-viscosity limit if the flux is convex or has a single inflection point. For a general nonconvex case, Lemma \ref{lemma.0shock} encourages us to consider fundamental solutions with a nonzero far field.
\begin{theorem}\label{thm.uniquenessCLAW} Let $\rho_m^c$ be a solution to the conservation law (\ref{c law4}) with initial value $\rho_m^c(x,0)=c+m\delta(x)$, $c\ge0$, and $u(x,t)$ be a weak solution. Then, the zero level set \begin{equation}\label{Atmx0c} A(t;m,x_0,c):=\{x\in\mbox{\boldmath$R$}:\rho_m^c(x-x_0,t)- u(x,t)>0\} \end{equation} is connected for all $t,m,c>0$ and $x_0\in\mbox{\boldmath$R$}$ if and only if $u(x,t)$ is the entropy solution. \end{theorem} \begin{proof} ($\Rightarrow$) Suppose that $u(x,t)$ has a discontinuity that connects $c$ and $d$ with $c<d$. Then, consider the fundamental solution $\rho_m^c$ which is similarly constructed using the convex and concave envelopes of the flux on the interval $[c,\bar u]$, where $\bar u$ is the maximum of the fundamental solution. This procedure is identical to the earlier case with $c=0$. Then, we may repeat the previous process of Lemma \ref{lemma.0shock} to show the admissibility of this discontinuity. The detail is omitted.
($\Leftarrow$) It is well known that the entropy solution is the zero-viscosity limit of the perturbed problem. The zero level set $A$ of a zero-viscosity limit is connected by Theorem \ref{thm.generalization} for $c=0$. For $c>0$ we may repeat the process since the zero set theory, Lemma \ref{lem.angenent88} is valid independently of $c>0$. $
\qed$\end{proof}
\begin{remark} The connectedness of this zero level set can be used as another admissibility criterion of a conservation law without convexity. Furthermore, it gives a hope that the connectivity of the zero level set can be used for an admissibility criterion for more general problems in the form of (\ref{EQN})-(\ref{positivity}). \end{remark}
\subsection{Boundedness of total variation}
The Oleinik inequality should be understood in a weak sense since the solution is not necessarily smooth. Hence it is preferred to write it as \begin{equation}\label{OleinikInequality2} {f'(u(x,t))-f'(u(y,t))\over x-y}\le {1\over t},\quad t>0,\ x,y\in\mbox{\boldmath$R$}. \end{equation} Hoff \cite{MR688972} showed that the weak solution satisfying the Oleinik inequality is unique if and only if the flux $f$ is convex. In other words, the inequality (\ref{OleinikInequality2}) does not give a uniqueness criterion without convexity of the flux. Furthermore, if the flux is not convex, the inequality is not satisfied by the entropy solution.
The theoretical development for the nonconvex case has been limited due to the lack of an Oleinik type inequality and, therefore, finding a replacement for such an inequality has been regarded as a crucial step for further progress. There have been several technical developments to find the right inequality (see \cite{MR818862,MR2405854,MR1855004,MR2119939}). These efforts are related to finding a constant $C\ge0$ such that a weak version of the Oleinik inequality, \begin{equation}\label{OleinikTV}
f'(u(x,t))-f'(u(y,t))\le {x-y\over t}+C|TV(u(0))-TV(u(t))|, \end{equation} is satisfied by the entropy solution. Here, $TV(u(t))$ is the total variation of the solution $u$ at a fixed time $t\ge0$. The total variation is defined by $$
TV(u(t)):=\sup_{P}\sum_i |u(x_i,t)-u(x_{i+1},t)|, $$ where the $\sup$ is taken over all possible partitions $P:=\{\cdots<x_i<x_{i+1}<\cdots\}$. It is clear that (\ref{OleinikTV}) is a weaker version of the Oleinik inequality (\ref{OleinikInequality2}) and that it cannot give the uniqueness since even the stronger original version does not give the uniqueness. The connectedness of the zero level set in Theorem \ref{thm.generalization} is the correct generalization that gives the uniqueness for general flux without convexity assumption, Theorem \ref{thm.uniquenessCLAW}.
The boundedness of the total variation of a solution has been one of the key estimates in the regularity theory of various problems. The one-sided Oleinik inequality actually gives TV-boundedness on any bounded domain for all $t>0$, even if the total variation is not bounded initially. (Notice that the inequality in (\ref{OleinikTV}) cannot be used for such a purpose since $TV(u(0))$ is already included in the estimate.) Even though the one-sided inequality provides only an upper bound and no lower bound, this is enough to control the variation. Roughly speaking, in terms of the fundamental solution $\rho_m(\cdot,t)$, the variation of the solution on a domain of the size of the support of $\rho_m(\cdot, t)$ is smaller than the variation of the fundamental solution, due to the steepness comparison property in Theorem \ref{thm.steepness}.
Let \begin{equation}\label{C(t)}
C(t)=\sup_{c,m>0}{2\,\sup_x(\rho_m^c(x,t)-c)\over|{\rm supp}\thinspace(\rho_m^c-c)|}<\infty, \end{equation}
where $\rho_m^c$ is the fundamental solution in Theorem \ref{thm.uniquenessCLAW}. The variation of the fundamental solution on its support is $2\sup_x(\rho_m^c(x,t)-c)$ and hence $C(t)$ is the maximal ratio between the variation and the length of the support over all possible fundamental solutions. Therefore, one can easily see that $TV(u(t))\le C(t)|{\rm supp}\thinspace(u(t))|$ since the fundamental solution is the steepest one and hence the variation of a solution $u$ over a unit interval cannot be bigger than $C(t)$. For example, for the inviscid Burgers equation we have $C(t)={2\over t}$ and hence we have
$$TV(u(t))\le {2\over t}|{\rm supp}\thinspace(u(t))|,$$ which is one way in which the Oleinik one-sided inequality gives TV-boundedness. The following theorem summarizes the $TV$ estimate.
\begin{theorem}[TV boundedness]\label{thm.TVB} Let $u(x,t)$ be a bounded solution of (\ref{c law4}), where the flux $f$ is not necessarily convex. If $C(t)$ given by (\ref{C(t)}) is finite and $u(x,t)$ is compactly supported, then \begin{equation}\label{TV1}
TV(u(t))\le C(t)|{\rm supp}\thinspace(u(t))|. \end{equation} In general, the total variation in a bounded interval $I=(a,b)$, is bounded by \begin{equation}\label{TV2}
TV\big(u(t)|_{x\in I}\big)\le C(t)|b-a|. \end{equation} \end{theorem}
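For instance, the value $C(t)={2\over t}$ used above for the inviscid Burgers equation can be verified directly from the N-wave (\ref{N-wave}): for $c=0$ one has $\sup_x\rho_m(x,t)=a_m(t)/t$ and $|{\rm supp}\thinspace(\rho_m(t))|=a_m(t)$, so that
$$
{2\,\sup_x\rho_m(x,t)\over|{\rm supp}\thinspace(\rho_m(t))|}={2\over t}
$$
independently of $m$. The case $c>0$ reduces to $c=0$ since $v(x,t):=\rho_m^c(x+ct,t)-c$ solves the Burgers equation with initial value $m\delta$, so that $\rho_m^c(x,t)=c+\rho_m(x-ct,t)$.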
\section{Porous medium equation}
Let $u(x,t)$ be the solution to the porous medium equation \begin{equation}\label{PME5} \partial_t u=\Delta u^\gamma,\ u(x,0)=u^0(x)\ge0,\quad t,\gamma>0,\ x\in\mbox{\boldmath$R$}^n. \end{equation} The fundamental solution of this equation is called the Barenblatt solution and is explicitly given by \begin{equation}\label{Barenblatt}
\rho_m(x,t)=\Big (C_mt^{1-\gamma\over \gamma+1}-{\gamma-1\over2\gamma(\gamma+1)}|x|^2t^{-1}\Big)_+^{1\over \gamma-1},\quad \gamma\ne1, \end{equation} where we are using the notation $(f)_+:=\max(0,f)$. For $\gamma=1$, the fundamental solution is of course the Gaussian. The constant $C_m$ is positive and determined by the total mass relation $\int\rho_m(x,t)dx=m$. For the fast diffusion regime, $0<\gamma<1$, the inside of the parenthesis is positive for all $x\in\mbox{\boldmath$R$}^n$ and hence $\rho_m$ is strictly positive and $C^\infty$ on $\mbox{\boldmath$R$}^n$. It is also well known that the general solution $u$ is also strictly positive and $C^\infty$ on $\mbox{\boldmath$R$}^n$. For the porous medium equation regime, $\gamma>1$, the fundamental solution $\rho_m$ is compactly supported and $C^\infty$ in the interior of the support. The solution $u$ is also $C^\infty$ away from zero points.
For dimension $n=1$, the Aronson-B\'{e}nilan inequality in (\ref{ABInequality}) is written as \begin{equation}\label{AB1D} \partial_x^2\wp(u)\ge -{1\over t(\gamma+1)},\quad \wp(u):={\gamma\over \gamma-1}u^{\gamma-1},\quad \gamma\ne1, \end{equation} where $\wp$ is usually called pressure. One can easily check that the pressure is an increasing function for all $\gamma>0$ and the Barenblatt solution satisfies the equality in (\ref{AB1D}). In the following theorem we show that the connectedness of the zero level set in Theorem \ref{thm.generalization} is equivalent to the Aronson-B\'{e}nilan one-sided inequality.
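Before stating it, we record the computation behind the last claim. In dimension $n=1$, inside the support of the Barenblatt solution (\ref{Barenblatt}) one has
$$
\wp(\rho_m)={\gamma\over \gamma-1}\Big(C_mt^{1-\gamma\over \gamma+1}-{\gamma-1\over2\gamma(\gamma+1)}x^2t^{-1}\Big),
\qquad\mbox{so that}\qquad
\partial_x^2\wp(\rho_m)=-{1\over t(\gamma+1)},
$$
that is, equality holds in (\ref{AB1D}) wherever $\rho_m>0$.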
\begin{theorem}\label{thm.equi2} Let $\rho_m(x,t)$ be the Barenblatt solution with $1\ne \gamma>0$ and $u(x)$ be a non-negative bounded smooth function with possible singularity at zero points. For the case $0<\gamma<1$, $u$ is assumed to be positive. Then, the Aronson-B\'{e}nilan inequality (\ref{AB1D}) is satisfied if and only if the zero level set $A(t;m,x_0):=\{x\in\mbox{\boldmath$R$}:\rho_m(x-x_0,t)-u(x)>0\}$ is connected (in the sense of Definition \ref{Def.Connectedness}) for all $x_0\in\mbox{\boldmath$R$}$ and $m>0$. \end{theorem} \begin{proof} ($\Rightarrow$) Suppose that the set $A(t;m,x_0)$ is not connected for some $m>0$ and $x_0\in\mbox{\boldmath$R$}$. After a translation of $u(x)$, we may set $x_0=0$. Then, there exist three points $x_1<x_2<x_3$ such that $\rho_m(x_1,t)>u(x_1)$, $\rho_m(x_2,t)<u(x_2)$, and $\rho_m(x_3,t)>u(x_3)$. Let $\zeta:=\wp(u)-\wp(\rho_m)$ be the pressure difference. Suppose that the Aronson-B\'{e}nilan inequality (\ref{AB1D}) holds. Then, $$ \partial_x^2\zeta=\partial_x^2\wp(u)-\partial_x^2\wp(\rho_m)\ge0. $$ Note that the pressure function $\wp:u\to {\gamma\over \gamma-1}u^{\gamma-1}$ is an increasing function for $\gamma>0$ and hence we have $$ \zeta(x_1)<0,\ \zeta(x_3)<0. $$ The maximum principle implies that $\zeta(x)<0$ on $(x_1,x_3)$. However, it contradicts to $\zeta(x_2)>0$. Hence the Aronson-B\'{e}nilan inequality should fail.
($\Leftarrow$) Now suppose that there exists $x_2$ such that $\partial_x^2\wp(u(x_2))<-{1\over t(\gamma+1)}$, i.e., the Aronson-B\'{e}nilan inequality fails at a point $x_2$. Then, since $u$ is smooth away from zero points, there exist $x_1<x_2<x_3$ and $\epsilon>0$ such that $\partial_x^2\wp(u(x))<-{1\over t(\gamma+1)}-\epsilon$ and $u>0$ on $[x_1,x_3]$. Let $h(x,t)=-{1\over 2t(\gamma+1)}(x-x_0)^2+b$, where two unknowns, $x_0$ and $b$, are uniquely decided by two relations, $$ h(x_1,t)=\wp(u(x_1)),\quad h(x_3,t)=\wp(u(x_3)). $$
Consider the porous medium regime $\gamma>1$ first. Then $\wp(u(x_1))>0$ and, since $h$ is not entirely negative, the constant $b$ should be positive. Set $C_m:={\gamma-1\over \gamma}bt^{\gamma-1\over \gamma+1}$. Then, $C_m>0$ and $$ h(x,t)={\gamma\over \gamma-1}C_mt^{1-\gamma\over \gamma+1}-{1\over2(\gamma+1)}(x-x_0)^2t^{-1}. $$ Therefore, $h(x,t)=\wp(\rho_m(x-x_0,t))$ for $\rho_m(x,t)>0$. Let $\zeta_m:=\wp(u)-\wp(\rho_m)$. Then, $$ \partial^2_{x}\zeta_m<-\epsilon,\ \zeta_m(x_1)=\zeta_m(x_3)=0. $$ The strong maximum principle implies that $\zeta_m(x)>0$ for all $x\in(x_1,x_3)$. Therefore, there exists $m'>m$ such that $\zeta_{m'}(x_2)>0$. Since $\rho_{m'}(x,t)>\rho_m(x,t)$ for all $x$ in the interior of the support of $\rho_{m'}(t)$, we conclude that $$ \zeta_{m'}(x_1)<0,\ \zeta_{m'}(x_2)>0,\ \zeta_{m'}(x_3)<0. $$ Therefore, the set $A(t;m',x_0)$ is disconnected.
For the fast diffusion regime, $0<\gamma<1$, we need a slightly more subtle approach to obtain the positivity of the corresponding constant $C_m>0$. Since $u$ is bounded, we may set $-B:=\wp(\sup u)<0$. Then, $\wp(u(x))\le -B$ for all $x\in\mbox{\boldmath$R$}$. Since $u$ is smooth, so is $\wp(u)$. Suppose that $\partial_x^2\wp(u)$ has minimum value at $x_2$ and $\partial_x^2\wp(u(x_2))<-{1\over t(\gamma+1)}$, i.e., the Aronson-B\'{e}nilan inequality fails. For a sufficiently small ${\varepsilon}>0$, there exists $t_1<t$ such that $\partial_x^2\wp(u(x_2))=-{1\over t_1(\gamma+1)}-2{\varepsilon}$. Then, since $u$ is smooth, there exist $x_1<x_2<x_3$ and $\epsilon>0$ such that $\partial_x^2\wp(u(x))<-{1\over t_1(\gamma+1)}-\epsilon$ for $x\in(x_1,x_3)$. Let $h^{\varepsilon}(x,t):=-\big({1\over 2t_1(\gamma+1)}+{\varepsilon}\big)(x-x'_0)^2+b$, where $x'_0$ and $b$ are uniquely decided by $$ h^{\varepsilon}(x_1,t)=\wp(u(x_1)),\quad h^{\varepsilon}(x_3,t)=\wp(u(x_3)). $$ Since $h^{\varepsilon}(x,t)$ has the minimum curvature of $\wp(u)$ and shares the same values at $x_1$ and $x_3$ with $\wp(u)$, we have $h^{\varepsilon}(x,t)\le \wp(u(x))$ for all $x\not\in(x_1,x_3)$. Since the curvature difference between $h^{\varepsilon}$ and $\wp(u)$ is less than ${\varepsilon}$ on the interval, we have $h^{\varepsilon}(x,t)<0$ for all $x\in\mbox{\boldmath$R$}$. By taking smaller ${\varepsilon}>0$ if needed, we obtain $h(x,t):=-{1\over 2t_1(\gamma+1)}(x-x'_0)^2+b<0$ using the same boundary condition. Therefore, $b<0$ and hence the constant $C_m:={\gamma-1\over \gamma}bt_1^{\gamma-1\over \gamma+1}$ becomes positive. The same arguments for the PME case show that there exists $m'>0$ such that $A(t_1;m',x'_0)$ is disconnected with $t_1<t$. Therefore there exists $m>0$ and $x_0$ that make $A(t;m,x_0)$ be disconnected. $
\qed$ \end{proof}
The Aronson-B\'{e}nilan one-sided inequality is valid in multi-dimensions. Hence it is natural to ask what is the corresponding equivalent concept for the multi-dimensional case. Further discussions on this matter are in the next section.
\section{Connectivity in multi-dimensions}\label{sect.Rn}
In this section we discuss the possibility of extending the one dimensional theory of this paper to multi-dimensions. Let $u(x,t)$ be a bounded nonnegative solution of \begin{equation}\label{EQNRn} \partial_t u=F(t,u,Du,D^2u),\ u(x,0)=u^0(x)\ge0,\ t>0,\ x\in\mbox{\boldmath$R$}^n, \end{equation} where the $n\times n$ matrix $D_qF(t,z,p,q)$ is positive semi-definite, i.e., \begin{equation}\label{PositiveDefinit} \sum_{i,j=1}^n\big(D_{q_{ij}}F(t,z,p,q)\big)\xi_i\xi_j\ge0 \end{equation} for all $\xi_i\in\mbox{\boldmath$R$}$.
Recall that the one dimensional theory depends on the non-increase of the number of zeros or of the lap number. An advantage of the argument in Theorem \ref{thm.generalization} in comparison with the one-sided inequalities is that connectivity is a multi-dimensional concept. Counting the number of zeros is meaningless in multi-dimensions. A correct way is to count the number of connected components of the zero level set. However, the number of connected components is not non-increasing in general in multi-dimensions. For example, let $v$ be another solution with an initial value $v^0$ and consider the number of connected components of the set $A(t):=\{x\in\mbox{\boldmath$R$}^n:v(x,t)-u(x,t)\ge0\}$. Unfortunately, the number of connected components may increase depending on the initial distributions, and the situation is far more delicate. Hence, an extension of the lap number theory or the zero set theory to multi-dimensions should be one that classifies the cases in which the number of connected components of the level set does not increase.
The case of this paper is when $v(x,t)=\rho_{m,x_0}(x,t)$ with $x_0\in\mbox{\boldmath$R$}^n$ and $m>0$, i.e., the zero level set is \begin{equation}\label{LevelSetRn} A(t;m,x_0):=\{x\in\mbox{\boldmath$R$}^n:\rho_m(x-x_0,t)-u(x,t)\ge0\}. \end{equation} Therefore, our chance to extend Theorem \ref{thm.generalization} to multi-dimensions comes from the fact that $\rho_m(x,t)$ has a special initial value, the delta distribution, which is the steepest one. If one can show that this set is simply connected, then it may indicate that the fundamental solution $\rho_m$ is steeper than any other solution. In the following theorem we will show that the zero level set $A(t;m,x_0)$ is convex for the heat equation case.
Let $u(x,t)$ be the bounded nonnegative solution of the heat equation \begin{equation}\label{HeatEqn} \partial_t u=\Delta u,\quad u(x,0)=u^0(x)\ge0,\quad x\in\mbox{\boldmath$R$}^n,\ t>0. \end{equation} Let $\rho_m(x,t)$ be the fundamental solution of the heat equation of mass $m>0$, i.e., $$
\rho_m(x,t)=m\phi(x,t),\quad \phi(x,t)={1\over\sqrt{4\pi t}^{\,n}}e^{-|x|^2/4t}, $$ where $\phi(x,t)$ is called the heat kernel. Then, the solution $u(x,t)$ is given by $$ u(x,t)=u^0*\phi(t)=\int u^0(y)\phi(x-y,t)dy. $$
\begin{theorem}\label{thm.HeatEqn} Let $u(x,t)$ be the bounded solution of the heat equation (\ref{HeatEqn}) and $\rho_m(x,t)$ be the fundamental solution of mass $m>0$. Then the set $A(t;m,x_0)$ in (\ref{LevelSetRn}) is convex or empty for all $m,t>0$ and $x_0\in\mbox{\boldmath$R$}^n$. \end{theorem} \begin{proof} Since the heat equation is invariant under translations in the space variable $x$, it is enough to consider the case $x_0=0$. First rewrite the level set $A$ as $$ A(t;m)=\{x\in\mbox{\boldmath$R$}^n:\psi(x,t)\le1\}, $$ where $\psi(x,t):={u(x,t)\over\rho_m(x,t)}$ is well-defined for all $t>0$. Rewrite $\psi(x,t)$ as $$
\psi(x,t)=\int {u^0(y)\over m} {\phi(x-y,t)\over\phi(x,t)}dy =\int {u^0(y)\over m} e^{2x\cdot y\over4t}e^{-|y|^2\over4t}dy. $$ Differentiating $\psi$ twice with respect to $x_i$ gives $$
{\partial^2\over\partial x_i^2}\psi(x,t)= \int {u^0(y)\over m} \Big({y_i\over2t}\Big)^2e^{2x\cdot y\over4t}e^{-|y|^2\over4t}dy\ge0. $$ Therefore, $\psi$ is convex along any line segment parallel to a coordinate axis. Note that the heat equation is invariant under rotations and hence $\psi$ is convex along any line segment. Suppose that the zero level set $A(t;m)$ is not convex. Then there exist $x_1,x_2\in A(t;m)$ such that $(x_1+x_2)/2\notin A(t;m)$, which contradicts the fact that $\psi$ is convex on the line segment that connects $x_1$ and $x_2$. Hence the set $A(t;m)$ is convex. $
\qed$ \end{proof}
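Alternatively, the rotation argument can be replaced by a direct computation: for a unit vector $v\in\mbox{\boldmath$R$}^n$, differentiating twice under the integral sign (which is justified by the Gaussian decay of the integrand) gives
$$
\partial_v^2\psi(x,t)=\int {u^0(y)\over m}\Big({v\cdot y\over2t}\Big)^2e^{2x\cdot y\over4t}e^{-|y|^2\over4t}dy\ge0,
$$
so $\psi(\cdot,t)$ is convex along every direction and hence convex.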
This theorem gives us a hope to extend the one dimensional theory to multi-dimensions under the parabolicity assumption (\ref{PositiveDefinit}).
\end{document}
\begin{document}
\def\Xint#1{\mathchoice
{\XXint\displaystyle\textstyle{#1}}
{\XXint\textstyle\scriptstyle{#1}}
{\XXint\scriptstyle\scriptscriptstyle{#1}}
{\XXint\scriptscriptstyle\scriptscriptstyle{#1}}
\!\int} \def\XXint#1#2#3{{\setbox0=\hbox{$#1{#2#3}{\int}$}
\vcenter{\hbox{$#2#3$}}\kern-.5\wd0}}
\title[Quantitative two weight theorem]{ A new quantitative two weight theorem for the Hardy-Littlewood maximal operator}
\author{Carlos P\'erez} \address{Departamento de An\'alisis Matem\'atico, Facultad de Matem\'aticas, Universidad de Sevilla, 41080 Sevilla, Spain} \email{[email protected]}
\author{Ezequiel Rela} \address{Departamento de An\'alisis Matem\'atico, Facultad de Matem\'aticas, Universidad de Sevilla, 41080 Sevilla, Spain} \email{[email protected]}
\thanks{Both authors are supported by the Spanish Ministry of Science and Innovation grant MTM2012-30748 and by the Junta de Andaluc\'ia, grant FQM-4745.}
\subjclass{Primary: 42B25. Secondary: 43A85.}
\keywords{Two weight theorem, Space of homogeneous type, Muckenhoupt weights, Calder\'on-Zygmund, Maximal functions}
\begin{abstract} A quantitative two weight theorem for the Hardy-Little\-wood maximal operator is proved, improving the known ones. As a consequence, a new proof of the main results in \cite{HP} and \cite{HPR1} is obtained which avoids the use of the sharp quantitative reverse H\"older inequality for $A_{\infty}$ proved in those papers. Our results are valid within the context of spaces of homogeneous type without imposing the non-empty annuli condition. \end{abstract}
\maketitle
\section{ Introduction and Main results} \label{sec:intro}
\subsection{Introduction} The purpose of this note is to present a \emph{quantitative} two weight theorem for the Hardy-Littlewood maximal operator when the underlying space is a space of homogeneous type $\mathcal{S}$ (SHT in the sequel), endowed with a quasimetric $\rho$ and a doubling measure $\mu$ (see Section \ref{sec:SHT} for the precise definitions). We briefly recall some background on this problem in the \emph{euclidean} or \emph{classical} setting, when we are working in $\mathbb{R}^n$ and we consider Lebesgue measure and euclidean metric. We also assume that in this classical setting all the maximal operators involved and $A_p$ classes of weights are defined over cubes. Let $M$ stand for the usual uncentered Hardy-Littlewood maximal operator: \begin{equation*}
Mf(x) = \sup_{Q\ni x } \frac{1}{|Q|}\int_{Q} |f|\,dx. \end{equation*} The problem of characterizing the pair of weights for which the maximal operator is bounded between weighted Lebesgue spaces was solved by Sawyer \cite{sawyer82b}: To be more precise, if $1<p<\infty$ we define for any pair of weights $w,\sigma$, the (two weight) norm,
\begin{equation} \label{MainQuestion}
\|M(\cdot \sigma)\|_{L^p(w)}:= \sup_{f\in L^p(\sigma)} \frac{ \|M(f \sigma) \|_{L^p(w)}}{ \|f\|_{L^p(\sigma)} } \end{equation}
then Sawyer showed that $\|M(\cdot \sigma)\|_{L^p(w)}$ is finite if and only if
\begin{equation*}
\sup_Q \frac{\int_Q M(\chi_Q \sigma )^p \, w\,dx}{ \sigma(Q)}<\infty, \end{equation*}
where the supremum is taken over all the cubes in $\mathbb{R}^n$. A quantitative precise version of this result is the following: if we define
\begin{equation*}
[w,\sigma]_{S_p}:= \left(\frac1{\sigma(Q)} \,\int_Q M(\sigma\chi_Q)^pw\ dx\right)^{1/p}. \end{equation*} then \begin{equation}\label{eq:moen}
\|M(\cdot \sigma)\|_{L^p(w)} \sim p'[w,\sigma]_{S_p}, \end{equation} where $\frac{1}{p}+\frac{1}{p'}=1$. This result is due to K. Moen and can be found in \cite{Moen:Two-weight}.
However, it is still an open problem to find a characterization more closely related to the $A_p$ condition of Muckenhoupt which is easier to use in applications. Indeed, recall that the two weight $A_p$ condition:
\begin{equation*} \sup_Q\left(\Xint-_Q w\ dx\right)\left(\Xint-_Q v^{-\frac1{p-1}}\ dx\right)^{p-1}<\infty \end{equation*}
is necessary for the boundedness of $M$ from $L^p(v)$ into $L^{p}(w)$ (which is clearly equivalent, setting $\sigma=v^{1-p'}$, to the two weight problem), but it is not sufficient. Therefore, the general idea is to strengthen the $A_p$ condition to make it sufficient. The first result in this direction is due to Neugebauer \cite{Neugebauer}, who proved that, for any $r>1$, it is sufficient to consider the following ``power bump'' of the $A_p$ condition:
\begin{equation}\label{Neug} \sup_Q\left(\Xint-_Q w^{r}\ dx\right)^\frac{1}{r}\left(\Xint-_Q v^{-\frac{r}{p-1}}\ dx\right)^{\frac{p-1}{r}}<\infty. \end{equation}
Later, the first author improved this result in \cite{perez95} by considering a different approach which allows one to consider much larger classes of weights. The new idea is to replace \emph{only} the average norm associated to the weight $v^{-\frac1{p-1}}$ in \eqref{Neug} by a ``stronger'' norm which is often called a ``bump''. This norm is defined in terms of an appropriate Banach function space $X$ satisfying a certain special property. This property is related to the $L^p$ boundedness of a natural maximal function associated to the space. More precisely, for a given Banach function space $X$, the local $X$-average of a measurable function $f$ associated to the cube $Q$ is defined as
\begin{equation*}
\|f\|_{X,Q}=\left\|\tau_{\ell(Q)}(f\chi_Q)\right\|_X, \end{equation*} where $\tau_\delta$ is the dilation operator $\tau_\delta f(x)=f(\delta x)$, $\delta>0$ and $\ell(Q)$ stands for the sidelength of the cube $Q$. The natural maximal operator associated to the space $X$ is defined as
\begin{equation*}
M_{X}f(x)= \sup_{Q:x\in Q} \|f\|_{X,Q} \end{equation*}
and the key property is that the maximal operator $M_{X'}$ is bounded on $L^p(\mathbb{R}^n)$ where $X'$ is the associate space to $X$ (see \eqref{bnessX'} below).
As a corollary of our main result, Theorem \ref{thm:main}, we will give a quantitative version of the main result from \cite{perez95} regarding sufficient conditions for the two weight inequality to hold:
\begin{theorem}\label{thm:perez-bump} Let $w$ and $\sigma$ be a pair of weights that satisfies the condition
\begin{equation}\label{keySufficientCondition}
\sup_Q \left(\Xint-_Q w\ dx\right) \|\sigma^{1/p'}\|^p_{X,Q} <\infty. \end{equation}
Suppose, in addition, that the maximal operator associated to the associate space is bounded on $L^p(\mathbb{R}^n)$: \begin{equation}\label{bnessX'}
M_{X'}: L^p(\mathbb{R}^n)\to L^p(\mathbb{R}^n). \end{equation} Then there is a finite positive constant $C$ such that:
\begin{equation*}
\|M(\cdot \sigma)\|_{L^p(w)} \leq C. \end{equation*}
\end{theorem}
In this note we give a different result of this type with the hope that it may lead to different, possibly better, conditions for the two weight problem for singular integral operators.
Most of the interesting examples are obtained when $X$ is an Orlicz space $L_\Phi$ defined in terms of the Young function $\Phi$ (see Section \ref{sec:SHT} for the precise definitions). In this case, the local average with respect to $\Phi$ over a cube $Q$ is
\begin{equation*}
\|f\|_{\Phi,Q} =\|f\|_{\Phi,Q,\mu}= \inf\left\{\lambda >0:
\frac{1}{\mu(Q)}\int_{Q} \Phi\left(\frac{ |f|}{ \lambda } \right) dx \le 1\right\} \end{equation*}
where $\mu$ is here the Lebesgue measure. The corresponding maximal function is \begin{equation}\label{eq:maximaltype}
M_{\Phi}f(x)= \sup_{Q:x\in Q} \|f\|_{\Phi,Q}. \end{equation}
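Two standard examples may help to fix ideas. For $\Phi(t)=t^r$ with $r\ge1$, the Luxemburg norm is simply the normalized $L^r$ average and $M_\Phi$ is the operator usually denoted by $M_r$:
\begin{equation*}
\|f\|_{\Phi,Q}=\left(\frac{1}{|Q|}\int_Q|f|^r\,dx\right)^{1/r},\qquad M_{\Phi}f=M_rf:=M(|f|^r)^{1/r}.
\end{equation*}
For $\Phi(t)=t\log(e+t)$, it is well known that $M_\Phi=M_{L\log L}$ is pointwise comparable to the iterated maximal operator $M\circ M$.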
Related to condition \eqref{keySufficientCondition} we introduce here the following quantities.
\begin{definition}\label{def:Ap-multiple} Let $(\mathcal{S},d\mu)$ be a SHT. Given a ball $B\subset \mathcal{S}$, a Young function $\Phi$ and two weights $w$ and $\sigma$, we define the quantity \begin{equation}\label{eq:A_p-local}
A_p(w,\sigma,B,\Phi):=\left( \Xint-_{B} w\, d\mu\right)\|\sigma^{1/p'}\|^p_{\Phi,B} \end{equation}
and we say that a pair of weights belong to the $A_{p,\Phi}$ class if
\begin{equation*} [w,\sigma,\Phi]_{A_p}:=\sup_B A_p(w,\sigma,B,\Phi) <\infty, \end{equation*} where the $\sup$ is taken over all balls in the space. In the particular case of $\Phi(t)=t^{p'}$, this condition corresponds to the classical $A_p$ condition and we use the notation
\begin{equation*}
[w,\sigma]_{A_p}:=\sup_B\left(\Xint-_{B} w\ d\mu\right)\left(\Xint-_{B} \sigma\ d\mu\right)^{p-1}. \end{equation*}
\end{definition}
We now define a generalization, by means of a Young function $\Phi$, of the Fujii-Wilson constant of an $A_{\infty}$ weight $\sigma$ introduced in \cite{HP}:
\begin{equation*} [\sigma,\Phi]_{W_p}:=\sup_B\frac{1}{\sigma(B)}\int_B M_{\Phi}\left(\sigma^{1/p}\chi_B\right)^p\ d\mu \end{equation*}
Note that the particular choice $\Phi_p(t):=t^p$ reduces to the $A_\infty$ constant (see Definition \eqref{eq:Ainfty} from Section \ref{sec:SHT}): \begin{equation}\label{eq:WpPhi-p--Ainfty} [\sigma,\Phi_p]_{W_p}=\sup_B\frac{1}{\sigma(B)}\int_B M\left(\sigma\chi_B\right)\ d\mu =[\sigma]_{A_\infty}. \end{equation}
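Indeed, \eqref{eq:WpPhi-p--Ainfty} follows directly from the definitions: for $\Phi_p(t)=t^p$ one has
\begin{equation*}
\|\sigma^{1/p}\chi_B\|_{\Phi_p,B'}=\left(\frac{1}{\mu(B')}\int_{B'}\sigma\chi_B\, d\mu\right)^{1/p}
\qquad\mbox{and hence}\qquad
M_{\Phi_p}\big(\sigma^{1/p}\chi_B\big)^p=M\left(\sigma\chi_B\right).
\end{equation*}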
\subsection{Main results} Our main purpose in the present note is to address the problem mentioned above within the context of spaces of homogeneous type. In this context, the Hardy--Littlewood maximal operator $M$ is defined over balls: \begin{equation}\label{eq:maximal-SHT}
Mf(x) = \sup_{B\ni x } \frac{1}{\mu(B)}\int_{B} |f|\,d\mu. \end{equation}
The Orlicz type maximal operators are defined also with balls and with respect to the measure $\mu$ in the natural way.
Our main result is the following theorem. \begin{theorem} \label{thm:main} Let $1 < p < \infty$ and let $\Phi$ be any Young function with conjugate function $\bar\Phi$. Then, for any pair of weights $w,\sigma$, there exists a structural constant $C>0$ such that the (two weight) norm defined in \eqref{MainQuestion} satisfies
\begin{equation}\label{eq:main}
\|M(\cdot \sigma)\|_{L^p(w)} \leq C p'\left( [w,\sigma,\Phi]_{A_p}[\sigma,\bar\Phi]_{W_p}\right)^{1/p}, \end{equation} \end{theorem}
We emphasize that \eqref{eq:main}, which is new even in the usual context of Euclidean spaces, fits into the spirit of the $A_p$-$A_{\infty}$ theorem derived in \cite{HP} and \cite{HPR1}. The main point here is that we have a two weight result with a better condition and with a proof that completely avoids the use of the sharp quantitative reverse H\"older inequality for $A_{\infty}$ weights proved in these papers. This property is, of course, of independent interest but it is not used in our results.
From this Theorem, we derive several corollaries. First, we have a direct proof of the two weight result derived in \cite{HP} using the $[w]_{A_{\infty}}$ constant of Fujii-Wilson \eqref{eq:Ainfty}.
\begin{corollary}\label{cor:mixed-two-weight} Under the same hypothesis of Theorem \ref{thm:main}, we have that there exists a structural constant $C>0$ such that
\begin{equation}\label{eq:mixed-two}
\|M(\cdot \sigma)\|_{L^p(w)} \leq Cp'\left([w,\sigma]_{A_p}[\sigma]_{A_\infty}\right)^{1/p}. \end{equation} \end{corollary}
Note that the result in Theorem \ref{thm:main} involves two suprema, as does Corollary \ref{cor:mixed-two-weight}. It would be interesting to find out whether there is a version of this result involving only one supremum. There is some evidence that this could be the case; see for example \cite{HP}, Theorem 4.3, and also the recent work \cite{LM}.
As a second consequence of Theorem \ref{thm:main}, we have the announced quantitative version of Theorem \ref{thm:perez-bump}:
\begin{corollary}\label{cor:precise-bump} Under the same hypothesis of Theorem \ref{thm:main}, we have that there exists a structural constant $C>0$ such that
\begin{equation*}
\|M(\cdot \sigma)\|_{L^p(w)} \leq C p' [w,\sigma,\Phi]_{A_p}^{1/p} \|M_{\bar\Phi}\|_{L^p(\mathbb{R}^n)} \end{equation*}
\end{corollary} We remark that this approach produces a non-optimal dependence on $p$, since we pay an extra factor of $p'$ for using Sawyer's theorem. However, the ideas from the proof of Theorem \ref{thm:main} can be used to derive a direct proof of Corollary \ref{cor:precise-bump} without the $p'$ factor. We include the proof in the appendix.
Finally, for the one weight problem, we recover the known mixed bound. \begin{corollary}\label{cor:mixed-one-weight} For any $A_p$ weight $w$ the following mixed bound holds: \begin{equation*}
\|M\|_{L^p(w)} \leq C p' \left([w]_{A_p}[\sigma]_{A_\infty}\right)^{1/p} \end{equation*} where $C$ is a structural constant and as usual $\sigma=w^{1-p'}$ is the dual weight. \end{corollary}
\begin{remark} To be able to extend the proofs to this general scenario, we need to use (and prove) suitable versions of classical tools on this subject, such as Calder\'on--Zygmund decompositions. We remark that in previous works (\cite{PW-JFA}, \cite{SW}) most of the results are proved under the assumption that the space has non-empty annuli. The main consequence of this property is that in that case the measure $\mu$ enjoys a reverse doubling property, which is crucial in the proof of Calder\'on--Zygmund type lemmas. However, this assumption implies, for instance, that the space has infinite measure and no atoms (i.e. points with positive measure) and therefore constrains the family of spaces under study. Recently, some of those results were proved without this hypothesis; see for example \cite{Pradolini-Salinas}. Here we choose to work without the annuli property and therefore we need to adapt the proofs from \cite{PW-JFA}. Hence, we will need to consider separately the cases when the space has finite or infinite measure. An important and useful result on this matter is the following: \end{remark}
\begin{lemma}[\cite{vili}]\label{lem:bounded-finite} Let $(\mathcal S,\rho,\mu)$ be a space of homogeneous type. Then $\mathcal{S}$ is bounded if and only if $\mu(\mathcal S)<\infty$. \end{lemma}
\subsection{Outline} The article is organized as follows. In Section \ref{sec:prelim} we summarize some basic needed results on spaces of homogeneous type and Orlicz spaces. We also include a Calder\'on--Zygmund type decomposition lemma. In Section \ref{sec:proofs} we present the proofs of our results. Finally, we include in Section \ref{sec:appendix} an Appendix with a direct proof of a slightly better version of Corollary \ref{cor:precise-bump}, which in turn yields another proof of Corollary \ref{cor:mixed-two-weight}.
\section{Preliminaries}\label{sec:prelim}
In this section we first summarize some basic aspects regarding spaces of homogeneous type and Orlicz spaces. Then, we include a Calder\'on--Zygmund (C--Z) decomposition lemma adapted to our purposes.
\subsection{Spaces of homogeneous type}\label{sec:SHT} A quasimetric $d$ on a set $\mathcal{S}$ is a function $d:{\mathcal S} \times {\mathcal S} \rightarrow [0,\infty)$ which satisfies \begin{enumerate}
\item $d(x,y)=0$ if and only if $x=y$; \item $d(x,y)=d(y,x)$ for all $x,y$;
\item there exists a finite constant $\kappa \ge 1$ such that, for all $x,y,z \in \mathcal{S}$, \begin{equation*} d(x,y)\le \kappa (d(x,z)+d(z,y)). \end{equation*} \end{enumerate}
Given $x \in \mathcal{S}$ and $r > 0$, we define the ball with center $x$ and radius $r$, $B(x,r) := \{y \in {\mathcal{S}} :d(x,y) < r\}$ and we denote its radius $r$ by $r(B)$ and its center $x$ by $x_B$. A space of homogeneous type $({\mathcal{S}},d,\mu)$ is a set $\mathcal{S}$ endowed with a quasimetric $d$ and a doubling nonnegative Borel measure $\mu$ such that \begin{equation}\label{eq:doubling}
\mu(B(x,2r)) \le C\mu(B(x,r)) \quad \text{for every } x\in\mathcal{S} \text{ and } r>0. \end{equation}
Let $C_\mu$ be the smallest constant satisfying \eqref{eq:doubling}. Then $D_\mu = \log_2 C_\mu$ is called the doubling order of $\mu$. It follows that \begin{equation} \frac{\mu(B)}{\mu(\tilde{B})} \le C^{2+\log_2\kappa}_{\mu}\left(\frac{r(B)}{r(\tilde{B})}\right)^{D_\mu} \;\mbox{for all balls}\; \tilde{B} \subset B. \end{equation}
In particular for $\lambda>1$ and $B$ a ball, we have that \begin{equation}\label{eq:doublingDIL}
\mu(\lambda B) \le (2\lambda)^{D_\mu} \mu(B). \end{equation} Here, as usual, $\lambda B$ stands for the dilation of a ball $B(x,\lambda r)$ with $\lambda>0$. Throughout this paper, we will say that a constant $c=c(\kappa,\mu)>0$ is a \emph{structural constant} if it depends only on the quasimetric constant $\kappa$ and the doubling constant $C_\mu$.
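For instance, for the Lebesgue measure on $\mathbb{R}^n$ with the Euclidean metric one can take $C_\mu=2^n$, so that $D_\mu=n$ and \eqref{eq:doublingDIL} recovers, up to a constant, the usual volume growth $|\lambda B|=\lambda^n|B|$.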
An elementary but important property of the quasimetric is the following. Suppose that we have two balls $B_1=B(x_1,r_1)$ and $B_2=B(x_2,r_2)$ with non empty intersection. Then, \begin{equation}\label{eq:engulfing}
r_1\le r_2 \Longrightarrow B_1\subset \kappa(2\kappa+1)B_2. \end{equation} This is usually known as the ``engulfing'' property and follows directly from the quasitriangular property of the quasimetric.
In a general space of homogeneous type, the balls $ B(x,r)$ are not necessarily open, but by a theorem of Macias and Segovia \cite{MS}, there is a continuous quasimetric $d'$ which is equivalent to $d$ (i.e., there are positive constants $c_{1}$ and $c_{2}$ such that $c_{1}d'(x,y)\le d(x,y) \le c_{2}d'(x,y)$ for all $x,y \in \mathcal{S}$) for which every ball is open. We always assume that the quasimetric $d$ is continuous and that balls are open.
We will adopt the usual notation: if $\nu$ is a measure and $E$ is a measurable set, $\nu(E)$ denotes the $\nu$-measure of $E$. Also, if $f$ is a measurable function on $(\mathcal S,d,\mu)$ and $E$ is a measurable set, we will use the notation $f(E):=\int_E f(x)\ d\mu$. We also will denote the $\mu$-average of $f$ over a ball $B$ as $f_{B} = \Xint-_B f d\mu$. We recall that a weight $w$ (any non negative measurable function) satisfies the $A_p$ condition for $1<p<\infty$ if \begin{equation*}
[w]_{A_p}:=\sup_B\left(\Xint-_B w\ d\mu\right)\left(\Xint-_B w^{-\frac{1}{p-1}}\ d\mu\right)^{p-1}<\infty, \end{equation*} where the supremum is taken over all the balls in $\mathcal{S}$. The $A_{\infty}$ class is defined in the natural way by $A_{\infty}:=\bigcup_{p>1}A_p$.
This class of weights can also be characterized by means of an appropriate constant. In fact, there are various different definitions of this constant, all of them equivalent in the sense that they define the same class of weights. Perhaps the most classical and best known definition is the following, due to Hru\v{s}\v{c}ev \cite{Hruscev} (see also \cite{GCRdF}): \begin{equation*} [w]^{exp}_{A_\infty}:=\sup_B \left(\Xint-_{B} w\,d\mu\right) \exp \left(\Xint-_{B} \log w^{-1}\,d\mu \right). \end{equation*} However, in \cite{HP} the authors use a ``new'' $A_\infty$ constant (which was originally introduced implicitly by Fujii in \cite{Fujii} and later by Wilson in \cite{Wilson:87}), which seems to be better suited. For any $w\in A_\infty$, we define
\begin{equation}\label{eq:Ainfty}
[w]_{A_\infty}:= [w]^{W}_{A_\infty}:=\sup_B\frac{1}{w(B)}\int_B M(w\chi_B )\ d\mu,
\end{equation} where $M$ is the usual Hardy--Littlewood maximal operator. When the underlying space is $\mathbb{R}^d$, it is easy to see that $[w]_{A_\infty}\le c [w]^{exp}_{A_\infty}$ for some structural $c>0$. In fact, it is shown in \cite{HP} that there are examples showing that $[w]_{A_\infty}$ is much smaller than $[w]^{exp}_{A_\infty}$. The same line of ideas yields the inequality in this wider scenario. See the recent work of Beznosova and Reznikov \cite{BR} for a comprehensive and thorough study of these different $A_\infty$ constants. We also refer the reader to the forthcoming work of Duoandikoetxea, Martin-Reyes and Ombrosi \cite{DMRO} for a discussion regarding different definitions of $A_\infty$ classes.
\subsection{Orlicz spaces}\label{sec:Orlicz} We recall here some basic definitions and facts about Orlicz spaces.
A function $\Phi:[0,\infty) \rightarrow [0,\infty)$ is called a Young function if it is continuous, convex, increasing and satisfies $\Phi(0)=0$ and $\Phi(t) \rightarrow \infty$ as $t \rightarrow \infty$. For Orlicz spaces, we are usually only concerned about the behaviour of Young functions for $t$ large. The space $L_{\Phi}$ is a Banach function space with the Luxemburg norm \[
\|f\|_{\Phi} =\|f\|_{\Phi,\mu} =\inf\left\{\lambda >0: \int_{\mathcal{S}}
\Phi( \frac{ |f|}{\lambda }) \, d\mu \le 1 \right\}. \] Each Young function $\Phi$ has an associated complementary Young function $\bar{\Phi}$ satisfying \begin{equation*} t\le \Phi^{-1}(t)\bar{\Phi}^{-1}(t) \le 2t \label{propiedad} \end{equation*} for all $t>0$. The function $\bar{\Phi}$ is called the conjugate of $\Phi$, and the space $L_{\bar{\Phi}}$ is called the conjugate space of $L_{\Phi}$. For example, if $\Phi(t) = t^p$ for $1 < p < \infty$, then $\bar{\Phi}(t) = t^{p'}, p' = p/(p-1)$, and the conjugate space of $L^p(\mu)$ is $L^{p'}(\mu)$.
A very important property of Orlicz spaces is the generalized H\"older inequality \begin{equation}\label{eq:HOLDERglobal}
\int_{\mathcal{S}} |fg|\, d\mu \le 2 \|f\|_{\Phi}\|g\|_{\bar{\Phi}}. \end{equation} Now we introduce local versions of Luxemburg norms. If $\Phi$ is a Young function, let \begin{equation*}
\|f\|_{\Phi,B} =\|f\|_{\Phi,B,\mu}= \inf\left\{\lambda >0:
\frac{1}{\mu(B)}\int_{B} \Phi\left(\frac{ |f|}{ \lambda }\right) \, d\mu \le 1\right\}. \end{equation*} Furthermore, the local version of the generalized H\"older inequality (\ref{eq:HOLDERglobal}) is \begin{equation}\label{eq:HOLDERlocal}
\frac{1}{\mu(B)}\int_{B}fg\, d\mu \le 2 \|f\|_{\Phi,B}\|g\|_{\bar{\Phi},B}. \end{equation} Recall the definition of the maximal type operators $M_\Phi$ from \eqref{eq:maximaltype}: \begin{equation}\label{eq:maximaltype-SHT}
M_{\Phi}f(x)= \sup_{B:x\in B} \|f\|_{\Phi,B}. \end{equation} An important fact related to this sort of operator is that its boundedness is related to the so called $B_p$ condition. For any positive function $\Phi$ (not necessarily a Young function), we have that \begin{equation*}
\|M_{\Phi}\|^p_{L^{p}(\mathcal{S})}\, \leq c_{\mu,\kappa}\, \alpha_{p}(\Phi), \end{equation*} where $\alpha_{p}(\Phi)$ is the tail quantity \begin{equation}\label{eq:Phi-p} \alpha_{p}(\Phi)= \int_{1}^{\infty} \frac{\Phi(t)}{t^p}\, \frac{dt}{t}, \end{equation} and the $B_p$ condition is precisely the finiteness of this integral. It is worth noting that in the recent article \cite{LL} the authors define the appropriate analogue of the $B_p$ condition in order to characterize the boundedness of the \emph{strong} Orlicz-type maximal function defined over rectangles, both in the linear and multilinear cases. Recent developments and improvements can also be found in \cite{Masty-Perez}, where the authors address the problem of studying the maximal operator between Banach function spaces.
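As a quick numerical illustration of the tail quantity \eqref{eq:Phi-p} (not taken from the references), the following Python sketch approximates $\alpha_p(\Phi)$ by a trapezoidal sum on a logarithmic grid; for $p=3$, the choice $\Phi(t)=t^2$ gives the finite value $1/(p-2)=1$, while $\Phi(t)=t^3$ produces a value that grows like the logarithm of the cutoff, reflecting a divergent integral.
\begin{verbatim}
# Illustrative sketch: approximate alpha_p(Phi) by a trapezoidal sum on a
# logarithmic grid.  For p=3, Phi(t)=t^2 is in B_3 while Phi(t)=t^3 is not.
import numpy as np

def alpha_p(Phi, p, upper=1e6, n=200001):
    t = np.logspace(0.0, np.log10(upper), n)
    y = Phi(t) / t ** (p + 1)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))

p = 3.0
print(alpha_p(lambda t: t ** 2, p))   # ~ 1/(p-2) = 1.0   (finite tail integral)
print(alpha_p(lambda t: t ** 3, p))   # ~ log(1e6) = 13.8  (grows with the cutoff)
\end{verbatim}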
\subsection{Calder\'on--Zygmund decomposition for spaces of homogeneous type}
\
The following lemma is a classical result in the theory, regarding a decomposition of a generic level set of the Hardy--Littlewood maximal function $M$. Some variants can be found in \cite{AimarPAMS} for $M$ and in \cite{AimarTAMS} for the centered maximal function $M^c$. In this latter case, the proof is straightforward. We include here a detailed proof for the general case of $M$, where some extra subtleties are needed.
\begin{lemma}[Calder\'on--Zygmund decomposition]\label{lem:stoppingtime} Let $B$ be a fixed ball and let $f$ be a bounded nonnegative measurable function. Let $M$ be the usual non-centered Hardy--Littlewood maximal function and define the set $\Omega_{\lambda}$ as \begin{equation}\label{eq:omegalambda} \Omega_\lambda = \{x \in B: Mf(x) >\lambda\}. \end{equation} Let $\lambda>0$ be such that $\lambda\ge \Xint-_B f\ d\mu$. If $\Omega_{\lambda}$ is non-empty, then given $\eta > 1$, there exists a countable family $\{B_i\}$ of pairwise disjoint balls such that, for $\theta=4\kappa^2+\kappa$, \begin{itemize} \item[i)] $\displaystyle \cup_{i} B_{i}\subset \Omega_{\lambda} \subset \cup_{i} \theta B_{i}$, \item[ii)] For all $i$, \begin{equation*} \lambda <\frac{1}{\mu(B_i)} \int_{B_i} f d\mu. \end{equation*} \item[iii)] If $B'$ is any ball such that $B_i\subset B'$ for some $i$ and $r(B')\ge \eta r(B_i)$, we have that \begin{equation}\label{eq:ballmaximal1} \frac{1}{\mu(B')} \int_{B'} f d\mu \le \lambda. \end{equation}
\end{itemize} \end{lemma}
\begin{proof} Define, for each $x\in\Omega_\lambda$, the following set:
\begin{equation*}
\mathcal{R}^\lambda_x=\left\{r>0: \text{there is a ball } B=B(y,r) \text{ with } x\in B \text{ and } \Xint-_{B} f\ d\mu >\lambda\right\},
\end{equation*} which is clearly non-empty. The key here is to prove that $\mathcal{R}^\lambda_x$ is bounded. If the whole space is bounded, there is nothing to prove. In the case of unbounded spaces, we argue as follows. Since the space is of infinite measure (recall Lemma \ref{lem:bounded-finite}), and clearly $S=\bigcup_{r>0} B(x,r)$, we have that $\mu(B(x,r))$ goes to $+\infty$ when $r\to\infty$ for any $x\in \mathcal{S}$. Therefore, for $K=\kappa(2\kappa+1)$, we can choose $r_1$ such that the ball $B_1=B(x,r_1)$ satisfies the inequality \begin{equation*}
\mu(B_1)\ge \frac{2(2K)^{D_\mu}\|f\|_{L^1}}{\lambda}. \end{equation*} Suppose now that $\sup\mathcal{R}^\lambda_x=+\infty$. Then we can choose a ball $B_2=B(y,r_2)$ for some $y$ such that $x\in B_2$, $\Xint-_{B_2}f\ d\mu>\lambda$ and $r_2>r_1$. Now, by the engulfing property \eqref{eq:engulfing} we obtain that $B_1\subset KB_2$. The doubling condition \eqref{eq:doublingDIL} yields \begin{equation*}
\mu(B_1)\le \mu(KB_2)\le (2K)^{D_\mu}\mu(B_2). \end{equation*} Then we obtain that \begin{equation*}
\frac{2\|f\|_{L^1}}{\lambda}\le \mu(B_2)
< \frac{\|f\|_{L^1}}{\lambda}, \end{equation*} where the last inequality holds because $\Xint-_{B_2}f\ d\mu>\lambda$ forces $\mu(B_2)<\frac{1}{\lambda}\int_{B_2}f\ d\mu\le \frac{\|f\|_{L^1}}{\lambda}$. This is a contradiction. We conclude that, in any case, for any $x\in \Omega_\lambda$, we have that $\sup \mathcal{R}^\lambda_x<\infty$.
Now fix $\eta>1$. If $x \in \Omega_\lambda$, there is a ball $B_{x}$ containing $x$, whose radius $r(B_x)$ satisfies $\frac{\sup \mathcal{R}^\lambda_x}{\eta} < r(B_x)\leq \sup \mathcal{R}^\lambda_x$, and for which $\Xint-_{B_x} f \ d\mu > \lambda$. Thus the ball $B_x$ satisfies ii) and iii). Also note that $\Omega_\lambda = \bigcup_{x\in \Omega_{\lambda}} B_x$. Picking a Vitali type subcover of $\{B_{x}\}_{x\in \Omega_{\lambda}}$ as in \cite{SW}, Lemma 3.3, we obtain a family of pairwise disjoint balls $\{B_{i}\} \subset \{B_{x}\}_{x\in \Omega_{\lambda}}$ satisfying i). Therefore $\{B_i\}$ satisfies i), ii) and iii). \end{proof}
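The stopping-time idea behind Lemma~\ref{lem:stoppingtime} may be easier to visualize in the dyadic model on $[0,1)$ with the Lebesgue measure, where the selection rule is simply ``keep the maximal dyadic intervals whose average exceeds $\lambda$''. The following Python sketch implements only this toy dyadic version (the ball-based construction above is more delicate); all names and parameters are illustrative.
\begin{verbatim}
# Toy dyadic analogue of the stopping-time selection on [0,1) with the
# Lebesgue measure; names and parameters are illustrative only.
import numpy as np

def dyadic_cz(f_vals, lam):
    """Maximal dyadic intervals whose average exceeds lam, as (left, right, avg);
    len(f_vals) must be a power of two (values of f on a uniform grid)."""
    n = len(f_vals)
    selected = []

    def visit(lo, hi):                       # dyadic interval [lo/n, hi/n)
        avg = f_vals[lo:hi].mean()
        if avg > lam:                        # stop here: analogue of part ii)
            selected.append((lo / n, hi / n, avg))
        elif hi - lo > 1:                    # only subdivide when avg <= lam,
            mid = (lo + hi) // 2             # so ancestors of selected intervals
            visit(lo, mid)                   # have small averages (analogue of iii))
            visit(mid, hi)

    if f_vals.mean() <= lam:                 # mirrors the requirement on lambda
        visit(0, n)
    return selected

rng = np.random.default_rng(0)
f = rng.exponential(scale=1.0, size=2 ** 12)
cubes = dyadic_cz(f, lam=4.0)
print(len(cubes), all(avg > 4.0 for _, _, avg in cubes))
\end{verbatim}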
We will need another important lemma, in order to handle simultaneously decompositions of level sets at different scales.
\begin{lemma}\label{lem:disjointing} Let $B$ be a ball and let $f$ be a bounded nonnegative measurable function. Let also $a \gg 1$ and, for each integer $k$ such that $a^k>\Xint-_B f\ d\mu$, we define $\Omega_{k}$ as \begin{equation}\label{eq:Omega-k} \Omega _{k} = \left\{x\in B: Mf(x) >a^{k} \right\}, \end{equation} Let $\{E_i^k\}_{i,k}$ be defined by $E_i^k=B_i^k\setminus \Omega_{k+1}$, where the family of balls $\{B_i^k\}_{i,k}$ is obtained by applying Lemma \ref{lem:stoppingtime} to each $\Omega_k$. Then, for $\theta=4\kappa^2+\kappa$ as in the previous Lemma and $\eta=\kappa^2(4\kappa+3)$, the following inequality holds: \begin{equation}\label{eq:Bik vs Eik} \mu(B_i^k\cap \Omega_{k+1})< \frac{(4\theta\eta)^{D_\mu}}{a}\mu(B_i^k). \end{equation} Consequently, for sufficiently large $a$, we can obtain that \begin{equation}\label{eq:Bik vs Eik one half} \mu(B_i^k) \le 2\mu(E_i^k). \end{equation} \end{lemma}
\begin{proof} To prove the claim, we apply Lemma \ref{lem:stoppingtime} with $\eta=\kappa^2(4\kappa+3)$. Then, by part i), we have that, for $\theta=4\kappa^2+\kappa$
\begin{equation*} \Omega_{k+1}\subset \bigcup_m\theta B^{k+1}_m \end{equation*} and then \begin{equation}\label{eq:decompBik-k+1} \mu(B_{i}^{k} \cap \Omega_{k+1} )\le \sum_{m} \mu( B_{i}^{k} \cap \theta B_{m}^{k+1} ). \end{equation} Suppose now that $B_i^k\cap \theta B_m^{k+1}\neq \emptyset$. We claim that $r(B_{m}^{k+1})\le r(B_{i}^{k})$. Suppose the contrary, namely $r(B_{m}^{k+1})> r(B_{i}^{k})$. Then, by property \eqref{eq:engulfing}, we can see that $B_{i}^{k} \subset \kappa^2(4\kappa+3) B_{m}^{k+1}=\eta B_{m}^{k+1}$. For $B'=\eta B_{m}^{k+1}$, part iii) from Lemma \ref{lem:stoppingtime} gives us that the average satisfies \begin{equation}\label{eq:avg-etaBmk+1}
\frac{1}{\mu(B')}\int_{B'} f\ d\mu\le a^k. \end{equation} Now, by the properties of the family $\{B_m^{k+1}\}_m$ and the doubling condition of $\mu$, we have that, for $a>(2\eta)^{D_\mu}$, \begin{equation}\label{eq:ak} \frac{1}{\mu(\eta B_{m}^{k+1})}\int_{\eta B_{m}^{k+1}} f\ d\mu>\frac{a^{k+1}}{ (2\eta)^{D_\mu}}>a^k. \end{equation} This last inequality contradicts \eqref{eq:avg-etaBmk+1}. Then, whenever $B_i^k\cap \theta B_m^{k+1}\neq \emptyset$, we have that $r(B_{m}^{k+1})\le r(B_{i}^{k})$ and from that it follows that $ B_{m}^{k+1}\subset \eta B_{i}^{k}$. The sum \eqref{eq:decompBik-k+1} now becomes \begin{eqnarray*} \mu(B_{i}^{k} \cap \Omega_{k+1} )& \le & \sum_{m:B_{m}^{k+1}\subset \eta B_{i}^{k}} \mu( B_{i}^{k} \cap \theta B_{m}^{k+1} )\\ & \le & (2\theta)^{D_\mu} \sum_{m:B_{m}^{k+1}\subset \eta B_{i}^{k}} \mu(B_{m}^{k+1} )\\ &\le & \frac{(2\theta)^{D_\mu}}{a^{k+1}}\int_{\eta B_i^k}f\ d\mu \end{eqnarray*} since the sets $\{B_m^{k+1}\}_m$ are pairwise disjoint. Finally, by part iii) of Lemma \ref{lem:stoppingtime} applied to $\eta B_i^k$, we obtain \begin{equation*}
\mu(B_{i}^{k} \cap \Omega_{k+1})\le \frac{(4\theta\eta)^{D_\mu}}{a}\mu(B_i^k), \end{equation*} which is inequality \eqref{eq:Bik vs Eik}. \end{proof}
\section{Proofs of the main results}\label{sec:proofs}
We present here the proof of our main results. Our starting point is a version of the sharp two weight inequality \eqref{eq:moen} valid for SHT from \cite{kairema:twoweight}:
\begin{theorem}[\cite{kairema:twoweight}]\label{thm:kairema} Let $(\mathcal{S},\rho,\mu)$ be a SHT. Then the Hardy--Littlewood maximal operator $M$ defined by \eqref{eq:maximal-SHT} satisfies the bound \begin{equation}\label{eq:kairema}
\left\|M(f\sigma)\right\|_{L^p(w)}\le C p'[w,\sigma]_{S_p}\|f\|_{L^p(\sigma)}, \end{equation} where $[w,\sigma]_{S_p}$ is the Sawyer's condition with respect to balls: \begin{equation} [w,\sigma]_{S_p}:=\sup_B \left(\frac1{\sigma(B)} \int_B M(\sigma\chi_B)^pw\ d\mu\right)^{1/p}. \end{equation} \end{theorem}
We now present the proof of the main result. \begin{proof}[Proof of Theorem \ref{thm:main}] By Theorem \ref{thm:kairema}, we only need to prove that \begin{equation*}
[w,\sigma]_{S_p}\le C [w,\sigma,\Phi]^{1/p}_{A_p}[\sigma,\bar\Phi]^{1/p}_{W_p} \end{equation*} for some constant $C$, for any Young function $\Phi$ and any $1<p<\infty$. Let $B$ be a fixed ball and consider the sets $\Omega_k$ from \eqref{eq:Omega-k} for the function $\sigma\chi_B$ and any $k\in \mathbb{Z}$. We remark here that in order to apply a C--Z decomposition of these sets, we need the level of the decomposition to be larger than the average over the ball. We proceed as follows. Take any $a>1$ and consider $k_0\in\mathbb{Z}$ such that \begin{equation}\label{eq:small-average} a^{k_0-1}< \Xint-_B \sigma\ d\mu \le a^{k_0}. \end{equation} Now, let $A$ be the set of the small values of the maximal function: \begin{equation*}
A=\left\{x\in B: M(\sigma\chi_B)\le a\Xint-_B \sigma\ d\mu\right\}. \end{equation*}
For any $x\in B\setminus A$, we have that \begin{equation*}
M(\sigma\chi_B)(x)> a\Xint-_B \sigma\ d\mu>a^{k_0}\ge\Xint-_B \sigma\ d\mu. \end{equation*}
Therefore, \begin{eqnarray*} \int_B M(\sigma\chi_B)^p w \ d\mu & = & \int_A M(\sigma\chi_B)^p w \ d\mu +\int_{B\setminus A} M(\sigma\chi_B)^p w \ d\mu \\ & \le & a^p w(B) \left(\Xint-_{B} \sigma\ d\mu\right)^p + \sum_{k\ge k_0} \int_{ \Omega_{k}\setminus \Omega_{k+1}} M(\sigma\chi_B)^p w\ d\mu\\ &= & I + II \end{eqnarray*}
The first term $I$ can be bounded easily. By the general H\"older inequality \eqref{eq:HOLDERlocal}, we obtain \begin{eqnarray*}
I & \le & 2a^p\left(\Xint-_B w\ d\mu \right)\|\sigma^{1/p'}\|^p_{\Phi,B}\|\sigma^{1/p}\|^p_{\bar\Phi,B}\ \mu(B)\\ &\le & 2[w,\sigma,\Phi]_{A_p}\int_B M_{\bar\Phi}(\sigma^{1/p}\chi_B)^p\ d\mu \end{eqnarray*}
Now, for the second term $II$, we first note that
\begin{eqnarray*} \int_{B\setminus A} M(\sigma\chi_B)^p w \ d\mu & = & \sum_{k\ge k_0} \int_{ \Omega_{k}\setminus \Omega_{k+1}} M(\sigma\chi_B)^p w\ d\mu\\ & \le & a^p\sum_{k\ge k_0} a^{kp} w(\Omega_{k}) \end{eqnarray*}
By the choice of $k_0$, we can apply Lemma \ref{lem:stoppingtime} to perform a C--Z decomposition at all levels $k\ge k_0$ and obtain a family of balls $\{B^k_i\}_{i,k}$ with the properties listed in that lemma. Then,
\begin{eqnarray*} \int_{B\setminus A} M(\sigma\chi_B)^p w \ d\mu &\le & a^{p} \sum_{k,i} \left(\Xint-_{B_i^k}\sigma\chi_B\ d\mu \right)^{p} w(\theta B_i^k)\\ &\le & a^{p} \sum_{k,i} \left(\frac{\mu(\theta B_i^k)}{\mu(B_i^k)}\Xint-_{\theta B_i^k}\sigma^\frac{1}{p}\sigma^\frac{1}{p'}\chi_B\ d\mu \right)^{p} w(\theta B_i^k)
\end{eqnarray*} We now proceed as before, using the local generalized H\"older inequality \eqref{eq:HOLDERlocal} and the doubling property \eqref{eq:doublingDIL} of the measure (twice). Then we obtain \begin{equation*}
\int_{B\setminus A} M(\sigma\chi_B)^p w d\mu \le 2a^{p} (2\theta)^{(p+1)D_\mu}[w,\sigma,\Phi]_{A_p}\sum_{k,i}\left\| \sigma^\frac{1}{p}\chi_B\right\|^p_{\bar{\Phi},\theta B_i^k}\mu(B_i^k) \end{equation*} The key here is to use Lemma \ref{lem:disjointing} to pass from the family $\{B_i^k\}$ to the pairwise disjoint family $\{E_i^k\}$. Then, for $a\ge 2(4\theta\eta)^{D_\mu}$, we can bound the last sum as follows \begin{eqnarray*}
\sum_{k,i}\left\| \sigma^\frac{1}{p}\chi_B\right\|^p_{\bar{\Phi},\theta B_i^k}\mu(B_i^k)& \le & 2 \sum_{k,i}\left\| \sigma^\frac{1}{p}\chi_B\right\|^p_{\bar{\Phi},\theta B_i^k}\mu(E_i^k)\\ &\le& 2 \sum_{k,i} \int_{E_i^k}M_{\bar{\Phi}}(\sigma^\frac{1}{p}\chi_B)^p\ d\mu\\ &\le& 2 \int_B M_{\bar{\Phi}}(\sigma^\frac{1}{p}\chi_B)^p\ d\mu \end{eqnarray*} since the sets $\{E_i^k\}$ are pairwise disjoint. Collecting all previous estimates and dividing by $\sigma(B)$, we obtain the desired estimate \begin{equation*}
[w,\sigma]^p_{S_p}\le 4 a^{p} (2\theta)^{(p+1)D_\mu}[w,\sigma,\Phi]_{A_p} [\sigma,\bar\Phi]_{W_p}, \end{equation*} and the proof of Theorem \ref{thm:main} is complete. \end{proof}
It remains to prove Corollary \ref{cor:mixed-two-weight}. To that end, we need to consider the special case of $\Phi(t)=t^{p'}$.
\begin{proof}[Proof of Corollary \ref{cor:mixed-two-weight}] Considering then $\Phi(t)=t^{p'}$, the quantity \eqref{eq:A_p-local} is \begin{eqnarray*}
A_p(w,\sigma,B,\Phi) & = &\left( \Xint-_{B} w\, d\mu\right)\|\sigma^{1/p'}\|^p_{\Phi,B} \\ & = & \left( \Xint-_{B} w(y)\, d\mu\right) \left( \Xint-_{B} \sigma \, d\mu\right)^{p-1}. \end{eqnarray*} In addition, we have from \eqref{eq:WpPhi-p--Ainfty} that $[\sigma,\overline{\Phi_{p'}}]_{W_p}=[\sigma,\Phi_p]_{W_p}=[\sigma]_{A_\infty}$ and therefore we obtain \eqref{eq:mixed-two}. \end{proof}
For the proof of Corollary \ref{cor:precise-bump}, we simply use the boundedness of $M_{\bar\Phi}$ on $L^p(\mu)$, \begin{equation*}
[\sigma,\bar\Phi]_{W_p}:=\sup_B\frac{1}{\sigma(B)}\int_B M_{\bar\Phi}\left(\sigma^{1/p}\chi_B\right)^p\ d\mu
\leq \|M_{\bar\Phi}\|^p_{L^p}. \end{equation*}
The proof of Corollary \ref{cor:mixed-one-weight} is immediate: apply Corollary \ref{cor:mixed-two-weight} with the dual weight $\sigma=w^{1-p'}$ and with $f\sigma^{-1}$ in place of $f$, noting that $[w,\sigma]_{A_p}=[w]_{A_p}$ and $\|f\sigma^{-1}\|_{L^p(\sigma)}=\|f\|_{L^p(w)}$.
\section{Appendix}\label{sec:appendix}
We include here a direct proof of a version of Corollary \ref{cor:precise-bump} which is better in terms of the dependence on $p$. Precisely, we have the following proposition.
\begin{proposition}\label{pro:precise-bump-sharp-p}
Let $1 < p < \infty$. For any pair of weights $w,\sigma$ and any Young function $\Phi$, there exists a structural constant $C>0$ such that \begin{equation*}
\|M (f\sigma)\|_{L^p(w)}\leq C [w,\sigma,\Phi]^{1/p}_{A_p} \|M_{\bar\Phi}\|_{L^p}\|f\|_{L^{p}(\sigma)} \end{equation*} \end{proposition}
\begin{proof}[Proof of Proposition \ref{pro:precise-bump-sharp-p}] By density it is enough to prove the inequality for each nonnegative bounded function $f$ with compact support. We first consider the case of unbounded $\mathcal{S}$. In this case we have $\Xint-_{\mathcal S} f\sigma\ d\mu=0$. Therefore, instead of the sets from \eqref{eq:Omega-k}, we consider \begin{equation*}\label{eq:Omega-k-global} \Omega _{k} = \left\{x\in \mathcal{S}: M(f\sigma)(x) >a^{k} \right\}, \end{equation*} for any $a>1$ and any $k\in \mathbb{Z}$. Then, we can write
\begin{equation*} \int_{\mathcal S} M(f\sigma)^p w \ d\mu = \sum_{k} \int_{ \Omega_{k}\setminus \Omega_{k+1}} M(f\sigma)^p w\ d\mu \end{equation*} Then, following the same line of ideas as in the proof of Theorem \ref{thm:main}, we obtain \begin{equation*}
\int_{\mathcal S} M(f\sigma)^p w d\mu \le 2a^{p} (2\theta)^{(p+1)D_\mu}[w,\sigma,\Phi]_{A_p} \sum_{k,i}\left\| f\sigma^\frac{1}{p}\right\|^p_{\bar{\Phi},\theta B_i^k}\mu(B_i^k) \end{equation*} By Lemma \ref{lem:disjointing} we can replace the family $\{B_i^k\}$ by the pairwise disjoint family $\{E_i^k\}$ to obtain the desired estimate: \begin{equation*}
\int_{\mathcal S} M(f\sigma)^p w \ d\mu \le 4a^{p} (2\theta)^{(p+1)D_\mu}[w,\sigma,\Phi]_{A_p} \|M_{\bar{\Phi}}\|_{L^p}^p\int_{S}f^p\sigma\ d\mu. \end{equation*} In the bounded case, the whole space is a ball and we can write $\mathcal S=B(x,R)$ for any $x$ and some $R>0$. The problem here is to deal with the small values of $\lambda$, since we cannot apply Lemma \ref{lem:disjointing} for $a^k\le \Xint-_S f\sigma\ d\mu$. We then take any $a>1$ and consider $k_0\in\mathbb{Z}$ to verify \eqref{eq:small-average}: \begin{equation*} a^{k_0-1}< \Xint-_S f\sigma\ d\mu \le a^{k_0} \end{equation*} and argue as in the proof of Theorem \ref{thm:main}. \end{proof}
Now, from this last proposition, we can derive another proof of the mixed bound \eqref{eq:mixed-two} from Corollary \ref{cor:mixed-two-weight}. The disadvantage of this approach with respect to the previous one is that we need a deep property of $A_\infty$ weights: the sharp Reverse H\"older Inequality. In the whole generality of SHT, we only know a \emph{weak} version of this result from the recent paper \cite{HPR1}: \begin{theorem}[Sharp weak Reverse H\"older Inequality, \cite{HPR1}]\label{thm:SharpRHI} Let $w\in A_\infty$. Define the exponent $r(w)=1+\frac{1}{\tau_{\kappa\mu}[w]_{A_{\infty}}}$, where $\tau_{\kappa\mu}$ is a structural constant. Then, \begin{equation*}
\left(\Xint-_B w^{r(w)}\ d\mu\right)^{1/r(w)}\leq 2(4\kappa)^{D_\mu}\Xint-_{2\kappa B} w\ d\mu, \end{equation*} where $B$ is any ball in $\mathcal S$. \end{theorem}
The other ingredient for the alternative proof of Corollary \ref{cor:mixed-two-weight} is the known estimate for the operator norm for $M$. For any $1<q<\infty$, we have that $\|M\|^q_{L^q}\sim q'$.
\begin{proof}[Another proof of Corollary \ref{cor:mixed-two-weight}] Consider the particular choice $\Phi(t)=t^{p'r}$ for $r>1$. Then the quantity \eqref{eq:A_p-local} is
\begin{equation*} A_p(w,\sigma,B,\Phi) =\left( \Xint-_{B} w\, d\mu\right) \left( \Xint-_{B} \sigma^r\, d\mu\right)^{p/(rp')}. \end{equation*} If we choose $r$ from the sharp weak reverse H\"older property (Theorem \ref{thm:SharpRHI}), we obtain that \begin{eqnarray*} A_p(w,\sigma,B,\Phi) & \le & \left( \Xint-_{B} w\ d\mu\right)\left(2(4\kappa)^{D_\mu}\Xint-_{2\kappa B} \sigma\ d\mu\right)^{p-1}\\ &\le& 2^{p-1}(4\kappa)^{pD_\mu}\left( \Xint-_{2\kappa B} w\ d\mu\right)\left(\Xint-_{2\kappa B} \sigma\ d\mu\right)^{p-1}\\ &\le & 2^{p-1}(4\kappa)^{pD_\mu}[w,\sigma]_{A_p}. \end{eqnarray*} Therefore the proof of Proposition \ref{pro:precise-bump-sharp-p} gives \begin{equation*}
\|M (f\sigma)\|_{L^p(w)} \leq C [w,\sigma]_{A_p}^{1/p} \|M_{\bar{\Phi}}\|_{L^{p}(\mathcal{S},d\mu)} \, \|f\|_{L^{p}(\sigma)}. \end{equation*}
We conclude the proof by computing $\|M_{\bar \Phi}\|_{L^p}$ for $\Phi(t)=t^{p'r}$. We use \eqref{eq:Phi-p}, and then we obtain that $\|M_{\bar \Phi}\|^p_{L^p}\le c r'p'$. But, by the choice of $r$, it follows that $r'\sim [\sigma]_{A_\infty}$ and we obtain \eqref{eq:mixed-two}. \end{proof}
\end{document}
\begin{document}
\title{How To Guide Your Learner: Imitation Learning with Active Adaptive Expert Involvement}
\begin{abstract} Imitation learning aims to mimic the behavior of experts without explicit reward signals. Passive imitation learning methods which use static expert datasets typically suffer from compounding error, low sample efficiency, and high hyper-parameter sensitivity. In contrast, active imitation learning methods solicit expert interventions to address the limitations. However, recent active imitation learning methods are designed based on human intuitions or empirical experience without theoretical guarantee. In this paper, we propose a novel active imitation learning framework based on a teacher-student interaction model, in which the teacher's goal is to identify the best teaching behavior and actively affect the student's learning process. By solving the optimization objective of this framework, we propose a practical implementation, naming it AdapMen. Theoretical analysis shows that AdapMen\ can improve the error bound and avoid compounding error under mild conditions. Experiments on the MetaDrive benchmark and Atari 2600 games validate our theoretical analysis and show that our method achieves near-expert performance with much less expert involvement and total sampling steps than previous methods. The code is available at \href{https://github.com/liuxhym/AdapMen}{\textcolor{blue}{https://github.com/liuxhym/AdapMen}}. \end{abstract}
\section{Introduction}\label{sec_intro}
Imitation Learning (IL)~\citep{bc, dagger, gail} aims to learn a policy from expert demonstrations with no explicit task-relevant knowledge like reward and transition. IL has achieved huge success in a variety of domains, including games~\citep{dagger, alphago} and recommendation systems~\citep{virtual_taobao, recommendation}.
The traditional IL method Behavior Cloning (BC)~\citep{bc} imitates expert behaviors via supervised learning. Although BC works fine in simple environments, it requires a lot of data and small errors compound quickly when the learned policy deviates from the states in the expert dataset. This issue can be formalized by the sub-optimality bound of the learned policy, which is $\tilde{{\mathcal{O}}}(\epsilon_bH^2)$ for BC~\citep{bc}, where $\epsilon_b$ is the optimization error, $H$ is the horizon of the Markov Decision Processes (MDPs) and $\tilde{{\mathcal{O}}}$ means the constant and log terms are omitted. The quadratic dependency on $H$ is known as the \textit{compounding error} issue.
To tackle the compounding error issue, Apprenticeship Learning (AL)~\citep{apprenticeship, fem} and Adversarial Imitation Learning (AIL)~\citep{gail, dac, ValueDICE, iq_learn} algorithms introduce interactions with environment. They first infer a reward function from expert demonstrations, then learn a corresponding policy by Reinforcement Learning (RL). The sub-optimality bound is then reduced to $\tilde{{\mathcal{O}}}(\epsilon_gH)$ \cite{xu2020}, where $\epsilon_g$ is the optimization error of AL and AIL. From another perspective, DAgger~\citep{dagger} attributes the compounding error issue to the difference between the train distribution and test distribution. Thus, DAgger queries the expert for action labels corresponding to each state visited by the learner~\citep{dagger}.
Despite the reduction of the order of $H$, the complicated optimization process of AL and AIL leads to even worse sample complexity than BC~\citep{ail_finite_sample}. Additionally, these algorithms are highly sensitive to hyper-parameters and are difficult to make converge in practice~\citep{zhang2022}. DAgger also relies on an additional assumption that the learner can recover from mistakes made by itself to a certain extent, which is known as the $\mu$-recoverability condition on the MDPs. \cite{value_interaction} proves that DAgger has a better theoretical guarantee than BC under such an assumption while \cite{limits} shows a negative result for general cases. Moreover, the assumption is satisfied only when any sub-optimal action leads to little performance degradation, which can be impractical, e.g., in risk-sensitive environments~\citep{limits, value_interaction}. Some recent methods~\citep{HG-dagger, ensembledagger, safetydagger, thriftydagger, HACO} modify DAgger so that they only solicit expert interventions based on certain criteria. Though these methods achieve certain empirical success, there was no theoretical understanding of these methods, and their intervention criteria are designed purely from intuition, hindering further algorithmic design.
To address the issues in previous methods, we study the IL problem from a new perspective. From experience, sometimes experts are not the best teachers. For example, many legendary players end up with controversial coaching careers. Experts advance disciplines, while teachers advance learners. Inspired by the idea of machine teaching~\citep{mt, qian2022}, we formulate the IL process as a teacher-student framework. In this framework, the teacher decides what to teach and how to impart knowledge rather than simply correcting the student. With more attentive help from the teacher, the student agent can learn faster.
We formalize this intuition by introducing an optimization problem that minimizes the value loss of the learned policy. By solving this optimization problem within the framework, we obtain a novel imitation learning method, \textbf{A}ctive a\textbf{da}ptive ex\textbf{p}ert involve\textbf{Men}t (AdapMen), in which a teacher actively takes part in the learner's interaction with the environment and adjusts its teaching behavior accordingly. The overall interaction structure is illustrated in Fig.~\ref{fig:framework}, where the criterion and the expert are together viewed as the teacher. At each time step, a criterion calculated from expert actions decides whether to take the learner's action or ask the expert to take over control.
The sub-optimality and sample complexity bounds of AdapMen\ and other typical IL methods are listed in Tab.~\ref{tab:complexity}. Under mild conditions, AdapMen\ avoids compounding error and enjoys much lower sample complexity than previous methods. To validate our theory, we also experimentally verify the validity of the assumption and demonstrate the power of AdapMen\ in several tasks.
\begin{figure}\label{fig:framework}
\end{figure} \begin{table}[tbp]
\centering
\begin{tabular}[b]{|l|l|l|}
\hline
& \parbox{1.5cm}{Sub-\\optimality} & \parbox{1.5cm}{Sample\\Complexity} \\ \hline
BC & $\tilde{{\mathcal{O}}}(\epsilon_bH^2)$ & $\tilde{{\mathcal{O}}}(\frac{|{\mathcal{S}}|H^2}{\epsilon})$ \\ \hline
AIL & $\tilde{{\mathcal{O}}}(\epsilon_gH)$ & $\tilde{{\mathcal{O}}}(\frac{|{\mathcal{S}}|H^2}{\epsilon^2})$ \\ \hline
DAgger & $\tilde{{\mathcal{O}}}(\mu \epsilon_bH)$ & $\tilde{{\mathcal{O}}}(\frac{\mu|{\mathcal{S}}|H}{\epsilon})$ \\ \hline
AdapMen & $\tilde{{\mathcal{O}}}(\epsilon_bH)$ & $\tilde{{\mathcal{O}}}(\frac{|{\mathcal{S}}|H}{\epsilon})$ \\ \hline
\end{tabular}
\caption{Theoretical Guarantee of IL Methods. $\tilde{{\mathcal{O}}}$ means the log term of $H$ is omitted.}
\label{tab:complexity} \end{table}
\section{Related Work}
\textbf{Imitation Learning.} The most traditional approach to imitation learning is Behavioral Cloning (BC)~\citep{BC1, BC2, bc}, where a classifier or regressor is trained to fit the behaviors of the expert. This simple form of IL suffers from high compounding error because of covariate shifts. By allowing the learner agent to further interact with the environment, Apprenticeship Learning (AL)~\citep{apprenticeship, fem} infers a reward function from expert demonstrations by Inverse Reinforcement Learning (IRL)~\citep{irl} and learns a policy with Reinforcement Learning (RL) using the recovered reward function. In this way, the learner can correct its behavior on unseen states to mitigate the compounding error issue. Recently, based on Generative Adversarial Network (GAN)~\citep{gan}, Adversarial Imitation Learning (AIL)~\citep{gail, dac, ValueDICE} performs state-action distribution matching in an adversarial manner and has a stronger empirical performance than AL. Since AL and AIL have access to environment transitions, they are classified as known-transition methods. Notwithstanding the compounding error issue, this type of method is highly sensitive to hyper-parameters and hard to converge in practice~\citep{zhang2022}. Different from the known-transition methods, DAgger-style algorithms~\citep{dagger, aggrevate, aggrevated} address the covariate shift by querying the expert online. Without the min-max optimization in known-transition methods, DAgger-style algorithms tend to be more stable. However, these algorithms can only avoid compounding error under the $\mu$-recoverability assumption, which is often not satisfied in risk-sensitive environments~\citep{limits, value_interaction}. Our method AdapMen\ is free from the $\mu$-recoverability assumption and the hyper-parameters are automatically tuned.
\textbf{Human-in-the-loop.} Many works focus on incorporating human interventions in the training loop of RL or IL paradigms. DAgger~\citep{dagger} can be seen as one of the human-in-the-loop methods if the expert is a human. DAgger requires experts to provide action labels without being fully in control of the system, which can introduce safety concerns and is very likely to degrade the quality of the collected labels due to the loss of direct feedback. To address this challenge, a list of learning from intervention approaches have been proposed to empower humans to intervene and guide the learner agent to safe states. "Human-Gated" approaches~\citep{HG-dagger, EIL, HACO} require humans to determine when the agent needs help and when to cede control, which is unreliable because of the high randomness of human behavior. In contrast, ``Agent-Gated'' approaches~\citep{safetydagger, ensembledagger, lazydagger, thriftydagger} allow the learner agent to actively seek human interventions based on certain criteria including the novelty or the risk of the visited states. However, all of the criteria are heuristic without theoretical guarantees and the hyper-parameters are hard to tune. Our method AdapMen\ can actively involve in the interaction process and adaptively adjust its intervention probability.
\section{Background} \label{sec:background} Consider an MDP task denoted by $M = ({\mathcal{S}},{\mathcal{A}},{\mathcal{P}},H,r,\rho)$, where ${\mathcal{S}}$ is the state space, ${\mathcal{A}}$ is the action space, ${\mathcal{P}}:{\mathcal{S}}\times {\mathcal{A}} \rightarrow \Delta({\mathcal{S}})$ is the transition function (a map into distributions over states), $H$ is the planning horizon, $r: {\mathcal{S}}\times {\mathcal{A}} \rightarrow \mathbb{R}$ is the reward function, and $\rho$ is the distribution of initial states. Without loss of generality, we assume $r(s,a)\in[0,1]$. A policy is defined as $\pi(\cdot\mid s)$, which outputs an action distribution. To facilitate later analysis, we introduce the state-action distribution at time step $h$ as follows: $$\begin{aligned}
d_h^\pi(&s,a)=\textnormal{Pr}\left(s_h=s,a_h=a|s_1\sim \rho, a_t\sim \pi(s_t), s_{t+1}\sim {\mathcal{P}}(\cdot|s_t,a_t), t\in[h]\right), \end{aligned}$$ where $[h]=\{1,2,\dots,h\}$. We define $$d^\pi=\frac{1}{H}\sum_{h=1}^H d^\pi_h,$$ which is the average distribution of states if we follow policy $\pi$ for $H$ steps.
In imitation learning, the reward function of a task is not accessible. Instead, the learner agent has access to an expert with policy $\pi^*$, and the goal is to recover the policy $\pi^*$ by learning from labeled training data, e.g., state-action pairs generated by an expert agent. Following \cite{xu2020} and \cite{limits}, we assume the expert policy is deterministic in the theoretical analysis, while it can be stochastic in practice.
To measure the quality of a learner policy, we define the \textit{policy value} as $$ \begin{aligned}
J(\pi)=\mathbb{E}\Bigg[\sum_{h=1}^H&r(s_h,a_h)|s_1\sim \rho; a_h\sim \pi(\cdot|s_h),s_{h+1}\sim {\mathcal{P}}(\cdot|s_h,a_h), \forall h\in [H]\Bigg]. \end{aligned}$$ This is the cumulative return for the learner agent in the task demonstrated by the expert. Accordingly, the quality of imitation learning is measured by the \textit{sub-optimality gap}: $J(\pi^*)-J(\pi)$. We also introduce the Q-function at time step $h$: $$ \begin{aligned}
Q^\pi_h(s,a)=\mathbb{E}\Bigg[&\sum_{t=h}^Hr(s_t,a_t)|s_h=s, a_h=a;
a_t\sim \pi(\cdot|s_t),s_{t+1}\sim {\mathcal{P}}(\cdot|s_t,a_t), \forall t\in \{h+1, \dots, H\}\Bigg]. \end{aligned} $$ For brevity, we use $Q^{\pi_1}_h(s,\pi_2)$ as a shorthand of $\mathbb{E}_{a\sim \pi_2}Q^{\pi_1}_h(s,a)$. Then, $J(\pi)$ and $J(\pi^*)$ can be denoted as: \begin{align*}
J(\pi)=\mathbb{E}_{s\sim \rho}Q^\pi_1(s,\pi),\quad J(\pi^*)=\mathbb{E}_{s\sim \rho}Q^{\pi^*}_1(s,\pi^*). \end{align*}
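As a simple illustration of the definition of $J(\pi)$, the following Python sketch estimates the policy value by Monte-Carlo rollouts. The environment interface (\texttt{reset()} and \texttt{step(a)} returning the next state, the reward and a termination flag) is a generic assumption made only for this illustration and does not refer to any specific benchmark used in this paper.
\begin{verbatim}
# Generic Monte-Carlo sketch of J(pi); `env` is a hypothetical episodic
# environment with reset() and step(a) -> (next_state, reward, done).
import numpy as np

def estimate_policy_value(env, policy, horizon, num_episodes=100):
    """Average undiscounted return over rollouts of length at most `horizon`."""
    returns = []
    for _ in range(num_episodes):
        s = env.reset()
        total = 0.0
        for _ in range(horizon):
            a = policy(s)                  # sample a ~ pi(.|s)
            s, r, done = env.step(a)
            total += r
            if done:
                break
        returns.append(total)
    return float(np.mean(returns))
\end{verbatim}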
\section{Teacher-Student Interaction Model}\label{sec_method}
Given the inspiration that experts may not be the best teachers, we construct a teaching policy for the agent. In the learning process, the agent aims to mimic the teacher policy instead of the expert policy. This intuition can be formulated as the following optimization problem: \begin{equation}\begin{aligned}
\min_{\pi'} \quad J(\pi^*)-J(\pi_{\pi'})\quad \text{s.t.} \ \ \mathbb{E}_{s\sim \beta}\ell(s,\pi_{\pi'},\pi')\leq \epsilon_b, \end{aligned} \label{eq_constraint} \end{equation}
where $\pi'$ is the teaching policy, $\pi_{\pi'}$ is the corresponding learned student policy, $\beta$ is the data distribution of the buffer that stores intervened samples, $\ell(s,\pi_{\pi'},\pi')$ is the 0-1 loss, i.e., $\ell(s,\pi_{\pi'},\pi')=0$ if $\pi_{\pi'}(\cdot|s)=\pi'(\cdot|s)$ and $\ell(s,\pi_{\pi'},\pi')=1$ otherwise, and $\epsilon_b$ is the upper bound of the optimization loss. Intuitively, we aim to find a teaching policy $\pi'$ whose generated data not only corrects the learner when it deviates from the desired behavior, but also helps it learn as quickly as possible.
Denote $\pi$ as the policy before the policy optimization process, i.e., $\pi_{\pi'}$ is optimized from $\pi$. Because it is useless to store the data coinciding with the agent policy, a natural choice for the distribution of buffer is \begin{equation}\label{eq_beta}
\beta(s)=\frac{1}{H\delta}\sum_{h=1}^H\mathbb I(\pi(\cdot|s)\neq \pi'(\cdot|s))d_h^{\pi'}(s). \end{equation} That is, we only save the samples when $\pi$ and $\pi'$ behave differently. $\delta$ is the normalization factor for the distribution of the buffer, i.e., \begin{equation}\label{eq_delta}
\delta=\sum_s\frac{1}{H}\sum_{h=1}^H\mathbb I(\pi'(\cdot|s)\neq \pi(\cdot|s))d^{\pi'}_h(s)=\mathbb E_{s\sim d^{\pi'}}\mathbb I(\pi'(\cdot|s)\neq \pi(\cdot|s)). \end{equation}
Before solving this optimization problem, we introduce Lemma~\ref{lem:pdlema} for better understanding of the derivation. \begin{lemma}[Policy Difference Lemma~\citep{kakade}]\label{lem_policy} For any policies $\pi_1$ and $\pi_2$, $$J(\pi_1)-J(\pi_2)=\sum_{h=1}^H\mathbb{E}_{s\sim d^{\pi_1}_h}[Q^{\pi_2}_h(s,\pi_1)-Q^{\pi_2}_h(s,\pi_2)].$$ \label{lem:pdlema} \end{lemma} With this lemma, we rewrite the optimization objective as \begin{equation}
\begin{aligned}
&\quad \ J(\pi^*)-J(\pi_{\pi'})=J(\pi^*)-J(\pi')+J(\pi')-J(\pi_{\pi'})\\
&\overset{(a)}{=}\sum_{h=1}^H\mathbb E_{s\sim d^{\pi'}_h}[Q_{h}^{\pi^*}(s,\pi^*)-Q_{h}^{\pi^*}(s,\pi')]\\
&\qquad +\sum_{h=1}^H\mathbb E_{s\sim d^{\pi'}_h}[Q_{h}^{\pi_{\pi'}}(s,\pi')-Q_{h}^{\pi_{\pi'}}(s,\pi_{\pi'})]. \end{aligned} \label{eq_double_q} \end{equation}
(a) is derived from Lemma~\ref{lem_policy}. The minimization of the first term implies the teaching policy $\pi'$ should be similar to the expert policy $\pi^*$, while the minimization of the second term implies $\pi'$ should be close to $\pi_{\pi'}$. Note that we cannot determine $\pi'$ simply from $\pi_{\pi'}$ since $\pi_{\pi'}$ is learned from $\pi'$. However, $\pi'$ can be close to $\pi_{\pi'}$ if we assume $\pi_{\pi'}(\cdot|s)=\pi(\cdot|s)$ if $\pi(\cdot|s)=\pi'(\cdot|s)$. The assumption is
straightforward because $\pi(\cdot|s)=\pi'(\cdot|s)$ implies we do not need to do optimization on state $s$, thus $\pi_{\pi'}$ stays unchanged on this state. Therefore, the overall optimization leads to a trade-off of $\pi'$ between $\pi^*$ and $\pi$.
To decompose the objective into a more tractable one, we assume $Q^\pi_h$ can be upper-bounded by $\Delta$, then \begin{equation}\label{eq_Q_relax} \begin{aligned} &\quad \ \sum_{h=1}^H\mathbb E_{s\sim d^{\pi'}_h}[Q_{h}^{\pi_{\pi'}}(s,\pi')-Q_{h}^{\pi_{\pi'}}(s,\pi_{\pi'})]\\
&\leq \Delta\sum_{h=1}^H\mathbb E_{s\sim d^{\pi'}_h}{\mathbb{I}}(\pi'(\cdot|s)\neq \pi_{\pi'}(\cdot|s)). \end{aligned} \end{equation}
In this way, the problem is transformed to increasing the probability that $\pi'$ equals $\pi_{\pi'}$ and reducing the value degradation between $\pi'$ and $\pi^*$ simultaneously.
Applying the constraint in (\ref{eq_constraint}) to the right-hand side of Eq.~(\ref{eq_Q_relax}) with the mentioned $\Delta$, we have \begin{align}
&\quad \ \Delta\sum_{h=1}^H\mathbb E_{s\sim d^{\pi'}_h}{\mathbb{I}}(\pi'(\cdot|s)\neq \pi_{\pi'}(\cdot|s))\\
&\overset{(b)}{\leq}\Delta\sum_{h=1}^H\mathbb E_{s\sim d^{\pi'}_h}{\mathbb{I}}(\pi'(\cdot|s)\neq \pi(\cdot|s)){\mathbb{I}}(\pi'(\cdot|s)\neq \pi_{\pi'}(\cdot|s))\\
&\overset{(c)}{=}\Delta H\delta \ \mathbb E_{s\sim \beta}{\mathbb{I}}(\pi'(\cdot|s)\neq \pi_{\pi'}(\cdot|s))\overset{(d)}{\leq} \Delta H\delta \epsilon_b\\
&\overset{(e)}{=}\Delta\epsilon_b\sum_{h=1}^H\mathbb{E}_{s\sim d_h^{\pi'}}{\mathbb{I}}(\pi'(\cdot|s)\neq \pi(\cdot|s)), \label{eq_pi_part} \end{align}
where (b) uses the fact that $\pi'(\cdot|s)=\pi(\cdot|s)$ implies $\pi'(\cdot|s)=\pi_{\pi'}(\cdot|s)$, (c) is derived from the definition of $\beta$ in Eq.~(\ref{eq_beta}), (d) uses the condition $\mathbb{E}_{s\sim \beta}\ell(s,\pi_{\pi'},\pi')\leq \epsilon_b$, and (e) is derived from the definition of $\delta$ in Eq.~(\ref{eq_delta}).
The first term of Eq.~(\ref{eq_double_q}) can be rewritten as
\begin{align}&\sum_{h=1}^H\mathbb E_{s\sim d^{\pi'}_h}[Q_{h}^{\pi^*}(s,\pi^*)-Q_{h}^{\pi^*}(s,\pi')]\\=&\sum_{h=1}^H\mathbb E_{s\sim d^{\pi'}_h}[Q_{h}^{\pi^*}(s,\pi^*)-Q_{h}^{\pi^*}(s,\pi')]{\mathbb{I}}(\pi'(\cdot|s)\neq \pi^*(\cdot|s)). \label{eq_pi*_part} \end{align}
The added ${\mathbb{I}}(\pi'(\cdot|s)\neq \pi^*(\cdot|s))$ does not contribute to this term, because $Q_{h}^{\pi^*}(s,\pi')-Q_{h}^{\pi^*}(s,\pi^*)=0$ when $\pi'(\cdot|s)= \pi^*(\cdot|s)$. The total value loss is composed of Eq.~(\ref{eq_pi_part}) and Eq.~(\ref{eq_pi*_part}). Fixing the distribution $d^{\pi'}$, Eq.~(\ref{eq_pi_part}) equals 0 if $\pi'(\cdot|s)=\pi(\cdot|s)$ and Eq.~(\ref{eq_pi*_part}) equals 0 if $\pi'(\cdot|s)=\pi^*(\cdot|s)$. Thus, the agent will suffer from a $Q_h^{\pi^*}(s,\pi^*)-Q^{\pi^*}_h(s,\pi')$ value loss if $\pi'(\cdot|s)=\pi(\cdot|s)$, and suffer from a $\Delta\epsilon_b$ value loss if $\pi'(\cdot|s)=\pi^*(\cdot|s)$. In this way, proper choice of $\pi'$ is \begin{equation}\label{eq_pi_prime}
\pi'(\cdot|s)=\left\{
\begin{aligned}
& \pi^*(\cdot|s) \quad \textnormal{if} \,\,\, Q_{h}^{\pi^*}(s,\pi^*)-Q_{h}^{\pi^*}(s,\pi)\geq \Delta\epsilon_b,\\
& \pi(\cdot|s) \quad\,\; \textnormal{otherwise}.
\end{aligned}
\right. \end{equation}
The resulting $\pi'$ switches between the expert policy and the learner policy according to whether $Q_{h}^{\pi^*}(s,\pi^*)-Q_{h}^{\pi^*}(s,\pi)$ exceeds the threshold. In other words, the expert intervenes in the interaction when deemed necessary according to the $Q$-value difference.
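A minimal Python sketch of this switching rule, Eq.~(\ref{eq_pi_prime}), is given below; \texttt{q\_star}, \texttt{learner\_policy} and \texttt{expert\_policy} are toy placeholders rather than components of the released implementation, and the threshold plays the role of $\Delta\epsilon_b$.
\begin{verbatim}
# Minimal sketch of the switching rule; q_star, learner_policy and
# expert_policy are toy placeholders, not parts of the released code.
def teacher_action(state, q_star, learner_policy, expert_policy, threshold):
    """Return (action, intervened): the expert acts iff the expert Q-value gap
    between its own action and the learner's action exceeds the threshold."""
    a_learner = learner_policy(state)
    a_expert = expert_policy(state)
    d_q = q_star(state, a_expert) - q_star(state, a_learner)
    if d_q > threshold:
        return a_expert, True   # expert intervenes; store the sample in the buffer
    return a_learner, False     # the learner keeps control

# toy check: one state, three actions, Q*(s, a) = [1.0, 0.2, 0.9]
q_star = lambda s, a: [1.0, 0.2, 0.9][a]
print(teacher_action(0, q_star, lambda s: 1, lambda s: 0, 0.5))   # (0, True)
print(teacher_action(0, q_star, lambda s: 2, lambda s: 0, 0.5))   # (2, False)
\end{verbatim}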
In the teacher-student interaction model, the intervention mode of the teacher is somewhat similar to that of DAgger-based active imitation learning methods~\citep{HG-dagger, ensembledagger, safetydagger,thriftydagger}. The good performance they achieve can be explained by the fact that their intervention strategies make the expert a better teacher.
Note that Eq.~(\ref{eq_pi_prime}) does not tell us how to design $\pi'$ directly, as $\Delta$ is not available. However, it exposes the mode of a good teacher: let the expert intervene in the interaction according to the value of $Q_{h}^{\pi^*}(s,\pi^*)-Q_{h}^{\pi^*}(s,\pi)$ and a threshold. Denoting the threshold by $p$, the remaining work is to analyze the influence of $p$ and figure out a proper choice of $p$.
\section{Analysis}\label{sec_analysis} In this section, we analyze the theoretical properties of the intervention mode in both infinite and finite sample cases, and compare it with previous IL approaches.
First, we derive the sub-optimality bound for the teacher-student interaction model in the infinite sample case. The result is shown in the following theorem, whose proof can be found in Appendix~\ref{sec_proof}. \begin{theorem}\label{thm_infinite} Let $\pi$ be a policy such that $\mathbb E_{s\sim \beta}[\ell(s, \pi, \pi')]\leq \epsilon_b$, then $J(\pi^*)-J(\pi) \leq pH+\delta \epsilon_b H^2$, where $\delta=\mathbb E_{s\sim d^{\pi'}}\mathbb I(Q_h^{\pi^*}(s,\pi^*)-Q_h^{\pi^*}(s,\pi)>p)$. \end{theorem}
\noindent\textbf{Remark 1. }It seems ${\mathcal{O}}(H^2)$, the term in the BC method, also appears in this sub-optimality bound. However, $\delta$ can be small if $p$ is properly chosen and may even nullify the effect of ${\mathcal{O}}(H^2)$. The definition of $\delta$ implies it decreases as $p$ increases, while the first term $pH$ increases as $p$ increases. Therefore, $p$ provides a trade-off between the two terms. Intuitively, the first term is the error induced by neglecting some erroneous actions, while the second term is caused by optimization error.
\noindent\textbf{Remark 2. }BC is a special case of our method. When $p$ equals 0, $\delta$ equals 1. In this case, the expert takes over the entire training process, which is exactly the paradigm of BC. Replacing $p$ with 0 and $\delta$ with 1, the bound becomes $\epsilon_b H^2$, which is the sub-optimality bound of BC, as shown in Appendix~\ref{sec_review}. Therefore, BC is the upper bound of sub-optimality in our framework.
\noindent\textbf{Remark 3. }Suppose $Q_{h}^{\pi^*}(s,\pi^*)-Q_{h}^{\pi^*}(s,\pi)$ follows a distribution $P$; then $p$ is the upper $\delta$-quantile of $P$, i.e., the probability mass of $P$ above $p$ equals $\delta$. If $P$ is concentrated, in other words, if $P$ has strong tail decay, then a small increase in $p$ leads to a large drop of $\delta$, and the error bound can be improved to a great extent.
When $P$ belongs to the Sub-Exponential distribution class, which includes many common distributions, e.g., Gaussian distribution, exponential distribution and Bernoulli distribution, we have \begin{corollary}\label{corollary} If distribution $P$ belongs to ${\mathcal{O}}(\epsilon_b)$-Sub-Exponential distribution class with expectation ${\mathcal{O}}(\epsilon_b)$, let $p=\Omega(\epsilon_b\log H)$, then $J(\pi^*)-J(\pi)=\tilde{{\mathcal{O}}}(\epsilon_bH)$, where $\tilde{{\mathcal{O}}}$ omits the constant and $\log$ term. \end{corollary} The proof is given in Appendix~\ref{sec_proof}. For brevity, we use $D_Q$ to denote $Q^{\pi^*}_h(s,\pi^*)-Q^{\pi^*}_h(s,\pi)$ for the remaining of this paper. This corollary implies our method can avoid compounding error under a mild assumption on the distribution of $D_Q$. In Sec.~\ref{sec_experiment}, we show that the distribution $P$ in actual tasks satisfies this assumption.
We then derive the sub-optimality bound in the finite sample case. Let $\{\hat{\pi}_i\}_{i=1}^N$ be the sequence of policies generated by our method in $N$ iterations with a fixed $p$, and $\delta_i=\mathbb{E}_{s\sim d^{\hat{\pi}'_i}}{\mathbb{I}}(Q^{\pi^*}_h(s,\pi^*)-Q^{\pi^*}_h(s,\hat{\pi}_i)>p)$, then we obtain the following theorem. \begin{theorem}\label{thm_finite} Let $\hat{\pi}=\frac{1}{N}\sum_i\hat{\pi}_i $, then $J(\pi^*)-\mathbb{E}[J(\hat{\pi})] \lesssim pH+\delta
\frac{|{\mathcal{S}}|H^2}{N}$, where $\delta=\frac{1}{N}\sum_i\delta_i$ and $\lesssim$ omits the constant and the log term.
If the condition of Corollary~\ref{corollary} is satisfied for all $N$ iterations, the bound can be improved as $J(\pi^*)-\mathbb{E}[J(\hat{\pi})]\lesssim \frac{|{\mathcal{S}}|H}{N}$. \end{theorem}
The bound on the sample complexity can be derived from this theorem. Let the value loss be $\epsilon$; then $N=\tilde{{\mathcal{O}}}(\frac{\delta|{\mathcal{S}}|H^2}{\epsilon-pH})$. Under the condition of Corollary~\ref{corollary}, the sample complexity is $\tilde{{\mathcal{O}}}(\frac{|{\mathcal{S}}|H}{\epsilon})$. This shows that our method also avoids the quadratic dependence on $H$ in the sample complexity. In contrast, AL and AIL methods suffer from such a term in their complexity even though the compounding error in the sub-optimality bound is avoided.
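As a rough illustration of these rates, ignore constants and logarithmic factors and take $|{\mathcal{S}}|=10^2$, $H=10^2$ and target loss $\epsilon=1$: the bounds in Tab.~\ref{tab:complexity} then suggest on the order of $10^6$ samples for BC but only about $10^4$ for AdapMen.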
\section{Practical Implementation}\label{sec_practical}
In this section, we design a practical algorithm based on the analysis in Sec.~\ref{sec_method} and \ref{sec_analysis}. The key idea is to find a proper value of the threshold $p$ and a surrogate of $Q_h^{\pi^*}$ when $Q_h^{\pi^*}$ is not available.
To facilitate our derivation, we first introduce the definition of TV divergence and KL divergence. \begin{definition} Let $P$ and $Q$ be two distributions over a sample space ${\mathcal{S}}$ , then the TV divergence between $P$ and $Q$, $D_{\mathrm{TV}}(P,Q)$, is defined as
$$D_{\mathrm{TV}}(P,Q)=\frac{1}{2}\int |P(s)-Q(s)|ds.$$
The KL divergence between $P$ and $Q$, $D_{\mathrm{KL}}(P,Q)$, is defined as $$D_{\mathrm{KL}}(P,Q)=\int P(s)\log\frac{P(s)}{Q(s)}ds.$$ \end{definition}
\subsection{The choice of $p$} According to Corollary~\ref{corollary}, the sub-optimality bound is small when the assumption on $P$ is satisfied and $p=\Omega(\epsilon_b\log H)$. However, directly setting $p=\Omega(\epsilon_b\log H)$ is inappropriate because it cannot generalize to other distribution classes and the constant hidden in $\Omega$ is difficult to determine.
To avoid the drawbacks of Corollary~\ref{corollary}, we choose $p$ according to Theorem~\ref{thm_infinite}. Remember that $p$ provides a trade-off between the first term and the second term of the sub-optimality bound, i.e., $pH$ and $\delta\epsilon_bH^2$, and the order of the error depends on the larger term. Therefore, the best order of the bound can be achieved when the two terms are equal. Based on this intuition, the relationship between $p$ and $\delta$ should be $p=\delta \epsilon_b H$. In fact, the choice of $p$ preserves the $\tilde{{\mathcal{O}}}(\epsilon_bH)$ bound in Corollary~\ref{corollary} when the assumption on $P$ is satisfied. Please refer to Appendix~\ref{sec_p} for a detailed discussion.
It is natural to assume the optimization process is smooth, i.e., the intervention probability $\delta$ and the policy 0-1 loss $\epsilon_b$ change slowly throughout the optimization process. Therefore, we can calculate $p$ using the $\delta$ and $\epsilon_b$ of the last iteration as an approximation. These quantities are easy to obtain, because $\epsilon_b$ can be computed directly and $\delta$ can be estimated by the intervention frequency.
For tasks with continuous action spaces, the policy 0-1 loss is exactly 1, which makes the bound in Theorem~\ref{thm_infinite} trivial. In fact, Theorem~\ref{thm_infinite} also holds when $\ell$ is the TV divergence between $\pi$ and $\pi^*$, and we discuss this in Appendix~\ref{sec_p}. According to Pinsker's inequality~\citep{pinsker}, $D_{\mathrm{TV}}(P,Q)\leq \sqrt{D_{\mathrm{KL}}(P,Q)}$, i.e., the TV divergence is controlled by the KL divergence. Thus we use the KL divergence instead, avoiding the complex computation of the TV divergence: the condition $\mathbb E_{s\sim \beta}[\ell(s, \pi, \pi')]\leq \epsilon_b$ with $\ell$ the policy TV divergence still holds when $\epsilon_b$ is measured by the policy KL divergence. In this way, we only need to determine $p$ in the first iteration. The key idea to tune the initial $p$ is to let $p$ approximately equal $\delta\epsilon_b H$, which can be easily calculated after a few interactions with the environment.
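The adaptive update of $p$ described above can be sketched in a few lines of Python; the estimate of $\delta$ by the intervention frequency and of $\epsilon_b$ by the average KL loss of the previous iteration follows the discussion above, while the function and variable names are only illustrative.
\begin{verbatim}
# Sketch of the adaptive update p <- delta * eps_b * H; the inputs are
# statistics of the previous iteration and all names are illustrative.
def update_threshold(num_interventions, num_steps, avg_kl_loss, horizon):
    delta = num_interventions / max(num_steps, 1)   # intervention frequency
    eps_b = avg_kl_loss                             # KL loss on the buffer
    return delta * eps_b * horizon

# e.g. 120 interventions in 1000 steps, mean KL 0.02, horizon 1000 -> p = 2.4
print(update_threshold(120, 1000, 0.02, 1000))
\end{verbatim}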
\subsection{Surrogate of Q-value difference} In many real-world applications, though the exact expert Q-values are hard to get upfront, many existing methods can acquire a Q-function that is close to $Q^{\pi^*}$, including learning from offline datasets~\cite{cql, combo, hve}, using human advice~\cite{PEBBLE, human+}, and computing from rules~\cite{kernel, knowledge}. However, in cases where even an approximate Q-function cannot be obtained, we need a surrogate of $Q^{\pi^*}$. Note that the expert policy $\pi^*$ is accessible, so we derive the relationship between the Q-value difference and the policy divergence as follows. \begin{theorem}\label{thm_pi} The Q-value difference can be bounded by the policy divergence:
$$Q^{\pi^*}_h(s,\pi^*)-Q^{\pi^*}_h(s,\pi)\leq D_{\textnormal{TV}}(\pi^*(\cdot|s), \pi(\cdot|s))(H-h).$$ \end{theorem}
This theorem shows that $D_{\textnormal{TV}}(\pi^*(\cdot|s), \pi(\cdot|s))(H-h)$ is an upper bound on $Q^{\pi^*}_h(s,\pi^*)-Q^{\pi^*}_h(s,\pi)$. Using this upper bound as a surrogate is reasonable because the sub-optimality bound in Theorem~\ref{thm_infinite} is preserved.
Similarly, in environments with continuous action spaces, we use $\sqrt{D_{\mathrm{KL}}(\pi^*(a|s),\pi(a|s))}$ instead of $D_{\mathrm{TV}}(\pi^*(a|s),\pi(a|s))$. This is because the TV divergence is difficult to compute in continuous action spaces, while Pinsker's inequality~\citep{pinsker} guarantees that the theoretical results still hold under this modification.
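For concreteness, assuming (as is common in practice but not required by our analysis) that both policies output diagonal Gaussian action distributions, the surrogate admits a closed form. The sketch below uses the standard KL formula for Gaussians; the function names are ours.
\begin{verbatim}
import numpy as np

def gaussian_kl(mu_e, std_e, mu_l, std_l):
    # KL( N(mu_e, std_e^2) || N(mu_l, std_l^2) ), summed over action dims
    kl = (np.log(std_l / std_e)
          + (std_e**2 + (mu_e - mu_l)**2) / (2.0 * std_l**2) - 0.5)
    return np.sum(kl)

def dq_surrogate(mu_e, std_e, mu_l, std_l):
    # sqrt(KL) upper-bounds the TV divergence between the two policies,
    # which in turn bounds the Q-value difference up to the horizon factor
    return np.sqrt(gaussian_kl(mu_e, std_e, mu_l, std_l))
\end{verbatim}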
For our practical algorithm, as the threshold is adaptively tuned in the training process, we name it \textbf{A}ctive a\textbf{da}ptive ex\textbf{p}ert involve\textbf{Men}t (AdapMen). The pseudo-code of AdapMen\ is given in Alg.~\ref{alg}.
\begin{algorithm}[htbp]
\caption{Training procedure of AdapMen}
\label{alg}
\begin{algorithmic}[1]
\REQUIRE {
An expert policy $\pi^*$;
A Q-function $Q$ corresponding to $\pi^*$;
Number of sampling steps $N$;
Learner update interval $K$
}
\STATE Initialize learner policy $\pi$, buffer $B$, $p$
\FOR{$n = 1$ to $N$}{
\STATE Get learner agent action $a_l$ and expert action $a_e$
\STATE Calculate the surrogate Q-value difference $D_Q$
\IF {$D_Q > p$}{
\STATE Take expert action $a_e$, add the transition to $B$
}
\ELSE{
\STATE Take learner agent action $a_l$
} \ENDIF
\IF {$n\%K == 0$}{
\STATE Sample batches of transitions from $B$ to train $\pi$
\STATE Update $p$-value
}\ENDIF
}
\ENDFOR
\end{algorithmic} \end{algorithm}
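As a complement to Alg.~\ref{alg}, the following sketch shows one environment step of the interaction loop in Python-like form; \texttt{env}, \texttt{learner}, \texttt{expert}, and \texttt{surrogate\_dq} are placeholders for the components described above rather than references to a specific implementation.
\begin{verbatim}
def interaction_step(env, state, learner, expert, surrogate_dq, p, buffer):
    """One environment step with adaptive expert intervention (sketch)."""
    a_l = learner.act(state)             # learner action
    a_e = expert.act(state)              # expert action
    d_q = surrogate_dq(state, a_l, a_e)  # surrogate Q-value difference
    if d_q > p:
        buffer.add(state, a_e)           # expert takes over; store its label
        return env.step(a_e)
    return env.step(a_l)                 # learner keeps control
\end{verbatim}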
\section{Experiments}\label{sec_experiment}
\begin{figure*}
\caption{Performance in MetaDrive with policy experts}
\label{fig:perf_test_return}
\label{fig:intervention_count}
\label{fig:performance}
\end{figure*}
In this section, we conduct experiments to test whether AdapMen\ reaches the theoretical advantages of our framework.
We choose MetaDrive~\citep{metadrive} and Atari 2600 games from ALE~\citep{ALE} as benchmarks. MetaDrive is a highly compositional autonomous driving benchmark that is closely related to real-world applications. The MetaDrive simulator can generate an infinite number of diverse driving scenarios from both procedural generation and real data importing. The agent observes a 259-dimensional vector which is composed of a 240-dimensional vector denoting the 2D-Lidar-like point clouds, a vector summarizing the target vehicle’s state and a vector for the navigation information. The action space is a continuous 2-dimensional vector representing the acceleration and steering of the car, respectively. The goal is to follow the traffic rules and reach the target position as fast as possible. The training configuration of MetaDrive follows that in~\citep{EGPO}. For fairness of comparison, the evaluation is performed on 20 randomly selected scenarios. Atari 2600 games are challenging visual-input RL tasks with discrete action spaces. Using conventional environment wrappers and processing techniques, the agent observes an $(84\times 84)$ grayscale image and has discrete action spaces ranging from 6 valid actions to 18 valid actions depending on the game. We randomly select six common Atari games. Since the Atari simulator is nearly deterministic, we activate the ``sticky action'' feature to simulate actual human input and increase stochasticity.
We choose BC~\citep{BC1}, DAgger~\citep{dagger}, HG-DAgger~\citep{HG-dagger}, EnsembleDAgger~\citep{ensembledagger}, and ValueDICE~\citep{ValueDICE} as baselines. The details of BC and DAgger have been introduced in Sec.~\ref{sec_intro}. HG-DAgger and EnsembleDAgger are representative active imitation learning methods. HG-DAgger allows interactive imitation learning from human experts in real-world systems by letting a human expert take over control when deemed necessary, and EnsembleDAgger uses both the action variance of an ensemble of policies and the action discrepancy between learner and expert as the criterion to decide whether the expert should take over control. ValueDICE is a state-of-the-art AIL method that trains the learner agent via robust divergence minimization in an off-policy manner. Hyper-parameters of the baseline implementations are listed in Appendix~\ref{sec:parameter}.
We first test the performance of AdapMen\ and the baselines in the two benchmarks with experts in the form of trained policies, namely policy experts. Then, we dive into AdapMen\ and demonstrate some key properties of our algorithm to answer the following questions: \begin{itemize} \item How is the intervention threshold automatically adjusted during the training process? \item Can the distribution of $D_Q$ satisfy the assumption in Corollary~\ref{corollary} in most cases? \item Is policy divergence a good surrogate of $D_Q$? \end{itemize} Finally, we simulate real-world scenarios by letting a human be the expert and control the vehicle in the MetaDrive benchmark.
\begin{figure*}
\caption{Performance in six Atari games with policy experts}
\label{fig:performance_atari}
\end{figure*}
\subsection{Performance with policy experts} \subsubsection{Performance in MetaDrive}
The expert policy of MetaDrive is trained by Soft Actor-Critic (SAC) ~\citep{SAC}. For AdapMen, we take one of the trained Q-networks as $Q^{\pi^*}$ and calculate $D_Q$ based on it. To demonstrate the robustness towards inaccurate $Q^{\pi^*}$ when the ground truth value is not available, we also perform experiments on the estimated value function in Appendix~\ref{sec_extra}.
The performance in the MetaDrive benchmark is plotted in Fig.~\ref{fig:performance}. The horizontal axis represents the total number of steps sampled in the environment. The vertical axes of Fig.~\ref{fig:perf_test_return} and Fig.~\ref{fig:intervention_count} show policy return, success rate, and the number of expert interventions, respectively. HG-DAgger is omitted from these experiments for the sake of fairness because its expert is intended to be a human.
For the MetaDrive benchmark, AdapMen\ achieves the best performance in terms of both cumulative return and success rate. ValueDICE achieves the worst performance, probably because of its high sample complexity and sensitivity to hyper-parameters; we failed to find a working configuration. Notwithstanding its low expert intervention count, the performance of EnsembleDAgger severely degrades. The $\mu$-recoverability property required by DAgger is hard to satisfy in risk-sensitive environments, so DAgger shows no advantage over BC. BC achieves the best performance among all the baseline algorithms. This is because the policy expert has little stochasticity and the dimension of the input is small.
The total number of expert data usage is shown in Fig.~\ref{fig:intervention_count}. Here the expert data usage is defined as the number of expert state-action pairs added to the buffer for training the learner. This quantity is the same for BC and ValueDICE as for DAgger, so we omit them from the figure. DAgger always adds the expert state-action pair to the buffer, thus having the largest expert data usage. Compared with EnsembleDAgger, by generating the best buffer distribution for teaching, AdapMen\ requires fewer expert interventions while achieving a better test performance.
To further verify our theory, we draw the trend of the $p$-value and the actual intervention probability throughout the training process in Fig.~\ref{fig:pvalue}, where the left vertical axis represents the value of $p$, while the right vertical axis represents the intervention probability. The probability is calculated every 200 sample steps in the environment. Theorem~\ref{thm_infinite} implies that the sub-optimality bound grows with $p$ and $\delta$. This is verified by the decreasing trend of $p$ and $\delta$ in the training process, coinciding with the increasing policy return in Fig.~\ref{fig:perf_test_return}. Meanwhile, the sharply changing $p$ also demonstrates the importance of an adaptively changing intervention criterion. Intuitively, as the learner agent gets better at driving the car, the teacher should increase the difficulty of the teaching policy. A lower $p$-value indicates more difficult learning content. \begin{figure}
\caption{$p$-value and intervention probability of AdapMen\ on MetaDrive}
\label{fig:pvalue}
\end{figure}
\subsubsection{Performance in Atari games}
The expert policies of Atari games are trained by Deep Q-Learning~\citep{atari_dqn}. Note that we activate the ``sticky action'' feature to increase the stochasticity of the tasks. Since EnsembleDAgger requires a continuous action space, we omit it from the comparison in the Atari 2600 games. The performance curves in the Atari 2600 games are plotted in Fig.~\ref{fig:performance_atari}.
AdapMen\ outperforms the baselines in 5 out of 6 Atari games. These tasks are more challenging, which can be inferred from the performance of the baselines. In Qbert, all algorithms fail to learn from the expert except for AdapMen. In all the tasks, ValueDICE performs as poorly as in MetaDrive. BC, which has near-optimal performance in MetaDrive, also collapses in most of the six Atari games. This shows that BC fails in higher-dimensional environments. DAgger performs better than the other baselines, probably because the $\mu$-recoverability assumption can still be satisfied in most states of the Atari games.
\subsection{Performance of AdapMen\ criterion based on policy divergence} \begin{figure}
\caption{Performance in MetaDrive with different criteria of AdapMen}
\label{fig:perf_adapmen}
\end{figure}
As mentioned in Sec.~\ref{sec_practical}, when $Q^{\pi^*}$ is not available, we use policy divergence as a surrogate of $D_Q$. To validate the correctness of this surrogate, we test it on MetaDrive, and plot its performance in Fig.~\ref{fig:perf_adapmen}. AdapMen\ is the original algorithm, while AdapMen-PI uses the policy divergence instead of $D_Q$. The result shows AdapMen-PI has comparable performance with AdapMen. This experiment validates our theory and demonstrates that policy divergence is also a proper criterion.
\subsection{Analysis of $D_Q$ distribution}
\begin{figure}
\caption{$D_Q$ distributions in MetaDrive and Atari games. The blue lines show the distributions of $D_Q$ estimated by kernel density estimation.}
\label{fig:analysis}
\end{figure}
In Corollary~\ref{corollary}, we assume that the distribution of $D_Q$, i.e., $P$ in Sec.~\ref{sec_analysis}, belongs to the ${\mathcal{O}}(\epsilon_b)$-Sub-Exponential distribution class with expectation ${\mathcal{O}}(\epsilon_b)$. The assumption is satisfied if the tail of $P$ is bounded by an exponential distribution with parameter $\epsilon_b$. To verify this assumption, we plot $P$ for MetaDrive and the six Atari games and compare their tails with an exponential distribution. Partial results are shown in Fig.~\ref{fig:analysis} due to space limitations; the rest of the results are in Appendix~\ref{sec_dq}. The distributions at the beginning and at the end of the training are on the left-hand side and right-hand side, respectively. All the tails of $P$ are bounded by the exponential distribution, which implies the assumption is satisfied in nearly all tested tasks. This bridges the gap between the theoretical analysis and the practical applicability of AdapMen.
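This tail check can be scripted directly from logged $D_Q$ samples. The snippet below is a sketch of the procedure (setting the exponential parameters to the empirical mean and standard deviation is our simplifying assumption): it compares the empirical survival function of $D_Q$ with the exponential tail $\exp(-(t-\mu)/\sigma)$ appearing in the proof of Corollary~\ref{corollary}.
\begin{verbatim}
import numpy as np

def tail_bounded_by_exponential(dq_samples, num_points=50, tol=1e-3):
    """Check whether the empirical tail of D_Q lies below an exponential tail."""
    dq = np.asarray(dq_samples, dtype=float)
    mu = dq.mean()                     # plays the role of the O(eps_b) mean
    sigma = max(dq.std(), 1e-8)        # plays the role of the O(eps_b) scale
    grid = np.linspace(dq.min(), dq.max(), num_points)
    empirical = np.array([(dq > t).mean() for t in grid])   # P(D_Q > t)
    exponential = np.minimum(1.0, np.exp(-(grid - mu) / sigma))
    return bool(np.all(empirical <= exponential + tol))
\end{verbatim}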
\subsection{Performance in MetaDrive with a human expert}
\begin{figure}
\caption{Performance in MetaDrive with a human expert}
\label{fig:perf_test_return_human}
\label{fig:perf_success_human}
\label{fig:perf_human}
\end{figure} In real-world tasks, humans are important sources of expert information, especially in autonomous driving tasks. To mimic real-world tasks, we substitute the SAC expert in MetaDrive with a human. The experimental results are shown in Fig.~\ref{fig:perf_human}. The random and sometimes irrational behaviors of human experts raise a huge challenge for imitation learning algorithms, and a general degradation of performance occurs for all methods. BC has a 75\% performance degradation. In contrast, AdapMen\ has a relatively small performance degradation and achieves the best final performance. The performance of HG-DAgger is surprising. Although our human expert tried their best to correct the behavior of the learner agent, HG-DAgger is only slightly better than BC. HG-DAgger even uses more expert actions to train the learner policy than AdapMen. This shows that the teaching strategy of humans is unreliable and an objective criterion is important.
\section{Conclusion} In this paper, we formulate the IL process as a teacher-student interaction framework. The proposed framework shows that the expert should intervene in the agent's interaction with the environment according to a certain criterion. We theoretically verify the effectiveness of this framework and derive a better error bound and sample complexity under a mild condition, which we experimentally demonstrate to be common in many benchmarks. Based on the teacher-student interaction framework, we propose a practical method, AdapMen, in which the intervention criterion is tuned automatically during training, removing the hyper-parameter tuning burden of other active imitation learning methods. Experimental results demonstrate that AdapMen\ achieves better performance than other IL methods.
\appendix \onecolumn \section{Proofs of Section~\ref{sec_analysis} and Section~\ref{sec_practical}}\label{sec_proof}
For brevity, we denote $\pi(\cdot|s)$ as $\pi(s)$ in the appendix. \subsection{Proof of Theorem~\ref{thm_infinite}} \begin{lemma}[Safety]\label{lem_safty} The teaching policy $\pi'$ satisfies $J(\pi')\geq J(\pi^*)-pH$. \end{lemma} \begin{proof} We follow a proof similar to that of \citep{dagger}. Given our policy $\pi'$, consider the policy $\pi'_{1:t}$, which executes $\pi'$ in the first $t$ steps and then executes the expert policy $\pi^*$. Then \begin{align*}
J(\pi')&=J(\pi^*)+\sum_{t=0}^{H-1}[J(\pi'_{1:H-t})-J(\pi'_{1:H-t-1})]\\
&=J(\pi^*)+\sum_{t=1}^H\mathbb E_{s\sim d^{\pi'}_t}[Q_{t}^{\pi^*}(s,\pi')-Q_{t}^{\pi^*}(s,\pi^*)]\\
&\geq J(\pi^*)-pH, \end{align*} where the last inequality holds because, by the construction of $\pi'$, $Q_{t}^{\pi^*}(s,\pi')-Q_{t}^{\pi^*}(s,\pi^*)\geq -p$ for every state $s$. \end{proof} Besides facilitating the proof of the theorem, this lemma also demonstrates that the deployed policy suffers a value loss of at most $pH$, which ensures safety in the learning process.
\begin{proof}[Proof of Theorem~\ref{thm_infinite}] Note that $J(\pi^*)-J(\pi)=J(\pi^*)-J(\pi')+J(\pi')-J(\pi)$, where $J(\pi^*)-J(\pi')$ can be bounded using Lemma~\ref{lem_safty}. Thus, we only need to bound $J(\pi')-J(\pi)$. Here we use $\pi_{1:t}$ to denote the policy that executes $\pi'$ in the first $t$ steps and then executes $\pi$. Then, \begin{align*}
J(\pi')&=J(\pi)+\sum_{t=0}^{H-1}[J(\pi_{1:H-t})-J(\pi_{1:H-t-1})]\\
&=J(\pi)+\sum_{t=1}^H\mathbb E_{s\sim d^{\pi'}_t}[Q_{t}^{\pi}(s,\pi')-Q_{t}^{\pi}(s,\pi)]\\
&\overset{(a)}{=}J(\pi)+\sum_{t=1}^H\mathbb E_{s\sim d^{\pi'}_t}\mathbb I(Q_{t}^{\pi^*}(s,\pi')-Q_{t}^{\pi^*}(s,\pi)>p)[Q_{t}^{\pi}(s,\pi')-Q_{t}^{\pi}(s,\pi)]\\
&=J(\pi)+\sum_{t=1}^H\mathbb E_{s\sim d^{\pi'}_t}\mathbb I(Q_{t}^{\pi^*}(s,\pi')-Q_{t}^{\pi^*}(s,\pi)>p)\mathbb I(\pi(s)\neq \pi^*(s))[Q_{t}^{\pi}(s,\pi')-Q_{t}^{\pi}(s,\pi)]\\
&\overset{(b)}{\leq}J(\pi)+H\sum_{t=1}^H\mathbb E_{s\sim d^{\pi'}_t}\mathbb I(Q_{t}^{\pi^*}(s,\pi')-Q_{t}^{\pi^*}(s,\pi)>p)\mathbb I(\pi(s)\neq \pi^*(s))\\
&\overset{(c)}{=}J(\pi)+\delta\epsilon_b H^2, \end{align*} where (a) uses the fact that $\pi'=\pi$ if $Q_{t}^{\pi^*}(s,\pi')-Q_{t}^{\pi^*}(s,\pi)\leq p$, (b) is because the Q-value is no more than $H$, and (c) is derived from $\mathbb E_{s\sim \beta}[\ell(s, \pi, \pi^*)]=\mathbb E_{s\sim \beta}[\ell(s, \pi, \pi')]\leq \epsilon_b$. \end{proof}
\subsection{Proof of Corollary~\ref{corollary}} \begin{proof}[Proof of Corollary~\ref{corollary}] Under the sub-exponential assumption on $P$, we have $$\textnormal{Pr}\left(Q^{\pi^*}_t(s,\pi')-Q^{\pi^*}_t(s,\pi)>p\right)\leq \exp(-\frac{p-\mu}{\sigma}),$$ where $\mu=\mathbb{E}[Q^{\pi^*}_t(s,\pi')-Q^{\pi^*}_t(s,\pi)]$ and $\sigma^2={\mathbb{V}}[Q^{\pi^*}_t(s,\pi')-Q^{\pi^*}_t(s,\pi)]$.
Under the condition of the corollary, $\mu={\mathcal{O}}(\epsilon_b)$ and $\sigma={\mathcal{O}}(\epsilon_b)$. Since $$\delta=\mathbb{E}_{s\sim d_{\pi'}}{\mathbb{I}}(Q^{\pi^*}_t(s,\pi')-Q^{\pi^*}_t(s,\pi)>p)=\textnormal{Pr}\left(Q^{\pi^*}_t(s,\pi')-Q^{\pi^*}_t(s,\pi)>p\right),$$ we have $\delta={\mathcal{O}}(\frac{1}{H})$ if $p\geq \mu+\sigma\log H=\Omega(\epsilon_bH)$.
Thus, $$J(\pi^*)-J(\pi)\leq pH+\delta \epsilon_b H^2 ={\mathcal{O}}(\epsilon_bH\log H)+{\mathcal{O}}(\epsilon_bH)= \tilde{{\mathcal{O}}}(\epsilon_b H).$$ \end{proof}
\subsection{Proof of Theorem~\ref{thm_finite}}
Let $\{\hat{\pi}'_i\}_{i=1}^N$ be the teaching policy induced by $\{\hat{\pi}_i\}_{i=1}^N$ and the intervention threshold $p$.
\begin{lemma}\label{lemma_avarage} Suppose there is an online learning algorithm which outputs policies $\{\hat{\pi}_1, \dots, \hat{\pi}_N\}$ sequentially according to any procedure. Here the learner learns the policy $\hat{\pi}_i$ from some distribution conditioned on $\Tr_1, \dots, \Tr_{i-1}$ and subsequently samples a trajectory $\Tr_i$ by rolling out $\hat{\pi}'_i$, the teaching policy induced by $\hat{\pi}_i$. This process is repeated for $N$ iterations. Denote $\hat{\beta}_i$ as the empirical distribution over the states induced by $\Tr_i$. Denote $\hat{\pi}=\frac{1}{N}\sum_{i=1}^N\hat{\pi}_i$ as the mixture policy. Let $\mathbb{E}_{s\sim \hat{\beta}_i}[\ell(s,\hat{\pi},\pi^*)]={\mathcal{L}}(\hat{\beta}_i, \hat{\pi}, \pi^*)$ and $\hat{\beta}=\frac{1}{N}\sum_{i=1}^N\hat{\beta}_i$, the mixture state distribution. Then, $$\mathbb{E}[{\mathcal{L}}(\beta, \hat{\pi}, \pi^*)]=\frac{1}{N}\sum_{i=1}^N\mathbb{E}[{\mathcal{L}}(\hat{\beta}_i, \hat{\pi}_i,\pi^*)].$$ \end{lemma} \begin{proof} Since the trajectory $\Tr_i$ is rolled out using $\hat{\pi}'_i$, conditioned on $\hat{\pi}_i$ the empirical distribution $\hat{\beta}_i$ is unbiased, i.e., equal to $\beta_i$ in expectation, where $\hat{\pi}'_i$ is derived from $\hat{\pi}_i$. Therefore, for each $i$, since ${\mathcal{L}}(\beta, \hat{\pi}, \pi^*)$ is a linear function of $\beta$,
$$\mathbb{E}\left[{\mathcal{L}}(\hat{\beta}_i,\hat{\pi}_i,\pi^*)|\hat{\pi}_i\right]={\mathcal{L}}(\beta_i,\hat{\pi}_i,\pi^*).$$ Summing across $i=1,\cdots,N$, using the fact that ${\mathcal{L}}(\beta,\hat{\pi},\pi^*)=\frac{1}{N}\sum_{i=1}^N{\mathcal{L}}(\beta_i,\hat{\pi}_i,\pi^*)$, and taking the expectation completes the proof. \end{proof}
The lemma implies it suffices to minimize the empirical 0-1 loss under the empirical distribution.
Note that
$${\mathcal{L}}(\hat{\beta}_i,\pi,\pi^*)=\frac{1}{H}\sum_{t=1}^H\sum_{s\in{\mathcal{S}}}\left<\pi^t(\cdot|s),z_i^t(s)\right>,$$
where $z_i^t(s)=\hat{\beta}_i(s)\left(\bm 1-\pi^*(\cdot|s)\right)$ and $\bm 1$ denotes the all-ones vector over actions.
To learn the returned policy sequence, we use the normalized-EG algorithm~\citep{book}, which is also known as online mirror descent with entropy regularization for online learning. Formally, the online learning problem and the algorithm are described in Section 2 of~\citep{book}.
\begin{lemma}[Theorem 8 in~\cite{value_interaction}]\label{lemma_omd} Assume that the normalized EG algorithm is run on a sequence of linear loss functions $\{\left<z_i,\cdot\right>:i=1, \cdots, T\}$ with $\eta=1/2$ to return a sequence of distributions $w_1, \cdots, w_T\in \Delta_{{\mathcal{A}}}^1$. Assume that for all $i\in[T]$, $\bm 0\preceq z_i\preceq\bm 1$. For any $u$ such that $\sum_{i=1}^T\left<z_i,u\right>=0$,
$$\sum_{i=1}^T\left<w_i-u,z_i\right>\leq 4\log(|{\mathcal{A}}|).$$ \end{lemma}
\begin{proof}[Proof of Theorem~\ref{thm_finite}] According to Lemma~\ref{lemma_omd}, we have
$$\sum_{i=1}^N\left<z_i^t(s),\hat{\pi}_i(\cdot|s)\right>\leq 4\log(|{\mathcal{A}}|).$$ Averaging across $t\in [H]$, summing across $s\in{\mathcal{S}}$, and rescaling according to the definition of ${\mathcal{L}}$ result in
$$\frac{1}{N}\sum_{i=1}^N{\mathcal{L}}(\hat{\beta}_i,\hat{\pi}_i,\pi^*)\leq \frac{4|{\mathcal{S}}|\log(|{\mathcal{A}}|)}{N}.$$ From Lemma~\ref{lemma_avarage}, the resulting sequence of policies $\hat{\pi}_1,\dots,\hat{\pi}_N$ and their mixture $\hat{\pi}$ satisfy
$$\mathbb{E}\left[{\mathcal{L}}(\beta,\hat{\pi},\pi^*)\right]\leq \frac{4|{\mathcal{S}}|\log(|{\mathcal{A}}|)}{N}.$$
This implies $\epsilon_b\leq \frac{4|{\mathcal{S}}|\log(|{\mathcal{A}}|)}{N}$. Note that $$\mathbb{E}_{s\sim d^{\hat{\pi}'}}{\mathbb{I}}(Q_{t}^{\pi^*}(s,\hat{\pi}')-Q_{t}^{\pi^*}(s,\hat{\pi})>p)=\frac{1}{N}\sum_i\delta_i.$$ Combining these results with Theorem~\ref{thm_infinite} completes the proof.
\end{proof}
\subsection{Proof of Theorem~\ref{thm_pi}}
\begin{lemma}\label{lem_q_diff} The Q-value difference satisfies $$Q^{\pi^*}_h(s_h,\pi^*)-Q^{\pi^*}_h(s_h,\pi)\leq D_{\textnormal{TV}}(d_1(s,a),d_2(s,a))(H-h+1),$$
where $d_1(s,a)=\frac{1}{H-h+1}\sum_{t=h}^H\textnormal{Pr}(s_t=s,a_t=a|s_h, a_t\sim \pi^*)$ and $d_2(s,a)=\frac{1}{H-h+1}\sum_{t=h}^H\textnormal{Pr}(s_t=s,a_t=a|s_h, a_h\sim \pi, a_t\sim \pi^*, \forall t> h)$. \end{lemma} \begin{proof} \begin{align*}
Q^{\pi^*}_h(s_h,\pi^*)-Q^{\pi^*}_h(s_h,\pi)&=(H-h+1)\sum_{s,a}(d_1(s,a)-d_2(s,a))r(s,a)\\
&\leq (H-h+1)\sum_{s,a}(d_1(s,a)-d_2(s,a))_+r(s,a)\\
&\leq (H-h+1)D_{\mathrm{TV}}(d_1(s,a),d_2(s,a)), \end{align*} where $(p_1(s,a)-p_2(s,a))_+=p_1(s,a)-p_2(s,a)$ if $p_1(s,a)-p_2(s,a)>0$, otherwise $(p_1(s,a)-p_2(s,a))_+=0$. The last inequality results from the assumption that $r(s,a)\leq 1$. \end{proof} \begin{lemma}[Lemma B.1 of \citep{mbpo}]\label{lem_mbpo}
Suppose $p_1(s,a)=p_1(s)p_1(a|s)$ and $p_2(s,a)=p_2(s)p_2(a|s)$, we can bound the total variation distance of the joint as:
$$D_{\mathrm{TV}}(p_1(s,a),p_2(s,a))\leq D_{\mathrm{TV}}(p_1(s),p_2(s))+\mathbb{E}_{s\sim p_1}D_{\mathrm{TV}}(p_1(a|s),p_2(a|s)).$$ \end{lemma}
\begin{proof}[Proof of Theorem~\ref{thm_pi}] According to Lemma~\ref{lem_q_diff}, we only need to bound $D_{\mathrm{TV}}(d_1(s,a),d_2(s,a))$.
Let $d_1(s)=\frac{1}{H-h+1}\sum_{t=h}^H\textnormal{Pr}(s_t=s|s_h, a_t\sim \pi^*)$ and $d_2(s)=\frac{1}{H-h+1}\sum_{t=h}^H\textnormal{Pr}(s_t=s|s_h, a_h\sim \pi, a_t\sim \pi^*, \forall t>h)$. Let $d_i^t(s)$ be the state distribution at step $t$, i.e., $d_i(s)=\frac{1}{H-h+1}\sum_{t=h}^Hd_i^t(s)$. Similarly, let $d_i^t(s,a)$ be the state-action distribution at step $t$. Then $d_1^t(s,a)=d_1^t(s)\pi^*(a|s)$ and $d_2^t(s,a)=d_2^t(s)\pi^*(a|s)$ for $t\geq h+1$, and $d_1^t(s,a)=d_1^t(s)\pi^*(a|s)$, $d_2^t(s,a)=d_2^t(s)\pi(a|s)$ for $t=h$. We apply Lemma~\ref{lem_mbpo} to $d_i^t(s,a)$ and obtain: \begin{align}
&D_{\mathrm{TV}}(d_1^t(s,a),d_2^t(s,a))\leq D_{\mathrm{TV}}(d_1^t(s),d_2^t(s))+\mathbb{E}_{s\sim d_1^t(s)}D_{\mathrm{TV}}(\pi^*(a|s),\pi^*(a|s))=D_{\mathrm{TV}}(d_1^t(s),d_2^t(s)), \qquad t\geq h+1 \label{eq_after_step}\\
&D_{\mathrm{TV}}(d_1^t(s,a),d_2^t(s,a))\leq D_{\mathrm{TV}}(d_1^t(s),d_2^t(s))+\mathbb{E}_{s\sim d_1^t(s)}D_{\mathrm{TV}}(\pi^*(a|s),\pi(a|s))=D_{\mathrm{TV}}(\pi^*(a|s_h),\pi(a|s_h)), \qquad t=h \label{eq_first_step} \end{align} The derivation of the second inequality uses the fact that the state at step $h$ is exactly $s_h$.
Therefore, we only need to focus on the TV divergence between $d_1^t(s)$ and $d_2^t(s)$. For $t>h+1$, \begin{align*}
D_{\mathrm{TV}}(d^t_1(s),d^t_2(s))&=\frac{1}{2}\sum_s\left|d_1^t(s)-d_2^t(s)\right|\\
&=\frac{1}{2}\sum_s \left|\sum_{s',a'}\left(d_1^{t-1}(s')\pi^*(a'|s'){\mathcal{P}}(s|s',a')-d_2^{t-1}(s')\pi^*(a'|s'){\mathcal{P}}(s|s',a')\right)\right|\\ \end{align*}
Denote $\sum_{a'}\pi^*(a'|s'){\mathcal{P}}(s|s',a')$ as $p(s|s')$, the above equation can be simplified as \begin{align*}
D_{\mathrm{TV}}(d^t_1(s),d^t_2(s))&=\frac{1}{2}\sum_s \left|\sum_{s'}\left(d_1^{t-1}(s')p(s|s')-d_2^{t-1}(s')p(s|s')\right)\right|\\
&\leq\frac{1}{2}\sum_{s,s'}\left|d_1^{t-1}(s')p(s|s')-d_2^{t-1}(s')p(s|s')\right|\\
&=\frac{1}{2}\sum_{s,s'}p(s|s')\left|d_1^{t-1}(s')-d_2^{t-1}(s')\right|\\
&\overset{(a)}{=}\frac{1}{2}\sum_{s'}\left|d_1^{t-1}(s')-d_2^{t-1}(s')\right|\\
&=D_{\mathrm{TV}}(d_1^{t-1}(s'),d_2^{t-1}(s')), \end{align*}
where (a) uses $\sum_sp(s|s')=1$. Recursively, we have $D_{\mathrm{TV}}(d^t_1(s),d^t_2(s))\leq D_{\mathrm{TV}}(d^{h+1}_1(s),d^{h+1}_2(s))$ for all $t\geq h+1$.
For $D_{\mathrm{TV}}(d^{h+1}_1(s),d^{h+1}_2(s))$, we have \begin{align*}
D_{\mathrm{TV}}(d^{h+1}_1(s),d^{h+1}_2(s))&=\frac{1}{2}\sum_s|d_1^{h+1}(s)-d_2^{h+1}(s)|\\
&=\frac{1}{2}\sum_s \left|\sum_{s',a'}\left(d_1^{h}(s')\pi^*(a'|s'){\mathcal{P}}(s|s',a')-d_2^{h}(s')\pi(a'|s'){\mathcal{P}}(s|s',a')\right)\right|\\
&=\frac{1}{2}\sum_s \left|\sum_{a'}\left(\pi^*(a'|s_h){\mathcal{P}}(s|s_h,a')-\pi(a'|s_h){\mathcal{P}}(s|s_h,a')\right)\right|\\
&\leq \frac{1}{2}\sum_{s,a'}\left|\pi^*(a'|s_h){\mathcal{P}}(s|s_h,a')-\pi(a'|s_h){\mathcal{P}}(s|s_h,a')\right|\\
&=\frac{1}{2}\sum_{a'}\left|\pi^*(a'|s_h)-\pi(a'|s_h)\right|\\
&=D_{\mathrm{TV}}(\pi^*(a|s_h),\pi(a|s_h)). \end{align*} Then \begin{align*}
D_{\mathrm{TV}}(d_1(s,a),d_2(s,a))&=D_{\mathrm{TV}}\left(\frac{1}{H-h+1}\sum_{t=h}^Hd_1^t(s,a),\frac{1}{H-h+1}\sum_{t=h}^Hd_2^t(s,a)\right)\\
&\leq \frac{1}{H-h+1}\sum_{t=h}^HD_{\mathrm{TV}}(d_1^t(s,a),d_2^t(s,a))\\
&\overset{(a)}{\leq}\frac{1}{H-h+1}\left(D_{\mathrm{TV}}(\pi^*(a|s_h),\pi(a|s_h))+\sum_{t=h+1}^HD_{\mathrm{TV}}(d_1^t(s),d_2^t(s))\right)\\
&\leq D_{\mathrm{TV}}(\pi^*(a|s_h),\pi(a|s_h)), \end{align*} where (a) uses Eq.~(\ref{eq_first_step}) and (\ref{eq_after_step}), and the last inequality uses $D_{\mathrm{TV}}(d_1^t(s),d_2^t(s))\leq D_{\mathrm{TV}}(\pi^*(a|s_h),\pi(a|s_h))$ for all $t\geq h+1$. Finally, using Lemma~\ref{lem_q_diff} completes the proof. \end{proof}
\section{Review of Previous Imitation Learning Methods}\label{sec_review} \textbf{Behavioral Cloning.} BC ignores the changes between the train and test distributions and simply trains a policy $\pi$ that performs well under the distribution of states $d_{\pi^*}$ encountered by the expert policy. This is achieved by the standard supervised learning: $$\hat{\pi}=\argmin_\pi \mathbb{E}_{s\sim d_{\pi^*}}[\ell(s,\pi, \pi^*)].$$
Assuming $\ell(s,\pi, \pi^*)$ is the 0-1 loss, i.e., $\ell(s,\pi, \pi^*)=\textnormal{Pr}(\pi(s)\neq \pi^*(s))$, we have the following sub-optimality bound in the infinite sample case: \begin{theorem}[Theorem 2.1 in \citep{bc}]\label{thm_bc} Let $\mathbb E_{s\sim d_{\pi^*}}[\ell(s,\pi, \pi^*)]=\epsilon_b$, then $J(\pi^*)-J(\pi)\leq H^2\epsilon_b$. \end{theorem} The dependency on $H^2$ implies the issue of compounding error. Note that this bound is tight, as \cite{xu2020} pointed out. Due to the quadratic growth in $H$, BC has a poor performance guarantee.
In the finite sample case, \cite{limits} views BC as the algorithm that outputs a policy belonging to $\Pi_{\textnormal{mimic}}({\mathcal{D}})$ given expert dataset ${\mathcal{D}}$, where $\Pi_{\textnormal{mimic}}({\mathcal{D}})=\left\{\pi\mid \forall s\in {\mathcal{D}},\pi(s)=\pi^*(s)\right\}$. Let $N$ be the number of samples in ${\mathcal{D}}$ and let $\lesssim$ hide logarithmic factors; the sub-optimality bound is: \begin{theorem}[Theorem 4.2 (a) in \citep{value_interaction}]
Consider any policy $\pi$ which carries out behavioral cloning with expert dataset ${\mathcal{D}}$ (i.e., $\pi\in \Pi_{\textnormal{mimic}}({\mathcal{D}})$), we have $J(\pi^*)-J(\pi)\lesssim \min\left\{H,\frac{|{\mathcal{S}}|H^2}{N}\right\}$. \end{theorem} The compounding error is also indicated by the factor $H^2$.
\textbf{Distribution Matching.} Rather than considering the IL problem as a policy function approximation, the distribution matching approaches consider the state-action distribution induced by a policy. Concretely, the distribution matching approach proposes to learn $\pi$ by minimizing the divergence between $d^\pi$ and $d^{\pi^*}$. For example, GAIL~\citep{gail} minimizes the JS divergence while ValueDICE~\citep{ValueDICE} minimizes the KL divergence.
Xu et al.~\cite{xu2020} construct the sub-optimality bound for distribution matching approaches in the infinite sample case: \begin{theorem}[Lemma 1 in \citep{ail_finite_sample}] Let $\pi$ be a policy such that $D_{\textnormal{JS}}(d^{\pi^*},d^{\pi})= \epsilon_g$; then $J(\pi^*)-J(\pi)\leq 2\sqrt{2}\epsilon_g H$. \end{theorem} It seems the compounding error issue is solved, as the bound is proportional to $H$ rather than $H^2$. However, the two bounds cannot be compared directly, as $D_{\textnormal{JS}}(d^{\pi^*},d^{\pi})$ is more difficult to optimize than $\mathbb E_{s\sim d^{\pi^*}}[\ell(s,\pi,\pi^*)]$, which implies $\epsilon_g$ can be much larger than $\epsilon_b$ in realistic cases.
To enable a more reasonable comparison, we consider the finite sample case, where the number of samples required to achieve $\epsilon_b$ or $\epsilon_g$ is taken into account. Xu et al.~\cite{ail_finite_sample} give the sub-optimality bound for AIL: \begin{theorem}[Theorem 1 in \citep{ail_finite_sample}]
Consider the policy $\pi$ generated by AIL with expert dataset ${\mathcal{D}}$, we have $J(\pi^*)-J(\pi)\lesssim H\sqrt{\frac{|{\mathcal{S}}|-1}{N}}$. \end{theorem}
Let the sub-optimality gap be $\epsilon$; then the sample complexity of AIL becomes $\tilde{{\mathcal{O}}}(|{\mathcal{S}}|H^2/\epsilon^2)$, where $\tilde{{\mathcal{O}}}$ hides logarithmic factors. However, the sample complexity of BC is $\tilde{{\mathcal{O}}}(|{\mathcal{S}}|H^2/\epsilon)$, so AIL is even worse than BC in terms of sample complexity.
\textbf{Dataset Aggregation.} In the DAgger setting, the learner can query the expert when interacting with the environment and trains the next policy on the aggregation of all collected data. Despite the negative result of \citep{limits} that DAgger has the same sub-optimality bound as BC in the worst case, Rajaraman et al.~\cite{value_interaction} prove that DAgger can achieve a better performance under the $\mu$-recoverability assumption. The definition of the $\mu$-recoverability assumption is: \begin{definition}[$\mu$-recoverability] An IL instance is said to satisfy $\mu$-recoverability if for each $t\in [H]$ and $s\in\mathcal{S}$, $Q^{\pi^*}_t(s,\pi^*)-Q^{\pi^*}_t(s,a)\leq \mu$ for all $a\in\mathcal{A}$. \end{definition} Satisfying $\mu$-recoverability implies that any non-optimal action induces a performance degradation smaller than $\mu$ in every state. It is noteworthy that this quantity is sensitive to corner cases of the environment because the inequality should hold for every state. Furthermore, in many situations where safety is important, a single wrong action may cause the failure of a policy, which means $\mu$ is very large. In the worst case, $\mu={\mathcal{O}}(H)$. Then we state the sub-optimality bound for DAgger. \begin{theorem}[Theorem 2.2 in \citep{dagger}]\label{thm_dagger} If an IL instance satisfies $\mu$-recoverability, let $\pi$ be a policy such that $\mathbb E_{s\sim d^\pi}[\ell(s,\pi, \pi^*)]=\epsilon_b$, then $J(\pi)\geq J(\pi^*)-\mu H\epsilon_b$. \end{theorem} The result shows DAgger solves the compounding error issue by replacing $H^2$ with $\mu H$. However, $\mu$ can be arbitrarily large, and the bound can reduce to that of BC in the worst case.
Similarly, we present the result for the finite sample case. \begin{theorem}[Theorem 1 in \citep{value_interaction}]
Under $\mu$-recoverability, the policy generated in DAgger setting satisfies $J(\pi^*)-J(\pi)\lesssim \mu \frac{|{\mathcal{S}}|H}{N}$. \end{theorem}
\section{Theoretical Guarantee of the Choice of $p$}\label{sec_p} The choice of $p$ in Sec.~\ref{sec_practical} preserves the bound in Corollary~\ref{corollary}.
\begin{theorem} If the distribution $P$ belongs to ${\mathcal{O}}(\epsilon_b)$-Sub-Exponential distribution class with mean ${\mathcal{O}}(\epsilon_b)$, and $p=\delta\epsilon_b H$, then $J(\pi^*)-J(\pi)={\mathcal{O}}(\epsilon_bH)$ \end{theorem} \begin{proof} According to Corollary~\ref{corollary}, $\delta\leq \frac{1}{H}$ when $p=\mu+\sigma \log H$, where $\mu={\mathcal{O}}(\epsilon_b)$ and $\sigma={\mathcal{O}}(\epsilon_b)$.
Under this circumstance, $\delta\epsilon_b H\leq \epsilon_b< p$, so $p$ should decrease to some $p'$ in order to satisfy the condition $p'=\delta'\epsilon_bH$. Because $p'<p$ and $p'=\delta'\epsilon_bH$, we have $\delta'\epsilon_bH^2=p'H<pH$. Thus, $p'H+\delta'\epsilon_bH^2\leq 2pH={\mathcal{O}}(\epsilon_bH)$, which completes the proof.
\end{proof} The loss $\ell$ in Theorem~\ref{thm_infinite} can be replaced with the TV divergence, as the following theorem shows.
\begin{theorem} Let $\pi$ be a policy such that $\mathbb E_{s\sim \beta}D_{\mathrm{TV}}(\pi(s),\pi^*(s))\leq \epsilon_b$, then $J(\pi^*)-J(\pi) \leq pH+\delta \epsilon_b H^2$, where $\delta=\mathbb E_{s\sim d^{\pi'}}\mathbb I(Q_h^{\pi^*}(s,\pi^*)-Q_h^{\pi^*}(s,\pi)>p)$. \end{theorem}
\begin{proof} Lemma~\ref{lem_safty} holds under this circumstance because it does not use the condition on $\ell$.
According to the proof of Theorem~\ref{thm_infinite}, we have $$J(\pi^*)-J(\pi)= J(\pi^*)-J(\pi')+\sum_{t=1}^H\mathbb E_{s\sim d^{\pi'}_t}\mathbb I(Q_{t}^{\pi^*}(s,\pi')-Q_{t}^{\pi^*}(s,\pi)>p)[Q_{t}^{\pi}(s,\pi')-Q_{t}^{\pi}(s,\pi)].$$
Thus we only need to bound the term $\sum_{t=1}^H\mathbb E_{s\sim d^{\pi'}_t}\mathbb I(Q_{t}^{\pi^*}(s,\pi')-Q_{t}^{\pi^*}(s,\pi)>p)[Q_{t}^{\pi}(s,\pi')-Q_{t}^{\pi}(s,\pi)]$.
According to Theorem~\ref{thm_pi}, \begin{align*}
&\quad \ \sum_{t=1}^H\mathbb E_{s\sim d^{\pi'}_t}\mathbb I(Q_{t}^{\pi^*}(s,\pi')-Q_{t}^{\pi^*}(s,\pi)>p)[Q_{t}^{\pi}(s,\pi')-Q_{t}^{\pi}(s,\pi)]\\
&\leq \sum_{t=1}^H\mathbb E_{s\sim d^{\pi'}_t}\mathbb I(Q_{t}^{\pi^*}(s,\pi')-Q_{t}^{\pi^*}(s,\pi)>p)D_{\mathrm{TV}}(\pi(s),\pi^*(s))(H-t+1)\\
&\leq \delta\epsilon_b H^2 \end{align*} Then we complete the proof. \end{proof}
\section{Experiments on Approximated $Q^*$}\label{sec_extra} In the theoretical analysis, we use $Q^*$ to derive the bounds. Indeed, the exact expert Q-values are hard to get upfront. However, many existing methods can acquire a Q-function that is close to $Q^*$, including learning from offline datasets~\cite{cql, combo, hve}, using human advice~\cite{PEBBLE, human+}, and computing from rules~\cite{kernel, knowledge}. The cost depends on the choice of the surrogate of $Q^*$.
To demonstrate the feasibility of a surrogate of $Q^*$, we experiment with a Q-function obtained from an offline dataset. The experimental setting is the same as described in Sec.~7.1.1 in the main body. We choose CQL~\cite{cql} to learn a Q-network. The offline dataset is composed of only 5 expert trajectories (totaling 1351 transitions, which is about 20\% of the samples used in the training process and about 7\% of the samples used by DAgger). The results are shown in the following table.
\begin{table}[H] \centering \caption{Experiment on approximated $Q^*$.}\label{tab_approximate}
\begin{tabular}{l|l} \toprule Algorithm & mean (std)\\ \midrule AdapMen ($Q^*$) & 496.6 (13.0) \\ AdapMen (CQL) & 396.7 (17.2) \\ \addlinespace DAgger & 366.2 (7.1) \\ BC & 376.2 (4.8) \\ CQL & 21.6 (21.9) \\ \addlinespace EnsembleDAgger & 197.1 (3.7) \\ ValueDICE & 65.9 (5.1) \\ \bottomrule \end{tabular} \end{table}
The number in parentheses is the standard deviation over five seeds. AdapMen ($Q^*$) uses the ground-truth $Q^*$, while AdapMen (CQL) uses the $Q$ learned by CQL. Using the learned $Q$ causes only a small performance degradation, and AdapMen still outperforms the baselines. CQL directly uses the learned $Q$ to update the policy; its extremely low performance shows the necessity of AdapMen.
\section{Extra Information on Experiments in Atari Games} \begin{table}[H]
\centering
\caption{Atari Expert Performance. Calculated from 100 trajectories.}
\begin{tabular}[b]{|l|l|}
\hline
Task & mean (std) \\ \hline
MsPacman & 1619 (2073) \\ \hline
BeamRider & 1500 (1695) \\ \hline
DemonAttack & 230 (231) \\ \hline
Pong & 5.72 (5.41) \\ \hline
Qbert & 1205 (1934) \\ \hline
Enduro & 180 (149) \\ \hline \end{tabular}
\label{tab:atari performance} \end{table}
\section{Hyper-parameters} \label{sec:parameter}
\begin{table}[h] \caption{Hyperparameters for implemented algorithms in Metadrive} \centering
\begin{tabular}{l|p{3.5cm}|c} \toprule \textbf{Type} & \textbf{Name} & \textbf{Value} \\ \toprule \multirow{6}{*}{\textbf{General}}
& learning rate & 1e-4 \\
& policy net structure & [("mlp", 256), ("mlp", 256)]\\
& batch size & 128 \\
& update learner interval & 200 \\
& number of batches per update& 50 \\
& expert buffer size & 2000\\ \midrule \multirow{1}{*}{\textbf{HG-DAgger} \& \textbf{EnsembleDAgger} }
& ensemble size & 5\\ \midrule \multirow{3}{*}{\textbf{ValueDICE}}
& actor learning rate & 1e-5 \\
& nu net structure & [("mlp", 256), ("mlp", 256)]\\
& nu learning rate & 1e-3 \\
& nu reg coeff & 10 \\
& absorbing per episode & 10 \\
& number of random actions & 1000 \\ \midrule \multirow{1}{*}{\textbf{AdapMen-Q} \& \textbf{AdapMen-Pi}}
& horizon & 100\\ \bottomrule \end{tabular} \label{tab:hyperparameters1} \end{table}
\begin{table}[h] \caption{Hyperparameters for implemented algorithms in Atari} \centering
\begin{tabular}{l|p{3.5cm}|c} \toprule \textbf{Type} & \textbf{Name} & \textbf{Value} \\ \toprule \multirow{6}{*}{\textbf{General}}
& learning rate & 3e-4 \\
& policy net hidden dim & [("conv2d", 16, 8, 4, 0),\\&& ("conv2d", 32, 4, 2, 0),\\&&("flatten",),\\&& ("mlp", 256), ("mlp", 256)]\\
& batch size & 32 \\
& update learner interval & 200 \\
& number of batches per update& 200 \\
& expert buffer size & 50000\\ \midrule \multirow{3}{*}{\textbf{ValueDICE}}
& actor learning rate & 1e-5 \\
& nu net structure & [("conv2d", 16, 8, 4, 0), \\&&("conv2d", 32, 4, 2, 0),\\&&("flatten",), \\&&("mlp", 256), ("mlp", 128)]\\
& nu learning rate & 1e-3 \\
& nu reg coeff & 10 \\
& number of random actions & 2000 \\ \midrule \multirow{1}{*}{\textbf{AdapMen-Q} \& \textbf{AdapMen-Pi}}
& horizon & 100\\ \bottomrule \end{tabular} \label{tab:hyperparameters2} \end{table}
\section{Additional $D_Q$ distribution}\label{sec_dq}
\begin{figure}
\caption{Additional $D_Q$ distributions in Atari games}
\end{figure}
\end{document}
\begin{document}
\maketitle \sloppy
\thispagestyle{empty}
\belowdisplayskip=18pt plus 6pt minus 12pt \abovedisplayskip=18pt plus 6pt minus 12pt \parskip 4pt plus 1pt \parindent 0pt
\newcommand{\barint}{
\rule[.036in]{.12in}{.009in}\kern-.16in
\displaystyle\int } \def{\mathbb{R}}{{\mathbb{R}}} \def{[0,\infty)}{{[0,\infty)}} \def{\mathbb{R}}{{\mathbb{R}}} \def{\mathbb{N}}{{\mathbb{N}}} \def{\mathbf{l}}{{\mathbf{l}}} \def{\bar{u}}{{\bar{u}}} \def{\bar{g}}{{\bar{g}}} \def{\bar{G}}{{\bar{G}}} \def{\bar{a}}{{\bar{a}}} \def{\bar{v}}{{\bar{v}}} \def{\bar{\mu}}{{\bar{\mu}}} \def{\mathbb{R}^{n}}{{\mathbb{R}^{n}}} \def{\mathbb{R}^{N}}{{\mathbb{R}^{N}}}
\newcommand{\snr}[1]{\lvert #1\rvert} \newcommand{\nr}[1]{\lVert #1 \rVert}
\newtheorem{theo}{\bf Theorem} \newtheorem{coro}{\bf Corollary}[section] \newtheorem{lem}[coro]{\bf Lemma} \newtheorem{rem}[coro]{\bf Remark} \newtheorem{defi}[coro]{\bf Definition} \newtheorem{ex}[coro]{\bf Example} \newtheorem{fact}[coro]{\bf Fact} \newtheorem{prop}[coro]{\bf Proposition}
\newcommand{{\rm div}}{{\rm div}} \def\texttt{(a1)}{\texttt{(a1)}} \def\texttt{(a2)}{\texttt{(a2)}} \newcommand{{(M_B^-)}}{{(M_B^-)}} \newcommand{{(M_{B_{r_j}}^-)}}{{(M_{B_{r_j}}^-)}} \newcommand{{a^B_{\rm i}}}{{a^B_{\rm i}}} \newcommand{{\mathcal{ A}}}{{\mathcal{ A}}} \newcommand{\widetilde}{\widetilde} \newcommand{\varepsilon}{\varepsilon} \newcommand{\varphi}{\varphi} \newcommand{\vartheta}{\vartheta} \newcommand{{g_\bullet}}{{g_\bullet}} \newcommand{{(\gb)_n}}{{({g_\bullet})_n}} \newcommand{\varrho}{\varrho} \newcommand{\partial}{\partial} \newcommand{{\mathcal{W}}}{{\mathcal{W}}} \newcommand{{\rm supp}}{{\rm supp}} \newcommand{{\min_{\partial B_k}u}}{{\min_{\partial B_k}u}}
\newcommand{\textit{\texttt{data}}}{\textit{\texttt{data}}}
\parindent 1em
\begin{abstract} We study properties of ${\mathcal{ A}}$-harmonic and ${\mathcal{ A}}$-superharmonic functions involving an operator having generalized Orlicz growth. Our framework embraces reflexive Orlicz spaces, as well as natural variants of variable exponent and double-phase spaces. In particular, Harnack's Principle and Minimum Principle are provided for ${\mathcal{ A}}$-superharmonic functions and boundary Harnack inequality is proven for ${\mathcal{ A}}$-harmonic functions. \end{abstract}
\section{Introduction}
The cornerstone of the classical potential theory is the Dirichlet problem for harmonic functions. The focus of the nonlinear potential theory is similar; however, harmonic functions are replaced by $p$-harmonic functions, that is, continuous solutions to the $p$-Laplace equation $-\Delta_p u=-{\rm div}(|Du|^{p-2}Du)=0$, $1<p<\infty$. There are known attempts to adapt the theory to the case when the exponent varies in space, that is $p=p(x)$ for $x\in\Omega$, or the growth is non-polynomial. Inspired by the significant attention paid lately to problems with strongly nonstandard and non-uniformly elliptic growth, e.g.~\cite{IC-pocket,ChDF,comi,hht,m,r}, we aim at developing the basics of potential theory for problems with an essentially broader class of operators, embracing in one theory, as special cases, the Orlicz, variable exponent, and double-phase generalizations of the $p$-Laplacian. To cover the whole mentioned range of general growth problems we employ the framework described in the monograph~\cite{hahabook}. Let us stress that unlike the classical studies~\cite{hekima,KiMa92} the operator we consider does {\em not} enjoy homogeneity of a~form ${\mathcal{ A}}(x,k\xi)=|k|^{p-2}k{\mathcal{ A}}(x,\xi)$. Consequently, our class of solutions is {\em not} invariant with respect to scalar multiplication. Moreover, we allow for operators whose ellipticity may vary dramatically in the space variable. What is more, we do {\em not} need to assume in the definition of an ${\mathcal{ A}}$-superharmonic function that it is integrable to some positive power, which is typically imposed in the variable exponent case, cf. e.g.~\cite{hhklm,laluto}.
We study fine properties of ${\mathcal{ A}}$-superharmonic functions defined by the Comparison Principle {with respect to} continuous solutions to $-{\rm div}{\mathcal{ A}}(x,Du)=0$. {Here ${\mathcal{ A}}:\Omega\times{\mathbb{R}^{n}}\to{\mathbb{R}^{n}}$ is assumed to have} generalized Orlicz growth expressed by means of an inhomogeneous convex $\Phi$--function $\varphi:\Omega\times{[0,\infty)}\to{[0,\infty)}$ satisfying natural non-degeneracy and balance conditions, see Section~\ref{sec:prelim} for details. In turn, the solutions belong to the Musielak-Orlicz-Sobolev space $W^{1,\varphi(\cdot)}(\Omega)$ described carefully in the monograph~\cite{hahabook}. {The assumptions on the operator are summarized below and will be referred to as \textbf{(A)} throughout the paper}.
\subsection*{Assumption (A)} {We assume that} $\Omega \subset {\mathbb{R}^{n}}$, $n\ge 2$, is an open bounded set. Let a vector field ${\mathcal{ A}}:\Omega\times{\mathbb{R}^{n}}\to{\mathbb{R}^{n}}$ be a Caratheodory's function, that is $x\mapsto {\mathcal{ A}}(x,\cdot)$ is measurable and $z\mapsto {\mathcal{ A}}(\cdot,z)$ is continuous. Assume further that the following growth and coercivity assumptions hold true for almost all $x\in \Omega$ and all $z\in \mathbb{R}^{n}\setminus \{0\}$: \begin{flalign}\label{A} \begin{cases}
\ \snr{{\mathcal{ A}}(x,z)} \le c_1^{\mathcal{ A}}\varphi\left(x,\snr{z}\right)/|z|,\\ \ c_2^{\mathcal{ A}} {\varphi\left(x,\snr{z} \right)} \le {\mathcal{ A}}(x,z)\cdot z
\end{cases} \end{flalign} with absolute constants $c_1^{\mathcal{ A}},c_2^{\mathcal{ A}}>0$ and some function $\varphi:\Omega\times{[0,\infty)}\to{[0,\infty)}$ being measurable with respect to the first variable, convex with respect to the second one and satisfying (A0), (A1), (aInc)$_p$ and (aDec)$_q$ with some $1<p\leq q<\infty$. {The precise statement of these conditions is given in Section \ref{sec:prelim}.} We collect all parameters of the problem as $ \textit{\texttt{data}}=\textit{\texttt{data}}(p,q,c_1^{\mathcal{ A}},c_2^{\mathcal{ A}}). $
Moreover, let ${\mathcal{ A}}$ be monotone in the sense that for a.a. $x\in \Omega$ and any distinct $z_{1},z_{2}\in \mathbb{R}^{n}$ it holds that \begin{flalign*} 0< \,\langle {\mathcal{ A}}(x,z_{1})-{\mathcal{ A}}(x,z_{2}),z_{1}-z_{2}\rangle. \end{flalign*} We shall consider weak solutions, ${\mathcal{ A}}$-supersolutions, ${\mathcal{ A}}$-superharmonic, and ${\mathcal{ A}}$-harmonic functions related to the problem\begin{equation} \label{eq:main}-{\rm div}\, {\mathcal{ A}}(x,Du)= 0 \quad\text{in }\ \Omega. \end{equation} For precise definitions see~Section~\ref{sec:sols}.
\subsection*{Special cases} {Besides the $p$-Laplace operator case, corresponding to the choice of} $\varphi(x,s)=s^{p},$ $1<p<\infty$, we cover by one approach a wide range of more degenerate operators. When we take $\varphi(x,s)=s^{p(x)}$, {with} $p: \Omega \to \mathbb{R}$ {such that} $1<p^{-}_{\Omega} \leq p(x)\leq p^{+}_{\Omega} < \infty$ {and satisfying} $\log$-H\"older condition (a special case of (A1)), {we render the so-called $p(x)$-Laplace equation} \begin{align*} 0=-\Delta_{p(x)} u=-{\rm div}(\snr{Du}^{p(x)-2}Du).
\end{align*}
Within the framework studied in~\cite{comi} solutions to double phase version of the $p$-Laplacian \[0=-{\rm div}\, {\mathcal{ A}}(x,Du)=-{\rm div}\left(\omega(x)\big(\snr{Du}^{p-2}+a(x)\snr{Du}^{q-2}\big)Du\right)\] are analysed with $1<p\leq q<\infty$, possibly vanishing weight $0\leq a\in C^{0,\alpha}(\Omega)$ and $q/p\leq 1+\alpha/n$ (a~special case of (A1); sharp for density of regular functions) and with a bounded, measurable, separated from zero weight $\omega$. We embrace also the borderline case between the double phase space and the variable exponent one, cf.~\cite{bacomi-st}. Namely, we consider solutions to \[0=-{\rm div} {\mathcal{ A}}(x,Du)=-{\rm div}\left(\omega(x)\snr{Du}^{p-2}\big(1+a(x)\log({\rm e}+\snr{Du})\big)Du\right)\] with $1<p<\infty$, log-H\"older continuous $a$ and a bounded, measurable, separated from zero weight $\omega$. Having an $N$-function $B\in\Delta_2\cap\nabla_2$, we can allow for problems with the leading part of the operator with growth driven by $\varphi(x,s)=B(s)$ with an example of \[0=-{\rm div} \, {\mathcal{ A}}(x,Du)=-{\rm div}\left(\omega (x)\tfrac{B(\snr{Du})}{\snr{Du}^2}Du\right)\] with a bounded, measurable, and separated from zero weight $\omega$. To give more new examples one can consider problems stated in weighted Orlicz (if $\varphi(x,s)=a(x)B(s)$), variable exponent double phase (if $\varphi(x,s)=s^{p(x)}+a(x)s^{q(x)}$), or multi phase Orlicz cases (if $\varphi(x,s)=\sum_i a_i(x)B_i(s)$), as long as $\varphi(x,s)$ is comparable to a~function doubling with respect to the second variable and it satisfies the non-degeneracy and no-jump assumptions (A0)-(A1), see Section~\ref{sec:prelim}.
\subsection*{State of art} The key references for already classical nonlinear potential theory are~\cite{adams-hedberg,hekima,KiMa92}, but its foundations date back further to~\cite{HaMa1,HedWol}. A complete overview of the theory for equations with $p$-growth is presented in~\cite{KuMi2014}. The first generalization of~potential theory towards nonstandard growth is done in the weighted case~\cite{Mik,Tur}. So far significant attention was put on the variable exponent case, see e.g.~\cite{Alk,hhklm,hhlt,hklmp,laluto}, and analysis of related problems over metric spaces~\cite{BB}, there are some results obtained in the double-phase case~\cite{fz}, but to our best knowledge the Orlicz case is not yet covered by any comprehensive study stemming from~\cite{lieb,Maly-Orlicz}.
Let us mention the recent advances within the theory. Supersolutions to~\eqref{eq:main} are in fact {solutions to} measure data problems with nonnegative measure, which have lately enjoyed separate interest, cf.~\cite{ACCZG,IC-gradest,IC-measure-data,IC-lower,CGZG,CiMa,KiKuTu,KuMi2014,min-grad-est}, concentrating on their existence and gradient estimates. The generalization of studies on removable sets for H\"older continuous solutions provided by~\cite{kizo} to the case of strongly non-uniformly elliptic operators has been carried out lately in~\cite{ChDF,ChKa}. Various regularity results are available for related quasiminimizers having Orlicz or generalized Orlicz growth~\cite{hh-zaa,hhl,hht,hklmp,ka,kale,Maly-Orlicz}. For other recent developments in the understanding of the functional setting {we refer} also to~\cite{CUH,yags,haju,CGSGWK}.
\subsection*{Applications} This kind of result is useful for obtaining potential estimates for solutions to measure data problems, entailing further regularity properties of their solutions, cf.~\cite{KiMa92,KuMi2014,KuMi2013}. Particularly, the Maximum and Minimum Principles for $p$-harmonic functions together with properties of the Poisson modification of $p$-superharmonic functions are important tools for obtaining Wolff potential estimates via the methods of~\cite{KoKu,tru-wa}. In fact, developing this approach further, {we employ} the results of our paper in the proof of Wolff potential estimates for problems with Orlicz growth~\cite{CGZG-Wolff}. They directly entail many natural and sharp regularity consequences and an Orlicz version of the Hedberg--Wolff Theorem yielding a full characterization of the natural dual space to the space of solutions by means of the Wolff potential (see~\cite{HedWol} for the classical version).
\subsection*{Results and organization} Section~\ref{sec:prelim} is devoted to notation and basic information on the setting. In Section~\ref{sec:sols} we define weak solutions, ${\mathcal{ A}}$-supersolutions, ${\mathcal{ A}}$-harmonic and ${\mathcal{ A}}$-superharmonic functions and provide proofs of their fundamental properties including the Harnack inequality for ${\mathcal{ A}}$-harmonic functions (Theorem~\ref{theo:Har-A-harm}). Further analysis of ${\mathcal{ A}}$-superharmonic functions is carried out in Section~\ref{sec:A-sh}. We prove there Harnack's Principle (Theorem~\ref{theo:harnack-principle}), fundamental properties of Poisson's modification (Theorem~\ref{theo:Pois}), and Strong Minimum Principle (Theorem~\ref{theo:mini-princ}) together with their consequence of the boundary Harnack inequality (Theorem~\ref{theo:boundary-harnack}) for ${\mathcal{ A}}$-harmonic functions.
\section{Preliminaries}\label{sec:prelim}
\subsection{Notation} In the following we shall adopt the customary convention of denoting by $c$ a constant that may vary from line to line. Sometimes to skip rewriting a constant, we use $\lesssim$. By $a\simeq b$, we mean $a\lesssim b$ and $b\lesssim a$. By $B_R$ we shall denote a ball usually skipping prescribing its center, when it is not important. Then by $cB_R=B_{cR}$ we mean a ball with the same center as $B_R$, but with rescaled radius $cR$.
With $U\subset \mathbb{R}^{n}$ being a~measurable set with finite and positive measure $\snr{U}>0$, and with $f\colon U\to \mathbb{R}^{k}$, $k\ge 1$ being a measurable map, by \begin{flalign*} \barint_{U}f(x) \, dx =\frac{1}{\snr{U}}\int_{U}f(x) \,dx \end{flalign*}
we mean the integral average of $f$ over $U$. We make use of symmetric truncation on level $k>0$, $T_k:{\mathbb{R}}\to{\mathbb{R}}$, defined as follows \begin{equation*}T_k(s)=\left\{\begin{array}{ll}s & |s|\leq k,\\ k\frac{s}{|s|}& |s|\geq k. \end{array}\right. \label{Tk}\end{equation*}
\subsection{Generalized Orlicz functions} We employ the formalism introduced in the monograph~\cite{hahabook}. Let us present the framework.
For $L\geq 1$, a real-valued function $f$ is called $L$-almost increasing if $Lf(s) \geq f(t)$ for $s > t$; $f$ is called $L$-almost decreasing if $Lf(s) \leq f(t)$ for $s > t$.
\begin{defi} We say that $\varphi:\Omega\times{[0,\infty)}\to[0,\infty]$ is a convex $\Phi$--function, and write $\varphi\in\Phi_c(\Omega)$, if the following conditions hold: \begin{itemize} \item[(i)] For every $s\in{[0,\infty)}$ the function $x\mapsto\varphi(x, s)$ is measurable and for a.e. $x\in\Omega$ the function $s\mapsto\varphi(x, s)$ is increasing, convex, and left-continuous. \item[(ii)] $\varphi(x, 0) = \lim_{s\to 0^+} \varphi(x, s) = 0$ and $\lim_{s\to \infty} \varphi(x, s) = \infty$ for a.e. $x\in\Omega$.
\end{itemize} \end{defi} \noindent Further, we say that $\varphi\in\Phi_c(\Omega)$ satisfies\begin{itemize} \item[(aInc)$_p$] if there exist $L\geq 1$ and $p >1$ such that $s\mapsto\varphi(x, s)/s^p$ is $L$-almost increasing in ${[0,\infty)}$ for every $x\in\Omega$, \item[(aDec)$_q$] if there exist $L\geq 1$ and $q >1$ such that $s\mapsto\varphi(x, s)/s^q$ is $L$-almost decreasing in ${[0,\infty)}$ for every $x\in\Omega$. \end{itemize} \noindent By $\varphi^{-1}$ we denote the inverse of a convex $\Phi$-function $\varphi$ {with respect to the second variable}, that is \[ \varphi^{-1}(x,\tau) := \inf\{s \ge 0 \,:\, \varphi(x,s)\ge \tau\}. \] We shall consider those $\varphi\in\Phi_c(\Omega)$, which satisfy the following set of conditions. \begin{itemize} \item[(A0)] There exists $\beta_0\in (0, 1]$ such that $\varphi(x, \beta_0) \leq 1$ and $\varphi(x, 1/\beta_0) \geq 1$ for all $x\in\Omega$.
\item[(A1)] There exists $\beta_1\in(0,1)$, such that for every ball $B$ with $|B|\leq 1$ it holds that
\[\beta_1\varphi^{-1}(x,s)\leq\varphi^{-1}(y,s)\quad\text{for every $s\in [1,1/|B|]$ and a.e. $x,y\in B\cap\Omega$}.\] \item[(A2)] For every {$s>0$} there exist $\beta_2\in(0,1]$ and $h\in L^1(\Omega)\cap L^\infty(\Omega)$, such that \[\varphi(x,\beta_2 r)\leq\varphi(y,r) +h(x)+h(y)\quad\text{for a.e. $x,y\in \Omega$ whenever $\varphi(y,r)\in[0,s]$}.\] \end{itemize} Condition (A0) is imposed in order to exclude degeneracy, while (A1) can be interpreted as local continuity. Fundamental role is played also by (A2) which imposes balance of the growth of $\varphi$ with respect to its variables separately.
\emph{The Young conjugate} of $\varphi\in\Phi_c(\Omega)$ is the function $\widetilde\varphi:\Omega\times{[0,\infty)}\to[0,\infty]$ defined as $ \widetilde \varphi (x,s) = \sup\{r \cdot s - \varphi(x,r):\ r \in {[0,\infty)}\}.$ Note that Young conjugation is involute, i.e. $\widetilde{(\widetilde\varphi)}=\varphi$. Moreover, if $\varphi\in\Phi_c(\Omega)$, then $\widetilde\varphi\in\Phi_c(\Omega)$. For $\varphi\in\Phi_c(\Omega)$, the following inequality of Fenchel--Young type holds true $$ rs\leq \varphi(x,r)+\widetilde\varphi(x,s).$$
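To illustrate, for the power function $\varphi(x,s)=s^p/p$ with $1<p<\infty$ one computes $\widetilde\varphi(x,s)=s^{p'}/p'$, where $1/p+1/p'=1$, and the Fenchel--Young inequality reduces to the classical Young inequality \[ rs\leq \frac{r^p}{p}+\frac{s^{p'}}{p'}. \]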
We say that a function $\varphi$ satisfies $\Delta_2$-condition (and write $\varphi\in\Delta_2$) if there exists a~constant $c>0$, such that for every $s\geq 0$ it holds $\varphi(x,2s)\leq c(\varphi(x,s)+1)$. If $\widetilde\varphi\in\Delta_2,$ we say that $\varphi$ satisfies $\nabla_2$-condition and denote it by $\varphi\in\nabla_2$. If $\varphi,\widetilde\varphi\in\Delta_2$, then we call $\varphi$ a doubling function. If $\varphi\in \Phi_c(\Omega)$ satisfies {\rm (aInc)$_p$} and {\rm (aDec)$_q$}, then $\varphi\simeq\psi_1$ with some $\psi_1\in \Phi_c(\Omega)$ satisfying $\Delta_2$-condition and $\widetilde\varphi\simeq \widetilde\psi_2$ with some $\widetilde\psi_2\in \Phi_c(\Omega)$ satisfying $\Delta_2$-condition, so we can assume that functions within our framework are doubling. Note that also $\psi_1\simeq\widetilde\psi_2$.
In fact, within our framework \begin{equation} \label{doubl-star}\widetilde\varphi\left(x, {\varphi(x,s)}/{s}\right)\sim \varphi(x,s) \quad\text{for a.e. }\ x\in\Omega\ \text{ and all }\ s>0 \end{equation} for some constants depending only on $p$ and $q$.
\subsection{Function spaces}\label{ssec:spaces}
For a comprehensive study of these spaces we refer to \cite{hahabook}. We always deal with spaces generated by $\varphi\in\Phi_c(\Omega)$ satisfying (aInc)$_p$, (aDec)$_q$, (A0), (A1), and (A2). For $f\in L^0(\Omega)$ we define {\em the modular} $\varrho_{\varphi(\cdot),\Omega}$ by \begin{equation}
\label{modular}
\varrho_{\varphi(\cdot),\Omega} (f)=\int_\Omega\varphi (x, | f(x)|) dx. \end{equation} When it is clear from the context we omit assigning the domain.
\noindent {\em The Musielak--Orlicz space} is defined as the set \[L^{\varphi(\cdot)} (\Omega)= \{f \in L^0(\Omega):\ \ \lim_{\lambda\to 0^+}\varrho_{\varphi(\cdot),\Omega}(\lambda f) = 0\}\] endowed with the Luxemburg norm
\[\|f\|_{\varphi(\cdot)}=\inf \left\{\lambda > 0 :\ \ \varrho_{\varphi(\cdot),\Omega} \left(\tfrac 1\lambda f\right) \leq 1\right\} .\] For $\varphi\in\Phi_c(\Omega)$, the space $L^{\varphi(\cdot)}(\Omega)$ is a Banach space~\cite[Theorem~2.3.13]{hahabook}. Moreover, the following H\"older inequality holds true\begin{equation}
\label{in:Hold}\|fg\|_{L^1(\Omega)}\leq 2\|f\|_{L^{\varphi(\cdot)}(\Omega)}\|g\|_{L^{\widetilde\varphi(\cdot)}(\Omega)}. \end{equation}
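As a sanity check of the notation (an illustration added here; the general theory is in \cite{hahabook}), for $\varphi(x,s)=s^{p}$ the modular is $\varrho_{\varphi(\cdot),\Omega}(f)=\|f\|^p_{L^p(\Omega)}$ and
\[
\varrho_{\varphi(\cdot),\Omega}\big(\tfrac 1\lambda f\big)\leq 1\iff \lambda\geq \|f\|_{L^p(\Omega)},\qquad\text{so}\qquad \|f\|_{\varphi(\cdot)}=\|f\|_{L^p(\Omega)},
\]
i.e. the Luxemburg norm recovers the usual Lebesgue norm, while~\eqref{in:Hold} reduces to the classical H\"older inequality up to a multiplicative constant.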
We define {\em the Musielak-Orlicz-Sobolev space} $W^{1,\varphi(\cdot)}(\Omega)$ as follows \begin{equation*}
W^{1,\varphi(\cdot)}(\Omega)=\big\{f\in W^{1,1}_{loc}(\Omega):\ \ f,|D f|\in L^{\varphi(\cdot)}(\Omega)\big\}, \end{equation*}where $D$ stands for distributional derivative. The space is considered endowed with the norm \[
\|f\|_{W^{1,\varphi(\cdot)}(\Omega)}=\inf\big\{\lambda>0 :\ \ \varrho_{\varphi(\cdot),\Omega} \left(\tfrac 1\lambda f\right)+ \varrho_{\varphi(\cdot),\Omega} \left(\tfrac 1\lambda Df\right)\leq 1\big\}\,. \] By $W_0^{1,\varphi(\cdot)}(\Omega)$ we denote a closure of $C_0^\infty(\Omega)$ under the above norm.
Because of the growth conditions $W^{1,\varphi(\cdot)}(\Omega)$ is a separable and reflexive space. Moreover, smooth functions are dense there.
\begin{rem}\cite{hahabook} If $\varphi\in \Phi_c(\Omega)$ satisfies {\rm (aInc)$_p$}, {\rm (aDec)$_q$}, (A0), (A1), (A2), {then} the strong (norm) topology of $W^{1,\varphi(\cdot)}(\Omega)$ coincides with the sequential modular topology. Moreover, smooth functions are dense in this space in both topologies. \end{rem}
Note that as a consequence of \cite[Lemma~2.1]{bbggpv} for every function $u$, such that $T_k(u)\in W^{1,\varphi(\cdot)}(\Omega)$ for every $k>0$ (with $T_k$ given by~\eqref{Tk}) there exists a (unique) measurable function $Z_u : \Omega \to {\mathbb{R}^{n}}$ such that \begin{equation}\label{gengrad}
D T_k(u) = \chi_{\{|u|<k\}} Z_u\quad \hbox{a.e. in $\Omega$ and for every $k > 0$.} \end{equation} With an abuse of~notation, we denote $Z_u$ simply by $D u$ and call it a {\it generalized gradient}.
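To see that the generalized gradient can genuinely differ from a distributional one, consider the standard example (with parameters chosen by us purely for illustration) $u(x)=|x|^{-\gamma}$ on the unit ball $B_1\subset{\mathbb{R}^{n}}$, $n\geq 2$, with $n-1\leq\gamma<n$. Every truncation $T_k(u)$ is bounded and Lipschitz, hence belongs to $W^{1,\varphi(\cdot)}(B_1)$ under the standing assumptions, and
\[
Z_u(x)=-\gamma |x|^{-\gamma-2}x,\qquad |Z_u(x)|=\gamma|x|^{-\gamma-1}\notin L^1(B_1),
\]
so $u\notin W^{1,1}_{loc}(B_1)$, while $Du:=Z_u$ is still well defined in the generalized sense of~\eqref{gengrad}.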
{\subsection{The operator} Let us verify that the growth and coercivity conditions from~\eqref{A} ensure that the operator involved in problem~\eqref{eq:main} is properly defined. We notice that in our regime the operator $\mathfrak{A}_{\varphi(\cdot)}$ defined as $$ \mathfrak{A}_{\varphi(\cdot)} v := {\mathcal{ A}}(x,Dv) $$ is well defined as $\ \mathfrak{A}_{\varphi(\cdot)} : W^{1,\varphi(\cdot)}_0(\Omega) \to (W^{1,\varphi(\cdot)}_0(\Omega))'\ $ via \begin{flalign*} \langle\mathfrak{A}_{\varphi(\cdot)}v,w\rangle:=\int_{\Omega}{\mathcal{ A}}(x,Dv)\cdot Dw \,dx\quad \text{for}\quad w\in C^{\infty}_{0}(\Omega), \end{flalign*} where $\langle \cdot, \cdot \rangle$ denotes the dual pairing between the reflexive Banach spaces $W^{1,\varphi(\cdot)}_0(\Omega)$ and $(W^{1,\varphi(\cdot)}_0(\Omega))'$. Indeed, when $v\in W^{1,\varphi(\cdot)}(\Omega)$ and $w\in C_0^\infty(\Omega)$, the growth conditions~\eqref{A}, H\"older's inequality~\eqref{in:Hold}, the equivalence~\eqref{doubl-star}, and the Poincar\'e inequality~\cite[Theorem~6.2.8]{hahabook} justify that \begin{flalign}
\nonumber\snr{\langle \mathfrak{A}_{\varphi(\cdot)}v,w \rangle}\le &\, c\int_{\Omega}\frac{\varphi(x,\snr{Dv})}{\snr{Dv}}\snr{Dw} \ dx \le c\left \| \frac{\varphi(\cdot,\snr{Dv})}{\snr{Dv}}\right \|_{L^{\widetilde \varphi(\cdot)}(\Omega)}\nr{Dw}_{L^{\varphi(\cdot)}(\Omega)}\nonumber \\ \le &\, c\nr{Dv}_{L^{ \varphi(\cdot)}(\Omega)}\nr{Dw}_{L^{\varphi(\cdot)}(\Omega)}\le c\nr{w}_{W^{1,\varphi(\cdot)}(\Omega)}.\label{op} \end{flalign} By density argument, the operator is well-defined on $W^{1,\varphi(\cdot)}_0(\Omega)$.}
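A model example worth keeping in mind (added here for illustration, and assuming that \eqref{A} consists of the growth and coercivity bounds $|{\mathcal{ A}}(x,\xi)|\leq c_1^{\mathcal{ A}}\varphi(x,|\xi|)/|\xi|$ and ${\mathcal{ A}}(x,\xi)\cdot\xi\geq c_2^{\mathcal{ A}}\varphi(x,|\xi|)$ used in the estimates above and below) is the $\varphi(\cdot)$-Laplace operator
\[
{\mathcal{ A}}(x,\xi)=\frac{\varphi(x,|\xi|)}{|\xi|^{2}}\,\xi\ \ \text{for}\ \xi\neq 0,\qquad {\mathcal{ A}}(x,0)=0,
\]
for which ${\mathcal{ A}}(x,\xi)\cdot\xi=\varphi(x,|\xi|)$ and $|{\mathcal{ A}}(x,\xi)|=\varphi(x,|\xi|)/|\xi|$, so both bounds hold with constants equal to one. For $\varphi(x,s)=s^{p}$ this is the classical $p$-Laplace operator ${\mathcal{ A}}(x,\xi)=|\xi|^{p-2}\xi$.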
\section{Various types of solutions and the notion of ${\mathcal{ A}}$-harmonicity} \label{sec:sols} All the problems are considered under Assumption {\bf (A)}.
\subsection{Definitions and basic remarks} $\ $
A \underline{continuous} function $u\in W^{1,\varphi(\cdot)}_{loc}(\Omega)$ is {called} an {\em ${\mathcal{ A}}$-harmonic function} in an open set $\Omega$ if it is a (weak) solution to the equation $-{\rm div}{\mathcal{ A}}(x,Du)= 0$, {i.e., \begin{equation} \label{eq:main:0} \int_\Omega {\mathcal{ A}}(x,Du)\cdot D\phi\,dx= 0\quad\text{for all }\ \phi\in C^\infty_0(\Omega). \end{equation} } Existence and uniqueness of ${\mathcal{ A}}$-harmonic functions is proven in \cite{ChKa}. \begin{prop} \label{prop:ex-Ahf} Under {\rm Assumption {\bf (A)}} if $\Omega$ is bounded and $w\in W^{1,\varphi(\cdot)}(\Omega)\cap C(\Omega)$, then there exists a unique solution $u\in W^{1,\varphi(\cdot)}(\Omega)\cap C(\Omega)$ to problem \begin{equation*} \begin{cases}-{\rm div}\, {\mathcal{ A}}(x,Du)= 0\quad\text{in }\ \Omega,\\ u-w\in W_0^{1,\varphi(\cdot)}(\Omega).\end{cases} \end{equation*}
Moreover, $u$ is locally bounded and for every $E\Subset\Omega$ we have \[\|u\|_{L^\infty(E)}\leq c(\textit{\texttt{data}}, \|Du\|_{ W^{1,\varphi(\cdot)}(\Omega)}).\] \end{prop}{}
We call a function $u\in W^{1,\varphi(\cdot)}_{loc}(\Omega)$ a (weak) {\em ${\mathcal{ A}}$-supersolution} to~\eqref{eq:main:0} if~$-{\rm div}{\mathcal{ A}}(x,Du)\geq 0$ weakly in $\Omega$, that is \begin{equation*}
\int_\Omega {\mathcal{ A}}(x,Du)\cdot D\phi\,dx\geq 0\quad\text{for all }\ 0\leq\phi\in C^\infty_0(\Omega) \end{equation*} and a (weak) {\em ${\mathcal{ A}}$-subsolution} if $-{\rm div}{\mathcal{ A}}(x,Du)\leq 0$ weakly in $\Omega$, that is \begin{equation*}
\int_\Omega {\mathcal{ A}}(x,Du)\cdot D\phi\,dx\leq 0\quad\text{for all }\ 0\leq\phi\in C^\infty_0(\Omega). \end{equation*} By density of smooth functions we can use actually test functions from $W^{1,\varphi(\cdot)}_0(\Omega)$.
The classes of {\em ${\mathcal{ A}}$-superharmonic} and {\em ${\mathcal{ A}}$-subharmonic} are defined by the Comparison Principle. { \begin{defi}\label{def:A-sh} We say that function $u$ is ${\mathcal{ A}}$-superharmonic if \begin{itemize} \item[(i)] $u$ is lower semicontinuous; \item[(ii)] $u \not\equiv \infty$ in any component of $\Omega$; \item[(iii)] for any $K\Subset\Omega$ and any ${\mathcal{ A}}$-harmonic $h\in C(\overline {K})$ in $K$, $u\geq h$ on $\partial K$ implies $u\geq h$ in $K$. \end{itemize} We say that an {\color{black} upper} semicontinuous function $u$ is ${\mathcal{ A}}$-subharmonic if $(-u)$ is ${\mathcal{ A}}$-superharmonic. \end{defi} }
The above definitions have the following direct consequences.
\begin{lem}\label{lem:A-arm-loc-bdd-below} An ${\mathcal{ A}}$-superharmonic function $u$ is locally bounded from below.\\ An ${\mathcal{ A}}$-subharmonic function $u$ is locally bounded from above. \end{lem} \begin{lem}\label{lem:A-h-is-great} If $u$ is ${\mathcal{ A}}$-harmonic, then it is an ${\mathcal{ A}}$-supersolution, an ${\mathcal{ A}}$-subsolution, ${\mathcal{ A}}$-superharmonic, and ${\mathcal{ A}}$-subharmonic. \end{lem} {By a minor modification of the proof of \cite[Lemma 4.3]{ka} we get the following fact. \begin{lem}\label{lem:comp-princ} Let $u\in W^{1,\varphi(\cdot)}(\Omega)$ be an ${\mathcal{ A}}$-supersolution to \eqref{eq:main:0}, and $v\in W^{1,\varphi(\cdot)}(\Omega)$ be an ${\mathcal{ A}}$-subsolution to \eqref{eq:main:0}. If $\min\{u-v,0\} \in W^{1,\varphi(\cdot)}_0(\Omega)$, then $u \geq {v}$ a.e. in $\Omega$. \end{lem} We have the following estimate for ${\mathcal{ A}}$-supersolutions. \begin{lem}[Lemma~5.1,~\cite{ChKa}]\label{lem:A-supers-cacc} Let $u\in W^{1,\varphi(\cdot)}(\Omega)$ be a nonnegative ${\mathcal{ A}}$-supersolution, $B\Subset\Omega,$ and let $\eta\in C^{1}_{0}(B)$ be such that $0\leq \eta\leq 1$. Then for all $\gamma\in(1,p)$ there holds \begin{flalign*}
\int_{B }u^{-\gamma}\eta^{q}\varphi(x,\snr{D u}) \, dx\le c\int_{B }u^{-\gamma}\varphi(x,\snr{D\eta}u) \, dx\quad\text{
with $\ \ c=c(\textit{\texttt{data}},\gamma)$.}
\end{flalign*}
\end{lem}}
It is well known that solutions, subsolutions, and supersolutions can be described by the theory of quasiminimizers. Since many of the results on quasiminimizers from~\cite{hh-zaa} apply to our ${\mathcal{ A}}$-harmonic functions, we shall recall the definition.
Among all functions having the same `boundary datum' $w\in W^{1,\varphi(\cdot)}(\Omega)$ the function $u\in W^{1,\varphi(\cdot)}(\Omega)$ is a {\em quasiminimizer} if it has the least energy up to a factor $C$, that is if $(u-w)\in W_0^{1,\varphi(\cdot)}(\Omega)$ and \begin{equation}\label{def-quasiminimizer}
\int_\Omega \varphi(x,|D u|)\,dx\leq C\int_\Omega \varphi(x, |D(u+v)|)\,dx \end{equation} holds true with an absolute constant $C>0$ for every $v\in W_0^{1,\varphi(\cdot)}(\Omega)$. We call a~function $u$ {\em superquasiminimizer} ({\em subquasiminimizer}) if~\eqref{def-quasiminimizer} holds for all $v$ as above that are additionally nonnegative (nonpositive).
\begin{lem}\label{lem:Ah-is-quasi} An ${\mathcal{ A}}$-harmonic function $u$ is a quasiminimizer. \end{lem}{}
\begin{proof} Let us take an arbitrary $v\in W_0^{1,\varphi(\cdot)}(\Omega)$. We may write $v = w + \tilde{v} - u$ with `boundary datum' $w$ and any $\tilde{v} \in W_0^{1,\varphi(\cdot)}(\Omega)$, and upon testing the equation \eqref{eq:main} with $v$ we obtain \[ \int_\Omega {\mathcal{ A}}(x,D u)\cdot Du\,dx=\int_\Omega {\mathcal{ A}}(x,D u)\cdot D(w+\tilde{v})\,dx.\] Then by coercivity of ${\mathcal{ A}}$, Young's inequality, growth of ${\mathcal{ A}}$ and doubling growth of~$\varphi$, for every $\varepsilon>0$ we have \begin{flalign*}
c_2^{\mathcal{ A}} \int_\Omega \varphi(x,|D u|)\,dx&\leq \int_\Omega {\mathcal{ A}}(x,D u)\cdot Du\,dx=\int_\Omega {\mathcal{ A}}(x,D u)\cdot D(w+\tilde{v})\,dx\\
&\leq \varepsilon \int_\Omega \widetilde\varphi(x,|{\mathcal{ A}}(x,D u)|)\,dx+c(\varepsilon)\int_\Omega \varphi(x, |D(w+\tilde{v})|)\,dx\\
&\leq \varepsilon \int_\Omega \widetilde\varphi(x,c_1^{\mathcal{ A}}\varphi(x,|D u|)/|Du|)\,dx+c(\varepsilon)\int_\Omega \varphi(x, |D(w+\tilde{v})|)\,dx\\
&\leq \varepsilon \bar c \int_\Omega \varphi(x,|D u|)\,dx+ c(\varepsilon)\int_\Omega \varphi(x, |D(w+\tilde{v})|)\,dx \end{flalign*} with $\bar c=\bar c(\textit{\texttt{data}})>0.$ Let us choose $\varepsilon>0$ small enough that the first term on the right-hand side can be absorbed into the left-hand side. By rearranging terms and using the fact that $u+v=w+\tilde{v}$ we get that \[
\int_\Omega \varphi(x,|D u|)\,dx\leq C\int_\Omega \varphi(x, |D(u+v)|)\,dx\quad \text{ with $\ \ C=C(\textit{\texttt{data}})>0$}. \] Hence we get the claim. \end{proof}
\noindent By the same calculations as in the above proof we have the following corollary. \begin{coro} If $u$ is an ${\mathcal{ A}}$-supersolution, then $u$ is a superquasiminimizer, i.e.~\eqref{def-quasiminimizer} holds for all nonnegative $v\in W_0^{1,\varphi(\cdot)}(\Omega)$. \end{coro}
\subsection{Obstacle problem }
We consider the set \begin{flalign}\label{con} \mathcal{K}_{\psi,w}(\Omega):=\left\{ v\in W^{1,\varphi(\cdot)}(\Omega)\colon \ v\ge \psi \ \ \mbox{a.e. in} \ \Omega \ \ \mbox{and} \ \ v-{w}\in W^{1,\varphi(\cdot)}_{0}(\Omega) \right\}, \end{flalign} where we call $\psi:\Omega\to\overline{\mathbb{R}}$ the obstacle and $w \in W^{1,\varphi(\cdot)}(\Omega)$ the boundary datum. If $ \mathcal{K}_{\psi,w}(\Omega)\neq\emptyset$, by a~solution to the obstacle problem we mean a function $u\in \mathcal{K}_{\psi,w}(\Omega)$ satisfying \begin{flalign}\label{obs} \int_{\Omega}{\mathcal{ A}}(x,Du)\cdot D(v-u) \ dx \ge 0 \quad \mbox{for all } \ v\in \mathcal{K}_{\psi,w}(\Omega). \end{flalign} We note that basic information on the existence, the uniqueness, and the Comparison Principle for the obstacle problem is provided in \cite{kale} and~\cite[Section~4]{ChKa}. \begin{prop}[Theorem 2, \cite{ChKa}]\label{prop:obst-ex-cont} Under~{\rm Assumption {\bf (A)}} let the obstacle $\psi\in W^{1,\varphi(\cdot)}(\Omega)\cup\{-\infty\}$ and the boundary datum $w\in W^{1,\varphi(\cdot)}(\Omega)$ be such that $\mathcal{K}_{\psi,w}(\Omega)\not =\emptyset$. Then there exists a function $u\in \mathcal{K}_{\psi,w}(\Omega)$ being a unique solution to the $\mathcal{K}_{\psi,w}(\Omega)$-obstacle problem \eqref{obs}. Moreover, if $\psi\in W^{1,\varphi(\cdot)}(\Omega)\cap C(\Omega)$, then $u$ is continuous and is ${\mathcal{ A}}$-harmonic in the open set $\{x\in \Omega\colon u(x)>\psi(x)\}$. \end{prop} \noindent For more properties of solutions to related obstacle problems see also~\cite{BCP,Obs1,ChDF,Obs2,hhklm,ka}. In particular, in~\cite{ka} several basic properties of quasiminimizers to the related variational obstacle problem are proven.
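Let us also record a simple consistency check (an observation we add for the reader's convenience): when $\psi\equiv-\infty$ the constraint in~\eqref{con} is void, so for every $\phi\in C_0^\infty(\Omega)$ both $v=u+\phi$ and $v=u-\phi$ are admissible in~\eqref{obs}, which yields
\[
\pm\int_{\Omega}{\mathcal{ A}}(x,Du)\cdot D\phi \,dx\geq 0,\qquad\text{hence}\qquad \int_{\Omega}{\mathcal{ A}}(x,Du)\cdot D\phi \,dx= 0,
\]
i.e. the solution to the unconstrained obstacle problem is simply a weak solution to~\eqref{eq:main:0} with the boundary datum $w$.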
\begin{prop}[Proposition 4.3, \cite{ChKa}] \label{prop:cacc} Let $B_r \Subset B_R \subset \Omega$. Under assumptions of Proposition~\ref{prop:obst-ex-cont}, \begin{enumerate} \item if $u$ is a solution to the $\mathcal{K}_{\psi,w}(\Omega)$-obstacle problem \eqref{obs}, then there exists $c=c(\textit{\texttt{data}},n)$, such that \begin{align*}
\int_{ B_R} \varphi(x, |D(u-k)_+ |) \, dx \leq c \int_{ B_R} \varphi\left (x,\dfrac{(u-k)_+}{R-r}\right ) \, dx,\ \text{ where $k \geq \sup_{x \in B_R} \psi(x)$.} \end{align*}
\item if $u$ is an ${\mathcal{ A}}$-supersolution to \eqref{eq:main:0} in $\Omega$, then there exists $c=c(\textit{\texttt{data}},n)$, such that \begin{align*}
\int_{B_R} \varphi(x, |Du_{-}|) \, dx \leq c \int_{ B_R} \varphi\left (x,\dfrac{|u_{-}|}{R}\right ) \, dx. \end{align*} \end{enumerate} \end{prop} Note that in fact in~\cite[Proposition~4.3]{ka} only {\it (1)} is proven in detail, but {\it (2)} follows by the same arguments.
\section{${\mathcal{ A}}$-superharmonic functions}\label{sec:A-sh} \subsection{Basic observations}
\begin{prop}[Comparison Principle]\label{prop:comp-princ} Suppose $u$ is ${\mathcal{ A}}$-superharmonic and $v$ is ${\mathcal{ A}}$-subharmonic in $\Omega$. If $\limsup_{y\to x} v(y)\leq\liminf_{y\to x} u(y)$ for all $x\in\partial \Omega$ (excluding the cases $-\infty\leq-\infty$ and $\infty\leq \infty$), then $v\leq u$ in $\Omega$. \end{prop} \begin{proof} When we fix $x\in\Omega$ and $\varepsilon>0$, by the assumption we can find a regular open set $D\Subset\Omega$ containing $x$ and such that $v<u+\varepsilon$ on $\partial D.$ Pick a decreasing sequence $\{\phi_k\}\subset C^\infty(\Omega)$ converging to $v$ pointwise in $\overline{D}$. Since $\partial D$ is compact, by lower semicontinuity of $(u+\varepsilon)$ we infer that $\phi_k\leq u+\varepsilon$ on $\partial D$ for some $k$. We take a function $h$ being ${\mathcal{ A}}$-harmonic in $D$ coinciding with $\phi_k$ on $\partial D$. By definition it is continuous up to the boundary of $D$. Therefore, $v\leq h\leq u+\varepsilon$ on $\partial D$ and so $v\leq h\leq u+\varepsilon$ in $D$ as well. We get the claim by letting $\varepsilon\to 0$. \end{proof}{}
\begin{coro}\label{coro:min-A-super}Having the Comparison Principle one can deduce what follows. \begin{itemize} \item[(i)] If $a_1,a_2\in{\mathbb{R}}$, $a_1\geq 0$, and $u$ is ${\mathcal{ A}}$-superharmonic in $\Omega,$ then so is $a_1u+a_2$.
\item[(ii)] If $u$ and $v$ are ${\mathcal{ A}}$-superharmonic in $\Omega,$ then so is $\min\{u,v\}.$
\item[(iii)] Suppose $u$ is not identically $\infty$, then $u$ is ${\mathcal{ A}}$-superharmonic in $\Omega$ if and only if $\min\{u,k\}$ is ${\mathcal{ A}}$-superharmonic in $\Omega$ for every $k=1,2,\dots$.
\item[(iv)] The function $u$ is ${\mathcal{ A}}$-superharmonic in $\Omega$, if it is ${\mathcal{ A}}$-superharmonic in every component of $\Omega.$
\item[(v)] If $u$ is ${\mathcal{ A}}$-superharmonic and finite a.e. in $\Omega$ and $E\subset\Omega$ is a nonempty open subset, then $u$ is ${\mathcal{ A}}$-superharmonic in $E$. \end{itemize}{} \end{coro}
\begin{lem}\label{lem:pasting} Suppose $D\subset\Omega$, $u$ is ${\mathcal{ A}}$-superharmonic in $\Omega$, and $v$ is ${\mathcal{ A}}$-superharmonic in $D$. If the function \[w=\begin{cases} \min\{u,v\}\quad&\text{in }\ D,\\ u\quad&\text{in }\ \Omega\setminus D \end{cases}{}\] is lower semicontinuous, then it is ${\mathcal{ A}}$-superharmonic in $\Omega$. \end{lem} \begin{proof}Let $E\Subset\Omega$ be open and let $h\in C(\overline{E})$ be ${\mathcal{ A}}$-harmonic in $E$ and such that $h\leq w$ on $\partial E.$ Since $w\leq u$, by the Comparison Principle of Proposition~\ref{prop:comp-princ} we infer that $h\leq u$ in $\overline{E}$. Since $w$ is lower semicontinuous, for every $x\in\partial D\cap E$ it holds that \[ \lim_{\substack{y\in D\cap\Omega \\ y\to x}}h(y)\leq u(x)=w(x)\leq\liminf_{\substack{y\in D\cap\Omega \\ y\to x}} v(y). \] Consequently, for every $x\in\partial (D\cap E)$ one has \[ \lim_{\substack{y\in D\cap\Omega \\ y\to x}}h(y)\leq w(x)\leq\liminf_{\substack{y\in D\cap\Omega \\ y\to x}}w(y).\] By the Comparison Principle of Proposition~\ref{prop:comp-princ} also $h\leq w$ in $D\cap E$. Then $h\leq w$ in $E$, as was to be proved. \end{proof}{}{}
\begin{lem}\label{lem:cont-supersol-are-superharm} If $u$ is a continuous ${\mathcal{ A}}$-supersolution, then it is ${\mathcal{ A}}$-superharmonic. \end{lem} \begin{proof} Since $u$ is continuous and finite a.e. (because it belongs to $W^{1,\varphi(\cdot)}_{loc}(\Omega)$), we only have to prove that the Comparison Principle from the definition of ${\mathcal{ A}}$-superharmonicity holds.
Let $G \Subset \Omega$ be an open set, and let $h$ be a continuous, ${\mathcal{ A}}$-harmonic function in $G$, such that $h \leq u$ on $\partial G$. Fix $\epsilon >0$ and choose an open set $E \Subset G$ such that $u + \epsilon \geq h$ in $G \setminus E$. Since the function $\min\{u+\epsilon-h,0\}$ has compact support, it belongs to $W^{1,\varphi(\cdot)}_0(E)$. Hence Lemma \ref{lem:comp-princ} implies $u+\epsilon \geq h$ in $E$, and therefore a.e. in $G$. Since the function is continuous, the inequality is true at every point of $G$. As $\epsilon$ was chosen arbitrarily, the claim follows.\end{proof}
We shall prove that ${\mathcal{ A}}$-superharmonic functions can be approximated from below by ${\mathcal{ A}}$-supersolutions.
\begin{prop} \label{prop:from-below} Let $u$ be ${\mathcal{ A}}$-superharmonic in $\Omega$ and let $G\Subset\Omega$. Then there exists a nondecreasing sequence of continuous ${\mathcal{ A}}$-supersolutions $\{u_j\}$ in $G$ such that $u=\lim _{j\to\infty}u_j$ pointwise in $G$. For nonnegative $u$, the approximating functions $u_j$ can be chosen nonnegative as well. \end{prop} \begin{proof} Since $u$ is lower semicontinuous in $\overline{G}$, it is bounded from below and there exists a nondecreasing sequence $\{\phi_j\}$ of Lipschitz functions on $\overline{G}$ such that $u=\lim _{j\to\infty}\phi_j$ in $G$. For nonnegative $u$, obviously $\phi_j,$ $j\in\mathbb{N},$ can be chosen nonnegative as well. Let $u_j$ be the {solution of the $\mathcal{K}_{\phi_j,\phi_j}(G)$-obstacle problem which by Proposition~\ref{prop:obst-ex-cont} is continuous} and \[\phi_j<u_j \qquad\text{in the open set }\ A_j=\{x\in G:\ \phi_j\neq u_j\}.\] Moreover, $u_j$ is ${\mathcal{ A}}$-harmonic in $A_j.$ By the Comparison Principle from Proposition~\ref{prop:comp-princ} we infer that the sequence $\{u_j\}$ is nondecreasing. Since $u$ is ${\mathcal{ A}}$-superharmonic, we have $u_j\leq u$ in $A_j$. Consequently $\phi_j\leq u_j\leq u$ in $ G.$ Passing to the limit with $j\to\infty$ we get that $u=\lim _{j\to\infty}u_j$, which completes the proof. \end{proof} \begin{lem}\label{lem:loc-bdd-superharm-are-supersol} If $u$ is ${\mathcal{ A}}$-superharmonic in $\Omega$ and locally bounded from above, then $u\in W^{1,\varphi(\cdot)}_{loc}(\Omega)$ and $u$ is an ${\mathcal{ A}}$-supersolution in $\Omega$. \end{lem} \begin{proof}
Fix open sets $E\Subset G \Subset \Omega$. By Proposition \ref{prop:from-below} there exists a nondecreasing sequence of continuous ${\mathcal{ A}}$-supersolutions $\{u_j\}$ in $G$ such that $u=\lim _{j\to\infty}u_j$ pointwise in $G$. Since $u$ is locally bounded, subtracting a constant if necessary, we may assume that {$u_j\leq u <0$} in $G$. It follows from Proposition \ref{prop:cacc} that the sequence {$\{|Du_j |\}$} is locally bounded in $L^{\varphi(\cdot)}(G)$. Since $u_j \to u$ a.e. in $G$, it follows that $u \in W^{1,\varphi(\cdot)}(G)$, and $Du_j \rightharpoonup Du$ weakly in $L^{\varphi(\cdot)}(G)$.
We need to show now that $u$ is an ${\mathcal{ A}}$-supersolution in $\Omega$. To this end we first prove that (up to a subsequence) the gradients {$\{Du_j \}$} converge a.e. in $G$. We start by proving that \begin{equation} \label{Ijto0} I_j = \int_{E} \Big({\mathcal{ A}}(x,Du) - {\mathcal{ A}}(x,Du_j) \Big)\cdot \big( Du - Du_j \big)\, dx\to 0 \quad \text{as}\ j\to\infty. \end{equation} Choose $\eta \in C_0^\infty(G)$ such that $0 \leq \eta \leq 1$, and $\eta = 1$ in {$E$}. Using $\psi = \eta(u-u_j)$ as a test function for the ${\mathcal{ A}}$-supersolution $u_j$ and applying the H\"older inequality, the doubling property of $\varphi$, and the Lebesgue dominated convergence theorem we obtain \begin{align*} -\int_G \eta {\mathcal{ A}}(x,Du_j) &\cdot \big( Du - Du_j \big)\, dx \leq \int_G (u-u_j){\mathcal{ A}}(x,Du_j)\cdot D\eta\, dx \\
&\leq 2 \|(u-u_j)D\eta\|_{L^{\varphi(\cdot)}(G)} \|{\mathcal{ A}}(\cdot,Du_j) \|_{L^{\widetilde\varphi(\cdot)}(G)} \\
&\leq c \|u-u_j\|_{L^{\varphi(\cdot)}(G)} \to 0. \end{align*} Moreover, since $$ \eta {\mathcal{ A}}(\cdot , Du) \in L^{\widetilde\varphi(\cdot)}(G), $$ the weak convergence $Du_j \rightharpoonup Du$ in $L^{\varphi(\cdot)}(G)$ implies $$ \int_G \eta {\mathcal{ A}}(x,Du)\cdot \big( Du - Du_j \big)\, dx \to 0. $$ Then, since $\eta \big({\mathcal{ A}}(x,Du) - {\mathcal{ A}}(x,Du_j) \big)\cdot \big( Du - Du_j \big) \geq 0$ a.e. in $G$, we conclude with~\eqref{Ijto0}. Since the integrand in $I_j$ is nonnegative, we may pick up a subsequence (still denoted $u_j$) such that \begin{equation} \label{eq:point-conv} \Big({\mathcal{ A}}(x,Du(x)) - {\mathcal{ A}}(x,Du_j(x)) \Big)\cdot \big( Du(x) - Du_j(x) \big) \to 0\ \ \text{ for a.a. $x\in E$.} \end{equation}
Fix $x \in E$ such that \eqref{eq:point-conv} is valid, and that $|Du(x)| < \infty$. Upon choosing a further subsequence we may assume that\[{Du_j(x)}\to \xi \in \overline{{\mathbb{R}}^n}.\] Since we have \begin{align*} \big({\mathcal{ A}}(x,&Du(x)) - {\mathcal{ A}}(x,Du_j(x)) \big)\cdot \big( Du(x) - Du_j(x) \big) \\
&\geq c_2^{\mathcal{ A}} \varphi(x,|Du_j(x)|) - c_1^{\mathcal{ A}} \frac{\varphi(x, |Du(x)|)}{|Du(x)|} |Du_j(x)| - c_1^{\mathcal{ A}} \frac{\varphi(x, |Du_j(x)|)}{|Du_j(x)|} |Du(x)| \\
&\geq c(\textit{\texttt{data}},|Du(x)|) \varphi(x,|Du_j(x)|) \left(1- \frac{|Du_j(x)|}{\varphi(x,|Du_j(x)|)} - \frac{1}{|Du_j(x)|} \right) \end{align*}
and \eqref{eq:point-conv} is true, it must follow that $|\xi| < \infty$.
Since the mapping $\zeta \mapsto {\mathcal{ A}}(x, \zeta)$ is continuous, we have $$ \big({\mathcal{ A}}(x,Du(x)) - {\mathcal{ A}}(x,\xi) \big)\cdot \big( Du(x) - \xi \big) = 0 $$ and it follows that $\xi = Du(x)$, and $$ Du_j(x) \to Du(x) \qquad \text{for a.e. $x \in E$}, $$ and $$ {\mathcal{ A}}(\cdot, Du_j) \rightharpoonup {\mathcal{ A}}(\cdot, Du) \qquad \text{weakly in $L^{\widetilde\varphi(\cdot)}$}. $$ Therefore $u$ is an ${\mathcal{ A}}$-supersolution of \eqref{eq:main:0}. Indeed, if $\phi \in C_0^\infty(\Omega),$ $\phi \geq 0$ is such that ${\rm supp}\, \phi \subset E$, then {$D\phi \in L^{\varphi(\cdot)}(E)$} and we have \begin{align*} 0 \leq \int_\Omega {\mathcal{ A}}(x, Du_j) \cdot D\phi\, dx \to \int_\Omega {\mathcal{ A}}(x, Du) \cdot D\phi\, dx \quad \text{as}\ j\to\infty. \end{align*} Since $E$ was arbitrary, this concludes the proof.
\subsection{Harnack's inequalities}
In order to get the strong Harnack inequality for ${\mathcal{ A}}$-harmonic functions and the weak Harnack inequality for ${\mathcal{ A}}$-superharmonic functions we need related estimates proved for ${\mathcal{ A}}$-subsolutions and ${\mathcal{ A}}$-supersolutions. Having Lemma~\ref{lem:Ah-is-quasi} we can specialize the results derived for quasiminimizers in~\cite{hh-zaa} to our case.
\begin{prop}[Corollary~3.6, \cite{hh-zaa}] \label{prop:weak-Har-sub-sup} For a locally bounded function $u\in W^{1,\varphi(\cdot)}_{loc}(\Omega)$ being ${\mathcal{ A}}$-subsolution in $\Omega$ there exist constants $R_0=R_0(n)>0$ and $C=C(\textit{\texttt{data}},n,R_0,{\rm ess\,sup}_{B_{R_0}} u)>0$, such that \[{\rm ess\,sup}_{B_{R/2}}u-k\leq C\left(\left(\barint_{B_R}(u-k)_+^{s}\,dx\right)^\frac{1}{s}+R\right) \] for all $R\in(0,R_0]$, $s>0$ and $k\in {\mathbb{R}}$. \end{prop}
\begin{prop}[Theorem~4.3, \cite{hh-zaa}] \label{prop:weak-Har-super-inf} For a nonnegative function $u\in W^{1,\varphi(\cdot)}_{loc}(\Omega)$ being an ${\mathcal{ A}}$-supersolution in $\Omega$ there exist constants $R_0=R_0(n)>0$, $s_0=s_0(\textit{\texttt{data}},n)>0$ and $C=C(\textit{\texttt{data}},n)>0$, such that \[ \left(\barint_{B_R}u^{s_0}\,dx\right)^\frac{1}{s_0} \leq C\left({\rm ess\,inf}_{B_{R/2}} u+R\right) \] for all $R\in(0,R_0]$ provided $B_{3R}\Subset\Omega$ and $\varrho_{\varphi(\cdot),B_{3R}}(Du)\leq 1.$ \end{prop} Let us comment on the above result. For the application in \cite{hh-zaa} the dependency of $s_0$ on other parameters is not important, and so it is not studied there in detail. Actually, this theorem is not proven in detail in \cite{hh-zaa}, but refers to standard arguments presented in~\cite{hht,hklmp}. Their re-verification allows one to find $s_0=s_0(\textit{\texttt{data}},n)$. Let us note that after we completed our manuscript, an interesting study on the weak Harnack inequalities with an explicit exponent, holding for unbounded supersolutions, within our framework of generalized Orlicz spaces appeared, see~\cite{bhhk}.
{Since an ${\mathcal{ A}}$-harmonic function is an ${\mathcal{ A}}$-subsolution and an ${\mathcal{ A}}$-supersolution at the same time (Lemma~\ref{lem:A-h-is-great}), by Propositions~\ref{prop:weak-Har-sub-sup} and~\ref{prop:weak-Har-super-inf} we infer the full Harnack inequality.} \begin{theo}[Harnack's inequality for ${\mathcal{ A}}$-harmonic functions] \label{theo:Har-A-harm} For a nonnegative ${\mathcal{ A}}$-harmonic function $u\in W^{1,\varphi(\cdot)}_{loc}(\Omega)$ there exist constants $R_0=R_0(n)>0$, $s_0=s_0(\textit{\texttt{data}},n)>0$ and $C=C(\textit{\texttt{data}},n,R_0,{\rm ess\,sup}_{B_{R_0}} u)>0$, such that \[ {\rm ess\,sup}_{B_{R}}u \leq C\left({\rm ess\,inf}_{B_{R}} u+R\right) \] for all $R\in(0,R_0]$ provided $B_{3R}\Subset\Omega$ and $\varrho_{\varphi(\cdot),B_{3R}}(Du)\leq 1.$ \end{theo}
\subsection{Harnack's Principle for ${\mathcal{ A}}$-superharmonic functions}
We are going to characterize the limit of a nondecreasing sequence of ${\mathcal{ A}}$-superharmonic functions and of their gradients.
\begin{theo}[Harnack's Principle for ${\mathcal{ A}}$-superharmonic functions] \label{theo:harnack-principle} Suppose that $u_i$, $i=1,2,\ldots$, are ${\mathcal{ A}}$-superharmonic and finite a.e. in $\Omega$. If the sequence $\{u_i\}$ is nondecreasing then the limit function $u=\lim_{i \to \infty} u_i$ {is ${\mathcal{ A}}$-superharmonic or identically infinite in $\Omega$.} Furthermore, if $u_i$, $i=1,2,\ldots$, are nonnegative, then up to a subsequence also $Du_i\to Du$ a.e. in $\{u<\infty\},$ where `$D$' stands for the generalized gradient, cf.~\eqref{gengrad}. \end{theo}{} \begin{proof} The proof is presented in three steps. We start by showing that the limit function is either ${\mathcal{ A}}$-superharmonic or $u \equiv \infty$, then we concentrate on the gradients, first proving the claim for an a priori globally bounded sequence $\{u_i\}$, and we conclude by passing to the limit with the bound.
{\em Step 1.} Since $u_i$ are lower semicontinuous, so is $u$. The following fact holds: Given a compact set $K \Subset \Omega$, if $h \in C(K)$, $\epsilon >0$ is a small fixed number, and $u > h-\epsilon$ on $K$, then, for $i$ sufficiently large, $u_i > h-\epsilon$. Indeed, let us argue by contradiction. Assume that for every $i$ there exists $x_i\in K,$ such that $$ u_i(x_i) \leq h(x_i) - \epsilon. $$ Since $K$ is compact, we can assume that $x_i \to x_o$. Fix $l \in \mathbb{N}$. Then, for $i > l$ we have $$ u_l(x_i) \leq u_i(x_i) \leq h(x_i) - \epsilon. $$ The right-hand side in the previous display tends, as $i \to \infty$, to $h(x_o)-\epsilon$. Hence, by lower semicontinuity of $u_l$, $$ u_l(x_o) \leq \liminf_{i\to\infty} u_l(x_i) \leq h(x_o) - \epsilon. $$ Thus for every $l$ we have $u_l(x_o) \leq h(x_o) - \epsilon$, which implies $u(x_o) \leq h(x_o) - \epsilon$, contradicting the fact that $u > h-\epsilon$ on $K$.
Using this fact we can prove that the limit function $u=\lim_{i \to \infty} u_i$ {is ${\mathcal{ A}}$-superharmonic unless $u \equiv \infty$}. Choose an open $\Omega' \Subset {\Omega}$ and a function $h \in C(\overline{\Omega'})$ which is ${\mathcal{ A}}$-harmonic in $\Omega'$. Assume the inequality $ u \geq h$ holds on $\partial \Omega'$. It follows that for every $\epsilon >0$ on $\partial \Omega'$ we have $u > h-\epsilon$ and, from the aforementioned fact, it follows that $u_i > h- \epsilon$ on $\partial \Omega'$ for $i$ sufficiently large. Since all $u_i$ are ${\mathcal{ A}}$-superharmonic, Proposition~\ref{prop:comp-princ} yields that $u_i \geq h-\epsilon$ on $\Omega'$. Therefore $u \geq h-\epsilon$ on $\Omega'$. Since $\epsilon $ is arbitrary, we have $u \geq h$ on $\Omega'$. Therefore the Comparison Principle from the definition of an ${\mathcal{ A}}$-superharmonic function holds unless $u \equiv \infty$ in~$\Omega$. Finally, $u=\lim_{i \to \infty} u_i$ is ${\mathcal{ A}}$-superharmonic unless $u \equiv \infty$.
{\em Step 2.} Assume $0\leq u_i\leq k$ for all $i$ with $k>1$ and choose open sets $E\Subset G\Subset{\Omega}$. By Lemma~\ref{lem:A-supers-cacc} we get that \[\varrho_{\varphi(\cdot),G}(Du_i)\leq c k^q\] with $c=c(\textit{\texttt{data}},n)>0$ uniform with respect to $i$. Then, by doubling properties of $\varphi$, we infer that\begin{equation}\label{Duibound}
\|Du_i\|_{L^{\varphi(\cdot)}(G)}\leq c (\textit{\texttt{data}},n,k). \end{equation} Consequently $\{u_i\}$ is bounded in $W^{1,\varphi(\cdot)}(G)$ and $u_i\to u$ weakly in $W^{1,\varphi(\cdot)}(G).$ Further, it has a non-relabelled subsequence converging a.e.~in $G$ to $u\in W^{1,\varphi(\cdot)}(G)$. Let us show that \begin{equation}
\label{grad=grad} Du_j\to Du\qquad\text{a.e. in }\ E. \end{equation}{} We fix arbitrary $\varepsilon\in(0,1)$, denote \[J_i=\{x\in E:\ \big({\mathcal{ A}}(x,Du_i(x))-{\mathcal{ A}}(x,Du(x))\big)\cdot(Du_i(x)-Du(x))>\varepsilon\}\] and estimate its measure. We have \begin{flalign}
|J_i|\leq& |J_i\cap\{|u_i-u|\geq \varepsilon^2\}|\nonumber\\
&+\frac{1}{\varepsilon}\int_{J_i\cap\{|u_i-u|<\varepsilon^2\}} \big({\mathcal{ A}}(x,Du_i)-{\mathcal{ A}}(x,Du)\big)\cdot(Du_i-Du)\,dx.\label{Jiest1} \end{flalign}{} Let $\eta\in C_0^\infty(G)$ be such that $\mathds{1}_{E}\leq \eta\leq \mathds{1}_{G}$. We define \[w_1^i=\min\big\{(u_i+\varepsilon^2-u)^+,2\varepsilon^2\big\}\quad\text{and}\quad w_2^i=\min\big\{(u+\varepsilon^2-u_i)^+,2\varepsilon^2\big\}.\] Then $w_1^i\eta$ and $w_2^i\eta$ are nonnegative functions from $W^{1,\varphi(\cdot)}_0(G)$ and can be used as test functions. Recall that $u$ and $u_i$, $i=1,2,\ldots$, are ${\mathcal{ A}}$-supersolutions and that $u_i\to u$ weakly in $W^{1,\varphi(\cdot)}(G).$ By the growth condition {we can estimate like in~\eqref{op}} and by~\eqref{Duibound} we have \begin{flalign*}
\int_{G\cap\{|u_i-u|<\varepsilon^2\}} {\mathcal{ A}}(x,Du)\cdot(Du_i-Du)\eta\,dx&\leq \int_{G\cap\{|u_i-u|<\varepsilon^2\}} {\mathcal{ A}}(x,Du)\cdot D\eta\,w^i_1\,dx\\
&\leq \, c\varepsilon^2\int_{G} \frac{\varphi(x,|Du|)}{|Du|}|D\eta|\,dx\\ &\leq \, c\varepsilon^2 \end{flalign*}{} with $c>0$ independent of $i$ and $\varepsilon$. Analogously \begin{flalign*}
\int_{G\cap\{|u_i-u|<\varepsilon^2\}} {\mathcal{ A}}(x,Du_i)\cdot(Du_i-Du)\eta\,dx&\leq \, c\varepsilon^2, \end{flalign*}{}
Summing up the above observations we have\begin{flalign*}\frac{1}{\varepsilon}\int_{J_i\cap\{|u_i-u|<\varepsilon^2\}} \big({\mathcal{ A}}(x,Du_i)-{\mathcal{ A}}(x,Du)\big)\cdot(Du_i-Du)\,dx\leq c\varepsilon. \end{flalign*}{} The left-hand side is nonnegative by the monotonicity of the operator, so due to~\eqref{Jiest1} we have \begin{flalign*}
|J_i|\leq& |J_i\cap\{|u_i-u|\geq \varepsilon^2\}|+c\varepsilon \end{flalign*}{}
with $c>0$ independent of $i$ and $\varepsilon$. Since $u_i\to u$ a.e. in $G$, and hence in measure on $E$, the first term on the right-hand side vanishes as $i\to\infty$; letting then $\varepsilon\to 0$ we get that $|J_i|\to 0$ as $i\to\infty$. Because of the strict monotonicity of the operator, we infer~\eqref{grad=grad}. {We can conclude the proof of this step by choosing a diagonal subsequence.}
{\em Step 3.} Now we concentrate on the general case. For every $k=1,2,\dots$ we select a subsequence $\{u_{i}^{(k)}\}_i$ of $\{u_i\}$ and find an ${\mathcal{ A}}$-superharmonic function $v_k,$ such that $\{u_{i}^{(k+1)}\}\subset\{u_{i}^{(k)}\}$, $T_k(u^{(k)}_i)\to v_k$ and $D(T_k(u^{(k)}_i))\to D v_k$ a.e. in $\Omega$ as $i\to\infty$. We note that $v_k$ increases to a function, which is ${\mathcal{ A}}$-superharmonic or identically infinite. Additionally, $v_k=T_k(u).$ The diagonally chosen subsequence $\{u_i^{(i)}\}$ has all the desired properties.
We have the following consequence of the Comparison Principle and Theorem~\ref{theo:harnack-principle}. \begin{coro}[Harnack's Principle for ${\mathcal{ A}}$-harmonic functions]\label{coro:Ah-harnack-principle} Suppose that $u_i$, $i=1,2,\ldots$, are ${\mathcal{ A}}$-harmonic in $\Omega$. If the sequence $\{u_i\}$ is nondecreasing then the limit function $u=\lim_{i \to \infty} u_i$ {is ${\mathcal{ A}}$-harmonic or infinite in $\Omega$.} \end{coro}
\subsection{Poisson modification}
The Poisson modification of an ${\mathcal{ A}}$-superharmonic function in a regular set $E$ carries the idea of its local smoothing. A boundary point is called regular if at this point the boundary value of any Musielak-Orlicz-Sobolev function is attained not only in the Sobolev sense but also pointwise. A set is called regular if all of its boundary points are regular. See~\cite{hh-zaa} for the result that if the complement of $\Omega$ is locally fat at $x_0\in\partial\Omega$ in the capacity sense, then $x_0$ is regular. In particular, polyhedra and balls are regular.
Let us consider a function $u$, which is ${\mathcal{ A}}$-superharmonic {and finite a.e.} in $\Omega$ and an open set $E\Subset\Omega$ with regular $\overline{E}.$ We define \[u_E=\inf\{v:\ v \ \text{is ${\mathcal{ A}}$-superharmonic in $E$ and }\liminf_{y\to x}v(y)\geq u(x)\ \text{for each }x\in\partial \overline E\}\] and the {\em Poisson modification} of $u$ in $E$ by \[P(u,E)=\begin{cases} u\quad&\text{in }\ \Omega\setminus E,\\ u_E &\text{in }\ E. \end{cases}{}\]
\begin{theo}[Fundamental properties of the Poisson modification]\label{theo:Pois} If $u$ is ${\mathcal{ A}}$-superharmonic {and finite a.e.} in $\Omega$, then its Poisson modification $P(u,E)$ is \begin{itemize}
\item [(i)] ${\mathcal{ A}}$-superharmonic in~$\Omega$,
\item [(ii)] ${\mathcal{ A}}$-harmonic in $E$,
\item [(iii)] $P(u,E)\leq u$ in~$\Omega.$ \end{itemize} \end{theo}{} \begin{proof} The fact that $P(u,E)\leq u$ in $\Omega$ results directly from the definition. By assumption $u$ is finite somewhere. Let us pick a nondecreasing sequence $\{\phi_i\}\subset C^\infty({\mathbb{R}^{n}})$ which converges to $u$ in $\overline{E}$. Let $h_i$ be the unique ${\mathcal{ A}}$-harmonic function agreeing with $\phi_i$ on $\partial E.$ The sequence $\{h_i\}$ is nondecreasing by the Comparison Principle from Proposition~\ref{prop:comp-princ}. Since $h_i\leq u$, by Harnack's Principle from Corollary~\ref{coro:Ah-harnack-principle} we infer that \[h:=\lim_{i\to\infty}h_i\] is ${\mathcal{ A}}$-harmonic in $E$. Moreover, $h\leq u$ and thus $h$ is also finite somewhere. Since \[u(y)=\lim_{i\to\infty}\phi_i(y)\leq\liminf_{x\to y} h(x)\quad\text{for }\ y\in\partial E,\]
it follows that $P(u,E)\leq h$ in $E$. On the other hand, by the Comparison Principle (Proposition~\ref{prop:comp-princ}) we get that $h_i\leq P(u,E)$ in $E$ for every $i$. Therefore $P(u,E)|_E=h$ is ${\mathcal{ A}}$-harmonic in $E$. This reasoning also shows that $P(u,E)$ is lower semicontinuous and, by Lemma~\ref{lem:pasting}, it is also ${\mathcal{ A}}$-superharmonic in $\Omega$. \end{proof}{}
\subsection{Minimum and Maximum Principles}
Before we prove the principles, we need to prove the following lemmas. \begin{lem}\label{lem:oo} If $u$ is ${\mathcal{ A}}$-superharmonic and $u=0$ a.e. in $\Omega,$ then $u\equiv 0$ in $\Omega.$ \end{lem} \begin{proof}{It is enough to show that $u=0$ in a given ball $B\Subset\Omega.$ {By lower semicontinuity of $u$ we infer that it is nonpositive.} By Lemma~\ref{lem:loc-bdd-superharm-are-supersol}, we get that $u\in W^{1,\varphi(\cdot)}(\Omega).$ Let $v=P(u,B)$ be the Poisson modification of $u$ in $B.$ By Theorem~\ref{theo:Pois} we have that $v$ is continuous in $B$ and $v\leq u\leq 0.$ Therefore $v$ is an ${\mathcal{ A}}$-supersolution in $\Omega$ and $(u-v)\in W^{1,\varphi(\cdot)}_0(\Omega)$. Moreover,
\[c_2^{\mathcal{ A}} \int_\Omega \varphi(x,|Dv|)\,dx\leq \int_\Omega {\mathcal{ A}}(x,Dv)\cdot Dv\,dx\leq \int_\Omega {\mathcal{ A}}(x,Dv)\cdot Du\,dx=0,\] where the last equality holds because $Du=0$ a.e. in $\Omega.$ But then, we directly get that $Dv=0$ and $v=0$ a.e. in $\Omega.$ By continuity of $v$ in $B$ we get that $v=0$ everywhere in $B$. In view of $v\leq u\leq 0$, we get that also $u\equiv 0$ in $B$ and, since $B$ was arbitrary, in the whole of $\Omega.$}\end{proof}
\begin{lem}\label{lem:A-sh-lsc} If $u$ is ${\mathcal{ A}}$-superharmonic and finite a.e. in $\Omega$, then for every $x\in\Omega$ it holds that $u(x)=\liminf_{y\to x}u(y)={\rm ess}\liminf_{y\to x} u(y).$ \end{lem} \begin{proof} We fix an arbitrary $x\in\Omega$ and by lower semicontinuity $u(x)\leq \liminf_{y\to x}u(y)\leq {\rm ess}\liminf_{y\to x} u(y)=:a.$ Let $\varepsilon\in (0,a)$ and $B=B(x,r)\subset\Omega$ be such that $u(y)>a-\varepsilon$ for a.e. $y\in B.$ By Corollary~\ref{coro:min-A-super} the function $v=\min\{u-a+\varepsilon,0\}$ is ${\mathcal{ A}}$-superharmonic in $\Omega$ and $v=0$ a.e. in $B.$ By Lemma~\ref{lem:oo} applied in $B$ we get $v\equiv 0$ in $B,$ but then $u(x)\geq a-\varepsilon$. Letting $\varepsilon\to 0$ we obtain that $u(x)=a$ and the claim is proven. \end{proof}
{We define $\psi:\Omega\times{[0,\infty)}\to{[0,\infty)}$ by \begin{equation}
\label{psi} \psi(x,s)=\varphi(x,s)/s. \end{equation} Note that within our regime $s\mapsto\psi(\cdot,s)$ is strictly increasing, but not necessarily convex. Although in general $\psi$ does not generate a Musielak-Orlicz space, we can still define $\varrho_{\psi(\cdot),\Omega}$ by~\eqref{modular}, which is useful in quantifying the uniform estimates for truncations in the following lemma.
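For instance (an illustration with the model choice of $\varphi$, added only to explain the lack of convexity), if $\varphi(x,s)=s^{p}$, then
\[
\psi(x,s)=s^{p-1},
\]
which is strictly increasing for every $p>1$, but it is convex only for $p\geq 2$ and concave for $1<p<2$.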
\begin{lem}\label{lem:unif-int} If for {$u$} there exist $M,k_0>0$, such that for all $k>k_0$\begin{equation}
\label{apriori}\varrho_{\varphi(\cdot),B}(DT_k u)\leq Mk,
\end{equation} then there exists a function $\zeta:[0,|B|]\to{[0,\infty)}$, such that $\lim_{s\to 0^+}\zeta(s)=0$ and for every measurable set $E\subset B$ it holds that for all $k>0$ \[\varrho_{\psi(\cdot),E}(D T_k u)\leq \zeta(|E|).\] \end{lem} \begin{proof} The result is classical when $p=q$, \cite{hekima}. Therefore, we present the proof only for $p<q$. We start with observing that
\begin{flalign*}|\{x\in B \colon\, \varphi(x,|Du|)>s\}|&\leq |\{x\in B \colon\,|u|>k\}|+ |\{x\in B \colon\,\varphi(x,|Du|)>s,\ |u|\leq k\}|\\&=I_1+I_2. \end{flalign*} Let us first estimate the volume of superlevel sets of $u$ using the Chebyshev inequality, the Poincar\'e inequality, the assumptions on the growth of $\varphi$, and~\eqref{apriori}. For all sufficiently large $k$ we have \begin{flalign*}
I_1&=|\{{x\in B \colon} |u|>k\}|\leq \int_B\frac{|T_k u|^p}{k^p}\,dx\leq\frac{c}{k^p} \int_B |DT_k u|^p\,dx\\
&\leq\frac{c}{k^p} \int_B \varphi(x,|DT_k u|) \,dx= ck^{-p}\varrho_{\varphi(\cdot),B}(DT_k u)\leq cMk^{1-p}. \end{flalign*} Similarly, by the Chebyshev inequality and~\eqref{apriori} we can estimate also \begin{flalign*}
I_2=|\{x\in B \colon\ \varphi(x,|Du|)>s,\ |u|\leq k\}|&\leq \frac{1}{s}\int_{\{\varphi(x,|DT_k u|)>s\}}\varphi(x,|D T_k u|)\,dx\leq {M} \frac{k}{s}. \end{flalign*} Altogether, choosing $k=s^{1/p}$, for all sufficiently large $s$ (i.e. $s>k_0^p$) we have that \begin{flalign*}
|\{x\in B \colon\ \varphi(x,|Du|)>s\}|&\leq I_1+I_2\leq cs^{\frac{1-p}{p}}. \end{flalign*} Recall that due to~\eqref{doubl-star} there exists $C>0$ uniform in $x$ such that $\psi(x,s)\leq C \widetilde\varphi^{-1}(x,\varphi(x,s)),$ so \begin{flalign*}
|\{x\in B \colon\,\psi(x,|Du|)>s\}|&\leq |\{x\in B \colon\,C\widetilde\varphi^{-1}(x,\varphi(x,|Du|))>s\}|\\
&= |\{x\in B \colon\,\varphi(x,|Du|)>\widetilde\varphi(x,s/C)\}|\\
& \leq |\{x\in B \colon\,\varphi(x,|Du|)>(s/C)^{q'}\}|\leq c s^{-\frac{q'}{p'}}\,, \end{flalign*}
for some $c>0$ independent of $x$. Since the case $q=p$ is trivial for these estimates, it suffices to consider $q>p$. Then ${-\frac{q'}{p'}}<{-1}$ and we get the uniform integrability of $\{\psi(\cdot,|DT_ku|)\}_k$, thus the claim follows. \end{proof} Let us sum up the information on integrability of gradients of truncations of ${\mathcal{ A}}$-superharmonic functions. \begin{rem}\label{rem:unif-int} For a function $u$ being ${\mathcal{ A}}$-superharmonic and finite a.e. in $\Omega$, by Lemma~\ref{lem:loc-bdd-superharm-are-supersol} we get that $\{T_k u\}$ is a sequence of ${\mathcal{ A}}$-supersolutions in $\Omega$. Then~\eqref{apriori} is satisfied because of the Caccioppoli estimate from Lemma~\ref{lem:A-supers-cacc}. Having Lemma~\ref{lem:unif-int} we get that there exists $R_0>0$, such that for every $x\in \Omega$ and $B=B(x,R)\Subset\Omega$ with $R<R_0$ we have $\varrho_{\psi(\cdot),B}(DT_k u)\leq 1$ for all $k>0$ and in fact also $\varrho_{\psi(\cdot),B}(Du)\leq 1$ (where `$D$' stands for the generalized gradient, cf.~\eqref{gengrad}). \end{rem} \begin{lem}\label{lem:wH-for-trunc} For $u$ being a nonnegative function ${\mathcal{ A}}$-superharmonic and finite a.e. in $\Omega$ there exist constants $R^{\mathcal{ A}}_0=R_0^{\mathcal{ A}}(n)>0$, $s_0=s_0(\textit{\texttt{data}},n)>0$ as in the weak Harnack inequality (Proposition~\ref{prop:weak-Har-super-inf}), and $C=C(\textit{\texttt{data}},n)>0$, such that for every $k>1$ we have \begin{equation}
\label{in:wH-for-trunc} \left(\barint_{B_R}(T_k u)^{s_0}\,dx\right)^\frac{1}{s_0} \leq C\left({\inf}_{B_{R/2}} (T_k u)+R\right) \end{equation} for all $R\in(0,R_0^{\mathcal{ A}}]$ provided $B_{3R}\Subset\Omega$ and $\varrho_{\psi(\cdot),B_{3R}}(Du)\leq 1.$ \end{lem} \begin{proof} The proof is based on Remark~\ref{rem:unif-int} and Proposition~\ref{prop:weak-Har-super-inf} that provides weak Harnack inequality for an~${\mathcal{ A}}$-supersolution $v$ holding with constant $C=C(\textit{\texttt{data}},n)$ and for balls with radius $R<R_0(n)$ and so small that $\varrho_{\varphi(\cdot),B_{3R_0}}(Dv)\leq 1$.
An additional explanation is required only when $|Dv|\geq 1$ a.e. in the considered ball. Then for every $k>1$ there exists $R_1(k)$ such that we get~\eqref{in:wH-for-trunc} for $T_k v$ over balls such that $R<\min\{R_1(k),R_0(n)\}$ and $\varrho_{\varphi(\cdot),B_{3R_1(k)}}(D T_kv)\leq 1$. Of course, then there exists $R_0^{\mathcal{ A}}(k)\in(0,R_1(k))$, such that we have~\eqref{in:wH-for-trunc} for $R<\min\{R_1(k),R_0(n)\}$ and $ \varrho_{\psi(\cdot),B_{3R_0^{\mathcal{ A}}(k)}}(D T_kv)\leq \varrho_{\varphi(\cdot),B_{3R_0^{\mathcal{ A}}(k)}}(D T_kv)\leq 1$. Note that it is Remark~\ref{rem:unif-int} that allows us to choose $R_0^{\mathcal{ A}}$ independently of $k$. \end{proof}}
We are in a position to prove that a nonconstant ${\mathcal{ A}}$-harmonic function can attain neither its minimum nor its maximum inside a domain.
\begin{theo}[Strong Minimum Principle for ${\mathcal{ A}}$-superharmonic functions]\label{theo:mini-princ} Suppose $u$ is ${\mathcal{ A}}$-superharmonic and finite a.e. in a connected set $\Omega$. If $u$ attains its minimum inside $\Omega,$ then $u$ is a constant function. \end{theo}
\begin{proof} {We consider $v=(u-\inf_{\Omega}u)$, which by Corollary~\ref{coro:min-A-super} is ${\mathcal{ A}}$-superharmonic. Let $E=\{x\in \Omega:\ v(x)= 0\}$, which by lower semicontinuity of $v$ (Lemma~\ref{lem:A-sh-lsc}) is nonempty and relatively closed in $\Omega.$ Having in hand Remark~\ref{rem:unif-int}, we can choose a ball $B=B(x,R)$, centered at a point $x\in E$, with $3B\Subset\Omega$, with radius smaller than $R_0^{\mathcal{ A}}$ from Lemma~\ref{lem:wH-for-trunc}, and such that $\varrho_{\psi(\cdot), B_{3R}}(Du)\leq 1$ where $\psi$ is as in~\eqref{psi}. Therefore, in the rest of the proof we restrict ourselves to a ball $B$. By Corollary~\ref{coro:min-A-super} the functions $v$ and $T_k v$ are ${\mathcal{ A}}$-superharmonic in $3B$. Moreover, by Lemma~\ref{lem:loc-bdd-superharm-are-supersol} we infer that $\{T_k v\}$ is a~sequence of ${\mathcal{ A}}$-supersolutions integrable uniformly in the sense of Lemma~\ref{lem:unif-int}. We take any $y\in B$ which is a Lebesgue point of $T_k v$ for every $k$ and choose $B'=B'(y,R')\Subset B.$ Let us also fix an arbitrary $k>0$. We have the weak Harnack inequality from Lemma~\ref{lem:wH-for-trunc} for $T_k v$ on $B'$ yielding \[0\leq\left(\barint_{B'} (T_kv)^{s_0}\,dx\right)^\frac{1}{s_0}\leq C(\inf_{B'/2}T_k v+R')=CR'\] with $s_0,C>0$ independent of $k$. Letting $R'\to 0$ we get that $T_k v(y)=0$. Since Lebesgue points of $T_k v$ for every $k$ are dense in $B$, we get that $T_k v\equiv 0$ a.e. in $B$. By arguments as in Lemma~\ref{lem:oo} we get that $T_k v\equiv 0$ in $B$, but then $B\subset E$ and $E$ has to be an open set. Since $\Omega$ is connected, the only nonempty, relatively closed and open subset of $\Omega$ is $\Omega$ itself, that is $E=\Omega$. Therefore $T_kv\equiv 0$ in $\Omega.$ As $k>0$ was arbitrary, $v=u-\inf_{\Omega}u\equiv 0$ in $\Omega$ as well.} \end{proof}
As a classical consequence of the Strong Minimum Principle, we get its weaker form.
\begin{coro}[Minimum Principle for ${\mathcal{ A}}$-superharmonic functions]\label{coro:mini-princ} Suppose $u$ is ${\mathcal{ A}}$-superharmonic and finite a.e. in $\Omega$. If $E\Subset\Omega$ is a connected open subset of $\Omega$, then \[\inf_{E} u=\inf_{\partial E} u.\] \end{coro}
By the very definition of an ${\mathcal{ A}}$-subharmonic function one gets the following direct consequence of the above fact. \begin{coro}[Maximum Principle for ${\mathcal{ A}}$-subharmonic functions]\label{coro:max-princ} Suppose $u$ is ${\mathcal{ A}}$-subharmonic and finite a.e. in $\Omega$. If $E\Subset\Omega$ is a connected open subset of $\Omega$, then \[\sup_E u=\sup_{\partial E} u.\] \end{coro}
Having Theorem~\ref{theo:mini-princ} and Corollary~\ref{coro:max-princ}, we infer that if $u$ is ${\mathcal{ A}}$-harmonic in $\Omega$, then it attains its minimum and maximum on $\partial\Omega$. In other words ${\mathcal{ A}}$-harmonic functions have the following Liouville-type property. \begin{coro}[Liouville Theorem for ${\mathcal{ A}}$-harmonic functions]\label{coro:min-max-princ} If an ${\mathcal{ A}}$-harmonic function attains its extremum inside a domain, then it is a constant function. \end{coro}
\subsection{Boundary Harnack inequality for ${\mathcal{ A}}$-harmonic functions}
\begin{theo}[Boundary Harnack inequality for ${\mathcal{ A}}$-harmonic functions]\label{theo:boundary-harnack} For a~nonnegative function $u$ which is ${\mathcal{ A}}$-harmonic in a connected set $\Omega$ there exist $R_0=R_0(n)>0$ and $C=C(\textit{\texttt{data}},n,R_0,{\rm ess\,sup}_{B_{R_0}}u)>0 $, such that \begin{equation*} \sup_{\partial B_R} u\leq C(\inf_{\partial B_{R}} u+R) \end{equation*} for all $R\in(0,R_0]$ provided $B_{3R}\Subset\Omega$ and $\varrho_{\psi(\cdot),B_{3R}}(Du)\leq 1,$ where $\psi$ is given by~\eqref{psi}. \end{theo}\begin{proof}It suffices to note that by Lemma~\ref{lem:A-h-is-great} we can use the Minimum Principle of~Corollary~\ref{coro:mini-princ} and the Maximum Principle of Corollary~\ref{coro:max-princ}. Then by the Harnack inequality of Theorem~\ref{theo:Har-A-harm} the proof is complete. \end{proof}
{\begin{coro} Suppose $u$ is ${\mathcal{ A}}$-harmonic in $B_{\frac{3}{2}R}\setminus B_R,$ with $R<R_0 $ from Theorem~\ref{theo:boundary-harnack}. Then there exists $C=C(\textit{\texttt{data}},n,R_0,{\rm ess\,sup}_{B_{R_0}}u)>0 $, such that \begin{equation*} \sup_{\partial B_{\frac 43 R}} u\leq C(\inf_{\partial B_{\frac 43 R}} u+2R). \end{equation*} \end{coro}\begin{proof} Fix $\varepsilon>0$ small enough that $B_R\Subset B_{\frac 43 R-\varepsilon}\subset B_{\frac 43 R+\varepsilon}\Subset B_{\frac 32 R}.$ Of course, then $u$ is ${\mathcal{ A}}$-harmonic in ${B_{\frac 43 R+\varepsilon}\setminus B_{\frac 43 R-\varepsilon}}$. We cover the annulus with a finite number of~balls of equal radius as prescribed in the theorem and such that $\varrho_{\psi(\cdot),B}(Du)\leq 1$, which is possible due to Remark~\ref{rem:unif-int}. Let us observe that due to the boundary Harnack inequality from Theorem~\ref{theo:boundary-harnack} we have\begin{flalign*} \sup_{\partial B_{\frac 43 R+\varepsilon}} u\leq \sup_{\partial B_{\frac 43 R+\varepsilon}\cup\partial B_{\frac 43 R-\varepsilon}} u&\leq C\Big( \inf_{\partial B_{\frac 43 R+\varepsilon}\cup\partial B_{\frac 43 R-\varepsilon}} u+ \frac 43 R+\varepsilon\Big)\\&\leq C\Big( \inf_{\partial B_{\frac 43 R+\varepsilon}} u + 2R\Big). \end{flalign*} Since $u$ is continuous in $B_{\frac{3}{2}R}\setminus B_R,$ passing with $\varepsilon\to 0$ we get the claim. \end{proof} }
\end{document}
\begin{document}
\title{Non-Gaussian state generation certified using the EPR-steering inequality}
\author{E. S. G\'{o}mez} \email{[email protected]} \author{G. Ca\~nas} \author{E. Acu\~na} \affiliation{Center for Optics and Photonics, MSI-Nucleus on Advanced Optics, Departamento de F\'{\i}sica, Universidad de Concepci\'{o}n, 160-C, Concepci\'{o}n, Chile} \author{W. A. T. Nogueira} \affiliation{Departamento de F\'isica, ICE, Universidade Federal de Juiz de Fora, Juiz de Fora, CEP 36036-330, Brazil} \affiliation{Universidade Federal de Minas Gerais,
Caixa Postal 702, 30123-970, Belo Horizonte, MG, Brazil} \author{G. Lima} \affiliation{Center for Optics and Photonics, MSI-Nucleus on Advanced Optics, Departamento de F\'{\i}sica, Universidad de Concepci\'{o}n, 160-C, Concepci\'{o}n, Chile} \date{\today}
\pacs{42.50.Dv}
\begin{abstract} For practical reasons, experimental and theoretical continuous-variable (CV) quantum information (QI) has been heavily based on Gaussian states. Nevertheless, many CV-QI protocols require the use of non-Gaussian states and operations. Here, we show that the Einstein-Podolsky-Rosen steering inequality can be used to obtain a practical witness for the generation of pure bipartite non-Gaussian states. While the scenario requires pure states, we show its broad relevance by reporting the experimental observation of the non-Gaussianity of the CV two-photon state generated in the process of spontaneous parametric down-conversion (SPDC). The observed non-Gaussianity is due only to the intrinsic phase-matching conditions of SPDC. \end{abstract}
\maketitle
\section{Introduction} Continuous-variable (CV) quantum information (QI) is a research field that has been growing steadily in the past few years \cite{Braunstein}. The need to cover larger Hilbert spaces is motivated by QI protocols, such as quantum key distribution (QKD), which has advantages when implemented in higher dimensions \cite{Gisin1,Lima13}. The extension of many QI protocols, first proposed considering discrete quantum systems, to the realm of CV systems has been heavily based on Gaussian states \cite{Eisert1,Cirac,Acin}. This is due to the fact that Gaussian states are fully determined by their first- and second-order moments \cite{Paris1,Weedbrock}, and also because of the practicality in the creation, manipulation and detection of Gaussian states \cite{Vaidman,CV-QKD}.
Nevertheless, several recent works have shown the relevance of non-Gaussian states and operations \cite{SteveNG}. For instance, they are required for quantum computation with cluster states \cite{Lund08_1}, entanglement distillation \cite{Eisert02_1,Fiurasek02_1}, quantum error correction \cite{Fiurasek09_1} and loophole-free Bell tests \cite{Banaszek99_1,Carmichael04_1}. Besides, they provide advantages to the quantum teleportation, quantum cloning and state estimation tasks \cite{Bonifacio03_1,Cerf05_1,Genoni09_1,Adesso09_1}.
In this work, we show that the Einstein-Podolsky-Rosen (EPR) steering inequality \cite{EPR,Schro,Reid,Steer} can be used to obtain a witness for the non-Gaussianity of pure bipartite CV quantum states. While the scenario requires pure states, we show its broad relevance by reporting the observation of the non-Gaussianity of the CV two-photon state generated in the process of spontaneous parametric down-conversion (SPDC)\cite{Mandel,Monken1,Saleh1,PR}. SPDC is to date the most used source for experimental investigations in the field of quantum information, and our work highlights the simplicity of using this source for new applications in CV-QI. The generated down-converted photons are correlated in their transverse momenta and can be used to test the EPR-paradox \cite{BoydEPR}. The observed non-Gaussianity is due only to the intrinsic phase-matching conditions of the SPDC process \cite{TboNG,ExterNF2}.
\section{The EPR-steering inequality with bipartite Gaussian states}\label{teoria}
Consider a bipartite system described by a pure CV-state $\rho_{12}=|\psi\rangle\langle\psi|$, and a generic pair of complementary noncommuting observables $\hat{u}_i$ and $\hat{v}_i$, with $i=1,2$ denoting the corresponding subsystem. The spectral decomposition of $\hat{u}_i$ and $\hat{v}_i$ is an infinite set of continuous variables. In Ref.~\cite{Reid} the EPR-steering criterion was introduced as $\Delta_{inf}^2(\hat{u}_2)\Delta_{inf}^2(\hat{v}_2)\geq C$, where $C$ is a value that depends on the chosen observables. The violation of this inequality implies the implementation of an EPR paradox \cite{BoydEPR,Steve}. The \emph{inferred} variances are given by \begin{equation}\label{eq:def_varinf}
\Delta_{inf}^2(\hat{u}_2)=\int du_1\, P(u_1)\Delta^2(u_2|u_1), \end{equation}
where $\Delta^2(u_2|u_1)$ is the variance of the conditional probability distribution $P(u_2|u_1)$, and $P(u_1)$ is the marginal probability distribution of one party's outcomes.
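To see why the bound $C$ is natural (a standard observation which we recall here only for orientation), note that for an uncorrelated (product) state one has $P(u_2|u_1)=P(u_2)$ and $P(v_2|v_1)=P(v_2)$, so that
\[
\Delta_{inf}^2(\hat{u}_2)=\Delta^2(\hat{u}_2),\qquad \Delta_{inf}^2(\hat{v}_2)=\Delta^2(\hat{v}_2),\qquad\text{and thus}\qquad \Delta_{inf}^2(\hat{u}_2)\Delta_{inf}^2(\hat{v}_2)=\Delta^2(\hat{u}_2)\Delta^2(\hat{v}_2)\geq C,
\]
where the last step is the uncertainty relation for the complementary observables $\hat{u}_2$ and $\hat{v}_2$. A violation therefore certifies that party 1 can infer the outcomes of party 2 beyond the uncertainty limit.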
Now, let us consider a general continuous-variable two-mode pure state described by a Gaussian amplitude $\mathcal{A}(\mathbf{q}_1,\mathbf{q}_2)$, where the vectors $\mathbf{q}_1$ and $\mathbf{q}_{2}$ are the outcomes of the observable $\hat{q}$ for each party \begin{equation}\label{gstate}
\mathcal{A}_G(\mathbf{q}_1,\mathbf{q}_2)\propto\exp\left(-\frac{|\mathbf{q}_1+\mathbf{q}_2|^2}{4\sigma_+^2}\right)\exp\left(-\frac{|\mathbf{q}_1-\mathbf{q}_2|^2}{4\sigma_-^2}\right), \end{equation} with $\sigma_+$ and $\sigma_-$ being the widths of the corresponding Gaussian functions. Several CV physical systems can be modelled in this way, for example: an atomic ensemble interacting with an electromagnetic field, two entangled photons sent through Gaussian channels, photoionization of atoms and photodissociation of molecules \cite{Fedorov2004}, spontaneous emission of a photon by an atom \cite{Eberly2002,Eberly2003,Fedorov2005}, and multiphoton pair production \cite{Fedorov2006,Eisert}. Since Gaussian amplitudes have the same functional form in both transverse directions we can, without loss of generality, work in one dimension and consider only its scalar form.
From Eq.~(\ref{eq:def_varinf}) one can obtain the inferred variances for the state of Eq.~(\ref{gstate}) \cite{Eisert}. Note that $\Delta_{inf,G}^2(\hat{q}_2)=\Delta^{2}(q_{2}|q_{1})=\sigma_{+}^{2}\sigma_{-}^{2}/(\sigma_{+}^{2}+\sigma_{-}^{2})$. One can also find the inferred variance of the complementary observable $\hat{x}_{2}$, and it is given by $\Delta_{inf,G}^2(\hat{x}_2)=\Delta^{2}(x_{2}|x_{1})=1/(\sigma_{+}^{2}+\sigma_{-}^{2})$. Therefore, the EPR-steering inequality reads \begin{equation}\label{EPRgauss}
\Delta^{2}(q_{2}|q_{1})\Delta^{2}(x_{2}|x_{1})\!=\!\frac{\sigma_{+}^{2}\sigma_{-}^{2}}{\left(\sigma_{+}^{2}+\sigma_{-}^{2}\right)^2}\geq\frac{1}{4}. \end{equation} For simplicity, let us define the parameter $P\equiv\sigma_{+}/\sigma_{-}$. The Schmidt number $K_{G}$ for the general Gaussian state of Eq.~(\ref{gstate}) is given by \cite{Eberly} \begin{equation}\label{schmidtgauss} K_{G}=\frac{1}{4}\left(\frac{1}{P}+P\right)^2. \end{equation} Thus, the EPR-steering inequality can be written as \begin{equation}\label{EPR-gauss1}
\Delta^{2}(q_{2}|q_{1})\Delta^{2}(x_{2}|x_{1})=\frac{1}{4K_{G}}\geq\frac{1}{4}. \end{equation} Note that due to the symmetry of Eq.~(\ref{gstate}) one has that $K_G=K_{G\alpha}\times K_{G\beta}$, where $K_{Gj}$ represents the Schmidt number in the transverse direction $j=\alpha,\beta$ \cite{Kfact}. Thus, one may further simplify the EPR-steering inequality for Gaussian states to \begin{equation}\label{EPR-gauss2}
W\equiv\Delta^{2}(q_{2}|q_{1})\Delta^{2}(x_{2}|x_{1})=\frac{1}{4K_{G\alpha}^2}\geq\frac{1}{4}. \end{equation} Hence, the inequality can be tested by performing measurements along a single transverse direction. Moreover, it does not depend on the values chosen for $q_{1}$ and $x_{1}$. When $P=1$, $K_{G}=K_{G\alpha}=K_{G\beta}=1$ and the Gaussian state is a product state. In Fig.~\ref{Fig1}(a) [Fig.~\ref{Fig1}(b)] we show $K_{G\alpha}$ ($W$) with a solid blue line, while varying $P$.
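For concreteness, the Gaussian bound can be checked numerically. The following minimal Python sketch (an illustration, not part of the original analysis) evaluates $K_{G\alpha}=\frac{1}{2}(P+1/P)$ and $W=1/(4K_{G\alpha}^2)$ over a range of $P$ and confirms that $W$ reaches its maximum value of $1/4$ only at the product-state point $P=1$.
\begin{verbatim}
import numpy as np

# P = sigma_+/sigma_-, including the product-state point P = 1 exactly
P = np.r_[1.0, np.linspace(0.05, 20.0, 2000)]

K_alpha = 0.5*(P + 1.0/P)        # 1D Schmidt number, K_G = K_alpha**2
W = 1.0/(4.0*K_alpha**2)         # EPR-steering product, one transverse direction

print("max W =", W.max())        # 0.25, attained at P = 1
print("P at max =", P[np.argmax(W)])
\end{verbatim}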
\begin{figure}
\caption{(Color Online) (a) The Schmidt number for bipartite Gaussian states and for the CV spatial state of SPDC [Eq.~(\ref{SPDC-mom})] as a function of $P$, considering one transverse direction of $\mathbf{q}_j$. (b) The corresponding values of $W$ plotted in terms of $P$. }
\label{Fig1}
\end{figure}
\section{A witness for the non-Gaussianity of CV quantum states} \label{SPDC} From the results obtained above it is possible to envisage a simple and practical way to determine whether a given bipartite pure state is Gaussian. Note from Eq. (\ref{EPR-gauss2}) and Fig.~\ref{Fig1}(b) that pure bipartite Gaussian states will always violate the EPR-steering inequality. The reason is that they are in general entangled states [See Fig.~\ref{Fig1}(a)]. The only exception is the point marked with the horizontal dashed line in Fig.~\ref{Fig1}(b), which represents the point where the Gaussian state is a product state. In this case, $W=\frac{1}{4K_{G}}=0.25$, which corresponds to the upper quantum bound for pure bipartite Gaussian states. Thus, a value of $W$ greater than 0.25 for a pure entangled state can only be obtained with a non-Gaussian state.
\begin{figure*}
\caption{(Color Online) Experimental setup. (a) State preparation stage. (b) Setup configuration for measuring $\Delta^{2}(q_{2}|q_{1})$. (c) Setup configuration for measuring $\Delta^{2}(x_{2}|x_{1})$. See the main text for details. }
\label{Fig2}
\end{figure*}
While this scenario requires pure states, we now show its broad relevance by reporting the observation of the non-Gaussianity of the CV spatial two-photon state generated in the process of SPDC \cite{Mandel,Monken1,Saleh1,PR}. Considering perfect collinear phase matching and neglecting anisotropy effects, we can write the spatial two-photon state as \cite{Monken1,Saleh1} \begin{equation}\label{SPDC-mom}
|\psi\rangle_{12}\propto\iint d\mathbf{q}_1 d\mathbf{q}_2\,\tilde{E}_{p}(\mathbf{q}_1+\mathbf{q}_2)\tilde{G}(\mathbf{q}_1-\mathbf{q}_2)|1\mathbf{q}_1\rangle|1\mathbf{q}_2\rangle, \end{equation}
where $|1\mathbf{q}_i\rangle$ represents one photon in mode $i$ ($i=1,2$), usually called signal or idler, with transverse momentum $\mathbf{q}_i$. $\tilde{E}_{p}(\mathbf{q})$ is the angular spectrum of the pump beam. Usual experimental configurations adopt a Gaussian pump beam, in which case $\tilde{E}_{p}(\mathbf{q})\propto\exp\left[-c^2 |\mathbf{q}|^2/4\right]$, where $c$ represents the beam radius at the crystal plane. $\tilde{G}(\mathbf{q})=\ensuremath{\mbox{\hspace{1.3pt}sinc}}\left(b|\mathbf{q}|^2\right)$ defines the phase-matching conditions of the SPDC process, with $\ensuremath{\mbox{\hspace{1.3pt}sinc}}(\xi)\equiv\frac{\sin(\xi)}{\xi}$ and $b\equiv\frac{L}{8k}$, where $L$ is the crystal length and $k$ is the wavenumber of the down-converted photons. In terms of these definitions, $P=\frac{1}{c}\sqrt{\frac{L}{2k}}$. This state can be rewritten in the complementary transverse position representation as \cite{ExterNF2,Monken2} \begin{equation}\label{SPDC-pos}
|\psi\rangle_{12}\propto\iint d\boldsymbol{x}_1 d\boldsymbol{x}_2\,E_{p}\left(\frac{\boldsymbol{x}_1+\boldsymbol{x}_2}{2}\right)G\left(\frac{\boldsymbol{x}_1-\boldsymbol{x}_2}{2}\right)|1\boldsymbol{x}_1\rangle|1\boldsymbol{x}_2\rangle, \end{equation}
where the functions $E_{p}(\boldsymbol{x})$ and $G(\boldsymbol{x})$ are the Fourier transforms of $\tilde{E}_{p}(\mathbf{q})$ and $\tilde{G}(\mathbf{q})$, respectively. Thus, $E_{p}(\boldsymbol{x})\propto \exp\left[-|\boldsymbol{x}|^2/c^2\right]$ and $G(\boldsymbol{x})\propto 1-\frac{2}{\pi}\ensuremath{\mbox{\hspace{1.3pt}Si}}\left(\frac{1}{4b}|\boldsymbol{x}|^2\right)\equiv\mathrm{sint}\left(\frac{1}{4b}|\boldsymbol{x}|^2\right)$, where $\ensuremath{\mbox{\hspace{1.3pt}Si}}(x)\equiv\int_0^x dt\, \ensuremath{\mbox{\hspace{1.3pt}sinc}}(t)$. Here, the transverse position $\hat{x}_j$ and transverse momentum $\hat{q}_j$ are the complementary observables for the EPR-steering inequality test \cite{BoydEPR}. Clearly, the CV spatial two-photon state of SPDC is a non-Gaussian state \cite{TboNG}.
An experimental observation of the effects that arise from the phase-matching conditions, namely, the non-Gaussianity of $|\psi\rangle_{12}$, was reported in Ref. \cite{ExterNF2}, which demonstrated how the spatial correlations in the near-field plane of a non-linear crystal change when the phase-matching conditions vary. Now, we demonstrate experimentally how $W$ can be used to detect the non-Gaussianity of this state. The demonstration is based on the fact that the spatial state of the SPDC process is pure and entangled \cite{Eberly}, even when a single transverse direction is post-selected \cite{TboNG,PR}. This means that for any value of $P$, the Schmidt number $K_{S\alpha}$ is always greater than 1. In Fig.~\ref{Fig1}(a) [Fig.~\ref{Fig1}(b)] we show $K_{S\alpha}$ ($W$) with a dashed red line, while varying $P$ for the state $|\psi\rangle_{12}$. One can see that for some values of $P$ ($0.56\leq P\leq 2.58$) the values of $W$ are greater than $\frac{1}{4}$, thus witnessing the non-Gaussianity of this state.
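The theoretical curve of $W$ for the state of Eq.~(\ref{SPDC-mom}) can be obtained numerically. The Python sketch below evaluates the two conditional variances in one transverse dimension with $q_1=x_1=0$, using the distributions $|\tilde{E}_p(q_2)\tilde{G}(q_2)|^2$ and $|E_p(x_2/2)G(x_2/2)|^2$; the crystal length, wavelength and pump waist are illustrative values taken from the experiment reported below, and the integration grids are ad hoc choices.
\begin{verbatim}
import numpy as np
from scipy.special import sici

lam = 710e-9                    # down-converted wavelength [m]
k   = 2*np.pi/lam               # wavenumber of the down-converted photons
L   = 1.8e-2                    # crystal length [m]
c   = 45e-6                     # pump beam radius at the crystal [m]
b   = L/(8*k)
print("P =", np.sqrt(L/(2*k))/c)

def variance(x, w):             # variance of an unnormalized distribution
    w = w/w.sum()
    m = (x*w).sum()
    return (((x - m)**2)*w).sum()

# far field, q1 = 0:  P(q2|0) ~ |E_p(q2) G(q2)|^2
q  = np.linspace(-5e5, 5e5, 20001)
Pq = (np.exp(-c**2*q**2/4)*np.sinc(b*q**2/np.pi))**2  # np.sinc(u)=sin(pi u)/(pi u)

# near field, x1 = 0:  P(x2|0) ~ |E_p(x2/2) G(x2/2)|^2, with G the "sint" function
x    = np.linspace(-4e-4, 4e-4, 20001)
sint = 1 - (2/np.pi)*sici((x/2)**2/(4*b))[0]
Px   = (np.exp(-(x/2)**2/c**2)*sint)**2

W = variance(q, Pq)*variance(x, Px)
print("W =", W, "(Gaussian bound 1/4)")
\end{verbatim}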
\section{Experiment}\label{resultados}
The experimental setup is illustrated in Fig.~\ref{Fig2}. We used a solid-state laser source, at $355\,nm$, to pump a $\beta$-barium-borate type-II (BBO-II) non-linear crystal ($L= 1.8\,cm$) for the generation of the down-converted photons. Initially, the Gaussian pump beam had a waist of $c=200\,\mu m$ at the crystal plane. However, to experimentally observe the dependence of $W$ on $P$ [See Fig.~\ref{Fig1}(b)], we generated five additional states by changing the waist of the beam at the crystal plane. This was done using a configurable set of doublet achromatic lenses placed before the non-linear crystal [See Fig.~\ref{Fig2}(a)]. The corresponding values of $P$ for the six generated states are shown in Tab.~\ref{tabla1:estados}.
\begin{table}[h] \caption{Corresponding values of $P$ for the generated states.\label{tabla1:estados}} \begin{ruledtabular} \begin{tabular}{ccc} State & $c\,[\mu m]$ & $P$\\ \hline 1 & 200 & 0.1595 \\ 2 & 100 & 0.3189 \\ 3 & 70 & 0.4556 \\ 4 & 45 & 0.7087\\ 5 & 40 & 0.7973 \\ 6 & 35 & 0.9112 \\ \end{tabular} \end{ruledtabular} \end{table}
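The tabulated values of $P$ follow directly from $P=\frac{1}{c}\sqrt{\frac{L}{2k}}$ with $L=1.8\,cm$ and the degenerate down-converted wavelength of $710\,nm$ quoted below. The short Python check below, included only as a convenience, reproduces Tab.~\ref{tabla1:estados}.
\begin{verbatim}
import numpy as np

lam = 710e-9                      # degenerate down-converted wavelength [m]
k   = 2*np.pi/lam
L   = 1.8e-2                      # crystal length [m]

for c in [200e-6, 100e-6, 70e-6, 45e-6, 40e-6, 35e-6]:  # pump waists [m]
    print(f"c = {c*1e6:5.0f} um  ->  P = {np.sqrt(L/(2*k))/c:.4f}")
\end{verbatim}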
To guarantee the purity of the measured states, we used spatial and spectral filters in each measurement apparatus. For instance, interference filters were used to select degenerate down-converted photons at $710\,nm$ with a bandwidth of $5\,nm$. This ensures that there is no entanglement between the frequencies and the transverse spatial coordinates, so that the reduced spatial state is pure. The use of spatial filters is explained below.
Furthermore, a polarizing beam splitter (PBS) separates the signal and idler photon modes. For measuring $\Delta^{2}(q_{2}|q_{1})$ and $\Delta^{2}(x_{2}|x_{1})$, we performed conditional coincidence measurements between the idler and signal photons in two different transverse planes (See Fig.~\ref{Fig2}): the far- and near-field planes of the non-linear crystal, respectively \cite{BoydEPR}.
\subsection{Measurement of $\Delta^2(q_2|q_1)$}
In order to compute $\Delta^2(q_2|q_1)$, one shall perform conditional coincidence measurements at the far-field plane of the non-linear crystal. The reason is very simple: at this plane, the coincidence rate $C_q(x_1,x_2)$ is given by \begin{equation}\label{Eq_coincFF}
C_{q}(x_1,x_2)\propto\left|\tilde{E}_{p}\left[\frac{k}{f_{q}}(x_1+x_2)\right]\tilde{G}\left[\frac{k}{f_{q}}(x_1-x_2)\right]\right|^2, \end{equation} and, since $q_j=k x_j/f_{q}$, it maps the square of the probability amplitude of Eq.~(\ref{SPDC-mom}), i.e., the momentum correlation of the photons generated in the SPDC process.
Figure~\ref{Fig2}(b) shows our setup configuration. A lens $L_{q}$ with focal length $f_{q}=15\,cm$ was placed before the PBS to create the far-field plane for both signal and idler beams. At these planes, vertical slits were placed for post-selecting the desired state \cite{LeoGen,LVNRS09} and performing the conditional coincidence measurements \cite{SteveNG,BoydEPR,Yuan}. The width of each slit is $50\,\mu m$. The effect of using a non-point-like detector is that the transmittance function of the slit may broaden the far-field distribution to be measured. However, for $50\,\mu m$ slits and the experimental configuration adopted, this effect is negligible \cite{Kfact}. This can easily be checked by calculating the convolution between the transmittance function of the slit and the predicted far-field distribution.
After transmission through each slit, the down-converted photons were collected with a $10$x objective lens and multi-mode fibers. The fibers were connected to single-photon counting modules, and a coincidence circuit (with a $4\,ns$ coincidence window) recorded the data. To perform the conditional coincidence measurements of $\Delta^2(q_2|q_1)$, we scanned one slit (mode 2) in the horizontal direction while the other was fixed at the center ($q_1=0$).
To give an example of the results obtained while scanning the slit at mode-2, we show in Fig.~\ref{Fig3}(a) [Fig.~\ref{Fig3}(b)] the far-field conditional distribution measured for the second (sixth) state generated. The experimental results are represented by red points (error bars lie inside the points due to the observed high rate of coincidence counts) and the black dotted-line is the theoretical curve for these distributions arising from Eq.~(\ref{Eq_coincFF}).
\begin{figure}
\caption{(Color Online) Experimental measurement of the far- and near-field conditional distributions. In (a) [(c)] we show the results for the second state at the far-field (near-field) plane. In (b) [(d)] we show the results for the sixth state at the far-field (near-field) plane. The (red) dots represent the experimental data and the dotted (black) lines are the theoretical predictions from Eq.~(\ref{Eq_coincFF}) and Eq.~(\ref{Eq_coincNF}). }
\label{Fig3}
\end{figure}
\subsection{Measurement of $\Delta^2(x_2|x_1)$}
To obtain $\Delta^2(x_2|x_1)$, one must now measure at the near-field plane of the non-linear crystal. Again, the reason is simple: at this plane, the coincidence distribution is proportional to the square of the amplitude of Eq. (\ref{SPDC-pos}), that is \begin{equation}\label{Eq_coincNF}
C_{x}(x_1,x_2)\propto\left|E_{p}\left(\frac{x_1+x_2}{2}\right)G\left(\frac{x_1-x_2}{2}\right)\right|^2. \end{equation}
To perform the conditional coincidence measurements at the near-field plane, we used 4x objective lenses to form the image of the center of the BBO-II onto the transverse plane of the fiber couplers [see Fig.~\ref{Fig2}(c)]. To do this, we removed the slits used to scan the coincidence rate in the far-field plane as well as the lens $L_q$. The multi-mode fibers were replaced with single-mode fibers whose core diameter was $4.7\,\mu m$. The small size of the fiber core allows post-selecting and measuring $\Delta^2(x_2|x_1)$ with high accuracy \cite{ExterNF2,Eisert}. Again, the effect of the transmittance function of the fiber on the broadening of the near-field distribution is negligible \cite{Kfact}. This can easily be checked by calculating the convolution between the transmittance function of the fiber and the predicted near-field distribution for our experimental configuration. By scanning the single-mode fiber of mode 2 transversely, we recorded the conditional coincidence distribution at the near-field plane. An example of the results obtained is shown in Fig.~\ref{Fig3}(c) [Fig.~\ref{Fig3}(d)] for the second (sixth) generated state. The experimental results are represented by red points. The black dotted line is the theoretical curve for these distributions, arising from Eq.~(\ref{Eq_coincNF}).
We have measured the conditional coincidence distribution of the down-converted photons by imaging the center of the non-linear crystal onto the transverse plane of the detection system. However, as shown in Ref. \cite{ExterNF2}, the conditional distribution at the near-field plane depends strongly on which part of the crystal is imaged onto the detection system. This is especially relevant when thicker crystals are considered. In our case, we have checked that the near-field distribution does not change significantly when different planes of our thin crystal are imaged onto the detection system. To do this, we moved the crystal around the longitudinal central position $z_0=0$, imaging five different planes $z_c$ of the crystal onto the detection plane. For each $z_c$, we recorded the conditional distribution at the near-field plane. The experimental results are shown in Fig.~\ref{Fig4}. One can observe that our results are in agreement with the theoretical prediction, which takes into account our crystal length and our imaging system. A longitudinal crystal displacement around its center introduces a phase factor of $\exp[-i|\mathbf{q}_j|^2z/(2k)]$ onto Eq.~(\ref{SPDC-mom}). From our results, one can see that there is only a slight narrowing of the near-field conditional distribution, so that this effect does not significantly affect our test of $W$.
\begin{figure}
\caption{(Color Online) (a) Experimental result and (b) theoretical prediction for the near-field conditional distribution while moving longitudinally the non-linear crystal (See the main text for details).}
\label{Fig4}
\end{figure}
\subsection{Testing $W$}
\begin{table*}[t] \caption{Results of the conditional variances measured at the near- and far-field planes.\label{tabla2:resultados}} \begin{ruledtabular} \begin{tabular}{ccccc}
$P$ & $\Delta^2(x_2|0)_E\,\left[m^2\right]$ & $\Delta^2(q_2|0)_E\,\left[\frac{1}{m^2}\right]$ &$W_\mathrm{E}$ & $W_\mathrm{T}$\\ \hline $0.1595$ & $(6.62\pm0.26)\times 10^{-10}$ & $(3.55\pm0.14)\times 10^{7}$ & $0.024\pm 0.002$ & $0.033$ \\ $0.3189$ & $(9.17\pm0.37)\times 10^{-10}$ & $(1.25\pm0.05)\times 10^{8}$ & $0.115\pm 0.009$ & $0.11$ \\ $0.4556$ & $(7.76\pm0.31)\times 10^{-10}$ & $(2.60\pm0.1)\times 10^{8}$ & $0.20\pm 0.02$ & $0.19$ \\ $0.7087$ & $(7.75\pm0.31)\times 10^{-10}$ & $(4.67\pm0.19)\times 10^{8}$ & $0.36\pm 0.03$ & $0.34$ \\ $0.7973$ & $(7.23\pm0.29)\times 10^{-10}$ & $(5.28\pm0.21)\times 10^{8}$ & $0.38\pm 0.03$ & $0.38$ \\ $0.9112$ & $(8.16\pm0.33)\times 10^{-10}$ & $(5.18\pm0.21)\times 10^{8}$ & $0.42\pm 0.03$ & $0.43$ \\ \end{tabular} \end{ruledtabular} \end{table*}
In order to test the $W$ witness, and thus certify the non-Gaussian character of the CV spatial state of SPDC, we computed the variances of the conditional coincidence measurements at the near- and far-field planes for the six generated states. All the measured conditional variances are shown in Tab.~\ref{tabla2:resultados}. The errors of the variances were obtained by minimizing the squared two-norm of the residuals between the analytical and experimental results (See Fig.~\ref{Fig3}).
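The witness values $W_E$ in Tab.~\ref{tabla2:resultados} are the products of the two measured conditional variances. The sketch below recomputes them from the tabulated values, propagating the relative uncertainties linearly (a worst-case convention that appears to reproduce the quoted error bars).
\begin{verbatim}
# measured conditional variances (value, error) from the table above
near = [(6.62e-10, 0.26e-10), (9.17e-10, 0.37e-10), (7.76e-10, 0.31e-10),
        (7.75e-10, 0.31e-10), (7.23e-10, 0.29e-10), (8.16e-10, 0.33e-10)]
far  = [(3.55e7, 0.14e7), (1.25e8, 0.05e8), (2.60e8, 0.10e8),
        (4.67e8, 0.19e8), (5.28e8, 0.21e8), (5.18e8, 0.21e8)]

for (vx, dx), (vq, dq) in zip(near, far):
    W  = vx*vq
    dW = W*(dx/vx + dq/vq)        # linear (worst-case) error propagation
    print(f"W = {W:.3f} +/- {dW:.3f}")
\end{verbatim}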
\begin{figure}
\caption{(Color Online) Measurement of $W$ for the six generated states. The continuous (black) line corresponds to the theoretical value of the inequality for the spatial SPDC state. The dashed (blue) line shows the Gaussian bound. }
\label{Fig5}
\end{figure}
Figure~\ref{Fig5} shows the value of $W$ for each generated state. The continuous (black) line indicates the predicted theoretical value of the inequality for the spatial state of SPDC. The dashed (blue) line is the upper limit of this inequality for Gaussian states. For the first three entangled states, the values obtained for this inequality are below the Gaussian limit, as predicted by theory. However, for the last three entangled states, there is a clear experimental violation of this bound. Due to momentum conservation, the spatial state of the SPDC process is always entangled \cite{Eberly} [See Fig.~\ref{Fig1}(a)] and, thus, we have a clear experimental demonstration of the non-Gaussianity of the CV two-photon spatial state of SPDC.
\section{Conclusion}\label{conc} We have introduced a novel application of the EPR-steering inequality by showing that it can be used to witness the non-Gaussianity of CV quantum states. To demonstrate this, we performed an experiment using the CV spatial state of entangled down-converted photons. Due to the phase-matching conditions of the SPDC process, the generated state is naturally a pure entangled non-Gaussian state. A clear violation of the Gaussian bound of the EPR-steering inequality has been observed. Since non-Gaussian states are required for many new protocols of CV-QI, our work highlights the simplicity and relevance of using SPDC sources for new applications in CV quantum information processing.
\subsection*{Acknowledgments} We thank C. H. Monken for discussions of this paper. This work was supported by Grants FONDECYT 1120067, Milenio P10-030-F and CONICYT FB0824/2008. E. S. G. and G. C. acknowledge the financial support of CONICYT. W. A. T. N. thanks CNPq (Brazil) for financial support.
\end{document}
\begin{document}
\preprint{APS/123-QED}
\title{Direct Fidelity Estimation of Quantum States Using Machine Learning} \author{Xiaoqian Zhang} \altaffiliation{These authors contributed equally} \author{Maolin Luo} \altaffiliation{These authors contributed equally} \affiliation{School of Physics and State Key Laboratory of Optoelectronic Materials and Technologies, Sun Yat-sen University, Guangzhou 510000, China} \author{Zhaodi Wen} \affiliation{College of Information Science and Technology, College of Cyber Security, Jinan University, Guangzhou 510632, China} \author{Qin Feng} \author{Shengshi Pang} \affiliation{School of Physics and State Key Laboratory of Optoelectronic Materials and Technologies, Sun Yat-sen University, Guangzhou 510000, China} \author{Weiqi Luo} \affiliation{College of Information Science and Technology, College of Cyber Security, Jinan University, Guangzhou 510632, China} \author{Xiaoqi Zhou} \email{[email protected]} \affiliation{School of Physics and State Key Laboratory of Optoelectronic Materials and Technologies, Sun Yat-sen University, Guangzhou 510000, China}
\date{\today}
\begin{abstract}
In almost all quantum applications, one of the key steps is to verify that the fidelity of the prepared quantum state meets expectations. In this Letter, we propose a new approach to solving this problem using machine-learning techniques. Compared to other fidelity estimation methods, our method is applicable to arbitrary quantum states, the number of required measurement settings is small, and this number does not increase with the size of the system. For example, for a general five-qubit quantum state, only four measurement settings are required to predict its fidelity with $\pm1\%$ precision in a nonadversarial scenario. This machine-learning-based approach for estimating quantum state fidelity has the potential to be widely used in the field of quantum information. \end{abstract}
\maketitle
In the field of quantum information, almost all quantum applications require the generation and manipulation of quantum states. However, due to the imperfections of equipment and operation, the prepared quantum state is always different from the ideal state. Therefore, it is a key step to evaluate the deviation of the prepared state from the ideal one in the quantum applications. Quantum state tomography (QST) \cite{1Chantasri,3Wieczorek,4Cramer,5Renes,6Gross,7Steven,8Smith,9Kalev,10Riofr,11Kyrillidis,12Shang,13Silva,14Oh,15Siddhu,16Ma,17Martnez,18Sosa} is the standard method for reconstructing a quantum state to obtain its density matrix, which can be used to calculate the fidelity of the quantum state with respect to the ideal one. In recent years, researchers have proposed compressed sensing methods \cite{6Gross,7Steven,8Smith,9Kalev,10Riofr,11Kyrillidis} to improve the efficiency of QST for the pure quantum states. Despite the fact that compressed sensing greatly reduces the measurement resources, the measurement settings for QST still grow exponentially with the size of the system.
However, to evaluate the fidelity of a quantum state, full reconstruction of its density matrix is not needed. Recently, schemes \cite{19Lu,20Tokunaga,21Steven,54da,22Zhu,23Cerezo,24Somma,25Wang,53Mahler,55Li,56Yu,57Zhu,58Liu,59Zhu,60Wang,61Zhu,62Li,63Sam,64Zhang,65Jiang} for directly estimating the fidelity of quantum states, including the quantum state verification (QSV) method \cite{53Mahler,55Li,56Yu,57Zhu,58Liu,59Zhu,60Wang,61Zhu,62Li,63Sam,64Zhang,65Jiang} and the direct fidelity estimation (DFE) method \cite{21Steven}, have been proposed. The QSV method can determine whether a quantum state is the target state with few measurement resources, but this method is only applicable to special quantum states, such as the stabilizer states or the $W$ states, and is not applicable to general quantum states. Compared with the QSV method, the DFE method \cite{21Steven} is applicable to general quantum pure states but requires more measurement settings. In most practical experiments, the number of measurement settings has a significant impact on the total measurement time (changing measurement setting is time consuming). Both the QSV and the DFE methods assume that the measured quantum state may be prepared or manipulated by an adversary, which is valid for the case of quantum networks. For most local experiments in which the quantum devices are trusted, the imperfections of the quantum state are caused by noise and device defects, not by the adversary. As a result, our aim is to devise a direct fidelity estimation protocol for this scenario, further reducing the number of measurement settings required.
In this Letter, we use machine-learning methods \cite{27Yang,28Ma,29Lu,30Gao,31Deng,35Ren,36Xin,39Ling,40Miszczak,41Ahmed,42Ahmed} to tackle this problem. So far, machine-learning methods have been used for classification problems \cite{27Yang,28Ma,29Lu,30Gao,31Deng,35Ren} in the field of quantum information to detect the nonlocality \cite{27Yang}, steerability \cite{29Lu} and entanglement \cite{28Ma} of quantum states. In these previous works, the classification of quantum states could be performed with high accuracy using fewer measurement settings because the artificial neural networks learn the underlying structure of the quantum state space. In this Letter, we transform the quantum-state fidelity estimation problem into a classification problem by dividing the quantum state space into different subspaces according to the value of the fidelity, and then using a neural network to predict which subspace the quantum state is in, thereby obtaining an estimate of the quantum state fidelity. Compared with previous methods for direct estimation of fidelity, this method not only works for arbitrary quantum states, but also greatly reduces the number of measurement settings required.
First, Let us review how to represent the fidelity of a quantum state using the Pauli operators. The fidelity \cite{53Tacchino} of an arbitrary quantum state $\rho$ with respect to the desired pure state $\rho_0$ can be written as \begin{eqnarray} \displaystyle F(\rho_0,\rho)=tr\sqrt{\rho^{1/2}\rho_0\rho^{1/2}}=\sqrt{tr(\rho\rho_0)}, \end{eqnarray} where \begin{eqnarray} \rho_0=\frac{1}{2^n}\sum\limits_{j=0}^{4^n-1}a_j W_j,\quad \rho=\frac{1}{2^n}\sum\limits_{j=0}^{4^n-1}\beta_jW_j, \end{eqnarray} in which \[ \sum\limits_{j=0}^{4^n-1}a_j^2/2^n=1,\quad\quad \sum\limits_{j=0}^{4^n-1}\beta_j^2/2^n\leqslant1. \]
\noindent Here $W_j$ represents Pauli operators which are $n$-fold tensor products of $I, X, Y$, and $Z$. The fidelity in Eq.(1) can be expanded in terms of the Pauli operators' expectation values $a_j$ and $\beta_j$ \begin{eqnarray} F(\rho_0,\rho)=\sqrt{\frac{1}{2^n}\sum\limits_{j=0}^{4^n-1}\beta_j a_j}. \end{eqnarray}
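As a sanity check of Eq.~(3), the Python sketch below builds the Pauli expansion for a two-qubit example and verifies that the Pauli-expanded fidelity coincides with $\sqrt{tr(\rho\rho_0)}$. The Bell-state target and the white-noise admixture are chosen only for illustration.
\begin{verbatim}
import numpy as np
from functools import reduce
from itertools import product

I2 = np.eye(2); X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]]); Z = np.diag([1, -1])

n = 2
W = [reduce(np.kron, ops) for ops in product([I2, X, Y, Z], repeat=n)]

psi0 = np.array([1, 0, 0, 1])/np.sqrt(2)       # target: a Bell state
rho0 = np.outer(psi0, psi0)
rho  = 0.9*rho0 + 0.1*np.eye(2**n)/2**n        # hypothetical noisy state

a    = np.array([np.trace(rho0 @ w).real for w in W])  # a_j
beta = np.array([np.trace(rho  @ w).real for w in W])  # beta_j

print(np.sqrt(np.sum(a*beta)/2**n),            # Eq. (3)
      np.sqrt(np.trace(rho @ rho0).real))      # sqrt(tr(rho rho_0))
\end{verbatim}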
\begin{figure}
\caption{The artificial neural network for quantum state fidelity evaluation. The input layer neurons are loaded with the measurements of the Pauli operators, the output layer neurons correspond to different fidelity intervals, and the input and output layers are fully connected by several hidden layers. After hundreds of training sessions, a neural network model that can evaluate the fidelity of quantum states is obtained.}
\label{A1}
\end{figure}
Now, we present the neural network model used for fidelity estimation. Here we choose $k$ measurement settings to measure the $n$-qubit quantum state (See Supplementary Material \cite{70SUP} Sec. III for the selection of the measurement settings). Take a three-qubit quantum state as an example and assume the measurement setting is XYZ: since each qubit yields one of two possible results, $+1$ or $-1$, there are eight possible measurement outcomes for the three-qubit state. Using these eight outcomes, it is possible to calculate the expected value of not only the Pauli operator $XYZ$, but also the expected values of the six nontrivial Pauli operators $XYI$, $XIZ$, $XII$, $IYZ$, $IYI$, and $IIZ$. For each of these $k$ measurement settings of the $n$-qubit quantum state, $2^n$ possible outcomes are obtainable. Using these $2^n$ outcomes, the expected values of the $2^n-1$ nontrivial Pauli operators can be calculated. $M$ of these $2^n-1$ expected values will be selected as neuron inputs, and thus, the input layer has a total of $k\times M$ neurons, where $M\leqslant2^n-1$. Here we consider the case $M = 2^n-1$. We note that one can choose $M=poly(n)$ (See Supplementary Material \cite{70SUP} Sec. VIII).
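To make this counting concrete, the sketch below takes a randomly generated (hypothetical) outcome distribution of a single three-qubit measurement setting, say XYZ, and extracts from it the expectation values of the $2^3-1=7$ nontrivial Pauli operators obtained by replacing any subset of the measured operators with the identity.
\begin{verbatim}
import numpy as np
from itertools import product

n = 3
rng = np.random.default_rng(0)
p = rng.random(2**n); p /= p.sum()       # stand-in for measured frequencies
outcomes = np.array(list(product([+1, -1], repeat=n)))  # eigenvalue strings

for mask in product([0, 1], repeat=n):   # which qubits enter the product
    if not any(mask):
        continue                         # skip the trivial operator III
    signs = np.prod(np.where(np.array(mask, bool), outcomes, 1), axis=1)
    label = ''.join(c if m else 'I' for m, c in zip(mask, 'XYZ'))
    print(label, float(signs @ p))
\end{verbatim}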
Figure 1 shows the structure of the neural network we used here. The input layer neurons are fully connected to the hidden layer, i.e., each neuron of the input layer is connected to each neuron of the hidden layer. The hidden layer is also fully connected to the output layer. The output layer has a total of 122 neurons corresponding to different fidelity intervals of the quantum states. After several hundred rounds of training, the prediction accuracy of the neural network saturates, resulting in a neural network model that can predict the fidelity of the quantum states with high confidence (See Supplementary Material \cite{70SUP} Sec. VI).
In the following we describe how to generate the neural network training database for an arbitrary $n$-qubit quantum pure state. First, we generate a database for the pure state $|0\rangle^{\otimes n}\langle0|$. We divided the fidelity with respect to $|0\rangle^{\otimes n}\langle0|$ into 122 fidelity intervals \cite{68three}, and generated 20000 quantum states, satisfying randomness and uniformity, in each interval (see Supplementary Material \cite{70SUP} Sec. II), of which 16000 were used for the training of the neural network and 4000 were used for its validation. These 2\,440\,000 quantum states then constitute our original database for the $n$-qubit quantum state fidelity estimation. For an arbitrary $n$-qubit quantum pure state $\rho_0$, a unitary matrix $U$ can be found that satisfies $\rho_0=U|0\rangle^{\otimes n}\langle0|U^\dagger$. Next, we apply $U$ to each quantum state in the original database, thus obtaining a new database for $\rho_0$. Because the inner product is invariant under unitary transformations, the relation of the new database with respect to $\rho_0$ is identical to that of the original database with respect to $|0\rangle^{\otimes n}\langle0|$.
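A minimal numerical illustration of this rotation trick is given below: a database entry for $|0\rangle^{\otimes n}\langle0|$ is rotated by a unitary $U$ (here drawn at random for simplicity), and its fidelity with respect to the rotated target is unchanged. The noise admixture is only an example.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
d = 2**3                                    # three qubits, for illustration

psi0 = np.zeros(d); psi0[0] = 1.0           # |0...0>
rho0 = np.outer(psi0, psi0)
rho  = 0.95*rho0 + 0.05*np.eye(d)/d         # one entry of the original database

A = rng.standard_normal((d, d)) + 1j*rng.standard_normal((d, d))
U, _ = np.linalg.qr(A)                      # a random unitary

rho0_new = U @ rho0 @ U.conj().T            # rotated target state
rho_new  = U @ rho  @ U.conj().T            # rotated database entry

print(np.sqrt(np.trace(rho @ rho0).real),
      np.sqrt(np.trace(rho_new @ rho0_new).real))  # identical fidelities
\end{verbatim}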
Taking a general five-qubit quantum state $|\psi_0\rangle$ as an example \cite{66general}, we set $k=3$, $4$, $5$, and $6$ and generate four different neural network models with the methods described above, each predicting the fidelity of the input quantum state with respect to $|\psi_0\rangle$. When estimating fidelity with a neural network model, the error of the fidelity estimation $\epsilon$ is inversely related to its confidence level $1-\delta$, where $Pr(|\tilde{F}-F|\geqslant\epsilon)\leqslant\delta$, in which $\tilde{F}$ stands for the estimated fidelity and $F$ is the actual fidelity. Figure 2 shows that the higher the number of measurement settings $k$ used, the smaller the error of the fidelity estimation $\epsilon$ is. For the same neural network model, the higher the actual fidelity $F$ is, the smaller the fidelity estimation error $\epsilon$ is.
\begin{figure}
\caption{A plot of the fidelity estimation error $\epsilon$ of the neural network versus the actual fidelity $F$ of the quantum state when measurements are made using three, four, five, and six measurement settings. Here the target state is a five-qubit general quantum state $|\psi_0\rangle$ and the confidence level $1-\delta$ is set to be 95\%. The higher the number of measurement settings $k$ used, the smaller the error of the fidelity estimation $\epsilon$ is. The higher the actual fidelity $F$ is, the smaller the fidelity estimation error $\epsilon$ is.}
\end{figure}
\begin{figure*}
\caption{The performance of this method for fidelity estimation with neural networks varies with the number of qubits in the quantum state.
(a) The number of measurement settings $k$ decreases as the number of qubits $n$ increases when $\epsilon = 1\%$ and $1-\delta = 95\%$ (or 99\%). (b) The fidelity estimation error $\epsilon$ decreases as the number of qubits $n$ increases when $k=3$ and $1-\delta=95\%$ (or 99\%). (c) The confidence level $1-\delta$ of the fidelity estimation grows with the increase of the number of qubits $n$ when $k=3$ and $\epsilon=1\%$. We note that the fidelity of the quantum state to be estimated here is in the range of 0.95 to 1.}
\end{figure*}
Now we look at how to use a neural network model for a specific problem--to determine whether the fidelity of the input quantum states $|\psi_1\rangle$ and $|\psi_2\rangle$ \cite{70data} with respect to $|\psi_0\rangle$ exceeds 96\%. Here we choose the confidence level $1-\delta$ to be 95\%. For $|\psi_1\rangle$, we choose the three Pauli operators with the largest absolute expectation values for measurement, input the measurement results into the neural network model with $k=3$, and obtain a fidelity prediction of $(97\pm 1)\%$, which indicates that the fidelity of $|\psi_1\rangle$ exceeds 96\%. For $|\psi_2\rangle$, repeating the above operations, the fidelity obtained is $(95\pm1.22)\%$, which cannot yet indicate whether the fidelity of $|\psi_2\rangle$ exceeds 96\%. The Pauli operator with the fourth-largest absolute expectation value is then measured, its result is input into the $k=4$ neural network model together with the measurement results of the first three Pauli operators, and the fidelity prediction obtained is $(94.78\pm1)\%$, indicating that the fidelity of $|\psi_2\rangle$ does not exceed 96\%.
Figure 3 shows how the performance of our method for fidelity estimation with neural networks varies with the number of qubits in the quantum state. In our scheme, in addition to the number of qubits $n$, there are three other parameters, which are the number of measurement settings $k$, the fidelity estimation error $\epsilon$, and the confidence level $1-\delta$ of the fidelity estimation. Figure 3(a) shows that the number of measurement settings $k$ decreases as the number of qubits $n$ increases when $\epsilon = 1\%$ and $1-\delta = 95\%$ (or 99\%). Figure 3(b) shows that the fidelity estimation error $\epsilon$ decreases as the number of qubits $n$ increases when $k=3$ and $1-\delta=95\%$ (or 99\%). Figure 3(c) shows that the confidence level $1-\delta$ of the fidelity estimation grows as the number of qubits $n$ increases when $k=3$ and $\epsilon=1\%$. It can be seen from Fig. 3 that the performance of our method for estimating the fidelity of quantum states using neural networks improves as the number of qubits increases. This may look counterintuitive, and we give an explanation for this phenomenon below.
From Eq. (3), the fidelity can be rewritten as \begin{eqnarray} \begin{split} F(\rho_0,\rho)&=\sqrt{\frac{1}{2^n}\sum\limits_{j=0}^{4^n-1}\beta_j a_j}\\ &=\sqrt{\sum\limits_{j=0}^{4^n-1}P_j \gamma_j}, \end{split} \end{eqnarray} where $\gamma_j=\beta_j/a_j$, $P_j={a_j}^2/{2^n}$ and $\sum_{j=0}^{4^n-1}P_j=1$. It can be seen that the fidelity $F$ is equal to the square root of the weighted average of $\gamma_j$ in which $j$ ranges from 0 to $4^n-1$. For an ideal quantum state with unity fidelity, all $\gamma_j$ will have a value of 1; for a nonideal quantum state, these $4^n$ $\gamma_j$ will deviate from the value of 1. In the nonadversarial case, the deviation of $\gamma_j$ with respect to 1 will be general and random over all Pauli operators, and not concentrated on a few specific Pauli operators. Therefore, a fraction of $\gamma_j$ can be selected for measurement to estimate the overall deviation of $\gamma_j$ with respect to 1 and, thus, achieve the fidelity estimation. The larger the number of selected $\gamma_j$ (this number is defined as $l$), the smaller the error in the fidelity estimation. When $n$ is large, $l$ does not need to increase continuously to provide a good estimate of the fidelity \cite{21Steven}. As a visual example, for the United States presidential election with 260 million voters, the sample size of the poll only needs to be around 1000 \cite{69US}.
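This sampling idea, which is essentially the estimator of Ref.~\cite{21Steven}, can be illustrated numerically: sample $l$ indices $j$ with probability $P_j=a_j^2/2^n$ and average the corresponding $\gamma_j$. The target state, noise level and sample size in the sketch below are illustrative only.
\begin{verbatim}
import numpy as np
from functools import reduce
from itertools import product

I2 = np.eye(2); X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]]); Z = np.diag([1, -1])

n, rng = 3, np.random.default_rng(0)
W = [reduce(np.kron, ops) for ops in product([I2, X, Y, Z], repeat=n)]

v    = rng.standard_normal(2**n) + 1j*rng.standard_normal(2**n)
psi0 = v/np.linalg.norm(v)                  # hypothetical target pure state
rho0 = np.outer(psi0, psi0.conj())
rho  = 0.92*rho0 + 0.08*np.eye(2**n)/2**n   # hypothetical noisy state

a     = np.array([np.trace(rho0 @ w).real for w in W])
beta  = np.array([np.trace(rho  @ w).real for w in W])
gamma = np.divide(beta, a, out=np.zeros_like(beta), where=np.abs(a) > 1e-12)
Pj    = a**2/2**n                           # weights P_j, sum to 1

idx = rng.choice(len(W), size=300, p=Pj)    # l = 300 sampled operators
print(np.sqrt(np.trace(rho @ rho0).real),   # exact F
      np.sqrt(np.mean(gamma[idx])))         # sampled estimate of F
\end{verbatim}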
With a fixed value of $l$, the number of required measurement settings $k$ decreases as the number of qubits $n$ increases. This is because, for each measurement setting for an $n$-qubit quantum state, $2^n$ possible outcomes will be produced, and using them the $\gamma_j$ corresponding to $2^n-1$ nontrivial Pauli operators can be calculated. For example, for the measurement results of one measurement setting for a three-qubit quantum state, the $\gamma_j$ corresponding to seven nontrivial Pauli operators can be calculated, while for the measurement results of one measurement setting for a six-qubit quantum state, the $\gamma_j$ corresponding to 63 nontrivial Pauli operators can be calculated. Thus, the number of measurement settings required to obtain $l$ $\gamma_j$ for a six-qubit state is less than that for a three-qubit state--this qualitatively explains why the performance of our method for estimating the fidelity of quantum states using neural networks is getting better as the number of qubits $n$ increases.
In the following, we compare our machine-learning-based fidelity estimation method with the other three methods QST, QSV and DFE, pointing out their respective advantages, disadvantages, and applicability. The advantage of QST is that it can characterize a completely unknown quantum state and reconstruct its density matrix to obtain all the information, while the other three methods can only evaluate the fidelity of the input quantum state with respect to a specific target quantum state. Its drawback is that the resources consumed grow exponentially with the size of the system. The advantage of QSV is that it can evaluate the fidelity of a quantum state with minimal resources, while its disadvantage is that it is currently only applicable to special quantum states such as stabilizer states and not to the case where the target quantum state is a multiqubit arbitrary pure quantum state. The advantage of DFE is that it is applicable to the case where the target quantum state is an arbitrary pure quantum state, and its disadvantage is that the selection of the measurement settings requires several rounds of random switching and the total number of measurement settings is relatively large. The advantage of our method is that it is not only applicable to the case where the target quantum state is an arbitrary pure quantum state, but also requires only a small number of measurement settings, and the number of required measurement settings does not increase with the size of the system. For example, for a seven-qubit arbitrary quantum state, our method only needs to switch the measurement settings three times to achieve a fidelity estimation within 0.01 error with $95\%$ confidence level, while to achieve the same result, QST needs to switch the measurement device $3^7=2187$ times and DFE needs to switch the measurement device ${8/[0.01^2\times(1-95\%)]}=1.6\times 10^6$ times. Note that QSV cannot evaluate the fidelity of an arbitrary seven-qubit quantum state. The disadvantage of our method is that it has an upper limit on the fidelity estimation accuracy, which is not suitable for situations requiring extremely high fidelity estimation accuracy. In summary, each of the four methods has its own advantages and disadvantages, which are complementary to each other and each is suitable for different application scenarios. Our method is most suitable for evaluating whether the fidelity of a quantum state exceeds a specific value with respect to an $n$-qubit arbitrary pure quantum state.
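The settings counts quoted above follow from the $3^n$ Pauli measurement bases of QST and from the DFE estimate $8/[\epsilon^2(1-\text{confidence})]$ used in the text; for completeness:
\begin{verbatim}
n, eps, delta = 7, 0.01, 0.05
print("QST settings:", 3**n)               # 2187
print("DFE settings:", 8/(eps**2*delta))   # 1.6e6
\end{verbatim}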
To summarize, we present, in this Letter, a method for predicting the fidelity of quantum states using neural network models. Compared with previous methods for quantum state fidelity estimation, our method uses fewer measurement settings and works for arbitrary quantum states. Here our method is applicable to nonadversarial scenarios. It has the potential to be used in a wide variety of local quantum information applications, such as quantum computation, quantum simulation, and quantum metrology. A future research direction is to design machine-learning-based quantum state fidelity estimation schemes in adversarial scenarios.
This work was supported by the National Key Research and Development Program (Grants No. 2017YFA0305200 and No. 2016YFA0301700), the Key Research and Development Program of Guangdong Province of China (Grants No. 2018B030329001 and No. 2018B030325001), the National Natural Science Foundation of China (Grant No. 61974168). X. Zhou acknowledges support from the National Young 1000 Talents Plan. W. Luo acknowledges support from the National Natural Science Foundation of China (Grant No. 61877029). X. Zhang acknowledges support from the National Natural Science Foundation of China (Grant No. 62005321). S. Pang acknowledges support from the National Natural Science Foundation of China (Grant No. 12075323). \\
X. Zhang and M. Luo contributed equally to this work.
\begin{thebibliography}{10}
\bibitem{1Chantasri} A.~Chantasri, S.-S. Pang, T.~Chalermpusitarak, and A.~N. Jordan, Quantum state tomography with time-continuous measurements: Reconstruction with resource limitations, Quantum Stud 7, 23 (2020).
\bibitem{3Wieczorek} G.~Toth, W.~Wieczorek, D.~Gross, R.~Krischek, C.~Schwemmer, and H.~Weinfurter, Permutationally Invariant Quantum Tomography, Phys. Rev. Lett. 105, 250403 (2010).
\bibitem{4Cramer} M.~Cramer, M.~B. Plenio, S.~T. Flammia, R.~Somma, D.~Gross, S.~D. Bartlett, O.~Landon-Cardinal, D.~Poulin, and Y.~Liu, Efficient quantum state tomography, Nat. Commun. 1, 149 (2010).
\bibitem{5Renes} J.~Renes, R.~Blume-Kohout, A.~Scott, and C.~Caves, Symmetric informationally complete quantum measurements, J. Math. Phys.(N.Y.) 45, 2171 (2004).
\bibitem{6Gross} D.~Gross, Y.~Liu, S.~T. Flammia, S.~Becker, and J.~Eisert, Quantum State Tomography via Compressed Sensing, Phys. Rev. Lett. 105, 150401 (2010).
\bibitem{7Steven} S.~T. Flammia, D.~Gross, Y.~Liu, and J.~Eisert, Quantum tomography via compressed sensing: Error bounds, sample complexity and efficient estimators, New J. Phys. 14, 095022 (2012).
\bibitem{8Smith} A.~Smith, C.~A. Riofr\'{I}o, B.~E. Anderson, H.~Sosa-Martinez, I.~H. Deutsch, and P.~S. Jessen, Quantum state tomography by continuous measurement and compressed sensing, Phys. Rev. A 87, 030102(R) (2013).
\bibitem{9Kalev} A.~Kalev, R.~L. Kosut, and I.~H. Deutsch, Quantum tomography protocols with positivity are compressed sensing protocols, npj Quantum Inf. 1, 15018 (2015).
\bibitem{10Riofr} C.~Riofr\'{I}o, D.~Gross, S.~Flammia, T.~Monz, D.~Nigg, R.~Blatt, and J.~Eisert, Experimental quantum compressed sensing for a seven-qubit system, Nat. Commun. 8, 15305 (2017).
\bibitem{11Kyrillidis} A.~Kyrillidis, A.~Kalev, D.~Park, S.~Bhojanapalli, C.~Caramanis, and S.~Sanghavi, Provable compressed sensing quantum state tomography via non-convex methods, npj Quantum Inf. 4, 36 (2018).
\bibitem{12Shang} J.~Shang, Z.~Zhang, and H.~K. Ng, Superfast maximum-likelihood reconstruction for quantum tomography, Phys. Rev. A 95, 062336 (2017).
\bibitem{13Silva} G.~Silva, S.~Glancy, and H.~Vasconcelos, Investigating bias in maximum-likelihood quantum-state tomography, Phys. Rev. A 95, 022107 (2017).
\bibitem{14Oh} C.~Oh, Y.~Teo, and H.~Jeong, Efficient Bayesian credible region certification for quantum-state tomography, Phys. Rev. A 100, 012345 (2019).
\bibitem{15Siddhu} V.~Siddhu, Maximum a $posteriori$ probability estimates for quantum tomography, Phys. Rev. A 99, 012342 (2019).
\bibitem{16Ma} X.~Ma, T.~Jackson, H.~Zhou, J.-X. Chen, D.-W. Lu, M.~D. Mazurek, K.~A.~G. Fisher, X.-H. Peng, D.~Kribs, K.~J. Resch, Z.-F. Ji, B.~Zeng, and R.~Laflamme, Pure-state tomography with the expectation value of Pauli operators, Phys. Rev. A 93, 032140 (2016).
\bibitem{17Martnez} D.~Martnez, M.~Sol\'{I}-Prosser, G.~Ca\~{n}as, O.~Jim\'{e}nez, A.~Delgado, and G.~Lima, Experimental quantum tomography assisted by multiply symmetric states in higher dimensions, Phys. Rev. A 99, 012336 (2019).
\bibitem{18Sosa} H.~Sosa-Martinez, N.~Lysne, C.~Baldwin, A.~Kalev, I.~Deutsch, and P.~Jessen, Experimental Study of Optimal Measurements for Quantum State Tomography, Phys. Rev. Lett. 119, 150401 (2017).
\bibitem{19Lu} C.-Y. Lu, O.~G\"{u}hne, W.-B. Gao, and J.-W. Pan, Toolbox for entanglement detection and fidelity estimation, Phys. Rev. A 76, 030305(R) (2007).
\bibitem{20Tokunaga} Y.~Tokunaga, T.~Yamamoto, M.~Koashi, and N.~Imoto, Fidelity estimation and entanglement verification for experimentally produced four-qubit cluster states, Phys. Rev. A 74, 020301(R) (2006).
\bibitem{21Steven} S.~T. Flammia and Y.-K. Liu, Direct Fidelity Estimation from Few Pauli Measurements, Phys. Rev. Lett. 106, 230501 (2011).
\bibitem{54da} M. P. da Silva, O. Landon-Cardinal, and D. Poulin, Practical Characterization of Quantum Devices without Tomography, Phys. Rev. Lett. 107, 210404 (2011).
\bibitem{22Zhu} H.-J. Zhu and M.~Hayashi, Optimal verification and fidelity estimation of maximally entangled states, Phys. Rev. A 99, 052346 (2019).
\bibitem{23Cerezo} M.~Cerezo, A.~Poremba, L.~Cincio, and P.~J. Coles, Variational quantum fidelity estimation, Quantum 4, 1 (2020).
\bibitem{24Somma} D.~S. Rolando, J.~Chiaverini, and D.~J. Berkeland, Lower bounds for the fidelity of entangled-state preparation, Phys. Rev. A 74, 052302 (2006).
\bibitem{25Wang} J.~Wang, Z.~Han, S.~Wang, Z.~Li, L.~Mu, H.~Fan, and L.~Wang, Scalable quantum tomography with fidelity estimation, Phys. Rev. A 101, 032321 (2020).
\bibitem{53Mahler} D.~Mahler, L.~A. Rozema, A.~Darabi, C.~Ferrie, R.~Blume-Kohouz, and A.~Steinberg, Adaptive Quantum State Tomography Improves Accuracy Quadratically, Phys. Rev.
Lett. 111, 183601 (2013).
\bibitem{55Li} Z.-H. Li, Y.-G. Han, and H.-J. Zhu, Efficient verification of bipartite pure states, Phys. Rev. A 100, 032316 (2019).
\bibitem{56Yu} X.-D. Yu, J.-W. Shang, and O.~G\"{u}hne, Optimal verification of general bipartite pure states, npj Quantum Inf. 5, 112 (2019).
\bibitem{57Zhu} H.-J. Zhu and M.~Hayashi, Efficient Verification of Pure Quantum States in the Adversarial Scenario, Phys. Rev. Lett. 123, 260504 (2019).
\bibitem{58Liu} Y.-C. Liu, X.-D. Yu, J.-W. Shang, H.-J. Zhu, and X.-D. Zhang, Efficient Verification of Dicke States, Phys. Rev. Applied 12, 044020 (2019).
\bibitem{59Zhu} H.-J. Zhu and M.~Hayashi, Efficient Verification of Hypergraph States, Phys. Rev. Applied 12, 054047 (2019).
\bibitem{60Wang} K.~Wang and M.~Hayashi, Optimal verification of two-qubit pure states, Phys. Rev. A 100, 032315 (2019).
\bibitem{61Zhu} H.-J. Zhu and M.~Hayashi, General framework for verifying pure quantum states in the adversarial scenario, Phys. Rev. A 100, 062335 (2019).
\bibitem{62Li} Z.-H.~Li, Y.-G. Han, and H.-J. Zhu, Optimal Verification of Greenberger-Horne-Zeilinger states, Phys. Rev. Applied 13, 054002 (2020).
\bibitem{63Sam} S.~Pallister, N.~Linden, and A.~Montanaro, Optimal Verification of Entangled States with Local Measurements, Phys. Rev. Lett. 120, 170502 (2018).
\bibitem{64Zhang} W.-H. Zhang, C.~Zhang, Z.~Chen, X.-X. Peng, X.-Y. Xu, P.~Yin, S.~Yu, X.-J. Ye, Y.-J. Han, J.-S. Xu, G.~Chen, C.-F. Li, and G.-C. Guo, Experimental optimal verification of entangled states using local measurements, Phys. Rev. Lett. 125, 030506 (2020).
\bibitem{65Jiang} X.-H. Jiang, K. Wang, K.-Y. Qian, Z.-Z. Chen, Z.-Y. Chen, L.-L. Lu, L.-J. Xia, F.-M. Song, S.-N. Zhu, and X.-S. Ma, Towards the standardization of quantum state verification using optimal strategies, npj Quantum Inf. 6, 90 (2020).
\bibitem{27Yang} M.~Yang, C.~Ren, Y.~Ma, Y.~Xiao, and X.~Ye, Experimental Simultaneous Learning of Multiple Nonclassical Correlations, Phys. Rev. Lett. 123, 190401 (2019).
\bibitem{28Ma} Y.-C. Ma and M.-H. Yung, Transforming Bell's inequalities into state classifiers with machine learning, npj Quantum Inf. 4, 34 (2018).
\bibitem{29Lu} S.-R. Lu, S.-L. Huang, K.-R. Li, J.~Li, J.-X. Chen, D.-W. Lu, Z.-F. Ji, Y.~Shen, D.-L. Zhou, and B.~Zeng, Separability-entanglement classifier via machine learning, Phys. Rev. A 98, 012315 (2018).
\bibitem{30Gao} J.~Gao, L.-F. Qiao, Z.-Q. Jiao, Y.-C. Ma, C.-Q. Hu, R.-J. Ren, A.-L. Yang, H.~Tang, M.-H. Yung, and X.-M. Jin, Experimental Machine Learning of Quantum States, Phys. Rev. Lett. 120, 240501 (2018).
\bibitem{31Deng} D.-L. Deng, Machine Learning Detection of Bell Nonlocality in Quantum Many-Body Systems, Phys. Rev. Lett. 120, 240402 (2018).
\bibitem{35Ren} C.-L. Ren and C.-B. Chen, Steerability detection of an arbitrary two-qubit state via machine learning, Phys. Rev. A 100, 022314 (2019).
\bibitem{36Xin} T.~Xin, S.-R. Lu, N.-P. Cao, G.~Anikeeva, D.-W. Lu, J.~Li, G.-L. Long, and B.~Zeng, Local-measurement-based quantum state tomography via neural networks, npj Quantum Inf. 5, 109 (2019).
\bibitem{39Ling} A.~Ling, K.~P. Soh, A.~Lamas-Linares, and C.~Kurtsiefer, Experimental polarization state tomography using optimal polarimeters, Phys. Rev. A 74, 022309 (2006).
\bibitem{40Miszczak} J.~A. Miszczak, Generating and using truly random quantum states in MATHEMATICA, Comput. Phys. Comm. 183, 118 (2012).
\bibitem{41Ahmed} S. Ahmed, C. S. Mu\~{n}oz, F. Nori, and A. F. Kockum. Quantum state tomography with conditional generative adversarial networks. arXiv:2008.03240v2 [quant-ph] (2020).
\bibitem{42Ahmed} S. Ahmed, C. S. Mu\~{n}oz, F. Nori, and A. F. Kockum, Classification and reconstruction of optical quantum states with deep neural networks. arXiv:2012.02185v1.
\bibitem{53Tacchino} D. F. James, P. G. Kwiat, W. J. Munro, and A. G. White, Measurement of qubits, Phys. Rev. A 64, 052312 (2001).
\bibitem{70SUP} See Supplemental Material at http://link.aps.org/supplemental/10.1103/PhysRevLett.127.130503 for more technical details, which includes Refs.[3,46,51].
\bibitem{52Tacchino} F.~Tacchino, C.~Macchiavello, D.~Gerace, and D.~Bajoni, An artificial neuron implemented on an actual quantum processor, npj Quantum Information 5, 26 (2019).
\bibitem{68three} For 122 labels, the division of the fidelity is $[0:0.05:0.6,0.61:0.01:0.8,1-\frac{1.78}{9}:\frac{0.02}{9}:1]$.
\bibitem{66general}
The five-qubit state $|\psi_0\rangle$ is represented by the following vector $[0.1592 + 0.0000i;0.0474 - 0.1834i;-0.0431 - 0.1664i;-0.3363 - 0.0787i;-0.0380 - 0.1645i; 0.0378 - 0.0606i;-0.0846 + 0.2026i;0.1586 + 0.2416i;0.1116 - 0.0157i;0.1064 + 0.0906i;0.0608 - 0.1556i;0.0302 - 0.0418i;-0.0254 - 0.0624i;-0.0319 - 0.2507i;0.0381 - 0.1123i;-0.1093 - 0.1652i;-0.0873 - 0.1461i;0.1807 - 0.1654i;0.2658 + 0.0427i;-0.2010 - 0.0322i;0.0356 - 0.0686i;0.1222 + 0.0009i;0.0944 + 0.1331i;0.0080 + 0.0417i;-0.0748 - 0.0483i; 0.0762 + 0.0756i;0.1541 - 0.0227i;-0.1414 + 0.1879i;-0.1427 + 0.0462i;-0.1530 - 0.0816i;0.0003 - 0.1342i;-0.1308 - 0.0317i]$.
\bibitem{70data}
The specific forms of $|\psi_1\rangle$ and $|\psi_2\rangle$ can be found in https://doi.org/10.6084/m9.figshare.14371157.
\bibitem{69US} https://www.scientificamerican.com/article/howcan-a-poll-of-only-100/. \end{thebibliography}
\section*{Supplementary materials for ``Direct Fidelity Estimation of Quantum States using Machine Learning"}
In this supplementary material, we provide further results. Sec.(I) describes the basic structure of artificial neural networks (ANNs) and our artificial neural network. Sec.(II) gives the method to generate quantum states with specified fidelity; it also analyzes the uniformity of the pure-state and mixed-state fidelities, the purity distribution of mixed states, the generality of our method, and Poisson noise. The method for selecting the Pauli operators is given in Sec.(III). Sec.(IV) analyzes the accuracy of the quantum state fidelity estimation. Sec.(V) gives the verification accuracy of the neural network. Sec.(VI) presents a practical application of our neural network. In Sec.(VII), we discuss the scalability of the neuron number.
\section{The structure of ANNs and our ANN} \centerline{\textbf{The basic structure of ANNs}}
An ANN consists of an input layer, hidden layers, and an output layer (See Fig.1). The input layer consists of $k\times2^n$ neurons corresponding to the probability of each outcome, in which $n$ represents the number of qubits and $k$ represents the number of Pauli combinations. The input vector is denoted by $\bf{x_0}$, and the intermediate vector $\bf{x_1}$ of the hidden layer is generated by the non-linear relation \begin{eqnarray} \bf{x_1}=\sigma_{RL}(W_1\bf{x_0}+\bf{\omega_1}), \end{eqnarray} where $\sigma_{RL}$ is the ReLU function applied to each neuron in the hidden layer, defined as $\sigma_{RL}(z_i)=\max(z_i,0)$ $(i=1,2,3,...)$. The matrix $\bf{W_1}$ is the initialized weight and the vector $\bf{\omega_1}$ is the bias between the input layer and the hidden layer. The output vector, denoted $\bf{x_2}$, is generated using the function \begin{eqnarray} \bf{x_2}=\sigma_s(W_2\bf{x_1}+\bf{\omega_2}), \end{eqnarray} where $\sigma_s$ is the Softmax function defined by $\sigma_s(z_i)=\frac{e^{z_i}}{\sum_{k=1}^{122}e^{z_k}}$ $(i=1,...,122)$. The matrix $\bf{W_2}$ is the initialized weight between the hidden layer and the output layer, while the vector $\bf{\omega_2}$ is the bias. The loss function is the categorical cross-entropy, written as $-\frac{1}{n}\sum_{s}[y_s\log a_s+(1-y_s)\log(1-a_s)]$. The subscript $s$ denotes the index of the training sample, $y$ represents the target labels, $a$ represents the output labels of the ANN, and $n$ is the number of training samples. During the machine learning process, $W_1$, $\omega_1$, $W_2$, $\omega_2$ are continuously optimized until the confidence level reaches saturation, and then the training is stopped.
\begin{figure}\label{A1}
\end{figure}
\centerline{\textbf{The ANN for fidelity estimation}}
We use four 2080Ti GPUs. Among the built-in optimizers in TensorFlow, we choose the one with the best performance on our task: the Nadam optimizer (Nesterov-accelerated adaptive moment estimation). This neural network contains 122 labels and uses 1,952,000 samples for training and 488,000 samples for validation, i.e., each label contains 16,000 training samples and 4,000 validation samples. By tuning the input batch size, the number of neurons, and the number of training rounds, the performance of the neural network is continuously optimized. The resulting parameters used for training this neural network for two-qubit to seven-qubit quantum states are shown in Table S1 and Table S2. Hid-neu denotes the number of hidden neurons.
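A minimal TensorFlow/Keras sketch of such a network is given below. The input dimension, hidden-layer widths and training hyper-parameters are placeholder values taken from the ranges in Tables S1 and S2 (here the five-qubit row of Table S1), not the exact configuration used for every state.
\begin{verbatim}
import tensorflow as tf

k, n = 3, 5                           # measurement settings, qubits
M = 2**n - 1                          # main-text choice; Table S1 uses a polynomial M
input_dim = k*M

model = tf.keras.Sequential([
    tf.keras.Input(shape=(input_dim,)),
    tf.keras.layers.Dense(900, activation="relu"),    # hidden layers, cf. Table S1
    tf.keras.layers.Dense(300, activation="relu"),
    tf.keras.layers.Dense(122, activation="softmax"), # one neuron per fidelity label
])

model.compile(optimizer=tf.keras.optimizers.Nadam(),
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# model.fit(x_train, y_train, batch_size=16384, epochs=500,
#           validation_data=(x_val, y_val))   # training data not reproduced here
\end{verbatim}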
\begin{table}[!htbp]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\multicolumn{8}{c}{\rule[-3mm]{0mm}{5mm} Table S1. Parameters of the neural network with $k\times[\frac{(n^4-2n^3+11n^2+14n)}{24}+1]$}\\
\multicolumn{8}{c}{\rule[-3mm]{0mm}{5mm} \qquad\qquad expected values of non-trivial Pauli operators as the neuron inputs.}\\
\hline
\multicolumn{4}{|c|}{four-qubit states} &\multicolumn{4}{c|}{five-qubit states} \\
\hline
state & epoch & Batch size &Hid-neu &state & epoch & Batch size & Hid-neu \\
\hline
\multirow{3}{*}{$|\varphi_4\rangle$}&\multirow{3}{*}{400}&\multirow{3}{*}{8192}&2000&\multirow{3}{*}{$|\varphi_5\rangle$}&\multirow{3}{*}{500}&\multirow{3}{*}{16384}&700-300\\
& & &3000& & & &900-300\\
& & &5000& & & &1000-300\\
\hline
\multicolumn{4}{|c|}{six-qubit states} &&&&\\
\hline
state & epoch & Batch size &Hid-neu &&&&\\
\hline
\multirow{3}{*}{$|\varphi_6\rangle$}&\multirow{3}{*}{500}&\multirow{3}{*}{16384}&500-300 &&&&\\
& & &700-300 &&&&\\
& & &900-300 &&&&\\
\hline
\end{tabular}
\end{center} \end{table}
\begin{table}[!htbp]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\multicolumn{8}{c}{\rule[-3mm]{0mm}{5mm} Table S2. Parameters of the neural network with all measurement}\\
\multicolumn{8}{c}{\rule[-3mm]{0mm}{5mm} outcome probabilities as neuron inputs}\\
\hline
\multicolumn{4}{|c|}{two-qubit states} & \multicolumn{4}{c|}{three-qubit states} \\
\hline
state & epoch & Batch size & Hid-neu & state & epoch & Batch size & Hid-neu\\
\hline
Bell & \multirow{2}{*}{200} & \multirow{2}{*}{2048} & 1000 & GHZ & \multirow{2}{*}{400} & \multirow{2}{*}{4096} & 1000\\
$W$ & & & 2000 & W & & & 2000\\
$|\varphi_2\rangle$ & & & 3000 & $|\varphi_3\rangle$ & & & 3000\\
\hline
\multicolumn{4}{|c|}{four-qubit states} & \multicolumn{4}{c|}{five-qubit states} \\
\hline
state & epoch & Batch size & Hid-neu & state & epoch & Batch size & Hid-neu\\
\hline
Cluster & \multirow{5}{*}{400} & \multirow{5}{*}{8192} & & Cluster & \multirow{6}{*}{200} & \multirow{6}{*}{16384} & \\
W & & & 2000 & C-ring & & & 500-500\\
GHZ & & & 3000 & Dicke & & & 1000-1000\\
Dicke & & & 5000 & GHZ & & & 1500-1500\\
$|\varphi_4\rangle$ & & & & W & & & \\
\cline{1-4}
\multicolumn{4}{|c|}{six-qubit states} & $|\varphi_5\rangle$ & & & \\
\hline
state & epoch & Batch size & Hid-neu & \multicolumn{4}{c|}{seven-qubit states} \\
\hline
$C_{23}$ & \multirow{5}{*}{500} & \multirow{5}{*}{16384} & & state & epoch & Batch size & Hid-neu\\
\cline{5-8}
Dicke & & & 500-500 & \multirow{4}{*}{$|\varphi_7\rangle$} & \multirow{4}{*}{500} & \multirow{4}{*}{16384}&\\
GHZ & & & 1000-1000& & & & 800-400\\
W & & & 1500-1500& & & & 900-400\\
$|\varphi_8\rangle$ & & & & & & & 1000-400\\
\hline
\multicolumn{4}{|c|}{eight-qubit states} & &&& \\
\hline
state & epoch & Batch size & Hid-neu & &&& \\
\hline
\multirow{3}{*}{$|\varphi_8\rangle$} & \multirow{3}{*}{500} & \multirow{3}{*}{16384} &700-400 & &&& \\
\cline{5-8}
& & &1000-400 & &&&\\
& & & 2000-500 & &&& \\
\hline
\end{tabular}
\end{center} \end{table}
Next, we show the specific forms of the special states that appear in Table S2 (See Fig.2). Moreover, $|\varphi_2\rangle$, $|\varphi_3\rangle$, $|\varphi_4\rangle$, $|\varphi_5\rangle$, $|\varphi_6\rangle$, $|\varphi_7\rangle$ and $|\varphi_8\rangle$ are general quantum states \cite{5states}. \noindent The two-qubit Bell state and the W state are \begin{eqnarray} \begin{array}{l}
\displaystyle |\phi^+\rangle_{Bell}=\frac{1}{\sqrt{2}}(|00\rangle+|11\rangle), \\
\displaystyle |W\rangle=\frac{1}{\sqrt{3}}(|00\rangle+|01\rangle+|10\rangle). \end{array} \end{eqnarray}
\noindent The three-qubit GHZ state and the W state are \begin{eqnarray} \begin{array}{l}
\displaystyle |GHZ\rangle=\frac{1}{\sqrt{2}}(|000\rangle+|111\rangle),\\
\displaystyle |W\rangle=\frac{1}{\sqrt{3}}(|001\rangle+|010\rangle+|100\rangle). \end{array} \end{eqnarray}
\noindent The four-qubit Cluster state, the Dicke state, the GHZ state and the W state are \begin{eqnarray} \begin{array}{l}
\displaystyle |Cluster_4\rangle=\frac{1}{2}(|0000\rangle+|0011\rangle+|1100\rangle-|1111\rangle),\\
\displaystyle |Dicke_4^2\rangle=\frac{1}{\sqrt{6}}(|0011\rangle+|0110\rangle+|0101\rangle+|1010\rangle\\
\displaystyle \qquad\qquad\qquad+|1001\rangle+|1100\rangle),\\
\displaystyle |GHZ\rangle=\frac{1}{\sqrt{2}}(|0000\rangle+|1111\rangle),\\
\displaystyle |W\rangle=\frac{1}{2}(|0001\rangle+|0010\rangle+|0100\rangle+|1000\rangle). \end{array} \end{eqnarray}
\noindent The five-qubit Cluster state, the C-ring state, the Dicke state, the GHZ state and the W state are \begin{eqnarray} \begin{array}{l}
\displaystyle \qquad|Cluster_5\rangle=\frac{1}{2}(|+0+0+\rangle+|+0-1-\rangle\\
\displaystyle \qquad\qquad\qquad+|-1-0+\rangle+|-1+1-\rangle),\\
\displaystyle \qquad|C\text{-}ring_5\rangle=\frac{1}{2\sqrt{2}}(|+0+00\rangle+|-0+01\rangle\\
\displaystyle \qquad\qquad\qquad +|+0-10\rangle-|-0-11\rangle+|-1-00\rangle\\
\displaystyle \qquad\qquad\qquad +|+1-01\rangle+|-1+10\rangle-|+1+11\rangle),\\
\displaystyle \qquad|Dicke_5^2\rangle=\frac{1}{\sqrt{10}}(|11000\rangle+|10100\rangle+|01100\rangle\\
\displaystyle \qquad\qquad\qquad +|01010\rangle+|00110\rangle+|00101\rangle+|00011\rangle\\
\displaystyle \qquad\qquad\qquad +|10010\rangle+|10001\rangle+|01001\rangle),\\
\displaystyle \qquad|GHZ\rangle=\frac{1}{\sqrt{2}}(|00000\rangle+|11111\rangle),\\
\displaystyle \qquad|W\rangle=\frac{1}{\sqrt{5}}(|10000\rangle+|01000\rangle+|00100\rangle\\
\displaystyle \qquad\qquad\qquad +|00010\rangle+|00001\rangle). \end{array} \end{eqnarray}
\noindent The six-qubit Cluster state, the Dicke state, the GHZ state and the W state are \begin{eqnarray} \begin{array}{l}
\displaystyle \qquad|C23\rangle=\frac{1}{2}(|+0++0+\rangle+|+0+-1-\rangle\\
\displaystyle \qquad\qquad\qquad+|-1-+0+\rangle-|-1--1-\rangle),\\
\displaystyle \qquad|Dicke_6^2\rangle=\frac{1}{\sqrt{15}}(|000011\rangle+|000101\rangle+|001010\rangle\\
\displaystyle \qquad\qquad\qquad+|001100\rangle+|110000\rangle+|010100\rangle+|101000\rangle\\
\displaystyle \qquad\qquad\qquad +|001001\rangle+|010001\rangle+|100001\rangle +|100010\rangle\\
\displaystyle \qquad\qquad\qquad+|100100\rangle+|000110\rangle+|011000\rangle+|010010\rangle),\\
\displaystyle \qquad|GHZ\rangle=\frac{1}{\sqrt{2}}(|000000\rangle+|111111\rangle),\\
\displaystyle \qquad|W\rangle=\frac{1}{\sqrt{6}}(|000001\rangle+|000010\rangle+|000100\rangle\\
\displaystyle \qquad\qquad+|001000\rangle+|010000\rangle+|100000\rangle). \end{array} \end{eqnarray}
\begin{figure*}
\caption{The structure of Cluster states. (a) The four-qubit line Cluster state $|Cluster_4\rangle$. (b) The five-qubit line Cluster state $|Cluster_5\rangle$. (c) The five-qubit ring Cluster state $|C\text{-}ring\rangle$. (d) The six-qubit grid Cluster state $|C23\rangle$.}
\label{A1}
\end{figure*}
\section{Generation and analysis of quantum states} \centerline{\textbf{A. Generation of quantum pure states with specified fidelity}}
Preparing quantum states plays an important role in realizing quantum information and quantum computing, but device imperfections and the influence of noise mean that the states actually obtained are mixed states. Here, we use neural-network techniques to evaluate whether the fidelity between such a prepared state and the ideal state satisfies the requirements. Our neural network inputs are derived from mixed states with specified fidelity. We first introduce the method for generating a pure state.
Step 1. Generating an arbitrary pure state\\ In Mathematica, we create a pure state of arbitrary dimension with the help of the function \emph{RandomKet(D)} \cite{40Miszczak}. Specifically, the \emph{RandomKet(D)} function calls \emph{RandomSimplex(D)} and \emph{RandomReal} to generate a $D$-dimensional arbitrary pure state. The Mathematica code for generating an arbitrary pure state is shown below.
\begin{lstlisting}[frame=shadowbox]
(* Random point on the (d-1)-simplex: gaps between sorted uniform samples *)
RandomSimplex[d_]:=Block[{r,r1,r2},
  r=Sort[Table[RandomReal[{0,1}],{i,1,d-1}]];
  r1=Append[r,1];
  r2=Prepend[r,0];
  r1-r2];

(* Random pure state of dimension n: the simplex point gives the squared
   amplitudes, to which uniformly random phases are attached *)
RandomKet[n_]:=Block[{p,ph},
  p=Sqrt[RandomSimplex[n]];
  ph=Exp[I RandomReal[{0,2\[Pi]},n-1]];
  ph=Prepend[ph,1];
  p*ph];
\end{lstlisting}
Step 2. Generation of a pure state with specified fidelity corresponding to the state $|0\rangle^{\otimes n}$\\ An arbitrary pure state can be expanded as \begin{eqnarray}
|\varphi\rangle=\sum_{i=0}^{2^n-1}\alpha_i|i\rangle, \end{eqnarray}
where $|i\rangle$ $(i=0, 1, \ldots, 2^n-1)$ are the computational basis vectors. For convenience, we rewrite Eq.(9) as follows. \begin{eqnarray}
|\varphi\rangle=f|0\rangle^{\otimes n}+\sum_{i=1}^{2^n-1}\alpha_i|i\rangle=f|0\rangle^{\otimes n}+\sqrt{1-f^2}|\phi\rangle^{2^n-1}, \end{eqnarray}
where $\alpha_0=f$. The state $|\phi\rangle^{2^n-1}$ is a $(2^n-1)$ dimensional arbitrary pure state. Therefore, the fidelity $f$ between $|0\rangle^{\otimes n}$ and $|\varphi\rangle$ is given as \begin{eqnarray}
f=F(|0\rangle^{\otimes n},|\varphi\rangle)=|\langle0|^{\otimes n}|\varphi\rangle| \end{eqnarray}
Hence, we can generate an $n$-qubit pure state dataset with the target state $|0\rangle^{\otimes n}$ using Matlab.
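As an illustration of Step 2, a minimal Python/NumPy sketch is given below. It is not the Matlab code used to build our datasets; the function names are ours, and the Gaussian construction of the random complement state $|\phi\rangle^{2^n-1}$ is an assumption (any way of drawing a random pure state, e.g. \emph{RandomKet} above, could be used instead).
\begin{lstlisting}[frame=shadowbox]
import numpy as np

def random_pure_state(dim, rng):
    # A random pure state of the given dimension (Gaussian method).
    vec = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return vec / np.linalg.norm(vec)

def pure_state_with_fidelity(n_qubits, fidelity, rng=None):
    # n-qubit pure state whose overlap with |0...0> has modulus `fidelity`.
    rng = np.random.default_rng() if rng is None else rng
    dim = 2 ** n_qubits
    state = np.zeros(dim, dtype=complex)
    state[0] = fidelity                                  # alpha_0 = f on |0...0>
    state[1:] = np.sqrt(1 - fidelity ** 2) * random_pure_state(dim - 1, rng)
    return state

psi = pure_state_with_fidelity(4, 0.25)
print(abs(psi[0]), np.linalg.norm(psi))   # 0.25 and 1.0 (up to rounding)
\end{lstlisting}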
Step 3. Generation of a pure state with a specified fidelity corresponding to an arbitrary pure state\\
For convenience, we set $|\textbf{0}\rangle=|0\rangle^{\otimes n}$. Then we rewrite Eq.(11) as follows. \begin{eqnarray}
f=F(|\bf{0}\rangle,\sigma)=\sqrt{\langle\bf{0}|\sigma|\bf{0}\rangle}, \end{eqnarray}
where $\sigma$ can be viewed as a state in the dataset $S$ with a target state $|\bf{0}\rangle$. If we choose a new target pure state $\rho$, there is a unitary matrix transformation $U$ from $|\bf{0}\rangle\langle \bf{0}|$ to $\rho$. This unitary $U$ can be calculated by the code \emph{Findunitary.m}. As soon as such a unitary is found \cite{4Cramer,52Tacchino}, the relative state of $\sigma$ is directly obtained as \begin{eqnarray} \sigma'=U\sigma U^\dagger, \end{eqnarray} where the state $\sigma'$ belongs to the database $S'$ of the target state $\rho$. The fidelity $f'$ between the target state $\rho$ and the state $\sigma'$ can then be calculated, that is, \begin{eqnarray}
f'=F(\rho,\sigma')=\sqrt{tr(\rho\sigma')}=\sqrt{\langle\bf{0}|U^\dagger\sigma'U|\bf{0}\rangle}=\sqrt{\langle\bf{0}|\sigma|\bf{0}\rangle}=f. \end{eqnarray}
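The construction of such a unitary is illustrated by the Python/NumPy sketch below. It is not the \emph{Findunitary.m} routine itself, only one simple way (via a QR decomposition) of obtaining a unitary that maps $|\bf{0}\rangle$ to a given target pure state, which is all that Eqs.(13-14) require; the function names are ours.
\begin{lstlisting}[frame=shadowbox]
import numpy as np

def unitary_from_zero_to(target):
    # A unitary U with U|0...0> = |target>, obtained by completing the target
    # vector to an orthonormal basis via a QR decomposition.
    dim = target.shape[0]
    m = np.eye(dim, dtype=complex)
    m[:, 0] = target
    q, r = np.linalg.qr(m)
    q[:, 0] *= r[0, 0]   # undo the phase chosen by QR; |r[0,0]| = 1 here
    return q

rng = np.random.default_rng(0)
target = rng.normal(size=8) + 1j * rng.normal(size=8)
target /= np.linalg.norm(target)
U = unitary_from_zero_to(target)

sigma = np.diag(rng.random(8)).astype(complex)   # some state from the dataset S
sigma /= np.trace(sigma)
sigma_prime = U @ sigma @ U.conj().T
f  = np.sqrt(sigma[0, 0].real)                              # F(|0...0>, sigma)
fp = np.sqrt((target.conj() @ sigma_prime @ target).real)   # F(rho, sigma')
print(np.isclose(f, fp))                                    # True
\end{lstlisting}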
Step 4. Projective measurements\\ Each measurement setting $k$ is characterized by $W_k$, and each specific result $p(k_j)$ is associated with a projection operator \begin{eqnarray}
P_{k_j}=|v_{k_j}\rangle\langle v_{k_j}|, \quad j=1,2,...,2^n, \end{eqnarray}
where $|v_{k_j}\rangle$ is the $j$-th eigenvector of $W_k$, and $p(k_j)$ is equal to $tr(\rho P_{k_j})$. \\
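For completeness, a small Python/NumPy sketch of this step is shown below (a sketch rather than the code used for the datasets): it builds the Pauli setting $W_k$ from a string such as \texttt{XZZX}, diagonalises it, and returns the outcome probabilities $p(k_j)=tr(\rho P_{k_j})$.
\begin{lstlisting}[frame=shadowbox]
import numpy as np
from functools import reduce

PAULI = {"I": np.eye(2, dtype=complex),
         "X": np.array([[0, 1], [1, 0]], dtype=complex),
         "Y": np.array([[0, -1j], [1j, 0]], dtype=complex),
         "Z": np.array([[1, 0], [0, -1]], dtype=complex)}

def pauli_setting(label):
    # Tensor product W_k for a Pauli string such as "XZZX".
    return reduce(np.kron, [PAULI[c] for c in label])

def outcome_probabilities(rho, label):
    # p(k_j) = tr(rho P_{k_j}), P_{k_j} the projector onto the j-th eigenvector of W_k.
    _, vecs = np.linalg.eigh(pauli_setting(label))
    return np.real(np.einsum("ij,ij->j", vecs.conj(), rho @ vecs))

bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
print(outcome_probabilities(np.outer(bell, bell.conj()), "XX"))
\end{lstlisting}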
\centerline{\textbf{B. Generation of quantum mixed states with specified fidelity}} Here we give the method for generating mixed states with specified fidelity. In a Ginibre matrix $G$, each element is drawn from the standard complex normal distribution $CN(0,1)$. The random density matrix of a mixed state can be written as \begin{eqnarray} \rho=\frac{GG^\dagger}{tr(GG^\dagger)}. \end{eqnarray}
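A minimal Python/NumPy sketch of this construction reads as follows (the overall scale of the Ginibre entries is left unnormalised, since it cancels in $\rho$; the function name is ours).
\begin{lstlisting}[frame=shadowbox]
import numpy as np

def ginibre_density_matrix(dim, rng=None):
    # rho = G G^dagger / tr(G G^dagger), G a matrix of independent complex normals.
    rng = np.random.default_rng() if rng is None else rng
    g = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    m = g @ g.conj().T
    return m / np.trace(m)

rho = ginibre_density_matrix(2 ** 4)
print(np.trace(rho).real)                          # 1.0
print(np.min(np.linalg.eigvalsh(rho)) >= -1e-12)   # positive semidefinite
\end{lstlisting}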
Inspired by the Ginibre matrix $G$, we propose a method to prepare states with specified fidelity to the target state $|\bf{0}\rangle$, i.e., the density matrix of the desired $n$-qubit mixed state is \begin{eqnarray} \rho_n=\frac{\mathbb{G}_n\mathbb{G}_n^\dagger}{tr(\mathbb{G}_n\mathbb{G}_n^\dagger)}, \end{eqnarray} where the matrix $\mathbb{G}_n$ can be expressed as \begin{widetext} \begin{small} \begin{eqnarray} \mathbb{G}_n=\left(
\sqrt{m_1}\left(\begin{array}{c}
x_1e^{-2\pi i*rand_{11}} \\
\sqrt{1-x_1^2}e^{-2\pi i*rand_{12}}|\varphi_1\rangle \\
\end{array} \right),...,\sqrt{m_n}\left(\begin{array}{c}
x_ne^{-2\pi i*rand_{n1}} \\
\sqrt{1-x_n^2}e^{-2\pi i*rand_{n2}}|\varphi_n\rangle \\
\end{array}\right)\right) \end{eqnarray} \end{small} \end{widetext}
The notations \emph{$rand_{b1}$} and \emph{$rand_{b2}$} ($b=1,2,\ldots,n$) represent random numbers. The set $\{|\varphi_1\rangle, \ldots, |\varphi_n\rangle\}$ is a collection of $(2^n-1)$-dimensional pure states produced by the function \emph{RandomKet}. $\{m_1, \ldots, m_n\}$ is a set of real numbers drawn from a $2^N$-dimensional normalized standard normal distribution. $\{x_1, \ldots, x_n\}$ is a set of as-yet-undetermined real numbers that are closely related to the expected fidelity.
The steps for preparing the mixed state are similar to those for preparing the pure state. As can be seen from Eq.(18), constructing the density matrix of a mixed state with specified fidelity requires determining the values of $\{m_1,\ldots,m_n\}$ and $\{x_1,\ldots,x_n\}$, where the former are generated randomly using Matlab code and the latter are determined according to the corresponding constraints.
\emph{An example.}--For a two-qubit state, suppose the desired fidelity is $f_0$ for the target state $|00\rangle$. The values $\{x_1, x_2, x_3, x_4\}$ in the matrix $\mathbb{G}_n$ can be determined from Eqs.(19-22) using Matlab. The value range of $x_1$ is first determined from Eq.(19) by the given fidelity $f_0$. Once $x_1$ is fixed randomly and uniformly within this range, the value range of $x_2$ is determined by Eq.(20). We then fix $x_2$ randomly and uniformly, and the value range of $x_3$ is determined by Eq.(21). Once the value of $x_3$ is fixed randomly and uniformly, the value of $x_4$ is fixed by Eq.(22). Therefore, we obtain the matrix $\mathbb{G}_n$.
\begin{widetext} \begin{small} \begin{eqnarray} x_1\in[\min\{\max(\frac{f_0^2-\sum\limits_{i=2}^4 m_i}{m_1},0),1\},\min\{\max(\frac{f_0^2}{m_1},0),1\}],\\ x_2\in[\min\{\max(\frac{f_0^2-m_1x_1^2-\sum\limits_{i=3}^4 m_i}{m_2},0),1\}, \min\{\max(\frac{f_0^2-m_1x_1^2}{m_2},0),1\}], \\ x_3\in[\min\{\max(\frac{f_0^2-\sum\limits_{i=1}^2m_ix_i^2-m_4}{m_3},0),1\}, \min\{\max(\frac{f_0^2-\sum\limits_{i=1}^2m_ix_i^2}{m_3},0),1\}],\\ x_4=\frac{f_0^2-\sum\limits_{i=1}^3m_ix_i^2}{m_4}. \end{eqnarray} \end{small} \end{widetext}
\centerline{\textbf{C. Uniformity analysis of pure state fidelity}}
We use a computer to generate 500 single-qubit pure states. The fidelity between each of the 500 pure states and the state $|0\rangle$ is $\sqrt{0.5}$. We also give a geometric representation on the Bloch ball in Fig.3(a), from which it is clear that the distribution of the 500 states is uniform.
\begin{figure*}
\caption{(Color Online) (a) Distribution of 500 single-qubit states in the Bloch ball; (b) The distribution of 10,000 four-qubit pure states containing state 6 and state 20; (c) The distribution of 1,220 four-qubit mixed states including state 6, state 15 and state 20.}
\end{figure*}
In addition, we verify the uniformity of the four-qubit pure-state fidelities. We generate 10,000 four-qubit pure states, in which the fidelity between each of these states and the state $|0000\rangle$ is 0.25. We select 20 states out of the 10,000 states, and the fidelity between each selected state and the 9,999 other states can be calculated. We obtain 20 similar distributions, including the distribution of random state 6 and the distribution of random state 20 shown in Fig.3(b). We conclude that the distribution of the 10,000 states is uniform.
\centerline{\textbf{D. Uniformity analysis of mixed state fidelity}}
Here, we verify the uniformity of the four-qubit mixed-state datasets in Fig.3(c). A total of 1,220 states are generated, in which the fidelity between each of the 1,220 states and the state $|0000\rangle$ is 0.25. We again select 20 states out of the 1,220 states. The fidelity between each selected state and the other 1,219 states can be calculated. We obtain 20 similar distribution patterns, including the distributions of random state 6, random state 15 and random state 20 in Fig.3(c). We find that the distribution of the 1,220 states is uniform.
\centerline{\textbf{E. Distribution of different purities of mixed states}}
The purity of a quantum state $\rho$ is defined as $tr(\rho^2)$, where $tr(\rho^2)=1$ means that the quantum state is pure, while $tr(\rho^2)<1$ means that it is mixed. Figure 4 shows the distribution of purity for 1,220 four-qubit mixed states, in which $m_1$ controls the purity of the quantum states. In Fig.4(a-b) the fidelity $f_0$ between the prepared state and $|0000\rangle$ is 0.25, and in Fig.4(c-d) the fidelity $f_1$ is 0.8. The controller $m_1$ is equal to 1, 0.9, 0.6, 0.2 or 0.01. Moreover, $m_1$ can also follow a uniform distribution $U(0,1)$. The analysis here demonstrates how to control the purity of the prepared states.
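For reference, the purity used here can be computed with a two-line Python sketch:
\begin{lstlisting}[frame=shadowbox]
import numpy as np

def purity(rho):
    # tr(rho^2): equal to 1 for a pure state and strictly below 1 for a mixed state.
    return np.trace(rho @ rho).real

print(purity(np.eye(4) / 4))   # 0.25, the four-dimensional maximally mixed state
\end{lstlisting}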
\begin{figure*}
\caption{(Color Online) Comparison of mixed-state distributions for different purity, where $m_1=1, 0.9, 0.6, 0.2, 0.01, U(0, 1)$. The specified quantum states fidelities are 0.25 and 0.8 respectively.}
\end{figure*}
Here, we discuss nine datasets in Table S3 for the four-qubit general state $|\varphi_4\rangle$, of which eight consist of mixed states and the remaining one of pure states. Among the eight mixed-state datasets, seven use modified purity distributions of $m_1$ and the remaining one leaves the distribution of $m_1$ unchanged. \begin{table*}[!ht]
\begin{center}
\begin{tabular}{|c|c|c|}
\multicolumn{3}{c}{\rule[-3mm]{0mm}{5mm} Table S3. Nine distributions of $m_1$}\\
\hline
ANN models & datasets & $m_1$ distribution \\
\hline
A & 1 & $m_1=1-rand*rand$ \\
\hline
B & 2 & $m_1=1-\sqrt{rand}*rand$ \\
\hline
C & 3 & $m_1$ belongs to a uniform distribution $U(0,1)$ \\
\hline
D & 4 & $m_1$ belongs to a random standard normal distribution $N(0,1)$ \\
\hline
E & 5 & Pure states \\
\hline
F & 6 & $m_1=1-rand*rand*rand$ \\
\hline
G & 7 & $m_1=1-rand*rand*rand*rand$ \\
\hline
H & 8 & $m_1=1-rand^3$ \\
\hline
I & 9 & $m_1=1-rand*rand^3$ \\
\hline
\end{tabular}
\end{center} \end{table*}
By comparing the distributions in Table S4, it can be seen that the distribution $m_1=1-rand^3$ is better than the others, because this distribution is effective for both mixed states and pure states. It is worth noting that the states drawn from the distribution $m_1=1-rand^3$ are closer to pure states than those from the other distributions. Here we only show the validation results of the nine ANN models on the nine datasets for $k=3, 4, 5$, because these results suffice to draw the conclusion stated above. \begin{table*}[!htbp]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|}
\multicolumn{10}{c}{\rule[-3mm]{0mm}{5mm} Table S4. Accuracy of nine ANN models for nine datasets}\\
\hline
datasets & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 &9 \\
\hline
\diagbox{$k$}{model} & \multicolumn{9}{c|}{A} \\
\hline
3 & 79.78\% &80.57\% &81.11\% &77.19\% &72.60\% &78.09\% &76.41\% &78.32\% &77.12\% \\
\hline
4 & 86.99\% &87.33\% &87.42\% &83.37\% &82.62\% &86.34\% &85.45\% &86.10\% &85.63\% \\
\hline
5 & 89.82\% &90.26\% &89.94\% &85.96\% &87.26\% &90.01\% &89.48\% &89.43\% &89.59\% \\
\hline
\diagbox{$k$}{model} & \multicolumn{9}{c|}{B} \\
\hline
3 & 78.84\% &80.24\% &81.21\% &77.61\% &67.65\% &76.47\% &73.82\% &77.04\% &74.98\%\\
\hline
4 & 86.34\% &87.03\% &87.36\% &83.75\% &78.13\% &84.70\% &82.97\% &84.84\% &83.57\%\\
\hline
5 & 89.58\% &90\% &89.70\% &86.49\% &83.56\% &88.89\% &87.93\% &88.65\% &88.17\%\\
\hline
\diagbox{$k$}{model} & \multicolumn{9}{c|}{C} \\
\hline
3 & 77.53\% &79.07\% &80.29\% &77.44\% &66.83\% &74.62\% &71.98\% &75.59\% &75.59\% \\
\hline
4 & 85.36\% &86.32\% &87.11\% &83.61\% &76.39\% &83.14\% &81.01\% &83.69\% &83.69\% \\
\hline
5 & 88.67\% &89.38\% &89.72\% &86.24\% &81.36\% &87.02\% &85.34\% &87.29\% &87.29\% \\
\hline
\diagbox{$k$}{model} & \multicolumn{9}{c|}{D} \\
\hline
3 & 75.29\% & 77.62\% & 79.37\% & 76.67\% & 59.93\% & 70.81\% & 67.10\% & 72.51\% & 69.18\% \\
\hline
4 & 82.65\% & 84.39\% & 85.60\% & 82.80\% & 69.73\% & 79.13\% & 76.01\% & 80.27\% & 77.61\% \\
\hline
5 & 85.72\% & 86.91\% & 87.74\% & 85.26\% & 76.09\% & 83.25\% & 80.90\% & 83.89\% & 82.05\% \\
\hline
\diagbox{$k$}{model} & \multicolumn{9}{c|}{E} \\
\hline
3 & 22.93\% & 15.73\% & 11.10\% & 10.96\% & 85.05\% & 38.62\% & 54.07\% & 34.86\% & 46.63\% \\
\hline
4 & 26.66\% & 18.21\% & 12.64\% & 12.63\% & 91.83\% & 44.57\% & 61.47\% & 39.29\% & 52.52\% \\
\hline
5 & 27.60\% & 18.59\% & 12.49\% & 12.24\% & 95.07\% & 46.41\% & 64.04\% & 40.60\% & 54.55\% \\
\hline
\diagbox{$k$}{model} & \multicolumn{9}{c|}{F} \\
\hline
3 & 80.02\% & 80.13\% & 80.18\% & 76.15\% & 75.52\% & 79.66\% & 78.62\% & 79.19\% & 78.86\% \\
\hline
4 & 87.47\% & 86.94\% & 86.49\% & 82.64\% & 85.10\% & 87.66\% & 87.48\% & 86.93\% & 87.10\% \\
\hline
5 & 90.45\% & 90.24\% & 89.55\% & 85.58\% & 90.70\% & 91.27\% & 91.37\% & 90.46\% & 91.04\% \\
\hline
\diagbox{$k$}{model} & \multicolumn{9}{c|}{G} \\
\hline
3 & 79.13\% & 78.84\% & 78.65\% & 74.30\% & 78.77\% & 79.58\% & 79.67\% & 78.96\% & 78.39\% \\
\hline
4 & 86.63\% & 85.97\% & 85.35\% & 81.16\% & 86.86\% & 87.63\% & 88.16\% & 86.78\% & 87.17\% \\
\hline
5 & 90.43\% & 89.57\% & 88.58\% & 84.44\% & 91.87\% & 91.67\% & 92.25\% & 90.50\% & 90.92\% \\
\hline
\diagbox{$k$}{model} & \multicolumn{9}{c|}{H} \\
\hline
3 & 79.09\% & 79.51\% & 80.78\% & 76.93\% & 76.03\% & 78.37\% & 77.45\% & 78.65\% & 78\% \\
\hline
4 & 87.26\% & 87.15\% & 86.80\% & 83.11\% & 84.64\% & 87.09\% & 86.27\% & 86.75\% & 86.58\% \\
\hline
5 & 90.65\% & 90.32\% & 89.24\% & 85.77\% & 89.30\% & 90.94\% & 90.38\% & 90.28\% & 90.59\% \\
\hline
\diagbox{$k$}{model} & \multicolumn{9}{c|}{I} \\
\hline
3 & 79.73\% & 79.88\% & 79.15\% & 75.84\% & 76.82\% & 79.62\% & 79.18\% & 79.33\% & 78.65\% \\
\hline
4 & 86.82\% & 86.36\% & 86.39\% & 82.18\% & 86.92\% & 87.30\% & 87.53\% & 86.79\% & 87.39\% \\
\hline
5 & 90.08\% & 89.46\% & 88.81\% & 85.18\% & 91.23\% & 91\% & 91.46\% & 90.26\% & 91.08\% \\
\hline
\end{tabular}
\end{center} \end{table*}
\centerline{\textbf{F. Universality of our method}}
We now demonstrate the universality of our method for preparing quantum state datasets. To illustrate this, we show three dataset classes with three six-qubit general target pure states $|\varphi_6^1\rangle, |\varphi_6^2\rangle, |\varphi_6^3\rangle$, respectively. By analyzing the results in Table S5, we conclude that our method of preparing datasets is universal, i.e., it applies to an arbitrary target pure state.
\begin{table*}[!htbp]
\begin{center}
\begin{tabular}{|p{2cm}|p{2cm}|p{2cm}|p{2cm}|}
\multicolumn{4}{c}{\rule[-3mm]{0mm}{5mm} Table S5. Datasets comparison of three six-qubit general states}\\
\hline
\multicolumn{4}{|c|}{six-qubit (Hid-neu:700-300, precision with $\pm 1\%$)} \\
\hline
\diagbox{$k$}{states} & $|\varphi_6^1\rangle$ & $|\varphi_6^2\rangle$ & $|\varphi_6^3\rangle$ \\
\hline
2 & 79.17\% &77.76\% &78.77\%\\
\hline
3 & 86.89\% &85.39\% &88.03\%\\
\hline
4 & 91.52\% &90.56\% &92.49\%\\
\hline
5 & 94.09\% &93.68\% &94.48\%\\
\hline
6 & 95.21\% &95.19\% &94.81\%\\
\hline
7 & 97.10\% &96.24\% &97.15\%\\
\hline
\end{tabular}
\end{center} \end{table*}
\centerline{\textbf{G. Poisson noise analysis}} In an experiment, noise is inevitable. In our work, we assume that the noise follows the Poisson law. Using the \emph{random} function in Matlab, we can obtain one value from the Poisson distribution with mean $NP$, in which $P$ is the outcome probability of the basis measurement and $NP$ is the ideal coincidence count.
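The noise model can be sketched in Python as follows; renormalising the Poissonian counts back to frequencies is our convention in this sketch and is an assumption about the preprocessing rather than a statement of the exact Matlab pipeline.
\begin{lstlisting}[frame=shadowbox]
import numpy as np

def poissonian_frequencies(probs, n_samples, rng=None):
    # Draw counts ~ Poisson(N p) for each outcome and renormalise to frequencies.
    rng = np.random.default_rng() if rng is None else rng
    counts = rng.poisson(n_samples * np.asarray(probs))
    return counts / counts.sum()

print(poissonian_frequencies([0.5, 0.25, 0.25, 0.0], 10_000))
\end{lstlisting}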
We give a comparison of different numbers of samples for a four-qubit general state (see Fig.5). The noise model uses the numbers of samples $N=$1,000, 4,000, 7,000, 10,000, 100,000 and 1,000,000. By comparison, we find that the noise model with $N=$10,000 samples is appropriate. Therefore, the datasets from two-qubit to seven-qubit states are generated with $N=$10,000 samples.
\begin{figure*}
\caption{(Color online) Comparison of the accuracy of neural network models with different numbers of samples. Taking a four-qubit general state as an example, neural network models are trained for $k=2,3,4,5,6,7$ with 1,000, 4,000, 7,000, 10,000, 100,000 and 1,000,000 samples respectively. The higher the number of Pauli operator measurement settings used, the higher the prediction accuracy is. The higher the number of samples $N$ is, the higher the accuracy is.}
\label{A1}
\end{figure*}
\section{Selection of Pauli combinations} Let $W_k$ $(k=1,2,...,2^{2n})$ denote all possible Pauli operators ($n$-fold tensor products of $I$, $X$, $Y$ and $Z$). Then $tr(\rho\sigma)$ can be rewritten as follows \cite{40Miszczak}. \begin{eqnarray} tr(\rho\sigma)=\sum_k\frac{1}{d}\chi_\rho(k)\chi_\sigma(k), \end{eqnarray} where the characteristic function is defined as $\chi_\rho(k)=tr(\rho W_k)$. For a target pure state $\rho$, we define the weight $Pr(k)$ corresponding to the Pauli operator $W_k$ as \begin{eqnarray} Pr(k)=[\chi_\rho(k)]^2. \end{eqnarray} If we want to know the fidelity between a target pure state and an arbitrary state, we only need the $\chi_\sigma(k)$ in Eq.(23) for which the weight $Pr(k)\neq0$. Obviously, a general state requires more Pauli combinations than a special state. Here we have chosen the $k$ Pauli operators with the largest absolute value of the expectation value of the desired quantum state.
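The selection can be sketched in Python as below. The brute-force loop over all $4^n$ Pauli strings is only meant for small $n$ (for the special states above the non-zero weights can instead be read off analytically), and the function names are ours.
\begin{lstlisting}[frame=shadowbox]
import itertools
import numpy as np
from functools import reduce

PAULI = {"I": np.eye(2, dtype=complex),
         "X": np.array([[0, 1], [1, 0]], dtype=complex),
         "Y": np.array([[0, -1j], [1j, 0]], dtype=complex),
         "Z": np.array([[1, 0], [0, -1]], dtype=complex)}

def pauli_operator(label):
    return reduce(np.kron, [PAULI[c] for c in label])

def top_weight_paulis(rho, n_qubits, k):
    # The k non-trivial Pauli strings with the largest |tr(rho W)|,
    # i.e. the largest weights Pr = [tr(rho W)]^2.
    weights = []
    for chars in itertools.product("IXYZ", repeat=n_qubits):
        label = "".join(chars)
        if label == "I" * n_qubits:
            continue
        chi = np.trace(rho @ pauli_operator(label)).real
        weights.append((abs(chi), label))
    weights.sort(reverse=True)
    return [label for _, label in weights[:k]]

ghz = np.zeros(8, dtype=complex)
ghz[0] = ghz[-1] = 1 / np.sqrt(2)
print(top_weight_paulis(np.outer(ghz, ghz.conj()), 3, 7))
\end{lstlisting}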
Here we show the accuracy of the neural network for the Pauli operators with a large absolute value of expectation and for the Pauli operators with a small absolute value of expectation, respectively (see Table S6). Taking the four-qubit Cluster state and the general state as examples, the ANN model using the Pauli operators with large absolute expectation values obtains more information than the one using the Pauli operators with small absolute expectation values. \begin{table*}[!htbp]
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\multicolumn{5}{c}{\rule[-3mm]{0mm}{5mm} Table S6. Comparison of two types of weights}\\
\hline
\multicolumn{5}{|c|}{four-qubit states (Precision $\pm 1\%$)} \\
\hline
\diagbox{$k$}{states} & $|\varphi_4\rangle$(largest weight, & $|\varphi_4\rangle$(smallest weight, & Cluster (weight=1, & Cluster (weight=0, \\
& Hid-neu:5000) & Hid-neu:5000) & Hid-neu:3000) & Hid-neu:3000) \\
\hline
2 & 61.48\% &56.35\% &90.75\% &67.99\% \\
\hline
3 & 77.18\% &69.43\% &96.45\% &81.98\% \\
\hline
4 & 84.51\% &77.79\% &98.48\% &91\% \\
\hline
5 & 88.47\% &83.70\% &99.37\% &96.18\% \\
\hline
6 & 91.80\% &90.76\% &99.81\% &97.34\% \\
\hline
7 & 94.21\% &92.83\% &99.96\% &97.67\% \\
\hline
\end{tabular}
\end{center} \end{table*}
We then show the selected Pauli operators from two-qubit to eight-qubit states in Table S7. \begin{table*}[!htbp]
\begin{center}
\begin{tabular}{|c|c|c|c|}
\multicolumn{4}{c}{\rule[-3mm]{0mm}{5mm} Table S7. Pauli operators}\\
\hline
\multicolumn{2}{|c|}{two-qubit} & \multicolumn{2}{c|}{three-qubit} \\
\hline
states & Pauli operators & states & Pauli operators\\
\hline
Bell &XX;YZ;ZY;YY;ZX;XZ;XY & GHZ & ZZZ;XXX;XYY;YXY;YYX; YYY;XXZ\\
$W_2$ &XX;YZ;ZY;XZ;YY;ZX;ZZ & W & ZZZ;ZXX;ZYY;XZX;XXZ;YZY;YYZ\\
$|\varphi_2\rangle$ &XY;YX;ZZ;YZ;XX;ZY;YY;ZX;XZ & $|\varphi_3\rangle$& YYY;XXY;ZZY;XYZ;YXX;YXZ;YXY \\
\hline
\multicolumn{4}{|c|}{four-qubit} \\
\hline
Cluster & \multicolumn{3}{c|}{ZZXX;ZZYY;XXZZ;YYZZ;XYXY;XYYX;YXXY} \\
Dicke & \multicolumn{3}{c|}{XXXX;YYYY;ZZZZ;XXZZ;ZZYY;ZZXX;YYZZ} \\
GHZ & \multicolumn{3}{c|}{XXXX;YYYY;ZZZZ;XXYY;XYXY;YXYX;YYXX} \\
W & \multicolumn{3}{c|}{ZZZZ;ZZXX;ZZYY;XXZZ;YYZZ;XZXZ;YZYZ} \\
$|\varphi_4\rangle$ & \multicolumn{3}{c|}{YXXZ;XYZX;ZZYY;YZXX;XYZY;YXZZ;ZZYX} \\
\hline
\multicolumn{4}{|c|}{five-qubit} \\
\hline
Cluster & \multicolumn{3}{c|}{XZZXZ;ZYXYZ;ZXZZX;YXXXY;YYZZX;XZZYY;ZYXXY} \\
C-ring & \multicolumn{3}{c|}{XXXXX;ZYXYZ;ZZYXY;XYZZY;YXYZZ;YZZYX;XZZYX} \\
Dicke & \multicolumn{3}{c|}{ZZZZZ;XXXXZ;YYYZY;ZZZXX;ZYYYY;YZZYZ;XXXZX} \\
GHZ & \multicolumn{3}{c|}{XXXXX;ZZZZZ;XYXXY;XYXYX;XYYXX;YXYYY;YYXYY} \\
W & \multicolumn{3}{c|}{ZZZZZ;XXZZZ;XZZXZ;YYZZZ;YZZYZ;ZXZXZ;ZYZYZ} \\
$|\varphi_5\rangle$ & \multicolumn{3}{c|}{YYXZX;XZZYZ;ZYYYY;XZYXX;XXZZY;YZXXZ;ZYXZY} \\
\hline
\multicolumn{4}{|c|}{six-qubit} \\
\hline
C23 & \multicolumn{3}{c|}{XZXYXY;XZXYXY;ZXYZYZ;ZYZYXZ;YXYXZX;XZXZYY;ZYZZXY} \\
Dicke & \multicolumn{3}{c|}{ZZZZZZ;XXZZZZ;ZZZZYY;ZZYYZZ;ZZXZZX;ZZZXXZ;YYZZZZ} \\
GHZ & \multicolumn{3}{c|}{XXXXXX;YYYYYY;ZZZZZZ;XXXYYY;XYYYYX;YXYYXY;YYYYXX} \\
W & \multicolumn{3}{c|}{ZZZZZZ;XZXZZZ;XZZZZX;YYZZZZ;YZZZYZ;ZXZXZZ;ZYZZYZ} \\
$|\varphi_6\rangle$ & \multicolumn{3}{c|}{XXYXYX;ZZXYZZ;YYZYXZ;ZZXZXY;ZXYXYY;ZYZZZX;YZZYZY} \\
\hline
\multicolumn{4}{|c|}{seven-qubit}\\
\hline
$|\varphi_7\rangle$ & \multicolumn{3}{c|}{XYZXXXX;ZZXZYYY;YXXYZZZ;ZYYZZZZ;XXZYYYX;YZXZXXY;ZYYXYZX} \\
\hline
\multicolumn{4}{|c|}{eight-qubit}\\
\hline
$|\varphi_8\rangle$ & \multicolumn{3}{c|}{XXXXXXXX;YYYYYYYY;ZZZZZZZZ;XYZXYZXY;YZXYZXYZ} \\
\hline
\end{tabular}
\end{center} \end{table*}
\section{Precision of fidelity estimation} Figure 6 presents some important information about the neural network with different numbers of labels. In Fig.6(a-c), for a fixed number of labels, the higher the number of Pauli operator measurement settings used, the higher the prediction accuracy is. Meanwhile, the higher the fidelity of the predicted quantum states is, the higher the accuracy is. In Fig.6(d), the more finely the quantum state fidelity interval is divided, the higher the accuracy is. However, the accuracy of the neural network model with 234 labels is not much higher than that of the model with 122 labels. Moreover, the higher the number of labels is, the more resources and time are consumed, so the neural network with 122 labels is selected as the most appropriate. The fidelity interval of the quantum state is divided into 66 labels, 122 labels and 234 labels following the specific intervals in Ref.\cite{67three}.
\begin{figure*}
\caption{(Color Online) A plot of the prediction accuracy of the neural network versus the quantum state fidelity when measurements are made using three, four, five, and six Pauli operator measurement settings. Here we choose a 5-qubit general state $|\varphi_5\rangle$ as the target state. The higher the number of Pauli operator measurement settings used, the higher the prediction accuracy is. The higher the fidelity of the predicted quantum states is, the higher the accuracy is. The more finely the quantum state fidelity interval is divided, the higher the accuracy is. However, the higher the number of labels is, the more resources and time are consumed, so the neural network model with 122 labels is selected as the most appropriate.}
\end{figure*}
\section{Accuracies of ANN models for N-qubit states} In Table S8, we show all the accuracies of the ANN models from two-qubit to seven-qubit states with the precision $\pm 1\%$. \begin{table*}[!htbp]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|}
\multicolumn{7}{c}{\rule[-3mm]{0mm}{5mm} Table S8. Accuracies of ANN models}\\
\hline
\multirow{2}{*}{\diagbox{$k$}{states}} & \multicolumn{3}{c}{two-qubit (Hid-neu:2000)} & \multicolumn{3}{|c|}{three-qubit (Hid-neu:2000)} \\
\cline{2-7}
&Bell &$W_2$ &$|\varphi_2\rangle$ &GHZ &W &$|\varphi_3\rangle$ \\
\hline
2 & 30.81\% &46.65\% &36.56\% &34.65\% &57.35\% &54.22\% \\
\hline
3 & 50.50\% &64.87\% &62.50\% &62.55\% &68.10\% &68.38\% \\
\hline
4 & 76.73\% &78.68\% &76.25\% &70.40\% &76.78\% &80.10\% \\
\hline
5 & 90.16\% &92.21\% &80.66\% &77.75\% &86.65\% &89.49\% \\
\hline
6 & 99.99\% &93.90\% &87.27\% &81.23\% &89.07\% &92.72\% \\
\hline
7 & 99.99\% &99.99\% &93.38\% &87.73\% &89.87\% &95.38\% \\
\hline
\multirow{2}{*}{\diagbox{$k$}{states}} & \multicolumn{6}{c|}{four-qubit (Hid-neu:2000)} \\
\cline{2-7}
&Cluster &GHZ &W &Dicke &\multicolumn{2}{|c|}{$|\varphi_4\rangle$} \\
\hline
2 &90.66\% &78.19\% &86.86\% &91.56\% &\multicolumn{2}{|c|}{61.93\%} \\
\hline
3 &96.47\% &97.82\% &89.25\% &96.77\% &\multicolumn{2}{|c|}{76.77\%} \\
\hline
4 &98.51\% &98.53\% &93.58\% &99.41\% &\multicolumn{2}{|c|}{84.81\%} \\
\hline
5 &99.40\% &99.34\% &95.53\% &99.80\% &\multicolumn{2}{|c|}{88.45\%} \\
\hline
6 &99.79\% &99.74\% &96.83\% &99.97\% &\multicolumn{2}{|c|}{91.57\%} \\
\hline
7 &99.96\% &99.94\% &97.92\% &99.99\% &\multicolumn{2}{|c|}{94.22\%} \\
\hline
\multirow{2}{*}{\diagbox{$k$}{states}} & \multicolumn{6}{|c|}{five-qubit (Hid-neu:500-300)} \\
\cline{2-7}
&Cluster &C-ring &Dicke &GHZ &W &$|\varphi_5\rangle$ \\
\hline
2 &87.26\% &79.87\% &84.34\% &92.28\% &81.67\% &70.40\% \\
\hline
3 &89.73\% &82.96\% &89.08\% &93.15\% &91.49\% &81.75\% \\
\hline
4 &90.48\% &84.52\% &92.62\% &93.89\% &93.87\% &88.14\% \\
\hline
5 &91.67\% &85.16\% &95.71\% &95.85\% &95.87\% &93.16\% \\
\hline
6 &92.76\% &86.19\% &96.96\% &96.62\% &96.52\% &95.92\% \\
\hline
7 &93.09\% &86.89\% &97.35\% &97.23\% &97.21\% &96.44\% \\
\hline
\multirow{2}{*}{\diagbox{$k$}{states}} & \multicolumn{3}{|c}{six-qubit (Hid-neu:1500-1500)} &\multicolumn{3}{|c|}{seven-qubit (Hid-neu:1000-400) } \\
\cline{2-7}
&GHZ &C23 &W &Dicke &$|\varphi_6\rangle$ &$|\varphi_7\rangle$ \\
\hline
2 &98.28\% &92.24\% &92.91\% &90.46\% &79.17\% &83.44\%\\
\hline
3 &99.04\% &93.48\% &94.86\% &96.69\% &86.89\% &89.90\%\\
\hline
4 &99.42\% &94.69\% &97.8\% &97.82\% &91.52\% &93.47\%\\
\hline
5 &99.64\% &95.25\% &98.24\% &98.07\% &94.09\% &95.04\%\\
\hline
6 &99.79\% &97.11\% &98.79\% &98.19\% &95.21\% &96.42\% \\
\hline
7 &99.87\% &97.41\% &99.22\% &99.01\% &97.10\% &97.16\%\\
\hline
\end{tabular}
\end{center} \end{table*}
\section{Applications of our neural networks} How can we use a trained neural network to determine whether the fidelity of an input quantum state is higher than 96\%? The fidelity of this quantum state is first predicted using the neural network with $k=2$. If the upper bound of the predicted fidelity range does not exceed 96\%, the neural network concludes that the fidelity of this state does not exceed 96\%. If the lower bound of the predicted fidelity range is above 96\%, the neural network concludes that the fidelity of this state exceeds 96\%. If the predicted fidelity range contains 96\%, it is necessary to check whether the prediction accuracy reaches $\pm1\%$. If the prediction accuracy reaches $\pm1\%$, the neural network concludes that the fidelity of this state does not exceed 96\%. If the prediction accuracy does not reach $\pm1\%$, the neural network cannot yet decide, so we continue the prediction with the $k=3$ neural network and repeat the above test. Finally, the neural network can determine whether the fidelity of this state exceeds 96\%.
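The procedure can be summarised by the following Python-style sketch. The function \texttt{predict} is a hypothetical placeholder, not code from this work: it is assumed to return, for the network trained with $k$ measurement settings, the predicted fidelity interval together with a flag indicating whether the $\pm1\%$ precision is reached.
\begin{lstlisting}[frame=shadowbox]
def exceeds_threshold(features, predict, threshold=0.96, k_values=(2, 3, 4, 5, 6, 7)):
    # Sequential test described above. `predict(features, k)` is a placeholder
    # returning (lower, upper, reaches_1_percent) for the network trained with
    # k Pauli measurement settings.
    for k in k_values:
        lower, upper, precise = predict(features, k)
        if upper <= threshold:
            return False        # fidelity does not exceed the threshold
        if lower > threshold:
            return True         # fidelity exceeds the threshold
        if precise:
            return False        # interval straddles the threshold at +-1% precision
        # otherwise use the network with one more measurement setting
    return None                 # undecided even at the largest k
\end{lstlisting}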
\section{Discussion on the scalability of the neuron number} In the main text, we discuss the case $M=2^n-1$, and in this supplementary section we discuss the case $M<2^n-1$. When $n$ is large, $2^n-1$ will be very large, and it is impossible and unnecessary for us to compute all expectation values of the $2^n-1$ non-trivial Pauli operators and take them as neuron inputs; we can select only some of them as neuron inputs. For example, we can select those Pauli operators that contain no more than three Identity operators. There are C(n,4)+C(n,3)+C(n,2)+C(n,1)+C(n,0) such Pauli operators, so $M$ will be set to $\frac{1}{24}(n^4-2n^3+11n^2+14n)+1$. If we set $k$ to $2n$, the number of input neurons $k\times M$ becomes $\frac{1}{12}(n^5-2n^4+11n^3+14n^2)+2n$, which is only of the order of $n^5$ and does not increase exponentially with $n$. Figure 7 shows how the performance of our neural network fidelity estimation changes as $n$ increases from 4 to 6 for the case of $M=\frac{1}{24}(n^4-2n^3+11n^2+14n)+1$, $k=2n$ and $1-\delta=95\%$ (or 99\%). We can see that in this case the error of fidelity estimation does not increase as $n$ increases. \begin{figure*}
\caption{(Color Online) The performance of this method for fidelity estimation with neural networks as a function of the number of qubits in the quantum state. The fidelity estimation error $\epsilon$ does not increase as the number of qubits $n$ increases for the case of $M=\frac{1}{24}(n^4-2n^3+11n^2+14n)+1$, $k=2n$ and $1-\delta=95\%$ (or 99\%).}
\end{figure*}
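As a quick numerical check of the counting above, the following Python lines evaluate $M$ and $k\times M$ directly from the binomial coefficients.
\begin{lstlisting}[frame=shadowbox]
from math import comb

def input_neurons(n):
    # M = C(n,0)+...+C(n,4) selected Pauli operators; number of inputs is k*M with k = 2n.
    m = sum(comb(n, j) for j in range(5))
    return m, 2 * n * m

for n in (4, 5, 6, 10, 20):
    print(n, input_neurons(n))   # grows like n^5, not exponentially in n
\end{lstlisting}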
\end{document}
\begin{document}
\title{\textbf{A note on the coincidence of \\the projective and conformal Weyl tensors}}
\author[]{{ Christian L\"ubbe} \footnote{E-mail address:{\tt [email protected]}}}
\affil[]{Department of Mathematics, University College London, Gower
Street, London WC1E 6BT, UK}
\maketitle
\begin{abstract} This article examines the coincidence of the projective and conformal Weyl tensors associated to a given connection $\nabla $. The connection may be a general Weyl connection associated to a conformal class of metrics $[g]$. The main result for $n \ge 4$ is that the Weyl tensors coincide iff $\nabla $ is the Levi-Civita connection of an Einstein metric. \end{abstract}
\section{Introduction}
In 1918 Hermann Weyl introduced what are now known as Weyl geometries \cite{Weyl1918}. He observed that the Riemann curvature has a conformally invariant component $C\tensor{ij}{k}{l}$, which he referred to as the conformal curvature. In \cite{Weyl1921} Weyl discussed both conformal and projective geometries and showed that analogously the Riemann curvature has a projectively invariant component $W\tensor{ij}{k}{l}$, referred to as the projective curvature. The idea has been extended to parabolic geometries (see e.g. \cite{BEG}, \cite{CSbook}) and in the modern literature the invariant curvature component is simply referred to as the Weyl tensor or the Weyl curvature, with the type of geometry typically implied by the context. In this article we will be dealing with $C\tensor{ij}{k}{l}$ and $W\tensor{ij}{k}{l}$ simultaneously and we will refer to them as the conformal and projective Weyl tensors respectively.
In \cite{Nur12} Nurowski investigated when a given projective class of connections $ [ \nabla ] $ on $M$ includes a Levi-Civita connection of some metric $g$ on $M$. An algorithm to check the metrisability of a chosen projective structure was given. In proposition 2.5 of \cite{Nur12} it was shown that the projective and conformal Weyl tensors coincide if and only if the Ricci tensor of the Levi-Civita connection satisfies \begin{equation} \label{Nurowski condition} M\tensor{abcd}{ef}{} R_{ef}=0 \end{equation} where $$ M\tensor{abcd}{ef}{} = 2 g_{a[c}\delta^e_{d]}\delta^f_{b} + 2 g_{a[d} g_{c]b}g^{ef} + 2(n-1) g_{b[d}\delta^f_{c]}\delta^e_{a}. $$ Corollary 2.6 of \cite{Nur12} deduces that the projective and conformal Weyl tensors of an Einstein metric are equal. As a comment, Nurowski raised the question of whether there are non-Einstein metrics that satisfy condition \eqref{Nurowski condition}.
This article proves that this is not the case. In particular, for a given connection $\nabla$ on an $n\ge 4$ dimensional manifold the projective and conformal Weyl tensors associated to $\nabla$ only agree if $\nabla$ is the Levi-Civita connection of an Einstein metric. The problem is addressed in more generality by allowing for general Weyl connections. This generalisation is of interest, due to the fact that neither the Ricci curvature of a general Weyl connection nor the Ricci curvature of a projective connection need be symmetric. Hence the possibility exists that the two Weyl tensors agree when using a general Weyl connection that is not a Levi-Civita connection for a metric in $[g]$.
\section{Projective and conformal connection changes}
We define the tensors \begin{equation*} \Sigma^{kl}_{ij} = \delta^k_i \delta^l_j + \delta^l_i \delta^k_j , \quad\quad S^{kl}_{ij} = \delta^k_i \delta^l_j + \delta^l_i \delta^k_j - g_{ij} g^{kl} \end{equation*} Two connections $\nabla$ and $\check{\nabla} $ are projectively related if there exists a 1-form $\check{b}_i $ such that the connection coefficients are related by \begin{equation*} \check{\Gamma}\tensor{i}{k}{j} = \Gamma\tensor{i}{k}{j} + \Sigma^{kl}_{ij} \check{b}_l \end{equation*} We denote the class of all connections projectively related to $\nabla$ by $[\nabla]_p $.
Suppose further that $\nabla$ is related to the conformal class $[g]$. By this we mean that there exists a 1-form $f_i$ such that \begin{equation} \label{Weyl connection metric condition} \nabla_i g_{kl} = -2f_i g_{kl} \end{equation} This holds for $g_{ij}$ iff it holds for any representative in $[g]$. Connections that satisfy \eqref{Weyl connection metric condition} are referred to as general Weyl connections of $[g]$. Note that the Levi-Civita connection of any representative in $[g]$ satisfies \eqref{Weyl connection metric condition}. However $\nabla$ need not be the Levi-Civita connection for a metric in $[g]$.
The connections $\nabla$ and $\hat{\nabla} $ are conformally related if there exists a 1-form $\hat{b}_i $ such that the connection coefficients are related by \begin{equation*} \hat{\Gamma}\tensor{i}{k}{j} = \Gamma\tensor{i}{k}{j} + S^{kl}_{ij} \hat{b}_l \end{equation*} We denote the class of all connections conformally related to $\nabla$ by $[\nabla]_c $. Observe that all connections in $[\nabla]_c $ satisfy \eqref{Weyl connection metric condition}.
\section{Decomposition of the Riemann curvature}
Given a connection $\nabla$ the Riemann and Ricci tensors are defined as \begin{equation*} 2 \nabla_{[i} \nabla_{j]} v^k = R\tensor{ij}{k}{l} v^l, \quad \quad R_{jl} = R\tensor{kj}{k}{l} \end{equation*} The projective and conformal Schouten tensors are related to the Ricci tensor of $\nabla$ by \cite{BEG}, \cite{Fri03} \begin{eqnarray*} \rho_{ij} &=& \frac{1}{n-1} R_{(ij)} + \frac{1}{n+1} R_{[ij]}\\ P_{ij} &=& \frac{1}{n-2} R_{(ij)} + \frac{1}{n} R_{[ij]} - \frac{R_{kl}g^{kl}}{2(n-2)(n-1)} g_{ij} \end{eqnarray*}
\noindent The Schouten tensors can be used to decompose the Riemann curvature as follows \begin{equation} \label{Riemann decomposition} R\tensor{ij}{k}{l} = W\tensor{ij}{k}{l} + 2\Sigma_{l[i}^{km} \rho_{j]m} = C\tensor{ij}{k}{l} + 2S_{l[i}^{km} P_{j]m} , \end{equation} where $W\tensor{ij}{k}{l}$ and $C\tensor{ij}{k}{l}$ are the projective and conformal Weyl tensors respectively. Moreover the once contracted Bianchi identity $\nabla_k R\tensor{ij}{k}{l} =0 $ implies \cite{BEG} that \begin{eqnarray}\label{proj_Bianchi} \nabla_k W\tensor{ij}{k}{l} &=& 2(n-2) \nabla_{[i} \rho_{j]l} = (n-2)y_{ijl}\\ \label{conf_Bianchi} \nabla_k C\tensor{ij}{k}{l} &=& 2(n-3) \nabla_{[i} P_{j]l} = (n-3)Y_{ijl}. \end{eqnarray} The tensor $y_{ijl}$ and $Y_{ijl}$ are known as the Cotton-York tensors.
Under a connection change $\check{\nabla} = \nabla + \check{b}$ respectively $\hat{\nabla} = \nabla + \hat{b}$ the Schouten tensors transform as \begin{eqnarray*} \rho_{ij} - \check{\rho}_{ij} &=& \nabla_i \check{b}_j + \half \Sigma^{kl}_{ij} \check{b}_k \check{b}_l \\ P_{ij} - \hat{P}_{ij} &=& \nabla_i \hat{b}_j + \half S^{kl}_{ij} \hat{b}_k \hat{b}_l \end{eqnarray*} In both cases the Schouten tensors absorb all terms that arise in the Riemann tensor under connection changes. It follows that the projective Weyl tensor $W\tensor{ij}{k}{l} $ and the conformal Weyl tensor $C\tensor{ij}{k}{l} $ are invariants of the projective class $[\nabla]_p$ and the conformal class $[\nabla]_c$, respectively. The question we wish to address is for which manifolds these two invariants coincide.
We note that for $n \le 2 \,$ $W\tensor{ij}{k}{l} =0$ and for $n \le 3 \,$ $C\tensor{ij}{k}{l} =0$. Therefore it follows trivially that:
\noindent \textit{In $n=2$ the Weyl tensors always agree. In $n=3$ they agree if and only if the manifold is projectively flat, i.e. the flat connection is contained in $[\nabla]_p $ }
Hence in the following we focus only on $n > 3$.
\section{Coincidence of the conformal and projective \\ Weyl tensors} The Ricci tensor can be decomposed into its symmetric trace-free, skew and trace components with respect to the metric $g_{ij}$: \begin{eqnarray} \label{Riccidecomp} R_{ij} &=& \Phi_{ij} + \varphi_{ij} + \frac{R}{n}g_{ij} \end{eqnarray} Hence the Schouten tensors can be rewritten as \begin{eqnarray} \label{projectiveSchoutentoRicci} \rho_{ij} &=& \frac{1}{n-1} \Phi_{ij} + \frac{1}{n+1} \varphi_{ij} + \frac{R}{n(n-1)}g_{ij}\\ \label{conformalSchoutentoRicci} P_{ij} &=& \frac{1}{n-2} \Phi_{ij} + \frac{1}{n} \varphi_{ij} + \frac{R}{2n(n-1)}g_{ij} \end{eqnarray} The condition $W\tensor{ij}{k}{l} = C\tensor{ij}{k}{l} $ is equivalent to \begin{equation}
2\Sigma_{l[i}^{km} \rho_{j]m} = 2S_{l[i}^{km} P_{j]m} \end{equation}
\noindent Substitutions of \eqref{projectiveSchoutentoRicci} and \eqref{conformalSchoutentoRicci} give \begin{eqnarray*} 2\Sigma_{l[i}^{km} \rho_{j]m} &=& \frac{2}{n-1}\delta_{[i}^k \Phi_{j]l} + \frac{2 R}{n(n-1)}\delta_{[i}^k g_{j]l} + \frac{2}{n+1}\delta_{[i}^k \varphi_{j]l} - \frac{2}{n+1} \delta_l^k \varphi_{ij}\\ 2S_{l[i}^{km} P_{j]m} &=& \frac{2}{n-2}\delta_{[i}^k \Phi_{j]l} - \frac{2}{n-2} g_{l[i}\Phi_{j]m}g^{km} + \frac{2 R}{n(n-1)}\delta_{[i}^k g_{j]l} \nonumber \\ && + \frac{2}{n}\delta_{[i}^k \varphi_{j]l} - \frac{2}{n} g_{l[i}\varphi_{j]m}g^{km} - \frac{2}{n} \delta_l^k \varphi_{ij} \end{eqnarray*} We observe that the scalar curvature terms are identical on both sides and hence only $\Phi_{ij}$ and $\varphi_{ij}$ are involved in our condition. The scalar curvature can take arbitrary values.
\noindent Taking the trace over $il$ on both sides and equating the results gives \begin{eqnarray*} 2\Sigma_{l[i}^{km} \rho_{j]m} g^{il} &=& \frac{1}{n-1}\Phi\tensor{j}{k}{} - \frac{R}{n} \delta_{j}^k + \frac{3}{n+1}\varphi\tensor{j}{k}{} \\ 2S_{l[i}^{km} P_{j]m} g^{il} &=& - \Phi\tensor{j}{k}{} - \frac{R}{n} \delta_{j}^k + \frac{4-n}{n}\varphi\tensor{j}{k}{} = - R\tensor{j}{k}{} + \frac{4}{n}\varphi\tensor{j}{k}{} \end{eqnarray*} Comparing irreducible components we find that we require \begin{eqnarray} \frac{n}{n-1}\Phi\tensor{j}{k}{} = 0 \quad \mathrm{and} \quad \frac{n^2-4}{n(n+1)}\varphi\tensor{j}{k}{} = 0, \end{eqnarray} where the coefficient of $\varphi\tensor{j}{k}{}$ arises as $\frac{3}{n+1}-\frac{4-n}{n}=\frac{n^2-4}{n(n+1)}$. Thus under our assumption of $n > 3$, both $\Phi_{ij}$ and $\varphi_{ij}$ must vanish. It follows that the Ricci tensor is pure trace and hence $g$ is an Einstein metric. Note that the Bianchi identities \eqref{proj_Bianchi}, \eqref{conf_Bianchi} imply that $R$ is constant.
The result can be formulated as follows \begin{theorem} Let $\nabla$ be a connection related to the conformal class $[g]$. \begin{itemize} \item In $n=2$ the Weyl tensors always vanish and hence agree. \item In $n=3$ the Weyl tensors agree if and only if the manifold is projectively flat, i.e. the flat connection is contained in $[\nabla]_p $ \item In $n \ge 4$ the Weyl tensors agree if and only if the connection $\nabla$ is the Levi-Civita connection of the metric $g$ and the manifold is an Einstein manifold. \end{itemize} \end{theorem}
\begin{corollary} If the projective and conformal Weyl tensor for $n\ge 4$ coincide then the Cotton-York tensors coincide as well. In fact they vanish identically. \end{corollary} The result follows immediately from the fact that the connection is the Levi-Civita connection of an Einstein metric. Hence the Schouten tensors are proportional to the metric and both Cotton-York tensors vanish.
\section{Conclusion}
It has been shown that the coincidence of the projective and conformal Weyl tensors is closely linked to the concept of Einstein metrics. For metric connections in $[\nabla]_c$ one could have deduced the main result directly from \eqref{Nurowski condition} by using the above decomposition of the Ricci tensor and taking suitable traces of \eqref{Nurowski condition}. However, the set-up given here allowed for a direct generalisation to Weyl connections without requiring a more general form of \eqref{Nurowski condition}. Moreover it was felt that the set-up provided more clarity about the role of the different types of curvature involved.
\end{document}
\begin{document}
\title{Entropy conservation for comparison-based algorithms\footnote{This work was completed during a Fulbright Scholarship (April 5 - Aug 31, 2019) at Stanford's Computer Science theory group, research host D. Knuth. The author is grateful for discussions with D. Knuth, V. Pratt and M. Fiore.} }
\author{M. P. Schellekens\footnote{Associate Professor, University College Cork, Department of Computer Science, Email: [email protected]} }
\date{} \maketitle \begin{abstract} Comparison-based algorithms are algorithms for which the execution of each operation is solely based on the outcome of a series of comparisons between elements \cite{knu}. Typical examples include most sorting algorithms\footnote{Such as Bubblesort, Insertionsort, Quicksort, Mergesort, \ldots \cite{knu}}, search algorithms\footnote{Such as Quickselect \cite{knu}}, and more general algorithms such as Heapify which constructs a heap data structure from an input list \cite{knu}. \emph{Comparison-based computations can be naturally represented via the following computational model} \cite{sch1}: (a) model data structures as partially-ordered finite sets; (b) model data on these by topological sorts\footnote{We use the computer science terminology for this notion. In mathematics the notion of a topological sort is referred to as a \emph{linear extension} of a partial order.}; (c) considering computation states as finite multisets of such data; (d) represent computations by their induced transformations on states.
In this view, an abstract specification of a sorting algorithm has input state given by any possible permutation of a finite set of elements (represented, according to (a) and (b), by a discrete partially-ordered set together with its topological sorts given by all permutations) and output state a sorted list of elements (represented, again according to (a) and (b), by a linearly-ordered finite set with its unique topological sort).
Entropy is a measure of ``randomness'' or ``disorder.'' Based on the computational model, we introduce an entropy conservation result for comparison-based algorithms: \emph{``quantitative order gained is proportional to positional order lost.''} Intuitively, the result bears some relation to the messy office argument advocating a chaotic office where nothing is in the right place yet each item's place is known to the owner, over the case where each item is stored in the right order and yet the owner can no longer locate the items. Formally, we generalize the result to the class of data structures representable via series-parallel partial orders--a well-known computationally tractable class \cite{moh}. The resulting ``denotational'' version of entropy conservation will be extended in follow-up work to an ``operational'' version for a core part of our computational model.
\end{abstract}
\section{Introduction} Our work investigates properties of functions arising in computation \emph{when} complexity is taken into account. For traditional denotational semantics the main property of input-output functions is that of Scott-continuity \cite{stoy}. When studying semantics of programming languages that are required to reflect the meaning of programs (intuitively the input-output relation) \emph{and} also the complexity (the efficiency measured in running-time), more refined models are required. Such models have been studied in quantitative domain theory \cite{sch4}. The story however is far from over, since even though the theory of quantitative domains has matured at the model level, the situation at the programming language level is quite different from traditional semantics. Traditional denotational semantics relies on the \emph{compositionality} of the determination of meaning\footnote{The meaning of the sequential execution of two programs is the functional composition of their meanings.}. The complexity measure of worst-case running time is inherently non-compositional. The complexity measure of average-case time is compositional \cite{sch1} but does not support a \emph{computation} of the compositional outcomes of complexities since input-distributions cannot, in general, be feasibly tracked throughout computations. This has led to the development of a special purpose programming language MOQA supporting a compositional determination of average-case time \cite{sch1}. A key aspect of MOQA is that its operations support ``global state preservation'', which in turn guarantees a compositional determination of the average-case complexity of MOQA-algorithms. MOQA-algorithms are comparison-based. Here we continue the investigation of global state preservation and show that for the general case of comparison-based algorithms this notion can be refined to a novel notion of entropy conservation.
\section{Basic notions} \subsection{Orders, data structures and sorting}
We assume familiarity with the standard notion of a \emph{partial order}, including related concepts such as extremal elements (minimal and maximal elements), a Hasse diagram, a topological sort (aka linear extension of a partial order), and a linear (or total) order.
\emph{Partial orders are implicitly assumed to be finite} unless they are clear from the context to be infinite, as is the case for the standard linear order over the set of integers. Figures displaying partial orders use a Hasse diagram representation of the transitive-reflexive reduction of the order. Partial orders are denoted by a pair $(X,\sqsubseteq)$ or using Greek letters, $\alpha,\beta, \ldots$ The size $|\alpha|$ of a partial order $\alpha = (X,\sqsubseteq)$ is the cardinality of the underlying set $X$. Specific partial orders are denoted by capital Greek or Roman letters. $\Delta_n$ denotes a discrete order of size $n$ and $L_n$ denotes a linear order of size $n$. The underlying finite set $X$ of a partial order $(X,\sqsubseteq)$ of size $n$ is typically enumerated as $X = \{x_1,\ldots,x_n\}$, using indices $i$ to enumerate elements $x_i$ of the set $X$.
We will represent data structures via finite partial orders and adopt the following convention: \emph{elements of data structures, such as, say, lists, are enumerated starting from position 1 rather than from position 0} (as would be customary in computer science).
We assume familiarity with basic \emph{data structures} such as a tree, a complete binary tree (in which each parent node has exactly two children and the leaves all have the same path-length counting from the root), and the heap data structure (a complete binary tree-structure possibly with some leaves removed in right-to-left order and labelled with integers such that each parent label is larger than the label(s) of its child(ren)). We also assume familiarity with the notion of a \emph{comparison-based algorithm} \cite{knu}.
\emph{Multisets} are set-like structures in which order plays no role but in which, contrary to sets, duplicate elements are allowed and accounted for via multiplicities.
Finally, we recall that the \emph{complement of a graph} $G_1$ is a graph $G_2$ on the same vertices such that two distinct vertices of $G_2$ are adjacent if and only if they are not adjacent in $G_1$.
\subsection{Topological sorts, state space and root states} We recall that partial orders are considered to be finite in this presentation unless otherwise stated.
\begin{definition} \label{topologicalsortdefinition} Given a (finite) partial order $\alpha$ and a linearly ordered countable set $(\mathcal{L},\leq)$, referred to as the \textbf{label set}. A \textbf{labeling} $l$ of the order $\alpha$ is an increasing injection from $\alpha$ into $(\mathcal{L},\leq)$. Note that by definition, labelings never use duplicate labels, i.e. repeated labels. This corresponds to the standard assumption in algorithmic time analysis where, to simplify the analysis, the data, such as lists, are assumed to have distinct elements\footnote{Repeated labels can be catered for in an analysis. The details are technical \cite{sch1}.}. We adopt this convention in our computational model.
A \textbf{topological sort} of a finite partial order $\alpha$ is a pair $(\alpha,l)$ consisting of the partial order $\alpha$ and a \textbf{labeling} $l$. $Top_{\mathcal{L}}(\alpha)$ denotes the set of all topological sorts $(\alpha,l)$ such that $range(l) \subseteq \mathcal{L}$. In other words, $Top_{\mathcal{L}}(\alpha)$ is the set of all topological sorts using labels from the given label set $\mathcal{L}$. In examples we will typically take $\mathcal{L}$ to be the positive integers, but the set could be any countable linear order, e.g. the words of the English alphabet equipped with the lexicographical order.
Note that \textbf{a labeling $l$ of a topological sort $(\alpha,l)$} where $\alpha = (X, \sqsubseteq)$ and $X = \{x_1,\ldots,x_n\}$ \textbf{is determined by a permutation $\sigma$} on $\{1,\ldots,n\}$. The arguments of $\sigma$ are the indices of the elements $x_i$ and the values $\sigma$ takes are the ranks of the labels $l(x_i)$ (taken in the range of $l$). For instance, the topological sort over the linear order on the set $X = \{x_1,x_2,x_3\}$ determined by the labeling $l$ taking the values $l(x_1) = 5, l(x_2) = 2, l(x_3) = 7$ is the permutation $\sigma = {2 \,\, 1 \,\, 3 \choose 1 \,\, 2 \,\, 3 }$.
\end{definition}
Figure \ref{Fig: topological-sort-example-alt} displays four topological sorts, marked I, II, III and IV, for a partial order that has a Hasse diagram forming a binary tree of size 4. The label set $\mathcal{L}$ is the set of positive integers. The four topological sorts are examples of \emph{heap data structures} \cite{knu}. \begin{definition} \label{Def:LPOIso}
Two topological sorts $(\alpha,l_{1})$ and $(\alpha,l_2)$ are \textbf{isomorphic} exactly when for all $x, y\in X$, $l_{1}(x) \leq l_{1}(y)$ if and only if $l_{2}(x)\leq l_{2}(y).$ In other words, \textbf{the labels of the topological sort share the same relative order}. For instance, consider the two topological sorts determined by the labelings $(x_1: 5,x_2: 3, x_3: 1)$ and $(x_1: 4,x_2: 2, x_3: 0)$ of the discrete order $\Delta_3$ of size $3$ over the set $X = \{x_1,x_2,x_3\}$. The topological sorts are isomorphic and represent two unordered (reverse sorted) lists of size $3$. \textbf{Equivalently,} the topological sorts $(\alpha,l_{1})$ and $(\alpha,l_2)$ are isomorphic when: for all $x\in X$, the rank of $l_{1}(x)$ in the range $l_1(X)$ is equal to the rank of $l_{2}(x)$ in the range $l_2(X)$. In other words, \textbf{across topological sorts, labels of the same element must have identical rank}. Given a topological sort $(\alpha,l)$, where $\alpha = (X,\sqsubseteq)$, then its \textbf{root state} $l'$ is obtained by replacing each label of $l$ by its rank in $l(X)$. Root states are exactly the non-isomorphic topological sorts over an order $\alpha$ of size $|\alpha| = n$ that use labels from the set $\{1, \ldots, n\}$ only. \textbf{Two topological sorts} hence \textbf{are equivalent iff they share the same root state.} The root state of the isomorphic topological sorts $(x_1: 5,x_2: 3, x_3: 1)$ and $(x_1: 4,x_2: 2, x_3: 0)$ of the discrete order $\Delta_3$ of size $3$ is the permutation-labeling $(x_1: 3,x_2: 2, x_3: 1)$. Root states are labelings that can be identified with the permutations that determine the labeling.
\end{definition}
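The passage from a labeling to its root state is easy to mechanise. The following Python sketch (where element $x_i$ is represented by position $i$ of the input tuple, and labels are assumed distinct, as throughout the computational model) reproduces the example above.
\begin{verbatim}
def root_state(labels):
    # Replace each label by its rank (1-based) among the labels used.
    order = sorted(labels)
    return tuple(order.index(x) + 1 for x in labels)

print(root_state((5, 3, 1)))  # (3, 2, 1)
print(root_state((4, 2, 0)))  # (3, 2, 1): isomorphic labelings share a root state
\end{verbatim}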
Figure \ref{Fig: topological-sort-example-alt} displays four topological sorts, I, II, III and IV, two of which are isomorphic (I and II). Their root state are illustrated via the topological sorts V, VI and VII, where the isomorphic topological sorts I and II share the same root state V.
\begin{definition} The topological sorts of a finite partial order $\alpha$ are identified up to labeling isomorphism. The resulting quotient, denoted $R(\alpha)$, is called the \textbf{state space} of the partial order. With abuse of notation we denote elements of a state space by canonical \emph{representatives} of these equivalence classes (as opposed to the equivalence classes): given a partial order $\alpha$ of size $n$, then its \emph{state space} $R(\alpha)$ consists of the finitely many \emph{root states} of topological sorts over the order, which represent the finitely many states non-isomorphic topological sorts can occur in. \end{definition}
\begin{example}\label{state space} (Root states and state space) The label set $\mathcal{L}$ is the set of positive integers.
\noindent {\bf a) State space representing unordered lists of size 4} \\ The discrete order implies no conditions on labels of its topological sorts. For a discrete order $\Delta_n$ of size $n$, the state space $R(\Delta_n)$ hence corresponds to the set of $n!$ permutations of size $n$.
Consider the case of a discrete order $(X,\Delta_3)$ over a set $X = \{x_1,x_2,x_3\}$. The state space $R(\Delta_3)$ consists of the root states, i.e. the topological sorts using labels from the set $\{1,2,3\}$ only, given by the following labelings determining each such topological sort: \begin{center} $(x_1: 1, x_2: 2, x_3: 3), (x_1: 1, x_2: 3, x_3: 2), (x_1: 2, x_2: 1, x_3: 3),$ \\ $(x_1: 2, x_2: 3, x_3: 1), (x_1: 3, x_2: 1, x_3: 2), (x_1: 3, x_2: 2, x_3: 1).$ \end{center} In other words, the state space $R(X,\Delta_3)$ corresponds to the 3! permutations of size 3: \\ $\{(1,2,3),(1,3,2),(2,1,3),(2,3,1),(3,1,2),(3,2,1)\},$ representing the unordered lists of size 3. \\
\noindent {\bf b) State space representing a heap data structure of size 4} \\ Consider the four topological sorts I, II, III and IV over the order determined by the Hasse Diagram in Figure \ref{Fig: topological-sort-example-alt} and their root states V, VI and VII. These topological sorts V, VI and VII happen to form the only possible root states for this order. The set $\{V,VI,VII\}$ forms the state space of this order, representing exactly the distinct (root) states that heap data structures of size 4 can occur in.
\begin{figure}
\caption{Four topological sorts I, II, III, and IV over the same order. Topological sorts I and II are isomorphic and hence share the same root state V. }
\label{Fig: topological-sort-example-alt}
\end{figure} \end{example}
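For small orders the state space can be enumerated by brute force; the following Python sketch (our own encoding: an order is given as a set of strict-order pairs, and the heap-shaped order below is assumed to place $x_1$ below $x_2$ and $x_2$, $x_3$ below $x_4$, matching the three root states V, VI and VII) illustrates this.
\begin{verbatim}
# Sketch: enumerate the root states of a finite order by listing all
# rank-labelings (permutations of 1..n) that respect the order.
from itertools import permutations

def state_space(elements, strictly_below):
    states = []
    for perm in permutations(range(1, len(elements) + 1)):
        labeling = dict(zip(elements, perm))
        if all(labeling[a] < labeling[b] for (a, b) in strictly_below):
            states.append(labeling)
    return states

# Discrete order of size 3: no constraints, hence 3! = 6 root states.
print(len(state_space(['x1', 'x2', 'x3'], set())))              # 6
# Heap-shaped order of size 4 (x1 below x2; x2 and x3 below x4).
heap4 = {('x1', 'x2'), ('x2', 'x4'), ('x3', 'x4')}
print(len(state_space(['x1', 'x2', 'x3', 'x4'], heap4)))        # 3
\end{verbatim}
Such brute-force enumeration is exponential in $n$; for SP-orders, Section \ref{SP} provides an efficient counting rule.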
\begin{definition} A \textbf{global state} $R$ is a finite multiset of state spaces, i.e. $R$ is of the form: $$\{(R(\alpha_1),K_1),\ldots,(R(\alpha_l),K_l)\}$$ \end{definition} The orders $\alpha_1,\ldots, \alpha_l$ indicate that the input data structure has been transformed to several output data structures, represented by different orders\footnote{\cite{sch1} provides examples of transformations leading to different orders, e.g. Quicksort's split operation.}. Each of the state spaces $R(\alpha_i)$ reflects that output states, when they are topological sorts of the given order $\alpha_i$, have root states over $\alpha_i$.
\subsection{The four-part model} As indicated in the abstract, the computational model for modular time analysis of comparison-based algorithms \cite{sch1,sch3} consists of four parts:
\begin{itemize}\item{(a) modelling data structures as partially-ordered finite sets;} \vspace*{-1.5 mm} \item{(b) modelling data on these by topological sorts;}\vspace*{-1.5 mm} \item{(c) considering computation states as finite multisets of such data (aka ``global states'');}\vspace*{-1.5 mm} \item{(d) analysing algorithms by their induced transformations on global states.} \end{itemize}
In this view, an abstract specification of a sorting algorithm has input state given by any possible permutation of a finite set of elements (represented, according to (a) and (b), by a discrete partially-ordered set together with its topological sorts given by all permutations) and output state a sorted list of elements (represented, again according to (a) and (b), by a linearly-ordered finite set with its unique topological sort).
\begin{example} The (unordered) input lists of size 2 of a sorting algorithm are modelled by the topological sorts of a discrete order of size 2 over the set of elements $\{x_1,x_2\}$: $$\{(x_1: 1,x_2:2),(x_1:2,x_2:1)\}$$
\noindent The function values $s(x_1) = 2$ and $s(x_2) = 1$ of, say, the topological sort $s = (x_1:2,x_2:1)$ (over the discrete order of size 2) are referred to as \emph{labels} and correspond, for the case of list data structure, to the list's elements. The location $i$ of a label $a$ for which $s(x_i) = a$ is referred to as the label's \emph{index} and corresponds to the location of an element in a list. The list $(2,1)$ contains the element 2 in position 1 (i.e., the index of $x_1$) and the element 1 in position 2 (i.e. the index of $x_2$). We say that the index of label 2 is 1 and the index of label 1 is 2 for this topological sort. The 2! topological sorts $\{(x_1: 1,x_2:2),(x_1:2,x_2:1)\}$ consist of 2 permutations representing the unordered lists $(1,2)$ and $(2,1)$. These topological sorts form the \emph{``root states''} that lists of size 2 (with distinct elements) can occur in. Indeed, a list of size 2 is either sorted, represented by $(1,2)$, or reverse sorted, represented by $(2,1)$. Together, these topological sorts form a set referred to as the \emph{``state space''}\footnote{A state space intuitively serves to represent the uniform distribution over the data: each of the infinitely many possible input lists $(a,b)$ of size 2 (with distinct elements) is assumed to occur with equal probability in one of the two root states of the state space. This interpretation serves to underpin the complexity analysis of algorithms, which is the topic of \cite{sch1} and will not be considered here.}. \end{example}
\begin{example} \textbf{Trivial sort of lists of size 2} \label{trivial} \\ Computations will transform topological sorts to new topological sorts. All computations will be based on comparisons. For instance, a sorting algorithm, be it a very primitive one that operates only over lists of size 2, can execute a single comparison of the two elements of the list (the labels of the corresponding topological sort), followed by a swap in case the labels are out of order. Such an algorithm leaves the topological sort $(x_1: 1,x_2:2)$ unchanged and transforms the topological sort $(x_1:2,x_2:1)$ via a single swap to the topological sort $(x_1:1,x_2:2)$. Sorting, in this model, produces the unique topological sort over the \emph{linear} order, as illustrated in Figure \ref{Fig:sorting-lists-size-two}.
\begin{figure}
\caption{Sorting: transforming a topological sort of the \emph{discrete order} into the unique topological sort of the \emph{linear order} for lists of size 2 }
\label{Fig:sorting-lists-size-two}
\end{figure}
The transformation changes the multiset of the input state space (over the discrete order) $$\{(\{(x_1: 1,x_2:2),(x_1:2,x_2:1)\},1)\}$$ to the multiset of the output state space (over the linear order) $\{(\{(x_1: 1,x_2:2)\},2)\}.$ \\ Such multisets are referred to as ``global states" of the data under consideration.
\end{example}
We focus on the particular case of comparison-based sorting algorithms to illustrate our model\footnote{Note that the partial orders model \emph{implicit} data structures. Readers who wish to focus on the mathematical presentation, as opposed to implementation details, are advised to skip this comment on first reading. Part (a) of the model description stipulates that we use finite partial orders to represent ``data structures". This order may be \emph{implicitly} or \emph{explicitly} represented in the output data structure depending on the implementation. For instance, in the case of a heap-formation from an unordered input list, the computation can establish the heap explicitly by transforming the input list into a binary tree data structure satisfying the heap property, which constitutes a de facto heap. Alternatively, the algorithm may take an input list and retain the list data structure for its outputs. Elements of the input list will be reorganized \emph{in place}, i.e. a new list will be produced, for which the elements \emph{satisfy} a heap structure. The tree-structure underlying this heap remains an ``implicit" part of the implementation. For all purposes the algorithm makes use of the heap-structure intended by the programmer, but the data structure remains a list at all times during the computation. This is for instance the case for the ``Heapify" process in traditional (in-place) Heapsort \cite{knu}. We refer to the heap-structure in that case as the ``\emph{implicit data structure}'' and the list data structure as the ``\emph{explicit data structure}''. These may coincide or not depending on the implementation. In our context, \emph{partial orders model the \emph{implicit} data structure}. }.
\subsection{A basic example of the computational model: sorting algorithms} We illustrate (a), (b), (c), and (d) of the model for a sorting algorithm operating over lists of size $n$.
\subsubsection{Orders and topological sorts, parts (a) and (b)} For sorting algorithms, inputs are list data structures, represented by finite discrete orders. The elements of the order are labelled with positive integers drawn from a linearly ordered label set $\mathcal{L}$, which in this case is the usual linear order on the positive integers. The only requirement on this labeling is that its combination with the discrete order forms a topological sort\footnote{It is possible to deal with lists that have repeated elements. These would need to be modelled by topological sorts for which conditions are relaxed to allow for repeated labels. See \cite{sch1, ear1} for a discussion of how repeated labels can be handled through the assignment of random tie-breakers. It is standard practice in algorithmic analysis to undertake the analysis in first instance for lists \emph{without} duplicate elements--an approach adopted here.}.
Sorting algorithms hence transform topological sorts of the \emph{discrete order (permutations)} into a unique\footnote{``Unique" in the sense of topological sorts of the linear order using the same labels as the input permutation.} topological sort of the \emph{linear order (the sorted list)}. The transformation of the list $(9,6,3,2)$ into the sorted list $(2,3,6,9)$ by a sorting algorithm is represented in Figure \ref{Fig:sorting-example}. $(2,3,6,9)$ forms the unique topological sort of the linear order (using the labels 2, 3, 6 and 9 under consideration).
\begin{figure}
\caption{Sorting: transforming a topological sort of the \emph{discrete order} into the unique topological sort of the \emph{linear order} }
\label{Fig:sorting-example}
\end{figure} \subsubsection{Global state, part (c)} Every list of size $n$, after identification up to isomorphism with a root state, corresponds to one of the $n!$ permutations of size $n$. The corresponding state space $R(\Delta_{n}) = \{\sigma_1, \ldots, \sigma_{n!}\}$ consists of the root states, represented as permutations in this case. The multiset $\{(R(\Delta_{n}),1)\}$ containing a single copy of the state space $R(\Delta_{n}) = \{\sigma_1, \ldots, \sigma_{n!}\}$, forms the \emph{(global) state} of the discrete order of size $n$. This global state intuitively represents the possible inputs for the sorting algorithm.
\subsubsection{Induced transformations on global states, part (d)} Consider the root states of the discrete order $\Delta_n$ of size $n$, corresponding to $n!$ permutations of size $n$, forming the state space $R(\Delta_{n})$. The (global) state of the discrete order of size $n$ is the multiset $\{(R(\Delta_n),1)\}$, representing the inputs of our algorithm. Every sorting algorithm transforms the root states of this global state into $n!$ copies of the state space of the linear order $L_n$, consisting of a unique root state (a topological sort corresponding to the sorted list). We obtain the following result. \\
\noindent \textbf{Global state preservation for sorting} \\ Comparison-based sorting algorithms, for inputs of size $n$, transform the global state $\{(R(\Delta_n),1)\}$ into the global state $\{(R(L_n),n!)\}$.
\subsubsection{Global state preservation: a word of caution} \label{caution1} We have established the first obvious fact: all comparison-based sorting algorithms preserve global states. Note however that, even though \emph{every comparison-based sorting algorithm} can be naturally interpreted to induce a transformation on global states, this does not entail that \emph{every operation} used in a comparison-based algorithm preserves global states (cf. \cite{sch1})\footnote{In a sense, it is counter-intuitive that the whole, i.e. a comparison-based sorting algorithm, satisfies the property while some of its operations may not. Global state preservation for all operations is a crucial requirement for feasible modular time analysis: \emph{the analysis of comparison-based algorithms is guaranteed to be feasibly modular, in case \emph{every} operation of the computation preserves global states} \cite{sch1, sch3} (in which (global) states are referred to as ``random bags''). A breakdown of global state preservation for one or more operations lies at the heart of open problems in algorithmic analysis \cite{sch1}. A basic example for which global state preservation breaks down is provided by Heapsort's Selection Phase \cite{sch1}, for which the \emph{exact} time is an open problem \cite{knu}.}. The property of global state preservation can be refined to (global) entropy conservation, motivated in the next section.
\section{Entropy conservation}
Entropy considerations naturally arise in the context of comparison-based algorithms, e.g. via the well-known $\Omega(\log_2(n!))$ lower bound for both the worst-case and average-case time of comparison-based algorithms (on inputs of size $n$) \cite{knu}. For the case of unordered lists of size $n$, the entropy of the input data is $\log_2(n!)$. \emph{This notion of entropy will be generalized to the context of topological sorts in a natural way, by measuring the log in base 2 of the number of topological sorts of a finite order.} Our investigation of entropy and its conservation is carried out for computation \emph{with history} (see also \cite{knu}).
\subsection{Computation with history over topological sorts} \label{history} Comparison-based computation typically executes swaps of elements based on comparisons, generalizing the case of comparison-based sorting. In computations \emph{with history}, the original index $\textcolor{red}{i}$ of each label $\textcolor{blue}{a}$ in the input data (topological sort) is paired with the label $\textcolor{blue}{a}$ to form an index-label pair $(\textcolor{red}{i},\textcolor{blue}{a})$. Such a pair replaces each label $a$ in the computation. I.e., instead of exchanging \emph{labels}, a computation with history exchanges \emph{index-label pairs}. Note that the comparisons (that determine the swaps) are still made on the \emph{labels} $\textcolor{blue}{a}$ of a pair $(\textcolor{red}{i},\textcolor{blue}{a})$. The indices $\textcolor{red}{i}$ are merely carried along for bookkeeping purposes, recording the original position of the label.
For instance, the trivial sorting example discussed in Example \ref{trivial}, which sorts a list of size 2 by a (potential) swap following a single comparison, leaves the topological sort $(x_1: 1,x_2:2)$ unchanged and transforms $(x_1: 2,x_2:1)$ into $(x_1: 1,x_2:2)$.
The same computation \emph{with history} uses index-label pairs as a new type of labels of topological sorts. This computation leaves the topological sort $(x_1: (\textcolor{red}{1},\textcolor{blue}{1}),x_2:(\textcolor{red}{2},\textcolor{blue}{2}))$ unchanged and transforms the topological sort $(x_1:(\textcolor{red}{1},\textcolor{blue}{2}),x_2:(\textcolor{red}{2},\textcolor{blue}{1}))$ into the topological sort $(x_1:(\textcolor{red}{2},\textcolor{blue}{1}),x_2:(\textcolor{red}{1},\textcolor{blue}{2}))$. This computation is illustrated in Figure \ref{Fig:sorting-history}. Further swaps, on larger input lists, may move these index-label pairs to other positions in the topological sort, but will never change these index-label pairs' values during the computation.
\begin{figure}
\caption{Sorting with history for lists of size 2 induces a bijection. Indices (marked red) are paired with labels (marked blue) throughout the computation. Swaps of index-label pairs are based on (blue) label-comparisons only (as was the case for computation without history). (Red) indices are merely carried along in the computation as a bookkeeping device. }
\label{Fig:sorting-history}
\end{figure}
Computations with history form a bijection in which the outputs of a computation suffice to determine the inputs. In other words, computations with history are \emph{reversible}, i.e. inputs can be recovered from outputs. In the prior example, the output topological sort $(x_1:(\textcolor{red}{2},\textcolor{blue}{1}),x_2:(\textcolor{red}{1},\textcolor{blue}{2}))$ contains the index-label pairs: $(\textcolor{red}{2},\textcolor{blue}{1})$ and $(\textcolor{red}{1},\textcolor{blue}{2})$, which can be ``decoded" to the original input $(x_{\textcolor{red}{1}}: \textcolor{blue}{2},x_{\textcolor{red}{2}}:\textcolor{blue}{1})$. This decoded input, written in history-notation (using index-label pairs instead of original labels), recovers the original input: $(x_1:(\textcolor{red}{1},\textcolor{blue}{2}),x_2:(\textcolor{red}{2},\textcolor{blue}{1}))$.
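A minimal Python sketch of this bookkeeping (using Python's built-in sort as a stand-in for an arbitrary comparison-based sort; the helper names are ours) illustrates that the output index-label pairs determine the input:
\begin{verbatim}
# Sketch: sorting "with history" carries index-label pairs (i, a);
# comparisons are made on the labels a only.
def sort_with_history(labels):
    pairs = [(i, a) for i, a in enumerate(labels, start=1)]   # attach indices
    return sorted(pairs, key=lambda pair: pair[1])            # compare labels

def recover_input(output_pairs):
    # Reversibility: the pair (i, a) records that label a started at index i.
    labels = [None] * len(output_pairs)
    for i, a in output_pairs:
        labels[i - 1] = a
    return labels

out = sort_with_history([2, 1])
print(out)                    # [(2, 1), (1, 2)]
print(recover_input(out))     # [2, 1] -- the original input is recovered
\end{verbatim}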
\subsection{A basic example of entropy conservation: sorting algorithms} \label{entropy}
Comparison-based sorting algorithms compute over $n!$ input lists of size $n$, represented as the root states from the state space over the discrete order of size $n$. As observed, computations with history induce a bijection between inputs and outputs. Viewed over all outputs, the indices of the index-label pairs end up in random order. Indeed, any comparison-based sorting algorithm computing with history and starting from the input permutation
$\sigma = \{(1,\sigma(1)),(2,\sigma(2)),\ldots, (n,\sigma(n))\}$ \\ will produce the sorted output
$ \{(\sigma^{-1}(1),1),(\sigma^{-1}(2),2),\ldots, (\sigma^{-1}(n),n)\}$
We illustrate the transformations on topological sorts induced by a comparison-based sorting algorithm computing with history on all input permutations of size $n$ in Figure \ref{Fig:sorting-history-general}.
\begin{figure}
\caption{Sorting with history for lists of size n induces a bijection. Indices (marked red) are paired with labels (marked blue) throughout the computation. }
\label{Fig:sorting-history-general}
\end{figure}
As is clear from Figure \ref{Fig:sorting-history-general}, labels, originally in random order, i.e., uniformly distributed, now occur sorted, i.e. in linear order. Indices, originally in sorted order, linearly arranged from position $1$ to position $n$, after travelling with the labels as index-label pairs during swaps, ultimately occur in a random order, i.e. uniformly distributed. This can be understood by considering that when $\sigma$ ranges over all permutations of size $n$, $\sigma^{-1}$ ranges over the same set of $n!$ permutations. Hence, as $\sigma = \{(1,\sigma(1)),(2,\sigma(2)),\ldots, (n,\sigma(n))\}$ varies over all permutations, the output $ \{(\sigma^{-1}(1),1),(\sigma^{-1}(2),2),\ldots, (\sigma^{-1}(n),n)\}$ ranges over all $\sigma^{-1}$, i.e. over all permutations of size $n$.
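The argument can be checked mechanically for small $n$; the sketch below (again using Python's sort as a stand-in for any comparison-based sort) verifies that the labels end up sorted while the index patterns range over all $n!$ permutations.
\begin{verbatim}
# Sketch: for every input permutation sigma, sorting with history produces
# the pairs (sigma^{-1}(1), 1), ..., (sigma^{-1}(n), n).
from itertools import permutations

def history_output(sigma):           # sigma given as (sigma(1), ..., sigma(n))
    return sorted(enumerate(sigma, start=1), key=lambda p: p[1])

n = 3
index_patterns = set()
for sigma in permutations(range(1, n + 1)):
    out = history_output(sigma)
    assert [a for _, a in out] == sorted(sigma)      # labels occur sorted
    index_patterns.add(tuple(i for i, _ in out))     # this tuple is sigma^{-1}
print(len(index_patterns))                           # 6 = 3!: all permutations occur
\end{verbatim}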
At this stage, the linear arrangement of indices is merely an intuitive observation. The indices satisfy the linear order on integers, dictated by the (implicit) ``left-to-right" occurrence of the indices in the original permutation $\sigma = \{(1,\sigma(1)),(2,\sigma(2)),\ldots, (n,\sigma(n))\}$. We will incorporate the linear arrangement of indices explicitly in the context of topological sorts next.
\subsubsection{Representing the order on indices: left-to-right order} \label{left} For traditional list data structures, each input list incorporates a left-to-right order on indices. For instance, for a list $[3,1,2]$, the indices (i.e. positions of the elements) increase in left-to-right order from position $1$ to $3$. This ensures that the list $[3,1,2]$ is not equivalent to, say, the list $[1,3,2]$, i.e. the order of elements is important, contrary to the case of sets. We model lists via topological sorts (root states) of the discrete order. A discrete order is defined on a finite set, in which the order of elements plays no role. To ensure that the discrete order faithfully models the list data structure, we need to impose one more condition on the order $\Delta_n$ over a finite set $X$.
If $X$ is enumerated as $X = \{x_1,\ldots,x_n\}$, then the order on indexed elements $x_i$ can be imposed via a ``left-to-right order''\footnote{The left-to-right order was first introduced on orders in \cite{ear1}.}, i.e., we impose a second order on the elements $x_i$, in addition to the discrete order. These orders are distinct, i.e. do not affect one another. The left-to-right order $\sqsubseteq^{*}$ specifies that $x_i \sqsubseteq^{*} x_j$ if and only if $i < j$, i.e. the linear order on integers is inherited by the elements of the finite set, forming a linear order $\sqsubseteq^{*}$, denoted in our context by $L_n$, on $X$.\footnote{Note that this definition is specific to discrete orders. We will generalize the left-to-right order to more general partial orders later.}
Once the left-to-right order is imposed on the elements of $X$, indices can be interpreted to form the single root state of a linear order of size $n$.
Note that due to Hasse diagram representation, the ``left-to-right" order will be represented ``vertically" in the Hasse diagram of a linear order rather than ``horizontally" (as in the traditional list data structure format). Hence we abandon the terminology ``left-to-right" in favour of that of a ``dual order'' in section \ref{mirror}.
\subsubsection{Splitting the roles of indices and labels in topological sorts} \label{splitting} With the introduction of the left-to-right order, indices are interpreted as topological sorts of the linear order. Hence the computation displayed in Figure \ref{Fig:sorting-history-general} can be split up, where we consider the first coordinate $i$ of an index-label pair $(i,a)$ \emph{separately} from the label-coordinate $a$. This yields a representation using two state spaces, one for labels over a discrete order, the other for indices over a linear order, displayed in Figure \ref{fig:subfigname}.
\begin{figure}
\caption{Separating the state spaces}
\label{fig:subfigname}
\end{figure}
\subsubsection{Dual order} \label{mirror} The orders involved for labels and for indices (for the case of comparison-based sorting algorithms) are \emph{dual} in the following sense: the Hasse diagram of the discrete order $\Delta_n$ represents the \emph{complement} graph of the Hasse diagram of the linear order $L_n$. This type of duality plays a central role in our entropy conservation results and we will introduce the notion of a dual order for more general types of partial orders in Section \ref{dualSP}.
There is a natural transformation to obtain the dual of the discrete order, i.e. the linear order. One can be obtained from the other by taking the mirror image in the Cartesian plane, where points of the order are represented via a choice of coordinates placing the elements of the discrete order on, say, a horizontal line. The mirror image occurs with respect to the first diagonal. The dual order is formed by the complement graph on the mirror image, forming the Hasse diagram of the linear order. We illustrate this in Figure \ref{Fig:mirror-discrete-linear}.
\begin{figure}
\caption{Forming the dual order via the mirror image with respect to the first bisector }
\label{Fig:mirror-discrete-linear}
\end{figure}
\subsubsection{Entropy conservation for comparison-based sorting} \label{entropy-topsort} Next, we pair the \emph{topological sorts}, i.e. instead of considering the original index-label pairs used in the computation with history, we consider \emph{pairs of topological sorts}, one over a discrete order (representing the order on labels) and one over its dual, a linear order (representing the order on indices) as illustrated in Figure \ref{Fig:sorting-history-general-paired}. These pairs are transformed by a sorting algorithm into new pairs of topological sorts, one over a linear order (representing the labels) and one over a discrete order (representing the indices).
\begin{figure}
\caption{Pairing the topological sorts for labels and indices. }
\label{Fig:sorting-history-general-paired}
\end{figure}
For the case of labels: the order is transformed from a discrete order (i.e. full freedom on elements) to a linear order (no freedom on elements). Hence randomness at the label level decreases during the computation with history. This is compensated with an increase in randomness at the index level during the same computation, where the order for the case of indices is transformed from a linear order (no freedom on elements) to a discrete order (full freedom on elements).
Randomness is measured via entropy and we recall that entropy is defined as the log in base 2 of the size of a state space. The above observation on decrease of randomness for labels and increase of randomness for indices can be expressed formally via entropy on inputs and outputs:
\begin{itemize}
\small{\item{Entropy of {\bf input}-labels for the state space $R(\Delta_n)$: $\log_2|R(\Delta_n)| = \log_2(n!)$ \\(entropy for the $n!$ states of the state space over the discrete order)}
\item{Entropy of {\bf input}-indices for the \emph{dual} state space $R(L_n)$: $\log_2|R(L_n)| = \log_2(1) = 0$ \\(entropy for the single state satisfying the linear order)}
\item{Entropy of {\bf output}-labels for the state space $R(L_n)$: $\log_2|R(L_n)| = \log_2(1) = 0$ \\(entropy for the single state of the output linear order)}
\item{Entropy of {\bf output}-indices for the \emph{dual} state space $R(\Delta_n)$: $\log_2|R(\Delta_n)| = \log_2(n!)$ \\(entropy for the $n!$ states of the discrete order)} } \end{itemize}
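For concreteness, the four quantities above can be evaluated directly (a simple numerical sketch for $n = 4$):
\begin{verbatim}
# Sketch: the four entropies for comparison-based sorting on inputs of size n.
from math import factorial, log2

n = 4
H_input_labels   = log2(factorial(n))   # log2 |R(Delta_n)| = log2(n!)
H_input_indices  = log2(1)              # log2 |R(L_n)| = 0
H_output_labels  = log2(1)              # 0
H_output_indices = log2(factorial(n))   # log2(n!)
print(H_input_labels + H_input_indices)    # log2(24) ~ 4.585
print(H_output_labels + H_output_indices)  # log2(24) ~ 4.585
\end{verbatim}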
\begin{definition} Consider the discrete order $\Delta_n$ and its dual order $L_n$. The corresponding \textbf{double state space} is the pair $(R(\Delta_n),R(L_n))$, consisting of a state space and its dual state space.
The \textbf{quantitative entropy} (``label entropy''), denoted by $H_q$, is the entropy of the first component of the double state space: $H_q = \log_2|R(\Delta_n)| = \log_2(n!)$.
The \textbf{positional entropy} (``index entropy''), denoted by $H_p$, is the entropy of the second component of the double state space: $H_p = \log_2|R(L_n)|$.
\end{definition}
The quantitative entropy $H_q$ for the case of a comparison-based sorting algorithm evolves from maximum entropy $\log_2(n!)$ to 0 entropy. The positional entropy $H_p$ for the case of a comparison-based sorting algorithm evolves from 0 entropy to maximum entropy $\log_2(n!)$.
In other words, comparison-based sorting algorithms satisfy entropy conservation, i.e. the sum $H_p + H_q$ of quantitative and positional entropy remains constant, $H_p + H_q = \log_2(n!)$, for the case of inputs \emph{and} for the case of outputs. In summary, we obtain a refined version of global state preservation, involving pairs of global states and a related entropy conservation result.
\begin{proposition} (Entropy conservation for comparison-based sorting) \label{entropyconservationsorting} \\
Comparison-based sorting algorithms transform the double global state $(\{(R(\Delta_n),1)\}, \{(R(L_n),n!)\})$ into the double global state
$(\{(R(L_n),n!)\}, \{(R(\Delta_n),1)\}).$
Let the entropies for the double state space $(R(\Delta_n),R(L_n))$ be $$H^{1}_{p} = \log_2(|R(L_n)|) \mbox{ and } H^{1}_{q} = \log_2(|R(\Delta_n)|)$$ Similarly, let the entropies for the double state space $(R(L_n),R(\Delta_n))$ be $$H^{2}_{p} = \log_2(|R(\Delta_n)|) \mbox{ and } H^{2}_{q} = \log_2(|R(L_n)|)$$ The sum of the entropies of the state spaces involved in the double global states is constant: $$H^{1}_p + H^{1}_q = H^{2}_p + H^{2}_q = \log_2(n!)$$
In other words, \emph{comparison-based sorting satisfies entropy conservation}. \end{proposition}
\subsubsection{Entropy conservation: a word of caution} \label{caution2} A word of caution has its place here too, as it did in Section \ref{caution1} on global state preservation: though \emph{every} comparison-based sorting algorithm can be naturally shown to satisfy entropy conservation, this does not entail that \emph{every} operation used in the algorithm conserves entropy.
We will show that for the case of SP-orders, to be introduced next, entropy conservation (Proposition \ref{entropyconservationsorting}) has a natural generalization in Theorem \ref{duality}.
\section{Series-parallel orders} \label{SP} Series-parallel orders, or SP-orders, form an important, computationally tractable class of data structures. These include trees and play a role in sorting, sequencing, and scheduling applications \cite{moh}. \cite{sch1} introduces a calculus supporting the modular time derivation of algorithms, whose suite of data structuring operations preserves SP-orders. SP-orders are generated from a finite set of elements using a ``series" and ``parallel" operation over partial orders.
\subsection{Series-parallel operations} \label{operations} The series and parallel operations, or SP-operations, $\otimes$ and $\parallel$ are defined over partial orders.
In terms of Hasse diagrams:
\begin{itemize} \item{the series operation puts the first poset below the second, where every element of the first order ends up (in the newly formed order) below each element of the second order.}
\item{the parallel operation puts the two orders side-by-side, leaving their nodes mutually incomparable (across the two orders).} \end{itemize}
Figure \ref{Fig: series-parallel-operations} illustrates the series operation on orders with $\vee$-shaped and $\wedge$-shaped Hasse diagrams; the formal definitions of the operations are given next.
\begin{figure}
\caption{The series operation execution illustrated on Hasse diagrams }
\label{Fig: series-parallel-operations}
\end{figure}
\begin{definition} \label{sp-order} Let $\alpha_1,\alpha_2$ be posets where $\alpha_1 = (A_1,\sqsubseteq_1)$, $\alpha_2 = (A_2,\sqsubseteq_2)$ and $A_1 \cap A_2 = \emptyset$, then the series composition $\alpha_1 \otimes \alpha_2$ and the parallel composition $\alpha_1 \parallel \alpha_2$ of these orders are defined to be the following partial orders
\begin{itemize} \item{$\alpha_1 \otimes \alpha_2 \,=\,\,(A_1 \cup A_2, \sqsubseteq_1 \cup \, \sqsubseteq_2 \cup \, (A_1 \times A_2)) $} \item{$\alpha_1 \parallel \alpha_2 \,=\,\, (A_1 \cup A_2, \sqsubseteq_1 \cup \, \sqsubseteq_2)$} \end{itemize}
If $\alpha=\alpha_1\otimes\alpha_2$, then $\alpha_1$ and $\alpha_2$ are \textbf{series components} of $\alpha$.
If $\alpha=\alpha_1\|\alpha_2$ then $\alpha_1$ and $\alpha_2$ are \textbf{parallel components} of $\alpha$. \end{definition}
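Definition \ref{sp-order} translates directly into code; the following Python sketch (an illustration with our own representation: a poset as a pair of a set of elements and a set of strict-order pairs) builds the composed orders.
\begin{verbatim}
# Sketch: a poset is represented as (elements, strict_relation_pairs).
def series(p1, p2):
    (a1, r1), (a2, r2) = p1, p2
    cross = {(x, y) for x in a1 for y in a2}   # every element of p1 below every element of p2
    return (a1 | a2, r1 | r2 | cross)

def parallel(p1, p2):
    (a1, r1), (a2, r2) = p1, p2
    return (a1 | a2, r1 | r2)                  # no new comparabilities

def singleton(name):
    return ({name}, set())

heap4 = series(parallel(series(singleton('x1'), singleton('x2')),
                        singleton('x3')), singleton('x4'))
print(sorted(heap4[1]))
# [('x1', 'x2'), ('x1', 'x4'), ('x2', 'x4'), ('x3', 'x4')]
\end{verbatim}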
\begin{remark} \label{dualordersremark} It is easy to verify that both the series and the parallel operation over partial orders are associative. The series operation is clearly not commutative, while the parallel operation is.
Note however that for the purpose of setting up the computational model for modular timing, in which partial orders model data structures, we will require in practice that the parallel operation is \underline{not} commutative. This will be motivated and formalized in Section \ref{dualSP}.
\end{remark}
SP-orders can be characterized as the N-free partial orders \cite{moh}, i.e. the orders which do not contain the ``N-shaped partial order'' as a sub-order. An N-shaped partial order is any order determined by a quadruple $\{x,y,u,v\}$ for which $x \sqsubseteq y, u \sqsubseteq v, x \sqsubseteq v$ and $u$ and $y$ are unrelated.
SP-orders include partial orders with tree-shaped Hasse-diagrams, i.e. capture tree data structures. Of course, SP-orders need not be tree-shaped, as illustrated by the left-most example in Figure \ref{Fig: SP-orders}. The figure on the right-hand side is an example of a non-SP-order containing the N-shaped sub-order.
\begin{figure}
\caption{SP and non-SP order }
\label{Fig: SP-orders}
\end{figure}
\subsection{The series operation is refining} Comparison-based computations gather information by comparing elements. Order-information gained in this way is represented via partial orders in our context. As the computations proceed, more order-information is obtained. This is captured by the notion of a refinement of the order.
\begin{definition} \label{refinement} A poset $\beta = (X,\sqsubseteq_{\beta})$ \textbf{refines} a poset $\alpha = (X, \sqsubseteq_{\alpha})$, denoted by $\alpha \preccurlyeq \beta$, in case there is a permutation $\sigma$ on (the indices of) elements of $X = \{x_1,\ldots,x_n\}$ such that for all $x_i, x_j \in X$ $$x_i \sqsubseteq_{\alpha} x_ j \Rightarrow x_{\sigma(i)} \sqsubseteq_{\beta} x_{\sigma(j)}$$
I.e., when they have the same domain, $\beta$ refines $\alpha$ exactly when $\alpha$ is a sub-order of $\beta$\footnote{Or, with different domains, when there is an order-preserving map from one into the other.}. \end{definition}
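For small orders over the same element set, Definition \ref{refinement} can be checked by brute force (a sketch with our own encoding of posets as sets of strict-order pairs):
\begin{verbatim}
# Sketch: beta refines alpha iff some permutation of the elements maps
# every relation of alpha into a relation of beta.
from itertools import permutations

def refines(alpha, beta, elements):
    for perm in permutations(elements):
        relabel = dict(zip(elements, perm))
        if all((relabel[a], relabel[b]) in beta for (a, b) in alpha):
            return True
    return False

elems = ['x1', 'x2', 'x3']
discrete = set()                                        # Delta_3
chain = {('x1', 'x2'), ('x2', 'x3'), ('x1', 'x3')}      # L_3
print(refines(discrete, chain, elems))   # True: the chain refines Delta_3
print(refines(chain, discrete, elems))   # False
\end{verbatim}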
Each application of a series operation refines the order under consideration (cf. Figure \ref{Fig:SP-notation}). Computations over orders in our context typically start from the discrete order (first Hasse diagram in Figure \ref{Fig:SP-notation}) and gradually build refinements by repeated application of the series operation, e.g. in the construction of the tree-structured Hasse diagram (second Hasse diagram in Figure \ref{Fig:SP-notation}) or in a completed sort, resulting in the linear order (third Hasse diagram in Figure \ref{Fig:SP-notation}).
\subsection{Counting the root states of an SP-order} \label{count}
For general partial orders, determining $|R(\alpha)|$ is $\# P$-complete \cite{bw}. The following lemma \cite{moh} reflects the computational tractability of SP-orders, showing how to quickly compute the number of (non-isomorphic) topological sorts of an SP-order. This determines the size $|R(\alpha)|$ of the state space of an SP-order $\alpha$. \begin{lemma} \emph{{\bf (State space size)}}
\label{Le:|R|}
Let $\alpha_1$ and $\alpha_2$ be posets. \begin{enumerate} \item
\label{Le:|R|Ser}
If $\alpha=\alpha_1\otimes \alpha_2$ then $|R(\alpha)|=|R(\alpha_1)| |R(\alpha_2)|$ \item
\label{Le:|R|Par}
If $\alpha=\alpha_1\|\alpha_2$ then $|R(\alpha)|={|\alpha|\choose |\alpha_1|}|R(\alpha_1)| |R(\alpha_2)|$ \end{enumerate} \end{lemma}
\subsection{SP-expressions} SP-orders are of interest since they determine a computationally tractable class of data structures. Many order-theoretic problems are NP-hard for general partial orders. Considerable attention has been given to partial orders with ``nice" structural properties supporting the design of efficient methods \cite{moh}. This includes the class of SP-orders, for which the number of topological sorts can be efficiently computed, while the case for general partial orders, as mentioned, is $\#P$-complete \cite{bw}.
Intuitively, SP-orders are partial orders created from a finite set $X = \{x_1,\ldots,x_n\}$ equipped with the discrete order, by repeated applications of the series and parallel operation, starting from singleton discrete orders. This is formalized via the notion of an SP-expression and the SP-order determined by an SP-expression. Examples are provided following this formal definition.
\begin{definition}(general SP-expressions) \begin{itemize} \item{A variable $x_i$ is a general SP-expression}
\item{$\Psi = (\Psi_1 \otimes \Psi_2)$ is a general SP-expression when $\Psi_1$ and $\Psi_2$ are general SP-expressions }
\item{$\Psi = (\Psi_1 \parallel \Psi_2)$ is a general SP-expression when $\Psi_1$ and $\Psi_2$ are general SP-expressions}
\end{itemize}
An \textbf{SP-expression} is a general SP-expression that contains each of its variables $x_i$ at most once. SP-expressions are logical formulae built from the symbolic notations $\otimes$ and $\parallel$ for the series and parallel operation. We denote SP-expressions in the following by Greek letters $\Psi,\Phi$ etc.
$Var(\Psi)$ denotes the set of variables in $\Psi$. \end{definition}
\begin{example} The expressions $(x_1 \otimes x_2)$, $(x_1 \otimes x_3) \parallel x_2$ and $x_3 \parallel (x_1 \parallel x_2)$ are SP-expressions. The following is a general SP-expression that is not a SP-expression: $(x_1 \otimes x_2) \parallel x_2$. \end{example}
\subsection{SP-orders determined by SP-expressions} \label{order-generation}
An SP-expression $\Psi$ with $n$ variables ``generates" an SP-order of size $n$ starting from the discrete order $(X,\Delta_n)$ where $X$ consists of the set of elements $Var(\Psi) = \{x_1, \ldots, x_n\}$. The generation process is defined as follows:
\begin{enumerate} \small{\item{Interpret each variable $x_i$ in $\Psi$ as a singleton discrete order obtained by the restriction of the discrete order $(X,\Delta_n)$ to the element $x_i$}
\item{Interpret each $\otimes$-symbol in $\Psi$ as a series operation over partial orders}
\item{Interpret each $\parallel$-symbol in $\Psi$ as a parallel operation over partial orders}
\item{Execute the operations of $\Psi$ over partial orders (as indicated in items 2 and 3 above) in the precedence-order determined by the brackets of the SP-expression $\Psi$} }
\end{enumerate}
The final result is referred to as \textbf{the SP-order determined by $\Psi$}.
\begin{remark} It is clear that each SP-order of a given size $n$ can be obtained from a suitably chosen SP-expression with $n$ variables and a discrete order of size $n$ via the process sketched above.
\end{remark}
\begin{example} The 4-element order with tree-shaped Hasse diagram in Figure \ref{Fig: topological-sort-example-alt} can be determined by the SP-expression $((x_1 \otimes x_2) \parallel x_3) \otimes x_4$ via the process described above. The result is displayed via the second Hasse diagram in Figure \ref{Fig:SP-notation} which illustrates two refinements of a discrete order $\Delta_4$, resulting in the linear order $L_4$. In each case, SP-expressions are displayed that generate the SP-orders involved.
\end{example}
\begin{remark} Given a SP-order determined by SP-expression $\Psi$, then with abuse of terminology we will refer to this order as ``the order $\Psi$'' for the sake of brevity. For instance, for SP-expression $\Psi = ((x_1 \otimes x_2) \parallel x_3) \otimes x_4$, we refer to the order generated by this SP-expression as ``the order $((x_1 \otimes x_2) \parallel x_3) \otimes x_4$'', rather than the order ``generated by this SP-expression''.
We extend this convention also to the context of state spaces and refer to the state space $R(\alpha)$, where $\alpha$ is the order $((x_1 \otimes x_2) \parallel x_3) \otimes x_4$, as the state space $R(((x_1 \otimes x_2) \parallel x_3) \otimes x_4)$. \end{remark}
\begin{figure}
\caption{Refinements of the discrete order and their respective SP-expressions }
\label{Fig:SP-notation}
\end{figure}
Finally we include an application of Lemma \ref{Le:|R|} using the SP-expression notation.
\begin{example}The state space discussed in Figure \ref{Fig: topological-sort-example-alt} contains three root states: V, VI and VII, representing the heaps of size $4$. A corresponding SP-order $((x_1 \otimes x_2) \parallel x_3) \otimes x_4$ over the four-element set $X = \{x_1,x_2,x_3,x_4\}$ is displayed in Figure \ref{Fig:SP-notation}. Applying Lemma \ref{Le:|R|}, we obtain: $|R(((x_1 \otimes x_2) \parallel x_3) \otimes x_4)| = (({3\choose 2} \times (1 \times 1)) \times 1) \times 1 = 3$, corresponding to the 3 topological sorts (V, VI and VII) of this order displayed in Figure \ref{Fig: topological-sort-example-alt}.
\end{example}
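The computation in the example can be automated; the sketch below (our own encoding of SP-expressions as nested tuples \texttt{('ser', e1, e2)}, \texttt{('par', e1, e2)} or a variable name) applies Lemma \ref{Le:|R|} recursively.
\begin{verbatim}
# Sketch: |R(alpha)| for an SP-expression, via Lemma (State space size).
from math import comb

def size(expr):                       # number of variables / elements
    return 1 if isinstance(expr, str) else size(expr[1]) + size(expr[2])

def num_states(expr):                 # |R(alpha)|
    if isinstance(expr, str):
        return 1
    op, e1, e2 = expr
    r = num_states(e1) * num_states(e2)
    if op == 'par':
        r *= comb(size(e1) + size(e2), size(e1))
    return r

heap4 = ('ser', ('par', ('ser', 'x1', 'x2'), 'x3'), 'x4')
print(num_states(heap4))   # 3: the root states V, VI and VII
\end{verbatim}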
\section{Dual SP-orders} \label{dualSP}
To ensure that SP-orders faithfully model data structures, we need to impose an additional partial order on SP-orders, leading to the notion of ``a dual SP-order''\footnote{Originally introduced in \cite{ear1}, be it in the left-to-right order context--not a dual order context.}.
\textbf{\emph{In our context, in which we model data structures, the parallel operation in SP-orders will be adapted to a non-commutative version}}. As was the case for our discussion of the left-to-right order for the particular case of the discrete order in Section \ref{left}, this will be achieved by imposing an additional order. The additional order serves to distinguish ``left" and ``right" sub-orders $\alpha_1$ and $\alpha_2$ respectively of a parallel composition $\alpha_1 \parallel \alpha_2$.
We recall the motivation for this additional order, discussed in Sections \ref{left}, \ref{mirror} and Remark \ref{dualordersremark} for the particular case of discrete orders, where we restate matters using the SP-notation.
The discrete order $x_1 \parallel x_2 \parallel \ldots \parallel x_n$ represents an unordered \emph{list} $L = [x_1,\ldots,x_n]$. For regular data structures such as lists or arrays, the order of the indices is not interchangeable, i.e., $L = [x_1,x_2]$ is not equivalent to $L' = [x_2,x_1]$ and hence $x_1 \parallel x_2$ should not be equivalent to $x_2 \parallel x_1$. Similarly, for a heap of size 4, the data structure in our model is represented by the SP-order $((x_1 \otimes x_2) \parallel x_3) \otimes x_4$. This SP-order should not be equivalent to the SP-order $(x_3 \parallel (x_1 \otimes x_2)) \otimes x_4$, since a heap-data structure corresponds to a complete binary tree in which some leaves may have been removed in right-to-left order only \cite{knu}.
Imposing an additional order ensuring non-commutativity of the parallel operation also matters in terms of the state space. With a non-commutative parallel operation, the state space $R(\Delta_n) = R(x_1 \parallel x_2 \parallel \ldots \parallel x_n)$ consists of the $n!$ permutation-topological sorts. For a commutative parallel operation the state space $R(x_1 \parallel x_2 \parallel \ldots \parallel x_n)$ would reduce to a \emph{single} topological sort.
For the case of the discrete order $x_1 \parallel x_2 \parallel \ldots \parallel x_n$ we can impose a left-to-right order $\sqsubseteq^{*}$ on the indexed elements $x_i$ by requiring that $x_i \sqsubseteq^{*} x_j \Leftrightarrow i < j$. Hence the imposed left-to-right order $\sqsubseteq^{*}$ is the linear order $(X,L_n)$.
Using the notation for SP-orders, the original order is the discrete order $x_1 \parallel x_2 \parallel \ldots \parallel x_n$ while the newly imposed order is the linear order $x_1 \otimes x_2 \otimes \ldots \otimes x_n$. This is exactly the ``dual order" obtained from the original one by interchanging parallel operations with series operations and vice versa. Just as for the case of the series operation, for which a series composition $\alpha_1 \otimes \alpha_2$ places all elements of $\alpha_1$ below each element of $\alpha_2$ in the given order $(X,\sqsubseteq)$, we impose a second order ``left-to-right order'' $(X,\sqsubseteq^{*})$ ensuring that \emph{under this order} a parallel composition $\alpha_1 \parallel \alpha_2$ will place every element of $\alpha_1$ below each element of $\alpha_2$. I.e. the two orders are duals, switching the roles of the series and parallel operations. This naturally leads to the notion of a ``dual SP-order''.
\subsection{Dual SP-order and double state space} \begin{definition} The \textbf{dual of a SP-expression} $\Psi$ is the SP-expression $\Psi^{*}$ obtained by interchanging series operations and parallel operations. The dual of the SP-expression $((x_1 \otimes x_2) \parallel x_3) \otimes x_4$ is the SP-expression $((x_1 \parallel x_2) \otimes x_3) \parallel x_4$.
Given an SP-order $\alpha$ determined by an SP-expression $\Psi$, then the \textbf{dual SP-order} $\alpha^{*}$ is the SP-order determined by the dual SP-expression $\Psi^{*}$.
Given SP-order $\alpha$ and its dual $\alpha^{*}$, then $R(\alpha^{*})$ is a \textbf{dual state space} of the state space $R(\alpha)$ and $(R(\alpha),R(\alpha^{*}))$ is a \textbf{double state space}.
\end{definition}
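With the same tuple encoding as in the counting sketch above (an illustration only), the dual SP-expression can be computed by swapping the two operations:
\begin{verbatim}
# Sketch: dual SP-expression, interchanging series and parallel operations.
def dual(expr):
    if isinstance(expr, str):
        return expr
    op, e1, e2 = expr
    return ('par' if op == 'ser' else 'ser', dual(e1), dual(e2))

heap4 = ('ser', ('par', ('ser', 'x1', 'x2'), 'x3'), 'x4')
print(dual(heap4))   # ('par', ('ser', ('par', 'x1', 'x2'), 'x3'), 'x4')
\end{verbatim}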
We recall from Section \ref{mirror} that the Hasse diagram representations of the discrete order and its dual could be obtained via a reflection along the first bisector in the Cartesian plane. This is also the case for a general SP-order and its dual.
\subsection{Cartesian construction of the dual order} \label{quadrant}
We generalize the Cartesian construction of the Hasse diagram of the dual of a discrete order to general \emph{SP-orders}.
\begin{center} \includegraphics[height=1.1cm]{dual-SP} \end{center}
\begin{center}
\includegraphics[height=3.5cm]{mirror-discrete-linear} \end{center}
\noindent {\bf Dual SP-order construction} \\ Given an SP-order $\alpha$, consider the Cartesian plane and draw the Hasse diagram of the order $\alpha$ in the lower half of the bisected first quadrant (for a suitable choice of coordinates of the Hasse diagram, picked freely within these given quadrant constraints\footnote{These constraints are merely introduced to enhance the graphical display, avoiding overlap of the Hasse diagrams.}).
The Hasse diagram of the dual order $\alpha^{*}$ can be obtained (up to isomorphism) from the Hasse diagram of the order $\alpha$ (drawn in the above way) via the following process of \emph{Reflection} and \emph{Complementation}. \emph{Reflection} transforms the Hasse diagram of the original order into a proper Hasse diagram of the dual order (where mere complementation on the original Hasse diagram would not achieve this goal due to the complement formation's ``left-to-right" nature, conflicting with a Hasse diagram's vertical setup). \emph{Complementation} represents the interchanging of the series and parallel operations. The dual SP-order construction proceeds as follows:
\begin{itemize} \item{({\bf Reflection}) Reflect each point $x_i$ with respect to the bisector $y = x$} \item{({\bf Complementation}) Add directed edges between reflected pairs $x_{i}^{*},x_{j}^{*}$ in case $(x_i,x_j)$ is an unrelated pair in the Hasse diagram of $\sqsubseteq_{\Psi}$ (in other words: create the complement graph on reflected pairs). An edge between reflected elements \emph{points from} $x_{i}^{*}$ to $x_{j}^{*}$ in case $x_{i}^{*}$ is \emph{below} $x_{j}^{*}$ in the dual Hasse diagram, or, equivalently, in case, for an unrelated pair $x_i$ and $x_j$, the element $x_{i}$ occurs to the left of $x_{j}$ in the original Hasse diagram.} \end{itemize}
\begin{example} The Hasse diagram of $\alpha = ((x_1 \otimes x_2) \parallel x_3) \otimes x_4$ and the Hasse diagram of its dual $\alpha^{*} = ((x_1 \parallel x_2)\otimes x_3) \parallel x_4$ are displayed in Figure \ref{Fig:dual-SP-example}.\end{example}
\begin{figure}
\caption{The Hasse diagram of the SP-order $((x_1 \otimes x_2) \parallel x_3) \otimes x_4$ and of its dual $((x_1 \parallel x_2) \otimes x_3) \parallel x_4$ }
\label{Fig:dual-SP-example}
\end{figure}
\subsection{Entropy conservation equality for double state spaces} \label{entropydouble} As was the case in Section \ref{entropy}, we equate labels with ``quantitative information" and indices with ``positional information" and we will show the following result for double state spaces.
\begin{definition}
Let $\alpha$ be an SP-order determined by an SP-expression $\Psi$ and let $\alpha^{*}$ be the dual order determined by $\Psi^{*}$. We define the \textbf{quantitative entropy} (label-entropy) $H_q$ and the \textbf{positional entropy} (index-entropy) $H_{p}$ as follows: $H_q = \log_2(|R(\alpha)|)$ and $H_p = \log_2(|R(\alpha^{*})|)$.
The pair $(H_q,H_p)$ is the \textbf{entropy of a double state space} $(R(\alpha),R(\alpha^{*}))$. The \textbf{maximum entropy} is the entropy of the state space $R(\Delta_n)$ over the discrete order of size $n = |\alpha|$, given by
$$H_{max} = \log_2(n!)$$
\end{definition}
The following theorem expresses entropy conservation between the components of the entropy-pair $(H_q,H_p)$ of a double state space $(R(\alpha),R(\alpha^{*}))$.
\begin{theorem} (Entropy conservation) \label{duality} \\ Quantitative and positional entropy are complementary: $H_{p} + H_{q} = H_{max}$ \end{theorem}
\begin{proof} Let $\alpha$ be an SP-order determined by an SP-expression $\Psi$, where $|\alpha| = n$. The proof proceeds by induction on the number of operations $k$ in the formula $\Psi$. It suffices to show that $ |R(\alpha)| |R(\alpha^{*})| = n!$ \\
\noindent a) {\bf Base case $k = 0$:} we must have $\Psi = x_i$ for a single variable $x_i$, so that $\alpha = \alpha^{*}$ is the singleton (discrete) order on $x_i$ and $n = 1$. Hence $|R(\alpha)|\, |R(\alpha^{*})| = 1 \times 1 = 1 = n!$ \\
\noindent b) {\bf Case $k > 0$ and $\Psi = \Psi_1 \otimes \Psi_2$:} then $X = X_1 \cup X_2$, $X_1 \cap X_2 = \emptyset$, $X_1 = Var(\Psi_1)$ and $X_2 = Var(\Psi_2)$. Let $l_1 = |X_1|$, $l_2 = |X_2|$ and $n = l_1 + l_2 = |X|$, where $l_1,l_2 \geq 1$. With some abuse of notation, we denote $R(\alpha)$ by $R(\Psi)$. Note that $|R(\alpha)|\, |R(\alpha^{*})| = |R(\Psi)|\, |R(\Psi^{*})|$, hence: \begin{eqnarray*}
|R(\Psi)| |R(\Psi^{*})| &=& |R(\Psi_1 \otimes \Psi_2)| |R(\Psi_1^{*} \parallel \Psi_2^{*})| \\
&=& |R(\Psi_1)||R(\Psi_2)| {n \choose l_1} |R(\Psi_1^{*})||R(\Psi_2^{*})| \\
&=& {n \choose l_1}|R(\Psi_1)| |R(\Psi_1^{*})| |R(\Psi_2)||R(\Psi_2^{*})| \\ &=& \frac{n!}{l_1! (n-l_1)!} l_1!l_2! \mbox{\hspace*{1 cm} (induction hypothesis)} \\ &=& n! \mbox{\hspace*{1 cm} (since $l_2 = n - l_1$)} \end{eqnarray*}
\noindent c) {\bf Case $k > 0$ and $\Psi = \Psi_1 \parallel \Psi_2$:} the proof of case c) proceeds similarly to that of case b).
\end{proof}
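The identity $|R(\alpha)|\,|R(\alpha^{*})| = n!$ established in the proof can be checked numerically for concrete SP-expressions; the following self-contained Python sketch (restating, for convenience, the counting and dual helpers from the earlier sketches, which are our own illustrations) does so for the heap-shaped order of size 4.
\begin{verbatim}
# Sketch: check |R(alpha)| * |R(alpha*)| = n! for an SP-expression, using
# the tuple encoding ('ser', e1, e2) / ('par', e1, e2) / variable name.
from math import comb, factorial, log2

def size(expr):
    return 1 if isinstance(expr, str) else size(expr[1]) + size(expr[2])

def num_states(expr):                          # |R(alpha)| via the counting lemma
    if isinstance(expr, str):
        return 1
    op, e1, e2 = expr
    r = num_states(e1) * num_states(e2)
    return r * comb(size(expr), size(e1)) if op == 'par' else r

def dual(expr):                                # interchange series and parallel
    if isinstance(expr, str):
        return expr
    op, e1, e2 = expr
    return ('par' if op == 'ser' else 'ser', dual(e1), dual(e2))

heap4 = ('ser', ('par', ('ser', 'x1', 'x2'), 'x3'), 'x4')
n = size(heap4)
H_q = log2(num_states(heap4))                  # quantitative entropy: log2(3)
H_p = log2(num_states(dual(heap4)))            # positional entropy:  log2(8)
print(num_states(heap4) * num_states(dual(heap4)) == factorial(n))   # True (3 * 8 = 24)
print(round(H_q + H_p, 3), round(log2(factorial(n)), 3))             # 4.585 4.585
\end{verbatim}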
\begin{remark} Entropy is in a sense a measure of ``disorganization'' (degrees of freedom), hence in the context of global state preservation Theorem \ref{duality} can be interpreted as: \textbf{``quantitative order gained equals positional order lost\footnote{Which may bear some relation to the messy office argument advocating a chaotic office where nothing is in the right place yet each item's place is known to the owner, over the case where each item is stored in the right order and yet the owner can no longer locate the items.}''} We note that global state conservation for MOQA-computations \cite{sch1} means that the global states must have the same cardinality (as multisets) throughout the computation, hence $H_{p} + H_{q} = H_{max}$ remains constant throughout the computation when one global state is transformed into a second. \end{remark}
\section{Conclusion and future work} We have established a ``denotational" version of entropy conservation for comparison-based sorting for which the sum of the entropies (positional and quantitative) remains constant when proceeding from the input to the output collection. We generalized entropy conservation to global states over SP-orders, for comparison-based computations transforming a global state into a global state of the same cardinality (such as the operations of \cite{sch1}).

The question remains whether there exists an ``operational" mechanism that not only transforms the underlying order $\alpha$ into $\alpha^{*}$ but \emph{also} transforms the labeling $l$ into a ``dual labeling'' $l^{*}$ such that $(\alpha^{*},l^{*})$ forms once again a topological sort. This has been achieved for the case of sorting. Figure \ref{Fig:sorting-history-general-paired} illustrates that permutations $\sigma$, i.e. labelings of the discrete order, are transformed into labelings $\sigma^{-1}$ of the dual order, i.e. the linear order. A general operational mechanism for comparison-based computation over SP-orders (using a suitable fragment of the MOQA language of \cite{sch1}) will be explored in future work, in which we establish an ``entropic duality theorem" coupling the computations with a dual computation \emph{per labeling}. The latter computation effects an increase in positional entropy proportional to the decrease in quantitative entropy effected by the original computation. We refer to algorithms satisfying this type of entropic coupling as ``diyatropic". Comparison-based algorithms form one of the most thoroughly studied classes in computer science \cite{knu}. Our results indicate that more aspects still remain to be explored in this area.
\end{document}
\begin{document}
\author{Zejun Hu \thanks {Supported by grants of NSFC-10671181 and Chinese-German cooperation projects DFG PI 158/4-5.}
\and Haizhong Li\thanks{Supported by grants of NSFC-10531090 and Chinese-German cooperation projects DFG PI 158/4-5.}
\and Luc Vrancken} \date{}
\title{A characterisation of the Calabi product of hyperbolic affine spheres}
\sloppy
\begin{abstract}\noindent There exists a well known construction which allows to associate with two hyperbolic affine spheres $f_i: M_i^{n_i} \rightarrow \mathbb R^{n_i+1}$ a new hyperbolic affine sphere immersion of $I \times M_1 \times M_2$ into $\mathbb R^{n_1+n_2+3}$. In this paper we deal with the inverse problem: how to determine from properties of the difference tensor whether a given hyperbolic affine sphere immersion of a manifold $M^n \rightarrow \mathbb R^{n+1}$ can be decomposed in such a way. \end{abstract}
{{\bfseries Key words}: {\em affine hypersphere, Calabi product, affine hypersurface}.
{\bfseries Subject class: } 53A15.}
\section{Introduction}
In this paper we study nondegenerate affine hypersurfaces $M^n$ into $\mathbb R^{n+1}$, equipped with its standard affine connection $D$. It is well known that on such a hypersurface there exists a canonical transversal vector field $\xi$, which is called the affine normal. With respect to this transversal vector field one can decompose \begin{equation} D_X Y = \nabla_X Y +h(X,Y) \xi, \end{equation} thus introducing the affine metric $h$ and the induced affine connection $\nabla$. The Pick-Berwald theorem states that $\nabla$ coincides with the Levi Civita connection $\widehat{\nabla}$ of the affine metric $h$ if and only if $M$ is immersed as a nondegenerate quadric. The difference tensor $K$ is introduced by \begin{equation} K_X Y = \nabla_X Y -\widehat{\nabla}_X Y. \end{equation} It follows easily that $h(K(X,Y),Z)$ is symmetric in $X$, $Y$ and $Z$. The apolarity condition states that $\operatorname{trace} K_X =0$ for every vector field $X$. The fundamental theorem of affine differential geometry, Dillen, see Ref.~\cite{dinovr91} implies that an affine hypersurface is completely determined by the metric and the difference tensor $K$.
Deriving the affine normal, we introduce the affine shape operator $S$ by \begin{equation} D_X \xi =-SX. \end{equation}
Here, we will restrict ourselves to the case that the affine shape operator $S$ is a multiple of the identity, i.e. $S=H I$. This means that all affine normals are parallel or pass through a fixed point. We will also assume that the metric is positive definite in which case one distinguishes the following classes of affine hyperspheres: \begin{romanlist} \item elliptic affine hyperspheres, i.e. all affine normals pass through a fixed point and $H >0$, \item hyperbolic affine hyperspheres, i.e. all affine normals pass through a fixed point and $H<0$, \item parabolic affine hyperspheres, i.e. all the affine normals are parallel ($H=0$). \end{romanlist} Due to the work of amongst others Calabi \cite{ca72}, Pogorelov \cite{po72}, Cheng and Yau \cite{chya86}, Sasaki \cite{sa80} and Li \cite{li92}, positive definite affine hyperspheres which are complete with respect to the affine metric $h$ are now well understood. In particular, the only complete elliptic or parabolic positive definite affine hyperspheres are respectively the ellipsoid and the paraboloid. However, there exist many hyperbolic affine hyperspheres.
In the local case, one is far from obtaining a classification. The reason for this is that the study of affine hyperspheres reduces to the study of the Monge-Amp{\`e}re equations. Calabi introduced a construction, called the Calabi product, which shows how to associate with one (or two) hyperbolic affine hyperspheres a new hyperbolic affine hypersphere. This construction, as well as the corresponding properties of the difference tensor, is recalled in the next section.
In this paper we are interested in the reverse construction, i.e. how to determine using properties of the difference tensor whether or not a given hyperbolic affine hypersphere (with mean curvature $-1$) can be decomposed as a Calabi product of a hyperbolic affine hypersphere and a point or as a Calabi product of two hyperbolic affine hyperspheres.
In particular we show the following two theorems: \begin{theorem} Let $\phi: M^n \rightarrow \mathbb R^{n+1}$ be a (positive definite) hyperbolic affine hypersphere with mean curvature $\lambda$, $\lambda<0$. Assume that there exists two distributions $\mathcal D_1$ and $\mathcal D_2$ such that \begin{romanlist} \item $T_pM = \mathcal D_1 \oplus \mathcal D_2$, \item $\mathcal D_1$ and $\mathcal D_2$ are orthogonal with respect to the affine metric $h$ \item $\mathcal D_1$ is a one dimensional distribution spanned by a unit length vector field $T$ \item there exist numbers $\lambda_1$ and $\lambda_2$ satisfying $-\lambda+\lambda_1 \lambda_2 -\lambda_2^2= 0$ such that \begin{align*} & K(T,T)=\lambda_1 T\\ &K(T,U) = \lambda_2 U, \end{align*} where $U \in \mathcal D_2$. \end{romanlist} Then $\phi:M^n \rightarrow \mathbb R^{n+1}$ can be decomposed as the Calabi product of a hyperbolic affine sphere $\psi:M_1^{n-1} \rightarrow \mathbb R^{n}$ and a point. \end{theorem} and \begin{theorem} \label{theoremprod2}Let $\phi: M^n \rightarrow \mathbb R^{n+1}$ be a (positive definite) hyperbolic affine hypersphere with mean curvature $\lambda$, $\lambda<0$. Assume that there exists distributions $\mathcal D_1$ (of dimension 1, spanned by a unit length vector field $T$), $\mathcal D_2$
(of dimension $n_2$) and $\mathcal D_3$ (of dimension $n_3$) such that \begin{romanlist} \item $1+n_2+n_3 = n$, \item $\mathcal D_1$, $\mathcal D_2$ and $\mathcal D_3$ are mutually orthogonal
with respect to the affine metric $h$ \item there exist numbers $\lambda_1$, $\lambda_2$ and $\lambda_3$ such that \begin{align*} & K(T,T)=\lambda_1 T\\ &K(T,V) = \lambda_2 V,\\ &K(T,W)= \lambda_3 W,\\ &K(V,W)=0. \end{align*} where $V \in \mathcal D_2$, $W \in \mathcal D_3$, $\lambda_1 = \lambda_2 +\lambda_3$ and $\lambda_2 \lambda_3 = \lambda$. \end{romanlist} Then $\phi:M^n \rightarrow \mathbb R^{n+1}$ can be decomposed as the Calabi product of two hyperbolic affine sphere immersions $\psi_1:M_1^{n_2} \rightarrow \mathbb R^{n_2+1}$ and
$\psi_2:M_2^{n_3} \rightarrow \mathbb R^{n_3+1}$. \end{theorem} Note that, as explained in the next section, the converse of the above two theorems is also true.
To conclude this introduction, we remark that the basic integrability conditions for a hyperbolic
affine hypersphere with mean curvature $-1$ state that: \begin{align} &\hat R(X,Y)Z = -(h(Y,Z) X-h(X,Z)Y)-[K_X,K_Y]Z,\label{gauss}\\ &(\hat\nabla K)(X, Y,Z) =(\hat \nabla K)(Y,X,Z).\label{codazzi} \end{align}
\section{The Calabi product}
Let $\psi_1: M_1^{n_2} \rightarrow \mathbb R^{n_2+1}$ and $\psi_2: M_2^{n_3} \rightarrow \mathbb R^{n_3+1}$ be hyperbolic affine hyperspheres with mean curvature $-1$. Then we define the Calabi product of $M_1$ with a point (here $n=n_2+1$ denotes the dimension of the product) by \begin{equation*} \tilde \psi(t,p)= (c_1 e^{\tfrac{t}{\sqrt{n}}} \psi_1(p), c_2 e^{-{\sqrt{n}}t}), \end{equation*} where $p \in M_1$ and $t \in \mathbb R$, and the Calabi product of $M_1$ with $M_2$ by \begin{equation*} \psi(t,p,q)= (c_1 e^{\tfrac{\sqrt{n_3+1} t}{\sqrt{n_2+1}}}\psi_1(p),
c_2 e^{-\tfrac{\sqrt{n_2+1} t}{\sqrt{n_3+1}}} \psi_2(q)), \end{equation*} where $p \in M_1$, $q \in M_2$ and $t \in \mathbb R$.
We now investigate the conditions on the constants $c_1$ and $c_2$ in order that the Calabi product has constant mean curvature $-1$. We first do so for the Calabi product of two affine spheres. We denote by $v_1,\dots,v_{n_2}$ local coordinates for $M_1$ and by $w_1,\dots,w_{n_3}$ local coordinates for $M_2$. Then, it follows that \begin{align*} &\psi_t= (c_1 \tfrac{\sqrt{n_3+1}}{\sqrt{n_2+1}}e^{\tfrac{\sqrt{n_3+1} t}{\sqrt{n_2+1}}}\psi_1(p), -c_2 \tfrac{\sqrt{n_2+1}}{\sqrt{n_3+1}}e^{-\tfrac{\sqrt{n_2+1} t}{\sqrt{n_3+1}}} \psi_2(q)),\\ &\psi_{tt}=(c_1 \tfrac{n_3+1}{n_2+1}e^{\tfrac{\sqrt{n_3+1} t}{\sqrt{n_2+1}}}\psi_1(p), c_2 \tfrac{n_2+1}{n_3+1}e^{-\tfrac{\sqrt{n_2+1} t}{\sqrt{n_3+1}}} \psi_2(q)),\\ &\psi_{tv_i}=\tfrac{\sqrt{n_3+1}}{\sqrt{n_2+1}}(c_1 e^{\tfrac{\sqrt{n_3+1} t}{\sqrt{n_2+1}}}(\psi_1)_{v_i},0),\\ &\psi_{tw_j}=-\tfrac{\sqrt{n_2+1}}{\sqrt{n_3+1}}(0,c_2 e^{-\tfrac{\sqrt{n_2+1} t}{\sqrt{n_3+1}}} (\psi_2)_{w_j}),\\ &\psi_{v_iv_j}=(c_1 e^{\tfrac{\sqrt{n_3+1} t}{\sqrt{n_2+1}}}(\psi_1)_{v_iv_j},0),\\ &\psi_{w_iw_j}=(0,c_2 e^{-\tfrac{\sqrt{n_2+1} t}{\sqrt{n_3+1}}} (\psi_2)_{w_iw_j}). \end{align*} If we denote by $h_2$ the affine metric on $M_1$ and by $h_3$ the affine metric on $M_2$, it follows from the above formulas that \begin{align*} &\psi_{tt} =\tfrac{n_3-n_2}{\sqrt{(n_2+1)(n_3+1)}} \psi_t + \psi\\ &\psi_{v_iv_j} = \tfrac{\sqrt{(n_2+1)(n_3+1)}}{n_2+n_3+2} h_2(\partial v_i,\partial v_j) \psi+...\\ &\psi_{w_iw_j} = \tfrac{\sqrt{(n_2+1)(n_3+1)}}{n_2+n_3+2} h_3(\partial w_i,\partial w_j) \psi+... \end{align*} From \cite{nosa94} we see that $M$ is an affine hypersphere with mean curvature $-1$ if and only if \begin{equation*} det[\psi,\psi_t,\psi_{v_1},\dots,\psi_{v_{n_2}},\psi_{w_1},\dots,\psi_{w_{n_3}}]^2 =h(\partial_t,\partial_t) det[h(\partial v_i,\partial v_j)] det[h(\partial w_i,\partial w_j)]. \end{equation*} Taking into account that $\psi_1$ and $\psi_2$ are already affine spheres with mean curvature $-1$ we must have that \begin{equation} (c_1)^{n_2+1} (c_2)^{n_3+1} = \left (\tfrac{\sqrt{(n_2+1)(n_3+1)}}{n_2+n_3+2}\right )^{n_2+n_3+2}. \end{equation} Hence we can take \begin{align*} &c_1=\tfrac{\sqrt{(n_2+1)(n_3+1)}}{n_2+n_3+2} d_1\\ &c_2=\tfrac{\sqrt{(n_2+1)(n_3+1)}}{n_2+n_3+2} d_2, \end{align*} where \begin{equation} (d_1)^{n_2+1} (d_2)^{n_3+1} = 1. \end{equation} Hence by applying an equiaffine transformation we may assume that $d_1=d_2=1$ and therefore that the Calabi product of two hyperbolic affine spheres with mean curvature $-1$ is a hyperbolic affine sphere with mean curvature $-1$ if and only if \begin{equation*} \psi(t,p,q)= \tfrac{\sqrt{(n_2+1)(n_3+1)}}{n_2+n_3+2}( e^{\tfrac{\sqrt{n_3+1} t}{\sqrt{n_2+1}}}\psi_1(p),
e^{-\tfrac{\sqrt{n_2+1} t}{\sqrt{n_3+1}}} \psi_2(q)), \end{equation*} up to an equiaffine transformation.
For the Calabi product of a hyperbolic affine sphere and a point, we proceed in the same way to deduce the following. The Calabi product of a hyperbolic affine sphere with mean curvature $-1$ and a point is a hyperbolic affine sphere with mean curvature $-1$ if and only if \begin{equation*} \tilde \psi(t,p)= \tfrac{\sqrt{n}}{n+1}( e^{\tfrac{t}{\sqrt{n}}} \psi_1(p), e^{-\sqrt{n} t}), \end{equation*} up to an equiaffine transformation.
\begin{remark} A straightforward calculation shows that the Calabi product of two hyperbolic affine spheres has parallel cubic form (with respect to the Levi Civita connection) if and only if both original hyperbolic affine spheres have parallel cubic forms. Similarly one has that the Calabi product of a hyperbolic affine sphere and a point has parallel cubic form if and only if the original affine sphere has parallel cubic form. \end{remark}
\section{Characterisation of the Calabi product of two hyperbolic affine spheres and the proof of Theorem 2}
Throughout this section we will assume that $\phi: M^n\longrightarrow\mathbb R^{n+1} $ is a hyperbolic affine hypersphere. Without loss of generality we may assume that $\lambda=-1$ by applying a homothety. We will now prove Theorem \ref{theoremprod2}. Therefore, we shall also assume that $M$ admits three mutually orthogonal differential distributions $\mathcal D_1$, $\mathcal D_2$ and $\mathcal D_3$ of dimension $1$, $n_2> 0$ and $n_3> 0$ respectively with $1+n_2+n_3=n$, and, for all vectors $V\in \mathcal D_2$, $W\in \mathcal D_3$, \begin{gather*} K(T,T) = \lambda_1 T,\;\;\;\;\;\; K(T,V) = \lambda_2 V,\\ K(T,W) = \lambda_3 W,\;\;\;\;\;\; K(V,W) = 0. \end{gather*} By the apolarity condition we must have that \begin{equation} \lambda_1 +n_2 \lambda_2 +n_3 \lambda_3 = 0, \end{equation} Moreover, we will assume that \begin{align} &\lambda_1 = \lambda_2+\lambda_3\\ &\lambda_2 \lambda_3 = -1. \end{align}
The above conditions imply that $\lambda_1$, $\lambda_2$ and $\lambda_3$ are constants and can be determined explicitly in terms of the dimensions $n_2$ and $n_3$.
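Explicitly, substituting $\lambda_1=\lambda_2+\lambda_3$ into the apolarity relation gives $(n_2+1)\lambda_2+(n_3+1)\lambda_3=0$, which combined with $\lambda_2\lambda_3=-1$ (and, if necessary, replacing $T$ by $-T$ so that $\lambda_2\geq 0$) yields \begin{equation*} \lambda_2=\sqrt{\tfrac{n_3+1}{n_2+1}},\qquad \lambda_3=-\sqrt{\tfrac{n_2+1}{n_3+1}},\qquad \lambda_1=\lambda_2+\lambda_3=\tfrac{n_3-n_2}{\sqrt{(n_2+1)(n_3+1)}}. \end{equation*}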
As $M$ is a hyperbolic affine sphere, the Codazzi equation \eqref{codazzi} states that the covariant derivative $\hat \nabla K$ of the difference tensor with respect to the Levi Civita connection $\hat \nabla$ of the affine metric is totally symmetric. In that case, as also $h(K(X,Y),Z)$ is totally symmetric, the information of Lemma 1 and Lemma 2 of \cite{BRV} remains valid and can be summarized in the following lemma: \begin{lemma} \label{lemme1} We have \begin{enumerate} \item $\hat \nabla_{\mathcal D_1} \mathcal D_1 \subset \mathcal D_1$ \item $\hat \nabla_{\mathcal D_2} \mathcal D_2 \subset \mathcal D_2 \oplus \mathcal D_3$ \item $\hat \nabla_{\mathcal D_3} \mathcal D_3 \subset \mathcal D_2 \oplus \mathcal D_3$ \item $h(\hat \nabla_T W,V) = h(\hat \nabla_W T,V)=-h(\hat \nabla_V T,W)$, for any $V \in\mathcal D_2, W\in\mathcal D_3$ \end{enumerate} \end{lemma}
Similarly using the information of the previous lemma, Lemma 3 of \cite{BRV} reduces to \begin{lemma} \label{lemme2} We have \begin{enumerate} \item $(\lambda_3-\lambda_2) h(\hat \nabla_{V} \tilde V, W) = h( K(V,\tilde V), \hat\nabla_T W)
= h(K(V,\tilde V), \hat \nabla_W T)$ \item $(\lambda_2-\lambda_3) h(\hat\nabla_{W} \tilde W, V) = h(K(W,\tilde W), \hat\nabla_T V)
= h(K(W,\tilde W), \hat \nabla_V T)$ \end{enumerate} \end{lemma}
We denote now by $\{V_1,\dots,V_{n_2}\}$, respectively $\{W_1,\dots,W_{n_3}\}$ an orthonormal basis of $\mathcal D_2$ (resp. $\mathcal D_3$) with respect to the affine metric $h$. Then, we have \begin{lemma} \label{lemme3} Let $V, \tilde V \in \mathcal D_2$. Then $$h(\hat \nabla_V T, \hat \nabla_{\tilde V} T) =0.$$ \end{lemma} \begin{proof} Using the Gauss equation, we have that \begin{align*} h(\hat R(V,T)T,\tilde V)&= -h(V,\tilde V) - h(K(T,T),K(V,\tilde V))
+ h(K(T,V),K(T,\tilde V))\\ &= (-1- \lambda_1 \lambda_2 +\lambda_2^2) h( V,\tilde V)\\ &=(-1- \lambda_3 \lambda_2) h(V,\tilde V)=0. \end{align*} On the other hand, by a direct computation using the previous lemmas, we have \begin{align*} h(\hat R(V,T)T,\tilde V)&=h(\hat \nabla_{V} \hat \nabla_{T}T -\hat \nabla_{T}\hat \nabla_{V} T
-\hat \nabla_{\hat \nabla_{V}T -\hat \nabla_{T}V} T,\tilde V)\\ &=h(-\hat\nabla_{T} \hat\nabla_V T,\tilde V ) -\sum_{k=1}^{n_3} h(\hat \nabla_{V}T - \hat \nabla_T V,W_k) h(\hat \nabla_{W_k} T, \tilde V)\\ &=h(-\hat\nabla_{T} \hat \nabla_V T,\tilde V) \qquad \text{by Lemma \ref{lemme1} (iv)}\\ &=-\sum_{k=1}^{n_3} h(\hat\nabla_V T,W_k) h(\hat \nabla_{T} W_k,\tilde V)\\ &=\sum_{k=1}^{n_3} h(\hat \nabla_V T,W_k) h(\hat \nabla_{\tilde V} T,W_k)\\ &=h(\hat \nabla_V T, \hat \nabla_{\tilde V} T). \end{align*} \end{proof}
Similarly, we have \begin{lemma} \label{lemme4} Let $W, \tilde W \in \mathcal D_3$. Then $$h(\hat\nabla_W T, \hat\nabla_{\tilde W} T) = 0.$$ \end{lemma}
Combining the two previous lemmas with Lemma \ref{lemme2} and Lemma \ref{lemme1} we see that the distributions determined by $\mathcal D_2$ and $\mathcal D_3$ are totally geodesic. It also implies that $h(\hat \nabla_V T,W) = h(\hat \nabla_W T, V) = 0$.
This is sufficient to conclude that locally $(M,h)$ is isometric with $I \times M_1 \times M_2$ where $T$ is tangent to $I$, $\mathcal D_2$ is tangent to $M_1$ and $\mathcal D_3$ is tangent to $M_2$.
The product structure of $M$ implies the existence of local coordinates $(t,p,q)$ for $M$ based on an open subset containing the origin of $\mathbb R \times \mathbb R^{n_2} \times \mathbb R^{n_3}$,
such that $\mathcal D_1$ is given by $dp=dq=0$, $\mathcal D_2$ is given by $dt=dq=0$, and $\mathcal D_3$ is given by $dt=dp=0$. We may also assume that $T = \tfrac{\partial}{\partial t}$. We now put
\begin{equation}\label{eqphi2phi3} \quad \phi_2= -f \lambda_3 \phi + f T, \quad \phi_3 =
g \lambda_2 \phi - g T, \end{equation} where the functions $f$ and $g$, which depend only on the variable $t$, are determined by \begin{align*} f'=f (\lambda_3 - \lambda_1) ,\\ g'=g(\lambda_2 - \lambda_1). \end{align*} It is clear that solutions are given by \begin{equation*} f(t) = d_1 e^{(\lambda_3-\lambda_1)t} \qquad \text{and} \qquad g(t) = d_2 e^{(\lambda_2-\lambda_1)t}, \end{equation*} where $d_1$ and $d_2$ are constants. Of course, as $\lambda_1 = \lambda_2+\lambda_3$ we can rewrite the above equation as \begin{equation*} f(t) = d_1 e^{-\lambda_2 t} \qquad \text{and} \qquad g(t) = d_2 e^{-\lambda_3t}. \end{equation*} Computing $\lambda_1$, $\lambda_2$ and $\lambda_3$ explicitly, where if necessary by changing the sign of $T$ we may assume that $\lambda_2 \ge 0$, we find that \begin{align*} &\lambda_2 = \tfrac{\sqrt{n_3+1}}{\sqrt{n_2+1}},\\ &\lambda_3 = -\tfrac{\sqrt{n_2+1}}{\sqrt{n_3+1}}. \end{align*} Solving now the above equations for the immersion $\phi$ we find that \begin{align*} \phi &=\tfrac{1}{f (\lambda_2-\lambda_3)}\phi_2 + \tfrac{1}{g (\lambda_2-\lambda_3)} \phi_3\\ &=(\tfrac{1}{d_1} e^{\tfrac{\sqrt{n_3+1}}{\sqrt{n_2+1}} t} \phi_2
+ \tfrac{1}{d_2} e^{-\tfrac{\sqrt{n_2+1}}{\sqrt{n_3+1}} t} \phi_3)(\frac{\sqrt{(n_2+1)(n_3+1)}}{n_2+n_3+2}). \end{align*}
A straightforward computation, using \eqref{eqphi2phi3}, now shows that \begin{align*} D_T (\phi_2)&=D_T(-f \lambda_3 \phi + f T)\\ &=f(\lambda_3-\lambda_1) (-\lambda_3\phi+T)+ f (-\lambda_3 T +(K(T,T)+\phi))\\ &=f(\lambda_3-\lambda_1)(-\lambda_3\phi+T)+ f ((\lambda_1-\lambda_3) T +\phi)\\ &=f (\lambda_2 \lambda_3 +1)\phi =0. \end{align*} Similarly \begin{align*} &D_W (\phi_2)=f(- \lambda_3 W +K(W,T))=0,\\ &D_T (\phi_3)=0,\\ &D_V (\phi_3)=0. \end{align*} The above implies that $\phi_2$ reduces to a map of $M_1$ in $\mathbb R^{n+1}$ whereas $\phi_3$ reduces to a map of $M_2$ in $\mathbb R^{n+1}$. As we have that \begin{align*} &d\phi_2(V)=D_V (\phi_2)=f(- \lambda_3 V +K(V,T))=f(-\lambda_3+\lambda_2)V,\\ &d\phi_3(W)=D_W(\phi_3)=g(\lambda_2 W -K(W,T))=g (\lambda_2-\lambda_3)W, \end{align*} these maps are actually immersions. Moreover, denoting by $\nabla^1$ the $\mathcal D_2$ component of $\nabla$, we find that \begin{align*} D_{V}d\phi_2(\tilde V)&=f(-\lambda_3+\lambda_2)D_{V}\tilde V\\ &= f(-\lambda_3+\lambda_2)\nabla_{V}\tilde V+ f(-\lambda_3+\lambda_2)h(V,\tilde V)\phi \\ &= f(-\lambda_3+\lambda_2)\nabla^1_{V}\tilde V+f(-\lambda_3+\lambda_2)(h(K(V,\tilde V),T)T+h(V,\tilde V)\phi)\\ &=d\phi_2(\nabla^1_{V} \tilde V)+ f(-\lambda_3+\lambda_2)h(V,\tilde V)(\lambda_2 T +\phi)\\ &=d\phi_2(\nabla^1_{V} \tilde V)+f (-\lambda_3+\lambda_2)\lambda_2 h(V,\tilde V)(T-\lambda_3 \phi)\\ &=d\phi_2(\nabla^1_{V} \tilde V)+ (-\lambda_3+\lambda_2)\lambda_2 h(V,\tilde V)\phi_2. \end{align*} The above formulas imply that $\phi_2$ can be interpreted as a centroaffine immersion contained in an $n_2+1$-dimensional vector subspace of $\mathbb R^{n+1}$ with induced connection $\nabla^1$ and affine metric $h_1 = (-\lambda_3+\lambda_2)\lambda_2 h$. Similarly, we get that $\phi_3$ can be interpreted as a centroaffine immersion contained in an $n_3+1$-dimensional vector subspace of $\mathbb R^{n+1}$ with induced connection $\nabla^2$ (the restriction of $\nabla$ to $\mathcal D_3$) and affine metric $h_2 = (\lambda_3-\lambda_2)\lambda_3 h$. Of course as both spaces are complementary, we may assume by a linear transformation that the $n_2+1$ dimensional space is spanned by the first $n_2+1$ coordinates of $\mathbb R^{n+1}$ whereas the $n_3+1$ dimensional space is spanned by the last $n_3+1$ coordinates of $\mathbb R^{n+1}$.
Moreover, taking $V_1,\dots, V_{n_2}$ as before, we find that \begin{align*} \sum_{i=1}^{n_2} (\nabla^1 h_1) (V,V_i,V_i)&= \lambda_2(\lambda_2-\lambda_3)\sum_{i=1}^{n_2} (\nabla^1 h) (V,V_i,V_i)\\ &=-2\lambda_2(\lambda_2-\lambda_3)\sum_{i=1}^{n_2} h(\nabla^1_V V_i,V_i)\\ &=-2\lambda_2(\lambda_2-\lambda_3)\sum_{i=1}^{n_2} h(\nabla_V V_i,V_i)\\ &=\lambda_2(\lambda_2-\lambda_3)\sum_{i=1}^{n_2} (\nabla h) (V,V_i,V_i)=0, \end{align*} as $(\nabla h)(V,X,X)=-2h(K(V,X),X)$ and, by assumption, $K(V,W)=0$ and $h(K(V,T),T)=\lambda_2 h(V,T)=0$, so that the apolarity condition on $M$ gives $\sum_{i=1}^{n_2} (\nabla h)(V,V_i,V_i)=0$. So $M_1$ is a hyperbolic affine hypersphere. Choosing now the constant $d_1$ appropriately we may assume that $M_1$ has mean curvature $-1$. A similar argument also holds for $M_2$.
As \begin{align*} \phi &=\tfrac{1}{f (\lambda_2-\lambda_3)}\phi_2 + \tfrac{1}{g (\lambda_2-\lambda_3)} \phi_3\\ &=(\tfrac{1}{d_1} e^{\tfrac{\sqrt{n_3+1}}{\sqrt{n_2+1}} t} \phi_2 + \tfrac{1}{d_2} e^{-\tfrac{\sqrt{n_2+1}}{\sqrt{n_3+1}} t} \phi_3)(\frac{\sqrt{(n_2+1)(n_3+1)}}{n_2+n_3+2}), \end{align*} we note from Section 2 that we must have that $d_1^{n_2+1} d_2^{n_3+1} = 1$ and that therefore $\phi$ is given as the Calabi product of the immersions $\phi_2$ and $\phi_3$.
\begin{remark} In case that $M$ has parallel difference tensor, i.e. if $\hat \nabla K=0$, the conditions of Theorem 2 can be weakened. Indeed we can prove: \begin{theorem} Let $M$ be a hyperbolic affine sphere with mean curvature $\lambda$, where $\lambda<0$. Suppose that $\hat \nabla K= 0$ and there exist mutually $h$-orthogonal distributions $\mathcal D_1$ (of dimension $1$), $\mathcal D_2$ (of dimension $n_2$) and $\mathcal D_3$ (of dimension $n_3$) such that \begin{align*} &K(T,T)=\lambda_1 T,\\ &K(T,V)=\lambda_2 V,\\ &K(T,W)=\lambda_3 W, \end{align*} where $T$ is a unit vector spanning $\mathcal D_1$ and $V \in \mathcal D_2$, $W \in \mathcal D_3$. Moreover we suppose that $\lambda_2 \ne \lambda_3$ and $2\lambda_2 \ne \lambda_1 \ne 2 \lambda_3$. Then $\phi:M^n \rightarrow \mathbb R^{n+1}$ can be decomposed as the Calabi product of two hyperbolic affine sphere immersions $\psi_1:M_1^{n_2} \rightarrow \mathbb R^{n_2+1}$ and
$\psi_2:M_2^{n_3} \rightarrow \mathbb R^{n_3+1}$ with parallel cubic form. \end{theorem} \begin{proof} By applying an homothety we may choose $\lambda=-1$. As $\hat \nabla K=0$, we also have that $\hat R. K = 0$. This means that \begin{equation*} \hat R(X,Y)K(Z,U)= K(\hat R(X,Y)Z,U)+K(Z, \hat R(X,Y)U). \end{equation*} So, taking $X=Z=U=T$ and $Y=V$, we find that $$ \hat R(T,V)T=V -K_T K_V T +K_V K_TT=(1-\lambda_2^2+\lambda_1 \lambda_2) V. $$ Hence we deduce that \begin{equation*} (\lambda_1-2\lambda_2)(-1-\lambda_1 \lambda_2 +\lambda_2^2)=0. \end{equation*} Similarly we have \begin{equation*} (\lambda_1-2\lambda_3)(-1-\lambda_1 \lambda_3 +\lambda_3^2)=0. \end{equation*} In view of the conditions, we must have that $\lambda_2$ and $\lambda_3$ are the two different roots of the equation $$-1 - \lambda_1 x +x^2=0.$$ Consequently $\lambda_2+\lambda_3 = \lambda_1$ and $\lambda_2 \lambda_3=-1$.
Finally we take $Z=U=T$, $X=V$ and $Y=W$. Then we find that \begin{align*} \lambda_1 \hat R(V,W)T&= 2 K(\hat R(V,W)T,T)=-2 K(K_V K_W T,T)+2 K(K_W K_V T,T)\\ &=-2(\lambda_3-\lambda_2)K_T K_VW. \end{align*} Hence \begin{align*} \lambda_1 (\lambda_2-\lambda_3) K_V W&= 2 K(\hat R(V,W)T,T)=-2 K(K_V K_W T,T)+2 K(K_W K_V T,T)\\ &=-2(\lambda_3-\lambda_2)K_T K_VW. \end{align*} This implies that $K_V W$ is an eigenvector of $K_T$ with eigenvalue $\tfrac{1}{2} \lambda_1$. Given the form of $K_T$ we deduce that $K(V,W) = 0$. We are now in a position to apply Theorem 2 and deduce that $M$ can be obtained as the Calabi product of the hyperbolic affine spheres. \end{proof}
\end{remark}
\section{Characterisation of the Calabi product of a hyperbolic affine sphere and a point and the proof of Theorem 1}
Throughout this section we will assume that $\phi: M^n\longrightarrow\mathbb R^{n+1} $ is a hyperbolic affine hypersphere with mean curvature $-1$ and we will prove Theorem 1. Therefore, we shall also assume that $M$ admits two mutually orthogonal differential distributions $\mathcal D_1$ and $\mathcal D_2$ of dimension $1$ and $n_2> 0$, respectively, with $1+n_2=n$, and, for unit vector $T\in \mathcal D_1$ and all vectors $V\in \mathcal D_2$, \begin{gather*} K(T,T) = \lambda_1 T,\;\;\;\;\;\; K(T,V) = \lambda_2 V.\\ \end{gather*} By the apolarity condition we must have that \begin{equation} \lambda_1 +n_2 \lambda_2= 0, \end{equation} Moreover, we will assume that \begin{equation} 1+\lambda_1 \lambda_2-\lambda_2^2=0. \end{equation}
The above conditions imply that $\lambda_1$ and $\lambda_2$ are constant and can be determined explicitly in terms of the dimension $n$. Indeed, if necessary by replacing $T$ with $-T$, we have that \begin{align*} &\lambda_2= \tfrac{1}{\sqrt{n}},\\ &\lambda_1=-\tfrac{n-1}{\sqrt{n}}. \end{align*}
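These values follow immediately from the above assumptions: the apolarity relation gives $\lambda_1=-n_2\lambda_2=-(n-1)\lambda_2$, and substituting this into $1+\lambda_1\lambda_2-\lambda_2^2=0$ yields $1-n\lambda_2^2=0$.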
We now proceed as in the previous case. Using the fact that $\hat \nabla K$ is totally symmetric, we obtain: \begin{lemma} \label{lemme5} We have \begin{enumerate} \item $\hat \nabla_{T} T=0$, \item $\hat \nabla_{V} T=0$, \item $h(\hat \nabla_V \tilde V,T)=0$. \end{enumerate} \end{lemma}
The previous lemma tells us that the distributions determined by $\mathcal D_1$ and $\mathcal D_2$ are totally geodesic. This is sufficient to conclude that locally $(M,h)$ is isometric with $I \times M_1$ where $T$ is tangent to $I$ and $\mathcal D_2$ is tangent to $M_1$.
The product structure of $M$ implies the existence of local coordinates $(t,p)$ for $M$ based on an open subset containing the origin of $\mathbb R \times \mathbb R^{n_2}$,
such that $\mathcal D_1$ is given by $dp=0$ and $\mathcal D_2$ is given by $dt=0$. We may also assume that $T = \tfrac{\partial}{\partial t}$. We now put
\begin{equation}\label{eqphi2} \quad \phi_2= f \tfrac{1}{\lambda_2} \phi + f T, \quad \phi_3 =
g \lambda_2 \phi - g T, \end{equation} where the functions $f$ and $g$, which depend only on the variable $t$, are determined by \begin{align*} f'=-f \lambda_2= -\tfrac{1}{\sqrt{n}}f ,\\ g'=g(\lambda_2 - \lambda_1)=\sqrt{n}\,g. \end{align*} It is clear that solutions are given by \begin{equation*} f(t) = d_1 e^{-\tfrac{1}{\sqrt{n}}t} \qquad \text{and} \qquad g(t) = d_2 e^{\sqrt{n} t}. \end{equation*}
A straightforward computation now shows that \begin{align*} D_T (\phi_2)&=D_T(f \sqrt{n} \phi + f T)\\ &=-f(\phi+ \tfrac{1}{\sqrt{n}}T)+ f ( \sqrt{n} T +(K(T,T)+\phi))\\ &=f T (- \tfrac{1}{\sqrt{n}}+ \sqrt{n}-\tfrac{n-1}{\sqrt{n}})\\ &=0. \end{align*} Similarly \begin{align*} &D_T (\phi_3)=0,\\ &D_V (\phi_3)=0. \end{align*} The above implies that $\phi_2$ reduces to a map of $M_1$ in $\mathbb R^{n+1}$ whereas $\phi_3$ is a constant vector in $\mathbb R^{n+1}$. As we have that \begin{equation*} d\phi_2(V)=D_V (\phi_2)=f(\sqrt{n} V +K(V,T))=f(\sqrt{n}+\tfrac{1}{\sqrt{n}})V, \end{equation*} the map $\phi_2$ is actually an immersion. Moreover, denoting by $\nabla^1$ the $\mathcal D_2$ component of $\nabla$, we find that \begin{align*} D_{V}d\phi_2(\tilde V)&=f(\sqrt{n}+\tfrac{1}{\sqrt{n}})D_{V}\tilde V\\ &= f(\sqrt{n}+\tfrac{1}{\sqrt{n}})\nabla_{V}\tilde V+ f(\sqrt{n}+\tfrac{1}{\sqrt{n}})h(V,\tilde V)\phi \\ &= f(\sqrt{n}+\tfrac{1}{\sqrt{n}})\nabla^1_{V}\tilde V+f(\sqrt{n}+\tfrac{1}{\sqrt{n}})(h(K(V,\tilde V),T)T+h(V,\tilde V)\phi)\\ &=d\phi_2(\nabla^1_{V} \tilde V)+ f(\sqrt{n}+\tfrac{1}{\sqrt{n}})h(V,\tilde V)(\tfrac{1}{\sqrt{n}} T +\phi)\\ &=d\phi_2(\nabla^1_{V} \tilde V)+\tfrac{n+1}{n} h(V,\tilde V)\phi_2. \end{align*} The above formulas imply that $\phi_2$ can be interpreted as a centroaffine immersion contained in an $n_2+1$-dimensional vector subspace of $\mathbb R^{n+1}$ with induced connection $\nabla^1$ and affine metric $h_1 = \tfrac{n+1}{n} h$. Of course as the vector $\phi_3$ is transversal to the immersion $\phi_2$, we may assume by a linear transformation that $\phi_2$ lies in the space spanned by the first $n$ coordinates of $\mathbb R^{n+1}$ whereas the constant vector lies in the direction of the last coordinate, and by choosing $d_2$ appropriately we may assume that $\phi_3= (0,\dots,0,1)$.
As before we get that $M_1$ satisfies the apolarity condition and hence is a hyperbolic affine hypersphere. Choosing now the constant $d_1$ appropriately we may assume that $M_1$ has mean curvature $-1$.
As \begin{equation*} \phi =(\tfrac{1}{d_1} e^{\tfrac{1}{\sqrt{n}} t} \phi_2 + \tfrac{1}{d_2} e^{-\sqrt{n} t} \phi_3)(\frac{\sqrt{n}}{n+1}), \end{equation*} we note from Section 2 that we must have that $d_1^{n_2+1} d_2 = 1$ and that therefore $\phi$ is given as the Calabi product of the immersion $\phi_2$ and a point.
\vskip 1cm \begin{flushleft}
\noindent Zejun Hu: {\sc Department of Mathematics, Zhengzhou University, Zhengzhou 450052, People's Republic of China.}\ \ E-mail: [email protected]
Haizhong Li: {\sc Department of Mathematical Sciences, Tsinghua University, Beijing 100084, People's Republic of China} \ \ E-mail: [email protected]
Luc Vrancken: {\sc LAMATH, ISTV2, Campus du mont houy, Universite de Valenciennes, France.} \ \ E-mail: [email protected]
\end{flushleft}
\end{document}
\begin{document}
\begin{abstract} We explain how the germ of the structure group of a cycle set decomposes as a product of its Sylow-subgroups, and how this process can be reversed to construct cycle sets from ones with coprime classes. We study the Dehornoy class associated with a cycle set, and conjecture a bound that we prove in a specific case. The main tool used is a monomial representation, which allows for intuitive, short and self-contained proofs, in particular to easily re-obtain previously known results (Garsideness, I-structure, Dehornoy's class and germ, non-degeneracy of finite cycle sets). \end{abstract} \maketitle
\section{Introduction}\label{intro} In 1992 Drinfeld (\cite{drinfeld}) posed the question of classifying set-theoretical solutions of the (quantum) Yang--Baxter equation, given by pairs $(X,r)$ where $X$ is a set and $r\colon X\times X\to X\times X$ a bijection satisfying $r_1r_2r_1=r_2r_1r_2$, where $r_i$ acts on the $i$-th and $(i+1)$-th components of $X\times X\times X$. In \cite{etingof}, the authors propose to study solutions which are involutive ($r^2=\text{id}_{X\times X}$) and non-degenerate (if $r(x,y)=(\lambda_x(y),\rho_y(x))$ then for any $x\in X$, $\lambda_x$ and $\rho_x$ are bijective). Since then, many advances have been made on this question and many objects have been introduced: the structure group (\cite{etingof}), the I-structure (\cite{istruct}), etc. Many equivalent objects are known; here we are in particular interested in cycle sets, introduced by Rump (\cite{rump}). Dehornoy (\cite{rcc}) then studied the structure group (from cycle sets) from a Garside perspective (divisibility, word problem, ...), and concluded with a faithful representation, which will be the basis of this article. Starting from this representation, we retrieve most of Dehornoy's results with simpler, shorter and self-contained proofs, as well as other well-known results (the I-structure, the non-degeneracy of finite left-non-degenerate involutive solutions of \cite{rump}). Then we study finite quotients defined through an integer called the Dehornoy class of the solution; those quotients are called germs because they come with a natural way to recover the structure monoid and its Garside structure. We state the following conjecture on Dehornoy's class (Conjecture \ref{conj1}):
\begin{conj*} Let $S$ be a cycle set of size $n$. The Dehornoy class $d$ of $S$ is bounded above by the ``maximum of different products of partitions of $n$ into distinct parts'' and the bound is minimal, i.e. $$d\leq\max\left(\left\{\prod\limits_{i=1}^k n_i\middle|k\in\mathbb N, 1\leq n_1<\dots<n_k, n_1+\dots+n_k=n\right\}\right).$$ \end{conj*} We then focus on the germ and its Sylows, the main result being that cycle sets can be constructed from the Zappa--Szép product of germs (Theorem \ref{sylows}), this product being a sort of generalized semi-direct product where each term acts on the others: \begin{thm*} Any cycle set can be obtained as the Zappa--Szép product of cycle sets with class a prime power. \end{thm*} Taking decomposability (\cite{cycle}) into account, one can consider that the ``basic'' cycle sets are of class and size powers of the same prime.
The first two sections mostly give a new approach to well-known theorems, allowing simpler and more intuitive proofs, while the last two contain new results obtained by using the approach developed in the first two sections. More precisely:
Section 1 is a brief introduction to monomial matrices and the main properties that we will use for our proofs. Section 2 consists in recovering most results of \cite{rcc} with a monomial representation, allowing shorter, simpler and self-contained proofs, including the study of right-divisibility without Rump's theorem on the non-degeneracy of finite cycle sets (while also easily re-obtaining this theorem). Section 3 focuses on Dehornoy's class and germ; in particular, we state a conjecture on the bound of the classes and prove it in a particular case. Section 4 explains how to construct all cycle sets from ones with coprime classes through the Zappa--Szép product of germs, with a precise condition and an explicit algorithm/formula to do so.
\textbf{Acknowledgements} The author wishes to thank Leandro Vendramin for his insightful remarks on the content and readability of this article. \section{Monomial matrices} The basic tool to work on the representation are monomial matrices. We recall the definition and some basic properties: A matrix is said to be monomial if each row and each column has a unique non-zero coefficient. We denote by $\mathfrak{Monom}_n(R)$ the set of monomial matrices over a ring $R$. To a permutation $\sigma\in\mathfrak{S}_n$ we associate the permutation matrix $P_\sigma$ where the $i$-th row contains a $1$ on the $\sigma(i)$-th column, for instance $P_{(123)}=\begin{pmatrix}0&1&0\\0&0&1\\1&0&0\end{pmatrix}$. We then have $P_\sigma \begin{pmatrix} v_1\\\vdots\\v_n\end{pmatrix}=\begin{pmatrix} v_{\sigma(1)}\\\vdots\\v_{\sigma(n)}\end{pmatrix}$ and thus, if $e_i$ is the $i$-th canonical basis vector, $P_\sigma (e_i)=e_{\sigma^{-1}(i)}$. Moreover, for $\sigma,\tau\in\mathfrak{S}_n$ we find $P_\sigma P_\tau=P_{\tau\sigma}$. It is well known that a monomial matrix admits a unique (left) decomposition as a diagonal matrix right-multiplied by a permutation matrix. Thus, if $m$ is monomial, $D_m$ will denote the associated diagonal matrix, and $P_m$ the associated permutation matrix, i.e. $m=D_mP_m$, and by $\psi(m)$ we will denote the permutation associated with the matrix $P_m$. Let $D$ be a diagonal matrix and $P$ a permutation matrix. We denote the conjugate matrix $PDP^{-1}$ as $\pre{P}{D}$, and if $\sigma$ is the permutation associated with $P$ we will also write $\pre{\sigma}{D}$. The following statements are well-known. As they will be essential throughout this paper, we state them explicitly: \begin{lem} \label{conj} Let $D$ be a diagonal matrix and $P$ a permutation matrix. Then $\pre{P}{D}$ is diagonal.
Moreover, the $i$-th row of $D$ is sent by conjugation to the $\sigma^{-1}(i)$-th row. \end{lem} In particular, this implies that $P_\sigma D=\pre{\sigma}{D}P_\sigma$, giving a way to alternate between the left and right (unique) decompositions of monomial matrices. \begin{cor} \label{monprod} Let $m,m'$ be monomial matrices. Then we have $D_{mm'}=D_m\left(\pre{\psi(m)}{D_{m'}}\right)$ and $\psi(mm')=\psi(m')\circ\psi(m)$.
To simplify notations we will sometimes only write $\pre{m}{D_{m'}}$ for $\pre{\psi(m)}{D_{m'}}$. \end{cor} As a final example, let $m=\begin{pmatrix}0&a&0\\0&0&b\\c&0&0\end{pmatrix}$, $m'=\begin{pmatrix}0&0&x\\0&y&0\\z&0&0\end{pmatrix}$, which decomposes with $D_m=\text{diag}(a,b,c)$, $D_{m'}=\text{diag}(x,y,z)$ and $\psi(m)=(123)$, $\psi(m')=(13)$. We find $\psi(m')\circ\psi(m)=(13)\circ(123)=(12)$ and $$D_m\left(\pre{\psi(m)}{D_{m'}}\right)=\diag{a,b,c}\pre{(123)}{\diag{x,y,z}}=\diag{a,b,c}\diag{y,z,x}=\diag{ay,bz,cx}$$ and indeed $mm'=\begin{pmatrix}0&ay&0\\bz&0&0\\0&0&cx\end{pmatrix}=\diag{ay,bz,cx}P_{(12)}$. \section{Cycle sets and monomial matrices}\label{cyclesets} In this section, we retrieve the results of \cite{rcc} using monomial matrices. In particular, our proofs don't rely on Rump's theorem on the bijectivity of finite cycle sets (\cite{rump}), and use simpler arguments compared with the technicality of \cite{rcc}. Moreover, although we don't need it in this section, we provide a short and simple proof of Rump's theorem. \subsection{Cycle sets} \begin{defi}[\cite{rump}] A cycle set is a set $S$ endowed with a binary operation $*$ such that for all $s$ in $S$ the map $\psi(s)\colon t\mapsto s*t$ is bijective and for all $s,t,u$ in $S$:
\begin{equation}
\label{RCL}
(s*t)*(s*u)=(t*s)*(t*u).
\end{equation}
When $S$ is finite, $\psi(s)$ can be identified with a permutation in $\mathfrak{S}_n$.
When the diagonal map is the identity (i.e. for all $s\in S$, $s*s=s$), $S$ is called square-free. \end{defi} From now on, we fix a cycle set $(S,*)$.
\begin{defi}[\cite{rump}] The group $G_S$ associated with $S$ is defined by the presentation:
\begin{equation}
\label{RCG}
G_S\coloneqq\left\langle S\mid s(s*t)=t(t*s),\: \forall s\neq t\in S\right\rangle.
\end{equation}
Similarly, we define the associated monoid $M_S$ by the presentation: $$M_S\coloneqq\left\langle S\mid s(s*t)=t(t*s),\: \forall s\neq t\in S\right\rangle^+.$$
It will be called the structure group (resp. monoid) of $S$.
\end{defi}
\begin{ex} Let $S=\{s_1,\dots,s_n\}$, $\sigma=(12\dots n)\in\mathfrak{S}_n$. The operation $s_i*s_j=s_{\sigma(j)}$ makes $S$ into a cycle set, as for all $s,t$ in $S$ we have $(s*t)*(s*s_j)=s_{\sigma^2(j)}=(t*s)*(t*s_j)$.
The structure group of $S$ then has generators $s_1,\dots,s_n$ and relations $s_is_{\sigma(j)}=s_js_{\sigma(i)}$ (which is trivial for $i=j$).
In particular, for $n=2$ we find $G=\langle s,t\mid s^2=t^2\rangle$. \end{ex}
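The cycle set axioms in this example are also easy to check mechanically. The following minimal Python sketch (an illustration only, encoding $S$ by the indices $0,\dots,n-1$ and the operation $s_i*s_j=s_{\sigma(j)}$ by $j\mapsto j+1 \bmod n$) verifies the bijectivity of each $\psi(s)$ and equation \eqref{RCL}:
\begin{verbatim}
# Sketch: check the cycle set axioms for s_i * s_j = s_{sigma(j)},
# where sigma is the n-cycle (1 2 ... n), written with 0-based indices.
n = 4

def star(i, j):
    # s_i * s_j = s_{sigma(j)}; the left factor does not intervene here
    return (j + 1) % n

# each map t -> s * t must be a bijection of {0, ..., n-1}
assert all(sorted(star(s, t) for t in range(n)) == list(range(n))
           for s in range(n))

# the cycle set equation (s*t)*(s*u) = (t*s)*(t*u)
assert all(star(star(s, t), star(s, u)) == star(star(t, s), star(t, u))
           for s in range(n) for t in range(n) for u in range(n))
print("cycle set axioms verified for n =", n)
\end{verbatim}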
When the context is clear, we will write $G$ (resp. $M$) for $G_S$ (resp. $M_S$).
We also assume $S$ to be finite and fix an enumeration $S=\{s_1,\dots,s_n\}$.
\begin{rmk}
By the definition of $\psi\colon S\to \mathfrak{S}_n$ we have that $s_i*s_j=s_{\psi(s_i)(j)}$, which we will also write $\psi(s_i)(s_j)$ for simplicity.
\end{rmk}
\subsection{Dehornoy's calculus}\label{calc}
Recall that we fixed $(S,*)$ a finite cycle set of size $n$ with structure monoid (resp. group) $M$ (resp. $G$).
In the following, we introduce the basics of Dehornoy's Calculus, which will be easily understood by directly looking at the representation introduced in the same paper \cite{rcc}, and we use those to retrieve the well-known I-structure of the structure monoid (\cite{istruct}).
Although all these results are already stated in \cite{rcc}, their provided proofs are very technical, whereas using monomial matrices greatly simplifies proofs and allows for more intuition.
\begin{defi}[\cite{rcc}]
\label{omepi}
For a positive integer $k$, we define inductively the formal expression $\Omega_k$ by $\Omega_1(x_1)=x_1$ and
\begin{equation}
\label{Omega}
\Omega_k(x_1,\dots,x_k)=\Omega_{k-1}(x_1,\dots,x_{k-1})*\Omega_{k-1}(x_1,\dots,x_{k-2},x_k).
\end{equation}
We then define another formal expression $\Pi_k$ by:
\begin{equation}
\label{Pi}
\Pi_k(x_1,\dots,x_k)=\Omega_1(x_1)\cdot \Omega_2(x_1,x_2)\cdot\:\dots\:\cdot\Omega_k(x_1,\dots,x_k).
\end{equation}
\end{defi}
For a cycle set $S$, $\Omega_k(t_1,\dots,t_k)$ will be the evaluation in $S$ of $\Omega_k(x_1,\dots,x_k)$ at $(t_1,\dots,t_k)$ in $S^k$. Similarly, $\Pi_k(t_1,\dots,t_k)$ will be the evaluation in $M_S$ of $\Pi_k(x_1,\dots,x_k)$ at $(t_1,\dots,t_k)$, with the symbol $\cdot$ identified with the product of elements in $M_S$.
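These recursive definitions are easily evaluated on a finite cycle set. The following Python sketch (illustrative only; the cycle set is encoded by a list \texttt{psi} of 0-based permutations, so that \texttt{psi[i][j]} is the index of $s_i*s_j$) computes $\Omega_k(t_1,\dots,t_k)$ as an element of $S$ and $\Pi_k(t_1,\dots,t_k)$ as a word over $S$; it is run here on the cycle set of Example \ref{ex1} below:
\begin{verbatim}
# Sketch: evaluate Omega_k and Pi_k on a finite cycle set.
# psi[i][j] is the (0-based) index of s_i * s_j.

def omega(psi, ts):
    # Omega_1(t1) = t1 and
    # Omega_k(t1,...,tk) = Omega_{k-1}(t1,...,t_{k-1}) * Omega_{k-1}(t1,...,t_{k-2},tk)
    if len(ts) == 1:
        return ts[0]
    left = omega(psi, ts[:-1])
    right = omega(psi, ts[:-2] + ts[-1:])
    return psi[left][right]

def pi_word(psi, ts):
    # Pi_k(t1,...,tk) = Omega_1(t1) . Omega_2(t1,t2) ... Omega_k(t1,...,tk), as a word over S
    return [omega(psi, ts[:i + 1]) for i in range(len(ts))]

# psi(s1)=(1234), psi(s2)=(1432), psi(s3)=(24), psi(s4)=(13), in 0-based form
psi = [[1, 2, 3, 0], [3, 0, 1, 2], [0, 3, 2, 1], [2, 1, 0, 3]]
print(pi_word(psi, [0, 0, 2, 1]))  # [0, 1, 2, 3], i.e. Pi_4(s1,s1,s3,s2) = s1 s2 s3 s4
\end{verbatim}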
\begin{lem}[\cite{rcc}]\label{OmInv} The element $\Omega_k(t_1,\dots,t_k)$ of $S$ is invariant by permutation of the first $k-1$ entries.
\end{lem}
\begin{proof} For $k=1$ and $k=2$ there is only the identity permutation and for $k=3$ this is precisely the condition the cycle set equation (\ref{RCL}): $$\Omega_3(s,t,u)=\Omega_2(s,t)*\Omega_2(s,u)=(s*t)*(s*u)=(t*s)*(t*u)=\Omega_3(t,s,u).$$
Assume $k\geq 4$ and proceed by induction. Since the transpositions $\sigma_i=\begin{pmatrix}i&i+1\end{pmatrix}$ with $i\leq k-2$ generate the group of permutations of the first $k-1$ entries, we only have to look at these $\sigma_i$. We have, by definition, $$\Omega_k(t_1,\dots,t_k)=\Omega_{k-1}(t_1,\dots,t_{k-1})*\Omega_{k-1}(t_1,\dots,t_{k-2},t_k).$$ If $i\neq k-2$, by the induction hypothesis, both $\Omega_{k-1}$ occurring here are invariant by $\sigma_i$, as it doesn't affect the last entry. There remains the case $i=k-2$, for which we have:
$\Omega_k(t_1,\dots,t_{\sigma_{k-2}(k-2)},t_{\sigma_{k-2}(k-1)},t_k)=\Omega_k(t_1,\dots,t_{k-3},t_{k-1},t_{k-2},t_k)$\\ $=\Omega_{k-1}(t_1,\dots,t_{k-3},t_{k-1},t_{k-2})*\Omega_{k-1}(t_1,\dots,t_{k-3},t_{k-1},t_{k})$
(Expanding)\\ $=\left(\Omega_{k-2}(t_1,\dots,t_{k-3},t_{k-1})*\Omega_{k-2}(t_1,\dots,t_{k-3},t_{k-2})\right)$
(Expanding)\\$\tab *\left(\Omega_{k-2}(t_1,\dots,t_{k-3},t_{k-1})*\Omega_{k-2}(t_1,\dots,t_{k-3},t_{k})\right)$\\ $=\left(\Omega_{k-2}(t_1,\dots,t_{k-3},t_{k-2})*\Omega_{k-2}(t_1,\dots,t_{k-3},t_{k-1})\right)$
(cycle set Equation)\\$\tab *\left(\Omega_{k-2}(t_1,\dots,t_{k-3},t_{k-2})*\Omega_{k-2}(t_1,\dots,t_{k-3},t_{k})\right)$\\ $=\Omega_{k-1}(t_1,\dots,t_{k-3},t_{k-2},t_{k-1})*\Omega_{k-1}(t_1,\dots,t_{k-3},t_{k-2},t_{k})$
(Collapsing)\\ $=\Omega_k(t_1,\dots,t_{k-2},t_{k-1},t_k).$
(Collapsing)\\
Which concludes the proof. \end{proof}
\begin{prop}\label{piInv} The element $\Pi_k(t_1,\dots,t_k)$ of $M_S$ is invariant by permutation of the entries.
\end{prop}
\begin{proof} For $k=1$ there is nothing to prove. For $k=2$ we find $\Pi_2(t_1,t_2)=t_1(t_1*t_2)$ which is identified with $t_2(t_2*t_1)=\Pi_2(t_2,t_1)$ by the defining relations of $M$ in \ref{RCG}.
Now assume $k\geq 3$ and, as in the proof of the previous lemma, restrict to the transpositions $\sigma_i=\begin{pmatrix}i&i+1\end{pmatrix}$ with $1\leq i<k$. Recall that, by definition $$ \Pi_k(t_1,\dots,t_k)=\Omega_1(t_1)\cdot \Omega_2(t_1,t_2)\cdot\:\dots\:\cdot\Omega_k(t_1,\dots,t_k).$$ Clearly, the first $i-1$ terms remain unchanged by $\sigma_i$. And by the previous Lemma \ref{OmInv}, for $j>i+1$ the terms $\Omega_j$ are invariant by $\sigma_i$. Thus we only have to look at the product: $$\Omega_i(t_1,\dots,t_{i-1},t_{i+1})\cdot\Omega_{i+1}(t_1,\dots,t_{i-1},t_{i+1},t_i)$$ $=\Omega_i(t_1,\dots,t_{i-1},t_{i+1})\cdot\left(\Omega_i(t_1,\dots,t_{i-1},t_{i+1})*\Omega_i(t_1,\dots,t_{i-1},t_{i})\right)$
(Expanding)\\ $=\Omega_i(t_1,\dots,t_{i-1},t_{i})\cdot\left(\Omega_i(t_1,\dots,t_{i-1},t_{i})*\Omega_i(t_1,\dots,t_{i-1},t_{i+1})\right)$
(Relations of $M$)\\ $=\Omega_i(t_1,\dots,t_{i-1},t_{i})\cdot\Omega_{i+1}(t_1,\dots,t_{i-1},t_i,t_{i+1}).$
(Collapsing)\\ Which concludes the proof. \end{proof} \begin{lem} For any $s,t_1,\dots,t_k$ in $S$, the map $s\mapsto \Omega_{k+1}(t_1,\dots,t_k,s)$ is bijective. \end{lem} \begin{proof} We proceed by induction: for $k=1$ there is nothing to prove, for $k=2$ this is part of the definition of a cycle set. So consider $k\geq 2$ and suppose that the property holds for $k-1$. We have $$\Omega_{k+1}(t_1,\dots,t_k,s)=\Omega_k(t_1,\dots,t_k)*\Omega_k(t_1,\dots,t_{k-1},s),$$ by the induction hypothesis $s\mapsto \Omega_k(t_1,\dots,t_{k-1},s)$ is bijective, and as $\Omega_k(t_1,\dots,t_k)$ is an element of $S$, its left action is bijective, which concludes the proof. \end{proof} \begin{prop}\label{pipres} Let $f$ be in $M$. Then there exists $(t_1,\dots,t_k)$ in $S^k$ such that $f=\Pi_k(t_1,\dots,t_k)$. \end{prop} In the sequel, for any $f\in M$, by a "$\Pi$-expression of $f$" we mean choosing any $(t_1,\dots,t_k)$ in $S^k$ such that $f=\Pi_k(t_1,\dots,t_k)$. \begin{proof} Take a decomposition of $f$ as a product of elements of $S$: $$f=t'_1t'_2\dots t'_k.$$ Let $t_1=t_1'$; because $S$ is a cycle set, the map $t'\mapsto t_1*t'$ is bijective, so there exists $t_2$ such that $t_2'=t_1*t_2$ (explicitly $t_2=\psi(t_1)^{-1}(t'_2)$), i.e.: $$f=t_1(t_1*t_2)t'_3\dots t'_k=\Omega_1(t_1)\Omega_2(t_1,t_2)t'_3\dots t'_k=\Pi_2(t_1,t_2)t'_3\dots t'_k.$$ We proceed by induction on $k$: suppose that we have $t_1,\dots,t_{k-1}$ such that $t'_1\dots t'_{k-1}=\Pi_{k-1}(t_1,\dots,t_{k-1})$, i.e. $t'_i=\Omega_i(t_1,\dots,t_i)$ for $i< k$. By the previous lemma the map $s\mapsto \Omega_k(t_1,\dots,t_{k-1},s)$ is bijective, so there exists $t_{k}$ such that $$t'_{k}=\Omega_{k}(t_1,\dots,t_{k}).$$ By induction, this gives the existence of $t_1,\dots,t_k$ such that $$f=\Omega_1(t_1)\dots\Omega_k(t_1,\dots,t_k)=\Pi_k(t_1,\dots,t_k).$$ \end{proof} \begin{ex}\label{ex1} Take $S=\{s_1,s_2,s_3,s_4\}$ with \begin{align*}\psi(s_1)=(1234)&\qquad\psi(s_3)=(24)\\\psi(s_2)=(1432)&\qquad\psi(s_4)=(13).\end{align*} And consider the element $f=s_1s_2s_3s_4$. We have $\psi(s_1)^{-1}(s_2)=s_1$, so $f=s_1(s_1*s_1)s_3s_4=\Pi_2(s_1,s_1)s_3s_4$.
Similarly, $\psi(s_2)^{-1}(s_3)=s_4$, so $s_3=s_2*s_4=(s_1*s_1)*s_4$, as $\psi(s_1)^{-1}(s_4)=s_3$, we have $s_3=(s_1*s_1)*(s_1*s_3)=\Omega_3(s_1,s_1,s_3)$. So $f=\Pi_3(s_1,s_1,s_3)s_4$.
Finally, for $s_4$, we first write $s_4=s_3*a$, then $a=s_2*b$ and $b=s_1*c$ (going through the letters of $f=s_1s_2s_3s_4$ from right to left), so that $s_4=s_3*(s_2*(s_1*c))$. Replacing $s_3,s_2$ and $s_1$ by their previously found expressions gives $$s_4=\left((s_1*s_1)*(s_1*s_3)\right)*\left((s_1*s_1)*(s_1*c)\right)=\Omega_4(s_1,s_1,s_3,c).$$ Here we find $c=s_2$ so $$f=\Pi_4(s_1,s_1,s_3,s_2).$$ One can also check for instance that $s_4=\Omega_4(s_1,s_1,s_3,s_2)$ also equals $\Omega_4(s_3,s_1,s_1,s_2)$ and so $f=\Pi_4(s_3,s_1,s_1,s_2)$. \end{ex} \subsection{The monomial representation}
Recall that we fix $(S,*)$ a finite cycle set of size $n$ with $S=\{s_1,\dots,s_n\}$ and with structure monoid (resp. group) $M$ (resp. $G$).
\begin{prop}[\cite{rcc}] \label{prepr}
Let $q$ be an indeterminate and consider $\mathfrak{Monom}_n(\mathbb Q[q,q^{-1}])$, denote $D_{s_i}$ the matrix $\text{diag}(1,\dots,q,\dots,1)$ the $n\times n$ diagonal matrix with a $q$ on the $i$-th row.
The map $\Theta$ defined on $S$ by
\begin{equation}
\label{repr}
\Theta(s_i)\coloneqq D_{s_i} P_{\psi(s_i)}
\end{equation}
extends to a representation $G\to\mathfrak{Monom}_n(\mathbb Q[q,q^{-1}])$ and similarly to a morphism $M\to\mathfrak{Monom}_n(\mathbb Q[q])$.
\end{prop}
\begin{proof} We have to show that $\Theta$ respects the defining relations of $G$ (and $M$). Let $s_i,s_j$ be in $S$ and define $g=\Theta(s_i)\Theta(s_i*s_j)$ and $g'=\Theta(s_j)\Theta(s_j*s_i)$. By Corollary \ref{monprod} we have $D_g=D_{s_i}\pre{\psi(s_i)}{D_{s_i*s_j}}=D_{s_i}\pre{\psi(s_i)}{D_{\psi(s_i)(s_j)}}$ and the latter is equal to $D_{s_i}D_{s_j}$ by Lemma \ref{conj}. By symmetry and commutativity of diagonal matrices, we deduce $D_g=D_{g'}$.
On the other hand, again by Corollary \ref{monprod}, we have $\psi(g)(t)=\psi(s_i*s_j)\circ\psi(s_i)(t)=(s_i*s_j)*(s_i*t)$ for all $t\in S$. By symmetry and as $S$ is a cycle set we deduce that $\psi(g)=\psi(g')$ and so $g=g'$.
\end{proof}
For simplicity, we will write $\Theta(g)=D_gP_g$ to mean $\Theta(g)=D_{\Theta(g)}P_{\Theta(g)}$.
\begin{rmk} The image of $G$ by $\Theta$ lies in the subgroup of $\mathfrak{Monom}_n(\mathbb Q[q,q^{-1}])$ consisting of matrices such that the non-zero coefficients (i.e. the diagonal part of the decomposition) consist only of powers of $q$ (including $q^0=1$). We denote this subgroup by $\Sigma_n$. By $\Sigma_n^+$ we denote the submonoid of $\mathfrak{Monom}_n(\mathbb Q[q])$ consisting of matrices whose non-zero coefficients are non-negative powers of $q$ only, and by $D_i$ the matrix diag$(1,\dots,q,\dots,1)$ with a $q$ in the $i$-th place.
\end{rmk}
\begin{rmk}
Let $G^+$ be the submonoid of $M$ of positive words. As $M$ and $G^+$ have the same generators, their images in their respective representations $\Theta$ coincide. Thus, when working in $\mathfrak{Monom}_n(\mathbb Q[q,q^{-1}])$, we will not distinguish between $\Theta(M)$ and $\Theta(G^+)$. Later, we will see that in fact $G$ is the group of fractions of $M$ and $M=G^+$.
\end{rmk}
\begin{ex}
Set $S=\{s_1,s_2,s_3\}$ and $\psi(s_i)=(123)$ for all $i$.
$$\Theta(s_1)=\begin{pmatrix}q&0&0\\0&1&0\\0&0&1\end{pmatrix}\begin{pmatrix}0&1&0\\0&0&1\\1&0&0\end{pmatrix}=\begin{pmatrix}0&q&0\\0&0&1\\1&0&0\end{pmatrix}$$ and similarly $$ \Theta(s_2)=\begin{pmatrix}0&1&0\\0&0&q\\1&0&0\end{pmatrix} \quad \Theta(s_3)=\begin{pmatrix}0&1&0\\0&0&1\\q&0&0\end{pmatrix}.$$
\end{ex}
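For concrete computations it is convenient to encode an element of $\Sigma_n$ by the pair formed by the exponents of $q$ appearing in its diagonal part and its permutation. The following Python sketch (an illustration only) multiplies such pairs according to Corollary \ref{monprod} and checks the defining relations $\Theta(s_i)\Theta(s_i*s_j)=\Theta(s_j)\Theta(s_j*s_i)$ for the example above:
\begin{verbatim}
# Sketch: monomial matrices with q-power coefficients, encoded as
# (exponents, permutation), i.e. m = diag(q**e[0],...,q**e[n-1]) * P_sigma,
# with sigma written as a 0-based list.

def mul(m1, m2):
    # D_{mm'} = D_m * (conjugate of D_{m'} by psi(m)),  psi(mm') = psi(m') o psi(m)
    e1, s1 = m1
    e2, s2 = m2
    e = [e1[i] + e2[s1[i]] for i in range(len(e1))]
    s = [s2[s1[i]] for i in range(len(s1))]
    return (e, s)

def theta(i, psi):
    # Theta(s_i) = D_{s_i} P_{psi(s_i)}
    e = [0] * len(psi)
    e[i] = 1
    return (e, psi[i])

# S = {s1, s2, s3} with psi(s_i) = (123) for every i, written 0-based
psi = [[1, 2, 0], [1, 2, 0], [1, 2, 0]]

for i in range(3):
    for j in range(3):
        lhs = mul(theta(i, psi), theta(psi[i][j], psi))   # Theta(s_i) Theta(s_i * s_j)
        rhs = mul(theta(j, psi), theta(psi[j][i], psi))   # Theta(s_j) Theta(s_j * s_i)
        assert lhs == rhs
print("defining relations hold in the monomial representation")
\end{verbatim}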
\begin{prop}\label{PDDP} For all $s,t\in S$: $$P_s D_t=\pre{\psi(s)}{D_t}P_s=D_{\psi(s)^{-1}(t)}P_s.$$
In particular, $P_s D_{s*t}=D_t P_s$.
\end{prop}
\begin{proof}
This is a direct consequence of Lemma \ref{conj}.
\end{proof}
\begin{defi} For an element $g\in\Sigma_n$, we define its "coefficient-powers tuple" cp$(g)$ to be the unique $n$-tuple of integers $(c_1,\dots,c_n)$ such that $D_g=\text{diag}(q^{c_1},\dots,q^{c_n})$.
We set $\lambda(g)\coloneqq\sum_{i=1}^n \mid c_i\mid$.
\end{defi}
For $\sigma\in\mathfrak S_n$, by $\pre{\sigma}{(c_1,\dots,c_n)}$ we denote $(c_{\sigma(1)},\dots,c_{\sigma(n)})$.
\begin{ex}
If $g=\begin{pmatrix}q^2&0&0&0\\0&0&1&0\\0&0&0&q^{-1}\\0&q&0&0\end{pmatrix}$, then cp$(g)=(2,0,-1,1)$ and $\lambda(g)=2+0+1+1=4$.
\end{ex}
\begin{prop}\label{cpLamb} For all $g,h\in\Sigma_n$ we have:
\begin{equation}
\label{cpprod}
\text{cp}(gh)=\text{cp}(g)+\pre{\psi(g)}{\text{cp}(h)}.
\end{equation}
Moreover, if all the coefficient-powers of $g$ and $h$ are non-negative (in particular if $g,h\in\Theta(M)$), then $\lambda(gh)=\lambda(g)+\lambda(h)$.
\end{prop}
\begin{proof}
The first equality is a direct consequence of Corollary \ref{monprod} applied to the representation. The second follows from the first, as no cancellation can occur in \eqref{cpprod} when all the coefficient-powers are non-negative; for elements of $\Theta(M)$ this reflects the fact that the defining relations of the structure monoid preserve the length of words.
\end{proof}
Set $\Omega'=\Theta\circ\Omega$ and $\Pi'=\Theta\circ\Pi$ the evaluation in the representation of the constructions from Section \ref{calc}, that is $\Pi'_k(t_1,\dots,t_k)=\Theta(\Pi_k(t_1,\dots,t_k))$. \begin{prop}
\label{pimat}
Let $t_1,\dots,t_k$ be in $S$, then $$D_{\Pi'_k(t_1,\dots,t_k)}=D_{t_1}\dots D_{t_k}$$ and $$P_{\Pi'_k(t_1,\dots,t_k)}=P_{\Omega'_1(t_1)}\dots P_{\Omega'_k(t_1,\dots,t_k)}.$$ Or equivalently for all $s$ in $S$, $\psi(\Pi'_k(t_1,\dots,t_k))(s)=\Omega'_{k+1}(t_1,\dots,t_k,s).$
\end{prop}
\begin{proof} We proceed by induction: for $r=1$, $\Pi_1(t_1)=t_1$ and there is nothing to prove. Assume $r\geq1$ and the property true for $r-1$. Then, by definition we have $\Pi(t_1,\dots,t_k)=\Pi_{k-1}(t_1,\dots,t_{k-1})\cdot \Omega_k(t_1,\dots,t_k)$, so by the induction hypothesis
$$\Pi'_{k}(t_1,\dots,t_k)=\left(D_{t_1}\dots D_{t_{k-1}}P_{\Omega'_1(t_1)}\dots P_{\Omega'_{k-1}(t_1,\dots t_{k-1})}\right)\left(D_{\Omega'_k(t_1,\dots,t_k)}P_{\Omega'_k(t_1,\dots,t_k)}\right)$$
Note that $\Omega_k'(t_1,\dots,t_k)=\Omega_{k-1}'(t_1,\dots,t_{k-1})*\Omega_{k-1}'(t_1,\dots,t_{k-2},t_k)$. So by Proposition \ref{PDDP} we get
\begin{align*}P_{\Omega'_{k-1}(t_1,\dots t_{k-1})}D_{\Omega'_k(t_1,\dots,t_k)}&=P_{\Omega'_{k-1}(t_1,\dots t_{k-1})}D_{\Omega_{k-1}'(t_1,\dots,t_{k-1})*\Omega_{k-1}'(t_1,\dots,t_{k-2},t_k)}\\&=D_{\Omega'_{k-1}(t_1,\dots,t_{k-2},t_k)}P_{\Omega'_{k-1}(t_1,\dots t_{k-1})}.\end{align*}
We can then repeat this process for all the permutation matrices $P_{\Omega'_{k-2}(t_1,\dots t_{k-2})},\dots,P_{\Omega_1'(t_1)}$ and get $$P_{\Omega'_1(t_1)}\dots P_{\Omega'_{k-1}(t_1,\dots t_{k-1})}D_{\Omega'_k(t_1,\dots,t_k)}=D_{t_k}P_{\Omega'_1(t_1)}\dots P_{\Omega'_{k-1}(t_1,\dots t_{k-1})}.$$
Thus we find $$\Pi'_k(t_1,\dots,t_k)=\left(D_{t_1}\dots D_{t_k}\right)\left(P_{\Omega'_1(t_1)}\dots P_{\Omega'_k(t_1,\dots,t_k)}\right).$$
As $P_\sigma P_\tau=P_{\tau\sigma}$, we have $\psi(\Pi'_k(t_1,\dots,t_k))(s)=\psi(\Omega'_k(t_1,\dots,t_k))\circ\dots\circ\psi(\Omega'_1(t_1))(s)$. Then $\psi(\Omega'_1(t_1))(s)=t_1*s=\Omega_2(t_1,s)$, which in turn gives $\psi(\Omega'_2(t_1,t_2))\circ\psi(\Omega'_1(t_1))(s)=\psi(\Omega_2(t_1,t_2))(\Omega_2(t_1,s))=\Omega_2(t_1,t_2)*\Omega_2(t_1,s)=\Omega_3(t_1,t_2,s)$. By induction, this gives the last statement.
\end{proof}
\begin{cor}\label{tupPi} Any tuple of non-negative integers $(c_1,\dots,c_n)\in\mathbb N^n$ can be realized as the coefficient-powers tuple of a matrix in $\Theta(M_S)$.
\end{cor}
\begin{proof} Let $l=\sum\limits_i c_i$ and take the $l$-tuple containing $c_i$ times the element $s_i$. By the previous proposition, $\Pi'_l$ applied to this tuple gives the expected result.
\end{proof}
\begin{ex}
As in \ref{ex1} take $S=\{s_1,s_2,s_3,s_4\}$ with $$\psi(s_1)=(1234)\qquad\psi(s_3)=(24)\qquad\psi(s_2)=(1432)\qquad\psi(s_4)=(13).$$
The tuple $(2,1,1,0)$ can be obtained from $\Pi_4(s_1,s_1,s_3,s_2)=s_1s_2s_3s_4=f$ as, in the induction of the proof we did: \begin{align*}P_{s_1}P_{s_2}P_{s_3}D_{s_4}&=P_{s_1}P_{s_2}D_{\psi(s_3)^{-1}(s_4)}P_{s_3}=P_{s_1}D_{\psi(s_2)^{-1}\circ\psi(s_3)^{-1}(s_4)}P_{s_2}P_{s_3}\\&=D_{\psi(s_1)^{-1}\circ\psi(s_2)^{-1}\circ\psi(s_3)^{-1}(s_4)}P_{s_1}P_{s_2}P_{s_3}\end{align*}
Computing $\psi(s_1)^{-1}\circ\psi(s_2)^{-1}\circ\psi(s_3)^{-1}(s_4)=s_2$ precisely retrieves the Example \ref{ex1}.
Those reasonings are inverse to one another: to construct an element with given coefficient-powers we use permutations step by step and letter by letter (this is the construction of $\Pi$), and to get a $\Pi$-expression we use all the inverse of those permutations.
\end{ex} \begin{cor}\label{pipres2} Any $f\in M$ is uniquely determined by $D_{\Theta(f)}$.
Moreover, this diagonal part can be read as the entries when taking a $\Pi$-expression of $f$. \end{cor} In particular, this means that the representation is injective on the monoid, as the diagonal part is determined by the entries of a $\Pi$-expression, which is invariant by permutations of those entries. \begin{proof} This follows from the previous proposition and Proposition \ref{pipres}. Take $\Theta(f)=D_f P_f\in M$ with $D_f=D_{s_1}^{a_1}\dots D_{s_n}^{a_n}$. By Proposition \ref{pipres} there exist $t_1,\dots,t_k\in S$ such that $f=\Pi_k(t_1,\dots,t_k)$. By Proposition \ref{pimat}, we have $D_{\Theta(f)}=D_{t_1}\dots D_{t_k}$; this gives the second statement. By the unicity of the monomial decomposition, we must have $a_i$ times $s_i$ in the tuple $(t_1,\dots,t_k)$ and by Proposition \ref{piInv} the order of the $t_i$'s does not matter.
Thus if $g\in M$ is such that $D_{\Theta(g)}=D_{\Theta(f)}$, by the same argument we must have $g=\Pi_k(t_1,\dots,t_k)=f$. \end{proof} Denote $\mathfrak D_n$ (resp. $\mathfrak D_n^+$) the subset of diagonal matrices of $\Sigma_n$ (resp. $\Sigma_n^+$). We have an obvious inclusion $\mathfrak D_n^+\hookrightarrow \mathfrak D_n$, and a faithful representation $\mathbb N^n\stackrel{\sim}{\rightarrow}\mathfrak D_n^+$.
\begin{cor}\label{monInj} The natural morphism $M\rightarrow G$ sending each generator $s_i\in M$ to $s_i\in G$ is injective. \end{cor} \begin{proof}We have shown that there is a (set) bijection $\Pi\colon\mathbb N^n\stackrel{\sim}{\rightarrow}M$. Then we have the following commutative diagram: \begin{center} \begin{tikzcd} \mathbb N^n\arrow[r,"\sim","\Pi"']\arrow[d,"\sim"]&M\arrow[r]&G\arrow[d]\\ \mathfrak D_n^+\arrow[rr,hook]&&\mathfrak D_n \end{tikzcd} \end{center} Because the composition $\mathbb N^n\rightarrow \mathfrak D_n^+\rightarrow \mathfrak D_n$ is injective, and as $\Pi\colon\mathbb N^n\rightarrow M$ is bijective, the composition $M\rightarrow G\rightarrow \mathfrak D_n$ must be injective, so necessarily $M$ injects in $G$. \end{proof} A word $t_1\dots t_k$ over $S$ representing an element $g$ is said to be reduced if its length $k$ is minimal among the representative words of $g$. \begin{prop}\label{frac} Any element $g\in G$ can be decomposed as a reduced left-fraction in $M$, that is: $$\exists f,h\in M, g=fh^{-1} \text{ with } \lambda(g)=\lambda(f)+\lambda(h)$$ where $\lambda$ denotes the length as a $S\cup S^{-1}$-word. \end{prop} \begin{proof} Let $g\in G$, and write a reduced decomposition of $g$ as product of elements in $S\cup S^{-1}$. If this expression is of length 1, this is trivial. If the length is 2, we have 4 cases with $s,t\in S$: $st$, $s^{-1}t^{-1}$, $st^{-1}$ and $s^{-1}t$. The first 3 cases are of the desired form. For the last one, the defining relations of $G$ give $$s(s*t)=t(t*s)\Longleftrightarrow s^{-1}t=(s*t)(t*s)^{-1}.$$ For arbitrary length, we can inductively use the same relation $s^{-1}t=(s*t)(t*s)^{-1}$ to "move" all inverses of the generators to the right in a decomposition of $g$, which gives the desired form. \end{proof} We will later state a similar result for right-fractions (Proposition \ref{frac2}). \begin{cor}\label{fraccom} Any element in $G$ can be decomposed as a left-fraction $fg^{-1}$ in $M$ such that $D_g$ commutes with all permutation matrices (more precisely that $D_g$ is a power of $D_{s_1}\dots D_{s_n}$). \end{cor} \begin{proof} Take a $\Pi$-expression $\Pi_k(t_1,\dots,t_k)$ of $g$. up to permuting the entries, we can assume that $g=\Pi_k(s_1,\dots,s_1,\dots,s_n,\dots,s_n)$, where for $1\leq i\leq n$ each $s_i$ occurs $a_i$ times and $a_1+\dots+a_n=k$. Let $j$ be such that $a_j$ is (one of) the biggest of the $a_i$'s, then if for some $i$ we have $a_i<a_j$ we can consider a new element $\Pi_{k+1}(s_1,\dots,s_1,\dots,s_n,\dots,s_n,s_i)=g\cdot\Omega_{k+1}(s_1,\dots,s_1,\dots,s_n,\dots,s_n,s_i)$, where $s_i$ occurs $a_i+1$ times and that is obtained from $g$ by right-multiplying by an element in $S$. Doing so, until all $s_i$ occurs $a_j$ times, provides an element $\overline g$ which is obtained from $g$ by right-multiplication by some $g'\in M$ and such that $D_{\overline g}=(D_{s_1}\dots D_{s_n})^{a_j}$.
Let $P_\sigma$ be a permutation matrix, then $P_\sigma D_{\overline g}=\pre{\sigma}{D_{\overline g}} P_\sigma=D_{\overline g} P_\sigma$ where the last equality is because all the diagonal terms in $D_{\overline g}$ are equal so are invariant by $\sigma$.
Finally $fg'(\overline g)^{-1}=fg^{-1}$, so replacing $(f,g)$ by $(fg',gg')$ gives us the result. \end{proof} \begin{ex}
Take $S=\{s_1,s_2,s_3\}$ and $\psi(s_i)=(123)$ for all $i$. Consider $h=s_3^{-1}s_2^{-1}s_3$, the relation $s_2s_1=s_3s_3$ (i.e. $s_1s_3^{-1}=s_2^{-1}s_3$) gives $h=s_3^{-1}s_1s_3^{-1}$; similarly $s_3s_2=s_1s_1$ (i.e. $s_2s_1^{-1}=s_3^{-1}s_1$) yields $h=s_2s_1^{-1}s_1^{-1}$.
Let $f=s_2$ and $g=s_1s_1$ so that $h=fg^{-1}$, we have $g=s_1(s_1*s_3)=\Pi_2(s_1,s_3)$, thus $D_g=D_{\Theta(g)}=D_{s_1}D_{s_3}$, which is not stable under permutation (as we have $\pre{(123)}{D_g}=D_{s_{(123)^{-1}(1)}}D_{s_{(123)^{-1}(2)}}=D_{s_3}D_{s_2}\neq D_g$). To complete $g$ so that it commutes, we must add $D_{s_3}$, so we take $g'=\Pi_3(s_1,s_2,s_3)=gs_1$ and $f'=fs_1$. Now $D_{g'}=D_{s_1}D_{s_2}D_{s_3}$ commutes with permutation matrices, and $f'g'^{-1}=fs_1s_1^{-1}g^{-1}=fg^{-1}=h$. \end{ex} \begin{thm}\label{reprInj} Let $S$ be a finite cycle set of cardinal $n$. Then $\Theta$ is a faithful representation of $G$. \end{thm} \begin{proof} Let $g\in G$, from Proposition \ref{frac} we know that there exist $f,h\in M$ such that $g=fh^{-1}$. Thus as $\Theta$ is a representation: $$\Theta(g)=\text{Id}_n\Longleftrightarrow \Theta(f)=\Theta(h)$$ By Corollary \ref{monInj}, $\Theta(f)=\Theta(h)\Longleftrightarrow f=h$, thus $\Theta$ is faithful. \end{proof} From now on, we assume that $S$ is a finite cycle set with $S=\{s_1,\dots,s_n\}$. We identify $G$ with its image by the (faithful) representation $\Theta$. We can as well identify $\Omega$ (resp. $\Pi$) with its image $\Omega'$ (resp. $\Pi'$) by $\Theta$. \begin{defi} A subgroup of $\Sigma_n$ is called permutation-free if the only permutation matrix it contains is the identity. \end{defi} \begin{prop}\label{istruct} $G$ is permutation-free. \end{prop} \begin{proof} Suppose $P_{\sigma}$ is a permutation matrix (associated with $\sigma\in\mathfrak S_n$) that is in $G$. Then by Proposition \ref{frac}, there exists $f,g\in M$ such that $P_\sigma=fg^{-1}$, i.e. $D_f P_f=P_\sigma D_g P_g$. By Corollary \ref{fraccom} we can assume that $P_\sigma D_g=D_g P_\sigma$, so $D_f P_f=D_g P_\sigma P_g$. By the unicity of the monomial decomposition, we must have $D_f=D_g$ and $P_f=P_\sigma P_g$, so by Proposition~\ref{pipres2} $f=g$ and thus $P_\sigma=$Id (and $\sigma=$id). \end{proof} \begin{cor} An element $g\in G$ is uniquely determined by $D_g$. \end{cor} \begin{proof} Suppose for $g,h\in G$ we have $D_g=D_h$. Then $$g^{-1}h=(D_g P_g)^{-1}(D_h P_h)=P_g^{-1} D_g^{-1} D_g P_h= P_g^{-1} P_h\in G$$ We have a permutation matrix in $G$, so it must be the identity, so $P_g=P_h$ and thus $g=h$. \end{proof} \begin{prop}\label{cpPi}
For any tuple $a=(a_1,\dots,a_n)$ in $\mathbb Z^n$, there exists a unique $g\in G$ with cp$(g)=(a_1,\dots,a_n)$.
Moreover, if all $a_i\geq 0$ then $g\in M$ has a $\Pi$-expression $g=\Pi_{\lambda(a)}(t_1,\dots,t_{\lambda(a)})$, which is of length $\lambda(a)$ and is minimal by additivity of $\lambda$.
Similarly, if $g\in G$, writing it as a fraction of two minimal-length elements of $M$ also gives that the length of $g$ over $S\cup S^{-1}$ is $\lambda(g)$.
\end{prop} \begin{proof} For the first part, the existence follows from Corollary \ref{tupPi} and the unicity from the previous Corollary.
The second statement is just Proposition \ref{cpLamb} applied to the generators of the monoid, and, similarly, the third is a consequence of considering irreducible left-fractions in Proposition \ref{frac}. \end{proof} \begin{rmk} This is precisely a matricial formulation of the I-structure of \cite{istruct}. \end{rmk} We have seen that the structure group of a cycle set is permutation-free; we now state a converse under a condition on the atom set of the submonoid:
\begin{thm} \label{condCS} Let $G$ be a subgroup of $\Sigma_n$, denote $G^+=G\cap \Sigma_n^+$ (the submonoid of positive elements). Suppose that the set of atoms $S=\{s_1,\dots,s_n\}$ of $G^+$ has cardinal $n$, generates $G$ and there exists a positive integer $k$ such that $D_{s_i}=D_i^k$. Let the operation $*$ be defined on $S$ by $s_i*s_j={\psi(s_i)}(s_j)$, then the following assertions are equivalent: \begin{enumerate}[label=(\roman*)] \item $G$ is permutation-free \item $s(s*t)=t(t*s)$ for all $s,t$ in $S$ \item $G$ is the structure group of $S$ \end{enumerate} \end{thm} \begin{proof} First notice that $q\mapsto q^k$ provides an injective morphism $\Sigma_n\to\Sigma_n$, so we can assume $k=1$.
(i) $\Rightarrow$ (ii): For $1\leq i,j\leq n$, we have from Proposition \ref{PDDP}: $$s_i s_{\psi(i)(j)}=D_iP_{s_i}D_{s_i*s_j}P_{s_i*s_j}=D_iD_jP_{s_i}P_{s_i*s_j}$$ By symmetry, $s_j(s_j*s_i)$ will have the same diagonal part. Then $$\left(s_i(s_i*s_j)\right)^{-1}\left(s_j(s_j*s_i)\right)=P_{s_i(s_i*s_j)}^{-1}D_{s_i(s_i*s_j)}^{-1}D_{s_j(s_j*s_i)}P_{s_j(s_j*s_i)}=P_{s_i(s_i*s_j)}^{-1}P_{s_j(s_j*s_i)}\in G.$$ So by the assumption that $G$ is permutation-free we deduce $s_i(s_i*s_j)=s_j(s_j*s_i)$.
(ii) $\Rightarrow$ (iii): Recall that $P_{s_i(s_i*s_j)}=P_{s_i}P_{s_i*s_j}=P_{\psi(s_i*s_j)\circ\psi(s_i)}$, so we find $\psi(s_i*s_j)\circ\psi(s_i)=\psi(s_j*s_i)\circ\psi(s_j)$. For $t\in S$, this means that $\psi(s_i*s_j)\circ\psi(s_i)(t)=\psi(s_j*s_i)\circ\psi(s_j)(t)$, i.e. $(s_i*s_j)*(s_i*t)=(s_j*s_i)*(s_j*t)$, so precisely that $S$ is a cycle set. Then the generators of $M$ correspond to the generators of $M_S$ and both are submonoids of $\Sigma_n$, so $M=M_S$. Similarly, as $S$ generates $G$ we have $G_S=G$.
(iii)$\Rightarrow $ (i): This is Proposition \ref{istruct}. \end{proof} \subsection{Garsideness} In a 2017 talk (\cite{dehTalk}), Dehornoy addressed the question of whether his results on the construction of the Garside structure and the I-structure can be derived without using Rump's result on the non-degeneracy of finite cycle set (\cite{indec}). In this section, we answer his question positively.
Although equivalent to working in the I-structure \cite{istruct} $\mathbb Z^n\rtimes \mathfrak{S}_n$ by decomposing elements $(\underline c,\sigma)=(\underline c,1)(1,\sigma)=(1,\sigma)(\pre{\sigma}{\underline c},1)$, working with monomial matrices and their decomposition has the advantage of giving more intuition, and allows for looking at both left and right structure at the same time. For instance, this is efficient when looking at divisibility, that we study in this section.
Recall that we fix $(S,*)$ a finite cycle set of size $n$ with structure monoid (resp. group) $M$ (resp. $G$).
\begin{defi} Let $g_1,g_2$ be elements of $M$. We say that $g_1$ left-divides (resp. right-divides) $g_2$, that we note $g_1\preceq g_2$ (resp. $g_1\preceq_r g_2$) if there exists some $h\in M$ such that $g_2=g_1h$ (resp. $g_2=hg_1$) and $\lambda(g_2)=\lambda(g_1)+\lambda(h)$.
An element $g\in M$ is called balanced if the set of its left-divisors $\text{Div}(g)$ and right-divisors $\text{Div}_r(g)$ coincide. \end{defi} We equip $\mathbb Z^n$ with the partial ordering given by $(a_1,\dots,a_n)\leq (b_1,\dots,b_n)$ iff $a_i\leq b_i$ for all $1\leq i\leq n$. In particular, this means that given $g_1,g_2\in G$, $\text{cp}(g_1)\leq\text{cp}(g_2)$ iff the power of $q$ on the $i$-th row of $g_1$ is less than the one of $g_2$ for $1\leq i\leq n$. Moreover, note that, as $g_1=P_{g_1}\pre{g_1}{D_{g_1}}$ and $g_1^t=P_{g_1}^tD_{g_1}=P_{g_1}^{-1}D_{g_1}=\pre{g_1}D_{g_1}P_{g_1}^{-1}$, the coefficient on the $i$-th column of $g_1$ is the coefficient on the $i$-th row of $g_1^t$. \begin{prop}\label{div} $g_1$ left-divides $g_2$ if and only if $\text{cp}(g_1)\leq \text{cp}(g_2)$,
Similarly, $g_1$ right-divides $g_2$ if and only if $\text{cp}(g_1^t)\leq\text{cp}(g_2^t)$. \end{prop} This means that, to check if $g_1$ is a left- (resp. right-) divisor of $g_2$, we only have to check if the power of $q$ is smaller on each row (resp. column). \begin{proof}
Write $g_i=D_{g_i}P_{g_i}=P_{g_i}\pre{g_i}{D_{g_i}}$. For left-divisibility, consider in $G$ the element $h=g_1^{-1}g_2=P_{g_1}^{-1}D_{g_1}^{-1}D_{g_2}P_{g_2}$. By Proposition \ref{cpPi}, $h\in M$ iff $D_{g_1}^{-1}D_{g_2}$ contains only non-negative powers of $q$, precisely meaning that the power on each row of $g_1$ is at most the corresponding one of $g_2$.
Similarly, for right divisibility, let $h'=g_2g_1^{-1}=P_{g_2}\pre{g_2}{D_{g_2}}\left(\pre{g_1}{D_{g_1}}\right)^{-1}P_{g_1}^{-1}$, which is in $M$ iff $\pre{g_2}{D_{g_2}}\left(\pre{g_1}{D_{g_1}}\right)^{-1}$ contains only non-negative powers of $q$, which is the same criterion on the columns. \end{proof} \begin{ex} Taking $S=\{s_1,s_2\}$ with $\psi(s_1)=\psi(s_2)=(12)$, we can see that:
$\begin{pmatrix}0&q^3\\1&0\end{pmatrix}$ left-divides $\begin{pmatrix}q^4&0\\0&1\end{pmatrix}$ (as $3\leq 4$ on the first line and $0\leq 0$ on the second since $1=q^0$), but doesn't right divide it (as $3>0$ on the second column) \end{ex} \begin{cor}\label{div2} Let $g=\Pi_k(t_1,\dots,t_k)$, then the left-divisors of $g$ are precisely the $\Pi$'s of subtuples of $(t_1,\dots,t_k)$.
Conversely, the right multiples of $g$ are the elements $h$ such that, when writing $h=\Pi_l(u_1,\dots,u_l)$, the tuple $(u_1,\dots,u_l)$ contains the tuple $(t_1,\dots,t_k)$. \end{cor} \begin{cor}\label{gcdlcm} Let $g_1,g_2$ be in $M$. The left-gcd (resp. left-lcm) of $g_1$ and $g_2$, denoted $g_1\wedge g_2$ (resp. $g_1\vee g_2$), is given by the unique element such that the coefficient-power on each row is the minimum (resp. maximum) of those of $g_1$ and $g_2$.
For right-gcd (resp. right-lcm) it is the same but for each column. \end{cor} Explicitly, if cp$(g_1)=(a_1,\dots,a_n)$ and cp$(g_2)=(b_1,\dots,b_n)$, then cp$(g_1\wedge g_2)=(\min(a_1,b_1),\dots,\min(a_n,b_n))$ and cp$(g_1\vee g_2)=(\max(a_1,b_1),\dots,\max(a_n,b_n))$. \begin{ex} As in \ref{ex1} take $S=\{s_1,s_2,s_3,s_4\}$ with \begin{align*}\psi(s_1)=(1234)&\qquad\psi(s_3)=(24)\\\psi(s_2)=(1432)&\qquad\psi(s_4)=(13)\end{align*} We have $$\begin{pmatrix}0&q&0&0\\1&0&0&0\\0&0&0&q\\0&0&1&0\end{pmatrix}\wedge\begin{pmatrix}0&0&0&q\\0&0&q&0\\0&1&0&0\\1&0&0&0\end{pmatrix}=\begin{pmatrix}0&q&0&0\\0&0&1&0\\0&0&0&1\\1&0&0&0\end{pmatrix}$$ This can be understood in two ways: \begin{itemize} \item $\text{gcd}\left(\Pi_2(s_1,s_3),\Pi_2(s_1,s_2)\right)=\Pi_1(s_1)$, as $s_1$ is the only term appearing in both $\Pi_2$'s. \item The gcd of the two given matrices must have cp-tuple $(1,0,0,0)$ (taking the minimal coefficient-power row by row), which uniquely exists by the I-structure. \end{itemize}
Similarly: $$\begin{pmatrix}0&q&0&0\\1&0&0&0\\0&0&0&q\\0&0&1&0\end{pmatrix}\vee\begin{pmatrix}0&1&0&0\\q&0&0&0\\0&0&0&1\\0&0&q&0\end{pmatrix}=\begin{pmatrix}q&0&0&0\\0&q&0&0\\0&0&q&0\\0&0&0&q\end{pmatrix}$$ can be understood as both: \begin{itemize} \item $\text{lcm}\left(\Pi_2(s_1,s_3),\Pi_2(s_2,s_4)\right)=\Pi_4(s_1,s_2,s_3,s_4)$ (we took the maximal number of occurrences of each $s_i$, here always 1) \item The lcm of the two given matrices must have cp-tuple $(1,1,1,1)$ (taking the maximal coefficient-power row by row), which uniquely exists by the I-structure. \end{itemize} The second description, however, does not provide an explicit description of the element, as we don't have the permutation part.
For the right gcd and lcm, we do the same on the columns, which has the disadvantage of not having something as explicit as the first description of the left versions (see the $\tilde\Pi$ of \cite{rcc} for a more detailed approach). \end{ex} \begin{cor} An element such that the non-zero terms of its $i$-th row and $i$-th column are equal for all $1\leq i\leq n$ is balanced. \end{cor} \begin{defi} An element $g\in M$ is called a Garside element if it is balanced and $\text{Div}(g)$ is finite and generates $M$. \end{defi} \begin{prop}[\cite{rcc}] \label{gars-ele} The unique element $\Delta$ such that $D_\Delta=\text{diag}(q,\dots,q)$ is a Garside element of $M$. \end{prop} Equivalently, $\Delta=\Pi_n(s_1,\dots,s_n)$ or cp$(\Delta)=(1,\dots,1)$. \begin{proof} Because all the non-zero coefficients of $\Delta$ are equal, the latter is balanced.
Its set of divisors is the set of elements with non-zero coefficients $1$ or $q$ and so is finite and has cardinal $2^n$, and it contains all the generators $s_i$ so also generates $M$. \end{proof} \begin{rmk} The powers of $\Delta$, which are the unique elements with cp-tuple $(k,\dots,k)$ for $k\geq 1$, are also Garside elements by the same reasoning.
More generally, Garside elements are precisely the balanced elements with no $1$'s. \end{rmk} \begin{defi}[\cite{garside}] A monoid is said to be a Garside monoid if: \begin{enumerate}[label=(\roman*)] \item It is cancellative, i.e. for all elements $g_1,g_2,h,k$, $hg_1k=hg_2k\Rightarrow g_1=g_2$. \item There exists a map $\lambda$ to the integers such that $\lambda(g_1g_2)\geq\lambda(g_1)+\lambda(g_2)$ and $\lambda(g)=0\Rightarrow g=1$. \item Any two elements have a gcd and lcm relative to both $\preceq$ and $\preceq_r$. \item It possesses a Garside element $\Delta$. \end{enumerate} \end{defi} \begin{prop}[\cite{rcc}] $M$ is a Garside monoid. \end{prop} \begin{proof} The map $\lambda$ previously defined satisfies point (ii) (in fact with equality).
For (iii) we have Corollary \ref{gcdlcm} and for (iv) Proposition \ref{gars-ele}.
We are left to prove (i), which is a direct consequence of Corollary \ref{monInj} (alternatively, this follows from the fact that our elements are monomial matrices, which inject into the linear group $\text{GL}_n(\mathbb Q[q,q^{-1}])$, from which we deduce the cancellation property). \end{proof}
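To make the above criteria concrete, the following short Python sketch (ours, purely illustrative and not part of the original development) encodes a monomial matrix as a pair $(\text{cp},\text{perm})$, where cp lists the power of $q$ on each row and perm records the column of the non-zero entry of each row, and reads left-divisibility, left-gcd and left-lcm off the cp-tuples as in Proposition \ref{div} and Corollary \ref{gcdlcm}, on the two-generator example $\psi(s_1)=\psi(s_2)=(12)$ used above.
\begin{verbatim}
# Illustrative sketch (not the paper's code): monomial matrices encoded as
# (cp, perm): cp[i] = power of q on row i, perm[i] = column of the non-zero
# entry of row i (0-indexed).  Example: S = {s_1, s_2}, psi(s_1) = psi(s_2) = (1 2).

def mul(g, h):
    """Matrix product of two monomial matrices in the (cp, perm) encoding."""
    a, c = g
    b, d = h
    n = len(a)
    return ([a[i] + b[c[i]] for i in range(n)], [d[c[i]] for i in range(n)])

def left_divides(g, h):      # Proposition div: compare powers row by row
    return all(x <= y for x, y in zip(g[0], h[0]))

def left_gcd_cp(g, h):       # Corollary gcdlcm: row-wise minimum ...
    return [min(x, y) for x, y in zip(g[0], h[0])]

def left_lcm_cp(g, h):       # ... and row-wise maximum
    return [max(x, y) for x, y in zip(g[0], h[0])]

s1 = ([1, 0], [1, 0])        # D_1 P_(12)
s2 = ([0, 1], [1, 0])        # D_2 P_(12)
delta = mul(s1, s1)          # Delta = Pi_2(s_1,s_2) = s_1(s_1*s_2) = s_1 s_1: cp (1,1)
g = mul(delta, s2)           # a longer positive element
print(left_divides(s1, g), left_gcd_cp(g, delta), left_lcm_cp(g, delta))
# True [1, 1] [1, 2]
\end{verbatim}
Only the cp-tuples matter here: by the I-structure, the permutation parts of the gcd and lcm are then uniquely determined.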
\subsection{Germs} In \cite{rcc} Dehornoy associates a germ to structure groups of cycle sets; in the construction he uses the non-degeneracy of finite cycle sets. Here we provide proofs that do not rely on this property. In particular, the direct existence of Dehornoy's class is obtained, along with a better bound (again improved in the next section). Moreover, we state an exchange lemma and a solution to the word problem, as is known in the context of Coxeter groups.
Recall that we fix $(S,*)$ a finite cycle set of size $n$ with structure monoid (resp. group) $M$ (resp. $G$). For $s\in S$ and $k\in\mathbb N$ denote by $s^{[k]}$ the unique element in $M$ such that $D_{s^{[k]}}=D_s^k$, i.e. $s^{[k]}=\Pi_k(s,\dots,s)=sT(s)\dots T^{k-1}(s)$ where $T$ is the diagonal map $s\mapsto s*s$. \begin{prop} \label{class} There exists a positive integer $d$ such that for all $s_i\in S$, $s_i^{[d]}$ is diagonal, i.e. $P_{s_i^{[d]}}=\text{Id}$. \end{prop} The smallest integer satisfying this condition is called the Dehornoy's class of $S$, and all the others will be multiples of this class. Our results will be stated for the class, but most would work for any multiples. \begin{proof} First fix $s\in S$. The map sending $s^{[k]}$ to $\psi(s^{[k]})$ is a map from an infinite (countable) set to a finite one ($\mathfrak S_n$), therefore it is not injective. So there exists $k_1,k_2\in\mathbb N$ such that $P_{s^{[k_1]}}=P_{s^{[k_2]}}$. We can assume $k_1>k_2$ without loss of generality. We have $s^{[k_1]}(s^{[k_2]})^{-1}=D_s^{k_1-k_2}$ which is in $G$, and as $k_1-k_2>0$ is also in $M$ (it is $\Pi_{k_1-k_2}(s,\dots,s)$), so it is necessarily equal to $s^{[k_1-k_2]}$ which is thus diagonal.
Doing this for all $s_i\in S$, we get the existence of $d_i\in\mathbb N$ such that $s_i^{[d_i]}$ is diagonal. Notice again that, by the same argument, we must have $(s_i^{[d_i]})^k=D_{s_i}^{kd_i}=s_i^{[kd_i]}$ for all $k\in\mathbb N$. Taking $d=\text{lcm}(d_1,\dots,d_n)$, for each $i$ there exists $d_i'>0$ such that $d=d_id_i'$, and we find that $s_i^{[d]}=(s_i^{[d_i]})^{d_i'}$ is diagonal for all $i$. \end{proof} \begin{rmk} In \cite{rcc}, the author gives a bound on the class of a cycle set as $d\leq (n^2)!$. Here, we obtain a first better bound $d\leq (n!)^n$ given by the previous proof (as $d=\text{lcm}(d_1,\dots,d_n)$ with $d_i\leq n!$). This bound will be improved in Propositions \ref{ddivG} and \ref{conj-proof}. \end{rmk}
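The proof above is effective. The following Python sketch (our own illustration, in the spirit of the computations reported in the next section) computes the class of a cycle set given by the permutations $\psi(s_i)$, using that $\psi(s^{[k]})=\psi(T^{k-1}(s))\circ\dots\circ\psi(s)$ with $T(s)=s*s$; the loop terminates precisely because of Proposition \ref{class}.
\begin{verbatim}
# Illustrative sketch (ours): Dehornoy class of a cycle set given by the
# permutations psi(s_i), encoded 0-indexed.  Terminates for genuine cycle sets.
from math import lcm

def compose(p, q):                       # (p o q)(i) = p[q[i]]
    return [p[j] for j in q]

def dehornoy_class(psi):
    n, d = len(psi), 1
    for i in range(n):                   # d_i = least k with psi(s_i^[k]) = id
        cur, perm, k = i, list(range(n)), 0
        while True:
            perm = compose(psi[cur], perm)
            k += 1
            cur = psi[cur][cur]          # pass from T^k(s_i) to T^{k+1}(s_i)
            if perm == list(range(n)):
                break
        d = lcm(d, k)                    # d = lcm(d_1, ..., d_n)
    return d

n = 4
sigma = [(j + 1) % n for j in range(n)]              # the n-cycle (1 2 ... n)
print(dehornoy_class([sigma[:] for _ in range(n)]))  # psi(s_i) = sigma: class 4
\end{verbatim}
For $\psi(s_i)=(12\dots n)$ for all $i$ it returns $n$, in accordance with the example given below.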
\begin{prop} Let $d$ be the class of $S$ and denote by $G^{[d]}$ the subgroup of $G$ generated by all the $s^{[d]}$. Then $G^{[d]}$ is a normal subgroup of $G$. \end{prop} \begin{proof} The generators of $G^{[d]}$ are diagonal, thus this subgroup consists of diagonal matrices only. Conjugating a diagonal matrix $D$ by a permutation matrix yields a diagonal matrix; more precisely, for all $g$ in $G$, $(D_gP_g)D(P_g^{-1}D_g^{-1})=\pre{\psi(g)}{D}$. Because the subgroup contains all diagonal matrices with coefficients powers of $q^d$, it is stable under the action of $\mathfrak S_n$ by permutation, so $\forall h\in G^{[d]}$, $ghg^{-1}\in G^{[d]}$. \end{proof}
We define the quotient group $\overline G$ by $\overline G=G/G^{[d]}$.
\begin{prop}[\cite{rcc}]
The projection $\pi\colon G\to\overline G$ amounts to adding to the presentation of $G$ the relations $s^{[d]}=1$; more precisely, quotienting is the same as specializing at $q=\exp(\frac{2i\pi}{d})$, denoted $\text{ev}_q$.
Moreover, the map $\pi\circ\Pi\colon\mathbb Z^S\to\overline G$ induces a (set) bijection $\overline\Pi\colon (\mathbb Z/d\mathbb Z)^S\to\overline G$.
\end{prop} \begin{proof} The first part comes from the fact that $G^{[d]}$ is generated by the $s^{[d]}$ which are diagonal, and as each element containing a coefficient-power at least $d$ is a multiple of some $s_i^{[d]}$ by Proposition \ref{div}, we see that quotienting is just specializing at $q=\exp(\frac{2i\pi}{d})$.
The second part then follows as (canonical representatives of) elements of $\overline G$ have non-zero coefficients in $\{1,q,\dots,q^{d-1}\}$. \end{proof} \begin{rmk} If $d=1$ then $G^{[d]}=G$ so $\overline G$ is trivial. However, $d=1$ means that all the generators $s$ are diagonal, i.e. $s*t=t$ for all $s,t$ in $S$: this is just the special case of the trivial cycle set. So we will assume $d\geq 2$, but this unique trivial case can still be included as all our results work for any multiple of the class (thus any positive integer for the trivial cycle set). \end{rmk} From now on, assume $d\geq 2$. \begin{ex} Let $S=\{s_1,\dots,s_n\}$ with $\psi(s_i)=(12\dots n)=\sigma$ for all $i$. Then for any $s\in S$, $k\in\mathbb Z$: $s^{[k]}=D_s^k P_{\sigma^k}$. Thus Dehornoy's class of $S$ is equal to $n$. Let $\zeta_n=\exp(\frac{2i\pi}{n})$, then $\overline G$ is generated by the $\overline{s_i}=\diag{1,\dots,\zeta_n,\dots,1}P_\sigma$. \end{ex} Denote by $\zeta_d=\exp(\frac{2i\pi}{d})$ a primitive $d$-th root of unity and $\mu_d=\{\zeta_d^i\mid 0\leq i<d\}$. Let $\Sigma_n^d$ be the subgroup of $\mathfrak{Monom}_n(\mathbb C)$ with non-zero coefficients in $\mu_d$. Given $k\geq 1$, there is a natural embedding $\iota_d^{dk}\colon \Sigma_n^d\to\Sigma_n^{dk}$ sending $\zeta_d$ to $\zeta_{dk}^k$ (as $\zeta_{dk}^k=\exp(\frac{2ik\pi}{dk})=\zeta_d$). From the previous proposition, we deduce the following result: \begin{lem} The quotient group $\overline G$ is a subgroup of $\Sigma_n^d$. \end{lem} Recall that if $S$ has Dehornoy's class $d$, then for any positive integer $k$ we have that $s^{[kd]}=(s^{[d]})^k$ is diagonal, thus we could also consider the germ $G/\langle s^{[kd]}\mid s\in S\rangle$. The image $\iota_d^{dk}(\overline G)$ can then be seen as an embedding of the germ $\overline G$ in this bigger quotient group.
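As a sanity check of the previous proposition, the following Python sketch (ours, illustrative only) builds the germ of the two-element cycle set with $\psi(s_1)=\psi(s_2)=(12)$, which has class $d=2$, by closing the generators under multiplication with exponents reduced mod $d$, and verifies that $|\overline G|=d^n$, as predicted by the bijection $\overline\Pi$.
\begin{verbatim}
# Illustrative sketch (not the author's code): germ elements as (cp mod d, perm),
# with perm[i] the column of the non-zero entry of row i (0-indexed).
n, d = 2, 2
gens = [((1, 0), (1, 0)), ((0, 1), (1, 0))]     # s_1, s_2 with psi = (1 2)

def mul(g, h):
    a, c = g
    b, dd = h
    return (tuple((a[i] + b[c[i]]) % d for i in range(n)),
            tuple(dd[c[i]] for i in range(n)))

germ = set(gens) | {((0,) * n, tuple(range(n)))}   # generators and the identity
while True:
    new = {mul(g, s) for g in germ for s in gens} - germ
    if not new:
        break
    germ |= new
assert len(germ) == d ** n      # the bijection bar-Pi: (Z/dZ)^S -> bar-G
print(sorted(germ))
\end{verbatim}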
\begin{defi}\cite{rcc} If $(M,\Delta)$ is a Garside monoid and $G$ the group of fractions of $M$, a group $\overline G$ with a surjective morphism $\pi\colon G\to\overline G$ is said to provide a Garside germ for $(G,M,\Delta)$ if there exists a map $\chi\colon\overline G\to M$ such that $\pi\circ\chi=\text{Id}_{\overline G}$, $\chi(\overline G)=\text{Div}(\Delta)$ and $M$ admits the presentation $$\langle \chi(\overline G)\mid \chi(fg)=\chi(f)\chi(g) \text{\:when\:} ||fg||_{\overline S}=||f||_{\overline S}+||g||_{\overline S}\rangle$$
where $||\cdot||_{\overline S}$ denote the length of an element over $\overline S=\pi(S)$. \end{defi} \begin{prop}[\cite{rcc}] The specialization $\text{ev}_q$ that imposes $q=\exp(\frac{2i\pi}{d})$ provides a Garside germ of $(G,M,\Delta^{d-1})$. \end{prop} \begin{proof} Consider the map $\chi\colon \overline G\to M$ defined by sending $\exp(\frac{2i\pi\cdot k}{d})$ to $q^k\in\mathbb Q[q]$ for $1\leq k<d$. It trivially satisfies $\text{ev}_q\circ\chi=\text{Id}_{\overline G}$. Its image is the elements of $M$ such that each coefficient-power (the power of $q$) is strictly less than $d$, and thus identifies with $\text{Div}(\Delta^{d-1})$ by the characterization of divisibility. And the presentation amounts to forgetting that $q$ is a root of unity, thus generating $M$ as required. \end{proof} To work over $\overline G$, we will use the following corollary to restrict to classes of equivalence over the structure monoid. \begin{cor} The projection $\text{ev}_q\colon M\to\overline G$ is surjective. \end{cor} \begin{proof} $G$ is generated by the $s_i$ (positive generators) and their inverses $s_i^{-1}$ (negative generators), so the same holds for $\overline G$. Moreover, as $\overline G$ is finite, inverses can be obtained from only positive generators. Finally, because $M\hookrightarrow G$ as the submonoid generated by positive generators, we obtain the statement. \end{proof} \begin{ex} Let $S=\{s_1,\dots,s_n\}$ with $\psi(s_i)=(12\dots n)=\sigma$ for all $i$. Then for any $s\in S$, $k\in\mathbb Z$: $s_i^{[k]}=D_s^k P_{\sigma^k}$. The Dehornoy's class of $S$ is $n$ and $\overline G$ is generated by the $\overline{s_i}=\diag{1,\dots,\zeta_n,\dots,1}P_\sigma$ where $\zeta_n=\exp(\frac{2i\pi}{n})$.
To recover $G$ from $\overline G$, one simply takes all the elements of $\overline G$ and forgets that $q$ is a root of unity in the following sense: when computing the product of two elements and finding a coefficient $q^a$ with $a\geq d$, we do not use that $q^d=1$ and just consider it as a new element. So for instance in $\langle \chi(\overline G)\rangle$ with $n=4$: $$\chi\!\left(\overline{s_1^{[3]}}\right)\!\chi\!\left(\overline{s_4^{[2]}}\right)=\chi(\overline{s_1}^{[3]})\chi(\overline{s_4}^{[2]})=\begin{pmatrix}0&0&0&q^3\\1&0&0&0\\0&1&0&0\\0&0&1&0\end{pmatrix}\begin{pmatrix}0&0&1&0\\0&0&0&1\\1&0&0&0\\0&q^2&0&0\end{pmatrix}\! =\!\begin{pmatrix}0&q^5&0&0\\0&0&1&0\\0&0&0&1\\1&0&0&0\end{pmatrix}=s_1^{[5]}.$$ Because $5>4$, we obtain a new element different from $\chi\left(\overline{s_1^{[5]}}\right)=\chi\left(\overline{s_1}\right)$. \end{ex} The quotient group $\overline G$ defined above is called a Coxeter-like group; it was first studied by Chouraqui and Godelle in \cite{godelle} for $d=2$ and generalized by Dehornoy in \cite{rcc}.
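The bookkeeping in the example above is easy to script. In the following Python sketch (ours, for illustration only) monomial matrices are encoded as pairs (exponent tuple, permutation); multiplying with exponents in $\mathbb Z$ corresponds to computing in $\langle\chi(\overline G)\rangle$, while reducing them mod $d$ corresponds to computing in $\overline G$.
\begin{verbatim}
# Illustrative sketch (not the author's code), for psi(s_i) = (1 2 3 4), d = 4.
n, d = 4, 4
sigma = [(j + 1) % n for j in range(n)]        # the 4-cycle, 0-indexed

def mul(g, h, mod=None):
    a, c = g
    b, dd = h
    cp = [a[i] + b[c[i]] for i in range(n)]
    if mod:
        cp = [x % mod for x in cp]
    return (cp, [dd[c[i]] for i in range(n)])

def perm_power(p, k):
    out = list(range(n))
    for _ in range(k):
        out = [p[j] for j in out]
    return out

def s_bracket(i, k):                           # s_i^[k] = D_i^k P_{sigma^k} here
    return ([k if j == i else 0 for j in range(n)], perm_power(sigma, k))

print(mul(s_bracket(0, 3), s_bracket(3, 2)))         # cp (5,0,0,0): s_1^[5]
print(mul(s_bracket(0, 3), s_bracket(3, 2), mod=d))  # cp (1,0,0,0): bar{s_1}
\end{verbatim}
The first printed element is $s_1^{[5]}$ and the second is its image $\overline{s_1}$ in the germ.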
Fix $\overline G$ a Coxeter-like group obtained from a cycle set $S$ of cardinal $n$ and class $d\geq 2$ (so that $\overline G$ is not trivial). \begin{prop}\label{germistruct} $\overline G$ is permutation-free. \end{prop} \begin{proof} From Proposition \ref{istruct}, we know that $G$ is permutation-free. As $\overline G$ is the image of $G$ by the evaluation at $q=\zeta_d$, a matrix $\overline{g}\in\overline G$ is a permutation matrix only if it comes from an equivalence class of $g\in M$ such that $\text{cp}(g)=(da_1,\dots,da_n)$ (with $a_1,\dots,a_n\in \mathbb N$), i.e. $g=(s_1^{[d]})^{a_1}\dots(s_n^{[d]})^{a_n}=D_{s_1}^{da_1}\dots D_{s_n}^{da_n}$ which is diagonal, thus $\overline g$ is diagonal. \end{proof} \begin{defi} For $c=(\overline{c_1},\dots,\overline{c_n})\in(\mathbb Z/d\mathbb Z)^n$, define $\overline{\text{cp}}(\text{diag}(q^{\overline{c_1}},\dots,q^{\overline{c_n}}))$ to be the unique representative of $c$ as $(c_1,\dots,c_n)\in\{0,\dots,d-1\}^n$. If $g\in\overline G$, we define $\overline{\text{cp}}(g)=\overline{\text{cp}}(D_g)$.
We define a function $\text{l}_d$ by:
\begin{equation}
\label{lq}
\forall k\in\{0,1,\dots,d-1\},\text{l}_d(k)=\begin{cases}k,&\: \text{if } k\leq\frac{d}{2}\\k-d,&\:\text{if } k>\frac{d}{2}.\end{cases}
\end{equation}
And we define $\overline\lambda(c)=\sum\limits_{i=1}^n |\text{l}_d(c_i)|$. \end{defi} Note that $\overline{\text{cp}}$ corresponds to cp with the projection $\mathbb Z\to \mathbb Z/d\mathbb Z$ and taking representatives in the interval $[0,d[\cap\mathbb Z$, while $\overline\lambda$ corresponds to $\lambda$ with the same projection but with representatives in $]-\frac{d}{2},\frac{d}{2}]\cap\mathbb Z$. The latter is chosen because, if we have $q^6=1$, the shortest way to write $q^2$ is $q\cdot q$ but to write $q^4$ we should use $q^{-1}\cdot q^{-1}$. \begin{prop} The following hold: \begin{enumerate}[label=(\roman*)] \item For any $c=(\overline{c_1},\dots,\overline{c_n})\in(\mathbb Z/d\mathbb Z)^n$, there exists a unique element $g\in\overline G$ such that $\overline{\text{cp}}(g)=c$. \item The length of an element $g\in\overline G$ over $\overline S=\pi(S)$ is given by $\overline\lambda$. \end{enumerate} \end{prop} \begin{proof} (i) follows from Proposition \ref{germistruct}.
Point (ii) is obtained by realizing that the shortest way to write $q^k$ for $k\in\{0,\dots,d-1\}$ is using $q\cdot...\cdot q$ if $k\leq\frac{d}{2}$, and otherwise we use the fact that $q^d=1$ to write it as $q^{-1}\cdot...\cdot q^{-1}$. \end{proof} \begin{cor} Because $\overline G$ is finite, the generators have finite order, so any element has a decomposition as a product of generators (we don't have to include inverses). Thus, any element in $\overline G$ has a $\overline\Pi$-expression (in the same sense as in $M$). \end{cor} \begin{cor} Defining similarly to $\Sigma_n$ the group $\overline{\Sigma_n}$ with non-zero coefficients powers of $\exp(\frac{2i\pi}{d})$, we get that a subgroup of $\overline{\Sigma_n}$ respecting the conditions of Theorem \ref{condCS} is a Coxeter-like group. \end{cor} \begin{rmk} We say that a word $w$ in $\overline S=\pi(S)$ is reduced in $\overline G$ if it has length $\overline\lambda(w)$ when seen as an element of $\overline G$.
For instance, if $d>2$ then $w=s(s*s)$ is reduced as it represents $\overline{\Pi_2}(s,s)$ which has $\lambda=2$ but the word corresponding to $\overline{\Pi_d}(s,\dots,s)$ is not as it has $\bar\lambda=0$. More generally, the word associated to $\overline{\Pi_r}((t_1),\dots,(t_r))$ is reduced in $\overline G$ if each $\overline s_i$ occurs strictly less than $d$ times in the family $(t_i)$. \end{rmk} For Coxeter groups, we have the so-called exchanging lemma (see \cite{lcg}). We provide a similar result for Coxeter-like groups: \begin{lem}[Exchange Lemma] Let $g\in \overline G$ be written as a reduced expression given by $\overline{\Pi_k}(t_1,\dots,t_k)$. For $s\in S$, if $\overline{\Pi_{k+1}}(s,t_1,\dots,t_k)$ is not reduced, then there exists some $i$ such that $g=\overline{\Pi_k}(s,t_1,\dots,\hat{t_i},\dots,t_k)$, where $\hat{t_i}$ means we omit $t_i$. \end{lem} \begin{proof} As $\overline{\Pi_k}(t_1,\dots,t_k)$ is reduced, $s$ occurs strictly less than $d$ times in $(t_j)_{1\leq j\leq k}$.
Thus, if $\overline{\Pi_{k+1}}(s,t_1,\dots,t_k)$ is not reduced as a word, then $s$ must occur exactly $d$ times in the family $(s,t_1,\dots,t_k)$; in particular, as $d>1$, $s$ occurs at least once in $(t_j)_{1\leq j\leq k}$, say $t_i=s$. Because $\overline\Pi$ is invariant by permutation of the entries (as $\Pi$ is), we can move this $t_i$ to the beginning and thus $g=\overline{\Pi_k}(s,t_1,\dots,\hat{t_i},\dots,t_k)$. \end{proof} Another point of interest of the study of Garside groups is that they provide a solution to the word problem: \begin{prop} Two words $t=t_1\dots t_k$ and $u=u_1\dots u_l\in S^*$ represent the same element in $M$ (resp. $\overline G$) if and only if, when taking a $\Pi$-expression (resp. $\overline\Pi$-expression) of both, they have the same number of occurrences of each $s_i$, $1\leq i\leq n$ (resp. modulo $d$). \end{prop} \begin{proof} $\bullet$ In $M$, the elements corresponding to $t$ and $u$ have respectively a $\Pi$-expression of length $k$ and $l$, and because $\Pi$ is a bijection from $\mathbb N^n$ to $M$, they represent the same element iff they have the same number of occurrences of each of the atoms in $\mathbb N^n$. In particular, $k=l$.
$\bullet$ In $\overline G$, the same reasoning applies: take a $\overline\Pi$-expression of both elements; they represent the same equivalence class iff the number of occurrences of each atom is the same modulo $d$ in $\mathbb N^n$, as the map $\overline \Pi\colon(\mathbb Z/d\mathbb Z)^n\to \overline G$ is a bijection. \end{proof} \par We summarize all the previous results in the following diagram, where black arrows are morphisms and blue arrows are just maps of sets, and the middle line is a short exact sequence.
\begin{center} \begin{tikzcd} &&M\arrow[d,hook]&&\\ 1\arrow[r]&G^{[d]}\arrow[r,hook]\arrow[d,"\sim"',"\Pi",<-]&G\arrow[r,two heads,"\pi=\text{ev}_q"]\arrow[d,"\sim"',"\Pi",color=blue,<-]&\overline G\arrow[r]\arrow[ul,hook,"\chi",swap,color=blue]\arrow[d,"\sim"',"\overline \Pi",color=blue,<-]&1\\ &\mathbb Z^S\arrow[r,"\times d"]&\mathbb Z^S\arrow[r]&(\mathbb Z/d\mathbb Z)^S& \end{tikzcd} \end{center} Note that the left $\Pi$ is a group morphism because all elements of $G^{[d]}$ have trivial permutation, so this group is abelian and thus $\Pi$ is the morphism sending $(a_1,\dots,a_n)$ to $\diag{q^{da_1},\dots,q^{da_n}}$. \subsection{Non-degeneracy} Here we give a new proof of Rump's result on the non-degeneracy of finite cycle sets \cite{rump}, that is the fact that $s\mapsto s*s$ is bijective; in Dehornoy's paper it is used to obtain the bijectivity of $(s,t)\mapsto (s*t,t*s)$. Here we prove both of those results using the previous section, the first proof has the advantage of being a simple direct consequence of the I-structure compared to the proof in \cite{rump} which involves several steps and constructions.
Recall that we fix $(S,*)$ a finite cycle set of size $n$ with structure monoid (resp. group) $M$ (resp. $G$). \begin{lem}[\cite{rump}] \label{diagBij} $\forall s,s'\in S, s*s=s'*s'\Longleftrightarrow s=s'$. \end{lem} \begin{proof} If $s=s'$ then trivially $s*s=s'*s'$. Suppose $s*s=s'*s'$ and consider $g=ss'^{-1}$; we want to show $g=1$.
We have $g=ss'^{-1}=D_sP_sP_{s'}^{-1}D_{s'}^{-1}$, using that $P_\sigma D=\pre{\sigma}{D}P_\sigma$ and $P_{\sigma}^{-1}=P_{\sigma^{-1}}$ twice we find $$g=D_sP_sD_{s'*s'}^{-1}P_{s'}^{-1}=D_sD_{\psi^{-1}(s)(s'*s')}^{-1}P_sP_{s'}^{-1}.$$ By assumption $s*s=s'*s'$ thus $g=D_sD_{\psi^{-1}(s)(s*s)}^{-1}P_sP_{s'}^{-1}=D_sD_{s}^{-1}P_sP_{s'}^{-1}=P_sP_{s'}^{-1}.$ As $G_S$ is permutation-free, we deduce $g=1$. \end{proof} \begin{prop}\label{o(T)} \begin{enumerate}[label=(\roman*)] \item The diagonal map $T:s\mapsto s*s$ is a bijection of $S$. \item The order $o$ of $T$ divides $d$. In particular, for any integer $k$, we have $s^{[kd]}=\left(sT(s)\dots T^{o-1}(s)\right)^k$. \item More generally $s^{[k]}=sT(s)\dots T^{o-1}(s)sT(s)\dots$ with exactly $k$ terms \end{enumerate} \end{prop} \begin{proof} As $S$ is finite and $T$ is injective by the previous lemma, it is bijective and so has finite order. The third point follows directly from the equalities $s^{[k]}=\Pi_k(s,\dots,s)=sT(s)\dots T^{k-1}(s)$ and, as $T^o(s)=s$, we regroup as many words $sT(s)\dots T^{o-1}(s)$ as possible. For the second point, if $s^{[k]}$ is diagonal, then $s^{[k]}s=D_s^kD_sP_s$ which has diagonal part $D_s^{k+1}$ so must be $s^{[k+1]}$. As $s^{[d+1]}=s^{[d]}T^d(s)$, we deduce $T^d(s)=s$, thus $o=o(T)$ divides $d$. \end{proof} \begin{cor} Let $G^t$ be the set of transposes of the elements of $G$. Then $G^t$ is the structure group of a cycle set structure on $S^t$. \end{cor} \begin{proof} First note that, because $G$ is generated by $S$, $G^t$ is generated by $S^t$. As $T$ is a bijection, for each $i$ the set $S^t$ contains exactly one element $s^t$ such that $D_{s^t}=D_i$, that is $s=T^{-1}(s_i)$. Moreover, as $G$ is permutation-free, so is $G^t$. So by Theorem \ref{condCS} it is the structure group of the cycle set $S^t$. \end{proof} In particular, this can be used to work on the columns in $G$: if we want an element of $G$ with coefficient powers tuple $(a_1,\dots,a_n)$ on the columns, we can work in $G^t$, use the $\Pi$ of Dehornoy's calculus to obtain an elements $g^t$ in $G^t$ with cp$(g^t)=(a_1,\dots,a_n)$ and transpose it to get $g$ in $G$ with $q^{a_i}$ on the $i$-th column. \begin{prop} For any $k\in \mathbb N$, $\psi(s^{[k]})(s)=T^k(s)$. In particular, the map $s\mapsto \psi(s^{[k]})(s)$ is a bijection of $S$. \end{prop} \begin{proof} For $k=0$, $s^{[0]}=1$ so both sides of the equality are $s$. For $k=1$, this is just the definitions $\psi(s)(s)=s*s=T(s)$. Now proceed by induction:
Recall that, by Proposition \ref{pimat}, $$\psi(s^{[k+1]})(s)=\left(\psi(T^{k}(s))\circ\psi(T^{k-1}(s))\circ\dots\circ\psi(s)\right)(s)=\psi(T^k(s))(\psi(s^{[k]})(s)).$$ So by the induction hypothesis $\psi(s^{[k+1]})(s)=\psi(T^k(s))(T^k(s))=T^k(s)*T^k(s)=T^{k+1}(s)$.
As $T$ is a bijection, so is $T^k$. \end{proof} \begin{cor}\label{-k} For $k$ in $\mathbb N$ and $s$ in $S$, let $t=(T^k)^{-1}(s)$ then $s^{[-k]}=\Pi_k(t,\dots,t)^{-1}$. \end{cor} \begin{proof} Let $t\in S$, we have $$(t^{[k]})^{-1}=(D_t^kP_{t^{[k]}})^{-1}=P_{t^{[k]}}^{-1}D_t^{-k}=\pre{\psi(t^{[k]})^{-1}}{D_t^{-k}}P_{t^{[k]}}^{-1}=D_{\psi(t^{[k]})(t)}^{-k}P_{t^{[k]}}^{-1}=D_{T^k(t)}^{-k}P_{t^{[k]}}^{-1}.$$ Thus, if $t=(T^k)^{-1}(s)$, we find $D_{(t^{[k]})^{-1}}=D^{-k}_s$. \end{proof} \begin{prop}[\cite{rcc}] \label{bij} The map $(s,t)\mapsto (s*t,t*s)$ is bijective. \end{prop} \begin{proof} As $S$ is finite, so is $S\times S$, so we only have to show injectivity, i.e. assume $s*t=s'*t'$ and $t*s=t'*s'$ for some $s,t,s',t'\in S$.
Since $\Pi_2(s,t)=s(s*t)=t(t*s)=\Pi_2(t,s)$, we get $\Pi_2(s,t)\Pi_2(s',t')^{-1}=s(s*t)(s'*t')^{-1}s'^{-1}=ss'^{-1}$ by hypothesis and then as previously $ss'^{-1}=D_sD_{\psi^{-1}(s)(s'*s')}^{-1}P_sP_{s'}^{-1}$.
We also have $\Pi_2(s,t)\Pi_2(s',t')^{-1}=\Pi_2(t,s)\Pi_2(t',s')^{-1}=tt'^{-1}=D_tD_{\psi^{-1}(t)(t'*t')}^{-1}P_tP_{t'}^{-1}$.
By unicity of the diagonal part, we must have $D_sD_{\psi^{-1}(s)(s'*s')}^{-1}=D_tD_{\psi^{-1}(t)(t'*t')}^{-1}$, i.e. $D_sD_{\psi^{-1}(t)(t'*t')}=D_tD_{\psi^{-1}(s)(s'*s')}$. Because each term on each side corresponds to a diagonal matrix with only one $q$, we have either $D_s=D_t$ and so $t'=s'$ by the previous lemma, or $s=\psi^{-1}(s)(s'*s')$, thus $s*s=s'*s'$ and again by the previous lemma $s=s'$, so $ss'^{-1}=1=tt'^{-1}$ and finally $t=t'$. In both cases, we are done. \end{proof} \begin{prop}\label{frac2} Any element $g\in G$ can be decomposed as a reduced right-fraction in $M$, that is: $$\exists f,h\in M, g=f^{-1}h.\text{ with } \lambda(g)=\lambda(f)+\lambda(h).$$ \end{prop} \begin{proof} The proof is essentially the same as in Proposition \ref{frac}. Take a reduced decomposition of $g$ as product of elements in $S\cup S^{-1}$, the defining relations of $G$ give $$s(s*t)=t(t*s)\Longleftrightarrow s^{-1}t=(s*t)(t*s)^{-1}.$$ From the previous proposition, for any couple $(s',t')\in S^2$ there exists $(s,t)$ such that $(s*t,t*s)=(s',t')$, thus we can inductively use the relations $s't'^{-1}=(s*t)(t*s)^{-1}=s^{-1}t$ to "move" all inverses of the generators to the left in a decomposition of $g$, which gives the desired form. \end{proof} From Propositions \ref{frac} and \ref{frac2} we deduce: \begin{cor}[\cite{rcc}] $G$ is the group of fractions of $M$. \end{cor} This also implies that $G$ is a Garside group. \section{Bounding of the Dehornoy's class}\label{coxlike} We fix a finite cycle set $(S,*)$ of size $n$ with structure monoid (resp. group) $M$ (resp. $G$), of Dehornoy's class $d>1$ and associated germ $\overline G$. \begin{defi} The permutation group $\mathcal G_S$ associated to a cycle set $S$ is the subgroup of $\mathfrak S_n$ generated by $\psi(s_i), 1\leq i\leq n$.
When the context is clear we will simply write $\mathcal G$. \end{defi} $\mathcal G$ is precisely the image of the map sending $g$ in $G$ to $P_g$. Note that, as $P_\sigma P_\tau=P_{\tau\sigma}$, we have that $\psi(gh)=\psi(h)\psi(g)$, so $\psi$ is an antimorphism. This won't pose a problem here, as we'll only use $\psi(s^n)=\psi(s)^n$.
In this case $(X,*_{|_X})$ and $(Y,*_{|_Y})$ are also cycle sets.
A cycle set that is not decomposable is called indecomposable. \end{defi} \begin{ex} For $S=\{s_1,s_2,s_3,s_4\}$ and $\psi(s_1)=\psi(s_2)=(2143)$, $\psi(s_3)=\psi(s_4)=(2134)$, we have $\mathcal G=\langle (2143),(2134)\rangle<\mathfrak{S}_n$. We see that $X=\{s_1,s_2\}$ and $Y=\{s_3,s_4\}$ are both $\mathcal G$-invariant and their respective cycle set structures are given by $\psi_X(s_1)=\psi_X(s_2)=(12)$ and $\psi_Y(s_3)=\psi_Y(s_4)=(34)$. \end{ex} In personal communications \cite{raul}, the following conjecture was mentioned: \begin{conj}[\cite{raul}] \label{conj2}
If $S$ is indecomposable then $d\leq n$. \end{conj} Note that, taking $S=\{s_1,\dots,s_n\}$ with $\psi(s)=(12\dots n)$ for all $s$ provides an indecomposable cycle set that attains this bound. \par Using a python program based on the proof of Proposition \ref{class}, we find the following maximum values of the class of cycle sets of size $n$:
\begin{center}
\begin{tabular}{c|cccccccc} $n$&3&4&5&6&7&8&9&10\\\hline $d_{\text{max}}$&3&4&6&8&12&15&24&30 \end{tabular} \end{center} This corresponds to the OEIS sequence \href{https://oeis.org/A034893}{A034893} \textit{"Maximum of different products of partitions of n into distinct parts"}, studied in \cite{part} where the following is proved:
\begin{lem}[\cite{part}] Let $n\geq 2$ be written as $n=\mathcal T_m+l$ where $\mathcal T_m$ is the biggest triangular number ($\mathcal T_m=1+2+\dots+m$) with $\mathcal T_m\leq n$ (and so $l\leq m$). Then the maximum value $$a_n=\max\left(\left\{\prod\limits_{i=1}^k n_i\middle|k\in\mathbb N, 1\leq n_1<\dots<n_k, n_1+\dots+n_k=n\right\}\right)$$ is given by $$a_n=a_{\mathcal T_m+l}=\begin{cases}\frac{(m+1)!}{m-l},&0\leq l\leq m-2\\\frac{m+2}{2}m!,&l=m-1\\(m+1)!,&l=m.\end{cases}$$ \end{lem} This leads to the following conjecture: \begin{conj}\label{conj1} The class $d$ of $S$ is bounded above by $a_n$ and the bound is minimal. \end{conj} Note that the set map $\Pi\colon \mathbb Z^n\to G$ allows us to transport the abelian group structure of $\mathbb Z^n$ to $G$ as follows: \begin{prop} There exists a commutative group structure on $G$ denoted $(G,+)$ such that for all $g,h$ in $G$, $g+h$ is the unique element such that $D_{g+h}=D_gD_h$. \end{prop} \begin{proof} This is a direct consequence of Theorem \ref{istruct}. \end{proof} This structure corresponds to the structure of left braces, see for instance \cite{brace}. \begin{prop}\label{abestruct} There exist $g',h'$ in $G$ such that $g+h=gh'=hg'$ with $D_{h'}=\pre{g^{-1}}{D_h}$ and $D_{g'}=\pre{h^{-1}}{D_g}$.
Moreover, if $g,h$ are in $M$ with $g=\Pi_k(t_1,\dots,t_k)$ and $h=\Pi_l(u_1,\dots,u_l)$ then $g+h=\Pi_{k+l}(t_1,\dots,t_k,u_1,\dots,u_l)$, and $g',h'$ are in $M$. \end{prop} \begin{proof} By definition $g+h=D_gD_hP_{g+h}$. Let $h'=g^{-1}(g+h)$ then $h'=P_g^{-1}D_hP_{g+h}=\pre{g^{-1}}{D_h}P_{g}^{-1}P_{g+h}$, thus $D_{h'}=\pre{g^{-1}}{D_h}$ by the unicity of monomial left-decomposition. And similarly for $g'$ using $g+h=h+g$.
For the second part, the $\Pi$-expression is a consequence of Proposition \ref{pipres}. Then, by the first statement $g'$ (resp. $h'$) only has non-negative coefficient-powers iff $g$ (resp. $h$) does, which is equivalent to being in $M$. \end{proof} \begin{prop} The additive commutative structure $(M,+)$ induces an abelian group structure on $\mathcal G$ compatible with the map $\psi: M \to \mathcal G$. \end{prop} \begin{proof} Let $\sigma,\tau$ be in $\mathcal G$ and $g,h$ in $M$ such that $\psi(g)=\sigma$ and $\psi(h)=\tau$ (i.e. $P_g=P_\sigma$, $P_h=P_\tau$). We will show that $P_{g+h}$ does not depend on $g$ and $h$ but only on $\sigma$ and $\tau$, so that $\sigma+\tau$ is well-defined as $\psi(g+h)$.
By the commutativity of $(M,+)$, it suffices to show that $\sigma+\tau$ does not depend on the choice of the representative $g$ of $\sigma$. From Proposition \ref{abestruct} we have the existence of $h'$ in $M$ such that $g+h=gh'$ and $D_{h'}=\pre{g^{-1}}{D_h}$ which only depends on $h$ and $P_g=P_\sigma$. As $h'$ is uniquely determined by $D_{h'}$, it does not depend on the choice of $g$, so neither does $P_{h'}$. Finally, we have that $P_{g+h}=P_gP_{h'}$ is the product of terms only depending on $\sigma=\psi(g)$. \end{proof} As a consequence we obtain the following result: \begin{prop}[\cite{brace}]\label{ddivG} The class $d$ divides the order of $\mathcal G$. In particular $d$ divides $n!$. \end{prop} \begin{proof}
For $s\in S$, the set $\{s^{[k]}\mid k\in\mathbb Z\}$ is a subgroup of $(G,+)$, and the smallest integer $d_s$ such that $s^{[d_s]}$ is diagonal corresponds to the order of $\psi(s)$ in $(\mathcal G,+)$, which thus divides $|\mathcal G|$.
As $d$ is the lcm of all the $d_s,s\in S$, it also divides $|\mathcal G|$. \end{proof} The landau function $g\colon\mathbb N^*\to\mathbb N^*$ (\cite{landau}) is defined as the largest order of a permutation in $\mathfrak S_n$. \begin{prop}\label{conj-proof} If $S$ is square-free and $\mathcal G$ abelian then $d\leq a_n$ \end{prop} That is, under these conditions the bound part of Conjecture \ref{conj1} holds. \begin{proof} If $S$ is square-free, then for all $s\in S$ we have by definition $T(s)=s$, so for any $k\in\mathbb Z$, $s^{[k]}=sT(s)\dots T^{k-1}(s)=s^k$ so $\{s^{[k]}\mid k\in\mathbb Z\}$ is a subgroup of $(G,\cdot)$ and the smallest integer $d_s$ such that $s^{[d_s]}$ is diagonal corresponds to the order of $\psi(s)$ in $(\mathcal G,\cdot)$, which thus divides $e(\mathcal G)$ the exponent of $\mathcal G$ (the lcm of the orders of every element). So $d$ will also divide $e(\mathcal G)$.
As $\mathcal G$ is abelian and finite, there exists an element with order equal to its exponent, so the exponent is bounded by the maximal order of an element, i.e. $d\mid e(\mathcal G)\leq g(n).$
By the decomposition in disjoint cycles, $g(n)$ is equal to the maximum of the lcm of partitions of $n$: $$g(n)=\max\left(\left\{\text{lcm}(n_1,\dots,n_k)\middle|k\in\mathbb N, 1\leq n_1\leq\dots\leq n_k, n_1+\dots+n_k=n\right\}\right)$$ Moreover, by properties of the lcm, if $1\leq n_i=n_j$, as $\text{lcm}(n_i,n_j)=n_i$, the max is unchanged by replacing $n_j$ by only $1$'s. And as the lcm of a set is bounded above by the product of the elements, we have $g(n)\leq a_n$. Thus $d\leq g(n)\leq a_n$. \end{proof}
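These bounds are easy to check numerically. The following Python sketch (ours, not from the paper) computes $a_n$ by brute force over partitions into distinct parts and the Landau function $g(n)$ as the maximal lcm over partitions of $n$, and confirms $g(n)\leq a_n$ for small $n$; the values of $a_n$ agree with the table above.
\begin{verbatim}
# Illustrative sketch (ours): the bound d <= g(n) <= a_n on small values of n.
from math import lcm

def partitions(n, minimum=1):
    if n == 0:
        yield ()
        return
    for first in range(minimum, n + 1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def a(n):                      # max product over partitions into distinct parts
    best = 1
    for p in partitions(n):
        if len(set(p)) == len(p):
            prod = 1
            for x in p:
                prod *= x
            best = max(best, prod)
    return best

def landau(n):                 # g(n) = max lcm over all partitions of n
    return max(lcm(*p) for p in partitions(n))

assert all(landau(n) <= a(n) for n in range(2, 11))
print([a(n) for n in range(3, 11)])   # [3, 4, 6, 8, 12, 15, 24, 30]
\end{verbatim}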
\begin{prop} The following hold: \begin{enumerate}[label=(\roman*)] \item $\psi\colon G\rightarrow\mathcal G$ factorizes through the projection $G\to\overline G$ \item We have the following divisibilities: \begin{itemize}\item $o(T)\mid d$ \item $d\mid \#\mathcal G$\item $\#\mathcal G\mid d^n$ \end{itemize} \end{enumerate}
where $o(T)$ is the order of the diagonal permutation $T$, $\#\mathcal G$ denotes its order $|\mathcal G|$ (to avoid confusion with $\mid$ for divisibility). \end{prop} \begin{proof} (i) follows from the definition of $d$ as $\psi(s^{[d]})=\text{id}$.
For (ii), the first divisibility is Proposition \ref{o(T)}, the second is Proposition \ref{ddivG} and the third is (i). \end{proof} For a positive integer $k$, denote by $\pi(k)$ the set of divisors of $k$. \begin{cor} \label{pp} We have $\pi(d)=\pi(\#\mathcal G)$.
In particular, $d$ is a prime power iff $\#\mathcal G$ is a prime power. \end{cor} This means that our later results, which will involve the condition "$d$ is a prime power", can also be restated for $\#\mathcal G$. \begin{proof} As $d$ divides $\#\mathcal G$, any divisor of $d$ is a divisor of $\#\mathcal G$. Conversely, if $p$ is a prime divisor of $\#\mathcal G$ then it divides $d^n$ and thus divides $d$. \end{proof}
In particular, $\pi(n)\subseteq \pi(\#\mathcal G)=\pi(d)$, and thus if $d$ is a prime power then $n$ is also a power of the same prime. \end{lem} \begin{proof} By \cite{etingof} $S$ is indecomposable iff $\mathcal G$ acts transitively on $S$. By the orbit stabilizer theorem, for any $s$ in $S$ we have $\# \text{Orb}(x)=\frac{\#\mathcal G}{\#\text{Stab}(x)}$. So if $S$ is indecomposable there is a unique orbit of size $n$ so $n$ divides $\#\mathcal G$. The last statements are a direct consequence of this divisibility and the previous corollary. \end{proof} \begin{lem}
If $S$ is indecomposable and $\mathcal G$ is abelian, then $n=|\mathcal G|$ \end{lem} \begin{proof}\footnote{https://math.stackexchange.com/a/1316138}
Again by \cite{etingof}, $S$ is indecomposable iff $\mathcal G$ acts transitively on $S$. Let $x_0\in S$; by transitivity, for all $x\in S$ there exists $\sigma\in\mathcal G$ such that $x=\sigma(x_0)$. Let $\tau\in\mathcal G$ be such that we also have $x=\tau(x_0)$; we will show that $\tau=\sigma$. For all $y\in S$, there exists $\nu\in\mathcal G$ such that $y=\nu(x)$, thus $\sigma(y)=\sigma(\nu(x))=\sigma(\nu(\tau(x_0)))=\tau(\nu(\sigma(x_0)))=\tau(y)$. So an element of $\mathcal G$ is uniquely determined by the image of $x_0$, thus $|S|\geq |\mathcal G|$, and the other inequality follows by transitivity. \end{proof} Let $k\geq 1$ and $G^{[k]}$ be the subgroup of $G$ generated by $S^{[k]}=\{s^{[k]}\mid s\in S\}$. The following result was simultaneously introduced in \cite{lebed}: \begin{prop} \label{Sk} For $k\geq 1$, $G^{[k]}$ induces a cycle set structure on $S^{[k]}$. \end{prop} \begin{proof} Recall that $\psi(s^{[k]})(t)=\Omega_{k+1}(s,\dots,s,t)$ for all $s,t\in S$, and note that $$s^{[k]}t^{[k]}=D_s^k P_{s^{[k]}}D_t^kP_{t^{[k]}}=D_s^k\left(\pre{s^{[k]}}{D_t^k}\right)P_{s^{[k]}}P_{t^{[k]}}=D^k_sD_{\psi(s^{[k]})^{-1}(t)}^kP_{s^{[k]}}P_{t^{[k]}}.$$ For all $s^{[k]},t^{[k]}\in S^{[k]}$, define $s^{[k]}\star t^{[k]}$ as $\Omega_{k+1}(s,\dots,s,t)^{[k]}$. Then we have $$s^{[k]}(s^{[k]}\star t^{[k]})=D^k_sD^k_{\psi(s^{[k]})^{-1}(s^{[k]}\star t^{[k]})}P_{s^{[k]}}P_{t^{[k]}}=D^k_sD^k_tP_{s^{[k]}}P_{t^{[k]}}.$$ By symmetry, we have that $D_{s^{[k]}(s^{[k]}\star t^{[k]})}=D_{t^{[k]}(t^{[k]}\star s^{[k]})}$. Thus as $G$ is permutation-free we conclude that $s^{[k]}(s^{[k]}\star t^{[k]})=t^{[k]}(t^{[k]}\star s^{[k]})$.
All generators satisfy the conditions of Theorem \ref{condCS} with $D_{s_i}=D_i^k$ so $(S^{[k]},\star)$ is a cycle set. \end{proof} \begin{rmk} Alternatively, one can directly show that $\star$ satisfies Equation (\ref{RCL}): let $s,t,u\in S$, then we have\\ $(s^{[k]}\star t^{[k]})\star(s^{[k]}\star u^{[k]})=\Omega_{k+1}(s,\dots,s,t)^{[k]}\star\Omega_{k+1}(s,\dots,s,u)^{[k]}$\\ $=\Omega_{k+1}(\Omega_{k+1}(s,\dots,s,t),\dots,\Omega_{k+1}(s,\dots,s,t),\Omega_{k+1}(s,\dots,s,u))^{[k]}$.
By definition of $\Omega$ (see \cite{rcc} eq 4.8), the two expressions $\Omega_{p+q}(x_1,\dots,x_p,y_1,\dots,y_q)$ and $\Omega_q(\Omega_{p+1}(x_1,\dots,x_p,y_1),\dots,\Omega_{p+1}(x_1,\dots,x_p,y_q))$ coincide. Thus $(s^{[k]}\star t^{[k]})\star(s^{[k]}\star u^{[k]})=\Omega_{2k+1}(s,\dots,s,t,\dots,t,u)$. As $\Omega$ is invariant under permutation of all but the last coordinate, we have $\Omega_{2k+1}(s,\dots,s,t,\dots,t,u)=\Omega_{2k+1}(t,\dots,t,s,\dots,s,u).$ Thus, we conclude that $(s^{[k]}\star t^{[k]})\star(s^{[k]}\star u^{[k]})=(t^{[k]}\star s^{[k]})\star(t^{[k]}\star u^{[k]})$. \end{rmk} \begin{prop} Let $k$ be a positive integer smaller than $d$, then $(S^{[k]},\star)$ is of class $\frac{d}{\text{gcd}(d,k)}$.
Moreover, $(S^{[d+1]},\star)$ is the same, as a cycle set, as $(S,*)$. \end{prop} This means that this construction provides, at most, $d$ different cycle sets. \begin{proof} Recall that $(s^{[k]})^{[j]}=s^{[kj]}$. Thus $(s^{[k]})^{[a]}$ is diagonal when $ka$ is a multiple of $d$, so we deduce that $S^{[k]}$ is of class $\frac{\text{lcm}(d,k)}{k}=\frac{d}{\text{gcd}(d,k)}$.
By definition of $d$, we have that $(S^{[d]},\star)$ is the trivial cycle set ($\psi(s)=\text{id}$), thus $\psi(s^{[d+1]})=\psi(s)$. \end{proof} \section{Sylow subgroups and decomposition} Recall that for $k>1$, $\Sigma_n^k$ denotes the group of monomial matrices with non-zero coefficients powers of $\zeta_k$, and $\iota_k^{kl}$ is the embedding $\Sigma_n^k\hookrightarrow\Sigma_n^{kl}$ sending $\zeta_k$ to $\zeta_{kl}^l$. Given two subgroups $H,K<G$, their internal product subset is defined by $HK=\{hk\mid h\in H,k\in K\}$. If $H$ and $K$ have trivial intersection and $HK=KH$, the set product $HK$ has a natural group structure called the Zappa--Szép product of $H$ and $K$. We apply this to the Sylow-subgroups of the germs to obtain that any cycle set can be obtained as a Zappa--Szép product of cycle sets with coprime classes. \begin{defi}\label{intprod} Let $k,l$ be integers such that $k,l>1$. Let $m$ be a common multiple of $k$ and $l$, with $m=ka=lb$ for some $a,b\geq 1$. Given two subgroups $G<\Sigma_n^k$, $H<\Sigma_n^l$ by $G\bowtie_m H$ we denote the subset $\iota_k^m(G)\iota_l^m(H)$ of $\Sigma_n^m$.
Identifying $G$ and $H$ with their image in $\Sigma_n^m$, we say that they commute (\cite{group}) if $GH=HG$ as sets, i.e. for any $(g,h)$ in $G\times H$, there exists a unique $(g',h')$ in $(G\times H)$ such that $gh=h'g'$. \end{defi} \begin{rmk} This operation can be thought of as taking elements of $G$ and $H$, changing appropriately the roots of unity (with $\zeta_k=\zeta_m^a$ and $\zeta_l=\zeta_m^b$) and taking every product of such elements (we embed $G$ and $H$ in $\Sigma_n^m$ and take their product as subsets).
When $k$ and $l$ are coprime, $G$ and $H$ can be seen as subgroups of $\Sigma_n^m$ with trivial intersection, and so if they commute we have that $G\bowtie_m H$ is a group called the Zappa--Szép product of $G$ and $H$ (\cite{group}, Product Theorem). \end{rmk} Let $(S,*_1),(S,*_2)$ be two cycle sets, over the same set $S$, of coprime respective classes $d_1,d_2$ and germs $\overline G_1,\overline G_2$. Let $d=d_1d_2$ and $\overline G=\overline G_1\bowtie_d \overline G_2$ (which, in general, is only a subset of $\Sigma_n^d$), and we identify each $\overline G_i$ with its image in $\overline G$. \begin{defi} $S_1$ and $S_2$ are said to be $\bowtie$-compatible if $\overline G$ is the structure group of some cycle set $S_1\bowtie S_2$, called the Zappa--Szép product of $S_1$ and $S_2$. \end{defi} We now construct a candidate $S_1\bowtie_d S_2$ for which $\overline G$ could be the germ. This candidate is not, in general a cycle set, but if it is, its class is a divisor of $d$. Then we will state the condition for it to be a cycle set.
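Before turning to the construction itself, here is a small Python sketch (ours, illustrative only) of Definition \ref{intprod}: a germ element over $\zeta_k$ is stored as a pair (exponent tuple mod $k$, permutation), the embedding $\iota_k^m$ simply rescales the exponents by $m/k$, and commutation of the two embedded germs is tested on products of generators, which suffices as noted in a remark further below.
\begin{verbatim}
# Illustrative sketch (ours): the embedding iota_k^m and a generator-level
# commutation test, with germ elements stored as (exponents mod k, perm).
def iota(g, k, m):                    # zeta_k = zeta_m^(m/k)
    assert m % k == 0
    cp, perm = g
    return ([e * (m // k) for e in cp], perm)

def mul(g, h, m):                     # product in Sigma_n^m
    a, c = g
    b, dd = h
    n = len(a)
    return ([(a[i] + b[c[i]]) % m for i in range(n)],
            [dd[c[i]] for i in range(n)])

def commute_on_generators(gens1, gens2, d1, d2):
    m = d1 * d2
    G1 = [iota(s, d1, m) for s in gens1]
    G2 = [iota(t, d2, m) for t in gens2]
    prods_st = {str(mul(s, t, m)) for s in G1 for t in G2}
    prods_ts = {str(mul(t, s, m)) for s in G1 for t in G2}
    return prods_st <= prods_ts       # every s t equals some t' s'

s1p = ([1, 0, 0, 0, 0], [1, 2, 3, 0, 4])   # s_1' of the example below, psi = (1234)
print(iota(s1p, 2, 6))                     # exponent 1 -> 3, i.e. zeta_2 -> zeta_6^3
\end{verbatim}
The printed element is the embedding $\iota_2^6(s_1')$ computed explicitly in the example below.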
For clarity, we will put a subscript to distinguish between the respective structures of $S_1$ and $S_2$: $\psi_1(s)$ will denote the permutation given by $*_1$, and $s^{[k]_2}$ will denote an element of $M_2$.
\begin{algorithm}[H] \caption{Constructing $S_1\bowtie_d S_2$} \label{algdia} \textbf{Input:} A set $S$ with two cycle set structures $*_1,*_2$ on $S$ of coprime classes $d_1,d_2$
\mbox{} \\ \textbf{Output:} A couple $(S,*)$ with $*$ a binary operation
\mbox{} \begin{algorithmic}[1] \State Compute $(u,v)$ the solution to Bézout's identity $d_2u+d_1v=1[d]$ \State Set $d=d_1d_2$ \For{$i=1$ to $n$} \State Compute $g_1=s_i^{[u]_1}\in\overline G_1$ \State Let $\sigma=\psi_1(s_i^{[u]})$ \State Compute $g_2=s_{\sigma(i)}^{[v]_2}\in\overline G_2$ \State Let $\psi(s_i)$ be the permutation part of $\iota_{d_1}^d(g_1)\iota_{d_2}^d(g_2)$ \EndFor \State \Return $S_1\bowtie_d S_2=(S,*)$ with $s_i*s_j=s_{\psi(s_i)(j)}$. \end{algorithmic} \end{algorithm} \begin{rmk} The heart of the algorithm is line $6$ which relies on the identity $$D_i^kP_{\sigma}D_j^lP_{\tau}=D_i^kD_{\sigma^{-1}(j)}^lP_{\tau\sigma}.$$ To obtain an element with coefficient-power $1$ on the $i$-th coordinate and zero elsewhere, we have to take $j=\sigma(i)$ with here $\sigma=\psi_1(s_i^{[u]_1})$, and as we apply $\iota^d$ on the elements (in $S_1$ this does $q\mapsto q^{d_2}$ and in $S_2$ $q\mapsto q^{d_1}$), we obtain $D_{s_i}=D_i^{d_2u+d_1v}=D_i$ from line 1. \end{rmk} \begin{ex} Take two cycle sets of size $n=5$ and class respectively $2$ and $3$, and apply Algorithm \ref{algdia} providing a candidate for a cycle set of class $6$:
Let $S_1=\{s_1',\dots,s_5'\}$ and $S_2=\{s_1'',\dots,s_5''\}$, with $(S_1,\psi_1),(S_2,\psi_2)$ given by: \begin{align*} \psi_1(s_1')=\psi_1(s_3')=(1234)&&\psi_1(s_2')=\psi_1(s_4')=(1432)&&\psi_1(s_5')=\text{id}\\ \psi_2(s_1'')=\psi_2(s_2'')=(354)&&\psi_2(s_3'')=\psi_2(s_4'')=\psi_2(s_5'')=(345)&& \end{align*} Here $S_1$ is of class $d_1=2$ and $S_2$ of class $d_2=3$.
Consider their respective germs $\overline G_1$ and $\overline G_2$ of order $2^5$ and $3^5$. Then define $\overline G=\overline G_1\bowtie_6\overline G_2$ over the basis $S=\{s_1,\dots,s_5\}$. For instance: $$\iota_2^6(s_1')=\iota_ 2^6\left(\begin{pmatrix} 0&\zeta_2&0&0&0\\0&0&1&0&0\\0&0&0&1&0\\1&0&0&0&0\\0&0&0&0&1 \end{pmatrix}\right)=\begin{pmatrix} 0&\zeta_6^3&0&0&0\\0&0&1&0&0\\0&0&0&1&0\\1&0&0&0&0\\0&0&0&0&1 \end{pmatrix}$$ $$ \iota_3^6(s_1'')=\iota_ 3^6\left(\begin{pmatrix} 1&0&0&0&0\\0&1&0&0&0\\0&0&0&0&\zeta_3\\0&0&1&0&0\\0&0&0&1&0\end{pmatrix}\right)=\begin{pmatrix} 1&0&0&0&0\\0&1&0&0&0\\0&0&0&0&\zeta_6^2\\0&0&1&0&0\\0&0&0&1&0\end{pmatrix}$$ To construct an element $g\in\overline G$ with $\overline{\text{cp}}(g)=(0,0,1,0,0)$ we first solve Bézout's identity modulo 6: $3u+2v=1[6]$, a solution is given by $u=1$ and $v=2$, so we will multiply some $\iota_2^6(s_i'^{[1]})$ and $\iota_2^6(s_j''^{[2]})$ so that their product has coefficient-powers $(0,0,3*1+2*2,0,0)=(0,0,1,0,0)[6]$. Recall that: $$D_i^kP_{\sigma}D_j^lP_{\tau}=D_i^kD_{\sigma^{-1}(j)}^lP_{\tau\sigma}.$$ Here we want $i=\sigma^{-1}(j)=3$, $k=3u$ and $l=2v$, so we take $i=3$. As $\sigma=\psi(s_3'^{[1]})=\psi(s_3')=(1234)$, we have $j=\sigma(3)=4$, and note that $s_4''^{[2]}=s_4''s_5''$. Finally: \begin{align*}\iota_2^6(s_3')\iota_3^6(s_4''^{[2]})&=\begin{pmatrix} 0&1&0&0&0\\0&0&1&0&0\\0&0&0&\zeta_6^{3\cdot 1}&0\\1&0&0&0&0\\0&0&0&0&1 \end{pmatrix} \begin{pmatrix} 1&0&0&0&0\\0&1&0&0&0\\0&0&0&0&1\\0&0&\zeta_6^{2\cdot 2}&0&0\\0&0&0&1&0 \end{pmatrix}\\& =\begin{pmatrix} 0&1&0&0&0\\0&0&0&0&1\\0&0&\zeta_6^{3+4}&0&0\\1&0&0&0&0\\0&0&0&1&0 \end{pmatrix} =\begin{pmatrix} 0&1&0&0&0\\0&0&0&0&1\\0&0&\zeta_6&0&0\\1&0&0&0&0\\0&0&0&1&0 \end{pmatrix}\end{align*} This will be our candidate for $s_3$. Doing this for all the generators we find: $$\psi(s_1)=(124)(35),\psi(s_2)=(1532),\psi(s_3)=(1254),\psi(s_4)=(132)(45),\psi(s_5)=(354).$$ Unfortunately, this isn't a cycle set: $(s_1*s_2)*(s_1*s_1)=s_4*s_2=s_1$ whereas $(s_2*s_1)*(s_2*s_1)=s_5*s_5=s_4$. This also means that, $\overline G$ is not permutation-free or in Dehornoy's Calculus terms: with our candidates $s_1,s_2$ we have that $\Pi_2(s_1,s_2)$ and $\Pi_2(s_2,s_1)$ both have coefficient-powers $(1,1,0,0,0)$ but different permutations. \end{ex} \begin{prop} If $\overline G_1$ and $\overline G_2$ commute, then $S_1$ and $S_2$ are $\bowtie$-compatible. \end{prop} In this case, $G=\overline G_1\bowtie_d \overline G_2$ is the Zappa--Szép product of $\overline G_1$ and $\overline G_2$. \begin{proof} As $d_1$ and $d_2$ are coprime, it is clear that $\overline G_1\cap\overline G_2=\{1\}$.
By (\cite{group}, Product Theorem), $\overline G$ is a subgroup of $\Sigma_n^d$ if and only if $\overline G_1$ and $\overline G_2$ commute, i.e. $\overline G=\overline G_1\bowtie_d \overline G_2=\overline G_2\bowtie_d \overline G_1$. As $\overline G_1$ and $\overline G_2$ have different (non-zero) coefficient-powers, a product $g_1g_2$ of two non-trivial elements from $\iota_{d_1}^d(\overline G_1)$ and $\iota_{d_2}^d(\overline G_2)$ cannot be a permutation matrix.
With Algorithm \ref{algdia}, we know that $\overline G$ respects condition (ii) for Theorem \ref{condCS}, and we've seen it also satisfies condition (i), finishing the proof. \end{proof} \begin{rmk} To check whether $\overline G_1$ and $\overline G_2$ commute, we can restrict to the generators and check that: $$\forall s\in S_1, t\in S_2,\exists s'\in S_1,t'\in S_2\text{ such that } st=t's'.$$ \end{rmk} \begin{prop} If $S_1$ and $S_2$ satisfy the following "mixed" cycle set equation \begin{equation}\label{zsc} \forall s,t,u\in S, (s*_1t)*_2(s*_1u)=(t*_2s)*_2(t*_2u) \end{equation} then $S_1$ and $S_2$ are $\bowtie-$compatible and $(S=S_1\bowtie_d S_2,*)$ is a cycle set. \end{prop} Explicitly, from Algorithm \ref{algdia}, we have $\psi(s_i)=\psi_2\left(s_{\psi_1(s_i'^{[u]})(i)}''^{[v]_2}\right)\circ\psi_1\left(s_i'^{[u]_1}\right)$ with $u,v$ such that $d_2u+d_1v=1[d_1d_2]$. \begin{proof} We will use the previous proposition and show how Equation \ref{zsc} naturally arises from considering the commutativity of the germs.For clarity, although our two cycle sets have the same underlying set $S=\{s_1,\dots,s_n\}$, we will distinguish where we see those elements by writing $s'$ for $(S,*_1)$ and $s''$ for $(S,*_2)$.
Let $s_i'\in S_1,s_j''\in S_2$, then in $\overline G$: $$s_i's_j''=D_i^{d_2}P_{s_i'}D_j^{d_1}P_{s_j''}=D_i^{d_2}D_{\psi_1(s_i')^{-1}(j)}^{d_1}P_{s_i'}P_{s_j''}.$$ We want some $s_k'\in S_1,s_l''\in S_2$ such that $s_i's_j''=s_l''s_k'$, i.e: $$D_i^{d_2}D_{\psi_1(s_i')^{-1}(j)}^{d_1}P_{s_i'}P_{s_j''}=D_l^{d_1}D_{\psi_2(s_l'')^{-1}(k)}^{d_2}P_{s_l''}P_{s_k'}.$$ As $d_1$ and $d_2$ are coprime, they're in particular different, so we must have: $$\begin{cases} D_i^{d_2}=D_{\psi_2(s_l'')^{-1}(k)}^{d_2}\\D_{\psi_1(s_i')^{-1}(j)}^{d_1}=D_l^{d_1}\\P_{s_i'}P_{s_j''}=P_{s_l''}P_{s_k'}. \end{cases}$$ From which we first deduce: $k=\psi_2(s_l'')(i)$ and $j=\psi_1(s_i')(l)$, or equivalently $s_k=s_l*_2 s_i$ and $s_j=s_i*_1 s_l$. So taking this $k$ and $l$ we get $D_{s_i's_j''}=D_{s_l''s_k'}$. We are left with last of the three conditions, which then becomes: $$P_{s_i'}P_{s_i'*_1 s_l''}=P_{s_l''}P_{s_l''*_2 s_i'}.$$ As $P_\sigma P_\tau=P_{\tau\sigma}$, the last condition is equivalent to $$\psi_2(s_i'*_1s_l'')\circ\psi_1(s_i')=\psi_1(s_l''*_2s_i')\circ\psi_2(s_l'').$$ As $s_l''\in S_2$, $\psi_2(s_i'*_1s_l'')$ is seen as the action of an element of $S_2$, so all this becomes equivalent to: $$\forall s,t,u\in S, (s*_1t)*_2(s*_1u)=(t*_2s)*_2(t*_2u).$$ \end{proof} \begin{rmk} The condition that the classes are coprime is used, with Bézout's identity, to have generators of the group $\overline G$ (elements with diagonal part $D_i$). Otherwise, say for instance that the classes are powers of the same prime, $d_1=p^a$ and $d_2=p^b$ with $b\leq a$. Then $\iota_{d_2}^d$ is the identity and $\iota_{d_1}^d$ will add elements with higher coefficient powers (or equal), thus we do not get any new generators (or too many in the case $a=b$). \end{rmk} We've seen how to construct cycle sets from ones of the same size and coprime classes. Now we show that this is enough to get all cycle sets from just ones of prime-power class:
Write the prime decomposition of $d$ as $d=p_1^{a_1}\dots p_r^{a_r}$ ($a_i>0$ and $p_i\neq p_j$ for $i\neq j$), and write $\alpha_i=p_i^{a_i}$ for simplicity. We use techniques inspired by \cite{primitive} to construct new cycle sets from two with coprime Dehornoy's class.
Fix again a cycle set $S$ of size $n$ and class $d>1$, with germ $\overline G$. By Proposition \ref{Sk}, given $k>0$ dividing $d$, the subgroup $\overline G^{[k]}$ generated by $S^{[k]}=\{s^{[k]}\mid s\in S\}$ is the germ of a structure group, and has for elements the matrices whose coefficient-powers are multiples of $k$. \begin{lem}\label{sylow} Let $\beta_i=\frac{d}{\alpha_i}$, then \begin{enumerate}[label=(\roman*)] \item For each $i$, $\overline G^{[\beta_i]}$ is a $p_i$-Sylow subgroup of $\overline G$. \item Two such subgroups commute (i.e. $\overline G^{[\beta_i]}\overline G^{[\beta_j]}=\overline G^{[\beta_j]}\overline G^{[\beta_i]}$). \item $\overline G$ is the product of all those subgroups. \end{enumerate} \end{lem} \begin{proof} Fix $1\leq i\leq r$; as $\beta_i$ divides $d$, the group $\overline G^{[\beta_i]}$ corresponds to the subgroup of $\overline G$ of matrices with coefficient-powers in $\{0,\beta_i,2\beta_i,\dots,\beta_i(\alpha_i-1)\}$ and thus has cardinal $\alpha_i^n=p_i^{a_in}$, so it is a $p_i$-Sylow subgroup of $\overline G$.
For $s,t\in S$ we have that $s^{[\beta_i]}t^{[\beta_j]}$ has a $q^{\beta_j}$ on some row, thus is left-divisible by $t'^{[\beta_j]}$ for some $t'\in S$, i.e. $s^{[\beta_i]}t^{[\beta_j]}=t'^{[\beta_j]}s'^{[\beta_i]}$ for some $s'\in S$.
We've seen that the $\overline G^{[\beta_i]}$ are $p_i$-Sylow subgroups of the abelian group $(\overline G,+)$, so by cardinality it is the direct sum of those subgroups. By Proposition \ref{abestruct} for any $g,h\in G$, there exists $h'\in G$ such that $g+h=gh'$, where $D_{h'}=\pre{g^{-1}}{D_h}$, so if $h$ is in some $G^{[k]}$, so is $h'$. Projecting onto $\overline G$, we have that any element $g$ can be expressed as a sum of $g_i\in \overline G^{[\beta_i]}$ and thus a product of $g_i'\in\overline G^{[\beta_i]}$. \end{proof}
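At the level of exponent tuples, Lemma \ref{sylow} is the Chinese remainder theorem applied coordinatewise. The following Python sketch (ours, for illustration) splits a cp-tuple mod $d$ into its components in the subgroups $\overline G^{[\beta_i]}$ and checks that their sum recovers the original tuple; the sample tuple is arbitrary, since by the I-structure every tuple corresponds to a germ element.
\begin{verbatim}
# Illustrative sketch (ours): Sylow components of a cp-tuple modulo d.
def prime_power_factors(d):
    out, p = {}, 2
    while p * p <= d:
        while d % p == 0:
            out[p] = out.get(p, 0) + 1
            d //= p
        p += 1
    if d > 1:
        out[d] = out.get(d, 0) + 1
    return out                               # {p_i: a_i}

def sylow_components(cp, d):
    comps = {}
    for p, a in prime_power_factors(d).items():
        alpha = p ** a
        beta = d // alpha                    # beta_i = d / p_i^{a_i}
        inv = pow(beta, -1, alpha)           # beta_i is invertible mod alpha_i
        comps[p] = [(c * beta * inv) % d for c in cp]   # a multiple of beta_i
    return comps

cp = [5, 0, 2, 1]                            # some germ element for d = 6
parts = sylow_components(cp, 6)
print(parts)                                 # {2: [3, 0, 0, 3], 3: [2, 0, 2, 4]}
print([sum(v) % 6 for v in zip(*parts.values())])   # recovers [5, 0, 2, 1]
\end{verbatim}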
\begin{ex}\label{exdec} The first example where $S$ is indecomposable but has class product of different primes is $n=8,d=6$ given by: \begin{flalign*} \psi(s_1)=(12)(36)(47)(58),\quad\quad& \psi(s_2)=(1658)(2347),\\ \psi(s_3)=(1834)(2765),\quad\quad& \psi(s_4)=(12)(38)(45)(67),\\ \psi(s_5)=(1438)(2567),\quad\quad& \psi(s_6)=(1856)(2743),\\ \psi(s_7)=(16)(23)(45)(78),\quad\quad& \psi(s_8)=(14)(25)(36)(78)\end{flalign*} Here, $\overline G$ decomposes as the Zappa--Szép product $\overline G^{[3]}\bowtie_6\overline G^{[2]}$ of its 2- and 3- Sylow. If we denote by $(S_2,\psi_2)$ and $(S_3,\psi_3)$ their respective cycle set structure then we find: \begin{align*} \psi_2(s_1')=\psi_2(s_2')&=(1476)(2583),\\ \psi_2(s_3')=\psi_2(s_6')&=(18)(27)(36)(45),\\ \psi_2(s_4')=\psi_2(s_5')&=(1674)(2385),\\ \psi_2(s_7')=\psi_2(s_8')&=(12)(34)(56)(78) \end{align*} and \begin{align*} \psi_3(s_1'')=\psi_3(s_3'')=\psi_3(s_5'')=\psi_3(s_7'')&=(135)(264),\\ \psi_3(s_2'')=\psi_3(s_4'')=\psi_3(s_6'')=\psi_3(s_8'')&=(153)(246). \end{align*} \end{ex} Lemma \ref{sylow} can be rephrased as $\overline G=\overline G^{[\beta_1]}\bowtie_d\dots\bowtie_d\overline G^{[\beta_r]}$. As the germ can be used to reconstruct the structure group and thus the cycle set, the following theorem summarizes these results from an enumeration perspective, that is constructing all solutions of a given size. \begin{thm}\label{sylows} Any cycle set can be obtained as the Zappa--Szép product of cycle sets of class a prime power. \end{thm} \begin{proof} Any cycle set is determined by its structure monoid, which can be recovered from the germ. By Lemma \ref{sylow} and the above construction, the germ can be decomposed and reconstructed from its Sylows, which also determine cycle set by Proposition \ref{Sk}. \end{proof} \begin{rmk} The class of the cycle set constructed will Algorithm \ref{algdia} will, in general, only be a divisor of the product of the prime-powers. This happens because nothing ensures that, for instance, the cycle set obtained is not trivial: we only know that $s^{[d_1d_2]}$ is diagonal, but it is not necessarily minimal. \end{rmk} \begin{rmk} This construction is similar to the matched product of braces $B_1\bowtie B_2$ appearing in \cite{matched,matched2}. Key differences are that we directly construct a cycle set with permutation group $B_1\bowtie B_2$ (whereas the authors of \cite{matched2} construct one over the set $B_1\bowtie B_2$) and that our construction doesn't rely on groups of automorphisms thanks to the natural embedding $\iota_k^{kl}$. Moreover, instead of classifying all braces, the existence of the germs suggests it is enough to classify braces with abelian group $(\mathbb Z/d\mathbb Z)^n$ for all $d$ and $n$ to recover all cycle-sets. \end{rmk} \begin{cor} Any cycle set is induced (in the sense of using the decomposability and Zappa--Szép product) by indecomposable cycle sets of smaller size and class, both powers of the same prime. \end{cor} \begin{proof} Let $S$ be obtained from the germ as an internal product of $S_1,\dots,S_r$ of classes respectively $p_1^{a_1},\dots,p_r^{a_r}$ with distinct primes. Then, consider a decomposition of each $S_i$ as indecomposable cycle sets: so up to a change of enumeration, the matrices in the structure group of $S_i$ are diagonal-by-block with each block corresponding to a cycle set, so with class dividing the class $p_i^{a_i}$ of $S_i$, thus also a power of $p_i$. By Lemma \ref{indN}, the size of those indecomposable cycle sets must also be powers of $p_i$. 
\end{proof} However, as far as the author knows, there is no ``nice'' way, given two cycle sets, to construct all cycle sets that decompose into those two; thus the above result is an existence result but not a constructive one, unlike the Zappa--Szép product previously used. \begin{rmk} Starting from a cycle set, we first write it as a Zappa--Szép product of its Sylows and then decompose each Sylow subgroup if the associated cycle set is decomposable. If one proceeds the other way, first decomposing and then looking at the Sylows of each cycle set of the decomposition, we obtain less information. For instance, if $S=\{s_1,\dots,s_6\}$ with $\psi(s_i)=(1\dots 6)$ for all $i$, then $S$ is not decomposable, but the cycle sets obtained from its Sylows $S^{[2]}$ and $S^{[3]}$ are decomposable ($\psi_2(s_i)=(14)(25)(36)$ and $\psi_3(s_i)=(135)(246)$ for all $i$, having respectively 3 and 2 orbits).
\end{rmk} \begin{ex} In Example \ref{exdec}, $3$ does not divide $8$, so $S_3$ has to be decomposable, and indeed it decomposes as $S_3=\{s_1'',s_3'',s_5''\}\sqcup\{s_2'',s_4'',s_6''\}\sqcup\{s_7'',s_8''\}$. \end{ex} \begin{cor} Let $N(n,d)$ be the number of cycle sets of size $n$ and of class a divisor of $d=p_1^{a_1}\dots p_r^{a_r}$. Then we have: $N(n,d)\leq \prod_i N(n,p_i^{a_i})$. \end{cor} For $n=10$, we find that approximately 67\% of cycle sets have class a prime power ($\sim$ 3.3 out of $\sim$ 4.9 million). We hope that this proportion decreases significantly as $n$ increases (as hinted by the previous values; for $n=4$ it is $99\%$), as more values of $d$ become possible (Conjecture \ref{conj1}). \emergencystretch=1em \printbibliography
\end{document}
\begin{document}
\title{Brownian Bridge Asymptotics for the Subcritical Bernoulli
Bond Percolation.}
\author{Yevgeniy Kovchegov\\
\small{Email: [email protected]}\\
\small{Fax: 1-650-725-4066}}
\maketitle
\begin{abstract}
For the $d$-dimensional model of a subcritical bond percolation ($p<p_c$) and a
point $\mathbf{\vec{a}}$ in $\mathbb{Z}^d$,
we prove that a cluster conditioned on connecting the points $(0,...,0)$ and $n \mathbf{\vec{a}}$, when scaled by $\frac{1}{n \| \mathbf{\vec{a}} \|}$ along $\mathbf{\vec{a}}$ and by $\frac{1}{\sqrt{n}}$ in the orthogonal directions, converges asymptotically to Time $\times$ ($d-1$)-dimensional Brownian Bridge. \end{abstract}
\section{Introduction.}
\subsection{Percolation and Brownian Bridge.}
We begin by briefly recalling the notion of bond percolation, based on the material rigorously presented in \cite{grimmett} and \cite{kesten}, and the notion of the Brownian Bridge, together with a verbal description of the result connecting the two, which is the primary objective of this paper.
\textbf{Percolation:} For each edge of the $d$-dimensional square lattice
$\mathbb{Z}^d$ in turn, we declare the edge $open$ with probability
$p$ and $closed$ with probability $1-p$, independently of all
other edges. If we delete the closed edges, we are left with a
random subgraph of $\mathbb{Z}^d$. A connected component of the
subgraph is called a ``cluster'', and the number of edges in a cluster is the ``size'' of the cluster. The probability $\theta (p)$ that the origin belongs to an infinite cluster is zero if $p=0$, and one if $p=1$. However, there exists a critical
probability $0< p_c <1$ such that $\theta (p) = 0$ if $p < p_c$
and $\theta (p) >0$ if $p > p_c$. In the first case, we say that
we are dealing with a $subcritical$ percolation model, and in
the second case, we say that we are dealing with a
$supercritical$ percolation model.
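
As a purely illustrative aside, the mechanism just described is easy to simulate; the following Python sketch (the percolation parameter, box size and seed are arbitrary choices of ours) samples a bond configuration on a finite box of $\mathbb{Z}^2$ and extracts the open cluster of the origin by breadth-first search.
\begin{verbatim}
import random
from collections import deque

def open_cluster_of_origin(p, box=30, seed=0):
    """Bernoulli bond percolation on the box {-box,...,box}^2 of Z^2:
    each nearest-neighbour edge is open with probability p, independently.
    Returns the set of sites in the open cluster of the origin (BFS)."""
    rng = random.Random(seed)
    edge_state = {}                                  # edges sampled on demand

    def is_open(a, b):
        e = (a, b) if a <= b else (b, a)             # unordered edge key
        if e not in edge_state:
            edge_state[e] = rng.random() < p
        return edge_state[e]

    cluster, queue = {(0, 0)}, deque([(0, 0)])
    while queue:
        x, y = queue.popleft()
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if max(abs(nb[0]), abs(nb[1])) <= box and nb not in cluster \
                    and is_open((x, y), nb):
                cluster.add(nb)
                queue.append(nb)
    return cluster

# p_c = 1/2 for bond percolation on Z^2; p = 0.3 is subcritical,
# so the cluster of the origin is typically small.
print(len(open_cluster_of_origin(p=0.3)))
\end{verbatim}
In the subcritical regime used here the returned cluster is typically small, in line with the discussion above.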
\textbf{Brownian Bridge:} defined as a sample-continuous Gaussian
process $B^0$ on $[0,1]$ with mean $0$ and $\bold{E} B^0_s B^0_t
= s(1-t)$ for $0 \leq s \leq t \leq 1$. So, $B^0_0 = B^0_1 = 0$
a.s. Also, if $B$ is a Brownian motion, then the process
$B_t - tB_1$ ($0 \leq t \leq 1$) is a Brownian Bridge. For more
details see \cite{bill}, \cite{dudley} and \cite{durrett}. The
process $B^{0, \mathbf{\vec{a}}}_t \equiv B^0_t +t \mathbf{\vec{a}}$
is a Brownian Bridge connecting points zero and
$\vec{\mathbf{a}}$.
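
As another illustrative aside (the discretization and sample sizes below are arbitrary choices of ours), the representation $B_t-tB_1$ quoted above gives a direct way to simulate the Brownian Bridge and to check the covariance $\bold{E} B^0_s B^0_t = s(1-t)$ empirically; the following Python sketch does exactly that.
\begin{verbatim}
import numpy as np

def brownian_bridge_paths(n_steps=1000, n_paths=5000, seed=0):
    """Simulate Brownian Bridge paths on [0,1] via B^0_t = B_t - t*B_1."""
    rng = np.random.default_rng(seed)
    dt = 1.0 / n_steps
    incr = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    B = np.hstack([np.zeros((n_paths, 1)), np.cumsum(incr, axis=1)])
    t = np.linspace(0.0, 1.0, n_steps + 1)
    return t, B - np.outer(B[:, -1], t)              # subtract t * B_1

t, bridge = brownian_bridge_paths()
i, j = 300, 700                                      # s = 0.3, t = 0.7
# empirical covariance versus s(1-t) = 0.09
print(np.cov(bridge[:, i], bridge[:, j])[0, 1], 0.3 * (1 - 0.7))
\end{verbatim}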
\textbf{History of the problem:} Below, we consider the $d$-dimensional model
of a subcritical bond percolation ($p<p_c$) and a point $\mathbf{\vec{a}}$
in $\mathbb{Z}^d$, conditioned on the event of zero being connected to
$n \mathbf{\vec{a}}$. We first show that a specifically
chosen path connecting points zero and $n \mathbf{\vec{a}}$ and going
through some appropriately defined points on the cluster (regeneration
points), if scaled $\frac{1}{n \| \mathbf{\vec{a}}\|}$ times along
$\mathbf{\vec{a}}$ and $\frac{1}{\sqrt{n}}$ times in the
direction orthogonal to $\mathbf{\vec{a}}$, converges to Time $\times$
($d-1$)-dimensional Brownian Bridge as $n \rightarrow +\infty$,
where the scaled interval connecting points zero and $n \mathbf{\vec{a}}$
serves as a $[0,1]$ time interval. In other words, we prove that a
scaled ``skeleton" going through the regeneration points of the
cluster converges to Time $\times$ ($d-1$)-dimensional Brownian
Bridge. In a subsequent step, we show that if scaled, then the hitting
area of the orthogonal hyper-planes shrinks, implying that for
$n$ large enough, all the points of the scaled cluster are within
an $\varepsilon$-neighborhood of the points in the ``skeleton".
One of the major tools used in this research was the renewal
technique developed in \cite{acc}, \cite{ccc}, \cite{ioffe}, \cite{cc}
and \cite{ioffe1} as part of the derivation of the Ornstein-Zernike estimate
for the subcritical bond percolation model and ``similar" processes.
A major result related to the study is that for
$\mathbf{\vec{a}}=(1,0,...,0)$,
the hitting distribution of the cluster in the
intermediate planes $x_1 =tn$, $0<t<1$, obeys a
multidimensional local limit theorem (see \cite{ccc}). Dealing with all other
$\mathbf{\vec{a}} \not= (k,0,...,0)$ became possible only after the
corresponding technique, which further refines the regeneration structures and equi-decay profiles, was developed in \cite{ioffe} and \cite{ioffe1}.
This technique played a central role in obtaining the research results.
\subsection{Asymptotic Convergence.}\label{intro:2}
Here we state a version of a local CLT and a technical result that we
later prove.\\
\textbf{Local Limit Theorem:} In this paper we are going to use the version of
the local CLT borrowed from \cite{durrett}:
Let $X_1, X_2,... \in \mathbb{R}$ be i.i.d. with $\bold{E} X_i =0$,
$\bold{E} X_i^2 = \sigma ^2 \in (0, \infty)$, and having
a common lattice distribution with span $h$. If
$S_n = X_1 + ... + X_n$ and $P[X_i \in b + h \mathbb{Z}] = 1$
then $P[S_n \in nb + h \mathbb{Z}] = 1$. We put
$$p_n(x) = P[S_n / \sqrt{n} =x] \mbox{ for } x \in \Lambda_n
= \{ (nb + hz) / \sqrt{n} \mbox{ : } z \in \mathbb{Z} \} $$
and
$$n(x)= (2 \pi \sigma^2)^{-1/2} \exp(-x^2/2\sigma^2) \mbox{ for }
x \in (-\infty, \infty)$$
\begin{CLT}
Under the above hypotheses,
$\sup_{x \in \Lambda_n} |{\frac{\sqrt{n}}{h} p_n(x) - n(x)}|
\rightarrow 0$ as $n \rightarrow \infty$.
\end{CLT}
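
As an illustrative aside (the step distribution, uniform on $\{-1,0,1\}$, and the value of $n$ are our own choices), the following Python sketch computes the exact law of $S_n$ by convolution and evaluates the quantity $\sup_{x \in \Lambda_n} |\frac{\sqrt{n}}{h} p_n(x) - n(x)|$ appearing in the statement above, which should be small for large $n$.
\begin{verbatim}
import numpy as np

# Step law: uniform on {-1, 0, 1}; mean 0, variance 2/3, span h = 1.
probs, sigma2 = np.full(3, 1.0 / 3.0), 2.0 / 3.0

def pmf_of_sum(n):
    """Exact pmf of S_n = X_1 + ... + X_n by repeated convolution."""
    pmf = np.array([1.0])                        # law of S_0 = 0
    for _ in range(n):
        new = np.zeros(len(pmf) + 2)
        for shift, pr in zip((0, 1, 2), probs):  # adding a step -1, 0 or +1
            new[shift:shift + len(pmf)] += pr * pmf
        pmf = new
    return pmf                                   # index k <-> S_n = k - n

n = 400
pmf = pmf_of_sum(n)
x = np.arange(-n, n + 1) / np.sqrt(n)            # the lattice Lambda_n
normal_density = np.exp(-x**2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)
# Local CLT: sup_x |sqrt(n)/h * P[S_n = x*sqrt(n)] - n(x)| -> 0 (here h = 1).
print(np.max(np.abs(np.sqrt(n) * pmf - normal_density)))
\end{verbatim}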
\textbf{Technical Result Concerning Convergence to the Brownian Bridge}
(to be used in Section 2.4 and proved in Section 3.2): The following technical result will be proved in Section \ref{bb} of this paper; however, since it is used in Section \ref{percol}, we state it below as part of the introduction.
Let $X_1, X_2,...$ be i.i.d. random variables on $\mathbb{Z}^d$
with the span of the lattice distribution equal to one (see \cite{durrett}, section 2.5), and let there be a
$\bar{\lambda} > 0$ such that the moment-generating function
$$\bold{E}(e^{\theta \cdot X_1}) <\infty$$ for all $\theta \in B_{\bar{\lambda}}$. \\
Now, for a given vector $\mathbf{\vec{a}} \in \mathbb{Z}^d$,
let $X_1+...+X_i =[t_i, Y_i]_f \in \mathbb{Z}^d$ when
written in the new orthonormal basis such that
$\mathbf{\vec{a}} = [\| \mathbf{\vec{a}} \|, 0]_f$ (in the new basis
$[\cdot , \cdot ]_f \in \mathbb{R} \times \mathbb{R}^{d-1}$).
Also let $P[\mathbf{\vec{a}} \cdot X_i>0]=1$.
We define the process $[t, Y_{n,k}^*(t)]_f$ to be the interpolation of the points $[\frac{1}{n \| \mathbf{\vec{a}} \|}t_i, \frac{1}{\sqrt{n}}Y_i]_f$, $i=0,1,...,k$. In Section \ref{bb:gen} we will show that
\begin{TECHthm} The process $$\{ Y^*_{n,k}\mbox{ for some } k \mbox{ such that } [t_k, Y_k]_f =n \mathbf{\vec{a}} \}$$ conditioned on the existence of such $k$ converges weakly to the
Brownian Bridge (of variance that depends only on the law of $X_1$). \end{TECHthm}
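
As an illustration of this statement (the step law, the value of $n$ and the sample size below are our own toy choices satisfying the stated assumptions), the following Python sketch rejection-samples walks in $\mathbb{Z}^2$ with $\mathbf{\vec{a}}=(1,0)$, whose steps have first coordinate in $\{1,2\}$ and second coordinate in $\{-1,0,1\}$, keeps the realizations for which $S_k=n\mathbf{\vec{a}}$ for some $k$, and builds the scaled skeleton $[t,Y^*_{n,k}(t)]_f$; its empirical variances at a few times should be roughly proportional to $t(1-t)$, as for a Brownian Bridge.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n = 90                                       # target point n*a with a = (1, 0)

def sample_skeleton():
    """Run the walk until its first coordinate reaches n;
    accept the realization only if it hits (n, 0) exactly."""
    t = y = 0
    ts, ys = [0], [0]
    while t < n:
        t += rng.integers(1, 3)              # first coordinate: 1 or 2
        y += rng.integers(-1, 2)             # second coordinate: -1, 0 or 1
        ts.append(t); ys.append(y)
    if t == n and y == 0:
        return np.array(ts) / n, np.array(ys) / np.sqrt(n)
    return None

paths = []
while len(paths) < 400:
    out = sample_skeleton()
    if out is not None:
        paths.append(out)

# The scaled skeleton is pinned at (0,0) and (1,0); compare the shape of its
# variance profile with t(1-t) (up to a model-dependent constant).
grid = np.array([0.25, 0.5, 0.75])
vals = np.array([np.interp(grid, ts, ys) for ts, ys in paths])
print(vals.var(axis=0), grid * (1 - grid))
\end{verbatim}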
\section{The Main Result in Subcritical Percolation.} \label{percol}
In this section we work only with subcritical percolation probabilities $p<p_c$.
\subsection{Preliminaries.}
Here we briefly go over the definitions that one can find in Section 4 of \cite{ioffe}. \\ We start with the inverse correlation length $\xi_p(\vec{x})$:
$$\xi_p(\vec{x}) \equiv -\lim_{n \rightarrow \infty} \frac{1}{n} \log P_p (0 \leftrightarrow [n\vec{x}]),$$ where the limit is always defined due to the FKG property of the Bernoulli bond percolation (see \cite{grimmett}).
Now, $\xi_p(\vec{x})$ is the support function of the compact convex set $$ \bold{K}^p \equiv \bigcap_{\vec{n} \in \mathbb{S}^{d-1}}
\lbrace \vec{r} \in \mathbb{R}^d \mbox{ : }
\vec{r} \cdot \vec{n} \leq \xi_p(\vec{n}) \rbrace ,$$ with non-empty interior int\{$\bold{K}^p$\} containing point zero. \\ Let $\mathbf{\vec{r}} \in \partial \bold{K}^p$, and let $\vec{e}$ be a basis vector such that $\vec{e} \cdot \mathbf{\vec{r}}$ is maximal. For $\vec{x},\vec{y} \in \mathbb{Z}^d$ define $$S^r_{\vec{x},\vec{y}} \equiv \lbrace \vec{z} \in \mathbb{R}^d \vert
\mathbf{\vec{r}} \cdot \vec{x} \leq \mathbf{\vec{r}} \cdot \vec{z}
\leq \mathbf{\vec{r}} \cdot \vec{y} \rbrace .$$ Note that $S^r_{\vec{x},\vec{y}} = \emptyset$ if
$ \mathbf{\vec{r}} \cdot \vec{y} < \mathbf{\vec{r}} \cdot \vec{x} $. \\ Let $\bold{C}^r_{\vec{x},\vec{y}}$ denote the corresponding common open cluster of $x$ and $y$ when we run the percolation process on $S^r_{\vec{x},\vec{y}}$.
\begin{Def} For $\vec{x},\vec{y} \in \mathbb{Z}^d$ let us define $h_r$-connectivity
$\lbrace \vec{x} \leftarrow^{h_r} \rightarrow \vec{y} \rbrace$
of $\vec{x}$ and $\vec{y}$ to be the event that\\ 1. $\vec{x}$ and $\vec{y}$ are connected in the restriction of the percolation configuration to the slab $S^r_{\vec{x},\vec{y}}$. \\ 2. If $\vec{x} \not= \vec{y}$, then
$\bold{C}^r_{\vec{x},\vec{y}} \bigcap S^r_{\vec{x},\vec{x}+\vec{e}}
= \lbrace \vec{x},\vec{x}+\vec{e} \rbrace$ and
$\bold{C}^r_{\vec{x},\vec{y}} \bigcap S^r_{\vec{y}-\vec{e},\vec{y}}
= \lbrace \vec{y}-\vec{e},\vec{y} \rbrace$. \\ 3. If $\vec{x}=\vec{y}$, then all the edges adjacent to $\vec{x}$ and perpendicular to $\vec{e}$ are closed. \end{Def}
Set
$$h_r(\vec{x}) \equiv P_p \lbrack 0 \leftarrow^{h_r} \rightarrow \vec{x} \rbrack .$$ Notice that $h_r(0) = (1-p)^{2(d-1)}$.
\begin{Def} For $\vec{x},\vec{y} \in \mathbb{Z}^d$ let us define $f_r$-connectivity
$\lbrace \vec{x} \leftarrow^{f_r} \rightarrow \vec{y} \rbrace$
of $\vec{x}$ and $\vec{y}$ to be the event that\\
1. $\vec{x} \not= \vec{y}$\\
2. $\vec{x} \leftarrow^{h_r} \rightarrow \vec{y}$ .\\
3. For no $\vec{z} \in \mathbb{Z}^d \setminus \lbrace \vec{x},\vec{y} \rbrace $ both $ \lbrace \vec{x} \leftarrow^{h_r} \rightarrow \vec{z} \rbrace \mbox{ and }
\lbrace \vec{z} \leftarrow^{h_r} \rightarrow \vec{y} \rbrace$ take place. \end{Def}
Set
$$f_r(\vec{x}) \equiv P_p \lbrack 0 \leftarrow^{f_r} \rightarrow \vec{x} \rbrack .$$ Notice that $f_r(0)=0$.
\begin{Def} Suppose $0 \leftarrow^{h_r} \rightarrow \vec{x}$. We say that $\vec{z} \in \mathbb{Z}^d$ is \textbf{a regeneration point}
of $\bold{C}_{0,\vec{x}}^r$ if \\
1. $\mathbf{\vec{r}} \cdot \vec{e} \leq \mathbf{\vec{r}} \cdot \vec{z}
\leq \mathbf{\vec{r}} \cdot (\vec{x}-\vec{e}) $ \\
2. $S_{\vec{z}-\vec{e},\vec{z}+\vec{e}}^r \bigcap \bold{C}_{0,\vec{x}}^r$
contains exactly three points: $\vec{z}-\vec{e}$, $\vec{z}$ and
$\vec{z}+\vec{e}$, where $\vec{e}$ is defined as before.\\ Let also $\vec{x}$ itself be a regeneration point. \end{Def}
The following Ornstein-Zernike equality will be used shortly: \begin{thm*}
There exists $A(\cdot, \cdot)$ on $(0,p_c) \times \mathbf{S}^{d-1}$ such that
\begin{eqnarray} \label{oz}
P_p[0 \leftrightarrow \vec{x}] = \frac{A(p, n(\vec{x}))}
{\|\vec{x}\|^{\frac{d-1}{2}} } e^{-\xi_p(\vec{x})} (1+o(1))
\end{eqnarray}
uniformly in $\vec{x} \in \mathbb{Z}^d$,
where $n(\vec{x}) \equiv \frac{\vec{x}}{\|\vec{x}\|}$. \end{thm*} We refer to \cite{ioffe} for the proof of the theorem.
\subsection{Measure $Q_{r_0}^r(x)$.}
It was proved in Section 4 of \cite{ioffe} that for a given $\mathbf{\vec{r}}_0 \in \partial \bold{K}^p$ there exists $\bar{\lambda} >0$ such that
$$F_{r_0}(\mathbf{\vec{r}})=
\frac{1}{(1-p)^{2(d-1)}} \sum_{x \in \mathbb{Z}^d} f_{\mathbf{\vec{r}}_0}(x)
e^{\mathbf{\vec{r}} \cdot \vec{x}} =1 \mbox{ whenever }
\mathbf{\vec{r}} \in B_{\bar{\lambda}}(\mathbf{\vec{r}}_0)
\bigcap \partial \bold{K}^p$$ and therefore $$Q_{r_0}^r(\vec{x}) \equiv \frac{1}{(1-p)^{2(d-1)}}f_{r_0}(\vec{x})
e^{\mathbf{\vec{r}} \cdot \vec{x}}
\mbox{ is a measure on } \mathbb{Z}^d .$$
Also, it was shown that $$\mu=\mu_{r_0}(\mathbf{\vec{r}}) \equiv \bold{E}_{r_0}^rX =
\sum_{\vec{x} \in \mathbb{Z}^d}\vec{x}Q_{r_0}^r(\vec{x})
= \nabla_r \log F_{r_0}(\mathbf{\vec{r}}) \not= 0$$ and
$$F_{r_0}(\mathbf{\vec{r}}) < \infty \mbox{ for all } \mathbf{\vec{r}} \mbox{ in }
B_{\bar{\lambda}}(\mathbf{\vec{r}}_0).$$
The latter implies $$F_{r_0}(\mathbf{\vec{r}}) =\sum_{\vec{x} \in \mathbb{Z}^d}f_{r_0}(\vec{x})
e^{\mathbf{\vec{r}} \cdot \vec{x}}
= \sum_{\vec{x} \in \mathbb{Z}^d}Q_{r_0}^{r_0}(\vec{x}) e^{\theta \cdot \vec{x}}
< \infty$$
for $\theta = \mathbf{\vec{r}}-\mathbf{\vec{r}}_0 \in B_{\bar{\lambda}}(0)$,\\ i.e. the moment generating function
$\bold{E}_{r_0}^{r_0} (e^{\theta \cdot X_1})$
of the law $Q_{r_0}^{r_0}$ is finite for all
$\theta \in B_{\bar{\lambda}}(0)$.\\
Now, there is a renewal relation (see section 1 and section 4 of \cite{ioffe}),
$$ h_{r_0}(\vec{x})=\frac{1}{(1-p)^{2(d-1)}}
\sum_{\vec{z} \in \mathbb{Z}^d} f_{r_0}(\vec{z})h_{r_0}(\vec{x}-\vec{z})
\mbox{ with } h_{r_0}(0)=(1-p)^{2(d-1)}$$ and therefore $$h_{r_0}([N \mu])=(1-p)^{2(d-1)}e^{-r \cdot [N \mu]}
\sum_{k} \bigotimes_1^k Q_{r_0}^r(X_1+...+X_k=[N \mu])
\mbox{ for } N>0 ,$$
where $X_1,X_2,...$ is a sequence of i.i.d. random variables distributed according to $Q_{r_0}^r$, as $h_{r_0}$-connection is a chain of $f_{r_0}$-connections with junctions at the regeneration points of $\bold{C}_{0,x}^{r_0}$. \\
\subsection{Important Observation.}
The probability that $0 \leftarrow^{h_{r_0}} \rightarrow x$ with exactly $k$ regeneration points $x_1, x_1+x_2, \ldots , \sum_{i=1}^k x_i =x$ is given by \begin{eqnarray} \label{imp}
P_X & \equiv &
P[0 \leftarrow^{h_{r_0}} \rightarrow x \mbox{ ; regeneration points: }
x_1, x_1+x_2, ... , \sum_{i=1}^k x_i =x] \nonumber \\
& = & \frac{1}{(1-p)^{2(d-1)(k-1)}}
P[0 \leftarrow^{f_{r_0}} \rightarrow x_1]
P[x_1 \leftarrow^{f_{r_0}} \rightarrow x_1+x_2]...
P[\sum_{i=1}^{k-1} x_i
\leftarrow^{f_{r_0}} \rightarrow \sum_{i=1}^k x_i =x] \nonumber \\
& = & \frac{1}{(1-p)^{2(d-1)(k-1)}} f_{r_0}(x_1)f_{r_0}(x_2)...f_{r_0}(x_k). \end{eqnarray}
\subsection{The Result.}
In this section we fix $\bold{\vec{a}} \in \mathbb{Z}^d$, and let
$\bold{r}=\bold{r}_0= \bold{\vec{a}} \mathbb{R}^+\bigcap \partial \bold{K}^p$. Then we recall that
$$\bold{E}_{r_0}^r (e^{\theta \cdot X_1}) < \infty$$
for all $\theta \in B_{\bar{\lambda}}(0)$.
We also denote $h(x) \equiv h_{r_0}(x)$ and $f(x) \equiv f_{r_0}(x)$.
\\ First, we introduce a new basis $\{ \vec{f_1},\vec{f_2},..., \vec{f_d} \}$, where
$\vec{f_1} = \frac{\bold{\vec{a}}}{\| \bold{\vec{a}} \|}$. We use
$[\cdot,\cdot]_f \in \mathbb{R} \times \mathbb{R}^{d-1}$ to denote
the coordinates of a vector with respect to the new basis.
Obviously $\mathbf{\vec{a}}=[\|\mathbf{\vec{a}} \|, 0]_f$.
We want to prove that the process corresponding to the last $d-1$ coordinates in the new basis of the scaled
($\frac{1}{n \| \bold{\vec{a}} \|}$ times along $\bold{\vec{a}}$
and $\frac{1}{\sqrt{n}}$ times in the orthogonal d-1
dimensions) interpolation of regeneration points
of $\bold{C}_{0, n \bold{\vec{a}}}^{r_0}$ conditioned on
${0 \leftarrow^{h}\rightarrow n \bold{\vec{a}}}$
converges weakly to the Brownian Bridge $B^o(t)$ (with variance that depends
only on measure $Q_{r_0}^r$) where $t$ represents the scaled first coordinate
in the new basis.\\
Let $X_1, X_2,...$ be i.i.d. random variables distributed according to $Q_{r_0}^r$ law.
We interpolate $0,X_1,(X_1+X_2),...,(X_1+...+X_k)$ and scale by
$\frac{1}{n\|\mathbf{\vec{a}} \|} \times \frac{1}{\sqrt{n}}$ along $<\mathbf{\vec{a}}> \times <\mathbf{\vec{a}}>^{\bot}$ to get the process $[t, Y^*_{n,k}(t)]_f$. The technical theorem (see Sections \ref{intro:2} and \ref{bb:gen}) implies the following
\begin{thm} The process $$\{ Y^*_{n,k}\mbox{ for some } k \mbox{ such that }
X_1+...+X_k= n \bold{\vec{a}} \}$$ conditioned on the existence of such $k$
converges weakly to the Brownian Bridge (with variance that depends only on measure $Q_{r_0}^r$). \end{thm}
Now, for $y_1,...,y_k \in \mathbb{Z}^d$ with positive increasing first coordinates, let $\gamma (y_1,...,y_k)$ be the last $(d-1)$ coordinates in the new basis
of the scaled ($\frac{1}{n \|\mathbf{\vec{a}}\|} \times \frac{1}{\sqrt{n}}$) interpolation of points $0,y_1,...,y_k$ (where the first coordinate is time). Notice that $\gamma (y_1,...,y_k) \in C_o[0,1]^{d-1}$
as a function of the scaled first coordinate
whenever $y_k= n \bold{\vec{a}}$. \\
By the important observation (\ref{imp}) we've made before,
for any function $F(\cdot )$ on $C[0,1]^{d-1}$,\\ \\ $\sum_k \sum_{x_1+...+x_k= n \bold{\vec{a}}}
F(\gamma (x_1, x_1+x_2, ... , \sum_{i=1}^k x_i))$ $$ \times
P[0 \leftarrow^{h_{r_0}} \rightarrow x
\mbox{ ; regeneration points: }
x_1, x_1+x_2, ... , \sum_{i=1}^k x_i =x]$$
$$=\sum_k \sum_{x_1+...+x_k= n \bold{\vec{a}}}
F(\gamma (x_1, x_1+x_2, ... , \sum_{i=1}^k x_i))
\frac{1}{(1-p)^{2(d-1)(k-1)}} f(x_1)...f(x_k)$$
$$= (1-p)^{2(d-1)} e^{-r \cdot n \bold{\vec{a}}}
\sum_k \sum_{x_1+...+x_k= n \bold{\vec{a}}}
F(\gamma (x_1, x_1+x_2, ... , \sum_{i=1}^k x_i))
Q_{r_0}^r(x_1)...Q_{r_0}^r(x_k).$$ \\ Therefore, for any $A \subset C[0,1]^{d-1}$\\ \\
$P_p[ \gamma (\mbox{regeneration points}) \in A \mbox{ } | \mbox{ }
0 \leftarrow^{h}\rightarrow n \bold{\vec{a}} ]$
$$ =\frac{ \sum_k \sum_{x_1+...+x_k= n \bold{\vec{a}}}
I_A (\gamma (x_1, x_1+x_2, ... , \sum_{i=1}^k x_i))
\frac{1}{(1-p)^{2(d-1)(k-1)}} f(x_1)...f(x_k) }
{ \sum_k \sum_{x_1+...+x_k= n \bold{\vec{a}}}
\frac{1}{(1-p)^{2(d-1)(k-1)}} f(x_1)...f(x_k) }$$
$$ =\frac{\sum_k \sum_{x_1+...+x_k= n \bold{\vec{a}}}
I_A (\gamma (x_1, x_1+x_2, ... , \sum_{i=1}^k x_i))
Q_{r_0}^r(x_1)...Q_{r_0}^r(x_k)}
{\sum_k \sum_{x_1+...+x_k= n \bold{\vec{a}}}
Q_{r_0}^r(x_1)...Q_{r_0}^r(x_k)}$$ $$= P[Y^*_{n,k} \in A \mbox{ for the } k \mbox{ such that }
X_1+...+X_k= n \bold{\vec{a}} \mbox{ } | \mbox{ }
\exists k \mbox{ such that }
X_1+...+X_k= n \bold{\vec{a}}] .$$
Hence, we have proved the following
\begin{cor}
The process corresponding to the last $d-1$ coordinates (in the new basis
$\{ \vec{f_1},\vec{f_2},...,\vec{f_d} \}$) of the scaled
$({\frac{1}{n \|\mathbf{\vec{a}} \|} \times \frac{1}{\sqrt{n}} })$ interpolation of regeneration points
of $\bold{C}_{0,n \bold{\vec{a}}}^{r_0}$ (where the first coordinate
is time) conditioned on ${0 \leftarrow^{h}\rightarrow n \bold{\vec{a}}}$
converges weakly to the Brownian Bridge (with variance that depends only on measure $Q_{r_0}^r$). \end{cor}
\subsection{Shrinking of the Cluster. Main Theorem.}
Here for $\bold{\vec{a}} \in \mathbb{Z}^d$ we let
$\bold{r}_0= \bold{\vec{a}} \mathbb{R}^+\bigcap \partial \bold{K}^p$ again. Before we proceed with the proof that the scaled percolation cluster $\bold{C}_{0,n \bold{\vec{a}}}^{r_0}$ shrinks to the scaled interpolation skeleton of regeneration points, we need to prove the following \begin{prop} If $\mathbf{\vec{r}} = \nabla \xi_p(\mathbf{\vec{r}}_0)$ then $Q_{r_0}^r$ is a probability measure. \end{prop}
\begin{proof}
First we notice that $\mathbf{\vec{r}}_0 \cdot \mathbf{\vec{r}}
= \mathbf{\vec{r}}_0 \cdot \nabla \xi_p(\mathbf{\vec{r}}_0)
= D_{\mathbf{\vec{r}}_0}(\xi_p(\mathbf{\vec{r}}_0)) =
\xi_p(\mathbf{\vec{r}}_0)$, and thus $$H_{r_0}(\mathbf{\vec{r}}) \equiv
\frac{1}{(1-p)^{2(d-1)}} \sum_{\vec{x} \in \mathbb{Z}^d}
h_{r_0}(x)e^{\mathbf{\vec{r}} \cdot \vec{x}} \geq
\sum_{\vec{x} \in <\mathbf{\vec{a}}> \cap \mathbb{Z}^d}
h_{r_0}(x)e^{\mathbf{\vec{r}} \cdot \vec{x}}
= \sum_{\vec{x} \in <\mathbf{\vec{a}}> \cap \mathbb{Z}^d}
h_{r_0}(x)e^{\xi_p(\vec{x})}
= +\infty$$ for $d \leq 3$ by Ornstein-Zernike equation (\ref{oz}). For all other $d$ we sum over all $\vec{x}$ inside a small enough cone around $\mathbf{\vec{a}}$ to get $H_{r_0}(\mathbf{\vec{r}}) = +\infty$.\\
Now, for all $\vec{n} \in \mathbb{S}^{d-1}$, $\vec{n} \cdot
\nabla \xi_p(\mathbf{\vec{r}}_0) = D_{\vec{n}} \xi_p(\mathbf{\vec{r}}_0)
\leq \xi_p(\vec{n})$ by convexity of $\xi_p$, and therefore
$\mathbf{\vec{r}} = \nabla \xi_p(\mathbf{\vec{r}}_0) \in
\partial \mathbf{K}^p$. Notice that due to the strict convexity of
$\xi_p$ and the way $\mathbf{K}^p$ was defined,
$\mathbf{\vec{r}} = \nabla \xi_p(\mathbf{\vec{r}}_0)$ is the
only point on $\partial \mathbf{K}^p$ such that
$\mathbf{\vec{r}}_0 \cdot \mathbf{\vec{r}} = \xi_p(\mathbf{\vec{r}}_0)$.\\
Now, Ornstein-Zernike equation (\ref{oz}) also implies that the sums
$H_{r_0}(\tilde{r})$ and $F_{r_0}(\tilde{r})$
are finite whenever
$\tilde{r} \in \alpha \bold{K}^p =
\bigcap_{\vec{n} \in \mathbb{S}^{d-1}}
\lbrace \vec{r} \in \mathbb{R}^d \mbox{ : }
\vec{r} \cdot \vec{n} \leq \alpha \xi_p(\vec{n}) \rbrace $
with $\alpha \in (0,1)$, and
due to the recurrence relation of $f_{r_0}$ and $h_{r_0}$
connectivity functions,
$H_{r_0}(\tilde{r}) = \frac{1}{1 -F_{r_0}(\tilde{r})}$
(see \cite{ioffe}). Therefore
$F_{r_0}(\mathbf{\vec{r}}) \equiv
\frac{1}{(1-p)^{2(d-1)}} \sum_{\vec{x} \in \mathbb{Z}^d}
f_{r_0}(x)e^{\mathbf{\vec{r}} \cdot \vec{x}} = 1$,
where the probability measure $Q_{r_0}^r$ has an
exponentially decaying tail due to the same reasoning as in Section 4 of \cite{ioffe} (``mass-gap'' property).
\end{proof}
With the help of the proposition above we shall show that consecutive regeneration points are situated relatively close to each other: \begin{lem*}
$$P_p[\max_{i} |x_i - x_{i-1}|>n^{1/3},
\mbox{ } x_i \mbox{- reg. points }
| \mbox{ } 0 \leftarrow^h \rightarrow n \bold{\vec{a}} ]
<\frac{1}{n}$$ for $n$ large enough. \end{lem*}
\begin{proof}
Let $\mathbf{\vec{r}} \equiv \nabla \xi_p(\mathbf{\vec{r}}_0)
= \nabla \xi_p(\mathbf{\vec{a}})$.
Since $\xi_p(x)$ is strictly convex
(see section 4 in \cite{ioffe}), $$ \frac{ \xi_p(\bold{\vec{a}}) - \xi_p(\bold{\vec{a}} -\frac{\vec{x}}{n})}
{(\frac{\|\vec{x}\|}{n})}
< \frac{\vec{x}}{\|\vec{x}\|} \cdot \nabla \xi_p(\bold{\vec{a}}) $$ for $\vec{x} \in \mathbb{Z}^d$ ($\vec{x} \not= 0$), and therefore $$\xi_p(n \bold{\vec{a}})- \xi_p(n \bold{\vec{a}}-\vec{x})
= \|\vec{x}\| \frac{ \xi_p(\bold{\vec{a}}) - \xi_p(\bold{\vec{a}}
-\frac{\vec{x}}{n})}{(\frac{\|\vec{x}\|}{n})}
< \vec{x} \cdot \nabla \xi_p(\bold{\vec{a}})
= \mathbf{\vec{r}} \cdot \vec{x}.$$ Thus, since $Q_{r_0}^r(x)$ decays exponentially and $$\frac{f(x)}{(1-p)^{2(d-1)}} e^{\xi_p(n \bold{\vec{a}})- \xi_p(n \bold{\vec{a}}-x)} < Q_{r_0}^r(x),$$ the left-hand side also decays exponentially. Hence, by the Ornstein-Zernike result (\ref{oz}),
$$P_p[ n^{1/3} <
|x| , \mbox{ } x \mbox{-first reg. point }
| 0 \leftarrow^h \rightarrow n \bold{\vec{a}} ]
=\sum_{ n^{1/3} < |x| }
\frac{f(x)}{(1-p)^{2(d-1)}} \frac{h(n \bold{\vec{a}}-x)}
{h(n \bold{\vec{a}})}
< \frac{1}{n^2} $$ for $n$ large enough.
So, since the number of the regeneration points is no greater
than $n$,
$$P_p[\max_{i} |x_i - x_{i-1}|>n^{1/3},
\mbox{ } x_i \mbox{- reg. points }
| \mbox{ } 0 \leftarrow^h \rightarrow n \bold{\vec{a}} ]
<\frac{1}{n}$$ for $n$ large enough.
\end{proof}
Now, it is straightforward to check that there is a constant $\lambda_f >0$ such that
$$f(\vec{x}) > e^{-\lambda_f \|\vec{x}\|}$$ for all $\vec{x}$ such that $f(\vec{x}) \not= 0$ (here we only need to connect points $\vec{e}$ and $\vec{x} - \vec{e}$ with two non-intersecting open paths surrounded by the closed edges), and there exists a constant $\lambda_u >0$ such that
$$P_p[\mbox{ percolation cluster } \bold{C}(0)
\not\subset [\mathbb{R}; B_{R}^{d-1}(0)]_f ] < e^{-\lambda_u R} $$ for $R$ large enough due to the exponential decay of the radius distribution for subcritical probabilities (see \cite{grimmett}). Hence, for a given $\epsilon >0$ $$P_p[\mbox{ cluster } \bold{C}_{0,\vec{x}}^{r_0}
\not\subset [\mathbb{R}, B_{\epsilon \sqrt{n}}^{d-1}(0)]_f
\mbox{ } | \mbox{ } 0 \leftarrow^f \rightarrow x]
< e^{ \lambda_f \|\vec{x}\| -\lambda_u \epsilon \sqrt{n}}, $$ and therefore, summing over the regeneration points, we get
$$P_p[\mbox{ scaled cluster } \bold{C}_{0, n \bold{\vec{a}} }^{r_0}
\not\subset \epsilon \mbox{-neighbd. of }
[0,1] \times \gamma (\mbox{ reg. points })
\mbox{ } | \mbox{ } 0 \leftarrow^h \rightarrow n \bold{\vec{a}} ]$$
$$ < \frac{1}{n} +
n e^{ \lambda_f n^{1/3} -\lambda_u \epsilon \sqrt{n}} $$
for $n$ large enough.\\
We can now state the main result of this paper: \begin{Mthm}
The process corresponding to the last $d-1$ coordinates (in the new basis
$\{ \vec{f_1},\vec{f_2},...,\vec{f_d} \}$) of the scaled
$({\frac{1}{n \|\mathbf{\vec{a}} \|} \times \frac{1}{\sqrt{n}} })$ interpolation of regeneration points
of $\bold{C}_{0,n \bold{\vec{a}}}^{r_0}$ (where the first coordinate
is time) conditioned on ${0 \leftarrow^{h}\rightarrow n \bold{\vec{a}}}$
converges weakly to the Brownian Bridge (with variance that depends only on measure $Q_{r_0}^r$). \\
Also for a given $\epsilon >0$ $$P_p[\mbox{ scaled cluster } \bold{C}_{0, n \bold{\vec{a}} }^{r_0}
\not\subset \epsilon \mbox{-neighbd. of }
[0,1] \times \gamma (\mbox{ reg. points })
\mbox{ } | \mbox{ } 0 \leftarrow^h \rightarrow n \bold{\vec{a}} ]
\rightarrow 0$$ as $n \rightarrow \infty$. \end{Mthm}
\section{Convergence to Brownian Bridge.} \label{bb}
As mentioned in the introduction, this section is entirely dedicated to proving the Technical Theorem that we have already used in the proof of the main result.
\subsection{Simple Case.}
Let $Z_1, Z_2,...$ be i.i.d. random variables on $\mathbb{Z}$ with the span of the lattice distribution equal to one (see \cite{durrett}, section 2.5) and mean $\mu = \bold{E}Z_1 <\infty$, $\sigma^2=Var(Z_1)<\infty$. Also let the point zero be inside the closed convex hull of $\{ z \mbox{ : } P[Z_1 = z]>0 \}$. \\
Consider a walk $X_j$ on the one-dimensional lattice that starts at $X_0=0$ and, given $X_j$, takes the $(j+1)$-st step $X_{j+1}=X_j + Z_{j+1}$. After interpolation we get $$X(t)=X_{[t]}+(t-[t])(X_{[t]+1}-X_{[t]})$$ for $0\leq{t}<\infty$.\\ Also define $\bar{X}(t) = (t,X(t))$ to be a two dimensional walk.\\
Now, if for a given integer $n>0$ we define $X_n(t)\equiv{\frac{X(nt)}{\sqrt{n}}}$ for $0\leq{t}\leq{1}$,
then $X_n(t)$ would belong to $C[0,1]$ and $X_n(0)=0$.
\begin{thm}\label{simpleT} $X_n(t)$ conditioned on $X_n(1)=0$ converges weakly to the Brownian Bridge. \end{thm}
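
As an illustrative aside (the step law and all parameters below are arbitrary choices of ours), the following Python sketch rejection-samples walks conditioned on $X_n(1)=0$ and compares the empirical covariance of $X_n(s)$ and $X_n(t)$ with $\sigma^2 s(1-t)$, the covariance one expects in the limit.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, wanted = 200, 1000
paths = []
while len(paths) < wanted:                  # rejection sampling of {S_n = 0}
    steps = rng.choice([-1, 0, 1], size=n)  # mean 0, variance 2/3, span 1
    if steps.sum() == 0:
        paths.append(np.concatenate(([0], np.cumsum(steps))))
X = np.array(paths) / np.sqrt(n)            # X_n(t) at the grid points t = i/n

s_idx, t_idx = n // 4, 3 * n // 4           # s = 0.25, t = 0.75
emp = np.cov(X[:, s_idx], X[:, t_idx])[0, 1]
print(emp, (2 / 3) * 0.25 * (1 - 0.75))     # compare with sigma^2 * s * (1-t)
\end{verbatim}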
First we need to prove the theorem when $\mu =0$. For this we need to prove that
\begin{lem} For $A_0 \subseteq {C[0,1]}$, let
$P_n(A_0)=P[X_n\in{A_0}|X_n(1)=0]$ be the law of $X_n$ conditioned on $X_n(1)=0$. Then \\
(a) For $\mu=0$, the finite-dimensional distributions of $P_n$ converge weakly to Gaussian distributions.\\
(b) There are positive $\{C_n\}_{n=1,2,...}\rightarrow{C}$ ($C =\sigma^2$ when $\mu=0$) such that $0<C<\infty$ and $$Cov_{P_n}(X_n(s),X_n(t)) = C_ns(1-t) + O(\frac{1}{n})$$ for all $0\leq{s}\leq{t}\leq{1}$. More precisely: $Cov_{P_n}(X_n(s),X_n(t))=C_ns(1-t)$ if $[ns]<[nt]$ and \\ $Cov_{P_n}(X_n(s),X_n(t))=C_ns(1-t) - C_n\frac{\epsilon_1 (1- \epsilon_2 )}{n}$ if $[ns]=[nt]$, where
$\epsilon_1 = ns-[ns] \in [0,1)$ and
$\epsilon_2 = nt-[nt] \in [0,1)$. \end{lem}
and we need
\begin{lem} For $\mu=0$, the probability measures $P_n$ induced on the subspace of $X_n(t)$ trajectories in $C[0,1]$ are tight. \end{lem}
\begin{proof}[Proof of Lemma 1:] (a) Though it is not difficult to show that a finite-dimensional distribution of $P_n$ converges weakly to a gaussian distribution, here we only show the convergence for one and two points on the interval (in case of one point $t \in [0,1]$, we show that the limit variance has to be equal to $t(1-t) \sigma^2$). Take $t \in \frac{1}{n}\mathbb{Z} \cap (0,1)$ and let $\alpha =\frac{k}{\sqrt{n}}$, then by the Local CLT,
\begin{eqnarray} \label{phi} P[X(tn)=k] = \frac{1}{\sqrt{n}} \Phi_{\sigma \sqrt{t}}(\alpha) + o(\frac{1}{\sqrt{n}}), \mbox{ where } \Phi_{v}(x) \equiv \frac{1}{v \sqrt{2\pi}}e^{-\frac{x^2}{2 v^2}}
\end{eqnarray} is the normal density function, and the error term is uniformly bounded by a $o(\frac{1}{\sqrt{n}})$ function independent of $k$. \\ Therefore, substituting (\ref{phi}), $$P_n[X_n(t)=\alpha] = \frac{(\frac{1}{\sqrt{n}} \Phi_{\sigma \sqrt{t}}(\alpha) + o(\frac{1}{\sqrt{n}}))
(\frac{1}{\sqrt{n}} \Phi_{\sigma \sqrt{1-t}}(\alpha)+ o(\frac{1}{\sqrt{n}}))}
{\frac{1}{\sqrt{n}} \Phi_{\sigma}(0) + o(\frac{1}{\sqrt{n}}) }
= \frac{1}{\sqrt{n}} \Phi_{\sigma \sqrt{t(1-t)}}(\alpha)
+ o(\frac{1}{\sqrt{n}}) .$$ Thus for a set $A$ in $\mathbb{R}$, $$P_n[X_n(t)\in{A}] = \sum_{k\in\sqrt{n}A}[\frac{1}{\sqrt{n}} \Phi_{\sigma \sqrt{t(1-t)}}(\alpha)+ o(\frac{1}{\sqrt{n}})] = N[0, t(1-t)\sigma^2](A) + o(1)$$
Here the limit variance is equal to $t(1-t)\sigma^2$. Given that $0< \sigma^2 <\infty$, the convergence follows. \\
The same method works for more than one point, here we do it for two: Let $\alpha_1 = \frac{k_1}{\sqrt{n}}$ and $\alpha_2 = \frac{k_2}{\sqrt{n}}$,
then as before, for $t_1<t_2$ in $\frac{1}{n}\mathbb{Z} \cap (0,1)$,
writing the conditional probability as a ratio of two probabilities,
and representing the probabilities according to (\ref{phi}),
we get
$$P_n[X_n(t_1)=\alpha_1 , X_n(t_2)=\alpha_2] =
\frac{\sqrt{|\mathcal{A}|}}{2\pi \sigma^2 }
\exp \left \{-\frac{(\alpha_1, \alpha_2)\mathcal{A} (\alpha_1, \alpha_2)^T} {2\sigma^2} \right \} +o(\frac{1}{n}) . $$ \\ \\ where $$\mathcal{A}={\left( \begin{array}{cc}
\frac{t_2}{(t_2-t_1)t_1} & -\frac{1}{t_2-t_1} \\
-\frac{1}{t_2-t_1} & \frac{1-t_1}{(t_2-t_1)(1-t_2)} \end{array} \right)}.$$ \\ Thus for sets $A_1$ and $A_2$ in $\mathbb{R}$,
\begin{eqnarray*} P_n[X_n(t_1) \in A_1 , X_n(t_2) \in A_2] & = &
\sum_{k_1 \in \sqrt{n}A_1, k_2 \in \sqrt{n}A_2}
[\frac{\sqrt{|\mathcal{A}|}}{2\pi \sigma^2 }
\exp \left \{ -\frac{(\alpha_1, \alpha_2) \mathcal{A} (\alpha_1, \alpha_2)^T}
{2\sigma^2} \right \}+o(\frac{1}{n})] \\ \\ & = & N[0, \mathcal{A}^{-1}](A_1 \times A_2) + o(1) \end{eqnarray*}
Observe that
$(\sigma^2 \mathcal{A}^{-1})={\left( \begin{array}{cc}
t_1(1-t_1)\sigma^2 & t_1(1-t_2)\sigma^2 \\
t_1(1-t_2)\sigma^2 & t_2(1-t_2)\sigma^2 \end{array} \right)}$ is the covariance matrix, and the part (b) of the lemma follows in case $\mu =0$.\\
(b) Though the estimate above produces the needed variance in case when the mean $\mu =0$ , in general, we need to apply the following approach: We first consider the case when $s<t$ and both $s,t \in \frac{1}{n}\mathbb{Z} \cap (0,1)$ where
$$\mathbf{E}[X_n(s) \mbox{ }|\mbox{ } X_n(t) =y]
= \mathbf{E}[Z_1 +...+ Z_{sn} |Z_1 +...+ Z_{tn} = y]
= \frac{s}{t} y ,$$ and therefore
\begin{eqnarray*} Cov_{P_n}(X_n(s),X_n(t)) & = &
\frac{s}{t} \mathbf{E}[X^2_n(t)|X_n(1)=0] \end{eqnarray*}
as $\{ -X_n(1-t) \mbox{ }|\mbox{ } X_n(1)=0 \}$ and $\{ X_n(t)
\mbox{ }|\mbox{ } X_n(1)=0 \}$ are identically distributed.\\
Now, by symmetry (time reversal),
$$Cov_{P_n}(X_n(s),X_n(t)) =Cov_{P_n}(X_n(1-t),X_n(1-s))
=\frac{1-t}{1-s} \mathbf{E}[X^2_n(s)|X_n(1)=0],$$
and therefore
$$\frac{\mathbf{E}[X^2_n(s)|X_n(1)=0]}
{\mathbf{E}[X^2_n(t)|X_n(1)=0]}=\frac{s(1-s)}{t(1-t)}.$$ Hence, there exists a constant $C_n$ such that for all $t \in \frac{1}{n}\mathbb{Z} \cap (0,1)$
$$\frac{\mathbf{E}[X^2_n(t)|X_n(1)=0]}{t(1-t)} \equiv C_n.$$ Thus we have shown that for $s \leq t$ in $\frac{1}{n}\mathbb{Z} \cap [0,1]$, $$Cov_{P_n}(X_n(s),X_n(t)) =
\frac{s}{t} \mathbf{E}[X^2_n(t)|X_n(1)=0]
= \frac{s}{t}C_n t(1-t)
= C_n s(1-t).$$
Now, consider the general case: $s=s_0 + \frac{\epsilon_1}{n} \leq t = t_0 + \frac{\epsilon_2}{n}$, where $ns_0, nt_0 \in \mathbb{Z}$ and $\epsilon_1, \epsilon_2 \in [0,1)$. Then the covariance
\begin{eqnarray*}
Cov_{P_n}(X_n(s),X_n(t))
& = & (1-\epsilon_1)(1-\epsilon_2)Cov_{P_n}(X_n(s_0),X_n(t_0))\\
& + & (1-\epsilon_1)\epsilon_2Cov_{P_n}(X_n(s_0),{X_n(t_0+\frac{1}{n})}) \\
& + &\epsilon_1(1-\epsilon_2)Cov_{P_n}(X_n(s_0+\frac{1}{n}),X_n(t_0))\\
& + &\epsilon_1\epsilon_2Cov_{P_n}(X_n(s_0+\frac{1}{n}),X_n(t_0+\frac{1}{n})) \end{eqnarray*} Therefore \begin{eqnarray*}
Cov_{P_n}(X_n(s),X_n(t))
& = & C_ns(1-t) \mbox{ when } s_0<t_0 \mbox{ (}[ns]<[nt]\mbox{),} \end{eqnarray*} and \begin{eqnarray*}
Cov_{P_n}(X_n(s),X_n(t))
& = & C_ns(1-t) - C_n\frac{\epsilon_1 (1- \epsilon_2 )}{n}
\mbox{ when } s_0=t_0 \mbox{ (}[ns]=[nt]\mbox{).} \end{eqnarray*} Now, plugging in $s=t=\frac{1}{2}$ we get
$$C_n =4\mathbf{E}[X^2_n(\frac{1}{2})|X_n(1)=0] \mbox{ when }n\mbox{ is even,}$$ and
$$C_n =4\mathbf{E}[X^2_n(\frac{1}{2})|X_n(1)=0] (\frac{n}{n-1})
\mbox{ when }n\mbox{ is odd.}$$ Therefore
$$C_n =4\mathbf{E}[X^2_n(\frac{1}{2})|X_n(1)=0]
(1+O(\frac{1}{n})) \rightarrow{C} =\sigma^2$$ as $\{ X_n(\frac{1}{2}), P_n \}$ converges in distribution as $n \rightarrow +\infty$.
\end{proof}
\begin{proof}[Proof of Lemma 2:]
Before we begin the proof of tightness, we notice that the only
real obstacle we face is that the process is conditioned on
$X_n=0$. The tightness for the case without the conditioning has been proved years
ago as part of Donsker's Theorem (see Chapter 10 in
\cite{bill}). With the help of the local CLT we are essentially
removing the difference between the two cases.
Fix $\lambda >0$ and let $m=[n \delta]$ for a given $0< \delta \leq 1$. Then
\begin{eqnarray*}
P_{\lambda} & \equiv & P[ \max_{0\leq i \leq m} X_i \geq
\lambda \sqrt{n} > X_m> -\lambda \sqrt{n}| X_n = 0] \\
& = & \sum_{a=-[\lambda \sqrt{n}]}^{[\lambda \sqrt{n}]}
\frac{P[\max_{0 \leq i \leq m}X_i > [\lambda \sqrt{n}]
\mbox{ ; } X_m =a
\mbox{ ; } X_n=0]}
{P[X_n = 0]}\\
& = &
\sum_{a=-[\lambda \sqrt{n}]}^{[\lambda \sqrt{n}]}
\frac{P[\max_{0 \leq i \leq m}X_i > [\lambda \sqrt{n}]
\mbox{ ; } X_m =a]
P[X_{n-m}=-a]}
{P[X_n = 0]}\\
& \leq &
\max_{-[\lambda \sqrt{n}] \leq a \leq [\lambda \sqrt{n}]}
(\frac{P[X_{n-m}=-a]}
{P[X_n = 0]})
\times
\sum_{a=-[\lambda \sqrt{n}]}^{[\lambda \sqrt{n}]}
P[\max_{0 \leq i \leq m}X_i > [\lambda \sqrt{n}]
\mbox{ ; } X_m =a]
\\
& \leq &
2P[\max_{0\leq i \leq m}X_i \geq \lambda \sqrt{n} \geq X_m \geq
-\lambda \sqrt{n} ] \end{eqnarray*} \\ for $n$ large enough, where by the local CLT,
$$ \max_{-[\lambda \sqrt{n}] \leq a \leq [\lambda \sqrt{n}]}
(\frac{P[X_{n-m}=-a]}
{P[X_n = 0]}) \leq 2 $$ for $n$ large enough as $n-m$ linearly depends on $n$. \\
Therefore, the probability
$$P[\max_{0\leq i \leq m} |X_i| \geq \lambda \sqrt{n} |X_n=0]
\leq 2P_{\lambda} + P[|X_m| \geq \lambda \sqrt{n} |X_n=0],$$
where
$$P_{\lambda}
\leq 2P[\max_{0\leq i \leq m} X_i \geq \lambda \sqrt{n}].$$
Now, due to the point-wise convergence, we can proceed as in Chapter 10 of \cite{bill} by bounding the two remaining probabilities:
$$P[\max_{0\leq i \leq m} |X_i| \geq \lambda \sqrt{n}]
\leq 2 P[|X_m| \geq \frac{1}{2} \lambda \sqrt{n} ] \rightarrow
2P[|\sqrt{\delta}N| \geq \frac{\lambda}{2 \sigma}] \leq \frac{16 \delta^{3/2} \sigma^3}{\lambda^3}
\mathbf{E}[|N|^3]$$ and similarly
$$P[|X_m| \geq \lambda \sqrt{n} | X_n=0] \rightarrow
P[|\sqrt{\delta(1-\delta)}N| \geq \frac{\lambda}{\sigma}]
\leq \frac{\delta^{3/2} \sigma^3}{\lambda^3} \mathbf{E}[|N|^3].$$
Thus, for all integer $k \in [0, n-m]$,
$$P[\max_{0\leq i \leq m} |X_{k+i} - X_k| \geq \lambda \sqrt{n}
|X_n=0] = P[\max_{0\leq i \leq m} |X_i| \geq \lambda \sqrt{n}
|X_n=0] \leq 70\frac{\delta^{3/2} \sigma^3}{\lambda^3}
\mathbf{E}[|N|^3]$$ for $n$ large enough, (see Chapter 10 in \cite{bill}). Therefore $\{ P_n \}$ are tight (see Chapter 8 of \cite{bill}).
\end{proof}
\begin{proof}[Proof of Theorem \ref{simpleT}:] The lemmas above imply the convergence when the mean $\mu =0$. Now, for $\mu \not= 0$, there exists a $\rho \in \mathbb{R}$ such that $\sum_{z \in \mathbb{Z}} z e^{\rho z} P[Z_1 =z] = 0$. Then we let $\hat{Z}_1,\hat{Z}_2,...$ be i.i.d. random variables with their distribution defined in the following fashion: $$P[\hat{Z}_j =z] \equiv \frac{e^{\rho z}}{C_{\rho}} P[Z_j =z]$$ for all $j$ and $z \in \mathbb{R}$, where $C_{\rho} \equiv \sum_{z \in \mathbb{Z}} P[\hat{Z}_1 =z]= \sum_{z \in \mathbb{Z}} e^{\rho z} P[Z_1 =z]$. Then the law of $Z_1,...,Z_n$ conditioned on $Z_1+...+Z_n=0$ is the same as that of $\hat{Z}_1,...,\hat{Z}_n$ conditioned on $\hat{Z}_1+...+\hat{Z}_n=0$, and the case is reduced to that of $\mu =0$ as $\bold{E} \hat{Z}_j =0$. We also estimate the covariance equal to $\hat{C} s(1-t)$ for all $0 \leq s \leq t \leq 1$, where as before
$$\hat{C} = \lim_{n \rightarrow +\infty} \bold{E}[Z_1^2 \mbox{ }| \mbox{ } Z_1 +...+ Z_n =0].$$ \end{proof}
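
As an aside, the exponential change of measure used in the proof above is easy to carry out numerically; the following Python sketch (the step law is an arbitrary choice of ours) finds the tilting parameter $\rho$ by solving $\sum_{z} z e^{\rho z} P[Z_1 =z] = 0$ with bisection.
\begin{verbatim}
import numpy as np

# A step law with nonzero mean (values and probabilities are toy choices).
values = np.array([-1.0, 0.0, 1.0])
probs = np.array([0.2, 0.3, 0.5])            # mean = 0.3

def tilted_mean(rho):
    w = probs * np.exp(rho * values)
    return float(np.dot(values, w) / w.sum())  # mean of the tilted law

# The tilted mean is increasing in rho and changes sign on [-10, 10],
# so bisection locates the root of sum_z z e^{rho z} P[Z=z] = 0.
lo, hi = -10.0, 10.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if tilted_mean(mid) < 0.0:
        lo = mid
    else:
        hi = mid
rho = 0.5 * (lo + hi)
print(rho, tilted_mean(rho))                 # the second number is ~0
\end{verbatim}
For this particular law one can also solve in closed form, $\rho=\tfrac{1}{2}\log(0.4)$, which the bisection reproduces.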
Observe that the result can be modified for $X_1, X_2,...$ defined on a multidimensional lattice $\mathbb{L} \subset \mathbb{R}^d$,$d>1$, if we condition on $X_n(1)=\bold{a}(n) = \bold{a+o(1)} \in \{ z\sqrt{n} \mbox{ : } z \in \bigoplus^n_1 \mathbb{L} \}$. We again let point zero be inside the closed convex hull of $\{ z \mbox{ : } P[Z_1 = z]>0 \}$. In this case the process $\tilde{X}_n(t) = X_n(t) + (\bold{a} - \bold{a}(n))t$ converges to the Brownian Bridge $B^{0, \bold{a}}$,
and convergence is uniform whenever
$\bold{a}(n)-\bold{a}$ converges to zero uniformly, thanks to the Local CLT.
\begin{thm} $\tilde{X}_n(t)$ conditioned on $X_n(1)=\bold{a}(n) = \bold{a}+o(1)$
converges weakly
to the Brownian Bridge. \end{thm}
Here, as before, if we take $t \in \frac{1}{n} \mathbb{Z} \cap [0,1]$ and let $\alpha =\frac{k}{\sqrt{n}}$, then
\begin{eqnarray*}
P[X_n(t)=\alpha \mbox{ }|\mbox{ } X_n(1)=\bold{a}(n)] & = &
\frac{(\frac{1}{\sqrt{n}} \Phi_{\sigma \sqrt{t}}(\alpha)
+ o(\frac{1}{\sqrt{n}}))
(\frac{1}{\sqrt{n}} \Phi_{\sigma \sqrt{1-t}}(\bold{a}(n) -\alpha)
+ o(\frac{1}{\sqrt{n}}))}
{\frac{1}{\sqrt{n}} \Phi_{\sigma}(\bold{a}(n)) + o(\frac{1}{\sqrt{n}})}
\\
& = & \frac{1}{\sqrt{n}} \Phi_{\sigma \sqrt{t(1-t)}}
(\alpha-\bold{a}(n)t) + o(\frac{1}{\sqrt{n}}) . \end{eqnarray*}
\subsection{General Case.}\label{bb:gen}
As before, for a given non-zero vector $\bold{\vec{a}} \in \mathbb{Z}^d$,
we let $X_1, X_2,...$ be i.i.d. random variables on $\mathbb{Z}^d$ with
the span of the lattice distribution equal to one (see \cite{durrett})
such that the probability $P[\bold{\vec{a}} \cdot X_1 >0] =1$,
the mean $\mu = \bold{E}X_1 <\infty$ and there is a constant
$\bar{\lambda} >0$ such that the moment-generating function
$$\bold{E}(e^{\theta \cdot X_1}) <\infty$$
for all $\theta \in B_{\bar{\lambda}}$. Also we let
$\bold{P_{\vec{a}}}$ denote the projection
map on $<\bold{\vec{a}}>$ and
$\bold{P^{\bot}_{\vec{a}}}$
denote the orthogonal projection on $<\bold{\vec{a}}>^{\bot}$. Now we can decompose
the mean $\mu = \mu_{a} \times \mu_{or}$, where
$\mu_{a} \equiv \bold{P_{\vec{a}}}\mu$ and
$\mu_{or} \equiv \bold{P^{\bot}_{\vec{a}}}\mu$.\\
As before we introduce a new basis $\{ \vec{f_1},\vec{f_2},..., \vec{f_d} \}$,
where $\vec{f_1} = \frac{\bold{\vec{a}}}{\| \bold{\vec{a}} \|}$. We again use
$[\cdot,\cdot]_f \in \mathbb{R} \times \mathbb{R}^{d-1}$ to denote
the coordinates of a vector with respect to the new basis.
We denote $X_i=[T_i,Z_i]_f \in \mathbb{Z} \times \mathbb{Z}^{d-1}$, where
$[T_i, 0]_f = \bold{P_{\vec{a}}}X_i$ and
$[0, Z_i]_f = \bold{P^{\bot}_{\vec{a}}}X_i$, and we let
$X_1+...+X_i =[t_i, Y_i]_f \in \mathbb{Z} \times \mathbb{Z}^{d-1}$.
Note: $T_i$ and $Z_i$ don't have to be independent. Interpolating $Y_i$, we get $$Y(t)=Y_{[t]}+(t-[t])(Y_{[t]+1} - Y_{[t]})$$ for $0 \leq t \leq \infty$ and if we now define $Y_n(t)\equiv{\frac{Y(nt)}{\sqrt{n}}}$ for $0\leq{t}\leq{1}$, then the following theorem easily follows from the previous result:
\begin{cor} $Y_n(t)$ conditioned on $Y_n(1)=0$ converges weakly to the Brownian Bridge. \end{cor}
Since the first coordinate $T_i$ is positive with probability one, the next step will be to interpolate $[t_i,Y_i]_f$, and prove that if scaled and conditioned on
$[t_n,Y_n]_f= X_1+...+X_n= [n \| \bold{\vec{a}} \|,0]_f = n\bold{\vec{a}}$ it will
converge weakly to the Brownian Bridge (with the first coordinate being the time axis). Now, the last theorem implies the result for $P[[T_i,0]_f=\mu_a]=1$; we want the same result for $\bold{E}T_i=\| \mu_a \|$ and $VarT_i<\infty$. \\ We first let $\bar{X}_i \equiv X_i - \mu_a$; then $\bold{E}\bar{X}_i= \mu_{or}$ and $Var \bar{X}_i <\infty$. We again interpolate: $$\bar{X}(t)=\bar{X}_{[t]}+(t-[t])(\bar{X}_{[t]+1} - \bar{X}_{[t]})$$ for $0 \leq t <\infty$, and scale
$\bar{X}_k(t)\equiv{\frac{\bar{X}(kt)}{\sqrt{k}}}$. Note: the last $d-1$ coordinates of $\bar{X}_k(t)$ w.r.t. the new basis are $Y_k(t)$ (i.e.
$\bold{P^{\bot}_{\vec{a}}}\bar{X}_k(t) = [0, Y_k(t)]_f$).
\\
From here on we denote $S_j \equiv [t_j, Y_j]_f =X_1 +...+X_j$ and $\bar{S}_j \equiv \bar{X}_1 +...+\bar{X}_j =S_j- j\mu_a$ for any positive integer $j$. As a first important step, we state another important \begin{cor}
For $k=k(n)= [\frac{n \| \bold{\vec{a}} \|}{\| \mu_a \|} + k_0 \sqrt{n}]$,
$\{ \bar{X}_k(t) -(k_0\sqrt{\frac{\| \mu_a \|}{\|
\bold{\vec{a}} \|}} \mu_a +\frac{n\bold{\vec{a}} - k\mu_a}{\sqrt{k}})t \}$ conditioned on
$\bar{X}_k(1)= n \bold{\vec{a}} -k\mu_a$
${(i.e. [t_k,Y_k]_f= n \bold{\vec{a}} )}$ converges weakly to the Brownian Bridge $B^{0, - k_0
\sqrt{\frac{\| \mu_a \|}{\| \bold{\vec{a}} \|}} \mu_a}$. \end{cor} Observe that $n \bold{\vec{a}} -k\mu_a = -k_0 \sqrt{n}\mu_a +o(\sqrt{n})$ and that the convergence is uniform for all $k_0$ in a compact set . Now, looking only at the last $d-1$ coordinates of $\bar{X}_k(t)$, w.r.t. the new basis the last Corollary implies:
\begin{lem}
For $k=k(n)= [\frac{n \| \bold{\vec{a}} \|}{\| \mu_a \|} + k_0
\sqrt{n}]$, $Y_k(t)$ conditioned on $t_k=n \| \bold{\vec{a}} \|$ and $Y_k(1)=0$ converges weakly to the Brownian Bridge. \end{lem} Note that convergence is uniform for $k_0$ in a compact set. \\ What the Lemma above says is the following: the interpolation of
$[\frac{i}{k}, \frac{1}{\sqrt{k}}Y_i]_f$ conditioned on $[t_k, Y_k]_f=n \bold{\vec{a}}$ converges to Time$\times$Brownian Bridge. Now, define the process $[t, Y_{n,k}^*(t)]_f$ to be the interpolation of $[\frac{1}{n \| \bold{\vec{a}} \|}t_i, \frac{1}{\sqrt{n}}Y_i]^{i=0,1,...,k}_f$, then
\begin{thm}
For $k=k(n)= [\frac{n \| \bold{\vec{a}} \|}{\| \mu_a \|} + k_0 \sqrt{n}]$, $\sqrt{\frac{n}{k}} Y^*_{n,k}(t)$ conditioned on
$t_k=n \| \bold{\vec{a}} \|$ and $Y_k(1)=0$ converges weakly to the Brownian Bridge. \end{thm}
\begin{proof}[Proof:]
Here we observe that the mean $\bold{E} [\frac{t_i}{n \|
\bold{\vec{a}} \|} - \frac{t_{i-1}}{n\| \bold{\vec{a}} \|}]$ is actually equal to $\frac{\| \mu_a \|}{n \| \bold{\vec{a}} \|} = \frac{1}{k -k_0 \sqrt{n}}+o(\frac{1}{n})$, and that for a given $\epsilon
>0$, the probability of the $\|[\frac{1}{n \| \bold{\vec{a}}
\|}t_i, \frac{1}{\sqrt{n}}Y_i]_f - [\frac{i}{k},
\frac{1}{\sqrt{k}}Y_i]_f \| = |\frac{t_j}{n \| \bold{\vec{a}} \|}
- \frac{j}{k}|$ exceeding $\epsilon$ for some $j\leq k$,
\begin{eqnarray*}
P[ \max_{0\leq j \leq k} |t_j- \frac{n \| \bold{\vec{a}} \|}{k}
j|\geq n\epsilon
\mbox{ }| \mbox{ } S_k = n \bold{\vec{a}}]
& \leq &
P[ \max_{0\leq j \leq k} \| S_j -\frac{n \| \bold{\vec{a}} \| j}{k}\mu_a \|
\geq n \epsilon
\mbox{ } | \mbox{ } S_k = n \bold{\vec{a}}]\\
& \leq &
P[ \max_{0\leq j \leq k} |\bar{S}_j| \geq
n \frac{\epsilon}{2}
\mbox{ } | \mbox{ } \bar{S}_k =
[n \| \bold{\vec{a}} \| -k\| \mu_a \|, 0]_f ]\\
& \rightarrow & 0 \end{eqnarray*}
as $n \rightarrow +\infty$ since
$n \| \bold{\vec{a}} \| -k\| \mu_a \|
= -\| \mu_a \| k_0 \sqrt{n} + o(\sqrt{n})$.
\end{proof}
Now, the next step is to prove that the process $$\{Y^*_{n,k} \mbox{ for some } k \mbox{ such that } [t_k, Y_k]_f= n \bold{\vec{a}} \}$$ conditioned on the existence of such $k$ converges weakly to the Brownian Bridge.\\
First of all the last theorem implies \begin{lem}
For given $k=k(n)= [\frac{n \| \bold{\vec{a}} \|}{\| \mu_a \|} +
k_0 \sqrt{n}]$, $Y^*_{n,k}(t)$ conditioned on $t_k=n \|
\bold{\vec{a}} \|$ and $Y_k(1)=0$ converges weakly to the Brownian Bridge. \end{lem}
For a fixed $M>0$, convergence is also uniform on $k \in [\frac{n
\| \bold{\vec{a}} \|}{\| \mu_a \|} -M\sqrt{n},
\frac{n \| \bold{\vec{a}} \|}{\| \mu_a \|} +M\sqrt{n}]$. For future purposes we denote $\kappa \equiv \frac{\| \mu_a
\|}{\| \bold{\vec{a}} \|}$ and $I_M \equiv [\frac{n}{\kappa} -M\sqrt{n}, \frac{n}{\kappa}+M\sqrt{n}] \bigcap
\mathbb{Z}$.\\
Finally, we want to prove the following technical result, in which we use the uniformity of convergence for all $k=k(n) \in I_M$ and the truncation techniques to show the convergence of $Y^*_{n,k}$ to the Brownian Bridge in case when we condition only on the existence of such $k$. \begin{TECHthm} The process $$\{ Y^*_{n,k}\mbox{ for some } k \mbox{ such that } [t_k, Y_k]_f = n \bold{\vec{a}} \}$$ conditioned on the existence of such $k$ converges weakly to the Brownian Bridge. \end{TECHthm}
\begin{proof}[Proof:] Take $M$ large and notice that for $A \subset C^{d-1}[0,1]$,
$$\max_{k \in I_M}
|P[Y^*_k \in A \mid [t_k, Y_k]_f = n \bold{\vec{a}}]
-P[ B^o \in A] | = o(1),$$
where the Brownian Bridge $B^o$ is scaled up to the same constant for all those $k$.\\
Hence, $$\lim_{n \rightarrow +\infty}
\frac{\sum_{k \in I_M}
P[S_k= n \bold{\vec{a}}]P[Y^*_{n,k} \in A | S_k= n \bold{\vec{a}} ]}
{\sum_{k \in I_M} P[S_k= n \bold{\vec{a}} ]} = P[ B^o \in A].$$
It therefore only remains to prove the truncation argument as $M \rightarrow +\infty$. Now, for any $\epsilon >0$ there exists $M>0$ such that $$(1+\epsilon) \sum_{k \in I_M} P[S_k= n \bold{\vec{a}} ]
\leq \sum_k P[S_k= n \bold{\vec{a}} ]
\leq (1+2\epsilon) \sum_{k \in I_M} P[S_k= n \bold{\vec{a}} ]$$ for $n$ large enough, as by the large deviation upper bound, there is a constant $\bar{C}_{LD} >0$ such that $$P[S_k= n \bold{\vec{a}} ] \leq
e^{- \bar{C}_{LD} \frac{(n-k\kappa)^2}{k} \wedge |n-k\kappa|} ,$$ and therefore $\exists C_{LD}>0$ such that
$$\sum_{|n-k \kappa| >n^{2/3} }
P[S_k= n \bold{\vec{a}}] < e^{-C_{LD} n^{1/3} }.$$
Also, by the local CLT, $$P[S_k= n \bold{\vec{a}}]=P[\bar{S}_k= (n-k \kappa) \bold{\vec{a}}]
=\frac{1}{k^{d/2} \sqrt{Var\bar{X}_1 (2 \pi)^d }}
e^{-\frac{1}{2Var\bar{X}_1} \frac{(n-k \kappa)^2}{k} }
+o(\frac{1}{k^{d/2}})$$ implying
$$\sum_{|n-k \kappa| \leq n^{2/3} }
P[S_k= n \bold{\vec{a}} ]
= \frac{1}{ n^{\frac{d-1}{2}} } [\int_{-\infty}^{+\infty}
\frac{1}{\sqrt{Var\bar{X}_1 (2 \pi)^d }}
e^{-\frac{x^2}{2Var\bar{X}_1} } dx +o(1)]$$ where
$$\sum_{k \in I_M} P[S_k= n \bold{\vec{a}} ]
= \frac{1}{ n^{\frac{d-1}{2}} } [\int_{-M}^M
\frac{1}{\sqrt{Var\bar{X}_1 (2 \pi)^d }}
e^{-\frac{x^2}{2Var\bar{X}_1} } dx +o(1)].$$ Therefore
$$\frac{1}{1+2\epsilon} \frac{\sum_{k \in I_M}
P[S_k= n \bold{\vec{a}}]P[Y^*_{n,k} \in A | S_k= n \bold{\vec{a}}]}
{\sum_{k \in I_M} P[S_k=n \bold{\vec{a}}]}
\leq \frac{\sum_{k}
P[S_k= n \bold{\vec{a}} ]P[Y^*_{n,k} \in A | S_k= n \bold{\vec{a}} ]}
{\sum_{k} P[S_k= n \bold{\vec{a}} ]}$$
$$ \leq \frac{1}{1+\epsilon} \frac{\sum_{k \in I_M}
P[S_k= n \bold{\vec{a}} ]P[Y^*_{n,k} \in A | S_k= n \bold{\vec{a}} ]}
{\sum_{k \in I_M} P[S_k= n \bold{\vec{a}} ]}$$
for all $A \subset C^{d-1}[0,1]$. Taking the $\liminf$ and $\limsup$ of the fraction in the middle completes the proof.
\end{proof}
\end{document}
\begin{document}
\begin{abstract} Let $Y$ be the variety of (skew) symmetric $n\times n$-matrices of rank $\le r$. In this paper we construct a fully faithful embedding between the derived category of a non-commutative resolution of $Y$, constructed earlier by the authors, and the derived category of the classical Springer resolution of $Y$. \end{abstract} \maketitle \section{Introduction} \label{ref-1-0} Throughout $k$ is an algebraically closed field of characteristic zero. If $\Lambda$ is a right noetherian ring then we write ${\cal D}(\Lambda)$ for $D^b_f(\Lambda)$, the bounded derived category of right $\Lambda$-modules with finitely generated cohomology. Similarly for a noetherian scheme/stack $X$ we write ${\cal D}(X):=D^b_{\mathop{\text{\upshape{coh}}}}(X)$.
If $Y$ is the determinantal variety of $n\times n$-matrices of rank $\le r$ then in \cite{VdB100} (and independently in \cite{SegalDonovan}) a ``non-commutative crepant resolution'' \cite{Leuschke,VdB32} $\Lambda$ for $k[Y]$ was constructed. Such an NCCR is a $k[Y]$-algebra which has in particular the property that ${\cal D}(\Lambda)$ is a ``strongly crepant categorical resolution'' of~$\operatorname{Perf}(Y)$ (the derived category of perfect complexes on $Y$) in the sense of~\cite[Def.\ 3.5]{Kuznetsov}. This NCCR was constructed starting from a tilting bundle on the standard Springer type resolution of singularities $Z\rightarrow Y$ where $Z$ is a vector bundle over a Grassmannian. Indeed the main properties of $\Lambda$ were derived from the existence of a derived equivalence between ${\cal D}(\Lambda)$ and ${\cal D}(Z)$.
In this paper we discuss suitably adapted versions of these results for determinantal varieties of symmetric matrices and skew symmetric matrices. It turns out that both settings are very similar but notationally cumbersome to treat together. So we present our main results and arguments in the skew symmetric case. The modifications needed for the symmetric case will be discussed briefly in Section~\ref{symsec}.
Let $n>r>0$ with $2|r$ and now let $Y$ be the variety of skew symmetric $n\times n$-matrices of rank $\le r$. If $n$ is odd then in \cite{SpenkoVdB} we constructed an NCCR
$\Lambda$ for $k[Y]$ (the existence of the resulting strongly crepant categorical resolution of $Y$ was conjectured in \cite[Conj.\ 4.9]{Kuznetsov4}).
The construction of $\Lambda$ also works when $n$ is even but then
$\Lambda$ is not an NCCR, albeit very close to one. In particular one may show that ${\cal D}(\Lambda)$ is a
``weakly crepant categorical resolution'' of $\operatorname{Perf}(Y)$, again in the sense of \cite{Kuznetsov} (see \cite{Abuaf} for an entirely different construction of such resolutions).
In contrast to \cite{VdB100,SegalDonovan} the construction of the NCCR $\Lambda$ is based on invariant theory and does not use geometry. Nonetheless it is well known that also in this case $Y$ has a canonical (commutative) Springer type resolution of singularities $Z\rightarrow Y$ and our main concern below will be the relationship between the resolutions $\Lambda$ and $Z$. In particular we will construct a $k[Y]$-linear embedding \begin{equation} \label{ref-1.1-1} {\cal D}(\Lambda)\hookrightarrow {\cal D}(Z). \end{equation} For $n$ odd such an inclusion is expected by the fact that NCCRs are conjectured to yield minimal categorical resolutions. Note that the embedding \eqref{ref-1.1-1} turns out to be somewhat non-trivial. The image of $\Lambda$ is a coherent sheaf of ${\cal O}_Z$-modules, but it is not a vector bundle.
As already mentioned, the construction of $\Lambda$ uses invariant theory. We explain this next. Let $H$, $V$ be vector spaces of dimension $n$, $r$ with $V$ being in addition equipped with a symplectic bilinear form $\langle-,-\rangle$. The corresponding symplectic group is denoted by $\Sp(V)$.
If ${{\chi}}$ is a partition with $l({{\chi}})\le r/2$ then we let $S^{\langle{{\chi}}\rangle}V$ be the irreducible representation of $\Sp(V)$ with highest weight~${{\chi}}$. If ${{\chi}}=({{\chi}}_1,\ldots,{{\chi}}_{r})\in\ZZ^r$ is a dominant $\operatorname {GL}(V)$-weight then we let $S^{{\chi}} V$ be the irreducible $\operatorname {GL}(V)$-representation with highest weight ${{\chi}}$.
Put $ X=\operatorname {Hom}(H,V) $ and let $T$ be the coordinate ring of $X$: \[ T=\operatorname{Sym}_k(H\otimes_k V^\vee). \] Put \begin{equation} \label{ref-1.2-2} M({{\chi}}):= (S^{\langle{{\chi}}\rangle}V\otimes_k T)^{\Sp(V)}\,. \end{equation} Thus $M(\chi)$ is a ``module of covariants'' in the sense of \cite{VdB9}. Let $B_{m,n}$ be the set of partitions contained in a box with $m$ rows and $n$ columns. Put \begin{equation} \label{ref-1.3-3} M=\bigoplus_{{{\chi}}\in B_{r/2,\lfloor n/2\rfloor-r/2}} M({{\chi}}) \end{equation} and $ \Lambda=\operatorname {End}_{R}(M) $. In \cite{SpenkoVdB} the following result (which improves on \cite{WeymanZhao}) was proved: \begin{theorem} \label{ref-1.1-4} One has $\operatorname {gl\,dim}\Lambda<\infty$. Moreover if $n$ is odd then $\Lambda$ is a Cohen-Macaulay $R:=T^{\Sp(V)}$-module. In other words, in the terminology of \cite{Leuschke,VdB32}, when $n$ is odd $\Lambda$ is a non-commutative crepant resolution (NCCR) of $R$. \end{theorem} By the first fundamental theorem for the symplectic group $R$ is a quotient of $\operatorname{Sym}_k(\wedge^2 H)$ so that dually $\operatorname {Spec} R\hookrightarrow \wedge^2 H^\vee\subset \operatorname {Hom}_k(H,H^\vee)$. The second fundamental theorem for the symplectic group yields \[ \operatorname {Spec} R=\{\psi\mid \psi\in \operatorname {Hom}_k(H,H^\vee),\psi+\psi^\vee=0,\operatorname {rk} \psi\le r\}\,. \] so that $\operatorname {Spec} R\cong Y$ with $Y$ as introduced above. Below we identify $R$ with $k[Y]$.
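
As a small illustrative aside (the values $n=7$, $r=4$ are a toy example of ours), the index set $B_{r/2,\lfloor n/2\rfloor-r/2}$ of the direct sum \eqref{ref-1.3-3} is a finite set of partitions in a box and can be enumerated directly; the following Python sketch does so.
\begin{verbatim}
def box_partitions(rows, cols):
    """All partitions fitting in a box with `rows` rows and `cols` columns
    (weakly decreasing tuples of length <= rows with parts <= cols)."""
    def gen(remaining_rows, max_part):
        yield ()
        if remaining_rows == 0:
            return
        for first in range(1, max_part + 1):
            for rest in gen(remaining_rows - 1, first):
                yield (first,) + rest
    return sorted(gen(rows, cols))

# For n = 7, r = 4 the summands of M are indexed by
# B_{r/2, floor(n/2) - r/2} = B_{2,1}:
print(box_partitions(2, 1))        # [(), (1,), (1, 1)]
print(len(box_partitions(3, 4)))   # |B_{3,4}| = C(7,3) = 35
\end{verbatim}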
We now discuss the Springer resolution $p:Z\rightarrow Y$ as well as the inclusion ${\cal D}(\Lambda)\hookrightarrow {\cal D}(Z)$ announced in \eqref{ref-1.1-1}. Let $ F=\operatorname{Gr}(r,H)$ be the Grassmannian of $r$-dimensional quotients $H\twoheadrightarrow Q$ of $H$ and put \[ Z=\{(\phi,Q)\mid Q\in F,\phi\in \operatorname {Hom}_k(Q,Q^\vee),\phi+\phi^\vee=0\}\,. \] The Springer resolution $p:Z\rightarrow Y\hookrightarrow \operatorname {Hom}_k(H,H^\vee)$ of $Y$ sends $(\phi,Q)$ to the composition \[ [H\twoheadrightarrow Q\xrightarrow{\phi} Q^\vee \hookrightarrow H^\vee]\in \operatorname {Hom}_k(H,H^\vee)\,. \]
Using again the fundamental theorems for the symplectic group we have \begin{equation} \label{ref-1.4-5} \operatorname{Sym}_k(Q\otimes_k V^\vee)^{\Sp(V)}\cong\operatorname{Sym}_k(\wedge^2 Q) \end{equation} (since $\dim Q=\dim V$, there are no relations on the righthand side). For a partition~${{\chi}}$ with $l({{\chi}})\le r/2$ we put \begin{equation} \label{eq:mq} M_Q({{\chi}})=(\det Q)^{\otimes r-n}\otimes_k (S^{\langle {{\chi}}\rangle} V\otimes_k \operatorname{Sym}_k(Q\otimes_k V^\vee))^{\Sp(V)} \end{equation} where we consider $M_Q({{\chi}})$ as a $\operatorname {GL}(Q)$-equivariant $\operatorname{Sym}_k(\wedge^2 Q)$-module via \eqref{ref-1.4-5}.
Choose a specific $(H\twoheadrightarrow Q)\in F$. One has $F=\operatorname {GL}(H)/P_Q$ where $P_Q$ is the parabolic subgroup of $\operatorname {GL}(H)$ that stabilizes the kernel of $H\twoheadrightarrow Q$. We regard $\operatorname {GL}(Q)$-equivariant objects tacitly as $P_Q$-equivariant objects through the canonical morphism $P_Q\twoheadrightarrow \operatorname {GL}(Q)$. Taking the fiber in $Q$ defines an equivalence between $\mathop{\text{\upshape{coh}}}(\operatorname {GL}(H),Z)$ and $\operatorname{mod}(P_Q,{\cal Z}_Q)$ where ${\cal Z}_Q:=\operatorname{Sym}_k(\wedge^2 Q)$, whose inverse will be denoted by $\widetilde{?}$. Put \[ {\cal M}_{Z}({{\chi}})=\widetilde{M_Q({{\chi}})}\in \mathop{\text{\upshape{coh}}}(\operatorname {GL}(H),Z)\,. \]
\begin{theorem} (see \S\ref{ref-5.2-29}) \label{ref-1.2-6} Let $\mu,\lambda\in B_{r/2,n-r}$. \begin{enumerate} \item \label{ref-1-7} For $i>0$ we have \[ \operatorname {Ext}^i_Z({\cal M}_Z(\lambda),{\cal M}_Z(\mu))=0. \] \item There are isomorphisms as $R$-modules \begin{equation} \label{ref-1.5-8} R\Gamma(Z,{\cal M}_Z(\lambda))\cong M(\lambda)\,. \end{equation} \item \label{ref-3-9} Applying $p_\ast$ induces an isomorphism \begin{align} \operatorname {Hom}_Z({\cal M}_Z(\lambda),{\cal M}_Z(\mu))&\overset{p_\ast}{\cong} \operatorname {Hom}_Y(p_\ast{\cal M}_Z(\lambda),p_\ast{\cal M}_Z(\mu))\label{ref-1.6-10}\\ &\cong \operatorname {Hom}_R(\Gamma(Z,{\cal M}_Z(\lambda)),\Gamma(Z,{\cal M}_Z(\mu)))&&\text{($Y$ is affine)}\nonumber\\ &\cong \operatorname {Hom}_R(M(\lambda),M(\mu))&& \text{(by \eqref{ref-1.5-8})}\nonumber \end{align} \end{enumerate} \end{theorem}
From this theorem it follows in particular that \[ {\cal M}_Z:=\bigoplus_{{{\chi}}\in B_{r/2,\lfloor n/2\rfloor-r/2}} {\cal M}_Z({{\chi}}) \] satisfies \[ \operatorname {Ext}^i_Z({\cal M}_Z,{\cal M}_Z)= \begin{cases} \Lambda&\text{if $i=0$}\\ 0&\text{if $i>0$} \end{cases} \] and we obtain the following more precise version of \eqref{ref-1.1-1}: \begin{corollary} \label{ref-1.3-11} There is a full exact embedding \[ -\overset{L}{\otimes}_{\Lambda} {\cal M}_Z: {\cal D}(\Lambda)\hookrightarrow {\cal D}(Z)\,. \] \end{corollary}
\begin{remark} Put \[ M':=\bigoplus_{\chi\in B_{r/2,n-r}} M(\chi) \] and $\Gamma=\operatorname {End}_R(M')$. It follows from \cite[Thm 1.5.1]{SpenkoVdB} (applied with $\Delta=\epsilon \bar{\Sigma}$ for a sufficiently small $\epsilon>0$)
that $\operatorname {gl\,dim} \Gamma<\infty$. See the computation in \S6 of loc.\ cit. We have $\Lambda=e\Gamma e$ for a suitable idempotent $e$. The fact that $\operatorname {gl\,dim} \Lambda<\infty$ implies that $\Gamma$ cannot be an NCCR by \cite[Ex.\ 4.34]{Wemyss1} (see also \cite[Remark 3.6]{SpenkoVdB}). In the terminology of \cite{SpenkoVdB} $\Gamma$ is a (non-crepant) non-commutative resolution of $R$. As in Corollary \ref{ref-1.3-11} we still have an embedding ${\cal D}(\Gamma)\subset {\cal D}(Z)$.
\end{remark}
\begin{comment} \begin{remark} The methods in this paper apply almost verbatim to determinantal varieties for symmetric matrices but, as is often the case, the statements and arguments become a bit more involved (for starters there is a distinction between $r$ even and odd). This is why we restrict ourselves to skew symmetric matrices in this paper. The details for the symmetric case will be discussed elsewhere. \end{remark} \end{comment}
\section{A $\operatorname {GL}(Q)$-equivariant free resolution of $M_Q(\lambda)$} \label{ref-3-12} In this section we discuss some of the properties of the $\operatorname{Sym}_k(\wedge^2 Q)$-modules ${M_{Q}}(\lambda)$ introduced in the introduction. We basically restate some results from \cite{WeymanSam} in our current language. To do this it will be convenient to consider \[ N_Q(\chi):= (S^{\langle {{\chi}}\rangle} V\otimes_k \operatorname{Sym}_k(Q\otimes_k V^\vee))^{\Sp(V)} \] so that $M_Q(\chi)=(\det Q)^{\otimes r-n}\otimes_k N_Q(\chi)$. Since $\det Q$ is one-dimensional, $M_Q(\lambda)$ and $N_Q(\lambda)$ have identical properties.
The following fact will not be used, although it seems interesting to know: \begin{lemma} ${N_Q}(\lambda)$ is a reflexive $\operatorname{Sym}_k(\wedge^2Q)$-module. \end{lemma} \begin{proof} This follows for example from the fact that $\operatorname {Spec}\operatorname{Sym}_k(Q\otimes_k V^\vee)\rightarrow \operatorname {Spec}\operatorname{Sym}_k(\wedge^2 Q)$ contracts no divisor. \end{proof} \label{ref-3-13}
\begin{comment} Choose a basis for $Q$. Using this basis we will identify the Weyl group of $\operatorname {GL}(Q)$ with $S_r$ and we will write the $\operatorname {GL}(Q)$-weights as $r$-tuples: $\mu=(\mu_1,\ldots,\mu_r)$ with the dominant weights being characterized by $\mu_1\ge \cdots\ge\cdots\ge \mu_r$. For a dominant weight $\mu$ we write $S^\mu Q$ for the corresponding irreducible $\operatorname {GL}(Q)$-representation. The twisted Weyl group action on weights will be denoted by ``$\ast$''. The $i$'th simple reflection $s_i$ acts by $s_i{\ast} (\ldots,\mu_i,\mu_{i+1},\ldots):= (\ldots,\mu_{i+1}-1,\mu_{i}+1,\ldots)$. If there is a $w\in S_r$ such that $w{\ast}\mu$ is dominant then we say that~$\mu$ is regular, otherwise that it is singular. If~$\mu$ is regular then we write $\mu^+$ for the (necessarily unique) dominant weight in the twisted Weyl group orbit of $\mu$. We also write $i(\mu)$ for the minimal number of (twisted) simple reflections required to make $\mu$ dominant. \end{comment}
Recall that a border strip is a connected skew Young diagram not containing any $2\times 2$ square. The size of a border strip is the number of boxes it contains. We follow \cite{WeymanSam} and associate to some partitions $\lambda$ a partition $\tau_{r}(\lambda)$ and a number $i_{r}(\lambda)$. The definition of $(\tau_r(\lambda),i_r(\lambda))$ is inductive. If $l(\lambda)\leq r/2$ then $\tau_{r}(\lambda)=\lambda$, $i_r(\lambda)=0$. Suppose now that $l(\lambda)>r/2$. If there exists a non empty border strip $R_\lambda$ of size $2 l(\lambda)-r-2$ starting at the first box in the bottom row of $\lambda$ such that $\lambda\setminus R_\lambda$ is a partition then $\tau_r(\lambda):=\tau_r(\lambda\setminus R_\lambda)$, and $i_r(\lambda):=c(R_\lambda)+i_r(\lambda\setminus R_\lambda)$, where $c(R_\lambda)$ is the number of columns of $R_\lambda$. Otherwise $\tau_r(\lambda)$ is undefined and $i_r(\lambda)=\infty$.
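To illustrate the definition, take $r=4$ and $\lambda=(a,b,1,1)$ with $a\ge b\ge 1$. Then $l(\lambda)=4>r/2$ and the prescribed border strip has size $2l(\lambda)-r-2=2$; it consists of the two boxes of the first column lying in rows $3$ and $4$, so that $\lambda\setminus R_\lambda=(a,b)$ and hence
\[
\tau_4(\lambda)=(a,b),\qquad i_4(\lambda)=c(R_\lambda)=1\,,
\]
in agreement with the shape of the resolution in Example \ref{ref-3.3-15} below.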
\begin{comment} Recall that a partition has Frobenius coordinates $(a_1,\ldots,a_u;b_1,\ldots,b_u)$, $a_1>\cdots>a_u\ge 1$, $b_1>\cdots>b_u\ge 1$ if for all $i$ the box $(i,i)$ has arm length $a_i-1$ and leg length $b_i-1$. Let ${Q_{-1}}(m)$ be the set of partitions $\chi$ with
$|\chi|=m$ whose Frobenius coordinates are of the form $(a_1,\ldots,a_u{{;}} a_1{+}1,\ldots,a_u{+}1)$.
For partitions $\delta,\chi$ such that $l(\delta)$, $l(\chi)\le r/2$ put
$(\delta|\chi):=(\delta_1,\ldots,\delta_{r/2},\chi_1,\ldots,\chi_{r/2})$ with the latter being viewed as a weight for $\operatorname {GL}(Q)$. \end{comment}
From \cite[Corollary 3.16]{WeymanSam} we extract the following result (the role of $\operatorname{Sym}(\wedge^2 Q)$ is played by the ring $A$ in loc.\ cit.
and our $\operatorname{Sym}_k(Q\otimes_k V^\vee)$ is denoted by $B$). \begin{proposition} \label{ref-3.2-14}
Assume $\chi$ is a partition with $l(\chi)\le r/2$. Then
${N_Q}(\chi)$ has a $\operatorname {GL}(Q)$-equivariant free resolution as a $\operatorname{Sym}(\wedge^2
Q)$-module which in homological degree $t\ge 0$ is the direct sum of $S^{\lambda}
Q\otimes_k \operatorname{Sym}_k(\wedge^2 Q)$ for $\lambda$ satisfying $(\tau_r(\lambda),i_r(\lambda))=(\chi,t)$. \end{proposition} \begin{example} \label{ref-3.3-15} Write $[\mu_1,\mu_2,\ldots]$ for $S^{\mu} Q\otimes_k \operatorname{Sym}_k(\wedge^2 Q)$. Assume $r=4$. Then the above resolution of ${N_Q}(a,b)$ has the form \[ 0\rightarrow [a,b,1,1]\rightarrow [a,b] \] if $b\ge 1$. If $b=0$ then the resolution has only one term given by $[a]$. \end{example} \begin{example} \label{ref-3.4-16} Assume $r=6$. Now the resolution of ${N_Q}(a,b,c)$ is \[ 0\rightarrow [a,b,c,2,2,2] \rightarrow [a,b,c,2,1,1] \rightarrow [a,b,c,1,1]\rightarrow [a,b,c] \] if $c\ge 2$. If $c=1$ then we have \[ 0\rightarrow [a,b,1,1,1]\rightarrow [a,b,1] \] If $c=0,b\ge 1$ we get \[ 0 \rightarrow [a,b,1,1,1,1]\rightarrow [a,b] \] Finally for $c=b=0$ the resolution has again only a single term given by $[a]$. \end{example} \begin{remark} \label{sam2}
For $\chi_{r/2}\ge r/2-1$ we give an explicit description of the resolution of $N_Q(\chi)$ (including the differentials) in Appendix \ref{ref-A-50}.
Steven Sam informed us of an alternative (and more general) approach as follows. There is an action of $\mathfrak{so}(Q+Q^*)$ on $\operatorname{Sym}} \def\Sp{\operatorname{Sp}(Q\otimes_k V^*)$, which commutes with the $\Sp(V)$-action. Therefore $\mathfrak{so}(Q+Q^*)$ acts on $N_Q(\chi)$. The resolution of $N_Q(\chi)$ in Proposition \ref{ref-3.2-14} can be upgraded to an $\mathfrak{so}(Q+Q^*)$-equivariant resolution, which is a BGG-resolution by parabolic Verma modules of the irreducible highest weight representation $N_Q(\chi)$. This follows by \cite[Lemma 5.14, Theorem 5.15, Corollary 6.8]{EHP} since $N_Q(\chi)$ is unitary \cite[Proposition 4.1]{ChengZhang}.
In this way, using \cite[Section 5.3]{EHP} and \cite[Proposition 3.7]{Lepowsky}, one may in fact give an explicit description of the resolution of $N_Q(\chi)$ also for general $\chi$. However an analogue of the uniqueness claim of Proposition \ref{prop:uniqueness} is apparently not yet available in the literature.
\end{remark}
\begin{comment} is computed using Weyman's ``geometric method'' (see \cite[Lemma 3.11, Lemma 3.12, Prop.\ 3.13]{WeymanSam}). We do not need the definition of $M_\chi$, just the shape of its resolution which we now describe.
Let $G$ be the Grassmannian of $r/2$ dimensional quotients of $Q$ and let
${\cal P}$, ${\cal S}$ be respectively the universal quotient and subbundle on $G$. The resolution of $M_\chi$ given by the geometric method is in homological degree $t\ge 0$ equal to \begin{equation} \label{ref-3.1-17} \bigoplus_{j\in \NN} H^j(G,\wedge^{t+j}(\wedge^2{\cal S})\otimes_G S^\chi {\cal P}) \otimes_k \operatorname{Sym}} \def\Sp{\operatorname{Sp}(\wedge^2 Q)\,. \end{equation} We have \begin{equation} \label{ref-3.2-18} \wedge^{k}(\wedge^2{\cal S})\cong \bigoplus_{\mu\in {Q_{-1}}(2k)} S^\mu{\cal S}\,. \end{equation} It now suffices to invoke Bott's theorem to see that the resolution \eqref{ref-3.1-17} has the same shape as the resolution introduced in the statement of Proposition \ref{ref-3.2-14}. \end{comment}
Looking at Examples \ref{ref-3.3-15} and \ref{ref-3.4-16} suggests the following easy consequence of Proposition \ref{ref-3.2-14}, which is crucial for what follows: \begin{corollary} \label{ref-3.6-19} The summands of the resolution of ${N_Q}(\chi)$ given in Proposition \ref{ref-3.2-14} are all of the form $S^\delta Q\otimes_k \operatorname{Sym}(\wedge^2 Q)$ with $\delta_1=\chi_1$. \end{corollary} \begin{proof} Note first of all that $l(\delta)\le r$ (otherwise $S^\delta Q=0$). A border strip $R$ of size $\leq 2 l(\lambda)-r-2$ starting at the first box in the bottom row of a partition $\lambda$ with $r\ge l(\lambda)>r/2$ has at most $2 l(\lambda)-r-2$ rows. So if we remove $R$ then the first $l(\lambda)-(2 l(\lambda)-r-2)=-l(\lambda)+r+2\ge 2$ rows of $\lambda$ are unaffected.
If $S^\delta Q\otimes_k \operatorname{Sym}(\wedge^2 Q)$, $\delta\neq \chi$, appears in the resolution of ${N_Q}(\chi)$ then $\chi$ is by Proposition \ref{ref-3.2-14} obtained from $\delta$ by a sequence of border strip removals as in the previous paragraph. Thus $\delta_1=\chi_1$ (and also $\delta_2=\chi_2$).
\begin{comment} We assume $r\ge 4$ since otherwise the statement is trivial. Put $\rho=(r,r-1,\ldots,2,1)$. The twisted Weyl group action can be described as $w{\ast}\theta=w(\theta+\rho)-\rho$. In fact to compute $\theta^+$ one proceeds as follows: first compute $\theta'=\theta+\rho$. Order the components of $\theta'$ in descending order to obtain $\theta''$. If $\theta''$ contains repeated entries then $\theta$ is singular. Otherwise $\theta^+:=\theta''-\rho$.
We apply this with $\theta=(\chi{|}\mu)$ as in Proposition \ref{ref-3.2-14}. The fact $l(\mu)\le r/2$ and $\mu\in \cup_k{Q_{-1}}(2k)$ implies $\mu_1\le r/2-1$ (the arm length of $(1,1)$ in $\mu$ must be one less than the leg length). From this one computes $\theta'_1,\theta'_2\ge \theta'_{u}$ for $u\ge 2$. Indeed if $u\le r/2$ then this follows the fact that $\chi$ is a partition and if $u\ge r/2+1$ then it follows from the fact that $\theta'_1,\theta'_2\ge r-1$ and $\theta'_u=\theta_u+r-u+1=\mu_{u-r}+r-u+1\le \mu_1+r-(r/2+1)+1\le r-1$. Hence $\theta''_1=\theta'_1$, $\theta''_2=\theta_2'$ and thus if $\theta$ is regular
$\theta^+_1=\theta'_1-r=\theta_1$ and similarly
$\theta^+_2=\theta'_2-(r-1)=\theta_2$. \end{comment}
\end{proof} \section{The Springer resolution} \label{ref-4-20} Let $\sigma:V\rightarrow V^\vee$, $\sigma+\sigma^\vee=0$ be the isomorphism corresponding to the symplectic form on $V$. Consider the following diagram. \begin{equation} \label{ref-4.1-21} \xymatrix{ {{E}}\ar[r]^{\tilde{p}}\ar[d]_{\tilde{q}}&X\ar[d]^q\\ Z\ar[r]_p\ar[d]_\pi&Y\ar@{^(->}[r]&\wedge^2 H^\vee\\ F } \end{equation} where $X=\operatorname {Hom}(H,V)$ is as above and \begin{equation} \label{ref-4.2-22} {{Y}}=\{\psi\in \operatorname {Hom}(H,H^\vee)\mid \psi+\psi^\vee=0,\operatorname {rk} \psi\le r\}\subset \wedge^2 H^\vee\,, \end{equation} \[ F=\operatorname{Gr}(r,H):=\{\text{$r$-dimensional quotients of $H$}\}\,, \] \begin{equation}
\label{ref-4.3-23} Z=\{(\phi,Q)\mid Q\in F,\phi\in \operatorname {Hom}(Q,Q^\vee),\phi+\phi^\vee=0\}\,, \end{equation} \begin{equation} \label{ref-4.4-24} {{E}}=\{(\epsilon,Q)\mid Q\in F, \epsilon\in \operatorname {Hom}(Q,V)\}\,. \end{equation} If $\theta:H\rightarrow V\in X$ then $q(\theta)\in {{Y}}$ is the composition \[ q(\theta)=[H\xrightarrow{\theta}V\xrightarrow{\sigma}V^\vee \xrightarrow{\theta^\vee} H^\vee]\,. \] If $(\phi,Q)\in Z$ then $p(\phi,Q)\in {{Y}}$ is the composition \[ p(\phi,Q)=[H\twoheadrightarrow Q\xrightarrow{\phi} Q^\vee \hookrightarrow H^\vee]\,. \] The map $\pi:Z\rightarrow F$ is the projection $(\phi,Q)\mapsto Q$. If $(\epsilon,Q)\in {{E}}$ then $\tilde{p}(\epsilon,Q)$ is the composition \[ [H\twoheadrightarrow Q\xrightarrow{\epsilon} V] \] and $\tilde{q}(\epsilon,Q)$ is $(\phi,Q)$ where $\phi$ is the composition \[ [Q\xrightarrow{\epsilon} V\xrightarrow{\sigma} V^\vee \xrightarrow{\epsilon^\vee} Q^\vee]\,. \] In the diagram \eqref{ref-4.1-21}, $X$, $Z$, $E$ are smooth, $p$ is a resolution of singularities and $\pi$ and $\pi\tilde{q}$ are vector bundles. The coordinate ring of $X$ is $T=\operatorname{Sym}_k(H\otimes_k V^\vee)$. For the other schemes in \eqref{ref-4.1-21} we have \begin{align*} Y&=\operatorname {Spec} T^{\Sp(V)}\\ Z&=\underline{\operatorname {Spec}}_F {\cal Z}\\ {{E}}&=\underline{\operatorname {Spec}}_F {\cal E} \end{align*} with ${\cal Z}$, ${\cal E}$ being the sheaves of ${\cal O}_F$-algebras given by \begin{equation} \label{ref-4.5-25} \begin{aligned} {\cal Z}&=\operatorname{Sym}_F(\wedge^2{\cal Q})\\ {\cal E}&=\operatorname{Sym}_F({\cal Q}\otimes_k V^\vee) \end{aligned} \end{equation} where ${\cal Q}$ is the tautological quotient bundle on $F$. From \eqref{ref-4.5-25} we obtain in particular \begin{lemma} \label{ref-4.1-26} If $U\subset F$ is an affine open then $\pi^{-1}(U)$ and $(\pi\tilde{q})^{-1}(U)$ are affine and moreover $k[\pi^{-1}(U)]= k[(\pi\tilde{q})^{-1}(U)]^{\Sp(V)}$. \end{lemma} Now let $Y_0$ be the open subscheme of $Y$ of those $\psi\in Y$ (see \eqref{ref-4.2-22}) which have rank exactly $r$ and put $X_0=q^{-1}(Y_0)$, $Z_0=p^{-1}(Y_0)$, ${{E}}_0=\tilde{p}^{-1}(X_0)=\tilde{q}^{-1}(Z_0)$. Then it is easy to see that
$X_0\subset X=\operatorname {Hom}(H,V)$ is the open subscheme of those $\theta:H\rightarrow V$ which are surjective and that $Z_0\subset Z$ is the open subscheme of those $(\phi,Q)\in Z$ (see \eqref{ref-4.3-23}) where $\phi$ is an isomorphism. Finally ${{E}}_0$ is the open subscheme of ${{E}}$ of those $(\epsilon,Q)$ where $\epsilon$ is an isomorphism. The restricted morphisms $\tilde{q}_0:E_0\rightarrow Z_0$, $q_0:X_0\rightarrow Y_0$ are $\Sp(V)$-torsors and the restricted morphisms $\tilde{p}_0:E_0\rightarrow X_0$, $p_0:Z_0\rightarrow Y_0$ are isomorphisms.
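As a quick sanity check on the diagram: the fibre of $\pi:Z\rightarrow F$ over $Q$ is the space of skew forms $\{\phi\in\operatorname {Hom}(Q,Q^\vee)\mid \phi+\phi^\vee=0\}$, of dimension $r(r-1)/2$, so that (with $n=\dim H$)
\[
\dim Z=\dim F+\frac{r(r-1)}{2}=r(n-r)+\frac{r(r-1)}{2}\,,
\]
and since $p:Z\rightarrow Y$ is a resolution of singularities this is also the dimension of the determinantal variety $Y$.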
\section{A splitting functor} \label{ref-5-27} \subsection{Preliminaries} The idea of using splitting functors was suggested to us by Sasha Kuznetsov. Recall that a (full) triangulated subcategory of a triangulated category is
right admissible if the inclusion functor has a right adjoint. Following \cite[Def.\ 3.1]{Kuznetsov3} we say that a functor $\Phi:{\cal B}\rightarrow {\cal A}$ is right splitting if $\operatorname {ker} \Phi$ is right admissible in~${\cal B}$,~$\Phi$ restricted to $(\operatorname {ker} \Phi)^\perp$ is fully faithful and finally $\operatorname {im}\Phi=\Phi((\operatorname {ker} \Phi)^\perp)$ is right admissible in ${\cal A}$. Left splitting functors are defined in a similar way. We see that splitting functors are categorical versions of partial isometries between Hilbert spaces.
According to \cite[Lem.\ 3.2, Cor.\ 3.4]{Kuznetsov3} a right splitting functor $\Phi$ has a right adjoint~$\Phi^!$ which is a left splitting functor. According to \cite[Thm 3.3(3r)]{Kuznetsov3} if $\Phi$ is right splitting then~$\Phi$ and $\Phi^!$ induce inverse equivalences between $\operatorname {im} \Phi\subset {\cal A}$ and $\operatorname {im} \Phi^!\subset {\cal B}$. Below we will use the following criterion to verify that a certain functor is splitting. \begin{lemmas}
\label{ref-5.1.1-28} Assume that $\Phi:{\cal B}\rightarrow {\cal A}$ is an exact
functor between triangulated categories. Assume that $\Phi$ has a
right adjoint $\Phi^!$ such that the composition of the counit map
$\Phi \Phi^!\rightarrow \operatorname{id}_{{\cal A}}$ with $\Phi$ yields a natural isomorphism
$\Phi \Phi^!\Phi\rightarrow \Phi$. Then $\Phi$ is a right splitting functor. \end{lemmas} \begin{proof} This is equivalent to the criterion \cite[Thm 3.3(4r)]{Kuznetsov3}. In the latter case we start from the unit map $\operatorname{id}_{{\cal B}} \rightarrow \Phi^!\Phi$ and we require that the resulting $\Phi\rightarrow \Phi \Phi^!\Phi$ is an isomorphism. As the composition $\Phi\rightarrow \Phi \Phi^!\Phi \rightarrow \Phi$ is the identity (by the triangle identities for the adjunction $(\Phi,\Phi^!)$), it follows that if one of these maps is an isomorphism then so is the other. \end{proof} \subsection{The functor} \label{ref-5.2-29}
The diagram \eqref{ref-4.1-21} may be transformed into a diagram of quotient stacks \[ \xymatrix{ {{E/\Sp(V)}}\ar[r]^{\tilde{p}_s}\ar[d]_{\tilde{q}_s}&X/\Sp(V)\ar[d]^{q_s}\\ Z\ar[r]_p\ar[d]_\pi&Y\ar@{^(->}[r]&\wedge^2 H^\vee\\ F } \] which is compatible with the natural maps $E\rightarrow E/\Sp(V)$, $X\rightarrow X/\Sp(V)$. This means in particular that $L\tilde{q}^\ast_s$, $Lq^\ast_s$, $R\tilde{p}_{s,\ast}$, $L\tilde{p}_{s}^\ast$, $\tilde{p}^!_{s,\ast}$ may be computed like their non-stacky counterparts. We will use this without further comment.
We define the functor $\Phi$ as the composition \[ \Phi:{\cal D}(Z)\xrightarrow{L\tilde{q}^\ast_s} {\cal D}(E/\Sp(V)) \xrightarrow{R\tilde{p}_{s,\ast}} {\cal D}(X/\Sp(V)) \] The functor $\Phi$ has a right adjoint $\Phi^!$ given by the composition \[ \Phi^!:{\cal D}(X/\Sp(V))\xrightarrow{\tilde{p}^!_s} {\cal D}(E/\Sp(V))\xrightarrow{R\tilde{q}_{s\ast}} {\cal D}(Z) \] where $ \tilde{p}_s^!=\omega_{E/X}\otimes_E L\tilde{p}_s^\ast(-) $ and $\tilde{q}_{s\ast}$ is given by taking $\Sp(V)$-invariants. From Lemma \ref{ref-4.1-26} and the fact that $\Sp(V)$ is reductive it follows that $\tilde{q}_{s\ast}$ is an exact functor. \begin{theorems} \label{ref-5.2.1-30} \begin{enumerate} \item $\Phi$ is a right splitting functor. \label{ref-1-31} \item $\operatorname {im} \Phi$ is the smallest triangulated subcategory of ${\cal D}(X/\Sp(V))$ containing $S^{\langle \lambda\rangle} V\otimes_k {\cal O}_X$ for $\lambda\in B_{r/2,n-r}$. \label{ref-2-32} \item $\operatorname {im} \Phi^!$ is the smallest triangulated subcategory of ${\cal D}(Z)$ containing ${\cal M}_Z(\lambda)$ for $\lambda\in B_{r/2,n-r}$. \label{ref-3-33} \item For $\lambda\in B_{r,n-r}$ we have \label{ref-4-34} \[ \Phi(\pi^\ast((\det {\cal Q})^{\otimes r-n} \otimes_F S^\lambda {\cal Q}))\cong S^\lambda V\otimes_k {\cal O}_X\,. \] \item For $\lambda\in B_{r/2,n-r}$ we have \label{ref-5-35} \[ \Phi({\cal M}_Z(\lambda))\cong S^{\langle \lambda\rangle} V\otimes_k{\cal O}_X\,. \] \item For $\lambda\in B_{r/2,n-r}$ we have \label{ref-6-36} \[ \Phi^!(S^{\langle \lambda\rangle} V\otimes_k{\cal O}_X)\cong {\cal M}_Z(\lambda). \] \end{enumerate} \end{theorems} The proof is based on a series of lemmas. Most arguments are quite standard. See \cite{VdB100, WeymanBook}. \begin{lemmas}\label{ref-5.2.2} \begin{enumerate} \item We have \begin{equation} \label{ref-5.1-37} \omega_{E/X}= (\pi\tilde{q})^\ast (\det {\cal Q})^{\otimes r-n} \end{equation} as $\operatorname {GL}(H)\times\Sp(V)$-equivariant coherent sheaves. \item Moreover \begin{equation} \label{ref-5.2-38} R\tilde{p}_{s,\ast} \omega_{E/X}={\cal O}_X. \end{equation} \end{enumerate} \end{lemmas} \begin{proof} \begin{enumerate} \item For clarity we will work $\operatorname {GL}(H)\times \operatorname {GL}(V)$-equivariantly. Using the identification $E=\underline{\operatorname {Spec}} {\cal E}$ with ${\cal E}=\operatorname{Sym}} \def\Sp{\operatorname{Sp}_F({\cal Q}\otimes_k V^\vee)$ (see \eqref{ref-4.5-25}) we find that~$\omega_E$ corresponds to the sheaf of graded ${\cal E}$-modules given by \[ \omega_{{\cal E}}=\omega_{F}\otimes_F\det ({\cal Q}\otimes_k V^\vee)\otimes_F{\cal E}\,. \] From the fact that $\Omega_F=\operatorname {\mathcal{H}\mathit{om}}_F({\cal Q},{\cal R})$ where ${\cal R}=\operatorname {ker}(H\otimes_k{\cal O}_F\rightarrow {\cal Q})$ one computes \[ \omega_F=(\det H)^{\otimes r}\otimes_F(\det {\cal Q})^{\otimes -n}\,. \] We also have \[ \det({\cal Q}\otimes_k V^\vee)=(\det Q)^{\otimes r}\otimes_k (\det V)^{\otimes -r} \] so that ultimately we get \[ \omega_{\cal E}=(\det H)^{\otimes r}\otimes_k (\det V)^{\otimes -r}\otimes_k (\det {\cal Q})^{\otimes r-n}\otimes_F {\cal E}\, \] and hence \[ \omega_E=(\det H)^{\otimes r}\otimes_k (\det V)^{\otimes -r}\otimes_k(\pi\tilde{q})^\ast (\det {\cal Q})^{\otimes r-n}. \] One also has $\omega_X=(\det H)^{\otimes r}\otimes_k (\det V)^{\otimes -n}\otimes_k {\cal O}_X$ which yields \[ \omega_{E/X}=(\det V)^{\otimes n-r}\otimes_k(\pi\tilde{q})^\ast (\det {\cal Q})^{\otimes r-n}. 
\] It now suffices to note that $\det V$ is a trivial $\Sp(V)$-representation. \item It is easy to show this directly from \eqref{ref-5.1-37} but one may also argue that~$X$, being smooth, has rational singularities and hence $R\tilde{p}_{s,\ast}(\omega_E)=\omega_X$. Tensoring with $\omega_X^{-1}$ yields the desired result. \end{enumerate} \end{proof} On $E$ there is a tautological map \[ \epsilon:(\pi\tilde{q})^\ast({\cal Q})\rightarrow V\otimes_k {\cal O}_E \]
whose fiber in a point $(\epsilon,Q)\in E$ is simply $\epsilon:Q\rightarrow V$. From this description it is clear that $\epsilon{|}E_0$ is an isomorphism. \begin{lemmas} \label{ref-5.2.3-39} Assume $\lambda\in B_{r,n-r}$. The map $S^\lambda \epsilon$ becomes an isomorphism after applying the functor $R\tilde{p}_\ast (\omega_{E/X}\otimes_E-)$. \end{lemmas} \begin{proof} By \eqref{ref-5.2-38} we have \begin{equation} \label{ref-5.3-40} R\tilde{p}_\ast(\omega_{E/X}\otimes_E(S^\lambda V\otimes_k {\cal O}_E))= S^\lambda V\otimes_k R\tilde{p}_\ast(\omega_{E/X})= S^\lambda V\otimes_k {\cal O}_X. \end{equation} When viewed as $\operatorname{Sym}} \def\Sp{\operatorname{Sp}_k(H\otimes_k V^\vee)$-module $R^i\tilde{p}_\ast(\omega_{E/X}\otimes_E S^\lambda((\pi\tilde{q}_s)^\ast({\cal Q})))$ is given by \begin{equation} \label{ref-5.4-41} H^i(F,S^\lambda {\cal Q} \otimes_F (\det {\cal Q})^{\otimes r-n}\otimes_F {\cal E}) \end{equation} (using \eqref{ref-5.1-37}). It follows from \cite[Prop.\ 1.4]{BLV1000} that \eqref{ref-5.4-41} is zero for $i>0$. So $R^i\tilde{p}_\ast(\omega_{E/X}\otimes_E S^\lambda((\pi\tilde{q}_s)^\ast({\cal Q})))=0$ for $i>0$. We now consider $i=0$. We claim that $\tilde{p}_\ast(\omega_{E/X}\otimes_E S^\lambda((\pi\tilde{q}_s)^\ast({\cal Q}))$ is maximal Cohen-Macaulay.
To this end we have to show that $\operatorname {RHom}_X(\tilde{p}_\ast(\omega_{E/X}\otimes_E S^\lambda((\pi\tilde{q}_s)^\ast({\cal Q}))),{\cal O}_X)$ has no higher cohomology or equivalently $\operatorname {Ext}^i_E( S^\lambda((\pi\tilde{q})^\ast({\cal Q})),{\cal O}_E)=0$ for $i>0$. In other words we should have \[ H^i(F,(S^\lambda {\cal Q})^\vee \otimes_F {\cal E})=0 \] for $i>0$. This follows again from \cite[Prop.\ 1.4]{BLV1000}.
Combining this with \eqref{ref-5.3-40} we see that $R\tilde{p}_\ast(\omega_{E/X}\otimes_X S^\lambda\epsilon)$ is a map between maximal Cohen-Macaulay ${\cal O}_X$-modules. Since this map is an isomorphism on $X_0$ and $\codim (X-X_0)\ge 2$ we conclude that $R\tilde{p}_\ast(\omega_{E/X}\otimes_X S^\lambda\epsilon)$ is indeed an isomorphism. \end{proof} Put ${\cal N}_Z(\lambda):=\widetilde{N_Q(\lambda)}$ where the notation $\tilde{?}$ was introduced in the introduction and $N_Q(\lambda)$ was introduced in \S\ref{ref-3-12}. From Lemma \ref{ref-4.1-26} we deduce \begin{equation} \label{ref-5.5-42} {\cal N}_Z(\lambda)\cong R\tilde{q}_{s,\ast}(S^{\langle \lambda\rangle}V\otimes_k {\cal O}_E) \end{equation} so that by adjunction we get a map \begin{equation} \label{ref-5.6-43} L\tilde{q}^\ast_s{\cal N}_Z(\lambda)\rightarrow S^{\langle \lambda\rangle}V\otimes_k {\cal O}_E. \end{equation} \begin{lemmas} \label{ref-5.2.4-44} Assume $\lambda\in B_{r/2,n-r}$. The map \eqref{ref-5.6-43} becomes an isomorphism after applying the functor $R\tilde{p}_{s,\ast}(\omega_{E/X}\otimes_E-)$. \end{lemmas} \begin{proof} Note that \eqref{ref-5.6-43} is an isomorphism on $E_0$ since $E_0\rightarrow Z_0$ is an $\Sp(V)$-torsor and so $L\tilde{q}^\ast_s$ and $R\tilde{q}_{s,\ast}$ define inverse equivalences between ${\cal D}(E_0/\Sp(V))$ and ${\cal D}(Z_0)$.
By Corollary \ref{ref-3.6-19} we have a $\operatorname {GL}(H)$-equivariant resolution \[ \cdots \rightarrow P_1(\pi^\ast{\cal Q})\rightarrow P_0(\pi^\ast{\cal Q})\rightarrow {\cal N}_Z(\lambda)\rightarrow 0 \] where the $P_i$ are polynomial functors which are finite sums of Schur functors $S^\chi$ with $\chi\in B_{r,n-r}$.
It follows that the cone of \eqref{ref-5.6-43} is described by a $\operatorname {GL}(H)\times \Sp(V)$-equivariant complex of the form \begin{equation} \label{ref-5.7-45} \cdots \rightarrow P_1((\pi\tilde{q})^\ast{\cal Q})\rightarrow P_0((\pi\tilde{q})^\ast{\cal Q})\rightarrow S^{\langle \lambda\rangle}V\otimes_k {\cal O}_E\rightarrow 0 \end{equation} and moreover this complex is exact when restricted to $E_0$. Using Lemma \ref{ref-5.2.3-39} and \eqref{ref-5.2-38} applying $R\tilde{p}_{\ast}(\omega_{E/X}\otimes_X-)$ to \eqref{ref-5.7-45} yields a $\operatorname {GL}(H)\times \Sp(V)$-equivariant complex on $X$ \begin{equation} \label{ref-5.8-46} \cdots \rightarrow P_1(V)\otimes_k {\cal O}_X \rightarrow P_0(V)\otimes_k {\cal O}_X\rightarrow S^{\langle \lambda\rangle}V\otimes_k {\cal O}_X\rightarrow 0 \end{equation} This complex is exact on $X_0$ (since $X_0\cong E_0$) but we must prove it is exact on $X$. The morphisms in \eqref{ref-5.8-46} are determined by $\operatorname {GL}(H)\times\Sp(V)$-equivariant maps \begin{align*} P_{i+1}(V)&\rightarrow P_i(V)\otimes_k \operatorname{Sym}} \def\Sp{\operatorname{Sp}_k(H\otimes_k V^\vee)\\ P_{0}(V)&\rightarrow S^{\langle\lambda\rangle}(V)\otimes_k \operatorname{Sym}} \def\Sp{\operatorname{Sp}_k(H\otimes_k V^\vee) \end{align*} which by $\operatorname {GL}(H)$-equivariance must necessarily be obtained from $\Sp(V)$-equivariant maps \begin{align*} P_{i+1}(V)&\rightarrow P_i(V)\\ P_{0}(V)&\rightarrow S^{\langle\lambda\rangle}(V) \end{align*} We conclude that \eqref{ref-5.8-46} is of the form \begin{equation} \label{ref-5.9-47} (\cdots \rightarrow P_2(V)\rightarrow P_1(V)\rightarrow P_0(V)\rightarrow S^{\langle\lambda\rangle} V\rightarrow 0)\otimes_k {\cal O}_X \end{equation} in a way which is compatible with $\operatorname {GL}(H)\times \Sp(V)$-actions. Restricting to $X_0$ we see that \[ \cdots \rightarrow P_2(V)\rightarrow P_1(V)\rightarrow P_0(V)\rightarrow S^{\langle\lambda\rangle} V\rightarrow 0 \] must be exact. But then \eqref{ref-5.9-47} is also exact and hence so is \eqref{ref-5.8-46}. \end{proof} \begin{lemmas} \label{ref-5.2.5-48} Let $\lambda\in B_{r/2,n-r}$. The counit map \[ \Phi\Phi^!(S^{\langle \lambda\rangle}V\otimes_k {\cal O}_X) \rightarrow S^{\langle \lambda\rangle}V\otimes_k {\cal O}_X \] is an isomorphism. \end{lemmas} \begin{proof} We have \[ \tilde{p}_{s}^!(S^{\langle \lambda\rangle}V\otimes_k {\cal O}_X) =S^{\langle \lambda\rangle}V\otimes_k \omega_{E/X}. \] Hence we have to show that the counit map \[ L\tilde{q}_s^\ast R{\tilde{q}}_{s,\ast} (S^{\langle \lambda\rangle}V\otimes_k \omega_{E/X}) \rightarrow S^{\langle \lambda\rangle}V\otimes_k \omega_{E/X} \] becomes an isomorphism after applying $R\tilde{p}_{s,\ast}$.
Using \eqref{ref-5.1-37} we see that it is sufficient to prove that \[ L\tilde{q}_s^\ast R{\tilde{q}}_{s,\ast} (S^{\langle \lambda\rangle}V\otimes_k {\cal O}_E) \rightarrow S^{\langle \lambda\rangle}V\otimes_k {\cal O}_E \] becomes an isomorphism after applying $R\tilde{p}_{s,\ast}(\omega_{E/X}\otimes_X-)$. This is precisely Lemma \ref{ref-5.2.4-44}. \end{proof} \begin{proof}[Proof of Theorem \ref{ref-5.2.1-30}] \begin{enumerate} \item[\eqref{ref-6-36}] We have by \eqref{ref-5.1-37} and \eqref{ref-5.5-42} \begin{align*} \Phi^!(S^{\langle \lambda\rangle} V\otimes_k{\cal O}_X)&=R\tilde{q}_{s,\ast}(\omega_{E/X}\otimes_E (S^{\langle \lambda\rangle} V\otimes_k{\cal O}_E))\\ &=\pi^\ast(\det {\cal Q})^{\otimes r-n} \otimes_Z {\cal N}_Z(\lambda)\\ &={\cal M}_Z(\lambda). \end{align*} \item[\eqref{ref-5-35}] Using \eqref{ref-5.1-37}\eqref{ref-5.2-38} and Lemma \ref{ref-5.2.4-44} we have \begin{align*} \Phi({\cal M}_Z(\lambda))&=R\tilde{p}_{s,\ast}(\omega_{E/X} \otimes_E L\tilde{q}_{s}^\ast {\cal N}_Z(\lambda))\\ &=S^{\langle\lambda\rangle} V\otimes_k R\tilde{p}_{s,\ast}\omega_{E/X}\\ &=S^{\langle\lambda\rangle} V\otimes_k {\cal O}_X. \end{align*} \item[\eqref{ref-4-34}] Using \eqref{ref-5.1-37}\eqref{ref-5.2-38} and Lemma \ref{ref-5.2.3-39} we have \begin{align*} \Phi(\pi^\ast((\det {\cal Q})^{\otimes r-n} \otimes_F S^\lambda {\cal Q}))&=R\tilde{p}_{s,\ast}(\omega_{E/X}\otimes_E L(\pi\tilde{q}_s)^\ast(S^\lambda {\cal Q}))\\ &=S^\lambda V\otimes_k R\tilde{p}_{s,\ast}(\omega_{E/X})\\ &=S^\lambda V\otimes_k {\cal O}_X. \end{align*} \item[\eqref{ref-1-31}] We use Lemma \ref{ref-5.1.1-28}. So we have to prove that the counit map $\Phi\Phi^{!}(A)\rightarrow A$ is an isomorphism for every object of the form $A=\Phi(B)$ with $B\in {\cal D}(Z)$. It is clearly sufficient to check this for $B$ running through a set of generators of ${\cal D}(Z)$. The sheaves $(\det {\cal Q})^{\otimes r-n}\otimes_F S^\lambda{\cal Q}$ for $\lambda\in B_{r,n-r}$ generate ${\cal D}(F)$ \cite{Kapranov3}. Hence since $Z\rightarrow F$ is affine it follows that the sheaves $\pi^\ast((\det {\cal Q})^{\otimes r-n}\otimes_F S^\lambda{\cal Q})$ generate ${\cal D}(Z)$. By \eqref{ref-4-34} we have $\Phi(\pi^\ast((\det {\cal Q})^{\otimes r-n} \otimes_F S^\lambda {\cal Q})) \cong S^\lambda V\otimes_k {\cal O}_X$ and $S^\lambda V$ is a sum of $S^{\langle\mu\rangle}V$ with $\mu_1\le\lambda_1$, for example by careful inspection of the formula \cite[\S2.4.2]{HTW}. It now suffices to invoke Lemma \ref{ref-5.2.5-48} (or, with a bit of handwaving, \eqref{ref-5-35}\eqref{ref-6-36}). \item[\eqref{ref-2-32}] This has been proved as part of \eqref{ref-1-31}. \item[\eqref{ref-3-33}] By \cite[Thm 3.3(3r)]{Kuznetsov3} it follows that $\operatorname {im} \Phi^!=\Phi^!(\operatorname {im} \Phi)$. It now suffices to invoke \eqref{ref-2-32}\eqref{ref-6-36}.}\end{proof \end{enumerate} \def}\end{proof{}\end{proof} \begin{proof}[Proof of Theorem \ref{ref-1.2-6}] \begin{enumerate} \item Since by Theorem \ref{ref-5.2.1-30}(6) ${\cal M}_Z(\lambda),{\cal M}_Z(\mu)\in \operatorname {im} \Phi^!$ we have by Theorem \ref{ref-5.2.1-30}(5) \begin{align*} \operatorname {Ext}^i_Z({\cal M}_Z(\lambda),{\cal M}_Z(\mu))&=\operatorname {Ext}^i_{X/\Sp(V)}(\Phi({\cal M}_Z(\lambda)),\Phi({\cal M}_Z(\mu)))\\ &=\operatorname {Ext}^i_{X/\Sp(V)}(S^{\langle \lambda\rangle}V\otimes_k {\cal O}_X,S^{\langle \mu\rangle}V\otimes_k {\cal O}_X) \end{align*} which is zero for $i>0$ (since $\Sp(V)$ is reductive). 
Note that we also find \begin{equation} \label{ref-5.10-49} \begin{aligned} \operatorname {Hom}_Z({\cal M}_Z(\lambda),{\cal M}_Z(\mu))&= \operatorname {Hom}_X(S^{\langle \lambda\rangle}V\otimes_k {\cal O}_X,S^{\langle \mu\rangle}V\otimes_k {\cal O}_X)^{\Sp(V)}\\ &\cong\operatorname {Hom}_R(M(\lambda),M(\mu)) \end{aligned} \end{equation} by \cite[Lemma 4.1.3]{SpenkoVdB}. \item We have by \eqref{ref-5.1-37}\eqref{ref-5.2-38} \begin{align*} Rp_\ast {\cal M}_Z(\lambda)&=Rp_\ast R\tilde{q}_{s,\ast}(\omega_{E/X}\otimes_k S^{\langle \lambda\rangle }V)\\ &=Rq_{s,\ast} R\tilde{p}_{s,\ast}(\omega_{E/X}\otimes_k S^{\langle \lambda\rangle }V)\\ &=Rq_{s,\ast}( S^{\langle \lambda\rangle }V\otimes_k R\tilde{p}_{s,\ast}(\omega_{E/X}))\\ &=Rq_{s,\ast} (S^{\langle \lambda\rangle }V\otimes_k {\cal O}_X)\\ &=(S^{\langle \lambda\rangle }V\otimes_k {\cal O}_X)^{\Sp(V)} \end{align*} Taking global sections yields what we want. \item By \eqref{ref-5.10-49} and \eqref{ref-1.5-8} both sides of \eqref{ref-1.6-10} are reflexive $R$-modules. Since~$p_\ast$ induces an isomorphism on $Y_0$ between both sides of \eqref{ref-1.6-10} (viewed as sheaves on $Y$) and $\codim(Y-Y_0)\ge 2$, \eqref{ref-1.6-10} must be an isomorphism. \end{enumerate} \end{proof}
\section{Symmetric matrices}\label{symsec} In this section we present the modifications needed to treat determinantal varieties of symmetric matrices.
We keep the same notation as in the introduction, but now we equip $V$ with a symmetric bilinear form so that $r=\dim V$ does not need to be even, $Y$ is the variety of $n\times n$ symmetric matrices of rank $\leq r$, $G=O(V)$,
while
$X=\operatorname {Hom}(H,V)$, $T=\operatorname{Sym}_k(H\otimes V^\vee)$ remain the same, and we put $R=T^{O(V)}$. By the fundamental theorems for the orthogonal
group we have $Y\cong \operatorname {Spec} R$.
If $\chi$ is a partition with $\chi^t_1+\chi^t_2\leq r$, where
$\chi^t$ denotes the transpose partition, we write $S^{[\chi]}V$
for the corresponding irreducible representation of $O(V)$ (see
\cite[\S 19.5]{FH}), and call such a partition admissible. By
$\chi^\sigma$ we denote the conjugate partition of $\chi$; i.e.,
$(\chi^\sigma)^t_1=r-\chi^t_1$, $(\chi^\sigma)^t_k=\chi^t_k$ for $k>1$. Note that either $l(\chi)\le r/2$ or $l(\chi^\sigma)\le r/2$. We have $S^{[\lambda^\sigma]}V=\det V\otimes_k S^{[\lambda]} V$ \cite[\S6.6, Lemma 2]{Procesi3}.
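For instance, if $r=3$ then $\chi=(2,1)$ is admissible since $\chi^t=(2,1)$ and $\chi^t_1+\chi^t_2=3\le r$; its conjugate is $\chi^\sigma=(2)$ (indeed $(\chi^\sigma)^t=(r-\chi^t_1,\chi^t_2)=(1,1)$), and here $l(\chi)=2>r/2$ while $l(\chi^\sigma)=1\le r/2$. On the other hand $(1,1,1,1)$ is not admissible since its first two column lengths add up to $4>r$.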
In \cite{SpenkoVdB} a non-commutative resolution of $R$ has been constructed, which is crepant in case $n$ and $r$ have opposite parity. Let $B_{k,l}^{a}$ denote the set of admissible partitions in $B_{k,l}$. We put \begin{equation} \label{ref-6-1} M=\bigoplus_{{{\chi}}\in B_{r,\lfloor (n-r)/2\rfloor +1 }^{a}} M({{\chi}}), \end{equation} where $M(\chi)=(S^{[\chi]} V\otimes_k T)^{O(V)}$
and write $\Lambda=\operatorname {End}_R(M)$.
\begin{theorem} One has $\operatorname {gl\,dim}\Lambda<\infty$.
$\Lambda$ is a non-commutative crepant resolution of $R$ if $n$ and $r$ have opposite parity.\footnote{In case $n$, $r$ have the same parity then there is a \emph{twisted} non-commutative crepant resolution. We do not consider such resolutions in this paper.} \end{theorem}
In the symmetric case we also have an analogous Springer resolution where we adapt the definitions in the obvious way. The fundamental theorems for the orthogonal group yield $\operatorname{Sym}_k(Q\otimes V)^{O(V)}\cong \operatorname{Sym}_k(\operatorname{Sym}^2(Q))$. We only slightly change the definition of $M_Q(\chi)$, now \[ M_Q({{\chi}})=\det(V)^{\gamma_{r,n}}\otimes(\det Q)^{\otimes r-n}\otimes_k (S^{[ {{\chi}}]} V\otimes_k \operatorname{Sym}_k(Q\otimes_k V^\vee))^{O(V)}, \] where $\gamma_{r,n}=0$ (resp. $\gamma_{r,n}=1$) if $r$ and $n$ have the same (resp. opposite) parity. As in the skew symmetric case ${\cal M}_{Z}({{\chi}})=\widetilde{M_Q({{\chi}})}\in \mathop{\text{\upshape{coh}}}(\operatorname {GL}(H),Z)$.
To give an analogue of Proposition \ref{ref-3.2-14} we need to adapt the definitions of $\tau_r(\lambda)$, $i_r(\lambda)$ following \cite[\S 4.4]{WeymanSam}. The differences (denoted by D1, D2, D3 in loc.\ cit.) are that we remove border strips $R_\lambda$ of size $2 l(\lambda)-r$ instead of $2l(\lambda)-r-2$ and in the definition of $i_r(\lambda)$ we use $c(R_\lambda)-1$ instead of $c(R_\lambda)$. Finally if the total number of border strips removed is odd, then we replace the end result $\mu$ with $\mu^\sigma$.
With these modifications and replacing $B_{r/2,n-r}$ by $B_{r,n-r}^{a}$ Proposition \ref{ref-3.2-14} remains true also in the symmetric case by \cite[Corollary 4.23]{WeymanSam} in the case $r$ is odd, and by \cite[(4.2), Theorem 4.4]{WeymanSam} in the case $r$ is even. Also Corollary \ref{ref-3.6-19} remains valid. In its proof we only need to additionally note that one can also remove a border strip of size $l(\lambda)$ (which affects the first row) but this can only happen in the case $\lambda=(1^r)$ and in this case, since the number of borders strips removed is odd, $\tau_r(\lambda)=(0)^\sigma=\lambda$. In particular, $\tau_r(\lambda)_1=\lambda_1$ still holds.
We now present the modifications needed in the statements of the other results. \begin{itemize} \item In Theorem \ref{ref-1.2-6} we replace $B_{r/2,n-r}$ by $B_{r,n-r}^{a}$.
\item In Theorem \ref{ref-5.2.1-30} we replace $S^{\langle \lambda\rangle}V$ by $S^{[ {{\lambda}}]} V$, and $B_{r/2,n-r}$ by $B_{r,n-r}^{a}$. Item (4) needs to be modified as \[ \Phi(\pi^\ast((\det {\cal Q})^{\otimes r-n} \otimes_F S^\lambda {\cal Q}))\cong S^\lambda V\otimes_k (\det V)^{\gamma_{r,n}}\otimes_k {\cal O}_X\,. \] \item In Lemma \ref{ref-5.2.2} we have
\[ \omega_{E/X}=(\det V)^{\gamma_{r,n}}\otimes (\pi\tilde{q})^\ast (\det {\cal Q})^{\otimes r-n} \]
as $\operatorname {GL}(H)\times O(V)$-equivariant coherent sheaves. \end{itemize}
One can easily check that the proofs obtained in the skew symmetric case also apply almost verbatim in the symmetric case. \appendix \section{More on the resolution of ${N_Q}(\chi)$ in the symplectic case} \label{ref-A-50}
We refer to Remark \ref{sam2} for an alternative approach, suggested to us by Steven Sam, towards the results in this Appendix. We believe that our elementary arguments are still of independent interest.
Recall that a partition has Frobenius coordinates $(a_1,\ldots,a_u;b_1,\ldots,b_u)$, $a_1>\cdots>a_u\ge 1$, $b_1>\cdots>b_u\ge 1$ if for all $i$ the box $(i,i)$ has arm length $a_i-1$ and leg length $b_i-1$. Let ${Q_{-1}}(m)$ be the set of partitions $\chi$ with
$|\chi|=m$ whose Frobenius coordinates are of the form $(a_1,\ldots,a_u{{;}} a_1{+}1,\ldots,a_u{+}1)$.
For partitions $\delta,\chi$ such that $l(\delta)$, $l(\chi)\le r/2$ put
$(\delta|\chi):=(\delta_1,\ldots,\delta_{r/2},\chi_1,\ldots,\chi_{r/2})$ with the latter being viewed as a weight for $\operatorname {GL}(Q)$. For $\alpha\in {Q_{-1}}(2k)$, $\beta\in {Q_{-1}}(2(k-1))$, $l(\alpha),l(\beta)\le r/2$ we put $\beta\subset_2 \alpha$ if $\beta\subset \alpha$ and $\alpha/\beta$ does not consist of two boxes next to each other.
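For instance, $(1,1)$ has Frobenius coordinates $(1;2)$ and $(2,1,1)$ has Frobenius coordinates $(2;3)$, and one checks that ${Q_{-1}}(2)=\{(1,1)\}$, ${Q_{-1}}(4)=\{(2,1,1)\}$; in the decomposition $\wedge^k(\wedge^2 R)\cong\bigoplus_{\mu\in {Q_{-1}}(2k)}S^\mu R$ used below (see \eqref{ref-3.2-18}, \eqref{ref-A.6-57}) this records $\wedge^2 R=S^{(1,1)}R$ and $\wedge^2(\wedge^2 R)\cong S^{(2,1,1)}R$.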
For $\chi$ a partition with $l(\chi)\le r/2$ and $\chi_{r/2}\ge r/2-1$ put \[
S_{\chi,k}=\{(\chi|\mu)\mid \mu\in {Q_{-1}}(2k), l(\mu)\le r/2\}\,. \] Note
that if $\mu\in {Q_{-1}}(2k)$ and $l(\mu)\le r/2$ then $\mu_1\le r/2-1$. Hence all elements of $S_{\chi,k}$ are dominant. For $\pi=(\chi|\alpha)\in S_{\chi,k}$, $\tau=(\chi|\beta)\in S_{\chi,k-1}$ put $\tau\subset_2\pi$ if $\beta\subset_2\alpha$. If $\tau\subset_2\pi$ then by the Pieri rule $S^\pi Q$ is a summand with multiplicity one of $\wedge^2 Q\otimes_k S^\tau Q$.
We call any non-zero $\operatorname {GL}(Q)$-equivariant map \[ \phi_{\pi,\tau}:S^\pi Q\rightarrow \wedge^2 Q\otimes_k S^\tau Q \] a Pieri map. Needless to say, a Pieri map is only determined up to a non-zero scalar. By analogy with \cite[\S7]{VdB100} we call a collection of Pieri-maps $\phi_{\pi,\tau}$ such that $\tau\subset_2 \pi$ a Pieri system. We say that two Pieri systems $\phi_{\pi,\tau}$, $\phi'_{\pi,\tau}$ are equivalent if there exist non-zero scalars $(c_\sigma)_\sigma$ such that \[ \phi'_{\pi,\tau}=\frac{c_\tau}{c_\pi}\phi_{\pi,\tau}\, \] for all $\pi,\tau$. We will now make Proposition \ref{ref-3.2-14} more explicit for partitions with $\chi_{r/2}\ge r/2-1$. \begin{proposition} \label{prop:uniqueness}
Assume $\chi$ is a partition with $l(\chi)\le r/2$ and $\chi_{r/2}\ge r/2-1$. Then
$N_{Q}(\chi)$ has a $\operatorname {GL}(Q)$-equivariant resolution $P_\bullet$ as a $\operatorname{Sym}(\wedge^2
Q)$-module such that \[ P_k=\bigoplus_{\pi\in S_{\chi,k}}S^{\pi}
Q\otimes_k \operatorname{Sym}_k(\wedge^2 Q) \] and such that the differential $P_k\rightarrow P_{k-1}$ is the sum of maps for $\tau\subset_2\pi$: \begin{equation} \label{ref-A.1-51} S^\pi Q\otimes_k \operatorname{Sym}(\wedge^2 Q)\xrightarrow{\phi_{\pi,\tau}\otimes 1} S^\tau Q\otimes_k \wedge^2 Q\otimes_k \operatorname{Sym}(\wedge^2 Q) \rightarrow S^\tau Q\otimes_k \operatorname{Sym}(\wedge^2 Q) \end{equation} where the $(\phi_{\pi,\tau})_{\pi,\tau}$ are Pieri maps and the last map is obtained from the multiplication $\wedge^2 Q\otimes_k \operatorname{Sym}(\wedge^2 Q)\rightarrow\operatorname{Sym}(\wedge^2 Q)$. Moreover every choice of Pieri maps such that the compositions $P_k\rightarrow P_{k-1}\rightarrow P_{k-2}$ are zero yields isomorphic resolutions, and the isomorphism is given by scalar multiplication. \end{proposition} \begin{proof} We will first discuss uniqueness up to scalar multiplication of maps in the resolutions. The condition that \eqref{ref-A.1-51} forms a complex may be expressed as follows. For $\pi\in S_{\chi,k}$, $\sigma\in S_{\chi,k-2}$ put \begin{equation} \label{ref-A.2-52} \{(\tau_i)_i\in I\}:=\{\tau\in S_{\chi,k-1}\mid \sigma\subset_2 \tau\subset_2 \pi\}\,. \end{equation} Then \eqref{ref-A.1-51} forms a complex if and only if the compositions \begin{equation} \label{ref-A.3-53} S^\pi Q\xrightarrow{(\phi_{\pi,\tau_i})_i} \bigoplus_i \wedge^2 Q\otimes S^{\tau_i} Q \xrightarrow{(1\otimes\phi_{\tau_i,\sigma})_i} \wedge^2 Q\otimes \wedge^2 Q\otimes S^{\sigma} Q\rightarrow S^2(\wedge^2 Q )\otimes S^{\sigma} Q \end{equation} are zero. We must show that any two Pieri-systems satisfying \eqref{ref-A.3-53} are equivalent.
Let $\alpha\in {Q_{-1}}(2k)$, $\beta\in {Q_{-1}}(2(k-1))$. We may express the relation $\beta\subset_2 \alpha$ in
terms of Frobenius coordinates. If $\alpha=(a_1,\ldots,a_u{;}a_1+1,\ldots,a_u+1)$ and $\beta=(b_1,\ldots,b_v{;}b_1+1,\ldots,b_v+1)$ then $\beta\subset_2\alpha$ if and only if $u=v$ and $(a_1,\ldots,a_u)=(b_1,\ldots,b_t+1,\ldots,b_v)$ for some $t$, or else $u=v+1$ and $(a_1,\ldots,a_{u})=(b_1,\ldots,b_v,1)$. From this it follows in particular that \eqref{ref-A.2-52} contains at most two elements.
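For example, for $\beta=(1,1)$ and $\alpha=(2,1,1)$, with Frobenius coordinates $(1;2)$ and $(2;3)$, we have $u=v=1$ and $a_1=b_1+1$, so $\beta\subset_2\alpha$; accordingly $\alpha/\beta$ consists of the two boxes $(1,2)$ and $(3,1)$, which are not next to each other.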
Like in the proof of \cite[Prop.\ 7.1(iv)]{VdB100} we can now build a contractible cubical complex $\PP$ with vertices $\cup_k S_{\chi,k}$ and edges the pairs $\tau\subset_2\pi$ such that if $\phi_{\pi,\tau}$, $\phi'_{\pi,\tau}$ are two Pieri-systems satisfying \eqref{ref-A.1-51} then $\phi'_{\pi,\tau}/\phi_{\pi,\tau}$ is a 1-cocycle for $\PP$. Since $\PP$ is contractible this 1-cocycle is a coboundary which turns out to express exactly that $\phi'_{\pi,\tau}$ and $\phi_{\pi,\tau}$ are equivalent.
We now discuss the existence of $P_\bullet$. To this end we introduce some notation.
Let $G$ be the Grassmannian of $r/2$ dimensional quotients of $Q$ and let
${\cal P}$, ${\cal S}$ be respectively the universal quotient and subbundle on~$G$. The resolution of ${N_Q}(\chi)$ constructed in \cite[Lemma 3.11, Lemma 3.12, Prop.\ 3.13]{WeymanSam} (denoted by $M_\chi$ in loc.\ cit.) using the ``geometric method'' is now obtained by applying $\Gamma(G,-\otimes_G S^\chi{\cal P})$ to the Koszul complex \[ \wedge^\bullet (\wedge^2 {\cal S}) \otimes_k \operatorname{Sym}(\wedge^2 Q) \] obtained from the inclusion $\wedge^2{\cal S}\subset {\cal O}_G\otimes_k \wedge^2 Q$. So the resulting complex is \begin{equation} \label{ref-A.4-54} \Gamma(G,\wedge^\bullet(\wedge^2{\cal S})\otimes_G S^\chi {\cal P})\otimes_k \operatorname{Sym}(\wedge^2 Q)\,. \end{equation}
Using the decomposition \begin{equation} \label{ref-3.2-18} \wedge^{k}(\wedge^2{\cal S})\cong \bigoplus_{\mu\in {Q_{-1}}(2k)} S^\mu{\cal S}\, \end{equation}
we obtain from Lemma \ref{ref-A.2-56} below that the differential in \eqref{ref-A.4-54} is given by the composition \begin{multline} \label{ref-A.5-55} \Gamma(G,S^\chi {\cal P}\otimes_G S^\alpha {\cal S}) \xrightarrow{\phi_{\alpha,\beta,{\cal S}}} \Gamma(G,S^\chi {\cal P}\otimes_G \wedge^2{\cal S} \otimes_G S^\beta {\cal S}) \hookrightarrow\\ \Gamma(G,S^\chi {\cal P}\otimes_G (\wedge^2 Q\otimes_k S^\beta {\cal S})) =\Gamma(G,S^\chi {\cal P}\otimes_G S^\beta {\cal S}) \otimes_k \wedge^2Q \end{multline} where $\phi_{\alpha,\beta,{\cal S}}$ is a Pieri map. Now for each pair $(\chi,\alpha)\in S_{\chi,k}$
choose an isomorphism $\Gamma(G,S^\chi {\cal P}\otimes_G S^\alpha {\cal S})\cong S^{(\chi|\alpha)} Q$. Then \eqref{ref-A.5-55} becomes a $\operatorname {GL}(Q)$-equivariant morphism \[
\phi_{\chi,\alpha,\beta}:S^{(\chi|\alpha)}Q
\rightarrow S^{(\chi|\beta)}Q
\otimes_k \wedge^2 Q\,. \] \begin{sublemma} If $\beta\subset_2\alpha$ then $\phi_{\chi,\alpha,\beta}$ is not zero and hence it is a Pieri map. \end{sublemma} \begin{proof}
In \eqref{ref-A.5-55} $\phi_{\alpha,\beta,{\cal S}}$ is a monomorphism. So it induces a monomorphism on global sections. The composition of two monomorphisms is again a monomorphism. This can only be zero if its source is zero, which is not the case since $(\chi|\alpha)\in S_{\chi,k}$ is dominant.
the unipotent subgroup of $\operatorname {GL}(R)$ given by upper triangular
matrices with 1's on the diagonal, written in the basis
$\{e_1,\ldots,e_n\}$. In other words $u\in U$ if and only if $u\cdot e_i=e_i+\sum_{j<i} \lambda_j e_j$ for $i=1,\ldots,n$.
The $U$-invariant vectors in $\wedge^k(\wedge^2 R)$ corresponding to the decomposition \begin{equation} \label{ref-A.6-57} \wedge^k(\wedge^2 R)\cong\bigoplus_{\alpha\in {Q_{-1}}(2k)} S^\alpha R \end{equation}
were explicitly written down in \cite[Prop. 2.3.9]{WeymanBook}. To explain this let $\alpha\in {Q_{-1}}(2k)$ and write it in Frobenius coordinates as $(a_1,\ldots,a_u{;}a_1+1,\ldots,a_u+1)$. Then the highest weight vector of the $S^\alpha R$-component in \eqref{ref-A.6-57} is given by $u_\alpha:= \bigwedge_{i< j\le i+a_i} v_{ij}$ for $v_{ij}=e_i\wedge e_j$ (we do not care about the sign of $u_\alpha$ so the ordering of the product is unimportant). If we represent $\alpha$ by a Young diagram then the index set of the exterior product corresponds to the boxes strictly below the diagonal, which makes it easy to visualize why $u_\alpha$ is $U$-invariant and why it has weight $\alpha$ for the maximal torus of diagonal matrices in $\operatorname {GL}(R)$.
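For instance, for $\alpha=(2,1,1)$, with Frobenius coordinates $(2;3)$, the recipe gives $u_\alpha=v_{12}\wedge v_{13}$; the two factors correspond to the boxes $(2,1)$ and $(3,1)$ of $\alpha$ strictly below the diagonal, and the weight of $u_\alpha$ is indeed $(2,1,1,0,\ldots,0)=\alpha$.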
We have $\phi(u_\alpha)=\sum_{ij}\pm v_{ij}\otimes \hat{u}_{\alpha,ij}$ where $\hat{u}_{\alpha,ij}$ is obtained from $u_{\alpha}$ by removing the factor $v_{ij}$. Thus $\phi_{\alpha,\beta}(u_\alpha) =\sum_{ij}\pm v_{ij}\otimes \mathop{\text{pr}}\nolimits_\beta(\hat{u}_{\alpha,ij})$ where $\mathop{\text{pr}}\nolimits_\beta:\wedge^{k-1}(\wedge^2 R) \rightarrow S^\beta R$ is the projection. Since the $v_{ij}$ are linearly independent in $\wedge^2 R$ it follows that $\phi_{\alpha,\beta}(u_\alpha)$ can only be zero if $\mathop{\text{pr}}\nolimits_\beta(\hat{u}_{\alpha,ij})$ is zero for all $i,j$. Now if $\beta\subset_2\alpha$ then there exist $i,j$ such that $\hat{u}_{\alpha,ij}=\pm u_{\beta}$. Since by definition $\mathop{\text{pr}}\nolimits_\beta(u_\beta)=u_\beta\neq 0$ we obtain $\mathop{\text{pr}}\nolimits_\beta(\hat{u}_{\alpha,ij})\neq 0$ and thus also $\phi_{\alpha,\beta}(u_\alpha)\neq 0$. \end{proof}
\end{document}
\begin{document}
\begin{abstract} R.~S.~Kulkarni showed that a finite group acting pseudofreely, but not freely, preserving orientation, on an even-dimensional sphere (or suitable sphere-like space) is either a periodic group acting semifreely with two fixed points, a dihedral group acting with three singular orbits, or one of the polyhedral groups, occurring only in dimension 2. It is shown here that the dihedral group does not act pseudofreely and locally linearly on an actual $n$-sphere when $n\equiv 0\mod 4$. The possibility of such an action when $n\equiv 2\mod 4$ and $n>2$ remains open. Orientation-reversing actions are also considered. \end{abstract} \dedicatory{Dedicated to Jos\'e Mar\'ia Montesinos on the occasion of his 65th birthday} \title{Pseudofree Group Actions on Spheres}
\section{Introduction}
The focus of this note is the question of what finite groups can act pseudofreely (but not freely) on some sphere. Recall that a group action is pseudofree if the fixed point set of each non-identity element is discrete.
A good part of this question was already answered by R.~S.~Kulkarni for pseudofree actions on (cohomology) manifolds with the homology of a sphere. But questions about the existence of certain actions on actual spheres were left open. We quote from Kulkarni. (For consistency with the rest of this paper, we have made minor alterations in the notation.)
\begin{thm}[Kulkarni \cite{Kulkarni1982}] Let $X$ be an admissible space which is a $d$-dimensional $\mathbb{Z}$-cohomology manifold with the mod $2$ cohomology isomorphic to that of an even dimensional sphere $S^{n}$. Let $G$ be a finite group acting pseudofreely on $X$ and trivially on $H^{*}(X;\mathbb{Q})$. Then either \begin{enumerate}[a.] \item $G$ acts semifreely with two fixed points and has periodic cohomology of period $d$, or \item $G\approx $ a dihedral group of order $2k$, $k$ odd, or \item $n=2$ and $G\approx $ a dihedral, tetrahedral, octahedral or icosahedral group. \end{enumerate} \end{thm}
Actions of the first type in the theorem arise as suspensions of free actions. Actions of the third type arise as classical actions on the $2$-sphere. Kulkarni, however, remarked (p. 222) that he did not know whether a dihedral group of order $2k$, $k$ odd, actually can act pseudofreely on a $\mathbb{Z}$-cohomology manifold which is a $\mathbb{Z}_{2}$-cohomology sphere of even dimension $>2$.
It should be remarked that aside from the standard actions on spheres $S^{1}$ and $S^{2}$ there are no such pseudofree \emph{linear} actions.
We restrict attention primarily to locally linear actions on actual closed manifolds, and we find it useful to consider the slightly broader class of \emph{tame} actions. We understand a pseudofree action of a finite group $G$ to be \emph{tame} if each point $x$ has a disk neighborhood invariant under the isotropy group $G_{x}$.
We denote the dihedral group of order $2k$ by $D_{k}$ and the cyclic group of order $k$ by $C_{k}$. We fix an expression of $D_{k}$ as a semidirect product with group extension \[ 1\to C_{k} \to D_{k} \to C_{2} \to 1 \] where the quotient $C_{2}$ acts on the normal subgroup $C_{k}$ by inversion. There are $k$ involutions in $D_{k}$, all conjugate, and all determining the various splittings as a semidirect product.
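Concretely, we may use the presentation
\[
D_{k}=\langle a, b \mid a^{k}=b^{2}=1,\ bab^{-1}=a^{-1}\rangle,
\]
with $C_{k}=\langle a\rangle$ the normal subgroup of the extension above; the $k$ involutions are then the elements $a^{i}b$, $0\le i<k$.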
\begin{thm}\label{thm:nonexistence} The dihedral group $D_{k}$ of order $2k$, $k$ odd, does not act locally linearly or tamely, pseudofreely, and preserving orientation, on $S^{n}$ when $n\equiv 0\mod 4$. \end{thm}
In fact the argument shows that there do not exist orientation-preserving actions of $D_{k}$ on closed $n$-manifolds, $n\equiv 0\mod 4$, having exactly three orbit types $(2,2,k)$, i.e., with isotropy groups $C_{2}, C_{2}, C_{k}$.
We do not consider here the possibilities for nontrivial pseudofree actions on higher dimensional manifolds with the cohomology of $S^{n}$, such as $S^{n}\times\mathbb{R}^{m}$.
It remains to ponder the case $n\equiv 2\mod 4$, which encompasses the classical actions on the $2$-sphere. In this case we obtain a weak positive result.
\begin{thm}\label{thm:existence} If $n\equiv 2\mod 4$ and $k$ is an odd positive integer, then there is a smooth, closed, orientable $n$--manifold on which the dihedral group $D_{k}$ of order $2k$ acts smoothly, pseudofreely, preserving orientation, with exactly three singular orbits of types $(2,2,k)$. \end{thm}
The argument shows that when $n\ge 6$ such $n$-manifolds can be chosen to be $2$-connected. But it remains an open question whether the manifold can be chosen to be a sphere or a mod 2 homology sphere.
We also consider the case of orientation-reversing pseudofree actions on spheres. \begin{thm}\label{thm:orientation_reversing_cases} Suppose that a finite group $G$ acts locally linearly and pseudofreely on a sphere $S^{n}$, with some elements of $G$ reversing orientation. If $n$ is odd, then $G$ must be a dihedral group and $n\equiv 1\mod 4$; and if $n$ is even, then $G$ must be a periodic group with a subgroup of index $2$.\end{thm}
When $n$ is odd the prototype is the standard action of the dihedral group on a circle, but we do not know if there are analogous actions in higher odd dimensions. The existence is closely related to the existence of orientation-preserving actions in neighboring even dimensions.
When $n$ is even, these kinds of actions arise as ``twisted suspensions'' of free actions. Such actions by even order cyclic groups have been studied and classified by S.~E.~Cappell and J.~L.~Shaneson \cite{CappellShaneson1978}, in the piecewise linear case, and by S.~Kwasik and R.~Schultz \cite{KwasikSchultz1990, KwasikSchultz1991}, in the purely topological case.
\section{Proof of Theorem \ref{thm:nonexistence}} Suppose that $D_{k}$ acts pseudofreely on $S^{n}$. Such an action cannot be free since $D_{k}$ does not satisfy the Milnor condition that every element of order two lies in the center. Similarly $n$ cannot be odd. For otherwise a nontrivial isotropy group would act freely, preserving orientation, on an even-dimensional sphere linking a point with nontrivial isotropy group. But this would violate the Lefschetz fixed point theorem.
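Indeed, an orientation-preserving homeomorphism $g$ of an even-dimensional sphere $S^{2m}$ has Lefschetz number
\[
L(g)=1+\deg (g)=1+1=2\neq 0,
\]
so $g$ has a fixed point; hence no nontrivial finite group can act freely on an even-dimensional sphere preserving orientation.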
So henceforth we assume that $n>2$ and $n$ is even. Now by the Lefschetz Fixed Point theorem we also conclude that every nontrivial element of $D_{k}$ has exactly two fixed points. The two fixed points of an element of order $k$ are interchanged by all the elements of order $2$, since otherwise the dihedral group would act freely on a linking sphere to one of the fixed points. On the other hand the cyclic subgroup $C_{k}$ permutes the fixed points of the elements of order two in two orbits of size $k$. Now removing small invariant disk neighborhoods of the singular set and passing to the quotient we obtain an $n$-manifold $Y^{n}$ whose boundary consists of two homotopy real projective $(n-1)$-spaces $P_{1}$ and $P_{2}$ and a single homotopy lens space $L_{k}$. We have $\pi_{1}(Y)=D_{k}$, $\pi_{1}(P_{j})\approx C_{2}$, and $\pi_{1}(L_{k})=C_{k}$.
The regular covering over $Y$ is classified by a map $f:Y\to K(D_{k},1)$. Note that although there are $k$ different subgroups of order $2$, they are all conjugate in $D_{k}$ and hence there is a well-defined inclusion-induced homomorphism $H_{*}(C_{2})\to H_{*}(D_{k})$, as well as the usual homomorphism $H_{*}(C_{k})\to H_{*}(D_{k})$.
We will make use of the following elementary, well-known, homology calculation. The proof is an exercise in the spectral sequence of the split extension $ 1\to C_{k}\to D_{k}\to C_{2}\to 1. $
\begin{prop}\label{prop:trivialcoefs} For $k$ an odd integer, the homology of $D_{k}$ with $\mathbb{Z}$ coefficients is given by \[ H_{q}(D_{k};\mathbb{Z})= \begin{cases} \mathbb{Z} & \text{for } q=0\\ \mathbb{Z}/2 & \text{for } q\equiv 1\mod 4\\ \mathbb{Z}/2k & \text{for } q\equiv 3\mod 4\\ 0 & \text{for even } q>0 \end{cases} \] Moreover, when $ q\equiv 3\mod 4$, the inclusion $C_{k}\to D_{k}$ induces an injection $$ \mathbb{Z}_{k}=H_{q}(C_{k};\mathbb{Z})\to H_{q}(D_{k};\mathbb{Z})=\mathbb{Z}_{2k}$$ and the projection $D_{k}\to C_{2}$ induces a surjection $$ \mathbb{Z}_{2k}=H_{q}(D_{k};\mathbb{Z})\to H_{q}(C_{2};\mathbb{Z})=\mathbb{Z}_{2}.$$ \qed \end{prop}
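For instance, for $k=3$ we have $D_{3}\cong S_{3}$ and the proposition gives $H_{1}(D_{3};\mathbb{Z})=\mathbb{Z}/2$, $H_{2}=0$, $H_{3}=\mathbb{Z}/6$, $H_{4}=0$, repeating with period $4$, in agreement with the classical computation for the symmetric group $S_{3}$.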
\begin{proof}[Completion of Proof of Theorem \ref{thm:nonexistence}] The proof when $n\equiv 0\mod 4$ follows easily from Proposition \ref{prop:trivialcoefs}. Under the classifying map $f:Y\to K(D_{k},1)$ restricted to $\partial Y$, $f_{*}[P_{i}]$ is the element of order $2$ in $H_{n-1}(D_{k})=\mathbb{Z}/2k$, for $i=1,2$. The element $f_{*}[L_{k}]$ is an element of order $k$. But obviously $f_{*}[P_{1}]+f_{*}[P_{2}]+f_{*}[L_{k}]=0$, since the classifying map is defined on all of $Y$. This implies $f_{*}[L_{k}]=0$, a contradiction. \end{proof}
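To spell out the arithmetic in the last step of the proof: the unique element of order $2$ in $\mathbb{Z}/2k$ is the class of $k$, so $f_{*}[P_{1}]+f_{*}[P_{2}]=k+k\equiv 0$, and the relation $f_{*}[P_{1}]+f_{*}[P_{2}]+f_{*}[L_{k}]=0$ then forces $f_{*}[L_{k}]=0$, contradicting the fact that $f_{*}[L_{k}]$ is an element of order $k$.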
\begin{remark} This argument shows that when $n\equiv 0\mod 4$ there is no orientation-preserving, pseudofree action of $D_{k}$ on any closed, orientable, $n$-manifold of type $(2,2,k)$, i.e., having singular orbit structure consisting of one orbit with isotropy group $C_{k}$ and two orbits with isotropy groups of order $2$. \end{remark}
\section{Proof of Theorem \ref{thm:existence}} We will also need a somewhat less precise statement for oriented bordism. \begin{prop}\label{prop:} For $k$ odd and $q\equiv 1\mod 4$, the map $\Omega_{q}(BC_{k})\to \Omega_{q}(BD_{k})$ induced by inclusion is zero. \end{prop} \begin{proof} We indicate a proof, based on ``big guns''. According to results of Thom and Milnor, the oriented cobordism ring $\Omega_{*}$ is finitely generated, has no odd order torsion, and is finite except in dimensions divisible by $4$. Indeed all torsion elements have order exactly $2$, and the torsion subgroup is finitely generated in each dimension. Moreover, modulo torsion, $\Omega_{*}$ is a polynomial algebra with one generator in each dimension $\equiv 0\mod 4$. Rationally these generators can be taken to be the complex projective spaces $\mathbb{C}P^{2m}$. For these and related facts, we refer to R.~E.~Stong \cite{Stong1968}.
Now we can calculate both $\Omega_{q}(BC_{k})$ and $\Omega_{q}(BD_{k})$ via the Atiyah-Hirzebruch spectral sequences \[ H_{i}(BC_{k};\Omega_{j})\Rightarrow \Omega_{i+j}(BC_{k}) \] and \[ H_{i}(BD_{k};\Omega_{j})\Rightarrow \Omega_{i+j}(BD_{k}). \] Since $k$ is odd, the groups $H_{i}(BC_{k};\Omega_{j})$ are zero unless $j\equiv 0\mod 4$, according to the above remarks. Therefore suppose $j\equiv 0\mod 4$.
Now $q=i+j$ is congruent to $1\mod 4$ if and only if $i\equiv 1\mod 4$. But then $H_{i}(BC_{k};\Omega_{j})$ is $k$--torsion, since $H_{i}(BC_{k};\mathbb{Z})$ is $k$-torsion (for $i\ne 0$), while $H_{i}(BD_{k};\Omega_{j})$ is $2$--torsion by the Universal Coefficient formula. Therefore, in all such cases, with $i+j\equiv 1\mod 4$, the map \[ H_{i}(BC_{k};\Omega_{j})\to H_{i}(BD_{k};\Omega_{j}) \] between the spectral sequences is trivial. The result follows by comparison of spectral sequences. \end{proof}
\begin{cor} If $L$ is a homotopy lens space of dimension $q\equiv 1\mod 4$ and fundamental group $\pi_{1}(L)=C_{k}$, then the composition $L\to BC_{k}\to BD_{k}$ of the classifying map with the inclusion-induced map is null-bordant.\qed \end{cor}
\begin{proof}[Proof of Theorem \ref{thm:existence}] We show how to construct a smooth, pseudofree action of $D_{k}$ on a smooth $n$-manifold, for any $n\equiv 2\mod 4$, with this same orbit structure. At this writing we are not sure whether the manifold can be chosen to be a sphere or even a mod 2 homology sphere, however, when $n>2$. We conjecture that it cannot.
In dimension $2$ there is a standard $D_{k}$ action on the $2$--sphere such that when one removes invariant disk neighborhoods of the singular points and passes to the orbit space one has a disk with two holes, i.e., a pair of pants. One boundary circle has isotropy type $C_{k}$ and the other two boundary circles have isotropy type $C_{2}$. One can think of the classifying map we examined above as given by choosing two distinct elements of order $2$ in $D_{k}$, with product of order $k$.
We now consider higher dimensions $n\equiv 2\mod 4$. Start with a disjoint union of two real projective $(n-1)$-spaces $P_{1}$ and $P_{2}$ and a single lens space $L_{k}$. We define a regular $D_{k}$ covering of $P_{1}\sqcup P_{2}\sqcup L_{k}$ by mapping $P_{i}\to K(D_{k},1)$, representing the nonzero element of $H_{n-1}(D_{k})=\mathbb{Z}/2$, that is taking the canonical map $P_{i}\to K(C_{2},1)$ followed by a map $K(C_{2},1)\to K(D_{k},1)$ induced by an inclusion $C_{2}\to D_{k}$. Similarly we take a standard inclusion $L_{k}\to K(C_{k},1)$ composed with the natural map $K(C_{k},1)\to K(D_{k},1)$. Note that $\mathbb{R}P^{n-1}$ admits an orientation-reversing diffeomorphism, hence represents an element of order $2$ in $\Omega_{n-1}(BC_{2})$.
It follows from the preceding remarks that the combined map $P_{1}\sqcup P_{2}\sqcup L_{k}\to K(D_{k},1)$ is null-bordant. Indeed, $P_{1}\sqcup P_{2}\to K(D_{k},1)$ and $L_{k}\to K(D_{k},1)$ are separately null-bordant.
Choose such a manifold with the desired boundary and a $D_{k}$ covering extending the given one. Passing to the $2k$-fold covering and capping off all the boundary spheres with disks provides the required $n$-manifold with pseudofree action of $D_{k}$ with the desired singular orbit structure. \end{proof}
\begin{remark}Of course we can arrange that the manifold constructed is connected, by forming the connected sum of components in the orbit space and noting that the resulting classifying map for the covering, $W^{n}\to BD_{k}$, must be surjective on fundamental group. We can also easily arrange that the manifold with group action constructed above be simply connected. Just use surgery to kill the normal subgroup of the fundamental group of the oriented manifold with boundary $P_{1}\sqcup P_{2}\sqcup L_{k}$ with quotient group $D_{k}$. One can further arrange that the manifold with group action is $2$-connected. According to Stong \cite{Stong1968}, for example, $\Omega_{q}^{\text{Spin}}$, like $\Omega_{q}$, has no odd order torsion and has elements of infinite order only in dimensions divisible by $4$. Using this, the preceding spectral sequence argument shows that $\Omega_{q}^{\text{Spin}}(BC_{k})\to \Omega_{q}^{\text{Spin}}(BD_{k})$ is $0$ for $k$ odd
and $q\equiv 1\mod 4$. From this it follows that the $n$-manifold with group action can be chosen to be $2$-connected (when $n\ge 6$) by arranging that the orbit manifold is spin and then doing spin surgery on $0$-, $1$- and $2$-spheres in the orbit space. \end{remark}
\section{Proof of Theorem \ref{thm:orientation_reversing_cases}: The orientation-reversing cases} Suppose a finite group $G$ acts pseudofreely on $S^{n}$, but with not every element of $G$ preserving orientation. Let $H<G$ be the subgroup of index two that does preserve orientation.
\subsection{Dimension $n$ odd} The fundamental example in this case is the action of the dihedral group on the unit circle.
It follows from the Lefschetz Fixed Point Formula that each orientation-reversing element has exactly two fixed points. The pseudofree condition implies that nontrivial, orientation-preserving elements act without fixed points. That is, the subgroup $H$ acts freely. Thus each orientation-reversing element has order two, since the square of an orientation-reversing element has fixed points and the orientation-preserving subgroup acts freely. Moreover, each orientation-reversing element $x\in G- H$ acts on $y\in H$ by inversion, for $xy$ is again orientation-reversing and hence of order two, so $xyxy=e\Rightarrow xyx=y^{-1}$. The fact that inversion is an automorphism of $H$ implies that $H$ is abelian. Since $H$ acts freely on a sphere it satisfies the property that every subgroup of order $p^{2}$ is cyclic. It follows that $H$ is cyclic of some order, hence that $G$ is dihedral.
It remains to decide whether one can actually construct such orientation-reversing dihedral actions in higher dimensions. If there were such an orientation-reversing action in an odd dimension $n$, then one could promote it to an orientation-preserving, ``tame'' pseudofree action in dimension $n+1$ by twisted suspension. It then follows from Kulkarni's result that $H$ has odd order. We would therefore conclude that $n+1\equiv 2\mod 4$, by Theorem \ref{thm:nonexistence}. Thus we have ruled out $n\equiv 3\mod 4$, and must have $n\equiv 1\mod 4$.
\subsection{Dimension $n$ even} It follows from the Lefschetz Fixed Point Formula that no orientation-reversing (pseudofree) element has a fixed point.
In this case, by the results from the preceding section, the orientation-preserving subgroup $H$ must be one of those described by Kulkarni.
\subsubsection{$H$ a periodic group acting semifreely with two fixed points} Deleting the two $H$ fixed points, we see that $G$ acts freely on $S^{n-1}\times\mathbb{R}$, and hence $G$ must have periodic cohomology and has $H$ as a subgroup of index 2, as required. \qed
\begin{remark} A standard example arises when $G$ acts freely, preserving orientation, on an odd dimensional equatorial sphere. Such an action can be extended by twisted suspension, using a projection $G\to \{\pm 1\}$. Cappell and Shaneson \cite{CappellShaneson1978} argue that every PL pseudofree action of $\mathbb{Z}_{2N}$ is a twisted suspension. They also point out that not every such action is equivalent to a twisted suspension, for instance, for the quaternion group of order 8. Note also that an even order periodic group need not have a subgroup of index 2. An example is the binary icosahedral group, which is perfect. \end{remark}
\subsubsection{$H$ a dihedral group $D_{k}$, acting with three singular orbits of types $2,2,k$} Note that we have already ruled this case out when $n\equiv 0\mod 4$. With the extra orientation-reversing elements, we are able to rule out such actions in all cases when $n\equiv 0\mod 2$, as we now explain.
Now a transfer argument shows that the orbifold $S^{n}/H$ has the rational homology of $S^{n}$, since $H$ acts homologically trivially. In particular, $\chi(S^{n}/H)=2$. On the other hand, the same transfer argument shows that the orbifold $S^{n}/G$ has the rational homology of a point, since $G$ acts homologically nontrivially. In particular, $\chi(S^{n}/G)=1$.
It follows that the action of $G/H\approx C_{2}$ on $S^{n}/H$ has no fixed points. On the other hand, the action of $G/H\approx C_{2}$ on $S^{n}/H$ must preserve the image of the \emph{three} $H$ singular orbits. And one of these singular orbits (at least the one of type $k$) must be fixed by $G/H$. This contradiction completes the proof. \qed
\providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace} \providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR }
\providecommand{\MRhref}[2]{
\href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2} } \providecommand{\href}[2]{#2}
\end{document}
\begin{document}
\title{Quantum correlation generation capability of experimental processes}
\author{Wei-Hao Huang} \thanks{These authors contributed equally to this work.} \affiliation{Department of Engineering Science, National Cheng Kung University, Tainan 70101, Taiwan} \affiliation{Center for Quantum Frontiers of Research $\&$ Technology, National Cheng Kung University, Tainan 70101, Taiwan}
\author{Shih-Hsuan Chen} \thanks{These authors contributed equally to this work.} \affiliation{Department of Engineering Science, National Cheng Kung University, Tainan 70101, Taiwan} \affiliation{Center for Quantum Frontiers of Research $\&$ Technology, National Cheng Kung University, Tainan 70101, Taiwan}
\author{Chun-Hao Chang} \thanks{These authors contributed equally to this work.} \affiliation{Department of Engineering Science, National Cheng Kung University, Tainan 70101, Taiwan} \affiliation{Center for Quantum Frontiers of Research $\&$ Technology, National Cheng Kung University, Tainan 70101, Taiwan}
\author{Tzu-Liang Hsu} \affiliation{Department of Engineering Science, National Cheng Kung University, Tainan 70101, Taiwan} \affiliation{Center for Quantum Frontiers of Research $\&$ Technology, National Cheng Kung University, Tainan 70101, Taiwan}
\author{Kuan-Jou Wang} \affiliation{Department of Engineering Science, National Cheng Kung University, Tainan 70101, Taiwan} \affiliation{Center for Quantum Frontiers of Research $\&$ Technology, National Cheng Kung University, Tainan 70101, Taiwan}
\author{Che-Ming Li} \email{[email protected]} \affiliation{Department of Engineering Science, National Cheng Kung University, Tainan 70101, Taiwan} \affiliation{Center for Quantum Frontiers of Research $\&$ Technology, National Cheng Kung University, Tainan 70101, Taiwan}
\date{\today}
\begin{abstract} Einstein-Podolsky-Rosen (EPR) steering and Bell nonlocality illustrate two different kinds of correlations predicted by quantum mechanics. They not only motivate the exploration of the foundation of quantum mechanics, but also serve as important resources for quantum-information processing in the presence of untrusted measurement apparatuses. Herein, we introduce a method for characterizing the creation of EPR steering and Bell nonlocality for dynamical processes in experiments. We show that the capability of an experimental process to create quantum correlations can be quantified and identified simply by preparing separable states as test inputs of the process and then performing local measurements on single qubits of the corresponding outputs. This finding enables the construction of objective benchmarks for the two-qubit controlled operations used to perform universal quantum computation. We demonstrate this utility by examining the experimental capability of creating quantum correlations with the controlled-phase operations on the IBM Quantum Experience and Amazon Braket Rigetti superconducting quantum computers. The results show that our method provides a useful diagnostic tool for evaluating the primitive operations of nonclassical correlation creation in noisy intermediate scale quantum devices. \end{abstract}
\maketitle
\section{\label{sec:introduction}Introduction}
Quantum computation relies on the properties of quantum mechanics to achieve computational speeds that are unattainable with ordinary digital computing~\cite{feynman2018simulating, deutsch1985quantum, lloyd1996universal, divincenzo2000physical, nielsen2002quantum, georgescu2014quantum}. Quantum technologies have now become so well developed that they have enabled the building of intermediate-size computers with 50-100 qubits. However, the noise produced in the quantum gates of such computers limits the quantum results obtained, and the error rate may be so high as to be almost impossible to evaluate. As a result, these computers have come to be known as noisy intermediate-scale quantum (NISQ) devices~\cite{preskill2018quantum, bruzewicz2019trapped, kjaergaard2020superconducting}. Various physical platforms have been presented for the implementation of quantum computers, ranging from silicon-based systems~\cite{zwanenburg2013silicon, kane1998silicon, ladd2002all} to superconducting circuits~\cite{madsen2022quantum, xiang2013hybrid} constructed with Josephson junctions~\cite{makhlin2001quantum, nakamura1999coherent, krantz2019quantum} and trapped ions using laser pulses~\cite{haffner2008quantum, cirac1995quantum}. Thanks to these developments, various quantum computers have now been commercialized, including the IBM Quantum (IBM Q) Experience~\cite{ibmq} and Amazon Web Services (AWS) Amazon Braket~\cite{aws}.
However, it is essential to go beyond simply boosting the number of physical qubits on a quantum processor to enable quantum information processing (QIP)~\cite{knill2008randomized}. For example, benchmarking different physical platforms typically considers the entire range of critical parameters that affect qubits and influence the current and future capabilities of the quantum processor architecture~\cite{cross2019validating, jurcevic2021demonstration, blume2013robust, blume2017demonstration, benedetti2019generative, chen2014qubit, huang2019fidelity, xue2019benchmarking, veldhorst2014addressable}. Typically, these parameters include not only performance metrics, such as the qubit connectivity, overall gate speed, and gate fidelity, but also the qubit manufacturability. In the short term, it may be sufficient to develop practical benchmarks across different platforms based simply on the figures of merit demonstrated by certain applications~\cite{knill2008randomized, cross2019validating, knill2001benchmarking, wright2019benchmarking, mccaskey2019quantum, harrigan2021quantum}. However, to avoid optimizing or tuning quantum devices such that their performance is maximized only under certain specific benchmarking methods, it is desirable to develop a standardized set of algorithmic benchmarks based on a wider consideration of multiple resource constraints~\cite{cross2019validating}.
Quantum correlations, such as quantum entanglement~\cite{einstein1935can, horodecki2009quantum}, Einstein-Podolsky-Rosen (EPR) steering~\cite{schrodinger1935discussion, wiseman2007steering, uola2019quantum}, and Bell nonlocality~\cite{bell1964einstein, clauser1969proposed, brunner2014bell}, can be considered as such resource constraints for developing benchmarks. These resources have been successfully demonstrated for various applications, including quantum teleportation~\cite{bennett1993teleporting, bouwmeester1997experimental}, quantum secret sharing~\cite{hillery1999quantum, chen2005experimental}, and one-way quantum computing~\cite{PhysRevLett.86.5188, raussendorf2003measurement, walther2005experimental} (entanglement); one-sided device-independent quantum key distribution~\cite{branciard2012one} and randomness certification~\cite{uola2019quantum} (EPR steering); and quantum cryptography~\cite{ekert1991quantum}, communication complexity problem~\cite{brunner2014bell, brukner2004bell, buhrman2010nonlocality}, and device-independent QIP~\cite{brunner2014bell, de2014nonlocality} (Bell nonlocality).
\begin{figure*}\label{concept}
\end{figure*}
These three quantum correlations are involved in many information processing tasks. Therefore, the processes of generating, preserving, distributing, and applying quantum correlations in QIP are of great interest and importance~\cite{ma2016converting, piani2008no}. Moreover, the figures of merit demonstrated in QIP tasks quantify whether the experimental resources and their processes (including channels) are sufficiently qualified for such quantities. These quantities can thus be regarded as essential for developing useful benchmarks across platforms.
Considerable efforts have been made in determining and quantifying quantum resources, such as entanglement~\cite{vogel1989determination, leonhardt1997measuring, peres1996separability, Horodecki_1996, PhysRevLett.80.2245, PhysRevA.65.032314}, steerability~\cite{cavalcanti2009experimental, skrzypczyk2014quantifying, piani2015necessary, gallego2015resource}, and Bell nonlocality~\cite{bell1964einstein, clauser1969proposed, brunner2014bell, de2014nonlocality, eberhard1993background, popescu1994quantum, toner2003communication, pironio2003violations, van2005statistical, acin2005optimal, junge2010operator, hall2011relaxed, chaves2012multipartite, fonseca2015measure, chaves2015unifying, ringbauer2016experimental, montina2016information, brask2017bell, gallego2017nonlocality, brito2018quantifying}, in order to evaluate the trustworthiness of QIP. Furthermore, much work has been done on identifying and quantifying different dynamical quantum processes by, for example, identifying whether a given quantum process can create maximum entangled states from separable states~\cite{campbell2010optimal}, witnessing non-entanglement-breaking quantum channels~\cite{moravvcikova2010entanglement, zhen2020unified, mao2020experimentally}, quantifying the properties of quantum channels according to deductive methods~\cite{hsieh2017quantifying, kuo2019quantum}, and utilizing channel resource theory~\cite{chitambar2019quantum, rosset2018resource, theurer2019quantifying, takagi2019general, yuan2021universal, liu2019resource, gour2019quantify, liu2020operational, hsieh2020resource, saxena2020dynamical, takagi2020application, uola2020quantification}. Particularly, channel resource theory, which is extended from resource theory for quantum states~\cite{chitambar2019quantum} and constructed as a mathematical structure for channel resource, characterizes whether the resources are generated or preserved in the process of interest, and quantifies the channel resources in a completely positive (CP) and trace-preserving (TP) channel~\cite{gour2019quantify, Liu20202020, hsieh2020resource, Saxena20202020}.
However, using channel resource theory to quantify the ability of an experimental process in experimentally feasible ways faces several challenges, as discussed in Ref.~\cite{chen2021quantifying}. First, non-TP experimental processes, e.g., the photon fusion process~\cite{Zeilinger19981998, Weinfurter20122012}, cannot be analyzed. Second, whether an experimental process can be simulated by classical theory~\cite{hsieh2017quantifying} cannot be assessed, since this question is not related to the state resources on which channel resource theory is built, and the theory offers no definition of such a process characteristic. In addition, quantifying the capability of a process for certain tasks of interest, e.g., the preservation capability, requires the preparation of specific resources such as entangled states. Finally, the optimization of the ancillary system and of the superchannel (which includes interactions between the main system and the ancillary systems), needed for the output state to show the clearest difference between the experimental process and the free superchannels (under which the resource is nonincreasing), cannot necessarily be realized in experiment.
Considering the experimental feasibility, in this work we use quantum process capability (QPC) theory~\cite{hsieh2017quantifying,kuo2019quantum} rather than channel resource theory to experimentally quantify the ability of the process. The QPC method is experimentally feasible: it requires only certain separable input states to obtain complete knowledge of the experimental process from primitive measurements of the output states. We review it in the following: 1.~QPC theory classifies processes into capable processes, which show the quantum-mechanical effect on a system prescribed by the specification, and incapable processes, which are unable to satisfy the specification at all. Different from channel resource theory, the QPC method follows the quantum operations formalism to characterize the experimental process by quantum process tomography (QPT)~\cite{nielsen2002quantum, ChuangPrescription} and then quantifies its QPC. 2.~QPC theory can be used to analyze not only CPTP processes but also non-TP CP processes~\cite{nielsen2002quantum, kuo2019quantum}. Furthermore, QPC theory describes and defines the capability of the whole process to cause quantum-mechanical effects on physical systems as a process characteristic. 3.~QPC theory can be used to characterize the process and quantify its capability for different QIP tasks. For example, the work done by Hsieh \textit{et~al.}~\cite{hsieh2017quantifying} and Chen \textit{et~al.}~\cite{chia} showed that the analysis of quantum characteristics from a process perspective is more comprehensive than that from a state perspective.
Nonetheless, despite the progress made in the studies above in understanding quantum processes, it remains unclear how best to examine the experimental capabilities of creating quantum correlations such as EPR steering and Bell nonlocality. As described above, these resources play a central role in QIP applications. Moreover, they have immediate practical use in quantum computers. However, when it comes to the practical implementation of quantum computers, environmental disturbances or unexpected experimental conditions can bring the output system into compliance with the laws of classical physics. In the worst case, this may cause the calculations to deviate enormously from the expected results~\cite{preskill2018quantum, blais2007quantum, wendin2017quantum, li2019tackling, bharti2021noisy}.
To determine and quantify the quantum properties of processes that generate quantum correlations, a fundamental question arises as to the extent to which such non-local, correlation-generating processes can be mimicked by classical means. The answer has profound implications for how faithfully QIP tasks can be implemented, and it can be employed immediately for cross-platform benchmark development on NISQ devices.
To answer this question, we investigate herein the problem of quantifying the generating processes of quantum correlations through the use of tomography tools and numerical methods. To narrow the scope of the study, we focus particularly on just two quantum correlation resources, namely EPR steering and Bell nonlocality. We classify the output states according to different trust situations for both scenarios. For a process with no capability (ability) to generate EPR steering (Bell nonlocality), separable input states remain unsteerable (Bell-local), and Alice's (both) output states are considered untrusted. See Fig.~\ref{concept}. We show that the proposed approach provides a viable means of quantifying QIP tasks and can be used as the basis for benchmark development on various real-world NISQ devices, including the IBM Q Experience~\cite{ibmq} and AWS Amazon Braket~\cite{aws}.
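As a rough illustration of this test procedure (prepare a separable two-qubit input, apply the process under examination, and read out each qubit locally in a Pauli basis), the following Python sketch builds one such circuit with Qiskit. It is only a schematic of our own: the choice of a controlled-phase gate as the process under test, the gate set, and the phase angle are illustrative assumptions and not the exact circuits executed on the devices studied below.
\begin{verbatim}
import numpy as np
from qiskit import QuantumCircuit

def tomography_circuit(prep=('+', '0'), basis=('X', 'Z'), theta=np.pi):
    """Prepare a separable two-qubit input, apply a CPHASE(theta) gate as the
    process under test, and rotate each qubit into the requested Pauli basis
    before a computational-basis readout."""
    qc = QuantumCircuit(2, 2)
    # Single-qubit preparations: eigenstates of Z (0/1), X (+/-), and Y (R/L).
    prep_ops = {'0': [], '1': ['x'],
                '+': ['h'], '-': ['x', 'h'],
                'R': ['h', 's'], 'L': ['h', 'sdg']}
    for qubit, label in enumerate(prep):
        for gate in prep_ops[label]:
            getattr(qc, gate)(qubit)
    # Process under test: a two-qubit controlled-phase gate.
    qc.cp(theta, 0, 1)
    # Local basis changes so that a Z readout realizes the chosen Pauli measurement.
    for qubit, pauli in enumerate(basis):
        if pauli == 'X':
            qc.h(qubit)
        elif pauli == 'Y':
            qc.sdg(qubit)
            qc.h(qubit)
    qc.measure([0, 1], [0, 1])
    return qc
\end{verbatim}
Repeating such circuits over all $36$ separable input preparations and the $3\times 3$ local Pauli measurement settings yields the outcome statistics used for the tomographic analysis described in the following sections.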
The remainder of this paper is organized as follows. Section~\ref{sec:quantum correlations} reviews the nonclassical properties of EPR steering and Bell nonlocality, respectively. Section~\ref{sec:cpm} describes the use of quantum state tomography (QST) and QPT~\cite{nielsen2002quantum, ChuangPrescription} to perform the characterization of two-qubit experimental processes. Sections~\ref{sec:sgp} and~\ref{sec:bgp} introduce the steering generating process and Bell nonlocality generating process, respectively. Section~\ref{sec:MQIGPQC} presents several quantifiers, identifiers, and fidelity criteria for evaluating the capability of a process to generate EPR steering and Bell nonlocality quantum correlations. Section~\ref{sec:QOQCGP} presents the quantification and identification results obtained for the quantum correlation generation capability of a two-qubit controlled-phase (CPHASE) shift gate implemented on real-world NISQ devices. Finally, Section~\ref{sec:cao} provides some brief concluding remarks and indicates the intended direction of future research.
\section{\label{sec:quantum correlations}Quantum correlations}
\subsection{EPR steering}
We begin by describing two-qubit states with and without EPR steering, respectively. In identifying the quantum correlations of a composite system, different quantum correlations are associated with different levels of trust in the measurement devices. In the case where only Bob's local measurement device is trusted, the violation of a steering inequality can be used to prove the existence of EPR steering. When seeking to identify EPR steering, the aim is to determine whether a local measurement device held by Alice can steer the system of another local measurement device held by Bob to a particular state.
Wiseman \textit{et al.} proved that EPR steering exists in a shared pair if, and only if, the correlations between the measurement result of Bob and that announced by Alice cannot be described by the local hidden state (LHS) model~\cite{wiseman2007steering}. According to this model, the unnormalized state corresponding to the state of Bob's system conditioned on Alice's measurement, ${\rho^{(\rm{B})}_{l}}$, is described as \begin{equation}\label{neweq:2.13}
P({v^{(\rm{A})}_{k}}){\rho^{(\rm{B})}_{l}}=\sum_{\mu}P({v^{(\rm{A})}_{k}}|{\lambda_\mu})P(\lambda_\mu)\rho^{(\rm{B})}_{\lambda_\mu} \ \ \ \ \ \ \ \forall k, l, \end{equation} where $v^{(\rm{A})}_{k}$ denotes the measurement outcome of Alice's $k$th measurement $V^{(\rm{A})}_{k}$ corresponding to the observable $\hat{V}^{(\rm{A})}_{k}$ with a probability $P({v^{(\rm{A})}_{k}})$, $\lambda_\mu$ represents Alice's local hidden variable,
$P({v^{(\rm{A})}_{k}}|{\lambda_\mu})$ is the probability of finding $v^{(\rm{A})}_{k}$ conditioned on ${\lambda_\mu}$, and $\rho^{(B)}_{\lambda_\mu}$ is Bob's local hidden state corresponding to $\lambda_\mu$. Given Alice's $k$th measurement $V^{(\rm{A})}_{k}$ and outcome $v^{(\rm{A})}_{k}$, any assemblage $\{P({v^{(\rm{A})}_{k}}){\rho^{(\rm{B})}_{l}}\}_{kl}$ of unnormalized states produced in Bob's system due to Alice's measurement which cannot be represented in the form of Eq.~(\ref{neweq:2.13}) is said to be steerable. Considering $P(v^{(\rm{B})}_{l})=\text{tr}(\ket{v^{(\rm{B})}_{l}}\!\!\bra{v^{(\rm{B})}_{l}\!} {\rho^{(B)}_{l}})$, where $v^{(\rm{B})}_{l}$ denotes the measurement outcome of Bob's $l$th observable $\hat{V}^{(\rm{B})}_{l}$ and $\ket{v^{(\rm{B})}_{l}}\!\!\bra{v^{(\rm{B})}_{l}}$ is the eigenstate corresponding to eigenvalue $v^{(\rm{B})}_{l}$ of the observable $\hat{V}^{(\rm{B})}_{l}$, the LHS model describes the correlation of Alice's and Bob's measurement outcomes as the probability in the following, \begin{equation}\label{neweq2}
P(v^{(\rm{A})}_{k}, v^{(\rm{B})}_{l})=\sum_{\mu}P({\lambda_\mu})P({v^{(\rm{A})}_{k}}|{\lambda_\mu})\text{tr}(\ket{v^{(\rm{B})}_{l}}\!\!\bra{v^{(\rm{B})}_{l}\!} {\rho^{(B)}_{\lambda_\mu}}), \end{equation} where $v^{(\rm{A})}_{k}, v^{(\rm{B})}_{l}\in\pm{1}$ for $k,~l=1,2,3$.
Previous works have adopted various approaches for certifying the existence of EPR steering, including using the steerable weight \cite{skrzypczyk2014quantifying} or robustness of steering \cite{piani2015necessary} to quantify the steering of two-qubit states.
\subsection{Bell nonlocality} We next describe the two-qubit state with and without Bell nonlocality~\cite{bell1964einstein}, respectively. Compared to the EPR steering case described above, in which only Alice's measurement device is untrusted, the Bell nonlocality case considers the situation in which neither of the local measurement devices can be trusted.
When both local measurement devices are untrusted, the inability of the local hidden variables (LHV) model~\cite{brunner2014bell} to describe the correlation between the measurements of Alice and Bob, respectively, is seen as evidence of the qubit states with Bell nonlocality. The LHV model describes the correlation of Alice's and Bob's measurement outcomes as the probability in the following, \begin{equation}\label{neweq3}
P(v^{(\rm{A})}_{k}, v^{(\rm{B})}_{l})=\sum_{\mu}P({\lambda_\mu})P({v^{(\rm{A})}_{k}}|{\lambda_\mu})P({v^{(\rm{B})}_{l}}|{\lambda_\mu}), \end{equation} where $v^{(\rm{A})}_{k}$ and $v^{(\rm{B})}_{l}$ denote Alice's and Bob's measurement outcomes of their $k$th and $l$th measurements $V^{(\rm{A})}_{k}$ and $V^{(\rm{B})}_{l}$ corresponding to the observables $\hat{V}^{(\rm{A})}_{k}$ and $\hat{V}^{(\rm{B})}_{l}$, respectively, and $v^{(\rm{A})}_{k}, v^{(\rm{B})}_{l}\in\pm{1}$ for $k,~l=1,2,3$.
In proving the presence of Bell nonlocality, most previous works use Bell inequalities based on Bell tests \cite{bell1964einstein}. However, it is also possible to quantify the Bell nonlocality of states using nonlocal resources~\cite{toner2003communication,pironio2003violations,montina2016information,steiner2000towards,branciard2011quantifying}, statistical strength measures~\cite{van2005statistical,acin2005optimal}, and the tolerance of nonlocal correlations to noise addition~\cite{junge2010operator,brito2018quantifying,kaszlikowski2000violations,acin2002quantum,perez2008unbounded,massar2002nonlocality}.
\section{\label{sec:cpm}Quantum tomography for a two-qubit experimental process}
In the present study, we characterize an experimental process using a QPT algorithm~\cite{nielsen2002quantum, ChuangPrescription}, which takes as its input 36 pure states underlying the two input objects, i.e., $\ket{\boldsymbol\phi_{i_mj_n}}=\ket{\phi_{i_m}}\otimes\ket{\phi_{j_n}}$, where $\ket{\phi_{i_m}}$ and $\ket{\phi_{j_n}}$ for $i,j=1,2,3$ and $m,n=\pm1$ are the eigenstates corresponding to eigenvalues $m$ and $n$ of the observables $\hat{V}_i$ and $\hat{V}_j$, respectively. Each observable set, $\hat{V}_i$ and $\hat{V}_j$, and the identity matrix form an orthonormal set of matrices with respect to the Hilbert-Schmidt inner product. That is, the observables are chosen as $\hat{V}_1=X$, $\hat{V}_2=Y$, and $\hat{V}_3=Z$. The Pauli matrices $X$, $Y$, and $Z$ can be represented as the following spectral decompositions: $X=\ket{+}\!\!\bra{+}-\ket{-}\!\!\bra{-}$, $Y=\ket{R}\!\!\bra{R}-\ket{L}\!\!\bra{L}$, and $Z=\ket{0}\!\!\bra{0}-\ket{1}\!\!\bra{1}$, where $\ket{+}$ and $\ket{-}$ are the eigenstates corresponding to eigenvalues $+1$ and $-1$ of the Pauli-$X$ matrix, $\ket{R}$ and $\ket{L}$ are the eigenstates corresponding to eigenvalues $+1$ and $-1$ of the Pauli-$Y$ matrix, and $\ket{0}$ and $\ket{1}$ are the eigenstates corresponding to eigenvalues $+1$ and $-1$ of the Pauli-$Z$ matrix. Following the description above, we define $\ket{\phi_{1_{1}}} = \ket{+}$, $\ket{\phi_{1_{-1}}} = \ket{-}$, $\ket{\phi_{2_{1}}} = \ket{R}$, $\ket{\phi_{2_{-1}}} = \ket{L}$, $\ket{\phi_{3_{1}}} = \ket{0}$, and $\ket{\phi_{3_{-1}}} = \ket{1}$. Thus, the $36$ input states can be represented as $\ket{\boldsymbol\phi_{i_mj_n}}\!\!\bra{\boldsymbol\phi_{i_mj_n}}=\ket{\phi_{i_m}}\!\!\bra{\phi_{i_m}}\otimes\ket{\phi_{j_n}}\!\!\bra{\phi_{j_n}}$ for $i,j=1,2,3$ and $m,n=\pm1$. We assume that the input states are quantum states due to the high quality of the state preparation process in the superconducting computer~\cite{kjaergaard2020superconducting}, and we aim to examine the capability of a process to generate quantum correlations based only upon an inspection of the output states.
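For concreteness, the $36$ product inputs can be enumerated directly from the Pauli eigenvectors defined above; the following Python/NumPy lines are a minimal sketch of our own bookkeeping and are not part of the tomography algorithm itself.
\begin{verbatim}
import numpy as np
from itertools import product

# Pauli eigenvectors: (i, m) -> |phi_{i_m}>, i = 1, 2, 3 for X, Y, Z and m = +/-1.
eigvecs = {(1, +1): np.array([1, 1]) / np.sqrt(2),    # |+>
           (1, -1): np.array([1, -1]) / np.sqrt(2),   # |->
           (2, +1): np.array([1, 1j]) / np.sqrt(2),   # |R>
           (2, -1): np.array([1, -1j]) / np.sqrt(2),  # |L>
           (3, +1): np.array([1, 0]),                 # |0>
           (3, -1): np.array([0, 1])}                 # |1>

def ket2dm(psi):
    """Density matrix |psi><psi| of a pure state vector."""
    return np.outer(psi, psi.conj())

# The 36 separable test inputs |phi_{i_m}> (x) |phi_{j_n}>, keyed by (i, m, j, n).
input_states = {(i, m, j, n): np.kron(ket2dm(eigvecs[(i, m)]),
                                      ket2dm(eigvecs[(j, n)]))
                for (i, m), (j, n) in product(eigvecs, repeat=2)}
assert len(input_states) == 36
\end{verbatim}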
Through the QPT algorithm, a positive Hermitian matrix can be used to fully characterize the scenario of a physical process acting on a system. For convenience, the Hermitian matrix is referred to hereinafter simply as the process matrix, $\chi_{\rm{expt}}$. The evolution (operation) of the system from an initial state, $\ket{\boldsymbol\phi_{i_mj_n}}$, to an output state, $\rho_{{\rm{out}}|i_mj_n}$, can then be specified in terms of the process matrix, $\chi_{\rm{expt}}$~\cite{nielsen2002quantum}. Here we denote such operation relating the input and output states by: \begin{equation} \label{eq:1}
\chi_{\rm{expt}}(\ket{\boldsymbol\phi_{i_mj_n}}\!\!\bra{\boldsymbol\phi_{i_mj_n}})=\rho_{{\rm{out}}|i_mj_n}, \end{equation} as shown in Fig.~\ref{concept}(a). From QST, the density operator of the output system conditioned on a specific input state $\ket{\phi_{i_m}}\otimes\ket{\phi_{j_n}}$ can be obtained as \begin{equation} \begin{aligned} \label{eq:2}
\rho_{{\rm{out}}|i_mj_n}=&\frac{1}{4}(\hat{I}\otimes\hat{I}\\
&+\!\!\sum^{3}_{k,l=1}\sum_{v^{(\!\rm{A}\!)}_{k},v^{(\!\rm{B}\!)}_{l}=\pm{1}}\!\!\! v^{(\!\rm{A}\!)}_{k}\!v^{(\!\rm{B}\!)}_{l}\!P_{i_mj_n}\!(\!v^{(\!\rm{A}\!)}_{k}\!\!,\!v^{(\!\rm{B}\!)}_{l}\!) \hat{V}^{(\!\rm{A}\!)}_{k}\!\otimes\!\hat{V}^{(\!\rm{B}\!)}_{l}\\
&+\sum^{3}_{k=1}\sum_{v^{(\rm{A})}_{k}=\pm{1}} v^{(\rm{A})}_{k}P_{i_m}(v^{(\rm{A})}_{k}) \hat{V}^{(\rm{A})}_{k}\otimes\hat{I}\\
&+\sum^{3}_{l=1}\sum_{v^{(\rm{B})}_{l}=\pm{1}} v^{(\rm{B})}_{l}P_{j_n}(v^{(\rm{B})}_{l}) \hat{I}\otimes\hat{V}^{(\rm{B})}_{l} ), \end{aligned} \end{equation}
where $\hat{I}$ is the identity operator, and $P_{i_mj_n}(v^{(\rm{A})}_{k},v^{(\rm{B})}_{l})$ is the joint probability of obtaining the measurement result $v^{(\rm{A})}_{k},v^{(\rm{B})}_{l}\in\pm{1}$ for $k,~l=1,2,3$, conditioned on the specific input $\ket{\boldsymbol\phi_{i_mj_n}}$ of two different subsystems (Alice and Bob, respectively). The input states $\ket{\phi_{i_m}}\otimes\ket{\phi_{j_n}}$ are chosen as the eigenstates of three Pauli matrices, i.e., the Pauli-$X$ matrix, Pauli-$Y$ matrix, and Pauli-$Z$ matrix, for $\hat{V}^{(\rm{A})}_{k}$ and $\hat{V}^{(\rm{B})}_{l}$. Given the corresponding output states, $\rho_{{\rm{out}}|i_mj_n}$, the process matrix can be constructed through the QPT algorithm. See Appendix~\ref{app:qptpm} for details.
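A minimal NumPy sketch (ours) of this linear-inversion reconstruction of Eq.~(\ref{eq:2}) is given below; it assumes that, for a fixed input $\ket{\boldsymbol\phi_{i_mj_n}}$, the measured joint probabilities and single-qubit marginals are supplied as Python dictionaries, with variable names chosen here purely for illustration.
\begin{verbatim}
import numpy as np

PAULI = {1: np.array([[0, 1], [1, 0]]),       # X
         2: np.array([[0, -1j], [1j, 0]]),    # Y
         3: np.array([[1, 0], [0, -1]])}      # Z
I2 = np.eye(2)

def reconstruct_output_state(p_joint, p_a, p_b):
    """Linear-inversion QST of a two-qubit output state, following the Pauli
    expansion of the output state given in the text.

    p_joint[(k, l, va, vb)]: joint probability of outcomes va, vb = +/-1 for
                             Alice's k-th and Bob's l-th Pauli observables.
    p_a[(k, va)], p_b[(l, vb)]: the corresponding single-qubit marginals.
    """
    rho = np.kron(I2, I2).astype(complex)     # the identity term
    for k in (1, 2, 3):
        for l in (1, 2, 3):
            for va in (+1, -1):
                for vb in (+1, -1):
                    rho += va * vb * p_joint[(k, l, va, vb)] \
                           * np.kron(PAULI[k], PAULI[l])
    for k in (1, 2, 3):
        for va in (+1, -1):
            rho += va * p_a[(k, va)] * np.kron(PAULI[k], I2)
    for l in (1, 2, 3):
        for vb in (+1, -1):
            rho += vb * p_b[(l, vb)] * np.kron(I2, PAULI[l])
    return rho / 4
\end{verbatim}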
\section{\label{sec:sgp}Steering generating process} In Sec.~\ref{sec:quantum correlations}, we described the concept of two-qubit states with and without EPR steering. In Sec.~\ref{sec:cpm}, we used the positive Hermitian matrix in the QPT algorithm (i.e., the so-called process matrix, $\chi_{\rm{expt}}$) to describe the mapping between the input and output states of a process. This section combines these two concepts to illustrate the steering generation capability of an experimental process. To quantify the steering generation capability of the process, we classify the two-qubit process as either steering generating capable or steering generating incapable using QPC theory \cite{hsieh2017quantifying,kuo2019quantum}.
\textbf{Definition. Incapable process and capable process for steering generating capability.} A process is said to be steering generating incapable, denoted as $\chi_{\mathcal{I}}$, if all the separable input states remain as unsteerable states after the process. Conversely, a process is said to be steering generating capable if the process cannot be described by $\chi_{\mathcal{I}}$ at all.
Since an incapable process $\chi_{\mathcal{I}}$ cannot generate steering from separable states, the output states must be unsteerable states. In the case considered herein, we perform QST on the output states in Eq.~(\ref{eq:2}) according to the QPT algorithm, and thus the measurements we choose are the three Pauli matrices (i.e., Pauli-$X$, Pauli-$Y$, and Pauli-$Z$).
According to the definition of incapable processes, $\chi_{\mathcal{I}}$, the output states in Eq.~(\ref{eq:2}) are unsteerable and can thus be described by the LHS model. For an unsteerable state, the classical state $\textbf{v}^{(\rm{A})}_{\mu}$ can be considered as an object with properties satisfying the assumption of classical realism~\cite{einstein1935can}, i.e., $\textbf{v}^{(\rm{A})}_{\mu}=(\text{v}^{(\rm{A})}_{1},\text{v}^{(\rm{A})}_{2},\text{v}^{(\rm{A})}_{3})$, where $\text{v}^{(\rm{A})}_{1},\text{v}^{(\rm{A})}_{2},\text{v}^{(\rm{A})}_{3} \in ~\{+1,-1\}$ [see Fig.~\ref{concept}(b)]. Thus, the probabilities $P_{i_mj_n}(v^{(\rm{A})}_{k},v^{(\rm{B})}_{l})$ of the output states in Eq.~(\ref{eq:2}) for unsteerable states can be derived from Eq.~(\ref{neweq2}) as \begin{equation} P_{i_mj_n}\!(\!v^{(\rm{A})}_{k}\!\!\!,\!v^{(\rm{B})}_{l}\!)\!\!=\!\!\!
\sum_{\mu}\!\!P(\!\textbf{v}^{(\rm{A})}_{\mu}\!)P(\!v^{(\rm{A})}_{k}\!|\textbf{v}^{(\rm{A})}_{\mu}\!)\rm{tr}(\ket{\textit{v}^{(\rm{B})}_\textit{l}}\!\!\bra{\textit{v}^{(\rm{B})}_\textit{l}\!}\!\rho^{(\rm{B})}_{\mu,\textit{i}_\textit{m}\textit{j}_\textit{n}}\!),\label{eq:5.11} \end{equation} where
$P_{i_mj_n}(v^{(\rm{A})}_{k}|\textbf{v}^{(\rm{A})}_{\mu})$ are the conditional probabilities; $\rho^{(\rm{B})}_{\mu,i_mj_n}$ is the state held by Bob for the specific input $\ket{\boldsymbol\phi_{i_mj_n}}$; and Alice’s measurement results are decided by $\textbf{v}^{(\rm{A})}_{\mu}$. In accordance with the QPT algorithm in Eq.~(\ref{eq:2}), the observables for the QST procedure for $\rm{Alice}$'s particle are chosen as $\hat{V}^{(\rm{A})}_{1}=X, \hat{V}^{(\rm{A})}_{2}=Y, \hat{V}^{(\rm{A})}_{3}=Z$ while those for Bob’s particle are chosen as $\hat{V}^{(\rm{B})}_{1}=X, \hat{V}^{(\rm{B})}_{2}=Y, \hat{V}^{(\rm{B})}_{3}=Z$. From the classical description in Eq.~(\ref{eq:5.11}), we can replace the joint probability $P_{i_mj_n}(v^{(\rm{A})}_{k},v^{(\rm{B})}_{l})$ in Eq.~(\ref{eq:2}) to construct an incapable process, $\chi_{\mathcal{I}}$, for which all the output states are unsteerable states. To concretely quantify the ability of an experimental process to generate steering, we propose in this study two quantifiers and a fidelity criterion, as described in Sec.~\ref{sec:MQIGPQC}.
\section{\label{sec:bgp}Bell nonlocality generating process}
This section illustrates the Bell nonlocality generating ability of a process by combining the concept of Bell nonlocality (see Sec.~\ref{sec:quantum correlations}) and the QPT algorithm (see Sec.~\ref{sec:cpm}). In order to identify the Bell nonlocality generating capability of an experimental process, we classify the two-qubit process as either able or unable to generate Bell nonlocality, using a similar concept of QPC theory \cite{hsieh2017quantifying,kuo2019quantum}.
\textbf{Definition. Unable and able processes for Bell nonlocality generating.} A process is said to be an unable process that cannot generate Bell nonlocality, denoted as $\chi_{\mathcal{U}}$, if all the input separable states remain as Bell-local states following the process. A process is then said to be an able process that can generate Bell nonlocality if it cannot be described by $\chi_{\mathcal{U}}$ at all.
Since an unable process $\chi_{\mathcal{U}}$ cannot generate Bell nonlocality from separable states, the output states must be Bell-local states of which the measurement results can be explained by the LHV model.
The LHV model can explain the correlations between the subsystems belonging to $\rm{Alice}$ and $\rm{Bob}$, respectively. In that case, the output states in Eq.~(\ref{eq:2}) can be regarded as a physical object with properties satisfying the assumption of classical realism~\cite{einstein1935can}. The classical states of output systems satisfy the assumption of realism and can be represented by the realistic sets $(\textbf{v}^{(\rm{A})}_{\zeta},\textbf{v}^{(\rm{B})}_{\eta})=(\text{v}_{1}^{(\rm{A})},\text{v}_{2}^{(\rm{A})},\text{v}_{3}^{(\rm{A})},\text{v}_{1}^{(\rm{B})},\text{v}_{2}^{(\rm{B})},\text{v}_{3}^{(\rm{B})})$, where $\text{v}_{1}^{(\rm{A})},\text{v}_{2}^{(\rm{A})},\text{v}_{3}^{(\rm{A})},\text{v}_{1}^{(\rm{B})},\text{v}_{2}^{(\rm{B})},\text{v}_{3}^{(\rm{B})}\in~\{+1,-1\}$ represent the possible measurement outcomes of the $\zeta$th and $\eta$th physical properties of the classical object [see Fig.~\ref{concept}(c)]. Thus, the probabilities $P_{i_mj_n}(v^{(\rm{A})}_{k},v^{(\rm{B})}_{l})$ of the output states in Eq.~(\ref{eq:2}) for Bell-local states can be derived from Eq.~(\ref{neweq3}) as \begin{equation}
P_{\!i_mj_n\!}\!(\!v\!^{(\rm{A})}_{k}\!\!\!,\!v\!^{(\rm{B})}_{l}\!)\!\!=\!\!\!\sum_{\zeta,\eta}\!\!P_{\!i_mj_n\!}\!(\!\textbf{v}\!^{(\rm{A})}_{\zeta}\!\!,\!\textbf{v}\!^{(\rm{B})}_{\eta}\!)P_{\!i_mj_n\!}\!(\!v\!^{(\rm{A})}_{k}\!|\!\textbf{v}\!^{(\rm{A})}_{\zeta}\!)P_{\!i_mj_n\!}\!(\!v\!^{(\rm{B})}_{l}\!|\!\textbf{v}\!^{(\rm{B})}_{\eta}\!),\label{eq:5.12} \end{equation} where
$P_{i_mj_n}(v^{(\rm{A})}_{k}|\textbf{v}^{(\rm{A})}_{\zeta})$ and $P_{i_mj_n}(v^{(\rm{B})}_{l}|\textbf{v}^{(\rm{B})}_{\eta})$ are the conditional probabilities. With the classical description given in Eq.~(\ref{eq:5.12}), we can replace the joint probability $P_{i_mj_n}(v^{(\rm{A})}_{k},v^{(\rm{B})}_{l})$ in Eq.~(\ref{eq:2}) to construct an unable process, $\chi_{\mathcal{U}}$, for which all the output states are Bell-local states. To concretely identify the ability of an experimental process to generate Bell nonlocality, we propose herein two identifiers and a fidelity criterion, as described in Sec.~\ref{sec:MQIGPQC}.
It is important to note that in constructing an unable process through QPT in Eq.~(\ref{eq:2}), the observables for the QST procedure for $\rm{Alice}$'s particle are selected as $\hat{V}^{(\rm{A})}_{1}=X, \hat{V}^{(\rm{A})}_{2}=Y, \hat{V}^{(\rm{A})}_{3}=Z$ while those for $\rm{Bob}$'s particle must be selected as $\hat{V}^{(\rm{B})}_{1}=U_RX{U_R}^{\dag}, \hat{V}^{(\rm{B})}_{2}=U_RY{U_R}^{\dag}, \hat{V}^{(\rm{B})}_{3}=U_RZ{U_R}^{\dag}$~\cite{Discriminating2020}, where $U_R$ is an arbitrary unitary transformation, i.e., \begin{equation} U_R(\phi,\theta)=\left[ \begin{matrix}
e^{-i\frac{\phi}{2}}\cos(\frac{\theta}{2}) & e^{-i\frac{\phi}{2}}\sin(\frac{\theta}{2}) \\
-e^{i\frac{\phi}{2}}\sin(\frac{\theta}{2}) & e^{i\frac{\phi}{2}}\cos(\frac{\theta}{2})
\end{matrix} \right].\label{S6} \end{equation} The unitary transformation for the observables $\hat{V}^{B}_{j}$ should be chosen as $U_R(0,\pi/4)$ to maximize the difference between the target process and $\chi_{\mathcal{U}}$. Note that we show how to build this unitary transformation in real-world quantum circuits in Sec.~\ref{sec:ecpg}.
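As a small numerical cross-check (ours, in Python/NumPy), the rotation in Eq.~(\ref{S6}) and the rotated observables chosen for Bob can be written out directly; the snippet only verifies unitarity and constructs $U_R V U_R^{\dag}$ for the choice $U_R(0,\pi/4)$.
\begin{verbatim}
import numpy as np

def U_R(phi, theta):
    """The single-qubit rotation U_R(phi, theta) defined in the text."""
    return np.array(
        [[np.exp(-1j * phi / 2) * np.cos(theta / 2),
          np.exp(-1j * phi / 2) * np.sin(theta / 2)],
         [-np.exp(1j * phi / 2) * np.sin(theta / 2),
          np.exp(1j * phi / 2) * np.cos(theta / 2)]])

X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]])

U = U_R(0, np.pi / 4)                            # choice used for Bob's observables
assert np.allclose(U @ U.conj().T, np.eye(2))    # U_R is unitary
V_B = [U @ P @ U.conj().T for P in (X, Y, Z)]    # rotated observables U_R V U_R^dag
\end{verbatim}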
\section{\label{sec:MQIGPQC}Methods for quantifying and identifying generating processes of quantum correlations}
This section introduces two quantifiers, two identifiers, and two fidelity criteria for evaluating the capability of an experimental process to generate EPR steering and Bell nonlocality, respectively.
We introduced the definitions of capable and able processes in Sec.~\ref{sec:sgp} and~\ref{sec:bgp}, respectively. We commence this section by constructing a faithful measure, denoted as $C(\chi_{\rm{expt}})$, for a given experimental process matrix $\chi_{\rm{expt}}$, which can be obtained by preparing specific separable states and local measurements in a real-world experiment. We then present two measures (identifiers) for quantifying (identifying) generating processes of quantum correlations, namely (1) the quantum correlation generating composition (i.e., the steering generating composition $\alpha_{\rm{steer}}$ and Bell nonlocality generating composition $\alpha_{\rm{Bell}}$) and (2) the quantum correlation generating robustness (i.e., the steering generating robustness $\beta_{\rm{steer}}$ and Bell nonlocality generating robustness $\beta_{\rm{Bell}}$). Finally, we propose two fidelity criteria for identifying the quantum correlation generating capability of $\chi_{\rm{expt}}$.
Two measures $C(\chi_{\rm{expt}})$ for faithfully quantifying the capability of steering generating processes, i.e., $\alpha_{\rm{steer}}$ and $\beta_{\rm{steer}}$, should satisfy the following three proper measure conditions~\cite{gour2019quantify,liu2020operational,hsieh2020resource}:\\ \textbf{(MP1) Faithfulness:} $C(\chi)=0$ if, and only if, $\chi$ is an incapable process;\\ \textbf{(MP2) Monotonicity:} $C(\chi\circ\chi_{\mathcal{I}})\leq C(\chi)$, i.e., the measures of steering generating capability of a process $\chi$ do not increase following extension with an incapable process;\\ \textbf{(MP3) Convexity:} $C(\sum_{n}p_{n}\chi\circ\chi_{\mathcal{I}})\leq\sum_{n}p_{n}C(\chi\circ\chi_{\mathcal{I}})$, i.e., the mixing of processes does not increase the steering generating capability of the resulting process.
By contrast, two identifiers $C_{\rm{Bell}}(\chi_{\rm{expt}})$ for faithfully identifying the capability of Bell nonlocality generating processes, i.e., $\alpha_{\rm{Bell}}$ and $\beta_{\rm{Bell}}$, only satisfy the following two measure conditions:\\ \textbf{(MP1) Faithfulness:} $C_{\rm{Bell}}(\chi)=0$ if, and only if, $\chi$ is an unable process;\\ \textbf{(MP3) Convexity:} $C_{\rm{Bell}}(\sum_{n}p_{n}\chi\circ\chi_{\mathcal{U}})\leq\sum_{n}p_{n}C_{\rm{Bell}}(\chi\circ\chi_{\mathcal{U}})$, i.e., the mixing of processes does not increase the Bell nonlocality generating capability of the resulting process.\\ It is important to note that $\alpha_{\rm{Bell}}$ and $\beta_{\rm{Bell}}$ do not satisfy the monotonicity condition \textbf{(MP2)}, but are still useful for identifying the Bell nonlocality generating process.
\subsection{Composition of quantum correlation generating process}
A process matrix, $\chi_{\rm{expt}}$, can be expressed as a linear combination of capable processes $\chi_{\mathcal{C}}$ and incapable processes $\chi_{\mathcal{I}}$, or able processes $\chi_{\mathcal{A}}$ and unable processes $\chi_{\mathcal{U}}$, using a similar concept of QPC theory~\cite{hsieh2017quantifying,kuo2019quantum}. That is, \begin{equation} \label{eq:15} \chi_{\rm{expt}}=\alpha_{\rm{steer}}\chi_{\mathcal{C}}+(1-\alpha_{\rm{steer}})\chi_{\mathcal{I}}, \end{equation} \begin{equation} \label{eq:16} \chi_{\rm{expt}}=\alpha_{\rm{Bell}}\chi_{\mathcal{A}}+(1-\alpha_{\rm{Bell}})\chi_{\mathcal{U}}, \end{equation} where $\alpha_{\rm{steer}}, \alpha_{\rm{Bell}}\ge0$. Let $\alpha_{\rm{steer}}$ and $\alpha_{\rm{Bell}}$ be defined as the minimum amounts of processes that can generate quantum correlations $\chi_{\mathcal{C}}$ and $\chi_{\mathcal{A}}$, respectively, and
have values of $0\ \textless\ \alpha_{\rm{steer}} \leq 1$ for a capable process and $0\ \textless\ \alpha_{\rm{Bell}} \leq 1$ for an able process. In practical experiments, $\alpha_{\rm{steer}}$ and $\alpha_{\rm{Bell}}$ can be obtained by minimizing the respective quantities via semi-definite programming (SDP) with MATLAB~\cite{yalmip, sdpt, mosek}: \begin{equation} \label{eq:17.22} \alpha_{\rm{steer}}=\mathop{{\rm{\min }}}\limits_{{\tilde{\chi }}_{\mathcal{I}}}\,[1-{\rm{tr}}({\tilde{\chi }}_{\mathcal{I}})], \end{equation} \begin{equation} \label{eq:17.23} \alpha_{\rm{Bell}}=\mathop{{\rm{\min }}}\limits_{{\tilde{\chi }}_{\mathcal{U}}}\,[1-{\rm{tr}}({\tilde{\chi }}_{\mathcal{U}})], \end{equation} where ${\tilde{\chi }}_{\mathcal{I}}=(1-\alpha_{\rm{steer}})\chi_{\mathcal{I}}$, ${\tilde{\chi }}_{\mathcal{C}}=\alpha_{\rm{steer}}\chi_{\mathcal{C}}$, ${\tilde{\chi }}_{\mathcal{U}}=(1-\alpha_{\rm{Bell}})\chi_{\mathcal{U}}$, and ${\tilde{\chi }}_{\mathcal{A}}=\alpha_{\rm{Bell}}\chi_{\mathcal{A}}$ are unnormalized process matrices with $\text{tr}({\tilde{\chi }}_{\mathcal{I}})=1-\alpha_{\rm{steer}}$, $\text{tr}({\tilde{\chi }}_{\mathcal{C}})=\alpha_{\rm{steer}}$, $\text{tr}({\tilde{\chi }}_{\mathcal{U}})=1-\alpha_{\rm{Bell}}$, and $\text{tr}({\tilde{\chi }}_{\mathcal{A}})=\alpha_{\rm{Bell}}$, respectively.
In calculating $\alpha_{\rm{steer}}$ of a steering generating process, the constraint set for the SDP problem is given as \begin{equation} \label{eq:c17} \begin{aligned} &\chi_{\mathcal{I}}\!\geq\!0,\ \chi_{\rm{expt}}\!-\!{\tilde{\chi }}_{\mathcal{I}}\!\geq\!0,\ \chi_{\mathcal{I}}(\ket{\!\boldsymbol\phi_{i_mj_n}\!}\!\!\bra{\!\boldsymbol\phi_{i_mj_n}\!})\!\!\geq\!0,\ \rho_{\mu,i_mj_n}^{(\rm{B})}\!\!\geq\!0,\\
&\sum _{m=\pm1}\rho_{{\rm{out}}|i_mj_n}=\sum _{m=\pm1}\rho_{{\rm{out}}|1_mj_n}, \ \ \ \ \forall \mu,i_m,j_n, \end{aligned} \end{equation} where $\ket{\boldsymbol\phi_{i_mj_n}}\!\!\bra{\boldsymbol\phi_{i_mj_n}}$ are the $36$ input states introduced in Sec.~\ref{sec:cpm} and $\rho_{\mu,i_mj_n}^{(B)}$ are the states held by Bob in Eq.~(\ref{eq:5.11}). Similarly, when calculating $\alpha_{\rm{Bell}}$ of a Bell nonlocality generating process, the constraint set is given as \begin{equation} \label{eq:18} \begin{aligned} &\chi_{\mathcal{U}}\geq0, \ \chi_{\rm{expt}}-{\tilde{\chi }}_{\mathcal{U}}\geq0, \\ &\chi_{\mathcal{U}}(\ket{\boldsymbol\phi_{i_mj_n}}\!\!\bra{\boldsymbol\phi_{i_mj_n}})\geq0, \ P_{i_mj_n}(\textbf{v}^{(\rm{A})}_{\zeta},\textbf{v}^{(\rm{B})}_{\eta})\geq0, \\
&\sum _{m=\pm1}\!\!\!\rho_{{\rm{out}}|i_mj_n}\!\!=\!\!\!\!\sum _{m=\pm1}\!\!\!\rho_{{\rm{out}}|1_mj_n},\!\!\!\!\ \sum _{n=\pm1}\!\!\!\rho_{{\rm{out}}|i_mj_n}\!\!=\!\!\!\!\sum _{n=\pm1}\!\!\!\rho_{{\rm{out}}|i_m1_n}, \\
&\sum _{m=\pm1}\sum _{n=\pm1}\!\!\rho_{{\rm{out}}|i_mj_n}=\!\!\sum _{m=\pm1}\sum _{n=\pm1}\!\!\rho_{{\rm{out}}|1_m1_n},\ \forall i_m,j_n,\zeta,\eta, \end{aligned} \end{equation} where $P_{i_mj_n}(\textbf{v}^{(\rm{A})}_{\zeta},\textbf{v}^{(\rm{B})}_{\eta})$ is the joint probability in Eq.~(\ref{eq:5.12}).
The constraints $\chi_{\mathcal{I}}\geq0$ and $\chi_{\mathcal{U}}\geq0$ in Eqs. (\ref{eq:c17}) and (\ref{eq:18}), respectively, ensure that the incapable process and unable process are CP mapping. Similarly, the constraints $\chi_{\rm{expt}}-{\tilde{\chi }}_{\mathcal{I}}\geq0$ and $\chi_{\rm{expt}}-{\tilde{\chi }}_{\mathcal{U}}\geq0$ ensure that the capable process ${\tilde{\chi }}_{\mathcal{C}}$ and able process ${\tilde{\chi}}_{\mathcal{A}}$ are also CP mapping. The constraints $\chi_{\mathcal{I}}(\ket{\boldsymbol\phi_{i_mj_n}}\!\!\bra{\boldsymbol\phi_{i_mj_n}})\geq0$ and $\chi_{\mathcal{U}}(\ket{\boldsymbol\phi_{i_mj_n}}\!\!\bra{\boldsymbol\phi_{i_mj_n}})\geq0$ ensure that all the output states are positive semi-definite for all the input states. The fourth constraint in Eq.~(\ref{eq:c17}), i.e., $\rho_{\mu,i_mj_n}^{(\rm{B})}\geq0$, ensures that the density matrix in Eq.~(\ref{eq:5.11}) is positive semi-definite, while the fourth constraint in Eq.~(\ref{eq:18}), i.e., $P_{i_mj_n}(\textbf{v}^{(\rm{A})}_{\zeta},\textbf{v}^{(\rm{B})}_{\eta})\geq0$, ensures that the probability which satisfies the LHV model in Eq.~(\ref{eq:5.12}) is non-negative.
The final constraint in Eq.~(\ref{eq:c17}) ensures that Alice's inputs form an identity matrix $\hat{I}$. The corresponding output states are the same for different decompositions of $\hat{I}$ since the identity matrix $\hat{I}$ can be represented as the sum of the inputs for each basis, i.e., $\hat{I}=\Sigma_{m=\pm1}\ket{\phi_{i_m}}\!\!\bra{\phi_{i_m}}$, $\forall i$. The constraint describes the relations of the outputs in SDP through ensuring that the outputs corresponding to the sum of the inputs for each basis $\sum _{m=\pm1}\rho_{{\rm{out}}|i_mj_n}$, $i=1,2,3$, are equal to the sum in the first basis, $\sum _{m=\pm1}\rho_{{\rm{out}}|1_mj_n}$. Similarly, the remaining constraints in Eq.~(\ref{eq:18}) ensure that when the input states are an identity matrix $\hat{I}$, the corresponding output states are the same for all different decompositions of $\hat{I}$.
The sixth and seventh constraints in Eq.(\ref{eq:18}) state that the input of Bob's qubit and the input of both qubits are $\hat{I}$, and the outputs $\chi_{\mathcal{U}}(\ket{\phi_{i_m}}\!\!\bra{\phi_{i_m}}\otimes\hat{I})=\sum _{n=\pm1}\rho_{{\rm{out}}|i_m1_n}$ and $\chi_{\mathcal{U}}(\hat{I}\otimes\hat{I})=\sum _{m=\pm1}\sum _{n=\pm1}\rho_{{\rm{out}}|1_m1_n}$, are the same for all decompositions of $\hat{I}$, respectively.
In general, the constraint set determines the number of variables that need to be optimized via SDP. For a two-qubit system, each $\rho_{{\rm{out}}|i_mj_n}$ in the constraint set has the form of a $4\times4$ matrix that contains $16$ variables and is conditioned on $36$ input states. To describe the classical dynamics of an incapable process $\chi_{\mathcal{I}}\geq0$, we need $8$ matrices, which corresponds to the number of $\textbf{v}^{(\rm{A})}_{\mu}$. Consequently, a total of $4608$ variables must be solved by SDP. Similarly, to describe the classical dynamics of an unable process $\chi_{\mathcal{U}}\geq0$, we need $64$ matrices, which corresponds to the number of $\textbf{v}^{(\rm{A})}_{\zeta}$ and $\textbf{v}^{(\rm{B})}_{\eta}$, and hence $36864$ variables must be solved by SDP.
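The optimizations above are carried out with YALMIP/SDPT3/MOSEK in MATLAB~\cite{yalmip, sdpt, mosek}. The fragment below is only a schematic Python/CVXPY sketch of our own, showing the outer structure of the composition SDP for $\alpha_{\rm{steer}}$ together with the variable counting quoted above; the LHS-model constraints of Eqs.~(\ref{eq:5.11}) and~(\ref{eq:c17}), which couple ${\tilde{\chi }}_{\mathcal{I}}$ to the local-hidden-state variables, are left as a placeholder, and without them the minimum collapses trivially to zero.
\begin{verbatim}
import numpy as np
import cvxpy as cp
from itertools import product

# Bookkeeping behind the variable counts quoted above.
assert len(list(product((+1, -1), repeat=3))) == 8    # assignments v^A_mu
assert len(list(product((+1, -1), repeat=6))) == 64   # assignments (v^A_zeta, v^B_eta)
assert 8 * 36 * 16 == 4608 and 64 * 36 * 16 == 36864  # variables describing chi_I / chi_U

def alpha_steer_skeleton(chi_expt):
    """Outer structure of the composition SDP for alpha_steer (schematic only)."""
    chi_I = cp.Variable(chi_expt.shape, hermitian=True)  # unnormalized incapable part
    constraints = [chi_I >> 0,                           # incapable part is CP
                   chi_expt - chi_I >> 0]                # capable remainder is CP
    # Placeholder: the LHS-model constraints on the outputs of chi_I for all 36
    # test inputs must be appended here for the value to be meaningful.
    problem = cp.Problem(cp.Minimize(1 - cp.real(cp.trace(chi_I))), constraints)
    problem.solve()
    return problem.value
\end{verbatim}
The robustness SDP of the next subsection has the same skeleton, with the objective replaced by ${\rm{tr}}({\tilde{\chi }}_{\mathcal{I}})-1$ and the constraint $\chi_{\rm{expt}}-{\tilde{\chi }}_{\mathcal{I}}\geq0$ replaced by ${\tilde{\chi }}_{\mathcal{I}}-\chi_{\rm{expt}}\geq0$.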
\subsection{Robustness of quantum correlation generating process}
An incapable steering generating process or unable Bell nonlocality generating process can be obtained by adding a certain amount of noise into an experimental process, $\chi_{\rm{expt}}$, using a similar concept of QPC theory~\cite{hsieh2017quantifying,kuo2019quantum}, i.e., \begin{equation} \label{eq:19.1} \frac{\chi {}_{{\rm{expt}}}+\beta_{\rm{steer}} \chi_{\rm{noise}}}{1+\beta_{\rm{steer}} }={\chi}_{\mathcal{I}}, \end{equation} \begin{equation} \label{eq:19.2} \frac{\chi {}_{{\rm{expt}}}+\beta_{\rm{Bell}} \chi_{\rm{noise}}}{1+\beta_{\rm{Bell}} }={\chi}_{\mathcal{U}}, \end{equation} where $\beta_{\rm{steer}}, \beta_{\rm{Bell}}\ge0$, and $\chi_{\rm{noise}}$ is the noise process. Here, $\beta_{\rm{steer}}$ and $\beta_{\rm{Bell}}$ represent the minimum amounts of noise which must be added to $\chi_{\rm{expt}}$ to turn $\chi_{\rm{expt}}$ into ${\chi }_{\mathcal{I}}$ and ${\chi }_{\mathcal{U}}$, respectively. $\beta_{\rm{steer}}$ and $\beta_{\rm{Bell}}$ can be obtained by minimizing ${\chi_{\rm{noise}}}$ via SDP with MATLAB~\cite{yalmip, sdpt, mosek} as follows: \begin{equation} \label{eq:17.1} \beta_{\rm{steer}}=\mathop{{\rm{\min }}}\limits_{{\tilde{\chi }}_{\mathcal{I}}}\,[{\rm{tr}}({\tilde{\chi}}_{\mathcal{I}})-1], \end{equation} \begin{equation} \label{eq:17.2} \beta_{\rm{Bell}}=\mathop{{\rm{\min }}}\limits_{{\tilde{\chi }}_{\mathcal{U}}}\,[{\rm{tr}}({\tilde{\chi}}_{\mathcal{U}})-1], \end{equation} where ${\tilde{\chi }}_{\mathcal{I}}=(1+\beta_{\rm{steer}})\chi_{\mathcal{I}}$ and ${\tilde{\chi }}_{\mathcal{U}}=(1+\beta_{\rm{Bell}})\chi_{\mathcal{U}}$ are unnormalized process matrices with $\text{tr}({\tilde{\chi }}_{\mathcal{I}})=1+\beta_{\rm{steer}}$ and $\text{tr}({\tilde{\chi }}_{\mathcal{U}})=1+\beta_{\rm{Bell}}$, respectively.
In calculating $\beta_{\rm{steer}}$ for a steering generating process, the constraint set is given as \begin{equation} \label{eq:17} \begin{aligned} &\chi_{\mathcal{I}}\geq0,\ {\tilde{\chi }}_{\mathcal{I}}-\chi_{\rm{expt}}\geq0,\ \text{tr}({\tilde{\chi }}_{\mathcal{I}})\geq1,\\ &\chi_{\mathcal{I}}(\ket{\boldsymbol\phi_{i_mj_n}}\!\!\bra{\boldsymbol\phi_{i_mj_n}})\geq0,\ \rho_{\mu,i_mj_n}^{(\rm{B})} \geq0,\\
&\sum _{m=\pm1}\rho_{{\rm{out}}|i_mj_n}=\sum _{m=\pm1}\rho_{{\rm{out}}|1_mj_n}, \ \ \ \ \forall \mu,i_m,j_n. \end{aligned} \end{equation} Similarly, when calculating $\beta_{\rm{Bell}}$ for a Bell nonlocality generating process, the constraint set is given as \begin{equation} \label{eq:c18b} \begin{aligned} &\chi_{\mathcal{U}}\geq0,\ {\tilde{\chi }}_{\mathcal{U}}-\chi_{\rm{expt}}\geq0,\ \text{tr}({\tilde{\chi }}_{\mathcal{U}})\geq1, \\ &\chi_{\mathcal{U}}(\ket{\boldsymbol\phi_{i_mj_n}}\!\!\bra{\boldsymbol\phi_{i_mj_n}})\geq0,\ P_{i_mj_n}(\textbf{v}^{(\rm{A})}_{\zeta},\textbf{v}^{(\rm{B})}_{\eta})\geq0, \\
&\sum _{m=\pm1}\!\!\!\rho_{{\rm{out}}|i_mj_n}\!\!=\!\!\!\sum _{m=\pm1}\!\!\!\rho_{{\rm{out}}|1_mj_n},\!\!\sum _{n=\pm1}\!\!\!\rho_{{\rm{out}}|i_mj_n}\!\!=\!\!\!\sum _{n=\pm1}\!\!\!\rho_{{\rm{out}}|i_m1_n}, \\
&\sum _{m=\pm1}\sum _{n=\pm1}\!\rho_{{\rm{out}}|i_mj_n}\!=\!\sum _{m=\pm1}\sum _{n=\pm1}\!\rho_{{\rm{out}}|1_m1_n},\ \forall i_m,j_n,\zeta,\eta. \end{aligned} \end{equation}
The constraints ${\tilde{\chi }}_{\mathcal{I}}-\chi_{\rm{expt}}\geq0$ and ${\tilde{\chi }}_{\mathcal{U}}-\chi_{\rm{expt}}\geq0$ ensure that the noise processes $\chi_{\rm{noise}}$ in Eqs.~(\ref{eq:19.1}) and~(\ref{eq:19.2}), respectively, correspond to CP maps. Meanwhile, the constraints $\text{tr}({\tilde{\chi }}_{\mathcal{I}})\geq1$ and $\text{tr}({\tilde{\chi }}_{\mathcal{U}})\geq1$ ensure that $\beta_{\rm{steer}}\geq0$ and $\beta_{\rm{Bell}}\geq0$, respectively. Note that the other constraints are as described above for the composition measure [Eqs. (\ref{eq:c17}) and (\ref{eq:18})]. Similarly, the numbers of variables to be solved in the SDP optimization tasks in Eqs.~(\ref{eq:17.1}) and~(\ref{eq:17.2}) are also $4608$ and $36864$, respectively.
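As an illustrative sketch of how the optimization in Eq.~(\ref{eq:17.1}) is structured (the computations reported here were carried out in MATLAB~\cite{yalmip, sdpt, mosek}; the Python/\texttt{cvxpy} code below is only a schematic analog, in which the classical-dynamics constraints of Eq.~(\ref{eq:17}) are abbreviated by a placeholder argument):
\begin{verbatim}
# Schematic sketch (not the MATLAB/YALMIP implementation used above) of the
# robustness SDP.  chi_expt is the 16x16 experimental process matrix from
# QPT; the classical-dynamics constraints are passed in via extra_constraints.
import cvxpy as cp

def steering_robustness(chi_expt, extra_constraints=lambda chi: []):
    # chi_tilde = (1 + beta_steer) * chi_I is the unnormalized incapable process.
    chi_tilde = cp.Variable((16, 16), hermitian=True)
    constraints = [
        chi_tilde >> 0,                      # chi_I >= 0 up to normalization
        chi_tilde - chi_expt >> 0,           # the added noise process is CP
        cp.real(cp.trace(chi_tilde)) >= 1,   # guarantees beta_steer >= 0
    ]
    constraints += extra_constraints(chi_tilde)
    problem = cp.Problem(cp.Minimize(cp.real(cp.trace(chi_tilde)) - 1),
                         constraints)
    problem.solve()
    return problem.value                     # beta_steer
\end{verbatim}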
\subsection{Fidelity criterion for quantum correlations generating process}
An experimental process, $\chi_{\rm{expt}}$, is created with respect to a target process, $\chi_{\rm{target}}$, and the similarity between them can be assessed using the process fidelity, $F_{\rm{expt}}\equiv \rm{tr}(\chi_{\rm{expt}}\chi_{\rm{target}})$. In particular, $F_{\rm{expt}}$ is judged to indicate the capability (ability) of a process to generate steering (Bell nonlocality) if its value exceeds the best mimicry achieved by incapable processes $\chi_{\mathcal{I}}$ and unable processes $\chi_{\mathcal{U}}$, respectively, i.e., \begin{equation} \label{eq:21.1} F_{\rm{expt}} > F_{\mathcal{I}}\equiv\mathop{{\rm{\max }}}\limits_{\chi_{\mathcal{I}}}[\rm{tr}(\chi_{\mathcal{I}}\chi_{\rm{target}})], \end{equation} \begin{equation} \label{eq:21.2} F_{\rm{expt}} > F_{\mathcal{U}}\equiv\mathop{{\rm{\max }}}\limits_{\chi_{\mathcal{U}}}[\rm{tr}(\chi_{\mathcal{U}}\chi_{\rm{target}})]. \end{equation} Eqs.~(\ref{eq:21.1}) and~(\ref{eq:21.2}) show that $\chi_{\rm{expt}}$ cannot be mimicked by any incapable process or unable process, respectively. The best mimicry achieved by $\chi_{\mathcal{I}}$ and $\chi_{\mathcal{U}}$ can be evaluated by performing the following SDP maximization tasks in MATLAB~\cite{yalmip, sdpt, mosek}: \begin{equation} \label{eq:5.22.1} F_{\mathcal{I}}\equiv\mathop{{\rm{\max }}}\limits_{\chi_{\mathcal{I}}}[\rm{tr}(\chi_{\mathcal{I}}\chi_{\rm{target}})], \end{equation} \begin{equation} \label{eq:5.22.2} F_{\mathcal{U}}\equiv\mathop{{\rm{\max }}}\limits_{\chi_{\mathcal{U}}}[\rm{tr}(\chi_{\mathcal{U}}\chi_{\rm{target}})], \end{equation} such that $\rm{tr}(\chi_{\mathcal{I}})=1$ and $\rm{tr}(\chi_{\mathcal{U}})=1$, respectively.
In calculating the fidelity of a steering generating process, the constraint set is specified as \begin{equation} \label{eq:22} \begin{aligned} &\chi_{\mathcal{I}}\geq0,\ \chi_{\mathcal{I}}(\ket{\boldsymbol\phi_{i_mj_n}}\!\!\bra{\boldsymbol\phi_{i_mj_n}})\geq0,\ \rho_{\mu,i_mj_n}^{(\rm{B})} \geq0, \\
&\sum _{m=\pm1}\rho_{{\rm{out}}|i_mj_n}=\sum _{m=\pm1}\rho_{{\rm{out}}|1_mj_n}, \ \ \ \ \forall \mu,i_m,j_n. \end{aligned} \end{equation} Similarly, in calculating the fidelity of the Bell nonlocality generating process, the constraint set is given as \begin{equation} \label{eq:22b} \begin{aligned} &\chi_{\mathcal{U}}\geq0,\ \chi_{\mathcal{U}}(\ket{\boldsymbol\phi_{i_mj_n}}\!\!\bra{\boldsymbol\phi_{i_mj_n}})\geq0,\ P_{i_mj_n}(\textbf{v}^{(\rm{A})}_{\zeta},\textbf{v}^{(\rm{B})}_{\eta})\geq0, \\
&\sum _{m=\pm1}\!\!\!\rho_{{\rm{out}}|i_mj_n}\!\!=\!\!\!\sum _{m=\pm1}\!\!\rho_{{\rm{out}}|1_mj_n},\!\!\sum _{n=\pm1}\!\!\!\rho_{{\rm{out}}|i_mj_n}\!\!=\!\!\!\sum _{n=\pm1}\!\!\rho_{{\rm{out}}|i_m1_n}, \\
&\sum _{m=\pm1}\sum _{n=\pm1}\rho_{{\rm{out}}|i_mj_n}\!=\!\sum _{m=\pm1}\sum _{n=\pm1}\rho_{{\rm{out}}|1_m1_n},\ \forall i_m,j_n,\zeta,\eta. \end{aligned} \end{equation} Note that the constraints are all as described above for the composition measure [Eqs. (\ref{eq:c17}) and (\ref{eq:18})]. Note also that the numbers of variables to be solved in the optimization tasks in Eqs.~(\ref{eq:5.22.1}) and~(\ref{eq:5.22.2}) are again $4608$ and $36864$, respectively.
\section{\label{sec:QOQCGP}Quantification and identification of quantum correlation generating processes} This section commences by describing two experimental tests for quantum correlation generating processes (a steering generating test and a Bell nonlocality generating test) on circuit-based superconducting quantum computers (Sec.~\ref{sec:ecpg}). The quantification and identification results obtained using the tools described in Sec.~\ref{sec:MQIGPQC} for an ideal simulation, which targets the CPHASE gate, are then presented (Sec.~\ref{sec:sf}). Finally, the results obtained by the proposed quantitative and identifying tools for two real NISQ devices (IBM~Q Experience \cite{ibmq} and Amazon Braket \cite{aws}) and their simulators with and without noise models, respectively, are presented and discussed (Sec.~\ref{sec:QCSQC}).
\subsection{~\label{sec:ecpg}Implementation of controlled-phase gate on superconducting quantum computer}
In quantum computing, a controlled-phase (CPHASE) gate is a two-qubit operation that applies a phase to the state of the target qubit conditioned on the state of the control qubit. The gate is a key element in creating entanglement in superconducting quantum computers and serves as a primitive logic gate for creating graph states \cite{HNN}. The two-qubit gate operation can be expressed as follows: \begin{equation} \begin{aligned} \label{CPHASEEEE} \rm{CPHASE}(\lambda)=\begin{pmatrix}
1&0&0&0\\
0&1&0&0\\
0&0&1&0\\
0&0&0&e^{i\lambda} \end{pmatrix}, \end{aligned} \end{equation} where $\lambda$ is the CPHASE shift, i.e., the shift of the phase of the target qubit induced by the state of the control qubit. Given this knowledge of the CPHASE gate and the quantitative and identifying tools presented in Sec.~\ref{sec:MQIGPQC}, we present in the following the detailed steps of the QPT procedure~\cite{nielsen2002quantum} for the CPHASE gate on the IBM Q Experience \cite{ibmq} and Amazon Braket \cite{aws} quantum computers.
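As a minimal sketch of Eq.~(\ref{CPHASEEEE}), the gate can be written down numerically, e.g.\ in Python with \texttt{numpy}; for $\lambda=\pi$ it reduces to the controlled-Z gate:
\begin{verbatim}
# Minimal sketch: the CPHASE(lambda) matrix, checked against the controlled-Z
# gate for lambda = pi.
import numpy as np

def cphase(lam):
    return np.diag([1, 1, 1, np.exp(1j * lam)])

cz = np.diag([1, 1, 1, -1])
assert np.allclose(cphase(np.pi), cz)
\end{verbatim}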
We first present two tests, namely a steering generating test and a Bell nonlocality generating test, for evaluating the quantum correlation generating capability of an experimental process. The quantum circuit implementations of the two tests are shown in Figs.~\ref{fig:schematicPT}(a) and~\ref{fig:schematicPT}(b), respectively. All of the qubits, i.e., $Q_{0}$ and $Q_{1}$, are initially prepared in the state $\ket{\phi_{3_{1}}} = \ket{0}$ (defined in Sec.~\ref{sec:cpm}), and specific quantum states are then prepared as input states by applying the unitary operation $U_1$ [Figs.~\ref{fig:schematicPT}(a)(i) and \ref{fig:schematicPT}(b)(i)]. The input states are then processed by a CPHASE gate with a shift of $\lambda$ [Figs.~\ref{fig:schematicPT}(a)(ii) and \ref{fig:schematicPT}(b)(ii)].
Finally, QST~[Eq.~(\ref{eq:2})] is applied to the two output qubits to reconstruct the density matrix, $\rho_{{\rm{out}}|i_mj_n}$, by measuring their states in the Pauli bases \{$X$, $Y$, $Z$\} [Figs.~\ref{fig:schematicPT}(a)(iii) and \ref{fig:schematicPT}(b)(iii)]. Experimentally, the measurements performed on the Pauli-$X$ basis and Pauli-$Y$ basis are implemented by using different transformations $U_2$ followed by measurement on the Pauli-$Z$ basis. In particular, the measurement on the Pauli-$X$ basis is implemented by a Hadamard gate ($H = \ket{+}\!\!\bra{0}+\ket{-}\!\!\bra{1}$) followed by measurement on the Pauli-$Z$ basis, while the measurement on the Pauli-$Y$ basis is implemented by an adjoint of phase gate ($S^{\dag} = \ket{0}\!\!\bra{0}-i\ket{1}\!\!\bra{1}$) and an $H$ gate followed by measurement on the Pauli-$Z$ basis.
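A minimal sketch of these basis changes in Python with Qiskit (circuit and register names are illustrative and are not those of our implementation) is as follows:
\begin{verbatim}
# Illustrative sketch of the single-qubit basis changes described above:
# X-basis readout = H followed by a Z measurement; Y-basis readout = S^dagger,
# then H, then a Z measurement.
from qiskit import QuantumCircuit

def measure_in_basis(basis):
    qc = QuantumCircuit(1, 1)
    if basis == "X":
        qc.h(0)        # U_2 = H
    elif basis == "Y":
        qc.sdg(0)      # U_2 = S^dagger ...
        qc.h(0)        # ... followed by H
    qc.measure(0, 0)   # measurement in the Pauli-Z basis
    return qc
\end{verbatim}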
As discussed in Sec.~\ref{sec:bgp}, in the Bell nonlocality generating test the unitary operation $U_R$ [Fig.~\ref{fig:schematicPT}(b)(iii)] is added to realize different choices of Bob's observables, chosen such that the difference in $F_{\mathcal{U}}$ between the target process and $\chi_{\mathcal{U}}$ is as large as possible. Once a suitable unitary operation $U_R(\phi,\theta)$ has been chosen, it is realized in the quantum circuit model by applying one-qubit gates to the qubit. For example, if $U_R(\phi,\theta)$ is chosen with $\phi=0$ and $\theta=\pi/4$, it is realized by setting \begin{equation} U_R(0,\frac{\pi}{4}) = R_x(\frac{\pi}{2})R_z(\frac{\pi}{4})R_x(-\frac{\pi}{2}), \end{equation} where \begin{equation} R_x(\vartheta)=\left[ \begin{matrix}
\cos(\frac{\vartheta}{2}) & -i\sin(\frac{\vartheta}{2}) \\
-i\sin(\frac{\vartheta}{2}) & \cos(\frac{\vartheta}{2})
\end{matrix} \right], R_z(\varphi)=\left[ \begin{matrix}
e^{-i\frac{\varphi}{2}} & 0 \\
0 & e^{i\frac{\varphi}{2}}
\end{matrix} \right].\label{gate} \end{equation}
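Independently of the convention adopted for $U_R(\phi,\theta)$, the product of the three rotations above can be checked numerically; as a sketch in Python with \texttt{numpy}, the composition equals the single rotation $R_y(-\pi/4)$:
\begin{verbatim}
# Numerical check that Rx(pi/2) Rz(pi/4) Rx(-pi/2) equals Ry(-pi/4).
import numpy as np

def rx(t):
    return np.array([[np.cos(t / 2), -1j * np.sin(t / 2)],
                     [-1j * np.sin(t / 2), np.cos(t / 2)]])

def rz(p):
    return np.diag([np.exp(-1j * p / 2), np.exp(1j * p / 2)])

def ry(t):
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2), np.cos(t / 2)]])

u_r = rx(np.pi / 2) @ rz(np.pi / 4) @ rx(-np.pi / 2)
assert np.allclose(u_r, ry(-np.pi / 4))
\end{verbatim}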
\begin{figure}\label{fig:schematicPT}
\label{qcircuit}
\end{figure}
We experimentally determined the process matrix for each of the two quantum correlation generating tests on the IBM~Q and Amazon Braket quantum computers (both real devices and simulators). For each run of the quantum circuit, we adjusted the number of shots (i.e., the number of times the qubits of the quantum circuit were measured) to the maximum allowed by the respective device. In particular, we set $1024$ shots for the \textit{Amazon Braket Rigetti Aspen-9} and \textit{Aspen-M-1} devices, $8192$ shots for the \textit{ibmq\_santiago} device, and $81920$ shots for the \textit{Amazon Braket local simulator} device and \textit{ibmq\_qasm\_simulator} device.
\subsection{\label{sec:sf}Quantification and identification of ideal controlled-phase gate}
This section presents the results obtained when applying the quantitative and identifying methods proposed in Sec.~\ref{sec:MQIGPQC} to the steering generating test and Bell nonlocality generating test described above for the case of an ideal CPHASE gate process.
In conducting the steering (Bell nonlocality) generating test, we consider an experimental process $\chi_{\rm{expt}}$ that performs an ideal CPHASE gate with a phase shift of $\lambda=\pi$ to be a capable (able) process. For such a process, the steering (Bell nonlocality) generating composition $\alpha_{\rm{steer}}$ ($\alpha_{\rm{Bell}}$) has a value of $1$, while the steering (Bell nonlocality) generating robustness $\beta_{\rm{steer}}$ ($\beta_{\rm{Bell}}$) has a value of $0.4641$ ($0.1716$). Figs.~\ref{fig_sb_a}(a) and~\ref{fig_sb_a}(b) show the computed values of $\alpha_{\rm{steer}}$ and $\alpha_{\rm{Bell}}$, respectively, for various values of the CPHASE shift. Figs.~\ref{fig_sb_b}(a) and~\ref{fig_sb_b}(b) show the corresponding results for $\beta_{\rm{steer}}$ and $\beta_{\rm{Bell}}$, respectively. Note that, in both figures, the solid blue lines show the results obtained from the ideal simulation. We next take a process that performs an ideal CPHASE gate with a phase shift of $\lambda=\pi$ as the target process; the corresponding fidelity thresholds are $F_{\mathcal{I}}=0.6830$ and $F_{\mathcal{U}}=0.8536$. The dashed green lines in Figs.~\ref{fig_sb_f}(a) and~\ref{fig_sb_f}(b) show the variation of the fidelity criterion $F_{\mathcal{I}}$ ($F_{\mathcal{U}}$) as the CPHASE shift of the target process changes from $0$ to $2\pi$.
It is seen in Figs.~\ref{fig_sb_a}(b) and~\ref{fig_sb_b}(b) that $\alpha_{\rm{Bell}}$ and $\beta_{\rm{Bell}}$ both become nonzero at $\lambda=0.46\pi$ and return to zero at $\lambda=1.54\pi$. Meanwhile, in Fig.~\ref{fig_sb_f}(b), the fidelity criterion $F_{\mathcal{U}}$ begins to decrease at $\lambda=0.46\pi$, reaches a minimum value of $F_{\mathcal{U}}\sim0.8536$ at $\lambda=\pi$, and then increases once again to a value of around one at $\lambda=1.54\pi$.
\begin{figure}
\caption{ Composition results for steering generating test [Fig.~\ref{fig_sb_a}(a)] and Bell nonlocality generating test [Fig.~\ref{fig_sb_a}(b)]. The solid blue line shows the steering (Bell nonlocality) generating composition results from the ideal simulation. The other symbols show the results obtained on the IBM Q Experience and AWS Amazon Braket devices (real and simulator). }
\label{fig_sb_a}
\label{qcircuit}
\end{figure}
\begin{figure}
\caption{ Robustness results for steering generating test [Fig.~\ref{fig_sb_b}(a)] and Bell nonlocality generating test [Fig.~\ref{fig_sb_b}(b)]. The solid blue line shows the steering (Bell nonlocality) generating robustness results from the ideal simulation. The other symbols show the results obtained on the IBM Q Experience and AWS Amazon Braket devices (real and simulator). }
\label{fig_sb_b}
\label{qcircuit}
\end{figure}
\begin{figure}
\caption{ Fidelity criterion results for steering generating test [Fig.~\ref{fig_sb_f}(a)] and Bell nonlocality generating test [Fig.~\ref{fig_sb_f}(b)]. The dashed green line shows the classical upper bound of the fidelity criterion for the steering (Bell nonlocality) generating test. The other symbols show the results obtained on the IBM Q Experience and AWS Amazon Braket devices (real and simulator). }
\label{fig_sb_f}
\label{qcircuit}
\end{figure}
\subsection{\label{sec:QCSQC}Quantification and identification of controlled-phase gate on superconducting quantum computer}
The discussions above have described the results obtained from the proposed quantitative and identifying methods for the steering generating test and Bell nonlocality generating test in the ideal simulation of the CPHASE gate. This section presents the quantification and identification results obtained for an actual CPHASE gate process performed on two superconducting quantum computer systems, namely the IBM Q Experience~\cite{ibmq} and Amazon Braket~\cite{aws}, and their respective simulators.
We selected nine different CPHASE shifts ($\lambda=0$, $1/4\pi$, $1/2\pi$, $3/4\pi$, $\pi$, $5/4\pi$, $3/2\pi$, $7/4\pi$ and $2\pi$) for the CPHASE gate implemented on the superconducting quantum computer system. To perform QPT, we conducted QST on the experimental outputs. The density matrix $\rho_{\rm{out}|\textit{i}_\textit{m}\textit{j}_\textit{n}}$ of the output states was reconstructed using the following 16 specific input states: $\{\ket{00}$, $\ket{01}$, $\ket{0+}$, $\ket{0R}$, $\ket{10}$, $\ket{11}$, $\ket{1+}$, $\ket{1R}$, $\ket{+0}$, $\ket{+1}$, $\ket{++}$, $\ket{+R}$, $\ket{R0}$, $\ket{R1}$, $\ket{R+}$, $\ket{RR}\}$ (defined in Sec.~\ref{sec:cpm}). Having obtained the density matrix, we reconstructed the physical process via QPT. In particular, we experimentally determined the reasonable process matrix of the CPHASE gate using the maximum-likelihood technique~\cite{o2004quantum}. Finally, we examined the physical process matrix using the composition, robustness, and process fidelity proposed in Sec.~\ref{sec:MQIGPQC}. The corresponding results are presented in Figs.~\ref{fig_sb_a},~\ref{fig_sb_b} and~\ref{fig_sb_f}, respectively.
As described above in Sec.~\ref{sec:sf}, the ideal CPHASE gate with a shift of $\lambda=\pi$ possesses the maximum steering (Bell nonlocality) generating capability, i.e., $\alpha_{\rm{steer}}=1$ ($\alpha_{\rm{Bell}}=1$) and $\beta_{\rm{steer}}=0.4641$ ($\beta_{\rm{Bell}}=0.1716$). Furthermore, since the target process is the ideal CPHASE gate with $\lambda=\pi$, the fidelity threshold attained by the best incapable (unable) process is $F_{\mathcal{I}}\sim0.6830$ ($F_{\mathcal{U}}\sim0.8536$). As shown in Figs.~\ref{fig_sb_a} to \ref{fig_sb_f}, the steering (Bell nonlocality) capability results obtained from the \textit{ibmq\_qasm\_simulator} and \textit{Amazon Braket local simulator} are similar to those of the ideal CPHASE gate process. For the real devices, however, the \textit{ibmq\_santiago} device has the capability to generate steering and Bell nonlocality only when the CPHASE shift is equal to $\pi$, since only then, in accordance with the fidelity criterion, can the measured process not be mimicked by an incapable process or an unable process, respectively. The \textit{Amazon Braket Rigetti Aspen-9} (\textit{Aspen-M-1}) device is similarly identified to be capable of generating steering for CPHASE shifts of $\pi$ ($3/4\pi$, $\pi$, $5/4\pi$, and $3/2\pi$), as shown in Fig.~\ref{fig_sb_f}(a).
The quantification and identification results were further investigated using a noise model created from the properties of the \textit{ibmq\_santiago} device. (The details of the noise modeling approach are presented in Appendix~\ref{app:IBMQNM}.) The simulation results are similar to the experimental results obtained on the \textit{ibmq\_santiago} device for all three capability indicators. Further noise models were constructed based on the properties of the \textit{Amazon Braket Rigetti Aspen-9} and \textit{Aspen-M-1} devices, respectively. (The noise models are described in Appendix~\ref{app:QCNMA}.) In this case, the simulated results deviated from the experimental results obtained on the corresponding real-world \textit{Amazon Braket Rigetti Aspen-9} and \textit{Aspen-M-1} devices.
\section{\label{sec:cao}Conclusion and Outlook}
In this work, we have investigated the problem of identifying and quantifying the quantum correlation generating capability of experimental processes. We have considered two particular quantum correlation generating processes, namely the EPR steering generating process and the Bell nonlocality generating process. We have defined both types of processes using concepts similar to those of quantum process capability theory~\cite{hsieh2017quantifying,kuo2019quantum}. We have proposed composition and robustness, which serve as both measures and identifiers, for quantifying and identifying the capability of a process to generate steering or Bell nonlocality through the use of tomography and numerical methods. We have also presented two fidelity criteria for identifying faithful processes having the capability of generating steering and Bell nonlocality, respectively. The methods proposed herein are all based on classical mimicry using the concepts of local realism and classical dynamics.
Furthermore, we have presented and discussed the results obtained when using these approaches to quantify and identify the quantum correlation generation capability of a CPHASE gate process implemented on several real-world superconducting quantum computers, namely \textit{ibmq\_santiago}, \textit{Amazon Braket Rigetti Aspen-9} and \textit{Aspen-M-1}, and their corresponding simulators (with and without noise).
The experimentally feasible methods presented in this study for quantifying and identifying the steering and Bell nonlocality generating capabilities of a process provide a useful contribution toward the development of future cross-platform benchmarks for QIP tasks. In future work, we expect to extend the proposed formalism to characterize the multipartite correlation generation process further. Our methods may provide a way of quantifying quantum correlation generating processes and identifying the generation of multipartite nonclassical correlations for distributed QIP in entanglement-based quantum networks.
\acknowledgements
We thank Y.-N. Chen and H.-B. Chen for helpful comments and discussions. We also thank S. Mangini for providing us with the calibration data of \textit{Amazon Braket Rigetti Aspen-9} downloaded on $30^{\text{th}}$ October 2021. This work was partially supported by the National Science and Technology Council, Taiwan, under Grant Numbers MOST 107-2628-M-006-001-MY4, MOST 111-2119-M-007-007, and MOST 111-2112-M-006-033.
\appendix
\section{\label{app:qptpm}TWO-QUBIT QUANTUM PROCESS TOMOGRAPHY AND PROCESS MATRIX}
The action of a quantum process on a system can be described by the process matrix $\chi_{\rm{expt}}$ of Eq.~(\ref{eq:1}). $\chi_{\rm{expt}}$ can then be used to reconstruct the process through the operator-sum representation. For a $2$-qubit system with input states $\rho_{{\rm{in}}}=\ket{\boldsymbol\phi_{i_mj_n}}\!\!\bra{\boldsymbol\phi_{i_mj_n}}$, where $m, n = \pm1$ when $i, j = 3$ and $m, n = 1$ when $i, j = 1, 2$ (defined in Sec.~\ref{sec:cpm}), the output states can be given explicitly as \begin{equation} \label{eq:3} \rho_{{\rm{out}}}\equiv\chi_{\rm{expt}}(\rho_{{\rm{in}}})=\sum^{16}_{q=1}\sum^{16}_{r=1}\chi_{qr}E_q\rho_{{\rm{in}}}E^{\dagger}_r, \end{equation} where \begin{equation} \label{eq:4} E_q=\bigotimes^2_{k=1}\ket{q_k}\!\!\bra{q_{k+2}}, \end{equation} with $q=1+\sum^{4}_{i=1}q_i2^{i-1}$ for $q_i\in\{0,1\}$. In addition, $\ket{0}$ and $\ket{1}$ are defined in Sec.~\ref{sec:cpm}. To determine the coefficients $\chi_{qr}$ which constitute the process matrix $\chi_{\rm{expt}}$, we consider the following $16$ inputs: \begin{equation} \label{eq:5} \rho_{\rm{in},q'}=E_{q'}=\bigotimes^2_{k=1}\ket{q'_k}\!\!\bra{q'_{k+2}}, \end{equation} for $q'=1,2,...,16$. From Eq.~(\ref{eq:3}), the corresponding outputs are obtained as \begin{equation} \label{eq:6} \chi_{\rm{expt}}(\rho_{\rm{in},q'})=\sum^1_{q_1=0}...\sum^1_{r_2=0}\bigotimes^2_{k=1}\ket{q_k}\!\!\bra{r_k}\chi_{g({\textbf{q}},q')h({\textbf{r}},q')}, \end{equation} where $q'=1+\sum^{4}_{i=1}q'_i2^{i-1}$ for $q'_i\in\{0,1\},$ $\textbf{q}=(q_1,q_2),$ $\textbf{r}=(r_1,r_2),$ \begin{equation} \label{eq:7} g({\textbf{q}},q')=1+\sum^2_{i=1}q_i2^{i-1}+\sum^2_{i=1}q'_i2^{2+i-1}, \end{equation} and \begin{equation} \label{eq:8} h({\textbf{r}},q')=1+\sum^2_{i=1}r_i2^{i-1}+\sum^{4}_{i=2+1}q'_i2^{i-1}. \end{equation} Since the output $\chi(\rho_{\rm{in},q'})$ is determined using QST in Eq.~(\ref{eq:2}), we have full knowledge of the output matrix, i.e., \begin{equation} \label{eq:9} \rho'_{\rm{out},q'}=\chi_{\rm{expt}}(\rho_{\rm{in},q'})=\sum^1_{q_1=0}...\sum^1_{r_2=0}\bigotimes^2_{m=1}\ket{q_m}\!\!\bra{r_m}\rho^{(q')}_{\textbf{qr}}. \end{equation} Thus, all $16$ matrix elements $\rho^{(q')}_{\textbf{qr}}$ are determined. By comparing Eq.~(\ref{eq:6}) with Eq.~(\ref{eq:9}), the $4\times4$ block of matrix elements of the process matrix $\chi$ associated with each input $q'$ can be obtained as \begin{equation} \label{eq:chi} \chi_{g({\textbf{q}},q')h({\textbf{r}},q')}=\rho^{(q')}_{\textbf{qr}}. \end{equation} \hspace{1cm}From Eq.~(\ref{eq:4}), the $16$ operators, $E_q$, have the form \begin{equation} \label{eq:eqf} E_q=\ket{q_1q_2}\bra{q_3q_4}, \end{equation} where $E_1=\ket{00}\bra{00}$, $E_2=\ket{10}\bra{00}$, $E_3=\ket{01}\bra{00}$,..., and $E_{16}=\ket{11}\bra{11}$. To use experimentally preparable input states to obtain the process matrix, $E_q$, $q=1,2,...,16$, can be decomposed as a linear combination of the density matrices $\ket{\boldsymbol\phi_{i_mj_n}}\!\!\bra{\boldsymbol\phi_{i_mj_n}}$. The output matrix of $E_{k'}$, denoted as $\rho_{\rm{out},k'}$, can thus be represented by using the output density matrices of these states, denoted as $\rho'_{\rm{out},i_mj_n}=\chi_{\rm{expt}}(\ket{\boldsymbol\phi_{i_mj_n}}\!\!\bra{\boldsymbol\phi_{i_mj_n}})$. (Note that $\rho_{\rm{out},k'}$ is expressed in the $\{\ket{0}, \ket{1}\}$ basis, while $\rho'_{\rm{out},i_mj_n}$ is expressed in the basis of Pauli matrices.)
That is, the process matrix can be written in the form \begin{equation} \label{eq:10} \chi_{\rm{expt}} =\frac{1}{4} \left( \begin{matrix} \rho_{\rm{out},1} & \rho_{\rm{out},5} & \rho_{\rm{out},9} & \rho_{\rm{out},13} \\ \rho_{\rm{out},2} & \rho_{\rm{out},6} & \rho_{\rm{out},10} & \rho_{\rm{out},14} \\ \rho_{\rm{out},3} & \rho_{\rm{out},7} & \rho_{\rm{out},11}& \rho_{\rm{out},15} \\ \rho_{\rm{out},4} & \rho_{\rm{out},8} & \rho_{\rm{out},12}& \rho_{\rm{out},16} \end{matrix} \right), \end{equation} where the constant $1/4$ is a normalization factor, and the diagonal elements are given as \begin{equation} \label{eq:outout} \begin{aligned} &\rho_{\rm{out},1}=\rho'_{\rm{out},3_13_1},\\ &\rho_{\rm{out},6}=\rho'_{\rm{out},3_{\text-1}3_1},\\ &\rho_{\rm{out},11}=\rho'_{\rm{out},3_13_{\text-1}},\\ &\rho_{\rm{out},16}=\rho'_{\rm{out},3_{\text-1}3_{\text-1}}.\\ \end{aligned} \end{equation} The other elements have the forms \begin{equation} \begin{aligned} \rho_{\rm{out},2}= &\rho'_{\rm{out},1_13_1}\!\!-\!i\rho'_{\rm{out},2_13_1}\!\!-\!\frac{e^{-i\pi/4}}{\sqrt{2}}(\rho'_{\rm{out},3_13_1}\!+\!\rho'_{\rm{out},3_{\text-1}3_1}\!),\\
\rho_{\rm{out},3}= &\rho'_{\rm{out},3_11_1}\!\!-\!i\rho'_{\rm{out},3_12_1}\!\!-\!\frac{e^{-i\pi/4}}{\sqrt{2}}(\rho'_{\rm{out},3_13_1}\!+\!\rho'_{\rm{out},3_13_{\text-1}}\!),\\
\rho_{\rm{out},4}= &\rho'_{\rm{out},1_11_1}\!-\!i\rho'_{\rm{out},1_12_1}\!-\!\frac{e^{-i\pi/4}}{\sqrt{2}}(\rho'_{\rm{out},1_13_1}\!\!+\!\!\rho'_{\rm{out},1_13_{\text-1}})\\ &-\!i[\rho'_{\rm{out},2_11_1}\!\!\!-\!i\rho'_{\rm{out},2_12_1}\!\!\!-\!\frac{e^{-i\pi/4}}{\sqrt{2}}\!(\rho'_{\rm{out},2_13_1}\!\!\!+\!\!\rho'_{\rm{out},2_13_{\text-1}}\!)\!]\\ &-\!\frac{e^{-i\pi/4}}{\sqrt{2}}\{\![\rho'_{\rm{out},3_11_1}\!\!\!+\!\!\rho'_{\rm{out},3_{\text-1}1_1}\!\!\!-\!i(\rho'_{\rm{out},3_12_1}\!\!\!+\!\!\rho'_{\rm{out},3_{\text-1}2_1}\!)\!]\\ &-\!\frac{e^{-i\pi/4}}{\sqrt{2}}(\rho'_{\rm{out},3_13_1}\!\!+\!\!\rho'_{\rm{out},3_13_{\text-1}}\!\!+\!\!\rho'_{\rm{out},3_{\text-1}3_1}\!\!+\!\!\rho'_{\rm{out},3_{\text-1}3_{\text-1}}\!)\!\},\\
\rho_{\rm{out},5}= &\rho_{\rm{out},2}^\dag,\\ \nonumber \end{aligned} \end{equation} \begin{equation} \begin{aligned}
\rho_{\rm{out},7}= &\rho'_{\rm{out},1_11_1}\!-\!i\rho'_{\rm{out},1_12_1}\!-\!\frac{e^{-i\pi/4}}{\sqrt{2}}(\rho'_{\rm{out},1_13_1}\!\!+\!\!\rho'_{\rm{out},1_13_{\text-1}})\\ &+\!i[\rho'_{\rm{out},2_11_1}\!\!\!-\!i\rho'_{\rm{out},2_12_1}\!\!\!-\!\!\frac{e^{-i\pi/4}}{\sqrt{2}}\!(\rho'_{\rm{out},2_13_1}\!\!\!+\!\!\rho'_{\rm{out},2_13_{\text-1}}\!)\!]\\ &-\!\frac{e^{i\pi/4}}{\sqrt{2}}\{\![\rho'_{\rm{out},3_11_1}\!\!\!+\!\!\rho'_{\rm{out},3_{\text-1}1_1}\!\!\!-\!i(\rho'_{\rm{out},3_12_1}\!\!\!+\!\!\rho'_{\rm{out},3_{\text-1}2_1}\!)\!]\\ &-\!\frac{e^{-i\pi/4}}{\sqrt{2}}(\rho'_{\rm{out},3_13_1}\!\!+\!\!\rho'_{\rm{out},3_13_{\text-1}}\!\!+\!\!\rho'_{\rm{out},3_{\text-1}3_1}\!\!+\!\!\rho'_{\rm{out},3_{\text-1}3_{\text-1}}\!)\!\},\\
\rho_{\rm{out},8}= &\rho'_{\rm{out},3_{\text-1}1_1}\!\!-i\rho'_{\rm{out},3_{\text-1}2_1}\!\!-\frac{e^{-i\pi/4}}{\sqrt{2}}(\rho'_{\rm{out},3_{\text-1}3_1}\!\!+\!\rho'_{\rm{out},3_{\text-1}3_{\text-1}}\!),\\
\rho_{\rm{out},9}= &\rho_{\rm{out},3}^\dag,\\
\rho_{\rm{out},10}= &\rho_{\rm{out},7}^\dag,\\
\rho_{\rm{out},12}= &\rho'_{\rm{out},1_13_{\text-1}}\!\!-i\rho'_{\rm{out},2_13_{\text-1}}\!\!-\frac{e^{-i\pi/4}}{\sqrt{2}}(\rho'_{\rm{out},3_13_{\text-1}}\!\!+\!\rho'_{\rm{out},3_{\text-1}3_{\text-1}}\!),\\
\rho_{\rm{out},13}= &\rho_{\rm{out},4}^\dag,\\
\rho_{\rm{out},14}= &\rho_{\rm{out},8}^\dag,\\
\rho_{\rm{out},15}= &\rho_{\rm{out},12}^\dag.\\ \label{eq:outout2} \end{aligned} \end{equation}
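As a small illustrative sketch in Python, the operator basis of Eq.~(\ref{eq:4}) and Eq.~(\ref{eq:eqf}) can be generated from the binary digits of $q-1$, reproducing the examples $E_1$, $E_2$, and $E_{16}$ quoted above:
\begin{verbatim}
# Illustrative sketch: build E_q = |q1 q2><q3 q4| from the binary digits of
# q-1 (q_1 is the least significant bit), as in the equations above.
import numpy as np

ket = {0: np.array([[1.0], [0.0]]), 1: np.array([[0.0], [1.0]])}

def E(q):
    q1, q2, q3, q4 = [(q - 1) >> i & 1 for i in range(4)]
    return np.kron(ket[q1] @ ket[q3].T, ket[q2] @ ket[q4].T)

assert np.allclose(E(1),  np.kron(ket[0] @ ket[0].T, ket[0] @ ket[0].T))  # |00><00|
assert np.allclose(E(2),  np.kron(ket[1] @ ket[0].T, ket[0] @ ket[0].T))  # |10><00|
assert np.allclose(E(16), np.kron(ket[1] @ ket[1].T, ket[1] @ ket[1].T))  # |11><11|
\end{verbatim}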
\section{\label{app:IBMQNM}NOISE MODELING OF IBM Q}
IBM~Q~\cite{ibmq} provides several simulators for use, including the \textit{ibmq\_qasm\_simulator}, \textit{simulator\_statevector}, \textit{simulator\_extended\_stabilizer}, and \textit{simulator\_mps}, where these simulators can simulate circuits of up to $32$, $32$, $63$, and $100$ qubits, respectively. However only the \textit{ibmq\_qasm\_simulator} and \textit{simulator\_statevector} devices provide noise modeling~\cite{sourceCodeCH}. Thus, in this study, we purposely chose the \textit{ibmq\_qasm\_simulator} device since it not only supports noise modeling, but is also available for use without connecting to IBM~Q's service. Noise modeling is the process of building a noise model to simulate a quantum circuit in the presence of errors. It is possible to construct custom noise models for simulators or automatically generate a basic noise model from an IBM Q device. Moreover, a simplified approximate noise model can be generated automatically from the properties of a real device. The noise model was constructed using the properties obtained from the \textit{ibmq\_santiago} device on $8^{\text{th}}$ October $2021$.
\section{\label{app:QCNMA}QUANTIFICATION AND IDENTIFICATION OF CONTROLLED-PHASE GATE ON NOISE MODEL OF AMAZON BRAKET}
In Sec.~\ref{sec:QCSQC}, we used the noise model in the IBM~Q backend to simulate the real-world quantum computer. In this Appendix, we simulate the Amazon Braket quantum devices and evaluate their performance by replacing the parameters in the noise model of the IBM~Q backend with the calibration data of the Amazon Braket backend. In particular, we show how the noisy \textit{ibmq\_qasm\_simulator} device is combined with the calibration data of the \textit{Amazon Braket Rigetti Aspen-9} and \textit{Aspen-M-1} devices, respectively, to obtain the results shown in Figs.~\ref{fig_sb_a}, \ref{fig_sb_b}, and \ref{fig_sb_f}.
For the \textit{ibmq\_santiago} noise model, the most important parameters include $T_{\rm{1}}$, $T_{\rm{2}}$, the \text{readout error}, \textit{prob\_meas0\_prep1}, \textit{prob\_meas1\_prep0}, the \text{gate length}, and the \text{gate error}. (Note that the values of these parameters can be taken directly from the IBM Q program.) From the source code~\cite{sourceCodeCH} of the noise model of the IBM Q backend, we found that the noise model is generated based on the 1-qubit and 2-qubit gate errors (which consist of a depolarizing error followed by a thermal relaxation error and describe a CPTP $N$-qubit gate) and the single-qubit readout error on all the measurements. The error parameters of the noise model are tuned based on the parameters $T_{\rm{1}}$, $T_{\rm{2}}$, \text{frequency}, \text{readout error}, \textit{prob\_meas0\_prep1}, \textit{prob\_meas1\_prep0}, \text{gate length}, and \text{gate error}. Therefore, it is necessary to collect data for all these parameters to construct the noise models for the Amazon Braket Rigetti devices.
For the Amazon Braket Rigetti quantum computer, the calibration data provided by Amazon Braket Rigetti are $T_{\rm{1}}$, $T_{\rm{2}}$, the readout fidelity, the gate time, the \textit{RB\_Fidelity}, and the 2-qubit gate fidelity. We note that all these data can be obtained directly from the Amazon Braket console~\cite{awsCH} and Rigetti~\cite{rigettiCH} websites. We note also that three parameters are common to the noise models of the IBM Q and Amazon Braket devices. For example, $T_{\rm{1}}$ is the energy relaxation time, i.e., the time scale of the decay of a qubit from the excited state to the ground state, which is related to the amplitude damping noise. In the Bloch representation, the effect of amplitude damping is given by the Bloch vector transformation~\cite{nielsen2002quantum} \begin{equation} (r_x, r_y, r_z) \mapsto (r_x \sqrt{1-\gamma}, r_y \sqrt{1-\gamma}, r_z (1-\gamma) + \gamma), \end{equation} where \begin{equation} \gamma = 1 - e^{-t/T_1}, \end{equation} and $t$ is the elapsed time. In addition, $T_{\rm{2}}$ is the dephasing time, i.e., the time scale required for the decoherence of a qubit from the coherent state to a completely mixed state, which is related to the phase damping noise. In the Bloch representation, the effect of phase damping is given by the Bloch vector transformation~\cite{nielsen2002quantum} \begin{equation} (r_x, r_y, r_z) \mapsto (r_x \sqrt{1-\lambda}, r_y \sqrt{1-\lambda}, r_z), \end{equation} where \begin{equation} \lambda = 1 - e^{-t/(T_2/2)}, \end{equation} and $t$ is the elapsed time. If the amplitude damping noise and phase damping noise are both applied, with the latter followed by the former, then \begin{equation}\label{eq:C5} T_2 \leq 2 \times T_1. \end{equation} An inspection of the source code~\cite{sourceCodeCH} of the noise model of the IBM Q backend showed the existence of several constraints which ensure that Eq.~(\ref{eq:C5}) holds. The \text{gate length} of IBM Q is equivalent to the gate time and represents the duration for which the gate operates on one or two specific qubits. For all three common parameters, we simply replaced the parameter values of IBM Q with those of Amazon Braket in the same units.
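As a purely illustrative sketch of these relations, the damping parameters implied by the \textit{ibmq\_santiago} qubit-3 values in Table~\ref{tab:table1} ($T_1=106.2285~\mu$s, $T_2=82.9952~\mu$s) and the 2-qubit gate time $t=376.8889$~ns are small:
\begin{verbatim}
# Illustrative numbers only: amplitude- and phase-damping parameters implied
# by the Bloch-vector transformations above, for T1 = 106.2285 us,
# T2 = 82.9952 us and a 2-qubit gate time t = 376.8889 ns.
import math

T1, T2 = 106.2285e-6, 82.9952e-6   # seconds
t = 376.8889e-9                    # seconds

gamma = 1 - math.exp(-t / T1)          # ~ 3.5e-3
lam   = 1 - math.exp(-t / (T2 / 2))    # ~ 9.0e-3
print(gamma, lam)
\end{verbatim}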
For the parameters which are not common to both models, e.g., the readout fidelity of a single qubit, the \textit{RB\_Fidelity} and the 2-qubit gate fidelity, we derived specific relations to relate the calibration data of the Amazon Braket device to the parameters of the IBM Q.
In general, a readout error causes the measurement result for a qubit in the $\ket{0}$ state to be given as the measurement result for a qubit in the $\ket{1}$ state, and vice versa. The \textit{prob\_meas0\_prep1} parameter of IBM Q gives the probability that the measurement result of $\ket{1}$ is that of $\ket{0}$, while the \textit{prob\_meas1\_prep0} parameter gives the probability that the measurement result of $\ket{0}$ is that of $\ket{1}$. In the source code~\cite{sourceCodeCH} of the noise model of the IBM Q backend, \textit{prob\_meas0\_prep1} and \textit{prob\_meas1\_prep0} are adopted preferentially rather than the readout error (if provided). Based on our reading of the Rigetti pyQuil website~\cite{rigetti2CH}, we surmised that the classical readout bit-flip error was roughly symmetric for the simulation. In other words, the readout errors \textit{prob\_meas0\_prep1} and \textit{prob\_meas1\_prep0} of the noise model were the same. Thus, we set the readout error equal to $1$ minus the readout fidelity as a reasonable approximation. The \textit{RB\_Fidelity} is the single-qubit randomized benchmarking fidelity~\cite{knill2008randomized} of the individual gate operation. The 2-qubit gate fidelity included the C-Phase gate (denoted in Amazon Braket console~\cite{awsCH}), i.e., CPHASE gate~[Eq.~(\ref{CPHASEEEE})] fidelity, the XY gate~\cite{Abrams2019} fidelity, and the CZ gate (i.e., controlled-Z gate) fidelity. The state fidelity corresponding to the number of computational gates was given by 1 minus the probability of error in Fig.~1 of Ref.~\cite{knill2008randomized}. Thus, for the noise models of the \textit{Amazon Braket Rigetti Aspen-9} and \textit{Aspen-M-1} devices, we assigned 1 minus \textit{RB\_Fidelity} to the value of the 1-qubit gate error and 1 minus the C-Phase gate fidelity to the value of the 2-qubit gate error.
\begin{table*} \caption{\label{tab:table1}Comparison among parameters of different noise models. First, we used the IBM~Q \textit{ibmq\_santiago} quantum computer with qubit 3 and qubit 4 in October 2021 and archived its calibration parameters on $8^{\text{th}}$ October 2021. Second, we used the \textit{Amazon Braket Rigetti Aspen-9} quantum computer with qubit 10 and qubit 17 in July 2021 and archived its calibration parameters on $30^{\text{th}}$ October and $18^{\text{th}}$ November 2021. Finally, we used the \textit{Amazon Braket Rigetti Aspen-M-1} quantum computer with qubit 15 and qubit 16 on $23^{\text{rd}}$ and $30^{\text{th}}$ of March 2022, respectively, and archived the calibration parameters on the same dates.} \begin{ruledtabular} \begin{tabular}{lccc} &IBM~Q \textit{ibmq\_santiago}& \textit{Amazon Braket Aspen-9} & \textit{Amazon Braket Aspen-M-1}\\ & qubit 3 \& 4 & qubit 10 \& 17 & qubit 15 \& 16 \\ \hline $T_{\rm{1}}$ ($\mu\text{s}$) & 106.2285 \& 44.8018 & 26.43 \& 28.88\footnotemark[1] & 50.79 \& 40.518 \\ $T_{\rm{2}}$ ($\mu\text{s}$) & 82.9952 \& 88.0221 & 21.62 \& 24.14\footnotemark[1] & 60.606 \& 65.261 \\ \textit{prob\_meas0\_prep1}, \textit{prob\_meas1\_prep0} & 0.0082, 0.0044 \& 0.0346, 0.0112 & & \\ readout error & 0.0063 \& 0.0229 & 1 - 0.957 \& 1 - 0.939\footnotemark[1] & 1 - 0.983 \& 1 - 0.987 \\
1-qubit, 2-qubit gate time ($\text{ns}$) & 35.5556, 376.8889 & 48, 168 & 40, 180 \\ 1-qubit gate error & 0.0002 \& 0.0003 & 1 - 0.9989 \& 1 - 0.9993\footnotemark[1] & 1 - 0.9987 \& 1 - 0.99947 \\ 2-qubit gate error & 0.0056 & 1 - 0.97955\footnotemark[2] & 1 - 0.98996 \\
\end{tabular} \end{ruledtabular} \footnotetext[1]{
The calibration data of \textit{Amazon Braket Rigetti Aspen-9}~\cite{mangini2021qubit} is downloaded from the quantum cloud services website of Rigetti Computing~\cite{rigetti3CH} on $30^{\text{th}}$ October 2021. } \footnotetext[2]{Due to the lack of C-Phase gate fidelity in the calibration data of \textit{Amazon Braket Rigetti Aspen-9}~\cite{mangini2021qubit} downloaded on $30^{\text{th}}$ October 2021, we used the data from the website of Amazon Braket console~\cite{awsCH} on $18^{\text{th}}$ November 2021 instead.} \end{table*}
Table~\ref{tab:table1} shows the calibration data used to construct the noise models for the \textit{Amazon Braket Rigetti Aspen-9} and \textit{Aspen-M-1} devices. (Note that the original parameters of the IBM Q \textit{ibmq\_santiago} device are also shown for reference purposes.) The 1-qubit gates of IBM Q \textit{ibmq\_santiago} include the Identity gate, the sx ($\sqrt{x}$) gate, and the Pauli-X gate, while the 2-qubit gate is the CNOT gate (cx3\_4) operated on qubit 3 and qubit 4. The 2-qubit gate fidelities of \textit{Amazon Braket Rigetti Aspen-M-1} and \textit{Aspen-9} are the C-Phase gate fidelities, and the gate times are derived directly from the websites of Amazon Braket console~\cite{awsCH} and Rigetti~\cite{rigettiCH}, respectively. The data for \textit{Amazon Braket Rigetti Aspen-M-1} were collected on $23^{\text{rd}}$ March $2022$ for the Bell nonlocality generating test. For the steering generating test, the data were collected on $30^{\text{th}}$ March $2022$. The readout fidelity was transformed to $0.981$ for qubit $15$ and $0.974$ for qubit $16$, while the \text{2-qubit gate fidelity} was transformed to $0.9825$. Since the calibration data of the Amazon Braket devices changed over time, the data were updated in such a way that the noise model of the Amazon Braket device exhibited the same time-varying behavior as the original noise model of IBM Q.
\end{document}
\begin{document}
\title{$\mbox{\rm ARRIVAL}$: A zero-player graph game in $\mathrm{NP}\cap\mathrm{coNP}$\thanks{This research
was done in the 2014 undergraduate seminar \emph{Wie
funktioniert Forschung} (How does research work).}}
\begin{abstract}
Suppose that a train is running along a railway network, starting
from a designated origin, with the goal of reaching a designated
destination. The network, however, is of a special nature: every
time the train traverses a switch, the switch will change its
position immediately afterwards. Hence, the next time the train
traverses the same switch, the other direction will be taken, so
that directions alternate with each traversal of the switch.
Given a network with origin and destination, what is the
complexity of deciding whether the train, starting at the origin,
will eventually reach the destination?
It is easy to see that this problem can be solved in exponential time,
but we are not aware of any polynomial-time method. In this short
paper, we prove that the problem is in $\mathrm{NP}\cap \mathrm{coNP}$. This raises
the question whether we have just failed to find a (simple)
polynomial-time solution, or whether the complexity status is more
subtle, as for some other well-known (two-player) graph
games~\cite{Halman}. \end{abstract}
\section{Introduction} In this paper, a \emph{switch graph} is a directed graph $G$ in which every vertex has at most two outgoing edges, pointing to its \emph{even} and to its \emph{odd} successor. Formally, a switch graph is a 4-tuple $G=(V,E,s_0,s_1)$, where $s_0, s_1:V\rightarrow V$, $E= \{(v,s_0(v)): v\in V\} \cup \{(v,s_1(v)): v\in V\}$, with loops $(v,v)$ allowed. Here, $s_0(v)$ is the even successor of $v$, and
$s_1(v)$ the odd successor. We may have $s_0(v)=s_1(v)$ in which case $v$ has just one outgoing edge. We always let $n=|V|$; for $v\in V$, $E^+(v)$ denotes the set of outgoing edges at $v$, while $E^-(v)$ is the set of incoming edges.
Given a switch graph $G=(V,E,s_0,s_1)$ with origin and destination $o,d\in V$, the following procedure describes the train run that we want to analyze; our problem is to decide whether the procedure terminates. For the procedure, we assume arrays $\mathop{{\tt s\_curr}}$ and $\mathop{{\tt s\_next}}$, indexed by $V$, such that initially $\mathop{{\tt s\_curr}}[v]=s_0(v)$ and $\mathop{{\tt s\_next}}[v]=s_1(v)$ for all $v\in V$.
\begin{algorithmic} \Procedure{Run}{$G,o,d$} \State{$v := o$} \While{$v\neq d$} \State{$w := \mathop{{\tt s\_curr}}[v]$} \State{swap ($\mathop{{\tt s\_curr}}[v], \mathop{{\tt s\_next}}[v]$)} \State{$v := w$} \Comment{traverse edge $(v,w)$} \EndWhile \EndProcedure \end{algorithmic}
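For concreteness, the procedure can be transcribed directly into Python; capping it at $n2^n$ edge traversals (justified by the proof of Theorem~\ref{thm:decidable} below) turns it into a decision procedure, and the dictionary \texttt{profile} records how often each edge is traversed:
\begin{verbatim}
# Direct transcription of procedure Run, capped at n * 2**n edge traversals
# (after which a state must repeat, cf. the decidability proof below).
# s0 and s1 are dictionaries mapping each vertex to its even/odd successor.
def run(V, s0, s1, o, d):
    s_curr, s_next = dict(s0), dict(s1)
    profile = {}                       # edge -> number of traversals
    v = o
    for _ in range(len(V) * 2 ** len(V)):
        if v == d:
            return True, profile       # the train has arrived
        w = s_curr[v]
        s_curr[v], s_next[v] = s_next[v], s_curr[v]
        profile[(v, w)] = profile.get((v, w), 0) + 1
        v = w
    return v == d, profile             # cap reached: run cycles unless v == d
\end{verbatim}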
\begin{definition}
Problem $\mbox{\rm ARRIVAL}$ is to decide whether procedure
$\mbox{\sc Run}(G,o,d)$ terminates for a given switch graph $G=(V,E,s_0,s_1)$
and $o,d\in V$. \end{definition}
\begin{theorem}\label{thm:decidable} Problem $\mbox{\rm ARRIVAL}$ is decidable. \end{theorem}
\begin{proof}
The deterministic procedure $\mbox{\sc Run}$ can be interpreted as a function
that maps the current \emph{state} $(v, \mathop{{\tt s\_curr}},\mathop{{\tt s\_next}})$ to the next
state. We can think of the state as the current location of the
train, and the current positions of all the switches. As at most $n2^n$
different states may occur, $\mbox{\sc Run}$ either terminates within this
many iterations, or some state repeats, in which case $\mbox{\sc Run}$ enters
an infinite loop. Hence, to decide $\mbox{\rm ARRIVAL}$, we have to go through
at most $n2^n$ iterations of $\mbox{\sc Run}$. \end{proof}
Figure~\ref{fig:exponential} shows that a terminating run may indeed take exponential time.
\begin{figure}
\caption{Switch graph $G$ with $n+2$ vertices on which $\mbox{\sc Run}(G,o,d)$
traverses an exponential number of edges. If we encode the
current positions of the switches at $v_n,\ldots,v_1$ with an
$n$-bit binary number (0: even successor is next; 1: odd successor is
next), then the run counts from $0$ to $2^n-1$, resets the
counter to $0$, and terminates. Solid edges point to even or unique successors, dashed
edges to odd successors.}
\label{fig:exponential}
\end{figure}
Existing research on switch graphs (with the above, or similar definitions) has mostly focused on actively controlling the switches, with the goal of attaining some desired behavior of the network (e.g.\ reachability of the destination); see e.g.~\cite{Katz2012}. The question that we address here rather fits into the theory of cellular automata. It is motivated by the online game \emph{Looping Piggy} (\url{https://scratch.mit.edu/projects/1200078/}) that the second author has written for the \emph{Kinderlabor}, a Swiss-based initiative to educate children at the ages 4--12 in natural sciences and computer science (\url{http://kinderlabor.ch}).
It was shown by Chalcraft and Greene~\cite{CG} (see also Stewart~\cite{Stu}) that the train run can be made to simulate a Turing machine if on top of our ``flip-flop switches'', two other types of natural switches can be used. Consequently, the arrival problem is undecidable in this richer model; it then also becomes NP-complete to decide whether the train reaches the destination for \emph{some} initial positions of a set of flip-flop switches~\cite{Mar}.
Restricting to flip-flop switches with fixed initial positions, the situation is much less complex, as we show in this paper. In Sections~\ref{sec:np} and \ref{sec:conp}, we prove that $\mbox{\rm ARRIVAL}$ is in $\mathrm{NP}$ as well as in $\mathrm{coNP}$; Section~\ref{sec:ip} shows that a terminating run can be interpreted as the unique solution of a flow-type integer program with balancing conditions whose LP relaxation may have only fractional optimal solutions.
\section{$\mbox{\rm ARRIVAL}$ is in $\mathrm{NP}$}\label{sec:np} A natural candidate for an $\mathrm{NP}$-certificate is the \emph{run profile} of a terminating run. The run profile assigns to each edge the number of times it has been traversed during the run. The main difficulty is to show that fake run profiles cannot fool the verifier. We start with a necessary condition for a run profile: it has to be a \emph{switching flow}.
\begin{definition}
Let $G=(V,E,s_0,s_1)$ be a switch graph, and let $o,d\in V$, $o\neq
d$. A \emph{switching flow} is a function
$\mathbf{x}:E\rightarrow\mathds{N}_0$ (where $\mathbf{x}(e)$ is denoted as $x_e$) such that the
following two conditions hold for all $v\in V$. \begin{eqnarray} \sum_{e\in E^+(v)} x_e - \sum_{e\in E^-(v)} x_e = \left\{
\begin{array}{rl}
1, & v = o, \\
-1, & v = d, \\
0, & \mbox{otherwise}.
\end{array} \right. \label{eq:conserve_flow} \end{eqnarray}
\begin{eqnarray} 0 \leq x_{(v, s_1(v))} \leq x_{(v, s_0(v))} \leq x_{(v, s_1(v))} + 1. \label{eq:balance_flow} \end{eqnarray} \end{definition}
\begin{observation}\label{obs:termination_flow}
Let $G=(V,E,s_0,s_1)$ be a switch graph, and let $o,d\in V$, $o\neq
d$, such that $\mbox{\sc Run}(G,o,d)$ terminates. Let $\mathbf{x}(G,o,d):E\rightarrow
\mathds{N}_0$ (the run profile) be the function that assigns to each edge
the number of times it has been traversed during $\mbox{\sc Run}(G,o,d)$. Then
$\mathbf{x}(G,o,d)$ is a switching flow. \end{observation}
\begin{proof} Condition (\ref{eq:conserve_flow}) is simply flow conservation (if the run enters a vertex, it has to leave it, except at $o$ and $d$), while (\ref{eq:balance_flow}) follows from the run alternating between successors at any vertex $v$, with the even successor $s_0(v)$ being first. \end{proof}
\begin{figure}
\caption{Run profile (left) and fake run profile (right); both are
switching flows. Solid edges point to even or unique successors,
dashed edges to odd successors.}
\label{fig:fakerun}
\end{figure}
While every run profile is a switching flow, the converse is not always true. Figure~\ref{fig:fakerun} shows two switching flows for the same switch graph, but only one of them is the actual run profile. The ``fake'' run results from going to the even successor of $w$ twice in a row, before going to the odd successor $d$. This shows that the balancing condition (\ref{eq:balance_flow}) fails to capture the strict alternation between even and odd successors. Despite this, and maybe surprisingly, the existence of a switching flow implies termination of the run.
\begin{lemma}\label{lem:flow_termination}
Let $G=(V,E,s_0,s_1)$ be a switch graph, and let $o,d\in V$, $o\neq
d$. If there exists a switching flow $\mathbf{x}$, then
$\mbox{\sc Run}(G,o,d)$ terminates, and $\mathbf{x}(G,o,d)\leq\mathbf{x}$ (componentwise). \end{lemma}
\begin{proof}
We imagine that for all $e\in E$ we put $x_e$ pebbles on edge $e$,
and then start $\mbox{\sc Run}(G,o,d)$. Every time an edge is traversed, we let
the run collect one pebble. The claim is that we
never run out of pebbles, which proves termination as well as the
inequality for the run profile.
To prove the claim, we first observe two invariants: during the run, flow
  conservation (w.r.t.\ the remaining pebbles) always holds, except
at $d$, and at the current vertex which has one more pebble on its
outgoing edges. Moreover, by alternation, starting with the even
successor, the numbers of pebbles on $(v,s_0(v))$ and $(v,s_1(v))$
always differ by at most one, for every vertex $v$.
For contradiction, consider now the first iteration of $\mbox{\sc Run}(G,o,d)$
where we run out of pebbles, and let $e=(v,w)$ be the edge (now
holding $-1$ pebbles) traversed in the offending iteration. By the
above alternation invariant, the other outgoing edge at $v$ cannot
have any pebbles left, either. Then the flow conservation invariant
at $v$ shows that already some incoming edge of $v$ has a deficit of
pebbles, so we have run out of pebbles before, which is a
contradiction. \end{proof}
\begin{theorem}\label{thm:np} Problem $\mbox{\rm ARRIVAL}$ is in $\mathrm{NP}$. \end{theorem}
\begin{proof}
Given an instance $(G,o, d)$, the verifier receives a function
$\mathbf{x}:E\rightarrow\mathds{N}_0$, in form of binary encodings of the values
$x_e$, and checks whether it is a switching flow. For a
Yes-instance, the run profile of $\mbox{\sc Run}(G,o,d)$ is a witness by
Observation~\ref{obs:termination_flow}; the
proof of Theorem~\ref{thm:decidable} implies that the verification
can be made to run in polynomial time, since every value $x_e$ is
bounded by $n2^n$. For a No-instance, the check will fail by
Lemma~\ref{lem:flow_termination}. \end{proof}
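A minimal sketch of such a verifier in Python (with the switch graph given by the successor maps \texttt{s0}, \texttt{s1} and the certificate \texttt{x} given as a map from edges to nonnegative integers) is the following:
\begin{verbatim}
# Sketch of the verifier from the proof above: check that x is a switching
# flow, i.e. that flow conservation and the balancing condition hold.
# Edges absent from x are treated as carrying zero flow.
def is_switching_flow(V, s0, s1, o, d, x):
    E = {(v, s0[v]) for v in V} | {(v, s1[v]) for v in V}
    if any(x.get(e, 0) < 0 for e in E):
        return False
    for v in V:                                   # balancing condition
        x0, x1 = x.get((v, s0[v]), 0), x.get((v, s1[v]), 0)
        if s0[v] != s1[v] and not (x1 <= x0 <= x1 + 1):
            return False
    for v in V:                                   # flow conservation
        out_v = sum(x.get(e, 0) for e in E if e[0] == v)
        in_v = sum(x.get(e, 0) for e in E if e[1] == v)
        demand = 1 if v == o else (-1 if v == d else 0)
        if out_v - in_v != demand:
            return False
    return True
\end{verbatim}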
\section{$\mbox{\rm ARRIVAL}$ is in $\mathrm{coNP}$}\label{sec:conp} Given an instance $(G,o,d)$ of $\mbox{\rm ARRIVAL}$, the main idea is to construct in polynomial time an instance $(\bar{G},o,\bar{d})$ such that $\mbox{\sc Run}(G,o,d)$ terminates if and only if $\mbox{\sc Run}(\bar{G},o,\bar{d})$ does not terminate. As the main technical tool, we prove that nontermination is equivalent to the arrival at a ``dead end''.
\begin{definition}
Let $G=(V,E,s_0,s_1)$ be a switch graph, and let $o,d\in V$, $o\neq
d$. A \emph{dead end} is a vertex from which there is no directed
path to the destination $d$ in the graph $(V,E)$. A \emph{dead
edge} is an edge $e=(v,w)$ whose head $w$ is a dead end. An edge
that is not dead is called \emph{hopeful}; the length of the
shortest directed path from its head $w$ to $d$ is called its
\emph{desperation}. \end{definition}
By computing the tree of shortest paths to $d$, using inverse breadth-first search from $d$, we can identify the dead ends in polynomial time. Obviously, if $\mbox{\sc Run}(G,o,d)$ ever reaches a dead end, it will not terminate, but the converse is also true. For this, we need one auxiliary result.
\begin{lemma}\label{lem:glimmer}
Let $G=(V,E,s_0,s_1)$ be a switch graph, $o,d\in V$, $o\neq d$, and
let $e=(v,w)\in E$ be a hopeful edge of desperation $k$. Then
$\mbox{\sc Run}(G,o,d)$ will traverse $e$ at most $2^{k+1}-1$ times. \end{lemma}
\begin{proof} Induction on the desperation $k$ of $e=(v,w)$. If $k=0$,
then $w=d$, and indeed, the run will traverse $e$ at most $2^1-1=1$
times. Now suppose $k>0$ and assume that the statement is true for
all hopeful edges of desperation $k-1$. In particular, one of the
two successor edges $(w, s_0(w))$ and $(w,s_1(w))$ is such a hopeful
edge, and is therefore traversed at most $2^{k}-1$ times. By
alternation at $w$, the other successor edge is traversed at most
once more, hence at most $2^k$ times. By flow conservation, the
edges entering $w$ (in particular $e$) can be traversed at most
$2^k+2^k-1=2^{k+1}-1$ times. \end{proof}
\begin{lemma}\label{lem:deadend}
Let $G=(V,E,s_0,s_1)$ be a switch graph, and let $o,d\in V$, $o\neq
d$. If $\mbox{\sc Run}(G,o,d)$ does not terminate, it will reach a dead end. \end{lemma}
\begin{proof} By Lemma~\ref{lem:glimmer}, hopeful edges can be
traversed only finitely many times, hence if the run cycles, it
eventually has to traverse a dead edge and thus reach a dead end. \end{proof}
Now we can prove the main result of this section.
\begin{theorem} Problem $\mbox{\rm ARRIVAL}$ is in $\mathrm{coNP}$. \end{theorem}
\begin{proof}
Let $(G,o,d)$ be an instance, $G=(V,E,s_0,s_1)$. We transform
$(G,o,d)$ into a
new instance $(\bar{G},o,\bar{d})$,
$\bar{G}=(\bar{V},\bar{E},\bar{s}_0,\bar{s}_1)$ as follows. We set
$\bar{V}=V\cup\{\bar{d}\}$, where $\bar{d}$ is an additional vertex, the
new destination. We define $\bar{s}_0,\bar{s}_1$ as follows. For
every dead end $w$, we set \begin{equation} \bar{s}_0 (w) = \bar{s}_1 (w) := \bar{d}. \label{eq:newdest} \end{equation} For the old destination $d$, we install the loop \begin{equation} \bar{s}_0 (d) = \bar{s}_1 (d) := d. \label{eq:loop} \end{equation} For the new destination, $\bar{s}_0(\bar{d})$ and $\bar{s}_1(\bar{d})$ are chosen arbitrarily. In all other cases, $\bar{s}_0(v):=s_0(v)$ and $\bar{s}_1(v):=s_1(v)$. This defines $\bar{E}$ and hence $\bar{G}$.
The crucial properties of this construction are the following:
\begin{itemize} \item[(i)] If $\mbox{\sc Run}(G,o,d)$ reaches the destination $d$, it has not
visited any dead ends, hence $s_0$ and $\bar{s}_0$ as well as $s_1$
and $\bar{s}_1$ agree on all visited vertices except $d$. This means
that $\mbox{\sc Run}(\bar{G},o,\bar{d})$ will also reach $d$, but then cycle
due to the loop that we have installed in \eqref{eq:loop}. \item[(ii)] If $\mbox{\sc Run}(G,o,d)$ cycles, it will at some point reach a first
dead end $w$, by Lemma~\ref{lem:deadend}. As $s_0$ and $\bar{s}_0$ as
well as $s_1$ and $\bar{s}_1$ agree on all previously visited
vertices, $\mbox{\sc Run}(\bar{G},o,\bar{d})$ will also reach $w$, but then
terminate due to the edges from $w$ to $\bar{d}$ that we have
installed in \eqref{eq:newdest}. \end{itemize}
To summarize, $\mbox{\sc Run}(G,o,d)$ terminates if and only if $\mbox{\sc Run}(\bar{G},o,\bar{d})$ does not terminate. Since $(\bar{G},o,\bar{d})$ can be constructed in polynomial time, we can verify in polynomial time that $(G,o,d)$ is a No-instance by verifying that $(\bar{G},o,\bar{d})$ is a Yes-Instance via Theorem~\ref{thm:np}. \end{proof}
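A sketch of this construction in Python (dead ends found by inverse breadth-first search from $d$, followed by the modifications \eqref{eq:newdest} and \eqref{eq:loop}) is as follows:
\begin{verbatim}
# Sketch of the coNP reduction: find the dead ends by inverse BFS from d,
# then build (G_bar, o, d_bar) as in the proof above.
from collections import deque

def dead_ends(V, s0, s1, d):
    pred = {v: set() for v in V}            # predecessors of each vertex
    for v in V:
        pred[s0[v]].add(v)
        pred[s1[v]].add(v)
    reach, queue = {d}, deque([d])
    while queue:                            # inverse breadth-first search
        w = queue.popleft()
        for u in pred[w]:
            if u not in reach:
                reach.add(u)
                queue.append(u)
    return set(V) - reach                   # vertices with no path to d

def conp_instance(V, s0, s1, o, d, d_bar="d_bar"):
    s0_bar, s1_bar = dict(s0), dict(s1)
    for w in dead_ends(V, s0, s1, d):
        s0_bar[w] = s1_bar[w] = d_bar       # redirect dead ends to d_bar
    s0_bar[d] = s1_bar[d] = d               # loop at the old destination
    s0_bar[d_bar] = s1_bar[d_bar] = d_bar   # arbitrary successors for d_bar
    return list(V) + [d_bar], s0_bar, s1_bar, o, d_bar
\end{verbatim}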
\section{Is $\mbox{\rm ARRIVAL}$ in $\mathrm{P}$?} \label{sec:ip} Observation~\ref{obs:termination_flow} and Lemma~\ref{lem:flow_termination} show that $\mbox{\rm ARRIVAL}$ can be decided by checking the solvability of a system of linear (in)equalities (\ref{eq:conserve_flow}) and (\ref{eq:balance_flow}) over the nonnegative integers.
The latter is an NP-complete problem in general: many of the standard NP-complete problems, e.g.\ SAT (satisfiability of boolean formulas) can easily be reduced to finding an integral vector that satisfies a system of linear (in)equalities.
In our case, we have a flow structure, though, and finding integral flows in a network is a well-studied and easy problem~\cite[Chapter 8]{KV12}. In particular, if only the flow conservation constraints (\ref{eq:conserve_flow}) are taken into account, the existence of a nonnegative integral solution is equivalent to the existence of a nonnegative real solution. This follows from the classical \emph{Integral Flow Theorem}, see~\cite[Corollary 8.7]{KV12}. Real solutions to systems of linear (in)equalities can be found in polynomial time through linear programming~\cite[Chapter 4]{KV12}.
However, the additional balancing constraints (\ref{eq:balance_flow}) induced by alternation at the switches, make the situation more complicated. Figure~\ref{fig:fractional} depicts an instance which has a real-valued ``switching flow'' satisfying constraints (\ref{eq:conserve_flow}) and (\ref{eq:balance_flow}), but no integral one (since the run does not terminate).
\begin{figure}
\caption{The run will enter the loop at $t$ and cycle, so there is no
(integral) switching flow. But a real-valued ``switching flow''
(given by the numbers) exists. Solid edges point to even or unique successors,
dashed edges to odd successors. }
\label{fig:fractional}
\end{figure}
We conclude with a result that summarizes the situation and may be the basis for further investigations.
\begin{theorem}\label{thm:ip}
Let $G=(V,E,s_0,s_1)$ be a switch graph, and let $o,d\in V$, $o\neq
d$. $\mbox{\sc Run}(G,o,d)$ terminates if and only if there exists an integral
solution satisfying the constraints (\ref{eq:conserve_flow}) and
(\ref{eq:balance_flow}). In this case, the run profile $\mathbf{x}(G,o,d)$
is the unique integral solution that minimizes the linear objective
function $\Sigma(\mathbf{x})=\sum_{e\in E}x_e$ subject to the constraints
(\ref{eq:conserve_flow}) and (\ref{eq:balance_flow}). \end{theorem}
\begin{proof}
Observation~\ref{obs:termination_flow} and
Lemma~\ref{lem:flow_termination} show the equivalence between
termination and existence of an integral solution (a switching
flow). Suppose that the run terminates with run profile
$\mathbf{x}(G,o,d)$. We have $\mathbf{x}(G,o,d)\leq\mathbf{x}$ for every switching flow
$\mathbf{x}$, by Lemma~\ref{lem:flow_termination}. In particular,
$\Sigma(\mathbf{x}(G,o,d))\leq\Sigma(\mathbf{x})$, so the run profile has minimum
value among all switching flows. A different switching flow $\mathbf{x}$
of the same value would have to be smaller in at least one
coordinate, contradicting $\mathbf{x}(G,o,d)\leq\mathbf{x}$. \end{proof}
Theorem~\ref{thm:ip} shows that the existence of $\mathbf{x}(G,o,d)$ and its value can be established by solving an integer program~\cite[Chapter 5]{KV12}. Moreover, this integer program is of a special kind: its unique optimal solution is at the same time a least element w.r.t.\ the partial order ``$\leq$'' over the set of feasible solutions.
\section{Conclusion} The main question left open is whether the zero-player graph game $\mbox{\rm ARRIVAL}$ is in $\mathrm{P}$. There are three well-known two-player graph games in $\mathrm{NP}\cap\mathrm{coNP}$ for which membership in $\mathrm{P}$ is also not established: \emph{simple stochastic games}, \emph{parity
games}, and \emph{mean-payoff games}. All three are even in $\mathrm{UP}\cap\mathrm{coUP}$, meaning that there exist efficient verifiers for Yes- and No-instances that accept \emph{unique} certificates~\cite{Con,Jur}. In all three cases, the way to prove this is to assign payoffs to the vertices in such a way that they form a certificate if and only if they solve a system of equations with a unique solution.
It is natural to ask whether $\mbox{\rm ARRIVAL}$ is also in $\mathrm{UP}\cap\mathrm{coUP}$. We do not know the answer. The natural approach suggested by Theorem~\ref{thm:ip} is to come up with a verifier that does not accept just any switching flow, but only the unique one of minimum norm corresponding to the run profile. However, verifying optimality of a feasible integer program solution is hard in general, so for this approach to work, one would have to exploit the specific structure of the integer program at hand. We do not know how to do this.
As problems in $\mathrm{NP}\cap\mathrm{coNP}$ cannot be $\mathrm{NP}$-hard (unless $\mathrm{NP}$ and $\mathrm{coNP}$ collapse), other concepts of hardness could be considered for $\mbox{\rm ARRIVAL}$. As a first step in this direction, Karthik C.~S.~\cite{Kar} has shown that a natural search version of $\mbox{\rm ARRIVAL}$ is contained in the complexity class $\mathrm{PLS}$ (Polynomial Local Search) which has complete problems not known to be solvable in polynomial time. $\mathrm{PLS}$-hardness of $\mbox{\rm ARRIVAL}$ would not contradict common complexity theoretic beliefs; establishing such a hardness result would at least provide a satisfactory explanation why we have not been able to find a polynomial-time algorithm for $\mbox{\rm ARRIVAL}$.
\section{Acknowledgment} We thank the referees for valuable comments and Rico Zenklusen for constructive discussions.
\end{document}
\begin{document}
\title{Every smooth Jordan curve has an inscribed rectangle with aspect ratio equal to $\sqrt{3}$}
\abstract{We use Batson's lower bound on the nonorientable slice genus of $(2n,2n-1)$-torus knots to prove that for any $n \geq 2$, every smooth Jordan curve has an inscribed rectangle of aspect ratio $\tan(\frac{\pi k}{2n})$ for some $k\in \{1,...,n-1\}$. Setting $n = 3$, we have that every smooth Jordan curve has an inscribed rectangle of aspect ratio $\sqrt{3}$.}
\section{Introduction} Every Jordan curve has an inscribed rectangle. Vaughan proved this fact by parameterizing a M\"obius strip above the plane which bounds the curve, and for which self-intersections correspond to inscribed rectangles~\cite{survey}. If we fix a positive real number $r$, we may ask if every Jordan curve has an inscribed rectangle of aspect ratio $r$. For all $r\neq 1$, this problem is open, even for smooth or polygonal Jordan curves. The case of $r = 1$ is the famous inscribed square problem, which has been resolved for a large class of curves. The known partial results include \cite{thesis}, in which it is shown that curves which are suitably close to being convex must have inscribed rectangles of aspect ratio $\sqrt{3}$, and \cite{convex}, in which it is shown that every convex curve has an inscribed rectangle of every aspect ratio.
We present a 4-dimensional generalization of Vaughan's proof which lets us get some control on the aspect ratio of the inscribed rectangles. In particular, we will resolve the case of $r = \sqrt{3}$ for all smooth Jordan curves.
In regards to notation, $M$ will denote the M\"obius strip $\text{Sym}_2(S^1) = (S^1\times S^1)/(\mathbb{Z}/2\mathbb{Z})$, where the $\mathbb{Z}/2\mathbb{Z}$ action consists of swapping the elements of the ordered pair. We write $\{x,y\}$ to denote the equivalence class represented by the pair $(x,y)$. As a notational convenience, we abide by the convention that $S^1$ denotes the unit complex numbers.
\section{The Proof} \begin{thm} Let $\gamma: S^1\to \mathbb{C}$ be a $C^\infty$ injective function with nowhere vanishing derivative. Then for all integers $n\geq 2$, there exists an integer $k\in \{1,...,n-1\}$ so that $\gamma$ has an inscribed rectangle of aspect ratio $\tan(\frac{\pi k}{2n})$. \end{thm}
We first prove a lemma.
\begin{lem} Let $K_n$ denote the knot in $\mathbb{C}\times S^1$ parameterized by $g\mapsto (g,g^{2n})$ for $g\in S^1$. Then if $n \geq 3$, there is no smooth embedding of the M\"obius strip $M\hookrightarrow \mathbb{C}\times S^1\times \mathbb{R}_{\geq 0}$ such that $\partial M$ maps to $K_n\times \{0\}$. \end{lem}
\begin{proof} The manifold $\mathbb{C}\times S^1$ is diffeomorphic to the interior of the solid torus, so given any embedding of the solid torus into $S^3$, the image of $K_n$ under this embedding yields a knot in $S^3$ which must have non-orientable 4-genus at most that of $K_n$. By embedding the solid torus with a single axial twist, we can make the image of $K_n$ be the torus knot $T_{2n,2n-1}$. In \cite{Batson}, Batson uses Heegaard-Floer homology to show that the non-orientable 4-genus of $T_{2n,2n-1}$ is at least $n-1$. This means that for $n\geq 3$, the knot $K_n$ cannot be bounded by a M\"obius strip in the 4-manifold $\mathbb{C}\times S^1\times \mathbb{R}_{\geq 0}$. \end{proof} \begin{proof}[Proof of Theorem 1] Suppose $\gamma: S^1\to \mathbb{C}$ is a $C^\infty$ injective function with nowhere vanishing derivative, and no inscribed rectangles of aspect ratio $\tan(\frac{\pi k}{2n})$ for any $k\in \{1,...,n-1\}$. By the smooth inscribed square theorem, we can assume $n\geq 3$. We define $\mu: M\to \mathbb{C}^2$ by the formula $$ \mu\{x,y\} = \left( \frac{\gamma(x) + \gamma(y)}{2}, (\gamma(y) - \gamma(x))^{2n} \right) $$ We see that if $\mu\{x,y\} = \mu\{w,z\}$, then the pairs share a midpoint $m$, and the angle at $m$ between $x$ and $w$ must be a multiple of $\pi/n$. Therefore, either $\{x,y\} = \{w,z\}$ or $(x,w,y,z)$ form the vertices of a rectangle of aspect ratio $\tan(\frac{\pi k}{2n})$ for some $k\in \{1,...,n-1\}$. Furthermore, we see that the differential of $\mu$ is non-degenerate. Therefore, $\mu$ is a smooth embedding of $M$ into $\mathbb{C}^2$. If we take a small tubular neighborhood $N$ around $\mathbb{C}\times\{0\}$, the M\"obius strip $\mu(M)$ intersects $\partial N$ at a knot which is isotopy equivalent to the knot $K_n$ described in our lemma. We have therefore constructed a smooth M\"obius strip bounding $K_n$, which contradicts Lemma 1. \end{proof}
\begin{crl} Every smooth Jordan curve has an inscribed rectangle of aspect ratio $\sqrt{3}$. \end{crl}
\begin{proof} Apply Theorem 1 to $n = 3$. We have a rectangle of aspect ratio $\tan(\frac{\pi}{6}) = \frac{1}{\sqrt{3}}$ or $\tan(\frac{2\pi}{6}) = \sqrt{3}$. Aspect ratio is only defined up to reciprocals, so either way, we have a rectangle of the desired aspect ratio. \end{proof}
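The corollary can also be illustrated numerically (this illustration plays no role in the proof). The Python sketch below samples an example smooth curve, here an ellipse, and searches for two chords with approximately equal midpoints and equal lengths; such a pair forms the diagonals of an inscribed rectangle, whose aspect ratio is $\tan(\theta/2)$, where $\theta$ is the angle between the two diagonals. The curve, the number of sample points and the rounding tolerance are arbitrary choices.

\begin{verbatim}
# Heuristic illustration: look for two chords of a sampled smooth curve with
# (approximately) the same midpoint and the same length; they are then the
# diagonals of an inscribed rectangle of aspect ratio tan(theta/2), where
# theta is the angle between the two chords.
import numpy as np
from collections import defaultdict

def curve(s):                                    # example curve: an ellipse
    return np.array([2.0 * np.cos(s), np.sin(s)])

N = 600
pts = [curve(s) for s in np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)]

buckets = defaultdict(list)                      # key: rounded (midpoint, length)
for i in range(N):
    for j in range(i + 1, N):
        p, q = pts[i], pts[j]
        mid, diff = (p + q) / 2.0, q - p
        key = (round(mid[0], 2), round(mid[1], 2),
               round(float(np.linalg.norm(diff)), 2))
        buckets[key].append(diff)

target, best = np.sqrt(3.0), None
for chords in buckets.values():
    for a in range(len(chords)):
        for b in range(a + 1, len(chords)):
            u, v = chords[a], chords[b]
            cosang = abs(np.dot(u, v)) / (np.linalg.norm(u) * np.linalg.norm(v))
            ratio = np.tan(np.arccos(np.clip(cosang, 0.0, 1.0)) / 2.0)
            if ratio < 1e-9:
                continue
            ratio = max(ratio, 1.0 / ratio)      # aspect ratio up to reciprocal
            if best is None or abs(ratio - target) < abs(best - target):
                best = ratio

print("closest aspect ratio found:", best, "target:", target)
\end{verbatim}

Note that, by the result of \cite{convex} quoted in the introduction, a convex curve such as this ellipse has inscribed rectangles of every aspect ratio, so the search is expected to return a value close to $\sqrt{3}$; the interest of the theorem lies, of course, in non-convex curves.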
\nocite{*}
{}
\end{document}
\begin{document}
\title{On irrationality measure of Thue-Morse constant}
\begin{abstract} We provide a non-trivial measure of irrationality for a class of Mahler numbers defined with infinite products which cover the Thue-Morse constant. Among the other things, our results imply a generalization to~\cite{bugeaud_2011}. \end{abstract}
\section{Introduction}
Let $\xi\in\mathbb{R}$ be an irrational number. Its irrationality exponent $\mu(\xi)$ is defined to be the supremum of all $\mu$ such that the inequality $$
\left|\xi - \frac{p}{q}\right|<q^{-\mu} $$ has infinitely many rational solutions $p/q$. This is an important property of a real number since it shows, how close the given real number can be approximated by rational numbers in terms of their denominators. The irrationality exponent can be further refined by the following notion.
Let $\psi(q):\mathbb{R}_{\ge 0} \to \mathbb{R}_{\ge 0}$ be a function which tends to zero as $q\to \infty$. Any function $\psi$ with these properties is referred to as the \emph{approximation function}. We say that an irrational number $\xi$ is \emph{$\psi$-well approximable} if the inequality \begin{equation}\label{def_well}
\left|\xi -
\frac{p}{q}\right|<\psi(q) \end{equation} has infinitely many solutions $p/q\in\mathbb{Q}$. Conversely, we say that $\xi$ is \emph{$\psi$-badly approximable} if~\eqref{def_well} has only finitely many solutions. Finally, we say that $\xi$ is \emph{badly approximable} if it is $c/q^2$-badly approximable for some positive constant $c>0$.
If a number $\xi\in\mathbb{R}$ is $\psi$-badly approximable, we also say that $\psi$ is a \emph{measure of irrationality of $\xi$}.
The statement $\mu(\xi) = \mu$ is equivalent to saying that for any $\epsilon>0$, $\xi$ is both $q^{-\mu-\epsilon}$-well approximable and $q^{-\mu+\epsilon}$-badly approximable. On the other hand, $(q^2\log q)^{-1}$-badly approximable numbers are in general approximated less well by rationals than $(q^2\log^2 q)^{-1}$-badly approximable numbers, even though both have irrationality exponent equal to 2.
\begin{remark} \label{im_same_c} It is quite easy to verify
that, for any approximation function $\psi$, for any $\xi\in\mathbb{R}$ and any $c\in\mathbb{Q}\setminus\{0\}$, the numbers $\xi$ and $c\xi$ simultaneously are or are not $\psi$-badly approximable. Similarly, they simultaneously are or are not $\psi$-well approximable. \end{remark}
Substantial progress has been made recently in determining the Diophantine approximation properties of so-called Mahler numbers. Their definition varies slightly in the literature. In the present paper we define Mahler functions and Mahler numbers as follows. An analytic function $F(z)$ is called a \emph{Mahler function} if it satisfies the functional equation \begin{equation}\label{def_mahlf} \sum_{i=0}^n P_i(z)F(z^{d^i}) = Q(z) \end{equation} where $n$ and $d$ are positive integers with $d\ge 2$, $P_i(z), Q(z) \in \mathbb{Q}[z]$, $i=0,\dots,n$, and $P_0(z)P_n(z) \neq 0$. We will only consider those Mahler functions $F(z)$ which lie in the space $\mathbb{Q}((z^{-1}))$ of Laurent series. Then, for any $\alpha\in\overline{\mathbb{Q}}$ inside the disc of convergence of $F(z)$, the real number $F(\alpha)$ is called a \emph{Mahler number}.
One of the classical examples of Mahler numbers is the so-called Thue-Morse constant, which is defined as follows. Let $\vv t=(t_0,t_1,\dots)=(0,1,1,0,1,0,0,\dots)$ be the Thue-Morse sequence, that is the sequence $(t_n)_{n\in\mathbb{N}_0}$, where $\mathbb{N}_0:=\mathbb{N}\cup\{0\}$, defined by the recurrence relations $t_0=0$ and for all $n\in\mathbb{N}_0$ $$ \begin{aligned} t_{2n}&=t_n,\\ t_{2n+1}&=1-t_n. \end{aligned} $$ Then, the Thue-Morse constant $\tau_{TM}$ is the real number whose binary expansion is the Thue-Morse word. In other words, \begin{equation} \label{defTM} \tau_{TM}:=\sum_{k=0}^{\infty}\frac{t_k}{2^{k+1}}. \end{equation}
It is well known that $\tau_{TM}$ is a Mahler number. Indeed, one can check that $\tau_{TM}$ is related with the generating function \begin{equation} \label{ftm_presentation} f_{TM}(z):=\sum_{i=0}^{\infty}(-1)^{t_i}z^{-i} \end{equation} by the formula $\tau_{TM} = \frac12 (1- \frac12 f_{TM}(2))$. At the same time, the function $f_{TM}(z)$, defined by~\eqref{ftm_presentation}, admits the following presentation~\cite[\S13.4]{AS2003}: $$ f_{TM}(z)=\prod_{k=0}^{\infty}\left(1-z^{-2^k}\right), $$ and the following functional equation holds: \begin{equation} \label{func_eq} f_{TM}(z^2)=\frac{z}{z-1}f_{TM}(z). \end{equation} So it is indeed a Mahler function.
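As a quick numerical sanity check (an illustration only, not needed in what follows), one can compute $\tau_{TM}$ from the recurrence relations, evaluate $f_{TM}(2)$ both via the series~\eqref{ftm_presentation} and via the truncated infinite product, and confirm the relation $\tau_{TM} = \frac12 (1- \frac12 f_{TM}(2))$ to machine precision. A possible Python sketch:

\begin{verbatim}
# Sanity check: the Thue-Morse constant from the recurrence
# t_0 = 0, t_{2n} = t_n, t_{2n+1} = 1 - t_n, compared with the value
# obtained from the series and from the infinite product for f_TM(2).
t = [0] * 1024                          # Thue-Morse sequence t_0, t_1, ...
for n in range(1, len(t)):
    t[n] = t[n // 2] if n % 2 == 0 else 1 - t[n // 2]

tau = sum(t[k] / 2.0 ** (k + 1) for k in range(len(t)))         # (defTM)

f_series = sum((-1) ** t[i] / 2.0 ** i for i in range(len(t)))  # series for f_TM(2)
f_prod = 1.0
for k in range(30):                     # truncated infinite product
    f_prod *= 1.0 - 2.0 ** -(2 ** k)

print(tau)                              # ~ 0.4124540336401...
print(0.5 * (1.0 - 0.5 * f_series))     # same value, via the series
print(0.5 * (1.0 - 0.5 * f_prod))       # same value, via the product
\end{verbatim}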
Approximation of Mahler numbers by algebraic numbers has been studied within a broad research direction on transcendence and algebraic independence of these numbers. We refer the reader to the monograph~\cite{Ni1996} for more details on this topic.
It has to be mentioned that, though some results on approximation by algebraic numbers can be specialized to results on rational approximations, most often they become rather weak. This happens because the results on approximation by algebraic numbers necessarily involve complicated constructions, which results in some loss of precision. A more fundamental reason is that rational numbers enjoy a significantly more regular (and much better understood) distribution on the real line than algebraic numbers.
The history of research on rational approximation properties of Mahler numbers probably started in the early 1990s with the work of Shallit and van der Poorten~\cite{vdP_S}, who considered a class of numbers that contains some Mahler numbers, including the Fredholm constant $\sum_{n=0}^{\infty}10^{-2^n}$, and proved that all numbers from that class are badly approximable.
The next result on the subject that the authors are aware of is due to Adamczewski and Cassaigne. In 2006, they proved~\cite{adamczewski_cassaigne_2006} that every automatic number (the automatic numbers form, according to~\cite[Theorem 1]{becker_1994}, a subset of the Mahler numbers) has finite irrationality exponent or, equivalently, that no automatic number is a Liouville number. Later, this result was extended to all Mahler numbers~\cite{bell_bugeaud_coons_2015}. We also mention the result of Adamczewski and Rivoal~\cite{adamczewski_rivoal_2009}, who showed that some classes of Mahler numbers are $\psi$-well approximable, for various functions $\psi$ depending on the class under consideration.
The Thue-Morse constant is one of the first Mahler numbers whose irrationality exponent was computed precisely; this was done by Bugeaud in 2011~\cite{bugeaud_2011}. This result served as a foundation for several other works, establishing precise values of irrationality exponents for wider and wider classes of Mahler numbers, see for example~\cite{coons_2013, guo_wu_wen_2014, wu_wen_2014}.
Bugeaud, Han, Wen and Yao~\cite{bugeaud_han_wen_yao_2015} computed estimates of $\mu(f(b))$ for a large class of Mahler functions $f(z)$, provided that the distribution of the indices at which the Hankel determinants of $f(z)$ do not vanish (or, equivalently, the continued fraction of $f(z)$) is known. In many cases, these estimates lead to the precise value of $\mu(f(b))$. We will consider this result in more detail in the next subsection. Later, Badziahin~\cite{badziahin_2017} provided a continued fraction expansion for the functions of the form $$ f(z) = \prod_{t=0}^\infty P(z^{-d^t}) $$ where $d\in\mathbb{N}, d\ge 2$ and $P(z)\in \mathbb{Q}[z]$ with $\deg P< d$. This result, complemented by~\cite{bugeaud_han_wen_yao_2015}, allows one to find sharp estimates for the values of these functions at integer points.
Despite rather extensive studies on irrationality exponents of Mahler numbers, very little is known about their sharper Diophantine approximation properties. In 2015, Badziahin and Zorin~\cite{BZ2015} proved that the Thue-Morse constant $\tau_{TM}$, together with many other values of $f_{TM}(b)$, $b\in\mathbb{N}$, is not badly approximable. Moreover, they proved
\begin{theoremBZ} The Thue-Morse constant $\tau_{TM}$ is $\frac{C}{q^2\log\log q}$-well approximable, for some explicit constant $C>0$. \end{theoremBZ} Later, in~\cite{badziahin_zorin_2015} they extended this result to the values $f_3(b)$, where $b$ belongs to a certain subset of positive integers, and $$ f_3(z):= \prod_{t=0}^\infty (1 - z^{-3^t}). $$
Khintchine's theorem implies that, outside a set of Lebesgue measure zero, all real numbers are $\frac{1}{q^2\log q}$-well approximable and $\frac{1}{q^2\log^2 q}$-badly approximable. Of course, this metric result implies nothing for any particular real number, or for a countable family of real numbers. However, it sets some expectations on the Diophantine approximation properties of real numbers.
The result of Theorem~BZ does not provide the well-approximability for the Thue-Morse constant suggested by Khintchine's theorem, falling somewhat short of it. At the same time, the bad-approximability side suggested by Khintchine's theorem seems to be hard to establish (or even to approach) in the case of the Thue-Morse constant and related numbers. In this paper we prove that a subclass of Mahler numbers, containing, in particular, the Thue-Morse constant, is $(q\exp(K\sqrt{\log q\log\log q}))^{-2}$-badly approximable for some constant $K>0$, see Theorem~\ref{th_main}
at the end of Subsection~\ref{ss_CF_LS}. This result is still quite far from what is suggested by Khintchine's theorem; however, it significantly improves on the best result currently available~\cite{bugeaud_2011}, namely that the irrationality exponent of the Thue-Morse constant equals 2.
\hidden{ \begin{theorem}\label{th_main} Let $d\ge 2$ be an integer and \begin{equation}\label{def_f} f(z) = \prod_{t=0}^\infty P(z^{-d^t}), \end{equation} where $P(z)\in \mathbb{Z}[z]$ is a polynomial such that $P(1) = 1$ and $\deg P(z)<d$. Assume that the series $f(z)$ is badly approximable (i.e. degrees of all partial quotients of $f(z)$ are bounded from above by an absolute constant). Then there exists a positive constant $C$ such that for any $b\in\mathbb{Z}$, $b\geq 2$, we have either $f(b) = 0$ or $f(b)$ is $(q\exp(C\sqrt{\log q\log\log q}))^{-2}$-badly approximable. \end{theorem} }
\subsection{Continued fractions of Laurent series} \label{ss_CF_LS}
Consider the set $\mathbb{Q}((z^{-1}))$ of Laurent series equipped with the standard valuation, which is defined as follows: for $f(z) = \sum_{k=-d}^\infty c_kz^{-k} \in \mathbb{Q}((z^{-1}))$, its valuation $\|f(z)\|$ is the largest $d$ such that the coefficient $c_{-d}$ of $z^{d}$ is non-zero. For example, for polynomials
$f(z)$ the valuation $\|f(z)\|$ coincides with their degree. It is well known that in this setting the notion of continued fraction is well defined. In other words, every $f(z)\in \mathbb{Q}((z^{-1}))$ can be written as $$ f(z) = [a_0(z), a_1(z),a_2(z),\ldots] = a_0(z) + \mathop{\mathbf{K}}_{n=1}^\infty \frac{1}{a_n(z)}, $$ where $a_i(z)$, $i\in\mathbb{Z}_{\ge 0}$, are non-zero polynomials with rational coefficients of degree at least 1.
The continued fractions of Laurent series share most of the properties of the classical ones~\cite{poorten_1998}. Furthermore, in this setting we have an even stronger version of the Legendre theorem: \begin{theoreml}\label{th_legendre} Let $f(z)\in \mathbb{Q}((z^{-1}))$. Then $p(z)/q(z)\in \mathbb{Q}(z)$ in reduced form is a convergent of $f(z)$ if and only if $$
\left\| f(z) - \frac{p(z)}{q(z)}\right\| < -2\|q(z)\|. $$ \end{theoreml} Its proof can be found in~\cite{poorten_1998}. Moreover, if $p_k(z)/q_k(z)$ is the $k$th convergent of $f(z)$ in its reduced form, then \begin{equation}\label{eq_conv}
\left\| f(z) - \frac{p_k(z)}{q_k(z)}\right\| =
-\|q_k(z)\|-\|q_{k+1}(z)\|. \end{equation}
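To illustrate these notions (the following simple example is included only for orientation and is not taken from the cited works), consider
$$
f(z)=\frac{z^3}{z^2-1}=z+z^{-1}+z^{-3}+z^{-5}+\dots=z+\frac{1}{z+\frac{1}{-z}}=[z,\ z,\ -z].
$$
Here $a_0(z)=a_1(z)=z$ and $a_2(z)=-z$, the first convergent is $p_1(z)/q_1(z)=(z^2+1)/z$ with $q_2(z)=1-z^2$, and indeed
$$
\left\|f(z)-\frac{z^2+1}{z}\right\|=\|z^{-3}+z^{-5}+\dots\|=-3=-\|q_1(z)\|-\|q_2(z)\|,
$$
in accordance with~\eqref{eq_conv}.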
For a Laurent series $f(z)\in \mathbb{Q}((z^{-1}))$, consider its value $f(b)$, where $b\in \mathbb{N}$ lies within the disc of convergence of $f$. It is well known that the continued fraction of $f(b)$ (or indeed of any real number $x$) encodes, in a rather straightforward way, the approximation properties of this number. At the same time, it is a much subtler question how to read such properties of $f(b)$ from the continued fraction of $f(z)$. The problem comes from the fact that after specialization at $z=b$, the partial quotients of $f(z)$ become rational, but often not integer, numbers, or they may even vanish. Therefore a recombination of partial quotients is often needed to construct the proper continued fraction of $f(b)$. Problems of this type have been studied in the beautiful article~\cite{vdP_S}. Despite this complication, in many cases some information on the Diophantine approximation properties of $f(b)$ can be extracted. In particular, this is the case for Mahler numbers. Bugeaud, Han, Wen and Yao~\cite{bugeaud_han_wen_yao_2015} provided the following result that links the continued fraction of $f(z)$ and the irrationality exponents of the values $f(b)$, $b\in \mathbb{N}$. In fact, they formulated it in terms of Hankel determinants; the present reformulation can be found in~\cite{badziahin_2017}:
\begin{theorembhwy} Let $d\ge 2$ be an integer and $f(z) = \sum_{n=0}^\infty c_nz^n$ converge inside the unit disk. Suppose that there exist integer polynomials $A(z), B(z), C(z), D(z)$ with $B(0)D(0)\neq 0$ such that \begin{equation}\label{mahl_eq} f(z) = \frac{A(z)}{B(z)} + \frac{C(z)}{D(z)}f(z^d). \end{equation} Let $b\ge 2$ be an integer such that $B(b^{-d^n})C(b^{-d^n})D(b^{-d^n})\neq 0$ for all $n\in\mathbb{Z}_{\ge 0}$. Define $$ \rho:= \limsup_{k\to\infty} \frac{\deg q_{k+1}(z)}{\deg q_k(z)}, $$ where $q_k(z)$ is the denominator of $k$th convergent to $z^{-1}f(z^{-1})$. Then $f(1/b)$ is transcendental and $$ \mu(f(1/b))\le (1+\rho)\min\{\rho^2,d\}. $$ \end{theorembhwy}
A corollary of this theorem is that, as soon as \begin{equation}\label{cond_cf} \limsup_{k\to\infty} \frac{\deg q_{k+1}(z)}{\deg q_k(z)}=1, \end{equation} the irrationality exponent of $f(1/b)$ equals two. A natural question then arises: can we say anything more about the Diophantine approximation properties of $f(1/b)$ in the case where the continued fraction of $z^{-1}f(z^{-1})$ satisfies a stronger condition than~\eqref{cond_cf}? In particular, what if the degrees of all partial quotients $a_k(z)$ are bounded by some absolute constant, or the partial quotients are even all linear? Here we answer this question for a subclass of Mahler functions.
The main result of this paper is the following. \begin{theorem}\label{th_main} Let $d\ge 2$ be an integer and \begin{equation}\label{def_f} f(z) = \prod_{t=0}^\infty P(z^{-d^t}), \end{equation} where $P(z)\in \mathbb{Z}[z]$ is a polynomial such that $P(1) = 1$ and $\deg P(z)<d$. Assume that the series $f(z)$ is badly approximable (i.e. the degrees of all partial quotients of $f(z)$ are bounded from above by an absolute constant). Then there exists a positive constant $K$ such that for any $b\in\mathbb{Z}$,
$|b|\geq 2$, we have either $f(b) = 0$ or $f(b)$ is $q^{-2}\exp(-K\sqrt{\log q\log\log q})$-badly approximable. \end{theorem} \hidden{ \begin{remark} The proof of Theorem~\ref{th_main} can be simplified a bit in the case $b\geq 2$ (that is, if we do not consider cases corresponding to the negative values of $b$). At the same time, we can easily deduce the result of Theorem~\ref{th_main} in its whole generality from the result for positive values of $b$ only. Indeed, if $d$ is even, then $$ f(-b)=\frac{P(-b)}{P(b)}f(b), $$ hence the numbers $f(-b)$ and $f(b)$ are always simultaneously $\psi$-badly approximable or $\psi$-well approximable for any function $\psi$. \end{remark} } \hidden{ \begin{corollary} \label{cor_main}
Let $d\geq 2$ be an integer and let $f(z)$ be the same series as in the statement of Theorem~\ref{th_main} (that is, defined by~\eqref{def_f} and badly approximable). Then there exists a positive constant $K$ such that for any $b\in\mathbb{Z}$, $|b|\geq 2$, we have either $f(b) = 0$, or $P(-b)=0$ or $f(b)$ is $q^{-2}\exp(-K\sqrt{\log q\log\log q})$-badly approximable. \end{corollary} \begin{proof} We are going to deduce Corollary~\ref{cor_main} from Theorem~\ref{th_main}. First of all, note that in case $b\geq 2$ the result readily follows from Theorem~\ref{th_main}. So in this proof we need to deal only with the case $b\leq -2$, which we assume further in this proof.
In case if $d$ is even, we have either $P(-b)=0$ (and then the claim of corollary is justified in this case) or $$ f(b)=\frac{P(b)}{P(-b)}f(-b), $$ so the result follows from Remark~\ref{im_same_c}.
In case if $d$ is odd, consider the function $\widetilde{f}(z)$ defined by~\eqref{def_f} with the polynomial $P(z)$ replaced by $P(-z)$. It is easy to verify that we have \begin{equation} \label{f-b_fb} f(b)=\widetilde{f}(-b). \end{equation} At the same time, we can apply Theorem~\ref{th_main} to the function $\widetilde{f}(z)$ and the integer $-b\geq 2$, hence, by using~\eqref{f-b_fb}, this proves the claim of the corollary for $f(b)$.
\end{proof} } \hidden{ \begin{remark}
Results on irrationality measures of numbers, and more generally results of this kind covering whole classes of numbers, help in verification whether given numbers can or can not belong to certain sets. The very classical example of this nature is a proof by Liouville~\cite{Liouville1851} that, for any $b\in\mathbb{Z}$, $|b|\geq 2$, the number $L_b:=\sum_{k=0}^{\infty}b^{-k!}$ is not algebraic. To justify this, Liouville proved first that, in the modern language, the irratinality exponent of an algebraic number of degree $d$ is at most $d$, and then he verified that the irrationality exponent of the constant $L_b$ is infinite.
In the same article~\cite{Liouville1851}, Liouville stated the problem to verify whether the numbers $M_{b,2}=\sum_{k=0}^{\infty}b^{-k^2}$, where $b\in\mathbb{Z}$, $b\geq 2$ are algebraic or not. More than a hundred years later, a generalized version of this question was asked again in the ground breaking paper by Hartmanis and Stearns~\cite{HS1965}, this time they mentioned a problem to determine whether the numbers $M_{b,s}=\sum_{k=0}^{\infty}b^{-k^s}$, where $b,s\in\mathbb{Z}$, $b,s\geq 2$ are algebraic or not.
The transcendence of $M_{b,2}$ was proved only in~1996, \cite{DNNS1996}. The proof uses sophisticated technics of the modern theory of algebraic independence, and also exploits the presentation of $M_{b,2}$ as a value of a theta function. No such proof for $M_{b,3}$ or indeed for any $M_{b,s}$, $s\geq 3$ is known.
At the same time, the transcendence of $M_{b,s}$ for any $b,s\in\mathbb{Z}$, $|b|\geq 2$, $s\geq 2$ could have been deduced from the following result, conjectured by Lang: \begin{conjecture}[Lang, 1965] \label{conjectire_Lang_1965} For any $\alpha\in\overline{\mathbb{Q}}$, there exists a constant $A=A(\alpha)>0$ such that $\alpha$ is $\frac{1}{q^2\cdot\log^{A}q}$-badly approximable. \end{conjecture} Indeed, it is enough to consider rational approximations to $M_{b,s}$ given by cutting their defining series at an arbitrary index, $$ \xi_{b,s,N}:=\sum_{i=0}^{N}b^{-k^s}. $$ With these approximations, one can easily verify that $M_{b,s}$ is $\frac{1}{q\exp\left(\log_b^{\frac{s-1}{s}} q\right)}$ well-approximable.
Of course, Conjecture~\ref{conjectire_Lang_1965} is a far-reaching generaliztion of the famous Roth's theorem. \end{remark} }
\section{Preliminary information on series $f(z)$.}
In the further discussion, we consider a series $f(z)$ which satisfies all the conditions of Theorem~\ref{th_main}. Most of these conditions are straightforward to verify; the only non-evident point is to check whether the product function $f(z)$, defined by~\eqref{def_f}, is badly approximable. To address this, one can find a nice criterion
in \cite[Proposition 1]{badziahin_2017}: $f(z)$ is badly approximable if and only if all its partial quotients are linear. This in turn is equivalent to the claim that the degree of the denominator of the $k$th convergent of $f(z)$ is precisely $k$, for all $k\in\mathbb{N}$.
As shown in~\cite{badziahin_2017}, it is easier to compute the continued fraction of the slightly modified series \begin{equation} \label{def_g} g(z) = z^{-1}f(z). \end{equation} Since the Diophantine approximation properties of the numbers $f(b)$ and $g(b) = f(b)/b$ essentially coincide, for any $b\in \mathbb{N}$, we will from now on work with the function $g(z)$. As we assume that $f(z)$ is a badly approximable function, the function $g(z)$ defined by~\eqref{def_g} is also badly approximable. In what follows, we will denote by $p_k(z)/q_k(z)$ the $k$th convergent of $g(z)$, and then, by~\cite[Proposition 1]{badziahin_2017}, we infer that $\deg q_k(z)=k$.
Write down the polynomial $P(z)$ in the form $$ P(z) = 1 + u_1z+\ldots +u_{d-1}z^{d-1}. $$ Then $P(z)$ is defined by the vector $\mathbf{u} = (u_1,\ldots,u_{d-1}) \in \mathbb{Z}^{d-1}$ and, via~\eqref{def_f} and~\eqref{def_g}, so is $g(z)$. To emphasize this fact, we will often write $g(z)$ as $g_\mathbf{u}(z)$.
\subsection{Coefficients of the series, convergents and Hankel determinants}
We write the Laurent series $g_\mathbf{u}(z)\in \mathbb{Z}[[z^{-1}]]$ in the following form \begin{equation} \label{def_g_u} g_\mathbf{u}(z) = \sum_{n=1}^\infty c_n z^{-n}. \end{equation} We denote by $\mathbf{c}_n$ the vector $(c_1,c_2,\ldots, c_n)$.
Naturally, the definition of $g_{\mathbf{u}}(z)$ via the infinite product
(see~\eqref{def_f} and~\eqref{def_g}) imposes an upper bound on $|c_n|$, $n\in\mathbb{N}$. \begin{lemma} \label{lemma_ub_c_i} The term $c_n$ satisfies \begin{equation}\label{ineq_c_n}
|c_n| \leq \|\mathbf{u}\|_\infty ^{\lceil \log_d n\rceil} \le \|\mathbf{u}\|_\infty ^{\log_d n + 1}. \end{equation} Consequently, \begin{equation}\label{ineq_c}
\|\mathbf{c}_n\|_\infty \leq \|\mathbf{u}\|_\infty ^{\log_d n + 1} \end{equation} \end{lemma} \begin{proof} Look at two different formulae for $g_\mathbf{u}(z)$: $$ g_\mathbf{u}(z) = z^{-1}\prod_{t=0}^\infty (1+u_1z^{-d^t}+\ldots+u_{d-1}z^{-(d-1)d^t}) = \sum_{n=1}^\infty c_n z^{-n}. $$ By comparing the right and the left hand sides one can notice that $c_n$ can be computed as follows: \begin{equation} \label{link_c_u} c_n = \prod_{j=0}^{l(n)} u_{d_{n,j}} \end{equation} where $d_{n,0}d_{n,1}\cdots d_{n,l(n)}$ is the $d$-ary expansion of the number
$n-1$. Here we formally define $u_0=1$. Equation~\eqref{link_c_u} readily implies that $|c_n|\leq
\|\mathbf{u}\|^{l(n)}$. Finally, $l(n)$ is estimated by $l(n) \leq \lceil \log_d(n-1)\rceil \le \lceil \log_d n\rceil$. The last two inequalities clearly imply~\eqref{ineq_c_n}, hence~\eqref{ineq_c}. \end{proof}
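In the Thue-Morse case $d=2$, $P(z)=1-z$ (so $u_0=1$, $u_1=-1$), formula~\eqref{link_c_u} gives $c_n=(-1)^{s}$, where $s$ is the number of ones in the binary expansion of $n-1$, so that $c_n=(-1)^{t_{n-1}}$, in agreement with $g_\mathbf{u}(z)=z^{-1}f_{TM}(z)$ and~\eqref{ftm_presentation}. The following Python sketch is only an illustration of this special case; it verifies the formula against a direct expansion of the truncated product.

\begin{verbatim}
# Illustration of (link_c_u) for d = 2, P(z) = 1 - z (u_0 = 1, u_1 = -1):
# the coefficient c_n of z^{-n} in g_u(z) = z^{-1} prod_t (1 - z^{-2^t})
# equals (-1)^(number of ones in the binary expansion of n - 1).
N = 64                                   # check c_1, ..., c_N
coeffs = [0] * (N + 1)                   # coeffs[n] = coefficient of z^{-n}
coeffs[1] = 1                            # the factor z^{-1}
for t in range(6):                       # 2^t <= 32 suffices, since n - 1 <= 63
    step = 2 ** t
    old = coeffs[:]
    for n in range(step, N + 1):
        coeffs[n] = old[n] - old[n - step]   # multiply by (1 - z^{-2^t})
for n in range(1, N + 1):
    assert coeffs[n] == (-1) ** bin(n - 1).count("1")
print("digit formula verified for c_1, ..., c_64")
\end{verbatim}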
Let $p_k(z)/q_k(z)$ be a convergent of $g_\mathbf{u}(z)$ in its reduced form. Recall that throughout the text we assume that $f(z)$ is badly approximable, hence $g_{\mathbf{u}}(z)$ defined by~\eqref{def_g} is badly approximable, and because of this (and employing~\cite[Proposition 1]{badziahin_2017}) we have \begin{equation} \label{deg_qk_k} \deg q_k=k. \end{equation} Denote \begin{equation} \label{coefficients_q_k_p_k} \begin{aligned} q_k(z) &= a_{k,0}+a_{k,1}z+\ldots + a_{k,k}z^k,\qquad \mathbf{a}_k:=(a_{k,0},\ldots,a_{k,k})\\ p_k(z) &= b_{k,0} + b_{k,1}z+\ldots+b_{k,k-1}z^{k-1},\quad \mathbf{b}_k:=(b_{k,0},\ldots,b_{k,k-1}). \end{aligned} \end{equation} Because of~\eqref{deg_qk_k}, we have \begin{equation} \label{a_k_b_k_ne_0}
a_{k,k}\ne 0.
\end{equation}
The Hankel matrix is defined as follows: $$ H_k=H_k(g_{\mathbf{u}})= \begin{pmatrix} c_{1} & c_{2} & \dots & c_{k}\\ c_{2} & c_{3} & \dots & c_{k+1}\\ \vdots & \vdots & \ddots & \vdots\\ c_{k} & c_{k+1} & \dots & c_{2k-1} \end{pmatrix}. $$ It is known (see, for example,~\cite[Section 3]{badziahin_2017}) that the convergent in its reduced form with $\deg q_k(z) = k$ exists if and only if the Hankel matrix $H_k$ is invertible. Thus in our case
we necessarily have that $H_k(g_{\mathbf{u}})$ is invertible for any positive integer $k$.
From~\eqref{eq_conv}, we have that \begin{equation}\label{eq_pq}
\|q_k(z)g_\mathbf{u}(z) - p_k(z)\| = -k-1. \end{equation} In other words, the coefficients of $z^{-1}, \ldots, z^{-k}$ in $q_k(z)g_\mathbf{u}(z)$ are all zero and the coefficient of $z^{-k-1}$ is not. This suggests a method for computing $q_k(z)$. One can check that the vector $\mathbf{a}_k = (a_{k,0},a_{k,1},\ldots, a_{k,k})$ is the solution of the matrix equation $H_{k+1}\mathbf{a}_k = c\cdot\mathbf{e}_{k+1}$, where $c$ is a non-zero constant and $$ \mathbf{e}_{k+1}=(0, \dots, 0, 1)^{t}. $$ This equation has a unique solution since the matrix $H_{k+1}$ is invertible. So, we can write the solution vector $\mathbf{a}_k$ as \begin{equation} \label{matrix_equality_HAI} \mathbf{a}_k = c\cdot H_{k+1}^{-1}\mathbf{e}_{k+1}. \end{equation}
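In the Thue-Morse case this computation is easy to carry out explicitly. The Python sketch below (again only an illustration for $d=2$, $P(z)=1-z$, with the normalization $c=1$) builds $H_{k+1}$ from the coefficients $c_n=(-1)^{t_{n-1}}$, solves the system exactly over the rationals, and checks that $a_{k,k}\neq 0$, i.e.\ $\deg q_k(z)=k$, and that the coefficients of $z^{-1},\dots,z^{-k}$ in $q_k(z)g_\mathbf{u}(z)$ indeed vanish, as required by~\eqref{eq_pq}.

\begin{verbatim}
# Illustration for d = 2, P(z) = 1 - z: solve H_{k+1} a_k = e_{k+1}
# (normalization c = 1) exactly over the rationals and check that the
# leading coefficient a_{k,k} is non-zero, i.e. deg q_k = k.
from fractions import Fraction

def c(n):                           # c_n = (-1)^(binary digit sum of n - 1)
    return Fraction((-1) ** bin(n - 1).count("1"))

def solve(A, rhs):                  # exact Gauss-Jordan elimination
    n = len(A)
    M = [A[i][:] + [rhs[i]] for i in range(n)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        M[col] = [x / M[col][col] for x in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                M[r] = [x - M[r][col] * y for x, y in zip(M[r], M[col])]
    return [M[i][n] for i in range(n)]

for k in range(1, 9):
    H = [[c(r + i) for i in range(k + 1)] for r in range(1, k + 2)]
    e = [Fraction(0)] * k + [Fraction(1)]
    a = solve(H, e)                 # a_{k,0}, ..., a_{k,k}
    assert a[k] != 0                # leading coefficient: deg q_k(z) = k
    # the coefficients of z^{-1}, ..., z^{-k} in q_k(z) g_u(z) vanish:
    assert all(sum(a[i] * c(i + j) for i in range(k + 1)) == 0
               for j in range(1, k + 1))
print("deg q_k = k confirmed for k = 1, ..., 8")
\end{verbatim}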
In what follows, we will use the norm of the matrix $\|H\|_\infty$, defined to be the maximum of the absolute values of all its entries. Given a polynomial $P(z)$ we define its height $h(P)$ as the maximum of absolute values of its coefficients. In particular, we have $h(p_k(z)) =
\|\mathbf{b}_k\|_\infty$ and $h(q_k(z)) = \|\mathbf{a}_k\|_\infty$.
\begin{lemma}\label{lem_hpq} For any $k\in\mathbb{N}$, the $k$-th convergent $p_k(z)/q_k(z)$ to $g_{\mathbf{u}}(z)$ can be represented by $p_k(z)/q_k(z)=\widetilde{p}_k(z)/\widetilde{q}_k(z)$, where $\widetilde{p}_k,\widetilde{q}_k\in \mathbb{Z}[z]$ and \begin{eqnarray}\label{ineq_hq}
h(\widetilde{q}_k)&\leq& (\|\mathbf{c}_{2k+1}\|_\infty^2 \cdot k)^{k/2}.\\ \label{ineq_hp}
h(\widetilde{p}_k)&\leq& \|\mathbf{c}_{2k+1}\|_\infty^{k+1} \cdot k^{(k+2)/2}. \end{eqnarray} Consequently, the following upper bounds hold: \begin{eqnarray}\label{ineq_hq_2}
h(\widetilde{q}_k)&\leq& \|\mathbf{u}\|_\infty ^{k(\log_d (2k+1) + 1)} \cdot k^{k/2}.\\ \label{ineq_hp_2}
h(\widetilde{p}_k)&\leq& \|\mathbf{u}\|_\infty ^{(k+1)(\log_d (2k+1) + 1)} \cdot k^{(k+2)/2}. \end{eqnarray} \end{lemma}
\begin{proof} By applying Cramer's rule to the equation $H_{k+1}\mathbf{a}_k = c\cdot\mathbf{e}_{k+1}$ we infer that \begin{equation} \label{formula_a_i} a_{k,i}=c\cdot\frac{\Delta_{k+1,i+1}}{\det H_{k+1}}, \quad i=0,\dots,k, \end{equation} where $\Delta_{k+1,i}$ denotes the determinant of the matrix $H_{k+1}$ with the $i$-th column replaced by $\mathbf{e}_{k+1}$, $i=1,\dots,k+1$. Then we use Hadamard's determinant upper bound to derive \begin{equation} \label{h_a_i}
|\det H_{k+1}|\leq \|H_{k+1}\|_\infty^{k+1}\cdot (k+1)^{(k+1)/2} = (\|\mathbf{c}_{2k+1}\|_\infty^2 (k+1))^{(k+1)/2}. \end{equation} Moreover, by expanding the matrix involved in $\Delta_{k+1,i}$ along the $i$th column and by using Hadamard's upper bound again we get $$
|\Delta_{k+1,i}|\le \|H_{k+1}\|_\infty^k\cdot k^{k/2} = (\|\mathbf{c}_{2k+1}\|_\infty^2\cdot k)^{k/2}, \quad i=1,\dots,k+1. $$
To define $\widetilde{q}_k(z)$, set $c = \det H_{k+1}$ in~\eqref{formula_a_i}. Then we readily have $\widetilde{q}_k(z) = \sum_{i=0}^k \Delta_{k+1,i+1}z^i$. By construction, it has integer coefficients and $h(\widetilde{q}_k)$ satisfies~\eqref{ineq_hq}.
Next, from~\eqref{eq_pq} we get that the coefficients of $\widetilde{p}_k(z)$ coincide with the coefficients for non-negative powers of $z$ of $\widetilde{q}_k(z)g_\mathbf{u}(z)$. By expanding the latter product, we get $$
|b_{k,i}|=\left|\sum_{j=i+1}^k a_{k,j} c_{j-i}\right| \le \|\mathbf{c}_{2k+1}\|_{\infty}^{k+1}\cdot k^{(k+2)/2}. $$
Hence~\eqref{ineq_hp} is also satisfied.
The upper bounds~\eqref{ineq_hq_2} and~\eqref{ineq_hp_2} follow from~\eqref{ineq_hq} and~\eqref{ineq_hp} respectively by applying Lemma~\ref{lemma_ub_c_i}. \end{proof}
\begin{notation} For the sake of convenience, further in this text we will assume that all the convergents to $g_{\mathbf{u}}(z)$ are in the form described in Lemma~\ref{lem_hpq}. That is, we will always assume that $p_k(z)$ and $q_k(z)$ have integer coefficients and verify the upper bounds~\eqref{ineq_hq} and~\eqref{ineq_hp}, as well as~\eqref{ineq_hq_2} and~\eqref{ineq_hp_2}. \end{notation}
For any $k\in\mathbb{N}$ we define a sequence of coefficients $(\alpha_{k,i})_{i> k}$ by \begin{equation} \label{def_alpha} q_k(z)g_\mathbf{u}(z)-p_k(z)=:\sum_{i=k+1}^{\infty}\alpha_{k,i} z^{-i}. \end{equation}
Note that from the equation $H_{k+1}\mathbf{a}_k = c\cdot\mathbf{e}_{k+1}$ we can get that $\alpha_{k,k+1} = c = \det H_{k+1}$. In particular, it is a non-zero integer. \begin{lemma} \label{lemma_ub_coefficients} For any $i,k\in\mathbb{N}$, $i> k\geq 1$, we have \begin{equation} \label{lemma_ub_coefficients_claim} \begin{aligned}
|\alpha_{k,i}|&\leq (k+1) \|\mathbf{c}_{k+i}\|_\infty (\|\mathbf{c}_{2k+1}\|_\infty^2 \cdot k)^{k/2}\\
&\leq (k+1) \|\mathbf{u}\|_\infty^{\log_d(k+i)+1} \|\mathbf{u}\|_\infty^{k(\log_d(2k+1)+1)} \cdot k^{k/2}. \end{aligned} \end{equation} \end{lemma}
\begin{proof} One can check that $\alpha_{k,i}$ is defined by the formula $\alpha_{k,i} = \sum_{j=0}^k a_{k,j} c_{j+i}$, which in view of~\eqref{ineq_hq} from Lemma~\ref{lem_hpq} implies the first inequality in~\eqref{lemma_ub_coefficients_claim}. Then, the second inequality in~\eqref{lemma_ub_coefficients_claim} follows by applying Lemma~\ref{lemma_ub_c_i}. \end{proof}
\subsection{Using the functional equation to study Diophantine approximation properties.}
From~\eqref{def_f} one can easily get a functional equation for $g_\mathbf{u}(z) = z^{-1}f(z)$: \begin{equation}\label{eq_funcg} g_\mathbf{u} (z) = \frac{g_\mathbf{u} (z^d)}{P^*(z)},\quad P^*(z) = z^{d-1}P(z^{-1}) = z^{d-1}+u_1z^{d-2}+\ldots+u_{d-1}. \end{equation} This equation allows us, starting from the convergent $p_k(z)/q_k(z)$ to $g_\mathbf{u}(z)$, to construct an infinite sequence of convergents $\left(p_{k,m}(z)/q_{k,m}(z)\right)_{m\in\mathbb{N}_0}$ to $g_\mathbf{u}(z)$ by \begin{equation} \label{def_pkm_qkm_poly} \begin{aligned} q_{k,m}(z)&:=q_k(z^{d^m}),\\ p_{k,m}(z)&:=\prod_{t=0}^{m-1}P^*(z^{d^t})p_{k}(z^{d^m}). \end{aligned} \end{equation} This fact can be checked by substituting the functional equation~\eqref{eq_funcg} into the condition of Theorem~L. The reader can also compare with~\cite[Lemma 3]{badziahin_2017}.
By employing~\eqref{eq_funcg} and~\eqref{def_alpha}, we find \begin{equation} \label{property_k_m} q_{k,m}(z)g_\mathbf{u}(z)-p_{k,m}(z)=\prod_{t=0}^{m-1}P^*(z^{d^t})\cdot\sum_{i=k+1}^{\infty}\alpha_{k,i} z^{-d^m\cdot i}. \end{equation}
Consider an integer value $b$ which satisfies the conditions of Theorem~\ref{th_main}. Define\footnote{There is a slight abuse of notation in using the same letters $p_{k,m}$ and $q_{k,m}$ both for polynomials from $\mathbb{Z}[z]$ and for their values at $z=b$. However, we believe that in this particular case such a notation constitutes the best choice. Indeed, the main reason to consider polynomials $p_{k,m}(z)$ and $q_{k,m}(z)$ is to define eventually $p_{k,m}=p_{k,m}(b)$ and $q_{k,m}=q_{k,m}(b)$, which will play the key role in the further proofs. At the same time, it is easy to distinguish the polynomials $p_{k,m}(z)$, $q_{k,m}(z)$ and the corresponding integers $p_{k,m}$ and $q_{k,m}$ by the context. Moreover, we will always specify which object we mean and always refer to the polynomials specifying explicitly the variable, that is $p_{k,m}(z)$, $q_{k,m}(z)$ and not $p_{k,m}$ and $q_{k,m}$.} \begin{eqnarray} p_{k,m}&:=&p_{k,m}(b), \label{def_p_k_m}\\ q_{k,m}&:=&q_{k,m}(b), \label{def_q_k_m} \end{eqnarray} where $p_{k,m}(z)$ and $q_{k,m}(z)$ are polynomials defined by~\eqref{def_pkm_qkm_poly}.
Clearly, for any $k\in\mathbb{N}$, $m\in\mathbb{N}_0$ we have $p_{k,m},q_{k,m}\in\mathbb{Z}$.
\begin{lemma} \label{lemma_smallness} Let $b,k,m\in\mathbb{N}$, $b\geq 2$. Assume \begin{equation} \label{lemma_smallness_bdm_geq_2}
b^{d^m}>2^{1+\log_d\|\mathbf{u}\|_\infty}. \end{equation} Then the integers $p_{k,m}$ and $q_{k,m}$ verify \begin{equation} \label{lemma_smallness_ub}
\left|g_\mathbf{u}(b)-\frac{p_{k,m}}{q_{k,m}}\right|\leq \frac{2(k+1)
k^{k/2} d^m \|\mathbf{u}\|_\infty^{m+(k+1)(\log_d(2k+1)+1)}}{q_{k,m}\cdot b^{d^m\cdot k+1}}. \end{equation} Moreover, if we make in addition a stronger assumption \begin{equation}\label{lem5_assump}
b^{d^m}\ge 4(k+1)k^{k/2} \|\mathbf{u}\|_\infty^{(k+1)(\log_d(2k+1)+1)}, \end{equation} then \begin{equation} \label{lemma_smallness_lb}
\frac{|g_\mathbf{u}(b)|}{4 q_{k,m}\cdot b^{d^m\cdot k+1}}\leq\left|g_\mathbf{u}(b)-\frac{p_{k,m}}{q_{k,m}}\right|. \end{equation} \end{lemma}
\begin{proof} Consider Equation~\eqref{property_k_m} with substituted $z:=b$: \begin{equation} \label{property_k_m_z_substituted_b} q_{k,m}g_\mathbf{u}(b)-p_{k,m}=\prod_{t=0}^{m-1}P^*(b^{d^t})\cdot\sum_{i=k+1}^{\infty}\alpha_{k,i} b^{-d^m\cdot i}. \end{equation}
Each of the factors $\left|P^*(b^{d^t})\right|$ on the right hand side of~\eqref{property_k_m_z_substituted_b} can be bounded from above by
$d\cdot\|\mathbf{u}\|_\infty b^{d^{t}(d-1)}$. So, the product in the right hand side of~\eqref{property_k_m_z_substituted_b} can be estimated by \begin{equation}\label{eq_lem5_1}
\left|\prod_{t=0}^{m-1}P^*(b^{d^t})\right| \leq d^m \|\mathbf{u}\|_\infty^m \cdot b^{d^{m}-1}. \end{equation} Further, we estimate the second term on the right hand side of~\eqref{property_k_m_z_substituted_b} by employing Lemma~\ref{lemma_ub_coefficients}: \begin{equation}\label{eq_lem5_2}
\left|\sum_{i=k+1}^{\infty}\alpha_{k,i} b^{-d^m\cdot i}\right| \leq
(k+1) \|\mathbf{u}\|_\infty^{k(\log_d(2k+1)+1)} \cdot k^{k/2}\sum_{i=k+1}^\infty
\frac{\|\mathbf{u}\|_\infty^{\log_d(k+i)+1}}{b^{d^m\cdot i}}. \end{equation} The last sum in the right hand side of~\eqref{eq_lem5_2} is bounded from above by $$ \sum_{i=k+1}^\infty
\frac{\|\mathbf{u}\|_\infty^{\log_d(k+i)+1}}{b^{d^m\cdot i}} \le
\frac{\|\mathbf{u}\|_\infty}{b^{d^m(k+1)}}\cdot \sum_{i=0}^\infty
\frac{\|\mathbf{u}\|_\infty^{\log_d(2k+1+i)}}{b^{d^m\cdot i}} $$ \begin{equation}\label{eq_lem5_3}
\le \frac{\|\mathbf{u}\|_\infty^{1+\log_d(2k+1)}}{b^{d^m(k+1)}}
\sum_{i=0}^\infty \frac{(i+1)^{\log_d\|\mathbf{u}\|_\infty}}{b^{d^m\cdot i}} \le \frac{\|\mathbf{u}\|_\infty^{1+\log_d(2k+1)} \cdot C(b,d,m,\|\mathbf{u}\|_\infty)}{b^{d^m(k+1)}}, \end{equation} where \hidden{
$C(\|\mathbf{u}\|_\infty)$ is a constant which only depends on
$\|\mathbf{u}\|_\infty$ but not on $k$ or $m$. In particular, for
$\|\mathbf{u}\|_\infty = 1$ it can be made equal to 2. It is also not too difficult to check that $C(\|\mathbf{u}\|_\infty)\le 2$ in the case
$b^{d^m}\ge 2^{1+\log_d \|\mathbf{u}\|_\infty}$. Indeed, it readily follows from elementary remark that $i+1\leq 2^i$ for all $i\in\mathbb{Z}$. } $$
C(b,d,m,\|\mathbf{u}\|_\infty)=\sum_{i=0}^\infty \frac{(i+1)^{\log_d\|\mathbf{u}\|_\infty}}{b^{d^m\cdot i}}. $$ Note that for any $i\in\mathbb{Z}$, we have $i+1\leq 2^i$. Because of this, assumption~\eqref{lemma_smallness_bdm_geq_2} implies \begin{equation} \label{C_leq_2}
C(b,d,m,\|\mathbf{u}\|_\infty)\leq 2. \end{equation}
Finally, by putting together~\eqref{property_k_m_z_substituted_b}, \eqref{eq_lem5_1}, \eqref{eq_lem5_2}, \eqref{eq_lem5_3} and~\eqref{C_leq_2} we get $$
\left|q_{k,m}g_\mathbf{u}(b) - p_{k,m}\right|\le \frac{2(k+1) k^{k/2} d^m
\|\mathbf{u}\|_\infty^{m+(k+1)(\log_d(2k+1)+1)}}{b^{d^m\cdot k + 1}}. $$ Dividing both sides by $q_{k,m}$ gives~\eqref{lemma_smallness_ub}.
To get the lower bound, we first estimate the product in~\eqref{property_k_m}. $$ \prod_{t=0}^{m-1}P^*(b^{d^t}) = b^{d^m-1} \prod_{t=0}^{m-1}P(b^{-d^t}) \geq b^{d^m-1}\frac{g_\mathbf{u}(b)}{\prod_{t=m}^{\infty}P(b^{-d^t})}. $$ By~\eqref{lem5_assump}, the denominator can easily be estimated as $$
\prod_{t=m}^{\infty}P(b^{-d^t})\le \prod_{t=m}^{\infty} \left(1+\frac{2\|\mathbf{u}\|_\infty}{b^{d^t}}\right) < 2. $$ Therefore, $$ \prod_{t=0}^{m-1}P^*(b^{d^t})\ge \frac12b^{{d^m}-1}g_\mathbf{u}(b). $$
For the series in the right hand side of~\eqref{property_k_m}, we show that the first term dominates this series. Indeed, we have $|\alpha_{k,k+1}|\ge 1$ since it is a non-zero integer. Then, \begin{equation} \label{lb_calculations} \begin{aligned}
&|q_{k,m}g_\mathbf{u}(b)-p_{k,m}|=\left|\prod_{t=0}^{m-1}P^*(b^{d^t})\cdot\sum_{i=k+1}^{\infty}\alpha_{k,i} b^{-d^m\cdot i}\right|\\ &\geq
\frac12b^{d^m-1} |g_\mathbf{u}(b)| \left(b^{-d^m(k+1)}-\sum_{i=k+2}^{\infty}\left|\alpha_{k,i}\right| b^{-d^m\cdot i}\right)\\ &\stackrel{\eqref{eq_lem5_2}, \eqref{eq_lem5_3}}\geq \frac12b^{d^m-1}
|g_\mathbf{u}(b)| \left(b^{-d^m(k+1)} -
\frac{C(b,d,m,\|\mathbf{u}\|_\infty)(k+1)k^{k/2}\|\mathbf{u}\|_\infty^{(k+1)(\log_d(2k+1)+1)}}{b^{d^m(k+2)}}\right) \end{aligned} \end{equation} Recall that by~\eqref{C_leq_2}, we have
$C(b,d,m,\|\mathbf{u}\|_\infty) \leq 2$. So, by using assumption~\eqref{lem5_assump}, we finally get $$
|q_{k,m}g_\mathbf{u}(b)-p_{k,m}|\ge
\frac{b^{d^m-1}|g_\mathbf{u}(b)|}{4b^{d^m(k+1)}} =
\frac{|g_\mathbf{u}(b)|}{4b^{d^m\cdot k +1}}. $$ Finally, dividing both sides by $q_{k,m}$ leads to~\eqref{lemma_smallness_lb}. \end{proof}
\begin{lemma} \label{lemma_qkm_db} Let $b,k,m\in\mathbb{N}$, $k\geq 1$ and let \begin{equation} \label{lemma_qkm_db_bdm_assumption} b^{d^m} > 3\cdot
\left(\|\mathbf{c}_{2k+1}\|_\infty^2 k\right)^{k/2}. \end{equation} Recall that the notation $a_{k,i}$, $i=0,\dots,k$, for the coefficients of $q_k$, $k\in\mathbb{N}$, was introduced in~\eqref{coefficients_q_k_p_k}. Then, \begin{equation} \label{lemma_qkm_db_claim}
\frac12|a_{k,k}|\cdot b^{kd^m}\leq q_{k,m}\leq\frac32|a_{k,k}|\cdot b^{kd^m}. \end{equation} \end{lemma} \begin{proof} The leading term of $q_{k,m}(z)$ is $a_{k,k}z^{kd^m}$. We know that $\deg q_k(z) = k$, therefore $a_{k,k}\neq 0$ and $a_{k,k}$ is an integer. Recall also that by~\eqref{ineq_hq} the maximum of the coefficients $a_{k,i}$,
$i=0,\dots,k$, does not exceed $(\|\mathbf{c}_{2k+1}\|_\infty^2 \cdot k)^{k/2}$. Thus we find, by using assumption~\eqref{lemma_qkm_db_bdm_assumption}, $$
\left|\sum_{n=0}^{k-1} a_{k,n}\cdot b^{nd^m}\right| \le b^{kd^m}
\left|\sum_{n=1}^{k} 3^{-n}\right| \le \frac12 b^{kd^m}. $$ We readily infer, by taking into account $q_{k,m} = a_{k,0}+a_{k,1}b^{d^m}+\ldots + a_{k,k}b^{k d^m}$, $$
\frac12 |a_{k,k}|b^{kd^m}\le |a_{k,k}|b^{kd^m}- \frac12 b^{kd^m}\le |q_{k,m}|\le |a_{k,k}|b^{kd^m}+ \frac12 b^{kd^m} \le
\frac32|a_{k,k}|b^{kd^m}. $$ This completes the proof of the lemma. \end{proof}
\begin{proposition} \label{proposition_db} Let $k\geq 2$, $m\geq 1$ be integers and assume that~\eqref{lem5_assump} is satisfied. Then, the integers $p_{k,m}=p_{k,m}(b)$ and $q_{k,m}=q_{k,m}(b)$, defined by~\eqref{def_p_k_m} and by~\eqref{def_q_k_m}, satisfy \begin{equation} \label{proposition_db_l_1}
\left|g_\mathbf{u}(b)-\frac{p_{k,m}}{q_{k,m}}\right|\leq \frac{3(k+1)
k^{k} d^m \|\mathbf{u}\|_\infty^{m+(2k+1)(\log_d(2k+1)+1)}}{b \cdot q^2_{k,m}}, \end{equation} \begin{equation} \label{proposition_db_u_1}
\frac{|g_\mathbf{u}(b)|}{8b q_{k,m}^2} \le \left|g_\mathbf{u}(b) -
\frac{p_{k,m}}{q_{k,m}}\right|. \end{equation} Moreover, if, additionally, $k$ and $m$ satisfy \begin{equation} \label{proposition_db_assump} k\cdot d^m\log_2 b - 1 \geq
\frac{1}{3} m^2(\log_2 \|\mathbf{u}\|_\infty)^2, \end{equation} then \begin{equation} \label{proposition_db_claim_2}
\left|g_\mathbf{u}(b)-\frac{p_{k,m}}{q_{k,m}}\right|\leq \frac{3\cdot 2^{C{\sqrt{\log_2 q_{k,m}\log_2\log_2 q_{k,m}}}}}{q_{k,m}^{2}}, \end{equation} where \begin{equation} \label{def_C}
C=2\sqrt{2}+2\sqrt{5\cdot\log\|\mathbf{u}\|_{\infty}}+2. \end{equation}
\end{proposition} \begin{proof} From Lemma~\ref{lemma_qkm_db} we have $$ b^{k\cdot d^m}\ge
\frac{2q_{k,m}}{3|a_{k,k}|}\stackrel{\eqref{ineq_hq_2}}\ge \frac{2q_{k,m}}{3\|\mathbf{u}\|_\infty^{k(\log_d(2k+1)+1)} k^{k/2}}. $$
Similarly, by using $|a_{k,k}|\geq 1$ together with Lemma~\ref{lemma_qkm_db}, we get the lower bound \begin{equation} \label{proposition_db_simple_ub} b^{k\cdot d^m}\le 2q_{k,m}. \end{equation} These two bounds on $b^{kd^m}$ allow to infer the inequalities~\eqref{proposition_db_l_1} and~\eqref{proposition_db_u_1} straightforwardly from the corresponding bounds in Lemma~\ref{lemma_smallness}.
We proceed with the proof of the estimate~\eqref{proposition_db_claim_2}. We are going to deduce it as a corollary of~\eqref{proposition_db_l_1}. To this end, we are going to prove, under the assumptions of this proposition, \begin{equation} \label{eq_prop_first} (k+1)
k^{k} d^m \|\mathbf{u}\|_\infty^{m+(2k+1)(\log_d(2k+1)+1)}\leq 2^{C{\sqrt{\log_2 q_{k,m}\log_2\log_2 q_{k,m}}}}, \end{equation} where the constant $C$ is defined by~\eqref{def_C}. It is easy to verify that~\eqref{proposition_db_l_1} and~\eqref{eq_prop_first} indeed imply~\eqref{proposition_db_claim_2}. Therefore in the remaining part of the proof we will focus on verifying~\eqref{eq_prop_first}.
The inequality~\eqref{proposition_db_simple_ub} together with condition~\eqref{lem5_assump} imply \begin{equation}\label{ineq_logqkm} \begin{array}{rl} \log_2 q_{k,m} \geq &(2k-1)+k\log_2 (k+1) + \frac{k^2}{2}\log_2 k\\
&+k(k+1)(\log_d(2k+1)+1)\log_2\|\mathbf{u}\|_\infty. \end{array} \end{equation} By taking logarithms again one can derive that $\log_2\log_2 q_{k,m}\ge \log_2 k$. Now we compute \begin{equation} \label{proposition_db_implication_1} \log_2 q_{k,m}\log_2\log_2 q_{k,m} \geq \frac{k^2}{2}(\log_2 k)^2 > \frac{1}{8}(k\log_2 k+\log_2(k+1))^2. \end{equation} The last inequality in~\eqref{proposition_db_implication_1} holds true because $k\log_2 k>\log_2 (k+1)$ for $k\geq 2$.
Another implication of~\eqref{ineq_logqkm} is \begin{equation} \label{proposition_db_implication_2_0} \log_2 q_{k,m}\log_2\log_2 q_{k,m} \geq k(k+1)(\log_d(2k+1)+1)\log_2 k
\log_2\|\mathbf{u}\|_\infty. \end{equation} Since for $d\ge 2$ and $k\ge 2$ we have $\log_2 k\geq \frac14(\log_d(2k+1)+1)$ and $k(k+1)\geq \frac15(2k+1)^2$, we readily infer from~\eqref{proposition_db_implication_2_0} \begin{equation} \label{proposition_db_implication_2}
\log_2 q_{k,m}\log_2\log_2 q_{k,m} \ge \frac{1}{20\log_2\|\mathbf{u}\|_\infty}
(2k+1)^2(\log_d(2k+1)+1)^2 (\log_2 \|\mathbf{u}\|_\infty)^2. \end{equation}
Next, it follows from~\eqref{proposition_db_simple_ub} that \begin{equation} \label{qkm_geq_kdm} \log_2 q_{k,m} \geq k\cdot d^m\log_2 b - 1. \end{equation} Therefore assumption~\eqref{proposition_db_assump} implies that $\log_2 q_{k,m}\geq
\frac{1}{3} m^2(\log_2 \|\mathbf{u}\|_\infty)^2$. At the same time, the assumption $k\geq 2$ combined with~\eqref{lem5_assump} readily implies $b^{k\cdot d^m}\geq 576$, hence, taking into account~\eqref{proposition_db_simple_ub}, we find $\log_2\log_2 q_{k,m}\geq\log_2\log_2 288>3$. So, \begin{equation} \label{proposition_db_implication_3}
\log_2 q_{k,m}\log_2\log_2 q_{k,m} > m^2(\log_2 \|\mathbf{u}\|_\infty)^2. \end{equation} Also, by these considerations we deduce from~\eqref{qkm_geq_kdm} \begin{equation} \label{proposition_db_implication_4} \log_2 q_{k,m}\log_2\log_2 q_{k,m} > 3 d^m > \left(m\cdot\log_2 d\right)^2. \end{equation}
\hidden{ Then, $$
\log q_{k,m}\log\log q_{k,m} \ge \frac1b m^2 (\log \|\mathbf{u}\|_\infty)^2 $$ }
Finally, by taking the square root of both sides of~\eqref{proposition_db_implication_1}, \eqref{proposition_db_implication_2}, \eqref{proposition_db_implication_3} and~\eqref{proposition_db_implication_4} and summing up the results, we find \begin{equation}\label{eq_prop1} \begin{array}{rl} C\sqrt{\log_2 q_{k,m}\log_2\log_2 q_{k,m}}\!\!\! &\geq \log_2(k+1) + k\log_2 k + m\log_2 d\\
&+ (m+(2k+1)(\log_d(2k+1)+1))\log_2\|\mathbf{u}\|_\infty, \end{array} \end{equation}
where the constant $C$ is defined by~\eqref{def_C}. Finally, by exponentiating both sides of~\eqref{eq_prop1} with base two, we obtain~\eqref{eq_prop_first} and hence derive~\eqref{proposition_db_claim_2}. \end{proof}
\begin{remark} Note that the constant $C$ in Proposition~\ref{proposition_db} is rather far from being optimal. The proof above can be significantly optimized to reduce its value. However that would result in more tedious computations. All one needs to show is the inequality~\eqref{eq_prop1}. \end{remark}
\section{Proof of Theorem~\ref{th_main}}
We will prove the following result.
\begin{theorem} \label{thm_im} Let $b\geq 2$. There exists an effectively computable constant $\gamma$, which only depends on $d$ and $\mathbf{u}$, such that for any $p\in\mathbb{Z}$ and any sufficiently large $q\in\mathbb{N}$,
we have \begin{equation} \label{thm_im_result}
\left|g_{\mathbf{u}}(b)-\frac{p}{q}\right|\geq\frac{|g_\mathbf{u}(b)|}{4b\cdot q^{2}\cdot\exp\left(\gamma\sqrt{\log_2 q \log_2 \log_2 q}\right)}. \end{equation} \end{theorem}
It is easy to see that Theorem~\ref{th_main} is an immediate corollary of Theorem~\ref{thm_im}. Indeed, if $f(b)$ from Theorem~\ref{th_main} is not zero then neither is $g_\mathbf{u}(b)$, and the lower bound~\eqref{thm_im_result} is satisfied for all large enough $q$; therefore the inequality $$
\left|g_\mathbf{u}(b)-\frac{p}{q}\right| < q^{-2}\exp(- \gamma \sqrt{\log q\log \log q}) $$ has only finitely many solutions. By definition, this implies that $g_{\mathbf{u}}(b)$ and in turn $f(b)$ are both $q^{-2}\exp(- \gamma \sqrt{\log q\log \log q})$-badly approximable.
\begin{proof}[Proof of Theorem~\ref{thm_im}]
In this proof, we will use the constant $C$ defined by~\eqref{def_C}. Fix a couple of integers $p$ and $q$. We start with some preliminary calculations and estimates.
Define $x>2$ to be the unique solution of the following equation \begin{equation} \label{thm_im_def_x}
q=\frac{1}{12}\cdot x\cdot 2^{-\frac32C\sqrt{\log_2 x\log_2\log_2 x}},
\end{equation} where the constant $C$ is defined by~\eqref{def_C}.
The condition $x>2$ ensures that both $\log_2 x$ and the double logarithm $\log_2\log_2 x$ exist and are positive, hence $2^{-\frac32C\sqrt{\log_2 x\log_2\log_2 x}}<1$ and thus
\begin{equation} \label{q_leq_x} 12 q < x. \end{equation}
For large enough $q$ we then have $$ \frac{81}{4}C^2 \log_2 \log_2 x < \log_2 x $$ and therefore \begin{equation} \label{ie_loglog_x_leq_log_x} 2^{\frac32 C\sqrt{\log_2 x\log_2 \log_2 x}}<x^{1/3}. \end{equation}
From~\eqref{thm_im_def_x} and~\eqref{ie_loglog_x_leq_log_x} we readily infer \begin{equation} \label{thm_im_preliminary_ub_x} x<\left( 12 q\right)^{3/2}. \end{equation} Now rewrite~\eqref{thm_im_def_x} in the following form: \begin{equation} \label{thm_im_def_x_2} x= 12 q\cdot 2^{\frac32C\sqrt{\log_2 x\log_2\log_2 x}}. \end{equation} Then, by applying~\eqref{thm_im_preliminary_ub_x} to it we find that, for large enough $q$, \begin{equation} \label{thm_im_ub_x} x< 12q\cdot 2^{2C\sqrt{\log_2 q\log_2\log_2 q}}. \end{equation}
Denote \begin{equation} \label{def_t} t:=\log_b x. \end{equation} Fix an arbitrary value $\tau\ge \tau_0>1$, where $\tau_0 = \tau_0(\mathbf{u})$ is a parameter which only depends on $\mathbf{u}$ and which we will fix later (namely, it has to ensure inequality~\eqref{cond_tau}). Assume that $t>2$ is large enough (that is, assume $q$ is large enough, then by~\eqref{q_leq_x} $x$ is large enough, hence by~\eqref{def_t} $t$ is large enough), so that \begin{equation} \label{t_log_t}
d \leq \frac1\tau\sqrt{\frac{t}{\log_2 t}}. \end{equation}
As $t>2$, we also have $t\log_2 t > 2$. Choose an integer $n$ of the form $n:=k\cdot d^m$ with $m\in\mathbb{N}$, $k\in\mathbb{Z}$ such that \begin{eqnarray} t\leq & n & \leq t+d\tau\sqrt{t\log_2 t}, \label{thm_im_distance_t_n_ub}\\ \tau \sqrt{t\log_2 t} \leq & d^m & \leq d \tau \sqrt{t\log_2 t}. \label{thm_im_2m_lb_1} \end{eqnarray} One can easily check that such $n$ always exists.
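As another informal aside, the existence of such an $n$ is easy to confirm numerically: taking $m$ to be the smallest integer with $d^m\geq\tau\sqrt{t\log_2 t}$ and $k=\lceil t/d^m\rceil$ satisfies both~\eqref{thm_im_distance_t_n_ub} and~\eqref{thm_im_2m_lb_1}. The Python check below uses illustrative values of $t$, $d$ and $\tau$ only.
\begin{verbatim}
import math

def choose_n(t, d, tau):
    # smallest m with d^m >= tau*sqrt(t*log2 t), then k = ceil(t / d^m)
    target = tau * math.sqrt(t * math.log2(t))
    m = math.ceil(math.log(target, d))
    k = math.ceil(t / d ** m)
    return k, m, k * d ** m

t, d, tau = 10 ** 6, 3, 5.0
k, m, n = choose_n(t, d, tau)
bound = d * tau * math.sqrt(t * math.log2(t))
print(t <= n <= t + bound,                                   # (thm_im_distance_t_n_ub)
      tau * math.sqrt(t * math.log2(t)) <= d ** m <= bound)  # (thm_im_2m_lb_1)
\end{verbatim}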
Inequalities \eqref{t_log_t}, \eqref{thm_im_distance_t_n_ub} and~\eqref{thm_im_2m_lb_1} imply \begin{equation} \label{thm_im_k_ub_1} k=\frac{n}{d^m}\leq\frac{t+d\tau \sqrt{t\cdot\log_2 t}}{\tau\sqrt{t\cdot\log_2 t}}=\frac1\tau\sqrt{\frac{t}{\log_2 t}}+d\le \frac2\tau\sqrt{\frac{t}{\log_2 t}}. \end{equation}
Then we deduce, for $t$ large enough, $$ k\log_2 k\le \frac2\tau\sqrt\frac{t}{\log_2 t} \left(\log_2 (2/\tau)+\frac12\log_2 t-\frac12\log_2\log_2 t\right)< \frac2\tau \sqrt{t\log_2 t}. $$ Therefore, for any $\tau$ large enough, that is for any $\tau\geq\tau_0$, where $\tau_0$ depends only on $\mathbf{u}$, we have \begin{multline} \label{cond_tau} 2+\log_2 (k+1)+\frac{k}{2}\log_2 k \\+ (k+1)(\log_d(2k+1)+1)\log_2
\|\mathbf{u}\|_\infty <\tau \sqrt{t\log_2 t}. \end{multline}
By raising two to the power of the left-hand side of~\eqref{cond_tau} and raising $b\geq 2$ to the power of the right-hand side of~\eqref{cond_tau}, and then using~\eqref{thm_im_2m_lb_1}, we ensure that~\eqref{lem5_assump} is satisfied. We can also take $q$ (and, consequently, $t$) large enough so that $m$, bounded from below by~\eqref{thm_im_2m_lb_1}, satisfies $d^m \ge m^2 (\log_2\|\mathbf{u}\|_\infty)^2$, and then necessarily~\eqref{proposition_db_assump} is verified. Also, \eqref{thm_im_distance_t_n_ub} and~\eqref{thm_im_2m_lb_1} imply that, for $t$ large enough, $k\geq 2$.
Hence we have checked all the conditions on $k$ and $m$ from Proposition~\ref{proposition_db}. It implies that the integers $p_{k,m}$ and $q_{k,m}$, defined by~\eqref{def_p_k_m} and~\eqref{def_q_k_m}, satisfy inequalities~\eqref{proposition_db_u_1} and~\eqref{proposition_db_claim_2}.
Lemma~\ref{lemma_ub_c_i} and inequality~\eqref{lem5_assump} imply the inequality~\eqref{lemma_qkm_db_bdm_assumption}, so we can use Lemma~\ref{lemma_qkm_db}, i.e. we have \begin{equation} \label{thm_im_q_k_m_db}
\frac12 |a_{k,k}|b^n\leq q_{k,m}\leq\frac32 |a_{k,k}|b^n. \end{equation} If $$ \frac{p}{q}=\frac{p_{k,m}}{q_{k,m}}, $$ then the result~\eqref{thm_im_result} readily follows from the lower bound~\eqref{proposition_db_u_1} in Proposition~\ref{proposition_db}.
We proceed with the case \begin{equation} \label{thm_im_case_different} \frac{p}{q}\ne\frac{p_{k,m}}{q_{k,m}}. \end{equation}
By the triangle inequality, and then by the upper bound~\eqref{proposition_db_claim_2}, we have \begin{equation} \label{thm_im_main_lb_1} \begin{aligned}
\left|g_\mathbf{u}(b)-\frac{p}{q}\right|&\geq\left|\frac{p_{k,m}}{q_{k,m}}-\frac{p}{q}\right|-\left|g_\mathbf{u}(b)-\frac{p_{k,m}}{q_{k,m}}\right|\\ &\geq \frac{1}{q_{k,m} q}-\frac{3\cdot 2^{C{\sqrt{\log_2 q_{k,m}\log_2\log_2 q_{k,m}}}}}{q_{k,m}^2}. \end{aligned} \end{equation}
By applying the upper bound in~\eqref{thm_im_q_k_m_db} complemented with~\eqref{ineq_hq_2}, we find $$ \log_2 q_{k,m} \leq \log_2 \frac32+ \frac{k}{2}\log_2 k+
k(\log_d(2k+1)+1)\log_2\|\mathbf{u}\|_\infty + n\log_2 b. $$ The upper bounds~\eqref{thm_im_k_ub_1} on $k$ and~\eqref{thm_im_distance_t_n_ub} on $n$ ensure that for large enough $q$ we have \begin{equation} \label{thm_im_ub_1} 2^{C\sqrt{\log_2 q_{k,m}\log_2\log_2 q_{k,m}}}\leq 2^{\frac32C\sqrt{\log_2 x\log_2\log_2 x}}. \end{equation}
The formula~\eqref{def_t} for $t$ and the lower bound in~\eqref{thm_im_distance_t_n_ub} together give $b^n\geq x$. Then, by using the lower bound~\eqref{thm_im_q_k_m_db},
we find \begin{equation} \label{thm_im_lb_1} q_{k,m}\geq \frac12 b^n\geq \frac12 x. \end{equation}
By using the estimates~\eqref{thm_im_ub_1} and~\eqref{thm_im_lb_1} on the numerator and denominator respectively, and then by substituting the value of $x$ given by~\eqref{thm_im_def_x_2}, we find $$ \frac{3\cdot 2^{C{\sqrt{\log_2 q_{k,m}\log_2\log_2 q_{k,m}}}}}{q_{k,m}^2}\leq \frac{3\cdot 2^{\frac32C{\sqrt{\log_2 x\log_2\log_2 x}}}}{\frac12 x\cdot q_{k,m}} \stackrel{\eqref{thm_im_def_x_2}}=\frac{1}{2 q_{k,m} q}, $$ hence, recalling~\eqref{thm_im_main_lb_1}, we find \begin{equation} \label{thm_im_lb_12}
\left|g_\mathbf{u}(b)-\frac{p}{q}\right|\geq\frac{1}{2 q_{k,m} q}. \end{equation}
By inequality~\eqref{thm_im_q_k_m_db} combined with the upper bound in~\eqref{thm_im_distance_t_n_ub} and then~\eqref{def_t} and~\eqref{thm_im_ub_x} we get that, for $q$ large enough, \begin{equation*}
q_{k,m}\leq\frac32 |a_{k,k}|b^n\leq \frac32 |a_{k,k}|
b^{t+d\tau\sqrt{t\log_2 t}} \leq 18 |a_{k,k}|q\cdot 2^{(2d\tau\log_2 b+2C)\sqrt{\log_2 q \log_2\log_2 q}}. \end{equation*} The bound~\eqref{ineq_hq_2} implies \begin{equation} \label{ineq_hq_2_log_corollary}
\log_2 |a_{k,k}|\leq \frac{k}{2}\log_2 k+k(\log_d (2k+1) + 1)\log_2\|\mathbf{u}\|_\infty. \end{equation}
By comparing the right hand side of this inequality with the left hand side in~\eqref{cond_tau} we find
$$
|a_{k,k}|\leq 2^{2\tau\sqrt{\log_2 q\log_2\log_2 q}} $$ and then $$ q_{k,m}\le 18 q\cdot 2^{(2d\tau\log_2 b+2\tau+2C)\sqrt{\log_2 q\log_2\log_2 q}} $$ Finally, \eqref{thm_im_lb_12} implies $$
\left|g_\mathbf{u}(b)-\frac{p}{q}\right|\geq\frac{1}{36 q^2\cdot 2^{(2d\tau\log_2 b+2\tau+2C)\sqrt{\log_2 q \log_2\log_2 q}}}. $$ This completes the proof of the theorem with $\gamma = \ln 2\cdot\left(2d\tau\log_2 b+2\tau+2C\right)$. \end{proof}
\end{document}
\begin{document}
\title{On the maximum CEI of graphs with parameters} \author{Fazal Hayat\footnote{E-mail: [email protected]}\\ School of Mathematical Sciences, South China Normal University, \\
Guangzhou 510631, PR China}
\date{} \maketitle
\begin{abstract} The connective eccentricity index (CEI) of a graph $G$ is defined as $\xi^{ce}(G)=\sum_{v \in V(G)}\frac{d_G(v)}{\varepsilon_G(v)}$, where $d_G(v)$ is the degree of $v$ and $\varepsilon_G(v)$ is the eccentricity of $v$. In this paper, we characterize the unique graphs with maximum CEI from three classes of graphs: the $n$-vertex graphs with fixed connectivity and diameter, the $n$-vertex graphs with fixed connectivity and independence number, and the $n$-vertex graphs with fixed connectivity and minimum degree. \\ \\ {\bf Key Words}: connective eccentricity index, connectivity, diameter, minimum degree, independence number.\\\\ {\bf 2010 Mathematics Subject Classification:} 05C07; 05C12; 05C35 \end{abstract}
\section{Introduction} All graphs considered in this paper are simple and connected. Let $G$ be graph on $n$ vertices with vertex set $V(G)$ and edge set $E(G)$. For $v \in V(G)$, let $N_G(v)$ be the set of all neighbors of $v$ in $G$. The degree of $v \in V(G)$, denoted by $d_G(v)$, is the cardinality of $N_G(v)$. The maximum and minimum degree of $G$ is denoted by $\Delta(G)$ and $\delta(G)$, respectively. The graph formed from $G$ by deleting any vertex $v \in V(G)$ (resp. edge $uv \in E(G)$) is denoted by $G-v$ (resp. $G-uv$). Similarly, the graph formed from $G$ by adding an edge $uv$ is denoted by $G+uv$, where $u$ and $v$ are non-adjacent vertices of $G$. For a vertex subset $A$ of $V(G)$, denote by $G[A]$ the subgraph induced by $A$. The distance between vertices $u$ and $v$ of $G$, denoted by $d_G(u,v)$, is the length of a shortest path connecting $u$ and $v$ in $G$. For $v\in V(G)$, the eccentricity of $v$ in $G$, denoted $\varepsilon_G(v)$, is the maximum distance from $v$ to all other vertices of $G$. The diameter of a graph $G$ is the maximum eccentricity of all vertices in $G$. As usual, by $S_n$ and $P_n$ we denote the star and path on $n$ vertices, respectively.
A subset $A$ of $V(G)$ is called a vertex cut of $G$ if $G-A$ is disconnected. The minimum size of a set $A$ such that $G-A$ is disconnected or has exactly one vertex is called the connectivity of $G$, and is denoted by $k(G)$. A graph is $k$-connected if its connectivity is at least $k$. A subset $I$ of $V(G)$ is called an independent set of $G$ if the vertices of $I$ are pairwise non-adjacent. The independence number of $G$, denoted by $\alpha(G)$, is the maximum cardinality of an independent set of $G$. The join of two disjoint graphs $M_1$ and $M_2$, denoted by $M_1 \vee M_2$, is the graph formed from $M_1 \cup M_2$ by adding the edges $\{xy : x\in V(M_1), y\in V(M_2) \}$. Let $M_1 \vee M_2 \vee M_3 \vee \dots \vee M_t=(M_1\vee M_2)\cup (M_2\vee M_3)\cup \dots \cup (M_{t-1}\vee M_t)$. The sequential join of $t$ disjoint copies of a graph $G$ is denoted by $[t]G$, and the union of $m$ disjoint copies of a graph $G$ is denoted by $mG$.
Topological indices are numbers reflecting certain structural features of a molecule that are derived from its molecular graph. They are used in theoretical chemistry for design of chemical compounds with given physicochemical properties or given pharmacological and biological activities.
The eccentric connectivity index of a graph $G$ is a topological index based on degree and eccentricity, defined as \[ \xi^c (G)=\sum_{v \in V(G)}d_G(v)\varepsilon_G(v). \] It has been studied extensively, see \cite{1,2,3,4,6}.
In 2000, Gupta et al. \cite{GSM} proposed a new topological index involving degree and eccentricity, called the connective eccentricity index, defined as \[ \xi^{ce}(G)=\sum_{v \in V(G)}\frac{d_G(v)}{\varepsilon_G(v)}. \] In experiments on anti-hypertensive activity of chemical compounds such as non-peptide N-benzylimidazole derivatives, the results obtained using the connective eccentricity index were better than the corresponding values obtained using Balaban’s mean square distance index. It is therefore worth studying the mathematical properties of the connective eccentricity index.
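For concreteness, $\xi^{ce}$ is straightforward to compute for small graphs. The following minimal sketch is our own illustration (not part of \cite{GSM}); it assumes the Python library \texttt{networkx} for degrees and eccentricities.
\begin{verbatim}
import networkx as nx

def connective_eccentricity_index(G):
    # xi^{ce}(G) = sum over v of deg(v) / ecc(v), for a connected graph G
    ecc = nx.eccentricity(G)
    return sum(G.degree(v) / ecc[v] for v in G)

# Star S_4: the centre has degree 3 and eccentricity 1; each leaf degree 1, eccentricity 2.
S4 = nx.star_graph(3)
print(connective_eccentricity_index(S4))   # 3/1 + 3*(1/2) = 4.5
\end{verbatim}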
Mathematical properties of the connective eccentricity index have been studied extensively for trees, unicyclic graphs and general graphs. In particular, Yu and Feng \cite{YF} obtained upper and lower bounds for the connective eccentricity index of graphs in terms of many graph parameters such as radius, maximum degree, independence number, vertex connectivity, minimum degree, number of pendant vertices and number of cut edges. Li and Zhao \cite{LZ} studied the extremal properties of the connective eccentricity index among $n$-vertex trees with given graph parameters such as the number of pendant vertices, matching number, domination number, diameter and vertex bipartition. Xu et al. \cite{XDL} characterized the extremal graphs for the connective eccentricity index among all connected graphs with fixed order and fixed matching number.
For more studies on connective eccentricity index of graphs we refer \cite{LLZ, TWLF, YQTF} and the references cited therein.
In the present paper, continuing this line of research, we study the mathematical properties of the connective eccentricity index of graphs in terms of various graph invariants. First, we determine the graphs which attain the maximum CEI among $n$-vertex graphs with fixed connectivity and diameter; then we identify the unique graph with given connectivity and independence number having the maximum CEI. Finally, we characterize the graphs with maximum CEI among $n$-vertex graphs with fixed connectivity and minimum degree.
\begin{lemma}\cite{YF}\label{L1}
Let $G$ be a graph with a pair of non-adjacent vertices $u, v$. Then $\xi ^{ce}(G) < \xi ^{ce}(G+uv)$. \end{lemma}
\section{Results}
Let $\mathbb{G}_k(n,d)$ be the class of all $k$-connected graphs of order $n$ with diameter $d$. If $d=1$, then $K_n$ is the unique graph in $\mathbb{G}_k(n,1)$. Therefore, we consider $d\geq 2$ in what follows.
For even $d \geq 4$, denote $ G(n,k,d)= K_1 \vee [(d-2)/2]K_k \vee K_{n-kd+2k-2} \vee [(d-2)/2]K_k \vee K_1 $, and for odd $d \geq 3$, let $\mathcal{H}(n,k,d)$ be the set of graphs of the form $K_1 \vee [(d-3)/2]K_k \vee K_{s+1} \vee K_{t+1} \vee [(d-3)/2]K_k \vee K_1$, where $s, t \geq k-1$ and $s+ t= n-kd+3k-4$.
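As an informal illustration (not needed for the proofs), the graph $G(n,k,d)$ can be built and inspected by computer. The sketch below is again in Python with \texttt{networkx} (our own assumption); the parameter values are arbitrary.
\begin{verbatim}
import networkx as nx
from itertools import product

def sequential_join(sizes):
    # K_{s_1} v K_{s_2} v ... : complete blocks, consecutive blocks fully joined
    G, blocks, first = nx.Graph(), [], 0
    for s in sizes:
        block = list(range(first, first + s))
        G.add_nodes_from(block)
        G.add_edges_from((u, v) for u in block for v in block if u < v)
        blocks.append(block)
        first += s
    for A, B in zip(blocks, blocks[1:]):
        G.add_edges_from(product(A, B))
    return G

n, k, d = 10, 2, 4                       # even d; build G(n,k,d) as defined above
G = sequential_join([1] + [k] * ((d - 2) // 2) + [n - k * d + 2 * k - 2]
                    + [k] * ((d - 2) // 2) + [1])
ecc = nx.eccentricity(G)
print(nx.node_connectivity(G), nx.diameter(G),   # 2 and 4, so G lies in the class
      sum(G.degree(v) / ecc[v] for v in G))      # its connective eccentricity index
\end{verbatim}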
\begin{theorem}\label{T1} Let $G$ be a graph in $\mathbb{G}_k(n,d)$ with maximum CEI, where $d\geq 3$. Then $G \cong G(n,k,d)$ if $d$ is even, and $G \in \mathcal{H}(n,k,d)$ otherwise. \end{theorem}
\begin{proof}
Let $G \in \mathbb{G}_k(n,d)$ such that $\xi ^{ce}(G)$ is as large as possible. Let $P:= u_0u_1 \dots u_d$ be a diametral path in $G$. Let $A_i= \{v\in V(G) : d_G(v, u_0)=i\}$. Then $|A_0|=1$ and $A_0\cup A_1\cup \dots \cup A_d$ is a partition of $V(G)$.
Since $G$ is a $k$-connected graph, we have $|A_i| \geq k $ for $i\in \{1,2, \dots , d-1\}$. By Lemma \ref{L1}, $G[A_i]$ and $G[A_{i-1}\cup A_i]$ are complete graphs for $i\in \{1, \dots , d\}$. We claim that $|A_d|=1$; otherwise, we choose a vertex $v\in A_d\setminus \{u_d\}$ and let $G^\ast =G+\{vx : x\in A_{d-2}\}$. Clearly, $G^\ast \in \mathbb{G}_k(n,d)$. By Lemma \ref{L1}, $\xi ^{ce}(G) < \xi ^{ce}(G^\ast)$, a contradiction. So $|A_d|=1$. Thus, we have $|A_0|=|A_d|=1$, and $|A_i|\ge k$ for $i\in \{1, \dots, d-1\}$.
\noindent {\bf Case 1}. $d$ is even with $d \geq 4$. We show that $|A_1|= |A_2|=\dots = |A_{\frac{d}{2}-1}|= |A_{\frac{d}{2}+1}| =\dots = |A_{d-1}|=k $ and $|A_{\frac{d}{2}}|= n-kd+2k-2$.
First, we claim that $|A_1|=|A_{d-1}|= k$. Suppose that $|A_1|\geq k+1$. Then we choose $w\in A_1\setminus \{u_1\}$ and let $G' =G-wu_0+\{wx : x\in A_3\}$. Clearly, $A_0\cup (A_1\setminus \{w\})\cup (A_2\cup \{w\}) \cup A_3\cup \dots \cup A_d$ is a partition of $ V(G')$. From the construction of $G'$, we have $d_G(v) = d_{G'}(v), \varepsilon_G(v) = \varepsilon_{G'}(v)$ for all $v \in V(G)\setminus (A_3\cup \{u_0,w\})$. Moreover,
\begin{eqnarray*}
d_G(u_0) &=& d_{G'}(u_0)+1, \\
\varepsilon_G(u_0) &=& \varepsilon_{G'}(u_0)=d, \\
d_G(w) &=& d_{G'}(w)+1-|A_3|, \\
\varepsilon_G(w) &>& \varepsilon_{G'}(w) , \\
d_G(x) &=& d_{G'}(x)-1, \\
\varepsilon_G(x) &=& \varepsilon_{G'}(x)< d \hbox{ for all $x \in A_3$}.
\end{eqnarray*}
By the definition of CEI, we have
\begin{eqnarray*}
\xi^{ce}(G) - \xi^{ce}(G') &=& \frac{d_G(u_0)}{\varepsilon_G(u_0)} - \frac{d_{G'}(u_0)}{\varepsilon_{G'}(u_0)}
+\frac{d_G(w)}{\varepsilon_G(w)}- \frac{d_{G'}(w)}{\varepsilon_{G'}(w)}\\
&&+ \sum_{x \in A_3}\left( \frac{d_G(x)}{\varepsilon_G(x)}-
\frac{d_{G'}(x)}{\varepsilon_{G'}(x)}\right)\\
&&+ \sum_{v \in V(G)\setminus (A_3\cup \{u_0,w\})}\left( \frac{d_G(v)}{\varepsilon_G(v)}-
\frac{d_{G'}(v)}{\varepsilon_{G'}(v)}\right)\\
&=& \frac{1}{d} +\frac{d_G(w)}{\varepsilon_G(w)}- \frac{d_{G'}(w)}{\varepsilon_{G'}(w)}+
|A_3| \left(\frac{-1}{\varepsilon_{G}(x)}\right)\\
&<& \frac{1}{d} +\frac{1-|A_3|}{\varepsilon_{G'}(w)}- \frac{|A_3|}{\varepsilon_{G}(x)}\\
&\leq& \frac{1}{d}- \frac{|A_3|}{\varepsilon_{G}(x)}\\
&<& 0, \end{eqnarray*}
where the last inequality follows from the fact that $\varepsilon_{G}(x) < d $ and $|A_3|\ge k\ge 1$. Thus, $\xi^{ce}(G) < \xi^{ce}(G')$, a contradiction to the choice of $G$. Therefore, $|A_1|= k$. Similarly, $|A_{d-1}|=k$, as claimed. By similar argument as above we may also show that $|A_2|=|A_{d-2}|=k$, \dots, $|A_{\frac{d}{2}-1}|= |A_{\frac{d}{2}+1}|=k$. Then we have $|A_{\frac{d}{2}}|= n-kd+2k-2$. Therefore, $G \cong G(n,k,d)$.
\noindent {\bf Case 2.} $d$ is odd with $d \geq 3$. By an argument similar to that in Case 1, we have $|A_1|= |A_{d-1}|=k$, $|A_2|=|A_{d-2}|=k$, \dots, $|A_{\frac{d-3}{2}}|= |A_{\frac{d+3}{2}}| =k$. It follows that $|A_{\frac{d-1}{2}}|+|A_{\frac{d+1}{2}}|= n-kd+3k-2$. Therefore, $G \in \mathcal{H}(n,k,d)$. It remains to show that all graphs in $\mathcal{H}(n,k,d)$ have equal CEI. Let $G_1= K_1 \vee [(d-3)/2]K_k \vee K_{z+1} \vee K_k \vee [(d-3)/2]K_k \vee K_1$, where $z= n-kd+2k-3$. Clearly, $G_1 \in \mathcal{H}(n,k,d)$ (it corresponds to $s=z$ and $t=k-1$). For a graph $G_2= K_1 \vee [(d-3)/2]K_k \vee K_{s+1} \vee K_{t+1} \vee [(d-3)/2]K_k \vee K_1$, we assume its vertex partition $A_0\cup A_1\cup \dots \cup A_d$ is defined as above.
If one of $s,t$ is $k-1$, then $G_2 \cong G_1$ and hence $\xi ^{ce}(G_1) = \xi ^{ce}(G_2)$. Suppose that $s,t \geq k$. Let $M\subseteq A_{\frac{d+1}{2}} \setminus \{u_{\frac{d+1}{2}}\}$ with $|M|=t-k+1$. We obtain $G_1$ from $G_2$ by the following graph transformation:
\[
G_1=G_2-\{xy : x \in M, y \in A_{\frac{d+3}{2}} \} + \{xy : x \in M, y \in A_{\frac{d-3}{2}} \}.
\]
Then, it is easy to see that $A_0\cup A_1\cup \dots \cup A_{\frac{d-3}{2}} \cup ( A_{\frac{d-1}{2}}\cup M)\cup ( A_{\frac{d+1}{2}}\setminus M)\cup A_{\frac{d+3}{2}} \cup \dots \cup A_d$ is a partition of $V(G_1)$. From the construction of $G_1$, we have $\varepsilon_{G_1}(v) = \varepsilon_{G_2}(v)$ for all $v \in V(G_2)$ and $d_{G_1}(v) = d_{G_2}(v)$ for all $v \in V(G_2) \setminus (A_{\frac{d+3}{2}} \cup A_{\frac{d-3}{2}})$, it follows that
\begin{eqnarray*}
d_{G_2}(x) &=& d_{G_1}(x)+t-k+1 \hbox{ for each $x \in A_{\frac{d+3}{2}} $}, \\
d_{G_2}(x) &=& d_{G_1}(x)-(t-k+1)\hbox{ for each $x \in A_{\frac{d-3}{2}} $}.
\end{eqnarray*}
By the definition of CEI, we have
\begin{eqnarray*}
\xi^{ce}(G_2) - \xi^{ce}(G_1) &=& \sum_{x \in A_{\frac{d+3}{2}}}\left( \frac{d_{G_2}(x)}{\varepsilon_{G_2}(x)}
-\frac {d_{G_1}(x)}{\varepsilon_{G_1}(x)}\right)\\
&&+ \sum_{x \in A_{\frac{d-3}{2}}}\left( \frac{d_{G_2}(x)}{\varepsilon_{G_2}(x)}- \frac{d_{G_1}(x)}{\varepsilon_{G_1}(x)}\right)\\
&&+ \sum_{v \in V(G_2) \setminus (A_{\frac{d+3}{2}} \cup A_{\frac{d-3}{2}})}\left( \frac{d_{G_2}(v)}{\varepsilon_{G_2}(v)}-
\frac{d_{G_1}(v)}{\varepsilon_{G_1}(v)}\right)\\
&=& k\left( \frac{t-k+1}{\varepsilon_{G_2}(x)}
-\frac {t-k+1}{\varepsilon_{G_2}(x)}\right)\\
&=&0. \end{eqnarray*} Thus, $\xi^{ce}(G_2) = \xi^{ce}(G_1)$. This completes the proof. \end{proof}
Let $\mathbb{G}_k(n,\alpha)$ be the class of all $k$-connected graphs of order $n$ with independence number $\alpha$. If $\alpha=1$, then $K_n$ is the unique graph in $\mathbb{G}_k(n,1)$ with maximum CEI. Therefore, we consider $\alpha\geq 2$ in what follows. Let \[ S_{n,\alpha }= K_k \vee (K_1 \cup ( K_{n-k-\alpha} \vee (\alpha -1)K_1)). \] Obviously, $S_{n,\alpha } \in \mathbb{G}_k(n,\alpha)$.
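As a quick sanity check of this membership claim (purely illustrative, again using \texttt{networkx} as an assumption), one can build $S_{n,\alpha}$ and verify its connectivity and independence number for small parameter values.
\begin{verbatim}
import networkx as nx
from itertools import product

def build_S(n, k, alpha):
    # S_{n,alpha} = K_k v ( K_1 u ( K_{n-k-alpha} v (alpha-1)K_1 ) ) on vertices 0..n-1
    Kk   = list(range(0, k))                  # the K_k joined to everything else
    one  = [k]                                # the K_1
    Kmid = list(range(k + 1, n - alpha + 1))  # the K_{n-k-alpha}
    ind  = list(range(n - alpha + 1, n))      # the (alpha-1) independent vertices
    G = nx.Graph(); G.add_nodes_from(range(n))
    for B in (Kk, Kmid):
        G.add_edges_from((u, v) for u in B for v in B if u < v)
    G.add_edges_from(product(Kk, one + Kmid + ind))
    G.add_edges_from(product(Kmid, ind))
    return G

G = build_S(10, 3, 4)
indep = max(len(c) for c in nx.find_cliques(nx.complement(G)))  # independence number
ecc = nx.eccentricity(G)
print(nx.node_connectivity(G), indep,                  # expect 3 and 4
      sum(G.degree(v) / ecc[v] for v in G))            # xi^{ce}(S_{10,4})
\end{verbatim}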
\begin{theorem}
Among all graphs in $\mathbb{G}_k(n,\alpha)$ with $\alpha\geq 2$, $S_{n,\alpha }$ is the unique graph with maximum CEI. \end{theorem}
\begin{proof}
Note that since $G \in \mathbb{G}_k(n,\alpha)$, we have $k+\alpha \le n$. If $k+\alpha= n$ then $G \cong K_k \vee \alpha K_1$, and the result holds in this case. Therefore, we consider the case $k+\alpha +1 \le n$ in what follows.
Let $G \in \mathbb{G}_k(n,\alpha)$ such that $\xi ^{ce}(G)$ is as large as possible. Let $I$ be a maximum independent set of $G$ and $A$ a vertex cut of $G$, with $|I|=\alpha $ and $|A|=k$. Let $G_1, G_2, \dots ,G_s$ be the components of $G-A$ with $s\geq 2$. Assume that $|G_1|\geq |G_2|\geq \dots \geq|G_s|$. We claim that $G_1$ is non-trivial; otherwise $G_i$ is trivial for $i\in \{1,2, \dots , s\}$, and the independence number of $G$ is at least $n-k ~(\geq \alpha +1)$, a contradiction. So, $G_1$ is non-trivial. Let $|A\cap I|=a, |A\setminus I|=b$ and $|V(G_i)\cap I|=n_i, |V(G_i)\setminus I|=m_i$ for $i\in \{1,2, \dots , s\}$. Obviously, $k=a+b$ and $|V(G_i)|=n_i+m_i$ for $i\in \{1,2, \dots , s\}$. We proceed with the following claims.
\noindent {\bf Claim 1.} $G-A$ contains exactly two components, i.e., $s=2.$
\noindent {\bf Proof of Claim 1.}
Suppose that $s\geq 3$. Since $G_1$ is non-trivial, we have $V(G_1)\setminus I \neq \emptyset$. Then choose $u \in V(G_1)\setminus I$ and $v \in V(G_2)$. Let $H=G+uv$. Clearly, $H \in \mathbb{G}_k(n,\alpha)$. By Lemma \ref{L1}, we have $\xi ^{ce}(G) < \xi ^{ce}(H)$, a contradiction. So, $s=2.$
\noindent {\bf Claim 2.} $G[A] \cong K_b \vee aK_1$, $G_i \cong K_{m_i}\vee n_iK_1$ and $G[V(G_i)\cup A] \cong K_{b+m_i}\vee (a+n_i)K_1$ for $i=1,2$.
\noindent {\bf Proof of Claim 2.}
First, we show that $G[A] \cong K_b \vee aK_1$. Suppose that $G[A] \ncong K_b \vee aK_1$. Then there exist non-adjacent vertices $u,v$ with $u,v \in A \setminus I$ or with $u \in A \setminus I$, $v \in A\cap I$. Let $Q=G+uv$. Clearly, $Q \in \mathbb{G}_k(n,\alpha)$. By Lemma \ref{L1}, we have $\xi ^{ce}(G) < \xi ^{ce}(Q)$, a contradiction. So, $G[A] \cong K_b \vee aK_1$. By similar techniques we can show that $G_i \cong K_{m_i}\vee n_iK_1$ and $G[V(G_i)\cup A] \cong K_{b+m_i}\vee (a+n_i)K_1$ for $i=1,2$.
\noindent {\bf Claim 3.} $G_2$ is trivial.
\noindent {\bf Proof of Claim 3.}
Suppose that $G_2$ is non-trivial. Then we have the following two possible cases.
\noindent {\bf Case 1.} $n_2=0$. If $a=0$, then $I=V(G_1)\cap I$. Choose $w \in V(G_2)\setminus I$, we get $I\cup \{w\}$ is an independent set such that $|I\cup \{w\}|=\alpha +1$, a contradiction. So, $a \geq 1$.
Choose $w \in V(G_2)$ and let $G' =G-\{wx : x \in V(G_2)\setminus \{w\}\}+ \{xy : x \in V(G_2)\setminus \{w\}, y\in V(G_1)\}$.
Clearly, $G' \in \mathbb{G}_k(n,\alpha)$. From the construction of $G'$, we have $\varepsilon_G(v) = \varepsilon_{G'}(v)=1$ for each $v \in A \setminus I$ and $\varepsilon_G(v) = \varepsilon_{G'}(v)=2$ for each $v \in V(G)\setminus (A \setminus I)$. Moreover,
\begin{eqnarray*}
d_G(w) &=& d_{G'}(w)+m_2-1, \\
\varepsilon_G(w) &=& \varepsilon_{G'}(w)=2,\\
d_G(x) &=& d_{G'}(x)+1- n_1- m_1 \hbox{ for all $x \in V(G_2)\setminus \{w\}$}, \\
\varepsilon_G(x) &=& \varepsilon_{G'}(x)=2 \hbox{ for all $x \in V(G_2)\setminus \{w\}$}, \\
d_G(x) &=& d_{G'}(x)+1- m_2 \hbox{ for all $x \in V(G_1)$},\\
\varepsilon_G(x) &=& \varepsilon_{G'}(x)=2 \hbox{ for all $x \in V(G_1)$},\\
d_G(x) &=& d_{G'}(x) \hbox{ for all $x \in A$}.
\end{eqnarray*}
By the definition of CEI, we have
\begin{eqnarray*}
\xi^{ce}(G) - \xi^{ce}(G') &=& \frac{d_G(w)}{\varepsilon_G(w)} - \frac{d_{G'}(w)}{\varepsilon_{G'}(w)}
+\sum_{x \in V(G_2)\setminus \{w\}}\left( \frac{d_G(x)}{\varepsilon_G(x)}- \frac{d_{G'}(x)}{\varepsilon_{G'}(x)}\right)\\
&&+ \sum_{x \in V(G_1)}\left( \frac{d_G(x)}{\varepsilon_G(x)}- \frac{d_{G'}(x)}{\varepsilon_{G'}(x)}\right)\\
&&+ \sum_{x \in A}\left( \frac{d_G(x)}{\varepsilon_G(x)}- \frac{d_{G'}(x)}{\varepsilon_{G'}(x)}\right)\\
&=& \frac{1}{2}\left(m_2-1-(m_2-1)(m_1+n_1-1)-(m_1+n_1)(m_2-1)\right)\\
&<&0, \end{eqnarray*}
a contradiction.
\noindent {\bf Case 2.} $n_2 \neq 0$.
Choose $w \in V(G_2)\cap I$. Let
$G'' =G-\{wx : x \in V(G_2)\}+ \{xy : x \in V(G_1) \cap I , y\in V(G_2)\setminus I\}+ \{xy : x \in V(G_1) \setminus I , y \in V(G_2)\setminus \{w\}\}
$.
Clearly, $G'' \in \mathbb{G}_k(n,\alpha)$. From the construction of $G''$, we have $\varepsilon_G(v) = \varepsilon_{G''}(v)$ for all $v \in V(G)$. Moreover,
\begin{eqnarray*}
d_G(v) &=& d_{G''}(v) \hbox{ for all $v \in A$}, \\
d_G(w) &=& d_{G''}(w)+m_2,\\
d_G(x) &=& d_{G''}(x)+1-n_1-m_1 \hbox{ for all $x \in V(G_2)\setminus I$}, \\
d_G(x) &=& d_{G''}(x)-m_1 \hbox{ for all $x \in (V(G_2)\cap I)\setminus \{w\}$}, \\
d_G(x) &=& d_{G''}(x)-m_2 \hbox{ for all $x \in V(G_1)\cap I$}, \\
d_G(x) &=& d_{G''}(x)+1-n_2-m_2 \hbox{ for all $x \in V(G_1)\setminus I$}. \\
\end{eqnarray*} By the definition of CEI, we have
\begin{eqnarray*}
\xi^{ce}(G) - \xi^{ce}(G'') &=& \frac{d_G(w)}{\varepsilon_G(w)} - \frac{d_{G''}(w)}{\varepsilon_{G''}(w)}+
\sum_{x \in V(G_2)\setminus I}\left( \frac{d_G(x)}{\varepsilon_G(x)}- \frac{d_{G''}(x)}{\varepsilon_{G''}(x)}\right)\\
&&+ \sum_{x \in (V(G_2)\cap I)\setminus \{w\}}\left( \frac{d_G(x)}{\varepsilon_G(x)}-
\frac{d_{G''}(x)} {\varepsilon_{G''}(x)}\right)\\
&&+ \sum_{x \in V(G_1)\cap I}\left( \frac{d_G(x)}{\varepsilon_G(x)}- \frac{d_{G''}(x)}{\varepsilon_{G''}(x)}\right)\\
&&+ \sum_{x \in V(G_1)\setminus I}\left( \frac{d_G(x)}{\varepsilon_G(x)}-\frac{d_{G''}(x)}{\varepsilon_{G''}(x)}\right)\\
&&+ \sum_{v \in A}\left( \frac{d_G(v)}{\varepsilon_G(v)}-\frac{d_{G''}(v)}{\varepsilon_{G''}(v)}\right)\\
&=& \frac{1}{\varepsilon_G(v)} \{ m_2-m_2(n_1+m_1-1)-m_1(n_2-1)\\
&-& n_1m_2 - m_1(n_2+m_2-1) \}\\
&<&0, \end{eqnarray*}
a contradiction. So, $G_2$ is trivial.
\noindent {\bf Claim 4.} $V(G_2) \subseteq I$.
\noindent {\bf Proof of Claim 4.}
Suppose that $V(G_2) \nsubseteq I$. We first show that $a \geq 2$. Suppose to the contrary that $a\leq 1$. If $a=1$, i.e., $A\cap I =\{w_1\}$, let
$G^\ast =G+\{w_1x : x \in V(G_1)\cap I\}$.
Clearly, $G^\ast \in \mathbb{G}_k(n,\alpha)$. By Lemma \ref{L1}, we have $\xi ^{ce}(G) < \xi ^{ce}(G^\ast)$, a contradiction. If $a=0$, then $I\cup V(G_2)$ is an independent set of $G$ such that $|I\cup V(G_2)|=\alpha +1$, a contradiction. So, $a \geq 2$.
Since $G_1$ is non-trivial, $V(G_1)\setminus I \neq \emptyset $. Choose $w_2 \in V(G_1)\setminus I$. Let
$G^{\ast\ast} =G-\{w_1v : v \in V(G_2)\}+ \{w_2v : v \in V(G_2)\}$.
Clearly, $G^{\ast\ast} \in \mathbb{G}_k(n,\alpha)$.
From the construction of $G^{\ast\ast}$, we have $ \varepsilon_G(x) = \varepsilon_{G^{\ast\ast}}(x)=1$ for all $x \in A\setminus I$, $ \varepsilon_G(x) = \varepsilon_{G^{\ast\ast}}(x)=2$ for all $x \in (A\cap I)\cup(V(G_1)\setminus \{w_2\})\cup\{v\}$ , and $d_G(x) = d_{G^{\ast\ast}}(x)$ for all $x \in V(G)\setminus \{w_1,w_2\}$. Moreover,
\begin{eqnarray*}
d_G(w_1) &=& d_{G^{\ast\ast}}(w_1)+1, \\
\varepsilon_G(w_1) &=& \varepsilon_{G^{\ast\ast}}(w_1)=2,\\
d_G(w_2) &=& d_{G^{\ast\ast}}(w_2)-1, \\
\varepsilon_G(w_2)=2 &>& \varepsilon_{G^{\ast\ast}}(w_2)=1. \\
\end{eqnarray*}
By the definition of CEI, we have
\begin{eqnarray*}
\xi^{ce}(G) - \xi^{ce}(G^{\ast\ast}) &=& \frac{d_G(w_1)}{\varepsilon_G(w_1)} - \frac{d_{G^{\ast\ast}}(w_1)}{\varepsilon_{G^{\ast\ast}}(w_1)}
+\frac{d_G(w_2)}{\varepsilon_G(w_2)}- \frac{d_{G^{\ast\ast}}(w_2)}{\varepsilon_{G^{\ast\ast}}(w_2)}\\
&&+ \sum_{x \in V(G)\setminus \{w_1,w_2\}}\left( \frac{d_G(x)}{\varepsilon_G(x)}- \frac{d_{G^{\ast\ast}}(x)}{\varepsilon_{G^{\ast\ast}}(x)}\right)\\
&=& -\frac{1+d_G(w_2)}{2}\\
&<&0, \end{eqnarray*}
a contradiction. So, $V(G_2) \subseteq I$. From Claims 1--4, we have $G \cong S_{n,\alpha }$. This completes the proof.
\end{proof}
Let $\mathbb{G}_k(n,\delta)$ be the class of all $k$-connected graphs of order $n$ with minimum degree at least $\delta$. Let \[ M_{n,\delta }= K_k \vee (K_{\delta-k+1} \cup K_{n-\delta-1}). \] Obviously, $M_{n,\delta } \in \mathbb{G}_k(n,\delta)$
\begin{theorem}
Among all graphs in $\mathbb{G}_k(n,\delta)$, $M_{n,\delta }$ is the unique graph with maximum CEI. \end{theorem}
\begin{proof}
Since $G \in \mathbb{G}_k(n,\delta)$, we have $k+1 \le n$. If $k+1= n$, then $G \cong K_{k+1} \cong M_{n,\delta }$, and the result holds in this case. Therefore, we consider the case $k+2 \le n$ in what follows.
Let $G \in \mathbb{G}_k(n,\delta)$ such that $\xi ^{ce}(G)$ is as large as possible. Let $A$ be a vertex cut of $G$ with $|A|=k$. Let $G_1, G_2, \dots ,G_r$ be the components of $G-A$ with $r\geq 2$. We claim that $r=2$; otherwise $r\geq 3$, and $G_1, G_2, G_3$ are three distinct components of $G-A$. Let $G' =G+\{xy : x \in V(G_2), y \in V(G_3) \}$. Clearly, $G' \in \mathbb{G}_k(n,\delta)$. By Lemma \ref{L1}, we have $\xi ^{ce}(G) < \xi ^{ce}(G')$, a contradiction. So, $r=2.$ Also by Lemma \ref{L1}, $G[V(G_1)\cup A] $ and $G[V(G_2)\cup A] $ are complete. Thus, we have $G\cong K_k \vee (K_{a_1} \cup K_{a_2})$, where $a_1= |V(G_1)|, a_2= |V(G_2)|$, and $a_1 + a_2 = n-k.$
Without loss of generality, we assume that $a_1 \leq a_2$.
To complete the proof it suffices to show that $a_1= \delta -k +1$. Suppose that $a_1 > \delta -k +1$. For $w \in V(G_1)$, let $G' =G-\{wx : x \in V(G_1)\setminus \{w\}\}+\{wx : x \in V(G_2)\} $. Clearly, $G' \in \mathbb{G}_k(n,\delta)$.
From the construction of $G'$, we have
\begin{eqnarray*}
d_G(w) &=& d_{G'}(w)+a_1-a_2-1, \\
\varepsilon_G(w) &=& \varepsilon_{G'}(w)=2,\\
d_G(z) &=& d_{G'}(z)+1, \\
\varepsilon_G(z) &=& \varepsilon_{G'}(z)=2 \hbox{ for all $z \in V(G_1)\setminus \{w\}$}, \\
d_G(t) &=& d_{G'}(t)-1, \\
\varepsilon_G(t) &=& \varepsilon_{G'}(t)=2 \hbox{ for all $t \in V(G_2)$}, \\
d_G(x) &=& d_{G'}(x),\\
\varepsilon_G(x) &=& \varepsilon_{G'}(x)=1 \hbox{ for all $x \in A$}.
\end{eqnarray*}
By the definition of CEI, we have
\begin{eqnarray*}
\xi^{ce}(G) - \xi^{ce}(G') &=& \frac{d_G(w)}{\varepsilon_G(w)} - \frac{d_{G'}(w)}{\varepsilon_{G'}(w)}+
\sum_{z \in V(G_1)\setminus \{w\}}\left( \frac{d_G(z)}{\varepsilon_G(z)}- \frac{d_{G'}(z)}{\varepsilon_{G'}(z)}\right)\\
&&+ \sum_{t \in V(G_2)}\left( \frac{d_G(t)}{\varepsilon_G(t)}-
\frac{d_{G'}(t)} {\varepsilon_{G'}(t)}\right)\\
&&+ \sum_{x \in A}\left( \frac{d_G(x)}{\varepsilon_G(x)}- \frac{d_{G'}(x)}{\varepsilon_{G'}(x)}\right)\\
&=& a_1-1-a_2\\
&<&0, \end{eqnarray*} where the last inequality follows from the fact that $a_1 \leq a_2$, a contradiction. So $a_1= \delta -k +1$, and hence $a_2= n-\delta-1$. Thus, $G \cong M_{n,\delta }$. This completes the proof.
\end{proof}
\end{document}
\begin{document}
\title[On the complexity of detecting positive eigenvectors]{On the complexity of detecting positive eigenvectors of nonlinear cone maps}
\author{Bas Lemmens} \address{School of Mathematics, Statistics \& Actuarial Science, Sibson Building, University of Kent, Canterbury, Kent CT2 7FS, UK} \curraddr{} \email{[email protected]} \thanks{}
\author{Lewis White} \address{School of Mathematics, Statistics \& Actuarial Science, Sibson Building, University of Kent, Canterbury, Kent CT2 7FS, UK} \curraddr{} \email{[email protected]} \thanks{The second author was supported by a London Mathematical Society ``Undergraduate Research Bursary'' and the School of Mathematics, Statistics and Actuarial Science at the University of Kent.}
\subjclass[2010]{Primary 47H07, 47H09; Secondary 37C25}
\keywords{Nonlinear maps on cones, positive eigenvectors, illumination problem, Hilbert's metric}
\dedicatory{}
\begin{abstract}
In recent work with Lins and Nussbaum the first author gave an algorithm that can detect the existence of a positive eigenvector for order-preserving homogeneous maps on the standard positive cone. The main goal of this paper is to determine the minimum number of iterations this algorithm requires. It is known that this number is equal to the illumination number of the unit ball, $B_{\mathrm{v}}$, of the variation norm, $\|x\|_{\mathrm{v}} :=\max_i x_i -\min_i x_i$ on $V_0:=\{x\in\mathbb{R}^n\colon x_n=0\}$. In this paper we show that the illumination number of $B_{\mathrm{v}}$ is equal to ${n\choose\lceil \frac{n}{2}\rceil}$, and hence provide a sharp lower bound for the running time of the algorithm. \end{abstract}
\maketitle \section{Introduction} The classical Perron-Frobenius theory concerns the spectral properties of square nonnegative matrices. In recent decades this theory has been extended to a variety of nonlinear maps that preserve a partial ordering induced by a cone (see \cite{LNBook} and the references therein for an up-to-date account).
Of particular interest are order-preserving homogeneous maps $f\colon \mathbb{R}^n_{\geq 0}\to \mathbb{R}^n_{\geq 0}$, where \[ \mathbb{R}^n_{\geq 0}:=\{x\in\mathbb{R}^n\colon x_i\geq 0\mbox{ for all } i=1,\ldots,n\}\] is the {\em standard positive cone}. Recall that $f\colon \mathbb{R}^n_{\geq 0}\to \mathbb{R}^n_{\geq 0}$ is {\em order-preserving} if $f(x)\leq f(y)$ whenever $x\leq y$ and $x,y\in \mathbb{R}^n_{\geq 0}$. Here $w\leq z$ if $z-w\in \mathbb{R}^n_{\geq 0}$. Furthermore, $f$ is said to be {\em homogeneous} if $f(\lambda x) =\lambda f(x)$ for all $\lambda \geq 0$ and $x\in \mathbb{R}^n_{\geq 0}$. Such maps arise in mathematical biology \cite{NMem2,Sch} and in optimal control and game theory \cite{BK,RS}.
It is known \cite[Corollary 5.4.2]{LNBook} that if $f\colon \mathbb{R}^n_{\geq 0}\to \mathbb{R}^n_{\geq 0}$ is a continuous, order-preserving, homogeneous map, then there exists $v\in \mathbb{R}^n_{\geq 0}$ such that \[ f(v) = r(f) v, \] where \[
r(f) :=\lim_{k\to\infty} \|f^k\|^{1/k}_{ \mathbb{R}^n_{\geq 0}} \] is the {\em cone spectral radius} of $f$ and \[
\|g\|_{\mathbb{R}^n_{\geq 0}}:=\sup \{\|g(x)\|\colon x\in \mathbb{R}^n_{\geq 0}\mbox{ and } \|x\|\leq 1\}.\] Thus, as in the case of nonnegative matrices, continuous order-preserving homogeneous maps on $\mathbb{R}^n_{\geq 0}$ have an eigenvector in the cone corresponding to the spectral radius.
In many applications it is important to know if the map has a {\em positive} eigenvector, i.e., an eigenvector that lies in the interior, $\mathbb{R}^n_{>0} :=\{x\in\mathbb{R}^n_{\geq 0}\colon x_i > 0\mbox{ for }i=1,\ldots,n\}$, of $\mathbb{R}^n_{\geq 0}$. This appears to be a much more subtle problem. There exists a variety of sufficient conditions in the literature, see \cite{Ca}, \cite{GG}, \cite[Chapter 6]{LNBook}, and \cite{NMem1}. Recently, Lemmens, Lins and Nussbaum \cite[Section 5]{LLN} gave an algorithm that can confirm the existence of a positive eigenvector for continuous, order-preserving, homogeneous maps $f\colon \mathbb{R}^n_{\geq 0}\to \mathbb{R}^n_{\geq 0}$. The main goal of this paper is to determine the minimum number of iterations this algorithm needs to perform.
\section{Preliminaries}
Given a set $S$ in a finite dimensional vector space $V$ we write $S^\circ$ to denote the interior of $S$, and we write $\partial S$ to denote the boundary of $S$ with respect to the norm topology on $V$.
It is known that if $f\colon \mathbb{R}^n_{\geq 0}\to \mathbb{R}^n_{\geq 0}$ is an order-preserving homogeneous map and there exists $z\in \mathbb{R}^n_{> 0}$ such that $f(z) \in\partial \mathbb{R}^n_{\geq 0}$, then $f(\mathbb{R}^n_{> 0})\subset \partial \mathbb{R}^n_{\geq 0}$, see \cite[Lemma 1.2.2]{LNBook}. Thus to analyse the existence of a positive eigenvector one may as well consider order-preserving homogeneous maps $f\colon \mathbb{R}^n_{>0}\to \mathbb{R}^n_{> 0}$. Moreover, on $\mathbb{R}^n_{>0}$ we have {\em Hilbert's metric}, $d_H$, which is given by \[ d_H(x,y) := \log \left(\max_i \frac{x_i}{y_i}\right) - \log \left(\min_i \frac{x_i}{y_i}\right)\mbox{\quad for }x,y\in \mathbb{R}^n_{>0}. \] Note that $d_H$ is not a genuine metric, as $d_H(\lambda x, \mu x) = 0$ for all $x\in \mathbb{R}^n_{>0}$ and $\lambda,\mu >0$. In fact, $d_H(x,y) =0$ if and only if $x=\lambda y$ for some $\lambda >0$. However, $d_H$ is a metric on the set of rays in $\mathbb{R}^n_{>0}$.
If $f\colon \mathbb{R}^n_{>0}\to \mathbb{R}^n_{>0}$ is order-preserving and homogeneous, then $f$ is nonexpansive under $d_H$, i.e., \[ d_H(f(x),f(y))\leq d_H(x,y)\mbox{\quad for all }x,y\in \mathbb{R}^n_{>0}, \] see for example \cite[Proposition 2.1.1]{LNBook}. In particular, order-preserving homogeneous maps $f\colon \mathbb{R}^n_{>0}\to \mathbb{R}^n_{>0}$ are continuous on $\mathbb{R}^n_{>0}$. Moreover, if $x$ and $y$ are eigenvectors of $f\colon \mathbb{R}^n_{>0}\to \mathbb{R}^n_{>0}$ with $f(x) = \lambda x$ and $f(y)=\mu y$, then $\lambda =\mu$, see \cite[Corollary 5.2.2]{LNBook}.
In \cite[Theorem 5.1]{LLN} the following necessary and sufficient conditions were obtained for an order-preserving homogeneous map $f\colon \mathbb{R}^n_{>0}\to \mathbb{R}^n_{>0}$ to have a nonempty set of eigenvectors, $\mathrm{E}(f) :=\{x\in \mathbb{R}^n_{> 0}\colon x\mbox{ eigenvector of } f\}$, which is bounded under Hilbert's metric. \begin{theorem}\label{thm:npfthm} If $f\colon \mathbb{R}^n_{> 0}\to \mathbb{R}^n_{> 0}$ is an order-preserving homogeneous map, then $\mathrm{E}(f)$ is nonempty and bounded under $d_H$ if and only if for each nonempty proper subset $J$ of $\{1,\ldots,n\}$ there exists $x^J\in \mathbb{R}^n_{> 0}$ such that \begin{equation}\label{eq:1.1} \max_{j\in J}\,\frac{f(x^J)_j}{x^J_j}< \min_{j\in J^c}\,\frac{f(x^J)_j}{x^J_j}. \end{equation} \end{theorem} Note that the assertion is trivial in case $n=1$, as each order-preserving homogeneous map $f\colon \mathbb{R}_{> 0}\to \mathbb{R}_{> 0}$ has a nonempty bounded set of eigenvectors. In case $n\geq 2$ Theorem \ref{thm:npfthm} yields the following simple algorithm for detecting positive eigenvectors: \begin{algorithm} \label{alg} Let $f\colon \mathbb{R}^n_{>0} \to \mathbb{R}^n_{>0}$ be an order-preserving homogeneous map. Repeat the following steps until every nonempty proper subset $J$ of $\{1,\ldots, n\}$ has been recorded. \begin{description} \item[Step 1] Randomly select $x$, with $x_1=1$ and $0<x_j<1$ for all $j\in\{2,\ldots,n\}$, and compute $f(x)_j/x_j$ for all $j \in \{1,\ldots, n\}$. \item[Step 2] Record all nonempty proper subsets $J \subset \{1,\ldots,n\}$ such that inequality (\ref{eq:1.1}) holds. \end{description} \end{algorithm} So, if this algorithm halts, then $f$ has an eigenvector in $\mathbb{R}^n_{>0}$ and $\mathrm{E(}f)$ is bounded under Hilbert's metric. If $\mathrm{E}(f)$ is empty or unbounded under $d_H$, then the algorithm does not halt. This can happen even if the map is linear. Consider, for example the linear map $x\mapsto Ax$ on $\mathbb{R}^2_{>0}$, where \[ A = \left [\begin{array}{cc} 1& 1 \\ 0 & 1 \end{array}\right], \] which has no eigenvector in $\mathbb{R}^2_{>0}$. At present no algorithm is known that can decide if an order-preserving homogeneous map on $\mathbb{R}^n_{>0}$ has an empty or an unbounded set of eigenvectors. It is also unknown if there is an efficient way to generate the vectors $x$ in Step 1.
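The following minimal Python sketch of Algorithm~\ref{alg} is our own illustration and is not taken from \cite{LLN}. For simplicity it samples all coordinates log-uniformly from a wide positive range rather than using the normalisation of Step 1; this does not affect the halting criterion of Step 2, which only asks that every inequality in (\ref{eq:1.1}) be witnessed by some sample. The matrix and the sampling parameters are arbitrary choices.
\begin{verbatim}
import itertools
import numpy as np

def certify_positive_eigenvector(f, n, samples=20000, seed=0):
    # Halt once every proper nonempty J satisfies (1.1) for some sampled x > 0.
    rng = np.random.default_rng(seed)
    todo = {frozenset(J) for r in range(1, n)
            for J in itertools.combinations(range(n), r)}    # 2^n - 2 subsets
    for _ in range(samples):
        x = np.exp(rng.uniform(-5.0, 5.0, size=n))           # random positive vector
        ratio = f(x) / x                                      # the numbers f(x)_j / x_j
        done = [J for J in todo
                if max(ratio[j] for j in J)
                   < min(ratio[j] for j in set(range(n)) - J)]
        todo.difference_update(done)
        if not todo:
            return True     # certifies: E(f) nonempty and bounded in Hilbert's metric
    return False            # inconclusive within the sampling budget

A = np.array([[2.0, 1.0, 1.0],
              [1.0, 3.0, 1.0],
              [1.0, 1.0, 2.0]])   # a positive matrix, so x -> Ax has a positive eigenvector
print(certify_positive_eigenvector(lambda x: A @ x, 3))       # expected: True
\end{verbatim}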
Note that a randomly chosen $x$ in Step 1 can eliminate multiple subsets $J$ in Step 2. So, it is natural to ask for the least number of vectors required to fulfill the $2^n-2$ inequalities in (\ref{eq:1.1}). This number corresponds to the minimum number of times the algorithm has to perform Steps 1 and 2. In this paper we show that one needs at least \[ n\choose \lceil n/2\rceil \] vectors and that this lower bound is sharp. Here $\lceil a\rceil$ denotes the smallest integer greater than or equal to $a$, and likewise $\lfloor a\rfloor $ denotes the largest integer less than or equal to $a$.
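To get a feeling for the size of this bound, the short Python snippet below (purely illustrative) compares it with the total number $2^n-2$ of inequalities in (\ref{eq:1.1}); a single well-chosen $x$ can fulfill several inequalities at once, which is why, for larger $n$, far fewer than $2^n-2$ iterations suffice.
\begin{verbatim}
import math

for n in range(2, 9):
    needed = math.comb(n, math.ceil(n / 2))   # sharp minimum number of iterations
    subsets = 2 ** n - 2                      # proper nonempty subsets J to certify
    print(n, needed, subsets)
\end{verbatim}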
\section{Connection with the illumination number} Recall that given a compact convex set $C$ with nonempty interior in $V$, a vector $v\in V$ {\em illuminates} $z\in\partial C$ if $z+\lambda v\in C^\circ$ for all $\lambda>0$ sufficiently small. A set $S$ is said to {\em illuminate} $C$ if for each $z\in\partial C$ there exists $v\in S$ such that $v$ illuminates $z$. The minimal size of illuminating set for $C$ is called the {\em illumination number} of $C$ and is denoted $i(C)$. There is a long-standing open conjecture which asserts that $i(C)\leq 2^n$ for every compact convex body in an $n$-dimensional vector space, see \cite[Chapter VI]{BMS} for further details. It is easy to show, see for example \cite[Lemma 4.1]{LLN}, that if $S$ illuminates every extreme point of $C$, then $S$ illuminates $C$.
To proceed we need to discuss the connection between illumination numbers and Theorem \ref{thm:npfthm}. Firstly, we note that if we let $\Sigma_0:=\{x\in \mathbb{R}^n_{>0}\colon x_n=1\}$, then $(\Sigma_0,d_H)$ is a metric space. Given an order-preserving homogeneous map $f\colon \mathbb{R}^n_{>0} \to \mathbb{R}^n_{>0}$ we can consider the {\em normalised} map $g_f\colon \Sigma_0\to \Sigma_0$ given by \[ g_f(x) := \frac{f(x)}{f(x)_n}\mbox{ \quad for }x\in \Sigma_0. \] The map $g_f$ is nonexpansive under $d_H$ on $\Sigma_0$. Moreover, $x\in \Sigma_0$ is a fixed point of $g_f$ if and only if $x$ is an eigenvector of $f$. Thus, if we let $\mathrm{Fix}(g_f) :=\{x\in\Sigma_0\colon g_f(x) =x\}$, then $\mathrm{Fix}(g_f)$ is nonempty and bounded in $(\Sigma_0,d_H)$ if and only if $\mathrm{E}(f)$ is nonempty and bounded in $(\mathbb{R}^n_{>0},d_H)$.
It is not hard to verify that the map $\mathrm{Log}\colon \Sigma_0\to V_0$ given by \[ \mathrm{Log}(x) := (\log x_1,\ldots,\log x_n)\mbox{\quad for }x=(x_1,\ldots,x_n)\in\Sigma_0 \]
is an isometry from $(\Sigma_0,d_H)$ onto $(V_0,\|\cdot\|_{\mathrm{v}})$, where $V_0 :=\{x\in\mathbb{R}^n\colon x_n =0\}$ and \[
\|x\|_{\mathrm{v}} := \max_i x_i -\min_i x_i \] is the {\em variation norm}.
It follows that the map $h\colon V_0\to V_0$ satisfying $h\circ \mathrm{Log} = \mathrm{Log}\circ g_f$ is nonexpansive under the variation norm, and $\mathrm{Fix}(h)$ is nonempty and bounded in $(V_0,\|\cdot\|_{\mathrm{v}})$ if and only if $\mathrm{Fix}(g_f)$ is nonempty and bounded in $(\Sigma_0,d_H)$.
In \cite[Theorem 3.4]{LLN} the following result concerning fixed point sets of nonexpansive maps on finite dimensional normed spaces was proved. \begin{theorem}\label{thm:fp} If $h\colon V\to V$ is a nonexpansive map on a finite dimensional normed space $V$, then $\mathrm{Fix}(h)$ is nonempty and bounded if and only if there exist $w^1,\ldots,w^m\in V$ such that $\{h(w^i)-w^i\colon i=1,\ldots,m\}$ illuminates the unit ball of $V$. \end{theorem}
For $n\geq 2$, the unit ball $B_\mathrm{v}$ of $(V_0,\|\cdot\|_{\mathrm{v}})$ has $2^n-2$ extreme points, which are given by
\begin{equation} \label{eq:varExtr} \mathrm{ext}(B_{\mathrm{v}}) :=\{v^I_+\colon \emptyset \neq I\subseteq \{1,\ldots,n-1\}\}\cup \{v^I_-\colon \emptyset \neq I\subseteq \{1,\ldots,n-1\}\}, \end{equation} where $(v^I_+)_i =1$ if $i\in I$ and $0$ otherwise, and $(v^I_-)_i =-1$ if $i\in I$ and $0$ otherwise. See \cite[\S 2]{Nu2} for details.
In \cite{LLN} the equivalence in Theorem \ref{thm:npfthm} was obtained by using Theorem \ref{thm:fp} and showing that there exist $x^1,\ldots,x^m\in \mathbb{R}^n_{>0}$ that fulfill the $2^n-2$ inequalities in (\ref{eq:1.1}) if and only if there exist $y^1,\ldots,y^m\in V_0$ that illuminate the $2^n-2$ extreme points of the unit ball $B_\mathrm{v}$. Thus, $i(B_{\mathrm{v}})$ provides a sharp lower bound for the number of times one needs to repeat Steps 1 and 2 in Algorithm \ref{alg}. In the next section we prove the following result concerning $i(B_{\mathrm{v}})$. \begin{theorem}\label{thm:main}
If $B_\mathrm{v}$ is the unit ball of $(V_0,\|\cdot\|_{\mathrm{v}})$ and $n\geq 2$, then \[ i(B_\mathrm{v}) = {n\choose\lceil n/2\rceil}. \]
\end{theorem}
\section{Proof of Theorem \ref{thm:main}}
Note that the map $(x_1,\ldots,x_n)\in V_0\mapsto (x_1,\ldots,x_{n-1})\in\mathbb{R}^{n-1}$ is an isometry from $(V_0,\|\cdot\|_{\mathrm{v}})$ onto
$(\mathbb{R}^{n-1},\|\cdot\|_H)$, where \[
\|x\|_H :=\left(\max_i x_i\right) \vee 0 - \left(\min_i x_i\right) \wedge 0. \]
Here $a\wedge b:= \min(a,b)$ and $a\vee b:=\max(a,b)$. Note also that if $B_H$ is the unit ball in $(\mathbb{R}^{n-1},\|\cdot\|_H)$, then \[ \mathrm{ext}(B_H) = \left(\{0,1\}^{n-1}\cup \{0,-1\}^{n-1}\right)\setminus\{(0,\ldots,0)\} \] and \[i(B_H) = i(B_{\mathrm{v}}).\] For notational simplicity we work with $B_H$ instead of $B_{\mathrm{v}}$.
The following two subsets, \[ E_+:= \{0,1\}^{n-1}\setminus\{(0,\ldots,0)\}\mbox{\quad and\quad} E_-:=\{0,-1\}^{n-1}\setminus\{(0,\ldots,0)\}, \] of $\mathrm{ext}(B_H)$ play a key role in the argument. On $\mathrm{ext}(B_H)$ we have the usual partial ordering $x\leq y $ if $y-x\in\mathbb{R}^{n-1}_{\geq 0}$, which gives rise to two finite partially ordered sets $(E_+,\leq)$ and $(E_-,\leq)$.
Recall that a subset $\mathcal{A}$ of a partially ordered set $(P,\preceq)$ is called an {\em antichain} if $x,y\in \mathcal{A}$ and $x\preceq y$ implies $x=y$. A {\em chain} $\mathcal{C}$ in $(P,\preceq)$ is a totally ordered subset, i.e., for each $x,y\in \mathcal{C}$ we have that either $x\preceq y$ or $y\preceq x$. The {\em length} of a chain $\mathcal{C}$ is the number of distinct elements in $\mathcal{C}$.
\begin{lemma}\label{lem:antichain} Let $\mathcal{A}$ be an antichain in $(E_+,\leq)$ or in $(E_-,\leq)$. If $x\neq y$ in $\mathcal{A}$ are illuminated by $v$ and $w$, respectively, then $v\neq w$. \end{lemma} \begin{proof} Suppose that $\mathcal{A}$ is an antichain in $(E_+,\leq)$ and $x\neq y$ are in $\mathcal{A}$. Then there exist $i\neq j$ such that $0=x_i< y_i =1$ and $0=y_j<x_j =1$.
Now suppose by way of contradiction that $z$ illuminates $x$ and $y$. So, $\|x+\lambda z\|_H<1$ and $\|y+\lambda z\|_H<1$ for all $\lambda>0$ sufficiently small. Suppose first that $z_i\leq z_j$. Then for $\lambda>0$ small, \[
1+\lambda z_j =x_j +\lambda z_j \leq \|x+\lambda z\|_H<1, \] and hence $z_j<0$. So, $z_i\leq z_j<0$. But then \[
1+ \lambda(z_j-z_i) = x_j+\lambda z_j - \lambda z_i \leq \|x+\lambda z\|_H<1, \]
which is impossible. On the other hand, if $z_j\leq z_i$, then $1+\lambda z_i \leq \|y+\lambda z\|_H<1$, so that $z_j\leq z_i<0$. But then \[
1+ \lambda(z_i-z_j) = y_i+\lambda z_i- \lambda z_j \leq \|y+\lambda z\|_H<1, \] which again is impossible. Thus, $z$ cannot illuminate both $x$ and $y$.
The argument for the case where $\mathcal{A}$ is an antichain in $(E_-,\leq)$ is similar. \end{proof}
\begin{lemma} \label{lem:+-1} If $x,y\in \mathrm{ext}(B_H)$ are such that $x_i=1$ and $y_i=-1$ for some $i$, then one needs two distinct vectors to illuminate $x$ and $y$. \end{lemma} \begin{proof}
Suppose $w$ illuminates $x$ and $y$. Then $1+\lambda w_i = x_i +\lambda w_i \leq \|x+\lambda w\|_H<1$ for all $\lambda>0$ sufficiently small, and hence $w_i<0$. But also
$1 - \lambda w_i = -( y_i +\lambda w_i) \leq \|y+\lambda w\|_H< 1$ for all $\lambda>0$ sufficiently small. This implies that $w_i>0$, which is impossible. Thus, one needs at least two vectors to illuminate $x$ and $y$. \end{proof}
\begin{corollary}\label{lowerbnd}
If $B_H$ is the unit ball of $(\mathbb{R}^{n-1},\|\cdot\|_H)$ and $n\geq 2$, then \[ i(B_H) \geq {n\choose\lceil n/2\rceil}. \] \end{corollary} \begin{proof} For $1\leq k,m\leq n-1$ define the antichains $\mathcal{A}_+(k):= \{ x\in E_+\colon \sum_i x_i = k\}$ and $\mathcal{A}_-(m):= \{ x\in E_-\colon \sum_i x_i = -m\}$. If $n>1$ is odd, then we can take $k := (n-1)/2$ and $m:=(n+1)/2$ and conclude from Lemmas \ref{lem:antichain} and \ref{lem:+-1} that we need at least \[ {n-1\choose \frac{n-1}{2}} + {n-1\choose \frac{n+1}{2}} = {n\choose \lceil \frac{n}{2}\rceil} \] distinct vectors to illuminate the extreme points in $\mathcal{A}_+(k)\cup \mathcal{A}_-(m)$, as for each $x\in\mathcal{A}_+(k)$ and $y\in\mathcal{A}_-(m)$ there exists an $i$ such that $x_i =1$ and $y_i=-1$.
Likewise if $n>1$ is even, we can take $k = m= \lceil \frac{n-1}{2}\rceil$, and deduce from Lemmas \ref{lem:antichain} and \ref{lem:+-1} that we need at least \[ {n-1\choose \lceil \frac{n-1}{2}\rceil} + {n-1\choose \lceil\frac{n-1}{2}\rceil} = {n-1\choose \lfloor \frac{n-1}{2}\rfloor} + {n-1\choose \lceil\frac{n-1}{2}\rceil} ={n\choose \frac{n}{2}} \] distinct vectors to illuminate the extreme points in $\mathcal{A}_+(k)\cup \mathcal{A}_-(m)$.
This completes the proof. \end{proof}
\begin{lemma}\label{lem:chains} If $\mathcal{C}$ is a chain in $(E_+,\leq)$ or in $(E_-,\leq)$, then there exists $w$ that illuminates each element of $\mathcal{C}$. \end{lemma} \begin{proof} Let $\mathcal{C}$ be a chain in $(E_+,\leq)$ or in $(E_-,\leq)$.
We call a chain $c_1\leq c_2\leq \ldots\leq c_m$ in $(E_+,\leq)$ or in $(E_-,\leq)$ maximal if it has length $n-1$. The chain $\mathcal{C}$ is contained in a maximal chain. As each coordinate permutation is an isometry of $(\mathbb{R}^{n-1},\|\cdot\|_H)$ and the map $x\mapsto -x$ is an isometry of $(\mathbb{R}^{n-1},\|\cdot\|_H)$, we may assume without loss of generality that $\mathcal{C}$ is contained in the maximal chain, \[ \mathcal{C}^*\colon (1,0,0,\ldots,0)\leq (1,1,0,\ldots, 0)\leq \ldots\leq (1,1,\ldots, 1,0)\leq (1,1,1,\ldots,1). \] Let $w\in\mathbb{R}^{n-1}$ be such that $w_1<w_2<\ldots<w_{n-1}<0$. Now if $x$ is the $k$-th element in the maximal chain and $k<n-1$, then for all $\lambda>0$ sufficiently small \[
\|x+\lambda w\|_H = \left(\max_i x_i+\lambda w_i \right) \vee 0 - \left(\min_i x_i+\lambda w_i \right) \wedge 0 = 1+\lambda w_k - \lambda w_{k+1} <1. \]
On the other hand, if $x=(1,1,\ldots,1)$, then clearly $\|x+\lambda w\|_H =1+\lambda w_{n-1}<1$ for all $\lambda>0$ small. Thus $w$ illuminates each element of $\mathcal{C}^*$ and we are done. \end{proof}
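The choice of $w$ in this proof is easy to check numerically; the snippet below (an informal Python illustration, not part of the argument) evaluates $\|x+\lambda w\|_H$ for each element of the maximal chain $\mathcal{C}^*$ with $n-1=3$, $w=(-3,-2,-1)$ and $\lambda=0.01$.
\begin{verbatim}
def norm_H(y):
    # ||y||_H = (max_i y_i) v 0  -  (min_i y_i) ^ 0  on R^{n-1}
    return max(max(y), 0.0) - min(min(y), 0.0)

chain = [(1, 0, 0), (1, 1, 0), (1, 1, 1)]   # the maximal chain C* for n - 1 = 3
w, lam = (-3.0, -2.0, -1.0), 0.01           # w_1 < w_2 < w_3 < 0
for x in chain:
    y = [xi + lam * wi for xi, wi in zip(x, w)]
    print(x, norm_H(y))   # every printed norm is < 1, so w illuminates each element
\end{verbatim}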
To proceed we need to recall a few classical results in the combinatorics of finite partially ordered sets, see \cite[Sections 9.1 and 9.2]{Ju}. Firstly, we recall Dilworth's Theorem, which says that if the maximum size of an antichain in a finite partially ordered set $(P,\preceq)$ is $r$, then $P$ can be partitioned into $r$ disjoint chains. In the case where the partially ordered set is $(\{0,1\}^d,\leq)$, one can combine this result with Sperner's Theorem, which says that the maximum size of antichain in $(\{0,1\}^d,\leq)$ is ${d\choose \lceil d/2\rceil}$. Thus, $(\{0,1\}^d,\leq)$ can be partitioned into ${d\choose \lceil d/2\rceil}$ disjoint chains.
To obtain our result we need some more detailed information about the partitions. In particular, we need a result by De Bruijn,Tengbergen, Kruyswijk \cite{dBTK} concerning symmetric chains, see also \cite[Theorem 9.3]{Ju}. A chain $x^1\leq \ldots\leq x^k$ in $(\{0,1\}^{d},\leq)$ is said to be {\em symmetric} if \begin{enumerate}[(a)] \item $(\sum_{j=1}^d x^m_j ) + 1= \sum_{j=1}^d x^{m+1}_j$ for all $1\leq m<k$, i.e., $x^{m+1}$ is an immediate successor of $x^m$, \item $\sum_{j=1}^d x^k_j = d- \sum_{j=1}^d x^{1}_j$. \end{enumerate} \begin{theorem}[De Bruijn,Tengbergen, Kruyswijk] \label{DBTK} The poset $(\{0,1\}^d,\leq)$ can be partitioned into ${d\choose \lceil d/2\rceil}$ disjoint symmetric chains. \end{theorem}
Let us now prove the main result of the paper. \begin{proof}[Proof of Theorem \ref{thm:main}] First recall that by Corollary \ref{lowerbnd} it suffices to show that $i(B_H)\leq {n\choose \lceil\frac{n}{2}\rceil}$, as $i(B_{\mathrm{v}})=i(B_H)$. In other words, we only need to show that $\mathrm{ext}(B_H)$ can be illuminated by ${n\choose \lceil\frac{n}{2}\rceil}$ vectors.
There are two cases to consider: $n\geq 2$ even, and $n\geq 2$ odd.
Let us first consider the case where $n\geq 2$ is even. By Dilworth's Theorem and Sperner's Theorem we know that the partially ordered set $(\{0,1\}^{n-1},\leq)$ can be partitioned into ${n-1\choose \lceil \frac{n-1}{2}\rceil}$ disjoint chains. This implies that each of the partially ordered sets $(E_+,\leq)$ and $(E_-,\leq)$ can be partitioned into ${n-1\choose \lceil \frac{n-1}{2}\rceil}$ disjoint chains. It now follows from Lemma \ref{lem:chains} that we need at most \[ {n-1\choose \lceil \frac{n-1}{2}\rceil}+{n-1\choose \lceil \frac{n-1}{2}\rceil}= {n-1\choose \lfloor \frac{n-1}{2}\rfloor}+{n-1\choose \lceil \frac{n-1}{2}\rceil} = {n\choose \frac{n}{2}} \] distinct vectors to illuminate $\mathrm{ext}(B_H)$. This implies that $i(B_{\mathrm{v}})= i(B_H)\leq{n\choose \frac{n}{2}}$.
Now suppose that $n\geq 2$ is odd. By Theorem \ref{DBTK} we know that $(\{0,1\}^{n-1},\leq)$ can be partitioned into ${n-1\choose \frac{n-1}{2}}$ disjoint symmetric chains.
Let us consider such a symmetric chain decomposition, and let \[\mathcal{A}_k:=\{x\in \{0,1\}^{n-1}\colon \mbox{$\sum_i x_i =k$}\},\] which is an antichain of size ${n-1\choose k}$. Each element of $\mathcal{A}_{(n+1)/2}$ is contained in a distinct symmetric chain, and each of these chain contains an $x\in \{0,1\}^{n-1}$ with $\sum_i x_i =(n-1)/2$. Thus, the symmetric chain decomposition of $(\{0,1\}^{n-1},\leq)$ consists of \[ {n-1 \choose \frac{n+1}{2}} \] chains containing a vector $x$ with $\sum_i x_i =(n+1)/2$, and \[ {n-1\choose \frac{n-1}{2}} - {n-1 \choose \frac{n+1}{2}} \] chains consisting of a single vector $x$ with $\sum_i x_i =(n-1)/2$.
By deleting $(0,0,\ldots,0)$ from $\{0,1\}^{n-1}$ we obtain a partition of $(E_+,\leq )$ into disjoint chains. Let $\mathcal{S}$ be the set of vectors in $E_+$ which form a singleton chain and $\sum_i x_i = (n-1)/2$. So, \[
|\mathcal{S}| = {n-1\choose \frac{n-1}{2}} - {n-1 \choose \frac{n+1}{2}}. \]
Now pair each $x\in E_+$ with $x'\in E_-$, where $x'_i=0$ if $x_i=1$, and $x'_i=-1$ if $x_i=0$. In this way we obtain a partition of $(E_-,\leq)$ into disjoint chains with $|\mathcal{S}|$ chains consisting of a single vector. In other words, for each $x\in \mathcal{S}$ we have that $x'\in E_-$ forms a singleton chain in the chain decomposition of $(E_-,\leq)$.
We know from Lemma \ref{lem:chains} that we can illuminate the ${n-1\choose \frac{n+1}{2}}$ chains in $(E_+,\leq)$ containing a vector $x$ with $\sum_i x_i = (n+1)/2$ using ${n-1\choose \frac{n+1}{2}}$ vectors. Likewise, we can illuminate the corresponding ${n-1\choose \frac{n+1}{2}}$ chains in $(E_-,\leq)$ with ${n-1\choose \frac{n+1}{2}}$ vectors. So, it remains to illuminate the singleton chains in $(E_+,\leq)$ and $(E_-,\leq)$.
Note that if we can illuminate each pair $\{x,x'\}$, with $x\in\mathcal{S}$ and $x'$ the corresponding vector in $E_-$, by a single vector, then we need at most \[ 2{n-1\choose \frac{n+1}{2}}+ {n-1\choose \frac{n-1}{2}} - {n-1\choose \frac{n+1}{2}} = {n-1\choose \frac{n-1}{2}} +{n-1\choose \frac{n+1}{2}} = {n\choose \lceil \frac{n}{2}\rceil} \] vectors to illuminate $\mathrm{ext}(B_H)$, and hence $i(B_{\mathrm{v}}) = i(B_H) \leq {n\choose \lceil \frac{n}{2}\rceil}$ if $n\geq 2$ is odd.
To see how this can be done we consider such a pair $\{x,x'\}$ with $x\in\mathcal{S}$ and let $I:=\{i\colon x_i =1\}$ and $J:=\{i\colon x_i=0\}$. So, $I=\{i\colon x'_i =0\}$ and $J =\{i\colon x'_i=-1\}$. Now let $w\in\mathbb{R}^{n-1}$ be such that $w_i<0$ for all $i\in I$ and $w_i>0$ for all $i\in J$. Then for all $\lambda>0$ sufficiently small, \[
\|x+\lambda w\|_H = \max_{i\in I} (1+\lambda w_i) - 0 <1 \] and \[
\|x'+\lambda w\|_H = 0 - \min_{i\in J} (-1 +\lambda w_i)<1. \] This shows that $w$ illuminates $x$ and $x'$, which completes the proof. \end{proof}
\end{document}
\begin{document}
\title{Hodge theorem for the logarithmic de Rham complex via derived intersections} In a beautiful paper \cite{DelIll} Deligne and Illusie proved the degeneration of the Hodge-to-de Rham spectral sequence using positive characteristic methods. In a recent paper \cite{AriCalHab2} Arinkin, C\u ald\u araru and the author of this paper gave a geometric interpretation of the problem of Deligne-Illusie, showing that the triviality of a certain line bundle on a derived scheme implies the Deligne-Illusie result. In the present paper we generalize these ideas to logarithmic schemes and, using the theory of twisted derived intersections of logarithmic schemes, we obtain the Hodge theorem for the logarithmic de Rham complex. \section{Introduction}
\paragraph Let $Y$ be a smooth proper variety over an algebraically closed field $k$ of characteristic 0. The algebraic de Rham complex is defined as the complex \[\Omega^\sbt_Y:=0\rightarrow {\mathscr O}_Y\xrightarrow{d} \Omega^1_{Y/k}\xrightarrow{d}...\] where $d$ is the usual differential on the {\em algebraic} forms. The de Rham cohomology of $Y$ is defined as the hypercohomology of the de Rham complex, \[H^*_{{\mathsf dR}}(Y)=R^*\Gamma(Y,\Omega^\sbt_Y).\] The Hodge-to-de Rham spectral sequence \[{}^1E_{pq}=H^p(Y,\Omega^q_Y)\Rightarrow H^{p+q}_{{\mathsf dR}}(Y)\] is given by the stupid filtration on the de Rham complex whose associated graded terms are the $\Omega^q_Y$.
\paragraph In their celebrated paper \cite{DelIll}, Deligne and Illusie proved the degeneration of the Hodge-to-de Rham spectral sequence holds in positive characteristics by showing the following result.
\begin{Theorem}\cite{DelIll} Let $X$ be a smooth proper scheme over a perfect field $k$ of positive characteristic $p > \dim X$. Assume that $X$ lifts to the ring $W_2(k)$ of second Witt vectors of $k$. Then the Hodge-to-de Rham spectral sequence for $X$ degenerates at ${}^1E$. \end{Theorem}
Then they showed that the corresponding result in characteristic 0 follows from a standard reduction argument.
\paragraph Similar results can be obtained in the logarithmic setting. Consider a smooth proper scheme $X$ over a perfect field $k$ of characteristic $p>\dim X$ and a reduced normal crossing divisor $D$ on $X$. We can find local coordinates $x_1,...,x_n$ around each point of the divisor so that in an \'etale neighborhood of that point the divisor is cut out by $x_1\cdot...\cdot x_k$ for some $k\leq n$. The logarithmic 1-forms are generated locally by the symbols \[\frac{dx_1}{x_1},...,\frac{dx_k}{x_k},dx_{k+1},...,dx_n\] and the sheaf of logarithmic 1-forms is denoted by $\Omega^1_X(\log D)$. The sheaf $\Omega^q_X(\log D)$ of logarithmic $q$-forms is defined as $\wedge^q\Omega_X^1(\log D)$. The differential $d$ of the {\em meromorphic} de Rham complex maps logarithmic forms to logarithmic forms, and hence we define the logarithmic de Rham complex $\Omega^\sbt_X(\log D)$ as the subcomplex of the meromorphic de Rham complex consisting of logarithmic forms. The stupid filtration of this complex gives rise to a Hodge-to-de Rham spectral sequence \[{}^1E_{pq}=H^p(X,\Omega^q_X(\log D))\Rightarrow R^{p+q}\Gamma(X,\Omega^\sbt_X(\log D)).\]
In \cite{Kat}, Kato generalized the result of Deligne-Illusie obtaining the following result.
\begin{Theorem}\cite{Kat} \label{thm:Kat} Assume that the pair $(X,D)$ lifts to the ring $W_2(k)$. Then the Hodge-to-de Rham spectral sequence of the logarithmic de Rham complex degenerates at ${}^1E$. \end{Theorem}
As before the corresponding result in characteristic 0 follows from a standard reduction argument. (For a detailed treatment, and for applications to vanishing theorems see \cite{EsnVie}.)
\paragraph In a recent paper \cite{AriCalHab2}, Arinkin, C\u ald\u araru and the author of the present paper recast the problem of the degeneration of the Hodge-to-de Rham spectral sequence as a derived self-intersection problem. They considered the embedding of the Frobenius twist $X'$ of $X$ into its cotangent bundle $T^*X'$ as the zero section. The sheaf of crystalline differential operators $D_X$ on $X$ can be regarded as a sheaf $D$ of Azumaya algebras on $T^*X'$. This Azumaya algebra splits on the zero section $X'\rightarrow T^*X'$ giving rise to an embedding of Azumaya spaces $X'\rightarrow (T^*X',D)$. The intersection problem given by a smooth subvariety $Y$ inside an Azumaya space ${\overline{S}}=(S,{\mathcal A})$ is called a twisted intersection problem in \cite{AriCalHab2}. The main geometric observation in \cite{AriCalHab2} is that there exists a line bundle on the ordinary derived self-intersection of $Y$ inside $S$ measuring the difference between the twisted and the ordinary derived self-intersections. In the case of $X'\rightarrow (T^*X',D)$ this line bundle is given by the dual of $F_*\Omega^\sbt_X$, the Frobenius pushforward of the de Rham complex of $X$.
Using the theory of twisted derived self-intersections the authors obtain the following result giving a geometric interpretation of the result of Deligne-Illusie (see also \cite{OguVol} for the relation between the statements (1) and (2)).
\begin{Theorem}\cite{AriCalHab2} \label{thm:AriCalHab2} Let $X$ be a smooth scheme over a perfect field $k$ of characteristic $p > \dim X$. Then the following statements are equivalent. \begin{itemize} \item[(1)] $X$ lifts to $W_2(k)$. \item[(2)] $D$ splits on the first infinitesimal neighborhood of $X'$ in $T^*X'$. \item[(3)] The associated line bundle is trivial. \item[(4)] $F_*\Omega^\sbt_X$ is formal in $D(X')$ (meaning that it is quasi-isomorphic to the direct sum of its cohomology sheaves). \end{itemize} \end{Theorem}
\paragraph In the present paper we generalize the result of \cite{AriCalHab2} to the logarithmic setting. We consider the logarithmic scheme $(X,D)$ where $X$ is a smooth proper scheme over a perfect field $k$ of characteristic $p>\dim X$ and $D$ is a reduced normal crossing divisor on $X$. As the category of quasi-coherent sheaves on $(X,D)$ we consider the quasi-coherent parabolic sheaves on $(X,D)$ (see \cite{Yok}). Parabolic sheaves were first introduced by Mehta and Seshadri (\cite{MehSes}, \cite{Ses}) on a projective curve with finitely many marked points as locally-free sheaves $E$ with filtrations \[0=F_k(E)\hookrightarrow F_{k-1}(E)\hookrightarrow...\hookrightarrow F_0(E)=E\] at every marked point in order to generalize to non-projective curves the correspondence between stable sheaves of degree 0 and irreducible unitary representations of the topological fundamental group.
\paragraph The sheaf of crystalline logarithmic differential operators $D_X(\log D)$ is not an Azumaya algebra over its center. On the other hand one can equip $D_X(\log D)$ with a filtration so that the corresponding parabolic sheaf is a parabolic Azumaya algebra over its center. (For a treatment of parabolic Azumaya algebras see \cite{KulLie}.) The center of the Azumaya algebra can be identified with the structure sheaf of the logarithmic cotangent bundle $T^*X'(\log D')$ of the Frobenius twist $(X',D')$ of $(X,D)$ equipped with the divisor $\pi^*D'$ where $\pi:T^*X'(\log D')\rightarrow X'$ is the bundle map. Therefore the parabolic sheaf of crystalline logarithmic differential operators can be regarded as a parabolic sheaf $D(\log D)_*$ over $(T^*X'(\log D'),\pi^*D')$. As before we consider the embeddings \begin{itemize} \item $(X',D')\rightarrow (T^*X'(\log D'),\pi^*D')$ and \item $(X',D')\rightarrow (T^*X'(\log D'),\pi^*D',D(\log D)_*)$. \end{itemize} The difference between the derived self-intersections of the embeddings is measured by a parabolic line bundle, which is the dual of the Frobenius pushforward of the logarithmic de Rham complex $F_*\Omega^\sbt_X(\log D)$ equipped with the filtration given by $D$.
We generalize the theory of twisted derived self-intersections to the logarithmic setting. Our main result is the generalization of Theorem \ref{thm:AriCalHab2}.
\begin{Theorem} \label{thm:equi1} Let $X$ be a smooth variety over a perfect field of characteristic $p>\dim X$, with a reduced normal crossing divisor $D$. Then, the following statements are equivalent. \begin{itemize} \item[(1)] The logarithmic scheme $(X,D)$ lifts to $W_2(k)$. \item[(2)] The associated line bundle is trivial. \item[(3)] The parabolic sheaf of algebras $D(\log D)_*$ splits on the first infinitesimal neighborhood of $(X',D')$ inside $(T^*X'(\log D'),\pi^*D')$. \item[(4)] The complex $F_*\Omega^\sbt_X(\log D)_*$ is quasi-isomorphic to a formal parabolic sheaf equipped with the trivial parabolic structure. \end{itemize} \end{Theorem}
\noindent As an easy corollary we obtain Theorem \ref{thm:Kat}.
\paragraph \label{par:Konque} The main application we have in mind is answering a question of Kontsevich. Let $f:X\rightarrow \field{P}^1$ be a rational function, so that $f^{-1}(\field{A}^1)$ is a smooth complex algebraic variety and $f^{-1}(\infty)$ is a normal crossing divisor. In the above setting, Kontsevich introduced a family of complexes $(\Omega^\sbt_X,ud+v\,df\wedge)$, $(u,v)\in \mathbb{C}^2$, generalizing the notion of a twisted de Rham complex. He conjectured that the hypercohomology spaces are independent of the choices of $u$ and $v$. In a recent preprint \cite{KatKonPan} Katzarkov, Kontsevich and Pantev verify this conjecture in the case of a reduced normal crossing divisor $D$, and in \cite{EsnSabYu} a complete proof is given for any normal crossing divisor. We believe that our methods can be generalized to give another proof of this conjecture of Kontsevich.
\paragraph We remark that there is another approach in the literature to deal with the sheaf of differential operators in the logarithmic setting. In \cite{Sch} Schepler extends the theory of Ogus and Vologodsky (\cite{OguVol}) to the case of logarithmic schemes. He uses the theory of indexed modules and algebras and he shows that the sheaf of differential operators forms an indexed Azumaya algebra. These results were generalized by Ohkawa (\cite{Ohk1}, \cite{Ohk2}) to the ring of differential operators of higher level. We choose not to work with indexed modules and algebras because of the lack of functoriality. It would be interesting to understand the results of \cite{Sch}, \cite{Ohk1} and \cite{Ohk2} from our approach.
\paragraph The paper is organized as follows. In Section \ref{sec:log} we collect some basic facts about logarithmic schemes and parabolic sheaves. In Section \ref{sec:charp} we introduce the parabolic sheaf of crystalline differential operators and the parabolic logarithmic de Rham complex. We show the Azumaya property of the parabolic sheaf of crystalline differential operators and that there exists a Koszul duality between the parabolic sheaf of crystalline differential operators and the parabolic logarithmic de Rham complex. In Section \ref{sec:derint} we summarize the theory of twisted derived intersections, which we briefly expand to the case of logarithmic schemes in Section \ref{sec:logderint}. We conclude the paper with Section \ref{sec:mainthm} where we prove our main theorems Theorem \ref{thm:equi1} and Theorem \ref{thm:Kat}.
\paragraph \textbf{Acknowledgements.} The author expresses his thanks to Dima Arinkin, Andrei C\u ald\u araru and Tony Pantev for useful conversations. \section{Background on logarithmic schemes} \label{sec:log}
In this section we investigate the notion of the category of (quasi-)coherent sheaves on logarithmic schemes $(X,D)$: the category of (quasi-)coherent parabolic sheaves. We remark that we could have taken an alternative path, looking at (quasi-)coherent sheaves on the infinite root stack (see \cite{BorVis}, \cite{TalVis}). We follow the notations of \cite{Yok}.
\paragraph We regard $\mathbf{R}$ as the category whose objects are real numbers and whose morphism spaces between two objects $\alpha\in \mathbf{R}$ and $\beta\in \mathbf{R}$ are defined as \[Mor(\alpha,\beta)=\begin{cases} \{i^{\alpha,\beta}\} & \mbox{if }\alpha\geq \beta,\\ \emptyset & \mbox{otherwise}\end{cases}.\]
\begin{Definition} An $\mathbf{R}$-filtered ${\mathscr O}_X$-module is a covariant functor from the category $\mathbf{R}$ to the category of ${\mathscr O}_X$-modules. For an $\mathbf{R}$-filtered ${\mathscr O}_X$-module $E$ we denote the ${\mathscr O}_X$-module $E(\alpha)$ by $E_\alpha$, and the ${\mathscr O}_X$-linear homomorphisms $E(i^{\alpha,\beta})$ by $i_E^{\alpha,\beta}$. For an $\mathbf{R}$-filtered ${\mathscr O}_X$-module $E$ we define the $\mathbf{R}$-filtered ${\mathscr O}_X$-module $E[\alpha]$ as $E[\alpha]_{\beta}=E_{\alpha+\beta}$ with morphisms $i^{\beta,\gamma}_{E[\alpha]}=i_E^{\beta+\alpha,\gamma+\alpha}$. In the sequel we denote $\mathbf{R}$-filtered ${\mathscr O}_X$-modules by $E_*$. \end{Definition}
\begin{Definition} For an $\mathbf{R}$-filtered ${\mathscr O}_X$-module $E_*$ and for an "ordinary" ${\mathscr O}_X$-module $F$ we define their tensor product as $(E_*\otimes F)_\alpha=E_\alpha\otimes F$ with homomorphisms $i^{\alpha,\beta}_{E_*\otimes F}=i^{\alpha,\beta}_{E_*}\otimes \id_F$. \end{Definition}
\noindent We are ready to define the category of (quasi-)coherent parabolic sheaves with respect to an effective Cartier divisor $D$.
\begin{Definition} A (quasi-)coherent {\em parabolic} sheaf is an $\mathbf{R}$-filtered ${\mathscr O}_X$-module $E_*$ together with an isomorphism of $\mathbf{R}$-filtered ${\mathscr O}_X$-modules \[E_*\otimes {\mathscr O}_X(-D)\cong E_*[1].\] Parabolic morphisms between parabolic sheaves $E_*$ and $E'_*$ are natural transformations $E_*\rightarrow E'_*$. We denote the set of parabolic morphisms by $Hom(E_*, E'_*)$. \end{Definition}
\paragraph[Remark:]\label{rmk:01} It is enough to know $E_\alpha$ and $i_E^{\alpha,\beta}$ for $\alpha,\beta\in [0,1]$ to determine the parabolic sheaf $E_*$. We say that a parabolic sheaf has weights $\alpha=(\alpha_0,...,\alpha_k)$ \[0=\alpha_0<\alpha_1<...<\alpha_k<1\] if $E_\beta=E_\gamma$ and $i_E^{\beta,\gamma}=\id$ for all $\beta, \gamma\in [0,1]$ satisfying $\alpha_i< \beta,\gamma\leq \alpha_{i+1}$.
For us, there are natural choices for $k$ and the $\alpha_i$, and thus, we define the category of (quasi-)coherent sheaves on $(X,D)$ as follows.
\begin{Definition} The category of (quasi-)coherent sheaves on $(X,D)$ is the category whose objects are parabolic sheaves with $k=p$ and weights $\alpha_i=\frac{i}{p+1}$, and the Hom-spaces are the sets of parabolic morphisms. These categories will be denoted by $Coh(X,D)$ or $QCoh(X,D)$ respectively. \end{Definition}
\paragraph The category $QCoh(X,D)$ is abelian; the kernel and cokernel of a morphism can be defined pointwise. It has enough injectives \cite{Yok}, and we denote the corresponding derived category by ${\mathbf D}(X,D)$.
\paragraph Consider a morphism $f:X\rightarrow Y$ and an effective Cartier divisor $D$ on $Y$ such that $f^*D$ is an effective divisor on $X$. This data gives rise to a morphism of logarithmic schemes $ (X,f^*D)\rightarrow (Y,D)$. We abuse notation and denote the induced map by $f$ as well. We define the pushforward and pullback along $f$ as follows. For any parabolic sheaf $E_*$ on $X$ its pushforward $f_*E_*$ is defined as the parabolic sheaf where $(f_*E)_\alpha$ are the pushforward of the sheaves $E_\alpha$ along $f$ and the morphisms $i^{\alpha,\beta}_{f_*E}$ are the morphisms $f_*i^{\alpha,\beta}_E$. Indeed we obtain a parabolic sheaf on $(Y,D)$, by the projection formula, we have \[f_*(E_*[1])=f_*(E_*\otimes {\mathscr O}_X(-f^*D))=f_*E_*\otimes {\mathscr O}_Y(-D)=f_*E_*[1].\] Similarly, for any parabolic sheaf $E'_*$ on $Y$ its pullback $f^*E'_*$, is defined as the parabolic sheaf where $(f^*E')_\alpha$ is the pullback of $E'_\alpha$ along $f$ and the morphisms $i^{\alpha, \beta}_{f^*E'}$ are the morphisms $f^*i^{\alpha,\beta}_{E'}$. Again, we obtain a parabolic sheaf on $(X,f^*D)$, we have \[f^*(E'_*[1])=f^*(E'_*\otimes {\mathscr O}_Y(-D))=f^*E'_*\otimes {\mathscr O}_X(-f^*D)=f^*E'_*[1].\] The pushforward and pullback functors descend to the derived categories and by abuse of notation we denote the corresponding maps by $f_*$ and $f^*$ as well.
\paragraph[Remark:]In general, the pushforward and the pullback of a parabolic sheaf along a morphism $f:(X,D_1)\rightarrow (Y,D_2)$ are more complicated than the constructions above. Our case is special since we have $f^*D_2=D_1$.
\paragraph The categories $Coh(X,D)$ and $QCoh(X,D)$ are equipped with natural monoidal structures. In order to define the monoidal structures we take a quick detour.
An important subcategory of $QCoh(X,D)$ is the category of parabolic bundles (\cite{Ses}, \cite{MehSes}).
\begin{Definition} A {\em parabolic bundle} is a triple $(E,F_*,\alpha_*)$ where $E$ is a locally-free sheaf on $X$, $F_*$ is a filtration of $E$ by coherent sheaves on $X$ \[F_k(E)=E\otimes {\mathscr O}_X(-D)\hookrightarrow F_{k-1}(E)\hookrightarrow F_{k-2}(E)\hookrightarrow...\hookrightarrow F_0(E)=E\] together with a sequence of weights $\alpha$ satisfying \[0=\alpha_0<\alpha_1<...<\alpha_k<1.\] \end{Definition}
\noindent The sequence of weights determines a family of coherent sheaves $E_x$ for $0\leq x\leq 1$ defined as \[E_0=E\quad\mbox{and}\quad E_x=F_i(E)\] for $\alpha_i< x\leq \alpha_{i+1}$. A morphism between parabolic bundles $(E,F_*,\alpha_*)$ and $(E',F'_*,\alpha'_*)$ is a morphism of ${\mathscr O}_X$-modules $\varphi:E\rightarrow E'$ so that $\varphi(E_x)\subseteq E'_x$ for any $x\in [0,1]$. By Remark \ref{rmk:01} parabolic bundles give rise to parabolic sheaves and morphisms between parabolic bundles are exactly the parabolic morphisms between the corresponding parabolic sheaves.
\paragraph Consider the morphism \[\psi:\Pic X\times \mathbf{Z}\left[\frac{1}{p}\right]\rightarrow Coh(X,D)\] mapping the pair $(L,a)$ to the parabolic bundle $(L,F_*,\alpha_*)$ where \[L_x=\begin{cases}L & \mbox{if }x\leq a',\\ L\otimes {\mathscr O}_X(-D) & \mbox{if }a'<x\leq 1.\end{cases}\] Here $a'$ denotes the residue of $a$ modulo 1. Parabolic bundles of this form are the parabolic line bundles. The tensor product of parabolic sheaves is defined (for parabolic line bundles) to respect the group structure coming from the natural group structure on $\Pic(X)\times \mathbf{Z}\left[\frac{1}{p}\right]$ given by \[(L_1,a_1)\cdot (L_2,a_2)=(L_1\otimes L_2,a_1+a_2).\] The unit element of the tensor product is given by the parabolic sheaf $\psi({\mathscr O}_X,0)$, where \[\psi({\mathscr O}_X,0)_x=\begin{cases}{\mathscr O}_X & \mbox{if }x=0,\\ {\mathscr O}_X(-D) & \mbox{if }0<x\leq 1.\end{cases}\] We define the {\em structure sheaf} of the logarithmic scheme $(X,D)$ to be the parabolic sheaf $\psi({\mathscr O}_X,0)$ and in the sequel we denote it by ${\mathscr O}_{(X,D)}$.
\begin{Definition} \label{log:hom} For two parabolic sheaves $E_*$, $F_*$ we define the sheaf Hom functor as \[\underline{\mathsf{Hom}}_x(E_*,F_*):={\mathsf{Hom}}(E_*,F_*[x]).\] \end{Definition}
In particular, for any parabolic line bundle $L$, its parabolic sheaf of endomorphisms is isomorphic to ${\mathscr O}_{(X,D)}$. The sheaf Hom functor and the tensor product satisfy the usual adjoint property (for more details, see \cite{Yok}) giving rise to natural monoidal structures on $Coh(X,D)$ and $QCoh(X,D)$. We remark that the monoidal structure descends to the derived category ${\mathbf D}(X,D)$. \section{Background on schemes over fields of positive characteristics} \label{sec:charp}
In this section we collect basic facts about schemes over fields of positive characteristics. We review the notion of logarithmic tangent sheaf, logarithmic $q$-forms and the crystalline sheaf of logarithmic differential operators. We show that the parabolic sheaf of crystalline logarithmic differential operators is an Azumaya algebra over its center. We conclude the section by showing that there exists a Koszul duality between the parabolic sheaf of crystalline logarithmic differential operators and the parabolic logarithmic de Rham complex.
\paragraph Let $X$ be a smooth scheme over a perfect field $k$ of characteristic $p$. The absolute Frobenius map $\varphi:\Spec k\rightarrow \Spec k$ is associated to the $p$-th power map $k\rightarrow k$. The Frobenius twist of $X$ is defined as the base change of $X$ along the absolute Frobenius morphism \[\xymatrix{X'\ar[r]\ar[d]& X\ar[d]\\ \Spec k\ar[r]^\varphi&\Spec k.}\] The $p$-th power map ${\mathscr O}_X\rightarrow {\mathscr O}_X$ gives rise to a morphism $X\rightarrow X$ compatible with $\varphi$ and thus it factors through the Frobenius twist. The induced morphism $F:X\rightarrow X'$ is called the relative Frobenius morphism. For any effective Cartier divisor $D$ on $X$ with ideal sheaf ${\mathscr I}={\mathscr O}_X(-D)$, we obtain a corresponding Cartier divisor $D'$, which is the pullback of $D$ under the base change morphism $X'\rightarrow X$ and whose pullback $F^*D'$ under the relative Frobenius morphism is the divisor $pD$. Thus, we have the following sequence of maps of logarithmic schemes \[(X,D)\rightarrow (X,pD)\rightarrow (X',D')\rightarrow (X,D).\] The morphism $(X,D)\rightarrow (X',D')$ is called the relative Frobenius morphism of logarithmic schemes; by abuse of notation, we denote it by $F$ as well.
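For instance, if $X=\Spec k[x]$ and $D=\{x=0\}$, then $X'=\Spec k[x']$ and the relative Frobenius $F$ is given on coordinate rings by $x'\mapsto x^p$; the ideal ${\mathscr O}_{X'}(-D')=(x')$ pulls back to $(x^p)={\mathscr O}_X(-pD)$, so that indeed $F^*D'=pD$.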
\paragraph We say that a derivation $\delta\in T_X$ is {\em logarithmic} if for every open subset $U$ we have $\delta({\mathscr I}(U))\subset {\mathscr I}(U)$. The logarithmic derivations form a subsheaf of the tangent bundle of $X$ which is called the logarithmic tangent sheaf, $T_X(\log D)$. The sheaf $T_X(\log D)$ is a Lie subalgebroid of $T_X$ meaning that it is closed under the Lie-bracket on $T_X$. In characteristic $p>0$ the $p$-th iteration $\delta^{[p]}$ of a derivation $\delta$ is again a derivation. Clearly, the sheaf $T_X(\log D)$ is closed under this operation as well making $T_X(\log D)$ a sub-$p$-restricted Lie algebroid of $T_X$.
In general $T_X(\log D)$ is not a subbundle of $T_X$; we say that a divisor is {\em free} if $T_X(\log D)$ is a locally free sheaf. For instance, reduced normal crossing divisors are free divisors. Indeed, we can find local coordinates $x_1,...,x_n$ around each point of the divisor so that in an \'etale neighborhood of that point the divisor is given by $x_1\cdot ...\cdot x_k$ for some $k\leq n$, and thus the logarithmic tangent sheaf is generated by the logarithmic derivations $x_1\frac{\partial}{\partial x_1}, x_2\frac{\partial}{\partial x_2},...,x_k\frac{\partial}{\partial x_k}$ and $\frac{\partial}{\partial x_{k+1}},...,\frac{\partial}{\partial x_n}$.
\paragraph[Remark:] Consider the parabolic bundle ${\mathcal L}_{(X,pD)}$ on the logarithmic scheme $(X,pD)$ defined as the parabolic sheaf $({\mathscr O}_X,F_*,\alpha_*)$ where the filtration is given by the natural filtration \[{\mathscr O}_X(-pD)\hookrightarrow {\mathscr O}_X(-(p-1)D)\hookrightarrow...\hookrightarrow {\mathscr O}_X(-D)\hookrightarrow {\mathscr O}_X.\] We remark that this parabolic bundle is the pushforward of ${\mathscr O}_{(X,D)}$ under the natural morphism $(X,D)\rightarrow (X,pD)$. The corresponding parabolic bundle on $(X',D')$ is the parabolic bundle $(F_*{\mathscr O}_X,F_*,\alpha_*)$ where the filtration is given by \[F_*{\mathscr O}_X\otimes {\mathscr O}_{X'}(-D')=F_*{\mathscr O}_X(-pD)\hookrightarrow...\hookrightarrow F_*{\mathscr O}_X(-D)\hookrightarrow F_*{\mathscr O}_X.\] This parabolic bundle is the pushforward of ${\mathscr O}_{(X,D)}$ along the relative Frobenius morphism $F:(X,D)\rightarrow (X',D')$, and hence we denote it by $F_*{\mathscr O}_{(X,D)}$.
It is easy to see that those derivations of ${\mathscr O}_X$ which respect the filtration of the parabolic bundle ${\mathcal L}_{(X,pD)}$ are exactly the logarithmic derivations of $X$.
\paragraph We say that a meromorphic $q$-form $\omega$ is logarithmic if for every affine open subset where ${\mathscr I}=(g)$ for some $g\in {\mathscr O}_X$, we have that $g\omega$, $\omega\wedge dg$ and $g\,d(\omega)$ are algebraic. The logarithmic $q$-forms form a subsheaf $\Omega^q_X(\log D)$ of the sheaf of {\em meromorphic} $q$-forms. The differential $d$ of the meromorphic de Rham complex maps logarithmic forms to logarithmic forms. We define the logarithmic de Rham complex $\Omega^\sbt_X(\log D)$ as the subcomplex of the de Rham complex consisting of the logarithmic forms. This complex $\Omega^\sbt_X(\log D)$ is not a complex of ${\mathscr O}_X$-modules since the differential $d$ is not ${\mathscr O}_X$-linear. On the other hand, we have \[d(s^p\omega)=ps^{p-1}ds\wedge \omega+s^pd\omega=s^pd\omega\] for every $s\in {\mathscr O}_X$ and $\omega\in \Omega^q_X(\log D)$. This implies that $F_*\Omega^\sbt_X(\log D)$ is a complex of ${\mathscr O}_{X'}$-modules. If the divisor is free, then the sheaves $\Omega^q_X(\log D)$ are locally free sheaves and moreover we have $\Omega^q_X(\log D)=\wedge^q \Omega^1_X(\log D)$. In the case of a reduced normal crossing divisor, locally the logarithmic $1$-forms are generated by $d(\log x_1),...,d(\log x_k)$ and $dx_{k+1},...,dx_{n}$. Similarly to the non-logarithmic case, there is a perfect duality between $T_X(\log D)$ and $\Omega^1_X(\log D)$ given by contracting with polyvector fields.
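For instance, on $X=\Spec k[x]$ with $D=\{x=0\}$ and $g=x$, the form $\omega=\frac{dx}{x}$ is logarithmic: $g\omega=dx$, $\omega\wedge dg=\frac{dx}{x}\wedge dx=0$ and $g\,d\omega=0$ are all algebraic. On the other hand, $\frac{dx}{x^2}$ is not logarithmic, since $x\cdot\frac{dx}{x^2}=\frac{dx}{x}$ still has a pole along $D$.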
In the sequel $D$ denotes a reduced normal crossing divisor.
\paragraph \label{par:center} The sheaf of crystalline logarithmic differential operators $D_X(\log D)$ is defined as the universal enveloping algebra of the Lie algebroid $T_X(\log D)$. Explicitly it is defined locally as the $k$-algebra generated by sections of $T_X(\log D)$ and ${\mathscr O}_X$ modulo the relations \begin{itemize} \item $s\cdot\delta=s\delta$ for every $s\in {\mathscr O}_X$ and $\delta\in T_X(\log D)$, \item $\delta_1\cdot \delta_2-\delta_2\cdot\delta_1=[\delta_1,\delta_2]$ for every $\delta_1, \delta_2\in T_X(\log D)$ and \item $\delta\cdot s-s\cdot\delta=\delta(s)$ for every $s\in {\mathscr O}_X$ and $\delta\in T_X(\log D)$. \end{itemize} We emphasize that we do not work with the sheaf of PD differential operators, for our purposes we need an algebra which is of finite type over $X$. Since $T_X(\log D)$ is a Lie subalgebroid of $T_X$, we have an inclusion of $D_X(\log D)$ into the sheaf of crystalline differential operators $D_X$ (defined as the universal enveloping algebra of $T_X$).
\paragraph The map \[\psi: T_X(\log D)\rightarrow D_X(\log D)\] mapping \[\delta\mapsto \delta^p-\delta^{[p]}\] is ${\mathscr O}_{X'}$-linear and its image is in the center of $D_X(\log D)$ (see \cite{BezMirRum} for a detailed treatment in the non-logarithmic case) implying that the center of $D_X(\log D)$ can be identified with the structure sheaf ${\mathscr O}_{T^*X'(\log D')}$ of the logarithmic cotangent bundle $\pi: T^*X'(\log D')\rightarrow X'$ over the Frobenius twist. The zero section $i:X'\rightarrow T^*X'(\log D')$ of the bundle map $\pi$ gives rise to a natural embedding of logarithmic schemes \[i_D:(X',D')\rightarrow (T^*X'(\log D'),\pi^*D'),\] since $i^*\pi^*D'=D'$.
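For instance, for $X=\Spec k[x]$ and $D=\{x=0\}$ the logarithmic tangent sheaf is generated by $\theta=x\frac{d}{dx}$, and $\theta^{[p]}=\theta$ because the $p$-th iterate of $\theta$ sends $x^n$ to $n^px^n=nx^n$. Hence \[\psi(\theta)=\theta^p-\theta,\] a central element of $D_X(\log D)$ which acts by zero on ${\mathscr O}_X$.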
\paragraph We equip the sheaf of algebras $D_X(\log D)$ with the trivial logarithmic structure on $(X,D)$ and denote the corresponding logarithmic sheaf by $D_X(\log D)_*$: \[D_X(\log D)_x=\begin{cases} D_X(\log D) & \mbox{if }x=0,\\ D_X(\log D)\otimes {\mathscr O}_X(-D) &\mbox{if }0<x\leq 1.\end{cases}\]
After pushing forward the parabolic sheaf $D_X(\log D)_*$ along the relative Frobenius morphism $F:(X,D)\rightarrow (X',D')$ we obtain the parabolic sheaf $F_*D_X(\log D)_*$, whose filtration is given by \[F_*D_X(\log D)\otimes {\mathscr O}_{X'}(-D')=F_*(D_X(\log D)\otimes {\mathscr O}_X(-pD))\hookrightarrow...\hookrightarrow F_*(D_X(\log D)).\] This parabolic sheaf has weights $\alpha_i=\frac{i}{p+1}$, similarly to the weights of ${\mathcal L}_{(X,pD)}$ or of $F_*{\mathscr O}_{(X,D)}$.
\paragraph By the discussion in Paragraph \ref{par:center} we can regard the parabolic sheaf of algebras $F_*D_X(\log D)_*$ as a parabolic sheaf of algebras on $(T^*X'(\log D'), \pi^*D')$. We denote the corresponding parabolic sheaf of algebras by $D(\log D)_*$. The bundle map $\pi$ identifies $\pi_*D(\log D)_*$ with $F_*D_X(\log D)_*$.
The following lemma is a straightforward generalization of the non-logarithmic result in \cite{BezMirRum}.
\begin{Lemma} \label{lem:az} Assume that $D$ is a reduced normal crossing divisor. Then, the parabolic sheaf of algebras $D(\log D)_*$ is an Azumaya algebra over the logarithmic space $(T^*X'(\log D'),\pi^*D')$. Moreover, the Azumaya algebra is split on the zero section $(X',D')\rightarrow (T^*X'(\log D'),\pi^*D')$. \end{Lemma}
\begin{Proof} We only highlight the key steps. We need to show that $D(\log D)_*$ becomes a matrix algebra under a flat cover. Consider an affine open set $U'$ of $X'$, and the corresponding open set $U\subseteq X$. Pick local coordinates $x_1,...,x_n$ for $U$ so that the reduced normal crossing divisor, $D$ is given by the equation $x_1\cdot...\cdot x_k$. Then, the (non-parabolic) sheaf of algebras $D_X(\log D)$ is generated by $\Gamma(U,{\mathscr O}_X)$ and the derivations $x_i\frac{d}{dx_i}$ for $1\leq i\leq k$, and the derivations $\frac{d}{dx_i}$ for $k<i\leq n$. Consider $F_*D_X(\log D)$ and the centralizer $A_X$ of $\Gamma(U',{\mathscr O}_{X'})$ inside $F_*D_X(\log D)$. A straightforward calculation shows that the centralizer is the $R=\Gamma(U',{\mathscr O}_{X'})$-algebra generated by the logarithmic derivations $\delta$ of $X$, in other words \[A_X=R\left[x_1\frac{d}{dx_1},...,x_k\frac{d}{dx_k},\frac{d}{dx_{k+1}},...,\frac{d}{dx_n}\right]\] There is a natural logarithmic structure on $V=\Spec A_X$ given by the restriction of the divisor $\pi^*D'$ to $V$. The ideal sheaf corresponding to the divisor is generated by the element $x_1^p\cdot...\cdot x_k^p$. We denote by ${\mathscr I}$ the ideal sheaf of ${\mathscr O}_X$ generated by the element $x_1\cdot...\cdot x_k$. The sheaf
\[D(\log D)|_V:=D(\log D)\otimes_{{\mathscr O}_{T^*X'(\log D')(U)}} A_X\] is generated by $\Gamma(U',{\mathscr O}_{X'})$ and two copies $u_1,...,u_n$ and $v_1,...,v_n$ corresponding to the logarithmic derivations \[x_1\frac{d}{dx_1},...,x_k\frac{d}{dx_k},\frac{d}{dx_{k+1}},...,\frac{d}{dx_n}.\]
The action $u_i.x=u_ix$ and $v_i.x=xv_i$ for $x\in D(\log D)$ gives rise to an action of $D(\log D)|_V$ on the parabolic sheaf \[{\mathcal D}:=D(\log D)\otimes {\mathscr I}^p\hookrightarrow D(\log D)\otimes {\mathscr I}^{p-1}\hookrightarrow...\hookrightarrow D(\log D)\] viewed as a parabolic sheaf over $\Spec A_X$. A local calculation similar to one in \cite{BezMirRum} shows that we have an isomorphism
\[D(\log D)_x|_V=\underline{\mathsf{End}}_{(V,\pi^*D'|_V)}({\mathcal D})_x\] hence $D(\log D)_*$ is an Azumaya algebra over $(T^*X'(\log D'),\pi^*D')$.
Next we show that $D(\log D)_*|_{X'}$ is a split Azumaya algebra. More explicitly, we show that $D(\log D)_*|_{X'}=\underline{\mathsf{End}}_{(X',D')}(F_*{\mathscr O}_{(X,D)})$. We remind the reader that the parabolic sheaf $F_*{\mathscr O}_{(X,D)}$ is defined as the parabolic sheaf given by the filtration \[F_*{\mathscr O}_X\otimes {\mathscr O}_{X'}(-D')=F_*{\mathscr O}_X(-pD)\hookrightarrow...\hookrightarrow F_*{\mathscr O}_X(-D)\hookrightarrow F_*{\mathscr O}_X.\]
The sheaf of crystalline logarithmic differential operators acts non-trivially on ${\mathscr O}_X(-D)$, ${\mathscr O}_X(-2D)$, ..., ${\mathscr O}_X(-(p-1)D)$ and moreover the elements which act trivially are generated by the symbols $\delta^p-\delta^{[p]}$ for $\delta\in T_X(\log D)$. These are exactly the elements which vanish under the pullback map ${\mathscr O}_{T^*X'(\log D')}\rightarrow {\mathscr O}_{X'}$, hence $D(\log D)_*|_{X'}$ acts on $F_*{\mathscr O}_{(X,D)}$. As before a local calculation similar to one in \cite{BezMirRum} shows that we have an isomorphism
\[D(\log D)_*|_{X'}=\underline{\mathsf{End}}_{(X',D')}(F_*{\mathscr O}_{(X,D)}).\] This concludes the proof.\qed \end{Proof}
\paragraph[Remark:] If $D$ is a non-reduced normal crossing divisor, then the lemma above does not hold anymore. It would be interesting to generalize the above lemma to any normal crossing divisor (see Paragraph \ref{par:Konque}).
\paragraph As a consequence of Lemma \ref{lem:az}, we obtain a Morita equivalence between the category of coherent sheaves $Coh(X',D')$ on $(X',D')$ and the category of coherent sheaves $Coh(D(\log D)|_{(X',D')})$ on $(X',D')$ with a left $D(\log D)|_{X'}$-action: the functors
\[m_*:Coh(X',D')\rightarrow Coh(D(\log D)|_{(X',D')}):\] \[E_*\mapsto E_*\otimes F_*{\mathscr O}_{(X,D)}\] and
\[m^*:Coh(D(\log D)|_{(X',D')})\rightarrow Coh(X',D'):\]
\[ E_*\mapsto \underline{\mathsf{Hom}}_{D(\log D)|_{X'}}(F_*{\mathscr O}_{(X,D)},E_*)\]
are inverses to each other. These functors give rise to an equivalence between the corresponding derived categories ${\mathbf D}(X',D')$ and ${\mathbf D}(X', D', D(\log D)|_{(X',D')})$.
\paragraph We conclude this section by showing that the sheaf of crystalline logarithmic differential operators and the logarithmic de Rham complex are Koszul dual (our reference is \cite{CalNar}). The sheaf ${\mathscr O}_X$ is naturally a left $D_X(\log D)\subset D_X$-module given by the action of the logarithmic derivations on ${\mathscr O}_X$. The logarithmic Spencer complex $Sp^\sbt({\mathscr O}_X)$ is the complex of left $D_X(\log D)$-modules \[0\rightarrow D_X(\log D)\otimes \wedge^n T_X(\log D)\rightarrow ...\rightarrow D_X(\log D)\otimes T_X(\log D)\rightarrow D_X(\log D)\] where the differentials \[d_{Sp}:D_X(\log D)\otimes \wedge^i T_X(\log D)\rightarrow D_X(\log D)\otimes \wedge^{i-1} T_X(\log D)\] are given by \begin{align*}d_{Sp}(T\otimes \delta_1\wedge ...\wedge \delta_i)&=\sum_{l=1}^i (-1)^{l-1} T\delta_l \otimes \delta_1\wedge...\wedge \hat{\delta_l}\wedge ...\wedge \delta_i\\ &+\sum_{1\leq l<k\leq i} (-1)^{l+k} T\otimes [\delta_l,\delta_k]\wedge \delta_1\wedge...\wedge\hat{\delta_l}\wedge...\wedge\hat{\delta_k}\wedge ...\wedge \delta_i.\end{align*} It is a locally free resolution of ${\mathscr O}_X$ by locally free left $D_X(\log D)$-modules. As a consequence, for any left $D_X(\log D)$-module $F$ we have that the object ${\mathbf{R}\sHom}_{(X,D_X(\log D))} ({\mathscr O}_X,F)$ can be represented by the logarithmic de Rham complex of $F$ \[0\rightarrow F\rightarrow F\otimes \Omega^1_X(\log D)\rightarrow...\rightarrow F\otimes \Omega^n_X(\log D).\] In the case of $F={\mathscr O}_X$ we obtain an isomorphism \[\Omega^\sbt_X(\log D)={\mathbf{R}\sHom}_{(X,D_X(\log D))} ({\mathscr O}_X,{\mathscr O}_X).\] Similarly, for any left $D_X(\log D)$-module $E$ we obtain a complex $Sp^\sbt(E)$ defined as \[0\rightarrow D_X(\log D)\otimes \wedge^n T_X(\log D)\otimes E\rightarrow ...\rightarrow D_X(\log D)\otimes E\] where the differentials \[d:D_X(\log D)\otimes \wedge^i T_X(\log D)\otimes E\rightarrow D_X(\log D)\otimes \wedge^{i-1} T_X(\log D)\otimes E\] are given by \begin{align*}d(T\otimes \delta_1\wedge ...\wedge \delta_i\otimes e)&=d_{Sp}(T\otimes\delta_1\wedge...\wedge \delta_i)\otimes e\\ &-\sum_{l=1}^i (-1)^{l-1} T \otimes \delta_1\wedge...\wedge \hat{\delta_l}\wedge ...\wedge \delta_i\otimes \delta_l(e).\end{align*} \paragraph The Spencer complex of $E$ gives rise to a locally free resolution of $E$ by left $D_X(\log D)$-modules. As a consequence we can compute \[{\mathbf{R}\sHom}_{(X,D_X(\log D))}({\mathscr O}_X(lD),{\mathscr O}_X(mD))\] for any $l,m\in \mathbf{Z}$ by replacing ${\mathscr O}_X(lD)$ by its Spencer complex, and we obtain isomorphisms \begin{equation}\label{eq:iso}{\mathbf{R}\sHom}_{(X,D_X(\log D))}({\mathscr O}_X(lD),{\mathscr O}_X(mD))=\Omega^\sbt_X(\log D)\otimes {\mathscr O}_X((m-l)D).\end{equation}
\paragraph The above discussion shows that for the parabolic sheaf $D_X(\log D)_*$ we have \[{\mathbf{R}\sHom}_{(X,D,D_X(\log D)_*)}({\mathscr O}_{(X,D)},{\mathscr O}_{(X,D)})_*=\Omega^\sbt_X(\log D)_*\] where $\Omega^\sbt_X(\log D)_*$ is the parabolic sheaf given by the de Rham complex equipped with the trivial parabolic structure: \[\Omega^\sbt_X(\log D)_x=\begin{cases} \Omega^\sbt_X(\log D) & \mbox{if }x=0,\\ \Omega^\sbt_X(\log D)\otimes {\mathscr O}_X(-D) & \mbox{if }0<x\leq 1.\end{cases}\] Consider the parabolic sheaf $F_*D_X(\log D)_*$ on $(X',D')$. We remind the reader that this parabolic sheaf has weights $\alpha_i=\frac{i}{p+1}$. As before, using the isomorphisms \ref{eq:iso} we obtain an isomorphism \[{\mathbf{R}\sHom}_{(X',D',F_*D_X(\log D)_*)}(F_*{\mathscr O}_{(X,D)},F_*{\mathscr O}_{(X,D)})_*=F_*\Omega^\sbt_X(\log D)_*\] where $F_*\Omega^\sbt_X(\log D)_*$ is the complex of parabolic bundles given by the pushforward de Rham complex with the filtration \[F_*\Omega^\sbt_X(\log D)\otimes {\mathscr O}_{X'}(-D')\hookrightarrow ...\hookrightarrow F_*(\Omega^\sbt_X(\log D)\otimes {\mathscr O}_X(-D))\hookrightarrow F_*\Omega^\sbt_X(\log D).\] Similarly to the parabolic sheaf $F_*{\mathscr O}_{(X,D)}$, the parabolic sheaf $F_*\Omega^\sbt_X(\log D)_*$ has weights $\alpha_i=\frac{i}{p+1}$.
\section{Derived self-intersection of (Azumaya) schemes} \label{sec:derint}
In this section we summarize the theory of derived self-intersections of (Azumaya) schemes. Our references are \cite{AriCal}, \cite{AriCalHab1}, \cite{CioKap} and \cite{Gri}.
\paragraph Let $S$ be a smooth variety and $X$ be a smooth subvariety of $S$ of codimension $n$. Assume that the base field is either of characteristic 0 or of $p>n$. We denote the embedding of $X$ inside $S$ by $i$ and the corresponding normal bundle by $N$. The derived self-intersection $W$ of $X$ inside $S$ is a dg-scheme whose structure sheaf is constructed by taking the derived tensor product of the structure sheaf of $X$ with itself over ${\mathscr O}_S$. The derived self-intersection is equipped with a map from the underived self-intersection, $X$.
\paragraph We say that a derived scheme $W$ is {\em formal} over a scheme $\pi: W\rightarrow Z$ if $\pi_*{\mathscr O}_W$ is a {\em formal} complex of ${\mathscr O}_Z$-modules, meaning that there exists an isomorphism of commutative differential graded algebras \[\pi_*{\mathscr O}_W=\bigoplus_k {\mathscr H}^k(\pi_*{\mathscr O}_W)[-k].\] A local calculation \cite{CalKatSha} shows that the cohomology sheaves of the structure sheaf of the derived self-intersection $W$ (over $S$) are given by \[{\mathscr H}^{-*}({\mathscr O}_W)=\Tor_*^S({\mathscr O}_X,{\mathscr O}_X)=\wedge^* N^\vee.\] Therefore, the formality of the derived self-intersection asserts that there is a quasi-isomorphism (of commutative dg-algebras) \[\pi_*{\mathscr O}_W=\bigoplus_{i=0}^n \wedge^i N^\vee[i]=:{\mathbb{S}}(N^\vee[1]).\] (We omit writing the pushforward of $N^\vee$ to $Z$ along the map $X\rightarrow W\rightarrow Z$.) The main result of \cite{AriCal} is the following.
\begin{Theorem}[\cite{AriCal}] \label{thm:Wformal} The following statements are equivalent. \begin{itemize} \item[(1)] There exists an isomorphism of dg-autofunctors of $D(X)$ \[i^*i_*(-)=(-)\otimes {\mathbb{S}}(N^\vee[1]).\] \item[(2)] $W$ is formal over $X\times X$. \item[(3)] The natural map $X\rightarrow W$ is split over $X\times X$. \item[(4)] The short exact sequence
\[0\rightarrow T_X\rightarrow T_S|_X\rightarrow N\rightarrow 0\] of vector bundles on $X$ splits. \end{itemize} \end{Theorem}
\paragraph[Remark:] For instance, the derived self-intersection of $X'$ inside $T^*X'(\log D')$ is formal: the bundle map $\pi:T^*X'(\log D')\rightarrow X'$ splits the embedding of the zero section, and hence the injection
\[T_{X'}\hookrightarrow T_{T^*X'(\log D')}|_{X'}\] is split.
\paragraph An {\em Azumaya scheme} is a pair $(S,{\mathcal A})$ where $S$ is a scheme or dg-scheme and ${\mathcal A}$ is an Azumaya algebra over $S$. We say that $(f,E):(X,{\mathcal B})\rightarrow (S,{\mathcal A})$ is a 1-morphism of Azumaya schemes if $f$ is a morphism of schemes and $E$ is an $f^*{\mathcal A}^{opp}\otimes {\mathcal B}$-module that provides a Morita equivalence between $f^*{\mathcal A}$ and ${\mathcal B}$. Given an embedding of Azumaya schemes ${\bar{i}}:(X,{\mathcal A}|_X)\rightarrow (S,{\mathcal A})$, the derived self-intersection is given by ${\overline{W}}=(W,{\mathcal A}|_W)$.
\paragraph
From now on, assume that ${\mathcal A}|_X$ is a split Azumaya algebra with splitting module $E$, in other words there exists a 1-isomorphism $(\id, E)$ of Azumaya schemes $m:X\rightarrow (X,{\mathcal A}|_X)$. We denote the induced map $X\rightarrow (S,{\mathcal A})$ by $i'$. We organize our spaces into the following diagram.
\[\xymatrix{W\ar[rr]\ar[dd]&&X\ar[d]^m\ar@/^2pc/[dd]^{i'}\\&(W,{\mathcal A}|_W)\ar[r]^p\ar[d]^q& (X,{\mathcal A}|_X)\ar[d]^{\bar{i}}\\X\ar[r]^-m\ar@/_2pc/[rr]^{i'}&(X,{\mathcal A}|_X)\ar[r]^{{\bar{i}}}& (S,{\mathcal A})}\]
\paragraph The Azumaya schemes $W$ and $(W,{\mathcal A}|_W)$ are abstractly isomorphic, but in general the isomorphism is not over $(X,X)$. (We remark that $W$ can be thought of as a dg-scheme over $X\times X$, on the other hand Azumaya spaces do not have absolute products, and thus it is more natural to think of them as spaces equipped with two morphisms to $X$.) The structure sheaves of derived self-intersections $W$ and $(W,{\mathcal A}|_W)$ regarded as dg-schemes endowed with a map to $(X,X)$ are the kernels of the dg-autofunctors of $D(X)$ $i^*i_*(-)$ and $i'^*i'_*(-)$ respectively. In \cite{AriCalHab1} the authors show that there exists an isomorphism of dg-autofunctors of $D(X)$ \[i'^*i'_*(-)=q_*p^*(-\otimes L)\] for some line bundle $L$ on the derived scheme $W$. This line bundle is called the associated line bundle of the derived self-intersection in \cite{AriCalHab1}.
\paragraph Consider the object ${\bar{i}}^*{\bar{i}}_*E$ of $D(X,{\mathcal A}|_X)$. A local calculation similar to one in \cite{CalKatSha} shows that there exist isomorphisms \[{\mathscr H}^k({\bar{i}}^*{\bar{i}}_*E)=E\otimes \wedge^{-k} N^\vee[-k].\]
Therefore, we obtain a triangle in $D(X,{\mathcal A}|_X)$ \[E\otimes N^\vee[1]\rightarrow \tau^{\geq -1}{\bar{i}}^*{\bar{i}}_*E\rightarrow E\rightarrow E\otimes N^\vee[2].\] The rightmost map of the triangle
\[\alpha_E\in H^2_{(X,{\mathcal A}|_X)}(E,E\otimes N^\vee)=H^2(X,N^\vee)\] is called the HKR class of $E$.
\paragraph The HKR class gives the obstruction to lifting $E$ to the first infinitesimal neighborhood of $(X,{\mathcal A}|_X)$ inside $(S,{\mathcal A})$. Having a lifting of $E$ is equivalent to having a splitting module $F$ of the Azumaya algebra ${\mathcal A}|_{X^{(1)}}$ so that $F|_X=E$, where $X^{(1)}$ denotes the first infinitesimal neighborhood of $X$ inside $S$. Such a lifting exists if $i'$ splits to first order, meaning that there exists a map $\varphi:X^{(1)}\rightarrow X$ splitting the natural inclusion $X\rightarrow X^{(1)}$, so that $\varphi^*E$ is a splitting module for ${\mathcal A}|_{X^{(1)}}$. We are ready to state the main result concerning the triviality of the associated line bundle.
\begin{Theorem}[\cite{AriCalHab1}] \label{thm:Azu} Assume that $W$ is formal over $X\times X$. Then, the following statements are equivalent. \begin{itemize}
\item[(1)] The dg-schemes $W$ and $(W,{\mathcal A}|_W)$ are isomorphic over $X\times X$. \item[(2)] There exists an isomorphism of dg-autofunctors of $D(X)$ \[i^*i_*(-)\cong i'^*i'_*(-)=(-)\otimes {\mathbb{S}}(N^\vee[1]).\] \item[(3)] The associated line bundle is trivial. \item[(4)] The morphism $i'$ splits to first order. \item[(5)] The HKR class $\alpha_E$ vanishes. \end{itemize} \end{Theorem}
\section{Derived self-intersection of logarithmic (Azumaya) schemes} \label{sec:logderint}
In this section we expand the theory of twisted derived intersections to the logarithmic setting.
\paragraph Let $X$ be a smooth subscheme of a smooth scheme $S$. We denote the embedding by $i$. Let us equip $S$ with an effective Cartier divisor $D$ so that $i^*D$ is an effective divisor on $X$. Consider the induced embedding of logarithmic schemes $i_D:(X,D|_X)\rightarrow (S,D)$. We say that $i_D$ splits to first order if there is a left inverse of the induced morphism
\[(X,D|_X)\rightarrow (X^{(1)},D|_{X^{(1)}}),\]
where $X^{(1)}$ denotes the first infinitesimal neighborhood of $X$ inside $S$. Equivalently, we say that $i_D$ splits to first order if there is a splitting $\rho:X^{(1)}\rightarrow X$ of the embedding $X\rightarrow X^{(1)}$ so that $\rho^*D|_X=D|_{X^{(1)}}$.
\paragraph We define the derived self-intersection of $(X,D|_X)$ inside $(S,D)$ as the logarithmic dg-scheme $(W,D|_W)$. We assume that the dg-scheme $W$ is formal over $X\times X$. Notice that the divisor $D|_W$ can be thought of as the restriction of $D|_{X\times X}$ to $W$. We generalize Theorem \ref{thm:Wformal} to the logarithmic setting.
\begin{Proposition} \label{prop:for} Assume further that $i_D$ splits to first order. Then we have an isomorphism of dg-autofunctors of $D(X,i^*D)$ \[i_D^*i_{D,*}(-)=(-)\otimes {\mathbb{S}}(N^\vee[1]).\]
In other words, $(W,D|_W)$ is formal over $(X\times X,D|_{X\times X})$. \end{Proposition}
\begin{Proof} The proof is entirely similar to the proof of Theorem 0.7 of \cite{AriCal}.\qed \end{Proof}
\paragraph[Remark:]\label{rem:ass} All the assumptions of the Proposition above are satisfied in the case when $S$ is a vector bundle $\pi:S\rightarrow X$ over $X$, $i:X\rightarrow S$ is the zero section and the divisor $D$ is the pullback along $\pi$ of a divisor on $X$.
\paragraph
\label{par:setup} We turn our attention to embeddings of logarithmic Azumaya schemes. Let us equip the logarithmic scheme $(S,D)$ with a parabolic sheaf of Azumaya algebras ${\mathcal A}_*$ and assume that ${\mathcal A}_*$ splits over $(X,D|_X)$ with splitting module $E_*$. We remind the reader that $E_*$ induces an isomorphism of spaces $m_D:(X,D)\rightarrow(X,D|_X,{\mathcal A}_*|_X)$. We denote the embedding of logarithmic Azumaya spaces $(X,D|_X,{\mathcal A}_*|_X)\rightarrow (S,D,{\mathcal A}_*)$ by $\bar{i}_D$, and the composite of embeddings $(X,D|_X)\rightarrow (X,D|_X,{\mathcal A}_*|_X)\rightarrow (S,D,{\mathcal A}_*)$ by $i_D'$. We organize our spaces as follows.
\[\xymatrix{(W,D|_W)\ar[rr]^{p_D}\ar[dd]_{q_D}&&(X,D)\ar[d]^{m_D}\ar@/^3pc/[dd]^{i'_D}\\&(W,D|_W,{\mathcal A}_*|_W)\ar[r]\ar[d]& (X,D|_X,{\mathcal A}_*|_X)\ar[d]^{{\bar{i}}_D}\\(X,D)\ar[r]^-{m_D}\ar@/_3pc/[rr]^{i'_D}&(X,D|_X,{\mathcal A}_*|_X)\ar[r]^{{\bar{i}}_D}& (S,D,{\mathcal A}_*)}\]
\paragraph The spaces $(W,D|_W)$ and $(W,D|_W,{\mathcal A}_*|_W)$ are abstractly isomorphic, but in general not over the pair $((X,D), (X,D))$, since the splitting modules $p_D^*E_*$ and $q_D^*E_*$ may not be isomorphic. Two splitting modules differ by a parabolic line bundle, thus the failure to have an isomorphism between $(W,D|_W)$ and $(W,D|_W,{\mathcal A}_*|_W)$ is measured by a parabolic line bundle on $(W,D|_W)$ which we call the associated parabolic line bundle ${\mathcal L}_*$. As a consequence, we obtain an isomorphism of dg-functors \[i_D'^*i'_{D,*}(-)=q_{D,*}p_D^*(-\otimes {\mathcal L}_*).\]
In particular for the structure sheaf ${\mathscr O}_{(X,D|_X)}$ we have
\[i_D'^*i'_{D,*}{\mathscr O}_{(X,D|_X)}=q_{D,*}p_D^*{\mathcal L}_*.\]
\paragraph Consider the object $\bar{i}_D^*\bar{i}_{D,*}E_*$. As before, a local calculation similar to one in \cite{CalKatSha} shows that there exist isomorphisms of parabolic sheaves \[{\mathscr H}^{k}({\bar{i}}_D^*{\bar{i}}_{D,*}E_*)=E_*\otimes \wedge^{-k} N^\vee[-k].\] As above, we define the HKR class $\alpha^D_{E_*}$ as the rightmost map of the triangle \begin{equation} \label{eq:HKR} E_*\otimes N^\vee[1]\rightarrow \tau^{\geq -1}{\bar{i}}_D^*{\bar{i}}_{D,*}E_*\rightarrow E_*\rightarrow E_*\otimes N^\vee[2].\end{equation} A priori the HKR class is an element of
\[\Ext^2_{(X,D|_X,{\mathcal A}_*|_X)}(E_*,E_*\otimes N^\vee)\]
which is the obstruction of lifting the splitting module $E_*$ to the first infinitesimal neighborhood of the embedding ${\bar{i}}_D$. \paragraph By Morita equivalence the extension group above is isomorphic to
\[\Ext^2_{(X,D|_X)}({\mathscr O}_{(X,D|_X)},{\mathscr O}_{(X,D|_X)}\otimes N^\vee)=H^2((X,D|_X),{\mathscr O}_{(X,D|_X)}\otimes N^\vee).\]
It is easy to see that ${\mathsf{Hom}}_{(X,D|_X)}({\mathscr O}_{(X,D|_X)}, E'_*)={\mathsf{Hom}}_X({\mathscr O}_X,E'_0)$ for any parabolic sheaf $E'_*$. Therefore, the HKR class $\alpha^D_{E_*}$ can be thought of as an element of $H^2(X,N^\vee)$. This element corresponds to the rightmost map of the triangle in \ref{eq:HKR} when $*=0$, i.e., all objects are considered as ordinary sheaves. Summarizing the above discussion, we obtain the following theorem, which is a straightforward generalization of Theorem \ref{thm:Azu}.
\begin{Theorem} \label{thm:lat} Assume that both maps $i$ and $i_D$ split to first order. Then, the following statements are equivalent. \begin{itemize}
\item[(1)] The dg-schemes $(W,D|_W)$ and $(W,D|_W,{\mathcal A}|_W)$ are isomorphic over $(X\times X,D|_{X\times X})$.
\item[(2)] There exists an isomorphism of dg-autofunctors of $D(X,D|_X)$ \[i_D^*i_{D,*}(-)\cong i_D'^*i'_{D,*}(-)=(-)\otimes {\mathbb{S}}(N^\vee[1]).\] \item[(3)] The associated parabolic line bundle is trivial. \item[(4)] The morphism $i'_D$ splits to first order. \item[(5)] The HKR class $\alpha^D_{E_*}$ vanishes. \end{itemize} \end{Theorem}
\section{Proof of the main theorems} \label{sec:mainthm}
In this section we prove our main theorems, Theorem \ref{thm:str}, Theorem \ref{thm:alb}, Theorem \ref{thm:equi} and Corollary \ref{cor:end}.
\paragraph Let $X$ be a smooth scheme over a perfect field $k$ of characteristic $p>\dim X$, and $D$ a reduced normal crossing divisor. We denote by $D'$ the corresponding reduced normal crossing divisor on the Frobenius twist $X'$. We consider the Frobenius twist $X'$ embedded into the vector bundle $T^*X'(\log D')$ as the zero section. Recall that the parabolic sheaf of crystalline logarithmic differential operators $D(\log D)_*$ can be regarded as a parabolic sheaf of algebras over $T^*X'(\log D')$. Moreover, $D(\log D)$ is an Azumaya algebra over the logarithmic scheme $(T^*X'(\log D'),\pi^*D')$, so that $D(\log D)|_{X'}$ is a split Azumaya algebra.
\paragraph We are in the context described in Paragraph \ref{par:setup}. We compare the derived self-intersection $(W,D|_W)$ corresponding to the embedding \[i:(X',D')\rightarrow (T^*X'(\log D'),\pi^*D')\] and the derived self-intersection corresponding to \[i':(X',D')\rightarrow (T^*X'(\log D'),\pi^*D',D(\log D)).\]
We denote the latter space by $({\overline{W}},D'|_W)$, and the map
\[(X',D',D(\log D)|_{X'})\rightarrow (T^*X'(\log D'),\pi^*D',D(\log D))\]
by ${\bar{i}}$. As an easy consequence of Proposition \ref{prop:for} we obtain that the structure sheaf of $(W,D|_W)$ is a formal parabolic sheaf.
\begin{Theorem} \label{thm:str}
The structure sheaf ${\mathscr O}_{(W,D|_W)}$ over $X'$ is isomorphic to the dual of the formal complex ${\mathbb{S}}(\Omega^1_{X'}(\log D')[-1])$ equipped with the trivial parabolic structure. \end{Theorem}
\begin{Proof}
The structure sheaf of $(W,D|_W)$ over $X'$ is given by the object $i^*i_*{\mathscr O}_{(X',D')}$. By Remark \ref{rem:ass} all the assumptions of Proposition \ref{prop:for} are satisfied for the embedding $i$, implying the statement above.\qed \end{Proof}
Next we compute the associated parabolic line bundle ${\mathcal L}_*$ of the derived self-intersection corresponding to \[i':(X',D')\rightarrow (T^*X'(\log D'),\pi^*D',D(\log D)).\]
\begin{Theorem} \label{thm:alb} The associated line bundle ${\mathcal L}$ is isomorphic to the dual of $F_*\Omega^\sbt_X(\log D)_*$. \end{Theorem}
\begin{Proof} We have the following sequence of maps \begin{align*} F_*\Omega^\sbt_X(\log D)_*&={\mathbf{R}\sHom}_{(X',D',F_*D_X(\log D)_*)}(F_*{\mathscr O}_{(X,D)},F_*{\mathscr O}_{(X,D)})=\\ &={\mathbf{R}\sHom}_{(X',D',\pi_*D(\log D)_*)}(F_*{\mathscr O}_{(X,D)},F_*{\mathscr O}_{(X,D)})=\\ &={\mathbf{R}\sHom}_{(X',D',\pi_*D(\log D)_*)}(\pi_*{\bar{i}}_*F_*{\mathscr O}_{(X,D)},\pi_*{\bar{i}}_*F_*{\mathscr O}_{(X,D)})=\\ &=\pi_*{\mathbf{R}\sHom}_{(T^*X'(\log D'),\pi^*D',D(\log D)_*)}({\bar{i}}_*F_*{\mathscr O}_{(X,D)},{\bar{i}}_*F_*{\mathscr O}_{(X,D)})\\ \end{align*} where the first isomorphism is the Koszul duality between $\Omega^\sbt_X(\log D)$ and $D_X(\log D)$, the second is the isomorphism $F_*D_X(\log D)_*=\pi_*D(\log D)_*$ for the bundle map $\pi:T^*X'(\log D')\rightarrow X'$, the third is the identity $\pi\circ i=\id$ and the last one is a consequence of the fact that $\pi$ is affine. The map ${\bar{i}}_*$ has a right adjoint, which we denote by ${\bar{i}}^!$ (see \cite{Yok} and \cite{AriCalHab2}). We have
\[F_*\Omega^\sbt_X(\log D)_*=\pi_*i_*{\mathbf{R}\sHom}_{(X',D',D(\log D)_*|_{X'})}(F_*{\mathscr O}_{(X,D)},{\bar{i}}^!{\bar{i}}_*F_*{\mathscr O}_{(X,D)}).\] We use again that $\pi\circ i=\id$ to obtain
\[F_*\Omega^\sbt_X(\log D)_*={\mathbf{R}\sHom}_{(X',D',D(\log D)_*|_{X'})}(F_*{\mathscr O}_{(X,D)},{\bar{i}}^!{\bar{i}}_*F_*{\mathscr O}_{(X,D)}).\] We remind the reader that there exists a Morita equivalence between $(X',D')$ and $(X', D', D(\log D))$ given by the functors
\[m_*:Coh(X',D')\rightarrow Coh(D(\log D)|_{(X',D')}):\] \[M\mapsto M\otimes F_*{\mathscr O}_{(X,D)}\] and
\[m^*:Coh(D(\log D)|_{(X',D')})\rightarrow Coh(X',D'):\]
\[ M\mapsto \underline{\mathsf{Hom}}_{D(\log D)|_{X'}}(F_*{\mathscr O}_{(X,D)},M)\] and therefore \[F_*\Omega^\sbt_X(\log D)_*=m^*{\bar{i}}^!{\bar{i}}_*{\mathscr O}_{(X',D')}.\] Moreover, the functor $m^*$ is both left and right adjoint to $m_*$ implying that \[F_*\Omega^\sbt_X(\log D)_*=m^!{\bar{i}}^!{\bar{i}}_*{\mathscr O}_{X'}=i'^!i'_*{\mathscr O}_{(X',D')}\] completing the proof.\qed \end{Proof}
We conclude the paper by applying Theorem \ref{thm:lat} to our situation.
\begin{Theorem} \label{thm:equi} Let $X$ be a smooth variety over a perfect field of characteristic $p>\dim X$, with a reduced normal crossing divisor $D$. Then, the following statements are equivalent. \begin{itemize} \item[(1)] The logarithmic scheme $(X,D)$ lifts to $W_2(k)$. \item[(2)] The class of the extension \[0\rightarrow {\mathscr O}_{X'}\rightarrow F_*{\mathscr O}_X\rightarrow F_*Z^1\rightarrow \Omega^1_{X'}(\log D')\rightarrow 0\] vanishes. Here $Z^1$ denotes the image of $d$ inside $F_*\Omega^1_X(\log D)$. \item[(3)] The map $i'$ splits to first order. \item[(4)] The associated line bundle is trivial. \item[(5)] The parabolic sheaf of algebras $D(\log D)_*$ splits on the first infinitesimal neighborhood of $(X',D')$ inside $(T^*X'(\log D'),\pi^*D')$. \item[(6)] The complex $F_*\Omega^\sbt_X(\log D)_*$ is a formal parabolic sheaf equipped with the trivial parabolic structure. \item[(7)] There exists an isomorphism \[F_*\Omega^\sbt_X(\log D)_*\cong {\mathbb{S}}(\Omega^1_{X'}(\log D')[-1])_*\] in $D(X',D')$ where the sheaf ${\mathbb{S}}(\Omega^1_{X'}(\log D')[-1])$ is equipped with the trivial parabolic structure. \end{itemize} \end{Theorem}
\begin{Proof} The HKR class corresponding to the embedding $i'$ is the dual of the extension class in 2. Therefore, the equivalence \[(2)\Leftrightarrow (3)\Leftrightarrow(4)\Leftrightarrow(5)\Leftrightarrow(6)\Leftrightarrow(7)\] follows from Theorem \ref{thm:lat}, Theorem \ref{thm:str} and Theorem \ref{thm:alb}. The equivalence $(1)\Leftrightarrow(2)$ is proved in \cite{EsnVie}. \qed \end{Proof}
\paragraph[Remark:] A particularly interesting feature of the theorem above is part (7). The two parabolic complexes $F_*\Omega^\sbt_X(\log D)_*$ and ${\mathbb{S}}(\Omega^1_{X'}(\log D')[-1])_*$ have different filtrations: the former has weights $\alpha_i=\frac{i}{p+1}$, while the latter has the trivial parabolic structure. On the other hand, under quasi-isomorphism the filtrations may change. We illustrate the phenomenon in the simplest case. Let $X=\Spec k[x]$ and $D=\{0\}$. Then the pushforward of the logarithmic de Rham complex is given by \[\left(x^pk[x]\hookrightarrow x^{p-1}k[x]\hookrightarrow...\hookrightarrow k[x]\right)\xrightarrow{d}\left(x^p\frac{k[x]dx}{x}\hookrightarrow...\hookrightarrow \frac{k[x]dx}{x}\right).\] The kernel of $d$ is the parabolic sheaf \[x^pk[x^p]\hookrightarrow x^pk[x^p]\hookrightarrow ...\hookrightarrow x^pk[x^p]\hookrightarrow k[x^p]\] which is isomorphic to ${\mathscr O}_{(X',D')}$. Similar calculations can be done for the cokernel.
\paragraph As an immediate consequence we obtain that the Hodge-to-de Rham spectral sequence for the logarithmic de Rham complex degenerates at ${}^1E$.
\begin{Corollary} \label{cor:end} Let $X$ be a smooth variety over a perfect field of characteristic $p>\dim X$, and $D$ a reduced normal crossing divisor on $X$. Assume that $(X,D)$ lifts to $W_2(k)$. Then, \[R^*\Gamma(X,\Omega^\sbt_X(\log D))=\bigoplus_{p+q=*} H^p(X,\Omega^q_{X}(\log D)).\] \end{Corollary}
\begin{Proof} By Theorem \ref{thm:equi} there exists an isomorphism \[F_*\Omega^\sbt_X(\log D)_*\cong {\mathbb{S}}(\Omega^1_{X'}(\log D')[-1])_*.\] In particular, setting $*=0$, we obtain an isomorphism \[F_*\Omega^\sbt_X(\log D)\cong {\mathbb{S}}(\Omega^1_{X'}(\log D')[-1])\] of objects in $D(X')$. Thus, \[R^*\Gamma(X',F_*\Omega^\sbt_X(\log D))=R^*\Gamma(X',{\mathbb{S}}(\Omega^1_{X'}(\log D')[-1]))=\bigoplus_{p+q=*} H^p(X',\Omega^q_{X'}(\log D')).\] On one hand, since $F$ is affine we have \[R^*\Gamma(X',F_*\Omega^\sbt_X(\log D))=R^*\Gamma(X,\Omega^\sbt_X(\log D)).\] On the other hand, since $X$ and $X'$ are abstractly isomorphic, we have \[\bigoplus_{p+q=*} H^p(X',\Omega^q_{X'}(\log D'))=\bigoplus_{p+q=*} H^p(X,\Omega^q_{X}(\log D))\] completing the proof.\qed \end{Proof}
\end{document}
\begin{document}
\title{Singularity-Free Spacecraft Attitude Control Using Variable-Speed Control Moment Gyroscopes}
\begin{abstract} This paper discusses spacecraft control using variable-speed CMGs. A new operational concept for VSCMGs is proposed. This new concept makes it possible to approximate the complex nonlinear system by a linear time-varying (LTV) system. As a result, an effective control system design method, Model Predictive Control (MPC) using robust pole assignment, can be used to design the spacecraft control system using VSCMGs. A nice feature of this design is that the control system does not have any singular point. A design example is provided. The simulation result shows the effectiveness of the proposed method. \end{abstract}
{\bf Keywords:} Spacecraft attitude control, control moment gyroscopes, reduced quaternion model.
\section{Introduction}
Control Moment Gyros (CMGs) are an important type of actuator used in spacecraft control because of their well-known torque amplification property \cite{kurokawa98}. The conventional use of a CMG keeps the flywheel spinning at a constant speed, while the torques of the CMG are produced by changing the gimbal's rotational speed \cite{jt04}. A more complicated operational concept is the so-called variable-speed control moment gyro (VSCMG), in which the flywheel speed of the CMG is allowed to change as well. This idea was first proposed by Ford in his Ph.D. dissertation \cite{ford97}, where he derived a mathematical model for VSCMGs which is now widely used in the literature. Because of this extra degree of freedom, a VSCMG can generate torques in the plane perpendicular to the gimbal axis, while a conventional CMG can only generate a torque in a single direction at any instant of time \cite{yt06}.
Existing designs of spacecraft control systems using CMGs or VSCMGs rely on the calculation of the desired torques and then determine the VSCMG's gimbal speed and flywheel speed. These designs have a fundamental problem because there are singular points where the gimbal speed and flywheel speed cannot be found from the desired torques. An extensive literature has focused on this implementation difficulty over the last few decades, for example, \cite{kurokawa98,fh00,yt04,kurokawa07,zmmt15} and references therein. Another difficulty associated with control system design using CMGs or VSCMGs is that the nonlinear dynamical models for these types of actuators are much more complicated than those of other actuators used for spacecraft attitude control systems. Most proposed designs, for example \cite{jt04,yt06,fh00,yt02, fh97,svj98,ma13,jh11}, use Lyapunov stability theory for nonlinear systems. There are two shortcomings of this design method: first, there is no systematic way to find the desired Lyapunov function, and second, the design considers only stability and not system performance.
In this paper, we propose a different operational concept for VSCMGs: the flywheels of the VSCMG cluster do not always spin at high speed; they spin at high speed only when they need to. The same is true for the gimbals. This operational strategy makes the origin (all state variables at zero) an equilibrium point, about which a linearized model can be established. Therefore, mature linear system design methods can be used, and system performance can be made part of the design. Additional advantages of the proposed operational concept are: (a) energy savings due to normally reduced spin speeds of the flywheels and gimbals, and therefore a reduction of operational cost; (b) seamless (singularity-free) implementation, because the control of the spacecraft is achieved by accelerating or decelerating the flywheels and gimbals, so no inversion from desired torques to gimbal and flywheel speeds is required.
It is worthwhile to point out that the linearized model is a linear time-varying (LTV) system. Design methods for linear time-invariant (LTI) systems cannot be applied directly to LTV systems. A popular design method for LTV systems is the so-called gain scheduling method, which has been discussed for several decades, for example, in \cite{rugh90, rs00,lr90,sb92}. The basic idea is to fix the time-varying model at a number of ``frozen'' operating points and to use a linear system design method for each of these ``frozen'' linear time-invariant models. When the parameters of the LTV system are not at these ``frozen'' points, interpolation is used to calculate the feedback gain matrix.
Although gain scheduling design has proved to be effective in many LTV applications, it has an intrinsic limitation for time-varying systems with many independent time-varying variables, which is the case for spacecraft control using VSCMGs. As we will see later, the control system matrices $({\bf A},{\bf B})$ have many independent time-varying parameters, and the computation required by the gain scheduling design is too large to be feasible. Therefore, we will consider another popular control system design method, the so-called Model Predictive Control (MPC) \cite{aw13}. According to a theorem in \cite{rugh93}, under certain conditions, the closed-loop LTV system designed by the MPC method is stable. To meet some of the stability conditions imposed on the LTV system \cite{rugh93}, we propose using the robust pole assignment design \cite{yt93,tity96} within the MPC design.
The remainder of the paper is organized as follows. Section 2 derives the spacecraft model using variable-speed CMGs. Section 3 discusses both the gain scheduling design and the MPC design method for spacecraft control using variable-speed CMGs. This analysis provides a technical basis for selecting the MPC design over the gain scheduling design for this problem. Section 4 provides a design example and simulation results. Section 5 summarizes the conclusions of the paper.
\section{Spacecraft model using variable-speed CMG}
Throughout the paper, we will repeatedly use a skew-symmetric matrix which is related to the cross product of two vectors. Let ${\bf a}=[ a_1,a_2,a_3 ]^{{\rm T}}$ and ${\bf b}=[ b_1,b_2,b_3 ]^{{\rm T}}$ be two three dimensional vectors. We denote a matrix \[ {\bf a}^{\times} =\left[ \begin{array}{ccc} 0 & -a_3 & a_2 \\ a_3 & 0 & -a_1 \\ -a_2 & a_1 & 0 \end{array} \right] \] such that the cross product of ${\bf a}$ and ${\bf b}$ is equivalent to a matrix and vector multiplication, i.e., ${\bf a} \times {\bf b} = {\bf a}^{\times} {\bf b}$.
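For concreteness, the following NumPy sketch (an illustration we add here; the helper name {\tt skew} is ours) builds ${\bf a}^{\times}$ and checks that ${\bf a}^{\times}{\bf b}$ agrees with the cross product:
\begin{verbatim}
import numpy as np

def skew(a):
    # a^x such that skew(a) @ b equals np.cross(a, b)
    return np.array([[0.0,   -a[2],  a[1]],
                     [a[2],   0.0,  -a[0]],
                     [-a[1],  a[0],  0.0]])

a = np.array([1.0, 2.0, 3.0])
b = np.array([-1.0, 0.5, 2.0])
assert np.allclose(skew(a) @ b, np.cross(a, b))
\end{verbatim}
The later code sketches in this paper reuse this helper.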
Assuming that there are $N$ variable-speed CMGs installed in a spacecraft, following the notations of \cite{ford97}, we define a matrix \begin{equation} {\bf A}_s = [ {\bf s}_{1}, {\bf s}_{2}, \ldots, {\bf s}_{N}] \end{equation} such that the columns of ${\bf A}_s$, ${\bf s}_{j}$ ($j=1, \ldots, N$), specify the unit spin axes of the wheels in the spacecraft body frame. Similarly, we define ${\bf A}_g= [ {\bf g}_{1}, {\bf g}_{2}, \ldots, {\bf g}_{N}]$ the matrix whose columns are the unit gimbal axes and ${\bf A}_t= [ {\bf t}_{1}, {\bf t}_{2}, \ldots, {\bf t}_{N}]$ the matrix whose columns are the unit axes of the transverse (torque) directions, both are represented in the spacecraft body frame. Whereas ${\bf A}_g$ is a constant matrix, the matrices ${\bf A}_s$ and ${\bf A}_t$ depend on the gimbal angles. Let $\boldsymbol{\gamma}=[\gamma_1, \ldots, \gamma_N]^{{\rm T}} \in [0,2\pi] \times \cdots \times [0,2\pi]:=\Pi$ be the vector of $N$ gimbal angles, \begin{equation} [ \dot{\gamma}_1, \ldots, \dot{\gamma_N}]^{{\rm T}} = \dot{\boldsymbol{\gamma}}:=\boldsymbol{\omega}_g = [ {\omega}_{g_1}, \ldots, {\omega}_{g_N}]^{{\rm T}} \label{dotGamma} \end{equation} be the vector of $N$ gimbal speed, then the following relations hold \cite{yt04} (see Figure \ref{fig:gimbalFrame}).
\begin{figure}
\caption{Spacecraft body with a single VSCMG.}
\label{fig:gimbalFrame}
\end{figure}
\begin{equation} \dot{{\bf s}}_i = \dot{{\gamma}}_i {\bf t}_i={{\omega}}_{g_i} {\bf t}_i, \hspace{0.1in} \dot{{\bf t}}_i = -\dot{{\gamma}}_i {\bf s}_i=-{{\omega}}_{g_i} {\bf s}_i, \hspace{0.1in} \dot{{\bf g}}_i = 0. \label{CMGcord} \end{equation} Denote \begin{equation} \boldsymbol{\Gamma}^c = {\rm diag} (\cos(\boldsymbol{\gamma})), \hspace{0.1in} \boldsymbol{\Gamma}^s = {\rm diag} (\sin(\boldsymbol{\gamma})). \label{diagcs} \end{equation} A different but related expression is given in \cite{ford97} \footnote{There are some typos in the signs in \cite{ford97} which are corrected in (\ref{ford1}) and (\ref{CMGmatrixDerivative}).}. Let ${\bf A}_{s_0}$ and ${\bf A}_{t_0}$ be the initial spin axis and transverse axis matrices at $\boldsymbol{\gamma_0}={\bf 0}$; then \begin{subequations} \begin{align} {\bf A}_s(\boldsymbol{\gamma}) = {\bf A}_{s_0} \boldsymbol{\Gamma}^c+{\bf A}_{t_0} \boldsymbol{\Gamma}^s, \\ {\bf A}_t(\boldsymbol{\gamma}) = {\bf A}_{t_0} \boldsymbol{\Gamma}^c-{\bf A}_{s_0} \boldsymbol{\Gamma}^s. \end{align} \label{ford1} \end{subequations} This gives \begin{subequations} \begin{align} \dot{{\bf A}}_s = {\bf A}_{t} {\rm diag} (\dot{\boldsymbol{\gamma}}) = {\bf A}_{t} {\rm diag} ( \boldsymbol{\omega}_g), \\ \dot{{\bf A}}_t = -{\bf A}_{s} {\rm diag} (\dot{\boldsymbol{\gamma}}) = -{\bf A}_{s} {\rm diag} (\boldsymbol{\omega}_g), \end{align} \label{CMGmatrixDerivative} \end{subequations} which are identical to the formulas of (\ref{CMGcord}). Let $J_{s_j}$, $J_{g_j}$, and $J_{t_j}$ be the spin axis inertia, the gimbal axis inertia, and the transverse axis inertia of the $j$-th CMG, and let three $N \times N$ matrices be defined as \begin{equation} {\bf J}_s = {\rm diag} (J_{s_j}), \hspace{0.1in} {\bf J}_g = {\rm diag} (J_{g_j}), \hspace{0.1in} {\bf J}_t = {\rm diag} (J_{t_j}). \end{equation} Let the spacecraft inertia matrix be ${\bf J}_b$; then the total inertia matrix including the CMG cluster is given by \cite{ford97} \begin{equation} {\bf J} ={\bf J}_b +{\bf A}_s {\bf J}_s {\bf A}_s^{{\rm T}}+{\bf A}_g {\bf J}_g {\bf A}_g^{{\rm T}}+{\bf A}_t {\bf J}_t {\bf A}_t^{{\rm T}}, \end{equation} which is a function of $\boldsymbol{\gamma}$, and $\boldsymbol{\gamma}$ is a variable depending on time $t$. Therefore, ${\bf J}$ is an implicit function of time $t$. Although ${\bf J}$ is a function of $\boldsymbol{\gamma}$, the dependence of ${\bf J}$ on $\boldsymbol{\gamma}$ is weak, especially when the size of the spacecraft main body is large \cite{yt06}. We therefore assume that $\dot{{\bf J}}=0$ as treated in \cite{yt02,fh97,svj98,ma13}. Let $\boldsymbol{\omega}=[ \omega_1,\omega_2, \omega_3]^{{\rm T}}$ be the spacecraft body angular rate with respect to the inertial frame, $\boldsymbol{\beta}=[\beta_1, \ldots, \beta_N]^{{\rm T}}$ be the vector of $N$ wheel angles, and \begin{equation} [ \dot{\beta}_1, \ldots, \dot{\beta}_{N}]^{{\rm T}} = \dot{ \boldsymbol{\beta}} :=\boldsymbol{\omega}_s = [ {\omega}_{s_1}, \ldots, {\omega}_{s_N}]^{{\rm T}} \end{equation} be the vector of $N$ wheel speeds. Denote \begin{equation} {\bf h}_s=[J_{s_1} \dot{\beta}_{1} , \ldots, J_{s_N} \dot{\beta}_{N} ]^{{\rm T}} ={\bf J}_s \boldsymbol{\omega}_s, \end{equation} \begin{equation} {\bf h}_g=[J_{g_1} \dot{\gamma}_{1},\ldots, J_{g_N} \dot{\gamma}_{N}]^{{\rm T}} ={\bf J}_g \boldsymbol{\omega}_g, \end{equation} and let ${\bf h}_t$ be defined analogously; thus ${\bf h}_s$, ${\bf h}_g$, and ${\bf h}_t$ are the $N$-dimensional vectors representing the components of the absolute angular momentum of the CMGs about their spin axes, gimbal axes, and transverse axes, respectively.
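As an illustration of (\ref{diagcs}), (\ref{ford1}) and the total inertia formula, the following sketch (ours, not part of any flight code) updates ${\bf A}_s(\boldsymbol{\gamma})$ and ${\bf A}_t(\boldsymbol{\gamma})$ from the initial axis matrices and assembles ${\bf J}$; the diagonal matrices ${\bf J}_s$, ${\bf J}_g$, ${\bf J}_t$ are passed as $N \times N$ NumPy arrays:
\begin{verbatim}
import numpy as np

def spin_transverse(gamma, As0, At0):
    # A_s and A_t as functions of the gimbal angles, cf. (ford1)
    Gc, Gs = np.diag(np.cos(gamma)), np.diag(np.sin(gamma))
    return As0 @ Gc + At0 @ Gs, At0 @ Gc - As0 @ Gs

def total_inertia(Jb, As, Ag, At, Js, Jg, Jt):
    # J = J_b + A_s J_s A_s' + A_g J_g A_g' + A_t J_t A_t'
    return Jb + As @ Js @ As.T + Ag @ Jg @ Ag.T + At @ Jt @ At.T
\end{verbatim}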
The total angular momentum of the spacecraft with a cluster of CMGs represented in the body frame is given as \begin{equation} {\bf h} = {\bf J}_b \boldsymbol{\omega} +\sum_{i=1}^{N} {\bf s}_i J_{s_i} \dot{\beta}_{i} +\sum_{i=1}^{N} {\bf g}_i J_{g_i} \dot{\gamma}_{i} = {\bf J}_b \boldsymbol{\omega} +{\bf A}_s {\bf h}_s + {\bf A}_g {\bf h}_g = {\bf J}_b \boldsymbol{\omega} +{\bf A}_s {\bf J}_s \boldsymbol{\omega}_s + {\bf A}_g {\bf J}_g \boldsymbol{\omega}_g. \label{CMGh} \end{equation} Taking derivative of (\ref{CMGh}) and using (\ref{CMGcord}) and $\dot{{\bf J}}=0$, noticing that gimbal axes are fixed, we have \begin{eqnarray} \dot{{\bf h}} & = & {\bf J}_b \dot{\boldsymbol{\omega} }
+\sum_{i=1}^{N} \left( \dot{{\bf s}}_i J_{s_i} \dot{\beta}_{i} + {{\bf s}_i} J_{s_i} \ddot{\beta}_{i} \right) +\sum_{i=1}^{N} \left( \dot{{\bf g}}_i J_{g_i} \dot{\gamma}_{i} + {{\bf g}}_i J_{g_i} \ddot{\gamma}_{i} \right) \nonumber \\
& = & {\bf J}_b \dot{\boldsymbol{\omega} }
+\sum_{i=1}^{N} \left( \dot{\gamma}_i {\bf t}_i J_{s_i} \dot{\beta}_{i} + {{\bf s}_i} J_{s_i} \ddot{\beta}_{i} \right) +\sum_{i=1}^{N} {{\bf g}}_i J_{g_i} \ddot{\gamma}_{i} \nonumber \\
& = & -{\boldsymbol{\omega}} \times {\bf h} + {\bf t}_e, \end{eqnarray} where ${\bf t}_e$ is the external torque. Denote $\boldsymbol{\Omega}_s={\rm diag}(\boldsymbol{\omega}_s)$ and $\boldsymbol{\Omega}_g={\rm diag}(\boldsymbol{\omega}_g)$. This equation can be written as a compact form as follows. \begin{eqnarray} {\bf J}_b \dot{\boldsymbol{\omega} }
+ {\bf A}_t {\bf J}_{s} \boldsymbol{\Omega}_{s} \boldsymbol{\omega}_g + {\bf A}_s {\bf J}_{s} \dot{\boldsymbol{\omega}}_{s} + {{\bf A}}_g {\bf J}_{g} \dot{\boldsymbol{\omega}}_{g} = -{\boldsymbol{\omega}} \times ({\bf J}_b \boldsymbol{\omega} +{\bf A}_s {\bf J}_s \boldsymbol{\omega}_s + {\bf A}_g {\bf J}_g \boldsymbol{\omega}_g) + {\bf t}_e, \end{eqnarray} Note that the torques generated by wheel acceleration or deceleration in the directions defined by ${\bf A}_s$ are given by \begin{equation} {\bf t}_s=-{\bf J}_s \boldsymbol{\dot{\omega}}_s = [ t_{s_1}, \ldots, t_{s_N} ]^{{\rm T}} \label{CMGtw} \end{equation} (note that vectors ${\bf t}_i$ in ${\bf A}_t$ are axes and scalars $t_{s_i}$ in ${\bf t}_s$ are torques) and the torques generated by gimbal acceleration or deceleration in the directions defined by ${\bf A}_g$ are given by \begin{equation} {\bf t}_g=-{\bf J}_g \boldsymbol{\dot{\omega}}_g = [ t_{g_1}, \ldots, t_{g_N} ]^{{\rm T}}, \label{CMGtg} \end{equation} the dynamical equation can be expressed as \begin{equation} {\bf J}_b \dot{\boldsymbol{\omega} }
+ {\bf A}_t {\bf J}_{s} \boldsymbol{\Omega}_{s} \boldsymbol{\omega}_g
+ {\boldsymbol{\omega}} \times ({\bf J}_b \boldsymbol{\omega} +{\bf A}_s {\bf J}_s \boldsymbol{\omega}_s + {\bf A}_g {\bf J}_g \boldsymbol{\omega}_g) = {\bf A}_s {\bf t}_s + {{\bf A}}_g {\bf t}_{g}+ {\bf t}_e. \label{CMGdynamics} \end{equation} Let \begin{equation} \bar{{\bf q}}=[q_0, q_1, q_2, q_3]^{{\rm T}}=[q_0, {\bf q}^{{\rm T}}]^{{\rm T}}= \left[ \cos(\frac{\alpha}{2}), \hat{{\bf e}}^{{\rm T}}\sin(\frac{\alpha}{2}) \right]^{{\rm T}} \end{equation} be the quaternion representing the rotation of the body frame relative to the inertial frame, where $\hat{{\bf e}}$ is the unit length rotational axis and $\alpha$ is the rotation angle about $\hat{{\bf e}}$. Therefore, the reduced kinematics equation becomes \cite{yang10} \begin{eqnarray} \nonumber \left[ \begin{array} {c} \dot{q}_1 \\ \dot{q}_2 \\ \dot{q}_3 \end{array} \right] & = & \frac{1}{2} \left[ \begin{array} {ccc} \sqrt{1-q_1^2-q_2^2-q_3^2} & -q_3 & q_2 \\ q_3 & \sqrt{1-q_1^2-q_2^2-q_3^2} & -q_1 \\ -q_2 & q_1 & \sqrt{1-q_1^2-q_2^2-q_3^2} \\ \end{array} \right] \left[ \begin{array} {c} \omega_{1} \\ \omega_{2} \\ \omega_{3} \end{array} \right] \\ & = & {\bf g}(q_1,q_2, q_3, \boldsymbol{\omega}), \label{nadirModel2} \end{eqnarray} or simply \begin{equation} \dot{{\bf q}}={\bf g}({\bf q}, \boldsymbol{\omega}). \label{gfunction} \end{equation} The nonlinear time-varying spacecraft control system model can be written as follows: \begin{eqnarray} \left[ \begin{array}{c} \dot{\boldsymbol{\omega}} \\ \dot{\boldsymbol{\omega}_s} \\ \dot{\boldsymbol{\omega}_g} \\ \dot{{\bf q}} \end{array} \right] & = & \left[ \begin{array}{c} -{\bf J}_b^{-1} \left[ {\bf A}_t {\bf J}_{s} \boldsymbol{\Omega}_{s} \boldsymbol{\omega}_g
+ { \boldsymbol{\omega}} \times ({\bf J}_b \boldsymbol{\omega} +{\bf A}_s {\bf J}_s \boldsymbol{\omega}_s + {\bf A}_g {\bf J}_g \boldsymbol{\omega}_g) \right] \\ {\bf 0} \\ {\bf 0} \\ {\bf g}({\bf q}, \boldsymbol{\omega}) \end{array} \right] + \left[ \begin{array}{c} {\bf J}_b^{-1} \left( {\bf A}_s {\bf t}_s + {{\bf A}}_g {\bf t}_{g}+ {\bf t}_e \right) \\ -{\bf J}_s^{-1} {\bf t}_s \\ -{\bf J}_g^{-1} {\bf t}_g \\ {\bf 0} \end{array} \right] \nonumber \\ & = & {\bf F} ( \boldsymbol{\omega}, \boldsymbol{\omega}_g,\boldsymbol{\omega}_s, {\bf q}, t) + {\bf G} ({\bf t}_s,{\bf t}_g,{\bf t}_e, t), \label{CMGtimeVarying} \end{eqnarray} or simply \begin{eqnarray} \dot{{\bf x}} ={\bf F}({\bf x},\boldsymbol{\gamma}(t)) +{\bf G}({\bf u},{\bf t}_e, \boldsymbol{\gamma}(t)), \end{eqnarray} where the state variable vector is ${\bf x}= [ {\boldsymbol{\omega}}^{{\rm T}},\boldsymbol{\omega}_s^{{\rm T}}, \boldsymbol{\omega}_g^{{\rm T}}, {\bf q}^{{\rm T}}]^{{\rm T}}$, the control variable vector is ${\bf u} = [{\bf t}_s^{{\rm T}},{\bf t}_g^{{\rm T}}]^{{\rm T}}$, disturbance torque vector is ${\bf t}_e$, and ${\bf F}$ and ${\bf G}$ are functions of time $t$ because the parameters of $\boldsymbol{\omega}$, $\boldsymbol{\omega}_s$, $\boldsymbol{\omega}_g$, ${\bf q}$, ${\bf A}_s$ and ${\bf A}_t$ are functions of time $t$. The system dimension is $n=2N+6$. The control input dimension is $2N$.
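The right-hand side of (\ref{CMGtimeVarying}) can be coded directly. The sketch below is only an illustration of the model under the assumption $\dot{{\bf J}}={\bf 0}$ made above; the function and variable names are ours, the state is stacked as ${\bf x}=[\boldsymbol{\omega}^{{\rm T}},\boldsymbol{\omega}_s^{{\rm T}},\boldsymbol{\omega}_g^{{\rm T}},{\bf q}^{{\rm T}}]^{{\rm T}}$, and the input is ${\bf u}=[{\bf t}_s^{{\rm T}},{\bf t}_g^{{\rm T}}]^{{\rm T}}$:
\begin{verbatim}
import numpy as np

def state_derivative(x, u, te, As, At, Ag, Jb, Js, Jg, N):
    # Nonlinear model (CMGtimeVarying) with Jdot = 0; illustration only.
    w, ws, wg, q = x[:3], x[3:3+N], x[3+N:3+2*N], x[3+2*N:]
    ts, tg = u[:N], u[N:]
    h = Jb @ w + As @ Js @ ws + Ag @ Jg @ wg        # total momentum, cf. (CMGh)
    wdot = np.linalg.solve(
        Jb, -At @ Js @ (ws * wg) - np.cross(w, h) + As @ ts + Ag @ tg + te)
    wsdot = -np.linalg.solve(Js, ts)                # t_s = -J_s * wsdot
    wgdot = -np.linalg.solve(Jg, tg)                # t_g = -J_g * wgdot
    q0 = np.sqrt(max(1.0 - q @ q, 0.0))
    qdot = 0.5 * (q0 * w + np.cross(q, w))          # reduced kinematics
    return np.concatenate([wdot, wsdot, wgdot, qdot])
\end{verbatim}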
\section{Spacecraft attitude control using variable-speed CMG}
We consider two design methods for spacecraft attitude control using variable-speed CMGs. But first, we approximate the nonlinear time-varying spacecraft control system model by a linear time-varying spacecraft control system model near the equilibrium point $ \boldsymbol{\omega} ={\bf 0}$, $ \boldsymbol{\omega}_s = {\bf 0} $,
$ \boldsymbol{\omega}_g ={\bf 0}$, and $ {\bf q} = {\bf 0}$ so that an effective design considering system performance can be carried out using the simplified linear time-varying model. Denote the equilibrium by ${\bf x}_e ={\bf 0} = [\boldsymbol{\omega}^{{\rm T}}, \boldsymbol{\omega}_s^{{\rm T}},\boldsymbol{\omega}_g^{{\rm T}}, {\bf q}^{{\rm T}} ]^{{\rm T}} $ and \begin{eqnarray} {\bf F}_1=-{\bf J}_b^{-1} \left[ {\bf A}_t {\bf J}_{s} \boldsymbol{\Omega}_{s} \boldsymbol{\omega}_g
+ {\boldsymbol{\omega}} \times ({\bf J}_b \boldsymbol{\omega} +{\bf A}_s {\bf J}_s \boldsymbol{\omega}_s + {\bf A}_g {\bf J}_g \boldsymbol{\omega}_g) \right], \hspace{0.1in} {\bf F}_2={\bf F}_3 ={\bf 0}, \hspace{0.1in} {\bf F}_4={\bf g}({\bf q}, \boldsymbol{\omega}), \end{eqnarray} \begin{equation} {\bf G}_1={\bf J}_b^{-1} \left( {\bf A}_s {\bf t}_s + {{\bf A}}_g {\bf t}_{g}+ {\bf t}_e \right), \hspace{0.1in} {\bf G}_2 = - {\bf J}_s^{-1} {\bf t}_s, \hspace{0.1in} {\bf G}_3 = - {\bf J}_g^{-1} {\bf t}_g, \hspace{0.1in} {\bf G}_4 = {\bf 0}. \end{equation} Taking partial derivative for ${\bf F}_1$, we have \begin{equation} \frac{\partial {\bf F}_1}{\partial \boldsymbol{\omega} }
= {\bf J}_b^{-1} [ ({\bf A}_s {\bf J}_s \boldsymbol{\omega}_s )^{\times} + ({\bf A}_g {\bf J}_g \boldsymbol{\omega}_g )^{\times} - \boldsymbol{\omega}^{\times} {\bf J}_b + ({\bf J}_b \boldsymbol{\omega} )^{\times} ] :={\bf F}_{11}, \label{F11} \end{equation} \begin{equation} \frac{\partial {\bf F}_1}{\partial \boldsymbol{\omega}_s }
= -{\bf J}_b^{-1} [ {\bf A}_t {\bf J}_s \boldsymbol{\Omega}_g + \boldsymbol{\omega}^{\times} {\bf A}_s {\bf J}_s ] :={\bf F}_{12}, \label{F12} \end{equation} \begin{equation} \frac{\partial {\bf F}_1}{\partial \boldsymbol{\omega}_g }
= -{\bf J}_b^{-1} [ {\bf A}_t {\bf J}_s \boldsymbol{\Omega}_{s} + \boldsymbol{\omega}^{\times} {\bf A}_g {\bf J}_g ] :={\bf F}_{13}, \label{F13} \end{equation} \begin{equation} \frac{\partial {\bf F}_1}{\partial {\bf q} }
={\bf 0}. \label{F14} \end{equation} Taking partial derivative for ${\bf F}_4$, we have \begin{equation} \frac{\partial {\bf F}_4}{\partial \boldsymbol{\omega}}
=\frac{1}{2} \left[ \begin{array} {ccc} \sqrt{1-q_1^2-q_2^2-q_3^2} & -q_3 & q_2 \\ q_3 & \sqrt{1-q_1^2-q_2^2-q_3^2} & -q_1 \\ -q_2 & q_1 & \sqrt{1-q_1^2-q_2^2-q_3^2} \\ \end{array} \right]_{\substack{ {\bf q} \approx 0}} \approx \frac{1}{2} ( {\bf I} + {\bf q}^{\times}) :={\bf F}_{41}, \label{F41} \end{equation} since $q_0=\sqrt{1-q_1^2-q_2^2-q_3^2}$ and $\frac{\partial q_0}{\partial q_i}=-\frac{q_i}{q_0}$ for $i=1,2,3$, we have \begin{equation} \frac{\partial {\bf F}_4}{\partial {\bf q} }
=\frac{1}{2} \left[ \begin{array}{ccc} -\frac{q_1}{q_0} \omega_1 & \omega_3 -\frac{q_2}{q_0} \omega_1 & -\omega_2 -\frac{q_3}{q_0} \omega_1 \\ -\omega_3 -\frac{q_1}{q_0} \omega_2 & -\frac{q_2}{q_0} \omega_2 & \omega_1 -\frac{q_3}{q_0} \omega_2 \\ \omega_2 -\frac{q_1}{q_0} \omega_3 & -\omega_1 -\frac{q_2}{q_0} \omega_3 & -\frac{q_3}{q_0} \omega_3 \end{array} \right]_{\substack{ \boldsymbol{\omega} \approx 0 \\ {\bf q} \approx 0}} \approx - \frac{1}{2} \boldsymbol{\omega}^{\times} :={\bf F}_{44}. \label{F44} \end{equation} Therefore, the linearized time-varying model is given by \begin{eqnarray} \left[ \begin{array}{c} \dot{\boldsymbol{\omega}} \\ \dot{\boldsymbol{\omega}_s} \\ \dot{\boldsymbol{\omega}_g} \\ \dot{{\bf q}} \end{array} \right] & = & \left[ \begin{array}{cccccc} {\bf F}_{11} & {\bf F}_{12} & {\bf F}_{13} & {\bf 0} \\ {\bf 0} & {\bf 0} & {\bf 0} & {\bf 0} \\ {\bf 0} & {\bf 0} & {\bf 0} & {\bf 0} \\ {\bf F}_{41} & {\bf 0} & {\bf 0} & {\bf F}_{44} \\ \end{array} \right] \left[ \begin{array}{c} {\boldsymbol{\omega}} \\ {\boldsymbol{\omega}_s} \\ {\boldsymbol{\omega}_g} \\ {{\bf q}} \end{array} \right] + \left[ \begin{array}{cc} {\bf J}_b^{-1} {\bf A}_s & {\bf J}_b^{-1} {{\bf A}}_g \\ -{\bf J}_s^{-1} & {\bf 0} \\ {\bf 0} & -{\bf J}_g^{-1} \\ {\bf 0} & {\bf 0} \end{array} \right] \left[ \begin{array}{c} {{\bf t}_s} \\ {{\bf t}_g} \end{array} \right] + \left[ \begin{array}{c} {\bf J}_b^{-1} \\ {\bf 0} \\ {\bf 0} \\ {\bf 0} \end{array} \right] {{\bf t}_e} \nonumber \\ & = & {\bf A} {\bf x} + {\bf B} {\bf u} +{\bf C} {\bf t}_e, \label{CMGlinear} \end{eqnarray} where ${\bf C}$ is a time-invariant matrix. The linearized system is time-varying because ${\boldsymbol{\omega}}$, $\boldsymbol{\omega}_s$, $\boldsymbol{\omega}_g$, ${\bf q}$, ${\bf A}_s$ and ${\bf A}_t$ in ${\bf A}$ and ${\bf B}$ are all functions of $t$. \begin{remark} It is worthwhile to note that the linearized system matrices ${\bf A}$, ${\bf B}$, and ${\bf C}$ would be time-invariant if we approximated the linear system exactly at the equilibrium point ${\bf x}_e={\bf 0}$. However, such a linear time-invariant system is not controllable. Therefore, we take the first order approximation for ${\bf A}$ and ${\bf B}$, which leads to a controllable linear time-varying system. \end{remark} In theory, given ${\bf A}_{s_0}$, ${\bf A}_{t_0}$, and ${\boldsymbol{\omega}_g}$, the matrices ${\bf A}_s$ and ${\bf A}_t$ can be calculated by integrating (\ref{CMGmatrixDerivative}). But using (\ref{diagcs}) and (\ref{ford1}) is a better method because it ensures that the columns of ${\bf A}_s$ and ${\bf A}_t$ are unit vectors as required. Since the $i$th column of ${\bf A}_s$ and the $i$th column of ${\bf A}_t$, $i=1,\ldots,N$, must be perpendicular to each other, an even better method to update ${\bf A}_t$ is to use the cross product \begin{equation} {\bf t}_i = {\bf g}_i \times {\bf s}_i, \hspace{0.1in} i=1,\ldots,N, \label{crossperpendicular} \end{equation} to prevent ${\bf t}_i$ and ${\bf s}_i$ from losing perpendicularity due to the accumulation of numerical error. In simulation, integration of (\ref{dotGamma}) can be used to obtain $\boldsymbol{\gamma}$, which is needed in the computation of (\ref{diagcs}); in engineering practice, the encoder measurements should be used to obtain $\boldsymbol{\gamma}$.
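To make the block structure of (\ref{CMGlinear}) explicit, the following sketch (again ours, for illustration only) assembles the time-varying matrices ${\bf A}$ and ${\bf B}$ from the current state and the current ${\bf A}_s$, ${\bf A}_t$, using the helper {\tt skew} introduced earlier for ${\bf a}^{\times}$:
\begin{verbatim}
import numpy as np

def linearized_AB(w, ws, wg, q, As, At, Ag, Jb, Js, Jg, N):
    # A (n x n) and B (n x 2N) of (CMGlinear), n = 2N + 6; illustration only.
    Jbinv = np.linalg.inv(Jb)
    F11 = Jbinv @ (skew(As @ Js @ ws) + skew(Ag @ Jg @ wg)
                   - skew(w) @ Jb + skew(Jb @ w))                # (F11)
    F12 = -Jbinv @ (At @ Js @ np.diag(wg) + skew(w) @ As @ Js)   # (F12)
    F13 = -Jbinv @ (At @ Js @ np.diag(ws) + skew(w) @ Ag @ Jg)   # (F13)
    F41 = 0.5 * (np.eye(3) + skew(q))                            # (F41)
    F44 = -0.5 * skew(w)                                         # (F44)
    n = 2 * N + 6
    A = np.zeros((n, n))
    A[:3, :3], A[:3, 3:3+N], A[:3, 3+N:3+2*N] = F11, F12, F13
    A[3+2*N:, :3], A[3+2*N:, 3+2*N:] = F41, F44
    B = np.zeros((n, 2 * N))
    B[:3, :N], B[:3, N:] = Jbinv @ As, Jbinv @ Ag
    B[3:3+N, :N] = -np.linalg.inv(Js)
    B[3+N:3+2*N, N:] = -np.linalg.inv(Jg)
    return A, B
\end{verbatim}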
Assume that the closed-loop linear time-varying system is given by \begin{equation} \dot{{\bf x}} = \bar{{\bf A}}(t) {\bf x}(t), \hspace{0.1in} {\bf x}(t_0) ={\bf x}_0. \label{timeVarying} \end{equation} It is well-known that even if every point-wise eigenvalue $\lambda(t)$ of $\bar{{\bf A}}(t)$ has real part ${\cal R}_e [\lambda (t)]$ in the open left half complex plane for all $t$, the system may not be stable \cite[pages 113-114]{rugh93}. But the following theorem (cf. \cite[pages 117-119]{rugh93}) provides a useful stability criterion for the closed-loop system (\ref{timeVarying}). \begin{theorem} Suppose for the linear time-varying system (\ref{timeVarying}) with $\bar{{\bf A}}(t)$ continuously differentiable there exist finite positive constants $\alpha$, $\mu$ such that, for all
$t$, $\| \bar{{\bf A}}(t) \| \le \alpha $ and every point-wise eigenvalue of $\bar{{\bf A}}(t)$ satisfies ${\cal R}_e [\lambda (t)] \le -\mu$. Then there exists a positive constant $\beta$ such that if the time derivative of $\bar{{\bf A}}(t)$ satisfies
$\| \dot{\bar{{\bf A}}}(t) \| \le \beta$ for all $t$, the state equation is uniformly exponentially stable. \label{rughTimeVarying} \end{theorem}
This theorem is the theoretical basis for the linear time-varying control system design. At a minimum, we need ${\cal R}_e [\lambda (t)] \le -\mu$ to hold.
\subsection{Gain scheduling control}
Gain scheduling control design is fully discussed in \cite{rugh90} and it seems to be applicable to this LTV system. The main idea of gain scheduling is: 1) select a set of fixed parameter values, which represent the range of the plant dynamics, and design a linear time-invariant gain for each; and 2) in between operating points, interpolate the gain using the designs for the fixed parameter values that surround the operating point. As an example, for $i=1,\ldots,N$, let $\gamma_{i} \in \{ 2\pi/p_{\gamma}, 4\pi/p_{\gamma}, \cdots,2\pi \}$ be a set of $p_{\gamma}$ fixed points equally spread in $[0,2\pi]$. Then, for $N$ CMGs, there are $p_{\gamma}^N$ possible combinations of fixed parameter values. For example, if $N=4$ and $p_{\gamma}=8$, we can represent the grid composed of these fixed points in matrix form as follows: \begin{equation} \left[ \begin{array}{cccccccc} \pi/4 & \pi/2 & 3\pi/4 & \pi & 5\pi/4 & 3\pi/2 & 7\pi/4 & 2\pi \\ \pi/4 & \pi/2 & 3\pi/4 & \pi & 5\pi/4 & 3\pi/2 & 7\pi/4 & 2\pi \\ \pi/4 & \pi/2 & 3\pi/4 & \pi & 5\pi/4 & 3\pi/2 & 7\pi/4 & 2\pi \\ \pi/4 & \pi/2 & 3\pi/4 & \pi & 5\pi/4 & 3\pi/2 & 7\pi/4 & 2\pi \end{array} \right], \label{grid} \end{equation} and each fixed $\boldsymbol{\gamma}$ is a vector composed of $\gamma_i$ ($i=1,2,3,4$), where $\gamma_i$ can be any element of the $i$th row. If $\boldsymbol{\gamma}$ is not a fixed point, then each $\gamma_i$ lies between two adjacent grid points, which we denote $\kappa(i)$ and $\kappa(i) +1$, for all $i \in \{1,\ldots,N\}$. Assume that $\gamma_i$ is in the interior of $(\kappa(i), \kappa(i) +1)$ for all $i \in \{1,\ldots,N\}$. Then, $\boldsymbol{\gamma}$ meets the following conditions: \begin{equation} \boldsymbol{\gamma} = \left[ \begin{array}{c} \gamma_1 \in (\kappa(1), \kappa(1)+1) \\ \vdots \\ \gamma_N \in (\kappa(N), \kappa(N)+1) \end{array} \right]. \label{vertex} \end{equation} Using the previous example of (\ref{grid}), if $\boldsymbol{\gamma} =\left[ \frac{5\pi}{8},\frac{3\pi}{8},\frac{7\pi}{16}, \frac{15\pi}{8} \right]^{{\rm T}}$, then $\boldsymbol{\gamma} \in \left[ (\frac{\pi}{2},\frac{3\pi}{4}), (\frac{\pi}{4},\frac{\pi}{2}), (\frac{\pi}{4},\frac{\pi}{2}), (\frac{7\pi}{4},{2\pi}) \right]^{{\rm T}}$. To use gain scheduling control, we also need to consider fixed points for ${\boldsymbol{\omega}}$, $\boldsymbol{\omega}_s$, $\boldsymbol{\omega}_g$, and ${\bf q}$ in their possible operational ranges. Let $p_w$, $p_{w_s}$, $p_{w_g}$, and $p_q$ be the numbers of fixed points for ${\boldsymbol{\omega}}$, $\boldsymbol{\omega}_s$, $\boldsymbol{\omega}_g$, and ${\bf q}$. The total number of vertices of the entire grid (covering all of the time-varying parameters) will be $p_{\gamma}^N p_w^3 p_{w_s}^N p_{w_g}^N p_q^3$.
For each of these $p_{\gamma}^N p_w^3 p_{w_s}^N p_{w_g}^N p_q^3$ fixed models, we need to conduct a control design to calculate the feedback gain matrix for the ``frozen'' model. If the system (\ref{CMGlinear}) at time $t$ happens to have all parameters equal to fixed points, we can use a ``frozen'' feedback gain to control the system (\ref{CMGlinear}). Otherwise, we need to construct a gain matrix by interpolating among $2^{3N+6}$ neighboring ``frozen'' gain matrices. Assuming that each parameter has a moderate number of fixed points, say $8$, and the control system has $N=4$ gimbals, the total number of fixed models will be $8^{18}$, each requiring the computation of a feedback gain matrix, which is a computationally infeasible task, as the calculation below illustrates.
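The count is easy to verify; with the example values $p_{\gamma}=p_w=p_{w_s}=p_{w_g}=p_q=8$ and $N=4$ (our illustrative choice), a two-line computation gives the figure quoted above:
\begin{verbatim}
p, N = 8, 4                               # fixed points per parameter, VSCMGs
print(p**N * p**3 * p**N * p**N * p**3)   # 8**18 = 18014398509481984 models
\end{verbatim}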
\subsection{Model Predictive Control}
Unlike the gain scheduling control design, in which most computation is done off-line, model predictive control computes the feedback gain matrix on-line for the linear system (\ref{CMGlinear}), in which the ${\bf A}$ and ${\bf B}$ matrices are updated in every sampling period. It is straightforward to verify that for any given $\boldsymbol{\gamma}$, if ${\bf x} \neq {\bf x}_e$, the linear system (\ref{CMGlinear}) is controllable. In theory, one can use robust pole assignment \cite{yt93,tity96}, LQR design \cite{lvs12}, or ${\bf H}_{\infty}$ design \cite{zdg96} for the on-line design, but ${\bf H}_{\infty}$ design costs significantly more computational time and should not be considered for this on-line design problem. Since the LTV system design should meet the condition ${\cal R}_e [\lambda (t)] \le -\mu$ required in Theorem \ref{rughTimeVarying}, robust pole assignment design is clearly a better choice than LQR design for this purpose. Another attractive feature of the robust pole assignment design is that the perturbations of the closed-loop eigenvalues between sampling periods are expected to be small. It is worthwhile to note that the robust pole assignment design of \cite{tity96} minimizes an upper bound of the ${\bf H}_{\infty}$ norm, which means that the design is robust to modeling error and reduces the impact of disturbance torques on the system output \cite{yang96,yang14}. Additional merits of this method, such as the computational speed that is important for the on-line design, are discussed in \cite{psnyst14}. Therefore, we use the method of \cite{tity96} in the proposed design.
The proposed design algorithm is given as follows:
\begin{algorithm} {\ } \\ Data: ${\bf J}_b$, ${\bf J}_s$, ${\bf J}_g$, and ${\bf A}_g$. \hspace{0.1in} {\ } \\ Initial condition: ${\bf x}={\bf x}_0$, $\boldsymbol{\gamma}=\boldsymbol{\gamma}_0$, ${\bf A}_{s_0}$, and ${\bf A}_{t_0}$. {\ } \\ \begin{itemize} \item[] Step 1: Update ${\bf A}$ and ${\bf B}$ based on the latest $\boldsymbol{\gamma}$ and ${\bf x}$. \item[] Step 2: Calculate the gain ${\bf K}$ using robust pole assignment algorithm {\tt robpole} (cf. \cite{tity96}). \item[] Step 3: Apply feedback ${\bf u}={\bf K} {\bf x}$ to (\ref{CMGtimeVarying}) or (\ref{CMGlinear}). \item[] Step 4: Update $\boldsymbol{\gamma}$ and ${\bf x} = [ {\boldsymbol{\omega}}^{{\rm T}}, \boldsymbol{\omega}_s^{{\rm T}}, \boldsymbol{\omega}_g^{{\rm T}}, {\bf q}^{{\rm T}}]^{{\rm T}}$. Go back to Step 1. \end{itemize} \label{onLine} \end{algorithm}
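A skeletal implementation of Algorithm \ref{onLine} is sketched below. This is not the authors' code: the robust pole assignment routine {\tt robpole} of \cite{tity96} is a Matlab function, so SciPy's {\tt place\_poles} is substituted purely as a stand-in, forward Euler is used as the simplest possible integrator, and {\tt spin\_transverse}, {\tt linearized\_AB} and {\tt state\_derivative} refer to the illustrative helpers sketched earlier. The argument {\tt poles} holds the $2N+6$ desired closed-loop poles.
\begin{verbatim}
import numpy as np
from scipy.signal import place_poles

def mpc_loop(x0, gamma0, As0, At0, Ag, Jb, Js, Jg, poles, dt, n_steps,
             te=np.zeros(3), N=4):
    # Steps 1-4 of Algorithm 1 with a pole-placement stand-in for robpole.
    x, gamma = x0.copy(), gamma0.copy()
    for _ in range(n_steps):
        As, At = spin_transverse(gamma, As0, At0)              # Step 1
        A, B = linearized_AB(x[:3], x[3:3+N], x[3+N:3+2*N],
                             x[3+2*N:], As, At, Ag, Jb, Js, Jg, N)
        K = -place_poles(A, B, poles).gain_matrix              # Step 2
        u = K @ x                                              # Step 3
        x = x + dt * state_derivative(x, u, te, As, At, Ag, Jb, Js, Jg, N)
        gamma = gamma + dt * x[3+N:3+2*N]                      # Step 4
    return x
\end{verbatim}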
\section{Simulation test}
\begin{figure}
\caption{VSCMG system with pyramid configuration concept.}
\label{fig:pyramid1}
\end{figure}
\begin{figure}
\caption{VSCMG system with pyramid configuration.}
\label{fig:pyramid2}
\end{figure}
The proposed design method is simulated using the data in \cite{jt04,yt04,yt02}. We assume that the four variable-speed CMGs are mounted in a pyramid configuration as shown in Figures \ref{fig:pyramid1} and \ref{fig:pyramid2}. The angle of each pyramid side to its base is $\theta=54.75$ degrees; the inertia matrix of the spacecraft is given by \cite{yt02} as \begin{equation} {\bf J}_b = \left[ \begin{array}{ccc} 15053 & 3000 & -1000 \\ 3000 & 6510 & 2000 \\ -1000 & 2000 & 11122 \end{array} \right] \hspace{0.1in} \mbox{kg}\cdot m^2. \end{equation} The spin axis inertia matrix is given by ${\bf J}_s = {\rm diag} ( 0.7, 0.7, 0.7, 0.7) \hspace{0.04in} \mbox{kg}\cdot m^2$ and the gimbal axis inertia matrix is given by ${\bf J}_g={\rm diag}( 0.1, 0.1, 0.1, 0.1) \hspace{0.04in} \mbox{kg}\cdot m^2$. The initial wheel speeds are $2 \pi$ radians per second for all wheels. The initial gimbal speeds are all zero. The initial spacecraft body rate vector is randomly generated by the Matlab command {\tt rand(3,1)}$\times 10^{-3}$ and the initial spacecraft attitude vector is a reduced quaternion randomly generated by {\tt rand(3,1)}$\times 10^{-1}$. The gimbal axis matrix is fixed and given by \cite{yt04} (cf. Figures \ref{fig:pyramid1} and \ref{fig:pyramid2}) \begin{equation} {\bf A}_g = \left[ \begin{array}{cccc} \sin(\theta) & 0 & -\sin(\theta) & 0 \\ 0 & \sin(\theta) & 0 & -\sin(\theta) \\ \cos(\theta) & \cos(\theta) & \cos(\theta) & \cos(\theta) \end{array} \right]. \label{gimbalM} \end{equation} The initial wheel axis matrix can be obtained using Figures \ref{fig:pyramid1} and \ref{fig:pyramid2} and is given by \begin{equation} {\bf A}_s = \left[ \begin{array}{cccc} 0 & -1 & 0 & 1 \\ 1 & 0 & -1 & 0 \\ 0 & 0 & 0 & 0 \end{array} \right]. \label{gimbalW} \end{equation} The initial transverse matrix ${\bf A}_t$ can be obtained by the method of (\ref{crossperpendicular}). The desired closed-loop poles are selected as $\{ -0.2, -0.8, -0.2 \pm 0.1i, -0.6\pm 0.1 i,-1.5 \pm i, -1.6 \pm i,-1.7 \pm i,-1.8 \pm i\}$. The simulation test results are given in Figures \ref{fig:bodyRate}--\ref{fig:quaternionRes}. Clearly, the designed controller stabilizes the system with good performance.
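For reference, the pyramid geometry and the initial axis matrices used above can be reproduced with the short check below (an illustration we add; it is not the simulation code used for the figures). The transverse axes follow from (\ref{crossperpendicular}):
\begin{verbatim}
import numpy as np

theta = np.deg2rad(54.75)
Ag  = np.array([[ np.sin(theta), 0.0, -np.sin(theta), 0.0],
                [ 0.0,  np.sin(theta), 0.0, -np.sin(theta)],
                [ np.cos(theta), np.cos(theta), np.cos(theta), np.cos(theta)]])
As0 = np.array([[0.0, -1.0,  0.0, 1.0],
                [1.0,  0.0, -1.0, 0.0],
                [0.0,  0.0,  0.0, 0.0]])
At0 = np.cross(Ag.T, As0.T).T          # t_i = g_i x s_i, column by column
assert np.allclose(np.einsum('ij,ij->j', As0, At0), 0.0)   # s_i . t_i = 0
assert np.allclose(np.linalg.norm(At0, axis=0), 1.0)       # unit transverse axes
\end{verbatim}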
\begin{figure}
\caption{Spacecraft body rate response.}
\label{fig:bodyRate}
\end{figure}
\begin{figure}
\caption{Spin wheel response.}
\label{fig:spinWheel}
\end{figure}
\begin{figure}
\caption{Gimbal wheel response.}
\label{fig:gimbalWheel}
\end{figure}
\begin{figure}
\caption{Quaternion response $q_1$, $q_2$, and $q_3$.}
\label{fig:quaternionRes}
\end{figure}
\section{Conclusions}
In this paper, we proposed a new operational concept for variable-speed CMGs. This new concept allows us to simplify the nonlinear model of spacecraft attitude control using variable-speed CMGs to a linear time-varying model. Although this LTV model is significantly simpler than the original nonlinear model, there are still many time-varying parameters in the simplified model. Two LTV control system design methods, the gain scheduling design and the model predictive control design, were investigated. The analysis shows that model predictive control is better suited for spacecraft control using variable-speed CMGs. An efficient robust pole assignment algorithm is used in the on-line feedback gain matrix computation. A simulation test demonstrated the effectiveness of the new concept and of the control system design method.
\end{document}
\begin{document}
\begin{bibunit} \title{The mixed problem for the Laplacian in Lipschitz domains}
\author{ Katharine A. Ott \footnote{Research supported, in part, by the National Science Foundation.} \\ Department of Mathematics \\University of Kentucky \\ Lexington, Kentucky \and Russell M. Brown \\ Department of Mathematics \\University of Kentucky \\ Lexington, Kentucky }
\date{}
\maketitle
\abstract{
We consider the mixed boundary value problem, or Zaremba's
problem, for the Laplacian in a bounded Lipschitz domain $\Omega$ in
$ {\bf R} ^ n$, $n\geq 2$. We decompose the boundary $ \partial
\Omega= D\cup N$ with $D$ and $N$ disjoint. The boundary between $D$
and $N$ is assumed to be a Lipschitz surface in $\partial
\Omega$. We find an exponent $q_0>1$ so that for $ p $ between $ 1 $
and $q_0$ we may solve the mixed problem for $L^p$. Thus, if we
specify Dirichlet data on $D$ in the Sobolev space $\sobolev 1 p
(D)$ and Neumann data on $N$ in $ L^ p (N)$, the mixed problem with
data $f_N$ and $f_D$ has a unique solution and the non-tangential
maximal function of the gradient lies in $L^p( \partial \Omega)$.
We also obtain results for $p=1$ when the data comes from Hardy
spaces.
{\em Keywords: }Mixed boundary value problem, Laplacian
{\em Mathematics subject classification: }35J25 }
\section{Introduction}
Over the past thirty years, there has been a great deal of interest in studying boundary value problems for the Laplacian in Lipschitz domains. A fundamental paper of Dahlberg \cite{BD:1977} treated the Dirichlet problem. Jerison and Kenig \cite{JK:1982c} treated the Neumann problem and provided a regularity result for the Dirichlet problem. Another boundary value problem of interest is the mixed problem or Zaremba's problem where we specify Dirichlet data on part of the boundary and Neumann data on the remainder of the boundary. To state this boundary value problem, we let $ \Omega$ be a bounded open set in $ {\bf R} ^n$ and suppose that we have written $ \partial \Omega = D\cup N$ where $D$ is an open subset of the boundary and $ N =\partial \Omega \setminus D$. We consider the {\em $L^p$-mixed problem } by which we mean the boundary value problem \begin{equation} \label{MP} \left\{ \begin{array}{ll} \Delta u = 0, \qquad & \mbox{in } \Omega\\ u = f_D, \qquad & \mbox{on } D\\ \bigfrac { \partial u }{ \partial \nu} = f_N,\qquad & \mbox{on } N \\ \nontan{(\nabla u ) } \in L^ p ( \partial \Omega). \end{array} \right. \end{equation} Here, we are using $ \nontan {(\nabla u)}$ to denote the non-tangential maximal
function of $\nabla u$, and the restrictions of $u$ and $\nabla u$ to the boundary are defined using non-tangential limits. See section \ref{Definitions} for details.
The normal derivative at the boundary $ \partial u /\partial \nu$ is defined as $ \nabla u \cdot \nu$ where $\nu $ is the outer unit normal defined a.e.~on the boundary. Our goals are to find conditions on $\Omega$, $N$ and $D$ which allow us to show that (\ref{MP}) has at most one solution and to find conditions on $ \Omega$, $N$, $D$, $f_N$ and $f_D$ which guarantee the existence of solutions.
The study of the mixed problem in Lipschitz domains is listed as an open problem in Kenig's CBMS lecture notes \cite[Problem 3.2.15]{CK:1994}. Recall that simple examples show that we cannot expect to find solutions whose gradient lies in $L^2$ of the boundary. For example, the function $ \mathop{\rm Re}\nolimits \sqrt z$ on the upper half-plane has zero Neumann data on the positive real axis and zero Dirichlet data on the negative real axis, but the gradient is not locally square integrable on the boundary of the upper half-plane. This presents a technical obstacle, as the standard technique for studying boundary value problems has been the Rellich identity, which produces estimates in $L^2$.
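For the reader's convenience, we record the elementary computation behind this example. Writing $z = re^{i\theta}$ with $0\leq \theta \leq \pi$, the function $u(z) = \mathop{\rm Re}\nolimits \sqrt z = r^{1/2}\cos (\theta /2)$ is harmonic in the upper half-plane and satisfies $|\nabla u (z)| = |(\sqrt z\,)'| = \frac 12 r^{-1/2}$. Hence on the boundary $$ \int_{-1}^{1} |\nabla u (x,0)|^2 \, dx = \frac 14 \int _{-1}^{1} \frac {dx}{|x|} = \infty, \qquad \mbox{while} \qquad \int_{-1}^{1} |\nabla u (x,0)|^p\, dx < \infty \mbox{ for } 1\leq p < 2, $$ which is consistent with the restriction to exponents $p$ below a critical value in Theorem \ref{main} below.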
In 1994, one of the authors observed that the Rellich identity could be used to study the mixed problem in a restricted class of Lipschitz domains \cite{RB:1994b}. Roughly speaking, this work requires that the sets $N$ and $D$ meet at an angle less than $\pi$. Based on this work and the methods used by Dahlberg and Kenig
to study the Neumann problem \cite{DK:1987}, J. Sykes \cite{JS:1999,SB:2001} established results for the mixed problem in a restricted class of Lipschitz graph domains. I.~Mitrea and M.~Mitrea \cite{MM:2007} have studied the mixed problem for the Laplacian with data taken from a large family of function spaces, but with a restriction on the class of domains. Brown and I.~Mitrea have studied the mixed problem for the Lam\'e system \cite{MR2503013} and Brown, I.~Mitrea, M.~Mitrea and Wright have considered a mixed problem for the Stokes system \cite{MR2563727}. More recently, Lanzani, Capogna and Brown \cite{LCB:2008} used a variant of the Rellich identity
to establish an estimate for the mixed problem in two-dimensional graph domains when the data comes from weighted $L^2$ spaces and the Lipschitz constant is less than one. The present work also relies on weighted estimates, but uses a simpler, more flexible approach that applies to all Lipschitz domains.
Several other authors have treated the mixed problem in various settings. Verchota and Venouziou \cite{MR2500502} treat a large class of three dimensional polyhedral domains under the condition that the Neumann and Dirichlet faces meet at an angle of less than $\pi$. Maz'ya and Rossman \cite{MR:2005,MR:2007,MR:2006} have studied the Stokes system in polyhedral domains. Finally, we note that Savar\'e \cite{GS:1997} has shown that on smooth domains, we may find solutions in the Besov space $ B^ {2,\infty}_{ 3/2}$. This result seems to be very close to optimal. The example $ \mathop{\rm Re}\nolimits \sqrt z$ described above shows that we cannot hope to obtain an estimate in the Besov space $ B^ {2,2}_{3/2}$.
We outline the rest of the paper and describe the main tools of the proof. Our first main result is an existence result for the mixed problem when the Neumann data is an atom for a Hardy space. We begin with the weak solution of the mixed problem and use Jerison and Kenig's results for the Dirichlet problem and Neumann problem \cite{JK:1982c} to obtain estimates for the gradient of the solution on the interior of $D$ or $N$. This leads to a weighted estimate where the weight is a power of the distance to the common boundary between $D$ and $N$. The estimate involves a term in the interior of the domain $\Omega$. We handle this term by showing that the gradient of a weak solution lies in $L^ p (\Omega)$ for some $p>2$. The $L^p(\Omega)$ estimates for the gradient of a weak solution are proved in section \ref{Reverse} using the reverse H\"older technique of Gehring \cite{FG:1973} and Giaquinta and Modica \cite{MR549962}. Using this weighted estimate for solutions of the mixed problem, we obtain existence for solutions with Hardy space data by extending the methods of Dahlberg and Kenig \cite{DK:1987}. Uniqueness of solutions is proven in section \ref{Unique}.
With the Hardy space results in hand, we establish the existence of solutions to the mixed problem when the Neumann data is in $L^p ( N)$ and the Dirichlet data is in the Sobolev space $ \sobolev 1 p (D)$. This is done in sections \ref{BoundaryReverse} and \ref{LpSection} by adapting the reverse H\"older technique used by Shen to study boundary value problems for elliptic systems \cite{ZS:2007}. The novel feature in our work is that we are able to use the estimates in Hardy spaces proven in section \ref{Atoms}, whereas Shen's work begins with existence in $L^2$.
\section{Definitions and preliminaries} \label{Definitions}
We say that a bounded, connected open set $\Omega$ is a Lipschitz domain if the boundary is locally the graph of a Lipschitz function. To make this precise, for $M>0$, $ x\in \partial \Omega $ and $ r>0$, we define a {\em coordinate cylinder} $\cyl x r $ to be $\cyl x r = \{
y : |y'-x'|< r , \ |y_n -x_n | < ( 1+M)r \}$. We use coordinates $(x', x_n ) = ( x_1, x'', x_n ) \in {\bf R} \times {\bf R} ^ { n- 2 } \times {\bf R}$ and assume that this coordinate system is a translation and rotation of the standard coordinates. We say that $ \Omega$ is a {\em Lipschitz domain } if for each $x$ in $\partial \Omega$, we may find a coordinate cylinder and a Lipschitz function $\phi : {\bf R} ^ {
n-1} \rightarrow {\bf R}$ with Lipschitz constant $M$ so that \begin{eqnarray*} \Omega \cap \cyl x r & =& \{ (y', y_n ) : y_ n > \phi (y') \} \cap \cyl x r \\ \partial \Omega \cap \cyl x r & = &
\{ (y', y_n ) : y_ n = \phi (y') \}\cap \cyl x r . \end{eqnarray*}
For a Lipschitz domain $ \Omega$, we define a {\em decomposition of the boundary for the mixed problem}, $\partial \Omega = D \cup N$, as follows. We assume that $D$ is a relatively open subset of $ \partial \Omega$, $N= \partial \Omega \setminus D$ and let $\Lambda $ be the boundary (relative to $\partial \Omega$) of $D$. For each $x$ in $ \Lambda$, we require that a coordinate cylinder centered at $x$ have some additional properties. We ask that there be a coordinate system $(x_1, x'', x_n)$, a coordinate cylinder $\cyl x r $, a function $\phi$ as above and also a Lipschitz function $\psi: {\bf R}^{ n-2} \rightarrow {\bf R}$ with Lipschitz constant $M$ so that \begin{eqnarray*} \cyl x r \cap D & = & \{ (y_1, y'', y_n ) : y_ 1 > \psi (y''), \ y_n = \phi(y') \} \cap \cyl x r \\ \cyl x r \cap N & = & \{ (y_1, y'', y_n ) : y_ 1 \leq \psi (y''), \ y_n = \phi(y') \} \cap \cyl x r . \end{eqnarray*}
We fix a covering of the boundary by coordinate cylinders $\{ \cyl {x_i } { r _i } \}_{i=1} ^ L$ so that each $\cyl {x_i } { 100r _i } $ is also a coordinate cylinder. We assume that for each $i$, the cylinder $\cyl {x_i} { 100r_i} \cap \partial \Omega \subset D$, $\cyl {x_i }{
100r_i} \cap \partial \Omega \subset N $ or $\cyl {x_i} {100r_i}$ is one of the coordinate cylinders from the definition of the boundary decomposition for the mixed problem. We let $r_0 = \min \{ r_i : i =1, \dots ,L\}$ be the smallest radius in the collection.
We will call a Lipschitz domain $ \Omega$ and a decomposition of the boundary $ \partial \Omega = N \cup D$ satisfying the above properties a {\em standard domain for the mixed problem}.
We will use $ \delta (y) = \mathop{\rm dist}\nolimits (y, \Lambda)$ to denote the distance from a point $y $ to $\Lambda$.
We will let $ \ball x r = \{ y : |y-x| < r \}$ denote the standard ball in $ {\bf R} ^n$ and then $ \sball x r = \ball x r \cap \partial \Omega$ will denote a {\em surface ball}. Throughout this paper we will need to be careful of several points. The surface balls may not be connected and we will use the notation $ \sball x r $ where $ x$ may not be on the boundary. We use $ \dball x r $ to stand for $ \ball x r \cap \Omega$. Since $\Lambda$ is a Lipschitz graph, we may find a constant $c = c(n,M) >0$ so that we have the property \begin{equation} \label{SurfProp} \mbox{If $ x\in \Lambda$ and $ 0<r< r_0$, then $\sigma ( \sball x {r}
\cap D ) > c r^ { n-1}$. } \end{equation} Here and throughout this paper, we use $ \sigma$ for surface measure.
Our main tool for estimating solutions will be the non-tangential maximal function. We fix $ \alpha > 0$ and for $ x\in \partial \Omega$ we define a {\em non-tangential approach region } by $$
\ntar x = \{ y \in \Omega : |x-y | \leq ( 1+ \alpha) \mathop{\rm dist}\nolimits (y, \partial \Omega) \}. $$ Given a function $u$ defined on $ \Omega$, we define the {\em non-tangential maximal function } by $$
\nontan{u} (x) = \sup _{ y \in \ntar x } |u (y) |, \qquad x \in \partial \Omega. $$ It is well-known that for different values of $ \alpha$, the non-tangential maximal functions have comparable $L^p$-norms. Thus, the dependence on $\alpha $ is not important for our purposes and we suppress the value of $\alpha $ in our notation. In (\ref{MP}), we define the restriction of $u$ and $ \nabla u$ to the boundary using non-tangential limits. Thus, for a function $v$ defined in $ \Omega$ and $ x \in \partial \Omega$, we define $$ v(x) = \lim _{ \Gamma (x) \ni y \rightarrow x } v(y) $$ provided the limit exists. It is well-known that if $v$ is harmonic in a Lipschitz domain, then the non-tangential limits exist at almost every point where the non-tangential maximal function is finite. In addition, if the non-tangential maximal function of $ \nabla u$ lies in $L^p( \partial \Omega)$, then according to the argument in \cite[Lemma 2.2]{RB:1995a}, as corrected in Wright \cite{MR2713677}, the non-tangential maximal function of $u$ lies in an $L^p$-space and hence has non-tangential limits a.e..
Many of our estimates will be of a local, scale invariant form and hold on a scale $r$ that is less than $r_0$. The constants in these local estimates will depend on the constant $M$, the dimension $n$, and any $L^p$-indices that appear in the estimate. If a constant depends on $M$, $n$, any $L^p$-indices and also depends on the collection of coordinate cylinders which cover $ \partial \Omega$ and the constant in the coercivity condition (\ref{coerce}), then we say that the constant depends on the global character of $ \Omega$, $N$ and $D$.
We will use $L^p(E)$ to denote $L^p$-spaces. If $ E\subset \partial \Omega$, then we use the $(n-1)$-dimensional measure on the boundary to define the $L^p$-space. Otherwise, the $L^p$-norm is taken with respect to $n$-dimensional Lebesgue measure. For $ \Omega$ an open subset of $ {\bf R} ^n$, $k=1,2,\dots$ and $ 1\leq p \leq \infty$, we use $ \sobolev kp(\Omega)$ to denote the Sobolev space of functions having $k$ derivatives in $L^p( \Omega) $. We introduce notation for the tangential gradient of a function defined on the boundary, $ \tangrad u$. If $u$ is a smooth function defined in a neighborhood of $ \partial \Omega$, then we have that $\tangrad u = \nabla u - (\nabla u \cdot \nu )\nu$. See \cite[p.~580]{GV:1984} for more details. For
$D$ an open subset of $\partial \Omega$, we use $ \sobolev 1 p(D)$ to denote the Sobolev space of functions defined on $D$ and having one derivative in $L^p(D)$. The norm in this space is given by $
\|f\|_{ \sobolev 1 p (D)}= \| f\|_{ L^ p (D)}+\|\tangrad f \|_ { L^ p (D)}$.
Before stating the main theorem, we recall the definitions of atoms and atomic Hardy spaces. We say that $a$ is an {\em atom for the boundary }$ \partial \Omega$ if $a$ is supported in a surface ball $ \Delta _r (x)$ for some $x$ in
$ \partial \Omega$, $\|a\|_{L^\infty(\partial \Omega)} \leq 1/\sigma(\Delta _r(x))$ and $ \int _{ \partial \Omega } a \, d\sigma = 0. $
When we consider the mixed problem, we will want to consider atoms for the subset $N$. We say that {\em $ a$ is an atom for $N$} if $a$ is the restriction to $N$ of a function $\tilde a $ which is an atom for $ \partial \Omega$. For $ N $ a subset of $ \partial \Omega$, the Hardy space $ H^1( N)$ is the collection of
functions $f$ which can be represented as $ \sum \lambda_j a_j$ where each $a_j$ is an atom for $N$ and the coefficients satisfy $\sum | \lambda _j|< \infty $. This includes, of course, the case where $N = \partial \Omega$ and then we obtain the standard definition. It is easy to see that the Hardy space $H^1(N)$ is the restriction to $N$ of elements of the Hardy space $ H^ 1 ( \partial \Omega)$.
We give a similar definition for the Hardy-Sobolev space $H^ { 1,1}$.
We say that $ A$ is an {\em
atom for $H^ { 1,1 } ( \partial \Omega)$ } if $A$ is supported in a surface ball $ \sball x r $ for some $x \in \partial \Omega$ and
$\| \nabla _t A \| _ { L^ \infty ( \partial \Omega ) } \leq 1/\sigma ( \sball x r )$. We say that $A$ is an {\em atom for $H^ { 1 ,1 } (D) $ }
if $A$ is the restriction to $D$ of an atom $ \tilde A$ for $ \partial
\Omega$. Again, the space $H^ { 1,1 }( D )$ is the
collection generated by taking sums of $H^ {1,1}(D) $ atoms with
coefficients in $ \ell ^1$. See the article of Coifman and Weiss \cite{CW:1976} for more information about Hardy spaces.
We are now ready to state our main theorem.
\begin{theorem} \label{main} Let $ \Omega$, $N$ and $D$ be a standard domain for the mixed problem.
a) For $ p \geq 1$, the $L^p$-mixed problem has at most one solution.
b) If $f_N$ lies in $H^ 1(N)$ and $f _D $ lies in $H^{1,1}(D)$, the $L^1$-mixed problem has a solution which satisfies the estimate $$
\| (\nabla u )^*\| _ { L^ 1 (\partial \Omega )} \leq C ( \| f_N\|_ { H
^ 1( N) } + \|f_D\| _ { H ^ { 1,1 } (D)}) . $$
c) There exists $q_0 > 1 $, depending only on $M$ and $n$ so that for $p$ satisfying $1< p< q_0$, we have: If $f_N \in L^ p (N)$ and $ f_D \in \sobolev 1 p(D)$, then the $L^p$-mixed problem has a solution $u$ which satisfies $$
\| (\nabla u ) ^ * \| _ { L^ p ( \partial \Omega ) }
\leq C ( \| f_N \| _{L^ p(N) } + \|f_D \| _ { \sobolev 1 p (D)} ) . $$
The constants in the estimates depend on the global character of the domain and the index $p$. \end{theorem}
The rest of the paper is devoted to the proof of this theorem. We outline
the main steps of the proof. \begin{proof}[Outline of the proof] We begin by recalling that for the Dirichlet problem with data from a Sobolev space, we obtain non-tangential maximal function estimates for the gradient of the solution. This is treated for $p=2 $ by Jerison and Kenig \cite{JK:1982c} and for $ 1< p < 2$ by Verchota \cite{GV:1982,GV:1984}. The Hardy space problem was studied by Dahlberg and Kenig \cite{DK:1987} and by D.~Mitrea in two dimensions \cite[Theorem 3.6]{MR1883390}. Using these results, it suffices to prove Theorem \ref{main} in the case when the Dirichlet data is zero.
The existence when the Neumann data is taken from the atomic Hardy space and the Dirichlet data is zero is given in Theorem \ref{HardyTheorem}. The existence for $L^p$ data appears in section \ref{LpSection}. It suffices to establish uniqueness when $p=1$ and this is treated in Theorem \ref{uRuniq}. \end{proof}
\section{Higher integrability of the gradient of a weak solution} \label{Reverse}
It is well-known that one can obtain higher integrability of the gradient of weak solutions of an elliptic equation. An early result of this type is due to Meyers \cite{NM:1963}. Meyers's result has been extended to the mixed problem by Gr\"oger \cite{MR990595}. However, we choose to obtain our estimates using the reverse H\"older technique introduced by Gehring \cite{FG:1973} and Giaquinta and Modica \cite{MR549962} (we use the formulation from Giaquinta \cite[p.~122]{MG:1983}). This approach allows us to include non-zero boundary data and obtain local, scale-invariant results. At a few points of the proof it will be simpler if we are working in a coordinate cylinder $Z$ where we have that $ \partial \Omega\cap Z $ lies in a hyperplane. Thus, we will establish results for divergence form elliptic operators with bounded measurable coefficients as this class is preserved by a change of variable that will flatten part of the boundary.
We will consider several formulations of the mixed problem. Our goal is to obtain solutions whose gradient lies in $L^p( \partial \Omega)$ for $p$ near 1. Our argument begins with a weak solution whose gradient lies in $L^2 ( \Omega)$. We will show that under appropriate assumptions on the data, this solution will have a gradient in $L^p ( \partial \Omega)$.
We describe a weak formulation of the mixed boundary value problem.
Some of the results of this section will hold for solutions of divergence form operators. Thus, we define weak solutions in this more general setting. For $D$ a subset of the boundary, we let $W^{1,2}_D(\Omega)$ be the closure in $W^{1,2}(\Omega)$ of functions in $ C_0^ \infty ( {\bf R} ^ n )$ for which $ \mathop{\rm supp}\nolimits u \cap \bar D = \emptyset$. We let
$ W_D^{1/2,2}(\partial \Omega) $ be the restrictions to $ \partial \Omega$ of the space $ W^ {1,2}_D( \Omega)$. We define $W^{-1/2,2}_D( \partial \Omega)$ to be the dual of $W^ { 1/2,2} _D( \partial \Omega)$. The Neumann data $f_N$ will be taken from the space
$W^{-1/2,2}_D(\partial \Omega)$. If $A(x) $ is a symmetric matrix with bounded, measurable entries and satisfies the ellipticity condition $\lambda |\xi |^2 \geq A(x) \xi \cdot \xi \geq
\lambda^{-1} |\xi |^ 2 $ for some $ \lambda >0$ and all $ \xi \in {\bf R} ^ n$, we consider the problem \begin{equation}\label{WeakMix} \left \{ \begin{array}{ll} {\mathop{\rm div}\nolimits} A \nabla u = 0 , \qquad & \mbox{in } \Omega \\ u = 0 , \qquad & \mbox{on } D \\ A\nabla u \cdot \nu = f _N, & \mbox{on }N. \end{array} \right. \end{equation} We say that $u$ is a {\em weak solution }of this problem if $u \in W^ { 1,2 } _D ( \Omega) $ and we have $$ \int _ \Omega A \nabla u \cdot \nabla v \, dy = \langle f _N , v \rangle _ { \partial \Omega} , \qquad \mbox{for all } v \in W^ { 1,2 }_D ( \Omega). $$ Here, we are using $ \langle \cdot , \cdot \rangle_{ \partial \Omega}$ to denote the pairing between $ W^ { 1/2,2 } _D( \partial \Omega)$ and the dual $ W^ {- 1/2,2 } _D( \partial \Omega)$. To establish existence of weak solutions of the mixed problem, we assume the coercivity condition \begin{equation} \label{coerce}
\|u\|_{ L^2 ( \Omega) } \leq c \|\nabla u \|_{ L^2( \Omega)}, \qquad u \in W^ { 1 , 2 } _D ( \Omega) . \end{equation} Under this assumption, the existence and uniqueness of weak solutions to (\ref{WeakMix}) is a consequence of the Lax-Milgram theorem. It is easy to see that (\ref{coerce}) holds when $ \Omega$, $N$ and $D$ is a standard domain for the mixed problem.
If $f_N$ is a function on $N$, then we may identify $f_N$ with an element of the space $W^{-1/2,2}_D( \partial \Omega)$ by $$ \langle f_N , \phi \rangle_{ \partial \Omega} = \int_{N } f_N \phi \, d\sigma, \qquad \mbox{for all } \phi \in W^ { 1/2,2} _D( \partial \Omega). $$ From Sobolev embedding we have $ W^ { 1/2, 2}_D( \partial \Omega) \subset L^p ( \partial \Omega)$, where $p = 2 ( n-1)/(n-2)$ if $n\geq 3$ or $ p < \infty$ when $n=2$. Thus the integral on the right-hand side will be well-defined if we have $f_N$ in $L^ { 2(n-1)/n}( N )$ when $n \geq 3 $ or $ L^ p ( N )$ for any $ p > 1$ when $n =2$.
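To see that the exponents match, note that $\frac{n}{2(n-1)} + \frac{n-2}{2(n-1)} = 1$ when $n \geq 3$, so H\"older's inequality and the embedding above give (we include this one-line estimate for convenience) $$ \left | \int_N f_N \phi \, d\sigma \right | \leq \| f_N \|_{ L^ { 2(n-1)/n } ( N ) } \| \phi \|_{ L^ { 2(n-1)/(n-2) } ( \partial \Omega ) } \leq C \| f_N \|_{ L^ { 2(n-1)/n } ( N ) } \| \phi \|_{ W^ { 1/2,2 } _D ( \partial \Omega ) } , $$ with the obvious modification using a pair of conjugate exponents when $n=2$.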
\note {
Outline of the proof of existence of weak solution of the mixed problem (\ref{MP}).
1. We assume that the Neumann data, $f_N$ lies in the dual of $W^ { 1/2,2}_D( \partial \Omega)$ and the Dirichlet data $f_D$ lies in $W^ {1/2, 2} ( \partial \Omega)$ and thus is the restriction to $ \partial \Omega$ of a function $ W^ { 1,2}( \Omega)$.
2. We solve the Dirichlet problem $$\left\{ \begin{array}{ll} \Delta u = 0 , \qquad & \mbox{in }
\Omega\\ u = f_D, \qquad & \mbox{on } \partial \Omega. \end{array} \right. $$
The solution satisfies $u - f_D$ lies in $ W^ { 1,2}_0 ( \Omega)$ and $$ \int _ \Omega \nabla u\cdot \nabla \phi \, dx = 0 , \qquad \phi \in W^{
1,2}_0 (\Omega ) . $$
By Dirichlet's principle, we have $ \|\nabla u \|_{L^2 ( \Omega)}\leq
\|\nabla f_D\|_ {L^ 2 ( \Omega)}$.
3. If $f$ is in $ W^ {1,2} ( \Omega)$ and $ \Delta f$ is in the dual of $ W^ {1,2}( \Omega)$, then we may define the normal derivative of $f$ at the boundary as an element of the dual of $ W^ { 1/2,2}( \partial \Omega)$ by $$ \langle \partial f/\partial \nu, \phi \rangle_{\partial \Omega} = \int _ {\Omega} \nabla
f \cdot \nabla \phi \, dx + \langle \Delta f , \phi\rangle_\Omega,
\qquad \phi \in W^ { 1,2}( \Omega). $$ In particular, if $f$ is weakly harmonic, then we may define the normal derivative.
4. By point 2, we may assume that the Dirichlet data in (\ref{MP}) is given as the boundary values of a harmonic function.
We write the solution of (\ref{MP}) $ u = f_D+v$ where $f_D$ is a harmonic representative of the boundary data in (\ref{MP}). We let $v$ be a solution of the mixed problem $$ \left\{ \begin{array}{ll} \Delta v = 0 , \qquad & \mbox{in } \Omega\\ \partial v /\partial \nu = f_N - \partial f_D /\partial \nu, \qquad & \mbox{on } N\\ v = 0 , \qquad & \mbox{on } D. \end{array} \right. $$ The weak formulation of this problem is $$ \int _ \Omega \nabla v \cdot \nabla \phi\,dx = \langle f_N, \phi\rangle_{\partial \Omega} - \int_ \Omega \nabla \phi \cdot \nabla f_D\,dx, \qquad \phi \in W^ { 1,2}_D ( \Omega) $$ where we have substituted $ \langle \partial f_D/\partial \nu , \phi \rangle _{\partial \Omega} = \int _ \Omega \nabla f_D \cdot \nabla \phi \, dx. $
The existence of $v$ in $W^ { 1,2}_D( \Omega) $ satisfying this weak formulation follows from Lax-Milgram. It is clear that $u= f_D+v$ satisfies $$ \int_\Omega \nabla u \cdot \nabla \phi \, dx = \langle f_N, \phi \rangle_ {\partial \Omega}, \qquad \phi \in W^ { 1,2}_D ( \Omega) $$ and that we have $u-f_D\in W^ { 1,2}_D ( \Omega)$.
5. Uniqueness is easy. If $u$ is a weak solution of (\ref{MP}) with zero data, then we may use $u\in W^ {1,2}_D( \Omega)$ as the test function in the weak formulation to obtain $$
\int_ \Omega |\nabla u |^2 \, dx = 0. $$ Since we assume (\ref{coerce}) for functions in $W^ { 1,2}_D( \Omega)$, it follows that $u$ is zero. } \note { We recall a result from Giaquinta \cite[p.~122]{MG:1983}. In this result and throughout this paper, we use the notation
$ -\!\!\!\!\!\!\int _A f\, dx$ to denote the average of a function $f$ on a set $A$.
Let $Q$ be a cube in $ {\bf R} ^n$ and suppose that
whenever $Q_{2r}(x)\subset Q$, we have $$
-\!\!\!\!\!\!\int _{Q_r(x) } g^q \, dy \ \leq A \left ( -\!\!\!\!\!\!\int _ {Q_ {2r}( x)} g \, dy \right ) ^ {
q} + -\!\!\!\!\!\!\int _ {Q_{2r}(x) } f^ q \, dy . $$ Then there is an $\epsilon >0$ which depends on $A$, $q$ and $n$ so that for $ p \in [q,q+\epsilon)$, we have $$ \left ( -\!\!\!\!\!\!\int_ {Q/2} g^p\, dy \right ) ^ { 1/p} \leq C \left ( -\!\!\!\!\!\!\int _ Q
g^q \, dy \right ) ^ { 1/q} + \left( -\!\!\!\!\!\!\int_ Q f^ p \, dy \right )^ {1/p}. $$ In particular, if $f$ is in $L^ p(Q)$, then $g$ is in $L^ p_{loc}( Q)$.
}
We define a sub-linear operator $P$ which takes functions on $\partial \Omega$ to functions in $ \Omega$ by $$
Pf(x) = \sup _{ s > 0 } \frac 1 {s^ { n-1} } \int _{ \sball x s } |f
|\, d\sigma, \qquad x \in \Omega $$ and a local version of $P$ by $$ P_r f(x) = \sup _{ r> s > 0 } \frac 1 {s^ { n-1} } \int _{ \sball x s
} |f |\, d\sigma , \qquad \qquad x \in \Omega . $$ On the boundary, we have that $Pf$ is the Hardy-Littlewood maximal function $$
Mf(x)= Pf(x) = \sup _{ s> 0} \frac 1 {s ^ {n-1}} \int _ { \sball x s} |f |\, d\sigma, \qquad x\in \partial \Omega . $$ The following result is probably well-known, but we could not find a reference.
\begin{lemma} \label{PEstimate} For $1< p < \infty$, $ 1 \leq q \leq pn/( n -1) $, $ x \in \partial \Omega $ and $r < r_0$, we have \begin{equation}\label{Plocal}
\left ( -\!\!\!\!\!\!\int _{\dball x r } |P_rf|^ q \, dy \right ) ^ { 1/q }
\leq C \left (\frac 1 { r^ { n-1} } \int _{ \sball x { 2r} } |f |^ p \, d\sigma \right ) ^ { 1/p} . \end{equation} The constant in this estimate depends only on the Lipschitz constant $M$ and the dimension. \end{lemma}
\begin{proof} We begin by considering the case
where $ \Omega = \{ (y', y_n ) : y _
n > 0\}$ is a half-space. We use coordinates $ y = (y',y _n )$ and we claim that \begin{eqnarray} \label{P1} Pf(y', y_n ) & \leq & Mf(y',0) \\ \label{P2}
Pf(y) & \leq & C\|f\|_{ L^ p ( \partial \Omega) } y _n ^ {( 1-n ) /p}, \qquad y _ n > 0 . \end{eqnarray} The estimate (\ref{P1}) follows easily since $ \sball {(y', y_n )} s \subset \sball {(y', 0)} s $. To establish the second estimate, we observe that if $ s < y _n $, then $\sball y s = \emptyset$ and hence $$ Pf(y) = \sup _{ s \geq y _n } \frac 1 { s^ { n-1}} \int _ { \sball y s
} |f| \, d\sigma
\leq C y _n ^ { (1-n)/p} \| f\|_{ L ^ p ( \partial \Omega) } . $$
We claim that we have the following weak-type estimate for $Pf$, \begin{equation} \label{Pclaim}
|\{ x\in \Omega : Pf(x) > \lambda \}| \leq C \| f\|_ { L^ p (\partial \Omega ) } ^ p \lambda ^ { -pn /(n-1) } , \qquad \lambda > 0 . \end{equation}
To prove (\ref{Pclaim}), we may assume $ \| f \| _{ L^ p ( \partial
\Omega )} = 1$. With this normalization, the observation (\ref{P2}) implies that $\{ y ' : Pf(y', y _n) > \lambda \} = \emptyset $ if $ y _n > c \lambda ^ { -p /( n-1)} $. Thus, we may use Fubini's theorem to write \begin{eqnarray*}
|\{ x \in \Omega : Pf(x) > \lambda \} | & = & \int _ 0 ^ { c \lambda ^ { -p/( n-1)} } \sigma ( \{ y ' :Pf(y', y _n ) > \lambda \}) \, dy _n \\ &\leq & C\int _0 ^ {c\lambda ^ { -p/ ( n-1 ) }} \sigma ( \{ y ' : Mf ( y',0) > c \lambda \}) \, dy _n \\ & = & C \lambda ^ { -p n / ( n-1)} \end{eqnarray*} where we used (\ref{P1}), the weak-type $(p,p)$ inequality for the maximal operator on ${\bf R} ^ { n-1}$ and our normalization of the $L^p$-norm of $f$.
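To be explicit, the weak-type $(p,p)$ inequality and our normalization give $ \sigma ( \{ y' : Mf(y',0) > c \lambda \} ) \leq C \lambda^{-p}$, so the last equality above is simply the computation
$$
\int_0^{ c\lambda^{-p/(n-1)}} C \lambda^{-p} \, dy_n = C \lambda^{ -p - \frac p{n-1}} = C \lambda^{ -pn/(n-1)} .
$$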
From the weak-type estimate (\ref{Pclaim}) and the Marcinkiewicz interpolation theorem we obtain that there is a constant $C$ depending on $p$ and $n$ so that for $p>1$, \begin{equation}\label{Pglobal}
\| Pf\|_{ L^ {pn/(n-1)} ( \Omega)} \leq C \| f \|_{L^p ( {\bf R} ^ { n
-1} )} . \end{equation} To obtain the estimate (\ref{Plocal}), we observe that if $ y \in \ball x r $ then $ \ball y r \subset \ball x {2r}$ and hence $$ P_r f( y) \leq P _r ( \chi _ { \sball x { 2r} } f ) (y) ,\qquad y \in \ball x r . $$ Thus in the case where $ \Omega$ is a half-space, the result (\ref{Plocal}) follows from (\ref{Pglobal}) and H\"older's inequality.
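In more detail, the chain of inequalities in this last step is the following (a sketch, writing $g = \chi_{ \sball x {2r}} f$ and using that $ | \dball x r | \geq c r^n$ in the half-space):
$$
\left( -\!\!\!\!\!\!\int_{ \dball x r } |P_r f|^q \, dy \right)^{1/q}
\leq \left( -\!\!\!\!\!\!\int_{ \dball x r } |P g|^{ pn/(n-1)} \, dy \right)^{ \frac{n-1}{pn}}
\leq C r^{ -\frac{n-1}p} \, \| Pg \|_{ L^{ pn/(n-1)} ( \Omega)}
\leq C r^{ -\frac{n-1}p} \, \| g \|_{ L^p ( \partial \Omega)}
= C \left( \frac 1{ r^{n-1}} \int_{ \sball x {2r}} |f|^p \, d\sigma \right)^{1/p} .
$$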
Finally, to obtain the local result on a general Lipschitz domain, one may change variables so that the boundary is flat near $ \sball x r$. This introduces the dependence on the constant $M$. \end{proof}
\note { In applying the change of variables, it is helpful to note that for a bi-Lipschitz transformation $ \Phi$, we have $$ B_ { cr} ( \Phi(x) ) \subset \Phi( B_r ( x )) \subset B_ { Cr} ( \Phi(x) ). $$
Do we need to multiply the radius by a constant in the statement?
Check changes to Lemma \ref{RHEstimate} now that we use Lemma \ref{MSIRHI}.
Does Lemma \ref{MSIRHI} below duplicate one of the estimates used in the $H^1$ part of the paper? }
We recall several versions of the Poincar\'e and Sobolev inequalities. \begin{lemma} \label{YAPI} Let $ \Omega $ be a convex domain of diameter $d$. Suppose that $ S $ is a subset of $ \bar \Omega $ that satisfies: a) for some $ r $ with $ 0 < r< d$ we have
$ \sigma ( S \cap \ball x r ) = r ^ { n-1} $ and b) there is a constant $A$ so that $ \sigma ( S \cap \ball x t ) \leq A t^ { n-1}$ for $ t >0$. Let $u$ be a function in $ W^ { 1,p } ( \Omega)$ and suppose that $u$ vanishes on $S$. Then for $ 1< p < n $, there is a constant $C$ so that $$
\left ( \int _ \Omega |u|^ p \, dy \right ) ^ { 1/p }
\leq \frac { C d^ n } { |\Omega | ^ { 1/p'} } { r ^ {1 - n / p }
} \left ( \int _ { \Omega } |\nabla u | ^ p \, dy \right) ^ { 1/p } . $$ The constant $C$ depends on $p$, the dimension $n$ and $A$. \end{lemma} \begin{proof} It suffices to consider functions $u$ which are smooth in $ \bar \Omega$ and vanish on $S$.
We follow the proof of Corollary 8.2.2 in the book of Adams and Hedberg \cite{MR1411441}, except that we substitute the Riesz capacity for the Bessel capacity in order to obtain a scale-invariant estimate. Following their arguments, we obtain that if $u$ vanishes on $S$, then \begin{equation} \label{Fractional}
|u(x) | \leq \frac { d^ n }{ |\Omega| } ( I_ 1 ( |\nabla u | ) (x)
+ \| \nabla u \|_{ L^ p ( \Omega) } \| I _ 1 ( \mu ) \| _ { L^ { p ' }
( \Omega ) } ) . \end{equation}
Here $ I _ 1( f ) ( x) = \int _ { \Omega } f(y) | x-y |^ { 1-n } \, dy$ is the first-order fractional integral and $\mu $ is any non-negative measure on $ S$ normalized so that $\mu(S) =1$. To estimate $ \| I _1( \mu)\|_ {L^ { p'} ( \Omega)} $ we use Theorem 4.5.4 of Adams and Hedberg \cite{MR1411441} which gives that $$ \int _ { {\bf R} ^ n } ( I _ 1( \mu ) ) ^ { p ' } \, dy \leq C \int_{ {\bf R} ^ n } \dot W ^ \mu _ { 1, p } \, d\mu $$ where $\dot W ^ \mu _ { 1, p } (x) $ is the Wolff potential of $\mu$ defined by $$ \dot W ^ \mu _ { 1,p}(x) = \int _ 0 ^ \infty ( \mu ( \ball x t ) t ^ { p
-n } ) ^ { 1/ ( p-1)} \, dt/t. $$
Our assumptions imply that with $ \mu = r ^ { 1-n } \sigma$ denoting normalized surface measure on $S$, we have $ \dot W ^ \mu _ { 1,p} (x) \leq C r^ { ( p-n ) /( p-1)}$ where $C$ depends only on $A$. Using this estimate for the Wolff potential and Young's convolution inequality to estimate $I_1(|\nabla u|)$, the Lemma follows from (\ref{Fractional}). \end{proof}
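For the reader's convenience, we record the splitting behind the bound on the Wolff potential used in the proof: with $\mu$ as above, so that $ \mu(S) = 1$ and $ \mu ( \ball z t ) \leq A t^{n-1} r^{1-n}$ for $ t \leq r$, and since $ p < n$,
$$
\dot W^\mu_{1,p} (z) \leq \int_0^r \left( A t^{n-1} r^{1-n} t^{p-n} \right)^{ \frac 1{p-1}} \, \frac{dt}t + \int_r^\infty \left( t^{p-n} \right)^{ \frac 1{p-1}} \, \frac{dt}t \leq C(A,p,n) \, r^{ (p-n)/(p-1)} .
$$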
The next inequality is also taken from Adams and Hedberg \cite[Corollary 8.1.4]{MR1411441}. Let $1/q + 1/n < 1 $ and assume that $ \Omega$ is a convex domain of diameter $d$. We let $ \bar u = -\!\!\!\!\!\!\int _\Omega u \, dy$ and then we may find a constant $C = C_{q,n}$ depending only on $q$ and $n$ so that \begin{equation}\label{SoPo1}
\int _ { \Omega } |u -\bar u | ^ q \, dy
\leq C\frac { d^ n } { |\Omega |} \left ( \int _ { \Omega }
|\nabla u |^ {nq/(n+q)}\, dy \right) ^ { ( n+q)/n} . \end{equation}
Finally, we suppose that $\Omega$ is a domain and $ \dball x r$ lies in a coordinate cylinder $Z$ so that $ \partial \Omega \cap Z$ lies in a hyperplane and let $\bar u = -\!\!\!\!\!\!\int _{\dball x r} u\,dy$. Provided $ \dball x r \subset Z$, we have \begin{equation}\label{SoPo2}
\left( \int _{ \sball x {r} } | u -\bar u | ^ { q} \, d\sigma
\right) ^ { 1/q} \leq C \left( \int_{ \dball x { r}}|\nabla u |^ p \, dy \right ) ^ { 1/p }. \end{equation} In the inequality (\ref{SoPo2}), $p$ and $q$ are related by $ 1/q = 1/p - ( 1- 1/p) / (n-1) $ and $ p > 1$.
\begin{lemma} \label{MSIRHI} Let $ \Omega $, $N$ and $D$ be a standard domain for the mixed problem. Suppose that (\ref{SurfProp}) holds, let $ x \in \Omega$ and $ 0 < r < r_0$. Let $u$ be a weak solution of the mixed problem for a divergence form elliptic operator with zero Dirichlet data and Neumann data $f_N$. We have the estimate $$
\left ( -\!\!\!\!\!\!\int _{ \dball x { r} } |\nabla u | ^ 2 \, dy \right ) ^
{ 1/2 } \leq C \left [
-\!\!\!\!\!\!\int _ {\dball x {2r} } |\nabla u | \, dy
+\left ( \frac 1 { r^ { n-1}} \int _ {N \cap \sball x {2r} } |f_N|^ { p} \, d\sigma \right) ^ { 1/p} \right ] . $$ Here, $p=2$ if $n = 2$ and $ p = 2( n-1)/(n-2)$ for $n\geq 3$. The constant $C$ depends only on $M$ and the dimension $n$. \end{lemma}
\begin{proof} Changing variables to flatten the boundary of a Lipschitz domain preserves the class of elliptic operators with bounded measurable coefficients, thus it suffices to consider the case where the ball $\sball x r $ lies in a hyperplane. We may rescale to set $ r = 1$. We claim that we can find an exponent $a$ so that for $s$ and $t$ which satisfy $ 1/2\leq s < t \leq 1$, we have \begin{eqnarray}
\lefteqn{ \left ( \int _ { \dball x s } |\nabla u |^ 2 \, dy \right )
^ { 1/2} } \nonumber \\
& \leq & \frac C { ( t-s ) ^ a } \left ( \int _ { \dball x t } | \nabla u
| ^ q \, dy \right ) ^ { 1/ q} +
\left ( \int _ {N \cap \sball x 1 } | f_N|^ { p} \, d\sigma \right) ^ { 1/p} \label{pqClaim} \end{eqnarray} where we may choose the exponents $ p = 2 ( n-1)/( n-2) $ and $q = 2n/(n+2)$ if $ n \geq 3$ or $ p =2$ and $ q = 4/3$ if $n = 2$.
We give the details when $ n \geq 3$. In the argument that follows, let $ \epsilon = (t-s)/2$ and choose $ \eta$ to be a cut-off function which is one on $\ball x s$, supported in $ \ball x { s+ \epsilon}$
and satisfies $ |\nabla \eta | \leq C /\epsilon $. We let $ v = \eta ^ 2 ( u -E) $ where $E$ is a constant. If we choose $E$ so that $ v \in W^ { 1,2 } _ D ( \Omega) $, the weak formulation of the mixed problem and H\"older's inequality gives for $ 1 < p < \infty $ \begin{eqnarray}
\int _ \Omega | \nabla u |^ 2 \eta ^ 2 \, dy & \leq & C \left [ \int _
\Omega | u - E |^ 2 |\nabla \eta | ^ 2 \, dy + \left ( \int _{ N\cap
\sball x { s+ \epsilon }} | u - E| ^ { p'}\, d\sigma \right ) ^ {
2/ p ' } \right . \nonumber \\ & & \qquad \left. + \left( \int
_{N \cap \sball x { s+ \epsilon } } |f_N|^ p \, d\sigma \right ) ^ {
2/ p }\right] . \label{John} \end{eqnarray}
We consider two cases: a) $ \ball x { s+ \epsilon } \cap D = \emptyset $ and b) $\ball x { s+ \epsilon } \cap D \neq \emptyset$. In case a) we may choose $ E = \bar u = -\!\!\!\!\!\!\int _ { \dball x { s+ \epsilon } } u \, dy $. We use the Poincar\'e-Sobolev inequality (\ref{SoPo1}) and the inequality (\ref{SoPo2}) to estimate the first two terms on the right of (\ref{John}) and conclude that \begin{eqnarray*} \lefteqn{
\int _ { \dball x s } |\nabla u |^ 2 \, dy }\\ & \leq & C \left [ \frac 1 { ( t-s) ^ 2 } \left ( \int _ { \dball x { s+
\epsilon } } |\nabla u | ^ { \frac {2n} {n+2} } \, dy \right )
^ { \frac { n+2 } n } \right . \\ & & \quad + \left . \left ( \int _ { \dball x { s+ \epsilon } }
|\nabla u | ^ { \frac { np} { np-n + 1 } } \, dy \right ) ^ { \frac { 2 ( np -n + 1)} { pn } } +
\left ( \int _ { N \cap \sball x 1 } |f_N|^ { p} \, d\sigma \right) ^
{\frac 2 p }
\right ]. \end{eqnarray*} If $n \geq 3$, we may choose $p = 2 ( n-1) / ( n-2) $ and then we have that $ np/( np -n + 1) = 2n /(n+2)$ to obtain the claim.
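For convenience, the exponent arithmetic behind this identity is
$$
np - n + 1 = \frac{ 2n(n-1) - (n-1)(n-2)}{n-2} = \frac{ (n-1)(n+2)}{n-2}, \qquad \mbox{so} \qquad \frac{np}{np-n+1} = \frac{ 2n(n-1)}{ (n-1)(n+2)} = \frac{2n}{n+2} .
$$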
We now turn to case b). Since $ \ball x { s+ \epsilon}$ meets the set $D$, we cannot subtract a constant from $u$ and remain in the space of test functions, $ W^ { 1,2 } _D ( \Omega)$. Thus, we let $E=0 $ in (\ref{John}).
We let $ \bar u $ be the average value of $u$ on $ \dball x {s+ 2\epsilon} $ and obtain $$
\int _ { \dball x { s+ \epsilon } } u ^ 2 |\nabla \eta |^ 2 \, dy
\leq \frac C { \epsilon ^ 2} \left [ \int _ { \dball x { s+ 2\epsilon } } |
u - \bar u | ^ 2 \, dy + \bar u ^ 2 \right ] . $$ Since $ \ball x { s+ \epsilon } \cap D \neq \emptyset $, our assumption (\ref{SurfProp}) on the set $D$ implies that we may find a point $ \tilde x \in \Lambda $ so that $\ball { \tilde x } \epsilon \subset \ball x t $ and so that $ \sigma ( \ball { \tilde x } \epsilon \cap D ) \geq c \epsilon ^ { n -1} $. As $c$ depends on $M$ our final constant may be taken to depend on $M$. Using (\ref{SoPo1}) and the Poincar\'e inequality of Lemma \ref{YAPI} we conclude that \begin{eqnarray}
\int _ { \dball x { s+ \epsilon }} u^2 |\nabla \eta | ^ 2 \, dy &\leq & C \left [ \frac 1 { \epsilon ^ 2 } \left ( \int _ { \dball x
{s+2\epsilon } } |
\nabla u | ^ { 2n / ( n+2) } \, dy \right ) ^ { ( n + 2) /n }
\right. \nonumber \\ \label{Paul} & & \left. \qquad + \frac 1 {\epsilon^{ 2n/q}} \left ( \int _ { \dball x { s + 2\epsilon } }
|\nabla u | ^ q \, dy \right ) ^ { 2/ q} \right] \end{eqnarray} for $ 1 < q < n $. A similar argument using (\ref{SoPo2}) and Lemma \ref{YAPI} gives us \begin{eqnarray} \nonumber
\left( \int _ { \sball x { s+ 2\epsilon } } | u |^ { p'} \, d\sigma \right ) ^ { 1/ p' } & \leq &
\left ( \int _ { \sball x { s + 2\epsilon }} | u -\bar u | ^ { p
' } \, d\sigma \right ) ^ { 1/ p' } + | \bar u |
\\ \nonumber & \leq &
C \left [ \left ( \int _ { \dball x { s+ 2\epsilon } } |\nabla u | ^
{ np / ( np - n + 1) } \, dy \right ) ^ { ( np -n + 1 ) / ( np) } \right. \\ \label{George} & &\left. \quad + \epsilon ^ { 1- n/q} \left ( \int _ { \dball x { s+ 2\epsilon }
} |\nabla u | ^ q \, dy \right ) ^ { 1/ q} \right ] \end{eqnarray} where the use of Lemma \ref{YAPI} requires that we have $ 1< q< n$. We use (\ref{Paul}) and (\ref{George}) in (\ref{John}) and choose $ q = 2n/( n+2) $ and $ p = 2 ( n-1 ) /( n-2) $ if $ n \geq 3$. Once we recall that $ t -s = 2 \epsilon $, we obtain (\ref{pqClaim}).
Finally, we may use the techniques given in \cite[pp.~80-82]{MR1239172} or \cite[pp.~1004--1005]{FS:1984} to see that the claim (\ref{pqClaim}) implies the estimate \begin{equation}
\left ( \int _ { \dball x {1/2} } |\nabla u |^ 2 \, dy \right ) ^ { 1/2}
\leq C \left[ \int _ { \dball x 1 } | \nabla u
| \, dy +
\left ( \int _ {N \cap \sball x 1 } |f_N|^ { p} d\sigma \right) ^ { 1/p}
\right] \end{equation} with $p$ as in (\ref{pqClaim}).
When the dimension $n=2$, the exponent $2n/(n+2)$ is 1 and it is not clear that we have (\ref{SoPo1}) as used to obtain (\ref{Paul}). However, from (\ref{SoPo1}) and H\"older's inequality we can show $$
\left( \int_{ \dball x { s+ 2\epsilon } } |u - \bar u | ^ 2 \, dy \right ) ^ { 1/2} \leq C \left( \int_{ \dball x { s+ 2\epsilon } }
|\nabla u | ^ {4/3} \, dy \right ) ^ { 3/4} . $$ This may be substituted for (\ref{SoPo1}) in the above argument to obtain (\ref{pqClaim}) when $ n =2$. \end{proof} \note { The argument of Fabes and Stroock will give any $p$, not just $p=1$ on the right-hand side.
We give proofs of (\ref{SoPo1}) and (\ref{SoPo2}). Oops, this is a proof of a version of (\ref{SoPo2}) that we are no longer using.
If $u$ is in $W^ {1,p} ( \Omega)$ and $ u =0$ on a
subset $S\subset \sball x r $ with $\sigma(S) > cr^ { n-1}$ and $
r< r_0$, then we
have that $$ \int _{ \sball x r } u^p \, d\sigma \leq \frac {Cr^ { n+p}}
{\sigma(S)} \int _ {\Omega \cap \ball x {Cr}} |\nabla u | ^ p \, dy. $$
\begin{proof} 1. We first consider the case where $ \Omega = \{ x: x_n > 0\}$ and suppose that our ball, $B= B_1(0) $, is centered at the origin. We let $ \bar u = -\!\!\!\!\!\!\int _{ B_r^ +} u dy$, extend $u-\bar u$ to a function $E(u-\bar u) $ on $
{\bf R} ^ n _+$ by reflecting in the ball $ |x|=1$ and multiplying by a cut-off function $ \eta$ which is one on $B$ and supported in $2B$. Let $ v=\eta E(u-\bar u)$ denote the resulting function.
2. According to Runst and Sickel \cite{MR98a:47071}, we have the trace theorem $$
\|v\|_{ B^ {p,p}_{ 1-1/p}} \leq \|v\|_{ W^ { 1,p}( {\bf R} ^ n _ +)}. $$
3. Using Poincar\'e inequalities and properties of the extension operator, we can show $$
\|v\|_ {W^ { 1,p}({\bf R}^n_+)} \leq \|\nabla u \|_ { L^p( B_+)}. $$
4. Recalling the definition of the Besov norm and that $u=0$ on $S$, we have \begin{eqnarray*}
\frac { \sigma ( S) } { 2^{n-2+p}} \int _\Delta |u|^p \, d\sigma
& \leq & \int_ \Delta \int _ \Delta \frac { |u(x',0) - u(y',0) | ^ p }{ |x'-y'| ^
{ n-2 +p} } \, d\sigma d\sigma \\
& \leq &
\int_ {{\bf R} ^ { n-1} } \int _ {{\bf R} ^ { n-1} } \frac { |v(x',0) - v(y',0) | ^ p }{ |x'-y'| ^
{ n-2 +p } } \, d\sigma d\sigma \\
& \leq & \|v\|^p_ {W^ { 1,p}({\bf R}^n_+)} \end{eqnarray*}
where we use $ \Delta $ to denote the ball $ \{ x': |x'|<1\}$. This uses that $v(x',0) - v(y',0)= u(x',0)-u(y',0)$ for $x', y' \in \Delta$.
The inequalities in 3 and 4 give the result when $ \Omega $ is a half-space and $r=1$. Rescaling gives the result for general $r$. For a general Lipschitz domain, we may change variables to reduce to the problem in a half-space. The image of $ \sball x r$ will be contained in a ball of radius $\sqrt{1+M^2}r$ where $M$ is the Lipschitz constant. \end{proof}
If $ \bar u = -\!\!\!\!\!\!\int _{ \Omega \cap \ball x r } u \, dy $ or if $u$ vanishes on $ S\subset \partial \Omega \cap \sball x r$ and $ \sigma (S) > cr^ { n-1}$, then we have $$
\left( \int _ { \partial \Omega \cap \sball x r } |u- \bar u | ^
{(n-1)p/(n-p) } \, d\sigma \right) ^ {\frac {n-p} {(n-1)p} } \leq C\left( \int _ { \Omega \cap \ball x {Cr} } |\nabla u |^ p \, dy\right) ^ { 1/p} . $$
\begin{proof}[Proof of SoPo2] 1. We change variables to reduce to the case of a half-space and then rescale to obtain a radius of 1. Note that the change of variables, $ (x',x_n) \rightarrow ( x', \phi(x') + x_n)$ has Jacobian 1, so that this preserves mean value zero.
2. In the case where $ \bar u$ is the average, we may extend and cut off as in the previous result and let $ v = \eta E( u-\bar u)$. We apply the trace theorem to obtain that $v$ is in a Besov space $B^ {
p,p}_{ 1-1/p}$ \cite[p.~75]{MR98a:47071}. Next, we can use the embedding for Besov spaces on the boundary to conclude that
$$\| v\|_ {L^ {(n-1)p/(n-p)} ( \partial \Omega)} \leq \|v\|_ {B^ { p,p}
_{ 1- 1 / p} (\Omega)}.$$ As $v =u - \bar u$ on $ \sball 0 1$, we have the desired result.
3. In the case where $ \sball 0 1 $ intersects the boundary and $u$ vanishes on a set $S$, we use the previous result to conclude that we have the estimate $$ \int_{ \Delta } u ^ p \leq C\int _ { \ball 0 1 \cap {\bf R} ^ n _+}
|\nabla u | ^ p \, dy. $$
4. From this inequality, we then can show that $$
\int _{B_+} u^p \,dx \leq C \int _ \Delta u^p + \int _{B_+} |\nabla u
| ^ p \, dy. $$
5. Since $u$ is in $W^ { 1,p}$, we may then extend and multiply by a cut-off function and obtain a function $v$ which is in $W^ { 1,p}( {\bf R}^n _+)$. Applying the trace theorem, we conclude that $v$ is in the Besov space $ B^ {p,p}_{ 1-1/p}$ of the boundary and this space embeds into $ L^ { (n-1)p/(n-p)}$.
6. Given the result in a half space, the result stated in a Lipschitz domain follows by a change of variables. \end{proof}
}
\begin{lemma} \label{RHEstimate} Let $ \Omega$, $D$ and $N$ be a standard domain for the mixed problem. Let $ x \in \Omega$ and suppose that $r$ satisfies $ 0< r < r_0$. Let $u$ be a weak solution of the mixed problem (\ref{WeakMix}) with zero Dirichlet data and Neumann data $f$ in $L^p(N)$ which is supported in $ N \cap \sball x r $. There exists $ p_0=p_0(n,M) > 2 $ so
that for $t $ in
$[2,p_0) $ if $n \geq 3$ or $t$ in $(2,p_0)$ if $n =2 $, we have the estimate \begin{eqnarray*}
\lefteqn{ \left ( -\!\!\!\!\!\!\int _{ \dball x r } |\nabla u |^t \, dy \right )^ { 1/t}} \\
& \leq & C\left[ -\!\!\!\!\!\!\int _ {\dball x {2r} } |\nabla u |\,dy
+\left( \frac 1 { r^ { n-1} }
\int _{ \sball x {2r} \cap N } |f|^{t(n-1)/n}\, d\sigma\right) ^ { n/(t(n-1))}\right] . \end{eqnarray*} The constant in this estimate depends on $t$, $M$ and $n$. \end{lemma} \note{ In applications, we seem to only need $f$ bounded. Is it worth the trouble to keep track of the exponents? }
\begin{proof} According to Lemma \ref{MSIRHI}, $ \nabla u$ satisfies a reverse H\"older inequality and thus we may apply a result of Giaquinta \cite[p.~122]{MG:1983} to conclude that there exists $p_0 > 2$ so that we have $$
\left( -\!\!\!\!\!\!\int _ {\dball x r } | \nabla u|^ t
\, dy \right) ^ { 1/t} \leq C\left [ -\!\!\!\!\!\!\int_ {\dball x {2r} } |\nabla u | \, dy + \left( -\!\!\!\!\!\!\int _ { \dball x { 2r} }
(P_ {2r} |f|^p)^{ t/p} \, dy \right) ^ { 1/ t }\right] $$ for $t $ in $[2,p_0)$ and $p$ as in Lemma \ref{MSIRHI}. From this, we may use Lemma \ref{PEstimate} to obtain $$
\left( -\!\!\!\!\!\!\int _ {\dball x r } | \nabla u|^ t
\, dy \right) ^ { 1/t} \leq C\left [ -\!\!\!\!\!\!\int_ {\dball x {2r} } |\nabla u | \, dy + \left( -\!\!\!\!\!\!\int _ { \sball x { 4r} }
|f|^{ t(n-1)/n} \, d\sigma \right) ^ { n/ t(n-1) }\right] $$ when $ n \geq 3$ and $ t $ is in $[2,p_0)$. If $n=2$ we need $ t >2$ so that $f$ is raised to a power larger than 1. Now a simple argument that involves covering $ \sball x r$ by surface balls of radius $r/4$ allows us to
conclude the estimate of the Lemma. \end{proof}
\section{Estimates for solutions with atomic data} \label{Atoms}
We establish an estimate for the solution of the mixed problem when the Neumann data is an atom for $ H^1$ and the Dirichlet data is zero. The key step is to establish decay of the solution as we move away from the support of the atom. We will measure the decay by taking $L^q$-norms in dyadic rings around the support of the atom. Thus, given a surface ball $ \sball x r$, $ x \in \partial \Omega$, we define $\Sigma_k = \sball x { 2^ k r} \setminus \sball x {2^ { k-1} r}$ and define $ S_k = \dball x {2^k r } \setminus \dball x { 2^ {
k-1} r } $.
\begin{theorem} \label{AtomicTheorem} Let $ \Omega$, $N$ and $D$ be a standard domain for the mixed problem. Let $u$ be a weak solution of the mixed problem with Neumann data $a$, an atom supported in $N \cap\sball x r $, and zero Dirichlet data.
If $p_0$ is as in Lemma \ref{RHEstimate} and $ 1< q < p_0/2$, then we have $ \nabla u \in L^ q ( \partial \Omega)$, \begin{equation} \label{LocalPart}
\left( \int _{\sball x {8r} } |\nabla u |^q \, d \sigma \right)^ {
1/q} \leq C \sigma (\sball x {8r} )^ {-1/q'} \end{equation} and for $ k \geq 4$, \begin{equation} \label{Decay}
\left ( \int _{ \sring k} |\nabla u |^q \,d\sigma \right) ^ { 1/q} \leq C 2^ {-\beta k} \sigma( \sring k ) ^ {- 1/q'} . \end{equation} Here, $ \beta $ is as in Lemma \ref{Green} and the constant $C$ in the estimates (\ref{LocalPart}) and (\ref{Decay}) depends on $q$ and the global character of the domain. \end{theorem}
If $r < r_0 $ and $x$ is in $\partial \Omega$, then we may construct a star-shaped Lipschitz domain $ \locdom x r = Z_r(x) \cap \Omega$ where $\cyl x r $ is the coordinate cylinder defined above. Given a function $v$ defined in $ \Omega$, $x \in \partial \Omega$, and $r>0$, we define a truncated non-tangential maximal function $ \nontan{v_r} $ by $$
\nontan v_r (x) = \sup _{ y \in \ntar x \cap \ball x r } |v(y)|. $$
\begin{lemma} \label{NeumannRegularity} Let $\Omega$ be a Lipschitz domain. Suppose that $ x \in \partial \Omega $ and $0< r < r_0$. Let $u$ be a harmonic function in $ \locdom x {4r} $. If $\nabla u \in L^2 ( \locdom x {4r} )$ and
$ \partial u /\partial \nu $ is in $L^ 2 (\partial \Omega \cap \partial \locdom x {4r} ) $, then we have $ \nabla u \in L^2 ( \sball x r)$ and $$ \int _ { \sball x {r}} (\nontan {( \nabla u )}_r)^2 \, d\sigma
\leq C \left ( \int _ { \partial \Omega \cap \partial \locdom x {4r} } \left |\frac { \partial u }{ \partial
\nu } \right | ^ 2 \, d\sigma
+ \frac 1 r \int _ {\locdom x {4r} } |\nabla u |^2 \, dy\right). $$ The constant $C$ depends only on the dimension $n$ and $M$. \end{lemma}
\begin{proof} Since the estimate only involves $\nabla u$, we may subtract a constant from $u$ so that $ \int _ {\locdom x r } u \, dy = 0$. We pick a smooth cut-off function $ \eta$ which is one on $ Z_{3r}(x)$ and zero outside $Z_{4r}(x)$. Since we assume that $\nabla u$ is in $L^2( \locdom x {4r } )$, it follows that $\Delta ( \eta u) = u \Delta \eta + 2 \nabla u \cdot \nabla \eta $ is in $L^2( \locdom x {4r} )$. Thus, with $ \Xi$ the usual fundamental solution of the Laplacian, $w = \Xi*(\Delta( \eta u))$ will be in the Sobolev space $W^ {2,2}( {\bf R} ^n)$. We have defined $ \Delta (\eta u )$ to be zero outside $ \locdom x {4r}$ in order to make sense of the convolution in the definition of $w$. Next, we
let $v$ be the solution of the Neumann problem $$ \left\{ \begin{array} {ll} \Delta v = 0, \qquad & \mbox {in } \locdom x {4r } \\ \bigfrac { \partial v } { \partial \nu } = \bigfrac { \partial( \eta u) } { \partial \nu }- \bigfrac {\partial w }{ \partial \nu}, \qquad & \mbox{on }\partial \locdom x {4r } . \end{array} \right. $$ According to Jerison and Kenig \cite{JK:1982c}, the non-tangential maximal function of $\nabla v$ will be in $L^2 ( \partial \locdom x {4r} )$. By uniqueness of weak solutions to the Neumann problem, we may add a constant to $v$ so that we have $ \eta u = v+w$. As $w$ and all its derivatives are bounded in $ \locdom x {2r}$ and the non-tangential maximal function of $\nabla v$ is in $L^2( \partial \locdom x {4r})$, we obtain the Lemma. \end{proof}
The proof of the following Lemma for the regularity problem is identical to the proof of Lemma \ref{NeumannRegularity}.
\begin{lemma} \label{DirichletRegularity} Let $\Omega$ be a Lipschitz domain. Suppose that $ x \in \partial \Omega $ and $0< r < r_0$. Let $u$ be a harmonic function in $ \locdom x { 4r} $.
If $\nabla u \in L^2 (\locdom x {4r} )$ and
$ \tangrad u $ is in $L^ 2 (\partial \Omega \cap \partial \locdom x { 4r} )$, then we
have $ \nabla u \in L^2 ( \sball x r)$ and $$ \int _ { \sball x {r}} (\nontan {( \nabla u )}_r)^2 \, d\sigma
\leq C \left( \int _ { \partial \Omega \cap \partial \locdom x { 4r} }| \tangrad u |^2 \, d\sigma
+ \frac 1 r \int _ {\locdom x { 4r} } |\nabla u |^2 \, dy\right). $$ The constant $C$ depends only on the dimension $n$ and $M$. \end{lemma}
The following weighted estimate will be an intermediate step towards our estimates for solutions with atomic data. In the next lemma, $\Omega$ is a bounded Lipschitz domain and the boundary is written $ \partial \Omega = D \cup N$. Recall that $ \delta(x)$ denotes the distance from $x$ to the set $\Lambda$.
\begin{lemma} \label{Whitney} Let $\Omega$, $D$ and $N$ be a standard domain for the mixed problem.
Let $u$ be a weak solution of the mixed problem (\ref{WeakMix}) with Neumann data $f_N \in L^2 (N)$ and zero Dirichlet data.
Let $\epsilon \in {\bf R}$, $x \in \partial \Omega$ and $0< r < r_0$ and assume that for some $A>0$, $ \delta (x) \leq Ar$. Then we have $$ \int _ {\sball x r } (\nontan{( \nabla u)}_{c\delta}) ^2 \, \delta ^{1-\epsilon} d\sigma \leq C \left ( \int _ { \sball x {2r}}
|f_N|^2 \delta ^ { 1- \epsilon } \, d\sigma
+ \int _ { \dball x { 2r} } |\nabla u |^2 \, \delta ^ {
-\epsilon } \, dy \right ) . $$ The constant in this estimate depends on $M$, $n$, $\epsilon$ and $A$. \end{lemma}
\note { Using a Hardy inequality, we can probably show that $u$ in $L^2( N ; \delta \, d\sigma )$ implies that $u$ is in the dual of $W^ { 1/2,
2}_D ( \partial \Omega)$.
} The proof below uses a Whitney decomposition and thus it is simpler if we use surface cubes, rather than the surface balls used elsewhere. A {\em surface cube }is the image of a cube in $ {\bf R} ^ { n-1}$ under the mapping $
x' \rightarrow ( x' , \phi(x'))$. Obviously, each cube will lie in a
coordinate cylinder.
\begin{proof} We may assume that $ \dball x {2r}$ is contained in a coordinate cylinder $ Z_{ 2r_0}$. If $Z_ { 100r_0} \cap \partial \Omega \subset N$ or $Z_ { 100r_0} \cap \partial \Omega \subset D$, then the estimate of the Lemma follows easily from Lemma \ref{NeumannRegularity} or Lemma \ref{DirichletRegularity} since we have that $ \delta (y) $ is equivalent to $r$ for $y \in \dball x {2r}$. This equivalency follows from our assumption that $ \delta (x) < Ar$ and that $Z_{ 100r_0}$ does not intersect $\Lambda$.
If $ Z_ { 100r_0}$ meets both $D$ and $N$, we begin by finding a decomposition of $ ( \partial \Omega \cap Z_{ 4r_0}) \setminus \Lambda $ into non-overlapping surface cubes $ \{ Q_j \}$ which satisfy: 1) For each cube $ Q_j$, we have constants $c''$ and $c'$ so that $c''\delta (y) \leq \mathop{\rm diam}\nolimits (Q_j) \leq c ' \delta (y)$ for $ y \in Q_j$. The constant $c'$ may be chosen as small as we like. 2) We let $T(Q) = \{ y \in \Omega : \mathop{\rm dist}\nolimits (y, Q ) < \mathop{\rm diam}\nolimits Q\}$. Then the family $ \{ T(2Q_j)\}$ has bounded overlaps and thus $$ \sum \chi _{ T(2Q_j)}\leq C (n, M, c''). $$ To construct the family of surface cubes, begin with a Whitney decomposition of $ {\bf R} ^ { n-1}\setminus \{ ( \psi (x''), x'' ): x''\in {\bf R} ^ { n -2}\}$ and then map the cubes onto the boundary with the map $x' \rightarrow ( x', \phi(x'))$. Here, $ \phi$ and $\psi $ are the functions used to describe $\partial \Omega$ and $\Lambda$ in the coordinate cylinder $Z_{ r_0}$.
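We recall, for the reader's convenience, the standard properties of a Whitney decomposition that are used here: each cube $Q$ in the decomposition satisfies $ \mathop{\rm diam}\nolimits (Q) \leq \mathop{\rm dist}\nolimits (Q, \{ ( \psi(x''), x''): x'' \in {\bf R}^{n-2}\}) \leq 4 \mathop{\rm diam}\nolimits(Q)$, and the cubes are non-overlapping. Since the map $ x' \rightarrow (x', \phi(x'))$ distorts distances by at most a factor of $ \sqrt{1+M^2}$, property 1) follows (after subdividing the cubes, if necessary, to make $c'$ small), and 2) then follows since the $Q_j$ are disjoint and any $Q_j$ with $ y \in T(2Q_j)$ has diameter comparable to $ \delta(y)$ and lies within distance $ C\delta(y)$ of $y$.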
As the surface cubes $ Q_j$ are connected and $ \delta $ never vanishes on $Q_j$, we have that either $ Q_j \subset N$ or that $ Q_j \subset D$. We choose the constant $ c'$ small so that $ Q _ j \cap \sball x r \neq \emptyset$ implies that $T(2Q_j)\subset \dball x { 2r} $. Let $r_j$ be the diameter of the cube $Q_j$. Applying Lemma \ref{NeumannRegularity} or Lemma \ref{DirichletRegularity}, we conclude that \begin{equation} \label{Whit1}
\int _ { Q_j } |\nabla u |^ 2 \, d\sigma \leq C \left ( \int _ { 2Q_j
\cap N} \left| \frac { \partial u } { \partial \nu } \right | ^ 2 \, d\sigma + \frac 1 { r_j } \int _ { T( 2Q_j)} |\nabla u |^ 2 \, dy \right) . \end{equation} We multiply equation (\ref{Whit1}) by $ r_j ^ { 1-\epsilon } $, choose $c'$ small so that $ r _j $ is equivalent to $ \delta (y)$ in $T(2Q_j)$ and obtain \begin{equation} \label{Whit2}
\int _ { Q_j } |\nabla u |^ 2 \delta ^ { 1- \epsilon} \, d\sigma \leq C \left ( \int _ { 2Q_j
\cap N} \left | \frac { \partial u } { \partial \nu } \right | ^ 2
\delta ^ { 1- \epsilon } \, d\sigma + \int _ { T( 2Q_j)} |\nabla u |^ 2 \delta ^ { -\epsilon }\, dy \right) . \end{equation} We sum over $j$ such that $ Q_j \cap \sball x r \neq \emptyset$ and use that the family $ \{ T(2Q_j)\}$ has bounded overlaps to obtain the Lemma. \end{proof}
An important part of the proof of our estimate for the mixed problem is to show that a solution with Neumann data an atom will decay as we move away from the support of the atom. This decay is encoded in estimates for the Green function for the mixed problem. These estimates rely in large part on the work of de Giorgi \cite{EG:1957}, Moser \cite{JM:1961} and Nash \cite{JN:1958}, on H\"older continuity of weak solutions of elliptic equations with bounded and measurable coefficients, and the work of Littman, Stampacchia and Weinberger \cite{LSW:1963} who constructed the fundamental solution of such operators. Also, see Kenig and Ni \cite{MR87f:35065} for the construction of a global fundamental solution in two dimensions. Given the free space fundamental solution, the Green function may be constructed by reflection in a manner similar to the construction given for graph domains in \cite{LCB:2008}. A similar argument was used by Dahlberg and Kenig \cite{DK:1987} and by Kenig and Pipher \cite{KP:1993} in their studies of the Neumann problem. Once we have a Green function which satisfies the correct boundary conditions in a coordinate cylinder, we may solve a weak version of the mixed problem to obtain a Green function in all of $ \Omega$.
\begin{lemma} \label{Green} Let $\Omega$, $N$ and $D$ be a standard domain for the mixed problem. There exists a Green function $G(x,y)$ for the mixed problem which satisfies: 1) If $G_x(y) = G(x,y)$, then $ G_x$ is in $W^ { 1,2}_D ( \Omega \setminus \ball x r )$ for all $r>0$, 2) $\Delta G_x = \delta _x$, the Dirac $\delta$-measure at $x$, 3) If $f_N$ lies in $ W^ {-1/2, 2} _D ( \partial \Omega)$, then the solution of the mixed problem with $f_D=0$ can be represented by $$ u ( x) = - \langle f_N , G_x\rangle _{\partial \Omega} , $$ 4) The Green function is H\"older continuous away from the pole and satisfies the estimates $$
|G(x,y) - G(x,y')| \leq \frac { C|y-y'|^ \beta } { |x-y |^ { n-2+\beta
}} , \qquad |x-y| > 2 |y-y'|, $$
$$| G(x,y) | \leq \frac C { |x-y|^ { n-2} }, \qquad n\geq 3, $$ and with $ d = \mathop{\rm diam}\nolimits( \Omega)$,
$$| G(x,y) | \leq C( 1+ \log (d/ |x-y|)) , \qquad n = 2. $$ \end{lemma} \note { Construction of a Green function.
1. We begin by recalling that an elliptic operator with bounded measurable coefficients has a Green function in all of $ {\bf R}^n$. This is proven by Littman, Stampacchia and Weinberger \cite{LSW:1963} for dimensions $n \geq 3$. The details when $n=2$ may be found in Kenig and Ni \cite{MR87f:35065}.
2. (Moser, \cite{JM:1961}) If $u$, defined in $\ball x {2r}$, is a solution of an elliptic operator with bounded measurable coefficients, then $u$ is H\"older continuous and satisfies the estimates below.
\begin{eqnarray*}
|u(x) | & \leq & \frac 1 { r^n} \int _ {\ball x {2r}} |u(y) | \, dy \\
|u(y) -u(z) | &\leq & C ( |y-z|/r) ^ \beta \sup _{\ball x {2r}}
|u(y)|, \qquad y, z \in \ball x r. \end{eqnarray*}
3. We cover $ \partial \Omega$ by a collection of coordinate cylinders $\{ Z_i\}_{ i =1, \dots, N}$, with $ Z_ i = \cyl {x_i} {
r_i}$ and we assume that for each $i$, $4Z_i = \cyl { x_i} {4r_i}$ is also a coordinate cylinder. We also assume that each coordinate cylinder satisfies one of the following case a) $ 4Z_i \cap \partial \Omega\subset D$, b) $ 4Z_i \cap \partial \Omega\subset N$, c) $ \Lambda \cap 4Z_i$ is given as a graph as in the definition for cylinders centered at a point in $ \Lambda $.
4. Fix $x$ and suppose that $x$ lies in one of the cylinders $Z_i$. Using the reflection argument as discussed by Dahlberg and Kenig \cite{DK:1987} (for the pure Neumann or Dirichlet case) or in Lanzani, Capogna and Brown \cite{LCB:2008}, we can construct a first approximation to the Green function $G_0(x,y)$ which satisfies $ \Delta_y G_0(x,y) = \delta_x$ for $ y \in 4Z_i$ (and with $\delta_x$ denoting the $\delta$-function), $G_0(x, y ) = 0$ for $y \in D \cap 4Z_i$, and $\partial G_0(x,y) /\partial \nu_y =0$ for $y \in N \cap 4Z_i$. Since $G_0$ is not defined in all of $ \Omega$, we need to introduce a cut-off function $ \eta$ which is one on $ 2Z_i$ and zero outside $4Z_i$.
We note that $ G_0$ vanishes on $D\cap 4Z_i$. Thus, we have $$ \left\{ \begin{array}{ll} \Delta_y h(x,y) = \Delta_y \eta(y) G_0(x,y) , \qquad & \mbox{in }
\Omega\\ h(x,y) = 0, \qquad &y \in D \\ \partial h(x,y) / \partial \nu_y = \partial\eta (y) G_0 (x,y) /\partial \nu, \qquad & y \in N . \end{array} \right. $$
5. We can estimate $$
\| \partial
\eta G_0(x,\cdot )/ \partial \nu \|_ { W_D^ {-1/2,2} (\partial \Omega) } +
\|\Delta
\eta G_0(x,\cdot)\|_{ H^ { -1}( \Omega)} \leq C $$ where the constant is independent of $x$. This is because the data for this mixed problem is zero. Thus, the solution $ h(x,\cdot)$ to the boundary value problem in 4. satisfies $$
\|h(x, \cdot ) \| _ {L^2 ( \partial \Omega) } + \| \nabla h(x,\cdot)
\|_{ L^2 ( \partial \Omega)} \leq C. $$ (Check details.)
6. We may define the Green function $G(x,y) = h(x,y) + G_0(x,y)$. The pointwise estimates of the Lemma follow from the estimates for $G_0$, the estimate for $h$ in 5. and the boundedness of solutions in 2. The H\"older continuity follows from the upper bounds for the fundamental solution and the estimate in 2.
This construction gives $G$ for $x$ in a coordinate cylinder. For $x$ in the interior, we may let $G_0$ be $ \eta \Xi$ where $\eta $ is a smooth function which is one in a neighborhood of $x$ and zero near the boundary.
7. We now turn to the representation formula. We write $ G(x,y) = \Xi (x-y) - g(x,y)$ where $\Xi$ is the free space fundamental solution and $g$ is defined by this equation.
We let $u$ be a weak solution with Neumann data $f_N$ and $f_D=0$. We fix $x$ and let $ \eta$ be a cut-off function which is one in a neighborhood of the boundary and 0 in a neighborhood of $x$. As $u$ lies in $W^ {1,2}_D ( \Omega)$ and $ \eta G(x,\cdot) $ also lies in $W^ { 1,2}_D( \Omega)$ we may apply the weak formulation of the mixed problem to obtain that \begin{eqnarray*} \langle f _N , G(x, \cdot)\rangle_ {\partial \Omega} & =& \int _ \Omega \nabla u \cdot \nabla ( \eta G(x, \cdot))\, dy\\ 0 = \langle u , \partial G(x,\cdot ) / \partial \nu \rangle _{ \partial \Omega } & = & \int_{ \Omega } \nabla u \cdot \nabla ( \eta G(x, \cdot) )+ u \Delta ( \eta G( x, \cdot) )\,d y . \end{eqnarray*} Subtracting these expressions gives that \begin{eqnarray*} \langle f _N , G(x, \cdot)\rangle & = & - \int_ { \Omega} 2 u \nabla \eta \cdot \nabla G(x,\cdot)
+ u \Delta \eta \, G(x,\cdot)\, dy \\ & = & \int_ { \Omega} 2u \nabla (1-\eta) \cdot \nabla G(x,\cdot) +u \Delta( 1- \eta) G(x,\cdot)\, dy. \end{eqnarray*} Since the function $\nabla( 1- \eta)$ is supported away from $x$ and the boundary, we may integrate by parts in the first term in the integral below to obtain, \begin{eqnarray*}
\int_ { \Omega} 2u \nabla (1-\eta) \cdot \nabla G(x,\cdot) +u \Delta( 1- \eta) G(x,\cdot)\, dy & = & - \int_ \Omega 2\nabla u \cdot \nabla ( 1-\eta) G(x, \cdot) \\ & & \qquad + u \Delta (1-\eta) G(x, \cdot) \, dy \\ & = & - \int _ \Omega G(x, \cdot) \Delta ( u ( 1-\eta))\, dy. \end{eqnarray*} From the standard properties of a fundamental solution, we obtain that $$u(x) = - \langle f_N, G(x, \cdot) \rangle_{\partial \Omega} .$$ }
\begin{lemma} \label{Energy} Let $u$ be a weak solution of the mixed problem (\ref{WeakMix}) with Neumann data $f$ in $L^ {p}(N)$ where $p = (2n-2)/n $ for $ n\geq 3$. Then we have the estimate $$
\int _{\Omega } |\nabla u | ^2 \, dy \leq C\| f\|^2_ {L^ p ( N)} . $$
If $n =2$, we have $$
\int _{\Omega } |\nabla u | ^2 \, dy \leq C\| f\|^2_ {H^1 (N)} . $$
In each case, the constant $C$ depends on $\Omega$ and the constant in (\ref{coerce}). \end{lemma}
\begin{proof} When $ n \geq 3$, we use that $ W^ { 1/2, 2}_D( \partial
\Omega ) \subset L^ { 2(n-1) /( n-2)} ( \partial
\Omega)$. By duality, we see that $ L^ { 2(n-1)/n}( \partial \Omega) \subset W^ { -1/2, 2}_D( \partial \Omega ) $ and since the weak solution of the mixed problem satisfies $$
\int_\Omega |\nabla u |^2 \, dy \leq C \| f\|^2 _ { W^ { -1/2,2}_D( \partial
\Omega)} $$ the Lemma follows.
When $ n=2$, the proof above fails since we do not have $ W^ { 1/2, 2}_D(\partial \Omega ) \subset L^ \infty( \partial \Omega)$. However, we do have the embedding $W^ { 1/2, 2}_D(\partial \Omega ) \subset BMO(\partial \Omega) $. Since $ \phi \in W^ { 1/2, 2}_D( \partial \Omega)$ vanishes on $D$ and $ f \in H^1(N)$ has an extension $\tilde f$ which lies in $H^1( \partial \Omega)$, we obtain the result for $n=2$. \end{proof}
Finally, we give a technical lemma that will be used below. \begin{lemma} \label{DorN}
Let $ \Omega $, $N$ and $D$ be a standard domain for
the mixed problem and suppose that $0 < r < r_0$, $ x \in \partial
\Omega$ and $ \delta (x) > r \sqrt { 1+M^2}$. Then we have $ \sball x r \subset N$ or $ \sball x r \subset D$. \end{lemma}
\begin{proof} We fix $ y \in \sball x r $. Since $ r< r_0$, we may find a coordinate cylinder $Z$ which contains $ \sball x r$. We let $ \phi $ be the function whose graph gives $ \partial \Omega$ near $ Z $. Since $ y
\in\sball x r$, we have $|x'-y'| < r$. We let $ x'(t) = ( 1-t) x' + t y '$ and then $ \gamma (t) = ( x' (t) , \phi (x' (t)))$ gives a path in $ \partial \Omega$ joining $x$ to $y$ and of length at most $ r\sqrt { 1+ M^ 2}$. Since $ \delta (x )> r\sqrt { 1+M^2}$ and $ \delta$ is Lipschitz with constant one, we have that $ \delta( \gamma(t) ) >0$ for $ 0 \leq t \leq 1$. Since $ \gamma(t)$ does not pass through $ \Lambda $ we must have $x$ and $y$ both lie in $D$ or both lie in $N$. As $y$ is an arbitrary point in $\sball x r$, it follows that $ \sball x r $ lies entirely in $D$ or entirely in $N$. \end{proof}
\begin{proof}[Proof of Theorem \ref{AtomicTheorem}] It suffices to restrict attention to atoms which are supported in a surface ball $\sball x r$, with $ x\in\partial \Omega$ and $0< r< r_0$, since an atom which is supported in a larger surface ball can be sub-divided into a finite number of atoms which are supported in balls of the form $ \sball x { r_0}$. The increase in the constant due to this step will depend on the global character of the domain.
Thus, we fix an atom $a$ that is supported in the set $ \sball x r \cap N$ and begin the proof of (\ref{LocalPart}). We consider two cases: a) $ \delta (x) \leq 16r \sqrt { 1+M^2}$, and b) $ \delta (x) > 16r \sqrt { 1+M^2}$.
In case a) we fix $ q$ between 1 and 2 and use H\"older's inequality with exponents $ 2/q$ and $2 /(2-q)$ to find \begin{eqnarray*}
\left( \int _{ \sball x { 8r}} |\nabla u | ^ q\, d\sigma \right) ^ {
1/q}
& \leq& \left( \int_ { \sball x { 8r}} |\nabla u |^ 2\delta^ { 1-\epsilon} \, d\sigma \right) ^
{ 1/2} \left ( \int _ { \sball x { 8r}} \delta ^ { \frac { q (
\epsilon -1)} { 2-q}}\,d \sigma \right) ^ { \frac {2-q}{ 2q}} \\ & \leq & C r^ { (n-1) ( \frac 1 q -\frac 1 2 ) + \frac { \epsilon -1}2}
\left( \int_{ \sball x { 8r}}|\nabla u | ^2 \delta ^ { 1- \epsilon} \, d\sigma \right)^ { 1/2} . \end{eqnarray*} The second inequality requires that $q$ and $ \epsilon$ satisfy $ q ( \epsilon -1) /( 2- q) > -1$ or $ q < 1 /( 1-\epsilon /2)$. Next, we use Lemma \ref{Whitney} and our assumption that $ \delta(x) \leq 16r \sqrt { 1+M^2}$ to bound the weighted $ L^ 2 ( \delta ^ {
1-\epsilon } d\sigma )$ norm of $ \nabla u $. This gives us \begin{eqnarray*}
\left( \int _ { \sball x {8r} } |\nabla u | ^ q \, d\sigma \right) ^ {
1/q} & \leq & C \left [ \left ( \int _ { \sball x { r} \cap N }
|a |^ 2 \delta ^ { 1-\epsilon} \, d\sigma \right ) ^ { 1/2} \right. \\
& & \quad + \left . \left ( \int _ { \dball x { 16r}} |
\nabla u | ^ 2 \delta ^ { -\epsilon } \, dy \right ) ^ { 1/2} \right ] r ^ { ( n-1)( \frac 1 q - \frac 1 2 ) + \frac { \epsilon -1 } 2} . \end{eqnarray*} We estimate the integral over $ \dball x {16r}$ in this last expression with H\"older's inequality and obtain \begin{eqnarray*}
\lefteqn{ \left ( \int _ { \dball x { 16r}} |
\nabla u | ^ 2 \delta ^ { -\epsilon } \, dy \right ) ^ {1/ 2 } } \qquad \\ & \leq &
C \left ( \int _ { \dball x {16r}} |\nabla u | ^ p \, dy \right) ^ {
1/p} \left ( \int _ { \dball x { 16r} } \delta ^ { - \epsilon p/(
p-2)}\, dy \right ) ^ { 1/ 2 - 1/ p } \\ & \leq & C r ^ { n ( \frac 1 2 - \frac 1 p) - \epsilon / 2 }\left ( \int _
{ \dball x { 16r}}|\nabla u | ^ p \, dy \right ) ^ { 1/p} . \end{eqnarray*} The second inequality depends on our assumption on $ \Lambda $ and holds when $ \epsilon p /( p-2) < 2 $ or $p > 2 / ( 1- \epsilon /2)$. Now we may use the three previous displayed equations and Lemma \ref{RHEstimate} to obtain $$
\left ( \frac 1 { r^ { n-1}}\int _ { \sball x { 8r}}|\nabla u | ^ q \, d\sigma \right ) ^ { 1/q} \leq C \left [ \left ( \frac 1 { r^ n } \int
_ {\dball x { 32r}} |\nabla u |^ 2\, dy \right ) ^ { 1/2} + r
^ { 1-n } \right ]. $$
In this last step, we have used the normalization of $a$, $ \| a \|_{
L^ \infty } \leq 1/ \sigma ( \sball x r )$ to estimate the term involving the Neumann data from Lemma \ref{RHEstimate}. Finally, we may use Lemma \ref{Energy} and the normalization of the atom to obtain that $( r^ { -n} \int _ \Omega
|\nabla u |^ 2 \, dy ) ^ { 1/2} \leq C r^ { 1-n}$ which gives the estimate (\ref{LocalPart}).
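For the reader's convenience, the arithmetic behind this last bound when $ n \geq 3$ is the chain (using Lemma \ref{Energy}, the bound $ \| a \|_{ L^\infty} \leq C r^{1-n}$ and $ \sigma ( \sball x r ) \leq C r^{n-1}$)
$$
\left( \frac 1{r^n} \int_\Omega |\nabla u |^2 \, dy \right)^{1/2} \leq C r^{-n/2} \, \| a \|_{ L^{ 2(n-1)/n} (N)} \leq C r^{-n/2} \, \| a \|_{ L^\infty} \, \sigma ( \sball x r )^{ \frac n{2(n-1)}} \leq C r^{-n/2} \, r^{1-n} \, r^{n/2} = C r^{1-n} .
$$
When $n=2$, one uses instead the bound $ \| a \|_{ H^1(N)} \leq C$ in Lemma \ref{Energy} and obtains the same conclusion.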
In case b), we use Lemma \ref{DorN} to conclude that $ \sball x { 16r} \subset N$. Next, we use H\"older's inequality, that $a$ is supported in $ \sball x r$ and Lemma \ref{NeumannRegularity} to obtain \begin{eqnarray}
\lefteqn{ \left( \frac 1 { r ^ { n-1}} \int _{ \sball x { 8r} } |\nabla u | ^ q \, d\sigma \right ) ^ { 1/q} } \nonumber \\
& \leq &C \left (\frac 1 { r ^ { n-1}} \int _ { \sball x { 8r} } |\nabla u | ^ 2 \, d\sigma \right ) ^ { 1/2} \nonumber \\ & \leq & C \left [ \left ( \frac 1 { r^ { n-1}} \int _ { \sball x r \cap N }
|a|^2\, d\sigma \right ) ^ { 1/2}
+ \left(\frac 1 { r ^ n } \int _ {
\dball x { 16r}}|\nabla u | ^ 2 \, dy \right ) ^ { 1/2} \right]. \label{EasyCase} \end{eqnarray} Using the normalization of the atom $a$ and Lemma \ref{Energy}, the right-hand side of (\ref{EasyCase}) may be estimated by $ \sigma(\sball x { 8r} ) ^ { -1} $ and we obtain (\ref{LocalPart}) in this case.
Now we turn our attention to the proof of the estimate (\ref{Decay}). Our first step is to observe that the solution $u$ satisfies the estimate \begin{equation} \label{AtomDecay}
|u(y) | \leq \frac { Cr ^ \beta } { |x-y |^ { n-2+ \beta } }, \qquad
|y-x| > 2r. \end{equation} To establish (\ref{AtomDecay}), we begin with the representation formula in part 3) of Lemma \ref{Green} and claim that we may find $\bar x $ in $\sball x r $ so that $$ u ( y ) = - \int _ { \sball x r \cap N } a(z) ( G(y,z) - G( y, \bar x) ) \, d\sigma . $$ If $ \sball x r \subset N$, then we may let $ \bar x = x$ and use that $a$ has mean value zero to obtain the above representation. If $\sball x r \cap D \neq \emptyset $, then we choose $\bar x \in D \cap \sball x r$ and use that $ G( y, \cdot ) $ vanishes on $D$. Now the estimate (\ref{AtomDecay}) follows easily from the normalization of the atom and the estimates for the Green function in part 4) of Lemma \ref{Green}.
We will consider three cases in the proof of (\ref{Decay}): a) $2^k r < r_0$ and $ \delta (x) \leq 2\cdot 2^k r\sqrt{ 1+M^2}$, b) $2^k r < r_0$ and $ \delta (x) > 2\cdot 2^k r\sqrt { 1+M^2}$, c) $2^k r
\geq r_0$. The details are similar to the proof of (\ref{LocalPart}),
thus we will be brief.
We begin with case a) and use H\"older's inequality with exponents $2/q$ and $ 2/(2-q)$ to obtain $$
\left( \int _ { \Sigma _k } |\nabla u | ^ q \, d\sigma \right ) ^ {
1/q}
\leq C \left( \int _ { \Sigma _k } |\nabla u |^2 \delta ^ { 1-
\epsilon } \, d\sigma \right ) ^ {1/2} ( 2^ k r ) ^ { (n-1) ( \frac
1 q - \frac 1 2 ) + \frac { \epsilon -1} 2 }. $$ As in the proof of the estimate (\ref{LocalPart}), this requires that $ 1 < q < 1/ ( 1 - \epsilon /2)$. From Lemma \ref{Whitney} we have $$
\left( \int _ { \Sigma_k } | \nabla u | ^ 2 \delta ^ { 1-\epsilon } \, d\sigma \right) ^ { 1/2}
\leq C \left ( \sum _ { j = k -1} ^ { k+1} \int _ { S_j } |\nabla u | ^ 2\delta ^ { -\epsilon } \, dy \right) ^ { 1/2} $$ This estimate requires $ k \geq 2$ so that $ \sring {k-1} \cap \sball x {r} = \emptyset$. Then H\"older and the reverse H\"older estimate in Lemma \ref{RHEstimate} gives $$
\left( \int _ {S_k} |\nabla u | ^ 2 \delta ^ { -\epsilon } \, dy
\right) ^ {\frac 1 2 } \leq C \left ( \int _ { S_k } |\nabla u | ^ p \, dy \right) ^ { \frac 1 p } \leq C ( 2^ k r) ^ {- \frac { \epsilon} 2} \left ( \sum _ { j = k -1} ^ {
k+1} \int _{S_j} |\nabla u |^2 \, dy \right) ^ { \frac 1 2 } . $$ Here we need $ k \geq 2$ so that $ \sball x r \cap \Sigma _ { k-1} = \emptyset $ and the term involving the Neumann data in Lemma \ref{RHEstimate} vanishes. Finally, from Caccioppoli and our estimate (\ref{AtomDecay}) for $u$, we obtain that $$
\left( \int _{S_k} |\nabla u | ^ 2 \, dy \right) ^ { 1/2} \leq \frac C { 2^ k r} \left( \sum _ { j = k -1} ^ {k+1} \int_{S_j}
|u|^2 \, dy \right) ^ { 1/2} \leq C 2 ^ { -k\beta} (2^k r ) ^ {1 -n/2}. $$ Again, we need $ k\geq 2$ so that the data for the mixed problem is zero when we apply Caccioppoli's inequality. Combining the four previous estimates gives $$
\left( \int _ { \Sigma _k } |\nabla u | ^ q \, d\sigma \right) ^ {
1/q} \leq C 2 ^ { -k\beta} \sigma ( \Sigma _k ) ^ { -1/q'} $$ for $k \geq 4 $. We need $k \geq 4$ in order to fatten up the set $ \sring k $ three times: once to apply Lemma \ref{Whitney}, once to apply Lemma \ref{RHEstimate} and once to apply Caccioppoli's inequality.
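For convenience, we record the arithmetic behind the Caccioppoli step above. For $ k \geq 4$ and $ y \in S_{k-1} \cup S_k \cup S_{k+1}$ we have $ |y - x| \geq 2^{k-2} r > 2r$, so (\ref{AtomDecay}) gives $ |u(y)| \leq C 2^{-k\beta} (2^k r)^{2-n}$, and since $ |S_j| \leq C (2^k r)^n$,
$$
\frac C{2^k r} \left( \sum_{j=k-1}^{k+1} \int_{S_j} |u|^2 \, dy \right)^{1/2} \leq \frac C{2^k r} \left( (2^k r)^n \, 2^{-2k\beta} (2^k r)^{4-2n} \right)^{1/2} = C 2^{-k\beta} (2^k r)^{1 - \frac n2} .
$$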
Now we consider case b). Since $ \delta (x) \geq 2( 2^ k r) \sqrt {
1+M^ 2} $, we have $ \sball x {2 \cdot 2^ k r } \subset N$ by Lemma \ref{DorN}. Hence, we may use Lemma \ref{NeumannRegularity}, Caccioppoli's inequality and (\ref{AtomDecay}) to obtain (\ref{Decay}).
Finally, we consider case c) where $ 2^k r > r_0$. We recall that we have a covering of $ \partial \Omega$ by coordinate cylinders. In each coordinate cylinder, we may use Lemma \ref{NeumannRegularity}, Lemma \ref{DirichletRegularity} or Lemma \ref{Whitney} and the techniques given above to obtain $$
\left( \int _ { Z_ { r_0} \cap \partial \Omega } |\nabla u |^ q \, d\sigma\right)^ { 1/q} \leq C r_0 ^ { (1-n)/q'}. $$ Adding these estimates gives (\ref{Decay}) with a constant that depends on the global character of the domain. \end{proof}
We now show that the non-tangential maximal function of our weak solutions lies in $L^1$ when the Neumann data is an atom.
\begin{theorem} \label{HardyTheorem} Let $ \Omega$, $N$ and $D$ be a standard domain for the mixed problem. If $ f_N$ is in $ H^1 (N)$, then there exists a solution $u$ of the $L^1$-mixed problem (\ref{MP}) with Neumann data $f_N$ and zero Dirichlet data, and this solution satisfies $$
\|\nontan{ (\nabla u) } \|_{L^1( \partial \Omega)} \leq C \|f_N\|_ { H^1 (N)}. $$ The constant $C$ in this estimate depends on the global character of $ \Omega$, $N$ and $D$. \end{theorem}
\note { Theorem restated to give existence in the Hardy space, rather than for an atom.
Check to make sure the proof proves the theorem. }
\begin{proof} We begin by considering the case when $f_N$ is an atom and we let $u$ be the weak solution of the mixed problem with Neumann data an atom $a$ and zero Dirichlet data. The result for data in $H^1(N)$ follows easily from the result for an atom.
We establish a representation for the gradient of $u$ in terms of the boundary values of $u$. Let $x \in \Omega$ and $j$ be an index ranging from $1$ to $n$. We claim \begin{eqnarray} \label{RepFormula} \frac {\partial u } { \partial x_j} (x) & = & \int _{ \partial \Omega } \sum _{ i=1 } ^ n \frac { \partial \Xi }{ \partial y_i } (x-\cdot)( \nu _i \frac { \partial u }{ \partial y_j } - \frac { \partial u }{ \partial y _i } \nu _j )
\nonumber \\ & & \qquad + \frac {\partial \Xi }{\partial y_j } ( x-\cdot) \frac { \partial u }{ \partial \nu } \, d\sigma . \end{eqnarray} If $u$ is smooth up to the boundary, the proof of (\ref{RepFormula}) is a straightforward application of the divergence theorem. However, it takes a bit more work to establish this result when we only have that $u$ is a weak solution.
Thus, we suppose that $\eta $ is a smooth function which is zero in a neighborhood of $ \Lambda $ and supported in a coordinate cylinder. Using the coordinate system for our coordinate cylinder, we set $ u_ \tau (y) = u(y+\tau e_n)$ where $e_n$ is the unit vector in the $x_n$ direction and $ \tau >0$. Applying the divergence theorem gives \begin{eqnarray} \lefteqn{ \int _{ \partial \Omega }\eta ( \frac { \partial \Xi }{ \partial \nu } (x- \cdot ) \frac { \partial u_\tau }{ \partial y_j } - \nabla \Xi (x-\cdot ) \cdot \nabla u_\tau \nu _j
+ \frac {\partial \Xi }{\partial y_j } ( x-\cdot ) \frac { \partial u_\tau }{ \partial \nu } ) \, d\sigma } ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \nonumber\\ & = & \eta (x) \frac {\partial u_\tau }{\partial x_j } (x) +\int _\Omega \nabla \eta \cdot \nabla \Xi(x- \cdot ) \frac {
\partial u_\tau }{ \partial y_j} \nonumber \\ & & \qquad- \nabla _y \Xi (x-\cdot ) \cdot \nabla u_\tau \frac {
\partial \eta } {\partial y_j} \nonumber\\ & & \qquad + \frac { \partial \Xi }{ \partial y_j } (x- \cdot ) \nabla u_\tau \cdot \nabla \eta \, dy . \label{dadgumidentity} \end{eqnarray} Thanks to the truncated maximal function estimate in Lemma \ref{Whitney},
we may let $ \tau $ tend to zero from above and conclude that the same identity holds with $u_\tau$ replaced by $u$. Next, we suppose that $ \eta$ is of the form $ \eta \phi_ \epsilon$ where $ \phi_ \epsilon =0$ on $\{ x: \delta (x) < \epsilon\}$,
$ \phi_ \epsilon =1$ on $\{ x: \delta (x) >
2\epsilon \}$ and we have the estimate $ | \nabla \phi_ \epsilon (x) | \leq C/\epsilon $. Since we assume the boundary between $D$ and $N$ is a Lipschitz surface, we have the following estimate for $ \epsilon $ sufficiently small \begin{equation} \label{creasecollar}
|\{ x: \delta (x) \leq 2 \epsilon \}| \leq C \epsilon ^2. \end{equation} Using our estimate for $ \nabla \phi_\epsilon$ and the inequality (\ref{creasecollar}), we have $$
| \int _\Omega \eta \nabla \phi_\epsilon \cdot \nabla
\Xi(x-\cdot) \frac { \partial u }{ \partial y_j} \,d y | \leq C \left ( \int_ { \{ y: \delta (y) < 2\epsilon \}
} |\nabla u | ^ 2\, dy \right ) ^ { 1/2} $$ and the last term tends to zero with $\epsilon$ since the gradient of a weak solution lies in $L^2 ( \Omega)$. Using this and similar estimates for the other terms in (\ref{dadgumidentity}), gives \begin{eqnarray*} \lefteqn{ \lim _{ \epsilon \rightarrow 0^+} \int _\Omega \nabla( \phi_\epsilon \eta ) \cdot \nabla_y \Xi(x- \cdot ) \frac { \partial u }{ \partial y_j} - \nabla _y \Xi (x- \cdot) \cdot \nabla u \frac { \partial (\phi_
\epsilon \eta ) } {\partial y_j} } ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ & & \\
+ \frac { \partial \Xi }{ \partial y_j } (x-\cdot ) \nabla u \cdot \nabla(\phi_\epsilon \eta ) \, dy &= & \int _\Omega \nabla \eta \cdot \nabla_y \Xi(x-\cdot ) \frac { \partial u }{ \partial y_j}
\\ & & \qquad - \nabla_y \Xi (x- \cdot ) \cdot \nabla u \frac { \partial \eta } {\partial y_j} \\ & &\qquad + \frac {\partial \Xi }{ \partial y_j } (x-\cdot ) \nabla u \cdot \nabla \eta \, dy . \end{eqnarray*} Thus we obtain the identity (\ref{dadgumidentity}) with $ u_\tau$ replaced by $u$ and without the support restriction on $\eta$. Finally, we choose a partition of unity which consists of functions that are either supported in a coordinate cylinder, or whose support does not intersect the boundary of $\Omega$. Summing as $\eta $ runs over this partition gives us the representation formula (\ref{RepFormula}) for $u$. As we have $ \nabla u \in L^ q ( \partial \Omega)$ for some $q>1$, it follows from the theorem of Coifman, McIntosh and Meyer \cite{CMM:1982} that $ \nontan { ( \nabla u )} $ lies in $L^ q ( \partial \Omega)$. However, a bit more work is needed to obtain the correct $L^1 $ estimate for $ \nontan { ( \nabla u )} $.
We claim \begin{eqnarray*} \int_{ \partial \Omega } \frac { \partial u}{\partial \nu } \, d\sigma &=&0 \\ \int_{ \partial \Omega } \nu _j \frac{ \partial u }{ \partial y_i } - \nu _i \frac{ \partial u }{ \partial y_j } \, d\sigma & = & 0. \end{eqnarray*} Since $ \nontan { ( \nabla u ) } $ lies in $L^q( \partial \Omega)$, the proof of these two identities is a standard application of the divergence theorem. Using these results and the estimates for $\nabla u$ in Theorem \ref{AtomicTheorem}, we can show that $\partial u /\partial \nu$ and $ \nu_j \partial u /\partial y_i-\nu _i \partial u/\partial y_j$ are molecules on the boundary (see \cite{CW:1976}) and hence it follows from the representation formula (\ref{RepFormula}) that $\nontan{(\nabla u )}$ lies in $L^1 ( \partial \Omega)$ and satisfies the estimate $$
\| \nontan {(\nabla u) }\|_{L^1 ( \partial \Omega)}\leq C. $$
Finally, the existence of non-tangential limits at the boundary follows from the estimate for the non-tangential maximal function. Once we know the limits exist it is easy to see that the boundary data for the $L^1$-mixed problem must agree with the boundary data for the weak formulation. \end{proof}
\section{Uniqueness of solutions} \label{Unique}
In this section we establish uniqueness of solutions to the $L^1$-mixed problem (\ref{MP}). We use the existence result
established in section \ref{Atoms} and argue by duality that if $u$ is a solution of the mixed problem with zero Dirichlet and Neumann data, then $u$ is also a solution of the regularity problem with zero data and hence is zero. \begin{theorem} \label{uRuniq} Let $ \Omega$, $N$ and $D$ be a standard domain for the mixed problem. Suppose that $u$ solves the $L^1$-mixed problem (\ref{MP}) with data $ f_N = 0$ and $ f_D=0$. If $ ( \nabla u ) ^ * \in L^ 1 ( \partial \Omega)$, then $u = 0$. \end{theorem}
Given a Lipschitz domain $ \Omega$, we may construct a sequence of smooth approximating domains. A careful exposition of this construction may be found in the dissertation of Verchota (\cite[Appendix A]{GV:1982}, \cite[Theorem 1.12]{GV:1984}). We will need this approximation scheme and a few extensions. Given a Lipschitz domain $ \Omega$, Verchota constructs a family of smooth domains $ \{ \Omega_k \}$ with $ \bar \Omega _k \subset \Omega$. In addition, he finds bi-Lipschitz homeomorphisms $ \Lambda _k : \partial \Omega \rightarrow \partial \Omega _k$ which are constructed as follows.
We choose a smooth vector field $ V$ so that for some $ \tau = \tau (M) >0$, $ V\cdot \nu \leq - \tau $ a.e.~on $\partial \Omega$ and define a flow $ f( \cdot ,\cdot ) : {\bf R} ^ n \times {\bf R} \rightarrow {\bf R} ^ n$ by $ \frac d { dt} f(x,t) = V(f(x,t))$, $f(x,0) = x$. One may find $ \xi > 0$ so that \begin{equation}\label{Gdef} {\cal O} = \{ f(x, t) : x \in \partial \Omega , - \xi < t < \xi \} \end{equation} is an open set and the map $ (x,t) \rightarrow f(x,t)$ from $ \partial \Omega \times ( -\xi, \xi) \rightarrow {\cal O}$ is bi-Lipschitz. Since the vector field $V$ is smooth, we have \begin{equation} \label{DThing} Df(x,t) = I _n + O(t) \end{equation} where $ I_n$ is the $n\times n$ identity matrix and $DF$ denotes the derivative of a map $F$. In addition, we have a Lipschitz function $t_k (x)$ defined on $ \partial \Omega$ so that $ \Lambda_k (x) = f(x, t_k ( x)) $ is a bi-Lipschitz homeomorphism, $ \Lambda _k : \partial \Omega \rightarrow \partial \Omega _k$. We may find a collection of coordinate cylinders $\{ Z _i \}$ so that each $ Z_i$ serves as a coordinate cylinder for $ \partial \Omega$ and for each of the approximating domains $ \partial \Omega_k$. If we fix a coordinate cylinder $Z$, we have functions $ \phi$ and $\phi_k$ so that $\partial \Omega \cap Z = \{ ( x', \phi ( x' )) : x' \in {\bf R} ^ {
n -1} \} \cap Z$ and $\partial \Omega_ k \cap Z = \{ ( x' , \phi_k( x' )) : x' \in {\bf R} ^ {
n -1} \} \cap Z$. The functions $ \phi _k $ are $ C^ \infty $ and $
\|\nabla' \phi _k \| _ { L ^ \infty ( {\bf R} ^ { n -1} )} $ is bounded in $k$, $ \lim _{ k \rightarrow \infty } \nabla' \phi _k ( x' ) = \nabla' \phi (x' )$ a.e. and $ \phi_k$ converges to $\phi$ uniformly. Here we are using $ \nabla'$ to denote the gradient on ${\bf R} ^ {n-1}$.
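We also note, at least formally, where the expansion (\ref{DThing}) comes from: writing the flow in integral form, $f(x,t) = x + \int_0^t V(f(x,s))\, ds$, and differentiating in $x$ gives
$$
Df(x,t) = I_n + \int_0^t DV(f(x,s))\, Df(x,s)\, ds,
$$
and since $V$ is smooth, with bounded derivatives on the closure of ${\cal O}$, Gronwall's inequality yields $|Df(x,t) - I_n| \leq C|t|$ for $|t| < \xi$.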
We let $ \pi: {\bf R} ^ n \rightarrow {\bf R} ^ { n-1}$ be the projection $ \pi (x', x_n ) = x' $ and define $ S_k (x' ) = \pi(\Lambda_k (x' ,\phi (x')))$. According to Verchota, the map $S_k $ is bi-Lipschitz and has a Jacobian which is bounded away from 0 and $ \infty$. We let $T_k$ denote $S_k^ { -1} $ and assume that both are defined in a neighborhood of $ \pi(Z)$.
We claim that \begin{equation} \label{Dclaim} \lim _ { k \rightarrow \infty } DT_k (S_k(x') ) = I _ { n-1}, \qquad \mbox{a.e.~in $\pi(Z)$}, \end{equation}
and the sequence $\| DT_k\|_ { L ^ \infty ( \pi (Z) )}$ is bounded in $k$.
To establish (\ref{Dclaim}), it suffices to show that $ DS_k $ converges to $ I_{ n-1}$ and that the Jacobian determinant of $ DS_k$ is bounded away from zero and infinity; indeed, since $T_k \circ S_k$ is the identity, we have $DT_k(S_k(x'))\, DS_k(x') = I_{n-1}$ a.e. The bound on the Jacobian is part of Verchota's construction (see \cite[p.~119]{GV:1982}). As a first step, we compute the derivatives of $t_k (x', \phi (x'))$. We first observe that \begin{eqnarray*} \lefteqn{\frac { \partial } { \partial x_i } f((x', \phi(x') ) , t_k (x', \phi (x' ))) } ~~~~~~ \\
& = & \frac { \partial f}{ \partial x_i } ( (x', \phi (x' )), t_k (x' , \phi (x'))) +\frac { \partial \phi }{ \partial x_i }( x' ) \frac { \partial f}{
\partial x_n } (( x', \phi (x' )), t_k (x' , \phi (x'))) \\ & & \qquad + V(f((x', \phi(x') ) , t_k (x', \phi (x' )))) \frac \partial { \partial x_i } t _k (x' , \phi (x' )). \end{eqnarray*} Since $ f((x', \phi ( x' )), t_k (x', \phi (x')))$ lies in $ \partial \Omega_k$, the derivative is tangent to $ \partial \Omega_k$ and we have \begin{equation}\label{Tangential}
\frac { \partial }{ \partial x_i }
f((x', \phi ( x' )), t_k (x', \phi (x'))) \cdot \nu _ k (y) = 0, \qquad \mbox{a.e.~in }\pi (Z), \end{equation} where $ y = ( S_k (x'), \phi_k ( S_k (x' )))$ and $ \nu _k $ is the normal to $ \partial \Omega_k$. Solving equation (\ref{Tangential}) for $ \frac \partial { \partial x_i } t_k $ gives \begin{eqnarray*} \frac \partial { \partial x_i } t _k (x' , \phi (x' )) & = & - ( V(y) \cdot \nu _k (y) ) ^ { -1} \left ( \frac { \partial f }{ \partial
x_i }( ( x' , \phi ( x' )), t_k (x', \phi (x' ))) \right . \\ & & \qquad + \left. \frac { \partial \phi } { \partial x_i } (x' ) \frac { \partial f }{ \partial
x_n }( ( x' , \phi ( x' )), t_k (x', \phi (x' ))) \right)\cdot \nu_k (y) . \end{eqnarray*} Since $\lim _{ k \rightarrow \infty } t_k (x', \phi (x')) = 0$ uniformly for $x' \in \pi (Z)$, (\ref{DThing}) holds, and $ \nu _ k ( y) $ converges pointwise a.e.~and boundedly to $ \nu (x)$, we obtain that \begin{equation} \label{tderiv} \lim _ { k \rightarrow \infty } \frac \partial { \partial x_i } t _k (x' , \phi (x' )) = 0, \qquad \mbox{a.e.~in $ \pi (Z)$}. \end{equation} Given (\ref{DThing}), (\ref{tderiv}), and recalling that $ S_k( x') = \pi(f( (x', \phi(x')), t_k (x', \phi (x')))$, (\ref{Dclaim}) follows.
\begin{lemma} \label{Verchota} Let $ \Omega$, $N$ and $D$ be a standard domain for the mixed problem. If $u$ is in $\sobolev 1 1 (\partial \Omega _k)$ and $w$ is the weak solution of the mixed problem with Neumann data an atom for $N$ and zero Dirichlet data, then we have $$ \int _ { \partial \Omega _k } u \frac { \partial w }{ \partial \nu }
\,d\sigma \leq C_w \| u \|_{ W^ { 1,1 }( \partial \Omega _k )}. $$ \end{lemma}
\begin{proof} This may be proven using generalized Riesz transforms as in \cite[Section 5]{GV:1984}. Also, see more recent treatments by Sykes and Brown \cite[section 3]{SB:2001} and Kilty and Shen \cite[section 7]{KS:2010}. Verchota's argument uses square function estimates to show that the generalized Riesz transforms are bounded operators on $L^p ( \partial \Omega)$. In the proof of this Lemma, we need that the Riesz transforms of $w$ are bounded functions. From the estimate for the Green function in Lemma \ref{Green} and the representation of $ w= - \langle G, a\rangle_{\partial \Omega} $, we conclude that $ w$ is H\"older continuous. The H\"older continuity, and hence boundedness, of the Riesz transforms of $w$ follow from the following characterization of H\"older continuous harmonic functions. A harmonic function $u$ in a Lipschitz domain $\Omega$ is H\"older continuous of exponent $\alpha$, $0< \alpha < 1$, if and only if $ \sup _{ x\in \Omega } \mathop{\rm dist}\nolimits(x, \partial
\Omega )^ {1- \alpha } |\nabla u (x) | $ is finite. \end{proof}
We will need the following technical lemma on approximation of functions with $ \nontan{(\nabla u )} $ in $L^ 1 ( \partial \Omega)$. The proof relies on the approximation scheme of Verchota outlined above. In our application, we are interested in studying functions in Sobolev spaces on the family of approximating domains. Working with derivatives makes the argument fairly intricate.
\begin{lemma} \label{TechnicalMonstrosity} Let $\Omega$, $N$ and $D$ be a standard domain for the mixed problem. If $ u $ satisfies $ \nontan { ( \nabla u ) } \in L^ 1 (
\partial \Omega)$ and $ \nabla u$ has non-tangential limits a.e.~on $ \partial \Omega$, then we may find a sequence of Lipschitz functions $ U_j$ so that $$
\lim _{ k\rightarrow \infty } \| u - U _j \| _ { \sobolev 1 1 (
\partial \Omega _k )} \leq C /j. $$
If the non-tangential limits of $u$ are zero a.e. on $D$, then we may arrange that $U_j|_{\partial \Omega}$ is zero on $ D$.
The constant $C$ may depend on $ \Omega$ and $u$. \end{lemma}
\begin{proof} To prove the Lemma, it suffices to consider a function
$u$ which is zero outside one of the coordinate cylinders $Z$ as
given in Verchota's approximation scheme. We have $ u( x' , \phi (x')) \in \sobolev 1 1 ( {\bf R} ^ { n
-1})$, where we have set this function to be zero outside $ \pi (Z)$. Hence, there exists a sequence of Lipschitz functions $ u _j $
so that $\int _{ {\bf R} ^ { n-1} } | \nabla' u (x', \phi (x' )) -
\nabla' u _ j ( x', \phi (x' )) | \, dx' \leq 1/j$ where $ \nabla' $ denotes the gradient in $ {\bf R} ^ { n -1}$. We extend $u_j$ to a neighborhood of $ \partial \Omega$ by $$U _ j ( f(x,t) ) = \eta ( f(x, t) ) u _j (x) , \qquad x \in \partial \Omega$$ where $ \eta$ is a smooth cutoff function which is one on a neighborhood of $ \partial \Omega$ and supported in the set $ {\cal O} $ defined in (\ref{Gdef}). If we have that $u$ is zero on $D$, then standard approximation results for Sobolev spaces allow us to choose $u_j$ to be zero in a neighborhood of $\bar D$. This relies on our assumption on $ \Lambda $.
We consider \begin{eqnarray*} \lefteqn{
\int _ {\pi ( Z)} | \nabla' u (x' , \phi _k (x' )) - \nabla' U_j ( x',
\phi_k (x' ))| \, dx' } ~~~~~~~~~~~~~~~~~~~~ & & \\ & \leq &
\int _ {\pi ( Z)} | \nabla' u (x' , \phi _k (x' )) - \nabla' u ( x',
\phi (x' ))| \, dx' \\ & & \qquad +
\int _ {\pi ( Z)} | \nabla' u (x' , \phi (x' )) - \nabla' u_j ( x',
\phi (x' ))| \, dx'\\ & & \qquad +
\int _{ \pi ( Z)} | \nabla' u_j (x' , \phi (x' )) - \nabla' U_j ( x',
\phi_k (x' ))| \, dx'\\ & = & A_k +B +C_k. \end{eqnarray*} We have that $ \lim _{ k \rightarrow \infty } A_k =0 $ since we assume that $ \nontan {(\nabla u )} \in L ^ 1 ( \partial \Omega )$, $ \nabla u $ has non-tangential limits a.e., and $ \nabla' \phi _k $ converges pointwise a.e.~and boundedly to $ \nabla' \phi$. By our choice of $ u_j$, we have $ B\leq C/j$. Finally, our construction of $U_j$ and our definition of $T_k$ (before (\ref{Dclaim})) imply that $ U_j (x', \phi _k ( x' )) = u _j ( T_k (x') , \phi ( T_ k (x')))$ and hence we have \begin{eqnarray*} C_k & \leq &
\int _ { \pi (Z)} |
( I _ { n-1} - DT_ k ( x' )) \nabla' u _ j ( x', \phi (x')) | \, dx' \\ & & \qquad
+ \int _ { \pi (Z)} | DT_k (x' ) ( \nabla' u _ j ( x' , \phi(x' ))) -
\nabla' u _ j ( T_k (x' ), \phi ( T_k (x' )))| \, d x' \\ & = & C_ { k, 1 } + C_ {k,2 } . \end{eqnarray*} We have that $ \lim _ { k\rightarrow \infty } C_ { k,1 } = 0$ since $ \nabla' u _ j $ is bounded and (\ref{Dclaim}) holds.
Since $ T_k ( x' )$ converges uniformly to $x' $, $DT_k $ is bounded and the Jacobian of $S_k$ is bounded, we have that $ \lim _ { k \rightarrow \infty } C_{ k,2} = 0$. \end{proof}
\begin{proof}[Proof of Theorem \ref{uRuniq}] We let $ u$ be a solution of the $L^1$-mixed problem, (\ref{MP}), with $f_N = 0$ and $f_D=0$ and we wish to show that $u$ is zero. We fix $a$ an atom for $N$ and let $ w$ be a solution of the mixed problem with Neumann data $a$ and zero Dirichlet data. Our goal is to show that \begin{equation}\label{AtomClaim} \int _ { N } a u \, d\sigma =0. \end{equation} This implies that $u$ is zero on $ \partial \Omega$ and then Dahlberg and Kenig's result for uniqueness of solutions of the regularity problem \cite{DK:1987} implies that $u=0$ in $ \Omega$.
We turn to the proof of (\ref{AtomClaim}). Applying Green's second identity in one of the approximating domains $ \Omega _k$ gives us \begin{equation}\label{uniq42} \int _ { \partial \Omega _k } w \frac { \partial u }{ \partial \nu} \, d\sigma = \int _ { \partial \Omega _k } u \frac { \partial w }{ \partial \nu} \, d\sigma, \qquad k =1,2\dots . \end{equation} We have that $ \nontan { ( \nabla u )} $ is in $L^1 ( \partial \Omega )$, while $ w$ is H\"older continuous and hence bounded. Recalling that $w$ is zero on $D$ and $ \partial u /\partial \nu$ is zero on $N$, we may use the dominated convergence theorem to obtain \begin{equation}\label{uniq43} \lim _ { k \rightarrow \infty } \int _ { \partial \Omega _k } w \frac
{ \partial u }{ \partial \nu } \, d\sigma =0 . \end{equation} Thus, our claim will follow if we can show that \begin{equation} \label{ClaimFollow} \lim _{ k \rightarrow \infty } \int _ { \partial \Omega _k } u \frac { \partial w }{ \partial \nu } \, d\sigma = \int _{\partial \Omega } ua \, d\sigma . \end{equation} Note that the existence of the limit in (\ref{ClaimFollow}) follows from (\ref{uniq42}) and (\ref{uniq43}). We let $U_j$ be the sequence of functions from Lemma \ref{TechnicalMonstrosity} and consider \begin{eqnarray} \lefteqn{
\left | \int _{ \partial \Omega } ua \, d\sigma -\lim _{ k \rightarrow \infty} \int _{ \partial \Omega _k } u \frac { \partial w }{ \partial \nu }
\, d\sigma \right | } \label{U1}\\ & \leq &
\left| \int_{ \partial \Omega } ua \, d\sigma - \lim _{ k \rightarrow
\infty } \int _{ \partial
\Omega _k } U_j \frac { \partial w }{ \partial \nu } \, d\sigma
\right |
+
\limsup _{ k \rightarrow \infty } \left | \int _ { \partial \Omega _k } ( u -
U_j ) \frac { \partial w}{ \partial \nu } \, d\sigma \right| . \nonumber \end{eqnarray} Because we have that $ \nontan {( \nabla w ) } $ is in $L^ 1( \partial \Omega)$ and $U_j$ is bounded, we may take the limit of the first term on the right of (\ref{U1}) and obtain $$
\left | \int_{ \partial \Omega } ua \, d\sigma - \lim _{ k \rightarrow
\infty } \int _{ \partial
\Omega _k } U_j
\frac { \partial w }{ \partial \nu } \, d\sigma
\right | =
\left | \int _{ N } ( u -U _j ) a
\, d\sigma\right | \leq C/j . $$
Here we use that $U_j|_{ D } =0$. According to Lemmata \ref{Verchota} and \ref{TechnicalMonstrosity}, the second term on the right of (\ref{U1}) is bounded by $ C_w/j$. As $j$ is arbitrary, we obtain (\ref{ClaimFollow}) and hence the Theorem. \end{proof}
\section{A Reverse H\"older inequality at the boundary} \label{BoundaryReverse} In this section we establish an estimate in $L^p( \partial \Omega)$ for the gradient of a solution to the mixed problem. This is the key estimate that is used in section \ref{LpSection} to establish $L^p$-estimates for the mixed problem.
\begin{lemma} \label{newLocal}
Let $ \Omega$, $N$ and $D$ be a standard domain for the
mixed problem. Let $u$ be a weak solution of the mixed problem with Neumann data $f_N$ in $L^ \infty ( N)$ and zero Dirichlet data. Let $ p_0 >2$ be as in Lemma \ref{RHEstimate} and fix $ q$ satisfying $ 1< q < p_0/2$. For $ x \in \partial \Omega $ and $ r$ with $ 0 < r < r_0$ we have $$ \left ( -\!\!\!\!\!\!\int _ { \sball x r } { \nontan {( \nabla u ) _{cr} } }^ q
\, d\sigma \right) ^ { 1/q}
\leq C \left [
-\!\!\!\!\!\!\int _{ \dball x { 2r} } | \nabla u | \, dy
+ \|f_N\|_{ L^ \infty ( \sball x { 2r} \cap N)} \right]. $$ The constant $c=1/16$ and $C$ depends on $M$, $n$ and $q$. \end{lemma}
\begin{proof} We fix $ x\in \partial \Omega$ and $r$ with $0< r < r_0$. We claim that we have \begin{equation}\label{LocalClaim}
\left ( -\!\!\!\!\!\!\int _ { \sball x {4r}} |\nabla u |^ q \, d\sigma \right )
^ { 1/q} \leq C \left ( -\!\!\!\!\!\!\int _ { \dball x {16r}} |\nabla u | \,
dy + \| f_N\| _ { L^ \infty ( \sball x { 16r} \cap N)} \right). \end{equation} We will consider two cases: a) $ \delta (x) \leq 8r \sqrt { 1+M^2}$ , b) $ \delta (x) > 8r \sqrt { 1+M^2} $. We give the proof in case a). Since we assume $ 1 < q < p_0/2$, we may choose $ \epsilon $ satisfying $ 2-2/q < \epsilon < 2 - 4 /p_0$. We apply H\"older's inequality with exponents $ 2/q$ and $2/(2-q)$ to obtain \begin{eqnarray*}
\lefteqn{ \left ( \int _ { \sball x { 4r}} |\nabla u | ^ q \, d\sigma \right ) ^
{ \frac 1 q} } \qquad \\ & \leq &
\left ( \int _ { \sball x { 4r} } |\nabla u
|^ 2 \delta ^ { 1-\epsilon } \, d\sigma \right ) ^ { \frac 1 2 } \left ( \int _ {\sball x { 4r} } \delta ^ { ( \epsilon -1)q /( 2-q)}\,
d\sigma \right ) ^ { \frac 1 q - \frac 1 2 } \\ & \leq & C r ^ { ( n-1) ( \frac 1 q - \frac 1 2 ) + \frac { \epsilon -1 }
2 }\left ( \int _ { \sball x { 4r} } |\nabla u | ^ 2 \delta ^ {
1-\epsilon } \, d\sigma \right ) ^ {1/2} \end{eqnarray*} where we use that $ q (\epsilon -1) / ( 2-q) > -1$ or $ 2 - \frac 2 q < \epsilon $ which implies that the integral of $ \delta ^ { (
\epsilon -1) q /( 2-q)}$ is finite. Next, we use Lemma \ref{Whitney} and our hypothesis that $ \delta (x) \leq 8r \sqrt {
1+M^2}$ to obtain \begin{eqnarray*}
\lefteqn { \left ( \int _ { \sball x { 4r} } |\nabla u | ^ 2 \delta ^ { 1-
\epsilon } \, d\sigma \right ) ^ { 1/2 } } \\ & \leq & C \left [ \left ( \int _ {\dball
x {8r} } |\nabla u |^ 2 \, \delta ^ { - \epsilon } \, dy
\right ) ^ { 1/2} +
\left ( \int _ { \sball x { 8r}\cap N} |f_N |^ 2 \delta ^ { 1-
\epsilon } \, d\sigma \right ) ^ { 1/2} \right ] \\ & \leq & C \left [
\left ( \int _ { \dball x { 8r} } |\nabla u |^ 2\delta
^ { - \epsilon } \, dy \right ) ^ { 1/2 } + r ^ { \frac { n-
\epsilon } 2 } \| f_N \| _ { L^ \infty ( \sball x { 8r } \cap N
) }
\right ] . \end{eqnarray*}
To estimate $(\int _ {\dball x {8r}}|\nabla u | ^ 2 \delta ^ {
-\epsilon } \, dy ) ^ {1/2}$, we choose $p >2$, use H\"older's inequality with exponents $ p/2 $ and $ p / (p-2)$, and Lemma \ref{RHEstimate} to find \begin{eqnarray*} \lefteqn{
\left( \int _ { \dball x { 8r}}|\nabla u | ^ 2 \delta ^ { -\epsilon } \, dy \right ) ^ { 1/2} } \\ & \leq & \left( \int _ { \dball x { 8r}}\delta ^ {- \epsilon p / ( p-2)} \, dy \right ) ^ { \frac 1 2 - \frac 1 p }\left ( \int _ { \dball x { 8r
} }|\nabla u | ^ p \, dy \right ) ^ { 1/p } \\ & \leq & C r ^ { - \frac \epsilon 2 + \frac n 2 } \left [
-\!\!\!\!\!\!\int _ { \dball x { 16r } }|\nabla u | \, dy
+ \left ( -\!\!\!\!\!\!\int _ { \sball x { 16r} \cap N}|f_N | ^ { \frac { p _0 ( n-1)}
n} \, d\sigma \right ) ^ { \frac n { p_0 ( n-1) } } \right ]. \end{eqnarray*} Combining the two previous displayed inequalities gives the estimate $$
\left( \int _ { \sball x { 4r} } |\nabla u | ^ q \, d\sigma \right ) ^
{ 1/q}
\leq C r ^ { ( n-1)/q} \left ( -\!\!\!\!\!\!\int_ { \dball x { 16r}} |\nabla u | \, dy + \| f _N \| _ { L^ \infty ( \sball x { 16r} \cap N )}\right ), $$ which gives the claim (\ref{LocalClaim}).
Now we consider the proof of (\ref{LocalClaim}) in case b). Here, we use $ \delta (x) >8r \sqrt{ 1+M^2}$ and Lemma \ref{DorN} to conclude that $ \sball x {8r}\subset N$ or that $ \sball x {8r} \subset D$. Then we may use Lemma \ref{NeumannRegularity} or Lemma \ref{DirichletRegularity} to conclude that $$
\int _ { \sball x { 4r}} |\nabla u | ^ 2 \, d\sigma \leq C \left(
\int _ { \sball x { 8r} \cap N} |f_N |^ 2 \, d\sigma
+ \frac 1 r \int_{\dball x { 8r} } |\nabla u | ^ 2 \, dy \right ). $$ Next, Lemma \ref{RHEstimate} gives $$
\left ( -\!\!\!\!\!\!\int _ { \dball x { 8r}} |\nabla u | ^ 2 \, dy \right ) ^ { 1/2}
\leq C \left ( -\!\!\!\!\!\!\int _ { \dball x { 16r} } |\nabla u
| \, dy + \| f\| _ { L^ \infty ( \sball x { 16r} \cap N) } \right). $$ Using the two previous estimates and H\"older's inequality, we obtain the claim (\ref{LocalClaim}) in case b).
To obtain the estimate for the non-tangential maximal function, we choose a cutoff function $ \eta $ which is one on $ \ball x { 3r}$ and supported in $ \ball x { 4r}$. By repeating the arguments in the proof of Theorem \ref{HardyTheorem}, we may show that for $z $ in $ \Omega$ and $j =1,\dots,n$, we have the following representation for the derivatives of $u$: \begin{eqnarray*} (\eta\frac {\partial u}{\partial z_j}) (z)
& = & \int _ { \partial
\Omega } \eta ( \frac {\partial \Xi }{ \partial \nu } ( z-\cdot ) \frac {
\partial u }{\partial y _j } - \nu _j \nabla_y \Xi (z- \cdot ) \cdot \nabla u + \frac { \partial \Xi }{\partial y _j }(z-\cdot) \frac {
\partial u }{ \partial \nu } ) \, d\sigma \\ & & \qquad - \int _ { \Omega} \nabla \eta \cdot \nabla_y \Xi ( z-\cdot )\frac {
\partial u }{ \partial y _j } - \frac { \partial \eta }{ \partial y _j } \nabla_y \Xi ( z-\cdot)\cdot \nabla u \\ & & \qquad \qquad \qquad + \nabla \eta \cdot \nabla u \frac { \partial
\Xi }{ \partial y _j } ( z- \cdot ) \, dy . \end{eqnarray*} From this representation and the theorem of Coifman, McIntosh and Meyer \cite{CMM:1982}, we obtain $$ \left ( -\!\!\!\!\!\!\int _ { \sball x r } { \nontan {( \nabla u ) _{r} } }^ q
\, d\sigma \right) ^ { 1/q}
\leq C \left [ -\!\!\!\!\!\!\int_{ \dball x { 4r}} |\nabla u | \, dy +
\left( -\!\!\!\!\!\!\int _ { \sball x { 4r}}|\nabla u | ^ q \, d \sigma \right) ^ {
1/q}\right]. $$ From this estimate, the claim (\ref{LocalClaim}) and a covering argument, we obtain the Lemma. \end{proof}
\section{Estimates for solutions with data from $L^p$, $p> 1$} \label{LpSection} In this section, we use the following variant of an argument developed by Shen \cite{ZS:2007} to establish $L^p$-estimates for elliptic problems in Lipschitz domains. Shen's argument is based on earlier work of Caffarelli and Peral \cite{MR1486629}.
As the argument depends on a Calder\'on-Zygmund decomposition into dyadic cubes, it will be stated using surface cubes rather than the surface balls $ \sball xr $ used elsewhere in this paper.
Let $Q _0$ be a cube in the boundary and let $F$ and $f$ be defined on $4 Q_0$. Let the exponents $ p $ and $q$ satisfy $ 1< p < q$. Assume that for each $Q \subset Q_0$, we may find two functions $F_Q$ and $R_Q$ defined in $2Q$ such that \begin{eqnarray} \label{Shen1}
|F| & \leq & |F_Q| + |R_Q | , \\
-\!\!\!\!\!\!\int_{ 2Q} |F_Q| \, d\sigma &\leq & C \left( -\!\!\!\!\!\!\int_{ 4Q} |f|^ p \, d\sigma \right ) ^ {1/p}, \label{Shen2} \\
\left ( -\!\!\!\!\!\!\int_{2Q} |R_Q|^ q \, d\sigma \right) ^ {1/q} &\leq &
C \left [ -\!\!\!\!\!\!\int _{ 4Q} |F|\,d \sigma + \left( -\!\!\!\!\!\!\int_{4Q} |f|^ p \,
d\sigma \right) ^ {1/p} \right]. \label{Shen3} \end{eqnarray} Under these assumptions, for $r$ in the interval $ ( p, q)$, we have $$
\left( -\!\!\!\!\!\!\int _{Q_0} |F|^ r \, d\sigma \right)^ { 1/r } \leq C \left[
-\!\!\!\!\!\!\int _{4Q_0} |F|\, d\sigma + \left( -\!\!\!\!\!\!\int_{4Q_0} |f|^ r \, d\sigma \right ) ^ { 1/r} \right ] . $$ The constant in this estimate will depend on the Lipschitz constant of the domain, the $L^p$ indices involved and the constants in the estimates in the conditions (\ref{Shen2}--\ref{Shen3}). The argument to obtain this conclusion is more or less the same as in Shen
\cite[Theorem 3.2]{ZS:2007}. The main differences arise because the last term in (\ref{Shen3}) requires us to substitute the maximal function $M(|f|^p)^ {1/p}$ for $M(f)$. We omit a detailed proof. Our hypotheses differ from Shen's in that Shen has $p=1$ in (\ref{Shen2}) and (\ref{Shen3}) while we have $p>1$. We need to change Shen's formulation because we begin with results in Hardy spaces, rather than $L^p$-spaces.
In our application, we will let $ 4Q_0$ be a cube with sidelength comparable to $r_0$. We let $u$ be a solution of the mixed problem with Neumann data $f$ in $L^p(N)$ and Dirichlet data zero. We define $f$ to be zero in $D$. Since $L^p(N)$ is contained in the Hardy space $H^1(N)$, we may use Theorem \ref{AtomicTheorem} to obtain a solution of the mixed problem with Neumann data $f$ on $N$ and zero Dirichlet data on $D$. Let $F = \nontan{(\nabla u )}$ and, given a cube $Q\subset Q_0$ with diameter $r$, define $ F_Q$ and $R_Q$ as follows. We let $\bar f _{ 4Q} =0$ if $ 4Q \cap D \neq \emptyset $ and $ \bar f _{ 4Q} = -\!\!\!\!\!\!\int _{ 4Q} f \, d\sigma $ if $ 4Q \subset N$. Set $g = \chi _{
4Q} ( f-\bar f _{ 4Q})$ and $h = f-g$. As both $g$ and $h$ are elements of the Hardy space $ H^ 1(N)$, we may use Theorem \ref{HardyTheorem} to find solutions of the $L^1$-mixed problem with Neumann data $g$ or $h$. We let $v$ be the solution with Neumann data $g$ and $w$ be the solution with Neumann data $h$. According to the uniqueness result Theorem \ref{uRuniq} we have $ u = v+w$. We let $R_Q = \nontan { ( \nabla w )}$ and $ F_Q = \nontan {( \nabla v ) } $ so that (\ref{Shen1}) holds; indeed, $u = v+w$ gives $\nontan{(\nabla u)} \leq \nontan{(\nabla v)} + \nontan{(\nabla w)}$. We turn our attention to establishing (\ref{Shen2}) and (\ref{Shen3}).
To establish (\ref{Shen2}), observe that the
$H^1$-norm of $g$ satisfies the bound $$ \| g \|_{ H^1 (N)} \leq C \| f
\| _ {L^p( 4Q)} \sigma (Q) ^ { 1/p'}. $$ With this, the estimate (\ref{Shen2}) follows from Theorem \ref{AtomicTheorem}. Now we turn to the estimate (\ref{Shen3}) for $ R_Q = \nontan {( \nabla w ) } $. We note that the Neumann data $h$ is constant on $ 4Q \cap N$. We define a maximal operator by taking the supremum over that part of the cone that is far from the boundary, $$ \nontan { ( \nabla w )_+}(x) = \sup _{ y \in \ntar x \setminus \ball x
{Ar} } |\nabla w (y)| $$ where $A$ is to be chosen.
A simple geometric argument gives that \begin{equation} \label{far} \nontan { ( \nabla w )_+}(x) \leq C -\!\!\!\!\!\!\int _{ 4Q} \nontan { (
\nabla w ) } \, d\sigma , \qquad x \in 2Q. \end{equation} The estimate for $ \nontan { ( \nabla w )_{Ar} } $ uses the local estimate for the mixed problem in Lemma \ref{newLocal} to conclude that \begin{eqnarray} \nonumber \left ( -\!\!\!\!\!\!\int _ { 2Q} { \nontan { ( \nabla w ) _ { Ar} }} ^ q \, d\sigma
\right) ^ { 1/q} & \leq & C
\left [ \|h\|_{L^ \infty ( 4Q)}
+ -\!\!\!\!\!\!\int _ { T( 3Q) } |\nabla w | \, d\sigma \right ] \\ & \leq &
C \left [ \left( -\!\!\!\!\!\!\int_{4Q} |f|\, d\sigma \right)
+ -\!\!\!\!\!\!\int _ {4Q} \nontan { ( \nabla w ) } \, d \sigma \right ].
\label{near} \end{eqnarray} provided that the constant $A$ in the definition of $ \nontan{ (\nabla w )_+} $ is chosen sufficiently small. Recall that $ T(Q)$ was defined at the beginning of the proof of Lemma \ref{Whitney}. From the estimates (\ref{far}) and (\ref{near}), we conclude that \begin{equation} \label{New3} \left( -\!\!\!\!\!\!\int _{ 2Q} ( R_Q ) ^ q \, d\sigma \right) ^ { 1/q}
\leq C \left[ -\!\!\!\!\!\!\int _ {4Q} |f| \, d\sigma
+ \left( -\!\!\!\!\!\!\int _{ 4Q} \nontan { ( \nabla w ) }\, d\sigma \right) ^ { 1/p } \right ]. \end{equation} We have $ \nontan {( \nabla w )} \leq \nontan { ( \nabla v ) } +
\nontan {( \nabla u ) } $ and hence we may estimate the term involving
$ \nontan { ( \nabla w )}$ by $$ -\!\!\!\!\!\!\int _ { 4Q} \nontan {( \nabla w ) } \, d\sigma \leq -\!\!\!\!\!\!\int _ { 4Q} \nontan {( \nabla u ) } \, d\sigma + -\!\!\!\!\!\!\int _ { 4Q} \nontan {( \nabla v ) } \, d\sigma \leq -\!\!\!\!\!\!\int _ { 4Q} \nontan {( \nabla u ) } \, d\sigma +
C \left ( -\!\!\!\!\!\!\int _ { 4Q} |f| ^ p \, d\sigma \right ) ^ { 1/p} $$ where we have used Theorem \ref{HardyTheorem} to estimate the term involving $\nontan{(\nabla v)}$. Combining this with (\ref{New3}) gives (\ref{Shen3}).
Applying the technique of Shen outlined above gives the $L^p$-estimate and thus we obtain the following theorem.
\begin{theorem} Let $ \Omega$, $N$ and $D$ be a standard domain for the mixed problem and let $p$ satisfy $ 1 < p < p_0/2$ where $p_0$ is from Lemma \ref{RHEstimate}.
Given data $f_N$ in $L^p(N)$, we may solve the $L^p$-mixed problem with Neumann data $f_N$ and Dirichlet data 0 and this solution satisfies the estimate $$
\| \nontan {( \nabla u )} \|_{ L^ p ( \partial \Omega )}
\leq C \| f _N \| _{ L^ p (\partial \Omega )} . $$ The constant $C$ depends on the global character of the domain and the index $p$. \end{theorem}
\section{Further questions} This work adds to our understanding of the mixed problem in Lipschitz domains. However, there are several avenues that have not yet been explored. \begin{enumerate} \item Can we study the inhomogeneous mixed problem and obtain results
similar to those of Fabes, Mendez and M. Mitrea \cite{FMM:1998} and I.~Mitrea and M.~Mitrea \cite{MM:2007}? \item Is there an extension to $p < 1$ as in the work of Brown \cite{RB:1995a}? \item Can we study the mixed problem for more general decompositions
of the boundary, $ \partial \Omega = D \cup N$? To what extent is
the condition that the boundary between $D$ and $N$ be a Lipschitz
graph needed? \item Can we extend these techniques to elliptic systems and higher
order elliptic equations? \end{enumerate}
\note { Index of notation.
\begin{tabular}{rl}
\sc Symbol & Meaning \rm \\ $\delta(x) $ & the distance from $x$ to the crease $\Lambda$ \\ $D$ & region where we specify Dirichlet data\\ $N$ & region where we specify Neumann data \\ $P$& $P$ operator \\ $\Xi$ & fundamental solution \\ $\ntar x $ & non-tangential approach region\\ $\Lambda $ & boundary between $N$ and $D$ \\ $\Omega $ & domain \\ $ \locdom x r $ & domain of size $r$ near a boundary point $x$. \\ $\dball x r $ & $\Omega \cap \ball x r $ \\ $T(Q) $ & in proof of Lemma \ref{Whitney} $T(Q) = \{ x \in \bar \Omega : \mathop{\rm dist}\nolimits(x, Q) < \mathop{\rm diam}\nolimits(Q)\} $ \end{tabular} }
\end{bibunit}
\small \noindent \today
\end{document}
\begin{document}
\title[Short Title]{Quantum control and sensing of nuclear spins by electron spins under power limitations} \author{Nati Aharon} \affiliation{Racah Institute of Physics, The Hebrew University of Jerusalem, Jerusalem 91904, Givat Ram, Israel} \author{Ilai Schwartz} \affiliation{NVision imaging, Ulm D-89069, Germany} \author{Alex Retzker} \affiliation{Racah Institute of Physics, The Hebrew University of Jerusalem, Jerusalem 91904, Givat Ram, Israel} \date{\today}
\begin{abstract} State-of-the-art quantum sensing experiments targeting frequency measurements or frequency addressing of nuclear spins require driving the probe system at the targeted frequency. In addition, there is a substantial advantage to performing these experiments in the regime of high magnetic fields, in which the Larmor frequency of the measured spins is large. In this scenario we are confronted with the natural challenge of controlling a target system with a very high frequency when the probe system cannot be brought into resonance with the target frequency. In this contribution we present a set of protocols that are capable of confronting this challenge, even at large frequency mismatches between the probe system and the target system, both for polarization and for quantum sensing.
\end{abstract} \maketitle
\emph{Introduction ---} Control of nuclear spins by electrons is ubiquitous in quantum technology setups. Control experiments on nuclei in solids have been realized via defects in diamond \cite{jelezko2006single,lee2013readout}, especially NV centers in diamond \cite{neumann2010single,pfender2018back,unden2016quantum,jiang2009repetitive,dutt2007quantum}, as well as in Silicon Carbide \cite{falk2015optical,ivady2015theoretical} and Silicon \cite{morton2008solid,pla2013high}. These experiments were motivated by quantum computing \cite{yao2012scalable,childress2013diamond,robledo2011high,van2012decoherence,taminiau2014universal}, quantum sensing \cite{aslam2018nanoscale,perunicic2014towards,kong2017atomic} and dynamical nuclear polarization \cite{shagieva2018microwave,pagliero2018multispin,broadway2018quantum,fernandez2018toward,scheuer2016optically,chen2016resonance}. Nuclear spin control requires working at resonance, which is manifested by the Hartmann-Hahn (HH) condition \cite{hartmann1962nuclear}. The HH condition requires equating the Rabi frequency (RF) at which the electron is driven with the Larmor frequency (LF) of the nuclei (Fig. \ref{idea3} (a)). There is, however, a strong motivation to perform experiments at high magnetic fields due to the prolonged nuclear coherence time and the improvement in single-shot readout. Such experiments are very challenging and only a few were realized successfully \cite{aslam2015single,haberle2017nuclear,stepanov2015high,pfender2017protecting}. Moreover, in some experiments (e.g., in biological environments) the maximal RF is restricted by deleterious heating effects that are associated with high power. In such cases it is challenging to reach the high RF that matches the nuclear LF (Fig. \ref{idea3} (b)).
In this Letter we present a few schemes that can overcome this limitation in the various regimes of the mismatch between the RF and the targeted LF. We show that by employing a detuned driving field with a constant bounded RF or a driving field with a (bounded) modulated RF or a modulated phase, it is possible to reach the HH condition (Fig. \ref{idea3} (c)). Although such protocols were achieved with pulsed schemes that require high power \cite{casanova2018shaped}, we introduce simpler continuous drive based constructions that are significantly more power-efficient \cite{gordon2008optimal,cao2017protecting}. While we focus on the NV center, the presented schemes are general and applicable to both the optical and microwave domains, and hence to a variety of atomic and solid state systems.
\begin{figure}
\caption{The main problem. (a) Control and sensing of nuclear spins is achieved by satisfying the HH condition. The electron is driven with an RF ($\Omega$) that is equal to the nuclear LF ($\omega_l$). This results in dressed electron states that are on resonance with the LF, enabling the electron-nucleus spin interaction. (b) The electron is driven with a bounded RF, which is smaller than the LF ($\Omega<\omega_l$) and thus no coupling can be achieved. This is a typical problem in the high magnetic field regime. (c) We propose a set of protocols where even though the electron spin is driven with a bounded RF, $|\Omega(t)|<\omega_l$, an effective dressed electronic energy gap that is equal to the LF is obtained. The effective electron-nucleus coupling strength decreases for a larger frequency mismatch $\omega_l -\Omega$. Dotted lines (solid lines) indicate energy gaps (driving fields).}
\label{idea3}
\end{figure}
\emph{The model ---} We consider an NV center electronic spin that is interacting with a single nucleus or with several nuclei via the dipole - dipole interaction.
Under an on-resonance drive, the Hamiltonian of the NV and a nuclear spin is given by \cite{supp} $H = \frac{\omega_0}{2} \sigma_z + \frac{\omega_l}{2} I_z + g \sigma_z I_x + \Omega_1 \sigma_x \cos \left(\omega_0 t \right),$ where $\omega_0$ corresponds to the NV's energy gap, $\omega_l$ is the nucleus LF, $\sigma_z$ and $I_z$ are the Pauli operators in the direction of the static magnetic field of the NV and the nucleus respectively, $g$ is the NV - nucleus coupling strength, and $\Omega_1$ is the RF of the NV drive. For sensing and control of the nucleus by the NV the HH condition, $\Omega_1 = \omega_l$, must be fulfilled (Fig. \ref{idea3} (a)) \cite{supp}.
In the high magnetic field regime the nuclear LF, $\omega_l = \gamma_n B$, where $\gamma_n$ is the nuclear gyromagnetic ratio and $B$ is the static magnetic field, can be as high as $\sim100$ MHz. Hence, because of either technical limitations or avoidance of heating effects that occur due to the high power that is required to generate such a large RF, it is impossible to fulfil the HH condition by an on-resonance drive. Namely, we must work in the regime where $|\Omega_1|<\omega_l$ (Fig. \ref{idea3} (b)). We term the frequency difference, $\omega_l-\Omega_1$, as the frequency mismatch between the NV frequency ($\Omega_1$) and the nuclear LF ($\omega_l$).
We propose a set of protocols where even though the electron is driven with a bounded RF, $|\Omega(t)|<\omega_l$, an effective dressed electronic energy gap that is equal to the LF is obtained, and hence, the resonance condition is retrieved. Most generally, we consider the Hamiltonian $ H_s = \frac{\omega_0}{2} \sigma_z + \frac{\omega_l}{2} I_z + g \sigma_z I_x + \Omega_1(t) \sigma_x \cos \left(\phi(t) \right), $ where $ \Omega_1(t) $ and $\phi(t)$ are the modulated RF and modulated phase of a general driving field. The functions $\phi(t)$ and $\Omega_1(t)$ are our control tools that are used in order to reach the resonance condition in the small and large frequency mismatch regimes respectively, and therefore enable to probe the nuclei parameters and polarize it in the high magnetic field regime.
\emph{Small frequency mismatch ---} In continuous dynamical decoupling it is more beneficial to rely on a control by a robust phase modulation (PM) than on a control by a noisy amplitude modulation (AM) \cite{cohen2017continuous}. This concept was verified experimentally \cite{farfurnik2017experimental,cao2017protecting} and here we further develop it to design efficient and robust control in the high magnetic field regime when the frequency mismatch is small.
This scenario is relevant for a LF of $\sim 1-10$ MHz. For example, the LF of $^{13}$C ($^{15}$N) at a magnetic field of $1$T ($1.5$T) is $10$ MHz ($6.5$ MHz).
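As a quick numerical check of these values, using the standard gyromagnetic ratios (not quoted above) $\gamma_n / 2\pi \approx 10.7$ MHz/T for $^{13}$C and $|\gamma_n| / 2\pi \approx 4.3$ MHz/T for $^{15}$N, the relation $\omega_l = \gamma_n B$ gives
$$
\frac{\omega_l}{2 \pi} \approx 10.7\ \mathrm{MHz} \quad (^{13}\mathrm{C},\ B=1\ \mathrm{T}), \qquad \frac{|\omega_l|}{2 \pi} \approx 6.5\ \mathrm{MHz} \quad (^{15}\mathrm{N},\ B=1.5\ \mathrm{T}),
$$
in agreement with the (rounded) values quoted above.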
There are two key advantages of PM. First, PM is much more stable than a noisy AM and therefore results in longer coherence times. Second, the extra frequency that is required to fulfil the resonance condition ($\omega_l-\Omega_1$) originates only from the PM and therefore does not require extra power beyond the power limit of the bounded RF $\Omega_1$ \cite{supp}.
We consider the following Hamiltonian of the NV and the nucleus, $H = \frac{\omega_0}{2} \sigma_z + \delta B(t) \sigma_z+ \frac{\omega_l}{2} I_z + g \sigma_z I_x
+ \left( \Omega_1 + \delta \Omega_1(t) \right) \sigma_x \cos \left( \omega_0 t + 2 \frac{\Omega_2}{\Omega_1} \sin(\Omega_1 t) \right), $ where $ \delta B(t) $ is the magnetic noise, $\Omega_1$ is the RF of the drive, which defines the PM according to $\phi\left(t\right)=2 \frac{\Omega_2}{\Omega_1} \sin\left(\Omega_1 t\right) $, and $ \delta \Omega_1(t) $ is the amplitude noise in $\Omega_1$. The NV dynamics is modulated by two frequencies, $\Omega_1$ and $\Omega_2$, and thus we may expect transitions to occur whenever the resonance condition, $\Omega_1 + \Omega_2 = \omega_l$ is met. Indeed, this Hamiltonian results in double-dressed NV states for which we have that \cite{supp} $H_{II} \approx \frac {\Omega_2}{2}\sigma_z+ \frac{\omega_l}{2} I_z - \frac{g}{2} \left( \sigma_{+} \left(e^{i \Omega_1 t}- e^{-i \Omega_1 t}\right)+ \sigma_{-} \left(e^{-i \Omega_1 t} - e^{i \Omega_1 t}\right) \right) I_x, $ where $H_{II}$ is the Hamiltonian in the second interaction picture (IP) and in the basis of the double-dressed states. From this expression it is seen that a resonance condition appears when $\Omega_1 +\Omega_2 = \omega_l$ (or when $\Omega_1 -\Omega_2 = \omega_l$). Even though the power of the driving field is $\propto\Omega_1^2$ and is independent of $\Omega_2,$ higher Larmor frequencies than what is available by the peak power in a common HH scheme are reachable. While the modulation by the frequency $\Omega_1$ originates from AM and requires a power of $\propto\Omega_1^2$, the second modulation by the frequency $\Omega_2$ originates from the PM and as such it is not associated with extra power. Specifically, for $\Omega_2=\Omega_1$ the ratios of the peak power (the maximal instantaneous power value) and the cycle power (the power that is required for a complete energy transfer (flip-flop) between the NV and the nucleus) between a common HH drive and a phase modulated drive are $4$ and $2$ respectively \cite{supp}. Moreover, PM may result in significantly prolonged coherence times due to the precise phase control of microwave sources, and the elimination (to first order) of amplitude fluctuations in $\Omega_1$ \cite{supp}.
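These ratios can be checked with a short back-of-the-envelope calculation (using our reading of peak and cycle power; the precise definitions are given in \cite{supp}). For $\Omega_2 = \Omega_1$ the resonance condition reads $\omega_l = 2 \Omega_1$, so a standard HH drive would need an RF of $2\Omega_1$ and therefore
$$
\frac{P^{\rm peak}_{\rm HH}}{P^{\rm peak}_{\rm PM}} \sim \frac{(2\Omega_1)^2}{\Omega_1^2} = 4 .
$$
Moreover, the flip-flop prefactor in $H_{II}$ above is $g/2$, compared with $g$ in the resonant HH case \cite{supp}, so a complete flip-flop takes twice as long and the ratio of cycle powers is $4 \times \frac{1}{2} = 2$, consistent with the values stated above.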
The above procedure is correct in the limit of $\Omega_2 \ll \Omega_1$. However, we aim to increase $\Omega_2$ as much as possible without reducing the sensitivity. To this end, we have to take into account the Bloch-Siegert Shift (BSS) due to the counter-rotating terms of the second modulation $\Omega_2$, which induces a shift of the resonance. In addition, this decreases the coupling to the nucleus, and more importantly, the coherence time of the NV as the decoupling effect of the drive is not effective any more (Fig. \ref{Coherence} (blue)). To improve this, we suggest to correct the BSS when adjusting the frequency $\Omega_1$ in the PM $\phi\left(t\right)=2 \frac{\Omega_2}{\Omega_1} \sin\left(\Omega_1 t\right) $ and modify it to $\tilde{\Omega}_1 = \frac{1}{3}\left( \Omega_1+ \sqrt{4 \Omega_1^2+3 \Omega_2^2}\right)$. In this case, the resonance frequency is $\tilde{\Omega}_1 +\tilde{\Omega}_2 = \omega_l$, where $\tilde{\Omega}_2 = \frac{\Omega_2}{2}\left(1+\frac{\Omega_1+\tilde{\Omega}_1}{\sqrt{\Omega_2^2+\left(\Omega_1+\tilde{\Omega}_1\right)^2}}\right)$ \cite{supp}.
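To illustrate the size of the correction, take $\Omega_2 = \Omega_1$ in the formulas above (plain arithmetic): $\tilde{\Omega}_1 = \frac{1+\sqrt{7}}{3}\, \Omega_1 \approx 1.22\, \Omega_1$ and $\tilde{\Omega}_2 \approx 0.96\, \Omega_1$, so the corrected resonance is located at
$$
\tilde{\Omega}_1 + \tilde{\Omega}_2 \approx 2.17\, \Omega_1
$$
rather than at the naive value $\Omega_1 + \Omega_2 = 2 \Omega_1$, a shift of close to $9\%$.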
\begin{figure}
\caption{ Polarization as a function of $\frac{\Omega_2}{\Omega_1}$ in the strong and weak (inset) coupling regimes without BSS correction (blue) and with BSS correction (green). Strong coupling regime: without correction the polarization rate begins to sharply decrease at $\Omega_2 \approx \Omega_1$. The correction enables to maintain good polarization rates up to $\Omega_2 \approx 1.8\Omega_1$. Weak coupling regime: The analysis takes noise into account. The polarization is effective up to $\Omega_2 \approx 1.4 \Omega_1$. }
\label{Polar}
\end{figure}
\begin{figure}
\caption{Coherence time ($T_2$) as a function of $\frac{\Omega_2}{\Omega_1}$. Without the BSS correction (blue) - at the regime of an efficient polarization, $T_2$ is decreased as $\Omega_2$ is increased. The optimal $T_2$ is sharply peaked at $\frac{\Omega_2}{\Omega_1}\approx0.125$ with $T_2\approx 330\:\mu s$ (not shown). With the BSS correction (green) - a long $T_2$ time is maintained while increasing $\Omega_2$. The optimal $T_2$ is peaked at $\frac{\Omega_2}{\Omega_1}\approx0.4$ with $T_2\approx 1000\:\mu s$. The coherence time is a crucial parameter in the efficiency of control and estimation.}
\label{Coherence}
\end{figure}
In Fig. (\ref{Polar}) we show simulation results \cite{supp} for the nucleus polarization as function of $\frac{\Omega_2}{\Omega_1}$. In the main figure we consider the strong coupling regime, where the polarization time $t=2\pi/g$ is much shorter than the decoherence time of the NV center and hence, decoherence effects are neglected. In the inset we consider the weak coupling regime where noise decreases the polarization rate \cite{supp}. We define the nuclear spin polarization, $P_N$, as the probability of the nuclear spin to be in its initial state $\ket{\uparrow_z}$. Specifically, we initialize the NV-Nucleus state to $\ket{\psi_i}=\ket{\downarrow_z}_{NV}\ket{\uparrow_z}_{N}=\ket{\downarrow_z \uparrow_z}$ and calculate the polarization according to $P_N =|\braket{\uparrow_z \uparrow_z}{\psi}|^2 + |\braket{\downarrow_z \uparrow_z}{\psi}|^2$, where $\ket{\psi}$ is the joint NV-Nucleus state at the optimal polarization time. Hence, $P_N=0$ corresponds to optimal polarization and $P_N=1$ corresponds to no polarization at all. While in the strong coupling regime the correction always results in better polarization rates, in the weak coupling regime the advantage of correction is lost at $\Omega_2 \approx 1.5 \Omega_1$. In Fig. (\ref{Coherence}) we show the expected coherence times, $T_2$, of the NV as function of $\frac{\Omega_2}{\Omega_1}$ \cite{supp}. Without the correction the optimal coherence time is sharply peaked at $\frac{\Omega_2}{\Omega_1}\approx0.125$ with $T_2\approx 330\:\mu s$ (not shown). The coherence time is reduced when $\Omega_2$ is increased due to an amplitude mixing of $\propto \frac{\Omega_2}{\Omega_1}$ between the dressed states, which introduces back a first order contribution of the drive noise $\propto \frac{\Omega_2} {\Omega_1} \delta \Omega_1$. This decoherence is greatly mitigated by the correction of the BSS up to $\Omega_2 \approx \Omega_1$, which results in an improvement of one order of magnitude in the coherence times. With the correction the optimal coherence time is peaked at $\frac{\Omega_2}{\Omega_1}\approx0.4$ with $T_2\approx 1000\:\mu s$. In this case, the coherence time is mainly limited by the second order contribution of the drive noise $\sim\frac{\delta \Omega_1^2}{\Omega_2}$. The BSS correction enables to further increase $\Omega_2$ and results in prolonged NV's coherence times and higher polarization rates.
\emph{Large frequency mismatch ---}
The natural way to compensate for the frequency mismatch is to introduce a detuning ($\delta$) to the drive. This detuning induces an extra modulation that creates an effective frequency of $\sqrt{\Omega_1^2 + \delta^2}$, which, in principle, can be as high as needed $\left(\sqrt{\Omega_1^2 + \delta^2} \gg \Omega_1\right)$. When the effective frequency $\sqrt{\Omega_1^2 + \delta^2}$ is equal to the LF, the HH condition is fulfilled and the electron-nucleus interaction is enabled \cite{supp}. This, however, comes with a price: the electron-nucleus coupling strength is decreased by a factor of $\sim\frac{\Omega_1}{\delta}$ \cite{supp} (Fig. \ref{idea3} (c)). Here the decoupling effect of a resonant drive vanishes and the NV's coherence time approaches $T_2^*$. In \cite{supp} we show how to circumvent this by adding a second drive. This scheme, however, could be extremely power efficient, e.g., for $\delta=10 \Omega_1$ the ratios of the peak power and the cycle power between a common HH drive and a detuned drive are $101$ and $10.1$ respectively \cite{supp}.
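These numbers can be verified directly (again with our reading of peak and cycle power): for $\delta = 10\, \Omega_1$ the resonance requires $\omega_l = \sqrt{\Omega_1^2 + \delta^2} = \sqrt{101}\, \Omega_1$, so a resonant HH drive would need a peak power $\propto \omega_l^2 = 101\, \Omega_1^2$ versus $\Omega_1^2$ for the detuned drive, giving the ratio $101$. Since the effective coupling is reduced by the factor $\sim \Omega_1 / \delta = 1/10$, a complete flip-flop takes ten times longer, and the ratio of cycle powers is $101/10 = 10.1$.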
An alternative way to reach the resonance is to modulate the amplitude of the drive. This AM generates higher harmonics of the modulation frequency that can be tuned to be on-resonance with the LF. We start with the Hamiltonian $ H = \frac{\omega_0}{2} \sigma_z + \frac{\omega_l}{2} I_z + g \sigma_z I_x + \Omega(t) \sigma_x\cos(\omega_0 t) $ and set $\Omega(t) = \Omega_0 + \Omega_1 \cos(\Omega_2 t)$.
Moving to the IP with respect to $H_0 = \frac{\omega_0}{2} \sigma_z$ and making the rotating-wave-approximation (RWA) $\left(\omega_0 \gg |\Omega(t)|\right)$ we obtain $ H_I = \frac{\Omega(t)}{2} \sigma_x + \frac{\omega_l}{2} I_z + g \sigma_z I_x, $ which in the basis of the NV dressed states ($x \rightarrow z$, $z \rightarrow -x$, and $y \rightarrow y$) is given by $ H_I = \frac{\Omega(t)}{2} \sigma_z + \frac{\omega_l}{2} I_z - g \sigma_x I_x. $ We continue by moving to the second IP with respect to $H_0 = \frac{\Omega(t)}{2} \sigma_z + \frac{\omega_l}{2} I_z$, which results in $ H_{II} = -g \left( \sigma_+ e^{i \left( \Omega_0 t + \frac{\Omega_1}{\Omega_2} \sin(\Omega_2 t) \right) } + h.c. \right) \left( I_+ e^{i \omega_l t} + I_- e^{-i \omega_l t} \right). $ The exponent $e^{i \left( \Omega_0 t + \frac{\Omega_1}{\Omega_2} \sin(\Omega_2 t) \right)}$ contains the higher harmonics of $\Omega_2,$ i.e., $n \Omega_2$, where $n$ is an integer. This can be seen from the Jacobi-Anger expansion $ e^{i \left( \Omega_0 t + \frac{\Omega_1}{\Omega_2} \sin(\Omega_2 t) \right)} = e^{i \Omega_0 t} \sum_{n = -\infty}^{n = +\infty} J_n\left(\frac{\Omega_1}{\Omega_2} \right) e^{i n \Omega_2 t}. $ We can therefore set the resonance condition to $\Omega_0 + \Omega_2 = \omega_l$. Assuming the RWA $\left(\Omega_2 \gg g\right)$ we get that $ H_{II} \approx -g J_1\left(\frac{\Omega_1}{\Omega_2} \right) \left( \sigma_+ I_- + \sigma_- I_+ \right) $ when the resonance condition is fulfilled. In the regime of $\Omega_2 \gg \Omega_1$, $J_1\left( \frac{\Omega_1}{\Omega_2} \right)\approx \frac{\Omega_1}{2\Omega_2}.$ Hence, the coupling strength is similar to the one in the previous method; however, this scheme is robust to magnetic noise. Numerical analysis of this method is shown in Fig. \ref{Bessel3}. With a single AM the method suffers from amplitude fluctuations in $\Omega_0$, which could be eliminated by realising this as a second drive from a PM \cite{supp}. This scheme is also power efficient, e.g., for $\Omega_2 = 9 \Omega_0$ the ratios of the peak power and the cycle power between a common HH drive and an amplitude modulated drive are $25$ and $3.7$ respectively \cite{supp}.
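As a consistency check of these ratios, assume for concreteness $\Omega_1 = \Omega_0$ (a choice that is not specified above but reproduces the quoted numbers). The resonance condition gives $\omega_l = \Omega_0 + \Omega_2 = 10\, \Omega_0$, so a resonant HH drive needs a peak power $\propto (10 \Omega_0)^2 = 100\, \Omega_0^2$, while the modulated drive has peak amplitude $\Omega_0 + \Omega_1 = 2 \Omega_0$ and peak power $\propto 4\, \Omega_0^2$, i.e. a ratio of $25$. For the cycle power, the time-averaged power of the modulated drive is $\propto \langle (\Omega_0 + \Omega_1 \cos(\Omega_2 t))^2 \rangle = \frac{3}{2}\, \Omega_0^2$, and the effective coupling $g J_1(\Omega_1/\Omega_2) \approx g/18$ makes a flip-flop roughly $18$ times slower than in the HH scheme, so the ratio of cycle powers is approximately $100/\left(\frac{3}{2} \times 18\right) \approx 3.7$.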
\emph{Quantum sensing ---}
Addressability is the ability of a probe to individually address and control nuclear spins, which was discussed above.
However, addressability is not necessary for quantum sensing where, e.g., one is only interested in estimating the LF, as in nano-NMR experiments. The resolution of addressability is defined by the ability to control a nucleus with a given frequency $\omega_l$, while leaving nuclei with different frequencies outside of a frequency width $\Delta \omega$ (centered at $\omega_l$) unaffected. As shown in Fig. \ref{Bessel3} in blue, the addressability resolution is limited by the coupling strength. This is because all frequencies within a width of the coupling strength from the resonance will couple to the probe. Hence, the stronger the coupling, the worse the resolution, and a larger band of frequencies will be addressed by the probe.
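This statement can be quantified with a textbook two-level estimate (a sketch, not specific to the exact conventions used here): for a flip-flop coupling of strength $g$ and a detuning $\Delta$ from resonance, the maximal population transfer is
$$
P_{\max} = \frac{4 g^2}{4 g^2 + \Delta^2},
$$
so nuclei detuned by $|\Delta| \lesssim g$ respond almost fully while the response decays only as $g^2/\Delta^2$ further away; this is the sense in which the addressed frequency band has a width set by the coupling strength.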
\begin{figure}
\caption{Polarization as a function of the nuclear LF in units of a tenth of the coupling strength (blue line). The central resonance corresponds to $\omega_l=\Omega_0$. The two sidebands correspond to $\omega_l = \Omega_0 \pm \Omega_2.$ The $y$ axis corresponds to the occupation of the nucleus when the initial state is the $\vert \uparrow_z \rangle$ state, i.e., $P_N = 1.$ The main dip is broader than the side dips because the coupling at the sideband frequencies is reduced.
In contrast, the yellow line represents the analysis of the quantum sensing Hamiltonian, whose features are much narrower and are not limited by the coupling strength. The numerical simulations were performed with $\Omega_0 = 1.5$ MHz, $\Omega_1=0.1$ MHz, $\Omega_2=1$ MHz, and $g= 0.05 \Omega_2$.}
\label{Bessel3}
\end{figure}
However, when the NV is used to estimate the LF, one would expect that the stronger the coupling the more information would be acquired; an increased coupling strength should improve the resolution and not limit it. The addressability resolution limit could be overcome by designing the Hamiltonian differently. In cases where control is not necessary, and one is just interested in frequency estimation of the nuclei, methods that are not limited by the coupling strength could be designed. The difference between the methods is analogous to the difference between Rabi and Ramsey spectroscopy. While power is a limiting factor in the first method (necessitating weak pulses), it poses no limitation in the second method.
The addressability resolution problem occurs because the NV part of the coupling is a $\sigma_{\pm}$ operator that mediates energy transfer. This is crucial for control; however, it is not needed for sensing.
An interaction of the addressability type, $g \left( \sigma_- I_+ + \sigma_+ I_- \right)$, transfers excitations between the two spins as long as their frequency difference is smaller than the coupling strength $g.$ Thus, the target frequencies within a spread of $g$ are addressed by the probe. However, an interaction of the type $g \sigma_x \left( I_+ + I_- \right) = g \sigma_x I_x$ could be utilized to estimate the frequencies of the target spins with a resolution that is not limited by the coupling strength \cite{schmitt2017submillihertz,bucher2017high,boss2017quantum}.
This can be achieved by transforming the $\sigma_-,\sigma_+$ operators into a $\sigma_x$ (or $\sigma_y$) operator, which is doable as $\sigma_\pm = \sigma_x \pm i \sigma_y$ and $\sigma_y$ could be eliminated with a suitable control, for example, by adding a strong $\sigma_x$ drive that will eliminate the $\sigma_y$ part.
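Schematically, this is the usual spin-locking mechanism: in the IP with respect to a strong drive $\frac{\Omega_s}{2} \sigma_x$ (with $\Omega_s \gg g$), the operator $\sigma_x$ is left invariant while
$$
\sigma_y \rightarrow \cos\left(\Omega_s t\right) \sigma_y + \sin\left(\Omega_s t\right) \sigma_z
$$
(up to sign conventions) rotates rapidly and averages out in the RWA, so that only the $\sigma_x I_x$ part of the coupling survives.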
For the case of a small frequency mismatch this can be achieved by adding an extra drive on the NV, which rotates at $\Omega_2$ (this amounts to $\Omega_s \cos(\omega_0 t)\cos(\Omega_2 t) \sigma_x$). In \cite{supp} we explicitly show that this results in a Hamiltonian that can be used for sensing the LF, i.e., $ H_{I} \approx \frac{g}{4} \sigma_z \left( I_x \cos(\delta t) - I_y \sin(\delta t) \right), $ where $\delta = \Omega_1 +\Omega_2 -\omega_l.$
As the extra term acts as a spin locking at $\Omega_s,$ the robustness of the methods is preserved. The classical version of this Hamiltonian was used in \cite{schmitt2017submillihertz,boss2017quantum,bucher2017high,laraoui2013high,zaiser2016enhancing,staudacher2015probing,ajoy2015atomic,rosskopf2016quantum,laraoui2011diamond,pfender2016nonvolatile} where it was shown that the resolution is only limited by the clock and signal coherence times. The resolution obtained by this Hamiltonian, which is the generic sensing Hamiltonian, is only limited by the coherence time of the nuclei and the sensitivity is improved with the coupling strength \cite{gefen2018quantum}.
The same can be done in the large frequency mismatch regime. The interaction should be changed from the flip - flop interaction $g \left( \sigma_+ I_- + \sigma_- I_+ \right)$ to $g \sigma_x I_x$ by adding, for example, a $\sigma_x$ drive to the modulation. In this case the Hamiltonian is transformed to \cite{supp} $ H \approx g J_1\left(\frac{\Omega_1}{\Omega_2} \right) \sigma_x \left(I_x \cos(\delta t) - I_y \sin(\delta t) \right). $ The result of using this Hamiltonian for estimating the nuclei's frequencies is shown in Fig. \ref{Bessel3}. The yellow line is the Fourier transform of the time series of NV measurements for a scenario in which a few nuclei are present at the three frequencies $\Omega_0,\Omega_0 \pm \Omega_1.$ The width of these peaks (one over the total experiment time) is narrower than the peaks of the control method (blue line), which is limited by the coupling strength.
The challenge of controlling and sensing high-frequency nuclei under power limitations of the driving fields was addressed both in the small and large frequency mismatch regimes. We have designed schemes that are robust both to magnetic field fluctuations and RF noise.
The presented protocols could potentially allow for the realization of experiments in an important regime which is currently out of reach and could considerably simplify state of the art experiments.
We would like to note that during the preparation of this manuscript we became aware of a related independent work by Casanova et al. \cite{casanova2019}.
\emph{Acknowledgements} A. R. acknowledges the support of ERC grant QRES, project No. 770929, grant agreement No 667192(Hyperdiamond), the MicroQC, the ASTERIQS and the DiaPol project.
\baselineskip=12pt
\section{Supplementary Material}
\section{The Model} We consider an NV center electronic spin that is interacting with a single or several nuclei via the dipole - dipole interaction. As we are interested in the regime in which the energy gap due to the Zeeman splitting of the NV is orders of magnitude larger than the energy gap of the nucleus, only the $T_{+1} + T_{-1} = \frac{3}{2} \sin \theta \cos \theta \sigma_z \left(I_x \cos \phi + I_y \sin \phi \right),$ (see, for example, \cite{cohen1986quantum}, Complement $B_{XI}$) term of the dipole - dipole interaction is significant, where $\theta, \phi$ are the polar angles representing the vector joining the NV and the nucleus. All other terms of the dipole-dipole interaction are fast rotating and thus can be neglected to leading order. In most cases it is the above term that is used for polarization and for sensing, in particular, in the high-field NMR experiments. Because the energy gaps of the ground state sub-levels of the NV are much larger than the RF, only two levels are addressed by the microwave driving fields and thus, the NV could be approximated as a two-level system. \section{The Hartmann-Hahn condition} \label{sec_HH} Under an on-resonance drive of the NV, the Hamiltonian of the NV center spin and the nuclear spin is given by \begin{equation} H = \frac{\omega_0}{2} \sigma_z + \frac{\omega_l}{2} I_z + g \sigma_z I_x + \Omega_1 \sigma_x \cos \left(\omega_0 t \right), \label{HH1} \end{equation} where $\omega_0$ corresponds to the energy gap of the NV center spin, $\omega_l$ is the Larmor frequency of the nuclear spin, $\sigma_z$ and $I_z$ are the Pauli operators in the direction of the static magnetic field of the NV center and the nucleus respectively, $g$ is the NV - nucleus coupling strength, which depends on the distance between the two, and where we have simplified the $T_1+T_{-1}$ term to $g \sigma_z I_x$, and $\Omega_1$ is the Rabi frequency of the on-resonance driving field of the NV center. By moving to the interaction picture (IP) with respect to the first term, $H_0 = \frac{\omega_0}{2} \sigma_z $, and making the rotating-wave-approximation (RWA) assuming that $\omega_0 \gg \Omega_1$, we obtain \begin{equation} H_I = \frac{\Omega_1}{2} \sigma_x + \frac{\omega_l}{2} I_z + g \sigma_z I_x . \end{equation} In the basis of the dressed NV center states ($x \rightarrow z$, $z \rightarrow -x$, and $y \rightarrow y$) we have that \begin{equation} H_I = \frac{\Omega_1}{2} \sigma_z + \frac{\omega_l}{2} I_z - g \sigma_x I_x. \end{equation} Moving now to the second IP with respect to $H_0 = \frac{\Omega_1}{2} \sigma_z + \frac{\omega_l}{2} I_z$, we arrive at \begin{equation} H_{II} \approx - g\left(\sigma_+ I_- e^{i\left(\Omega_1-\omega_l\right) t} + \sigma_- I_+ e^{-i\left(\Omega_1-\omega_l\right) t}\right), \label{HH} \end{equation} where fast rotating terms have been neglected. It is clear from eq. \ref{HH} that for sensing and control of the nuclear spin by the NV it is necessary to fulfill the resonance condition, $\Omega_1 = \omega_l$, which is the Hartmann-Hahn condition.
\begin{figure}
\caption{Small frequency mismatch scheme. (a) The electron spin is driven with a bounded RF $\Omega_1$ that is smaller than the nuclear LF $\omega_l$ ($\Omega_1<\omega_l$), but with a phase modulation $\phi\left(t\right)$ (see text). (b) This results in electronic dressed states with an energy gap of $\Omega_1$ that are driven on-resonance by a second drive with a RF of $\Omega_2$. The second drive originates only from the phase modulation which does not require additional power beyond the power of $\propto \Omega_1^2$ that is required for the bounded RF of $\Omega_1$. (c) The second drive $\Omega_2$ results in double-dressed states of the electron that match the resonance condition with $\Omega_1 + \Omega_2 = \omega_l$. Dotted lines indicate resonance frequencies and solid lines indicate driving fields.}
\label{smallM}
\end{figure}
\section{Phase modulation - the basic scheme} We consider the following Hamiltonian of the NV center and the nucleus, \begin{eqnarray} H &=& \frac{\omega_0}{2} \sigma_z + \delta B(t) \sigma_z+ \frac{\omega_l}{2} I_z + g \sigma_z I_x \nonumber \\
&+& \left( \Omega_1 + \delta \Omega_1(t) \right) \sigma_x \cos \left( \omega_0 t + 2 \frac{\Omega_2}{\Omega_1} \sin(\Omega_1 t) \right),
\label{phasedep} \end{eqnarray} where $ \delta B(t) $ is the noise in the magnetic field, $\Omega_1$ is the RF of the driving field, which defines the phase modulation according to $\phi\left(t\right)=2 \frac{\Omega_2}{\Omega_1} \sin\left(\Omega_1 t\right) $, and $ \delta \Omega_1(t) $ is the amplitude noise in the drive amplitude $\Omega_1$.
In order to see how the Hamiltonian of Eq. \ref{phasedep} results in the resonance condition $\Omega_1 + \Omega_2 = \omega_l$, we start by moving to the first IP in which the drive is time independent, i.e., with respect to $H_0 = \frac{\omega_0 + 2 \Omega_2 \cos(\Omega_1 t)}{2}\sigma_z$. This results in \begin{eqnarray} H_I &=& \frac{\left( \Omega_1 + \delta \Omega_1(t) \right)}{2} \sigma_x + \delta B(t) \sigma_z - \Omega_2 \cos \left( \Omega_1 t \right) \sigma_z \nonumber\\ &+& \frac{\omega_l}{2} I_z + g \sigma_z I_x, \label{eqIP1} \end{eqnarray} which is similar to a concatenated double-drive Hamiltonian, this time, however, with a very stable second drive, $\Omega_2$. Because the magnetic noise is perpendicular to the basis of the dressed states, robustness to the magnetic noise (in first order) is achieved. From here on, we neglect the magnetic noise whose leading (second order) contribution is $\sim \frac{ \delta B(t)^2}{\Omega_1}$.
We continue by rotating to the basis of the dressed states (as in section \ref{sec_HH}) such that \begin{eqnarray} H_I &=& \frac{\left( \Omega_1 + \delta \Omega_1(t) \right)}{2} \sigma_z +\Omega_2 \cos \left( \Omega_1 t \right) \sigma_x \nonumber\\ &+& \frac{\omega_l}{2} I_z - g \sigma_x I_x. \end{eqnarray} It is now clear that the phase modulation results in a second drive that drives the dressed states on-resonance with a RF of $\Omega_2$. The double-dressed states are obtained by moving to the second IP with respect to $H_0=\frac{ \Omega_1}{2} \sigma_z$, \begin{eqnarray} H_{II} &=& \frac {\Omega_2}{2}\sigma_x+ \frac{\delta \Omega_1(t)}{2} \sigma_z \nonumber\\ &+& \frac{\omega_l}{2} I_z - g\left( \sigma_+ e^{i \Omega_1 t} + \sigma_- e^{-i \Omega_1 t} \right) I_x. \label{IP2} \end{eqnarray} Because the amplitude noise $ \delta \Omega_1(t) $ is perpendicular to the basis of the double-dressed states, robustness to the amplitude noise (in first order) is achieved. From here on, we neglect the amplitude noise whose leading (second order) contribution is $\sim \frac{\delta \Omega_1(t) ^2}{\Omega_2}$. Moving to the basis of the double-dressed states we get \begin{eqnarray} \label{eq7} H_{II} &=& \frac {\Omega_2}{2}\sigma_z+ \frac{\omega_l}{2} I_z - g\left( \cos \left(\Omega_1 t\right) \sigma_z + \sin \left(\Omega_1 t\right) \sigma_y\right) I_x \nonumber\\ &\approx& \frac {\Omega_2}{2}\sigma_z+ \frac{\omega_l}{2} I_z \\ &-& \frac{g}{2} \left( \sigma_{+} \left(e^{i \Omega_1 t}- e^{-i \Omega_1 t}\right)+ \sigma_{-} \left(e^{-i \Omega_1 t} - e^{i \Omega_1 t}\right) \right) I_x \nonumber, \end{eqnarray} where we have omitted the fast rotating terms $g \cos \left(\Omega_1 t\right) \sigma_z $ in the approximation. From this expression it is seen that a resonance condition appears when $\Omega_1 +\Omega_2 = \omega_l$ (or when $\Omega_1 -\Omega_2 = \omega_l$). Even though the power of the driving field is $\propto\Omega_1^2$ and is independent of $\Omega_2,$ Larmor frequencies which are higher than what is available by the peak power in a common HH scheme are reachable (Fig. \ref{smallM}).
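As a quick sanity check of this construction (an illustrative sketch with hypothetical numbers), one can verify numerically that the instantaneous frequency of the phase-modulated drive is indeed $\omega_0 + 2\Omega_2\cos\left(\Omega_1 t\right)$, i.e., exactly the $H_0$ used for the first IP:
\begin{verbatim}
# The drive cos(omega_0*t + phi(t)) with phi(t) = 2*(Omega_2/Omega_1)*sin(Omega_1*t)
# has instantaneous frequency omega_0 + 2*Omega_2*cos(Omega_1*t).
# Parameter values below are illustrative placeholders.
import numpy as np

omega_0 = 2 * np.pi * 2.0e9      # NV transition frequency
Omega_1 = 2 * np.pi * 3.3e6      # bounded Rabi frequency
Omega_2 = 2 * np.pi * 1.0e6      # phase-generated second drive

t = np.linspace(0, 5 * 2 * np.pi / Omega_1, 200001)
phase = omega_0 * t + 2 * (Omega_2 / Omega_1) * np.sin(Omega_1 * t)

inst_freq = np.gradient(phase, t)                       # numerical derivative
expected = omega_0 + 2 * Omega_2 * np.cos(Omega_1 * t)

dev = np.max(np.abs(inst_freq[1:-1] - expected[1:-1])) / omega_0
print("max relative deviation (interior points):", dev)  # limited by finite differences
\end{verbatim}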
\section{Correction of the Bloch-Siegert shift} In this section we give a detailed derivation of the correction of the Bloch-Siegert shift. The correction can be understood as follows. Without the correction, we first consider the dressed states due to the rotating-terms of the drive ($\frac{\Omega_2}{2}\sigma_x$) and then consider the effect of the off-resonance counter-rotating terms of the drive ($\frac{\Omega_2}{2}\left(\sigma_+ e^{i \Omega_1 t} + \sigma_- e^{-i \Omega_1 t}\right)$) on the dressed states (the eigenstates of $\frac{\Omega_2}{2}\sigma_x$). This results in an energy shift of the dressed states, and (a time-dependent) amplitude-mixing between the dressed states, which decreases the coherence time.
To correct this effect, we first consider the effect of the counter-rotating terms on the bare states, and then fix the frequency of the drive accordingly such that the rotating terms will be on-resonance with the modified bare states. Consider the driving Hamiltonian \begin{equation} H_d= \frac{\Omega_1}{2} \sigma_x -\Omega_2 \cos \left( \omega_2 t \right) \sigma_z. \end{equation} Instead of moving to the IP of the rotating frame we first move to the IP of the counter-rotating frame with respect to $H_0 = -\frac{\omega_2}{2} \sigma_x$ and obtain \begin{equation} H_I= \frac{\Omega_1+\omega_2}{2} \sigma_x -\frac{\Omega_2}{2}\sigma_z -\frac{\Omega_2}{2}\left(\sigma_+ e^{-2i\omega_2 t}+\sigma_- e^{+2i\omega_2 t} \right). \end{equation} We continue by moving to the diagonal basis of the time-independent part of $H_I$, \begin{equation} H_I\approx \frac{1}{2} \sqrt{(\Omega_1+\omega_2)^2+\Omega_2^2} \sigma_z -\frac{\tilde{\Omega}_2}{2}\left( \sigma_+ e^{-2i\omega_2 t}+\sigma_- e^{+2i\omega_2 t} \right), \end{equation} where \begin{equation} \tilde{\Omega}_2 = \frac{\Omega_2}{2}\left(1+\frac{\Omega_1+\omega_2}{\sqrt{\Omega_2^2+\left(\Omega_1+\omega_2\right)^2}}\right). \label{eq_tilde_O2} \end{equation} By setting $2\omega_2= \sqrt{(\Omega_1+\omega_2)^2+\Omega_2^2}$ we have that the rotating terms are on-resonance with the energy gap of the modified bare states. The on-resonance condition is therefore given by \begin{equation} \omega_2=\frac{1}{3}\left(\Omega_1+ \sqrt{4 \Omega_1^2+3 \Omega_2^2}\right). \end{equation} In this case the amplitude-mixing between the dressed states is greatly diminished and hence, the coherence time of the NV center may be significantly prolonged compared to the scenario without the correction. Hence, we modify the frequency $\Omega_1$ in the phase modulation $\phi\left(t\right)=2 \frac{\Omega_2}{\Omega_1} \sin\left(\Omega_1 t\right) $ in Eq. \ref{phasedep} to \begin{equation} \tilde{\Omega}_1 = \frac{1}{3}\left( \Omega_1+ \sqrt{4 \Omega_1^2+3 \Omega_2^2}\right). \label{eq_tilde_O1} \end{equation} Eq. \ref{eq_tilde_O2} and Eq. \ref{eq_tilde_O1} imply that the resonance frequency, which is given by $\Omega_1 + \Omega_2 = \omega_l$, is modified to \begin{equation} \tilde{\Omega}_1 +\tilde{\Omega}_2 = \omega_l. \end{equation}
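The corrected drive frequency can also be cross-checked numerically (an illustrative sketch with hypothetical values of $\Omega_1$ and $\Omega_2$): solving $2\omega_2=\sqrt{(\Omega_1+\omega_2)^2+\Omega_2^2}$ with a standard root finder reproduces the closed-form expression above.
\begin{verbatim}
# Solve 2*w2 = sqrt((Omega_1 + w2)**2 + Omega_2**2) and compare with
# w2 = (Omega_1 + sqrt(4*Omega_1**2 + 3*Omega_2**2)) / 3.
import numpy as np
from scipy.optimize import brentq

Omega_1 = 2 * np.pi * 3.3e6      # illustrative values, rad/s
Omega_2 = 1.2 * Omega_1

f = lambda w2: 2 * w2 - np.sqrt((Omega_1 + w2) ** 2 + Omega_2 ** 2)
w2_numeric = brentq(f, 0.1 * Omega_1, 10 * Omega_1)
w2_closed = (Omega_1 + np.sqrt(4 * Omega_1 ** 2 + 3 * Omega_2 ** 2)) / 3

print(w2_numeric / Omega_1, w2_closed / Omega_1)   # the two values coincide
\end{verbatim}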
\section{Numerical analysis} \subsection{Strong coupling regime}
\begin{figure}
\caption{The polarization, which is defined as the population of the nucleus in the initial $\ket \uparrow_z$ state (see text), as a function of the resonance frequency shift $\delta \omega_l$ in units of $\Omega_2$. Here we consider polarization with the correction of the Bloch-Siegert shift with $\Omega_2 = 1.2 \Omega_1$. The optimal polarization is obtained for $\frac{\delta \omega_l}{\Omega_2} = -0.015$ and is equal to $P_N = 0.0015$. (Inset) The polarization as a function of time for the optimal value of $\delta \omega_l$. The time is in units of $\mu$s.}
\label{Polar1}
\end{figure}
In the strong coupling regime we consider the scenario in which the polarization time $t=2\pi/g$ is short enough such that the effect of noise on the polarization is minor, that is, the polarization time is much shorter than the decoherence time of the NV center. Hence, we neglect decoherence effects.
Because we consider the regime of high magnetic fields, we have that $\omega_0 \gg \Omega_1$ and the RWA is valid with respect to the first drive $\Omega_1$. Hence, in the numerical analysis we simulated the Hamiltonian in the first IP, which is given by Eq. \ref{eqIP1}. Since here we neglect decoherence effects, we omitted the terms of the magnetic and drive noise. The simulations were performed with $\Omega_1 = 2 \pi \times 3.3$ MHz and $g = 0.04 \Omega_1$, where the value of $\Omega_2$ was varied. For each value of $\Omega_2$ we scanned the resonance frequency around the ideal value of $\omega_l = \Omega_1 + \Omega_2$ or $\omega_l =\tilde{\Omega}_1 +\tilde{\Omega}_2$ (without and with the correction of the Bloch-Siegert shift respectively) and found the additional shift in the resonance frequency $\delta \omega_l$, which results from the effect of the fast-rotating terms (Fig. \ref{Polar1}). At the resonance frequency, $\omega_l = \Omega_1 + \Omega_2 + \delta \omega_l$ (or $\omega_l = \tilde{\Omega}_1 + \tilde{\Omega}_2 + \delta \omega_l$ with the correction), the maximal polarization is obtained. We define the nuclear spin polarization, $P_N$, as the probability of the nuclear spin to be in its initial state $\ket{\uparrow_z}$. Specifically, we initialize the NV-Nucleus state to $\ket{\psi_i}=\ket{\downarrow_z}_{NV}\ket{\uparrow_z}_{N}=\ket{\downarrow_z \uparrow_z}$ and calculate the polarization according to $P_N =|\braket{\uparrow_z \uparrow_z}{\psi}|^2 + |\braket{\downarrow_z \uparrow_z}{\psi}|^2$, where $\ket{\psi}$ is the joint NV-Nucleus state at the optimal polarization time. Hence, $P_N=0$ corresponds to optimal polarization and $P_N=1$ corresponds to no polarization at all. Note that since the $\hat{z}$ basis is the basis of the double-dressed states and also the measurement axis, the Rabi frequencies $\Omega_1$ and $\Omega_2$ are invisible to the population measurement of $P_N$. In Fig. \ref{Polar2} we show the nuclear polarization as a function of the ratio $\frac{\Omega_2}{\Omega_1}$ both with (green) and without (blue) the correction of the Bloch-Siegert shift. It is clear that in the strong coupling regime the correction results in better polarization rates, especially when $\Omega_2 \gtrsim \Omega_1$.
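The following minimal sketch illustrates the type of simulation described above (it is not the code used to produce the figures): the first-IP Hamiltonian of Eq. \ref{eqIP1} without the noise terms is propagated with piecewise-constant steps and the population remaining in $\ket{\uparrow_z}$ is monitored while $\omega_l$ is scanned around $\Omega_1+\Omega_2$. The electron is initialised in the dressed state $\ket{-_x}$ and the nuclear operators are taken as Pauli matrices; these choices, the step size and the scan range are illustrative assumptions.
\begin{verbatim}
# Piecewise-constant propagation of Eq. (eqIP1) without noise; the dip of the
# remaining nuclear population near omega_l = Omega_1 + Omega_2 (slightly
# shifted, see text) marks the resonance.  All numerical choices are ad hoc.
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
id2 = np.eye(2, dtype=complex)

Omega_1 = 2 * np.pi * 3.3e6
Omega_2 = 0.5 * Omega_1
g = 0.04 * Omega_1

minus_x = np.array([1, -1], dtype=complex) / np.sqrt(2)
up_z = np.array([1, 0], dtype=complex)
psi0 = np.kron(minus_x, up_z)
P_up = np.kron(id2, np.diag([1.0, 0.0]).astype(complex))

dt = 2 * np.pi / Omega_1 / 40          # ~40 steps per period of the first drive
n_steps = int((2 * np.pi / g) / dt)    # roughly one flip-flop period

for omega_l in np.linspace(0.9, 1.1, 9) * (Omega_1 + Omega_2):
    psi, p_min = psi0.copy(), 1.0
    for k in range(n_steps):
        t = k * dt
        H = (0.5 * Omega_1 * np.kron(sx, id2)
             - Omega_2 * np.cos(Omega_1 * t) * np.kron(sz, id2)
             + 0.5 * omega_l * np.kron(id2, sz)
             + g * np.kron(sz, sx))
        psi = expm(-1j * H * dt) @ psi
        p_min = min(p_min, np.real(psi.conj() @ P_up @ psi))
    print(f"omega_l/(O1+O2) = {omega_l/(Omega_1+Omega_2):.3f}  min pop = {p_min:.2f}")
\end{verbatim}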
\begin{figure}
\caption{The polarization as a function of $\Omega_2$ in units of $\Omega_1$ in the strong coupling regime. First method without correction of the Bloch-Siegert shift (blue) - the polarization rate begins to sharply decrease at $\Omega_2 \approx \Omega_1$. Second method with correction of the Bloch-Siegert shift (green) - the correction enables to maintain good polarization rates up to $\Omega_2 \approx 1.8\Omega_1$.}
\label{Polar2}
\end{figure}
\subsection{Weak coupling regime} In the weak coupling regime the polarization time $t=2\pi/g$ is long enough such that decoherence effects must be taken into account. Hence, the terms of the magnetic noise and the driving amplitude noise in Eq. \ref{eqIP1} are not omitted. In Fig. \ref{Polar3} we show the nuclear polarization as a function of the ratio $\frac{\Omega_2}{\Omega_1}$ both with (green) and without (blue) the correction of the Bloch-Siegert shift. The simulations were performed with $\Omega_1 = 2 \pi \times 3.3$ MHz and $g = 0.01 \Omega_1$. While in the strong coupling regime the correction always results in better polarization rates, in the weak coupling regime the advantage of the correction is lost at $\Omega_2 \approx 1.5 \Omega_1$.
The numerical simulations of the polarization rates and coherence times were performed under the assumption that the pure dephasing time of the NV is $T_2^*= 3 \mu$s, which results from a magnetic noise, $B\left(t\right)$, that is modeled by an Ornstein-Uhlenbeck (OU) process \cite{gillespie1996exact, aharon2016fully} with a zero expectation value, $\left\langle B\left(t\right)\right\rangle =0$, and a correlation function $\left\langle B\left(t\right)B\left(t'\right)\right\rangle =\frac{c\tau}{2}e^{-\gamma\left|t-t'\right|}$, where $c$ is the diffusion constant and $\tau = \frac{1}{\gamma} = 25 \mu$s is the correlation time of the noise. An OU process was also used to realize driving fluctuations. Here we used a correlation time of $\tau_{\Omega}=500\:\mu s$, and a relative amplitude error of $\delta_{\Omega}=1\%$.
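For completeness, a minimal generator of such OU trajectories is sketched below (exact discretisation of the process; the stationary standard deviations used here are placeholders rather than the values used for the figures):
\begin{verbatim}
# Exact-discretisation OU generator.  The stationary variance equals c*tau/2 for
# the correlation function quoted in the text; the numbers below are placeholders.
import numpy as np

def ou_noise(n_steps, dt, tau, stationary_std, rng=np.random.default_rng(0)):
    """OU trajectory with <B>=0 and <B(t)B(t')> = std^2 * exp(-|t-t'|/tau)."""
    decay = np.exp(-dt / tau)
    kick = stationary_std * np.sqrt(1.0 - decay ** 2)
    b = np.empty(n_steps)
    b[0] = stationary_std * rng.standard_normal()
    for k in range(1, n_steps):
        b[k] = decay * b[k - 1] + kick * rng.standard_normal()
    return b

dt = 1e-8                                          # 10 ns step (illustrative)
B = ou_noise(200_000, dt, 25e-6, 2 * np.pi * 50e3) # magnetic noise, tau = 25 us
dOmega = ou_noise(200_000, dt, 500e-6, 0.01 * 2 * np.pi * 3.3e6)  # drive noise
\end{verbatim}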
\begin{figure}
\caption{The polarization as a function of $\Omega_2$ in units of $\Omega_1$ in the weak coupling regime. First method without correction of the Bloch-Siegert shift (blue). Second method with correction of the Bloch-Siegert shift (green). The analysis takes noise into account, which is crucial for a weak coupling ($g$). The polarization is effective up to $\Omega_2 \approx 1.4 \Omega_1$.}
\label{Polar3}
\end{figure}
\section{Large frequency mismatch} \subsection{Method I - Detuned driving field} In this section we provide the derivation of the detuned driving method. The calculation goes as follows. By introducing a detuning $\delta$ to the driving field in Eq. \ref{HH1} we obtain the Hamiltonian \begin{equation} H = \frac{\omega_0}{2} \sigma_z + \frac{\omega_l}{2} I_z + \Omega_1\cos((\omega_0-\delta)t)\sigma_x + g \sigma_z I_x. \end{equation} In order to analyze this scenario it is advantageous to move to the IP in which the drive is time independent and the problem can be analyzed via the resulting dressed states. Hence, we choose to move to the IP with respect to $ \frac{\omega_0-\delta}{2} \sigma_z$, which results in \begin{eqnarray} H_I &=& \frac{\delta}{2} \sigma_z+ \frac{\Omega_1}{2} \sigma_x + \frac{\omega_l}{2} I_z+ g \sigma_z I_x \\
&=& \frac{ \sqrt{\delta^2+\Omega_1^2}}{2} \sigma_\theta + \frac{\omega_l}{2} I_z+ g(\sigma_\theta \cos \theta + \sigma_{\theta_{\perp}} \sin \theta) I_x,\nonumber \end{eqnarray} where $\sigma_{\theta}$ is a Pauli matrix in the direction $\theta = \arctan \left( \frac{\Omega_1}{\delta} \right),$ which is the angle in the $x-z$ plane from the $z$ axis and $\sigma_{\theta_{\perp}}$ is the Pauli matrix in the orthogonal direction.
Close to resonance, when $\sqrt{\delta^2+\Omega_1^2} = \omega_l,$ we can expect excitation transfer between the electron and the nucleus due to the last term, i.e., $ \sigma_{\theta_{\perp}} \sin (\theta) I_x$. Since for large detunings $\sin \theta \approx \frac{\Omega_1}{\delta}$ the effective coupling strength is reduced from $g$ to $\approx \frac{\Omega_1}{\delta} g$, which is depicted in Fig.1 (c) of the main text.
This method, however, suffers from the fact that the decoupling effect due to the resonant drive vanishes and the coherence time of the NV approaches the $T_2^*$ time. Specifically, when moving to the first IP, the magnetic noise $\delta B(t) \sigma_z$ is modified to $\delta B(t)(\sigma_\theta \cos \theta + \sigma_{\theta_{\perp}} \sin \theta) $, which results in a first order contribution as long as $\cos \theta \neq 0$. In order to circumvent this issue and to prolong the coherence time we suggest introducing a second drive by adding an extra term in the Hamiltonian, namely, \begin{eqnarray} H &=& \frac{\omega_0}{2} \sigma_z + \frac{\omega_l}{2} I_z + g \sigma_z I_x + \Omega_1\cos((\omega_0-\delta)t)\sigma_x \nonumber\\
&+& \Omega_2 \cos\left(\left(\omega_0-\delta\right)t + \frac{\pi}{2}\right)\cos\left( \sqrt{\delta^2+\Omega_1^2}\, t\right)\sigma_x. \end{eqnarray} Moving to the first IP with respect to $ \frac{\omega_0-\delta}{2} \sigma_z$ and then to the basis of the dressed states as above, we obtain \begin{eqnarray} H_I &=& \frac{ \sqrt{\delta^2+\Omega_1^2}}{2} \sigma_\theta + \frac{\omega_l}{2} I_z+ g(\sigma_\theta \cos \theta + \sigma_{\theta_{\perp}} \sin \theta) I_x \nonumber\\ &+& \frac{\Omega_2}{2}\cos\left( \sqrt{\delta^2+\Omega_1^2}\, t\right)\sigma_y. \end{eqnarray} We continue by moving to the second IP with respect to $H_0=\frac{ \sqrt{\delta^2+\Omega_1^2}}{2} \sigma_\theta$, which leads to \begin{eqnarray} H_{II} &\approx & \frac{\Omega_2}{4}\sigma_y+ \frac{\omega_l}{2} I_z \nonumber\\ &+& g \sin \theta (\sigma_{\theta_{+}} e^{i\sqrt{\delta^2+\Omega_1^2}t }+\sigma_{\theta_{-}} e^{-i\sqrt{\delta^2+\Omega_1^2}t }) I_x, \end{eqnarray} where $\sigma_{\theta_{+}}$ and $\sigma_{\theta_{-}}$ are the raising and lowering operators in the basis of $\sigma_{\theta}$ respectively. Similar to Eq. \ref{IP2}, we see that the resonance condition is fulfilled when $\sqrt{\delta^2 + \Omega_1^2} + \frac{\Omega_2}{2} = \omega_l$. Hence, Larmor frequencies that are much higher than what is available by the power limitation, which here is $\propto \Omega_1^2 + \Omega_2^2$, are reached.
Because the second drive $\Omega_2$ is along the $y$ axis, which is perpendicular to the basis of the dressed states that is in the $x-z$ plane, the second drive achieves robustness to (first order) magnetic noise and amplitude noise in $\Omega_1$. Hence, the second drive prolongs the coherence time of the NV, and thus the resolution, while shifting the resonance to $\sqrt{\delta^2 + \Omega_1^2} + \frac{\Omega_2}{2} = \omega_l$.
The disadvantage of this method is that the second drive $\Omega_2$, which results in a drive along the $y$ direction, cannot be generated by a phase modulation, which results in a drive along the $z$ direction, and thus it is not robust against amplitude fluctuations of $\Omega_2$. This means that the decoupling limit is as good as the coherence time achieved in regular spin locking, which is roughly an order of magnitude longer than $T_2^*.$ For some scenarios (weak coupling regime) the coherence time will have to be further prolonged by coherent control, for example, by adding an extra drive.
\subsection{Method II - Amplitude modulation}
With only a single amplitude modulation, the method suffers from amplitude fluctuations in $\Omega_0.$ These fluctuations could be eliminated by creating this drive as a second drive from a phase modulation as in the small frequency mismatch method. Specifically, the driving Hamiltonian of the NV center is given by \begin{equation} H = \frac{\omega_0}{2} \sigma_z + \Omega_0 \cos \left(\omega_0 t + \varphi\left(t\right)\right)\sigma_x, \end{equation} where \begin{eqnarray} \varphi\left(t\right)&=& 2\left(\frac{\Omega_1}{\Omega_0}\sin\left(\Omega_0 t\right)\right.\nonumber\\ &+&\frac{\Omega_2}{\Omega_0^2-\Omega_3^2}\left[\Omega_0\cos\left(\Omega_3 t\right)\sin\left(\Omega_0 t\right)\right.\nonumber\\ &-& \left.\Omega_3\cos\left(\Omega_0 t\right)\sin\left(\Omega_3 t\right)\right]\bigg). \end{eqnarray} Moving to the IP with respect to $H_0=\frac{\omega_0+2\left(\Omega_1+\Omega_2\cos\left(\Omega_3 t\right)\right)\cos\left(\Omega_0 t\right)}{2} \sigma_z$ we obtain that \begin{eqnarray} H_I = \frac{\Omega_0}{2}\sigma_x -\left(\Omega_1+\Omega_2\cos\left(\Omega_3 t\right)\right)\cos\left(\Omega_0 t\right)\sigma_z. \end{eqnarray} Hence, robustness to amplitude fluctuations in $\Omega_0$ is achieved. Moreover, due to the utilization of phase modulation, increasing either $\Omega_1$, $\Omega_2$, or $\Omega_3$ is not associated with an increased power consumption.
\section{Power consumption} In this section we consider the difference in power consumption of the proposed schemes in comparison to the common Hartmann-Hahn method. For a given driving field, the magnitude of the magnetic field is proportional to the Rabi frequency, $B(t) \propto \Omega(t)$, and since the magnitude of the electric field is proportional to the magnitude of the magnetic field we have that the power density $P(t)=\frac{1}{\mu_0}\left|\mathbf{E}\times\mathbf{B}\right|$, where $\mu_0$ is the vacuum permeability, satisfies $P(t) \propto \Omega^2(t)$. Because we are only interested in the ratio between the power consumption of the Hartmann-Hahn method, $P_{HH}(t)$, and the power consumption of a proposed method $m$, $P_{m}(t)$, we have that $\frac{P_{HH}(t)}{P_{m}(t)}=\frac{\Omega_{HH}^2(t)}{\Omega_{m}^2(t)}$, and hence, it is not necessary to calculate the exact power density.
We consider two figures of merit for the comparison of power consumption. The first is the peak power of a drive (the maximal instantaneous power value), which we denote by $P^{peak}$, and the second is the total cycle power that is required for a complete energy transfer (flip-flop) between the NV spin and the nucleus, which is denoted by $P^{cycle}=\int_{0}^{T}P(t)\,dt$, where $T$ is the cycle time.
The Rabi frequency of an on-resonance Hartmann-Hahn drive is given by $\Omega_{HH}=\Omega \cos(\omega_0 t)$, where, $\Omega = \omega_l$. Denoting the NV-nucleus coupling rate by $g$, the Hartmann-Hahn cycle time is given by $T_{HH}=\frac{2\pi}{g}$. For the Hartmann-Hahn drive we therefore have that \begin{equation} P_{HH}^{peak} \propto \Omega^2,\qquad P_{HH}^{cycle} \propto \frac{1}{2}\Omega^2 T_{HH}, \end{equation} where for $P_{HH}^{cycle}$ the above expression is approximately correct in the limit of $\omega_0\gg\Omega$. \subsection{Phase modulation} The Rabi frequency of a phase modulated driving field is given by $ \Omega_1 \cos \left( \omega_0 t + 2 \frac{\Omega_2}{\Omega_1} \sin(\Omega_1 t) \right)$. In the case of phase modulation the effective NV-nucleus coupling rate is reduced by a factor of $2$ (see Eq. (\ref{eq7})) and hence the cycle time is increased by a factor of $2$, $T_{PM} = \frac{4\pi}{g}$, so we have that \begin{equation} P_{PM}^{peak} \propto \Omega_1^2,\qquad P_{PM}^{cycle} \propto \frac{1}{2}\Omega_1^2 T_{PM}. \end{equation} This results in \begin{equation} \frac{P_{HH}^{peak}}{P_{PM}^{peak}} = \frac{\Omega^2}{\Omega_1^2},\qquad \frac{P_{HH}^{cycle}}{P_{PM}^{cycle}} \approx \frac{\Omega^2 T_{HH}}{\Omega_1^2 T_{PM}}. \end{equation} The phase modulation scheme is relevant for the small frequency mismatch regime, where good polarization rates can be achieved for $\Omega_2=\Omega_1$. In this case $\Omega=\omega_l=2 \Omega_1$ and hence \begin{equation} \frac{P_{HH}^{peak}}{P_{PM}^{peak}} = \frac{\Omega^2}{\Omega_1^2}=4,\qquad \frac{P_{HH}^{cycle}}{P_{PM}^{cycle}} \approx \frac{\Omega^2 T_{HH}}{\Omega_1^2 T_{PM}}=2. \end{equation}
\subsection{Detuned driving field} The Rabi frequency of a detuned driving field is given by $ \Omega_1 \cos \left(\left( \omega_0-\delta\right) t \right)$. Recall that the effective NV-nucleus coupling rate is reduced by a factor of $\sin \theta \approx \frac{\Omega_1}{\delta}$ from $g$ to $\approx \frac{\Omega_1}{\delta} g$ so the cycle time is increased to $T_{det} = \frac{2 \delta\pi}{\Omega_1 g}$. For a detuned driving field \begin{equation} P_{det}^{peak} \propto \Omega_1^2,\qquad P_{det}^{cycle} \propto \frac{1}{2}\Omega_1^2 T_{det}. \end{equation} Assuming, for example, that $\delta = 10 \Omega_1$ so $\omega_l = \sqrt{101}\Omega_1$, which corresponds to the large frequency mismatch, we have that \begin{equation} \frac{P_{HH}^{peak}}{P_{det}^{peak}} = \frac{\Omega^2}{\Omega_1^2}=101,\qquad \frac{P_{HH}^{cycle}}{P_{det}^{cycle}} \approx \frac{\Omega^2 T_{HH}}{\Omega_1^2 T_{det}}=10.1. \end{equation}
\subsection{Amplitude modulation} The Rabi frequency of an amplitude modulated driving field is given by $ \Omega_0 + \Omega_1 \cos \left(\Omega_2 t\right)$. Recall that the effective NV-nucleus coupling rate is reduced by a factor of $J_1(\frac{\Omega_1}{\Omega_2}) \approx \frac{\Omega_1}{2 \Omega_2}$ from $g$ to $\approx \frac{\Omega_1}{2 \Omega_2} g$, so the cycle time is increased to $T_{AM} = \frac{4 \Omega_2\pi}{\Omega_1 g}$. For an amplitude modulated driving field \begin{equation} P_{AM}^{peak} \propto \left(\Omega_0+\Omega_1\right)^2,\qquad P_{AM}^{cycle} \propto \frac{1}{2}\left(\Omega_0^2+\frac{1}{2}\Omega_1^2\right) T_{AM}. \end{equation} Assuming, for example, that $\Omega_2 = 9 \Omega_0$ so $\omega_l = 10 \Omega_0$, which corresponds to the large frequency mismatch, we have that \begin{equation} \frac{P_{HH}^{peak}}{P_{AM}^{peak}} = \frac{\Omega^2}{ \left(\Omega_0+\Omega_1\right)^2}=25,\qquad \frac{P_{HH}^{cycle}}{P_{AM}^{cycle}} \approx \frac{\Omega^2 T_{HH}}{\left(\Omega_0^2+\frac{1}{2}\Omega_1^2\right) T_{AM}}=3.7. \end{equation}
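The quoted ratios follow from elementary arithmetic; the short check below assumes $\Omega_1=\Omega_0$ in the amplitude-modulation example, which is what the quoted peak ratio of $25$ implies but is not stated explicitly above.
\begin{verbatim}
# Cross-check of the quoted power ratios (peak, cycle) for the three schemes.
import numpy as np

# phase modulation: Omega_2 = Omega_1, omega_l = 2*Omega_1, T_PM = 2*T_HH
O1, wl = 1.0, 2.0
print(wl**2 / O1**2, wl**2 / (O1**2 * 2.0))                   # 4.0, 2.0

# detuned drive: delta = 10*Omega_1, omega_l = sqrt(101)*Omega_1, T_det = 10*T_HH
wl = np.sqrt(101.0)
print(wl**2 / O1**2, wl**2 / (O1**2 * 10.0))                  # 101.0, 10.1

# amplitude modulation: Omega_2 = 9*Omega_0, omega_l = 10*Omega_0,
# assumed Omega_1 = Omega_0, T_AM = 4*pi*Omega_2/(Omega_1*g) = 18*T_HH
O0, wl, T_ratio = 1.0, 10.0, 18.0
print(wl**2 / (O0 + O0)**2, wl**2 / ((O0**2 + 0.5 * O0**2) * T_ratio))  # 25.0, ~3.7
\end{verbatim}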
\section{Quantum sensing} In this section we show how the $\sigma_-,\sigma_+$ operators can be transformed into $\sigma_x$ (or $\sigma_y$) operators. For the case of the low frequency mismatch this can be achieved by adding an extra drive on the NV in the $y$ or $z$ direction in Eq. \ref{eq7} which rotates at $\Omega_2$ (this amounts to $\Omega_s \cos(\omega_0 t)\cos(\Omega_2 t) \sigma_x$). Specifically, consider the following Hamiltonian, \begin{eqnarray} H &=& \frac{\omega_0}{2} \sigma_z + \frac{\omega_l}{2} I_z + g \sigma_z I_x \nonumber \\
&+& \Omega_1 \sigma_x \cos \left( \omega_0 t \right) - \Omega_2 \sigma_z \cos \left( \Omega_1 t \right) \nonumber \\
&+& \Omega_s \cos \left( \omega_0 t \right)\cos \left( \Omega_2 t \right) \sigma_x.
\label{eq_sens1} \end{eqnarray} We proceed in a similar manner as in the previous sections. Moving to the IP with respect to $H_0=\frac{\omega_0}{2} \sigma_z $ and to the basis of the dressed states we have that \begin{eqnarray} H_I &=& \frac{\Omega_1}{2} \sigma_z + \frac{\omega_l}{2} I_z - g \sigma_x I_x \nonumber \\
&+& \Omega_2 \sigma_x \cos \left( \Omega_1 t \right) + \frac{\Omega_s}{2} \cos \left( \Omega_2 t \right) \sigma_z. \end{eqnarray} Moving to the second IP with respect to $H_0=\frac{\Omega_1}{2} \sigma_z $ and to the basis of the double-dressed states results in \begin{eqnarray} H_{II} &=& \frac{\Omega_2}{2} \sigma_z + \frac{\omega_l}{2} I_z - g \left(\cos\left(\Omega_1 t\right) \sigma_z - \sin\left(\Omega_1 t \right) \sigma_y\right) I_x \nonumber \\
&-& \frac{\Omega_s}{2} \cos \left( \Omega_2 t \right) \sigma_x. \end{eqnarray} Moving to the third IP with respect to $H_0=\frac{\Omega_2}{2} \sigma_z $ and to the basis of the triple-dressed states results in \begin{eqnarray} H_{III} &\approx & \frac{\Omega_s}{2} \sigma_z + \frac{\omega_l}{2} I_z \nonumber \\ &+& g \sin\left(\Omega_1 t\right) \left(\sin\left(\Omega_2 t\right) \sigma_z + \cos\left(\Omega_2 t \right) \sigma_y\right) I_x, \end{eqnarray} where we have omitted fast rotating terms. Thus, in the fourth IP, with respect to $H_0=\frac{\Omega_s}{2} \sigma_z + \frac{\omega_l}{2} I_z$ we get a Hamiltonian which can be used for sensing the nucleus frequency, i.e \begin{equation} H_{IV} \approx \frac{g}{4} \sigma_z \left( I_x \cos(\delta t) - I_y \sin(\delta t) \right), \label{eq_sens5} \end{equation} where $\delta = \Omega_1 +\Omega_2 -\omega_l.$ As the extra term acts as a spin locking term at $\Omega_s,$ the robustness of the methods is not decreased.
The same can be done in the large frequency mismatch regime. To this end, the interaction should be changed from the flip-flop interaction $g \left( \sigma_+ I_- + \sigma_- I_+ \right)$ to $g \sigma_x I_x$ by adding, for example, a $\sigma_x$ drive to the modulation. In this case the Hamiltonian is transformed to (the derivation is similar to the derivation in the small frequency mismatch, Eqs. \ref{eq_sens1}--\ref{eq_sens5}) \begin{equation} H \approx g J_1\left(\frac{\Omega_1}{\Omega_2} \right) \sigma_x \left(I_x \cos(\delta t) - I_y \sin(\delta t) \right). \label{sens1} \end{equation}
\end{document}
\begin{document}
\title[Square functions associated with operators]
{Weak and strong types estimates for square functions associated with operators}
\author{Mingming Cao} \address{Mingming Cao\\ Instituto de Ciencias Matem\'aticas CSIC-UAM-UC3M-UCM\\ Con\-se\-jo Superior de Investigaciones Cient{\'\i}ficas\\ C/ Nicol\'as Cabrera, 13-15\\ E-28049 Ma\-drid, Spain} \email{[email protected]}
\author{Zengyan Si} \address{Zengyan Si\\ School of Mathematics and Information Science\\ Henan Polytechnic University\\ Jiaozuo 454000\\ People's Republic of China} \email{[email protected]}
\author{Juan Zhang} \address{Juan Zhang\\ School of Science\\ Beijing Forestry University\\ Beijing, 100083 \\ People's Republic of China}\email{[email protected]}
\thanks{The first author acknowledges financial support from the Spanish Ministry of Science and Innovation, through the ``Severo Ochoa Programme for Centres of Excellence in R\&D'' (SEV-2015-0554) and from the Spanish National Research Council, through the ``Ayuda extraordinaria a Centros de Excelencia Severo Ochoa'' (20205CEX001). The second author was sponsored by the Natural Science Foundation of Henan (No.202300410184), the Key Research Project for Higher Education in Henan Province (No.19A110017) and the Fundamental Research Funds for the Universities of Henan Province (No.NSFRF200329). The third author was supported by the Fundamental Research Funds for the Central Universities (No.BLX201926). }
\subjclass[2010]{42B20, 42B25}
\keywords{Square functions, Bump conjectures, Mixed weak type estimates, Local decay estimates}
\date{November 19, 2020}
\begin{abstract} Let $L$ be a linear operator in $L^2(\mathbb{R}^n)$ which generates a semigroup $e^{-tL}$ whose kernels $p_t(x,y)$ satisfy the Gaussian upper bound. In this paper, we investigate several kinds of weighted norm inequalities for the conical square function $S_{\alpha,L}$ associated with an abstract operator $L$. We first establish two-weight inequalities including bump estimates, and Fefferman-Stein inequalities with arbitrary weights. We also present the local decay estimates using the extrapolation techniques, and the mixed weak type estimates corresponding to Sawyer's conjecture by means of a Coifman-Fefferman inequality. Beyond that, we consider other weak type estimates including the restricted weak-type $(p, p)$ estimate for $S_{\alpha, L}$ and the endpoint estimate for commutators of $S_{\alpha, L}$. Finally, all the aforementioned conclusions can be applied to a number of square functions associated to $L$. \end{abstract}
\maketitle
\section{Introduction}\label{Introduction}
Given an operator $L$, the conical square function $S_{\alpha,L}$ associated with $L$ is defined by \begin{align}\label{def:SaL}
S_{\alpha,L}(f)(x) :=\bigg(\iint_{\Gamma_{\alpha}(x)}|t^mLe^{-t^mL}f(y)|^2\frac{dydt}{t^{n+1}}\bigg)^{\frac12}, \end{align}
where $\Gamma_{\alpha}(x)=\{(y,t)\in \mathbb{R}^n \times (0,\infty):|x-y|<\alpha t\}$. In particular, if $m=2$ and $L=-\Delta$, $S_{\alpha,L}$ is the classical area integral function. The conical square functions associated with abstract operators have played an important role in harmonic analysis. For example, by means of $S_{\alpha, L}$, Auscher et al. \cite{ADM} introduced the Hardy space $H^1_L$ associated with an operator $L$. Soon after, Duong and Yan \cite{DY} showed that $\operatorname{BMO}_{L^*}$ is the dual space of the Hardy space $H^1_L$, which can be seen as a generalization of Fefferman and Stein's result on the duality between $H^1$ and $\operatorname{BMO}$ spaces. Since then, the theory of function spaces associated with operators has been developed and generalized to many other different settings, see for example \cite{DL, HM, HLMMY, LW}. Recently, Martell and Prisuelos-Arribas \cite{MP-1} studied the weighted norm inequalities for conical square functions. More specifically, they established boundedness and comparability in weighted Lebesgue spaces of different square functions using the heat and Poisson semigroups. Using these square functions, they \cite{MP-2} defined several weighted Hardy spaces $H_L^1(w)$ and showed that they are one and the same in view of the fact that the square functions are comparable in the corresponding weighted spaces. Very recently, Bui and Duong \cite{BD} introduced several types of square functions associated with operators and established the sharp weighted estimates.
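To fix ideas, in the classical case $L=-\Delta$, $m=2$ and $n=1$ the quantity \eqref{def:SaL} can be discretised directly. The rough numerical sketch below (included only as an illustration, with an ad hoc periodic grid, a truncated range of $t$ and plain Riemann sums) evaluates $Q_tf:=t^2Le^{-t^2L}f$ through its Fourier multiplier $t^2\xi^2e^{-t^2\xi^2}$ and approximates the cone integral at a single point.
\begin{verbatim}
# Illustrative discretisation of S_alpha f(x0) for L = -d^2/dx^2, m = 2, n = 1.
import numpy as np

N, Lbox = 1024, 2 * np.pi                      # periodic grid on [0, 2*pi)
x = np.linspace(0.0, Lbox, N, endpoint=False)
xi = 2.0 * np.pi * np.fft.fftfreq(N, d=Lbox / N)

f = np.sign(np.sin(x))                         # a bounded test function
fhat = np.fft.fft(f)

def Q_t(t):
    """t^2 L e^{-t^2 L} f via the Fourier multiplier t^2 xi^2 exp(-t^2 xi^2)."""
    return np.real(np.fft.ifft((t * xi) ** 2 * np.exp(-(t * xi) ** 2) * fhat))

alpha, i0 = 1.0, N // 3                        # evaluate at the grid point x0 = x[i0]
ts = np.geomspace(1e-3, 1.0, 120)              # truncated t-range
dlog = np.log(ts[1] / ts[0])                   # log-spaced steps: dt/t ~ dlog
dist = np.minimum(np.abs(x - x[i0]), Lbox - np.abs(x - x[i0]))   # periodic distance

S2 = 0.0
for t in ts:
    q2 = Q_t(t) ** 2
    S2 += q2[dist < alpha * t].sum() * (Lbox / N) / t * dlog     # dy dt / t^{n+1}
print("discretised S_alpha f(x0):", np.sqrt(S2))
\end{verbatim}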
In this paper, we continue to investigate several kinds of weighted norm inequalities for such operators, including bump estimates, Fefferman-Stein inequalities with arbitrary weights, the local decay estimates, and the mixed weak type estimates corresponding to Sawyer's conjecture. Beyond that, we consider other weak type estimates including the restricted weak-type $(p, p)$ estimates and the endpoint estimate for the corresponding commutators. For more information about the progress of these estimates, see \cite{CXY, F2, OPR, PW, MW, S83} and the references therein.
Suppose that $L$ is an operator which satisfies the following properties: \begin{enumerate} \item[(A1)] $L$ is a closed densely defined operator of type $\omega$ in $L^2(\mathbb{R}^n)$ with $0\leq \omega< \pi/ 2$, and it has a bounded $H_\infty$-functional calculus in $L^2(\mathbb{R}^n)$.
\item[(A2)] The kernel $p_t(x,y)$ of $e^{-tL}$ admits a Gaussian upper bound. That is, there exists $m\geq 1$ and $C,c>0$ so that for all $x,y\in \mathbb{R}^n$ and $t>0,$
$$|p_t(x,y)|\leq \frac{C}{t^{n/ m}}\exp\bigg(-\frac{|x-y|^{m/(m-1)}}{c \, t^{1/(m-1)}}\bigg).$$ \end{enumerate}
Examples of the operator $L$ which satisfies conditions (A1) and (A2) include: the Laplacian $-\Delta$ on $\mathbb{R}^n$, the Laplace operator on an open connected domain with Dirichlet boundary conditions, the homogeneous sub-Laplacian on a homogeneous group, and the Schr\"{o}dinger operator $L=-\Delta+V$ with a nonnegative potential $0\leq V \in L^1_{\operatorname{loc}}(\mathbb{R}^n)$.
The main results of this paper can be stated as follows. We begin with the bump estimates for $S_{\alpha, L}$.
\begin{theorem}\label{thm:Suv} Let $1<p<\infty$, and let $S_{\alpha, L}$ be defined in \eqref{def:SaL} with $\alpha \ge 1$ and $L$ satisfying {\rm (A1)} and {\rm (A2)}. Given Young functions $A$ and $B$, we denote \begin{equation*}
\|(u, v)\|_{A, B, p} := \begin{cases}
\sup\limits_{Q} \|u^{\frac1p}\|_{p, Q} \|v^{-\frac1p}\|_{B, Q}, & \text{if } 1<p \le 2, \\%
\sup\limits_{Q} \|u^{\frac2p}\|_{A, Q}^{\frac12} \|v^{-\frac1p}\|_{B,Q}, & \text{if } 2<p<\infty. \end{cases} \end{equation*}
If the pair $(u, v)$ satisfies $||(u, v)||_{A, B, p}<\infty$ with $\bar{A} \in B_{(p/2)'}$ and $\bar{B} \in B_p$, then \begin{align}\label{eq:SLp}
\|S_{\alpha,L}(f)\|_{L^p(u)} &\lesssim \alpha^{n} \mathscr{N}_p \|f\|_{L^p(v)}, \end{align} where \begin{equation*} \mathscr{N}_p := \begin{cases}
||(u, v)||_{A,B,p} [\bar{B}]_{B_p}^{\frac1p}, & \text{if } 1<p \le 2, \\%
||(u, v)||_{A,B,p} [\bar{A}]_{B_{(p/2)'}}^{\frac12-\frac1p} [\bar{B}]_{B_p}^{\frac1p}, & \text{if } 2<p<\infty. \end{cases} \end{equation*} \end{theorem}
\begin{theorem}\label{thm:Sweak} Let $1<p<\infty$, and let $S_{\alpha, L}$ be defined in \eqref{def:SaL} with $\alpha \ge 1$ and $L$ satisfying {\rm (A1)} and {\rm (A2)}. Let $A$ be a Young function. If the pair $(u, v)$ satisfies $[u, v]_{A,p'}<\infty$ with $\bar{A} \in B_{p'}$, then \begin{align}\label{eq:S-weak}
\|S_{\alpha,L}(f)\|_{L^{p,\infty}(u)} \lesssim [u, v]_{A,p'} [\bar{A}]_{B_{p'}}^{\frac{1}{p'}} \|f\|_{L^p(v)}. \end{align} \end{theorem}
We next present the Fefferman-Stein inequalities with arbitrary weights.
\begin{theorem}\label{thm:FS} Let $1<p<\infty$, and let $S_{\alpha, L}$ be defined in \eqref{def:SaL} with $\alpha \ge 1$ and $L$ satisfying {\rm (A1)} and {\rm (A2)}. Then for every weight $w$, \begin{align}
\label{eq:SMw-1} \|S_{\alpha,L}(f)\|_{L^p(w)} &\lesssim \alpha^n \|f\|_{L^p(Mw)}, \quad 1<p \le 2, \\%
\label{eq:SMw-2} \|S_{\alpha,L}(f)\|_{L^p(w)} &\lesssim \alpha^n \|f (Mw/w)^{\frac12}\|_{L^p(w)}, \quad 2<p<\infty, \end{align} where the implicit constants are independent of $w$ and $f$. \end{theorem}
We turn to some weak type estimates for $S_{\alpha, L}$.
\begin{theorem}\label{thm:local} Let $S_{\alpha, L}$ be defined in \eqref{def:SaL} with $\alpha \ge 1$ and $L$ satisfying {\rm (A1)} and {\rm (A2)}. Let $B \subset \mathbb{R}^n$ be a ball and let $f \in L^{\infty}_c(\mathbb{R}^n)$ with $\operatorname{supp} (f) \subset B$. Then there exist constants $c_1>0$ and $c_2>0$ such that \begin{align}\label{eq:local}
| \big\{x \in B: S_{\alpha,L}(f)(x) > t M(f)(x) \big\}| \leq c_1 e^{- c_2 t^2} |B|, \quad \forall t>0. \end{align} \end{theorem}
\begin{theorem}\label{thm:mixed} Let $S_{\alpha, L}$ be defined in \eqref{def:SaL} with $\alpha \ge 1$ and $L$ satisfying {\rm (A1)} and {\rm (A2)}. If $u$ and $v$ satisfy \begin{align*} (1) \quad u \in A_{1} \text{ and } uv \in A_{\infty},\ \ \text{ or }\quad (2)\quad u \in A_1 \text{ and } v \in A_{\infty}, \end{align*} then we have \begin{align}
\bigg\|\frac{S_{\alpha,L}(f)}{v}\bigg\|_{L^{1,\infty}(uv)} \lesssim \| f \|_{L^1(u)}. \end{align} In particular, $S_{\alpha,L}$ is bounded from $L^1(u)$ to $L^{1, \infty}(u)$ for every $u \in A_{1}$. \end{theorem}
Given $1 \le p<\infty$, $A_p^{\mathcal{R}}$ denotes the class of weights $w$ such that \begin{align*}
[w]_{A_p^{\mathcal{R}}} := \sup_{E \subset Q} \frac{|E|}{|Q|} \bigg(\frac{w(Q)}{w(E)}\bigg)^{\frac1p}<\infty, \end{align*} where the supremum is taken over all cubes $Q$ and all measurable sets $E \subset Q$. This $A_p^{\mathcal{R}}$ class was introduced in \cite{KT} to characterize the restricted weak-type $(p, p)$ of the Hardy-Littlewood maximal operator $M$ as follows: \begin{align}\label{eq:ME}
\|M\mathbf{1}_E\|_{L^{p,\infty}(w)} \lesssim [w]_{A_p^{\mathcal{R}}} w(E)^{\frac1p}. \end{align} We should mention that $A_p \subsetneq A_p^{\mathcal{R}}$ for any $1<p<\infty$.
\begin{theorem}\label{thm:RW} Let $S_{\alpha, L}$ be defined in \eqref{def:SaL} with $\alpha \ge 1$ and $L$ satisfying {\rm (A1)} and {\rm (A2)}. Then for every $2<p<\infty$, for every $w \in A_p^{\mathcal{R}}$, and for every measurable set $E \subset \mathbb{R}^n$, \begin{align}\label{eq:SLE}
\|S_{\alpha, L}(\mathbf{1}_E)\|_{L^{p,\infty}(w)} \lesssim [w]_{A_p^{\mathcal{R}}}^{1+\frac{p}{2}} w(E)^{\frac1p}, \end{align} where the implicit constants are independent of $w$ and $E$. \end{theorem}
Finally, we obtain the endpoint estimate for commutators of $S_{\alpha, L}$ as follows. Given an operator $T$ and a measurable function $b$, we define, whenever it makes sense, the commutator by \begin{align*} C_{b}(T)(f)(x) := T((b(x)-b(\cdot))f(\cdot))(x). \end{align*}
\begin{theorem}\label{thm:SbA1} Let $S_{\alpha, L}$ be defined in \eqref{def:SaL} with $\alpha \ge 1$ and $L$ satisfying {\rm (A1)} and {\rm (A2)}. Then for every $w \in A_1$, \begin{equation}
w(\{x\in \mathbb{R}^n: C_b(S_{\alpha,L})f(x)>t\}) \lesssim \int_{\mathbb{R}^n} \Phi \Big(\frac{|f(x)|}{t}\Big) w(x) dx, \quad\forall t>0, \end{equation} where $\Phi(t)=t(1+\log^{+}t)$. \end{theorem}
\section{Applications}\label{sec:app} The goal of this section is to give some applications of Theorems \ref{thm:Suv}--\ref{thm:SbA1}. To this end, we introduce some new operators. Associated with $L$ introduced in Section \ref{Introduction}, we can also define the square functions $g_{L}$ and $g^*_{\lambda,L}$ ($\lambda>0$) as follows: \begin{align*}
g_{L}(f)(x) &:=\bigg(\int_0^\infty |t^m Le^{-t^m L}f(x)|^2 \frac{dt}{t}\bigg)^{\frac12}, \\%
g^*_{\lambda,L}(f)(x) &:=\bigg(\int_0^\infty\int_{\mathbb{R}^n}\bigg( \frac{t}{t+|x-y|}\bigg)^{n\lambda} |t^mLe^{-t^mL}f(y)|^2\frac{dydt}{t^{n+1}}\bigg)^{\frac12}. \end{align*} If $L$ satisfies (A1) and (A2), we have the following estimates (cf. \cite[p. 891]{BD}): \begin{align} \label{eq:gL-1} g_{L}(f)(x) &\lesssim g^*_{\lambda, L}(f)(x), \quad x \in \mathbb{R}^n, \\ \label{eq:gL-2} g^*_{\lambda,L}(f)(x) &\lesssim \sum_{k=0}^\infty 2^{-k\lambda n/ 2}S_{2^k, L}f(x),\quad x \in \mathbb{R}^n, \end{align} whenever $\lambda>2$. By \eqref{eq:gL-1}, \eqref{eq:gL-2} and Theorems \ref{thm:Suv}--\ref{thm:SbA1}, we conclude the following:
\begin{theorem}\label{thm:app-1} Let $L$ satisfy {\rm (A1)} and {\rm (A2)}. Then Theorems \ref{thm:Suv}--\ref{thm:SbA1} are also true for $g_L$ and $g^*_{\lambda,L}$, whenever $\lambda>2$. \end{theorem}
Next, we introduce a class of square functions associated to $L$ and $D$, where $D$ is an operator which plays the role of the directional derivative or gradient operator. Assume that $m$ is a positive even integer. Let $D$ be a densely defined linear operator on $L^2(\mathbb{R}^n)$ which possesses the following properties: \begin{enumerate} \item[(D1)] $D^{m/ 2}L^{-1/ 2}$ is bounded on $L^2(\mathbb{R}^n)$;
\item[(D2)]there exist $c_1, c_2 > 0$ such that
$$|D^{m/ 2}p_t(x,y)|\leq \frac{c_1}{\sqrt{t}|B(x,t^{1/ m})|}\exp\bigg(-\frac{|x-y|^{m/(m-1)}}{c_2 \, t^{1/(m-1)}}\bigg).$$ \end{enumerate} Given $\alpha \ge 1$ and $\lambda>2$, we define the following square functions associated to $L$ and $D$: \begin{align*}
g_{D, L}(f)(x) &:=\bigg(\int_0^\infty |t^{\frac{m}{2}} D^{\frac{m}{2}}e^{-t^m L}f(x)|^2 \frac{dt}{t}\bigg)^{\frac12}, \\%
S_{\alpha,D, L}(f)(x) &:=\bigg(\iint_{\Gamma_{\alpha}(x)} |t^{\frac{m}{2}} D^{\frac{m}{2}}e^{-t^mL}f(y)|^2\frac{dydt}{t^{n+1}}\bigg)^{\frac12}, \\%
g^*_{\lambda,D,L}(f)(x) &:=\bigg(\int_0^\infty\int_{\mathbb{R}^n}\bigg( \frac{t}{t+|x-y|}\bigg)^{n\lambda} |t^{\frac{m}{2}} D^{\frac{m}{2}}e^{-t^mL}f(y)|^2\frac{dydt}{t^{n+1}}\bigg)^{\frac12}. \end{align*} It was proved in \cite[p.~895]{BD} that \begin{align}\label{eq:gDL-1} g_{D, L}(f)(x) \lesssim g^*_{\lambda, L}(f)(x), \quad x \in \mathbb{R}^n \text{ and } \lambda>2. \end{align} On the other hand, we note that $S_{\alpha,D, L}$ has the same properties as $S_{\alpha,L}$ and \begin{align}\label{eq:gDL-2} g^*_{\lambda,D,L}(f)(x) &\lesssim \sum_{k=0}^\infty 2^{-k\lambda n/ 2}S_{2^k, D, L}(f)(x),\quad x \in \mathbb{R}^n \text{ and } \lambda>2. \end{align} Then, \eqref{eq:gDL-1}, \eqref{eq:gDL-2} and Theorems \ref{thm:Suv}--\ref{thm:SbA1} give the following.
\begin{theorem}\label{thm:app-2} Let $L$ satisfy {\rm (A1)} and {\rm (A2)} and $D$ satisfy {\rm (D1)} and {\rm (D2)}. Let $\alpha\geq 1$ and $\lambda>2$. Then Theorems \ref{thm:Suv}--\ref{thm:SbA1} also hold for $g_{D,L}$, $S_{\alpha,D, L}$ and $g^*_{\lambda,D,L}$. \end{theorem}
Finally, we define a class of more general square functions. Assume that $L$ is a nonnegative self-adjoint operator in $L^2(\mathbb{R}^n)$ and satisfies (A2). Denote by $E_L (\lambda)$ the spectral decomposition of $L$. Then by spectral theory, for any bounded Borel function $F : [0,\infty)\rightarrow \mathbb{C}$ we can define \[ F(L)=\int_0^\infty F(\lambda)dE_L(\lambda) \] as a bounded operator on $L^2(\mathbb{R}^n)$.
Let $\psi$ be an even real-valued function in the Schwartz space $\mathcal{S}(\mathbb{R})$ such that $\int_0^\infty \psi^2(s)\frac{ds}{s}<\infty$. Given $\alpha \ge 1$ and $\lambda>2$, we now consider the following square functions: \begin{align*}
g_{\psi, L}(f)(x) &:=\bigg(\int_0^\infty |\psi(t^\frac{m}{2}\sqrt{L})f(x)|^2 \frac{dt}{t}\bigg)^{\frac12}, \\%
S_{\alpha,\psi, L}(f)(x) &:=\bigg(\iint_{\Gamma_{\alpha}(x)} |\psi(t^\frac{m}{2}\sqrt{L})f(y)|^2\frac{dydt}{t^{n+1}}\bigg)^{\frac12}, \\%
g^*_{\lambda,\psi,L}(f)(x) &:=\bigg(\int_0^\infty\int_{\mathbb{R}^n} \bigg( \frac{t}{t+|x-y|}\bigg)^{n\lambda} |\psi(t^\frac{m}{2}\sqrt{L})f(y)|^2\frac{dydt}{t^{n+1}}\bigg)^{\frac12}. \end{align*} Observe that for any $N>0$, \begin{align}\label{eq:psiL}
|\psi(t^{m/2} \sqrt{L})(x, y)| \le C_N \frac{1}{t^n} \bigg(1+\frac{|x-y|}{t}\bigg)^{-N},\quad t>0, \, x, y \in \mathbb{R}^n. \end{align} Using \eqref{eq:psiL} and the argument for $S_{\alpha,L}$, we obtain that the estimates in Section \ref{Introduction} hold for $S_{\alpha,\psi, L}$. Additionally, for any $\lambda>2$, \begin{align} \label{eq:gpsiL-1} g_{\psi, L}f(x) &\lesssim g^*_{\lambda, \varphi, L}(f)(x) + g^*_{\lambda,\psi, L}f(x), \quad x \in \mathbb{R}^n, \\ \label{eq:gpsiL-2} g^*_{\lambda,\psi,L}(f)(x) &\lesssim \sum_{k=0}^\infty 2^{-k\lambda n/ 2}S_{2^k,\psi, L}(f)(x),\quad x \in \mathbb{R}^n, \end{align} where $\varphi \in \mathcal{S}(\mathbb{R})$ is a fixed function supported in $[2^{-m/2}, 2^{m/2}]$. The proof of \eqref{eq:gpsiL-1} is given in \cite{BD}, while the proof of \eqref{eq:gpsiL-2} is as before. Together with Theorems \ref{thm:Suv}--\ref{thm:SbA1}, these estimates imply the following conclusions.
\begin{theorem}\label{thm:app-3} Let $L$ be a nonnegative self-adjoint operator in $L^2(\mathbb{R}^n)$ and satisfy {\rm (A2)}. Let $\alpha\geq 1$ and $\lambda>2$. Then Theorems \ref{thm:Suv}--\ref{thm:SbA1} are true for $g_{\psi, L}$, $S_{\alpha,\psi, L}$ and $g^*_{\lambda,\psi,L}$. \end{theorem}
\section{Preliminaries}\label{sec:pre}
\subsection{Muckenhoupt class} By a weight $w$, we mean that $w$ is a nonnegative locally integrable function on $\mathbb{R}^n$. The weight $w$ is said to belong to the Muckenhoupt class $A_p$, $1 < p<\infty$, if \[ [w]_{A_p} :=\sup_Q \bigg(\fint_{Q}w\, dx \bigg) \bigg(\fint_{Q} w^{-\frac{1}{p-1}}dx\bigg)^{p-1}<\infty, \] where the supremum is taken over all cubes in $\mathbb{R}^n$.
\subsection{Dyadic cubes} Denote by $\ell(Q)$ the sidelength of the cube $Q$. Given a cube $Q_0 \subset \mathbb{R}^n$, let $\mathcal{D}(Q_0)$ denote the set of all dyadic cubes with respect to $Q_0$, that is, the cubes obtained by repeated subdivision of $Q_0$ and each of its descendants into $2^n$ congruent subcubes.
\begin{definition} A collection $\mathcal{D}$ of cubes is said to be a dyadic grid if it satisfies \begin{enumerate} \item [(1)] For any $Q \in \mathcal{D}$, $\ell(Q) = 2^k$ for some $k \in \mathbb{Z}$. \item [(2)] For any $Q,Q' \in \mathcal{D}$, $Q \cap Q' \in \{Q,Q',\emptyset\}$. \item [(3)] The family $\mathcal{D}_k=\{Q \in \mathcal{D}; \ell(Q)=2^k\}$ forms a partition of $\mathbb{R}^n$ for any $k \in \mathbb{Z}$. \end{enumerate} \end{definition}
\begin{definition}
A subset $\mathcal{S}$ of a dyadic grid is said to be $\eta$-sparse, $0<\eta<1$, if for every $Q \in \mathcal{S}$, there exists a measurable set $E_Q \subset Q$ such that $|E_Q| \geq \eta |Q|$, and the sets $\{E_Q\}_{Q \in \mathcal{S}}$ are pairwise disjoint. \end{definition}
By a median value of a measurable function $f$ on a cube $Q$ we mean a possibly non-unique, real number $m_f (Q)$ such that \[
\max \big\{|\{x \in Q : f(x) > m_f(Q) \}|,
|\{x \in Q : f(x) < m_f(Q) \}| \big\} \leq |Q|/2. \] The decreasing rearrangement of a measurable function $f$ on $\mathbb{R}^n$ is defined by \[
f^*(t) = \inf \{ \alpha > 0 : |\{x \in \mathbb{R}^n : |f(x)| > \alpha \}| < t \}, \quad 0 < t < \infty. \] The local mean oscillation of $f$ is \[ \omega_{\lambda}(f; Q)
= \inf_{c \in \mathbb{R}} \big( (f-c) \mathbf{1}_{Q} \big)^* (\lambda |Q|), \quad 0 < \lambda < 1. \] Given a cube $Q_0$, the local sharp maximal function is defined by \[ M_{\lambda; Q_0}^{\sharp} f (x) = \sup_{x \in Q \subset Q_0} \omega_{\lambda}(f; Q). \]
Observe that for any $\delta > 0$ and $0 < \lambda < 1$ \begin{equation}\label{eq:mfQ}
|m_f(Q)| \leq (f \mathbf{1}_Q)^* (|Q|/2) \ \ \text{and} \ \
(f \mathbf{1}_Q)^* (\lambda |Q|) \leq
\left( \frac{1}{\lambda} \fint_{Q} |f|^{\delta} dx \right)^{1/{\delta}}. \end{equation} The following result was proved by Hyt\"{o}nen \cite[Theorem~2.3]{Hy2} in order to improve Lerner's formula given in \cite{Ler11} by getting rid of the local sharp maximal function.
\begin{lemma}\label{lem:mf} Let $f$ be a measurable function on $\mathbb{R}^n$ and let $Q_0$ be a fixed cube. Then there exists a (possibly empty) sparse family $\mathcal{S}(Q_0) \subset \mathcal{D}(Q_0)$ such that \begin{equation}\label{eq:mf}
|f (x) - m_f (Q_0)| \leq 2 \sum_{Q \in \mathcal{S}(Q_0)} \omega_{2^{-n-2}}(f; Q) \mathbf{1}_Q (x), \quad a. e. ~ x \in Q_0. \end{equation} \end{lemma}
\subsection{Orlicz maximal operators} A function $\Phi:[0,\infty) \to [0,\infty)$ is called a Young function if it is continuous, convex, strictly increasing, and satisfies \begin{equation*} \lim_{t\to 0^{+}}\frac{\Phi(t)}{t}=0 \quad\text{and}\quad \lim_{t\to\infty}\frac{\Phi(t)}{t}=\infty. \end{equation*} Given $p \in[1, \infty)$, we say that a Young function $\Phi$ is a $p$-Young function, if $\Psi(t)=\Phi(t^{1/p})$ is a Young function.
If $A$ and $B$ are Young functions, we write $A(t) \simeq B(t)$ if there are constants $c_1, c_2>0$ such that $c_1 A(t) \leq B(t) \leq c_2 A(t)$ for all $t \geq t_0>0$. Also, we denote $A(t) \preceq B(t)$ if there exists $c>0$ such that $A(t) \leq B(ct)$ for all $t \geq t_0>0$. Note that for all Young functions $\phi$, $t \preceq \phi(t)$. Further, if $A(t)\leq cB(t)$ for some $c>1$, then by convexity, $A(t) \leq B(ct)$.
A function $\Phi$ is said to be doubling, or $\Phi \in \Delta_2$, if there is a constant $C>0$ such that $\Phi(2t) \leq C \Phi(t)$ for any $t>0$. Given a Young function $\Phi$, its complementary function $\bar{\Phi}:[0,\infty) \to [0,\infty)$ is defined by \[ \bar{\Phi}(t):=\sup_{s>0}\{st-\Phi(s)\}, \quad t>0, \] which clearly implies that \begin{align}\label{eq:stst} st \leq \Phi(s) + \bar{\Phi}(t), \quad s, t > 0. \end{align} Moreover, one can check that $\bar{\Phi}$ is also a Young function and \begin{equation}\label{eq:Young-1} t \leq \Phi^{-1}(t) \bar{\Phi}^{-1}(t) \leq 2t, \qquad t>0. \end{equation} In turn, by replacing $t$ by $\Phi(t)$ in first inequality of \eqref{eq:Young-1}, we obtain \begin{equation}\label{eq:Young-2} \bar{\Phi} \Big(\frac{\Phi(t)}{t}\Big) \leq \Phi(t), \qquad t>0. \end{equation}
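To illustrate these notions with the most classical example: if $\Phi(t)=t^p/p$ with $1<p<\infty$, then a direct computation gives
\begin{align*}
\bar{\Phi}(t)=\sup_{s>0}\big\{st-s^p/p\big\}=\frac{t^{p'}}{p'},
\end{align*}
so that \eqref{eq:stst} reduces to the classical Young inequality $st \le \frac{s^p}{p}+\frac{t^{p'}}{p'}$. Moreover, $\Phi^{-1}(t)=(pt)^{1/p}$ and $\bar{\Phi}^{-1}(t)=(p't)^{1/p'}$, so $\Phi^{-1}(t)\bar{\Phi}^{-1}(t)=p^{1/p}(p')^{1/p'}\, t$ with $1 \le p^{1/p}(p')^{1/p'} \le 2$, in agreement with \eqref{eq:Young-1}.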
Given a Young function $\Phi$, we define the Orlicz space $L^{\Phi}(\Omega, u)$ to be the function space with Luxemburg norm \begin{align}\label{eq:Orlicz}
\|f\|_{L^{\Phi}(\Omega, u)} := \inf\bigg\{\lambda>0:
\int_{\Omega} \Phi \Big(\frac{|f(x)|}{\lambda}\Big) du(x) \leq 1 \bigg\}. \end{align} Now we define the Orlicz maximal operator \begin{align*}
M_{\Phi}f(x) := \sup_{Q \ni x} \|f\|_{\Phi, Q} := \sup_{Q \ni x} \|f\|_{L^{\Phi}(Q, \frac{dx}{|Q|})}, \end{align*} where the supremum is taken over all cubes $Q$ in $\mathbb{R}^n$. When $\Phi(t)=t^p$, $1\leq p<\infty$, \begin{align*}
\|f\|_{\Phi, Q} = \bigg(\fint_{Q} |f(x)|^p dx \bigg)^{\frac1p}=:\|f\|_{p, Q}. \end{align*}
In this case, if $p=1$, $M_{\Phi}$ agrees with the classical Hardy-Littlewood maximal operator $M$; if $p>1$, $M_{\Phi}f=M_pf:=M(|f|^p)^{1/p}$. If $\Phi(t) \preceq \Psi(t)$, then $M_{\Phi}f(x) \leq c M_{\Psi}f(x)$ for all $x \in \mathbb{R}^n$.
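Since $\lambda \mapsto \fint_{Q}\Phi(|f|/\lambda)\,dx$ is decreasing, the Luxemburg norm $\|f\|_{\Phi,Q}$ in \eqref{eq:Orlicz} (taken with respect to the normalised measure $dx/|Q|$) can also be computed numerically by bisection; the routine below is an illustrative sketch on a discretised cube, with an arbitrary test function and an arbitrary Young function with a logarithmic bump.
\begin{verbatim}
# Bisection for the Luxemburg norm  inf{ lam > 0 : avg_Q Phi(|f|/lam) <= 1 }.
import numpy as np

def luxemburg_norm(f_vals, Phi, tol=1e-10):
    mean_Phi = lambda lam: np.mean(Phi(np.abs(f_vals) / lam))
    lo, hi = tol, 1.0
    while mean_Phi(hi) > 1.0:          # find a feasible upper bound first
        hi *= 2.0
    while hi - lo > tol * hi:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if mean_Phi(mid) > 1.0 else (lo, mid)
    return hi

x = np.linspace(0.0, 1.0, 100001)[1:]           # the cube Q = (0, 1]
f_vals = np.log(1.0 / x)                        # unbounded but integrable
Phi = lambda t: t ** 2 * np.log(np.e + t)       # a Young function with a log bump
print(luxemburg_norm(f_vals, Phi), np.mean(f_vals ** 2) ** 0.5)
\end{verbatim}
As expected, the computed Orlicz norm dominates the plain $L^2$ average, since $\Phi(t)\ge t^2$.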
The H\"{o}lder inequality can be generalized to the scale of Orlicz spaces \cite[Lemma~5.2]{CMP11}.
\begin{lemma} Given a Young function $A$, then for all cubes $Q$, \begin{equation}\label{eq:Holder-AA}
\fint_{Q} |fg| dx \leq 2 \|f\|_{A, Q} \|g\|_{\bar{A}, Q}. \end{equation} More generally, if $A$, $B$ and $C$ are Young functions such that $A^{-1}(t) B^{-1}(t) \leq c_1 C^{-1}(t), $ for all $t \geq t_0>0$, then \begin{align}\label{eq:Holder-ABC}
\|fg\|_{C, Q} \leq c_2 \|f\|_{A, Q} \|g\|_{B, Q}. \end{align} \end{lemma}
The following result is an extension of the well-known Coifman-Rochberg theorem. The proof can be found in \cite[Lemma~4.2]{HP}.
\begin{lemma} Let $\Phi$ be a Young function and $w$ be a nonnegative function such that $M_{\Phi}w(x)<\infty$ a.e. Then
\begin{align}
\label{eq:CR-Phi} [(M_{\Phi}w)^{\delta}]_{A_1} &\le c_{n,\delta}, \quad\forall \delta \in (0, 1),
\\% \label{eq:MPhiRH} [(M_{\Phi} w)^{-\lambda}]_{RH_{\infty}} &\le c_{n,\lambda},\quad\forall \lambda>0.
\end{align}
\end{lemma}
Given $p \in (1, \infty)$, a Young function $\Phi$ is said to satisfy the $B_p$ condition (or, $\Phi \in B_p$) if for some $c>0$, \begin{align}\label{def:Bp} \int_{c}^{\infty} \frac{\Phi(t)}{t^p} \frac{dt}{t} < \infty. \end{align} Observe that if \eqref{def:Bp} is finite for some $c>0$, then it is finite for every $c>0$. Let $[\Phi]_{B_p}$ denote the value of the integral in \eqref{def:Bp} with $c=1$. It was shown in \cite[Proposition~5.10]{CMP11} that if $\Phi$ and $\bar{\Phi}$ are doubling Young functions, then $\Phi \in B_p$ if and only if \begin{align*} \int_{c}^{\infty} \bigg(\frac{t^{p'}}{\bar{\Phi}(t)}\bigg)^{p-1} \frac{dt}{t} < \infty. \end{align*}
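For instance (an elementary example), the power function $\Phi(t)=t^q$ with $1<q<p$ belongs to $B_p$, since
\begin{align*}
\int_{1}^{\infty} \frac{t^{q}}{t^{p}} \frac{dt}{t} = \int_{1}^{\infty} t^{q-p-1}\, dt < \infty \quad \Longleftrightarrow \quad q<p,
\end{align*}
in agreement with the boundedness of $M_q=M(|\cdot|^q)^{1/q}$ on $L^p(\mathbb{R}^n)$ stated in Lemma \ref{lem:MBp} below.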
Let us present two types of $B_p$ bumps. An important special case is the ``log-bumps" of the form \begin{align}\label{eq:log} A(t) =t^p \log(e+t)^{p-1+\delta}, \quad B(t) =t^{p'} \log(e+t)^{p'-1+\delta},\quad \delta>0. \end{align} Another interesting example is the ``loglog-bumps" as follows: \begin{align} \label{eq:loglog-1} &A(t)=t^p \log(e+t)^{p-1} \log\log(e^e+t)^{p-1+\delta}, \quad \delta>0\\ \label{eq:loglog-2} &B(t)=t^{p'} \log(e+t)^{p'-1} \log\log(e^e+t)^{p'-1+\delta}, \quad \delta>0. \end{align} Then one can verify that in both cases above, $\bar{A} \in B_{p'}$ and $\bar{B} \in B_p$ for any $1<p<\infty$.
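For the reader's convenience, here is a brief sketch (with implicit constants depending on $p$ and $\delta$) of why the log-bumps in \eqref{eq:log} satisfy $\bar{A} \in B_{p'}$; the remaining claims are verified in the same way. For large $t$ one has $A^{-1}(t) \simeq t^{\frac1p} \log(e+t)^{-\frac{p-1+\delta}{p}}$, so \eqref{eq:Young-1} gives $\bar{A}^{-1}(t) \simeq t^{\frac{1}{p'}} \log(e+t)^{\frac{p-1+\delta}{p}}$ and hence
\begin{align*}
\bar{A}(t) \simeq \frac{t^{p'}}{\log(e+t)^{\frac{(p-1+\delta)p'}{p}}} = \frac{t^{p'}}{\log(e+t)^{1+\frac{\delta}{p-1}}},
\end{align*}
which yields
\begin{align*}
\int_{c}^{\infty} \frac{\bar{A}(t)}{t^{p'}} \frac{dt}{t} \simeq \int_{c}^{\infty} \frac{dt}{t \log(e+t)^{1+\frac{\delta}{p-1}}} < \infty,
\end{align*}
that is, $\bar{A} \in B_{p'}$.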
The $B_p$ condition can also be characterized by the boundedness of the Orlicz maximal operator $M_{\Phi}$. Indeed, the following result was given in \cite[Theorem~5.13]{CMP11} and \cite[eq. (25)]{HP}.
\begin{lemma}\label{lem:MBp}
Let $1<p<\infty$. Then $M_{\Phi}$ is bounded on $L^p(\mathbb{R}^n)$ if and only if $\Phi \in B_p$. Moreover, $\|M_{\Phi}\|_{L^p(\mathbb{R}^n) \to L^p(\mathbb{R}^n)} \le C_{n,p} [\Phi]_{B_p}^{\frac1p}$. In particular, if the Young function $A$ is the same as the first one in \eqref{eq:log} or \eqref{eq:loglog-1}, then \begin{equation}\label{eq:MAnorm}
\|M_{\bar{A}}\|_{L^{p'}(\mathbb{R}^n) \to L^{p'}(\mathbb{R}^n)} \le c_n p^2 \delta^{-\frac{1}{p'}},\quad\forall \delta \in (0, 1]. \end{equation} \end{lemma}
\begin{definition}\label{def:sepbum} Given $p \in (1, \infty)$, let $A$ and $B$ be Young functions such that $\bar{A} \in B_{p'}$ and $\bar{B} \in B_p$. We say that the pair of weights $(u, v)$ satisfies the {\tt double bump condition} with respect to $A$ and $B$ if \begin{align}\label{eq:uvABp}
[u, v]_{A,B,p}:=\sup_{Q} \|u^{\frac1p}\|_{A,Q} \|v^{-\frac1p}\|_{B,Q} < \infty. \end{align} where the supremum is taken over all cubes $Q$ in $\mathbb{R}^n$. Also, $(u, v)$ is said to satisfy the {\tt separated bump condition} if \begin{align}
\label{eq:uvAp} [u, v]_{A,p'} &:= \sup_{Q} \|u^{\frac1p}\|_{A,Q} \|v^{-\frac1p}\|_{p',Q} < \infty, \\%
\label{eq:uvpB} [u, v]_{p,B} &:= \sup_{Q} \|u^{\frac1p}\|_{p,Q} \|v^{-\frac1p}\|_{B,Q} < \infty. \end{align} \end{definition}
Note that if $A(t)=t^p$ in \eqref{eq:uvAp} or $B(t)=t^{p'}$ in \eqref{eq:uvpB}, each of them is actually the two-weight $A_p$ condition and we denote them by $[u, v]_{A_p}:=[u, v]_{p,p'}$. Also, the separated bump condition is weaker than the double bump condition. Indeed, \eqref{eq:uvABp} implies \eqref{eq:uvAp} and \eqref{eq:uvpB}, but the converse implications fail in general. The first fact holds since $\bar{A} \in B_{p'}$ and $\bar{B} \in B_p$ respectively imply that $A$ is a $p$-Young function and $B$ is a $p'$-Young function. The second fact was shown in \cite[Section~7]{ACM} by constructing log-bumps.
\begin{lemma}\label{lem:M-uv} Let $1<p<\infty$, let $A$, $B$ and $\Phi$ be Young functions such that $A \in B_p$ and $A^{-1}(t)B^{-1}(t) \lesssim \Phi^{-1}(t)$ for any $t>t_0>0$. If a pair of weights $(u, v)$ satisfies $[u, v]_{p, B}<\infty$, then \begin{align}\label{eq:MPhi-uv}
\|M_{\Phi}f\|_{L^p(u)} \leq C [u, v]_{p, B} [A]_{B_p}^{\frac1p} \|f\|_{L^p(v)}. \end{align} Moreover, \eqref{eq:MPhi-uv} holds for $\Phi(t)=t$ and $B=\bar{A}$ satisfying the same hypotheses. In this case, $\bar{A} \in B_p$ is necessary. \end{lemma}
The two-weight inequality above was established in \cite[Theorem~5.14]{CMP11} and \cite[Theorem~3.1]{CP99}. The weak type inequality for $M_{\Phi}$ was also obtained in \cite[Proposition~5.16]{CMP11} as follows.
\begin{lemma}\label{lem:Muv-weak} Let $1<p<\infty$, let $B$ and $\Phi$ be Young functions such that $t^{\frac1p} B^{-1}(t) \lesssim \Phi^{-1}(t)$ for any $t>t_0>0$. If a pair of weights $(u, v)$ satisfies $[u, v]_{p, B}<\infty$, then \begin{align}\label{eq:MPuv}
\|M_{\Phi}f\|_{L^{p,\infty}(u)} \leq C \|f\|_{L^p(v)}. \end{align} Moreover, \eqref{eq:MPuv} holds for $M$ if and only if $[u, v]_{A_p}<\infty$. \end{lemma}
\section{Proof of main results}
\subsection{Sparse domination} Let $\Phi$ be a radial Schwartz function such that $\mathbf{1}_{B(0, 1)} \le \Phi \le \mathbf{1}_{B(0, 2)}$. We define \begin{align*} \widetilde{S}_{\alpha,L}(f)(x):=\bigg(\int_{0}^{\infty} \int_{\mathbb{R}^n}
\Phi\Big(\frac{|x-y|}{\alpha t}\Big) |Q_{t,L}f(y)|^2\frac{dydt}{t^{n+1}}\bigg)^{1/2}, \end{align*} where $Q_{t,L}f:=t^m L e^{-t^m L}f$. It is easy to verify that \begin{align}\label{eq:SSS} S_{\alpha,L}(f)(x) \le \widetilde{S}_{\alpha,L}(f)(x) \le S_{2\alpha,L}(f)(x),\quad x \in \mathbb{R}^n. \end{align} Additionally, it was proved in \cite{ADM} that $S_{1,L}$ is bounded from $L^1(\mathbb{R}^n)$ to $L^{1,\infty}(\mathbb{R}^n)$. Then, this and \eqref{eq:SSS} give that \begin{align}\label{eq:S11}
\|\widetilde{S}_{\alpha,L}(f)\|_{L^{1,\infty}(\mathbb{R}^n)} \lesssim \alpha^n \|f\|_{L^1(\mathbb{R}^n)}. \end{align}
Using these facts, we can establish the sparse domination for $S_{\alpha,L}$ as follows.
\begin{lemma}\label{lem:S-sparse} For any $\alpha \geq 1$, we have \begin{align}\label{eq:S-sparse} S_{\alpha,L}(f)(x) &\lesssim \alpha^{n} \sum_{j=1}^{3^n} \mathcal{A}_{\mathcal{S}_j}^2 (f)(x) ,\quad \text{a.e. } x \in \mathbb{R}^n, \end{align} where \[
\mathcal{A}_{\mathcal{S}}^2(f)(x):= \bigg(\sum_{Q \in \mathcal{S}} \langle |f| \rangle_Q^2 \mathbf{1}_Q(x)\bigg)^{\frac12}. \] \end{lemma}
\begin{proof} Fix $Q_0\in \mathcal{D}$. By \eqref{eq:mfQ}, Kolmogorov's inequality and \eqref{eq:S11}, we have \begin{align}\label{eq:mfSL}
|m_{\widetilde{S}_{\alpha,L}(f)^2}(Q_0)|
&\lesssim \bigg(\fint_{Q_0} |\widetilde{S}_{\alpha,L}(f \mathbf{1}_{Q_0})|^{\frac12} \, dx\bigg)^4 \nonumber\\%
&\lesssim \|\widetilde{S}_{\alpha,L}(f \mathbf{1}_{Q_0})\|^2_{L^{1,\infty}(Q_0, \frac{dx}{|Q_0|})}
\lesssim \alpha^{2n} \bigg(\fint_{Q_0} |f| \, dx\bigg)^2. \end{align} From \cite[Proposition~3.2]{BD}, we obtain that for any dyadic cube $Q\subset \mathbb{R}^n$, $\alpha\geq 1$ and $\lambda \in (0, 1)$, \begin{equation}\label{eq:osc-SL} \omega_{\lambda}(\widetilde{S}_{\alpha, L}(f)^2;Q)
\lesssim \alpha^{2n} \sum_{j=0}^\infty 2^{-j\delta} \bigg(\fint_{2^j Q} |f|\, dx\bigg)^2, \end{equation} where $\delta \in (0, 1)$ is some constant. Invoking Lemma \ref{lem:mf}, \eqref{eq:mfSL} and \eqref{eq:osc-SL}, one can pick a sparse family $\mathcal{S}(Q_0) \subset \mathcal{D}(Q_0)$ so that \begin{align} \widetilde{S}_{\alpha,L}(f)(x)^2
& \lesssim |m_{\widetilde{S}_{\alpha,L}(f)^2}(Q_0)| + \sum_{Q \in \mathcal{S}(Q_0)}\omega_{\varepsilon}(\widetilde{S}_{\alpha, L}(f)^2;Q) \mathbf{1}_{Q}(x) \nonumber\\%
&\lesssim \alpha^{2n} \sum_{Q\in \mathcal{S}(Q_0)}\sum_{j=0}^\infty 2^{-j\delta}\langle |f|\rangle_{2^jQ}^2 \mathbf{1}_{Q}(x) \label{eq:SLQj} \\% &=: \alpha^{2n} \sum_{j=0}^\infty 2^{-j\delta} \mathcal{T}^2_{\mathcal{S}(Q_0), j}(f)(x)^2, \quad\text{ a.e. } x\in Q_0. \label{eq:SLTS} \end{align} where \begin{align*} \mathcal{T}^2_{\mathcal{S},j}(f)(x)
&:=\bigg(\sum_{Q\in \mathcal{S}} \langle |f|\rangle_{2^jQ}^2 \mathbf{1}_{Q}(x)\bigg)^{\frac12}. \end{align*} Denote \begin{align*} \mathcal{T}_{\mathcal{S},j}(f, g)(x)
&:=\sum_{Q\in \mathcal{S}} \langle |f|\rangle_{2^jQ} \langle |g|\rangle_{2^jQ} \mathbf{1}_{Q}(x), \\% \mathcal{A}_{\mathcal{S}}(f, g)(x)
&:=\sum_{Q\in \mathcal{S}} \langle |f|\rangle_{Q} \langle |g|\rangle_{Q} \mathbf{1}_{Q}(x). \end{align*} Then, $\mathcal{T}^2_{\mathcal{S},j}(f)(x)=\mathcal{T}_{\mathcal{S},j}(f, f)(x)^{\frac12}$ and $\mathcal{A}_{\mathcal{S}}(f,f)(x)=\mathcal{A}^2_{\mathcal{S}}(f)(x)^2$. On the other hand, the arguments in \cite[Sections~11-13]{LN} show that there exist $3^n$ dyadic grids $\mathcal{D}_j$ and sparse families $\mathcal{S}_j\subset \mathcal{D}_j$, $j=1,\ldots,3^n$, such that \begin{align}\label{eq:TAA} \sum_{j=0}^\infty 2^{-j\delta} \mathcal{T}_{\mathcal{S}(Q_0), j}(f,f)(x) \lesssim \sum_{j=1}^{3^n} \mathcal{A}_{\mathcal{S}_j}(f, f)(x) = \sum_{j=1}^{3^n} \mathcal{A}^2_{\mathcal{S}_j}(f)(x)^2. \end{align} Gathering \eqref{eq:SLTS} and \eqref{eq:TAA}, we deduce that \begin{align*} \widetilde{S}_{\alpha,L}(f)(x)\lesssim \alpha^{n} \sum_{j=1}^{3^n}\mathcal{A}^2_{\mathcal{S}_j}(f)(x), \quad\text{ a.e. } x\in Q_0. \end{align*} Since $\mathbb{R}^n=\bigcup_{Q \in \mathcal{D}} Q$, it follows that \begin{align*} S_{\alpha,L}(f)(x) \le \widetilde{S}_{\alpha,L}(f)(x)\lesssim \alpha^{n} \sum_{j=1}^{3^n}\mathcal{A}^2_{\mathcal{S}_j}(f)(x), \quad\text{ a.e. } x\in \mathbb{R}^n. \end{align*} This completes the proof. \end{proof}
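Before turning to the proofs of the main theorems, we include a toy numerical illustration of the sparse object $\mathcal{A}^{2}_{\mathcal{S}}$ appearing in Lemma \ref{lem:S-sparse}; it plays no role in the arguments, and the sample function, the depth $J$ and all names in the snippet are arbitrary choices made only for illustration. The family $\mathcal{S}=\{[0,2^{-j}):0\le j\le J\}$ used below is $\tfrac12$-sparse, since the right halves $[2^{-j-1},2^{-j})$ are pairwise disjoint.
\begin{verbatim}
import numpy as np

# Toy evaluation of A^2_S(f)(x) = ( sum_{Q in S} <|f|>_Q^2 1_Q(x) )^(1/2)
# for the 1/2-sparse family S = { [0, 2^-j) : 0 <= j <= J } on [0, 1).
def sparse_square_function(f, xs, J, n_quad=4096):
    out = np.zeros_like(xs)
    for j in range(J + 1):
        b = 2.0 ** (-j)                      # the interval Q = [0, 2^-j)
        t = np.linspace(0.0, b, n_quad, endpoint=False)
        avg = np.mean(np.abs(f(t)))          # <|f|>_Q via a Riemann sum
        out += avg ** 2 * (xs < b)           # add <|f|>_Q^2 on Q
    return np.sqrt(out)

f = lambda x: np.sin(2.0 * np.pi * x) + 0.5  # arbitrary sample function
xs = np.linspace(0.0, 1.0, 1000, endpoint=False)
print(sparse_square_function(f, xs, J=8)[:5])
\end{verbatim}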
\subsection{Bump conjectures} In this subsection, we are going to show two-weight inequalities invoking bump conjectures.
\begin{proof}[\textbf{Proof of Theorem \ref{thm:Suv}.}] By Lemma \ref{lem:S-sparse}, the inequality \eqref{eq:SLp} follows from the following \begin{align}\label{eq:ASLp}
\|\mathcal{A}_{\mathcal{S}}^2(f)\|_{L^p(u)} \lesssim \mathscr{N}_p \|f\|_{L^p(v)}, \end{align} for every sparse family $\mathcal{S}$, where the implicit constant does not depend on $\mathcal{S}$.
To prove \eqref{eq:ASLp}, we begin with the case $1<p \le 2$. Using that $p/2 \le 1$ and then H\"{o}lder's inequality \eqref{eq:Holder-AA}, we obtain \begin{align}\label{eq:ASp1}
\|\mathcal{A}_{\mathcal{S}}^2(f)\|_{L^p(u)}^p &=\int_{\mathbb{R}^n} \bigg(\sum_{Q \in \mathcal{S}} \langle |f| \rangle_Q^2 \mathbf{1}_{Q}(x)\bigg)^{\frac{p}{2}} u(x) dx
\le \sum_{Q \in \mathcal{S}} \langle |f| \rangle_Q^p \int_Q u(x)dx \nonumber \\%
&\lesssim \sum_{Q \in \mathcal{S}} \|f v^{\frac1p}\|_{\bar{B}, Q}^p \|v^{-\frac1p}\|_{B, Q}^p
\|u^{\frac1p}\|_{p, Q}^p |Q| \nonumber \\%
&\lesssim [u, v]_{A,B,p}^p \sum_{Q \in \mathcal{S}} \left(\inf_{Q} M_{\bar{B}}(f v^{\frac1p})\right)^p |E_Q| \nonumber \\%
&\le [u, v]_{A,B,p}^p \int_{\mathbb{R}^n} M_{\bar{B}}(f v^{\frac1p})(x)^p dx \nonumber \\%
&\le [u, v]_{A,B,p}^p \|M_{\bar{B}}\|_{L^p}^p \|f\|_{L^p(v)}^p, \end{align} where Lemma \ref{lem:MBp} is used in the last step.
Next let us deal with the case $2<p<\infty$. By duality, one has \begin{align}\label{eq:AS-dual}
\|\mathcal{A}_{\mathcal{S}}^2(f)\|_{L^p(u)}^2 = \|\mathcal{A}_{\mathcal{S}}^2(f)^2\|_{L^{p/2}(u)}
=\sup_{\substack{0 \le h \in L^{(p/2)'}(u) \\ \|h\|_{L^{(p/2)'}(u)}=1}} \int_{\mathbb{R}^n} \mathcal{A}_{\mathcal{S}}^2(f)^2 h u\, dx. \end{align}
Fix a nonnegative function $h \in L^{(p/2)'}(u)$ with $\|h\|_{L^{(p/2)'}(u)}=1$. Then using H\"{o}lder's inequality \eqref{eq:Holder-AA} three times and Lemma \ref{lem:MBp}, we obtain \begin{align}\label{eq:ASp2} &\int_{\mathbb{R}^n} \mathcal{A}_{\mathcal{S}}^2(f)(x)^2 h(x) u(x) dx
\lesssim \sum_{Q \in \mathcal{S}} \langle |f| \rangle_{Q}^2 \langle hu \rangle_{Q} |Q| \nonumber \\%
&\lesssim \sum_{Q \in \mathcal{S}} \|f v^{\frac1p}\|_{\bar{B}, Q}^2 \|v^{-\frac1p}\|_{B, Q}^2
\|hu^{1-\frac{2}{p}}\|_{\bar{A}, Q} \|u^{\frac{2}{p}}\|_{A, Q} |Q| \nonumber \\%
&\lesssim [u, v]_{A,B,p}^2 \sum_{Q \in \mathcal{S}} \left(\inf_{Q} M_{\bar{B}}(f v^{\frac1p})\right)^2
\left(\inf_{Q} M_{\bar{A}}(hu^{1-\frac{2}{p}})\right) |E_Q| \nonumber \\%
&\le [u, v]_{A,B,p}^2 \int_{\mathbb{R}^n} M_{\bar{B}}(f v^{\frac1p})(x)^2 M_{\bar{A}}(hu^{1-\frac{2}{p}})(x) dx \nonumber \\%
&\le [u, v]_{A,B,p}^2 \|M_{\bar{B}}(f v^{\frac1p})^2\|_{L^{p/2}}
\|M_{\bar{A}}(hu^{1-\frac{2}{p}})\|_{L^{(p/2)'}} \nonumber \\%
&\le [u, v]_{A,B,p}^2 \|M_{\bar{B}}\|_{L^p}^2
\|M_{\bar{A}}\|_{L^{(p/2)'}} \|f\|_{L^p( v)}^2 \|h\|_{L^{(p/2)'}(u)}. \end{align} Therefore, \eqref{eq:ASLp} immediately follows from \eqref{eq:ASp1}, \eqref{eq:AS-dual} and \eqref{eq:ASp2}. \end{proof}
Let us present an endpoint extrapolation theorem from \cite[Corollary~8.4]{CMP11}.
\begin{lemma} Let $\mathcal{F}$ be a collection of pairs $(f, g)$ of nonnegative measurable functions. If for every weight $w$, \begin{align*}
\|f\|_{L^{1,\infty}(w)} \le C \|g\|_{L^1(Mw)}, \quad (f, g) \in \mathcal{F}, \end{align*} then for all $p \in (1, \infty)$, \begin{align*}
\|f\|_{L^{p,\infty}(u)} \le C \|g\|_{L^p(v)}, \quad (f, g) \in \mathcal{F}, \end{align*}
whenever $[u, v]_{A,p'}=\sup_{Q} \|u^{\frac1p}\|_{A, Q} \|v^{-\frac1p}\|_{p', Q}<\infty$ for some Young function $A$ with $\bar{A} \in B_{p'}$. \end{lemma}
\begin{proof} [\textbf{Proof of Theorem \ref{thm:Sweak}.}] In view of the endpoint extrapolation result above, it suffices to prove that for every weight $w$, \begin{align*}
\|S_{\alpha, L}(f)\|_{L^{1,\infty}(w)} \leq C \|f\|_{L^1(Mw)}, \end{align*} where the constant $C$ is independent of $w$ and $f$. We mention that although the dependence on the weights is not displayed in \cite[Corollary~8.4]{CMP11}, one can track the constants through its proof to obtain the bound in \eqref{eq:S-weak}. Invoking Lemma \ref{lem:S-sparse}, we are reduced to showing that there exists a constant $C$ such that for every sparse family $\mathcal{S}$ and for every weight $w$, \begin{align}\label{eq:S-11}
\|\mathcal{A}_{\mathcal{S}}^2(f)\|_{L^{1,\infty}(w)} \leq C \|f\|_{L^1(M_{\mathcal{D}}w)}. \end{align}
Without loss of generality, we may assume that $f$ is bounded and has compact support. Fix $\lambda>0$ and denote $\Omega:=\{x \in \mathbb{R}^n: M_{\mathcal{D}}f(x)>\lambda\}$. By the Calder\'{o}n-Zygmund decomposition, there exists a pairwise disjoint family $\{Q_j\} \subset \mathcal{D}$ such that $\Omega=\bigcup_{j}Q_j$ and \begin{list}{\textup{(\theenumi)}}{\usecounter{enumi}\leftmargin=1cm \labelwidth=1cm \itemsep=0.2cm
\topsep=.2cm \renewcommand{\theenumi}{\arabic{enumi}}} \item\label{CZ-1} $f=g+b$, \item\label{CZ-2} $g=f\mathbf{1}_{\Omega^c}+\sum_{j} \langle f \rangle_{Q_j} \mathbf{1}_{Q_j}$, \item\label{CZ-3} $b=\sum_{j}b_j$ with $b_j=(f-\langle f \rangle_{Q_j}) \mathbf{1}_{Q_j}$,
\item\label{CZ-4} $\langle |f| \rangle_{Q_j}>\lambda$ and $|g(x)| \le 2^n \lambda$, a.e. $x \in \mathbb{R}^n$, \item\label{CZ-5} $\operatorname{supp}(b_j) \subset Q_j$ and $\fint_{Q_j} b_j\, dx=0$. \end{list} Then by \eqref{CZ-1}, we split \begin{align}\label{eq:IgIb} &w(\{x \in \mathbb{R}^n: \mathcal{A}_{\mathcal{S}}^2(f)(x)>\lambda\}) \le w(\Omega) + {\rm I_g} + {\rm I_b}, \end{align} where \begin{align*} {\rm I_g} = w(\{x \in \Omega^c: \mathcal{A}_{\mathcal{S}}^2(g)(x)>\lambda/2\}) \quad\text{and}\quad {\rm I_b} =w(\{x \in \Omega^c: \mathcal{A}_{\mathcal{S}}^2(b)(x)>\lambda/2\}). \end{align*} For the first term, by \eqref{CZ-4} we have \begin{align}\label{eq:wo}
w(\Omega) &\le \sum_{j} w(Q_j) \le \frac{1}{\lambda} \sum_{j} \frac{w(Q_j)}{|Q_j|} \int_{Q_j} |f(x)| dx \nonumber \\%
&\le \frac{1}{\lambda} \sum_{j} \int_{Q_j} |f(x)| M_{\mathcal{D}}w(x) dx
\le \frac{1}{\lambda} \int_{\mathbb{R}^n} |f(x)| M_{\mathcal{D}}w(x) dx. \end{align}
To estimate ${\rm I_b}$, we claim that $\mathcal{A}_{\mathcal{S}}^2(b_j)(x)=0$ for all $x \in \Omega^c$ and for all $j$. In fact, if there exist $x_0 \in \Omega^c$ and $j_0$ such that $\mathcal{A}_{\mathcal{S}}^2(b_{j_0})(x_0) \neq 0$, then there is a dyadic cube $Q_0 \in \mathcal{S}$ such that $x_0 \in Q_0$ and $\langle b_{j_0}\rangle_{Q_0} \neq 0$. The latter implies $Q_0 \subsetneq Q_{j_0}$ because of the support and the vanishing property of $b_{j_0}$. This in turn gives that $x_0 \in Q_{j_0}$, which contradicts $x_0 \in \Omega^c$. This shows our claim. As a consequence, the set $\{x \in \Omega^c: \mathcal{A}_{\mathcal{S}}^2(b)(x)>\lambda/2\}$ is empty, and hence ${\rm I_b}=0$.
In order to control ${\rm I_g}$, we first present a Fefferman-Stein inequality for $\mathcal{A}_{\mathcal{S}}^2$. Note that $v(x):=M_{\mathcal{D}}w(x) \ge \langle w \rangle_{Q}$ for any dyadic cube $Q \in \mathcal{S}$ containing $x$. Then for any Young function $A$ such that $\bar{A} \in B_2$, \begin{align}\label{eq:AS-FS}
\|\mathcal{A}_{\mathcal{S}}^2(f)\|_{L^2(w)}^2
&=\sum_{Q \in \mathcal{S}} \langle |f| \rangle_{Q}^2 w(Q)
\le \sum_{Q \in \mathcal{S}} \|f v^{\frac12}\|_{\bar{A}, Q}^2 \|v^{-\frac12}\|_{A, Q}^2 w(Q) \nonumber \\%
&\le \sum_{Q \in \mathcal{S}} \|f v^{\frac12}\|_{\bar{A}, Q}^2 \langle w \rangle_{Q}^{-1} w(Q)
\lesssim \sum_{Q \in \mathcal{S}} \Big(\inf_{Q} M_{\bar{A}}(f v^{\frac12}) \Big)^2 |E_Q| \nonumber \\% &\le \int_{\mathbb{R}^n} M_{\bar{A}}(f v^{\frac12})(x)^2 dx
\lesssim \|f\|_{L^2(v)}^2 = \|f\|_{L^2(M_{\mathcal{D}}w)}^2, \end{align} where we used Lemma \ref{lem:MBp} in the last inequality. Note that for any $x \in Q_j$, \begin{align}\label{eq:MD} M_{\mathcal{D}}(w\mathbf{1}_{\Omega^c})(x) &\le M_{\mathcal{D}}(w\mathbf{1}_{Q_j^c})(x)
=\sup_{x \in Q \in \mathcal{D}} \frac{1}{|Q|} \int_{Q \cap Q_j^c} w(y) dy \nonumber \\%
&\le \sup_{Q \in \mathcal{D}:Q_j \subsetneq Q} \frac{1}{|Q|} \int_{Q \cap Q_j^c} w(y) dy \le \inf_{Q_j} M_{\mathcal{D}}w. \end{align} Thus, combining \eqref{eq:AS-FS} with \eqref{eq:MD}, we have \begin{align}\label{eq:Ig} {\rm I_g} &\le \frac{4}{\lambda^2} \int_{\Omega^c} \mathcal{A}_{\mathcal{S}}^2(g)(x)^2 w(x) dx \nonumber \\%
&\lesssim \frac{1}{\lambda^2} \int_{\mathbb{R}^n} |g(x)|^2 M_{\mathcal{D}}(w\mathbf{1}_{\Omega^c})(x) dx
\lesssim \frac{1}{\lambda} \int_{\mathbb{R}^n} |g(x)| M_{\mathcal{D}}(w\mathbf{1}_{\Omega^c})(x) dx \nonumber \\%
&\le \frac{1}{\lambda} \int_{\Omega^c} |f(x)| M_{\mathcal{D}}(w\mathbf{1}_{\Omega^c})(x) dx + \frac{1}{\lambda} \sum_{j} \langle f \rangle_{Q_j} \int_{Q_j} M_{\mathcal{D}}(w\mathbf{1}_{\Omega^c})(x) dx \nonumber \\%
&\le \frac{1}{\lambda} \int_{\mathbb{R}^n} |f(x)| M_{\mathcal{D}}w(x) dx
+ \frac{1}{\lambda} \sum_{j} \int_{Q_j} |f(y)| M_{\mathcal{D}}w(y) dy \nonumber \\%
&\lesssim \frac{1}{\lambda} \int_{\mathbb{R}^n} |f(x)| M_{\mathcal{D}}w(x) dx. \end{align} Consequently, gathering \eqref{eq:IgIb}, \eqref{eq:wo} and \eqref{eq:Ig}, we conclude that \eqref{eq:S-11} holds. \end{proof}
\subsection{Fefferman-Stein inequalities} In order to show Theorem \ref{thm:FS}, we recall an extrapolation theorem for arbitrary weights in \cite[Theorem 1.3]{CP}, or \cite[Theorem~4.11]{CO} in the general Banach function spaces.
\begin{lemma}\label{lem:extra} Let $\mathcal{F}$ be a collection of pairs $(f, g)$ of nonnegative measurable functions. If for some $p_0 \in (0, \infty)$ and for every weight $w$, \begin{align*}
\|f\|_{L^{p_0}(w)} \le C \|g\|_{L^{p_0}(Mw)}, \quad (f, g) \in \mathcal{F}, \end{align*} then for every $p \in (p_0, \infty)$ and for every weight $w$, \begin{align*}
\|f\|_{L^p(w)} \le C \|g(Mw/w)^{\frac{1}{p_0}}\|_{L^p(w)}, \quad (f, g) \in \mathcal{F}. \end{align*} \end{lemma}
\begin{proof}[\textbf{Proof of Theorem \ref{thm:FS}.}]
Note that $v(x):=Mw(x) \ge \langle w\rangle_{Q}$ for any dyadic cube $Q \in \mathcal{S}$ containing $x$. Let $A$ be a Young function such that ${\bar{A}}\in B_{p}$. By Lemma \ref{lem:MBp}, we have \begin{align}\label{eq:MAf}
\|M_{\bar{A}} (f v^{\frac{1}{p}})\|_{L^{p} } \lesssim \|f\|_{L^{p}(v)}. \end{align} Thus, using Lemma \ref{lem:S-sparse}, H\"{o}lder's inequality and \eqref{eq:MAf}, we deduce that \begin{align*}
\|S_{\alpha,L}(f)\|_{L^p(w)}^p &\lesssim \alpha^{pn}\sum_{j=1}^{3^n} \sum_{Q \in \mathcal{S}_j}
\langle |f| \rangle_{Q}^p \int_Q w(x)dx \\% &\le \alpha^{pn}\sum_{j=1}^{3^n} \sum_{Q \in \mathcal{S}_j}
\|f v^{\frac{1}{p}}\|_{\bar{A}, Q}^p
\|v^{-\frac{1}{p}}\|_{A, Q}^p \int_Q w(x)dx \\% &\lesssim \alpha^{pn}\sum_{j=1}^{3^n} \sum_{Q \in \mathcal{S}_j}
\|f v^{\frac{1}{p}}\|_{\bar{A}, Q}^p
|Q| \\% &\lesssim \alpha^{pn}\sum_{j=1}^{3^n} \sum_{Q \in \mathcal{S}_j}
\Big(\inf_{Q} M_{\bar{A}}(f v^{\frac{1}{p}}) \Big)^p |E_Q| \\% &\lesssim\alpha^{pn}\int_{\mathbb{R}^n}
M_{\bar{A}}(f v^{\frac{1}{p}})(x)^p dx \\%
&\lesssim \alpha^{pn}\|f\|_{L^{p}(v)}^p
= \alpha^{pn} \|f\|_{L^{p}(Mw)}^p. \end{align*} This shows \eqref{eq:SMw-1}. Finally, \eqref{eq:SMw-2} is a consequence of \eqref{eq:SMw-1} and Lemma \ref{lem:extra}. \end{proof}
\subsection{Local decay estimates}
To show Theorem \ref{thm:local}, we need the following Carleson embedding theorem from \cite[Theorem~4.5]{HP}.
\begin{lemma}\label{lem:Car-emb} Suppose that the sequence $\{a_Q\}_{Q \in \mathcal{D}}$ of nonnegative numbers satisfies the Carleson packing condition \begin{align*} \sum_{Q \in \mathcal{D}:Q \subset Q_0} a_Q \leq A w(Q_0),\quad\forall Q_0 \in \mathcal{D}. \end{align*} Then for all $p \in (1, \infty)$ and $f \in L^p(w)$, \begin{align*} \bigg(\sum_{Q \in \mathcal{D}} a_Q \Big(\frac{1}{w(Q)} \int_{Q} f(x) w(x) \ dx\Big)^p \bigg)^{\frac1p}
\leq A^{\frac1p} p' \|f\|_{L^p(w)}. \end{align*} \end{lemma}
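A simple model case of Lemma \ref{lem:Car-emb}: if $\{E_{Q}\}_{Q\in\mathcal{S}}$ is a pairwise disjoint family with $E_{Q}\subset Q$, then the choice $a_{Q}:=w(E_{Q})$ for $Q\in\mathcal{S}$ and $a_{Q}:=0$ otherwise satisfies the Carleson packing condition with $A=1$, since for every $Q_{0}\in\mathcal{D}$,
\begin{align*}
\sum_{Q\in\mathcal{S}:\,Q\subset Q_{0}}w(E_{Q})
=w\bigg(\bigcup_{Q\in\mathcal{S}:\,Q\subset Q_{0}}E_{Q}\bigg)
\le w(Q_{0}).
\end{align*}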
\begin{lemma}\label{lem:CF-local} For every $1<p<\infty$ and $w \in A_p$, we have \begin{align}\label{eq:CF-local}
\|S_{\alpha,L}f\|_{L^2(B, w)} \leq c_{n,p} \alpha^{n}[w]_{A_p}^{\frac12} \|Mf\|_{L^2(B, w)}, \end{align} for every ball $B \subset \mathbb{R}^n$ and $f\in L_c^\infty(\mathbb{R}^n)$ with $\operatorname{supp}(f) \subset B$. \end{lemma}
\begin{proof} Let $w \in A_p$ with $1<p<\infty$. Fix a ball $B \subset \mathbb{R}^n$ and a cube $Q \subset \mathbb{R}^n$ containing $B$. From \eqref{eq:SLQj}, there exists a sparse family $\mathcal{S}(Q) \subset \mathcal{D}(Q)$ so that \begin{align*} \widetilde{S}_{\alpha,L}(f)(x)^2
&\lesssim \alpha^{2n} \sum_{Q'\in \mathcal{S}(Q)} \sum_{j=0}^\infty 2^{-j\delta} \langle |f|\rangle_{2^jQ'}^2 \mathbf{1}_{Q'}(x) \\% &\lesssim \alpha^{2n} \sum_{Q'\in \mathcal{S}(Q)} \inf_{Q'} M(f)^2 \mathbf{1}_{Q'}(x), \quad\text{ a.e. } x\in Q. \end{align*} This implies that \begin{align*}
\|\widetilde{S}_{\alpha,L}(f)\|_{L^2(Q, w)}^2 &\lesssim \alpha^{2n}\sum_{Q' \in \mathcal{S}(Q)} \inf_{Q'} M(f)^2 w(Q'). \end{align*} From this and \eqref{eq:SSS}, we see that to obtain \eqref{eq:CF-local}, it suffices to prove \begin{align}\label{eq:QSQ}
\sum_{Q' \in \mathcal{S}(Q)} \inf_{Q'} M(f)^2 w(Q') \lesssim [w]_{A_p} \|M(f)\|_{L^2(Q, w)}^2. \end{align}
Recall that a new version of $A_{\infty}$ was introduced by Hyt\"{o}nen and P\'{e}rez \cite{HP}: \begin{align*} [w]'_{A_{\infty}} := \sup_{Q} \frac{1}{w(Q)} \int_{Q} M(w \mathbf{1}_Q)(x) dx. \end{align*} By \cite[Proposition~2.2]{HP}, there holds \begin{align}\label{eq:AiAi} c_n [w]'_{A_{\infty}} \le [w]_{A_{\infty}} \leq [w]_{A_p}. \end{align} Observe that for every $Q'' \in \mathcal{D}$, \begin{align*} \sum_{Q' \in \mathcal{S}(Q): Q' \subset Q''} w(Q')
&=\sum_{Q' \in \mathcal{S}(Q): Q' \subset Q''} \langle w \rangle_{Q'} |Q'|
\lesssim \sum_{Q' \in \mathcal{S}(Q): Q' \subset Q''} \inf_{Q'} M(w \mathbf{1}_{Q''}) |E_{Q'}| \\% &\lesssim \int_{Q''} M(w \mathbf{1}_{Q''})(x) dx \leq [w]'_{A_{\infty}} w(Q'') \lesssim [w]_{A_p} w(Q''), \end{align*} where we used the disjointness of $\{E_{Q'}\}_{Q' \in \mathcal{S}(Q)}$ and \eqref{eq:AiAi}. This shows that the collection $\{w(Q')\}_{Q' \in \mathcal{S}(Q)}$ satisfies the Carleson packing condition with the constant $c_n [w]_{A_p}$. As a consequence, this and Lemma \ref{lem:Car-emb} give that \begin{align*} \sum_{Q' \in \mathcal{S}(Q)} \inf_{Q'} M(f)^2 w(Q') &\le \sum_{Q' \in \mathcal{S}(Q)} \bigg(\frac{1}{w(Q')} \int_{Q'} M(f)\, \mathbf{1}_Q w\, dx \bigg)^2 w(Q') \\%
&\lesssim [w]_{A_p} \|M(f) \mathbf{1}_Q\|_{L^2(w)}^2
=[w]_{A_p} \|M(f) \mathbf{1}_Q\|_{L^2(Q, w)}^2, \end{align*} where the above implicit constants are independent of $[w]_{A_p}$ and $Q$. This shows \eqref{eq:QSQ} and completes the proof of \eqref{eq:CF-local}. \end{proof}
Next, let us see how Lemma \ref{lem:CF-local} implies Theorem \ref{thm:local}.
\begin{proof}[\textbf{Proof of Theorem \ref{thm:local}.}] Let $p>1$ and $r>1$ be chosen later. Define the Rubio de Francia algorithm: \begin{align*}
\mathcal{R}h=\sum_{k=0}^{\infty} \frac{M^{k}h}{2^k\|M\|^k_{L^{r'}\to L^{r'}}}. \end{align*} Then it is obvious that \begin{align}\label{eq:hRh}
h \le \mathcal{R}h \quad\text{and}\quad \|\mathcal{R}h\|_{L^{r'} (\mathbb{R}^n)} \leq 2 \|h\|_{L^{r'} (\mathbb{R}^n)}. \end{align} Moreover, for any nonnegative $h \in L^{r'}(\mathbb{R}^n)$, we have that $\mathcal{R}h \in A_1$ with \begin{align}\label{eq:Rh-A1}
[\mathcal{R}h]_{A_1} \leq 2 \|M\|_{L^{r'} \to L^{r'}} \leq c_n \ r. \end{align}
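For completeness, we recall the standard verification of \eqref{eq:hRh} and \eqref{eq:Rh-A1}: by the sublinearity and the $L^{r'}$ boundedness of $M$,
\begin{align*}
M(\mathcal{R}h)\le\sum_{k=0}^{\infty}\frac{M^{k+1}h}{2^{k}\|M\|^{k}_{L^{r'}\to L^{r'}}}
=2\|M\|_{L^{r'}\to L^{r'}}\sum_{k=0}^{\infty}\frac{M^{k+1}h}{2^{k+1}\|M\|^{k+1}_{L^{r'}\to L^{r'}}}
\le 2\|M\|_{L^{r'}\to L^{r'}}\,\mathcal{R}h,
\end{align*}
while $\|\mathcal{R}h\|_{L^{r'}(\mathbb{R}^n)}\le\sum_{k\ge0}2^{-k}\|h\|_{L^{r'}(\mathbb{R}^n)}=2\|h\|_{L^{r'}(\mathbb{R}^n)}$.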
By the duality between $L^{r}(Q)$ and $L^{r'}(Q)$ and the first inequality in \eqref{eq:hRh}, there exists some nonnegative function $h \in L^{r'}(Q)$ with $\|h\|_{L^{r'}(Q)}=1$ such that \begin{align}\label{eq:FQ}
\mathscr{F}_Q^{\frac1r} &:= |\{x \in Q: S_{\alpha,L}(f)(x) > t M(f)(x)\}|^{\frac1r} \nonumber \\
&= |\{x \in Q: S_{\alpha,L}(f)(x)^2 > t^2 M(f)(x)^2\}|^{\frac1r} \nonumber \\
& \leq \frac{1}{t^2} \bigg\| \bigg(\frac{S_{\alpha,L}(f)}{M(f)}\bigg)^2 \bigg\|_{L^r(Q)} \leq \frac{1}{t^2} \int_{Q} S_{\alpha,L}(f)^2 \, h \, M(f)^{-2} dx \nonumber \\
&\leq t^{-2} \|S_{\alpha,L}(f)\|_{L^2(Q, w)}^2, \end{align} where $w=w_1 w_2^{1-p}$, $w_1= \mathcal{R}h$ and $w_2 = M(f)^{2(p'-1)}$. Recall that Coifmann-Rochberg theorem \cite[Theorem~3.4]{Jose} asserts that \begin{align}\label{eq:C-R} [(M(f))^{\delta}]_{A_1} \leq \frac{c_n}{1-\delta}, \quad\forall \delta \in (0, 1). \end{align} In view of \eqref{eq:Rh-A1} and \eqref{eq:C-R}, we see that $w_1, w_2 \in A_1$ provided $p>3$. Then the reverse $A_p$ factorization theorem gives that $w=w_1 w_2^{1-p} \in A_p$ with \begin{align}\label{eq:w-Ap-r} [w]_{A_p} \leq [w_1]_{A_1} [w_2]_{A_1}^{p-1} \leq c_n ~ r. \end{align} Thus, gathering \eqref{eq:CF-local}, \eqref{eq:FQ} and \eqref{eq:w-Ap-r}, we obtain \begin{align*} \mathscr{F}_Q^{\frac1r}
& \le c_{n} t^{-2}\alpha^{2n} [w]_{A_p} \|M(f)\|_{L^2(Q, w)}^2
\\
& = c_{n} t^{-2} \alpha^{2n}[w]_{A_p} \|\mathcal{R}h\|_{L^1(Q)}
\\
&\le c_{n} t^{-2} \alpha^{2n}[w]_{A_p} \|\mathcal{R}h\|_{L^{r'}(Q)} |Q|^{\frac1r}
\\
&\le c_{n} t^{-2}\alpha^{2n} [w]_{A_p} \|h\|_{L^{r'}(Q)} |Q|^{\frac1r}
\\
&\le c_{n} r t^{-2}\alpha^{2n} |Q|^{\frac1r}. \end{align*}
Consequently, if $t> \sqrt{c_n e}\,\alpha^{n}$, choosing $r>1$ so that $t^2/e = c_n \alpha^{2n}r$, we have \begin{align}\label{eq:FQr-1}
\mathscr{F}_Q \le (c_n \alpha^{2n}r t^{-2})^r |Q| = e^{-r} |Q|
= e^{-\frac{t^2}{c_n e\alpha^{2n}}} |Q|. \end{align} If $0<t \le \sqrt{c_n e}\alpha^{n}$, it is easy to see that \begin{equation}\label{eq:FQr-2}
\mathscr{F}_Q \le |Q| \le e \cdot e^{-\frac{t^2}{c_n e\alpha^{2n}}} |Q|. \end{equation} Combining \eqref{eq:FQr-1} and \eqref{eq:FQr-2}, we deduce that \begin{equation*} \mathscr{F}_Q=|\{x \in Q: S_{\alpha,L}(f)(x) > t M(f)(x)\}|
\le c_1 e^{-c_2 t^2/\alpha^{2n}} |Q|,\quad\forall t>0. \end{equation*} This proves \eqref{eq:local}. \end{proof}
\subsection{Mixed weak type estimates} To carry out the proof of Theorem \ref{thm:mixed}, we first establish a Coifman-Fefferman type inequality.
\begin{lemma}\label{lem:CF} For every $0<p<\infty$ and $w \in A_{\infty}$, we have \begin{align}\label{eq:CF}
\|S_{\alpha, L}f\|_{L^p(w)} &\lesssim \|Mf\|_{L^p(w)}. \end{align} \end{lemma}
\begin{proof} Let $w \in A_\infty$. Then, it is well known that for any $\gamma \in (0, 1)$ there exists $\beta \in (0, 1)$ such that for any cube $Q$ and any measurable subset $A\subset Q$ \begin{align*}
|A| \le \gamma |Q| \quad\Longrightarrow\quad w(A) \le \beta w(Q). \end{align*} Thus, for the sparsity constant $\eta_j$ of $\mathcal{S}_j$ there exists $\beta_j \in (0,1)$ such that for $Q \in \mathcal{S}_j$, we have \begin{equation}\label{eq:w-EQ} w(E_Q) =w(Q) - w(Q \setminus E_Q) \geq (1-\beta_j)w(Q), \end{equation} since $|Q \setminus E_Q| \leq (1-\eta_j)|Q|$ and hence $w(Q \setminus E_Q) \leq \beta_j w(Q)$. It follows from \eqref{eq:S-sparse} and \eqref{eq:w-EQ} that \begin{align}\label{eq:p=1} \int_{\mathbb{R}^n} S_{\alpha,L}(f)(x)^2 w(x) \ dx
&\lesssim \sum_{j=1}^{3^n} \sum_{Q \in \mathcal{S}_j} \langle |f| \rangle_{Q}^2 w(Q) \nonumber \\ &\lesssim \sum_{j=1}^{3^n} \sum_{Q \in \mathcal{S}_j} \Big(\inf_{Q}M(f) \Big)^2 w(E_Q) \nonumber \\ &\lesssim \sum_{j=1}^{3^n} \sum_{Q \in \mathcal{S}_j} \int_{E_Q} M(f)(x)^2 w(x) \ dx \nonumber \\ &\lesssim \int_{\mathbb{R}^n} M(f)(x)^2 w(x) \ dx. \end{align} This shows the inequality \eqref{eq:CF} holds for $p=2$.
To obtain the result \eqref{eq:CF} for every $p \in (0, \infty)$, we apply the $A_{\infty}$ extrapolation theorem from \cite[Corollary 3.15]{CMP11} in the Lebesgue spaces or \cite[Theorem~3.36]{CMM} for the general measure spaces. Let $\mathcal{F}$ be a family of pairs of functions. Suppose that for some $p_0 \in (0, \infty)$ and for every weight $v_0 \in A_{\infty}$, \begin{align}\label{eq:fg-some}
\|f\|_{L^{p_0}(v_0)} \leq C_1 \|g\|_{L^{p_0}(v_0)}, \quad \forall (f, g) \in \mathcal{F}. \end{align} Then for every $p \in (0, \infty)$ and every weight $v \in A_{\infty}$, \begin{align}\label{eq:fg-every}
\|f\|_{L^p(v)} \leq C_2 \|g\|_{L^p(v)}, \quad \forall (f, g) \in \mathcal{F}. \end{align} From \eqref{eq:p=1}, we see that \eqref{eq:fg-some} holds for the exponent $p_0=2$ and the pair $(S_{\alpha,L}f, Mf)$. Therefore, \eqref{eq:fg-every} implies that \eqref{eq:CF} holds for the general case $0<p<\infty$. \end{proof}
\begin{proof}[\textbf{Proof of Theorem \ref{thm:mixed}.}] In view of Lemma \ref{lem:CF}, we only present the proof for $S_{\alpha,L}$. We use a hybrid of the arguments in \cite{CMP} and \cite{LOPi}. Define \begin{align*} \mathcal{R}h(x)=\sum_{j=0}^{\infty} \frac{T^j_u h(x)}{2^j K_0^j}, \end{align*} where $K_0>0$ will be chosen later and $T_uf(x) := M(fu)(x)/u(x)$ if $u(x) \neq 0$, $T_uf(x)=0$ otherwise. It immediately follows that \begin{align}\label{eq:R-1} h \leq \mathcal{R}h \quad \text{and}\quad T_u(\mathcal{R}h) \leq 2K_0 \mathcal{R}h. \end{align} Moreover, following the same scheme as in \cite{CMP}, we get for some $r>1$, \begin{align}\label{eq:R-2} \mathcal{R}h \cdot uv^{\frac{1}{r'}} \in A_{\infty} \quad\text{and}\quad
\|\mathcal{R}h\|_{L^{r',1}(uv)} \leq 2 \|h\|_{L^{r',1}(uv)}. \end{align}
Note that \begin{equation}\label{e:Lpq}
\|f^q\|_{L^{p,\infty}(w)}= \|f\|^q_{L^{pq,\infty}(w)}, \ \ 0<p,q<\infty. \end{equation} This implies that \begin{align*}
& \bigg\| \frac{S_{\alpha,L}(f)}{v} \bigg\|_{L^{1,\infty}(uv)}^{\frac{1}{r}}
= \bigg\| \bigg(\frac{S_{\alpha,L}(f)}{v}\bigg)^{\frac{1}{r}}\bigg\|_{L^{r,\infty}(uv)} \\
&=\sup_{\|h\|_{L^{r',1}(uv)}=1}
\bigg|\int_{\mathbb{R}^n} |S_{\alpha,L}(f)(x)|^{\frac{1}{r}} h(x) u(x) v(x)^{\frac{1}{r'}} dx \bigg| \\
&\leq \sup_{\|h\|_{L^{r',1}(uv)}=1}
\int_{\mathbb{R}^n} |S_{\alpha,L}(f)(x)|^{\frac{1}{r}} \mathcal{R}h(x) u(x) v(x)^{\frac{1}{r'}} dx. \end{align*} Invoking Lemma \ref{lem:CF} and H\"{o}lder's inequality, we obtain \begin{align*}
& \int_{\mathbb{R}^n} |S_{\alpha,L}(f)(x)|^{\frac{1}{r}} \mathcal{R}h(x) u(x) v(x)^{\frac{1}{r'}} dx \\ & \lesssim \int_{\mathbb{R}^n} M(f)(x)^{\frac{1}{r}} \mathcal{R}h(x) u(x) v(x)^{\frac{1}{r'}} dx \\ & =\int_{\mathbb{R}^n} \bigg(\frac{M(f)(x)}{v(x)}\bigg)^{\frac{1}{r}} \mathcal{R}h(x) u(x) v(x) dx \\
& \leq \bigg\|\bigg(\frac{M(f)}{v}\bigg)^{\frac{1}{r}} \bigg\|_{L^{r,\infty}(uv)}\|\mathcal{R}h\|_{L^{r',1}(uv)} \\
& \leq \bigg\|\frac{M(f)}{v}\bigg\|_{L^{1,\infty}(uv)}^{\frac{1}{r}} \|h\|_{L^{r',1}(uv)}, \end{align*} where we used \eqref{e:Lpq} and \eqref{eq:R-2} in the last inequality. To conclude, we apply the weighted mixed weak type estimate for $M$ proved in Theorem 1.1 of \cite{LOP}. Consequently, collecting the above estimates, we get the desired result \begin{equation*}
\bigg\| \frac{S_{\alpha,L}(f)}{v} \bigg\|_{L^{1,\infty}(uv)}
\lesssim \bigg\| \frac{M(f)}{v} \bigg\|_{L^{1,\infty}(uv)}
\lesssim \| f \|_{L^1(u)}. \end{equation*} The proof is complete. \end{proof}
\subsection{Restricted weak type estimates}
\begin{proof}[{\bf Proof of Theorem \ref{thm:RW}}] In view of \eqref{eq:S-sparse}, it suffices to show that \begin{align}
\|\mathcal{A}_{\mathcal{S}}^2(\mathbf{1}_E)\|_{L^{p, \infty}(w)} \lesssim [w]_{A_p^{\mathcal{R}}}^{p+1} w(E)^{\frac1p}. \end{align}
Since $\mathcal{S}$ is sparse, for every $Q \in \mathcal{S}$, there exists $E_Q \subset Q$ such that $|E_Q| \simeq |Q|$ and $\{E_Q\}_{Q \in \mathcal{S}}$ is a disjoint family. Note that $Q \subset Q(x, 2\ell(Q)) \subset 3Q$ for any $x \in Q$, where $Q(x, 2\ell(Q))$ denotes the cube centered at $x$ and with sidelength $2\ell(Q)$. Hence, for all $Q \in \mathcal{S}$ and for all $x \in Q$, \begin{align}\label{eq:EQQ} \frac{w(Q(x, 2\ell(Q)))}{w(E_Q)} \simeq \frac{w(Q)}{w(E_Q)}
\le [w]_{A_p^{\mathcal{R}}}^p \bigg(\frac{|Q|}{|E_Q|}\bigg)^p \lesssim [w]_{A_p^{\mathcal{R}}}^p. \end{align} By duality, one has \begin{align}\label{eq:AE}
\|\mathcal{A}_{\mathcal{S}}^2(\mathbf{1}_E)\|_{L^{p, \infty}(w)}^2
=\|\mathcal{A}_{\mathcal{S}}^2(\mathbf{1}_E)^2\|_{L^{p/2, \infty}(w)}
&=\sup_{\substack{0 \le h \in L^{(p/2)', 1}(w) \\ \|h\|_{L^{(p/2)',1}(w)}=1}} \int_{\mathbb{R}^n} \mathcal{A}_{\mathcal{S}}^2(\mathbf{1}_E)^2 hw\, dx. \end{align} Fix such $h$ above. Then, using \eqref{eq:EQQ}, the disjointness of $\{E_Q\}_{Q \in \mathcal{S}}$, H\"{o}lder's inequality and \eqref{eq:ME}, we conclude that \begin{align}\label{eq:AEE} &\int_{\mathbb{R}^n} \mathcal{A}_{\mathcal{S}}^2(\mathbf{1}_E)^2 hw\, dx =\sum_{Q \in \mathcal{S}} \langle \mathbf{1}_E \rangle_Q^2 \int_{Q} hw\, dx \nonumber\\% &\le \sum_{Q \in \mathcal{S}} \langle \mathbf{1}_E \rangle_Q^2 \bigg( \fint_{Q(x, 2\ell(Q))} h\, dw\bigg) w(E_Q) \bigg(\frac{w(Q(x, 2\ell(Q)))}{w(E_Q)}\bigg) \nonumber\\% &\lesssim [w]_{A_p^{\mathcal{R}}}^p \sum_{Q \in \mathcal{S}} \Big(\inf_Q M\mathbf{1}_E\Big)^2 \Big(\inf_Q M_w^c h\Big) w(E_Q) \nonumber\\% &\le [w]_{A_p^{\mathcal{R}}}^p \int_{\mathbb{R}^n} M\mathbf{1}_E(x)^2 M_w^c h(x) w\, dx \nonumber\\%
&\le [w]_{A_p^{\mathcal{R}}}^p \|(M\mathbf{1}_E)^2\|_{L^{p/2,\infty}(w)} \|M_w^c h\|_{L^{(p/2)', 1}(w)} \nonumber\\% &\lesssim [w]_{A_p^{\mathcal{R}}}^{p+2} w(E)^{\frac2p}. \end{align} Therefore, \eqref{eq:AE} and \eqref{eq:AEE} immediately imply \eqref{eq:SLE}. \end{proof}
\subsection{Endpoint estimates for commutators} Recall that the sharp maximal function of $f$ is defined by \[
M_{\delta}^{\sharp}(f)(x):=\sup_{x\in Q} \inf_{c \in \mathbb{R}} \bigg(\fint_{Q}\big||f|^{\delta}-c\big|dx\bigg)^{\frac{1}{\delta}}. \] If we write $Q_{t, L} := t^m Le^{-t^m L}$, then \begin{align*}
C_b(S_{\alpha, L})f(x) &= \bigg(\iint_{\Gamma_{\alpha}(x)} |Q_{t, L} \big((b(x)-b(\cdot))f(\cdot) \big)(y)|^2 \frac{dydt}{t^{n+1}} \bigg)^{\frac12}. \end{align*}
\begin{lemma}\label{lem:MMf} For any $0<\delta<1$, \begin{align} M_{\delta}^{\#}(\widetilde{S}_{\alpha, L} f)(x_{0}) \lesssim \alpha^{2n} Mf(x_{0}), \quad \forall x_{0} \in \mathbb{R}^n. \end{align} \end{lemma}
\begin{proof} Fix a cube $Q \ni x_{0}$. The lemma will be proved if we can show that
\[
\left(\fint_{Q}|\widetilde{S}_{\alpha,L}(f)^{2}(x)-c_{Q}|^{\delta}dx\right)^{\frac{1}{\delta}}\lesssim \alpha^{2n}Mf(x_{0})^{2}, \] where $c_{Q}$ is a constant which will be determined later.
Denote $T(Q)=Q\times(0,\ell(Q))$. We write \[ \widetilde{S}_{\alpha,L}(f)^{2}(x) =E(f)(x)+F(f)(x), \] where \begin{align*}
E(f)(x) &:=\iint_{T(2Q)}\Phi\Big(\frac{|x-y|}{\alpha t}\Big)|Q_{t, L}f(y)|^{2}\frac{dydt}{t^{n+1}}, \\
F(f)(x) &:= \iint_{\mathbb{R}_{+}^{n+1}\backslash T(2Q)}\Phi\Big(\frac{|x-y|}{\alpha t}\Big)|Q_{t, L}f(y)|^{2}\frac{dydt}{t^{n+1}}. \end{align*} Let us choose $c_{Q}=F(f)(x_{Q})$ where $x_{Q}$ is the center of $Q$. Then \begin{align*}
\bigg(&\fint_{Q}|\widetilde{S}_{\alpha,L}(f)^{2}-c_{Q}|^{\delta} dx\bigg)^{\frac{1}{\delta}}
=\left(\fint_{Q}|E(f)(x)+F(f)(x)-c_{Q}|^{\delta} dx\right)^{\frac{1}{\delta}} \\
&\qquad \lesssim\left(\fint_{Q}|E(f)(x)|^{\delta} dx\right)^{\frac{1}{\delta}}+\left(\fint_{Q}|F(f)(x)-F(f)(x_{Q})|^{\delta}dx \right)^{\frac{1}{\delta}} =:I+II \end{align*} We estimate each term separately. For the first term $I$, we set $f=f_0+f^\infty$, where $f_{0}=f\chi_{Q^{*}}, f^\infty=f\chi_{(Q^{*})^{c}}$ and $Q^{*}=8Q$. Then we have \begin{equation}\label{E0} E(f)(x)\lesssim E(f_{0})(x)+E(f^\infty)(x). \end{equation} Therefore, \[
\left(\fint_{Q}|E(f)(x)|^{\delta} dx\right)^{\frac{1}{\delta}}
\lesssim\left(\fint_{Q}|E(f_{0})(x)|^{\delta} dx\right)^{\frac{1}{\delta}}
+\left(\fint_{Q}|E(f^\infty (x)|^{\delta}dx\right)^{\frac{1}{\delta}}. \]
It was proved in \cite[p.~884]{BD} that
$\|\widetilde{S}_{\alpha,L}(f)\|_{L^{1,\infty}}\lesssim \alpha^{n}\|S_{1,L}(f)\|_{L^{1,\infty}}$. Then, by \eqref{eq:S11} and Kolmogorov's inequality we have \begin{align}\label{E1}
\bigg(\fint_{Q} & |E(f_{0})(x)|^{\delta} dx\bigg)^{\frac{1}{\delta}}
\leq\left(\fint_{Q}|\widetilde{S}_{\alpha,L}(f_0)|^{2\delta} dx \right)^{\frac{2}{2\delta}} \nonumber\\
& \lesssim\|\widetilde{S}_{\alpha,L}(f_0)\|_{L^{1,\infty}(Q,\frac{dx}{|Q|})}^{2}
\lesssim \alpha^{2n}\Big(\fint_{Q^{*}}|f_{0}(x)|dx\Big)^{2}. \end{align} On the other hand, \[ \begin{aligned}
\left(\fint_{Q}|E(f^\infty)(x)|^{\delta}dx\right)^{\frac{1}{\delta}}&\lesssim\frac 1{|Q|}\int_{\mathbb{R}^{n}}\int_{T(2Q)}\Phi\Big(\frac{x-y}{\alpha t}\Big)\Big|Q_{t, L}f(y)\Big|^{2}\frac{dydt}{t^{n+1}}dx\\
&\lesssim\frac{\alpha^{2n}}{|Q|}\int_{T(2Q)}|Q_{t, L}(f^\infty)(y)|^{2}\frac{dydt}{t}, \end{aligned} \] since $\int_{\mathbb{R}^{n}}\Phi\Big(\frac{x-y}{\alpha t}\Big)dx\leq c_{n}(\alpha t)^{n}$ and $\alpha\geq1$.
For any $k\in \mathbb{N}_+$, let $p_{k,t}(x,y)$ denote the kernel of the operator $(tL)^k e^{-tL}$. Note that condition (A2) implies that for any $\delta_0>0$, there exists a constant $C>0$ such that
\begin{align}\label{kernelestimate}
|p_{k,t}(x,y)|\leq \frac{C}{t^{n/ m}} \bigg(\frac{t^{1/ m}}{t^{1/ m}+|x-y|}\bigg)^{n+\delta_0},\; \text{for all} \; x, y\in \mathbb{R}^n. \end{align}
Thus, \eqref{kernelestimate} implies that \[ \begin{aligned}
&\bigg(\int_{2Q}|Q_{t, L}(f^\infty)(y)|^{2}dy\bigg)^{1/ 2}\\
&\lesssim \sum_{j\geq 3}\bigg \{\int_{2Q} \bigg[\int_{2^{j+1}Q\setminus 2^j Q}\frac{1}{t^n}\bigg(\frac{t}{t+|y-z|}\bigg)^{n+\delta_0}|f(z)|dz\bigg]^2dy\bigg\}^{1/2}\\
&\lesssim \sum_{j\geq 3}\bigg \{\int_{2Q} \bigg[\int_{2^{j+1}Q\setminus 2^j Q}\frac{1}{t^n}\bigg(\frac{t}{2^j\ell(Q)}\bigg)^{n+\delta_0}|f(z)|dz\bigg]^2dy\bigg\}^{1/2}\\
&\lesssim \bigg(\frac{t}{\ell(Q)}\bigg)^{\delta_0} |2Q|^{1/ 2} \sum_{j\geq 3} \frac{1}{2^{j\delta_0}} \bigg(\fint_{2^{j}Q}|f(z)|dz\bigg). \end{aligned} \]
Then one has \begin{align*}
& \left(\fint_{Q}|E(f^\infty)(x)|^{\delta}dx\right)^{\frac{1}{\delta}} \\
& \lesssim \bigg[\sum_{l=0}^{\infty}\frac 1{2^{l\delta_0}}\bigg(\fint_{2^{l}Q}|f| dx \bigg)\bigg]^{2}\frac{\alpha^{2n}}{|Q|}\int_{0}^{2\ell(Q)}|2Q|\,(t/\ell(Q))^{2\delta_0}\frac{dt}{t} \\
&\lesssim \alpha^{2n}\bigg[\sum_{l=0}^{\infty}\frac 1{2^{l\delta_0}}\bigg(\fint_{2^{l}Q}|f| dx \bigg)\bigg]^{2}
\lesssim \alpha^{2n}\sum_{l=0}^{\infty}\frac 1{2^{l\delta_0}}\bigg(\fint_{2^{l}Q}|f| dx\bigg)^{2}, \end{align*} where in the last inequality we used H\"older's inequality.
Therefore, we obtain that \begin{equation}\label{E2}
\left(\fint_{Q}|E(f^\infty)(x)|^{\delta}dx\right)^{\frac{1}{\delta}}
\lesssim \alpha^{2n}\sum_{l=0}^{\infty}\frac 1{2^{l\delta_0}}\bigg(\fint_{2^{l}Q}|f| dx\bigg)^{2}. \end{equation} Gathering \eqref{E0}, \eqref{E1} and \eqref{E2}, we deduce that \[ I\lesssim\alpha^{2n}M(f)(x_{0})^{2}. \] To estimate the second term $II$, we shall use the following estimate \cite[Eq (35)]{BD}: \begin{equation}
|F(f)(x)-F(f)(x_{Q})|\lesssim \alpha^{2n}\sum_{l=0}^{\infty}\frac 1{2^{l\delta_1}}\bigg(\fint_{2^{l}Q}|f| dx\bigg)^{2}, \end{equation} for some $\delta_1>0$ and all $x\in Q$, where $x_Q$ is the center of $Q$. As a consequence, we have \[
II=\left(\fint_{Q}|F(f)(x)-F(f)(x_{Q})|^{\delta} dx \right)^{\frac{1}{\delta}}\lesssim\alpha^{2n}Mf(x_{0})^{2}. \] This finishes the proof. \end{proof}
\begin{lemma}\label{lem:MSL} For any $0<\delta<\varepsilon<1$ and for any $b \in \operatorname{BMO}$, \begin{align} M_{\delta}^{\#}(C_b(\widetilde{S}_{\alpha, L})f)(x) \lesssim
\|b\|_{\operatorname{BMO}} \big(M_{L\log L}f(x) +M_{\varepsilon}(\widetilde{S}_{\alpha, L}f)(x) \big). \end{align} \end{lemma}
\begin{proof} Let $x\in \mathbb{R}^n$, and let $Q$ be any arbitrary cube containing $x$. It suffices to show that there exists $c_Q$ such that \begin{align}\label{eq:Cbb}
\mathscr{A}&:=\bigg(\fint_Q |C_b(\widetilde{S}_{\alpha, L})f(z)-c_Q|^{\delta} dz\bigg)^{\frac{1}{\delta}}
\lesssim \|b\|_{\operatorname{BMO}} \big(M_{L\log L}f(x) +M_{\varepsilon}(\widetilde{S}_{\alpha, L}f)(x) \big). \end{align} Split $f=f_1+f_2$, where $f_1=f {\bf 1}_{8Q}$. Then, we have \begin{align}\label{eq:AAA}
\mathscr{A} & \lesssim \bigg(\fint_{Q} |(b(z)-b_Q)\widetilde{S}_{\alpha, L}(f)(z)|^{\delta} dz\bigg)^{\frac{1}{\delta}} \nonumber\\
&\qquad+ \bigg(\fint_Q |\widetilde{S}_{\alpha, L}((b-b_Q)f_{1})(z)|^{\delta} dz\bigg)^{\frac{1}{\delta}} \nonumber\\
&\qquad + \bigg(\fint_Q |\widetilde{S}_{\alpha, L}((b-b_Q)f_2)(z)-c_Q|^{\delta} dz\bigg)^{\frac{1}{\delta}} \nonumber\\ &=:\mathscr{A}_1 + \mathscr{A}_2 + \mathscr{A}_3. \end{align} To bound $\mathscr{A}_1$, we choose $r \in (1, \varepsilon/\delta)$. H\"{o}lder's inequality gives that \begin{align}\label{eq:A1} \mathscr{A}_1
& \leq \bigg(\fint_Q |b(z)-b_Q|^{\delta r'} dz\bigg)^{\frac{1}{\delta r'}}
\bigg(\fint_Q |\widetilde{S}_{\alpha, L}f(z)|^{\delta r} dz\bigg)^{\frac{1}{\delta r}} \nonumber\\
&\lesssim \|b\|_{\operatorname{BMO}} M_{\delta r}(\widetilde{S}_{\alpha, L}f)(x)
\le \|b\|_{\operatorname{BMO}} M_{\varepsilon}(\widetilde{S}_{\alpha, L}f)(x). \end{align} Since $\widetilde{S}_{\alpha, L}: L^{1}(\mathbb{R}^n)\rightarrow L^{1,\infty}(\mathbb{R}^n)$ and $0<\delta <1$, there holds \begin{align}\label{eq:A2}
\mathscr{A}_2 & \lesssim \|\widetilde{S}_{\alpha, L}((b-b_Q)f_1)\|_{L^{1,\infty}(Q, \frac{dx}{|Q|})}
\lesssim \fint_Q |(b-b_Q)f_1|dz \nonumber\\
&\lesssim \|b-b_Q\|_{\exp L, Q} \|f\|_{L\log L, Q} \lesssim \|b\|_{\operatorname{BMO}} M_{L\log L}(f)(x). \end{align} For the last term, we take $c_Q=\widetilde{S}_{\alpha, L}((b-b_Q)f_2)(z_Q)$, where $z_Q$ is the center of $Q$. We have \begin{align}\label{eq:A3}
\mathscr{A}_3 \le \fint_Q |\widetilde{S}_{\alpha, L}((b-b_Q)f_2)(z)-c_Q|\, dz =:\fint_Q J_Q(z)\, dz \le \bigg(\fint_Q J_Q(z)^2\, dz\bigg)^{\frac12}. \end{align} Recall that $T(Q)=Q \times (0, \ell(Q))$ for any cube $Q\subset \mathbb{R}^n$. Thus, for any $z \in Q$, \begin{align}\label{eq:JQJQ}
J_Q(z)^2 & \leq |\widetilde{S}_{\alpha, L}((b-b_Q)f_2)(z)^2 - \widetilde{S}_{\alpha, L}((b-b_Q)f_2)(z_Q)^2| \nonumber\\
&\leq \iint_{T(2Q)} \Phi \Big(\frac{z-y}{\alpha t}\Big) |Q_{t, L}((b-b_Q)f_2)(y)|^2 \frac{dydt}{t^{n+1}} \nonumber\\
&\qquad+ \iint_{T(2Q)} \Phi \Big(\frac{z_Q-y}{\alpha t}\Big) |Q_{t, L}((b-b_Q)f_2)(y)|^2 \frac{dydt}{t^{n+1}} \nonumber\\
&\qquad + \iint_{\mathbb{R}^{n+1}_{+} \setminus T(2Q)} \Big|\Phi \Big(\frac{z-y}{\alpha t}\Big) - \Phi \Big(\frac{z_Q-y}{\alpha t}\Big)\Big|
|Q_{t, L}((b-b_Q)f_2)(y)|^2 \frac{dydt}{t^{n+1}} \nonumber\\ &=: J_{Q, 1}(z)+J_{Q, 2}(z)+ J_{Q, 3}(z). \end{align} In order to estimate $J_{Q, 1}(z)$, we note that \begin{align}\label{eq:QPhi} \fint_Q \Phi \Big(\frac{z-y}{\alpha t}\Big) dz
\leq \frac{1}{|Q|} \int_{\mathbb{R}^n} \Phi \Big(\frac{z-y}{\alpha t}\Big) dz
\lesssim \frac{(\alpha t)^{n}}{|Q|}. \end{align} Furthermore, the kernel estimate \eqref{kernelestimate} gives that \begin{align}\label{eq:QtLbb}
\bigg(\int_{2Q} & |Q_{t, L}((b-b_Q)f_2)(y)|^2 dy\bigg)^{\frac12} \nonumber\\
&\lesssim \sum_{j\geq 3} \bigg\{\int_{2Q} \bigg[\int_{2^{j+1}Q \setminus 2^j Q} \frac{1}{t^{n}}\bigg(\frac{t}{t+|z-y|}\bigg)^{n+\delta_0}|b-b_Q| |f| dz\bigg]^{2}dy\bigg\}^{\frac12} \nonumber\\
&\lesssim \sum_{j\geq 3}\bigg\{\int_{2Q}\bigg[\int_{2^{j+1}Q \setminus 2^jQ} \frac{1}{t^n} \bigg(\frac{t}{2^{j}\ell(Q)}\bigg)^{n+\delta_0} |b-b_Q| |f| dz\bigg]^2 dy \bigg\}^{\frac12} \nonumber\\
&\lesssim \sum_{j\geq 3} \bigg(\frac{t}{2^{j}\ell(Q)}\bigg)^{\delta_0} |Q|^{\frac{1}{2}}\fint_{2^{j+1}Q}|b-b_Q| |f| dz \nonumber\\
&\lesssim \bigg(\frac{t}{\ell(Q)}\bigg)^{\delta_0} |Q|^{\frac{1}{2}} \sum_{j\geq 0} 2^{-j\delta_0} \|b-b_Q\|_{\exp L, 2^{j+1}Q} \|f\|_{L\log L, 2^{j+1}Q} \nonumber\\
&\lesssim \bigg(\frac{t}{\ell(Q)}\bigg)^{\delta_0} |Q|^{\frac{1}{2}} \sum_{j\geq 0} 2^{-j\delta_0} (j+1) \|b\|_{\operatorname{BMO}} M_{L\log L}f(x) \nonumber\\
&\lesssim \bigg(\frac{t}{\ell(Q)}\bigg)^{\delta_0} |Q|^{\frac{1}{2}} \|b\|_{\operatorname{BMO}} M_{L\log L}f(x). \end{align} Then, gathering \eqref{eq:QPhi} and \eqref{eq:QtLbb}, we obtain \begin{align}\label{eq:JQ1} \fint_Q J_{Q, 1}(z)dz
&\leq \iint_{T(2Q)} \bigg(\fint_{Q}\Phi\Big(\frac{z-y}{\alpha t}\Big)dz\bigg) |Q_{t, L}((b-b_Q)f_2)(y)|^{2}\frac{dydt}{t^{n+1}} \nonumber\\
&\lesssim \frac{1}{|Q|} \int_{0}^{2\ell(Q)} \int_{2Q}|Q_{t, L}((b-b_Q)f_2)(y)|^2 dy \frac{dt}{t} \nonumber\\
&\lesssim \int_{0}^{2\ell(Q)} \bigg(\frac{t}{\ell(Q)}\bigg)^{2\delta_0} \frac{dt}{t} \, \|b\|_{\operatorname{BMO}}^2 M_{L\log L}f(x)^2 \nonumber\\
&\lesssim \|b\|_{\operatorname{BMO}}^2 M_{L\log L}f(x)^2. \end{align} Similarly, one has \begin{align}\label{eq:JQ2}
\fint_Q J_{Q, 2}(z)dz \lesssim \|b\|_{\operatorname{BMO}}^2 M_{L\log L}f(x)^2. \end{align} To control $J_{Q, 3}$, invoking \cite[eq. (35)]{BD}, we have \begin{align}\label{eq:JQ3}
J_{Q, 3}(z) &\lesssim \sum_{j\geq 0} 2^{-j\delta_{0}} \bigg(\fint_{2^j Q} |b-b_Q| |f| dz \bigg)^2 \nonumber\\
&\lesssim \sum_{j \geq 0} 2^{-j\delta_0} \|b-b_Q\|^{2}_{\exp L, 2^j Q} \|f\|^{2}_{L\log L,2^j Q} \nonumber\\
&\lesssim \sum_{j\geq 0} 2^{-j\delta_0} (j+1)^{2} \|b\|^{2}_{\operatorname{BMO}} M_{L\log L}f(x)^{2}
\lesssim \|b\|^{2}_{\operatorname{BMO}} M_{L\log L}f(x)^{2}. \end{align} Combining \eqref{eq:A3}, \eqref{eq:JQJQ}, \eqref{eq:JQ1}, \eqref{eq:JQ2} and \eqref{eq:JQ3}, we conclude that \begin{equation}\label{eq:A33}
\mathscr{A}_3 \lesssim \|b\|_{\operatorname{BMO}} M_{L\log L}f(x). \end{equation} Therefore, \eqref{eq:Cbb} immediately follows from \eqref{eq:AAA}, \eqref{eq:A1}, \eqref{eq:A2} and \eqref{eq:A33}. \end{proof}
\begin{lemma}\label{lem:WEC} For any $w \in A_{\infty}$ and $b\in \operatorname{BMO}$, \begin{equation}\label{eq:SLML} \begin{split}
\sup_{t>0} \Phi(1/t)^{-1} & w(\{x\in \mathbb{R}^n: |C_b(S_{\alpha,L})f(x)|>t\}) \\ &\lesssim \sup_{t>0} \Phi(1/t)^{-1} w(\{x\in \mathbb{R}^n: M_{L\log L}f(x)>t\}), \end{split} \end{equation} for all $f \in L^{\infty}_c(\mathbb{R}^n)$. \end{lemma}
\begin{proof} Recall the weak type Fefferman-Stein inequality: \begin{align}\label{WF-S} \sup_{\lambda>0} \varphi(\lambda)\omega(\{x\in \mathbb{R}^n:M_{\delta}f(x)>\lambda\}) \leq C \sup_{\lambda>0}\varphi(\lambda)\omega(\{x\in \mathbb{R}^n:M^{\sharp}_{\delta}f(x)>\lambda\}) \end{align} valid for every function $f$ for which the left-hand side is finite, where $\varphi:(0,\infty)\rightarrow(0,\infty)$ is doubling. We may assume that the right-hand side of \eqref{eq:SLML} is finite since otherwise there is nothing to be proved. Now, since $C_b(S_{\alpha,L})f \le C_b(\widetilde{S}_{\alpha,L})f$ pointwise by \eqref{eq:SSS}, and since $|h| \le M_{\delta}h$ a.e. by the Lebesgue differentiation theorem, we have \begin{align*}
\mathscr{B}:=&\sup_{t>0} \Phi(1/t)^{-1} w(\{x\in \mathbb{R}^n: |C_b(S_{\alpha,L})f(x)|>t\}) \\
\leq&\sup_{t>0} \Phi(1/t)^{-1} w(\{x\in \mathbb{R}^n: |C_b(\widetilde{S}_{\alpha,L})f(x)|>t\}) \\
\leq & \sup_{t>0} \Phi(1/t)^{-1} w(\{x\in \mathbb{R}^n: M_{\delta}(C_b(\widetilde{S}_{\alpha,L})f)(x)>t\}). \end{align*} Then Lemma \ref{lem:MMf}, Lemma \ref{lem:MSL} and \eqref{WF-S} give that \begin{align*} \mathscr{B} & \lesssim \sup_{t>0} \Phi(1/t)^{-1} w(\{x\in \mathbb{R}^n: M_{\delta}^{\#}(C_b(\widetilde{S}_{\alpha, L})f)(x) >t\}) \\ &\lesssim \sup_{t>0} \Phi(1/t)^{-1} w(\{x\in \mathbb{R}^n: M_{L\log L}f(x) +M_{\varepsilon}(\widetilde{S}_{\alpha, L}f)(x)>c_0t\}) \\ &\lesssim \sup_{t>0} \Phi(1/t)^{-1} w(\{x\in \mathbb{R}^n:M_{L\log L}f(x)>t\}) \\ &\quad + \sup_{t>0} \Phi(1/t)^{-1} w(\{x\in \mathbb{R}^n:M_{\varepsilon}(\widetilde{S}_{\alpha, L}f)(x)>t\}) \\ &\lesssim \sup_{t>0} \Phi(1/t)^{-1} w(\{x\in \mathbb{R}^n:M_{L\log L}f(x)>t\})\\ &\quad +\sup_{t>0} \Phi(1/t)^{-1} w(\{x\in \mathbb{R}^n:M^{\sharp}_{\varepsilon}(\widetilde{S}_{\alpha, L}f)(x)>t\}) \\ &\lesssim \sup_{t>0} \Phi(1/t)^{-1} w(\{x\in \mathbb{R}^n:M_{L\log L}f(x)>t\}) \\ &\quad + \sup_{t>0} \Phi(1/t)^{-1} w(\{x\in \mathbb{R}^n:M(f)(x)>t\}) \\ &\lesssim \sup_{t>0} \Phi(1/t)^{-1} w(\{x\in \mathbb{R}^n:M_{L\log L}f(x)>t\}). \end{align*} The proof is complete. \end{proof}
\begin{proof}[\textbf{Proof of Theorem \ref{thm:SbA1}.}] Let $w \in A_1$. By homogeneity, it is enough to prove \begin{align}\label{eq:Cbt=1}
w(\{x\in \mathbb{R}^n: C_b(S_{\alpha,L})f(x)>1\}) \lesssim \int_{\mathbb{R}^n} \Phi(|f(x)|) w(x) dx. \end{align} Let us recall a result from \cite[Lemma~2.11]{CY} for $m=1$. For any $w \in A_1$, \begin{align}\label{eq:MLL}
w(\{x\in \mathbb{R}^n: M_{L\log L}f(x)>t\}) \lesssim \int_{\mathbb{R}^n} \Phi \bigg(\frac{|f(x)|}{t}\bigg) w(x) dx, \quad\forall t>0. \end{align} Since $\Phi$ is submultiplicative, Lemma \ref{lem:WEC} and \eqref{eq:MLL} imply that \begin{align*} &w(\{x\in \mathbb{R}^n: C_b(S_{\alpha,L})f(x)>1\}) \\
&\qquad \lesssim \sup_{t>0} \Phi(1/t)^{-1} w(\{x\in \mathbb{R}^n: |C_b(S_{\alpha,L})f(x)|>t\}) \\ &\qquad \lesssim \sup_{t>0} \Phi(1/t)^{-1} w(\{x\in \mathbb{R}^n:M_{L\log L}f(x)>t\}) \\
&\qquad \lesssim \sup_{t>0} \Phi(1/t)^{-1} \int_{\mathbb{R}^n} \Phi \bigg(\frac{|f(x)|}{t}\bigg) w(x) dx \\
&\qquad \leq \sup_{t>0} \Phi(1/t)^{-1} \int_{\mathbb{R}^n} \Phi (|f(x)|) \Phi(1/t) w(x)dx \\
&\qquad\le \int_{\mathbb{R}^n} \Phi(|f(x)|) w(x) dx. \end{align*} This shows \eqref{eq:Cbt=1} and hence Theorem \ref{thm:SbA1}. \end{proof}
\end{document}
\begin{document}
\title{Simulation of SPDE's for Excitable Media using Finite Elements \thanks{This work was supported by the Agence Nationale de la Recherche through the project MANDy, Mathematical Analysis of Neuronal Dynamics, ANR-09-BLAN-0008-01}}
\author{Muriel Boulakia \and Alexandre Genadot \and Michèle~Thieullen} \institute{ M. Boulakia \at
Sorbonne Universités, UPMC Univ Paris 06, UMR 7598, Laboratoire Jacques-Louis Lions, F-75005, Paris, France.\\ CNRS, UMR 7598, Laboratoire Jacques-Louis Lions, F-75005, Paris, France.\\ INRIA-Paris-Rocquencourt, EPC REO, Domaine de Voluceau, BP105, 78153 Le Chesnay Cedex.\\ \email{[email protected]}
\and
A. Genadot \at
Université Paris-Dauphine, UMR 7534, Centre De Recherche en Mathématiques de la Décision, 75775, Paris, France.\\ \email{[email protected]}
\and M. Thieullen \at
Sorbonne Universités, UPMC Univ Paris 06, UMR 7599, Laboratoire de Probabilités et Modèles Aléatoires, F-75005, Paris, France.\\ \email{[email protected]} }
\date{Received: date / Accepted: date} \maketitle \begin{abstract} In this paper, we address the question of the discretization of Stochastic Partial Differential Equations (SPDE's) for excitable media. Working with SPDE's driven by colored noise, we consider a numerical scheme based on finite differences in time (Euler-Maruyama) and finite elements in space. Motivated by biological considerations, we study numerically the emergence of reentrant patterns in excitable systems such as the Barkley or Mitchell-Schaeffer models.
\keywords{Stochastic partial differential equation \and Finite element method \and Excitable media} \subclass{60H15 \and 60H35 \and 65M60} \end{abstract}
\section{Introduction}
The present paper is concerned with the numerical simulation of Stochastic Partial Differential Equations (SPDE's) used to model excitable cells in order to analyze the effect of noise on such biological systems. Our aim is twofold. The first is to propose an efficient and easy-to-implement method to simulate this kind of model. We focus our work on practical numerical implementation with software used for deterministic PDE's such as FreeFem++ \cite{FF} or equivalent. The second is to analyze the effect of noise on these systems by means of numerical experiments. More precisely, in models for cardiac cells, we investigate the possibility of purely noise induced reentrant patterns such as spiral or scroll waves, as these phenomena are related to major disturbances of the cardiac rhythm such as tachyarrhythmia. For the numerical experiments, we focus on the Barkley and Mitchell-Schaeffer models, both originally deterministic models to which we add a noise source. \\ Mathematical models for excitable systems may describe a wide range of biological phenomena. Among these phenomena, the best known and most studied are certainly the two following ones: the generation and propagation of the nerve impulse along a nerve fiber and the generation and propagation of a cardiac pulse in cardiac cells. For both, following the seminal work \cite{HH52}, very detailed models known as conductance based models have been developed, describing the physio-biological mechanism leading to the generation and propagation of an action potential. These physiological models are quite difficult to handle mathematically and phenomenological models have been proposed. These models describe qualitatively the generation and propagation of an action potential in excitable systems; examples are the Morris-Lecar model for the nerve impulse and the FitzHugh-Nagumo model for the cardiac potential. In the present paper, we will consider two phenomenological models: the Barkley and the Mitchell-Schaeffer models.\\ Mathematically, the models that we will consider consist of a degenerate system of Partial Differential Equations (PDE's) driven by a stochastic term, often referred to as noise. More precisely, the model may be written \begin{equation}\label{eq_model} \left\{ \begin{array}{ccl} {\rm d}u&=&[\nu\Delta u+\frac1\varepsilon f(u,v)]{\rm d}t+\sigma {\rm d} W,\\ {\rm d} v&=&g(u,v){\rm d} t, \end{array} \right. \end{equation} on $[0,T]\times D$, where $D$ is a regular bounded open set of $\mathbb{R}^2$ or $\mathbb{R}^3$. This system is completed with boundary and initial conditions. $W$ is a colored Gaussian noise source which will be defined more precisely later. System (\ref{eq_model}) is degenerate in two ways: there is neither a spatial operator such as the Laplacian nor a noise source in the equation for $v$. All the considered models have the features of classical stochastic PDEs for excitable systems. The general structure of $f$ and $g$ is also typical of excitable dynamics. In particular, in the models that we will consider, the nullcline $f(u,v)=0$ for $v$ held fixed is cubic in shape.\\ To achieve our first aim, that is to numerically compute a solution of system (\ref{eq_model}), we work with a numerical scheme based on finite difference discretization in time and the finite element method in space. The choice of finite element discretization in space has been guided by two considerations.
The first is that this method adapts naturally to general spatial domains: we want to investigate the behavior of solutions to (\ref{eq_model}) on domains with various geometries. The second is that it allows the method to be implemented with popular software used to simulate deterministic PDE's, such as the finite element software FreeFem++ or equivalent. The discretization of SPDE's by finite differences in time and finite elements in space has been considered by several authors in theoretical studies, see for example \cite{DePr09,CYY07,Kr14,KL12,LT11,Wa05}. Other methods of discretization are considered for example in \cite{ANZ98,J09,JR10,KLL10,LT10,Y05}. These methods are based on finite difference discretization in time coupled either to finite differences in space or to the Galerkin spectral method, or to the finite element method on the integral formulation of the evolution equation. We emphasize that we do not consider in this paper a Galerkin spectral method or an exponential integrator, that is, roughly speaking, we neither use the spectral decomposition of the solution of (\ref{eq_model}) according to a Hilbert basis of $L^2(D)$ (or another Hilbert space related to $D$) nor the semigroup attached to the linear operator (the Laplacian in (\ref{eq_model})), in order to build our scheme. We only use the variational formulation of the problem in order to fit the commonly used finite element methods for deterministic PDE's, see \cite{BSK81,CL91,GMV12}. Moreover, the present paper is more oriented toward numerical applications than the above cited papers, in the spirit of \cite{Sh05}. In \cite{Sh05}, the author numerically analyzes the effect of noise on excitable systems by means of a Galerkin spectral discretization on the square. In the present paper, we pursue the same objective using the finite element method instead of the Galerkin spectral one. We believe that the finite element method is easier to adapt to various spatial domains. Let us notice that a discretization scheme for SPDE's driven by white noise on spatial domains of dimension greater than or equal to 2 may lead to nontrivial phenomena, see \cite{HRW12}. Considering colored noises may also be seen as a way to circumvent these difficulties.\\ As is well known, one can consider two types of errors related to a numerical scheme for stochastic evolution equations: the strong error and the weak error. The strong error for the discretization that we consider has been analyzed for one dimensional spatial domains (line segments) in \cite{Wa05}. The weak error for more general spatial domains, of dimension 2 or 3 for example, has been considered in \cite{DePr09}. In the present paper, we recall results about the strong error of convergence of the scheme because we want to numerically investigate pathwise properties of the model. Working with spatial domains of dimension $d$, as obtained in \cite{Kr14,KL12}, the strong order of convergence of the considered method for a class of linear stochastic equations is half the weak order obtained in \cite{DePr09}. This is what is expected since this same duality between weak and strong order holds for the discretization of finite dimensional stochastic differential equations (SDE's).\\ Our motivation for considering systems such as (\ref{eq_model}) comes from biological considerations. In the cardiac muscle, tachyarrhythmia is a disturbance of the heart rhythm in which the heart rate is abnormally increased.
This is a major disorder of the cardiac rhythm since it may lead to rapid loss of consciousness and to death. As explained in \cite{Hi02,JC06}, the vast majority of tachyarrhythmias are perpetuated by a reentrant mechanism. In several studies, it has been observed that deterministic excitable systems of type (\ref{eq_model}) are able to generate sustained reentrant patterns such as spirals or meandering waves, see for example \cite{K80,B90}. We show numerically that reentrant patterns may be generated and perpetuated by the presence of noise alone. We perform the simulations on the Barkley model, whose deterministic version has been intensively studied in \cite{B90,B92,B94}, and on the Mitchell-Schaeffer model, which yields a more realistic shape for the action potential in cardiac cells \cite{MS03,BCFGZ10}. For the Barkley model, similar experiments are presented in \cite{Sh05}, where the Galerkin spectral method is used as the simulation scheme on a square domain. In our simulations, done on a square with periodic conditions or on a smoothed cardioid, we observe two kinds of reentrant patterns due to noise: the first may be seen as a scroll wave phenomenon whereas the second corresponds to a spiral phenomenon. Both phenomena may be regarded as sources of tachyarrhythmia since in both cases, areas of the spatial domain are successively activated by the same wave which re-enters the region.\\ All the simulations in the present paper have been performed using the FreeFem++ finite element software, see \cite{FF}. This software offers the advantage of providing the mesh of the domain and the corresponding finite element basis, and of solving on its own the linear problems related to the finite element discretization of the model. One original aspect of the present work is the use of this software to simulate stochastic PDE's.\\ Let us emphasize that the generic model (\ref{eq_model}) is endowed with a timescale parameter $\varepsilon$. The presence of this parameter is fundamental for the observation of traveling waves in the system: $\varepsilon$ forces the system to be either quiescent or excited, with a sharp transition between the two states. Moreover, the values of the timescale parameter $\varepsilon$ and the strength of the noise $\sigma$ appear to be of primary importance for obtaining reentrant patterns. This fact is also pointed out by our numerical bifurcation analysis. Let us mention that noise induced phenomena have been studied in \cite{BG06} for finite dimensional systems of stochastic differential equations. The theoretical study of slow-fast SPDE's through averaging methods has been considered in \cite{Br12,CF09,WR12}.\\ In a forthcoming work, we plan to address the effect of noise on deterministic periodic forcing of the Barkley and Mitchell-Schaeffer models. We expect to observe, as in \cite{JT10} for the one dimensional case, the annihilation by weak noise of some of the waves initiated by deterministic periodic forcing. We also want to investigate stochastic resonance phenomena in such a situation. From a theoretical point of view, we intend to derive the strong order of convergence of the discretization method used in the present paper for non-linear equations and systems of equations such as the FitzHugh-Nagumo, Barkley or Mitchell-Schaeffer models, but also for simplified conductance based models.\\ The remainder of the paper is organized as follows.
In Section \ref{sect_noise}, we begin with the definition of the noise source in system (\ref{eq_model}) and present its finite element discretization. In Section \ref{sect_heat_FHN}, we introduce a discretization scheme based on the finite element method in space for a stochastic heat equation and we recall the estimates from \cite{Kr14,KL12} for the strong order of convergence. Then we apply the method to the FitzHugh-Nagumo model. In Section \ref{sect_arrhy}, we investigate the influence of noise on the Barkley and Mitchell-Schaeffer models. We show that noise may initiate reentrant patterns which are not observable in the deterministic case. We also provide numerical bifurcation diagrams in the noise intensity $\sigma$ and the timescale $\varepsilon$ of the models. Finally, some technical proofs are postponed to the Appendix.
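Although the finite element implementation is described in the sections below, it may help the reader to keep in mind a minimal time-stepping sketch for system (\ref{eq_model}). The Python snippet below performs a semi-implicit Euler-Maruyama step with the Barkley nonlinearities, using a one-dimensional finite difference Laplacian and a Gaussian covariance kernel as simplified stand-ins for the finite element discretization and the noise model studied in this paper; all numerical values and names in the snippet are illustrative choices and are not taken from the simulations reported later.
\begin{verbatim}
import numpy as np

# Illustrative semi-implicit Euler-Maruyama scheme for the system
#   du = [nu*Lap(u) + f(u,v)/eps] dt + sigma dW,   dv = g(u,v) dt,
# with Barkley nonlinearities, a 1D finite difference Laplacian
# (Neumann boundary conditions) and a stationary Gaussian kernel
# C(x-y).  All parameter values are arbitrary illustrative choices.

def barkley_f(u, v, a=0.75, b=0.02):
    return u * (1.0 - u) * (u - (v + b) / a)

def barkley_g(u, v):
    return u - v

def neumann_laplacian(n, h):
    L = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    L[0, 0] = L[-1, -1] = -1.0
    return L / h ** 2

def simulate(T=5.0, n=200, length=20.0, nu=1.0, eps=0.05,
             sigma=0.1, corr=1.0, dt=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    x = np.linspace(0.0, length, n)
    h = x[1] - x[0]
    cov = np.exp(-0.5 * ((x[:, None] - x[None, :]) / corr) ** 2)
    noise_factor = np.linalg.cholesky(cov + 1e-10 * np.eye(n))
    A = np.eye(n) - dt * nu * neumann_laplacian(n, h)  # implicit diffusion
    u = np.where(x < 2.0, 1.0, 0.0)                    # initial excitation
    v = np.zeros(n)
    for _ in range(int(T / dt)):
        dW = np.sqrt(dt) * noise_factor @ rng.standard_normal(n)
        u = np.linalg.solve(A, u + dt * barkley_f(u, v) / eps + sigma * dW)
        v = v + dt * barkley_g(u, v)
    return x, u, v

if __name__ == "__main__":
    x, u, v = simulate()
    print(u.max(), v.max())
\end{verbatim}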
\section{Finite element discretization of $Q$-Wiener processes.}\label{sect_noise}
\subsection{Basic facts on $Q$-Wiener processes}\label{sect_Q}
Let $D$ be an open bounded domain of $\mathbb{R}^d$, $d=2$ or $3$, containing the origin and with polyhedral frontier. We denote by $L^2(D)$ the set of square integrable measurable functions with respect to the Lebesgue measure on $\mathbb{R}^d$. Writing $H$ for $L^2(D)$, we recall that $H$ is a real separable Hilbert space and we denote its usual scalar product by $(\cdot,\cdot)$ and the associated norm by $\|\cdot\|$. They are respectively given by \[
\forall (\phi_1,\phi_2)\in H\times H,\quad (\phi_1,\phi_2)=\int_D \phi_1(x)\phi_2(x){\rm d}x,\quad \|\phi_1\|=\left(\int_D \phi_1(x)^2{\rm d}x\right)^{\frac12}. \] Let $Q$ be a non-negative symmetric trace-class operator on $H$. Let us recall the definition of a $Q$-Wiener process on $H$ which can be found in \cite{PeZa07}, Section 4.4, as well as the basic properties of such a process. \begin{definition} Let $Q$ be a non-negative symmetric trace-class operator on $H$. There exists a probability space $(\Omega,\mathcal{F},\mathbb{P})$ on which we can define a stochastic process $(W^Q_t,~t\in\mathbb{R}_+)$ on $H$ such that \begin{itemize} \item For each $t\in\mathbb{R}_+$, $W^Q_t$ is a $H$-valued random variable. \item $W^Q$ starts at $0$ at time $0$: $W^Q_0=0_{H}$, $\mathbb{P}$-a.s. \item $(W^Q_t,~t\in\mathbb{R}_+)$ is a Lévy process, that is, it is a process with independent and stationary increments: \begin{itemize} \item Independent increments: for a sequence $t_1,\ldots,t_n$ of strictly increasing times, the random variables $W^Q_{t_2}-W^Q_{t_1},\ldots,W^Q_{t_n}-W^Q_{t_{n-1}}$ are independent. \item Stationary increments: for two times $s<t$, the random variable $W^Q_{t}-W^Q_{s}$ has same law as $W^Q_{t-s}$. \end{itemize} \item $(W^Q_t,~t\in\mathbb{R}_+)$ is a Gaussian process: for any $t\in\mathbb{R}_+$ and any $\phi\in H$, $(W^Q_t,\phi)$ is a real centered Gaussian random variable with variance $t(Q\phi,\phi)$. \item $(W^Q_t,~t\in\mathbb{R}_+)$ is a $H$-valued pathwise continuous process, $\mathbb{P}$-almost surely. \end{itemize} \end{definition} \noindent We recall the definition of non-negative symmetric linear operator on $H$ admitting a kernel. \begin{definition} A non-negative symmetric linear operator $Q:H\to H$ is a linear operator defined on $H$ such that \[ \forall (\phi_1,\phi_2)\in H\times H, \quad (Q\phi_1,\phi_2)=(Q\phi_2,\phi_1),\quad (Q\phi_1,\phi_1)\geq0. \] Let $q$ be a real valued integrable function on $D\times D$ such that \begin{align*} &\forall(x,y)\in \overline{D}\times \overline{D},\quad q(x,y)=q(y,x),\\ &\forall M\in\mathbb{N},\forall x_i,y_j\in \overline{D},\forall a_i\in\mathbb{R},i,j=1,\ldots M,\quad \sum_{i,j=1}^Mq(x_i,y_j)a_ia_j\geq0, \end{align*} that is $q$ is symmetric and non-negative definite on $\overline{D}\times \overline{D}$. We say that $Q$ has the kernel $q$ if \[ \forall \phi\in H,\forall x\in D,\quad Q\phi(x)=\int_D\phi(y)q(x,y){\rm d}y. \] \end{definition} \noindent Let $Q: H\to H$ be a non-negative symmetric operator with kernel $q$. Then $Q$ is a trace class operator whose trace is given by \[ {\rm Tr}(Q)=\int_D q(x,x){\rm d}x. \] For examples of kernels and basic properties of symmetric non-negative linear operators on Hilbert spaces, we refer to \cite{PeZa07}, Section 4.9.2 and Appendix A. Let us now state clearly our assumptions on the operator $Q$. \begin{assumption}\label{cond_C} The operator $Q$ is a non-negative symmetric operator with kernel $q$ given by \[ \forall (x,y)\in \overline{D}\times \overline{D},\quad q(x,y)=C(x-y), \] where $C$ belongs to $\mathcal{C}^{2}(\overline{D})$ and is an even function on $\overline{D}$ satisfying: \[ \forall M\in\mathbb{N},\forall x_i,y_j\in \overline{D},\forall a_i\in\mathbb{R},i,j=1,\ldots M,\quad \sum_{i,j=1}^MC(x_i-y_j)a_ia_j\geq0. \]
In particular, $\nabla C(0)=0$ and $x\mapsto \frac{C(x)-C(0)}{|x|^2}$ is bounded on a neighborhood of zero. \end{assumption} \noindent For $x\in \overline{D}$ and $t\in\mathbb{R}_+$, one can show, see \cite{PeZa07}, Section 4.4, that we can define $W^Q_t$ at the point $x$ in such a way that the process $(W^Q_t(x),(t,x)\in\mathbb{R}_+\times \overline{D})$ is a centered Gaussian process with covariance between the points $(t,x)$ and $(s,y)$ given by \[ \mathbb{E}\left(W^Q_t(x)W^Q_s(y)\right)=t\wedge s~q(x,y). \] In this case, the correlations in time are said to be white whereas the correlations in space are colored by the kernel $q$. \begin{proposition}\label{Cont_Bruit} Under Assumption \ref{cond_C}, the process $(W^Q_t(x),(t,x)\in\mathbb{R}_+\times \overline{D})$ has a version with continuous paths in space and time. \end{proposition} \begin{proof} This is an easy application of the Kolmogorov-Chentsov test, see \cite{DaZa92} Chapter 3, Section 3.2. Note that the regularity of $C$ is important to get the result. \qed \end{proof} \begin{remark}\label{Diff_Bruit2} The smoother the kernel $q$ is, the more regular the Wiener process is. For example, let $\vec \iota\in \mathbb{R}^d$ and define $f(x)=\vec{\iota}\cdot{\rm Hess}C(x)\vec{\iota}$ for $x\in \overline{D}$. Suppose that $f$ is a twice differentiable function. Then, one can show that there exists a probability space on which $(W^Q_t,~t\in\mathbb{R}_+)$ is twice differentiable in the direction $\vec \iota$. \end{remark} \begin{remark} Let us assume that there exists a constant $\alpha$ and a (small) positive real $\delta$ such that \[
\forall y\in \overline{D},\quad |C(0)-C(y)|\leq \alpha|y|^{2+\delta}. \] Using the Kolmogorov-Chentsov continuity theorem, one can show that the process \[(W^Q_t(x),(t,x)\in \mathbb{R}_+\times \overline{D})\] has a modification which is $\gamma_1$-Hölder in time for all $\gamma_1\in\left(0,\frac12\right)$ and $\gamma_2$-Hölder in space for all $\gamma_2\in\left(0,1+\frac{\delta}{2}\right)$. Thus if $\delta>0$, by Rademacher theorem, for $\gamma_2=1$, this version is almost everywhere differentiable on $\overline{D}$. This is another way to obtain regularity in space for $W^Q$ without using another probability space. \end{remark}
\subsection{Finite element discretization}\label{section_fe}
In this part, we assume that $Q$ satisfies Assumption \ref{cond_C}. Let us present our approximation of the $Q$-Wiener process $W^Q$. We begin with the discretization of the domain $D$. Let $\mathcal{T}_h$ be a family of triangulations of the domain $D$ by triangles ($d=2$) or tetrahedra ($d=3$). The size of $\mathcal{T}_h$ is given by \[ h=\max_{T\in\mathcal{T}_h} h(T), \]
where $h(T)=\max_{x,y\in T}|x-y|$ is the diameter of the element $T$. We assume that there exists a positive constant $\rho$ such that \begin{equation}\label{hyp_triangle} \forall h>0,~\forall T\in\mathcal{T}_h,\quad\exists x\in T,\quad T\subset {\rm B}(x,\rho h), \end{equation} where ${\rm B}(x,r)$ stands for the Euclidean ball centered at $x$ with radius $r$. We assume further that this triangulation is regular (see \cite{RaTh83} p. 108 for a definition), as in Figure \ref{Fig_mesh}, where a triangulation is displayed and the property (\ref{hyp_triangle}) is illustrated. \begin{figure}
\caption{Meshing (triangulation) of a domain and illustration of (\ref{hyp_triangle})}
\label{Fig_mesh}
\end{figure} \noindent In the present work, we consider two kinds of finite elements: the Lagrangian P0 and P1 finite elements. However, the method could be adapted to other finite elements. The basis associated to the P0 finite element method is \[ \mathcal{B}_{0,\mathcal{T}_h}=\{1_T,T\in\mathcal{T}_h\}, \] where the function $1_T$ denotes the indicator function of the element $T$. Let $\{P_i, 1\leq i\leq N_h\}$ be the set of all the nodes associated to the triangulation $\mathcal{T}_h$. The basis for the P1 finite element method is given by \[ \mathcal{B}_{1,\mathcal{T}_h}=\{\psi_i,1\leq i\leq N_h\}, \] where $\psi_i$ is the continuous piecewise affine function on $D$ defined by $\psi_i(P_j)=\delta_{ij}$ (Kronecker symbol) for all $1\leq i,j\leq N_h$. \begin{definition}\label{Def_app} The P0 approximation of the noise $W^Q$ is given for $t\in\mathbb{R}_+$ by \begin{equation}\label{noise_P0} W^{Q,h,0}_t=\sum_{T\in\mathcal{T}_h}W^Q_t(g_T)1_T, \end{equation} where $g_T$ is the center of gravity of $T$. The P1 approximation is \begin{equation}\label{noise_P1} W^{Q,h,1}_t=\sum_{i=1}^{N_h}W^Q_t(P_i)\psi_i. \end{equation} We will also consider the following alternative choice for the P0 discretization \begin{equation}\label{noise_P0_CYY}
W^{Q,h,0_a}_t=\sum_{T\in\mathcal{T}_h}\frac{1}{|T|}(W^Q_t,1_T)1_T. \end{equation} \end{definition} \noindent $W^{Q,h,0_a}$ corresponds to an orthonormal projection on P0. These approximations are again Wiener processes as stated in the following proposition. \begin{proposition} For $i\in\{0,0_a,1\}$ the stochastic processes $(W^{Q,h,i}_t,~t\in\mathbb{R}_+)$ are centered $Q^{h,i}$-Wiener processes where, for $\phi\in H$ \begin{eqnarray*} Q^{h,0}\phi&=&\sum_{T,S\in\mathcal{T}_h}(1_T,\phi)q(g_T,g_S)1_S,\\
Q^{h,0_a}\phi&=&\sum_{T,S\in\mathcal{T}_h}(1_T,\phi)\frac{(Q1_T,1_S)}{|T||S|}1_S\\ \end{eqnarray*} and \[ Q^{h,1}\phi=\sum_{i,j=1}^{N_h}(\psi_i,\phi)q(P_i,P_j)\psi_{j}. \] \end{proposition} \begin{proof} The fact that for $i\in\{0,0_a,1\}$ the stochastic processes $(W^{Q,h,i}_t,~t\in\mathbb{R}_+)$ are Wiener processes is a direct consequence of their definition as linear functionals of the Wiener process $(W^Q_t,~t\in\mathbb{R}_+)$, see Definition \ref{Def_app}. The corresponding covariance operators are obtained by computing the quantity \[\mathbb{E}((W^{Q,h,i}_1,\phi_1)(W^{Q,h,i}_1,\phi_2)) \] for $i\in\{0,0_a,1\}$ and $\phi_1,\phi_2\in H$ (the details are left to the reader). \qed \end{proof} \noindent The P0 approximation (\ref{noise_P0_CYY}) of the noise has been considered for white noise in dimension 2 in \cite{CYY07}. White noise corresponds to $Q={\rm Id}_{H}$. In the white noise case, the associated Wiener process is not at all regular in space (the trace of $Q$ is infinite in this case). In the present paper, we work with trace class operators and thus with noises which are regular in space. Notice that discretization schemes for SPDE driven by white noise for spatial domains of dimension greater or equal to 2 may lead to non trivial phenomena. In particular, usual schemes may not converge to the desired SPDE, see \cite{HRW12}. \begin{theorem}[A global error]\label{prop_glob_err} For any $\tau\in\mathbb{R}_+$ and $i\in\{0,0_a,1\}$ we have \[
\mathbb{E}\left(\sup_{t\in[0,\tau]}\|W^Q_t-W^{Q,h,i}_t\|^2\right)\leq K\tau h^2 \]
where $K$ may be written $K=\tilde{K}\|{\rm Hess}~C\|_{\infty}$ for some constant $\tilde{K}$ which only depends on $|D|$. \end{theorem} \begin{proof} The proof is postponed to Appendix \ref{app_bgt_1}.\qed \end{proof} \noindent Let us comment on the above result. As will be the case in the numerical experiments, let us take the following special form for the kernel \[
\forall x\in D,\quad C_\xi(x)=\frac{a}{\xi^2}e^{-\frac{b}{\xi^2}|x|^2} \] for three positive real numbers $a,b,\xi$, where $D$ is a bounded domain in dimension $2$. This is a so-called Gaussian kernel. To a particular $\xi$-dependent kernel $C_\xi$, we associate the corresponding $\xi$-dependent covariance operator $Q_\xi$. Then, according to Theorem \ref{prop_glob_err}, in this particular case, we see that \[
\mathbb{E}\left(\sup_{t\in[0,\tau]}\|W^{Q_\xi}_t-W^{Q_\xi,h,i}_t\|^2\right)={\rm O}\left(\tau \frac{h^2}{\xi^4}\right) \] for any $\tau\in\mathbb{R}_+$. Thus, when $\xi$ goes to zero, this estimate becomes useless since the right-hand side goes to infinity. In fact, when $\xi$ goes to zero, $C_\xi$ converges in the distributional sense to a Dirac mass, and $W^{Q_\xi}$ tends to a white noise which is, as mentioned before, an irregular process. In particular, the white noise does not belong to $H$, which is why our estimate is no longer useful in this case. The same phenomenon occurs for any bounded domain of dimension $d\geq2$. Let us mention that for white noise acting on steady PDEs and on particular domains (square and disc), the error considered in Theorem \ref{prop_glob_err} has been studied in \cite{CYY07}; the regularity of the colored noise improves these estimates in our case. We also remark that the proof of Theorem \ref{prop_glob_err} does not rely on the regularity of the functions of the finite element basis of P0 or P1 here. The key points are that $C$ is smooth enough and even, and that $\sum_i\phi_i=1$, where $\{\phi_i\}$ corresponds to the finite element basis. \\ \noindent To conclude this section, we display some simulations. In Figure \ref{Fig_bruit_P1} we show simulations of the noise $W^{Q_\xi}_1$ with covariance kernel defined by \begin{equation}\label{kernel_xi}
\forall (x,y)\in D\times D,\quad q_\xi(x,y)=C_\xi(x-y)=\frac{1}{4\xi^2}e^{-\frac{\pi}{4\xi^2}|x-y|^2}, \end{equation} where $\xi>0$. We use the same kernel as in \cite{Sh05} for comparison purposes. As already mentioned, when $\xi$ goes to zero, the considered colored noise tends to a white noise. On the contrary, when $\xi$ increases, the correlation between two distinct areas increases as well. This property is illustrated in Figure \ref{Fig_bruit_P1}. In other words, $\xi$ is a parameter which allows one to control the spatial correlation.\\ In these simulations, we have used the P1 discretization $W^{Q_\xi,h,1}$, which reads \[ W^{Q_\xi,h,1}_1=\sum_{i=1}^{N_h}W^{Q_\xi}_1(P_i)\psi_i. \] We remark that the family $\{W^{Q_\xi}_1(P_i),1\leq i\leq N_h\}$ is a centered Gaussian vector with covariance matrix $(q_\xi(P_i,P_j))_{1\leq i,j\leq N_h}$. Using some basic linear algebra, it is not difficult to simulate a realization of this vector and to project it on the P1 finite element basis to obtain Figure \ref{Fig_bruit_P1}; a sketch of this procedure is given after the figure. \begin{figure}
\caption{Simulations of $W^{Q_\xi}_1$ with $\xi=1,1.5,2,3$ with P1 finite elements.}
\label{Fig_bruit_P1}
\end{figure}
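\noindent For completeness, we sketch this sampling procedure below in Python. The mesh handling and the P1 projection performed by FreeFem++ are not reproduced: the sketch only samples, via a Cholesky factorization, the centered Gaussian vector $(W^{Q_\xi}_1(P_i))_{1\leq i\leq N_h}$ with covariance matrix $(q_\xi(P_i,P_j))_{i,j}$, here at randomly generated points standing in for the mesh vertices. The small diagonal jitter is a purely numerical safeguard for the factorization and is not part of the model.
\begin{verbatim}
import numpy as np

def sample_noise_at_nodes(nodes, xi, rng):
    """Sample (W^{Q_xi}_1(P_i))_i: a centered Gaussian vector with covariance
    matrix q_xi(P_i,P_j) = exp(-pi |P_i-P_j|^2 / (4 xi^2)) / (4 xi^2)."""
    diff = nodes[:, None, :] - nodes[None, :, :]        # pairwise differences P_i - P_j
    dist2 = np.sum(diff ** 2, axis=-1)                  # squared distances
    cov = np.exp(-np.pi * dist2 / (4.0 * xi ** 2)) / (4.0 * xi ** 2)
    chol = np.linalg.cholesky(cov + 1e-8 * np.eye(len(nodes)))  # jitter: numerical only
    return chol @ rng.standard_normal(len(nodes))

rng = np.random.default_rng(0)
nodes = rng.uniform(0.0, 10.0, size=(200, 2))           # stand-in for the mesh vertices P_i
w_at_nodes = sample_noise_at_nodes(nodes, xi=2.0, rng=rng)
# The discretized field is then W^{Q_xi,h,1}_1 = sum_i w_at_nodes[i] * psi_i.
\end{verbatim}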
\noindent We now propose a log-log graph to illustrate the estimate of Theorem \ref{prop_glob_err}. In the case where $\overline{D}$ is the square $[0,1]\times[0,1]$, let us consider the kernel $q$ given by: \[ \forall (x,y)\in\overline{D}\times \overline{D},\quad q(x,y)=f_{k_0p_0}(x)f_{k_0p_0}(y), \] where for two given integers $k_0,p_0\geq1$, $f_{k_0p_0}(x)=2\sin(k_0\pi x_1)\sin(p_0\pi x_2)$ (if $x=(x_1,x_2)\in\overline{D}$). Then, the covariance operator is given by $Q\phi=(\phi,f_{k_0p_0})f_{k_0p_0}$ for any $\phi\in L^2(\overline{D})$. Moreover, one can show that \[ \forall t\geq0,\quad W^Q_t=\beta_t f_{k_0p_0} \] for some real-valued Brownian motion $\beta$. All the calculations are straightforward in this setting. Suppose, in the finite element setting, that the square $\overline{D}$ is covered by $2N^2$ triangles, $N\in\mathbb{N}$. For any $N\in\mathbb{N}$, we denote by $W^{Q,N,0}_{1}$ the P0 approximation of $W^Q_1$ given by (\ref{noise_P0}). We show in Figure \ref{fig_P0_err} the log-log graph of the (discrete) function: \[
N\mapsto \mu_N=\mathbb{E}(\|W^Q_1-W^{Q,N,0}_{1}\|^2). \] According to Theorem \ref{prop_glob_err}, we should have $\mu_N={\rm O}\left(\frac{1}{2N^2}\right)$. This result is recovered numerically in Figure \ref{fig_P0_err}. \begin{figure}
\caption{Log-log graph of $N\mapsto \mu_N$ for $N=5,10,20,30$. In green is the comparison with a line of slope $-2$.}
\label{fig_P0_err}
\end{figure}
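\noindent In this rank-one example, the error $\mu_N$ can also be checked independently of any sampling of $W^Q$: since $W^Q_1=\beta_1 f_{k_0p_0}$ with $\beta_1\sim\mathcal{N}(0,1)$, we simply have $\mu_N=\|f_{k_0p_0}-\sum_{T\in\mathcal{T}_h}f_{k_0p_0}(g_T)1_T\|^2$. The Python sketch below estimates this integral by Monte Carlo quadrature on the uniform triangulation of the unit square into $2N^2$ triangles (we take $k_0=p_0=1$ for illustration, the values used for Figure \ref{fig_P0_err} not being specified); it reproduces the slope $-2$ of the log-log graph, but it is of course not the FreeFem++ computation behind the figure.
\begin{verbatim}
import numpy as np

def mu_N(N, k0=1, p0=1, n_quad=200_000, seed=0):
    """Monte Carlo quadrature of ||f_{k0 p0} - its centroid-value P0 interpolation||^2
    on [0,1]^2, for the uniform triangulation into 2*N^2 triangles."""
    f = lambda x, y: 2.0 * np.sin(k0 * np.pi * x) * np.sin(p0 * np.pi * y)
    rng = np.random.default_rng(seed)
    x, y = rng.uniform(size=n_quad), rng.uniform(size=n_quad)
    i, j = np.floor(N * x), np.floor(N * y)          # cell indices
    lx, ly = N * x - i, N * y - j                    # local coordinates in the cell
    lower = lx + ly <= 1.0                           # which of the two triangles
    c = np.where(lower, 1.0 / 3.0, 2.0 / 3.0)        # centroid, local coordinates
    gx, gy = (i + c) / N, (j + c) / N                # centroid, physical coordinates
    return np.mean((f(x, y) - f(gx, gy)) ** 2)       # |D| = 1

for N in (5, 10, 20, 30):
    print(N, mu_N(N))   # decays like N^{-2}: slope -2 in log-log scale
\end{verbatim}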
\section{Space-time numerical scheme}\label{sect_heat_FHN}
In this section, we first present our numerical scheme. The considered space-time discretization is based on the Euler scheme in time and on finite elements in space. In this section and in the next section, we will use the following notations. Let us fix a time horizon $T$. For $N\geq1$ we define a time step $\Delta t=\frac{T}{N}$ and denote by $(u_n,v_n)_{0\leq n\leq N}$ a sequence of approximations of the solution of (\ref{eq_model}) at times $t_n=n\Delta t$, $0\leq n\leq N$. The scheme, semi-discretized in time, is based on the following variational formulation, for $n\in\{0,\ldots,N-1\}$: \begin{equation}\label{scheme_ex_weak} \left\{ \begin{array}{ccl} \left(\frac{u_{n+1}-u_n}{\Delta t},\psi\right)+\kappa(\nabla u_{n+1},\nabla\psi)&=&\frac{1}{\varepsilon}(f_n,\psi)+\frac{\sigma}{\Delta t}(W^Q_{n+1}-W^Q_n,\psi),\\ \left(\frac{v_{n+1}-v_n}{\Delta t},\phi\right)&=&(g_n,\phi) \end{array} \right. \end{equation} for $\psi$ and $\phi$ in appropriate spaces of test functions. Here, $f_n$ and $g_n$ correspond to approximations of the reaction terms $f$ and $g$ in (\ref{eq_model}). The way we compute $f_n$ and $g_n$ is detailed in the sequel for each considered model. $W^Q_{n}$ is an appropriate approximation of $W^Q_{t_n}$ based on one of the discretizations proposed in Definition \ref{Def_app}. \\ In Subsection \ref{sect_heat}, we consider the strong error in the case of a linear stochastic partial differential equation driven by a colored noise to study the accuracy of the finite element discretization. We recall the estimates on the strong order of convergence for such a numerical scheme obtained in \cite{Kr14} and we numerically illustrate this result for an explicit example. In Subsection \ref{sect_FHN}, we present in more details the scheme for the Fitzhugh-Nagumo model with a colored noise source since this model is one of the most used phenomenological models in cardiac electro-physiology, see the seminal work \cite{Fi69} and the review \cite{LiGaNeSc04}.
\subsection{Linear parabolic equation with additive colored noise}\label{sect_heat}
Let us consider the following linear parabolic stochastic equation on $(0,T)\times D$ \begin{equation}\label{eq_l} \left\{\begin{array}{ccl} {\rm d} u_t&=& A u_t{\rm d} t+\sigma {\rm d} W^{Q}_t,\\ u_0&=&\zeta . \end{array}\right. \end{equation}
Remember that $H=L^2(D)$ is a separable Hilbert space with the scalar product and the corresponding norm respectively denoted by $(\cdot,\cdot)$ and $\|\cdot\|$. We assume that $W^Q$ is a $Q$-Wiener process with an operator $Q$ which satisfies Assumption \ref{cond_C}. We impose the following condition on the operator $A$ in (\ref{eq_l}). \begin{assumption}\label{ass_A} The operator $-A$ is a positive self-adjoint linear operator on $H$ whose domain is dense and compactly embedded in $H$. \end{assumption} \noindent It is well known that the spectrum of $-A$ is made up of an increasing sequence of positive eigenvalues $(\lambda_i)_{i\geq1}$. The corresponding eigenvectors $\{w_i,i\geq1\}$ form a Hilbert basis of $H$.\\
\noindent The domain of $(-A)^{\frac12}$ is the set \[ \left\{u=\sum_{i\geq1}(u,w_i)w_i,\quad \sum_{i\geq1}\lambda_i(u,w_i)^2<\infty\right\}, \]
that we denote here by $V$. It is continuously and densely embedded in $H$. The $V$-norm is given by $|u|=\sqrt{-(Au,u)}$ for all $u\in V$. We define a coercive continuous bilinear form $a$ on $V\times V$ by \[ a(u,v)=-(Au,v). \] Let $\zeta$ be a $V$-valued random variable. The following proposition states that problem (\ref{eq_l}) is well posed. \begin{proposition}\label{mild_sol} Equation (\ref{eq_l}) has a unique mild solution: \[ u_t=e^{At}\zeta+\sigma\int_0^t e^{A(t-s)}{\rm d} W^Q_s, \] Moreover $u$ is continuous in time and $u_t\in V$ for all $t\in[0,T]$, $\mathbb{P}$-a.s. \end{proposition} \begin{proof} This result is a direct consequence of Theorem 5.4 of \cite{DaZa92}, Assumptions \ref{cond_C} and \ref{ass_A}. \qed \end{proof}
For $h>0$, let $V_h$ be a finite dimensional subset of $V$ with the property that for all $v\in V$, there exists a sequence of elements $v_h\in V_h$ such that $\lim_{h\to 0}\|v-v_h\|=0$. For an element $u$ of $V$, we introduce its orthogonal projection on $V_{h}$ and denote it $\Pi_hu$. It is defined in a unique way by \begin{equation}\label{def_proj} \Pi_hu\in V_h\quad{\rm and}\quad\forall v_h\in V_{h},\quad a(\Pi_h u-u,v_h)=0. \end{equation} Let $I_h$ be the dimension of $V_h$. Notice that there exists a basis $(w_{i,h})_{1\leq i\leq I_h}$ of $V_h$ orthonormal in $H$ with the following property: for each $1\leq i\leq I_h$, there exists $\lambda_{i,h}$ such that \[ \forall v_h\in V_h,\quad a(v_h,w_{i,h})=\lambda_{i,h}(v_h,w_{i,h}), \] (see \cite{RaTh83}, Section 6.4). The family $(\lambda_{i,h})_{1\leq i\leq I_h}$ is an approximating sequence of the family of eigenvalues $(\lambda_i)_{i\geq1}$ so that \[ \lambda_{i,h}\geq \lambda_i,\quad\forall 1\leq i\leq I_h. \] We study the following numerical scheme to approximate equation (\ref{eq_l}) defined recursively as follows. For $u_0$ given in $V_h$, find $(u^h_n)_{0\leq n\leq N}$ in $V_h$ such that for all $n\leq N-1$ \begin{equation}\label{scheme_eq_l} \left\{\begin{array}{rcl} \frac{1}{\Delta t}(u^h_{n+1}-u^h_n,v_h)+a(u^h_{n+1},v_h)&=&\frac{\sigma}{\Delta t}(W^{Q,h}_{n+1}-W^{Q,h}_n,v_h)\\ u^h_0&=&u_0 \end{array}\right. \end{equation} for all $v_h\in V_h$ where $W^{Q,h}_n$ is an appropriate approximation of $W^Q_{n \Delta t}$ in $V_h$. The approximation error of the scheme can be written as the sum of two errors. \begin{definition} The discrete error introduced by the scheme (\ref{scheme_eq_l}) is defined by $E^h_n=e^h_n+p^h_n$ where \begin{equation} e^h_n=u^h_n-\Pi_h u_{t_n},\quad p^h_n=\Pi_h u_{t_n}-u_{t_n} \end{equation} for $0\leq n\leq N$. \end{definition} For $n\in\{0,\ldots,N\}$, the error $e^h_n$ is the difference between the approximated solution given by the scheme and the elliptic projection on $V_{h}$ of the exact solution at time $n\Delta t$. The error $p^h_n$ is the difference between the exact solution and its projection on $V_{h}$ at time $n\Delta t$. \\
\noindent In order to give explicit bounds for the error defined above, let us choose our approximation in $H$ specifically. This requires choosing the space $V$ explicitly. Assume that the operator $A$ is such that $V= H^1_0(D)$ and that $V_h$ is a space of P1 finite elements, see Section \ref{section_fe}. In this P1 case, for $n\in\{0,\ldots,N\}$, we set $W^{Q,h}_n=W^{Q,h,1}_{n\Delta t}$, as defined in Definition \ref{Def_app}.
The situation considered in the present section is also the one studied in Example 3.4 and Section 7 of \cite{Kr14}. We have the following bound for the numerical error of such a scheme. \begin{theorem}[Example 3.4 and Corollary 7.2 in \cite{Kr14}]\label{cor:main} Let us assume that Assumptions \ref{cond_C} and \ref{ass_A} are satisfied. Moreover, assume that we are in the P1 case: for $n\in\{0,\ldots,N\}$, $W^{Q,h}_n=W^{Q,h,1}_{n\Delta t}$, as defined in Definition \ref{Def_app}. Then, there exists $\Delta t_0>0$ such that for all $n\in\{1,\ldots,N\}$ and $\Delta t\in[0,\Delta t_0]$ \begin{equation}\label{so}
\sqrt{\mathbb{E}(\|E^h_n\|^2)}\leq K(h+\sqrt{\Delta t}), \end{equation}
where $K$ is a constant depending only on $T$ and $|D|$. \end{theorem}
\noindent Let us recall the weak order of convergence of the considered scheme, obtained in \cite{DePr09} under weaker assumptions. Since $C$ is a twice differentiable even function on $D$, $\Delta C$ is bounded on $D$ and therefore, according to \cite{DePr09} Theorem 3.1, for any bounded real-valued twice differentiable function $\phi$ on $L^2(D)$, there exists a constant $K$ depending only on $T$ such that \begin{equation}\label{wo}
|\mathbb{E}(\phi(u^h_N))-\mathbb{E}(\phi(u_T))|\leq K(h^{2\gamma}+\Delta t^\gamma) \end{equation} for a given $\gamma<1$. In our situation, it is more natural to consider the strong error since we study pathwise behavior. For the method that we consider, estimates for the strong error have been obtained for one dimensional spatial domains and white noise in \cite{Wa05}. Many papers exist for finite dimensional systems. The estimate of Theorem \ref{cor:main} lies in between these two types of studies. The noise is colored but the spatial domain may be of any dimension. Notice that the order of weak convergence (\ref{wo}) is twice the order of strong convergence (\ref{so}), as for finite dimensional stochastic differential equations.\\
\noindent At the end of this subsection, we illustrate this error estimate in a simple situation. We consider the domain $D=(0,l)\times (0,l)$ for $l>0$. We set $A=\Delta$ and $\mathcal{D}(A)=H^2(D)\cap H^1_0(D)$, thus $V=H^1_0(D)$. That is, we consider the equation \begin{equation}\label{heat} \left\{ \begin{array}{ccl} {\rm d}u_t&=&\Delta u_t{\rm d}t+\sigma {\rm d} W^{Q}_t,\quad{\rm in}~D,\\ u_t&=&0,\quad{\rm on}~\partial D,\\ u_0&=&0,\quad{\rm in}~D \end{array} \right. \end{equation} for $t\in\mathbb{R}_+$, where $W^{Q}$ is an $L^2(D)$-valued $Q$-Wiener process whose covariance operator $Q$ satisfies Assumption \ref{cond_C}. The initial condition is zero, hence the solution of the corresponding deterministic equation, without noise, is simply zero for all time. By Proposition \ref{mild_sol}, equation (\ref{heat}) has a unique mild solution such that $u_t\in H^1_0(D)$ for all $t\in[0,T]$, $\mathbb{P}$-almost surely. Moreover, $u$ has a version with time continuous paths and such that, for any time $T>0$: \[
\sup_{t\in[0,T]}\mathbb{E}(\|u_t\|^2_{H^1_0(D)})<\infty. \] \noindent We denote by $(e^{\Delta t},~t\geq0)$ the contraction semigroup associated to the operator $\Delta$. The mild solution to equation (\ref{heat}) is defined as the following stochastic convolution \[ u_t=\sigma\int_0^te^{\Delta (t-s)}{\rm d} W^{Q}_s \] for $t\in\mathbb{R}_+$, $\mathbb{P}$-almost-surely. In order to compute the expectation of the squared norm of $u$ in $L^2(D)$ analytically and also as precisely as possible numerically, we define the Hilbert basis $(e_{kp},k,p\geq1)$ of $L^2(D)$ which diagonalizes the operator $\Delta$ defined on $\mathcal{D}(A)$. For $k,p\geq1$ and $(x,y)\in D$ \[ e_{kp}(x,y)=\frac{2}{l}\sin\left(\frac{k\pi}{l}x\right)\sin\left(\frac{p\pi}{l}y\right). \] A direct computation shows that $\Delta e_{kp}=-\lambda_{kp}e_{kp}$ where $\lambda_{kp}=\frac{\pi^2}{l^2}(k^2+p^2)$. In the basis $(e_{kp},k,p\geq1)$ of $L^2(D)$, the semigroup $(e^{\Delta t},~t\geq0)$ is given by \[ e^{\Delta t}\phi=\sum_{k,p\geq1}e^{-\lambda_{kp} t}(\phi,e_{kp})e_{kp} \] for $t\in\mathbb{R}_+$ and $\phi\in L^2(D)$. Then for any $t\in\mathbb{R}_+$ (c.f. Proposition 2.2.2 of \cite{DaPa04}) \[
\mathbb{E}(\|u_t\|^2)=\sigma^2\int_0^t{\rm Tr}\left(e^{2\Delta s}Q\right){\rm d} s=\sigma^2\sum_{k,p\geq1}\frac{1-e^{-2\lambda_{kp} t}}{2\lambda_{kp}}(Q e_{kp},e_{kp}). \]
In the sequel, we write $\Gamma_t=\mathbb{E}(\|u_t\|^2)$. The above series expansion can then be implemented and we can compare this quantity with $\mathbb{E}(\|u^h_n\|^2)$, which is computed by Monte Carlo simulation. The Monte Carlo estimation of $\mathbb{E}(\|u^h_n\|^2)$ consists in considering a sequence $(u^{h,p}_n)_{1\leq p\leq P}$, $P\in\mathbb{N}$, of independent realizations of the scheme (\ref{scheme_eq_l}) and defining \begin{equation}
\Gamma^{(P)}_{n\Delta t}=\frac{1}{P}\sum_{p=1}^P\|u^{h,p}_n\|^2, \end{equation} the approximation of $\Gamma$ at time $n\Delta t$, $n\in\{0,\ldots,N\}$. We also denote by $\Gamma^{(P)}$ the continuous, piecewise linear in time, interpolation of these values. Figure \ref{Fig_var} displays numerical simulations of the processes $(\Gamma_t,~t\in\mathbb{R}_+)$ and $(\Gamma^{(P)}_t,~t\in\mathbb{R}_+)$. The simulations are done with $l=80$. Moreover, the domain is triangulated with $5000$ triangles, giving a space step of about $h=0.64$ and a number of vertices of about $2600$. For this simulation, we choose $P=40$, which is not large, but $\Gamma^{(40)}$ matches its theoretical counterpart $\Gamma$ quite well, as expected from the law of large numbers. We remark also that for the same spatial discretization of the domain $D$, there is no particular statistical improvement in choosing the P1 finite element basis instead of the P0 one.
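\noindent To make the role of the estimator $\Gamma^{(P)}$ concrete, consider, instead of the Gaussian kernel $q_\xi$ used for Figure \ref{Fig_var}, the rank-one kernel of the end of Section \ref{section_fe} built from a single eigenfunction, $q(x,y)=e_{k_0p_0}(x)e_{k_0p_0}(y)$. Then $(Qe_{kp},e_{kp})=\delta_{kk_0}\delta_{pp_0}$, the stochastic convolution is carried by the single mode $e_{k_0p_0}$, whose coefficient is a scalar Ornstein-Uhlenbeck process, and $\Gamma_t=\sigma^2\frac{1-e^{-2\lambda_{k_0p_0}t}}{2\lambda_{k_0p_0}}$, while a semi-implicit Euler discretization in time of this mode reduces to the scalar recursion $X_{n+1}=(X_n+\sigma\,\Delta\beta_n)/(1+\lambda_{k_0p_0}\Delta t)$. The Python sketch below compares the exact $\Gamma$ with the Monte Carlo estimate over $P=40$ realizations of this scalar recursion; it is only a sanity check of the time discretization and of the averaging, the finite element part being trivial in this reduced setting.
\begin{verbatim}
import numpy as np

# Exact Gamma_t versus its Monte Carlo estimate Gamma^{(P)} for the rank-one kernel
# q(x,y) = e_{k0p0}(x) e_{k0p0}(y) on D = (0,l)^2 (a simplified, purely scalar check).
l, k0, p0 = 80.0, 1, 1
lam = (np.pi / l) ** 2 * (k0 ** 2 + p0 ** 2)       # eigenvalue lambda_{k0 p0}
sigma, T, dt, P = 0.15, 10.0, 0.05, 40
N = int(T / dt)

rng = np.random.default_rng(1)
X = np.zeros(P)                                    # mode coefficient, P realizations
gamma_mc = np.zeros(N + 1)
for n in range(N):
    dbeta = np.sqrt(dt) * rng.standard_normal(P)   # Brownian increments
    X = (X + sigma * dbeta) / (1.0 + lam * dt)     # semi-implicit Euler step
    gamma_mc[n + 1] = np.mean(X ** 2)              # Gamma^{(P)} at time (n+1)*dt

t = np.linspace(0.0, T, N + 1)
gamma_exact = sigma ** 2 * (1.0 - np.exp(-2.0 * lam * t)) / (2.0 * lam)
print(gamma_mc[-1], gamma_exact[-1])               # close, up to time stepping and MC error
\end{verbatim}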
\begin{figure}
\caption{Simulations of $(\Gamma_t,~t\in[0,10])$ in green and its approximation $\Gamma^{(40)}$ in red, computed with the P0 (on the left) and P1 (on the right) approximations of the noise. For the simulation we choose the correlation coefficient $\xi=2$ in the kernel $q_\xi$ defined by (\ref{kernel_xi}). The intensity of the noise is $\sigma=0.15$. The time step is $0.05$ whereas the space step is about $0.64$. The two black curves are $\Gamma$ plus and minus the error introduced by the scheme, which is expected to be of order $\sqrt{\Delta t}+h$, here equal to $\sqrt{0.05}+0.64$.}
\label{Fig_var}
\end{figure}
\subsection{Space-time discretization of the Fitzhugh-Nagumo model}\label{sect_FHN}
We write the scheme for the Fitzhugh-Nagumo model, which is widely used to model excitable cells, see \cite{Fi69,LiGaNeSc04}. The stochastic Fitzhugh-Nagumo model, abbreviated as FHN model in the sequel, consists in the following $2$-dimensional system \begin{equation}\label{FHN_BGT} \left\{ \begin{array}{ccl} {\rm d}u&=&[\kappa\Delta u+\frac1\varepsilon\left(u(1-u)(u-a)-v\right)]{\rm d}t+\sigma {\rm d} W^Q,\\ {\rm d} v&=&[u-v]{\rm d} t, \end{array} \right. \end{equation} on $[0,T]\times D$. In the above system, $\kappa>0$ is a \emph{diffusion} coefficient, $\varepsilon>0$ a \emph{time-scale} coefficient, $\sigma>0$ the intensity of the noise and $a\in(0,1)$ a parameter. $W^Q$ is a $Q$-Wiener process with $Q$ satisfying Assumption \ref{cond_C}. System (\ref{FHN_BGT}) must be endowed with initial and boundary conditions. We denote by $u_0$ and $v_0$ the initial conditions for $u$ and $v$. Moreover, we assume that $u$ satisfies zero Neumann boundary conditions: \begin{equation}\label{bound_cond} \forall t\in[0,T],\quad \frac{\partial u_t}{\partial \vec{n}}=0,\quad\text{ on }\partial D, \end{equation} where $\partial D$ denotes the boundary of $D$ and $\vec{n}$ is the external unit normal to this boundary. The noisy FHN model, and especially FHN with white noise, has been extensively studied. We refer the reader to \cite{BoMa08}, where all the arguments needed to prove the following proposition are developed. \begin{proposition} Let $W^Q$ be a colored noise with $Q$ satisfying Assumption \ref{cond_C}. We assume that $u_0$ and $v_0$ are in $L^2(D)$, $\mathbb{P}$-almost surely. Then, for any time horizon $T$, the system (\ref{FHN_BGT}) has a unique solution $(u,v)$ defined on $[0,T]$ which is $\mathbb{P}$-almost surely in $\mathcal{C}([0,T],H)\times\mathcal{C}([0,T],H)$. \end{proposition} \noindent The proof of this proposition relies on the Itô formula, see Chapter 1, Section 4.5 of \cite{DaZa92}, and on the fact that the functional defined by \[ f(x)=x(1-x)(x-a),\quad\forall x\in \mathbb{R} \] satisfies the inequality \[
(f(u)-f(v),u-v)\leq\frac{1+a^2-a}{3}\|u-v\|^2,\quad \forall (u,v)\in H\times H, \] which implies that the map $f-\frac{1+a^2-a}{3}{\rm Id}$ is dissipative. The local kinetics of system (\ref{FHN_BGT}), that is the dynamics in the absence of spatial derivative, is illustrated in Figure \ref{Fig_FHN_phase}. It describes the dynamics of the system of ODEs \begin{equation}\label{FHN_ode} \left\{ \begin{array}{ccl} {\rm d}\mathfrak{u}&=&[\frac1\varepsilon \mathfrak{u}(1-\mathfrak{u})(\mathfrak{u}-a)-\mathfrak{v}]{\rm d}t,\\ {\rm d} \mathfrak{v}&=&[\mathfrak{u}-\mathfrak{v}]{\rm d} t, \end{array} \right. \end{equation} when the initial condition $(\mathfrak{u}_0,\mathfrak{v}_0)$ is in $[0,1]\times[0,1]$. \begin{figure}
\caption{Phase portrait with nullclines of system (\ref{FHN_ode}) for $a=0.1$ and $\varepsilon=0.1$. The blue points correspond to the three equilibrium points of the system.}
\label{Fig_FHN_phase}
\end{figure}
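\noindent As a complement to Figure \ref{Fig_FHN_phase}, the short Python sketch below computes the equilibria of (\ref{FHN_ode}), i.e.\ the intersections of the nullclines $\mathfrak{v}=\frac1\varepsilon\mathfrak{u}(1-\mathfrak{u})(\mathfrak{u}-a)$ and $\mathfrak{v}=\mathfrak{u}$, and integrates one trajectory of the plane system by an explicit Euler step. For $a=\varepsilon=0.1$ it recovers the three equilibrium points shown in the figure.
\begin{verbatim}
import numpy as np

a, eps = 0.1, 0.1

# Equilibria: v = u together with (1/eps) u(1-u)(u-a) = u,
# i.e. u = 0 or u^2 - (1+a) u + (a+eps) = 0.
roots = np.roots([1.0, -(1.0 + a), a + eps])
equilibria = [0.0] + [float(np.real(r)) for r in roots if abs(np.imag(r)) < 1e-12]
print("equilibria (u = v):", sorted(equilibria))   # 0, ~0.23 and ~0.87 for a = eps = 0.1

# One trajectory of (FHN_ode), explicit Euler with a small time step.
dt, nsteps = 1e-3, 20000
u, v = 0.3, 0.0                                    # initial condition in [0,1]^2
traj = np.empty((nsteps, 2))
for n in range(nsteps):
    du = u * (1.0 - u) * (u - a) / eps - v
    dv = u - v
    u, v = u + dt * du, v + dt * dv
    traj[n] = (u, v)
# traj can be plotted over the nullclines to reproduce the phase portrait of the figure.
\end{verbatim}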
\noindent We explicitly give the numerical scheme used to simulate system (\ref{FHN_BGT}). Let us define the function $k$ given by \[ k(x)=\frac1\varepsilon\left(-x^3+x^2(1+a)\right), \forall x\in \mathbb{R}. \] This function corresponds to the nonlinear part of the reaction term $f$. We use the following semi-implicit Euler-Maruyama scheme \begin{equation}\label{scheme_FHN} \left\{ \begin{array}{ccl} \frac{u_{n+1}-u_n}{\Delta t}&=&\kappa\Delta u_{n+1}-\frac a\varepsilon u_{n+1}+k(u_n)-v_{n+1}+\frac{\sigma}{\sqrt{\Delta t}}W^Q_{1,n+1},\\ \frac{v_{n+1}-v_n}{\Delta t}&=&u_{n+1}-v_{n+1}, \end{array} \right. \end{equation} where $(W^Q_{1,n})_{1\leq n\leq N+1}$ is a sequence of independent $Q$-Wiener processes evaluated at time $1$. Let $(H^*,\mathcal{B}(H^*),\tilde\mathbb{P})$ be chosen so that the canonical process has the same law as $W^Q_{1,n+1}$ under $\tilde\mathbb{P}$. Then, for a given $(u_{n},v_n)\in H^1(D)\times H$, the equation \[ (\frac{1}{\Delta t}+\frac a\varepsilon+\frac{\Delta t}{1+\Delta t})u_{n+1}-\kappa\Delta u_{n+1}=\frac{1}{\Delta t}u_n+k(u_n)-\frac{1}{1+\Delta t}v_n+\frac{\sigma}{\sqrt{\Delta t}}W^Q_{1,n+1} \] has a unique weak solution $u_{n+1}$ in $H^1(D)$, $\tilde\mathbb{P}$-almost surely. This fact follows from the Lax-Milgram theorem and a measurable selection theorem, see Section 5 of the survey \cite{Wa80}. Therefore, without loss of generality, we may assume in this section that the probability space is $(\mathcal{C}([0,T],H^*),\mathcal{B}(\mathcal{C}([0,T],H^*)),\mathbb{P})$ such that under $\mathbb{P}$, the canonical process has the same law as $W^Q$. \begin{remark} In the scheme (\ref{scheme_FHN}), we could have chosen other ways to approximate the reaction term. For instance, it is also possible to work with a completely implicit scheme with $k(u_{n+1})$ instead of $k(u_n)$ in (\ref{scheme_FHN}). \end{remark} Let us consider the weak form of the first equation of (\ref{scheme_FHN}). We get \begin{equation}\label{scheme_FHN_weak} \left\{ \begin{array}{ccl} (\frac{1}{\Delta t}+\frac a\varepsilon+\frac{\Delta t}{1+\Delta t})(u_{n+1},\psi)+\kappa(\nabla u_{n+1},\nabla \psi)&=&\frac{1}{\Delta t}(u_n,\psi)+(k(u_n),\psi)-\frac{1}{1+\Delta t}(v_n,\psi)+\frac{\sigma}{\sqrt{\Delta t}}(W^Q_{1,n+1},\psi),\\ v_{n+1}-\frac{\Delta t}{1+\Delta t}u_{n+1}&=&\frac{1}{1+\Delta t}v_{n} \end{array} \right. \end{equation} for all $\psi\in H^1(D)$. Let $h>0$ and $(\psi_i,1\leq i\leq N_h)$ be the P1 finite element basis defined in Section \ref{sect_noise}. For $n\geq0$, we define the vectors \[ \vec{u}_n=(u_{n,i})_{1\leq i\leq N_h},\quad \vec{v}_n=(v_{n,i})_{1\leq i\leq N_h},\quad \vec{W}^Q_{n+1}=(W^Q_{1,n+1}(P_i))_{1\leq i\leq N_h}, \] which are respectively the coordinates of $u_n$, $v_n$ and $W^Q_{1,n+1}$ w.r.t. the basis $(\psi_i,1\leq i\leq N_h)$. We also define the stiffness matrix $A\in\mathcal{M}_{N_h}(\mathbb{R})$ and the mass matrix $M\in\mathcal{M}_{N_h}(\mathbb{R})$ by \[ A_{ij}=(\nabla\psi_i,\nabla\psi_j),\quad M_{ij}=(\psi_i,\psi_j). 
\] System (\ref{scheme_FHN_weak}) can be rewritten as \begin{eqnarray*}\label{scheme_FHN_final} \left( \begin{array}{cc} (\frac{1}{\Delta t}+\frac a\varepsilon+\frac{\Delta t}{1+\Delta t})M+\kappa A&0\\ -\frac{\Delta t}{1+\Delta t}I&I \end{array} \right) \left( \begin{array}{cc} \vec{u}_{n+1}\\ \vec{v}_{n+1} \end{array} \right) &=& \left( \begin{array}{cc} \frac{1}{\Delta t}M&-\frac{1}{1+\Delta t}M\\ 0&\frac{1}{1+\Delta t}I \end{array} \right) \left( \begin{array}{cc} \vec{u}_{n}\\ \vec{v}_{n} \end{array} \right) +\left( \begin{array}{cc} G(\vec{u}_{n})\\ 0 \end{array} \right) \\&+& \left( \begin{array}{cc} \frac{\sigma}{\sqrt{\Delta t}}M&0\\ 0&0 \end{array} \right) \left( \begin{array}{cc} \vec{W}^Q_{n+1}\\ 0\end{array} \right), \end{eqnarray*} where $G(\vec{u}_n)=(k(u_n),\psi_i)_{1\leq i\leq N_h}\in\mathbb{R}^{N_h}$. As for the parabolic stochastic equation considered in Section \ref{sect_heat}, one may expect a numerical strong error for this scheme of order \begin{equation}\label{err_eq_nl}
\mathbb{E}(\|(u_t,v_t)-(u_{t,n},v_{t,n})\|^2)^{\frac12}=\text{O}(h+\sqrt{\Delta t}), \end{equation} for $\Delta t\leq \Delta t_0$. In (\ref{err_eq_nl}), $(u_{t,n},v_{t,n})_{t\in[0,T]}$ denotes the piecewise linear (in time) interpolation of the discrete solution.\\ \noindent We end this section with Figure \ref{Fig_FHN}, which displays simulations of the stochastic Fitzhugh-Nagumo model (\ref{FHN_BGT}) with zero Neumann boundary conditions on a cardioid domain and zero initial conditions. The kernel of the operator $Q$ is given by equation (\ref{kernel_xi}) for some $\xi>0$. Due to the strong intensity of the noise source ($\sigma=1$), we observe the spontaneous nucleation of a wave with an irregular front propagating throughout the whole domain. \begin{figure}
\caption{Simulations of system (\ref{FHN_BGT}) with $\xi=2$, $\sigma=1$, $\varepsilon=0.1$, $a=0.1$. These figures must be read from top left to bottom right. The time step is $0.05\,ms$ and there are $0.5\,ms$ between consecutive frames.}
\label{Fig_FHN}
\end{figure}
\section{Arrhythmia and reentrant patterns in excitable media}\label{sect_arrhy}
In this section, we focus on classical models for excitable cells, namely the Barkley and Mitchell-Schaeffer models. We would like to observe cardiac arrhythmias, that is, disturbances that may appear in the cardiac beat. Among the diversity of arrhythmias, the phenomena of tachycardia are certainly the most dangerous as they lead to rapid loss of consciousness and death. Tachycardia is described as follows in \cite{JC06}. \begin{quote}The vast majority of tachyarrhythmias are perpetuated by reentrant mechanisms. Reentry occurs when previously activated tissue is repeatedly activated by the propagating action potential wave as it reenters the same anatomical region and reactivates it. \end{quote} In system (\ref{eq_model}), the equation on $u$ gives the evolution of the cardiac action potential. The equation on $v$ takes into account the evolution of the internal biological mechanisms leading to the generation of this action potential. We will be more specifically interested in two systems of this form: the Barkley and Mitchell-Schaeffer models.
\subsection{Numerical study of the Barkley model}
\subsubsection{The model}\label{am}
In the deterministic setting, a paradigm for excitable systems where reentrant phenomena such as spiral, meander or scroll waves have been observed and studied is the Barkley model, see \cite{B90,B91,B92,B94}. This deterministic model is of the following form \begin{equation}\label{Bar_det} \left\{ \begin{array}{ccl} {\rm d}u&=&[\kappa\Delta u+\frac1\varepsilon u(1-u)(u-\frac{v+b}{a})]{\rm d}t,\\ {\rm d} v&=&[u-v]{\rm d} t. \end{array} \right. \end{equation} The parameter $\varepsilon$ is typically small so that the time scale of $u$ is much faster than that of $v$.
For more details on the dynamics of waves in excitable media, we refer the reader to \cite{K80}. The Barkley model, like other two-variable models of this type, faithfully captures the behavior of many excitable systems. The deterministic model (\ref{Bar_det}) does not exhibit re-entrant patterns unless one imposes special conditions on the domain: for instance, one may impose that a portion of the spatial domain is a "dead zone", that is, a region with impermeable boundaries where equations (\ref{Bar_det}) do not apply. When a wave reaches this dead region, the tip of the wave may turn around and this induces a spiral behavior, see Section 2.2 of \cite{K80}. One may also impose specific initial conditions such that some zones are intentionally hyper-polarized: the dead region is somehow transient in this case.
\subsubsection{Reentrant patterns}
As in \cite{Sh05}, we add a colored noise with kernel of type (\ref{kernel_xi}) to equation (\ref{Bar_det}) and so we consider \begin{equation}\label{Bar} \left\{ \begin{array}{ccl} {\rm d}u&=&[\kappa\Delta u+\frac1\varepsilon u(1-u)(u-\frac{v+b}{a})]{\rm d}t+\sigma {\rm d} W^{Q_\xi},\\ {\rm d} v&=&[u-v]{\rm d} t, \end{array} \right. \end{equation} where the kernel of $Q_\xi$ is given by (\ref{kernel_xi}) for $\xi>0$.\\ Figure \ref{Fig_Bar_carre} displays a simulation of system (\ref{Bar}) on the square $D=[0,l]\times [0,l]$ with periodic boundary conditions: \begin{equation}\label{bcond} \begin{array}{lll} \forall t\in\mathbb{R}_+,&\forall x\in[0,l]\quad u_t(x,0)=u_t(x,l),&\quad{\rm and}\quad \frac{\partial u_t}{\partial \vec n}(x,0)=\frac{\partial u_t}{\partial \vec n}(x,l),\\ &\forall y\in[0,l]\quad u_t(0,y)=u_t(l,y),&\quad{\rm and}\quad \frac{\partial u_t}{\partial \vec n}(0,y)=\frac{\partial u_t}{\partial \vec n}(l,y), \end{array} \end{equation} where $\vec n$ is the external unit normal to the boundary. The numerical scheme is based on the following variational formulation. Given $u_0$ and $v_0$ in $H^1(D)$, find $(u_n,v_n)_{1\leq n\leq N}$ such that for all $0\leq n\leq N-1$, \begin{equation}\label{scheme_Bar_weak} \left\{ \begin{array}{ccl} (\frac{u_{n+1}-u_n}{\Delta t},\psi)+\kappa(\nabla u_{n+1},\nabla \psi)&=&\frac{1}{\varepsilon}(u_n(1-u_n)(u_n-\frac{v_n+b}{a}),\psi)+\frac{\sigma}{\sqrt{\Delta t}}(W^Q_{1,n+1},\psi),\\ \frac{v_{n+1}-v_n}{\Delta t}&=&u_{n+1}-v_{n+1} \end{array} \right. \end{equation} with boundary conditions $u_n(x,0)=u_n(x,l)$, $u_n(0,y)=u_n(l,y)$ and for all $\psi\in H^1(D)$ satisfying $\psi(x,0)=\psi(x,l)$ and $\psi(0,y)=\psi(l,y)$ for any $(x,y)\in [0,l]\times[0,l]$. We have solved this problem using the P1 finite element method, see Section \ref{section_fe}.\\ Our aim is to observe reentrant patterns generated by the presence of the noise source in this system. \begin{figure}
\caption{Reentry is observed for system (\ref{Bar}) with $\xi=2$, $\sigma=0.15$, $\varepsilon=0.05$, $a=0.75$, $b=0.01$ and $\nu=1$. These figures must be read from top left to bottom right. The quiescent state is represented in green whereas the excited state is in violet. If time is recorded in $ms$, there are $0.5\,ms$ between consecutive frames, for a time step of $0.05\,ms$.}
\label{Fig_Bar_carre}
\end{figure} Figure \ref{Fig_Bar_carre} displays simulations of (\ref{Bar}) using the P1 finite element method. We observe the spontaneous generation of waves with a reentrant pattern. At some points of the spatial domain, the system is excited and exhibits a reentrant evolution which is self-sustained: a previously activated zone is periodically re-activated by the same wave. As explained in \cite{JC06} and quoted in Section \ref{am}, this phenomenon can be interpreted biologically as tachycardia in the heart tissue. Note that, as in \cite{Sh05}, the constants $a$ and $b$ are chosen such that the deterministic version of system (\ref{Bar}) may exhibit spiral patterns, see the bifurcation diagram between $a$ and $b$ in \cite{B94}. However, in our context, the generation of spirals is due solely to the presence of noise. In particular, there is no need for a "dead region", which, as previously mentioned, is required for the observation of spirals or reentrant patterns in the deterministic context. \noindent Figure \ref{Fig_Bar_coeur} displays a simulation of system (\ref{Bar}) on a cardioid domain with zero Neumann boundary conditions, see (\ref{bound_cond}). We observe the spontaneous generation of a wave turning around itself like a spiral and thus reactivating zones already activated by the same wave. \begin{figure}
\caption{Simulations of system (\ref{Bar}) with $\xi=2$, $\sigma=0.15$, $\varepsilon=0.05$, $a=0.75$, $b=0.01$ and $\nu=1$. As in the previous figure, the quiescent state is represented in green whereas the excited state is in violet. Another re-entry phenomenon is observed on this cardioid geometry with zero Neumann boundary conditions. There are $2\,ms$ between consecutive snapshots, for a simulation time step equal to $0.05\,ms$.}
\label{Fig_Bar_coeur}
\end{figure}
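\noindent For readers wishing to experiment with this kind of simulation without FreeFem++, the Python sketch below implements a drastically simplified stand-in for the scheme (\ref{scheme_Bar_weak}) on the periodic square: the implicit diffusion step is solved in Fourier space instead of with P1 finite elements, the reaction term is treated explicitly as in (\ref{scheme_Bar_weak}), and the colored noise is obtained by smoothing discrete white noise with a Gaussian filter, which only mimics the kernel $q_\xi$ (its normalization is not matched, so the value of $\sigma$ below is not directly comparable to ours). A clipping of $u$ is added as a purely numerical safeguard for the explicit reaction step; it is not part of the scheme used in the paper.
\begin{verbatim}
import numpy as np

# Simplified stand-in for (scheme_Bar_weak) on the periodic square: Fourier solve of the
# implicit diffusion step, explicit reaction, and Gaussian-smoothed white noise as a
# rough substitute for the Q_xi-Wiener increments (normalization not matched).
L_dom, M = 80.0, 128
dx = L_dom / M
freq = 2.0 * np.pi * np.fft.fftfreq(M, d=dx)
KX, KY = np.meshgrid(freq, freq, indexing="ij")
K2 = KX ** 2 + KY ** 2

kappa, eps, a, b = 1.0, 0.05, 0.75, 0.01
sigma, xi, dt, nsteps = 0.15, 2.0, 0.05, 400

smooth = np.exp(-0.5 * xi ** 2 * K2)      # Gaussian low-pass filter (mimics q_xi)
symbol = 1.0 / dt + kappa * K2            # Fourier symbol of (1/dt - kappa*Laplacian)

rng = np.random.default_rng(0)
u = np.zeros((M, M))
v = np.zeros((M, M))
for n in range(nsteps):
    white = rng.standard_normal((M, M)) / dx                  # discrete white noise
    noise = np.fft.ifft2(np.fft.fft2(white) * smooth).real    # spatially correlated field
    react = u * (1.0 - u) * (u - (v + b) / a) / eps           # explicit Barkley reaction
    rhs = u / dt + react + sigma * noise / np.sqrt(dt)
    u = np.fft.ifft2(np.fft.fft2(rhs) / symbol).real          # implicit diffusion solve
    u = np.clip(u, -0.5, 1.5)        # numerical safeguard, not part of the paper's scheme
    v = (v + dt * u) / (1.0 + dt)    # update of the recovery variable
# Inspecting u at selected times (e.g. with matplotlib's imshow) shows whether waves
# have nucleated for the chosen (eps, sigma).
\end{verbatim}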
To gain a better insight into these reentrant phenomena, a bifurcation diagram between $\varepsilon$ and $\sigma$ in system (\ref{Bar}) is displayed in Figure \ref{Fig_Bar_carre_bif}. In this figure, the other parameters $a,b,\nu,\xi$ are held fixed. The domain and boundary conditions are the same as for Figure \ref{Fig_Bar_carre}. Three distinct areas emerge from repeated simulations: \begin{itemize} \item the area NW (for No Wave) where no wave is observed. \item the area W (for Wave) where at least one wave is generated on average. Such waves do not exhibit reentrant patterns. \item the area RW (for Reentrant Wave) where waves with re-entry are observed. The wave has the same pattern as in Figure \ref{Fig_Bar_carre}. \end{itemize} The detection of re-entrant patterns is quite empirical here: we say that there are re-entrant patterns if we can actually see them in the figures. An automatic detection of re-entrant patterns may certainly be derived from \cite{LoTh12}, even if it is rather difficult to implement in our setting.\\ At the transition between the areas W and RW, ring waves with the same pattern as reentrant waves may be observed: two arms which join each other to form a ring. We also remark that, for a fixed $\varepsilon$, the number of nucleated waves increases when $\sigma$ increases. On the contrary, for a fixed $\sigma$, the number of nucleated waves decreases when $\varepsilon$ increases. \begin{figure}
\caption{Numerical bifurcation diagram between $\varepsilon$ and $\sigma$ of system (\ref{Bar}) with $\xi=2$, $a=0.75$, $b=0.01$ and $\nu=1$ held fixed. The $\varepsilon$- and $\sigma$-steps are respectively $0.005$ and $0.025$, but have been refined near the boundaries between regions in order to draw the boundary curves.}
\label{Fig_Bar_carre_bif}
\end{figure} Let us notice that for small $\varepsilon$, that is, when the transition between the quiescent and excited states is very sharp, even a small noise may initiate spikes. However, we only observe reentrant patterns when $\varepsilon$ is large enough. Notice also that the separation curve between the zone NW and the two zones W and RW is exponentially shaped. This may be related to large deviations theory for slow-fast systems of SPDEs.
\subsection{Numerical study of the Mitchell-Schaeffer model}
\subsubsection{The model}
The Fitzhugh-Nagumo model is the most popular phenomenological model for cardiac cells. However, this model has some flaws, in particular the hyperpolarization and the stiff slope of the repolarization phase. The Mitchell-Schaeffer model \cite{MS03} has been proposed to improve the shape of the action potential in cardiac cells. The spatial version of the Mitchell-Schaeffer model reads as follows \begin{equation}\label{MS} \left\{ \begin{array}{ccc} {\rm d}u&=&\left[{\displaystyle \kappa\Delta u+\frac{v}{\tau_{\rm in}}u^2(1-u)-\frac{u}{\tau_{\rm out}}}\right]{\rm d}t+\sigma {\rm d} W^{Q_\xi},\\ {\rm d} v&=&\left[{\displaystyle\frac{1}{\tau_{\rm open}}\left(1-v\right)1_{u< u_{\rm gate}}-\frac{v}{\tau_{\rm close}}1_{u\geq u_{\rm gate}}}\right]{\rm d} t. \end{array} \right. \end{equation}
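\noindent To illustrate the action-potential shape produced by this model, the Python sketch below integrates the local (space-independent, noise-free) kinetics of (\ref{MS}) with an explicit Euler step. The time constants are those used in Figure \ref{Fig_MS_carre_bif}, except $\tau_{\rm close}$, which is swept in that figure and is here given an arbitrary illustrative value. Starting from a super-threshold value of $u$, one observes the fast upstroke, a plateau while $v$ decays, and then repolarization back to the rest state.
\begin{verbatim}
import numpy as np

# Local (0D) Mitchell-Schaeffer kinetics: explicit Euler integration of the reaction
# terms of (MS), without diffusion and without noise.  tau_close is the parameter swept
# in the bifurcation diagram; the value below is an arbitrary illustrative choice.
tau_in, tau_out, tau_open, tau_close, u_gate = 0.07, 0.7, 8.0, 1.5, 0.13

dt, T = 1e-3, 15.0
nsteps = int(T / dt)
u, v = 0.3, 1.0                        # super-threshold stimulus applied at time 0
trace = np.empty(nsteps)
for n in range(nsteps):
    du = v * u ** 2 * (1.0 - u) / tau_in - u / tau_out
    dv = (1.0 - v) / tau_open if u < u_gate else -v / tau_close
    u, v = u + dt * du, v + dt * dv
    trace[n] = u
# trace shows the fast upstroke, a plateau while v decays, then repolarization:
# the action-potential shape referred to in the text.
\end{verbatim}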
The numerical scheme is based on the following variational formulation. Given $u_0$ and $v_0$ in $H^1(D)$, find $(u_n,v_n)_{1\leq n\leq N}$ such that for all $0\leq n\leq N-1$, \begin{equation}\label{scheme_MS_weak} \left\{ \begin{array}{ccl} (\frac{u_{n+1}-u_n}{\Delta t},\psi)+\kappa(\nabla u_{n+1},\nabla \psi)&=&(\frac{v_n}{\tau_{\rm in}}u^2_n(1-u_n)-\frac{1}{\tau_{\rm out}}u_n,\psi)+\frac{\sigma}{\sqrt{\Delta t}}(W^Q_{1,n+1},\psi),\\ \frac{v_{n+1}-v_n}{\Delta t}&=&{\displaystyle \frac{1}{\tau_{\rm open}}\left(1-v_n\right)1_{u_n< u_{\rm gate}}-\frac{v_n}{\tau_{\rm close}}1_{u_n\geq u_{\rm gate}}} \end{array} \right. \end{equation} for all $\psi\in H^1(D)$. More precisely, we solve this problem with the P1 finite element method.
\subsubsection{Numerical investigations}
Bifurcations have been investigated in Figure \ref{Fig_MS_carre_bif} for the same domain and boundary conditions as for the bifurcation diagram related to the Barkley model (Figure \ref{Fig_Bar_carre_bif}). We choose to fix all the parameters except the intensity of the noise $\sigma$ and $\tau_{\rm close}$, in order to investigate the influence of the strength of the noise and of the characteristic time needed for the recovery variable $v$ to close. From repeated simulations, five distinct areas emerge: \begin{itemize} \item the area NW (for No Wave) where no wave is observed. \item the area W (for Wave) where at least one wave is generated on average. These waves do not exhibit reentrant patterns. However, they may be generated with the same pattern as reentrant waves: two arms which meet and join to form a ring. \item the area RW (for Reentrant Wave) where waves with re-entry may be observed, as in Figure \ref{Fig_Bar_carre}. \item the area DW (for Disorganized Wave) where reentrant waves are initiated but break down into numerous pieces, resulting in a very disorganized evolution. In a sense, this disorganized evolution may be regarded as reentrant since previously activated zones may be re-activated by one of the resulting pieces. \item the area T (for Transition), which is a transition area between reentrant waves and more disorganized patterns as observed in the area DW. \end{itemize}
\begin{figure}
\caption{Numerical bifurcation diagram between $\tau_{\rm close}$ and $\sigma$ of system (\ref{MS}) with $\xi=2$, $\tau_{\rm in}=0.07$, $\tau_{\rm out}=0.7$, $\tau_{\rm open}=8.$, $u_{\rm gate}=0.13$ and $\nu=0.03$ held fixed.}
\label{Fig_MS_carre_bif}
\end{figure} We would like to end the paper with a short discussion about a problem we intend to address in future works. In \cite{JT10}, the authors present simulations of a stochastic spatially-extended Hodgkin-Huxley model. This is a celebrated model for the generation and propagation of an action potential in a nerve fiber. They consider the case of a one dimensional nerve fiber stimulated by a noisy signal. They show, using numerical experiments, that the presence of weak noise in the model may annihilate the generation (but not the propagation) of waves. We reproduced these numerical experiments in our setting for the Barkley and Fitzhugh-Nagumo models in order to investigate whether this phenomenon may be observed in two dimensions. Thus, we considered the Barkley model with a periodic deterministic input and driven by a colored noise, and performed simulations of this model with increasing noise intensity. Unfortunately, we were not able to produce the inhibition of a periodic deterministic signal using weak noise. Of course, with a stronger noise intensity, signals due to the stochasticity of the system are generated and perturb the deterministic periodic signal, but this is not surprising. It seems to us that FitzHugh-Nagumo could be a better model to consider for the question of annihilating the generation of waves by weak noise. We noted that a lot of care is required when choosing the mesh as well as the parameters (intensity of the noise, intensity of the deterministic signal, duration of its period), if we want to produce sound results.
\section{Proof of Theorem \ref{prop_glob_err}}\label{app_bgt_1}
Recall that the domain $D$ is polyhedral such that \[ \overline{D}=\bigcup_{T\in\mathcal{T}_h}T. \] Let $i\in\{0,0_a,1\}$. The process $(D_h(t),~t\in[0,\tau])$ defined by \[ D_h(t)=W^Q_t-W^{Q,h,i}_t \] is a centered Wiener process. In particular, it is a continuous martingale and thus, by the Burkholder-Davis-Gundy inequality (see Theorem 3.4.9 of \cite{PeZa07}) we have \[
\mathbb{E}\left(\sup_{t\in[0,\tau]}\|D_h(t)\|^2\right)\leq c_2\mathbb{E}(\|D_h(\tau)\|^2) \] with $c_2$ a constant which does not depend on $h$ or $\tau$. We begin with the case $i=1$. Since the processes $W^Q$ and $W^{Q,h,1}$ are regular in space, we write \[
\mathbb{E}(\|D_h(\tau)\|^2)=\mathbb{E}\left(\int_D(W^Q_\tau(x)-W^{Q,h,1}_\tau(x))^2{\rm d}x\right). \] We use the definition of $W^{Q,h,1}$ in Definition \ref{Def_app} and the fact that $\sum_{i=1}^{N_h}\psi_i=1$ to obtain \begin{eqnarray*}
\mathbb{E}(\|D_h(\tau)\|^2)&=&\mathbb{E}\left(\int_D(W^Q_\tau(x)-\sum_{i=1}^{N_h}W^Q_\tau(P_i)\psi_i(x))^2{\rm d}x\right)\\ &=&\mathbb{E}\left(\int_D(\sum_{i=1}^{N_h}(W^Q_\tau(x)-W^Q_\tau(P_i))\psi_i(x))^2{\rm d}x\right)\\ &=&\mathbb{E}\left(\int_D\sum_{i,j=1}^{N_h}(W^Q_\tau(x)-W^Q_\tau(P_i))(W^Q_\tau(x)-W^Q_\tau(P_j))\psi_i(x)\psi_{j}(x){\rm d}x\right). \end{eqnarray*} By an application of Fubini's theorem, exchanging the expectation with the integral and the summation, we get \begin{eqnarray*}
\mathbb{E}(\|D_h(\tau)\|^2)&=&\sum_{i,j=1}^{N_h}\int_D\mathbb{E}\left((W^Q_\tau(x)-W^Q_\tau(P_i))(W^Q_\tau(x)-W^Q_\tau(P_j))\right)\psi_i(x)\psi_{j}(x){\rm d}x\\ &=&\tau\sum_{i,j=1}^{N_h}\int_D\left(C(0)-C(P_i-x)-C(P_j-x)+C(P_i-P_j)\right)\psi_i(x)\psi_{j}(x){\rm d}x. \end{eqnarray*} For all $1\leq i,j\leq N_h$, if the intersection of the supports of $\psi_i$ and $\psi_j$ is not empty, then \[
\forall x\in\text{supp} \psi_i,\forall y\in\text{supp}\psi_j, |x-y|\leq K h. \] Thus, there exists $K>0$ such that, for all $i,j$, if $\text{supp} \psi_i\cap\text{supp} \psi_{j}\neq \emptyset$ and $x\in \text{supp} \psi_i\cap\text{supp} \psi_{j}$, a Taylor expansion yields \[
|C(0)-C(P_i-x)-C(P_j-x)+C(P_i-P_j)|\leq K\max_{x\in\overline{D}}\|{\rm Hess}~C(x)\| h^2, \] where we have used the fact that $\nabla C(0)=0$. Then,
\[\mathbb{E}(\|D_h(\tau)\|^2)\leq K\tau\max_{x\in\overline{D}}\|{\rm Hess}~C(x)\| h^2. \] This ends the proof for the case $i=1$. The case $i=0$ can be treated similarly.\\ For the case $i=0_a$, we proceed as follows. The process $W^{Q,h,0_a}$ is the orthonormal projection of $W^Q$ on the space P0, thus, we have, using the Pythagorean theorem, \[
\mathbb{E}(\|D_h(\tau)\|^2)=\mathbb{E}\left(\|W^Q_\tau-W^{Q,h,0_a}_\tau\|^2\right)=\mathbb{E}\left(\|W^Q_\tau\|^2-\|W^{Q,h,0_a}_\tau\|^2\right). \] Then, recalling that the processes $W^Q$ and $W^{Q,h,0_a}$ are regular in space and using the fact that the triangles $T\in\mathcal{T}_h$ do not intersect, we obtain, \begin{align*}
\mathbb{E}(\|D_h(\tau)\|^2)&=\mathbb{E}\left(\int_DW^Q_\tau(x)^2{\rm d} x-\sum_{T\in\mathcal{T}_h}\frac{1}{|T|}(W^Q_\tau,1_T)^2\right). \end{align*} By an application of Fubini's theorem, exchanging the expectation with the integral and the summation, we get \begin{equation}\label{bgt_pass}
\mathbb{E}(\|D_h(\tau)\|^2)=\tau\left(C(0)|D|-\sum_{T\in\mathcal{T}_h}\frac{1}{|T|}(Q1_T,1_T)\right). \end{equation} Since $\overline{D}=\bigcup_{T\in\mathcal{T}_h}T$ we have \[
C(0)|D|=\sum_{T\in\mathcal{T}_h}\frac{1}{|T|}\int_T\int_TC(0){\rm d}z_1{\rm d}z_2, \] hence, plugging in (\ref{bgt_pass}) \begin{equation}\label{eq_1}
\mathbb{E}(\|D_h(\tau)\|^2)=\tau\sum_{T\in\mathcal{T}_h}\frac{1}{|T|}\int_T\int_T[C(0)-C(z_1-z_2)]{\rm d}z_1{\rm d}z_2. \end{equation} Thanks to the fact that $\nabla C(0)=0$, a Taylor expansion yields \begin{equation}\label{eq_C_p}
C(0)-C(z_1-z_2)=-\frac{1}{2}(z_1-z_2)\cdot{\rm Hess}~C(0)(z_1-z_2)+{\rm o}(|z_1-z_2|^2). \end{equation} Thus, thanks to (\ref{hyp_triangle}), for all $z_1,z_2$ in the same triangle $T$ \[
|C(0)-C(z_1-z_2)|\leq K\max_{x\in\overline{D}}\|{\rm Hess}~C(x)\| h^2, \] where the constant $K$ is independent from $T\in\mathcal{T}_h$. Plugging in (\ref{eq_1}) yields \[
\mathbb{E}(\|D_h(\tau)\|^2)\leq K\max_{x\in\overline{D}}\|{\rm Hess}~C(x)\|\tau h^2 \] for a deterministic constant $K$.
\end{document}
\begin{document}
\journal{...} \title{A nonsmooth variational approach to semipositone quasilinear problems in $\mathbb{R}^N$}
\author[1]{Jefferson Abrantes Santos\fnref{t1}} \address[1]{Universidade Federal de Campina Grande, Unidade Acad\^emica de Matem\'atica, \\ CEP: 58429-900, Campina Grande - PB, Brazil.} \ead{[email protected]} \author[1]{Claudianor O. Alves\fnref{t2}} \ead{[email protected]} \author[3]{Eugenio Massa\fnref{t3}}\address[3]{{ Departamento de Matem\'atica,
Instituto de Ci\^encias Matem\'aticas e de Computa\c c\~ao, Universidade de S\~ao Paulo,
Campus de S\~ao Carlos, 13560-970, S\~ao Carlos SP, Brazil.}}
\ead{[email protected]}
\fntext[t1]{J. Abrantes Santos was partially supported by CNPq/Brazil 303479/2019-1}\fntext[t2]{C. O. Alves was partially supported by CNPq/Brazil 304804/2017-7}\fntext[t3]{E. Massa was partially supported by grant $\#$303447/2017-6, CNPq/Brazil.}
\date{}
\begin{abstract}
This paper concerns the existence of a solution for the following class of semipositone quasilinear problems
\begin{equation*}
\left \{
\begin{array}{rclcl}
-\Delta_p u & = & h(x)(f(u)-a) & \mbox{in} & \mathbb{R}^N, \\
u& > & 0 & \mbox{in} & \mathbb{R}^N, \\
\end{array}
\right.
\end{equation*} where $1<p<N$, $a>0$, $ f:[0,+\infty) \to [0,+\infty)$ is a function with subcritical growth and $f(0)=0$, while $h:\mathbb{R}^N \to (0,+\infty)$ is a continuous function that satisfies some technical conditions. We prove via nonsmooth critical points theory and comparison principle, that a solution exists for $a$ small enough. We also provide a version of Hopf's Lemma and a Liouville-type result for the $p$-Laplacian in the whole $\mathbb{R}^N$.
\noindent {\bf Mathematical Subject Classification MSC2010:} 35J20, 35J62 (49J52).
\noindent {\bf Key words and phrases:}
semipositone problems; quasilinear elliptic equations; nonsmooth variational methods; Lipschitz functional; positive solutions. \end{abstract}
\maketitle
\section{Introduction}
In this paper we study the existence of positive weak solutions for the $p$-Laplacian semipositone problem in the whole space \begin{equation}\label{Problem-P}\tag{$P_a$}
\left \{
\begin{array}{rclcl}
-\Delta_p u & = & h(x)(f(u)-a) & \mbox{in} & \mathbb{R}^N, \\
u& > & 0 & \mbox{in} & \mathbb{R}^N, \\
\end{array}
\right. \end{equation} where $1<p<N$, $a>0$, $f:[0,+\infty) \to [0,+\infty)$ is a continuous function with subcritical growth and $f(0)=0$. Moreover, the function $h:\mathbb{R}^N \to (0,+\infty)$ is a continuous function satisfying \begin{itemize} \item[(\aslabel{$P_{1}$})\label{Hp_P2_L1Li}] $h\in L^{1}(\mathbb{R}^N)\cap L^{\infty}(\mathbb{R}^N)$,
\item[(\aslabel{$P_{2}$})\label{Hp_P4_Bbeta}] $h(x)<B|x|^{-\vartheta}$ for $x\neq0$, with $\vartheta>N$ and $B>0$. \end{itemize}
An example of a function $h$ that satisfies the hypotheses \eqref{Hp_P2_L1Li}$-$\eqref{Hp_P4_Bbeta} is given below: $$
h(x)=\frac{B}{1+|x|^{\vartheta}}, \quad \forall x \in \mathbb{R}^N. $$
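Indeed, $h\in L^{\infty}(\mathbb{R}^N)$ since $0<h\leq B$, and, denoting by $\sigma_{N-1}$ the surface measure of the unit sphere of $\mathbb{R}^N$, an integration in polar coordinates gives
$$
\int_{\mathbb{R}^N}h(x)\,dx=B\,\sigma_{N-1}\int_{0}^{\infty}\frac{\rho^{N-1}}{1+\rho^{\vartheta}}\,d\rho<\infty,
$$
because $\vartheta>N$, so that \eqref{Hp_P2_L1Li} holds; moreover, $h(x)=\frac{B}{1+|x|^{\vartheta}}<B|x|^{-\vartheta}$ for $x\neq0$, which is \eqref{Hp_P4_Bbeta}.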
In the whole of this paper, we say that a function $u \in D^{1,p}(\mathbb{R}^N)$ is a weak solution for (\ref{Problem-P}) if $u$ is a continuous positive function that verifies $$
\int_{\mathbb{R}^N}|\nabla u|^{p-2}\nabla u \nabla v\,dx=\int_{\mathbb{R}^N}h(x)(f(u)-a)v\,dx, \quad \forall v \in D^{1,p}(\mathbb{R}^N). $$
\subsection{State of the art.} The problem (\ref{Problem-P}) with $a = 0$ is straightforward to solve, either by employing the well-known mountain pass theorem due to Ambrosetti and Rabinowitz \cite{AmbRab1973} or via minimization. However, when (\ref{Problem-P}) is semipositone, that is, when $a> 0$, the existence of a positive solution is more delicate: the standard arguments via the mountain pass theorem combined with the maximum principle do not directly produce a positive solution, and a very careful analysis is needed.
The literature associated with semipositone problems in bounded domains is very rich since the appearance of the paper by Castro and Shivaji \cite{CasShi1988} who were the first to consider this class of problems. We have observed that there are different methods to prove the existence and nonexistence of solutions, such as subsupersolutions, degree theory arguments, fixed point theory and bifurcation; see for example \cite{AliCaShi}, \cite{AmbArcBuff},\cite{AllNisZecca}, \cite{AnuHaiShi1996} and their references. In addition to these methods, also variational methods were used in a few papers as can be seen in \cite{AldHoSa19_semipos_omega}, \cite{CalCasShiUns2007}, \cite{CFELo}, \cite{CDS}, \cite{CQT}, \cite{CTY}, \cite{DC}, \cite{FigMasSan_KirchSemipos}, \cite{MR2008685} and \cite{Jea1_cont}. We would like to point out that in \cite{CFELo}, Castro, de Figueiredo and Lopera studied the existence of solutions for the following class of semipositone quasilinear problems \begin{equation}\label{Problem-P2}
\left \{
\begin{array}{rclcl}
-\Delta_p u & = & \lambda f(u) & \mbox{in} & \Omega, \\
u(x)& > & 0 & \mbox{in} & \Omega, \\
u & = & 0 & \mbox{on} & \partial\Omega, \\
\end{array}
\right. \end{equation} where $\Omega \subset\mathbb{R}^{N}$, $N > p>2$, is a smooth bounded domain, $\lambda >0$ and $f:\mathbb{R} \to \mathbb{R}$ is a differentiable function with $f(0)<0$. In that paper, the authors assumed that there exist $q \in (p-1, \frac{Np}{N-p}-1), A,B>0 $ such that $$ \left\{ \begin{array}{l}
A(t^q-1)\leq f(t) \leq B(t^q-1), \quad \mbox{for} \quad t>0\\
f(t)=0, \quad \mbox{for} \quad t \leq -1. \end{array} \right. $$ The existence of a solution was proved by combining the mountain pass theorem with the regularity theory. Motivated by the results proved in \cite{CFELo}, Alves, de Holanda and dos Santos \cite{AldHoSa19_semipos_omega} studied the existence of solutions for a large class of semipositone quasilinear problems of the type \begin{equation}\label{Problem-P3}
\left \{
\begin{array}{rclcl}
-\Delta_\Phi u & = & f(u)-a & \mbox{in} & \Omega, \\
u(x)& > & 0 & \mbox{in} & \Omega, \\
u & = & 0 & \mbox{on} & \partial\Omega, \\
\end{array}
\right. \end{equation} where $\Delta_\Phi$ stands for the $\Phi$-Laplacian operator. The proof of the main result is also done via variational methods, however in their approach the regularity results found in Lieberman \cite{L1,L2} play an important role. By using the mountain pass theorem, the authors found a solution $u_a$ for all $a>0$. After that, by taking the limit when $a$ goes to 0 and using the regularity results in \cite{L1,L2}, they proved that $u_a$ is positive for $a$ small enough.
Related to semipositone problems in unbounded domains, we only found the paper due to Alves, de Holanda, and dos Santos \cite{AldHoSa19_semipos_RN} that studied the existence of solutions for the following class of problems \begin{equation}\label{Problem-PAHS}
\left \{
\begin{array}{rclcl}
-\Delta u & = & h(x)(f(u)-a) & \mbox{in} & \mathbb{R}^N, \\
u& > & 0 & \mbox{in} & \mathbb{R}^N, \\
\end{array}
\right. \end{equation} where $a>0$, $f:[0,+\infty) \to [0,+\infty)$ and $h:\mathbb{R}^N \to (0,+\infty)$ are continuous functions with $f$ having a subcritical growth and $h$ satisfying some technical conditions. The main tools used were variational methods combined with the Riesz potential theory.
\subsection{Statement of the main results.} Motivated by the results found in \cite{CFELo}, \cite{AldHoSa19_semipos_omega} and \cite{AldHoSa19_semipos_RN}, we intend to study the existence of solutions for (\ref{Problem-P}) with two different types of nonlinearities. In order to state our first main result, we assume the following conditions on $f$: \begin{itemize}
\item[(\aslabel{$f_0$}) \label{Hp_forig}] \qquad $$\displaystyle\lim_{t\to 0^+} \frac{F(t)}{t^{p}}=0\,;$$ \end{itemize} \begin{itemize} \item[(\aslabel{$f_{sc}$})\label{Hp_estSC}] \qquad there exists $q\in (1,p^*)$ such that $\displaystyle \limsup_{t\to +\infty} \frac{f(t)}{t^{q-1}}<\infty,$ \end{itemize} where $p^*=\frac{pN}{N-p}$ is the critical Sobolev exponent; \begin{itemize} \item[(\aslabel{$f_\infty$}) \label{Hp_PS_SQ}] \qquad $q>p$ in \eqref{Hp_estSC} and there exist $\theta>p$ and $t_0>0$ such that \begin{eqnarray*}
&& 0< \theta F(t) \leq f(t)t, \quad \forall t>t_0, \end{eqnarray*} \end{itemize} where $F(t)=\int_{0}^{t}f(\tau) \, d \tau$.
Our first main result has the following statement
\begin{theorem} \label{Theorem1} Assume the conditions \eqref{Hp_P2_L1Li}$-$\eqref{Hp_P4_Bbeta}, \eqref{Hp_estSC}, \eqref{Hp_forig} and \eqref{Hp_PS_SQ}.
Then there exists $a^{\ast}>0$
such that, if $a\in[0,a^{\ast})$, problem \eqref{Problem-P} has a positive weak solution $u_a\in C(\mathbb{R}^N) \cap D^{1,p}(\mathbb{R}^N)$. \end{theorem}
As mentioned above, a version of Theorem \ref{Theorem1} was proved in \cite{AldHoSa19_semipos_RN} in the semilinear case $p=2$. Their proof exploited variational methods for $C^1$ functionals and Riesz potential theory in order to prove the positivity of the solutions of a smoothly approximated problem, which then turned out to be actual solutions of problem \eqref{Problem-P}. In our setting, since we are working with the $p$-Laplacian, which is a nonlinear operator, no analogue of the Riesz potential theory is available that works well for this class of operators. Hence, a different approach had to be developed in order to treat problem (\ref{Problem-P}) for $p \not =2$. Here, we make a different approximation of problem \eqref{Problem-P}, which leads us to work with a {\it nonsmooth approximating functional}.
As a result, Theorem \ref{Theorem1} is also new when $p=2$, since the set of hypotheses we assume here is different. In fact, by avoiding the Riesz theory we do not need to assume that $f$ is Lipschitz (which would not even be possible under condition \eqref{Hp_forig_up} below), while a different condition on the decay of the function $h$ is required.
The nonsmooth approach turns out to simplify several of the technicalities usually involved in the treatment of semipositone problems. Indeed, working with the $C^1$ functional naturally associated with \eqref{Problem-P}, one obtains critical points $u_a$ that may be negative somewhere.
When working in bounded domains, the positivity of $u_a$ is obtained, in the limit as $a\to0$, by proving convergence in the $C^1$ sense to the positive solution $u_0$ of the case $a=0$; this is enough because, by Hopf's Lemma, the normal derivative of $u_0$ at the boundary is bounded away from zero. This approach can be seen for instance in \cite{Jea1_cont,CFELo,AldHoSa19_semipos_omega,FigMasSan_KirchSemipos}.
In $\R^N$, a different argument must be used: one can still obtain convergence on compact sets, but the limiting solution $u_0$ decays to zero at infinity like $|x|^{(p-N)/(p-1) }$ (see Remark \ref{rm_udec}), which means that finer estimates on the convergence are needed.
In \cite{AldHoSa19_semipos_RN}, with $p=2$, the use of the Riesz potential allowed the authors to prove that $|x|^{N-2 }|u_a-u_0|\to0$ uniformly, which then led to the positivity of $u_a$ in the limit.
Lacking this tool, we had to find a different way to prove the positivity of $u_a$.
The great advantage of our approach via nonsmooth analysis is that our critical points $u_a$ will always be nonnegative functions (see Lemma \ref{lm_prop_minim}). In spite of not necessarily being weak solutions of the equation in problem \eqref{Problem-P}, they turn out to be supersolutions of it and also subsolutions of the limit equation with $a=0$. These properties will allow us to use a comparison principle in order to prove the strict positivity of $u_a$ with the help of a suitable barrier function (see Lemmas \ref{lm_z} and \ref{lm_ujpos}). From the positivity it will immediately follow that $u_a$ is indeed a weak solution of \eqref{Problem-P}.
\par
The reader is invited to see that by \eqref{Hp_PS_SQ}, there exist $A_1,B_1>0$ such that \begin{equation} \label{AR}
F(t) \geq A_1|t|^{\theta}-B_1, \quad \text{for } t\geq0. \end{equation} This inequality yields that the functional we will be working with is not bounded from below. On the other hand, the condition \eqref{Hp_forig} will produce a ``range of mountains'' geometry around the origin for the functional, which completes the mountain pass structure.
Finally, conditions \eqref{Hp_estSC} and \eqref{Hp_PS_SQ} impose a subcritical growth on $f$ and are used to obtain the required compactness condition.
\par
Next, we are going to state our second result. For this result, we still assume \eqref{Hp_estSC} together with the following conditions: \begin{itemize} \item[(\aslabel{$\widetilde f_0$}) \label{Hp_forig_up}] $$\displaystyle\lim_{t\to 0^+} \frac{F(t)}{t^{p}}=\infty\,;$$ \end{itemize} \begin{itemize} \item[(\aslabel{$\widetilde f_\infty$}) \label{Hp_PS_sQ}]\qquad $q<p$ in \eqref{Hp_estSC}. \end{itemize}
Our second main result is the following:
\begin{theorem} \label{Theorem2} Assume the conditions \eqref{Hp_P2_L1Li}$-$\eqref{Hp_P4_Bbeta}, \eqref{Hp_estSC}, \eqref{Hp_forig_up} and \eqref{Hp_PS_sQ}. Then there exists $a^{\ast}>0$ such that, if $a\in[0,a^{\ast})$,
problem \eqref{Problem-P} has
a positive weak solution $u_a\in C(\mathbb{R}^N) \cap D^{1,p}(\mathbb{R}^N)$. \end{theorem}
In the proof of Theorem \ref{Theorem2}, the condition \eqref{Hp_forig_up} will produce a situation where the origin is not a local minimum for the energy functional, while \eqref{Hp_PS_sQ} will make the functional coercive, in view of \eqref{Hp_estSC}. It will be then possible to obtain solutions via minimization. As in the proof of Theorem \ref{Theorem1}, we will work with a nonsmooth approximating functional that will give us an approximate solution. After some computation, we prove that this approximate solution is in fact a solution for the original problem when $a$ is small enough.
\par
\begin{remark} Observe that if $f,h$ satisfy the set of conditions of Theorem \ref{Theorem1} or those of Theorem \ref{Theorem2} and $u$ is a solution of Problem \eqref{Problem-P}, then the rescaled function $v=a^{\frac{-1}{q-1}}u$ is a solution of the problem:
\begin{equation}\label{Problem-P_resc}
\left \{
\begin{array}{rclcl}
-\Delta_p v & = & a^{\frac{(q-p)}{q-1}} h(x)(\widetilde f_a(v)-1) & \mbox{in} & \mathbb{R}^N, \\
v& > & 0 & \mbox{in} & \mathbb{R}^N, \\
\end{array}
\right.
\end{equation}
which then takes the form of Problem \eqref{Problem-P2}, with $\lambda:=a^{\frac{(q-p)}{q-1}}$ and a new nonlinearity $\widetilde f_a(t)=a^{-1}f(a^{\frac{1}{q-1}}t)$, which satisfies the same hypotheses as $f$. In particular, if $f(t)=t^{q-1}$ then $\widetilde f_a\equiv f$.
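For the reader's convenience, the computation behind the rescaling is the following: since $-\Delta_p(\lambda w)=\lambda^{p-1}(-\Delta_p w)$ for every $\lambda>0$, writing $u=a^{\frac{1}{q-1}}v$ we obtain
\begin{equation*}
-\Delta_p v=a^{-\frac{p-1}{q-1}}\,h(x)\big(f(a^{\frac{1}{q-1}}v)-a\big)=a^{\frac{q-p}{q-1}}\,h(x)\big(a^{-1}f(a^{\frac{1}{q-1}}v)-1\big),
\end{equation*}
which is the equation in \eqref{Problem-P_resc}.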
In the conditions of Theorem \ref{Theorem1}, where $q>p$, we obtain a solution of Problem \eqref{Problem-P_resc} for suitably small values of $\lambda$, while in the conditions of Theorem \ref{Theorem2}, where $q<p$, solutions are obtained for suitably large values of $\lambda$.
It is worth noting that, as $a\to0$, the solutions of Problem \eqref{Problem-P} that we obtain are bounded and converge, up to subsequences, to a solution of Problem \eqref{Problem-P} with $a=0$ (see Lemma \ref{lemma6}). As a consequence, the corresponding solutions of Problem \eqref{Problem-P_resc} satisfy $v(x)\to \infty$ for every $x\in\R^N$.
Semipositone problems formulated as in \eqref{Problem-P_resc} were considered recently in \cite{CoRQTeh_semipos,PeShSi_semiposCrit}.
\end{remark}
\par
As a final result, we also show that, by the same technique used to prove the positivity of our solution, it is possible to obtain a version of Hopf's Lemma for the $p$-Laplacian in the whole $\mathbb{R}^N$, see Proposition \ref{hopf}. A further consequence is the following Liouville-type result:
\begin{proposition}\label{prop_Liou} Let $N>p>1$ and $u \in D^{1,p}(\mathbb{R}^N) \cap C_{loc}^{1,\alpha}(\mathbb{R}^N)$ be a solution of problem: \begin{equation} \left \{ \begin{array}{rclcl} -\Delta_p u & = & g(x) \hat f(u)& \mbox{in} & \mathbb{R}^N, \\ u& \geq & 0 & \mbox{in} & \mathbb{R}^N, \\ \end{array} \right. \end{equation}
where $\hat f,g$ are continuous, $g(x)>0$ in $\mathbb{R}^N$ and $\hat f(0)=0$ while $\hat f(t)>0$ for $t>0$. If $\liminf_{|x|\to\infty} |x|^{(N-p)/(p-1)}u(x)=0$, then $u\equiv0$. \end{proposition}
\subsection{Organization of the article.}
This article is organized as follows: in Section \ref{sec_prelim}, we prove the existence of a nonnegative solution, denoted by $u_a$, for a class of approximate problems. In Section \ref{sec_estim}, we establish some properties of the approximate solution $u_a$. In Section \ref{sec_prfmain}, we prove Theorems \ref{Theorem1} and \ref{Theorem2}. Finally, in Section \ref{sec_hopf}, we prove Proposition \ref{hopf}, a version of Hopf's Lemma for the $p$-Laplacian in the whole $\mathbb{R}^N$.
\subsection{Notations.} Throughout this paper, the letters $c$, $c_{i}$, $C$, $C_{i}$, $i=1, 2, \ldots, $ denote positive constants which vary from line to line, but are independent of terms that take part in any limit process. Furthermore, we denote the norm of $L^{p}(\Omega)$ for any $p\geq 1$ by $\|\,.\,\|_{p}$. In some places we will use ``$\rightarrow$'', ``$\rightharpoonup$'' and ``$\stackrel{*}{\rightharpoonup}$'' to denote the strong convergence, weak convergence and weak star convergence, respectively.
\section{Preliminary results}\label{sec_prelim}
In the sequel, we consider the discontinuous function $f_a:\mathbb{R} \longrightarrow\mathbb{R}$ given by \begin{equation}\label{eq_fa_d}
f_a(t)=\left \{
\begin{array}{ccl}
f(t)-a & \mbox{if} & t\geq 0, \\
0 & \mbox{if} & t<0, \\
\end{array}
\right. \end{equation} and its primitive \begin{equation}\label{eq_Fa}
F_a(t)=\displaystyle\int_{0}^{t} f_{a}(\tau)d\tau=\left \{
\begin{array}{ccl}
F(t)-at & \mbox{if} & t\geq 0, \\
0& \mbox{if} & t\leq 0. \\
\end{array}
\right. \end{equation} A direct computation gives \begin{equation}\label{eq_Fa_est}
-at^+\leq F_a(t)\leq \left \{\begin{array}{ccl}
F(t) & \mbox{if} & t\geq 0, \\
0 & \mbox{if} & t\leq 0,
\end{array}\right. \end{equation} where $t^+=\max\{t,0\}$.
Our intention is to prove the existence of a positive solution for the following auxiliary problem \begin{equation}\label{Problem-PA}\tag{AP$_a$}
\left \{
\begin{array}{rclcl}
-\Delta_p u & = & h(x)f_a(u) & \mbox{in} & \mathbb{R}^N, \\
u& > & 0 & \mbox{in} & \mathbb{R}^N, \\
\end{array}
\right. \end{equation}
because such a solution is also a solution of \eqref{Problem-P}.
Associated with \eqref{Problem-PA}, we have the energy functional $I_a:D^{1,p}(\mathbb{R}^N)\longrightarrow\mathbb{R}$ defined by \begin{equation*}
I_a(u)=\frac{1}{p}\int_{\mathbb{R}^N}|\nabla u|^pdx -\int_{\mathbb{R}^N} h(x)F_{a}(u)dx, \end{equation*} which is only locally Lipschitz.
Hereafter, we will endow $D^{1,p}(\mathbb{R}^N)=\left\{u\in L^{p^*}(\mathbb{R}^N);\, \nabla u\in L^p(\mathbb{R}^N,\mathbb{R}^N)\right\} $ with
the usual norm
$$
\|u\|=\left( \int_{\mathbb{R}^N}|\nabla u|^{p}\,dx \right)^{\frac{1}{p}}.
$$
Since the Gagliardo-Nirenberg-Sobolev inequality (see \cite{Evans})
$$
\|u\|_{p^*}\leq S_{N,p} \|u\|
$$
holds for all $u\in D^{1,p}(\mathbb{R}^N)$ for some constant $S_{N,p}>0$, we have that the embedding
\begin{equation} \label{IM}
D^{1,p}(\mathbb{R}^N)\hookrightarrow L^{p^*}(\mathbb{R}^N)
\end{equation}
is continuous. The following Lemma provides a useful compact embedding for $D^{1,p}(\mathbb{R}^N)$. \begin{lemma}\label{l1}
Assume \eqref{Hp_P2_L1Li}.
Then, the embedding $D^{1,p}(\mathbb{R}^N)\hookrightarrow L^q_h(\mathbb{R}^N)$ is continuous and compact for every $q\in [1,p^*)$. \end{lemma} \begin{proof} The continuity is obtained by the H\"older inequality, using \eqref{IM} and \eqref{Hp_P2_L1Li}: \begin{equation} \label{I1}
\int_{\mathbb{R}^N}h|u|^{q}\,dx \leq \n{h}_{r}\|u\|^q_{p^*}\leq C_h\|u\|^q, \quad \forall u \in D^{1,p}(\mathbb{R}^N), \end{equation} where $r=p^*/(p^*-q)$ is dual to $p^*/q$.
Let $\{u_n\}$ be a sequence in $D^{1,p}(\mathbb{R}^N)$ with $u_n\rightharpoonup 0\ \mbox{in}\ D^{1,p}(\mathbb{R}^N).$ For each $R>0$, we have the continuous embedding $D^{1,p}(\mathbb{R}^N) \hookrightarrow W^{1,p}(B_R(0))$. Since the embedding $W^{1,p}(B_R(0)) \hookrightarrow L^p(B_R(0))$ is compact, it follows that $D^{1,p}(\mathbb{R}^N) \hookrightarrow L^p(B_R(0))$ is a compact embedding as well. Hence, up to a subsequence, still denoted by $\{u_n\}$, $$ u_n(x)\rightarrow 0\ \mbox{a.e. in}\ \mathbb{R}^N. $$
By the continuous embedding \eqref{IM}, we also know that $\{|u_n|^{q}\}$ is a bounded sequence in $L^{\frac{p^*}{q}}(\mathbb{R}^N)$. Then, up to a subsequence if necessary, $$
|u_n|^{q}\rightharpoonup 0 \mbox{ in } L^{\frac{p^*}{q}}(\mathbb{R}^N), $$ or equivalently, $$
\int_{\mathbb{R}^N}|u_n|^{q}\varphi dx \to 0, \quad \forall \varphi \in L^{r}(\mathbb{R}^N). $$ As \eqref{Hp_P2_L1Li} guarantees that $h \in L^{r}(\mathbb{R}^N)$, it follows that $$
\int_{\mathbb{R}^N}h(x)|u_n|^qdx \to 0. $$ This shows that $u_n \to 0$ in $L^q_h(\mathbb{R}^N)$, finishing the proof. \end{proof}
We also give the following result that will be used later. \begin{lemma} \label{CN} If $u_n\rightharpoonup u$ in $D^{1,p}(\mathbb{R}^N)$ and \eqref{Hp_P2_L1Li}
holds, then $$
\int_{\mathbb{R}^N}h(x)|u_n-u||u_n|^{q-1}\,dx \to 0 \quad \mbox{as} \quad n \to +\infty, \quad \forall q\in[ 1,p^*). $$ \end{lemma} \begin{proof} Set $r=\frac{p^*}{q-1}\in \left(\frac{p^*}{p^*-1}, \infty\right]$ and $r'=\frac{p^*}{p^*-(q-1)}\in [1,p^*)$ its dual exponent.
First note that $\{u_n\}$ is bounded in $D^{1,p}(\mathbb{R}^N)$, and so, $\{|u_n|^{q-1}\}$ is bounded in $L^r(\mathbb{R}^N)$
by \eqref{IM}, while $h|u_n-u| \to0$ in $L^{r'}$ since we can apply Lemma \ref{l1} with $h^{r'}$ in the place of $h$, which also satisfies condition \eqref{Hp_P2_L1Li}. Then by H\"older inequality
$$
\int_{\mathbb{R}^N}h(x)|u_n-u||u_n|^{q-1}\,dx \leq \|h(u_n-u)\|_{r'}\|u_n^{q-1}\|_{r}\to 0.
$$ \end{proof}
\subsection{Critical points theory for the functional $I_a$ }
As mentioned in the last subsection, the functional $I_a$ is only locally Lipschitz in $D^{1,p}(\mathbb{R}^N)$, so we cannot use variational methods for $C^1$ functionals. With this in mind, we will use the theory of critical points for locally Lipschitz functions in a Banach space; see Clarke \cite{Clarke_nonsmooth} for more details.
First of all, we recall that $u\in D^{1,p}(\mathbb{R}^N)$ is a critical point of $I_a$ if \begin{equation}\label{eq_varminim}
\int_{\mathbb{R}^N}|\nabla u|^{p-2}\nabla u \nabla v\,dx+\int_{\mathbb{R}^N}h(x)(-F_a)^0(u,v)\,dx\geq 0, \quad \forall v \in D^{1,p}(\mathbb{R}^N), \end{equation} where $$(-F_a)^0(t,s)=\limsup_{\xi\searrow0,\,\tau\to t}\frac {-F_a(\tau+\xi s)+F_a(\tau)}{\xi }$$ indicates the generalized directional derivative of $-F_a$ at the point $t$ along the direction $s$.
It is easy to see that a global minimum is always a critical point; moreover, an analogue of the classical mountain pass theorem holds (see \cite{Chang81_VMnsm}), where a critical point in the sense of \eqref{eq_varminim} is obtained at the usual minimax level provided the following form of the (PS)-condition holds: \begin{itemize} \item [(\aslabel{$PS_L$}) \label{PSL}]
If $\{u_n\}$ is a sequence in $D^{1,p}(\mathbb{R}^N)$ such that $\{I_a(u_n)\}$ is bounded and \begin{equation}\label{eq_varPS}
\int_{\mathbb{R}^N}|\nabla u_n|^{p-2}\nabla u_n \nabla v\,dx+\int_{\mathbb{R}^N}h(x)(-F_a)^0(u_n,v)\,dx\geq- \varepsilon_n\n{v}, \end{equation} $\forall v \in D^{1,p}(\mathbb{R}^N),$ where $\varepsilon_n\to0$, then $\{u_n\}$ admits a convergent subsequence. \end{itemize}
\par
In the next Lemma, let us collect some useful properties that can be derived by the definition of critical points of $I_a$, given in \eqref{eq_varminim}. \begin{lemma}\label{lm_prop_minim} Assume \eqref{Hp_P2_L1Li} and (\ref{Hp_estSC}). Then a critical point $u_a$ of the functional $I_a$, as defined in \eqref{eq_varminim}, has the following properties: \begin{enumerate} \item $u_a\geq 0$ in $\R^N$; \item if $u_a>0$ in $\R^N$ then it is a weak solution of problem \eqref{Problem-PA}, and also a solution of problem \eqref{Problem-P}; \item $u_a$ is a weak subsolution of $-\Delta_p u=h(x)f(u)$ in $\R^N$; \item $u_a$ is a weak supersolution of $-\Delta_p u=h(x)(f(u)-a)$ in $\R^N$. \end{enumerate} \end{lemma} \begin{proof} Straightforward calculations give \begin{equation}
\label{eq_Fa0}
(-F_a)^0(t,s)=\begin{cases}
-(f(t)-a)s& \mbox{for $t>0,\ s\in\R$}\\
as&\mbox{for $t=0,\,s>0$}\\
0& \mbox{for $\begin{cases}t<0,\ s\in\R\\t=0,\,s\leq0\,.\end{cases}$}
\end{cases} \end{equation} By using $u_a^-=\max\cub{0,-u_a}$ as a test function in \eqref{eq_varminim} we get
$$-\n {u_a^-}^p= \int_{\mathbb{R}^N}|\nabla u_a|^{p-2}\nabla u_a \nabla u_a^-\,dx\geq-\int_{\mathbb{R}^N}h(x)(-F_a)^0(u_a,u_a^-)\,dx \geq 0\,,$$ hence $u_a^-\equiv0$, which proves (1.).
If $u_a>0$ in $supp(\phi)$, then from \eqref{eq_varminim}, \begin{equation*}
\int_{\mathbb{R}^N}|\nabla u_a|^{p-2}\nabla u_a \nabla \phi\,dx\geq
-\int_{\mathbb{R}^N}h(x)(-F_a)^0(u_a,\phi)\,dx=\int _{\mathbb{R}^N} h(x)f_a(u_a) \phi\,dx\,, \end{equation*} and by testing also with $-\phi$ one obtains equality, then (2.) is proved.
Taking $\phi\geq0$ as a test function in \eqref{eq_varminim}, we get \begin{equation*}
\int_{\mathbb{R}^N}|\nabla u_a|^{p-2}\nabla u_a \nabla \phi\,dx\geq
-\int_{\mathbb{R}^N}h(x)(-F_a)^0(u_a,\phi)\,dx\geq \int_{\mathbb{R}^N}h(x)(f(u_a)-a)\phi\,dx\,; \end{equation*} by testing with $-\phi$ one obtains \begin{equation*}
- \int_{\mathbb{R}^N}|\nabla u_a|^{p-2}\nabla u_a \nabla \phi\,dx\geq
-\int_{\mathbb{R}^N}h(x)(-F_a)^0(u_a,-\phi)\,dx\geq -\int_{\mathbb{R}^N}h(x)f(u_a)\phi\,dx\,. \end{equation*} The above analysis guarantees that, for every $\phi\in D^{1,p}(\R^N)$ with $\phi\geq0$, \begin{equation}\label{eq_subsup}
\int_{\mathbb{R}^N}h(x)(f(u_a)-a)\phi\,dx\leq \int_{\mathbb{R}^N}|\nabla u_a|^{p-2}\nabla u_a \nabla \phi\,dx\leq\int_{\mathbb{R}^N}h(x)f(u_a)\phi\,dx \,, \end{equation} which proves the claims (3.) and (4.). \end{proof}
\subsection{Mountain pass geometry}\label{sec_MP}
Throughout this subsection we assume the hypotheses of Theorem \ref{Theorem1}. The next two Lemmas will be useful to prove that in this case $I_a$ verifies the mountain pass geometry. \begin{lemma}\label{lemma1}
There exist $\rho,\alpha>0$ such that
$$I_a(u)\geq\alpha, \qquad\text{ for $\n u=\rho$ and any $a\geq0$.}$$ \end{lemma} \begin{proof} Notice that, in view of \eqref{Hp_forig}, \eqref{Hp_estSC} and \eqref{eq_Fa_est}, given $\epsilon>0$, there exists $C_{\epsilon}>0$ such that $$
F_a(t)\leq{\epsilon}|t|^{p}+ C_{\epsilon}{|t|^{q}} ,\quad \forall t\in\R. $$ Thus, by Lemma \ref{l1},
$$\int_{\R^N}h(x)F_a(u(x))dx\leq {\epsilon}C\|u\|^{p}+ C_{\epsilon}C{\|u\|^{q}}, \quad \forall u\in D^{1,p}(\R^N).$$ Thereby, setting $\n u=\rho$, we obtain $$I_a(u)\geq \rho^p\rob{\frac1p-\varepsilon C-{CC_\varepsilon}\rho^{q-p}}\,.$$ Now, fixing $\varepsilon=1/(2pC) $ and choosing $\rho$ sufficiently small such that $CC_\varepsilon \rho^{q-p}\leq 1/4p$, so that the term in parentheses is at least $1/4p$, we see that the claim is satisfied by taking $\alpha =(1/4p)\rho^p$. \end{proof}
\begin{lemma}\label{lemma2}
There exists $v\in D^{1,p}(\mathbb{R}^N) $ and $a_1>0$ such that $\|v\|>\rho$ and $I_a(v)< 0$, for all $a \in [0,a_1)$. \end{lemma} \begin{proof} Fix a function $$
\varphi\in C_{0}^{\infty}(\mathbb{R}^N) \setminus \{0\}, \quad \mbox{with} \quad \varphi \geq 0 \quad \mbox{and} ~ ||\varphi||=1. $$ Note that for all $t>0$, \begin{eqnarray*}
I_{a}(t\varphi) &=& \frac1p t^p- \int_{\Omega}h(x)F_a(t\varphi)dx \\
&=&\frac1p t^p- \int_{\Omega}h(x)F(t\varphi)\,dx + a\int_{\Omega}h(x)t\varphi\, dx, \end{eqnarray*} where $\Omega=supp \,\varphi$. Now, estimating with \eqref{AR} and assuming that $a$ is bounded in some set $[0,a_1)$, we find \begin{eqnarray}\label{eq_est_abv_phi} I_{a}(t\varphi) & \leq& \frac1p t^p- {A_1t^{\theta}}\int_{\R^N}h(x)\varphi^\theta dx+ B_1\n h_1+t a_1\int_{\R^N}h(x)\varphi dx \,. \end{eqnarray} Since $h>0$ the two integrals are positive, and using the fact that $\theta>p>1$, we can fix $t_1>\rho$ large enough so that $I_a(v)<0$, where $v=t_1\varphi\in D^{1,p}(\mathbb{R}^N)$. \end{proof}
In the sequel, we are going to prove, for the functional $I_a$, the version of the (PS)-condition required in the critical point theory for Lipschitz functionals. To do this, observe that \eqref{Hp_PS_SQ} yields that $f_a$ also satisfies the famous Ambrosetti--Rabinowitz condition, that is, there exists $T>0$, which does not depend on $a\geq 0$, such that \begin{equation}\label{ARCondition} \theta F_a(t) \leq tf_a(t)+T, \quad t\in\mathbb{R}\,. \end{equation}
\begin{lemma}\label{lemma3} For all $a\geq0$, the functional $I_a$ satisfies the condition \eqref{PSL}. \end{lemma} \begin{proof} Observe that, by \eqref{eq_Fa0}, $(-F_a)^0(t,\pm t)= \mp f_a(t)t$ for all $t \in \mathbb{R}$. Then, from \eqref{eq_varPS}, $$
\m{\int_{\mathbb{R}^N}|\nabla u_n|^{p}\,dx-\int_{\mathbb{R}^N}h(x)f_a(u_n) u_n\,dx }\leq \varepsilon_n\n{u_n}. $$ For $n$ large enough, we assume $\varepsilon_n<1$ so we get \begin{equation}\label{ineq1-lemma1}
-\|u_n\|-\|u_n\|^{p} \leq-\int_{\mathbb{R}^N} h(x)f_{a}(u_n)u_n dx\,. \end{equation}
On the other hand, since $|I_a(u_n)|\leq K$ for some $K>0$, it follows that \begin{equation}\label{ineq2-lemma1}
\frac{1}{p}\|u_n\|^{p} - \int_{\mathbb{R}^N}h(x)F_a(u_n)dx\leq K, \quad \forall n \in\mathbb{N}. \end{equation} From \eqref{ARCondition} and \eqref{ineq2-lemma1}, \begin{equation}\label{ineq3lemma1}
\frac{1}{p}\|u_n\|^{p} - \frac{1}{\theta}\int_{\mathbb{R}^N}h(x)f_a(u_n)u_n\,dx-\frac{1}{\theta}T\|h\|_1 \leq K, \quad \end{equation} thereby, by \eqref{ineq1-lemma1} and \eqref{ineq3lemma1}, $$
\left(\frac{1}{p}-\frac{1}{\theta}\right)|| u_n||^p - \frac{1}{\theta}|| u_n||\leq K+\frac{1}{\theta}T\|h\|_1, $$ for $n$ large enough. This shows that $\{u_n\}$ is bounded in $D^{1,p}(\mathbb{R}^N)$. Thus, without loss of generality, we may assume that $$ u_n\rightharpoonup u ~~ \mbox{in} ~~ D^{1,p}(\mathbb{R}^N) $$ and $$ u_n(x) \to u(x) \quad \mbox{a.e. in} \quad \mathbb{R}^N. $$ By \eqref{eq_Fa0} and conditions \eqref{Hp_estSC}-\eqref{Hp_forig}, there exists $C>0$ that does not depend on $a$ such that $$
|(-F_a)^0(t,s)|\leq \rob{C(|t|^{q-1}+|t|)+a}|s| $$ and so, $$
|h(x)(-F_a)^0(u_n,u_n-u)| \leq Ch(x)|u_n-u|(|u_n|^{q-1}+|u_n|+a). $$ By Lemma \ref{CN}, we have the limit $$
\int_{\mathbb{R}^N} h(x)(-F_{a})^0(u_n,\pm(u_n-u)) dx \to 0, $$ that combines with the inequalities below, obtained from \eqref{eq_varPS}, \begin{multline}
-\varepsilon_n\n{u-u_n} -\int_{\mathbb{R}^N}h(x)(-F_a)^0(u_n,u-u_n)\,dx \\\leq \int_{\mathbb{R}^N}|\nabla u_n|^{p-2}\nabla u_n \nabla (u-u_n)\,dx\\\leq\int_{\mathbb{R}^N}h(x)(-F_a)^0(u_n,u_n-u)\,dx+ \varepsilon_n\n{u-u_n}, \end{multline} to give \begin{equation}\label{conv1lemma3}
\int_{\mathbb{R}^N}|\nabla u_n|^{p-2}\nabla u_n \nabla (u-u_n)\,dx\to 0. \end{equation} The weak convergence $u_n\rightharpoonup u$ in $D^{1,p}(\mathbb{R}^N)$ yields \begin{equation}\label{conv2lemma3}
\int_{\mathbb{R}^N} |\nabla u|^{p-2}\nabla u\nabla (u_n-u)dx \to 0. \end{equation} From \eqref{conv1lemma3}, \eqref{conv2lemma3} and the (S+) property of the $p$-Laplacian, we deduce that $u_n\to u$ in $D^{1,p}(\mathbb{R}^N)$, finishing the proof. Here, Simon's inequality, found in \cite[Lemma A.0.5]{PA}, plays an important role in concluding the strong convergence. \end{proof}
Next, we obtain a critical point for $I_a$ by the mountain pass theorem for Lipschitz functionals. Furthermore, we will make explicit the dependence of the constants on the bounded interval $[0,\overline a)$ where the parameter $a$ is taken, by using its endpoint (which we still have to fix) as a subscript, while we will not mention their dependence on $h$ and $f$. \begin{lemma}\label{lm_ua} There exists a constant $C_{a_1}>0$ such that $I_a$ has a critical point $u_a\in D^{1,p}(\R^N)$ satisfying $0<\alpha\leq I_a(u_a)\leq C_{a_1}$, for every $a\in[0,a_1)$. \end{lemma} \begin{proof} Lemmas \ref{lemma1}, \ref{lemma2} and \ref{lemma3} guarantee that we can apply the mountain pass theorem for Lipschitz functionals due to \cite{Chang81_VMnsm} to show the existence of a critical point $u_a \in D^{1,p}(\mathbb{R}^N)$ for all $a \in [0,a_1)$, with $I_a(u_a)=d_a\geq \alpha >0$, where $d_a$ is the mountain pass level associated with $I_a$.
Now, taking $\varphi\in C_{0}^{\infty}(\Omega)$ as in the proof of Lemma \ref{lemma2}, $t>0$, and estimating as in \eqref{eq_est_abv_phi}, we see that $I_{a}(t\varphi) $ is bounded from above, uniformly if $a\in[0,a_1)$. Consequently, the mountain pass level is also estimated in the same way, that is, $$ 0<\alpha\leq d_a =I_a(u_a) \leq \max\{I_a(t\varphi); t\geq 0\} \leq C_{a_1}. $$ \end{proof}
The next Lemma establishes a very important estimate involving the Sobo\-lev norm of the solution $u_a$ for $a \in [0,a_1)$. \begin{lemma}\label{lm_nHincompact}
There exist constants $k_{a_1},K_{a_1}$, such that $0<k_{a_1}\leq\|u_a\|\leq K_{a_1}$ for all $a \in [0,a_1)$. \end{lemma} \begin{proof} Using again that $(-F_a)^0(t,\pm t)= \mp f_a(t)t$, we get from \eqref{eq_varminim} that \begin{equation}\label{eq_Ipuu}
\|u_a\|^{p}-\int_{\R^N}h(x) f_{a}(u_a)u_a=0\,.
\end{equation} By Lemma \ref{lm_ua}, and subtracting \eqref{eq_Ipuu} divided by $\theta$, we get the inequality below $$
C_{a_1} \geq I_a(u_a) = \rob{ \frac 1p-\frac1\theta} \|u_a\|^{p} + \int_{\R^N}h(x) \left(\frac{1}{\theta}f_{a}(u_a)u_a -F_a(u_a)\right)dx, $$ which combined with \eqref{ARCondition} leads to $$
C_{a_1}\geq\rob{ \frac 1p-\frac1\theta} \|u_a\|^{p} -\n{h}_\infty T\,, $$ establishing the estimate from above.
In order to get the estimate from below, just note that by \eqref{eq_Fa_est} and the embeddings in Lemma \ref{l1}, $$ \alpha\leq I_a(u_a)\leq \frac1p\n{u_a}^p+a\int_{\R^N} u_a^+\,dx \leq \frac1p\n{u_a}^p+Ca_1\n{u_a}. $$ This gives the desired estimate from below. \end{proof}
\subsection{Global minimum geometry}\label{sec_min}
Throughout this subsection, we assume the hypotheses of Theorem \ref{Theorem2}. The next three Lemmas will prove that $I_a$ has a global minimum at a negative level.
\begin{lemma}\label{lemma1m} There exist $a_1,\alpha>0$ and $u_0\in D^{1,p}(\mathbb{R}^N)$ such that $$I_a(u_0)\leq-\alpha, \qquad\text{ for any $a\in[0,a_1)$.}$$ \end{lemma} \begin{proof} Let $\varphi\in C_{0}^{\infty}(\mathbb{R}^N)$ be as in the proof of Lemma \ref{lemma2}. For $t>0$, \begin{eqnarray*}
I_{a}(t\varphi) &=&\frac1p t^p - \int_{\Omega}h(x)F(t\varphi)\,dx + a\int_{\Omega}h(x)t\varphi\, dx\,, \end{eqnarray*} where $\Omega=supp \,\varphi$.
From \eqref{Hp_forig_up} and using the fact that $\displaystyle \inf_{x \in supp \,\varphi }h(x)=h_0>0$, we have that, for $t_0>0$ small enough, $$ \quad\int_{\Omega}h(x)F(t_0\varphi)\,dx\geq \frac2pt_0^{p}\,.$$ Therefore, $$ I_a(t_0\varphi)\leq -\frac{1}p t_0^{p}+at_0\int_{\Omega}h(x) \varphi\,dx. $$ Now, fixing $\alpha=\frac{1}{2p}t_0^p>0$ and choosing $a_1=a_1(t_0)$ in such a way that \\$a_1t_0\int_{\Omega} h(x)\varphi\,dx\leq\alpha$, we derive that $$ I_a(t_0\varphi)\leq -\alpha<0 \qquad \text{for $a\in[0,a_1)$,} $$ proving the Lemma. \end{proof}
\begin{lemma}\label{lemma2m}
$I_a$ is coercive, uniformly with respect to $a\geq0$, in fact, there exist $H,\rho>0$ independent of $a$ such that $I_a(u)\geq H$ whenever $\n u\geq \rho$. \end{lemma} \begin{proof} By \eqref{Hp_estSC} and Lemma \ref{l1}, there is $C>0$ such that \begin{eqnarray}\label{eq_coerc}
I_a(u)&\geq &\frac 1p\n{u}^{p } - \int_{\R^N}h(x)\rob{C+C|u|^q}\,dx \\\nonumber&\geq &\frac 1p\n{u}^{p }-C-C\n{u}^q, \end{eqnarray} then the claim follows easily since $p>q$ from \eqref{Hp_PS_sQ}. \end{proof}
\begin{lemma} For every $a\in\R$, $I_a$ is weakly lower semicontinuous. \end{lemma} \begin{proof} The proof is classical, since the norm is weakly lower semicontinuous and the term $\int_{\R^N}h(x)F_a(u)\,dx$ is weakly continuous. To see this, let $\{u_n\}$ be a sequence in $D^{1,p}(\mathbb{R}^N)$ such that $$ u_n\rightharpoonup u ~~ \mbox{in} ~~ D^{1,p}(\mathbb{R}^N). $$ Then, proceeding as in the proof of Lemma \ref{l1}, up to a subsequence $$ u_n(x) \to u(x) \quad \mbox{in $L^{q}_h(\R^N)$\ and\ a.e. in} \ \mathbb{R}^N. $$ This means that $w_n=h^{1/q}u_n\to w= h^{1/q}u$ in $L^{q}$; as a consequence, we may also assume that $\{w_n\}$ is dominated by some $g\in L^{q}$. On the other hand, by \eqref{Hp_estSC} and \eqref{Hp_P2_L1Li} $$
|h\,F_a(u_n)|\leq h\,C(|u_n|^q+1)\leq C(g^q+h)\in L^1(\mathbb{R}^N), $$ and so, $hF_a(u_n)$ is dominated and converges to its a.e. limit $hF_a(u)$. Since the same argument can be applied to any subsequence of the initial sequence, we can ensure that $$ \lim_{n \to +\infty}\int_{\R^N} hF_a(u_n)\,dx=\int_{\R^N} hF_a(u)\,dx $$ along any $D^{1,p}$-weakly convergent sequence. \end{proof}
We will now obtain a candidate solution for problem \eqref{Problem-PA} by minimization. \begin{lemma}\label{lm_ua_m} There exists a constant $C_{a_1}>0$ such that $I_a$ has a global minimizer $u_a\in D^{1,p}(\R^N)$ satisfying $0>-\alpha\geq I_a(u_a)\geq -C_{a_1}$, for every $a\in[0,a_1)$. \end{lemma} \begin{proof} The minimizer is obtained in view of the above Lemmas. Actually the global minimum of $I_a$ stays below $-\alpha$ by Lemma \ref{lemma1m}, while the boundedness from below is a consequence of \eqref{eq_coerc}. \end{proof}
The next Lemma establishes the same important estimate as the one in Lemma \ref{lm_nHincompact}, for the minimizer $u_a$.
\begin{lemma}\label{lm_nHincompact_m}
There exist constants $k_{a_1},K_{a_1}$, such that $0<k_{a_1}\leq\|u_a\|\leq K_{a_1}$ for all $a \in [0,a_1)$. \end{lemma} \begin{proof} The bound from above for the norm of $u_a$ is a consequence of the uniform coercivity proved in Lemma \ref{lemma2m}, since $I_a(u_0)<0$. For the estimate from below, just note that by \eqref{eq_Fa_est}, $$ 0>-\alpha\geq I_a(u_a)= \frac1p\n u^p-\int_{\R^N} h(x)F_a(u_a)\,dx $$ $$ \geq-\int_{\R^N} h(x)F( u_a^+)\,dx $$ and the right hand side goes to zero if $\n{u_a}$ goes to zero, by Lemma \ref{l1} and the continuity of the integral. \end{proof}
\section{Further estimates for the critical points $u_a$}\label{sec_estim}
From now on $u_a$ will be the critical point obtained in Lemma \ref{lm_ua} or in Lemma \ref{lm_ua_m}. Our first result ensures that the family $\{u_a\}_{a\in[0,\overline a)}$ is a bounded set in $L^{\infty}(\R^N)$ for $\overline a$ small enough. This fact is crucial in our approach. \begin{lemma} \label{Estimativa}
There exists $C_{a_1}^\infty>0$ such that \begin{equation}\label{eq_estCinfty}
\|u_a\|_{\infty} \leq C_{a_1}^\infty, \quad \forall a \in [0,a_1). \end{equation} \end{lemma} \begin{proof} By Lemma \ref{lm_prop_minim} we know that for $a\in[0,a_1)$, $u_a\geq0$ and it is a weak subsolution of $$ -\Delta_p u=h(x)f(u), \quad \mbox{in} \quad \mathbb{R}^N. $$
In the case of the mountain pass geometry, $u_a$ is also a weak subsolution of $$
-\Delta_p u=h(x)\alpha(x)\rob{1+|u|^{p-2}u}\quad \mbox{in} \quad \mathbb{R}^N, $$ where, from \eqref{Hp_estSC} and \eqref{Hp_PS_SQ}, $$ \alpha(x):=\frac{f(u_a(x))}{1+u_a(x)^{p-1}}\leq D(1+u_a(x)^{q-p})\quad \mbox{in} \quad \mathbb{R}^N, $$ for some $D>0$ which depends only on $f$.
Let $K_\rho(x)$ denote a cube centered at $x$ with edge length $\rho$, and $\n{\cdot}_{r,K}$ denote the $L^r$ norm restricted to the set $K$. Our goal is to prove that, for a fixed $\rho>0$ and any $x\in\R^N$, one has \begin{equation}\label{eq_Tr1}
\sup_{K_\rho(x)}u_a\leq C \rob{1+\n{u_a}_{p^*, K_{2\rho}(x)}} \end{equation} where $C$ depends on $p,N,f,h $ only. Since $K_\rho(x)$ can be taken anywhere and the right hand side is bounded for $\cub{u_a}_{a\in[0,a_1)}$ by Lemma \ref{lm_nHincompact} and \eqref{IM}, equation \eqref{eq_Tr1} gives a uniform bound for $u_a$ in $L^\infty$, proving our claim.
In order to prove \eqref{eq_Tr1}, we will use Trudinger \cite[Theorem 5.1]{Trud67_HarnType} (see also Theorem 1.3 and Corollary 1.1). For this, we need to show that $$ \sup_{x\in\R^N,\ \rho>0}\frac{\n {h\alpha} _{N/p,K_\rho(x)}}{\rho^\delta}\leq C $$ for a suitable $\delta>0$ and $C$ that do not depend on $a\in[0,a_1)$ (see eq. (5.1) in \cite{Trud67_HarnType}).
Actually let $\tau =p^*/(q-p)>N/p$, then $$
\n {h\alpha} _{N/p,K_\rho(x)} \leq\n{h\alpha}_{\tau,{K_\rho(x)}} |{K_\rho(x)}|^{p/N-1/\tau}\,, $$ and \begin{multline} \n {h\alpha} _{\tau,K_\rho(x)}^\tau= \int_{K_\rho(x)} (h\alpha) ^\tau dx \leq\int_{K_\rho(x)} h^\tau D(1+u_a^{q-p})^\tau\,dx\leq \\\leq D'\int_{K_\rho(x)} h^\tau (1+u_a^{p^*})dx\leq D''(1+\n{u_a}_{p^*}^{p^*} )\,. \end{multline}
Using the fact that $|{K_\rho(x)}|= \rho^N$, we conclude that $ \rho^{-\delta}\n {h\alpha} _{N/p,K_\rho(x)}$ is bounded, for a suitable $\delta>0$, by a constant depending only on $p,N,f,h $ and $\n{u_a}_{p^*}$, which is bounded by Lemma \ref{lm_nHincompact} and \eqref{IM}.
In the case of the minimum geometry, we can take $\alpha$ to be a constant and then the boundedness of $ \rho^{-\delta}\n {h\alpha} _{N/p,K_\rho(x)}$ is easily obtained since $h\in L^\infty$ (in this case \eqref{eq_Tr1} can also be obtained directly from Theorem 1.3 and Corollary 1.1 in \cite{Trud67_HarnType}). \end{proof}
In what follows, we establish a lower bound for the $L^{\infty}(B_\gamma)$ norm of $u_a$ for $a$ small enough, where $B_\gamma \subset \mathbb{R}^N$ is the open ball centered at the origin with radius $\gamma>0$. This estimate is a key point in understanding the behavior of the family $\{u_a\}$ as $a$ goes to 0.
\begin{lemma}\label{lemma6} There exist $\delta,\gamma>0$ that do not depend on $a \in [0,a_1)$, such that $\|u_a\|_{\infty,{B_\gamma}}\geq \delta$ for all $a\in[0,{a}_1)$. \end{lemma} \begin{proof} By \eqref{eq_subsup}, since $u_a\geq0$, \begin{equation}
\int_{\mathbb{R}^N}|\nabla u_a|^{p}\,dx\leq\int_{\mathbb{R}^N}h(x)f(u_a)u_a\,dx \,. \end{equation} By Lemma \ref{lm_nHincompact} (resp. Lemma \ref{lm_nHincompact_m}) the left hand side is bounded from below by $k_{a_1}^p$. Let now \\\indent $\bullet$ $\Gamma$ be such that $f(t)t< \Gamma$ for $t\in [0,C_{a_1}^\infty]$, where $C_{a_1}^\infty$ was given in Lemma \ref{Estimativa}, \\\indent $\bullet$ $\gamma$ be such that $\int_{\R^N\setminus B_\gamma}h\,dx<k_{a_1}^p/(2\Gamma)$, \\\indent $\bullet$
$\delta$ be such that $f(t)t< k_{a_1}^p/(2\n{h}_\infty|B_\gamma|)$ for $t\in [0,\delta]$. \\ Then, if $u_a<\delta$ in $B_\gamma$, we are led to the contradiction $$
k_{a_1}^p \leq \int_{\R^N}|\nabla u_a|^p \,dx \leq\int_{\R^N\setminus B_\gamma} h(x)f(u_a)\,u_a\,dx+\int_{B_\gamma} h(x)f(u_a)\,u_a\,dx<k_{a_1}^p $$ and then the claim is proved. \end{proof}
We can now prove the following convergence result. \begin{lemma}\label{lm_subtou} Given a sequence of positive numbers $a_j\to 0$, there exist $u\in D^{1,p}(\R^N)$ and $\beta>0$ such that, up to a subsequence, $u_{a_j}\to u$ weakly in $D^{1,p}(\R^N)$ and in the ${C}^{1,\beta}$ sense on compact sets. Moreover, $u>0$ is a solution of \eqref{Problem-P} with $a=0$. \end{lemma} \begin{proof} Fixing $u_j=u_{a_j}$, it follows that $\{u_j\}$ is bounded in $L^\infty(\mathbb{R}^N)$, which means that we may apply \cite[Theorem 1]{Tolksdorf} to obtain that it is also bounded in ${C}_{loc}^{1,\alpha}(\R^N)$ for some $\alpha>0$. As a consequence, for $\beta\in(0,\alpha)$, in any compact set $\overline\Omega$ it admits a subsequence that converges in ${C}^{1,\beta}(\overline\Omega)$, and using a diagonal procedure we see that there exists $u\in {C}^{1,\beta}(\R^N)$ such that, again up to a subsequence, $u_j\rightarrow u$ in the ${C}^{1,\beta}$ sense on compact sets. From Lemma \ref{lemma6}, $u$ is not identically zero. The boundedness in $W_{loc}^{1,p}(\mathbb{R}^N)$ implies that we may also assume that $u_j\rightharpoonup u$ in $W_{loc}^{1,p}$ and in $L_{loc}^{p^*}(\mathbb{R}^N)$.
For $\phi\geq 0$ with support in some bounded set $\Omega$, from \eqref{eq_subsup} we have \begin{equation}
\int_{\Omega}h(x)(f(u_j)-a_j)\phi\,dx\leq \int_{\Omega}|\nabla u_j|^{p-2}\nabla u_j \nabla \phi\,dx\leq\int_{\Omega}h(x)f(u_j)\phi\,dx \,; \end{equation}
the above convergences bring \begin{equation}
\int_{\Omega}|\nabla u|^{p-2}\nabla u \nabla \phi\,dx=\int_{\Omega}h(x)f(u)\phi\,dx \,, \end{equation} then $u$ is a nontrivial solution of \eqref{Problem-P} with $a=0$, and since $f\geq0$, it follows that $u$ is everywhere positive. \end{proof}
\section{Proof of the main Theorems }\label{sec_prfmain}
In order to prove that $u_a>0$ for $a$ small enough, we will first construct a subsolution that will be used for comparison.
\begin{lemma}\label{lm_z}
Let $\vartheta>N>p$ be as in \eqref{Hp_P4_Bbeta}. Given $A,r>0$ there exists $H>0$ such that the problem \begin{equation}\label{eq_probz}
\left \{
\begin{array}{rclcl}
-\Delta_p z & = & A & \mbox{in} & B_r\,, \\
-\Delta_p z & = & -H |x|^{-\vartheta} & \mbox{in} & \mathbb{R}^N\setminus B_r\,, \\
\end{array}
\right. \end{equation}
has an explicit family of bounded radial and radially decreasing weak solutions, defined up to an additive constant. More precisely, if we take $H=A\frac {\vartheta-N}{N} r^{\vartheta}$ and if we fix $\lim_{|x|\to\infty}z(x)=0$, then the solution is \begin{equation}\label{eq_z}
z(x)= \begin{cases}
C-\rob{\frac A N}^{1/(p-1)}\frac{p-1}p|x|^{p/(p-1)}& \mbox{ for $|x|<r$}\,,\\
\rob{\frac {A} {N}r^{\vartheta}}^{1/(p-1)}\frac{p-1}{\vartheta-p}|x|^{(p-\vartheta)/(p-1)} & \mbox{ for $|x|\geq r$}\,,\\
\end{cases} \end{equation}
where C is chosen so that the two formulas coincide for $|x|=r$.
\end{lemma}
\begin{proof}
For a radial function $u(x)=v(\rho)$, where $\rho=|x|$, one has $$\Delta_p u =|v'|^{p-2}\sqb{(p-1)v''+\frac{ N-1}{\rho}v'}\,.$$
By substitution, one can see that a function in the form $u(x)=v(|x|)=\sigma |x|^\lambda$ is a solution of the equation $\Delta_p u=\varrho|x|^b $ provided $$ \begin{cases} \lambda=\frac{p+b}{p-1}\,, \\
|\sigma|^{p-2}\sigma=\frac 1{(N+b)|\lambda|^{p-2}\lambda}\,\varrho\,. \end{cases} $$
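For the reader's convenience, we sketch the computation behind this claim: writing $\Delta_p u=\rho^{1-N}\big(\rho^{N-1}|v'|^{p-2}v'\big)'$ and $v(\rho)=\sigma\rho^{\lambda}$, one gets
\begin{equation*}
\Delta_p u=|\sigma\lambda|^{p-2}\sigma\lambda\,\big(N-1+(\lambda-1)(p-1)\big)\,\rho^{(\lambda-1)(p-1)-1}
=|\sigma|^{p-2}\sigma\,|\lambda|^{p-2}\lambda\,(N+b)\,\rho^{b},
\end{equation*}
where the last equality uses $(\lambda-1)(p-1)-1=b$, that is, $\lambda=\frac{p+b}{p-1}$.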
In particular, \begin{itemize} \item if $b=0$ then $\lambda=\frac p{p-1}>0$ and $\sigma$ has the same sign as $\varrho$; \item if $b=-\vartheta$ with $\vartheta>N>p$ then $\lambda=\frac {p-\vartheta}{p-1}<0$ and again $\sigma$ has the same sign as $\varrho$. \end{itemize}
Now, taking the two functions $$
\begin{cases}
-\rob{\frac A N}^{1/(p-1)}\frac{p-1}p|x|^{p/(p-1)}& \mbox{for $|x|<r$\,, }
\\\rob{\frac {H} {\vartheta-N}}^{1/(p-1)}\frac{p-1}{\vartheta-p}|x|^{(p-\vartheta)/(p-1)}& \mbox{for $|x|>r$\,, }
\end{cases} $$ they satisfy \eqref{eq_probz} and their radial derivatives are $$
\begin{cases}
-\rob{\frac A N}^{1/(p-1)}|x|^{1/(p-1)}& \mbox{for $|x|<r$\,, }
\\-\rob{\frac {H} {\vartheta-N}}^{1/(p-1)}|x|^{(1-\vartheta)/(p-1)}& \mbox{for $|x|>r$. }
\end{cases} $$
Note that the derivatives are equal at $|x|=r$ provided $$ \frac A N r=\frac {H}{\vartheta-N} r^{1-\vartheta}\,. $$
With this in mind, we can construct $z$ piecewise as in \eqref{eq_z}, obtaining a bounded, radial and radially decreasing function, to which any constant can be added. \end{proof}
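The construction above can also be checked numerically. The following Python snippet (an illustrative sketch only; the parameter values are arbitrary choices) evaluates $-\Delta_p z=-\rho^{1-N}\big(\rho^{N-1}|z'|^{p-2}z'\big)'$ by a finite difference of the radial flux and verifies both equations in \eqref{eq_probz} as well as the matching of the one-sided radial derivatives at $|x|=r$:
\begin{verbatim}
# Numerical check of the barrier z of Lemma lm_z (illustration only):
#   -Delta_p z = A                 for |x| < r,
#   -Delta_p z = -H |x|^(-theta)   for |x| > r,
# using Delta_p z = rho^(1-N) ( rho^(N-1) |z'|^(p-2) z' )' for radial z.

N, p, theta = 3, 2.5, 4.0      # arbitrary sample values with theta > N > p
A, r = 1.0, 1.0
H = A * (theta - N) / N * r ** theta

def z_prime(rho):
    # radial derivative of z, piecewise as in the proof above
    if rho < r:
        return -(A / N) ** (1 / (p - 1)) * rho ** (1 / (p - 1))
    return -(H / (theta - N)) ** (1 / (p - 1)) * rho ** ((1 - theta) / (p - 1))

def minus_p_laplacian(rho, dh=1e-6):
    # central finite difference of the radial flux rho^(N-1) |z'|^(p-2) z'
    flux = lambda s: s ** (N - 1) * abs(z_prime(s)) ** (p - 2) * z_prime(s)
    return -rho ** (1 - N) * (flux(rho + dh) - flux(rho - dh)) / (2 * dh)

for rho in [0.3, 0.7, 1.5, 3.0]:
    expected = A if rho < r else -H * rho ** (-theta)
    print(f"rho = {rho:3.1f}   -Delta_p z = {minus_p_laplacian(rho):+.6f}"
          f"   expected = {expected:+.6f}")

# the one-sided radial derivatives agree at |x| = r:
print(z_prime(r - 1e-9), z_prime(r + 1e-9))
\end{verbatim}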
In order to finalize the proof of our main Theorems, we only need to show that for any sequence of positive numbers $a_j\to0$ there exists a subsequence of the corresponding critical points $u_j:=u_{a_j}$ that are positive: this is proved in the following Lemma. \begin{lemma}\label{lm_ujpos} If $h$ satisfies \eqref{Hp_P2_L1Li}$-$\eqref{Hp_P4_Bbeta}, then the sequence $\{u_j\}$ satisfies $u_j>0$ for $j$ large enough. \end{lemma} \begin{proof} Fix $r>0$. From Lemma \ref{lm_subtou}, up to a subsequence, $u_j\to u>0$ uniformly in $\overline{B_r}$. Thus, there exist $A,j_0>0$ such that, \begin{equation}\label{eq_hfA}
\mbox{ $u_j>0$ \quad and \quad $h(x)f_{a_j}(u_j)\geq A$,\quad in $\overline{B_r}$,\quad for $j>j_0$. } \end{equation} Now let $B$ be the constant in condition \eqref{Hp_P4_Bbeta}, $ H=A\frac {\vartheta-N}{N} r^{\vartheta}$ as from Lemma \ref{lm_z} and let $j_1>j_0$ be such that $a_jB<H$ for $j>j_1$. Then we have, \begin{equation}\label{eq_hfB}
\mbox{ $h(x)f_{a_j}(u_j)\geq-h(x)a_j\geq
-H |x|^{-\vartheta}$,\ \ in $B_r^C$,\ \ for $j>j_1$. } \end{equation} Combining equation \eqref{eq_subsup}, the above inequalities and \eqref{eq_probz}, we get, for every $\phi\in D^{1,p}(\R^N), \ \phi\geq0$, \begin{equation}\label{eq_compuz}
\int_{\R^N} |\nabla u_j|^{p-2}\nabla u_j\nabla \phi\,dx \geq
\int_{\R^N} h(x)(f_{a_j}(u_j))\phi\,dx
\geq
\int_{\R^N} |\nabla z|^{p-2}\nabla z\nabla \phi\,dx\,. \end{equation}
In order to conclude, fix an arbitrary $R>r$ and define $z_R$ by subtracting from $z$ a constant so that $z_R=0$ on $\partial B_R$ (but observe that $z_R>0$ in $ B_R$).
By \eqref{eq_compuz} (which also holds for $z_R$) and since $u_j\geq 0=z_R$ on $\partial B_R$, we obtain by the comparison principle that $u_j\geq z_R>0$ in $B_R$.
Since $R$ is arbitrary, we have proved that $u_j>0$ in $\R^N$ for $j>j_1$.
In particular, since $z_R\to z$ uniformly as $R\to \infty$, we conclude that $u_j\geq z$ in $\R^N$ for $j>j_1$.
\end{proof}
\section{Final comments}\label{sec_hopf}
In this section we would like to point out some results that can be useful when studying problems involving the $p$-Laplacian operator in the whole $\mathbb{R}^N$. Proposition \ref{hopf} below works like a Hopf's Lemma for the $p$-Laplacian in the whole $\mathbb{R}^N$, and it allows us to prove the Liouville-type result in Proposition \ref{prop_Liou}.
\begin{lemma}\label{lm_z_h} Let $N>p$ and $A,r>0$. Then, the problem \begin{equation}\label{eq_probz_h}
\left \{
\begin{array}{rclcl}
-\Delta_p z & = & A & \mbox{in} & B_r, \\
-\Delta_p z & = & 0 & \mbox{in} & \mathbb{R}^N\setminus B_r, \\
\end{array}
\right. \end{equation}
has an explicit family of bounded radial and radially decreasing weak solutions, defined up to an additive constant. More precisely, if we fix $\displaystyle \lim_{|x|\to\infty}z(x)=0$, then the solution is $$
z(x)= \begin{cases}
C-\rob{\frac A N}^{1/(p-1)}\frac{p-1}p|x|^{p/(p-1)}& \mbox{ for $|x|<r$}\,,\\
\rob{\frac A N}^{1/(p-1)}\frac{p-1}{N-p}r^{N/(p-1)}|x|^{(p-N)/(p-1) }& \mbox{ for $|x|\geq r$}\,,\\
\end{cases} $$
where C is chosen so that the two formulas coincide for $|x|=r$. \end{lemma}
\begin{proof}
For $|x|>r$ we consider the family of p-harmonic functions $C_1|x|^{\frac{p-N}{p-1}}$, with radial derivative $-C_1\frac{N-p}{p-1}|x|^{-\frac{N-1}{p-1}}$.
Then we only have to set $-\rob{\frac A N}^{1/(p-1)}r^{1/(p-1)}=-C_1\frac{N-p}{p-1}r^{-(N-1)/(p-1)} $, that is, \begin{equation}\label{eq_consthopf}C_1=\rob{\frac A N}^{1/(p-1)}\frac{p-1}{N-p}r^{N/(p-1)}. \end{equation} \end{proof}
Proceeding as in the proof of Lemma \ref{lm_ujpos}, we obtain the following Proposition as an immediate consequence of the above Lemma. \begin{proposition}[Hopf's Lemma] \label{hopf} Suppose $N>p$, $A,r,\alpha>0$ and $u \in D^{1,p}(\mathbb{R}^N) \cap C_{loc}^{1,\alpha}(\mathbb{R}^N)$ satisfying $$
\begin{cases}
-\Delta_p u\geq A>0& in\ B_r\,, \\-\Delta_p u\geq 0 & in\ \R^N\,, \\u\geq 0& in\ \R^N\,. \end{cases} $$
Then $u(x)\geq C|x|^{(p-N)/(p-1) }$ for $|x|>r$, where $C$ is given in \eqref{eq_consthopf}. \end{proposition}
The above Proposition complements, in some sense, the study made in \cite[Theorem 3.1]{CTT}, where a similar estimate was obtained when $u$ is a solution of a particular class of $p$-Laplacian problems in the whole $\mathbb{R}^N$.
\begin{remark}\label{rm_udec} Proposition \ref{hopf} applies in particular to the limit solution $u$ obtained in Lemma \ref{lm_subtou}, providing us with its decay rate at infinity. \end{remark}
Finally, from Proposition \ref{hopf} it is straightforward to derive Proposition \ref{prop_Liou}.
\end{document}
\begin{document}
\title{The Douglas--Rachford algorithm\\
for a hyperplane and a doubleton}
\author{
Heinz H.\ Bauschke\thanks{
Mathematics, University of British Columbia, Kelowna, B.C.\ V1V~1V7, Canada.
E-mail: \texttt{[email protected]}.},~
Minh N.\ Dao\thanks{
CARMA, University of Newcastle, Callaghan, NSW 2308, Australia.
E-mail: \texttt{[email protected]}.}~~~and~
Scott B.\ Lindstrom\thanks{
CARMA, University of Newcastle, Callaghan, NSW 2308, Australia.
E-mail: \texttt{[email protected]}.} }
\date{April 24, 2018}
\maketitle
\begin{abstract} The Douglas--Rachford algorithm is a popular algorithm for solving both convex and nonconvex feasibility problems. While its behaviour is settled in the convex inconsistent case, the general nonconvex inconsistent case is far from being fully understood. In this paper, we focus on the simplest nonconvex inconsistent case: when one set is a hyperplane and the other a doubleton (i.e., a two-point set). We present a characterization of cycling in this case which --- somewhat surprisingly --- depends on whether the ratio of the distances of the two points to the hyperplane is rational or not. Furthermore, we provide closed-form expressions as well as several concrete examples which illustrate the dynamical richness of this algorithm. \end{abstract}
{\small \noindent {\bfseries 2010 Mathematics Subject Classification:} {Primary: 47H10, 49M27; Secondary: 65K05, 65K10, 90C26. }
\noindent {\bfseries Keywords:} closed-form expressions, cycling, Douglas--Rachford algorithm, feasibility problem, finite set, hyperplane, method of alternating projections, projector, reflector }
\section{Introduction} \label{s:intro}
The Douglas--Rachford (DR) algorithm \cite{DR56} is a popular algorithm for finding minimizers of the sum of two functions, defined on a real Hilbert space and possibly nonsmooth. Its convergence properties are fairly well understood in the case when the functions are convex; see \cite{LM79}, \cite{EB92}, \cite{Com04}, \cite{BCL04}, \cite{BDM16}, and \cite{BM17}. When specialized to indicator functions, the DR algorithm aims to solve a feasibility problem.
The \emph{goal} of this paper is to analyze an instructive --- and perhaps the simplest --- nonconvex setting: when one set is a hyperplane and the other is a doubleton (i.e., it consists of just two distinct points). Our analysis reveals interesting dynamic behaviour whose \emph{periodicity} depends on whether or not a certain ratio of distances is rational (Theorem~\ref{t:2points}). We also provide \emph{explicit closed-form expressions} for the iterates in various circumstances (Theorem~\ref{t:closedform}). Our work can be regarded as complementary to the recently rapidly growing body of work on the DR algorithm in nonconvex settings, including \cite{ERT07}, \cite{BS11}, \cite{HL13}, \cite{BN14}, \cite{ABT16}, \cite{Pha16}, and \cite{DT17}.
The remainder of the paper is organized as follows. In Section~\ref{s:setup}, we recall the necessary background material to start our analysis. The case when one set contains not just $2$ but finitely many points is considered in Section~\ref{s:finite}. Section~\ref{s:cycling} provides a characterization of when cycling occurs, while Section~\ref{s:closed-form} presents closed-form expressions and various examples. We conclude the paper with Section~\ref{s:conclusion}.
\section{The set up} \label{s:setup}
Throughout we assume that \begin{empheq}[box=\mybluebox]{equation} \text{$X$ is a finite-dimensional real Hilbert space} \end{empheq}
with inner product $\scal{\cdot}{\cdot}$ and induced norm $\|\cdot\|$, and \begin{empheq}[box=\mybluebox]{equation} \text{$A$ and $B$ are nonempty closed subsets of $X$}. \end{empheq} To solve the feasibility problem \begin{empheq}[box=\mybluebox]{equation} \label{e:prob} \text{find a point in $A\cap B$}, \end{empheq} we employ the \emph{Douglas--Rachford algorithm} (also called \emph{averaged alternating reflections}) that uses the \emph{DR operator}, associated with the ordered pair $(A, B)$, \begin{empheq}[box=\mybluebox]{equation} T :=\tfrac{1}{2}(\ensuremath{\operatorname{Id}}+R_BR_A) \end{empheq} to generate a \emph{DR sequence} $(x_n)_\ensuremath{{n\in{\mathbb N}}}$ with starting point $x_0 \in X$ by \begin{empheq}[box=\mybluebox]{equation} \label{e:DRAseq} (\forall\ensuremath{{n\in{\mathbb N}}})\quad x_{n+1} \in Tx_n, \end{empheq} where $\ensuremath{\operatorname{Id}}$ is the identity operator, $P_A$ and $P_B$ are the projectors, and $R_A :=2P_A -\ensuremath{\operatorname{Id}}$ and $R_B :=2P_B -\ensuremath{\operatorname{Id}}$ are the reflectors with respect to $A$ and $B$, respectively. Here the projection $P_Ax$ of a point $x \in X$ is the nearest point of $x$ in the set $A$, i.e., \begin{equation}
P_Ax :=\ensuremath{\operatorname*{argmin}}_{a \in A} \|x -a\| =\menge{a \in A}{\|x -a\| =d_A(x)}, \end{equation}
where $d_A(x) := \min_{a \in A} \|x -a\|$ is the distance from $x$ to the set $A$. Note from \cite[Corollary~3.15]{BC17} that closedness of the set $A$ is necessary and sufficient for $A$ being proximinal, i.e., $(\forall x\in X)$ $P_Ax\neq \varnothing$. According to \cite[Theorem~3.16]{BC17}, if $A$ and $B$ are convex, then $P_A$, $P_B$ and hence $T$ are single-valued. We also note that \begin{equation} (\forall x \in X)\quad Tx =\tfrac{1}{2}(\ensuremath{\operatorname{Id}}+R_BR_A)x =\menge{x -a +P_B(2a -x)}{a \in P_Ax}, \end{equation} and if $P_A$ is single-valued then \begin{equation} \label{e:PAsingle} T =\tfrac{1}{2}(\ensuremath{\operatorname{Id}}+R_BR_A) =\ensuremath{\operatorname{Id}} -P_A +P_BR_A. \end{equation} For further information on the DR algorithm in the classical case (when $A$ and $B$ are both convex), see \cite{LM79}, \cite{Com04}, \cite{BCL04}, \cite{BM17}, and \cite{BDNP16b}. Results complementary to the rapidly increasing body of works on the DR algorithm in nonconvex settings can be found in \cite{BN14}, \cite{Pha16}, \cite{DP16}, \cite{BLSSS17}, \cite{LLS17}, \cite{LSS17}, \cite{DP17}, and the references therein.
The notation and terminology used is standard and follows, e.g., \cite{BC17}. The nonnegative integers are $\ensuremath{\mathbb N}$, the positive integers are $\ensuremath{\mathbb N}^*$, and the real numbers are $\ensuremath{\mathbb R}$, while $\ensuremath{\mathbb{R}_+} := \menge{x \in \ensuremath{\mathbb R}}{x \geq 0}$ and $\ensuremath{\mathbb{R}_{++}} := \menge{x \in \ensuremath{\mathbb R}}{x >0}$. We are now ready to start deriving the results we announced in Section~\ref{s:intro}.
\section{Hyperplane and finitely many points} \label{s:finite}
We focus on the case when $B$ is a finite set, and we start with the following observation. \begin{lemma} \label{l:notcvg} Suppose that $A$ is convex, that $B$ is finite, and that $A\cap B =\varnothing$. Let $(x_n)_\ensuremath{{n\in{\mathbb N}}}$ be a DR sequence with respect to $(A, B)$. Then $(x_n)_\ensuremath{{n\in{\mathbb N}}}$ is not convergent. \end{lemma} \begin{proof} Since $A$ is convex, $P_A$ is single-valued and continuous on $X$. By \eqref{e:PAsingle}, $T =\frac{1}{2}(\ensuremath{\operatorname{Id}} +R_BR_A) =\ensuremath{\operatorname{Id}} -P_A +P_BR_A$, and hence \begin{equation} (\forall\ensuremath{{n\in{\mathbb N}}})\quad b_n :=x_{n+1} -x_n +P_Ax_n \in P_BR_Ax_n \subseteq B. \end{equation} Suppose that $x_n \to x \in X$. Then $b_n \to P_Ax$. But since $(b_n)_\ensuremath{{n\in{\mathbb N}}}$ lies in the finite set $B$, there exist $n_0 \in \ensuremath{\mathbb N}$ and $b\in B$ such that $(\forall n\geq n_0)$ $b_n =b$. We obtain $P_Ax =b \in A\cap B$, which contradicts the assumption that $A\cap B =\varnothing$. \end{proof}
From here onwards, we assume that $A$ is a hyperplane and $B$ is a finite subset of $X$ containing $m$ pairwise distinct vectors; more specifically, \begin{subequations} \begin{empheq}[box=\mybluebox]{equation}
A= \{u\}^\perp \quad\text{with}\quad u\in X, \|u\|= 1 \end{empheq} and \begin{empheq}[box=\mybluebox]{equation} \label{e:B} B= \{b_1, \dots, b_m\}\subseteq X \quad\text{with}\quad \scal{b_1}{u}\leq \cdots \leq \scal{b_m}{u}. \end{empheq} \end{subequations}
\begin{fact} \label{f:A} Let $x\in X$. Then the following hold: \begin{enumerate} \item\label{f:A_P} $P_Ax= x- \scal{x}{u}u$. \item\label{f:A_R} $R_Ax= x- 2\scal{x}{u}u$. \item\label{f:A_d}
$d_A(x)= |\scal{x}{u}|$. \end{enumerate} \end{fact} \begin{proof} This follows from \cite[Example~2.4(i)]{BD17} with noting that
$R_Ax= 2P_Ax- x$ and that $d_A(x)= \|x- P_Ax\|$. \end{proof}
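For readers who wish to experiment numerically, one DR step in this setting is easily coded from Fact~\ref{f:A} and \eqref{e:PAsingle}. The following Python sketch is purely illustrative and not part of the analysis; the function name \texttt{dr\_step}, the tie-breaking rule for $P_B$ (the first nearest point wins), and the sample data are our own choices.
\begin{verbatim}
# One DR step T = Id - P_A + P_B R_A for A = {u}^perp (||u|| = 1)
# and a finite set B, using P_A x = x - <x,u> u.
import numpy as np

def dr_step(x, u, B):
    PAx = x - np.dot(x, u) * u                    # projection onto A
    RAx = 2.0 * PAx - x                           # reflection across A
    dists = [np.linalg.norm(RAx - b) for b in B]  # distances to points of B
    b = B[int(np.argmin(dists))]                  # a nearest point of B
    return x - PAx + b                            # Tx = x - P_A x + P_B R_A x

# Sample data (our choice): u = (0,1), B = {(0,-1), (1,2)}, x0 = (0.3, 0.7).
u = np.array([0.0, 1.0])
B = [np.array([0.0, -1.0]), np.array([1.0, 2.0])]
x = np.array([0.3, 0.7])
for _ in range(5):
    x = dr_step(x, u, B)
    print(x)
\end{verbatim}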
Let $(x_n)_\ensuremath{{n\in{\mathbb N}}}$ be a DR sequence with respect to $(A, B)$ with starting point $x_0\in X$. Since $P_A$ is single-valued, we derive from \eqref{e:PAsingle} that \begin{equation} (\forall n\in \ensuremath{\mathbb N}^*)\quad x_n- x_{n-1}+ P_Ax_{n-1}\in Tx_{n-1}- x_{n-1}+ P_Ax_{n-1}= P_BR_Ax_{n-1}\subseteq B. \end{equation} Let us set \begin{empheq}[box=\mybluebox]{equation} \label{e:b_kn} (\forall n\in \ensuremath{\mathbb N}^*)\quad b_{k(n)}:= x_n- x_{n-1}+ P_Ax_{n-1}\in P_BR_Ax_{n-1}\subseteq B \text{~with~} k(n) \in \{1, \dots, m\}. \end{empheq} The following lemma shows that the subsequence $(x_n)_{n\in \ensuremath{\mathbb N}^*}$ lies in the union of the lines through the points in $B$ with a common direction vector $u$.
\begin{lemma} \label{l:lines} For every $n\in \ensuremath{\mathbb N}^*$, \begin{equation} x_n= \scal{x_{n-1}}{u}u+ b_{k(n)} \quad\text{and}\quad \scal{x_n}{u}= \scal{x_{n-1}}{u}+ \scal{b_{k(n)}}{u}, \end{equation} where $k(n)\in \{1, \dots, m\}$. Consequently, the subsequence $(x_n)_{n\in \ensuremath{\mathbb N}^*}$ lies in the union of finitely many (affine) lines: \begin{equation} B+ \ensuremath{\mathbb R} u=\bigcup_{b\in B} (b+ \ensuremath{\mathbb R} u)= \menge{b+ \lambda u}{b\in B, \lambda\in \ensuremath{\mathbb R}}. \end{equation} \end{lemma}
\begin{proof} By combining \eqref{e:b_kn} with Fact~\ref{f:A}\ref{f:A_P}, \begin{equation} \label{e:x+} (\forall n\in \ensuremath{\mathbb N}^*)\quad x_n= x_{n-1}- P_Ax_{n-1}+ b_{k(n)}= \scal{x_{n-1}}{u}u+ b_{k(n)}. \end{equation} Taking the inner product with $u$ yields \begin{equation} \label{e:x+,u} (\forall n\in \ensuremath{\mathbb N}^*)\quad \scal{x_n}{u}= \scal{\scal{x_{n-1}}{u}u+ b_{k(n)}}{u}= \scal{x_{n-1}}{u}+ \scal{b_{k(n)}}{u}, \end{equation} which completes the proof. \end{proof}
\begin{proposition} \label{p:mpoints} Exactly one of the following holds. \begin{enumerate} \item \label{p:mpoints_finite} $B$ is contained in one of the two closed halfspaces induced by $A$. Then either {\rm (a)} the sequence $(x_n)_\ensuremath{{n\in{\mathbb N}}}$ converges finitely to a point $x \in \ensuremath{\operatorname{Fix}} T$ and $P_Ax \in A\cap B$,
or {\rm (b)} $A\cap B= \varnothing$ and $\|x_n\| \to +\infty$ in which case $(P_Ax_n)_\ensuremath{{n\in{\mathbb N}}}$ converges finitely to a best approximation solution $a \in A$ relative to $A$ and $B$ in the sense that $d_B(a)= \min d_B(A)$. \item \label{p:mpoints_bounded} $B$ is not contained in one of the two closed halfspaces induced by $A$. Then the sequence $(x_n)_\ensuremath{{n\in{\mathbb N}}}$ is bounded. If additionally $A\cap B= \varnothing$, then $(x_n)_\ensuremath{{n\in{\mathbb N}}}$ is not convergent and \begin{equation}
(\forall\ensuremath{{n\in{\mathbb N}}})\quad \|x_n- x_{n+1}\| \geq \min d_A(B)> 0. \end{equation} \end{enumerate} \end{proposition} \begin{proof} \ref{p:mpoints_finite}: This follows from \cite[Theorem~7.5]{BD17}.
\ref{p:mpoints_bounded}: Since $B$ is not a subset of one of two closed halfspaces induced by $A$, it follows from \eqref{e:B} that \begin{equation} \label{e:b1bm} \scal{b_1}{u}< 0< \scal{b_m}{u}. \end{equation} Combining Fact~\ref{f:A}\ref{f:A_R} with Lemma~\ref{l:lines} yields \begin{subequations} \begin{align} \label{e:RAx+} (\forall n\in \ensuremath{\mathbb N}^*)\quad R_Ax_n&= x_n- 2\scal{x_n}{u}u \\ &= \big( \scal{x_{n-1}}{u}u+ b_{k(n)} \big)- \big( \scal{x_{n-1}}{u}+ \scal{b_{k(n)}}{u} \big)u- \scal{x_n}{u}u \\ &= -(\scal{x_n}{u}+\scal{b_{k(n)}}{u})u+ b_{k(n)}. \end{align} \end{subequations} For any $n\in \ensuremath{\mathbb N}^*$ and any distinct indices $i, j\in \{1, \dots, m\}$, we have the following equivalences: \begin{subequations} \label{e:compare} \begin{align}
&\|b_i- R_Ax_n\|\leq \|b_j- R_Ax_n\| \\
\Leftrightarrow{} &\|(\scal{x_n}{u}+\scal{b_{k(n)}}{u})u+ (b_i-b_{k(n)})\|^2
\leq \|(\scal{x_n}{u}+\scal{b_{k(n)}}{u})u+ (b_j-b_{k(n)})\|^2 \\
\Leftrightarrow{} &\|b_i-b_{k(n)}\|^2- \|b_j-b_{k(n)}\|^2 \leq 2(\scal{x_n}{u}+\scal{b_{k(n)}}{u})\scal{b_j-b_i}{u} \\ \Leftrightarrow{} &\begin{cases} \scal{x_n}{u} \geq \beta_{i,j,n} &\text{if~} \scal{b_i}{u}< \scal{b_j}{u}, \\
\|b_i-b_{k(n)}\|\leq \|b_j-b_{k(n)}\| &\text{if~} \scal{b_i}{u}= \scal{b_j}{u}, \\ \scal{x_n}{u} \leq \beta_{i,j,n} &\text{if~} \scal{b_i}{u}> \scal{b_j}{u}, \end{cases} \end{align} \end{subequations} where \begin{equation}
\beta_{i,j,n}:= \frac{\|b_i-b_{k(n)}\|^2- \|b_j-b_{k(n)}\|^2}{2\scal{b_j-b_i}{u}} -\scal{b_{k(n)}}{u}. \end{equation} We shall now show that $(\scal{x_n}{u})_\ensuremath{{n\in{\mathbb N}}}$ is bounded above. Setting \begin{equation} r:= \max\menge{k\in \{1, \dots, m\}}{\scal{b_k}{u}= \scal{b_1}{u}}, \end{equation} we see that $r< m$ due to \eqref{e:b1bm} and that, by \eqref{e:B}, \begin{equation} \label{e:b1br} \scal{b_1}{u}= \cdots= \scal{b_r}{u}< \scal{b_{r+1}}{u}\leq \cdots\leq \scal{b_m}{u}. \end{equation} Now let $n\in \ensuremath{\mathbb N}^*$ and set \begin{equation}
I(n):= \menge{i\in \{1, \dots, r\}}{(\forall j\in \{1, \dots, r\})\quad \|b_i-b_{k(n)}\|\leq \|b_j-b_{k(n)}\|}. \end{equation} Then $I(n)= \{k(n)\}$ whenever $k(n)\in \{1, \dots, r\}$ and, by \eqref{e:compare}, \begin{equation} \label{e:pre-compare}
(\forall i\in I(n))(\forall j\in \{1, \dots, r\})\quad \|b_i- R_Ax_n\|\leq \|b_j- R_Ax_n\|. \end{equation} Define \begin{equation} \beta_n:= \max\menge{\beta_{i,j,n}}{i\in I(n), j\in \{r+1, \dots, m\}}. \end{equation} If $\scal{x_n}{u}> \beta_n$, then \eqref{e:B} and \eqref{e:compare} yield \begin{equation}
(\forall i\in I(n))(\forall k\in \{r+1, \dots, m\})\quad \|b_i- R_Ax_n\|< \|b_k- R_Ax_n\|, \end{equation} which together with \eqref{e:pre-compare} implies that $k(n+1)\in I(n)\subseteq \{1, \dots, r\}$ and, by \eqref{e:x+,u}, \eqref{e:b1bm} and \eqref{e:b1br}, \begin{equation} \label{e:decrease} \scal{x_{n+1}}{u}= \scal{x_n}{u}+ \delta \quad\text{with}\quad \delta:= \scal{b_{k(n+1)}}{u}= \scal{b_1}{u}< 0. \end{equation} Noting that \eqref{e:decrease} holds whenever $\scal{x_n}{u}> \beta_n$ and that the sequence $(\beta_n)_\ensuremath{{n\in{\mathbb N}}}$ is bounded since the set $\menge{\beta_{i,j,n}}{i\in I(n), j\in \{r+1, \dots, m\}, n\in \ensuremath{\mathbb N}^*}$ is finite, we deduce that $(\scal{x_n}{u})_\ensuremath{{n\in{\mathbb N}}}$ is bounded above. By a similar argument, $(\scal{x_n}{u})_\ensuremath{{n\in{\mathbb N}}}$ is also bounded below. Combining with \eqref{e:x+}, we get boundedness of $(x_n)_\ensuremath{{n\in{\mathbb N}}}$.
Finally, if $A\cap B= \varnothing$, then, by Lemma~\ref{l:notcvg}, $(x_n)_\ensuremath{{n\in{\mathbb N}}}$ is not convergent and, by the Cauchy--Schwarz inequality, Lemma~\ref{l:lines}, and Fact~\ref{f:A}\ref{f:A_d}, \begin{equation}
(\forall\ensuremath{{n\in{\mathbb N}}})\quad \|x_{n+1}- x_n\|\geq |\scal{x_{n+1}-x_n}{u}|=
|\scal{b_{k(n+1)}}{u}|= d_A(b_{k(n+1)})\geq \min d_A(B)> 0. \end{equation} The proof is complete. \end{proof}
\section{Hyperplane and doubleton: characterization of cycling} \label{s:cycling}
From now on, we assume that $B$ is a doubleton where the two points do not belong to the same closed halfspace induced by $A$; more precisely, \begin{empheq}[box=\mybluebox]{equation} B= \{b_1, b_2\}\subseteq X \quad\text{with}\quad \scal{b_1}{u}< 0< \scal{b_2}{u}. \end{empheq} Set \begin{empheq}[box=\mybluebox]{equation} \label{e:32}
\beta_1:= \scal{b_1}{u}< 0,\quad \beta_2:= \scal{b_2}{u}>0, \quad\text{and}\quad \beta:= \frac{\|b_1-b_2\|^2}{2(\beta_1-\beta_2)}= -\frac{\|b_1-b_2\|^2}{2\scal{b_2-b_1}{u}}< 0. \end{empheq}
\begin{proposition} \label{p:2points} The following holds for the DR sequence $(x_n)_\ensuremath{{n\in{\mathbb N}}}$. \begin{enumerate} \item \label{p:2points_bounded} $(x_n)_\ensuremath{{n\in{\mathbb N}}}$ is bounded but not convergent with \begin{equation}
(\forall\ensuremath{{n\in{\mathbb N}}})\quad \|x_n- x_{n+1}\| \geq \min\{d_A(b_1), d_A(b_2)\}> 0. \end{equation} \item \label{p:2points_x+} For every $n\in \ensuremath{\mathbb N}^*$, \begin{equation} \label{e:x,u} x_n= \scal{x_{n-1}}{u}u+ b_{k(n)} \quad\text{and}\quad \scal{x_n}{u}= \scal{x_{n-1}}{u}+ \scal{b_{k(n)}}{u}, \end{equation} where $k(n)\in \{1, 2\}$ and where \begin{subequations} \label{e:kn+} \begin{align} k(n)= 1\ \&\ \scal{x_n}{u}> \beta- \scal{b_1}{u} &\implies k(n+1)= 1, \\ k(n)= 1\ \&\ \scal{x_n}{u}< \beta- \scal{b_1}{u} &\implies k(n+1)= 2, \\ k(n)= 2\ \&\ \scal{x_n}{u}> -\beta- \scal{b_2}{u} &\implies k(n+1)= 1, \\ k(n)= 2\ \&\ \scal{x_n}{u}< -\beta- \scal{b_2}{u} &\implies k(n+1)= 2. \end{align} \end{subequations} \item \label{p:2points_coeffs} There exist increasing (a.k.a.\ ``nondecreasing'') sequences $(l_{1,n})_\ensuremath{{n\in{\mathbb N}}}$ and $(l_{2,n})_\ensuremath{{n\in{\mathbb N}}}$ in $\ensuremath{\mathbb N}$ such that \begin{equation} (\forall\ensuremath{{n\in{\mathbb N}}})\quad \scal{x_n}{u} =\scal{x_0}{u} +l_{1,n}\scal{b_1}{u} +l_{2,n}\scal{b_2}{u} \quad\text{and}\quad l_{1,n} +l_{2,n} =n. \end{equation} Moreover, \begin{equation} \frac{l_{1,n}}{n}\to \frac{\scal{b_2}{u}}{\scal{b_2-b_1}{u}}\in \left]0, 1\right[ \quad\text{and}\quad \frac{l_{2,n}}{n}\to \frac{\scal{b_1}{u}}{\scal{b_1-b_2}{u}}\in \left]0, 1\right[ \quad\text{as~} n\to +\infty. \end{equation} \end{enumerate} \end{proposition} \begin{proof} \ref{p:2points_bounded}: By assumption, $b_1, b_2 \notin A$, and hence $A\cap B =\varnothing$. The conclusion follows from Proposition~\ref{p:mpoints}\ref{p:mpoints_bounded}.
\ref{p:2points_x+}: We get \eqref{e:x,u} from Lemma~\ref{l:lines}. The equivalences \eqref{e:compare} in the proof of Proposition~\ref{p:mpoints}\ref{p:mpoints_bounded} state \begin{equation}
\|b_1- R_Ax_n\|\leq \|b_2- R_Ax_n\| \Leftrightarrow{}
\scal{x_n}{u}\geq \frac{\|b_1-b_{k(n)}\|^2- \|b_2-b_{k(n)}\|^2}{2\scal{b_2-b_1}{u}} -\scal{b_{k(n)}}{u}, \end{equation} which implies \eqref{e:kn+}.
\ref{p:2points_coeffs}: Using \eqref{e:x,u}, we find increasing sequences $(l_{1,n})_\ensuremath{{n\in{\mathbb N}}}$ and $(l_{2,n})_\ensuremath{{n\in{\mathbb N}}}$ in $\ensuremath{\mathbb N}$ such that \begin{equation} \label{e:xnx0} (\forall\ensuremath{{n\in{\mathbb N}}})\quad \scal{x_n}{u} =\scal{x_0}{u} +l_{1,n}\scal{b_1}{u} +l_{2,n}\scal{b_2}{u} \end{equation} and that \begin{equation} (\forall\ensuremath{{n\in{\mathbb N}}})\quad l_{1,n} +l_{2,n} =n. \end{equation} Combining with \ref{p:2points_bounded}, we obtain that \begin{equation} l_{1,n}\scal{b_1}{u} +(n -l_{1,n})\scal{b_2}{u} =l_{1,n}\scal{b_1}{u} +l_{2,n}\scal{b_2}{u} =\scal{x_n}{u} -\scal{x_0}{u} \end{equation} is bounded. It follows that \begin{equation} \frac{l_{1,n}}{n}\scal{b_1-b_2}{u}+ \scal{b_2}{u}\to 0 \quad\text{as~} n\to +\infty, \end{equation} which yields \begin{equation} \frac{l_{1,n}}{n}\to \frac{\scal{b_2}{u}}{\scal{b_2-b_1}{u}}\in \left]0, 1\right[ \quad\text{and}\quad \frac{l_{2,n}}{n}= 1- \frac{l_{1,n}}{n}\to \frac{-\scal{b_1}{u}}{\scal{b_2 -b_1}{u}}\in \left]0, 1\right[ \end{equation} as $n\to +\infty$. \end{proof}
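The limiting frequencies in Proposition~\ref{p:2points}\ref{p:2points_coeffs} are easy to observe numerically. The following Python sketch (with $X=\ensuremath{\mathbb R}$, $u=1$, $A=\{0\}$, and the data $b_1=-1$, $b_2=\sqrt{2}$ chosen by us purely for illustration) counts how often each point of $B$ is selected along the DR sequence.
\begin{verbatim}
# Empirical visit frequencies l_{1,n}/n and l_{2,n}/n for
# A = {0} in R, b1 = -1, b2 = sqrt(2)  (illustrative data).
import numpy as np

b1, b2 = -1.0, np.sqrt(2.0)
x, counts, N = 0.0, [0, 0], 10**6
for _ in range(N):
    RAx = -x                          # R_A x = -x since A = {0}
    k = 0 if abs(b1 - RAx) <= abs(b2 - RAx) else 1
    x = x + (b1, b2)[k]               # Tx = x - P_A x + P_B R_A x, P_A x = 0
    counts[k] += 1
print(counts[0] / N, b2 / (b2 - b1))  # l_{1,n}/n  vs  <b2,u>/<b2-b1,u>
print(counts[1] / N, -b1 / (b2 - b1)) # l_{2,n}/n  vs  -<b1,u>/<b2-b1,u>
\end{verbatim}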
\begin{theorem}[cycling and rationality] \label{t:2points} The DR sequence $(x_n)_\ensuremath{{n\in{\mathbb N}}}$ cycles after a certain number of steps regardless of the starting point if and only if $d_A(b_1)/d_A(b_2)\in \mathbb{Q}$. \end{theorem} \begin{proof}
First, by Fact~\ref{f:A}\ref{f:A_d}, $d_A= |\scal{\cdot}{u}|$, which yields \begin{equation} \label{e:dAb1b2} d_A(b_1)= -\scal{b_1}{u} \quad\text{and}\quad d_A(b_2)= \scal{b_2}{u}. \end{equation} We also note from Proposition~\ref{p:2points}\ref{p:2points_bounded}--\ref{p:2points_x+} that \begin{equation} \label{e:bounded}
\text{$(|\scal{x_n}{u}|)_\ensuremath{{n\in{\mathbb N}}}$ is bounded}, \end{equation} that \begin{equation} \label{e:xn} (\forall n\in \ensuremath{\mathbb N}^*)\quad x_n= \scal{x_{n-1}}{u}u+ b_{k(n)}, \end{equation} and that \begin{equation} \label{e:xnu} (\forall n\in \ensuremath{\mathbb N}^*)\quad \scal{x_n}{u}= \scal{x_{n-1}}{u}+ \scal{b_{k(n)}}{u}, \end{equation} where $k(n)\in \{1, 2\}$.
``$\Leftarrow$'': Assume that $d_A(b_1)/d_A(b_2)\in \mathbb{Q}$. Then there exist $q_1, q_2\in \ensuremath{\mathbb N}^*$ such that $q_1d_A(b_1) =q_2d_A(b_2)$, or equivalently (using \eqref{e:dAb1b2}), \begin{equation} \label{e:suff} q_1\scal{b_1}{u} +q_2\scal{b_2}{u} =0. \end{equation} It follows from Proposition~\ref{p:2points}\ref{p:2points_coeffs} that \begin{equation} \label{e:xnx0'} (\forall\ensuremath{{n\in{\mathbb N}}})\quad \scal{x_n}{u} =\scal{x_0}{u} +l_{1,n}\scal{b_1}{u} +l_{2,n}\scal{b_2}{u} \end{equation} with $(l_{1,n}, l_{2,n})\in \ensuremath{\mathbb N}^2$. By \eqref{e:suff}, whenever $l_{1,n} \geq q_1$ and $l_{2,n} \geq q_2$, we have \begin{equation} \scal{x_n}{u} = \scal{x_0}{u} +(l_{1,n} -q_1)\scal{b_1}{u} +(l_{2,n} -q_2)\scal{b_2}{u}. \end{equation} We can thus restrict to considering the sequences $l_{1,n}',l_{2,n}'$ satisfying \eqref{e:xnx0'} and also the additional stipulation that $l_{1,n}' <q_1$ or $l_{2,n}' <q_2$. Then $l_{1,n}'\scal{b_1}{u}$ or $l_{2,n}'\scal{b_2}{u}$ is bounded. This together with \eqref{e:bounded} and \eqref{e:xnx0'} implies that both $l_{1,n}'\scal{b_1}{u}$ and $l_{2,n}'\scal{b_2}{u}$ are bounded, and so are $l_{1,n}'$ and $l_{2,n}'$. Hence, there exist $L_1, L_2 \in \ensuremath{\mathbb N}$ such that \begin{equation} (\forall\ensuremath{{n\in{\mathbb N}}})\quad 0 \leq l_{1,n}' \leq L_1 \quad\text{and}\quad 0 \leq l_{2,n}' \leq L_2. \end{equation} By combining with \eqref{e:xn} and \eqref{e:xnx0'}, $(\forall n\in \ensuremath{\mathbb N}^*)$ $x_n\in S$, where \begin{equation} S :=\menge{\scal{x_0}{u}u +l_1'\scal{b_1}{u}u +l_2'\scal{b_2}{u}u +b_k} {l_1' =0, \dots, L_1,\ l_2' =0, \dots, L_2,\ k =1, 2}. \end{equation} Since $S$ is a finite set, there exist $n_0\in \ensuremath{\mathbb N}$ and $m\in \ensuremath{\mathbb N}^*$ such that $x_{n_0} =x_{n_0+m}$. It follows that the sequence $(x_n)_\ensuremath{{n\in{\mathbb N}}}$ cycles between $m$ points $x_{n_0}, \dots, x_{n_0+m-1}$ from $n_0$ onwards.
``$\Rightarrow$'': Assume that $(x_n)_\ensuremath{{n\in{\mathbb N}}}$ cycles between $m$ points from $n_0 \in \ensuremath{\mathbb N}$ onwards, i.e., $(\forall n \geq n_0)$ $x_{n+m} =x_n$. By \eqref{e:xnu}, \begin{equation} \scal{x_{n_0}}{u} +\sum_{n=n_0}^{n_0+m-1} \scal{b_{k(n)}}{u} =\scal{x_{n_0}}{u}. \end{equation} There thus exist $q_1, q_2 \in \ensuremath{\mathbb N}$ such that $q_1 +q_2 =m >0$ and $q_1\scal{b_1}{u} +q_2\scal{b_2}{u} =0$. Combining with \eqref{e:dAb1b2} implies that $q_1, q_2\neq 0$ and that $d_A(b_1)/d_A(b_2)= q_2/q_1\in \mathbb{Q}$. \end{proof}
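Theorem~\ref{t:2points} can also be observed experimentally on a computer. The following floating-point sketch (not a proof; the rounding tolerance and the sample sets $B=\{-1,2\}$ and $B=\{-1,\sqrt{2}\}$ are our own choices) counts the distinct iterates of the DR sequence for $A=\{0\}\subseteq\ensuremath{\mathbb R}$ and $x_0=0$; a cycling orbit produces only finitely many values.
\begin{verbatim}
# Counting distinct DR iterates for A = {0} in R and B = {b1, b2}.
import numpy as np

def distinct_iterates(b1, b2, x0=0.0, N=1000):
    x, seen = x0, set()
    for _ in range(N):
        RAx = -x                      # reflection across A = {0}
        x = x + (b1 if abs(b1 - RAx) <= abs(b2 - RAx) else b2)
        seen.add(round(x, 9))         # coarse rounding against float noise
    return len(seen)

print(distinct_iterates(-1.0, 2.0))           # rational ratio: 3 values
print(distinct_iterates(-1.0, np.sqrt(2.0)))  # irrational ratio: 1000 values
\end{verbatim}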
\section{Hyperplane and doubleton: closed-form expressions} \label{s:closed-form}
In this final section, we refine the previously considered case with the aim of obtaining \emph{closed-form} expressions for the terms of the DR sequence $(x_n)_\ensuremath{{n\in{\mathbb N}}}$.
Recall from Proposition~\ref{p:2points}\ref{p:2points_x+} that \begin{equation} \label{e:x,u'} (\forall n\in \ensuremath{\mathbb N}^*)\quad x_n= \scal{x_{n-1}}{u}u+ b_{k(n)} \quad\text{and}\quad \scal{x_n}{u}= \scal{x_{n-1}}{u}+ \scal{b_{k(n)}}{u}, \end{equation} where $k(n)\in \{1, 2\}$ and where \begin{subequations} \begin{align} k(n)= 1\ \&\ \scal{x_n}{u}> \beta-\beta_1 &\implies k(n+1)= 1, \label{e:kn11} \\ k(n)= 1\ \&\ \scal{x_n}{u}< \beta-\beta_1 &\implies k(n+1)= 2, \label{e:kn12} \\ k(n)= 2\ \&\ \scal{x_n}{u}> -\beta-\beta_2 &\implies k(n+1)= 1. \label{e:kn21} \end{align} \end{subequations} We note here that if $k(n)= 1$ and $\scal{x_n}{u}= \beta-\beta_1$, then both $1$ and $2$ are acceptable values for $k(n+1)$; for the sake of simplicity, we choose $k(n+1)= 2$ in this case. Define \begin{subequations} \begin{empheq}[box=\mybluebox]{align} S_1&:= \Menge{x_n}{n\in \ensuremath{\mathbb N}^*,\ k(n)= 1,\ \scal{x_n}{u}\in \left]\beta, \beta+\beta_2\right]},\\ S_2&:= \Menge{x_n}{n\in \ensuremath{\mathbb N}^*,\ k(n)= 2,\ \scal{x_n}{u}\in \left]\beta+\beta_2, \beta-\beta_1+\beta_2\right]}. \end{empheq} \end{subequations}
\begin{proposition} \label{p:segments} Let $n\in \ensuremath{\mathbb N}^*$. Then the following hold: \begin{enumerate} \item \label{p:segments_11} If $k(n)= 1$ and $\scal{x_n}{u}\in \left]\beta-\beta_1, \beta+\beta_2\right]$, then \begin{equation} k(n+1)= 1 \quad\text{and}\quad \scal{x_{n+1}}{u}= \scal{x_n}{u}+ \beta_1\in \left]\beta, \beta+\beta_2\right]. \end{equation} \item \label{p:segments_12} If $k(n)= 1$ and $\scal{x_n}{u}\in \left]\beta, \beta-\beta_1\right]$, then \begin{equation} k(n+1)= 2 \quad\text{and}\quad \scal{x_{n+1}}{u}= \scal{x_n}{u}+ \beta_2\in \left]\beta+\beta_2, \beta-\beta_1+\beta_2\right]. \end{equation} \item \label{p:segments_21} If $k(n)= 2$, $\scal{x_n}{u}\in \left]\beta+\beta_2, \beta-\beta_1+\beta_2\right]$ and $\beta+\beta_2\geq 0$, then \begin{equation} k(n+1)= 1 \quad\text{and}\quad \scal{x_{n+1}}{u}= \scal{x_n}{u}+ \beta_1\in \left]\beta+\beta_1+\beta_2, \beta+\beta_2\right]\subseteq \left]\beta, \beta+\beta_2\right]. \end{equation} \end{enumerate} Consequently, \begin{equation} \big(\beta+\beta_2\geq 0 \;\text{and}\; x_n\in S_1\cup S_2\big) \implies x_{n+1}\in S_1\cup S_2. \end{equation} \end{proposition} \begin{proof} Notice from \eqref{e:x,u'} that \begin{equation} \label{e:xu+} (\forall\ensuremath{{n\in{\mathbb N}}})\quad \scal{x_{n+1}}{u}= \scal{x_n}{u}+ \scal{b_{k(n+1)}}{u}. \end{equation}
\ref{p:segments_11}: Combine \eqref{e:kn11} and \eqref{e:xu+} while noting that $\beta+\beta_1+\beta_2< \beta+\beta_2$ by \eqref{e:32}.
\ref{p:segments_12}: Combine \eqref{e:kn12} and \eqref{e:xu+}.
\ref{p:segments_21}: By \eqref{e:32} and the Cauchy--Schwarz inequality, we obtain \begin{equation}
0< \beta_2- \beta_1= \scal{b_2-b_1}{u}\leq \|b_2-b_1\|\|u\|= \|b_2-b_1\| \end{equation} and \begin{equation} \label{e:inequality}
\beta_2- \beta_1\leq \frac{\|b_2-b_1\|^2}{\beta_2-\beta_1}= -2\beta. \end{equation} Now assume that $\beta+ \beta_2\geq 0$. Then $\beta_1+ \beta_2\geq (2\beta+ \beta_2)+ \beta_2= 2(\beta+ \beta_2)\geq 0$, and hence $\left]\beta+\beta_1+\beta_2, \beta+\beta_2\right]\subseteq \left]\beta, \beta+\beta_2\right]$. It follows from $\scal{x_n}{u}> \beta+\beta_2\geq 0$ that $\scal{x_n}{u}> -\beta-\beta_2$. Now use \eqref{e:kn21} and \eqref{e:xu+}.
Finally, assume that $x_n\in S_1\cup S_2$. If $x_n\in S_2$, then we have from \ref{p:segments_21} that $x_{n+1}\in S_1$. If $x_n\in S_1$ and $\scal{x_n}{u}\in \left]\beta, \beta-\beta_1\right]$, then, by \ref{p:segments_12}, $x_{n+1}\in S_2$. If $x_n\in S_1$ and $\scal{x_n}{u}\in \left]\beta-\beta_1, \beta+\beta_2\right]$, then $x_{n+1}\in S_1$ due to \ref{p:segments_11}. Altogether, $x_{n+1}\in S_1\cup S_2$. \end{proof}
\begin{theorem}[closed-form expressions] \label{t:closedform} Suppose that $\beta+\beta_2\geq 0$ and that $x_1\in S_1\cup S_2$. Then \begin{subequations} \label{e:xu} \begin{align} (\forall n\in \ensuremath{\mathbb N}^*)\quad \scal{x_n}{u} &= \scal{x_0}{u}+ n\beta_1+ \left\lfloor \frac{-\scal{x_0}{u}+\beta-(n+1)\beta_1+\beta_2}{\beta_2-\beta_1} \right\rfloor (\beta_2- \beta_1) \\ &= \scal{x_0}{u}- \left\lfloor \frac{-\scal{x_0}{u}+\beta-\beta_1-(n-1)\beta_2}{\beta_2-\beta_1} \right\rfloor \beta_1 \notag \\ &\hspace{3.5cm}+ \left\lfloor \frac{-\scal{x_0}{u}+\beta-(n+1)\beta_1+\beta_2}{\beta_2-\beta_1} \right\rfloor \beta_2 \end{align} \end{subequations} and \begin{equation} \label{e:x} (\forall n\in \ensuremath{\mathbb N}^*)\quad x_n= \scal{x_{n-1}}{u}u+ b_{k(n)}, \end{equation} where \begin{subequations} \label{e:kn} \begin{align} (\forall n\in \ensuremath{\mathbb N}^*)\quad k(n)&= \begin{cases} 1 &\text{if~} \scal{x_n}{u}\leq \beta+\beta_2, \\ 2 &\text{if~} \scal{x_n}{u}> \beta+\beta_2 \end{cases} \\ &= \left\lfloor \frac{-\scal{x_0}{u}+\beta-(n+1)\beta_1+\beta_2}{\beta_2-\beta_1} \right\rfloor- \left\lfloor \frac{-\scal{x_0}{u}+\beta-n\beta_1+\beta_2}{\beta_2-\beta_1} \right\rfloor+ 1. \end{align} \end{subequations} \end{theorem} \begin{proof} Note that \eqref{e:x} follows from \eqref{e:x,u}. According to Proposition~\ref{p:2points}\ref{p:2points_coeffs}, \begin{equation} \label{e:xnx0''} (\forall\ensuremath{{n\in{\mathbb N}}})\quad \scal{x_n}{u}= \scal{x_0}{u}+ (n-l_n)\beta_1+ l_n\beta_2 \quad\text{with}\quad l_n\in \ensuremath{\mathbb N}. \end{equation} Since $x_1\in S_1\cup S_2$, Proposition~\ref{p:segments} yields \begin{equation} \label{e:xnS1S2} (\forall n\in \ensuremath{\mathbb N}^*)\quad x_n\in S_1\cup S_2. \end{equation} Let $n\in \ensuremath{\mathbb N}^*$. It follows from \eqref{e:32} and \eqref{e:xnS1S2} that $\scal{x_n}{u}\in \left]\beta, \beta-\beta_1+\beta_2\right]$, which, combined with \eqref{e:xnx0''}, gives \begin{equation} \frac{-\scal{x_0}{u}+\beta-(n+1)\beta_1+\beta_2}{\beta_2-\beta_1}- 1<l_n\leq \frac{-\scal{x_0}{u}+\beta-(n+1)\beta_1+\beta_2}{\beta_2-\beta_1}. \end{equation} Therefore, \begin{equation} l_n= \left\lfloor \frac{-\scal{x_0}{u}+\beta-(n+1)\beta_1+\beta_2}{\beta_2-\beta_1} \right\rfloor \end{equation} and \begin{equation} n- l_n= -\left\lfloor \frac{-\scal{x_0}{u}+\beta-\beta_1-(n-1)\beta_2}{\beta_2-\beta_1} \right\rfloor, \end{equation} which imply \eqref{e:xu}.
To get \eqref{e:x} and \eqref{e:kn}, we distinguish two cases.
\emph{Case 1}: $\scal{x_n}{u}\leq \beta+ \beta_2$. On the one hand, by \eqref{e:xnS1S2} we must have $x_n\in S_1$ and $k(n)= 1$. On the other hand, from $\scal{x_n}{u}\leq \beta+ \beta_2$ and \eqref{e:xu}, noting that $\beta_1< 0$, we obtain that \begin{subequations} \begin{align} \left\lfloor \frac{-\scal{x_0}{u}+\beta-(n+1)\beta_1+\beta_2}{\beta_2-\beta_1} \right\rfloor &\leq \frac{-\scal{x_0}{u}+\beta-n\beta_1+\beta_2}{\beta_2-\beta_1} \\ &< \frac{-\scal{x_0}{u}+\beta-(n+1)\beta_1+\beta_2}{\beta_2-\beta_1} \end{align} \end{subequations} which yields \begin{equation} \left\lfloor \frac{-\scal{x_0}{u}+\beta-n\beta_1+\beta_2}{\beta_2-\beta_1} \right\rfloor = \left\lfloor \frac{-\scal{x_0}{u}+\beta-(n+1)\beta_1+\beta_2}{\beta_2-\beta_1} \right\rfloor, \end{equation} hence \eqref{e:x} and \eqref{e:kn} hold.
\emph{Case 2}: $\scal{x_n}{u}> \beta+ \beta_2$. By \eqref{e:xnS1S2}, $x_n\in S_2$ and $k(n)= 2$. Again using \eqref{e:xu} and noting that $\beta_1< 0< \beta_2$, we derive that \begin{subequations} \begin{align} \left\lfloor \frac{-\scal{x_0}{u}+\beta-(n+1)\beta_1+\beta_2}{\beta_2-\beta_1} \right\rfloor &> \frac{-\scal{x_0}{u}+\beta-n\beta_1+\beta_2}{\beta_2-\beta_1} \\ &= \frac{-\scal{x_0}{u}+\beta-(n+1)\beta_1+\beta_2}{\beta_2-\beta_1}+ \frac{\beta_1}{\beta_2-\beta_1} \\ &> \left\lfloor \frac{-\scal{x_0}{u}+\beta-(n+1)\beta_1+\beta_2}{\beta_2-\beta_1} \right\rfloor- 1. \end{align} \end{subequations} It follows that \begin{equation} \left\lfloor \frac{-\scal{x_0}{u}+\beta-n\beta_1+\beta_2}{\beta_2-\beta_1} \right\rfloor = \left\lfloor \frac{-\scal{x_0}{u}+\beta-(n+1)\beta_1+\beta_2}{\beta_2-\beta_1} \right\rfloor- 1, \end{equation} and we have \eqref{e:x} and \eqref{e:kn}. The proof is complete. \end{proof}
\begin{corollary} \label{c:closedform}
Suppose that $\beta_1> \beta\geq -\beta_2$, that $x_0\in A$, and that $2\scal{x_0}{b_1-b_2}> \|b_1\|^2- \|b_2\|^2$. Then \begin{equation} \label{e:simplify} (\forall n\in \ensuremath{\mathbb N})\quad \scal{x_n}{u}= n\beta_1+ \left\lfloor \frac{\beta-(n+1)\beta_1+\beta_2}{\beta_2-\beta_1} \right\rfloor (\beta_2- \beta_1) \end{equation} and \begin{equation} \label{e:x_simplify} (\forall n\in \ensuremath{\mathbb N}^*)\quad x_n= \left( (n-1)\beta_1+ \left\lfloor \frac{\beta-n\beta_1+\beta_2}{\beta_2-\beta_1} \right\rfloor (\beta_2- \beta_1) \right)u+ b_{k(n)}, \end{equation} where \begin{equation} \label{e:kn_simplify} (\forall n\in \ensuremath{\mathbb N}^*)\quad k(n)= \left\lfloor \frac{\beta-(n+1)\beta_1+\beta_2}{\beta_2-\beta_1} \right\rfloor- \left\lfloor \frac{\beta-n\beta_1+\beta_2}{\beta_2-\beta_1} \right\rfloor+ 1. \end{equation} \end{corollary} \begin{proof} From $x_0\in A$, we have that $\scal{x_0}{u}= 0$ and also $R_Ax_0= P_Ax_0= x_0$.
Since $2\scal{x_0}{b_1-b_2}> \|b_1\|^2- \|b_2\|^2$,
it holds that $\|b_1-x_0\|^2< \|b_2-x_0\|^2$, which yields $P_BR_Ax_0= P_Bx_0= b_1$. Therefore, $k(1)= 1$, $x_1= x_0- P_Ax_0 +P_BR_Ax_0= b_1$, and $\scal{x_1}{u}= \scal{b_1}{u}= \beta_1$.
On the other hand, it follows from $\beta_1> \beta\geq -\beta_2$ and $\beta_1< 0$ that $\beta+ \beta_2\geq 0$ and that $\beta< \beta_1<0 \leq \beta+ \beta_2$. We deduce that $\scal{x_1}{u}= \beta_1\in \left]\beta, \beta+ \beta_2 \right[$, which implies that $x_1\in S_1$. Using Theorem~\ref{t:closedform}, we get \eqref{e:simplify} for all $n\in \ensuremath{\mathbb N}^*$. When $n= 0$, the right-hand side of \eqref{e:simplify} becomes \begin{equation} \left\lfloor \frac{\beta-\beta_1+\beta_2}{\beta_2-\beta_1} \right\rfloor (\beta_2- \beta_1) =0= \scal{x_0}{u} \end{equation} since $0< \beta-\beta_1+\beta_2< \beta_2-\beta_1$. Hence, \eqref{e:simplify} holds for all $n\in \ensuremath{\mathbb N}$, which together with the second part of Theorem~\ref{t:closedform} completes the proof. \end{proof}
\begin{example} \label{ex:R} Suppose that $X= \ensuremath{\mathbb R}$, that $A= \{0\}$, and that $B= \{b_1, b_2\}$ with $b_1= -1$ and $b_2= r$, where $r\in \ensuremath{\mathbb R}$, $r> 1$. Let $(x_n)_\ensuremath{{n\in{\mathbb N}}}$ be a DR sequence with respect to $(A, B)$ with starting point $x_0= 0$. Then \begin{equation} (\forall n\in \ensuremath{\mathbb N})\quad x_n= -n+ \left\lfloor \frac{n}{r+1} + \frac{1}{2} \right\rfloor (r+1). \end{equation} \end{example} \begin{proof} Let $u= 1$. Then $A= \{u\}^\perp$ and $(\forall x\in \ensuremath{\mathbb R})$ $\scal{x}{u}= x$. We have that $\beta_1= \scal{b_1}{u}= -1< 0$, $\beta_2= \scal{b_2}{u}= r >0$, and, since $r> 1$, \begin{equation}
-1=\beta_1> \beta= \frac{|b_1-b_2|^2}{2(\beta_1-\beta_2)}= -\frac{(r+1)^2}{2(r+1)}= -\frac{r+1}{2}> -\beta_2=-r. \end{equation} It is clear that $x_0= 0\in A$ and that
$2\scal{x_0}{b_1-b_2}= 0> 1- r^2= |b_1|^2- |b_2|^2$. Now applying Corollary~\ref{c:closedform} yields \begin{equation} (\forall n\in \ensuremath{\mathbb N})\quad x_n= \scal{x_n}{u}= -n+ \left\lfloor \frac{-\frac{r+1}{2}+(n+1)+r}{r+1} \right\rfloor (r+1), \end{equation} and the conclusion follows. \end{proof}
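As a sanity check, the closed-form expression of Example~\ref{ex:R} can be compared with the DR iteration directly; the short Python sketch below does this for the arbitrarily chosen value $r=2.7$ (our choice).
\begin{verbatim}
# Direct DR iterates for A = {0}, B = {-1, r} versus the closed form
# x_n = -n + floor(n/(r+1) + 1/2)(r+1), for the test value r = 2.7.
import numpy as np

r, N = 2.7, 200
xs = [0.0]                            # starting point x0 = 0
for _ in range(N):
    x = xs[-1]
    RAx = -x                          # R_A x = -x since A = {0}
    xs.append(x + (-1.0 if abs(-1.0 - RAx) <= abs(r - RAx) else r))

formula = [-n + np.floor(n / (r + 1.0) + 0.5) * (r + 1.0)
           for n in range(N + 1)]
print(np.allclose(xs, formula))       # expected output: True
\end{verbatim}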
\begin{example} \label{ex:R2} Suppose that $X= \ensuremath{\mathbb R}^2$, that $A= \ensuremath{\mathbb R}\times \{0\}$, and that $B= \{b_1, b_2\}$ with $b_1= (0, -1)$ and $b_2= (1, r)$, where $r\in \ensuremath{\mathbb R}$, $r\geq \sqrt{2}$. Let $(x_n)_\ensuremath{{n\in{\mathbb N}}}$ be a DR sequence with respect to $(A, B)$ with starting point $x_0= (\alpha, 0)$, where $\alpha\in \ensuremath{\mathbb R}$, $\alpha< r^2/2$. Then $(\forall n\in \ensuremath{\mathbb N}^*)$: \begin{multline} x_n= \left(\left\lfloor \frac{n}{r+1}+ \frac{r^2+2r}{2(r+1)^2} \right\rfloor- \left\lfloor \frac{n-1}{r+1}+ \frac{r^2+2r}{2(r+1)^2} \right\rfloor, -n+ \left\lfloor \frac{n}{r+1}+ \frac{r^2+2r}{2(r+1)^2} \right\rfloor (r+1) \right). \end{multline} \end{example} \begin{proof} In this case, $A= \{u\}^\perp$ with $u= (0, 1)$, $\beta_1= \scal{b_1}{u}= -1< 0$, $\beta_2= \scal{b_2}{u}= r >0$, and \begin{equation}
\beta_1= -1> \beta= \frac{\|b_1-b_2\|^2}{2(\beta_1-\beta_2)}= -\frac{1+ (r+1)^2}{2(r+1)}= -1- \frac{r^2}{2(r+1)}. \end{equation} On the one hand, $\beta+ \beta_2= \frac{r^2-2}{2(r+1)}\geq 0$. On the other hand, it is straightforward to see that $x_0\in A$
and that $2\scal{x_0}{b_1-b_2} = -2\alpha>-r^2= \|b_1\|^2 -\|b_2\|^2$. Applying Corollary~\ref{c:closedform}, we obtain that \begin{subequations} \begin{align} (\forall n\in \ensuremath{\mathbb N}^*)\quad \scal{x_n}{u}&= -n+ \left\lfloor \frac{-1-\frac{r^2}{2(r+1)}+(n+1)+r}{r+1} \right\rfloor (r+1) \\ &= -n+ \left\lfloor \frac{n}{r+1}+ \frac{r^2+2r}{2(r+1)^2} \right\rfloor (r+1). \end{align} \end{subequations} Now for each $n\in \ensuremath{\mathbb N}^*$, writing $x_n= (\alpha_n, \beta_n)\in \ensuremath{\mathbb R}^2$, we observe that $\beta_n= \scal{x_n}{u}$ and, by \eqref{e:x_simplify}, $\alpha_n$ is actually the first coordinate of $b_{k(n)}$, that is, \begin{equation} \alpha_n= \begin{cases} 0 &\text{if~} k(n)= 1, \\ 1 &\text{if~} k(n)= 2, \end{cases} \end{equation} which combined with \eqref{e:kn_simplify} implies that \begin{equation} \alpha_n= k(n)- 1= \left\lfloor \frac{n}{r+1}+ \frac{r^2+2r}{2(r+1)^2} \right\rfloor- \left\lfloor \frac{n-1}{r+1}+ \frac{r^2+2r}{2(r+1)^2} \right\rfloor. \end{equation} The conclusion follows. \end{proof}
Let us specialize Example~\ref{ex:R} further and also illustrate Theorem~\ref{t:2points}.
\begin{example}[rational case] Suppose that $X= \ensuremath{\mathbb R}$, that $A= \{0\}$, and that $B= \{-1, {2}\}$. Let $(x_n)_\ensuremath{{n\in{\mathbb N}}}$ be a DR sequence with respect to $(A, B)$ with starting point $x_0= 0$. Then \begin{equation} (\forall n\in \ensuremath{\mathbb N})\quad x_n= -n+ 3 \left\lfloor \frac{n}{3}+\frac{1}{2} \right\rfloor \end{equation} and $(x_n)_\ensuremath{{n\in{\mathbb N}}} = \big( 0, -1, 1, 0, -1, 1, 0, -1, 1, \ldots \big)$ is periodic. (See also \cite[Remark~6]{BN14} for another cyclic example.) \end{example} \begin{proof} Apply Example~\ref{ex:R} with $b_1=-1$ and $b_2=2$. \end{proof}
\begin{example}[irrational case] Suppose that $X= \ensuremath{\mathbb R}$, that $A= \{0\}$, and that $B= \{-1, \sqrt{2}\}$. Let $(x_n)_\ensuremath{{n\in{\mathbb N}}}$ be a DR sequence with respect to $(A, B)$ with starting point $x_0= 0$. Then \begin{equation} (\forall n\in \ensuremath{\mathbb N})\quad x_n= -n+ \left\lfloor \frac{n}{\sqrt{2}+1}+\frac{1}{2} \right\rfloor (\sqrt{2}+1) \end{equation} and $(x_n)_\ensuremath{{n\in{\mathbb N}}} = \big( 0, -1, -1+\sqrt{2}, -2+\sqrt{2}, -2+2\sqrt{2}, -3+2\sqrt{2}, -4+2\sqrt{2}, -4+3\sqrt{2},
\ldots \big)$ which is not periodic. \end{example} \begin{proof} Apply Example~\ref{ex:R} with $b_1=-1$ and $b_2=\sqrt{2}$. \end{proof}
\begin{remark} Some comments on the last examples are in order. \begin{enumerate} \item We note that the last examples feature terms resembling (inhomogeneous) Beatty sequences; see \cite{Hav12}. In fact, let us disclose that we started this journey by experimentally investigating Example~\ref{ex:R2} which eventually led to the more general analysis in this paper. Specifically, in Example~\ref{ex:R2}, if $r= \sqrt{2}$, then $x_n= (u_n, -v_n+w_n\sqrt{2})$, where the integer sequences \begin{subequations} \begin{align} u_n&:= \lfloor (n+1)(\sqrt{2}-1) \rfloor- \lfloor n(\sqrt{2}-1) \rfloor= \lfloor (n+1)\sqrt{2} \rfloor- \lfloor n\sqrt{2} \rfloor- 1,\\ v_n&:= n-\lfloor (n+1)(\sqrt{2}-1) \rfloor= \lfloor (n+1)(2-\sqrt{2}) \rfloor,\\ w_n&:= \lfloor (n+1)(\sqrt{2}-1) \rfloor= \lfloor (n+1)\sqrt{2} \rfloor- n- 1 \end{align} \end{subequations} are respectively listed as \cite{OEIS_A188037}, \cite{OEIS_A074840}, and \cite{OEIS_A097508} (shifted by one) in the \emph{On-Line Encyclopedia of Integer Sequences}. \item Finally, let us contrast the DR algorithm to the method of alternating projections (see, e.g., \cite{BB96} and \cite{BC17}) in the setting of Example~\ref{ex:R}: indeed, the sequence $(x_0,P_Ax_0,P_BP_Ax_0,\ldots)$ is simply $(0,0,-1,0,-1,0,\ldots)$ regardless of whether or not $r>1$ is irrational. It was also suggested in \cite{BDNP16a} that, for the convex feasibility problem, the DR algorithm outperforms the method of alternating projections in the absence of constraint qualifications. \end{enumerate} \end{remark}
\section{Conclusion} \label{s:conclusion}
In this paper, we provided a detailed analysis of the Douglas--Rachford algorithm for the case when one set is a hyperplane and the other a doubleton. We characterized cycling of this method in terms of the ratio of the distances of the points to the hyperplane. Moreover, we presented closed-form expressions of the actual iterates. The results obtained show the surprising complexity of this algorithm when compared to, e.g., the method of alternating projections.
\subsection*{Acknowledgments} HHB was partially supported by the Natural Sciences and Engineering Research Council of Canada. MND was partially supported by the Australian Research Council.
\end{document}
\begin{document}
\title{Reconstructing the ideal results of a perturbed analog quantum simulator}
\author{Iris Schwenk} \affiliation{Institute of Theoretical Solid State Physics,
Karlsruhe Institute of Technology (KIT), 76131 Karlsruhe, Germany}
\author{Jan-Michael Reiner} \affiliation{Institute of Theoretical Solid State Physics,
Karlsruhe Institute of Technology (KIT), 76131 Karlsruhe, Germany}
\author{Sebastian Zanker} \affiliation{Institute of Theoretical Solid State Physics,
Karlsruhe Institute of Technology (KIT), 76131 Karlsruhe, Germany}
\author{Lin Tian} \affiliation{School of Natural Sciences, University of California, Merced, California 95343, USA}
\author{Juha Lepp\"akangas} \affiliation{Institute of Theoretical Solid State Physics,
Karlsruhe Institute of Technology (KIT), 76131 Karlsruhe, Germany}
\author{Michael Marthaler} \affiliation{Institut für Theorie der Kondensierten Materie (TKM), Karlsruhe Institute of Technology (KIT), 76131 Karlsruhe, Germany} \affiliation{Theoretical Physics, Saarland University, 66123 Saarbrücken, Germany}
\date{\today}
\begin{abstract} Well-controlled quantum systems can potentially be used as quantum simulators. However, a quantum simulator is inevitably perturbed by coupling to additional degrees of freedom. This constitutes a major roadblock to useful quantum simulations. So far there are only limited means to understand the effect of perturbation on the results of quantum simulation. Here, we present a method which, in certain circumstances, allows for the reconstruction of the ideal result from measurements on a perturbed quantum simulator. We consider extracting the value of the correlator $\braket{\hat{O}^i(t) \hat{O}^j(0)}$ from the simulated system, where $\hat{O}^i$ are the operators which couple the system to its environment. The ideal correlator can be straightforwardly reconstructed by using statistical knowledge of the environment, if any $n$-time correlator of operators $\hat{O}^i$ of the ideal system can be written as products of two-time correlators. We give an approach to verify the validity of this assumption experimentally by additional measurements on the perturbed quantum simulator. The proposed method can allow for reliable quantum simulations with systems subjected to environmental noise without adding an overhead to the quantum system. \end{abstract}
\pacs{03.67.Pp,
03.67.Lx,
85.25.Cp
}
\maketitle
\section{Introduction and central results}\label{sec_Introduction_and_central_results}
Today we possess in principle the full knowledge to describe all processes of interest in a wide range of fields, such as chemistry, biology and solid state physics. In all these fields a truly microscopic description is possible using quantum mechanics. However, it is also well understood that in practice full quantum mechanical simulations of even modestly-sized systems are impossible~\cite{Troyer_1}. To efficiently study quantum problems, we need to use other, well controlled quantum mechanical systems~\cite{Cold_Gases_Simulator,Ion_Simulator,Cirac_Review,Quantum_Simulator_EPJ}. In recent years unprecedented direct control over quantum systems has been achieved~\cite{Super_Review,Ion_Review,Rydberg_Review,Marinis_Threshold,Ion_Simulator}. Precise experiments in the quantum regime have been performed using atomic systems~\cite{Trapped_Fermi_2002,Trapped_Fermi_2006,Frustrated_Spins}, superconducting qubits~\cite{IBM_Fault_tolerance,DiCarlo_Stabilized,Devoret_Cat_states,Wallraff_Spins,Martinis_Quantum_Simulations}, photonic circuits~\cite{Photonic_Hydrogen,Boson_Sampling_1,Boson_Sampling_2}, and nuclear spins~\cite{Nuclear_Wrachtrup,Nuclear_Phosphor}. Larger systems have been demonstrated using trapped ions~\cite{350_Spin_simulator,Bohnet2016} and the equilibration of interacting bosons has been studied in cold gases~\cite{Bloch_Equilbiration,Eisert_Equlibiration}.
A promising approach to understanding quantum systems is analog quantum simulation~\cite{Analog_Manousakis}, where the goal is to create an artificial system with a Hamiltonian that is equivalent to the system we intend to study. Apart from quantum simulations using cold gases~\cite{Cirac_Cold_gases,Fermi_Sea_Heidelberg} and trapped ions~\cite{Cirac_PRL,Quantum_Magnet_Schaetz}, there are many proposals for analog quantum simulation with superconducting circuits~\cite{Photosynthesis_Super,Tian_Simu_1,Tian_Simu_2,Solan_digital_analog}, exploiting the controllability of superconducting systems, which in principle allows the creation of a large class of Hamiltonians. While most current superconducting systems are relatively small~\cite{Fermi_Simulation_super_qubits,Weak_localized_super_qubits}, larger networks of superconducting non-linear elements are now being explored~\cite{Pascal_Meta,D_Wave500Qubits,D_Wave1000_qubits}. Other architectures for analog quantum simulation have also been investigated~\cite{Nature_Plenio,Nature_Silicon}.
In this article, we study an analog quantum simulator with the ideal Hamiltonian $H_S$. To understand the properties of the simulated system, we would like to use a measurement to extract a time-ordered correlation function (Green's function), \begin{eqnarray}\label{eq_Ideal_Correlator} iG_{S0}(t) &=& \langle {\cal T} \hat{O}(t)\hat{O}(0) \rangle_0\\
&=& \bra{0} {\cal T} e^{i H_S t}\hat{O} e^{-i H_S t} \hat{O} \ket{0}\, , \nonumber \end{eqnarray} where ${\cal T}$ is the time-ordering operator.
The index $S0$ indicates that we are considering the ideal Green's function of the unperturbed Hamiltonian $H_S$, without coupling to additional degrees of freedom, and $\ket{0}$ is the ground state of $H_S$ (zero-temperature limit). We start our analysis from this simple example and later in Sec.~\ref{sec_Full_model_and_discussion} extend the theory to multiple operators $\hat{O}^i$, and to finite temperatures. We consider time-ordered Green's functions, since these are in general connected to numerous quantities of interest in experiments, such as heat or electric transport coefficients. There are several proposals which describe methods to measure the relevant correlators in the context of analog quantum simulation~\cite{Tian_Correlator,Tian_ReadoutReconstruction,Correlator_Buchleitner,Correlator_Anderes_PAper}. Thus, we assume that Green's functions play a central role in extracting results from a quantum simulator.
However, if we want to use measurements on a quantum simulator to study the properties of an ideal Hamiltonian, the key challenge remains: What is the role of errors and imperfections of the artificial system in a real measurement~\cite{Quantum_Simulator_EPJ,Can_we_trust_Emulator,Reliability_AQS,Certification_Eisert}? Usually we quantify the influence of external degrees of freedom by comparing measurements to theoretical predictions. However, by definition, for quantum simulation it should not be possible to predict the result; neither analytically nor numerically using classical computers. Some proposals exist to analyze \cite{IrisPRL} or mitigate \cite{mitigationLi,mitigationGambetta} errors for small noise in analog or digital quantum simulators. The approach we introduce in this paper potentially also works for intermediate noise strengths. It is based on connecting the ideal Green's function, Eq.~(\ref{eq_Ideal_Correlator}), to the perturbed Green's function we measure using a quantum simulator. We consider Green's functions where $\hat{O}$ is also the operator by which the quantum simulator couples to additional degrees of freedom (which cause the errors). This restricts the generality of the approach, but in reality it is actually very likely that the same mechanism which connects the system to its bath also allows for the readout of the system. For example, readout via a resonator for modern superconducting qubits can be done dispersively (via $\sigma_z$) or resonantly (via $\sigma_x$). In the case of $T_1$-limited qubits with resonant readout or $T_2$-limited qubits with dispersive readout~\cite{IBMqubits}, our requirement is fulfilled. So it is reasonable to assume that this is one of the Green's functions to which we have easy access in experiments.
We show that under specific conditions it is in fact possible to extract the ideal correlator of the operator $\hat{O}$ even from a perturbed system. One ingredient in our approach is a good statistical knowledge of the additional degrees of freedom which act on $H_S$. This assumption is justified, for example, for a quantum simulator built from tunable qubits, where qubits can be decoupled and the properties of the baths of individual qubits can be probed by established spectroscopic methods. Apart from this, only one assumption is necessary about the properties of the ideal correlators. We require that any $n$-time correlation function can be expressed as a product of two-time correlation functions. This condition will be discussed in more detail in Sec.~\ref{sec:PrincipalIdea}. In the present paper we describe this method assuming that $\hat{O}$ and the additional degrees of freedom are bosonic, but the method can also be directly transferred to fermionic operators $\hat{O}$ and fermionic baths.
\subsection{Principal idea}\label{sec:PrincipalIdea}
We start by presenting a simple example of our approach, where we show how to extract the ideal properties from an imperfect simulator in equilibrium. In Sec.~\ref{sec_Full_model_and_discussion}, we extend this result to more general situations.
The full system we consider can be described by the Hamiltonian, \begin{equation}\label{eq_Full_Hamiltonian}
H=H_S+H_C+H_B\,\, ,\,\, H_C=\hat{O}\hat{X} \;. \end{equation} Here the ideal Hamiltonian of the simulator $H_S$ is coupled via the Hamiltonian $H_C$ to the additional degrees of freedom contained in the bath Hamiltonian $H_B$. The system operator in $H_C$ is $\hat{O}$, which is the same as what we used to define the ideal correlator in Eq.~(\ref{eq_Ideal_Correlator}), and the bath operator is $\hat X$.
\begin{figure}
\caption{The quantum simulator is coupled to a perturbative bath. The simulator-bath system is coupled weakly to an environment that establishes thermal equilibrium. For each sub-component of the system we define a free correlator: the ideal correlator of the simulator $iG_{S0}(t)=\langle {\cal T} \hat{O}(t)\hat{O}(0)\rangle_0$ as defined in Eq.~(\ref{eq_Ideal_Correlator}) and the free correlator of the bath $iG_{B0}(t)=\langle{\cal T} \hat{X}(t)\hat{X}(0)\rangle_0$. The full correlator $iG_{SB}(t)=\langle{\cal T} \hat{O}(t)\hat{O}(0)\rangle$ accounts for the coupling in the full Hamiltonian, Eq.~(\ref{eq_Full_Hamiltonian}).}
\label{fig_total_system}
\end{figure}
The bath can usually be described by a set of bosonic modes, and we assume that the free correlator of the bath $G_{B0}(t)$ is known, for example, from spectroscopic measurements. For the definition of all relevant Green's functions see Fig.~\ref{fig_total_system} and its caption. In Sec.~\ref{sec_Full_model_and_discussion}, we give a more precise definition.
The total system described by $H$ is in thermal equilibrium. It should be emphasized that if coupling to the thermal bath is not infinitely weak it cannot be assumed that the only result of this coupling is the creation of equilibrium~ \cite{Strong_Coupling_Quantum,Strong_Coupling_Iris,Strong_Coupling_Classical}. In the main part of this paper we focus on the situation at zero temperature and in Sec.~\ref{sec_finite_temperatures} extend our method to finite temperatures.
We want to connect the spectral function of the bath to the properties of the perturbed quantum simulator. Standard many-body physics techniques exist which can be used to expand the full Green's function $G_{SB}(\omega)$ in terms of the ideal Green's functions $G_{S0}(\omega)$ and $G_{B0}(\omega)$~\cite{Greensfunction_Rickayzen}. However, to apply these techniques there is one key assumption that is absolutely crucial: Wick's theorem needs to apply in some form. Using this theorem it is possible to connect a single correlator of $2n$ operators with $n$ two-time correlators. Wick's theorem for the system operator $\hat{O}$ takes the form \begin{eqnarray}\label{eq_Wick_Theorem_For_O}
& &\langle {\cal T} \hat{O}(t_1)\hat{O}(t_2)\ldots\hat{O}(t_{n-1})\hat{O}(t_n) \rangle_0\\
& &=
\langle {\cal T} \hat{O}(t_1)\hat{O}(t_2) \rangle_0
\langle {\cal T} \hat{O}(t_3)\ldots\hat{O}(t_{n-1})\hat{O}(t_n) \rangle_0 \nonumber \\
& & + \langle {\cal T} \hat{O}(t_1)\hat{O}(t_3) \rangle_0
\langle {\cal T} \hat{O}(t_2)\ldots\hat{O}(t_{n-1})\hat{O}(t_n) \rangle_0 \nonumber\\
& & + \ldots + \langle {\cal T} \hat{O}(t_1)\hat{O}(t_n) \rangle_0
\langle {\cal T} \hat{O}(t_2)\ldots\hat{O}(t_{n-1}) \rangle_0 \,. \nonumber \end{eqnarray} This relation can be applied repeatedly until only two-time correlators remain. For the bath operator $\hat{X}$ it is natural to assume that Wick's theorem applies, in accordance with numerous system-bath descriptions. However, for the system operator $\hat{O}$ this is not in general true. A well-known case where Eq.~(\ref{eq_Wick_Theorem_For_O}) holds is when the system $H_S$ can be described as a system of non-interacting quasiparticles and $\hat{O}$ can be written as a linear combination of the annihilation and creation operators of these quasiparticles. More generally, Eq.~(\ref{eq_Wick_Theorem_For_O}) is valid if the fluctuations of $\hat{O}(t)$ have a Gaussian distribution. The expansion of $n$-time correlators in pair and higher correlators has been studied extensively for spin systems~\cite{Wick_1,Wick_2,Wick_3} and deviations from Gaussian statistics have been studied in the field of full-counting statistics~\cite{Counting_1,Counting_2,Counting_3,Counting_4}. From relatively general considerations, such as the central limit theorem~\cite{Central_Limit_Book}, we expect that fluctuations become more Gaussian as the system size increases, which is also the most interesting limit for a quantum simulator. However, in some systems non-Gaussian fluctuations are known to persist even at large system size~\cite{Non_Gaussian_Phase_transition,Non_Gaussian_Flow} or even become size independent~\cite{Non_Gaussian_Resistance_Fluctuations_in_Disordered Materials}. For different expansions it could also be useful to map qubits coupled to bosonic baths to an effective electron-phonon model~\cite{basti}. In Sec.~\ref{sec_Corrections_to_the_ Wick_Theorem} we discuss how Eq.~(\ref{eq_Wick_Theorem_For_O}) can be checked, to some extent, by making appropriate measurements on the perturbed quantum simulator.
Assuming Eq.~(\ref{eq_Wick_Theorem_For_O}) holds, we find an exact relation between the Green's functions, \begin{equation}\label{eq_Full_Equation_of_Correlator_Simplest_case}
G_{SB}(\omega)=G_{S0}(\omega)+G_{S0}(\omega)G_{B0}(\omega)G_{SB}(\omega) \;. \end{equation} This is the well-known Dyson equation that defines the total Green's function as a function of the free system and bath Green's functions.
\subsection{Central result} From Eq.~(\ref{eq_Full_Equation_of_Correlator_Simplest_case}) we see that the perturbed quantum simulator can be used to find the correlator of the unperturbed simulator $G_{S0}(\omega)$ as long as we know the free Green's function of the bath $G_{B0}(\omega)$, since \begin{equation}\label{eq_Essential_Result_most_simple}
G_{S0}(\omega)\! = \!
\frac{G_{SB}(\omega)}{1 + G_{B0}(\omega)G_{SB}(\omega)}\,. \end{equation} This states the central idea of this paper in the simplest form. To derive Eq.~(\ref{eq_Essential_Result_most_simple}) we use an important assumption: that Wick's theorem in the form in Eq.~(\ref{eq_Wick_Theorem_For_O}) applies for the system operator $\hat{O}$. This condition will be discussed in more detail in Sec.~\ref{sec_Corrections_to_the_ Wick_Theorem}, where we also show how to extract the lowest-order correction to this result from the perturbed simulator. Apart from this, the quality of the reconstruction is also restricted by the precision of the knowledge of the correlators, which is the subject of Sec.~\ref{sec_imperfect_measurement}. In particular, we presume that the properties of the bath are measured independently of the system, which will be discussed in more detail in Sec.~\ref{sec_Full_model_and_discussion}. In Sec.~\ref{sec_Full_model_and_discussion}, we also consider the case where multiple baths couple to the system via operators $\hat{O}^i$ and extend the reconstruction method to finite temperatures. Finally, we discuss a simple example, which can be solved analytically, to validate our result.
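To illustrate how Eq.~(\ref{eq_Essential_Result_most_simple}) is used in practice, the following minimal Python sketch generates synthetic data for $G_{SB}$ from Eq.~(\ref{eq_Full_Equation_of_Correlator_Simplest_case}) and then reconstructs $G_{S0}$. The Lorentzian test correlators are artificial inputs of our own choosing and do not correspond to a particular physical model.
\begin{verbatim}
# Minimal sketch of the reconstruction of G_S0 from G_SB and G_B0.
import numpy as np

omega = np.linspace(-5.0, 5.0, 2001)
G_S0 = 1.0 / (omega - 1.0 + 0.05j)   # artificial ideal correlator
G_B0 = 0.3 / (omega - 2.5 + 0.4j)    # bath correlator, assumed known

# Synthetic "measured" data, obtained by solving the Dyson equation
# G_SB = G_S0 + G_S0 G_B0 G_SB for G_SB:
G_SB = G_S0 / (1.0 - G_S0 * G_B0)

# Reconstruction G_S0 = G_SB / (1 + G_B0 G_SB):
G_rec = G_SB / (1.0 + G_B0 * G_SB)
print(np.max(np.abs(G_rec - G_S0) / np.abs(G_S0)))  # ~ machine precision
\end{verbatim}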
\section{Verifying Wick's Theorem}\label{sec_Corrections_to_the_ Wick_Theorem}
The validity of Wick's theorem for the system operator $\hat{O}$ is crucial for the derivation of Eq.~(\ref{eq_Essential_Result_most_simple}); however, for non trivial systems we cannot in general predict if Wick's theorem holds. Therefore, we describe a method to verify the validity of Wick's theorem using the quantum simulator itself. A detailed derivation is given in Appendix~\ref{app_4-time_correlator}.
We introduce the lowest-order correction to Wick's theorem $G_4(t_1,t_2,t_3,t_4)$, \begin{align} &G_4(t_1,t_2,t_3,t_4) \nonumber\\ &= \braket{{\cal T} \hat{O}_1\hat{O}_2\hat{O}_3\hat{O}_4}_{0,F}- \braket{{\cal T} \hat{O}_1\hat{O}_2\hat{O}_3\hat{O}_4}_{0} \\ &= \braket{{\cal T} \hat{O}_1\hat{O}_2\hat{O}_3\hat{O}_4}_{0,F} -\sum_{\substack{3 \text{ perm.} \\ a,b,c,d\\ \in\{1,2,3,4\}}} \braket{{\cal T}\hat{O}_a\hat{O}_b}_{0}\braket{{\cal T}\hat{O}_c\hat{O}_d}_{0} \nonumber\;, \end{align} where we make use of the abbreviation $\hat{O}_i=\hat{O}(t_i)$. The summation runs over all indistinguishable permutations. With $\braket{\dots}_{0}$ and $\braket{\dots}$, we refer to correlators for which we assume Wick's theorem to be exactly valid. The index $0$ indicates that the system is considered without perturbation by the bath. In contrast to this, $\braket{\dots}_F$ ($\braket{\dots}_{0,F}$) describes the (un)perturbed correlators including the corrections to Wick's theorem. In this paper we consider corrections up to first order in $G_4$.
With measurements on the quantum simulator we have access to $n$-time correlators $\braket{\dots}_F$ of $\hat{O}$. Measuring two- and four-time correlators, \begin{equation} \label{eq_checkingwick_4er} \braket{{\cal T} \hat{O}_{1}\hat{O}_{2}\hat{O}_{3}\hat{O}_{4}}_F - \!\!\!\!\!\!\!\!\!\! \sum_{\substack{3 \text{ perm.} \\ a,b,c,d\\ \in\{1,2,3,4\}}} \!\!\!\!\!\!\!\!\!\braket{{\cal T}\hat{O}_a\hat{O}_b}_F \braket{{\cal T}\hat{O}_c\hat{O}_d}_F = \; \begin{tikzpicture}[anchor=base,baseline=5pt]
\coordinate (A) at (0,0);
\coordinate (B) at (0,0.5);
\coordinate (C) at (0.5,0);
\coordinate (D) at (0.5,0.5);
\draw[line width=2.0pt] (A) -- (D);
\draw[line width=2.0pt] (B) -- (C); \end{tikzpicture} \;, \end{equation} we get access to the quantity, \begin{align} \begin{tikzpicture}[anchor=base,baseline=5pt]
\coordinate (A) at (0,0);
\coordinate (B) at (0,0.5);
\coordinate (C) at (0.5,0);
\coordinate (D) at (0.5,0.5);
\draw[line width=2.0pt] (A) -- (D);
\draw[line width=2.0pt] (B) -- (C); \end{tikzpicture} =& \begin{tikzpicture}[anchor=base,baseline=5pt]
\coordinate (A) at (0,0);
\coordinate (B) at (0,0.5);
\coordinate (C) at (0.5,0);
\coordinate (D) at (0.5,0.5);
\draw[line width=1.0pt] (A) -- (D);
\draw[line width=1.0pt] (B) -- (C); \end{tikzpicture} + \begin{tikzpicture}[anchor=base,baseline=5pt]
\coordinate (A) at (0,0);
\coordinate (B) at (0,0.5);
\coordinate (C) at (0.5,0);
\coordinate (D) at (0.5,0.5);
\draw[line width=1.0pt] (A) -- (D);
\draw[line width=1.0pt] (B) -- (C);
\coordinate (L1a) at (1.1,0);
\draw[line width=1.0pt, snake it] (C) -- (L1a);
\fill (C) circle (2pt);
\fill[white] (C) circle (1pt);
\coordinate (L1b) at (1.7,0);
\draw[line width=1.0pt] (L1a) -- (L1b);
\fill (L1a) circle (2pt);
\fill[white] (L1a) circle (1pt); \end{tikzpicture} + \begin{tikzpicture}[anchor=base,baseline=5pt]
\coordinate (A) at (0,0);
\coordinate (B) at (0,0.5);
\coordinate (C) at (0.5,0);
\coordinate (D) at (0.5,0.5);
\draw[line width=1.0pt] (A) -- (D);
\draw[line width=1.0pt] (B) -- (C);
\coordinate (L1a) at (1.1,0);
\coordinate (L2a) at (1.1,0.5);
\draw[line width=1.0pt, snake it] (C) -- (L1a);
\draw[line width=1.0pt, snake it] (D) -- (L2a);
\fill (C) circle (2pt);
\fill[white] (C) circle (1pt);
\fill (D) circle (2pt);
\fill[white] (D) circle (1pt);
\coordinate (L1b) at (1.7,0);
\coordinate (L2b) at (1.7,0.5);
\draw[line width=1.0pt] (L1a) -- (L1b);
\draw[line width=1.0pt] (L2a) -- (L2b);
\fill (L1a) circle (2pt);
\fill[white] (L1a) circle (1pt);
\fill (L2a) circle (2pt);
\fill[white] (L2a) circle (1pt); \end{tikzpicture} \nonumber\\ &+ \begin{tikzpicture}[anchor=base,baseline=5pt]
\coordinate (A) at (0,0);
\coordinate (B) at (0,0.5);
\coordinate (C) at (0.5,0);
\coordinate (D) at (0.5,0.5);
\draw[line width=1.0pt] (A) -- (D);
\draw[line width=1.0pt] (B) -- (C);
\coordinate (L1a) at (1.1,0);
\draw[line width=1.0pt, snake it] (C) -- (L1a);
\fill (C) circle (2pt);
\fill[white] (C) circle (1pt);
\coordinate (L1b) at (1.7,0);
\draw[line width=1.0pt] (L1a) -- (L1b);
\fill (L1a) circle (2pt);
\fill[white] (L1a) circle (1pt);
\coordinate (L1c) at (2.3,0);
\draw[line width=1.0pt, snake it] (L1b) -- (L1c);
\fill (L1b) circle (2pt);
\fill[white] (L1b) circle (1pt);
\coordinate (L1d) at (2.9,0);
\draw[line width=1.0pt] (L1c) -- (L1d);
\fill (L1c) circle (2pt);
\fill[white] (L1c) circle (1pt); \end{tikzpicture} +\dots \nonumber \\ &+ \begin{tikzpicture}[anchor=base,baseline=5pt]
\coordinate (A) at (0,0);
\coordinate (B) at (0,0.5);
\coordinate (C) at (0.5,0);
\coordinate (D) at (0.5,0.5);
\draw[line width=1.0pt] (A) -- (D);
\draw[line width=1.0pt] (B) -- (C);
\coordinate (L1a) at (1.1,0);
\coordinate (L2a) at (1.1,0.5);
\draw[line width=1.0pt, snake it] (C) -- (L1a);
\draw[line width=1.0pt, snake it] (D) -- (L2a);
\fill (C) circle (2pt);
\fill[white] (C) circle (1pt);
\fill (D) circle (2pt);
\fill[white] (D) circle (1pt);
\coordinate (L1b) at (1.7,0);
\coordinate (L2b) at (1.7,0.5);
\draw[line width=1.0pt] (L1a) -- (L1b);
\draw[line width=1.0pt] (L2a) -- (L2b);
\fill (L1a) circle (2pt);
\fill[white] (L1a) circle (1pt);
\fill (L2a) circle (2pt);
\fill[white] (L2a) circle (1pt);
\coordinate (L3a) at (-0.6,0);
\coordinate (L3b) at (-1.1,0);
\draw[line width=1.0pt, snake it] (A) -- (L3a);
\draw[line width=1.0pt] (L3a) -- (L3b);
\fill (A) circle (2pt);
\fill[white] (A) circle (1pt);
\fill (L3a) circle (2pt);
\fill[white] (L3a) circle (1pt); \end{tikzpicture} +\dots \;, \end{align} where the thin cross represents the correction $G_4$ and the sinuous lines stand for the bath correlation function (see table~\ref{tab_All_Correlators}). The central result here is that the correction to the perturbed two-time correlator can be expressed as \begin{equation} \label{eq_checkingwick_2er} \braket{{\cal T} \hat{O}_{1} \hat{O}_{2}}_F =
\braket{{\cal T} \hat{O}_{1} \hat{O}_{2}} + \begin{tikzpicture}[anchor=base,baseline=5pt]
\coordinate (A) at (0,0);
\coordinate (B) at (0,0.5);
\coordinate (C) at (0.5,0);
\coordinate (D) at (0.5,0.5);
\draw[line width=2.0pt] (A) -- (D);
\draw[line width=2.0pt] (B) -- (C);
\draw[line width=1.0pt,snake it] (D) to[out=-45,in=45] (C);
\fill (C) circle (2pt);
\fill[white] (C) circle (1pt);
\fill (D) circle (2pt);
\fill[white] (D) circle (1pt); \end{tikzpicture} \;. \end{equation} Eqs.~(\ref{eq_checkingwick_4er}) and (\ref{eq_checkingwick_2er}) show that it is possible to estimate the deviation from Wick's theorem by measuring the two- and four-time correlators and combining the measured result with our knowledge of the bath correlator. This allows us to check whether the assumption of Wick's theorem is justified and the result of the reconstruction is reliable.
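In practice, the combination of measured correlators on the left-hand side of Eq.~(\ref{eq_checkingwick_4er}) is simple bookkeeping over the three pairings of the four time arguments. The Python sketch below shows this bookkeeping; the function \texttt{wick\_deviation} and the Gaussian toy correlator are our own constructions and serve only as a usage example.
\begin{verbatim}
# Bookkeeping for the measured Wick-violation estimator:
# <T O1 O2 O3 O4>_F minus the sum over the three pairings.
import numpy as np

def wick_deviation(g4_value, g2, t1, t2, t3, t4):
    pairings = (g2(t1, t2) * g2(t3, t4)
                + g2(t1, t3) * g2(t2, t4)
                + g2(t1, t4) * g2(t2, t3))
    return g4_value - pairings

# Toy data: a Gaussian (Wick-obeying) model, where the deviation vanishes.
g2 = lambda ta, tb: np.exp(-abs(ta - tb))   # artificial two-time correlator
t1, t2, t3, t4 = 0.0, 0.3, 1.1, 2.0
g4 = (g2(t1, t2) * g2(t3, t4) + g2(t1, t3) * g2(t2, t4)
      + g2(t1, t4) * g2(t2, t3))
print(wick_deviation(g4, g2, t1, t2, t3, t4))  # 0.0 up to rounding
\end{verbatim}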
\section{Imperfect knowledge}\label{sec_imperfect_measurement}
A fundamental prerequisite for the reconstruction of the unperturbed correlator is the knowledge of the perturbed correlator of the system $G_{SB}$ and the correlator of the bath $G_{B0}$. In practice, these quantities are only known with finite accuracy. In this section, we address the question of how such imperfect knowledge affects the reconstruction of the ideal Green's function.
\subsection{Bath correlator} We consider a variation of the Green's function of the bath $G_{B0}(\omega)+\delta G_{B0}(\omega)$. With this Green's function, we reconstruct the correlator of the simulator using Eq.~(\ref{eq_Essential_Result_most_simple}) with \begin{equation} \tilde{G}_{S0}(\omega)= \frac{G_{SB}(\omega)}{1+G_{B0}(\omega)G_{SB}(\omega)+\delta G_{B0}(\omega)G_{SB}(\omega)} \,. \end{equation}
For $|\delta G_{B0}(\omega)|\ll| G^{-1}_{SB}(\omega)+ G_{B0}(\omega)|$, we find \begin{equation} \label{eq_variation_bath} \tilde{G}_{S0}(\omega) \approx G_{S0}(\omega) [1-G_{S0}(\omega) \delta G_{B0}(\omega)]\,. \end{equation} Hence, the impact of $\delta G_{B0}(\omega)$ is large at the peaks of $G_{S0}(\omega)$. The influence of $\delta G_{B0}(\omega)$ is independent of the value of $G_{B0}(\omega)$. This means that the quality of the reconstruction is defined by the absolute error $\delta G_{B0}(\omega)$ only.
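As an added numerical check of this linearization (the complex values below are arbitrary illustrations and not taken from the text), one can compare the exact reconstruction with the first-order expression of Eq.~(\ref{eq_variation_bath}) in a few lines of Python:
\begin{verbatim}
import numpy as np

# Illustrative (arbitrary) scalar values of the correlators at one frequency.
G_S0 = 4.0 - 2.0j          # assumed ideal system correlator G_S0(w)
G_B0 = 0.05 + 0.10j        # assumed bath correlator G_B0(w)

# Perturbed correlator consistent with the Dyson equation
# G_SB = G_S0 + G_S0*G_B0*G_SB, i.e. G_SB = G_S0/(1 - G_S0*G_B0).
G_SB = G_S0 / (1.0 - G_S0 * G_B0)

delta = 1e-3 * (1.0 + 1.0j)   # small error on the bath correlator

# Exact reconstruction with the erroneous bath correlator ...
G_tilde_exact = G_SB / (1.0 + (G_B0 + delta) * G_SB)
# ... versus the first-order formula of Eq. (eq_variation_bath).
G_tilde_linear = G_S0 * (1.0 - G_S0 * delta)

print(abs(G_tilde_exact - G_tilde_linear) / abs(G_S0))  # second order in delta
\end{verbatim}
The printed deviation is of second order in $\delta G_{B0}$, confirming the expansion.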
\subsection{Full system correlator} For a deviation of the full system correlator $G_{SB}(\omega)+\delta G_{SB}(\omega)$, we have \begin{equation} \tilde{G}_{S0}(\omega)= \frac{G_{SB}(\omega)+ \delta G_{SB}(\omega)}{1+G_{B0}(\omega)G_{SB}(\omega)+G_{B0}(\omega) \delta G_{SB}(\omega)} \,. \end{equation}
For $|\delta G_{SB}(\omega)|\ll| G^{-1}_{B0}(\omega)+ G_{SB}(\omega)|$, we find \begin{equation} \tilde{G}_{S0}(\omega) \approx G_{S0}(\omega) \left( 1+ \frac{ G_{S0}(\omega)}{ G_{SB}(\omega)} \frac{\delta G_{SB}(\omega)}{ G_{SB}(\omega)} \right) \,. \end{equation} The factor $G_{S0}(\omega)/G_{SB}(\omega)$ implies that the impact of a variation of the full system correlator on the reconstruction is large at the peaks of $G_{S0}(\omega)$. In contrast to the variation of the bath correlator in Eq.~(\ref{eq_variation_bath}), it is the relative error $\delta G_{SB}(\omega)/ G_{SB}(\omega)$ that enters here.
In addition, this equation reveals the limits of our reconstruction method. Consider the case of strong coupling between the bath and the system. Eq.~(\ref{eq_Essential_Result_most_simple}) is still valid, but a reconstruction is no longer feasible if the bath broadens the peaks of $G_{S0}(\omega)$ significantly, since then $ G_{S0}(\omega)/ G_{SB}(\omega)\gg 1$ at the peaks. In this situation, even a small relative error in the measurement of $G_{SB}(\omega)$ makes the reconstruction of $G_{S0}(\omega)$ practically impossible.
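To make this limitation concrete, consider the following toy example (added here purely as an illustration; the single-pole model and all numerical values are assumptions of this sketch, not taken from the text), in which the bath broadens a sharp peak of $G_{S0}(\omega)$ by roughly three orders of magnitude:
\begin{verbatim}
import numpy as np

# Single-pole toy model with a very sharp ideal peak that the bath broadens,
#   G_S0(w) = 1/(w - w0 + i*eta),  G_SB(w) = 1/(w - w0 + i*(eta + gamma)),
# consistent with a constant bath correlator G_B0 = -i*gamma.
w0, eta, gamma = 1.0, 1e-4, 1e-1
w = w0                                    # evaluate at the peak
G_S0 = 1.0 / (w - w0 + 1j * eta)
G_B0 = -1j * gamma
G_SB = 1.0 / (w - w0 + 1j * (eta + gamma))

print(abs(G_S0 / G_SB))                   # ~1e3: strong broadening by the bath

eps = 1e-2                                # 1% relative error on the measured G_SB
G_rec = G_SB * (1 + eps) / (1.0 + G_B0 * G_SB * (1 + eps))

print(abs(G_rec - G_S0) / abs(G_S0))      # order one: reconstruction fails
\end{verbatim}
Even a $1\%$ measurement error on $G_{SB}(\omega)$ leads to a reconstruction error of order one at the peak, in line with the estimate above.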
\section{Full model and discussion}\label{sec_Full_model_and_discussion}
\subsection{Extended Model}
In this section, we extend the model to a more general scenario and discuss the derivation of our results in detail. To make the model more realistic, we consider multiple baths: in practice, a system consisting of $N$ coupled qubits or resonators arranged in a certain two-dimensional geometry does not couple to a single bath, but rather each constituent couples to its own environment. We therefore consider a system with multiple independent baths $H_B=\sum_{i=1}^N H_{B}^i$ with $[H_{B}^i, H_{B}^j]=0$ and a correspondingly adjusted coupling term. The full Hamiltonian can now be written in the form \begin{eqnarray}
H=H_S+H_C+\sum_{i=1}^N H_{B}^i \,. \end{eqnarray} The coupling $H_C$ between the system and the additional degrees of freedom contained in $\sum_{i}H_{B}^i$ is assumed to be of the form \begin{eqnarray}\label{eq_Coupling_to_a_Multipartite_bath}
H_C &=& \lambda_B\sum_{i=1}^N \hat{O}^i \hat{X}^i\, . \end{eqnarray} The system and bath variables satisfy the commutation relations $[\hat{X}^i,H_S]=[\hat{O}^i,H_B]=[\hat{X}^i,\hat{X}^j]=0$. We now have $N$ system operators $\hat{O}^i$ which couple the system to $N$ baths via the corresponding bath operators $\hat{X}^i$. We have introduced the dimensionless constant $\lambda_B \in \{0,1\}$, which allows us to define the free and perturbed correlators in a more rigorous way (see Table~\ref{tab_All_Correlators}).
To perform the reconstruction of the unperturbed Green's function of the system, we need to characterize the properties of the baths independently of the system~\cite{Wilhelm}. This is feasible, for example, for a large network of superconducting flux qubits coupled in a two-dimensional (2D) structure to simulate a spin system. Such systems have been realized with up to 1000 qubits~\cite{D_Wave500Qubits,D_Wave1000_qubits}. The ideal Hamiltonian in this case would be, e.g., $H_S=\frac{1}{2}\sum_i h_i \sigma_x^i+\sum_{ij} J_{ij}\sigma_z^i\sigma_z^j $. Here $h_i$ and $J_{ij}$ are adjustable parameters which define the model under investigation and $\sigma_k^i$ are the Pauli matrices acting on qubit $i$. Under the assumption that the effect of the noise on a single qubit is almost Markovian, it is possible to characterize the noise spectral density of the decoupled qubits using the method described in \cite{measure_bath}. The qubits are coupled to individual baths, whose bath correlators $\langle \hat{X}^i(t) \hat{X}^i(0)\rangle_0$ are known relatively well, as estimated in Ref.~[\onlinecite{D_WaveNoise}]. From a multitude of similar experiments we know that the system operator that couples to the bath corresponds to $\hat{O}^i=\sigma_z^i$. Thus, for such a quantum simulator the characterization of the bath correlator is possible independently of the properties of the simulator. Furthermore, the applicability of Wick's theorem has been studied broadly~\cite{Wick_1,Wick_2,Wick_3} in the context of spin systems. Devices such as large networks of superconducting flux qubits coupled in a 2D structure can also be tuned into alternative regimes, e.g., into a weakly nonlinear regime, where proposals exist on how to use such devices for the simulation of vibronic transitions~\cite{Vibronic}. In this limit, the application of Wick's theorem would also be more straightforward.
\subsection{The full Green's function}
In Eq.~(\ref{eq_Ideal_Correlator}) we introduced the Green's function of the system without coupling to external degrees of freedom. In this section we consider the Green's function ${\bf G}_{SB}$ of the system coupled to its bath in matrix form with the elements \begin{equation} \label{eq_GSB_def} G^{ij}_{SB}(t) = -i\braket{{\cal T}\hat{O}^i(t)\hat{O}^j(0)} \,, \end{equation} where $\braket{\dots}$ denotes the expectation value in the ground state of the full system. Using the standard technique for Green's functions at zero temperature, we expand $G^{ij}_{SB}(t)$ in orders of $H_C$; the corresponding zeroth-order Hamiltonian is $H_0=H_S+\sum_i H_{B}^i$. We define the time evolution \begin{equation}
S_{\lambda_B}(t)=e^{-i H t} \,, \end{equation} and transform all operators $\hat{A}$ into the appropriate picture using the definition \begin{equation}
\hat{A}(t)=S_{\lambda_B}^{-1}(t) \, \hat{A} \, S_{\lambda_B}(t)\,. \end{equation} For unperturbed correlators $\braket{\dots}_0$ this transformation with $S_{\lambda_B =0}(t)~=~e^{-i H_0 t}$ defines operators in the interaction picture, while $\lambda_B=1$ denotes the full time evolution in the Heisenberg picture for the perturbed correlators $\braket{\dots}$. The full Green's function can be written in the form \begin{equation} G^{ij}_{SB}(t) = -i \frac{\braket{{\cal T}S(\infty)\hat{O}^i(t)\hat{O}^j(0)}_0}{\braket{{\cal T}S(\infty)}_0} \;, \end{equation} with the time evolution operator \begin{equation} S(\infty) = {\cal T} e^{-i \int_{-\infty}^{\infty}\mathrm{d}t \, H_C(t)} \,, \end{equation} where we use the coupling Hamiltonian in the interaction picture. We introduce the Fourier transform of the Green's function \begin{equation} G^{ij}_X(\omega) = \int\limits_{-\infty}^{\infty} \mathrm{d}t \, e^{i \omega t} G^{ij}_X(t) \,. \end{equation}
\subsection{Diagrammatic expansion} \label{sec_diagrammatic_expansion}
\begin{table*}[t]
\begin{tabular}{|c|c|c|p{0.5\textwidth}|}\hline
Green's function & Matrix form & Diagram & Definition \\ \hline
$G_{SB}^{ij}(t) = -i \langle{\cal T} \hat{O^i}(t)\hat{O^j}(0)\rangle $ & $[{\bf G}_{SB}]_{ij}=G_{SB}^{ij}$ &
\begin{tikzpicture}[anchor=base,baseline=8pt]
\coordinate (A) at (0,0.44);
\coordinate (B) at (1.5,0.44);
\coordinate (C) at (0,0.36);
\coordinate (D) at (1.5,0.36);
\draw[line width=1.0pt] (A) -- (B);
\draw[line width=1.0pt] (C) -- (D);
\end{tikzpicture} & full correlator of the system operators, including the effects of the bath ($\lambda_B=1$) \\ \hline
$G_{S0}^{ij}(t) = -i\langle{\cal T} \hat{O^i}(t)\hat{O^j}(0)\rangle_{0} $ & $[{\bf G}_{S0}]_{ij}=G_{S0}^{ij}$ &
\begin{tikzpicture}[anchor=base,baseline=8pt]
\coordinate (A) at (0,0.4);
\coordinate (B) at (1.5,0.4);
\coordinate (C) at (0,0.32);
\coordinate (D) at (1.5,0.32);
\draw[line width=1.0pt] (A) -- (B);
\end{tikzpicture} & free correlator of the system operators, without the effects of the bath ($\lambda_B=0$) \\ \hline
$G_{B0}^{ij}(t) = -i\langle{\cal T} \hat{X}^i(t)\hat{X}^j(0)\rangle_0 $ & $[{\bf G}_{B0}]_{ij}=G_{B0}^{ij}$ &
\begin{tikzpicture}[anchor=base,baseline=8pt]
\coordinate (A) at (0,0.4);
\coordinate (B) at (1.5,0.4);
\draw[line width=1.0pt,snake it] (A) -- (B);
\end{tikzpicture} & free correlator of the bath, without the effects of the system ($\lambda_B=0$) \\ \hline
\end{tabular}
\caption{Summary of all relevant correlators and their diagrammatic representation.}
\label{tab_All_Correlators}
\end{table*}
\begin{table}[b]
\begin{tabular}{|c|c|p{0.25\textwidth}|}\hline
Interaction & Diagram & Definition \\ \hline
$ \sum_{i=1}^N \hat{O}^i \hat{X}^i $ &
\begin{tikzpicture}[anchor=base,baseline=8pt]
\coordinate (A) at (0,0.4);
\coordinate (B) at (0.75,0.4);
\coordinate (C) at (0.75,0.4);
\coordinate (D) at (1.5,0.4);
\draw[line width=1.0pt] (A) -- (B);
\draw[line width=1.0pt, snake it] (C) -- (D);
\fill (C) circle (2pt);
\fill[white] (C) circle (1pt);
\end{tikzpicture} & Interaction between bath and system. \\ \hline
\end{tabular}
\caption{Each circle represents a term of the expansion in $H_C$.}\label{tab_All_Interactions}
\end{table}
We now show the diagrammatic expansion that leads to expressions such as Eq.~(\ref{eq_Full_Equation_of_Correlator_Simplest_case}) if Wick's theorem is valid for the coupling operators. All relevant correlators and their diagrammatic representations are listed in Table~\ref{tab_All_Correlators}, and the interaction term is shown in Table~\ref{tab_All_Interactions}.
Using an expansion of $S(\infty)$ in $H_C$, we can directly show the connection between the Green's function of the simulator perturbed by a bath $G_{SB}^{ij}$ and the unperturbed ideal Green's functions, \begin{eqnarray}
\begin{tikzpicture}[anchor=base,baseline=8pt]
\coordinate (A) at (0,0.44);
\coordinate (B) at (0.7,0.44);
\coordinate (C) at (0,0.36);
\coordinate (D) at (0.7,0.36);
\draw[line width=1.0pt] (A) -- (B);
\draw[line width=1.0pt] (C) -- (D);
\end{tikzpicture}
\!\!&=&\!\!
\begin{tikzpicture}[anchor=base,baseline=8pt]
\coordinate (A) at (0,0.4);
\coordinate (B) at (0.6,0.4);
\draw[line width=1.0pt] (A) -- (B);
\end{tikzpicture}
+ \begin{tikzpicture}[anchor=base,baseline=8pt]
\coordinate (A) at (0,0.4);
\coordinate (B) at (0.6,0.4);
\coordinate (C) at (1.2,0.4);
\coordinate (D) at (1.8,0.4);
\draw[line width=1.0pt] (A) -- (B);
\draw[line width=1.0pt, snake it] (B) -- (C);
\draw[line width=1.0pt] (C) -- (D);
\fill (B) circle (2pt);
\fill[white] (B) circle (1pt);
\fill (C) circle (2pt);
\fill[white] (C) circle (1pt);
\end{tikzpicture}
+
\begin{tikzpicture}[anchor=base,baseline=8pt]
\coordinate (A) at (0,0.4);
\coordinate (B) at (0.6,0.4);
\coordinate (C) at (1.2,0.4);
\coordinate (D) at (1.8,0.4);
\coordinate (E) at (2.4,0.4);
\coordinate (F) at (3,0.4);
\draw[line width=1.0pt] (A) -- (B);
\draw[line width=1.0pt, snake it] (B) -- (C);
\draw[line width=1.0pt] (C) -- (D);
\draw[line width=1.0pt, snake it] (D) -- (E);
\draw[line width=1.0pt] (E) -- (F);
\fill (B) circle (2pt);
\fill[white] (B) circle (1pt);
\fill (C) circle (2pt);
\fill[white] (C) circle (1pt);
\fill (D) circle (2pt);
\fill[white] (D) circle (1pt);
\fill (E) circle (2pt);
\fill[white] (E) circle (1pt);
\end{tikzpicture} +\ldots \nonumber\\
&=& \!\!
\begin{tikzpicture}[anchor=base,baseline=8pt]
\coordinate (A) at (0,0.4);
\coordinate (B) at (0.6,0.4);
\draw[line width=1.0pt] (A) -- (B);
\end{tikzpicture}
+ \begin{tikzpicture}[anchor=base,baseline=8pt]
\coordinate (A) at (0,0.4);
\coordinate (B) at (0.6,0.4);
\coordinate (C) at (1.2,0.4);
\draw[line width=1.0pt] (A) -- (B);
\draw[line width=1.0pt, snake it] (B) -- (C);
\fill (B) circle (2pt);
\fill[white] (B) circle (1pt);
\fill (C) circle (2pt);
\fill[white] (C) circle (1pt);
\end{tikzpicture}
(
\begin{tikzpicture}[anchor=base,baseline=8pt]
\coordinate (A) at (0,0.4);
\coordinate (B) at (0.6,0.4);
\draw[line width=1.0pt] (A) -- (B);
\end{tikzpicture}
+ \begin{tikzpicture}[anchor=base,baseline=8pt]
\coordinate (A) at (0,0.4);
\coordinate (B) at (0.6,0.4);
\coordinate (C) at (1.2,0.4);
\coordinate (D) at (1.8,0.4);
\draw[line width=1.0pt] (A) -- (B);
\draw[line width=1.0pt, snake it] (B) -- (C);
\draw[line width=1.0pt] (C) -- (D);
\fill (B) circle (2pt);
\fill[white] (B) circle (1pt);
\fill (C) circle (2pt);
\fill[white] (C) circle (1pt);
\end{tikzpicture}
\ldots )
\nonumber\\
&=& \!\! \begin{tikzpicture}[anchor=base,baseline=8pt]
\coordinate (A) at (0,0.4);
\coordinate (B) at (0.6,0.4);
\draw[line width=1.0pt] (A) -- (B);
\end{tikzpicture}
+ \begin{tikzpicture}[anchor=base,baseline=8pt]
\coordinate (A) at (0,0.4);
\coordinate (B) at (0.6,0.4);
\coordinate (C) at (1.2,0.4);
\coordinate (A1) at (1.2,0.44);
\coordinate (B1) at (2,0.44);
\coordinate (C1) at (1.2,0.36);
\coordinate (D1) at (2,0.36);
\draw[line width=1.0pt] (A1) -- (B1);
\draw[line width=1.0pt] (C1) -- (D1);
\draw[line width=1.0pt] (A) -- (B);
\draw[line width=1.0pt, snake it] (B) -- (C);
\fill (B) circle (2pt);
\fill[white] (B) circle (1pt);
\fill (C) circle (2pt);
\fill[white] (C) circle (1pt);
\end{tikzpicture} \;. \end{eqnarray} Here all disconnected diagrams are canceled by the vacuum diagrams in $\braket{ {\cal T} S(\infty)}_0$ (see Appendix~\ref{appendix_disconnected_diagrams}). Therefore we can write the Dyson equation in matrix form as \begin{equation}\label{eq_Multipartite_Bath_only_Solution}
{\bf G}_{SB}(\omega)={\bf G}_{S0}(\omega)+{\bf G}_{S0}(\omega){\bf G}_{B0}(\omega){\bf G}_{SB}(\omega) \,. \end{equation} If all Green's functions $G_{SB}^{ij}(\omega)$ and $G_{B0}^{ij}(\omega)$ are known, this equation can be solved for ${\bf G}_{S0}$: \begin{equation}
{\bf G}_{S0}(\omega)={\bf G}_{SB}(\omega)\left[1+{\bf G}_{B0}(\omega){\bf G}_{SB}(\omega)\right]^{-1} \label{eq_GSO} \,. \end{equation} For a single bath, this result reduces to Eq.~(\ref{eq_Essential_Result_most_simple}). It connects the ideal correlator in Eq.~(\ref{eq_Ideal_Correlator}) to quantities which can be readily measured.
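As an added numerical illustration (not part of the original text), the matrix identity can be checked directly for randomly chosen correlator matrices at a single frequency, using the Dyson equation~(\ref{eq_Multipartite_Bath_only_Solution}) to generate a consistent ${\bf G}_{SB}$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N = 4                                     # number of subsystems (illustrative)

# Arbitrary complex matrices standing for G_S0(w) and G_B0(w) at one frequency.
G_S0 = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
G_B0 = 0.1 * (rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N)))

# Dyson equation G_SB = G_S0 + G_S0 G_B0 G_SB,
# solved as G_SB = (1 - G_S0 G_B0)^{-1} G_S0.
eye = np.eye(N)
G_SB = np.linalg.solve(eye - G_S0 @ G_B0, G_S0)

# Reconstruction, Eq. (eq_GSO): G_S0 = G_SB [1 + G_B0 G_SB]^{-1}.
G_S0_rec = G_SB @ np.linalg.inv(eye + G_B0 @ G_SB)

print(np.max(np.abs(G_S0_rec - G_S0)))    # close to machine precision
\end{verbatim}
The reconstructed matrix agrees with the input ${\bf G}_{S0}$ up to floating-point round-off, as expected from the algebra above.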
\subsection{Extension to finite temperatures} \label{sec_finite_temperatures} The diagrammatic expansion in Sec.~\ref{sec_diagrammatic_expansion} can also be applied to the Matsubara Green's functions $\mathcal{G}_{M,X}$, which are connected to the retarded Green's functions for finite temperatures $\mathcal{G}^R_{X}$. This is a way to extend this method to systems in thermal equilibrium. The analog of Eq.~(\ref{eq_Essential_Result_most_simple}) for finite temperatures is given by \begin{equation}\label{eq_Essential_Result_most_simple_Matsubara}
\mathcal{G}^R_{S0}(i\omega_n) = \frac{\mathcal{G}^R_{SB}(i\omega_n)}{1+\mathcal{G}^R_{B0}(i\omega_n)\mathcal{G}^R_{SB}(i\omega_n)}\,. \end{equation} Below we introduce the Matsubara Green's functions and explain the connection to the spectral function.
\subsubsection{Expansion in imaginary time} As we consider the whole system to be in thermal equilibrium, it is reasonable to use the standard Matsubara Green's function method. Therefore, we define the imaginary time $\tau=i t$ where we require $0<\tau<\beta$. The Matsubara Green's function equivalent to Eq.~(\ref{eq_GSB_def}) is \begin{equation} \mathcal{G}^{ij}_{M,SB}(\tau)= -\langle{\cal T} \hat{O}^i(\tau)\hat{O}^j(0)\rangle \,, \end{equation} where ${\cal T}$ is the time-ordering operator for $\tau$. In the case of finite temperatures, $\braket{\dots}$ refers to the equilibrium expectation value $\mathrm{Tr}(\frac{1}{Z}e^{-\beta H}\dots)$, with $Z=\mathrm{Tr}(e^{-\beta H})$. The time evolution in imaginary time is given by \begin{equation}
U_{\lambda_B}(\tau)=e^{-H\tau} \,. \end{equation} We transform all operators $\hat{A}$ into the appropriate picture in imaginary time using the definition \begin{equation}
\hat{A}(\tau)=U_{\lambda_B}^{-1}(\tau) \, \hat{A} \, U_{\lambda_B}(\tau)\,. \end{equation} The full correlator can be written in the form \begin{equation}\label{eq_Full_correlator_In_expansion_from_Matsubara} \mathcal{G}_{M,SB}(\tau)=-\frac{\langle {\cal T} U(\beta) \hat{O}^i(\tau) \hat{O}^j(0) \rangle_0}
{\langle {\cal T} U(\beta)\rangle_0} \,, \end{equation} with evolution operator \begin{equation} U(\tau)={\cal T} e^{-\int_0^{\tau}d\tau' H_{C,I}(\tau')} \,. \end{equation} As for zero temperature, all disconnected diagrams are canceled by the factor $\braket{ {\cal T} U(\beta)}_0$, the so-called vacuum diagrams.
The correlator in imaginary time is periodic in $\tau$ with period $\beta$. It is convenient to transform it to frequency space using the discrete Fourier transform, \begin{equation} \mathcal{G}^{ij}_{M,X}(\tau)=\frac{1}{\beta}\sum_n \mathcal{G}^{ij}_{M,X}(\omega_n)e^{-i\omega_n \tau} \end{equation} with the Matsubara frequencies $\omega_n=2\pi n/\beta$.
\subsubsection{Connecting a real time correlator to the Matsubara Green's function}\label{subsec_connecting_real_to_imaginary} Now we discuss the connection of the Matsubara Green's function to measurable quantities such as the spectral function or correlators. As an example we focus on the Green's function of the bath.
We define the correlation function \begin{equation} C^{i}(t)= \left(\braket{\hat{X}^i(t)\hat{X}^i(0)}_0-\braket{\hat{X}^i(0)\hat{X}^i(t)}_0 \right) \theta(t)\,. \end{equation}
The eigenstates of the bath are given by $|n\rangle$, with $H_B|n\rangle=E_n|n\rangle$. This allows us to rewrite the correlator, \begin{equation}
C^{i}(t)=\frac{\theta(t)}{Z_B}\sum_{nm}|\langle n|\hat{X}^i|m\rangle|^2 e^{i(E_n-E_m)t}(e^{-\beta E_n} -e^{-\beta E_m}), \end{equation} with the partition function $Z_B={\rm Tr}(e^{-\beta H_B})$. The real part of the Fourier transform of the correlator gives us the spectral function \begin{eqnarray}
A^{i}(\omega) &=& \frac{1}{\pi} {\rm Re}\left(\int_{-\infty}^{\infty} \mathrm{d}t \, e^{i\omega t} C^i(t)\right)\\
&=&\!\!
\frac{1}{Z_B}\sum_{nm}|\langle n|\hat{X}^i|m\rangle|^2\nonumber\\
& & (e^{-\beta E_m} -e^{-\beta E_n}) \,\delta\!\left[\omega-(E_n\!-\!E_m)\right] \,.
\nonumber \end{eqnarray} Apart from a factor $-1$, the imaginary part of the retarded Green's function $\mathcal{G}_{B0}^R(t)$ is equivalent to the correlation function $C^i(t)$, since \begin{equation} \mathcal{G}_{B0}^{R, ii}(t) = - i \langle [\hat{X}^i(t),\hat{X}^i(0)]\rangle_0 \theta(t) \,. \end{equation} Assuming that $A^{i}(\omega)$ has been measured, the retarded Green's function $\mathcal{G}_{B0}^R(\omega)$ of the bath can be calculated using \begin{equation} \label{eq_GBOR_S}
\mathcal{G}_{B0}^{R, ii}(\omega)=\int_{-\infty}^{\infty} \mathrm{d}\omega_1\frac{A^i(\omega_1)}{\omega -\omega_1+i0} \,. \end{equation} Describing the Matsubara Green's function in terms of the spectral function shows a connection to the retarded Green's function for finite temperatures $\mathcal{G}^{R}_{P}$, \begin{equation} \mathcal{G}^{ij}_{M,P}(\omega_n) = \mathcal{G}^{R,ij}_{P}(i \omega_n) \;,\; \omega_n>0 \,, \label{eq_GM_equals_GR} \end{equation} with $P\in\{B0,S0,SB\}$. This requires an analytic continuation of $\mathcal{G}^{R}_{P}$ in the complex plane. Via the spectral function we can derive the Kramers-Kronig relation, \begin{equation} \mathcal{G}^{ij}_{P}(\omega) = {\rm Re}\mathcal{G}^{R,ij}_{P}(\omega)+ i (1+2 \bar{n}(\omega)) {\rm Im}\mathcal{G}^{R,ij}_{P}(\omega)\,, \end{equation} with $\bar{n}(\omega) = (e^{\beta\omega}-1)^{-1}$. Starting from the Matsubara Green's function, we obtain information about the retarded Green's function at the points $i\omega_n$. We would like to have the ideal Green's function $\mathcal{G}^R_{S0}$, i.e., the spectral function, for the complete real axis. This can be achieved by using numerical methods like the Pad\'{e} approximation approach~\cite{Pade_1,Pade_2}. However, it should be emphasized that the numerical transformation of a Green's function at the Matsubara frequencies to the real axis is still a non trivial problem and an active research field~\cite{Analytical_continuatioon}.
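As an added numerical remark (not part of the original text), Eq.~(\ref{eq_GBOR_S}) can be evaluated on a frequency grid once the spectral function has been measured. For the Lorentzian test case assumed below, the integral is also known in closed form, which allows a direct check of the discretization:
\begin{verbatim}
import numpy as np

# Discretized evaluation of Eq. (eq_GBOR_S) for an assumed Lorentzian spectral
# function A(w1) = (Gamma/pi)/((w1-w0)^2 + Gamma^2).  With +i0 replaced by a
# small +i*eta, the continuum integral gives exactly 1/(w - w0 + i*(Gamma+eta)).
w0, Gamma = 2.0, 0.3                      # illustrative parameters
eta = 1e-2                                # small broadening replacing +i0
w1 = np.linspace(-40.0, 40.0, 400001)
dw = w1[1] - w1[0]
A = (Gamma / np.pi) / ((w1 - w0) ** 2 + Gamma ** 2)

def G_R_numeric(w):
    return np.sum(A / (w - w1 + 1j * eta)) * dw

for w in (1.0, 2.0, 3.5):
    exact = 1.0 / (w - w0 + 1j * (Gamma + eta))
    print(w, abs(G_R_numeric(w) - exact))  # small grid/truncation errors
\end{verbatim}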
\subsection{Model system: chain of resonators with individual baths}
In this section we give an explicit example of our method and particularly of the validity of Eq.~(\ref{eq_GSO}). We consider a system of coupled harmonic oscillators, \begin{equation} H_S = \sum_{j=1}^N\left( \frac{1}{2} m\omega_r^2 q_j^2 + \frac{1}{2m} p_j^2 + \frac{m \Omega^2}{2}(q_{j+1}-q_j)^2 \right) \,, \end{equation} where $N$ is the number of resonators, $m$ refers to the mass, $\omega_r$ is the eigenfrequency of an uncoupled resonator, and $\Omega$ describes the coupling between neighboring oscillators. We assume periodic boundary conditions. For a system of coupled resonators, Wick's theorem as stated in Eq.~(\ref{eq_Wick_Theorem_For_O}) is clearly valid. Here we validate our previously derived results for the connection between the ideal and perturbed correlators by using the quantum regression theorem (QRT)~\cite{Carmichael}. While the system of bare coupled resonators would not make for a good quantum simulator, proposals exist for modeling the Bose-Hubbard model using coupled nonlinear resonators~\cite{Hartmann}. Similarly, limiting cases from noninteracting bosons to hard-core bosons have been studied in the context of analog quantum simulation~\cite{Bose_Hubbard_Cirac}.
We assume that each of the resonators is coupled to an individual bosonic bath, \begin{align} H_C &= \sum_j \hat{O}^j \hat{X}^j \,, \quad H_B = \sum_{j,m} \bar{\omega}_m^{(j)} b_m^{(j)\dagger} b_m^{(j)} \,, \\ \nonumber
&\text{with} \ \hat{O}^j=q_j\,, \ \text{and} \ \hat{X}^j=\sum_m t_m^{(j)}(b_m^{(j)\dagger} + b_m^{(j)}) \;. \end{align} We assume the baths to be identical, i.e., \begin{equation} \bar{\omega}^{(j)}_m=\bar{\omega}_m \;,\quad t_m^{(j)}=t_m\; , \end{equation} but independent, \begin{equation} \braket{\hat{X}^{j_1}(t_1)\hat{X}^{j_2}(t_2)}_0 = 0 \ \text{for} \ j_1 \neq j_2 \, . \end{equation} Diagonalizing the system Hamiltonian results in \begin{equation} H_S = \sum_k \Omega_k a_k^\dagger a_k\, , \ \text{with} \ \Omega_k = \sqrt{\left[2 \Omega \sin(k\frac{\varphi_0 }{2})\right]^2+\omega_r^2} \,, \end{equation} where $\varphi_0=\frac{2 \pi}{N}$. The connection of the annihilation and creation operators of the system eigenmodes, $a_k$ and $a_k^\dagger$, to the original operators has the form \begin{alignat}{3} q_j &= \sqrt{\frac{1}{2m\omega_r}} && \!\!\!( d_j^\dagger + d_j ) \,, \\ d_j &= \frac{1}{2\sqrt{N}} \sum_{k=1}^N &&\left[ e^{-ikj\varphi_0} \left( \sqrt{\frac{\omega_r}{\Omega_k}} -\sqrt{\frac{\Omega_k}{\omega_r}} \right) a_k^\dagger \right. \nonumber\\ &&&+ \left. e^{ijk\varphi_0} \left( \sqrt{\frac{\omega_r}{\Omega_k}} + \sqrt{\frac{\Omega_k}{\omega_r}} \right) a_k \right] \, . \end{alignat} We consider finite temperatures; the spectral density of the bath is then given by \begin{equation}
A^i(\omega) \approx \frac{1}{2\pi} \mathrm{sign}(\omega) J^i(|\omega|) \,, \end{equation} with $J^i(\omega) = J(\omega) = 2\pi \sum_m t_m^{2} \delta(\omega-\bar{\omega}_m)$.
To compare Eq.~(\ref{eq_GSO}) to correlators calculated using a master-equation approach,
we calculate the full Green's function $\mathcal{G}^{j_1 j_2}_{M,SB}$ using the QRT. To this end we assume the dynamics of the full system to be approximately described by the Lindblad equation \begin{equation} \dot{\rho}(t) = \mathcal{L}\rho(t) \, , \end{equation} with the Lindblad terms \begin{align} \mathcal{L}\rho =& -i [H_S,\rho] \nonumber\\ &+ \sum_{k=1}^N \frac{\Gamma_k}{2} (\bar{n}_k +1) \left( 2 a_k \rho a_k^\dagger - a_k^\dagger a_k \rho - \rho a_k^\dagger a_k \right) \nonumber\\
&+ \sum_{k=1}^N \frac{\Gamma_k}{2} \bar{n}_k \left( 2 a_k^\dagger \rho a_k - a_k a_k^\dagger \rho - \rho a_k a_k^\dagger \right) \,, \label{eq_Lindblad_Resonators} \end{align} where $\bar{n}_k=(e^{\beta\Omega_k}-1)^{-1}$. Assuming the spectral density of the bath to be smooth, we find the effective rates \begin{equation} \Gamma_k = \frac{1}{2m\Omega_k} J(\Omega_k) \,, \end{equation} where the prefactor $(2m\omega_r)^{-1}$ arises from $\hat{O}^j= (2m\omega_r)^{-1/2} (d_j^{\dagger}+d_j)$ and the factor $\frac{\omega_r}{\Omega_k}$ results from the transition from $d_j^{\dagger}+d_j$ to $a_k^{\dagger}+a_k$. In accordance with the assumptions used for the Lindblad equation, Eq.~(\ref{eq_GBOR_S}) reduces to \begin{equation} \label{eq_GBOR_J}
i\mathcal{G}_{B0}^{R, ij}(\omega)\approx\delta_{ij} \frac{1}{2}\mathrm{sign}(\omega)J^i(|\omega|) \,. \end{equation} For the Lindblad equation to be valid, the spectral density of the bath has to be sufficiently smooth around the system frequencies $\Omega_k$.
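For concreteness, the following short sketch (added here as an illustration; the Ohmic form of $J(\omega)$ and all numerical values are assumptions of this example, not taken from the text) evaluates the mode frequencies $\Omega_k$ and the corresponding effective rates $\Gamma_k$:
\begin{verbatim}
import numpy as np

# Illustrative evaluation of the mode frequencies and effective decay rates.
N, m = 10, 1.0
omega_r, Omega = 1.0, 0.4
alpha = 0.05                                # strength of the assumed Ohmic bath

k = np.arange(1, N + 1)
phi0 = 2.0 * np.pi / N
Omega_k = np.sqrt((2.0 * Omega * np.sin(k * phi0 / 2.0)) ** 2 + omega_r ** 2)

J = lambda w: alpha * w                     # assumed Ohmic spectral density
Gamma_k = J(Omega_k) / (2.0 * m * Omega_k)  # effective rates defined above

print(Omega_k)   # between omega_r and sqrt(4*Omega**2 + omega_r**2)
print(Gamma_k)   # for an Ohmic J these are all equal to alpha/(2*m)
\end{verbatim}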
With the QRT, the Lindblad terms fulfill the following equation for an arbitrary operator $\hat{A}$ and all $k$~\cite{Carmichael}: \begin{equation} {\rm Tr}\left[ a_k \mathcal{L} \hat{A} \right] = -(i\Omega_k +\frac{\Gamma_k}{2}) {\rm Tr}\left[ a_k \hat{A}\right] \, . \end{equation} For $t>0$ we get \begin{align} \braket{\hat{A}(t_0) a_{k}(t + t_0)} &= e^{-i \Omega_k t} e^{- \frac{\Gamma_k}{2} t}\braket{\hat{A}(t_0) a_{k}(t_0)} \,,\\ \braket{a_{k}(t + t_0)\hat{A}(t_0)} &= e^{-i \Omega_k t} e^{- \frac{\Gamma_k}{2} t}\braket{a_{k}(t_0)\hat{A}(t_0)}\,,\\ \braket{\hat{A}(t_0) a_{k}^\dagger(t + t_0)} &= e^{+i \Omega_k t} e^{- \frac{\Gamma_k}{2} t}\braket{\hat{A}(t_0) a_{k}^\dagger(t_0)} \,,\\ \braket{a_{k}^\dagger(t + t_0)\hat{A}(t_0)} &= e^{+i \Omega_k t} e^{- \frac{\Gamma_k}{2} t}\braket{a_{k}^\dagger(t_0)\hat{A}(t_0)} \, . \end{align} The stationary solution of the Lindblad equation is proportional to $e^{-\beta \sum_k\Omega_k a_k^\dagger a_k}$. Using this, we calculate the initial values for $\braket{a_{k}^{(\dagger)}(t_0)a_k^{(\dagger)}(t_0)}$ and find \begin{align} \braket{a_k(t_1)a_{k'}(t_2)} &= 0\,,\\
\braket{a_k^\dagger(t_1)a_{k'}(t_2)} &= \delta_{k,k'} \bar{n}_k e^{i\Omega_k (t_1-t_2)} e^{-\frac{\Gamma_k}{2} |t_1-t_2|}\,,\\
\braket{a_k(t_1)a_{k'}^\dagger(t_2)} &= \delta_{k,k'} (\bar{n}_k + 1) e^{i\Omega_k (t_1-t_2)} e^{-\frac{\Gamma_k}{2} |t_1-t_2|}\,,\\ \braket{a_k^\dagger(t_1)a_{k'}^\dagger(t_2)} &= 0 \,. \end{align} A direct calculation of the free correlators results in \begin{align} \braket{a_k(t_1)a_{k'}(t_2)}_0 &= 0\,,\\ \braket{a_k^\dagger(t_1)a_{k'}(t_2)}_0 &= \delta_{k,k'} \bar{n}_k e^{i\Omega_k (t_1-t_2)} \,,\\\ \braket{a_k(t_1)a_{k'}^\dagger(t_2)}_0 &= \delta_{k,k'} (\bar{n}_k + 1) e^{i\Omega_k (t_1-t_2)}\,,\\ \braket{a_k^\dagger(t_1)a_{k'}^\dagger(t_2)}_0 &= 0 \, . \end{align} From this result we calculate the retarded Green's functions $\mathcal{G}^{R,j_1j_2}_{S0}(t)$, $\mathcal{G}^{R,j_1j_2}_{SB}(t)$ and perform the Fourier transform. With an analytic continuation and Eq.~(\ref{eq_GM_equals_GR}) we finally arrive at the Matsubara Green's functions for $\omega_n>0$: \begin{align} \mathcal{G}^{j_1j_2}_{M,SO}(\omega_n) =& \frac{1}{N} \sum_{k=1}^N \frac{1}{2 m \Omega_k} \nonumber\\ &\times \left[ e^{-i k(j_1-j_2)\varphi_0}\bar{n}_k -e^{i k(j_1-j_2)\varphi_0}(\bar{n}_k +1) \right] \nonumber\\ &\times \left( \frac{1}{i \omega_n +\Omega_k+ i0} -\frac{1}{i \omega_n -\Omega_k+ i0}\right) \,,\\ \mathcal{G}^{j_1j_2}_{M,SB}(\omega_n) =& \frac{1}{N} \sum_{k=1}^N \frac{1}{2 m \Omega_k} \nonumber\\ &\times \left[ e^{-i k(j_1-j_2)\varphi_0}\bar{n}_k -e^{i k(j_1-j_2)\varphi_0}(\bar{n}_k +1) \right] \nonumber\\ &\times \left( \frac{1}{i \omega_n +\Omega_k+ i\frac{\Gamma_k}{2}} -\frac{1}{i \omega_n -\Omega_k+ i\frac{\Gamma_k}{2}}\right) \,. \end{align} To calculate the bath Green's function using Eq.~(\ref{eq_GSO}) we introduce the transformation \begin{align} \mathcal{G}^{k}_{M,S0}(\omega_n) =& \sum_{j_1,j_2} \mathcal{G}^{j_1j_2}_{M,SO} e^{ik(j_1-j_2)k\varphi_0}\nonumber\\
=&\frac{N}{2m\Omega_k} \left( \frac{1}{i \omega_n -\Omega_k+ i0} -\frac{1}{i \omega_n +\Omega_k+ i0} \right) \,, \\ \mathcal{G}^{k}_{M,SB}(\omega_n) =& \sum_{j_1,j_2} \mathcal{G}^{j_1j_2}_{M,SB} e^{ik(j_1-j_2)\varphi_0}\nonumber\\
=&\frac{N}{2m\Omega_k} \left( \frac{1}{i \omega_n -\Omega_k+ i\frac{\Gamma_k}{2}} -\frac{1}{i \omega_n +\Omega_k+ i\frac{\Gamma_k}{2}} \right) \,. \end{align} With this Eq.~(\ref{eq_GSO}) results in \begin{align} \mathcal{G}^{k}_{M,SB}(\omega_n) = \mathcal{G}^{k}_{M,S0}(\omega_n) +& \mathcal{G}^{k}_{M,S0}(\omega_n) \mathcal{G}^{k}_{M,SB}(\omega_n) \nonumber\\ &\times\sum_j \mathcal{G}^{jj}_{M,BO}(\omega_n) \frac{1}{N^2} \,. \end{align} In the Lindblad equation we take into account the spectral density of the bath at $\Omega_k$. Since the bath Green's function depends on the spectral density of the bath, the relation is true for $\omega_n\approx\Omega_k$. Using the assumption of identical and independent baths we arrive at \begin{equation} \mathcal{G}^{j_1j_2}_{M,BO}(\omega_n\approx \Omega_k) \approx \delta_{j_1,j_2} m \Gamma_k\left( \Omega_k +\frac{\Gamma_k}{4} \right) \, . \end{equation} In the limit of small coupling to the bath $\Gamma_k \ll \Omega_k$, we are left with \begin{equation} \mathcal{G}^{j_1j_2}_{M,BO}(\omega_n\approx\Omega_k)\approx \delta_{j_1,j_2} \frac{1}{2} J(\Omega_k) \,, \end{equation} From a comparison to Eq.~(\ref{eq_GBOR_J}), we conclude that Eq.~(\ref{eq_GSO}) holds for this example. For an Ohmic spectral density and with $\Omega_k \rightarrow i\omega$ the Matsubara Green's function of the bath coincides with Eq.~(\ref{eq_GBOR_J}).
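As an added numerical cross-check (not part of the original derivation; the parameter values are illustrative and weak damping $\Gamma_k\ll\Omega_k$ is assumed), one can insert the mode-resolved Green's functions given above into the reconstruction formula and verify that the unperturbed $\mathcal{G}^{k}_{M,S0}$ is recovered up to small corrections for $\omega_n\approx\Omega_k$:
\begin{verbatim}
import numpy as np

# Check of the reconstruction for a single mode k of the resonator chain.
N, m = 10, 1.0
Omega_k, Gamma_k = 1.2, 0.01              # illustrative values, Gamma_k << Omega_k
omega_n = np.linspace(0.9, 1.5, 7) * Omega_k  # Matsubara frequencies near Omega_k

def G_S0(wn):   # unperturbed mode Green's function (i0 -> 0)
    return N / (2 * m * Omega_k) * (1 / (1j * wn - Omega_k)
                                    - 1 / (1j * wn + Omega_k))

def G_SB(wn):   # damped (perturbed) mode Green's function
    return N / (2 * m * Omega_k) * (1 / (1j * wn - Omega_k + 1j * Gamma_k / 2)
                                    - 1 / (1j * wn + Omega_k + 1j * Gamma_k / 2))

# Bath contribution entering the mode-resolved Dyson equation,
# (1/N**2) * sum_j G_B0^{jj} ~ (m/N) * Gamma_k * (Omega_k + Gamma_k/4).
Sigma = (m / N) * Gamma_k * (Omega_k + Gamma_k / 4)

G_rec = G_SB(omega_n) / (1 + Sigma * G_SB(omega_n))
rel_dev = np.abs(G_rec - G_S0(omega_n)) / np.abs(G_S0(omega_n))
print(rel_dev.max())   # small; vanishes as omega_n -> Omega_k and Gamma_k -> 0
\end{verbatim}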
\vspace*{1em}
\section{Conclusions} The main result we presented in this paper is twofold. On the one hand, we introduced a method that can be used to reconstruct certain unperturbed (ideal) Green's functions from the perturbed ones, measured by a quantum simulator coupled to additional degrees of freedom. To achieve this, we assumed that any $n$-time correlator of the coupling operator of the ideal system can be written as a product of two-time correlators. This is known as Wick's theorem. On the other hand, we explained how to verify this assumption by a measurement. Furthermore, we assumed good knowledge of the bath correlators to perform the reconstruction. In particular, we presumed that these correlators are measured independently when not coupled to the ideal system. We also clarified how imperfect measurements of the bath and of the full correlator affect the reconstruction. For example, in the case of strong coupling to the bath our result is still valid, but the reconstruction fails even in the presence of small noise during the measurement.
Presently, the applicability of analog quantum simulation is severely restricted, since the influence of error sources is not well understood. The approach presented in this paper paves the way to quantifying and even correcting errors in quantum simulation. Since the reconstruction is based on classical postprocessing, it helps to make the results of quantum simulation reliable without adding an overhead to the quantum system. Quantum simulation thus retains its promise of yielding interesting results even with small quantum systems.
\begin{widetext} \appendix \section{Disconnected Diagrams} \label{appendix_disconnected_diagrams} In this section, we explain how the so-called vacuum diagrams $\braket{{\cal T} S(\infty)}_0$ cancel the disconnected diagrams in the free two-time correlator $\braket{ {\cal T} S(\infty) \hat{O}_I(t) \hat{O}_I(0)}_0$. To shorten the equations we use $\hat{A}_i$ as an abbreviation for $\hat{A}(t_i)$. For simplicity we base our discussion on a coupling Hamiltonian of the form $H_C=\hat{O}\hat{X}$. It is straight forward to extend this calculations on the full model described in Sec.~(\ref{sec_Full_model_and_discussion}). The vacuum diagrams are given by \begin{equation} \braket{{\cal T} S(\infty)}_0 = \sum_n \frac{1}{n!} (-i)^n \int\limits_{-\infty}^\infty \mathrm{d}t_1 \dots \int\limits_{-\infty}^\infty \mathrm{d}t_n \braket{{\cal T} \hat{O}_1 \dots \hat{O}_n}_0 \braket{{\cal T} \hat{X}_1 \dots \hat{X}_n}_0 = \sum_n V_n \,, \end{equation} where we assume \begin{equation} \braket{\hat{O}_I(t)}_0 = 0 \,, \quad \braket{\hat{X}_{I}(t)}_0 = 0 \,, \end{equation} so that terms with $n$ being an odd number are zero. We have introduced $V_n$, the vacuum diagrams of order $n$. Now we elaborate the connection between the free correlator and the vacuum diagrams. The free two-time correlator is given by \begin{equation} \braket{{\cal T}S(\infty) \hat{O}_a \hat{O}_b}_0 = \sum_n \frac{1}{n!} (-i)^n \int\limits_{-\infty}^\infty \mathrm{d}t_1 \dots \int\limits_{-\infty}^\infty \mathrm{d}t_n \braket{{\cal T} \hat{O}_a \hat{O}_b\hat{O}_1 \dots \hat{O}_n}_0 \braket{{\cal T} \hat{X}_1 \dots \hat{X}_n}_0 \,. \end{equation} From this we apply Wick's theorem and take out the two-time correlators which form a connected diagram and recombine the surplus correlators in a higher-order correlator. There are $\frac{n!}{(n-m)!}$ possibilities to choose $m$ vertices out of $n$. Therefore, a connected diagram with $m$ vertices occurs $\frac{n!}{(n-m)!}$ times \begin{eqnarray} \braket{{\cal T}S(\infty) \hat{O}_a \hat{O}_b}_0 = \sum_n \frac{1}{n!} (-i)^n \!\!\!\int\limits_{-\infty}^\infty \mathrm{d}t_1 \dots\! \int\limits_{-\infty}^\infty \mathrm{d}t_n \sum_m^n && \!\!\!\! \braket{{\cal T}\hat{O}_a\hat{O}_1}_0 \braket{{\cal T}\hat{X}_1\hat{X}_2}_0 \braket{{\cal T}\hat{O}_2\hat{O}_3}_0 \dots \braket{{\cal T}\hat{X}_{m-1}\hat{X}_m}_0 \braket{{\cal T}\hat{O}_m\hat{O}_b}_0 \nonumber\\ &&\cdot {\textstyle \frac{n!}{(n-m)!}} \braket{{\cal T} \hat{O}_{m+1} \dots \hat{O}_n}_0 \braket{{\cal T} \hat{X}_{m+1} \dots \hat{X}_n}_0 \,. \end{eqnarray} By resorting the factors we can identify the vacuum diagrams of order $n-m$, \begin{eqnarray} \braket{{\cal T}S(\infty) \hat{O}_a \hat{O}_b}_0=\sum_n \sum_m^n & \underbrace{ (-i)^m \!\!\!\int\limits_{-\infty}^\infty \mathrm{d}t_1 \dots\! \int\limits_{-\infty}^\infty \mathrm{d}t_m \braket{{\cal T}\hat{O}_a\hat{O}_1}_0 \braket{{\cal T}\hat{X}_1\hat{X}_2}_0 \braket{{\cal T}\hat{O}_2\hat{O}_3}_0 \dots \braket{{\cal T}\hat{X}_{m-1}\hat{X}_m}_0 \braket{{\cal T}\hat{O}_m\hat{O}_b}_0 }_{=C^{a,b}_m} \nonumber\\ &\cdot \overbrace{ {\textstyle\frac{1}{(n-m)!}} (-i)^{n-m} \int\limits_{-\infty}^\infty \mathrm{d}t_{m+1} \dots\! \int\limits_{-\infty}^\infty \mathrm{d}t_n \braket{{\cal T} \hat{O}_{m+1} \dots \hat{O}_n}_0 \braket{{\cal T} \hat{X}_{m+1} \dots \hat{X}_n}_0 }^{=V_{n-m}} \,, \end{eqnarray} and find the connected diagrams of order $m$, which we will call $C^{a,b}_m$, with $C^{a,b}_0=\braket{{\cal T} \hat{O}_a \hat{O}_b}_0$. 
One can factor out $\braket{{\cal T} S(\infty)}_0$ by using the Cauchy product formula \begin{equation} \braket{{\cal T}S(\infty) \hat{O}_a \hat{O}_b}_0=\sum_n^\infty \sum_m^n C^{a,b}_m V_{n-m} = \sum_m^\infty C^{a,b}_m\sum_n^\infty V_n = \braket{{\cal T} S(\infty)}_0 \sum_m^\infty C^{a,b}_m \,. \end{equation} This means, that the vacuum diagrams cancel all disconnected diagrams, i.e., \begin{equation} \frac{\braket{{\cal T}S(\infty) \hat{O}_a \hat{O}_b}_0}{\braket{{\cal T} S(\infty)}_0} =
\begin{tikzpicture}[anchor=base,baseline=8pt]
\coordinate (A) at (0,0.4);
\coordinate (B) at (0.6,0.4);
\draw[line width=1.0pt] (A) -- (B);
\end{tikzpicture}
+ \begin{tikzpicture}[anchor=base,baseline=8pt]
\coordinate (A) at (0,0.4);
\coordinate (B) at (0.6,0.4);
\coordinate (C) at (1.2,0.4);
\coordinate (D) at (1.8,0.4);
\draw[line width=1.0pt] (A) -- (B);
\draw[line width=1.0pt, snake it] (B) -- (C);
\draw[line width=1.0pt] (C) -- (D);
\fill (B) circle (2pt);
\fill[white] (B) circle (1pt);
\fill (C) circle (2pt);
\fill[white] (C) circle (1pt);
\end{tikzpicture}
+ \begin{tikzpicture}[anchor=base,baseline=8pt]
\coordinate (A) at (0,0.4);
\coordinate (B) at (0.6,0.4);
\coordinate (C) at (1.2,0.4);
\coordinate (D) at (1.8,0.4);
\coordinate (E) at (2.4,0.4);
\coordinate (F) at (3.0,0.4);
\draw[line width=1.0pt] (A) -- (B);
\draw[line width=1.0pt, snake it] (B) -- (C);
\draw[line width=1.0pt] (C) -- (D);
\draw[line width=1.0pt, snake it] (D) -- (E);
\draw[line width=1.0pt] (E) -- (F);
\fill (B) circle (2pt);
\fill[white] (B) circle (1pt);
\fill (C) circle (2pt);
\fill[white] (C) circle (1pt);
\fill (D) circle (2pt);
\fill[white] (D) circle (1pt);
\fill (E) circle (2pt);
\fill[white] (E) circle (1pt);
\end{tikzpicture}
+ \dots \,. \end{equation}
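The resummation above relies only on the Cauchy product formula for absolutely convergent series. As an added toy illustration (the geometric coefficients below are placeholders, not actual diagram values), the identity $\sum_n\sum_{m\le n} C_m V_{n-m} = (\sum_m C_m)(\sum_n V_n)$ is easily checked numerically:
\begin{verbatim}
import numpy as np

# Toy check of the Cauchy product identity with geometric placeholder series.
a, b = 0.3, 0.5
M = 60                                     # truncation order
C = a ** np.arange(M)                      # stands in for the connected diagrams
V = b ** np.arange(M)                      # stands in for the vacuum diagrams

cauchy = sum(C[m] * V[n - m] for n in range(M) for m in range(n + 1))
product = C.sum() * V.sum()
print(cauchy, product)    # both close to 1/((1-a)(1-b))
\end{verbatim}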
\section{Four-time correlator} \label{app_4-time_correlator} In this section, we consider a system where Wick's theorem is not exactly valid. The goal is to derive Eqs.~(\ref{eq_checkingwick_4er}) and (\ref{eq_checkingwick_2er}), in order to quantify the deviation from Wick's theorem. We define the lowest order correction to Wick's theorem as $G_4(t_1,t_2,t_3,t_4)$, \begin{equation} G_4(t_1,t_2,t_3,t_4) = \braket{{\cal T} \hat{O}_1\hat{O}_2\hat{O}_3\hat{O}_4}_{0,F}- \braket{{\cal T} \hat{O}_1\hat{O}_2\hat{O}_3\hat{O}_4}_{0}= \braket{{\cal T} \hat{O}_1\hat{O}_2\hat{O}_3\hat{O}_4}_{0,F} -\sum_{\substack{3 \text{ perm.} \\ a,b,c,d}} \braket{{\cal T}\hat{O}_a\hat{O}_b}_{0}\braket{{\cal T}\hat{O}_c\hat{O}_d}_{0} \; , \end{equation} where the summation runs over all three indistinguishable permutations. With $\braket{\dots}$ ($\braket{\dots}_{0}$) we refer to (un)perturbed correlators for which we assume Wick's theorem to be exactly valid. In contrast to this, $\braket{\dots}_F$ ($\braket{\dots}_{0,F}$) describe the (un)perturbed correlators including the corrections to Wick's theorem. In this paper we only consider the lowest-order correction to Wick's theorem ($G_4$). All higher-order corrections are neglected. To shorten the equations we use the abbreviation $G_4(1,2,3,4)=G_4(t_1,t_2,t_3,t_4)$. An $n$-time correlator is then given by \begin{equation} \braket{{\cal T} \hat{O}_1 \dots \hat{O}_n}_{0,F} = \braket{{\cal T} \hat{O}_1 \dots \hat{O}_n}_{0} + \sum_{\substack{\text{perm.}\\ \alpha,\beta,\gamma,\delta}} G(\alpha,\beta,\gamma,\delta) \braket{{\cal T} \, \prod\limits_{\substack{k \in \{1,\dots n\}\backslash\{\alpha,\beta,\gamma,\delta\}}} \hat{O}_k}\!{}_{0} \label{eq_correctiontowick} \,. \end{equation}
At first we show for the four-time correlator that if Wick's theorem is valid for the unperturbed correlator, it is also valid for the perturbed one. We start with \begin{align} \braket{{\cal T} \hat{O}_{\text{I}}\hat{O}_{\text{II}}\hat{O}_{\text{III}}\hat{O}_{\text{IV}}} &= \frac{\braket{{\cal T}S(\infty) \hat{O}_{\text{I}} \hat{O}_{\text{II}}\hat{O}_{\text{III}}\hat{O}_{\text{IV}}}_{0}}{\braket{{\cal T} S(\infty)}_{0}} \\ &= \sum_n \frac{(-i)^n}{n!} \int_{-\infty}^\infty \mathrm{d}t_1 \dots\! \int\limits_{-\infty}^\infty \mathrm{d}t_n \frac{1}{\braket{{\cal T} S(\infty)}_{0}}\braket{{\cal T}\hat{O}_{\text{I}} \hat{O}_{\text{II}}\hat{O}_{\text{III}}\hat{O}_{\text{IV}}\hat{O}_1 \dots \hat{O}_n}_{0} \braket{{\cal T}\hat{X}_1 \dots \hat{X}_n}_{0} \;. \end{align} We focus on a coupling Hamiltonian of the form $H_C=\hat{O} \hat{X}$. We proceed as in the above section and identify connected diagrams $C^{a,b}_m$ with $m$ vertices. Such diagrams occur $\frac{n!}{(n-m)!}$ times. There are six indistinguishable possibilities to choose $a$ and $b$. Out of the remaining $n-m$ operators we choose a connected diagram $C^{c,d}_k$ with $k$ vertices. This occurs $\frac{(n-m)!}{(n-m-k)!}$ times. As, for example, $C^{\text{I},\text{II}}_m$ and $C^{\text{I},\text{II}}_k$ for $m=k$ are indistinguishable, we have in fact three indistinguishable permutations to take into account: \begin{align} &\braket{{\cal T} \hat{O}_{\text{I}}\hat{O}_{\text{II}}\hat{O}_{\text{III}}\hat{O}_{\text{IV}}} \nonumber\\ &=\sum_n \frac{(-i)^n}{n!} \!\! \int\limits_{-\infty}^\infty \mathrm{d}t_1 \dots\! \int\limits_{-\infty}^\infty \mathrm{d}t_n \frac{1}{\braket{{\cal T} S(\infty)}_{0}} \sum_{\substack{3 \text{ perm.} \\ a,b}} \sum_m^n \frac{n!}{(n-m)!} \braket{{\cal T}\hat{O}_a\hat{O}_1}_{0} \braket{{\cal T}\hat{X}_1\hat{X}_2}_{0} \braket{{\cal T}\hat{O}_2\hat{O}_3}_{0} \dots \braket{{\cal T}\hat{O}_m\hat{O}_b}_{0} \nonumber\\ &\cdot \sum_k^{n-m} \frac{(n-m)!}{(n-m-k)!} \braket{{\cal T}\hat{O}_c\hat{O}_{m+1}}_{0} \braket{{\cal T}\hat{X}_{m+1}\hat{X}_{m+2}}_{0} \braket{{\cal T}\hat{O}_{m+2}\hat{O}_{m+3}}_{0} \dots \braket{{\cal T}\hat{O}_{m+k}\hat{O}_d}_{0} \nonumber\\ &\cdot \braket{{\cal T}\hat{O}_{m+k+1} \dots \hat{O}_n}_{0} \braket{{\cal T}\hat{X}_{m+k+1} \dots \hat{X}_n}_{0} \\ &= \sum_{\substack{3 \text{ perm.} \\ a,b}}\frac{1}{\braket{{\cal T} S(\infty)}_{0}} \sum_n^\infty \sum_m^n C^{a,b}_m \sum_k^{n-m} C^{c,d}_k \ V_{n-m-k} = \sum_{\substack{3 \text{ perm.} \\ a,b}} \frac{1}{\braket{{\cal T} S(\infty)}_{0}} \sum_n^\infty V_n \sum_m^\infty C^{a,b}_m \sum_k^\infty C^{c,d}_k \label{eq_resummation_4timecorr_W} \\ &= \sum_{\substack{3 \text{ perm.} \\ a,b}} \braket{{\cal T} \hat{O}_a \hat{O}_b} \braket{{\cal T} \hat{O}_c \hat{O}_d} \,. \end{align} The resummation in Eq.~(\ref{eq_resummation_4timecorr_W}) represents the Cauchy product formula for three series followed by an index shift. Hence, we expressed the full four-time correlator in terms of full two-time correlators.
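As an added side remark, the Wick factorization used in this step can be illustrated in its classical form, namely the Isserlis theorem for zero-mean jointly Gaussian variables; the following Monte Carlo sketch with an arbitrary covariance matrix is only a classical analogue of the operator statement above:
\begin{verbatim}
import numpy as np

# Classical analogue: for zero-mean jointly Gaussian variables the four-point
# moment factorizes into the three pair products (Isserlis/Wick theorem).
rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4))
cov = A @ A.T                              # arbitrary positive-definite covariance
x = rng.multivariate_normal(np.zeros(4), cov, size=2_000_000)

four_point = np.mean(x[:, 0] * x[:, 1] * x[:, 2] * x[:, 3])
wick = cov[0, 1] * cov[2, 3] + cov[0, 2] * cov[1, 3] + cov[0, 3] * cov[1, 2]
print(four_point, wick)   # agree up to Monte Carlo error
\end{verbatim}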
Now we include the corrections to Wick's theorem and only consider the lowest-order correction $G_4$. We introduce the correction to the normalization $\braket{{\cal T} S(\infty)}_{0,\text{corr}}$, \begin{equation} \frac{1}{\braket{{\cal T} S(\infty)}_{0,F}}=\frac{1}{\braket{{\cal T} S(\infty)}_{0}+\braket{{\cal T} S(\infty)}_{0,\text{corr}}}\approx \frac{1}{\braket{{\cal T} S(\infty)}_{0}} \left( 1 - \frac{\braket{{\cal T} S(\infty)}_{0,\text{corr}}}{\braket{{\cal T} S(\infty)}_{0}} \right)\; . \end{equation} With this and Eq.~(\ref{eq_correctiontowick}) we can identify the corrections to the full four-time correlator. For that we use the following abbreviation to describe on which set of operators we apply Wick's theorem, \begin{equation} \text{Wick}(A,\pi_n\backslash B,C) = \braket{{\cal T} \, \prod\limits_{\substack{k \in A\cup\pi_n\backslash B}} \hat{O}_k}\!{}_{0} \braket{{\cal T} \, \prod\limits_{l \in \pi_n\cup C} \hat{X}_l}\!{}_{0}\; , \end{equation} where $\pi_n=\{1,\dots,n\}$ describes the initial set of operators $\hat{O}_i$ and $\hat{X}_i$. With this notation we keep in mind which additional operators $\hat{O}_i$ we have and which operators $\hat{O}_i$ are missing. The full four-time correlator with corrections reads \begin{align} \braket{{\cal T} \hat{O}_{\text{I}}\hat{O}_{\text{II}}\hat{O}_{\text{III}}\hat{O}_{\text{IV}}}_F =& \sum_{\substack{3 \text{ perm.} \\ a,b,c,d}} \!\! \braket{{\cal T} \hat{O}_a \hat{O}_b} \braket{{\cal T} \hat{O}_c \hat{O}_d} -\sum_{\substack{3 \text{ perm.} \\ a,b,c,d}} \!\! \braket{{\cal T} \hat{O}_a \hat{O}_b} \braket{{\cal T} \hat{O}_c \hat{O}_d} \frac{\braket{{\cal T} S(\infty)}_{0,\text{corr}}}{\braket{{\cal T} S(\infty)}_{0}}+ G_4(\text{I},\text{II},\text{III},\text{IV}) \nonumber\\ & +\sum_n \frac{(-i)^n}{n!} \!\! \int\limits_{-\infty}^\infty \mathrm{d}t_1 \dots\! \int\limits_{-\infty}^\infty \mathrm{d}t_n \frac{1}{\braket{{\cal T} S(\infty)}_{0}} \left( \sum_{\substack{4 \ \text{perm.} \\ a-d}} \sum_{\substack{\text{perm.} \\ \delta}} G_4(a,b,c,\delta) \text{Wick}(\{d\},\pi_n\backslash\{\delta\}) \right. \nonumber\\ &+ \sum_{\substack{6 \ \text{perm.} \\ a-d}} \sum_{\substack{\text{perm.} \\ \gamma,\delta}} G_4(a,b,\gamma,\delta) \text{Wick}(\{c,d\},\pi_n\backslash\{\gamma,\delta\}) +\sum_{\substack{\text{perm.} \\ \alpha-\delta}} G_4(\alpha,\beta,\gamma,\delta) \text{Wick}(\{a-d\},\pi_n\backslash\{\alpha-\delta\})\nonumber\\ &+\left. \sum_{\substack{4 \ \text{perm.} \\ a-d}} \sum_{\substack{\text{perm.} \\ \beta,\gamma,\delta}} G_4(a,\beta,\gamma,\delta) \text{Wick}(\{b,c,d\},\pi_n\backslash\{\beta,\gamma,\delta\}) \right) \,. \end{align} The summations go over all distinguishable permutations. In addition, the correction to the vacuum diagrams reads \begin{equation} \frac{\braket{{\cal T} S(\infty)}_{0,\text{corr}}}{\braket{{\cal T} S(\infty)}_{0}} = \sum_n \frac{(-i)^n}{n!} \!\! \int\limits_{-\infty}^\infty \mathrm{d}t_1 \dots\! \int\limits_{-\infty}^\infty \mathrm{d}t_n \frac{1}{\braket{{\cal T} S(\infty)}_{0}} \sum_{\substack{\text{perm.} \\ \alpha-\delta}} G_4(\alpha,\beta,\gamma,\delta) \text{Wick}(\pi_n\backslash\{\alpha-\delta\}) \,. \end{equation} We can do the same for a two-time correlator \begin{align} \braket{{\cal T} \hat{O}_a \hat{O}_b}_F =& \braket{{\cal T} \hat{O}_a \hat{O}_b} - \braket{{\cal T} \hat{O}_a \hat{O}_b} \frac{\braket{{\cal T} S(\infty)}_{0,\text{corr}}}{\braket{{\cal T} S(\infty)}_{0}} \nonumber\\ &+\sum_n \frac{(-i)^n}{n!} \!\! \int\limits_{-\infty}^\infty \mathrm{d}t_1 \dots\! 
\int\limits_{-\infty}^\infty \mathrm{d}t_n \frac{1}{\braket{{\cal T} S(\infty)}_{0}} \left( \sum_{\substack{\text{perm.} \\ \gamma,\delta}} G_4(a,b,\gamma,\delta) \text{Wick}(\pi_n\backslash\{\gamma,\delta\})\right. \nonumber\\ &+ \left. \sum_{\substack{2 \ \text{perm.} \\ k,l}} \sum_{\substack{\text{perm.} \\ \beta,\gamma,\delta}} G_4(k,\beta,\gamma,\delta) \text{Wick}(l,\pi_n\backslash\{\beta,\gamma,\delta\})+\sum_{\substack{\text{perm.} \\ \alpha-\delta}} G_4(\alpha,\beta,\gamma,\delta) \text{Wick}(a,b,\pi_n\backslash\{\alpha-\delta\} \right) \;. \end{align} With these relations we calculate \begin{equation} \braket{{\cal T} \hat{O}_{\text{I}}\hat{O}_{\text{II}}\hat{O}_{\text{III}}\hat{O}_{\text{IV}}}_F -\sum_{\substack{3 \text{ perm.} \\ a,b}} \braket{{\cal T}\hat{O}_a\hat{O}_b}_F \braket{{\cal T}\hat{O}_c\hat{O}_d}_F \; . \end{equation} We have to compare terms with the same type of $G_4$, because only these terms can cancel each other. As an example we explain the procedure for $G_4(a,b,\gamma,\delta)$. Focusing on $G_4(a,b,\gamma,\delta)$ we obtain \begin{align} \sum_n \frac{(-i)^n}{n!} \!\! \int\limits_{-\infty}^\infty \mathrm{d}t_1 \dots\! \int\limits_{-\infty}^\infty \mathrm{d}t_n \frac{1}{\braket{{\cal T} S(\infty)}_{0}}&\left( \sum_{\substack{6 \ \text{perm.} \\ a-d}} \sum_{\substack{\text{perm.} \\ \gamma,\delta}} G_4(a,b,\gamma,\delta) \text{Wick}(\{c,d\},\pi_n\backslash\{\gamma,\delta\})\right. \nonumber\\ &-\left.\sum_{\substack{6 \ \text{perm.} \\ a-d}} \sum_{\substack{\text{perm.} \\ \gamma,\delta}} G_4(a,b,\gamma,\delta) \text{Wick}(\pi_n\backslash\{\gamma,\delta\}) \braket{{\cal T}\hat{O}_c\hat{O}_d}\right) \;. \end{align} In the last term we have to take into account six permutations, since the $G_4(a,b,\gamma,\delta)$ occurs in both two-time correlators. The summation over the permutations for $\gamma,\delta$ yields a factor $\frac{n!}{(n-2)!}$. Since the first contribution arises for $n=2$ we define $\tilde{n}=n-2$, which yields \begin{align} \sum_{\substack{6 \ \text{perm.} \\ a-d}}\!\!(-i)^2 \!\!\int\limits_{-\infty}^\infty \mathrm{d}t_{x_1}\!\!\int\limits_{-\infty}^\infty \mathrm{d}t_{x_2} G_4(a,b,x_1,x_2) \frac{1}{\braket{{\cal T} S(\infty)}_{0}} \sum_{\tilde{n}} \frac{(-i)^{\tilde{n}}}{\tilde{n}!} \!\! \int\limits_{-\infty}^\infty \mathrm{d}t_1 \dots\! \int\limits_{-\infty}^\infty \mathrm{d}t_{\tilde{n}} &\left(\vphantom{\hat{O}_c} \text{Wick}(\{c,d\},\pi_{\tilde{n}},\{x_1,x_2\})\right. \nonumber\\ &-\left. \text{Wick}(\pi_{\tilde{n}},\{x_1,x_2\}) \braket{{\cal T}\hat{O}_c\hat{O}_d} \right) \;. \label{eq_G(a,b,x_1,x_2)} \end{align} The possible types of diagrams in these constellations are \begin{equation} (\text{I}): \; \begin{tikzpicture}[anchor=base,baseline=5pt]
\coordinate (A) at (0,0);
\coordinate (B) at (0,0.5);
\coordinate (C) at (0.5,0);
\coordinate (D) at (0.5,0.5);
\draw[line width=1.0pt] (A) -- (D);
\draw[line width=1.0pt] (B) -- (C);
\coordinate (L1a) at (1.1,0);
\coordinate (L2a) at (1.1,0.5);
\draw[line width=1.0pt, snake it] (C) -- (L1a);
\draw[line width=1.0pt, snake it] (D) -- (L2a);
\fill (C) circle (2pt);
\fill[white] (C) circle (1pt);
\fill (D) circle (2pt);
\fill[white] (D) circle (1pt);
\coordinate (L1b) at (1.4,0);
\coordinate (L2b) at (1.4,0.5);
\draw[line width=1.0pt] (L1a) -- (L1b);
\draw[line width=1.0pt] (L2a) -- (L2b);
\fill (L1a) circle (2pt);
\fill[white] (L1a) circle (1pt);
\fill (L2a) circle (2pt);
\fill[white] (L2a) circle (1pt);
\coordinate (L1c) at (1.7,0);
\coordinate (L2c) at (1.7,0.5);
\node at (L1c) {\dots};
\node at (L2c) {\dots};
\coordinate (L1d) at (1.9,0);
\coordinate (L2d) at (1.9,0.5);
\coordinate (L1e) at (2.4,0);
\coordinate (L2e) at (2.4,0.5);
\coordinate (L1f) at (3.0,0);
\coordinate (L2f) at (3.0,0.5);
\draw[line width=1.0pt,snake it] (L1d) -- (L1e);
\draw[line width=1.0pt,snake it] (L2d) -- (L2e);
\draw[line width=1.0pt] (L1e) -- (L1f);
\draw[line width=1.0pt] (L2e) -- (L2f);
\fill (L1e) circle (2pt);
\fill[white] (L1e) circle (1pt);
\fill (L2e) circle (2pt);
\fill[white] (L2e) circle (1pt); \end{tikzpicture} \quad (\text{II}): \; \begin{tikzpicture}[anchor=base,baseline=5pt]
\coordinate (A) at (0,0);
\coordinate (B) at (0,0.5);
\coordinate (C) at (0.5,0);
\coordinate (D) at (0.5,0.5);
\draw[line width=1.0pt] (A) -- (D);
\draw[line width=1.0pt] (B) -- (C);
\coordinate (R1a) at (1,-0.15);
\coordinate (R2a) at (1,0.65);
\draw[line width=1.0pt, snake it] (C) to[out=-45,in=-120] (R1a);
\draw[line width=1.0pt, snake it] (D) to[out=45,in=120] (R2a);
\fill (C) circle (2pt);
\fill[white] (C) circle (1pt);
\fill (D) circle (2pt);
\fill[white] (D) circle (1pt);
\coordinate (R1b) at (1.17,0.05);
\coordinate (R2b) at (1.17,0.45);
\draw[line width=1.0pt] (R1a) to[out=30,in=-100] (R1b);
\draw[line width=1.0pt] (R2a) to[out=-30,in=100] (R2b);
\fill (R1a) circle (2pt);
\fill[white] (R1a) circle (1pt);
\fill (R2a) circle (2pt);
\fill[white] (R2a) circle (1pt);
\node at (1.17,0.1) {\vdots};
\coordinate (E) at (1.7,0.25);
\coordinate (F) at (2.3,0.25);
\coordinate (G) at (2.9,0.25);
\coordinate (H) at (3.2,0.25);
\coordinate (I) at (3.5,0.25);
\draw[line width=1.0pt] (E) -- (F);
\draw[line width=1.0pt, snake it] (F) -- (G);
\draw[line width=1.0pt] (G) -- (H);
\fill (F) circle (2pt);
\fill[white] (F) circle (1pt);
\fill (G) circle (2pt);
\fill[white] (G) circle (1pt);
\node at (I) {\dots};
\coordinate (J) at (3.8,0.25);
\coordinate (K) at (4.3,0.25);
\coordinate (L) at (4.9,0.25);
\draw[line width=1.0pt, snake it] (J) -- (K);
\draw[line width=1.0pt] (K) -- (L);
\fill (K) circle (2pt);
\fill[white] (K) circle (1pt); \end{tikzpicture} \;, \end{equation} multiplied by an appropriate vacuum diagram. The cross represents $G_4$. Both kinds appear in the first term. However, all contributions in the second term are of the form (II). We define the following abbreviations that describe the leg- and ring-type structures in the above diagrams: \begin{align} L_m^{x_1,a}&= (-i)^m \!\!\!\int\limits_{-\infty}^\infty \mathrm{d}t_1 \dots\! \int\limits_{-\infty}^\infty \mathrm{d}t_m \braket{{\cal T}\hat{X}_{x_1}\hat{X}_{1}}_{0} \braket{{\cal T}\hat{O}_{1}\hat{O}_{2}}_{0} \braket{{\cal T}\hat{X}_{2}\hat{X}_{3}}_{0} \dots \braket{{\cal T}\hat{O}_{m}\hat{O}_{a}}_{0} \;,\; L_0^{x_1,a}=0 \\ R_m^{x_1,x_2} &=(-i)^m \!\!\!\int\limits_{-\infty}^\infty \mathrm{d}t_1 \dots\! \int\limits_{-\infty}^\infty \mathrm{d}t_m \braket{{\cal T}\hat{X}_{x_1}\hat{X}_{1}}_{0} \braket{{\cal T}\hat{O}_{1}\hat{O}_{2}}_{0} \braket{{\cal T}\hat{X}_{2}\hat{X}_{3}}_{0} \dots \braket{{\cal T}\hat{X}_{m}\hat{X}_{x_2}}_{0} \; . \end{align} Now we proceed analogously with Eq.~(\ref{eq_G(a,b,x_1,x_2)}) and identify similar structures, get combinational factors, and do the resummation using the Cauchy product formula. It turns out that the terms of type (II) fully cancel out. So we are left with \begin{equation} \sum_{\substack{6 \ \text{perm.} \\ a-d}}\!\!(-i)^2 \!\!\int\limits_{-\infty}^\infty \mathrm{d}t_{x_1}\!\!\int\limits_{-\infty}^\infty \mathrm{d}t_{x_2} G_4(a,b,x_1,x_2) \sum_l^{\infty} \sum_k^{\infty} L_l^{x_1,c} L_k^{x_2,d} \;. \end{equation} Repeating this procedure for all kinds of $G_4$ terms, we find \begin{align} \braket{{\cal T} \hat{O}_{\text{I}}\hat{O}_{\text{II}}\hat{O}_{\text{III}}\hat{O}_{\text{IV}}}_F =& \sum_{\substack{3 \text{ perm.} \\ a,b}} \braket{{\cal T}\hat{O}_a\hat{O}_b}_F \braket{{\cal T}\hat{O}_c\hat{O}_d}_F +G_4(\text{I},\text{II},\text{III},\text{IV}) -i\!\sum_{\substack{4 \ \text{perm.} \\ a-d}}\!\!\!\int\limits_{-\infty}^\infty \mathrm{d}t_{x_1} G_4(a,b,c,x_1) \sum_k^{\infty} L_k^{x_1,d} \nonumber\\ &- \!\sum_{\substack{6 \ \text{perm.} \\ a-d}}\!\!\!\int\limits_{-\infty}^\infty \mathrm{d}t_{x_1}\!\!\int\limits_{-\infty}^\infty \mathrm{d}t_{x_2} G_4(a,b,x_1,x_2) \sum_l^{\infty} \sum_k^{\infty} L_l^{x_1,c} L_k^{x_2,d} \nonumber\\ &+i\!\sum_{\substack{4 \ \text{perm.} \\ a-d}}\!\!\!\int\limits_{-\infty}^\infty \mathrm{d}t_{x_1}\!\!\int\limits_{-\infty}^\infty \mathrm{d}t_{x_2} \!\!\int\limits_{-\infty}^\infty \mathrm{d}t_{x_3}G_4(a,x_1,x_2,x_3) \sum_l^{\infty} \sum_k^{\infty}\sum_m^{\infty} L_l^{x_1,b} L_k^{x_2,c}L_m^{x_3,d} \nonumber\\ &+\int\limits_{-\infty}^\infty \mathrm{d}t_{x_1}\!\!\int\limits_{-\infty}^\infty \mathrm{d}t_{x_2} \!\!\int\limits_{-\infty}^\infty \mathrm{d}t_{x_3}\!\!\int\limits_{-\infty}^\infty \mathrm{d}t_{x_4}G_4(x_1,x_2,x_3,x_4) \sum_l^{\infty} \sum_k^{\infty}\sum_m^{\infty}\sum_n^{\infty} L_l^{x_1,a} L_k^{x_2,b}L_m^{x_3,c} L_m^{x_4,d} \,. \end{align} We define a diagrammatic representation for these corrections, \begin{equation} \begin{tikzpicture}[anchor=base,baseline=5pt]
\coordinate (A) at (0,0);
\coordinate (B) at (0,0.5);
\coordinate (C) at (0.5,0);
\coordinate (D) at (0.5,0.5);
\draw[line width=2.0pt] (A) -- (D);
\draw[line width=2.0pt] (B) -- (C); \end{tikzpicture} = \begin{tikzpicture}[anchor=base,baseline=5pt]
\coordinate (A) at (0,0);
\coordinate (B) at (0,0.5);
\coordinate (C) at (0.5,0);
\coordinate (D) at (0.5,0.5);
\draw[line width=1.0pt] (A) -- (D);
\draw[line width=1.0pt] (B) -- (C); \end{tikzpicture} + \begin{tikzpicture}[anchor=base,baseline=5pt]
\coordinate (A) at (0,0);
\coordinate (B) at (0,0.5);
\coordinate (C) at (0.5,0);
\coordinate (D) at (0.5,0.5);
\draw[line width=1.0pt] (A) -- (D);
\draw[line width=1.0pt] (B) -- (C);
\coordinate (L1a) at (1.1,0);
\draw[line width=1.0pt, snake it] (C) -- (L1a);
\fill (C) circle (2pt);
\fill[white] (C) circle (1pt);
\coordinate (L1b) at (1.7,0);
\draw[line width=1.0pt] (L1a) -- (L1b);
\fill (L1a) circle (2pt);
\fill[white] (L1a) circle (1pt); \end{tikzpicture} + \begin{tikzpicture}[anchor=base,baseline=5pt]
\coordinate (A) at (0,0);
\coordinate (B) at (0,0.5);
\coordinate (C) at (0.5,0);
\coordinate (D) at (0.5,0.5);
\draw[line width=1.0pt] (A) -- (D);
\draw[line width=1.0pt] (B) -- (C);
\coordinate (L1a) at (1.1,0);
\coordinate (L2a) at (1.1,0.5);
\draw[line width=1.0pt, snake it] (C) -- (L1a);
\draw[line width=1.0pt, snake it] (D) -- (L2a);
\fill (C) circle (2pt);
\fill[white] (C) circle (1pt);
\fill (D) circle (2pt);
\fill[white] (D) circle (1pt);
\coordinate (L1b) at (1.7,0);
\coordinate (L2b) at (1.7,0.5);
\draw[line width=1.0pt] (L1a) -- (L1b);
\draw[line width=1.0pt] (L2a) -- (L2b);
\fill (L1a) circle (2pt);
\fill[white] (L1a) circle (1pt);
\fill (L2a) circle (2pt);
\fill[white] (L2a) circle (1pt); \end{tikzpicture} + \begin{tikzpicture}[anchor=base,baseline=5pt]
\coordinate (A) at (0,0);
\coordinate (B) at (0,0.5);
\coordinate (C) at (0.5,0);
\coordinate (D) at (0.5,0.5);
\draw[line width=1.0pt] (A) -- (D);
\draw[line width=1.0pt] (B) -- (C);
\coordinate (L1a) at (1.1,0);
\draw[line width=1.0pt, snake it] (C) -- (L1a);
\fill (C) circle (2pt);
\fill[white] (C) circle (1pt);
\coordinate (L1b) at (1.7,0);
\draw[line width=1.0pt] (L1a) -- (L1b);
\fill (L1a) circle (2pt);
\fill[white] (L1a) circle (1pt);
\coordinate (L1c) at (2.3,0);
\draw[line width=1.0pt, snake it] (L1b) -- (L1c);
\fill (L1b) circle (2pt);
\fill[white] (L1b) circle (1pt);
\coordinate (L1d) at (2.9,0);
\draw[line width=1.0pt] (L1c) -- (L1d);
\fill (L1c) circle (2pt);
\fill[white] (L1c) circle (1pt); \end{tikzpicture} +\dots \;, \end{equation} and are left with \begin{equation} \braket{{\cal T} \hat{O}_{\text{I}}\hat{O}_{\text{II}}\hat{O}_{\text{III}}\hat{O}_{\text{IV}}}_F = \sum_{\substack{3 \text{ perm.} \\ a,b,c,d}} \braket{{\cal T}\hat{O}_a\hat{O}_b}_F \braket{{\cal T}\hat{O}_c\hat{O}_d}_F + \; \begin{tikzpicture}[anchor=base,baseline=5pt]
\coordinate (A) at (0,0);
\coordinate (B) at (0,0.5);
\coordinate (C) at (0.5,0);
\coordinate (D) at (0.5,0.5);
\draw[line width=2.0pt] (A) -- (D);
\draw[line width=2.0pt] (B) -- (C); \end{tikzpicture} \,. \end{equation} With the above it is easy to derive the equation \begin{align} \braket{{\cal T} \hat{O}_{\text{I}} \hat{O}_{\text{II}}}_F =& \braket{{\cal T} \hat{O}_{\text{I}} \hat{O}_{\text{II}}} - \int\limits_{-\infty}^\infty \mathrm{d}t_{x_1}\!\!\int\limits_{-\infty}^\infty \mathrm{d}t_{x_2} G_4(\text{I},\text{II},x_1,x_2) \sum_k^{\infty} R_k^{x_1,x_2} \nonumber\\ &+i\!\sum_{\substack{2 \ \text{perm.} \\ a,b}}\!\!\!\int\limits_{-\infty}^\infty \mathrm{d}t_{x_1}\!\!\int\limits_{-\infty}^\infty \mathrm{d}t_{x_2} \!\!\int\limits_{-\infty}^\infty \mathrm{d}t_{x_3}G_4(a,x_1,x_2,x_3) \sum_l^{\infty} \sum_k^{\infty} R_k^{x_1,x_2} L_l^{x_3,b} \nonumber\\ &+\int\limits_{-\infty}^\infty \mathrm{d}t_{x_1}\!\!\int\limits_{-\infty}^\infty \mathrm{d}t_{x_2} \!\!\int\limits_{-\infty}^\infty \mathrm{d}t_{x_3}\!\!\int\limits_{-\infty}^\infty \mathrm{d}t_{x_4}G_4(x_1,x_2,x_3,x_4) \sum_l^{\infty} \sum_k^{\infty}\sum_m^{\infty}R_k^{x_1,x_2} L_l^{x_3,\text{I}} L_m^{x_4,\text{II}} \\ =& \braket{{\cal T} \hat{O}_{\text{I}} \hat{O}_{\text{II}}} + \begin{tikzpicture}[anchor=base,baseline=5pt]
\coordinate (A) at (0,0);
\coordinate (B) at (0,0.5);
\coordinate (C) at (0.5,0);
\coordinate (D) at (0.5,0.5);
\draw[line width=2.0pt] (A) -- (D);
\draw[line width=2.0pt] (B) -- (C);
\draw[line width=1.0pt,snake it] (D) to[out=-45,in=45] (C);
\fill (C) circle (2pt);
\fill[white] (C) circle (1pt);
\fill (D) circle (2pt);
\fill[white] (D) circle (1pt); \end{tikzpicture} \;. \end{align}
\end{widetext}
\end{document}
\begin{document}
\date{} \title{On the $\frac 1H$-variation of the divergence integral with respect to fractional Brownian motion with Hurst parameter $H<\frac12$ } \author{El Hassan Essaky \\ Universit\'{e} Cadi Ayyad\\
Facult\'{e} Poly-disciplinaire\\ Laboratoire de Mod\'{e}lisation et Combinatoire\\ D\'{e}partement de Math\'{e}matiques et d'Informatique\\ BP 4162, Safi, Maroc \\ Email: [email protected]\vspace*{0.2in}\\
David Nualart \thanks{D. Nualart is supported by the NSF grant DMS 1208625} \\
The University of Kansas\\ Department of Mathematics\\
Lawrence, Kansas 66045, USA\\ Email: [email protected]} \maketitle
\begin{abstract}
In this paper, we study the $\frac 1H$-variation of stochastic divergence integrals $X_t=\int_{0}^{t} u_{s} \delta B_{s}$ with respect to a fractional Brownian motion $B$ with Hurst parameter $H< \frac12$. Under suitable assumptions on the process $u$, we prove that the $\frac 1H$-variation of $X$ exists in $L^1(\Omega)$ and is equal to $e_H \int_0^T |u_s|^{\frac{1}{H}} ds$, where $e_H = \E\left[|B_1|^{\frac{1}{H}}\right]$. In the second part of the paper, we establish an integral representation for the fractional Bessel process $\|B_t\|$, where $B_t$ is a $d$-dimensional fractional Brownian motion with Hurst parameter $H<\frac 12$. Using a multidimensional version of the result on the $\frac 1H$-variation of divergence integrals, we prove that if $2dH^2>1$, then the divergence integral in the integral representation of the fractional Bessel process has a $\frac 1H$-variation equal to a multiple of the Lebesgue measure. \end{abstract}
\vskip0.2cm
\ni {\small {\bf{Key words:}}
Fractional Brownian motion, Malliavin calculus, Skorohod integral, Fractional Bessel processes.}
\vskip0.2cm
\noindent {\small {\bf{Mathematics Subject Classification:}} 60H05, 60H07, 60G18.}
\section{Introduction} The fractional Brownian motion (fBm for short) $B=\{B_{t} , t\in [0,T]\}$ with Hurst parameter $H\in (0,1)$ is a Gaussian self-similar process with stationary increments. This process was introduced by Kolmogorov \cite{kol} and studied by Mandelbrot and Van Ness in \cite{MN}, where a stochastic integral representation in terms of a standard Brownian motion was established. The parameter $H$ is called the Hurst index, after the statistical analysis developed by the climatologist Hurst \cite{hurst}. The self-similarity and stationary increments properties make the fBm an appropriate model for many applications in diverse fields from biology to finance. From the properties of the fBm, it follows that for every $\alpha >0$ $$
\E\left(|B_t-B_s|^{\alpha}\right) = \E\left(|B_1|^{\alpha}\right)|t-s|^{\alpha H}. $$ As a consequence of the Kolmogorov continuity theorem, we deduce that there exists a version of the fBm $B$ which is a continuous process and whose paths are $\gamma$-H\"{o}lder continuous for every $\gamma <H$. On the other hand, the fBm with Hurst parameter $H\neq \frac12$ is not a semimartingale, and therefore the It\^{o} approach to the construction of stochastic integrals with respect to fBm is not available. Two main approaches have been used in the literature to define stochastic integrals with respect to fBm with Hurst parameter $H$. Pathwise Riemann-Stieltjes stochastic integrals can be defined using Young's integral \cite{young} in the case $H>\frac 12$. When $H\in (\frac14, \frac12)$, the rough path analysis introduced by Lyons \cite{lyons} is a suitable method to construct pathwise stochastic integrals.
A second approach to develop a stochastic calculus with respect to the fBm is based on the techniques of Malliavin calculus. The divergence operator, which is the adjoint of the derivative operator, can be regarded as a stochastic integral, which coincides with the limit of Riemann sums constructed using the Wick product.
This idea has been developed by Decreusefond and \"{U}st\"{u}nel \cite{DU}, Carmona, Coutin and Montseny \cite{CC}, Al\`os, Mazet and Nualart \cite{AMN1, AMN2}, Al\`os and Nualart \cite{AN} and Hu \cite{hu}, among others. The integral constructed by this method has zero mean. Different versions of the It\^o formula have been proved for the divergence integral in these papers. In particular, if $H\in (\frac 14, 1)$ and $f\in C^2(\R)$ is a real-valued function satisfying some suitable growth condition, then, for each $t\in[0,T]$, the stochastic process $\{f'(B_s)\bf{1}_{[0,t]}(s), 0\le s \le T\}$ belongs to the domain of the divergence operator and \begin{equation}\label{ito1d} f(B_t) = f(0) + \displaystyle\int_0^t f'(B_s)\delta B_s
+ H\displaystyle\int_0^tf''(B_s) s^{2H-1} ds. \end{equation} For $H\in (0, \frac 14]$, this formula still holds, if the stochastic integral is interpreted as an extended divergence operator (see \cite{CN,LN}). A multidimensional version of the change of variable formula for the divergence integral has been recently proved by Hu, Jolis and Tindel in \cite{HMS}.
Using the self-similarity of fBm and the Ergodic Theorem one can prove that the fBm has a finite $\frac 1H$-variation on any interval $[0,t]$, equal to $e_H t$, where $e_H = \E\left[|B_1|^{\frac{1}{H}}\right]$ (see, for instance, Rogers \cite{rogers}). More precisely, we have, as $n$ tends to infinity \begin{equation}\label{res1}
\sum_{i=0}^{n-1}|B_{t(i+1)/n}-B_{it/n}| ^{\frac{1}{H}} \overset{L^{1}(\Omega)} {\longrightarrow} t\, e_H. \end{equation} This result has been generalized by Guerra and Nualart \cite{GN} to the case of divergence integrals with respect to the fBm with Hurst parameter $H\in (\frac12, 1)$.
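For orientation, note that since $B_1$ is a standard normal random variable, the constant $e_H$ has the explicit value
$$
e_H = \E\left[|B_1|^{\frac{1}{H}}\right]= \frac{2^{\frac{1}{2H}}}{\sqrt{\pi}}\,\Gamma\Big(\frac{1+H}{2H}\Big).
$$
In particular, $e_{1/2}=\E(B_1^2)=1$, and for $H=\frac12$ the convergence (\ref{res1}) reduces to the classical convergence of the quadratic variation of Brownian motion to $t$.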
The purpose of this paper is to study the $\frac 1H$-variation of divergence processes
$X=\{ X_t, t\in [0,T]\}$, where $X_t=\int_{0}^{t} u_{s} \delta B_{s}$, with respect to the fBm with Hurst parameter $H< \frac12$. Our main result, Theorem \ref{the2}, states that the $\frac 1H$-variation of $X$ exists in $L^1(\Omega)$ and is equal to $e_H \int_0^T |u_s|^{\frac{1}{H}} ds$, under suitable assumptions on the integrand $u$. This is done by proving an estimate of the $L^{p}$-norm of the Skorohod integral $\int_{a}^{b} u_{s} \delta B_{s}$, where $0\leq a\leq b\leq T$. Unlike the case $H>\frac 12$, here we need to impose H\"older continuity conditions on the process $u$ and its Malliavin derivative. We also derive an extension of this result to divergence integrals with respect to a $d$-dimensional fBm, where $d\ge 1$.
In the last part of the paper, we study the fractional Bessel process $R= \{R_t, t\in [0,T]\}$, defined by $R_t:= \|B_t\|$, where $B$ is a $d$-dimensional fractional Brownian motion with Hurst parameter $H<\frac 12$.
The following integral representation of this process \begin{equation} \label{rep1} R_t = \displaystyle\sum_{i=1}^{d}\int_ 0^t\dfrac{B_s^{(i)}}{R_s}\delta B_s^{(i)} + H(d-1)\int_ 0^t \dfrac{s^{2H-1}}{R_s}ds, \end{equation} has been derived in \cite{GN} when $H>\frac 12$. Completing the analysis initiated in \cite{HN}, we establish the representation (\ref{rep1}) in the case $H<\frac 12$, using a suitable notion of the extended domain of the divergence operator. Applying the results obtained in the first part of the paper and assuming $2dH^2>1$, we prove that the $\frac 1H$-variation of the divergence integral of the process $$\Theta_t:= \displaystyle\sum_{i=1}^{d}\int_ 0^t\dfrac{B_s^{(i)}}{R_s}\delta B_s^{(i)},$$
exists in $L^1(\Omega)$ and is equal to$ \displaystyle\int_{\R^d}\left[\displaystyle\int_ 0^T \left | \left\langle\dfrac{B_s}{R_s}, \xi\right \rangle \right|^{\frac{1}{H}}ds\right]\nu(d\xi), $ where $\nu$ is the normal distribution $N(0, I)$ on $\R^d$. We also discuss some other properties of the process $\{\Theta_t,t\in [0,T]\} $.
The paper is organized as follows. Section 2 contains some preliminaries on Malliavin calculus. In Section 3, we prove an $L^p$-estimate for the divergence integral with respect to fBm. Section 4 is devoted to the study of the $\frac 1H $-variation of the divergence integral with respect to fBm, for $H<\frac12$. Section 5 deals with the $\frac 1H $-variation of the divergence integral with respect to a $d$-dimensional fBm. An application to the fractional Bessel process is given in Section 6.
\section{Preliminaries on Malliavin calculus} Here we describe the elements from stochastic analysis that we will need in the paper. Let $B=\{B_{t} , t\in [0,T]\}$ be a fractional Brownian motion with Hurst parameter $H\in (0,1)$ defined in a complete probability space $(\Omega, \mathcal{F},P)$,
where $\mathcal{F}$ is generated by $B$. That is, $B$ is a centred Gaussian process with covariance function \begin{equation*}
R_H(t,s):=\E(B_{t}B_{s}) = \dfrac12 (t^{2H}+s^{2H}-|t-s|^{2H}), \end{equation*} for $s,t \in [0,T]$. We denote by $\EuFrak H$ the Hilbert space associated to $B$, defined as the closure of the linear space generated by the indicator functions $\{ \mathbf{1}_{[0,t]}, t\in [0,T]\} $, with respect to the inner product \begin{equation*} \langle \mathbf{1}_{[0,t]} , \mathbf{1}_{[0,s] } \rangle _{\EuFrak H} =R_H(t,s), \hskip0.5cm s,t\in [0,T]. \end{equation*} The mapping $\mathbf{1}_{[0,t]} \to B_{t}$ can be extended to a linear isometry between $\EuFrak H$ and the Gaussian space generated by $B$. We denote by $B(\varphi)= \int_0^T \varphi_t dB_t $ the image of an element $\varphi \in \EuFrak H$ by this isometry.
We will first introduce some elements of the Malliavin calculus associated with $B$. We refer to \cite{nualart} for a detailed account of these notions. For a smooth and cylindrical random variable $F=f\left( B(\varphi _{1}), \ldots , B(\varphi_{n})\right) $, with $\varphi_{i} \in \EuFrak H$ and $f\in C_{b}^{\infty}(\R^{n})$ ($f$ and all its partial derivatives are bounded), the derivative of $F$ is the $\EuFrak H$-valued random variable defined by \begin{equation*} D F =\sum_{j=1}^{n}\frac{\partial f}{\partial x_{j}}(B(\varphi_{1}),\dots,B( \varphi_{n}))\varphi_{j}. \end{equation*} For any integer $k\ge 1$ and any real number $p\ge 1$ we denote by $\mathbb{D}^{k,p}$ the Sobolev space defined as the closure of the space of smooth and cylindrical random variables with respect to the norm \begin{equation*}
\Vert F\Vert_{k,p}^{p}=\E(|F|^{p})+\sum_{j=1}^{k} \E (\Vert D^{j}F\Vert_{ \EuFrak H ^{\otimes j} }^{p}). \end{equation*} Similarly, for a given Hilbert space $V$ we can define Sobolev spaces of $V$-valued random variables $\mathbb{D}^{k,p}(V)$.
The divergence operator $\delta$ is introduced as the adjoint of the derivative operator. More precisely, an element $u\in L^{2}(\Omega;\EuFrak H)$ belongs to the domain of $\delta$, denoted by ${\rm Dom}\, \delta$, if there exists a constant $c_u$ depending on $u$ such that \begin{equation*}
|\E(\langle D F,u\rangle_{\EuFrak H})|\leq c_u\Vert F\Vert_{2}, \end{equation*} for any smooth random variable $F\in \mathcal{S}$. For any $u\in {\rm Dom}\, \delta$, $\delta(u)$ is the element of $L^{2}(\Omega)$ given by the duality relationship \begin{equation*} \E(\delta (u)F)=\E(\langle D F,u\rangle_{\EuFrak H}), \end{equation*} for any $F\in \mathbb{D}^{1,2}$. We will make use of the notation $\delta (u)=\int_{0}^{T}u_{s}\delta B_{s}$, and we call $\delta(u)$ the divergence integral of $u$ with respect to the fBm $B$. Note that $\E(\delta ( u ) )=0$. On the other hand, the space $\mathbb{D}^{1,2}(\EuFrak H)$ is included in the domain of $\delta $, and for $u\in \mathbb{D}^{1,2}(\EuFrak H)$, the variance of $\delta(u)$ is given by \begin{equation*} \E(\delta (u)^{2})=\E(\Vert u\Vert_{\EuFrak H}^{2})+\E(\langle D u,(D u)^{\ast}\rangle_{\EuFrak H\otimes\EuFrak H} ), \end{equation*} where $(D u)^{\ast}$ is the adjoint of $D u$ in the Hilbert space $\EuFrak H\otimes\EuFrak H$. By Meyer's inequalities (see Nualart \cite{nualart}), for all $p>1$, the divergence operator is continuous from $ \mathbb{D}^{1,p}(\EuFrak H)$ into $ L^p(\Omega)$, that is, \begin{equation}\label{meyer}
\E(|\delta (u)|^{p})\leq C_{p}\left( \E(\Vert u\Vert_{\EuFrak H }^{p})+\E(\Vert D u\Vert_{\EuFrak H\otimes\EuFrak H}^{p})\right). \end{equation}
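Observe that for $p=2$ this estimate is consistent with the variance formula above, since by the Cauchy--Schwarz inequality $|\langle D u,(D u)^{\ast}\rangle_{\EuFrak H\otimes\EuFrak H}|\leq \Vert Du\Vert^{2}_{\EuFrak H\otimes\EuFrak H}$, and therefore $\E(\delta (u)^{2})\leq \E(\Vert u\Vert_{\EuFrak H}^{2})+\E(\Vert D u\Vert_{\EuFrak H\otimes\EuFrak H}^{2})$.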
We will make use of the property \begin{equation}\label{p1} \delta (Fu)= F\delta (u)-\langle D F,u\rangle_{\EuFrak H}, \end{equation} which holds if $F\in \mathbb{D}^{1,2}$, $u\in {\rm Dom} \, \delta$ and the right-hand side is square integrable. We have also the commutativity relationship between $ D $ and $\delta $ \begin{equation*} D \delta (u)= u + \int_{0}^{T} D u_{s}\delta B_{s}, \end{equation*} which holds if $u\in \mathbb{D}^{1,2}(\EuFrak H)$ and the $\EuFrak H$-valued process $\{D u_s, s\in [0,T]\}$ belongs to the domain of $\delta $.
The covariance of the fractional Brownian motion can be written as
$$ R_H(t,s) = \int_0^{t\wedge s} K_H(t,u)K_H(s,u)du, $$ where $K_H(t,s)$ is a square integrable kernel, defined for $0<s<t<T$. In what follows, we assume that $0<H <\frac12$. In this case, this kernel has the following expression $$ K_H(t,s)= c_H\left[ \left(\frac{t}{s}\right)^{H-\frac12}(t-s)^{H-\frac12} -(H-\frac12)s^{\frac12-H}\int_s^t u^{H-\frac32}(u-s)^{H-\frac12}du\right], $$ with $c_H = \left(\frac{2H}{(1-2H)\beta(1-2H, H+\frac12)}\right)^{\frac12}$ and $\beta(x,y):= \displaystyle\int_ 0^1 t^{x-1}(1-t)^{y-1}dt$ for $x, y>0$. Notice also that $$ \frac{\partial K_H}{\partial t}(t,s) = c_H (H-\frac12)\left(\frac{t}{s}\right)^{H-\frac12}(t-s)^{H-\frac32}. $$
\left|\frac{\partial K_H}{\partial t}(t,s)\right| \leq c_H (t-s)^{H-\frac32}, \end{equation} and \begin{equation}\label{est2}
|K_H(t,s)|\leq d_H \left((t-s)^{H-\frac12} + s^{H-\frac 12} \right), \end{equation} for some constant $d_H$.
Let $\mathcal{E}$ be the linear span of the indicator functions on $[0,T]$. Consider the linear operator $K_H^*$ from ${\mathcal{E}}$ to $L^2([0, T])$ defined by \begin{equation}\label{est0} K_H^*(\varphi)(s) = K_H(T,s)\varphi(s)+ \int_s^T (\varphi(t)-\varphi(s))\dfrac{\partial K_H}{\partial t}(t,s)dt. \end{equation}
Notice that $$ K_H^*(\bf{1}_{[0, t]})(s) = K_H(t,s)\bf{1}_{[0, t]}(s). $$ The operator $K_H^*$ can be expressed in terms of fractional derivatives as follows $$ (K_H^*\varphi)(s)= c_H \Gamma(H+\frac12)s^{\frac12 -H}(D_{T-}^{\frac12 -H} u^{H -\frac12}\varphi(u))(s). $$ In this expression, $D_{t-}^{\frac 12 -H}$ denotes the left-sided fractional derivative operator, given by $$ D_{t-}^{\frac 12 -H }f(s):= \frac{1}{\Gamma(\frac 12+H )}\left(\dfrac{f(t)}{(t-s)^{\frac 12-H}}+\left(\frac 12 -H\right)\displaystyle\int_s^t\dfrac{f(s)-f(y)}{(y-s)^{\frac 32-H}}dy\right), $$ for almost all $s\in (0,t)$ and for a function $f$ in the image of $L^p([0,t])$, $p\ge 1$, by the left-sided fractional operator $I^{\frac 12-H}_{t-}$ (see \cite{SK} for more details). As a consequence, $C^{\gamma}([0,T])\subset \EuFrak H\subset L^2([0,T])$ for every $\gamma >\frac12 -H$. It should be noted that the operator $K_H^*$ is an isometry between the Hilbert space $\EuFrak H$ and $L^2([0,T])$. That is, for every $\varphi, \psi\in\EuFrak H$, \begin{equation} \label{equ1} \langle \varphi, \psi \rangle_\EuFrak H= \langle K_H^*\varphi, K_H^* \psi \rangle_{L^2([0,T])}. \end{equation}
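As a quick check of (\ref{equ1}), taking $\varphi=\bf{1}_{[0, t]}$ and $\psi=\bf{1}_{[0, s]}$ and using the identity $K_H^*(\bf{1}_{[0, t]})= K_H(t,\cdot)\bf{1}_{[0, t]}$ recovers the factorization of the covariance stated above:
$$
\langle \bf{1}_{[0, t]}, \bf{1}_{[0, s]}\rangle_{\EuFrak H} = \int_0^{t\wedge s} K_H(t,u)K_H(s,u)\,du = R_H(t,s).
$$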
Consider the following seminorm on the space ${\mathcal{E}}$ \begin{equation}\label{iso} \begin{array}{ll}
\| \varphi\|_ K^2 = \displaystyle\int_ 0^T &\varphi^2(s)[(T-s)^{2H-1}+ s^{2H-1}]ds \\ & + \displaystyle\int_0^T\left(\displaystyle\int_s^T |\varphi(t)-\varphi(s)|(t-s)^{H-\frac32}dt\right)^2 ds. \end{array} \end{equation} We denote by $\EuFrak H_K$ the completion of ${\mathcal{E}}$ with respect to this seminorm. From the estimates (\ref{est1A}) and (\ref{est2}), there exists a constant $k_H$ such that for any $\varphi \in \EuFrak H_K$, \begin{equation}\label{est01}
\| \varphi\|^2_{\EuFrak H} =\|K^*_{H}(\varphi)\|^2_{L^2([0,T])}
\leq k_H\| \varphi\|^2_{ K} . \end{equation} As a consequence, the space $\EuFrak H_K$ is continuously embedded in $\EuFrak H$. This implies also that $\mathbb{D}^{1, 2}(\EuFrak H_K) \subset \mathbb{D}^{1,2}(\EuFrak H) \subset {\rm Dom}\, \delta$.
One can show also that $\EuFrak H = I_{T-}^{\frac12 -H}(L^2([0,T]))$ (see \cite{DU}). Hence, the space $\EuFrak H$ is too small for some purposes. For instance, it has been proved in \cite{CN} that the trajectories of the fBm $B$ belong to $\EuFrak H$ if and only if $H>\frac14$. This creates difficulties when defining the divergence $\delta(u)$ of a stochastic process whose trajectories do not belong to $\EuFrak H$, for example, if $u_t=f(B_t)$ and $H<\frac 14$, because the domain of $\delta$ is included in $L^{2}(\Omega; \EuFrak H)$. To overcome this difficulty, an extended domain of the divergence operator has been introduced in \cite{CN}. The main ingredient in the definition of this extended domain is the extension of the inner product $\langle \varphi, \psi \rangle_\EuFrak H$ to the case where $\psi \in \mathcal{E}$ and $\varphi \in L^\beta([0,T])$ for some $\beta >\frac 1{2H}$ (see \cite{LN}). More precisely, for $\varphi \in L^\beta([0,T])$ and $\psi = \sum_{j=1}^{m}b_j\bf{1}_{[0,t_j]} \in \mathcal{E}$ we set \begin{equation} \label{ext} \langle\varphi, \psi \rangle_\EuFrak H = \displaystyle\sum_{j=1}^{m}b_j\displaystyle\int_0^T\varphi_s \dfrac{\partial R}{\partial s}(s, t_j)ds. \end{equation}
This expression coincides with the inner product in $\EuFrak H$ if $\varphi \in \EuFrak H$, and it is well defined because, denoting by $\alpha$ the conjugate exponent of $\beta$, \[
|\langle\varphi, \bf{1}_{[0,t]} \rangle_\EuFrak H|
= \left|\int_0^T\varphi_s \dfrac{\partial R}{\partial s}(s, t)ds \right|
\leq \|\varphi\|_{L^\beta([0,T])} \sup_{0\leq t\leq T} \left(\int_0^T\left|\dfrac{\partial R}{\partial s}(s, t)\right|^{\alpha}ds\right)^{\frac{1}{\alpha}}<\infty. \]
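In particular, taking $\varphi=\bf{1}_{[0, r]}$ in (\ref{ext}) gives back the original inner product, since
$$
\int_0^T \bf{1}_{[0, r]}(s)\,\dfrac{\partial R}{\partial s}(s, t)\,ds = R_H(r,t)-R_H(0,t)=R_H(r,t)= \langle \bf{1}_{[0, r]}, \bf{1}_{[0, t]}\rangle_{\EuFrak H}.
$$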
We will make use of the following notation: for $(a, b)\in\R^2$, $a\wedge b = \min(a, b)$ and $a\vee b = \max(a, b)$.
\section{$L^p$-estimate of divergence integrals with respect to fBm}
Let $V$ be a given Hilbert space. We introduce the following hypothesis for a $V$-valued stochastic process $u=\{ u_t, t\in [0,T]\}$, for some $p\ge 2$.
\noindent \textbf{Hypothesis} $\mathbf{(A.1)}_p$ \textit{ Let $p\ge 2$. The process $u$ satisfies $\displaystyle\sup_{0\leq s\leq T}\Vert u_s\Vert_{L^{p}(\Omega; V)} <\infty $ and there exist constants $L>0$, $0<\alpha <\frac12$ and $\gamma >\frac12 -H$ such that \begin{equation*} \label{A1}
\Vert u_t -u_s\Vert_{L^{p}(\Omega; V)}\leq L s^{-\alpha }|t-s|^{\gamma}, \end{equation*}
for all $0<s\leq t \leq T$. }
For any $ 0\le a< b \le T$, we will make use of the notation \[
\|u\|_{p,a,b} = \sup_{a\le s\le b} \|u_s\| _{L^{p}(\Omega; V)}. \]
The following lemma is a crucial ingredient to establish the
$L^p$-estimates for the divergence integral with respect to fBm.
\begin{lemma}\label{lem1} Let $u=\{u_t, 0\leq t\leq T\}$ be a process with values in a Hilbert space $V$, satisfying assumption $\mathbf{(A.1)}_p$ for some $p\geq 2$. Then, there exists a positive constant $C$ depending on $H$, $\gamma$ and $p$ such that for every $0<a\leq b \le T$ \begin{equation} \label{est1}
\E \left( \| u {\mathbf 1}_{[a,b]} \| ^p_{\EuFrak H \otimes V} \right) \leq C\left(\| u\|_{p,a,b}^p(b-a)^{pH}+ L^pa^{-p\alpha }(b-a)^{p\gamma +pH}\right). \end{equation} Moreover if $a=0$, then \begin{equation} \label{est1a}
\E \left( \|u {\mathbf 1}_{[0,b]} \|^p_{\EuFrak H \otimes V} \right) \leq C\left(\| u\|_{p,0,b}^p b^{pH}+L^pb^{-p\alpha +p\gamma+pH}\right). \end{equation} \end{lemma}
\bop. Suppose first that $a> 0$. By equalities (\ref{equ1}) and (\ref{est0}) we obtain \begin{eqnarray*}
&& \E \left( \| u {\mathbf 1}_{[a,b]} \| ^p_{\EuFrak H \otimes V} \right)
= \E \left( \| K_H^*(u \bf{1}_{[a,b]} ) \| ^p_{L^2([0,T];V)} \right) \\
& &= \E \left( \left\| K_H(T,s)u _s{\mathbf 1}_{[a,b]}(s) +\displaystyle\int_{s}^T\Big(u_t{\mathbf 1}_{[a,b]}(t)-u_{s} {\mathbf 1}_{[a,b]}(s)\Big)\dfrac{\partial K_H}{\partial t}(t,s)dt \right \|^p_{L^2([0,T];V)} \right). \end{eqnarray*}
Consider the decomposition \begin{eqnarray*}
&&\displaystyle\int_s^T\Big(u_t{\mathbf 1}_{[a,b]}(t)-u_s{\mathbf 1}_{[a,b]}(s)\Big)\dfrac{\partial K_H}{\partial t}(t,s)dt
= \left[\displaystyle\int_s^b(u_t -u_s)\dfrac{\partial K_H}{\partial t}(t,s)dt\right]{\mathbf 1}_{[a,b]}(s) \\
&&\qquad +\left[-\displaystyle\int_b^T u_s\dfrac{\partial K_H}{\partial t}(t,s)dt\right]{\mathbf 1}_{[a,b]}(s)
+\left[ \displaystyle\int_a^b u_t\dfrac{\partial K_H}{\partial t}(t,s)dt\right]{\mathbf 1}_{[0,a]}(s) \\ && \qquad := I_1 +I_2+I_3. \end{eqnarray*} Therefore \[
\E \left( \| u {\mathbf 1}_{[a,b]} \| ^p_{\EuFrak H \otimes V} \right) \le C\sum_{i=0}^3 A_i,
\]
where $A_0= \E\left[\| K_H(T,\cdot)u \bf{1}_{[a,b]} \|^p_{L^2([0,T]; V)} \right]$ and for $i=1,2,3$, $A_i= \E \left[\| I_i\|^p_{L^2([0,T];V)} \right]$. Let us now estimate the four terms $A_i$, $i=0,1,2,3$, in the previous inequality. By estimate (\ref{est2}), Minkowski inequality and Hypothesis $\mathbf{(A.1)}_p$ we obtain \begin{eqnarray}\notag A_0
& \leq & C \E\left(\displaystyle\int_a^b [(T-s)^{2H-1}+ s^{2H-1}]\Vert u_s\Vert^2_Vds\right)^{\frac{p}{2}} \\ \notag
&\leq & C\left(\displaystyle\int_a^b [(T-s)^{2H-1}+s^{2H-1}]\| u_s\|^2_{{L^{p}(\Omega; V)}}ds\right)^{\frac{p}{2}} \\
& \leq & C \| u \|_{p,a,b}^p (b-a)^{pH},\label{eqA0} \end{eqnarray} where we have used that $(T-a)^{2H}\leq (T-b)^{2H} +(b-a)^{2H}$ and $b^{2H} -a^{2H} \le (b-a)^{2H}$. Using Minkowski inequality, Hypothesis $\mathbf{(A.1)}_p$ and estimate (\ref{est1A}), it follows that \begin{eqnarray} A_1 \notag
& \leq & \left(\displaystyle\int_a^b \left\Vert \displaystyle\int_s^b(u_t -u_s)\dfrac{\partial K_H}{\partial t}(t,s)dt\right\Vert^2_{{L^{p}(\Omega; V)}}ds\right)^{\frac{p}{2}} \\ \notag
& \leq & \left(\displaystyle\int_a^b \left( \displaystyle\int_s^b\Vert u_t -u_s\Vert_{{L^{p}(\Omega; V)}}\left|\dfrac{\partial K_H}{\partial t}(t,s)\right|dt\right)^2ds \right)^{\frac{p}{2}} \\
&\leq & C L^p\left(\displaystyle\int_a^b \left( \displaystyle\int_s^b s^{-\alpha }(t-s)^{\gamma+H-\frac32}dt\right)^2ds\right)^{\frac{p}{2}}.
\label{eqA00} \end{eqnarray} We have \begin{eqnarray*}\notag
\displaystyle\int_a^b \left( \displaystyle\int_s^b s^{-\alpha}(t-s)^{\gamma+H-\frac32}dt\right)^2ds \notag
&= &\dfrac{1}{(\gamma +H-\frac12)^2}\displaystyle\int_a^bs^{-2\alpha }(b-s)^{2\gamma +2H-1}ds \\ \notag
&\le & \dfrac{1}{(\gamma +H-\frac12)^2 (2\gamma +2H)} a^{-2\alpha } (b-a)^{2\gamma +2H}. \end{eqnarray*} Substituting this expression into inequality (\ref{eqA00}), yields \begin{equation}\label{eqA1} A_1
\leq C L^p a^{-p\alpha } (b-a)^{p\gamma +pH}. \end{equation} By the same arguments as above, it follows from Minkowski inequality and Hypothesis $\mathbf{(A.1)}_p$ that \begin{eqnarray} A_2
& =& \notag
\left(\displaystyle\int_a^b \left\Vert\displaystyle\int_b^T u_s\dfrac{\partial K_H}{\partial t}(t,s)dt\right\Vert^2_{{L^{p}(\Omega; V)}}ds\right)^{\frac{p}{2}} \\ \notag
& \leq & C \left(\displaystyle\int_a^b\left(\displaystyle\int_b^T \Vert u_s\Vert_{{L^{p}(\Omega; V)}}(t-s)^{H-\frac32}dt\right)^2ds\right)^{\frac{p}{2}} \\ \notag
&\leq & C\|u \|_{p,a,b}^p \left(\displaystyle\int_a^b\left((T-s)^{H-\frac12}-(b-s)^{H-\frac12}\right)^2ds\right)^{\frac{p}{2}} \\ \notag
&\leq &C\|u \|_{p,a,b}^p\Big((T-a)^{2H}-(T-b)^{2H}+ (b-a)^{2H}\Big)^{\frac{p}{2}} \\ \label{eqA2}
&\leq & C\| u \|_{p,a,b}^p(b-a)^{pH}, \end{eqnarray} where we have used that $(T-a)^{2H}-(T-b)^{2H} \leq (b-a)^{2H}$.\\ Finally, for the term $A_3$, we obtain in the same way \begin{eqnarray}\notag A_3 \notag
&\leq & \left(\displaystyle\int_0^a \left(\displaystyle\int_a^b \Vert u_t\Vert_{{L^{p}(\Omega; V)}}|\dfrac{\partial K_H}{\partial t}(t,s)|dt\right)^2ds\right)^{\frac{p}{2}} \\
& \leq & C\| u \|_{p,a,b}^p\left(\displaystyle\int_0^a \left(\displaystyle\int_a^b (t-s)^{H-\frac{3}{2}}dt\right)^2ds\right)^{\frac{p}{2}} \notag \\
& \le & C\| u \|_{p,a,b}^p\left(\displaystyle\int_0^a \left((a-s)^{H-\frac12} -(b-s)^{H-\frac12}\right)^2ds\right)^{\frac{p}{2}} \notag \\
& \le & C \| u \|_{p,a,b}^p (b-a) ^{pH}. \label{eqA3} \end{eqnarray} For the last inequality we have used the following computations \begin{eqnarray*} & & \displaystyle\int_0^a \left((a-s)^{H-\frac12} -(b-s)^{H-\frac12}\right)^2ds \\ && \quad = \frac 1{2H} \left( b^{2H} + a^{2H} -(b-a) ^{2H} \right)
-2\displaystyle\int_0^a (a-s)^{H-\frac12}(b-s)^{H-\frac12}ds \\
& & \quad \leq \frac 1{2H} \left( b^{2H} + a^{2H} -(b-a) ^{2H} \right) -2\displaystyle\int_0^a (b-s)^{2H-1}ds \\
&& \quad \leq \frac 1{2H} \left( (b-a) ^{2H} - (b^{2H} -a^{2H}) \right) \le \frac 1{2H} (b-a)^{2H}. \end{eqnarray*}
The inequality (\ref{est1}) follows from the estimates (\ref{eqA0}), (\ref{eqA1}), (\ref{eqA2}) and (\ref{eqA3}). The case $a=0$ can be proved using similar arguments. The proof of Lemma \ref{lem1} is then completed.
\eop
We are now in a position to prove the following theorem which gives an estimate of the $L^{p}$-norm of the Skorohod integral of a process $u$ with respect to a fBm with Hurst parameter $H\in (0,\frac 12)$. We first need the following assumption on the process $u$.
\noindent \textbf{Hypothesis} $\mathbf{(A.2)}_p$ \textit{ Let $u\in \mathbb{D}^{1, 2}(\EuFrak H)$ be a real-valued stochastic process, which satisfies Hypothesis $\mathbf{(A.1)}_p$ with constants $L_u$, $ \alpha_1$ and $\gamma$ for a fixed $p\geq 2$. We also assume that the $\EuFrak H$-valued process $\{Du_s, s\in [0,T]\}$ satisfies Hypothesis $\mathbf{(A.1)}_p$ with constants $L_{Du}$, $ \alpha_2$ and $\gamma$ for the same value of $p$. }
Hypothesis $\mathbf{(A.2)}_p$ means that $u_s$ and $Du_s$ have bounded $L^p$ norms in $[0,T]$ and satisfy \begin{eqnarray}
\Vert u_t -u_s\Vert_{L^{p}(\Omega)}&\leq& L_us^{-\alpha _1}|t-s|^{\gamma} \label{assump1}
\\
\Vert Du_t -Du_s\Vert_{L^{p}(\Omega; \EuFrak H)}&\leq & L_{Du}s^{-\alpha _2}|t-s|^{\gamma}, \label{assump2} \end{eqnarray} for all $0<s\leq t\leq T$.
\begin{theorem}\label{the1} Suppose that $u\in \mathbb{D}^{1, 2}(\EuFrak H)$ is a stochastic process satisfying Hypothesis $\mathbf{(A.2)}_p$ for some $p\geq 2$. Let $0< a\leq b\leq T$. Then, there exists a positive constant $C$ depending on $H$, $\gamma$ and $p$ such that
\begin{eqnarray} \notag
& & \E\left( \left |\displaystyle\int_a^b u_s \delta B_s\right|^p \right) \\ & \leq &
C\left((\| u \|_{p,a,b}^p+\| Du \|_{p,a,b}^p)(b-a)^{pH}+ (L_u ^pa^{-p\alpha_1}+L_{Du}^pa^{-p\alpha_2})(b-a)^{p\gamma +pH}\right).\qquad \label{ineq1} \end{eqnarray} If $a=0$, then
\begin{eqnarray}
\E \left( \left|\displaystyle\int_0^b u_s \delta B_s\right|^p \right) \leq C\left((\|u \|_{p,0,b}^p+\|Du \|_{p,0,b}^p) b^{pH}+(L_u^pb^{-p\alpha_1}+L_{Du}^pb^{-p\alpha_2})b^{p\gamma+pH}\right). \label{ineq2} \end{eqnarray} \end{theorem} \bop. By inequality (\ref{meyer}), we have $$
\E \left( \left|\displaystyle\int_a^b u_s \delta B_s\right|^p \right) \leq C_p\left ( \E (\| u {\mathbf 1}_{[a,b]} \| ^p_{ \EuFrak H } )+\E ( \| D_s(u_t\bf{1}_{[a,b]}(t)) \| ^p_{ \EuFrak H \otimes \EuFrak H})\right). $$ The first and the second terms of the above inequality can be estimated applying Lemma \ref{lem1} to the processes $u$ and $Du$, with $V= \mathbb{R}$ and $V=\EuFrak H$, respectively. Theorem \ref{the1} is then proved.\eop
\begin{remark} If we suppose that $\alpha_1= \alpha_2=0$ in Hypothesis $\mathbf{(A.2)}_p$, that is, $u$ and $Du$ are H\"older continuous in $L^p$ on $[0,T]$, then estimate (\ref{ineq1}) in Theorem \ref{the1} can be written as $$
\E\left( \left| \displaystyle\int_a^b u_s \delta B_s\right|^p \right) \leq C\Vert u\Vert_{1,p,\gamma}^p(b-a)^{pH}, $$ where $$
\Vert u\Vert_{1,p,\gamma}=\displaystyle\sup_{0\leq s<t\leq T}\dfrac{\Vert u_t -u_s\Vert_{1,p}}{|t-s|^{\gamma}}+\displaystyle\sup_{0\leq s\leq T} \Vert u_s\Vert_{1,p}. $$ \end{remark}
\section{The $\frac{1}{H}$-variation of divergence integral with respect to fBm} Fix $q\geq 1$ and $T>0$ and set $t_i^n:= \frac{iT}{n}$, where $n$ is a positive integer and $i=0,1,2,\dots,n$. We need the following definition. \begin{definition} Let $X$ be a given stochastic process defined in the complete probability space $(\Omega, {\cal F}, P)$. Let $V_n^q(X)$ be the random variable defined by $$
V_n^q(X):= \sum_{i=0}^{n-1}|\Delta_i^n X|^q, $$ where $\Delta_i^n X := X_{t^n_{i+1}}-X_{t^n_{i}}$. We define the $q$-variation of $X$ as the limit in $L^1(\Omega)$, as $n$ goes to infinity, of $V_n^q(X)$ if this limit exists. \end{definition}
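For instance, if $X$ has Lipschitz trajectories with Lipschitz constant $L$, then $V_n^q(X)\leq L^q T^q n^{1-q}$, which converges to zero for every $q>1$; a nontrivial $q$-variation with $q>1$, as in (\ref{res1}), is therefore a manifestation of the roughness of the trajectories.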
As in the last section we assume that $H\in (0, \frac12)$. In this section, we need the following assumption on the process $u$.
\noindent \textbf{Hypothesis} $\mathbf{(A.3)}$ \textit{
Let $u\in \mathbb{D}^{1, 2}(\EuFrak H)$ be a real-valued stochastic process which is bounded in $L^q(\Omega)$ for some $q>\frac{1}{H}$ and satisfies the H\"older continuity property (\ref{assump1}) with $p=\frac{1}{H}$, that is \begin{eqnarray}
\Vert u_t -u_s\Vert_{L^{\frac{1}{H}}(\Omega)}&\leq& L_us^{-\alpha _1}|t-s|^{\gamma}. \label{assump11} \end{eqnarray} Suppose also that the $\EuFrak H$-valued process $\{Du_s, s\in [0,T]\}$ is bounded in $L^{\frac{1}{H}}(\Omega; \EuFrak H)$ and satisfies the H\"older continuity property (\ref{assump2}) with $p=\frac{1}{H}$, that is \begin{eqnarray}
\Vert Du_t -Du_s\Vert_{L^{\frac{1}{H}}(\Omega; \EuFrak H)}&\leq& L_{Du}s^{-\alpha _2}|t-s|^{\gamma}. \label{assump21} \end{eqnarray} Moreover, we assume that the derivative $\{D_su_t, s,t\in [0,T]\}$ satisfies \begin{equation}\label{assump3} \displaystyle\sup_{0\leq s\leq T}\Vert D_su_t\Vert_{L^{\frac{1}{H}}(\Omega)} \leq K t^{-\alpha_3}, \end{equation} for every $t\in(0, T]$ and for some constants $0<\alpha_3<2H$ and $K>0$.}
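Although it will not be needed in the sequel, let us point out a simple class of examples. If $H>\frac14$ and $u_s=f(B_s)$ with $f\in C_b^2(\R)$, then $u$ satisfies Hypothesis $\mathbf{(A.3)}$. Indeed, $u$ is bounded, and since $B_t-B_s$ is centered Gaussian with variance $|t-s|^{2H}$,
$$
\Vert f(B_t)-f(B_s)\Vert_{L^{\frac{1}{H}}(\Omega)}\leq \Vert f'\Vert_{\infty}\,\Vert B_t-B_s\Vert_{L^{\frac{1}{H}}(\Omega)}= \Vert f'\Vert_{\infty}\, e_H^{H}\,|t-s|^{H},
$$
so (\ref{assump11}) holds with $\gamma=H>\frac12-H$ (and any $\alpha_1\in(0,\frac12)$, after enlarging $L_u$). Moreover, $Du_t=f'(B_t)\bf{1}_{[0,t]}$, so that, using $\Vert \bf{1}_{[0,s]}\Vert_{\EuFrak H}=s^{H}$ and $\Vert \bf{1}_{[s,t]}\Vert_{\EuFrak H}=(t-s)^{H}$,
$$
\Vert Du_t-Du_s\Vert_{L^{\frac{1}{H}}(\Omega; \EuFrak H)}\leq \Vert f''\Vert_{\infty}\, e_H^{H}\,T^{H}\,|t-s|^{H}+\Vert f'\Vert_{\infty}\,(t-s)^{H},
$$
which gives (\ref{assump21}), while $\sup_{0\leq s\leq T}\Vert D_su_t\Vert_{L^{\frac{1}{H}}(\Omega)}\leq \Vert f'\Vert_{\infty}$ yields (\ref{assump3}).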
Consider the indefinite divergence integral of $u$ with respect to the fBm $B$, given by \begin{equation} \label{equ2} X_t = \int_0^t u_s \delta B_s := \delta(u\bf{1}_{[0,t]}). \end{equation} The main result of this section is the following theorem.
\begin{theorem}\label{the2} Suppose that $u\in \mathbb{D}^{1, 2}(\EuFrak H)$ is a stochastic process satisfying Hypothesis $\bf{(A.3)}$, and consider the divergence integral process $X$ given by (\ref{equ2}). Then, we have \[
V_n^{\frac{1}{H}}(X) \overset{L^{1}(\Omega)} {\longrightarrow} e_H \displaystyle\int_ 0^T|u_s|^{\frac{1}{H}}ds, \]
as $n$ tends to infinity, where $e_H = \E \left[|B_1|^{\frac{1}{H}}\right]$. \end{theorem} \bop. We need to show that the expression \[
F_n:= \E\left(\left|\sum_{i=0}^{n-1}\left|\displaystyle\int_{t_i^n}^{t_{i+1}^n} u_s \delta B_s\right|^{\frac{1}{H}}-e_H \displaystyle\int_0^T |u_s|^{\frac{1}{H}}ds\right|\right), \] converges to zero as $n$ tends to infinity. Using (\ref{p1}), we can write \begin{equation}\label{decom} \begin{array}{ll} \displaystyle\int_{t_i^n}^{t_{i+1}^n} u_s \delta B_s
&=\displaystyle\int_{t_i^n}^{t_{i+1}^n} (u_s-u_{t_i^n}) \delta B_s + \displaystyle\int_{t_i^n}^{t_{i+1}^n} u_{t_{i}^n}\delta B_s \\ & =\displaystyle\int_{t_i^n}^{t_{i+1}^n} (u_s-u_{t_i^n}) \delta B_s-\langle Du_{t_i^n}, \bf{1}_{[t_{i}^n, t_{i+1}^n]}\rangle_{{\EuFrak H}} + u_{t_{i}^n}(B_{t_{i+1}^n}-B_{t_{i}^n}). \\ & := A_{i}^{1,n} -A_{i}^{2,n} +A_{i}^{3,n}. \end{array} \end{equation} By the triangular inequality, we obtain \begin{equation}
F_n \le \E\left(\sum_{i=0}^{n-1}\left| |A_{i}^{1,n} -A_{i}^{2,n} +A_{i}^{3,n} |^{\frac{1}{H}} - |A_{i}^{3,n} |^{\frac{1}{H}}\right|\right) +D_n, \label{eq45} \end{equation} where \[
D_n=\E\left(\left|
\sum_{i=0}^{n-1} |A_{i}^{3,n} |^{\frac{1}{H}}-e_H \displaystyle\int_0^T |u_s|^{\frac{1}{H}}ds\right|\right). \] Using the mean value theorem and H\"older inequality, we can write \begin{eqnarray} \notag
& &\E\left( \sum_{i=0}^{n-1}\left| |A_{i}^{1,n} -A_{i}^{2,n} +A_{i}^{3,n} |^{\frac{1}{H}} - |A_{i}^{3,n} |^{\frac{1}{H}}\right|\right) \\ \notag
&& \leq \frac{1}{H}\E\left(\sum_{i=0}^{n-1} |A_{i}^{1,n} -A_{i}^{2,n} |\left[ |A_{i}^{1,n} -A_{i}^{2,n}+A_{i}^{3,n}|^{\frac{1}{H}-1} + |A_{i}^{3,n}|^{\frac{1}{H}-1}\right]\right) \\ \notag & & \leq
C\left[ \E\left(\sum_{i=0}^{n-1} |A_{i}^{1,n} -A_{i}^{2,n} |^{\frac{1}{H}}\right)\right]^H \\
&& \qquad \qquad \times \left[\E\left(\sum_{i=0}^{n-1} |A_{i}^{1,n} -A_{i}^{2,n}+A_{i}^{3,n} |^{\frac{1}{H}}\right)
+\E\left(\sum_{i=0}^{n-1} |A_{i}^{3,n} |^{\frac{1}{H}}\right)\right]^{1-H}. \label{eq451} \end{eqnarray} Substituting (\ref{eq451}) into (\ref{eq45}) yields \[ F_n \le CA_{n}^H(B_n + C_n)^{1-H} + D_n, \] where \begin{eqnarray*}
A_n&=&\E\left(\sum_{i=0}^{n-1} |A_{i}^{1,n} -A_{i}^{2,n} |^{\frac{1}{H}}\right), \\
B_n &=& \E\left(\sum_{i=0}^{n-1} |A_{i}^{1,n} -A_{i}^{2,n}+A_{i}^{3,n} |^{\frac{1}{H}}\right),\\
C_n &=&\E\left(\sum_{i=0}^{n-1} |A_{i}^{3,n} |^{\frac{1}{H}}\right). \end{eqnarray*} The proof will be divided into several steps. Along the proof, $C$ will denote a generic constant, which may vary from line to line and may depend on the processes $u$ and $Du$ and the different parameters appearing in the computations, but it is independent of $n$. \\
\it{Step 1.} We first prove that $B_n$ and $C_n$ are bounded. Remark that \begin{eqnarray*}
B_n &= &\E\left( \left| \int_{0}^{\frac{T}{n}} u_s \delta B_s\right|^{\frac{1}{H}} \right)+ \E \left( \sum_{i=1}^{n-1}\left|\int_{t_i^n}^{t_{i+1}^n} u_s\delta B_s\right|^{\frac{1}{H}} \right) \\ & := &K_1^n+K_2^n. \end{eqnarray*} Using estimate (\ref{ineq2}) with $p=\frac{1}{H}$, it follows that \begin{eqnarray*}
K_1^n
&\leq & C\left(\| u\| ^\frac{1}{H}_{\frac{1}{H},0,\frac{T}{n}}+\| Du\|^\frac{1}{H}_{\frac{1}{H},0,\frac{T}{n}}\right)n^{-1}+ \left(L_{u}^{\frac{1}{H}} n^{\frac{\alpha_1}{H}}+L_{Du}^{\frac{1}{H}}n^{\frac{\alpha_2}{H}}\right)n^{-\frac{\gamma}{H}-1} \\ & \leq & C \left(n^{-1} +n^{\frac{\alpha_1}{H}-\frac{\gamma}{H}-1}+n^{\frac{\alpha_2}{H}-\frac{\gamma}{H}-1}\right). \end{eqnarray*}
Therefore, $K_1^n$ is bounded since $\alpha_1 <\gamma +H$ and $\alpha_2 <\gamma +H$. In a similar way, estimate (\ref{ineq1}) leads to \begin{eqnarray*}
K_2^n &\leq & C\sum_{i=1}^{n-1}\bigg\{\left(\| u\|^{\frac{1}{H}}_{\frac{1}{H},t_i^n,t_{i+1}^n}+\| Du \|^{\frac{1}{H}}_{\frac{1}{H},t_i^n,t_{i+1}^n}\right)(t_{i+1}^n-t_{i}^n) \\
&& \qquad\qquad\quad+ \left(L_{u}^{\frac{1}{H}}(t_i^n)^{-\frac{\alpha_1}{H}}+L_{Du}^{\frac{1}{H}}(t_{i}^n)^{-\frac{\alpha_2}{H}}\right)(t_{i+1}^n-t_{i}^n)^{\frac{\gamma}{H} +1}\bigg\} \\
& \leq &C \left(1+ n^{\frac{\alpha_1}{H}-\frac{\gamma}{H}-1}\displaystyle\sum_{i=1}^{n-1}{i^{-\frac{\alpha_1}{H}}}+ n^{\frac{\alpha_2}{H}-\frac{\gamma}{H}-1}\displaystyle\sum_{i=1}^{n-1}{i^{-\frac{\alpha_2}{H}}}\right). \end{eqnarray*} This proves that $K_2^n$ is bounded and so is $B_n$.
Using H\"{o}lder inequality and the fact that $u$ is bounded in $L^q(\Omega)$ for $q>\frac{1}{H}$, we obtain \begin{eqnarray*} C_n
&=&\sum_{i=0}^{n-1}\E\left( |u_{t_{i}^n} |^{\frac{1}{H}} |B_{t_{i+1}^n}-B_{t_{i}^n}|^{\frac{1}{H}}\right)
\\ & \leq & \sum_{i=0}^{n-1}\left[ \E\left(|u_{t_{i}^n}|^{q}\right)\right] ^{\frac{1}{qH}}\left[ \E\left(|B_{t_{i+1}^n}-B_{t_{i}^n}|^{\frac{q}{qH -1}}\right)\right]^{1-\frac{1}{qH}} \\ & \leq & C \sum_{i=0}^{n-1}(t_{i+1}^n-t_{i}^n) =CT, \end{eqnarray*} and this proves the boundedness of $C_n$.\\ \it{Step 2.} We prove that $A_n$ converges to zero. Consider the decomposition \[
\sum_{i=0}^{n-1}|A_{i}^{1,n}| ^{\frac{1}{H}}
=\left|\int_{0}^{\frac{T}{n}} (u_s-u_{0}) \delta B_s\right|^{\frac{1}{H}} + \sum_{i=1}^{n-1}|A_{i}^{1,n} |^{\frac{1}{H}}. \] Using estimate (\ref{ineq2}) with $p =\frac{1}{H}$, it follows that \begin{eqnarray*}
&& \E\left( \left|\int_{0}^{\frac{T}{n}} (u_s-u_{0}) \delta B_s \right|^{\frac{1}{H}} \right) \\
&& \leq C\left[ \|u-u_{0} \|^\frac{1}{H}_{\frac{1}{H},0,\frac{T}{n}}+ \| Du-Du_{0} \|^\frac{1}{H}_{\frac{1}{H},0,\frac{T}{n}}\right]n^{-1}+ \left[L_{u}^{\frac{1}{H}}{n}^{\frac{\alpha_1}{H}}+L_{Du}^{\frac{1}{H}}{n}^{\frac{\alpha_2}{H}}\right]{n}^{-\frac{\gamma}{H}-1} \\ && \leq C n^{-1}\left(1 +{n^{\frac{\alpha_1}{H}-\frac{\gamma}{H}}}+{n^{\frac{\alpha_2}{H}-\frac{\gamma}{H}}}\right). \end{eqnarray*}
Therefore $ \E\left( \left|\int_{0}^{\frac{T}{n}} (u_s-u_{0}) \delta B_s \right|^{\frac{1}{H}} \right) $ converges to zero as $n$ tends to infinity, since $\alpha_1 <\gamma +H$ and $\alpha_2 <\gamma +H$. We can also prove that $\E \left(\sum_{i=1}^{n-1} |A_{i}^{1,n}|^{\frac{1}{H}} \right)$ converges to zero. In fact, using estimate (\ref{ineq1}) with $p =\frac{1}{H}$, we obtain \begin{eqnarray*}
\E\left( \sum_{i=1}^{n-1} |A_{i}^{1,n}|^{\frac{1}{H}} \right)
& \leq &
C \sum_{i=1}^{n-1}\Bigg[\left(\| u-u_{t_i^n} \|^{\frac{1}{H}}_{\frac{1}{H},t_i^n,t_{i+1}^n}+\| Du-Du_{t_i^n} \|^{\frac{1}{H}}_{\frac{1}{H},t_i^n,t_{i+1}^n}\right)(t_{i+1}^n-t_{i}^n)\\
&& \qquad\qquad\quad+ \left(L_{u}^{\frac{1}{H}}(t_i^n)^{-\frac{\alpha_1}{H}}+L_{Du}^{\frac{1}{H}}(t_{i}^n)^{-\frac{\alpha_2}{H}}\right)(t_{i+1}^n-t_{i}^n)^{\frac{\gamma}{H} +1}\Bigg] \\ & & \leq C n^{-\frac{\gamma}{H}-1}\left({n^{\frac{\alpha_1}{H}}}\displaystyle\sum_{i=1}^{n-1}{i^{-\frac{\alpha_1}{H}}}+ {n^{\frac{\alpha_2}{H}}}\displaystyle\sum_{i=1}^{n-1}{i^{-\frac{\alpha_2}{H}}}\right), \end{eqnarray*} where we have used the fact that \begin{equation*}
\| u-u_{t_i^n} \|_{\frac{1}{H},t_i^n,t_{i+1}^n} \leq L_{u}T^{\gamma-\alpha_1} i^{-\alpha_1} n^{\alpha_1-\gamma}, \end{equation*} and \begin{equation*}
\| Du-Du_{t_i^n} \|_{\frac{1}{H},t_i^n,t_{i+1}^n} \leq L_{Du}T^{\gamma-\alpha_2} i^{-\alpha_2} n^{\alpha_2-\gamma}. \end{equation*}
From the above computations,\ it follows that $\E\left( \sum_{i=1}^{n-1}|A_{i}^{1,n}|^{\frac{1}{H}} \right)$ converges to zero as $n$ goes to infinity. Therefore, we conclude that \begin{equation}\label{term1}
\lim _{n \rightarrow \infty} \E\left( \sum_{i=1}^{n-1} |A_{i}^{1,n}|^{\frac{1}{H}} \right) =0. \end{equation}
Second, let us prove that $\E \left(\sum_{i=1}^{n-1}|A_{i}^{2,n}|^{\frac{1}{H}} \right)$ converges to zero as $n$ tends to infinity. It follows from (\ref{ext}) that each term $A_i^{2,n}$ can be expressed as \begin{equation*} A_{i}^{2,n}
=\int_0^T D_su_{t_i^n}\dfrac{\partial}{\partial s}\bigg(R(s,t_{i+1}^n)-R(s,t_{i}^n)\bigg)ds. \end{equation*} Therefore we have the following decomposition \begin{equation*} A_{i}^{2,n}:= J_1^{i,n}+J_2^{i,n}+J_3^{i,n}, \end{equation*} where \begin{eqnarray*}
J_1^{i,n} &=& \frac 12
\int_0^{t_{i}^n} D_su_{t_i^n}\dfrac{\partial}{\partial s}\left((t_{i}^n-s)^{2H}-(t_{i+1}^n-s)^{2H}\right)ds, \\
J_2^{i,n} &=&\frac 12 \int^{t_{i+1}^n}_{t_{i}^n} D_su_{t_i^n}\dfrac{\partial}{\partial s}\left((s-t_{i}^n)^{2H}-(t_{i+1}^n-s)^{2H}\right)ds, \\
J_3^{i,n}&=& \frac 12 \int_{t_{i+1}^n}^{T} D_su_{t_i^n}\frac{\partial}{\partial s}\left((s-t_{i}^n)^{2H}-(s-t_{i+1}^n)^{2H}\right)ds. \end{eqnarray*} Using Minkowski inequality and assumption (\ref{assump3}), we obtain \begin{eqnarray*}
\E \left( \sum_{i=0}^{n-1}|J_1^{i,n}|^{\frac{1}{H}} \right)
& \leq & H \sum_{i=0}^{n-1}\left[ \int_0^{t_{i}^n} \Vert D_su_{t_i^n}\Vert_{L^{\frac{1}{H}}(\Omega)} \left|(t_{i+1}^n-s)^{2H-1}-(t_{i}^n-s)^{2H-1})\right|ds\right]^{\frac{1}{H}} \\ & \leq & C \sum_{i=1}^{n-1}(t_{i}^n)^{-\frac{\alpha_3}{H}}\left[ \int_0^{t_{i}^n} \left[(t_{i}^n-s)^{2H-1}-(t_{i+1}^n-s)^{2H-1}\right]ds\right]^{\frac{1}{H}} \\ & =& C \sum_{i=1}^{n-1}(t_{i}^n)^{-\frac{\alpha_3}{H}}\left[ (t_{i+1}^n-t_{i}^n)^{2H}-\left[(t_{i+1}^n)^{2H}-(t_{i}^n)^{2H}\right]\right]^{\frac{1}{H}} \\
& \leq & C \sum_{i=1}^{n-1}(t_{i}^n)^{-\frac{\alpha_3}{H}}(t_{i+1}^n-t_{i}^n)^{2} \\
& \leq & C {n^{\frac{\alpha_3}{H} -2}}\sum_{i=1}^{n-1} {i^{-\frac{\alpha_3}{H}}}. \end{eqnarray*}
Taking into account that $\alpha_3 <2H$, we obtain that $\E \left( \sum_{i=0}^{n-1}|J_1^{i,n}|^{\frac{1}{H}} \right) $ converges to zero as $n$ tends to infinity. By means of similar arguments, we can show that $\E \left( \sum_{i=0}^{n-1}|J_2^{i,n}|^{\frac{1}{H}} \right) $ and $\E \left( \sum_{i=0}^{n-1}|J_3^{i,n}|^{\frac{1}{H}} \right) $ converge to zero as $n$ tends to infinity. Therefore, \begin{equation}\label{term2}
\lim _{n \rightarrow \infty} \E\left( \sum_{i=1}^{n-1}|A_{i}^{2,n}|^{\frac{1}{H}} \right) =0. \end{equation} Consequently, from (\ref{term1}) and (\ref{term2}) we deduce that $A_n$ converges to zero as $n$ goes to infinity.\\
\noindent \it{Step 3.} In order to show that the term $D_n$ converges to zero as $n$ tends to infinity, we replace $n$ by the product $nm$ and we let first $m$ tend to infinity. That is, we consider the partition of interval $[0,T]$ given by $0=t_0^{nm}<\cdots <t_{nm}^{nm} = T$ and we define \begin{eqnarray} Z^{n,m} &:=& \notag
\left| \sum_{i=0}^{nm-1} |u_{t_{i}^{nm}}|^{\frac{1}{H}}|\Delta_{i}^{nm}B|^{\frac{1}{H}}
-e_H \sum_{j=0}^{n-1} |u_{t_{j}^{n}}|^{\frac{1}{H}}(t_{j+1}^{n} -t_{j}^{n})\right| \\ \notag
& = & \bigg| \sum_{j=0}^{n-1} \bigg[\sum_{i=jm}^{(j+1)m -1} \left(|u_{t_{i}^{nm}}|^{\frac{1}{H}}-|u_{t_{j}^{n}}|^{\frac{1}{H}}\right)|\Delta_{i}^{nm}B|^{\frac{1}{H}} \\ \notag
& & \qquad +|u_{t_{j}^{n}}|^{\frac{1}{H}}\left(\sum_{i=jm}^{(j+1)m -1}|\Delta_{i}^{nm}B|^{\frac{1}{H}}-e_H (t_{j+1}^{n} -t_{j}^{n})\right)\bigg]\bigg|. \\ \notag
& \leq & \sum_{j=0}^{n-1} \sum_{i=jm}^{(j+1)m -1} \left||u_{t_{i}^{nm}}|^{\frac{1}{H}}-|u_{t_{j}^{n}}|^{\frac{1}{H}}\right||\Delta_{i}^{nm}B|^{\frac{1}{H}} \\ \notag
& &\qquad +\sum_{j=0}^{n-1}|u_{t_{j}^{n}}|^{\frac{1}{H}}\left|\sum_{i=jm}^{(j+1)m -1}|\Delta_{i}^{nm}B|^{\frac{1}{H}}-e_H (t_{j+1}^{n} -t_{j}^{n})\right| \\ \notag \label{eq3}
& := & Z_1^{n,m} + Z_2^{n,m}. \end{eqnarray} By the mean value theorem, we can write \[ Z^{n,m}_1 \le
\frac 1H \sum_{j=0}^{n-1} \sum_{i=jm}^{(j+1)m -1} \left|u_{t_{i}^{nm}}-u_{t_{j}^{n}}\right|\left(|u_{t_{i}^{nm}}|^{\frac{1}{H}-1}+|u_{t_{j}^{n}}|^{\frac{1}{H}-1}\right)|\Delta_{i}^{nm}B|^{\frac{1}{H}}. \] Using H\"{o}lder inequality, assumption (\ref{assump11}) as well as the boundedness of $u$ in $L^q(\Omega)$ for some $q>\frac{1}{H}$, we obtain \[ \E (Z^{n,m}_1) \leq Cn^{-1}m^{-1} \sum_{j=0}^{n-1} \sum_{i=jm}^{(j+1)m -1} (t_j^n)^{-{\alpha_1}}(t_i^{nm}-t_j^{n})^{{\gamma}} \leq {C}{n^{{\alpha_1}-{\gamma} -1}} \sum_{j=0}^{n-1} {j^{{-\alpha_ 1}}}, \] which implies \begin{equation} \lim_{n\rightarrow \infty}\sup_{m\ge 1}\E (Z^{n,m}_1 )= 0. \label{equ6} \end{equation} On the other hand, using H\"{o}lder inequality and the fact that $u$ is bounded in $L^q(\Omega)$ for some $q>\frac{1}{H}$, we have \begin{eqnarray*} \E(Z^{n,m}_2) &\leq &
\sum_{j=0}^{n-1}\left[ \E(|u_{t_{j}^{n}}|^q)\right]^{\frac{1}{qH}}\left[ \E \left(\left|\sum_{i=jm}^{(j+1)m -1}|\Delta_{i}^{nm}B|^{\frac{1}{H}}-e_H (t_{j+1}^{n} -t_{j}^{n})\right|^{\frac{qH}{qH-1}} \right)\right]^{1-\frac{1}{qH}} \\
& \leq & C \sum_{j=0}^{n-1}\left[ \E \left( \left|\sum_{i=jm}^{(j+1)m -1}|\Delta_{i}^{nm}B|^{\frac{1}{H}}-e_H (t_{j+1}^{n} -t_{j}^{n})\right|^{\frac{qH}{qH-1}} \right)\right]^{1-\frac{1}{qH}}. \end{eqnarray*}
For any fixed $n\ge 1$, by the Ergodic Theorem the sequence $ \sum_{i=jm}^{(j+1)m -1}|\Delta_{i}^{nm}B|^{\frac{1}{H}}-e_H (t_{j+1}^{n} -t_{j}^{n})$ converges to $0$ in $L^s$ as $m$ tends to infinity, for every $s>1$. This implies that, for any $n\ge 1$, \begin{equation} \lim_{m\rightarrow \infty} \E (Z^{n,m}_2 )= 0. \label{equ7} \end{equation} Therefore, it follows from (\ref{equ6}) and (\ref{equ7}) that \begin{equation} \lim_{n\rightarrow \infty} \lim_{m\rightarrow \infty} \E (Z^{n,m})= 0. \label{equ8} \end{equation} By the mean value theorem, we can write \begin{eqnarray*}
&& \Big| \sum_{j=0}^{n-1} |u_{t_{j}^{n}}|^{\frac{1}{H}}(t_{j+1}^{n} -t_{j}^{n}) -\displaystyle\int_0^T|u_s|^{\frac{1}{H}} ds\Big|
\leq \sum_{j=0}^{n-1}\int_{t^n_j}^{t^n_{j+1}}\left||u_{t^n_j}|^{\frac{1}{H}}-|u_s|^{\frac{1}{H}}\right| ds \\
& & \qquad \qquad \leq \frac 1H \sum_{j=0}^{n-1}\int_{t^n_j}^{t^n_{j+1}}|u_{t^n_j}-u_s| \left(|u_{t^n_j}|^{\frac{1}{H} -1}+|u_s|^{\frac{1}{H} -1}\right) ds. \end{eqnarray*} Then, applying H\"{o}lder inequality and assumption (\ref{assump11}), yields \begin{eqnarray*}
\E \left( \left| \sum_{j=0}^{n-1} |u_{t_{j}^{n}}|^{\frac{1}{H}}(t_{j+1}^{n} -t_{j}^{n}) -\displaystyle\int_0^T|u_s|^{\frac{1}{H}} ds\right| \right)
&\leq& C \sum_{j=1}^{n-1}\displaystyle\int_{t^n_j}^{t^n_{j+1}} (t^n_j)^{-\alpha_1} ({t^n_{j+1}} -{t^n_{j}})^{\gamma} ds + Cn^{-1} \\
&\leq &C n^{\alpha_1-{\gamma}-1} \sum_{i=1}^{n-1}{i^{-{\alpha_1}}}+Cn^{-1}. \end{eqnarray*}
This proves that $ \sum_{j=0}^{n-1} |u_{t_{j}^{n}}|^{\frac{1}{H}}(t_{j+1}^{n} -t_{j}^{n})$
converges in $L^1$ to $\displaystyle\int_0^T|u_s|^{\frac{1}{H}} ds$ as $n$ tends to infinity. This convergence, together with (\ref{equ8}), implies that $D_n$ converges to zero as $n$ goes to infinity, which concludes the proof of the theorem. \eop
\section{Divergence integral with respect to a $d$-dimensional fBm} The purpose of this section is to generalize Theorem \ref{the2} to multidimensional processes. In order to proceed with this generalization, we first introduce the following notation.
Consider a $d$-dimensional fractional Brownian motion ($d\ge 2$) $$ B=\{B_t, t\in [0,T]\} = \{ (B_t^{(1)}, B_t^{(2)},\dots, B_t^{(d)}),\,\, {t\in [0,T]}\} $$ with Hurst parameter $H\in (0,1)$ defined in a complete probability space $(\Omega, \mathcal{F},P)$, where $\mathcal{F}$ is generated by $B$. That is, the components $B^{(i)}$, $i=1,\dots,d$, are independent fractional Brownian motions with Hurst parameter $H$. We can define the derivative and divergence operators, $D^{(i)}$ and $\delta^{(i)}$, with respect to each component $B^{(i)}$, as in Section 2. Denote by $\mathbb{D}_i^{1,p}(\EuFrak H)$ the associated Sobolev spaces. We assume that these spaces include functionals depending on all the components of $B$, and not only on the $i$th component.
The Hilbert space $\EuFrak H_d$ associated with $B$ is the completion of the space $\mathcal{E}_d$ of step functions $\varphi =(\varphi^{(1)},\dots,\varphi^{(d)}) : [0,T]\rightarrow \R^d$ with respect to the inner product \[ \langle \varphi, \phi \rangle_{\EuFrak H_d} =\sum_{k=1}^d \langle \varphi^{(k)}, \phi^{(k)} \rangle_\EuFrak H. \] We can develop a Malliavin calculus for the process $B$, based on the Hilbert space $\EuFrak H_d$. We denote by $\mathcal{S}_{d}$ the space of smooth and cylindrical random variables of the form
$$
F=f\left( B(\varphi _{1}), \ldots , B(\varphi_{n})\right), $$ where $f\in C_{b}^{\infty}(\R^{n})$, $\varphi_{j} =(\varphi_{j}^{(1)},\dots,\varphi_{j}^{(d)}) \in \mathcal{E}_d$, and $B(\varphi_{j}) =\displaystyle\sum_{k=1}^{d} B ^{(k)} (\varphi_{j}^{(k)})$.
Denote by $\langle \cdot , \cdot \rangle$ the usual inner product on $\R^d$. The following result has been proved in \cite{GN} using the Ergodic Theorem. \begin{lemma} \label{lem3} Let $F$ be a bounded random variable with values in $\R^d$. Then, we have $$
V_n^{\frac{1}{H}}(\langle F, B \rangle) \overset{L^{1}(\Omega)}{\longrightarrow} \displaystyle\int_{\R^d}\left[\displaystyle\int_ 0^T|\langle F, \xi\rangle|^{\frac{1}{H}}ds\right]\nu(d\xi), $$ as $n$ tends to infinity, where $\nu$ is the normal distribution $N(0, I)$ on $\R^d$. \end{lemma} The following theorem is the multidimensional version of Theorem \ref{the2}. \begin{theorem}\label{the5} Suppose that for each $i=1,\dots, d$, $u^{(i)}\in \mathbb{D}^{1, 2}(\EuFrak H)$ is a stochastic process satisfying Hypothesis $\bf{(A.3)}$. Set $u_t=(u_t^{(1)},\dots,u_t^{(d)})$ and consider the divergence integral process $X=\{X_t, t \in [0,T]\}$ defined by $X_t :=\sum_{i=1}^d \int_0^t u_s^{(i)}\delta B_s^{(i)}$. Then, we have $$
V_n^{\frac{1}{H}}(X) \overset{L^{1}(\Omega)}{\longrightarrow} \displaystyle\int_{\R^d}\left[\displaystyle\int_ 0^T|\langle u_s, \xi\rangle|^{\frac{1}{H}}ds\right]\nu(d\xi), $$ as $n$ tends to infinity, where $\nu$ is the normal distribution $N(0, I)$ on $\R^d$. \end{theorem} \bop. This theorem can be proved by the same arguments as in the proof of Theorem \ref{the2}. We need to show that the expression \[
F_n:= \E\left(\left|\sum_{i=0}^{n-1}\left|\sum_{k=1}^d\displaystyle\int_{t_i^n}^{t_{i+1}^n} u_s^{(k)} \delta B_s^{(k)}\right|^{\frac{1}{H}}-\displaystyle\int_{\R^d}\bigg[\displaystyle\int_ 0^T|\langle u_s, \xi\rangle|^{\frac{1}{H}}ds\bigg]\nu(d\xi)\right|\right), \] converges to zero as $n$ tends to infinity. Using the decomposition (\ref{decom}) for $\displaystyle\int_{t_i^n}^{t_{i+1}^n} u_s^{(k)} \delta B_s^{(k)}$, and applying the same techniques as in the proof of Theorem \ref{the2}, it is not difficult to see that \begin{equation*}\label{Fn} F_n \le CA_{n}^H(B_n + C_n)^{1-H} + D_n, \end{equation*} where $B_n,$ $C_n$ are bounded, $A_n$ converges to zero as $n$ tends to infinity, and $D_n$ is given by \begin{eqnarray*}
D_n &:=& \E\left(\left|\sum_{i=0}^{n-1}| \langle u_{t_{i}^{n}}, \Delta_{i}^{n}B\rangle |^{\frac{1}{H}}-\displaystyle\int_{\R^d}\bigg[\displaystyle\int_ 0^T|\langle u_s, \xi\rangle|^{\frac{1}{H}}ds\bigg]\nu(d\xi)\right|\right). \end{eqnarray*} It only remains to show that $D_n$ converges to zero as $n$ tends to infinity. To do this, as in the proof of Theorem \ref{the2}, we introduce the partition of interval $[0,T]$ given by $0=t_0^{nm}<\cdots <t_{nm}^{nm} = T$, and we write \begin{eqnarray} V^{n,m} &:=& \notag
\left| \sum_{i=0}^{nm-1} |\langle u_{t_{i}^{nm}}, \Delta_{i}^{nm}B\rangle|^{\frac{1}{H}}-\sum_{j=0}^{n-1}\displaystyle\int_{\R^d}
|\langle u_{t_{j}^{n}}, \xi\rangle|^{\frac{1}{H}}(t_{j+1}^{n} -t_{j}^{n})\nu(d\xi)\right| \\ \notag
& \leq & \sum_{j=0}^{n-1} \sum_{i=jm}^{(j+1)m -1} |\langle u_{t_{i}^{nm}}-u_{t_{j}^{n}}, \Delta_{i}^{nm}B\rangle|^{\frac{1}{H}}\\ \notag
& & \qquad +\sum_{j=0}^{n-1}\bigg|\sum_{i=jm}^{(j+1)m -1}|\langle u_{t_{j}^{n}}, \Delta_{i}^{nm}B\rangle|^{\frac{1}{H}}-\displaystyle\int_{\R^d}|\langle u_{t_{j}^{n}}, \xi\rangle|^{\frac{1}{H}}(t_{j+1}^{n} -t_{j}^{n})\nu(d\xi)\bigg| \\ \notag
& := & V_1^{n,m} + V_2^{n,m}. \end{eqnarray} Then, using the same arguments as in Theorem \ref{the2}, we have \begin{equation}\label{equ6m} \lim_{n\rightarrow \infty}\sup_{m\ge 1}\E (V^{n,m}_1 )= 0. \end{equation} On the other hand, Lemma \ref{lem3} implies that for all $n\geq 1$ \begin{equation}\label{equ7m} \lim_{m\rightarrow \infty} \E (V^{n,m}_2 )= 0. \end{equation} Moreover, it is not difficult to show that $$
\displaystyle\lim_{n\rightarrow \infty}\E\left|\sum_{j=0}^{n-1}\displaystyle\int_{\R^d}
|\langle u_{t_{j}^{n}}, \xi\rangle|^{\frac{1}{H}}(t_{j+1}^{n} -t_{j}^{n})\nu(d\xi)-\displaystyle\int_{\R^d}\bigg[\displaystyle\int_ 0^T|\langle u_s, \xi\rangle|^{\frac{1}{H}}ds\bigg]\nu(d\xi)\right| =0.
$$
Finally, this convergence, together with (\ref{equ6m}) and (\ref{equ7m}), implies that $D_n$ converges to zero as $n$ tends to infinity. This completes the proof of
Theorem \ref{the5}.\eop \section{Fractional Bessel process} In this section, we are going to apply the results of the previous section to the fractional Bessel process. Let $B$ be a $d$-dimensional fractional Brownian motion ($d\ge 2$).
The process $R= \{R_t, t\in [0,T]\}$, defined by $R_t= \|B_t\|$, is called the fractional Bessel process of dimension $d$ and Hurst parameter $H$. It has been proved in \cite{GN} that, for $H> \frac12$, the fractional Bessel process $R$ has the following representation \begin{equation}\label{rep} R_t = \displaystyle\sum_{i=1}^{d}\displaystyle\int_ 0^t\dfrac{B_s^{(i)}}{R_s}\delta B_s^{(i)} + H(d-1)\displaystyle\int_ 0^t \dfrac{s^{2H-1}}{R_s}ds. \end{equation}
This representation (\ref{rep}) is similar to the one obtained for Bessel processes with respect to standard Brownian motion (see, for instance, Karatzas and Shreve \cite{KS}). Indeed, if $W$ is a $d$-dimensional Brownian motion and $R_t =\|W_t\|$, then $$ R_t = \displaystyle\sum_{i=1}^{d}\displaystyle\int_ 0^t\dfrac{W_s^{(i)}}{R_s}dW_s^{(i)} + \frac{d-1}2\displaystyle\int_ 0^t \dfrac{ds}{R_s}. $$
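Observe also that, formally setting $H=\frac12$ in (\ref{rep}), the drift coefficient becomes $H(d-1)s^{2H-1}=\frac{d-1}{2}$, while the divergence integral of the adapted integrand $B_s^{(i)}/R_s$ coincides with the It\^{o} integral, so that (\ref{rep}) is consistent with the classical identity above.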
The goal of this section is to extend the integral representation (\ref{rep}) to the case $H<\frac 12$. We cannot directly apply It\^{o}'s formula because the function $\|x\|$ is not smooth at the origin. We need the following extension of the domain of the divergence operator to processes with trajectories in $L^{\beta}([0,T], \R^d)$, where $\beta >\frac 1{2H}$. \begin{definition}\label{def3} Fix $\beta >\frac 1{2H}$. We say that a $d$-dimensional stochastic process $u=(u^{(1)},\dots , u^{(d)})\in L^1(\Omega; L^{\beta}([0,T], \R^d))$ belongs to the extended domain of the divergence ${\rm Dom}^*\delta$, if there exists $q>1$ such that \begin{equation} \label{78}
|\E\langle u, DF\rangle_{\EuFrak H_d}|= \left |\sum_{i=1}^{d}\E(\langle u^{(i)}, D^{(i)} F\rangle_{\EuFrak H})\right | \leq c_u \| F\|_{L^q(\Omega)}, \end{equation} for every smooth and cylindrical random variable $F \in \mathcal{S}_{d}$, where $c_u$ is some constant depending on $u$. In this case, $\delta(u)$ is the element of $L^{p}(\Omega)$, where $p$ is the conjugate exponent of $q$, defined by the duality relationship $$ \E(\langle u, DF\rangle_{\EuFrak H_d} )=\E(\delta(u) F), $$ for every smooth and cylindrical random variable $F \in \mathcal{S}_{d}$. \end{definition} Notice that the inner product in (\ref{78}) is well defined by formula (\ref{ext}). If $u\mathbf{1}_{[0,t]}$ belongs to the extended domain of the divergence, we will make use of the notation \[ \delta(u\mathbf{1}_{[0,t]}) =\sum_{i=1}^d \int_0^t u^{(i)}_s \delta B_s^{(i)}. \] \begin{remark} Notice that, since $\beta >\frac 1{2H}$, we have $\EuFrak H_{d}\subset L^{\beta}([0,T], \R^d)$ and therefore ${\rm Dom}\, \delta \subset {\rm Dom}^*\delta$.
\begin{remark} \label{rem5.1} It should be noted that the process $R$ satisfies the following \begin{equation}\label{eq6} \E(R_t^{-q}) =C t^{-Hq} \displaystyle\int_0^{\infty} y^{d-1-q} e^{\frac{-y^2}{2}} dy:= K_q t^{-Hq}, \end{equation} for every $q<d$, where $K_q$ is a positive constant. Indeed, by self-similarity $R_t$ has the same law as $t^{H}\|B_1\|$, and computing $\E(\|B_1\|^{-q})$ in polar coordinates gives (\ref{eq6}); the integral is finite precisely because $q<d$. This property will be used later. \end{remark}
We recall the following multidimensional It\^{o} formula for the fBm (see \cite{HMS}). This formula requires a notion of extended domain of the divergence operator, ${\rm Dom}^{E} \delta$, introduced in \cite[Definition 3.9]{HMS}, which is slightly different from Definition \ref{def3}, because we require $u\in L^1(\Omega; L^{\beta}([0,T], \R^d))$ (instead of $u\in L^2(\Omega \times [0,T]; \mathbb{R}^d)$) and the extended divergence belongs to $L^p(\Omega)$ (instead of $L^2(\Omega)$). Our notion of extended domain will be useful to handle the case of the fractional Bessel process. Moreover, the class of test functionals is not the same, although this is not relevant because both classes are dense in $L^p(\Omega)$.
\begin{theorem} Let $B$ be a $d$-dimensional fractional Brownian motion with Hurst parameter $H<\frac 12$. Suppose that $F\in C^2(\R^d)$ satisfies the growth condition \begin{equation}\label{growth}
\max\left\{|F(x)|, \left |\frac{\partial F}{\partial x_i}(x)\right|, \left| \frac{\partial^2 F}{\partial x^2_i}(x) \right| , i=1,\dots,d\right\}\leq ce^{\lambda \|x\|^2}, \qquad x\in\R^d, \end{equation} where $c$ and $\lambda$ are positive constants such that $\lambda <\dfrac{T^{-2H}}{4d }$. Then, for each $i=1,...,d$ and $t\in [0,T]$, the process $\{\bf{1}_{[0,t]}(s)\dfrac{\partial F}{\partial x_i}(B_s), s\in[0,T]\}$ belongs to ${\rm Dom}^{E} \delta$, and the following formula holds \begin{equation} \label{ito} F(B_t) = F(0)+ \sum_{i=1}^{d}\displaystyle\int_ 0^t\dfrac{\partial F}{\partial x_i}(B_s)\delta B_s^{(i)}+ H \sum_{i=1}^{d}\displaystyle\int_ 0^t\dfrac{\partial^2 F}{\partial x^2_i}(B_s)s^{2H-1}ds, \end{equation} where ${\rm Dom}^{E} \delta$ is the extended domain of the divergence operator in the sense of Definition 3.9 in \cite{HMS}. \end{theorem}
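As an elementary illustration, the function $F(x)=\|x\|^2$ satisfies (\ref{growth}), and applying (\ref{ito}) to it yields
$$
\|B_t\|^2 = 2\sum_{i=1}^{d}\displaystyle\int_ 0^t B_s^{(i)}\,\delta B_s^{(i)}+ d\, t^{2H};
$$
taking expectations, and using that the divergence integral has zero expectation, recovers $\E(\|B_t\|^2)= d\, t^{2H}$.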
The next result is a change of variable formula for the fractional Bessel process in the case $H<\frac{1}{2}$.
\begin{theorem}\label{pro1} Let $H<\frac{1}{2}$, and let $R=\{R_t, t\in [0,T]\}$ be the fractional Bessel process. Set $u_t^{(i)} =\frac {B^{(i)}_t}{R_t}$ and $u_t=(u_t^{(1)},\dots,u_t^{(d)})$, for $t\in [0,T]$. Then, we have the following results: \begin{enumerate} \item[(i)] For any $t\in (0,T]$, the process $\{u_s \mathbf{1}_{[0,t]}(s), s\in [0,T]\}$ belongs to the extended domain ${\rm Dom}^*\delta$ and the representation (\ref{rep}) holds true. \item[(ii)] If $H>\frac{1}{4}$, for any $t\in [0,T]$, the process $u\mathbf{1}_{[0,t]} $ belongs to $L^2(\Omega;\EuFrak H_d)$ and to
the domain of $\delta$ in $L^p(\Omega)$ for any $p<d$. \end{enumerate} \end{theorem}
\bop. Let us first prove part (i). Since the function $\|x\|$ is not differentiable at the origin, the It\^{o} formula (\ref{ito}) cannot be applied and we need to make a suitable approximation. For $\varepsilon >0$, consider the function $F_{\varepsilon}(x) = (\| x\|^2 +\varepsilon^2)^{\frac12}$, which is smooth and satisfies condition (\ref{growth}). Applying It\^{o}'s formula (\ref{ito}) we have \begin{equation}\label{eq7} F_{\varepsilon}(B_t) = \varepsilon+ \sum_{i=1}^{d}\displaystyle\int_ 0^t\dfrac{B_s^{(i)}}{(R_s^2 +\varepsilon^2)^{\frac12}}\delta B_s^{(i)}+ Hd \int_ 0^t \dfrac{s^{2H-1}}{(R_s^2 +\varepsilon^2)^{\frac12}}ds-H \displaystyle\int_ 0^t \frac{s^{2H-1} R_s^2}{(R_s^2 +\varepsilon^2)^{\frac32}}ds. \end{equation} Clearly, $F_{\varepsilon}(B_t)$ converges to $R_t$ in $L^p$ for any $p\ge 1$. Let $1\leq p <d$. Using Minkowski's inequality, and taking into account Remark \ref{rem5.1}, we have \begin{eqnarray*}
\E\left( \left| \int_ 0^t s^{2H-1}R_s^{-1}ds\right|^p \right)
&\leq &\left(\int_ 0^t s^{2H-1}\left(\E(R_s^{-p})\right)^{\frac{1}{p}}ds\right)^p \\ &\leq & K_p \left(\int_ 0^t s^{-H} s^{2H-1}ds\right)^p \le K_p H^{-p} t^{pH}. \end{eqnarray*} Since, for every $\varepsilon >0$, $ \frac{s^{2H-1}}{(R_s^2 +\varepsilon^2)^{\frac12}}\leq s^{2H-1}R_s^{-1}$, the dominated convergence theorem implies that $\int_ 0^t \frac{s^{2H-1}}{(R_s^2 +\varepsilon^2)^{\frac12}}ds$ converges to $\int_ 0^t \frac{s^{2H-1}}{R_s}ds$ in $L^{p}$ for any $1\leq p<d$, as $\varepsilon$ converges to zero. In the same way, we prove that $\int_ 0^t \frac{s^{2H-1}R_s^2}{(R_s^2 +\varepsilon^2)^{\frac32}}ds$ converges to $\int_ 0^t \frac{s^{2H-1}}{R_s}ds$ in $L^{p}$ for any $1\leq p<d$, as $\varepsilon$ converges to zero. Coming back to (\ref{eq7}), we deduce that $ \sum_{i=1}^{d}\int_ 0^t\frac{B_s^{(i)}}{(R_s^2 +\varepsilon^2)^{\frac12}}\delta B_s^{(i)}$ converges in $L^{p}$ for any $1\leq p<d$, to some limit $G_t$, as $\varepsilon$ tends to zero.
We are going to show that the process $u\mathbf{1}_{[0,t]}$ belongs to the extended domain of the divergence and $\delta(u\mathbf{1}_{[0,t]})=G_t$. Let $F$ be a smooth and cylindrical random variable in $\mathcal{S}_{d}$. For $i=1,\dots,d$, let $u_s^{\varepsilon, (i)} =\frac{B_s^{(i)}}{(R_s^2 +\varepsilon^2)^{\frac12}}$, and $u_s^\varepsilon= (u_s^{\varepsilon, (1)}, \dots, u_s^{\varepsilon, (d)})$. By the duality relationship we obtain \begin{equation*}\label{duality} \E (\langle u^{\varepsilon}\bf{1}_{[0,t]}, DF\rangle_{\EuFrak H_d} ) = \E(\delta(u^{\varepsilon}\bf{1}_{[0,t]}) F). \end{equation*}
Taking into account that $\delta(u^{\varepsilon}\bf{1}_{[0,t]})$ converges to $G_t$ in $L^p$, and that \[ \lim_{\varepsilon \rightarrow 0} \E(\langle u^{\varepsilon}\bf{1}_{[0,t]}, DF\rangle_{\EuFrak H_d}) =\E(\langle u \bf{1}_{[0,t]}, DF\rangle_{\EuFrak H_d}), \] since the components of $u$ are bounded by one, we deduce that \[ \E(\langle u\bf{1}_{[0,t]}, DF\rangle_{\EuFrak H_d})=\E(G_tF). \] This implies that $u\mathbf{1}_{[0,t]}$ belongs to the extended domain of the divergence and $\delta(u\mathbf{1}_{[0,t]})=G_t$.
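In particular, letting $\varepsilon$ tend to zero in (\ref{eq7}) and using the $L^{p}$ convergences established above, we obtain
\begin{equation*}
R_t = \delta(u\mathbf{1}_{[0,t]}) + Hd\displaystyle\int_ 0^t \dfrac{s^{2H-1}}{R_s}ds - H\displaystyle\int_ 0^t \dfrac{s^{2H-1}}{R_s}ds = \sum_{i=1}^{d}\displaystyle\int_ 0^t\dfrac{B_s^{(i)}}{R_s}\delta B_s^{(i)} + H(d-1)\displaystyle\int_ 0^t \dfrac{s^{2H-1}}{R_s}ds,
\end{equation*}
which is precisely the representation (\ref{rep}) claimed in part (i).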
To show part (ii), let us assume that $H>\frac14$. We first show that for any $i=1,\dots,d$, $u^{(i)} \in L^2( \Omega; \EuFrak H)$. We can write \begin{eqnarray*}
|u_t^{(i)}-u_s^{(i)}|
& \leq &{|B_t^{(i)} - B_s^{(i)}|}{R_t^{-1}} + {|R_s - R_t||B_s^{(i)}|}{R_t^{-1}R_s^{-1}} \\
& \leq & {\| B_t - B_s\| }{R_t^{-1}} + {|R_s - R_t|\| B_s\|}{R_t^{-1}R_s^{-1}} \\
& \leq & 2{\| B_t - B_s\| }{R_t^{-1}}, \end{eqnarray*} where we have used the fact that $$
|R_s - R_t|=\left| \| B_t\| - \| B_s\|\right| \leq \| B_t - B_s\|. $$
Since $|u_t^{(i)}-u_s^{(i)}|\leq 2$, we obtain \[
|u_t^{(i)}-u_s^{(i)}| \leq 2\left({\| B_t - B_s\| }{R_t^{-1}}\wedge 1\right), \] which implies \begin{equation}\label{eqqq1}
|u_t^{(i)}-u_s^{(i)}| \leq 2\| B_t - B_s\|^{\alpha} {R_t^{-\alpha}}, \end{equation} for every $\alpha\in[0,1]$, since $a\wedge 1\leq a^{\alpha}$ for all $a\geq 0$ and $\alpha\in[0,1]$. We can write, using (\ref{est01}), \begin{eqnarray*}
\E(\| u^{(i)}\|_{\EuFrak H}^2) & \leq & k_H \E \left( \int_ 0^T (u_s^{(i)})^2[(T-s)^{2H-1}+ s^{2H-1}]ds \right) \\
&& + k_H \E\left(\displaystyle\int_0^T\left(\displaystyle\int_s^T |u_t^{(i)}-u_s^{(i)}|(t-s)^{H-\frac32}dt\right)^2 ds\right) \\
& :=& k_H[N_1 + N_2]. \end{eqnarray*}
Since $|u_t^{(i)}|\leq 1$, it is clear that $N_1$ is bounded. To estimate $N_2$, choose $\alpha$, $q$ and $p$ such that $\frac{1}{2H} -1<\alpha \leq 1 $, $1<q<\frac{d}{2\alpha}$, and $\frac{1}{p}+\frac{1}{q} =1$. Using inequality (\ref{eqqq1}) and the Minkowski and H\"{o}lder inequalities, we get
\begin{eqnarray*} N_2& \leq & 2
\int_0^T\E\left(\displaystyle\int_s^T \| B_t - B_s\|^{\alpha} {R_t^{-\alpha}}(t-s)^{H-\frac32}dt\right)^2 ds \\
& \leq & 2
\int_0^T\left(\int_s^T\left[ \E( \| B_t -B_s\|^{2\alpha p})\right]^{\frac{1}{2p}} \left[ \E (R_t^{-2\alpha q}) \right]^{\frac{1}{2q}}(t-s)^{H-\frac32}dt\right)^2 ds \\ & \leq & C\int_0^T\left(\int_s^T(t-s)^{\alpha H} t^{-\alpha H}(t-s)^{ H -\frac32}dt\right)^2 ds \\ & \leq & C\displaystyle\int_0^T s^{-2\alpha H}(T-s)^{2(\alpha +1)H -1} ds \\
& =& C T^{2H}\beta(-2\alpha H+1, 2(\alpha +1)H). \end{eqnarray*}
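A brief comment on the choice of parameters in the last estimate may be helpful: the requirement $\alpha>\frac{1}{2H}-1$ is exactly what makes the inner $dt$-integral converge, since
\begin{equation*}
(\alpha+1)H-\tfrac32>-1 \iff (\alpha+1)H>\tfrac12 \iff \alpha>\tfrac{1}{2H}-1,
\end{equation*}
the interval $(\frac{1}{2H}-1, 1]$ is nonempty precisely because $H>\frac14$, and the condition $q<\frac{d}{2\alpha}$ guarantees $2\alpha q<d$, so that Remark \ref{rem5.1} applies to $\E(R_t^{-2\alpha q})$.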
Hence, for $i=1,\dots,d$, $\E (\| u^{(i)}\|_{\EuFrak H}^2) <\infty$ and, therefore, $u\in L^2(\Omega; \EuFrak H_d)$. Moreover, by the first assertion, it follows that for every $F\in \mathcal{S}_{d}$ and for $p<d$ (with $q$ the conjugate exponent of $p$), \begin{equation*}
\E(\langle D F,u \mathbf{1}_{[0,t]}\rangle_{\EuFrak H_d})= \E(G_t F) \leq \|G_t\|_{p} \|F\|_{q}. \end{equation*}
Therefore, $u\mathbf{1}_{[0,t]}$ belongs to the domain of $\delta$ in $L^p(\Omega)$. \eop
Notice that, if $d>2$, then we can take $p=2$ in part (ii), and $u\mathbf{1}_{[0,t]}$ belongs to ${\rm Dom}\, \delta$. Also, we remark that although $u\mathbf{1}_{[0,t]}$ belongs to the (extended) domain of the divergence, this does not imply that each component $u^{(i)}\mathbf{1}_{[0,t]}$ belongs to the domain of $\delta^{(i)}$. In the next theorem, we show that under the stronger condition $2dH^2 >1$, each process $u^{(i)}$ belongs to $\mathbb{D}^{1,2}_i (\EuFrak H)$ and satisfies Hypothesis $\bf{(A.3)}$ of Section 4.
\begin{theorem}\label{pro2}
Suppose that $2dH^2>1$. Let $R=\{R_t, t\in [0,T]\}$ be the fractional Bessel process. Then, for $i=1,2,\dots,d$, the process $u_t^{(i)}=\dfrac{B_t^{(i)}}{R_t}$ satisfies Hypothesis $\bf{(A.3)}$. \end{theorem} \bop. Fix $i=1,\dots, d$. The random variable $u_t^{(i)}$ is bounded and so, it is bounded in $L^{q}(\Omega)$ for all $q>\frac{1}{H}$. The Malliavin derivative $D^{(i)} u^{(i)}$ is given by $$ D^{(i)}_su_t^{(i)} = \left(-R_t^{-3} (B_t^{(i)})^2+R_t^{-1}\right) \bf{1}_{[0,t]}(s):= \phi_t \bf{1}_{[0,t]}(s). $$ Notice that \begin{equation*} \label{89}
\| D^{(i)}u^{(i)}_t \|_{\EuFrak H} \le 2R_t^{-1} t^{H}. \end{equation*} This implies $D^{(i)}u^{(i)}_t$ is bounded in $L^{\frac{1}{H}}(\Omega; \EuFrak H)$ because $dH>1$. Indeed, we have
\begin{equation*}
\| D^{(i)}u^{(i)}_t\|_{L^{\frac{1}{H}}(\Omega; \EuFrak H)} \le 2 \left(\E[R_t^{-\frac{1}{H}}]\right)^{H} t^{H} \le C. \end{equation*} Let us now prove that $u^{(i)}$ satisfies the inequalities (\ref{assump11}) and (\ref{assump21}), with $p=\frac{1}{H}$. Let $0<s\le t \le T$. Using estimate (\ref{eqqq1}) and choosing $\frac{1}{2H} -1<\alpha < Hd\wedge 1$, it follows that for $1< q< \frac{Hd}{\alpha}$ and $p_1>1$ such that $\frac{1}{p_1}+\frac{1}{q} =1$, \begin{equation*}
\| u_t^{(i)}-u_s^{(i)}\|_{L^{\frac{1}{H}}(\Omega)} \leq 2\left[ \E \left( \| B_t - B_s\|^{\frac{\alpha p_1}{H}} \right) \right]^{\frac{H}{p_1}} \left(\E (R_t^{-\frac{\alpha q}{H}}) \right)^{\frac{H}{q}}
\leq C (t-s)^{\alpha H} s^{-\alpha H}. \end{equation*} Hence inequality (\ref{assump11}) is satisfied with $\alpha_1 =\alpha H<\frac12$ and $\gamma =\alpha H>\frac12 -H$. In order to show inequality (\ref{assump21}) with $p=\frac{1}{H}$, we first write for $0<r \le t \le T$, \begin{eqnarray} \notag
\| \phi_t\bf{1}_{[0,t]}-\phi_r\bf{1}_{[0,r]} \|_{\EuFrak H}
&\leq & \| \phi_t(\bf{1}_{[0,t]}-\bf{1}_{[0,r]}) \|_{\EuFrak H}+\| (\phi_t-\phi_r)\bf{1}_{[0,r]} \|_{\EuFrak H} \\ \notag
& =& |\phi_t| \| \bf{1}_{[0,t]}-\bf{1}_{[0,r]} \|_{\EuFrak H}+|\phi_t-\phi_r|\| \bf{1}_{[0,r]} \|_{\EuFrak H} \\
& \leq & C\left( R_t^{-1}(t-r)^{H}+|\phi_t-\phi_r|r^{H}\right). \label{phi} \end{eqnarray} We have \begin{eqnarray*}
|\phi_t-\phi_r|& \le & \left| R_t^{-3} (B^{(i)}_t)^2 -R_r^{-3} (B^{(i)}_r)^2 \right| + | R_t^{-1} - R_r^{-1} | \\
&\le& R_t^{-3} R_r^{-3} \left( |R_t^3-R_r^3| (B^{(i)}_r)^2 + R_t^3 |(B^{(i)}_t)^2-(B^{(i)}_r)^2 | \right)+ R_t^{-1} R_r^{-1} |R_t-R_r| \\
&\le & \| B_t- B_r\| \left( 2R_t^{-1} R_r^{-1}+ 2R_t^{-3} R_r + R_t^{-2} + R_r^{-2} \right), \end{eqnarray*} and $$
|\phi_t-\phi_r|\leq |\phi_t|+ |\phi_r| \leq 2(R^{-1}_t +R^{-1}_r). $$ Put $R_{tr} := R_t^{-1} R_r^{-1}+ R_t^{-3} R_r + R_t^{-2} + R_r^{-2}$. Then, the above inequalities imply \begin{equation*}
|\phi_t-\phi_r|\leq 4\left[ \left(\| B_t -B_r\|R_{tr}\right)\wedge \left(R_t^{-1}\vee R_r^{-1}\right)\right]. \end{equation*} By the same argument, one also obtains \begin{equation*}
|\phi_t-\phi_r|\leq 4\left[ \left(\| B_t -B_r\|R_{rt}\right)\wedge \left(R_t^{-1}\vee R_r^{-1}\right)\right]. \end{equation*} Therefore, for every $\alpha \in [0,1]$, we can write \begin{eqnarray} \notag
|\phi_t-\phi_r| & \leq& 4\left[ \left(\| B_t -B_r\| (R_{tr}\wedge R_{rt})\right)\wedge\left(R_t^{-1}\vee R_r^{-1}\right) \right]\\ \notag
& \leq &4 \| B_t -B_r\| ^\alpha (R_{tr}^\alpha\wedge R_{rt}^\alpha)\left(R_t^{\alpha-1}\vee R_r^{\alpha-1}\right) \\ \label{79}
&\le & C\| B_t -B_r\|^{\alpha}\left(R_t^{-\alpha-1}\vee R_r^{-\alpha-1}\right).
\end{eqnarray}
Then, substituting (\ref{79}) into (\ref{phi}) yields \begin{equation*}
\| \phi_t\bf{1}_{[0,t]}-\phi_r\bf{1}_{[0,r]} \|_{\EuFrak H}
\leq C\left( R_t^{-1}(t-r)^{H}+\| B_t -B_r\|^{\alpha}\left(R_t^{-\alpha-1}\vee R_r^{-\alpha-1}\right)r^{H}\right). \end{equation*} Choose $\alpha$, $p_1$ and $q$ such that $\frac{1}{2H}-1<\alpha < (Hd-1)\wedge 1$, $1<p_1<\frac{dH}{\alpha +1}$ and $\frac{1}{p_1}+\frac{1}{q}= 1$. Then, we can write \begin{eqnarray*}
&& \E \left(\| \phi_t\bf{1}_{[0,t]}-\phi_r\bf{1}_{[0,r]} \|_{\EuFrak H}^{\frac{1}{H}} \right) \\
&&\le C \E\left[ R_t^{-\frac{1}{H}}(t-r)+ r\| B_t -B_r\|^{ \frac{\alpha}{H}}\left(R_t^{-\frac{\alpha+1}{H}}\vee R_r^{-\frac{\alpha+1}{H}}\right)\right] \\
& & \leq C \left[C t^{-1}(t-r)+ r\left[ \E \left( \|B_t -B_r\|^{\frac{\alpha q}{H}} \right) \right] ^{\frac{1}{q}}\left[ \E \left( \left(R_t^{-\frac{\alpha+1}{H}}\vee R_r^{-\frac{\alpha+1}{H}}\right)^{p_1} \right)\right]^{\frac{1}{p_1}}\right] \\ & & \leq C\bigg(r^{-1}(t-r)+ r^{-\alpha }(t-r)^{\alpha }\bigg) \\
& & \leq 2C \max(r^{-1}(t-r),r^{-\alpha }(t-r)^{\alpha }), \end{eqnarray*} and inequality (\ref{assump21}) is satisfied with $\alpha_2 =H$ and $\gamma =\alpha H$. Finally, for every $s \le t$, we have \begin{equation*}
\|D^{(i)}_su^{(i)}_t\|_{L^{\frac{1}{H}}(\Omega)} \leq \left(\E (R_t^{-\frac{1}{H}} ) \right)^{H}
= C t^{-H}, \end{equation*} and then assumption (\ref{assump3}) is satisfied with $\alpha_3 = H$. This ends the proof of Theorem \ref{pro2}. \eop
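Let us record where the assumption $2dH^{2}>1$ was used. We have
\begin{equation*}
2dH^{2}>1 \iff \frac{1}{2H}-1<Hd-1, \qquad\text{and, since } H<\tfrac12,\qquad 2dH^{2}>1 \;\Longrightarrow\; Hd>\frac{1}{2H}>1,
\end{equation*}
so it provides the comparison of the endpoints $\frac{1}{2H}-1$ and $Hd-1$ needed in the choice of $\alpha$ above, and it yields $Hd>1$, which was used, via Remark \ref{rem5.1}, to bound $\E(R_t^{-\frac{1}{H}})$.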
\begin{remark} If $Hd>1$, we can show, using the same arguments as in the proof of Theorem \ref{pro2}, that $u^{(i)} \in \mathbb{D}^{1,2}_i (\EuFrak H)$, for $i=1,2,\dots,d$. \end{remark}
We now discuss the properties of the process $\Theta= \{\Theta_t, t\in [0,T]\}$ defined by $$ \Theta_t:= \displaystyle\sum_{i=1}^{d}\displaystyle\int_ 0^t\dfrac{B_s^{(i)}}{R_s}\delta B_s^{(i)}. $$ By Theorem \ref{pro2}, we have that for every $i=1,\dots,d$, $u_t^{(i)}=\dfrac{B_t^{(i)}}{R_t}$ satisfies Hypothesis $\bf{(A.3)}$ if $2dH^2 >1$. Therefore, applying Theorem \ref{the5}, we have the following corollary. \begin{corollary}\label{cor} Suppose that $2dH^2 >1$. Then we have the following \begin{equation*} \begin{array}{ll}
V_n^{\frac{1}{H}}(\Theta) \overset{L^{1}(\Omega)}{{\longrightarrow}} \displaystyle\int_{\R^d}\left[\displaystyle\int_ 0^T \left| \left\langle\dfrac{B_s}{R_s}, \xi \right \rangle \right |^{\frac{1}{H}}ds\right]\nu(d\xi), \end{array} \end{equation*} as $n$ tends to infinity, where $\nu$ is the normal distribution $N(0, I)$ on $\R^d$. \end{corollary} \begin{proposition} \label{prop7} The process $\Theta$ is $H$-self-similar. \end{proposition} \bop. Let $a>0$. By the representation (\ref{rep}) and the self-similarity of the fBm, we have \begin{eqnarray}\notag \Theta_{at} &= & R_{at} -H(d-1)\displaystyle\int_0^{at} \dfrac{s^{2H-1}}{R_s}ds \\ & \overset{d}{=}& a^HR_t -H(d-1)a^H\displaystyle\int_0^t \dfrac{u^{2H-1}}{R_u}du = a^H \Theta_t,\notag \end{eqnarray} where the symbol $\overset{d}{=}$ means that the distributions of both processes are the same. This proves that $\Theta$ is $H$-self-similar. \eop \begin{remark} \begin{enumerate} \item Corollary \ref{cor} and Proposition \ref{prop7} imply that the process $\Theta$ and the fBm have the same $\frac 1H$-variation, if $2dH^2>1$, and they are both $H$-self-similar. These results generalize to the case $H<\frac 12$ those proved by Guerra and Nualart in \cite{GN}. \item Let us note that although $\Theta$ and the one-dimensional fBm are both $H$-self-similar and have the same $\frac 1H$-variation, the process $\Theta$ is not a fractional Brownian motion with Hurst parameter $H$, as shown in \cite{HN}. The proof of this fact is based on the Wiener chaos expansion. In contrast, in the classical Brownian motion case it is well known, from L\'{e}vy's characterization theorem, that the process $\Theta$ is a Brownian motion. \end{enumerate} \end{remark} {\bf Acknowledgements.} This work was carried out during a stay of El Hassan Essaky at Kansas University (Lawrence, KS), as part of the Fulbright program. He would like to thank KU, and especially Professor David Nualart, for the warm welcome and kind hospitality.
\addcontentsline{toc}{chapter}{Bibliography}
\end{document}
\begin{document}
\setcounter{pac}{0} \setcounter{footnote}0
\begin{center}
\phantom.
{\Large\bf On explicit realization of algebra of complex powers of generators of $U_{q}(\mathfrak{sl}(3))$}
{\large Pavel Sultanich \footnote {E-mail: [email protected]}},\\
{\it Moscow Center for Continuous Mathematical Education, 119002, Bolshoy Vlasyevsky Pereulok 11, Moscow, Russia }\\
\end{center}
\begin{abstract}
\noindent
In this note we prove an integral identity involving complex powers of the generators of the quantum group $U_{q}(\mathfrak{sl}(3))$, considered as certain positive operators in the setting of positive principal series representations. This identity is a continuous analog of one of Lusztig's relations between divided powers of generators of quantum groups, which play an important role in the study of irreducible modules \cite{Lu 1}. We also define arbitrary functions of the $U_{q}(\mathfrak{sl}(3))$ generators and give alternative proofs of some known results concerning positive principal series representations of $U_{q}(\mathfrak{sl}(3))$.
\end{abstract}
\section{Introduction}
The notion of the modular double of a quantum group $U_{q}(\mathfrak{g})$ plays an important role in different areas of mathematical physics, such as Liouville theory \cite{PT},\cite{FKV}, the relativistic Toda model \cite{KLSTS} and others. It was introduced by Faddeev in \cite{F2}, who noticed that certain representations of the quantum group $U_{q}(\mathfrak{sl}(2))$, $q = e^{\pi\imath b^{2}}$, have a remarkable duality under $b\leftrightarrow b^{-1}$, and proposed to consider, instead of a single quantum group, an enlarged object generated by two sets of generators $K$, $E$, $F\in U_{q}(\mathfrak{sl}(2))$ and $\tilde{K}$, $\tilde{E}$, $\tilde{F}\in U_{\tilde{q}}(\mathfrak{sl}(2))$, $\tilde{q} = e^{\pi\imath b^{-2}}$. In \cite{BT} it was shown that in a special class of representations of the modular double the rescaled generators $K$, $\mathcal{E} = -\imath(q-q^{-1})E$, $\mathcal{F} = -\imath(q-q^{-1})F$ of $U_{q}(\mathfrak{sl}(2))$ are positive operators. This allows one to use functional calculus and consider arbitrary functions of them. Moreover, the generators $\tilde{K}$, $\tilde{\mathcal{E}} = -\imath(\tilde{q}-\tilde{q}^{-1})\tilde{E}$, $\tilde{\mathcal{F}} = -\imath(\tilde{q}-\tilde{q}^{-1})\tilde{F}$ of the dual group are expressed as non-integer powers of the original generators: \begin{equation} \tilde{K} = K^{b^{-2}}, \end{equation} \begin{equation} \tilde{\mathcal{E}} = \mathcal{E}^{b^{-2}}, \end{equation} \begin{equation} \tilde{\mathcal{F}} = \mathcal{F}^{b^{-2}}. \end{equation} These relations were called transcendental relations. This kind of representation, admitting the transcendental relations, has been generalized to higher ranks \cite{FrIp}, \cite{Ip2} and has been called positive principal series representations. The introduction of particular non-integer powers of the generators of a quantum group naturally leads to the consideration of arbitrary powers of the generators. Thus, the modular double becomes a discrete subalgebra of the algebra generated by arbitrary powers of generators $K^{\imath p}$, $\mathcal{E}^{\imath s}$, $\mathcal{F}^{\imath t}$.
In \cite{Lu 1}, eq.(4.1a)--(4.1j), Lusztig summarized the relations between the divided powers of generators of $U_{q}(\mathfrak{g})$ for simply-laced $\mathfrak{g}$. He used these identities in the study of finite-dimensional modules of $U_{q}(\mathfrak{g})$ in the case where $q$ is a root of unity. Thus, in the study of the algebra of arbitrary complex powers of quantum group generators, the question of the generalization of these relations arises. Some of the relations were found in \cite{Ip1}, eq.(6.16), eq.(6.17). Another integral relation, which is a generalization of Kac's identity \cite{Lu 1}, eq.(4.1a), appeared in \cite{Su1} and was proved in the explicit representation for the case of $U_{q}(\mathfrak{sl}(2))$ in \cite{Su2}. To write it down explicitly, let $G_{b}(x)$ be the quantum dilogarithm \cite{F1}, a special function playing an important role in the study of the algebra of complex powers of generators of $U_{q}(\mathfrak{g})$. Its properties will be outlined in Section 2. Let $K_{j} = q^{H_{j}}$, $\mathcal{E}_{j}$, $\mathcal{F}_{j}$ be $U_{q}(\mathfrak{g})$ generators, which are assumed to be positive operators so that functions of them are defined. Let continuous analogs of divided powers be defined by $A^{(\imath s)} = G_{b}(-\imath bs)A^{\imath s}$. Explicit expressions for the powers of the operators under consideration will be given later. Then the generalized Kac's identity reads
\begin{equation}\begin{split} \mathcal{E}_{j}^{(\imath s)}\mathcal{F}_{j}^{(\imath t)} = \int\limits_{\mathcal{C}} d\tau e^{\pi bQ\tau}\mathcal{F}_{j}^{(\imath t+\imath\tau)}K_{j}^{-\imath \tau} \frac{G_{b}(\imath b\tau)G_{b}(-bH_{j} + \imath b(s+t+\tau))}{G_{b}(-bH_{j}+\imath b(s+t+2\tau))}\mathcal{E}_{j}^{(\imath s + \imath \tau)}, \end{split}\end{equation} where the contour $\mathcal{C}$ goes slightly above the real axis but passes below the pole at $\tau = 0$. In this note we prove this identity for the case of positive principal series representations of $U_{q}(\mathfrak{sl}(3))$.
The paper is organized as follows. In Section 2, we recall the definition of the quantum group $U_{q}(\mathfrak{g})$ and outline the definition and basic properties of the quantum dilogarithm $G_{b}(x)$ and the related function $g_{b}(x)$. In Section 3 we recall the construction of arbitrary functions of generators and the generalized Kac's identity in the case of positive principal series representations of $U_{q}(\mathfrak{sl}(2))$. The main result of the paper is formulated in Theorem 4.1 in Section 4. We define arbitrary functions of the $U_{q}(\mathfrak{sl}(3))$ generators in the positive principal series representations. We prove the generalized Kac's identity using a unitary transform intertwining the formulas for functions of generators of the $U_{q}(\mathfrak{sl}(2))_{i}$ subalgebra corresponding to the simple root $i$ with the formulas for $U_{q}(\mathfrak{sl}(2))$ defined in Section 3. This calculation also provides another proof of Theorem 4.7 in \cite{Ip1}, which states that the positive principal series representation of $U_{q}(\mathfrak{sl}(3))$ decomposes into a direct integral of positive principal series representations of its $U_{q}(\mathfrak{sl}(2))$ subalgebra corresponding to each simple root.
{\bf Acknowledgements:} The research was supported by RSF (project 16-11-10075). I am grateful to A.A.Gerasimov and D.R.Lebedev for helpful discussions and interest in this work.
\section{Preliminaries}
We start with the definition of quantum groups following \cite{ChPr},\cite{Lu book}. Let $(a_{ij})_{1\le i,j\le r}$ be the Cartan matrix of a semisimple Lie algebra $\mathfrak{g}$ of rank $r$. Let $\mathfrak{b}_{\pm}\subset \mathfrak{g}$ be opposite Borel subalgebras. For simplicity let us restrict ourselves to the simply-laced case $a_{ii} = 2$, $a_{ij} = a_{ji} \in \{0,-1\}$, $i\ne j$. Let $U_{q}(\mathfrak{g})$ $(q = e^{\pi\imath b^{2}}$, $b^{2}\in \mathbb{R}\setminus \mathbb{Q})$ be the quantum group with generators $E_{j}$, $F_{j}$, $K_{j} = q^{H_{j}}$, $1\le j \le r$, and relations \begin{equation}
K_{i}K_{j} = K_{j}K_{i}, \end{equation} \begin{equation}
K_{i}E_{j} = q^{a_{ij}}E_{j}K_{i}, \end{equation} \begin{equation}
K_{i}F_{j} = q^{-a_{ij}}F_{j}K_{i}, \end{equation} \begin{equation}
E_{i}F_{j} - F_{j}E_{i} = \delta_{ij}\frac{K_{i} - K_{i}^{-1}}{q-q^{-1}}. \end{equation} For $a_{ij} = 0$ we have \begin{equation}
E_{i}E_{j} = E_{j}E_{i}, \end{equation} \begin{equation}
F_{i}F_{j} = F_{j}F_{i}. \end{equation} For $a_{ij} = -1$ we have \begin{equation}
E_{i}^{2}E_{j} - (q+q^{-1})E_{i}E_{j}E_{i} + E_{j}E_{i}^{2} = 0, \end{equation} \begin{equation}
F_{i}^{2}F_{j} - (q+q^{-1})F_{i}F_{j}F_{i} + F_{j}F_{i}^{2} = 0. \end{equation} The coproduct is given by \begin{equation} \Delta E_{j} = E_{j}\otimes 1 + K_{j}^{-1}\otimes E_{j}, \end{equation} \begin{equation} \Delta F_{j} = 1\otimes F_{j} + F_{j}\otimes K_{j}, \end{equation} \begin{equation} \Delta K_{j} = K_{j}\otimes K_{j}. \end{equation}
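As a quick illustration of how the coproduct is compatible with the defining relations (an elementary check, included only for the reader's convenience), one verifies on the pair $(K_{i},E_{j})$:
\begin{equation*}
\Delta K_{i}\,\Delta E_{j} = (K_{i}\otimes K_{i})(E_{j}\otimes 1 + K_{j}^{-1}\otimes E_{j}) = q^{a_{ij}}\left(E_{j}K_{i}\otimes K_{i} + K_{j}^{-1}K_{i}\otimes E_{j}K_{i}\right) = q^{a_{ij}}\,\Delta E_{j}\,\Delta K_{i},
\end{equation*}
in agreement with the relation $K_{i}E_{j} = q^{a_{ij}}E_{j}K_{i}$.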
The non-compact quantum dilogarithm $G_{b}(z)$ is a special function introduced in \cite{F1} (see also \cite{F0}, \cite{FKV}, \cite{V}, \cite{Ka1}, \cite{KLSTS}, \cite{BT}). It is defined as follows: \begin{equation} \log G_{b}(z) = \log\bar{\zeta}_{b} - \int\limits_{\mathbb{R}+\imath 0} \frac{dt}{t}\frac{e^{zt}}{(1-e^{bt})(1-e^{b^{-1}t})}, \end{equation}
where $Q = b+b^{-1}$ and $\zeta_{b} = e^{\frac{\pi\imath}{4} + \frac{\pi\imath(b^{2}+b^{-2})}{12}}$. Note that $G_{b}(z)$ is closely related to the double sine function $S_{2}(z|\omega_{1},\omega_{2})$; see eq.(A.22) in \cite{KLSTS}.
Below we outline some properties of $G_{b}(z)$\\* 1. The function $G_{b}(z)$ has simple poles and zeros at the points \begin{equation}
z = -n_{1}b -n_{2}b^{-1}, \end{equation} \begin{equation}
z = Q +n_{1}b + n_{2}b^{-1}, \end{equation} respectively, where $n_{1}$,$n_{2}$ are nonnegative integers.\\* 2. $G_{b}(z)$ has the following asymptotic behavior: \begin{equation}
G_{b}(z) \sim
\begin{cases} \bar{\zeta}_{b}, Im z \rightarrow +\infty ,\\ \zeta_{b} e^{\pi\imath z(z-Q)}, Im z \rightarrow -\infty . \end{cases} \end{equation} 3. Functional equation: \begin{equation} G_{b}(z +b^{\pm 1}) = (1-e^{2\pi\imath b^{\pm 1}z})G_{b}(z). \end{equation} 4. Reflection formula: \begin{equation} G_{b}(z)G_{b}(Q-z) = e^{\pi\imath z(z-Q)}. \end{equation}\\* 5. 4-5 integral identity, \cite{V}: \begin{equation} \int d\tau e^{-2\pi\gamma\tau} \frac{G_{b}(\alpha+\imath\tau)G_{b}(\beta+\imath\tau)}{G_{b}(\alpha+\beta+\gamma+\imath\tau)G_{b}(Q+\imath\tau)} = \frac{G_{b}(\alpha)G_{b}(\beta)G_{b}(\gamma)}{G_{b}(\alpha+\gamma)G_{b}(\beta+\gamma)}. \end{equation}\\* Define also the function $g_{b}(x)$ by \begin{equation}
g_{b}(x) = \frac{\bar{\zeta}_{b}}{G_{b}(\frac{Q}{2} +\frac{1}{2\pi\imath b}\log x)}. \end{equation} It has the following properties:\\*
1. $|g_{b}(x)| = 1$, if $x\in \mathbb{R}_{+}$. So if $A$ is a positive self-adjoint operator, then $g_{b}(A)$ is unitary. \\* 2. Fourier transform: \begin{equation} g_{b}(x) = \int d\tau x^{\imath b^{-1}\tau}e^{\pi Q\tau}G_{b}(-\imath\tau). \end{equation}\\* Let $U$, $V$ be positive self-adjoint operators satisfying the relation $UV = q^{2}VU$. Then the following non-commutative identities hold:\\* 3. Quantum exponential relation, \cite{F2}: \begin{equation}
g_{b}(U)g_{b}(V) = g_{b}(U+V). \end{equation} 4. Quantum pentagon relation, \cite{Ka0}: \begin{equation}
g_{b}(V)g_{b}(U) = g_{b}(U)g_{b}(q^{-1}UV)g_{b}(V). \end{equation} 5. Another useful relation, \cite{BT}: \begin{equation}
U + V = g_{b}(qU^{-1}V)Ug^{\ast}_{b}(qU^{-1}V), \end{equation} where the star means hermitian conjugation.
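As an elementary consistency check of the functional equation and the reflection formula above (a short verification that is not needed in the sequel), shift $z\mapsto z+b$ in the reflection formula and use the functional equation twice:
\begin{equation*}
G_{b}(z+b)G_{b}(Q-z-b) = \frac{1-e^{2\pi\imath bz}}{1-e^{2\pi\imath b(Q-z-b)}}\,e^{\pi\imath z(z-Q)} = -e^{2\pi\imath bz}\,e^{\pi\imath z(z-Q)} = e^{\pi\imath (z+b)(z+b-Q)},
\end{equation*}
where we used $e^{2\pi\imath b(Q-z-b)} = e^{-2\pi\imath bz}$ and $e^{\pi\imath(b^{2}-bQ)} = -1$.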
\section{Algebra of complex powers of generators of $U_{q}(\mathfrak{sl}(2))$}
Let $q = e^{\pi\imath b^{2}}$, $(b^{2}\in \mathbb{R}\setminus \mathbb{Q})$ and let $K = q^{H}$, $E$, $F$ be the generators of $U_{q}(\mathfrak{sl}(2))$ subject to the relations \begin{equation} KE = q^{2}EK, \end{equation} \begin{equation} KF = q^{-2}FK, \end{equation} \begin{equation} EF-FE = \frac{K-K^{-1}}{q-q^{-1}}. \end{equation}
Define the rescaled versions of generators $E$, $F$ by \begin{equation}\label{rescaled E} \mathcal{E} = -\imath(q-q^{-1})E, \end{equation} \begin{equation} \mathcal{F} = -\imath(q-q^{-1})F. \end{equation}
Let $\nu$ be a positive real number. There is a well-known representation of $U_{q}(\mathfrak{sl}(2))$ (see e.g.\cite{PT1}): \begin{equation} H = -2\imath b^{-1}u, \end{equation} \begin{equation} K = q^{H} = e^{2\pi bu}, \end{equation} \begin{equation} \mathcal{E} = q^{-\frac{1}{2}}e^{\pi b\nu +\pi bu}e^{-\imath b\partial_{u}} + q^{\frac{1}{2}}e^{-\pi b\nu -\pi bu}e^{-\imath b\partial_{u}}, \end{equation} \begin{equation} \mathcal{F} = q^{-\frac{1}{2}}e^{\pi b\nu-\pi bu}e^{\imath b\partial_{u}} +q^{\frac{1}{2}}e^{-\pi b\nu+\pi bu}e^{\imath b\partial_{u}}. \end{equation} This representation is a particular example of positive principal series representations of $U_{q}(\mathfrak{g})$ \cite{Ip2}.\\* The following lemma was proven for a slightly different representation of $U_{q}(\mathfrak{sl}(2))$ in \cite{BT} and for the representation we use in this paper in \cite{Ip1}. It gives the expressions of the generators of $U_{q}(\mathfrak{sl}(2))$ in a form convenient for the definition of functions of them. It is based on the formula eq.(B.2) in \cite{BT}, stating that given positive self-adjoint operators $U$,$V$ satisfying $UV = q^{2}VU$, one can write \begin{equation} U+V = g_{b}(qU^{-1}V)U(g_{b}(qU^{-1}V))^{-1}. \end{equation} \begin{lem} Let $\mathcal{E}$, $\mathcal{F}$ be the rescaled positive generators of $U_{q}(\mathfrak{sl}(2))$ defined above. They can be written in the following form: \begin{equation} \mathcal{E} = g_{b}(e^{-2\pi b\nu-2\pi bu})e^{\pi b\nu +\pi bu - \imath b\partial_{u}}g^{\ast}_{b}(e^{-2\pi b\nu-2\pi bu}), \end{equation} \begin{equation} \mathcal{F} = g_{b}(e^{-2\pi b\nu+2\pi bu})e^{\pi b\nu-\pi bu+\imath b\partial_{u}}g^{\ast}_{b}(e^{-2\pi b\nu+2\pi bu}). \end{equation} \end{lem} $\noindent {\it Proof}. $ Let $U$, $V$ be positive self-adjoint operators such that $$ UV = q^{2}VU. $$ Then, \cite{BT}: $$ U+V = g_{b}(qU^{-1}V)U(g_{b}(qU^{-1}V))^{-1}. $$ For $\mathcal{E}$ we have $U = q^{-\frac{1}{2}}e^{\pi b\nu +\pi bu}e^{-\imath b\partial_{u}}$, $V = q^{\frac{1}{2}}e^{-\pi b\nu -\pi bu}e^{-\imath b\partial_{u}}$ and $$ qU^{-1}V = q q^{\frac{1}{2}}e^{\imath b\partial_{u}}e^{-\pi b\nu-\pi bu}q^{\frac{1}{2}}e^{-\pi b\nu -\pi bu}e^{-\imath b\partial_{u}} = e^{-2\pi b\nu-2\pi bu}, $$ so $$ \mathcal{E} = g_{b}(e^{-2\pi b\nu-2\pi bu})q^{-\frac{1}{2}}e^{\pi b\nu +\pi bu}e^{-\imath b\partial_{u}}(g_{b}(e^{-2\pi b\nu-2\pi bu}))^{-1}. $$ For $\mathcal{F}$ we have $U = q^{-\frac{1}{2}}e^{\pi b\nu-\pi bu}e^{\imath b\partial_{u}}$, $V = q^{\frac{1}{2}}e^{-\pi b\nu+\pi bu}e^{\imath b\partial_{u}}$, $$ qU^{-1}V = qq^{\frac{1}{2}}e^{-\imath b\partial_{u}}e^{-\pi b\nu+\pi bu}q^{\frac{1}{2}}e^{-\pi b\nu+\pi bu}e^{\imath b\partial_{u}} = e^{-2\pi b\nu +2\pi bu}, $$ $$ \mathcal{F} = g_{b}(e^{-2\pi b\nu+2\pi bu})q^{-\frac{1}{2}}e^{\pi b\nu-\pi bu}e^{\imath b\partial_{u}}(g_{b}(e^{-2\pi b\nu+2\pi bu}))^{-1}. $$ $\Box$
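Note that in this form the positivity of the rescaled generators, which underlies the whole construction, becomes manifest (a simple observation): writing $W = g_{b}(e^{-2\pi b\nu-2\pi bu})$, a unitary operator (see the remark below on $|g_{b}|$), the lemma states that
\begin{equation*}
\mathcal{E} = W\, e^{\pi b\nu +\pi bu - \imath b\partial_{u}}\, W^{\ast},
\end{equation*}
a unitary conjugate of the positive self-adjoint operator $e^{\pi b\nu +\pi bu - \imath b\partial_{u}}$; the same applies to $\mathcal{F}$.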
Multiplication by $g_{b}(e^{-2\pi b\nu-2\pi bu})$ and $g_{b}(e^{-2\pi b\nu+2\pi bu})$ is a unitary transformation, since $|g_{b}(x)| = 1$ for $x\in\mathbb{R}_{+}$. As a consequence, following \cite{BT}, eq.(3.15), eq.(3.21), one can define functions of the generators $\mathcal{E}$ and $\mathcal{F}$ as follows: \begin{de} Let $\varphi(x)$ be a complex-valued function and let $K$, $\mathcal{E}$, $\mathcal{F}$ be $U_{q}(\mathfrak{sl}(2))$ generators in the positive principal series representation. The functions of these operators are defined as follows: \begin{equation}
\varphi(K) = \varphi(e^{2\pi bu}), \end{equation} \begin{equation}\label{function of E} \varphi(\mathcal{E}) = g_{b}(e^{-2\pi b\nu-2\pi bu})\varphi(e^{\pi b\nu +\pi bu - \imath b\partial_{u}})g^{\ast}_{b}(e^{-2\pi b\nu-2\pi bu}), \end{equation} \begin{equation}\label{function of F} \varphi(\mathcal{F}) = g_{b}(e^{-2\pi b\nu+2\pi bu})\varphi(e^{\pi b\nu-\pi bu+\imath b\partial_{u}})g^{\ast}_{b}(e^{-2\pi b\nu+2\pi bu}). \end{equation} \end{de} In particular, the powers of $\mathcal{E}$ and $\mathcal{F}$ are given by \begin{equation}\label{Imaginary power E Uqsl(2)} \mathcal{E}^{\imath s} = g_{b}(e^{-2\pi b\nu-2\pi bu})e^{\pi\imath bs\nu +\pi\imath bsu + bs\partial_{u}}g^{\ast}_{b}(e^{-2\pi b\nu-2\pi bu}), \end{equation} \begin{equation}\label{Imaginary power F Uqsl(2)} \mathcal{F}^{\imath t} = g_{b}(e^{-2\pi b\nu+2\pi bu})e^{\pi\imath bt\nu-\pi\imath btu- bt\partial_{u}}g^{\ast}_{b}(e^{-2\pi b\nu+2\pi bu}). \end{equation}
The formulas for the powers in this particular representation were obtained in \cite{Ip1}.\\* Define the arbitrary divided powers of $A$ by \begin{equation}\label{complex devided power}
A^{(\imath s)} = G_{b}(-\imath bs)A^{\imath s}. \end{equation}
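One convenient consequence of the above definition (a simple observation): since all functions of $\mathcal{E}$ in (\ref{function of E}) are obtained by conjugating functions of one fixed positive operator by one fixed unitary, the usual rules of functional calculus are preserved; for instance,
\begin{equation*}
\mathcal{E}^{\imath s}\,\mathcal{E}^{\imath s'} = g_{b}(e^{-2\pi b\nu-2\pi bu})\,e^{\pi\imath bs\nu +\pi\imath bsu + bs\partial_{u}}\,e^{\pi\imath bs'\nu +\pi\imath bs'u + bs'\partial_{u}}\,g^{\ast}_{b}(e^{-2\pi b\nu-2\pi bu}) = \mathcal{E}^{\imath (s+s')},
\end{equation*}
because the conjugating unitaries cancel in the middle and the two exponents are proportional to the same operator; the same holds for $\mathcal{F}$ and $K$.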
\begin{te}\cite{Su2} The following generalized Kac's identity holds \begin{equation} \mathcal{E}^{(\imath s)}\mathcal{F}^{(\imath t)} = \int\limits_{\mathcal{C}} d\tau e^{\pi bQ\tau}\mathcal{F}^{(\imath t+\imath\tau)}K^{-\imath\tau}\frac{G_{b}(\imath b\tau)G_{b}(-bH+\imath b(s+t+\tau))}{G_{b}(-bH+\imath b(s+t+2\tau))}\mathcal{E}^{(\imath s+\imath\tau)}, \end{equation} where the contour $\mathcal{C}$ goes slightly above the real axis but passes below the pole at $\tau = 0$. \end{te}
\section{Algebra of complex powers of generators of $U_{q}(\mathfrak{sl}(3))$}
In this section we prove the Generalized Kac's identity in the case of positive principal series representations of $U_{q}(\mathfrak{sl}(3))$.
Let $q = e^{\pi\imath b^{2}}$, ($b^{2}\in \mathbb{R}\setminus \mathbb{Q}$). Let $a_{ij}$, ($i$,$j = 1$, $2$) be the Cartan matrix corresponding to $\mathfrak{sl}(3)$ Lie algebra, i.e. $a_{11} = a_{22} = 2$, $a_{12} = a_{21} = -1$. $U_{q}(\mathfrak{sl}(3))$ $(q = e^{\pi\imath b^{2}}$, $b^{2}\in \mathbb{R}\setminus \mathbb{Q})$ is defined by generators $E_{j}$, $F_{j}$, $K_{j} = q^{H_{j}}$, $1\le j \le 2$ and relations \begin{equation}
K_{i}K_{j} = K_{j}K_{i}, \end{equation} \begin{equation}
K_{i}E_{j} = q^{a_{ij}}E_{j}K_{i}, \end{equation} \begin{equation}
K_{i}F_{j} = q^{-a_{ij}}F_{j}K_{i}, \end{equation} \begin{equation}
E_{i}F_{j} - F_{j}E_{i} = \delta_{ij}\frac{K_{i} - K_{i}^{-1}}{q-q^{-1}}. \end{equation} For $i\ne j$ we have \begin{equation}
E_{i}^{2}E_{j} - (q+q^{-1})E_{i}E_{j}E_{i} + E_{j}E_{i}^{2} = 0, \end{equation} \begin{equation}
F_{i}^{2}F_{j} - (q+q^{-1})F_{i}F_{j}F_{i} + F_{j}F_{i}^{2} = 0. \end{equation}
The general construction of the positive principal series representations of $U_{q}(\mathfrak{g})$ in the simply-laced case using Lusztig's data was given in \cite{Ip2}. Let $w_{0}$ be the longest element of the Weyl group. There are different realizations of positive principal series representations corresponding to the different reduced expressions of $w_{0}$. In the case of $U_{q}(\mathfrak{sl}(3))$ there are two options, $w_{0} = s_{1}s_{2}s_{1}$ and $w_{0} = s_{2}s_{1}s_{2}$. In the following we give the explicit formulas for both of these cases.
Let $\mathcal{E}_{j} = -\imath(q-q^{-1})E_{j}$, $\mathcal{F}_{j} = -\imath(q-q^{-1})F_{j}$, $j = 1$, $2$ be the rescaled versions of $U_{q}(\mathfrak{sl}(3))$ generators.
\begin{prop} \cite{Ip2}. Let $K_{j}$, $\mathcal{E}_{j}$, $\mathcal{F}_{j}$ be the rescaled generators of $U_{q}(\mathfrak{sl}(3))$. Let $w_{0} = s_{1}s_{2}s_{1}$ be reduced expression of the longest Weyl element. Let $\nu_{1}$,$\nu_{2}$ be positive real numbers. The positive principal series representation of $U_{q}(\mathfrak{sl}(3))$ corresponding to these data is given by: \begin{equation} K_{1} = e^{-2\pi b\nu_{1}+2\pi bu-\pi bv+2\pi bw}, \end{equation} \begin{equation} K_{2} = e^{-2\pi b\nu_{2}-\pi bu+2\pi bv-\pi bw}, \end{equation} \begin{equation} \mathcal{E}_{1} = e^{\pi bw-\imath b\partial_{w}} + e^{-\pi bw-\imath b\partial_{w}}, \end{equation} \begin{equation} \mathcal{E}_{2} = e^{\pi bv-\pi bw-\imath b\partial_{v}}+ e^{\pi bu -\imath b\partial_{u} -\imath b\partial_{v}+\imath b\partial_{w}} + e^{-\pi bu-\imath b\partial_{u}-\imath b\partial_{v}+\imath b\partial_{w}} + e^{-\pi bv+\pi bw-\imath b\partial_{v}}, \end{equation} \begin{equation} \mathcal{F}_{1} = e^{2\pi b\nu_{1}-2\pi bu+\pi bv-\pi bw+\imath b\partial_{w}} + e^{2\pi b\nu_{1}-\pi bu+\imath b\partial_{u}} + e^{-2\pi b\nu_{1}+\pi bu+\imath b\partial_{u}} + e^{-2\pi b\nu_{1}+2\pi bu-\pi bv+\pi bw+\imath b\partial_{w}}, \end{equation} \begin{equation} \mathcal{F}_{2} = e^{2\pi b\nu_{2}+\pi bu-\pi bv +\imath b\partial_{v}} + e^{-2\pi b\nu_{2}-\pi bu +\pi bv+\imath b\partial_{v}}. \end{equation} \end{prop}
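As an elementary consistency check of these formulas (a sketch using the identity $e^{A}e^{B} = e^{[A,B]}e^{B}e^{A}$ for a central commutator $[A,B]$, which is also used in the proofs below), note that the only factor of $K_{1}$ that fails to commute with $\mathcal{E}_{1}$ is $e^{2\pi bw}$, and $[2\pi bw,-\imath b\partial_{w}] = 2\pi\imath b^{2}$, so that
\begin{equation*}
K_{1}\,e^{\pm\pi bw-\imath b\partial_{w}} = e^{2\pi\imath b^{2}}\,e^{\pm\pi bw-\imath b\partial_{w}}\,K_{1} = q^{2}\,e^{\pm\pi bw-\imath b\partial_{w}}\,K_{1},
\end{equation*}
reproducing the defining relation $K_{1}\mathcal{E}_{1} = q^{a_{11}}\mathcal{E}_{1}K_{1} = q^{2}\mathcal{E}_{1}K_{1}$.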
Similar to $U_{q}(\mathfrak{sl}(2))$ case using eq.(B.2), \cite{BT} we represent the generators in a form convenient for the definition of functions of them \begin{lem} Let $\mathcal{E}_{i}$, $\mathcal{F}_{i}$, $(i = 1,2)$ be the generators of $U_{q}(\mathfrak{sl}(3))$ in the positive principal series representation corresponding to the reduced expression $w_{0} = s_{1}s_{2}s_{1}$. They can be represented in the following form \begin{equation} \mathcal{E}_{1} = g_{b}(e^{-2\pi bw})e^{\pi bw-\imath b\partial_{w}}g_{b}^{\ast}(e^{-2\pi bw}), \end{equation} \begin{multline} \mathcal{E}_{2} = g_{b}(e^{\pi bu-\pi bv +\pi bw-\imath b\partial_{u}+\imath b\partial_{w}})g_{b}(e^{-\pi bu -\pi bv+\pi bw-\imath b\partial_{u}+\imath b\partial_{w}})g_{b}(e^{-2\pi bv+2\pi bw})\times \\ e^{\pi bv-\pi bw-\imath b\partial_{v}}\times \\ g^{\ast}_{b}(e^{-2\pi bv+2\pi bw})g^{\ast}_{b}(e^{-\pi bu -\pi bv+\pi bw-\imath b\partial_{u}+\imath b\partial_{w}}) g^{\ast}_{b}(e^{\pi bu-\pi bv +\pi bw-\imath b\partial_{u}+\imath b\partial_{w}}), \end{multline} \begin{multline} \mathcal{F}_{1} = g_{b}(e^{\pi bu-\pi bv+\pi bw+\imath b\partial_{u}-\imath b\partial_{w}}) g_{b}(e^{-4\pi b\nu_{1}+3\pi bu-\pi bv+\pi bw+\imath b\partial_{u}-\imath b\partial_{w}}) g_{b}(e^{-4\pi b\nu_{1}+4\pi bu-2\pi bv+2\pi bw})\times \\ e^{2\pi b\nu_{1}-2\pi bu+\pi bv-\pi bw+\imath b\partial_{w}}\times \\ g_{b}^{\ast}(e^{-4\pi b\nu_{1}+4\pi bu-2\pi bv+2\pi bw}) g_{b}^{\ast}(e^{-4\pi b\nu_{1}+3\pi bu-\pi bv+\pi bw+\imath b\partial_{u}-\imath b\partial_{w}}) g_{b}^{\ast}(e^{\pi bu-\pi bv+\pi bw+\imath b\partial_{u}-\imath b\partial_{w}}), \end{multline} \begin{equation} \mathcal{F}_{2} = g_{b}(e^{-4\pi b\nu_{2}-2\pi bu+2\pi bv})e^{2\pi b\nu_{2}+\pi bu-\pi bv+\imath b\partial_{v}} g^{\ast}_{b}(e^{-4\pi b\nu_{2}-2\pi bu+2\pi bv}). \end{equation} \end{lem} $\noindent {\it Proof}. $
Let $q=e^{\pi\imath b^{2}}$ and let $U$, $V$ be positive essentially self-adjoint operators subjected to the relation $UV=q^{2}VU$. We will need the following identity,\cite{BT}: $$ U+V = g_{b}(qU^{-1}V)Ug^{\ast}_{b}(qU^{-1}V), $$ and the quantum exponential relation, \cite{F2}: $$ g_{b}(U+V) = g_{b}(U)g_{b}(V). $$ Let us start with $\mathcal{E}_{1}$. It has the following form $$ \mathcal{E}_{1} = U+V, $$ where $U = e^{\pi bw-\imath b\partial_{w}}$, $V = e^{-\pi bw-\imath b\partial_{w}}$. Using the identities $e^{A}e^{B} = e^{\frac{[A,B]}{2}}e^{A+B}$ and $e^{A}e^{B} = e^{[A,B]}e^{B}e^{A}$ in the case when the commutator $[A,B]$ commutes with both $A$ and $B$, and also the identity $[x,\partial_{x}] = -1$, one checks that $$ UV = e^{\pi bw-\imath b\partial_{w}}e^{-\pi bw-\imath b\partial_{w}} = e^{[\pi bw-\imath b\partial_{w},-\pi bw-\imath b\partial_{w}]}e^{-\pi bw-\imath b\partial_{w}}e^{\pi bw-\imath b\partial_{w}} = $$ $$ e^{2\pi\imath b^{2}}e^{-\pi bw-\imath b\partial_{w}}e^{\pi bw-\imath b\partial_{w}} = q^{2}VU, $$ and $$ qU^{-1}V = e^{\pi\imath b^{2}}e^{-\pi bw+\imath b\partial_{w}}e^{-\pi bw-\imath b\partial_{w}}= e^{\pi\imath b^{2}}e^{\frac{1}{2}[-\pi bw+\imath b\partial_{w},-\pi bw-\imath b\partial_{w}]} = e^{-2\pi bw}, $$ so we have $$ \mathcal{E}_{1} = g_{b}(qU^{-1}V)Ug^{\ast}_{b}(qU^{-1}V) = g_{b}(e^{-2\pi bw})e^{\pi bw-\imath b\partial_{w}}g_{b}^{\ast}(e^{-2\pi bw}). $$ For $\mathcal{E}_{2}$ we have $$ \mathcal{E}_{2} = U_{1} + U_{2} + U_{3} + U_{4}, $$ where $U_{1} = e^{\pi bv-\pi bw-\imath b\partial_{v}}$ $U_{2} = e^{\pi bu -\imath b\partial_{u} -\imath b\partial_{v}+\imath b\partial_{w}}$, $U_{3} = e^{-\pi bu-\imath b\partial_{u}-\imath b\partial_{v}+\imath b\partial_{w}}$, $U_{4} = e^{-\pi bv+\pi bw-\imath b\partial_{v}}$. These operators satisfy the relations $$ U_{i}U_{j} = q^{2}U_{j}U_{i}, $$ if $i< j$. We obtain $$ \mathcal{E}_{2} = g_{b}(qU_{1}^{-1}(U_{2}+U_{3}+U_{4}))U_{1}g^{\ast}_{b}(qU_{1}^{-1}(U_{2}+U_{3}+U_{4})) = $$ $$ g_{b}(qU_{1}^{-1}U_{2})g_{b}(qU_{1}^{-1}U_{3})g_{b}(qU_{1}^{-1}U_{4})U_{1}g^{\ast}_{b}(qU_{1}^{-1}U_{4})g^{\ast}_{b}(qU_{1}^{-1}U_{3})g^{\ast}_{b}(qU_{1}^{-1}U_{2}), $$ where in the second equality we have used the quantum exponential relation, provided that operators $(qU_{1}^{-1}U_{i})$ are positive and satisfy $(qU_{1}^{-1}U_{i})(qU_{1}^{-1}U_{j}) = q^{2}(qU_{1}^{-1}U_{j})(qU_{1}^{-1}U_{i})$ for $1<i<j$ . $$ qU_{1}^{-1}U_{2} = e^{\pi\imath b^{2}}e^{-\pi bv+\pi bw+\imath b\partial_{v}}e^{\pi bu -\imath b\partial_{u} -\imath b\partial_{v}+\imath b\partial_{w}} = $$ $$ e^{\pi\imath b^{2}}e^{\frac{1}{2}[-\pi bv+\pi bw+\imath b\partial_{v},\pi bu -\imath b\partial_{u} -\imath b\partial_{v}+\imath b\partial_{w}]}e^{\pi bu-\pi bv+\pi bw-\imath b\partial_{u}+\imath b\partial_{w}} = $$ $$ e^{\pi\imath b^{2}}e^{-\pi\imath b^{2}}e^{\pi bu-\pi bv+\pi bw-\imath b\partial_{u}+\imath b\partial_{w}} = e^{\pi bu-\pi bv+\pi bw-\imath b\partial_{u}+\imath b\partial_{w}}. $$ Analogously we obtain $$ qU_{1}^{-1}U_{3} = e^{-\pi bu-\pi bv+\pi bw-\imath b\partial_{u}+\imath b\partial_{w}}, $$ $$ qU_{1}^{-1}U_{4}= e^{-2\pi bv+2\pi bw}. 
$$ Substituting these expressions into the formula for $\mathcal{E}_{2}$ we obtain $$ \mathcal{E}_{2} = g_{b}(e^{\pi bu-\pi bv +\pi bw-\imath b\partial_{u}+\imath b\partial_{w}}) g_{b}(e^{-\pi bu -\pi bv+\pi bw-\imath b\partial_{u}+\imath b\partial_{w}}) g_{b}(e^{-2\pi bv+2\pi bw})e^{\pi bv-\pi bw-\imath b\partial_{v}}\times $$ $$ g^{\ast}_{b}(e^{-2\pi bv+2\pi bw})g^{\ast}_{b}(e^{-\pi bu -\pi bv+\pi bw-\imath b\partial_{u}+\imath b\partial_{w}}) g^{\ast}_{b}(e^{\pi bu-\pi bv +\pi bw-\imath b\partial_{u}+\imath b\partial_{w}}). $$
For the generator $\mathcal{F}_{1}$ we have $$ \mathcal{F}_{1} = U_{1} + U_{2} + U_{3} + U_{4}, $$ where we used the notations $$ U_{1} = e^{2\pi b\nu_{1}-2\pi bu+\pi bv-\pi bw+\imath b\partial_{w}},$$ $$ U_{2} = e^{2\pi b\nu_{1}-\pi bu+\imath b\partial_{u}}, $$ $$ U_{3} = e^{-2\pi b\nu_{1}+\pi bu+\imath b\partial_{u}}, $$ $$ U_{4} = e^{-2\pi b\nu_{1}+2\pi bu-\pi bv+\pi bw+\imath b\partial_{w}}. $$ Again, for $i<j$ we have the relations $$ U_{i}U_{j} = q^{2}U_{j}U_{i}, $$ and for $1<i<j$ $$ (qU^{-1}_{1}U_{i})(qU^{-1}_{1}U_{j}) = q^{2}(qU^{-1}_{1}U_{j})(qU^{-1}_{1}U_{i}). $$ Explicit expressions for the operators $qU^{-1}_{1}U_{i}$ are given by $$ qU_{1}^{-1}U_{2} = e^{\pi bu-\pi bv+\pi bw+\imath b\partial_{u}-\imath b\partial_{w}}, $$ $$ qU_{1}^{-1}U_{3} = e^{-4\pi b\nu_{1}+3\pi bu-\pi bv+\pi bw+\imath b\partial_{u}-\imath b\partial_{w}}, $$ $$ qU_{1}^{-1}U_{4} = e^{-4\pi b\nu_{1}+4\pi bu-2\pi bv+2\pi bw}. $$ Using the formulas $U+V = g_{b}(qU^{-1}V)Ug^{\ast}_{b}(qU^{-1}V)$ and $g_{b}(U+V) = g_{b}(U)g_{b}(V)$ for positive operators satisfying the relation $UV = q^{2}VU$ we obtain $$ \mathcal{F}_{1} = g_{b}(qU_{1}^{-1}(U_{2}+U_{3}+U_{4}))U_{1}g^{\ast}_{b}(qU_{1}^{-1}(U_{2}+U_{3}+U_{4})) = $$ $$ g_{b}(qU_{1}^{-1}U_{2})g_{b}(qU_{1}^{-1}U_{3})g_{b}(qU_{1}^{-1}U_{4})U_{1}g^{\ast}_{b}(qU_{1}^{-1}U_{4})g^{\ast}_{b}(qU_{1}^{-1}U_{3})g^{\ast}_{b}(qU_{1}^{-1}U_{2}) = $$ $$ g_{b}(e^{\pi bu-\pi bv+\pi bw+\imath b\partial_{u}-\imath b\partial_{w}}) g_{b}(e^{-4\pi b\nu_{1}+3\pi bu-\pi bv+\pi bw+\imath b\partial_{u}-\imath b\partial_{w}}) g_{b}(e^{-4\pi b\nu_{1}+4\pi bu-2\pi bv+2\pi bw})\times $$ $$ e^{2\pi b\nu_{1}-2\pi bu+\pi bv-\pi bw+\imath b\partial_{w}}\times $$ $$ g_{b}^{\ast}(e^{-4\pi b\nu_{1}+4\pi bu-2\pi bv+2\pi bw}) g_{b}^{\ast}(e^{-4\pi b\nu_{1}+3\pi bu-\pi bv+\pi bw+\imath b\partial_{u}-\imath b\partial_{w}}) g_{b}^{\ast}(e^{\pi bu-\pi bv+\pi bw+\imath b\partial_{u}-\imath b\partial_{w}}). $$
Generator $\mathcal{F}_{2}$. $$ \mathcal{F}_{2} = A_{1} + A_{2}, $$ where $$ A_{1} = e^{2\pi b\nu_{2}+\pi bu-\pi bv +\imath b\partial_{v}}, $$ $$ A_{2} = e^{-2\pi b\nu_{2}-\pi bu +\pi bv+\imath b\partial_{v}}. $$ Then $$ qA_{1}^{-1}A_{2} = e^{-4\pi b\nu_{2}-2\pi bu+2\pi bv}, $$ $$ A_{1}A_{2} = q^{2}A_{2}A_{1}, $$ and using the identity $$ A_{1} + A_{2} = g_{b}(qA_{1}^{-1}A_{2})A_{1}g_{b}^{\ast}(qA_{1}^{-1}A_{2}), $$ we obtain $$ \mathcal{F}_{2} = g_{b}(e^{-4\pi b\nu_{2}-2\pi bu+2\pi bv})e^{2\pi b\nu_{2}+\pi bu-\pi bv+\imath b\partial_{v}} g^{\ast}_{b}(e^{-4\pi b\nu_{2}-2\pi bu+2\pi bv}). $$
$\Box$
Similar to eq.(3.15), eq.(3.21) \cite{BT}, we define the functions of the operators in the following way: \begin{de} Let $\varphi(x)$ be a complex-valued function of one variable and let $K_{i}$, $\mathcal{E}_{i}$, $\mathcal{F}_{i}$, $(i = 1,2)$ be the generators of $U_{q}(\mathfrak{sl}(3))$ in the positive principal series representation corresponding to the reduced expression $w_{0} = s_{1}s_{2}s_{1}$. \begin{equation} \varphi(K_{1}) = \varphi(e^{-2\pi b\nu_{1}+2\pi bu-\pi bv+2\pi bw}), \end{equation}
\begin{equation} \varphi(K_{2}) = \varphi(e^{-2\pi b\nu_{2}-\pi bu+2\pi bv-\pi bw}), \end{equation}
\begin{equation} \varphi(\mathcal{E}_{1}) = g_{b}(e^{-2\pi bw})\varphi(e^{\pi bw-\imath b\partial_{w}})g_{b}^{\ast}(e^{-2\pi bw}), \end{equation}
\begin{multline} \varphi(\mathcal{E}_{2}) = g_{b}(e^{\pi bu-\pi bv +\pi bw-\imath b\partial_{u}+\imath b\partial_{w}}) g_{b}(e^{-\pi bu -\pi bv+\pi bw-\imath b\partial_{u}+\imath b\partial_{w}}) g_{b}(e^{-2\pi bv+2\pi bw})\times \\ \varphi(e^{\pi bv-\pi bw-\imath b\partial_{v}})\times \\ g^{\ast}_{b}(e^{-2\pi bv+2\pi bw})g^{\ast}_{b}(e^{-\pi bu -\pi bv+\pi bw-\imath b\partial_{u}+\imath b\partial_{w}}) g^{\ast}_{b}(e^{\pi bu-\pi bv +\pi bw-\imath b\partial_{u}+\imath b\partial_{w}}), \end{multline}
\begin{multline} \varphi(\mathcal{F}_{1}) = g_{b}(e^{\pi bu-\pi bv+\pi bw+\imath b\partial_{u}-\imath b\partial_{w}}) g_{b}(e^{-4\pi b\nu_{1}+3\pi bu-\pi bv+\pi bw+\imath b\partial_{u}-\imath b\partial_{w}}) g_{b}(e^{-4\pi b\nu_{1}+4\pi bu-2\pi bv+2\pi bw})\times \\ \varphi(e^{2\pi b\nu_{1}-2\pi bu+\pi bv-\pi bw+\imath b\partial_{w}})\times \\ g_{b}^{\ast}(e^{-4\pi b\nu_{1}+4\pi bu-2\pi bv+2\pi bw}) g_{b}^{\ast}(e^{-4\pi b\nu_{1}+3\pi bu-\pi bv+\pi bw+\imath b\partial_{u}-\imath b\partial_{w}}) g_{b}^{\ast}(e^{\pi bu-\pi bv+\pi bw+\imath b\partial_{u}-\imath b\partial_{w}}), \end{multline}
\begin{equation} \varphi(\mathcal{F}_{2}) = g_{b}(e^{-4\pi b\nu_{2}-2\pi bu+2\pi bv})\varphi(e^{2\pi b\nu_{2}+\pi bu-\pi bv+\imath b\partial_{v}}) g^{\ast}_{b}(e^{-4\pi b\nu_{2}-2\pi bu+2\pi bv}). \end{equation} \end{de} Choosing in the definition the function to be $\varphi(x) = x^{\imath s}$ we obtain the expressions for arbitrary powers of generators.
In the following we repeat the same steps for representations corresponding to another choice of reduced expression $w_{0} = s_{2}s_{1}s_{2}$. \begin{prop}\cite{Ip2}. The positive principal series representation of $U_{q}(\mathfrak{sl}(3))$ corresponding to the reduced expression of the longest Weyl element $w_{0} = s_{2}s_{1}s_{2}$ and positive real parameters $\nu_{1}$,$\nu_{2}$ is given by \begin{equation} K_{1} = e^{-2\pi b\nu_{1}-\pi bu+2\pi bv-\pi bw}, \end{equation}
\begin{equation} K_{2} = e^{-2\pi b\nu_{2}+2\pi bu-\pi bv+2\pi bw}, \end{equation}
\begin{equation} \mathcal{E}_{1} = e^{\pi bv-\pi bw-\imath b\partial_{v}}+ e^{\pi bu -\imath b\partial_{u} -\imath b\partial_{v}+\imath b\partial_{w}} + e^{-\pi bu-\imath b\partial_{u}-\imath b\partial_{v}+\imath b\partial_{w}} + e^{-\pi bv+\pi bw-\imath b\partial_{v}}, \end{equation}
\begin{equation} \mathcal{E}_{2} = e^{\pi bw-\imath b\partial_{w}} + e^{-\pi bw-\imath b\partial_{w}}, \end{equation}
\begin{equation} \mathcal{F}_{1} = e^{2\pi b\nu_{1}+\pi bu-\pi bv +\imath b\partial_{v}} + e^{-2\pi b\nu_{1}-\pi bu +\pi bv+\imath b\partial_{v}}, \end{equation}
\begin{equation} \mathcal{F}_{2} = e^{2\pi b\nu_{2}-2\pi bu+\pi bv-\pi bw+\imath b\partial_{w}} + e^{2\pi b\nu_{2}-\pi bu+\imath b\partial_{u}} + e^{-2\pi b\nu_{2}+\pi bu+\imath b\partial_{u}} + e^{-2\pi b\nu_{2}+2\pi bu-\pi bv+\pi bw+\imath b\partial_{w}}. \end{equation} \end{prop}
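It may be worth pointing out (an observation read off directly by comparing the two propositions) that the two realizations are related by relabeling: for $X\in\{K,\mathcal{E},\mathcal{F}\}$ and $j=1,2$,
\begin{equation*}
X_{j}\Big|_{w_{0}=s_{2}s_{1}s_{2};\,(\nu_{1},\nu_{2})} = X_{3-j}\Big|_{w_{0}=s_{1}s_{2}s_{1};\,(\nu_{2},\nu_{1})},
\end{equation*}
i.e. the $s_{2}s_{1}s_{2}$ formulas are obtained from the $s_{1}s_{2}s_{1}$ ones by exchanging the labels of the simple roots $1\leftrightarrow 2$ together with $\nu_{1}\leftrightarrow\nu_{2}$. In particular, the formulas of the next lemma and definition can be read off from the corresponding statements for $w_{0}=s_{1}s_{2}s_{1}$ above by the same relabeling.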
\begin{lem} Let $\mathcal{E}_{i}$, $\mathcal{F}_{i}$, $(i = 1,2)$ be the generators of $U_{q}(\mathfrak{sl}(3))$ in the positive principal series representation corresponding to the reduced expression $w_{0} = s_{2}s_{1}s_{2}$. They can be represented in the following form
\begin{multline} \mathcal{E}_{1} = g_{b}(e^{\pi bu-\pi bv +\pi bw-\imath b\partial_{u}+\imath b\partial_{w}}) g_{b}(e^{-\pi bu -\pi bv+\pi bw-\imath b\partial_{u}+\imath b\partial_{w}}) g_{b}(e^{-2\pi bv+2\pi bw})\times \\ e^{\pi bv-\pi bw-\imath b\partial_{v}}\times \\ g^{\ast}_{b}(e^{-2\pi bv+2\pi bw})g^{\ast}_{b}(e^{-\pi bu -\pi bv+\pi bw-\imath b\partial_{u}+\imath b\partial_{w}}) g^{\ast}_{b}(e^{\pi bu-\pi bv +\pi bw-\imath b\partial_{u}+\imath b\partial_{w}}), \end{multline}
\begin{equation} \mathcal{E}_{2} = g_{b}(e^{-2\pi bw})e^{\pi bw-\imath b\partial_{w}}g_{b}^{\ast}(e^{-2\pi bw}), \end{equation}
\begin{equation} \mathcal{F}_{1} = g_{b}(e^{-4\pi b\nu_{1}-2\pi bu+2\pi bv})e^{2\pi b\nu_{1} +\pi bu-\pi bv+\imath b\partial_{v}} g^{\ast}_{b}(e^{-4\pi b\nu_{1}-2\pi bu+2\pi bv}), \end{equation}
\begin{multline} \mathcal{F}_{2} = g_{b}(e^{\pi bu-\pi bv+\pi bw+\imath b\partial_{u}-\imath b\partial_{w}}) g_{b}(e^{-4\pi b\nu_{2}+3\pi bu-\pi bv+\pi bw+\imath b\partial_{u}-\imath b\partial_{w}}) g_{b}(e^{-4\pi b\nu_{2}+4\pi bu-2\pi bv+2\pi bw})\times \\ e^{2\pi b\nu_{2}-2\pi bu+\pi bv-\pi bw+\imath b\partial_{w}}\times \\ g_{b}^{\ast}(e^{-4\pi b\nu_{2}+4\pi bu-2\pi bv+2\pi bw}) g_{b}^{\ast}(e^{-4\pi b\nu_{2}+3\pi bu-\pi bv+\pi bw+\imath b\partial_{u}-\imath b\partial_{w}}) g_{b}^{\ast}(e^{\pi bu-\pi bv+\pi bw+\imath b\partial_{u}-\imath b\partial_{w}}). \end{multline} \end{lem}
\begin{de} Let $\varphi(x)$ be a complex-valued function of one variable and let $K_{i}$, $\mathcal{E}_{i}$, $\mathcal{F}_{i}$, $(i = 1,2)$ be the generators of $U_{q}(\mathfrak{sl}(3))$ in the positive principal series representation corresponding to the reduced expression $w_{0} = s_{2}s_{1}s_{2}$. The functions of the generators are defined as follows:
\begin{equation} \varphi(K_{1}) = \varphi(e^{-2\pi b\nu_{1}-\pi bu+2\pi bv-\pi bw}), \end{equation}
\begin{equation} \varphi(K_{2}) = \varphi(e^{-2\pi b\nu_{2}+2\pi bu-\pi bv+2\pi bw}), \end{equation}
\begin{multline} \varphi(\mathcal{E}_{1}) = g_{b}(e^{\pi bu-\pi bv +\pi bw-\imath b\partial_{u}+\imath b\partial_{w}}) g_{b}(e^{-\pi bu -\pi bv+\pi bw-\imath b\partial_{u}+\imath b\partial_{w}}) g_{b}(e^{-2\pi bv+2\pi bw})\times \\ \varphi(e^{\pi bv-\pi bw-\imath b\partial_{v}})\times \\ g^{\ast}_{b}(e^{-2\pi bv+2\pi bw})g^{\ast}_{b}(e^{-\pi bu -\pi bv+\pi bw-\imath b\partial_{u}+\imath b\partial_{w}}) g^{\ast}_{b}(e^{\pi bu-\pi bv +\pi bw-\imath b\partial_{u}+\imath b\partial_{w}}), \end{multline}
\begin{equation} \varphi(\mathcal{E}_{2}) = g_{b}(e^{-2\pi bw})\varphi(e^{\pi bw-\imath b\partial_{w}})g_{b}^{\ast}(e^{-2\pi bw}), \end{equation}
\begin{equation} \varphi(\mathcal{F}_{1}) = g_{b}(e^{-4\pi b\nu_{1}-2\pi bu+2\pi bv})\varphi(e^{2\pi b\nu_{1} +\pi bu-\pi bv+\imath b\partial_{v}}) g^{\ast}_{b}(e^{-4\pi b\nu_{1}-2\pi bu+2\pi bv}), \end{equation}
\begin{multline} \varphi(\mathcal{F}_{2}) = g_{b}(e^{\pi bu-\pi bv+\pi bw+\imath b\partial_{u}-\imath b\partial_{w}}) g_{b}(e^{-4\pi b\nu_{2}+3\pi bu-\pi bv+\pi bw+\imath b\partial_{u}-\imath b\partial_{w}}) g_{b}(e^{-4\pi b\nu_{2}+4\pi bu-2\pi bv+2\pi bw})\times \\ \varphi(e^{2\pi b\nu_{2}-2\pi bu+\pi bv-\pi bw+\imath b\partial_{w}})\times \\ g_{b}^{\ast}(e^{-4\pi b\nu_{2}+4\pi bu-2\pi bv+2\pi bw}) g_{b}^{\ast}(e^{-4\pi b\nu_{2}+3\pi bu-\pi bv+\pi bw+\imath b\partial_{u}-\imath b\partial_{w}}) g_{b}^{\ast}(e^{\pi bu-\pi bv+\pi bw+\imath b\partial_{u}-\imath b\partial_{w}}). \end{multline}
\end{de} Assuming $\varphi(x) = x^{\imath s}$, we obtain the expressions for arbitrary powers of generators.\\* Recall the definition of the divided powers of $A$ \begin{equation}
A^{(\imath s)} = G_{b}(-\imath bs)A^{\imath s}. \end{equation}
Now, after we have defined arbitrary divided powers of the generators of $U_{q}(\mathfrak{sl}(3))$ in the positive principal series representations corresponding to both reduced expressions of the Weyl element, we can state the main theorem. \begin{te} Let $q = e^{\pi\imath b^{2}}$, $(b^{2}\in \mathbb{R}\setminus \mathbb{Q})$ and let $K_{j} = q^{H_{j}}$, $\mathcal{E}_{j} = -\imath (q-q^{-1})E_{j}$, $\mathcal{F}_{j} = -\imath (q-q^{-1})F_{j}$, $1\le j\le 2$ be $U_{q}(\mathfrak{sl}(3))$ generators in the positive principal series representation corresponding to any reduced expression of the Weyl element. Then the following generalized Kac's identity holds:
\begin{equation}\begin{split} \mathcal{E}_{j}^{(\imath s)}\mathcal{F}_{j}^{(\imath t)} = \int\limits_{\mathcal{C}} d\tau e^{\pi bQ\tau}\mathcal{F}_{j}^{(\imath t+\imath\tau)}K_{j}^{-\imath \tau} \frac{G_{b}(\imath b\tau)G_{b}(-bH_{j} + \imath b(s+t+\tau))}{G_{b}(-bH_{j}+\imath b(s+t+2\tau))}\mathcal{E}_{j}^{(\imath s + \imath \tau)}, \end{split}\end{equation}
where the contour $\mathcal{C}$ goes slightly above the real axis but passes below the pole at $\tau = 0$. \end{te} $\noindent {\it Proof}. $ The proof follows from the results stated in Proposition 4.3, Lemma 4.3, Proposition 4.4, Corollary 4.1, Corollary 4.2, Corollary 4.3.
The next statement (Theorem 5.7 in \cite{Ip2}) establishes the unitary equivalence of the positive principal series representations corresponding to different reduced expressions of the longest Weyl element. We give here another proof of this result for the case of $U_{q}(\mathfrak{sl}(3))$, which allows one to explicitly illustrate its validity for arbitrary functions of the generators. The proof makes extensive use of the pentagon identity \cite{Ka0}, which states that for positive self-adjoint operators $U$,$V$ satisfying the relation $UV = q^{2}VU$ we have \begin{equation}
g_{b}(V)g_{b}(U) = g_{b}(U)g_{b}(q^{-1}UV)g_{b}(V). \end{equation}
\begin{prop} Let $X_{s_{1}s_{2}s_{1}}$ be any generator of $U_{q}(\mathfrak{sl}(3))$ in the positive principal series representation corresponding to the reduced expression $w_{0} = s_{1}s_{2}s_{1}$. Let $X_{s_{2}s_{1}s_{2}}$ be the same generator in the positive principal series representation corresponding to the reduced expression $w_{0} = s_{2}s_{1}s_{2}$ and let $\varphi(x)$ be a complex-valued function. The unitary transformation defined by \begin{equation} U = (uv)(vw)e^{-w\partial_{u}}e^{w\partial_{v}}g_{b}(e^{-\pi bu+\pi bv-\pi bw-\imath b\partial_{u}+\imath b\partial_{w}}) g^{\ast}_{b}(e^{\pi bu-\pi bv+\pi bw-\imath b\partial_{u}+\imath b\partial_{w}}). \end{equation} relates the functions of generators in these two representations by $$ \varphi(X_{s_{2}s_{1}s_{2}}) = U\varphi(X_{s_{1}s_{2}s_{1}})U^{\ast}. $$ \end{prop} $\noindent {\it Proof}. $ Let $\mathcal{E}_{1}$ be the generator in $s_{1}s_{2}s_{1}$ representation. Its unitary transform is given by $$ U\varphi(\mathcal{E}_{1})U^{\ast} = (uv)(vw)e^{-w\partial_{u}}e^{w\partial_{v}}g_{b}(e^{-\pi bu+\pi bv-\pi bw-\imath b\partial_{u}+\imath b\partial_{w}}) g^{\ast}_{b}(e^{\pi bu-\pi bv+\pi bw-\imath b\partial_{u}+\imath b\partial_{w}})\times $$ $$ g_{b}(e^{-2\pi bw})\varphi(e^{\pi bw-\imath b\partial_{w}})(h.c.) = $$ $$ (uv)(vw)e^{-w\partial_{u}}e^{w\partial_{v}}g^{\ast}_{b}(e^{\pi bu-\pi bv+\pi bw-\imath b\partial_{u}+\imath b\partial_{w}})g_{b}(e^{-\pi bu+\pi bv-\pi bw-\imath b\partial_{u}+\imath b\partial_{w}}) \times $$ $$ g_{b}(e^{-2\pi bw})\varphi(e^{\pi bw-\imath b\partial_{w}})(h.c.). $$ We have used the commutation of the factors $g^{\ast}_{b}(e^{\pi bu-\pi bv+\pi bw-\imath b\partial_{u}+\imath b\partial_{w}})$ and $g_{b}(e^{-\pi bu+\pi bv-\pi bw-\imath b\partial_{u}+\imath b\partial_{w}})$. Now, according to the identity $g_{b}(x)g_{b}(\frac{1}{x}) = e^{\frac{\pi\imath}{4\pi^{2}b^{2}}\log^{2}x}$ we have $$ g^{\ast}_{b}(e^{\pi bu-\pi bv+\pi bw-\imath b\partial_{u}+\imath b\partial_{w}}) = e^{-\frac{\pi\imath}{4\pi^2 b^{2}}(\pi bu-\pi bv+\pi bw-\imath b\partial_{u}+\imath b\partial_{w})^{2}} g_{b}(e^{-\pi bu+\pi bv-\pi bw+\imath b\partial_{u}-\imath b\partial_{w}}). $$ Let $$ A_{1} = e^{-2\pi bw}, $$ $$ A_{2} = e^{-\pi bu+\pi bv-\pi bw-\imath b\partial_{u}+\imath b\partial_{w}}, $$ Then $$ q^{-1}A_{1}A_{2} = e^{-\pi bu+\pi bv-3\pi bw-\imath b\partial_{u}+\imath b\partial_{w}} $$ and $A_{1}A_{2} = q^{2}A_{2}A_{1}$. Using the pentagon identity, \cite{Ka0}: $$ g_{b}(A_{2})g_{b}(A_{1}) = g_{b}(A_{1})g_{b}(q^{-1}A_{1}A_{2})g_{b}(A_{2}), $$ we have $$ g_{b}(e^{-\pi bu+\pi bv-\pi bw-\imath b\partial_{u}+\imath b\partial_{w}})g_{b}(e^{-2\pi bw}) = g_{b}(e^{-2\pi bw})g_{b}(e^{-\pi bu+\pi bv-3\pi bw-\imath b\partial_{u}+\imath b\partial_{w}})g_{b}(e^{-\pi bu+\pi bv-\pi bw-\imath b\partial_{u}+\imath b\partial_{w}}). $$ Substituting all these into the expression for $U\varphi(\mathcal{E}_{1})U^{\ast}$ we obtain $$ U\varphi(\mathcal{E}_{1})U^{\ast} = $$ $$ (uv)(vw)e^{-w\partial_{u}}e^{w\partial_{v}} e^{-\frac{\pi\imath}{4\pi^2 b^{2}}(\pi bu-\pi bv+\pi bw-\imath b\partial_{u}+\imath b\partial_{w})^{2}} g_{b}(e^{-\pi bu+\pi bv-\pi bw+\imath b\partial_{u}-\imath b\partial_{w}})\times $$ $$ g_{b}(e^{-2\pi bw})g_{b}(e^{-\pi bu+\pi bv-3\pi bw-\imath b\partial_{u}+\imath b\partial_{w}})g_{b}(e^{-\pi bu+\pi bv-\pi bw-\imath b\partial_{u}+\imath b\partial_{w}}) \varphi(e^{\pi bw-\imath b\partial_{w}})(h.c.). 
$$ Note, that $g_{b}(e^{-\pi bu+\pi bv-\pi bw-\imath b\partial_{u}+\imath b\partial_{w}})$ and $\varphi(e^{\pi bw-\imath b\partial_{w}})$ commute, so the quantum dilogarithm passes through and cancels with its hermitian conjugate: $$ U\varphi(\mathcal{E}_{1})U^{\ast} = $$ $$ (uv)(vw)e^{-w\partial_{u}}e^{w\partial_{v}} e^{-\frac{\pi\imath}{4\pi^2 b^{2}}(\pi bu-\pi bv+\pi bw-\imath b\partial_{u}+\imath b\partial_{w})^{2}} g_{b}(e^{-\pi bu+\pi bv-\pi bw+\imath b\partial_{u}-\imath b\partial_{w}})\times $$ $$ g_{b}(e^{-2\pi bw})g_{b}(e^{-\pi bu+\pi bv-3\pi bw-\imath b\partial_{u}+\imath b\partial_{w}}) \varphi(e^{\pi bw-\imath b\partial_{w}})(h.c.). $$ Let $A$, $B$ be self-adjoint operators satisfying the relation $[A,B] = c$, where $c$ is a number. Let $f(x)$ be a function and $\alpha$ a number. Then $$ e^{\alpha B^{2}}f(A) = f(A -2c\alpha B)e^{\alpha B^{2}}. $$ Using this identity to push the exponent $e^{-\frac{\pi\imath}{4\pi^2 b^{2}}(\pi bu-\pi bv+\pi bw-\imath b\partial_{u}+\imath b\partial_{w})^{2}}$ to the right we obtain $$ U\varphi(\mathcal{E}_{1})U^{\ast} = (uv)(vw)e^{-w\partial_{u}}e^{w\partial_{v}} g_{b}(e^{-\pi bu+\pi bv-\pi bw+\imath b\partial_{u}-\imath b\partial_{w}})\times $$ $$ g_{b}(e^{-\pi bu+\pi bv-3\pi bw+\imath b\partial_{u}-\imath b\partial_{w}}) g_{b}(e^{-2\pi bu+2\pi bv-4\pi bw}) \varphi(e^{\pi bu-\pi bv+2\pi bw-\imath b\partial_{u}})(h.c.) $$ Now use the relation $$ e^{\alpha x\partial_{y}}f(y,\partial_{x}) = f(y+\alpha x,\partial_{x}-\alpha\partial_{y})e^{\alpha x\partial_{y}}, $$ to push the exponents $e^{-w\partial_{u}}$ and $e^{w\partial_{v}}$ to the right: $$ U\varphi(\mathcal{E}_{1})U^{\ast} = (uv)(vw)g_{b}(e^{-\pi bu+\pi bv+\pi bw+\imath b\partial_{v}-\imath b\partial_{w}})\times $$ $$ g_{b}(e^{-\pi bu+\pi bv-\pi bw+\imath b\partial_{v}-\imath b\partial_{w}}) g_{b}(e^{-2\pi bu+2\pi bv}) \varphi(e^{\pi bu-\pi bv-\imath b\partial_{u}})(h.c.) = $$ $$ g_{b}(e^{\pi bu-\pi bv+\pi bw-\imath b\partial_{u}+\imath b\partial_{w}}) g_{b}(e^{-\pi bu-\pi bv+\pi bw-\imath b\partial_{u}+\imath b\partial_{w}}) g_{b}(e^{-2\pi bv+2\pi bw}) \varphi(e^{\pi bv-\pi bw-\imath b\partial_{v}})(h.c.) $$
Let $\mathcal{E}_{2}$ be the generator in $s_{1}s_{2}s_{1}$ representation. Then $$ U\varphi(\mathcal{E}_{2})U^{\ast} = (uv)(vw)e^{-w\partial_{u}}e^{w\partial_{v}}g_{b}(e^{-\pi bu+\pi bv-\pi bw-\imath b\partial_{u}+\imath b\partial_{w}}) g^{\ast}_{b}(e^{\pi bu-\pi bv+\pi bw-\imath b\partial_{u}+\imath b\partial_{w}})\times $$ $$ g_{b}(e^{\pi bu-\pi bv +\pi bw-\imath b\partial_{u}+\imath b\partial_{w}}) g_{b}(e^{-\pi bu -\pi bv+\pi bw-\imath b\partial_{u}+\imath b\partial_{w}}) g_{b}(e^{-2\pi bv+2\pi bw}) \varphi(e^{\pi bv-\pi bw-\imath b\partial_{v}})\times (h.c.), $$ where by $(h.c.)$ we denoted the hermitian conjugate operator of everything that stands before $\varphi(e^{\pi bv-\pi bw-\imath b\partial_{v}})$. Noticing that $$ g^{\ast}_{b}(e^{\pi bu-\pi bv+\pi bw-\imath b\partial_{u}+\imath b\partial_{w}}) g_{b}(e^{\pi bu-\pi bv +\pi bw-\imath b\partial_{u}+\imath b\partial_{w}}) = 1, $$ we obtain $$ U\varphi(\mathcal{E}_{2})U^{\ast} = (uv)(vw)e^{-w\partial_{u}}e^{w\partial_{v}}\times $$ $$ g_{b}(e^{-\pi bu+\pi bv-\pi bw-\imath b\partial_{u}+\imath b\partial_{w}}) g_{b}(e^{-\pi bu -\pi bv+\pi bw-\imath b\partial_{u}+\imath b\partial_{w}}) g_{b}(e^{-2\pi bv+2\pi bw}) \varphi(e^{\pi bv-\pi bw-\imath b\partial_{v}})\times (h.c.). $$ Let $A_{1}$, $A_{2}$ be as follows $$ A_{1} = e^{-\pi bu+\pi bv-\pi bw-\imath b\partial_{u}+\imath b\partial_{w}}, $$ $$ A_{2} = e^{-2\pi bv+2\pi bw}, $$ Then $$ q^{-1}A_{1}A_{2} = e^{-\pi bu -\pi bv+\pi bw-\imath b\partial_{u}+\imath b\partial_{w}}, $$ moreover $$ A_{1}A_{2} = q^{2}A_{2}A_{1}, $$ and we can apply the pentagon identity \cite{Ka0}: $$ g_{b}(A_{1})g_{b}(q^{-1}A_{1}A_{2})g_{b}(A_{2}) = g_{b}(A_{2})g_{b}(A_{1}), $$ which leads to the following result $$ U\varphi(\mathcal{E}_{2})U^{\ast} = (uv)(vw)e^{-w\partial_{u}}e^{w\partial_{v}} g_{b}(e^{-2\pi bv+2\pi bw})g_{b}(e^{-\pi bu+\pi bv-\pi bw-\imath b\partial_{u}+\imath b\partial_{w}}) \varphi(e^{\pi bv-\pi bw-\imath b\partial_{v}})\times (h.c.). $$ Operators $g_{b}(e^{-\pi bu+\pi bv-\pi bw-\imath b\partial_{u}+\imath b\partial_{w}})$ and $\varphi(e^{\pi bv-\pi bw-\imath b\partial_{v}})$ commute, so the quantum dilogarithm passes through and cancels with its conjugate. We obtain $$ U\varphi(\mathcal{E}_{2})U^{\ast} = (uv)(vw)e^{-w\partial_{u}}e^{w\partial_{v}} g_{b}(e^{-2\pi bv+2\pi bw})\varphi(e^{\pi bv-\pi bw-\imath b\partial_{v}})\times (h.c.) = $$ $$ (uv)(vw)g_{b}(e^{-2\pi bv})\varphi(e^{\pi bv-\imath b\partial_{v}})g^{\ast}_{b}(e^{-2\pi bv})(vw)(uv) = g_{b}(e^{-2\pi bw})\varphi(e^{\pi bw-\imath b\partial_{w}})g^{\ast}_{b}(e^{-2\pi bw}). $$
Now consider the case of $\mathcal{F}_{1}$.
$$ U\varphi(\mathcal{F}_{1})U^{\ast} = $$ $$ (uv)(vw)e^{-w\partial_{u}}e^{w\partial_{v}}g_{b}(e^{-\pi bu+\pi bv-\pi bw-\imath b\partial_{u}+\imath b\partial_{w}}) g^{\ast}_{b}(e^{\pi bu-\pi bv+\pi bw-\imath b\partial_{u}+\imath b\partial_{w}}) g_{b}(e^{\pi bu-\pi bv+\pi bw+\imath b\partial_{u}-\imath b\partial_{w}})\times $$ $$ g_{b}(e^{-4\pi b\nu_{1}+3\pi bu-\pi bv+\pi bw+\imath b\partial_{u}-\imath b\partial_{w}}) g_{b}(e^{-4\pi b\nu_{1}+4\pi bu-2\pi bv+2\pi bw}) \varphi(e^{2\pi b\nu_{1}-2\pi bu+\pi bv-\pi bw+\imath b\partial_{w}})(h.c.) $$ The factors $g_{b}(e^{-\pi bu+\pi bv-\pi bw-\imath b\partial_{u}+\imath b\partial_{w}})$ and $g^{\ast}_{b}(e^{\pi bu-\pi bv+\pi bw-\imath b\partial_{u}+\imath b\partial_{w}})$ commute so we can change their order. After that we observe that $$ g_{b}(e^{-\pi bu+\pi bv-\pi bw-\imath b\partial_{u}+\imath b\partial_{w}}) g_{b}(e^{\pi bu-\pi bv+\pi bw+\imath b\partial_{u}-\imath b\partial_{w}}) = e^{\frac{\pi\imath}{4\pi^{2}b^{2}}(\pi bu-\pi bv+\pi bw+\imath b\partial_{u}-\imath b\partial_{w})^{2}}, $$ which follows from the identity $g_{b}(x)g_{b}(\frac{1}{x}) = e^{\frac{\pi\imath}{4\pi^{2}b^{2}}\log^{2}x}$. Doing these we obtain $$ U\varphi(\mathcal{F}_{1})U^{\ast} = $$ $$ (uv)(vw)e^{-w\partial_{u}}e^{w\partial_{v}} e^{\frac{\pi\imath}{4\pi^{2}b^{2}}(\pi bu-\pi bv+\pi bw+\imath b\partial_{u}-\imath b\partial_{w})^{2}} g^{\ast}_{b}(e^{\pi bu-\pi bv+\pi bw-\imath b\partial_{u}+\imath b\partial_{w}})\times $$ $$ g_{b}(e^{-4\pi b\nu_{1}+3\pi bu-\pi bv+\pi bw+\imath b\partial_{u}-\imath b\partial_{w}}) g_{b}(e^{-4\pi b\nu_{1}+4\pi bu-2\pi bv+2\pi bw}) \varphi(e^{2\pi b\nu_{1}-2\pi bu+\pi bv-\pi bw+\imath b\partial_{w}})(h.c.). $$ Again using the identity $g_{b}(x)g_{b}(\frac{1}{x}) = e^{\frac{\pi\imath}{4\pi^{2}b^{2}}\log^{2}x}$ to rewrite $$ g^{\ast}_{b}(e^{\pi bu-\pi bv+\pi bw-\imath b\partial_{u}+\imath b\partial_{w}}) = e^{-\frac{\pi\imath}{4\pi^{2}b^{2}}(\pi bu-\pi bv+\pi bw-\imath b\partial_{u}+\imath b\partial_{w})^{2}} g_{b}(e^{-\pi bu+\pi bv-\pi bw+\imath b\partial_{u}-\imath b\partial_{w}}), $$ we have $$ U\varphi(\mathcal{F}_{1})U^{\ast} = $$ $$ (uv)(vw)e^{-w\partial_{u}}e^{w\partial_{v}} e^{\frac{\pi\imath}{4\pi^{2}b^{2}}(\pi bu-\pi bv+\pi bw+\imath b\partial_{u}-\imath b\partial_{w})^{2}} e^{-\frac{\pi\imath}{4\pi^{2}b^{2}}(\pi bu-\pi bv+\pi bw-\imath b\partial_{u}+\imath b\partial_{w})^{2}} g_{b}(e^{-\pi bu+\pi bv-\pi bw+\imath b\partial_{u}-\imath b\partial_{w}})\times $$ $$ g_{b}(e^{-4\pi b\nu_{1}+3\pi bu-\pi bv+\pi bw+\imath b\partial_{u}-\imath b\partial_{w}}) g_{b}(e^{-4\pi b\nu_{1}+4\pi bu-2\pi bv+2\pi bw}) \varphi(e^{2\pi b\nu_{1}-2\pi bu+\pi bv-\pi bw+\imath b\partial_{w}})(h.c.). $$ Let $$ A_{1} = e^{-\pi bu+\pi bv-\pi bw+\imath b\partial_{u}-\imath b\partial_{w}}, $$ $$ A_{2} = e^{-4\pi b\nu_{1}+4\pi bu-2\pi bv+2\pi bw}. 
$$ Then $$ q^{-1}A_{1}A_{2} = e^{-4\pi b\nu_{1}+3\pi bu-\pi bv+\pi bw+\imath b\partial_{u}-\imath b\partial_{w}}, $$ $$ A_{1}A_{2} = q^{2}A_{2}A_{1}, $$ and we can apply the pentagon identity $g_{b}(A_{1})g_{b}(q^{-1}A_{1}A_{2})g_{b}(A_{2}) = g_{b}(A_{2})g_{b}(A_{1})$: $$ g_{b}(e^{-\pi bu+\pi bv-\pi bw+\imath b\partial_{u}-\imath b\partial_{w}}) g_{b}(e^{-4\pi b\nu_{1}+3\pi bu-\pi bv+\pi bw+\imath b\partial_{u}-\imath b\partial_{w}}) g_{b}(e^{-4\pi b\nu_{1}+4\pi bu-2\pi bv+2\pi bw}) = $$ $$ g_{b}(e^{-4\pi b\nu_{1}+4\pi bu-2\pi bv+2\pi bw})g_{b}(e^{-\pi bu+\pi bv-\pi bw+\imath b\partial_{u}-\imath b\partial_{w}}) $$ Note also that $g_{b}(e^{-\pi bu+\pi bv-\pi bw+\imath b\partial_{u}-\imath b\partial_{w}})$ commutes with $\varphi(e^{2\pi b\nu_{1}-2\pi bu+\pi bv-\pi bw+\imath b\partial_{w}})$ so we obtain $$ U\varphi(\mathcal{F}_{1})U^{\ast} = $$ $$ (uv)(vw)e^{-w\partial_{u}}e^{w\partial_{v}} e^{\frac{\pi\imath}{4\pi^{2}b^{2}}(\pi bu-\pi bv+\pi bw+\imath b\partial_{u}-\imath b\partial_{w})^{2}} e^{-\frac{\pi\imath}{4\pi^{2}b^{2}}(\pi bu-\pi bv+\pi bw-\imath b\partial_{u}+\imath b\partial_{w})^{2}}\times $$ $$ g_{b}(e^{-4\pi b\nu_{1}+4\pi bu-2\pi bv+2\pi bw}) \varphi(e^{2\pi b\nu_{1}-2\pi bu+\pi bv-\pi bw+\imath b\partial_{w}})(h.c.) $$ To push the quadratic exponents to the right we use the formula $e^{\alpha B^{2}}f(A) = f(A -2c\alpha B)e^{\alpha B^{2}}$ for self-adjoint $A$ and $B$ satisfying the relation $[A,B] = c$, where $c$, $\alpha$ are numbers: $$ U\varphi(\mathcal{F}_{1})U^{\ast} = (uv)(vw)e^{-w\partial_{u}}e^{w\partial_{v}} e^{\frac{\pi\imath}{4\pi^{2}b^{2}}(\pi bu-\pi bv+\pi bw+\imath b\partial_{u}-\imath b\partial_{w})^{2}}\times $$ $$ g_{b}(e^{-4\pi b\nu_{1} +3\pi bu-\pi bv+\pi bw+\imath b\partial_{u}-\imath b\partial_{w}}) \varphi(e^{2\pi b\nu_{1}-2\pi bu+\pi bv-\pi bw+\imath b\partial_{w}})(h.c.) = $$ $$ (uv)(vw)e^{-w\partial_{u}}e^{w\partial_{v}} g_{b}(e^{-4\pi b\nu_{1}+2\pi bu}) \varphi(e^{2\pi b\nu_{1}-\pi bu+\imath b\partial_{u}})(h.c.) = $$ $$ (uv)(vw) g_{b}(e^{-4\pi b\nu_{1}+2\pi bu-2\pi bw}) \varphi(e^{2\pi b\nu_{1}-\pi bu+\pi bw+\imath b\partial_{u}})(h.c.) = $$ $$ g_{b}(e^{-4\pi b\nu_{1}-2\pi bu+2\pi bv}) \varphi(e^{2\pi b\nu_{1}+\pi bu-\pi bv+\imath b\partial_{v}})(h.c.). $$
$\Box$
Let $K_{1}$, $\mathcal{E}_{1}$, $\mathcal{F}_{1}$ be the subset of $U_{q}(\mathfrak{sl}(3))$ generators in the $s_{1}s_{2}s_{1}$ principal series representation. Recall that for a complex-valued function $\varphi(x)$ we have
$$ \varphi(K_{1}) = \varphi(e^{-2\pi b\nu_{1}+2\pi bu-\pi bv+2\pi bw}), $$
$$ \varphi(\mathcal{E}_{1}) = g_{b}(e^{-2\pi bw})\varphi(e^{\pi bw-\imath b\partial_{w}})g_{b}^{\ast}(e^{-2\pi bw}), $$
$$ \varphi(\mathcal{F}_{1}) = g_{b}(e^{\pi bu-\pi bv+\pi bw+\imath b\partial_{u}-\imath b\partial_{w}}) g_{b}(e^{-4\pi b\nu_{1}+3\pi bu-\pi bv+\pi bw+\imath b\partial_{u}-\imath b\partial_{w}}) g_{b}(e^{-4\pi b\nu_{1}+4\pi bu-2\pi bv+2\pi bw})\times $$ $$ \varphi(e^{2\pi b\nu_{1}-2\pi bu+\pi bv-\pi bw+\imath b\partial_{w}})\times $$ $$ g_{b}^{\ast}(e^{-4\pi b\nu_{1}+4\pi bu-2\pi bv+2\pi bw}) g_{b}^{\ast}(e^{-4\pi b\nu_{1}+3\pi bu-\pi bv+\pi bw+\imath b\partial_{u}-\imath b\partial_{w}}) g_{b}^{\ast}(e^{\pi bu-\pi bv+\pi bw+\imath b\partial_{u}-\imath b\partial_{w}}), $$
This subset generates a $U_{q}(\mathfrak{sl}(2))$ subalgebra.
The unitary transform in the following lemma was given in the proof of Theorem 4.7 in \cite{Ip1}. It was used there as the first step in mapping the generators $K_{i}$, $\mathcal{E}_{i}$, $\mathcal{F}_{i}$ of the $U_{q}(\mathfrak{sl}(2))_{i}$ subalgebra of $U_{q}(\mathfrak{g})$ to the formulas corresponding to positive principal series representations of $U_{q}(\mathfrak{sl}(2))$. Here we explicitly check its action on functions of the generators.
\begin{lem} Let $K_{1}$, $\mathcal{E}_{1}$, $\mathcal{F}_{1}$ be as above. Let $\varphi(x)$ be a complex-valued function. Let \begin{equation} V = e^{(\nu_{1}+\frac{v}{2})\partial_{w}}e^{(\nu_{1}+\frac{v}{2})\partial_{u}} e^{-\frac{\pi\imath u^{2}}{2}+2\pi\imath\nu_{1}u}e^{-u\partial_{w}}e^{-\frac{\pi\imath w^{2}}{2}}g_{b}(e^{2\pi bw}) g_{b}(e^{4\pi b\nu_{1}-2\pi bu}) \end{equation} be a unitary transform. Then \begin{equation} V\varphi(K_{1})V^{\ast} = \varphi(e^{2\pi bw}), \end{equation}
\begin{equation} V\varphi(\mathcal{E}_{1})V^{\ast} = \varphi(e^{-\imath b\partial_{w}}), \end{equation}
\begin{multline} V\varphi(\mathcal{F}_{1})V^{\ast} = g_{b}(e^{2\pi bw}e^{-2\pi bu})g_{b}(e^{2\pi bw}e^{\imath b\partial_{u}})g_{b}(e^{2\pi bw}e^{2\pi bu})\times \\ \varphi(e^{-2\pi bw+\imath b\partial_{w}})\times \\ g^{\ast}_{b}(e^{2\pi bw}e^{2\pi bu})g^{\ast}_{b}(e^{2\pi bw}e^{\imath b\partial_{u}})g^{\ast}_{b}(e^{2\pi bw}e^{-2\pi bu}). \end{multline} \end{lem} $\noindent {\it Proof}. $ $$ V\varphi(\mathcal{F}_{1})V^{\ast} = e^{(\nu_{1}+\frac{v}{2})\partial_{w}}e^{(\nu_{1}+\frac{v}{2})\partial_{u}} e^{-\frac{\pi\imath u^{2}}{2}+2\pi\imath\nu_{1}u}e^{-u\partial_{w}}e^{-\frac{\pi\imath w^{2}}{2}}g_{b}(e^{2\pi bw})\times $$ $$ g_{b}(e^{4\pi b\nu_{1}-2\pi bu}) g_{b}(e^{\pi bu-\pi bv+\pi bw+\imath b\partial_{u}-\imath b\partial_{w}}) g_{b}(e^{-4\pi b\nu_{1}+3\pi bu-\pi bv+\pi bw+\imath b\partial_{u}-\imath b\partial_{w}}) g_{b}(e^{-4\pi b\nu_{1}+4\pi bu-2\pi bv+2\pi bw})\times $$ $$ \varphi(e^{2\pi b\nu_{1}-2\pi bu+\pi bv-\pi bw+\imath b\partial_{w}})\times (h.c.). $$ Let $$ U_{1} = e^{4\pi b\nu_{1}-2\pi bu}, $$ $$ U_{2} = e^{-4\pi b\nu_{1}+3\pi bu-\pi bv+\pi bw+\imath b\partial_{u}-\imath b\partial_{w}}. $$ Then $$ q^{-1}U_{1}U_{2} = e^{\pi bu-\pi bv+\pi bw+\imath b\partial_{u}-\imath b\partial_{w}}, $$ and the following relation holds $$ U_{1}U_{2} = q^{2}U_{2}U_{1}, $$ which allows us to use the quantum pentagon identity $$ g_{b}(U_{1})g_{b}(q^{-1}U_{1}U_{2})g_{b}(U_{2}) = g_{b}(U_{2})g_{b}(U_{1}). $$ We obtain $$ V\varphi(\mathcal{F}_{1})V^{\ast} = e^{(\nu_{1}+\frac{v}{2})\partial_{w}}e^{(\nu_{1}+\frac{v}{2})\partial_{u}} e^{-\frac{\pi\imath u^{2}}{2}+2\pi\imath\nu_{1}u}e^{-u\partial_{w}}e^{-\frac{\pi\imath w^{2}}{2}}g_{b}(e^{2\pi bw})\times $$ $$ g_{b}(e^{-4\pi b\nu_{1}+3\pi bu-\pi bv+\pi bw+\imath b\partial_{u}-\imath b\partial_{w}}) g_{b}(e^{4\pi b\nu_{1}-2\pi bu}) g_{b}(e^{-4\pi b\nu_{1}+4\pi bu-2\pi bv+2\pi bw}) \varphi(e^{2\pi b\nu_{1}-2\pi bu+\pi bv-\pi bw+\imath b\partial_{w}})\times (h.c.) $$ Note that the operator $g_{b}(e^{4\pi b\nu_{1}-2\pi bu})$ commutes with $g_{b}(e^{-4\pi b\nu_{1}+4\pi bu-2\pi bv+2\pi bw})$ and $\varphi(e^{2\pi b\nu_{1}-2\pi bu+\pi bv-\pi bw+\imath b\partial_{w}})$, so it goes through and cancels with its hermitian conjugate. $$ V\varphi(\mathcal{F}_{1})V^{\ast} = e^{(\nu_{1}+\frac{v}{2})\partial_{w}}e^{(\nu_{1}+\frac{v}{2})\partial_{u}} e^{-\frac{\pi\imath u^{2}}{2}+2\pi\imath\nu_{1}u}e^{-u\partial_{w}}e^{-\frac{\pi\imath w^{2}}{2}}g_{b}(e^{2\pi bw})\times $$ $$ g_{b}(e^{-4\pi b\nu_{1}+3\pi bu-\pi bv+\pi bw+\imath b\partial_{u}-\imath b\partial_{w}}) g_{b}(e^{-4\pi b\nu_{1}+4\pi bu-2\pi bv+2\pi bw}) \varphi(e^{2\pi b\nu_{1}-2\pi bu+\pi bv-\pi bw+\imath b\partial_{w}})\times (h.c.). $$ Using the following operator relations $$ e^{\alpha x^{2} +\beta x}f(\partial_{x}) = f(\partial_{x}-2\alpha x -\beta)e^{\alpha x^{2} +\beta x}, $$ $$ e^{\alpha x\partial_{y}}f(y,\partial_{x}) = f(y+\alpha x,\partial_{x}-\alpha\partial_{y})e^{\alpha x\partial_{y}}, $$ we move all the exponents to the right $$ V\varphi(\mathcal{F}_{1})V^{\ast} = e^{(\nu_{1}+\frac{v}{2})\partial_{w}}e^{(\nu_{1}+\frac{v}{2})\partial_{u}} e^{-\frac{\pi\imath u^{2}}{2}+2\pi\imath\nu_{1}u}e^{-u\partial_{w}}g_{b}(e^{2\pi bw})\times $$ $$ g_{b}(e^{-4\pi b\nu_{1}+3\pi bu-\pi bv+2\pi bw+\imath b\partial_{u}-\imath b\partial_{w}}) g_{b}(e^{-4\pi b\nu_{1}+4\pi bu-2\pi bv+2\pi bw}) \varphi(e^{2\pi b\nu_{1}-2\pi bu+\pi bv-2\pi bw+\imath b\partial_{w}})\times (h.c.) 
= $$ $$ e^{(\nu_{1}+\frac{v}{2})\partial_{w}}e^{(\nu_{1}+\frac{v}{2})\partial_{u}} e^{-\frac{\pi\imath u^{2}}{2}+2\pi\imath\nu_{1}u}g_{b}(e^{2\pi bw-2\pi bu})\times $$ $$ g_{b}(e^{-4\pi b\nu_{1}+\pi bu-\pi bv+2\pi bw+\imath b\partial_{u}}) g_{b}(e^{-4\pi b\nu_{1}+2\pi bu-2\pi bv+2\pi bw}) \varphi(e^{2\pi b\nu_{1}+\pi bv-2\pi bw+\imath b\partial_{w}})\times (h.c.) = $$ $$ e^{(\nu_{1}+\frac{v}{2})\partial_{w}}e^{(\nu_{1}+\frac{v}{2})\partial_{u}} g_{b}(e^{2\pi bw-2\pi bu}) g_{b}(e^{-2\pi b\nu_{1}-\pi bv+2\pi bw+\imath b\partial_{u}})\times $$ $$ g_{b}(e^{-4\pi b\nu_{1}+2\pi bu-2\pi bv+2\pi bw}) \varphi(e^{2\pi b\nu_{1}+\pi bv-2\pi bw+\imath b\partial_{w}})\times (h.c.) = $$ $$ g_{b}(e^{2\pi bw-2\pi bu})g_{b}(e^{2\pi bw+\imath b\partial_{u}}) g_{b}(e^{2\pi bw+2\pi bu}) \varphi(e^{-2\pi bw+\imath b\partial_{w}}) g^{\ast}_{b}(e^{2\pi bw+2\pi bu})g^{\ast}_{b}(e^{2\pi bw+\imath b\partial_{u}}) g^{\ast}_{b}(e^{2\pi bw-2\pi bu}). $$
$\Box$
To finish the mapping $\varphi(K_{1})$,$\varphi(\mathcal{E}_{1})$,$\varphi(\mathcal{F}_{1})\rightarrow$ $\varphi(K)$,$\varphi(\mathcal{E})$,$\varphi(\mathcal{F})$, where the second set of operators is defined by the equations (\ref{function of E})-(\ref{function of F}), we need to perform a certain integral transformation which will be defined shortly.
Let $\lambda$ be a positive real number. Define the following set of functions, \cite{Ka2}, \cite{KLSTS} \begin{equation} \Phi_{\lambda}(u) = e^{\pi\imath u^{2}+\pi Qu}G_{b}(-\imath u+\imath\lambda)G_{b}(-\imath u-\imath\lambda). \end{equation} The integral transform $\Phi$ is defined by $$ \Phi: L^{2}(\mathbb{R}) \rightarrow L^{2}(\mathbb{R}^{+},d\mu(\lambda)), $$ \begin{equation}\label{Kashaev integral transform} \Phi: f(u) \rightarrow F(\lambda) = \int\limits_{\mathbb{R}-\imath 0} du f(u)\Phi^{\ast}_{\lambda}(u), \end{equation} This transform is an isometry, see \cite{Ka2}. The inverse is given by $$ \Phi^{-1}: L^{2}(\mathbb{R}^{+},d\mu(\lambda)) \rightarrow L^{2}(\mathbb{R}), $$ \begin{equation}\label{Kashaev inverse integral transform} \Phi^{-1} : F(\lambda) \rightarrow f(u) = \lim\limits_{\epsilon\rightarrow 0}\int\limits_{0}^{+\infty}F(\lambda)\Phi_{\lambda}(u+\imath\epsilon)e^{-2\pi\epsilon u} d\mu(\lambda), \end{equation} with the measure given by $d\mu(\lambda) = 4\sinh(\pi b\lambda)\sinh(\pi b^{-1}\lambda)$.
\begin{prop} The function $\Phi_{\lambda}(u)$ is an eigenfunction of the operator $g_{b}(e^{2\pi bw}e^{-2\pi bu})g_{b}(e^{2\pi bw}e^{\imath b\partial_{u}})g_{b}(e^{2\pi bw}e^{2\pi bu})$: \begin{equation} g_{b}(e^{2\pi bw}e^{-2\pi bu})g_{b}(e^{2\pi bw}e^{\imath b\partial_{u}})g_{b}(e^{2\pi bw}e^{2\pi bu})\Phi_{\lambda}(u) = g_{b}(e^{2\pi b\lambda+2\pi bw})g_{b}(e^{-2\pi b\lambda+2\pi bw})\Phi_{\lambda}(u). \end{equation} \end{prop} $\noindent {\it Proof}. $
Recalling the definition of $g_{b}(x)$ $$ g_{b}(x) = \frac{\bar{\zeta}_{b}}{G_{b}(\frac{Q}{2} + \frac{1}{2\pi\imath b}\log x)}, $$ and the Fourier transform $$ g_{b}(x) = \int d\tau x^{\imath b^{-1}\tau}e^{\pi Q\tau}G_{b}(-\imath\tau), $$ we obtain $$ g_{b}(e^{2\pi bw}e^{-2\pi bu}) = \frac{\bar{\zeta}_{b}}{G_{b}(\frac{Q}{2} -\imath w + \imath u)}, $$ $$ g_{b}(e^{2\pi bw}e^{2\pi bu}) = \frac{\bar{\zeta}_{b}}{G_{b}(\frac{Q}{2} -\imath w - \imath u)}, $$ $$ g_{b}(e^{2\pi bw}e^{\imath b\partial_{u}}) = \int d\tau e^{\pi Q\tau+2\pi\imath w\tau}G_{b}(-\imath\tau)e^{-\tau\partial_{u}}. $$ Substituting these expressions into the left-hand side of the eigenvalue equation we obtain $$ g_{b}(e^{2\pi bw}e^{-2\pi bu})g_{b}(e^{2\pi bw}e^{\imath b\partial_{u}})g_{b}(e^{2\pi bw}e^{2\pi bu})\Phi_{\lambda}(u) = $$ $$ \frac{1}{G_{b}(\frac{Q}{2}-\imath w+\imath u)}\int d\tau e^{\pi Q\tau + 2\pi\imath w\tau}G_{b}(-\imath\tau)e^{-\tau\partial_{u}} \frac{e^{\pi\imath u^{2}+\pi Qu}G_{b}(-\imath u+\imath\lambda)G_{b}(-\imath u-\imath\lambda)}{G_{b}(\frac{Q}{2}-\imath w-\imath u)} = $$ $$ \frac{1}{G_{b}(\frac{Q}{2}-\imath w+\imath u)}\int d\tau e^{\pi Q\tau+2\pi\imath w\tau+\pi\imath(u-\tau)^{2}+\pi Q(u-\tau)} \frac{G_{b}(-\imath\tau)G_{b}(-\imath u+\imath\lambda+\imath\tau)G_{b}(-\imath u-\imath\lambda+\imath\tau)}{G_{b}(\frac{Q}{2}-\imath w-\imath u+\imath\tau)} $$ Applying the reflection formula $$ G_{b}(-\imath\tau) = \frac{e^{-\pi\imath\tau^{2}-\pi Q\tau}}{G_{b}(Q+\imath\tau)}, $$ we get $$ g_{b}(e^{2\pi bw}e^{-2\pi bu})g_{b}(e^{2\pi bw}e^{\imath b\partial_{u}})g_{b}(e^{2\pi bw}e^{2\pi bu})\Phi_{\lambda}(u) = $$ $$ \frac{e^{\pi\imath u^{2}+\pi Qu}}{G_{b}(\frac{Q}{2}-\imath w+\imath u)}\int d\tau e^{-2\pi(\frac{Q}{2}+\imath u-\imath w)\tau} \frac{G_{b}(-\imath u+\imath\lambda+\imath\tau)G_{b}(-\imath u-\imath\lambda+\imath\tau)}{G_{b}(\frac{Q}{2}-\imath w-\imath u+\imath\tau)G_{b}(Q+\imath\tau)}. $$ Let $\alpha = -\imath u+\imath\lambda$, $\beta = -\imath u-\imath\lambda$, $\gamma = \frac{Q}{2}+\imath u-\imath w$. Then $$ g_{b}(e^{2\pi bw}e^{-2\pi bu})g_{b}(e^{2\pi bw}e^{\imath b\partial_{u}})g_{b}(e^{2\pi bw}e^{2\pi bu})\Phi_{\lambda}(u) = $$ $$ \frac{e^{\pi\imath u^{2}+\pi Qu}}{G_{b}(\frac{Q}{2}-\imath w+\imath u)}\int d\tau e^{-2\pi\gamma\tau} \frac{G_{b}(\alpha+\imath\tau)G_{b}(\beta+\imath\tau)}{G_{b}(\alpha+\beta+\gamma+\imath\tau)G_{b}(Q+\imath\tau)} = $$ $$ \frac{e^{\pi\imath u^{2}+\pi Qu}}{G_{b}(\frac{Q}{2}-\imath w+\imath u)} \frac{G_{b}(\alpha)G_{b}(\beta)G_{b}(\gamma)}{G_{b}(\alpha+\gamma)G_{b}(\beta+\gamma)}, $$ here $4-5$ integral identity \cite{V} has been used. Substituting $\alpha$, $\beta$, $\gamma$ we obtain $$ g_{b}(e^{2\pi bw}e^{-2\pi bu})g_{b}(e^{2\pi bw}e^{\imath b\partial_{u}})g_{b}(e^{2\pi bw}e^{2\pi bu})\Phi_{\lambda}(u) = e^{\pi\imath u^{2}+\pi Qu}\frac{G_{b}(-\imath u+\imath\lambda)G_{b}(-\imath u-\imath\lambda)}{G_{b}(\frac{Q}{2}+\imath\lambda-\imath w)G_{b}(\frac{Q}{2}-\imath\lambda-\imath w)} = $$ $$ \frac{1}{G_{b}(\frac{Q}{2}+\imath\lambda-\imath w)G_{b}(\frac{Q}{2}-\imath\lambda-\imath w)}\Phi_{\lambda}(u) = g_{b}(e^{2\pi b\lambda+2\pi bw})g_{b}(e^{-2\pi b\lambda+2\pi bw})\Phi_{\lambda}(u). $$ $\Box$
\begin{cor} Let $\varphi(K_{1})$,$\varphi(\mathcal{E}_{1})$,$\varphi(\mathcal{F}_{1})$ be the functions of the subset of generators of $U_{q}(\mathfrak{sl}(3))$ in the positive principal series representation corresponding to the reduced expression $w_{0} = s_{1}s_{2}s_{1}$ of the longest Weyl element. Let $\Phi$, $\Phi^{\ast}$ be the integral transform and its inverse defined in (\ref{Kashaev integral transform})-(\ref{Kashaev inverse integral transform}). Let $\Omega_{1}$ be the unitary transform defined by \begin{equation}
\Omega_{1} =
e^{-\frac{\pi\imath}{2}(\lambda+w)^{2}}g_{b}(e^{-2\pi b\lambda-2\pi bw})
\circ\Phi \circ e^{(\nu_{1}+\frac{v}{2})\partial_{w}}e^{(\nu_{1}+\frac{v}{2})\partial_{u}} e^{-\frac{\pi\imath u^{2}}{2}+2\pi\imath\nu_{1}u}e^{-u\partial_{w}}e^{-\frac{\pi\imath w^{2}}{2}}g_{b}(e^{2\pi bw}) g_{b}(e^{4\pi b\nu_{1}-2\pi bu}). \end{equation} Then \begin{equation} \Omega_{1}\varphi(K_{1})\Omega_{1}^{\ast} = \varphi(e^{2\pi bw}), \end{equation} \begin{equation} \Omega_{1}\varphi(\mathcal{E}_{1})\Omega_{1}^{\ast} = g_{b}(e^{-2\pi b\lambda-2\pi bw})\varphi(e^{\pi b\lambda +\pi bw - \imath b\partial_{w}})g^{\ast}_{b}(e^{-2\pi b\lambda-2\pi bw}), \end{equation} \begin{equation} \Omega_{1}\varphi(\mathcal{F}_{1})\Omega_{1}^{\ast} = g_{b}(e^{-2\pi b\lambda+2\pi bw})\varphi(e^{\pi b\lambda-\pi bw+\imath b\partial_{w}})g^{\ast}_{b}(e^{-2\pi b\lambda+2\pi bw}). \end{equation} \end{cor}
\begin{cor} Let $\varphi(K_{2})$,$\varphi(\mathcal{E}_{2})$,$\varphi(\mathcal{F}_{2})$ be the functions of the subset of generators of $U_{q}(\mathfrak{sl}(3))$ in the positive principal series representation corresponding to the reduced expression $w_{0} = s_{2}s_{1}s_{2}$ of the longest Weyl element. Let $\Omega_{2}$ be a unitary transform defined by \begin{equation}
\Omega_{2} =
e^{-\frac{\pi\imath}{2}(\lambda+w)^{2}}g_{b}(e^{-2\pi b\lambda-2\pi bw})
\circ\Phi \circ e^{(\nu_{2}+\frac{v}{2})\partial_{w}}e^{(\nu_{2}+\frac{v}{2})\partial_{u}} e^{-\frac{\pi\imath u^{2}}{2}+2\pi\imath\nu_{2}u}e^{-u\partial_{w}}e^{-\frac{\pi\imath w^{2}}{2}}g_{b}(e^{2\pi bw}) g_{b}(e^{4\pi b\nu_{2}-2\pi bu}). \end{equation} Then \begin{equation} \Omega_{2}\varphi(K_{2})\Omega_{2}^{\ast} = \varphi(e^{2\pi bw}), \end{equation} \begin{equation} \Omega_{2}\varphi(\mathcal{E}_{2})\Omega_{2}^{\ast} = g_{b}(e^{-2\pi b\lambda-2\pi bw})\varphi(e^{\pi b\lambda +\pi bw - \imath b\partial_{w}})g^{\ast}_{b}(e^{-2\pi b\lambda-2\pi bw}), \end{equation} \begin{equation} \Omega_{2}\varphi(\mathcal{F}_{2})\Omega_{2}^{\ast} = g_{b}(e^{-2\pi b\lambda+2\pi bw})\varphi(e^{\pi b\lambda-\pi bw+\imath b\partial_{w}})g^{\ast}_{b}(e^{-2\pi b\lambda+2\pi bw}). \end{equation} \end{cor} \noindent {\it Proof}. Note that swapping the indices $K_{1}\leftrightarrow K_{2}$, $\mathcal{E}_{1}\leftrightarrow \mathcal{E}_{2}$, $\mathcal{F}_{1}\leftrightarrow \mathcal{F}_{2}$, $\nu_{1}\leftrightarrow \nu_{2}$ of generators and parameters in a representation of $U_{q}(\mathfrak{sl}(3))$ corresponding to a particular choice of reduced expression of the longest Weyl element gives a representation for another choice of reduced expression. So, from the statement that $\Omega_{1}$ transforms the action of the operators $\varphi(K_{1})$, $\varphi(\mathcal{E}_{1})$, $\varphi(\mathcal{F}_{1})$ in the $s_{1}s_{2}s_{1}$ representation to the $U_{q}(\mathfrak{sl}(2))$ formulas (\ref{function of E})-(\ref{function of F}), it automatically follows that $\Omega_{2}$, which is obtained from $\Omega_{1}$ by the replacement of the parameter $\nu_{1}$ by $\nu_{2}$, transforms the action of the operators $\varphi(K_{2})$, $\varphi(\mathcal{E}_{2})$, $\varphi(\mathcal{F}_{2})$ in the $s_{2}s_{1}s_{2}$ representation to the $U_{q}(\mathfrak{sl}(2))$ formulas. $\Box$
\begin{cor} In the positive principal series representation corresponding to any reduced expression of the longest Weyl element the generalized Kac's identity holds. \end{cor} \noindent {\it Proof}. As follows from Corollaries 4.2 and 4.3, there is a unitary transformation which maps the operators $\varphi(K_{i})$, $\varphi(\mathcal{E}_{i})$, $\varphi(\mathcal{F}_{i})$ defined in the positive principal series representation of $U_{q}(\mathfrak{sl}(3))$ to the operators $\varphi(K)$, $\varphi(\mathcal{E})$, $\varphi(\mathcal{F})$ defined in the positive principal series representation of $U_{q}(\mathfrak{sl}(2))$. Since the generalized Kac's identity is valid in the $U_{q}(\mathfrak{sl}(2))$ case, it is also valid in the case of $U_{q}(\mathfrak{sl}(3))$. $\Box$\\* This completes the proof of Theorem 4.1.
Note that we have also given, for $U_{q}(\mathfrak{sl}(3))$, another proof of Theorem 4.7 in \cite{Ip1}, which states that the positive principal series representation of $U_{q}(\mathfrak{sl}(3))$ decomposes into a direct integral of positive principal series representations of its $U_{q}(\mathfrak{sl}(2))$ subalgebra corresponding to any simple root.
\end{document}
\begin{document}
\title[Unboundedness of rational points]{Unboundedness of the number
of rational points on curves over function fields}
\author{Ricardo Concei\c c\~ao} \address{Department of Mathematics, Oxford College of Emory
University, Oxford, GA 30054, USA} \email{[email protected]}
\author{Douglas Ulmer} \address{School of Mathematics, Georgia Institute of
Technology, Atlanta, GA 30332, USA} \email{[email protected]}
\author{Jos\'e Felipe Voloch} \address{Department of Mathematics, University of Texas,
Austin, TX 78712, USA} \email{[email protected]}
\begin{abstract} We give examples of sequences of smooth non-isotrivial curves for every genus at least two, defined over a rational function field of positive characteristic, such that the (finite) number of rational points of the curves in the sequence cannot be uniformly bounded. \end{abstract}
\maketitle
The question of whether there is a uniform bound for the number of rational points on curves of fixed genus greater than one over a fixed number field has been considered by several authors. In particular, Caporaso et al. \cite{CaporasoHarrisMazur97} showed that this would follow from the Bombieri-Lang conjecture that the set of rational points on a variety of general type over a number field is not Zariski dense. Abramovich and the third author \cite{AbramovichVoloch96} extended this result to get other uniform boundedness consequences of the Bombieri-Lang conjecture and gave some counterexamples for function fields. These counterexamples are singular curves that ``change genus''. They behave like positive genus curves (and, in particular, have finitely many rational points), but are parametrizable over an inseparable extension of the ground field. In \cite{AbramovichVoloch96} it is shown that, for this class of equation, uniform boundedness does not hold. Specifically, one gets a one-parameter family of equations which, for suitable choice of the parameter, have a finite but arbitrarily large number of solutions. However, a negative answer to the original uniform boundedness question for smooth curves of genus at least two remained open in the function field case. In this paper we provide counterexamples to this uniform boundedness, extending constructions of the first two authors \cite{ConceicaoThesis, UlmerLegendre} for elliptic curves.
\begin{thm}
Let $p>3$ be a prime number and let $r$ be an odd number coprime to
$p$. The number of rational points over ${\bf F}_p(t)$ of the curve
$X_a$ with equation $y^2 = x(x^r+1)(x^r+a^r)$ is unbounded as $a$
varies in ${\bf F}_p(t)\setminus{\bf F}_p$. \end{thm}
\begin{proof}
We first note that if $d=p^n+1$ and $a=t^d$, then we have a rational
point $(x,y)=(t,t^{(r+1)/2}(t^r+1)^{d/2})$ on $X_a$. Second, if $m$
divides $n$ and $n/m$ is odd, then $d'=p^m+1$ divides $d$. Setting
$e=d/d'$, we have another rational point
$(x,y)=(t^e,t^{e(r+1)/2}(t^{re}+1)^{d'/2})$ on $X_a$. Thus if we
take $n$ to be odd with many factors, we have many points. \end{proof}
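Both points rest on the Frobenius identity $(t^{r}+1)^{p^{n}}=t^{rp^{n}}+1$ in characteristic $p$: for the first point, $x(x^r+1)(x^r+a^r)\big|_{x=t,\,a=t^{d}}=t^{r+1}(t^{r}+1)^{1+p^{n}}=t^{r+1}(t^{r}+1)^{d}$, which is the square of the displayed $y$-coordinate. As an illustration, the following Python/sympy sketch confirms this for the sample values $p=5$, $r=3$, $n=1$ (any admissible choice would do):
\begin{verbatim}
import sympy as sp

t = sp.symbols('t')
p, r, n = 5, 3, 1                  # sample values: p > 3, r odd and coprime to p
d = p**n + 1
a = t**d
x, y = t, t**((r + 1)//2) * (t**r + 1)**(d//2)
diff = sp.expand(y**2 - x*(x**r + 1)*(x**r + a**r))
# the difference should vanish identically modulo p
print(all(c % p == 0 for c in sp.Poly(diff, t).all_coeffs()))   # True
\end{verbatim}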
The curves given by the theorem are non-isotrivial of odd genus $r$. To obtain counterexamples of even genus, one may proceed as follows: Let $Y_a$ be the quotient of $X_a$ by the fixed-point-free involution $(x,y) \mapsto (a/x,-a^{(r+1)/2} y/x^{r+1})$. Then $X_a$ is an unramified cover of $Y_a$, a curve of genus $(r+1)/2$. This shows that for all $g>1$ and all but finitely many $p>2$ (the exceptions depending on $g$), unboundedness of the number of rational points over ${\bf F}_p(t)$ holds for curves of genus $g$.
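Indeed, since the covering $X_a\to Y_a$ is unramified of degree two, the Riemann--Hurwitz formula gives $2g(X_a)-2=2\bigl(2g(Y_a)-2\bigr)$, and with $g(X_a)=r$ this yields $g(Y_a)=(r+1)/2$.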
One can obtain other explicit examples by a slight modification of the argument of the theorem. In what follows $d=p^n+1$, $m|n$ with $n/m$ odd, $d'=p^m+1$, $e=d/d'$, and $a=t^d$, as in the proof of the theorem.
\begin{example}
Let $p\equiv 2\mathop{\rm mod}\nolimits 9$ and $n$ be divisible by 3 and a product of
primes $\equiv 1\mathop{\rm mod}\nolimits 6$. Then the curve $y^6 = x(x+1)(x+a)$ contains
  the point $(t^e,t^{e/3}(t^e+1)^{d'/6})$.
\end{example}
\begin{example}
Let $f(x)\in{\bf F}_p[x]$ be a polynomial of degree $2b$ with distinct
roots, none of them zero. Then the curve $y^2=f(x)x^{2b}f(a/x)$ has
the point $(t^e,t^{be}f(t^e)^{d'/2})$. \end{example}
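In the same spirit, this point relies on the identity $f(z^{p^{m}})=f(z)^{p^{m}}$ for $f\in{\bf F}_p[x]$. A finite check, with an arbitrarily chosen admissible polynomial $f$ and sample parameters (these particular values are for illustration only):
\begin{verbatim}
import sympy as sp

t = sp.symbols('t')
p, n, m, b = 5, 3, 1, 1              # sample parameters; m | n and n/m is odd
d, dprime = p**n + 1, p**m + 1       # d = p^n + 1, d' = p^m + 1
e = d // dprime
f = lambda z: z**2 + z + 1           # degree 2b with distinct nonzero roots over F_5
a = t**d
x = t**e
y = t**(b*e) * f(t**e)**(dprime//2)
diff = sp.expand(y**2 - f(x)*x**(2*b)*f(sp.cancel(a/x)))
print(all(c % p == 0 for c in sp.Poly(diff, t).all_coeffs()))   # True
\end{verbatim}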
\begin{example}
Let $r$ be a prime satisfying $p\equiv r-1\mathop{\rm mod}\nolimits r^2$. Let $n$ be
divisible by $r$ and primes $\equiv 1 \mathop{\rm mod}\nolimits r(r-1)$. Then
$(t^e,t^{2e/r}(t^e+1)^{d'/r})$ is a point on the curve
$y^r=x(x+1)(x+a)$. This curve has a simple Jacobian, since the ring
of integers of the $r$-th cyclotomic field acts as endomorphisms of
the Jacobian (see \cite{Zarhin06}, Theorem 3.1). \end{example}
\begin{rems}\mbox{}
The curve in Example 1 and the curve in the theorem cover the
Legendre elliptic curve $E$ studied in \cite{UlmerLegendre}. Thus
their Jacobians have $E$ as a factor, and consequently have
unbounded Mordell-Weil rank as $a$ varies. On the other hand, $E$ is
not a factor of the Jacobian of the curve $Y_a$, nor of the
Jacobians of the curves in Examples 2 and 3. Nonetheless, by the
main result of \cite{BuiumVoloch96}, the rank of the Mordell-Weil
group of the Jacobian of these curves is also unbounded as $a$
varies. \end{rems}
Let $X$ be the smooth projective surface with affine model $$y^2=x(x^r+1)(x^r+t^{r}).$$ By \cite{CaporasoHarrisMazur97} the fibration $X \to {\bf P}^1$, $(x,y,t) \mapsto t$ has a fibered power which covers a variety of general type. However, since this fibration is defined over a finite field, this variety of general type will also be defined over a finite field and can have a Zariski dense set of ${\bf F}_p(t)$-rational points, so the rest of the argument of \cite{CaporasoHarrisMazur97} does not apply.
\emph{ Acknowledgments:} We would like to thank Bjorn Poonen for his suggestions, in particular for suggesting the involution that leads to the curves $Y_a$.
{}
\end{document}
\begin{document}
\title{\bf Revisiting Offspring Maxima in Branching Processes} \author{George P. Yanev \\ Department of Mathematics and Statistics \\ University of South Florida \\ Tampa, Florida 33620 \\ e-mail: [email protected]} \date{\empty}
\maketitle
\begin{abstract} We present a progress report for studies on maxima related to offspring in branching processes. We summarize and discuss the findings on the subject that appeared in the last ten years. Some of the results are refined and illustrated with new examples. \end{abstract}
\section{Introduction} There is a significant amount of research in the theory of branching processes devoted to extreme value problems concerning different population characteristics. The history of such studies goes back to work in the 1950s by Zolotarev \cite{Zol54} and Urbanik \cite{Urb56} (see also \cite{Har63}), who considered the maximum generation size. Our goal here is to summarize and discuss results on maxima related to the offspring. Papers directly addressing this area of study have begun to appear in the last ten years (though see the ``hero mothers'' example in \cite{JagNer84}).
Let ${\cal M}_n$ denote the maximum offspring size of all individuals living in the $(n-1)$-st generation of a branching process. This is a maximum of a random number of independent and identically distributed (i.i.d.) integer-valued random variables, where the random index is the population size of the process. ${\cal M}_n$ has two characteristic features: (i) the i.i.d. random variables are integer-valued and (ii) the distribution of the random index is connected to the distribution of the terms involved through the branching mechanism. These two characteristics distinguish the maxima studied here from those treated in general extreme value theory.
The study of the sequence \ $\{{\cal M}_n\}$\ might be motivated in different ways. It provides a fertility measure characterizing the most prolific individual in one generation. It also measures the maximum litter (or family) size. In the branching tree context, it is the maximum degree of a vertex. The asymptotic behavior of ${\cal M}_n$\ gives us some information about the influence of the largest families on the size and survival of the entire population.
The paper is organized as follows. The next section deals with maxima in simple branching processes with or without immigration. In Section~3 we derive results about maxima of a triangular array of zero-modified geometric variables. Later we apply these to branching processes with varying geometric environments. Section 4 begins with limit theorems for maxima of bivariate geometric variables. Then we discuss one application to branching processes with promiscuous mating. The final section considers a different construction in which a random score (a continuous random variable) is associated with each individual in a simple branching process. We briefly present limiting results for the score's order statistics. At the end of the section, we give an extension to two-type processes.
\section{Maximum family size in simple branching processes}
Define a Bienaym\'{e}--Galton--Watson (BGW) branching process and its $n$-th generation maximum family size by $Z_0=1$; \[ Z_n=\sum_{i=1}^{Z_{n-1}}X_i(n)\quad \mbox{and}\quad {\cal M}_n= \max_{ 1\le i\le Z_{n-1}}X_i(n) \quad (n=1,2,\ldots), \] respectively, where the offspring variables $X_i(n)$ are i.i.d. nonnegative and integer-valued.
Along with the BGW process $\{Z_n\}$, we consider the process with immigration $\{Z^{im}_n\}$ and its offspring maximum \[ Z^{im}_n=\sum_{i=1}^{Z^{im}_{n-1}}X_i(n)+ Y_n \quad \mbox{and} \quad {\cal M}^{im}_n=\max_{1\le i\le Z^{im}_{n-1}}X_i(n) \quad (n=1,2,\ldots), \] respectively, where $\{Y_n, \ n=1,2,...\}$ are independent of the offspring variables, i.i.d. and integer-valued non-negative random variables.
Finally, let us modify the immigration component such that immigrants may enter the $n$-th generation only
if the $(n-1)$-st generation size is zero. Thus, we have the Foster-Pakes
process and its offspring maximum \[ Z^{0}_n=\sum_{i=1}^{Z^0_{n-1}}X_i(n)+ I_{\displaystyle \{Z^{0}_{n-1}=0\}}Y_n \quad \mbox{and} \quad {\cal M}^{0}_n=\max_{1\le i \le Z^0_{n-1}}X_i(n)\quad (n=1,2,\ldots), \] where $I_{A} $ stands for the indicator of $A$.
Denote by $F(x)=P(X_i(n)\leq x)$\ the common distribution function of the offspring variables with mean \ $0<m<\infty$\ and variance \ $0<\sigma^2 \leq \infty$. In this section, we deal with the subcritical $(m<1)$, critical $(m=1)$, and supercritical $(m>1)$ processes separately.
\subsection{Subcritical processes} Let $\hat{{\cal M}}_n$ denote the maximum family size in any of the three processes defined above: $\{ Z_n\}$, $\{Z^{im}_n\}$, and $\{Z^0_n\}$. Let $g(s)$ be the immigration p.g.f. Also, let ${\cal A}_n=\{Z_{n-1}>0\}$ for the process without immigration, and let ${\cal A}_n$ be the certain event otherwise. The following result is true.
\begin{theorem} If $0<m<1$, then for $x\ge 0$
\begin{equation} \label{distr_limit}\lim_{n\to \infty}P(\hat{{\cal M}}_n\le x|{\cal A}_n)=\gamma(F(x)) \end{equation} and \begin{equation} \label{moment_limit} \lim_{n\to
\infty}E(\hat{{\cal M}}_n|{\cal A}_n)=\sum_{k=0}^\infty [1-\gamma(F(k))] \end{equation} where
(i) in case of $\{Z_n\}$, $\gamma$ is the unique p.g.f. solution of $\gamma(f(s))=m\gamma(s)+1-m$ and (\ref{moment_limit}) holds if, in addition, $EX_i(n)\log(1+ X_i(n))<\infty$.
(ii) in case of process $\{Z^{im}_n\}$, (\ref{distr_limit}) holds
provided $E\log(1+Y_n)<\infty$ and $\gamma$ is the unique p.g.f. solution of $ \gamma(s)=g(s)\gamma(f(s))$. (\ref{moment_limit}) is true if, in addition, $EY_n<\infty$.
(iii) in case of process $\{Z^{0}_n\}$ we assume that $E\log(1+Y_n)<\infty$. Then $ \gamma(s)= 1 - \sum_{n=0}^\infty [1-g(f_n(s))]$ $(0< s\le 1)$ and $\gamma(0) = \{1 + \sum_{n=0}^\infty [1-g(f_n(0))]\}^{-1}$. Also, (\ref{moment_limit}) holds if, in addition, $EY_n<\infty$. \end{theorem}
\begin{example} Consider $\{Z_n\}$ with geometric offspring p.g.f. $f(s)=p/(1-qs),$\ where \ $1/2<p=1-q<1$. \ Then \ $m=q/p<1$\ and it is not difficult to see that $\gamma(s)=(1-m)s/(1-ms).$\ Hence \[ \lim_{n\to\infty}P({\cal M}_n\leq k\mid Z_{n-1}>0)= \frac{\displaystyle (p-q)(1-q^{k+1})}{\displaystyle p-q(1-q^{k+1})}. \] It can also be seen (\cite{RahYan99}) that \[
\frac{m}{1-pm}\le \lim_{n\to \infty}E({\cal M}_n|Z_n>0)\le \frac{m}{1-m}. \] \end{example}
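For this example the functional equation in part (i) of the theorem above is easy to confirm symbolically; a minimal sympy sketch:
\begin{verbatim}
import sympy as sp

s, p = sp.symbols('s p')
q = 1 - p
m = q/p                                   # offspring mean; m < 1 when 1/2 < p < 1
f = p/(1 - q*s)                           # geometric offspring p.g.f.
gamma = lambda u: (1 - m)*u/(1 - m*u)
print(sp.simplify(gamma(f) - (m*gamma(s) + 1 - m)))   # 0
\end{verbatim}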
\begin{example} Consider $\{Z^{im}_n\}$ (see \cite{Pak71}) with \[ f(s)=(1+m-ms)^{-1} \quad (0<m<1) \quad \mbox{and} \quad g(s)= f^{\nu}(s) \qquad (\nu >0).\] Then $ \gamma(s)=((1-m)/(1-ms))^\nu$, a negative binomial p.g.f., and the above theorem yields \[
\lim_{n \to \infty} P({\cal M}^{im}_n \le x) = \left( \frac{1-m}{1-mF(x)}\right)^\nu \ \mbox{and}\ \ \lim_{n\to \infty} E{\cal M}^{im}_n=\sum_{j=0}^{\infty} 1-\left[ \frac{1-F(j)}{1-mF(j)}\right]^{\nu}\le \frac{\nu m^2}{1-m}. \] \end{example}
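That $\gamma(s)=((1-m)/(1-ms))^{\nu}$ indeed solves $\gamma(s)=g(s)\gamma(f(s))$ follows, by raising to the $\nu$-th power, from the rational identity $B(s)=f(s)B(f(s))$ for $B(s)=(1-m)/(1-ms)$, which can be checked symbolically:
\begin{verbatim}
import sympy as sp

s, m = sp.symbols('s m')
f = 1/(1 + m - m*s)                   # offspring p.g.f. of this example
B = lambda u: (1 - m)/(1 - m*u)       # gamma(s) = B(s)**nu and g(s) = f**nu
print(sp.simplify(B(s) - f*B(f)))     # 0, hence gamma(s) = g(s)*gamma(f(s))
\end{verbatim}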
\begin{example} Let $\mu=EY_n$. Consider $\{Z^0_n\}$ with \[ f(s)=(1+m-ms)^{-1} \qquad \mbox{and} \qquad g(s)= 1 - (\mu / m)\log (1+m - ms)\qquad (0<m<1).\]
In this case $\gamma = (m-\mu \log(1-ms))/(m-\mu \log (1-m))$ and by the theorem \[ \displaystyle \lim_{n \to \infty} P\{{\cal M}^0_n \le x\} = {m - \mu \log (1-mF(x)) \over m -\mu \log (1-m)}\ , \] and \[ \lim_{n \to \infty}E{\cal M}^0_n = \mu\ {\displaystyle m+\sum_{k=0}^\infty \log \frac{1-m[(1+m)^{k+1}-m^{k+1}]}{1-m} \over m-\mu\log (1-m)}\le {\displaystyle \mu m \over m -\mu \log(1-m)}\ {\displaystyle m \over 1-m}. \] \end{example}
\subsection{Critical processes} In the rest of this section we need some asymptotic results for the maxima of i.i.d. random variables. Recall that a distribution function $F(x)$ belongs to the max-domain of attraction of a distribution function $H(x,\theta)$ (i.e., $F\in D(H)$) if and only if there exist sequences $ a(n)>0$\ and \ $b(n)$\ such that \begin{equation}\label{3.8} \lim_{n\to\infty}F^n(a(n)x+b(n))=H(x, \theta)\ , \end{equation} weakly. According to the classical Gnedenko's result, \ $H(x;\theta)$
has the following (von Mises) form \begin{equation} \label{dom_attr}H(x;\theta)=\exp\{-h(x;\theta)\} =\exp\left\{-(1+x\theta^{-1})^{-\theta}\right\}, \quad 1+x\theta^{-1}>0; \ -\infty<\theta<\infty. \end{equation} Necessary and sufficient conditions for \ $F\in D(H)$ are well-known. In particular, \ $F\in D(\exp\{-x^{-a}\})$, \ $a>0$\ if and only if for \ $x>0$ the following regularity condition on the tail probability holds \begin{equation} \label {24} 1-F(x)=x^{-a}L(x)\ , \end{equation} where $L(x)$ is a slowly varying at infinity function (s.v.f.).
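For orientation, the Fr\'{e}chet case of (\ref{3.8}) under (\ref{24}) is easy to illustrate by simulation: for the pure Pareto tail $1-F(u)=u^{-a}$, $u\ge 1$ (so that $L\equiv 1$), the norming $a(n)=n^{1/a}$, $b(n)=0$ works. The sketch below uses arbitrary sample values of $a$, $n$ and $x$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
a, n, reps, x = 2.0, 2000, 5000, 1.5
samples = (1 - rng.uniform(size=(reps, n)))**(-1.0/a)   # P(X > u) = u^{-a}, u >= 1
maxima = samples.max(axis=1)/n**(1.0/a)
print(np.mean(maxima <= x), np.exp(-x**(-a)))           # both approximately 0.64
\end{verbatim}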
{\bf A. Processes without immigration.}\ In case of a simple BGW process, the following result holds.
\begin{theorem} Let $m=1$ and $\sigma^2<\infty$. (i) If (\ref{3.8}) holds, then \begin{equation} \label{thm2_dist}\lim_{n\to
\infty}P\left(\frac{{\cal M}_n-b(n)}{a(n)}\le x|Z_{n-1}>0\right)=\frac{1}{1+\sigma^2h(x, \theta)/2}. \end{equation}
(ii) If (\ref{24}) holds, then \begin{equation} \label{thm2_exp} \lim_{n\to
\infty}\frac{E({\cal M}_n|Z_{n-1}>0)}{n^{1/a}L_1\left(n\right)}= \frac{\pi/a}{\sin(\pi/a)} \qquad (a\ge 2), \end{equation} where $L_1(x)$ is certain s.v.f. with known asymptotics. \end{theorem}
The theorem implies that if $F\in D(\exp\{-e^{-x}\})$ then the limiting distribution is logistic with c.d.f. $\left(1+e^{-x}\right)^{-1}$; and if $F\in D(\exp\{-x^{-a}\})$ then the limiting distribution is log-logistic with c.d.f. $\left(1+x^{-a}\right)^{-1}$.
\begin{theorem} Let $m=1$, $\sigma^2=\infty$, and (\ref{24}) holds. Then for $x \geq 0$ and $1<a\le 2$ \begin{equation} \label{thm3_dist}\lim_{n\to\infty}P\left( \frac{\displaystyle {\cal M}_n}{\displaystyle n^{1/[a(a-1)]}L_2\left(n\right)} \leq x\mid Z_{n-1}>0\right) = 1-\frac{1}{\displaystyle \left(1+x^{a(a-1)}\right)^{1/(a-1)}} , \end{equation} which is a Burr Type XII distribution (e.g. \cite{Tad80}) and \begin{equation}\label{4.77}
\lim_{n\rightarrow \infty}\frac{E({\cal M}_n|Z_{n-1}>0) }{n^{1/[a(a-1)]}L_2\left(n\right)}\! \! = \frac{1}{a-1} B \left( \frac{1}{a-1}-\frac{1}{a(a-1)}, 1+\frac{1}{a(a-1)} \right) \quad (1<a\le 2), \end{equation} where $B(u,v)$ is the Beta function and $L_2(x)$ is certain s.v.f. with known asymptotics. \end{theorem}
Note that for $a=2$ the right-hand sides in (\ref{thm2_dist}) (under assumption (\ref{24})) and (\ref{thm2_exp}) coincide with those in (\ref{thm3_dist}) and (\ref{4.77}), respectively. The right-hand side in (\ref{4.77}) is the expected value of the limit in (\ref{thm3_dist}) (see \cite{Tad80}).
\begin{example} Let $1-F(x)\sim x^{-2}\log x$. In this case one can check (see \cite{RahYan99}) that Theorem~3 with $a=2$ implies \[ \lim_{n\to\infty}P\left( \frac{\displaystyle {\cal M}_n}{\displaystyle n^{1/2}(\log n)^{3/2}}\leq x\mid Z_{n-1}>0\right) = \frac{4x^2}{1+4x^2}\ . \] for $x \geq 0$ and \[
\lim_{n\rightarrow \infty}\frac{\displaystyle E({\cal M}_n|Z_{n-1}>0)}{\displaystyle n^{1/2}(\log n)^{3/2}} = \frac{\pi}{2}\ . \] \end{example}
{\bf B. Processes with immigration $\{Z^{im}_n\}$.}\ Let $\mu=EY_n$. We have the following theorem.
\begin{theorem} Assume that $ m=1, \ 0<\sigma^2<\infty$, and $0 < \mu < \infty.$ (i) If (\ref{3.8}) holds, then \begin{equation} \label{thm4_dist} \lim_{n\to \infty}P\left( \frac{{\cal M}^{im}_n-b(n)}{a(n)} \leq x \right) = \frac{1}{\displaystyle (1+\sigma^2 h(x, \theta)/2)^{2\mu/\sigma^2}}.\end{equation} (ii) If (\ref{24}) is true, then \begin{equation} \label{thm4_exp} \lim_{n\to \infty}\frac{\displaystyle EM^{im}_n}{\displaystyle n^{1/a}L_2(n)} = \frac{2\mu}{\sigma^2} B\left(\frac{2\mu}{\sigma^2}+\frac{1}{a}, 1-\frac{1}{a}\right) \quad (a\ge 2),\end{equation} where $B(u,v)$ is the Beta function and $L_2(x)$ is certain s.v.f. with known asymptotics. \end{theorem}
The theorem implies that if $F\in D(\exp\{-e^{-x}\})$ then the limiting distribution is generalized logistic with c.d.f. $\left(1+\sigma^2 e^{-x}/2\right)^{-2\mu/\sigma^2}$; if $F\in D(\exp\{-x^{-a}\})$ then the limiting distribution is a Burr Type III (e.g. \cite{Tad80}) with c.d.f. $\left(1+\sigma^2x^{-a}/2\right)^{-2\mu/\sigma^2}$. The right-hand side in (\ref{thm4_exp}) is the expected value of the limit in (\ref{thm4_dist}) (see \cite{Tad80}).
\begin{theorem} Let $m=1$, $\sigma^2=\infty$, and (\ref{24}) holds. In addition, suppose \begin{equation} \label{theta_cond}\Theta(x):=-\int_0^x \log[1-P(Z^{im}_t>0)]dt=c\log x+d+\varepsilon(x), \end{equation} where $\lim_{x\to \infty}\varepsilon(x)=0$, $c>0$, and $d$ are constants. Then for $x \geq 0$, \begin{equation} \label{thm5_dist}\lim_{n\to\infty}P\left( \frac{\displaystyle {\cal M}^{im}_n}{\displaystyle n^{1/[a(a-1)]}L_2\left(n\right)} \leq x\right) = \frac{1}{(1+x^{-a(a-1)})^c} \quad (1<a\le 2), \end{equation} which is a Burr Type III distribution (e.g. \cite{Tad80}) and \begin{equation} \label{thm5_exp}\lim_{n\rightarrow \infty}\frac{E{\cal M}_n^{im} }{n^{1/[a(a-1)]}L_2\left(n\right)}= cB\left(c+\frac{1}{a(a-1)}, 1-\frac{1}{a(a-1)}\right)\quad (1<a\le 2), \end{equation} where $B(u,v)$ is the Beta function and $L_2(x)$ is certain s.v.f. with known asymptotics. The right-hand side in (\ref{thm5_exp}) is the expected value of the limit in (\ref{thm5_dist}). \end{theorem} Note that for $c=1$ and $a=2$ the right-hand sides in (\ref{thm5_dist}) and (\ref{thm5_exp}) coincide with those in (\ref{thm3_dist}) and (\ref{4.77}), respectively. The condition (\ref{theta_cond}) holds even when the immigration mean is not finite. Next example illustrates this point.
\begin{example} Following \cite{Pak75}, we consider offspring and immigrants generated by \[ f(s)=1-(1-s)(1+(a-1)(1-s))^{-1/(a-1)}\ \mbox{and}\ g(s)=\exp\{-\lambda (1-s)^{a-1}\}, \] respectively. Then (\ref{24}) holds and (\ref{theta_cond}) yields \[ \Theta(t)=(\lambda/(a-1))[\log t+\log (a-1)+\log (1+(a-1)t)^{-1}]. \] Therefore, \[ \lim_{n\to\infty}P\left( \frac{\displaystyle {\cal M}^{im}_n}{\displaystyle n^{1/[a(a-1)]}L_3\left(n\right)} \leq x\right) = \frac{1}{(1+x^{-a(a-1)})^{\lambda/(a-1)}} \quad (1<a\le 2), \] and \[ \lim_{n\rightarrow \infty}\frac{E {\cal M}_n^{im} }{n^{1/[a(a-1)]}L_3\left(n\right)}= \frac{\lambda}{a-1}B\left(\frac{\lambda }{a-1}-\frac{1}{a(a-1)}, 1-\frac{1}{a(a-1)}\right) \quad (1<a\le 2), \] where $B(u,v)$ is the Beta function and $L_3(x)$ is certain s.v.f. with known asymptotics. \end{example}
{\bf C. Foster-Pakes processes $\{Z^0_n\}$.} The following limit theorem for ${\cal M}^0_n$ under a non-linear normalization holds.
\begin{theorem} Assume that $ m=1, \ 0<\sigma^2<\infty$, and $0 < \mu < \infty.$ If \begin{equation}\label{cond} \lim_{n\to\infty} {\displaystyle P(X_1(1)>n) \over P(X_1(1)>n+1)} = 1\ \end{equation} then for \ $0<x <1$, \begin{equation}\label{crthm} \lim_{n\to\infty} P\left( {\displaystyle \log U({\cal M}^0_n) \over \log n} \leq x \right) = x, \end{equation} where $U(y)=1/(1-F(y))$. \end{theorem}
Note that (\ref{cond}) is a necessary condition for $X_1(n)$ to be in a max-domain of attraction.
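For instance, if the offspring tail behaves like $1-F(y)=y^{-a}$ for large $y$, then $U(y)=y^{a}$ and (\ref{crthm}) says that $a\log {\cal M}^0_n/\log n$ converges in distribution to the uniform law on $(0,1)$; informally, ${\cal M}^0_n$ is of the rough order $n^{V/a}$ with $V$ uniformly distributed on $(0,1)$.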
\subsection{Supercritical processes} Denote by $\hat{{\cal M}}_n$ (as in the subcritical case above) the maximum family size in all three processes: $\{ Z_n\}$, $\{Z^{im}_n\}$, and $\{Z^0_n\}$. The following result is true.
\begin{theorem} Assume that $m>1$ and $EX_i(n)\log(1+ X_i(n))<\infty$. If (\ref{3.8}) holds, then \[ \lim_{n\to \infty}P\left( \frac{\hat{{\cal M}}_n-b(m^n)}{a(m^n)} \leq x \right) =\psi(h(x, \theta))\] If (\ref{24}) is true, then
\[\lim_{n\to \infty}\frac{E\hat{{\cal M}}_n}{m^{-n/a}L_1\left(m^{-n/a}\right)}=\int_0^\infty 1-\psi(x^{-a})dx, \] where $L_1(x)$ is certain s.v.f. with known asymptotics.
(i) in case of $\{Z_n\}$, $\psi$ is the unique, among the Laplace transforms, solution of \begin{equation} \label{psi_eqn} \psi(u)=f(\psi(um^{-1})), \qquad (u>0).\end{equation}
(ii) in case of $\{Z^{im}_n\}$, we assume in addition that $E \log(1+ Y_n) < \infty$ and \[
\psi(u) = \prod_{k=1}^{\infty}g(\varphi(um^{-k})) \qquad (u>0) \ , \] where $\varphi(u)$ is the unique, among the Laplace transforms, solution of (\ref{psi_eqn}).
(iii) in case of $\{Z^0_n\}$, we assume in addition that $E Y_n < \infty$ and \[ \psi(u) = g(\varphi(u)) - \sum_{n=0}^\infty [1 - f(\varphi({u m^{-n}}))]P(Z^0_n=0) \qquad (u>0)\] and $\varphi(u)$ is the unique, among the Laplace transforms, solution of (\ref{psi_eqn}). \end{theorem}
It is interesting to compare the limiting behavior of the maximum family size in the processes allowing immigration with that when the processes evolve in ``isolation'', i.e., without immigration. In the supercritical case, as might be expected, the immigration has little effect on the asymptotics of the maximum family size. The limits differ only in the form of the Laplace transform $\psi(u)$. In the subcritical and critical cases the mechanism of immigration eliminates the conditioning on non--extinction. Theorem~6 for the Foster-Pakes process differs from the rest of the results by the non-linear norming of ${\cal M}_n$. The study of the limiting behavior of the expectation in this case requires additional effort.
It is known that some of the most popular discrete distributions, like the geometric and the Poisson, do not belong to any max-domain of attraction. This restricts the applicability of the results in the critical and supercritical cases above. A general construction of discrete distributions attracted to a max-domain is given in Wilms (1994). As is proved there, if $X$ is attracted by a Gumbel or Fr\'{e}chet distribution, then the same holds for the integer part $[X]$. Next we follow a different approach, considering triangular arrays of geometric variables, which leads to branching processes with varying environments.
The results in this section are published in \cite{Mit98}, \cite{MitYan99}, and \cite{RahYan96}-\cite{RahYan99}. In \cite{YanTso00} an extension for order statistics is considered.
\section{Maximum family size in processes with varying environments}
It is well-known that the geometric law is not attracted to any max-stable law. Therefore, the limit theorems for maxima in the critical and supercritical cases above do not apply to geometric offspring. In this section we utilize a triangular array of zero-modified geometric (ZMG) offspring distributions, instead.
\subsection{Maxima of arrays of zero-modified geometric variables} In this subsection we prove limit theorems for the maximum of ZMG random variables with p.m.f. \begin{eqnarray*} P(X_i(n)=j)=\cases{a_np_n(1-p_n)^{j-1} & if $j\ge 1$, \cr 1-a_n & if $j=0$, $\qquad (n=1,2,\ldots)$} \end{eqnarray*} For a
positive integer $\nu_n$ consider the triangular array of variables \begin{eqnarray*}
X_1(1), X_2(1), & \ldots, & X_{\nu_1}(1)\\ X_1(2), X_2(2), & \ldots, & \qquad X_{\nu_2}(2) \\ & \ldots & \\ X_1(n), X_2(n), & \ldots, & \qquad \qquad \qquad X_{\nu_n}(n) \end{eqnarray*} We prove limit theorems as $\nu_n\to \infty$ for the row maxima \[ {\cal M}_n=\max_{1\le i\le \nu_n}X_i(n). \] Let $\Lambda$ have the standard Gumbel law with c.d.f. $ \exp(-e^{-x})$ for $-\infty<x<\infty$.
\begin{theorem} Assume that for some real $c$ \[ \lim_{n\to \infty}p_n=0 \quad \mbox{and} \quad \lim_{n\to \infty}p_n\log(\nu_na_n)=2c. \]
A. If $\lim_{n\to \infty}\log(\nu_na_n)=\infty$, then $c\ge0$ and \[ p_n{\cal M}_n-\log(\nu_na_n)\stackrel{d}\to \Lambda -c. \]
B. If $\lim_{n\to \infty}\log(\nu_na_n)=\alpha$, $(-\infty<\alpha<\infty)$, then \[ p_n{\cal M}_n \stackrel{d}\to (\Lambda+\alpha)^+. \] \end{theorem}
The idea of the proof is to exploit: (i) the exponential approximation to the zero-modified geometric law when its mean $a_n/p_n$ is large; (ii) the fact that exponential law is attracted by Gumbel distribution.
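Part A of the theorem above can also be illustrated numerically in the special case $a_n\equiv 1$, when the zero-modified geometric law is simply the geometric law on $\{1,2,\ldots\}$. In the sketch below the values of $c$ and $\nu_n$ are arbitrary, and the row maximum is sampled directly from its c.d.f. $(1-(1-p_n)^{k})^{\nu_n}$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
nu, c, reps = 10**6, 0.5, 200000
p = 2*c/np.log(nu)                      # then p_n*log(nu_n*a_n) -> 2c with a_n = 1
U = rng.uniform(size=reps)
tail = -np.expm1(np.log(U)/nu)          # 1 - U**(1/nu), computed stably
M = np.ceil(np.log(tail)/np.log1p(-p))  # inverse of the c.d.f. of the row maximum
x = 0.0
print(np.mean(p*M - np.log(nu) <= x), np.exp(-np.exp(-(x + c))))
# the two values should be close; the second is the limit P(Lambda - c <= x)
\end{verbatim}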
\subsection{Processes with varying geometric environments} Consider a branching process with ZMG offspring law defined over the triangular array above. Thus, we have a simple branching process with geometric varying environments. For this process we prove limit theorems for the offspring maxima in all three classes: subcritical, critical, and supercritical. Define $\mu_0=1$, \[
\mu_n=E(Z_n|Z_0=1)=\prod_{j=1}^nm_j \qquad (n\ge1). \] If the environments are weakly varying, i.e., $\mu=\lim_{n\to \infty}\mu_n$ exists, then the processes can be classified (see \cite{MitPakYan03}) as follows.
\begin{eqnarray*}
\{Z_n\}\ \mbox{is}\ \ \cases{\mbox{supercritical} & if $\mu=\infty$ \quad \quad \ \ \mbox{i.e.} \ $\sum_n(m_n-1)\to \infty$ \cr \mbox{critical} & if $\mu\in(0,\infty)$ \quad \mbox{i.e.} \ $\sum_n(m_n-1)< \infty$ \cr \mbox{subcritical} & if $\mu=0$ \quad \quad \quad \ \mbox{i.e.} \ $\sum_n(m_n-1)\to -\infty$} \end{eqnarray*} Define the maximum family size for the process with varying geometric environments as \[ {\cal M}_n^{ge}=\max_{1\le i\le Z_n}X_i(n), \qquad (n=1,2,\ldots) \] In the result below the role played by $\nu_n$ before is played by $B_{n-1}$ where \[ B_n=\mu_n\sum_{j=1}^n \frac{p_j^{-1}-1}{\mu_j}. \] Let ${\cal V}$ be a standard logistic random variable with c.d.f. $(1+e^{-x})^{-1}$ for $-\infty<x<\infty$.
\begin{theorem} Suppose that $\lim_{n\to \infty} B_n=\infty$ and for $c$ real \[ \lim_{n\to \infty}p_n=0 \quad \mbox{and} \quad \lim_{n\to \infty}p_n\log(B_{n-1}a_n)=2c. \]
A. If $\lim_{n\to \infty}\log(B_{n-1}a_n)=\infty$, then \[
(p_n{\cal M}_n^{ge}-\log(B_{n-1}a_n)|Z_{n-1}>0)\stackrel{d}\to {\cal V} -c. \]
B. If $\lim_{n\to \infty}\log(B_{n-1}a_n)=\alpha$, $(-\infty<\alpha<\infty)$, then \[
(p_n{\cal M}_n^{ge}|Z_{n-1}>0) \stackrel{d}\to ({\cal V}+\alpha)^+. \] \end{theorem}
Referring to the above theorem, we can say that the branching mechanism transforms the Gumbel distribution into the logistic distribution. It is interesting to notice that this parallels results for maxima of i.i.d. random variables with a random, geometrically distributed index discussed in \cite{GneGne82}.
\begin{example} Let us sample a linear birth and death process $({\cal B}_t)$ at irregular times. Let $Z_n={\cal B}_{t_n}$ where $0<t_n<t_{n+1}\to t_{\infty}\le \infty$. If $\lambda$ and $\mu$ are the birth and death rates, respectively, and $d_n=t_n-t_{n-1}$, then $a_n=m_np_n$, $$p_n=\cases{{\displaystyle \lambda-\mu\over \displaystyle \lambda m_n-\mu} & if $\lambda \not=\mu$,\cr \frac{\displaystyle 1}{\displaystyle 1+\lambda d_n} & if $\lambda=\mu$,} \quad m_n=e^{(\lambda -\mu)d_n}.$$ and $$B_n=\cases{{\displaystyle \lambda (\mu_n-1)\over \displaystyle \lambda -\mu} & if $\lambda \not=\mu$,\cr \lambda t_n & if $\lambda =\mu$,} \quad \mu_n=e^{(\lambda-\mu)t_n}.$$
A. If $\lambda>\mu$ and $$\lim_{n\to \infty}\frac{\displaystyle t_n}{\displaystyle m_n}=\frac{2c}{\lambda-\mu}\in [0,\infty),$$ then \[
\left(\frac{{\cal M}_n^{ge}}{m_n}-(\lambda-\mu)t_n\ |\ Z_{n-1}>0\right)\stackrel{d}\to {\cal V} -c. \] B. If $\lambda=\mu$ and $t_n=n^{\delta}l(n)$ \ \ $(\delta \ge 1)$, then \[
\left(\frac{{\cal M}_n^{ge}}{\lambda\delta n^{\delta-1}l(n)}-\log n\ |\ Z_{n-1}>0\right) \stackrel{d}\to {\cal V}. \] \end{example}
The results in this section can be found in \cite{MitPakYan03}.
\section{Maxima in bisexual processes}
In this section we consider maxima of triangular arrays of bivariate geometric random vectors. The obtained results are applied to a class of bisexual branching processes.
\subsection{Max-domain of attraction of bivariate geometric arrays} The following construction is due to Marshall and Olkin \cite{MO85}. Consider a random vector $(U, V)$ having Bernoulli marginals, i.e., it takes on four possible values (0,0), (0,1), (1,0), and (1,1) with probabilities $p_{00}, \ p_{01}, \ p_{10}$, and $p_{11}$, respectively. Thus the marginal probabilities for $U$ and $V$ are \begin{eqnarray*} P(U=0)=p_{0+}=p_{00}+p_{01}, & & P(U=1)=p_{1+}=p_{10}+p_{11} \\
P(V=0)=p_{+0}=p_{00}+p_{10}, & & P(V=1)=p_{+1}=p_{01}+p_{11} .
\end{eqnarray*} Consider a sequence $\{(U_n, V_n)\}_{n=1}^\infty$ of independent random vectors, each distributed as $(U, V)$. Let $\xi$ and $\eta$ be the number of zeros preceding the
first 1 in the sequences $\{U_n\}_{n=1}^\infty$ and
$\{V_m\}_{n=1}^\infty$, respectively. Both $\xi$ and $\eta$ follow
a geometric distribution and, in general, they are dependent variables.
The vector $(\xi, \eta)$ has a bivariate geometric distribution
with probability mass function for integer $l$ and $k$
\begin{equation}\label{def1} P(\xi=l, \eta=k) =
\left\{ \begin{array}{ll}
p_{00}^lp_{10}p_{+0}^{k-l-1}p_{+1} & \mbox{if} \quad 0\leq l <k,\\
p_{00}^l p_{11} & \mbox{if} \quad l=k, \\
p_{00}^k p_{01} p_{0+}^{l-k-1}p_{1+} & \mbox{if} \quad 0\leq k <l.\\ \end{array} \right. \end{equation} and \begin{equation}\label{def2} P(\xi > l, \eta > k)=
\left\{ \begin{array}{ll}
p_{00}^{l+1}p_{+0}^{k-l}& \mbox{if} \quad 0\leq l \leq k,\\
p_{00}^{k+1} p_{0+}^{l-k} & \mbox{if} \quad 0\leq k <l.\\ \end{array} \right. \end{equation} The marginals of $\xi$ and $\eta$ for integer $l$ and $k$ are $ P(\xi=l)=p_{1+}p_{0+}^l \quad (l\geq 0)$ and $P(\eta=k)=p_{+1}p_{+0}^k \quad (k\geq 0)$, respectively and \begin{eqnarray} \label{marg} \bar{F}_\xi(l)=P(\xi > l)=p_{0+}^{l+1} \quad (l\geq 0), \qquad \bar{F}_\eta(k)=P(\eta > k)=p_{+0}^{k+1} \quad (k\geq 0). \end{eqnarray}
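The construction and the joint tail (\ref{def2}) are easy to check by simulation; in the sketch below the cell probabilities, the horizon and the levels $l$, $k$ are arbitrary illustrative choices.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
p00, p01, p10, p11 = 0.5, 0.2, 0.2, 0.1
reps, horizon = 100000, 100              # horizon long enough that a 1 occurs w.h.p.
cells = rng.choice(4, size=(reps, horizon), p=[p00, p01, p10, p11])
U = (cells == 2) | (cells == 3)          # U_n = 1 on the cells (1,0) and (1,1)
V = (cells == 1) | (cells == 3)          # V_n = 1 on the cells (0,1) and (1,1)
xi = U.argmax(axis=1)                    # zeros preceding the first 1 in {U_n}
eta = V.argmax(axis=1)
l, k = 2, 4
print(np.mean((xi > l) & (eta > k)), p00**(l + 1) * (p00 + p10)**(k - l))
# empirical frequency versus p00^{l+1} p_{+0}^{k-l}; both approximately 0.061
\end{verbatim}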
For $n=1,2,\ldots$, let $\nu_n$ be a positive integer and $\{(\xi_i(n), \eta_i(n)): i=1,2,\ldots, \nu_n\}$ be a triangular array of independent random vectors with the same bivariate geometric distribution (\ref{def1}) where $p_{ij}$ are replaced by $p_{ij}(n)\ \ (i,j=0,1)$ for $n=1,2,\ldots$ That is, \begin{eqnarray*} (\xi_1(1), \eta_1(1)), (\xi_2(1), \eta_2(1)), & \ldots, & (\xi_{\nu_1}(1), \eta_{\nu_1}(1))\\ (\xi_1(2), \eta_1(2)), (\xi_2(2), \eta_2(2)), & \ldots, & \qquad (\xi_{\nu_2}(2), \eta_{\nu_2}(2)) \\ & \ldots & \\ (\xi_1(n), \eta_1(n)), (\xi_2(n), \eta_2(n)), & \ldots, & \qquad \qquad \qquad (\xi_{\nu_n}(n), \eta_{\nu_n}(n)) \end{eqnarray*} Below we prove a limit theorem as $\nu_n \to \infty$ for the bivariate row maximum \[ ({\cal M}_n^\xi, {\cal M}_n^\eta)=\left(\max_{1\leq i\leq \nu_n} \xi_i(n), \max_{1\leq i\leq \nu_n}\eta_i(n)\right). \]
\begin{theorem} Let $\lim_{n\to \infty}\nu_n=\infty$. If there are constants $0\leq a, b, c < \infty$, such that \begin{equation} \label{assum2} \lim_{n\to \infty}p_{11}(n)\log \nu_n = 2c \quad \lim_{n\to \infty} \frac{p_{10}(n)}{p_{11}(n)}\log \nu_n = a \quad \mbox{and} \quad \lim_{n\to \infty} \frac{p_{01}(n)}{p_{11}(n)}\log \nu_n = b, \end{equation} then for $x, y \geq 0$ \begin{eqnarray*} \lefteqn{ \lim_{n\to\infty}
P\left( p_{11}(n){\cal M}_n^\xi-\log\nu_n\leq x, \ \ p_{11}(n){\cal M}_n^\eta-\log\nu_n\leq
y \right)}\\
& & =
\exp\left\{ -e^{ -x-a-c}-e^{ -y-b-c}+e^{ -\max\{x,y\}-a -b -c}\right\}.
\end{eqnarray*} \end{theorem}
{\bf Proof}\ Set $x_n=(x+\log \nu_n)/p_{11}(n)$ and $y_n=(y+\log \nu_n)/p_{11}(n)$. \begin{eqnarray*} P\left( {\cal M}_n^\xi\leq x_n, {\cal M}_n^\eta\leq y_n\right) & = & (F(x_n, y_n))^{\nu_n} \\
& = &
\left(1-\bar{F}_\xi(x_n)-\bar{F}_\eta(y_n)+P(\xi_i(n)>x_n, \eta_i(n)>y_n)\right)^{\nu_n}.
\end{eqnarray*}
Let $x<y$ and thus, $x_n<y_n$. Taking logarithm, expanding in Taylor
series,
and using
(\ref{def2}) and (\ref{marg}), we obtain
\begin{eqnarray} \label{lim0}
\lefteqn{\log P\left( {\cal M}_n^\xi\leq x_n, {\cal M}_n^\eta\leq y_n\right)}\\
& = & \nu_n \log \left(1-\bar{F}_\xi(x_n)-\bar{F}_\eta(y_n)+P(\xi_i(n)>x_n, \eta_i(n)>y_n)\right) \nonumber \\
& = &
-\nu_n \left\{ [\bar{F}_\xi(x_n)+\bar{F}_\eta(y_n)-P(\xi_i(n)>x_n, \eta_i(n)>y_n)](1+o(1))\right\}\nonumber \\
& = &
-\left( \nu_n p_{0+}(n)^{[x_n]+1} + \nu_n p_{+0}(n)^{[y_n]+1}
- \nu_n p_{00}(n)^{[x_n]+1}p_{+0}(n)^{[y_n]-[x_n]}\right)
(1+o(1)) . \nonumber
\end{eqnarray}
Write $[x_n]=x_n-\{x_n\}$, where $0\leq \{x_n\}<1$ is
the fractional part of $x_n$. It is easily seen that
$\lim_{n\to \infty}(p_{0+}(n))^{[x_n]+1}=\lim_{n\to
\infty}(p_{0+}(n))^{x_n+1-\{x_n\}}=
\lim_{n\to \infty}(p_{0+}(n))^{x_n}$ as $n\to \infty$.
Furthermore, taking into account (\ref{assum2}), we have
\begin{eqnarray*}
\lefteqn{\log \left(\nu_n p_{0+}^{x_n}(n)\right)=
\log \nu_n + \frac{x+\log
\nu_n}{p_{11}(n)}\log(1-p_{1+}(n))}\nonumber \\
& = &
\log \nu_n - \frac{x+\log
\nu_n}{p_{11}(n)}\left(p_{11}(n)+p_{10}(n)
+\frac{1}{2}(p_{11}(n)+p_{10}(n))^2+O(p^3_{1+}(n))\right)
\nonumber \\
& = &
-x(1+o(1)) - \frac{p_{10}(n)}{p_{11}(n)}\log \nu_n -
\frac{(p_{11}(n)+p_{10}(n))^2}{2p_{11}(n)}\log \nu_n +
O(p^2_{11}(n))
\nonumber \\
& = &
-x(1+o(1))-\left(\frac{p_{10}(n)}{p_{11}(n)}+
\frac{1}{2}p_{11}(n)\right)\log\nu_n (1+o(1)) +
O(p^2_{11}(n))
\nonumber \\
& \to &
-x -a -c \ .
\nonumber
\end{eqnarray*}
Therefore
\begin{equation} \label{lim1}
\lim_{n \to \infty}\nu_n p_{0+}(n)^{[x_n]+1}=e^{\displaystyle -x -a -c} \ .
\end{equation}
Similarly we arrive at
\begin{equation} \label{lim2}
\hspace{-0.3cm} \lim_{n \to \infty}\nu_n p_{+0}(n)^{[y_n]+1}=e^{\displaystyle -y -b -c} \
\mbox{and} \
\lim_{n \to \infty}\nu_n p_{00}(n)^{[x_n]+1}=e^{\displaystyle -x -a -b -c} \ .
\end{equation}
Finally,
\begin{eqnarray*}
\lefteqn{\log p_{+0}^{y_n-x_n}(n)=
\frac{(y+\log\nu_n)-(x+\log\nu_n)}{p_{11}(n)}\log
(1-p_{11}(n)-p_{01}(n))} \\
& = &
-\frac{y-x}{p_{11}(n)}\left(p_{11}(n)+p_{01}(n)
+\frac{1}{2}(p_{11}(n)+p_{01}(n))^2+O(p^3_{+1}(n))\right)\\
& = &
x-y -(y-x)\left(
\frac{p_{01}(n)}{p_{11}(n)}(1+o(1))+\frac{1}{2}p_{11}(n)(1+o(1))
+ O(p^2_{+1}(n))\right)\\
& \to &
x-y
\end{eqnarray*}
Thus,
\begin{equation} \label{lim3}
\lim_{n \to \infty}p_{+0}(n)^{[y_n]-[x_n]}=e^{\displaystyle x-y} \ .
\end{equation}
The assertion of the theorem for $x<y$ follows from
(\ref{lim0})-(\ref{lim3}). The case $y<x$ is treated
similarly, and the case $x=y$ then follows by monotonicity and the continuity of the limit. This completes the proof.

In particular, if $a=b=0$ then
\[ \lim_{n\to\infty}
P\left( p_{11}(n){\cal M}_n^\xi-\log\nu_n\leq x, p_{11}(n){\cal M}_n^\eta-\log\nu_n\leq
y \right)
=
\exp\left\{ -e^{ -\min\{x,y\}-c}\right\}.
\]
Note that in this case the limit is proportional to the upper bound for the possible asymptotic
distribution of a multivariate maximum given in
\cite{Gal87}, Theorem~5.4.1.
For the componentwise maxima, applying Theorem~10, one can obtain
the following limiting results. If
$p_{1+}(n)\log \nu_n \to 2c_1<\infty$, then
\[
\lim_{n\to\infty}P( p_{11}(n){\cal M}_n^\xi-\log\nu_n\leq x)
=
\exp\left\{ -e^{ -x-c_1}\right\}.
\]
If $p_{+1}(n)\log \nu_n \to 2c_2<\infty$, then
\[
\lim_{n\to\infty}
P\left(p_{11}(n){\cal M}_n^\eta-\log\nu_n\leq
y \right) =
\exp\left\{ -e^{ -y-c_2}\right\}.
\]
\subsection{Bisexual processes with varying geometric environments} Consider the array of bivariate random vectors $\{(\xi_i(n),\eta_i(n)):\ i=1,2,\ldots; \ n=0,1,\ldots\}$, which are independent with respect to both indexes. Let $L:{\cal R}^+\times{\cal R}^+\to{\cal R}^+$ be a mating function. A bisexual process with varying environments is defined (see \cite{MMR04}) by the recurrence: $Z_0=N>0$, \[ (Z^F_{n+1},Z^M_{n+1})=\sum_{i=1}^{Z_n}(\xi_i(n),\eta_i(n))\] and \[ Z_{n+1}=L(Z^F_{n+1},Z^M_{n+1}) \quad (n=0,1,\ldots). \] Define the mean growth rate per mating unit \[
r_{nj}=j^{-1}E(Z_{n+1}|Z_n=j) \quad (j=1,2,\ldots), \qquad r_n=\lim_{j\to \infty}r_{nj}, \quad \mbox{and} \quad \mu_n=\prod_{i=0}^{n-1}r_{i1}, \ \mu_0=1 \quad (n=1,2, \ldots). \]
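Since the recursion above is straightforward to simulate, we record a small Python sketch of it, purely for illustration; the offspring sampler and the mating function are passed in as arguments (these names are ours), and the promiscuous mating $L(x,y)=x\min\{1,y\}$ considered below is used as the default.
\begin{verbatim}
def simulate_bisexual(Z0, generations, sample_pair,
                      L=lambda females, males: females * min(1, males)):
    """Simulate Z_0, Z_1, ..., up to `generations` steps.

    sample_pair(n) must return one offspring vector (xi_i(n), eta_i(n)) of a
    single mating unit in generation n; L is the mating function.
    """
    Z, history = Z0, [Z0]
    for n in range(generations):
        females = males = 0
        for _ in range(Z):
            xi, eta = sample_pair(n)
            females += xi
            males += eta
        Z = L(females, males)
        history.append(Z)
        if Z == 0:          # extinction
            break
    return history
\end{verbatim}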
{\bf Lemma } (\cite{MMR04}) {\it If
\begin{equation} \label{assump1}
\sum_{n=0}^\infty \left( 1- \frac{r_{n1}}{r_n}\right) < \infty \
\end{equation}
then \[
\lim_{n\to \infty} \frac{Z_n}{\mu_n
} = W \quad \mbox{a.s.},
\]
where $W$ is a nonnegative random variable with
$E(W)<\infty$.
If, in addition, there exist constants $A>0$ and $c>1$ such that
\begin{equation} \label{assump2}
\prod_{i=j}^{n+j-1}r_{i1}\geq Ac^n \quad j=1,2,\ldots; \
n=0,1,\ldots
\end{equation}
and there exists a random variable $X$ with $E(X\log(1+X))<\infty$
such that for any $u$
\begin{equation} \label{assump3}
P(X\leq u) \leq P\left(\frac{L(\xi_i(n), \eta_i(n))}{r_{n1}}\leq u\right)\qquad
(n=0,1,\ldots),
\end{equation} then $P(W>0)>0$.}
Further on we assume that, for each $n$, the vectors $(\xi_i(n),\eta_i(n))$, $i\ge 1$, are i.i.d. copies of the bivariate geometric vector $(\xi, \eta)$ introduced above, with $p_{ij}$ replaced by $p_{ij}(n)$, and that the mating is promiscuous, i.e., \begin{equation} \label{mating} L(\xi(n),\eta(n))=\xi(n)\min\{1,\eta(n)\}.\end{equation}
\begin{theorem} \label{cpm2} Let $\{ Z_n\}$ be a bisexual branching process with varying geometric environments and mating function (\ref{mating}). If \begin{equation} \label{cpm_lim_assum1}
\prod_{j=1}^\infty p_{+0}(j)p_{0+}(j)\neq 0 \quad \mbox{and} \quad \sum_{n= 0}^\infty
p_{+1}(n)<\infty \ ,
\end{equation}
then \begin{equation} \label{cpm_Wlim1}
\lim_{n\to \infty} \frac{Z_n}{\mu_n
} = W \quad \mbox{a.s.},
\end{equation}
where $W$ is a nonnegative random variable with
$E(W)<\infty$ and $P(W>0)>0$.
\end{theorem}
{\bf Proof}\ To prove the theorem it is sufficient to verify the assumptions (\ref{assump1})-(\ref{assump3}) in the above lemma. First, we prove that (\ref{assump1}) holds. Indeed, for $j\geq 1$, conditioning on the event $\{Z_n=j\}$ throughout, \begin{eqnarray} jr_{nj} & = & E(Z^F_{n+1}\min \{1,Z^M_{n+1}\}) \label{growth rate}\\
& = &
EE(Z^F_{n+1}\min \{1,Z^M_{n+1}\}\ |\ Z^M_{n+1}) \nonumber \\
& = &
(1-P(Z^M_{n+1}=0))EZ^F_{n+1} \nonumber \\
& = &
(1-p_{+1}^j(n))\frac{jp_{0+}(n)}{p_{1+}(n)}\ , \nonumber
\end{eqnarray}
where we have used that both $Z^M_{n+1}$ and $Z^F_{n+1}$ are
negative binomial with parameters $(j, p_{+1}(n))$
and $(j,p_{1+}(n))$, respectively. Thus,
\begin{equation} \label{cpm_rn}
r_n=\lim_{j\to \infty}r_{nj}=\lim_{j\to
\infty}
(1-p_{+1}^j(n))\frac{p_{0+}(n)}{p_{1+}(n)}=\frac{p_{0+}(n)}{p_{1+}(n)}
\ .
\end{equation}
Now, (\ref{growth rate}) and (\ref{cpm_rn}) imply
$
1-r_{n1}/r_n=p_{+1}(n)
$,
which along with (\ref{cpm_lim_assum1}) leads to
(\ref{assump1}).
Let us prove (\ref{assump3}). Indeed, for $k\geq 1$
\begin{eqnarray*}
P(L(\xi(n),\eta(n))=k)\! & \!= \!& \! \sum_{j=1}^\infty
P(\xi(n)\min\{1,\eta(n)\}=k|\eta(n)=j)P(\eta(n)=j) \\
\! & = & \! P(\xi(n)=k)\sum_{j=1}^\infty P(\eta(n)=j) \nonumber \\
\! & = & \! p_{1+}(n)p_{0+}^k(n)\sum_{j=1}^\infty
p_{+1}(n)p_{+0}^j(n) \nonumber \\
\! & = & \!
p_{+0}(n)p_{1+}(n)p_{0+}^k(n)\ .
\nonumber
\end{eqnarray*} Therefore,
$
P(L(\xi(n),\eta(n))/r_{n1}\geq u)=p_{0+}^{[ur_{n1}]+1}(n)
$
and hence, similarly to (\ref{lim1}), taking into account
(\ref{cpm_lim_assum1}), we obtain
\begin{eqnarray*}
\log P\left(\frac{\displaystyle L(\xi(n),\eta(n))}{\displaystyle r_{n1}}\geq u\right) & \sim & ur_{n1}\log p_{0+}(n) \\
& = &
-u
\frac{p_{+0}(n)p_{0+}(n)}{p_{1+}(n)}p_{1+}(n)(1+o(1))
\\
& \to & -u
\end{eqnarray*}
Thus,
$\lim_{n\to \infty}P(L(\xi(n),\eta(n))/r_{n1}\geq u)=e^{-u}$, which implies (\ref{assump3}).
Finally, to prove (\ref{assump2}), observe that (\ref{growth rate}) implies for any $j$
and $n$
\begin{eqnarray*}
\prod_{i=j}^{n+j-1}r_{i1} & = &
\prod_{i=j}^{n+j-1}p_{+0}(i)p_{0+}(i)
\prod_{i=j}^{n+j-1}p_{1+}^{-1}(i) \\
& \geq &
\prod_{i=1}^\infty p_{+0}(i)p_{0+}(i)
\prod_{i=j}^{n+j-1}p_{1+}^{-1}(i) \\
& \geq &
Ac^n \ ,
\end{eqnarray*}
where $A=\prod_{i=1}^\infty p_{+0}(i)p_{0+}(i)>0$ (provided that the product in (\ref{cpm_lim_assum1}) is finite) and
$c=\min_{i\geq j}p^{-1}_{1+}(i)>1$ ($p_{1+}(i)\to 0$ under
(\ref{cpm_lim_assum1})). (\ref{assump2}) also holds if
the product in (\ref{cpm_lim_assum1}) is infinite. Now, referring to the above lemma we complete the
proof of the theorem.
Define offspring maxima in the bisexual process $\{Z_n\}$ by \[ ({\cal M}_n^F, {\cal M}_n^M)=\left(\max_{1\le i\le Z_{n}}\xi_i(n),\ \max_{1\le i\le Z_{n}}\eta_i(n)\right). \]
\begin{theorem} Assume that $\mu_n\to \infty$ and there are constants $0\leq a, b, c < \infty$, such that \begin{equation} \label{assum_1} \lim_{n\to \infty}p_{11}(n)\log \mu_n = 2c \quad \lim_{n\to \infty} \frac{p_{10}(n)}{p_{11}(n)}\log \mu_n = a \quad \mbox{and} \quad \lim_{n\to \infty} \frac{p_{01}(n)}{p_{11}(n)}\log \mu_n = b. \end{equation} Also assume that \begin{equation} \label{assum_2}
\prod_{j=1}^\infty p_{+0}(j)p_{0+}(j)\neq 0 \quad \mbox{and} \quad \sum_{n= 0}^\infty
p_{+1}(n)<\infty \ .
\end{equation} Then \[ \lim_{n\to\infty}
P\left( p_{11}(n){\cal M}_n^F -\log\mu_n\le x, p_{11}(n){\cal M}_n^M-\log\mu_n \le y\right) = \int_0^\infty (G(x,y))^zdP(W\le z), \] where \[ G(x,y)=\exp\left\{ -e^{ -x-a-c}-e^{ -y-b-c}+e^{ -\max\{x,y\}-a -b -c}\right\}.
\] \end{theorem}
{\bf Proof}\ Set $x_n=(x+\log \mu_n)/p_{11}(n)$ and $y_n=(y+\log \mu_n)/p_{11}(n)$. Since, given $\{Z_n=k\}$, the offspring vectors in the $n$-th generation are $k$ i.i.d. copies of $(\xi(n),\eta(n))$, we have \begin{equation} \label{theorem_11} P\left( {\cal M}_n^F\leq x_n,
{\cal M}_n^M\leq y_n \ | \ Z_n=k\right) = (F(x_n, y_n))^{k}, \end{equation} and under assumption (\ref{assum_1}) the proof of Theorem 11, with $\mu_n$ in place of $\nu_n$, gives $(F(x_n, y_n))^{\mu_n}\to G(x,y)$. Under (\ref{assum_2}), Theorem 12 implies \begin{equation} \label{theorem_12} \lim_{n\to \infty}P\left( \frac{Z_n}{\mu_n}\le x\right)=P(W\le x). \end{equation} Therefore, by (\ref{theorem_11}) and (\ref{theorem_12}), \begin{eqnarray*} \lefteqn{\hspace{-2cm}P\left({\cal M}_n^F\leq \frac{x+\log \mu_n}{p_{11}(n)}, {\cal M}_n^M\le \frac{y+\log \mu_n}{p_{11}(n)}\right)=\sum_{k= 0}^\infty P\left(Z_n=k
\right)(F(x_n, y_n))^{k}}\\ & = & \sum_{k=0}^\infty P\left( \frac{Z_n}{\mu_n}=\frac{k}{\mu_n}\right)\left((F(x_n, y_n))^{\mu_n}\right)^{k/\mu_n} \\ & \to & \int_0^\infty (G(x,y))^zdP(W\le z). \end{eqnarray*}
The next example, adapted from \cite{Mit05}, shows that the various conditions in Theorem~13 can be satisfied. \begin{example} Let $\alpha>1$ and $\beta>1$. Set \[ p_{11}(n)=n^{-\alpha} \qquad \mbox{and}\qquad p_{01}(n)=p_{10}(n)=n^{-(\alpha+\beta)} \qquad (n\ge 2). \] It is not difficult to see that with this choice of $p_{ij}(n)$ $(i,j=0,1)$, we have \[ \log \mu_n\sim \alpha n \log n \qquad \mbox{as}\qquad n\to \infty \] and both (\ref{assum_1}) (with $a=b=c=0$) and (\ref{assum_2}) are satisfied. \end{example}
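For the reader's convenience, here is one way to check these claims, using $r_{n1}=p_{+0}(n)p_{0+}(n)/p_{1+}(n)$, which follows from (\ref{growth rate}): since $p_{1+}(n)=p_{+1}(n)=n^{-\alpha}+n^{-(\alpha+\beta)}$ and $p_{0+}(n),p_{+0}(n)\to 1$,
\[
r_{n1}\sim n^{\alpha}, \qquad \log\mu_n=\sum_{i=0}^{n-1}\log r_{i1}\sim \alpha n\log n ,
\]
so that $p_{11}(n)\log\mu_n\sim\alpha n^{1-\alpha}\log n\to 0$ and $\frac{p_{10}(n)}{p_{11}(n)}\log\mu_n=\frac{p_{01}(n)}{p_{11}(n)}\log\mu_n\sim\alpha n^{1-\beta}\log n\to 0$, giving (\ref{assum_1}) with $a=b=c=0$, while $\sum_n p_{+1}(n)<\infty$ and $\prod_j p_{+0}(j)p_{0+}(j)\neq 0$ give (\ref{assum_2}).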
The exposition in this section follows \cite{Mit05}, extending some of the results there.
\section{Maximum score}
In this section we assume that every individual in a Galton-Watson family tree has a continuous random characteristic whose maximum is of interest.
\subsection{Maximum scores in Galton-Watson processes}
Let us go back to the simple BGW process and attach random scores to each individual in the family tree. More specifically, associate with the $j$-th individual in the $n$-th generation a continuous random variable $Y_j(n)$. Arnold and Villase\~{n}or (1996) published the first paper studying the maximal individual scores (``heights''). Pakes (1998) proves more general results concerning the laws of offspring score order statistics. Quoting \cite{Pak98}, ``these results provide examples of the behavior of extreme order statistics of observations from samples of random size.'' Denote by $M_{(k),n}$ the $k$-th largest score within the $n$-th generation and by $\bar{M}_{(k),n}$ the $k$-th largest among the random variables $\{Y_{i}(\nu):\ 1\le i\le Z_\nu, \ 0\le \nu\le n\}$, i.e., the $k$-th largest score up to and including the $n$-th generation. Pakes (1998) studies the limiting behavior of ``near maxima'', i.e., (upper) extreme order statistics $M_{(k),n}$ and $\bar{M}_{(k),n}$ when $n\to \infty$ and $k$ remains fixed. Two general cases arise, according to whether the law of $Z_n$ (or of the total progeny $T_n=\sum_{\nu=0}^nZ_\nu$), conditional on survival, does or does not require normalization in order to converge to a non-degenerate limit.
If no normalization is required then no particular restriction needs to be placed on the score distribution function $S$, but the limit laws are rather complex mixtures of the laws of extreme order statistics. The principal result states that \[
\lim_{n\to\infty}P(M_{(k),n}\le x| {\cal A}_n)=\sum_{j=1}^\infty\sum_{i=0}^{k-1}{j \choose i}(1-S(x))^iS^{j-i}(x)g_j, \] where it is assumed that the conditional law ${\cal G}_n$ of $Z_n$ given ${\cal A}_n$ (${\cal A}_n$ includes non-extinction) converges to a discrete and non-defective limit ${\cal G}$ and $g_j$ denote the masses attributed to $j$ by ${\cal G}$.
If normalization is required then one must assume that the score distribution function $S$ is attracted to an extremal law, and then the limit laws are mixtures of the classical limiting laws of extreme order statistics. Let us assume that there are positive constants $C_n\uparrow \infty$ such that for the conditional law ${\cal G}_n$ we have $ {\cal G}_n(xC_n)\Rightarrow N(x), $ where $N(x)$ is a non-defective but possibly degenerate distribution function. Assume also that the score distribution function $S$ is in the domain of attraction of an extremal law given by (\ref{dom_attr}). The general result in \cite{Pak98} is \begin{equation}
\label{norm_limit}\lim_{n\to\infty}P\left(\frac{M_{(k),n}-b(C_n)}{a(C_n)}\le x|{\cal A}_n\right)=\sum_{i=0}^{k-1}\frac{(h(x, \theta))^i}{i!}\int_0^\infty y^ie^{-yh(x, \theta)}dN(y). \end{equation}
\begin{example} Consider an immortal (i.e., $P(X=0)=0$) supercritical process with shifted geometric offspring law given by its p.g.f. $f(s)=s/(1+m-ms)$ $(m>1)$. Then (see Pakes (1998)) (\ref{norm_limit}) becomes \[
\lim_{n\to\infty}P\left(\frac{M_{(k),n}-b(C_n)}{a(C_n)}\le x|{\cal A}_n\right)=1-\left(\frac{h(x, \theta)}{1+h(x, \theta)}\right)^k. \] Thus the limit has a generalized logistic law when the score law is attracted to the Gumbel law, $h(x, \theta)=e^{-x}$; and a Pareto-type law results when $S$ is attracted to the Fr\'{e}chet law. \end{example}
Phatarford (see \cite{Pak98}) has raised the question (in the context of horse racing), ``What is the probability that the founder of a family tree is better than all its descendants?'' The answer turns out to be $E\left(T^{-1}\right)$, where $T={\displaystyle \sum_{n=0}^\infty Z_n}$ is the total number of individuals in the family tree. More generally, if $\tau_n$ is the index of the generation up to the $n$-th which contains the largest score, Pakes (1998) proves that \[ P(\tau_n=k)=E\left(\frac{Z_k}{T_n}\right), \quad (k=0,1,\ldots,n), \] as well as limit theorems for $\tau_n$ as $n\to \infty$.
This subsection is based on \cite{ArnVil96} and \cite{Pak98}.
\subsection{Maximum scores in two-type processes} Let each individual in a two-type branching process be equipped with a non-negative continuous random variable, an individual score. We present limit theorems for the maximum individual score. Consider two independent sets of independent random vectors with integer nonnegative components \[\{{\bf X}^1(n)\}=\{(X^1_{1j}(n), X^1_{2j}(n))\} \ \ \mbox{and} \ \ \{{\bf X}^2(n)\}=\{(X^2_{1j}(n), X^2_{2j}(n))\} \ \ (j\ge 1; n\ge 0). \] A two-type branching process $\{{\bf Z}(n)\}=\{(Z_1(n),Z_2(n))\}$ is defined as follows: ${\bf Z}(0)\neq {\bf 0}$ a.s. and for $n=1,2,\ldots$ $$ Z_1(n)=\sum_{j=1}^{Z_1(n-1)} X^1_{1j}(n) + \sum_{j=1}^{Z_2(n-1)} X^2_{1j}(n),$$ \[Z_2(n)=\sum_{j=1}^{Z_1(n-1)} X^1_{2j}(n) + \sum_{j=1}^{Z_2(n-1)} X^2_{2j}(n).\] Here $X^i_{kj}(n)$ refers to the number of offspring of type $k$ produced by the $j$-th individual of type $i$. With the $j$-th individual of type $i$ living in the $n$-th generation we associate a non-negative continuous random variable $\zeta_{ij}(n)$ $(i=1,2)$, a ``score'', say. Assume that the offspring of type $1$ and type $2$ have scores which are independent and identically distributed within each type. Define the maximum score within the $n$-th generation by \[{\cal M}_n^\zeta =\max\{{\cal M}_n^{\zeta_1},\ {\cal M}_n^{\zeta_2}\}, \qquad \mbox{where}\quad {\cal M}_n^{\zeta_i}=\max_{1\le j\le Z_i(n-1)}\zeta_{ij}(n)\qquad (i=1,2). \] Note that this is a maximum of a random number of independent but non-identically distributed random variables. Let $F_i(x)=P(\zeta_i\le x)$ $(i=1,2)$ be the c.d.f.'s of the scores of type 1 and type 2 individuals, respectively.
\noindent{\it Assumption 1}\ (tail-equivalence)\ We assume that $F_1$ and $F_2$ are tail equivalent, i.e., they have the same right endpoint $x_0$ and for some $A>0$ $$ \lim_{x \uparrow x_0} \frac{1-F_1(x)}{1-F_2(x)}=A.$$
\noindent{\it Assumption 2}\ (max-stability)\ Suppose $F_1$ is in a max-domain of attraction, i.e., (\ref{3.8}) holds.
We consider the critical branching process ${\bf Z}(n)$ with mean matrix ${\bf M}$, which is positively regular and nonsingular. Let ${\bf M}$ have maximum eigenvalue 1 and associated right and left eigenvectors ${\bf u}=(u_1,u_2)$ and ${\bf v}=(v_1,v_2)$, normalized such that ${\bf u \cdot v}=1$ and ${\bf u\cdot 1}=1$.
\begin{theorem}\ Let $\{{\bf Z}(n)\}$ be the above critical two-type branching process. If the offspring variance $2B<\infty$ and both Assumptions 1 and 2 hold, then \begin{equation} \label{theorem_2D} \hspace{-0.3cm} \lim_{n \to \infty}
P\left(\frac{{\cal M}_n^\zeta-b(v_1Bn)}{a(v_1Bn)} \le x|{\bf Z}(n) \ne {\bf 0} \right)=\frac{1}{1+h(x,\theta)+(v_2/v_1)h(cx+d,\theta)}, \end{equation} where if $-\infty<\theta<\infty$ is fixed, then
$c=A^{1/|\theta |}$ and $d=0$; if $\theta\to \pm \infty$, then $c=1$ and $d=\ln A$. \end{theorem}
{\bf Proof}. Since $F_1(x)$ and $F_2(x)$ are tail-equivalent, we have (see \cite{Res87}, p.67) \[ \lim_{n\to \infty}\left(F_2(a(n)x + b(n))\right)^n = H(cx+d,\theta), \] where the constants $c$ and $d$ are as in (\ref{theorem_2D}). On the other hand, it is well-known (see \cite{AthNey72}, p.191) that for $x>0$ and $ y>0$ \[
\lim_{n\to\infty}
P\left(\frac{Z_1(n)}{v_1Bn}\le x,\frac{Z_2(n)}{v_2Bn}\le y|{\bf Z}(n) \ne {\bf 0} \right)=G(x,y), \]
where the limiting distribution has Laplace transform \begin{equation}\label{LT}
\psi(\lambda, \mu)=\frac{1}{1+\lambda+\mu} \qquad (\lambda>0, \ \mu > 0). \end{equation} Set $x_n=a(v_1Bn)x+b(v_1Bn)$, $s_n=k/v_1Bn$, and $t_n=l/v_2Bn$. Referring to the definition of both ${\cal M}_n^\zeta$ and process $\{{\bf Z}(n)\}$ we obtain \begin{eqnarray*}
\lefteqn{P\left({\cal M}_n^\zeta \leq x_n|{\bf Z}_n\neq {\bf 0}\right)=\sum_{(k,l)={\bf 0}}^\infty P\left( {\bf Z}(n)=(k,l)|{\bf Z}(n)\neq {\bf
0}
\right) P\left(\max\left\{{\cal M}_n^{\zeta_1}, {\cal M}_n^{\zeta_2}\right\}\le x_n\right)} \\ & = & \sum_{(k,l)={\bf 0}}^\infty P\left( \frac{Z_1(n)}{v_1Bn}=\frac{k}{v_1Bn},
\frac{Z_2(n)}{v_2Bn}=\frac{l}{v_2Bn}|{\bf Z}(n)\neq {\bf 0} \right) \left[F_1(x_n)\right]^k\left[F_2(x_n)\right]^l \\ & = & \sum_{(k,l)={\bf 0}}^\infty P\left(
\frac{Z_1(n)}{v_1Bn}=s_n, \frac{Z_2(n)}{v_2Bn}=t_n|{\bf Z}(n)\neq {\bf 0} \right) \left[F_1(x_n)\right]^{(v_1Bn)s_n}\left[F_2(x_n)\right]^{(v_1Bn)t_n(v_2/v_1)} \\ & \to & \int_0^\infty \int_0^\infty H(x,\theta)^s H(cx+d,\theta)^{(v_2/v_1)t}d G(s,t) \\ & = & \int_0^\infty \int_0^\infty \exp \left\{ -sh(x,\theta)-t\frac{v_2}{v_1} h(cx+d, \theta)\right\}d G(s,t)\\ & = & \left[ 1+h(x, \theta)+\frac{v_2}{v_1} h(cx+d, \theta)\right]^{-1}, \end{eqnarray*} where in the last formula we used the Laplace transform of $G(u,v)$ given in (\ref{LT}). The proof is complete.
The two examples below illustrate the kind of limit laws that can be encountered.
\begin{example} Let $F_1$ and $F_2$ be Pareto c.d.f.'s given for \ $x_i >\theta_i>0$ and $ c>0$ by \[ F_i(x_i)=1-\left(\frac{\theta_i}{x_i}\right)^c \qquad (i=1,2). \] Note that the two distributions share the same value of the parameter $c$. It is not difficult to check that the limit is log-logistic given by $$\lim_{n \to \infty}
P\left\{\frac{{\cal M}_n^\zeta}{\theta_1(v_1Bn)^{1/c}} \le x|{\bf Z}(n) \ne {\bf 0} \right\}=\left[1+\left(1+\frac{v_2}{v_1}\left(\frac{\theta_1}{\theta_2}\right)^{-c}\right)x^{-c}\right]^{-1}.$$ \end{example}
\begin{example} Let $F_1$ and $F_2$ be exponential and logistic c.d.f.'s given by \[ F_1(x_1)=1-e^{-x_1} \quad (0<x_1<\infty)\quad \mbox{and}\quad F_2(x_2)=\frac{1}{1+e^{-x_2}} \quad (-\infty < x_2 < \infty), \] respectively. It is known that both are in the max-domain of attraction of $H(x)=\exp\{-\exp\{-x\}\}$ and share (see \cite{AhsNev01}, p.91) the same normalizing constants $a(n)=1$ and $b(n)=\ln n$. This fact, after inspecting the proof of the theorem, allows us to bypass the tail-equivalence assumption and obtain a logistic limiting distribution, i.e., for $-\infty <x <\infty$
$$\lim_{n \to \infty} P\left\{{\cal M}_n^\zeta-\log (v_1Bn) \le x\ |\ {\bf Z}(n) \ne {\bf 0} \right\}= \left[1+\left(1+\frac{v_2}{v_1}\right)e^{-x}\right]^{-1}.$$ \end{example}
The results in this subsection are modifications of those in \cite{MitYan02}.
\section*{Acknowledgments.} I thank I. Rahimov for igniting my interest in extremes in branching processes. Thanks to the organizers of ISCPS 2007 for the excellent conference. This work is partially supported by NFSI-Bulgaria, MM-1101/2001.
\end{document}
\begin{document}
\title[Self-adjoint free semigroupoid algebras]{All finite transitive graphs admit a self-adjoint free semigroupoid algebra}
\author[A. Dor-On]{Adam Dor-On} \address{Department of Mathematical Sciences \\ University of Copenhagen \\ Copenhagen\\ Denmark} \email{[email protected]}
\author[C. Linden]{Christopher Linden} \address{Department of Mathematics \\ University of Illinois at Urbana-Champaign\\ Urbana\\ IL \\ USA} \email{[email protected]}
\subjclass[2010]{Primary: 47L55, 47L15, 05C20.} \keywords{Graph algebra, Cuntz Krieger, free semigroupoid algebra, road coloring, periodic, cyclic decomposition, directed graphs}
\thanks{The first author was supported by an NSF grant DMS-1900916 and by the European Union's Horizon 2020 Marie Sklodowska-Curie grant No 839412.}
\maketitle
\begin{abstract} In this paper we show that every non-cycle finite transitive directed graph has a Cuntz-Krieger family whose WOT-closed algebra is $B({\mathcal{H}})$. This is accomplished through a new construction that reduces this problem to in-degree $2$-regular graphs, which is then treated by applying the periodic Road Coloring Theorem of B\'eal and Perrin. As a consequence we show that finite disjoint unions of finite transitive directed graphs are exactly those finite graphs which admit self-adjoint free semigroupoid algebras. \end{abstract}
\section{Introduction}
One of the many instances where non-self-adjoint operator algebra techniques are useful is in distinguishing representations of C*-algebras up to unitary equivalence. By work of Glimm we know that classifying representations of non-type-I C*-algebras up to unitary equivalence cannot be done with countable Borel structures \cite{Gli61}. Hence, in order to distinguish representations of Cuntz algebra ${\mathcal{O}}_n$, one either restricts to a tractable subclass or weakens the invariant. By restricting to permutative or atomic representations, classification was achieved by Bratteli and Jorgensen in \cite{BJ99} and by Davidson and Pitts in \cite{DP99}.
Since general representations of ${\mathcal{O}}_n$ are rather unruly, one can weaken unitary equivalence by considering isomorphism classes of not-necessarily-self-adjoint free semigroup algebras, which are WOT-closed operator algebras generated by the Cuntz isometries of a given representation of ${\mathcal{O}}_n$. The study of free semigroup algebras originates from the work of Popescu on his non-commutative disc algebra \cite{Pop96}, and particularly from work of Arias and Popescu \cite{AP00}, and of Davidson and Pitts \cite{DP98, DP99}. This work was subsequently used by Davidson, Katsoulis and Pitts to establish a general non-self-adjoint structure theorem for any free semigroup algebra \cite{DKP01} which can be used to distinguish many representations of Cuntz algebra ${\mathcal{O}}_n$ via non-self-adjoint techniques.
The works of Bratteli and Jorgensen on iterated function systems were eventually generalized, and classification of Cuntz-Krieger representations of directed graphs found use in the work of Marcolli and Paolucci \cite{MP11} for producing wavelets on Cantor sets, and in work of Bezuglyi and Jorgensen \cite{BJ15} where they are associated to one-sided measure-theoretic dynamical systems called ``semi-branching function systems''.
Towards establishing a non-self-adjoint theory for distinguishing representations of directed graphs, and by building on work of many authors \cite{MS11, DLP05, JK05, KK05, Ken11, Ken13, KP04}, the first author together with Davidson and B. Li extended the theory of free semigroup algebras to classify representations of directed graphs via non-self-adjoint techniques \cite{DDL20}.
\begin{definition} Let $G=(V,E,r,s)$ be a directed graph with range and source maps $r,s: E \rightarrow V$. A family $\mathrm{S} = (S_v,S_e)_{v\in V,e\in E}$ of operators on Hilbert space $\mathcal{H}$ is a \emph{Toeplitz-Cuntz-Krieger} (TCK) family if \begin{enumerate} \item[1.] $\{S_v\}_{v\in V}$ is a set of pairwise orthogonal projections;
\item[2.] $S_e^*S_e = S_{s(e)}$ for every $e\in E$;
\item[3.] $\sum_{e\in F} S_eS_e^* \leq S_v$ for every finite subset $F\subseteq r^{-1}(v)$. \end{enumerate} We say that $\mathrm{S}$ is a \emph{Cuntz-Krieger} (CK) family if additionally \begin{enumerate} \item[4.]
$\sum_{e \in r^{-1}(v)}S_eS_e^* = S_v$ for every $v\in V$ with $0 < |r^{-1}(v)| < \infty$. \end{enumerate} We say $\mathrm{S}$ is a \textit{fully-coisometric} family if additionally \begin{enumerate} \item[5.] $\sotsum_{e \in r^{-1}(v)} S_e S_e^* = S_v$ for every $v \in V$. \end{enumerate} \end{definition}
Given a TCK family $\mathrm{S}$ for a directed graph $G$, we say that $\mathrm{S}$ is \emph{fully supported} if $S_v \neq 0$ for all $v\in V$. When $\mathrm{S}$ is not fully supported, we may induce a subgraph $G_{\mathrm{S}}$ on the support $V_{\mathrm{S}} = \{ \ v \in V \ | \ S_v \neq 0 \ \}$ of $\mathrm{S}$ so that $\mathrm{S}$ is really just a TCK family for the smaller graph $G_{\mathrm{S}}$. Thus, if we wish to detect some property of $G$ from a TCK family $\mathrm{S}$ of $G$, we will have to assume that $\mathrm{S}$ is fully supported. When $G$ is transitive and $S_w \neq 0$ for some $w \in V$ it follows that $\mathrm{S}$ is fully supported.
Studying TCK or CK families amounts to studying representations of Toeplitz-Cuntz-Krieger and Cuntz-Krieger C*-algebras. More precisely, let ${\mathcal{T}}(G)$ and ${\mathcal{O}}(G)$ be the universal C*-algebras generated by TCK and CK families respectively. Then representations of TCK or CK C*-algebras are in bijection with TCK or CK families respectively. The C*-algebra ${\mathcal{O}}(G)$ is the well-known graph C*-algebra of $G$, which generalizes the Cuntz--Krieger algebra introduced in \cite{CK80} for studying subshifts of finite type. We recommend \cite{Raeb05} for further preliminaries on TCK and CK families, as well as C*-algebras associated to directed graphs.
When $G=(V,E,r,s)$ is a directed graph, we denote by $E^{\bullet}$ the collection of finite paths $\lambda = e_1 ... e_n$ in $G$ where $s(e_i) = r(e_{i+1})$ for $i=1,...,n-1$. In this case we say that $\lambda$ is of length $n$, and we regard vertices as paths of length $0$. Given a TCK family $\mathrm{S} = (S_v,S_e)$ and a path $\lambda = e_1 ... e_n$ we define $S_{\lambda} = S_{e_1} \circ ... \circ S_{e_n}$. We extend the range and source maps of paths $\lambda = e_1...e_n$ by setting $r(\lambda) := r(e_1)$ and $s(\lambda) := s(e_n)$, and for a vertex $v \in V$ considered as a path we define $r(v) = v = s(v)$. A path $\lambda$ of length $|\lambda| >0$ is said to be a cycle if $r(\lambda) = s(\lambda)$. We will often not mention the range and source maps $r$ and $s$ in the definition of a directed graph, and understand them from context.
\begin{hypothesis}\label{h:1} Throughout the paper we will assume that whenever $\mathrm{S}=(S_v,S_e)$ is a TCK family, then $\sotsum_{v\in V} S_v = I_{{\mathcal{H}}}$. In terms of representations of the C*-algebras this is equivalent to requiring that all $*$-representations of our C*-algebras ${\mathcal{T}}(G)$ and ${\mathcal{O}}(G)$ are non-degenerate. \end{hypothesis}
\begin{definition} Let $G=(V,E)$ be a directed graph, and let $\mathrm{S} = (S_v,S_e)$ be a TCK family on a Hilbert space $\mathcal{H}$. The WOT-closed algebra ${\mathfrak{S}}$ generated by $\mathrm{S}$ is called a \emph{free semigroupoid algebra} of $G$. \end{definition}
The main purpose of this paper is to characterize which finite graphs admit self-adjoint free semigroupoid algebras. For the $n$-cycle graph, we know from \cite[Theorem 5.6]{DDL20} that $M_n(L^{\infty}(\mu))$ is a free semigroupoid algebra when $\mu$ is a measure on the unit circle $\mathbb{T}$ which is not absolutely continuous with respect to Lebesgue measure $m$ on $\mathbb{T}$. Thus, an easy example for a self-adjoint free semigroupoid algebra for the $n$-cycle graph is simply $M_n(\mathbb{C})$, by taking some Dirac measure $\mu = \delta_z$ for $z\in \mathbb{T}$.
On the other hand, for non-cycle graphs, examples which are self-adjoint are rather difficult to construct, and the first example showing that this is possible was provided by Read \cite{Rea05} for the graph with a single vertex and two loops. More precisely, Read shows that there are two isometries $Z_1, Z_2$ on a Hilbert space ${\mathcal{H}}$ with pairwise orthogonal ranges that sum up to ${\mathcal{H}}$ such that the WOT-closed algebra generated by $Z_1,Z_2$ is $B({\mathcal{H}})$. Read's proof was later streamlined and simplified by Davidson in \cite{Dav06}.
\begin{definition} Let $G = (V,E)$ be a directed graph. We say that $G$ is \begin{enumerate} \item \emph{transitive} if there is a path between any one vertex and another. \item \emph{aperiodic} if for any two vertices $v,w \in V$ there is a $K_0$ such that any length $K\geq K_0$ can occur as the length of a path from $v$ to $w$. \end{enumerate} \end{definition}
Notice that for \emph{finite} (both finitely many vertices and edges) transitive graphs, the notions of a CK family and fully-coisometric TCK family coincide. In \cite[Theorem 4.3 \& Corollary 6.13]{DDL20} restrictions were found on graphs and TCK families so as to allow for self-adjoint examples.
\begin{theorem}[Theorem 4.3 \& Corollary 6.13 in \cite{DDL20}] \label{t:sa-rest} Let ${\mathfrak{S}}$ be a free semigroupoid algebra generated by a TCK family $\mathrm{S}$ of a directed graph $G = (V,E)$ such that $S_v \neq 0$ for all $v\in V$. If ${\mathfrak{S}}$ is self-adjoint then \begin{enumerate} \item $\mathrm{S}$ must be fully-coisometric, and; \item $G$ must be a disjoint union of transitive components. \end{enumerate} \end{theorem}
Showing that non-cycle transitive graphs other than the single vertex with two loops admit a self-adjoint free semigroupoid algebra required new ideas from directed graph theory.
\begin{definition} Let $G = (V,E)$ be a transitive, finite and in-degree $d$-regular graph. A \emph{strong edge coloring} $c:E \rightarrow \{1,...,d\}$ is one where $c(e)\neq c(f)$ for any two distinct edges $e,f \in r^{-1}(v)$ and $v\in V$. \end{definition}
Whenever $G = (V,E)$ has a strong edge coloring $c$, it induces a labeling of finite paths $c : E^{\bullet} \rightarrow \mathbb{F}^+_d$ which is defined for $\lambda = e_1...e_n$ via $c(\lambda) = c(e_1) ... c(e_n)$. Since $c$ is a strong edge coloring, whenever $w \in V$ is a vertex and $\gamma = i_1 ... i_n \in \mathbb{F}^+_d$ with $i_j \in \{1,...,d\}$ is a word in colors, we may inductively construct a path $\lambda = e_1... e_n$ such that $c(e_j) = i_j$ and $r(e_1) = w$. In this way, every vertex $w$ and a word in colors uniquely define a back-tracked path whose range is $w$.
\begin{definition} Let $G = (V,E)$ be a transitive, finite and in-degree $d$-regular graph. A strong edge coloring is called \emph{synchronizing} if for some vertex $v \in V$ there is a word $\gamma_v \in \mathbb{F}^+_d$ in colors $\{1,...,d\}$ such that for any other vertex $w\in V$, the unique back-tracked path $\lambda$ with range $w$ and color $c(\lambda) = \gamma_v$ has source $s(\lambda) = v$. \end{definition}
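Strong edge colorings and synchronizing words are easy to experiment with on small examples. The following Python sketch is offered purely as an illustration: a graph is encoded simply as a list of triples $(s(e),r(e),c(e))$, and all names are ad hoc. It implements the back-tracking described above and tests whether a given word in colors is synchronizing.
\begin{verbatim}
def backward_step(edges, w, colour):
    """Return s(e) for the unique edge e with r(e) = w and c(e) = colour."""
    matches = [s for (s, r, c) in edges if r == w and c == colour]
    assert len(matches) == 1, "not a strong edge colouring"
    return matches[0]

def apply_word(edges, w, word):
    """Source of the unique back-tracked path with range w and colour word `word`."""
    for colour in word:
        w = backward_step(edges, w, colour)
    return w

def is_synchronizing_word(edges, vertices, word):
    """True iff every vertex is sent to one and the same vertex by `word`."""
    return len({apply_word(edges, w, word) for w in vertices}) == 1

# Toy example: two vertices 0, 1 with all four possible edges, strongly coloured.
edges = [(0, 0, 1), (1, 0, 2), (0, 1, 1), (1, 1, 2)]
print(is_synchronizing_word(edges, [0, 1], [1]))   # True: colour 1 always leads to 0
\end{verbatim}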
It is easy to see that if a finite in-degree regular graph has a \emph{synchronizing} strong edge coloring then it is aperiodic. The converse of this statement is a famous conjecture made by Adler and Weiss in the late 60s \cite{AW70}. This conjecture was eventually proven by Trahtman \cite{Tra09} and is now called the Road Coloring Theorem. The Road Coloring Theorem was the key device that enabled the construction of self-adjoint free semigroupoid algebras for in-degree regular aperiodic directed graphs in \cite[Theorem 10.11]{DDL20}.
\begin{definition} Let $G=(V,E)$ be a transitive directed graph. We say that $G$ \emph{has period $p$} if $p$ is the largest integer such that we can partition $V=\sqcup_{i =1}^p V_i$ so that when $e \in E$ is an edge with $s(e) \in V_i$ then $r(e) \in V_{i+1}$ (here we identify $p+1 \equiv 1$). This decomposition is called the \emph{cyclic decomposition} of $G$, and the sets $\{V_i\}$ are called the \emph{cyclic components} of $G$. \end{definition}
\begin{remark} In a transitive graph $G$ every vertex $v \in V$ has a cycle of finite length through it, and the length of any such cycle is divisible by every $p$ for which a partition as above exists. Hence the admissible $p$ are bounded, and transitive graphs have a well-defined finite period. \end{remark}
It is a standard fact that $G$ is $p$-periodic exactly when for any two vertices $v,w \in V$ there exists $0 \leq r < p$ and $K_0$ such that for any $K \geq K_0$ the length $pK + r$ occurs as the length of a path from $v$ to $w$, while $pK +r'$ does not occur for any $K$ and $0 \leq r' < p$ with $r\neq r'$. Hence, $G$ is $1$-periodic if and only if it is aperiodic. We will henceforth say that $G$ is periodic when $G$ is $p$-periodic with period $p \geq 2$. For a transitive directed graph, the period $p$ is also equal to the greatest common divisor of the lengths of its cycles. This equivalent definition of periodicity of a transitive directed graph is the one most commonly used in the literature.
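A standard way to compute the period in practice is via any breadth-first search: it equals the greatest common divisor of the quantities $\mathrm{level}(s(e))+1-\mathrm{level}(r(e))$ over all edges $e$, where $\mathrm{level}$ denotes the BFS distance from a fixed root. The short Python sketch below is an illustration only, with a graph given as a list of $(s(e),r(e))$ pairs and ad hoc names.
\begin{verbatim}
from collections import deque
from math import gcd

def period(vertices, edges):
    """Period of a finite transitive graph; edges is a list of pairs (s, r)."""
    out = {v: [] for v in vertices}
    for s, r in edges:
        out[s].append(r)
    root = next(iter(vertices))
    level = {root: 0}
    queue = deque([root])
    while queue:                      # BFS from the root along edge orientation
        u = queue.popleft()
        for w in out[u]:
            if w not in level:
                level[w] = level[u] + 1
                queue.append(w)
    p = 0
    for s, r in edges:                # gcd of the level "defects" of all edges
        p = gcd(p, level[s] + 1 - level[r])
    return p

# The 3-cycle has period 3; adding a chord that creates a 2-cycle makes it aperiodic.
print(period([0, 1, 2], [(0, 1), (1, 2), (2, 0)]))            # 3
print(period([0, 1, 2], [(0, 1), (1, 2), (2, 0), (1, 0)]))    # 1
\end{verbatim}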
In \cite[Question 10.13]{DDL20} it was asked whether periodic in-degree $d$-regular finite transitive graphs with $d\geq 3$ have a self-adjoint free semigroupoid algebra. In this paper we answer this question in the affirmative. In fact, we are able to characterize exactly which finite graphs have a self-adjoint free semigroupoid algebra. A generalization of the road coloring for periodic in-degree regular graphs proven by B\'eal and Perrin \cite{BP14} is then the replacement for Trahtman's aperiodic Road Coloring Theorem when the graph is periodic.
\begin{theorem} \label{t:safsa} Let $G = (V,E)$ be a finite graph. There exists a fully supported CK family $\mathrm{S}=(S_v,S_e)$ which generates a \emph{self-adjoint} free semigroupoid algebra ${\mathfrak{S}}$ if and only if $G$ is the union of transitive components.
Furthermore, if $G$ is transitive and not a cycle then $B({\mathcal{H}})$ is a free semigroupoid algebra for $G$ where ${\mathcal{H}}$ is a separable infinite dimensional Hilbert space. \end{theorem}
This paper is divided into four sections including this introduction. In Section \ref{s:prc} we translate the periodic Road Coloring Theorem of B\'eal and Perrin to a more concrete statement that we end up using. In Section \ref{s:B(H)-fsa} a reduction to graphs with in-degree at least $2$ at every vertex is made, and our new construction reduces that case to the in-degree $2$-regular case. Finally in Section \ref{s:main-theorem} we combine everything together for a proof of Theorem \ref{t:safsa} and give some concluding remarks.
\section{Periodic Road coloring} \label{s:prc}
The following is the generalization of the notion of synchronization to $p$-periodic finite graphs that we shall need.
\begin{definition} \label{d:p-synch} Let $G = (V,E)$ be a transitive, finite and in-degree $d$-regular $p$-periodic directed graph with cyclic decomposition $V=\sqcup_{i =1}^p V_i$. A strong edge coloring $c$ of $G$ with $d$ colors is called \emph{$p$-synchronizing} if there exist \begin{enumerate} \item distinguished vertices $v_i \in V_i$ for each $1 \leq i \leq p$, and; \item a word $\gamma$ such that for any $1 \leq i \leq p$ and $v \in V_i$, the unique backward path $\lambda_v$ with $r(\lambda_v) =v$ and $c(\lambda_v) = \gamma$ has source $v_i$. \end{enumerate} Such a $\gamma$ is called a $p$-synchronizing word for the tuple $(v_1,...,v_p)$. \end{definition}
In order to show that every in-degree $d$-regular $p$-periodic graph is $p$-synchronizing we will translate a periodic version of the Road Coloring Theorem due to B\'eal and Perrin \cite{BP14}. We warn the reader that the graphs we consider here are in-degree regular whereas in \cite{BP14} the graphs are out-degree regular. Thus, so as to fit our choice of graph orientation, we state their definitions and theorem with edges having reversed ranges and sources.
Let $G=(V,E)$ be a transitive, finite, in-degree $d$-regular graph. If $c: E \rightarrow \{1,...,d\}$ is a strong edge coloring, each word $\gamma \in \mathbb{F}_d^+$ in colors gives rise to a map $\gamma : V \rightarrow V$ defined as follows. For $v\in V$ let $\lambda_v$ be the unique path with $r(\lambda_v) =v$ whose color is $c(\lambda_v) = \gamma$. Then we define $v \cdot \gamma := s(\lambda_v)$. In this way, we can apply the function $\gamma$ to each subset $I \subseteq V$ to obtain another subset of vertices $I \cdot \gamma = \{ \ v \cdot \gamma \ | \ v \in I \ \}$.
\begin{definition} Let $G=(V,E)$ be a transitive, finite and in-degree $d$-regular with a strong edge coloring $c : E \rightarrow \{1,...,d\}$. \begin{enumerate} \item We say that a subset $I$ is a \emph{$c$-image} if there exists a word $\gamma$ such that $V \cdot \gamma = I$. \item A $c$-image $I \subseteq V$ is called \emph{minimal} if there is no $c$-image with smaller cardinality. \end{enumerate} We define the \emph{rank} of $c$ to be the size of a minimal $c$-image. \end{definition}
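Since $V$ is finite, the rank of $c$ can be found by a breadth-first search over the $c$-images reachable from $V$: starting from $V$ itself, one repeatedly applies the single-letter maps and records the smallest image encountered. The Python sketch below is an illustration only (same ad hoc encoding of colored graphs as in the earlier sketch, and feasible only for small examples, since the number of images can grow quickly).
\begin{verbatim}
from collections import deque

def rank(vertices, edges, colours):
    """edges: list of (source, range, colour); returns the size of a minimal c-image."""
    def image(I, colour):
        # each w in I moves to the source of its unique incoming edge of that colour
        return frozenset(next(s for (s, r, c) in edges if r == w and c == colour)
                         for w in I)
    start = frozenset(vertices)
    seen = {start}
    queue = deque([start])
    best = len(start)
    while queue:
        I = queue.popleft()
        best = min(best, len(I))
        for colour in colours:
            J = image(I, colour)
            if J not in seen:
                seen.add(J)
                queue.append(J)
    return best

# For the toy two-vertex graph used earlier the rank is 1, i.e. the colouring synchronizes.
edges = [(0, 0, 1), (1, 0, 2), (0, 1, 1), (1, 1, 2)]
print(rank([0, 1], edges, [1, 2]))   # 1
\end{verbatim}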
Note that the rank of $c$ is always well-defined, since $V$ is finite and any two minimal $c$-images have the same cardinality. Next, we explain some of the language used in the statement of B\'eal and Perrin's \cite[Theorem 6]{BP14}.
A (finite) \emph{automaton} is a pair $(G,c)$ where $G=(V,E)$ is a finite directed graph and $c :E \rightarrow \{1,...,d\}$ some labeling. We say that $(G,c)$ is a \emph{complete deterministic} automaton if $c|_{r^{-1}(v)}$ is bijective for each $v\in V$. This forces $G$ to be in-degree $d$-regular with a strong edge coloring $c$. We say that an automaton is \emph{irreducible} if its underlying graph is transitive. Finally, we say that two automata $(G,c)$ and $(H,d)$ are \emph{equivalent} if they have isomorphic underlying graphs. The statement of \cite[Theorem 6]{BP14} then says that any irreducible, complete deterministic automaton $(G,c)$ is equivalent to a complete deterministic automaton whose rank is equal to the period of $G$. This leads to the following restatement of the periodic Road Coloring Theorem of B\'eal and Perrin \cite[Theorem 6]{BP14} in the language of directed graphs and their colorings.
\begin{theorem}[Theorem 6 of \cite{BP14}] Let $G=(V,E)$ be a transitive, finite and in-degree $d$-regular graph. Then $G$ is $p$-periodic if and only if there exists a strong edge coloring $c : E \rightarrow \{1,...,d\}$ with rank $p$. \end{theorem}
As a corollary, we have that every transitive, finite and in-degree $d$-regular graph of period $p$ admits a $p$-synchronizing strong edge coloring (in the sense of Definition \ref{d:p-synch}). This will be useful to us in the next section.
\begin{corollary} \label{c:p-synch-exists} Let $G=(V,E)$ be a transitive, finite, in-degree $d$-regular and $p$-periodic graph with cyclic decomposition $V=\sqcup_{i =1}^p V_i$. Then there is a strong edge coloring $c : E \rightarrow \{1,...,d\}$ which is $p$-synchronizing.
Furthermore, if $\gamma$ is a $p$-synchronizing word for $(v_1,...,v_p)$ and $w_i \in V_i$ is some vertex for some $1 \leq i \leq p$, then there are vertices $w_j \in V_j$ for $j \neq i$ and a $p$-synchronizing word $\mu$ for $(w_1,...,w_p)$. \end{corollary}
\begin{proof}
Let $c$ be a strong edge coloring with a minimal $c$-image of size $p$. This means that there is a word $\gamma \in \mathbb{F}_d^+$ such that $|V \cdot \gamma| = p$. If $\gamma$ is not of length which is a multiple of $p$, we may concatenate $\gamma'$ to $\gamma$ to ensure that $|\gamma \gamma'| = kp$ for some $0 \neq k \in \mathbb{N}$. Since $V \cdot \gamma$ is minimal we have that $V \cdot \gamma \gamma'$ is also of size $p$. Hence, without loss of generality we have that $|\gamma| = kp$ for some $0 \neq k \in \mathbb{N}$. Since $|\gamma| = kp$, we see that the function $\gamma : V \rightarrow V$ must send elements in $V_i$ to elements in $V_i$. Since each $V_i$ is non-empty and $|V \cdot \gamma| = p$, there is a unique $v_i \in V \cdot \gamma$ such that $V_i \cdot \gamma = \{v_i\}$. It is then clear that $(v_1,...,v_p)$ together with $\gamma$ show that $c$ is $p$-synchronizing.
For the second part, without loss of generality assume that $i=1$, and let $\lambda$ be a path with range $v_1$ and source $w_1$, whose length must be a multiple of $p$. Then for $\mu := \gamma \cdot c(\lambda)$, we still have $|V \cdot \mu| = p$ from minimality of $V \cdot \gamma$, and also $v_1 \cdot c(\lambda) = w_1$. Thus, we see that as in the above proof there are some $w_j \in V_j$ with $j\neq 1$ such that $\mu$ is $p$-synchronizing for $(w_1,...,w_p)$. \end{proof}
\section{$B({\mathcal{H}})$ as a free semigroupoid algebra} \label{s:B(H)-fsa}
In \cite{Dav06} Read isometries $Z_1,Z_2$ with additional useful properties are obtained on a separable infinite dimensional Hilbert space ${\mathcal{H}}$. More precisely, from the proof of \cite[Lemma 1.6]{Dav06} we see that there are orthonormal bases $\{h_j \}_{j \in \mathbb{N}}$ and $\{g_i \}_{i \in \mathbb{N}}$ together with a sequence $$
S_{i,j,k} \in \operatorname{span} \{ \ Z_w \ | \ w\in \mathbb{F}_2^+ \ \text{with} \ |w| = 2^k \ \} $$
such that $S_{i,j,k}$ converges WOT to the rank one operator $g_i \otimes h_j^*$. As a consequence of this, in \cite[Theorem 1.7]{Dav06} it is shown that the WOT-closed algebra generated by $\{ \ Z_w \ | \ w \in \mathbb{F}_2^+ \ \text{with} \ |w| = 2^k \ \}$ is still $B({\mathcal{H}})$. That $2^k$ above can be replaced with any non-zero $p \in \mathbb{N}$ was claimed after \cite[Question 10.13]{DDL20}, and we provide a proof for it here.
\begin{proposition}\label{p:pgen} For any non-zero $p \in \mathbb{N}$, we have that the WOT-closed algebra ${\mathfrak{Z}}_p$ generated by $\{ \ Z_{\mu} \ | \ \mu \in \mathbb{F}_2^+ \ \text{with} \ |\mu| =p \ \}$ is $B({\mathcal{H}})$. \end{proposition}
\begin{proof}
As there are finitely many residue classes modulo $p$, there is some $m$ such that $p$ divides $2^k + m$ for infinitely many $k$. Pick a word $w$ with length $|w|=m$, and note that $S_{i,j,k}Z_w \in \operatorname{Alg} \{ \ Z_{\mu} \ | \ \mu \in \mathbb{F}_2^+ \ \text{with} \ |\mu| =p \ \}$ for infinitely many $k$. Thus, the rank one operator $(g_i \otimes h_j^*) Z_w$ is in ${\mathfrak{Z}}_p$. Since any operator in $B({\mathcal{H}})$ is the WOT limit of finite linear combinations of operators of the form $g_i \otimes h_j^*$, we see that $A = AZ^*_w Z_w \in {\mathfrak{Z}}_p$ for any $A \in B({\mathcal{H}})$. Hence, ${\mathfrak{Z}}_p = B({\mathcal{H}})$. \end{proof}
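For instance, for $p=5$ one has $2^k\equiv 1\pmod 5$ precisely when $4\mid k$, so one may take $m=4$ and any word $w$ of length $4$: then $5$ divides $2^k+m$ for infinitely many $k$, and along this subsequence the operators $S_{i,j,k}Z_w$ lie in the span of the $Z_{\mu}$ with $5$ dividing $|\mu|$.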
Next we reduce the problem of showing that $B({\mathcal{H}})$ is a free semigroupoid algebra for a transitive graph $G$ to a problem about vertex corners.
\begin{lemma} \label{l:cornerB(H)} Let $G$ be a transitive graph. Suppose $\mathrm{S}$ is a TCK family on ${\mathcal{H}}$ such that for any $v \in V$ there exists $w\in V$ such that $S_v {\mathfrak{S}} S_w = S_v B({\mathcal{H}}) S_w$. Then ${\mathfrak{S}} = B({\mathcal{H}})$. \end{lemma}
\begin{proof} Let $v',w' \in V$ be arbitrary vertices. By assumption, there is $w \in V$ such that $S_{v'}{\mathfrak{S}} S_w = S_{v'}B({\mathcal{H}})S_w$. Let $\lambda$ be a path from $w'$ to $w$, and let $B \in B({\mathcal{H}})$. Then $S_{v'}BS_{\lambda} \in S_{v'} {\mathfrak{S}} S_{w'}$ for any $B\in B({\mathcal{H}})$. Let $B = A S_{\lambda}^*$ for general $A\in B({\mathcal{H}})$ so that $S_{v'}BS_{\lambda} = S_{v'} A S_{w'} \in S_{v'} {\mathfrak{S}} S_{w'}$. Hence, we obtain that $S_{v'} {\mathfrak{S}} S_{w'} = S_{v'} B({\mathcal{H}}) S_{w'}$ for any $v',w' \in V$. Since $\sotsum_{v\in V} S_v = I_{{\mathcal{H}}}$ we get that ${\mathfrak{S}} = B({\mathcal{H}})$. \end{proof}
Let $G = (V,E)$ be a finite directed graph. For an edge $e \in E$ we define the \emph{edge contraction} $G/e$ of $G$ by $e$ to be the graph obtained by removing the edge $e$ and identifying the vertices $s(e)$ and $r(e)$. When $r(e) \neq s(e)$, our convention will be that the identification of $r(e)$ and $s(e)$ is carried out by removing the vertex $r(e)$, and every edge with source / range $r(e)$ is changed to have source / range $s(e)$ respectively.
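As an aside, the contraction convention above is easy to implement; the following Python sketch is an illustration only, with edges stored as a dictionary from ad hoc edge names to $(s(e),r(e))$ pairs.
\begin{verbatim}
def contract(vertices, edges, e0):
    """Edge contraction G/e0: remove r(e0) and redirect its incident edges to s(e0).

    vertices: list; edges: dict id -> (source, range); e0: id of the contracted edge.
    """
    s0, r0 = edges[e0]
    new_vertices = [v for v in vertices if v != r0]
    new_edges = {}
    for eid, (s, r) in edges.items():
        if eid == e0:
            continue                       # the contracted edge is removed
        s = s0 if s == r0 else s           # edges leaving r(e0) now leave s(e0)
        r = s0 if r == r0 else r           # edges entering r(e0) now enter s(e0)
        new_edges[eid] = (s, r)
    return new_vertices, new_edges

# Contracting the edge a -> b in the 2-cycle a <-> b yields a single vertex with a loop.
print(contract(["a", "b"], {"e": ("a", "b"), "f": ("b", "a")}, "e"))
# (['a'], {'f': ('a', 'a')})
\end{verbatim}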
\begin{lemma}\label{l:edge-cont}
Let $G = (V,E)$ be a transitive and finite directed graph which is not a cycle. Let $e_0 \in E$ such that $r(e_0)$ has in-degree $1$. Then:
\begin{enumerate}
\item The edge $e_0$ is not a loop.
\item The edge contraction $G/e_0$ has one fewer vertex of in-degree $1$ than $G$ has.
\item The edge contraction $G/e_0$ is a finite transitive directed graph which is not a cycle.
\end{enumerate}
\end{lemma}
\begin{proof}
If $e_0$ were a loop, then the assumptions that $G$ is transitive and that $r(e_0)$ has in-degree $1$ would imply that $G$ is a cycle with one vertex. This contradicts our assumption that $G$ is not a cycle, so (i) is proved.
Since now we must have $r(e_0) \neq s(e_0)$, we adopt our convention discussed above. Since $r(e_0)$ has in-degree $1$, there are no other edges whose range is $r(e_0)$, so contraction does not change the range of any of the surviving edges. In particular, the in-degree of the remaining vertices is unchanged. Since we have removed a single vertex of in-degree $1$, (ii) is proved.
By construction $G/e_0 $ is a finite directed graph. It is straightforward to verify that transitivity of $G$ implies transitivity of $G/e_0$. A finite, transitive, directed graph is a cycle if and only if every vertex has in-degree $1$. Since $G$ is by assumption not a cycle, it has a vertex of in-degree at least $2$. Since the in-degree of the remaining vertices is unchanged, $G/e_0$ also has a vertex of in-degree at least $2$. Hence $G/e_0$ is not a cycle and (iii) is proved.
\end{proof}
The following proposition shows that such edge contractions preserve the property of having $B({\mathcal{H}})$ as a free semigroupoid algebra.
\begin{proposition} \label{p:edge-cont-FSA} Let $G = (V,E)$ be a transitive and finite directed graph that is not a cycle, and let $e_0 \in E$ such that $ r(e_0)$ has in-degree $1$. If $G/e_0$ has $B({\mathcal{H}})$ as a free semigroupoid algebra, then so does $G$. \end{proposition}
\begin{proof} Let $v_0 := r(e_0)$ so that $G/e_0 = (\tilde{V}, \tilde{E}) = (V \setminus \{v_0\}, E \setminus \{ e_0\})$. Let $\widetilde{\mathrm{S}} = (\widetilde{S}_{v}, \widetilde{S}_{e})$ be a TCK family for $G/e$ on ${\mathcal{H}}$ such that $\widetilde{{\mathfrak{S}}} = B({\mathcal{H}})$. By Theorem \ref{t:sa-rest} we get that $\widetilde{\mathrm{S}}$ is actually a CK family. Write ${\mathcal{H}} = \bigoplus_{v \in \tilde{V}} \widetilde{S}_{v} {\mathcal{H}}$ and let ${\mathcal{H}}_v = \widetilde{S}_{v} {\mathcal{H}}$ for $v \in \tilde{V}$. Let ${\mathcal{H}}_{v_0}$ be a Hilbert space identified with $\widetilde{S}_{s(e_0)}{\mathcal{H}}$ via a fixed unitary identification $J : {\mathcal{H}}_{v_0} \rightarrow \widetilde{S}_{{s(e_0)}}{\mathcal{H}}$ and form the space ${\mathcal{K}} = \bigoplus_{v\in V} {\mathcal{H}}_v$. We define a CK family $\mathrm{S}$ for $G$ on ${\mathcal{K}}$ as follows: Let $S_v$ be the projection onto ${\mathcal{H}}_v$ for each $v\in V$. For edges $e \in \tilde{E}$ with $s(e)\neq v_0$ we extend linearly the rule $$ S_e \xi = \begin{cases} \widetilde{S}_{e} \xi & \text{ when } \xi \in {\mathcal{H}}_{s(e)}, \\ 0 & \text{ when } \xi \in {\mathcal{H}}_{s(e)}^{\perp}, \end{cases} $$ for edges $e\in \tilde{E}$ with $s(e) = v_0$ we extend linearly the rule $$ S_e \xi = \begin{cases} \widetilde{S}_{e}J \xi & \text{ when } \xi \in H_{v_0}, \\ 0 & \text{ when } \xi \in H_{v_0}^{\perp}, \end{cases} $$ and finally for $e_0$ we extend linearly the rule $$ S_{e_0} \xi = \begin{cases} J^* \xi & \text{ when } \xi \in {\mathcal{H}}_{s(e_0)}, \\ 0 & \text{ when } \xi \in {\mathcal{H}}_{s(e_0)}^{\perp}. \end{cases} $$ Since $J$ is a unitary, and since $\widetilde{\mathrm{S}}$ is a CK family for $\widetilde{G}$, we see that $\mathrm{S} = (S_v,S_e)$ is a CK family for $G$.
We verify that for any vertex $v \in V$ we have $S_v {\mathfrak{S}} S_v = S_v B({\mathcal{K}}) S_v$. First note that for any $v \in V$ we have $$
S_v {\mathfrak{S}} S_v = \overline{\operatorname{span}}^{\textsc{wot}}\{ \ S_{\lambda} \ | \ r(\lambda) = v = s(\lambda), \ \lambda \in E^{\bullet} \ \}, $$ and similarly for every $v\in \tilde{V}$ we have a description as above for $\widetilde{S}_v \widetilde{{\mathfrak{S}}} \widetilde{S}_v$ in terms of cycles in $G / e_0$.
Suppose now that $v \in \tilde{V}$ and that $\lambda$ is a cycle through $v$ in $G$. If $\lambda$ does not go through $v_0$, then $S_v S_{\lambda} S_v = S_v \widetilde{S}_{\lambda} S_v \in S_v \widetilde{{\mathfrak{S}}} S_v$. Next, if $\lambda = \mu \nu$ is a simple cycle such that $s(\mu) = v_0$, since $e_0$ is the unique edge with $r(e_0) = v_0$, we may write $\nu = e_0 \nu'$. We then get that $$ S_v S_{\lambda} S_v = S_v S_{\mu}J^* S_{\nu'} S_v = S_v \widetilde{S}_{\mu} \widetilde{S}_{\nu'} S_v \in S_v \widetilde{{\mathfrak{S}}} S_v. $$ For a general cycle $\lambda$ around $v$ which goes through $v_0$, we may decompose it as a concatenation of simple cycles, and apply the above iteratively to eventually get that $S_v S_{\lambda} S_v \in S_v\widetilde{{\mathfrak{S}}} S_v$. Hence, for $v \in \tilde{V}$ we have \begin{equation*} S_{v} {\mathfrak{S}} S_{v} = S_{v} \widetilde{{\mathfrak{S}}} S_{v} = S_{v} B({\mathcal{H}})S_{v} = S_v B({\mathcal{K}}) S_v. \end{equation*} Finally, for $v = v_0$ fix $\mu$ some cycle going through $v_0$ in $G$, and write $\mu =e_0 \mu'$. For any $\lambda$ which is a cycle in $G$ going through $s(e_0)$, we have that $$ J^*S_{\lambda}JS_{\mu} = S_{e_0} S_{\lambda} S_{\mu'} \in S_{v_0} {\mathfrak{S}} S_{v_0}. $$ Hence, from our previous argument applied to $s(e_0) \neq v_0$, we get \begin{equation} \label{eq:supset-cycle} J^* S_{s(e_0)} B({\mathcal{H}}) S_{s(e_0)} J S_{\mu} = J^* S_{s(e_0)} {\mathfrak{S}} S_{s(e_0)} J S_{\mu} \subseteq S_{v_0} {\mathfrak{S}} S_{v_0}. \end{equation} Next, for $B\in S_{v_0} B({\mathcal{K}}) S_{v_0}$ we take $A= S_{s(e_0)} J B S_{\mu}^*J^* S_{s(e_0)}$ which is now in $S_{s(e_0)} B({\mathcal{H}}) S_{s(e_0)}$, so that $$ S_{s(e_0)} J B S_{v_0} = A S_{s(e_0)} J S_{\mu} \in S_{s(e_0)} B({\mathcal{H}}) S_{s(e_0)} J S_{\mu}. $$ By varying over all $B\in S_{v_0} B({\mathcal{K}}) S_{v_0}$ and multiplying by $J^*$ on the left, we get that \begin{equation*} S_{v_0} B({\mathcal{K}}) S_{v_0} \subseteq J^* S_{s(e_0)} B({\mathcal{H}}) S_{s(e_0)} J S_{\mu}. \end{equation*} Thus, with equation \eqref{eq:supset-cycle} we get that $S_{v_0} B({\mathcal{K}})S_{v_0} = S_{v_0} {\mathfrak{S}} S_{v_0}$. Now, since for any $v\in V$ we have that $S_{v} {\mathfrak{S}} S_{v} = S_v B({\mathcal{K}}) S_v$, by Lemma \ref{l:cornerB(H)} we are done. \end{proof}
Hence, edge contraction together with Lemma \ref{l:edge-cont} can be repeatedly applied to any finite transitive directed graph $G$ which is not a cycle in order to obtain another such graph $\widetilde{G}$ which has in-degree at least $2$ for every vertex. By applying Proposition \ref{p:edge-cont-FSA} to this procedure, we obtain the following corollary.
\begin{corollary} \label{c:in-deg-2} Let $G$ be a transitive and finite directed graph which is not a cycle, and let $\widetilde{G}$ be a graph resulting from repeatedly applying edge contractions to edges $e\in E$ with $r(e)$ of in-degree $1$. Then $\widetilde{G}$ has in-degree at least $2$ for every vertex, and if $\widetilde{G}$ has $B({\mathcal{H}})$ as a free semigroupoid algebra, then so does $G$. \end{corollary}
Let $G = (V,E)$ is finite directed graph which is transitive and has in-degree at least $2$ at every vertex. For a vertex $v\in V$ with in-degree $d_v \geq 3$ we define the \emph{$v$-lag} of $G$ to be the graph $\widehat{G}_v = (\widehat{V}_v, \widehat{E}_v)$ obtained as follows: all vertices beside $v$ and edges ranging in such vertices remain the same. We list $(u_0,...,u_{d_v-1})$ the tuple of vertices in $G$ which are the source of an edge with range $v$, counted with repetition so that there is a unique edge $e_j$ from each $u_j$ to $v$. We add $d_v-2$ vertices $v_1,...,v_{d_v-2}$ and set an edge $f_i$ from $v_i$ to $v_{i-1}$ when $2 \leq i \leq d_v-2$ and an edge $f_1$ from $v_1$ to $v$. We then replace each edge $e_j$ from $u_j$ to $v$ in $G$ with an edge $\hat{e}_j$ from $u_j$ to $v_j$ when $1 \leq j \leq d_v-2$, replace an edge $e_0$ from $u_0$ to $v$ in $G$ by an edge $\hat{e}_0$ from $u_0$ to $v$, and replace an edge $e_{d_v-1}$ from $u_{d_v-1}$ to $v$ in $G$ with an edge $\hat{e}_{d_v-1}$ from $u_{d_v-1}$ to $v_{d_v-2}$. The resulting graph is over the vertex set $\widehat{V}_v = V \sqcup \{v_1,...,v_{d_v-2}\}$ and we denote it by $\widehat{G}_v$. The construction will replace only those edges going into $v$ with those shown in Figure \ref{f:split} (the sources $u_0,...,u_{d_v -1}$ of such edges may have repetitions), and everything else will remain the same.
\begin{figure}
\caption{The lag of $G$ applied at $v$.}
\label{f:split}
\end{figure}
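The $v$-lag is likewise a purely combinatorial operation, and a small Python sketch may help in parsing the construction. It is an illustration only: parallel edges are represented by repeated $(s(e),r(e))$ pairs, and the new vertices are labelled $(v,1),\ldots,(v,d_v-2)$, which is our own ad hoc naming.
\begin{verbatim}
def v_lag(vertices, edges, v):
    """The v-lag of a graph at a vertex v of in-degree at least 3."""
    incoming = [(s, r) for (s, r) in edges if r == v]
    d = len(incoming)
    assert d >= 3
    new_vertices = list(vertices) + [(v, i) for i in range(1, d - 1)]
    new_edges = [(s, r) for (s, r) in edges if r != v]   # edges not entering v are kept
    new_edges.append(((v, 1), v))                                  # f_1 : v_1 -> v
    new_edges += [((v, i), (v, i - 1)) for i in range(2, d - 1)]   # f_i : v_i -> v_{i-1}
    u = [s for (s, r) in incoming]                       # sources u_0, ..., u_{d-1}
    new_edges.append((u[0], v))                          # e_0 still ends at v
    new_edges += [(u[j], (v, j)) for j in range(1, d - 1)]         # e_j now ends at v_j
    new_edges.append((u[d - 1], (v, d - 2)))             # e_{d-1} now ends at v_{d-2}
    return new_vertices, new_edges

# Two vertices v, w with three parallel edges w -> v and two edges v -> w;
# after the lag at v, every vertex has in-degree exactly 2.
verts, edgs = v_lag(["v", "w"], [("w", "v")] * 3 + [("v", "w")] * 2, "v")
print(sorted(sum(1 for (_, r) in edgs if r == x) for x in verts))   # [2, 2, 2]
\end{verbatim}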
\begin{lemma} \label{l:transitive-less-in-deg-3} Let $G=(V,E)$ be a transitive and finite directed graph with in-degree at least $2$ at every vertex. Let $v \in V$ be a vertex with in-degree $d_v \geq 3$. Then $\widehat{G}_v$ is a finite transitive directed graph with in-degree at least $2$ at every vertex, and has one fewer vertex of in-degree at least $3$. \end{lemma}
\begin{proof} We see from the construction that $v_1,..., v_{d_v-2}$, as well as $v$, all have in-degree $2$. So we have reduced by one the number of vertices of in-degree at least $3$, and all other vertices except for $v_1,...,v_{d_v -2}$ and $v$ still have the same in-degree. Hence, $\widehat{G}_v$ has one fewer vertex with in-degree at least $3$.
We next show that $\widehat{G}_v$ is transitive. Indeed, every one of the vertices $\{v_1,..., v_{d_v-2}\}$ leads to $v$, and the vertices $u_1,...,u_{d_v-1}$ lead to $v_1,...,v_{d_v-2}$, so it would suffice to show that for any two vertices $w,u \in V$ we have a path in $\widehat{G}_v$ from $w$ to $u$. Since $G$ is transitive, we have a path $\lambda = g_1...g_n$ in $G$ from $w$ to $u$. Next, define $\hat{g}_k$ to be $g_k$ if $g_k \neq e_j$ for all $j$, and whenever $g_k = e_j$ for some edge $e_j$ with range $v$ we set $\hat{g}_k:= f_1...f_j\hat{e}_j$ (read as $\hat{e}_0$ when $j=0$, and as $f_1...f_{d_v-2}\hat{e}_{d_v-1}$ when $j=d_v-1$), which is a path in $\widehat{G}_v$ from $u_j$ to $v$. This way the new path $\hat{g} = \hat{g}_1...\hat{g}_n$ is a path in $\widehat{G}_v$ from $w$ to $u$. \end{proof}
Our construction depends on the choice of orderings for the $\{u_j\}$, so when we write $\widehat{G}_v$ we mean a fixed ordering for sources of incoming edges of the vertex $v$ in the above construction.
Next, let $G=(V,E)$ be a finite directed graph with in-degree at least $2$ at every vertex. For a vertex of in-degree at least $3$, let $\widehat{G}_v$ be the $v$-lag of $G$. We define a map $\theta$ on paths $E^{\bullet}$ of $G$ as follows: If $e\in E$ is any edge with $r(e) \neq v$, we define $\theta(e) = e$. Next, if $e_j$ is the unique edge from $u_j$ to $v$ in $G$ we define $\theta(e_j) = f_1 f_2 ... f_j \hat{e}_j$ for $0 \leq j \leq d_v-2$ (read as $\theta(e_0)=\hat{e}_0$ when $j=0$) and $\theta(e_{d_v-1}) = f_1 f_2 ... f_{d_v-2} \hat{e}_{d_v-1}$, which in each case is a path from $u_j$ to $v$ in $\widehat{G}_v$. The map $\theta$ then extends to a map (denoted still by) $\theta$ on finite paths $E^{\bullet}$ of $G$ by concatenation, whose restriction to $V$ is the embedding of $V \subseteq \widehat{V}_v$.
\begin{lemma} \label{l:bijection} Let $G=(V,E)$ be a transitive and finite directed graph with in-degree at least $2$ at every vertex. For a vertex $v\in V$ of in-degree at least $3$, let $\widehat{G}_v$ be the $v$-lag of $G$. Then $\theta$ is a bijection between $E^{\bullet}$ and paths in $\widehat{E}_v^{\bullet}$ whose range and source are both in $V$. \end{lemma}
\begin{proof} Since $\theta$ is injective on edges and vertices, it must be injective on paths as it is defined on paths by extension.
Suppose now that $u \in V$ and $\widehat{\lambda} = \hat{g}_1 ... \hat{g}_j$ is a path in $\widehat{G}_v$ from $u$ to $v$ such that $s(\hat{g}_k) \notin V$ for $k=1,...,j-1$. Then it must be that $\widehat{\lambda} = f_1f_2...f_j\hat{e}_j$ with $s(\hat{e}_j) = u_j = u$, so that $\widehat{\lambda} = \theta(e_j)$. A general path in $\widehat{G}_v$ between two vertices in $V$ is then the concatenation of edges in $E$ and paths of the form $f_1f_2...f_j\hat{e}_j$ as above, so that $\theta$ is surjective. \end{proof}
Applying a $v$-lag at each vertex of in-degree at least $3$ repeatedly until there are no more such vertices, we obtain a directed graph $\widehat{G} = (\widehat{V},\widehat{E})$. Since the construction at each vertex of in-degree at least $3$ changes only edges going into the vertex $v$, the order in which we apply the lags does not matter, and we will get the same directed graph $\widehat{G} = (\widehat{V},\widehat{E})$. The following is a result of applying Lemma \ref{l:transitive-less-in-deg-3} and Lemma \ref{l:bijection} at each step of this process.
\begin{corollary} \label{c:in-deg-reg-2} Let $G=(V,E)$ be a transitive and finite directed graph with in-degree at least $2$ at every vertex. Let $\widehat{G} = (\widehat{V},\widehat{E})$ be the graph resulting from repeatedly applying a $v$-lag at every vertex $v$ of in-degree at least $3$. Then $\widehat{G}$ is in-degree $2$-regular and there is an embedding $V\subseteq \widehat{V}$ which extends to a bijection $\theta$ from paths $E^{\bullet}$ to paths in $\widehat{E}^{\bullet}$ whose range and source are in $V$. \end{corollary}
Now let $G=(V,E)$ be a finite directed graph with in-degree at least $2$ at every vertex, and let $\widehat{G} = (\widehat{V},\widehat{E})$ be the graph in Corollary \ref{c:in-deg-reg-2}. Given a strong edge coloring $c$ of $\widehat{G}$ with two colors $\{1,2\}$, each edge $g \in E$ inherits a label $\ell(g) := c(\theta(g))$, a word in the colors $\{1,2\}$. We then extend this labeling to paths in $G$ by setting, for any path $\lambda = g_1...g_n \in E^{\bullet}$ in $G$, the label $\ell(\lambda) = c(\theta(\lambda)) = c(\theta(g_1)) ... c(\theta(g_n))$, where $\theta(\lambda) \in \widehat{E}^{\bullet}$ is the path in $\widehat{G}$ corresponding to $\lambda$, whose range and source are always in $V$.
Using the labeling $\ell$ we construct a Cuntz-Krieger family $\mathrm{S}^{\ell} = (S^{\ell}_v,S^{\ell}_e)$ for our original graph $G$ as follows: let ${\mathcal{H}}$ be a separable infinite dimensional Hilbert space. Let ${\mathcal{K}} = \bigoplus_{v \in V} {\mathcal{H}}_v$ where ${\mathcal{H}}_v$ is a copy of ${\mathcal{H}}$ identified via a unitary $J_v : {\mathcal{H}}_v \rightarrow {\mathcal{H}}$. First we define $S^{\ell}_v$ to be the projection onto ${\mathcal{H}}_v$ for $v \in V$. Then, for $e \in E$ we define $S^{\ell}_e$ by linearly extending the rule $$ S^{\ell}_e\xi = \begin{cases} J_{r(e)}^*Z_{\ell(e)} J_{s(e)} \xi & \text{ for } \xi \in {\mathcal{H}}_{s(e)}\\ 0 & \text{ for } \xi \in {\mathcal{H}}_{s(e)}^{\perp}, \end{cases} $$ where $Z_{\ell(e)}$ is a composition of the Read isometries $Z_1$ and $Z_2$ given as follows: for each $e\in E$ there are $f_1,...,f_j \in \widehat{E}$ (or none at all when $r(e)$ has in-degree $2$ in $G$) such that $\theta(e) = f_1 f_2 ... f_j \hat{e}$ as in the iterated construction of $\widehat{G}$. Thus, we get that $Z_{\ell(e)} = Z_{c(\theta(e))} = Z_{c(f_1)} \circ ... \circ Z_{c(f_j)} \circ Z_{c(\hat{e})}$.
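Concretely, writing $Z_w = Z_{w_1}Z_{w_2}\cdots Z_{w_k}$ for a word $w=w_1w_2\cdots w_k$ in the colors $\{1,2\}$, and using that the Read isometries satisfy the Cuntz relation $Z_1Z_1^* + Z_2Z_2^* = I$, appending a color to $w$ splits the range projection of $Z_w$:
$$
Z_wZ_w^* = Z_w\big(Z_1Z_1^* + Z_2Z_2^*\big)Z_w^* = Z_{w1}Z_{w1}^* + Z_{w2}Z_{w2}^*.
$$
This is the identity applied repeatedly in the proof of Proposition \ref{p:CKgen} below.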
\begin{proposition}\label{p:CKgen} Let $G$ be a transitive and finite directed graph such that all vertices have in-degree at least $2$, and let $\widehat{G}$ be the in-degree $2$ regular graph constructed in Corollary \ref{c:in-deg-reg-2}. Let $p$ be the period of $\widehat{G}$. Then for any strong edge coloring $c: \widehat{E} \rightarrow \{1,2\}$ for $\widehat{G}$ we have that $\mathrm{S}^{\ell}$ is a CK family for $G$. Furthermore, if $c$ is $p$-synchronizing for $\widehat{G}$, then the free semigroupoid algebra ${\mathfrak{S}}^{\ell}$ generated by $\mathrm{S}^{\ell}$ as above is $B({\mathcal{K}})$. \end{proposition}
\begin{proof} We first show that $\mathrm{S}^{\ell}$ is a CK family, given a strong edge coloring $c$ for $\widehat{G}$. It is easy to show by definition that $\mathrm{S}^{\ell}$ is a TCK family, so we show the condition that makes it into a CK family. Indeed, for $v\in V$ we have that \begin{equation} \label{e:labelck} \sum_{e\in r^{-1}(v)} S_e^{\ell}(S_e^{\ell})^* = S^{\ell}_vJ_v^* \Big( \sum_{e \in r^{-1}(v)}Z_{\ell(e)}Z_{\ell(e)}^* \Big) J_v S^{\ell}_v. \end{equation} Now, let the in-degree of $v$ be $d=d_v$, and let $(u_0,...,u_{d-1})$ be the sources of edges incoming to $v$ in $G$. Then in $\widehat{G}$ the path $\theta(e_j)$ (associated to the edge $e_j$ from $u_j$ to $v$ in $G$) is given by $\theta(e_j) = f_1f_2 .. f_j \hat{e}_j$. Hence, since $c$ is a strong edge coloring with two colors we see that $$ Z_{c(f_1...f_{d-2})}Z_{c(f_1...f_{d-2})}^* = Z_{c(\theta(e_{d-1}))} Z_{c(\theta(e_{d-1}))}^* + Z_{c(\theta(e_{d-2}))}Z_{c(\theta(e_{d-2}))}^* $$ as well as $$ Z_{c(f_1...f_j)}Z_{c(f_1...f_j)}^* = Z_{c(\theta(e_j))} Z_{c(\theta(e_j))}^* + Z_{c(f_1...f_jf_{j+1})}Z_{c(f_1...f_jf_{j+1})}^*. $$ By applying these identities repeatedly we obtain that $$ \sum_{e \in r^{-1}(v)}Z_{\ell(e)}Z_{\ell(e)}^* = I_{{\mathcal{H}}}. $$ Thus from equation \eqref{e:labelck} we get that $\mathrm{S}^{\ell}$ is a CK family.
Next, suppose that the strong edge coloring $c$ of $\widehat{G}$ is $p$-synchronizing. We show that the free semigroupoid algebra ${\mathfrak{S}}^{\ell}$ of $\mathrm{S}^{\ell}$ for the graph $G$ is $B({\mathcal{K}})$. Let $v \in V$ be a vertex. By the second part of Corollary \ref{c:p-synch-exists} we have a $p$-synchronizing word $\gamma_v$ for $v \in \widehat{V}$ in the sense that whenever $u \in \widehat{V}$ is in the same cyclic component of $v$ in $\widehat{G}$, then $u \cdot \gamma_v = v$. Let $\gamma$ be a word in two colors of length divisible by $p$. There is a unique path $\widehat{\lambda}$ in $\widehat{G}$ with $r(\widehat{\lambda}) =v$ such that $c(\widehat{\lambda}) = \gamma$. Since the length of $\widehat{\lambda}$ is divisible by $p$, we have that $s(\widehat{\lambda})$ must also be in the same cyclic component as $v$, so by the $p$-synchronizing property of $\gamma_v$ there is a unique path $\widehat{\lambda}_v$ with $c(\widehat{\lambda}_v) = \gamma_v$ and $r(\widehat{\lambda}_v) = s(\widehat{\lambda})$ and $s(\widehat{\lambda}_v) = v$. This defines a cycle $\widehat{\lambda} \widehat{\lambda}_v$ around $v$ whose color is $c(\widehat{\lambda}\widehat{\lambda}_v) = \gamma\gamma_v$.
By Corollary \ref{c:in-deg-reg-2}, there is a unique cycle $\mu$ around $v$ in $G$ such that $\theta(\mu) = \widehat{\lambda} \widehat{\lambda}_v$. Then $\ell(\mu) = c(\widehat{\lambda}\widehat{\lambda}_v) = \gamma \gamma_v$, and we get that $$ S^{\ell}_{\mu} = S^{\ell}_vJ_{v}^*Z_{\gamma}Z_{\gamma_v} J_{v} S^{\ell}_v \in S^{\ell}_v \mathfrak{S}S^{\ell}_v. $$
Since $\widehat{\lambda}$ is a general path with range in $v$ whose length is divisible by $p$, we see that $c(\widehat{\lambda}) = \gamma$ is an arbitrary word of length divisible by $p$. Thus, by Proposition \ref{p:pgen} we obtain $S_vJ_v^*BZ_{\gamma_v} J_v S_v \in S_v {\mathfrak{S}} S_v$ for any $B \in B({\mathcal{H}})$. By taking $B = A Z_{\gamma_v}^*$ we see that $S_v J_v^* A J_v S_v \in S_v {\mathfrak{S}} S_v$ for any $A \in B({\mathcal{H}})$. Finally, we have shown that $S_v {\mathfrak{S}} S_v = S_v B({\mathcal{K}}) S_v$ for arbitrary $v\in V$ so that by Lemma \ref{l:cornerB(H)} we conclude that ${\mathfrak{S}} = B({\mathcal{K}})$. \end{proof}
\begin{example}\label{e:embed} In the case where the graph $G$ is a single vertex with $d\geq 3$ edges, the construction of $\widehat{G}$ yields the graph shown in Figure \ref{f:onevert}.
\begin{figure}
\caption{Splitting a vertex with $d\geq 3$ loops.}
\label{f:onevert}
\end{figure}
A strong $2$-coloring of the graph in Figure \ref{f:onevert} lifts to a coloring of $G$ by words in $\{1,2\}$. This determines a monomial embedding ${\mathcal{O}}_d \hookrightarrow {\mathcal{O}}_2$, where each generator for ${\mathcal{O}}_d$ is sent to the appropriate composition of the generators of $\mathcal{O}_2$. Such monomial embeddings arise and are studied in \cite{Lin20}. If we represent the canonical generators of ${\mathcal{O}}_2$ by a pair of Read isometries $Z_1,Z_2$, we obtain a representation of ${\mathcal{O}}_d$. If the coloring is synchronizing, the generating isometries of ${\mathcal{O}}_d$ will generate $B({\mathcal{H}})$ as a free semigroup algebra. In fact, any strong 2-coloring of the graph in Figure \ref{f:onevert} is synchronizing: if the edge labeled $e$ has color $i \in \{1,2\}$, then it is easy to see that $i^d$ is a synchronizing word for the vertex $v$. \end{example}
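For instance, for $d=3$ one possible strong $2$-coloring gives the monomial embedding sending the generators $s_1,s_2,s_3$ of ${\mathcal{O}}_3$ to
$$
s_1 \mapsto t_1, \qquad s_2 \mapsto t_2t_1, \qquad s_3 \mapsto t_2t_2,
$$
where $t_1,t_2$ are the generators of ${\mathcal{O}}_2$ (the particular monomials depend on the chosen coloring). The images are isometries with pairwise orthogonal ranges, and
$$
t_1t_1^* + t_2t_1t_1^*t_2^* + t_2t_2t_2^*t_2^* = t_1t_1^* + t_2\big(t_1t_1^*+t_2t_2^*\big)t_2^* = t_1t_1^* + t_2t_2^* = 1,
$$
so the assignment defines a unital embedding ${\mathcal{O}}_3\hookrightarrow{\mathcal{O}}_2$.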
\section{Self-adjoint free semigroupoid algebras} \label{s:main-theorem}
In this section we tie everything together to obtain our main theorem, and make a few concluding remarks.
\noindent {\bf Theorem \ref{t:safsa}} (Self-adjoint free semigroupoid algebras){\bf .} \emph{ Let $G = (V,E)$ be a finite graph. There exists a fully supported CK family $\mathrm{S}=(S_v,S_e)$ which generates a \emph{self-adjoint} free semigroupoid algebra ${\mathfrak{S}}$ if and only if $G$ is the union of transitive components.}
\emph{Furthermore, if $G$ is transitive and not a cycle then $B({\mathcal{H}})$ is a free semigroupoid algebra for $G$ where ${\mathcal{H}}$ is a separable infinite dimensional Hilbert space.}
\begin{proof} If ${\mathfrak{S}}$ is self-adjoint, Theorem \ref{t:sa-rest} tells us that $G$ must be the disjoint union of transitive components.
Conversely, if $G$ is the union of transitive components, we will form a CK family whose free semigroupoid algebra is self-adjoint for each transitive component separately, and then define the one for $G$ by taking their direct sum. Hence, we need only show that every finite transitive graph $G$ has a self-adjoint free semigroupoid algebra. There are two cases:
\begin{enumerate} \item {\bf If $G$ is a cycle of length $n$:} By item (1) of \cite[Theorem 5.6]{DDL20} we have that $M_n(L^{\infty}(\mu))$ is a free semigroupoid algebra when $\mu$ is a measure on the unit circle $\mathbb{T}$ which is not absolutely continuous with respect to Lebesgue measure $m$ on $\mathbb{T}$.
\item {\bf If $G$ is not a cycle:} By Corollary \ref{c:in-deg-2} we may assume without loss of generality that $G$ has no vertices with in-degree $1$. By the first part of Corollary \ref{c:p-synch-exists} there exists a $p$-synchronizing strong edge coloring $c$ for $\widehat{G}$, so we may apply Proposition \ref{p:CKgen} to deduce that $B({\mathcal{H}})$ is a free semigroupoid algebra for $G$. \end{enumerate} \end{proof}
\begin{remark} Note that in the proof of Proposition \ref{p:CKgen}, since $\widehat{G}$ is always in-degree $2$-regular, we only needed the in-degree $2$-regular case of the periodic Road Coloring Theorem in Corollary \ref{c:p-synch-exists}. On the other hand, we could only prove Proposition \ref{p:pgen} for the free semigroup on two generators, so it was also necessary to reduce the problem to the in-degree $2$-regular case. The latter explains why an iterated version of Proposition \ref{p:CKgen} akin to the one for the first construction in Proposition \ref{p:edge-cont-FSA} is not so readily available. \end{remark}
To construct a suitable Cuntz-Krieger family as in the discussion preceding Proposition \ref{p:CKgen}, for each vertex in $G$ with in-degree at least $3$ we must choose a monomial embedding of ${\mathcal{O}}_d$ into ${\mathcal{O}}_2$. Example \ref{e:embed} shows that choosing a monomial embedding is equivalent to choosing a strong edge coloring of a certain binary tree with $d$ leaves. Constructing such a tree for each vertex gives the construction of $\widehat{G}$, and this is where the intuition for our proof originated.
\subsection*{Acknowledgments} The first author would like to thank Boyu Li for discussions that led to the proof of Proposition \ref{p:pgen}. Both authors are grateful to the anonymous referees for pointing out issues with older proofs, and for providing suggestions that improved the exposition of the paper. Both authors are also grateful to Florin Boca and Guy Salomon for many useful remarks on previous draft versions of the paper.
\end{document}
\begin{document}
\title{Quantum Discord and Geometry for a Class of Two-qubit States}
\author{Bo Li} \affiliation{School of Mathematical Sciences, Capital Normal University, Beijing 100048, China} \affiliation{Department of Mathematics and Computer, Shangrao Normal University,
Shangrao 334001, China} \author{Zhi-Xi Wang} \affiliation{School of Mathematical Sciences, Capital Normal University, Beijing 100048, China} \author{Shao-Ming Fei} \affiliation{School of Mathematical Sciences, Capital Normal University, Beijing 100048, China} \affiliation{Max-Planck-Institute for Mathematics in the Sciences, 04103 Leipzig, Germany}
\begin{abstract} We study the level surfaces of quantum discord for a class of two-qubit states with parallel nonzero Bloch vectors. The dynamic behavior of quantum discord under decoherence is investigated. It is shown that a class of $X$ states has a sudden transition between classical and quantum correlations under decoherence. Our results include the ones in [Phys. Rev. Lett. 105, 150501] as a special case and show new pictures and structures of quantum discord. \end{abstract} \pacs{03.67.Mn, 03.65.Ud, 03.65.Yz} \maketitle \section{\bf Introduction}
Quantum entanglement is one of the most important resources in quantum information \cite{horodecki1,nielsen}. However, entanglement is not the only correlation that is useful for quantum information processing. Recently, it has been found that many tasks, e.g. quantum nonlocality without entanglement \cite{horodecki1,bennett,niset}, can be carried out with quantum correlations other than entanglement. It has been shown both theoretically and experimentally \cite{datta,lanyon} that some separable states may speed up certain tasks over their classical counterparts.
One kind of nonlocal correlation called quantum discord, as introduced by Ollivier and Zurek \cite{ollivier}, has received much attention recently \cite{ollivier,bylicka,werlang,sarandy,ferraro,fanchini,dakic,modi,luo1,luo,lang,ali,Mazzola,Maziero}. The idea is to measure the discrepancy between two natural yet different quantum analogs of the classical mutual information. Let $\rho^{AB}$ denote the density operator of a composite bipartite system AB, and $\rho^{A(B)}=Tr_{B(A)}(\rho^{AB})$ the reduced density operator of subsystem A (B). The quantum mutual information is defined by \begin{eqnarray} \mathcal{I}(\rho^{AB})=S(\rho^A)+S(\rho^B)-S(\rho^{AB}), \end{eqnarray} where $S(\rho)=-Tr(\rho\log_2\rho)$ is the von Neumann entropy. It was shown that quantum mutual information is the information-theoretic measure of the total correlation in a bipartite quantum state. In order to determine quantum discord \cite{ollivier,luo}, Ollivier and Zurek use a measurement-based conditional density operator to generalize the classical mutual information. Let $\{B_k\}$ be a set of one-dimensional projective measurements performed on subsystem $B$; the conditional density operator $\rho_k$ associated with the measurement result $k$ is
\begin{eqnarray} \rho_k=\frac{1}{p_k}(I\otimes B_k)\rho (I\otimes B_k), \end{eqnarray} where $p_k=tr[(I\otimes B_k)\rho (I\otimes B_k)]$ and $I$ is the identity operator on the subsystem $A$. With this conditional density operator, the quantum conditional entropy with respect to this measurement is defined by \begin{eqnarray}
S(\rho|\{B_k\}):=\sum_{k}p_kS(\rho_k), \end{eqnarray} and the associated quantum mutual information is given by \begin{eqnarray}
\mathcal{I}(\rho|\{B_k \}):=S(\rho^A)-S(\rho|\{B_k\}). \end{eqnarray}
Classical correlation is defined as the supremum of $\mathcal{I}(\rho|\{B_k\})$ over all possible von Neumann measurements $\{B_k\}$, \begin{eqnarray}
\mathcal{C}(\rho):=\sup_{\{B_k\}}\mathcal{I}(\rho|\{B_k\}). \end{eqnarray} Quantum discord is then given by the difference of the mutual information $\mathcal{I}(\rho)$ and the classical correlation $\mathcal{C}(\rho)$, \begin{eqnarray} \mathcal{Q}(\rho):=\mathcal{I}(\rho)-\mathcal{C}(\rho). \end{eqnarray}
The analytical expressions for classical correlation and quantum discord are so far only available for two-qubit Bell-diagonal states and a seven-parameter family of two-qubit $X$ states \cite{luo,ali}. For the two-qubit Bell-diagonal state: \begin{eqnarray} \rho=\frac{1}{4}(I\otimes I+\sum_{i=1}^3c_i\sigma_i\otimes\sigma_i),\label{bellstate} \end{eqnarray} the classical correlation is given by \begin{eqnarray} \mathcal{C}(\rho)=\frac{1-c}{2}\log_2(1-c)+\frac{1+c}{2}\log_2(1+c), \end{eqnarray}
where $c=\max\{|c_1|,|c_2|,|c_3|\}$. The quantum discord is given by \begin{eqnarray} \mathcal{Q}(\rho)&=&\frac{1-c_1-c_2-c_3}{4}\log_2(1-c_1-c_2-c_3)\nonumber\\ & &+\frac{1-c_1+c_2+c_3}{4}\log_2(1-c_1+c_2+c_3)\nonumber\\ & &+\frac{1+c_1-c_2+c_3}{4}\log_2(1+c_1-c_2+c_3)\nonumber\\ & &+\frac{1+c_1+c_2-c_3}{4}\log_2(1+c_1+c_2-c_3)\nonumber\\ & &-\frac{1-c}{2}\log_2(1-c)-\frac{1+c}{2}\log_2(1+c). \end{eqnarray}
The geometry of Bell-diagonal states was first introduced by Horodecki \cite{horodecki2}. From the positivity of the spectrum of a Bell-diagonal state $\rho$ in Eq.(\ref{bellstate}), one can see that $\rho$ belongs to a tetrahedron ${\cal T}$ with vertices $v_1=(1,-1,1)$, $v_2=(-1,1,1)$, $v_3=(1,1,-1)$, and $v_4=(-1,-1,-1)$ in the correlation vector space. Similarly, from the positivity of the partial transpose of $\rho$, it has been shown that the separable states belong to the octahedron ${\cal O}$ with vertices $O_1^\pm=(\pm1,0,0)$, $O_2^\pm=(0,\pm1,0)$ and $O_3^\pm=(0, 0 ,\pm 1)$ \cite{horodecki2,kim,lang}.
Very recently, Matthias D. Lang and Carlton M. Caves \cite{lang} depicted the level surfaces of entanglement and quantum discord for Bell-diagonal states. They discovered that the pictures and the structures of quantum entanglement and quantum discord are very different, and that there is no simple relation between them.
In this article, we study the quantum discord for a class of $X$ states whose Bloch vectors are $z$ directional, which includes the Bell-diagonal states as a special case. We study the level surfaces of quantum discord and the dynamic behavior of quantum discord under decoherence. It is demonstrated that the surfaces of constant discord shrink along with the geometrical deformation of ${\cal T}$ in Ref.\cite{kim}. Moreover we find that there is a class of $X$ states for which the quantum discord is not destroyed by decoherence in a finite time interval.
We calculate different kinds of correlation, such as entanglement, classical correlation and quantum discord, for the states we are concerned with in Sec. \ref{surface}. We depict the level surfaces of constant discord in four different situations. In Sec. \ref{dynamics}, we discuss the dynamics of quantum discord and show that the quantum discord of a certain class of $X$ states does not decay under decoherence. A brief conclusion is given in Sec. \ref{discuss}.
\section{\bf Geometrical depiction of ${\cal C}$ and ${\cal D}$ }\label{surface} Under appropriate local unitary transformations, any two-qubit state $\rho$ can be written as: \begin{eqnarray} \rho=\frac{1}{4}[I\otimes I+\textbf{r}\cdot\sigma\otimes I+I\otimes\textbf{s}\cdot\sigma+\sum_{i=1}^3c_i\sigma_i\otimes\sigma_i], \label{twoqubitdiaostate} \end{eqnarray} where \textbf{r} and \textbf{s} are Bloch vectors and $\{\sigma_i\}_{i=1}^3$ are the standard Pauli matrices. When \textbf{r}=\textbf{s}=\textbf{0}, $\rho$ reduces to the two-qubit Bell-diagonal states. In the following, we assume that the Bloch vectors are $z$ directional, that is, $\textbf{r}=(0,0,r)$, $\textbf{s}=(0,0,s)$. One can also change them to be $x$ or $y$ directional via an appropriate local unitary transformation without losing its diagonal property of the correlation term \cite{kim}. In this case the arbitrary state $\rho$ defined in Eq.(\ref{twoqubitdiaostate}) has the form \begin{widetext} \begin{eqnarray} \rho = \frac{1}{4} \left( \begin{array}{cccc} 1+r+s+c_3 & 0 & 0 & c_1 -c_2 \\ 0 & 1+r-s-c_3 & c_1+c_2 & 0 \\ 0 & c_1 +c_2 & 1-r+s-c_3 & 0 \\ c_1 -c_2 & 0 & 0 & 1-r-s+c_3 \end{array} \right) \,. \label{Eq:AXstate} \end{eqnarray} \end{widetext}
The entanglement of formation \cite{wootters} is a monotonically increasing function of Wootters' concurrence. The concurrence can be calculated in terms of the eigenvalues of $\rho\widetilde{\rho}$, where $\widetilde{\rho}=\sigma_y\otimes \sigma_y\rho^*\sigma_y\otimes \sigma_y$. For the state Eq.(\ref{Eq:AXstate}), the eigenvalues of $\rho\widetilde{\rho}$ are \begin{eqnarray} \lambda_1&=&\frac{1}{16}(c_1-c_2-\sqrt{(1+c_3)^2-(r+s)^2})^2\nonumber\\ &=&\frac{1}{16}(c_1-c_2-\sqrt{(1+r+s+c_3)(1-r-s+c_3)})^2\nonumber, \end{eqnarray} \begin{eqnarray} \lambda_2&=&\frac{1}{16}(c_1-c_2+\sqrt{(1+c_3)^2-(r+s)^2})^2\nonumber\\ &=&\frac{1}{16}(c_1-c_2+\sqrt{(1+r+s+c_3)(1-r-s+c_3)})^2\nonumber, \end{eqnarray} \begin{eqnarray} \lambda_3&=&\frac{1}{16}(c_1+c_2-\sqrt{(1-c_3)^2-(r-s)^2})^2\nonumber\\ &=&\frac{1}{16}(c_1+c_2-\sqrt{(1+r-s-c_3)(1-r+s-c_3)})^2\nonumber, \end{eqnarray}
\begin{eqnarray} \lambda_4&=&\frac{1}{16}(c_1+c_2+\sqrt{(1-c_3)^2-(r-s)^2})^2\nonumber\\ &=&\frac{1}{16}(c_1+c_2+\sqrt{(1+r-s-c_3)(1-r+s-c_3)})^2\nonumber. \end{eqnarray} The concurrence is given by \begin{widetext} \begin{eqnarray} C(\rho)=\max\{2\max\{\sqrt{\lambda_1},\sqrt{\lambda_2},\sqrt{\lambda_3},\sqrt{\lambda_4}\} -\sqrt{\lambda_1}-\sqrt{\lambda_2}-\sqrt{\lambda_3}-\sqrt{\lambda_4},0 \}. \label{twoqubitconcurrence} \end{eqnarray} \end{widetext}
If one fixes the parameters $r$ and $s$, the above states and their concurrence form a three-parameter set, with the Bell-diagonal states belonging to the set with $r=s=0$. The geometry of such a set with nonzero Bloch vectors has been considered by Hungsoo Kim \textit{et al.} recently \cite{kim}. The geometrical deformation of the tetrahedron ${\cal T}$ for the set of Bell-diagonal states and of the octahedron ${\cal O}$ for the separable Bell-diagonal states has been depicted. The deformation of ${\cal O}$ can also be obtained from the region where $C(\rho)=0$ in Eq.(\ref{twoqubitconcurrence}), as the concurrence of a separable state must be zero. The level surfaces of concurrence or entanglement can be plotted correspondingly.
As $\rho$ in Eq.(\ref{Eq:AXstate}) is a two-qubit X state, the discord can be calculated in the way presented in \cite{ali}. The eigenvalues of $\rho$ in Eq.(\ref{Eq:AXstate}) are given by $$ \begin{array}{l} u_\pm=\frac{1}{4}[1-c_3\pm\sqrt{(r-s)^2+(c_1+c_2)^2} ],\\[2mm] v_\pm=\frac{1}{4}[1+c_3\pm\sqrt{(r+s)^2+(c_1-c_2)^2} ]. \end{array} $$ For convenience, we define $f(t)=-\frac{1-t}{2}\log_2(1-t)-\frac{1+t}{2}\log_2(1+t)$; $f(t)$ is a monotonically decreasing function for $0\leq t\leq 1$. The quantum mutual information is given by \begin{eqnarray} \mathcal{I}(\rho)&=&S(\rho^A)+S(\rho^B)+u_+\log_2u_+\nonumber\\ &+&u_-\log_2u_-+v_+\log_2v_++v_-\log_2v_-, \label{mutualinformation} \end{eqnarray} where $S(\rho^A)$ and $S(\rho^B)$ are given by $S(\rho^A)=1+f(r)$, $S(\rho^B)=1+f(s)$.
We evaluate next the classical correlation $\mathcal{C}(\rho)$. The Von Neumann measurement for subsystem $B$ can be written as $B_i=V\prod_iV^+$, $i=0,1$, where $\prod_i=|i\rangle\langle i|$ is the projector associated with the subsystem $B$ and $V=tI+i\overrightarrow{y}\cdot\overrightarrow{\sigma}\in SU(2)$, $t, y_1, y_2, y_3\in R$ and $t^2+y_1^2+y_2^2+y_3^2=1$. After the measurement, we have the ensemble $\{\rho_i, p_i\}.$ The classical correlation is therefore given by \begin{eqnarray}
\mathcal{C}(\rho) &=& \sup_{\{B_i\}} \, \mathcal{I} (\rho|\{B_i\}) \nonumber\\
&=& S(\rho^A)- \min_{\{B_i\}} S (\rho|\{B_i\}), \label{Eq:CC} \end{eqnarray} where \begin{eqnarray}
S(\rho|\{B_i\})=p_0S(\rho_0)+p_1S(\rho_1). \label{conditionalentropy} \end{eqnarray} By the parameter transformation $$ \begin{array}{c} m=(ty_1+y_2y_3)^2, ~n=(ty_2-y_1y_3)(ty_1+y_2y_3),\\[2mm] k=t^2+y_3^2,~ l=y_1^2+y_2^2, \end{array} $$ which satisfies $m^2+n^2=klm$, $k+l=1, k\in[0,1]$, $m\in[0,\frac{1}{4}]$ and $n\in[-\frac{1}{8},\frac{1}{8}]$, according to \cite{ali} we observe that the minimum of Eq.(\ref{conditionalentropy}) can only be obtained in the following cases:
(1) $k=1$, $l=0$, $m=n=0.$ For state (\ref{Eq:AXstate}), Eqs.(14-17) in Ref.\cite{ali} turn out to be $p_0=\frac{1+s}{2},$ $p_1=\frac{1-s}{2},$ $\theta=\mid \frac{r+c_3}{1+s}\mid$, $\theta'=\mid \frac{r-c_3}{1-s}\mid$, $v_\pm (\rho_0)=\frac{1\pm\theta}{2}$, $\omega_\pm (\rho_1)=\frac{1\pm\theta'}{2}$. Thus, \begin{eqnarray}
S_1&=& S(\rho|\{B_i\})=p_0S(\rho_0)+p_1S(\rho_1) \nonumber\\
&=& -\frac{1+r+s+c_3}{4}\log_2\frac{1+r+s+c_3}{2(1+s)}\nonumber\\
&-&\frac{1-r+s-c_3}{4}\log_2\frac{1-r+s-c_3}{2(1+s)}\nonumber\\
&-&\frac{1+r-s-c_3}{4}\log_2\frac{1+r-s-c_3}{2(1-s)}\nonumber\\
&-&\frac{1-r-s+c_3}{4}\log_2\frac{1-r-s+c_3}{2(1-s)}. \label{Eq:S1} \end{eqnarray}
(2) $k=0$, $l=1$, $m=n=0$. It is easy to find that the minimum is the same as $S_1$.
(3) $k=l=\frac{1}{2}$. In this case, we have $$\theta=\theta'=\sqrt{r^2+c_1^2-4m(c_1^2-c_2^2)},$$ here $\theta,\, \theta'$ are defined by Eqs. (16,17) in \cite{ali}, and $S(\rho_0)=S(\rho_1),$ which is a monotonic function of $m$. Therefore the minimum is obtained at $m=0$ or $m=\frac{1}{4}$. We have either $\theta=\theta'=\sqrt{r^2+c_1^2}$ or $\theta=\theta'=\sqrt{r^2+c_2^2}$. The quantum conditional entropy is given by \begin{eqnarray} S_2=1+f(\sqrt{r^2+c_1^2}), \label{Eq:S2} \end{eqnarray} \begin{eqnarray} S_3=1+f(\sqrt{r^2+c_2^2}). \label{Eq:S3} \end{eqnarray} Therefore, we have \begin{theorem} For any state $\rho$ of the form Eq.(\ref{Eq:AXstate}), the classical correlation of $\rho$ is given by \begin{eqnarray} \label{proposition1} \mathcal{C}(\rho)= S(\rho^A)- \min\{S_1, S_2, S_3\}, \end{eqnarray} where $S_1, S_2, S_3$ are defined by Eqs.(\ref{Eq:S1}), (\ref{Eq:S2}), (\ref{Eq:S3}) respectively. The quantum discord is given by \begin{eqnarray} \mathcal{Q}(\rho)=\mathcal{I}(\rho)-\mathcal{C}(\rho), \end{eqnarray} with $\mathcal{I}(\rho)$ given by (\ref{mutualinformation}). \end{theorem}
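As a consistency check, for $r=s=0$ the state (\ref{Eq:AXstate}) is Bell-diagonal and the theorem recovers the known expression: in this case $S(\rho^A)=1$, $S_1=1+f(c_3)$, $S_2=1+f(|c_1|)$, $S_3=1+f(|c_2|)$, and since $f(t)$ is decreasing in $|t|$,
\begin{eqnarray}
\mathcal{C}(\rho)=1-\big(1+f(c)\big)
=\frac{1-c}{2}\log_2(1-c)+\frac{1+c}{2}\log_2(1+c),\nonumber
\end{eqnarray}
with $c=\max\{|c_1|,|c_2|,|c_3|\}$, in agreement with the Bell-diagonal formula quoted in the Introduction.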
In Fig.1 we plot the level surface of discord when (a) $r=s=0.3$, $\mathcal{Q}(\rho)=0.03$;
(b) $r=s=0.5$, $\mathcal{Q}(\rho)=0.03$; (c) $r=s=0.3$, $\mathcal{Q}(\rho)=0.15$; (d) $r=s=0.5$, $\mathcal{Q}(\rho)=0.15$. From Fig.1 one can see that the level surface of discord changes significantly compared with the case $r=s=0$ studied in Ref. \cite{lang}. The surface shrinks under the effect of $r$ and $s$, and the shrinking rate becomes larger with increasing $|r|$ and $|s|$. Moreover, when the discord is small (such as $\mathcal{Q}(\rho)=0.03$), the horizontal ``tubes'' are closed; see Fig. 1(a). For larger $r$ and $s$, the picture is moved up relative to the plane $c_3=0$; see Fig. 1(b). For larger discord and small $r$ and $s$, Fig. 1(c), the figure is similar to the one in the case $r=s=0$. But for larger $r$ and $s$, Fig. 1(d), the figure is again moved up and also changes dramatically.
\begin{figure}
\caption{Surfaces of constant discord: (a) $r=s=0.3$, $\mathcal{Q}(\rho)=0.03$; (b) $r=s=0.5$, $\mathcal{Q}(\rho)=0.03$; (c) $r=s=0.3$, $\mathcal{Q}(\rho)=0.15$; (d) $r=s=0.5$, $\mathcal{Q}(\rho)=0.15$.}
\label{Fig:4}
\end{figure}
\section{\bf Dynamics of quantum discord under local nondissipative channels}\label{dynamics} It has recently been discovered that for some Bell-diagonal states the quantum discord is invariant under certain decoherence channels for a finite time interval \cite{Mazzola}. An interesting question is whether such phenomena exist in other systems. In the following we consider the case in which the state $\rho$ in Eq.(\ref{Eq:AXstate}) undergoes the phase flip channel \cite{Maziero}, with the Kraus operators $\Gamma_0^{(A)}=$ diag$(\sqrt{1-p/2},\sqrt{1-p/2})\otimes I$, $\Gamma_1^{(A)}=$ diag$(\sqrt{p/2},-\sqrt{p/2})\otimes I$, $\Gamma_0^{(B)}= I \otimes$ diag$(\sqrt{1-p/2},\sqrt{1-p/2}) $, $\Gamma_1^{(B)}= I \otimes$ diag$(\sqrt{p/2},-\sqrt{p/2}) $, where $p=1-\exp(-\gamma t)$ and $\gamma$ is the phase damping rate \cite{Maziero,yu}.
Let $\varepsilon(\cdot)$ represent the operator of decoherence. Then under the phase flip channel, we have \begin{eqnarray} \varepsilon(\rho)&=& \frac{1}{4}(I\otimes I+r\sigma_3\otimes I+I\otimes s \sigma_3+(1-p)^2c_1\sigma_1\otimes\sigma_1\nonumber\\
&+&(1-p)^2c_2\sigma_2\otimes\sigma_2+c_3\sigma_3\otimes\sigma_3). \label{Eq:epsilonrho} \end{eqnarray}
Noting that $r,\,s,\,c_3$ are independent of time, we consider the case that \begin{eqnarray} c_2=-c_3c_1,~ s=c_3r,~ -1\leq c_3\leq 1,~ -1\leq r\leq 1. \label{condition} \end{eqnarray} Then the eigenvalues of $\varepsilon(\rho)$ are given by $$ \begin{array}{l} u_\pm=\frac{1-c_3}{4}(1\pm \sqrt{r^2+(1-p)^4c_1^2}),\\[2mm] v_\pm=\frac{1+c_3}{4}(1\pm \sqrt{r^2+(1-p)^4c_1^2}). \end{array} $$ From (\ref{mutualinformation}) we have the quantum mutual information \begin{eqnarray} \mathcal{I}(\varepsilon(\rho))&=&f(r)+f(c_3r)-f(c_3)-f(\sqrt{r^2+(1-p)^4c_1^2}).\nonumber\\ \label{emutualinformation} \end{eqnarray}
To calculate the classical correlation, we need to determine $S_1$, $S_2$ and $S_3$ defined by (\ref{Eq:S1}), (\ref{Eq:S2}) and (\ref{Eq:S3}) respectively, which are given by \begin{eqnarray} S_1(p)=1+f(r)+f(c_3)-f(c_3r), \label{s11} \end{eqnarray} \begin{eqnarray} S_2(p)=1+f(\sqrt{r^2+(1-p)^4c_1^2}), \label{s22} \end{eqnarray} \begin{eqnarray} S_3(p)=1+f(\sqrt{r^2+(1-p)^4c_2^2}). \label{s33} \end{eqnarray} From the condition (\ref{condition}) we have $S_3(p)\geq S_2(p)$ for any $p$ (since $|c_2|=|c_3||c_1|\leq|c_1|$ and $f$ is decreasing), while $S_2(p)$ increases under decoherence and $S_1(p)$ is constant under decoherence. If we select appropriate $r,\, c_1,\, c_3$, then the initial state satisfies $S_2(0) < S_1(0)$. On the other hand, since $f(c_3)\leq f(c_3r)$, we always have $S_2(1) \geq S_1(1)$. Therefore there exists $0\leq p_0\leq 1$ such that $\min\{S_1,S_2,S_3\}=S_2$ for $0\leq p\leq p_0$, and $\min\{S_1,S_2,S_3\}=S_1$ for $ p_0\leq p\leq 1$. In the latter case $\mathcal{Q}(\varepsilon(\rho))$ monotonically decreases to zero.
When $\min\{S_1,S_2,S_3\}=S_2,$ we have \begin{eqnarray} \mathcal{Q}(\varepsilon(\rho))&=&\mathcal{I}(\varepsilon(\rho))-\mathcal{C}(\varepsilon(\rho))\nonumber\\ &=&f(c_3r)-f(c_3), \end{eqnarray} so $\mathcal{Q}(\varepsilon(\rho))$ is constant under decoherence during the time interval in which the condition $\min\{S_1,S_2,S_3\}=S_2$ is satisfied.
As an example, for $r=s=0$, $c_1=1$, $-1\leq c_2=-c_3\leq1$, we have that $S_1(0)=1+f(c_3)$ and $S_2(0)=1+f(1)<S_1(0)$. Therefore the state has constant discord under decoherence, which recovers the results in \cite{lang,Mazzola}. For an example with nonzero $r$ and $s$, we set $r=\frac{3}{10}$, $s=\frac{3}{20}$, $c_1^2=\frac{4}{5},$ $c_2=-\frac{c_1}{2},$ $ c_3=\frac{1}{2}$. It is direct to verify that $S_1(0)=0.762$ and $S_2(0)=0.186$. Therefore we have $\min\{S_1,S_2,S_3\}=S_2$ and the state has constant discord. The dynamic behavior of the correlations of the state under the phase flip channel is depicted in Fig.2. We find that the concurrence $C$ is greater than the quantum discord $\mathcal{Q}$ for $0\leq p\leq 0.217$. A sudden transition between classical and quantum correlations happens at $p=0.274$, and a sudden death of entanglement \cite{yu1} appears at $p=0.4$. Moreover, different from the case of zero $r$ and $s$ in \cite{Mazzola}, where the entanglement disappears before the sudden transition of classical and quantum correlations, here the concurrence remains non-zero after the transition. Therefore for these states the entanglement is more robust against the decoherence than the discord.
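For completeness, these values follow directly from the expressions for $S_1(p)$ and $S_2(p)$ above and the definition of $f$ (rounded to three decimals):
\begin{eqnarray}
S_1(0)&=&1+f(0.3)+f(0.5)-f(0.15)\approx 1-0.066-0.189+0.016\approx 0.762,\nonumber\\
S_2(0)&=&1+f\Big(\sqrt{0.3^2+0.8}\Big)=1+f\big(\sqrt{0.89}\big)\approx 1-0.814\approx 0.186.\nonumber
\end{eqnarray}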
\begin{figure}
\caption{Concurrence(dashed line) and quantum discord(solid line) under phase flip channel for $r=\frac{3}{10}$, $s=\frac{3}{20}$, $c_1^2=\frac{4}{5},$ $c_2=-\frac{c_1}{2}$ and $c_3=\frac{1}{2}$.}
\label{transition}
\end{figure}
\section{\bf Summary}\label{discuss} We have studied the correlations for a class of $X$ states. The level surfaces of quantum discord have been depicted. For $r=s=0$ our results reduce to the ones for Bell-diagonal states. For nonzero $r$ and $s$, it has been shown that the level surfaces of quantum discord may have quite different geometry and topology. While the quantum discord can still remain constant under decoherence in a certain time interval for some initial states, the order of the sudden transition between classical and quantum correlations and the sudden death of entanglement can be exchanged.
\noindent {\bf Acknowledgments} We thank Matthias D. Lang for very helpful discussions. This work is supported by the NSFC10875081, NSFC10871227, KZ200810028013 and PHR201007107 and NSFBJ1092008.
\end{document}
\begin{document}
\title{Interior and $\mathfrak h$ Operators on the Category of Locales} \author{Joaqu\'in Luna-Torres } \thanks{Programa de Matem\'aticas, Universidad Distrital Francisco Jos\'e de Caldas, Bogot\'a D. C., Colombia (retired professor)}
\email{[email protected]} \subjclass{06D22; 18B35; 18F70} \keywords{ Frame, Locale, Sublocale, Interior operator, $\mathfrak h$ operator, Topological category} \begin{abstract} We present the concept of interior operator $I$ on the category $\mathbf{Loc}$ of locales and then we construct a topological category \linebreak $\big(\mathbf{I\text{-}Loc},\ U\big)$, where $U:\mathbf{I\text{-}Loc}\rightarrow \mathbf{Loc}$ is a forgetful functor; and we also introduce the notion of $\mathfrak h$ operator on the category $\mathbf{Loc}$ and discuss some of their properties for constructing the topological category $\big(\mathbf{\mathfrak h\text{-}Loc},\ U\big)$ associated to the forgetful functor $U:\mathbf{\mathfrak h\text{-}Loc}\rightarrow \mathbf{Loc}$.
\end{abstract} \maketitle \baselineskip=1.7\baselineskip \section*{0. Introduction} Kuratowski operators (closure, interior, exterior, boundary and others) have been used intensively in General Topology (\cite{Du}, \cite{K1}, \cite{K2}). For a topological space it is well known that, for example, the associated closure and interior operators provide equivalent descriptions of the topology; but this is not always true in other categories, and consequently it makes sense to define and study these operators separately. In this context, we study an interior operator $I$ on the coframe $\mathcal{S}_\text{{\bsifamily{l}}}(L)$ of sublocales of every object $L$ in the category $\mathbf{Loc}$.
On the other hand, a new topological operator $\mathfrak h$ was introduced by M. Suarez \cite{MSM} in order to complete a Boolean algebra with all topological operators in General Topology. Following his ideas, we study an operator $\mathfrak h$ on the collection $\mathcal{S}_\text{{\bsifamily{l}}}^{c}(L)$ of all complemented sublocales of every object $L$ in the category $\mathbf{Loc}$.
The paper is organized as follows. In Section 1 we present the basic concepts of Heyting algebras, frames, locales, sublocales, images and preimages of sublocales for the morphisms of $\mathbf{Loc}$, and the notions of closed and open sublocales; these notions can be found in Picado and Pultr \cite{PP} and A. L. Suarez \cite{ALS}. In Section 2, we present the concept of interior operator $I$ on the category $\mathbf{Loc}$ and then we construct a topological category $\big(\mathbf{I\text{-}Loc},\ U\big)$, where $U:\mathbf{I\text{-}Loc}\rightarrow \mathbf{Loc}$ is a forgetful functor. Finally, in Section 3 we present the notion of $\mathfrak h$ operator on the category $\mathbf{Loc}$ and discuss some of its properties in order to construct the topological category $\big(\mathbf{\mathfrak h\text{-}Loc},\ U\big)$ associated to the forgetful functor $U:\mathbf{\mathfrak h\text{-}Loc}\rightarrow \mathbf{Loc}$.
\section{Preliminaries} For a comprehensive account of the categories of frames and locales we refer to Picado and Pultr \cite{PP} and A. L. Suarez \cite{ALS}, from whom we take the following useful facts. \subsection{Heyting algebras} A bounded lattice $L$ is called a Heyting algebra if there is a binary operation $x \rightarrow y$ (the Heyting operation) such that for all $a, b, c$ in $L$, \[ a \land b \leqslant c\,\ \text{iff}\,\ a \leqslant b \rightarrow c. \] Thus for every $b \in L$ the mapping $b \rightarrow (-) : L \rightarrow L$ is a right adjoint to $(-) \land b : L \rightarrow L$ and hence, if it exists, is uniquely determined.
In a complete Heyting algebra we have $(\bigvee A) \land b = \bigvee_{a\in A}(a \land b)$ for any $A\subseteq L$, $b\rightarrow (\bigwedge A) = \bigwedge_{a\in A}(b \rightarrow a)$, and $(\bigvee A) \rightarrow b = \bigwedge_{a\in A}(a \rightarrow b)$. \subsection{Frames} A {\it frame} is a complete lattice $L$ satisfying the distributive law \[ \big(\bigvee A\big)\land b =\bigvee \{ a\land b\mid a\in A\} \] for all $A \subseteq L$ and $b\in L$ (hence a complete Heyting algebra); a {\it frame homomorphism} preserves all joins and all finite meets.
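For instance, in any frame the Heyting operation is given explicitly by
\[
a\rightarrow b=\bigvee\{x\in L\mid x\land a\leqslant b\},
\]
and in the frame of open sets of a topological space $X$ (see the next subsection) one has $U\rightarrow V=\mathrm{int}\big((X\setminus U)\cup V\big)$.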
The lattice $\Omega(X)$ of all open subsets of a topological space $X$ is an example of a frame, and if $f: X \rightarrow Y$ is continuous we obtain a frame homomorphism $\Omega( f ): \Omega(Y )\rightarrow \Omega (X)$ by setting $\Omega(f)(U) = f^{-1}[U]$. Thus we have a functor $\Omega : \mathbf{Top} \rightarrow \mathbf{Frm}^{op} $, that is, a contravariant functor from the category of topological spaces into that of frames. \subsection{Locales} The adjunction $\Omega : \mathbf{Top} \rightarrow \mathbf{Frm}^{op} $, \,\ $\mathbf{pt}:\mathbf{Frm}^{op}\rightarrow \mathbf{Top}$ with $\Omega \dashv \mathbf{pt}$, connects the categories of frames with that of topological spaces. The functor $\Omega$ assigns to each space its lattice of opens, and $\mathbf{pt}$ assigns to a frame $L$ the collection of the frame maps $f : L \rightarrow 2$, topologized by setting the opens to be exactly the sets of the form $ \{f: L \rightarrow 2 \mid f(a) = 1\}$ for some $a\in L$.
A frame $L$ is {\it spatial} if for $a, b \in L$ whenever $a\nleqslant b$ there is some frame map $f: L \rightarrow 2$ such that $f(a) = 1\ne f(b)$. Spatial frames are exactly those of the form $\Omega(X)$ for some space $X$.
A space is {\it sober} if every irreducible closed set is the closure of a unique point. Sober spaces are exactly those of the form $\mathbf{pt}(L)$ for some frame $L$.
The adjunction $\Omega\dashv \mathbf{pt}$ restricts to a dual equivalence of categories between spatial frames and sober spaces.
The category of sober spaces is a full reflective subcategory of $\mathbf{Top}$. For each space $X$ we have a sobrification map $N : X\rightarrow \mathbf{pt}(\Omega(X))$ mapping each point $x\in X$ to the map $f_x :\Omega(X) \rightarrow 2$ defined by $f_x(U) = 1$ if and only if $x\in U$.
The category of {\it spatial frames} is a full reflective subcategory of $\mathbf{Frm}$. For each frame we have a spatialization map $\phi : L \rightarrow \Omega(\mathbf{pt}(L))$ which sends each $a\in L$ to $\{f : L \rightarrow 2 \mid f(a) = 1\}$.
This justifies to view the dual category $\mathbf{Loc} =\mathbf{Frm}^{op}$ as an extended category of spaces; one speaks of the category of {\it locales}.
Maps in the category of locales have a concrete description: they can be characterized as the right adjoints of frame maps (since frame maps preserve all joins, they always have right adjoints). \subsubsection{\bf{Sublocales}} A {\it sublocale} of a locale $L$ is a subset $S\subseteq L$ such that it is closed under arbitrary meets, and such that $s\in S$ implies $x\rightarrow s \in S$ for every $x \in L$. This is equivalent to $S\subseteq L$ being a locale in the inherited order, and the subset inclusion being a map in $\mathbf{Loc}$.
Sublocales of $L$ are closed under arbitrary intersections, and so the collection
$\mathcal{S}_\text{{\bsifamily{l}}}(L)$ of all sublocales of $L$, ordered under set inclusion, is a complete lattice. The join of sublocales is (of course) not the union, but we have a very simple formula
$\bigvee_{i} S_{i} = \{\bigvee M \mid M \subseteq\bigcup_{i} S_{i}\}$.
In the coframe $\mathcal{S}_\text{{\bsifamily{l}}}(L)$ the bottom element is the sublocale $\{1\}$ and the top element is $L$.
\subsubsection{\bf{ Images and Preimages of sublocales}} Let $f: L\rightarrow M$ be a localic map. If $S \subseteq L$ is a sublocale then the standard set-theoretical image $f [S]$ is also a sublocale. The set-theoretic preimage $f^{-1}[T]$ of a sublocale $T\subseteq M$ is not necessarily a sublocale of $L$. To obtain a concept of a preimage suitable for our purposes we will, first, make the following observation: ``Let $A\subseteq L$ be a subset closed under meets. Then $\{1\} \subseteq A$ and if $S_i \subseteq A$ for $i \in J$ then $\bigwedge_{i\in J} S_i\subseteq A$''. Consequently there exists the largest sublocale contained in $A$. It will be denoted by $A_{sloc}$.
The set-theoretic preimage $f^{-1}[T]$ of a sublocale $T$ is closed under meets \big(indeed, $f(1) = 1$, and if $x_i \in f^{-1}[T]$ then $f(x_i) \in T$, and hence $ f(\bigwedge_{i\in J} x_i)=\bigwedge_{i\in J} f(x_i)$ belongs to $T$ and $\bigwedge_{i\in J} x_i\in f^{-1}[T]$ \big) and we have the sublocale $f_{-1}[T]:= f^{-1}[T]_{sloc}$. It will be referred to as {\it the preimage} of $T$, and we shall say that $f_{-1}[-]$ is {\it the preimage function} of $f$.
For every localic map $f: L \rightarrow M$, the preimage function $f_{-1}[-] $ is a right Galois adjoint of the image function $f [-] :\mathcal{S}_\text{{\bsifamily{l}}}(L)\rightarrow \mathcal{S}_\text{{\bsifamily{l}}}(M)$.
\subsubsection{\bf{ Closed and Open sublocales}}\label{open} Embedded in $\mathcal{S}_\text{{\bsifamily{l}}}(L)$ we have the coframe of {\it closed sublocales} which is isomorphic to $L^{op}$. The closed sublocale $\mathfrak c(a) \subseteq L$ is defined to be $\uparrow a$ for $a \in L$.
Embedded in $\mathcal{S}_\text{{\bsifamily{l}}}(L)$ we also have the frame of open sublocales which is isomorphic to $L$. The open sublocale $\mathfrak o(a)$ is defined to be $\{a \rightarrow x \mid x \in L\}$ for $a \in L$.
The sublocales $\mathfrak o(a)$ and $\mathfrak c(a)$ are complements of one another in the coframe $\mathcal{S}_\text{{\bsifamily{l}}}(L)$ for any element $a\in L$. Furthermore, open and closed sublocales generate the coframe $\mathcal{S}_\text{{\bsifamily{l}}}(L)$ in the sense that for each \linebreak $S \in \mathcal{S}_\text{{\bsifamily{l}}}(L)$ we have $S = \bigcap\{\mathfrak o(x) \cup \mathfrak c(y) \mid S \subseteq \mathfrak o(x) \cup \mathfrak c(y)\}$.
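For a spatial locale this recovers the classical picture: if $L=\Omega(X)$ for a topological space $X$ and $U\in\Omega(X)$, then
\[
\mathfrak o(U)\cong \Omega(U)\qquad\text{and}\qquad \mathfrak c(U)\cong \Omega(X\setminus U),
\]
so the open sublocale $\mathfrak o(U)$ plays the role of the open subspace $U$ and the closed sublocale $\mathfrak c(U)$ that of its closed complement.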
A pseudocomplement of an element $a$ in a meet-semilattice $L$
with $0$ is the largest element $b$ such that $b\land a = 0$, if it exists. It is usually denoted by $\neg a$. Recall that in a Heyting algebra $H$ the pseudocomplement can be expressed as $\neg x= x\rightarrow 0$.
\section{Interior Operators} We shall be concerned in this section with a version, for locales, of the interior operator studied in \cite{LO}.
Before stating the next definition, we need to observe that for localic maps $f: L \rightarrow M$ and $g:M\rightarrow N$: \begin{itemize} \item the preimage function $f_{-1}[-] $ is a right Galois adjoint of the image function $f [-] :\mathcal{S}_\text{{\bsifamily{l}}}(L)\rightarrow \mathcal{S}_\text{{\bsifamily{l}}}(M)$; \item $ g [-]\circ f [-]=(g\circ f) [-]$. \end{itemize} Therefore $f_{-1}[-]\circ g_{-1}[-]= (g\circ f)_{-1}[-]$, because the composite of two adjunctions is again an adjunction.
\begin{defi}
An interior operator $I$ of the category $\mathbf{Loc}$ is given by a family $I =(i_{\text{\tiny{$L$}}})_{\text{$L\in \mathbf{Loc}$}}$ of maps $i_{\text{\tiny{$L$}}}:\mathcal{S}_\text{{\bsifamily{l}}}(L)\rightarrow \mathcal{S}_\text{{\bsifamily{l}}}(L)$ such that
\begin{itemize}
\item[($I_1)$] $\left(\text{Contraction}\right)$\,\ $i_{\text{\tiny{$L$}}}(S)\subseteq S$ for all $S \in \mathcal{S}_\text{{\bsifamily{l}}}(L)$;
\item[($I_2)$] $\left(\text{Monotonicity}\right)$\,\ If $S\subseteq T$ in $\mathcal{S}_\text{{\bsifamily{l}}}(L)$, then $i_{\text{\tiny{$L$}}}(S)\subseteq i_{\text{\tiny{$L$}}}(T)$
\item[($I_3)$] $\left(\text{Upper bound}\right)$\,\ $i_{\text{\tiny{$L$}}}(L)=L$.
\end{itemize}
\end{defi}
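A natural example (recorded only for illustration, and not needed in the sequel) is obtained by setting, for every locale $L$ and every $S\in \mathcal{S}_\text{{\bsifamily{l}}}(L)$,
\[
i_{\text{\tiny{$L$}}}(S)=\bigvee\{\mathfrak o(a)\mid a\in L,\ \mathfrak o(a)\subseteq S\},
\]
the join of all open sublocales contained in $S$. Condition ($I_1$) holds because a join of sublocales contained in the sublocale $S$ is again contained in $S$, ($I_2$) is clear, and ($I_3$) follows from $\mathfrak o(1)=L$.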
\begin{defi}
An $I$-space is a pair $(L,i_{\text{\tiny{$L$}}})$ where $L$ is an object of $\mathbf{Loc}$ and $i_{\text{\tiny{$L$}}}$ is an interior operator on $L$.
\end{defi}
\begin{defi}
A morphism $f:L\rightarrow M$ of $\mathbf{Loc}$ is said to be $I$-continuous if
\begin{equation}\label{conti}
f_{-1}\left[ i_{\text{\tiny{$M$}}}(T)\right]\subseteq i_{\text{\tiny{$L$}}}\left( f_{-1}[T]\right)
\end{equation}
for all $T\in \mathcal{S}_\text{{\bsifamily{l}}}(M)$, where $f_{-1}[-]$ is the preimage function of $f$.
\end{defi}
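For instance, if every locale carries the operator $i_{\text{\tiny{$L$}}}(S)=S$ (the discrete interior operator introduced below), then for any morphism $f:L\rightarrow M$ of $\mathbf{Loc}$ condition (\ref{conti}) reads
\[
f_{-1}\left[ i_{\text{\tiny{$M$}}}(T)\right]=f_{-1}[T]\subseteq f_{-1}[T]= i_{\text{\tiny{$L$}}}\left( f_{-1}[T]\right),
\]
so every morphism of $\mathbf{Loc}$ is $I$-continuous for this choice.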
\begin{prop}
Let $f:L\rightarrow M$ and $g:M\rightarrow N$ be two $I$-continuous morphisms of $\mathbf{Loc}$ then $g\centerdot f$ is an $I$-continuous morphism of $\mathbf{Loc}$.
\end{prop}
\begin{proof}
Since $g:M\rightarrow N$ is $I$-continuous, we have $$g_{-1}\big[ i_{\text{\tiny{$N$}}}(S)\big]\subseteq i_{\text{\tiny{$M$}}}\big( g_{-1}[S]\big)$$
for all $S\in \mathcal{S}_\text{{\bsifamily{l}}}(N)$, it follows that
$$f_{-1}\Big[g_{-1}\big[( i_{\text{\tiny{$N$}}}(S)\big]\Big]\subseteq f_{-1}\Big[ i_{\text{\tiny{$M$}}}\big( g_{-1}[S]\big)\Big];$$
now, by the $I$-continuity of $f$,$$ f_{-1}\Big[ i_{\text{\tiny{$M$}}} \big( g_{-1}[S]\big)\Big]\subseteq i_{\text{\tiny{$L$}}}\Big( f_{-1}\big[g_{-1}[S]\big]\Big),$$ therefore $$f_{-1}\Big[g_{-1}\big[ i_{\text{\tiny{$N$}}}(S)\big]\Big]\subseteq i_{\text{\tiny{$L$}}}\Big( f_{-1}\big[g_{-1}[S]\big]\Big),$$
that is to say
$$(g\centerdot f)_{-1}\big[ i_{\text{\tiny{$N$}}}(S)\Big]\subseteq i_{\text{\tiny{$L$}}}\Big( (g\centerdot f)_{-1}[S]\Big)$$ \end{proof}
As a consequence we obtain
\begin{defi}
The category $\mathbf{I\text{-}Loc}$ of $I$-spaces comprises the following data:
\begin{enumerate}
\item {\bf Objects}: Pairs $(L,i_{\text{\tiny{$L$}}})$ where $L$ is an object of $\mathbf{Loc}$ and $i_{\text{\tiny{$L$}}}$ is an interior operator on $L$. \item {\bf Morphisms}: Morphisms of $\mathbf{Loc}$ which are $I$-continuous.
\end{enumerate}
\end{defi}
\subsection{The lattice structure of all interior operators}
For the category $\mathbf{Loc}$ we consider the collection
\[
Int(\mathbf{Loc})
\]
of all interior operators on $\mathbf{Loc}$. It is ordered by \[
I\leqslant J \Leftrightarrow i_{\text{\tiny{$L$}}}(S)\subseteq j_{\text{\tiny{$L$}}}(S), \,\,\ \text{for all $S\in \mathcal{S}_\text{{\bsifamily{l}}}(L)$ and every object $L$ of $\mathbf{Loc}$}. \]
This way $Int(\mathbf{Loc})$ inherits a lattice structure from $\mathcal{S}_\text{{\bsifamily{l}}}$:
\begin{prop}
Every family $(I_{\text{\tiny{$\lambda$}}})_{\text{\tiny{$\lambda\in \Lambda$}}}$ in $Int(\mathbf{Loc})$ has a join $\bigvee\limits_{\text{\tiny{$\lambda\in \Lambda $}}}I_{\text{\tiny{$\lambda $}}}$ and a meet $\bigwedge\limits_{\text{\tiny{$\lambda\in \Lambda $}}}I_{\text{\tiny{$\lambda $}}}$ in $Int(\mathbf{Loc})$. The discrete interior operator
\[ I_{\text{\tiny{$D$}}}=({i_{\text{\tiny{$D$}}}}_{\text{\tiny{$L$}}})_{\text{$L\in \mathbf{Loc}$}}\,\,\ \text{with}\,\,\ {i_{\text{\tiny{$D$}}}}_{\text{\tiny{$L$}}}(S)=S\,\,\ \text{for all}\,\ S\in \mathcal{S}_\text{{\bsifamily{l}}}
\]
is the largest element in $Int(\mathbf{Loc})$, and the trivial interior operator
\[ I_{\text{\tiny{$T$}}}=({i_{\text{\tiny{$T$}}}}_\text{\tiny{$L$}})_{\text{$L\in \mathbf{Loc}$}}\,\,\ \text{with}\,\,\ {i_{\text{\tiny{$T$}}}}_{\text{\tiny{$L$}}}(S)=
\begin{cases} \{1\}& \text{for all}\,\ S\in \mathcal{S}_\text{{\bsifamily{l}}},\,\ S\ne L\\ L&\text {if}\,\ S=L \end{cases}
\] is the least one. \end{prop}
\begin{proof} For $\Lambda\ne\emptyset$, let $\widehat{I}=\bigvee\limits_{\text{\tiny{$\lambda\in\Lambda $}}}I_{\text{\tiny{$\lambda $}}}$, then
\[
\widehat{i_{\text{\tiny{$L$}}}}=\bigvee\limits_{\text{\tiny{$\lambda\in \Lambda$}}} {i_{\text{\tiny{$\lambda $}}}}_{\text{\tiny{$L$}}},
\]
for all $L$ object of $\mathbf{Loc}$, satisfies \begin{itemize}
\item $ \widehat{i_{\text{\tiny{$L$}}}}(S)\subseteq S$, because ${i_{\text{\tiny{$\lambda $}}}}_{\text{\tiny{$L$}}}(S)\subseteq S$ for all $S\in \mathcal{S}_\text{{\bsifamily{l}}}$ and for all $\lambda \in \Lambda$. \item If $S\leqslant T$ in $\mathcal{S}_\text{{\bsifamily{l}}}$ then ${i_{\text{\tiny{$\lambda $}}}}_{\text{\tiny{$L$}}}(S)\subseteq {i_{\text{\tiny{$\lambda $}}}}_{\text{\tiny{$L$}}}(T)$ for all $S\in \mathcal{S}_\text{{\bsifamily{l}}}$ and for all $\lambda \in \Lambda$, therefore $ \widehat{i_{\text{\tiny{$L$}}}}(S)\subseteq \widehat{i_{\text{\tiny{$L$}}}}(T)$.
\item Since ${i_{\text{\tiny{$\lambda $}}}}_{\text{\tiny{$L$}}}(L)=L $ for all $\lambda \in \Lambda$, we have that $ \widehat{i_{\text{\tiny{$L$}}}}(L)=L$.
\end{itemize} Similarly $\bigwedge\limits_{\text{\tiny{$\lambda\in \Lambda $}}}I_{\text{\tiny{$\lambda $}}}$,\,\ $ I_{\text{\tiny{$D$}}}$ and $I_{\text{\tiny{$T$}}}$ are interior operators.
\end{proof}
\begin{coro}\label{complete}
For every object $L$ of $\mathbf{Loc}$
\[
Int(L) = \{i_{\text{\tiny{$L$}}}\mid i_{\text{\tiny{$L$}}}\,\ \text{ is an interior operator on}\,\ L\}
\]
is a complete lattice.
\end{coro}
\subsection{Initial interior operators} Let $\mathbf{I\text{-}Loc}$ be the category of $I$-spaces. Let $(M,i_{\text{\tiny{$M$}}})$ be an object of $\mathbf{I\text{-}Loc}$ and let $L$ be an object of $\mathbf{Loc}$. For each morphism $f:L\rightarrow M$ in $\mathbf{Loc}$ we define on $L$ the operator \begin{equation} \label{initial} i_{\text{\tiny{$L_{f}$}}}:=f_{-1}\centerdot i_{\text{\tiny{$M$}}}\centerdot f_{*}. \end{equation} \begin{prop}\label{ini-cont} The operator (\ref{initial}) is an interior operator on $L$ for which the morphism $f$ is $I$-continuous. \end{prop} \begin{proof}\ \begin{enumerate} \item[($I_1)$] $\left(\text{Contraction}\right)$\,\ $i_{\text{\tiny{$L_{f}$}}}(S)= f_{-1}\centerdot i_{\text{\tiny{$M$}}}\centerdot f_{*}[S]\subseteq f_{-1}\centerdot f_{*}[S]\subseteq S$ for all $S\in \mathcal{S}_\text{{\bsifamily{l}}}$;
\item[($I_2)$] $\left(\text{Monotonicity}\right)$\,\ $S\subseteq T$ in $\mathcal{S}_\text{{\bsifamily{l}}}$, implies $f_{*}[S]\subseteq f_{*}[T]$, then $i_{\text{\tiny{$M$}}}\centerdot f_{*}[S]\subseteq i_{\text{\tiny{$M$}}}\centerdot f_{*}[T]$, consequently $ f_{-1}\centerdot i_{\text{\tiny{$M$}}}\centerdot f_{*}[S]\subseteq f_{-1}\centerdot i_{\text{\tiny{$M$}}}\centerdot f_{*}[T]$;
\item[($I_3)$] $\left(\text{Upper bound}\right)$\,\ $i_{\text{\tiny{$L_{f}$}}}(L)=f_{-1}\centerdot i_{\text{\tiny{$M$}}}\centerdot f_{*}[L]=L$. \end{enumerate} Finally, \begin{align*} f_{-1}\big(i_{\text{\tiny{$M$}}}(T)\big)&\subseteq f_{-1}\big(i_{\text{\tiny{$M$}}}\centerdot f_{*}\centerdot f^{-1}(T)\big)=(f_{-1}\centerdot i_{\text{\tiny{$M$}}}\centerdot f_{*})\big(f^{-1}(T)\big)\\ &= i_{\text{\tiny{$L_{f}$}}}\big(f^{-1}(T)\big), \end{align*}
for all $T\in \mathcal{S}_\text{{\bsifamily{l}}}$. \end{proof} It is clear that $ i_{\text{\tiny{$L_{f}$}}}$ is the coarsest interior operator on $L$ for which the morphism $f$ is $I$-continuous; more precisely \begin{prop}\label{unique} Let $(L,i_{\text{\tiny{$L$}}})$ and $(M,i_{\text{\tiny{$M$}}})$ be objects of $\mathbf{I\text{-}Loc}$, and let $N$ be an object of $\mathbf{Loc}$. For each morphism $g:N\rightarrow L$ in $\mathbf{Loc}$ and for\linebreak $f:(L,i_{\text{\tiny{$L_{f}$}}})\rightarrow (M,i_{\text{\tiny{$M$}}})$ an $I$-continuous morphism, $g$ is $I$-continuous if and only if $f\centerdot g$ is $I$-continuous. \end{prop} \begin{proof} Suppose that $f\centerdot g$ is $I$-continuous, i.e. $$(f\centerdot g)_{-1}\big(i_{\text{\tiny{$M$}}}(T)\big)\subseteq i_{\text{\tiny{$N$}}}\big( (f\centerdot g)_{-1}(T) \big)$$
for all $T\in \mathcal{S}_\text{{\bsifamily{l}}}(M)$. Then, for all $S\in \mathcal{S}_\text{{\bsifamily{l}}}$, we have \begin{align*}
g_{-1}\big(i_{\text{\tiny{$L_{f}$}}}(S)\big)&=g_{-1}\big(f_{-1}\centerdot i_{\text{\tiny{$M$}}}\centerdot f_{*}(S)\big)=(f\centerdot g)_{-1}\big( i_{\text{\tiny{$M$}}}( f_{*}(S)) \big)\\
&\subseteq i_{\text{\tiny{$N$}}}\big( (f\centerdot g)_{-1}(f_{*}(S) ) \big)=i_{\text{\tiny{$N$}}}\big( g_{-1}\centerdot f_{-1}\centerdot f_{*} (S)\big)\\
&\subseteq i_{\text{\tiny{$N$}}}\big( g_{-1}(S)\big),\\ \end{align*} i.e. $g$ is $I$-continuous. \end{proof} As a consequence of Corollary \ref{complete}, Proposition \ref{ini-cont} and Proposition \ref{unique} (cf. \cite{AHS} or \cite{JM}), we obtain
\begin{theorem} The forgetful functor $U:\mathbf{I\text{-}Loc}\rightarrow \mathbf{Loc}$ is topological, i.e. the concrete category $\big(\mathbf{I\text{-}Loc},\ U\big)$ is topological.
\end{theorem}
\subsection{Open subobjects}
We introduce a notion of open subobjects different from the one alluded to in \ref{open}.
\begin{defi}
A sublocale $S$ of a locale $L$ is called $I$-open \big(in $L$\big) if it coincides with its $I$-interior, that is, if $i_{\text{\tiny{$L$}}}(S)= S$.
\end{defi}
The $I$-continuity condition (\ref{conti}) implies that $I$-openness is preserved by inverse images:
\begin{prop}
Let $f:L\rightarrow M$ be a morphism in $\mathbf{Loc}$. If $T$ is $I$-open in $M$, then $f_{-1}(T)$ is $I$-open in $L$.
\end{prop}
\begin{proof}
If $T= i_{\text{\tiny{$M$}}}(T)$ then $f_{-1}(T)=f_{-1}\big(i_{\text{\tiny{$M$}}}(T)\big)\subseteq i_{\text{\tiny{$L$}}}\big(f_{-1}(T)\big)$, so \linebreak $i_{\text{\tiny{$L$}}}\big(f_{-1}(T)\big)=f_{-1}(T)$.
\end{proof}
\section{$\mathfrak h$ Operators}
In this section we shall be concerned with a weak categorical version of a topological operator studied by M. Suarez in \cite{MSM}. For that purpose we will use the collection $\mathcal{S}_\text{{\bsifamily{l}}}^{c}(L)$ of all complemented sublocales of a locale $L$ (see P. T. Johnstone \cite{PJ1}, for example).
\begin{defi}
An $\mathfrak h$ operator of the category $\mathbf{Loc}$ is given by a family $\mathfrak h =(h_{\text{\tiny{$L$}}})_{\text{$L\in \mathbf{Loc}$}}$ of maps $h_{\text{\tiny{$L$}}}:\mathcal{S}_\text{{\bsifamily{l}}}^{c}(L)\rightarrow \mathcal{S}_\text{{\bsifamily{l}}}^{c}(L)$ such that \begin{itemize} \item [($h_1$)] $S\cap h_{\text{\tiny{$L$}}}(S)\subseteq S$, for all $S \in\mathcal{S}_\text{{\bsifamily{l}}}^{c}(L)$; \item [($h_2$)] If $S\subseteq T$ then $S\cap h_{\text{\tiny{$L$}}}(S)\subseteq T\cap h_{\text{\tiny{$L$}}}(T)$, for all $S,T \in\mathcal{S}_\text{{\bsifamily{l}}}^{c}(L)$; \item [($h_3$)] $ h_{\text{\tiny{$L$}}}(L)=L$.
\end{itemize}
\end{defi}
\begin{defi}
An $\mathfrak h$-space is a pair $(L,h_{\text{\tiny{$L$}}})$ where $L$ is an object of $\mathbf{Loc}$ and $h_{\text{\tiny{$L$}}}$ is an $\mathfrak h$ operator on $L$.
\end{defi}
\begin{defi}
A morphism $f:L\rightarrow M$ of $\mathbf{Loc}$ is said to be $\mathfrak h$-continuous if
\begin{equation}\label{h-conti}
f_{-1}\left[T\cap h_{\text{\tiny{$M$}}}(T)\right]\subseteq f_{-1}[T]\cap h_{\text{\tiny{$L$}}}\left( f_{-1}[T]\right)
\end{equation}
for all $T\in \mathcal{S}_\text{{\bsifamily{l}}}^{c}(M)$, where $f_{-1}[-]$ is the preimage function of $f$.
\end{defi}
\begin{prop}
Let $f:L\rightarrow M$ and $g:M\rightarrow N$ be two $\mathfrak h$-continuous morphisms of $\mathbf{Loc}$ then $g\centerdot f$ is an $\mathfrak h$-continuous morphism of $\mathbf{Loc}$.
\end{prop}
\begin{proof}
Since $g:M\rightarrow N$ is $\mathfrak h$-continuous, we have
$$
g_{-1}\left[V\cap h_{\text{\tiny{$N$}}}(V)\right]\subseteq g_{-1}[V]\cap h_{\text{\tiny{$M$}}}\left( g_{-1}[V]\right)
$$
for all $V\in \mathcal{S}_\text{{\bsifamily{l}}}^{c}(N)$, it follows that $$ f_{-1}\big[ g_{-1}\left[V\cap h_{\text{\tiny{$N$}}}(V)\right]\big]\subseteq f_{-1}\big[g_{-1}[V]\cap h_{\text{\tiny{$M$}}}\left( g_{-1}[V]\right)\big]
$$
now, by the $\mathfrak h$-continuity of $f$, $$ f_{-1}\big[g_{-1}[V]\cap h_{\text{\tiny{$M$}}}(g_{-1}[V])\big]\subseteq f_{-1}\big[g_{-1}[V]\big]\cap h_{\text{\tiny{$L$}}}\left( f_{-1}\big[g_{-1}[V]\big]\right),$$ therefore $$(g\centerdot f)_{-1}\big[V\cap h_{\text{\tiny{$N$}}}(V)\big]\subseteq (g\centerdot f)_{-1}[V]\cap h_{\text{\tiny{$L$}}}\big((g\centerdot f)_{-1}[V] \big).$$ This completes the proof. \end{proof}
As a consequence we obtain \begin{defi}
The category $\mathbf{\mathfrak h\text{-}Loc}$ of $\mathfrak h$-spaces comprises the following data:
\begin{enumerate}
\item {\bf Objects}: Pairs $(L,h_{\text{\tiny{$L$}}})$ where $L$ is an object of $\mathbf{Loc}$ and $h_{\text{\tiny{$L$}}}$ is an $\mathfrak h$-operator on $L$. \item {\bf Morphisms}: Morphisms of $\mathbf{Loc}$ which are $\mathfrak h$-continuous.
\end{enumerate}
\end{defi}
\subsection{The lattice structure of all $\mathfrak h$ operators}
For the category $\mathbf{Loc}$ we consider the collection
\[
\mathfrak h(\mathbf{Loc})
\]
of all $\mathfrak h$ operators on $\mathbf{Loc}$. It is ordered by \[
\mathfrak h\leqslant \mathfrak{h^{'}} \Leftrightarrow h_{\text{\tiny{$L$}}}(S)\subseteq h^{'}_{\text{\tiny{$L$}}}(S), \,\,\ \text{for all $S\in \mathcal{S}_\text{{\bsifamily{l}}}^{c}(L)$ and every object $L$ of $\mathbf{Loc}$}. \]
This way $\mathfrak h(\mathbf{Loc})$ inherits a lattice structure from $ \mathcal{S}_\text{{\bsifamily{l}}}^{c}$.
\begin{prop}
Every family $( \mathfrak h_{\text{\tiny{$\lambda$}}})_{\text{\tiny{$\lambda\in \Lambda$}}}$ in $\mathfrak h(\mathbf{Loc})$ has a join $\bigvee\limits_{\text{\tiny{$\lambda\in \Lambda $}}} \mathfrak h_{\text{\tiny{$\lambda $}}}$ and a meet $\bigwedge\limits_{\text{\tiny{$\lambda\in \Lambda $}}} \mathfrak h_{\text{\tiny{$\lambda $}}}$ in $\mathfrak h(\mathbf{Loc})$. The discrete $\mathfrak h$ operator
\[ \mathfrak h_{\text{\tiny{$D$}}}=({h_{\text{\tiny{$D$}}}}_{\text{\tiny{$L$}}})_{\text{$L\in \mathbf{Loc}$}}\,\,\ \text{with}\,\,\ {h_{\text{\tiny{$D$}}}}_{\text{\tiny{$L$}}}(S)=S\,\,\ \text{for all}\,\ S\in \mathcal{S}_\text{{\bsifamily{l}}}^{c}(L)
\]
is the largest element in $\mathfrak h(\mathbf{Loc})$, and the trivial $\mathfrak h$ operator
\[ \mathfrak h_{\text{\tiny{$T$}}}=({h_{\text{\tiny{$T$}}}}_\text{\tiny{$L$}})_{\text{$L\in \mathbf{Loc}$}}\,\,\ \text{with}\,\,\ {h_{\text{\tiny{$T$}}}}_{\text{\tiny{$L$}}}(S)=
\begin{cases} \{1\}& \text{for all}\,\ S\in \mathcal{S}_\text{{\bsifamily{l}}}^{c}(L),\,\ S\ne L\\ L&\text {if}\,\ S=L \end{cases}
\]
is the least one.
\end{prop}
\begin{proof} For $\Lambda\ne\emptyset$, let $\widehat{\mathfrak h}=\bigvee\limits_{\text{\tiny{$\lambda\in\Lambda $}}}\mathfrak h_{\text{\tiny{$\lambda $}}}$, then
\[
\widehat{h_{\text{\tiny{$L$}}}}=\bigvee\limits_{\text{\tiny{$\lambda\in \Lambda$}}} {h_{\text{\tiny{$\lambda $}}}}_{\text{\tiny{$L$}}},
\]
for every object $L$ of $\mathbf{Loc}$, satisfies \begin{itemize}
\item $S\cap \widehat{h_{\text{\tiny{$L$}}}}(S)\subseteq S$, because $S\cap {h_{\text{\tiny{$\lambda $}}}}_{\text{\tiny{$L$}}}(S)\subseteq S$, for all $S \in\mathcal{S}_\text{{\bsifamily{l}}}^{c}(L)$ and for all $\lambda \in \Lambda$. \item If $S\subseteq T$ then $S\cap \widehat{h_{\text{\tiny{$L$}}}}(S)\subseteq T\cap \widehat{h_{\text{\tiny{$L$}}}}(T)$, since $S \cap {h_{\text{\tiny{$\lambda $}}}}_{\text{\tiny{$L$}}}(S)\subseteq T \cap {h_{\text{\tiny{$\lambda $}}}}_{\text{\tiny{$L$}}}(T)$, for all $S,T \in\mathcal{S}_\text{{\bsifamily{l}}}^{c}(L)$ and for all $\lambda \in \Lambda$. \item $L\cap \widehat{h_{\text{\tiny{$L$}}}}(L)= L$, because $L\cap {h_{\text{\tiny{$\lambda $}}}}_{\text{\tiny{$L$}}}(L)= L$ for all $\lambda \in \Lambda$.
\end{itemize} Similarly $\bigwedge\limits_{\text{\tiny{$\lambda\in \Lambda $}}}\mathfrak h_{\text{\tiny{$\lambda $}}}$,\,\ $ \mathfrak h_{\text{\tiny{$D$}}}$ and $\mathfrak h _{\text{\tiny{$T$}}}$ are $\mathfrak h$ operators.
\end{proof}
\begin{coro}\label{h-complete}
For every object $L$ of $\mathbf{Loc}$
\[
\mathfrak h(L) = \{h_{\text{\tiny{$L$}}}\mid h_{\text{\tiny{$L$}}}\,\ \text{ is an $\mathfrak h$ operator on}\,\ L\}
\]
is a complete lattice. \end{coro}
\subsection{Initial $\mathfrak h$ operators} Let $\mathbf{\mathfrak h\text{-}Loc}$ be the category of $\mathfrak h$-spaces. Let $(M,h_{\text{\tiny{$M$}}})$ be an object of $\mathbf{\mathfrak h\text{-}Loc}$ and let $L$ be an object of $\mathbf{Loc}$. For each morphism $f:L\rightarrow M$ in $\mathbf{Loc}$ we define on $L$ the operator
\begin{equation} \label{h-initial} h_{\text{\tiny{$L_{f}$}}}:=f_{-1}\centerdot h_{\text{\tiny{$M$}}}\centerdot f_{*}. \end{equation}
\begin{prop}\label{h-ini-cont} The operator (\ref{h-initial}) is an $\mathfrak h$ operator on $L$ for which the morphism $f$ is $\mathfrak h$-continuous. \end{prop}
\begin{proof}\ \begin{enumerate} \item[($h_1)$] $S\cap h_{\text{\tiny{$L_{f}$}}}(S)= f_{-1} \Big[f_{*}[S]\cap h_{\text{\tiny{$M$}}}\big[ f_{*}[S]\big]\Big]\subseteq f_{-1}\big[f_{*}[S]\big]\subseteq S$,
for all $S\in \mathcal{S}_\text{{\bsifamily{l}}}^c(L)$.
\item[($h_2)$] $S\subseteq T$ in $\mathcal{S}_\text{{\bsifamily{l}}}^c(L)$, implies $f_{*}[S]\subseteq f_{*}[T]$, then
$f_{*}[S]\cap h_{\text{\tiny{$M$}}}\big( f_{*}[S]\big)\subseteq f_{*}[T]\cap h_{\text{\tiny{$M$}}}\big( f_{*}[T]\big)$, therefore
$f_{-1}\Big(f_{*}[S]\cap h_{\text{\tiny{$M$}}}\big( f_{*}[S]\big)\Big)\subseteq f_{-1}\Big(f_{*}[T]\cap h_{\text{\tiny{$M$}}}\big( f_{*}[T]\big)\Big)$, consequently $S\cap h_{\text{\tiny{$L_{f}$}}}(S)\subseteq T\cap h_{\text{\tiny{$L_{f}$}}}(T)$, for all $S,T \in\mathcal{S}_\text{{\bsifamily{l}}}^{c}(L)$; \item[($h_3)$] $L\cap h_{\text{\tiny{$L_{f}$}}}(L)= f_{-1} \Big[f_{*}[L]\cap h_{\text{\tiny{$M$}}}\big[ f_{*}[L]\big]\Big]=L$. \end{enumerate} \end{proof}
It is clear that $h_{\text{\tiny{$L_{f}$}}}$ is the coarsest $\mathfrak h$ operator on $L$ for which the morphism $f$ is $\mathfrak h$-continuous; more precisely \begin{prop}\label{h-unique} Let $(L,h_{\text{\tiny{$L$}}})$ and $(M,h_{\text{\tiny{$M$}}})$ be objects of $\mathbf{\mathfrak h\text{-}Loc}$, and let $N$ be an object of $\mathbf{Loc}$. For each morphism $g:N\rightarrow L$ in $\mathbf{Loc}$ and for\linebreak $f:(L,h_{\text{\tiny{$L_{f}$}}})\rightarrow (M,h_{\text{\tiny{$M$}}})$ an $\mathfrak h$-continuous morphism, $g$ is $\mathfrak h$-continuous if and only if $f\centerdot g$ is $\mathfrak h$-continuous. \end{prop} \begin{proof} If $g$ is $\mathfrak h$-continuous, then so is $f\centerdot g$, because $f$ is $\mathfrak h$-continuous and $\mathfrak h$-continuous morphisms compose. Conversely, suppose that $f\centerdot g$ is $\mathfrak h$-continuous, i.e. $$(f\centerdot g)_{-1}\big(T\cap h_{\text{\tiny{$M$}}}(T)\big)\subseteq (f\centerdot g)_{-1}(T)\cap h_{\text{\tiny{$N$}}}\big( (f\centerdot g)_{-1}(T) \big)$$
for all $T\in\mathcal{S}_\text{{\bsifamily{l}}}^{c}(M)$. Then, for all $S\in\mathcal{S}_\text{{\bsifamily{l}}}^{c}(L)$, we have \begin{align*}
g_{-1}\Big(S\cap h_{\text{\tiny{$L_{f}$}}}(S)\Big)&=g_{-1}\Big(f_{-1}\big( f_{*}(S)\cap h_{\text{\tiny{$M$}}}( f_{*}(S))\big)\Big)=(f\centerdot g)_{-1}\big( f_{*}(S)\cap h_{\text{\tiny{$M$}}}( f_{*}(S)) \big)\\
&\subseteq (f\centerdot g)_{-1}(f_{*}(S)) \cap h_{\text{\tiny{$N$}}}\big( (f\centerdot g)_{-1}(f_{*}(S) ) \big)\\
&=(f\centerdot g)_{-1}(f_{*}(S) ) \cap h_{\text{\tiny{$N$}}}\big( g_{-1}\centerdot f_{-1}\centerdot f_{*} (S)\big)\\
&\subseteq g_{-1}(S)\cap h_{\text{\tiny{$N$}}}\big( g_{-1}(S)\big), \end{align*} i.e. $g$ is $\mathfrak h$-continuous. \end{proof} As a consequence of Corollary \ref{h-complete}, Proposition \ref{h-ini-cont} and Proposition \ref{h-unique} (cf. \cite{AHS} or \cite{JM}), we obtain
\begin{theorem} The forgetful functor $U:\mathbf{\mathfrak h\text{-}Loc}\rightarrow \mathbf{Loc}$ is topological, i.e. the concrete category $\big(\mathbf{\mathfrak h\text{-}Loc},\ U\big)$ is topological.
\end{theorem}
\end{document}
\begin{document}
\title{On moduli subspaces of central extensions of rational H-spaces}
\begin{abstract} We investigate the moduli sets of central extensions of H-spaces enjoying inversivity, power associativity and the Moufang property. By considering rational H-extensions, it turns out that there is no relationship between the first and the second properties in general. \end{abstract}
\section{Introduction} We assume that every space has the homotopy type of a path-connected CW-complex with a nondegenerate base point $*$ and that all maps are based maps.\\
\indent An {\it H-space} is a space $X$ endowed with a map $\mu :X\times X \to X$, called a multiplication, such that both the restrictions $\mu |_{X\times *}$ and $\mu |_{*\times X}$ are homotopic to the identity map of $X$. The multiplication naturally induces a binary operation on the homotopy set $[Y,X]$ for any space $Y$. In \cite{Jam1960}, James has proved that $[Y, X]$ is an algebraic loop; that is, it has a two-sided unit element and for any elements $x$ and $y\in [Y,X]$, the equations $xa=y$ and $bx=y$ have unique solutions $a,b\in [Y,X]$.\\ \indent Loop theoretic properties of H-spaces have been considered by several authors, for example, Curjel \cite{Cur1968} and Norman \cite{Nor1963}. In \cite{AL1990}, Arkowitz and Lupton considered the inversive, power associative and Moufang properties of H-space structures. They considered whether there exists an H-space structure which does not satisfy these properties. Thanks to the general theory of algebraic loops, we see that the Moufang property implies inversivity and power associativity. However, it is expected that there is no relationship between the first two properties in general.\\ \indent In \cite{Kac1995}, Kachi introduced central extensions of H-spaces which are called central H-extensions; see Definition \ref{H-ex}. Roughly speaking, for a given homotopy associative and homotopy commutative H-space $X_{1}$ and an H-space $X_{2}$, a central H-extension of $X_{1}$ by $X_{2}$ is defined to be the product $X_{1}\times X_{2}$ with a twisted multiplication. He also gave a classification theorem for the extensions. In fact, a quotient set of an appropriate homotopy set classifies the equivalence classes of central H-extensions; see Theorem \ref{thm:Kac}. This quotient set is called {\it the moduli set} of H-extensions. Moreover, we refer to the subset of the moduli set corresponding, via the bijection in the classification theorem, to the set of equivalence classes of H-extensions enjoying a property $P$ as {\it the moduli subset} of H-extensions associated with the property $P$.\\ \indent The objective of the paper is to investigate the moduli subsets of central extensions of H-spaces associated with the inversive, power associative or Moufang property. If a given H-space is ${\mathbb Q}$-local, then the moduli set of its central H-extensions is endowed with a vector space structure over ${\mathbb Q}$. It turns out that the moduli subsets mentioned above inherit the vector space structure. This fact enables us to compare such moduli subsets as vector spaces, and also to measure the size of the moduli set with the dimension of the vector space. Thus our main theorem (Theorem \ref{lem:4.4}) yields the following result. \begin{ass}\label{ex1} Let $S_{{\rm inv}}$, $S_{{\rm p.a}}$ and $S_{{\rm Mo}}$ be the moduli subsets of central H-extensions of the Eilenberg-MacLane space $K({\mathbb Q},n)$ by $K({\mathbb Q},m)$ associated with inversive, power associative and Moufang properties, respectively. \begin{enumerate} \item If $m$ is even, $n=km$ and $k\geq 4$ is even, then $S_{{\rm Mo}}$ is a proper subset of $S_{{\rm p.a}}$ and $S_{{\rm p.a}}$ is a proper subset of $S_{{\rm inv}}:$ $$S_{{\rm Mo}}\subsetneq S_{{\rm p.a}}\subsetneq S_{{\rm inv}}.$$ \item If $m$ is even, $n=km$ and $k\geq 5$ is odd, then $S_{{\rm Mo}}$ is a proper subset of $S_{{\rm p.a}}\cap S_{{\rm inv}}$.
Moreover, $S_{{\rm p.a}}\cap S_{{\rm inv}}$ is a proper subset of $S_{{\rm p.a}}$ and $S_{{\rm inv}}:$ $$S_{{\rm Mo}}\subsetneq S_{{\rm p.a}} \cap S_{{\rm inv}} \subsetneq S_{{\rm p.a}} \text{ \ and \ } S_{{\rm p.a}} \cap S_{{\rm inv}} \subsetneq S_{{\rm inv}}.$$ \item Otherwise, $S_{{\rm inv}}=S_{{\rm p.a}}=S_{{\rm Mo}}$. \end{enumerate} \end{ass} \indent The organization of this paper is as follows. In Section 2, we will recall several fundamental definitions and facts on H-spaces and algebraic loops. The classification theorem of central H-extensions is also described. In Section 3, we will present necessary and sufficient conditions for central H-extensions to be inversive, power associative and Moufang. The conditions allow us to describe the moduli set of extensions enjoying each of the properties in terms of a homotopy set of maps. In Section 4, we will deal with rational H-spaces and the dimensions of the moduli spaces mentioned above. Moreover, several examples are presented. Assertion \ref{ex1} is proved at the end of Section 4.
\section{Preliminaries} We begin by recalling the definition of an algebraic loop. \begin{defn}{\rm An} algebraic loop $(Q,\cdot )$ {\rm is a set $Q$ with a map \begin{eqnarray*} \cdot : Q\times Q\to Q ; \ (x,y)\longmapsto xy \end{eqnarray*} such that the equations $ax=b$ and $ya=b$ have unique solutions $x,y\in Q$ for all $a,b\in Q$. Moreover, there exists an element $e\in Q$ such that $xe=x=ex$ for all $x\in Q$. The element $e$ is called the} unit. \end{defn}
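For example, every group is an algebraic loop: the equations $ax=b$ and $ya=b$ are solved by $x=a^{-1}b$ and $y=ba^{-1}$, and the group unit serves as the unit $e$. A general algebraic loop, however, need not be associative.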
Algebraic loops whose multiplications enjoy additional properties are considered next.
\begin{defn}\label{def:2.1.5} {\rm An algebraic loop $(Q,\cdot)$ is said to be} \begin{enumerate} \item inversive {\rm if, to any element $x$ of $Q$, there exists a unique element $x^{-1}$ of $Q$ such that the equalities $x^{-1}x=e=xx^{-1}$ hold,} \item power associative {\rm if $x(xx)=(xx)x$ for any $x\in Q$,} \item Moufang {\rm if $(x(yz))x=(xy)(zx)$ for any $x,y,z\in Q$,} \item symmetrically associative {\rm if $(xy)x=x(yx)$ for any $x,y\in Q$.} \end{enumerate} \end{defn}
In Section 3, we will focus on symmetrically associative property to examine Moufang property.
\begin{lem}{\rm \cite[Lemma 2A]{Bru1946}}\label{lem:2.3} If an algebraic loop $Q$ is Moufang, then it is inversive. \end{lem}
Thus we have the following implications between the properties described in Definition \ref{def:2.1.5}.
\[\xymatrix{ \fbox{power associative} & & \fbox{inversive}\\
\fbox{symmetrically associative} \ar@{=>}[u] & & \fbox{Moufang} \ar@{=>}[ll] \ar@{=>}[u]\\ }\]
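Indeed, the implication from symmetric associativity to power associativity is obtained by putting $y=x$ in the identity $(xy)x=x(yx)$, the implication from Moufang to symmetric associativity by putting $y=e$ in $(x(yz))x=(xy)(zx)$, and the implication from Moufang to inversivity is Lemma \ref{lem:2.3}.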
We bring these loop theoretic notions to the realm of H-spaces.
\begin{defn} {\rm Let $(X,\mu)$ be an H-space. A} left inverse $l: X\to X$ {\rm and a} right inverse $r:X\to X$ {\rm of an H-space $(X,\mu)$ are maps such that $\mu (l\times id)\Delta \simeq * \simeq \mu(id\times r)\Delta$.} \end{defn}
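Such maps always exist: since $[X,X]$ is an algebraic loop, the equations $[\nu ][id_{X}]=*$ and $[id_{X}][\nu ']=*$ have (unique) solutions in $[X,X]$, and representatives of these classes serve as a left inverse and a right inverse of $(X,\mu)$.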
\begin{defn} {\rm An H-space $(X,\mu)$ is said to be} \begin{enumerate} \item inversive {\rm if the left inverse and the right inverse of $(X,\mu )$ are homotopic,} \item power associative {\rm if $\mu(\mu \times id)(\Delta \times id)\Delta \simeq \mu(id \times \mu)(\Delta \times id)\Delta$,} \item Moufang {\rm if $\mu(\mu \times \mu)\theta \simeq \mu(\mu \times id)(id \times \mu \times id)\theta $, where $\theta : X^{3}\to X^{4}$ is defined by $\theta (x,y,z)=(x,y,z,x),$} \item symmetrically associative {\rm if $\mu(\mu \times id)(id \times t )(\Delta \times id)\simeq \mu(id \times \mu)(id\times t)(\Delta \times id)$, where $t:X^{2}\to X^{2}$ is defined by $t(x,y)=(y,x)$}. \end{enumerate} \end{defn}
The following proposition characterizes the above conditions on H-spaces by homotopy sets.
\begin{prop} Let $(X,\mu)$ be an H-space. Then $(X,\mu)$ is inversive $($respectively, power associative, Moufang and symmetrically associative$)$ if and only if the homotopy set $[Y,X]$ is inversive $($respectively, power associative, Moufang and symmetrically associative$)$ for any space $Y$. \end{prop}
\begin{proof}{\rm Suppose $X$ is inversive. Let $l$ and $r$ be a left inverse and a right inverse of $X$, respectively. Then, for any $[f]\in [Y,X]$, $[lf]$ is the inverse element of $[f]$. Conversely, if $[Y,X]$ is inversive for any space $Y$, then there exists $[\nu ]\in [X,X]$ such that $ [\nu][id_{X}]=*=[id_{X}][\nu]. $ A similar argument shows the other cases. }\end{proof}
Here we recall the definition of the central extension of an H-spaces.
\begin{defn}\label{H-ex}{\rm \cite{Kac1995} Let $(X_{1},\mu_{1})$ be a homotopy associative and homotopy commutative H-space and $(X_{2},\mu_{2})$ an H-space. An H-space $(X,\mu)$ is a} central H-extension of $(X_{1},\mu_{1})$ by $(X_{2},\mu_{2})$ {\rm if there exists a sequence of H-spaces \begin{eqnarray*} (X_{1},\mu_{1}) \stackrel{f_{1}}{\longrightarrow } ( X,\mu ) \stackrel{f_{2}}{\longrightarrow }(X_{2},\mu_{2}) \end{eqnarray*} such that the sequence \begin{eqnarray*} e \longrightarrow [Y,X_{1}] \stackrel{f_{1*}}{\longrightarrow } [Y,X] \stackrel{f_{2*}}{\longrightarrow } [Y,X_{2}] \longrightarrow e \end{eqnarray*} is exact as algebraic loops for any space $Y$ and the image of $f_{1*}$ is contained in the center of $[Y,X]$, where $f_{i*}$ is the algebraic loop homomorphism induced by $f_{i}$. } \end{defn}
In order to describe the classification theorem for central H-extensions presented by Kachi, we recall the definition of H-deviations. Let $(X,\mu)$ be an H-space and $Y$ a space. For any $[f],[g]\in [Y,X]$, let $D(f,g)$ be a unique element of $[Y,X]$ such that $D(f,g)[g]=[f]$.
\begin{defn}{\rm Let $(X,\mu)$ and $(Y,\mu')$ be H-spaces, and let $f:X\to Y$ be a map. An} H-deviation of the map f {\rm is an element \begin{eqnarray*} HD(f) \in [X\wedge X,Y] \end{eqnarray*} such that $q^{*}(HD(f))=D(f\mu , \mu'(f\times f))$, where $q:X\times X \to X\wedge X$ is the quotient map.} \end{defn}
The existence of H-deviations is ensured by the following lemma.
\begin{lem}{\rm \cite[Lemma 1.3.5]{Zab1976}}\label{lem:2.2.2} Let $(X,\mu)$ be an H-space. Then for any spaces $Y,Z$, \begin{eqnarray*} e\longrightarrow [Y\wedge Z,X] \stackrel{q^{*}}{\longrightarrow } [Y\times Z,X]\stackrel{i^{*}}{\longrightarrow } [Y\vee Z,X]\longrightarrow e \end{eqnarray*} is the short exact sequence as algebraic loops, where $i:Y\vee Z\to Y\times Z$ is the inclusion and $q:Y\times Z\to Y\wedge Z$ is the quotient map. \end{lem}
Since $i^{*}D(f\mu , \mu'(f\times f))=D(f\mu i , \mu'(f\times f)i)=e$, there exists an element $HD(f) \in [X\wedge X,Y]$ such that $q^{*}(HD(f))=D(f\mu , \mu'(f\times f))$. If $f\simeq f':X\to X'$, then $HD(f)=HD(f')$. Therefore, the H-deviation map \begin{eqnarray*} HD : [X,Y] \to [X\wedge X,Y] \end{eqnarray*} is defined by sending a class $[f]$ to the class $HD(f)$. Moreover, if $X$ is a homotopy associative and homotopy commutative H-space, then the H-deviation map is an algebraic loop homomorphism ( \cite[Corollary 2.4]{Kac1995}).
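In particular, since $q^{*}$ is injective by Lemma \ref{lem:2.2.2}, we have $HD(f)=e$ if and only if $f\mu \simeq \mu'(f\times f)$, that is, if and only if $f$ is an H-map.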
\begin{defn}{\rm \cite{Kac1995} Two central H-extensions \begin{eqnarray*} (X,\mu _{X}) \stackrel{f_{i}}{\longrightarrow } (Z_{i},\mu_{i} ) \stackrel{g_{i}}{\longrightarrow }(Y,\mu_{Y}) \ (i=1,2) \end{eqnarray*} are said to be} equivalent {\rm if there exists an H-map $h:(Z_{1},\mu_{1}) \to (Z_{2},\mu_{2})$ such that the following diagram is homotopy commutative: $$ \begin{CD} X @>f_{1}>> Z_{1} @>g_{1}>> Y\\ @V=VV @VhVV @V=VV \\ X @>>f_{2}> Z_{2} @>>g_{2}> Y.\\ \end{CD} $$ } \end{defn}
It is readily seen that this relation is an equivalence relation. We denote by $CH(X_{1},\mu_{1};X_{2},\mu_{2})$ the set of equivalence classes of central H-extensions of $(X_{1},\mu_{1})$ by $(X_{2},\mu_{2})$, and by $[(X,\mu),f_{1},f_{2}]$ the equivalence class of the central H-extension \begin{eqnarray*} (X_{1},\mu_{1}) \stackrel{f_{1}}{\longrightarrow } (X,\mu ) \stackrel{f_{2}}{\longrightarrow }(X_{2},\mu_{2}). \end{eqnarray*}
\begin{thm}{\rm \cite[Theorem 4.3]{Kac1995}} \label{thm:Kac} Let $(X_{1},\mu_{1})$ be a homotopy associative and homotopy commutative H-space, and $(X_{2},\mu_{2})$ an H-space. Define \begin{eqnarray*} \Phi : [X_{2}\wedge X_{2},X_{1}]/\mathrm{Im} HD \longrightarrow CH(X_{1},\mu_{1};X_{2},\mu_{2}) \end{eqnarray*} by sending $[\omega ]\in [X_{2}\wedge X_{2},X_{1}]/\mathrm{Im} HD$ to $[(X_{1}\times X_{2},\mu_{\omega}),i_{1},p_{2}]\in CH(X_{1},\mu_{1};X_{2},\mu_{2})$, where the multiplication $\mu_{\omega}$ of $X_{1}\times X_{2}$ is defined by $\mu_{\omega}((x_{1},x_{2}),(y_{1},y_{2}))=(\mu_{1}(\mu_{1}(x_{1},y_{1}),\omega q(x_{2},y_{2})),\mu_{2}(x_{2},y_{2}))$, and $q:X_{2}\times X_{2}\to X_{2}\wedge X_{2}$ is the quotient map. Then $\Phi $ is bijective. \end{thm}
Thus, for an equivalence class ${\mathcal E}$ in $CH(X_{1},\mu_{1};X_{2},\mu_{2})$, there exists a map $\omega :X_{2}\wedge X_{2}\to X_{1}$ such that $\Phi [\omega ]=[{\mathcal E}]$, and the map $\omega $ is called {\it the classifying map} of the central H-extension ${\mathcal E}$. Moreover, we refer to the set $[X_{2}\wedge X_{2},X_{1}]/\mathrm{Im} HD$ as {\it the moduli set} of H-extensions of $(X_{1},\mu_{1})$ by $(X_{2},\mu_{2})$.
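Note that the twisted multiplication $\mu_{\omega}$ in Theorem \ref{thm:Kac} is indeed a multiplication of an H-space: since $q(x_{2},*)=*=q(*,x_{2})$ and $\omega $ is a based map, the restrictions of $\mu_{\omega}$ to $(X_{1}\times X_{2})\times \{*\}$ and to $\{*\}\times (X_{1}\times X_{2})$ coincide, up to homotopy, with those of $\mu_{1}\times \mu_{2}$ and hence are homotopic to the identity.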
\section{Moduli subsets of central H-extensions}
We retain the notation and terminology described in the previous section. Let $(X_{1},\mu_{1})$ be a homotopy associative and homotopy commutative H-space, and $(X_{2},\mu_{2})$ an H-space. Then a central H-extension of $(X_{1},\mu_{1})$ by $(X_{2},\mu_{2})$ is of the form $(X_{1}\times X_{2}, \mu_{\omega})$. Let $i_{j}:X_{j}\to X_{1}\times X_{2}$ and $p_{j}:X_{1}\times X_{2}\to X_{j}$ denote the inclusions and the projections, respectively. Let $\Delta_{j}:X_{j}\to X_{j}\times X_{j}$ and $\Delta:X_{1}\times X_{2}\to (X_{1}\times X_{2})^{2}$ be diagonal maps. We denote by $id_{j}$ and $id$ the identity maps of $X_{j}$ and $X_{1}\times X_{2}$, respectively.
\begin{prop}\label{prop:3.1} The central H-extension $(X_{1}\times X_{2},\mu_{\omega})$ is inversive if and only if $(X_{2},\mu_{2})$ is inversive and $\omega q(l_{2}\times id_{2})\Delta _{2}\simeq \omega q(id_{2} \times r_{2})\Delta _{2}$, where $l_{2}$ and $r_{2}$ are the left inverse and the right inverse of $(X_{2},\mu_{2})$. \end{prop}
Before proving Proposition \ref{prop:3.1}, we prepare two lemmas.
\begin{lem}\label{lem:3.2} Let $(X,\mu)$, $(Y,\mu')$ be H-spaces, and let $f:X\to Y$ be a map. If $f$ is an H-map, then $fl\simeq l'f$, $fr\simeq r'f$, where $l,l'$ are left inverses of $(X,\mu),(Y,\mu')$ and $r,r'$ are right inverses of $(X,\mu),(Y,\mu')$ respectively. \end{lem}
\begin{proof}{\rm We see that $[fl][f]=[*]=[l'f][f]$ in $[X,Y]$ since $f$ is an H-map. Hence, we get $[fl]=[l'f]$. Similarly, one obtains the equality $[fr]=[r'f]$.} \end{proof}
\begin{lem}\label{lem:3.3} Let $Y,Z$ be spaces, and let $i:Y\to Y\times Z$, $p:Y\times Z\to Y$ be the inclusion and the projection. Then one has \begin{eqnarray*} \mathrm{Ker} \{ i^{*}:[Y\times Z,X]\to [Y,X] \} \cap \mathrm{Im} \{ p^{*}:[Y,X]\to [Y\times Z,X] \} = \{ e \} \end{eqnarray*} for any H-space $X$. \end{lem}
\begin{proof}{\rm For any $[f]\in \mathrm{Ker} i^{*}\cap \mathrm{Im} p^{*}$, there is a map $g:Y\rightarrow X$ such that $f\simeq gq$. Since $[f]$ is in $\mathrm{Ker} i^{*}$, it follows that $*\simeq fi\simeq g$ and hence $f\simeq *$. } \end{proof}
\noindent {\it Proof of Proposition \ref{prop:3.1}.} Let $l$ and $r$ be a left inverse and a right inverse of $(X_{1}\times X_{2},\mu_{\omega })$, respectively. Note that $l\simeq r$ if and only if $p_{j}l\simeq p_{j}r$ for $j=1,2$. In order to prove Proposition \ref{prop:3.1}, it is enough to show \begin{enumerate} \item $p_{1}l \simeq p_{1}r$ if and only if $\omega q(l_{2}\times id_{2})\Delta _{2}\simeq \omega q(id_{2} \times r_{2})\Delta _{2}$, \item $p_{2}l\simeq p_{2}r$ if and only if $(X_{2},\mu_{2})$ is inversive. \end{enumerate} Since $* \simeq p_{1}\mu_{\omega }(l\times id) \Delta = \mu_{1}(\mu_{1}(p_{1}l\times p_{1})\Delta\times \omega q(p_{2}l\times p_{2})\Delta)\Delta$, it follows that \begin{eqnarray*} [*]=([p_{1}l][p_{1}])[\omega q(p_{2}l\times p_{2})\Delta] \end{eqnarray*} in $[X_{1}\times X_{2},X_{1}]$. Similarly, we have \begin{eqnarray*} [*]=([p_{1}][p_{1}r])[\omega q(p_{2}\times p_{2}r)\Delta ] \end{eqnarray*} in $[X_{1}\times X_{2},X_{1}].$ Since $[X_{1}\times X_{2},X_{1}]$ is an abelian group, we see that $[p_{1}l]=[p_{1}r]$ if and only if $[\omega q(p_{2}l\times p_{2})\Delta]=[\omega q(p_{2}\times p_{2}r)\Delta].$ Lemma \ref{lem:3.2} allows us to obtain that \begin{eqnarray*} [\omega q(p_{2}l\times p_{2})\Delta]=[\omega q(l_{2}\times id_{2} )\Delta_{2}p_{2}] \end{eqnarray*} and \begin{eqnarray*} [\omega q(p_{2}\times p_{2}r)\Delta]=[\omega q(id_{2}\times r_{2})\Delta_{2}p_{2}]. \end{eqnarray*} If $[\omega q(l_{2}\times id_{2} )\Delta_{2}p_{2}]=[\omega q(id_{2}\times r_{2})\Delta_{2}p_{2}]$, then $[\omega q(l_{2}\times id_{2})\Delta_{2}] =[\omega q(l_{2}\times id_{2} )\Delta_{2}p_{2}i_{2}]=[\omega q(id_{2}\times r_{2})\Delta_{2}p_{2}i_{2}]= [\omega q(id_{2}\times r_{2})\Delta_{2}]$. Therefore we have the assertion $(1)$. Suppose that $p_{2}l\simeq p_{2}r$. Then $l_{2}\simeq p_{2}li_{2}\simeq p_{2}ri_{2}\simeq r_{2}$ and $(X_{2},\mu_{2})$ is inversive. Conversely, suppose that $(X_{2},\mu_{2})$ is inversive. Then it follows that $p_{2}li_{2}\simeq p_{2}ri_{2}$ and hence \begin{eqnarray*} [*]=[p_{2}li_{2}][p_{2}ri_{2}]^{-1}=[p_{2}li_{2}][r_{2}p_{2}ri_{2}]=i_{2}^{*}([p_{2}l][r_{2}p_{2}r]). \end{eqnarray*} Therefore we see that $[p_{2}l][r_{2}p_{2}r]$ is in $\mathrm{Ker} i_{2}^{*}.$ On the other hand, $[p_{2}l][r_{2}p_{2}r]=[l_{2}p_{2}][r_{2}r_{2}p_{2}]=p_{2}^{*}([l_{2}][r_{2}r_{2}])\in \mathrm{Im} p_{2}^{*}.$ In view of Lemma \ref{lem:3.3}, we have \begin{eqnarray*} [*] = [p_{2}l][r_{2}p_{2}r] =[p_{2}l][p_{2}r]^{-1}. \end{eqnarray*} Hence, it follows that $[p_{2}l]=[p_{2}r]$. We have the second assertion.
\begin{prop}\label{prop:3.4} The central H-extension $(X_{1}\times X_{2},\mu_{\omega})$ is power associative if and only if $(X_{2},\mu_{2})$ is power associative and $\omega q(\mu_{2}\times id_{2})\overline{\Delta}_{2} \simeq \omega q(id_{2} \times \mu_{2})\overline{\Delta}_{2}$, where $\overline{\Delta}_{2}=(\Delta _{2} \times id_{2})\Delta_{2}$. \end{prop}
\begin{proof}{\rm As in the proof of proposition \ref{prop:3.1}, it is enough to show the following: \begin{enumerate} \item $p_{1}\mu_{\omega }(\mu_{\omega}\times id)\overline{\Delta}\simeq p_{1}\mu_{\omega }(id \times\mu_{\omega})\overline{\Delta}$\\ if and only if $\omega q(\mu_{2}\times id_{2})\overline{\Delta}_{2} \simeq \omega q(id_{2} \times \mu_{2})\overline{\Delta}_{2}$, where $\overline{\Delta}=(\Delta \times id)\Delta$. \item $p_{2}\mu_{\omega }(\mu_{\omega}\times id)\overline{\Delta}\simeq p_{2}\mu_{\omega }(id \times\mu_{\omega})\overline{\Delta}$\\ if and only if $(X_{2},\mu_{2})$ is power associative. \end{enumerate} The statement $(1)$ is trivial. Since the map $p_{2}$ is an H-map, the assertion $(2)$ follows. } \end{proof}
The same argument as in the proof of Proposition \ref{prop:3.4} works to prove the following propositions. The details are left to the reader.
\begin{prop}\label{prop:3.15} An H-space $(X_{1}\times X_{2},\mu_{\omega})$ is Moufang if and only if $(X_{2},\mu_{2})$ is Moufang and $[\mu_{1}(\omega q\times \omega q)\theta _{2}][\omega q(\mu_{2}\times \mu_{2})\theta _{2}]=[\mu_{1}(\omega q\times \omega q)(id_{2} \times (\mu_{2} \times id_{2} \times id_{2} )\Delta '_{2} )][\omega q (\mu_{2}(id_{2}\times \mu_{2})\times id_{2})\theta _{2}]\in [X_{2}^{3},X_{1}]$, where $\Delta'_{2}$ is the diagonal map of $X_{2}\times X_{2}$ and $\theta _{2}:X_{2}^{3}\longrightarrow X_{2}^{4}$ is defined by $\theta_{2} (x,y,z)=(x,y,z,x)$. \end{prop}
\begin{prop}\label{prop:3.17} $(X_{1}\times X_{2},\mu_{\omega})$ is symmetrically associative if and only if $(X_{2},\mu_{2})$ is symmetrically associative and $\mu_{1}(\omega q\times \omega q)(t \times (id_{2}\times \mu_{2}t)(\Delta _{2}\times id_{2}))\Delta'_{2}\simeq \mu_{1}(\omega q\times \omega q)(id_{2}\times id_{2} \times (\mu_{2}\times id_{2})(id_{2}\times t)(\Delta_{2}\times id_{2}))\Delta'_{2}.$ \end{prop}
Next, we consider the following subsets of the homotopy set $[X_{2}\wedge X_{2},X_{1}]$: \begin{align*} G_{{\rm inv}} &= \{ [\omega] \in [X_{2}\wedge X_{2},X_{1}] \ \mid \ \omega q(l_{2}\times id_{2})\Delta _{2}\simeq \omega q(id_{2} \times r_{2})\Delta _{2} \}, \\ G_{{\rm p.a}} &= \{ [\omega] \in [X_{2}\wedge X_{2},X_{1}] \ \mid \ \omega q(\mu_{2}\times id_{2})\overline{\Delta}_{2} \simeq \omega q(id_{2} \times \mu_{2})\overline{\Delta}_{2} \}, \\ G_{{\rm Mo}} &= \{ [\omega] \in [X_{2}\wedge X_{2},X_{1}] \ \mid \ \Gamma _{{\rm Mo}}(\omega )\simeq \Gamma '_{{\rm Mo}}(\omega ) \} \ \text{and} \\ G_{{\rm s.a}} &= \{ [\omega] \in [X_{2}\wedge X_{2},X_{1}] \ \mid \ \mu _{1}(\omega \times \omega )\Gamma _{{\rm s.a}}\simeq \mu _{1}(\omega \times \omega )\Gamma ' _{{\rm s.a}}\}, \end{align*} where \begin{align*} &\Gamma _{{\rm Mo}}(\omega )=\mu_{1}(\mu_{1}(\omega q\times \omega q)\theta _{2}\times \omega q(\mu_{2}\times \mu_{2})\theta _{2})\Delta_{X_{2}^{3}},\\ &\Gamma '_{{\rm Mo}}(\omega )=\mu_{1}(\mu_{1}(\omega q\times \omega q)(id_{2} \times (\mu_{2} \times id_{X_{2}^{2}} )\Delta '_{2} )\times \omega q (\mu_{2}(id_{2}\times \mu_{2})\times id_{2})\theta _{2})\Delta_{X_{2}^{3}}, \\ &\Gamma _{{\rm s.a}}=(q\times q)(t \times (id_{2} \times \mu_{2}t)(\Delta _{2}\times id_{2}))\Delta'_{2}\ \text{and}\\ &\Gamma ' _{{\rm s.a}}=(q\times q)(id_{2}\times id_{2} \times (\mu_{2}\times id_{2})(id_{2}\times t)(\Delta_{2}\times id))\Delta'_{2}. \end{align*}
\begin{lem} The sets $G_{{\rm inv}}$, $G_{{\rm p.a}}$, $G_{{\rm Mo}}$ and $G_{{\rm s.a}}$ are subgroups of $[X_{2}\wedge X_{2},X_{1}]$. \end{lem}
\begin{proof}{\rm For any $[\omega_{1}],[\omega_{2}]\in G_{{\rm inv}}$, we have \begin{align*} [\mu_{1}(\omega_{1}\times \omega_{2})\Delta''_{2} q(l_{2}\times id_{2})\Delta _{2}]
&= [\omega_{1}q(l_{2}\times id_{2})\Delta_{2}][\omega_{2}q(l_{2}\times id_{2})\Delta_{2}] \\
&= [\omega_{1}q(id_{2}\times r_{2})\Delta_{2}][\omega_{2}q(id_{2}\times r_{2})\Delta_{2}] \\
&= [\mu_{1}(\omega_{1}\times \omega_{2})\Delta''_{2} q(id_{2}\times r_{2})\Delta _{2}] \end{align*} where $\Delta''_{2}$ is the diagonal map of $X_{2}\wedge X_{2}$. Hence $[\omega_{1}][\omega_{2}]\in G_{{\rm inv}}$. Let $l_{1}$ be a left inverse of $(X_{1},\mu_{1})$. Then we have \begin{eqnarray*} [ l_{1}\omega_{1}q(l_{2}\times id_{2})\Delta _{2}] = [l_{1}\omega_{1}q(id_{2}\times r_{2})\Delta _{2}]. \end{eqnarray*} Hence we obtain $[\omega_{1}]^{-1}=[l_{1}\omega_{1}]\in G_{{\rm inv}}$. Similarly, we can show that $G_{{\rm p.a}}$, $G_{{\rm Mo}}$ and $G_{{\rm s.a}}$ are subgroups of $[X_{2}\wedge X_{2},X_{1}]$.} \end{proof}
Let $\Phi : [X_{2}\wedge X_{2},X_{1}]/\mathrm{Im} HD \to CH(X_{1},\mu_{1};X_{2},\mu_{2})$ be the bijection mentioned in Theorem \ref{thm:Kac}.
\begin{lem}\label{lem:4.2} If $(X_{2},\mu _2)$ is inversive $($respectively, power associative, Moufang and symmetrically associative$)$, then $\mathrm{Im} HD \subset G_{{\rm inv}}$ $($respectively, $\mathrm{Im} HD \subset G_{{\rm p.a}}$, $\mathrm{Im} HD \subset G_{{\rm Mo}}$ and $\mathrm{Im} HD \subset G_{{\rm s.a}})$. \end{lem}
\begin{proof}{\rm For any $[\omega]\in \mathrm{Im} HD$, $\Phi ([\omega])=[(X_{1}\times X_{2},\mu_{1}\times \mu_{2}),i_{1},p_{2}]$. Since $(X_{2},\mu_{2})$ is inversive, $(X_{1}\times X_{2},\mu_{1}\times \mu_{2})$ is also inversive and $\omega q(l_{2}\times id_{2})\Delta _{2}\simeq \omega q(id_{2} \times r_{2})\Delta _{2}$ by Proposition \ref{prop:3.1}. Hence $[\omega]\in G_{{\rm inv}}$. A similar argument shows the other cases. } \end{proof}
\begin{thm}\label{prop:4.1.3} The following statements hold. \ \\[-1.2em] \begin{enumerate} \item If $(X_{2},\mu _2)$ is inversive, then $\Phi _{{\rm inv}}:G_{{\rm inv}}/\mathrm{Im} HD\to \{ [(X,\mu),f_{1},f_{2}]\in CH(X_{1},\mu_{1};X_{2},\mu_{2}) \mid (X,\mu)$ is inversive $\}$, which is the restricted homomorphism of $\Phi $ to $G_{{\rm inv}}/\mathrm{Im} HD $, is bijective. \item If $(X_{2},\mu _2)$ is power associative, then $\Phi _{{\rm p.a}}:G_{{\rm p.a}}/\mathrm{Im} HD\to \{ [(X,\mu),f_{1},f_{2}]\in CH(X_{1},\mu_{1};X_{2},\mu_{2}) \mid (X,\mu)$ is power associative $\}$, which is the restricted homomorphism of $\Phi $ to $G_{{\rm p.a}}/\mathrm{Im} HD $, is bijective. \item If $(X_{2},\mu _2)$ is Moufang, then $\Phi _{{\rm Mo}}:G_{{\rm Mo}}/\mathrm{Im} HD\to \{ [(X,\mu),f_{1},f_{2}]\in CH(X_{1},\mu_{1};X_{2},\mu_{2}) \mid (X,\mu)$ is Moufang $\}$, which is the restricted homomorphism of $\Phi $ to $G_{{\rm Mo}}/\mathrm{Im} HD $, is bijective. \item If $(X_{2},\mu _2)$ is symmetrically associative, then $\Phi _{{\rm s.a}}:G_{{\rm s.a}}/\mathrm{Im} HD\to \{ [(X,\mu),f_{1},f_{2}]\in CH(X_{1},\mu_{1};X_{2},\mu_{2}) \mid (X,\mu)$ is symmetrically associative $\}$, which is the restricted homomorphism of $\Phi $ to $G_{{\rm s.a}}/\mathrm{Im} HD $, is bijective. \end{enumerate} \end{thm}
\begin{proof}{\rm By Proposition \ref{prop:3.1}, we see that the map $\Phi_{{\rm inv}}$ is well defined; since it is a restriction of the bijection $\Phi $, it is injective. Let $(X,\mu)$ be an inversive central H-extension of $(X_{1},\mu_{1})$ by $(X_{2},\mu_{2})$. Theorem \ref{thm:Kac} yields that there exists a map $\omega :X_{2}\wedge X_{2}\to X_{1}$ such that $[(X,\mu),f_{1},f_{2}]=[(X_{1}\times X_{2},\mu_{\omega}),i_{1},p_{2}]$. Since $(X,\mu)$ is inversive, $(X_{1}\times X_{2},\mu_{\omega})$ is also inversive. Hence, $\omega q(l_{2}\times id_{2})\Delta _{2}\simeq \omega q(id_{2} \times r_{2})\Delta _{2}$ by Proposition \ref{prop:3.1}. Therefore $\Phi _{{\rm inv}}([\omega]) = [(X,\mu),f_{1},f_{2}]$ and $\Phi_{{\rm inv}}$ is surjective. In the same way, we can show that $\Phi _{{\rm p.a}}$, $\Phi _{{\rm Mo}}$ and $\Phi _{{\rm s.a}}$ are bijective. } \end{proof}
We denote $G_{{\rm inv}}/\mathrm{Im} HD$, $G_{{\rm p.a}}/\mathrm{Im} HD$, $G_{{\rm Mo}}/\mathrm{Im} HD$ and $G_{{\rm s.a}}/\mathrm{Im} HD$ by $S_{{\rm inv}}$, $S_{{\rm p.a}}$, $S_{{\rm Mo}}$ and $S_{{\rm s.a}}$, respectively, and call them {\it the moduli subspaces} of the H-extensions associated with the corresponding properties.
\section{The moduli spaces of central H-extensions of rational H-spaces}
In this section, we will investigate the moduli set $[X_{2}\wedge X_{2},X_{1}]/\mathrm{Im} HD$ and its subsets $S_{{\rm inv}}$, $S_{{\rm p.a}}$, $S_{{\rm Mo}}$ and $S_{{\rm s.a}}$ in the rational case. If $(X,\mu)$ is a ${\mathbb Q}$-local, simply-connected H-space and the homotopy groups $\pi _{*}(X)$ are of finite type, then $H^{*}(X;{\mathbb Q})\cong \Lambda (x_{1},x_{2},\cdots )$ as an algebra, where $\Lambda (x_{1},x_{2},\cdots )$ denotes the free commutative algebra on generators $x_{1},x_{2},\cdots $, by Hopf's Theorem \cite[p.286]{Spa1966}.
\begin{thm}{\rm \cite[Proposition 1]{Sch1984}} \label{thm:Sch} Let $(X,\mu)$ be a ${\mathbb Q}$-local, simply-connected H-space whose homotopy groups $\pi _{*}(X)$ are of finite type, and let $Y$ be a space. Then the canonical map \begin{eqnarray*} [Y,X]\longrightarrow \mathrm{Hom}_{{\rm Alg}}(H^{*}(X;{\mathbb Q} ),H^{*}(Y;{\mathbb Q} )), \ f\longmapsto H^{*}(f) \end{eqnarray*} is bijective. \end{thm}
According to Arkowitz and Lupton \cite{AL1990}, we give $\mathrm{Hom}_{{\rm Alg}}(H^{*}(X;{\mathbb Q} ),H^{*}(Y;{\mathbb Q} ))$ an algebraic loop structure so that the above canonical map is an algebraic loop homomorphism.
\begin{defn}{\rm Let $M=\Lambda (x_{i};i\in J )$ be a free commutative algebra generated by elements $x_{i}$ for $i\in J$. A homomorphism $\nu :M\to M\otimes M$ is called} a diagonal {\rm if the following diagram is commutative, where $\varepsilon :M\to {\mathbb Q}$ is the augmentation:} \[\xymatrix{ & M \ar[d]^{\nu } \ar[ld]_{\cong } \ar[rd]^{\cong }& \\ {\mathbb Q} \otimes M & M\otimes M \ar[l]^{\varepsilon \otimes id} \ar[r]_{id\otimes \varepsilon } &M\otimes {\mathbb Q}. }\] \end{defn}
Let $(X,\mu)$ be an H-space. Then $H^{*}(\mu):H^{*}(X;{\mathbb Q})\to H^{*}(X;{\mathbb Q})\otimes H^{*}(X;{\mathbb Q})$ is a diagonal.
\begin{thm}{\rm \cite[Lemma 3.1]{AL1990}} \label{thm:AL} Let $M=\Lambda (x_{1},x_{2},\cdots )$ be a free commutative algebra with the diagonal map $\nu :M\to M\otimes M$, and $A$ a graded algebra. We define the product of $\mathrm{Hom}_{{\rm Alg}}(M,A)$ by \begin{eqnarray*} \alpha \cdot \beta = m(\alpha \otimes \beta )\nu , \end{eqnarray*} where $m$ is the product of $A$. Then $\mathrm{Hom}_{{\rm Alg}}(M,A)$ is an algebraic loop endowed with the product. \end{thm}
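The two-sided unit $e$ of this algebraic loop is the composite $M\stackrel{\varepsilon }{\to }{\mathbb Q}\to A$, that is, the algebra map sending each generator $x_{i}$ to $0$; indeed, $\alpha \cdot e\,(x_{i})=m(\alpha \otimes e)\nu (x_{i})=\alpha (x_{i})$ because $e$ annihilates elements of positive degree, and similarly $e\cdot \alpha =\alpha $.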
It follows from the proof of Theorem \ref{thm:AL} that if elements $\gamma _{1}$ and $\gamma _{2}\in \mathrm{Hom}_{{\rm Alg}}(M,A)$ satisfy the conditions $\gamma _{1}\cdot \alpha = \beta$ and $\alpha \cdot \gamma _{2} = \beta $, then \begin{align*} &\gamma _{1}(x_{i})=\beta (x_{i})-\alpha (x_{i})-m(\gamma _{1}\otimes \alpha )P(x_{i}) \ \text{and}\\ &\gamma _{2}(x_{i})=\beta (x_{i})-\alpha (x_{i})-m(\alpha \otimes \gamma _{2} )P(x_{i}), \end{align*} where $P(x_{i})=\nu (x_{i})-x_{i}\otimes 1 -1\otimes x_{i}.$
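In particular, if each $x_{i}$ is primitive, then $P(x_{i})=0$ and these formulas reduce to $\gamma _{1}(x_{i})=\gamma _{2}(x_{i})=\beta (x_{i})-\alpha (x_{i})$; this is the situation encountered below, where the generators of $H^{*}(X_{1};{\mathbb Q})$ are primitive.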
\begin{defn} A left inverse $\lambda$ $(${\rm respectively}, a right inverse $\rho )$ {\rm of an algebraic loop $\mathrm{Hom}_{{\rm Alg}}(M,A)$ is an element of $\mathrm{Hom}_{{\rm Alg}}(M,A)$ such that $\lambda \cdot id = e$ $($respectively, $id \cdot \rho=e )$, where $e$ is the unit of $\mathrm{Hom}_{{\rm Alg}}(M,A)$. } \end{defn}
Let $A$ be a graded algebra and $M=\Lambda (x_{i};i\in J )$ a free commutative Hopf algebra for which each $x_{i}$ is primitive $($that is, $\nu (x_{i})=x_{i}\otimes 1+1\otimes x_{i})$. We consider the canonical isomorphism \begin{eqnarray*} \Psi : \mathrm{Hom}_{{\rm Alg}}(M, A) \stackrel{\cong }{\rightarrow } \mathrm{Hom}_{{\mathbb Q} }({\mathbb Q} \langle x_{i};i\in J \rangle , A) \end{eqnarray*} which is defined by \begin{eqnarray*} \Psi (\alpha )(x_{i})=\alpha (x_{i}) \ {\rm for} \ \alpha \in \mathrm{Hom}_{{\rm Alg}}(M, A), \end{eqnarray*} where ${\mathbb Q} \langle x_{i};i\in J \rangle$ denotes the graded ${\mathbb Q}$-vector space with basis $x_{i}$ for $i\in J$. Let $\mathrm{Hom}_{{\mathbb Q} }({\mathbb Q} \langle x_{i};i\in J \rangle , A)$ denote the set of ${\mathbb Q}$-linear maps from ${\mathbb Q} \langle x_{i};i\in J \rangle$ to $A$. Since $\mathrm{Hom}_{{\mathbb Q} }({\mathbb Q} \langle x_{i};i\in J \rangle , A)$ is a ${\mathbb Q}$-vector space with respect to the canonical sum and the canonical scalar multiple, we can give the ${\mathbb Q}$-vector space structure to $\mathrm{Hom}_{{\rm Alg}}(M,A)$ via $\Psi $. We regard $\mathrm{Hom}_{{\mathbb Q} }({\mathbb Q} \langle x_{i};i\in J \rangle , A)$ as an algebraic loop with the canonical sum. Then the isomorphism $\Psi $ is a morphism of algebraic loops. In fact, \begin{eqnarray*} \alpha \cdot \beta (x_{i}) = m(\alpha \otimes \beta )\nu (x_{i}) = \alpha (x_{i}) + \beta (x_{i}) = (\alpha +\beta )(x_{i}). \end{eqnarray*}
Let $(X_{1},\mu_{1})$ be a ${\mathbb Q}$-local, simply-connected, homotopy associative and homotopy commutative H-space whose homotopy groups are of finite type, let $(X_{2},\mu_{2})$ be an H-space, and let $H^{*}(X_{1};{\mathbb Q})=\Lambda (x_{1},x_{2},\cdots )$. Since $(X_{1},\mu_{1})$ is a homotopy associative and homotopy commutative H-space, we see that each $x_{i}$ is primitive; see \cite[Corollary 4.18]{MM1965}.
\begin{lem}\label{lem:3.3.1} Let $X$ and $Y$ be spaces. Then \begin{eqnarray*} H^{n}(X\wedge Y;{\mathbb Q}) \cong \{ x\otimes y \in H^{n}(X\times Y;{\mathbb Q}) \mid {\rm deg} \, x >0 , {\rm deg} \, y > 0 \} \ (n\geq 1). \end{eqnarray*} \end{lem}
\begin{proof}{\rm For any $n\geq 1$, we consider the following commutative diagram: \[\xymatrix{ [X\wedge Y,K({\mathbb Q} ,n )] \ar[r]^{q^{*}} \ar[d]^{\cong } & [X\times Y, K({\mathbb Q} ,n)] \ar[r]^{i^{*}} \ar[d]^{\cong } & [X\vee Y,K({\mathbb Q} ,n)] \ar[d]^{\cong } \\ H^{n}(X\wedge Y;{\mathbb Q}) \ar[r]^{H^{n}(q)} & H^{n}(X\times Y;{\mathbb Q} ) \ar[r]^{H^{n}(i)} & H^{n}(X\vee Y ;{\mathbb Q} ), }\] where $K({\mathbb Q},n)$ is the Eilenberg-MacLane space. By Lemma \ref{lem:2.2.2}, $H^{n}(q)$ is injective and we have $$ H^{n}(X\wedge Y;{\mathbb Q})\cong \mathrm{Ker} H^{n}(i) = \{ x\otimes y \in H^{n}(X\times Y;{\mathbb Q}) \mid {\rm deg} \, x >0 , {\rm deg} \, y > 0 \}. $$ }\end{proof}
Define the map \begin{eqnarray*} \overline{HD} : \mathrm{Hom}_{{\mathbb Q}}({\mathbb Q} \langle x_{1} ,x_{2},\cdots \rangle , H^{*}(X_{2} ;{\mathbb Q} ))\rightarrow \mathrm{Hom}_{{\mathbb Q} }({\mathbb Q} \langle x_{1} ,x_{2},\cdots \rangle , H^{*}(X_{2}\wedge X_{2} ;{\mathbb Q} )) \end{eqnarray*} by \begin{eqnarray*} \overline{HD}(\alpha )(x_{i})= P_{2}(\alpha (x_{i})) \ {\rm for} \ \alpha \in \mathrm{Hom}_{{\mathbb Q}}({\mathbb Q} \langle x_{1} ,x_{2},\cdots \rangle , H^{*}(X_{2} ;{\mathbb Q} )), \end{eqnarray*} where $P_{2}(x) = H^{*}(\mu_{2})(x) -x\otimes 1 - 1\otimes x \ (x\in H^{*}(X_{2};{\mathbb Q}))$. Then we see that $\overline{HD}$ is a ${\mathbb Q}$-linear map. \begin{prop}\label{prop:4.4} The following diagram of algebraic loops is commutative$:$ \[\xymatrix{ \mathrm{Hom}_{{\mathbb Q}}({\mathbb Q} \langle x_{1},x_{2},\cdots \rangle , H^{*}(X_{2} ;{\mathbb Q} )) \ar[r]^{\hspace{-1.5em}\overline{HD}} & \mathrm{Hom}_{{\mathbb Q} }({\mathbb Q} \langle x_{1},x_{2}, \cdots \rangle , H^{*}(X_{2}\wedge X_{2} ;{\mathbb Q} ))\\ [X_{2},X_{1}] \ar[r]_{HD} \ar[u]^{\Psi H^{*}}_{\cong } & [X_{2}\wedge X_{2},X_{1}] \ar[u]_{\Psi H^{*}}^{\cong }.\\ }\] \end{prop}
\begin{proof}{\rm By using the canonical isomorphism $\Psi $, we see that for any $f \in [X_{2},X_{1}]$ there exists a unique element $\overline{D}(f ) \in \mathrm{Hom}_{{\mathbb Q}}({\mathbb Q}\langle x_{1} ,x_{2},\cdots \rangle , H^{*}(X_{2}\times X_{2} ;{\mathbb Q} ))$ such that $\overline{D}(f ) + (H^{*}(f) \otimes H^{*}(f) )\Psi H^{*}(\mu_{1}) = H^{*}(\mu_{2}) \Psi H^{*}(f) $. Therefore, it follows that $\overline{D}(f)= \Psi H^{*}(D(f\mu_{2}, \mu_{1}(f\times f)))$. Observe that each $x_{i}$ is primitive. In view of the proof of Theorem \ref{thm:AL}, we have $$ \overline{D}(f)(x_{i}) = P_{2}(\Psi H^{*}(f)(x_{i})). $$ Lemma \ref{lem:3.3.1} implies that $\overline{D}(f)(x_{i})=H^{*}(q)\overline{HD}(\Psi H^{*}(f) )(x_{i})$. Hence, by the definition of an H-deviation, we obtain $$ H^{*}(q)\Psi H^{*}(HD(f)) =\Psi H^{*}(D(f\mu_{2},\mu_{1}(f\times f))) = H^{*}(q)\overline{HD}(\Psi H^{*}(f) ). $$ Since $H^{*}(q)$ is injective, it follows from Lemma \ref{lem:2.2.2} and Theorem \ref{thm:Sch} that $\Psi H^{*}(HD(f))=\overline{HD}(\Psi H^{*}(f))$.} \end{proof}
Put $V=\mathrm{Hom}_{{\mathbb Q} }({\mathbb Q} \langle x_{1} ,x_{2},\cdots \rangle , H^{*}(X_{2}\wedge X_{2} ;{\mathbb Q} ))$ and set \begin{align*}
V_{{\rm inv}} &= \{ \alpha \in V \ | \ m_{2}(\lambda _{2}\otimes id)H^{*}(q)\alpha = m_{2}(id\otimes \rho _{2})H^{*}(q)\alpha \},\\
V_{{\rm p.a}} &= \{ \alpha \in V \ | \ \overline{m}_{2}(H^{*}(\mu_{2})\otimes id)H^{*}(q)\alpha = \overline{m}_{2}(id\otimes H^{*}(\mu_{2}))H^{*}(q)\alpha \},\\
V_{{\rm Mo}} &= \{ \alpha \in V \ | \ \overline{\Gamma} _{{\rm Mo}}(\alpha )=\overline{\Gamma }' _{{\rm Mo}}(\alpha ) \},\\
V_{{\rm s.a}} &= \{ \alpha \in V \ | \ H^{*}(\Gamma _{{\rm s.a}})(\alpha \otimes \alpha) H^{*}(\mu_{1})=H^{*}(\Gamma' _{{\rm s.a}})(\alpha \otimes \alpha) H^{*}(\mu_{1}) \}, \end{align*} where $m_{2}$ is the product of $H^{*}(X_{2};{\mathbb Q})$, $\overline{m}_{2}=m_{2}(m_{2}\otimes id)$, and $\lambda _{2}, \rho _{2}$ are a left inverse and a right inverse of $\mathrm{Hom}_{{\rm Alg}}(H^{*}(X_{2};{\mathbb Q}),H^{*}(X_{2};{\mathbb Q}))$, respectively. Let $$ \overline{\Gamma} _{{\rm Mo}}(\alpha )=H^{*}(\Gamma _{{\rm Mo}}(\omega )), \ \overline{\Gamma}' _{{\rm Mo}}(\alpha )=H^{*}(\Gamma' _{{\rm Mo}}(\omega )) $$ where $\omega :X_{2}\wedge X_{2}\to X_{1}$ satisfies $H^{*}(\omega)=\alpha $.
Then, we see that $V_{{\rm inv}}$, $V_{{\rm p.a}}$, $V_{{\rm Mo}}$ and $V_{{\rm s.a}}$ are subspaces of $V$.
\begin{thm}\label{lem:4.4} The following statements hold. \ \\[-1.2em] \begin{enumerate} \item If $(X_{2},\mu _2)$ is inversive, then $\mathrm{Im} \overline{HD} \subset V_{{\rm inv}}$ and the canonical map $S_{{\rm inv}} \to V_{{\rm inv}}/ \mathrm{Im} \overline{HD}$ is an isomorphism of algebraic loops. \item If $(X_{2},\mu _2)$ is power associative, then $\mathrm{Im} \overline{HD} \subset V_{{\rm p.a}}$ and the canonical map $S_{{\rm p.a}} \to V_{{\rm p.a}}/ \mathrm{Im} \overline{HD}$ is an isomorphism of algebraic loops. \item If $(X_{2},\mu _2)$ is Moufang, then $\mathrm{Im} \overline{HD} \subset V_{{\rm Mo}}$ and the canonical map $S_{{\rm Mo}} \to V_{{\rm Mo}}/ \mathrm{Im} \overline{HD}$ is an isomorphism of algebraic loops. \item If $(X_{2},\mu _2)$ is symmetrically associative, then $\mathrm{Im} \overline{HD} \subset V_{{\rm s.a}}$ and the canonical map $S_{{\rm s.a}} \to V_{{\rm s.a}}/ \mathrm{Im} \overline{HD}$ is an isomorphism of algebraic loops. \end{enumerate} \end{thm}
\begin{proof}{\rm The assertions follow from Lemma \ref{lem:4.2} and Proposition \ref{prop:4.4}. } \end{proof}
\begin{lem}\label{lem:3.3.6} Let $V^{k}= \mathrm{Hom}_{{\mathbb Q}}({\mathbb Q}\langle x_{k} \rangle , H^{*}(X_{2}\wedge X_{2};{\mathbb Q}))$ for $k=1,2,\cdots$. Then the ${\mathbb Q}$-linear map \begin{eqnarray*} \xi :\displaystyle\bigoplus _{k\geq 1}V^{k} \longrightarrow V ; \ \ \xi (\alpha _{1}, \alpha _{2},\cdots ) (x_{k})=\alpha _{k}(x_{k}) \ (\alpha _{k}\in V^{k}) \end{eqnarray*} is an isomorphism. Moreover, let \begin{align*} & V_{{\rm inv}}^{k} = \{ \alpha \in V^{k} \mid m_{2}(\lambda _{2}\otimes id)H^{*}(q)\alpha = m_{2}(id\otimes \rho _{2})H^{*}(q)\alpha \} ,\\ & V_{{\rm p.a}}^{k} = \{ \alpha \in V^{k} \mid \overline{m}_{2}(H^{*}(\mu_{2})\otimes id)H^{*}(q)\alpha = \overline{m}_{2}(id\otimes H^{*}(\mu_{2}))H^{*}(q)\alpha \} ,\\
&V_{{\rm Mo}}^{k} = \{ \alpha \in V^{k} \ | \ \overline{\Gamma} _{{\rm Mo}}(\alpha )=\overline{\Gamma }' _{{\rm Mo}}(\alpha ) \} \ {\rm and}\\
&V_{{\rm s.a}}^{k} = \{ \alpha \in V^{k} \ | \ H^{*}(\Gamma _{{\rm s.a}})(\alpha \otimes \alpha) H^{*}(\mu_{1})=H^{*}(\Gamma' _{{\rm s.a}})(\alpha \otimes \alpha) H^{*}(\mu_{1})\} . \end{align*} We define a ${\mathbb Q}$-linear map \begin{eqnarray*} \overline{HD}^{ \hspace{0.15em} k} : \mathrm{Hom}_{{\mathbb Q}}({\mathbb Q} \langle x_{k} \rangle , H^{*}(X_{2} ;{\mathbb Q} ))\rightarrow V^{k} \end{eqnarray*} by \begin{eqnarray*} \overline{HD}^{\hspace{0.15em}k}(\alpha )(x_{k})= P_{2}(\alpha (x_{k})) , \ \alpha \in \mathrm{Hom}_{{\mathbb Q}}({\mathbb Q} \langle x_{k} \rangle , H^{*}(X_{2} ;{\mathbb Q} )). \end{eqnarray*} Then the restrictions \begin{align*} &\displaystyle\bigoplus _{k\geq 1}V_{{\rm inv}}^{k} \longrightarrow V_{{\rm inv}} & &\displaystyle\bigoplus _{k\geq 1}V_{{\rm p.a}}^{k} \longrightarrow V_{{\rm p.a}} &\displaystyle\bigoplus _{k\geq 1}V_{{\rm Mo}}^{k} \longrightarrow V_{{\rm Mo}} \\ &\displaystyle\bigoplus _{k\geq 1}V_{{\rm s.a}}^{k} \longrightarrow V_{{\rm s.a}} & &\displaystyle\bigoplus _{k\geq 1}\mathrm{Im} \overline{HD}^{ \hspace{0.15em} k} \longrightarrow \mathrm{Im} \overline{HD} & \end{align*} of $\xi $ to $\displaystyle\bigoplus _{k\geq 1}V_{{\rm inv}}^{k}$, $\displaystyle\bigoplus _{k\geq 1}V_{{\rm p.a}}^{k}$, $\displaystyle\bigoplus _{k\geq 1}V_{{\rm Mo}}^{k}$, $\displaystyle\bigoplus _{k\geq 1}V_{{\rm s.a}}^{k}$ and $\displaystyle\bigoplus _{k\geq 1}\mathrm{Im} \overline{HD}^{ \hspace{0.15em} k}$ are all isomorphisms. \end{lem}
\begin{proof} {\rm The ${\mathbb Q}$-linear map \begin{eqnarray*} \xi ' :V\longrightarrow \displaystyle\bigoplus _{k\geq 1}V^{k}; \ \ \xi '(\alpha )=(\alpha i_{1},\alpha i_{2},\cdots ) \end{eqnarray*} is an inverse map of $\xi $, where $i_{k}:{\mathbb Q}\langle x_{k} \rangle \longrightarrow {\mathbb Q} \langle x_{1},x_{2},\cdots \rangle$ is the inclusion. } \end{proof}
By Lemma \ref{lem:3.3.6}, for the study of $V_{{\rm inv}}/\mathrm{Im} \overline{HD}$, $V_{{\rm p.a}}/\mathrm{Im} \overline{HD}$, $V_{{\rm Mo}}/\mathrm{Im} \overline{HD}$ and $V_{{\rm s.a}}/\mathrm{Im} \overline{HD}$, it is enough to investigate these vector spaces in the case where $H^{*}(X_{1};{\mathbb Q})=\Lambda (x)$. In what follows, we give some examples of $V$, $V_{{\rm inv}}$, $V_{{\rm p.a}}$, $V_{{\rm Mo}}$, $V_{{\rm s.a}}$ and $\mathrm{Im} \overline{HD}$.
\begin{ex}\label{ex:4.9} {\rm Let $X_{2}$ be a rational H-space such that $H^{*}(X_{2};{\mathbb Q})=\Lambda (y)$ and that ${\rm deg} \, y$ is odd. Then $X_{2}$ is a homotopy associative H-space for any multiplication. Then we have $V=V_{{\rm inv}}=V_{{\rm p.a}}=V_{{\rm Mo}}=V_{{\rm s.a}}$ and $\mathrm{Im} \overline{HD}=\{ 0 \}$. } \end{ex}
\begin{ex}\label{ex:3.3.8}{\rm Let $X_{2}$ be a rational H-space such that $H^{*}(X_{2};{\mathbb Q})=\Lambda (y)$ and that ${\rm deg} \, y$ is even. Then $X_{2}$ is a homotopy associative H-space for any multiplication. One has the following:\\ 1. If ${\rm deg} \, x=2{\rm deg} \, y$, then $$V=V_{{\rm inv}}=V_{{\rm p.a}}=V_{{\rm Mo}}=V_{{\rm s.a}}=\mathrm{Im} \overline{HD}\cong {\mathbb Q}.$$ 2. If ${\rm deg} \, x=3{\rm deg} \, y$, then $$V\cong {\mathbb Q}^{2} \ \text{and} \ V_{{\rm inv}}=V_{{\rm p.a}}=V_{{\rm Mo}}=V_{{\rm s.a}} = \mathrm{Im} \overline{HD} \cong \{(r_{1},r_{2})\in {\mathbb Q}^{2} \mid r_{1}=r_{2}\}.$$ 3. If ${\rm deg} \, x = m{\rm deg} \, y$ and $m\geq 4$, then $V\cong {\mathbb Q} ^{m-1},$\\ \hspace{1em} $V_{{\rm inv}}$ $\cong \left\{ \begin{array}{l} V \ (m:{\rm even})\\ \Bigl\{ (r_{1},\cdots ,r_{m-1})\in {\mathbb Q}^{m-1} \mid \displaystyle\sum^{m-1}_{j=1}(-1)^{j+1}r_{j} =0\Bigr\} \ (m:{\rm odd}) \end{array} \right. $ \\ \hspace{1em}$V_{{\rm p.a}}\cong \Bigl\{ (r_{1},\cdots ,r_{m-1})\in {\mathbb Q}^{m-1} \mid \displaystyle\sum^{m-1}_{j=1}(2^{j}-2^{m-j})r_{j} =0\Bigr\}$\\ \hspace{1em}$V_{{\rm s.a}}$ $\cong \left\{ \begin{array}{l} \{ (r_{1},\cdots ,r_{m-1})\in {\mathbb Q}^{m-1} \mid r_{l}=r_{m-l}, \ l=1,\cdots , \frac{m-2}{2}\} \ (m:{\rm even})\\ \{ (r_{1},\cdots ,r_{m-1})\in {\mathbb Q}^{m-1} \mid r_{l}=r_{m-l}, \ l=1,\cdots , \frac{m-1}{2}\} \ (m:{\rm odd}) \end{array} \right. $\\ \hspace{1em}$\mathrm{Im} \overline{HD}\cong \{ (r_{1},\cdots ,r_{m-1})\in {\mathbb Q}^{m-1} \mid$\\ \hspace{10em} $(m-1)!r_{1}=\cdots =i!(m-i)!r_{i}=\cdots = (m-1)!r_{m-1} \}$ \\ 4. In the other cases, $V=V_{{\rm inv}}=V_{{\rm p.a}}=V_{{\rm Mo}}=V_{{\rm s.a}}=\mathrm{Im} \overline{HD}=\{ 0 \}$. } \end{ex} \begin{proof} {\rm In the statement 2, we choose the basis $\sigma _{1}$ and $\sigma _{2}$ of $V$ defined by \begin{eqnarray*} \sigma _{1}(x)= y\otimes y^{2} \ \text{and} \ \sigma _{2}(x)= y^{2}\otimes y. \end{eqnarray*} We define the isomorphism of ${\mathbb Q}$-vector spaces $f:V\to {\mathbb Q}^{2}$ by $f(\sigma _{1})=(1,0)$ and $f(\sigma _{2})=(0,1)$. Then, for any element $\alpha =r_{1}\sigma _{1}+r_{2}\sigma _{2} \ (r_{i}\in {\mathbb Q})$ of $V$, we have \begin{align*} &m_{2}(\lambda _{2}\otimes id)H^{*}(q)\alpha (x)- m_{2}(id\otimes \rho _{2})H^{*}(q)\alpha (x)= (-2r_{1}+2r_{2})y^{3}, \\ &\overline{m}_{2}(H^{*}(\mu_{2})\otimes id)H^{*}(q)\alpha (x)- \overline{m}_{2}(id\otimes H^{*}(\mu_{2}))H^{*}(q)\alpha (x)=(-2r_{1}+2r_{2})y^{3}. \end{align*} The element $\alpha $ is in $\mathrm{Im} \overline{HD}$ if and only if there exists $\beta $ in $\mathrm{Hom}_{{\mathbb Q}}({\mathbb Q} \langle x \rangle , H^{*}(X_{2} ;{\mathbb Q} ))$ such that $\alpha =\overline{HD}(\beta )$. Put $\beta (x)=ry^{3} \ (r\in {\mathbb Q})$, then we have $r_{1}=3r=r_{2}$. Hence, $V_{{\rm p.a}}=V_{{\rm Mo}}=V_{{\rm s.a}} = \mathrm{Im} \overline{HD}$. Therefore statement 2 is shown. The statements 1, 3 and 4 can be proved by similar computations. } \end{proof}
\noindent {\it Proof of Assertion \ref{ex1}.} If $m$ is even, $n=km$ and $k\geq 4$ is even, we have $$ V_{{\rm Mo}}/\mathrm{Im} \overline{HD} \subseteq V_{{\rm s.a}}/\mathrm{Im} \overline{HD} \subsetneq V_{{\rm p.a}}/\mathrm{Im} \overline{HD} \subsetneq V/\mathrm{Im} \overline{HD} =V_{{\rm inv}}/\mathrm{Im} \overline{HD}$$ by Lemma \ref{lem:3.3.6} and Example \ref{ex:3.3.8}. Thus the statement (1) holds. A similar argument shows the other cases.
\qed
\section{Acknowledgment} The author is deeply grateful to Katsuhiko Kuribayashi and Ryo Takahashi who provided helpful comments and suggestions.
\end{document}
\begin{document}
\newtheorem{thm}{Theorem}[subsection] \newtheorem{cor}[thm]{Corollary} \newtheorem{lem}[thm]{Lemma} \newtheorem{prop}[thm]{Proposition} \newtheorem{definition}[thm]{Definition} \newtheorem{rem}[thm]{Remark} \newtheorem{prf}[thm]{Proof}
\title{Golden Binomials and Carlitz Characteristic Polynomials}
\begin{abstract} The golden binomials, introduced in the golden quantum calculus, have an expansion determined by Fibonomial coefficients and a set of simple zeros given by powers of the Golden ratio. We show that these golden binomials are equivalent to the Carlitz characteristic polynomials of certain matrices of binomial coefficients. It is shown that trace invariants for powers of these matrices are determined by Fibonacci divisors, the quantum calculus of which was developed very recently. \end{abstract}
\section{Introduction} The golden quantum calculus, based on the Binet formula for Fibonacci numbers $F_n$ as $q$-numbers, was introduced in \cite{golden}. In this calculus, the
finite-difference $q$-derivative operator is determined by two Golden ratio bases $\varphi$ and $\varphi'$, while the golden binomial expansion is determined by Fibonomial coefficients. The coefficients are expressed in terms of Fibonacci numbers, while the zeros of these binomials are given by powers of the Golden ratio $\varphi$ and $\varphi'$. It was observed that similar polynomials were introduced by Carlitz in 1965 for a different reason, as characteristic polynomials of certain matrices of binomial coefficients \cite{Carlitz}. The goal of the present paper is to show the equivalence of Carlitz characteristic polynomials with golden binomials. In addition, we give a proof and an interpretation of the main formulas for traces of powers of the matrix $A_{n+1}$ in terms of Fibonacci divisors and the corresponding quantum calculus, developed recently in \cite{FNK}.
\section{Golden Binomials}
\subsection{Fibonomials and Golden Pascal Triangle}
The binomial coefficients defined by \begin{equation} { n \brack k}_F= \frac{[n]_{F}!}{[n-k]_{F}! [k]_{F}!}= \frac{F_n!}{F_{n-k}! F_k!}, \label{goldenbinom}\end{equation} with $n$ and $k$ being non-negative integers, $n\geq k$, are called the Fibonomials. Using the addition formula for Fibonacci numbers \cite{golden}, \begin{equation} F_{n+m} = \varphi^n F_m + {\varphi'}^m F_n \end{equation} we have the following expression \begin{equation} F_n=F_{n-k+k}=\left(-\frac{1}{\varphi}\right)^k F_{n-k}+\varphi^{n-k} F_k. \end{equation} By using \begin{equation} \varphi^n = \varphi F_n + F_{n-1}, \,\,\,\,{\varphi'}^n = \varphi' F_n + F_{n-1},\label{phin} \end{equation}
it can be rewritten as follows \begin{eqnarray} F_n &=& F_{n-k-1} F_k + F_{n-k} F_{k+1} \nonumber \\ &=& F_{n-k} F_{k-1}+ F_{n-k+1} F_k. \end{eqnarray} With the above definition (\ref{goldenbinom}), this gives the recursion formula for Fibonomials in two forms, \begin{eqnarray} { n \brack k}_{F}&=& \frac{(-\frac{1}{\varphi})^k [n-1]_{F}!}{[k]_{F}! [n-k-1]_{F}!} + \frac{\varphi^{n-k} [n-1]_{F}!}{[n-k]_{F}![k-1]_{F}!} \nonumber \\ &=& \left(-\frac{1}{\varphi}\right)^k { n-1 \brack k}_{F} + \varphi^{n-k} { n-1 \brack k-1}_{F} \label{goldenpascal1}\\ &=& \varphi^k { n-1 \brack k}_{F} + \left(-\frac{1}{\varphi}\right)^{n-k} { n-1 \brack k-1}_{F} .\label{goldenpascal2} \end{eqnarray} These formulas, for $1\leq k\leq n-1$, determine the Golden Pascal triangle for Fibonomials \cite{golden}.
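As a simple illustration of (\ref{goldenpascal1}), take $n=4$ and $k=2$: the definition (\ref{goldenbinom}) gives $ { 4 \brack 2}_{F}=\frac{F_1 F_2 F_3 F_4}{(F_1 F_2)(F_1 F_2)}=\frac{1\cdot 1\cdot 2\cdot 3}{1\cdot 1}=6$, while the recursion gives $\left(-\frac{1}{\varphi}\right)^{2}{ 3 \brack 2}_{F}+\varphi^{2}{ 3 \brack 1}_{F}=2\left(\varphi^{-2}+\varphi^{2}\right)=2\cdot 3=6$, since ${ 3 \brack 2}_{F}={ 3 \brack 1}_{F}=2$ and $\varphi^{2}+\varphi^{-2}=3$.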
\subsection{Golden Binomial} The Golden Binomial is defined as \cite{golden}, \begin{equation} (x+y)_F^n = (x+\varphi^{n-1} y)(x+\varphi^{n-2} \varphi' y)...(x+ \varphi{\varphi'}^{n-2} y) (x+ {\varphi'}^{n-1} y)\end{equation} or due to $\varphi \varphi' = -1$ it is \begin{equation} (x+y)_F^n = (x+\varphi^{n-1} y)(x-\varphi^{n-3} y)...(x+ (-1)^{n-1}\varphi^{-n+1} y).\end{equation} It has $n$ zeros at powers of the Golden ratio $$\frac{x}{y}=-\varphi^{n-1},\,\,\,\, \frac{x}{y}=-\varphi^{n-2}\varphi',\,\,\,\,...,\frac{x}{y}=-{\varphi'}^{n-1}.$$ For the Golden binomial the following expansion in terms of Fibonomials is valid \cite{golden} \begin{eqnarray} (x+y)_F^n &=&\sum^{n}_{k=0}{ n \brack k}_{F} (-1)^{\frac{k(k-1)}{2}} x^{n-k} y^k \nonumber \\ &=& \sum^{n}_{k=0} \frac{F_n!}{F_{n-k}! F_k!}(-1)^{\frac{k(k-1)}{2}} x^{n-k} y^k. \label{goldenbinomexpansion}\end{eqnarray} The proof is easy by induction, using the recursion formulas (\ref{goldenpascal1}) and (\ref{goldenpascal2}). In terms of Golden binomials we introduce the Golden polynomials \begin{equation} P_n (x) = \frac{(x-a)_F^n}{F_n!},\end{equation} where $n=1,2,...$, and $P_0(x) =1$. These polynomials satisfy the relations \begin{equation} D_F^x P_n(x) = P_{n-1}(x), \end{equation}
where the Golden derivative is defined as
\begin{equation} D_F^x P_n(x) = \frac{P_n (\varphi x) - P_n (\varphi' x) }{(\varphi - \varphi') x}.\end{equation} For even and odd polynomials we have different products \begin{equation} P_{2n} (x) = \frac{1}{F_{2n}!} \prod^n_{k=1} (x- (-1)^{n+k}\varphi^{2k-1} a) (x + (-1)^{n+k}\varphi^{-2k +1} a) ,\end{equation} \begin{equation} P_{2n+1} (x) = \frac{(x - (-1)^n a)}{F_{2n+1}!} \prod^n_{k=1} (x- (-1)^{n+k}\varphi^{2k} a) (x - (-1)^{n+k}\varphi^{-2k} a) .\end{equation} By using (\ref{phin}) it is easy to find \begin{equation} \varphi^{2k} + \frac{1}{\varphi^{2k}} = F_{2k} + 2 F_{2k-1} ,\end{equation} \begin{equation} \varphi^{2k+1} - \frac{1}{\varphi^{2k+1}} = F_{2k+1} + 2 F_{2k} .\end{equation} Then we can rewrite our polynomials in terms of Fibonacci numbers \begin{equation} P_{2n} (x) = \frac{1}{F_{2n}!} \prod^n_{k=1} (x^2 - (-1)^{n+k} (F_{2k-1} + 2 F_{2k-2})x a - a^2) ,\end{equation} \begin{equation} P_{2n+1} (x) = \frac{(x - (-1)^n a)}{F_{2n+1}!} \prod^n_{k=1} (x^2- (-1)^{n+k}(F_{2k} + 2 F_{2k-1})x a + a^2) .\end{equation} The first few odd polynomials are \begin{equation} P_1(x) = (x-a),\end{equation} \begin{equation} P_3 (x) = \frac{1}{2} (x+a)(x^2 - 3 x a + a^2 ),\end{equation} \begin{equation} P_5 (x) = \frac{1}{2\cdot 3 \cdot 5} (x-a)(x^2 + 3 x a + a^2 )(x^2 - 7 x a + a^2 ),\end{equation} \begin{equation} P_7 (x) = \frac{1}{2\cdot 3 \cdot 5 \cdot 8 \cdot 13} (x+a)(x^2 - 3 x a + a^2 )(x^2 + 7 x a + a^2 )(x^2 - 18 x a + a^2 ),\end{equation} and the even ones \begin{equation} P_2 (x) = (x^2 - x a - a^2 ),\end{equation} \begin{equation} P_4 (x) = \frac{1}{2\cdot 3} (x^2 + x a - a^2 )(x^2 - 4 x a - a^2),\end{equation} \begin{equation} P_6 (x) = \frac{1}{2\cdot 3\cdot 5 \cdot 8} (x^2 - x a - a^2 )(x^2 + 4 x a - a^2)(x^2 - 11 x a - a^2).\end{equation}
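Note that on monomials the Golden derivative acts as \begin{equation} D_F^x\, x^n = \frac{(\varphi x)^n - (\varphi' x)^n}{(\varphi - \varphi')x} = \frac{\varphi^{n}-{\varphi'}^{n}}{\varphi - \varphi'}\, x^{n-1} = F_n\, x^{n-1} \end{equation} by the Binet formula; for $a=0$ this gives the relation $D_F^x P_n(x)=P_{n-1}(x)$ directly.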
\subsection{Golden analytic function} By using golden binomials in the complex domain, one can construct the golden analytic function, a complex-valued function of a complex argument which is not analytic in the usual sense \cite{Eskisehir}. The complex golden binomial is defined as \begin{eqnarray} (x+iy)^n_F &=& (x + i\varphi^{n-1}y)(x - i\varphi^{n-3} y)... (x + i(-1)^{n-1}\varphi^{1-n}y) \\ &=&\sum^n_{k=0}\left[\begin{array}{c}n \\ k \end{array}\right]_{F} (-1)^{\frac{k(k-1)}{2}}x^{n-k}i^k y^k.\end{eqnarray} It can be generated by the golden translation $$ E^{iy D^x_F}_F x^n = (x+iy)^n_F,$$ where $$ E^x_F = \sum^\infty_{n=0} (-1)^{\frac{n(n-1)}{2}} \frac{x^n}{F_n!}.$$ The binomials determine the golden analytic function $$ f(z, F) = E^{iy D^x_F}_F f(x) = \sum^\infty_{n=0} a_n \frac{(x+iy)^n_F}{F_n!},$$ satisfying the golden $\bar\partial_F$ equation \begin{equation} \frac{1}{2}(D^x_{F} + i D^y_{-F}) f(z;F) = 0,\end{equation} where $D^x_{-F} = (-1)^{x\frac{d}{dx}} D^x_F$. For $u(x,y) = Cos_{F} (y D^x_{F}) f(x)$ and $v(x,y) = Sin_{F} (y D^x_{F}) f(x)$, the golden Cauchy-Riemann equations are \begin{equation} D^x_{F} u(x,y) = D^y_{-F} v(x,y),\,\,\,\,D^y_{-F} u(x,y) = -D^x_{F} v(x,y),\end{equation} and the golden-Laplace equation is \begin{equation} (D^x_{F})^2 u(x,y) + (D^y_{-F})^2 u(x,y) = 0. \end{equation}
\subsection{Particular Case}
The golden binomial $(x -a)^n_F$ can also be generated by the golden translation \begin{equation} E^{-a D^x_F}_F x^n = (x-a)^n_F. \end{equation} In the particular case $a = 1$ we have
\begin{equation} (x-1)^m_F = (x - \varphi^{m-1}) (x + \varphi^{m-3})... (x - (-1)^{m-1}\varphi^{-m+1}) .\label{x1} \end{equation} The first few binomials are \begin{eqnarray} (x-1)^1_F &=& x -1 ,\\ (x-1)^2_F& =& (x -\varphi) (x - \varphi'), \\ (x-1)^3_F& =& (x -\varphi^2)(x+1) (x - {\varphi'}^2), \\ (x-1)^4_F& =& (x -\varphi^3) (x +\varphi) (x + \varphi') (x - {\varphi'}^3) , \end{eqnarray} and the corresponding zeros are \begin{eqnarray} m = 1 &\Rightarrow & x =1 \\ m = 2 & \Rightarrow& x =\varphi, x = \varphi'\\ m=3 & \Rightarrow & x =\varphi^2, x = -1, x = {\varphi'}^2 \\ m=4& \Rightarrow& x =\varphi^3, x = -\varphi, x = -\varphi', x = {\varphi'}^3. \end{eqnarray} For arbitrary even and odd $n$ we have the following zeros of the Golden binomial \begin{eqnarray} n = 2k & \Rightarrow & (x-1)^{2k}_F : \varphi^{n-1}, {\varphi'}^{n-1}, -\varphi^{n-3}, -{\varphi'}^{n-3}, ..., \pm\varphi, \pm {\varphi'};\label{even}\\
n = 2k+1 & \Rightarrow & (x-1)^{2k+1}_F : \varphi^{n-1}, {\varphi'}^{n-1}, -\varphi^{n-3}, -{\varphi'}^{n-3}, ..., \pm 1.\label{odd} \end{eqnarray}
\section{Carlitz Polynomials}
In Section 2 we have introduced the Golden binomials. Now we are going to relate these binomials to the characteristic equations of certain matrices constructed from binomial coefficients by Carlitz \cite{Carlitz}. \begin{definition} We define an $(n+1) \times (n+1)$ matrix $A_{n+1}$ with binomial coefficients, \begin{eqnarray} A_{n+1}=\left[{r \choose n-s} \right], \end{eqnarray} where $r,s=0,1,2,...,n$. Here, \begin{eqnarray} {n \choose k} =\left\{ \begin{array}{ll}
\frac{n!}{(n-k)! \phantom{.} k!}, & \hbox{if k $\leq$ n;} \\
0, & \hbox{if k $>$ n.}
\end{array}
\right. \end{eqnarray} \end{definition} The first few matrices are \begin{eqnarray} &&{n=0} \phantom{a} \Rightarrow r=s=0 \Rightarrow \phantom{a} A_{1}=\left[{0 \choose 0} \right]=(1) \nonumber \\ &&{n=1} \phantom{a} \Rightarrow r,s=0,1 \Rightarrow \phantom{a} A_{2}=\left[{r \choose 1-s} \right]=\left(
\begin{array}{cc}
{0 \choose 1} & {0 \choose 0} \\
{1 \choose 1} & {1 \choose 0} \\
\end{array}
\right)=\left(
\begin{array}{cc}
0 & 1 \\
1 & 1 \\
\end{array}
\right)\nonumber \\ &&{n=2} \Rightarrow r,s=0,1,2 \Rightarrow A_{3}=\left[{r \choose 2-s} \right]=\left(
\begin{array}{ccc}
{0 \choose 2} & {0 \choose 1} & {0 \choose 0} \\
{1 \choose 2} & {1 \choose 1} & {1 \choose 0} \\
{2 \choose 2} & {2 \choose 1} & {2 \choose 0} \\
\end{array}
\right)=\left(
\begin{array}{ccc}
0 & 0 & 1 \\
0 & 1 & 1 \\
1 & 2 & 1 \\
\end{array}
\right)
\nonumber \end{eqnarray} Continuing, the general matrix $A_{n+1}$ of order $(n+1)$ can be written as, \begin{eqnarray} A_{n+1}=\left(
\begin{array}{ccccc}
\ldots \phantom{.} 0 & 0 & 0 & 0 & 1 \\
\ldots \phantom{.} 0 & 0 & 0 & 1 & 1 \\
\ldots \phantom{.} 0 & 0 & 1 & 2 & 1 \\
\ldots \phantom{.} 0 & 1 & 3 & 3 & 1 \\
\ldots \phantom{.} 1 & 4 & 6 & 4 & 1 \\
\phantom{.....} \vdots & \vdots & \vdots & \vdots & \vdots \\
\end{array} \right)_{(n+1)\times(n+1)}, \nonumber \end{eqnarray} where the lower triangular matrix is build from Pascal's triangle. We notice that trace of first few matrices $A_{n+1}$ gives Fibonacci numbers. As would be shown, it is valid for any n (Theorem $(\ref{invarianttheorem})$ equation $(\ref{invarianttheoremequation1})$) .
\begin{definition} The characteristic polynomial of the matrix $A_{n+1}$ is defined by \begin{eqnarray} Q_{n+1}(x)=\mbox{det}(x I-A_{n+1}). \label{characteristicequation} \end{eqnarray} \end{definition} The first few polynomials are explicitly
\begin{eqnarray} &&{n=0:}\phantom{abc} Q_{1}(x)=x - 1 ,\nonumber \\
&&{n=1:}\phantom{abc} Q_{2}(x)=\mbox{det}(x I-A_{2})=\left|
\begin{array}{cc}
x & -1 \\
-1 & x-1 \\
\end{array}
\right| =x^2-x-1, \nonumber \\
&&{n=2:}\phantom{abc} Q_{3}(x)=\mbox{det}(x I-A_{3})=\left|
\begin{array}{ccc}
x & 0 & -1 \\
0 & x-1 & -1 \\
-1 & -2 & x-1 \\
\end{array}
\right|=x^3-2 x^2-2 x+1 ,\nonumber \\ &&{n=3:}\phantom{abc} \nonumber \end{eqnarray}
\begin{eqnarray} Q_{4}(x)=\mbox{det}(x I-A_{4})&=&\left|
\begin{array}{cccc}
x & 0 & 0 & -1 \\
0 & x & -1 & -1 \\
0 & -1 & x-2 & -1 \\
-1 & -3 & -3 & x-1 \\
\end{array}
\right| \nonumber \\ &=&x^4-3 x^3-6 x^2+3 x+1 . \nonumber \end{eqnarray}
The corresponding eigenvalues are represented by powers of $\varphi$ and $\varphi'$:
${n=0}$\phantom{abc} $\Rightarrow$ \phantom{a} $x_1=1$,
${n=1}$\phantom{abc}$\Rightarrow$ \phantom{a} $x_1=\varphi,\quad x_2=\varphi'$,
${n=2}$\phantom{abc}$\Rightarrow$ \phantom{a} $x_1=\varphi^2,\quad x_2=-1,\quad x_3={\varphi'}^2$,
${n=3}$\phantom{abc}$\Rightarrow$ \phantom{a} $x_1=\varphi^3, x_2=-\varphi, x_3=-\varphi',x_4=\varphi'^3$.
Comparing the zeros of the first few characteristic polynomials with the zeros of the Golden Binomial $(\ref{x1})$, we notice that they coincide. This leads to the following conjecture.
\textbf{Conjecture:} The characteristic equation $(\ref{characteristicequation})$ of the matrix $A_{n+1}$ coincides with the Golden Binomial: \begin{eqnarray} Q_{n+1}(x)=\mbox{det}(x I-A_{n+1})=(x-1)^{n+1}_{F}. \end{eqnarray}
To prove this conjecture, we first represent the Golden binomial in product form.
\begin{prop} The Golden binomial can be written as a product, \begin{eqnarray} (x-1)^{n+1}_{F}=\prod_{j=0}^{n} \left(x-\varphi^j \varphi'^{n-j}\right). \end{eqnarray} \end{prop} \begin{prf} Starting from the Golden binomial in the product representation \begin{eqnarray} (x+y)^{n}_{F} \equiv \prod_{j=0}^{n-1} \left(x-(-1)^{j-1}\phantom{.} \varphi^{n-1}\phantom{.} \varphi^{-2j} y \right) \end{eqnarray} by using \begin{eqnarray} \varphi^{-2j}=\left(\frac{1}{\varphi} \right)^{2j}=\left(-\frac{1}{\varphi} \right)^{2j}=\varphi'^{2j}, \end{eqnarray} after the substitution $y=-1$ we have \begin{eqnarray} (x-1)^{n}_{F} \equiv \prod_{j=0}^{n-1} \left(x-(-1)^{j} \phantom{.} \varphi^{n-1}\phantom{.} \varphi'^{2j} \right) .\nonumber \end{eqnarray} By shifting $n \rightarrow n+1$, \begin{eqnarray} (x-1)^{n+1}_{F}&=&\prod_{j=0}^{n} \left(x-(-1)^{j} \phantom{.} \varphi^{n}\phantom{.} \varphi'^{2j} \right) \nonumber \\ &=&\prod_{j=0}^{n} \left(x-(-1)^{j} \phantom{.} \varphi^{n}\phantom{.} \frac{(-1)^{2j}}{\varphi^{j} \varphi^{j}} \right) \nonumber \\ &=&\prod_{j=0}^{n} \left(x- \varphi^{n} \left(-\frac{1}{\varphi}\right)^{j} \phantom{.} \frac{1}{\varphi^{j}} \right) \nonumber \\ &=&\prod_{j=0}^{n} \left(x- \varphi^{n-j} \varphi'^{j} \phantom{.} \right) \nonumber \end{eqnarray} and substituting $j=n-m$ we get \begin{eqnarray} (x-1)^{n+1}_{F}=\prod_{m=0}^{n} \left(x- \varphi^{m} \varphi'\phantom{.}^{n-m} \phantom{.} \right). \nonumber \end{eqnarray} This formula shows explicitly that the zeros of the Golden binomial in $(\ref{even})$ and $(\ref{odd})$ are given by powers of $\varphi$ and $\varphi'$. \end{prf}
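For example, for $n+1=3$ the product formula gives \begin{equation} (x-1)^{3}_{F}=\left(x-{\varphi'}^{2}\right)\left(x-\varphi \varphi'\right)\left(x-\varphi^{2}\right)=(x-\varphi^2)(x+1)(x-{\varphi'}^2), \end{equation} in agreement with the factorization listed above for $m=3$.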
\begin{cor} The eigenvalues of matrix $A_{n+1}$ are the numbers, \begin{eqnarray} \varphi^n, \varphi^{n-1}\varphi', \varphi^{n-2}\varphi'^{2}, \ldots ,\varphi \phantom{.}\varphi'^{n-1}, \varphi'^{n} \label{eigenvaluesofmatrixAn+1}. \end{eqnarray} \end{cor}
As was shown by Carlitz \cite{Carlitz}, this product formula is precisely the characteristic equation $(\ref{characteristicequation})$ of the matrix $A_{n+1}$. Since the two monic polynomials $\mbox{det}(x I -A_{n+1})$ and $(x-1)^{n+1}_{F}$ have the same zeros, the conjecture is correct and we have the following theorem. \begin{thm} The characteristic equation of the combinatorial matrix $A_{n+1}$ is given by the Golden binomial: \begin{eqnarray} Q_{n+1}(x)=\mbox{det}(x I-A_{n+1})=(x-1)^{n+1}_{F}. \end{eqnarray} \end{thm}
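For instance, for $n+1=4$ we have \begin{equation} (x-1)^{4}_{F}=(x-\varphi^3)(x+\varphi)(x+\varphi')(x-{\varphi'}^3)=(x^2-4x-1)(x^2+x-1)=x^4-3x^3-6x^2+3x+1, \end{equation} which is exactly the characteristic polynomial $Q_4(x)$ computed above.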
\section{Powers of $A_{n+1}$ and Fibonacci Divisors}
\begin{prop} An arbitrary $n^{th}$ power of the matrix $A_{2}$ can be written in terms of Fibonacci numbers, \begin{eqnarray} A^{n}_{2}=\left( \begin{array}{cc}
F_{n-1} & F_{n} \\
F_{n} & F_{n+1} \\
\end{array}
\right). \end{eqnarray} \end{prop} \begin{prf} The proof is by induction. For $n=1$, \begin{eqnarray} A_{2}=\left( \begin{array}{cc}
0 & 1 \\
1 & 1
\end{array} \right) =\left( \begin{array}{cc}
F_{0} & F_{1} \\
F_{1} & F_{2}
\end{array}\right), \nonumber \end{eqnarray} and for $n=2$, \begin{eqnarray} A^2_{2}=\left( \begin{array}{cc}
1 & 1 \\
1 & 2
\end{array} \right) =\left( \begin{array}{cc}
F_{1} & F_{2} \\
F_{2} & F_{3}
\end{array}\right). \nonumber \end{eqnarray} Suppose for $n=k$, \begin{eqnarray} A^{k}_{2}=\left( \begin{array}{cc}
F_{k-1} & F_{k} \\
F_{k} & F_{k+1} \\
\end{array}
\right) , \nonumber \end{eqnarray} then \begin{eqnarray} A^{k+1}_{2}&=&A^{k}_{2}\phantom{.} A_{2}=\left( \begin{array}{cc}
F_{k-1} & F_{k} \\
F_{k} & F_{k+1} \\
\end{array}
\right)\phantom{.} \left( \begin{array}{cc}
0 & 1 \\
1 & 1 \\
\end{array}
\right) \\ &=& \left( \begin{array}{cc}
F_{k} & F_{k}+F_{k-1} \\
F_{k+1} & F_{k}+F_{k+1} \\
\end{array}
\right)=\left( \begin{array}{cc}
F_{k} & F_{k+1} \\
F_{k+1} & F_{k+2} \\
\end{array}
\right).\nonumber \end{eqnarray} This result can be understood from the observation that the eigenvalues of the matrix $A_2$ are $\varphi$ and $\varphi'$, so that the eigenvalues of $A^{n}_2$ are the powers $\varphi^n$, $\varphi'^n$, which are related to Fibonacci numbers.
\end{prf}
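For example, \begin{eqnarray} A^{5}_{2}=\left( \begin{array}{cc} F_{4} & F_{5} \\ F_{5} & F_{6} \end{array} \right)=\left( \begin{array}{cc} 3 & 5 \\ 5 & 8 \end{array} \right), \nonumber \end{eqnarray} and more generally $\mbox{det}(A^{n}_{2})=F_{n-1}F_{n+1}-F_{n}^2=(-1)^n$, which is Cassini's identity, in agreement with Theorem \ref{invarianttheorem} below.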
As we have seen, the eigenvalues of the matrix $A_3$ are $\varphi^2, \varphi'^2, -1$. This implies that the eigenvalues of $A^{n}_3$ are $\varphi^{2n}, \varphi'^{2n}, (-1)^n$, and that the matrix can be expressed in terms of the Fibonacci divisors $F^{(2)}_n$ conjugate to $F_2$ \cite{FNK}, owing to the relations \begin{eqnarray} (\varphi^k)^n = \varphi^k F^{(k)}_n + (-1)^{k+1} F^{(k)}_{n-1},\label{phikn1}\\ ({\varphi'}^k)^n = {\varphi'}^k F^{(k)}_n + (-1)^{k+1} F^{(k)}_{n-1},\label{phikn2} \end{eqnarray} where $F^{(k)}_n = F_{nk}/F_k$. \begin{prop} \label{A3powern} An arbitrary $n^{th}$ power of the matrix $A_{3}$ can be expressed in terms of the Fibonacci divisors $F_{n}^{(2)}$, \begin{eqnarray} A^{n}_3= \frac{1}{5}\left(
\begin{array}{ccc}
(2F_{n}^{(2)}-3F_{n-1}^{(2)}+2(-1)^n) & (2F_{n}^{(2)}+2F_{n-1}^{(2)}+2(-1)^n) & (3F_{n}^{(2)}-2F_{n-1}^{(2)}-2(-1)^n) \\
(F_{n}^{(2)}+F_{n-1}^{(2)}+(-1)^n) & (6F_{n}^{(2)}-4F_{n-1}^{(2)}+(-1)^n) & (4F_{n}^{(2)}-F_{n-1}^{(2)}-(-1)^n) \\
(3F_{n}^{(2)}-2F_{n-1}^{(2)}-2(-1)^n) & (8F_{n}^{(2)}-2F_{n-1}^{(2)}-2(-1)^n) & (7F_{n}^{(2)}-3F_{n-1}^{(2)}+2(-1)^n) \\
\end{array} \right) \nonumber \end{eqnarray} \end{prop} \begin{prf} Let's diagonalize the matrix $A_{3}$, \begin{eqnarray} \phi_{3}=\sigma^{-1}_{3}\phantom{a} A_{3} \phantom{a} \sigma_{3}, \nonumber \end{eqnarray} where $\phi_{3}$ is the diagonal matrix and \begin{eqnarray} A_{3}=\sigma_{3} \phantom{.} \phi_{3} \phantom{.} \sigma^{-1}_{3}. \nonumber \end{eqnarray} Taking the $n^{th}$ power of both sides gives, \begin{eqnarray} A^{n}_{3}=(\sigma_{3} \phantom{.} \phi_{3} \phantom{.} \underbrace{\sigma^{-1}_{3})\phantom{a}(\sigma_{3}}_{I} \phantom{.} \phi_{3} \phantom{.} \sigma^{-1}_{3})\phantom{a}...\phantom{a}(\sigma_{3} \phantom{.} \phi_{3} \phantom{.} \underbrace{ \sigma^{-1}_{3})\phantom{a}(\sigma_{3}}_{I} \phantom{.} \phi_{3} \phantom{.} \sigma^{-1}_{3}) \nonumber \end{eqnarray} Therefore, \begin{eqnarray} {A^{n}_{3}=\sigma_{3} \phantom{.} \phi^{n}_{3} \phantom{.} \sigma^{-1}_{3}}. \label{A3powerintermsofphiandsigma} \end{eqnarray} By using the diagonalization principle, $\sigma_{3}$ and $\sigma^{-1}_{3}$ matrices can be obtained as, \begin{eqnarray} \sigma_{3}=\frac{1}{2}\left( \begin{array}{ccc}
-\varphi' & \frac{4}{3} & -\varphi \\
1 & \frac{2}{3} & 1 \\
\varphi & -\frac{4}{3} & \varphi' \end{array} \nonumber \right) \end{eqnarray} and, \begin{eqnarray} \sigma^{-1}_{3}=\left( \begin{array}{ccc}
\frac{2(\varphi'+2)}{5\left(\varphi-\varphi'\right)} & -\frac{4(\varphi'+2)}{5 \varphi'\left(\varphi-\varphi'\right)} & \frac{2(2\varphi'-1)}{5 \varphi' \left(\varphi-\varphi'\right)} \\
\frac{3}{5} & \frac{3}{5} & -\frac{3}{5}\\
-\frac{2(\varphi+2)}{5\left(\varphi-\varphi'\right)} & \frac{4(\varphi+2)}{5 \varphi\left(\varphi-\varphi'\right)} & \frac{2(1-2\varphi)}{5 \varphi \left(\varphi-\varphi'\right)} \end{array} \right)=\frac{2}{5\sqrt{5}}\left( \begin{array}{ccc}
\varphi'+2 & -2(1-2\varphi) & (2+\varphi) \\
\frac{3 \sqrt{5}}{2} & \frac{3\sqrt{5}}{2} & -\frac{3\sqrt{5}}{2}\\
-(\varphi+2) & 2 (1-2\varphi') & -(2+\varphi') \end{array} \right) \nonumber \end{eqnarray} Since eigenvalues of matrix $A_{3}$ are $\varphi^2,-1,\varphi'^2$, the diagonal matrix $\phi_{3}$ is, \begin{eqnarray} \phi_{3} =\left( \begin{array}{ccc}
\varphi^2 & 0 & 0 \\
0 & -1 & 0\\
0 & 0 & \varphi'^2 \end{array} \right), \end{eqnarray} and an arbitrary $n^{th}$ power of this matrix is, \begin{eqnarray} \phi^n_{3} =\left( \begin{array}{ccc}
(\varphi^2)^n & 0 & 0 \\
0 & (-1)^n & 0\\
0 & 0 & (\varphi'^2)^n \end{array} \right). \end{eqnarray} Finally by using $(\ref{A3powerintermsofphiandsigma})$, $A^{n}_3=$ \begin{eqnarray} \frac{1}{5}\left(
\begin{array}{ccc}
(2F_{n}^{(2)}-3F_{n-1}^{(2)}+2(-1)^n) & (2F_{n}^{(2)}+2F_{n-1}^{(2)}+2(-1)^n) & (3F_{n}^{(2)}-2F_{n-1}^{(2)}-2(-1)^n) \\
(F_{n}^{(2)}+F_{n-1}^{(2)}+(-1)^n) & (6F_{n}^{(2)}-4F_{n-1}^{(2)}+(-1)^n) & (4F_{n}^{(2)}-F_{n-1}^{(2)}-(-1)^n) \\
(3F_{n}^{(2)}-2F_{n-1}^{(2)}-2(-1)^n) & (8F_{n}^{(2)}-2F_{n-1}^{(2)}-2(-1)^n) & (7F_{n}^{(2)}-3F_{n-1}^{(2)}+2(-1)^n) \\
\end{array} \right) \nonumber \end{eqnarray} is obtained. \end{prf}
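As a check, for $n=1$ we have $F_{1}^{(2)}=F_2/F_2=1$, $F_{0}^{(2)}=F_0/F_2=0$ and $(-1)^1=-1$, so that the formula of Proposition \ref{A3powern} gives \begin{eqnarray} A^{1}_3=\frac{1}{5}\left( \begin{array}{ccc} 0 & 0 & 5 \\ 0 & 5 & 5 \\ 5 & 10 & 5 \\ \end{array} \right)=\left( \begin{array}{ccc} 0 & 0 & 1 \\ 0 & 1 & 1 \\ 1 & 2 & 1 \\ \end{array} \right)=A_3, \nonumber \end{eqnarray} as it should.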
As one can expect, these results generalize to an arbitrary matrix $A_{n+1}$. Since the eigenvalues of $A_{n+1}$ are the powers $\varphi^n, \varphi^{n-1}\varphi', \ldots, \varphi'^n$, the eigenvalues of $A^{N}_{n+1}$ are $\varphi^{nN}, (\varphi^{n-1}\varphi')^N, \ldots, \varphi'^{nN}$. These powers can be written in terms of Fibonacci divisors as in $(\ref{phikn1})$, $(\ref{phikn2})$, and the matrix $A^{N}_{n+1}$ itself can be represented in terms of the Fibonacci divisors $F^{(n)}_{N}$.
For powers of the matrix $A_{n+1}$ we have the following identities.
\begin{thm} \label{invarianttheorem} The invariants of the matrix $A^{k}_{n+1}$ are given by \begin{eqnarray} Tr\left( A^k_{n+1} \right)&=&\frac{F_{kn+k}}{F_{k}}=F^{(k)}_{n+1}, \label{invarianttheoremequation1} \\ {det}\left(A^k_{n+1} \right)&=&(-1)^{k \phantom{.} \frac{n(n+1)}{2}} . \label{invarianttheoremequation2} \end{eqnarray} For $k=1$, this gives \begin{eqnarray} Tr\left( A_{n+1} \right)&=& F_{n+1}, \nonumber \\ {det}\left(A_{n+1} \right)&=&(-1)^{\frac{n(n+1)}{2}} . \nonumber \end{eqnarray} \end{thm} \begin{prf} We diagonalize the general matrix $A_{n+1}$ as \begin{eqnarray} \phi_{n+1}=\sigma^{-1}_{n+1}\phantom{a} A_{n+1} \phantom{a} \sigma_{n+1}, \nonumber \end{eqnarray} where $\phi_{n+1}$ is diagonal and \begin{eqnarray} A_{n+1}=\sigma_{n+1} \phantom{.} \phi_{n+1} \phantom{.} \sigma^{-1}_{n+1}. \nonumber \end{eqnarray} Taking the $k^{th}$ power of both sides gives \begin{eqnarray} A^{k}_{n+1}=(\sigma_{n+1} \phantom{.} \phi_{n+1} \phantom{.} \underbrace{\sigma^{-1}_{n+1})\phantom{a}(\sigma_{n+1}}_{I} \phantom{.} \phi_{n+1} \phantom{.} \sigma^{-1}_{n+1})\phantom{a}...\phantom{a}(\sigma_{n+1} \phantom{.} \phi_{n+1} \phantom{.} \underbrace{ \sigma^{-1}_{n+1})\phantom{a}(\sigma_{n+1}}_{I} \phantom{.} \phi_{n+1} \phantom{.} \sigma^{-1}_{n+1}) \nonumber \end{eqnarray} and \begin{eqnarray} A^{k}_{n+1}=\sigma_{n+1} \phantom{.} \phi^{k}_{n+1} \phantom{.} \sigma^{-1}_{n+1}. \label{An+1powerk} \end{eqnarray} Taking the trace of both sides and using the cyclic permutation property of the trace, \begin{eqnarray} Tr(A^{k}_{n+1})=Tr\phantom{.}(\sigma_{n+1} \phantom{.} \phi^{k}_{n+1} \phantom{.} \sigma^{-1}_{n+1})=Tr\phantom{.}(\sigma^{-1}_{n+1} \phantom{.} \sigma_{n+1} \phantom{.} \phi^{k}_{n+1})=Tr(I \phantom{a} \phi^{k}_{n+1})=Tr\phantom{.}( \phi^{k}_{n+1}) \nonumber \end{eqnarray} we get \begin{eqnarray} {Tr(A^{k}_{n+1})=Tr\phantom{.}( \phi^{k}_{n+1})}. \nonumber \end{eqnarray} The eigenvalues of the matrix $A_{n+1}$ given in $(\ref{eigenvaluesofmatrixAn+1})$ allow one to construct the diagonal matrix $\phi_{n+1}$ and calculate
\begin{eqnarray} Tr(A^{k}_{n+1})=Tr \phantom{..} \left(
\begin{array}{ccccccccc}
\varphi^n & 0 & 0 & .& . & . & 0 & 0 & 0 \\
0 & \varphi^{n-1} \varphi'& 0 & . & . & . & 0 & 0 & 0 \\
0 & 0 & \varphi^{n-2} \varphi'^2 & . & . & . & 0 & 0 & 0 \\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\
0 & 0 & 0 & . & . & . & \varphi^{2} \varphi'^{n-2} & 0 & 0 \\
0 & 0 & 0 & . & . & . & 0 & \varphi \varphi'^{n-1} & 0 \\
0 & 0 & 0 & . & . & . & 0 & 0 & \varphi'^{n} \\
\end{array}
\right)^{k}. \nonumber \end{eqnarray} It gives \begin{eqnarray} Tr(A^{k}_{n+1})=Tr \phantom{..}\left(
\begin{array}{ccccccccc}
(\varphi^n)^k & 0 & 0 & . & 0 & 0 & 0 \\
0 & (\varphi^{n-1} \varphi')^k & 0 & . & 0 & 0 & 0 \\
0 & 0 & (\varphi^{n-2} \varphi'^2)^k & . & 0 & 0 & 0 \\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\
0 & 0 & 0 & . & (\varphi^{2} \varphi'^{n-2})^k & 0 & 0 \\
0 & 0 & 0 &. & 0 & (\varphi \varphi'^{n-1})^k & 0 \\
0 & 0 & 0 &. & 0 & 0 & (\varphi'^{n})^k \\
\end{array}
\right) \nonumber \label{matrixfipowerk} \end{eqnarray} and \begin{equation} Tr(A^{k}_{n+1})=(\varphi^n)^k+(\varphi^{n-1} \varphi')^k+\ldots +(\varphi \varphi'^{n-1})^k+(\varphi'^{n})^k, \nonumber \end{equation} or \begin{equation} Tr(A^{k}_{n+1})=(\varphi^k)^n+(\varphi^{k})^{n-1} \varphi'^k+\ldots +\varphi^k (\varphi'^k)^{n-1}+(\varphi'^{k})^n . \nonumber \end{equation} The powers $(\varphi^k)^n$ and $(\varphi'^{k})^n$ substituted from equations $(\ref{phikn1})$ and $(\ref{phikn2})$ give $Tr(A^{k}_{n+1}) =$ \begin{eqnarray} &=&\left(\varphi^k \phantom{.} F^{(k)}_n + (-1)^{k+1} \phantom{.} F^{(k)}_{n-1}\right)+\left(\varphi^k \phantom{.} F^{(k)}_{n-1} + (-1)^{k+1} \phantom{.} F^{(k)}_{n-2}\right)\varphi'^{k}+\ldots \nonumber \\ &&+\left(\varphi^k \phantom{.} F^{(k)}_1 + (-1)^{k+1} \phantom{.} F^{(k)}_{0}\right)(\varphi'^{k})^{n-1}+(\varphi'^{k})^{n} \nonumber \\ &=& \varphi^k \left(F^{(k)}_n+F^{(k)}_{n-1}(\varphi'^k)+F^{(k)}_{n-2}(\varphi'^k)^2+\ldots +F^{(k)}_{1}(\varphi'^k)^{n-1}\right) \nonumber \\ &&+(-1)^{k+1}\left(F^{(k)}_{n-1}+F^{(k)}_{n-2}(\varphi'^k)+F^{(k)}_{n-3}(\varphi'^k)^2+\ldots +F^{(k)}_{0}(\varphi'^k)^{n-1}\right) \nonumber \\ &&+(\varphi'^k)^n \nonumber \\ &=&\varphi^k \left(\frac{F_{kn}}{F_{k}}+\frac{F_{(n-1)k}}{F_{k}}(\varphi'^k)+\frac{F_{(n-2)k}}{F_{k}}(\varphi'^k)^2+\ldots +\frac{F_{k}}{F_{k}}(\varphi'^k)^{n-1}\right) \nonumber \\ &&+(-1)^{k+1}\left(\frac{F_{(n-1)k}}{F_{k}}+\frac{F_{(n-2)k}}{F_{k}}(\varphi'^k)+\frac{F_{(n-3)k}}{F_{k}}(\varphi'^k)^2+\ldots +\frac{F_{0}}{F_{k}}(\varphi'^k)^{n-1}\right)\nonumber\\ &&+(\varphi'^k)^n \nonumber \\ &=&\frac{F_{kn}}{F_{k}}\phantom{..} \varphi^k + \frac{F_{(n-1)k}}{F_{k}} (-1)^{k} + \frac{F_{(n-2)k}}{F_{k}} (-1)^{k} (\varphi'^k)+\ldots +\frac{F_{k}}{F_{k}}(\varphi^k) (\varphi'^k)^{n-1}\nonumber \\ &&+ \frac{F_{(n-1)k}}{F_{k}} (-1)^{k+1} + \frac{F_{(n-2)k}}{F_{k}} (-1)^{k+1}(\varphi'^k) + \frac{F_{(n-3)k}}{F_{k}} (-1)^{k+1}(\varphi'^k)^2 \nonumber \\ &&+\ldots +\frac{F_{0}}{F_{k}} (-1)^{k+1}\varphi'^{n-1} + (\varphi'^k)^n \nonumber \\ &=&\frac{F_{kn}}{F_{k}}\phantom{..} \varphi^k + \frac{F_{(n-1)k}}{F_{k}} (-1)^{k} + \frac{F_{(n-2)k}}{F_{k}} (-1)^{k} (\varphi'^k)+\ldots \nonumber \\ &&+ \frac{F_{(n-(n-1))k}}{F_{k}} (-1)^{k}(\varphi'^k)^{n-2} + \frac{F_{(n-1)k}}{F_{k}} (-1)^{k+1}+ \frac{F_{(n-2)k}}{F_{k}} (-1)^{k+1}(\varphi'^k) \nonumber \\ &&+ \frac{F_{(n-3)k}}{F_{k}} (-1)^{k+1}(\varphi'^k)^{2}+\ldots +\frac{F_{k}}{F_{k}} (-1)^{k+1}(\varphi'^k)^{n-2} + (\varphi'^k)^{n} \nonumber \\ &=& \frac{F_{kn}}{F_{k}}\phantom{..} \varphi^k + \frac{F_{(n-1)k}}{F_{k}} \left( (-1)^k + (-1)^{k+1} \right) \nonumber \\ &&+ \frac{F_{(n-2)k}}{F_{k}} \left( (-1)^k \varphi'^k +(-1)^{k+1} \varphi'^k \right) + \frac{F_{(n-3)k}}{F_{k}} \left((-1)^k (\varphi'^k)^2 + (-1)^{k+1} (\varphi'^k)^2 \right)\nonumber \\ &&+\ldots +\frac{F_{k}}{F_{k}} \left( (-1)^k (\varphi'^k)^{n-2} + (-1)^{k+1} (\varphi'^k)^{n-2} \right) + (\varphi'^k)^{n} \nonumber \\ &=&\frac{F_{kn}}{F_{k}}\phantom{..} \varphi^k + \frac{F_{(n-1)k}}{F_{k}} (-1)^{k} (1+(-1)) + \frac{F_{(n-2)k}}{F_{k}} (-1)^{k} \varphi'^k (1+(-1))\nonumber \\ &&+\frac{F_{(n-3)k}}{F_{k}} (-1)^{k} (\varphi'^k)^2 (1+(-1))+\ldots +\frac{F_{k}}{F_{k}} (-1)^{k} (\varphi'^k)^{n+2} (1+(-1))\nonumber \\ &&+(\varphi'^k)^{n} \nonumber \\ &=&\frac{F_{kn}}{F_{k}} \phantom{..} \varphi^k + (\varphi'^k)^{n} \nonumber \\ &{(\ref{phikn2})}{=}&\frac{F_{kn}}{F_{k}}\phantom{..} \varphi^k + \varphi'^k \phantom{.} F^{(k)}_{n} + (-1)^{k+1} \phantom{.} F^{(k)}_{n-1} \nonumber \\ &=&\frac{F_{kn}}{F_{k}} \phantom{..} \varphi^k + 
\varphi'^k \phantom{.} \frac{F_{kn}}{F_{k}} + (-1)^{k+1} \phantom{.} \frac{F_{k(n-1)}}{F_{k}} \nonumber \end{eqnarray} \begin{eqnarray} &=&\frac{1}{F_{k}} \left( F_{kn} \varphi^k + \varphi'^k \phantom{.} F_{kn} + (-1)^{k+1} \phantom{.} F_{k(n-1)} \right)\nonumber \\ &=&\frac{1}{F_{k}} \frac{1}{\varphi-\varphi'} \left[ \left(\varphi^{kn}-\varphi'^{kn}\right)\varphi^{k}+\varphi'^{k} \left(\varphi^{kn}-\varphi'^{kn}\right)+ (-1)^{k+1}\left(\varphi^{(n-1)k}-\varphi'^{(n-1)k}\right) \right] \nonumber \\ &=&\frac{1}{F_{k}} \frac{1}{\varphi-\varphi'} \left[ \varphi^{k(n+1)}-\varphi'^{kn} \varphi^{k}+\varphi'^{k} \varphi^{kn}-\varphi'^{k+kn}+ (-1)^{k+1} \varphi^{(n-1)k}-(-1)^{k+1} \varphi'^{(n-1)k} \right] \nonumber \\ &=&\frac{1}{F_{k}} \frac{1}{\varphi-\varphi'} \bigg[ \varphi^{k(n+1)}-\varphi'^{k(n+1)}-\left(-\frac{1}{\varphi}\right)^{kn} \varphi^{k}+\left(-\frac{1}{\varphi}\right)^{k} \varphi^{kn}+(-1)^{k+1} \varphi^{(n-1)k} \nonumber \\ &&-(-1)^{k+1} \left(-\frac{1}{\varphi}\right)^{(n-1)k} \bigg] \nonumber \\ &=&\frac{1}{F_{k}} \frac{1}{\varphi-\varphi'} \bigg[ \varphi^{k(n+1)}-\varphi'^{k(n+1)}-(-1)^{kn} \varphi^{k(1-n)}+(-1)^{k} \varphi^{k(n-1)}-(-1)^{k} \varphi^{k(n-1)} \nonumber \\ &&+(-1)^k (-1)^{k(n-1)} \varphi^{k(1-n)} \bigg] \nonumber \\ &=&\frac{1}{F_{k}} \frac{1}{\varphi-\varphi'} \left[ \varphi^{k(n+1)}-\varphi'^{k(n+1)}-(-1)^{kn} \varphi^{k(1-n)}+(-1)^k (-1)^{kn}(-1)^{-k} \varphi^{k(1-n)} \right] \nonumber \\ &=&\frac{1}{F_{k}} \frac{1}{\varphi-\varphi'} \left[ \varphi^{k(n+1)}-\varphi'^{k(n+1)}-(-1)^{kn} \varphi^{k(1-n)}+(-1)^{kn} \varphi^{k(1-n)} \right] \nonumber \\ &=&\frac{1}{F_{k}} \phantom{.} \frac{1}{\varphi-\varphi'} \left[ \varphi^{k(n+1)}-\varphi'^{k(n+1)} \right] \nonumber \\ &=&\frac{1}{F_{k}}\phantom{.} \frac{\varphi^{k(n+1)}-\varphi'^{k(n+1)}}{\varphi-\varphi'} \nonumber \\ &=&\frac{1}{F_{k}} F_{k(n+1)} \nonumber \\ &=&\frac{F_{k(n+1)}}{F_{k}} . \nonumber \end{eqnarray} To prove the relation for ${det}\left(A^k_{n+1} \right)$, we take the determinant from both sides in $(\ref{An+1powerk})$, \begin{eqnarray} {\det}\left(A^k_{n+1} \right)={\det}\left( \sigma_{n+1} \phantom{.} \phi^{k}_{n+1} \phantom{.} \sigma^{-1}_{n+1}\right). 
\end{eqnarray} By using property of determinants, \begin{eqnarray} \det(AB)=\det(A) \det(B) \end{eqnarray} we obtain, \begin{eqnarray} {\det}\left(A^k_{n+1} \right)&=&{\det}\left( \sigma_{n+1}\right) \phantom{.} {\det} \left( \phi^{k}_{n+1} \right) \phantom{.} {\det} \left( \sigma^{-1}_{n+1} \right) \Rightarrow \nonumber \end{eqnarray} \begin{eqnarray} {\det}\left(A^k_{n+1} \right)&=&{\det}\left( \sigma_{n+1}\right) \phantom{.} {\det} \left( \sigma^{-1}_{n+1} \right) \phantom{.}{\det} \left( \phi^{k}_{n+1} \right) \nonumber \Rightarrow \\ {\det}\left(A^k_{n+1} \right)&=&{\det}\left( \sigma_{n+1} \phantom{.}\sigma^{-1}_{n+1} \right) \phantom{.}{\det} \left( \phi^{k}_{n+1} \right) \nonumber \Rightarrow \\ {\det}\left(A^k_{n+1} \right)&=&{\det}\left(I\right) \phantom{.}{\det} \left( \phi^{k}_{n+1} \right) \nonumber \Rightarrow\\ {\det}\left(A^k_{n+1} \right)&=&{\det} \left( \phi^{k}_{n+1} \right) .\nonumber \end{eqnarray} Since the matrix $\phi^{k}_{n+1}$ is known, the above equation becomes, \begin{eqnarray} {\det}\left(A^k_{n+1} \right)&=& (\varphi^n)^k \phantom{.} (\varphi^{n-1} \varphi')^k \phantom{.} (\varphi^{n-2} \varphi'^2)^k \ldots (\varphi^{2} \varphi'^{n-2})^k \phantom{.} (\varphi \varphi'^{n-1})^k \phantom{.}(\varphi'^n)^k \nonumber \\ &=&\left(\varphi^{nk}\phantom{.}\varphi^{(n-1)k}\phantom{.}\varphi^{(n-2)k} \ldots \varphi^{2k}\phantom{.}\varphi^{k}\right)\left(\varphi'^{k}\phantom{.}\varphi'^{2k} \ldots \varphi'^{(n-2)k}\phantom{.}\varphi'^{(n-1)k}\phantom{.}\varphi'^{nk}\right) \nonumber \\ &=&\left(\varphi^{nk+(n-1)k+(n-2)k+\ldots +2k+k}\right)\left(\varphi'\phantom{.}^{k+2k+ \ldots +(n-2)k+(n-1)k+nk}\right) \nonumber \\ &=&\varphi^{k\left[n+(n-1)+(n-2)+\ldots +2+1\right]} \phantom{.} \varphi'\phantom{.}^{k\left[1+2+ \ldots +(n-2)+(n-1)+n\right]} \nonumber \\ &=& \varphi^{k\left(\frac{n(n+1)}{2}\right)} \phantom{.} \varphi'^{k\left(\frac{n(n+1)}{2}\right)} \nonumber \\ &=& \left(\varphi^{\frac{n(n+1)}{2}}\right)^k \phantom{.} \left(\varphi'^ {\frac{n(n+1)}{2}}\right)^k \nonumber \\ &=& \left[(\varphi \varphi')^{\frac{n(n+1)}{2}}\right]^k \nonumber \\ &{\left(\varphi \varphi'=-1\right)}{=}& (-1)^{k \phantom{.} \frac{n\phantom{.}(n+1)}{2}}. \nonumber \end{eqnarray} \end{prf}
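As an illustration of the theorem, take $n=2$ and $k=2$: direct multiplication gives \begin{eqnarray} A^{2}_{3}=\left( \begin{array}{ccc} 1 & 2 & 1 \\ 1 & 3 & 2 \\ 1 & 4 & 4 \\ \end{array} \right), \nonumber \end{eqnarray} so that $Tr(A^{2}_{3})=8=F_{6}/F_{2}=F^{(2)}_{3}$ and $\mbox{det}(A^{2}_{3})=1=(-1)^{2 \cdot 3}$, in agreement with $(\ref{invarianttheoremequation1})$ and $(\ref{invarianttheoremequation2})$.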
The above theorem represents the Fibonacci divisors $F^{(k)}_{n+1}$ in terms of the combinatorial matrix $A_{n+1}$. A quantum calculus for such divisors was constructed recently in \cite{FNK}. As was shown there, it is related to several problems from hydrodynamics, quantum integrable systems and quantum information theory. This is why the results of the present paper can be useful in the study of this calculus and its applications.
\end{document}
\begin{document}
\title[Steklov eigenvalues]
{Neumann to Steklov eigenvalues: asymptotic and monotonicity results}
\author{Pier Domenico Lamberti}
\address{ Dipartimento di Matematica\\ Universit\`a degli Studi di Padova\\ Via Trieste, 63\\
35126 Padova\\ Italy}
\email{[email protected]}
\author{Luigi Provenzano} \address{Dipartimento di Matematica\\ Universit\`a degli Studi di Padova\\ Via Trieste, 63\\
35126 Padova\\ Italy} \email{[email protected]}
\subjclass[2010]{Primary 35C20; Secondary 35P15, 35B25, 35J25, 33C10. }
\keywords{Steklov boundary conditions, eigenvalues, density perturbation, monotonicity, Bessel functions.}
\begin{abstract} We consider the Steklov eigenvalues of the Laplace operator as limiting Neumann eigenvalues in a problem of mass concentration at the boundary of a ball. We discuss the asymptotic behavior of the Neumann eigenvalues and find explicit formulas for their derivatives at the limiting problem. We deduce that the Neumann eigenvalues have a monotone behavior in the limit and that Steklov eigenvalues locally minimize the Neumann eigenvalues. \end{abstract}
\maketitle
\section{Introduction}\label{intro}
Let $B$ be the unit ball in $\mathbb R^N$, $N\geq 2$, centered at zero. We consider the Steklov eigenvalue problem for the Laplace operator
\begin{equation}\label{Ste} \left\{\begin{array}{ll} \Delta u =0 ,\ \ & {\rm in}\ B,\\ \frac{\partial u}{\partial \nu }=\lambda\rho u ,\ \ & {\rm on}\ \partial B, \end{array}\right. \end{equation} in the unknowns $\lambda$ (the eigenvalue) and $u$ (the eigenfunction), where $\rho =M/\sigma_N$, $M>0$ is a fixed constant, and $\sigma_N$ denotes the surface measure of $\partial B$.
As is well-known the eigenvalues of problem (\ref{Ste}) are given explicitly by the sequence \begin{equation}\label{intro1} \lambda_l=\frac{l}{\rho},\ \ \ \ l\in {\mathbb{N}}, \end{equation} and the eigenfunctions corresponding to $\lambda_l$ are the homogeneous harmonic polynomials of degree $l$. In particular, the multiplicity of $\lambda_l$ is $(2l+N-2) (l+N-3)! /(l!(N-2)!)$, and only $\lambda_0$ is simple, the corresponding eigenfunctions being the constant functions. See \cite{folland} for an introduction to the theory of harmonic polynomials.
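For example, for $N=3$ the multiplicity of $\lambda_l$ equals $(2l+1)\, l!/(l!\, 1!)=2l+1$, the dimension of the space of spherical harmonics of degree $l$, while for $N=2$ and $l\geq 1$ it equals $2$, the corresponding eigenfunctions being $r^l\cos (l\theta)$ and $r^l\sin (l\theta)$ in polar coordinates.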
A classical reference for problem (\ref{Ste}) is \cite{stek}. For a recent survey paper, we refer to \cite{gipo}; see also \cite{lambertisteklov}, \cite{lapro} for related problems.
It is well-known that for $N=2$, problem (\ref{Ste}) provides the vibration modes of a free elastic membrane whose total mass $M$ is concentrated at the boundary with density $\rho$; see e.g., \cite{bandle}. As is pointed out in \cite{lapro}, such a boundary concentration phenomenon can be explained in any dimension $N\geq 2$ as follows.
For any $0< \varepsilon <1$, we define a `mass density' $\rho_{\varepsilon}$ in the whole of $B$ by setting
\begin{equation}\label{densitaintro} \rho_{\varepsilon}(x)=\left\{ \begin{array}{ll}
\varepsilon,& {\rm if\ }|x|\le 1-\varepsilon ,\\
\frac{M-\varepsilon \omega_N (1-\varepsilon )^N}{\omega_N (1-(1-\varepsilon)^N ) },& {\rm if\ }1-\varepsilon <|x|<1, \end{array} \right. \end{equation} where $\omega_N=\sigma_N/N$ is the measure of the unit ball. Note that for any $x\in B $ we have $\rho_{\varepsilon}(x)\to 0$ as $\varepsilon \to 0$, and $\int_{B}\rho_{\varepsilon}dx=M$ for all $\varepsilon>0$, which means that the `total mass' M is fixed and concentrates at the boundary of $B$ as $\varepsilon \to 0$. Then we consider the following eigenvalue problem for the Laplace operator with Neumann boundary conditions \begin{equation}\label{Neu} \left\{\begin{array}{ll} -\Delta u =\lambda \rho_{\varepsilon} u,\ \ & {\rm in}\ B,\\ \frac{\partial u}{\partial \nu }=0 ,\ \ & {\rm on}\ \partial B. \end{array}\right. \end{equation} We recall that for $N=2$ problem (\ref{Neu}) provides the vibration modes of a free elastic membrane with mass density $\rho_{\varepsilon }$ and total mass $M$ (see e.g., \cite{cohi}). The eigenvalues of (\ref{Neu}) have finite multiplicity and form a sequence $$ \lambda_0(\varepsilon)<\lambda_1(\varepsilon)\leq\lambda_2(\varepsilon)\leq\cdots, $$ depending on $\varepsilon$, with $\lambda_0(\varepsilon)=0$.
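We also note that the normalization $\int_{B}\rho_{\varepsilon}dx=M$ can be verified directly: since the ball $\{|x|\le 1-\varepsilon\}$ has measure $\omega_N(1-\varepsilon)^N$ and the annulus $\{1-\varepsilon<|x|<1\}$ has measure $\omega_N\left(1-(1-\varepsilon)^N\right)$, we have \begin{equation*} \int_{B}\rho_{\varepsilon}dx=\varepsilon\,\omega_N(1-\varepsilon)^N+\frac{M-\varepsilon \omega_N (1-\varepsilon )^N}{\omega_N (1-(1-\varepsilon)^N ) }\,\omega_N\left(1-(1-\varepsilon)^N\right)=M. \end{equation*}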
It is not difficult to prove that for any $l\in {\mathbb{N}}$ \begin{equation}\label{intro1,5}
\lambda_l(\varepsilon)\to \lambda_l,\ \ {\rm as}\ \varepsilon \to 0, \end{equation} see \cite{arr}, \cite{lapro}. (See also \cite{buosoprovenzano} for a detailed analysis of the analogue problem for the biharmonic operator.) Thus the Steklov problem can be considered as a limiting Neumann problem where the mass is concentrated at the boundary of the domain.
In this paper we study the asymptotic behavior of $\lambda_l(\varepsilon)$ as $\varepsilon\rightarrow 0$. Namely, we prove that such eigenvalues are continuously differentiable with respect to $\varepsilon$ for $\varepsilon \geq 0$ small enough, and that the following formula holds \begin{equation}\label{intro2} \lambda'_l(0)= \frac{2l\lambda_l}{3}+\frac{2\lambda^2_l}{N(2l+N)}. \end{equation} In particular, for $l\ne 0$, $\lambda'_l(0)>0$ hence $\lambda_l(\varepsilon )$ is strictly increasing and the Steklov eigenvalues $\lambda_l$ minimize the Neumann eigenvalues $\lambda_l(\varepsilon)$ for $\varepsilon $ small enough.
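For example, for $N=2$ and $M=2\pi$ (so that $\rho =1$), formula (\ref{intro1}) gives $\lambda_l=l$ and (\ref{intro2}) reduces to $\lambda'_l(0)=\frac{2l^2}{3}+\frac{l^2}{2(l+1)}$; in particular, $\lambda'_1(0)=\frac{2}{3}+\frac{1}{4}=\frac{11}{12}$.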
It is interesting to compare our results with those in \cite{niwa}, where the authors consider the Neumann Laplacian in the annulus $1-\varepsilon <|x|<1$ and prove that for $N=2$ the first positive eigenvalue is a decreasing function of $\varepsilon$. We note that our analysis concerns all eigenvalues $\lambda_l$, with arbitrary index and multiplicity, and that we do not prove global monotonicity of $\lambda_l(\varepsilon)$, which in fact does not hold in general; see Figures \ref{fig1}, \ref{fig2}.
The proof of our results relies on the use of Bessel functions, which allows us to recast problem (\ref{Neu}) in the form of an equation $F(\lambda , \varepsilon)=0$ in the unknowns $\lambda ,\varepsilon$. Then, after some preparatory work, it is possible to apply the Implicit Function Theorem and conclude. We note that, although the idea of the proof is rather simple and has been used also in other contexts (see e.g., \cite{lape}), the rigorous application of this method requires lengthy computations, suitable Taylor expansions and estimates for the corresponding remainders, as well as recursive formulas for the cross-products of Bessel functions and their derivatives.
Importantly, the multiplicity of the eigenvalues, which is often an obstruction in the application of standard asymptotic analysis, does not affect our method.
We note that if the ball $B$ is replaced by a general bounded smooth domain $\Omega$, the convergence of the Neumann eigenvalues to the Steklov eigenvalues when the mass concentrates in a neighborhood of $\partial \Omega $ still holds. However, the explicit computation of the appropriate formula generalizing (\ref{intro2}) is not easy and requires a completely different technique which will be discussed in a forthcoming paper.
We also note that an asymptotic analysis of similar but different problems is contained in \cite{naza1, naza2}, where by the way explicit computations of the coefficients in the asymptotic expansions of the eigenvalues are not provided.
It would be interesting to investigate the monotonicity properties of the Neumann eigenvalues in the case of more general families of mass densities $\rho_{\varepsilon}$. However, we believe that it would be difficult to adapt our method (which is based on explicit representation formulas) even in the case of radial mass densities (note that if $\rho_{\varepsilon}$ is not radial one could obtain a limiting Steklov-type problem with non-constant mass density, see \cite{arr} for a general discussion).
This paper is organized as follows. The proof of formula (\ref{intro2}) is discussed in Section \ref{sec:2}. In particular, Subsection \ref{sec:3} is devoted to certain technical estimates which are necessary for the rigorous justification of our arguments. In Subsection \ref{sec:4} we also consider the case $N=1$ and prove formula (\ref{intro2}) for $\lambda_1$, which is in fact the only nonzero eigenvalue of the one-dimensional Steklov problem. In the Appendix we establish the required recursive formulas for the cross-products of Bessel functions and their derivatives, which are deduced from the standard formulas available in the literature.
\section{Asymptotic behavior of Neumann eigenvalues}\label{sec:2}
It is convenient to use the standard spherical coordinates $(r,\theta)$ in ${\mathbb{R}}^N$, where $\theta=(\theta_1,...,\theta_{N-1})$. The corresponding transformation of coordinates is \begin{eqnarray*} x_1&=&r \cos(\theta_1),\\ x_2&=&r\sin(\theta_1)\cos(\theta_2),\\ \vdots\\ x_{N-1}&=&r\sin(\theta_1)\sin(\theta_2)\cdots\sin(\theta_{N-2})\cos(\theta_{N-1}),\\ x_N&=&r\sin(\theta_1)\sin(\theta_2)\cdots\sin(\theta_{N-2})\sin(\theta_{N-1}), \end{eqnarray*} with $\theta_1,...,\theta_{N-2}\in [0,\pi]$, $\theta_{N-1}\in [0,2\pi[$ (here it is understood that $\theta_1\in [0,2\pi[$ if $N=2$). We denote by $\delta$ the Laplace-Beltrami operator on the unit sphere $\mathbb S^{N-1}$ of $\mathbb R^N$, which can be written in spherical coordinates as \begin{equation*} \delta=\sum_{j=1}^{N-1}\frac{1}{q_j(\sin{\theta_j})^{N-j-1}}\frac{\partial}{\partial\theta_j}\left((\sin{\theta_j})^{N-j-1}\frac{\partial}{\partial\theta_j}\right), \end{equation*} where $$ q_1=1,\ \ \ \ q_j=(\sin{\theta_1}\sin{\theta_2}\cdots\sin{\theta_{j-1}})^2,\ \ \ \ j=2,...,N-1, $$ see e.g., \cite[p. 40]{koz}.
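In particular, for $N=2$ the formula above reduces to $\delta=\frac{\partial^2}{\partial \theta_1^2}$, the second derivative with respect to the angular variable.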
To shorten notation, in what follows we will denote by $a$ and $b$ the quantities defined by $$a=\sqrt{\lambda\varepsilon}(1-\varepsilon),\ \ {\rm and}\ \ b=\sqrt{\lambda\tilde\rho_{\varepsilon}}(1-\varepsilon),$$ where \begin{equation*} \tilde\rho_{\varepsilon}=\frac{M-\varepsilon{\omega_N}\left(1-\varepsilon\right)^N}{\omega_N\left(1-\left(1-\varepsilon\right)^N\right)}. \end{equation*} As customary, we denote by $J_{\nu}$ and $Y_{\nu}$ the Bessel functions of the first and second kind and order $\nu$ respectively (recall that $J_{\nu}$ and $Y_{\nu}$ are solutions of the Bessel equation $ z^2y''(z)+zy'(z)+(z^2-\nu^2)y(z)=0 $).
We begin with the following lemma.
\begin{lem}\label{solutionsN} Given an eigenvalue $\lambda$ of problem (\ref{Neu}), a corresponding eigenfunction $u$ is of the form $u(r,\theta)=S_l(r)H_l(\theta)$ where $H_l(\theta)$ is a spherical harmonic of some order $l\in\mathbb N$ and \small \begin{equation}\label{R} S_l(r)=\left\{ \begin{array}{ll} r^{1-\frac{N}{2}}J_{\nu_l}(\sqrt{\lambda\varepsilon} r),& {\rm if\ }r<1-\varepsilon , \\\ \\ r^{1-\frac{N}{2}}\left(\alpha J_{\nu_l}(\sqrt{\lambda\tilde\rho_{\varepsilon}} r)+\beta Y_{\nu_l}(\sqrt{\lambda\tilde\rho_{\varepsilon}} r)\right),& {\rm if\ }1-\varepsilon<r<1, \end{array} \right. \end{equation} \normalsize where $\nu_l=\frac{(N+2l-2)}{2}$ and $\alpha$, $\beta$ are given by \begin{eqnarray*} {\alpha}={\frac{\pi b}{2}}\left({J_{\nu_l}(a)Y_{\nu_l}'(b)-\frac{a}{b}J_{\nu_l}'(a)Y_{\nu_l}(b)}\right) ,\\ {\beta}={\frac{\pi b}{2}}\left({{\frac{a}{b}}J_{\nu_l}(b)J_{\nu_l}'(a)-J_{\nu_l}'(b)J_{\nu_l}(a)}\right).\nonumber \end{eqnarray*} \end{lem} \proof Recall that the Laplace operator can be written in spherical coordinates as $$\Delta=\partial_{rr}+\frac{N-1}{r}\partial_r+\frac{1}{r^2}\delta.$$ In order to solve the equation $-\Delta u= \lambda \rho_{\varepsilon}u$, we separate variables so that $u(r,\theta)=S(r)H(\theta)$. Then using $l(l+N-2)$, $l\in\mathbb N$, as separation constant, we obtain the equations \begin{equation}\label{radial} r^2S''+r(N-1)S'+r^2\lambda\rho_{\varepsilon}S-l(l+N-2)S=0 \end{equation} and \begin{equation}\label{angular} -\delta H=l(l+N-2)H. \end{equation} By setting $S(r)=r^{1-\frac{N}{2}}\tilde S(r)$ into (\ref{radial}), it follows that $\tilde S(r)$ satisfies the Bessel equation \begin{equation*} \tilde S''+\frac{1}{r}\tilde S'+\left(\lambda\rho_{\varepsilon}-\frac{\nu_l^2}{r^2}\right)\tilde S=0. \end{equation*} Since solutions $u$ of (\ref{Neu}) are bounded on $\Omega$ and $Y_{\nu_l}(z)$ blows up at $z=0$, it follows that for $r<1-\varepsilon$, $S(r)$ is a multiple of the function $r^{1-\frac{N}{2}}J_{\nu_l}(\sqrt{\lambda\varepsilon}r)$. For $1-\varepsilon<r<1$, $S(r)$ is a linear combination of the functions $r^{1-\frac{N}{2}}J_{\nu_l}(\sqrt{\lambda\tilde{\rho_{\varepsilon}}}r)$ and $ r^{1-\frac{N}{2}}Y_{\nu_l}(\sqrt{\lambda\tilde{\rho_{\varepsilon}}}r)$. On the other hand, the solutions of (\ref{angular}) are the spherical harmonics of order $l$. Then $u$ can be written as in (\ref{R}) for suitable values of $\alpha , \beta\in {\mathbb{R}}$.
Now we compute the coefficients $\alpha$ and $\beta$ in (\ref{R}). Since the right-hand side of the equation in (\ref{Neu}) is a function in $L^2(\Omega)$ then by standard regularity theory a solution $u$ of (\ref{Neu}) belongs to the standard Sobolev space $H^2(\Omega)$, hence $\alpha$ and $\beta$ must be chosen in such a way that $u$ and $\partial_r u$ are continuous at $r=1-\varepsilon$, that is \begin{equation*}\left\{ \begin{array}{ll} \alpha J_{\nu_l}(\sqrt{\lambda\tilde\rho_{\varepsilon}}(1-\varepsilon))+\beta Y_{\nu_l}(\sqrt{\lambda\tilde\rho_{\varepsilon}}(1-\varepsilon))=J_{\nu_l}(\sqrt{\lambda\varepsilon}(1-\varepsilon))\,,\\ \alpha J_{\nu_l}'(\sqrt{\lambda\tilde\rho_{\varepsilon}}(1-\varepsilon))+\beta Y_{\nu_l}'(\sqrt{\lambda\tilde\rho_{\varepsilon}}(1-\varepsilon))=\sqrt{\frac{\varepsilon}{\tilde\rho_{\varepsilon}}}J_{\nu_l}'(\sqrt{\lambda\varepsilon}(1-\varepsilon))\,. \end{array} \right. \end{equation*} Solving the system we obtain
\begin{eqnarray*} \alpha=\frac{J_{\nu_l}(a)Y_{\nu_l}'(b)-\frac{a}{b}J_{\nu_l}'(a)Y_{\nu_l}(b)}{J_{\nu_l}(b)Y_{\nu_l}'(b)-J_{\nu_l}'(b)Y_{\nu_l}(b)}\ ,\ \ \ \ \beta=\frac{\frac{a}{b}J_{\nu_l}(b)J_{\nu_l}'(a)-J_{\nu_l}'(b)J_{\nu_l}(a)}{J_{\nu_l}(b)Y_{\nu_l}'(b)-J_{\nu_l}'(b)Y_{\nu_l}(b)}. \end{eqnarray*} Note that $J_{\nu_l}(b)Y_{\nu_l}'(b)-J_{\nu_l}'(b)Y_{\nu_l}(b)$ is the Wronskian in $b$, which is known to be $\frac{2}{\pi b}$ (see \cite[\S 9]{abram}). This concludes the proof. \endproof
We are ready to establish an implicit characterization of the eigenvalues of (\ref{Neu}).
\begin{prop}\label{implicit1N} The nonzero eigenvalues $\lambda$ of problem (\ref{Neu}) are given implicitly as zeros of the equation \begin{equation}\label{implicitformula1N} \left(1-\frac{N}{2}\right)P_1(a,b)+\frac{b}{(1-\varepsilon)}P_2(a,b)=0 \end{equation} where
\normalsize \begin{eqnarray*} P_1(a,b)&=&J_{\nu_l}(a)\left(Y_{\nu_l}'(b)J_{\nu_l}(\frac{b}{1{-}\varepsilon}){-}J_{\nu_l}'(b)Y_{\nu_l}(\frac{b}{1{-}\varepsilon})\right)\\ &{+}&\frac{a}{b}J_{\nu_l}'(a)\left(J_{\nu_l}(b)Y_{\nu_l}(\frac{b}{1{-}\varepsilon}){-}Y_{\nu_l}(b)J_{\nu_l}(\frac{b}{1{-}\varepsilon})\right),\\ P_2(a,b)&=&J_{\nu_l}(a)\left(Y_{\nu_l}'(b)J_{\nu_l}'(\frac{b}{1{-}\varepsilon}){-}J_{\nu_l}'(b)Y_{\nu_l}'(\frac{b}{1{-}\varepsilon})\right)\\ &{+}&\frac{a}{b}J_{\nu_l}'(a)\left(J_{\nu_l}(b)Y_{\nu_l}'(\frac{b}{1{-}\varepsilon}){-}Y_{\nu_l}(b)J_{\nu_l}'(\frac{b}{1{-}\varepsilon})\right). \end{eqnarray*} \normalsize
\proof By Lemma \ref{solutionsN}, an eigenfunction $u$ associated with an eigenvalue $\lambda $ is of the form $u(r,\theta )=S_l(r)H_l(\theta)$ where for $r>1-\varepsilon$ \normalsize \begin{eqnarray*} S_l(r)&{=}&\frac{\pi b}{2}r^{1{-}\frac{N}{2}}\left[\left(J_{\nu_l}(a)Y_{\nu_l}'(b){-}\frac{a}{b}J_{\nu_l}'(a)Y_{\nu_l}(b)\right)J_{\nu_l}(\frac{b r}{1-\varepsilon})\right.\\ &{+}&\left.\left(\frac{a}{b}J_{\nu_l}(b)J_{\nu_l}'(a){-}J_{\nu_l}'(b)J_{\nu_l}(a)\right)Y_{\nu_l}(\frac{b r}{1-\varepsilon})\right]. \end{eqnarray*}
\normalsize We require that $\frac{\partial u}{\partial\nu}=\frac{\partial u}{\partial r}_{|_{r=1}}=0$, which is true if and only if \normalsize \begin{multline*} \frac{\pi b}{2}\left(1{-}\frac{N}{2}\right)\left[\left(J_{\nu_l}(a)Y_{\nu_l}'(b){-}\frac{a}{b}J_{\nu_l}'(a)Y_{\nu_l}(b)\right)J_{\nu_l}(\frac{b}{1-\varepsilon})\right.\\ {+}\left.\left(\frac{a}{b}J_{\nu_l}(b)J_{\nu_l}'(a){-}J_{\nu_l}'(b)J_{\nu_l}(a)\right)Y_{\nu_l}(\frac{b}{1-\varepsilon})\right]\\ {+}\frac{\pi b^2}{2(1-\varepsilon)}\left[\left(J_{\nu_l}(a)Y_{\nu_l}'(b){-}\frac{a}{b}J_{\nu_l}'(a)Y_{\nu_l}(b)\right)J_{\nu_l}'(\frac{b}{1-\varepsilon})\right.\\ {+}\left.\left(\frac{a}{b}J_{\nu_l}(b)J_{\nu_l}'(a){-}J_{\nu_l}'(b)J_{\nu_l}(a)\right)Y_{\nu_l}'(\frac{b}{1-\varepsilon})\right]=0. \end{multline*} \normalsize The previous equation can be clearly rewritten in the form (\ref{implicitformula1N}). \endproof \end{prop}
We now prove the following.
\begin{lem}\label{implicitepsN} Equation (\ref{implicitformula1N}) can be written in the form \begin{eqnarray}\label{simplified}\lefteqn{ \lambda^2\varepsilon\left(\frac{M}{3N\omega_N}-\frac{1}{\nu_l(1+\nu_l)}\right) +\lambda\varepsilon \left( \frac{N}{2}-\nu_l+\frac{(2-N)N\omega_N}{2\nu_l(1+\nu_l)M} \right) -2\lambda + \frac{2N\omega_Nl}{M} }\nonumber \\ & &\qquad\qquad\qquad\qquad\qquad\qquad\quad-\frac{2N\omega_Nl}{M}\left(\frac{N-1}{2}-\frac{\omega_N}{M} -\nu_l \right)\varepsilon + \mathcal R(\lambda,\varepsilon)=0 \end{eqnarray} where $\mathcal R(\lambda , \varepsilon )=O(\varepsilon \sqrt {\varepsilon})$ as $\varepsilon\to 0$. \proof
We plan to divide the left-hand side of (\ref{implicitformula1N}) by $J_{\nu_l}'(a)$ and to analyze the resulting terms using the known Taylor's series for Bessel functions. Note that $J_{\nu_l}'(a)>0$ for all $\varepsilon$ small enough. We split our analysis into three steps.
{\it Step 1.} We consider the term $\frac{P_2(a,b)}{J_{\nu_l}'(a)}$, that is \small \begin{multline}\label{implicit2} \frac{J_{\nu_l}(a)}{J_{\nu_l}'(a)}\left[Y_{\nu_l}'(b)J_{\nu_l}'(\frac{b}{1-\varepsilon})-Y_{\nu_l}'(\frac{b}{1-\varepsilon})J_{\nu_l}'(b)\right]\\ +\frac{a}{b}\left[Y_{\nu_l}'(\frac{b}{1-\varepsilon})J_{\nu_l}(b)-Y_{\nu_l}(b)J_{\nu_l}'(\frac{b}{1-\varepsilon})\right]. \end{multline} \normalsize
Using Taylor's formula, we write the derivatives of the Bessel functions in (\ref{implicit2}), call them ${\mathcal{C}}'_{\nu_l}$, as follows \begin{equation}\label{tayc} {\mathcal{C}}'_{\nu_l}\left(\frac{b}{1-\varepsilon} \right)={\mathcal{C}}'_{\nu_l}(b)+{\mathcal{C}}''_{\nu_l}(b)\frac{\varepsilon b}{1-\varepsilon}+\dots +\frac{{\mathcal{C}}^{(n)}_{\nu_l}(b)}{(n-1)!}\left(\frac{\varepsilon b}{1-\varepsilon}\right)^{n-1}+o\left(\frac{\varepsilon b}{1-\varepsilon}\right)^{n-1}. \end{equation} Then, using (\ref{tayc}) with $n=4$ for $J_{\nu_l}'$ and $Y_{\nu_l}'$ we get \small \begin{multline}\label{numero1} \frac{J_{\nu_l}(a)}{J_{\nu_l}'(a)}\left[\frac{\varepsilon b}{1-\varepsilon}\left(Y_{\nu_l}'(b)J_{\nu_l}''(b)-J_{\nu_l}'(b)Y_{\nu_l}''(b)\right)\right. +\frac{\varepsilon^2 b^2}{2(1-\varepsilon)^2}\left(Y_{\nu_l}'(b)J_{\nu_l}'''(b)-J_{\nu_l}'(b)Y_{\nu_l}'''(b)\right)\\ \left.+\frac{\varepsilon^3 b^3}{6(1-\varepsilon)^3}\left(Y_{\nu_l}'(b)J_{\nu_l}''''(b)-J_{\nu_l}'(b)Y_{\nu_l}''''(b)\right)+R_1(b)\right]\\ +\frac{a}{b}\left[\left(J_{\nu_l}(b)Y_{\nu_l}'(b)-Y_{\nu_l}(b)J_{\nu_l}'(b)\right)+\frac{\varepsilon b}{1-\varepsilon}\left(J_{\nu_l}(b)Y_{\nu_l}''(b)-Y_{\nu_l}(b)J_{\nu_l}''(b)\right)\right.\\ \left.+\frac{\varepsilon^2 b^2}{2(1-\varepsilon)^2}\left(J_{\nu_l}(b)Y_{\nu_l}'''(b)-Y_{\nu_l}(b)J_{\nu_l}'''(b)\right)+R_2(b)\right], \end{multline} \normalsize where \begin{equation} \label{erre1} R_1(b)=\sum_{k=4}^{+\infty}\frac{\varepsilon^kb^k}{k!(1-\varepsilon)^k}\biggl(Y'_{\nu_{l}}(b)J^{(k+1)}_{\nu_l}(b)-J'_{\nu_l}(b) Y_{\nu_{l}}^{(k+1)} (b) \biggr)\, \end{equation} and \begin{equation} \label{erre2} R_2(b)=\sum_{k=3}^{+\infty}\frac{\varepsilon^kb^k}{k!(1-\varepsilon)^k}\biggl(J_{\nu_{l}}(b)Y^{(k+1)}_{\nu_l}(b)-Y_{\nu_l}(b) J_{\nu_{l}}^{(k+1)} (b)\biggr)\, . \end{equation} Let $ R_3$ be the remainder defined in Lemma \ref{Jalemma}. We set \small \begin{multline}\label{numero2} R(\lambda,\varepsilon)=R_3(a)\left[\frac{\varepsilon b}{1-\varepsilon}\left(Y_{\nu_l}'(b)J_{\nu_l}''(b){-}J_{\nu_l}'(b)Y_{\nu_l}''(b)\right)\right.\\ {+}\left.\frac{\varepsilon^2 b^2}{2(1-\varepsilon)^2}\left(Y_{\nu_l}'(b)J_{\nu_l}'''(b){-}J_{\nu_l}'(b)Y_{\nu_l}'''(b)\right)\right.\\ \left.{+}\frac{\varepsilon^3 b^3}{6(1-\varepsilon)^3}\left(Y_{\nu_l}'(b)J_{\nu_l}''''(b){-}J_{\nu_l}'(b)Y_{\nu_l}''''(b)\right)\right]\\ {+}R_1(b)\left[\frac{a}{{\nu_l}}{+}\frac{a^3}{2{\nu_l}^2(1+{\nu_l})}\right] {+}R_2(b)\frac{a}{b}{+}R_3(a)R_1(b). \end{multline} \normalsize By Lemma \ref{remainder2}, it turns out that $R(\lambda,\varepsilon)= O(\varepsilon^3)$ as $\varepsilon\rightarrow 0$.
We also set \begin{eqnarray*} &&f(\varepsilon)=b_1^2(\varepsilon)a_1^3(\varepsilon)f_1(\varepsilon);\\ &&g(\varepsilon)=b_1^2(\varepsilon)a_1(\varepsilon)g_1(\varepsilon)+a_1^3(\varepsilon)g_2(\varepsilon);\\ &&h(\varepsilon)=a_1(\varepsilon)h_1(\varepsilon)+\varepsilon^2\frac{a_1^3(\varepsilon)}{b_1^2(\varepsilon)}h_2(\varepsilon);\\ &&k(\varepsilon)=\frac{a_1(\varepsilon)}{b_1^2(\varepsilon)}k_1(\varepsilon), \end{eqnarray*} where \small \begin{eqnarray*} &&a_1(\varepsilon)=\frac{a}{\sqrt{\lambda \varepsilon}} =(1-\varepsilon);\\ &&b_1(\varepsilon)= b\sqrt{\frac{\varepsilon }{\lambda }} ;\\ &&f_1(\varepsilon)=\frac{1}{6{\nu_l}^2(1+{\nu_l})(1-\varepsilon)^3};\\ &&g_1(\varepsilon)=\frac{1}{3{\nu_l}(1-\varepsilon)^3};\\ &&g_2(\varepsilon)=-\frac{1}{{\nu_l}^2(1+{\nu_l})(1-\varepsilon)}+\frac{\varepsilon}{2{\nu_l}^2(1+{\nu_l})(1-\varepsilon)^2}-\frac{\varepsilon^2(3+2{\nu_l}^2)}{6{\nu_l}^2(1+{\nu_l})(1-\varepsilon)^3};\\ &&h_1(\varepsilon)=-\frac{2}{{\nu_l}(1-\varepsilon)}+\frac{\varepsilon}{{\nu_l}(1-\varepsilon)^2}-\frac{\varepsilon^2(3+2{\nu_l}^2)}{3{\nu_l}(1-\varepsilon)^3}-\frac{\varepsilon}{(1-\varepsilon)^2};\\ &&h_2(\varepsilon)=\frac{1}{(1+{\nu_l})(1-\varepsilon)}-\frac{3\varepsilon}{2(1+{\nu_l})(1-\varepsilon)^2}+\frac{\varepsilon^2({\nu_l}^4+11{\nu_l}^2)}{6{\nu_l}^2(1+{\nu_l})(1-\varepsilon)^3};\\ &&k_1(\varepsilon)=2+\frac{2\varepsilon {\nu_l}}{(1-\varepsilon)}-\frac{3\varepsilon^2 {\nu_l}}{(1-\varepsilon)^2}+\frac{\varepsilon^3({\nu_l}^4+11{\nu_l}^2)}{3 \nu_l(1-\varepsilon)^3}-\frac{2\varepsilon}{(1-\varepsilon)}+\frac{\varepsilon^2(2+{\nu_l}^2)}{(1-\varepsilon)^2}.\\ \end{eqnarray*} \normalsize Note that functions $f,g,h,k$ are continuous at $\varepsilon =0$ and $f(0), g(0), h(0), k(0)\ne 0$.
Using the explicit formulas for the cross products of Bessel functions given by Lemma \ref{inducross} and Corollary \ref{formulaused} in (\ref{numero1}), (\ref{implicit2}) can be written as \begin{equation}\label{implicit5} \frac{1}{\sqrt{\lambda}\pi}\varepsilon\sqrt{\varepsilon}k(\varepsilon)+\frac{\sqrt{\lambda}}{\pi}\varepsilon\sqrt{\varepsilon}h(\varepsilon)+\frac{\lambda\sqrt{\lambda}}{\pi}\varepsilon^2\sqrt{\varepsilon}g(\varepsilon)+\frac{\lambda^2\sqrt{\lambda}}{\pi}\varepsilon^3\sqrt{\varepsilon}f(\varepsilon)+R(\lambda,\varepsilon). \end{equation}
{\it Step 2.} We consider the quantity $\frac{P_1(a,b)}{J_{\nu_l}'(a)}$, that is \small \begin{multline}\label{firstsum} \frac{J_{\nu_l}(a)}{J_{\nu_l}'(a)}\left[Y_{\nu_l}'(b)J_{\nu_l}(\frac{b}{1-\varepsilon})-J_{\nu_l}'(b)Y_{\nu_l}(\frac{b}{1-\varepsilon})\right]\\ +\frac{a}{b}\left[J_{\nu_l}(b)Y_{\nu_l}(\frac{b}{1-\varepsilon})-Y_{\nu_l}(b)J_{\nu_l}(\frac{b}{1-\varepsilon})\right]. \end{multline} \normalsize Proceeding as in Step 1 and setting
\small \begin{eqnarray*} &&\tilde f(\varepsilon)=-\frac{a_1^3(\varepsilon)b_1(\varepsilon)}{2\pi{\nu_l}^2(1+{\nu_l})(1-\varepsilon)^2};\\ &&\tilde g(\varepsilon)=\frac{a_1^3(\varepsilon)}{b_1(\varepsilon)}\left(\frac{1}{\pi{\nu_l}^2(1+{\nu_l})}+\frac{\varepsilon^2}{2\pi(1+{\nu_l})(1-\varepsilon)^2}\right)-\frac{a_1(\varepsilon)b_1(\varepsilon)}{{\nu_l}\pi(1-\varepsilon)^2};\\ &&\tilde h(\varepsilon)=\frac{a_1(\varepsilon)}{b_1(\varepsilon)}\left(\frac{2}{{\nu_l}\pi}+\frac{2\varepsilon}{\pi(1-\varepsilon)}+\frac{({\nu_l}-1)}{\pi(1-\varepsilon)^2}\varepsilon^2\right),\\ \end{eqnarray*} \normalsize one can prove that (\ref{firstsum}) can be written as \begin{equation}\label{part1} \varepsilon \tilde h(\varepsilon)+\lambda\varepsilon^2 \tilde g(\varepsilon)+\lambda^2\varepsilon^3 \tilde f(\varepsilon)+\hat R(\lambda,\varepsilon), \end{equation} where $\hat R({\lambda,\varepsilon})= O(\varepsilon^2\sqrt{\varepsilon})$ as $\varepsilon\rightarrow 0$; see Lemma \ref{remainder2}.
{\it Step 3.}
We combine (\ref{implicit5}) and (\ref{part1}) and rewrite equation (\ref{implicitformula1N}) in the form \small \begin{multline} \varepsilon(1-\frac{N}{2})\tilde h(\varepsilon)+\varepsilon\frac{ b_1(\varepsilon)k(\varepsilon)}{\pi (1-\varepsilon)}+\lambda\varepsilon^2(1-\frac{N}{2}) \tilde g(\varepsilon)+\lambda\varepsilon\frac{b_1(\varepsilon)h(\varepsilon)}{\pi (1-\varepsilon)}\label{implicitN}\\ +\lambda^2\varepsilon^3(1-\frac{N}{2})\tilde f(\varepsilon)+\lambda^2\varepsilon^2\frac{b_1(\varepsilon)g(\varepsilon)}{\pi (1-\varepsilon)}+\lambda^3\varepsilon^3\frac{b_1(\varepsilon)f(\varepsilon)}{\pi (1-\varepsilon)}+\mathcal R_0(\lambda,\varepsilon)=0, \end{multline} \normalsize where \begin{eqnarray*} &&\mathcal R_0(\lambda,\varepsilon)=\frac{\sqrt{\lambda}b_1(\varepsilon)}{(1-\varepsilon)\sqrt{\varepsilon}}R(\lambda,\varepsilon)+\left(1-\frac{N}{2}\right)\hat R(\lambda,\varepsilon). \end{eqnarray*} Note that $\mathcal R_0(\lambda,\varepsilon) =O(\varepsilon^2\sqrt{\varepsilon})$ as $\varepsilon\rightarrow 0$. Dividing by $\varepsilon $ in (\ref{implicitN}) and setting $\mathcal R_1(\lambda,\varepsilon)=\frac{\mathcal R_0(\lambda,\varepsilon)}{\varepsilon}$, we obtain \begin{eqnarray} &&(1-\frac{N}{2})\tilde h(\varepsilon)+\frac{ b_1(\varepsilon)k(\varepsilon)}{\pi (1-\varepsilon)}+\lambda\varepsilon(1-\frac{N}{2}) \tilde g(\varepsilon)+\lambda\frac{b_1(\varepsilon)h(\varepsilon)}{\pi (1-\varepsilon)}\label{implicitN1}\\ &&+\lambda^2\varepsilon^2(1-\frac{N}{2})\tilde f(\varepsilon)+\lambda^2\varepsilon\frac{b_1(\varepsilon)g(\varepsilon)}{\pi (1-\varepsilon)}+\lambda^3\varepsilon^2\frac{b_1(\varepsilon)f(\varepsilon)}{\pi (1-\varepsilon)}+\mathcal R_1(\lambda,\varepsilon)=0\nonumber . \end{eqnarray}
We now multiply in \eqref{implicitN1} by $\frac{\pi\nu_l (1-\varepsilon) }{ b_1(\varepsilon)}$ which is a positive quantity for all $0<\varepsilon<1$. Taking into account the definitions of functions $g,h,k, \tilde g,\tilde h$, we can finally rewrite (\ref{implicitN1}) in the form \begin{eqnarray}\lefteqn{ \lambda^2\varepsilon\left(\frac{\hat\rho (\varepsilon )}{3}-\frac{1}{\nu_l(1+\nu_l)}\right) +\lambda\varepsilon \left( \frac{N}{2}-\nu_l+\frac{2-N}{2\nu_l(1+\nu_l)\hat\rho (\varepsilon )} \right) -2\lambda }\nonumber \\ & &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+ \frac{2l\left(1+\varepsilon\nu_l\right)}{\hat\rho (\varepsilon )} + \mathcal R(\lambda,\varepsilon)=0 , \end{eqnarray} where $$\hat \rho (\varepsilon )=\varepsilon \tilde\rho (\varepsilon )=\frac{M-\omega_N\varepsilon(1-\varepsilon)^N}{\omega_N\left(N-\frac{N(N-1)}{2}\varepsilon-\sum_{k=3}^N\binom{N}{k}(-1)^k\varepsilon^{k-1}\right)}, $$ and $\mathcal R(\lambda,\varepsilon) =O(\varepsilon\sqrt {\varepsilon} )$ as $\varepsilon \to 0$. The formulation in (\ref{simplified}) can be easily deduced by observing that $$ \hat \rho_{\varepsilon}= \frac{M}{N\omega_N}+2\frac{M}{N\omega_N}\left( \frac{N-1}{4}-\frac{\omega_N}{2M} \right)\varepsilon +O(\varepsilon^2),\ \ {\rm as}\ \varepsilon \to 0 . $$
\endproof \end{lem}
We are now ready to prove our main result.
\begin{thm}\label{derN} All eigenvalues of problem (\ref{Neu}) have the following asymptotic behavior \begin{equation}\label{asymptotic} \lambda_l(\varepsilon)= \lambda_l+\left(\frac{2l\lambda_l}{3}+\frac{2\lambda^2_l}{N(2l+N)}\right)\varepsilon+o(\varepsilon ),\ \ \ {\rm as}\ \varepsilon \to 0, \end{equation} where $\lambda_l$ are the eigenvalues of problem (\ref{Ste}).
Moreover, for each $l\in {\mathbb{N}}$ the function defined by $\lambda_l(\varepsilon )$ for $\varepsilon >0$ and $\lambda_l(0)=\lambda_l$, is continuous in the whole of $[0,1[$ and of class $C^1$ in a neighborhood of $\varepsilon =0$. \proof
By using the Min-Max Principle and related standard arguments, one can easily prove that $\lambda_l(\varepsilon)$ depends continuously on $\varepsilon >0$ (cf. \cite{laproeurasian}; see also \cite{lala2004}). Moreover, by using (\ref{intro1,5}) the maps $\varepsilon \mapsto \lambda_l(\varepsilon)$ can be extended by continuity at the point $\varepsilon =0$ by setting $\lambda_l(0)=\lambda_l$.
In order to prove differentiability of $\lambda_{l}(\varepsilon )$ around zero and the validity of (\ref{asymptotic}), we consider equation (\ref{simplified}) and apply the Implicit Function Theorem. Note that equation (\ref{simplified}) can be written in the form $F(\lambda , \varepsilon)=0$ where $F$ is a function of class $C^1$ in the variables $(\lambda , \varepsilon )\in ]0,\infty [\times[0,1[$, with \begin{eqnarray}
F(\lambda , 0) &=& -2\lambda +\frac{2N\omega_Nl}{M},\nonumber\\
F'_{\lambda}(\lambda ,0)& =& -2,\nonumber\\ F'_{\varepsilon}(\lambda , 0)&=&\lambda^2\left(\frac{M}{3N\omega_N}-\frac{1}{\nu_l(1+\nu_l)}\right) +\lambda \left( \frac{N}{2}-\nu_l+\frac{(2-N)N\omega_N}{2\nu_l(1+\nu_l)M} \right)\nonumber\\ & & -\frac{2N\omega_Nl}{M}\left(\frac{N-1}{2}-\frac{\omega_N}{M} -\nu_l \right) \end{eqnarray}
By (\ref{intro1}), $\lambda_l=N\omega_Nl/M$, hence $F(\lambda_l,0)=0$. Since $F'_{\lambda}(\lambda_l ,0)\ne 0$, the Implicit Function Theorem combined with the continuity of the functions $\lambda_l(\cdot )$ allows us to conclude that the functions $\lambda_l(\cdot )$ are of class $C^1$ around zero.
We now compute the derivative of $\lambda_l(\cdot )$ at zero. Using the equality $N\omega_N/M=\lambda_l /l$ and recalling that $\nu_l=l+N/2-1$ we get \begin{eqnarray} F'_{\varepsilon}(\lambda_l , 0)&=&\lambda_l^2\left( \frac{l}{3\lambda_l} -\frac{1}{\nu_l(1+\nu_l)} \right)+\lambda_l\left(1-l+\frac{\lambda_l (2-N)}{2l\nu_l(1+\nu_l)} \right) -2\lambda_l\left(\frac{1}{2}-l-\frac{\lambda_l}{Nl} \right)\nonumber \\ &=&\lambda_l^2\left( \frac{1}{\nu_l(1+\nu_l)}\left(\frac{2-N}{2l}-1 \right)+\frac{2}{Nl} \right)+\frac{4}{3}\lambda_l l=\frac{4\lambda^2_l}{N^2+2Nl}+ \frac{4}{3}\lambda_l l . \nonumber \end{eqnarray} Finally, formula $\lambda'_l(0)=-F'_{\varepsilon}(\lambda_l , 0)/F'_{\lambda}(\lambda_l , 0)$ yields (\ref{intro2}) and the validity of (\ref{asymptotic}). \endproof \end{thm}
\begin{corol}\label{derminN} For any $l\in {\mathbb{N}}\setminus \{0\}$ there exists $\delta_l>0$ such that the function $\lambda _l(\cdot )$ is strictly increasing in the interval $[0, \delta _l[$. In particular, $\lambda_l<\lambda_l(\varepsilon )$ for all $\varepsilon \in ]0, \delta_l[$. \end{corol}
\subsection{Estimates for the remainders}\label{sec:3}
This subsection is devoted to the proof of a few technical estimates used in the proof of Lemma~\ref{implicitepsN}.
\begin{lem}\label{Jalemma} The function $R_3$ defined by \begin{equation} \label{Jalemma0} \frac{J_{\nu}(z)}{J_{\nu}'(z)}=\frac{z}{{\nu}}+\frac{z^3}{2{\nu}^2(1+{\nu})}+ R_3(z), \end{equation}
is $ O(z^5)$ as $z\rightarrow 0$. \proof
Recall the following well-known representation of the Bessel functions of the first kind
\begin{equation}\label{besselabram} J_{\nu}(z)=\left(\frac{z}{2}\right)^{\nu}\sum_{j=0}^{+\infty}\frac{(-1)^j}{j!\Gamma(j+{\nu}+1)}\left(\frac{z}{2}\right)^{2j}. \end{equation}
For clarity, we simply write \begin{equation}\label{besselabram1} J_{\nu}(z)=z^{\nu}(a_0+a_2z^2+a_4z^4+O(z^5) ), \end{equation} hence \begin{equation}\label{besselabram2} J_{\nu}'(z)=z^{\nu-1}(\nu a_0+(\nu +2)a_2z^2+(\nu+4 )a_4z^4+O(z^5) ) \end{equation} where the coefficients $a_0,a_2, a_4$ are defined by (\ref{besselabram}). By (\ref{besselabram1}), (\ref{besselabram2}) and standard computations it follows that $$ \frac{J_{\nu}(z)}{J_{\nu}'(z)}=\frac{z}{\nu}-{\frac{2a_2}{\nu^2a_0}}z^3+O(z^5), $$ which gives exactly (\ref{Jalemma0}), since $\frac{a_2}{a_0}=-\frac{1}{4(1+\nu)}$ by (\ref{besselabram}). \endproof \end{lem}
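As a quick numerical illustration (not used in the proofs), the claimed order of the remainder $R_3$ can be observed with SciPy; the order $\nu=3/2$ and the sample points below are arbitrary choices of ours.
\begin{verbatim}
from scipy.special import jv, jvp   # J_nu and its first derivative

nu = 1.5
for z in (0.2, 0.1, 0.05, 0.025):
    R3 = jv(nu, z)/jvp(nu, z) - z/nu - z**3/(2*nu**2*(1 + nu))
    print(z, R3/z**5)   # the ratio should stabilise, consistent with R3 = O(z^5)
\end{verbatim}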
\begin{lem}\label{remainder2} For any $\lambda >0$ the remainders $R(\lambda,\varepsilon)$ and $\hat R(\lambda , \varepsilon)$ defined in the proof of Lemma~\ref{implicitepsN} are $ O(\varepsilon^3)$, $O(\varepsilon^2 \sqrt {\varepsilon })$, respectively, as $\varepsilon\rightarrow 0$. Moreover, the same holds true for the corresponding partial derivatives $\partial_{\lambda} R(\lambda,\varepsilon)$, $\partial_{\lambda}\hat R(\lambda,\varepsilon)$. \proof
First, we consider $R_3(a)=R_3(\sqrt{\lambda\varepsilon}(1-\varepsilon))$ where $R_3$ is defined in Lemma~\ref{Jalemma} and we differentiate it with respect to $\lambda$. We obtain \small \begin{eqnarray*} \frac{\partial R_3(a)}{\partial\lambda }=\frac{a R_3'(a)}{2\lambda}, \end{eqnarray*} \normalsize hence by Lemma \ref{Jalemma} we can conclude that $R_3(a)$ and $\frac{\partial R_3(a)}{\partial\lambda }$ are $O(\varepsilon^2\sqrt{\varepsilon})$ as $\varepsilon\rightarrow 0$.
Now consider $R_1(b)$ and $R_2(b)$ defined in (\ref{erre1}), (\ref{erre2}).
Since $\lambda>0$, we have that $b>0$ hence the Bessel functions are analytic in $b$ and we can write \small \begin{eqnarray*} 2\sqrt{\lambda} \frac{ \partial R_1(b)}{\partial \lambda}&=&\frac{\varepsilon b_1(\varepsilon)}{\sqrt{\varepsilon}(1-\varepsilon)}\sum_{k=4}^{+\infty}\frac{b^{k-1}\varepsilon^{k-1}}{(k-1)!(1-\varepsilon)^{k-1}}\left(Y_{\nu}'(b)J_{\nu}^{(k+1)}(b)-J_{\nu}'(b)Y_{\nu}^{(k+1)}(b)\right)\\ &+&\frac{b_1(\varepsilon)}{\sqrt{\varepsilon}}\sum_{k=4}^{+\infty}\frac{\varepsilon^kb^k}{k!(1-\varepsilon)^k}\left(Y_{\nu}'(b)J_{\nu}^{(k+1)}(b)-J_{\nu}'(b)Y_{\nu}^{(k+1)}(b)\right)'. \end{eqnarray*} \normalsize Here and in the sequel we write $\nu$ instead of $\nu_l$. Using the fact that $b=\sqrt{\lambda /\varepsilon}b_1(\varepsilon)$ and Lemma \ref{inducross} we conclude that all the cross products of the form $Y_{\nu}'(b)J_{\nu}^{(k+1)}(b)-J_{\nu}'(b)Y_{\nu}^{(k+1)}(b)$ and their derivatives $(Y_{\nu}'(b)J_{\nu}^{(k+1)}(b)-J_{\nu}'(b)Y_{\nu}^{(k+1)}(b))'$ are $O(\sqrt{\varepsilon})$ and $O(\varepsilon)$ respectively, as $\varepsilon\rightarrow 0$. It follows that $R_1(b)$ and $\partial_{\lambda}R_1(b)$ are $ O(\varepsilon^2\sqrt{\varepsilon})$ as $\varepsilon\rightarrow 0$.
Similarly, \small \begin{eqnarray*} 2\sqrt{\lambda}\frac{\partial R_2(b)}{\partial \lambda}&=&\frac{\varepsilon b_1(\varepsilon)}{\sqrt{\varepsilon}(1-\varepsilon)}\sum_{k=3}^{+\infty}\frac{b^{k-1}\varepsilon^{k-1}}{(k-1)!(1-\varepsilon)^{k-1}}\left(J_{\nu}(b)Y_{\nu}^{(k+1)}(b)-Y_{\nu}(b)J_{\nu}^{(k+1)}(b)\right)\\ &+&\frac{b_1(\varepsilon)}{\sqrt{\varepsilon}}\sum_{k=3}^{+\infty}\frac{\varepsilon^kb^k}{k!(1-\varepsilon)^k}\left(J_{\nu}(b)Y_{\nu}^{(k+1)}(b)-Y_{\nu}(b)J_{\nu}^{(k+1)}(b)\right)' , \end{eqnarray*} \normalsize hence $R_2(b)$ and $\partial_{\lambda}R_2(b)$ are $O(\varepsilon^2)$ as $\varepsilon\rightarrow 0$.
Summing up all the terms, using Lemma~\ref{case0} and Corollary~\ref{formulaused}, we obtain \small \begin{multline*} R(\lambda,\varepsilon)=R_3(a)\left[\frac{2\varepsilon}{\pi(1-\varepsilon)}\left(\frac{{\nu}^2}{b^2}-1\right){+}\frac{\varepsilon^2}{\pi(1-\varepsilon)^2}\left(1-\frac{3{\nu}^2}{b^2}\right)\right. \\\left.{+}\frac{\varepsilon^3b^2}{3\pi(1-\varepsilon)^3}\left(\frac{{\nu}^4+11{\nu}^2}{b^4}-\frac{3+2{\nu}^2}{b^2}+1\right)\right]\\ +R_1(b)\left[\frac{a}{{\nu}}+\frac{a^3}{2{\nu}^2(1+{\nu})}\right] {+}R_2(b)\frac{a}{b}{+}R_3(a)R_1(b). \end{multline*} \normalsize We conclude that $R(\lambda,\varepsilon)$ is $ O(\varepsilon^3)$ as $\varepsilon\rightarrow 0$. Moreover, it easily follows that $\frac{\partial R (\lambda,\varepsilon)}{\partial \lambda}$ is also $ O(\varepsilon^3)$ as $\varepsilon\rightarrow 0$.
The proof of the estimates for $\hat R$ and its derivatives is similar and we omit it. \endproof \end{lem}
\begin{rem} According to standard Landau's notation, saying that a function $f(z)$ is $O(g(z))$ as $z\to 0$ means that there exists $C>0$ such that
$|f(z)|\le C|g(z)|$ for any $z$ sufficiently close to zero. Thus, the use of Landau's notation in the statements of Lemmas~\ref{implicitepsN} and \ref{remainder2} presupposes the existence of such constants $C$, which in principle may depend on $\lambda >0$. However, a careful analysis of the proofs reveals that, given a bounded interval of the type $[A,B]$ with $0<A<B$, the constants $C$ in the estimates can be taken independent of $\lambda \in [A,B]$. \end{rem}
\subsection{The case \texorpdfstring{$N=1$}{N=1}}\label{sec:4}
We include here a description of the case $N=1$ for the sake of completeness. Let $\Omega$ be the open interval $]-1,1[$. Problem \eqref{Ste} reads \begin{equation}\label{Ste1} \begin{cases} u''(x) =0 ,& {\rm for}\ x\in]-1,1[,\\ u'(\pm 1)=\pm\lambda\frac{M}{2}u(\pm 1),
\end{cases} \end{equation} in the unknowns $\lambda$ and $u$. It is easy to see that the only eigenvalues are $\lambda_0=0$ and $\lambda_1=\frac{2}{M}$ and they are associated with the constant functions and the function $x$, respectively. As in (\ref{densitaintro}), we define a mass density $\rho_{\varepsilon}$ on the whole of $]-1,1[$ by
\begin{equation*} \rho_{\varepsilon}(x)=\left\{ \begin{array}{ll} \frac{M}{2\varepsilon}-1+\varepsilon\,& {\rm if\ }x\in ]-1,-1+\varepsilon[\cup]1-\varepsilon,1[,\\ \varepsilon\,& {\rm if\ }x\in ]-1+\varepsilon,1-\varepsilon[. \end{array} \right. \end{equation*} Note that for any $x\in]-1,1[ $ we have $\rho_{\varepsilon}(x)\to 0$ as $\varepsilon \to 0$, and $\int_{-1}^1\rho_{\varepsilon}dx=M$ for all $\varepsilon>0$. Problem (\ref{Neu}) for $N=1$ reads \begin{equation}\label{Neu1} \left\{\begin{array}{ll} -u''(x) =\lambda \rho_{\varepsilon}(x) u(x),\ \ & {\rm for}\ x\in]-1,1[,\\ u'(-1)=u'(1)=0. \end{array}\right. \end{equation} It is well-known from Sturm-Liouville theory that problem (\ref{Neu1}) has an increasing sequence of non-negative eigenvalues of multiplicity one. We denote the eigenvalues of \eqref{Neu1} by $\lambda_l(\varepsilon)$ with $l\in\mathbb N$. For any $\varepsilon\in]0,1[$, the only zero eigenvalue is $\lambda_0(\varepsilon)$ and the corresponding eigenfunctions are the constant functions.
We establish an implicit characterization of the eigenvalues of \eqref{Neu1}.
\begin{prop} The nonzero eigenvalues $\lambda$ of problem \eqref{Neu1} are given implicitly as zeros of the equation \begin{multline}\label{implicitD1} 2\sqrt{\varepsilon\left(\frac{M}{2\varepsilon}-1+\varepsilon\right)}\cos{(2\sqrt{\lambda\varepsilon}(1-\varepsilon))}\sin{\left(2\varepsilon\sqrt{\lambda\left(\frac{M}{2\varepsilon}-1+\varepsilon\right)}\right)}\\ +\left[-\frac{M}{2\varepsilon}+1+\left(\frac{M}{2\varepsilon}-1+2\varepsilon\right)\cos{\left(2\varepsilon\sqrt{\lambda\left(\frac{M}{2\varepsilon}-1+\varepsilon\right)}\right)}\right]\sin{\left(2\sqrt{\lambda\varepsilon}(1-\varepsilon)\right)}=0. \end{multline} \proof Given an eigenvalue $\lambda>0$, a solution of (\ref{Neu1}) is of the form \small \begin{equation*} u(x)=\left\{ \begin{array}{ll} A\cos{(\sqrt{\lambda\rho_2}x)}+B\sin{(\sqrt{\lambda\rho_2}x)},& {\rm for\ }x\in]-1,-1+\varepsilon[,\\\ \\ C\cos{(\sqrt{\lambda\rho_1}x)}+D\sin{(\sqrt{\lambda\rho_1}x)},& {\rm for\ }x\in]-1+\varepsilon,1-\varepsilon[,\\\ \\ E\cos{(\sqrt{\lambda\rho_2}x)}+F\sin{(\sqrt{\lambda\rho_2}x)},& {\rm for\ }x\in]1-\varepsilon,1[, \end{array} \right. \end{equation*} \normalsize where $\rho_1=\varepsilon,\rho_2=\frac{M}{2\varepsilon}-1+\varepsilon$ and $A,B,C,D,E,F$ are suitable real numbers. We impose the continuity of $u$ and $u'$ at the points $x=-1+\varepsilon$ and $x=1-\varepsilon$ and the boundary conditions, obtaining a homogeneous system of six linear equations in six unknowns of the form $\mathcal M v=0$, where $v=(A,B,C,D,E,F)$ and $\mathcal M$ is the matrix associated with the system. We impose the condition ${\rm det}\mathcal M=0$. This yields formula \eqref{implicitD1}. \endproof \end{prop}
\begin{comment}
\scriptsize \begin{equation}\label{system1}\left\{ \begin{array}{ll} A\cos{(\sqrt{\lambda\rho_2}(1-\varepsilon))}-B\sin{(\sqrt{\lambda\rho_2}(1-\varepsilon))}-C\cos{(\sqrt{\lambda\rho_1}(1-\varepsilon))}+D\sin{(\sqrt{\lambda\rho_1}(1-\varepsilon))}=0\,\\ \\ C\cos{(\sqrt{\lambda\rho_1}(1-\varepsilon))}+D\sin{(\sqrt{\lambda\rho_1}(1-\varepsilon))}-E\cos{(\sqrt{\lambda\rho_2}(1-\varepsilon))}-F\sin{(\sqrt{\lambda\rho_2}(1-\varepsilon))}=0\,\\ \\ \sqrt{\rho_2}\left(A\sin{(\sqrt{\lambda\rho_2}(1-\varepsilon))}+B\cos{(\sqrt{\lambda\rho_2}(1-\varepsilon))}\right)-\sqrt{\rho_1}\left(C\sin{(\sqrt{\lambda\rho_1}(1-\varepsilon))}+D\sin{(\sqrt{\lambda\rho_1}(1-\varepsilon))}\right)=0\,\\ \\ \sqrt{\rho_1}\left(-C\sin{(\sqrt{\lambda\rho_1}(1-\varepsilon))}+D\cos{(\sqrt{\lambda\rho_1}(1-\varepsilon))}\right)+\sqrt{\rho_2}\left(E\sin{(\sqrt{\lambda\rho_2}(1-\varepsilon))}-F\cos{(\sqrt{\lambda\rho_2}(1-\varepsilon))}\right)=0\,\\ \\ A\sin{(\sqrt{\lambda\rho_2})}+B\cos{(\sqrt{\lambda\rho_2})}=0\,\\ \\ E\sin{(\sqrt{\lambda\rho_2})}-F\cos{(\sqrt{\lambda\rho_2})}=0 \end{array} \right. \end{equation} \normalsize We set $\alpha=\cos{(\sqrt{\lambda\rho_2}(1-\varepsilon))}$, $\beta=\sin{(\sqrt{\lambda\rho_2}(1-\varepsilon))}$, $\gamma=\cos{(\sqrt{\lambda\rho_1}(1-\varepsilon))}$, $\delta=\sin{(\sqrt{\lambda\rho_1}(1-\varepsilon))}$, $\zeta=\cos{(\sqrt{\lambda\rho_2})}$ and $\eta=\sin{(\sqrt{\lambda\rho_2})}$. We introduce the matrix $\mathcal M$ and the vector $\mathcal B$ defined by
\begin{equation*} \mathcal M=\begin{bmatrix} \alpha & -\beta & -\gamma & \delta & 0 & 0 \\ 0 & 0 & \gamma & \delta & -\alpha & -\beta \\ \sqrt{\rho_2}\beta & \sqrt{\rho_2}\alpha & -\sqrt{\rho_1}\delta & -\sqrt{\rho_1}\gamma & 0 & 0 \\ 0 & 0 & -\sqrt{\rho_1}\delta & \sqrt{\rho_1}\gamma & \sqrt{\rho_2}\beta & -\sqrt{\rho_2}\alpha \\ \eta & \zeta & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & \eta & -\zeta \end{bmatrix},\ \ \mathcal B=\begin{bmatrix} A\\B\\C\\D\\E\\F \end{bmatrix}. \end{equation*} System (\ref{system1}) reads $$ \mathcal M\mathcal B=0. $$ We impose ${\rm det}(\mathcal M)=0$. Standard computations yield \footnotesize \begin{equation*} 2\left(\beta\delta\eta\sqrt{\rho_1}+\alpha\delta\eta\sqrt{\rho_2}+\alpha\delta\zeta\sqrt{\rho_1}-\beta\gamma\zeta\sqrt{\rho_2}\right)\left(\beta\gamma\eta\sqrt{\rho_1}-\alpha\delta\eta\sqrt{\rho_2}+\alpha\gamma\zeta\sqrt{\rho_1}+\beta\delta\zeta\sqrt{\rho_2}\right)=0. \end{equation*} \normalsize
We substitute the values of $\alpha,\beta,\gamma,\delta,\zeta,\eta,\rho_1$ and $\rho_2$ and obtain the following formula of the type $F(\lambda,\varepsilon)=0$ which gives implicitly the eigenvalues of problem (\ref{Neu1}) \small \begin{multline}\label{N1implicit}
F(\lambda,\varepsilon)=2\sqrt{\varepsilon\left(\frac{M}{2\varepsilon}-1+\varepsilon\right)}\cos{(2\sqrt{\lambda\varepsilon}(1-\varepsilon))}\sin{\left(2\varepsilon\sqrt{\lambda\left(\frac{M}{2\varepsilon}-1+\varepsilon\right)}\right)}\\ +\left[-\frac{M}{2\varepsilon}+1+\left(\frac{M}{2\varepsilon}-1+2\varepsilon\right)\cos{\left(2\varepsilon\sqrt{\lambda\left(\frac{M}{2\varepsilon}-1+\varepsilon\right)}\right)}\right]\sin{\left(2\sqrt{\lambda\varepsilon}(1-\varepsilon)\right)}=0 \end{multline} \normalsize
\end{comment}
Note that $\lambda=0$ is a solution for all $\varepsilon>0$, hence we consider only the case of nonzero eigenvalues. Using standard Taylor expansions, we easily prove the following lemma.
\begin{lem} Equation \eqref{implicitD1} can be rewritten in the form \begin{equation}\label{implicitepsD1} M-\frac{\lambda M^2}{2}+\frac{\lambda M^2}{6}\left(1+\lambda\left(2+\frac{M}{2}\right)\right)\varepsilon+R(\lambda,\varepsilon)=0, \end{equation} where $R(\lambda,\varepsilon) =O(\varepsilon^2)$ as $\varepsilon\rightarrow 0$. \end{lem}
Finally, we can prove the following theorem. Note that formula \eqref{formulaD1} is the same as \eqref{asymptotic} with $N=1,l=1$.
\begin{thm} The first eigenvalue of problem \eqref{Neu1} has the following asymptotic behavior \begin{equation}\label{formulaD1} \lambda_1(\varepsilon)=\lambda_1+\frac{2}{3}(\lambda_1+\lambda_1^2)\varepsilon+o(\varepsilon)\ \ \ {\rm as}\ \varepsilon \to 0, \end{equation} where $\lambda_1=2/M$ is the only nonzero eigenvalue of problem \eqref{Ste1}. Moreover, for $l>1$ we have that $\lambda_l(\varepsilon)\rightarrow +\infty$ as $\varepsilon\rightarrow 0$. \proof The proof is similar to that of Theorem \ref{derN}. It is possible to prove that the eigenvalues $\lambda_l(\varepsilon)$ of \eqref{Neu1} depend continuously on $\varepsilon>0$. We consider equation \eqref{implicitepsD1} and apply the Implicit Function Theorem. Equation \eqref{implicitepsD1} can be written in the form $F(\lambda,\varepsilon)=0$, with $F$ of class $C^1$ in $]0,+\infty[\times[0,1[$, where $F(\lambda,0)=M-\frac{\lambda M^2}{2}$, $F_{\lambda}'(\lambda,0)=-\frac{M^2}{2}$ and $F_{\varepsilon}'(\lambda,0)=\frac{\lambda M^2}{6}(1+\lambda (2+\frac{M}{2}))$.
Since $\lambda_1=\frac{2}{M}$, $F(\lambda_1,0)= 0$ and $F_{\lambda}'(\lambda_1,0)\ne 0$, the zeros of equation (\ref{implicitepsD1}) in a neighborhood of $(\lambda_1 ,0)$ are given by the graph of a $C^1$-function $\varepsilon \mapsto \lambda (\varepsilon )$ with $\lambda (0)=\lambda_1$. We note that $\lambda (\varepsilon )=\lambda_1(\varepsilon )$ for all $\varepsilon $ small enough. Indeed, assuming by contradiction that $\lambda (\varepsilon )=\lambda_l(\varepsilon )$ with $l\geq 2$, we would obtain that, possibly passing to a subsequence, $\lambda_1(\varepsilon )\to \bar \lambda $ as $\varepsilon \to 0$, for some $\bar \lambda \in [0, \lambda_1[$. Then passing to the limit in (\ref{implicitepsD1}) as $\varepsilon \to 0$ we would obtain a contradiction. Thus, $\lambda_1 (\cdot )$ is of class $C^1$ in a neighborhood of zero and $\lambda_1'(0)=-F'_{\varepsilon}(\lambda_1,0)/F'_{\lambda}(\lambda_1,0)$, which yields formula \eqref{formulaD1}.
The divergence as $\varepsilon \to 0$ of the higher eigenvalues $\lambda_l(\varepsilon )$ with $l>1$ follows from the fact that a converging subsequence of the form $\lambda_l(\varepsilon _n)$, $n\in {\mathbb{N}}$, would provide an eigenvalue of the limiting problem (\ref{Ste1}) different from $\lambda_0$ and $\lambda_1$, which is not admissible.
\endproof \end{thm}
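The asymptotic formula \eqref{formulaD1} can also be observed numerically from the implicit equation \eqref{implicitD1}. The sketch below is an illustration of ours: the choice $M=2$ (for which $\lambda_1=1$) and the bracketing interval are arbitrary, the root of \eqref{implicitD1} near $\lambda_1$ is located with SciPy's \texttt{brentq}, and the incremental ratio $(\lambda_1(\varepsilon)-\lambda_1)/\varepsilon$ should approach $\frac{2}{3}(\lambda_1+\lambda_1^2)=\frac{4}{3}$.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

M = 2.0
lam1 = 2.0 / M   # = 1

def F(lam, eps):
    # left-hand side of the implicit equation (implicitD1)
    rho2 = M/(2*eps) - 1 + eps
    a = 2*np.sqrt(lam*eps)*(1 - eps)
    b = 2*eps*np.sqrt(lam*rho2)
    return (2*np.sqrt(eps*rho2)*np.cos(a)*np.sin(b)
            + (-M/(2*eps) + 1 + (M/(2*eps) - 1 + 2*eps)*np.cos(b))*np.sin(a))

for eps in (1e-2, 1e-3, 1e-4):
    lam_eps = brentq(F, 0.5, 1.5, args=(eps,))
    print(eps, lam_eps, (lam_eps - lam1)/eps)   # slope expected to approach 4/3
\end{verbatim}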
\section{Appendix}
We provide here explicit formulas for the cross products of Bessel functions used in this paper.
\begin{lem}\label{case0} The following identities hold \small \begin{eqnarray*} Y_{\nu}(z)J_{\nu}'(z)-J_{\nu}(z)Y_{\nu}'(z)&=&-\frac{2}{\pi z},\\ Y_{\nu}(z)J_{\nu}''(z)-J_{\nu}(z)Y_{\nu}''(z)&=&\frac{2}{\pi z^2},\\ Y_{\nu}'(z)J_{\nu}''(z)-J_{\nu}'(z)Y_{\nu}''(z)&=&\frac{2}{\pi z}\left(\frac{{\nu}^2}{z^2}-1\right), \end{eqnarray*} \normalsize \proof It is well-known (see \cite[\S 9]{abram}) that \small \begin{eqnarray*}J_{\nu}(z)Y_{\nu}'(z)-Y_{\nu}(z)J_{\nu}'(z)= J_{{\nu}+1}(z)Y_{\nu}(z)-J_{\nu}(z)Y_{{\nu}+1}(z)=\frac{2}{\pi z}, \end{eqnarray*} \normalsize which gives the first identity in the statement. The second identity holds since \small \begin{eqnarray*} J_{\nu}(z)Y_{\nu}''(z)-Y_{\nu}(z)J_{\nu}''(z)&=&\left(J_{\nu}(z)Y_{\nu}'(z)-Y_{\nu}(z)J_{\nu}'(z)\right)'=\left(\frac{2}{\pi z}\right)'=-\frac{2}{\pi z^2}. \end{eqnarray*} \normalsize The third identity holds since \small \begin{multline*} Y_{\nu}'(z)J_{\nu}''(z)-J_{\nu}'(z)Y_{\nu}''(z)=Y_{\nu}'(z)\left(J_{{\nu}-1}(z)-\frac{{\nu}}{z}J_{\nu}(z)\right)'-J_{\nu}'(z)\left(Y_{{\nu}-1}(z)-\frac{{\nu}}{z}Y_{\nu}(z)\right)'\\ =Y_{\nu}'(z)J_{{\nu}-1}'(z)-J_{\nu}'(z)Y_{{\nu}-1}'(z)+\frac{{\nu}}{z^2}\left(Y_{\nu}'(z)J_{\nu}(z)-J_{\nu}'(z)Y_{\nu}(z)\right)\\ =\left(Y_{\nu}'(z)\frac{1}{2}\left(J_{{\nu}-2}(z)-J_{\nu}(z)\right)-J_{\nu}'(z)\frac{1}{2}\left(Y_{{\nu}-2}(z)-Y_{\nu}(z)\right)\right)+\frac{2{\nu}}{\pi z^3}\\ =\frac{1}{2}\left(Y_{\nu}'(z)J_{{\nu}-2}(z)-J_{\nu}'(z)Y_{{\nu}-2}(z)\right)\\ -\frac{1}{2}\left(Y_{\nu}'(z)J_{\nu}(z)-J_{\nu}'(z)Y_{\nu}(z)\right)+\frac{2{\nu}}{\pi z^3}\\ =\frac{1}{2}\left(J_{\nu}'(z)Y_{\nu}(z)-Y_{\nu}'(z)J_{\nu}(z)\right)\\ +\frac{{\nu}-1}{z}\left(Y_{\nu}'(z)J_{{\nu}-1}(z)-J_{\nu}'(z)Y_{{\nu}-1}(z)\right)-\frac{1}{\pi z}+\frac{2{\nu}}{\pi z^3}\\ =\frac{{\nu}-1}{z}\left(J_{{\nu}-1}(z)\left(Y_{{\nu}-1}(z)-\frac{{\nu}}{z}Y_{\nu}(z)\right)-Y_{{\nu}-1}(z)\left(J_{{\nu}-1}(z)-\frac{{\nu}}{z}J_{\nu}(z)\right)\right)\\ -\frac{2}{\pi z}+\frac{2{\nu}}{\pi z^3}\\ =-\frac{{\nu}({\nu}-1)}{z^2}\left(Y_{\nu}(z)J_{{\nu}-1}(z)-J_{\nu}(z)Y_{{\nu}-1}(z)\right)-\frac{2}{\pi z}+\frac{2{\nu}}{\pi z^3}\\ =\frac{2}{\pi z}\left(-1+\frac{{\nu}^2}{z^2}\right), \end{multline*} \normalsize where the first, second and fourth equalities follow respectively from the well-known formulas $\mathcal C_{\nu}'(z)=\mathcal C_{{\nu}-1}(z)-\frac{{\nu}}{z}\mathcal C_{{\nu}}(z)$, $2\mathcal C_{{\nu}}'(z)=\mathcal C_{{\nu}-1}(z)-\mathcal C_{{\nu}+1}(z)$ and $\mathcal C_{{\nu}-2}(z)+\mathcal C_{{\nu}}(z)=\frac{2({\nu}-1)}{z}\mathcal C_{{\nu}-1}(z)$, where $\mathcal C_{{\nu}}(z)$ stands both for $J_{\nu}(z)$ and $Y_{\nu}(z)$ (see \cite[\S 9]{abram}). This proves the lemma. \endproof \end{lem}
\begin{lem}\label{inducross} The following identities hold \begin{eqnarray} Y_{\nu}(z)J_{\nu}^{(k)}(z)-J_{\nu}(z)Y_{\nu}^{(k)}(z)&=&\frac{2}{\pi z}\left(r_k+R_{{\nu},k}(z)\right),\label{num1}\\ Y_{\nu}'(z)J_{\nu}^{(k)}(z)-J_{\nu}'(z)Y_{\nu}^{(k)}(z)&=&\frac{2}{\pi z}\left(q_k+Q_{{\nu},k}(z)\right),\label{num2} \end{eqnarray} for all $k> 2$ and ${\nu}\geq 0$, where $r_k, q_k\in \{0,1, -1\}$, and $Q_{{\nu},k}(z)$, $R_{{\nu},k}(z)$ are finite sums of quotients of the form $\frac{c_{\nu,k}}{z^m}$, with $m\geq 1$ and $c_{\nu,k}$ a suitable constant, depending on ${\nu},k$. \proof We will prove (\ref{num1}) and (\ref{num2}) by induction. Identities (\ref{num1}) and (\ref{num2}) hold for $k=1$ and $k=2$ by Lemma \ref{case0}. Suppose now that \small \begin{eqnarray*} Y_{\nu}(z)J_{\nu}^{(k)}(z)-J_{\nu}(z)Y_{\nu}^{(k)}(z)&=&\frac{2}{\pi z}\left(r_k+R_{{\nu},k}(z)\right),\\ Y_{\nu}'(z)J_{\nu}^{(k)}(z)-J_{\nu}'(z)Y^{(k)}(z)&=&\frac{2}{\pi z}\left(q_k+Q_{{\nu},k}(z)\right), \end{eqnarray*} \normalsize hold for all ${\nu}\geq 0$. First consider \small $$ Y_{\nu}'(z)J_{\nu}^{(k+1)}(z)-J_{\nu}'(z)Y_{\nu}^{(k+1)}(z). $$ \normalsize We use the recurrence relations $\mathcal C_{{\nu}+1}(z)+\mathcal C_{{\nu}-1}(z)=\frac{2{\nu}}{z}\mathcal C_{\nu}(z)$ and $2\mathcal C'(z)=\mathcal C_{\nu -1}(z)- \mathcal C_{\nu +1}(z)$, where $\mathcal C_{\nu}(z)$ stands both for $J_{\nu}(z)$ and $Y_{\nu}(z)$ (see \cite[\S 9]{abram}). We have \footnotesize \begin{multline}\label{laquzero} Y_{\nu}'(z)J_{\nu}^{(k+1)}(z)-J_{\nu}'(z)Y_{\nu}^{(k+1)}(z)=Y_{\nu}'(z)(J_{\nu}')^{(k)}(z)-J_{\nu}'(z)(Y_{\nu}')^{(k)}(z)\\ =\frac{1}{4}\left[\left(Y_{{\nu}-1}(z)-Y_{{\nu}+1}(z)\right)\left(J_{{\nu}-1}(z)-J_{{\nu}+1}(z)\right)^{(k)}\right.\\ -\left.\left(J_{{\nu}-1}(z)-J_{{\nu}+1}(z)\right)\left(Y_{{\nu}-1}(z)-Y_{{\nu}+1}(z)\right)^{(k)}\right]\\ =\frac{1}{4}\left[\left(Y_{{\nu}-1}(z)J_{{\nu}-1}^{(k)}(z)-J_{{\nu}-1}(z)Y_{{\nu}-1}^{(k)}(z)\right)+\left(Y_{{\nu}+1}(z)J_{{\nu}+1}^{(k)}(z)-J_{{\nu}+1}(z)Y_{{\nu}+1}^{(k)}(z)\right)\right.\\ +\left.\left(J_{{\nu}+1}(z)Y_{{\nu}-1}^{(k)}(z)-Y_{{\nu}-1}(z)J_{{\nu}+1}^{(k)}(z)\right)+\left(J_{{\nu}-1}(z)Y_{{\nu}+1}^{(k)}(z)-Y_{{\nu}+1}(z)J_{{\nu}-1}^{(k)}(z)\right)\right]\\ =\frac{1}{4}\left[\frac{2}{\pi z}\left(r_k+R_{{\nu}-1,k}(z)+r_k+R_{{\nu}+1,k}(z)\right)\right.\\ +\left.\frac{2{\nu}}{z}\left(J_{\nu}(z)Y_{{\nu}-1}^{(k)}-Y_{\nu}(z)J_{{\nu}-1}^{(k)}(z)+J_{\nu}(z)Y_{{\nu}+1}^{(k)}(z)-Y_{\nu}(z)J_{{\nu}+1}^{(k)}(z)\right)\right.\\ -\left.\left(J_{{\nu}-1}(z)Y_{{\nu}-1}^{(k)}(z)-Y_{{\nu}-1}(z)J_{{\nu}-1}^{(k)}(z)+J_{{\nu}+1}(z)Y_{{\nu}+1}^{(k)}(z)-Y_{{\nu}+1}J_{{\nu}+1}^{(k)}(z)\right)\right]\\ =\frac{1}{4}\left[\frac{4}{\pi z}\left(2r_k+R_{{\nu}-1,k}(z)+R_{{\nu}+1,k}(z)\right)\right.\\ +\left.\frac{2{\nu}}{z}\left(J_{\nu}(z)\left(Y_{{\nu}-1}(z)+Y_{{\nu}+1}(z)\right)^{(k)}-Y_{\nu}(z)\left(J_{{\nu}-1}(z)+J_{{\nu}+1}(z)\right)^{(k)}\right)\right]\\ =\frac{1}{\pi z}\left(2r_k+R_{{\nu}-1,k}(z)+R_{{\nu}+1,k}(z)\right)\\ +\frac{{\nu}^2}{z}\left(J_{\nu}(z)\left(\frac{1}{z}Y_{\nu}(z)\right)^{(k)}-Y_{\nu}(z)\left(\frac{1}{z}J_{\nu}(z)\right)^{(k)}\right)\\ =\frac{2}{\pi z}\left[r_k+\frac{1}{2}\left(R_{{\nu}-1,k}(z)+R_{{\nu}+1,k}(z)\right)\right.\\ -\left.\frac{{\nu}^2}{z}\sum_{j=0}^k\frac{k!(-1)^{k-j}}{j!z^{k-j+1}}\left(r_j+R_{{\nu},j}(z)\right)\right]. \end{multline} \normalsize
We now prove (\ref{num1}): \footnotesize \begin{multline}\label{laqu} Y_{\nu}(z)J_{\nu}^{(k+1)}(z)-J_{\nu}(z)Y_{\nu}^{(k+1)}(z)=\left(Y_{\nu}(z)J_{\nu}^{(k)}(z)-J_{\nu}(z)Y_{\nu}^{(k)}(z)\right)'\\ -\left(Y_{\nu}'(z)J_{\nu}^{(k)}(z)-J_{\nu}'(z)Y_{\nu}^{(k)}(z)\right)\\ =\frac{2}{\pi z}\left(-q_k-Q_{{\nu},k}(z)-\frac{r_k}{z}-\frac{R_{{\nu},k}(z)}{z}+R_{{\nu},k}'(z)\right). \end{multline} \normalsize This concludes the proof. \endproof \end{lem}
\begin{corol}\label{formulaused} The following formulas hold \small \begin{eqnarray*} J_{\nu}(z)Y_{\nu}'''(z)-Y_{\nu}(z)J_{\nu}'''(z)&=&\frac{2}{\pi z}\left(\frac{2+{\nu}^2}{z^2}-1\right);\\ Y_{\nu}'(z)J_{\nu}'''(z)-J_{\nu}'(z)Y_{\nu}'''(z)&=&\frac{2}{\pi z^2}\left(1-\frac{3{\nu}^2}{z^2}\right);\\ Y_{\nu}'(z)J_{\nu}''''(z)-J_{\nu}'(z)Y_{\nu}''''(z)&=&\frac{2}{\pi z}\left(1-\frac{3+2{\nu}^2}{z^2}+\frac{{\nu}^4+11{\nu}^2}{z^4}\right). \end{eqnarray*} \normalsize \proof From Lemma \ref{inducross} (see in particular (\ref{laqu})) it follows \small \begin{multline*} J_{\nu}(z)Y_{\nu}'''(z)-Y_{\nu}(z)J_{\nu}'''(z)=-\frac{2}{\pi z}\left[-q_2-Q_{{\nu},2}(z)-\frac{r_2}{z}-\frac{R_{{\nu},2}(z)}{z}+R_{{\nu},2}'(z)\right]\\ =\frac{2}{\pi z}\left(\frac{2+{\nu}^2}{z^2}-1\right). \end{multline*} \normalsize Next we compute \small \begin{multline*} Y_{\nu}'(z)J_{\nu}'''(z)-J_{\nu}'(z)Y_{\nu}'''(z)=\frac{2}{\pi z}\left[r_2+R_{{\nu},2}(z)-\frac{{\nu}^2}{z}\sum_{j=0}^2\frac{2(-1)^{2-j}}{j!z^{2-j+1}}\left(r_j+R_{{\nu},j}(z)\right)\right]\\ =\frac{2}{\pi z^2}\left(1-\frac{3{\nu}^2}{z^2}\right). \end{multline*} \normalsize Finally, by (\ref{laquzero}) with $k=3$, we have \small \begin{multline*} Y_{\nu}'(z)J_{\nu}''''(z)-J_{\nu}'(z)Y_{\nu}''''(z)=\frac{2}{\pi z}\left[r_3+\frac{1}{2}\left(R_{{\nu}-1,3}(z)+R_{{\nu}+1,3}(z)\right)\right.\\ -\left.\frac{{\nu}^2}{z}\sum_{j=0}^3\frac{6(-1)^{3-j}}{j!z^{3-j+1}}\left(r_j+R_{{\nu},j}(z)\right)\right]\\ =\frac{2}{\pi z}\left(1-\frac{3+2{\nu}^2}{z^2}+\frac{{\nu}^4+11{\nu}^2}{z^4}\right). \end{multline*} \normalsize \endproof \end{corol}
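All of the cross-product identities collected in Lemma~\ref{case0} and Corollary~\ref{formulaused} can be spot-checked numerically. The sketch below is ours (the values of $\nu$ and $z$ are arbitrary); it uses SciPy's \texttt{jvp} and \texttt{yvp}, which return the $n$-th derivatives of $J_{\nu}$ and $Y_{\nu}$ (and the functions themselves for $n=0$).
\begin{verbatim}
import numpy as np
from scipy.special import jvp, yvp

def cross(k1, k2, nu, z):
    # Y_nu^(k1)(z) J_nu^(k2)(z) - J_nu^(k1)(z) Y_nu^(k2)(z)
    return yvp(nu, z, k1)*jvp(nu, z, k2) - jvp(nu, z, k1)*yvp(nu, z, k2)

nu, z = 2.5, 1.7
checks = [
    (cross(0, 1, nu, z), -2/(np.pi*z)),                              # Y J' - J Y'
    (cross(0, 2, nu, z),  2/(np.pi*z**2)),                           # Y J'' - J Y''
    (cross(1, 2, nu, z),  2/(np.pi*z)*(nu**2/z**2 - 1)),             # Y' J'' - J' Y''
    (cross(3, 0, nu, z),  2/(np.pi*z)*((2 + nu**2)/z**2 - 1)),       # J Y''' - Y J'''
    (cross(1, 3, nu, z),  2/(np.pi*z**2)*(1 - 3*nu**2/z**2)),        # Y' J''' - J' Y'''
    (cross(1, 4, nu, z),  2/(np.pi*z)*(1 - (3 + 2*nu**2)/z**2
                                       + (nu**4 + 11*nu**2)/z**4)),  # Y' J'''' - J' Y''''
]
for lhs, rhs in checks:
    print(np.isclose(lhs, rhs))   # expected: True for each identity
\end{verbatim}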
\begin{comment}
\section{Not to be included (?)}
\subsection{Behavior of the eigenvalues under dilations}
We consider problems (\ref{Ste}) and (\ref{Neu}) when $\Omega=B(0,R)$ is the ball centered in zero and of radius $R$ in $\mathbb R^N$, $N\geq 2$. We consider the following mass densities on $B(0,R)$
\begin{equation}\label{densR} \rho_{\varepsilon,M,R}(x)=\left\{ \begin{array}{ll} \varepsilon,& {\rm if\ }x\in B(0,R(1-\varepsilon)),\\ \frac{M-\varepsilon\omega_N R^N(1-\varepsilon)^N}{\omega_N R^N(1-(1-\varepsilon)^N)},& {\rm if\ }x\in B(0,R)\setminus\overline{B(0,R(1-\varepsilon))}. \end{array} \right. \end{equation}
The aim of this section is to study the dependence of $\lambda_j(\varepsilon)$ and $\lambda_j$ and their derivatives on the radius $R$. In particular we study two different problems. In the first problem we fix $M>0$ and let $R$ vary. In the second problem we want to keep the average density fixed, in a sense that will be understood later. In this section we denote by $\lambda_j(\varepsilon,M,R)$ the eigenvalues of problem (\ref{Neu}) on $B(0,R)$ with density (\ref{densR}) while we denote by $\lambda_j(M,R)$ the eigenvalues of problem (\ref{Ste}) on $B(0,R)$ with density (\ref{densR}). We introduce the following
\begin{eqnarray*} &&\mathcal R(u,\varepsilon,M,R)=\frac{\int_{B(0,R)}\abs{\nabla u}^2 dx}{\int_{B(0,R)} u^{2}\rho_{\varepsilon,M,R}\,dx},\\ &&\mathcal R(u,M,R)=\frac{\int_{B(0,R)}\abs{\nabla u}^2 dx}{\int_{\partial B(0,R)} ({\rm Tr}\, u) ^{2}\frac{M}{N\omega_NR^{N-1}}\,d\sigma}. \end{eqnarray*}
>From the min-max principle we have
\begin{eqnarray*} &&\lambda_{j}(\varepsilon,M,R)=\inf_{\substack{E\subset H^{1}(B(0,R)) \\{\rm dim}E=j+1}}\sup_{0\ne u\in E}\mathcal R(u,\varepsilon,M,R),\ \ \ \mathcal8 j\in\mathbb N,\\ &&\lambda_{j}(M,R)=\inf_{\substack{ E\subset H^{1}(B(0,R))\\{\rm dim} E=j+1}}\sup_{\substack{ u\in E \\ {\rm Tr}\, u\ne 0 }}\mathcal R(u,M,R),\ \ \ \mathcal8 j\in\mathbb N. \end{eqnarray*}
We perform a change of variable $x=Ry$. In this way
\begin{eqnarray*} &&\mathcal R(u,\varepsilon,M,R)=R^{-2}\mathcal R\left(u(R\cdot),\varepsilon,\frac{M}{R^N},1\right),\\ &&\mathcal R(u,M,R)=R^{-2}\mathcal R\left(u(R\cdot),\frac{M}{R^N},1\right). \end{eqnarray*} It follows that
\begin{eqnarray*} &&\lambda_j(\varepsilon,M,R)=R^{-2}\lambda_j(\varepsilon,\frac{M}{R^N},1),\\ &&\lambda_j(M,R)=R^{-2}\lambda_j(\frac{M}{R^N},1). \end{eqnarray*} We rewrite formula (\ref{derformulaN}) as
\begin{equation}\label{alterder} \lambda_j'(0,M,1)=\frac{2M\lambda_j^2(0,M,1)}{3N\omega_N}+\frac{2\lambda_j^2(0,M,1)\omega_N}{2M\lambda_j(0,M,1)+N^2\omega_N}. \end{equation}
We obtain
\small \begin{equation}\label{formulaR} \lambda_j'(0,M,R)=R^{-2}\lambda_j'(0,\frac{M}{R^N},1)=R^{2N-2}2\lambda_j^2(0,M,1)\left(\frac{M}{3N\omega_N}+\frac{\omega_N}{2M\lambda_j(0,M,1)+N^2\omega_N}\right). \end{equation} \normalsize Note that we dilated $\Omega$ keeping the mass fixed. Now we want to keep fixed the quantity $\frac{M}{R^N\omega_N}$, which has the meaning of an average density, modifying eventually the mass $M$. This is done by considering a new mass $MR^N$ and solve problem (\ref{Neu}) on $B(0,R)$ with density $\rho_{\varepsilon,MR^N,R}$. Then we consider $\mathcal R(u,\varepsilon,MR^N,R)$. Standard computations yield
\begin{eqnarray*} &&\lambda_j(\varepsilon,M R^N,R)=R^{-2}\lambda_j(\varepsilon,M,1),\\ &&\lambda_j(M R^N,R)=R^{-2}\lambda_j(M,1), \end{eqnarray*} i.e., only a scale factor $R^{-2}$ appears in front of the eigenvalues of (\ref{Ste}) and (\ref{Neu}). As for the derivative, we have
\small \begin{eqnarray}\label{formulaR2} \nonumber\lambda_j'(0,M R^N,R)=R^{-2}\lambda_j'(0,M,1)&=&R^{-2}2\lambda_j^2(0,M,1)\left(\frac{M}{3N\omega_N}+\frac{\omega_N}{2M\lambda_j(0,M,1)+N^2\omega_N}\right)\\ &=&R^{-2}2\lambda_j^2(0,M,1)\left(\frac{\delta}{3N}+\frac{1}{2\lambda(0,M,1)\delta+N^2}\right), \end{eqnarray} \normalsize
where $\delta$ is the average density $\frac{M}{\omega_N}$.
\begin{rem} We can recover these formulas by exploiting all the computations of Section 3 on $B(0,R)$ and $\rho_{\varepsilon,M,R}$. We just consider a different mass, eventually depending on the radius $R$, that is either $M$ or $M R^N$ and set $\tilde\lambda=R^2\lambda$. We find formulas for the eigenvalues $\tilde\lambda(0)$ and their derivatives and then recover those for $\lambda(0)$ and $\lambda'(0)$. \end{rem}
\subsection{Case $N=1$ $l^2$ setting}
We will provide now an interpretation of this fact in terms of compact selfadjoint operators on Hilbert Spaces. We fix for semplicity $M=2$. With this choice the eigenvalue $\lambda$ of problem (\ref{Ste1}) is $\lambda=1$. We consider the space $H^1(-1,1)$ of those measurable functions on the interval $]-1,1[$ which are square-integrable over $]-1,1[$ and whose first derivative is square integrable over $]-1,1[$. This is a Hilbert space. A orthonormal basis of $H^1(-1,1)$ is given by the following family of trigonometric functions
$$ \frac{1}{\sqrt{2}},\sin(\frac{\pi x}{2}),\Big\{\cos\left(\frac{2k\pi x}{2}\right)\Big\}_{k=1}^{+\infty},\Big\{\sin\left({(\frac{2k+1}{2})\pi x}\right)\Big\}_{k=1}^{+\infty}, $$
which in particular are the normalized eigenfunctions of the Neumann Laplacian on $]-1,1[$. If $u\in H^1(-1,1)$ then $u$ has the following form
$$ u(x)=b_0+a_0\sin(\frac{\pi}{2}x)+\sum_{k=1}^{+\infty}a_k\sin\left({(\frac{2k+1}{2})\pi x}\right)+b_k \cos\left(\frac{2k\pi x}{2}\right), $$
where $a_k,b_k\in\mathbb R$ for all $k=0,1,...$ . Now we consider the isometry $\mathcal F$ between $L^2(-1,1)$ and $l^2(\mathbb N)$, which associates to $u\in L^2(-1,1)$ the element $\mathcal F u=(a_0,a_1,...,a_k,...,b_0,b_1,...b_k,...)\in l^2(\mathbb N)$. The fact that this is a bijection and is an isometry is a standard fact from the theory of Fourier series. It is well known that if $T$ is a compact selfadjoint operator of $L^2(-1,1)$ into itself, then also $\mathcal F\circ T\circ\mathcal F^{-1}$ is a compact selfadjoint operator of $l^2(\mathbb N)$ into itself and the two spectra coincide.
We consider the operator $T:H^1(-1,1)\rightarrow H^1(-1,1)$ given by
$$ Tu=(-\Delta)^{-1}\circ J\circ i u, $$ where $J:L^2(-1,1)\rightarrow H^1(-1,1)'$ is such that $Ju[\phi]=u(1)\phi(1)+u(-1)\phi(-1)$ for all $\phi\in H^1(-1,1)$, $-\Delta:H^1(-1,1)\rightarrow H^1(-1,1)'$ is such that $-\Delta u[\phi]=\int_{-1}^1 u'\phi' dx$ and $i$ is the immersion of $H^1(-1,1)$ into $L^2(-1,1)$. We will describe here the operator
$$\mathcal T=\mathcal F\circ (-\Delta)^{-1}\circ J\circ\mathcal F^{-1}$$
from $l^2(\mathbb N)$ to itself. We observe that the domain of $\mathcal T$ is the subset of those sequences $\tilde u\in l^2(\mathbb N)$, $\tilde u=(a_0,a_1,...,b_0,b_1,...)$ such that the element $\tilde u'$ defined by $$\tilde u'=\frac{\pi^2}{4}(a_0^2,9a_1^2,...,(2k+1)^2a_k^2,...,0,4b_1^2,16b_2^2,...,(2k)^2b_k^2,...)$$ belongs to $l^2(\mathbb N)$. In this way $\mathcal F^{-1}\tilde u(1)$ and $\mathcal F^{-1}\tilde u (-1)$ are well defined.
The operator $\mathcal F^{-1}$ takes an element $\tilde u=(a_0,a_1,...,b_0,b_1,...)\in l^2(\mathbb N)$ to the function $u(x)=b_0+a_0\sin(\frac{\pi}{2}x)+\sum_{k=1}^{+\infty}a_k\sin\left({(\frac{2k+1}{2})\pi x}\right)+b_k \cos\left(\frac{2k\pi x}{2}\right)$. The operator $J$ associates to $u$ the functional $Ju$ such that $Ju[\phi]=u(1)\phi(1)+u(-1)\phi(-1)$. If $\phi\in H^1(-1,1,)$ is such that $\phi(x)=d_0+c_0\sin(\frac{\pi}{2}x)+\sum_{k=1}^{+\infty}c_k\sin\left({(\frac{2k+1}{2})\pi x}\right)+d_k \cos\left(\frac{2k\pi x}{2}\right)$, then
$$ Ju[\phi]=2\sum_{j=0}^{+\infty}(-1)^j c_j\left(\sum_{k=0}^{+\infty}(-1)^k a_k\right)+2\sum_{j=0}^{+\infty}(-1)^j d_j\left(\sum_{k=0}^{+\infty}(-1)^k b_k\right). $$
The operator $-\Delta$ takes $u\in H^1(-1,1)$ to the functional $-\Delta u$ which acts in the following way
$$
-\Delta u[\phi]=\int_0^1 u'\phi'dx=\frac{\pi^2}{4}\left[b_0 d_0 (2j)^2_{|_{j=0}}+a_0c_0+\sum_{j=1}^{+\infty}(2j+1)^2a_jc_j+\sum_{j=1}^{+\infty}(2j)^2 b_j d_j\right], $$
where the term $b_0 d_0 (2j)^2_{|_{j=0}}$ has been added for a very precise reason. We are ready to write formally the infinite matrix which represents the operator $\tilde T$. We call this matrix $M_{\mathcal T}$. It has the following form
\begin{equation} M_{\mathcal T}=\frac{4}{\pi^2}\begin{bmatrix} 2 & -2 & 2 & \cdots & 0 &\cdots & 0 & \cdots \\
-\frac{2}{9} & \frac{2}{9} & -\frac{2}{9} & \cdots & 0 & \cdots & 0 & \cdots\\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \ddots & \vdots \\ \frac{2(-1)^k}{(2k+1)^2} & \cdots & \frac{2(-1)^{k+j}}{(2k+1)^2} & \cdots & 0 & \cdots & \cdots & 0\\ \vdots & \ddots & \vdots & \ddots & \vdots & \ddots & \ddots & \ddots\\
0 & \cdots & 0 & \cdots & \frac{2}{(2l_{|_{l=0}})^2} & -\frac{2}{(2l_{|_{l=0}})^2} & \frac{2}{(2l_{|_{l=0}})^2} & \cdots\\ 0 & \cdots & 0 & \cdots & -\frac{2}{4} & \frac{2}{4} & -\frac{2}{4} & \cdots\\ \vdots & \ddots & \vdots & \ddots & \frac{2(-1)^{l}}{(2l)^2} & \cdots & \frac{2(-1)^{l+m}}{(2l)^2} & \cdots\\ \vdots & \ddots & \vdots & \ddots & \vdots & \ddots & \vdots & \ddots \end{bmatrix}. \end{equation}
The matrix $M_{\mathcal T}$ is of the form
\begin{equation*} \begin{bmatrix} A & 0 \\
0 & B \end{bmatrix}. \end{equation*}
We denoted by $k,j$ and $l,m$ the raws and columns of $A$ and $B$ respectively. In our notation $k,j,l,m$ start from $0$. Clearly the rank of $M_{\mathcal T}$ is $2$. By induction we can compute formulas for the eigenvalues and the eigenvectors. We know that we can study the two blocks separately.
Matrix $A$ has rank $1$ and one non zero eigenvalue, which is $$\mu_A=\frac{8}{\pi^2}\sum_{k=0}^{+\infty}\frac{1}{(2k+1)^2}=1,$$ while all the other eigenvalues are zero. In particular one can show by induction that the characteristic polynomial $P_{A_n}(x)$ is given by
$$ P_{A_n}(x)=\left(\frac{\pi^2 x}{8}\right)^n\left(\frac{\pi^2 x}{8}-\sum_{k=0}^{+\infty}\frac{1}{(2k+1)^2}\right), $$
where $A_n$ is the matrix composed by the first $n+1$ raws and $n+1$ columns of $A$. The eigenvector $u_A$ associated with $\mu_A$ is
$$u_A=(1,-\frac{1}{9},...,\frac{(-1)^k}{(2k+1)^2},...),$$
for $k=0,1,...$. We notice that the eigenvector $\mathcal F^{-1} u_A=\alpha x$ is the eigenfunction of problem (\ref{Ste1}) associated with the eigenvalue $\lambda_A=\mu_A^{-1}=1$, as we expected.
Now we consider the second block $B$ where the term $\frac{1}{(2l)^2_{|_{l=0}}}$ appears. We proceed as in the previous lines and we find that the characteristic polynomial $P_{B_n}(x)$ of the truncated matrix $B_n$ is
$$
P_{B_n}(x)=\left(\frac{\pi^2 x}{8}\right)^{n}\left(\frac{\pi^2 x}{8}-\frac{1}{(2l)^2_{|_{l=0}}}-\sum_{l=1}^{+\infty}\frac{1}{(2l)^2}\right). $$
The only non-zero eigenvalue $\mu_B$ of $B$ is given by
$$
\mu_B=\frac{8}{\pi^2(2l)^2_{|_{l=0}}}+\frac{2}{\pi^2}\sum_{l=1}^{+\infty}\frac{1}{l^2}=\frac{8}{\pi^2(2l)^2_{|_{l=0}}}+\frac{1}{3}, $$
and the associated eigenvector $u_B$ is
$$
u_B=(\frac{1}{(2l)^2_{|_{l=0}}},-\frac{1}{4},...,\frac{(-1)^l}{(2l)^2},...). $$
The expression $\frac{8}{\pi^2(2l)^2_{|_{l=0}}}$ makes apparently no sense. We have found that there is en eigenvalue $\mu_B$ which is $+\infty$ and en eigenvector which has the first entry $+\infty$. When we consider the corresponding eigenvalue and eigenvector of problem (\ref{Ste1}), we find that $\lambda_B=\mu_B^{-1}=\frac{3\pi^2 l^2}{6+\pi^2l^2}_{|_{l=0}}=0$. The corrisponding eigenvector must be necessarily $\alpha l^2_{|_{l=0}}$ times $u_B$, where $\alpha\in\mathbb R$, which means that $u_B=(\alpha,0,0,...)$ which is the constant function. In this way we recovered the eigenvalue $0$ of problem (\ref{Ste1}) corresponding to the constant function.
We recovered the two eigenvalues $\lambda_A=1$ and $\lambda_B=0$ of (\ref{Ste1}) and we observed that all the other eigenvalues, which are the inverse of all the others zero eigenvalues of $M_{\mathcal T}$, have value $+\infty$ in the sense we described here.
\end{comment}
{\bf Acknowledgments.} Large part of the computations in this paper have been performed by the second author in the frame of his PhD Thesis under the guidance of the first author. The authors acknowledge financial support from the research project `Singular perturbation problems for differential operators', Progetto di Ateneo of the University of Padova and from the research project `INdAM GNAMPA Project 2015 - Un approccio funzionale analitico per problemi di perturbazione singolare e di omogeneizzazione'. The authors are members of the Gruppo Nazionale per l'Analisi Matematica, la Probabilit\`{a} e le loro Applicazioni (GNAMPA) of the Istituto Nazionale di Alta Matematica (INdAM).
\end{document}
\begin{document}
\subjclass[2020]{Primary: 14C20. Secondary: 14J45, 14J70}
\keywords{effective cones, base locus lemmas, log Fano varieties, Mori dream spaces, unexpected hypersurfaces}
\title{Log Fano blowups of mixed products of projective spaces and their effective cones}
\begin{abstract} We compute the cones of effective divisors on blowups of $\mathbb P^1 \times \mathbb P^2$ and $\mathbb P^1 \times \mathbb P^3$ in up to 6 points. We also show that all these varieties are log Fano, giving a conceptual explanation for the fact that all the cones we compute are rational polyhedral. \end{abstract}
Cones of divisors are combinatorial objects that encode key geometric information about algebraic varieties. Understanding the structure of these cones is therefore a basic problem in algebraic geometry. There are important theorems describing the structure of these cones for important classes of varieties: for example, Fano varieties have rational polyhedral nef cones by the Cone Theorem, and rational polyhedral effective cones by Birkar--Cascini--Hacon--McKernan \cite{BCHM}.
In general, however, these cones can be difficult to understand, even for simple varieties such as blowups of sets of points in projective space or products of projective spaces. Mukai \cite{Mukai05} and Castravet--Tevelev \cite{CT06} proved that the blowup of $\left(\mathbb P^n\right)^r$ in a set of $s$ points in very general position is a Mori dream space, and in particular has rational polyhedral effective cone, if and only if \begin{align*}
\frac{1}{r+1}+\frac{1}{s-n-1}+\frac{1}{n+1}>1. \end{align*} These blowups are highly symmetric from the point of view of divisor theory: there is a naturally-defined Weyl group action on the Picard group, and this is the key ingredient in describing the effective cones and Cox ring when they are finitely generated.
In this paper, we focus instead on certain ``asymmetric'' varieties of blowup type: namely, blowups of $\mathbb P^1 \times \mathbb P^2$ and $\mathbb P^1 \times \mathbb P^3$ in sets of up to 6 points in general position. The presence of factors of different dimensions means that the Picard groups of these varieties are less symmetric than in the previous examples, so we use different ideas to understand divisors. Our method combines the following techniques: {\it induction}, that is, pulling back divisors from lower blowups to get information about higher blowups; {\it restriction}, that is, obtaining necessary conditions for effectivity by restriction to suitable subvarieties whose cone of effective divisors is understood; and {\it base locus lemmas}, showing that divisors violating a numerical inequality must contain a certain fixed divisor as a component.
This method, that we shall refer to as the \emph{cone method}, is outlined in Section \ref{effconemethod}. Our main results, contained in Sections \ref{section-dim3}-\ref{section-dim4}, give explicit descriptions of the cones of effective divisors on these varieties and describe the geometry of the generating classes. For each such cone, we will give a list of extremal rays and a list of inequalities cutting it out: the latter corresponds to giving extremal rays of the corresponding cone of moving curves. Along the way, we also compute the effective cones of divisors of some threefolds given as blowups of $\mathbb P^3$ along a line and up to six points in general position (Section \ref{section-dim3-extra}).
More conceptually, one can also ask when varieties of the above kind are log Fano. For blowups of $\mathbb P^2$, being Fano is equivalent to being a Mori dream space and for these cases the Cox rings were described by Batyrev--Popov in \cite{BP04}. More recently, Araujo--Massarenti \cite{AM16} and Lesieutre--Park \cite{LP17} proved that the same holds in higher dimension, namely that blowups of $\mathbb P^n$ or of products of the form $\left(\mathbb P^n\right)^m$ in points in very general position are log Fano if and only if they are Mori dream spaces.
The same questions are open for mixed products, i.e. for blowups of $\mathbb P^{n_1}\times\cdots\times\mathbb P^{n_m}$ with not all $n_k$ equal. For blowups of $\mathbb P^1 \times \mathbb P^2$ and of $\mathbb P^1 \times \mathbb P^3$ in up to 6 points in general position, we show in Section \ref{weak-log} that all of these varieties are log Fano, and therefore Mori dream spaces. We also use the results of Mukai and Castravet--Tevelev to deduce that the blowup of $\mathbb P^1 \times \mathbb P^n$ in sufficiently many points is not a Mori dream space; Theorem \ref{theorem-summary} summarises what we know in this direction. This leaves a small number of open cases in each dimension, which we collect in Questions \ref{question1}--\ref{question3}.
{\bf Acknowledgements:} The second author is a member of INdAM-GNSAGA and she was partially supported by the EPSRC grant EP/S004130/1. The authors thank Hamid Abban, Izzet Coskun, and Hendrik S\"{u}{\ss} for valuable suggestions and corrections.
\section{Preliminaries}\label{preliminaries} We work throughout over the complex numbers $\mathbb C$.
For a variety $X$, we write $N^1(X)$ to denote the group of Cartier divisors on $X$ modulo numerical equivalence, tensored with $\mathbb R$. This is a finite-dimensional real vector space whose dimension is called the {\it Picard rank} of $X$, and denoted $\rho(X)$. Dually, we write $N_1(X)$ for the group of $1$-cycles on $X$ modulo numerical equivalence, tensored with $\mathbb R$. When $X$ is smooth, intersection of divisors and curves gives a perfect pairing $N^1(X) \times N_1(X) \rightarrow \mathbb R$. Where appropriate, we will use the same symbol to denote a divisor and its class in $N^1(X)$, or a curve and its class in $N_1(X)$. We write $\operatorname{Eff}(X)$ to denote the cone in $N^1(X)$ spanned by classes of effective divisors; in general this cone need not be open or closed, but in all our examples it will be rational polyhedral, in particular closed.
The main objects of interest in this paper will be blowups of products of two projective spaces in a collection of points in general position. A statement holds for a collection of points in {\it general position}, respectively {\it very general position}, if it holds when the corresponding element in the Hilbert scheme of $s$ points of $\mathbb P^m \times \mathbb P^n$ lies in the complement of a proper Zariski closed subset, respectively in the complement of a countable union of Zariski closed subsets. For convenience we fix the following notation: \begin{itemize} \item $X_{m,n,s}$: the blowup of $\mathbb P^m \times \mathbb P^n$ in a set of $s$ points in general position $\left\{p_1,\ldots,p_s\right\}$; \item $\pi_m, \, \pi_n$: the natural morphisms $X_{m,n,s} \rightarrow \mathbb P^m$ and $X_{m,n,s} \rightarrow \mathbb P^n$ respectively; by abuse of notation, we will use the same symbols to denote the corresponding morphisms $\mathbb P^m \times \mathbb P^n \rightarrow \mathbb P^m$ and $\mathbb P^m \times \mathbb P^n \rightarrow \mathbb P^n$;
\item $H_1, \, H_2$: the pullbacks of the hyperplane classes on $\mathbb P^m$ and $\mathbb P^n$ via $\pi_m$ and $\pi_n$ respectively; \item $l_1, \, l_2$: the classes of a line contained in a $\mathbb P^m$-fibre of $\pi_n$, respectively a line contained in a $\mathbb P^n$-fibre of $\pi_m$;
\item $E_i, \, e_i$: the exceptional divisor of the blowup of the point $p_i$, respectively the class of a line contained in $E_i$. \end{itemize}
The following proposition records the intersection numbers we need. All statements are straightforward consequences of general results about intersection theory of blowups; a reference is \cite[Proposition 13.12]{EH}. \begin{proposition}\label{intersection-table}
Let $X_{m,n,s}$ be as above. Then
\begin{enumerate}
\item[(a)] The vector spaces $N^1(X_{m,n,s})$ and $N_1(X_{m,n,s})$ have the following bases:
\begin{align*}
N^1(X_{m,n,s}) &= \langle H_1, H_2, E_1, \ldots, E_s \rangle, \\
N_1(X_{m,n,s}) &= \langle l_1, l_2, e_1, \ldots, e_s \rangle.
\end{align*} \item[(b)] We have the following intersection numbers among divisors:
\begin{align*}
H_1^m \cdot H_2^n &=1,\\
H_1^p \cdot H_2^{m+n-p} &=0 \quad \text{ for } p \neq m, \\
H_1^p \cdot E_i^{m+n-p} = H_2^p \cdot E_i^{m+n-p} &=0 \ \text{ for all } p >0, \ i=1,\ldots,s,\\
E_i^{m+n} &= (-1)^{m+n-1}.
\end{align*} \item[(c)] We have the following intersection numbers between divisors and curves: \begin{align*}
H_i \cdot l_j &= \delta_{ij},\\
H_i \cdot e_j &= 0 \quad \text{ for } i=1,2, \, j=1,\ldots,s,\\
E_i \cdot e_j &= -\delta_{ij}. \end{align*}
\end{enumerate} \end{proposition}
In particular this allows us to determine the numerical classes of curves on blowups, as follows. \begin{corollary}\label{corollary-curveclass} Let $C$ be the proper transform on $X_{m,n,s}$ of a curve of bidegree $(d_1,d_2)$ in $\mathbb P^m \times \mathbb P^n$ with multiplicity $m_i$ at the point $p_i$. Then the class of $C$ in $N_1(X_{m,n,s})$ is
\begin{align*}
d_1l_1 + d_2l_2 - \sum_{i=1}^s m_i e_i.
\end{align*} \end{corollary}
\begin{proof}
We have the intersection numbers
\begin{align*}
C \cdot H_i &= d_i \quad \text{ for } i=1,2,\\
C \cdot E_j &= m_j \quad \text{ for } j=1,\dots, s.
\end{align*}
Since the intersection pairing on $X_{m,n,s}$ is perfect, the given formula then follows from Proposition \ref{intersection-table} (a) and (c).
\end{proof} In particular we deduce the following formula, which we use in Section \ref{weak-log}. For any divisor $D$ on $X_{m,n,s}$, the class of $D$ can be written in the form \begin{equation}\label{general divisor}
d_1H_1+d_2H_2 - \sum_{i=1}^s m_i E_i, \end{equation} for some integers $d_1, \, d_2, \, m_1, \ldots, m_s$. \begin{corollary}\label{corollary-topselfint}
On the variety $X_{m,n,s}$ consider a divisor $D$ with class \eqref{general divisor}.
Then the top self-intersection number of $D$ is given by
\begin{align*}
D^{m+n} &= d_1^m d_2^n {m+n \choose n} -\sum_{i=1}^s m_i^{m+n}.
\end{align*} \end{corollary}
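It may be convenient to have this formula in executable form; the following small Python function is an illustrative sketch of ours (the function name and the sample class are not taken from the text).
\begin{verbatim}
from math import comb

def top_self_intersection(m, n, d1, d2, mult):
    # D^(m+n) for D = d1*H1 + d2*H2 - sum_i mult[i]*E_i on X_{m,n,s}
    return d1**m * d2**n * comb(m + n, n) - sum(mi**(m + n) for mi in mult)

# e.g. the class 2H_1 + 3H_2 - 2(E_1+...+E_6) on X_{1,2,6}:
print(top_self_intersection(1, 2, 2, 3, [2]*6))   # 2*9*3 - 6*8 = 6
\end{verbatim}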
\subsection*{Virtual dimension and expected dimension}
Given a divisor $D$ on $X_{m,n,s}$, we can give a lower bound for the dimension of the linear system $|D|$, obtained by a simple parameter count.
\begin{definition}\label{virtual dimension}
Let $D$ be a divisor on $X_{m,n,s}$ with class \eqref{general divisor}. The \emph{virtual dimension} of the linear system $|D|$ is the integer $$
\operatorname{vdim}|D|={{m+d_1}\choose m}{{n+d_2}\choose n}-\sum_{i=1}^s{{m+n+m_i-1}\choose {m+n}}-1.$$
The \emph{expected dimension} of $|D|$ is $$
\operatorname{edim}|D|=\max\{-1,\operatorname{vdim}|D|\}. $$ \end{definition}
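The definition above translates directly into code. The following sketch of ours assumes $d_1,d_2\geq 0$ and $m_i\geq 0$, so that the binomial coefficients can be evaluated literally.
\begin{verbatim}
from math import comb

def vdim(m, n, d1, d2, mult):
    # virtual dimension of |d1*H1 + d2*H2 - sum_i mult[i]*E_i| on X_{m,n,s},
    # assuming d1, d2 >= 0 and all multiplicities nonnegative
    return (comb(m + d1, m)*comb(n + d2, n)
            - sum(comb(m + n + mi - 1, m + n) for mi in mult) - 1)

def edim(m, n, d1, d2, mult):
    return max(-1, vdim(m, n, d1, d2, mult))

# e.g. |2H_1 + 3H_2 - 2(E_1+...+E_6)| on X_{1,2,6}:
print(vdim(1, 2, 2, 3, [2]*6))   # 3*10 - 6*4 - 1 = 5
\end{verbatim}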
\begin{lemma}\label{vdim lower bound}
We have $\dim|D|\ge\operatorname{edim}|D|$. \end{lemma} \begin{proof}
If $d_i<0$, $\operatorname{vdim}|D|<0$ and $|D|$ is empty, so the statement holds. If $m_i\le0$ then $\operatorname{vdim}|D|=\operatorname{vdim}|D+m_iE_i|$ and
$|D|=(-m_i)E_i+|D+m_iE_i|$, so the statement holds for $D$ if it holds for all divisors with $m_i \geq 0$.
Assume now that $d_1,d_2,m_i\ge0$ for every $1\le i\le s$. Notice that a bidegree $(d_1,d_2)$ hypersurface of $\mathbb P^m\times\mathbb P^n$ is the zero locus of a bihomogeneous polynomial $F_{(d_1,d_2)}$ in $n+m+2$ variables that depends on ${{m+d_1}\choose m}{{n+d_2}\choose n}$ parameters. For such a hypersurface, the passage through a point with multiplicity $m_i$ corresponds to the vanishing of all partial derivatives of $F_{(d_1,d_2)}$ of order $m_i-1$ and therefore it imposes ${{m+n+m_i-1}\choose {m+n}}$ linear conditions. Since elements of the linear system $|D|$ are in $1-1$ correspondence with the elements of the projectivisation of the vector space of such polynomials, we can conclude. \end{proof} Later we will use the following description of the virtual dimension. This lemma is well known to experts but we are not aware of a reference. \begin{lemma}\label{vdimeuler}
For $D$ a divisor of the form \eqref{general divisor} with $m_i \geq 0$ for all $i$, the following relation between virtual dimension and Euler characteristic holds:
\begin{align*}
\operatorname{vdim}|D| &= \chi(X,O_X(D))-1.
\end{align*} \end{lemma}
\begin{proof}
Let $D$ be a divisor with class $d_1H_1+d_2H_2-\sum_i m_iE_i$ where $d_1,d_2$, and the $m_i$ are all nonnegative integers. We will prove the claim by induction on the natural number $M=\sum_i m_i$.
If $M=0$ then $D=d_1H_1+d_2H_2$ and both sides of the claimed equality equal
\begin{align*}
{m+d_1 \choose m} {n + d_2 \choose n}-1.
\end{align*}
Now assume the equality holds for $M=k-1$; we will prove it for $M=k$. Let $D$ be a divisor of the form \eqref{general divisor} with $M=k$. Assume without loss of generality that $m_1>0$, and define $D^\prime = D+E_1$: note that the divisor $D^\prime$ has $M=k-1$, so by our induction hypothesis we have $\operatorname{vdim}|D^\prime|=\chi(X,O_X(D^\prime))-1$.
On one hand we have
\begin{align*}
\operatorname{vdim}|D^\prime|-\operatorname{vdim}|D| &= {m+n+m_1-1 \choose m+n} - {m+n+m_1-2 \choose m+n} \\
&= {m+n+m_1-2 \choose m+n-1 }.
\end{align*}
On the other hand, we can twist the ideal sheaf sequence for $E_1$ by $O_X(D^\prime)$ to get the exact sequence
\begin{align*}
0 \rightarrow O_X(D) \rightarrow O_X(D^\prime) \rightarrow O_{E_1}(D^\prime|_{E_1}) \rightarrow 0
\end{align*}
which gives
\begin{align*}
\chi(X,O_X(D^\prime)) - \chi(E_1, O_{E_1}(D^\prime|_{E_1})) &= \chi(X,O_X(D)).
\end{align*}
Now $E_1 \cong \mathbb P^{m+n-1}$ and $D^\prime|_{E_1} \cong O_{\mathbb P^{m+n-1}}(m_1-1)$ which implies
\begin{align*}
\chi(E_1, O_{E_1}(D^\prime|_{E_1}))
&= {m+n+m_1-2 \choose m+n-1}.
\end{align*} Using these expressions and the induction hypothesis we get
\begin{align*}
\operatorname{vdim}|D| &= \operatorname{vdim}|D^\prime| - {m+n+m_1-2 \choose m+n-1 }\\
&= \chi(X,O_X(D^\prime))-1 - {m+n+m_1-2 \choose m+n-1 }\\
&= \chi(X,O_X(D^\prime))-1-\chi(E_1, O_{E_1}(D^\prime|_{E_1}))\\
&= \chi(X,O_X(D))-1.
\end{align*} \end{proof}
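The combinatorial identity behind the induction step above, namely $\operatorname{vdim}|D^\prime|-\operatorname{vdim}|D|={m+n+m_1-2 \choose m+n-1}$, can be spot-checked with the \texttt{vdim} function from the previous sketch; the sample values below are arbitrary.
\begin{verbatim}
from math import comb

# assumes the vdim function from the previous sketch is in scope
m, n, d1, d2 = 1, 3, 2, 4
mult = [3, 2, 1]
m1 = mult[0]
diff = vdim(m, n, d1, d2, [m1 - 1] + mult[1:]) - vdim(m, n, d1, d2, mult)
print(diff == comb(m + n + m1 - 2, m + n - 1))   # expected: True
\end{verbatim}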
\subsection*{Base locus lemmas} An important ingredient in our computations will be base locus lemmas showing that effective divisor classes which violate certain numerical inequalities must contain a certain fixed divisor in their base locus. The following lemma gives a convenient way to prove several of these results.
\begin{lemma}\label{lemma-baselocusgeneral}
Let $X$ be a smooth projective variety. Let $C \in N_1(X)$ be a curve class, and let $F$ be an irreducible reduced divisor on $X$ such that irreducible curves with class $C$ cover a Zariski-dense subset of $F$. Let $f=F \cdot C$, and assume that $f<0$.
If $D$ is an effective divisor class on $X$ such that
\begin{align*}
D \cdot C = d <0
\end{align*}
then the divisor $F$ is contained in the base locus $\operatorname{Bs}(D)$ with multiplicity at least $\left \lceil{d/f} \right \rceil$. \end{lemma} \begin{proof} Since $D \cdot C<0$, every curve in the class $C$ must be contained in every effective divisor in the class $D$, so $F \subset \operatorname{Bs}(D)$. Now replace $D$ with $D-F$ and continue: the same argument applies to $D-kF$ as long as $d-kf<0$. \end{proof} In particular we immediately deduce base locus lemmas for two kinds of fixed divisors on our varieties: \begin{lemma}\label{lemma-baselocusexceptionals} On the variety $X_{m,n,s}$ consider an effective divisor $D$ with class $
d_1H_1+d_2H_2 - \sum_{i=1}^s m_i E_i$.
Then:
\begin{enumerate}
\item[(a)] for each $i$, the exceptional divisor $E_i$ is contained in $\operatorname{Bs}(D)$ with multiplicity at least $\operatorname{max}(0,-m_i)$;
\item[(b)] in the case $m=1$, the unique effective divisor $F_i$ with class $H_1-E_i$ is contained in $\operatorname{Bs}(D)$ with multiplicity at least $\operatorname{max}(0,m_i-d_2)$.
\end{enumerate}
\end{lemma} \begin{proof} To prove $(a)$, apply Lemma \ref{lemma-baselocusgeneral} with $C=e_i$, the class of a line in $E_i$. Note that $E_i \cdot e_i = -1$.
To prove $(b)$, apply Lemma \ref{lemma-baselocusgeneral} with $C=l_2-e_i$. This is the class of the proper transform on $X_{1,n,s}$ of any line in a fibre of $\mathbb P^1 \times \mathbb P^n \rightarrow \mathbb P^1$ which passes through $p_i$. These lines cover the fibre, hence their proper transforms cover the proper transform of the fibre. Note that $(H_1-E_i)\cdot(l_2-e_i)=-1$. \end{proof}
\subsection*{Effective cones of del Pezzo surfaces}
In this subsection we record the effective cones of various del Pezzo surfaces. These cones are described in standard references such as \cite{manin}. For this paper, it will be convenient to view these surfaces as blowups of $\mathbb P^1 \times \mathbb P^1$ rather than of $\mathbb P^2$. In our notation, $X_{1,1,s}$ denotes a del Pezzo surface of degree $8-s$.
\begin{proposition}\label{proposition-x115} The effective cone $\operatorname{Eff}(X_{1,1,5})$ is given by the generators below left and the inequalities below right.
\begin{minipage}[t]{0.4\textwidth}
\begin{align*} \rowcolors{2}{white}{gray!15} \begin{array}{ccccccc} H_1 & H_2 & E_1 & E_2 & E_3 & E_4 & E_5\\ \hline\hline
0 & 0 & 1 & 0 & 0 & 0 & 0\\ 1 & 0 & -1 & 0 & 0 & 0 & 0\\ 1 & 1 & -1 & -1 & -1 & 0 & 0\\ 2 & 1 & -1 & -1 & -1 & -1 & -1\\ \end{array} \end{align*} \end{minipage}\quad \begin{minipage}[t]{0.4\textwidth}
\begin{align*} \rowcolors{2}{white}{gray!15} \begin{array}{ccccccc} d_1 & d_2 & m_1 & m_2 & m_3 & m_4 & m_5\\ \hline\hline
1 & 0 & 0 & 0 & 0 & 0 & 0\\ 1 & 1 & 1 & 0 & 0 & 0 & 0\\ 1 & 1 & 1 & 1 & 0 & 0 & 0\\ 1 & 2 & 1 & 1 & 1 & 0 & 0\\ 1 & 2 & 1 & 1 & 1 & 1 & 0\\ 1 & 3 & 1 & 1 & 1 & 1 & 1\\ 2 & 2 & 2 & 1 & 1 & 1 & 0\\ 2 & 2 & 2 & 1 & 1 & 1 & 1\\ 2 & 3 & 2 & 2 & 1 & 1 & 1\\ 3 & 3 & 2 & 2 & 2 & 2 & 1\\ \end{array} \end{align*} \end{minipage} \end{proposition}
\begin{proposition}\label{proposition-x116} The effective cone $\operatorname{Eff}(X_{1,1,6})$ is given by the generators below left and the inequalities below right.
\begin{minipage}[t]{0.4\textwidth}
\begin{align*} \rowcolors{2}{white}{gray!15} \begin{array}{cccccccc} H_1 & H_2 & E_1 & E_2 & E_3 & E_4 & E_5 & E_6\\ \hline\hline
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\ 1 & 0 & -1 & 0 & 0 & 0 & 0 & 0\\ 1 & 1 & -1 & -1 & -1 & 0 & 0 & 0\\ 2 & 1 & -1 & -1 & -1 & -1 & -1 & 0\\ 2 & 2 & -2 & -1 & -1 & -1 & -1 & -1\\ \end{array} \end{align*} \end{minipage}\quad \begin{minipage}[t]{0.4\textwidth}
\begin{align*} \rowcolors{2}{white}{gray!15} \begin{array}{cccccccc} d_1 & d_2 & m_1 & m_2 & m_3 & m_4 & m_5 & m_6\\ \hline\hline
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0\\ 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0\\ 1 & 2 & 1 & 1 & 1 & 0 & 0 & 0\\ 1 & 2 & 1 & 1 & 1 & 1 & 0 & 0\\ 1 & 3 & 1 & 1 & 1 & 1 & 1 & 0\\ 1 & 3 & 1 & 1 & 1 & 1 & 1 & 1\\ 2 & 2 & 2 & 1 & 1 & 1 & 0 & 0\\ 2 & 2 & 2 & 1 & 1 & 1 & 1 & 0\\ 2 & 3 & 2 & 2 & 1 & 1 & 1 & 0\\ 2 & 3 & 2 & 2 & 1 & 1 & 1 & 1\\ 2 & 4 & 2 & 2 & 2 & 1 & 1 & 1\\ 3 & 3 & 2 & 2 & 2 & 2 & 1 & 0\\ 3 & 3 & 2 & 2 & 2 & 2 & 1 & 1\\ 3 & 3 & 3 & 2 & 1 & 1 & 1 & 1\\ 3 & 4 & 2 & 2 & 2 & 2 & 2 & 2\\ 3 & 4 & 3 & 2 & 2 & 2 & 1 & 1\\ 3 & 5 & 3 & 2 & 2 & 2 & 2 & 2\\ 4 & 4 & 3 & 3 & 2 & 2 & 2 & 1\\ 4 & 5 & 3 & 3 & 3 & 2 & 2 & 2\\ 5 & 5 & 3 & 3 & 3 & 3 & 3 & 2\\ \end{array} \end{align*} \end{minipage} \end{proposition} \noindent {\bf Writing cones } Let us spell out the conventions we use in writing effective cones in the tables in Propositions \ref{proposition-x115} and \ref{proposition-x116} above, and subsequently in this paper. Each effective cone we compute is described in two equivalent ways: by a list of generating classes, and by a list of defining inequalities.
For the table giving the list of generators, a row of the table of the form $(a_1 \ a_2 \ b_1 \cdots b_n)$ corresponds to a generator $a_1H_1+a_2H_2 + \sum_i b_i E_i$ of the effective cone. For example, in the tables of Proposition \ref{proposition-x116} above, Row 3 of the left table corresponds to the generator \begin{align*}
H_1+H_2-E_1-E_2-E_3 \end{align*} For the tables giving the list of defining inequalities, a row with entries $(\alpha_1 \ \alpha_2 \ \beta_1 \cdots \beta_n)$ corresponds to an inequality $\alpha_1 d_1 + \alpha_2 d_2 \geq \sum_i \beta_ i m_i$ which must be satisfied by an effective divisor written in the form (\ref{general divisor}). So for example in Proposition \ref{proposition-x116} above, Row 3 of the right table above corresponds to an inequality \begin{align*}
d_1+d_2 \geq m_1+m_2 \end{align*} A key point to stress is that these lists should be read {\bf up to permutation}: for a given generator written in the list, all generators obtained by suitable permutations are also included in the list, and similarly for inequalities. So for example, as well as the class and inequality written above, the lists of generators and inequalities respectively for $\operatorname{Eff}(X_{1,1,6})$ also include all of the following: \begin{align*}
H_1+H_2-E_i-E_j-E_k,\\
d_1+d_2\ge m_i+m_j, \end{align*} where $i, \, j, \, k$ are distinct indices in the set $\{1,\ldots,6\}$.
Finally, if $m=n$, as in Propositions \ref{proposition-x115} and \ref{proposition-x116}, the classes $H_1,H_2$ (resp. the degrees $d_1,d_2$) may also be swapped in the list of generators (resp. inequalities).
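For instance, since $m=n=1$ there, together with the generator $H_1-E_1$ and the inequality $d_1+2d_2\ge m_1+m_2+m_3$, the lists for $\operatorname{Eff}(X_{1,1,6})$ also contain
\begin{align*}
H_2-E_1 \qquad\text{and}\qquad 2d_1+d_2\ \ge\ m_1+m_2+m_3 .
\end{align*}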
\section{The cone method}\label{effconemethod}
Let $X_{m,n,s}$ be the blowup of $\mathbb P^m\times\mathbb P^n$ in $s$ points in general position. In this section, we outline a method that aims to determine $\operatorname{Eff}(X_{m,n,s})$ from several pieces of information: knowledge of the effective cone $\operatorname{Eff}(X_{m,n,s-1})$, base locus lemmas for fixed divisors, and the effective cones of certain subvarieties. In later sections, this method will be applied recursively to determine the effective cones of $X_{1,n,s}$ for $n=2, \, 3$ and $s \leq 6$.
Using the notation of Section \ref{preliminaries}, we write a divisor $D$ in $X_{m,n,s}$ as $$ D=d_1H_1+d_2H_2-\sum_{i=1}^sm_iE_i,$$ where $d_1,d_2,m_1,\dots,m_s$ are integers.
The method consists of the following steps. \begin{enumerate}[leftmargin=+.65in] \item[\textbf{Step 1}] First of all, notice that $D+m_iE_i$ can be thought of as a divisor on $X_{m,n,s-1}$. Assume that we know the inequalities cutting out the cone $\operatorname{Eff}(X_{m,n,s-1})$. These inequalities form a set of necessary and sufficient conditions, in the variables $d_1,d_2$, $m_1,\dots,m_{i-1},m_{i+1},\dots,m_{s}$, for $D+m_iE_i$ to be effective on $X_{m,n,s-1}$, and a set of necessary conditions for $D$ to be effective on $X_{m,n,s}$, thanks to the obvious relation $D+m_iE_i\ge D$. Therefore, letting $i$ vary between $1$ and $s$, we obtain a list of inequalities in $d_1,d_2,m_1,\dots,m_{s}$ giving rise to a cone that contains $\operatorname{Eff}(X_{m,n,s})$; a concrete instance is spelled out after this list. \item[\textbf{Step 2}] Secondly, for an extremal ray of the cone obtained in \textbf{Step 1} that is spanned by a fixed divisor $F$, we will compute a \emph{base locus lemma} for $F$, namely we will give an integer $K_F(D)$, depending on $F$ and on $D$, such that, if $K_F(D)$ is positive, $F$ is contained in the base locus of $D$ at least $K_F(D)$ times. Then we shall add the inequality $K_F(D)\le0$ to the list of \textbf{Step 1}, giving rise to a smaller cone. Notice that all effective divisors that are excluded by the new inequality must contain $F$ as a component. \item[\textbf{Step 3}] Finally, we compute the ray generators of the cone determined in \textbf{Step 2}, potentially identifying new extremal rays. If we can prove that the latter are all effective, then we add to the list the rays generated by the fixed divisors $F$ excluded in \textbf{Step 2}, and we obtain a complete list of generators of $\operatorname{Eff}(X_{m,n,s})$.
\item[\textbf{Step 4}]
When the output of \textbf{Step 3} has some non-effective extremal rays, we need to refine the starting cone by adding additional necessary conditions for the effectivity of a general divisor $D$. The key idea here is that if $M$ is a movable divisor on $X_{m,n,s}$ and if $|D|_M|=\emptyset$, then $|D|=\emptyset$ too. Therefore the inequalities describing the cone of effective divisors on $M$ will yield necessary conditions for $D$ to be effective. These conditions will be added to those from \textbf{Step 1}. This will have the effect of shrinking the starting cone, and we run the procedure again. \end{enumerate}
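As a concrete illustration of \textbf{Step 1}: the inequality $d_1+2d_2\ge m_1+m_2$ cuts out part of $\operatorname{Eff}(X_{1,2,2})$ (see Theorem \ref{Eff 122} below), so every effective divisor $D=d_1H_1+d_2H_2-\sum_{i=1}^3m_iE_i$ on $X_{1,2,3}$ satisfies
\begin{align*}
d_1+2d_2\ \ge\ m_i+m_j \qquad \text{for every pair } i<j,
\end{align*}
since each of the classes $D+m_kE_k$ is again effective when regarded as a divisor on $X_{1,2,2}$.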
Computationally, we implement this method for a given variety using the software package Normaliz \cite{normaliz}. For each variety $X_{m,n,s}$ that we consider, our set of supplementary files contains 4 Normaliz files with filenames of the form {\tt Xmns-*.*} and with the following contents: \begin{itemize} \item in the input file {\tt Xmns-ineqs.in} we collect the inequalities obtained in \textbf{Step 1}, \textbf{Step 2} and \textbf{Step 4} (a schematic example of the input format is given after this list); \item the output file {\tt Xmns-ineqs.out} then contains, among other things, a list of generators of the restricted cone cut out by all these inequalities; \item in the input file {\tt Xmns-gens.in} we collect the generators of the restricted cone from the previous step, together with all known fixed divisor classes on $X_{m,n,s}$, following \textbf{Step 3};
\item finally, the output file {\tt Xmns-gens.out} contains the output of \textbf{Step 3}, namely both a list of generators and a list of defining inequalities. \end{itemize}
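To fix ideas, the following is a minimal, schematic sketch of such an input file, written here for the three inequalities $d_1\ge0$, $d_2\ge0$ and $d_1+d_2\ge m_1$ of $\operatorname{Eff}(X_{1,2,1})$ (cf. Theorem \ref{Eff 122} below) in the coordinates $(d_1,d_2,m_1)$: each row of the matrix is the coefficient vector of one homogeneous inequality, and the last two lines are computation goals asking Normaliz for the extreme rays and the support hyperplanes of the cone they cut out. The supplementary files are organised along these lines, although they are larger and the precise computation goals used there may differ.
\begin{verbatim}
amb_space 3
inequalities 3
1 0  0
0 1  0
1 1 -1
ExtremeRays
SupportHyperplanes
\end{verbatim}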
In this article we will apply this method to compute the effective cones of the blowups $X_{1,n,s}$, with $n=2,3$ and $s\le 6$, but we believe the method can be applied in more general settings.
\section{Blowups of $\mathbb P^1 \times \mathbb P^2$}\label{section-dim3}
In this section we consider the threefolds $X_{1,2,s}$, the blowup of $\mathbb P^1\times\mathbb P^2$ in $s$ points in general position.
\subsection*{1 or 2 points}
The blowup of $\mathbb P^1\times\mathbb P^2$ in one or two points is a smooth toric threefold, therefore the cones of divisors are generated by the boundary divisors.
\begin{theorem}\label{Eff 122} The effective cone of $X_{1,2,1}$ is given by the list of generators below left and inequalities below right:
\begin{minipage}[t]{0.4\textwidth}
\begin{align*} \rowcolors{2}{white}{gray!15} \begin{array}{ccc} H_1 & H_2 & E_1 \\ \hline\hline
0 & 0 & 1 \\ 1 & 0 & -1 \\ 0 & 1 & -1 \\ \end{array}
\end{align*}
\end{minipage}\quad
\begin{minipage}[t]{0.4\textwidth}
\begin{align*} \rowcolors{2}{white}{gray!15} \begin{array}{ccc} d_1 & d_2 & m_1 \\ \hline\hline
1 & 0 & 0 \\ 0 & 1 & 0 \\ 1 & 1 & 1 \\ \end{array}
\end{align*}
\ \end{minipage}
The effective cone of $X_{1,2,2}$ is given by the list of generators below left and inequalities below right:
\begin{minipage}[t]{0.4\textwidth}
\begin{align*} \rowcolors{2}{white}{gray!15} \begin{array}{cccc} H_1 & H_2 & E_1 & E_2 \\ \hline\hline
0 & 0 & 1 & 0 \\ 1 & 0 & -1 & 0 \\ 0 & 1 & -1 & -1 \\ \end{array}
\end{align*}
\end{minipage}\quad
\begin{minipage}[t]{0.4\textwidth}
\begin{align*} \rowcolors{2}{white}{gray!15} \begin{array}{cccc} d_1 & d_2 & m_1 & m_2 \\ \hline\hline
1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 1 & 1 & 1 & 0 \\ 1 & 2 & 1 & 1 \\ \end{array}
\end{align*} \end{minipage} \end{theorem} \begin{proof} The two tables on the left hand side contain the lists of boundary divisors: they span the extremal rays of the effective cones. The lists on the right hand side are computed by duality; we leave the details to the reader. \end{proof}
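For instance, for $X_{1,2,2}$ the last inequality of the table, $d_1+2d_2\ge m_1+m_2$, can be checked directly against the boundary generators: writing $\ell(D)=d_1+2d_2-m_1-m_2$, one computes
\begin{align*}
\ell(E_i)=1,\qquad \ell(H_1-E_i)=0,\qquad \ell(H_2-E_1-E_2)=0 ,
\end{align*}
so $\ell$ is non-negative on the effective cone and its zero locus supports a facet.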
\begin{remark}\label{fixed} All extremal rays of the effective cone of divisors of $X_{1,2,2}$ are fixed. Exceptional divisors and divisors $H_1-E_i$ are fixed, cf. Lemma \ref{lemma-baselocusexceptionals}. Finally, consider the image of $p_i,p_j$ under the projection onto the second factor: $H_2-E_i-E_j$ is the pre-image of the line determined by these points and is therefore a fixed divisor isomorphic to the blowup of $\mathbb P^1\times\mathbb P^1$ at two points in general position. \end{remark}
\subsubsection*{Base locus lemmata for degree one divisors}\label{base locus degree 1} In Lemma \ref{lemma-baselocusexceptionals}(b), we computed a lower bound for the multiplicity of containment of the fixed divisor $H_1-E_i$ in the base locus of an effective divisor. We now do the same for the fixed divisor $H_2-E_i-E_j$.
\begin{lemma}\label{bsh2-ei-ej} Let $D=d_1H_1+d_2H_2-\sum_{i=1}^s m_i E_i$ be an effective divisor on $X_{1,2,s}$, with $s\ge 2$. The divisor $H_2-E_i-E_j$ is contained in $\operatorname{Bs}(D)$ with multiplicity at least $\max\{0,m_i+m_j-d_1-d_2\}$. \end{lemma}
\begin{proof}
Without loss of generality, we may assume that $i=1,j=2$. Under the assumption that $m_i\ge 0$ for every $i=1,\dots,s$, it is enough to prove the statement for divisors on $X_{1,2,2}$, namely for $D=d_1H_1+d_2H_2-m_1E_1-m_2E_2$. If $m_1+m_2-d_1-d_2\le0$ the statement is trivial, therefore we will assume that $m_1+m_2-d_1-d_2>0$.
Recall from Remark \ref{fixed}, that the divisor $F=H_2-E_1-E_2$ is isomorphic to $X_{1,1,2}$ and its Picard group is generated by the classes $l_1,l_2$ and $e_1,e_2$ (see Proposition \ref{intersection-table}). The surface is swept out by the irreducible curves with class $C=l_1+l_2-e_1-e_2$. We conclude using Lemma \ref{lemma-baselocusgeneral}, noticing that \begin{align*} F\cdot C& =-1\\ D\cdot C& =d_1+d_2-m_1-m_2. \end{align*} \end{proof}
\subsection*{3 points}
The first interesting case is $s=3$ and it can be computed following the procedure outlined in Section \ref{effconemethod}.
\begin{theorem}\label{Eff 123} The effective cone $\operatorname{Eff}(X_{1,2,3})$ is given by the list of generators below left and inequalities below right:
\begin{minipage}[t]{0.4\textwidth}
\begin{align*} \rowcolors{2}{white}{gray!15} \begin{array}{ccccc} H_1 & H_2 & E_1 & E_2 & E_3\\ \hline\hline
0 & 0 & 1 & 0 & 0\\ 1 & 0 & -1 & 0 & 0\\ 0 & 1 & -1 & -1 & 0\\ \end{array}
\end{align*}
\end{minipage}\quad
\begin{minipage}[t]{0.4\textwidth}
\begin{align*} \rowcolors{2}{white}{gray!15} \begin{array}{ccccc} d_1 & d_2 & m_1 & m_2 & m_3\\ \hline\hline
1 & 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 0\\ 1 & 1 & 1 & 0 & 0\\ 1 & 2 & 1 & 1 & 0\\ 1 & 2 & 1 & 1 & 1\\ \end{array}
\end{align*} \end{minipage} \end{theorem} \begin{proof}
We apply the method of Section \ref{effconemethod} (\textbf{Step 1}-\textbf{2}) with the following inputs:
\begin{itemize}
\item the pullback via the blowdown morphism $X_{1,2,3} \rightarrow X_{1,2,2}$ of the inequalities from Theorem \ref{Eff 122} cutting out the cone $\operatorname{Eff}(X_{1,2,2})$;
\item the base locus inequalities corresponding to the fixed divisors $E_i,H_1-E_i, H_2-E_i-E_j$ from Lemmas \ref{lemma-baselocusexceptionals} and \ref{bsh2-ei-ej}.
\end{itemize}
The output is the cone of divisors cut out by the inequalities below right, and whose extremal rays, computed using the Normaliz files \texttt{X123-ineqs}, are spanned by the divisors in the table below left:
\begin{minipage}[t]{0.4\textwidth}
\begin{align*} \rowcolors{2}{white}{gray!15} \begin{array}{ccccc} H_1 & H_2 & E_1 & E_2 & E_3 \\ \hline\hline
0 & 1 & 0 & 0 & 0 \\ 0 & 1 & -1 & 0 & 0 \\ 0 & 2 & -1 & -1 & -1 \\ 1 & 0 & 0 & 0 & 0 \\ 1 & 1 & -1 & -1 & 0 \\ 1 & 1 & -1 & -1 & -1 \\ \end{array}
\end{align*}
\end{minipage}\quad
\begin{minipage}[t]{0.4\textwidth}
\begin{align*} \rowcolors{2}{white}{gray!15} \begin{array}{ccccc} d_1 & d_2 & m_1 & m_2 & m_3 \\ \hline\hline
1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 1 & 1 & 1 & 0 & 0 \\ 1 & 2 & 1 & 1 & 0 \\ 1 & 2 & 1 & 1 & 1 \\ 0 & 0 & -1 & 0 & 0 \\ 0 & 1 & 1 & 0 & 0 \\ 1 & 1 & 1 & 1 & 0 \\ \end{array} \end{align*} \ \end{minipage}
The irreducible effective classes excluded by these inequalities are precisely $E_i$ and $H_1-E_i$ and $H_2-E_i-E_j$, for all choices of $i,j$ with $i\ne j$. Following \textbf{Step 3}, we add these classes to the cone obtained above and we compute the extremal rays of the resulting cone: the files \texttt{X123-gens} compute the list of extremal rays and inequalities displayed in the statement of the theorem. All of them are represented by effective classes and this allows us to conclude that this is the effective cone $\operatorname{Eff}(X_{1,2,3})$ as claimed. \end{proof}
We notice that $X_{1,2,3}$ behaves like the toric cases, namely there is no new generator of the effective cone that did not already appear on $X_{1,2,2}$.
\begin{remark} The effective cone of $X_{1,n,n+1}$ follows as an application of work of Hausen and S{\"u}{\ss} \cite{HS10}, where the authors proved more general results on the Cox rings of algebraic varieties with torus actions. In particular, the action of $(\mathbb C^\ast)^n$ on $\mathbb P^n$ extends to a complexity-1 action on $X_{1,n,n+1}$, and \cite[Theorem 1.3]{HS10} gives an explicit description of the generators of the Cox ring, and hence the effective cone, in terms of this action. \end{remark}
\subsection*{4 points}
We will follow the procedure outlined in Section \ref{effconemethod} in order to describe the cone of effective divisors of $X_{1,2,4}$.
\begin{theorem}\label{Eff 124} The effective cone $\operatorname{Eff}(X_{1,2,4})$ is given by the list of generators below left and inequalities below right:
\begin{minipage}[t]{0.4\textwidth}
\begin{align*} \rowcolors{2}{white}{gray!15} \begin{array}{cccccc} H_1 & H_2 & E_1 & E_2 & E_3 & E_4\\ \hline\hline
0 & 0 & 1 & 0 & 0 & 0\\ 1 & 0 & -1 & 0 & 0 & 0\\ 0 & 1 & -1 & -1 & 0 & 0\\ 1 & 1 & -1 & -1 & -1 & -1\\ \end{array}
\end{align*}
\end{minipage}\quad
\begin{minipage}[t]{0.4\textwidth}
\begin{align*}
\rowcolors{2}{white}{gray!15} \begin{array}{cccccc} d_1 & d_2 & m_1 & m_2 & m_3 & m_4\\ \hline\hline
1 & 0 & 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 0 & 0\\ 1 & 1 & 1 & 0 & 0 & 0\\ 1 & 2 & 1 & 1 & 0 & 0\\ 1 & 2 & 1 & 1 & 1 & 0\\ 1 & 3 & 1 & 1 & 1 & 1\\ 2 & 2 & 1 & 1 & 1 & 1\\ 2 & 3 & 2 & 1 & 1 & 1\\ \end{array} \end{align*}
\end{minipage} \end{theorem} \begin{proof} We apply the method of Section \ref{effconemethod} with the following inputs:
\begin{itemize}
\item the pullback via the blowdown morphism $X_{1,2,4} \rightarrow X_{1,2,3}$ of the inequalities from Theorem \ref{Eff 123} cutting out the cone $\operatorname{Eff}(X_{1,2,3})$;
\item the base locus inequalities corresponding to the fixed divisors $E_i,H_1-E_i, H_2-E_i-E_j$ from Lemmas \ref{lemma-baselocusexceptionals} and \ref{bsh2-ei-ej}.
\end{itemize} A minimal set of generators of the so obtained cone is computed using the Normaliz files \texttt{X124-ineqs}. The irreducible effective classes excluded by these inequalities are precisely $E_i$ and $H_1-E_i$ and $H_2-E_i-E_j$. We now add these classes to the cone obtained above and we compute the extremal rays of the resulting cone using the files \texttt{X124-gens}. As a result, we obtain the lists of extremal rays and inequalities displayed in the statement of the theorem. Observe that the divisor $H_1+H_2-\sum_{i=1}^4E_i$ is effective: this follows from Lemma \ref{vdim lower bound} since its virtual dimension is $2 \cdot 3 - 4 -1 =1$. Therefore all extremal rays are represented by effective classes, proving that this is the effective cone of $X_{1,2,4}$. \end{proof}
This is the first interesting case that does not behave in a toric manner: the new generator $H_1+H_2-\sum_{i=1}^4E_i$, which is not inherited from $X_{1,2,s}$ with $s\le 3$, appears.
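Indeed, this class lies on the boundary of the cone cut out by the inequalities of Theorem \ref{Eff 124}: for instance, it satisfies the inequality $2d_1+2d_2\ge m_1+m_2+m_3+m_4$ with equality,
\begin{align*}
2\cdot 1+2\cdot 1 \;=\; 1+1+1+1 .
\end{align*}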
\subsection*{5 points}
We will follow the procedure outlined in Section \ref{effconemethod} in order to describe the cone of effective divisors of $X_{1,2,5}$. We start by proving a general statement which we will use now in the case $n=2$ and, in Section \ref{section-dim4}, in the case $n=3$.
\begin{lemma}\label{lemma-divisor1} A smooth divisor of bidegree $(1,1)$ in $\mathbb P^1 \times \mathbb P^n$ is isomorphic to the blowup of $\mathbb P^n$ in a linear space of codimension 2.
More generally, given a set of $s$ points in general position in $\mathbb P^1 \times \mathbb P^n$, let $V_s$ be a divisor of bidegree $(1,1)$ passing through all the points, and $\widetilde{V}_s$ its proper transform on $X_{1,n,s}$. Then $\widetilde{V}_s$ is isomorphic to the blowup of $\mathbb P^n$ in a linear space of codimension 2 and $s$ points. \end{lemma} \begin{proof} The proper transform $\widetilde{V}_s$ is isomorphic to the blowup of $V_s$ in $s$ points, so the second statement follows directly from the first.
To prove the first statement, let $V$ be a smooth divisor of bidegree $(1,1)$ on $\mathbb P^1 \times \mathbb P^n$. We can choose coordinates $[u,v]$ on $\mathbb P^1$ so that $V$ is defined by a bihomogeneous equation
\begin{align*}
u L_1 + v L_2 &= 0,
\end{align*}
where $L_1$ and $L_2$ are independent linear forms in $x_0,\ldots,x_n$. The projection map $\pi \colon V \rightarrow \mathbb P^n$ is an isomorphism outside the locus
\begin{align*}
\left\{ [x_0,\ldots,x_n] \in \mathbb P^n \mid L_1(x_0,\ldots,x_n) = L_2(x_0,\ldots,x_n) =0 \right\},
\end{align*}
which is a codimension 2 linear space $\Lambda \subset \mathbb P^n$. Over each point of $\Lambda$ the fibre of $\pi$ is $\mathbb P^1$, so $\pi^{-1}(\Lambda)$ is a (Cartier) divisor in $V$, and therefore $\pi$ factors through the blowup $\operatorname{Bl}_\Lambda(\mathbb P^n) \rightarrow \mathbb P^n$. Since both varieties are smooth of Picard number 2, Zariski's Main Theorem shows that $V \rightarrow \operatorname{Bl}_\Lambda(\mathbb P^n)$ is an isomorphism. \end{proof}
\begin{proposition}\label{new generators 5 points} The divisors $H_1+H_2-\sum_{i=1}^5E_i$ and $2H_2-\sum_{i=1}^5E_i$ are effective and fixed on $X_{1,2,5}$ and they are both isomorphic to a degree $3$ del Pezzo surface of type $X_{1,1,5}$. \end{proposition} \begin{proof} Both divisors have virtual dimension $0$, hence the first statement holds by Lemma \ref{vdim lower bound}. One can show that the divisors are fixed by direct computation: it is indeed enough to show that $5$ randomly chosen points impose $5$ independent conditions on the linear systems of surfaces in $\mathbb P^1\times\mathbb P^2$ of bidegree $(1,1)$ and $(0,2)$ respectively.
For the divisor $H_1+H_2-\sum_{i=1}^5 E_i$, the second statement follows from Lemma \ref{lemma-divisor1}. The divisor $2H_2-\sum_{i=1}^5E_i$ is the proper transform of a surface $S \subset \mathbb P^1 \times \mathbb P^2$ which is the preimage of a smooth conic in $\mathbb P^2$. Since $S$ is isomorphic to $\mathbb P^1 \times \mathbb P^1$ the result follows. \end{proof}
We are now ready to prove our main statement.
\begin{theorem}\label{Eff 125} The effective cone $\operatorname{Eff}(X_{1,2,5})$ is given by the list of generators below left and inequalities below right:
\begin{minipage}[t]{0.4\textwidth}
\begin{align*} \rowcolors{2}{white}{gray!15} \begin{array}{ccccccc} H_1 & H_2 & E_1 & E_2 & E_3 & E_4 & E_5\\ \hline\hline
0 & 0 & 1 & 0 & 0 & 0 & 0\\ 0 & 1 & -1 & -1 & 0 & 0 & 0\\ 1 & 0 & -1 & 0 & 0 & 0 & 0\\ 1 & 1 & -1 & -1 & -1 & -1 & -1\\ 0 & 2 & -1 & -1 & -1 & -1 & -1\\ \end{array}
\end{align*}
\end{minipage}\quad
\begin{minipage}[t]{0.4\textwidth}
\begin{align*}
\rowcolors{2}{white}{gray!15} \begin{array}{ccccccc} d_1 & d_2 & m_1 & m_2 & m_3 & m_4 & m_5\\ \hline\hline
1 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 0 & 0 & 0\\ 1 & 1 & 1 & 0 & 0 & 0 & 0\\ 1 & 2 & 1 & 1 & 0 & 0 & 0\\ 1 & 2 & 1 & 1 & 1 & 0 & 0\\ 1 & 3 & 1 & 1 & 1 & 1 & 0\\ 1 & 4 & 1 & 1 & 1 & 1 & 1\\ 2 & 2 & 1 & 1 & 1 & 1 & 0\\ 2 & 3 & 2 & 1 & 1 & 1 & 0\\ 3 & 3 & 2 & 1 & 1 & 1 & 1\\ 3 & 4 & 3 & 1 & 1 & 1 & 1\\ \end{array} \end{align*}
\end{minipage} \end{theorem} \begin{proof}
We apply the method of Section \ref{effconemethod} with the following inputs:
\begin{itemize}
\item the pullback via the morphism $X_{1,2,5} \rightarrow X_{1,2,4}$ of the inequalities from Theorem \ref{Eff 124} cutting out the cone $\operatorname{Eff}(X_{1,2,4})$;
\item the base locus inequalities corresponding to the fixed divisors $E_i,H_1-E_i, H_2-E_i-E_j$ from Lemmas \ref{lemma-baselocusexceptionals} and \ref{bsh2-ei-ej},
\end{itemize}
using the Normaliz files \texttt{X125-ineqs}.
We then add the span of the classes $E_i,H_1-E_i, H_2-E_i-E_j$ to the so obtained cone, using the file \texttt{X125-gens}: we obtain the lists of extremal rays and inequalities displayed in the statement of the theorem.
To conclude, we observe that the divisor $H_1+H_2-\sum_{i=1}^5 E_i$, and $2H_2-\sum_{i=1}^5 E_i$ are effective by Proposition \ref{new generators 5 points}, and that all other divisors are effective by Theorem \ref{Eff 124}. Therefore we can conclude that this is the list of extremal rays of the effective cone of $X_{1,2,5}$. \end{proof}
\subsubsection*{Base locus lemmata for degree two divisors}\label{base locus degree 2}
Although we have already computed $\operatorname{Eff}(X_{1,2,5})$, there are additional base locus lemmas for divisors on this space that will be essential in computing the next case, namely $\operatorname{Eff}(X_{1,2,6})$. In this subsection we prove these base locus lemmas, for the fixed divisors $H_1+H_2-\sum_{j=1}^5E_{i_j}$ and $2H_2-\sum_{j=1}^5E_{i_j}$ on $X_{1,2,s}$, for $s\ge 5$.
\begin{lemma}\label{baselocus 1,1} Fix $s\ge 5$ and consider the threefold $X_{1,2,s}$. For $s=5$, the divisor $H_1+H_2-\sum_{i=1}^5E_{i}$ is contained in the base locus of $D=d_1H_1+d_2H_2-\sum_{i=1}^5m_iE_i$ with multiplicity at least $$\max\left\{0,\sum_{i=1}^5m_{i}-d_1-3d_2, \left\lceil \sum_{i=1}^5m_{i}-\frac{3d_1+5d_2}{2}\right\rceil \right\}.$$ For $s>5$, the divisor $H_1+H_2-\sum_{j=1}^5E_{i_j}$ is contained in the base locus of $D=d_1H_1+d_2H_2-\sum_{i=1}^sm_iE_i$, for $1\le i_1<\cdots<i_5<i_6\le s$, with multiplicity at least $$\max\left\{0,\sum_{j=1}^5m_{i_j}-d_1-3d_2, \left\lceil \sum_{j=1}^5m_{i_j}-\frac{3d_1+5d_2}{2} \right\rceil, \left\lceil \frac{2\sum_{j=1}^5m_{i_j}+m_{i_6}-4d_1-5d_2}{3}\right\rceil \right\}.$$
\end{lemma}
\begin{proof}
We first prove the statement for
divisors on $X_{1,2,5}$, namely for $D=d_1H_1+d_2H_2-\sum_{i=1}^5m_iE_i$.
It follows from Proposition \ref{new generators 5 points} that $\tilde{S}_1=H_1+H_2-\sum_{i=1}^5E_i$ is a degree $3$ del Pezzo surface, the blowup of $\mathbb P^2$ in $6$ points in general position, with Picard group generated by the class of a line $h$ and the classes of the $6$ exceptional divisors $e$ and $e_i=E_i|_{\tilde{S}_1}$, $i=1,\dots,5$.
Now, for $i=1,2$, write $({H_i}|_{\tilde{S}_1})=a_ih-b_ie$; since both are effective and irreducible, we can assume $a_i\ge b_i\ge 0$. The following equations determine the values of $a_1,b_1,a_2,b_2$:
$0={H_1}^2|_{\tilde{S}_1}=a_1^2-b_1^2$, $1={H_2}^2|_{\tilde{S}_1}=a_2^2-b_2^2$, $1={H_1H_2}|_{\tilde{S}_1}=a_1a_2-b_1b_2$. This yields: \begin{align*}
{H_1}|_{\tilde{S}_1}& =h-e,\\
{H_2}|_{\tilde{S}_1}&=h. \end{align*}
The restriction is $D|_{\tilde{S}_1}=(d_1+d_2)h-d_1e-\sum_{i=1}^5m_ie_i$. In order to prove the statement, we consider two moving curve classes on $\tilde{S}_1$, and for each of them we apply Lemma \ref{lemma-baselocusgeneral} to obtain the claimed base locus multiplicity. \begin{itemize} \item Consider first of all class $C_1=3h-2e-\sum_{i=1}^5e_i$, whose irreducible representatives sweep out $\tilde{S}_1$. Notice that \begin{align*} \tilde{S}_1\cdot C_1 &= \left(H_1+H_2-\sum_{i=1}^5E_i\right)\cdot C_1= \left((h-e)+h-\sum_{i=1}^5e_i\right)\cdot C_1=-1\\
D|_{\tilde{S}_1}\cdot C_1& =d_1+3d_2-\sum_{i=1}^5m_i, \end{align*} so we can conclude by applying Lemma \ref{lemma-baselocusgeneral} that the integer $\sum_{j=1}^5m_{i_j}-d_1-3d_2$ is a lower bound for the multiplicity of containment of $\tilde{S}_1$ in the base locus of $D$. \item
Secondly, consider the curve class $C_2=5h-2e-2\sum_{i=1}^5e_i$, whose irreducible representatives sweep out $\tilde{S}_1$. We conclude observing that \begin{align*} \tilde{S}_1\cdot C_2 &= \left((h-e)+h-\sum_{i=1}^5e_i\right)\cdot C_2=-2\\
D|_{\tilde{S}_1}\cdot C_2& =3d_1+5d_2-2\sum_{i=1}^5m_i, \end{align*} and applying Lemma \ref{lemma-baselocusgeneral}, we obtain that $\left\lceil \frac{2\sum_{i=1}^5m_i-3d_1-5d_2}{2} \right\rceil$ is a lower bound for the multiplicity of containment of $\tilde{S}_1$ in the base locus of $D$.
\end{itemize}
This completes the proof of the first statement.
Under the assumption that $m_i\ge 0$ for every $i=1,\dots,s$, it is enough to prove the second statement for
divisors on $X_{1,2,6}$. Without loss of generality we may assume that the fixed bidegree $(1,1)$ surface is based at the first five points: $\tilde{S}_1=H_1+H_2-\sum_{i=1}^5E_i$. Consider the fixed fibral curve $l_1-e_6$, which intersects $\tilde{S}_1$ transversally in a point $q$. Since the points $p_1,\dots,p_6$ are in general position in $\mathbb{P}^1\times\mathbb{P}^2$, the points $p_1,\dots,p_5,q$ are in general position in the del Pezzo surface ${S}_1$ of class $H_1+H_2$, whose blow-up at $p_1,\dots,p_5$ is $\tilde{S}_1$. Let us consider the blow-up $\tilde{X}_{1,2,6}$ of $X_{1,2,6}$ at $q$ with exceptional divisor $E_q$, and the induced blow-up of $\tilde{S}_1$, which we will denote by $\tilde{\tilde{S}}_1$. The latter is a degree-$2$ del Pezzo surface, whose Picard group is generated by $e_q=E_q|_{\tilde{\tilde{S}}_1}$ and by $h,e,e_1,\dots,e_5$ which, abusing notation, are the pull-backs of the corresponding classes on $\tilde{S}_1$.
Now, consider a divisor $D$ with class $d_1H_1+d_2H_2-\sum_{i=1}^6m_iE_i$ on $X_{1,2,6}$. If $P_1$ and $P_2$ are general divisors of class $H_2-E_6$, then we compute \begin{align*}
(D_{|P_1}) \cdot ((P_2)_{|P_1}) &= D \cdot P_1 \cdot P_2 \\
&=D \cdot (l_1-e_6)\\
&=d_1-m_6. \end{align*}
This means that if $m_6>d_1$, then the curve $(P_2)_{|P_1}=l_1-e_6$ is contained in $D_{|P_1}$ at least $m_6-d_1$ times. Therefore $D_{|P_1}$ has multiplicity at least $m_6-d_1$ at every point of $l_1-e_6$, and hence so too does $D$. So if $\tilde{D}$ is the strict transform of $D$, then
$$\tilde{D}|_{\tilde{\tilde{S}}_1}=(d_1+d_2)h-d_1e-\sum_{i=1}^5m_ie_i-m_qe_q,$$ with $m_q\ge m_6-d_1$. We will show that $\left\lceil \frac{2\sum_{i=1}^5m_i+m_6-4d_1-5d_2}{3}\right\rceil$ is a lower bound for the multiplicity of containment of $\tilde{\tilde{S}}_1$ in the base locus of $\tilde{D}$. In order to do so, we consider the curve class $C_3=5h-2e-2\sum_{i=1}^5e_i-e_q$, whose irreducible representatives sweep out $\tilde{\tilde{S}}_1$. We have \begin{align*} \tilde{\tilde{S}}_1\cdot C_3 &= \left((h-e)+h-\sum_{i=1}^5e_i-e_q\right)\cdot C_3=-3\\
\tilde{D}|_{\tilde{\tilde{S}}_1}\cdot C_3& \le3d_1+5d_2-2\sum_{i=1}^5m_i-(m_6-d_1). \end{align*} Using Lemma \ref{lemma-baselocusgeneral} we obtain that $\tilde{\tilde{S}}_1$ is in the base locus of $\tilde{D}$, and hence ${\tilde{S}}_1$ is in the base locus of ${D}$, at least $\left \lceil \frac{2\sum_{i=1}^5m_i+m_6-4d_1-5d_2}{3} \right\rceil$ times for divisors with $m_6>d_1$.
In order to conclude, we need to consider the case $m_6\le d_1$. We claim that under this assumption the following holds: $$ \frac{2\sum_{i=1}^5m_i+m_6-4d_1-5d_2}{3} \le \max\left\{0,\sum_{i=1}^5m_{i}-d_1-3d_2, \frac{2 \sum_{i=1}^5m_{i}-3d_1-5d_2}{2}\right\}. $$ In order to prove the claim, we consider two cases. First of all, let $2\sum_{i=1}^5m_{i}-3d_1-5d_2\le 0$. In this case $$ 2\sum_{i=1}^5m_i+m_6-4d_1-5d_2= \left(2\sum_{i=1}^5m_{i}-3d_1-5d_2\right)+(m_6-d_1)\le0 $$ and the claim holds. Otherwise, if $2\sum_{i=1}^5m_{i}-3d_1-5d_2\ge0$, then \begin{align*} &3\left(2\sum_{i=1}^5m_{i}-3d_1-5d_2\right) -2\left(2\sum_{i=1}^5m_{i}+m_6-4d_1-5d_2\right)\\ =&\left(2\sum_{i=1}^5m_{i}-3d_1-5d_2\right)+2(d_1-m_6)\\ \ge&0, \end{align*} and the claim holds in this case too.
\end{proof}
\begin{lemma}\label{baselocus 0,2} The divisor $2H_2-\sum_{j=1}^5E_{i_j}$ is contained in the base locus of $D=d_1H_1+d_2H_2-\sum_{i=1}^sm_iE_i$, for $1\le i_1<\cdots<i_5\le s$, with multiplicity at least $\max\{0,\sum_{j=1}^5m_{i_j}- 3d_1-2d_2\}$. \end{lemma} \begin{proof} Under the assumption that $m_i\ge 0$ for every $i=1,\dots,s$, it is enough to prove the statement for divisors on $X_{1,2,5}$, namely for $D=d_1H_1+d_2H_2-\sum_{i=1}^5m_iE_i$. We will assume that $\sum_{i=1}^5m_i-3d_1-2d_2>0$, otherwise the claim is trivial.
It follows from Proposition \ref{new generators 5 points} that $\tilde{S}_2=2H_2-\sum_{i=1}^5E_i$ is isomorphic to $X_{1,1,5}$, with Picard group generated by the two rulings $h_1,h_2$ and by the $5$ exceptional curves $e_i=E_i|_{\tilde{S}_2}$, $i=1,\dots,5$.
Now, for $i=1,2$, write ${H_i}|_{\tilde{S}_2}=a_ih_1+b_ih_2$, with $a_i, b_i\ge 0$. Since ${H_i}^2|_{\tilde{S}_2}=0$, for $i=1,2$, and ${H_1H_2}|_{\tilde{S}_2}=2$, we obtain that one of the following sets of relations holds: $a_ib_j=2, a_j=b_i=0$ for either $(i,j)=(1,2)$ or $(i,j)=(2,1)$. Without loss of generality, we may choose the first set. From the equality $(2H_1+H_2)|_{\tilde{S}_2}=2h_1+2h_2$, cf. the proof of Proposition \ref{new generators 5 points}, we obtain that $a_1=1,b_2=2$. This yields: \begin{align*}
{H_1}|_{\tilde{S}_2}& =h_1,\\
{H_2}|_{\tilde{S}_2}&=2h_2. \end{align*}
The restriction is $D|_{\tilde{S}_2}=d_1h_1+2d_2h_2-\sum_{i=1}^5m_ie_i$. Consider the moving curve class $C=h_1+3h_2-\sum_{i=1}^5e_i$, whose irreducible representatives sweep out $\tilde{S}_2$ (cf. Proposition \ref{proposition-x115}). Since \begin{align*} \tilde{S}_2\cdot C &= \left(2H_2-\sum_{i=1}^5E_i\right)\cdot C= \left(4h_2-\sum_{i=1}^5e_i\right)\cdot C=-1\\
D|_{\tilde{S}_2}\cdot C& =3d_1+2d_2-\sum_{i=1}^5m_i, \end{align*} we can conclude applying Lemma \ref{lemma-baselocusgeneral}. \end{proof}
\subsection*{6 points}
In this subsection we will compute our final example in dimension 3, the effective cone of $X_{1,2,6}$. To do so, we first identify a final set of fixed divisors and give the associated base locus inequality.
\begin{lemma}\label{baselocus 1,4}
The divisor class $H_1+4H_2-3E_1-2\sum_{i=2}^6E_i$ is effective and fixed. The unique effective divisor $F$ in this class is irreducible. It is contained in the base locus of $D=d_1H_1+d_2H_2-\sum_{i=1}^6m_iE_i$ with multiplicity at least $\max\{0, 2m_1+\sum_{i=2}^6m_i-3d_1-3d_2\}$. \end{lemma} \begin{proof}
Consider the divisor class $F=H_1+4H_2-3E_1-2\sum_{i=2}^6E_i$. The formula for virtual dimension in Definition \ref{virtual dimension} gives $\operatorname{vdim}|F|=-1$, suggesting that $F$ has no global sections.
However, consider the linear system $|G|$ corresponding to the divisor class $G=H_1+4H_2-3E_1$. We claim that $\dim|G|>\operatorname{vdim}|G|$, and therefore $\dim|F|>\operatorname{vdim}|F|$ also, so that we can conclude that $\dim|F|\ge0$. In order to prove the claim, we assign coordinates $x_0,x_1;y_0,y_1,y_2$ to $\mathbb P^1;\mathbb P^2$ and we consider the generic degree $(1,4)$ form \begin{align*} f(x_0,x_1,y_0,y_1,y_2)=&\ x_0(a_{0000}y_0^4+a_{0001}y_0^3y_1+a_{0002}y_0^3y_2+\cdots+a_{2222}y_2^4)+\\ & \ x_1(b_{0000}y_0^4+b_{0001}y_0^3y_1+b_{0002}y_0^3y_2+\cdots+b_{2222}y_2^4). \end{align*}
Imposing a triple point at $p_1=([1:0],[1:0:0])$ corresponds to imposing that the second order derivatives of $f$ vanish when evaluated at $(1,0,1,0,0)$. However, since our form has degree 1 in the variables $x_0,x_1$, the second derivative $\partial^2 f/\partial x_1^2$ is automatically zero, and so the triple point at $p_1$ imposes 9 conditions rather than 10 as in the formula of Definition \ref{virtual dimension}. Therefore $\dim|G|>\operatorname{vdim} |G|$ as claimed.
Next we claim that any effective divisor $F$ in the class $H_1+4H_2-3E_1-2\sum_{i=2}^6 E_i$ is irreducible. To see this, note that since $F \cdot l_1=1$, the divisor $F$ contains a unique irreducible component which dominates $\mathbb P^2$. Write $F=F_1+F_2$ where $F_1$ is the unique irreducible component which dominates $\mathbb P^2$, and $F_2$ is a sum of irreducible components contracted by $\pi_2 \colon X \rightarrow \mathbb P^2$. The class of $F_2$ must be a sum of classes of the form $E_i$, $H_2-E_i-E_j$, and $2H_2-\sum_{j=1}^5 E_{i_j}$. Assuming $F_2$ is nonzero, it is then easy to find a curve class $C$ of the form $3l_1+5l_2-2\sum_{j=1}^5 e_{i_j}$ such that $F_2 \cdot C>F \cdot C$, and hence $F_1 \cdot C<0$. By Lemma \ref{baselocus 1,1} this implies that $F_1$ is a divisor of the form $H_1+H_2-\sum_{k=1}^5 E_{i_k}$, but for any such divisor we find that $F-F_1$ has the form $3H_2-2E_{i_1}-2E_{i_2}-E_{i_3}-\cdots-E_{i_6}$. Since there is no cubic curve in $\mathbb P^2$ with 2 double points and passing through 4 other general points, $F-F_1$ is not effective. So we must have that $F_2=0$, hence $F$ is irreducible as required.
To prove that $F$ is fixed and the claimed multiplicity of containment holds, we apply Lemma \ref{lemma-baselocusgeneral} with the curve class $C=3l_1+3l_2-2e_1-\sum_{i=2}^6 e_i$. We compute
\begin{align*}
\left(H_1+4H_2-3E_1 - 2\sum_{i=2}^6 E_i \right) \cdot C &= 15 -6-10 = -1.
\end{align*}
Therefore it suffices to prove that irreducible curves of class $C$ sweep out a divisor on $X_{1,2,6}$, which must then be $F$.
Let $\Gamma$ be an irreducible cubic curve in $\mathbb P^2$ with a node at $\pi_2(p_1)$ and passing through the points $\pi_2(p_i)$ for $i=2,\ldots,6$. There is a 1-parameter family of such curves, sweeping out $\mathbb P^2$. Now let
\begin{align*}
\nu \colon \mathbb P^1 \times \mathbb P^1 \rightarrow \mathbb P^1 \times \Gamma \subset \mathbb P^1 \times \mathbb P^2
\end{align*}
be the product of the identity map with the normalisation of $\Gamma$. Under this map, a curve of bidegree $(a,b)$ in $\mathbb P^1 \times \mathbb P^1$ maps to a curve of bidegree $(a,3b)$ in $\mathbb P^1 \times \mathbb P^2$. The preimages of the 6 points $p_1,\ldots,p_6$ give us 7 points $q_1,\tilde{q}_1,q_2,\ldots,q_6$ in $\mathbb P^1 \times \mathbb P^1$ such that $\nu(q_1)=\nu(\tilde{q}_1)=p_1$ and $\nu(q_i)=p_i$ for $i=2,\ldots,6$. Counting parameters, we see that there is a curve $C'$ of bidegree $(3,1)$ through these 7 points. Moreover, such a curve must be irreducible, since by generality a curve of bidegree $(a,0)$ can pass through at most $a$ of the points and a curve of bidegree $(0,1)$ can pass through at most the 2 points $q_1$ and $\tilde{q}_1$. The image of $C'$ in $\mathbb P^1 \times \mathbb P^2$ is then an irreducible curve of bidegree $(3,3)$ through the 6 points $p_1,\ldots,p_6$, and since the map $\nu$ identifies the $\mathbb P^1$-rulings through $q_1$ and $\tilde{q}_1$, this image curve has a node at $p_1$. Its proper transform on $X_{1,2,6}$ is then an irreducible curve in the class $C$. By construction this curve projects onto $\Gamma$ under $\pi_2$, so as we vary $\Gamma$ to sweep out $\mathbb P^2$ these irreducible curves of class $C$ sweep out a divisor on $X_{1,2,6}$.
\begin{remark}\label{Laface-Moraga0} Laface and Moraga \cite{LM16} introduced the \emph{fibre-expected dimension} for linear systems of divisors on blowups of $(\mathbb P^1)^n$, providing a lower bound for the dimension that improves the one given by the virtual dimension. This takes into account that (the proper transforms after blowing up the points of) certain fibres of the natural projections $(\mathbb P^1)^n\to (\mathbb P^1)^r$ are contained with multiplicity in the base locus of the divisors, whenever the multiplicities are large enough with respect to the degrees. In algebraic terms, this corresponds to observing that certain partial derivatives of the multidegree polynomials corresponding to the linear systems vanish identically, therefore they should not give a contribution to the dimension count.
Their approach can be used verbatim in the case of blowups of $\mathbb P^1\times\mathbb P^n$, where each fixed line $C_i=l_1-e_i$ is contained at least $-D\cdot C_i$ times in the base locus of $D$ and, if $-D\cdot C_i\ge2$, this gives a contribution to the dimension count.
In particular the argument proposed in the proof of Lemma \ref{baselocus 1,4} to show that the divisor $H_1+4H_2-3E_1$ on $X_{1,2,s}$ has dimension strictly larger than expected is a generalisation of Laface's and Moraga's idea. \end{remark}
To compute the effective cone of $X_{1,2,6}$, we require one more restriction on the classes of effective divisors. To obtain this, consider a movable divisor $M$ given as the proper transform of a general bidegree $(2,1)$ surface $S$ in $\mathbb P^1\times\mathbb P^2$ containing all six points. The following lemma describes the geometry of $M$.
\begin{lemma}\label{lemma-bideg21}
A smooth surface $S$ of bidegree $(2,1)$ in $\mathbb P^1 \times \mathbb P^2$ is isomorphic to $\mathbb P^1 \times \mathbb P^1$.
\end{lemma}
\begin{proof}
Using adjunction we see that $-K_S=2{H_2}|_{S}$, so $-K_S$ is 2-divisible in $\operatorname{Pic}(S)$, and we also find that $\left(-K_S \right)^2=8$. So if we can prove that $-K_S$ is ample, then $S$ is del Pezzo of degree 8 with $-K_S$ 2-divisible, hence it must be $\mathbb P^1 \times \mathbb P^1$.
To prove that $-K_S$ is ample, it is enough to prove that, for a general $S$ as in the statement, the projection $S \rightarrow \mathbb P^2$ is finite.
Let $[u,v]$ and $[x,y,z]$ be homogeneous coordinates on $\mathbb P^1$ and $\mathbb P^2$ respectively. Then $S$ is given by an equation
\begin{align*} F(u,v,x,y,z) = u^2\Lambda_1+uv\Lambda_2+v^2\Lambda_3 &=0,
\end{align*}
where the $\Lambda_i$ are linear forms in $x, \, y, \, z$. For a point $p \in \mathbb P^2$, the fibre of $S \rightarrow \mathbb P^2$ over $p$ is infinite if and only if $\Lambda_i(p)=0$ for $i=1, \, 2, \, 3$. We claim that if the surface $S$ is nonsingular, the set of such $p$ is empty, and therefore $S \rightarrow \mathbb P^2$ is finite as required.
To prove the claim, suppose for contradiction that the $\Lambda_i$ are linear forms in $x, \, y, \, z$ with a common zero $p \in \mathbb P^2$. We will prove that $S$ must have a singular point.
By changing coordinates in $\mathbb P^2$ we can assume
\begin{align*}
\Lambda_1 = x, \quad \Lambda_2 &= x+y, \quad \Lambda_3 = y,
\end{align*}
so that $S$ is defined by the equation
\begin{align*}
F(u,v,x,y,z) = u^2x+v^2y+uv(x+y) &= 0.
\end{align*}
Note that $S$ contains the line
\begin{align*}
\left\{ \left([u,v],[0,0,1]\right) \mid [u,v] \in \mathbb P^1 \right\}.
\end{align*}
Computing partial derivatives, we find
\begin{align*}
\frac{\partial F}{\partial u} = 2ux+v(x+y), &\quad \quad
\frac{\partial F}{\partial v} = 2vy + u(x+y),\\
\frac{\partial F}{\partial x} = u^2+uv, \quad \quad & \frac{\partial F}{\partial y} = v^2+uv, \quad \quad
\frac{\partial F}{\partial z} =0,
\end{align*} and therefore the surface $S$ is singular at the point $\left([1,-1],[0,0,1]\right)$, as required.
\end{proof}
\begin{corollary} \label{mov(2,1)} The divisor class $2H_1+H_2-\sum_{i=1}^6E_i$ is movable on $X_{1,2,6}$. Moreover the generic element $M$ of its linear system is isomorphic to $X_{1,1,6}$. \end{corollary} \begin{proof}
By Lemma \ref{lemma-bideg21} we see that a general such $M$ is the blowup of $\mathbb P^1 \times \mathbb P^1$ in 6 points, in other words it is isomorphic to $X_{1,1,6}$.
For every $j=1,\dots,6$, we have \[ M=(H_1-E_j)+\left(H_1+H_2-\sum_{i=1}^6E_i+E_j\right). \] Since $M$ is linearly equivalent to a sum of fixed divisors in multiple ways, and there is no common summand in these decompositions, we can conclude that $M$ is movable. \end{proof}
\begin{theorem}\label{Eff 126} The effective cone $\operatorname{Eff}(X_{1,2,6})$ is given by the list of generators below left and inequalities below right:
\begin{minipage}[t]{0.4\textwidth}
\begin{align*} \rowcolors{2}{white}{gray!15} \begin{array}{cccccccc} H_1 & H_2 & E_1 & E_2 & E_3 & E_4 & E_5 & E_6\\ \hline\hline
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\ 0 & 1 & -1 & -1 & 0 & 0 & 0 & 0\\ 1 & 0 & -1 & 0 & 0 & 0 & 0 & 0\\ 1 & 1 & -1 & -1 & -1 & -1 & -1 & 0\\ 0 & 2 & -1 & -1 & -1 & -1 & -1 & 0\\ 1 & 4 & -3 & -2 & -2 & -2 & -2 & -2 \end{array}
\end{align*}
\end{minipage}\quad
\begin{minipage}[t]{0.4\textwidth}
\begin{align*}
\rowcolors{2}{white}{gray!15} \begin{array}{cccccccc} d_1 & d_2 & m_1 & m_2 & m_3 & m_4 & m_5 & m_6\\ \hline\hline
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0\\ 1 & 2 & 1 & 1 & 0 & 0 & 0 & 0\\ 1 & 2 & 1 & 1 & 1 & 0 & 0 & 0\\ 1 & 3 & 1 & 1 & 1 & 1 & 0 & 0\\ 1 & 4 & 1 & 1 & 1 & 1 & 1 & 0\\ 1 & 4 & 1 & 1 & 1 & 1 & 1 & 1\\ 2 & 2 & 1 & 1 & 1 & 1 & 0 & 0\\ 2 & 3 & 2 & 1 & 1 & 1 & 0 & 0\\ 3 & 3 & 2 & 1 & 1 & 1 & 1 & 0\\
3 & 4 & 3 & 1 & 1 & 1 & 1 & 0\\ 3 & 4 & 3 & 1 & 1 & 1 & 1 & 1\\ 3 & 6 & 3 & 3 & 1 & 1 & 1 & 1\\ 4 & 3 & 2 & 1 & 1 & 1 & 1 & 1\\ 4 & 4 & 2 & 2 & 2 & 1 & 1 & 1\\ 5 & 5 & 3 & 2 & 2 & 2 & 2 & 1\\
6 & 5 & 2 & 2 & 2 & 2 & 2 & 2\\ 6 & 6 & 4 & 2 & 2 & 2 & 2 & 1\\ 7 & 6 & 3 & 3 & 2 & 2 & 2 & 2\\ 7 & 7 & 3 & 3 & 3 & 3 & 2 & 2\\ 7 & 8 & 3 & 3 & 3 & 3 & 3 & 3\\ 9 & 10 & 5 & 5 & 3 & 3 & 3 & 3\\ 10 & 10 & 4 & 4 & 4 & 4 & 4 & 3 \end{array} \end{align*} \end{minipage} \end{theorem}
\begin{proof} Let $M=2H_1+H_2-\sum_{i=1}^6E_i\cong X_{1,1,6}$ be the movable divisor of Corollary \ref{mov(2,1)}. Let $h_1,h_2,e_1,\dots,e_6$ be the generators of the Picard group of $M$.
The intersection table on $M$ is given by the following. Set $H_1\vert_M=ah_1+bh_2$, for $a,b\ge 0$. From $(H_1H_2)|_M=H_1H_2(2H_1+H_2)=1$, we obtain $a+b=1$. From $H_1^2|_M=H_1^2(2H_1+H_2)=0$, we obtain $2ab=0$. Therefore either $(a,b)=(1,0)$ or $(a,b)=(0,1)$ and, up to relabelling the two rulings, we obtain: \begin{align*} H_1\vert_M&=h_1,\\ H_2\vert_M&=h_1+h_2,\\ E_i\vert_M&=e_i, \quad i=1,\dots,6, \end{align*} where the second equality is a consequence of the adjunction formula.
Applying Proposition \ref{proposition-x116} to $D\vert_M$, we obtain the following necessary conditions for the effectivity of the divisors $D$ on $X_{1,2,6}$:
\begin{minipage}[t]{1\textwidth}
\begin{align*} \rowcolors{2}{white}{gray!15} \begin{array}{cccccccc} d_1 & d_2 & m_1 & m_2 & m_3 & m_4 & m_5 & m_6\\ \hline\hline
1 & 3 & 1 & 1 & 1 & 1 & 0 & 0\\ 1 & 4 & 1 & 1 & 1 & 1 & 1 & 1\\ 2 & 5 & 2 & 2 & 1 & 1 & 1 & 1\\ 3 & 6 & 2 & 2 & 2 & 2 & 1 & 1\\ 3 & 6 & 3 & 2 & 1 & 1 & 1 & 1\\ 3 & 7 & 3 & 2 & 2 & 2 & 1 & 1\\ 3 & 7 & 2 & 2 & 2 & 2 & 2 & 2\\ 3 & 8 & 3 & 2 & 2 & 2 & 2 & 2\\ 4 & 8 & 3 & 3 & 2 & 2 & 2 & 1\\ 4 & 9 & 3 & 3 & 3 & 2 & 2 & 2\\ 5 & 10 & 3 & 3 & 3 & 3 & 3 & 2\\ \end{array} \end{align*} \ \end{minipage}
We now apply the method of Section \ref{effconemethod} with the following inputs:
\begin{itemize}
\item the above inequalities coming from the effectivity of $D|_M$;
\item the pullback via the morphism $X_{1,2,6} \rightarrow X_{1,2,5}$ of the inequalities from Theorem \ref{Eff 125} cutting out the cone $\operatorname{Eff}(X_{1,2,5})$; \item the base locus inequalities corresponding to the fixed divisors $E_i$, $H_1-E_i$, $H_2-E_i-E_j$ from Lemmas \ref{lemma-baselocusexceptionals} and \ref{bsh2-ei-ej}; \item the base locus inequalities corresponding to the fixed divisors
$H_1+H_2-\sum_{j=1}^5E_{i_j}$ and $2H_2-\sum_{j=1}^5E_{i_j}$ from Lemmas \ref{baselocus 1,1} and \ref{baselocus 0,2}.
\item the base locus inequalities corresponding to the fixed divisors $H_1+4H_2-2\sum_{j=1}^6E_j - E_i$ from Lemma \ref{baselocus 1,4}.
\end{itemize} The resulting cone is computed with the files \texttt{X126-ineqs}. To the list of extremal rays, we add the rays spanned by the fixed divisors $E_i$, $H_1-E_i$, $H_2-E_i-E_j$, $H_1+H_2-\sum_{j=1}^5E_{i_j}$, $2H_2-\sum_{j=1}^5E_{i_j}$, and $H_1+4H_2-2\sum_{j=1}^6 E_j -E_i$, for all index permutations. A minimal set of generators and inequalities is obtained using the Normaliz files \texttt{X126-gens}. The former is given by the list of divisors on the left hand side of the statement of this theorem. Since they are all effective, they span the effective cone of $X_{1,2,6}$. \end{proof}
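As a quick consistency check, the new generator $H_1+4H_2-3E_1-2\sum_{i=2}^6E_i$ saturates, for instance, the last inequality of the table in the statement:
\begin{align*}
10\cdot 1+10\cdot 4 \;=\; 50 \;=\; 4\,(3+2+2+2+2)+3\cdot 2 ,
\end{align*}
so it lies on the boundary of $\operatorname{Eff}(X_{1,2,6})$, as a generator should.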
\section{Effective divisors on some threefolds}\label{section-dim3-extra}
In this section we compute the effective cones of certain threefolds obtained by blowing up either $\mathbb P^3$ or $\mathbb P^1 \times \mathbb P^2$ in a line and a set of points. These results will be used as inputs in the computations of Section \ref{section-dim4}, when we use our method to determine the cones $\operatorname{Eff}(X_{1,3,s})$ for $s \leq 6$.
We start with the following result, which is a special case of \cite[Theorem 5.1]{BDP16}. In the statement below $\mathcal E_i$ denotes the exceptional divisor of the blowup of a point in $\mathbb P^3$. The Normaliz files {\tt X036-gens} show that the cone spanned by the generators in the left-hand table is indeed cut out by the inequalities in the right-hand table.
\begin{theorem}\label{theorem-x036}
Let $X_{0,3,6}$ denote the blowup of $\mathbb P^3$ in a set of 6 points in general position. The effective cone $\operatorname{Eff}(X_{0,3,6})$ is given by the list of generators below left and inequalities below right:
\begin{minipage}[t]{0.4\textwidth}
\begin{align*}
\rowcolors{2}{white}{gray!15} \begin{array}{ccccccc} H & \mathcal E_1 & \mathcal E_2 & \mathcal E_3 & \mathcal E_4 & \mathcal E_5 & \mathcal E_6\\ \hline\hline
0 & 1 & 0 & 0 & 0 & 0 & 0\\ 1 & -1 & -1 & -1 & 0 & 0 & 0\\ 2 & -2 & -1 & -1 & -1 & -1 & -1 \end{array}
\end{align*}
\end{minipage}\quad
\begin{minipage}[t]{0.4\textwidth}
\begin{align*}
\rowcolors{2}{white}{gray!15} \begin{array}{ccccccc} d & m_1 & m_2 & m_3 & m_4 & m_5 & m_6\\ \hline\hline
1 & 0 & 0 & 0 & 0 & 0 & 0\\ 1 & 1 & 0 & 0 & 0 & 0 & 0\\ 3 & 1 & 1 & 1 & 1 & 0 & 0\\ 3 & 1 & 1 & 1 & 1 & 1 & 0\\ 5 & 2 & 2 & 1 & 1 & 1 & 1\\ 7 & 2 & 2 & 2 & 2 & 2 & 2 \end{array}
\end{align*}
\end{minipage} \end{theorem} By blowing down one of the exceptional divisors we get the corresponding result for 5 points: \begin{corollary}\label{corollary-x035}
The effective cone $\operatorname{Eff}(X_{0,3,5})$ is given by the list of generators below left and inequalities below right:
\begin{minipage}[t]{0.4\textwidth}
\begin{align*}
\rowcolors{2}{white}{gray!15} \begin{array}{cccccc} H & \mathcal E_1 & \mathcal E_2 & \mathcal E_3 & \mathcal E_4 & \mathcal E_5 \\ \hline\hline
0 & 1 & 0 & 0 & 0 & 0\\ 1 & -1 & -1 & -1 & 0 & 0\\ \end{array}
\end{align*}
\end{minipage}\quad
\begin{minipage}[t]{0.4\textwidth}
\begin{align*}
\rowcolors{2}{white}{gray!15} \begin{array}{cccccc} d & m_1 & m_2 & m_3 & m_4 & m_5\\ \hline\hline
1 & 0 & 0 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 & 0 & 0 \\ 3 & 1 & 1 & 1 & 1 & 0 \\ 3 & 1 & 1 & 1 & 1 & 1 \end{array}
\end{align*}
\end{minipage} \end{corollary} We will also need to consider certain threefolds which arise naturally as divisors in $\mathbb P^1 \times \mathbb P^3$; these threefolds can be described as blowups of $\mathbb P^3$ in a line and a set of points. The following result helps in determining their effective cones. \begin{proposition}\label{proposition-smallmodification} Let $s \geq 0$. Let $Y_{L, \, s+1}$ denote the blowup of $\mathbb P^3$ in a line and $s+1$ points in general position. Let $Z_{L, \, s}$ denote the blowup of $\mathbb P^1 \times \mathbb P^2$ in a line contained in a fibre of $\pi_1 \colon \mathbb P^1 \times \mathbb P^2 \rightarrow \mathbb P^1$ and $s$ points in general position. Then there is a small modification $\varphi \colon Y_{L,\, s+1} \dashrightarrow Z_{L,\,s}$. \end{proposition} \begin{proof}
It is enough to consider the case $s=0$. For brevity we write $Y$, respectively $Z$, instead of $Y_{L,\, 1}$ and $Z_{L, \, 0}$. In this case both $Y$ and $Z$ are toric varieties, and we find the necessary $\varphi$ by considering their fans.
Consider the fan of $\mathbb P^3$ with ray generators $e_1, \, e_2, \, e_3, \, -\left(e_1+e_2+e_3 \right)$. Then $Y$ is obtained by star subdivision of the cones $\langle e_1, \, -\left(e_1+e_2+e_3 \right) \rangle$ corresponding to a line, and $\langle e_2, e_3, \, -\left(e_1+e_2+e_3 \right) \rangle$ corresponding to a point. The resulting fan $\Sigma$ has rays
\begin{align*}
\Sigma(1) &= \left\{ e_1, \, e_2, \, e_3, -\left(e_1+e_2+e_3 \right), \, -\left(e_2+e_3 \right), \, -e_1 \right\}.
\end{align*}
On the other hand, the fan of $\mathbb P^1 \times \mathbb P^2$ has ray generators $e_1, \, -e_1, \, e_2, \, e_3, \, -\left(e_2+e_3\right)$. This fan contains a cone $\langle -e_1, -\left(e_2+e_3\right) \rangle$ corresponding to a line contracted by $\pi_1 \colon \mathbb P^1 \times \mathbb P^2 \rightarrow \mathbb P^1$. Blowing up along this line corresponds to star subdivision of the cone above, resulting in a fan $\Sigma^\prime$ with ray generators
\begin{align*}
\Sigma^\prime(1) &= \left\{e_1, \, -e_1, \, e_2, \, e_3, -\left(e_2+e_3 \right), -\left(e_1+e_2+e_3 \right) \right\} \\
&= \Sigma(1).
\end{align*}
Since these two fans have the same rays, the evident rational map $\varphi \colon Y \dashrightarrow Z$ and its inverse do not contract any torus-invariant divisor on either side, and therefore $\varphi$ is a small modification as required. \end{proof} The small modification $\varphi$ allows us to identify divisors on $Y_{L,\, s+1}$ and $Z_{L,\, s}$, as follows. On $Y_{L,\, s+1}$ let $H$ denote the hyperplane class, and let $\mathcal E_i$, respectively $\mathcal E_L$, denote the exceptional divisor over the point $p_i$, respectively the line $L$. On $Z_{L,\, s}$ let $H_1$ and $H_2$ denote the pullbacks of the hyperplane classes from $\mathbb P^1$ and $\mathbb P^2$, and let $E_i$, respectively $E_L$, denote the exceptional divisor over the point $q_i$, respectively the line $L$.
\begin{lemma}
Fixing bases for $N^1(Y)$ and $N^1(Z)$ as follows:
\begin{align*}
N^1(Y) = \langle H, \mathcal E_L, \mathcal E_1, \ldots, \mathcal E_{s+1} \rangle \\
N^1(Z) = \langle H_1, H_2, E_L, E_1, \ldots E_s \rangle
\end{align*}
the pushforward and pullback maps $\varphi_\ast \colon N^1(Y) \rightarrow N^1(Z)$ and $\varphi^\ast \colon N^1(Z) \rightarrow N^1(Y)$ are given by the following matrices: \begin{align}\label{formula-pushforward}
\varphi_\ast =
\left(
\begin{array}{ccc|ccc}
1 & 0 & 1 & & & \\
1 & 1 & 0 & & {\mathbf 0} &\\
-1 & -1 & -1 & &&\\
\hline
& & & & &\\
& {\mathbf 0} & & & I_{s \times s} & \\
& & & & &
\end{array}\right) &\quad \quad \varphi^\ast = \left(
\begin{array}{ccc|ccc}
1 & 1 & 1 & & & \\
-1 & 0 & -1 & & {\mathbf 0} & \\
0 & -1 & -1 & & & \\
\hline
& & & & &\\
& {\mathbf 0} & & & I_{s \times s} & \\
& & & & &
\end{array}\right) \end{align} \end{lemma} \begin{proof} Again it suffices to prove this for $s=0$. The matrix $\varphi_*$ above is obtained by identifying classes on $Y$ and $Z$ which correspond to the same ray generators, as follows. The classes $H$ on $Y$ and $H_1+H_2-E_L$ on $Z$ both correspond to the sum $e_1+\left(-(e_2+e_3) \right)$ of ray generators. The classes $\mathcal E_L$ on $Y$ and $H_2-E_L$ on $Z$ both correspond to the ray $-(e_2+e_3)$. Finally, $\mathcal E_1$ on $Y$ and $H_1-E_L$ on $Z$ both correspond to $-e_1$. The matrix $\varphi^*$ is the inverse of $\varphi_*$. \end{proof}
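For example, the class $5H-\mathcal E_L-3\sum_{i=1}^5\mathcal E_i$, which will appear as a generator of $\operatorname{Eff}(Y_{L,5})$ in Theorem \ref{theorem-yl5} below, pushes forward under $\varphi_\ast$ (with $s=4$) to
\begin{align*}
5(H_1+H_2-E_L)-(H_2-E_L)-3(H_1-E_L)-3\sum_{i=1}^4E_i \;=\; 2H_1+4H_2-E_L-3\sum_{i=1}^4E_i ,
\end{align*}
which is the last generator listed in Corollary \ref{corollary-zl4} below.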
Now we will compute the effective cone of $Y_{L, \, 5}$. We start with some base locus inequalities. To state these we introduce the following notation. For $i \in \{1,\ldots,5\}$, let $\Pi(L,i)$ denote the plane in $\mathbb P^3$ spanned by $L$ and $p_i$. For distinct $i, \, j, \, k \in \{1,\ldots,5\}$, let $\Pi(i,j,k)$ denote the plane in $\mathbb P^3$ spanned by $p_i, \, p_j, \, p_k$. \begin{lemma}\label{lemma-baselocusineqsyl5}
Let $D$ be a divisor on $Y_{L,\, 5}$ with class
\begin{align*}
dH - m_L \mathcal E_L - \sum_{i=1}^5 m_i \mathcal E_i.
\end{align*}
Then the following divisors are contained in $\operatorname{Bs}(D)$ with the given multiplicities:
\begin{enumerate}
\item[(a)] $\mathcal E_L$ with multiplicity at least $\max\{0,-m_L\}$;
\item[(b)] $\Pi (i,j,k)$ with multiplicity at least $\max\{0,m_i+m_j+m_k-2d\}$;
\item[(c)] $\Pi(L,i)$ with multiplicity at least $\max\{0,m_L+m_i-d\}$.
\end{enumerate} \end{lemma} \begin{proof}
These all follow from Lemma \ref{lemma-baselocusgeneral} using the following curve classes:
(a): the class $e_L$ of a line in $\mathcal E_L$ that is contracted by the blowdown. Note that $\mathcal E_L~\cdot~e_L~=~-1$.
(b): the class $2l-e_i-e_j-e_k$ of the proper transform of a conic in $\mathbb P^3$ passing through $p_i$, $p_j$, and $p_k$. Note that $\Pi(i,j,k) \cdot (2l-e_i-e_j-e_k)~=~-1$.
(c): the class $l-e_L-e_i$ of the proper transform of a line in $\mathbb P^3$ intersecting $L$ and passing through $p_i$. We have $\Pi(L,i) \cdot (l-e_L-e_i)~=-1$. \end{proof}
To obtain more bounds on the effective cone, once more we consider the restriction of divisors to a suitable movable subvariety (\textbf{Step 4} of the method outlined in Section \ref{effconemethod}). At this point it is convenient to switch from considering effective divisors on $Y_{L, \, 5}$ to effective divisors on $Z_{L, \, 4}$; by Proposition \ref{proposition-smallmodification} these can be identified. In fact we will first consider divisors on $Z_{L, \, 5}$, but the conditions we obtain will give us useful information on $Z_{L, \, 4}$.
There is a $1$-parameter family of surfaces of bidegree $(2,1)$ in $\mathbb P^1 \times \mathbb P^2$ containing the line $L$ and the points $q_1,\ldots,q_5$. A general such surface $S$ is smooth, and its proper transform on $Z_{L,\,5}$ is a smooth surface $\widetilde{S}$ of class
\begin{align*}
2H_1+H_2-E_{L} - \sum_{i=1}^5 E_i.
\end{align*} By Lemma \ref{lemma-bideg21} the surface $S$ is isomorphic to $\mathbb P^1 \times \mathbb P^1$, so $\widetilde{S}$ is isomorphic to $X_{1,1,5}$. The effective cone of $X_{1,1,5}$ was described in Proposition \ref{proposition-x115}, and by restricting we can use this information to obtain effectivity conditions for divisors on $Z_{L,\, 5}$. To do this, we first need to record the following facts about restrictions of divisors on $Z_{L, \, 5}$ to $\widetilde{S}$. Since $\widetilde{S}$ is isomorphic to $X_{1,1,5}$, we have
\begin{align*}
N^1(\widetilde{S}) = \langle h_1, \, h_2, e_1, \ldots, e_5 \rangle,
\end{align*}
where $h_1$ and $h_2$ are the classes of the two rulings of $S \cong \mathbb P^1 \times \mathbb P^1$ and the $e_i$ are the exceptional curves of the blowups. Let us fix notation by choosing $h_1$ to be the ruling of $S$ containing the line $L$. Then we have
\begin{lemma}\label{lemma-restrictions}
The restriction map $N^1(Z_{L,\, 5}) \rightarrow N^1(\widetilde{S})$ is given by:
\begin{align*}
H_1 &\mapsto h_1 \\
H_2 & \mapsto h_1 + h_2\\
E_L &\mapsto h_1\\
E_i &\mapsto e_i \quad (i=1,\ldots,5).
\end{align*}
\end{lemma}
\begin{proof}
The first two restrictions can be computed on $S$. Since $S$ has class $2H_1+H_2$, the restriction of $H_1$ to $S$ is a curve of class $(2H_1+H_2) \cdot H_1 = H_1 \cdot H_2$, which is a line in the ruling $h_1$. The restriction of $H_2$ to $S$ is a curve of class $(2H_1+H_2) \cdot H_2$; this is an effective class on $S$ with self-intersection $(2H_1+H_2) \cdot H_2^2=2$, so it must equal $h_1+h_2$.
The exceptional divisor $E_L$ meets $\widetilde{S}$ in a smooth curve $C$; near $C$ the blow-down map $\widetilde{S} \rightarrow S$ is an isomorphism which maps $C$ to the line $L \subset S$, and therefore $C$ is a curve in the class $h_1$.
The restrictions of the remaining exceptional divisors are clear.
\end{proof}
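In particular, for a divisor $D=d_1H_1+d_2H_2-m_LE_L-\sum_{i=1}^5m_iE_i$ on $Z_{L,\,5}$, Lemma \ref{lemma-restrictions} gives
\begin{align*}
D|_{\widetilde S} \;=\; (d_1+d_2-m_L)\,h_1+d_2\,h_2-\sum_{i=1}^5 m_i e_i ,
\end{align*}
and it is to this class that the inequalities of Proposition \ref{proposition-x115} will be applied in the proof of Theorem \ref{theorem-yl5} below.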
We can now put together the necessary ingredients to determine the effective cone of the variety $Y_{L,5}$. \begin{theorem}\label{theorem-yl5}
The effective cone $\operatorname{Eff}(Y_{L,5})$ is given by the list of generators below left and inequalities below right:
\begin{minipage}[t]{0.4\textwidth}
\begin{align*}
\rowcolors{2}{white}{gray!15} \begin{array}{ccccccc} H & \mathcal{E}_L & \mathcal E_1 & \mathcal E_2 & \mathcal E_3 & \mathcal E_4 & \mathcal E_5\\ \hline\hline
0 & 1 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0 & 0 & 0\\ 1 & -1 & -1 & 0 & 0 & 0 & 0\\ 1 & 0 & -1 & -1 & -1 & 0 & 0\\ 2 & -1 & -1 & -1 & -1 & -1 & -1\\ 5 & -1 & -3 & -3 & -3 & -3 & -3\\ \end{array}
\end{align*}
\end{minipage}\quad
\begin{minipage}[t]{0.4\textwidth}
\begin{align*}
\rowcolors{2}{white}{gray!15} \begin{array}{ccccccc} d & m_L & m_1 & m_2 & m_3 & m_4 & m_5\\ \hline\hline
1 & 0 & 0 & 0 & 0 & 0 & 0\\ 1 & 1 & 0 & 0 & 0 & 0 & 0\\ 1 & 0 & 1 & 0 & 0 & 0 & 0\\ 2 & 1 & 1 & 1 & 0 & 0 & 0\\ 3 & 0 & 1 & 1 & 1 & 1 & 0\\ 3 & 0 & 1 & 1 & 1 & 1 & 1\\ 3 & 2 & 1 & 1 & 1 & 0 & 0\\ 3 & 2 & 1 & 1 & 1 & 1 & 0\\ 4 & 2 & 2 & 1 & 1 & 1 & 1\\ 4 & 3 & 1 & 1 & 1 & 1 & 1\\ 5 & 3 & 2 & 2 & 1 & 1 & 1\\ 6 & 3 & 2 & 2 & 2 & 2 & 1\\ \end{array}
\end{align*}
\end{minipage} \end{theorem} \begin{proof}
We apply the method of Section \ref{effconemethod} with the following inputs:
\begin{itemize} \item the base locus inequalities from Lemma \ref{lemma-baselocusexceptionals} and Lemma \ref{lemma-baselocusineqsyl5}, corresponding to the fixed divisors $\mathcal E_i$, $\mathcal E_L$, $\Pi(i,j,k)$, and $\Pi(L,i)$;
\item the pullback via the map $Y_{L,\, 5} \rightarrow X_{0,3,5}$ of the inequalities from Corollary \ref{corollary-x035} cutting out the cone $\operatorname{Eff}(X_{0,3,5})$;
\item the pullback via the restriction map $N^1(Z_{L,5}) \rightarrow N^1(\widetilde{S})$ from Lemma \ref{lemma-restrictions} of the inequalities from Proposition \ref{proposition-x115} cutting out the cone $\operatorname{Eff}(X_{1,1,5})$.
\end{itemize} Note that the last bullet point gives inequalities bounding $\operatorname{Eff}(Z_{L, \, 5})$. We convert these into inequalities bounding $\operatorname{Eff}(Y_{L,6})$ by using the dual of the map $\varphi_*$ given in (\ref{formula-pushforward}). These inequalities turn out not to involve one of the multiplicities $m_i$, and so we can view them as inequalities on $\operatorname{Eff}(Y_{L,5})$.
The Normaliz files \texttt{YL5-ineqs} compute the cone defined by these inequalities. We then add the classes of the fixed divisors listed in the first bullet point above, using the files \texttt{YL5-gens}, to obtain the list of extremal rays and inequalities shown in the tables above. It remains to show that all the extremal rays in the left table are effective.
The first 4 rows in the table correspond to effective divisors of type $\mathcal E_L$, $\mathcal E_i$, $\Pi(L,i)$, and $\Pi(i,j,k)$.
For Row 5, counting dimensions shows that there is a 1-parameter family of quadric surfaces in $\mathbb P^3$ containing the line $L$ and the 5 points: containment of $L$ imposes $3$ conditions and passage through $5$ points in general position imposes $5$ conditions. The proper transform of any such quadric has class \begin{align*} 2H-\mathcal E_L - \sum_{i=1}^5 \mathcal E_i \end{align*} which shows that Row 5 is effective.
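For the record, the corresponding parameter count reads
\begin{align*}
\dim\left|\mathcal O_{\mathbb P^3}(2)\right| - 3 - 5 = 9 - 3 - 5 = 1.
\end{align*}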
It remains to deal with the final row of our table, that is to prove that the class
\begin{align*}
D &= 5H-\mathcal E_L - 3 \sum_{i=1}^5 \mathcal E_i
\end{align*}
is effective. The virtual dimension of $|D|$ is
\begin{align*}
{8 \choose 3} - 6 - 5 \cdot 10 -1&=-1
\end{align*} (here ${8 \choose 3}=56$ counts the quintic monomials on $\mathbb P^3$, containing $L$ imposes $6$ conditions, and each triple point imposes ${5 \choose 3}=10$ conditions), so this does not follow from a simple parameter count. Instead we will prove that this class is effective by restriction to a suitable subvariety.
As already seen, there is a 1-parameter family of quadric surfaces in $\mathbb P^3$ containing the line $L$ and the 5 points. By generality of $L$ and the points, every such quadric is smooth; let $Q$ be the proper transform on $Y_{L, \, 5}$ of any such quadric. Restricting $D$ to $Q$ gives the short exact sequence of sheaves
\begin{align*}
0 \rightarrow O(D-Q) \rightarrow O(D) \rightarrow O(D|_{Q}) \rightarrow 0
\end{align*}
and the corresponding long exact sequence of cohomology
\begin{align*}
0 \rightarrow H^0(Y,O(D-Q)) \rightarrow H^0(Y,O(D)) \rightarrow H^0(Q,O(D|_{Q})) \rightarrow H^1(Y,O(D-Q)) \rightarrow \cdots
\end{align*}
Now $Q$ has class $2H-\mathcal E_L - \sum_i \mathcal E_i$, so we get $D-Q=3H-2 \sum_i \mathcal E_i$ which is not effective since a cubic surface can be singular at no more than 4 points in general position. So $h^0(Y,O(D-Q))=0$.
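As a plausibility check on the non-effectivity claim, the naive parameter count already gives a negative expected dimension:
\begin{align*}
\operatorname{vdim}\left|3H-2\sum_{i=1}^5 \mathcal E_i\right| = {6 \choose 3} - 1 - 5\cdot{4 \choose 3} = 19 - 20 = -1.
\end{align*}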
On the other hand, according to \cite[Remark 11.1]{SAGA2014} we have $H^i(Y,O(D-Q))=0$ for $i \geq 2$, so
\begin{align*}
\chi(Y,O(D-Q)) &= h^0(Y,O(D-Q)) - h^1(Y,O(D-Q)).
\end{align*}
Using Lemma \ref{vdimeuler} we have
\begin{align*}
\chi(Y,O(D-Q))&=\operatorname{vdim}|D-Q|+1\\
&=20 - 5 \cdot 4 =0,
\end{align*}
so $h^0(Y,O(D-Q))=h^1(Y,O(D-Q))=0$.
The exact sequence of cohomology above therefore gives an isomorphism
\begin{align*}
H^0(Y,O(D)) &\cong H^0(Q,O(D|_{Q})).
\end{align*}
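The class of the restriction is computed from the rules $H \mapsto h_1+h_2$, $\mathcal E_L \mapsto h_1$, $\mathcal E_i \mapsto e_i$ (the analogue, for the quadrics through $L$ and the $5$ points, of Lemma \ref{lemma-restrictions3} below):
\begin{align*}
D|_{Q} = 5(h_1+h_2) - h_1 - 3\sum_{i=1}^5 e_i = 4h_1+5h_2-3\sum_{i=1}^5 e_i.
\end{align*}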
Proposition \ref{proposition-x115} tells us that $D|_{Q}=4h_1+5h_2-3\sum_i e_i$ is effective, so we conclude that $D$ is effective also. \end{proof} \begin{remark}
Since the class $5H-\mathcal E_L - 3 \sum_i \mathcal E_i$ is a primitive generator of an extremal ray of $\operatorname{Eff}(Y_{L,5})$, it must be represented by an irreducible quintic surface $S \subset \mathbb P^3$. If $R$ is a twisted cubic incident to $L$ and passing through $p_1,\ldots,p_5$, we compute that $S \cdot R = 5\cdot 3 - 1 - 5\cdot 3 = -1$, so $R$ is contained in $S$. As we move the point $R \cap L$ these twisted cubics sweep out an irreducible surface, which must therefore equal $S$. \end{remark} Using the linear map $\varphi_*$ given in (\ref{formula-pushforward}) and its dual, we can restate this result in terms of the variety $Z_{L,\, 4}$; we will use the information in this form in the next section. \begin{corollary}\label{corollary-zl4} The effective cone $\operatorname{Eff}(Z_{L,4})$ is given by the generators below left and the inequalities below right.
\begin{minipage}[t]{0.4\textwidth}
\begin{align*}
\rowcolors{2}{white}{gray!15} \begin{array}{ccccccc} H_1 & H_2 & E_L & E_1 & E_2 & E_3 & E_4\\ \hline\hline
0 & 0 & 1 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 0 & 0 & 0\\ 1 & 0 & 0 & -1 & 0 & 0 & 0\\ 0 & 1 & -1 & 0 & 0 & 0 & 0\\ 0 & 1 & 0 & -1 & -1 & 0 & 0\\ 1 & 1 & -1 & -1 & -1 & -1 & 0\\ 2 & 4 & -1 & -3 & -3 & -3 & -3 \end{array}
\end{align*}
\end{minipage}\quad
\begin{minipage}[t]{0.4\textwidth}
\begin{align*}
\rowcolors{2}{white}{gray!15} \begin{array}{ccccccc} d_1 & d_2 & m_L & m_1 & m_2 & m_3 & m_4\\ \hline\hline
1 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 0 & 0 & 0\\ 1 & 1 & 1 & 0 & 0 & 0 & 0\\ 1 & 1 & 0 & 1 & 0 & 0 & 0\\ 1 & 1 & 1 & 1 & 0 & 0 & 0\\ 1 & 2 & 0 & 1 & 1 & 0 & 0\\ 1 & 2 & 0 & 1 & 1 & 1 & 0\\ 1 & 2 & 1 & 1 & 1 & 0 & 0\\ 1 & 3 & 0 & 1 & 1 & 1 & 1\\ 1 & 3 & 1 & 1 & 1 & 1 & 0\\ 1 & 3 & 1 & 1 & 1 & 1 & 1\\ 2 & 2 & 0 & 1 & 1 & 1 & 1\\ 2 & 3 & 0 & 2 & 1 & 1 & 1\\ 2 & 3 & 1 & 2 & 1 & 1 & 1\\ 2 & 4 & 1 & 2 & 2 & 1 & 1\\ 3 & 2 & 2 & 1 & 1 & 1 & 0\\ 3 & 2 & 2 & 1 & 1 & 1 & 1\\ 3 & 3 & 3 & 1 & 1 & 1 & 1\\ 3 & 4 & 2 & 2 & 2 & 2 & 1\\ 3 & 5 & 2 & 2 & 2 & 2 & 2 \end{array}
\end{align*}
\end{minipage} \end{corollary}
\subsection*{The effective cone of $Y_{L,\, 6}$} Finally for this section, we will consider $Y_{L,\, 6}$, the blowup of $\mathbb P^3$ in a line and 6 points. The effective cone of this variety will be an important input into the computations of the next section. To compute this effective cone, it will be necessary to use the small modification described in Proposition \ref{proposition-smallmodification} and obtain restrictions on divisors on both $Y_{L,\, 6}$ and $Z_{L,\, 5}$; putting together all the resulting restrictions, we can then find the effective cone.
We start with two more base locus lemmas.
\begin{lemma}\label{lemma-baselocusineqsyl6}
Given a class in $N^1(Y_{L,6})$ of the form $ 2H- \sum_{i=1}^6 \mathcal E_i - \mathcal E_j, $ there is a unique effective divisor $\mathcal F_j$ with the given class.
For a divisor $D$ on $Y_{L,6}$ with class \begin{align*} dH-m_L \mathcal E_L - \sum_{i=1}^6 m_i \mathcal E_i, \end{align*}
the divisor $\mathcal F_j$ is contained in $\operatorname{Bs}(D)$ with multiplicity at least $\max\left\{0,\sum_{i=1}^6 m_i + m_j -4d\right\}.$
\end{lemma} \begin{proof}
To prove the first claim, we observe that the class $2H-\sum_{i=1}^6 E_i -E_j$ is represented by the proper transform of a quadric surface in $\mathbb P^3$ with a singular point at $p_j$. By projection away from $p_j$ there is a unique such quadric $\mathcal F_j$.
The second claim follows from \cite[Lemma 4.1]{BDP16}; one can also see it directly as follows. Let $Q$ be the proper transform of a smooth quadric in $\mathbb P^3$ passing through the points $p_1,\ldots,p_6$. Then the intersection $\mathcal F_j \cap Q$ is a curve $C_j$ of class $4l-\sum_i e_i - e_j$. There is a 2-parameter family of such curves $C_j$, and the union of all such curves equals $\mathcal F_j$.
The only possibility for a reducible such curve is $C_j = R \cup L_j$, where $R$ is the proper transform of the twisted cubic through $p_1,\ldots,p_6$ and $L_j$ is the proper transform of a line on $F_j$ through $p_j$. So there is a 1-parameter family of such reducible curves, and therefore the general such $C_j$ is irreducible.
For a divisor $D$ on $Y_{L,6}$ with class $dH-m_L \mathcal E_L -\sum_{i=1}^6 m_i \mathcal E_i$, the intersection number of this divisor with $C_j$ equals
\begin{align*}
D \cdot C_j &= 4d - \sum_{i=1}^6 m_i - m_j.
\end{align*}
Finally we compute that $\mathcal F_j \cdot C_j = 2\cdot 4 - 5 - 2\cdot 2 = -1$, so by Lemma \ref{lemma-baselocusgeneral}, the divisor $\mathcal F_j$ is contained in the base locus $\operatorname{Bs}(D)$ with multiplicity at least
\begin{align*}
-D \cdot C_j &= \sum_{i=1}^6m_i+m_j-4d.
\end{align*} \end{proof}
\begin{lemma}\label{lemma-baselocuszl5}
Given a class in $N^1(Z_{L,\, 5})$ of the form $
H_1+H_2-E_L-E_i-E_j-E_k $
there is a unique effective divisor $G_{ijk}$ with the given class.
For a divisor $D$ on $Z_{L,\, 5}$ with class $ d_1H_1+d_2H_2-m_L E_L - \sum_{i=1}^5 m_i E_i $
the divisor $G_{ijk}$ is contained in $\operatorname{Bs}(D)$ with multiplicity at least $$\max\left\{0,m_i+m_j+m_k -d_1-2d_2\right\}.$$ \end{lemma} \begin{proof}
For the first claim, the given class is represented by the proper transform of a divisor of bidegree $(1,1)$ containing the line $L$ and the points $p_i$, $p_j$, $p_k$. Containing the line $L$ imposes 2 conditions on such divisors, and passing through a point imposes 1 condition. So the corresponding linear system on $Z_{L,\, 5}$ has dimension $0$, meaning that there is a unique effective divisor $G_{ijk}$.
For the second claim, consider a general divisor $H$ with class $H_1+H_2-E_i-E_j-E_k$. The intersection $G_{ijk} \cap H$ will then be a curve of class $C_{ijk} = l_1+2l_2-e_L-e_i-e_j-e_k$.
For a divisor $D$ on $Z_{L, \, 5}$ with class $d_1H_1+d_2H_2-m_LE_L-\sum_{i=1}^5 m_i E_i$, the intersection number of $D$ with the curve $C_{ijk}$ equals
\begin{align*}
D \cdot C_{ijk} &= d_1+2d_2-m_L - m_i-m_j-m_k.
\end{align*}
Finally we compute that $G_{ijk} \cdot C_{ijk} = 1+2-1-1-1-1 = -1$, so by Lemma \ref{lemma-baselocusgeneral} the divisor $G_{ijk}$ is contained in the base locus $\operatorname{Bs}(D)$ with multiplicity at least
\begin{align*}
-D \cdot C_{ijk} &= m_i+m_j+m_k -d_1-2d_2.
\end{align*} \end{proof}
Now we return to consider divisors on $Y_{L,\, 6}$. As in the computation of $\operatorname{Eff}(Y_{L,5})$, we will need to use a restriction map to obtain extra inequalities bounding $\operatorname{Eff}(Y_{L,6})$. For a general line $L$ and $6$ points in general position in $\mathbb P^3$, there is a unique smooth quadric $Q$ containing all of them. Let $\widetilde{Q}$ be the proper transform of $Q$ on $Y_{L,6}$. If $D$ is any irreducible effective divisor on $Y_{L,6}$ other than $\widetilde{Q}$ itself, then the restriction $D|_{\widetilde{Q}}$ must be effective.
The divisor $\widetilde{Q}$ is isomorphic to $X_{1,1,6}$, so the inequalities from Proposition \ref{proposition-x116} will give us restrictions on the class of $D|_{\widetilde{Q}}$. We can label classes on $\widetilde{Q}$ so that $\mathcal E_L$ restricts to $h_1$. The restriction map $N^1(Y_{L,6}) \rightarrow N^1(X_{1,1,6})$ is then evidently given as follows: \begin{lemma}\label{lemma-restrictions3}
The restriction map $N^1(Y_{L,6}) \rightarrow N^1(\widetilde{Q})$ is given by
\begin{align*}
H &\mapsto h_1+h_2\\
\mathcal E_L &\mapsto h_1\\
\mathcal E_i &\mapsto e_i
\end{align*} \end{lemma} We can now compute the effective cone of this threefold. \begin{theorem}\label{theorem-yl6}
The effective cone $\operatorname{Eff}(Y_{L,6})$ is given by the generators below left and the inequalities below right.
\begin{minipage}[t]{0.4\textwidth}
\begin{align*}
\rowcolors{2}{white}{gray!15} \begin{array}{cccccccc} H & \mathcal{E}_L & \mathcal{E}_1 & \mathcal{E}_2 & \mathcal{E}_3 & \mathcal{E}_4 & \mathcal{E}_5 & \mathcal{E}_6\\ \hline\hline
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\ 1 & -1 & -1 & 0 & 0 & 0 & 0 & 0\\ 1 & 0 & -1 & -1 & -1 & 0 & 0 & 0\\ 2 & 0 & -2 & -1 & -1 & -1 & -1 & -1\\ 2 & -1 & -1 & -1 & -1 & -1 & -1 & -1\\ 5 & -1 & -3 & -3 & -3 & -3 & -3 & 0\\ \end{array}
\end{align*}
\end{minipage} \quad
\begin{minipage}[t]{0.4\textwidth}
\begin{align*}
\rowcolors{2}{white}{gray!15}
\begin{array}{cccccccc}
d & m_L & m_1 & m_2 & m_3 & m_4 & m_5 & m_6\\ \hline\hline
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\ 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\ 2 & 1 & 1 & 1 & 0 & 0 & 0 & 0\\ 3 & 0 & 1 & 1 & 1 & 1 & 0 & 0\\ 3 & 0 & 1 & 1 & 1 & 1 & 1 & 0\\ 3 & 2 & 1 & 1 & 1 & 0 & 0 & 0\\ 3 & 2 & 1 & 1 & 1 & 1 & 0 & 0\\ 4 & 2 & 2 & 1 & 1 & 1 & 1 & 0\\ 4 & 3 & 1 & 1 & 1 & 1 & 1 & 0\\ 5 & 0 & 2 & 2 & 1 & 1 & 1 & 1\\ 5 & 2 & 2 & 2 & 1 & 1 & 1 & 1\\ 5 & 3 & 2 & 2 & 1 & 1 & 1 & 0\\ 5 & 4 & 1 & 1 & 1 & 1 & 1 & 1\\ 6 & 3 & 2 & 2 & 2 & 2 & 1 & 0\\ 6 & 3 & 3 & 2 & 1 & 1 & 1 & 1\\ 7 & 0 & 2 & 2 & 2 & 2 & 2 & 2\\ 7 & 2 & 2 & 2 & 2 & 2 & 2 & 2\\ 7 & 4 & 3 & 3 & 1 & 1 & 1 & 1\\ 9 & 3 & 3 & 3 & 3 & 3 & 2 & 1\\ 11 & 4 & 4 & 4 & 3 & 3 & 3 & 1\\ 16 & 5 & 5 & 5 & 5 & 5 & 5 & 2 \end{array}
\end{align*} \end{minipage}
\end{theorem} \begin{proof}
We apply the method of Section \ref{effconemethod} with the following inputs:
\begin{itemize}
\item the base locus inequalities from Lemma \ref{lemma-baselocusexceptionals}, Lemma \ref{lemma-baselocusineqsyl5}, Lemma \ref{lemma-baselocusineqsyl6}, corresponding to the fixed divisors $\mathcal E_i$, $\mathcal E_L$, $\Pi(i,j,k)$, $\Pi(L,i)$, and $\mathcal F_i$ on $Y_{L,\, 6}$;
\item the base locus inequality from Lemma \ref{lemma-baselocuszl5}, corresponding to the fixed divisors $G_{ijk}$ on $Z_{L,\, 5}$;
\item the pullback via the restriction map $N^1(Y_{L,6}) \rightarrow N^1(\widetilde{Q})$ from Lemma \ref{lemma-restrictions3} of the inequalities from Proposition \ref{proposition-x116};
\item the pullback via the morphism $Y_{L,6} \rightarrow Y_{L,5}$ of the inequalities from Theorem \ref{theorem-yl5};
\item the pullback via the morphism $Y_{L,6} \rightarrow X_{0,3,6}$ of the inequalities from Theorem \ref{theorem-x036};
\item the pullback via the morphism $Z_{L,5} \rightarrow X_{1,2,5}$ of the inequalities from Theorem \ref{Eff 125}.
\end{itemize}
Again, in cases where we obtain an inequality on divisors on $Z_{L,\, 5}$, we use the dual of the isomorphism $\varphi \colon N^1(Y) \rightarrow N^1(Z)$ given in (\ref{formula-pushforward}) to convert these into inequalities on divisors on $Y_{L,\, 6}$.
The Normaliz files {\tt YL6-ineqs} compute the cone defined by these inequalities. We then add the classes of the fixed divisors listed in the first bullet point above, using the files \texttt{YL6-gens} to obtain the list of extremal rays and inequalities shown in the tables above. It remains to show that all the extremal rays in the left table are effective.
All the classes in the table on the left-hand side are pullbacks of effective classes on $Y_{L,5}$, with the exception of the class $2H-\mathcal E_L - \sum_{i=1}^6 \mathcal E_i$. This class is represented by the proper transform of the unique quadric in $\mathbb P^3$ containing the line $L$ and the points $p_i$, hence it is effective. \end{proof}
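For the record, the uniqueness of this quadric follows from the same parameter count as before: containing $L$ imposes $3$ conditions and the $6$ points impose $6$ more, so
\begin{align*}
\dim\left|\mathcal O_{\mathbb P^3}(2)\right| - 3 - 6 = 9 - 9 = 0.
\end{align*}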
\section{Blowups of $\mathbb P^1 \times \mathbb P^3$}\label{section-dim4} In this section we use the computations of the previous section as inputs into our method, in order to compute the effective cone of divisors of the blowup of $\mathbb P^1 \times \mathbb P^3$ in up to 6 points in general position.
\subsection*{4 points} First we compute the effective cone of the variety $X_{1,3,4}$, which by definition is the blowup of $\mathbb P^1 \times \mathbb P^3$ in a set of 4 points in general position. Again this result follows from Hausen--S\"u{\ss} \cite[Theorem 1.2]{HS10}. It could also be proved using the method of Section \ref{effconemethod}, starting from the toric case $X_{1,3,2}$ and iterating, but for brevity we give a more direct proof.
\begin{theorem}\label{theorem-x134}
The effective cone $\operatorname{Eff}(X_{1,3,4})$ is given by the list of generators below left and inequalities below right:
\begin{minipage}[t]{0.4\textwidth}
\begin{align*}
\rowcolors{2}{white}{gray!15} \begin{array}{cccccc} H_1 & H_2 & E_1 & E_2 & E_3 & E_4\\ \hline\hline
0 & 0 & 1 & 0 & 0 & 0\\ 1 & 0 & -1 & 0 & 0 & 0\\ 0 & 1 & -1 & -1 & -1 & 0\\ \end{array}
\end{align*}
\end{minipage}\quad
\begin{minipage}[t]{0.4\textwidth}
\begin{align*}
\rowcolors{2}{white}{gray!15} \begin{array}{ccccccc} d_1 & d_2 & m_1 & m_2 & m_3 & m_4 \\ \hline\hline
1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 1 & 1 & 1 & 0 & 0 & 0 \\ 1 & 2 & 1 & 1 & 0 & 0 \\ 1 & 3 & 1 & 1 & 1 & 0 \\ 1 & 3 & 1 & 1 & 1 & 1 \\ \end{array}
\end{align*}
\end{minipage} \end{theorem} \begin{proof}
The classes listed in the table on the left are clearly effective, so they span a subcone $K$ of the cone $\operatorname{Eff}(X_{1,3,4})$. The Normaliz files {\tt X134-gens} show that the subcone $K$ is cut out by the inequalities listed in the table on the right. We will prove that any effective divisor on $X_{1,3,4}$ must satisfy all the inequalities listed in the table on the right, which gives the reverse inclusion $\operatorname{Eff}(X_{1,3,4}) \subset K$.
To prove this, it is enough to show that for each inequality in the table on the right, there is a curve class $C_i \in N_1(X)$ such that the given inequality is of the form $D \cdot C_i \geq 0$ and such that irreducible curves in $C_i$ cover a Zariski-dense open subset of $X_{1,3,4}$.
For the first two inequalities in the table, the curve classes we need are $C_1=l_1$ and $C_2=l_2$, which clearly have the required properties.
For the remaining rows we can use the following argument. We give the details for the last row; the other rows are similar but easier. For any embedding $f \colon \mathbb P^1 \rightarrow \mathbb P^3$ whose image is a twisted cubic, its graph $\Gamma_f$ is an irreducible smooth curve of bidegree $(1,3)$ in $\mathbb P^1 \times \mathbb P^3$. Such a morphism $f$ is given by a choice of 4 linearly independent cubic forms in 2 variables, and direct computation shows that for 5 points $p_1, \ldots, p_5$ in general position in $\mathbb P^1 \times \mathbb P^3$, cubic forms can be chosen appropriately to make $\Gamma_f$ pass through all the $p_i$. Therefore, blowing up at $p_1,\ldots,p_4$ and taking proper transforms, we get a family of irreducible curves with class $l_1+3l_2-e_1-\cdots-e_4$ which cover a Zariski-dense open set, as required. \end{proof}
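To make the correspondence used in this proof explicit, each row $(a_1,a_2,b_1,\ldots,b_4)$ of the right-hand table is the inequality obtained by intersecting with a family of irreducible curves of class $a_1l_1+a_2l_2-\sum_i b_ie_i$ covering a dense open subset; for the last row this reads
\begin{align*}
D \cdot \left( l_1+3l_2-\sum_{i=1}^4 e_i \right) = d_1+3d_2-\sum_{i=1}^4 m_i \geq 0.
\end{align*}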
\subsection*{5 points} Next we consider the variety $X_{1,3,5}$, which by definition is the blowup of $\mathbb P^1 \times \mathbb P^3$ in a set of 5 points in general position. To compute the effective cone, we will again use base locus inequalities coming from fixed divisors, together with inequalities pulled back from blowups in fewer points. Additionally, we will iterate the strategy that we used in the previous section: we restrict divisors on $X_{1,3,5}$ to a subvariety whose effective cone we computed in Section \ref{section-dim3}, in order to obtain extra effectivity conditions, following \textbf{Step 4} of the cone method of Section \ref{effconemethod}.
To do this, fix a set of $s$ points in general position in $\mathbb P^1 \times \mathbb P^3$, let $V$ be a smooth divisor of bidegree $(1,1)$ containing all the points, and let $\widetilde{V_s}$ be the proper transform of $V$ on $X_{1,3,s}$. Then by Lemma \ref{lemma-divisor1} we have that $\widetilde{V_s}$ is isomorphic to $Y_{L,s}$.
\begin{lemma}\label{lemma-restriction2}
The restriction map $N^1(X_{1,3,s}) \rightarrow N^1(\widetilde{V_s})$ is given by
\begin{align*}
H_1 & \mapsto H-\mathcal E_L\\
H_2 & \mapsto H\\
E_i &\mapsto \mathcal E_i
\end{align*} \end{lemma} \begin{proof} It suffices to prove the case $s=0$. \end{proof}
We also need one final base locus lemma. For this, let $i, \, j, \, k$ be any set of 3 distinct indices in $\{1,\ldots,s\}$. Since there is a unique plane in $\mathbb P^3$ containing 3 points in general position, there is a unique effective divisor on $X_{1,3,s}$ in the class $H_2-E_i-E_j-E_k$. We denote this divisor by $\Pi(i,j,k)$. \begin{lemma}\label{lemma-baselocus3pts} Fix $s \geq 3$. For a divisor $D$ on $X_{1,3,s}$ with class $d_1H_1+d_2H_2-\sum_{i=1}^s m_i E_i$, the divisor $\Pi(i,j,k)$ is contained in $\operatorname{Bs}(D)$ with multiplicity at least
\begin{align*}
\max\{0,m_i+m_j+m_k - d_1 - 2d_2\}.
\end{align*} \end{lemma} \begin{proof} Choose 2 general hypersurfaces of bidegree $(1,1)$ passing through the 3 points $p_i, \, p_j, \, p_k$, and let $D_1, \, D_2$ be their proper transforms on $X_{1,3,s}$. Let $C$ be the curve $\Pi(i,j,k) \cap D_1 \cap D_2$. By Proposition \ref{intersection-table} we compute \begin{align*}
C \cdot H_1 &= (H_2-E_i-E_j-E_k)\cdot(H_1+H_2-E_i-E_j-E_k)^2 \cdot H_1= 1,\\
C \cdot H_2 &= (H_2-E_i-E_j-E_k)\cdot(H_1+H_2-E_i-E_j-E_k)^2 \cdot H_2= 2, \end{align*}
so $C$ is a curve of class $l_1+2l_2-e_i-e_j-e_k$ and therefore we have $C \cdot \Pi(i,j,k)=-1$.
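Indeed, using $H_2\cdot l_2=1$, $E_a\cdot e_a=-1$ and the vanishing of all other products between these generators,
\begin{align*}
\Pi(i,j,k)\cdot C = (H_2-E_i-E_j-E_k)\cdot(l_1+2l_2-e_i-e_j-e_k) = 2-3 = -1.
\end{align*}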
By choosing the divisors $D_1$ and $D_2$ appropriately, we can cover the divisor $\Pi(i,j,k)$ by such curves $C$, so by Lemma \ref{lemma-baselocusgeneral} the claim follows. \end{proof} Now we have the ingredients we need for our computation of the effective cone of $X_{1,3,5}$. \begin{theorem}\label{theorem-x135}
The effective cone $\operatorname{Eff}(X_{1,3,5})$ is given by the list of generators below left and inequalities below right:
\begin{minipage}[t]{0.4\textwidth}
\begin{align*}
\rowcolors{2}{white}{gray!15} \begin{array}{ccccccc} H_1 & H_2 & E_1 & E_2 & E_3 & E_4 & E_5\\ \hline\hline
0 & 0 & 1 & 0 & 0 & 0 & 0\\ 1 & 0 & -1 & 0 & 0 & 0 & 0\\ 0 & 1 & -1 & -1 & -1 & 0 & 0\\ 1 & 1 & -1 & -1 & -1 & -1 & -1\\ 1 & 4 & -3 & -3 & -3 & -3 & -3 \end{array}
\end{align*}
\end{minipage}\quad
\begin{minipage}[t]{0.4\textwidth}
\begin{align*}
\rowcolors{2}{white}{gray!15} \begin{array}{ccccccc} d_1 & d_2 & m_1 & m_2 & m_3 & m_4 & m_5\\ \hline\hline
1 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 0 & 0 & 0\\ 1 & 1 & 1 & 0 & 0 & 0 & 0\\ 1 & 2 & 1 & 1 & 0 & 0 & 0\\ 1 & 3 & 1 & 1 & 1 & 0 & 0\\ 1 & 3 & 1 & 1 & 1 & 1 & 0\\ 1 & 4 & 1 & 1 & 1 & 1 & 1\\ 2 & 4 & 2 & 1 & 1 & 1 & 1\\ 2 & 5 & 2 & 2 & 1 & 1 & 1\\ 3 & 3 & 1 & 1 & 1 & 1 & 1\\ 3 & 6 & 2 & 2 & 2 & 2 & 1 \end{array}
\end{align*}
\end{minipage} \end{theorem} \begin{proof}
We apply the method of Section \ref{effconemethod} with the following inputs:
\begin{itemize}
\item the base locus inequalities from Lemma \ref{lemma-baselocusexceptionals} and Lemma \ref{lemma-baselocus3pts} corresponding to the fixed divisors $E_i$, $H_1-E_i$, and $H_2-E_i-E_j-E_k$;
\item the pullback via the morphism $X_{1,3,5} \rightarrow X_{1,3,4}$ of the inequalities from Theorem \ref{theorem-x134} cutting out the cone $\operatorname{Eff}(X_{1,3,4})$;
\item the pullback via the restriction map $N^1(X_{1,3,5}) \rightarrow N^1(\widetilde{V_5})$ from Lemma \ref{lemma-restriction2} of the inequalities from Theorem \ref{theorem-yl5} cutting out the cone $\operatorname{Eff}(\widetilde{V_5})$.
\end{itemize}
The Normaliz files \texttt{X135-ineqs} compute the restricted cone defined by these inequalities. Adding the fixed divisors $E_i$, $H_1-E_i$, and $H_2-E_i-E_j-E_k$, the files \texttt{X135-gens} then compute the list of extremal rays and inequalities shown in the tables above. Lemma \ref{vdim lower bound} shows that all generators in the table on the left except the last are represented by effective divisors. It remains to prove that the last generator $H_1+4H_2-3 \sum_{i=1}^5 E_i$ is represented by an effective divisor.
We will show by direct computation that for 5 points $p_1,\ldots,p_5$ in general position in $\mathbb P^1 \times \mathbb P^3$, there is a hypersurface $D$ of bidegree $(1,4)$ with multiplicity 3 at each of the $p_i$. The proper transform of $D$ on $X_{1,3,5}$ will then be an effective divisor with the required class. Denote the homogeneous coordinates on $\mathbb P^1$ by $s, \, t$ and those on $\mathbb P^3$ by $w, \, x, \, y, \, z$. Using projective transformations we may assume that the points $p_i$ have homogeneous coordinates
\begin{align*}
p_1 = \left([1,0], \, [1,0,0,0] \right), \quad p_2 = \left([0,1], \, [0,1,0,0] \right)&, \quad p_3 = \left([1,1], \, [0,0,1,0] \right),\\ p_4 = \left([1,a], \, [0,0,0,1] \right), \quad p_5 = &\left([1,b], \, [1,1,1,1] \right),
\end{align*}
where $a$ and $b$ are distinct complex numbers different from $0$ and $1$.
Define
\begin{minipage}[t]{0.4\textwidth}
\begin{align*}
C_1&=(w-x)(w-y)z\\
C_2&=(w-x)(w-z)y\\
C_3&=(w-y)(w-z)x\\
\end{align*}
\end{minipage}
\begin{minipage}[t]{0.4\textwidth}
\begin{align*}
L_1&=(b-1)sx+(s-t)y\\
L_2&=(a-b)sx+(t-as)z\\
\end{align*}
\end{minipage}
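As a sample of the (routine) checks involved, with the coordinates fixed above the form $L_1$ vanishes at $p_3$ and at $p_5$:
\begin{align*}
L_1(p_3) &= (b-1)\cdot 1\cdot 0+(1-1)\cdot 1=0,\\
L_1(p_5) &= (b-1)\cdot 1\cdot 1+(1-b)\cdot 1=0.
\end{align*}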
Then each $C_i$ is a form of bidegree $(0,3)$ with multiplicity $1$ at $p_1$ and multiplicity 2 at the other $p_i$, while $L_1$ and $L_2$ are forms of bidegree $(1,1)$ with multiplicity 1 at each $p_i$. So for any coefficients $a_{ij} \in \mathbb C$ the form
\begin{align*}
F&= \sum_{i=1}^2 \sum_{j=1}^3 a_{ij} L_i C_j
\end{align*}
has bidegree $(1,4)$ and multiplicity at least $2$ at $p_1$ and $3$ at the other $p_i$. We can then attempt to choose the coefficients $a_{ij}$ such that the form $F$ has multiplicity 3 at $p_1$ also. Computing, one finds that this occurs if and only if $F$ is a multiple of the form
\begin{align*}
F_0 = aL_1C_1 +(b-a)L_1C_3 + L_2C_2+(b-1)L_2C_3.
\end{align*} So the hypersurface $D=\left\{F_0=0\right\}$ has bidegree $(1,4)$ and multiplicity $3$ at each point $p_i$ as required.
\end{proof}
We conclude this section with the following discussion on the ray generator of bidegree $(1,4)$. \begin{proposition}\label{divisor 1433333 is fixed} The divisor $H_1+4H_2-3 \sum_{i=1}^5 E_i$ on $X_{1,3,5}$ is fixed. \end{proposition} \begin{proof}
As before let $\widetilde{V}_5$ be the proper transform on $X_{1,3,5}$ of a general hypersurface of bidegree $(1,1)$ passing through the 5 points. Recall that $\widetilde{V}_5$ is isomorphic to $Y_{L,5}$, the blowup of $\mathbb P^3$ in a line and 5 points.
Twist the ideal sheaf sequence for $\widetilde{V}_5$ by $D=H_1+4H_2-3\sum_{i=1}^5 E_i$ to get the short exact sequence
\begin{align*}\tag{$\ast$}
0 \rightarrow O_X\left(3H_2-2\sum_{i=1}^5 E_i\right) \rightarrow O_X(D) \rightarrow O_{\widetilde{V}_5}\left(D|_{\widetilde{V}_5}\right) \rightarrow 0
\end{align*}
By Lemma \ref{lemma-restriction2} the restriction of $D$ to $\widetilde{V}_5$ is the class $(H-\mathcal E_L)+4H-3\sum_{i=1}^5 \mathcal E_i=5H-\mathcal E_L -3\sum_{i=1}^5 \mathcal E_i$, which is fixed, as we saw in the proof of Theorem \ref{theorem-yl5} and the remark following it. So $h^0(O_{\widetilde{V}_5}(D|_{\widetilde{V}_5}))=1$. On the other hand, any effective divisor on $X_{1,3,5}$ with class $3H_2-2\sum_{i=1}^5 E_i$ must come from a cubic in $\mathbb P^3$ with singularities at 5 points in general position. No such cubic exists, so we have $h^0(O_X(3H_2-2\sum_{i=1}^5 E_i))=0$. So the long exact sequence of cohomology associated to the short exact sequence ($\ast$) shows that $h^0(O_X(D))=1$ as required. \end{proof}
\begin{remark}\label{Laface-Moraga1}
As observed in Remark \ref{Laface-Moraga0}, a triple point imposes a number of conditions on the family of bidegree $(1,4)$-hypersurfaces of $\mathbb P^1\times \mathbb P^n$ that is one less than the expected number obtained by a parameter count. This is geometrically justified by the presence of the fixed line $C_i=l_1-e_i$ in the base locus of $D$ at least $-D\cdot C_i$ times. If, for instance, we consider $D=H_1+4H_2-3\sum_{i=1}^5E_i$ on $X_{1,3,5}$, we have $D\cdot C_i=-2$, for $i=1,\dots,5$, and each $C_i$ contributes $1$ to the fibre-expected dimension formula:
$$\textrm{fibre-dim}|D|=\operatorname{vdim}|D|+5=-1.$$
Furthermore, we know from Proposition \ref{divisor 1433333 is fixed} that $\dim|D|=0$, so there is a gap of $1$ between the dimension and the fibre-expected dimension. Our expectation is that the fixed curve $C=l_1+3l_2-\sum_{i=1}^5 e_i$, which satisfies $D \cdot C =-2$, contributes $1$ to the dimension count, so that
$$\dim|D|= \textrm{fibre-dim}|D|+1.$$ \end{remark}
\subsection*{6 points in $\mathbb P^1 \times \mathbb P^3$} Now we come to our final effective cone computation. Fix a set of 6 points in general position in $\mathbb P^1 \times \mathbb P^3$ and let $X_{1,3,6}$ be the corresponding blowup. We will compute the effective cone $\operatorname{Eff}(X_{1,3,6})$ in a similar way to the previous case.
A parameter count shows that there is a 1-parameter family of divisors of bidegree $(1,1)$ passing through the 6 points. Let $V_6$ be a smooth such divisor, and let $\widetilde{V_6} \cong Y_{L,6}$ be the proper transform of $V_6$ on $X_{1,3,6}$. Then as before, using the restriction map $N^1(X_{1,3,6}) \rightarrow N^1(\widetilde{V_6})$ given in Lemma \ref{lemma-restriction2}, we can pull back the inequalities cutting out $\operatorname{Eff}(\widetilde{V_6})$ to get inequalities cutting out $\operatorname{Eff}(X_{1,3,6})$.
Together with our existing base locus lemmas and pulling back from $X_{1,3,5}$, this gives us enough information to compute our final effective cone. \begin{theorem}\label{theorem-x136}
The effective cone $\operatorname{Eff}(X_{1,3,6})$ is given by the generators below left and the inequalities below right.
\begin{minipage}[t]{0.4\textwidth}
\begin{align*} \rowcolors{2}{white}{gray!15} \begin{array}{cccccccc} H_1 & H_2 & E_1 & E_2 & E_3 & E_4 & E_5 & E_6\\ \hline\hline
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\ 1 & 0 & -1 & 0 & 0 & 0 & 0 & 0\\ 0 & 1 & -1 & -1 & -1 & 0 & 0 & 0\\ 0 & 2 & -2 & -1 & -1 & -1 & -1 & -1\\ 1 & 1 & -1 & -1 & -1 & -1 & -1 & -1\\ 1 & 4 & -3 & -3 & -3 & -3 & -3 & 0 \end{array}
\end{align*}
\end{minipage} \quad
\begin{minipage}[t]{0.4\textwidth}
\begin{align*}
\rowcolors{2}{white}{gray!15} \begin{array}{cccccccc} d_1 & d_2 & m_1 & m_2 & m_3 & m_4 & m_5 & m_6\\ \hline\hline
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\ 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0\\ 1 & 2 & 1 & 1 & 0 & 0 & 0 & 0\\ 1 & 3 & 1 & 1 & 1 & 0 & 0 & 0\\ 1 & 3 & 1 & 1 & 1 & 1 & 0 & 0\\ 1 & 4 & 1 & 1 & 1 & 1 & 1 & 0\\ 1 & 5 & 1 & 1 & 1 & 1 & 1 & 1\\ 2 & 4 & 2 & 1 & 1 & 1 & 1 & 0\\ 2 & 5 & 2 & 2 & 1 & 1 & 1 & 0\\ 3 & 3 & 1 & 1 & 1 & 1 & 1 & 0\\ 3 & 5 & 2 & 2 & 1 & 1 & 1 & 1\\ 3 & 6 & 2 & 2 & 2 & 2 & 1 & 0\\ 3 & 6 & 3 & 2 & 1 & 1 & 1 & 1\\ 3 & 7 & 3 & 3 & 1 & 1 & 1 & 1\\ 5 & 7 & 2 & 2 & 2 & 2 & 2 & 2\\ 6 & 9 & 3 & 3 & 3 & 3 & 2 & 1\\ 7 & 11 & 4 & 4 & 3 & 3 & 3 & 1\\ 11 & 16 & 5 & 5 & 5 & 5 & 5 & 2 \end{array}
\end{align*}
\end{minipage} \end{theorem} \begin{proof}
We apply the method of Section \ref{effconemethod} with the following inputs:
\begin{itemize}
\item the base locus inequalities from Lemma \ref{lemma-baselocusexceptionals} and Lemma \ref{lemma-baselocus3pts} corresponding to the fixed divisors $E_i$, $H_1-E_i$, and $H_2-E_i-E_j-E_k$;
\item the pullback via the morphism $X_{1,3,6} \rightarrow X_{1,3,5}$ of the inequalities from Theorem \ref{theorem-x135};
\item the pullback via the restriction map $N^1(X_{1,3,6}) \rightarrow N^1(\widetilde{V_6})$ from Lemma \ref{lemma-restriction2} of the inequalities from Theorem \ref{theorem-yl6} cutting out $\operatorname{Eff}(Y_{L,6})$.
\end{itemize}
The Normaliz files \texttt{X136-ineqs} compute the restricted cone defined by these inequalities. Adding the fixed divisors $E_i$, $H_1-E_i$, and $H_2-E_i-E_j-E_k$ to the restricted cone and computing the extremal rays of the resulting cone, the files \texttt{X136-gens} then compute the list of extremal rays and inequalities shown in the tables above.
The generators in Rows 1, 2, 3 and 6 are pulled back from effective classes on $X_{1,3,5}$, hence are effective.
The generator in Row 4 is represented by the proper transform on $X_{1,3,6}$ of a hypersurface of the form $\pi_3^{-1}(Q)$ where $Q$ is a singular quadric in $\mathbb P^3$ with vertex at $\pi_3(p_1)$ and passing through the other points $\pi_3(p_i)$. There is a unique such quadric in $\mathbb P^3$, so this generator is represented by a fixed effective divisor.
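Concretely, projecting from the vertex identifies such quadric cones with conics in $\mathbb P^2$ through the $5$ projected points, and
\begin{align*}
\dim\left|\mathcal O_{\mathbb P^2}(2)\right| - 5 = 5 - 5 = 0,
\end{align*}
so for points in general position there is exactly one such cone.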
Finally, the generator in Row 5 is effective by Lemma \ref{vdim lower bound}.
\end{proof}
\section{Weak Fano and log Fano varieties}\label{weak-log} For terminology used in this section, we refer to \cite[Notation 0.4, Section 2.3]{KM98}.
A $\mathbb Q$-factorial projective variety with finitely generated Picard group is a {\it Mori dream space} if its Cox ring \begin{align*}
\operatorname{Cox}(X) = \bigoplus_{L \in \operatorname{Pic}(X)} H^0(X,L) \end{align*} is a finitely generated $\mathbb C$-algebra. Blowups of products of copies of $\mathbb P^n$ at points that are Mori dream spaces were classified in \cite{Mukai05,CT06}. A similar question can be asked for blowups of mixed products such as the varieties $X_{m,n,s}$ and the answer is unknown in general.
Birkar--Cascini--Hacon--McKernan \cite[Corollary 1.3.2]{BCHM} showed that $X$ is a Mori dream space if it is log Fano. In particular, if $X$ is weak Fano then it is log Fano and therefore a Mori dream space. It is therefore natural to ask which of the varieties $X_{m,n,s}$ are weak Fano or log Fano.
In this section we discuss our progress in this direction for varieties $X_{1,n,s}$ with $n=2, \, 3$.
\begin{definition}\label{definition-kltpair}
Let $X$ be a $\mathbb Q$-factorial variety and $\Delta$ a $\mathbb Q$-divisor on $X$. The pair $(X,\Delta)$ is \emph{klt} if the coefficients of $\Delta$ are in the set $[0,1)$ and for any log resolution $f \colon Y \rightarrow X$ we have
\begin{align*}
K_Y+\Delta_Y &= f^*(K_X + \Delta) + \sum_i a_i E_i,
\end{align*}
where $\Delta_Y$ is the proper transform of $\Delta$ on $Y$, the $E_i$ are prime exceptional divisors, and $a_i >-1$ for all $i$. \end{definition} \begin{definition}
A $\mathbb Q$-factorial projective variety $X$ is
\begin{itemize}
\item \emph{log Fano} if there is a $\mathbb Q$-divisor $\Delta$ on $X$ such that the pair $(X,\Delta)$ is klt and $-K_X-\Delta$ is ample;
\item \emph{weak Fano} if $-K_X$ is nef and big.
\end{itemize}
\end{definition} Every weak Fano variety is log Fano; a reference is \cite[Lemma 2.5]{AM16}.
Now we move on to our results. First we consider the case of threefolds. For ease of notation we will denote the anticanonical divisor simply by $-K_X$ where the variety $X_{m,n,s}$ in question is understood. We will use the fact \cite[Proposition 2.61]{KM98} that a nef divisor $D$ is big if and only if its top self-intersection satisfies $D^{\operatorname{dim} X}>0$.
\begin{theorem}\label{theorem-weakfanox126} The variety $X_{1,2,s}$ is weak Fano if and only if $s\le6$. \end{theorem}
\begin{proof} We will show that for $s \leq 6$ the anticanonical divisor $-K_X=2H_1+3H_2-2\sum_{j=1}^sE_j$ is nef and has positive top self-intersection, while for $s \geq 7$ the top self-intersection is negative.
Using Corollary \ref{corollary-topselfint}, we compute the top self-intersection number \begin{align*}
(-K_X)^3&=\left( 2H_1+3H_2 - 2 \sum_{i=1}^s E_i \right)^3\\ &= 2^1 \cdot 3^2 \cdot {3 \choose 1} - \sum_{i=1}^s 2^3\\
&= 54-8s, \end{align*} which is positive if and only if $s\le6$.
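In particular, at the threshold values of $s$ we get
\begin{align*}
(-K_X)^3 = 54-48=6>0 \quad (s=6), \qquad (-K_X)^3 = 54-56=-2<0 \quad (s=7).
\end{align*}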
In order to show that $-K_X$ is nef for $s \leq 6$, we find a set which is an upper bound for its base locus, then show that it has positive degree on all curves in that set. We consider the following unions of effective divisors, all of which belong to the anticanonical linear system: \[ \left(H_1+H_2-\sum_{j=1}^{s-1}E_{i_j}\right)+\left(H_1+H_2-\sum_{j=1}^{s-1}E_{k_j}\right)+\left(H_2-E_a-E_b\right), \]
where $a,b,i_j,k_j\in\{1,\dots,s\}$, the $i_j$ are distinct, as are the $k_j$, $a\ne b$, and each index appears precisely twice. That the linear systems corresponding to the three summands are nonempty follows from Lemma \ref{vdim lower bound}, indeed $\operatorname{vdim}|H_1+H_2-\sum_{j=1}^{s-1}E_{i_j}|=6-s$ and $\operatorname{vdim}|H_2-E_a-E_b|=0$. Therefore, the base locus of $-K_X$ is contained in the intersection of these unions of divisors, which is a subset of $\bigcup_{j=1}^sE_j$.
Now, assume that there is an irreducible curve $C$ such that $-K_X\cdot C<0$. Then this curve must be contained in the base locus $\operatorname{Bs}(-K_X)\subseteq\bigcup_{j=1}^sE_j$. Now, since the $E_j$'s are disjoint, the curve $C$ must be contained in one of the exceptional divisors, say $E_j$. But for any curve $C \subset E_j$ we have $E_j \cdot C<0$, while $E_i \cdot C = 0$ for $i \neq j$ and $H_1 \cdot C = H_2 \cdot C =0$. This gives $-K_X \cdot C >0$, contrary to our assumption. \end{proof} From \cite[Corollary 1.3.2]{BCHM} it follows that, for $s\le 6$, the variety $X_{1,2,s}$ is a Mori dream space. This gives a conceptual explanation for the finitely generated cones of effective divisors that we computed in Section \ref{section-dim3}.
We now turn to the case of fourfolds $X_{1,3,s}$. Here our varieties are never weak Fano, since $-K_X=2H_1+4H_2-3\sum_{i=1}^s E_i$ has negative degree on curves of the form $l_1-e_i$. We will show, however, that for up to 6 points, our varieties are still log Fano, and therefore Mori dream spaces.
Fujino--Gongyo \cite[Theorem 5.1]{FG12} showed that if $f \colon X \rightarrow Y$ is a surjective morphism of projective varieties and $X$ is log Fano, then $Y$ is also log Fano. Therefore it would suffice to prove our result in the case $s=6$. However, this approach would not give the log Fano structure explicitly in the case $s<6$. Instead, we give explicit log Fano structures on both $X_{1,3,5}$ and $X_{1,3,6}$.
Since the log Fano condition requires an ample class, we start with a lemma verifying that a particular class is ample. \begin{lemma}\label{lemma-ample} For $s \leq 6$ the divisor class $D=2H_1+2H_2-\sum_{i=1}^s E_i$ is ample on $X_{1,3,s}$. \end{lemma} \begin{proof} It is sufficient to prove the case $s=6$.
First we claim that for any 3 distinct indices $i, \, j, \, k$ from $\{1,\ldots,6\}$, the class $H_1+H_2-E_i-E_j-E_k$ is basepoint free, hence nef. To see this, we can write the class as
\begin{align*}
H_1+H_2-E_i-E_j-E_k &= \left(H_1-E_i\right)+\left(H_2-E_j-E_k\right)
\end{align*} and by permuting indices we get 2 more such decompositions. The divisors representing the classes $H_1-E_a$ are pairwise disjoint, while the base locus of the class $H_2-E_j-E_k$ is contained in the proper transform of the preimage of the line in $\mathbb P^3$ through $\pi_3(p_j)$ and $\pi_3(p_k)$. Permuting indices we get 3 such lines. By generality no point of $\mathbb P^3$ lies on all three, while a point lying on two of them is one of the three points $\pi_3(p_a)$, and the remaining line avoids it. A common base point of the three decompositions would therefore have to lie both on the proper transform of a fibre $\mathbb P^1\times\{\pi_3(p_a)\}$ and on the divisor $H_1-E_a$; these meet in $\mathbb P^1\times\mathbb P^3$ transversally in the single blown-up point $p_a$, so their proper transforms are disjoint. So the base locus of $H_1+H_2-E_i-E_j-E_k$ is empty as claimed.
In particular this implies that $D$ is nef, since it can be written for example as
\begin{align*}
D &= \left(H_1+H_2-E_1-E_2-E_3 \right)+ \left(H_1+H_2-E_4-E_5-E_6 \right).
\end{align*}
By Corollary \ref{corollary-topselfint} we compute $D^4 = 2 \cdot 2^3 \cdot {4 \choose 1} - 6 = 58 >0$, so $D$ is big. Therefore it lies in the interior of the effective cone $\operatorname{Eff}(X_{1,3,6})$.
If $D$ is nef but not ample, it lies on a codimension-1 face $F$ of the nef cone $\operatorname{Nef}(X_{1,3,6})$. By the previous paragraph, the face $F$ intersects the interior of $\operatorname{Eff}(X_{1,3,6})$. So there are generators of $\operatorname{Eff}(X_{1,3,6})$ on both sides of $F$, and in particular we can choose a generator $G$ such that for every natural number $k$ the class $kD-G$ is not nef. We will show that for every generator $G$ of $\operatorname{Eff}(X_{1,3,6})$ the class $kD-G$ is nef for some $k$. By the argument above, this implies that $D$ is ample, as required. By symmetry, it is enough to find a $k_i$ such that $k_iD-G_i$ is nef for the 6 generators $G_1,\ldots,G_6$ listed in the left-hand table of Theorem \ref{theorem-x136}. We compute:
\begin{itemize}
\item $D-G_1 = 2H_1+2H_2-2E_1-E_2-\cdots-E_6$: this class can be decomposed into effective classes as
\begin{align*}
D-G_1 &= (H_2-E_1-E_2-E_3)+(H_2-E_1-E_4-E_5)+(H_1-E_6)+H_1.
\end{align*}
By permuting indices in this decomposition one can see that the base locus of $D-G_1$ consists of at most the two curves of class $l_1-e_1$ and $l_2-e_1$. But $(D-G_1) \cdot (l_1-e_1) = 0$ and $(D-G_1) \cdot (l_2-e_1)=0$, so $D-G_1$ is nef.
\item $D-G_2 = H_1+2H_2-E_2-\cdots-E_6$: this can be decomposed into effective classes as
\begin{align*}
D-G_2 &= (H_1-E_2)+(H_2-E_3-E_4)+(H_2-E_5-E_6).
\end{align*}
By permuting indices we see that $D-G_2$ is basepoint-free, hence nef.
\item $D-G_3 = 2H_1+H_2-E_4-E_5-E_6$: this can be decomposed into effective classes as
\begin{align*}
D-G_3 &= (H_1-E_4)+(H_1-E_5)+(H_2-E_6).
\end{align*}
By permuting indices, we see that $D-G_3$ is basepoint-free, hence nef.
\item $2D-G_4 = 4H_1+2H_2-E_2-\cdots-E_6$: this class can be decomposed as $2D-G_4= 2H_1+(D+E_1)$. The proof that $D$ is nef can easily be modified to show that $D+E_1$ is nef, so $2D-G_4$ is a sum of nef classes, hence nef.
\item $D-G_5 = H_1+H_2$ is nef.
\item $4D-G_6= 7H_1+4H_2-E_1-\cdots-E_5-4E_6$: this class can be written as
\begin{align*}
4D-G_6 &= 2H_1+(H_1-E_1)+\cdots+(H_1-E_5)+4(H_2-E_6).
\end{align*}
The base locus of $4D-G_6$ is therefore contained in the union of the divisors $H_1-E_i$ for $i=1,\ldots,5$, together with the unique curve of class $l_1-e_6$. Each divisor $H_1-E_i$ is isomorphic to $\mathbb P^3$ blown up in 1 point, with cone of curves spanned by $e_i$ and $l_2-e_i$. For $i=1,\ldots5$ we have $(4D-G_6) \cdot e_i =1$ and $(4D-G_6) \cdot (l_2-e_i)=3$; we also compute $(4D-G_6) \cdot (l_1-e_6) = 3$. So $4D-G_6$ is nef as required.
\end{itemize} \end{proof} Now we are ready to describe the log Fano structures on our examples. \begin{theorem}\label{theorem-logfanox135} $X_{1,3,5}$ is log Fano. \end{theorem} \begin{proof}
There are exactly 10 planes in $\mathbb P^3$ containing the projection images of 3 of the 5 points. Let $P_1, \ldots, P_{10}$ be the proper transforms on $X_{1,3,5}$ of the preimages of these planes. For each $P_m$, its divisor class on $X_{1,3,5}$ is of the form $H_2-E_i-E_j-E_k$.
Let $D_1$ and $D_2$ be the proper transforms on $X_{1,3,5}$ of two general hypersurfaces of bidegree $(1,1)$ in $\mathbb P^1 \times \mathbb P^3$ containing all 5 points. Each of the $D_i$ has divisor class $H_1+H_2-\sum_i E_i$.
Now for a rational number $\epsilon$, consider the $\mathbb Q$-divisor
\begin{align*}
\Delta &= \frac15 \left( \sum_{m=1}^{10} P_m \right) + (1-\epsilon)\left( D_1 + D_2 \right) + \left( \frac15 - \epsilon \right) \sum_{i=1}^5 E_i.
\end{align*}
For $0 < \epsilon < \frac15$ this is an effective divisor. We compute that the class of $\Delta$ in $N^1(X)$ equals
\begin{align*}
& \frac15 \left( 10H_2- 6 \sum_{i=1}^5 E_i \right) + 2(1-\epsilon) \left( H_1+H_2-\sum_{i=1}^5 E_i \right) + \left( \frac15 - \epsilon \right) \sum_{i=1}^5 E_i\\
&= (2-2 \epsilon) H_1+ (4-2\epsilon) H_2 + (-3+\epsilon) \left( \sum_{i=1}^5 E_i \right),
\end{align*} and so \begin{align*}
-K_X-\Delta &= 2\epsilon H_1 + 2 \epsilon H_2 - \epsilon \sum_{i=1}^5 E_i \\
&= \epsilon \left( 2H_1+2H_2 - \sum_{i=1}^5 E_i \right). \end{align*} The class $2H_1+2H_2-\sum_i E_i$ is ample by Lemma \ref{lemma-ample}, so it remains ample when multiplied by $\epsilon >0$.
It remains to prove that $(X,\Delta)$ is a klt pair. Since we have chosen $0 < \epsilon < \frac15$, the divisor $\Delta$ is a $\mathbb Q$-linear combination of prime divisors with all coefficients in the set $[0,1)$.
To compute discrepancies of the pair $(X,\Delta)$, first let us analyse the intersections among components of $\Delta$. For each point $\pi_3(p_i)$ there are 6 planes containing this point and 2 more of the points $\pi_3(p_j)$, so on $X$ there are 6 divisors $P_m$ meeting pairwise transversely along the curve $C_i = \pi_3^{-1} \pi_3 (p_i)$. Next, for two points $\pi_3(p_i)$ and $\pi_3(p_j)$, let $L_{ij} \subset \mathbb P^3$ be the line joining them. Then there are 3 planes in $\mathbb P^3$ containing the line $L_{ij}$ and one other point, and therefore 3 divisors $P_m$ meeting pairwise transversely along the surface $S_{ij} = \pi_3^{-1}(L_{ij})$.
Now consider the morphism \begin{center}
\begin{tikzcd}
Z \arrow{r}{g} \arrow[bend right=30,swap]{rr}{h} &Y \arrow{r}{f} &X
\end{tikzcd} \end{center} where $f$ is the blowup of the curves $C_i$ and $g$ is the blowup of the proper transforms of the surfaces $S_{ij}$. One can check that $h$ is indeed a log resolution of the pair $(X,\Delta)$, so we can compute discrepancies of the pair by comparing $K_X+\Delta$ to $K_Z+\Delta_Z$. (Here $\Delta_Z$ denotes the proper transform of $\Delta$ on $Z$.)
Let $F_i$ denote the exceptional divisors of $f$ and $G_{ij}$ the exceptional divisors of $g$. Then \begin{align*}
K_Z = h^*K_X + 2 \sum_i F_i + \sum_{i,\, j} G_{ij}. \end{align*} Moreover since there are 6 planes $P_m$ through each of the points $\pi_3(p_i)$ and 3 planes containing each of the lines $L_{ij}$, we get \begin{align*}
h^* \left( \sum_{m=1}^{10} P_m \right) = \sum_{m=1}^{10} \left(P_m \right)_Z + 6 \sum_i F_i + 3 \sum_{i,j} G_{ij}. \end{align*} On the other hand, the divisors $D_i$ and $E_i$ do not contain the centres of the blowups in $f$ and $g$, so we have \begin{align*}
h^* D_i &= (D_i)_Z \quad (i=1,\, 2),\\
h^* E_j &= (E_j)_Z \quad (j=1,\ldots,5). \end{align*} Putting everything together (the coefficient of each $F_i$ is $2-\frac65=\frac45$, and that of each $G_{ij}$ is $1-\frac35=\frac25$) we get \begin{align*}
K_Z+\Delta_Z = h^*(K_X+\Delta) + \frac45 \sum_i F_i + \frac25 \sum_{i,j} G_{ij}, \end{align*} so by Definition \ref{definition-kltpair} the pair $(X,\Delta)$ is klt. \end{proof} The proof in the case $s=6$ is similar. To find the required boundary divisor this time, instead of planes through 3 points we use quadric cones through all the points and with vertex at one point. \begin{theorem}\label{theorem-logfanox136} $X_{1,3,6}$ is log Fano. \end{theorem} \begin{proof} For $i=1,\ldots,6$, define $Q_i$ to be the proper transform of the preimage of a cone in $\mathbb P^3$ with vertex at $\pi_3(p_i)$ and passing through the other 5 points $\pi_3(p_j)$. Now define
\begin{align*}
\Delta &= \frac16 \sum_i Q_i + (1-\epsilon)(D_1+D_2) +\left( \frac16 - \epsilon \right) \sum_i E_i
\end{align*}
For $0 < \epsilon < \frac16$ this is an effective divisor. We compute that the class of $\Delta$ in $N^1(X)$ equals
\begin{align*}
&\frac 16 \left( 12 H_2 - 7 \sum_{i=1}^6 E_i \right) +2(1-\epsilon)\left( H_1+H_2-\sum_{i=1}^6 E_i \right) + \left( \frac 16 - \epsilon \right) \sum_{i=1}^6 E_i\\
&= (2-2\epsilon) H_1 +(4-2\epsilon)H_2 +(-3+\epsilon) \sum_{i=1}^6 E_i,
\end{align*}
so as before we have
\begin{align*}
-K_X-\Delta &= \epsilon \left( 2H_1 + 2 H_2 - \sum_{i=1}^6 E_i \right),
\end{align*}
which is again an ample class.
It remains to prove that $(X,\Delta)$ is a klt pair. Again, since we have chosen $0<\epsilon<\frac16$, the coefficients of $\Delta$ are in the set $[0,1)$. As before, let $C_i=\pi_3^{-1} \pi_3 (p_i)$, and now let $S= \pi_3^{-1}(R)$ where $R$ is the unique twisted cubic through the 6 points $\pi_3(p_i)$. Then for any two of the quadrics, say $Q_j$ and $Q_k$, their intersection is $Q_j \cap Q_k = L_{jk} \cup R$, and moreover none of the other quadrics contains $L_{jk}$. So our log resolution is given as before by a sequence of blowups
\begin{center}
\begin{tikzcd}
Z \arrow{r}{g} \arrow[bend right=30,swap]{rr}{h} &Y \arrow{r}{f} &X
\end{tikzcd}
\end{center}
where as before $f$ is the blowup of the 6 curves $C_i$ but now $g$ is the blowup of a single surface, namely the proper transform of $S$ on $Y$.
Let $F_i$ denote the exceptional divisors of $f$ and $G$ the exceptional divisor of $g$. Then one computes that for each quadric cone $Q_i$ we have
\begin{align*}
h^*(Q_i) &= (Q_i)_Z + \sum_{j=1}^6 F_j + F_i + G, \quad \quad \text{so} \\
h^*\left( \sum_i Q_i \right) &= \sum_i (Q_i)_Z + 7 \sum_i F_i + 6 G .
\end{align*} Putting everything together we find \begin{align*} K_Z + \Delta_Z = h^*(K_X+\Delta) +\frac56 \sum_i F_i. \end{align*} (Note that the coefficient of each $F_i$ on the right hand side is $2-\frac76=\frac56$, while that of the exceptional divisor $G$ equals $1-1=0$.) So again $(X,\Delta)$ is a klt pair. \end{proof} Using for example the theorem of Fujino--Gongyo mentioned above we deduce the remaining cases: \begin{corollary}\label{corollary-logfanox13s} For $s\leq 6$ the variety $X_{1,3,s}$ is log Fano. \end{corollary}
It follows that, for $s\le 6$, the variety $X_{1,3,s}$ is a Mori dream space. Again, this gives a conceptual explanation for the rational polyhedral effective cones that we found in Section \ref{section-dim4}.
\subsection*{Infinite cones of divisors} We finish with some upper bounds on the number of points that we can blow up before cones of effective divisors cease to be finitely generated. These are based on the corresponding bounds for a single projective space: we can lift divisors from a single projective space to a product to translate these bounds to the product setting.
The main input is the following theorem of Mukai and Castravet--Tevelev \cite{Mukai05,CT06}. \begin{theorem}\label{theorem-mukai}
For $n>1$, let $X_{0,n,s}$ be the blowup of $\mathbb P^n$ in a set of $s$ points in very general position. Then $\operatorname{Eff}(X_{0,n,s})$ is finitely generated if and only if one of the following holds:
\begin{itemize}
\item $n=2$ and $s \leq 8$;
\item $n=3$ and $s \leq 7$;
\item $n=4$ and $s \leq 8$;
\item $n \geq 5$ and $s \leq n+3$.
\end{itemize} \end{theorem} By pulling back divisors from a single projective space to a product, we get the following corollary. \begin{corollary}\label{corollary not MDS}
Consider a product of projective spaces
\begin{align*}
\mathbb P &= \mathbb P^{n_1} \times \cdots \times \mathbb P^{n_k}.
\end{align*} Let $Y$ be the blowup of $\mathbb P$ in a set of $s$ points in very general position. If there exists an $n_i \neq 1$ such that $n_i$ and $s$ do not satisfy one of the conditions of Theorem \ref{theorem-mukai}, then $\operatorname{Eff}(Y)$ is not rational polyhedral. \end{corollary} \begin{proof} Suppose that such an $n_i$ exists; for simplicity let us denote it by $n$. Choose a $\mathbb P^n$ factor of $\mathbb P$ and consider the projection map $\pi \colon \mathbb P \rightarrow \mathbb P^n$. Let $p_1,\ldots,p_s$ be the points in $\mathbb P$ and let $q_1,\ldots,q_s$ be their images in $\mathbb P^n$.
Let $\Delta$ be an effective divisor on $X_{0,n,s}$ with class $dh-\sum_i m_i e_i$. If $d>0$ and the $m_i$ are nonnegative, then $\Delta$ corresponds to a divisor $D \subset \mathbb P^n$ with degree $d$ and multiplicity $m_i$ at the point $q_i$. For such a divisor $D$, its preimage $\pi^{-1}(D)$ is the product of $D$ with a product of projective spaces, and therefore it has multiplicity $m_i$ at the point $p_i$. So the proper transform of $\pi^{-1}(D)$ on $Y$ has class $dH_n-\sum_i m_i E_i$ for some nonnegative integers $d$ and $m_i$. Conversely, any irreducible effective divisor on $Y$ whose class is of this form must either be the exceptional divisor over one of the points, or else pulled back from $\mathbb P^n$ in this way. The classes of these divisors span a subcone $E_Y$ of $\operatorname{Eff}(Y)$ which is isomorphic to $\operatorname{Eff}(X_{0,n,s})$.
The subcone $E_Y$ lies in a proper face $F$ of $\operatorname{Eff}(Y)$, namely the face orthogonal to all curve classes in factors of $\mathbb P$ other than $\mathbb P^n$. No other effective divisor on $Y$ has class in the face $F$. Therefore if $\operatorname{Eff}(Y)$ is rational polyhedral, hence spanned by effective divisors, the subcone $E_Y$ must be a face of $\operatorname{Eff}(Y)$. Every face of a rational polyhedral cone is rational polyhedral, so this implies $E_Y$, equivalently $\operatorname{Eff}(X_{0,n,s})$, is rational polyhedral. Therefore $n$ and $s$ must satisfy one of the conditions of Theorem \ref{theorem-mukai}. \end{proof} The following statement summarises our knowledge of which varieties $X_{1,n,s}$ are Mori dream spaces.
\begin{theorem} \label{theorem-summary}
Let $X_{1,n,s}$ be the blowup of $\mathbb P^1 \times \mathbb P^n$ in $s$ points in very general position. Then $X_{1,n,s}$ is a Mori dream space in the following cases:
\begin{enumerate}
\item[(a)] $n=2$ or $n=3$ and $s \leq 6$;
\item[(b)] $n$ arbitrary and $s \leq n+1$.
\end{enumerate}
On the other hand, $X_{1,n,s}$ is not a Mori dream space in the following cases:
\begin{enumerate}
\item[(c)] $n=2$ or $n=4$ and $s \geq 9$;
\item[(d)] $n=3$ and $s \geq 8$;
\item[(e)] $n \geq 5$ and $s \geq n+4$.
\end{enumerate} \end{theorem} \begin{proof}
Statement $(a)$ was proved in Theorem \ref{theorem-weakfanox126} and Corollary \ref{corollary-logfanox13s}. Statement $(b)$ follows from the theorem of Hausen--S{\"u}{\ss} \cite[Theorem 1.3]{HS10}.
Statements $(c)$, $(d)$, $(e)$ follow from Theorem \ref{theorem-mukai} and Corollary \ref{corollary not MDS} by taking $k=2$ and $n_1=1$. \end{proof}
These results leave only a small number of open cases in each dimension, which we address in the following questions.
\begin{question}\label{question1} Are the varieties $X_{1,2,7}$, $X_{1,2,8}$, $X_{1,3,7}$ log Fano or Mori dream spaces? \end{question}
\begin{question}\label{question2} For $s=6,\, 7,\, 8$, are the varieties $X_{1,4,s}$ log Fano or Mori dream spaces? \end{question} The methods of this paper could in principle be applied to study these two questions; it would be interesting to know whether they can be carried out successfully in these cases.
Finally, in higher dimensions, the remaining open cases are the following: \begin{question}\label{question3} For $n \geq 5$, are $X_{1,n,n+2}$ and $X_{1,n,n+3}$ log Fano or Mori dream spaces? \end{question}
\end{document}
\begin{document}
\title[Sectional category and The Fixed Point Property] {Sectional category and The Fixed Point Property \\ }
\author{Cesar A. Ipanaque Zapata} \address{Departamento de Matem\'{a}tica,UNIVERSIDADE DE S\~{A}O PAULO INSTITUTO DE CI\^{E}NCIAS MATEM\'{A}TICAS E DE COMPUTA\c{C}\~{A}O - USP , Avenida Trabalhador S\~{a}o-carlense, 400 - Centro CEP: 13566-590 - S\~{a}o Carlos - SP, Brasil}
\curraddr{Departamento de Matem\'{a}ticas, CENTRO DE INVESTIGACI\'{O}N Y DE ESTUDIOS AVANZADOS DEL I. P. N. Av. Instituto Polit\'{e}cnico Nacional n\'{u}mero 2508, San Pedro Zacatenco, Mexico City 07000, M\'{e}xico} \email{[email protected]}
\author{Jes\'{u}s Gonz\'{a}lez} \address{Departamento de Matem\'{a}ticas, CENTRO DE INVESTIGACI\'{O}N Y DE ESTUDIOS AVANZADOS DEL I. P. N. Av. Instituto Polit\'{e}cnico Nacional n\'{u}mero 2508, San Pedro Zacatenco, Mexico City 07000, M\'{e}xico} \email{[email protected]}
\subjclass[2010]{Primary 55M20, 55R80, 55M30; Secondary 68T40}
\keywords{Fixed point property, Configuration spaces, Sectional category, Motion planning problem} \thanks {The first author would like to thank grant\#2018/23678-6, S\~{a}o Paulo Research Foundation (FAPESP) for financial support.}
\begin{abstract} For a Hausdorff space $X$, we exhibit an unexpected connection between the sectional number of the Fadell-Neuwirth fibration $\pi_{2,1}^X:F(X,2)\to X$, and the fixed point property (FPP) for self-maps on $X$. Explicitly, we demonstrate that a space $X$ has the FPP if and only if 2 is the minimal cardinality of open covers $\{U_i\}$ of $X$ such that each $U_i$ admits a continuous local section for $\pi_{2,1}^X$. This characterization connects a standard problem in fixed point theory to current research trends in topological robotics. \end{abstract}
\maketitle
\section{Introduction, outline and main results}
A topological theory of motion planning was initiated in~\cite{farber2003topological}. As a result, Farber's topological complexity of the space of states of an autonomous agent and, more generally, the sectional number of a map are numerical invariants appearing naturally in the emerging field of topological robotics (see \cite{pavesic} or \cite{pavesic2019}).
Let $X$ be a topological space and $k\geq 1$. The ordered configuration space of $k$ distinct points on $X$ (see \cite{fadell1962configuration}) is the topological space \[F(X,k)=\{(x_1,\ldots,x_k)\in X^k\mid x_i\neq x_j\text{ whenever } i\neq j \},\] topologised as a subspace of the Cartesian power $X^k$. For $k\geq r\geq 1,$ there is a natural projection $\pi_{k,r}^X\colon F(X,k) \to F(X,r)$ given by $\pi_{k,r}^X(x_1,\ldots,x_r,\ldots,x_k)=(x_1,\ldots,x_r)$.
The study of the sectional number and the topological complexity of the map $\pi_{k,r}^X$ has not been carried out before and, in fact, this work takes a first step in this direction. Several examples are presented to illustrate the results arising in this field.
In more detail, a topological space $X$ has \textit{the fixed point property} (FPP) if, for every continuous self-map $f$ of $X$, there is a point $x$ of $X$ such that $f(x)=x$. We address the natural question of whether (and how) the FPP can be characterized in the category of Hausdorff spaces and continuous maps. Such characterizations are known in smaller, more restrictive categories. For instance, Fadell proved in 1969 (see \cite{fadell1970} for references) that, in the category of connected compact metric ANRs: \begin{itemize}
\item If $X$ is a Wecken space, then $X$ has the FPP if and only if $N(f)\neq 0$ for every self-map $f:X\to X$.
\item If $X$ is a Wecken space satisfying the Jiang condition, $J(X)=\pi_1(X)$, then $X$ has the FPP if and only if $L(f)\neq 0$ for every self-map $f:X\to X$. \end{itemize}
In this work we characterize the FPP within the category of Hausdorff spaces, and in terms of sectional number. Indeed, we demonstrate that a space $X$ has the FPP if and only if the sectional number $sec\hspace{.1mm}(\pi_{2,1}^X)$ equals 2 (Theorem \ref{characterizacao-ppf}). As a result, we give an alternative proof of the fact that the real projective plane has the FPP (Example \ref{rp2}).
As shown in Section \ref{kr-robot}, a particularly interesting feature of our characterization comes from its connection to current research trends in topological robotics.
On the other hand, the study of the Nielsen root number and the minimal root number for the map $\pi_{k,r}^X$ has likewise not been carried out before. This problem belongs to the so-called \emph{unstable case} in the general problem of coincidence theory (see \cite[Section~7]{goncalves2005}). We provide conditions in terms of the minimal root number of $\pi_{2,1}^X$ for $X$ to have the FPP (Proposition \ref{conditions}). In addition, we prove that the Nielsen root number $NR(\pi_{k,r}^X,a)$ is at most one (Proposition \ref{nrn-pi}).
The paper is organized as follows: In Section \ref{secrt}, we recall the notions of the minimal root number $MR[f,a]$ and the Nielsen root number $NR(f,a)$. In Section \ref{sn}, we recall the notion of Schwarz genus, the standard sectional number and basic results about these numerical invariants. Our goal is to study the sectional number of the projection map $\pi_{k,r}^X$. In particular, we demonstrate that a space $X$ has the FPP if and only if the sectional number $sec\hspace{.1mm}(\pi_{2,1}^X)$ equals 2 (Theorem \ref{characterizacao-ppf}). In Section \ref{tc-map}, we recall the notion of topological complexity of a map and basic results about this numerical invariant. As an application of our results, in Section \ref{kr-robot}, we study a particular problem in robotics.
The authors of this paper deeply thank the referee for very valuable comments and timely corrections on previous versions of the work.
\section{Root theory}\label{secrt}
In this section we give a brief exposition of standard topics in root theory: the minimal root number and the Nielsen root number $NR(f,a)$. Our exposition is by no means complete, as we limit our attention to concepts that appear in geometrical and topological questions. More technical details can be found in standard works on root theory, such as \cite{brooks1970number} or \cite{brown1999middle}.
Let $f:X\to Y$ be a continuous map between topological spaces, and fix $a\in Y$. A point $x\in X$ such that $f(x)=a$ is called a \textit{root} of $f$ at $a$.
In Nielsen root theory, by analogy with Nielsen fixed-point theory, the roots of $f$ at $a$ are grouped into Nielsen classes, a notion of essentiality is defined, and the Nielsen root number $NR(f,a)$ is defined to be the number of essential root classes. The Nielsen root number is a homotopy invariant and measures the size of the root set in the sense that $$NR(f,a)\leq MR[f,a]:=\min\{\mid g^{-1}(a)\mid\hspace{.2mm}\colon~~g\simeq f\}.$$ The number $MR[f,a]$ is called \textit{the minimal root number} for $f$ at $a$. A classical result of Wecken states that $NR(f,a)$ is in fact a sharp lower bound in the homotopy class of $f$ for many spaces, in particular, for compact manifolds of dimension at least $3$. Thus, in such cases, the vanishing of $NR(f,a)$ is sufficient to deform a map $f$ to be root-free. Among the central problems in Nielsen root theory (or the theory of root classes) are: \begin{itemize}
\item the computation of $NR(f,a)$,
\item the realization of $NR(f,a)$, i.e., deciding when the equality $NR(f,a)=MR[f,a]$ holds. \end{itemize}
\subsection{The Nielsen root number $NR(f,a)$} We recall from \cite{brooks1970number} the Nielsen root number $NR(f,a)$. Let $f:X\to Y$ be a continuous map between path-connected topological spaces, and choose a point $a\in Y$.
Assume that the set of roots $f^{-1}(a)$ is nonempty. Two such roots $x_0$ and $x_1$ are \textit{equivalent} if there is a path $\alpha:[0,1]\to X$ from $x_0$ to $x_1$ such that the loop $f\circ\alpha$ represents the trivial element in $\pi_1(Y,a)$. This is indeed an equivalence relation, and an equivalence class is called a \textit{root class}.
Suppose $H:X\times [0,1]\to Y$ is a homotopy. Then a root $x_0\in H_0^{-1}(a)$ is said to be \textit{$H$-related} to a root $x_1\in H_1^{-1}(a)$ if and only if there is a path $\alpha:[0,1]\to X$ from $x_0$ to $x_1$ such that the loop $\beta:[0,1]\to Y,~\beta(t)=H(\alpha(t),t)$ represents the trivial element in $\pi_1(Y,a)$.
Note that a root $x_0$ of $f:X\to Y$ is equivalent to another root $x_1$ if and only if $x_0$ is related to $x_1$ by the constant homotopy at $f$.
A root $x_0\in f^{-1}(a)$ is said to be \textit{essential} if and only if for any homotopy $H:X\times [0,1]\to Y$ beginning at $f$, there is a root $x_1\in H_1^{-1}(a)$ to which $x_0$ is $H$-related. If one root in a root class is essential, then all other roots in that root class are essential too, and we say that the root class itself is \textit{essential}. The number of essential root classes is called the \textit{Nielsen root number} of $(f,a)$ and is denoted by $NR(f,a)$. The number $NR(f,a)$ is a lower bound for the number of solutions of $f(x)=a$. If $f^\prime$ is homotopic to $f$ then $NR(f,a)=NR(f^\prime,a)$. Furthermore, $NR(f,a)\leq MR[f,a]$.
The order of the cokernel of the fundamental group homomorphism $f_\#:\pi_1(X)\to \pi_1(Y)$ is denoted by $R(f)$, that is, \[R(f)=\left\lvert \dfrac{\pi_1(Y)}{f_\#(\pi_1(X))}\right\rvert;\] it depends only on the homotopy class of $f$. There are always at most $R(f)$ root classes of $f(x)=a$, in particular, $R(f)\geq NR(f,a)$.
\begin{example} If $f_\#:\pi_1(X)\to \pi_1(Y)$ is an epimorphism, $ NR(f,a)\leq 1$. In particular, if $Y$ is simply connected, then $ NR(f,a)\leq 1$. \end{example}
\section{Sectional number}\label{sn}
In this section we recall the notion of Schwarz genus together with basic results from \cite{schwarz1958genus} about this numerical invariant. Note that the notion of genus in Schwarz's paper \cite{schwarz1958genus} is given for a fibration. We shall follow the terminology in \cite{pavesic2019} and refer to this notion as the Schwarz genus of a continuous map. Also, we recall from \cite{pavesic2019} the notion of the standard sectional number.
Let $p:E\to B$ be a continuous map. A \textit{(homotopy) cross-section} or \textit{section} of $p$ is a (homotopy) right inverse of $p$, i.e., a map $s:B\to E$ such that $p\circ s = 1_B$ ($p\circ s \simeq 1_B$). Moreover, given a subspace $A\subset B$, a \textit{(homotopy) local section} of $p$ over $A$ is a (homotopy) section of the restriction map $p_|:p^{-1}(A)\to A$, i.e., a map $s:A\to E$ such that $p\circ s$ is (homotopic to) the inclusion $A\hookrightarrow B$.
We recall the following definitions. \begin{definition} \begin{enumerate}
\item The (standard) \textit{sectional number} of a continuous map $p\colon E\to B$, denoted $sec\hspace{.1mm}(p)$, is the minimal cardinality of an open cover of $B$ such that each element of the cover admits a continuous local section of $p$.
\item The \textit{sectional category} of $p$, also called the Schwarz genus of $p$ and denoted by $secat(p)$, is the minimal cardinality of an open cover of $B$ such that each element of the cover admits a continuous homotopy local section of $p$. \end{enumerate} \end{definition}
Note that $p$ is surjective whenever $sec\hspace{.1mm}(p)<\infty$. The corresponding assertion for $secat(p)$ may fail.
\begin{remark}\label{secat-sec} We have $secat(p)\leq sec\hspace{.1mm}(p)$. Furthermore, if $p$ is a fibration then $sec\hspace{.1mm}(p) = secat(p)$. \end{remark}
\begin{lemma}\label{prop-sectional-category}\cite{schwarz1958genus} Let $p:E\to B$ be a continuous map and $R$ be a commutative ring with unit. If there exist cohomology classes $\alpha_1,\ldots,\alpha_k\in H^\ast(B;R)$ with $p^\ast(\alpha_1)=\cdots=p^\ast(\alpha_k)=0$ and $\alpha_1\cup\cdots\cup \alpha_k\neq 0$, then $sec\hspace{.1mm}(p)\geq k+1$. \end{lemma}
A few observations worth keeping in mind in what follows are: \begin{itemize} \item If $B$ is path-connected (this case will appear in our work), we have that $\alpha\in H^\ast(B;R)$, $\alpha\neq 0$ with $p^\ast(\alpha)=0$ implies $\alpha\in \widetilde{H}^\ast(B;R)$. \item If $p:E\to B$ is a continuous map and $p_\ast:H_\ast(E;R)\to H_\ast(B;R)$ or $p_\#:\pi_\ast(E)\to \pi_\ast(B)$ is not surjective, then $sec\hspace{.1mm}(p)\geq 2$. \item Let $p:E\to B$ be a continuous map. If $p$ has a section $s:B\to E$, then $p\circ s=1_B$ and $s^\ast\circ p^\ast=1_{H^\ast(B;R)}$. In particular, $p^\ast:H^\ast(B;R)\to H^\ast(E;R)$ is a monomorphism. \end{itemize}
The following statement is well-known.
\begin{lemma}\label{pullback}\cite{schwarz1958genus} Let $p:E\to B$ be a continuous map. If the following square \begin{eqnarray*} \xymatrix{ E^\prime \ar[r]^{\,\,} \ar[d]_{p^\prime} & E \ar[d]^{p} & \\
B^\prime \ar[r]_{\,\, f} & B &} \end{eqnarray*} is a pullback, then $sec\hspace{.1mm}(p^\prime)\leq sec\hspace{.1mm}(p)$. \end{lemma}
We recall the pathspace construction from \cite[pg.~407]{hatcheralgebraic}. For a continuous map $f:X\to Y$, consider the space \begin{equation*} E_f=\{(x,\gamma)\in X\times PY\mid~\gamma(0)=f(x)\}. \end{equation*} The map \begin{equation*} \rho_f:E_f\to Y,~(x,\gamma)\mapsto \rho_f(x,\gamma)=\gamma(1), \end{equation*} is a fibration. Further, the projection onto the first coordinate $E_f\to X,~(x,\gamma)\mapsto x$ is a homotopy equivalence with homotopy inverse $c:X\to E_f$ given by $x\mapsto (x,\gamma_{f(x)})$, where $\gamma_{f(x)}$ is the constant path at $f(x)$. This factors an arbitrary map $f:X\to Y$ as the composition $X\stackrel{c}{\to} E_f\stackrel{\rho_f}{\to} Y$ of a homotopy equivalence and a fibration.
For convenience, we record the following standard properties: \begin{proposition}\label{secat-pf-equal-secat-f} \begin{enumerate}
\item For a continuous map $f:X\to Y$, ${secat}(\rho_f)= {secat}(f).$ \item If $f\simeq g$, then ${secat}(f)={secat}(g).$ \end{enumerate} \end{proposition}
Next, we recall the notion of LS category which, in our setting, is one greater than that given in \cite{cornea2003lusternik}. For example, the category of a contractible space is one.
\begin{definition} The \textit{Lusternik-Schnirelmann category} (LS category) or category of a topological space $X$, denoted cat$(X)$, is the least integer $m$ such that $X$ can be covered by $m$ open sets, all of which are contractible within $X$. \end{definition}
We have $\text{cat}(X)=1$ iff $X$ is contractible. The LS category is a homotopy invariant, i.e., if $X$ is homotopy equivalent to $Y$ (which we shall denote by $X\simeq Y$), then $\text{cat}(X)=\text{cat}(Y)$.
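As a simple illustration of the convention (not needed in what follows), we have $\text{cat}(S^n)=2$ for every $n\geq 1$: the two open sets
\[
U_1=S^n-\{N\},\qquad U_2=S^n-\{S\},
\]
where $N$ and $S$ denote the north and south poles, cover $S^n$ and each is contractible (hence contractible within $S^n$), while $\text{cat}(S^n)\geq 2$ because $S^n$ is not contractible.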
The following lemma generalizes Proposition 9.14 from \cite{cornea2003lusternik}.
\begin{lemma}\label{prop-secat-map}
Let $p:E\to B$ be a continuous map.
\begin{enumerate}
\item If $p$ is a fibration, then $sec\hspace{.1mm}(p)\leq \text{cat}(B)$. In particular, for any continuous map $f:X\to Y$, we have $secat(f)\leq \text{cat}(Y)$.
\item If $p$ is nulhomotopic, then $secat(p)=\text{cat}(B).$
\end{enumerate} \end{lemma} \begin{proof}
The first part of item $(1)$ was proved in \cite[Proposition 9.14]{cornea2003lusternik}. For the second part of item $(1)$, by Proposition~\ref{secat-pf-equal-secat-f} we have $secat(f)=secat(\rho_f)$, and thus $secat(f)\leq \text{cat}(Y)$.
Item $(2)$ follows easily: the inequality $\text{cat}(B)\leq secat(\rho_p)=secat(p)$ holds for any nulhomotopic map $p:E\to B$, and the reverse inequality $secat(p)\leq \text{cat}(B)$ is given by item $(1)$. \end{proof}
\subsection{Configuration spaces}\label{secconfespa}
Let $X$ be a topological space and $k\geq 1$. The \textit{ordered configuration space} of $k$ distinct points on $X$ (see \cite{fadell1962configuration}) is the topological space \[F(X,k)=\{(x_1,\ldots,x_k)\in X^k\mid ~~x_i\neq x_j\text{ whenever } i\neq j \},\] topologised as a subspace of the Cartesian power $X^k$.
For $k\geq r\geq 1,$ there is a natural projection $\pi_{k,r}^X\colon F(X,k) \to F(X,r)$ given by $\pi_{k,r}^X(x_1,\ldots,x_r,\ldots,x_k)=(x_1,\ldots,x_r)$.
\begin{lemma}[Fadell-Neuwirth fibration \cite{fadell1962configuration}] \label{TFN} Let $M$ be a connected $m$-dimensional topological manifold without boundary, where $m\geq 2$. For $k> r\geq 1$, the map $\pi_{k,r}^M:F(M,k)\to F(M,r)$ is a locally trivial bundle with fiber $F(M-Q_r, k-r)$, where $Q_r\subset M$ is a finite subset with $r$ elements. In particular, $\pi_{k,r}^M$ is a fibration. \end{lemma}
The boundary restriction in Lemma~\ref{TFN} is important, for $\pi_{k,r}^M:F(M,k)\to F(M,r)$ might fail to be a fibration if $M$ is a manifold with boundary. This can be seen by considering, for example, the manifold $\mathbb{D}^2$, with $k=2$ and $r=1$, for the fibre $\mathbb{D}^2-\{(0,0)\}$ is not homotopy equivalent to the fibre $\mathbb{D}^2-\{(1,0)\}$.
\begin{proposition}\label{nrn-pi} Let $M$ be a connected $m$-dimensional topological manifold without boundary, where $m\geq 2$. For $k> r\geq 1$, the projection $\pi_{k,r}^M:F(M,k)\to F(M,r)$ has Nielsen root number $NR(\pi_{k,r}^M,a)\leq 1$ for any $a\in F(M,r)$. \end{proposition} \begin{proof} The map $\pi_{k,r}^M:F(M,k)\to F(M,r)$ is a fibration with fiber $F(M-Q_r, k-r)$. We note that $F(M-Q_r, k-r)$ is path-connected. By the long exact homotopy sequence of the fibration $\pi_{k,r}^M$, the induced homomorphism $(\pi_{k,r}^M)_\#:\pi_1F(M,k)\to \pi_1F(M,r)$ is an epimorphism. Then $R(\pi_{k,r}^M)=1$, and thus the Nielsen root number satisfies $NR(\pi_{k,r}^M,a)\leq 1$ for any $a\in F(M,r)$. \end{proof}
Note that $MR[\pi_{k,1}^X,a]=0$ (in particular $NR(\pi_{k,1}^X,a)=0$) for any contractible space $X$.
\begin{proposition}[Key Lemma]\label{secop-pi-k-1}
For any $k\geq 2$ and any Hausdorff space $X$, we have $sec\hspace{.1mm}(\pi_{k,1}^X)\leq k$. \end{proposition} \begin{proof} Fix an element $(p_1,\ldots,p_k)\in F(X,k)$. For each $i=1,\ldots,k$, set \[U_i:=X-\{p_1,\ldots,p_{i-1},p_{i+1},\ldots,p_k\}\] and let $s_i:U_i\longrightarrow F(X,k)$ be given by $s_i(x):=(x,p_1,\ldots,p_{i-1},p_{i+1},\ldots,p_k)$. We note that each $U_i$ is open (because $X$ is Hausdorff) and each $s_i$ is a local section of $\pi_{k,1}^X$. Furthermore, $X=U_1\cup\cdots\cup U_k$. Thus, $sec\hspace{.1mm}(\pi_{k,1}^X)\leq k.$ \end{proof}
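For concreteness, and assuming only that $X$ contains two distinct points $p_1\neq p_2$ (so that $F(X,2)\neq\emptyset$), the case $k=2$ of the construction above reads as follows:
\[
U_1=X-\{p_2\},\quad s_1(x)=(x,p_2),\qquad\text{and}\qquad U_2=X-\{p_1\},\quad s_2(x)=(x,p_1),
\]
so that $X=U_1\cup U_2$ and $sec\hspace{.1mm}(\pi_{2,1}^X)\leq 2$. This is the bound used in Theorem \ref{characterizacao-ppf} below.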
\begin{remark} Using Lemma~\ref{prop-secat-map} we see that, for any $k\geq2$, \begin{equation}\label{dosconds} \mbox{$\pi^X_{k,1}\simeq\star\;\;$ and $\;\;secat\hspace{.1mm}(\pi_{k,1}^X)=1\;\;$ if and only if $\;\;X\simeq\star$.} \end{equation}
The most appealing situation of (\ref{dosconds}) holds for $k=2$, as in fact $sec\hspace{.1mm}(\pi_{2,1}^X)\in\{1,2\}$, in view of Proposition~\ref{secop-pi-k-1}. Indeed, it would be interesting to know whether there is a space $X$ for which $\pi^X_{2,1}$ is a nulhomotopic fibration having $sec\hspace{.1mm}(\pi_{2,1}^X)=2$. Such a space would have to be a non-contractible co-H-space of topological complexity 2 or 3 (see Proposition~\ref{nul-homotopy-implie-cat2} and Remark~\ref{aaaaa}) and, more relevantly for the purposes of this paper, would have to satisfy the fixed point property (see Definition~\ref{defifpp} and Theorem~\ref{characterizacao-ppf} below). \end{remark}
\begin{definition}\label{defifpp} A topological space $X$ has \textit{the fixed point property} (FPP) if for every continuous self-map $f$ of $X$ there is a point $x$ of $X$ such that $f(x)=x$. \end{definition}
\begin{example} It is well known that the unit disc $D^m=\{x\in\mathbb{R}^m:~\parallel x\parallel\leq 1\}$ has the FPP (Brouwer's fixed point theorem). The real, complex and quaternionic projective spaces $\mathbb{RP}^{n}$, $\mathbb{CP}^{n}$ and $\mathbb{HP}^{n}$ have the FPP when $n$ is even (see \cite{hatcheralgebraic}). For the particular case $\mathbb{RP}^{2}$, see Example \ref{rp2}. \end{example}
Note that the map $\pi_{2,1}^X:F(X,2)\to X$ admits a cross-section if and only if there exists a fixed point free self-map $f:X\to X$. Thus, we have the following theorem. \begin{theorem}[Main Theorem]\label{characterizacao-ppf} Let $X$ be a Hausdorff space. The space $X$ has the FPP if and only if $sec\hspace{.1mm}(\pi_{2,1}^X)=2$. \end{theorem} \begin{proof} Suppose that $X$ has the FPP; then $sec\hspace{.1mm}(\pi^{X}_{2,1})\geq 2$ and, by the Key Lemma (Proposition \ref{secop-pi-k-1}), $sec\hspace{.1mm}(\pi_{2,1}^X)=2$. Suppose now that $sec\hspace{.1mm}(\pi_{2,1}^X)=2$; in particular $sec\hspace{.1mm}(\pi_{2,1}^X)\neq 1$, so $\pi_{2,1}^X$ admits no cross-section and hence $X$ has the FPP. \end{proof}
\begin{example} No nontrivial topological group $G$ has the FPP. Indeed, the map $s:G\to F(G,2),~g\mapsto (g,g_1g)$ (for some fixed $g_1\neq e\in G$) is a cross-section for $\pi_{2,1}^G:F(G,2)\to G$. The self-map $G\to G,~g\mapsto g_1g$ is fixed point free. \end{example}
\begin{example} We recall that the odd-dimensional projective spaces $\mathbb{RP}^{2n+1}$ do not have the FPP, because the continuous self-map $h:\mathbb{RP}^{2n+1}\to \mathbb{RP}^{2n+1}$ given by the formula $h([x_1:y_1:\cdots:x_{n+1}:y_{n+1}])=[-y_1:x_1:\cdots:-y_{n+1}:x_{n+1}]$ has no fixed points. Thus, $sec\hspace{.1mm}(\pi_{2,1}^{\mathbb{RP}^{2n+1}})=1.$
On the other hand, we know that the even-dimensional projective spaces $\mathbb{RP}^{2n}$ have the FPP. Thus, $sec\hspace{.1mm}(\pi_{2,1}^{\mathbb{RP}^{2n}})=2$. Analogous facts hold for complex and quaternionic projective spaces. \end{example}
\begin{example} The spheres $S^n$ do not have the FPP, because the antipodal map $A:S^n\to S^n,~x\mapsto -x$ has no fixed points. Thus, $sec\hspace{.1mm}(\pi_{2,1}^{S^n})=1.$ \end{example}
\begin{example} We know that any closed surface $\Sigma$ other than the projective plane, $\Sigma\neq\mathbb{RP}^2$, does not have the FPP. Thus, $sec\hspace{.1mm}(\pi_{2,1}^{\Sigma})=1.$ \end{example}
\begin{corollary}\label{suficiente-ppf} Let $X$ be a Hausdorff space. If there exists $\alpha\in H^\ast(X;R)$ with $\alpha\neq 0$ and $(\pi_{2,1}^X)^\ast(\alpha)=0\in H^\ast(F(X,2);R)$, that is, if the induced homomorphism $(\pi_{2,1}^X)^\ast:H^\ast(X;R)\to H^\ast(F(X,2);R)$ is not injective, then $sec\hspace{.1mm}(\pi_{2,1}^X)= 2$. In particular, $X$ has the FPP. \end{corollary} \begin{proof} From Lemma \ref{prop-sectional-category}, $sec\hspace{.1mm}(\pi_{2,1}^X)\geq 1+1=2$. Then, by Proposition \ref{secop-pi-k-1}, $sec\hspace{.1mm}(\pi_{2,1}^X)= 2$. Thus, the result follows from Theorem \ref{characterizacao-ppf}. \end{proof}
The converse of Corollary \ref{suficiente-ppf} is not true. For example, we recall that the unit disc $D^{m}:=\{x\in \mathbb{R}^m\mid~\parallel x\parallel\leq 1\}$ has the FPP (by Brouwer's fixed point theorem) and thus $sec\hspace{.1mm}(\pi_{2,1}^{D^{m}})=2.$ However, $\widetilde{H}^\ast(D^{m};R)=0$.
\begin{corollary} Let $X$ be a Hausdorff space. If the induced homomorphism $(\pi_{2,1}^X)_\ast:H_\ast(F(X,2);R)\to H_\ast(X;R)$ or $(\pi_{2,1}^X)_\#:\pi_\ast(F(X,2))\to \pi_\ast(X)$ is not surjective, then $sec\hspace{.1mm}(\pi_{2,1}^X)= 2$. In particular, $X$ has the FPP. \end{corollary}
\begin{example}\label{rp2} It is easy to see that $\pi_2(F(\mathbb{RP}^2,2))=0$ is trivial and $\pi_2(\mathbb{RP}^2)=\mathbb{Z}$. Then the induced homomorphism $(\pi_{2,1}^{\mathbb{RP}^2})_\#:\pi_2(F(\mathbb{RP}^2,2))\to \pi_2(\mathbb{RP}^2)$ is not surjective, and thus $sec\hspace{.1mm}(\pi_{2,1}^{\mathbb{RP}^2})= 2$. In particular, $\mathbb{RP}^2$ has the FPP. This fact can also be proved by employing Lefschetz's fixed point theorem. \end{example}
\begin{remark} For $k\geq l\geq r$, consider the following diagram \begin{eqnarray*}
\xymatrix{ F(X,k) \ar[r]^{\pi_{k,l}^X}\ar[d]_{\pi_{k,r}^X} & F(X,l)\ar[dl]^{\pi_{l,r}^X} \\
F(X,r) } \end{eqnarray*}
It is easy to see that if $\pi_{l,r}^X\simeq \mathrm{const}$, then $\pi_{k,r}^X\simeq \mathrm{const}$ for any $k\geq l\geq r$. Moreover, we have $MR[\pi_{l,r}^X,a]\geq MR[\pi_{k,r}^X,a] \text{ for any } k\geq l\geq r$. \end{remark}
\begin{proposition}\label{conditions} Let $X$ be a connected CW complex with $MR[\pi_{2,1}^X,x_0]=0$. Assume that there exists $\alpha\in \widetilde{H}^\ast(X;R)$ with $\alpha\neq 0$ and $i^\ast(\alpha)=0\in \widetilde{H}^\ast(X-\{x_0\};R)$ for some $x_0\in X$, that is, $i^\ast:\widetilde{H}^\ast(X;R)\to \widetilde{H}^\ast(X-\{x_0\};R)$ is not injective, where $i:X-\{x_0\}\hookrightarrow X$ is the inclusion map.
Then $sec\hspace{.1mm}(\pi_{2,1}^X)=2$. In particular, $X$ has the FPP. \end{proposition} \begin{proof}
Since $MR[\pi_{2,1}^X,x_0]=0$, there exists a continuous map $\varphi:F(X,2)\to X$ such that $\varphi^{-1}(x_0)=\emptyset$ and $\varphi\simeq\pi_{2,1}^X$. We have the homotopy commutative diagram \begin{eqnarray*}
\xymatrix{ F(X,2) \ar[r]^{\pi_{2,1}^X}\ar[d]_{\varphi} & X\\
X-\{x_0\}.\ar[ur]^{i} } \end{eqnarray*}
The fact that $\pi_{2,1}^X\simeq i\circ\varphi$ implies $\varphi^\ast\circ i^\ast=(\pi_{2,1}^X)^\ast$. In particular, $(\pi_{2,1}^X)^\ast(\alpha)=\varphi^\ast\circ i^\ast(\alpha)=0$. Therefore, there exists $\alpha\in \widetilde{H}^\ast(X;R)$ with $\alpha\neq 0$ and $(\pi_{2,1}^X)^\ast(\alpha)=0\in \widetilde{H}^\ast(F(X,2);R)$, and thus $sec\hspace{.1mm}(\pi_{2,1}^X)=2$ by Corollary \ref{suficiente-ppf}. \end{proof}
\begin{example}
For $\pi_{2,1}^{S^2\vee S^1}:F(S^2\vee S^1,2)\to S^2\vee S^1$, we have $MR[\pi_{2,1}^{S^2\vee S^1},x_0]\geq 1$ for any $x_0\in S^2\vee S^1$. Indeed, we show below that $sec\hspace{.1mm}(\pi_{2,1}^{S^2\vee S^1})=1$. Also, there exists $\alpha\in \widetilde{H}^1(S^2\vee S^1;R)$ with $\alpha\neq 0$ and $i^\ast(\alpha)=0\in \widetilde{H}^1(S^2;R)$, so Proposition~\ref{conditions} yields $MR[\pi_{2,1}^{S^2\vee S^1},x_0]\neq 0$, as asserted. Now, in order to construct a cross-section for $\pi_{2,1}^{S^2\vee S^1}$, it suffices to exhibit a self-map $f\colon S^2\vee S^1\to S^2\vee S^1$ with no fixed points. Think of $S^2\vee S^1$ as $\left(S^2\times \{b_0\}\right)\cup \left(\{a_0\}\times S^1\right)$, where $a_0=(1,0,0)$ and $b_0=(1,0)$. Then the required map $f$ is given by the formulae
\begin{align*}
f(a,b_0) &= (a_0,\gamma(a_1)), \mbox{ for any $a=(a_1,a_2,a_3)\in S^2$, and}\\
f(a_0,b) &= (a_0,-b), \text{ for any } b\in S^1,
\end{align*} where $\gamma\colon [-1,1]\to S^1$ is a path in $S^1$ from $b_0$ to $-b_0$. \end{example}
We next relate our results to Farber's topological complexity, a homotopy invariant of $X$ introduced in \cite{farber2003topological}. Let $PX$ denote the space of all continuous paths $\gamma: [0,1] \to X$ in $X$ and $e_{0,1}: PX \to X \times X$ denote the map associating to any path $\gamma\in PX$ the pair of its initial and end points, i.e., $e_{0,1}(\gamma)=(\gamma(0),\gamma(1))$. Equip the path space $PX$ with the compact-open topology.
\begin{definition}\cite{farber2003topological} The \textit{topological complexity} of a path-connected space $X$, denoted by TC$(X)$, is the least integer $m$ such that the cartesian product $X\times X$ can be covered with $m$ open subsets $U_i$ such that, for any $i = 1, 2, \ldots, m$, there exists a continuous local section $s_i:U_i \to PX$ of $e_{0,1}$, that is, $e_{0,1}\circ s_i = id$ over $U_i$. If no such $m$ exists, we set TC$(X)=\infty$. \end{definition}
We have $\text{TC}(X)=1$ if and only if $X$ is contractible. The TC is a homotopy invariant, i.e., if $X\simeq Y$ then $\text{TC}(X)=\text{TC}(Y)$. Moreover, $\text{cat}(X)\leq \text{TC}(X)\leq 2\text{cat}(X)-1$ for any path-connected CW complex $X$.
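To illustrate the definition, we recall the standard example from \cite{farber2003topological} (included here only for convenience): $\text{TC}(S^1)=2$. Indeed, the open sets
\[
U_1=\{(x,y)\in S^1\times S^1:\ y\neq -x\},\qquad U_2=\{(x,y)\in S^1\times S^1:\ y\neq x\}
\]
cover $S^1\times S^1$; over $U_1$ one moves from $x$ to $y$ along the unique shortest geodesic arc, while over $U_2$ one moves from $x$ to $y$ counterclockwise. Since $S^1$ is not contractible, $\text{TC}(S^1)\geq 2$.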
\begin{proposition}\label{nul-homotopy-implie-cat2} Let $X$ be a non-contractible path-connected CW complex. If $\pi_{2,1}^X\simeq x_0$ for some $x_0\in X$, then $X-\{x_0\}$ is contractible in $X$. Furthermore, $\text{cat}(X)=2$, $\text{TC}(X)\in\{2,3\}$ and $sec(\pi_{2,1}^X)=secat(\pi_{2,1}^X)=2$. \end{proposition} \begin{proof}
Let $H:F(X,2)\times [0,1]\to X$ be a homotopy between $\pi_{2,1}^X$ and $x_0$. Set $G:(X-\{x_0\})\times [0,1]\to X$ given by the formula $G(x,t)=H((x,x_0),t)$. We have $G(x,0)=x$ and $G(x,1)=x_0$ for any $x\in X-\{x_0\}$. Thus $X-\{x_0\}$ is contractible in $X$. This obviously yields cat$(X)=2$, as well as $2\leq\text{TC}(X)\leq3$. Furthermore, we have $2\geq sec(\pi_{2,1}^X)\geq secat(\pi_{2,1}^X)=\text{cat}(X)=2$.
\end{proof}
\begin{remark}\label{aaaaa} It is well known that $\text{cat}(X)=2$ corresponds to the case in which $X$ is a co-H-space. This is a large class of spaces including all suspensions. In addition, there are well-known examples of co-H-spaces that are not suspensions. In particular, a potential space satisfying the hypothesis in Proposition~\ref{nul-homotopy-implie-cat2} must be a co-H-space of topological complexity 2 or 3 and would have to satisfy the fixed point property, but cannot be a closed smooth manifold. This last condition follows from the positive solution to the topological Poincaré conjecture. \end{remark}
\begin{remark}\cite{fadell1962configuration} Let $X$ be a topological space. The map $\pi_{k,1}^X:F(X,k)\longrightarrow X$ has a continuous section, i.e., $sec\hspace{.1mm}(\pi_{k,1}^X)=1$ if and only if there exist $k-1$ fixed point free continuous self-maps $f_2,\ldots,f_{k}:X\longrightarrow X$ which are non-coincident, that is, $f_i(x)\neq f_j(x)$ for any $i\neq j$ and $x\in X$. \end{remark}
\begin{example}
Let $G$ be a topological group with cardinality $|G|\geq k$. Then $sec\hspace{.1mm}(\pi_{k,1}^G)=1$, because the map $s:G\to F(G,k),~g\mapsto (g,g_1g,\ldots,g_{k-1}g)$ is a cross section of $\pi_{k,1}^G$ (for some fixed $(g_1,\ldots,g_{k-1})\in F(G-\{e\},k-1)$). \end{example}
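To make the previous example concrete (a minimal illustration with $G=S^1\subset\mathbb{C}$ and $k=3$), one may take $g_1=e^{2\pi i/3}$ and $g_2=e^{4\pi i/3}$; then
\[
s:S^1\to F(S^1,3),\qquad s(g)=(g,\,g_1g,\,g_2g),
\]
is a continuous cross section of $\pi_{3,1}^{S^1}$, because the three points $g$, $g_1g$ and $g_2g$ are pairwise distinct for every $g\in S^1$.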
\begin{example}\cite{fadell1962configuration} Let $M$ be a topological manifold without boundary and let $Q_m\subset M$ be a finite subset with $m$ elements. Then $sec\hspace{.1mm}(\pi_{k,1}^{M-Q_m})=1$ for any $m\geq 1$. \end{example}
\begin{proposition}\label{secop-pi-k-r} Let $X$ be a Hausdorff space. For any $k>r\geq 1$, we have $sec\hspace{.1mm}(\pi_{k,r}^X)\leq \binom{k}{r}$, where $\binom{k}{r}=\dfrac{k!}{r!(k-r)!}$ is the standard binomial coefficient. \end{proposition} \begin{proof}
Let $(p_1,\ldots,p_k)\in F(X,k)$ be a fixed $k$-tuple. Set $Q_k:=\{p_1,\ldots,p_k\}$ and, for each $I_r\subseteq Q_k$ with $\mid I_r\mid=r$, let $Q_{I_r}:=Q_k-I_r=\{p_{j_1},\ldots,p_{j_{k-r}}\}$, where $j_1<\cdots<j_{k-r}$. Set $U_{I_r}:=F(X-Q_{I_r},r)$, and let $s_{I_r}:U_{I_r}\to F(X,k)$ be given by $s_{I_r}(x_1,\ldots,x_r):=(x_1,\ldots,x_r,p_{j_1},\ldots,p_{j_{k-r}})$. We note that each $U_{I_r}$ is open in $F(X,r)$ (because $X$ is Hausdorff) and each $s_{I_r}$ is a local section of $\pi_{k,r}^X$. Furthermore, $F(X,r)=\bigcup_{I_r\subseteq Q_k,~\mid I_r\mid=r}U_{I_r}$. Then, $sec\hspace{.1mm}(\pi_{k,r}^X)\leq \binom{k}{r}$. \end{proof}
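For instance (an illustration of the proof just given, assuming $F(X,3)\neq\emptyset$, i.e., that $X$ has at least three points), for $k=3$ and $r=2$ one obtains, after fixing $(p_1,p_2,p_3)\in F(X,3)$, the open cover
\[
F(X,2)=F(X-\{p_3\},2)\cup F(X-\{p_2\},2)\cup F(X-\{p_1\},2),
\]
with local sections appending the missing point, e.g. $(x_1,x_2)\mapsto (x_1,x_2,p_3)$ on $F(X-\{p_3\},2)$; hence $sec\hspace{.1mm}(\pi_{3,2}^X)\leq\binom{3}{2}=3$.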
\begin{corollary}\label{general-case} Let $M$ be a connected topological manifold without boundary of dimension at least two. Then $sec\hspace{.1mm}(\pi_{k,r}^M)\leq \min\{\binom{k}{r},cat(F(M,r))\}$. \end{corollary}
\begin{example}\label{sec-contractible} Let $M$ be a contractible topological manifold without boundary of dimension at least two. Then $sec\hspace{.1mm}(\pi_{k,1}^M)\leq \min\{k,\text{cat}(M)\}=1$, since $\text{cat}(M)=1$. In particular, $M$ does not have the FPP. \end{example}
\begin{remark}\label{diagram-spheres} From the diagram \begin{eqnarray*}
\xymatrix{ F(X,k) \ar[r]^{\pi_{k,k-1}^X}\ar[d]_{\pi_{k,1}^X} & F(X,k-1)\ar[dl]^{\pi_{k-1,1}^X} \\
X } \end{eqnarray*}
it is clear that, if $\pi_{k,1}^X$ admits a section, then $\pi_{k-1,1}^X$ also admits a section. Thus $sec\hspace{.1mm}(\pi_{k,1}^X)=1$ implies $sec\hspace{.1mm}(\pi_{r,1}^X)=1$ for any $r\leq k$. Furthermore, when $X=S^d$, the corresponding diagram
\begin{eqnarray*}
\xymatrix{ F(S^d,k) \ar[r]^{\pi_{k,2}^{S^d}}\ar[d]_{\pi_{k,1}^{S^d}} & F(S^d,2)\ar[dl]^{\pi_{2,1}^{S^d}} \\
S^d } \end{eqnarray*}
and the fact that $\pi_{2,1}^{S^d}$ always admits a section imply that, if $\pi_{k,2}^{S^d}$ admits a section, then so does $\pi_{k,1}^{S^d}$. The converse is also true, i.e., if $\pi_{k,1}^{S^d}$ admits a section, then so does $\pi_{k,2}^{S^d}$ \cite{fadell1962configuration}. \end{remark}
\begin{proposition}\label{sec-even-spheres} If $k>2$ and $d$ is even, then $sec\hspace{.1mm}(\pi_{k,r}^{S^d})=cat(F(S^d,r))=2$ for $r\in\{1,2\}$. \end{proposition} \begin{proof} First, we show that $sec\hspace{.1mm}(\pi_{k,r}^{S^d})\geq 2$ for any even $d$, $k\geq 3$ and $r=1$ or $2$. By the above diagrams, it suffices to show that $sec\hspace{.1mm}(\pi_{3,1}^{S^d})\geq 2$ for $d$ even, that is, that $\pi_{3,1}^{S^d}$ does not admit a cross section. If a cross section existed, it would generate a map $f:S^d\to S^d$ such that $f(x)\neq x$ and $f(x)\neq -x$ for any $x\in S^d$. Indeed, suppose that $\pi_{3,1}^{S^d}$ admits a section; then $\pi_{3,2}^{S^d}$ admits a section (see the last part of Remark \ref{diagram-spheres}), say $s:F(S^d,2)\to F(S^d,3)$. Recall that a section $\sigma$ of $\pi_{2,1}^{S^d}$ is given by the formula $\sigma(x)=(x,-x)$ for any $x\in S^d$. Take $f=p_{3}\circ s\circ\sigma$, where $p_3$ is the projection onto the third coordinate. Since $f(x)\neq -x$ for every $x\in S^d$, it is easy to see that $f\simeq 1$, so $f$ has degree one and hence has fixed points, which is a contradiction (we recall that if $f:S^d\to S^d$ has no fixed points, then $f$ is homotopic to the antipodal map and has degree $(-1)^{d+1}$). Thus, $\pi_{3,1}^{S^d}$ does not admit a cross section.
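For the reader's convenience, here is a sketch of the degree argument used above (a standard fact, included only as an illustration). Since $f(x)\neq -x$ for all $x\in S^d$, the formula
\[
H(x,t)=\frac{(1-t)f(x)+tx}{\parallel (1-t)f(x)+tx\parallel}
\]
defines a homotopy from $f$ to the identity, so $\deg(f)=1$; since $f(x)\neq x$ for all $x\in S^d$, the analogous formula with $-x$ in place of $x$ defines a homotopy from $f$ to the antipodal map, so $\deg(f)=(-1)^{d+1}=-1$ because $d$ is even, a contradiction.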
From Corollary \ref{general-case}, $sec\hspace{.1mm}(\pi_{k,r}^{S^d})\leq \min\{\binom{k}{r},cat(F(S^d,r))\}$, and $cat(F(S^d,r))=cat(S^d)=2$ since $F(S^d,2)\simeq S^d$. Then $sec\hspace{.1mm}(\pi_{k,r}^{S^d})=2$ for any $k\geq 3$ and $r\in \{1,2\}$ ($d$ even). \end{proof}
\begin{proposition}\label{prop-prop} \begin{enumerate}
\item If $L$ is a deformation retract of $X$, then $secat(\pi_{k,1}^L)\geq secat(\pi_{k,1}^X)$.
\item \cite{fadell1962configuration} If $M$ is a smooth manifold and admits a non-vanishing vector field, then $sec\hspace{.1mm}(\pi_{k,1}^M)=1$ for every $k$. \end{enumerate} \end{proposition} \begin{proof}
$(1)$ Let $r:X\to L$ be a deformation retraction, i.e., $r\circ i=1_L$ and $i\circ r\simeq 1_X$, where $i:L\to X$ is the inclusion map. We have the following commutative diagram
\begin{eqnarray*}
\xymatrix{ F(L,k) \ar[r]^{i^k}\ar[d]_{\pi_{k,1}^L} & F(X,k)\ar[d]^{\pi_{k,1}^X} \\
L \ar[r]^{i} & X } \end{eqnarray*}
Suppose $U\subset L$ is an open set of $L$ with a homotopy local section $s:U\to F(L,k)$ of $\pi_{k,1}^L$. Set $V=r^{-1}(U)\subset X$ and consider $\sigma:V\to F(X,k)$ given by $\sigma=i^k\circ s\circ r$. \[ \xymatrix{ r^{-1}(U) \ar[r]^{r}\ar@{->}@/_20pt/[rrr]_{\sigma} & U \ar[r]^{s} & F(L,k) \ar[r]^{i^k} & F(X,k)} \]
We have that $\sigma$ is a homotopy local section of $\pi_{k,1}^X$. Therefore, $secat(\pi_{k,1}^L)\geq secat(\pi_{k,1}^X)$. \end{proof}
From \cite[Theorem 5-(b)]{fadell1962configuration} we have that if $L\subset X$ is a retract and $\pi_{k,1}^L$ admits a cross-section, then $\pi_{k,1}^X$ admits a cross-section. The statement of Proposition \ref{prop-prop}(1) does not hold when $L$ is merely a retract. For example, take $X=S^d$ (with $d\geq 2$ even) and $L=S^d_{-}=\{(x_1,\ldots,x_{d+1})\in S^d:~~x_{d+1}\leq 0\}$. Note that $S^d_{-}$ is a retract of $S^d$; a retraction is given by $r:S^d\to S^d_{-}$, $r(x)=x$ if $x\in S^d_{-}$ and $r(x)=(x_1,\ldots,x_d,-x_{d+1})$ if $x_{d+1}\geq 0$. We have that $S^d_{-}$ is contractible, indeed it is homeomorphic to the $d$-dimensional closed unit disc $\mathbb{D}^d$, and thus $secat(\pi_{k,1}^L)=1$. Here we consider $k>2$, and thus $secat(\pi_{k,1}^X)=2$ (see Proposition \ref{sec-even-spheres} and Remark \ref{secat-sec}).
\begin{corollary}\cite{fadell1962configuration} If $M$ is compact and the first Betti number of $M$ does not vanish, then $sec\hspace{.1mm}(\pi_{k,1}^M)=1$ for every $k$. \end{corollary}
\begin{corollary}\cite{fadell1962configuration}\label{sec-odd-manifolds} If $M$ is an odd-dimensional differentiable manifold, then $sec\hspace{.1mm}(\pi_{k,1}^M)=1$ for every $k$. \end{corollary}
\begin{corollary}\label{sec-odd-spheres} If $k>2$ and $d$ is odd, then $sec\hspace{.1mm}(\pi_{k,r}:F(S^d,k)\to F(S^d,r))=1$ for $r\in\{1,2\}$. \end{corollary} \begin{proof} This follows from Corollary \ref{sec-odd-manifolds} and the last part of Remark \ref{diagram-spheres}. \end{proof}
\section{Topological complexity of a map}\label{tc-map}
Recall that $PE$ denotes the space of all continuous paths $\gamma: [0,1] \longrightarrow E$ in $E$ and $e_{0,1}: PE \longrightarrow E \times E$ denotes the map associating to any path $\gamma\in PE$ the pair of its initial and end points, $e_{0,1}(\gamma)=(\gamma(0),\gamma(1))$. Equip the path space $PE$ with the compact-open topology.
Let $p:E\to B$ be a continuous map, and let $e_p:PE\to E\times B,~e_p=(1\times p)\circ e_{0,1}$.
\begin{definition} The \textit{topological complexity} of the map $p$, denoted by TC$(p)$, is the sectional number $sec\hspace{.1mm}(e_p)$ of the map $e_p$, that is, the least integer $m$ such that the cartesian product $E\times B$ can be covered with $m$ open subsets $U_i$ such that for any $i = 1, 2, \ldots , m$ there exists a continuous local section $s_i : U_i \longrightarrow PE$ of $e_{p}$, that is, $e_{p}\circ s_i = id$ over $U_i$. If no such $m$ exists, we set TC$(p)=\infty$. \end{definition}
We use a definition of topological complexity which, in general, is not the same as the one given in \cite{pavesic2019}. However, under certain conditions, these two definitions coincide (see \cite{pavesic2019}).
The proof of the following statement proceeds by analogy with \cite{pavesic2019}.
\begin{proposition} For a map $p:E\to B$, we have $\text{TC}(p)\geq\max\{ \text{cat}(B),sec\hspace{.1mm}(p)\}$. \end{proposition} \begin{proof} Let $U\subset E\times B$ be an open subset and $s:U\to PE$ be a partial section of $e_p$. Fix $x_0\in E$ and consider the inclusion $i_0:B\to E\times B$ given by $i_0(b)=(x_0,b)$. Set $V=i_0^{-1}(U)\subset B$; it is an open subset of $B$. Consider the map $H:V\times [0,1]\to B$ given by $H(b,t)=p(s(x_0,b)(t))$. It is easy to check that $H$ is a null-homotopy. We conclude that $\text{TC}(p)\geq \text{cat}(B)$.
On the other hand, consider the map $\sigma:V\to E$ defined by $\sigma(b)=s(x_0,b)(1)$. One can easily see that $\sigma$ is a partial section over $V$ to $p$. Therefore, $\text{TC}(p)\geq sec\hspace{.1mm}(p)$. \end{proof}
The proof of the following statement proceeds by analogy with \cite{pavesic2019}.
\begin{proposition}\label{section-ineq} Consider the diagram of maps $E^\prime\stackrel{p^\prime}{\to} E\stackrel{p}{\to} B\stackrel{p^{\prime\prime}}{\to}B^\prime$. If $p$ admits a section, then
\begin{itemize}
\item[$a$)] $\text{TC}(p^{\prime\prime})\leq \text{TC}(p^{\prime\prime} p)$.
\item[$b$)] $\text{TC}(p p^\prime)\leq \text{TC}(p^\prime)$.
\end{itemize}
In particular, $\text{TC}(B)\leq\text{TC}(p)\leq \text{TC}(E)$. \end{proposition} \begin{proof} Let $s:B\to E$ be a section to $p$.
$a)$ Suppose $\alpha_{p^{\prime\prime} p}:U\to PE$ is a partial section of $e_{p^{\prime\prime} p}$ over $U\subset E\times B^\prime$. Set $V:=(s\times 1_{B^\prime})^{-1}(U)\subset B\times B^\prime$. Then we can define the continuous map $\alpha_{p^{\prime\prime}}:V\to PB$ by \[\alpha_{p^{\prime\prime}}(b,b^\prime)(t):=\begin{cases}
b, & \hbox{for $0\leq t\leq \frac{1}{2}$;} \\
p(\alpha_{p^{\prime\prime} p}(s(b),b^\prime)(2t-1)), & \hbox{for $\frac{1}{2}\leq t\leq 1$.} \end{cases}\] Since $\alpha_{p^{\prime\prime}}$ is a partial section of $e_{p^{\prime\prime}}$ over $V$, we conclude that $\text{TC}(p^{\prime\prime})\leq \text{TC}(p^{\prime\prime} p)$.
$b)$ Let $\alpha_{p^\prime}:U\to PE^\prime$ be a partial section to $e_{p^\prime}:PE^\prime\to E^\prime\times E$ over $U\subset E^\prime\times E$. Set $V:=(1_{E^\prime}\times s)^{-1}(U)\subset E^\prime\times B$ and define the continuous map $\alpha_{pp^\prime}:V\to PE^\prime$ given by $\alpha_{pp^\prime}(e^\prime,b):=\alpha_{p^\prime}(e^\prime,s(b))$. It follows that $\alpha_{pp^\prime}$ is a partial section of $e_{pp^\prime}$ over $V$. This implies $\text{TC}(p p^\prime)\leq \text{TC}(p^\prime)$. \end{proof}
\begin{theorem}\label{tc-implies-fpp} Let $X$ be a Hausdorff space. \begin{enumerate}
\item If $X$ has the FPP, then $\text{TC}(\pi_{k,1}^X)\geq \max\{ \text{cat}(X),2\}$ for any $k\geq 2$.
\item If $\text{TC}(\pi_{2,1}^X)<\text{TC}(X)$ or $\text{TC}(\pi_{2,1}^X)> \text{TC}(F(X,2))$, then $sec\hspace{.1mm}(\pi_{2,1}^X)=2$. In particular, $X$ has the FPP.
\item If $X$ is a non-contractible space which does not have the FPP, then the configuration space $F(X,2)$ is not contractible. \end{enumerate} \end{theorem} \begin{proof} $(1)$: We have $\text{TC}(\pi_{k,1}^X)\geq sec\hspace{.1mm}(\pi_{k,1}^X)\geq 2$, and also $\text{TC}(\pi_{k,1}^X)\geq \text{cat}(X)$ by the proposition above. We recall that $sec\hspace{.1mm}(\pi_{2,1}^X)=2$ implies $sec\hspace{.1mm}(\pi_{k,1}^X)\geq 2$ for any $k\geq 2$.
$(2)$: This follows from Proposition \ref{section-ineq}.
$(3)$: By Proposition \ref{section-ineq}, we have $1<\text{TC}(X)\leq \text{TC}(\pi_{2,1}^X)\leq \text{TC}(F(X,2))$ and thus $F(X,2)$ is not contractible. \end{proof}
Item (3) in Theorem \ref{tc-implies-fpp} gives a partial generalization of the work in~\cite{zapata2017non}.
\begin{example} We know that the unit disc $D^m$ has the FPP. Then $\text{TC}(\pi_{k,1}^{D^m})\geq 2$, for any $k\geq 2$. \end{example}
The following lemma generalizes the statement given in \cite[pg.~19]{pavesic2019}.
\begin{lemma}\label{general-pullback} If $p:E\to B$ is a fibration and $p^\prime:B\to B^\prime$ is a continuous map, then the following diagram is a pullback \begin{eqnarray*} \xymatrix{ PE \ar[r]^{\,\,p_{\#}} \ar[d]_{e_{p^\prime p}} & PB \ar[d]^{e_{p^\prime}} & \\
E\times B^\prime \ar[r]_{\,\, p\times 1_{B^\prime}} & B\times B^\prime &} \end{eqnarray*} \end{lemma} \begin{proof}
For any $\beta:X\to PB$ and any $\alpha:X\to E\times B^\prime$ satisfying $e_{p^\prime}\circ\beta=(p\times 1_{B^\prime})\circ\alpha$, we will check that there exists $H:X\to PE$ such that $e_{p^\prime\circ p}\circ H=\alpha$ and $p_{\#}\circ H=\beta$. \begin{eqnarray*} \xymatrix{ X \ar@/^10pt/[drr]^{\,\,\beta} \ar@/_10pt/[ddr]_{\alpha} \ar@{-->}[dr]_{H} & & &\\ & PE \ar[r]^{\,\,p_{\#}} \ar[d]^{e_{p^\prime\circ p}} & PB \ar[d]^{e_{p^\prime}} & \\
& E\times B^\prime \ar[r]_{\quad p\times 1_{B^\prime}\quad} & B\times B^{\prime} &} \end{eqnarray*} Indeed, note that we have the following commutative diagram: \begin{eqnarray*} \xymatrix{ X \ar[r]^{\,\,p_{1}\circ\alpha} \ar[d]_{i_0} & E \ar[d]^{p} \\
X\times I \ar[r]_{\,\,\beta} & B} \end{eqnarray*} where $p_1$ is the projection onto the first coordinate and we identify $\beta$ with its adjoint map $X\times I\to B$. Because $p$ is a fibration, there exists $H:X\times I\to E$ satisfying $H\circ i_0=p_1\circ\alpha$ and $p\circ H=\beta$; the adjoint map $X\to PE$ of $H$ is the required lift. \end{proof}
The following statement was proved in \cite{pavesic2019}; we give an elementary proof in our context.
\begin{proposition} If $p:E\to B$ is a fibration, then $\text{TC}(p^\prime p)\leq \text{TC}(p^\prime)$ for any $p^\prime:B\to B^\prime$. In particular, $\text{TC}(p)\leq \text{TC}(B)$. \end{proposition} \begin{proof}
Since $p:E\to B$ is a fibration, the following diagram is a pullback (see Lemma \ref{general-pullback}) \begin{eqnarray*} \xymatrix{ PE \ar[r]^{\,\,p_{\#}} \ar[d]_{e_{p^\prime p}} & PB \ar[d]^{e_{p^\prime}} & \\
E\times B^\prime \ar[r]_{\,\, p\times 1_{B^\prime}} & B\times B^\prime &} \end{eqnarray*} This implies $\text{TC}(p^\prime p)=sec\hspace{.1mm}(e_{p^\prime p})\leq sec\hspace{.1mm}(e_{p^\prime})= \text{TC}(p^\prime)$.
\end{proof}
\begin{corollary}
If $p:E\to B$ is a fibration that admits a section, then $\text{TC}(p)=\text{TC}(B)$. In particular, $\text{TC}(p)=1$ if and only if $B$ is contractible.
\end{corollary}
\section{The $(k,r)$ robot motion planning problem}\label{kr-robot}
In this section we apply the results above to a particular problem in robotics.
Recall that, in general terms, the \textit{configuration space} or \textit{state space} of a system $\mathcal{S}$ is defined as the space of all possible states of $\mathcal{S}$ (see \cite{latombe2012robot} or \cite{lavalle2006planning}). Investigation of the problem of simultaneous collision-free motion planning for a multi-robot system consisting of $k$ distinguishable robots, each with state space $X$, leads us to study the ordered configuration space $F(X,k)$ of $k$ distinct points on $X$. Recall the definition of the ordered configuration space $F(X,k)$ from Subsection~\ref{secconfespa}. Note that the $i$-th coordinate of a point $(x_1,\ldots,x_k)\in F(X,k)$ represents the configuration of the $i$-th moving object, so that the condition $x_i\neq x_j$ reflects the collision-free requirement.
\textit{The $(k,r)$ robot motion planning problem} consists in controlling simultaneously these $k$ robots without collisions, where one is interested in the initial positions of the $k$ robots and \textit{only interested in the final position of the first $r$ robots ($k\geq r$)} (see Figure \ref{fig1}).
\begin{figure}
\caption{The $(2,1)$ robot motion planning problem: we need to move Robots $1$ and $2$, simultaneously and avoiding collisions, from the initial positions $(a_1,a_2)$ to a final position $b_1$ of Robot $1$. We are only interested in the final position of the first robot.}
\label{fig1}
\end{figure}
\textit{An algorithm} for the $(k,r)$ robot motion planning problem is a function which assigns to any pair of configurations $(A,B)\in F(X,k)\times F(X,r)$ consisting of an initial state $A=(a_1,\ldots,a_k)\in F(X,k)$ and a desired state $B=(b_1,\ldots,b_r)\in F(X,r)$, a continuous motion of the system starting at the initial state $A$ and ending at the desired state $B$ (see Figure \ref{fig2}). \begin{figure}
\caption{An algorithm for the $(2,1)$ robot motion planning problem}
\label{fig2}
\end{figure}
The central problem of modern robotics, \textit{the motion planning problem}, consists of finding a motion planning algorithm.
We note that an algorithm for the $(k,r)$ robot motion planning problem is a (not necessarily continuous) section $s:F(X,k)\times F(X,r)\to PF(X,k)$ of the map $$e_{\pi_{k,r}^X}:PF(X,k)\to F(X,k)\times F(X,r),~e_{\pi_{k,r}^X}(\alpha)=(\alpha(0),\pi_{k,r}^X\alpha(1)),$$ where $\pi_{k,r}^X:F(X,k)\to F(X,r)$ is the projection onto the first $r$ coordinates.
A motion planning algorithm $s$ is called \textit{continuous} if the section $s$ is continuous. Absence of continuity will result in instability of the behavior of the motion planning. In general, there is no global continuous motion planning algorithm, and only local continuous motion plans may be found. This fact leads, in a natural way, to the use of the numerical invariant TC$(\pi_{k,r}^X)$. Recall that TC$(\pi_{k,r}^X)$ is the minimal number of \textit{continuous} local motion plans for $e_{\pi_{k,r}^X}$ (i.e., continuous local sections of $e_{\pi_{k,r}^X}$) which are needed to construct an algorithm for autonomous motion planning of the $(k,r)$ robot motion planning problem. Any motion planning algorithm $s:=\{s_i:U_i\to PF(X,k)\}_{i=1}^{n}$ is called \textit{optimal} if $n=\text{TC}(\pi_{k,r}^X)$.
\begin{theorem} Let $M$ be a connected topological manifold without boundary of dimension at least $2$, and let $\pi_{k,r}^M:F(M,k)\to F(M,r)$ be the Fadell-Neuwirth fibration. \begin{enumerate}
\item If $M$ does not have the FPP, then $\text{TC}(\pi_{2,1}^M)=\text{TC}(M).$ Hence the complexity of the $(2,1)$ robot motion planning problem is the same as the complexity of the manifold $M$. More generally, if $sec\hspace{.1mm}(\pi_{k,r}^M)=1$, then $\text{TC}(\pi_{k,r}^M)=\text{TC}(F(M,r)).$
\item If $M$ has the FPP, then $\max\{2,\text{cat}(M)\}\leq \text{TC}(\pi_{k,1}^M)\leq\text{TC}(M)$, for any $k\geq 2$. In particular, $M$ is not contractible. \end{enumerate} \end{theorem}
\begin{example} We recall that the $n$-dimensional sphere $S^n$ does not have the FPP. Then, $$\text{TC}(\pi_{2,1}^{S^n})=\text{TC}(S^n)=\begin{cases}
2, & \hbox{for $n$ odd;} \\
3, & \hbox{for $n$ even.} \end{cases}
$$
Furthermore, we have that any contractible topological manifold $M$ without boundary and of positive dimension does not have the FPP. Hence, $\text{TC}(\pi_{2,1}^M)=\text{TC}(M)=1$. \end{example}
\begin{example} \begin{itemize}
\item The odd-dimensional projective spaces $\mathbb{RP}^m$ do not have the FPP, hence $\text{TC}(\pi_{2,1}^{\mathbb{RP}^m})=\text{TC}(\mathbb{RP}^m)$. By \cite{farber2003topologicalproject}, the topological complexity $\text{TC}(\mathbb{RP}^m)$, for any $m\neq 1,3,7$, coincides with the smallest integer $k$ such that the projective space $\mathbb{RP}^m$ admits an immersion into $\mathbb{R}^{k-1}$.
\item It is known that any closed surface $\Sigma$ other than the projective plane, $\Sigma\neq\mathbb{RP}^2$, does not have the FPP. Thus, $\text{TC}(\pi_{2,1}^{\Sigma})=\text{TC}(\Sigma).$
\item We have that the projective plane $\mathbb{RP}^2$ has the FPP. Furthermore, it is well known $\text{cat}(\mathbb{RP}^2)=3$ and $\text{TC}(\mathbb{RP}^2)=4$ \cite{farber2003topologicalproject}. Then, $3=\text{cat}(\mathbb{RP}^2)\leq\text{TC}(\pi_{k,1}^{\mathbb{RP}^2})\leq \text{TC}(\mathbb{RP}^2)=4$, for $k\geq 2$.
\item For any connected compact Lie group $G$, the Fadell-Neuwirth fibration $$\pi_{k,k-1}^{G\times \mathbb{R}^m}:F(G\times \mathbb{R}^m,k)\to F(G\times \mathbb{R}^m,k-1)$$ admits a continuous section (for $m\geq 2$). Then $\text{TC}(\pi_{k,k-1}^{G\times \mathbb{R}^m})=\text{TC}(F(G\times \mathbb{R}^m,k-1))$. By \cite{zapata2019cat}, the topological complexity $\text{TC}(F(G\times \mathbb{R}^m,2))=2\text{TC}(G)$. Hence, $\text{TC}(\pi_{3,2}^{G\times \mathbb{R}^m})=2\text{TC}(G)=2\text{cat}(G)$.
\item Any connected Lie group $G$ does not have the FPP and satisfies $\text{cat}(G)=\text{TC}(G)$. Then, $\text{TC}(\pi_{2,1}^G)=\text{TC}(G)=\text{cat}(G)$. In general, $\text{TC}(\pi_{k,1}^G)=\text{TC}(G)=\text{cat}(G)$ for any $k\geq 2$. \end{itemize} \end{example}
\begin{example} \begin{itemize}
\item We have $sec\hspace{.1mm}(\pi_{k,r}^{S^d})=cat(F(S^d,r))=2$, for $k\geq 3$, $d$ even, and $r=1,2$. Then $2=sec\hspace{.1mm}(\pi_{k,r}^{S^d})\leq\text{TC}(\pi_{k,r}^{S^d})\leq \text{TC}(F(S^d,r))=\text{TC}(S^d)=3$.
\item For any $k\geq 2$, $d$ odd and $r\in\{1,2\}$, we have $sec\hspace{.1mm}(\pi_{k,r}^{S^d})=1.$ Hence, $\text{TC}(\pi_{k,r}^{S^d})=\text{TC}(F(S^d,r))=\text{TC}(S^d)=2$. \end{itemize} \end{example}
\begin{proposition}\cite{pavesic2019} Let $p:E\to B$ be a fibration between ANR spaces. Then \[\text{cat}(B)\leq\text{TC}(p)\leq\min\{ \text{cat}(E)+\text{cat}(E)sec\hspace{.1mm}(p)-1, \text{TC}(B),\text{cat}(E\times B)\}.\] In particular, $\text{TC}(p)=1$ if and only if $B$ is contractible. \end{proposition}
\begin{theorem} Let $M$ be a connected topological manifold without boundary of dimension at least $2$. If $M$ has the FPP, then $$\max\{2,\text{cat}(M)\}\leq\text{TC}(\pi_{2,1}^M)\leq \min\{ 3\text{cat}(F(M,2))-1, \text{TC}(M),\text{cat}(F(M,2)\times M)\}.$$ \end{theorem}
\end{document}
\begin{document}
\title{Higher indescribability and derived topologies}
\author[Brent Cody]{Brent Cody} \address[Brent Cody]{ Virginia Commonwealth University, Department of Mathematics and Applied Mathematics, 1015 Floyd Avenue, PO Box 842014, Richmond, Virginia 23284, United States } \email[B. ~Cody]{[email protected]} \urladdr{http://www.people.vcu.edu/~bmcody/}
\begin{abstract} We introduce reflection properties of cardinals in which the attributes that reflect are expressible by infinitary formulas whose lengths can be strictly larger than the cardinal under consideration. This kind of generalized reflection principle leads to the definitions of $L_{\kappa^+,\kappa^+}$-indescribability and $\Pi^1_\xi$-indescribability of a cardinal $\kappa$ for all $\xi<\kappa^+$. In this context, universal $\Pi^1_\xi$ formulas exist, there is a normal ideal associated to $\Pi^1_\xi$-indescribability and the notions of $\Pi^1_\xi$-indescribability yield a strict hierarchy below a measurable cardinal. Additionally, given a regular cardinal $\mu$, we introduce a diagonal version of Cantor's derivative operator and use it to extend Bagaria's \cite{MR3894041} sequence $\langle\tau_\xi:\xi<\mu\rangle$ of derived topologies on $\mu$ to $\langle\tau_\xi:\xi<\mu^+\rangle$. Finally, we prove that for all $\xi<\mu^+$, if there is a stationary set of $\alpha<\mu$ that have a high enough degree of indescribability, then there are stationarily-many $\alpha<\mu$ that are nonisolated points in the space $(\mu,\tau_{\xi+1})$.
\end{abstract}
\subjclass[2010]{Primary 03E55, 54A35; Secondary 03E05}
\keywords{Derived topology, diagonal Cantor derivative, indescribable cardinals, stationary reflection}
\maketitle
\tableofcontents
\section{Introduction}\label{section_introduction}
When working with certain large cardinals, set theorists often use reflection arguments. For example, if $\kappa$ is a measurable cardinal then it is inaccessible, and furthermore, there are normal measure one many $\alpha<\kappa$ which are inaccessible; we say that the inaccessibility of a measurable cardinal $\kappa$ \emph{reflects} below $\kappa$. In this article we consider generalizations of this kind of reflection so that we may reflect attributes of large cardinals that are expressible by formulas whose lengths can be strictly longer than the large cardinal under consideration. We will see that in many cases, if $\kappa$ is a measurable cardinal and $\kappa$ has some property, which is expressible by a formula $\varphi$ whose length is less than $\kappa^+$, then the set of $\alpha<\kappa$ such that a canonically defined \emph{restricted} version of this formula $\varphi\mathrm{|}^\kappa_\alpha$ is true of $\alpha$, is normal measure one. We use this kind of generalized reflection to define the $L_{\kappa^+,\kappa^+}$-indescribability and $\Pi^1_\xi$-indescribability of a cardinal $\kappa$ for all $\xi<\kappa^+$, thus generalizing the notions of indescribability previously considered in \cite{MR3894041}. Let us note that a precursor to this type of reflection principle was studied by Sharpe and Welch (see \cite[Definition 3.21]{MR2817562}). We then use our notion of $\Pi^1_\xi$-indescribability to establish the nondiscreteness of certain topological spaces which are generalizations of the derived topologies considered in \cite{MR3894041}, and which are defined by using a diagonal version of the Cantor derivative operator (see the definition of $\tau_\xi$ and $d_\xi$ in Section \ref{section_higher_derived_topologies} and see Remark \ref{remark_example} for a simple case).
We believe the results presented below will open up new avenues for future work in many directions. For example, in order to define the restriction of formulas (Definition \ref{definition_restriction} and Definition \ref{definition_restriction_2}) and then to establish basic properties of $\Pi^1_\xi$-indescribability, we introduce the \emph{canonical reflection functions} (see Definition \ref{definition_canonical_reflection_functions} and Section \ref{section_canonical_reflection_functions}), which are interesting in their own right and will likely have applications in areas far removed from this paper. We also expect that the notion of restriction of formulas defined below will have applications in the study of infinitary logics and model theory. Note that \cite{MR457191} and \cite{MR360274} both contain results involving a notion of restriction of $L_{\infty,\omega}$ formulas to countable sets; we suspect that these results, as well as other results in this area \cite{MR457191}, will have analogues involving our notion of restriction. Furthermore, let us note that the notion of higher $\Pi^1_\xi$-indescribability also allows for a finer analysis of the large cardinal hierarchy as in \cite{MR4206111} and \cite{cody_holy_2022}. Finally, the notions and results contained herein, particularly those on higher $\xi$-stationarity and higher derived topologies (see Section \ref{section_higher_derived_topologies}), should also allow for generalizations of many results concerning iterated stationary reflection properties and characterizations of indescribability in G\"{o}del's constructible universe (see \cite{MR1029909}, \cite{MR3416912}, \cite{MR3894041} and \cite{MR4094556}).
Before we discuss the restriction of formulas in general, let us give some examples. For cardinals $\kappa$ and $\mu$, recall that $L_{\kappa,\mu}$ denotes the infinitary logic which allows for conjunctions of $<\kappa$-many formulas that together contain $<\mu$-many free variables and quantification (universal and existential) over $<\mu$-many variables at once. If $\kappa$ is a measurable cardinal and $\varphi$ is any sentence in the $L_{\kappa,\kappa}$ language of set theory such that $V_\kappa\models\varphi$, then the set of $\alpha<\kappa$ such that $V_\alpha\models\varphi$ is normal measure one in $\kappa$. On the other hand, for any cardinal $\kappa$ there are $L_{\kappa^+,\kappa^+}$ sentences which are true in $V_\kappa$ and false in $V_\alpha$ for all $\alpha<\kappa$. For example, for each $\eta<\kappa$ there is a natural $L_{\kappa^+,\kappa^+}$ formula $\chi_\eta(x)$ such that for all $\alpha\leq\kappa$ and all $a\in V_\alpha$ we have $V_\alpha\models\chi_\eta(a)$ if and only if $a$ is an ordinal and $a$ has order type at least $\eta$. Now $\chi=\bigwedge_{\eta<\kappa}\exists x\chi_\eta(x)$ is an $L_{\kappa^+,\kappa^+}$ sentence such that $V_\kappa\models\chi$, and yet there is no $\alpha<\kappa$ such that $V_\alpha\models\chi$. However, the \emph{restriction} $\chi\mathrm{|}^\kappa_\alpha:=\bigwedge_{\eta<\alpha}\exists x\chi_\eta(x)$ of $\chi$ to $\alpha$ holds in $V_\alpha$ for all $\alpha<\kappa$. In what follows we will define the restriction of $L_{\kappa^+,\kappa^+}$ formulas in generality, which will allow for similar reflection results. However, the main focus of this article is on a different kind of infinitary formula.
Generalizing the notions of $\Pi^1_n$ and $\Sigma^1_n$ formulas (see \cite{MR0281606} or \cite[Section 0]{MR1994835}), Bagaria \cite{MR3894041} defined the classes of $\Pi^1_\xi$ and $\Sigma^1_\xi$ formulas for all ordinals $\xi$. For example, if $\xi$ is a limit ordinal, a formula is $\Pi^1_\xi$ if it is of the form $\bigwedge_{\zeta<\xi}\varphi_\zeta$ where each $\varphi_\zeta$ is $\Pi^1_\zeta$. A formula is $\Sigma^1_{\xi+1}$ if it is of the form $\exists X\psi$ where $\psi$ is $\Pi^1_\xi$. Throughout this article, first-order variables will be written as lower case letters and second-order variables will be written as upper case. For more on the definition of $\Pi^1_\xi$ and $\Sigma^1_\xi$ formulas, see Section \ref{section_definition_pi1xi}. Given a cardinal $\kappa$, Bagaria defined a set $S\subseteq\kappa$ to be $\Pi^1_\xi$-indescribable in $\kappa$ if and only if for all $A\subseteq V_\kappa$ and all $\Pi^1_\xi$ sentences $\varphi$, if $(V_\kappa,\in,A)\models\varphi$ then there is an $\alpha\in S$ such that $(V_\alpha,\in,A\cap V_\alpha)\models\varphi$. Bagaria pointed out that, using his definition, no cardinal $\kappa$ can be $\Pi^1_\kappa$-indescribable because the $\Pi^1_\kappa$ sentence $\chi$ defined above is true in $V_\kappa$ but false in $V_\alpha$ for all $\alpha<\kappa$. We introduce a modification of Bagaria's notion of $\Pi^1_\xi$-indescribability which allows for a cardinal $\kappa$ to be $\Pi^1_\xi$-indescribable for all $\xi<\kappa^+$. Given a cardinal $\kappa$ and an ordinal $\xi<\kappa^+$, we say that a set $S\subseteq\kappa$ is \emph{$\Pi^1_\xi$-indescribable in $\kappa$} if and only if for all $\Pi^1_\xi$ sentences $\varphi$ (with first and second-order parameters from $V_\kappa$), if $V_\kappa\models\varphi$\footnote{Note that $\varphi$ may involve finitely-many second-order parameters $A_1,\ldots,A_n\subseteq V_\kappa$, and when we write $V_\kappa\models\varphi$ we mean $(V_\kappa,\in,A_1,\ldots,A_n)\models\varphi$. Since this abbreviated notion will not cause confusion and greatly simplifies notation, we will use it throughout the paper without further comment.} then there is some $\alpha\in S$ such that a canonically defined restriction of $\varphi$ is true in $V_\alpha$, which we express by writing $V_\alpha\models\varphi\mathrm{|}^\kappa_\alpha$ (see Definition \ref{definition_indescribability} for details).
In order to define the notions of restriction of $L_{\kappa^+,\kappa^+}$ formulas and restriction of $\Pi^1_\xi$ formulas, we use a sequence of functions $\<F^\kappa_\xi:\xi<\kappa^+\rangle$ we call the \emph{sequence of canonical reflection functions at $\kappa$}, which is part of the set theoretic folklore and which is closely related to the sequence $\<f^\kappa_\xi:\xi<\kappa^+\rangle$ of canonical functions at $\kappa$. Before defining the canonical reflection functions, let us recall some basic properties of canonical functions. Given a regular cardinal $\kappa$, the ordering defined on $^\kappa\mathop{{\rm ORD}}$ by letting $f<g$ if and only if $\{\alpha<\kappa: f(\alpha)<g(\alpha)\}$ contains a club, is a well-founded partial ordering. The Galvin-Hajnal \cite{MR376359} norm $\|f\|$ of such a function is defined to be the rank of $f$ in the relation $<$. For each $\xi<\kappa^+$, there is a \emph{canonical} function $f^\kappa_\xi:\kappa\to\kappa$ of norm $\xi$, in the sense that $\|f^\kappa_\xi\|=\xi$ and whenever $\|h\|=\xi$ the set $\{\alpha<\kappa: f^\kappa_\xi(\alpha)\leq h(\alpha)\}$ contains a club (see \cite[Page 99]{MR2768680}). For concreteness, we will use the following definition of $f^\kappa_\xi$ for $\xi<\kappa^+$. If $\xi<\kappa$ we let $f^\kappa_\xi:\kappa\to\kappa$ be the function with constant value $\xi$. If $\kappa\leq\xi<\kappa^+$ we fix a bijection $b_{\kappa,\xi}:\kappa\to\xi$ and define $f^\kappa_\xi$ by letting $f^\kappa_\xi(\alpha)=\mathop{\rm ot}\nolimits(b_{\kappa,\xi}[\alpha])$ for all $\alpha<\kappa$. For convenience, we take $b_{\kappa,\kappa}$ to be the identity function $\mathop{\rm id}_\kappa:\kappa\to\kappa$, which implies that $f^\kappa_\kappa=\mathop{\rm id}_\kappa$. It is easy to see that for all $\zeta<\xi<\kappa^+$ we have $f^\kappa_\zeta<f^\kappa_\xi$ and that $f^\kappa_\xi$ is a canonical function of norm $\xi$. The sequence $\vec{f}=\<f^\kappa_\xi:\xi<\kappa^+\rangle$ is sometimes referred to as the sequence of canonical functions at $\kappa$, although this terminology is slightly misleading since the canonical functions are only well-defined modulo the nonstationary ideal.
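For example, while $f^\kappa_\xi$ is constant with value $\xi$ when $\xi<\kappa$ and $f^\kappa_\kappa$ is the identity, for ordinals slightly above $\kappa$ the canonical functions compute, on a club, the ordinal obtained by replacing $\kappa$ with $\alpha$: whichever bijections are chosen, one can check that there are clubs $C,D\subseteq\kappa$ such that \[f^\kappa_{\kappa+1}(\alpha)=\alpha+1\text{ for all }\alpha\in C\qquad\text{and}\qquad f^\kappa_{\kappa\cdot2}(\alpha)=\alpha\cdot2\text{ for all }\alpha\in D.\] This is an instance of the intuition, discussed further in Section \ref{section_canonical_reflection_functions}, that for club-many $\alpha<\kappa$ the value $f^\kappa_\xi(\alpha)$ behaves like $\alpha$'s version of $\xi$.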
\begin{definition}\label{definition_canonical_reflection_functions} Suppose $\kappa$ is a regular cardinal. For each $\xi\in\kappa^+\setminus\kappa$ let $b_{\kappa,\xi}:\kappa\to\xi$ be a bijection. We define the corresponding sequence of \emph{canonical reflection functions $\vec{F}=\<F^\kappa_\xi:\xi<\kappa^+\rangle$ at $\kappa$} where $F^\kappa_\xi:\kappa\to P_\kappa\kappa^+$ for each $\xi<\kappa^+$ as follows. \begin{enumerate} \item For $\xi<\kappa$ we let $F^\kappa_\xi(\alpha)=\xi$ for all $\alpha<\kappa$. \item For $\kappa\leq\xi<\kappa^+$ we let $F^\kappa_\xi(\alpha)=b_{\kappa,\xi}[\alpha]$ for all $\alpha<\kappa$. \end{enumerate} For each $\xi<\kappa^+$ and $\alpha<\kappa$ we let $\pi^\kappa_{\xi,\alpha}:F^\kappa_\xi(\alpha)\to f^\kappa_\xi(\alpha)$ be the transitive collapse of $F^\kappa_\xi(\alpha)$. \end{definition}
Notice that for all $\xi<\kappa^+$ we have $f^\kappa_\xi(\alpha)=\mathop{\rm ot}\nolimits(F^\kappa_\xi(\alpha))$ by definition. It is not difficult to see that for $\xi\in\kappa^+\setminus\kappa$, the $\xi^{th}$ canonical reflection function $F^\kappa_\xi$ is independent, modulo the nonstationary ideal, of which bijection $b_{\kappa,\xi}:\kappa\to\xi$ is used in its definition. That is, if $b_{\kappa,\xi}^1:\kappa\to\xi$ and $b_{\kappa,\xi}^2:\kappa\to\xi$ are two bijections then the set $\{\alpha<\kappa: b_{\kappa,\xi}^1[\alpha]=b_{\kappa,\xi}^2[\alpha]\}$ contains a club.
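For instance, suppose $\xi=\kappa+1$ and the bijection $b_{\kappa,\kappa+1}$ happens to be the one sending $0$ to $\kappa$ and $1+\gamma$ to $\gamma$ for $\gamma<\kappa$ (by the previous paragraph, any other choice agrees with this one on a club). Then for every limit ordinal $\alpha<\kappa$ we have \[F^\kappa_{\kappa+1}(\alpha)=\alpha\cup\{\kappa\}\qquad\text{and}\qquad f^\kappa_{\kappa+1}(\alpha)=\alpha+1,\] and the transitive collapse $\pi^\kappa_{\kappa+1,\alpha}$ is the identity on $\alpha$ and sends $\kappa$ to $\alpha$.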
In Section \ref{section_canonical_reflection_functions}, we establish many basic structural properties of the canonical reflection functions which will be used later in the paper. A particularly useful application of canonical functions \cite[Proposition 2.34]{MR2768692} is that the $\xi^{th}$ canonical function at a regular cardinal $\kappa$ represents the ordinal $\xi$ in any generic ultrapower by any normal ideal on $\kappa$. An easy result below (see Proposition \ref{proposition_useful_object}) shows that whenever $I$ is a normal ideal on $\kappa$, $G\subseteq P(\kappa)/I$ is generic and $j:V\to V^\kappa/G\subseteq V[G]$ is the corresponding generic ultrapower embedding, the $\xi^{th}$ canonical reflection function $F^\kappa_\xi$ represents $j"\xi$ in the generic ultrapower, that is, $j(F^\kappa_\xi)(\kappa)=j"\xi$.
In Section \ref{section_definition_pi1xi}, given a regular cardinal $\kappa$, we review the definitions of $\Pi^1_\xi$ and $\Sigma^1_\xi$ formulas over $V_\kappa$; when we say that $\varphi$ is $\Pi^1_\xi$ \emph{over} $V_\kappa$ we mean that $\varphi$ is $\Pi^1_\xi$ in Bagaria's sense, but $\varphi$ is also allowed to have any number of first-order parameters from $V_\kappa$ and finitely-many second-order parameters from $V_\kappa$ (see Definition \ref{definition_over}). In Definition \ref{definition_restriction}, we use canonical reflection functions to define the notion of restriction of $\Pi^1_\xi$ and $\Sigma^1_\xi$ formulas by transfinite induction on $\xi<\kappa^+$. For example, if $\varphi=\varphi(X_1,\ldots,X_m,A_1,\ldots, A_n)$ is a $\Pi^1_\xi$ formula over $V_\kappa$ and $\xi<\kappa$, then we define \[\varphi\mathrm{|}^\kappa_\alpha=\varphi(X_1,\ldots,X_m,A_1\cap V_\alpha,\ldots,A_n\cap V_\alpha).\] As another example, suppose $\xi\in\kappa^+\setminus\kappa$ and $\xi$ is a limit ordinal. If \[\varphi=\bigwedge_{\zeta<\xi}\varphi_\zeta\] is a $\Pi^1_\xi$ formula and $\alpha<\kappa$, then we define \[\varphi\mathrm{|}^\kappa_\alpha=\bigwedge_{\zeta<f^\kappa_\xi(\alpha)}\varphi_{(\pi^\kappa_{\xi,\alpha})^{-1}(\zeta)}\mathrm{|}^\kappa_\alpha\] provided that this formula is a $\Pi^1_{f^\kappa_\xi(\alpha)}$ formula over $V_\alpha$. As a consequence of this definition, it follows that when $\xi=\kappa$, so that $\varphi$ is a $\Pi^1_\kappa$ formula over $V_\kappa$, there is a club $C$ in $\kappa$ such that for all regular $\alpha\in C$ we have $\varphi\mathrm{|}^\kappa_\alpha=\bigwedge_{\zeta<\alpha}\varphi_\zeta\mathrm{|}^\kappa_\alpha$. One nice feature of our definition of restriction is that it leads to a convenient way to represent $\Pi^1_\xi$ formulas in normal generic ultrapowers. Suppose $\kappa$ is weakly Mahlo, $I$ is a normal ideal on $\kappa$ and $\varphi$ is a $\Pi^1_\xi$ formula over $V_\kappa$ for some $\xi<\kappa^+$. Then, a result of \cite{cody_holy_2022} (see Lemma \ref{lemma_represent} below) shows that whenever $G\subseteq P(\kappa)/I$ is generic over $V$ and $j:V\to V^\kappa/G$ is the corresponding generic ultrapower embedding, we have $j(\Phi)(\kappa)=\varphi$ where $\Phi$ is the function with domain $\kappa$ defined by $\Phi(\alpha)=\varphi\mathrm{|}^\kappa_\alpha$.
For a given cardinal $\kappa$ and ordinal $\xi<\kappa^+$, in Definition \ref{definition_indescribability} we say that $S\subseteq\kappa$ is \emph{$\Pi^1_\xi$-indescribable} if and only if for all $\Pi^1_\xi$ sentences $\varphi$ over $V_\kappa$, whenever $V_\kappa\models\varphi$ there must be an $\alpha\in S$ such that $V_\alpha\models\varphi\mathrm{|}^\kappa_\alpha$.\footnote{Sharpe and Welch \cite[Definition 3.21]{MR2817562} extended the notion of $\Pi^1_n$-indescribability of a cardinal $\kappa$ where $n<\omega$ to that of $\Pi^1_\xi$-indescribability where $\xi<\kappa^+$ by demanding that the existence of a winning strategy for a particular player in a certain finite game played at $\kappa$ implies that the same player has a winning strategy in the analogous game played at some cardinal less than $\kappa$. The relationship between their notion and the one defined here is not known.} Our last result in Section \ref{section_definition_pi1xi} states that if $\kappa$ is a measurable cardinal then $\kappa$ is $\Pi^1_\xi$-indescribable for all $\xi<\kappa^+$, and furthermore, the set of $\alpha<\kappa$ such that $\alpha$ is $\Pi^1_\zeta$-indescribable for all $\zeta<\alpha^+$ is normal measure one in $\kappa$.
In Section \ref{section_other}, given a regular cardinal $\kappa$ and an $L_{\kappa^+,\kappa^+}$ formula $\varphi$ in the language of set theory, we use the canonical reflection functions at $\kappa$ to define a notion of restriction $\varphi\mathrm{|}^\kappa_\alpha$ by induction on subformulas, for all $\alpha\leq\kappa$. For a regular cardinal $\kappa$, we say that a set $S\subseteq\kappa$ is \emph{$L_{\kappa^+,\kappa^+}$-indescribable} if and only if for all $L_{\kappa^+,\kappa^+}$ sentences $\varphi$ in the language of set theory with $V_\kappa\models\varphi$ there is an $\alpha\in S$ such that $V_\alpha\models\varphi\mathrm{|}^\kappa_\alpha$. Proposition \ref{proposition_Lkappa+} states that if $\kappa$ is a measurable cardinal, then $\kappa$ is $L_{\kappa^+,\kappa^+}$-indescribable and furthermore, the set of regular cardinals $\alpha<\kappa$ that are $L_{\alpha^+,\alpha^+}$-indescribable is normal measure one in $\kappa$.
Generalizing the results of L\'evy \cite{MR0281606} and Bagaria \cite{MR3894041} on universal formulas, in Section \ref{section_universal}, we establish the existence of universal $\Pi^1_\xi$ and $\Sigma^1_\xi$ formulas at a regular cardinal $\kappa$ for all $\xi<\kappa^+$ in an appropriate sense. Using universal formulas, we prove Theorem \ref{theorem_normal_ideal}, which states that if $\kappa$ is $\Pi^1_\xi$-indescribable where $\xi<\kappa^+$, then the collection \[\Pi^1_\xi(\kappa)=\{X\subseteq\kappa:\text{$X$ is not $\Pi^1_\xi$-indescribable}\}\] is a nontrivial normal ideal on $\kappa$.
In Section \ref{section_hierarchy}, again using the existence of universal $\Pi^1_\xi$ formulas discussed above, we prove Theorem \ref{theorem_expressing_indescribability}, which states that given a regular cardinal $\kappa$ and $\xi<\kappa^+$, the $\Pi^1_\xi$-indescribability of a set $S\subseteq\kappa$ is, in an appropriate sense, expressible by a $\Pi^1_{\xi+1}$ formula. We then prove two hierarchy results for $\Pi^1_\xi$-indescribability. For example, as a consequence of these results, if $\kappa$ is $\Pi^1_{\kappa+n+1}$-indescribable, where $n<\omega$, then the set of $\alpha<\kappa$ which are $\Pi^1_{\alpha+n}$-indescribable is in the filter $\Pi^1_{\kappa+n+1}(\kappa)^*$. More generally, our first hierarchy result, Corollary \ref{corollary_hierarchy}, states that if $\kappa$ is $\Pi^1_\xi$-indescribable where $\xi<\kappa^+$ and $\zeta<\xi$, then the set of $\alpha<\kappa$ which are $\Pi^1_{f^\kappa_\zeta(\alpha)}$-indescribable is in the filter $\Pi^1_\xi(\kappa)^*$. Our second hierarchy result, Corollary \ref{corollary_proper}, states that if $\kappa$ is $\Pi^1_\xi$-indescribable where $\xi<\kappa^+$, then for all $\zeta<\xi$ we have $\Pi^1_\zeta(\kappa)\subsetneq\Pi^1_\xi(\kappa)$. The proofs of these two hierarchy results require several lemmas which are interesting in their own right. For example, Proposition \ref{proposition_double_restriction} states that for any weakly Mahlo cardinal $\kappa$ and ordinal $\xi<\kappa^+$, if $\varphi$ is any $\Pi^1_\xi$ or $\Sigma^1_\xi$ formula then there is a club $C\subseteq\kappa$ such that for all regular $\alpha\in C$, the set of $\beta<\alpha$ for which \[(\varphi\mathrm{|}^\kappa_\alpha)\mathrm{|}^\alpha_\beta=\varphi\mathrm{|}^\kappa_\beta\] is club in $\alpha$.
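Returning to Corollary \ref{corollary_hierarchy}, another concrete instance is obtained by taking $\zeta=\kappa$ and recalling that $f^\kappa_\kappa$ is the identity function: if $\kappa$ is $\Pi^1_{\kappa+1}$-indescribable then \[\{\alpha<\kappa:\text{$\alpha$ is $\Pi^1_\alpha$-indescribable}\}\in\Pi^1_{\kappa+1}(\kappa)^*.\]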
Recall that, by results of Sun \cite{MR1245524} and Hellsten \cite{MR2026390}, one can characterize $\Pi^1_n$-indescribable subsets of a $\Pi^1_n$-indescribable cardinal $\kappa$ by using a natural base for the filter $\Pi^1_n(\kappa)^*$ dual to $\Pi^1_n(\kappa)$. For a regular cardinal $\kappa$, a set $C\subseteq\kappa$ is a \emph{$\Pi^1_0$-club in $\kappa$} if it is club in $\kappa$. We say that $C\subseteq\kappa$ is \emph{$\Pi^1_{n+1}$-club} in $\kappa$, where $n<\omega$, if it is $\Pi^1_n$-indescribable in $\kappa$ and whenever $C\cap\alpha$ is $\Pi^1_n$-indescribable in $\alpha$ we have $\alpha\in C$. Then, if $\kappa$ is $\Pi^1_n$-indescribable, a set $S\subseteq\kappa$ is $\Pi^1_n$-indescribable if and only if $S\cap C\neq\varnothing$ for all $\Pi^1_n$-clubs $C\subseteq\kappa$. This result is due to Sun \cite{MR1245524} for $n=1$ and to Hellsten \cite{MR2026390} for $n<\omega$. In Section \ref{section_higher_xi_clubs}, we generalize this to $\Pi^1_\xi$-indescribable subsets of $\Pi^1_\xi$-indescribable cardinals for all $\xi<\kappa^+$. That is, for all $\xi<\kappa^+$, we introduce a notion of $\Pi^1_\xi$-club subset of $\kappa$ such that if $\kappa$ is $\Pi^1_\xi$-indescribable then a set $S\subseteq\kappa$ is $\Pi^1_\xi$-indescribable if and only if $S\cap C\neq\varnothing$ for all $\Pi^1_\xi$-clubs $C\subseteq\kappa$. For more results involving $\Pi^1_\xi$-clubs, one should consult \cite{MR3985624}, \cite{MR4050036}, \cite{MR4230485} and \cite{MR4082998}.
Finally, in Section \ref{section_higher_derived_topologies}, we generalize some of the results of Bagaria \cite{MR3894041} on derived topologies on ordinals. Given a nonzero ordinal $\delta$, Bagaria defined a transfinite sequence of topologies $\langle\tau_\xi:\xi\in\mathop{{\rm ORD}}\rangle$ on $\delta$, called the \emph{derived topologies on $\delta$}, and proved---using the definitions of \cite{MR3894041}---that if there is an $\alpha<\delta$ which is $\Pi^1_\xi$-indescribable then the $\tau_{\xi+1}$ topology on $\delta$ is non-discrete. However, using the definitions of \cite{MR3894041}, $\alpha$ can be $\Pi^1_\xi$-indescribable only if $\xi<\alpha$. Thus, Bagaria obtained the non-discreteness of the $\tau_\xi$ topologies on $\delta$ only for $\xi<\delta$. Given a regular cardinal $\mu$, in Section \ref{section_higher_derived_topologies}, using \emph{diagonal Cantor derivatives}, we present a natural extension of Bagaria's notion of derived topologies on $\mu$ by defining a transfinite sequence of topologies $\langle\tau_\xi:\xi<\mu^+\rangle$ on $\mu$ such that for $\xi<\mu$ our $\tau_\xi$ is the same as that of \cite{MR3894041} and Bagaria's conditions for the nondiscreteness of the topologies $\tau_{\xi+1}$ for $\xi<\mu$ can be generalized to all $\xi<\mu^+$ (see Theorem \ref{theorem_xi_s_nonisolated} and Corollary \ref{corollary_nondiscreteness_from_indescribability}). \begin{remark}\label{remark_example} Let us describe the simplest of the new topologies introduced in this article. If $\langle\tau_\xi:\xi<\mu\rangle$ is Bagaria's sequence of derived topologies on a regular $\mu$, we define $d_\mu:P(\mu)\to P(\mu)$ by letting \[d_\mu(A)=\{\alpha<\mu:\text{$\alpha$ is a limit point of $A$ in the $\tau_\alpha$ topology on $\mu$}\}.\] We then define a new topology $\tau_\mu$ declaring $C\subseteq\mu$ to be closed in the space $(\mu,\tau_\mu)$ if and only if $d_\mu(C)\subseteq C$. That is, we let $U\in\tau_\mu$ if and only if $d_\mu(\mu\setminus U)\subseteq\mu\setminus U$ for $U\subseteq\mu$. \end{remark}
\section{Canonical reflection functions}\label{section_canonical_reflection_functions}
In this section we establish the basic properties of the canonical reflection functions at a regular cardinal. Although some of these results are folklore, we include proofs for the reader's convenience. Many of the proofs in the current section will establish that certain sets defined using canonical reflection functions are in the club filter on a given regular cardinal $\kappa$. These results will be established by using generic ultrapower embeddings\footnote{The author would like to thank Peter Holy for suggesting the use of generic ultrapowers in the arguments of the current section.}; some background material on generic ultrapowers may be found in \cite{MR2768692}, but we will only need that which is summarized here. Recall that if $\kappa$ is a regular cardinal, $I$ is a normal ideal on $\kappa$ and $G\subseteq P(\kappa)/I$ is generic over $V$, then, working in the forcing extension $V[G]$, there is a canonical $V$-normal $V$-ultrafilter $U_G\subseteq P(\kappa)$ obtained from $G$ such that $U_G$ extends the filter $I^*$ dual to $I$ and we may form the corresponding generic ultrapower $j:V\to V^\kappa/U_G\subseteq V[G]$. Further recall that the critical point of $j$ is $\kappa$ and that $\kappa$ equals the equivalence class of the identity function $\mathop{\rm id}:\kappa\to\kappa$. Thus, for all $X\in P(\kappa)^V$ we have $X\in U_G$ if and only if $\kappa\in j(X)$. Furthermore, the ultrapower $V^\kappa/U_G$ is well-founded up to $(\kappa^+)^V$, $H(\kappa^+)\subseteq V^\kappa/U_G$ and when $\kappa$ is inaccessible we have $H(\kappa)=H(\kappa)^{V^\kappa/U_G}$. As is standard practice, in what follows we will often write $V^\kappa/G$ to mean $V^\kappa/U_G$. The following two propositions will be used throughout the article.
\begin{proposition}\label{proposition_framework} Suppose $\kappa$ is a regular uncountable cardinal and $S\subseteq\kappa$. Then $S$ contains a club subset of $\kappa$ if and only if whenever $G$ is generic for $P(\kappa)/{\mathop{\rm NS}}_\kappa$ it follows that $\kappa\in j(S)$ where $j:V\to V^\kappa/G$ is the generic ultrapower embedding obtained from $G$. \end{proposition}
\begin{proposition}\label{proposition_framework2} The following are equivalent when $\kappa$ is a regular uncountable cardinal\footnote{Notice that when the set of regular cardinals less than $\kappa$ is not stationary in $\kappa$, i.e., when $\kappa$ is not weakly Mahlo, then both (1) and (2) hold trivially. In later sections, when we apply Proposition \ref{proposition_framework2}, and the results derived from it in the current section, $\kappa$ will in fact be weakly Mahlo.} and $E\subseteq\kappa$. \begin{enumerate} \item There is a club $C\subseteq\kappa$ such that for all regular uncountable $\alpha\in C$ we have $\alpha\in E$. \item Whenever $G\subseteq P(\kappa)/{\mathop{\rm NS}}_\kappa$ is generic over $V$ such that $\kappa$ is regular in $V^\kappa/G$ and $j:V\to V^\kappa/G$ is the corresponding generic ultrapower embedding, we have $\kappa\in j(E)$. \end{enumerate} \end{proposition}
\begin{proof} It is trivial to see that (1) implies (2). If (1) is false then the set $S$ of regular cardinals in $\kappa\setminus E$ is stationary in $\kappa$. Let $G\subseteq P(\kappa)/{\mathop{\rm NS}}_\kappa$ be generic over $V$ with $S\in G$ and let $j:V\to V^\kappa/G$ be the corresponding generic ultrapower embedding. Then $\kappa\in j(S)$, which implies $\kappa$ is regular in $V^\kappa/G$ and $\kappa\notin j(E)$ contradicting (2). \end{proof}
The next result shows that, for regular $\kappa$, the $\xi^{th}$ canonical reflection function $F^\kappa_\xi$ (see Definition \ref{definition_canonical_reflection_functions}) represents a useful object in any generic ultrapower obtained from a normal ideal on $\kappa$.
\begin{proposition}\label{proposition_useful_object} Suppose $\kappa$ is a regular cardinal and $I$ is a normal ideal on $\kappa$. Let $G$ be generic for $P(\kappa)/I$ and let $j:V\to V^\kappa/G$ be the corresponding generic ultrapower embedding. Then, for all $\xi<\kappa^+$, the $\xi^{th}$ canonical reflection function $F^\kappa_\xi$ represents $j"\xi$ in the generic ultrapower, that is, $j(F^\kappa_\xi)(\kappa)=j"\xi$. \end{proposition}
\begin{proof} Let $j:V\to V^\kappa/G$ be the generic ultrapower obtained from a generic filter $G\subseteq P(\kappa)/I$ over $V$. Since $\mathop{\rm crit}(j)=\kappa$, it is easy to see that for $\xi\leq\kappa$ we have $j(F^\kappa_\xi)(\kappa)=j"\xi$. Now suppose $\kappa<\xi<\kappa^+$ and let $b_{\kappa,\xi}:\kappa\to\xi$ be the bijection such that $F^\kappa_\xi(\alpha)=b_{\kappa,\xi}[\alpha]$ for all $\alpha<\kappa$. By elementarity, $j(b_{\kappa,\xi}):j(\kappa)\to j(\xi)$ is a bijection in $V^\kappa/G$ and $j(b_{\kappa,\xi})(\alpha)=j(b_{\kappa,\xi}(\alpha))$ for all $\alpha<\kappa$. Thus, $j(F^\kappa_\xi)(\kappa)=j(b_{\kappa,\xi})[\kappa]=j"\xi$. \end{proof}
\begin{corollary} Suppose $U$ is a normal measure on $\kappa$ and $j:V\to M$ is the corresponding ultrapower embedding. For all $\xi<\kappa^+$, the $\xi^{th}$ canonical reflection function $F^\kappa_\xi$ represents $j"\xi$ in the ultrapower, that is, $j(F^\kappa_\xi)(\kappa)=j"\xi$. \end{corollary}
Next we show that at least some of the canonical reflection functions at a regular $\kappa$ are, in fact, canonical; in Remark \ref{remark_not_canonical}, we show that this partial canonicity result is the best possible.
\begin{lemma}\label{lemma_canonicity} Suppose $\kappa$ is regular.\begin{enumerate} \item For all $\xi<\kappa^+$ the set $\{\alpha<\kappa: F^\kappa_\zeta(\alpha)\subsetneq F^\kappa_\xi(\alpha)\}$ contains a club subset of $\kappa$ for all $\zeta<\xi$. \item If $\xi<\kappa^+$ is a limit ordinal then the set \[\{\alpha<\kappa: F^\kappa_\xi(\alpha)=\bigcup_{\zeta\in F^\kappa_\xi(\alpha)} F^\kappa_\zeta(\alpha)\}\] contains a club subset of $\kappa$. \item If $\xi<\kappa^+$ is a limit ordinal the function $F^\kappa_\xi$ is canonical in the sense that whenever $F:\kappa\to P_\kappa\kappa^+$ is a function such that for all $\zeta<\xi$ the set $\{\alpha<\kappa: F^\kappa_\zeta(\alpha)\subseteq F(\alpha)\}$ contains a club, then the set $\{\alpha<\kappa: F^\kappa_\xi(\alpha)\subseteq F(\alpha)\}$ contains a club subset of $\kappa$. \end{enumerate} \end{lemma}
\begin{proof}
For (1), suppose $\zeta<\xi<\kappa^+$ and let $C=\{\alpha<\kappa: F^\kappa_\zeta(\alpha)\subsetneq F^\kappa_\xi(\alpha)\}$. Let $G\subseteq P(\kappa)/{\mathop{\rm NS}}_\kappa$ be generic over $V$ and let $j:V\to V^\kappa/G$ be the corresponding generic ultrapower. By Proposition \ref{proposition_useful_object}, $j(F^\kappa_\zeta)(\kappa)=j"\zeta\subsetneq j"\xi=j(F^\kappa_\xi)(\kappa)$, so $\kappa\in j(C)$, and thus by Proposition \ref{proposition_framework} we see that $C$ contains a club subset of $\kappa$.
Similarly, for (2), suppose $\xi$ is a limit ordinal, let $C=\{\alpha<\kappa: F^\kappa_\xi(\alpha)=\bigcup_{\zeta\in F^\kappa_\xi(\alpha)}F^\kappa_\zeta(\alpha)\}$ and let $j:V\to V^\kappa/G$ be the generic ultrapower obtained by forcing with $P(\kappa)/{\mathop{\rm NS}}_\kappa$. Working in $V^\kappa/G$, for each ordinal $\zeta<j(\kappa^+)$ we let $\overline{F}^{j(\kappa)}_\zeta$ denote the $\zeta$-th canonical reflection function at $j(\kappa)$, so that $j(\<F^\kappa_\zeta:\zeta<\kappa^+\rangle)=\langle\overline F^{j(\kappa)}_\zeta:\zeta<j(\kappa^+)\rangle$ and, in particular, $j(F^\kappa_\zeta)=\overline F^{j(\kappa)}_{j(\zeta)}$ for each $\zeta<\kappa^+$. Notice that \[j(C)=\{\alpha<j(\kappa): j(F^\kappa_\xi)(\alpha)=\bigcup_{\zeta\in j(F^\kappa_\xi)(\alpha)}\overline F^{j(\kappa)}_\zeta(\alpha)\}.\] Since $j(F^\kappa_\xi)(\kappa)=j"\xi$ and, for each $\zeta\in j"\xi$, we have $\overline F^{j(\kappa)}_\zeta(\kappa)=\overline F^{j(\kappa)}_{j(j^{-1}(\zeta))}(\kappa)=j(F^\kappa_{j^{-1}(\zeta)})(\kappa)=j"(j^{-1}(\zeta))=(j"\xi)\cap\zeta$, it follows that $\bigcup_{\zeta\in j"\xi}\overline F^{j(\kappa)}_\zeta(\kappa)=j"\xi$ and hence $\kappa\in j(C)$.
For (3), suppose $\xi<\kappa^+$ is a limit and let $F$ be as in the statement of the lemma. By assumption, if $j:V\to V^\kappa/G$ is any generic ultrapower obtained by forcing with $P(\kappa)/{\mathop{\rm NS}}_\kappa$, then $j(F^\kappa_\zeta)(\kappa)=j"\zeta\subseteq j(F)(\kappa)$ for all $\zeta<\xi$. By (2) and Proposition \ref{proposition_useful_object}, we know that $j(F^\kappa_\xi)(\kappa)=j"\xi=\bigcup_{\zeta<\xi}j"\zeta$ and hence $j(F^\kappa_\xi)(\kappa)\subseteq j(F)(\kappa)$. Since this holds for every such generic ultrapower, Proposition \ref{proposition_framework} implies that the set $\{\alpha<\kappa: F^\kappa_\xi(\alpha)\subseteq F(\alpha)\}$ contains a club subset of $\kappa$. \end{proof}
\begin{remark}\label{remark_not_canonical} Let us point out that Lemma \ref{lemma_canonicity}(3) does not hold if $\xi<\kappa^+$ is a successor ordinal. For example, suppose $\xi=\kappa+1$ and $F:\kappa\to P_\kappa\kappa^+$ is defined by $F(\alpha)=\alpha$. Let $j:V\to V^\kappa/G$ be any generic ultrapower obtained by forcing with $P(\kappa)/{\mathop{\rm NS}}_\kappa$. Since $j(F)(\kappa)=\kappa$ and $j"(\kappa+1)=\kappa\cup\{j(\kappa)\}$ we see that $\{\alpha<\kappa: F^\kappa_\kappa(\alpha)\subseteq F(\alpha)\}$ contains a club in $\kappa$ and $\{\alpha<\kappa: F^\kappa_{\kappa+1}(\alpha)\subseteq F(\alpha)\}$ is nonstationary in $\kappa$.
\end{remark}
The following lemma shows that the canonical reflection functions at a regular cardinal satisfy a natural kind of coherence property.
\begin{lemma}\label{lemma_coherence} Suppose $\kappa$ is a regular cardinal and $\xi<\kappa^+$ is a limit ordinal. Let $\pi^\kappa_{\xi,\alpha}:F^\kappa_\xi(\alpha)\to f^\kappa_\xi(\alpha)$ be the transitive collapse of $F^\kappa_\xi(\alpha)$ for each $\alpha<\kappa$. Then the set \[C=\{\alpha<\kappa:(\forall\zeta\in F^\kappa_\xi(\alpha))\ F^\kappa_\xi(\alpha)\cap\zeta=F^\kappa_\zeta(\alpha)\}\] contains a club subset of $\kappa$. \end{lemma}
\begin{proof} Let $G\subseteq P(\kappa)/{\mathop{\rm NS}}_\kappa$ be generic over $V$ and let $j:V\to V^\kappa/G$ be the corresponding generic ultrapower embedding. Let $\vec{F}=\<F^\kappa_\zeta:\zeta<\kappa^+\rangle$ and notice that $j(\vec{F})=\langle\overline F^{j(\kappa)}_\zeta:\zeta<j(\kappa^+)\rangle$ where $\overline F^{j(\kappa)}_\zeta$ is the $\zeta$-th canonical reflection function at $j(\kappa)$ in $V^\kappa/G$. We have \[j(C)=\{\alpha<j(\kappa):(\forall\zeta\in j(F^\kappa_\xi)(\alpha))\ j(F^\kappa_\xi)(\alpha)\cap\zeta = \overline F^{j(\kappa)}_\zeta(\alpha)\}.\] Since $j(F^\kappa_\xi)(\kappa)=j"\xi$ and for each $\zeta\in j"\xi$ we have $\overline F^{j(\kappa)}_\zeta(\kappa)=\overline F^{j(\kappa)}_{j(j^{-1}(\zeta))}(\kappa)=j(F^\kappa_{j^{-1}(\zeta)})(\kappa)=j"\zeta$, it follows that $\kappa\in j(C)$. \end{proof}
Next we will show that for all limit ordinals $\xi<\kappa^+$, for club many $\alpha<\kappa$, the value of $f^\kappa_\xi(\alpha)$ is determined by the values of $f^\kappa_\zeta(\alpha)$ for $\zeta\in F^\kappa_\xi(\alpha)$.
\begin{lemma}\label{lemma_canonical_functions_at_limits} Suppose $\kappa$ is regular and $\xi<\kappa^+$ is a limit ordinal. Then the set \[D=\{\alpha<\kappa: f^\kappa_\xi(\alpha)=\bigcup_{\zeta\in F^\kappa_\xi(\alpha)}f^\kappa_\zeta(\alpha)\}\] contains a club subset of $\kappa$. \end{lemma}
\begin{proof} Let $G\subseteq P(\kappa)/{\mathop{\rm NS}}_\kappa$ be generic over $V$ and let $j:V\to V^\kappa/G$ be the corresponding generic ultrapower embedding. Let $j(\<f^\kappa_\zeta:\zeta<\kappa^+\rangle)=\langle\overline f^{j(\kappa)}_\zeta:\zeta<j(\kappa^+)\rangle$. We have \[j(D)=\{\alpha<j(\kappa): j(f^\kappa_\xi)(\alpha)=\bigcup_{\zeta\in j(F^\kappa_\xi)(\alpha)} \overline f^{j(\kappa)}_\zeta(\alpha)\}.\] Since $j(f^\kappa_\xi)(\kappa)=\xi$, $j(F^\kappa_\xi)(\kappa)=j"\xi$ and $\overline f^{j(\kappa)}_\zeta(\kappa)=j(f^\kappa_{j^{-1}(\zeta)})(\kappa)=j^{-1}(\zeta)$ for each $\zeta\in j"\xi$, and since $\xi=\bigcup_{\zeta\in j"\xi} j^{-1}(\zeta)$ because $\xi$ is a limit ordinal, it follows that $\kappa\in j(D)$. \end{proof}
The next two lemmas follow easily from Proposition \ref{proposition_framework} and confirm our intuition that for a regular cardinal $\kappa$ and ordinal $\xi<\kappa^+$, for club-many $\alpha<\kappa$ the value $f^\kappa_\xi(\alpha)$ behaves like $\alpha$'s version of $\xi$.
\begin{lemma}\label{lemma_limits} Suppose $\kappa$ is regular and $\xi<\kappa^+$ is a limit ordinal. Then the set \[D=\{\alpha<\kappa: \text{$f^\kappa_\xi(\alpha)$ is a limit ordinal}\}\] contains a club subset of $\kappa$. \end{lemma}
\begin{lemma}\label{lemma_successor} Suppose $\kappa$ is regular. For all $\zeta<\kappa^+$, each of the following sets contains a club subset of $\kappa$. \begin{align*} D_0&=\{\alpha<\kappa: F^\kappa_{\zeta+1}(\alpha)\cap\zeta=F^\kappa_\zeta(\alpha)\}\\ D_1&=\{\alpha<\kappa: F^\kappa_{\zeta+1}(\alpha)=F^\kappa_\zeta(\alpha)\cup\{\zeta\}\}\\ D_2&=\{\alpha<\kappa: f^\kappa_{\zeta+1}(\alpha)=f^\kappa_\zeta(\alpha)+1\} \end{align*} \end{lemma}
Next we prove a proposition which generalizes a folklore result concerning canonical functions (see Corollary \ref{corollary_crazy}) to canonical reflection functions, and which draws a connection between the canonical reflection functions at a regular cardinal $\kappa$ and the canonical reflection functions at regular $\alpha<\kappa$. The following proposition was originally established in a previous version of this article using a more complicated proof; the proof below is due to Cody and Holy and appears in \cite{cody_holy_2022}.
\begin{proposition}\label{proposition_crazy} Suppose $\kappa$ is regular and $\xi<\kappa^+$. For each $\alpha<\kappa$ let \[\pi^\kappa_{\xi,\alpha}:F^\kappa_\xi(\alpha)\to f^\kappa_\xi(\alpha)\] be the transitive collapse of $F^\kappa_\xi(\alpha)$. Then there is a club $C^\kappa_\xi\subseteq\kappa$ such that for all regular uncountable $\alpha\in C^\kappa_\xi$ the set \[D^\alpha_\xi=\{\beta<\alpha : \pi^\kappa_{\xi,\alpha}[F^\kappa_\xi(\beta)]=F^\alpha_{f^\kappa_\xi(\alpha)}(\beta)\}\] is in the club filter on $\alpha$. \end{proposition}
\begin{proof} In order to prove the existence of such a club, we will use Proposition \ref{proposition_framework2}. Suppose $G\subseteq P(\kappa)/{\mathop{\rm NS}}_\kappa$ is generic over $V$ such that $\kappa$ is regular in $V^\kappa/G$ and let $j:V\to V^\kappa/G$ be the corresponding generic ultrapower. For each regular uncountable $\alpha<\kappa$ let $D^\alpha_\xi=\{\beta<\alpha : \pi^\kappa_{\xi,\alpha}[F^\kappa_\xi(\beta)]=F^\alpha_{f^\kappa_\xi(\alpha)}(\beta)\}$. We must show that \[\kappa\in j(\{\alpha\in{\rm REG}\cap\kappa: D^\alpha_\xi\text{ is in the club filter on $\alpha$}\}).\] Let $\vec{D}=\<D^\alpha_\xi:\alpha\in{\rm REG}\cap\kappa\rangle$, $\vec{\pi}=\langle\pi^\kappa_{\xi,\alpha}:\alpha<\kappa\rangle$ and $\vec{F}=\<F^\alpha_{f^\kappa_\xi(\alpha)}:\alpha\in{\rm REG}\cap\kappa\rangle$. By elementarity it follows that in $V^\kappa/G$, $j(\vec{\pi})_\kappa$ is a bijection from $j(F^\kappa_\xi)(\kappa)=j"\xi$ to $j(f^\kappa_\xi)(\kappa)=\xi$. Thus the set $\{j(\vec{\pi})_\kappa[j(F^\kappa_\xi)(\beta)]:\beta<\kappa\}$ is cofinal in $[\xi]^{<\kappa}$. Also by elementarity, we see that $j(\vec{F})_\kappa$ is the $\xi$-th canonical reflection function at $\kappa$ in $V^\kappa/G$ and hence the set $\{j(\vec{F})_\kappa(\beta):\beta<\kappa\}$ is cofinal in $[\xi]^{<\kappa}$. By the usual catching up argument, in $V^\kappa/G$ the set $j(\vec{D})_\kappa$ contains a club subset of $\kappa$. \end{proof}
The following folklore result (see \cite[Section 5]{MR1077260}) easily follows from Proposition \ref{proposition_crazy}, or can be established directly using an argument which is easier than that of Proposition \ref{proposition_crazy}.
\begin{corollary}\label{corollary_crazy} Suppose $\kappa$ is regular and $\xi<\kappa^+$. Then there is a club $C^\kappa_\xi\subseteq\kappa$ such that for all regular uncountable $\alpha\in C^\kappa_\xi$ the set \[D^\alpha_\xi=\{\beta<\alpha : f^\kappa_\xi(\beta)=f^\alpha_{f^\kappa_\xi(\alpha)}(\beta)\}\] is in the club filter on $\alpha$. \end{corollary}
\section{Restricting $\Pi^1_\xi$ formulas and consistency of higher $\Pi^1_\xi$-indescribability} \label{section_definition_pi1xi}
We begin this section with a precise definition of $\Pi^1_\xi$ and $\Sigma^1_\xi$ formulas \emph{over $V_\kappa$}, where $\kappa$ is a regular cardinal and $\xi$ is an ordinal. The following definition is similar to \cite[Definition 4.1]{MR3894041}, the only difference being that we allow for first and second order parameters from $V_\kappa$. Recall that throughout the article we use capital letters to denote second-order variables and lower case letters to denote first-order variables.
\begin{definition}\label{definition_over} Suppose $\kappa$ is a regular cardinal. We define the notions of $\Pi^1_\xi$ and $\Sigma^1_\xi$ formula over $V_\kappa$, for all ordinals $\xi$ as follows. \begin{enumerate} \item A formula $\varphi$ is $\Pi^1_0$, or equivalently $\Sigma^1_0$, over $V_\kappa$ if it is a first order formula in the language of set theory, however we allow for free variables and parameters from $V_\kappa$ of two types, namely of first and of second order. \item A formula $\varphi$ is $\Pi^1_{\xi+1}$ over $V_\kappa$ if it is of the form $\forall X_{k_1}\cdots\forall X_{k_m}\psi$ where $\psi$ is $\Sigma^1_\xi$ over $V_\kappa$ and $m\in\omega$. Similarly, $\varphi$ is $\Sigma^1_{\xi+1}$ over $V_\kappa$ if it is of the form $\exists X_{k_1}\cdots\exists X_{k_m}\psi$ where $\psi$ is $\Pi^1_\xi$ over $V_\kappa$ and $m\in\omega$.\footnote{We follow the convention that uppercase letters represent second order variables, while lower case letters represent first order variables. Thus, in the above, all quantifiers displayed are understood to be second order quantifiers, i.e., quantifiers over subsets of $V_\kappa$.}
\item When $\xi$ is a limit ordinal, a formula $\varphi$, with finitely many second-order free variables and finitely many second-order parameters, is $\Pi^1_\xi$ over $V_\kappa$ if it is of the form \[\bigwedge_{\zeta<\xi}\varphi_\zeta\] where $\varphi_\zeta$ is $\Pi^1_\zeta$ over $V_\kappa$ for all $\zeta<\xi$. Similarly, $\varphi$ is $\Sigma^1_\xi$ if it is of the form \[\bigvee_{\zeta<\xi}\varphi_\zeta\] where $\varphi_\zeta$ is $\Sigma^1_\zeta$ over $V_\kappa$ for all $\zeta<\xi$. \end{enumerate} \end{definition}
\begin{definition}\label{definition_restriction} By induction on $\xi<\kappa^+$, we define $\varphi\mathrm{|}^\kappa_\alpha$ for all $\Pi^1_\xi$ formulas $\varphi$ over $V_\kappa$ and all regular $\alpha<\kappa$ as follows. First assume that $\xi<\kappa$. If \[\varphi=\varphi(X_1,\ldots,X_m,A_1,\ldots,A_n),\] with free second order variables $X_1,\ldots,X_m$ and second order parameters $A_1,\ldots,A_n$, then we define \[\varphi\mathrm{|}^\kappa_\alpha=\varphi(X_1,\ldots,X_m,A_1\cap V_\alpha,\ldots,A_n\cap V_\alpha).\]
If $\xi=\zeta+1$ is a successor ordinal and $\varphi=\forall X_{k_1}\ldots\forall X_{k_m}\psi$ is $\Pi^1_{\zeta+1}$ over $V_\kappa$, then we define \[\varphi\mathrm{|}^\kappa_\alpha=\forall X_{k_1}\ldots\forall X_{k_m}(\psi\mathrm{|}^\kappa_\alpha).\] We define $\varphi\mathrm{|}^\kappa_\alpha$ analogously when $\varphi$ is $\Sigma^1_{\zeta+1}$.
If $\xi\in\kappa^+\setminus\kappa$ is a limit ordinal, and \begin{align}\varphi=\bigwedge_{\zeta<\xi}\psi_\zeta\label{equation_defn_restriction}\end{align} is $\Pi^1_\xi$ over $V_\kappa$, then we define \[\varphi\mathrm{|}^\kappa_\alpha=\bigwedge_{\zeta\in f^\kappa_\xi(\alpha)}\psi_{(\pi^\kappa_{\xi,\alpha})^{-1}(\zeta)}\mathrm{|}^\kappa_\alpha\] in case $\psi_{(\pi^\kappa_{\xi,\alpha})^{-1}(\zeta)}\mathrm{|}^\kappa_\alpha$ is a $\Pi^1_\zeta$ formula over $V_\alpha$ for every $\zeta<f^\kappa_\xi(\alpha)$. We leave $\varphi\mathrm{|}^\kappa_\alpha$ undefined otherwise. We define $\varphi\mathrm{|}^\kappa_\alpha$ similarly when $\xi\in\kappa^+\setminus\kappa$ is a limit ordinal and $\varphi$ is $\Sigma^1_\xi$.
\end{definition}
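By way of illustration, since $f^\kappa_\kappa$ is the identity function and each collapse $\pi^\kappa_{\kappa,\alpha}$ is the identity on $\alpha$, if $\varphi=\bigwedge_{\zeta<\kappa}\psi_\zeta$ is a $\Pi^1_\kappa$ formula over $V_\kappa$ then for every regular $\alpha<\kappa$ we have \[\varphi\mathrm{|}^\kappa_\alpha=\bigwedge_{\zeta<\alpha}\psi_\zeta\mathrm{|}^\kappa_\alpha\] whenever this formula is defined. Similarly, a $\Pi^1_{\kappa+1}$ formula over $V_\kappa$ has the form $\varphi=\forall X_{k_1}\cdots\forall X_{k_m}\bigvee_{\zeta<\kappa}\sigma_\zeta$, where each $\sigma_\zeta$ is $\Sigma^1_\zeta$ over $V_\kappa$, and its restriction, when defined, is \[\varphi\mathrm{|}^\kappa_\alpha=\forall X_{k_1}\cdots\forall X_{k_m}\bigvee_{\zeta<\alpha}\sigma_\zeta\mathrm{|}^\kappa_\alpha.\]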
\begin{remark}\label{remark_definition_of_restriction} A few remarks about Definition \ref{definition_restriction} are in order. \begin{enumerate} \item An easy inductive argument on $\xi<\kappa^+$ shows that if $\varphi$ is a $\Pi^1_\xi$ or $\Sigma^1_\xi$ formula over $V_\kappa$, and $\alpha<\kappa$ is regular, then whenever $\varphi\mathrm{|}^\kappa_\alpha$ is defined, it is a $\Pi^1_{f^\kappa_\xi(\alpha)}$ or $\Sigma^1_{f^\kappa_\xi(\alpha)}$ formula over $V_\alpha$ respectively. \item Recall that we defined the sequence of canonical reflection functions $\<F^\kappa_\xi:\xi<\kappa^+\rangle$, the sequence of canonical functions $\<f^\kappa_\xi:\xi<\kappa^+\rangle$ and the transitive collapses $\pi^\kappa_{\xi,\alpha}:F^\kappa_\xi(\alpha)\to f^\kappa_\xi(\alpha)$ in a particular way making use of a fixed sequence of bijections $\<b_{\kappa,\xi}:\xi\in\kappa^+\setminus\kappa\rangle$. Thus, the definition of $\varphi\mathrm{|}^\kappa_\alpha$ given above clearly depends on our choice of bijections $\<b_{\kappa,\xi}:\xi\in\kappa^+\setminus\kappa\rangle$. Below we will see that our definition of $\varphi\mathrm{|}^\kappa_\alpha$ is independent of this choice of bijections modulo the nonstationary ideal. See the paragraph after Definition \ref{definition_indescribability} for details. \end{enumerate} \end{remark}
In order to establish some basic properties of the restriction operation from Definition \ref{definition_restriction}, let us consider how it behaves with respect to generic ultrapowers. We will want to apply elementary embeddings to $\Pi^1_\xi$ and $\Sigma^1_\xi$ formulas, which will be viewed as set theoretic objects.
\begin{remark}\label{remark_coding} Assume that $\varphi$ is either a $\Pi^1_\xi$ or $\Sigma^1_\xi$ formula over $V_\kappa$ for some $\xi<\kappa^+$. Let $j:V\to V^\kappa/G$ be the generic ultrapower embedding obtained by forcing with $P(\kappa)/I$ where $I$ is some normal ideal on $\kappa$. We will leave it to the reader to check that any reasonable coding of formulas has the following properties.
\begin{enumerate}
\item If $\xi<\kappa$, and $A_1,\ldots,A_n$ are all second order parameters appearing in $\varphi$, then \[j(\varphi(A_1,\ldots,A_n))=\varphi(j(A_1),\ldots,j(A_n)).\]
\item $j(\forall X\,\varphi)=\forall X\,j(\varphi)$.
\item If $\xi\ge\kappa$ is a limit ordinal, and $\varphi$ is either of the form $\varphi=\bigwedge_{\zeta<\xi}\psi_\zeta$, or of the form $\bigvee_{\zeta<\xi}\psi_\zeta$, let $\vec\psi=\langle\psi_\zeta\mid\zeta<\xi\rangle$. Then, \[j(\varphi)=\bigwedge_{\zeta<j(\xi)}j(\vec\psi)_\zeta\quad\textrm{or}\quad j(\varphi)=\bigvee_{\zeta<j(\xi)}j(\vec\psi)_\zeta\] respectively. \end{enumerate} \end{remark}
Regarding the assumption of the next lemma, and also of some later results, note that $\kappa$ will be regular in a generic ultrapower $V^\kappa/G$ obtained by forcing with a normal ideal on $\kappa$ if and only if $G$ contains the set of regular cardinals below $\kappa$. This is of course only possible if that latter set is a stationary subset of $\kappa$, i.e., if $\kappa$ is weakly Mahlo. Let us note that the assumption that $\kappa$ is regular in the generic ultrapower is needed to ensure that $j(\varphi)\mathrm{|}^{j(\kappa)}_\kappa$ is defined.
\begin{lemma}[{Cody-Holy \cite{cody_holy_2022}}]\label{lemma_j_of} Suppose $\kappa$ is a regular cardinal and $\varphi$ is a $\Pi^1_\xi$ or $\Sigma^1_\xi$ formula over $V_\kappa$ for some $\xi<\kappa^+$. Whenever $I$ is a normal ideal on $\kappa$, $G\subseteq P(\kappa)/I$ is generic over $V$ and $j:V\to V^\kappa/G$ is the corresponding generic ultrapower such that $\kappa$ is regular in $V^\kappa/G$, it follows that $j(\varphi)\mathrm{|}^{j(\kappa)}_\kappa$ is a $\Pi^1_\xi$ or $\Sigma^1_\xi$ formula, respectively, in $V^\kappa/G$ and furthermore, \[j(\varphi)\mathrm{|}^{j(\kappa)}_\kappa=\varphi.\] \end{lemma}
\begin{proof} We proceed by induction on $\xi<\kappa^+$. By Remark \ref{remark_coding}(1) and the definition of the restriction operation, the case when $\xi<\kappa$ is easy since \[j(\varphi(A_1,\ldots,A_n))\mathrm{|}^{j(\kappa)}_\kappa=\varphi(j(A_1),\ldots,j(A_n))\mathrm{|}^{j(\kappa)}_\kappa=\varphi(A_1,\ldots,A_n).\]
Suppose $\xi=\zeta+1$ is a successor ordinal above $\kappa$ and $\varphi=\forall X\psi(X)$ where $\psi(X)$ is a $\Sigma^1_\zeta$ formula over $V_\kappa$. By Remark \ref{remark_coding}(2),
\[j(\varphi)\mathrm{|}^{j(\kappa)}_\kappa=j(\forall X\psi(X))\mathrm{|}^{j(\kappa)}_\kappa=\forall X j(\psi(X))\mathrm{|}^{j(\kappa)}_\kappa=\forall X\psi(X),\] where the last equality holds by the inductive hypothesis. Essentially the same argument works when $\varphi=\exists X\psi(X)$ and $\psi(X)$ is a $\Pi^1_\zeta$ formula over $V_\kappa$.
Suppose $\xi\in\kappa^+\setminus\kappa$ is a limit and $\varphi=\bigwedge_{\zeta<\xi}\psi_\zeta$ is a $\Pi^1_\xi$ formula over $V_\kappa$. Let $\vec\psi=\langle\psi_\zeta\mid\zeta<\xi\rangle$, and let $\vec\pi=\langle\pi^\kappa_{\xi,\alpha}\mid\alpha<\kappa\rangle$. By elementarity $j(\vec{\pi})_\kappa$ is the transitive collapse of $j(F^\kappa_\xi)(\kappa)=j"\xi$ to $j(f^\kappa_\xi)(\kappa)=\xi$ and hence $j(\vec{\pi})_\kappa\upharpoonright j"\xi=j^{-1}\upharpoonright j"\xi$. Furthermore, for each $\zeta<\xi$ we have $j(\vec{\psi})_{j(\vec{\pi})_\kappa^{-1}(\zeta)}\mathrm{|}^{j(\kappa)}_\kappa=j(\vec{\psi})_{j(\zeta)}\mathrm{|}^{j(\kappa)}_\kappa=j(\psi_\zeta)\mathrm{|}^{j(\kappa)}_\kappa$, which is $\Pi^1_\zeta$ in $V^\kappa/G$ by our inductive hypothesis. Thus we have
\begin{align*} j(\varphi)\mathrm{|}^{j(\kappa)}_\kappa&=\bigwedge_{\zeta<f^{j(\kappa)}_{j(\xi)}(\kappa)}j(\vec{\psi})_{j(\vec{\pi})_\kappa^{-1}(\zeta)}\mathrm{|}^{j(\kappa)}_\kappa\\
&=\bigwedge_{\zeta<\xi}j(\psi_\zeta)\mathrm{|}^{j(\kappa)}_\kappa\\
&=\varphi. \end{align*} The case when $\varphi$ is a $\Sigma^1_\xi$ formula is treated in exactly the same way. \end{proof}
A nice feature of our definition of restriction is that it provides a convenient way to represent $\Pi^1_\xi$ and $\Sigma^1_\xi$ formulas in generic ultrapowers.
\begin{lemma}[{Cody-Holy \cite{cody_holy_2022}}]\label{lemma_represent} Suppose $I$ is a normal ideal on $\kappa$, $G\subseteq P(\kappa)/I$ is generic over $V$, $j:V\to V^\kappa/G$ is the corresponding generic ultrapower and $\kappa$ is regular in $V^\kappa/G$. Suppose $\varphi$ is a $\Pi^1_\xi$ or $\Sigma^1_\xi$ formula over $V_\kappa$ for some $\xi<\kappa^+$ and let $\Phi:\kappa\to V_\kappa$ be such that $\Phi(\alpha)=\varphi\mathrm{|}^\kappa_\alpha$ for every regular $\alpha<\kappa$. Then, $\Phi$ represents $\varphi$ in $V^\kappa/G$. That is, $j(\Phi)(\kappa)=\varphi$. \end{lemma}
\begin{proof} This is an easy consequence of Lemma \ref{lemma_j_of} since \[j(\Phi)(\kappa)=j(\langle\varphi\mathrm{|}^\kappa_\alpha:\alpha\in\kappa\cap{\rm REG}\rangle)_\kappa=j(\varphi)\mathrm{|}^{j(\kappa)}_\kappa=\varphi.\] \end{proof}
As an easy consequence of Lemma \ref{lemma_j_of}, we see that for each $\xi\in\kappa^+\setminus\kappa$, the definition of $\varphi\mathrm{|}^\kappa_\alpha$ where $\varphi$ is $\Pi^1_\xi$ or $\Sigma^1_\xi$ over $V_\kappa$ is independent of which bijection $b_{\kappa,\xi}$ is used in its computation, modulo the nonstationary ideal on $\kappa$.
\begin{corollary}\label{corollary_restriction_is_well_defined}
Suppose $\kappa$ is a regular cardinal, $\xi\in\kappa^+\setminus\kappa$ and $\varphi$ is a $\Pi^1_\xi$ or $\Sigma^1_\xi$ formula over $V_\kappa$. Let $b_{\kappa,\xi}$ and $\bar{b}_{\kappa,\xi}$ be bijections from $\kappa$ to $\xi$, and let $\varphi\mathrm{|}^\kappa_\alpha$ and $\varphi\bar{\mathrm{|}}^\kappa_\alpha$ denote the restriction of $\varphi$ to a regular $\alpha<\kappa$ defined using $b_{\kappa,\xi}$ and $\bar{b}_{\kappa,\xi}$ respectively. Then there is a club $C\subseteq\kappa$ such that for all regular $\alpha\in C$ we have $\varphi\mathrm{|}^\kappa_\alpha=\varphi\bar{\mathrm{|}}^\kappa_\alpha$.
\begin{proof} To prove the existence of such a club we use Proposition \ref{proposition_framework2}. Let $G\subseteq P(\kappa)/{\mathop{\rm NS}}_\kappa$ be generic over $V$ such that $\kappa$ is regular in $V^\kappa/G$ and let $j:V\to V^\kappa/G$ be the corresponding generic ultrapower. Lemma \ref{lemma_j_of} implies that $j(\varphi)\mathrm{|}^{j(\kappa)}_\kappa=\varphi=j(\varphi)\bar{\mathrm{|}}^{j(\kappa)}_\kappa$. \end{proof}
The following lemma was essentially established in an earlier version of the current article using a more complicated proof. The following simplified proof is due to Cody-Holy and appears in \cite{cody_holy_2022}.
\begin{lemma}\label{lemma_restriction_is_nice}
Suppose $\kappa$ is weakly Mahlo. For any $\xi<\kappa^+$, if $\varphi$ is a $\Pi^1_\xi$ or $\Sigma^1_\xi$ formula over $V_\kappa$, then there is a club subset $C_\varphi$ of $\kappa$ such that for any regular $\alpha\in C_\varphi$, $\varphi\mathrm{|}^\kappa_\alpha$ is defined, and therefore a $\Pi^1_{f^\kappa_\xi(\alpha)}$ or $\Sigma^1_{f^\kappa_\xi(\alpha)}$ formula over $V_\alpha$ respectively by Remark \ref{remark_definition_of_restriction}. \end{lemma} \begin{proof} Suppose $\varphi$ is a $\Pi^1_\xi$ formula over $V_\kappa$. To prove the existence of $C_\varphi$ we use Proposition \ref{proposition_framework2}. Suppose $G\subseteq P(\kappa)/{\mathop{\rm NS}}_\kappa$ is generic over $V$ such that $\kappa$ is regular in $V^\kappa/G$ and let $j:V\to V^\kappa/G$ be the corresponding generic ultrapower. By Lemma~\ref{lemma_j_of}, $j(\varphi)\mathrm{|}^{j(\kappa)}_\kappa=\varphi$ is clearly defined and is a $\Pi^1_\xi$ formula in $V^\kappa/G$. \end{proof}
\begin{definition}\label{definition_indescribability}
Suppose $\kappa$ is a cardinal and $\xi<\kappa^+$. A set $S\subseteq\kappa$ is \emph{$\Pi^1_\xi$-indescribable} if for every $\Pi^1_\xi$ sentence $\varphi$ over $V_\kappa$, if $V_\kappa\models\varphi$ then there is some $\alpha\in S$ such that $V_\alpha\models\varphi\mathrm{|}^\kappa_\alpha$. \end{definition}
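To illustrate Definition \ref{definition_indescribability} in the simplest case not covered by Bagaria's notion, take $\xi=\kappa$. Since $f^\kappa_\kappa$ is the identity function and each $\pi^\kappa_{\kappa,\alpha}$ is the identity on $\alpha$, a set $S\subseteq\kappa$ is $\Pi^1_\kappa$-indescribable precisely when for every sequence $\<\varphi_\zeta:\zeta<\kappa\rangle$ such that each $\varphi_\zeta$ is a $\Pi^1_\zeta$ sentence over $V_\kappa$ and \[V_\kappa\models\bigwedge_{\zeta<\kappa}\varphi_\zeta,\] there is some $\alpha\in S$ for which $\bigwedge_{\zeta<\alpha}\varphi_\zeta\mathrm{|}^\kappa_\alpha$ is defined and true in $V_\alpha$.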
It easily follows from Corollary \ref{corollary_restriction_is_well_defined} that the above notion of indescribability does not depend on which sequence $\<b_{\kappa,\xi}:\xi\in\kappa^+\setminus\kappa\rangle$ is used to compute restrictions of formulas.
In Proposition \ref{proposition_measurable} below, we establish that the notion of indescribability given in Definition \ref{definition_indescribability} is relatively consistent by showing that every measurable cardinal $\kappa$ is $\Pi^1_\xi$-indescribable for all $\xi<\kappa^+$ and, in terms of consistency strength, the existence of a cardinal $\kappa$ which is $\Pi^1_\xi$-indescribable for all $\xi<\kappa^+$ is strictly weaker than the existence of a measurable cardinal.
\begin{proposition}\label{proposition_measurable} Suppose $U$ is a normal measure on a measurable cardinal $\kappa$. Then $\kappa$ is $\Pi^1_\xi$-indescribable for all $\xi<\kappa^+$ and the set \[X=\{\alpha<\kappa:\text{$\alpha$ is $\Pi^1_\xi$-indescribable for all $\xi<\alpha^+$}\}\] is in $U$. \end{proposition}
\begin{proof} Let $j:V\to M$ be the usual ultrapower embedding obtained from $U$ where $M$ is transitive and $j$ has critical point $\kappa$. Let us show that the set $X$ is in $U$; the fact that $\kappa$ is $\Pi^1_\xi$-indescribable for all $\xi<\kappa^+$ follows by a similar argument. Notice that it follows directly from Lemma \ref{lemma_restriction_is_nice} that for any $\xi<\kappa^+$ if $\varphi$ is any $\Pi^1_\xi$ formula over $V_\kappa$ then, in $M$, the formula $j(\varphi)\mathrm{|}^{j(\kappa)}_\kappa$ is $\Pi^1_\xi$ over $V_\kappa$ (because $\kappa\in j(C_\varphi)$). Furthermore, by Lemma \ref{lemma_j_of} we have \begin{align} j(\varphi)\mathrm{|}^{j(\kappa)}_\kappa=\varphi.\label{equation_measurable} \end{align} It will suffice to show that, in $M$, $\kappa$ is $\Pi^1_\xi$-indescribable for all limit ordinals $\xi<\kappa^+$. Fix a limit ordinal $\xi<\kappa^+$ and suppose \[(V_\kappa\models\varphi)^M\] where $\varphi$ is $\Pi^1_\xi$ over $V_\kappa$ in $M$. Since $H(\kappa^+)^V= H(\kappa^+)^M$, we have $\varphi\in V$ and $\varphi$ is $\Pi^1_\xi$ over $V_\kappa$ in $V$. It follows by (\ref{equation_measurable}), that \[\left((\exists\alpha<j(\kappa))\ V_\alpha\models j(\varphi)\mathrm{|}^{j(\kappa)}_\alpha\right)^M\] and thus, by elementarity, there is some $\alpha<\kappa=\mathop{\rm crit}(j)$ such that \[V_\alpha\models\varphi\mathrm{|}^\kappa_\alpha.\] Thus, $(V_\alpha\models\varphi\mathrm{|}^\kappa_\alpha)^M$ and hence $\kappa$ is $\Pi^1_\xi$-indescribable in $M$. \end{proof}
\section{Restricting $L_{\kappa^+,\kappa^+}$ formulas and consistency of $L_{\kappa^+,\kappa^+}$-indescribability}\label{section_other}
We will need to apply elementary embeddings to formulas of $L_{\kappa^+,\kappa^+}$, so let us consider some assumptions regarding the set-theoretic nature of these formulas. For example, we assume that if $j:V\to M$ is an elementary embedding with critical point $\kappa$ then $j(\lnot\varphi)=\lnot j(\varphi)$ and $j(\exists\vec{x}\,\psi)=\exists j(\vec{x})\,j(\psi)$; we also make additional assumptions as in Remark \ref{remark_coding}(3) above, but we will not discuss this further.
Typically, when defining $L_{\kappa^+,\kappa^+}$ formulas, one begins by fixing a supply of $\kappa^+$-many variables that can be used to form $L_{\kappa^+,\kappa^+}$ sentences. However, without loss of generality, we will assume that we begin with a supply of $\kappa$-many variables $\{x_\eta:\eta<\kappa\}$. This assumption does not weaken the expressive power of $L_{\kappa^+,\kappa^+}$ sentences or theories (without parameters) in the language of set theory, because any particular sentence defined using a supply of $\kappa^+$-many variables only actually mentions $\kappa$-many. This assumption allows us to write $L_{\kappa^+,\kappa^+}$ formulas beginning with existential quantifiers in the form $\exists\<x_{\alpha_\eta}:\eta<\gamma\rangle\psi$, where the domain of the sequence of variables being quantified over is simply some $\gamma\leq\kappa$ rather than some $\xi<\kappa^+$, and where $\langle\alpha_\eta:\eta<\gamma\rangle$ is an increasing sequence of ordinals less than $\kappa$. Another consequence of this assumption is that we may assume that all variables are elements of $V_\kappa$ when $\kappa=|V_\kappa|$, and hence if $j:V\to M$ is an elementary embedding with critical point $\kappa$, we will have $j(x)=x$ for all variables $x$. We will also assume that for all cardinals $\alpha<\kappa$ the set $\{x_\eta:\eta<\alpha\}$ constitutes the supply of $\alpha$-many variables used to form all $L_{\alpha^+,\alpha^+}$ sentences.
For a regular cardinal $\kappa$ and an ordinal $\alpha<\kappa$, we define $\varphi\mathrm{|}^\kappa_\alpha$ for all $L_{\kappa^+,\kappa^+}$ formulas $\varphi$ by induction on subformulas. For more on such induction principles, see \cite[Page 64]{MR0539973}.
\begin{definition}\label{definition_restriction_2} Suppose $\kappa$ is a regular cardinal and $\alpha<\kappa$ is a cardinal. We define $\varphi\mathrm{|}^\kappa_\alpha$ for all formulas $\varphi$ of $L_{\kappa^+,\kappa^+}$ in a given signature by induction on complexity of $\varphi$. \begin{enumerate} \item If $\varphi$ is a term equation $t_1=t_2$ or a relational formula of the form $R(t_1,\ldots,t_k)$ we define $\varphi\mathrm{|}^\kappa_\alpha$ to be $\varphi$. \item If $\varphi$ is of the form $\lnot\psi$ where $\psi\mathrm{|}^\kappa_\alpha$ has already been defined, we let $\varphi\mathrm{|}^\kappa_\alpha$ be the formula $\lnot(\psi\mathrm{|}^\kappa_\alpha)$. \item If $\varphi$ is of the form $\bigwedge_{\zeta<\xi}\varphi_\zeta$ where $\xi<\kappa^+$ and $\varphi_\zeta\mathrm{|}^\kappa_\alpha$ has been defined for all $\zeta<\xi$, then we define \[\varphi\mathrm{|}^\kappa_\alpha=\bigwedge_{\zeta < f^\kappa_\xi(\alpha)}\varphi_{(\pi^\kappa_{\xi,\alpha})^{-1}(\zeta)}\mathrm{|}^\kappa_\alpha,\] provided that this definition of $\varphi\mathrm{|}^\kappa_\alpha$ is a formula of $L_{\alpha^+,\alpha^+}$; otherwise we leave $\varphi\mathrm{|}^\kappa_\alpha$ undefined. \item If $\varphi$ is of the form $\exists\<x_{\alpha_\eta}:\eta<\gamma\rangle\psi$ where $\gamma\leq\kappa$, $\langle\alpha_\eta:\eta<\gamma\rangle$ is an increasing sequence of ordinals less than $\kappa$ and $\psi\mathrm{|}^\kappa_\alpha$ has already been defined, we let \[\varphi\mathrm{|}^\kappa_\alpha=\exists\<x_{\alpha_\eta}:\eta<\alpha\cap\gamma\rangle\ \psi\mathrm{|}^\kappa_\alpha,\] provided that this definition of $\varphi\mathrm{|}^\kappa_\alpha$ is a formula of $L_{\alpha^+,\alpha^+}$; otherwise we leave $\varphi\mathrm{|}^\kappa_\alpha$ undefined. \end{enumerate} \end{definition}
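As a simple illustration of clauses (3) and (4), and using the variable conventions fixed above, consider the $L_{\kappa^+,\kappa^+}$ sentence \[\varphi=\exists\<x_\eta:\eta<\kappa\rangle\bigwedge_{\zeta<\kappa}\bigwedge_{\eta<\zeta}x_\eta\in x_\zeta,\] which asserts the existence of an $\in$-increasing sequence of length $\kappa$. Since $\pi^\kappa_{\zeta,\alpha}$ is the identity map for every $\zeta\leq\kappa$, with $f^\kappa_\zeta(\alpha)=\zeta$ for $\zeta<\kappa$ and $f^\kappa_\kappa(\alpha)=\alpha$, and since atomic formulas restrict to themselves, for every cardinal $\alpha<\kappa$ we obtain \[\varphi\mathrm{|}^\kappa_\alpha=\exists\<x_\eta:\eta<\alpha\rangle\bigwedge_{\zeta<\alpha}\bigwedge_{\eta<\zeta}x_\eta\in x_\zeta.\] In particular, $V_\kappa\models\varphi$ and $V_\alpha\models\varphi\mathrm{|}^\kappa_\alpha$ for every cardinal $\alpha<\kappa$, as witnessed by taking $x_\eta=\eta$.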
As for the notion of restriction of $\Pi^1_\xi$ formulas considered in Section \ref{section_definition_pi1xi} above, one can easily show that for $L_{\kappa^+,\kappa^+}$ formulas, the definition of $\varphi\mathrm{|}^\kappa_\alpha$ is independent of our choice of bijections $\<b_{\kappa,\xi}:\xi\in\kappa^+\setminus\kappa\rangle$ modulo the nonstationary ideal on $\kappa$.
Notice that, in Definition \ref{definition_restriction_2}(4), it is at least conceivable that some of the bound variables of $\varphi$ could become free variables of $\varphi\mathrm{|}^\kappa_\alpha$. However, it easily follows from the next lemma that this can happen only for a nonstationary set of $\alpha$, and hence this aspect of the definition can be ignored in all of the cases that we care about.
\begin{lemma}\label{lemma_alternative_indescribability} Suppose $\kappa$ is a regular cardinal and $\varphi$ is an $L_{\kappa^+,\kappa^+}$ formula in the language of set theory. If $I$ is a normal ideal on $\kappa$, $G\subseteq P(\kappa)/I$ is generic over $V$ and $j:V\to V^\kappa/G$ is the corresponding generic ultrapower such that $\kappa$ is regular in $V^\kappa/G$, then it follows that $j(\varphi)\mathrm{|}^{j(\kappa)}_\kappa=\varphi$ is a formula of $L_{\kappa^+,\kappa^+}$ in $V^\kappa/G$.
\end{lemma}
\begin{proof} When $\varphi$ is a relational formula, it follows by our assumptions on the set-theoretic nature of such formulas that $j(\varphi)\mathrm{|}^{j(\kappa)}_\kappa=\varphi\mathrm{|}^{j(\kappa)}_\kappa=\varphi$. If the result holds for $\psi$ and $\varphi$ is of the form $\lnot\psi$, then it clearly holds for $\varphi$ too.
Now suppose $\varphi$ is of the form \[\varphi=\bigwedge_{\zeta<\xi}\varphi_\zeta,\] where $\xi<\kappa^+$. Define sequences $\vec{\varphi}=\langle\varphi_\zeta:\zeta<\xi\rangle$ and $\vec{\pi}=\langle\pi^\kappa_{\xi,\alpha}:\alpha<\kappa\rangle$. Recall that $j(\vec{\pi})_\kappa \upharpoonright j"\xi=j^{-1}\upharpoonright j"\xi$. We have \begin{align*} j(\varphi)=\bigwedge_{\zeta<j(\xi)} j(\vec{\varphi})_\zeta \end{align*} and thus \begin{align*} j(\varphi)\mathrm{|}^{j(\kappa)}_\kappa&=\bigwedge_{\zeta<j(f^\kappa_\xi)(\kappa)} j(\vec{\varphi})_{j(\vec{\pi})_\kappa^{-1}(\zeta)}\mathrm{|}^{j(\kappa)}_\kappa\\
&=\bigwedge_{\zeta<\xi} j(\vec{\varphi})_{j(\zeta)}\mathrm{|}^{j(\kappa)}_\kappa\\
&=\bigwedge_{\zeta<\xi} j(\varphi_\zeta)\mathrm{|}^{j(\kappa)}_\kappa\\
&=\bigwedge_{\zeta<\xi}\varphi_\zeta\\
&=\varphi. \end{align*} Now suppose $\varphi$ is of the form $\exists\<x_{\alpha_\eta}:\eta<\gamma\rangle\psi$ where $\gamma\leq\kappa$ and $\langle\alpha_\eta:\eta<\gamma\rangle$ is an increasing sequence of ordinals less than $\kappa$. Let $\vec{x}=\<x_{\alpha_\eta}:\eta<\gamma\rangle$. We have \[j(\varphi)=\exists j(\vec{x}) j(\psi)\] and thus \begin{align*} j(\varphi)\mathrm{|}^{j(\kappa)}_\kappa&=\exists j(\vec{x})\upharpoonright(\kappa\cap j(\gamma)) \ j(\psi)\mathrm{|}^{j(\kappa)}_\kappa\\
&=\exists\vec{x}\psi\\
&=\varphi. \end{align*}
\end{proof}
One can easily show that the following definition of $L_{\kappa^+,\kappa^+}$-indescribability is not dependent on which sequence of bijections is used to compute restrictions of $L_{\kappa^+,\kappa^+}$ formulas.
\begin{definition} Suppose $\kappa$ is a regular cardinal. A set $S\subseteq\kappa$ is \emph{$L_{\kappa^+,\kappa^+}$-indescribable} if for all sentences $\varphi$ of $L_{\kappa^+,\kappa^+}$ in the language of set theory, if $V_\kappa\models\varphi$ then there is some $\alpha\in S$ such that $V_\alpha\models\varphi\mathrm{|}^\kappa_\alpha$. \end{definition}
From Lemma \ref{lemma_alternative_indescribability} and an argument similar to that given above for Proposition \ref{proposition_measurable} we obtain the following, which shows that the existence of a cardinal $\kappa$ which is $L_{\kappa^+,\kappa^+}$-indescribable is strictly weaker than the existence of a measurable cardinal.
\begin{proposition}\label{proposition_Lkappa+} Suppose $U$ is a normal measure on a measurable cardinal $\kappa$. Then $\kappa$ is $L_{\kappa^+,\kappa^+}$-indescribable and the set \[\{\alpha<\kappa:\text{$\alpha$ is $L_{\alpha^+,\alpha^+}$-indescribable}\}\] is in $U$. \end{proposition}
\section{Higher $\Pi^1_\xi$-indescribability ideals}
In this section, given a regular cardinal $\kappa$, we prove the existence of universal $\Pi^1_\xi$ formulas for all $\xi<\kappa^+$ and use such formulas to show that the natural ideal on $\kappa$ associated to $\Pi^1_\xi$-indescribability is normal. We then use universal formulas to show that $\Pi^1_\xi$-indescribability is, in a sense, expressible by a $\Pi^1_{\xi+1}$ formula. This leads to several hierarchy results and a characterization of $\Pi^1_\xi$-indescribability in terms of the natural filter base consisting of the $\Pi^1_\xi$-club subsets of $\kappa$.
\begin{remark}\label{remark_diagonal} Let us make a brief remark about normal ideals and a notion of diagonal intersection we will use in several places below. Recall that an ideal $I$ on a regular cardinal $\kappa$ is \emph{normal} if and only if for any positive set $S\in I^+=\{X\subseteq\kappa: X\notin I\}$ and every function $f:S\to\kappa$ with $f(\alpha)<\alpha$ for all $\alpha\in S$, there is a positive set $T\in P(S)\cap I^+$ such that $f$ is constant on $T$. Equivalently, $I$ is normal if and only if the filter $I^*$ dual to $I$ is closed under diagonal intersection; that is, whenever $\vec{C}=\<C_\alpha:\alpha<\kappa\rangle$ is a sequence of sets in $I^*$ then $\mathop{\text{\Large$\bigtriangleup$}}\vec{C}=\mathop{\text{\Large$\bigtriangleup$}}_{\alpha<\kappa}C_\alpha=\{\alpha<\kappa:\alpha\in\bigcap_{\beta<\alpha}C_\beta\}$ is in $I^*$. Since diagonal intersections are independent, modulo the nonstationary ideal, of the particular enumeration of the sets involved, it follows that an ideal $I$ on $\kappa$ is normal if and only if for all $\xi\in\kappa^+\setminus\kappa$ whenever $\vec{C}=\<C_\zeta:\zeta<\xi\rangle$ is a sequence of sets in $I^*$, the set \[\mathop{\text{\Large$\bigtriangleup$}}\vec{C}=\mathop{\text{\Large$\bigtriangleup$}}_{\zeta<\xi}C_\zeta=\{\alpha<\kappa: \alpha\in \bigcap_{\zeta\in b_{\kappa,\xi}[\alpha]}C_\zeta\}\] is in $I^*$, where $b_{\kappa,\xi}:\kappa\to\xi$ is a bijection. In what follows, we will often make use of the fact that the club filter on a regular $\kappa$ is closed under such diagonal intersections. \end{remark}
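For example, if $\xi=\kappa+1$ and $b_{\kappa,\kappa+1}$ happens to be the bijection sending $0$ to $\kappa$ and $1+\gamma$ to $\gamma$ for $\gamma<\kappa$ (any other choice leads to the same set modulo ${\mathop{\rm NS}}_\kappa$), then for every sequence $\vec{C}=\<C_\zeta:\zeta<\kappa+1\rangle$ of sets in $I^*$ the set $\mathop{\text{\Large$\bigtriangleup$}}_{\zeta<\kappa+1}C_\zeta$ agrees, modulo the nonstationary ideal, with \[\Big(\mathop{\text{\Large$\bigtriangleup$}}_{\zeta<\kappa}C_\zeta\Big)\cap C_\kappa.\]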
\subsection{Universal $\Pi^1_\xi$ formulas and normal ideals}\label{section_universal}
For a regular cardinal $\kappa>\omega$, we define the notion of universal $\Pi^1_\xi$ formula, where $\xi<\kappa^+$, as follows. If $\xi<\kappa$ then we adopt a definition of universal $\Pi^1_\xi$ formula over $V_\kappa$, which is similar to that of \cite{MR3894041}, but we need a different notion for $\xi\in\kappa^+\setminus\kappa$.
\begin{definition}\label{definition_universal} Suppose $\kappa$ is a regular cardinal and $\xi<\kappa$. We say that a $\Pi^1_\xi$ formula $\Psi(X_1,\ldots,X_n,Y_\xi)$ over $V_\kappa$, where $X_1,\ldots,X_n,Y_\xi$ are second-order variables, is a \emph{universal $\Pi^1_\xi$ formula at $\kappa$ for formulas with $n$ free second-order variables} if for all $\Pi^1_\xi$ formulas $\varphi(X_1,\ldots,X_n)$ over $V_\kappa$, with all free variables displayed, there is a $K_\varphi\in V_\kappa$, referred to as a \emph{code for $\varphi$}, such that for all $A_1,\ldots,A_n\subseteq V_\kappa$ and all regular $\alpha\in \kappa\setminus\xi$ we have \[V_\alpha\models\varphi(A_1,\ldots,A_n)\text{ if and only if }V_\alpha\models\Psi(A_1,\ldots,A_n,K_\varphi).\]
On the other hand, suppose $\xi\in\kappa^+\setminus\kappa$. We say that a $\Pi^1_\xi$ formula $\Psi(X_1,\ldots,X_n,Y_\xi)$ over $V_\kappa$, where $X_1,\ldots,X_n,Y_\xi$ are second-order variables, is a \emph{universal $\Pi^1_\xi$ formula at $\kappa$ for formulas with $n$ free second-order variables} if for all $\Pi^1_\xi$ formulas $\varphi(X_1,\ldots,X_n)$ over $V_\kappa$, with all free variables displayed, there is a $K_\varphi\subseteq\kappa$ and there is a club $C_\varphi\subseteq\kappa$ such that for all $A_1,\ldots,A_n\subseteq V_\kappa$ and all regular $\alpha\in C_\varphi\cup\{\kappa\}$ we have \[V_\alpha\models\varphi(A_1,\ldots,A_n)\mathrm{|}^\kappa_\alpha\text{ if and only if }V_\alpha\models\Psi(A_1,\ldots,A_n,K_\varphi)\mathrm{|}^\kappa_\alpha.\] When $n=0$, the intended meaning is that $\varphi$ is a $\Pi^1_\xi$ sentence over $V_\kappa$ and $\Psi(Y_\xi)$ has one free variable. The notion of \emph{universal $\Sigma^1_\xi$ formula at $\kappa$ for formulas with $n$ free second-order variables} is defined similarly. \end{definition}
We will use the following lemma to prove that universal $\Pi^1_\xi$ formulas exist at regular $\kappa$ where $\kappa\leq\xi<\kappa^+$.
\begin{lemma}\label{lemma_no_increase} Suppose $\kappa$ is regular and $1\leq\zeta<\kappa^+$. Suppose $\psi_\zeta(W_1,\ldots,W_n,Y,Z)$ is a $\Pi^1_\zeta$ formula over $V_\kappa$ and $\varphi(X,Y)$ is a $\Pi^1_0$ formula over $V_\kappa$ where all free second-order variables are displayed. Then there is a $\Pi^1_\zeta$ formula $\varphi_\zeta(X,Z)$ over $V_\kappa$ and a club $C_\zeta$ in $\kappa$ such that for all $A,B\subseteq V_\kappa$ and for all regular $\alpha\in C_\zeta\cup\{\kappa\}$ we have \[V_\alpha\models\forall Y\forall W_1\cdots\forall W_n(\varphi(A\cap V_\alpha,Y)\lor\psi_\zeta(W_1,\ldots, W_n,Y,B)\mathrm{|}^\kappa_\alpha)\] if and only if \[V_\alpha\models \varphi_\zeta(A,B)\mathrm{|}^\kappa_\alpha.\] Furthermore, a similar statement holds for $\Sigma^1_\zeta$ formulas $\psi_\zeta'$ over $V_\kappa$. \end{lemma}
\begin{proof} We provide a proof for the case in which $\psi_\zeta$ is a $\Pi^1_\zeta$ formula over $V_\kappa$. The other case, in which the formulas are $\Sigma^1_\zeta$, is similar. We proceed by induction on $\zeta$. If $\zeta=1$ then $\psi_1(W_1,\ldots, W_n,Y,Z)$ is of the form $\forall W\psi_0(W_1,\ldots, W_n,W,Y,Z)$ where $\psi_0(W_1,\ldots,W_n,W,Y,Z)$ is $\Pi^1_0$ over $V_\kappa$ and we see that \[\forall Y\forall W_1\cdots\forall W_n(\varphi(X,Y)\lor\psi_1(W_1,\ldots,W_n,Y,Z))\] is equivalent over $V_\kappa$ to the $\Pi^1_1$ formula \[\varphi_1(X,Z)=\forall Y\forall W_1\cdots\forall W_n\forall W(\varphi(X,Y)\lor\psi_0(W_1,\ldots,W_n,W,Y,Z)).\] Since restrictions of $\Pi^1_1$ formulas are trivial, this establishes the base case, taking $C_1=\kappa$.
If $\zeta=\eta+1<\kappa^+$ is a successor ordinal, then $\psi_{\eta+1}(W_1,\ldots,W_n,Y,Z)$ is of the form $\forall W \psi_\eta'(W_1,\ldots,W_n,W,Y,Z)$ where $\psi_\eta'$ is $\Sigma^1_\eta$ over $V_\kappa$. Clearly the formula \[\varphi_{\eta+1}:=\forall Y\forall W_1\cdots\forall W_n\forall W(\varphi(X,Y)\lor \psi_\eta'(W_1,\ldots,W_n,W,Y,Z))\] is $\Pi^1_{\eta+1}$ over $V_\kappa$ and satisfies the desired property together with the club $C_{\eta+1}=C_\eta$ obtained from the inductive hypothesis.
If $\zeta<\kappa^+$ is a limit ordinal, then \[\psi_\zeta(W_1,\ldots,W_n,Y,Z)=\bigwedge_{\eta<\zeta}\psi_\eta(W_1,\ldots,W_n,Y,Z)\] where $\psi_\eta$ is $\Pi^1_\eta$ over $V_\kappa$ for all $\eta<\zeta$. In this case, the formula \[\forall Y\forall W_1\cdots\forall W_n(\varphi(X,Y)\lor\psi_\zeta(W_1,\ldots,W_n,Y,Z))\] is equivalent over $V_\kappa$ to \[\bigwedge_{\eta<\zeta}\forall Y\forall W_1\cdots\forall W_n(\varphi(X,Y)\lor\psi_\eta(W_1,\ldots,W_n,Y,Z)),\] and by our inductive hypothesis, for each $\eta<\zeta$, there is a $\Pi^1_\eta$ formula $\varphi_\eta(X,Z)$ over $V_\kappa$ and a club $C_\eta$ in $\kappa$ such that for all $A,B\subseteq V_\kappa$ and all regular $\alpha\in C_\eta\cup\{\kappa\}$ we have \[V_\alpha\models\forall Y\forall W_1\cdots\forall W_n(\varphi(A,Y)\lor\psi_\eta(W_1,\ldots, W_n,Y,B)\mathrm{|}^\kappa_\alpha)\] if and only if \[V_\alpha\models\varphi_\eta(A,B)\mathrm{|}^\kappa_\alpha.\] It is easy to verify that the formula \[\varphi_\zeta(X,Z)=\bigwedge_{\eta<\zeta}\varphi_\eta(X,Z)\] and a club subset $C_\zeta$ of $\mathop{\text{\Large$\bigtriangleup$}}_{\eta<\zeta}C_\eta=\{\alpha<\kappa:\alpha\in\bigcap_{\eta\in F^\kappa_\zeta(\alpha)}C_\eta\}$ are as desired.\footnote{Recall that $\mathop{\text{\Large$\bigtriangleup$}}_{\eta<\zeta}C_\eta$ is in the club filter on $\kappa$ by Remark \ref{remark_diagonal}, and hence contains such a club $C_\zeta$.} \end{proof}
The following theorem generalizes results of L\'{e}vy \cite{MR0281606} and Bagaria \cite{MR3894041}; L\'{e}vy proved the case in which $\xi<\omega$ and Bagaria proved the case in which $\xi<\kappa$.
\begin{theorem}\label{theorem_universal} Suppose $\kappa>\omega$ is a regular cardinal and $\xi$ is an ordinal with $\xi<\kappa^+$. For each $n<\omega$ there is a universal $\Pi^1_\xi$ formula $\Psi^\kappa_{\xi,n}(X_1,\ldots,X_n,Y_\xi)$ and a universal $\Sigma^1_\xi$ formula $\bar{\Psi}^\kappa_{\xi,n}(X_1,\ldots,X_n,Y_\xi)$ at $\kappa$ for formulas with $n$ free second-order variables. \end{theorem}
\begin{proof} The case in which $\xi<\kappa$ follows directly from the proof of \cite[Proposition 4.4]{MR3894041}. Suppose $\xi=\zeta+1$ is a successor ordinal with $\kappa<\zeta+1<\kappa^+$ and the result holds for all $\eta\leq\zeta$. Let us show that there is a universal $\Pi^1_{\zeta+1}$ formula at $\kappa$ for formulas with $n$ free second-order variables; a similar argument works for $\Sigma^1_{\zeta+1}$ formulas, which we leave to the reader. Let $\bar{\Psi}_{\zeta,n+1}^\kappa(X_1,\ldots,X_n,X_{n+1},Y_\xi)$ be a universal $\Sigma^1_\zeta$ formula at $\kappa$ for formulas with $n+1$ free second-order variables obtained from the induction hypothesis. We will show that \[\Psi_{\zeta+1,n}^\kappa(X_1,\ldots,X_n,Y_\xi)=\forall W\bar{\Psi}_{\zeta,n+1}^\kappa(X_1,\ldots,X_n,W,Y_\xi)\] is the desired formula. Suppose $\varphi(X_1,\ldots,X_n)=\forall W\varphi_\zeta(X_1,\ldots,X_n,W)$ is any $\Pi^1_{\zeta+1}$ formula with $n$ free second-order variables, where $\varphi_\zeta$ is $\Sigma^1_\zeta$ with $n+1$ free second-order variables.\footnote{Note that if we had here a block of quantifiers $\forall W_1\cdots\forall W_k$, they could be collapsed to a single one by modifying $\varphi_\zeta$ without changing the fact that $\varphi_\zeta$ is $\Sigma^1_\zeta$.} Let $C_{\varphi_\zeta}$ and $K_{\varphi_\zeta}$ be as obtained from the inductive hypothesis. Fix $A_1,\ldots,A_n\subseteq V_\kappa$. Then for all regular $\alpha\in C_{\varphi_\zeta}\cup\{\kappa\}$ we have \begin{align*} V_\alpha\models\varphi(A_1,\ldots,A_n)\mathrm{|}^\kappa_\alpha&\iff (\forall W\subseteq V_\alpha) V_\alpha\models \varphi_\zeta(A_1,\ldots,A_n,W)\mathrm{|}^\kappa_\alpha\\
&\iff (\forall W\subseteq V_\alpha) V_\alpha\models \bar{\Psi}_{\zeta,n+1}^\kappa(A_1,\ldots,A_n,W,K_{\varphi_\zeta})\mathrm{|}^\kappa_\alpha\\
&\iff V_\alpha\models \forall W\bar{\Psi}_{\zeta,n+1}^\kappa(A_1,\ldots,A_n,W,K_{\varphi_\zeta})\mathrm{|}^\kappa_\alpha\\
&\iff V_\alpha\models\Psi_{\zeta+1,n}^\kappa(A_1,\ldots,A_n, K_{\varphi_\zeta})\mathrm{|}^\kappa_\alpha, \end{align*} which establishes the successor case of the induction.
Suppose $\xi$ is a limit ordinal with $\kappa\leq\xi<\kappa^+$ and the result holds for all $\zeta<\xi$. We will show that there is a universal $\Pi^1_\xi$ formula at $\kappa$ for formulas with $1$ free second-order variable; the proof for $n$ free second-order variables is essentially the same but one must replace the single variable $X$ with a tuple $X_1,\ldots,X_n$ in the appropriate places. We let $\Gamma:\kappa\times\kappa\to\kappa$ be the usual definable pairing function and for $A\subseteq\kappa$ and $\eta<\kappa$ we let \[(A)_\eta=\{\beta<\kappa:\Gamma(\eta,\beta)\in A\}\] be the ``$\eta^{th}$ slice'' of $A$.
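As a quick illustration of how such codes are decoded, note that if $\<A_\eta:\eta<\kappa\rangle$ is any sequence of subsets of $\kappa$ and we set $A=\{\Gamma(\eta,\beta):\eta<\kappa\land\beta\in A_\eta\}$, then the injectivity of $\Gamma$ gives \[(A)_\eta=\{\beta<\kappa:\Gamma(\eta,\beta)\in A\}=A_\eta\] for every $\eta<\kappa$. This is exactly the mechanism by which the code $K_\varphi$ defined below, as well as the coding sets appearing in later proofs, are recovered slice by slice.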
Suppose $\zeta<\xi$. Using the inductive hypothesis, we let $\Psi^\kappa_{\zeta,1}$ be a universal $\Pi^1_\zeta$ formula at $\kappa$ for formulas with $1$ free variable. We will define the desired universal formula $\Psi_{\xi,1}^\kappa(X,Y_\xi)$ by simply taking the conjunction of the $\Psi^\kappa_{\zeta,1}$'s for $\zeta<\xi$, with the proviso that we must take care to use the right slice of the code $K_\varphi\subseteq\kappa$ we will define for an arbitrary $\Pi^1_\xi$ formula $\varphi=\bigwedge_{\zeta<\xi}\varphi_\zeta$, where $(K_\varphi)_{b_{\kappa,\xi}^{-1}(\zeta)}=K_{\varphi_\zeta}$. With this in mind, note that we will define $\Psi_{\xi,1}^\kappa(X,Y_\xi)$ in such a way that it is equivalent to \[\bigwedge_{\zeta<\xi}\Psi^\kappa_{\zeta,1}(X,(Y_\xi)_{b_{\kappa,\xi}^{-1}(\zeta)})\] over $V_\kappa$. In order to verify that the following definition of $\Psi^\kappa_{\xi,1}$ produces a $\Pi^1_\xi$ formula over $V_\kappa$, we must check that $\Psi^\kappa_{\zeta,1}(X,(Y_\xi)_{b_{\kappa,\xi}^{-1}(\zeta)})$ is expressible by a $\Pi^1_\zeta$ formula over $V_\kappa$.
Suppose $\zeta<\xi$. Notice that for $A\subseteq V_\kappa$ and $B\subseteq\kappa$ the sentence $\Psi^\kappa_{\zeta,1}(A,(B)_{b_{\kappa,\xi}^{-1}(\zeta)})$ is equivalent to \[\forall Y(Y=(B)_{b_{\kappa,\xi}^{-1}(\zeta)}\rightarrow \Psi^\kappa_{\zeta,1}(A,Y)),\] over $V_\kappa$, where $Y=(B)_{b_{\kappa,\xi}^{-1}(\zeta)}$ is expressible as a $\Pi^1_0$ formula over $V_\kappa$ using $b_{\kappa,\xi}^{-1}(\zeta)$ as a parameter and using a first-order quantifier over ordinals. Thus, by Lemma \ref{lemma_no_increase}, we may let $\Theta^\kappa_\zeta(X,Y_\xi)$ be a $\Pi^1_\zeta$ formula over $V_\kappa$ such that for all $A\subseteq V_\kappa$ and $B\subseteq\kappa$, $V_\kappa\models\Theta_\zeta^\kappa(A,B)$ if and only if $V_\kappa\models \Psi^\kappa_{\zeta,1}(A,(B)_{b_{\kappa,\xi}^{-1}(\zeta)})$, and furthermore, we can assume $\Theta_\zeta^\kappa$ has the property that there is a club $D_\zeta$ in $\kappa$ such that for all regular $\alpha\in D_\zeta$ and all $A,B\subseteq V_\kappa$ we have \[V_\alpha\models \Psi^\kappa_{\zeta,1}(A,(B)_{b_{\kappa,\xi}^{-1}(\zeta)})\mathrm{|}^\kappa_\alpha\text{ if and only if }V_\alpha\models\Theta_\zeta^\kappa(A,B)\mathrm{|}^\kappa_\alpha.\]
Now let us check that \[\Psi_{\xi,1}^\kappa(X,Y_\xi)=\bigwedge_{\zeta<\xi}\Theta_\zeta^\kappa(X,Y_\xi)\] is a universal $\Pi^1_\xi$ formula at $\kappa$ for formulas with $1$ free variable. Suppose $\varphi(X)=\bigwedge_{\zeta<\xi}\varphi_\zeta(X)$ is any $\Pi^1_\xi$ formula over $V_\kappa$ with one free second-order variable, where each $\varphi_\zeta$ is $\Pi^1_\zeta$ over $V_\kappa$. Let $K_\varphi=\{\Gamma(b_{\kappa,\xi}^{-1}(\zeta),\beta):\zeta<\xi\land\beta\in K_{\varphi_\zeta}\}$ code the sequence $\<K_{\varphi_\zeta}:\zeta<\xi\rangle$, where the codes $K_{\varphi_\zeta}$ are obtained by the inductive hypothesis. Notice that for each $\zeta<\xi$ we have $(K_\varphi)_{b_{\kappa,\xi}^{-1}(\zeta)}=K_{\varphi_\zeta}$.
Fix $A\subseteq V_\kappa$. It follows easily from the definitions of $K_\varphi$ and $\Psi_{\xi,1}^\kappa(X,Y_\xi)$ that \begin{align*} V_\kappa\models\Psi_{\xi,1}^\kappa(A,K_\varphi)&\iff V_\kappa\models\bigwedge_{\zeta<\xi}\Psi^\kappa_{\zeta,1}(A,(K_{\varphi})_{b_{\kappa,\xi}^{-1}(\zeta)})\\
&\iff V_\kappa\models\bigwedge_{\zeta<\xi}\Psi^\kappa_{\zeta,1}(A,K_{\varphi_\zeta})\\ &\iff V_\kappa\models\varphi(A). \end{align*}
Next let us show that there is a club $C$ in $\kappa$ such that for all regular $\alpha\in C$ we have $V_\alpha\models\varphi(A)\mathrm{|}^\kappa_\alpha$ if and only if $V_\alpha\models\Psi_{\xi,1}^\kappa(A,K_\varphi)\mathrm{|}^\kappa_\alpha$. To prove that such a club exists we use Proposition \ref{proposition_framework2}. Let $G\subseteq P(\kappa)/{\mathop{\rm NS}}_\kappa$ be generic over $V$ such that $\kappa$ is regular in $V^\kappa/G$ and let $j:V\to V^\kappa/G$ be the corresponding generic ultrapower embedding. We must show that $\kappa\in j(T)$ where \begin{align} T=\{\alpha\in{\rm REG}: V_\alpha\models\varphi(A)\mathrm{|}^\kappa_\alpha\iff V_\alpha\models\Psi_{\xi,1}^\kappa(A,K_\varphi)\mathrm{|}^\kappa_\alpha\}. \end{align} By Lemma \ref{lemma_j_of} we have \[j(\varphi(A))\mathrm{|}^{j(\kappa)}_\kappa=\varphi(A)=\bigwedge_{\zeta<\xi}\varphi_\zeta(A)\] and \[j(\Psi_{\xi,1}^\kappa(A,K_\varphi))\mathrm{|}^{j(\kappa)}_\kappa=\Psi_{\xi,1}^\kappa(A,K_\varphi)=\bigwedge_{\zeta<\xi}\Theta_\zeta^\kappa(A,K_\varphi).\] By our inductive hypothesis, for each $\zeta<\xi$ there is a club $C_{\varphi_\zeta}$ in $\kappa$ such that for all regular $\alpha\in C_{\varphi_\zeta}$ we have $V_\alpha\models\varphi_\zeta(A)\mathrm{|}^\kappa_\alpha$ if and only if $V_\alpha\models\Psi_{\zeta,1}^\kappa(A,K_{\varphi_\zeta})\mathrm{|}^\kappa_\alpha$. Since for all $\zeta<\xi$ we have $\kappa\in j(C_{\varphi_\zeta}\cap D_\zeta)$ and $(K_\varphi)_{b_{\kappa,\xi}^{-1}(\zeta)}=K_{\varphi_\zeta}$, it follows that in $V^\kappa/G$ we have \begin{align*} V_\kappa\models\varphi_\zeta(A)&\iff V_\kappa\models\Psi_{\zeta,1}^\kappa(A,(K_\varphi)_{b_{\kappa,\xi}^{-1}(\zeta)})\\
&\iff V_\kappa\models\Theta_\zeta^\kappa(A,K_\varphi). \end{align*} Hence $\kappa\in j(T)$. \end{proof}
\begin{remark}\label{remark_universal_formula_depends_on_bijections} Notice that contained within the proof of Theorem \ref{theorem_universal} is a construction via transfinite recursion on $\xi$ of a universal $\Pi^1_\xi$ formula at $\kappa$ for formulas with $n$ free second-order variables. Furthermore, when $\xi$ is a limit, let us emphasize that the definition of $\Psi^\kappa_{\xi,n}(X_1,\ldots,X_n,Y_\xi)$ depends not only on the chosen bijection $b_{\kappa,\xi}:\kappa\to\xi$, but on the entire history of bijections $b_{\kappa,\zeta}:\kappa\to\zeta$ chosen at previous limit steps $\zeta<\xi$ in the construction. \end{remark}
As our first application of the existence of universal formulas, we generalize work of Bagaria \cite{MR3894041} by showing that there are natural normal ideals on $\kappa$ associated to $\Pi^1_\xi$-indescribability for all $\xi<\kappa^+$.
\begin{theorem}\label{theorem_normal_ideal} If a cardinal $\kappa$ is $\Pi^1_\xi$-indescribable where $\xi<\kappa^+$, then the collection \[\Pi^1_\xi(\kappa)=\{X\subseteq\kappa:\text{$X$ is not $\Pi^1_\xi$-indescribable}\}\] is a nontrivial normal ideal on $\kappa$. \end{theorem}
\begin{proof} Suppose $\kappa$ is $\Pi^1_\xi$-indescribable where $\xi<\kappa^+$. It is easy to see that \[\Pi^1_\xi(\kappa)=\{X\subseteq\kappa:\text{$X$ is not $\Pi^1_\xi$-indescribable}\}\] is a nontrivial ideal on $\kappa$, so we just need to prove it is normal. Suppose $S\in\Pi^1_\xi(\kappa)^+$ and fix a regressive function $f:S\to\kappa$. For the sake of contradiction, assume that for all $\eta<\kappa$ the set $f^{-1}(\{\eta\})=\{\alpha\in S: f(\alpha)=\eta\}$ is not in $\Pi^1_\xi(\kappa)^+$. Then, for each $\eta<\kappa$ there is some $\Pi^1_\xi$ formula $\varphi_\eta(X)$ over $V_\kappa$ and some $A_\eta\subseteq V_\kappa$ such that $V_\kappa\models\varphi_\eta(A_\eta)$ but \begin{align}V_\alpha\models\lnot\varphi_\eta(A_\eta)\mathrm{|}^\kappa_\alpha\text{ for all }\alpha\in S \text{ such that }f(\alpha)=\eta.\label{equation_will_contradict} \end{align} Let $\Psi^\kappa_{\xi,1}(X,Y_\xi)$ be the universal $\Pi^1_\xi$ formula at $\kappa$ for formulas with one free second-order variable, let $K_{\varphi_\eta}\subseteq\kappa$ be the code for $\varphi_\eta$ and let $C_{\varphi_\eta}$ be the club subset of $\kappa$ as in Definition \ref{definition_universal}. Then for all $\eta<\kappa$ we have \[V_\kappa\models\Psi^\kappa_{\xi,1}(A_\eta,K_{\varphi_\eta}).\]
We would like to show that the formula $\bigwedge_{\eta<\kappa}\Psi^\kappa_{\xi,1}(A_\eta,K_{\varphi_\eta})$ is equivalent to a single $\Pi^1_\xi$ formula over $V_\kappa$. Let $A=\{\Gamma(\eta,\beta): \eta<\kappa\land\beta\in A_\eta\}\subseteq\kappa$ and $K=\{\Gamma(\eta,\beta):\eta<\kappa\land\beta\in K_{\varphi_\eta}\}\subseteq\kappa$ code the sequences $\<A_\eta:\eta<\kappa\rangle$ and $\<K_{\varphi_\eta}:\eta<\kappa\rangle$ respectively. Let \[C=\mathop{\text{\Large$\bigtriangleup$}}_{\eta<\kappa}C_{\varphi_\eta}=\{\zeta<\kappa:\zeta\in\bigcap_{\eta<\zeta}C_{\varphi_\eta}\}\] and notice that $C$ is in the club filter on $\kappa$. By a straightforward application of Lemma \ref{lemma_no_increase}, there is a $\Pi^1_\xi$ sentence $\varphi(A,K,C)$ such that \[V_\kappa\models\varphi(A,K,C) \text{ if and only if } V_\kappa\models\bigwedge_{\eta<\kappa}\Psi^\kappa_{\xi,1}(A_\eta,K_{\varphi_\eta}),\] and furthermore, there is a club $D\subseteq\kappa$ such that for all regular $\alpha\in D$ we have \[V_\alpha\models\varphi(A,K,C)\mathrm{|}^\kappa_\alpha\text{ if and only if }V_\alpha\models\bigwedge_{\eta<\alpha}\Psi^\kappa_{\xi,1}(A_\eta,K_{\varphi_\eta})\mathrm{|}^\kappa_\alpha.\] Since $S$ is $\Pi^1_\xi$-indescribable in $\kappa$, there is some regular $\alpha\in S\cap C\cap D$ such that $V_\alpha\models\varphi(A,K,C)\mathrm{|}^\kappa_\alpha$. Since $\alpha\in D$ we have $V_\alpha\models \bigwedge_{\eta<\alpha}\Psi_{\xi,1}^\kappa(A_\eta,K_{\varphi_\eta})\mathrm{|}^\kappa_\alpha$ and since $\alpha\in C$ we have $V_\alpha\models\bigwedge_{\eta<\alpha} \varphi_\eta(A_\eta)\mathrm{|}^\kappa_\alpha$, which contradicts (\ref{equation_will_contradict}) since $f(\alpha)<\alpha$. \end{proof}
As an easy consequence of Theorem \ref{theorem_normal_ideal} we obtain the following characterization of $\Pi^1_\xi$-indescribable subsets of a cardinal in terms of generic elementary embeddings, which we will use below to characterize $\Pi^1_\xi$-indescribable sets in terms of a natural filter base (see Theorem \ref{theorem_xi_clubs}(1)).
\begin{proposition}\label{proposition_generic_characterization} Suppose $\kappa$ is a regular cardinal, $\xi<\kappa^+$ and $S\subseteq\kappa$. The following are equivalent. \begin{enumerate} \item The set $S\subseteq\kappa$ is $\Pi^1_\xi$-indescribable in $\kappa$. \item There is some poset ${\mathbb P}$ such that whenever $G\subseteq {\mathbb P}$ is generic over $V$, there is an elementary embedding $j:V\to M\subseteq V[G]$ in $V[G]$ with critical point $\kappa$ such that \begin{enumerate} \item $\kappa\in j(S)$ and \item for all $\Pi^1_\xi$ sentences $\varphi$ over $V_\kappa$ in $V$ we have $j(\varphi)\mathrm{|}^{j(\kappa)}_\kappa=\varphi$ and \[(V_\kappa\models\varphi)^V \implies (V_\kappa\models\varphi)^M.\] \end{enumerate} \end{enumerate} \end{proposition}
\begin{proof} Suppose $S\subseteq\kappa$ is $\Pi^1_\xi$-indescribable in $\kappa$. Let $G\subseteq P(\kappa)/\Pi^1_\xi(\kappa)$ be generic over $V$ with $S\in G$ and let $j:V\to V^\kappa/G$ be the corresponding generic ultrapower embedding. Note that the normality of the ideal $\Pi^1_\xi(\kappa)$ implies that the critical point of $j$ is $\kappa$ and $\kappa\in j(S)$. Suppose $\varphi$ is a $\Pi^1_\xi$ sentence over $V_\kappa$ and $V_\kappa\models\varphi$. Then the set \[C=\{\alpha<\kappa:\text{$\varphi\mathrm{|}^\kappa_\alpha$ is defined and $V_\alpha\models\varphi\mathrm{|}^\kappa_\alpha$}\}\] is in the filter dual to $\Pi^1_\xi(\kappa)$. Thus $\kappa\in j(C)$ and since $j(\varphi)\mathrm{|}^{j(\kappa)}_\kappa=\varphi$ by Lemma \ref{lemma_j_of}, we see that (2b) holds.
Conversely, let $j$ be as in (2). Fix a $\Pi^1_\xi$ sentence $\varphi$ over $V_\kappa$ with $V_\kappa\models\varphi$. Then it follows by (2) that, in $M$, there is some $\alpha\in j(S)$ such that $V_\alpha\models j(\varphi)\mathrm{|}^{j(\kappa)}_\alpha$. Hence by elementarity, there is an $\alpha\in S$ such that $V_\alpha\models\varphi\mathrm{|}^\kappa_\alpha$. \end{proof}
\subsection{A hierarchy result}\label{section_hierarchy}
In order to prove the hierarchy results below (Corollary \ref{corollary_hierarchy} and Corollary \ref{corollary_proper}), we first need to establish a connection between universal formulas at $\kappa$ and universal formulas at regular $\alpha<\kappa$.
\begin{comment} \begin{lemma}\label{lemma_elementary}
Suppose $\kappa$ is regular and $\xi<\kappa^+$. The set of all $M\prec H_{\kappa^+}$ such that $|M|<\kappa$, $M\cap\kappa<\kappa$ and $M\cap \kappa^+=F^\kappa_\xi(M\cap\kappa)$ is club in $[H_{\kappa^+}]^{<\kappa}$. \end{lemma}
\begin{proof}
Let $C$ be the collection of all $M\prec H_{\kappa^+}$ such that $|M|<\kappa$ and $M\cap\kappa^+=F^\kappa_\xi(\kappa)$. Suppose $\<M_\zeta:\zeta<\gamma\rangle$ is an increasing chain of elements of $C$ where $\gamma<\kappa$ and set $M=\bigcup_{\zeta<\gamma}M_\zeta$. Clearly we have $M\prec H_{\kappa^+}$, $|M|<\kappa$ and $M\cap\kappa=\bigcup_{\zeta<\gamma}M_\zeta\cap\kappa<\kappa$. Furthermore, \[M\cap\kappa^+=\bigcup_{\zeta<\gamma}M_\zeta\cap\kappa^+=\bigcup_{\zeta<\gamma}F^\kappa_\xi(M_\zeta\cap\kappa)=F^\kappa_\xi(M\cap\kappa),\] and hence $C$ is a closed subset of $[H_{\kappa^+}]^{<\kappa}$. To see that $C$ is unbounded in $[H_{\kappa^+}]^{<\kappa}$, fix $a\in H_{\kappa^+}$. Let $M_0\prec H_{\kappa^+}$ be such that $a\in M_0$ and $M_0\cap\kappa<\kappa$. Choose $\alpha_0<\kappa$ large enough so that $M_0\cap\kappa^+\subseteq F^\kappa_\xi(\alpha_0)$. Given that $M_n$ and $\alpha_n$ have been defined where $n<\omega$, let $M_{n+1}\prec H_{\kappa^+}$ be such that $M_n\cup F^\kappa_\xi(\alpha_n)\subseteq M_{n+1}$ and let $\alpha_{n+1}\in\kappa\setminus(\alpha_n+1)$ be such that $M_{n+1}\cap\kappa^+\subseteq F^\kappa_\xi(\alpha_{n+1})$. Now notice that if we let $M_\omega=\bigcup_{n<\omega}M_n$ we have $a\in M\prec H_{\kappa^+}$, $M\cap\kappa=\bigcup_{n<\omega}\alpha_n<\kappa$ and $M\cap\kappa^+=F^\kappa_\xi(M\cap\kappa)$. \end{proof}
\begin{corollary} Suppose $\kappa$ is a regular cardinal, $\xi<\kappa^+$ is a limit ordinal and $b_{\kappa,\xi}:\kappa\to\xi$ is a bijection. There is a club $C$ in $\kappa$ such that for all $\alpha\in C$ there is an $M\prec H_{\kappa^+}$ such that $M$ contains $\kappa$, $\xi$ and $b_{\kappa,\xi}$ as elements, and the following hold. \begin{enumerate} \item $M\cap\kappa=\alpha$ \item $M\cap\kappa^+=F^\kappa_\xi(\alpha)$ \item If $\pi_M:M\to N$ is the transitive collapse of $M$ then $\pi_M\upharpoonright F^\kappa_\xi(\alpha)$ is the transitive collapse of $F^\kappa_\xi(\alpha)$ and \[\pi_M(b_{\kappa,\xi}):\alpha\to f^\kappa_\xi(\alpha)\] is a bijection. \end{enumerate} \end{corollary}
\begin{proof} Suppose $b_{\kappa,\xi}:\kappa\to\xi$ is a bijection and let $D$ be the club subset of $[H_{\kappa^+}]^{<\kappa}$ from the statement of Lemma \ref{lemma_elementary}. We can build an increasing continuous chain $\<M_\zeta:\zeta<\kappa\rangle$ of elementary substructures of $H_{\kappa^+}$ such that $\kappa,\xi,b_{\kappa,\xi}\in M_0$. It is easy to see that $C=\{M_\zeta\cap\kappa:\zeta<\kappa\}$ is the desired club subset of $\kappa$. \end{proof} \end{comment}
\begin{lemma}\label{lemma_restricting_universal_formulas} Suppose $\kappa>\omega$ is regular. Fix any $\xi<\kappa^+$ and $n<\omega$, and let $\Psi^\kappa_{\xi,n}(X_1,\ldots,X_n,Y_\xi)$ and $\bar{\Psi}^\kappa_{\xi,n}(X_1,\ldots,X_n,Y_\xi)$ be, respectively, the universal $\Pi^1_\xi$ and $\Sigma^1_\xi$ formulas at $\kappa$ for formulas with $n$ free second-order variables, which were defined by transfinite recursion in the proof of Theorem \ref{theorem_universal}. There are clubs $C_{\xi,n}$ and $D_{\xi,n}$ in $\kappa$ such that the following hold. \begin{enumerate} \item For all regular $\alpha\in C_{\xi,n}$ the formula $\Psi^\kappa_{\xi,n}(X_1,\ldots,X_n,Y_\xi)\mathrm{|}^\kappa_\alpha$ is a universal $\Pi^1_{f^\kappa_\xi(\alpha)}$ formula at $\alpha$ for formulas with $n$ free second-order variables. \item For all regular $\alpha\in D_{\xi,n}$ the formula $\bar{\Psi}^\kappa_{\xi,n}(X_1,\ldots,X_n,Y_\xi)\mathrm{|}^\kappa_\alpha$ is a universal $\Sigma^1_{f^\kappa_\xi(\alpha)}$ formula at $\alpha$ for formulas with $n$ free second-order variables. \end{enumerate} \end{lemma}
\begin{proof} We proceed by induction on $\xi$. Suppose $\xi<\kappa^+$ is a limit ordinal. The case in which $\xi$ is a successor ordinal is easier and is left to the reader. We will now prove (1); the proof of (2) is similar. Recall that \[\Psi^\kappa_{\xi,1}(X,Y_\xi)=\bigwedge_{\zeta<\xi}\Theta^\kappa_{\zeta,1}(X,Y_\xi)\] where each $\Theta^\kappa_{\zeta,1}(X,Y_\xi)$ is a $\Pi^1_\zeta$ formula over $V_\kappa$ equivalent to $\Psi^\kappa_{\zeta,1}(X,(Y_\xi)_{b_{\kappa,\xi}^{-1}(\zeta)})$ in the sense that there is a club $D_\zeta$ in $\kappa$ such that for all regular $\alpha\in D_\zeta\cup\{\kappa\}$ we have $V_\alpha\models\Theta^\kappa_{\zeta,1}(A,B)\mathrm{|}^\kappa_\alpha$ if and only if $V_\alpha\models \Psi^\kappa_{\zeta,1}(A,(B)_{b_{\kappa,\xi}^{-1}(\zeta)})\mathrm{|}^\kappa_\alpha$ for all $A\subseteq V_\kappa$ and $B\subseteq\kappa$. To prove (1), we will use Proposition \ref{proposition_framework2}. Suppose $G\subseteq P(\kappa)/{\mathop{\rm NS}}_\kappa$ is generic such that $\kappa$ is regular in $V^\kappa/G$ and let $j:V\to V^\kappa/G$ be the corresponding generic ultrapower embedding. To show that the desired club $C_{\xi,1}$ exists, we must show that $\kappa\in j(T)$ where $T$ is the set of regular cardinals $\alpha<\kappa$ such that $\Psi^\kappa_{\xi,1}(X,Y_\xi)\mathrm{|}^\kappa_\alpha$ is a universal $\Pi^1_{f^\kappa_\xi(\alpha)}$ formula at $\alpha$ for formulas with $1$ free variable. By Lemma \ref{lemma_j_of} we have $j(\Psi^\kappa_{\xi,1}(X,Y_\xi))\mathrm{|}^{j(\kappa)}_\kappa=\Psi^\kappa_{\xi,1}(X,Y_\xi)$ and $\kappa\in j(D_\zeta)$ for all $\zeta<\xi$. Thus, working in $V^\kappa/G$, if $A\subseteq V_\kappa$ and $B\subseteq\kappa$ then \begin{align*} V_\kappa\models\Psi^\kappa_{\xi,1}(A,B)&\iff V_\kappa\models \bigwedge_{\zeta<\xi}j(\Psi^\kappa_{\zeta,1}(A,(B)_{b_{\kappa,\xi}^{-1}(\zeta)}))\mathrm{|}^{j(\kappa)}_\kappa\\
&\iff V_\kappa\models \bigwedge_{\zeta<\xi}\Psi^\kappa_{\zeta,1}(A,(B)_{b_{\kappa,\xi}^{-1}(\zeta)}).\\ \end{align*} Now it is straightforward to verify $\kappa\in j(T)$, that is, $\Psi^\kappa_{\xi,1}(X,Y_\xi)$ is a universal $\Pi^1_\xi$ formula at $\kappa$ for formulas with $1$ free variable in $V^\kappa/G$; we give a brief outline of how to do this here. Still working in $V^\kappa/G$, fix a $\Pi^1_\xi$ formula $\varphi_\xi(X)=\bigwedge_{\zeta<\xi}\varphi_\zeta(X)$. From our inductive assumption, working in $V^\kappa/G$, we may fix codes $K_{\varphi_\zeta}\subseteq\kappa$ such that $V_\kappa\models\varphi_\zeta(A)$ if and only if $V_\kappa\models\Psi^\kappa_{\zeta,1}(A,K_{\varphi_\zeta})$. Then we let $K_\varphi=\{\Gamma(b_{\kappa,\xi}^{-1}(\zeta),\beta):\zeta<\xi\land\beta\in K_{\varphi_\zeta}\}$ and proceed exactly as in the proof of Theorem \ref{theorem_universal}, except that here we work in $V^\kappa/G$. Thus we conclude $\kappa\in j(T)$. \end{proof}
Next we show that for $\xi<\kappa^+$, the $\Pi^1_\xi$-indescribability of a set $S\subseteq\kappa$ is expressible by a $\Pi^1_{\xi+1}$ formula over $V_\kappa$ in the following sense.
\begin{theorem}\label{theorem_expressing_indescribability} Suppose $\kappa>\omega$ is inaccessible and $\xi<\kappa^+$. There is a $\Pi^1_{\xi+1}$ formula $\Phi^\kappa_\xi(Z)$ over $V_\kappa$ and a club $C\subseteq\kappa$ such that for all $S\subseteq\kappa$ we have \[\text{$S$ is a $\Pi^1_\xi$-indescribable subset of $\kappa$ if and only if $V_\kappa\models\Phi^\kappa_\xi(S)$}\] and for all regular $\alpha\in C$ we have \[\text{$S\cap\alpha$ is a $\Pi^1_{f^\kappa_\xi(\alpha)}$-indescribable subset of $\alpha$ if and only if $V_\alpha\models\Phi^\kappa_\xi(S)\mathrm{|}^\kappa_\alpha$}.\] \end{theorem} \begin{proof} We let $R\subseteq\kappa$ be a set, defined as follows, coding information about which $\alpha<\kappa$ and which $a\subseteq\alpha$ satisfy $V_\alpha\models\Psi^\kappa_{\xi,0}(a)\mathrm{|}^\kappa_\alpha$. For each regular $\alpha<\kappa$ let $\<a^\alpha_\beta:\beta<\delta_\alpha\rangle$ be a sequence of subsets of $\alpha$ such that for all $a\subseteq\alpha$ we have $V_\alpha\models\Psi^\kappa_{\xi,0}(a)\mathrm{|}^\kappa_\alpha$ if and only if $a=a^\alpha_\beta$ for some $\beta<\delta_\alpha$. Let $\Gamma:\kappa\times\kappa\times\kappa\to\kappa$ be the usual definable bijection. We let \[R=\{\Gamma(\alpha,\beta,\gamma): \text{($\alpha$ is regular)}\land \beta<\delta_\alpha\land \gamma\in a^\alpha_\beta\}.\] For $\alpha,\beta<\kappa$ we define \[R_{(\alpha,\beta)}=\{\gamma:\Gamma(\alpha,\beta,\gamma)\in R\}\] to be the $(\alpha,\beta)^{th}$ slice of $R$ so that when $\alpha$ is regular and $\beta<\delta_\alpha$ we have $R_{(\alpha,\beta)}=a^\alpha_\beta$. Now we let \[\Phi^\kappa_\xi(Z)=\forall X[\Psi^\kappa_{\xi,0}(X)\rightarrow(\exists Y\subseteq Z\cap{\rm REG})((Y\in{\mathop{\rm NS}}_\kappa^+)\land(\forall\eta\in Y)(\exists\beta)(X\cap\eta=R_{(\eta,\beta)}))].\] Since the part of $\Phi^\kappa_\xi$ to the right of the $\rightarrow$ is $\Sigma^1_2$ over $V_\kappa$, and since $\Psi^\kappa_{\xi,0}(X)$ is $\Pi^1_\xi$ over $V_\kappa$ and appears to the left of the $\rightarrow$ in $\Phi^\kappa_\xi$, it follows that $\Phi^\kappa_\xi$ is expressible by a $\Pi^1_{\xi+1}$ formula over $V_\kappa$. In what follows, we will identify $\Phi^\kappa_\xi$ with this $\Pi^1_{\xi+1}$ formula.
First let us show that $S\subseteq\kappa$ is $\Pi^1_\xi$-indescribable in $\kappa$ if and only if $V_\kappa\models\Phi^\kappa_\xi(S)$. Suppose $S$ is $\Pi^1_\xi$-indescribable in $\kappa$. To see that $V_\kappa\models\Phi^\kappa_\xi(S)$, fix $K\subseteq\kappa$ such that $V_\kappa\models\Psi^\kappa_{\xi,0}(K)$. Then $D_0=\{\alpha<\kappa: V_\alpha\models\Psi^\kappa_{\xi,0}(K)\mathrm{|}^\kappa_\alpha\}$ is in the filter $\Pi^1_\xi(\kappa)^*$ and thus $Y=S\cap D_0\cap{\rm REG}$ is, in particular, stationary in $\kappa$. If $\alpha\in Y$ then we have $V_\alpha\models\Psi^\kappa_{\xi,0}(K)\mathrm{|}^\kappa_\alpha$ and hence $V_\alpha\models\Psi^\kappa_{\xi,0}(K\cap\alpha)\mathrm{|}^\kappa_\alpha$, which implies that $K\cap\alpha=R_{(\alpha,\beta)}$ for some $\beta<\delta_\alpha$. Conversely, suppose $V_\kappa\models \Phi^\kappa_\xi(S)$ and let us show that $S$ is $\Pi^1_\xi$-indescribable in $\kappa$. Fix a $\Pi^1_\xi$ sentence $\varphi$ such that $V_\kappa\models\varphi$. Then, by Theorem \ref{theorem_universal}, $V_\kappa\models\Psi^\kappa_{\xi,0}(K_\varphi)$ and thus there is a $Y\subseteq S\cap{\rm REG}$ stationary in $\kappa$ such that for all $\alpha\in Y$ we have $V_\alpha\models\Psi^\kappa_{\xi,0}(K_\varphi)\mathrm{|}^\kappa_\alpha$. By Theorem \ref{theorem_universal} there is a club $D_\varphi\subseteq\kappa$ such that for all regular $\alpha\in D_\varphi$ we have $V_\alpha\models\varphi\mathrm{|}^\kappa_\alpha$ if and only if $V_\alpha\models\Psi_{\xi,0}(K_\varphi)\mathrm{|}^\kappa_\alpha$. Thus we may choose a regular $\alpha\in Y\cap D_\varphi\cap{\rm REG}\subseteq S$ and observe that $V_\alpha\models\varphi\mathrm{|}^\kappa_\alpha$. Hence $S$ is $\Pi^1_\xi$-indescribable in $\kappa$.
To prove the second part of the statement we will use Proposition \ref{proposition_framework2}. Fix $S\subseteq\kappa$. Suppose $G\subseteq P(\kappa)/{\mathop{\rm NS}}_\kappa$ is generic, $\kappa$ is regular in $V^\kappa/G$ and $j:V\to V^\kappa/G$ is the corresponding generic ultrapower. Let $E$ be the set of ordinals $\alpha<\kappa$ such that $S\cap\alpha$ is a $\Pi^1_{f^\kappa_\xi(\alpha)}$-indescribable subset of $\alpha$ if and only if $V_\alpha\models\Phi^\kappa_{\xi}(S)\mathrm{|}^\kappa_\alpha$. We must show that $\kappa\in j(E)$. By Lemma \ref{lemma_j_of} we have $j(\Phi^\kappa_{\xi}(S))\mathrm{|}^{j(\kappa)}_\kappa=\Phi^\kappa_{\xi}(S)$, and thus we must show that in $V^\kappa/G$, $S$ is $\Pi^1_\xi$-indescribable in $\kappa$ if and only if $V_\kappa\models \Phi^\kappa_{\xi}(S)$. By Lemma \ref{lemma_restricting_universal_formulas}, it follows that in $V^\kappa/G$, $\Psi^\kappa_{\xi,0}(X)$ is a universal $\Pi^1_\xi$ formula at $\kappa$ and therefore we can proceed to verify $\kappa\in j(E)$ by using the argument in the previous paragraph, but working in $V^\kappa/G$. \end{proof}
We obtain our first hierarchy result as an easy corollary of Theorem \ref{theorem_expressing_indescribability}.
\begin{corollary}\label{corollary_hierarchy} Suppose $S\subseteq\kappa$ is $\Pi^1_\xi$-indescribable in $\kappa$ where $\xi<\kappa^+$ and let $\zeta<\xi$. Then the set \[C=\{\alpha<\kappa:\text{$S\cap\alpha$ is $\Pi^1_{f^\kappa_\zeta(\alpha)}$-indescribable}\}\] is in the filter $\Pi^1_\xi(\kappa)^*$. \end{corollary}
\begin{proof} Since $\zeta<\xi$, it follows that $S$ is $\Pi^1_\zeta$-indescribable in $\kappa$, and thus $V_\kappa\models\Phi^\kappa_\zeta(S)$, where $\Phi^\kappa_\zeta(Z)$ is the $\Pi^1_{\zeta+1}$ formula over $V_\kappa$ obtained from Theorem \ref{theorem_expressing_indescribability}. By Theorem \ref{theorem_expressing_indescribability}, there is a club $D$ in $\kappa$ such that for every regular $\alpha\in D$, \[\text{$S\cap\alpha$ is $\Pi^1_{f^\kappa_\zeta(\alpha)}$-indescribable if and only if $V_\alpha\models\Phi^\kappa_\zeta(S)\mathrm{|}^\kappa_\alpha$}.\] Since the set \[D\cap\{\alpha<\kappa: V_\alpha\models\Phi^\kappa_\zeta(S)\mathrm{|}^\kappa_\alpha\}\] is in the filter $\Pi^1_\zeta(\kappa)^*$, we see that \[\{\alpha<\kappa:\text{$S\cap\alpha$ is $\Pi^1_{f^\kappa_\zeta(\alpha)}$-indescribable}\}\in\Pi^1_\zeta(\kappa)^*\subseteq \Pi^1_\xi(\kappa)^*.\] \end{proof}
Next, in order to show that when $\kappa$ is $\Pi^1_\xi$-indescribable, we have a proper containment $\Pi^1_\zeta(\kappa)\subsetneq\Pi^1_\xi(\kappa)$ for all $\zeta<\xi$ (see Corollary \ref{corollary_proper}), we need several preliminary results.
Before we show that the restriction of a restriction of a given $\Pi^1_\xi$ formula $\varphi$ is often equal to a single restriction of $\varphi$, we need a lemma, which is established using an argument similar to that of Lemma \ref{lemma_j_of}.
\begin{lemma}[{Cody-Holy \cite{cody_holy_2022}}]\label{lemma_double_restriction} Suppose $I$ is a normal ideal on $\kappa$ and $G\subseteq P(\kappa)/I$ is generic such that $\kappa$ is regular in $V^\kappa/G$ and let $j:V\to V^\kappa/G$ be the corresponding generic ultrapower. If $\varphi$ is either a $\Pi^1_\xi$ or $\Sigma^1_\xi$ formula over $V_\kappa$ for some $\xi<\kappa^+$, and $\alpha<\kappa$ is regular such that $\varphi\mathrm{|}^\kappa_\alpha$ is defined, then \[j(\varphi)\mathrm{|}^{j(\kappa)}_\alpha=\varphi\mathrm{|}^\kappa_\alpha,\] with the former being calculated in $V^\kappa/G$, and the latter being calculated in $V$. \end{lemma} \begin{proof}
By induction on $\xi<\kappa^+$. This is immediate in case $\xi<\kappa$, for then by Remark \ref{remark_coding}(1), $j(\varphi(A_1,\ldots,A_n))=\varphi(j(A_1),\ldots,j(A_n))$, and thus $j(\varphi)\mathrm{|}^{j(\kappa)}_\alpha=\varphi\mathrm{|}^\kappa_\alpha$ by the definition of the restriction operation in this case. It is also immediate for successor steps above $\kappa$, for then by Remark \ref{remark_coding}(2), $j(\forall\vec X\psi)=\forall\vec X j(\psi)$.
At limit steps $\xi\ge\kappa$, if $\varphi=\bigwedge_{\zeta<\xi}\psi_\zeta$ is a $\Pi^1_\xi$ formula, let $\vec\psi=\langle\psi_\zeta\mid\zeta<\xi\rangle$, and let $\vec\pi=\langle\pi^\kappa_{\xi,\alpha}\mid\alpha<\kappa\rangle$. Then, by Remark \ref{remark_coding}(3), $j(\varphi)=\bigwedge_{\zeta<j(\xi)}j(\vec\psi)_\zeta$, and therefore, assuming for now that $j(\varphi)\mathrm{|}^{j(\kappa)}_\alpha$ is defined, \[j(\varphi)\mathrm{|}^{j(\kappa)}_\alpha=\bigwedge_{\zeta\in j(f^\kappa_\xi)(\alpha)}j(\vec\psi)_{j(\vec\pi)_\alpha^{-1}(\zeta)}\mathrm{|}^{j(\kappa)}_\alpha=\bigwedge_{\zeta\in j(f^\kappa_\xi)(\alpha)}j(\psi_{j^{-1}(j(\vec\pi)_\alpha^{-1}(\zeta))})\mathrm{|}^{j(\kappa)}_\alpha,\] using that $j(\vec\pi)_\alpha^{-1}[j(f^\kappa_\xi)(\alpha)]=j(F^\kappa_\xi)(\alpha)\subseteq j(F^\kappa_\xi)(\kappa)=j"\xi$. By our inductive hypothesis, for each $\gamma\in\xi$ and every regular $\alpha<\kappa$, $j(\psi_\gamma)\mathrm{|}^{j(\kappa)}_\alpha=\psi_\gamma\mathrm{|}^\kappa_\alpha$. Thus, \[j(\varphi)\mathrm{|}^{j(\kappa)}_\alpha=\bigwedge_{\zeta\in j(f^\kappa_\xi)(\alpha)}\psi_{j^{-1}(j(\vec\pi)_\alpha^{-1}(\zeta))}\mathrm{|}^\kappa_\alpha.\] Now, \[\varphi\mathrm{|}^\kappa_\alpha=\bigwedge_{\zeta\in f^\kappa_\xi(\alpha)}\psi_{(\pi^\kappa_{\xi,\alpha})^{-1}(\zeta)}\mathrm{|}^\kappa_\alpha.\] Since $\alpha<\kappa$ we have $j(f^\kappa_\xi)(\alpha)=f^\kappa_\xi(\alpha)$, and furthermore \[(\pi^\kappa_{\xi,\alpha})^{-1}[f^\kappa_\xi(\alpha)]=F^\kappa_\xi(\alpha)=(j^{-1}\circ j(\vec\pi)_\alpha^{-1})[j(f^\kappa_\xi)(\alpha)],\] showing the above restrictions of $\varphi$ and of $j(\varphi)$ to be equal,\footnote{Being somewhat more careful here, this in fact also uses that the maps $\pi^\kappa_{\xi,\alpha}$, $j$, and $j(\vec\pi)_\alpha$ are order-preserving, so that both of the above conjunctions are taken of the same formulas \emph{in the same order}.} and thus in particular also showing that $j(\varphi)\mathrm{|}^{j(\kappa)}_\alpha$ is defined, as desired.
The case when $\varphi$ is a $\Sigma^1_\xi$ formula is treated in exactly the same way. \end{proof}
We can now easily deduce the following, which was originally established in an earlier version of this article using a different proof. The proof included below is due to the author and Peter Holy.
\begin{proposition}\label{proposition_double_restriction} Suppose $\kappa$ is weakly Mahlo, and $\xi<\kappa^+$. For any formula $\varphi$ which is either $\Pi^1_\xi$ or $\Sigma^1_\xi$ over $V_\kappa$, there is a club $D\subseteq\kappa$ such that for all regular uncountable $\alpha\in D$, $\varphi\mathrm{|}^\kappa_\alpha$ is defined, and the set $D_\alpha$ of all ordinals $\beta<\alpha$ such that $(\varphi\mathrm{|}^\kappa_\alpha)\mathrm{|}^\alpha_\beta$ is defined and $(\varphi\mathrm{|}^\kappa_\alpha)\mathrm{|}^\alpha_\beta=\varphi\mathrm{|}^\kappa_\beta$, is in the club filter on $\alpha$. \end{proposition} \begin{proof}
Assume for a contradiction that the conclusion of the proposition fails. By Lemma \ref{lemma_restriction_is_nice}, this means that there is a stationary set $T$ consisting of regular and uncountable cardinals $\alpha$ such that the set $D_\alpha$ has stationary complement $E_\alpha\subseteq\alpha$. Using Lemma \ref{lemma_restriction_is_nice} once again, we may assume that $(\varphi\mathrm{|}^\kappa_\alpha)\mathrm{|}^\alpha_\beta$ is defined for every $\alpha\in T$ and every $\beta\in E_\alpha$. Let $\vec E$ denote the sequence $\langle E_\alpha\mid\alpha\in T\rangle$. Assume that $G\subseteq P(\kappa)/{\mathop{\rm NS}}_\kappa$ is generic over $V$ with $T\in G$ and $j:V\to V^\kappa/G$ is the corresponding generic ultrapower. Then, $\kappa\in j(T)$, and thus $j(\vec E)_\kappa$ is stationary in $V^\kappa/G$. But, \[j(\vec E)_\kappa=\{\beta<\kappa\mid(j(\varphi)\mathrm{|}^{j(\kappa)}_\kappa)\mathrm{|}^\kappa_\beta\ne j(\varphi)\mathrm{|}^{j(\kappa)}_\beta\}.\]
Note that by Lemma \ref{lemma_j_of}, $j(\varphi)\mathrm{|}^{j(\kappa)}_\kappa=\varphi$. But then, by Lemma \ref{lemma_restriction_is_nice} and Lemma \ref{lemma_double_restriction}, $j(\vec E)_\kappa$ is nonstationary in $V^\kappa/G$, which gives our desired contradiction. \end{proof}
Recall that for an uncountable regular cardinal $\kappa$, if $S\subseteq\kappa$ is stationary in $\kappa$ and for each $\alpha\in S$ we have a set $S_\alpha\subseteq\alpha$ which is stationary in $\alpha$, then it follows that $\bigcup_{\alpha\in S}S_\alpha$ is stationary in $\kappa$. We generalize this to $\Pi^1_\xi$-indescribability for all $\xi<\kappa^+$ as follows (this result was previously known \cite[Lemma 3.1]{MR4206111} for $\xi<\kappa$).
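For the reader's convenience, let us recall the standard argument for this fact. Given a club $C\subseteq\kappa$, the set $C'$ of limit points of $C$ below $\kappa$ is club, so we may pick $\alpha\in S\cap C'$; then $C\cap\alpha$ is club in $\alpha$, and hence \[S_\alpha\cap C\supseteq S_\alpha\cap(C\cap\alpha)\neq\varnothing,\] so $\bigcup_{\alpha\in S}S_\alpha$ meets $C$. The proof of Lemma \ref{lemma_union} below follows the same pattern, with clubs replaced by reflection points of a $\Pi^1_\xi$ sentence and stationarity in $\alpha$ replaced by $\Pi^1_{f^\kappa_\xi(\alpha)}$-indescribability in $\alpha$.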
\begin{lemma}\label{lemma_union} Suppose $S$ is a $\Pi^1_\xi$-indescribable subset of $\kappa$ where $\xi<\kappa^+$. Further suppose that $S_\alpha$ is a $\Pi^1_{f^\kappa_\xi(\alpha)}$-indescribable subset of $\alpha$ for each $\alpha\in S$. Then $\bigcup_{\alpha\in S}S_\alpha$ is a $\Pi^1_\xi$-indescribable subset of $\kappa$. \end{lemma}
\begin{proof} Suppose $\xi<\kappa^+$ and $\varphi$ is some $\Pi^1_\xi$ sentence over $V_\kappa$ such that $V_\kappa\models\varphi$. By Lemma \ref{lemma_restriction_is_nice}, \[C_\varphi=\{\alpha<\kappa:\text{$\varphi\mathrm{|}^\kappa_\alpha$ is $\Pi^1_{f^\kappa_\xi(\alpha)}$ over $V_\alpha$}\}\] is in the club filter on $\kappa$. By Proposition \ref{proposition_double_restriction}, there is a club $D_\varphi\subseteq\kappa$ such that for all regular $\alpha\in D_\varphi$ the set of $\beta<\alpha$ such that $(\varphi\mathrm{|}^\kappa_\alpha)\mathrm{|}^\alpha_\beta=\varphi\mathrm{|}^\kappa_\beta$ is in the club filter on $\alpha$. Thus, $S\cap C_\varphi\cap D_\varphi$ is $\Pi^1_\xi$-indescribable in $\kappa$. Hence there is a regular uncountable $\alpha\in S\cap C_\varphi\cap D_\varphi$ such that $V_\alpha\models\varphi\mathrm{|}^\kappa_\alpha$. Let $E$ be a club subset of $\alpha$ such that for all $\beta\in E$ we have $(\varphi\mathrm{|}^\kappa_\alpha)\mathrm{|}^\alpha_\beta=\varphi\mathrm{|}^\kappa_\beta$. Since $S_\alpha\cap E$ is $\Pi^1_{f^\kappa_\xi(\alpha)}$-indescribable in $\alpha$ and $\varphi\mathrm{|}^\kappa_\alpha$ is $\Pi^1_{f^\kappa_\xi(\alpha)}$ over $V_\alpha$, there is some $\beta\in S_\alpha\cap E$ such that $V_\beta\models(\varphi\mathrm{|}^\kappa_\alpha)\mathrm{|}^\alpha_\beta$. Since $(\varphi\mathrm{|}^\kappa_\alpha)\mathrm{|}^\alpha_\beta=\varphi\mathrm{|}^\kappa_\beta$, it follows that $\bigcup_{\alpha\in S}S_\alpha$ is $\Pi^1_\xi$-indescribable in $\kappa$. \end{proof}
\begin{lemma}\label{lemma_set_of_non} If $S\subseteq\kappa$ is $\Pi^1_\xi$-indescribable in $\kappa$, where $\xi<\kappa^+$, then the set \[T=\{\alpha<\kappa:\text{$S\cap\alpha$ is not $\Pi^1_{f^\kappa_\xi(\alpha)}$-indescribable in $\alpha$}\}\] is $\Pi^1_\xi$-indescribable in $\kappa$. \end{lemma}
\begin{proof} We proceed by induction on $\xi$. For $\xi<\omega$ this is a well-known result, which follows directly from \cite[Lemma 3.2]{MR4206111}. Suppose $\xi\in\kappa^+\setminus\omega$ and, for the sake of contradiction, suppose $S$ is $\Pi^1_\xi$-indescribable and $T$ is not $\Pi^1_\xi$-indescribable in $\kappa$. Then $\kappa\setminus T$ is in the filter $\Pi^1_\xi(\kappa)^*$ and is thus $\Pi^1_\xi$-indescribable in $\kappa$. By Corollary \ref{corollary_crazy}, there is a club $C\subseteq\kappa$ such that for all regular uncountable $\alpha\in C$, the set \[D_\alpha=\{\beta<\alpha: f^\kappa_\xi(\beta)=f^\alpha_{f^\kappa_\xi(\alpha)}(\beta)\}\] is in the club filter on $\alpha$. Let $D$ be the set of regular uncountable cardinals less than $\kappa$, and note that $D\in \Pi^1_1(\kappa)^*\subseteq\Pi^1_\xi(\kappa)^*$. Notice that $(\kappa\setminus T)\cap C\cap D$ is $\Pi^1_\xi$-indescribable in $\kappa$. For each $\alpha\in (\kappa\setminus T)\cap C\cap D$, it follows by induction that the set \[T_\alpha=\{\beta<\alpha:\text{$S\cap\beta$ is not $\Pi^1_{f^\alpha_{f^\kappa_\xi(\alpha)}(\beta)}$-indescribable}\}\] is $\Pi^1_{f^\kappa_\xi(\alpha)}$-indescribable in $\alpha$. Thus, for each $\alpha\in (\kappa\setminus T)\cap C\cap D$ the set $T_\alpha\cap D_\alpha$ is $\Pi^1_{f^\kappa_\xi(\alpha)}$-indescribable in $\alpha$. Now it follows by Lemma \ref{lemma_union} that the set \[\bigcup_{\alpha\in (\kappa\setminus T)\cap C\cap D}(T_\alpha\cap D_\alpha)\subseteq T\] is $\Pi^1_\xi$-indescribable in $\kappa$, a contradiction. \end{proof}
Now we show that for regular $\kappa$, whenever $\zeta<\xi<\kappa^+$ and the ideals under consideration are nontrivial, we have $\Pi^1_\zeta(\kappa)\subsetneq\Pi^1_\xi(\kappa)$.
\begin{corollary}\label{corollary_proper} Suppose $\kappa$ is $\Pi^1_\xi$-indescribable where $\xi<\kappa^+$. Then for all $\zeta<\xi$ we have $\Pi^1_\zeta(\kappa)\subsetneq\Pi^1_\xi(\kappa)$. \end{corollary}
\begin{proof} The containment $\Pi^1_\zeta(\kappa)\subseteq\Pi^1_\xi(\kappa)$ follows easily from the fact that the class of $\Pi^1_\xi$ formulas includes the $\Pi^1_\zeta$ formulas. To see that the proper containment holds, consider the set \[C=\{\alpha<\kappa:\text{$\alpha$ is $\Pi^1_{f^\kappa_\zeta(\alpha)}$-indescribable}\}.\] By Corollary \ref{corollary_hierarchy} and Lemma \ref{lemma_set_of_non}, we have $\kappa\setminus C\in \Pi^1_\xi(\kappa)\setminus\Pi^1_\zeta(\kappa)$. \end{proof}
\subsection{Higher $\Pi^1_\xi$-clubs}\label{section_higher_xi_clubs}
Now we present a characterization of the $\Pi^1_\xi$-indescribability of sets $S\subseteq\kappa$ in terms of a natural base for the filter $\Pi^1_\xi(\kappa)^*$.
\begin{definition}\label{definition_Pi1xi_club} Suppose $\kappa$ is a regular cardinal. We define the notion of $\Pi^1_\xi$-club subset of $\kappa$ for all $\xi<\kappa^+$ by induction.\begin{enumerate} \item A set $C\subseteq\kappa$ is \emph{$\Pi^1_0$-club} if it is closed and unbounded in $\kappa$. \item We say that $C$ is \emph{$\Pi^1_{\zeta+1}$-club in $\kappa$} if $C$ is $\Pi^1_\zeta$-indescribable in $\kappa$ and $C$ is \emph{$\Pi^1_\zeta$-closed}, in the sense that there is a club $C^*$ in $\kappa$ such that for all $\alpha\in C^*$, whenever $C\cap\alpha$ is $\Pi^1_{f^\kappa_\zeta(\alpha)}$-indescribable in $\alpha$ we must have $\alpha\in C$. \item If $\xi$ is a limit, we say that $C\subseteq\kappa$ is \emph{$\Pi^1_\xi$-club in $\kappa$} if $C$ is $\Pi^1_\zeta$-indescribable in $\kappa$ for all $\zeta<\xi$ and $C$ is \emph{$\Pi^1_\xi$-closed}, in the sense that there is a club $C^*$ in $\kappa$ such that for all $\alpha\in C^*$, whenever $C\cap\alpha$ is $\Pi^1_\zeta$-indescribable in $\alpha$ for all $\zeta < f^\kappa_\xi(\alpha)$, we must have $\alpha\in C$. \end{enumerate} \end{definition}
Let us show that, when the $\Pi^1_\xi$-indescribability ideal $\Pi^1_\xi(\kappa)$ is nontrivial, the $\Pi^1_\xi$-club subsets of $\kappa$ form a filter base for the dual filter $\Pi^1_\xi(\kappa)^*$ and that being a $\Pi^1_\xi$-club subset of $\kappa$ is expressible by a $\Pi^1_\xi$ formula over $V_\kappa$.
\begin{theorem}\label{theorem_xi_clubs} Suppose $\kappa$ is a regular cardinal. For all $\xi<\kappa^+$, if $\kappa$ is $\Pi^1_\xi$-indescribable then the following hold. \begin{enumerate} \item A set $S\subseteq\kappa$ is $\Pi^1_\xi$-indescribable if and only if $S\cap C\neq\varnothing$ for all $\Pi^1_\xi$-clubs $C\subseteq\kappa$. \item There is a $\Pi^1_\xi$ formula $\chi^\kappa_\xi(X)$ over $V_\kappa$ such that for all $C\subseteq \kappa$ we have \[\text{$C$ is $\Pi^1_\xi$-club in $\kappa$ if and only if } V_\kappa\models \chi^\kappa_\xi(C)\] and there is a club $D_\xi$ in $\kappa$ such that for all regular $\alpha\in D_\xi$ and all $C\subseteq\kappa$ we have \[\text{$C\cap\alpha$ is $\Pi^1_{f^\kappa_\xi(\alpha)}$-club in $\alpha$ if and only if } V_\alpha\models\chi^\kappa_\xi(C)\mathrm{|}^\kappa_\alpha.\] \end{enumerate} \end{theorem}
\begin{proof} Sun \cite[Theorem 1.17]{MR1245524} proved that the theorem holds for $\xi=1$, and Hellsten \cite[Theorem 2.4.2]{MR2026390} generalized this to the case in which $\xi<\omega$. We provide a proof of the case in which $\xi<\kappa^+$ is a limit ordinal; the case in which $\xi<\kappa^+$ is a successor is similar, but easier.
Suppose $\xi<\kappa^+$ is a limit ordinal and that both (1) and (2) hold for all ordinals $\zeta<\xi$. For the forward direction of (1), suppose $S\subseteq\kappa$ is $\Pi^1_\xi$-indescribable and fix $C\subseteq\kappa$ a $\Pi^1_\xi$-club subset of $\kappa$. Then, in particular, for each $\zeta<\xi$, $C$ is $\Pi^1_\zeta$-indescribable and, by Theorem \ref{theorem_expressing_indescribability}, we see that \[V_\kappa\models\bigwedge_{\zeta<\xi}\Phi^\kappa_\zeta(C).\] Let $G\subseteq P(\kappa)/\Pi^1_\xi(\kappa)$ be generic over $V$ with $S\in G$ and let $j:V\to V^\kappa/G$ be the corresponding generic ultrapower. Then $\kappa\in j(S)$ and by the proof of Proposition \ref{proposition_generic_characterization}, we have $\left(V_\kappa\models\bigwedge_{\zeta<\xi}\Phi^\kappa_\zeta(C)\right)^{V^\kappa/G}.$ For each $\zeta<\xi$, let $C_\zeta$ be the club subset of $\kappa$ obtained from Theorem \ref{theorem_expressing_indescribability} and notice that $\kappa\in j(C_\zeta)$ and hence in $V^\kappa/G$ the set $C$ is $\Pi^1_\zeta$-indescribable in $\kappa$. Since $C$ is a $\Pi^1_\xi$-club subset of $\kappa$ there is a club $C^*\subseteq\kappa$ as in Definition \ref{definition_Pi1xi_club}. Since $\kappa\in j(C^*)$ and $j(C)\cap\kappa$ is $\Pi^1_\zeta$-indescribable for all $\zeta<\xi=j(f^\kappa_\xi)(\kappa)$, it follows that $\kappa\in j(C)$. Therefore by elementarity $S\cap C\neq\varnothing$.
For the reverse direction of (1), suppose $\kappa$ is $\Pi^1_\xi$-indescribable and $S\subseteq\kappa$ intersects every $\Pi^1_\xi$-club. It suffices to show that if $\varphi=\bigwedge_{\zeta<\xi}\varphi_\zeta$ is any $\Pi^1_\xi$ sentence over $V_\kappa$ such that $V_\kappa\models\varphi$, then the set \[C=\{\alpha\in D: V_\alpha\models\varphi\mathrm{|}^\kappa_\alpha\}\] contains a $\Pi^1_\xi$-club, where $D\subseteq\kappa$ is a club subset of $\kappa$ such that for all regular $\alpha\in D$, $\varphi\mathrm{|}^\kappa_\alpha$ is defined.
First, let us argue that $C$ is $\Pi^1_\zeta$-indescribable for all $\zeta<\xi$. Suppose not. Then for some fixed $\zeta<\xi$, $C$ is not $\Pi^1_\zeta$-indescribable in $\kappa$ and hence $\kappa\setminus C$ is in the filter $\Pi^1_\zeta(\kappa)^*$. Since $\kappa$ is $\Pi^1_\xi$-indescribable by assumption, and since $\Pi^1_\zeta(\kappa)^*\subseteq\Pi^1_\xi(\kappa)^*\subseteq\Pi^1_\xi(\kappa)^+$, we see that $\kappa\setminus C$ is $\Pi^1_\xi$-indescribable in $\kappa$. Since $(\kappa\setminus C)\cap D$ is $\Pi^1_\xi$-indescribable and $V_\kappa\models\varphi$ there is an $\alpha\in (\kappa\setminus C)\cap D$ such that $V_\alpha\models\varphi\mathrm{|}^\kappa_\alpha$, a contradiction.
Next we must argue that $C$ is $\Pi^1_\xi$-closed. We must show that there is a club $C^*$ in $\kappa$ such that for all regular $\alpha\in C^*$, if $C\cap\alpha$ is $\Pi^1_\zeta$-indescribable in $\alpha$ for all $\zeta<f^\kappa_\xi(\alpha)$ then $\alpha\in C$. We will use Proposition \ref{proposition_framework2}. Let $G\subseteq P(\kappa)/{\mathop{\rm NS}}_\kappa$ be generic with $\kappa$ regular in $V^\kappa/G$ and let $j:V\to V^\kappa/G$ be the corresponding generic ultrapower embedding. It suffices to show that in $V^\kappa/G$, if $C$ is $\Pi^1_\zeta$-indescribable in $\kappa$ for all $\zeta<\xi$ then $\kappa\in j(C)$. Assume that in $V^\kappa/G$, $C$ is $\Pi^1_\zeta$-indescribable in $\kappa$ for all $\zeta<\xi$ but $\kappa\notin j(C)$. Since $j(\varphi)\mathrm{|}^{j(\kappa)}_\kappa=\varphi$, it follows from the definition of $C$ that for some $\zeta<\xi$, $(V_\kappa\models\lnot\varphi_\zeta)^{V^\kappa/G}$. But in $V^\kappa/G$, $C$ is $\Pi^1_\zeta$-indescribable in $\kappa$ and so there is some $\alpha\in C$ such that $(V_\alpha\models\lnot\varphi_\zeta\mathrm{|}^\kappa_\alpha)^{V^\kappa/G}$, which contradicts the definition of $C$.
Now, let us show that (2) holds for the limit ordinal $\xi$. The definition of ``$X$ is $\Pi^1_\xi$-club'' is equivalent over $V_\kappa$ to \[\left(\bigwedge_{\eta<\xi}\Phi^\kappa_\eta(X)\right)\land(\exists C^*)\left[(\text{$C^*$ is club})\land (\forall\beta\in C^*)\left(\bigwedge_{\zeta< f^\kappa_\xi(\beta)}(X\cap\beta\in\Pi^1_\zeta(\beta)^+)\rightarrow\beta\in X\right)\right].\] We define a set $R_\xi\subseteq \kappa$ that codes all relevant information about which subsets of $\alpha$, for $\alpha<\kappa$, are $\Pi^1_\zeta$-indescribable for all $\zeta<f^\kappa_\xi(\alpha)$ as follows. We let $R_\xi\subseteq\kappa$ be such that for each regular $\alpha<\kappa$, if $\alpha$ is $\Pi^1_\zeta$-indescribable for all $\zeta<f^\kappa_\xi(\alpha)$, then the sequence \[\langle(R_\xi)_\eta:\alpha\leq\eta<2^\alpha\rangle\] is an enumeration of the subsets of $\alpha$ that are $\Pi^1_\zeta$-indescribable in $\alpha$ for all $\zeta<f^\kappa_\xi(\alpha)$. Otherwise, we define $(R_\xi)_\eta=\varnothing$. Now we let \[\bar\chi^\kappa_\xi(X)=(\exists C^*)\left[(\text{$C^*$ is club})\land(\forall\beta\in C^*)(\exists\eta (X\cap\beta=(R_\xi)_\eta)\rightarrow \beta\in X)\right]\] and \[\chi^\kappa_\xi(X)=\left(\bigwedge_{\eta<\xi}\Phi^\kappa_\eta(X)\right)\land\bar\chi^\kappa_\xi(X).\] Since the second part $\bar\chi^\kappa_\xi(X)$ of the definition of $\chi^\kappa_\xi(X)$ is $\Sigma^1_1$, it is also trivially $\Pi^1_2$, and thus we see that $\chi^\kappa_\xi(X)$ is $\Pi^1_\xi$ over $V_\kappa$. Clearly, for all $C\subseteq\kappa$ we have \[\text{$C$ is $\Pi^1_\xi$-club in $\kappa$}\iff V_\kappa\models\chi^\kappa_\xi(C).\] To complete the proof of (2), one may use Proposition \ref{proposition_framework2}, along with Theorem \ref{theorem_expressing_indescribability} to show that there is a club $D_\xi$ in $\kappa$ such that for all regular $\alpha\in D_\xi$ we have that for all $C\subseteq\kappa$, \[\text{$C\cap\alpha$ is $\Pi^1_{f^\kappa_\xi(\alpha)}$-club in $\alpha$}\iff V_\alpha\models\chi^\kappa_\xi(C)\mathrm{|}^\kappa_\alpha.\] Let us note that the remaining details are similar to the proof of Theorem \ref{theorem_expressing_indescribability}(2), and are therefore left to the reader. \end{proof}
\section{Higher $\xi$-stationarity, $\xi$-s-stationarity and derived topologies}\label{section_higher_derived_topologies}
In this section we define natural generalizations of Bagaria's notions of $\xi$-stationarity, $\xi$-s-stationarity and derived topologies. Given a regular cardinal $\mu$, we will define a sequence of topologies $\langle\tau_\xi:\xi<\mu^+\rangle$ on $\mu$ such that the sequence $\langle\tau_\xi:\xi<\mu\rangle$ is Bagaria's sequence of derived topologies, and Bagaria's characterization of nonisolated points in the spaces $(\mu,\tau_\xi)$ for $\xi<\mu$ has a natural generalization to $\tau_\xi$ for $\xi\in\mu^+\setminus\mu$ (see Theorem \ref{theorem_xi_s_nonisolated}). We also show that Bagaria's result, in which he obtains the nondiscreteness of the topologies $\tau_\xi$ for $\xi<\mu$ from an indescribability hypothesis, can be generalized to $\tau_\xi$ for all $\xi<\mu^+$ using higher indescribability (see Corollary \ref{corollary_nondiscreteness_from_indescribability}).
Let us now discuss a generalization of Bagaria's derived topologies. Recall that, under certain conditions, one can specify a topology on a set $X$ by stating what the limit point operation must be. If $d:P(X)\to P(X)$ is a function satisfying properties (1) and (2) in Definition \ref{definition_cantor_derivative}, then one can define a topology $\tau_d$ on $X$ by demanding that a set $C\subseteq X$ be closed if and only if $d(C)\subseteq C$. Furthermore, if $d$ also satisfies property (3) in Definition \ref{definition_cantor_derivative}, then $d$ equals the limit point operator in the space $(X,\tau_d)$.
\begin{definition}\label{definition_cantor_derivative} Given a set $X$, we say that a function $d:P(X)\to P(X)$ is a \emph{Cantor derivative} on $X$ provided that the following conditions hold. \begin{enumerate} \item $d(\varnothing)=\varnothing$. \item For all $A,B\in P(X)$ we have \begin{enumerate} \item $A\subseteq B$ implies $d(A)\subseteq d(B)$, \item $d(A\cup B)\subseteq d(A)\cup d(B)$ and \item for all $x\in X$, $x\in d(A)$ implies $x\in d(A\setminus\{x\})$. \end{enumerate} \item $d(d(A))\subseteq d(A)\cup A$. \end{enumerate} If $d$ satisfies only (1) and (2) then we say that $d$ is a \emph{pre-Cantor derivative}. \end{definition}
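The motivating example, which complements Fact \ref{fact_cantor_derivatives} below, is the limit point operator of an arbitrary topological space $(X,\sigma)$: setting \[d_\sigma(A)=\{x\in X:\text{every $U\in\sigma$ with $x\in U$ meets $A\setminus\{x\}$}\}\] yields a Cantor derivative on $X$. Conditions (1) and (2) are immediate from the definition, and (3) holds because $d_\sigma(A)$ is contained in the closed set $\overline{A}=A\cup d_\sigma(A)$, and a closed set contains the limit points of each of its subsets.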
\begin{fact}\label{fact_cantor_derivatives} If $d:P(X)\to P(X)$ is a pre-Cantor derivative on $X$ then the collection \[\tau=\{U\subseteq X: d(X\setminus U)\subseteq X\setminus U\}\] is a topology on $X$. Furthermore, if $d$ is a Cantor derivative on $X$ then \[d(A)=\{x\in X:\text{$x$ is a limit point of $A$ in $(X,\tau)$}\}\] for all $A\subseteq X$. \end{fact}
\begin{proof} Clearly $\varnothing\in\tau$ since $d(X)\subseteq X$. Furthermore, $X\in\tau$ because $d(\varnothing)=\varnothing$ by assumption. Suppose $I$ is some index set and for each $i\in I$ we have a set $C_i\subseteq X$ with $d(C_i)\subseteq C_i$. By (2a), it follows that for every $j\in I$ we have $d\left(\bigcap_{i\in I}C_i\right)\subseteq d(C_j)$ and hence $d\left(\bigcap_{i\in I}C_i\right)\subseteq \bigcap_{i\in I}C_i.$ Furthermore, if $I=\{0,1\}$ we have $d(C_0\cup C_1)\subseteq d(C_0)\cup d(C_1)\subseteq C_0\cup C_1$. Thus, $\tau$ is a topology on $X$.
For $A\subseteq X$, let $A'$ denote the set of limit points of $A$ in $(X,\tau)$. Let us show that $d(A)=A'$. Suppose $x\in d(A)$ and fix $U\in \tau$ with $x\in U$. For the sake of contradiction, suppose that $(U\cap A)\setminus\{x\}=\varnothing$, and notice that $A\setminus\{x\}\subseteq A\setminus U\subseteq X\setminus U$. Thus $d(A\setminus \{x\})\subseteq d(X\setminus U)\subseteq X\setminus U$, and since $x\in U$, this implies $x\notin d(A\setminus \{x\})$. But this implies $x\notin d(A)$ by (2c), a contradiction. Thus for any $A\subseteq X$ we have $d(A)\subseteq A'$.
For any set $A\subseteq X$, since the closure $\overline{A}=A\cup A'$ is the smallest closed set containing $A$, since $A\cup d(A)$ is closed (by (2b) and (3)) and since $d(A)\subseteq A'$, it follows that $\overline{A}=A\cup A'=A\cup d(A)$.
Now fix $A\subseteq X$. We have \begin{align*} x\in A' &\iff x\in \overline{A\setminus\{x\}}\\
&\iff x\in (A\setminus \{x\})\cup d(A\setminus\{x\})\\
&\iff x\in d(A\setminus\{x\})\\
&\iff x\in d(A). \end{align*}
\end{proof}
Given an ordinal $\delta$, Bagaria defined the sequence of derived topologies $\langle\tau_\xi:\xi<\delta\rangle$ on $\delta$ as follows.
\begin{definition}[Bagaria \cite{MR3894041}]\label{definition_bagaria} Let $\tau_0$ be the interval topology on $\delta$. That is, $\tau_0$ is the topology on $\delta$ generated\footnote{Recall that, given a set $X$ and a collection ${\mathcal B}\subseteq P(X)$, the \emph{topology generated by ${\mathcal B}$} is the smallest topology on $X$ which contains ${\mathcal B}$. That is, the topology generated by ${\mathcal B}$ is the collection of all unions of finite intersections of members of ${\mathcal B}$ together with the set $X$.} by the collection ${\mathcal B}_0$ consisting of $\{0\}$ and all open intervals of the form $(\alpha,\beta)$ where $\alpha<\beta\leq\delta$. We let $d_0:P(\delta)\to P(\delta)$ be the limit point operator of the space $(\delta,\tau_0)$. If $\xi<\delta$ is an ordinal and the sequences $\<B_\zeta:\zeta\leq\xi\rangle$, $\langle\tau_\zeta:\zeta\leq\xi\rangle$ and $\<d_\zeta:\zeta\leq\xi\rangle$ have been defined, we let $\tau_{\xi+1}$ be the topology generated by the collection \[{\mathcal B}_{\xi+1}={\mathcal B}_\xi\cup\{d_\xi(A): A\subseteq\delta\}\] and we let \[d_{\xi+1}(A)=\{\alpha<\delta:\text{$\alpha$ is a limit point of $A$ in the $\tau_{\xi+1}$ topology}\}.\] When $\xi<\delta$ is a limit ordinal, we define ${\mathcal B}_\xi=\bigcup_{\zeta<\xi}{\mathcal B}_\zeta$, let $\tau_\xi$ be the topology generated by ${\mathcal B}_\xi$ and define $d_\xi$ to be the limit point operator of the space $(\delta,\tau_\xi)$. \end{definition}
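As a quick illustration of Definitions \ref{definition_cantor_derivative} and \ref{definition_bagaria} (a side remark, not needed in what follows), the operator $d_0$ can be computed explicitly: for $A\subseteq\delta$ one has \[d_0(A)=\{\alpha<\delta:\text{$\alpha$ is a limit ordinal and }\sup(A\cap\alpha)=\alpha\},\] and it is straightforward to check that this operator satisfies conditions (1)--(3) of Definition \ref{definition_cantor_derivative}. For example, if $A=\{\omega\cdot(n+1):n<\omega\}$ and $\omega^2<\delta$, then $d_0(A)=\{\omega^2\}$ and $d_0(d_0(A))=\varnothing$.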
Bagaria proved that a point $\alpha<\delta$ is not isolated in $(\delta,\tau_\xi)$ if and only if it is $\xi$-s-reflecting (see \cite[Definition 2.8]{MR3894041} or Definition \ref{definition_xi_s_stationarity}). Since no ordinal $\alpha<\delta$ can be $\delta$-s-reflecting (see Remark \ref{remark_nontrivial}), it follows that the topology $\tau_\delta$ generated by $\bigcup_{\zeta<\delta}{\mathcal B}_\zeta$ is discrete. In what follows, by using diagonal Cantor derivatives, we extend Bagaria's definition of derived topologies to allow for more nontrivial cases. One may want to review Remark \ref{remark_example} before reading the following.
\begin{definition}\label{definition_tau_xi} Suppose $\mu$ is a regular cardinal. We define three sequences of functions $\langle{\mathcal B}_\xi:\xi<\mu^+\rangle$, $\langle{\mathcal T}_\xi:\xi<\mu^+\rangle$ and $\<d_\xi:\xi<\mu^+\rangle$, and one sequence $\langle\tau_\xi:\xi<\mu^+\rangle$ of topologies on $\mu$ by transfinite induction as follows. For $\xi<\mu$ we let $\tau_\xi$ and $d_\xi$ be defined as in Definition \ref{definition_bagaria}, and we let ${\mathcal B}_\xi$ and ${\mathcal T}_\xi$ be functions with domain $\mu$ such that for all $\alpha<\mu$, we have ${\mathcal T}_\xi(\alpha)=\tau_\xi$ and ${\mathcal B}_\xi(\alpha)={\mathcal B}_\xi$, where ${\mathcal B}_\xi$ is the subbasis for $\tau_\xi$ given in Definition \ref{definition_bagaria}.
Suppose $\xi\in\mu^+\setminus\mu$ and we have already defined $\langle{\mathcal B}_\zeta:\zeta<\xi\rangle$, $\langle{\mathcal T}_\zeta:\zeta<\xi\rangle$, $\<d_\zeta:\zeta<\xi\rangle$ and $\langle\tau_\zeta:\zeta<\xi\rangle$. We let ${\mathcal B}_\xi$ and ${\mathcal T}_\xi$ be the functions with domain $\mu$ such that for each $\alpha\in \mu$ we have \[{\mathcal B}_\xi(\alpha)={\mathcal B}_0\cup\{d_\zeta(A):\zeta\in F^\mu_\xi(\alpha)\land A\subseteq\mu\}\] and ${\mathcal T}_\xi(\alpha)$ is the topology on $\mu$ generated by ${\mathcal B}_\xi(\alpha)$. We define $d_\xi:P(\mu)\to P(\mu)$ by letting \[d_\xi(A)=\{\alpha<\mu:\text{$\alpha$ is a limit point of $A$ in the ${\mathcal T}_\xi(\alpha)$ topology}\}\] for $A\subseteq\mu$. Then we let $\tau_\xi$ be the topology\footnote{It is easily seen that this $d_\xi$ is a pre-Cantor derivative as in Definition \ref{definition_cantor_derivative}, and thus $\tau_\xi$ is in fact a topology on $\mu$.} \[\tau_\xi=\{U\subseteq\mu: d_\xi(\mu\setminus U)\subseteq\mu\setminus U\}.\] \end{definition}
For all $\xi<\mu^+$ and $\alpha<\mu$, since ${\mathcal T}_\xi(\alpha)$ is the topology generated by ${\mathcal B}_\xi(\alpha)$, it follows that the collection of finite intersections of members of ${\mathcal B}_\xi(\alpha)$ is a basis for ${\mathcal T}_\xi(\alpha)$. That is, the collection of sets of the form \[I\cap d_{\xi_0}(A_0)\cap\cdots\cap d_{\xi_{n-1}}(A_{n-1})\] where $n<\omega$, $I\in{\mathcal B}_0$ is an interval in $\mu$, the ordinals $\xi_0\leq\cdots\leq\xi_{n-1}$ are in $F^\mu_\xi(\alpha)$ and $A_i\subseteq\mu$ for $i<n$, is a basis for the ${\mathcal T}_\xi(\alpha)$ topology on $\mu$.
Next let us show that the diagonal Cantor derivatives $d_\xi$ in Definition \ref{definition_tau_xi} are in fact Cantor derivatives as in Definition \ref{definition_cantor_derivative}, and thus each $d_\xi$ is the Cantor derivative of the space $(\mu,\tau_\xi)$ for all $\xi<\mu^+$.
\begin{lemma}\label{lemma_d_xi_is_cantor} Suppose $\mu$ is regular. For all $\xi<\mu^+$ and all $A\subseteq\mu$ we have \[d_\xi(d_\xi(A))\subseteq d_\xi(A).\] \end{lemma}
\begin{proof} For $\xi<\mu$ this follows easily from the fact that $d_\xi$ is defined to be the Cantor derivative of the space $(\mu,\tau_\xi)$.
Suppose $\xi\in\mu^+\setminus\mu$ and $\alpha\in d_\xi(d_\xi(A))$. Then $\alpha$ is a limit point of the set \[d_\xi(A)=\{\beta<\mu:\text{$\beta$ is a limit point of $A$ in ${\mathcal T}_\xi(\beta)$}\}\] in the topology ${\mathcal T}_\xi(\alpha)$ on $\mu$ generated by ${\mathcal B}_\xi(\alpha)$. To show $\alpha\in d_\xi(A)$, we must show that $\alpha$ is a limit point of $A$ in the topology ${\mathcal T}_\xi(\alpha)$. Fix a basic open neighborhood $U$ of $\alpha$ in the ${\mathcal T}_\xi(\alpha)$ topology. Then $U$ is of the form \[I\cap d_{\xi_0}(A_0)\cap\cdots\cap d_{\xi_{n-1}}(A_{n-1})\] for some $\xi_i\in F^\mu_\xi(\alpha)$ and some $A_i\subseteq\mu$ where $i<n$. Since $\alpha$ is a limit point of $d_\xi(A)$ in the ${\mathcal T}_\xi(\alpha)$ topology on $\mu$ and since ${\mathcal B}_0\subseteq{\mathcal T}_\xi(\alpha)$, it follows that for all $\eta<\alpha$, $\alpha$ is a limit point of the set $d_\xi(A)\setminus\eta$ in the ${\mathcal T}_\xi(\alpha)$ topology. Since $\xi_i\in F^\mu_\xi(\alpha)$ for $i<n$ and since $\alpha$ is a limit ordinal, we can choose a $\beta<\alpha$ such that $\xi_i\in F^\mu_\xi(\beta)$ for all $i<n$. Since $\alpha$ is a limit point of $d_\xi(A)\setminus\beta$ in the ${\mathcal T}_\xi(\alpha)$ topology, we may choose an $\eta\in (d_\xi(A)\setminus\beta)\cap U\cap\alpha$. Since $\eta\geq\beta$ we have $\xi_i\in F^\mu_\xi(\eta)$ for all $i<n$, and thus $U$ is a finite intersection of members of ${\mathcal B}_\xi(\eta)$ and hence $U\in {\mathcal T}_\xi(\eta)$. But since $\eta$ is a limit point of $A$ in the ${\mathcal T}_\xi(\eta)$ topology we have $A\cap U\cap\eta\neq\varnothing$. Thus $\alpha\in d_\xi(A)$. \end{proof}
The following result is an easy consequence of Fact \ref{fact_cantor_derivatives} and Lemma \ref{lemma_d_xi_is_cantor}.
\begin{corollary} Suppose $\mu$ is a regular cardinal. For each $\xi<\mu^+$, the function $d_\xi$ is the Cantor derivative of the space $(\mu,\tau_\xi)$. \end{corollary}
Let us present the following generalizations of Bagaria's notions of $\xi$-stationarity and $\xi$-s-stationarity, which will allow us to characterize the nonisolated points of the spaces $(\mu,\tau_\xi)$ for $\xi<\mu^+$.
\begin{definition}\label{definition_xi_stationary} Suppose $\mu$ is a regular cardinal. A set $A\subseteq\mu$ is $0$-stationary in $\alpha<\mu$ if and only if $A$ is unbounded in $\alpha$. For $0<\xi<\alpha^+$, where $\alpha$ is regular, we say that $A$ is \emph{$\xi$-stationary in $\alpha$} if and only if for every $\zeta<\xi$, every set $S$ that is $\zeta$-stationary in $\alpha$ \emph{$\zeta$-reflects} to some $\beta\in A$, i.e., $S$ is $f^\alpha_\zeta(\beta)$-stationary in $\beta$. We say that an ordinal $\alpha<\mu$ is \emph{$\xi$-reflecting} if it is $\xi$-stationary in $\alpha$ as a subset of $\mu$. \end{definition}
\begin{definition}\label{definition_xi_s_stationarity} Suppose $\mu$ is a regular cardinal. A set $A\subseteq\mu$ is \emph{$0$-simultaneously stationary in $\alpha$} (\emph{$0$-s-stationary in $\alpha$} for short) if and only if $A$ is unbounded in $\alpha$. For $0<\xi<\alpha^+$, where $\alpha$ is regular, we say that $A$ is \emph{$\xi$-simultaneously stationary in $\alpha$} (\emph{$\xi$-s-stationary in $\alpha$} for short) if and only if for every $\zeta<\xi$, every pair of subsets $S$ and $T$ that are $\zeta$-s-stationary in $\alpha$ \emph{simultaneously $\zeta$-reflect} to some $\beta\in A$, i.e., $S$ and $T$ are both $f^\alpha_\zeta(\beta)$-s-stationary in $\beta$. We say that $\alpha$ is $\xi$-s-reflecting if it is $\xi$-s-stationary in $\alpha$. \end{definition}
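For orientation, let us record how the first levels of these hierarchies look (a standard observation, cf. \cite{MR3894041}, included here only as an illustration). If $\alpha$ is a regular uncountable cardinal, then a set $A\subseteq\alpha$ is $1$-stationary (equivalently, $1$-s-stationary) in $\alpha$ if and only if $A$ is stationary in $\alpha$ in the usual sense: if $A$ is stationary and $S,T\subseteq\alpha$ are unbounded, then the club of common limit points of $S$ and $T$ meets $A$; conversely, given a club $C\subseteq\alpha$, applying the definition to the pair $S=T=C$ yields a $\beta\in A$ in which $C$ is unbounded, and hence $\beta\in A\cap C$ since $C$ is closed. Similarly, an ordinal $\alpha$ is $1$-s-reflecting if and only if every two unbounded subsets of $\alpha$ have a common limit point below $\alpha$, which happens exactly when $\mathop{\rm cf}\nolimits(\alpha)>\omega$; combined with Bagaria's characterization recalled above, this identifies the nonisolated points of $(\delta,\tau_1)$ as the ordinals of uncountable cofinality.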
\begin{remark}\label{remark_nontrivial} Bagaria defined a set $A\subseteq\mu$ to be $\xi$-stationary in $\alpha<\mu$ if and only if for every $\zeta<\xi$, for every $S\subseteq\mu$ that is $\zeta$-stationary in $\alpha$ there is a $\beta\in A\cap\alpha$ such that $S$ is $\zeta$-stationary in $\beta$. Since $f^\alpha_\zeta$ equals the constant function $\zeta$ when $\zeta<\alpha$, it follows that Bagaria's notion of $A$ being $\xi$-stationary in $\alpha$ is equivalent to ours when $\xi<\alpha$. Bagaria comments in the paragraphs following \cite[Definition 2.6]{MR3894041} that, under his definition, no ordinal $\alpha$ can be $(\alpha+1)$-reflecting, because if $\alpha$ is the least such ordinal there is a $\beta<\alpha$ such that $\alpha\cap\beta=\beta$ is $\alpha$-stationary and thus $(\beta+1)$-stationary in $\beta$. Let us show that such an argument does \emph{not} work to rule out the existence of ordinals $\alpha$ which are $\alpha+1$-reflecting under our definition. Suppose $\alpha$ is $(\alpha+1)$-reflecting, as in Definition \ref{definition_xi_stationary}. Then there is some $\beta<\alpha$ that is $f^\alpha_\alpha(\beta)$-reflecting, but $f^\alpha_\alpha(\beta)=\beta$ and thus the conclusion is that $\beta$ is $\beta$-reflecting, and Bagaria shows that some ordinals (namely some large cardinals) $\beta$ can be $\beta$-reflecting. \end{remark}
In order to streamline the proof of the characterization of the nonisolated points in $(\mu,\tau_\xi)$ for $\xi<\mu^+$, we will use the following auxiliary notion of $\xi$-$\hat{\text{s}}$-stationarity, which is often equivalent to $f^\mu_\xi(\alpha)$-s-stationarity as shown in Lemma \ref{lemma_s_hat} below.
\begin{definition}\label{definition_xi_s_hat_stationary} Suppose $\mu$ is a regular cardinal. A set $A\subseteq\mu$ is \emph{$0$-simultaneously hat stationary in $\alpha$} ($0$-$\hat{\text{s}}$-stationary for short) if and only if $A$ is unbounded in $\alpha$. For $0<\xi<\mu^+$, we say that $A$ is \emph{$\xi$-simultaneously hat stationary in $\alpha$} (\emph{$\xi$-$\hat{\text{s}}$-stationary in $\alpha$} for short) if and only if for every $\zeta\in F^\mu_\xi(\alpha)$, every pair of subsets $S$ and $T$ of $\mu$ that are $\zeta$-$\hat{\text{s}}$-stationary in $\alpha$ \emph{simultaneously $\zeta$-$\hat{\text{s}}$-reflect} to some $\beta\in A$, i.e., $S$ and $T$ are both $\zeta$-$\hat{\text{s}}$-stationary in $\beta$. We say that $\alpha$ is \emph{$\xi$-$\hat{\text{s}}$-reflecting} if it is $\xi$-$\hat{\text{s}}$-stationary in $\alpha$. \end{definition}
\begin{remark} Notice that when $\xi<\mu$, a set $A\subseteq\mu$ is $\xi$-$\hat{\text{s}}$-stationary in $\alpha$ if and only if for all $\zeta\in F^\mu_\xi(\alpha)=\xi$, every pair of subsets $S$ and $T$ of $\mu$ that are $\zeta$-$\hat{\text{s}}$-stationary in $\alpha$ simultaneously $\zeta$-$\hat{\text{s}}$-reflect to some $\beta\in A$. Thus, for $\xi<\mu$, Definition \ref{definition_xi_s_hat_stationary} agrees with \cite[Definition 2.8]{MR3894041}. Furthermore, $A$ is $\mu$-$\hat{\text{s}}$-stationary in $\alpha$ if and only if it is $\alpha$-$\hat{\text{s}}$-stationary in $\alpha$. Also notice that for $\zeta<\xi<\mu$, there is a club $C_{\zeta,\xi}$ in $\mu$ such that for all $\alpha\in C_{\zeta,\xi}$ we have $F^\mu_\zeta(\alpha)\subseteq F^\mu_\xi(\alpha)$ and hence $\alpha$ being $\xi$-$\hat{\text{s}}$-stationary implies $\alpha$ is $\zeta$-$\hat{\text{s}}$-stationary. \end{remark}
We will need the following lemma, which generalizes \cite[Proposition 2.9]{MR3894041}.
\begin{lemma}\label{lemma_intersect_with_club} Suppose $\mu$ is a regular cardinal and $\xi<\mu^+$. There is a club $B_\xi\subseteq\mu$ such that for all regular $\alpha\in B_\xi$, if $A$ is $\xi$-$\hat{\text{s}}$-stationary (resp.\ $f^\mu_\xi(\alpha)$-s-stationary) in $\alpha$ and $C$ is a club subset of $\alpha$, then $A\cap C$ is also $\xi$-$\hat{\text{s}}$-stationary (resp.\ $f^\mu_\xi(\alpha)$-s-stationary) in $\alpha$.\end{lemma}
\begin{proof} We will prove the lemma for $\xi$-$\hat{\text{s}}$-stationarity; the proof for $\xi$-s-stationarity is similar. When $\xi<\mu$ the lemma follows directly from \cite[Proposition 2.9]{MR3894041}.
Suppose $\xi\in\mu^+\setminus\mu$ is a limit ordinal and the result holds for $\zeta<\xi$. For each $\zeta<\xi$ let $B_\zeta$ be the club subset of $\mu$ obtained from the inductive hypothesis. Let $B_\xi$ be a club subset of $\mu$ such that for all $\alpha\in B_\xi$ we have \begin{enumerate} \item[(i)] $\alpha\in\bigcap_{\zeta\in F^\mu_\xi(\alpha)}B_\zeta$, \item[(ii)] $\mathop{\rm ot}\nolimits(F^\mu_\xi(\alpha))$ is a limit ordinal and \item[(iii)] $(\forall\zeta\in F^\mu_\xi(\alpha))$ $F^\mu_\xi(\alpha)\cap\zeta=F^\mu_\zeta(\alpha)$. \end{enumerate} Suppose $\alpha\in B_\xi$, let $A\subseteq\mu$ be $\xi$-$\hat{\text{s}}$-stationary in $\alpha$ and let $C$ be a club subset of $\alpha$. Since $\mathop{\rm ot}\nolimits(F^\mu_\xi(\alpha))$ is a limit ordinal, to show that $A\cap C$ is $\xi$-$\hat{\text{s}}$-stationary in $\alpha$, it suffices to show that $A\cap C$ is $\eta$-$\hat{\text{s}}$-stationary in $\alpha$ for all $\eta\in F^\mu_\xi(\alpha)$. Since $F^\mu_\eta(\alpha)\subseteq F^\mu_\xi(\alpha)$ it follows that $A$ is $\eta$-$\hat{\text{s}}$-stationary in $\alpha$. Then, because $\alpha\in B_\eta$, it follows by the inductive hypothesis that $A\cap C$ is $\eta$-$\hat{\text{s}}$-stationary in $\alpha$.
Now suppose $\xi\in\mu^+\setminus\mu$ and the result holds for $\zeta\leq\xi$. We will show that it holds for $\xi+1$. For each $\zeta\leq\xi$, let $B_\zeta$ be the club subset of $\mu$ obtained by the inductive hypothesis. Let $B_{\xi+1}$ be a club subset of $\mu$ such that for all $\alpha\in B_{\xi+1}$ we have $\alpha\in \bigcap_{\zeta\in F^\mu_{\xi+1}(\alpha)} B_\zeta$. Suppose $\alpha\in B_{\xi+1}$, let $A$ be $(\xi+1)$-$\hat{\text{s}}$-stationary in $\alpha$ and suppose $C$ is a club subset of $\alpha$. To show that $A\cap C$ is $(\xi+1)$-$\hat{\text{s}}$-stationary in $\alpha$, fix any sets $S,T\subseteq\alpha$ that are $\zeta$-$\hat{\text{s}}$-stationary in $\alpha$ for some $\zeta\in F^\mu_{\xi+1}(\alpha)$. Since $\alpha\in B_\zeta$, it follows by the inductive hypothesis that both $S\cap C$ and $T\cap C$ are $\zeta$-$\hat{\text{s}}$-stationary in $\alpha$. Since $A$ is $(\xi+1)$-$\hat{\text{s}}$-stationary in $\alpha$, there is some $\beta\in A$ such that both $S\cap C$ and $T\cap C$ are $\zeta$-$\hat{\text{s}}$-stationary in $\beta$. Thus, $\beta\in A\cap C$ and both $S$ and $T$ are $\zeta$-$\hat{\text{s}}$-stationary in $\beta$, which establishes that $A\cap C$ is $(\xi+1)$-$\hat{\text{s}}$-stationary in $\alpha$. \end{proof}
\begin{lemma}\label{lemma_s_hat} Suppose $\mu$ is a regular cardinal. For all $\xi<\mu^+$ there is a club $C_\xi\subseteq\mu$ such that for all regular $\alpha\in C_\xi$ a set $X\subseteq\alpha$ is $f^\mu_\xi(\alpha)$-s-stationary in $\alpha$ if and only if it is $\xi$-$\hat{\text{s}}$-stationary in $\alpha$. \end{lemma}
\begin{proof} Suppose $\xi<\mu$. Let $C_\xi=\mu\setminus\xi$ and suppose $\alpha\in C_\xi$ is regular and $X\subseteq\alpha$. Since $F^\mu_\xi(\alpha)=f^\mu_\xi(\alpha)=\xi$, it is easy to see from the definitions that $X$ is $\xi$-s-stationary in $\alpha$ if and only if it is $\xi$-$\hat{\text{s}}$-stationary in $\alpha$.
Now suppose that $\xi\in\mu^+\setminus\mu$ is a limit ordinal and the result holds for $\zeta<\xi$; the case in which $\xi\in\mu^+\setminus\mu$ is a successor is easier and is therefore left to the reader. For each $\zeta<\xi$ let $C_\zeta$ be the club obtained by the inductive hypothesis, and let $B_\xi$ be the club obtained from Lemma \ref{lemma_intersect_with_club}. Let $C_\xi$ be a club subset of $\mu$ such that for all $\alpha\in C_\xi$ we have \begin{enumerate} \item[(i)] $\alpha\in B_\xi\cap \bigcap_{\zeta\in F^\mu_\xi(\alpha)}d_0(C_\zeta)$, \item[(ii)] $\mathop{\rm ot}\nolimits(F^\mu_\xi(\alpha))$ is a limit ordinal, \item[(iii)] $f^\mu_\xi(\alpha)=\bigcup_{\zeta\in F^\mu_\xi(\alpha)}f^\mu_\zeta(\alpha)$, \item[(iv)] for all $\zeta\in F^\mu_\xi(\alpha)$ there is a club $D^\alpha_{\xi,\zeta}$ in $\alpha$ such that for all $\beta\in D^\alpha_{\xi,\zeta}$ we have $f^\alpha_{f^\mu_\xi(\alpha)}(\beta)=f^\mu_\xi(\beta)$, \item[(v)] $(\forall\zeta\in F^\mu_\xi(\alpha))$ $F^\mu_\xi(\alpha)\cap \zeta=F^\mu_\zeta(\alpha)$. \end{enumerate} Suppose $X$ is $f^\mu_\xi(\alpha)$-s-stationary in $\alpha$ and fix sets $S,T\subseteq\alpha$ that are $\eta$-$\hat{\text{s}}$-stationary in $\alpha$ for some $\eta\in F^\mu_\xi(\alpha)$. Since $\alpha\in d_0(C_\eta)\subseteq C_\eta$, it follows by the inductive hypothesis that both $S$ and $T$ are $f^\mu_\eta(\alpha)$-s-stationary in $\alpha$. By (iii) we have $f^\mu_\eta(\alpha)<f^\mu_\xi(\alpha)$, and since $\alpha\in B_\xi$ it follows by Lemma \ref{lemma_intersect_with_club} that $X\cap C_\eta\cap D^\alpha_{\xi,\eta}$ is $f^\mu_\xi(\alpha)$-s-stationary in $\alpha$. Hence there is a $\beta\in X\cap C_\eta\cap D^\alpha_{\xi,\eta}$ such that both $S$ and $T$ are $f^\alpha_{f^\mu_\eta(\alpha)}(\beta)$-s-stationary in $\beta$. Since it follows from (v) that $f^\alpha_{f^\mu_\eta(\alpha)}(\beta)=f^\mu_\eta(\beta)$, both $S$ and $T$ are $f^\mu_\eta(\beta)$-s-stationary in $\beta$. Since $\beta\in C_\eta$ it follows that $S$ and $T$ are both $\eta$-$\hat{\text{s}}$-stationary in $\beta$. Thus $X$ is $\xi$-$\hat{\text{s}}$-stationary in $\alpha$. Conversely, suppose $X$ is $\xi$-$\hat{\text{s}}$-stationary in $\alpha$ and fix sets $S,T\subseteq\alpha$ that are $\eta$-s-stationary in $\alpha$ for some $\eta<f^\mu_\xi(\alpha)$. Let $\pi^\mu_{\xi,\alpha}:F^\mu_\xi(\alpha)\to f^\mu_\xi(\alpha)$ be the transitive collapse of $F^\mu_\xi(\alpha)$ and let $\hat\eta=(\pi^\mu_{\xi,\alpha})^{-1}(\eta)$. Since $\alpha\in C_{\hat\eta}$, it follows that $S$ and $T$ are $\hat\eta$-$\hat{\text{s}}$-stationary in $\alpha$. Since $X\cap D^\alpha_{\xi,\hat\eta}$ is $\xi$-$\hat{\text{s}}$-stationary in $\alpha$, there is a $\beta\in X\cap D^\alpha_{\xi,\hat\eta}\cap\alpha$ such that $S$ and $T$ are both $f^\mu_{\hat\eta}(\beta)$-$\hat{\text{s}}$-stationary in $\beta$. Since $\beta\in D^\alpha_{\xi,\hat\eta}$, the sets $S$ and $T$ are $f^\alpha_{f^\mu_{\hat\eta}(\alpha)}(\beta)$-$\hat{\text{s}}$-stationary in $\beta$. Since $\eta=f^\mu_{\hat\eta}(\alpha)$ we see that both $S$ and $T$ are $f^\alpha_\eta(\beta)$-s-stationary in $\beta$. Thus $X$ is $f^\mu_\xi(\alpha)$-s-stationary in $\alpha$. \end{proof}
In order to characterize the nonisolated points of the spaces $(\mu,\tau_\xi)$, for $\xi<\mu^+$, in terms of $\eta$-s-reflecting cardinals, we will need the following proposition, which generalizes \cite[Proposition 2.10]{MR3894041}.
\begin{proposition}\label{proposition_meat} Suppose $\mu$ is a regular cardinal. \begin{enumerate} \item For all $\xi<\mu^+$ there is a club $C_\xi\subseteq\mu$ such that for all $A\subseteq\mu$ we have \[d_\xi(A)\cap C_\xi=\{\alpha<\mu:\text{$A$ is $\xi$-$\hat{\text{s}}$-stationary in $\alpha$}\}\cap C_\xi.\] \item For all $\xi<\mu^+$ there is a club $D_\xi\subseteq\mu$ such that for all $\alpha\in D_\xi$ and all $A\subseteq\mu$ we have that $A$ is $(\xi+1)$-$\hat{\text{s}}$-stationary in $\alpha$ if and only if $A\cap d_\zeta(S)\cap d_\zeta(T)\neq\varnothing$ (equivalently, if and only if $A\cap d_\zeta(S)\cap d_\zeta(T)$ is $\zeta$-$\hat{\text{s}}$-stationary in $\alpha$) for every $\zeta\in F^\mu_{\xi+1}(\alpha)$ and every pair $S,T$ of subsets of $\alpha$ that are $\zeta$-$\hat{\text{s}}$-stationary in $\alpha$. \item For all $\xi<\mu^+$ there is a club $E_\xi\subseteq\mu$ such that for all $\alpha\in E_\xi$ and all $A\subseteq\mu$, if $A$ is $\xi$-$\hat{\text{s}}$-stationary in $\alpha$ and $A_i$ is $\zeta_i$-$\hat{\text{s}}$-stationary in $\alpha$ for some $\zeta_i\in F^\mu_\xi(\alpha)$, for all $i<n$ where $n<\omega$, then $A\cap d_{\zeta_0}(A_0)\cap \cdots\cap d_{\zeta_{n-1}}(A_{n-1})$ is $\xi$-$\hat{\text{s}}$-stationary in $\alpha$. \end{enumerate} \end{proposition}
\begin{proof} We will prove (1) -- (3) by simultaneous induction on $\xi$, for $\xi$-$\hat{\text{s}}$-stationarity. For $\xi<\mu$, (1) -- (3) follow directly from \cite[Proposition 2.10]{MR3894041}, taking $C_\xi=D_\xi=E_\xi=\mu$.
Let us first show that if $\xi\in\mu^+\setminus\mu$ is a limit ordinal and (1) -- (3) hold for all $\zeta<\xi$, then (1) -- (3) hold for $\xi$.
First we will show that for $\xi\in\mu^+\setminus\mu$ a limit ordinal, if (1) holds for $\zeta<\xi$ then (1) holds for $\xi$. For each $\zeta<\xi$, let $C_\zeta$ be the club subset of $\mu$ obtained from (1). Let $C_\xi$ be a club subset of $\mu$ such that for all $\alpha\in C_\xi$ we have \begin{enumerate} \item[(i)] $\alpha\in\bigcap_{\zeta\in F^\mu_\xi(\alpha)}C_\zeta$, \item[(ii)] $\mathop{\rm ot}\nolimits(F^\mu_\xi(\alpha))$ is a limit ordinal and \item[(iii)] $(\forall\zeta\in F^\mu_\xi(\alpha))$ $F^\mu_\xi(\alpha)\cap\zeta=F^\mu_\zeta(\alpha)$. \end{enumerate} Now fix $A\subseteq\mu$ and suppose $\alpha\in d_\xi(A)\cap C_\xi$. Then $\alpha$ is a limit point of $A$ in the ${\mathcal T}_\xi(\alpha)$ topology on $\mu$. For each $\zeta\in F^\mu_\xi(\alpha)$ we have $F^\mu_\zeta(\alpha)\subseteq F^\mu_\xi(\alpha)$, which implies ${\mathcal T}_\zeta(\alpha)\subseteq{\mathcal T}_\xi(\alpha)$, and hence $\alpha$ is a limit point of $A$ in the ${\mathcal T}_\zeta(\alpha)$ topology on $\mu$. Thus $\alpha\in \bigcap_{\zeta\in F^\mu_\xi(\alpha)}d_\zeta(A)$. Since $\alpha\in \bigcap_{\zeta\in F^\mu_\xi(\alpha)}C_\zeta$, it follows that $A$ is $\zeta$-$\hat{\text{s}}$-stationary in $\alpha$ for all $\zeta\in F^\mu_\xi(\alpha)$. By (ii) and (iii), this implies that $A$ is $\xi$-$\hat{\text{s}}$-stationary in $\alpha$. Conversely, suppose $A$ is $\xi$-$\hat{\text{s}}$-stationary in $\alpha$ and $\alpha\in C_\xi$. To show that $\alpha\in d_\xi(A)$ we must show that $\alpha$ is a limit point of $A$ in the ${\mathcal T}_\xi(\alpha)$ topology on $\mu$ generated by ${\mathcal B}_\xi(\alpha)$. Fix a basic open neighborhood $U$ of $\alpha$ in ${\mathcal T}_\xi(\alpha)$. Then $U$ is of the form \[I\cap d_{\zeta_0}(A_0)\cap\cdots\cap d_{\zeta_{n-1}}(A_{n-1})\] where $I$ is an interval in $\mu$, $n<\omega$, and for all $i<n$ we have $\zeta_i\in F^\mu_\xi(\alpha)$ and $A_i\subseteq\mu$. By (ii) we can choose some $\eta\in F^\mu_\xi(\alpha)$ with $\eta>\max\{\zeta_i: i<n\}$. By (iii), for each $i<n$ we have $\zeta_i\in F^\mu_\xi(\alpha)\cap\eta=F^\mu_\eta(\alpha)$ and hence $U$ is an open neighborhood of $\alpha$ in the ${\mathcal T}_\eta(\alpha)$ topology. Since $F^\mu_\eta(\alpha)\subseteq F^\mu_\xi(\alpha)$, it follows that $A$ is $\eta$-$\hat{\text{s}}$-stationary in $\alpha$, and since $\alpha\in C_\eta$ we have that $\alpha\in d_\eta(A)$. Thus $\alpha$ is a limit point of $A$ in the ${\mathcal T}_\eta(\alpha)$ topology, so $A\cap U\setminus\{\alpha\}\neq\varnothing$. This shows that $\alpha$ is a limit point of $A$ in the ${\mathcal T}_\xi(\alpha)$ topology.
Let us show that for $\xi\in \mu^+\setminus\mu$ a limit ordinal, if (3) holds for $\zeta<\xi$, then (3) holds for $\xi$. Let $E_\xi$ be a club subset of $\mu$ such that for all $\alpha\in E_\xi$ we have \begin{enumerate} \item[(i)] $\alpha\in \bigcap_{\zeta\in F^\mu_\xi(\alpha)}E_\zeta$, \item[(ii)] $\mathop{\rm ot}\nolimits(F^\mu_\xi(\alpha))$ is a limit ordinal and \item[(iii)] $(\forall\zeta\in F^\mu_\xi(\alpha))$ $F^\mu_\xi(\alpha)\cap\zeta=F^\mu_\zeta(\alpha)$. \end{enumerate} Suppose $\alpha\in E_\xi$. Let $A\subseteq\mu$ be $\xi$-$\hat{\text{s}}$-stationary in $\alpha$ and, for $i<n$, suppose $A_i$ is $\zeta_i$-$\hat{\text{s}}$-stationary in $\alpha$ for some $\zeta_i\in F^\mu_\xi(\alpha)$. We must show that $A\cap d_{\zeta_0}(A_0)\cap\cdots\cap d_{\zeta_{n-1}}(A_{n-1})$ is $\xi$-$\hat{\text{s}}$-stationary in $\alpha$. Fix a pair of sets $S,T\subseteq\mu$ that are $\zeta$-$\hat{\text{s}}$-stationary in $\alpha$ for some $\zeta\in F^\mu_\xi(\alpha)$. Using (ii), choose $\eta\in F^\mu_\xi(\alpha)$ with $\eta>\max(\{\zeta_i: i< n\}\cup\{\zeta\})$. Since $F^\mu_\xi(\alpha)\cap\eta=F^\mu_\eta(\alpha)$, it follows that $A$ is $\eta$-$\hat{\text{s}}$-stationary in $\alpha$, and by our assumption that (3) holds for $\eta<\xi$ and the fact that $\alpha\in E_\eta$, it follows that $A\cap d_{\zeta_0}(A_0)\cap\cdots\cap d_{\zeta_{n-1}}(A_{n-1})$ is $\eta$-$\hat{\text{s}}$-stationary in $\alpha$. Thus, there is a $\beta\in A\cap d_{\zeta_0}(A_0)\cap\cdots\cap d_{\zeta_{n-1}}(A_{n-1})$ such that both $S$ and $T$ are $\zeta$-$\hat{\text{s}}$-stationary in $\beta$.
Now we will show that for a limit ordinal $\xi\in\mu^+\setminus\mu$, if (1) and (3) hold for $\zeta\leq\xi$, then (2) holds for $\xi$. For each $\zeta\leq\xi$, let $B_\zeta$ be the club subset of $\mu$ obtained from Lemma \ref{lemma_intersect_with_club}. Let $D_\xi$ be a club subset of $\mu$ such that for all $\alpha\in D_\xi$ we have \begin{enumerate}[(i)] \item $\mathop{\rm ot}\nolimits(F^\mu_\xi(\alpha))$ is a limit ordinal, \item $(\forall\zeta\in F^\mu_\xi(\alpha))$ $F^\mu_\xi(\alpha)\cap\zeta=F^\mu_\zeta(\alpha)$ and \item $\alpha\in\bigcap_{\zeta\in F^\mu_{\xi+1}(\alpha)}(B_\zeta\cap d_0(C_\zeta)\cap d_0(E_\zeta))$ where the $C_\zeta$'s and $E_\zeta$'s are obtained by the inductive hypothesis from (1) and (3) respectively. \end{enumerate} Suppose $\alpha\in D_\xi$. For the forward direction of (2), let $A$ be $(\xi+1)$-$\hat{\text{s}}$-stationary in $\alpha$ and fix a pair $S,T$ of subsets of $\alpha$ that are $\zeta$-$\hat{\text{s}}$-stationary in $\alpha$ for some $\zeta\in F^\mu_{\xi+1}(\alpha)$. Since $\alpha\in d_0(C_\zeta)$, it follows that $C_\zeta$ is closed and unbounded in $\alpha$ and hence the set $A\cap C_\zeta$ is $(\xi+1)$-$\hat{\text{s}}$-stationary in $\alpha$. Hence there exists a $\beta\in A\cap C_\zeta$ such that $S$ and $T$ are both $\zeta$-$\hat{\text{s}}$-stationary in $\beta$, and since $\beta\in C_\zeta$ we have $\beta\in A\cap d_\zeta(S)\cap d_\zeta(T)$ by (1). To see that $A\cap d_\zeta(S)\cap d_\zeta(T)$ is $\zeta$-$\hat{\text{s}}$-stationary in $\alpha$, fix sets $X,Y\subseteq\alpha$ that are $\eta$-$\hat{\text{s}}$-stationary in $\alpha$ for some $\eta\in F^\mu_\zeta(\alpha)$. Since $\alpha\in E_\zeta$, it follows by (3) that $S\cap d_\eta(X)$ and $T\cap d_\eta(Y)$ are $\zeta$-$\hat{\text{s}}$-stationary in $\alpha$. Since $A\cap C_\zeta$ is $(\xi+1)$-$\hat{\text{s}}$-stationary in $\alpha$ there is some $\beta\in A\cap C_\zeta$ such that $S\cap d_\eta(X)$ and $T\cap d_\eta(Y)$ are both $\zeta$-$\hat{\text{s}}$-stationary in $\beta$. Since $\beta\in C_\zeta$, it follows that $\beta\in A\cap C_\zeta\cap d_\zeta(S\cap d_\eta(X))\cap d_\zeta(T\cap d_\eta(Y))\neq\varnothing$. Now we have \[\varnothing\neq A\cap C_\zeta\cap d_\zeta(S\cap d_\eta(X))\cap d_\zeta(T\cap d_\eta(Y))\subseteq A\cap C_\zeta\cap d_\zeta(S)\cap d_\zeta(T)\cap d_\eta(X)\cap d_\eta(Y),\] and hence $A\cap d_\zeta(S)\cap d_\zeta(T)$ is $\zeta$-$\hat{\text{s}}$-stationary in $\alpha$.
For the reverse direction of (2), suppose that $\alpha\in D_\xi$ and for all $\zeta\in F^\mu_{\xi+1}(\alpha)$, if $S,T\subseteq\alpha$ are both $\zeta$-$\hat{\text{s}}$-stationary in $\alpha$ then $A\cap d_\zeta(S)\cap d_\zeta(T)\neq\varnothing$. To show that $A$ is $(\xi+1)$-$\hat{\text{s}}$-stationary in $\alpha$, fix $\zeta\in F^\mu_{\xi+1}(\alpha)$ and suppose $S,T\subseteq\alpha$ are $\zeta$-$\hat{\text{s}}$-stationary in $\alpha$. By Lemma \ref{lemma_intersect_with_club} and the fact that $\alpha\in B_\zeta\cap d_0(C_\zeta)$, it follows that $S\cap C_\zeta$ and $T$ are both $\zeta$-$\hat{\text{s}}$-stationary in $\alpha$. Thus, by (1), there is a $\beta\in A\cap d_\zeta(S\cap C_\zeta)\cap d_\zeta(T)$. Now since $\beta\in C_\zeta\cap d_\zeta(S)\cap d_\zeta(T)$, it follows by (1) that $S$ and $T$ are both $\zeta$-$\hat{\text{s}}$-stationary in $\beta$. Hence $A$ is $(\xi+1)$-$\hat{\text{s}}$-stationary in $\alpha$.
It remains to show that if $\xi\in\mu^+\setminus\mu$ is an ordinal and (1), (2) and (3) hold for $\zeta\leq\xi$, then (1), (2) and (3) also hold for $\xi+1$.
Given that (1), (2) and (3) hold for $\zeta\leq\xi$, let us show that (3) holds for $\xi+1$. For $\zeta\leq\xi$, let $C_\zeta$, $D_\zeta$ and $E_\zeta$ be the club subsets of $\mu$ obtained from (1), (2) and (3) respectively. For each $\zeta\leq\xi$, let $B_\zeta$ be the club subset of $\mu$ obtained from Lemma \ref{lemma_intersect_with_club}. Let $E_{\xi+1}$ be a club subset of $\mu$ such that for all $\alpha\in E_{\xi+1}$ we have \begin{enumerate} \item[(i)] $\alpha\in \bigcap_{\zeta\in F^\mu_{\xi+1}(\alpha)} (B_\zeta\cap d_0(C_\zeta)\cap D_\zeta\cap E_\zeta)$, \item[(ii)] $\alpha\in C_\xi\cap D_\xi\cap E_\xi$ and \item[(iii)] $(\forall\zeta\in F^\mu_{\xi+1}(\alpha))$ $F^\mu_{\xi+1}(\alpha)\cap\zeta=F^\mu_\zeta(\alpha)$. \end{enumerate} Suppose $\alpha\in E_{\xi+1}$ and $A\subseteq\mu$ is $(\xi+1)$-$\hat{\text{s}}$-stationary in $\alpha$. Let $n<\omega$ and for each $i<n$ suppose $\zeta_i\in F^\mu_{\xi+1}(\alpha)$ and $A_i$ is $\zeta_i$-$\hat{\text{s}}$-stationary in $\alpha$. We must show that $A\cap d_{\zeta_0}(A_0)\cap \cdots\cap d_{\zeta_{n-1}}(A_{n-1})$ is $(\xi+1)$-$\hat{\text{s}}$-stationary in $\alpha$. Fix sets $S,T\subseteq\alpha$ which are $\zeta$-$\hat{\text{s}}$-stationary in $\alpha$ for some $\zeta\in F^\mu_{\xi+1}(\alpha)$. We must show that there is a $\beta\in A\cap d_{\zeta_0}(A_0)\cap \cdots\cap d_{\zeta_{n-1}}(A_{n-1})$ such that $S\cap\beta$ and $T\cap\beta$ are $\zeta$-$\hat{\text{s}}$-stationary in $\beta$. Since $\alpha\in C_\zeta$, it follows by an inductive application of (1) that in order to prove (3) holds for $\xi+1$, it suffices to show that \begin{align} A\cap d_{\zeta_0}(A_0)\cap\cdots\cap d_{\zeta_{n-1}}(A_{n-1})\cap d_\zeta(S)\cap d_\zeta(T)\cap C_\zeta&\neq\varnothing.\label{eqn_for_3} \end{align}
Let us proceed to prove (\ref{eqn_for_3}) by induction on $n$. First, let us consider the case in which $\zeta_0=\zeta$. Since $\alpha\in D_\zeta$ and $A$ is $(\xi+1)$-$\hat{\text{s}}$-stationary in $\alpha$, it follows inductively from (2) that the set $d_\zeta(S)\cap d_\zeta(T)\supseteq A\cap d_\zeta(S)\cap d_\zeta(T)$ is $\zeta$-$\hat{\text{s}}$-stationary in $\alpha$. Now since $A$ is $(\xi+1)$-$\hat{\text{s}}$-stationary in $\alpha$, the sets $A_0$ and $d_\zeta(S)\cap d_\zeta(T)$ are $\zeta$-$\hat{\text{s}}$-stationary in $\alpha$ and since $\alpha\in D_\zeta$, it follows from (2) that $A\cap d_{\zeta_0}(A_0)\cap d_\zeta(d_\zeta(S)\cap d_\zeta(T))$ is $\zeta$-$\hat{\text{s}}$-stationary in $\alpha$. By Lemma \ref{lemma_intersect_with_club}, since $\alpha\in B_\zeta\cap d_0(C_\zeta)$ we have that $A\cap d_{\zeta_0}(A_0)\cap d_\zeta(d_\zeta(S)\cap d_\zeta(T))\cap C_\zeta$ is $\zeta$-$\hat{\text{s}}$-stationary in $\alpha$. Since $A\cap d_{\zeta_0}(A_0)\cap d_\zeta(d_\zeta(S)\cap d_\zeta(T))\cap C_\zeta\subseteq A\cap d_{\zeta_0}(A_0)\cap d_\zeta(S)\cap d_\zeta(T)\cap C_\zeta$, this establishes (\ref{eqn_for_3}) in case $n=1$ and $\zeta_0=\zeta$. Second, let us consider the case in which $n=1$ and $\zeta_0<\zeta$. Since $F^\mu_\zeta(\alpha)\subseteq F^\mu_{\xi+1}(\alpha)$, it follows that $A$ is $\zeta$-$\hat{\text{s}}$-stationary in $\alpha$. Since $\alpha\in E_\zeta$ and $\zeta\in F^\mu_{\xi+1}(\alpha)$, we may inductively apply (3) to see that $A\cap d_{\zeta_0}(A_0)$ and thus also $d_{\zeta_0}(A_0)$ is $\zeta$-$\hat{\text{s}}$-stationary in $\alpha$. Hence because $\alpha\in D_\zeta$, it follows by an inductive application of (2) that $A\cap d_\zeta(d_{\zeta_0}(A_0))\cap d_\zeta(d_\zeta(S)\cap d_\zeta(T))$ is $\zeta$-$\hat{\text{s}}$-stationary in $\alpha$ and by Lemma \ref{lemma_intersect_with_club} and the fact that $\alpha\in B_\zeta\cap d_0(C_\zeta)$, we see that the set $A\cap d_\zeta(d_{\zeta_0}(A_0))\cap d_\zeta(d_\zeta(S)\cap d_\zeta(T))\cap C_\zeta$ is also $\zeta$-$\hat{\text{s}}$-stationary in $\alpha$. Since $A\cap d_\zeta(d_{\zeta_0}(A_0))\cap d_\zeta(d_\zeta(S)\cap d_\zeta(T))\cap C_\zeta\subseteq A\cap d_{\zeta_0}(A_0)\cap d_\zeta(S)\cap d_\zeta(T)\cap C_\zeta$, this establishes (\ref{eqn_for_3}) in the second case where $n=1$ and $\zeta_0<\zeta$. Thirdly, suppose $n=1$ and $\zeta_0>\zeta$. Then by an inductive application of (2), the set $A\cap d_{\zeta_0}(A_0)$ is $\zeta_0$-$\hat{\text{s}}$-stationary in $\alpha$. Since $\alpha \in B_\zeta\cap d_0(C_{\zeta_0})$, it follows by Lemma \ref{lemma_intersect_with_club} that $A\cap d_{\zeta_0}(A_0)\cap C_\zeta$ is also $\zeta_0$-$\hat{\text{s}}$-stationary in $\alpha$. Since $\zeta\in F^\mu_{\zeta_0}(\alpha)$, we see that there is some $\beta\in A\cap d_{\zeta_0}(A_0)\cap C_\zeta$ such that both $S$ and $T$ are $\zeta$-$\hat{\text{s}}$-stationary in $\beta$, and thus $\beta\in A\cap d_{\zeta_0}(A_0)\cap d_\zeta(S)\cap d_\zeta(T)\cap C_\zeta$. This establishes that (\ref{eqn_for_3}) holds for $n=1$.
Now suppose $n>1$. Since $\alpha\in D_\zeta$, it follows inductively by (2) that $d_\zeta(S)\cap d_\zeta(T)$ is $\zeta$-$\hat{\text{s}}$-stationary in $\alpha$. Since $\alpha\in E_\zeta$, it follows by an inductive application of (3) that $d_\zeta(d_\zeta(S)\cap d_\zeta(T))$ is $\mu$-$\hat{\text{s}}$-stationary in $\alpha$. Furthermore, since $\alpha\in C_{\zeta_{n-1}}$ we have that $A_{n-1}$ is $\zeta_{n-1}$-$\hat{\text{s}}$-stationary in $\alpha$ and thus we see that, again by an inductive application of (3) using the fact that $\alpha\in E_{\zeta_{n-1}}$, the set $d_{\zeta_{n-1}}(A_{n-1})$ is $\zeta_{n-1}$-$\hat{\text{s}}$-stationary in $\alpha$. Also, by the inductive hypothesis on $n$, the set $A\cap d_{\zeta_0}(A_0)\cap \cdots\cap d_{\zeta_{n-2}}(A_{n-2})$ is $(\xi+1)$-$\hat{\text{s}}$-stationary in $\alpha$. Therefore, by an inductive application of (2), the set \[A\cap d_{\zeta_0}(A_0)\cap \cdots\cap d_{\zeta_{n-2}}(A_{n-2})\cap d_{\zeta_{n-1}}(d_{\zeta_{n-1}}(A_{n-1}))\cap d_\zeta(d_\zeta(S)\cap d_\zeta(T))\cap C_\zeta\] which is contained in \[A\cap d_{\zeta_0}(A_0)\cap \cdots\cap d_{\zeta_{n-2}}(A_{n-2})\cap d_{\zeta_{n-1}}(A_{n-1})\cap d_\zeta(S)\cap d_\zeta(T)\cap C_\zeta\] is $\mu$-$\hat{\text{s}}$-stationary in $\alpha$. This establishes (\ref{eqn_for_3}) and hence (3) holds for $\xi+1$.
Next, given that (1) and (2) hold for $\zeta\leq\xi$ and (3) holds for $\zeta\leq\xi+1$, let us show that (1) holds for $\xi+1$. For each $\zeta\leq\xi$, let $C_\zeta$ be the club subset of $\mu$ obtained from (1). Also let $E_{\xi+1}$ be the club obtained from (3). For each $\zeta\leq\xi+1$, let $B_\zeta$ be the club subset of $\mu$ obtained from Lemma \ref{lemma_intersect_with_club}. Now we let $C_{\xi+1}$ be a club subset of $\mu$ such that for all $\alpha\in C_{\xi+1}$ we have \begin{enumerate}
\item[(i)] $\alpha\in \bigcap_{\zeta\in F^\mu_{\xi+1}(\alpha)} (B_\zeta\cap d_0(C_\zeta))$ and \item[(ii)] $\alpha\in B_{\xi+1}\cap E_{\xi+1}$. \end{enumerate} Suppose $\alpha\in d_{\xi+1}(A)\cap C_{\xi+1}$. Then $\alpha$ is a limit point of $A$ in the ${\mathcal T}_{\xi+1}(\alpha)$ topology on $\mu$. To show that $A$ is $(\xi+1)$-$\hat{\text{s}}$-stationary in $\alpha$, fix $\zeta\in F^\mu_{\xi+1}(\alpha)$ and suppose $S,T\subseteq\alpha$ are $\zeta$-$\hat{\text{s}}$-stationary in $\alpha$. Since $\alpha\in C_{\xi+1}$ we have $\alpha\in d_0(C_\zeta)$ and thus $\alpha\in d_\zeta(S)\cap d_\zeta(T)$. Since $d_0(C_\zeta)\cap d_\zeta(S)\cap d_\zeta(T)\in {\mathcal B}_{\xi+1}(\alpha)$ is a basic open neighborhood of $\alpha$ in the ${\mathcal T}_{\xi+1}(\alpha)$ topology, and since $\alpha\in d_{\xi+1}(A)$, it follows that there is some $\beta\in A\cap d_0(C_\zeta)\cap d_\zeta(S)\cap d_\zeta(T)\setminus\{\alpha\}$. Since $\beta\in C_\zeta$, it follows by the inductive hypothesis on (1) that both $S$ and $T$ are $\zeta$-$\hat{\text{s}}$-stationary in $\beta$. Thus, $A$ is $(\xi+1)$-$\hat{\text{s}}$-stationary in $\alpha$.
Now suppose $\alpha\in C_{\xi+1}$ and $A$ is $(\xi+1)$-$\hat{\text{s}}$-stationary in $\alpha$. To show that $\alpha\in d_{\xi+1}(A)$, fix a basic open set $U\in{\mathcal B}_{\xi+1}(\alpha)$ in the ${\mathcal T}_{\xi+1}(\alpha)$ topology with $\alpha\in U$. Then $U$ is of the form $I\cap d_{\zeta_0}(A_0)\cap\cdots\cap d_{\zeta_{n-1}}(A_{n-1})$, where $I\in{\mathcal B}_0(\alpha)$ is an interval in $\mu$, $n<\omega$ and for all $i<n$ we have $\zeta_i\in F^\mu_{\xi+1}(\alpha)$ and $A_i\subseteq\mu$. Since $\alpha\in C_{\xi+1}$ and $\alpha\in I\cap d_{\zeta_0}(A_0)\cap\cdots\cap d_{\zeta_{n-1}}(A_{n-1})$, it follows by the inductive hypothesis that $A_i$ is $\zeta_i$-$\hat{\text{s}}$-stationary in $\alpha$ for all $i<n$. Then since $A$ is $(\xi+1)$-$\hat{\text{s}}$-stationary in $\alpha$, it follows from the fact that (3) holds for $\xi+1$ and $\alpha\in E_{\xi+1}$, that $A\cap d_{\zeta_0}(A_0)\cap\cdots\cap d_{\zeta_{n-1}}(A_{n-1})$ is $(\xi+1)$-$\hat{\text{s}}$-stationary in $\alpha$. Furthermore, since $I\cap\alpha$ is a club subset of $\alpha$ and $\alpha\in B_{\xi+1}$, Lemma \ref{lemma_intersect_with_club} implies that $A\cap I\cap d_{\zeta_0}(A_0)\cap\cdots\cap d_{\zeta_{n-1}}(A_{n-1})\cap\alpha$ is $(\xi+1)$-$\hat{\text{s}}$-stationary in $\alpha$ and is thus nonempty. This establishes that $\alpha$ is a limit point of $A$ in the ${\mathcal T}_{\xi+1}(\alpha)$ topology, that is, $\alpha\in d_{\xi+1}(A)$.
Finally, given that (1) and (3) hold for $\zeta\leq\xi+1$, let us show that (2) holds for $\xi+1$. Let $D_{\xi+1}$ be a club subset of $\mu$ such that for all $\alpha\in D_{\xi+1}$ we have \begin{enumerate} \item[(i)] $(\forall\zeta\in F^\mu_{\xi+2}(\alpha))$ $F^\mu_{\xi+2}(\alpha)\cap\zeta=F^\mu_\zeta(\alpha)$; \item[(ii)] $\alpha\in\bigcap_{\zeta\in F^\mu_{\xi+2}(\alpha)}C_\zeta\cap E_\zeta$; \item[(iii)] $\alpha\in B_{\xi+2}\cap C_{\xi+1}$ where $B_{\xi+2}$ is the club subset of $\mu$ obtained from Lemma \ref{lemma_intersect_with_club} and $C_{\xi+1}$ is obtained from our inductive assumption on (1); and \item[(iv)] for all $\zeta\in F^\mu_{\xi+2}(\alpha)$ and all $\eta\in F^\mu_\zeta(\alpha)$ we have \[\alpha\in d_0(\{\beta<\mu: F^\mu_\eta(\beta)\subseteq F^\mu_\zeta(\beta)\}).\] \end{enumerate} Suppose $A$ is $(\xi+2)$-$\hat{\text{s}}$-stationary in $\alpha$ and $\alpha\in D_{\xi+1}$. Let $S,T\subseteq\alpha$ be $\zeta$-$\hat{\text{s}}$-stationary in $\alpha$ for some $\zeta\in F^\mu_{\xi+2}(\alpha)$. We must show that $A\cap d_\zeta(S)\cap d_\zeta(T)$ is $\zeta$-$\hat{\text{s}}$-stationary in $\alpha$. Fix $\eta\in F^\mu_\zeta(\alpha)$ and suppose $X$ and $Y$ are $\eta$-$\hat{\text{s}}$-stationary subsets of $\alpha$. By an inductive application of (1), it will suffice to show that \[A\cap d_\zeta(S)\cap d_\zeta(T)\cap d_\eta(X)\cap d_\eta(Y)\cap C_\eta\neq\varnothing.\] Since $\alpha\in E_\zeta$ it follows inductively by (3) that the sets $S\cap d_\eta(X)$ and $T\cap d_\eta(Y)$ are $\zeta$-$\hat{\text{s}}$-stationary in $\alpha$. By (iv) the set $\{\beta<\alpha: F^\mu_\eta(\beta)\subseteq F^\mu_\zeta(\beta)\}$ is club in $\alpha$ and therefore, by Lemma \ref{lemma_intersect_with_club} since $\alpha\in B_{\xi+2}$, the set \[A\cap\{\beta<\alpha: F^\mu_\eta(\beta)\subseteq F^\mu_\zeta(\beta)\}\] is $(\xi+2)$-$\hat{\text{s}}$-stationary in $\alpha$. Since $\zeta\in F^\mu_{\xi+2}(\alpha)$ and since $\alpha\in E_\zeta$, it follows by (3) for $\zeta$ that there is some $\beta\in A$ such that \[\beta\in d_\zeta(S\cap d_\eta(X))\cap d_\zeta(T\cap d_\eta(Y)).\] Thus we have \[\beta\in d_\zeta(S)\cap d_\zeta(d_\eta(X))\cap d_\zeta(T)\cap d_\zeta(d_\eta(Y)).\] Since $F^\mu_\eta(\beta)\subseteq F^\mu_\zeta(\beta)$, it follows that ${\mathcal T}_\eta(\beta)\subseteq {\mathcal T}_\zeta(\beta)$, and therefore we obtain \[\beta\in d_\zeta(S)\cap d_\eta(d_\eta(X))\cap d_\zeta(T)\cap d_\eta(d_\eta(Y)).\] By Lemma \ref{lemma_d_xi_is_cantor}, we have \[\beta\in d_\zeta(S)\cap d_\zeta(T)\cap d_\eta(X) \cap d_\eta(Y)\] as desired. \end{proof}
Now we are ready to characterize the nonisolated points of the spaces $(\mu,\tau_\xi)$ (on a club) in terms of $\eta$-$\hat{\text{s}}$-reflecting cardinals. The following is a generalization of \cite[Theorem 2.11]{MR3894041}.
\begin{theorem}\label{theorem_xi_s_hat_nonisolated} Suppose $\mu$ is a regular cardinal. For all $\xi<\mu^+$ if $C_\xi$ is the club subset of $\mu$ obtained from Proposition \ref{proposition_meat}(1), then for all $\alpha\in C_\xi$, the ordinal $\alpha$ is not isolated in the $\tau_\xi$ topology on $\mu$ if and only if $\alpha$ is $\xi$-$\hat{\text{s}}$-reflecting. \end{theorem}
\begin{proof} For $\xi<\mu$ the result follows directly from \cite[Theorem 2.11]{MR3894041}.
Suppose $\xi\in\mu^+\setminus\mu$ and $\alpha\in C_\xi$. By Definition \ref{definition_tau_xi}, we have \begin{align*} \text{$\alpha$ is not isolated in the $\tau_\xi$ topology} &\iff \{\alpha\}\notin\tau_\xi\\
&\iff d_\xi(\mu\setminus\{\alpha\})\not\subseteq\mu\setminus\{\alpha\}\\
&\iff \alpha\in d_\xi(\mu\setminus\{\alpha\})\\
&\iff \text{$\alpha$ is $\xi$-$\hat{\text{s}}$-stationary in $\alpha$}\\
&\iff \text{$\alpha$ is $\xi$-$\hat{\text{s}}$-reflecting.} \end{align*} \end{proof}
By applying Lemma \ref{lemma_s_hat} and Theorem \ref{theorem_xi_s_hat_nonisolated} we easily obtain the following.
\begin{theorem}\label{theorem_xi_s_nonisolated} Suppose $\mu$ is a regular cardinal. For all $\xi<\mu^+$ there is a club $C_\xi\subseteq\mu$ such that for all $\alpha \in C_\xi$ we have that $\alpha$ is not isolated in the $\tau_\xi$ topology on $\mu$ if and only if $\alpha$ is $f^\mu_\xi(\alpha)$-s-reflecting. \end{theorem}
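For example, taking $\xi=\mu$ in Theorem \ref{theorem_xi_s_nonisolated} and using the fact that $f^\mu_\mu$ is the identity function (see Remark \ref{remark_nontrivial}), we see that on a club the nonisolated points of $(\mu,\tau_\mu)$ are exactly the ordinals $\alpha<\mu$ that are $\alpha$-s-reflecting; this is the first level of the hierarchy not covered by Bagaria's analysis, and the existence of such nonisolated points is addressed by the indescribability results below (see Corollary \ref{corollary_nondiscreteness_from_indescribability}).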
In order to show that $\Pi^1_\xi$-indescribability can be used to obtain the nondiscreteness of the topologies $\tau_\xi$ on $\mu$ for $\xi<\mu^+$, we need the following expressibility result.
\begin{lemma}\label{lemma_expressing_s_stationarity} Suppose $\kappa$ is a regular cardinal. For all $\xi<\kappa^+$ there is a $\Pi^1_\xi$ formula $\varphi_\xi(X)$ over $V_\kappa$ and a club subset $C_\xi$ of $\kappa$ such that for all $A\subseteq\kappa$ we have \[\text{$A$ is $\xi$-s-stationary in $\kappa$ if and only if $V_\kappa\models\varphi_\xi(A)$}\] and for all $\alpha\in C_\xi$ we have \[\text{$A$ is $f^\kappa_\xi(\alpha)$-s-stationary in $\alpha$ if and only if $V_\alpha\models\varphi_\xi(A)\mathrm{|}^\kappa_\alpha$.}\] \end{lemma}
\begin{proof} We follow the proof of \cite[Proposition 4.3]{MR3894041} and proceed by induction on $\xi$. We let $\varphi_0(X)$ be the natural $\Pi^1_0$ formula asserting that $X$ is $0$-s-stationary (i.e. unbounded) in $\kappa$.
Suppose $\xi<\kappa^+$ is a limit ordinal and the result holds for $\zeta<\xi$. We let \[\varphi_\xi(X)=\bigwedge_{\zeta<\xi}\varphi_\zeta(X).\] Clearly $\varphi_\xi(X)$ is $\Pi^1_\xi$ over $V_\kappa$. Using our inductive assumption about the $\varphi_\zeta$'s, it is easy to verify that $A$ is $\xi$-s-stationary in $\kappa$ if and only if $V_\kappa\models\varphi_\xi(A)$. Furthermore, using an argument involving generic ultrapowers similar to those of Theorem \ref{theorem_expressing_indescribability} and Theorem \ref{theorem_xi_clubs}(2), the existence of the desired club $C_\xi$ is straightforward, and is therefore left to the reader.
Suppose $\xi=\zeta+1<\kappa^+$ is a successor. We let $\varphi_{\zeta+1}(X)$ be the natural $\Pi^1_{\zeta+1}$ formula equivalent to \begin{align*} \left(\bigwedge_{\eta<\zeta}\varphi_\eta(X)\right)&\land \forall S\forall T(\varphi_\zeta(S)\land\varphi_\zeta(T)\rightarrow\\ &(\exists\beta\in X)(\text{$S$ and $T$ are $f^\kappa_\zeta(\beta)$-s-stationary in $\beta$})). \end{align*} Note that, by an argument similar to that for Theorem \ref{theorem_xi_clubs}(2), we can code information about which subsets of $\beta$ are $f^\kappa_\zeta(\beta)$-s-stationary in $\beta$ into a subset of $\kappa$ and verify that the above formula is in fact equivalent to a $\Pi^1_{\zeta+1}$ formula over $V_\kappa$. The verification that the desired club $C_{\zeta+1}$ exists and that $\varphi_{\zeta+1}$ satisfies the requirements of the lemma is similar to the proof of Theorem \ref{theorem_expressing_indescribability} and Theorem \ref{theorem_xi_clubs}(2), and is thus left to the reader. \end{proof}
The next proposition, which is a generalization of \cite[Proposition 4.3]{MR3894041}, will allow us to obtain the nondiscreteness of the topologies $\tau_\xi$ from an indescribability hypothesis.
\begin{proposition}\label{proposition_indescribability_implies_reflecting} If a cardinal $\kappa$ is $\Pi^1_\xi$-indescribable for some $\xi<\kappa^+$, then it is $(\xi+1)$-s-reflecting. \end{proposition}
\begin{proof} Suppose $\kappa$ is $\Pi^1_\xi$-indescribable and suppose that $S$ and $T$ are $\zeta$-s-stationary in $\kappa$ where $\zeta\leq\xi$. Then we have \[V_\kappa\models\varphi_\zeta(S)\land\varphi_\zeta(T)\] where $\varphi_\zeta(X)$ is the $\Pi^1_\zeta$ formula obtained in Lemma \ref{lemma_expressing_s_stationarity}. Let $C_\zeta$ be the club subset of $\kappa$ from the statement of Lemma \ref{lemma_expressing_s_stationarity}. Since $\kappa$ is $\Pi^1_\xi$-indescribable, there is an $\alpha\in C_\zeta$ such that \[V_\alpha\models\varphi_\zeta(S)\mathrm{|}^\kappa_\alpha\land\varphi_\zeta(T)\mathrm{|}^\kappa_\alpha,\] which implies that $S$ and $T$ are both $f^\kappa_\zeta(\alpha)$-s-stationary in $\alpha$. Hence $\kappa$ is $(\xi+1)$-s-stationary in $\kappa$, that is, $(\xi+1)$-s-reflecting. \end{proof}
Finally, we conclude that from an indescribability hypothesis, one can prove that the $\tau_{\xi+1}$ topology is not discrete.
\begin{corollary}\label{corollary_nondiscreteness_from_indescribability} Suppose $\mu$ is a regular cardinal and $\xi<\mu^+$. If the set \[S=\{\alpha<\mu:\text{$\alpha$ is $\Pi^1_{f^\mu_\xi(\alpha)}$-indescribable}\}\] is stationary in $\mu$ (for example, this will occur if $\mu$ is $\Pi^1_{\xi+1}$-indescribable), then there is an $\alpha<\mu$ which is nonisolated in the space $(\mu,\tau_{\xi+1})$. \end{corollary}
\begin{proof} Fix $\xi<\mu^+$. Let $C$ be the club subset of $\mu$ obtained from Theorem \ref{theorem_xi_s_nonisolated}; that is, $C\subseteq\mu$ is club such that for all $\alpha\in C$ we have $\alpha$ is $f^\mu_{\xi+1}(\alpha)$-s-reflecting if and only if $\alpha$ is not isolated in the $\tau_{\xi+1}$ topology. Let $D=\{\alpha<\mu: f^\mu_{\xi+1}(\alpha)=f^\mu_\xi(\alpha)+1\}$ be the club subset of $\mu$ obtained from Lemma \ref{lemma_successor}. Now, if $\alpha\in S\cap C\cap D$ then $\alpha$ is $f^\mu_{\xi+1}(\alpha)$-s-reflecting by Proposition \ref{proposition_indescribability_implies_reflecting}, and is hence not isolated in the $\tau_{\xi+1}$ topology. \end{proof}
\end{document}
\begin{document}
\maketitle
\begin{abstract} In this paper, we prove that small energy harmonic maps from $\Bbb H^2$ to $\Bbb H^2$ are asymptotically stable under the wave map equation in the subcritical perturbation class. This result may be seen as an example supporting the soliton resolution conjecture for geometric wave equations without equivariant assumptions on the initial data. We construct Tao's caloric gauge in the case when a nontrivial harmonic map occurs. With the ``dynamic separation'', the master equation of the heat tension field appears as a semilinear magnetic wave equation. By the endpoint and weighted Strichartz estimates for magnetic wave equations obtained by the first author \cite{Lize1}, the asymptotic stability follows by a bootstrap argument. \end{abstract}
\tableofcontents
\section{Introduction} Let $(M,h)$ and $(N,g)$ be two Riemannian manifolds without boundary. A wave map is a map from the Lorentz manifold $\Bbb R \times M$ into $N$, $$u:\Bbb R\times M\to N, $$ which is locally a critical point for the functional \begin{align}\label{w1} F(u) = \int_{\Bbb R \times M} {\left(- {{{\left\langle {{\partial _t}u,{\partial _t}u} \right\rangle }_{{u^*}g}} + {h^{ij}}{{\left\langle {{\partial _{{x_i}}}u,{\partial _{{x_j}}}u} \right\rangle }_{{u^*}g}}} \right)} \,dt\, d{\rm vol}_h. \end{align} Here ${h_{ij}}dx^idx^j$ is the metric tensor of $M$ in a local coordinate system $(x^1,...,x^m)$. In a coordinate-free expression, the integrand of the functional $F(u)$ is the energy density of $u$ with respect to the Lorentz metric on $\Bbb R\times M$, $$\mathbf{\eta}=-dt\otimes dt+h_{ij}dx^i\otimes dx^j.$$ Given local coordinates $(y^1,...,y^n)$ on $N$, the Euler-Lagrange equation for (\ref{w1}) is given by \begin{align}\label{wmap1} \Box u^k +{\eta ^{\alpha \beta }}\overline{\Gamma}_{ij}^k(u){\partial _\alpha }{u^i}{\partial _\beta }{u^j}= 0, \end{align} where $\Box=-\partial_t^2+\Delta_M$ is the D'Alembertian on $\Bbb R\times M$ and $\overline{\Gamma}^k_{ij}(u)$ are the Christoffel symbols at the point $u(t,x)\in N$. In this paper, we consider the case $M=\Bbb H^2$, $N=\Bbb H^2$.
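To make the system (\ref{wmap1}) concrete for the target of interest, consider $N=\Bbb H^2$ in upper half-plane coordinates $(y^1,y^2)$ with $y^2>0$ and metric $g=((dy^1)^2+(dy^2)^2)/(y^2)^2$; these are not the coordinates we use later (we work with the Poincare disk model), but they display the structure of the nonlinearity in a simple form. The nonvanishing Christoffel symbols evaluated along the map are $\overline{\Gamma}^1_{12}(u)=\overline{\Gamma}^1_{21}(u)=\overline{\Gamma}^2_{22}(u)=-1/u^2$ and $\overline{\Gamma}^2_{11}(u)=1/u^2$, so that (\ref{wmap1}) becomes \begin{align*} \Box u^1-\frac{2}{u^2}\,\eta^{\alpha\beta}\partial_\alpha u^1\partial_\beta u^2&=0,\\ \Box u^2+\frac{1}{u^2}\,\eta^{\alpha\beta}\left(\partial_\alpha u^1\partial_\beta u^1-\partial_\alpha u^2\partial_\beta u^2\right)&=0, \end{align*} which exhibits the quadratic null form type nonlinearity $\eta^{\alpha\beta}\partial_\alpha u\,\partial_\beta u$ typical of wave map systems.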
The wave map equation on a flat spacetime, which is sometimes known as the nonlinear $\sigma$-model, arises as a model problem in general relativity and particle physics; see for instance \cite{MS}. The wave map equation on curved spacetime is related to the wave map-Einstein system and the Kerr Ernst potential; see \cite{AGS,IK,GL}. We remark that the case where the background manifold is the hyperbolic space is of particular interest. Indeed, the anti-de Sitter space (AdS$_n$), which is an exact solution of Einstein's field equations for an empty universe with a negative cosmological constant, is asymptotically hyperbolic.
There is a large body of work on the Cauchy problem, the long-time dynamics and blow up for wave maps on $\Bbb R^{1+m}$. We first recall a non-exhaustive list of results on equivariant maps. The critical well-posedness theory was initially considered by Christodoulou, Tahvildar-Zadeh \cite{CT} for radial wave maps and Shatah, Tahvildar-Zadeh \cite{STZ2} for equivariant wave maps. The global well-posedness result of \cite{CT} was recently improved to scattering by Chiodaroli, Krieger, Luhrmann \cite{CKL}. The bubbling theorem of wave maps was proved by Struwe \cite{S}. The explicit construction of blow up solutions behaving as a perturbation of a rescaled harmonic map was achieved by Krieger, Schlag, Tataru \cite{KST}, Raphael, Rodnianski \cite{RR}, and Rodnianski, Sterbenz \cite{RS} for the $\Bbb S^2$ target in the equivariant class. The ill-posedness theory was studied by D'Ancona, Georgiev \cite{DG} and Tao \cite{Tao10}.
Without equivariant assumptions on the initial data, the sharp subcritical well-posedness theory was developed by Klainerman, Machedon \cite{KM1,KM2} and Klainerman, Selberg \cite{KS2}. The small data critical case was started by Tataru \cite{Tataru2} in the critical Besov space, and then completed by Tao \cite{Tao1,Tao2} for wave maps from $\Bbb R^{1+d}$ to $\Bbb S^m$ in the critical Sobolev space. The small data theory in the critical Sobolev space for general targets was considered by Krieger \cite{J1,J2}, Klainerman, Rodnianski \cite{KR3}, Shatah, Struwe \cite{SS}, Nahmod, Stefanov, Uhlenbeck \cite{NSU}, and Tataru \cite{Tataru3}.
The dynamical behavior of wave maps on $\Bbb R^{1+2}$ with general data was obtained by Krieger, Schlag \cite{KS} for the $\Bbb H^2$ target, Sterbenz, Tataru \cite{ST1,ST2} for compact Riemannian manifold targets and initial data below the threshold, and Tao \cite{Tao7} for the $\Bbb H^n$ targets. In fact, Sterbenz, Tataru \cite{ST1,ST2} proved that any initial data with energy less than that of the minimal energy nontrivial harmonic map evolves to a global and scattering solution.
There are comparatively fewer works on wave map equations on curved spacetimes. The existence and orbital stability of equivariant time periodic wave maps from $\Bbb R\times \Bbb S^2$ to $\Bbb S^2$ were proved by Shatah, Tahvildar-Zadeh \cite{STZ1}; see Shahshahani \cite{S} for a generalization of the $\Bbb S^2$ case. The critical small data Cauchy problem for wave maps from small asymptotically flat perturbations of $\Bbb R^4$ to compact Riemannian manifolds was studied by Lawrie \cite{LA}. The soliton resolution and asymptotic stability of harmonic maps under wave maps from $\Bbb H^2$ to $\Bbb S^2$ or $\Bbb H^2$ in the 1-equivariant case were established by Lawrie, Oh, Shahshahani \cite{LOS1,LOS2,LOS4,LOS5}; see also \cite{LOS} for critical global well-posedness of wave maps from $\Bbb R\times \Bbb H^d$ to compact Riemannian manifolds with $d\ge4$.
In this paper, we study the asymptotic stability of harmonic maps to (\ref{w1}). The motivation is the so called soliton resolution conjecture in dispersive PDEs which claims that every global bounded solution splits into the superposition of divergent solitons with a radiation part plus an asymptotically vanishing remainder term as $t\to\infty$. The version for wave maps and hyperbolic Yang-Mills has been verified by Cote \cite{C} and Jia, Kenig \cite{JK} for equivariant maps along a time sequence, see also \cite{KLLS1,KLLS2} for exotic-ball wave maps and \cite{Gy} for wormholes. Recently Duyckaerts, Jia, Kenig, Merle \cite{DJKM} obtained the universal blow up profile for type II blow up solutions to wave maps $u:\Bbb R\times\Bbb R^2\to \Bbb S^2$ with initial data of energy slightly above the ground state. For wave maps from $\Bbb R\times \Bbb H^2$ to $\Bbb H^2$, Lawrie, Oh, Shahshahani \cite{LOS,LOS4} raised the following soliton resolution conjecture,\\ {\bf Conjecture 1.1}
Consider the Cauchy problem for wave map $u:\Bbb R\times \Bbb H^2\to \Bbb H^2$ with finite energy initial data $(u_0,u_1)$. Suppose that outside some compact subset $\mathcal{K}$ of $\Bbb H^2$ for some harmonic map $Q:\Bbb H^2\to \Bbb H^2$ we have $$ u_0(x)=Q(x), \mbox{ }{\rm{for}}\mbox{ }x\in \Bbb H^2\backslash\mathcal{K}. $$ Then the unique solution $(u(t),\partial_tu(t))$ to the wave map scatters to $(Q(x),0)$ as $t\to\infty$.
In this paper, we consider the easiest case of Conjecture 1.1, i.e., when the initial data is a small perturbation of harmonic maps with small energy. In order to state our main result, we introduce the notion of admissible harmonic maps. \begin{definition}\label{2as}
Let $D=\{z:|z|<1\}$ with the hyperbolic metric be the Poincare disk. We say the harmonic map $Q:D\to D$ is admissible if $Q(D)$ is a compact subset of $D$ covered by a geodesic ball centered at 0 of radius $R_0$, $\|\nabla^kdQ\|_{L^2}<\infty$ for $k=0,1,2$, and there exists some $\varrho>0$ such that $e^{\varrho r}|dQ|\in L^{\infty}$, where $r$ is the distance between $x\in D$ and the origin point in $D$. \end{definition} For any given admissible harmonic map $Q$, we define the space $\bf{H}^k\times\bf{H}^{k-1}$ by (\ref{h897}). Our main theorem is as follows. \begin{theorem}\label{a1} Fix any $R_0>0$. Assume the given admissible harmonic map $Q$ in Definition \ref{2as} satisfies \begin{align}\label{as4}
\|dQ\|_{L^2_x}<\mu_1, \mbox{ }\|e^{\varrho r}|dQ|\|_{L^{\infty}_x}<\mu_1, \mbox{ }\|\nabla^2dQ\|_{L^{\infty}_x}+\|\nabla dQ\|_{L^{\infty}_x}<\mu_1. \end{align} And assume that the initial data $(u_0,u_1)\in {\bf{H}^3\times\bf{H}^2}$ to (\ref{wmap1}) with $u_0:\Bbb H^2\to\Bbb H^2$, $u_1(x)\in T_{u_0(x)}N$ for each $x\in \Bbb H^2$ satisfy \begin{align}\label{as3}
\|(u_0,u_1)-(Q,0)\|_{{\bf{H}}^2\times {\bf{H}^1}}<\mu_2. \end{align} Then if $\mu_1>0$ and $\mu_2>0$ are sufficiently small depending only on $R_0$, (\ref{wmap1}) has a global solution $(u(t),\partial_tu(t))$ which converges to the harmonic map $Q:\Bbb H^2\to \Bbb H^2$ as $t\to\infty$, i.e., $$ \mathop {\lim }\limits_{t\to\infty }\mathop {\sup }\limits_{x \in {\mathbb{H}^2}} {d_{{\mathbb{H}^2}}}\left( {u(t,x),Q(x)} \right) = 0. $$ \end{theorem}
The initial data considered in this paper are perturbations of harmonic maps in the $\bf H^2$ norm. If one considers perturbations in the energy critical norm $H^1$, the $S_k$ and $N_k$ norms constructed by Tataru \cite{Tataru2} and Tao \cite{Tao2} would have to be built for the hyperbolic background.
\noindent{\bf{Remark 1.1}} Notice that the limit harmonic map coincides with the unperturbed harmonic map in Theorem 1.1. The reason for this coincidence is that the $\bf H^2\times\bf H^1$ norm forces the initial data to coincide with $Q$ at infinity; the uniqueness of harmonic maps with prescribed boundary data then shows that the limit harmonic map is exactly the unperturbed one.
\noindent{\bf{Remark 1.2}}{\bf(Examples for the admissible harmonic maps)}\\
\noindent Denote by $D=\{z:|z|<1\}$ the Poincare disk. Then any holomorphic map $f:D \to D$ is a harmonic map. If we assume that $f(z)$ extends analytically to a disk strictly larger than the unit disk, then $\mu_1f:D\to D$ satisfies all the conditions in Definition 1.1 and Theorem 1.1 provided $0<\mu_1\ll1$. Hence the class of harmonic maps covered by Theorem 1.1 is relatively rich. See [Appendix,\cite{LZ}] for the proof of these facts. It is important to note that in these examples the dependence of $\mu_1$ on $R_0$ is negligible.
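As a concrete instance (a special case of the facts just recalled, taking $f(z)=z$), the map
$$
Q:D\to D,\qquad Q(z)=\mu_1 z,
$$
is an admissible harmonic map for $0<\mu_1\ll1$: its image is the Euclidean disk $\{|z|<\mu_1\}$, which lies in a small geodesic ball centered at the origin, and the remaining conditions of Definition 1.1 hold by the facts above since $f(z)=z$ extends analytically to all of $\Bbb C$.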
\noindent{\bf{Remark 1.3}}(Examples for the perturbations of admissible harmonic maps)
Since we have global coordinates on $\Bbb H^2$ given by (\ref{vg}), perturbations in the sense of (\ref{as3}) are nothing but perturbations of $\Bbb R^2$-valued functions.
Since we are dealing with non-equivariant data, for which the linearization method seems hard to apply, we use the caloric gauge technique introduced by Tao \cite{Tao4} to prove Theorem 1.1. The caloric gauge of Tao was used to establish the global regularity of wave maps from $\Bbb R^{2+1}$ to $\Bbb H^n$ in the heat-wave project. We briefly recall the main idea of the caloric gauge. Given a solution $u(t,x):\Bbb R^{1+2}\to \Bbb H^n$ to the wave map equation, suppose that $\widetilde u(s,t,x)$ solves the heat flow equation with initial data $u(t,x)$ $$ \left\{ \begin{array}{l} \partial_s \widetilde{u}(s,t,x)=\sum^2_{i=1}\nabla_i\partial_i\widetilde{u}\\ \widetilde{u}(s,t,x) \upharpoonright_{s=0}= u(t,x). \\ \end{array} \right. $$ Since there exists no nontrivial finite energy harmonic map from $\Bbb R^2$ to $\Bbb H^n$, one can expect that the corresponding heat flow $\widetilde{u}(s,t,x)$ converges to a fixed point $Q$ as $s\to\infty$. For any given orthonormal frame at the point $Q$, one can pull it back by parallel transport in $s$ along the heat flow to obtain a frame at $\widetilde{u}(s,t,x)$, in particular at $u(t,x)$ when $s=0$. Rewriting (\ref{wmap1}) in the constructed frame then gives a scalar system for the differential fields and connection coefficients. Besides the fact that the caloric gauge can be viewed as a nonlinear Littlewood-Paley decomposition, its essential advantage is that it removes some troublesome frequency interactions, which is of fundamental importance for critical problems in low dimensions.
Generally the caloric gauge has been used in situations where no harmonic map occurs, for instance energy critical geometric wave equations with energy below the threshold. In our case nontrivial harmonic maps exist no matter how small the data one considers. However, as observed in our work \cite{LZ}, the caloric gauge is still extraordinarily powerful. In fact, denoting by $U(s,x)$ the solution of the heat flow with initial data $u(0,x)$, it is known that $U(s,x)$ converges to some harmonic map $Q(x)$ as $s\to\infty$. One can then expect that the solution $u(t,x)$ of (\ref{wmap1}) also converges to the same harmonic map $Q(x)$ as $t\to\infty$. This heuristic idea combined with the caloric gauge reduces the convergence of solutions to (\ref{wmap1}) to proving the decay of the heat tension field.
There are three main ingredients in our proof. The first is to guarantee that all the heat flows initiated from $u(t,x)$ for different $t$ converge to the same harmonic map. This enables us to construct the caloric gauge. The second is to derive the master equation for the heat tension field, which finally reduces to a linear wave equation with a small magnetic potential. The third is to design a suitable closed bootstrap program. All these ingredients are used to overcome the difficulty that no integrability with respect to $t$ is available for the energy density because the harmonic maps prevent the energy from decaying to zero as $t\to\infty$.
The key to the first ingredient is the decay of $\partial_tu$ along the heat flow. In order to construct the caloric gauge, one has to prove that the heat flows initiated from $u(t,x)$ converge to a harmonic map independent of $t$. If one only regards $t$ as a smooth parameter, i.e., only uses membership in the homotopy class, the limit harmonic map yielded by the heat flow initiated from $u(t,x)$ could be different as $t$ varies. Indeed, there exist families of harmonic maps $\{Q_{\lambda}\}$ depending smoothly on $\lambda\in(0,1)$; the heat flow with initial data $Q_{\lambda}$ remains $Q_{\lambda}$, which changes as $\lambda$ varies. This tells us that the structure of (\ref{wmap1}) must be taken into account. The essential observation is that $\partial_t u$ decays fast along the heat flow as $s\to\infty$. By a monotonicity property first observed by Hartman \cite{Hh} and the decay estimates of the heat semigroup, we can prove that the distance between the heat flows initiated from $u(t_1)$ and $u(t_2)$ goes to zero as $s\to\infty$. Therefore the limit harmonic maps of the heat flows generated from $u(t,x)$ coincide for all $t$. A similar idea works for the Landau-Lifshitz flow; see our paper \cite{LZ}. We remark that this part can be adapted to energy critical wave maps from $\Bbb R\times\Bbb H^2$ to $\Bbb H^2$, since in the arguments we essentially only use the $L^2_x$ norm of $\partial_tu$, which is bounded by the energy.
Unlike the usual papers on asymptotic stability, we will not use linearization arguments involving spectral analysis of the linearized operator and modulation equations. Instead, the master equation appears naturally as a semilinear wave equation with a small magnetic potential. Indeed, the main equation we need to consider is the nonlinear wave equation for the heat tension field. The point is that although the nonlinear part of this equation is not controllable as a whole, one can separate off a magnetic potential, leaving a remainder which can be controlled. This is why we need the Strichartz estimates for magnetic wave equations.
The second ingredient is to control the remaining terms in the nonlinear part of the master equation after the magnetic potential has been separated off. In fact, the terms involving first order derivatives of the heat tension field cannot be controlled by Strichartz estimates alone, even though we are working at subcritical regularity. In this paper, the first order derivative terms are controlled by the weighted Strichartz estimates and the exotic Strichartz estimates available only on hyperbolic backgrounds, in contrast to the flat case. These estimates were obtained in the first author's work \cite{Lize1}.
The third ingredient is to close the bootstrap, from which the global spacetime norm bounds of the heat tension field follow. The caloric gauge yields the gauged equation for the corresponding differential fields $\phi_{x,t}$, connection coefficients $A_{x,t}$ and the heat tension field. It was discovered by Tao \cite{Tao7} that the key field one needs to study is the heat tension field, which satisfies a semilinear wave equation. For the small data Cauchy problem of wave maps on $\Bbb R\times\Bbb H^4$, Lawrie, Oh, Shahshahani \cite{LOS} showed that in order to close the bootstrap argument it suffices to first prove a global spacetime bound for the heat tension field $\phi_s$. In our case, since the energy will not decay, one has to get rid of the inhomogeneous terms in the master equation which involve only the differential fields $\phi_x$. Furthermore, these troublesome terms involving only $\phi_x$ are much more serious in the study of the equation for the wave map tension field. This difficulty is overcome by using identities from intrinsic geometry to gain some cancellation and by adding a spacetime bound for $|\partial_tu|$ to the bootstrap arguments of \cite{LOS,Tao7}.
This paper is organized as follows. In Section 2, we recall some notations and notions and prove an equivalence, in a suitable sense, between the intrinsic and extrinsic Sobolev norms. In Section 3, we construct the caloric gauge and obtain the estimates of the connection coefficients. In Section 4, we derive the master equation. In Section 5, we first recall the non-endpoint and endpoint Strichartz estimates, the Morawetz inequality, and the weighted Strichartz estimates for the linear magnetic wave equation. Then we close the bootstrap and deduce the global spacetime bounds for the heat tension field. In Section 6, we finish the proof of Theorem 1.1. In Section 7, we prove some remaining claims from the previous sections.
We denote constants by $C(M)$; they may change from line to line. Small constants are usually denoted by $\delta$ and may vary between lemmas. $A\lesssim B$ means that there exists some constant $C$ such that $A\le CB$.
\section{Preliminaries} We first recall some standard preliminaries on the geometry of hyperbolic spaces, Sobolev embedding inequalities and an equivalence relationship between the intrinsic and extrinsic formulations of the Sobolev spaces. As a corollary we prove the local well-posedness for initial data $(u_0,u_1)$ of $\bf{H}^3\times \bf{H}^2$ regularity and a conditional global well-posedness proposition. In addition, the smoothing effect of the heat semigroup is recalled.
\subsection{The global coordinates and definitions of the function spaces} The covariant derivative on $TN$ is denoted by $\widetilde{\nabla}$, and the covariant derivative induced by $u$ on $u^*(TN)$ is denoted by $\nabla$. We denote the Riemann curvature tensor of $N$ by $\mathbf{R}$. The components of the Riemannian metrics are denoted by $h_{ij}$ for $M$ and $g_{ij}$ for $N$ respectively. The Christoffel symbols on $M$ and $N$ are denoted by $\Gamma^{k}_{ij}$ and $\overline{\Gamma}^{k}_{ij}$ respectively.
We recall some facts on hyperbolic spaces. Let $\Bbb R^{1+2}$ be the Minkowski space with Minkowski metric $-(dx^0)^2+(dx^1)^2+(dx^2)^2$. Define a bilinear form on $\Bbb R^{1+2}\times \Bbb R^{1+2}$, $$ [x,y]=x^0y^0-x^1y^1-x^2y^2. $$ The hyperbolic space $\mathbb{H}^2$ is defined by $$\mathbb{H}^2=\{x\in \Bbb R^{2+1}: [x,x]=1 \mbox{ }{\rm{and}}\mbox{ }x^0>0\},$$ with a Riemannian metric being the pullback of the Minkowski metric by the inclusion map $\iota:\mathbb{H}^2\to \Bbb R^{1+2}.$ By Iwasawa decomposition we have a global system of coordinates. Indeed, the diffeomorphism $\Psi:\Bbb R\times \Bbb R\to \mathbb{H}^2$ is given by \begin{align}\label{vg}
\Psi(x_1,x_2)=({\rm{cosh}} x_2+e^{-x_2}|x_1|^2/2, {\rm{sinh}} x_2+e^{-x_2}|x_1|^2/2, e^{-x_2}x_1). \end{align} The Riemannian metric with respect to this coordinate system is given by $$ e^{-2x_2}(dx_1)^2+(dx_2)^2. $$ The corresponding Christoffel symbols are \begin{align}\label{christ} \Gamma^1_{2,2}=\Gamma^2_{2,1}=\Gamma^2_{2,2}=\Gamma^1_{1,1}=0; \mbox{ }\Gamma^1_{2,1}=-1, \mbox{ }\Gamma^2_{1,1}=e^{-2x_2}. \end{align} For any $(t,x)$ and $u:[0,T]\times\mathbb{H}^2\to \Bbb H^2$, we define an orthonormal frame at $u(t,x)$ by \begin{align}\label{frame} \Theta_1(u(t,x))=e^{u^2(t,x)}\frac{\partial}{\partial y_1}; \mbox{ }\Theta_2(u(t,x))=\frac{\partial}{\partial y_2}. \end{align} where $(u^1,u^2)$ denotes the coordinate of $u$ given by (\ref{vg}). {\bf Throughout this paper we will use coordinates (\ref{vg}) for both the target manifold $N=\Bbb H^2$ and the starting manifold $M=\Bbb H^2$.} Recall also the identity for Riemannian curvature on $N=\Bbb H^2$ \begin{align}\label{2.best} {\bf R}(X,Y)Z={\widetilde{\nabla} _X}{\widetilde{\nabla} _Y}Z - {\widetilde{\nabla }_Y}{\widetilde{\nabla} _X}Z - {\widetilde{\nabla}_{[X,Y]}}Z = \left\langle {X,Z} \right\rangle Y - \left\langle {Y,Z} \right\rangle X. \end{align} We have a useful identity for $X,Y,Z\in u^*(TN)$ \begin{align}\label{2.4best} {\nabla _i }\left( {{\bf R}\left( {X,Y} \right)Z} \right) = {\bf R}\left( {X,{\nabla _i }Y} \right)Z + {\bf R}\left( {{\nabla _i }X,Y} \right)Z + {\bf R}\left( {X,Y} \right){\nabla _i }Z. \end{align} For simplicity, denote $(X\wedge Y)Z=\left\langle {X,Z} \right\rangle Y - \left\langle {Y,Z} \right\rangle X$.
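As a quick check of (\ref{christ}) and (\ref{frame}) (recorded here only for the reader's convenience), the metric components in the coordinates (\ref{vg}) are $g_{11}=e^{-2x_2}$, $g_{22}=1$, $g_{12}=0$, so the formula $\Gamma^k_{ij}=\frac{1}{2}g^{kl}(\partial_i g_{jl}+\partial_j g_{il}-\partial_l g_{ij})$ gives
\begin{align*}
\Gamma^1_{2,1}=\frac{1}{2}e^{2x_2}\partial_{x_2}\big(e^{-2x_2}\big)=-1,\qquad
\Gamma^2_{1,1}=-\frac{1}{2}\partial_{x_2}\big(e^{-2x_2}\big)=e^{-2x_2},
\end{align*}
with all other symbols vanishing, while $|\Theta_1(u)|^2=e^{2u^2}g_{11}(u)=1$ shows that the frame (\ref{frame}) is indeed orthonormal.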
Let $H^k(\mathbb{H}^2;\Bbb R)$ be the usual Sobolev space for scalar functions defined on manifolds. We also recall the norm of $H^k$: $$
\|f\|^2_{H^k}=\sum^k_{l=1}\|\nabla^l f\|^2_{L^2_x}, $$ where $\nabla^l f$ is the covariant derivative. For maps $u:\mathbb{H}^2\to \mathbb{H}^2$, we define the intrinsic Sobolev semi-norm $\mathfrak{H}^k$ by $$
\|u\|^2_{{\mathfrak{H}}^k}=\sum^k_{i=1}\int_{\mathbb{H}^2} |\nabla^{i-1} du|^2 {\rm{dvol_h}}. $$ The map $u:\mathbb{H}^2\to\mathbb{H}^2$ is associated with a vector-valued function $u:\mathbb{H}^2\to \Bbb R^2$ by (\ref{vg}). Indeed, the vector $(u^1(x),u^2(x))$ is defined by $\Psi(u^1(x),u^2(x))=u(x)$ for any $x\in \mathbb{H}^2$. Let $Q:\Bbb H^2\to\Bbb H^2$ be an admissible harmonic map as in Definition 1.1. Then the extrinsic Sobolev space is defined by \begin{align}\label{h897a} {\bf{H}^k_{Q}}=\{u: u^1-Q^1(x), u^2-Q^2(x)\in H^k(\mathbb{H}^2;\Bbb R)\}, \end{align} where $(Q^1(x),Q^2(x))\in \Bbb R^2$ are the corresponding components of $Q(x)$ in the coordinates (\ref{vg}). Denote by $\mathcal{D}$ the set of smooth maps which coincide with $Q$ outside of some compact subset of $M=\Bbb H^2$. Let $\mathcal{H}^k_{Q}$ be the completion of $\mathcal{D}$ under the metric given by \begin{align}\label{h897b}
{\rm{dist}}_{k,Q}(u,w)=\sum^2_{j=1}\|u^j-w^j\|_{H^k(\mathbb{H}^2;\Bbb R)}, \end{align} where $u,w\in \mathcal{H}^k_{Q}$. Since $C^{\infty}_c(\mathbb{H}^2;\Bbb R)$ is dense in $H^k(\mathbb{H}^2;\Bbb R)$ (see Hebey \cite{Hebey}), $\mathcal{H}^k_{Q}$ coincides with ${\bf{H}^k_{Q}}$. For simplicity, we write $\bf{H}^k$ when no confusion can arise. If $u$ is a map from $\Bbb R\times\Bbb H^2$ to $\Bbb H^2$, we define the space $\bf{H}^k\times\bf{H}^{k-1}$ by \begin{align}\label{h897}
{\bf{H}^k\times\bf{H}^{k-1}}=\left\{u:\sum^2_{j=1}\|u^j-Q^j\|_{H^k(\mathbb{H}^2;\Bbb R)}+\|\partial_tu^j\|_{H^{k-1}(\mathbb{H}^2;\Bbb R)}<\infty\right\}. \end{align} The distance in ${\bf{H}^k\times\bf{H}^{k-1}}$ is given by \begin{align}\label{zvh897}
{\rm{dist}}_{\bf{H}^k\times\bf{H}^{k-1}}(u,w)=\sum^2_{j=1}\|u^j-w^j(x)\|_{H^k}+\|\partial_tu^j-\partial_t w^j\|_{H^{k-1}}. \end{align}
\subsection{Sobolev embedding and equivalence lemma} The Fourier transform on hyperbolic spaces takes suitable functions defined on $\mathbb{H}^2$ to functions defined on $\Bbb R\times \Bbb S^1$; see Helgason \cite{Hel} for details. The operator $(-\Delta)^{\frac{s}{2}}$ is defined by the Fourier multiplier $\lambda\mapsto (\frac{1}{4}+\lambda^2)^{\frac{s}{2}}$. We now recall the Sobolev inequalities for functions in $H^k$.
\begin{lemma}\label{wusijue} If $f\in C^{\infty}_c(\mathbb{H}^2;\Bbb R)$, then for $1<p<\infty,$ $p\le q\le \infty$, $0<\theta<1$, $1<r<2$, $r\le l<\infty$, $\alpha>1$, the following inequalities hold \begin{align}
{\left\| f \right\|_{{L^2}}} &\lesssim {\left\| {\nabla f} \right\|_{{L^2}}} \label{{uv111}}\\
{\left\| f \right\|_{{L^q}}} &\lesssim \left\| {\nabla f} \right\|_{{L^2}}^\theta \left\| f \right\|_{{L^p}}^{1 - \theta }\mbox{ }{\rm{when}}\mbox{ }\frac{1}{p} - \frac{\theta }{2} = \frac{1}{q} \label{uv211}\\
{\left\| f \right\|_{{L^l}}} &\lesssim {\left\| {\nabla f} \right\|_{{L^r}}}\mbox{ }\mbox{ }\mbox{ }\mbox{ }\mbox{ }\mbox{ }\mbox{ }\mbox{ }\mbox{ }\mbox{ }{\rm{when}}\mbox{ }\frac{1}{r} - \frac{1}{2} = \frac{1}{l} \label{uv311}\\
{\left\| f \right\|_{{L^\infty }}} &\lesssim {\left\| {{{\left( { - \Delta } \right)}^{\frac{\alpha }{2}}}f} \right\|_{{L^2}}}\mbox{ }\mbox{ }\mbox{ }{\rm{when}}\mbox{ }\alpha>1 \label{uv4}\\
{\left\| {\nabla f} \right\|_{{L^p}}} &\sim{\left\| {{{\left( { - \Delta } \right)}^{\frac{1}{2}}}f} \right\|_{{L^p}}} \label{uv5}.
\end{align} \end{lemma} For the proof, we refer to Bray \cite{B} for (\ref{uv211}), Ionescu, Pausader, Staffilani \cite{IPS} for (\ref{uv311}), Hebey \cite{Hebey} for (\ref{uv4}), see also Lawrie, Oh, Shahshahani \cite{LOS}. (\ref{uv5}) is obtained in \cite{Stri}.
We also recall the diamagnetic inequality, which is sometimes referred to as Kato's inequality (see \cite{LOS}), and a more general Sobolev inequality (see [Proposition 2.2,\cite{AP}]). \begin{lemma} $(a)$ If $T$ is a tensor field defined on $\Bbb H^2$, then in the distribution sense one has the diamagnetic inequality \begin{align}\label{wusijue3}
|\nabla|T||\le |\nabla T|. \end{align} $(b)$ Let $1<p,q<\infty$ and $\sigma_1,\sigma_2\in\Bbb R$ such that $\sigma_1-\sigma_2\ge n/p-n/q\ge0$. Then for all $f\in C^{\infty}_c(\Bbb H^n;\Bbb R)$ \begin{align*}
\|(-\Delta)^{\sigma_2}f\|_{L^q}\lesssim \|(-\Delta)^{\sigma_1}f\|_{L^p}. \end{align*} \end{lemma}
\begin{remark} Lemma \ref{wusijue} and (\ref{wusijue3}) have several useful corollaries, for instance for $f\in H^2$ \begin{align}
\|f\|_{L_x^{\infty}}&\lesssim \|\nabla^2f\|_{L^2_x}\label{{uv111}6}\\
\|f\|_{L_x^{2}}&\lesssim \|\nabla^2f\|_{L^2_x}. \end{align} \end{remark}
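For instance, one way (a sketch) to obtain the two bounds of the remark is as follows: by (\ref{uv4}) with any fixed $1<\alpha\le2$, part $(b)$ of the previous lemma with $p=q=2$, $\sigma_2=\frac{\alpha}{2}$, $\sigma_1=1$, and the pointwise bound $|\Delta f|\le\sqrt{2}|\nabla^2 f|$,
\begin{align*}
\|f\|_{L_x^{\infty}}\lesssim \big\|(-\Delta)^{\frac{\alpha}{2}}f\big\|_{L^2_x}\lesssim \|\Delta f\|_{L^2_x}\lesssim\|\nabla^2f\|_{L^2_x},
\end{align*}
while (\ref{{uv111}}) applied to $f$ and to the scalar $|\nabla f|$, combined with (\ref{wusijue3}), gives
\begin{align*}
\|f\|_{L_x^{2}}\lesssim\|\nabla f\|_{L^2_x}=\big\||\nabla f|\big\|_{L^2_x}\lesssim\big\|\nabla|\nabla f|\big\|_{L^2_x}\le\|\nabla^2f\|_{L^2_x}.
\end{align*}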
The intrinsic and extrinsic formulations are equivalent in the following sense, see [Section 2, \cite{LZ}]. \begin{lemma}\label{new} Suppose that $Q$ is an admissible harmonic map in Definition 1.1. If $u\in \bf{H}_{Q}^k$ then for $k=2,3$ \begin{align}
\|u\|_{{\bf H}_{Q}^k}\thicksim\|u\|_{\mathfrak{H}^k}, \end{align} in the sense that there exist continuous functions $\mathcal{P},\mathcal{Q}$ such that \begin{align}
\|u\|_{{\bf H}^k_Q}&\le \mathcal{P}(\|u\|_{\mathfrak{H}^k})C(R_0,\|u\|_{\mathfrak{H}^2})\label{jia8}\\
\|u\|_{\mathfrak{H}^k}&\le \mathcal{Q}(\|u\|_{\bf{H}^k_Q})C(R_0,\|u\|_{{\bf{H}}_{Q}^2})\label{jia9}. \end{align} \end{lemma}
Lemma \ref{new} and its proof imply the following corollary, by which we can view Theorem 1.1 as a small data problem in the intrinsic sense. The proof of Corollary \ref{new2} is presented in Section 7. \begin{corollary}\label{new2} If $(u_0,u_1)$ belongs to ${\bf{H}^3}\times {\bf{H}^2}$ satisfying (\ref{as3}) then for $0<\mu_1\le 1,0<\mu_2\le 1$ \begin{align}
\|\nabla du_0\|_{L^2}+\|\nabla u_1\|_{L^2}+\|du_0\|_{L^2}+\|u_1\|_{L^2}\le C(R_0)\mu_2+C(R_0)\mu_1. \end{align} \end{corollary}
\begin{lemma}\label{8.5} We have the decay estimates for heat equations on $\Bbb H^2$: \begin{align}
\|e^{s\Delta_{\Bbb H^2}}f\|_{L^{\infty}_x}&\lesssim e^{-\frac{s}{4}}s^{-1}\|f\|_{L^{1}_x}\label{huhu899}\\
\|e^{s\Delta_{\Bbb H^2}}f\|_{L^{2}_x}&\lesssim e^{-\frac{s}{4}}\|f\|_{L^{2}_x}\label{m8}\\
\|e^{s\Delta_{\Bbb H^2}}f\|_{L^p_x}&\lesssim s^{\frac{1}{p}-\frac{1}{r}}\|f\|_{L^{r}_x},\label{huhu89}\\
\|e^{s\Delta_{\Bbb H^2}}(-\Delta_{\Bbb H^2})^{\alpha} f\|_{L^{q}_x}&\lesssim s^{-\alpha}e^{-\delta s}\|f\|_{L^{q}_x},\label{mm8} \end{align} where $1\le r\le p\le\infty$, $\alpha\in[0,1]$, $1<q<\infty$, $0<\delta\ll1$. \end{lemma} \begin{proof} (\ref{huhu899}) and (\ref{huhu89}) are known in the literature, see \cite{LZ,Coding}. (\ref{m8}) is a corollary of the spectral gap of $\frac{1}{4}$ for $-\Delta_{\Bbb H^2}$. The $s^{-\alpha}$ part of (\ref{mm8}) follows by interpolation between the three estimates of [Lemma 2.11,\cite{LOS}]. Thus it suffices to prove (\ref{mm8}) for $s$ large. The case $\alpha=0$ of (\ref{mm8}) follows by directly estimating the heat kernel given in \cite{BM}. Since $e^{s\Delta}(-\Delta)^{\alpha}f=e^{\frac{s}{2}\Delta}e^{\frac{s}{2}\Delta}(-\Delta)^{\alpha}f$, applying the exponentially decaying $L^q\to L^q$ bound (the case $\alpha=0$) to the first factor $e^{\frac{s}{2}\Delta}$ and the $s^{-\alpha}$ bound established above to the second factor $e^{\frac{s}{2}\Delta}(-\Delta)^{\alpha}$, we obtain the full estimate (\ref{mm8}). \end{proof}
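Schematically, the splitting used in the last step of the proof reads, say for $s\ge1$,
\begin{align*}
\|e^{s\Delta}(-\Delta)^{\alpha}f\|_{L^{q}_x}
\le \big\|e^{\frac{s}{2}\Delta}\big\|_{L^{q}_x\to L^{q}_x}\,\big\|e^{\frac{s}{2}\Delta}(-\Delta)^{\alpha}f\big\|_{L^{q}_x}
\lesssim e^{-\delta\frac{s}{2}}\Big(\frac{s}{2}\Big)^{-\alpha}\|f\|_{L^{q}_x},
\end{align*}
which gives (\ref{mm8}) after adjusting $\delta$.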
The $\Bbb R^2$ version of the following lemma was proved in [Lemma 2.5,\cite{Tao4}]. We remark that the same arguments work in the $\Bbb H^2$ case, because the proof in \cite{Tao4} only uses the decay estimate (\ref{huhu89}) and the self-adjointness of $e^{t\Delta_{\Bbb R^2}}$, both of which are also satisfied by $e^{t\Delta_{\Bbb H^2}}$. \begin{lemma}\label{ktao1} For $f\in L^2_x$ defined on $\Bbb H^2$, one has
$$\int^{\infty}_0\|e^{s\Delta_{\Bbb H^2}}f\|^2_{L^{\infty}_x}ds\lesssim \|f\|^2_{L^2_x}. $$ \end{lemma} Without confusion, we will always use $\Delta$ instead of $\Delta_{\Bbb H^2}$.
\subsection{Local and conditional global well-posedness} We quickly sketch the local well-posedness and conditional global well-posedness for (\ref{wmap1}). The local well-posedness of (\ref{wmap1}) for $(u_0,u_1)\in{\bf{H}^3\times\bf{H}^2}$ is standard by a fixed point argument. Thus we present the following lemma with only a sketch of the proof.
\begin{lemma}\label{local}
For any initial data $(u_0,u_1)\in \bf{H}^3\times\bf{H}^2$, there exists $T>0$ depending only on $\|(u_0,u_1)\|_{\bf{H}^3\times\bf{H}^2}$ such that (\ref{wmap1}) has a unique local solution $(u,\partial_tu)\in C([0,T];\bf{H}^3\times\bf{H}^2)$. \end{lemma} \begin{proof} In the coordinates (\ref{vg}), (\ref{wmap1}) can be written as the following semilinear wave equation \begin{align}\label{XV12} \frac{{{\partial ^2}{u^k}}}{{\partial {t^2}}} - {\Delta}{u^k} + {\overline\Gamma}_{ij}^k\frac{{\partial {u^i}}}{{\partial t}}\frac{{\partial {u^j}}}{{\partial t}} - {h^{ij}}{\overline\Gamma} _{mn}^k\frac{{\partial {u^m}}}{{\partial {x^i}}}\frac{{\partial {u^n}}}{{\partial {x^j}}} = 0. \end{align} Since $\bf{H}^3$ and $\bf{H}^2$ are embedded into $L^{\infty}$ as illustrated in Remark 9.1, we can prove the local well-posedness of (\ref{XV12}) by the standard contraction mapping argument in the complete metric space $\bf{H}^3\times\bf{H}^2$ with the metric given by
{\rm{dist}}(u,w)=\sum^2_{j=1}\|u^j-w^j\|_{H^3}+\sum^2_{j=1}\|\partial_tu^j-\partial_tw^j\|_{H^2}. \end{align*} Moreover we obtain the following blow-up criterion: if the maximal lifespan $T_*>0$ of $(\ref{XV12})$ is finite, then \begin{align}\label{09ijn}
\mathop {\lim }\limits_{t \to T_*} {\left\| {(u(t,x),\partial_tu(t,x))} \right\|_{{\bf{H}^3\times\bf{H}^2}}} = \infty. \end{align} \end{proof}
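For the reader's convenience we also record, as a sketch, the Duhamel formulation behind the contraction argument: writing $\mathcal{N}^k(u,\partial u)$ for the quadratic terms in (\ref{XV12}) (a shorthand introduced only here), a fixed point is sought for the map
$$
u^k(t)=\cos\big(t\sqrt{-\Delta}\big)u_0^k+\frac{\sin\big(t\sqrt{-\Delta}\big)}{\sqrt{-\Delta}}u_1^k-\int_0^t\frac{\sin\big((t-r)\sqrt{-\Delta}\big)}{\sqrt{-\Delta}}\,\mathcal{N}^k(u,\partial u)(r)\,dr,
$$
and the energy estimates for the wave propagator together with the $L^{\infty}$ embeddings recalled above close the contraction on a time interval whose length depends only on $\|(u_0,u_1)\|_{\bf{H}^3\times\bf{H}^2}$.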
The conditional global well-posedness is given by the following proposition. We remark that in the flat case $M=\Bbb R^d$, $1\le d\le 3$, Theorem 7.1 of Shatah, Struwe \cite{SStruwe} gave a local theory for the Cauchy problem in $H^2\times H^1$. \begin{proposition}\label{global} Let $(u_0,u_1)\in\bf{H}^3\times \bf{H}^2$ be the initial data of (\ref{wmap1}) and let $T_*$ be the maximal lifespan determined by Lemma \ref{local}. If the solution $(u,\partial_tu)$ satisfies, uniformly for all $t\in[0,T_*)$, \begin{align}\label{hcxp}
\|\nabla du\|_{L^2_x}+\|du\|_{L^2_x}+\|\nabla\partial_t u\|_{L^2_x}+\|\partial_t u\|_{L^2_x}\le C_1, \end{align} for some $C_1>0$ independent of $t\in[0,T_*)$ then $T_*=\infty$. \end{proposition}
\begin{proof} By the local well-posedness in Lemma \ref{local}, it suffices to obtain a uniform bound for $\|(u,\partial_t u)\|_{\bf{H}^3\times\bf{H}^2 }$ on $[0,T_*)$ whenever $T_*<\infty$. By Lemma \ref{new}, it suffices to prove that the intrinsic norms are uniformly bounded up to order three. We first point out a useful inequality which can be verified by integration by parts: \begin{align}\label{V6}
\|\nabla^2 du\|^2_{L^2_x}\lesssim \|\nabla \tau(u)\|^2_{L^2_x}+ \|du\|^6_{L^6_x}+\|\nabla d u\|^2_{L^4_x}\|d u\|^2_{L^4_x}+C(\|u \|^2_{\mathfrak{H}^2}), \end{align} where $\tau(u)$ denotes the tension field which in the local coordinates is written as \begin{align*} \tau(u)=\left(\Delta u^k+h^{pq}\overline{\Gamma}^k_{ij}\frac{\partial u^{i}}{\partial x^p}\frac{\partial u^{j}}{\partial x^q}\right)\frac{\partial}{\partial y^k}. \end{align*} Thus (\ref{V6}), Gagliardo-Nirenberg inequality and Young inequality further yield \begin{align}\label{equil}
\|\nabla^2 du\|^2_{L^2_x}\lesssim \mathcal{P}(\|u \|^2_{\mathfrak{H}^2})+\|\nabla \tau(u)\|^2_{L^2_x}, \end{align} where $\mathcal{P}(x)$ is some polynomial. Define \begin{align*}
{E_3}(u,{\partial _t}u) = \frac{1}{2}\int_{{\Bbb H^2}} {{{\left| {\nabla \tau (u)} \right|}^2}} {\rm{dvo}}{{\rm{l}}_{\rm{h}}} + \frac{1}{2}{\int_{{\Bbb H^2}} {\left| {{\nabla ^2}{\partial _t}u} \right|} ^2}{\rm{dvo}}{{\rm{l}}_{\rm{h}}}. \end{align*} Then integration by parts yields \begin{align*} \frac{d}{{dt}}{E_3}(u,{\partial _t}u) &= \int_{{\Bbb H^2}} {{h^{ii}}\left\langle {{\nabla _t}{\nabla _i}\tau (u),{\nabla _i}\tau (u)} \right\rangle } {\rm{dvo}}{{\rm{l}}_{\rm{h}}}\\ &+ \int_{{\Bbb H^2}} {{h^{ii}h^{jj}}\left\langle {{\nabla _t}{\nabla _i}{\nabla _j}{\partial _t}u - \Gamma _{ij}^k{\nabla _k}{\partial _t}u,{\nabla _i}{\nabla _j}{\partial _t}u}-\Gamma_{ij}^k{\nabla _k}{\partial _t}u \right\rangle {\rm{dvo}}{{\rm{l}}_{\rm{h}}}}. \end{align*} Furthermore we have \begin{align*}
&\int_{{\Bbb H^2}} {{h^{ii}h^{jj}}\left\langle {{\nabla _t}{\nabla _i}{\nabla _j}{\partial _t}u - \Gamma _{ij}^k{\nabla _k}{\partial _t}u,{\nabla _i}{\nabla _j}{\partial _t}u}-\Gamma_{ij}^k{\nabla _k}{\partial _t}u \right\rangle {\rm{dvo}}{{\rm{l}}_{\rm{h}}}} \\
&= \int_{{\Bbb H^2}} {{h^{ii}h^{jj}}\left\langle {{\nabla _i}{\nabla _j}{\nabla _t}{\partial _t}u - \Gamma _{ij}^k{\nabla _k}{\partial _t}u,{\nabla _i}{\nabla _j}{\partial _t}u}-\Gamma_{ij}^k{\nabla _k}{\partial _t}u \right\rangle {\rm{dvo}}{{\rm{l}}_{\rm{h}}}} \\
&+ \int_{\Bbb H^2}O\big(\left| {\nabla {\partial _t}u} \right|\left| {{\nabla ^2}{\partial _t}u} \right|\big){\rm{dvol_h}} + O\big(\int_{\Bbb H^2}\left| {du} \right|\left| {{\partial _t}u} \right|\left| {\nabla {\partial _t}u} \right|\left| {{\nabla ^2}{\partial _t}u} \right|{\rm{dvol_h}} \big)\\
&+ \int_{\Bbb H^2}O\big(\left| {\nabla {\partial _t}u} \right|\left| {{\nabla ^2}{\partial _t}u} \right|\big){\rm{dvol_h}}+ \int_{\Bbb H^2}O\big({\left| {du} \right|^2}{\left| {{\partial _t}u} \right|^2}\left| {{\nabla ^2}{\partial _t}u} \right|\big){\rm{dvol_h}}\\
&+ \int_{\Bbb H^2}O\big({\left| {{\partial _t}u} \right|^2}\left| {\nabla du} \right|\left| {{\nabla ^2}{\partial _t}u} \right|\big){\rm{dvol_h}} \end{align*} Since $u$ solves (\ref{wmap1}), $\nabla_t\partial_tu=\tau(u)$. Then by integration by parts the leading term can be expanded as \begin{align*}
&\int_{{\Bbb H^2}} {{h^{ii}h^{jj}}}\left\langle {{\nabla _i}{\nabla _j}{\nabla _t}{\partial _t}u,{\nabla _i}{\nabla _j}{\partial _t}u}-\Gamma_{ij}^k{\nabla _k}{\partial _t}u \right\rangle {\rm{dvol_h}}\\
&= \int_{{\Bbb H^2}}{h^{ii}h^{jj}}\left\langle {{\nabla _i}{\nabla _j}\tau (u),{\nabla _i}{\nabla _j}{\partial _t}u}-\Gamma_{ij}^k{\nabla _k}{\partial _t}u \right\rangle {\rm{dvol_h}} \\
&=-\int_{{\Bbb H^2}} {{h^{ii}}\left\langle {{\nabla _i}\tau (u),{\nabla _t}{\nabla _i}\tau (u)} \right\rangle {\rm{dvo}}{{\rm{l}}_{\rm{h}}}} + \int_{\Bbb H^2}O\big(\left| {\nabla \tau (u)} \right|\left| {du} \right|\left| {{\partial _t}u} \right|\left| {\tau (u)} \right|\big){\rm{dvol_h}}\\
&+ \int_{\Bbb H^2}O\big(\left| {\nabla \tau (u)} \right|\left| {{\nabla ^2}u} \right|\left| {du} \right|\left| {{\partial _t}u} \right|\big){\rm{dvol_h}}+ \int_{\Bbb H^2}O\big(\left| {\nabla \tau (u)} \right|\left| {{\partial _t}u} \right|{\left| {du} \right|^2}\big){\rm{dvol_h}} \\
&+ \int_{\Bbb H^2}O\big(\left| {\nabla \tau (u)} \right|\left| {\nabla {\partial _t}u} \right|{\left| {du} \right|^2}\big){\rm{dvol_h}} +\int_{\Bbb H^2}O\big( \left| {\nabla \tau (u)} \right|\left| {{\partial _t}u} \right|{\left| {du} \right|^3}\big){\rm{dvol_h}}\\
&+ \int_{\Bbb H^2}O\big(\left| {\nabla {\partial _t}u} \right|\left| {\nabla \tau (u)} \right|\big){\rm{dvol_h}}.
\end{align*}
Thus we conclude
\begin{align*}
&\frac{d}{{dt}}{E_3}(u,{\partial _t}u)\\
&\le {\left\| {du} \right\|_{L_x^8}}{\left\| {{\partial _t}u} \right\|_{L_x^4}}{\left\| {\nabla {\partial _t}u} \right\|_{L_x^8}}{\left\| {{\nabla ^2}{\partial _t}u} \right\|_{L_x^2}} + {\left\| {\nabla {\partial _t}u} \right\|_{L_x^2}}{\left\| {{\nabla ^2}{\partial _t}u} \right\|_{L_x^2}} \\
&+ {\left\| {{\nabla ^2}{\partial _t}u} \right\|_{L_x^2}}\left\| {du} \right\|_{L_x^8}^2\left\| {{\partial _t}u} \right\|_{L_x^8}^2 + {\left\| {\nabla du} \right\|_{L_x^6}}\left\| {{\partial _t}u} \right\|_{L_x^6}^2{\left\| {\nabla^2 {\partial _t}u} \right\|_{L_x^2}} \\
&+ {\left\| {\nabla \tau (u)} \right\|_{L_x^2}}{\left\| {\nabla du} \right\|_{L_x^6}}{\left\| {du} \right\|_{L_x^6}}{\left\| {{\partial _t}u} \right\|_{L_x^6}} + {\left\| {\nabla \tau (u)} \right\|_{L_x^2}}{\left\| {\nabla du} \right\|_{L_x^4}}\left\| {du} \right\|_{L_x^8}^2 \\
&+ {\left\| {\nabla \tau (u)} \right\|_{L_x^2}}\left\| {du} \right\|_{L_x^{12}}^3{\left\| {{\partial _t}u} \right\|_{L_x^4}} + {\left\| {\nabla \tau (u)} \right\|_{L_x^2}}{\left\| {{\partial _t}u} \right\|_{L_x^6}}{\left\| {\tau (u)} \right\|_{L_x^6}}{\left\| {du} \right\|_{L_x^6}} \\
&+ {\left\| {\nabla \tau (u)} \right\|_{L_x^2}}{\left\| {{\partial _t}u} \right\|_{L_x^6}}\left\| {du} \right\|_{L_x^8}^2
+ {\left\| {\nabla \tau (u)} \right\|_{L_x^2}}{\left\| {\nabla {\partial _t}u} \right\|_{L_x^2}} + {\left\| {\nabla \tau (u)} \right\|_{L_x^2}}{\left\| {\nabla^2{\partial _t}u} \right\|_{L_x^2}}. \end{align*} Hence Young's inequality, Sobolev embedding and (\ref{V6}), (\ref{equil}) give \begin{align*} \frac{d}{{dt}}{E_3}(u,{\partial _t}u) \le C{E_3}(u,{\partial _t}u)+C, \end{align*} where $C$ depends only on $C_1$ in (\ref{hcxp}). Thus Gronwall's inequality shows \begin{align*} {E_3}(u,{\partial _t}u) \le e^{Ct}({E_3}({u_0},{u_1}) + C). \end{align*} If $T_*<\infty$, this together with (\ref{hcxp}) and (\ref{equil}) gives a uniform bound of $\|(u,\partial_tu)\|_{\bf{H}^3\times\bf{H}^2}$ on $[0,T_*)$, which contradicts (\ref{09ijn}). \end{proof}
\subsection{Geometric identities related to gauges} Let $\{e_1(t,x),e_2(t,x)\}$ be an orthonormal frame for $u^*(T\mathbb{H}^2)$. Let $\phi_\alpha=(\phi^1_\alpha,\phi^2_\alpha)$ for $\alpha=0,1,2$ be the components of $\partial_{t,x}u$ in the frame $\{e_1,e_2\}$, i.e., $$ \phi_\alpha^j = \left\langle {{\partial _\alpha}u,{e_j}} \right\rangle. $$ For a given $\Bbb R^2$-valued function $\phi$ defined on $[0,T]\times\Bbb H^2$, we associate with $\phi$ a tangent field $e\phi$ in $u^*(TN)$ by \begin{align}\label{poill} \phi\leftrightarrow e\phi=\sum^2_{j=1}\phi^je_j. \end{align} The map $u$ induces a covariant derivative on the trivial bundle $([0,T]\times\Bbb H^2,\Bbb R^2)$ defined by $$D_\alpha\phi=\partial_\alpha \phi+[A_\alpha]\phi, $$ where the coefficient matrix is defined by \begin{align*} [{A_\alpha}]^k_j = \left\langle {{\nabla_\alpha}{e_j},{e_k}} \right\rangle. \end{align*} It is easy to check the torsion free identity \begin{align}\label{pknb} D_\alpha\phi_\beta=D_\beta\phi_\alpha, \end{align} and the commutator identity \begin{align}\label{commut1} e[D_\alpha,D_\beta]\phi=e(\partial_\alpha A_\beta-\partial_\beta A_\alpha)\phi+e[A_\alpha,A_\beta]\phi=\mathbf{R}(u)(\partial_\alpha u, \partial_\beta u)(e\phi). \end{align} In the two dimensional case, (\ref{commut1}) can be further simplified to \begin{align}\label{commut} e[D_\alpha,D_\beta]\phi=e(\partial_\alpha A_\beta-\partial_\beta A_\alpha)\phi=\mathbf{R}(u)(\partial_\alpha u, \partial_\beta u)(e\phi). \end{align} \noindent{\bf{Remark 2.1}} Sometimes in the same line we will use both intrinsic quantities such as ${\bf{R}}(\partial_tu,\partial_su)$ and frame dependent quantities such as $\phi_i$. This causes no trouble once the correspondence (\ref{poill}) is kept in mind. We define a matrix valued function $\bf{a}\wedge\bf{b}$ by \begin{align}\label{nb890km} (\bf{a}\wedge\bf{b})\bf{c}=\left\langle {\bf{a},\bf{c}} \right\rangle\bf{b} - \left\langle {\bf{b},\bf{c}} \right\rangle \bf{a}, \end{align} where $\bf{a},\bf{b},\bf{c}$ are vectors in $\Bbb R^2$. It is easy to see that (\ref{nb890km}) coincides with (\ref{2.best}) by letting $X=a_1e_1+a_2e_2$, $Y=b_1e_1+b_2e_2$, $Z=c_1e_1+c_2e_2$. Hence (\ref{commut}) can be written as \begin{align}\label{nb90km} [D_\alpha,D_\beta]\phi=(\phi_\alpha\wedge\phi_{\beta})\phi. \end{align}
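Spelled out, the identification behind the last identity is the following: with $X=a_1e_1+a_2e_2$, $Y=b_1e_1+b_2e_2$, $Z=c_1e_1+c_2e_2$ and ${\bf{a}}=(a_1,a_2)$, ${\bf{b}}=(b_1,b_2)$, ${\bf{c}}=(c_1,c_2)$, the orthonormality of $\{e_1,e_2\}$ gives $\left\langle {X,Z} \right\rangle=\langle{\bf{a}},{\bf{c}}\rangle$ and $\left\langle {Y,Z} \right\rangle=\langle{\bf{b}},{\bf{c}}\rangle$, so that
\begin{align*}
(X\wedge Y)Z=\left\langle {X,Z} \right\rangle Y-\left\langle {Y,Z} \right\rangle X
=\sum^2_{j=1}\big(\langle{\bf{a}},{\bf{c}}\rangle b_j-\langle{\bf{b}},{\bf{c}}\rangle a_j\big)e_j
=e\big(({\bf{a}}\wedge{\bf{b}}){\bf{c}}\big).
\end{align*}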
\begin{lemma} With the notions and notations given above, (\ref{wmap1}) can be written as \begin{align}\label{jnk} D_t\phi_t-h^{ij}D_i\phi_j+h^{ij}\Gamma^k_{ij}\phi_k=0. \end{align} \end{lemma} \begin{proof} In the intrinsic formulation, (\ref{wmap1}) can be written as \begin{align*} {\nabla _t}{\partial _t}u - \left( {{\nabla _{{x_i}}}{\partial _{{x_j}}}u - {u_*}({\nabla _{\frac{\partial}{\partial x_i}}}\frac{\partial}{\partial {x_j}})} \right){h^{ij}} = 0. \end{align*} Expanding $\nabla_i\partial_j u$ and $u_*(\nabla_i\partial_j)$ by the frame $\{e_i\}^2_{i=1}$ yields \begin{align*} &{h^{ij}}{\nabla _i}{\partial _j}u - {h^{ij}}{u_*}({\nabla _{\frac{\partial }{{\partial {x_i}}}}}\frac{\partial }{{\partial {x_j}}}) = \sum\nolimits_{l =1}^2{{h^{ij}}} {\nabla _i}\left( {\left\langle {{\partial _j}u,{e_l}} \right\rangle {e_l}} \right) - \Gamma _{i,j}^k{h^{ij}}{\partial _k}u \\ &= {h^{ij}}\left( {{\partial _i}\phi _j^p{e_p} + [{A_i}]_l^p\phi _j^l{e_p}} \right) - \Gamma _{i,j}^k{h^{ij}}\phi _k^l{e_l} = e{h^{ij}}\left( {{D_i}{\phi _j}} \right) - e\Gamma _{i,j}^k{h^{ij}}{\phi _k}. \end{align*} Similarly, $\nabla_t\partial_tu$ is expanded as \begin{align*} {\nabla _t}{\partial _t}u = \sum\nolimits_{l = 1}^2 {{\nabla _t}\left( {\left\langle {{\partial _t}u,{e_l}} \right\rangle {e_l}} \right)} = \left( {{\partial _t}\phi _0^p{e_p} + [{A_0}]_l^p\phi _0^l{e_p}} \right) = e\left( {{D_0}{\phi _0}} \right). \end{align*} Hence (\ref{jnk}) follows. \end{proof}
\section{Caloric Gauge} Denote the space $C([0,T];\bf{H}^3\times\bf{H}^2)$ by $\mathcal{X}_T$. The caloric gauge was first introduced by Tao \cite{Tao4} for the wave maps from $\Bbb R^{2+1}$ to $\mathbb{H}^n$. We give the definition of the caloric gauge in our setting. \begin{definition}\label{pp} Let $u(t,x):[0,T]\times \mathbb{H}^2\to \mathbb{H}^2$ be a solution of (\ref{wmap1}) in $\mathcal{X}_T$. Suppose that the heat flow initiated from $u_0$ converges to a harmonic map $Q:\mathbb{H}^2\to \mathbb{H}^2$. Then for a given orthonormal frame $\Xi(x)\triangleq\{\Xi_j(Q(x))\}^2_{j=1}$ which spans the tangent space $T_{Q(x)}\mathbb{H}^2$ for any $x\in \mathbb{H}^2$, by saying a caloric gauge we mean a tuple consisting of a map $\widetilde{u}:\Bbb R^+\times [0,T]\times\mathbb{H}^2\to\Bbb H^2$ and an orthonormal frame $\Omega\triangleq\{\Omega_j(\widetilde{u}(s,t,x))\}^2_{j=1}$ such that \begin{align}\label{muqi} \left\{ \begin{array}{l} {\partial _s}\widetilde{u}= \tau (\widetilde{u}) \\ {\nabla _s}{\Omega _j} = 0 \\ \mathop {\lim }\limits_{s \to \infty } {\Omega _j} = {\Xi _j} \\ \end{array} \right. \end{align} where the convergence of frames is defined by \begin{align}\label{convergence} \left\{ \begin{array}{l}
\mathop {\lim }\limits_{s \to \infty } \widetilde{u}(s,t,x) = Q(x) \\
\mathop {\lim }\limits_{s \to \infty } \left\langle {{\Omega _i}(s,t,x),{\Theta _j}(\widetilde{u}(s,t,x))} \right\rangle = \left\langle {{\Xi _i}(Q(x)),{\Theta _j}(Q(x))} \right\rangle \\
\end{array} \right. \end{align} \end{definition}
The remaining part of this section is devoted to the existence of the caloric gauge. \subsection{Warming up for the heat flows} In this subsection, we prove the estimates needed for the existence of the caloric gauge and the bounds for connection coefficients.
The equation of the heat flow is given by \begin{align}\label{8.29.1} \left\{ \begin{array}{l}
{\partial _s}u = \tau (u) \\
u(0,x)= v(x) \\
\end{array} \right. \end{align} The energy density $e$ is defined by $$
e(u)=\frac{1}{2}|du|^2. $$
The following lemma is due to Li, Tam \cite{LT}. (\ref{VI4}), (\ref{uu}) are proved in \cite{LZ}. \begin{lemma}\label{8.44} Given initial data $v:\Bbb H^2\to\Bbb H^2$ with bounded energy density, suppose that $\tau(v)\in L^p_x$ for some $p>2$ and the image of $\Bbb H^2$ under the map $v$ is contained in a compact subset of $\Bbb H^2$. Then the heat flow equation (\ref{8.29.1}) has a global solution $u$. Moreover for some $K,C>0$, we have \begin{align}
(\partial_s-\Delta)|du|^2+2|\nabla du|^2&\le K|du|^2\label{8.4}\\
(\partial_s-\Delta)|\partial_s u|^2+2|\nabla \partial_su|^2&\le 0\label{8.3}\\
(\partial_s-\Delta)|\partial_s u|&\le 0\label{VI4}\\
(\partial_s-\Delta)(|du|e^{-Cs})&\le 0.\label{uu} \end{align} \end{lemma}
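Let us sketch, for the reader's convenience, how (\ref{VI4}) follows formally from (\ref{8.3}) and the diamagnetic inequality (\ref{wusijue3}): at points where $|\partial_s u|\neq0$,
\begin{align*}
2|\partial_s u|\,(\partial_s-\Delta)|\partial_s u|
=(\partial_s-\Delta)|\partial_s u|^2+2\big|\nabla|\partial_s u|\big|^2
\le(\partial_s-\Delta)|\partial_s u|^2+2|\nabla \partial_su|^2\le 0,
\end{align*}
and the inequality extends across the zero set of $|\partial_s u|$ in the distributional sense; the same argument applied to (\ref{8.4}) yields (\ref{uu}) with $C=K/2$.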
Consider the heat flow from $\mathbb{H}^2$ to $\mathbb{H}^2$ with a parameter \begin{align}\label{8.29.2} \left\{ \begin{array}{l}
{\partial _s}\widetilde{u} = \tau (\widetilde{u}) \\
\widetilde{u}(s,t,x) \upharpoonright_{s=0}= u(t,x) \\
\end{array} \right. \end{align}
We will give two types of estimates of $\nabla^k\partial_s\widetilde{u},\nabla^k\partial_x\widetilde{u}$ in the following. One is the decay of $\|\nabla^k\partial_s\widetilde{u}\|_{L^{2}_x}$ as $s\to\infty$ which can be easily proved via energy arguments. The other is the global boundedness of $\|\partial_x\widetilde{u}\|_{L^{\infty}_x}$ away from $s=0$ and the decay of $\|\nabla^k\partial_s\widetilde{u}\|_{L^{\infty}_x}$ as $s\to\infty$, both of which need additional efforts. And we will prove the decay estimates with respect to $s$ for $\|\partial_t\widetilde{u}\|_{{L^{\infty}_x}\bigcap L^2_x}$, which is the key integrability gain to compensate the loss of decay of $\partial_x\widetilde{u}$. We start with the estimate of $\|d\widetilde{u}\|_{L^{\infty}_x}$ which is the cornerstone for all other estimates.
\begin{remark}\label{ki78} The following inequality, which can be verified by Moser iteration, is known in the heat flow literature: if $v$ is a nonnegative function satisfying \begin{align*} \partial_t v-\Delta v\le 0, \end{align*} then for $t\ge1$, \begin{align*} v(x,t)\lesssim \int^t_{t-1}\int_{B(x,1)}v(y,s){\rm{dvol_y}}ds. \end{align*} \end{remark}
Introduce the norm: \begin{align}\label{ytgvfre}
\|u(t,x)\|_{\mathcal{X}_T}=&\|\nabla du\|_{C([0,T];L^2_x)}+\|\nabla \partial_tu\|_{C([0,T];L^2_x)}\nonumber\\
&+\| du\|_{C([0,T];L^2_x)}+\| \partial_tu\|_{C([0,T];L^2_x)}. \end{align}
Straightforward applications of Remark \ref{ki78}, (\ref{uu}) and the fact that the energy is non-increasing along the heat flow give the bounds for $\|d\widetilde{u}\|_{L^{\infty}_x}$. See also \cite{LZ} for another proof. \begin{lemma}\label{density}
Let $(u,\partial_tu)$ solve $(\ref{wmap1})$ in $\mathcal{X}_T$ (see (\ref{ytgvfre}) ) with $\|u\|_{\mathcal{X}_T}\le M$. If $\widetilde{u}$ is the solution to $(\ref{8.29.2})$ with initial data $u(t,x)$, then we have uniformly for $t\in[0,T]$, $s\in[1,\infty)$ \begin{align}
\left\| d\widetilde{u}(s,t,x) \right\| _{L_x^\infty }&\lesssim \left\| d u(t,x) \right\|_{L_x^2},\label{3.14a} \end{align} \end{lemma}
The decay of $\|\nabla^k\partial_s\widetilde{u}\|_{L^2_x}$ follows from an energy argument and the bound of the energy density provided by (\ref{3.14a}). \begin{lemma}\label{chen2222}
Let $(u,\partial_tu)\in \mathcal{X}_T$ with $\|u\|_{\mathcal{X}_T}\le M$, then for some universal constant $\delta>0$ the solution $\widetilde{u}(s,t,x)$ to heat flow (\ref{8.29.2}) satisfies \begin{align}
\|\partial_s\widetilde{u}(s,t,x)\|_{L^2_x}&\lesssim e^{-\delta s}MC(M), \mbox{ }{\rm{for}}\mbox{ }s>0\label{f41}\\
\|\nabla\partial_s\widetilde{u}(s,t,x)\|_{L^2_x}&\lesssim e^{-\delta s}MC(M), \mbox{ }{\rm{for}}\mbox{ }s\ge2\label{f42}\\
\int^{\infty}_0\|\nabla\partial_s\widetilde{u}(s,t,x)\|^2_{L^2_x}ds&\lesssim MC(M)\label{f40} \end{align} for all $t\in[0,T]$. The constant $C(M)$ grows polynomially in $M$. \end{lemma} \begin{proof} First we notice that (\ref{8.3}), (\ref{VI4}), (\ref{m8}), (\ref{huhu899}) and the maximum principle yield \begin{align}
\|\partial_s\widetilde{u}(s,t,x)\|_{L^{2}_x}&\lesssim e^{-\frac{s}{4}}\|\partial_s\widetilde{u}(0,t,x)\|_{L^{2}_x}\label{lao1}\\
\|\partial_s\widetilde{u}(s,t,x)\|_{L^{\infty}_x}&\lesssim s^{-1}e^{-\frac{s}{4}}\|\partial_s\widetilde{u}(0,t,x)\|_{L^{2}_x}.\label{lao2} \end{align} We introduce three energy functionals: \begin{align*}
{\mathcal{E}_1}(\widetilde{u}) = \frac{1}{2}\int_{{\Bbb H^2}} {{{\left| {d \widetilde{u}} \right|}^2}} {\rm{dvol_h}},\mbox{ }{\mathcal{E}_2}(\widetilde{u}) = \frac{1}{2}\int_{{\Bbb H^2}} {{{\left| {{\partial _s}\widetilde{u}} \right|}^2}} {\rm{dvol_h}},\mbox{ }{\mathcal{E}_3}(\widetilde{u}) = \frac{1}{2}\int_{{\Bbb H^2}} {{{\left| {\nabla {\partial _s}\widetilde{u}} \right|}^2}} {\rm{dvol_h}}. \end{align*} By integration by parts and (\ref{8.29.2}), we have \begin{align*}
\frac{d}{{ds}}{\mathcal{E}_1}(\widetilde{u}) = - \int_{{\Bbb H^2}} {{{\left| {\tau (\widetilde{u})} \right|}^2}} {\rm{dvol_h}}. \end{align*} Thus the energy is decreasing with respect to $s$ and \begin{align}\label{V1}
\|d\widetilde{u}\|^2_{L^2_x}+\int^s_0\|\partial_s \widetilde{u}\|^2_{L^2_x}ds\lesssim \mathcal{E}_1(u_0). \end{align} The non-positivity of the sectional curvature of the target together with integration by parts yields
$$\|\nabla d\widetilde{u}(s)\|^2_{L^2_x}\le \|\tau(\widetilde{u}(s))\|^2_{L^2_x}+\|d\widetilde{u}\|^2_{L^2_x} $$ Hence by (\ref{V1}), (\ref{8.29.2}) we conclude \begin{align}\label{V5}
\|\widetilde{u}\|^2_{\mathfrak{H}^2}+\int^s_0\|\partial_s \widetilde{u}\|^2_{L^2_x}ds\lesssim\|u_0\|^2_{\mathfrak{H}^2}. \end{align} Again by (\ref{8.29.2}) and integration by parts, one has \begin{align}
\frac{d}{{ds}}{\mathcal{E}_2}(\widetilde{u}) &= \int_{{\Bbb H^2}} {\left\langle {{\nabla _s}{\partial _s}\widetilde{u},{\partial _s}\widetilde{u}} \right\rangle } {\rm{dvol_h}}=\int_{{\Bbb H^2}} {\left\langle {{\nabla _s}\tau (\widetilde{u}),{\partial _s}\widetilde{u}} \right\rangle } {\rm{dvol_h}} \nonumber\\
&\le -\int_{{\Bbb H^2}} {\left\langle {\nabla {\partial _s}(\widetilde{u}),\nabla {\partial _s}\widetilde{u}} \right\rangle } {\rm{dvol_h}}+ C\int_{{\Bbb H^2}} {{{\left| {d\widetilde{u}} \right|}^2}{{\left| {{\partial _s}\widetilde{u}} \right|}^2}{\rm{dvol_h}}}.\label{j089} \end{align} Integrating (\ref{j089}) with respect to $s$ in $(s_1,s_2)$ for any $1<s_1<s_2$, we infer from (\ref{lao2}) and (\ref{V1}) that \begin{align}
\mathcal{E}_2(\widetilde{u}(s_2))-{\mathcal{E}_2}(\widetilde{u}(s_1))+\int^{s_2}_{s_1}\mathcal{E}_3(\widetilde{u}(s))ds &\lesssim \left\| { d\widetilde{u}} \right\|^2_{L^{\infty}_s{L_x^2}}\int^{s_2}_{s_1}\left\| {{\partial _s}\widetilde{u}} \right\|^2_{L_x^\infty}ds\lesssim M^4e^{-\delta s_1}. \end{align} Then by (\ref{lao1}) we have for $1<s<s_1<s_2$ and any $t\in [0,T]$ \begin{align}\label{lao3}
\int^{s_2}_{s_1}\|\nabla\partial_s\widetilde{u}(\tau,t,x)\|^2_{L^2_x}d\tau\lesssim M^2e^{-\delta s}. \end{align} Integration by parts and (\ref{8.29.2}) yield \begin{align}
&\frac{d}{{ds}}{\mathcal{E}_3}(\widetilde{u}(s))\nonumber\\
&\le -\int_{{\Bbb H^2}} {{{\left| {{\nabla ^2}{\partial _s}\widetilde{u}} \right|}^2}}{\rm{dvol_h}} + C\int_{\Bbb H^2}\big(\left| {d \widetilde{u}} \right|\left| {\nabla {\partial _s}\widetilde{u}} \right|{\left| {{\partial _s}\widetilde{u}} \right|^2} + {\left| {d\widetilde{u}} \right|^3}\left| {{\partial _s}\widetilde{u}} \right|\left| {\nabla {\partial _s}\widetilde{u}} \right|\big){\rm{dvol_h}}\nonumber\\
&+ C\int_{\Bbb H^2}\big(\left| {\nabla d\widetilde{u}} \right|\left| {d\widetilde{u}} \right|\left| {{\partial _s}\widetilde{u}} \right|\left| {\nabla {\partial _s}\widetilde{u}} \right| + {\left| {{\partial _s}\widetilde{u}} \right|^2}{\left| {d \widetilde{u}} \right|^4}\big){\rm{dvol_h}} \nonumber\\
&+ C\int_{\Bbb H^2}\big({\left| {\nabla {\partial _s}\widetilde{u}} \right|^2}{\left| {d\widetilde{u}} \right|^2} + {\left| {d\widetilde{u}} \right|^2}\left| {{\nabla ^2}{\partial _s}\widetilde{u}} \right|\left| {{\partial _s}\widetilde{u}} \right|\big){\rm{dvol_h}}.\label{9xian} \end{align} By H\"older, (\ref{lao3}), (\ref{lao2}), we see for $1<s<s_1<s_2$ and any $t\in [0,T]$ \begin{align*}
&\int_{{s_1}}^{{s_2}} {\int_{{\Bbb H^2}} {\left| {d\widetilde{u}} \right|} \left| {\nabla {\partial _s}\widetilde{u}} \right|{{\left| {{\partial _s}\widetilde{u}} \right|}^2}{\rm{dvol_h}}} ds\\
&\lesssim \left\| {{\partial _s}\widetilde{u}} \right\|_{L_s^4L_x^\infty ([{s_1},{s_2}] \times {\Bbb H^2})}^2{\left\| {d\widetilde{u}} \right\|_{L_s^\infty L_x^2([{s_1},{s_2}] \times {\Bbb H^2})}}{\left\| {\nabla {\partial _s}\widetilde{u}} \right\|_{L_s^2L_x^2([{s_1},{s_2}] \times {\Bbb H^2})}}\\ &\lesssim M^4{e^{ - \delta {s_1}}}. \end{align*} Similarly we have from (\ref{lao3}), (\ref{lao2}), (\ref{3.14a}) that for $1<s<s_1<s_2$ and any $t\in [0,T]$ \begin{align*}
&\int_{{s_1}}^{{s_2}} \int_{{\Bbb H^2}} {{{\left| {d\widetilde{u}} \right|}^3}} \left| {\nabla {\partial _s}\widetilde{u}} \right|\left| {{\partial _s}\widetilde{u}} \right|{\rm{dvol_hds}}\\
&\lesssim \left\| {d\widetilde{u}} \right\|_{L_s^\infty L_x^\infty ([{s_1},{s_2}] \times {\Bbb H^2})}^3{\left\| {\nabla {\partial _s}\widetilde{u}} \right\|_{L_s^2L_x^2([{s_1},{s_2}] \times {\Bbb H^2})}}{\left\| {{\partial _s}\widetilde{u}} \right\|_{L_s^2L_x^2([{s_1},{s_2}] \times {\Bbb H^2})}} \\
&\lesssim {M^5}{e^{ - \delta {s_1}}}. \end{align*} And similarly we obtain for $1<s<s_1<s_2$ and all $t\in [0,T]$ \begin{align*}
&\int_{{s_1}}^{{s_2}} \int_{{\Bbb H^2}} {\left| {\nabla d\widetilde{u}} \right|\left| {d\widetilde{u}} \right|\left| {{\partial _s}\widetilde{u}} \right|\left| {\nabla {\partial _s}\widetilde{u}} \right|} {\rm{dvol_hds}}\\
&\lesssim {\left\| {d\widetilde{u}} \right\|_{L_s^\infty L_x^\infty }}{\left\| {{\partial _s}\widetilde{u}} \right\|_{L_s^2L_x^\infty }}{\left\| {\nabla d\widetilde{u}} \right\|_{L_s^\infty L_x^2}}{\left\| {\nabla {\partial _s}\widetilde{u}} \right\|_{L_s^2L_x^2}} \\ &\lesssim {M^4}{e^{ - \delta {s_1}}}, \end{align*} where the integrand domains are $[{s_1},{s_2}] \times {\Bbb H^2}$. The remaining three terms in (\ref{9xian}) are easier to bound. In fact, Sobolev embedding, (\ref{lao2}) and (\ref{V5}) show \begin{align*}
\int_{{s_1}}^{{s_2}} {\int_{{\Bbb H^2}} {{{\left| {\nabla\widetilde{ u}} \right|}^4}} {{\left| {{\partial _s}\widetilde{u}} \right|}^2}} {\rm{dvol_hds}} \le \left\| {{\partial _s}\widetilde{u}} \right\|_{L_s^2L_x^\infty}^2\left\| {{\nabla d}\widetilde{u}} \right\|_{L_s^\infty L_x^2}^4 \le {M^6}{e^{ - \delta {s_1}}}. \end{align*} Similarly we obtain \begin{align*}
\int_{{s_1}}^{{s_2}} {\int_{{\Bbb H^2}} {{{\left| {d\widetilde{u}} \right|}^2}} {{\left| {\nabla {\partial _s}\widetilde{u}} \right|}^2}} {\rm{dvol_hds}} \le \left\| {\nabla {\partial _s}\widetilde{u}} \right\|_{L_s^2L_x^2}^2\left\| {d\widetilde{u}} \right\|_{L_s^\infty L_x^\infty }^2 \le {M^4}{e^{ - \delta {s_1}}}. \end{align*} The last remaining term in (\ref{9xian}) is absorbed by the negative term on the left. Indeed, for sufficiently small $\eta>0$ \begin{align*}
&\int_{{s_1}}^{{s_2}} {\int_{{\Bbb H^2}} {{{\left| {d\widetilde{u}} \right|}^2}} \left| {{\nabla ^2}{\partial _s}\widetilde{u}} \right|\left| {{\partial _s}\widetilde{u}} \right|} {\rm{dvol_hds}} \\
&\lesssim \eta \left\| {{\nabla ^2}{\partial _s}\widetilde{u}} \right\|_{L_s^2L_x^2([{s_1},{s_2}] \times {\Bbb H^2})}^2+ {\eta ^{ - 1}}\left\| {d\widetilde{u}} \right\|_{L_s^\infty L_x^\infty}^4\left\| {{\partial _s}\widetilde{u}} \right\|_{L_s^2L_x^2}^2 \\
&\lesssim \eta \left\| {{\nabla ^2}{\partial _s}\widetilde{u}} \right\|_{L_s^2L_x^2}^2 + {\eta ^{ - 1}}{M^6}{e^{ - \delta {s_1}}}. \end{align*} (\ref{V5}) implies that there exists $s_0\in(1,2)$ such that \begin{align}
\int_{{\Bbb H^2}} {{\left| {\nabla {\partial _s}\widetilde{u}} \right|}^2}({s_0},t,x) {\rm{dvol_h}} \le {M^2}. \end{align} Hence applying (\ref{{uv111}}) and the Gronwall inequality to (\ref{9xian}), we have for $s>s_0$ \begin{align}
&\int_{{\Bbb H^2}} {\left| {\nabla {\partial _s}\widetilde{u}} \right|}^2(s,t,x){\rm{dvol_h}}\nonumber\\
&\lesssim e^{ - \delta (s - {s_0})}\int_{{\Bbb H^2}} {{\left| {\nabla {\partial _s}\widetilde{u}} \right|}^2}({s_0},t,x) {\rm{dvol_h}} + MC(M)\int_{s_0}^s e^{ - \delta (s - \tau )}e^{ - \delta \tau }d\tau \nonumber\\ &\lesssim MC(M)\left( e^{ - \delta (s - {s_0})}+ e^{ - \delta s}(s - {s_0})\right).\label{7si} \end{align} Since $s_0\in(1,2)$, we have verified $(\ref{f42})$ for $s\ge 2$. (\ref{f40}) follows directly from (\ref{f42}), (\ref{V5}) and integrating (\ref{j089}) with respect to $s$. \end{proof}
We then prove the pointwise decay of $|\nabla\partial_s\widetilde{u}|$ with respect to $s$. First we need Bochner formulas for higher derivatives of $\widetilde{u}$ along the heat flow. The proofs of the following four lemmas are direct calculations with the Bochner technique. Since they are quite standard, we state the results without detailed calculations. \begin{lemma}\label{9tian}
Let $\widetilde{u}$ be a solution to the heat flow equation. Then $|\nabla\partial_s\widetilde{u}|^2$ satisfies \begin{align}
{\partial _s}{\left| {\nabla {\partial _s}\widetilde{u}} \right|^2} - {\Delta}{\left| {\nabla {\partial _s}\widetilde{u}} \right|^2} + 2{\left| {{\nabla ^2}{\partial _s}\widetilde{u}} \right|^2} &\lesssim {\left| {\nabla {\partial _s}\widetilde{u}} \right|^2}+ \left| {{\partial _s}\widetilde{u}} \right|{\left| {d\widetilde{u}} \right|^3}\left| {\nabla {\partial _s}\widetilde{u}} \right| \nonumber\\
&+{\left| {\nabla {\partial _s}\widetilde{u}} \right|^2}{\left| {d\widetilde{u}} \right|^2} + \left| {{\partial _s}\widetilde{u}} \right|\left| {{\nabla }d\widetilde{u}} \right|\left| {\nabla {\partial _s}\widetilde{u}} \right|\left| {d \widetilde{u}} \right|.\label{9tian1} \end{align} \end{lemma}
\begin{lemma}\label{B1} Let $\widetilde{u}$ be a solution to the heat flow equation. Then we have \begin{align}
&{\partial _s}{\left| {\nabla d\widetilde{u}} \right|^2} - {\Delta}{\left| {\nabla d\widetilde{u}} \right|^2} + 2{\left| {{\nabla ^2}d\widetilde{u}} \right|^2} \lesssim {\left| {\nabla d\widetilde{u}} \right|^2} + {\left| {d\widetilde{u}} \right|^2}{\left| {\nabla d\widetilde{u}} \right|^2} \nonumber\\
&+ {\left| {d\widetilde{u}} \right|}^2 + {\left| {d\widetilde{u}} \right|^4}\left| {\nabla d\widetilde{u}} \right|.\label{ion1} \end{align} \end{lemma}
\begin{lemma}\label{8zu}
Let $\widetilde{u}$ be a solution to the heat flow equation. Then $|\partial_t\widetilde{u}|^2$ and $|\nabla\partial_t\widetilde{u}|^2$ satisfy \begin{align}
&(\partial_s-\Delta)|\partial_t \widetilde{u}|^2=-2|\nabla \partial_t \widetilde{u}|^2-\mathbf{R}(\widetilde{u})(\nabla \widetilde{u},\partial_t \widetilde{u},\nabla \widetilde{u}, \partial_t \widetilde{u})\le 0.\label{10.127}\\
&{\partial _s}{\left| {\nabla {\partial _t}\widetilde{u}} \right|^2} - {\Delta}{\left| {\nabla {\partial _t}\widetilde{u}} \right|^2} + 2{\left| {{\nabla ^2}{\partial _t}\widetilde{u}} \right|^2} \lesssim {\left| {\nabla {\partial _t}\widetilde{u}} \right|^2} + {\left| {{\partial _s}\widetilde{u}} \right|{{\left| {{d}\widetilde{u}} \right|}^2}\left| {\nabla {\partial _t}\widetilde{u}} \right|}\nonumber \\
&+ {{{\left| {d\widetilde{u}} \right|}^3}\left| {{\partial _t}\widetilde{u}} \right|\left| {\nabla {\partial _t}\widetilde{u}} \right|} + {\left| {d\widetilde{u}} \right|\left| {{\partial _t}\widetilde{u}} \right|\left| {\nabla d\widetilde{u}} \right|\left| {\nabla {\partial _t}\widetilde{u}} \right|}+ {{{\left| {\nabla {\partial _t}\widetilde{u}} \right|}^2}{{\left| {d\widetilde{u}} \right|}^2}} .
\end{align}
\end{lemma}
\begin{lemma} Let $\widetilde{u}$ be a solution to the heat flow equation. Then \begin{align}
&{\partial _s}{\left| {{\nabla _t}{\partial _s}\widetilde{u}} \right|^2} - \Delta {\left| {{\nabla _t}{\partial _s}\widetilde{u}} \right|^2} + 2{\left| {\nabla {\nabla _t}{\partial _s}\widetilde{u}} \right|^2} \lesssim {\left| {{\nabla _t}{\partial _s}\widetilde{u}} \right|^2}+\left| {\nabla d\widetilde{u}} \right|\left| {{\partial _t}\widetilde{u}} \right|\left|{\nabla _t}{\partial _s}\widetilde{u}\right|\left| {{\partial _s}\widetilde{u}} \right| \nonumber\\
&+ \left| {{\nabla _t}{\partial _s}\widetilde{u}} \right|\left| {\nabla {\partial _s}\widetilde{u}} \right|\left| {{\partial _t}\widetilde{u}} \right|\left| {d\widetilde{u}} \right| + \left| {\nabla {\partial _t}\widetilde{u}} \right|\left| {{\partial _s}\widetilde{u}} \right|\left|{\nabla _t}{\partial _s}\widetilde{u}\right|\left| {d\widetilde{u}} \right|+{\left| {{\nabla _t}{\partial _s}\widetilde{u}} \right|^2}{\left| {d\widetilde{u}} \right|^2} .\label{iconm} \end{align} \end{lemma}
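As an illustration of the computations behind these four lemmas, here is a sketch of the inequality in (\ref{10.127}); the other estimates are obtained in the same spirit, only with more terms. Write $\Delta_{\widetilde{u}}=h^{ij}(\nabla_i\nabla_j-\Gamma^k_{ij}\nabla_k)$ for the connection Laplacian on $\widetilde{u}^*(TN)$ (a notation used only in this sketch). Using $\partial_s\widetilde{u}=\tau(\widetilde{u})$, the torsion free identity $\nabla_s\partial_t\widetilde{u}=\nabla_t\partial_s\widetilde{u}$ and the commutation rule for the pullback connection, one finds $\nabla_t\tau(\widetilde{u})=\Delta_{\widetilde{u}}\partial_t\widetilde{u}+h^{ij}{\bf R}(\partial_t\widetilde{u},\partial_i\widetilde{u})\partial_j\widetilde{u}$, and hence
\begin{align*}
(\partial_s-\Delta)\tfrac12|\partial_t\widetilde{u}|^2
&=\big\langle \nabla_t\tau(\widetilde{u})-\Delta_{\widetilde{u}}\partial_t\widetilde{u},\partial_t\widetilde{u}\big\rangle-|\nabla\partial_t\widetilde{u}|^2\\
&=h^{ij}\big\langle {\bf R}(\partial_t\widetilde{u},\partial_i\widetilde{u})\partial_j\widetilde{u},\partial_t\widetilde{u}\big\rangle-|\nabla\partial_t\widetilde{u}|^2
\le-|\nabla\partial_t\widetilde{u}|^2,
\end{align*}
since by (\ref{2.best}) and the Cauchy-Schwarz inequality $h^{ij}\langle {\bf R}(\partial_t\widetilde{u},\partial_i\widetilde{u})\partial_j\widetilde{u},\partial_t\widetilde{u}\rangle=h^{ij}\langle\partial_t\widetilde{u},\partial_i\widetilde{u}\rangle\langle\partial_j\widetilde{u},\partial_t\widetilde{u}\rangle-|d\widetilde{u}|^2|\partial_t\widetilde{u}|^2\le0$.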
We have previously seen that the bound on $\|d\widetilde{u}\|_{L^{\infty}}$ is useful for bounding $\|\nabla\partial_s\widetilde{u}\|_{L^{2}}$. In order to bound $\|\nabla\partial_s\widetilde{u}\|_{L^{\infty}}$, it is convenient to first have a bound on $\|\nabla d\widetilde{u}\|_{L^{\infty}}$. \begin{lemma}
Let $(u(t,x),\partial_tu(t,x))$ be a solution to (\ref{wmap1}) with $\|u(t,x)\|_{\mathcal{X}_T}\le M$. Then for $s\ge 2$ \begin{align}
\|\nabla d\widetilde{u}\|_{L^{\infty}_x}\lesssim MC(M).\label{chen2} \end{align} \end{lemma} \begin{proof} The proof of (\ref{chen2}) is also based on Remark \ref{ki78}. One can rewrite (\ref{ion1}) by Young inequality in the following form \begin{align*}
{\partial _s}{\left| {\nabla d\widetilde{u}} \right|^2} - {\Delta}{\left| {\nabla d\widetilde{u}} \right|^2} + 2{\left| {{\nabla ^2}d\widetilde{u}} \right|^2} \le C\left( {1 + {{\left| {d\widetilde{u}} \right|}^2}} \right){\left| {\nabla d\widetilde{u}} \right|^2} + {\left| {d\widetilde{u}} \right|^2}+ {\left| {d\widetilde{u}} \right|^8}. \end{align*}
Since for $s\ge1$, $\|d\widetilde{u}\|_{L^{\infty}}\lesssim M$, $\|\partial_s\widetilde{u}\|_{L^{\infty}}\lesssim M$, let $r(s,t,x) = {\left| {\nabla d\widetilde{u}} \right|^2} + M^2 + {M^8}$, then we have $${\partial _s}r - {\Delta}r \le C\left( {{M^2} + 1} \right)r.$$ Let $v = {e^{ - C\left( {{M^2} + 1} \right)s}}r$. For $s\ge2$, it is obvious that $v$ satisfies $$
{\partial _s}v - {\Delta}v \le 0. $$ By Remark \ref{ki78}, we deduce for $d(x,y)\le 1$, $s\ge2$ \begin{align*} v(s,t,x)\lesssim \int^{s}_{s-1}\int_{B(x,1)} v(\tau,t,y)d\tau{\rm{dvol_y}}. \end{align*}
Thus by $\|u\|_{\mathfrak{H}^2}\le M$ and (\ref{V5}), we conclude
$${\left| {\nabla d\widetilde{u}} \right|^2}\left( {{s},t,x} \right)\lesssim MC(M). $$ Hence (\ref{chen2}) follows. \end{proof}
Now we prove the decay of $\|\nabla\partial_s\widetilde{u}\|_{L^{\infty}_x}$ as $s\to\infty$. \begin{lemma}\label{decayingt}
Let $(u,\partial_tu)$ be a solution to (\ref{wmap1}) in $\mathcal{X}_T$ with $\|u(t,x)\|_{\mathcal{X}_T}\le M$. Then for some universal constant $\delta>0$ \begin{align}
\|\nabla\partial_s\widetilde{u}\|_{L^{\infty}_x}\lesssim MC(M)e^{-\delta s}, \mbox{ }{\rm{for}} \mbox{ }s\ge1.\label{koo4} \end{align} \end{lemma} \begin{proof} By (\ref{lao2}), for $s\ge1$ \begin{align}\label{kongkong}
\|\partial_s\widetilde{u}\|_{L^{\infty}_x}\lesssim e^{-\delta s}M. \end{align} We can rewrite (\ref{9tian1}) by Young inequality as \begin{align}
{\partial _s}{\left| {\nabla {\partial _s}\widetilde{u}} \right|^2} - {\Delta}{\left| {\nabla {\partial _s}\widetilde{u}} \right|^2} \le (1 + {\left| {d\widetilde{u}} \right|^2}){\left| {\nabla {\partial _s}\widetilde{u}} \right|^2} + {\left| {d\widetilde{u}} \right|^6}{\left| {{\partial _s}\widetilde{u}} \right|^2} + {\left| {d\widetilde{u}} \right|^2}{\left| {\nabla d\widetilde{u}} \right|^2}{\left| {{\partial _s}\widetilde{u}} \right|^2}. \end{align}
Let $g(s,t,x) = {\left| {d\widetilde{u}} \right|^6}{\left| {{\partial _s}\widetilde{u}} \right|^2} + {\left| {d\widetilde{u}} \right|^2}{\left| {\nabla d\widetilde{u}} \right|^2}{\left| {{\partial _s}\widetilde{u}} \right|^2}$; then by Lemma \ref{density}, (\ref{chen2}) and (\ref{kongkong}), $g(s,t,x) \le C(M)M{e^{ - \delta s}}$ for $s\ge1$. Let $f(s,t,x) = {\left| {\nabla \partial_s \widetilde{u}} \right|^2}\left( {s,t,x} \right) + \frac{1}{\delta }C(M)M{e^{ - \delta s}}$; then $${\partial _s}f - {\Delta}f \le C\left( {{M^2} + 1} \right)f.$$ Then $\bar v=e^{-C( {{M^2} + 1})s}f$ satisfies $$
{\partial _s}\bar v - {\Delta}\bar v \le0. $$ Applying Remark \ref{ki78} to $\bar{v}$ as before implies
$${\left| {\nabla {\partial _s}\widetilde{u}} \right|^2}\left( {{s},t,x} \right) + \frac{1}{\delta }C(M)M{e^{ - \delta {s}}} \le \int_{{s} - 1}^{{s}} {\int_{{\Bbb H^2}} {{{\left| {\nabla {\partial _s}\widetilde{u}\left( {\tau,t,y} \right)} \right|}^2}{\rm{dvol_hd\tau}}}} +C(M)M{e^{ - \delta {s}}}. $$ Therefore, (\ref{koo4}) follows from \begin{align}\label{ingjh}
\int_{{s} - 1}^{{s}} \int_{{\Bbb H^2}} {{{\left| {\nabla {\partial _s}\widetilde{u}\left( {\tau,t,y} \right)} \right|}^2}{\rm{dvol_h}d\tau}}\lesssim MC(M)e^{-\delta s}, \end{align} which arises from (\ref{f42}). \end{proof}
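We also note, for the reader's convenience, why the auxiliary function $f$ in the above proof is a subsolution of the stated type: since $g(s,t,x)\le C(M)Me^{-\delta s}$ for $s\ge1$, the added term obeys
\begin{align*}
\partial_s\Big(\frac{1}{\delta}C(M)Me^{-\delta s}\Big)=-C(M)Me^{-\delta s}\le -g(s,t,x),
\end{align*}
and therefore, by the differential inequality for $|\nabla\partial_s\widetilde{u}|^2$ together with $\|d\widetilde{u}\|_{L^{\infty}}\lesssim M$ for $s\ge1$,
\begin{align*}
\partial_s f-\Delta f\le (1+|d\widetilde{u}|^2)|\nabla\partial_s\widetilde{u}|^2\le C(M^2+1)f.
\end{align*}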
We now move to the decay of $|\partial_t\widetilde{u}|$ with respect to $s$. \begin{lemma}
Let $(u,\partial_tu)$ be a solution to (\ref{wmap1}) in $\mathcal{X}_T$ with $\|u(t,x)\|_{\mathcal{X}_T}\le M$. Then \begin{align}
\|\partial_t\widetilde{u}\|_{L^{2}_x}&\lesssim MC(M)e^{-\delta s},\mbox{ }{\rm{for}} \mbox{ }s>0\label{xiu1}\\
\|\partial_t\widetilde{u}\|_{L^{\infty}_x}&\lesssim MC(M)e^{-\delta s},\mbox{ }{\rm{for}} \mbox{ }s\ge1\label{xiu2}\\
\int^{\infty}_0\|\nabla\partial_t\widetilde{u}\|^2_{L^{2}_x}ds&\lesssim MC(M), \label{xiu3}\\
\|\nabla\partial_t\widetilde{u}\|_{L^{\infty}_x}&\lesssim MC(M)e^{-\delta s}, \mbox{ }{\rm{for}} \mbox{ }s\ge1.\label{xiu4} \end{align} \end{lemma} \begin{proof} The maximum principle and (\ref{huhu899}) imply \begin{align}\label{sdf8uj}
\| {{\partial _t}\widetilde{u}(s,t,x)} \|^2_{L^{\infty}_x} \le {s^{- 2}}{e^{ - \delta s}} \left\| {{\partial _t}\widetilde{u}(0,t,x)} \right\|^2_{L^2_x}. \end{align} Moreover further calculations with (\ref{10.127}) show \begin{align*}
(\partial_s-\Delta)|\partial_t \widetilde{u}|\le 0. \end{align*} Thus maximum principle and (\ref{m8}) give \begin{align}\label{10.26}
\|{{\partial _t}\widetilde{u}(s,t,x)} \|_{L^2_x}\lesssim e^{-\frac{1}{4}s}\|\partial_tu\|_{L^2_x}\le M. \end{align} Therefore, (\ref{xiu1}) and (\ref{xiu2}) follow from (\ref{sdf8uj}) and (\ref{10.26}) respectively. Second, we prove (\ref{xiu3}) by energy arguments. Introduce the energy functionals $$
\mathcal{E}_4(\widetilde{u})=\frac{1}{2}\int_{\Bbb H^2}|\partial_t\widetilde{u}|^2{\rm{dvol_h}},\mbox{ }\mathcal{E}_5(\widetilde{u})=\int_{\Bbb H^2}|\nabla\partial_t\widetilde{u}|^2{\rm{dvol_h}}. $$ Then integration by parts gives \begin{align}\label{muxc6zb8}
\frac{d}{{ds}}{\mathcal{E}_4}\left( \widetilde{u} \right) + {\mathcal{E}_5}\left( \widetilde{u} \right) \le \int_{\Bbb H^2}{\left| {d\widetilde{u}} \right|^2}{\left| {{\partial _t}\widetilde{u}} \right|^2}{\rm{dvol_h}}. \end{align} Integrating this formula with respect to $s$ in $[0,\kappa)$ with $\kappa>1$ shows
$$\int_0^{\kappa} {\left\| {\nabla \partial {}_t\widetilde{u}} \right\|_{L_x^2}^2ds} \le \left\| {\partial_t\widetilde{u}(0)} \right\|_{L_x^2}^2 + \int_0^1 {\left\| {{\partial _t}\widetilde{u}} \right\|_{{L^4}}^2\left\| {d\widetilde{u}} \right\|_{{L^4}}^2} ds + {\mathcal{E}_1}\left( \widetilde{u} \right)M\int_1^{\kappa} {{e^{ - 2\delta s}}ds}, $$ where we have used (\ref{xiu1}), (\ref{xiu2}) and H\"older. By Sobolev embedding and letting $\kappa\to\infty$, we obtain \begin{align}\label{chuyunyi}
\int_0^\infty {\left\| {\nabla \partial_t\widetilde{u}} \right\|_{L_x^2}^2ds} \le {M^4} + {M^2}. \end{align} Finally, the proof of (\ref{xiu4}) follows by the same arguments as (\ref{koo4}) illustrated in Lemma \ref{decayingt}. \end{proof}
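In the arguments below we repeatedly apply the Gronwall inequality in its differential form, which we record for convenience: if $y$ is absolutely continuous and satisfies $y'(\kappa)\le a(\kappa)y(\kappa)+b(\kappa)$ on $[s_0,s]$ with $a,b$ locally integrable, then
\begin{align*}
y(s)\le e^{\int_{s_0}^{s}a(\tau)d\tau}\,y(s_0)+\int_{s_0}^{s}e^{\int_{\kappa}^{s}a(\tau)d\tau}\,b(\kappa)\,d\kappa.
\end{align*}
We shall also use the Poincar\'e inequality on $\Bbb H^2$, namely $\|f\|_{L^2(\Bbb H^2)}\le 2\|\nabla f\|_{L^2(\Bbb H^2)}$, which follows from the spectral gap $-\Delta\ge\frac{1}{4}$ on $L^2(\Bbb H^2)$.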
\begin{lemma}\label{zhangqiling}
Let $(u,\partial_tu)$ be a solution to (\ref{wmap1}) in $\mathcal{X}_T$ with $\|u(t,x)\|_{\mathcal{X}_T}\le M$. Then \begin{align}
{\left\| {{s^{\frac{1}{2}}}{\nabla _t}{\partial _s}\widetilde{u}} \right\|_{L_s^\infty L_x^2}} &\lesssim MC(M)\mbox{ }{\rm{for}}\mbox{ }s\in[0,1]\label{haorenhao9}. \end{align} Moreover, for $s\in[1,\infty)$ and some $0<\delta\ll1$ \begin{align}
{\left\| {{\nabla _t}{\partial _s}\widetilde{u}} \right\|_{L_s^\infty L_x^\infty }} &\lesssim e^{-\delta s}MC(M).\label{haorenhao10} \end{align} \end{lemma} \begin{proof}
It is easy to see that $\left| {{\nabla _t}{\partial _s}\widetilde{u}} \right| \le \left| {\nabla {\partial _t}\widetilde{u}} \right|+ \left| {{h^{ii}}{\nabla _i}{\nabla _t}{\partial _i}\widetilde{u}} \right| + \left| {{\partial _t}\widetilde{u}} \right|{\left| {d\widetilde{u}} \right|^2}$, and hence \begin{align}\label{facik}
\left| {{\nabla _t}{\partial _s}\widetilde{u}} \right| \le \left| {{\nabla ^2}{\partial _t}\widetilde{u}} \right| + \left| {{\partial _t}\widetilde{u}} \right|{\left| {d\widetilde{u}} \right|^2} + \left| {\nabla {\partial _t}\widetilde{u}} \right|. \end{align} Integration by parts gives \begin{align}
&\frac{d}{{ds}}\left\| {\nabla {\partial _t}\widetilde{u}} \right\|_{{L^2}}^2\nonumber\\
&\le - {\int_{{\Bbb H^2}} {\left| {{\nabla ^2}{\partial _t}\widetilde{u}} \right|} ^2}{\rm{dvol_h}} + \int_{{\Bbb H^2}} {\left| {\nabla {\partial _t}\widetilde{u}} \right|\left| {{\partial _t}\widetilde{u}} \right|} \left| {d\widetilde{u}} \right|\left| {{\partial _s}\widetilde{u}} \right|{\rm{dvol_h}} \nonumber\\
&+ \int_{{\Bbb H^2}} {\left| {{\nabla ^2}{\partial _t}\widetilde{u}} \right|} {\left| {d\widetilde{u}} \right|^2}\left| {{\partial _s}\widetilde{u}} \right|{\rm{dvol_h}} + {\int_{{\Bbb H^2}} {\left| {\nabla {\partial _t}\widetilde{u}} \right|} ^2}{\rm{dvol_h}}+ {\int_{{\Bbb H^2}} {\left| {\nabla {\partial _t}\widetilde{u}} \right|} ^2}{\left| {d\widetilde{u}} \right|^2}{\rm{dvol_h}}.\label{haoren6} \end{align} By Sobolev embedding, we obtain
$$\frac{d}{{ds}}\left\| {\nabla {\partial _t}\widetilde{u}} \right\|_{{L^2}}^2 \le C\left\| {\nabla {\partial _t}\widetilde{u}} \right\|_{{L^2}}^2\left( {1 + \left\| {{\partial _s}\widetilde{u}} \right\|_{{L^\infty }}^2 + \left\| {d\widetilde{u}} \right\|_{{L^\infty }}^2} \right) + \left\| {\nabla d\widetilde{u}} \right\|_{{L^2}}^4\left\| {{\partial _s}\widetilde{u}} \right\|_{{L^\infty }}^2. $$ Thus we get
$$\left\| {\nabla {\partial _t}\widetilde{u}} \right\|_{{L^2}}^2 \le {e^{\int^s_0V(\tau)d\tau}}\Big(\left\| {\nabla {\partial _t}\widetilde{u}(0,t,x)} \right\|_{{L^2}}^2 + \int_0^s {{e^{ - \int_0^{\kappa} {V(\tau )} d\tau }}} \left\| {\nabla d\widetilde{u}(\kappa)} \right\|_{{L^2}}^4\left\| {{\partial _s}\widetilde{u}(\kappa)} \right\|_{{L^\infty }}^2d\kappa\Big), $$
where $V(s)=Cs+C\|d\widetilde{u}\|^2_{L^{\infty}}+C\|\partial_s\widetilde{u}\|^2_{L^{\infty}}$. By Lemma \ref{ktao1} and Lemma \ref{8.44} \begin{align}\label{haorenh7}
\int^{1}_0\|d\widetilde{u}\|^2_{L^{\infty}}ds+\int^1_0\|\partial_s\widetilde{u}\|^2_{L^{\infty}}ds\le M^2. \end{align} Hence we conclude for $s\in[0,1]$, \begin{align}\label{jidujihao1}
\left\| {\nabla {\partial _t}\widetilde{u}} \right\|_{{L^2}}^2 \le {e^{MC(M)s}}\left(\left\| {\nabla {\partial _t} \widetilde{u}(0,t,x)} \right\|_{{L^2}}^2 + MC(M)\right). \end{align} With (\ref{haoren6}), we further deduce that \begin{align}\label{jidujihao}
\int_0^1 {{{\left\| {{\nabla ^2}{\partial _t}\widetilde{u}} \right\|}_{{L^2}}}ds} \lesssim MC(M). \end{align} Integration by parts shows, \begin{align*}
&\frac{d}{{ds}}\left( {\left\| {{\nabla ^2}{\partial _t}\widetilde{u}} \right\|_{L_x^2}^2s} \right) \\
&\le - s{\int_{{\Bbb H^2}} {\left| {{\nabla ^3}{\partial _t}\widetilde{u}} \right|} ^2}{\rm{dvol_hdt}} + \left\| {{\nabla ^2}{\partial _t}\widetilde{u}} \right\|_{L_x^2}^2 + s{\left\| {{\partial _s}\widetilde{u}} \right\|_{L_x^\infty }}{\left\| {\nabla {\partial _t}\widetilde{u}} \right\|_{L_x^4}}{\left\| {d\widetilde{u}} \right\|_{L_x^4}}{\left\| {{\nabla ^2}{\partial _t}\widetilde{u}} \right\|_{L_x^2}} \\
&+ s{\left\| {{\partial _t}\widetilde{u}} \right\|_{L_x^\infty }}{\left\| {\nabla {\partial _s}\widetilde{u}} \right\|_{L_x^2}}{\left\| {d\widetilde{u}} \right\|_{L_x^\infty }}{\left\| {{\nabla ^2}{\partial _t}\widetilde{u}} \right\|_{L_x^2}} + s\left\| {d\widetilde{u}} \right\|_{{L^8}}^2{\left\| {\nabla {\partial _t}\widetilde{u}} \right\|_{L_x^4}}{\left\| {{\nabla ^3}{\partial _t}\widetilde{u}} \right\|_{L_x^2}} \\
&+ s{\left\| {d\widetilde{u}} \right\|_{L_x^\infty }}{\left\| {\nabla {\partial _t}\widetilde{u}} \right\|_{L_x^2}}{\left\| {{\partial _s}\widetilde{u}} \right\|_{L_x^\infty }}{\left\| {{\nabla ^2}{\partial _t}\widetilde{u}} \right\|_{L_x^2}} + s\left\| {d\widetilde{u}} \right\|_{{L^\infty }}^2{\left\| {\nabla {\partial _t}\widetilde{u}} \right\|_{L_x^2}}{\left\| {{\nabla ^2}{\partial _t}\widetilde{u}} \right\|_{L_x^2}} \\
&+ s\left\| {d\widetilde{u}} \right\|_{{L^\infty }}^2{\left\| {\nabla d\widetilde{u}} \right\|_{L_x^2}}{\left\| {{\nabla ^2}{\partial _t}\widetilde{u}} \right\|_{L_x^2}} + s{\left\| {{\partial _t}\widetilde{u}} \right\|_{L_x^\infty }}{\left\| {\nabla d\widetilde{u}} \right\|_{L_x^2}}{\left\| {{\partial _s}\widetilde{u}} \right\|_{L_x^\infty }}{\left\| {{\nabla ^2}{\partial _t}\widetilde{u}} \right\|_{L_x^2}}\\
&+ s{\left\| {\nabla d\widetilde{u}} \right\|_{L_x^2}}{\left\| {{\partial _t}\widetilde{u}} \right\|_{L_x^\infty }}{\left\| {{\nabla ^3}{\partial _t}\widetilde{u}} \right\|_{L_x^2}}{\left\| {d\widetilde{u}} \right\|_{L_x^\infty }} + s\left\| {d\widetilde{u}} \right\|_{{L^\infty }}^2\left\| {{\nabla ^2}{\partial _t}\widetilde{u}} \right\|_{{L^2}}^2.
\end{align*} Then the Gronwall inequality together with (\ref{V5}) and (\ref{haorenh7}) yields, for all $s\in[0,1]$, \begin{align}\label{haoren8}
\left\| {{\nabla ^2}{\partial _t}\widetilde{u}} \right\|_{L_x^2}^2s + \int_0^s {{\int_{{\Bbb H^2}} {\left| {{\nabla ^3}{\partial _t}\widetilde{u}} \right|}^2}\tau} {\rm{dvol_h}}d\tau \le MC(M). \end{align} Thus by (\ref{haoren8}), (\ref{jidujihao}), (\ref{jidujihao1}), and (\ref{facik}), we conclude \begin{align}
{\left\| {{s^{\frac{1}{2}}}{\nabla _t}{\partial _s}\widetilde{u}} \right\|_{L_s^\infty[0,1] L_x^2}} \le MC(M). \end{align} (\ref{haorenhao10}) follows by the same path as Lemma \ref{decayingt} with the help of (\ref{iconm}). The essential ingredient is to prove for $s_1\ge2$ \begin{align}\label{huaqian}
\int^{s_1+1}_{s_1}\|\nabla^2\partial_t\widetilde{u}\|^2_{L^2_x}ds\lesssim MC(M)e^{-\delta s_1}. \end{align} The remaining proof is devoted to verifying (\ref{huaqian}). By (\ref{{uv111}}) and (\ref{haoren6}), we obtain for any $0<c\ll1$ \begin{align}
&\frac{d}{{ds}}\left\| {\nabla {\partial _t}\widetilde{u}} \right\|_{{L^2}}^2 +c {\int_{{\Bbb H^2}} {\left| {{\nabla}{\partial _t}\widetilde{u}} \right|} ^2}{\rm{dvol_h}}+c {\int_{{\Bbb H^2}} {\left| {{\nabla^2}{\partial _t}\widetilde{u}} \right|} ^2}{\rm{dvol_h}}\nonumber\\
&\lesssim \int_{{\Bbb H^2}} {\left| {\nabla {\partial _t}\widetilde{u}} \right|\left| {{\partial _t}\widetilde{u}} \right|} \left| {d\widetilde{u}} \right|\left| {{\partial _s}\widetilde{u}} \right|{\rm{dvol_h}}+ \int_{{\Bbb H^2}} {\left| {{\nabla ^2}{\partial _t}\widetilde{u}} \right|} {\left| {d\widetilde{u}} \right|^2}\left| {{\partial _s}\widetilde{u}} \right|{\rm{dvol_h}}\nonumber\\
&+ {\int_{{\Bbb H^2}} {\left| {\nabla {\partial _t}\widetilde{u}} \right|} ^2}{\left| {d\widetilde{u}} \right|^2}{\rm{dvol_h}} +\frac{1}{c} {\int_{{\Bbb H^2}} {\left| {\nabla {\partial _t}\widetilde{u}} \right|} ^2}{\rm{dvol_h}}.\label{haoren68} \end{align} By Lemma 3.3 and (\ref{lao2}), we have for $s\ge1$ \begin{align}
&\left\| {d\widetilde{u}} \right\|_{L_x^{\infty}}\lesssim \|du\|_{L^2_x}\lesssim M\label{g8uiknlo}\\
&\left\| {{\partial _s}\widetilde{u}} \right\|_{L_x^{\infty}}\lesssim e^{-\delta s}\|\partial_s\widetilde{u}(0,t)\|_{L^2_x}\lesssim e^{-\delta s}M.\label{g8uiknlp} \end{align} Then by Sobolev embedding and Gronwall inequality, for $s\ge1$ \begin{align}
\left\| {\nabla {\partial _t}\widetilde{u}} \right\|_{L_x^2}^2 &\lesssim {e^{-cs}}\left\| {\nabla {\partial _t}\widetilde{u}(1,t,x)} \right\|_{L_x^2}^2
+ {e^{ - cs}}\int_1^s {{e^{c\kappa}}} \left\| {\nabla {\partial _t}\widetilde{u}(\kappa)} \right\|_{L_x^2}^2d\kappa\nonumber\\
&+ {e^{ - cs}}\int_1^s {{e^{c\kappa}}} \left\| {\nabla d\widetilde{u}(\kappa,t)} \right\|_{L_x^2}^4\left\| {{\partial _s}\widetilde{u}(\kappa,t)} \right\|_{L_x^\infty }^2d\kappa.\label{ua5vvz} \end{align} Hence (\ref{chuyunyi}), (\ref{V5}), (\ref{g8uiknlp}) and (\ref{ua5vvz}) give for $s\in[0,\infty)$ \begin{align}\label{fanlin}
\left\| {\nabla {\partial _t}\widetilde{u}} \right\|_{L_x^2}^2 \le MC(M), \end{align} where (\ref{fanlin}) when $s\in[0,1]$ follows by (\ref{jidujihao1}). Integrating (\ref{muxc6zb8}) with respect to $s$ in $[s_1,s_2]$ for $1\le s_1\le s_2<\infty$ yields \begin{align}\label{chanjiang1}
\int_{{s_1}}^{{s_2}} {\left\| {\nabla {\partial _t}\widetilde{u}} \right\|_{L_x^2}^2} ds \le \left\| {{\partial _t}\widetilde{u}} \right\|_{L_x^2}^2({s_1}) + \int_{{s_1}}^{{s_2}} {\left\| {{\partial _t}\widetilde{u}} \right\|_{L_x^2}^2} \left\| {d\widetilde{u}} \right\|_{L_x^\infty }^2ds. \end{align} By (\ref{xiu1}) and Lemma \ref{density}, \begin{align}\label{chanjiang2}
\int_{{s_1}}^{{s_2}} {\left\| {\nabla {\partial _t}\widetilde{u}} \right\|_{L_x^2}^2} ds \lesssim MC(M){e^{ - \delta {s_1}}}. \end{align} Thus in any interval $[s_*,s_*+1]$ there exists $s^0_*\in[s_*,s_*+1]$ such that \begin{align}\label{chanjiang21}
{\left\| {\nabla {\partial _t}\widetilde{u}}(s^0_*) \right\|_{L_x^2}^2} \lesssim MC(M){e^{ - \delta {s_*}}}. \end{align} Fix $s_*\ge 1$; applying the Gronwall inequality to (\ref{haoren68}) on $[s^0_*,a]$ with $a\in[s^0_*,s_*+2]$ gives \begin{align}
\left\| {\nabla {\partial _t}\widetilde{u}} \right\|_{L_x^2}^2(a,t) &\le {e^{ - ca}}\left\| {\nabla {\partial _t}\widetilde{u}(s_*^0,t,x)} \right\|_{L_x^2}^2+ {e^{ - ca}}\int_{s^0_*}^a {{e^{cs}}}\left\| {\nabla {\partial _t}\widetilde{u}(s)} \right\|_{L_x^2}^2ds\nonumber\\
&+ {e^{ - ca}}\int_{s^0_*}^a {{e^{cs}}} \left\| {\nabla d\widetilde{u}(s)} \right\|_{L_x^2}^4\left\| {{\partial _s}\widetilde{u}(s)} \right\|_{L_x^\infty }^2ds.\label{ua6vvz} \end{align} Thus by (\ref{xiu1}), Lemma \ref{density}, (\ref{chanjiang21}) and the fact that $a$ at least ranges over all $[s_*+1,s_*+2]$, we have for $s\ge2$, \begin{align}\label{chanjiang3}
\left\| {\nabla {\partial _t}\widetilde{u}} \right\|_{L_x^2}^2\le MC(M) e^{-\delta s}. \end{align} Integrating (\ref{haoren68}) with respect to $s$ again in $[s_1,s_1+1]$, we obtain (\ref{huaqian}) by (\ref{chanjiang3}) and (\ref{xiu1}), Lemma 3.3. Finally using maximum principle and Remark \ref{ki78} as Lemma \ref{decayingt}, we get (\ref{haorenhao10}) from (\ref{ua5vvz}), (\ref{chanjiang3}), (\ref{facik}) and Lemma \ref{decayingt}. \end{proof}
In the remaining part of this subsection, we consider the short time behavior of the differential fields under the heat flow. Since the energy of the solution to the heat flow does not decay to zero in our case, we cannot expect it to behave like a solution to the linear heat equation on large time scales. However, one can still expect the heat flow to be almost governed by the linear equation on short time scales. We summarize these useful estimates in the following proposition. \begin{proposition}\label{lize1} Let $u:[0,T]\times\Bbb H^2\to \Bbb H^2$ be a solution to (\ref{wmap1}) satisfying \begin{align*}
\|(\nabla du,\nabla\partial_t u)\|_{L^2\times L^2}+\|(du,\partial_t u)\|_{L^2\times L^2}\le M. \end{align*} If $\widetilde{u}:\Bbb R^+\times[0,T]\times\Bbb H^2\to \Bbb H^2$ is the solution to (\ref{8.29.2}) with initial data $u(t,x)$, then for any $\eta>0$, it holds uniformly for $(s,t)\in(0,1)\times[0,T]$ that \begin{align*}
&s^{\frac{1}{2}}{\left\| {\nabla d\widetilde{u}} \right\|_{{L^\infty_x }}} + s^{\frac{1}{2}}{\left\| {\nabla {\partial _t}\widetilde{u}} \right\|_{{L^\infty_x }}} + {s}{\left\| {\nabla {\partial _s}\widetilde{u}} \right\|_{{L^\infty_x }}} + s^{\frac{1}{2}}{\left\| {{\partial _s}\widetilde{u}} \right\|_{{L^\infty_x }}}\\
&+ s^{\frac{1}{2}}{\left\| {\nabla {\partial _s}\widetilde{u}} \right\|_{{L^2_x}}} + {\left\| {s^{\frac{1}{2}}{\nabla ^2}{\partial _s}\widetilde{u}} \right\|_{L_{s,x}^2}}
+s{\left\| {\nabla_t {\partial _s}\widetilde{u}} \right\|_{{L^{\infty}_x}}}+s^{\eta}\|d\widetilde{u}\|_{L^{\infty}_x}\le MC(M). \end{align*} \end{proposition} \begin{proof}
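Throughout the proof we use the smoothing effect of the heat semigroup $e^{s\Delta}$ on $\Bbb H^2$ in the following standard form, valid for $0<s\le 2$ and $2\le p\le q\le\infty$:
\begin{align*}
\|e^{s\Delta}f\|_{L^q_x}\lesssim s^{-\left(\frac{1}{p}-\frac{1}{q}\right)}\|f\|_{L^p_x},
\end{align*}
which follows from the Gaussian-type upper bound for the heat kernel on $\Bbb H^2$; in particular $\|e^{s\Delta}f\|_{L^{\infty}_x}\lesssim s^{-\frac{1}{2}}\|f\|_{L^2_x}$ and $\|e^{s\Delta}f\|_{L^{4}_x}\lesssim s^{-\frac{1}{4}}\|f\|_{L^2_x}$.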
Since $\|\nabla d\widetilde{u}\|_{L^2_x}\le M$, as shown by (\ref{V5}), the Sobolev embedding implies $\|d\widetilde{u}\|_{L^p_x}\le M$ for any $p\in(2,\infty)$. Then (\ref{uu}) and (\ref{huhu89}) yield $s^{\eta}\|d\widetilde{u}\|_{L^{\infty}_x}\le M$ for any $\eta>0$ and all $(t,s)\in[0,T]\times(0,1)$. By (\ref{ion1}), one has
$${\partial _s}\left| {\nabla d\widetilde{u}} \right| - {\Delta}\left| {\nabla d\widetilde{u}} \right|{\rm{ }} \le K\left| {\nabla d\widetilde{u}} \right| + {\left| {d\widetilde{u}} \right|^2}\left| {\nabla d\widetilde{u}} \right| + {\left| {d\widetilde{u}} \right|} + {\left| {d\widetilde{u}} \right|^4}. $$ Furthermore we obtain
$$\left( {\partial _s} - {\Delta} \right)\left( {{e^{ - sK}}{e^{ - \int_0^s {\left\| {d\widetilde{u}(\tau )} \right\|_{L_x^\infty }^2d\tau } }}\left| {\nabla d\widetilde{u}} \right|} \right) \le {e^{ - sK}}{e^{ - \int_0^s {\left\| {d\widetilde{u}(\tau )} \right\|_{L_x^\infty }^2d\tau } }}\left( {{{\left| {d\widetilde{u}} \right|}} + {{\left| {d\widetilde{u}} \right|}^4}} \right). $$ Then maximum principle implies for $s\in[0,2]$ \begin{align*}
{\left\| { {\nabla d\widetilde{u}}(s)} \right\|_{L_x^\infty }} &\lesssim {\big\| {{e^{{\Delta}\frac{s}{2}}}}( {{e^{ - \frac{{sK}}{2}}}{e^{ - \int_0^{\frac{s}{2}} {\left\| {d\widetilde{u}(\tau )} \right\|_{L_x^\infty }^2d\tau } }}\left| {\nabla d\widetilde{u}} \right|(\frac{s}{2})})\big\|_{L_x^\infty }} \\
&+ {\big\| {\int_{\frac{s}{2}}^s {{e^{{\Delta}(s - \tau )}} {{e^{ - K\tau }}{e^{ - \int_0^\tau {\| {d\widetilde{u}(\tau_1)} \|_{L_x^\infty }^2d\tau_1 } }}( {{{| {d\widetilde{u}}|}}+ {{| {d\widetilde{u}} |}^4}} )(\tau )} d\tau } }\big \|_{L_x^\infty }}. \end{align*} By the smoothing effect of the heat semigroup, we obtain for $s\in[0,1]$ \begin{align*}
{\left\| { {\nabla d\widetilde{u}}(s)} \right\|_{L_x^\infty }} \lesssim {s^{ - \frac{1}{2}}}{\big\| { {\nabla d\widetilde{u}} (\tfrac{s}{2})} \big\|_{L_x^2}} + \int_{\frac{s}{2}}^s \big\| {| {d\widetilde{u}}|^4}(\tau ) \big\|_{L_x^\infty } + \big\| {| {d\widetilde{u}}|}(\tau ) \big\|_{L_x^\infty }\,d\tau. \end{align*} Then Lemma \ref{8.5} and Lemma \ref{8.44} show for $s\in(0,1)$
$${\left\| \nabla d\widetilde{u}(s) \right\|_{L_x^\infty }} \le {s^{ - \frac{1}{2}}}{\big\| { {\nabla d\widetilde{u}}(\frac{s}{2})} \big\|_{L_x^2}} + \int_{\frac{s}{2}}^s {{\tau ^{ - 3/2}}( \big\| {du} \big\|^4_{L_x^{\frac{8}{3}}} + \big\| {du} \big\|_{L_x^2})} d\tau. $$ Therefore by Sobolev inequality we conclude \begin{align*}
{\left\| {\left| {\nabla d\widetilde{u}} \right|(s)} \right\|_{L_x^\infty }} &\le {s^{ -\frac{1}{2}}}\mathop {\sup }\limits_{t \in [0,T]} \left( \left\| {\nabla du}(t) \right\|_{L_x^2}^4 + \left\| {du}(t) \right\|_{L_x^2} \right)\nonumber\\
&+{s^{ - \frac{1}{2}}}\mathop {\sup }\limits_{s \in [0,1]} {\left\| {\left| {\nabla d\widetilde{u}} \right|(s)} \right\|_{L_x^2}}. \end{align*} Thus by (\ref{V5}), we obtain for $s\in[0,1]$ \begin{align}\label{qian1}
s^{\frac{1}{2}}{\left\| {\left| {\nabla d\widetilde{u}} \right|(s)} \right\|_{L_x^\infty }} \le MC(M). \end{align} By (\ref{9xian}) we have \begin{align} &\frac{d}{{ds}}\left( {s{\mathcal{E}_3}(\widetilde{u}(s))} \right) \nonumber\\
&\lesssim {\mathcal{E}_3}(\widetilde{u}(s)) - \int_{{\Bbb H^2}} {s{{\left| {{\nabla ^2}{\partial _s}\widetilde{u}} \right|}^2}{\rm{dvol_h}}}+ \int_{\Bbb H^2}s\left( \left| {d\widetilde{u}} \right|\left| {\nabla {\partial _s}\widetilde{u}} \right|{{\left| {{\partial _s}\widetilde{u}} \right|}^2} \right) {\rm{dvol_h}}\nonumber \\
&+ \int_{{\Bbb H^2}} s\left( \left| {{\nabla d}\widetilde{u}} \right|\left| {d\widetilde{u}} \right|\left| {{\partial _s}\widetilde{u}} \right|\left| {\nabla {\partial _s}\widetilde{u}} \right| + {{\left| {{\partial _s}\widetilde{u}} \right|}^2}{{\left| {d\widetilde{u}} \right|}^4} + {{\left| {\nabla {\partial _s}\widetilde{u}} \right|}^2}{{\left| {d\widetilde{u}} \right|}^2} \right){\rm{dvol_h}} \nonumber\\
&+\int_{{\Bbb H^2}}s\big( {{\left| {d\widetilde{u}} \right|}^3}\left| {{\partial _s}\widetilde{u}} \right|\left| {\nabla {\partial _s}\widetilde{u}} \right|+ {{\left| {d\widetilde{u}} \right|}^2}\left| {{\nabla ^2}{\partial _s}\widetilde{u}} \right|\left| {{\partial _s}\widetilde{u}} \right|\big){\rm{dvol_h}}.\label{hu897} \end{align} The terms in the right hand side can be bounded by Sobolev and H\"older as follows \begin{align*}
\int_{{\Bbb H^2}} {s\left| {d\widetilde{u}} \right|} \left| {\nabla {\partial _s}\widetilde{u}} \right|{\left| {{\partial _s}\widetilde{u}} \right|^2}{\rm{dvol_h}}&\le {s^{\frac{1}{2}}}{\left\| {d\widetilde{u}} \right\|_{L_x^\infty }}{\left\| {\nabla {\partial _s}\widetilde{u}} \right\|_{L_x^2}}{s^{\frac{1}{2}}}\left\| {{\partial _s}\widetilde{u}} \right\|_{L_x^4}^2 \\
\int_{{\Bbb H^2}} {s{{\left| {d\widetilde{u}} \right|}^3}\left| {{\partial _s}\widetilde{u}} \right|\left| {\nabla {\partial _s}\widetilde{u}} \right|} {\rm{dvol_h}} &\le s\left\| {d\widetilde{u}} \right\|_{L_x^{12}}^3{\left\| {\nabla {\partial _s}\widetilde{u}} \right\|_{L_x^2}}{\left\| {{\partial _s}\widetilde{u}} \right\|_{L_x^4}} \\
\int_{{\Bbb H^2}} s\left| {{\nabla d}\widetilde{u}} \right|\left| {d\widetilde{u}} \right|\left| {{\partial _s}\widetilde{u}} \right|\left| {\nabla {\partial _s}\widetilde{u}} \right|{\rm{dvol_h}} &\le {{\left\| {s{\nabla d}\widetilde{u}} \right\|}_{L_x^\infty }}{{\left\| {\nabla {\partial _s}\widetilde{u}} \right\|}_{L_x^2}}{{\left\| {d\widetilde{u}} \right\|}_{L_x^4}}{{\left\| {{\partial _s}\widetilde{u}} \right\|}_{L_x^4}} \\
\int_{{\Bbb H^2}} s{{\left| {{\partial _s}\widetilde{u}} \right|}^2}{{\left| {d\widetilde{u}} \right|}^4}{\rm{dvol_h}} &\le \left\| {{\partial _s}\widetilde{u}} \right\|_{L_x^2}^2s\left\| {d\widetilde{u}} \right\|_{L_x^{\infty}}^4 \\
\int_{{\Bbb H^2}} s{{\left| {\nabla {\partial _s}\widetilde{u}} \right|}^2}{{\left| {d\widetilde{u}} \right|}^2}{\rm{dvol_h}} &\le \left\| {\nabla {\partial _s}\widetilde{u}} \right\|_{L_x^2}^2s\left\| {d\widetilde{u}} \right\|_{L_x^\infty }^2. \end{align*} The highest order term can be absorbed by the negative term, indeed we have \begin{align*}
\int_{{\Bbb H^2}} s{{\left| {d\widetilde{u}} \right|}^2}\left| {{\nabla ^2}{\partial _s}\widetilde{u}} \right|\left| {{\partial _s}\widetilde{u}} \right|{\rm{dvol_h}} &\le \frac{s}{{2C}}\int_{\Bbb H^2} {{{\left| {{\nabla ^2}{\partial _s}\widetilde{u}} \right|}^2}} {\rm{dvol_h} + C\int_{{\Bbb H^2}} {s{{\left| {{\partial _s}\widetilde{u}} \right|}^2}{{\left| {d\widetilde{u}} \right|}^4}{\rm{dvol_h}}}} \\
&\le \frac{s}{{2C}}\int_{{\Bbb H^2}} {{{\left| {{\nabla ^2}{\partial _s}\widetilde{u}} \right|}^2}} {\rm{dvol_h}} + C\left\| {{\partial _s}\widetilde{u}} \right\|_{L_x^2}^2s\left\| {d\widetilde{u}} \right\|_{L_x^{\infty}}^4.
\end{align*}
Recall the facts that $\left| {d\widetilde{u}} \right|(s) \le e^{{\Delta s}}\left| {d{u}} \right|$ for $s\in[0,1]$ and $\left| {{\partial _s}\widetilde{u}} \right|(s) \le {e^{{\Delta s}}}\left| {\tau({u})} \right|$; then the terms involved above are bounded, by the smoothing effect, as \begin{align}
s\left\| {d\widetilde{u}} \right\|_{L_x^\infty }^2 + {s^{\frac{1}{4}}}{\left\| {{\partial _s}\widetilde{u}} \right\|_{L_x^4}} \le \left\| {d{u}} \right\|_{L_x^2}^2 + {\left\| {\tau({u})} \right\|_{L_x^2}}. \end{align} Thus integrating (\ref{hu897}) with respect to $s$ in $[0,s]$ with (\ref{qian1}) gives for $s\in[0,1]$ \begin{align}\label{ki6ll1}
s{\mathcal{E}_3}(\widetilde{u}(s)) + \int_0^s {\int_{\Bbb H^2}} {s{{\left| {{\nabla ^2}{\partial _s}\widetilde{u}} \right|}^2}} {\rm{dvol_h}} ds \lesssim \int_0^s {\left\| {\nabla {\partial _s}\widetilde{u}} \right\|_{L_x^2}^2ds'}. \end{align} Therefore by (\ref{f40}), we conclude for $s\in[0,1]$ \begin{align}\label{qian2}
\int_{{\Bbb H^2}} s{{\left| {\nabla {\partial _s}\widetilde{u}} \right|}^2}{\rm{dvo}}{{\rm{l}}_{\rm{h}}} \le MC(M). \end{align} By (\ref{9tian1}), we deduce $$
{\partial _s}\left| {\nabla {\partial _s}\widetilde{u}} \right| - {\Delta}\left| {\nabla {\partial _s}\widetilde{u}} \right|\le \left| {\nabla {\partial _s}\widetilde{u}} \right|{\left| {d\widetilde{u}} \right|^2} + \left| {{\partial _s}\widetilde{u}} \right|{\left| {d\widetilde{u}} \right|^3} + \left| {{\partial _s}\widetilde{u}} \right|\left| {{\nabla d}\widetilde{u}} \right|\left| {d \widetilde{u}} \right|. $$
Then as above considering the equation of ${e^{ - \int_0^s {\| {d\widetilde{u}(\tau )}\|_{L_x^\infty }^2d\tau } }}\left| {\nabla {\partial _s}\widetilde{u}} \right|$, we obtain by maximum principle that \begin{align*}
&{\| { {\nabla {\partial _s}\widetilde{u}}(s)} \|_{L_x^\infty }}\\
&\le {s^{ - \frac{1}{2}}}{\| {{\nabla {\partial _s}\widetilde{u}}(\frac{s}{2})} \|_{L_x^2}} + {\int_{\frac{s}{2}}^s {{{\| {\left| {{\partial _s}\widetilde{u}} \right|{{\left| {d\widetilde{u}} \right|}^3}(\tau )} \|}_{L_x^\infty }} + \|{\left| {{\partial _s}\widetilde{u}} \right|\left| {{\nabla d}\widetilde{u}} \right|\left| {d\widetilde{u}} \right|(\tau )} \|} _{L_x^\infty }}d\tau. \end{align*} Hence (\ref{qian1}) and (\ref{qian2}) give \begin{align*}
{\left\| {\left| {\nabla {\partial _s}\widetilde{u}} \right|(s)} \right\|_{L_x^\infty }} \le& {s^{ - 1}}M + \left( {\mathop {\sup }\limits_{s \in [0,1]} s{{\left\| {{\partial _s}\widetilde{u}} \right\|}_{L_x^\infty }}{{\left\| {d\widetilde{u}} \right\|}_{L_x^\infty }}} \right)\int_{\frac{s}{2}}^s {{\tau ^{ - 1}}\left\| {d\widetilde{u}} \right\|_{L_x^\infty }^2} d\tau\\
&+ \left( {\mathop {\sup }\limits_{s \in [0,1]} s{{\left\| {\nabla d\widetilde{u}} \right\|}_{L_x^\infty }}{{\left\| {d\widetilde{u}} \right\|}_{L_x^\infty }}} \right)\int_{\frac{s}{2}}^s {{\tau ^{ - 1}}{{\left\| {d\widetilde{u}} \right\|}_{L_x^\infty }}} d\tau\\
&\le {s^{ - 1}}M + {s^{ - 1}}M^2\int_{\frac{s}{2}}^s {\left( {\left\| {d\widetilde{u}} \right\|_{L_x^\infty }^2 + {{\left\| {d\widetilde{u}} \right\|}_{L_x^\infty }}} \right)} d\tau. \end{align*} Consequently, we have by Lemma \ref{ktao1}, \begin{align}\label{qian3}
{\left\| {\left| {\nabla \partial_s \widetilde{u}} \right|(s)} \right\|_{L_x^\infty }} \le MC(M)s^{- 1}. \end{align} By Lemma \ref{8zu}, one deduces \begin{align*}
{\partial _s}\left| {\nabla {\partial _t}\widetilde{u}} \right| - {\Delta}\left| {\nabla {\partial _t}\widetilde{u}} \right| &\le K\left| {\nabla {\partial _t}\widetilde{u}} \right|+\left| {\nabla {\partial _t}\widetilde{u}} \right||d\widetilde{u}|^2+|\partial_s\widetilde{u}||d\widetilde{u}|^2\\
&+ {\left| {d\widetilde{u}} \right|^3}\left| {{\partial _t}\widetilde{u}} \right| + \left| {d\widetilde{u}} \right|\left| {{\partial _t}\widetilde{u}} \right|\left| {\nabla d\widetilde{u}} \right|. \end{align*}
Considering the equation of ${e^{ - \int_0^s \big({\left\| {d\widetilde{u}(\tau )} \right\|_{L_x^\infty }^2+K\big)d\tau } }}\left| {\nabla {\partial _t}\widetilde{u}} \right|$, we have by the maximum principle that \begin{align*}
\left\| {\nabla {\partial _t}\widetilde{u}} \right\|_{L^{\infty}_x} &\le {s^{ - \frac{1}{2}}}{\| {\nabla {\partial _t}\widetilde{u}} \|_{L_x^2}} +\int_{\frac{s}{2}}^s {{{(s - \tau )}^{ - \frac{1}{2}}}} \| {{\left| {d\widetilde{u}} \right|}^3}\left| {{\partial _t}\widetilde{u}} \right|(\tau )\|_{{L^2_x}}d\tau\\
&+ \int_{\frac{s}{2}}^s\||d\widetilde{u}|\left| {{\partial _t}\widetilde{u}} \right|\left| {\nabla d\widetilde{u}} \right|(\tau )\|_{L^{\infty}_x}+\||\partial_s\widetilde{u}||d\widetilde{u}|^2\|_{L^{\infty}_x}d\tau \\
&\le {s^{ - \frac{1}{2}}}{\left\| {\nabla \partial_t\widetilde{u}} \right\|_{L_x^2}} + \mathop {\sup }\limits_{s \in [0,1]} \left( {{{\left\| {{\partial _t}\widetilde{u}} \right\|}_{L_x^4}}\left\| {d\widetilde{u}} \right\|_{L_x^{12}}^3} \right)\int_{\frac{s}{2}}^s {{{(s - \tau )}^{ - \frac{1}{2}}}d\tau } \\
&+ \mathop {\sup }\limits_{s \in [0,1]} \left( {{s^{\frac{1}{2}}}{{\left\| {\nabla d\widetilde{u}} \right\|}_{L_x^\infty }}} \right)\int_{\frac{s}{2}}^s {{\tau ^{ - \frac{1}{2}}}{{\left\| {{\partial _t}\widetilde{u}} \right\|}_{L_x^\infty }}{{\left\| {d\widetilde{u}} \right\|}_{L_x^\infty }}d\tau }\\
&+ \mathop {\sup }\limits_{s \in [0,1]} \left( s\left\| d\widetilde{u}\right\|^2_{L_x^{\infty} } s^{\frac{1}{2}}\left\| \partial_s\widetilde{u} \right\|_{L_x^{\infty} } \right)\int_{\frac{s}{2}}^s \tau ^{ - \frac{3}{2}}d\tau. \end{align*} Hence we deduce by Lemma \ref{ktao1} \begin{align}\label{qian4}
{\left\| {\left| {\nabla \partial_t \widetilde{u}} \right|(s)} \right\|_{L_x^\infty }} \le MC(M){s^{ - \frac{1}{2}}}. \end{align}
The bounds for $|\nabla_t\partial_s\widetilde{u}|$ follow by the same arguments as for (\ref{qian1}), with the help of Lemma \ref{zhangqiling} and (\ref{iconm}). \end{proof}
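As a sample consequence of Proposition \ref{lize1}, recorded only for illustration, interpolation between the $L^2_x$ and $L^{\infty}_x$ bounds gives, for $s\in(0,1)$,
\begin{align*}
\|\nabla d\widetilde{u}(s)\|_{L^4_x}\le \|\nabla d\widetilde{u}(s)\|_{L^2_x}^{\frac{1}{2}}\|\nabla d\widetilde{u}(s)\|_{L^{\infty}_x}^{\frac{1}{2}}\lesssim s^{-\frac{1}{4}}MC(M),
\end{align*}
where we used $\|\nabla d\widetilde{u}\|_{L^2_x}\le M$ from (\ref{V5}).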
We summarize the long time and short time behaviors as a proposition. \begin{proposition}\label{sl} Let $u:[0,T]\times\Bbb H^2\to \Bbb H^2$ be a solution to (\ref{wmap1}) satisfying \begin{align*}
\|(\nabla du,\nabla\partial_t u)\|_{L^2\times L^2}+\|(du,\partial_t u)\|_{L^2\times L^2}\le M. \end{align*} If $\widetilde{u}:\Bbb R^+\times[0,T]\times\Bbb H^2\to \Bbb H^2$ is the solution to (\ref{8.29.2}) with initial data $u(t,x)$, then for any $\eta>0$, it holds uniformly for $t\in[0,T]$ that \begin{align*}
&\left\| {d\widetilde{u}} \right\|_{L_s^\infty[1,\infty) L_x^{\infty}}+\left\| {\nabla d\widetilde{u}} \right\|_{L_s^\infty[1,\infty) L_x^{\infty}}+\left\| {\nabla d\widetilde{u}} \right\|_{L_s^\infty L_x^2} + {\left\| {{{\nabla\partial _t}\widetilde{u}}} \right\|_{L_s^\infty L_x^2 }}\\
&+{\| {{s^{\frac{1}{2}}}\left| {\nabla d\widetilde{u}} \right|}\|_{L_s^\infty[0,1] L_x^\infty }}
+ {\| {e^{\delta s}\left| {{\partial _s}\widetilde{u}} \right|}\|_{L_s^\infty L_x^2 }}+
{\left\| {{s}\left| {{\nabla_t}{\partial _s}\widetilde{u}} \right|} \right\|_{L^{\infty}_s[0,1]L_x^{\infty}}}\\
&+ {\| {{s^{\frac{1}{2}}}\left| {\nabla {\partial _t}\widetilde{u}} \right|} \|_{L_s^\infty[0,1] L_x^\infty }}+ {\left\| {s\left| {\nabla {\partial _s}\widetilde{u}} \right|} \right\|_{L_s^\infty[0,1] L_x^\infty }}
+ {\| {{s^{\frac{1}{2}}}e^{\delta s}\left| {{\partial _s}\widetilde{u}} \right|}\|_{L_s^\infty L_x^\infty }}\\
&+{\| {{s^{\frac{1}{2}}}\left| {\nabla {\partial _s}\widetilde{u}} \right|}\|_{L_s^\infty[0,1] L_x^2}}
+{\| {{s^{\frac{1}{2}}}\left| {\nabla_t {\partial _s}\widetilde{u}} \right|}\|_{L_s^\infty[0,1] L_x^2}}+\left\|s^{\eta} {d\widetilde{u}} \right\|_{L_s^\infty(0,1) L_x^{\infty}}\\
&+{\left\| {{s^{\frac{1}{2}}}{e^{\delta s}}\left| {\nabla {\partial _t}\widetilde{u}} \right|} \right\|_{L_s^\infty L_x^\infty }} + {\left\| {s{e^{\delta s}}\left| {\nabla {\partial _s}\widetilde{u}} \right|} \right\|_{L_s^\infty L_x^\infty }}+ {\left\| se^{\delta s}{\left| {{\nabla_t\partial _s}\widetilde{u}} \right|} \right\|_{L_s^\infty L_x^\infty }}\\
&+{\left\| {{e^{\delta s}}\left| {\nabla {\partial _t}\widetilde{u}} \right|} \right\|_{L_s^\infty L_x^2 }} + {\left\| {s^{\frac{1}{2}}{e^{\delta s}}\left| {\nabla {\partial _s}\widetilde{u}} \right|}\right\|_{L_s^\infty L_x^2}}+ {\left\|s^{\frac{1}{2}}e^{\delta s} { {{\nabla_t\partial _s}\widetilde{u}}} \right\|_{L_s^\infty L_x^2 }}\le MC(M). \end{align*} \end{proposition}
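We note, only to illustrate how the weights in $s$ combine, that the listed bounds give, uniformly in $t\in[0,T]$,
\begin{align*}
\|\nabla\partial_t\widetilde{u}(s)\|_{L^{\infty}_x}\lesssim MC(M)\,s^{-\frac{1}{2}}e^{-\delta s}\quad\mbox{for all }s>0,\qquad\mbox{and hence}\quad \int_0^{\infty}\|\nabla\partial_t\widetilde{u}(s)\|_{L^{\infty}_x}\,ds\lesssim MC(M).
\end{align*}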
\begin{lemma}\label{fotuo1}
If $(u,\partial_tu)$ solves (\ref{wmap1}) and $\|u(t,x)\|_{\mathcal{X}_T}\le M$, then we have \begin{align}
\left\| {{\nabla}^2d\widetilde{u}} \right\|_{L^2_x}&\le \max(s^{-\frac{1}{2}},1)MC(M)\label{fotuo2}\\
\left\| {{\nabla}^2d\widetilde{u}} \right\|_{L^{\infty}_x}&\le \max(s^{-1},1)MC(M)\label{fotuo3}\\
se^{\delta' s}\|\nabla ^2\partial _s\widetilde{u}\|_{L^2_x}&\lesssim MC(M)\label{fotuo4}\\
s^{\frac{3}{2}}e^{\delta' s}\|\nabla ^2\partial _s\widetilde{u}\|_{L^\infty_x}&\lesssim MC(M)\label{fotuo5} \end{align} \end{lemma} \begin{proof}
The Bochner formula for $\left| {{\nabla}^2d\widetilde{u}} \right|^2$ is as follows \begin{align}
&{\partial _s}{| {{\nabla}^2d\widetilde{u}}|^2} - \Delta {| {{\nabla}^2d\widetilde{u}}|^2} + 2{| { {\nabla^3 }d\widetilde{u}}|^2} \lesssim |\nabla^2d\widetilde{u}|^2(|d\widetilde{u}|^2+1)+|\nabla d\widetilde{u}|^2|\nabla^2d\widetilde{u}||d\widetilde{u}|\nonumber\\
&+|d\widetilde{u}|^3|\nabla d\widetilde{u}||\nabla^2d\widetilde{u}|+|\nabla d\widetilde{u}||\nabla^2 d\widetilde{u}|^2.\label{piiguuuu7} \end{align} Integration by parts and $\tau(\widetilde{u})=\partial_s\widetilde{u}$ give \begin{align}\label{hupke3}
\left\| {{\nabla}^2d\widetilde{u}} \right\|^2_{L^2_x}\lesssim \left\| \nabla\partial_s\widetilde{u} \right\|^2_{L^2_x}+\left\| \nabla d\widetilde{u} \right\|^3_{L^2_x}
+\left\| \nabla d\widetilde{u} \right\|^2_{L^2_x}\|du\|^2_{L^{\infty}_x} \end{align}
Then Proposition \ref{sl} yields (\ref{fotuo2}). (\ref{piiguuuu7}) shows $| {{\nabla}^2d\widetilde{u}}|$ satisfies \begin{align}
&{\partial _s}{| {{\nabla}^2d\widetilde{u}}|} - \Delta {| {{\nabla}^2d\widetilde{u}}|}\lesssim |\nabla^2d\widetilde{u}|(|d\widetilde{u}|^2+1)+|\nabla d\widetilde{u}|^2|d\widetilde{u}|+|d\widetilde{u}|^3|\nabla d\widetilde{u}|+|\nabla d\widetilde{u}||\nabla^2 d\widetilde{u}|. \end{align}
Let $f={| {{\nabla}^2d\widetilde{u}}|}e^{-\int^{s}_0(\|d\widetilde{u}\|^2_{L^{\infty}}+\|\nabla d\widetilde{u}\|_{L^{\infty}}+1)d\kappa}$. Then for $s\in[0,1]$, by Duhamel principle and smoothing effect, Lemma \ref{ktao1}, \begin{align}
&\|f(s,x)\|_{L^{\infty}_x}\lesssim s^{-\frac{1}{2}}\|f(\frac{s}{2},x)\|_{L^{2}_x}+\int^s_{\frac{s}{2}}(s-\tau)^{-\frac{1}{2}}\||\nabla d\widetilde{u}|^2|d\widetilde{u}|+|d\widetilde{u}|^3|\nabla d\widetilde{u}|\|_{L^2_x}d\tau. \end{align}
Then (\ref{fotuo3}) when $s\in[0,1]$ follows by Lemma \ref{ktao1} and Proposition \ref{sl}. Estimate (\ref{fotuo2}) gives $\| {{\nabla}^2d\widetilde{u}}\|_{L^2_x}\le MC(M)$ for all $s\ge1$. Meanwhile Proposition \ref{sl} shows $\|\nabla d\widetilde{u}\|_{L^{\infty}}+\| d\widetilde{u}\|_{L^{\infty}}\le MC(M)$ when $s\ge1$. Then, setting $Z\triangleq e^{-C_1(M)s}(e^{-C_1(M) s}|\nabla^2 d\widetilde{u}|+C_1(M))$, one has $(\partial_s-\Delta)Z\le0$. Applying Remark \ref{ki78} to $Z$ gives \begin{align}
\|\nabla^2 d\widetilde{u}(s,x)\|^2_{L^{\infty}_x}\lesssim \int^{s}_{s-1}\|\nabla^2 d\widetilde{u}(\tau,x)\|^2_{L^{2}_x}d\tau+MC(M). \end{align}
Then (\ref{fotuo3}) when $s\ge1$ follows by (\ref{hupke3}), (\ref{ingjh}) and Proposition \ref{sl}. The Bochner formula for $\left| {{\nabla}^2\partial_s{\widetilde{u}}} \right|^2$ is as follows \begin{align}
&{\partial _s}{\left| {{\nabla}^2{\partial _s}\widetilde{u}} \right|^2} - \Delta {\left| {{\nabla}^2{\partial _s}\widetilde{u}} \right|^2} + 2{\left| { {\nabla^3 }{\partial _s}\widetilde{u}} \right|^2} \lesssim |\nabla^2\partial_s\widetilde{u}|^2(|d\widetilde{u}|^2+1)+|\partial_s\widetilde{u}|^2|\nabla^2\partial_s\widetilde{u}||\nabla d\widetilde{u}|\nonumber\\
&+|\partial_s\widetilde{u}||d\widetilde{u}||\nabla \partial_s\widetilde{u}||\nabla^2\partial_s\widetilde{u}|+|\nabla^2\partial_s\widetilde{u}|^2|d\widetilde{u}||\partial_s\widetilde{u}|+
|\nabla^2d\widetilde{u}||d\widetilde{u}||\partial_s\widetilde{u}||\nabla^2\partial_s\widetilde{u}|\nonumber\\
&+|\nabla\partial_s\widetilde{u}||\nabla^2\partial_s\widetilde{u}||d\widetilde{u}||\nabla d\widetilde{u}|
+|\nabla\partial_s\widetilde{u}||\nabla^2\partial_s\widetilde{u}||d\widetilde{u}||\nabla d\widetilde{u}|+|\nabla d\widetilde{u}||\nabla^2 \partial_s\widetilde{u}|^2.\label{ftuo4} \end{align} Then one has \begin{align*}
&\frac{d}{{ds}}\int_{{\Bbb H^2}} {{s^2}{{\left| {{\nabla ^2}{\partial _s}\widetilde{u}} \right|}^2}{\rm{dvol_h}}} \\
&\le \int_{{\Bbb H^2}} 2{s{{\left| {{\nabla ^2}{\partial _s}\widetilde{u}} \right|}^2}} - 2{s^2}{\left| {{\nabla ^3}{\partial _s}\widetilde{u}} \right|^2} + {s^2}{\left| {{\nabla ^2}{\partial _s}\widetilde{u}} \right|^2}|d\widetilde{u}{|^2}{\rm{dvol_h}} \\
&+ \int_{{\Bbb H^2}} {{s^2}} |{\nabla ^2}d\widetilde{u}||d\widetilde{u}||{\partial _s}\widetilde{u}||{\nabla ^2}{\partial _s}\widetilde{u}| + {s^2}|\nabla {\partial _s}\widetilde{u}||{\nabla ^2}{\partial _s}\widetilde{u}||d\widetilde{u}||\nabla d\widetilde{u}|{\rm{dvol_h}} \\
&+ \int_{{\Bbb H^2}} {{s^2}|{\partial _s}\widetilde{u}||d\widetilde{u}||\nabla {\partial _s}\widetilde{u}||{\nabla ^2}{\partial _s}\widetilde{u}|} + {s^2}|{\nabla ^2}{\partial _s}u{|^2}|d\widetilde{u}||{\partial _s}\widetilde{u}|{\rm{dvol_h}} \\
&+ \int_{{\Bbb H^2}} {{s^2}|{\partial _s}\widetilde{u}{|^2}|{\nabla ^2}{\partial _s}\widetilde{u}||\nabla d\widetilde{u}| + } {s^2}|{\nabla ^2}{\partial _s}\widetilde{u}|^2|\nabla d\widetilde{u}|{\rm{dvol_h}} \\
&+ \int_{{\Bbb H^2}} {{s^2}} |\nabla {\partial _s}\widetilde{u}||{\nabla ^2}{\partial _s}\widetilde{u}||d\widetilde{u}||\nabla d\widetilde{u}|{\rm{dvol_h}}
+\int_{{\Bbb H^2}} {{s^2}{{\left| {{\nabla ^2}{\partial _s}\widetilde{u}} \right|}^2}{\rm{dvol_h}}}. \end{align*} Integrating the above formula in $s\in[s_1,\tau]$ with any $0<s_1<\tau<2$, by Sobolev embedding, Gagliardo-Nirenberg and Young inequality, we obtain \begin{align*}
&\int_{{\Bbb H^2}} {{\tau ^2}{{\left| {{\nabla ^2}{\partial _s}\widetilde{u}} \right|}^2}(\tau ,t){\rm{dvol_h}}} - \int_{{\Bbb H^2}} {s_1^2{{\left| {{\nabla ^2}{\partial _s}\widetilde{u}} \right|}^2}({s_1},t){\rm{dvol_h}}} \\
&\lesssim \int_{{s_1}}^\tau {\int_{{\Bbb H^2}} {s{{\left| {{\nabla ^2}{\partial _s}\widetilde{u}} \right|}^2}} } - {s^2}{\left| {{\nabla ^3}{\partial _s}\widetilde{u}} \right|^2} + \left\| {d\widetilde{u}} \right\|_{L_s^\infty L_x^4}^2{s^2}{\left| {{\nabla ^2}{\partial _s}\widetilde{u}} \right|^2}{\rm{dvol_h}}ds \\
&+ \int_{{s_1}}^\tau {\int_{{\Bbb H^2}} {{s^3}|d\widetilde{u}|^2|{\partial _s}\widetilde{u}|^2} } {\left| {{\nabla ^2}d\widetilde{u}} \right|^2} + {s^3}|\nabla {\partial _s}\widetilde{u}|^2|d\widetilde{u}|^2|\nabla d\widetilde{u}|^2{\rm{dvol_h}}ds \\
&+ \int_{{s_1}}^\tau {\int_{{\Bbb H^2}} {{s^3}|{\partial _s}\widetilde{u}|^2|d\widetilde{u}|^2|\nabla {\partial _s}\widetilde{u}|^2} } + {s^2}|d\widetilde{u}|^2|{\partial _s}\widetilde{u}|^2{\rm{dvol_h}}ds \\
&+ \int_{{s_1}}^\tau {\int_{{\Bbb H^2}} {{s^3}|\nabla d\widetilde{u}|^2|{\partial _s}\widetilde{u}|^2} } + {s^2}|\nabla d\widetilde{u}|^2{\rm{dvol_h}}ds \\
&+ \int_{{s_1}}^\tau {\int_{{\Bbb H^2}} {{s^3}} |\nabla {\partial _s}\widetilde{u}|^2|d\widetilde{u}|^2|\nabla d\widetilde{u}|^2{\rm{dvol_h}}} ds.
\end{align*} Thus letting $s_1\to0$, for $\tau\in(0,2)$, we deduce from (\ref{ki6ll1}), (\ref{fotuo3}) and Proposition \ref{sl} that \begin{align*}
\|s\nabla ^2\partial _s\widetilde{u}\|_{L^2_x}\lesssim MC(M), \end{align*}
from which (\ref{fotuo4}) when $s\in(0,1)$ follows. Integrating (\ref{ftuo4}) with respect to $x$ in $\Bbb H^2$, one obtains, by (\ref{{uv111}}) and Proposition \ref{sl} (in particular the $L^{\infty}_x$ bounds for $|d\widetilde{u}|+|\nabla d\widetilde{u}|$), that for $s\ge1$ and any $0<c\ll 1$ \begin{align}
&\frac{d}{{ds}}\left\| {{\nabla ^2}{\partial _s}\widetilde{u}} \right\|_{L_x^2}^2+c\left\| {{\nabla ^2}{\partial _s}\widetilde{u}} \right\|_{L_x^2}^2\lesssim {\left\| {{\partial _s}\widetilde{u}} \right\|_{L_x^\infty }}{\left\| {\nabla {\partial _s}\widetilde{u}} \right\|_{L_x^2}}{\left\| {{\nabla ^2}{\partial _s}\widetilde{u}} \right\|_{L_x^2}}+\frac{1}{c}\left\| {{\nabla ^2}{\partial _s}\widetilde{u}} \right\|_{L_x^2}^2\nonumber\\
&+ {\left\| {{\partial _s}\widetilde{u}} \right\|_{L_x^2}}{\left\| {{\nabla ^2}{\partial _s}\widetilde{u}} \right\|_{L_x^2}} + {\left\| {\nabla {\partial _s}\widetilde{u}} \right\|_{L_x^2}}{\left\| {{\nabla ^2}{\partial _s}\widetilde{u}} \right\|_{L_x^2}}+{\left\| {{\nabla ^2}{\partial _s}\widetilde{u}} \right\|_{L_x^2}}\left\| {{\partial _s}\widetilde{u}} \right\|_{L_x^4}^2.\label{kioplmmnn} \end{align}
Meanwhile integrating (\ref{hu897}) with respect to $s$ in $(s',\infty)$, we obtain from the exponential decay of $|\partial_s\widetilde{u}|+|\nabla\partial_s\widetilde{u}|$ in Proposition \ref{sl} that for $s'\ge1$ \begin{align}\label{oi98nhbgfg}
\int^{\infty}_{s'}\|\nabla ^2\partial _s\widetilde{u}\|_{L^2_x}d\tau\lesssim e^{-\delta s'}MC(M). \end{align} Hence, choosing $0<c<\delta$, the Gronwall inequality gives for $s\ge1$ \begin{align}
\left\| {{\nabla ^2}{\partial _s}\widetilde{u}} (s)\right\|_{L_x^2}^2&\le e^{-cs}MC(M)+e^{-cs}\int^{s}_{1}e^{c\tau}\|\nabla ^2\partial _s\widetilde{u}\|^2_{L^2_x}d\tau +e^{-cs}\int^{s}_{1}e^{c\tau}e^{-\delta \tau}d\tau\nonumber\\ &\lesssim MC(M).\label{po67gvg} \end{align} Applying Gronwall inequality to (\ref{kioplmmnn}) again in $(\frac{s}{2},s)$, we deduce from (\ref{po67gvg}) that \begin{align}
\left\| {{\nabla ^2}{\partial _s}\widetilde{u}} (s)\right\|_{L_x^2}^2\le e^{-\frac{c}{2}s}MC(M)+e^{-cs}\int^{s}_{\frac{s}{2}}e^{c\tau}\|\nabla ^2\partial _s\widetilde{u}\|^2_{L^2_x}d\tau +e^{-cs}\int^{s}_{\frac{s}{2}}e^{c\tau}e^{-\delta \tau}d\tau. \end{align} Thus (\ref{fotuo4}) follows by (\ref{oi98nhbgfg}). Finally (\ref{fotuo5}) follows by (\ref{fotuo4}) and applying Remark \ref{ki78} to (\ref{ftuo4}) as before. \end{proof}
\subsection{The existence of caloric gauge} As a preparation for the existence of the caloric gauge, we prove that the heat flows initiated from $u(t,\cdot)$ for different values of $t$ converge to the same harmonic map as the flow initiated from $u_0$. \begin{lemma}\label{lhu880} If $(u,\partial_tu)$ is a solution to (\ref{wmap1}) in $\mathcal{X}_T$, then there exists a harmonic map $\widetilde{Q}$ such that $$ \mathop {\lim }\limits_{s \to \infty } \mathop {\sup }\limits_{(x,t) \in {\mathbb{H}^2} \times [0,T]} dist_{\Bbb H^2}(\widetilde{u}(s,x,t),\widetilde{Q}(x))=0. $$ \end{lemma} \begin{proof} The global existence of $\widetilde{u}$ is due to Lemma \ref{8.44}, the embeddings ${\bf{H}^2}\hookrightarrow L^{\infty}$ and ${\bf{H}^1}\hookrightarrow L^{p}$ for $p\in[2,\infty)$, and the diamagnetic inequality. Then (\ref{8.3}), the maximum principle and (\ref{huhu899}) show \begin{align}\label{10.112}
\left\| {{\partial _s}\widetilde{u}(s,t,x)} \right\|^2_{L^{\infty}_x} \le {s^{ - 1}}{e^{ - \frac{1}{4}s}}\int_{{\mathbb{H}^2}} {{{\left| {{\partial _s}\widetilde{u}(0,t,x)} \right|}^2}{\rm{dvol_h}}}. \end{align} Thus (\ref{8.29.2}) yields $$
\mathop {\sup }\limits_{(x,t) \in {\mathbb{H}^2}\times[0,T]} \left| {{\partial _s}\widetilde{u}(s,t,x)} \right| \le {s^{ -\frac{1}{2}}}{e^{ - \frac{1}{8}s}}\int_{{\mathbb{H}^2}} {{{\left| {{\partial _t}u(t,x)} \right|}^2}{\rm{dvol_h}}}\le C{s^{ - 1}}{e^{ - \frac{1}{8}s}}. $$ Therefore for any $1<s_0<s_1<\infty$ it holds $${d_{{\mathbb{H}^2}}}(\widetilde{u}({s_0},t,x),\widetilde{u}({s_1},t,x)) \lesssim \int_{{s_0}}^{{s_1}} {{e^{ - \frac{1}{8}s}}ds},$$ which implies that $\widetilde{u}(s,t,x)$ converges to some map $\widetilde{Q}(t,x)$ uniformly on $(t,x)\in[0,T]\times \mathbb{H}^2$. By [Theorem 5.2,\cite{LT}], for any fixed $t$, $\widetilde{Q}(t,x)$ is a harmonic map from $\Bbb H^2$ to $\Bbb H^2$. It suffices to verify that $\widetilde{Q}(t,x)$ is indeed independent of $t$. By (\ref{10.127}), the maximum principle and (\ref{huhu899}), \begin{align}\label{sdf}
\mathop {\sup }\limits_{x \in {\mathbb{H}^2}} {\left| {{\partial _t}\widetilde{u}(s,t,x)} \right|^2} \le {s^{ - 1}}{e^{ - \frac{1}{4}s}}\int_{{\Bbb H^2}} {{{\left| {{\partial _t}\widetilde{u}(0,t,x)} \right|}^2}{\rm{dvol_h}}}. \end{align} As a consequence, for $0\le t_1<t_2\le T$ one has
$${d_{{\Bbb H^2}}}(\widetilde{u}(s,{t_1},x),\widetilde{u}(s,{t_2},x)) \le \int_{{t_1}}^{{t_2}} {\left| {{\partial _t}\widetilde{u}(s,t,x)} \right|} dt \le C{s^{ - \frac{1}{2}}}{e^{ - s/8}}({t_2} - {t_1}). $$ Letting $s\to\infty$, we get ${d_{{\mathbb{H}^2}}}(\widetilde{Q}({t_1},x),\widetilde{Q}({t_2},x)) = 0$, thus finishing the proof. \end{proof}
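We remark that the above proof in fact yields a rate of convergence: letting $s_1\to\infty$ in the estimate for ${d_{{\mathbb{H}^2}}}(\widetilde{u}({s_0},t,x),\widetilde{u}({s_1},t,x))$ gives, for $s_0>1$,
\begin{align*}
\mathop {\sup }\limits_{(x,t) \in {\mathbb{H}^2} \times [0,T]} d_{\Bbb H^2}\big(\widetilde{u}(s_0,t,x),\widetilde{Q}(x)\big)\lesssim \int_{s_0}^{\infty}e^{-\frac{1}{8}s}ds\lesssim e^{-\frac{1}{8}s_0}.
\end{align*}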
\begin{lemma}\label{z8vcxzvb} Let $Q$ be an admissible harmonic map as in Definition 1.1, and let $\mu_1,\mu_2$ be sufficiently small. If $(u,\partial_tu)$ is a solution to (\ref{wmap1}) in $\mathcal{X}_T$, then $\widetilde{u}(s,t,x)$ converges uniformly to $Q$ as $s\to\infty$. \end{lemma} \begin{proof} By Lemma \ref{lhu880}, it suffices to prove $Q=\widetilde{Q}$. In the coordinates (\ref{vg}), the harmonic map equation can be written as \begin{align}
\Delta {\widetilde{Q}^l} + {h^{ij}}\overline \Gamma _{pq}^l\frac{{\partial {\widetilde{Q}^p}}}{{\partial {x_i}}}\frac{{\partial {\widetilde{Q}^q}}}{{\partial {x_j}}} &= 0\label{cxvgn} \\
\Delta {Q^l} + {h^{ij}}\overline \Gamma _{pq}^l\frac{{\partial {Q^p}}}{{\partial {x_i}}}\frac{{\partial {Q^q}}}{{\partial {x_j}}} &= 0. \label{cxvgn2} \end{align} Denote by $U(s,x)$ the heat flow initiated from $u_0$. Then by (\ref{{uv111}}), \begin{align*}
&\|U^1(s,x)-Q^1(x)\|_{L^2}+\|U^2(s,x)-Q^2(x)\|_{L^2}\\
&\lesssim \|\nabla(U^1-Q^1)\|_{L^2} +\|\nabla(U^2-Q^2)\|_{L^2}. \end{align*} [Lemma 2.3,\cite{LZ}] shows that for $k=1,2,l=1,2,$ \begin{align*}
\|\nabla^l U^k\|_{L^2}\lesssim C(\|U\|_{\mathfrak{H}^2},R_0,\|Q\|_{\mathfrak{H}^2})\|\nabla^{l-1}dU\|_{L^2}. \end{align*}
By energy arguments, one obtains $\|\nabla dU\|_{L^2}\le C(\|\nabla du_0\|_{L^2},\|du_0\|_{L^2})$ and the energy decreases along the heat flow, see (\ref{V5}). Thus we have by Sobolev embedding and Corollary \ref{new2} that \begin{align}
&\|U^1(s,x)-Q^1(x)\|_{L^2}+\|U^2(s,x)-Q^2(x)\|_{L^2}+ \|dU\|_{L^2}\nonumber\\ &\le C(R_0)\mu_2+C(R_0)\mu_1\label{hbvcjin}\\
&\|U^1(s,x)\|_{L^{\infty}}+\|U^2(s,x)\|_{L^{\infty}}\le C(R_0).\label{pokeryu} \end{align} Hence letting $s\to\infty$, we have for some constant $C(R_0)$ \begin{align}
\|\widetilde{Q}^1\|_{L^{\infty}}+\|\widetilde{Q}^2\|_{L^{\infty}}\le C(R_0),\mbox{ }\|\nabla\widetilde{Q}^1\|_{L^{2}}+\|\nabla\widetilde{Q}^2\|_{L^{2}}\le \mu_1C(R_0).\label{ktu8n2} \end{align} Multiplying the difference between (\ref{cxvgn}) and (\ref{cxvgn2}) by $-{Q^l} + {\widetilde{Q}^l}$, we have by integration by parts that \begin{align*}
&{\left\| {\nabla \left( {{Q^l} - {\widetilde{Q}^l}} \right)} \right\|_{{L^2}}} \le \left\langle {{h^{ij}}\left( {\overline \Gamma _{pq}^l(Q) - \overline \Gamma _{pq}^l(\widetilde Q)} \right)\frac{{\partial {Q^p}}}{{\partial {x_i}}}\frac{{\partial {Q^q}}}{{\partial {x_j}}}, - {Q^l} + {{\widetilde Q}^l}} \right\rangle \\
&+ \left\langle {h^{ij}}\overline \Gamma _{pq}^l(\widetilde Q)\left( \frac{\partial {Q^p}}{\partial {x_i}} - \frac{\partial {\widetilde Q}^p}{\partial {x_i}} \right)\frac{{\partial {Q^q}}}{{\partial {x_j}}}, - {Q^l} + {\widetilde{Q}^l}\right\rangle \\
&+ \left\langle {{h^{ij}}\overline \Gamma _{pq}^l(\widetilde{Q})\frac{{\partial {\widetilde{Q}^p}}}{{\partial {x_i}}}\left( {\frac{{\partial {Q^q}}}{{\partial {x_j}}} - \frac{{\partial {\widetilde{Q}^q}}}{{\partial {x_j}}}} \right), - {Q^l} + {\widetilde{Q}^l}} \right\rangle.
\end{align*} Thus using the explicit formula for ${\overline \Gamma _{pq}^l}$, by (\ref{hbvcjin}), (\ref{pokeryu}), (\ref{ktu8n2}) we get \begin{align*}
&\left\| {\nabla \left( {{Q^l} - {\widetilde{Q}^l}} \right)} \right\|_{{L^2}}^2\\
&\lesssim \left( {\left\| {{Q^l} - {\widetilde{Q}^l}} \right\|_{{L^2}}^2 + \left\| {\nabla \left( {{Q^l} - {\widetilde{Q}^l}} \right)} \right\|_{{L^2}}^2} \right)\left( {\sum\limits_{k = 1}^2 {\left\| {\nabla {\widetilde{Q}^k}} \right\|_{{L^2}}^2 + \left\| {\nabla {Q^k}} \right\|_{{L^2}}^2} } \right). \end{align*} Therefore, we conclude for some constant $C(R_0)$ which is independent of $\mu_1,\mu_2$ provided $0\le \mu_1,\mu_2\le1$ \begin{align*}
&\sum\limits_{l = 1}^2 {\left\| {\nabla \left( {{Q^l} - {\widetilde{Q}^l}} \right)} \right\|_{{L^2}}^2}\\
&\le C(R_0)\left( {\left\| {dQ} \right\|_{{L^2}}^2 + \left\| {d\widetilde{Q}} \right\|_{{L^2}}^2} \right)\left( {\sum\limits_{l = 1}^2 {\left\| {\nabla \left( {{Q^l} - {\widetilde{Q}^l}} \right)} \right\|_{{L^2}}^2 + \left\| {{Q^l} - {\widetilde{Q}^l}} \right\|_{{L^2}}^2} } \right). \end{align*} For $\mu_1$, $\mu_2$ sufficiently small, (\ref{{uv111}}) gives \begin{align*}
&\sum\limits_{l = 1}^2 {\left\| {\nabla \left( {{Q^l} - {\widetilde{Q}^l}} \right)} \right\|_{{L^2}}^2} + \left\| {{Q^l} - {\widetilde{Q}^l}} \right\|_{{L^2}}^2\\
&\le \left( {{\mu _1} + {\mu _2}} \right)\left( {\sum\limits_{l = 1}^2 {\left\| {\nabla \left( {{Q^l} - {\widetilde{Q}^l}} \right)} \right\|_{{L^2}}^2 + \left\| {{Q^l} - {\widetilde{Q}^l}} \right\|_{{L^2}}^2} } \right). \end{align*} Hence $\widetilde{Q}=Q$. \end{proof}
Now we are ready to prove the existence of the caloric gauge in Definition \ref{pp}. \begin{proposition}\label{3.3} Given any solution $(u,\partial_tu)$ of (\ref{wmap1}) in $\mathcal{X}_T$ with $(u_0,u_1)\in \bf H_Q^3\times\bf H_Q^2$. For any fixed frame $\Xi\triangleq\{\Xi_1(Q(x)),\Xi_2(Q(x))\}$, there exists a unique corresponding caloric gauge defined in Definition \ref{pp}. \end{proposition} \begin{proof} We first show the existence part. Choose an arbitrary orthonormal frame $E_0(t,x)\triangleq\{\texttt{e}_i(t,x)\}^2_{i=1}$ such that $E_0(t,x)$ spans the tangent space $T_{u(t,x)}{\mathbb{H}^2}$ for each $(t,x)\in [0,T]\times \mathbb{H}^2$. The desired frame does exist, in fact we have a global orthonormal frame for $\mathbb{H}^2$ defined by (\ref{frame}). Then evolving (\ref{8.29.2}) with initial data $u(t,x)$, we have from Lemma \ref{z8vcxzvb} that $\widetilde{u}(s,t,x)$ converges to $Q$ uniformly for $(t,x)\in[0,T]\times \mathbb{H}^2$ as $s\to\infty$. Meanwhile, we evolve $E_0$ in $s$ according to \begin{align}\label{11.2} \left\{ \begin{array}{l} {\nabla _s}{\Omega _i}(s,t,x) = 0 \\ {\Omega _i}(s,t,x)\upharpoonright_{s=0} = {\texttt{e}_i}(t,x) \\ \end{array} \right. \end{align} Denote the evolved frame as $E_s\triangleq \{\Omega_i(s,t,x)\}^2_{i=1}$. We claim that there exists some orthonormal frame $E_{\infty}\triangleq\{\texttt{e}_i(\infty,t,x)\}^2_{i=1}$ which spans $T_{Q(x)}\Bbb H^2$ for each $(t,x)\in [0,T]\times\Bbb H^2$ such that \begin{align}\label{pl} \mathop {\lim }\limits_{s \to \infty }{\Omega_i(s,t,x)} = \texttt{e}_i(\infty,t,x). \end{align} Indeed, by the definition of the convergence of frames given in (\ref{convergence}) and the fact $\widetilde{u}(s,t,x)$ converges to $Q(x)$, it suffices to show for some scalar function $c_i:[0,T]\times \mathbb{H}^2\to \Bbb R$ \begin{align}\label{aw} \mathop {\lim }\limits_{s \to \infty } \left\langle {{\Omega_i}(s,t,x),{\Theta _i}(\widetilde{u}(s,t,x))} \right\rangle = c_i(t,x). \end{align} By direct calculations, $$
\left| \nabla _s \Theta_i(\widetilde{u}(s,t,x))\right| \lesssim \left| {{\partial _s}\widetilde{u}} \right|. $$ then (\ref{10.112}) and $\nabla_s\Omega=0$ imply that for $s>1$
$$\big|{\partial _s}\left\langle {{\Omega_i}(s,t,x),{\Theta _i}(\widetilde{u}(s,t,x))} \right\rangle \big|\lesssim Me^{-\delta s}. $$ Hence $(\ref{aw})$ holds for some $c_i(t,x)$, thus verifying (\ref{pl}). It remains to adjust the initial frame $E_0$ to make the limit frame $E_{\infty}$ coincide with the given frame $\Xi$. This can be achieved by the gauge transform invariance illustrated in Section 2.1. Indeed, since for any $U:[0,T]\times\mathbb{H}^2\to SO(2)$, and the solution $\widetilde{u}(s,t,x)$ to (\ref{8.29.2}), one has $\nabla_s U(t,x)\Omega(s,t,x)=U(t,x)\nabla_s\Omega(s,t,x)$, then the following gauge symmetry holds \begin{align*}
{E_0}\triangleq\left\{ {\texttt{e}_i(t,x)} \right\}^2_{i=1} &\mapsto {{E'}_0}\triangleq\left\{ {U(t,x){\texttt{e}_i}(t,x)} \right\}^2_{i=1} \\
{E_s}\triangleq\left\{ {{\Omega_i}(s,t,x)} \right\}^2_{i=1} &\mapsto {{E'}_s}\triangleq\left\{ {U(t,x){\Omega_i}(s,t,x)} \right\}^2_{i=1}. \end{align*} Therefore choosing $U(t,x)$ such that $U(t,x)E_{\infty}=\Xi$, where $E_{\infty}$ is the limit frame obtained by (\ref{pl}), suffices for our purpose. The uniqueness of the gauge follows from the identity $$ \frac{d}{{ds}}\left\langle {{\Phi _1} - {\Phi_2},{\Phi_1} - {\Phi_2}} \right\rangle = 0, $$ where $(\Phi _1)$ and $(\Phi _2)$ are two caloric gauges satisfying (\ref{muqi}): both gauges are parallel in $s$ (as in (\ref{11.2})), so by the metric compatibility of $\nabla_s$ the quantity $|\Phi_1-\Phi_2|^2$ is independent of $s$; since both gauges have the same limit $\Xi$ as $s\to\infty$, this quantity vanishes identically, i.e. $\Phi_1=\Phi_2$. \end{proof}
\subsection{Expressions for the connection coefficients} The following lemma gives the expressions for the connection coefficients matrix $A_{x,t}$ by differential fields. The proof of Lemma \ref{po87bg} is almost the same as [Lemma 3.6,\cite{LZ}], thus we omit it. \begin{lemma}\label{po87bg} Suppose that $\Omega(s,t,x)$ is the caloric gauge constructed in Proposition \ref{3.3}, then we have for $i=1,2$ \begin{align} &\mathop {\lim }\limits_{s \to \infty } [{A_i}]^j_k(s,t,x) =\left\langle {{\nabla _i}{\Xi_k }(x),{\Xi_j}(x)} \right\rangle\label{kji}\\ &\mathop {\lim }\limits_{s \to \infty } {A_t}(s,t,x) = 0\label{kji22} \end{align} Particularly let $\Xi(x)=\Theta(Q(x))$ in Proposition \ref{3.3}, denote $A^{\infty}_i$ the limit coefficient matrix, i.e., $[A^{\infty}_i]^k_j=\left\langle {{\nabla _i}{\Xi_k }(Q(x)),{\Xi_j}(Q(x))} \right\rangle$, then we have for $i=1,2$, $s>0$, \begin{align} &{A_i}(s,t,x)\sqrt{h^{ii}(x)} = \int_s^\infty \sqrt{ h^{ii}(x)}{\mathbf{R}(\widetilde{u}(\kappa))\left( {{\partial _s}\widetilde{u}(\kappa),{\partial _i}\widetilde{u}(\kappa)} \right)} d\kappa + { \sqrt{h^{ii}(x)}}A^{\infty}_i.\label{edf}\\ &{A_t}(s,t,x)=\int^{\infty}_s\phi_s\wedge\phi_td\kappa,\label{edf22} \end{align} \end{lemma}
\begin{remark}\label{3sect} For convenience, we rewrite (\ref{edf}) as $A_i(s,t,x)=A^{\infty}_i(s,t,x)+A^{con}_i(s,t,x),$ where $A^{\infty}_i$ denotes the limit part, and $A^{con}_i$ denotes the controllable part, i.e., \begin{align*} A^{con}_i=\int^{\infty}_s\phi_s\wedge\phi_id\kappa. \end{align*} Similarly, we split $\phi_i$ into $\phi_i=\phi^{\infty}_i+\phi^{con}_i$, where $\phi^{con}_i=\int^{\infty}_s\partial_s\phi_id\kappa,$ and $$\phi^{\infty}_i={\left( {\left\langle {{\partial _i}Q(x),{\Xi _1}(Q(x))} \right\rangle
,\left\langle {{\partial _i}Q(x),{\Xi _2}(Q(x))} \right\rangle } \right)^t}.$$ \end{remark}
\section{Derivation of the master equation for the heat tension field} Recall that the heat tension field $\phi_s$ satisfies \begin{align}\label{heat} \phi_s=h^{ij}D_i\phi_j-h^{ij}\Gamma^k_{ij}\phi_k. \end{align} Following Tao, we define the wave tension field by \begin{align}\label{wm} \mathfrak{W} = {D_t}{\phi _t} - {h^{ij}}{D_i}{\phi _j} + {h^{ij}}\Gamma _{ij}^k{\phi _k}. \end{align} In fact, (\ref{heat}) is the gauged form of the heat flow equation, and (\ref{wm}) is the gauged form of the wave map equation (\ref{wmap1}); see Lemma 2.7. The evolution of $\phi_s$ with respect to $t$ is given by Lemma \ref{asdf} below.
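Before stating it, we recall the two identities (in the conventions of Section 2) that are used repeatedly in the computations of this section: the torsion-free identity and the commutation relation
\begin{align*}
D_{\alpha}\phi_{\beta}=D_{\beta}\phi_{\alpha},\qquad D_{\alpha}D_{\beta}X-D_{\beta}D_{\alpha}X=\mathbf{R}(\partial_{\alpha}\widetilde{u},\partial_{\beta}\widetilde{u})X,
\end{align*}
where $\alpha,\beta$ range over $s$, $t$ and the spatial variables, and $X$ is any field along $\widetilde{u}$; for instance $D_tD_s\phi_t-D_sD_t\phi_t=\mathbf{R}(\partial_t\widetilde{u},\partial_s\widetilde{u})(\partial_t\widetilde{u})$.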
\begin{lemma}\label{asdf} The heat tension field $\phi_s$ satisfies \begin{align} {D_t}{D_t}{\phi _s} - {h^{ij}}{D_i}{D_j}{\phi _s} + {h^{ij}}\Gamma _{ij}^k{D_k}{\phi _s} &= {\partial _s}\mathfrak{W} + {h^{ij}}\mathbf{R}({\partial _s}\widetilde{u},{\partial _i}\widetilde{u})\left( {\partial_j\widetilde{u}} \right) \nonumber\\ &+ \mathbf{R}({\partial _t}\widetilde{u},{\partial _s}\widetilde{u})\left( {\partial_t\widetilde{u}} \right).\label{heating} \end{align} \end{lemma} \begin{proof} By the torsion free identity and the commutator identity, we have \begin{align*} &{D_t}{D_t}{\phi _s} = {D_t}{D_s}{\phi _t} = {D_s}{D_t}{\phi _t} + \mathbf{R}({\partial _t}\widetilde{u},{\partial _s}\widetilde{u})\left( { \partial_t\widetilde{u}} \right) \\ &= {D_s}\left( {\mathfrak{W} + {h^{ij}}{D_i}{\phi _j} - {h^{ij}}\Gamma _{ij}^k{\phi _k}} \right) +\mathbf{R}({\partial _t}\widetilde{u},{\partial _s}\widetilde{u})\left( {\partial_t\widetilde{u}} \right) \\ &= {\partial _s}\mathfrak{W} + {h^{ij}}{D_s}{D_i}{\phi _j} - {h^{ij}}\Gamma _{ij}^k{D_s}{\phi _k} + \mathbf{R}({\partial _t}\widetilde{u},{\partial _s}\widetilde{u})\left( {\partial_t\widetilde{u}} \right) \\ &= {\partial _s}\mathfrak{W} + {h^{ij}}{D_i}{D_j}{\phi _s} - {h^{ij}}\Gamma _{ij}^k{D_k}{\phi _s} + {h^{ij}}\mathbf{R}({\partial _s}\widetilde{u},{\partial _i}\widetilde{u})\left( {{\partial_j\widetilde{u}}} \right) + \mathbf{R}({\partial _t}\widetilde{u},{\partial _s}\widetilde{u})\left( {\partial_t\widetilde{u}} \right). \end{align*} Thus (\ref{heating}) is verified. \end{proof}
The evolution of $\mathfrak{W}$ with respect to $s$ is given by the following lemma. \begin{lemma}\label{ab1} Under orthogonal coordinates, the wave tension field $\mathfrak{W}$ satisfies \begin{align*} {\partial _s}\mathfrak{W} =& \Delta \mathfrak{W} + 2h^{ii}{A_i}{\partial _i}\mathfrak{W} +h^{ii} {A_i}{A_i}\mathfrak{W} + h^{ii}{\partial _i}{A_i}\mathfrak{W} - {h^{ii}}\Gamma _{ii}^k{A_k}\mathfrak{W} + {h^{ii}}\left( {\mathfrak{W} \wedge {\phi _i}} \right){\phi _i}\\ & + 3{h^{ii}}({\partial _t}\widetilde{u} \wedge {\partial _i}\widetilde{u}){\nabla _t}{\partial _i}\widetilde{u}. \end{align*} \end{lemma} \begin{proof} In the following calculations, we always use the convention in Remark 2.1. By $\mathfrak{W}=D_t\phi_t-\phi_s$, we have from commutator equality that $${\partial _s}\mathfrak{W}= {D_s}({D_t}{\phi _t} - {\phi _s}) = {D_t}{D_t}{\phi _s} - {D_s}{\phi _s} + {\bf R}({\partial _s}\widetilde{u},{\partial _t}\widetilde{u})\left( {{\partial_t\widetilde{u}}} \right). $$ Further applications of the torsion free identity and commutator identity show \begin{align*} &{D_t}{D_t}{\phi _s} - {D_s}{\phi _s} \\ &= {D_t}{D_t}\left( {{h^{ij}}{D_i}{\phi _j} - {h^{ij}}\Gamma _{ij}^k{\phi _k}} \right) - {D_s}\left( {{h^{ij}}{D_i}{\phi _j} - {h^{ij}}\Gamma _{ij}^k{\phi _k}} \right) \\ &= {h^{ij}}{D_t}{D_t}{D_i}{\phi _j} - {h^{ij}}\Gamma _{ij}^k{D_t}{D_t}{\phi _k} - \left( {{h^{ij}}{D_s}{D_i}{\phi _j} - {h^{ij}}\Gamma _{ij}^k{D_s}{\phi _k}} \right) \\ &= {h^{ij}}{D_t}\left( {{D_i}{D_j}{\phi _t} + \mathbf{R}({\partial _t}\widetilde{u},{\partial _i}\widetilde{u})({\partial _j}\widetilde{u})} \right) - {h^{ij}}\Gamma _{ij}^k\left( {{D_k}{D_t}{\phi _t} + \mathbf{R}({\partial _t}\widetilde{u},{\partial _k}\widetilde{u})({\partial _t}\widetilde{u})} \right) \\ &- \left( {{h^{ij}}{D_i}{D_j}{\phi _s} - {h^{ij}}\Gamma _{ij}^k{D_k}{\phi _s} + {h^{ij}}\mathbf{R}({\partial _s}\widetilde{u},{\partial _i}\widetilde{u})({\partial _j}\widetilde{u})} \right) \\ &= {h^{ij}}{D_t}{D_i}{D_j}{\phi _t} - {h^{ij}}\Gamma _{ij}^k{D_k}{D_t}{\phi _t} - {h^{ij}}{D_i}{D_j}{\phi _s} + {h^{ij}}\Gamma _{ij}^k{D_k}{\phi _s} - {h^{ij}}\mathbf{R}({\partial _s}\widetilde{u},{\partial _i}\widetilde{u})(\partial_j\widetilde{u}) \\ &+ {h^{ij}}{\nabla _t}\left( {\mathbf{R}({\partial _t}\widetilde{u},{\partial _i}\widetilde{u})({\partial _j}\widetilde{u})} \right) - {h^{ij}}\Gamma _{ij}^k\mathbf{R}({\partial _t}\widetilde{u},{\partial _k}\widetilde{u})\left( {{\partial _t}\widetilde{u}} \right). \end{align*} The leading term can be written as \begin{align*} {h^{ij}}{D_t}{D_i}{D_j}{\phi _t}& = {h^{ij}}{D_i}{D_t}{D_j}{\phi _t} + {h^{ij}}\left( {\mathbf{R}({\partial _t}\widetilde{u},{\partial _i}\widetilde{u})e\left( {{D_j}{\phi _t}} \right)} \right) \\ &= {h^{ij}}{D_i}{D_j}{D_t}{\phi _t} + {h^{ij}}\left( {\mathbf{R}({\partial _t}\widetilde{u},{\partial _i}\widetilde{u}){\nabla _j}{\partial _t}\widetilde{u} } \right) + {h^{ij}}{\nabla _i}\left( {\mathbf{R}({\partial _t}\widetilde{u},{\partial _j}\widetilde{u}){\partial _t}\widetilde{u}} \right). \end{align*} Thus we conclude as \begin{align*}
{\partial _s}\mathfrak{W} &={h^{ij}}{D_i}{D_j}({D_t}{\phi _t} - {\phi _s}) - {h^{ij}}\Gamma _{ij}^k{D_k}({D_t}{\phi _t} - {\phi _s}) + {h^{ij}}\left( {\mathbf{R}({\partial _t}\widetilde{u},{\partial _i}\widetilde{u}){\nabla _j}{\partial _t}\widetilde{u}} \right) \\
&+ {h^{ij}}{\nabla _i}\left( {\mathbf{R}({\partial _t}\widetilde{u},{\partial _j}\widetilde{u}){\partial _t}\widetilde{u}} \right)
- {h^{ij}}\mathbf{R}({\partial _s}\widetilde{u},{\partial _i}\widetilde{u})(\partial_j\widetilde{u}) + {h^{ij}}{\nabla _t}\left( {\mathbf{R}({\partial _t}\widetilde{u},{\partial _i}\widetilde{u})({\partial _j}\widetilde{u})} \right) \\
&- {h^{ij}}\Gamma _{ij}^k\mathbf{R}({\partial _t}\widetilde{u},{\partial _k}\widetilde{u})\left( {{\partial _t}\widetilde{u}} \right) + \mathbf{R}({\partial _s}\widetilde{u},{\partial _t}\widetilde{u}){\partial _t}\widetilde{u}. \end{align*} Using $\mathfrak{W}=D_t\phi_t-\phi_s$ and (\ref{2.4best}) yields \begin{align}
{\partial _s}\mathfrak{W}& = \Delta \mathfrak{W} + 2h^{ii}{A_i}{\partial _i}\mathfrak{W}+ h^{ii}{A_i}{A_i}\mathfrak{W} + h^{ii}{\partial _i}{A_i}\mathfrak{W} - {h^{ij}}\Gamma _{ij}^k{A_k}\mathfrak{W}\nonumber\\
&+ \left\{ { - {h^{ii}}\left( {{\partial _s}\widetilde{u} \wedge {\partial _i}\widetilde{u}} \right){\partial _i}\widetilde{u} + {h^{ii}}({\nabla _t}{\partial _t}\widetilde{u } \wedge {\partial _i}\widetilde{u}){\partial _i}\widetilde{u}} \right\}\nonumber \\
&+ {h^{ii}}({\partial _t}\widetilde{u} \wedge {\nabla _t}{\partial _i}\widetilde{u}){\partial _i}\widetilde{u} + {h^{ii}}({\partial _t}\widetilde{u }\wedge {\partial _i}\widetilde{u}){\nabla _t}{\partial _i}\widetilde{u}\nonumber \\
&+ {h^{ii}}({\nabla _i}{\partial _t}\widetilde{u} \wedge {\partial _i}\widetilde{u}){\partial _t}\widetilde{u} + {h^{ii}}({\partial _t}\widetilde{u} \wedge {\partial _i}\widetilde{u}){\nabla _i}{\partial _t}\widetilde{u}\nonumber\\
&+ \left\{ {{h^{ii}}({\partial _t}\widetilde{u} \wedge {\nabla _i}{\partial _i}\widetilde{u}){\partial _t}\widetilde{u} - {h^{ii}}\Gamma _{ii}^k({\partial _t}\widetilde{u} \wedge {\partial _k}\widetilde{u}){\partial _t}\widetilde{u} + ({\partial _s}\widetilde{u} \wedge {\partial _t}\widetilde{u}){\partial _t}\widetilde{u}} \right\}.\label{wanxiao4} \end{align} Recalling the facts that $\mathfrak{W}$ is the gauged field for $\nabla_t\partial_t \widetilde{u}-\tau(\widetilde{u})$ and $\partial_s\widetilde{u}=\tau(\widetilde{u})$, we have \begin{align} - {h^{ii}}\left( {{\partial _s}\widetilde{u} \wedge {\partial _i}\widetilde{u}} \right){\partial _i}\widetilde{u} + {h^{ii}}({\nabla _t}{\partial _t}\widetilde{u} \wedge {\partial _i}\widetilde{u}){\partial _i}\widetilde{u} &= {h^{ii}}\left( {({\nabla _t}{\partial _t}\widetilde{u} - {\partial _s}\widetilde{u}) \wedge {\partial _i}\widetilde{u}} \right){\partial _i}\widetilde{u}\nonumber\\ &= {h^{ii}}\left( {\mathfrak{W} \wedge {\phi _i}} \right){\phi _i}.\label{wanxiao1} \end{align} Meanwhile, $\partial_s\widetilde{u}=\tau(\widetilde{u})$ also implies \begin{align}
&{h^{ii}}({\partial _t}\widetilde{u} \wedge {\nabla _i}{\partial _i}\widetilde{u}){\partial _t}\widetilde{u} - {h^{ii}}\Gamma _{ii}^k({\partial _t}\widetilde{u} \wedge {\partial _k}\widetilde{u}){\partial _t}\widetilde{u} + ({\partial _s}\widetilde{u} \wedge {\partial _t}\widetilde{u}){\partial _t}\widetilde{u }\nonumber\\
&= {h^{ii}}\left( {{\partial _t}\widetilde{u} \wedge \left( {\tau (\widetilde{u}) - {\partial _s}\widetilde{u}} \right)} \right){\partial _t}\widetilde{u} = 0.\label{wanxiao2} \end{align} Bianchi identity gives \begin{align}\label{wanxiao3} {h^{ii}}({\partial _t}\widetilde{u} \wedge {\nabla _t}{\partial _i}\widetilde{u}){\partial _i}\widetilde{u} + {h^{ii}}({\nabla _i}{\partial _t}\widetilde{u} \wedge {\partial _i}\widetilde{u}){\partial _t}\widetilde{u} &= - {h^{ii}}({\partial _i}\widetilde{u }\wedge {\partial _t}\widetilde{u}){\nabla _t}{\partial _i}\widetilde{u}\nonumber\\ &= {h^{ii}}({\partial _t}\widetilde{u} \wedge {\partial _i}\widetilde{u}){\nabla _t}{\partial _i}\widetilde{u}. \end{align} By (\ref{wanxiao1}), (\ref{wanxiao2}) and (\ref{wanxiao3}), (\ref{wanxiao4}) can be further simplified as \begin{align*} {\partial _s}\mathfrak{W} =& \Delta \mathfrak{W} + 2h^{ii}{A_i}{\partial _i}\mathfrak{W} + h^{ii}{A_i}{A_i}\mathfrak{W} +h^{ii} {\partial _i}{A_i}\mathfrak{W} - {h^{ii}}\Gamma _{ii}^k{A_k}\mathfrak{W}\\ & + {h^{ii}}\left( {\mathfrak{W} \wedge {\phi _i}} \right){\phi _i}+ 3{h^{ii}}({\partial _t}\widetilde{u} \wedge {\partial _i}\widetilde{u}){\nabla _t}{\partial _i}\widetilde{u}. \end{align*} \end{proof}
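We also spell out, for clarity, the algebra behind (\ref{wanxiao3}): by the torsion free identity $\nabla_i\partial_t\widetilde{u}=\nabla_t\partial_i\widetilde{u}$ and the first Bianchi (cyclic) identity for the curvature operator $(X\wedge Y)Z$, one has, for each fixed $i$,
\begin{align*}
({\partial _t}\widetilde{u} \wedge {\nabla _t}{\partial _i}\widetilde{u}){\partial _i}\widetilde{u} + ({\nabla _t}{\partial _i}\widetilde{u} \wedge {\partial _i}\widetilde{u}){\partial _t}\widetilde{u} = - ({\partial _i}\widetilde{u}\wedge {\partial _t}\widetilde{u}){\nabla _t}{\partial _i}\widetilde{u},
\end{align*}
which, after contraction with $h^{ii}$, is exactly (\ref{wanxiao3}).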
\begin{lemma}\label{xuejin} Let $Q$ be an admissible harmonic map as in Definition 1.1. Fix the frame $\Xi$ in Remark \ref{3sect} by taking $\Xi(Q(x))=\Theta(Q(x))$ given by (\ref{vg}). Recall the definitions of $A^{\infty}_i$ in Lemma \ref{po87bg}. Then \begin{align}
&|A_i^{\infty}|\lesssim|dQ|, |\sqrt{h^{ii}}\phi_i^{\infty}|\lesssim |dQ|\label{kulun1}\\
&|{h^{ii}}\left( {{\partial _i}{A^{\infty}_i} - \Gamma _{ii}^k{A^{\infty}_k}} \right)|\lesssim |dQ|^2.\label{kulun} \end{align} \end{lemma} \begin{proof} Recall the definition $$[A_i^\infty ]_k^j = \left\langle {{\nabla _i}{\Theta _k},{\Theta _j}} \right\rangle ,{\Theta _1} = {e^{{Q^2}(x)}}\frac{\partial }{{\partial {y_1}}},{\Theta _2} = \frac{\partial }{{\partial {y_2}}}. $$ Since $A_i$ is skew-symmetric, it suffices to consider the $[A_i]^1_{2}$ terms. Direct calculation gives \begin{align*} [A_1^\infty ]_2^1 = \left\langle {{\nabla _1}{\Theta _2},{\Theta _1}} \right\rangle& = {e^{{Q^2}(x)}}\frac{{\partial {Q^{_k}}}}{{\partial {x_1}}}\left\langle {{\nabla _{\frac{\partial }{{\partial {y_k}}}}}\frac{\partial }{{\partial {y_2}}},\frac{\partial }{{\partial {y_1}}}} \right\rangle = {e^{ - {Q^2}(x)}}\frac{{\partial {Q^{_k}}}}{{\partial {x_1}}}\overline{\Gamma}_{k2}^1 \\ &= - {e^{ - {Q^2}(x)}}\frac{{\partial {Q^1}}}{{\partial {x_1}}}, \end{align*} and similarly we obtain \begin{align*} [A_1^\infty ]_1^2 ={e^{ - {Q^2}(x)}}\frac{{\partial {Q^1}}}{{\partial {x_1}}};\mbox{ } [A_2^\infty ]_1^2 = - [A_2^\infty ]_2^1 = {e^{ - {Q^2}(x)}}\frac{{\partial {Q^1}}}{{\partial {x_2}}}. \end{align*} Thus one has \begin{align}
&{h^{ii}}\left( {{\partial _i}[A_i^\infty ]_2^1 - \Gamma _{ii}^k[A_k^\infty ]_2^1} \right)\nonumber \\
&= - \left( {\frac{{{\partial ^2}{Q^1}}}{{\partial {x_2}^2}}{e^{2{x_2}}} + \frac{{{\partial ^2}{Q^1}}}{{\partial {x_1}^2}} - \frac{{\partial {Q^1}}}{{\partial {x_1}}}\frac{{\partial {Q^2}}}{{\partial {x_1}}}{e^{2{x_2}}} - \frac{{\partial {Q^1}}}{{\partial {x_2}}}\frac{{\partial {Q^2}}}{{\partial {x_2}}} - \frac{{\partial {Q^1}}}{{\partial {x_2}}}} \right){e^{ - {Q^2}(x)}}. \label{chumen} \end{align} Writing the harmonic map equation for $Q$ in the coordinates (\ref{vg}) shows that for $l=1,2$ $${h^{ii}}\frac{{{\partial ^2}{Q^l}}}{{\partial {x_i}^2}} - {h^{ii}}\Gamma _{ii}^k{\partial _k}{Q^l} + {h^{ii}}\bar \Gamma _{pq}^l\frac{{\partial {Q^p}}}{{\partial {x_i}}}\frac{{\partial {Q^q}}}{{\partial {x_i}}} = 0. $$ Letting $l=1$ in the above equation, we have $${e^{2{x_2}}}\frac{{{\partial ^2}{Q^1}}}{{\partial {x_1}^2}} + \frac{{{\partial ^2}{Q^1}}}{{\partial {x_2}^2}} - \frac{{\partial {Q^1}}}{{\partial {x_2}}} - 2{e^{2{x_2}}}\frac{{\partial {Q^1}}}{{\partial {x_1}}}\frac{{\partial {Q^2}}}{{\partial {x_1}}} - 2\frac{{\partial {Q^1}}}{{\partial {x_2}}}\frac{{\partial {Q^2}}}{{\partial {x_2}}} = 0, $$ which combined with (\ref{chumen}) yields \begin{align}\label{chunmen2} {h^{ii}}\left( {{\partial _i}A_i^\infty - \Gamma _{ii}^kA_k^\infty } \right) = \left( {{e^{2{x_2}}}\frac{{\partial {Q^1}}}{{\partial {x_1}}}\frac{{\partial {Q^2}}}{{\partial {x_1}}} + \frac{{\partial {Q^1}}}{{\partial {x_2}}}\frac{{\partial {Q^2}}}{{\partial {x_2}}}} \right){e^{ - {Q^2}(x)}}. \end{align} Writing the energy density in coordinates (\ref{vg}), we obtain \begin{align*}
{\left| {dQ} \right|^2} &= {h^{ij}}\left\langle {\frac{{\partial {Q^k}}}{{\partial {x_i}}}\frac{\partial }{{\partial {y_k}}},\frac{{\partial {Q^k}}}{{\partial {x_j}}}\frac{\partial }{{\partial {y_k}}}} \right\rangle \\
&= {e^{2{x_2}}}{\left| {\frac{{\partial {Q^1}}}{{\partial {x_1}}}} \right|^2}{e^{ - 2{Q^2}(x)}} + {e^{2{x_2}}}{\left| {\frac{{\partial {Q^2}}}{{\partial {x_1}}}} \right|^2} + {\left| {\frac{{\partial {Q^1}}}{{\partial {x_2}}}} \right|^2}{e^{ - 2{Q^2}(x)}} + {\left| {\frac{{\partial {Q^2}}}{{\partial {x_2}}}} \right|^2}. \end{align*} Thus (\ref{kulun}) follows from (\ref{chunmen2}) and Young's inequality. (\ref{kulun1}) is much easier and follows immediately by the same arguments. \end{proof}
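To spell out the use of Young's inequality in the last step: each product on the right-hand side of (\ref{chunmen2}) is dominated by the corresponding squares appearing in the energy density; for instance,
\begin{align*}
{e^{2{x_2}}}\Big|\frac{\partial Q^1}{\partial x_1}\Big|\Big|\frac{\partial Q^2}{\partial x_1}\Big|e^{-Q^2(x)}\le \frac{1}{2}\Big({e^{2{x_2}}}\Big|\frac{\partial Q^1}{\partial x_1}\Big|^2e^{-2Q^2(x)}+{e^{2{x_2}}}\Big|\frac{\partial Q^2}{\partial x_1}\Big|^2\Big)\le \frac{1}{2}|dQ|^2,
\end{align*}
and the $x_2$-derivative product is handled in the same way, which gives (\ref{kulun}).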
Now we separate the main term in the equation for $\phi_s$. Recalling the limits of $A_{s,t,x}$ given in (\ref{kji}) and (\ref{kji22}), one can easily see that the main part of (\ref{heating}) is a magnetic wave equation. Precisely, we have the following lemma. \begin{lemma}\label{hushuo} Fix the frame $\Xi$ in Proposition 3.3 by letting $\Xi_i(x)=\Theta_i(Q(x))$, $i=1,2$. Then the heat tension field $\phi_s$ satisfies \begin{align*} &(\partial^2_t-\Delta){\phi _s} + W{\phi _s}\\ &= - 2{A_t}{\partial _t}{\phi _s} - {A_t}{A_t}{\phi _s} - {\partial _t}{A_t}{\phi _s} + {\partial _s}w + \mathbf{R}({\partial _t}\widetilde{u},{\partial _s}\widetilde{u})({\partial _t}\widetilde{u}) + 2{h^{ii}}A_i^{con}{\partial _i}{\phi _s} \\ &+ {h^{ii}}A_i^{con}A_i^\infty {\phi _s} + {h^{ii}}A_i^\infty A_i^{con}{\phi _s} + {h^{ii}}A_i^{con}A_i^{con}{\phi _s} + {h^{ii}}\left( {{\partial _i}A_i^{con} - \Gamma _{ii}^kA_k^{con}} \right){\phi _s} \\ &+ {h^{ii}}\left( {{\phi _s} \wedge \phi _i^\infty } \right)\phi _i^{con} + {h^{ii}}\left( {{\phi _s} \wedge \phi _i^{con}} \right)\phi _i^\infty + {h^{ii}}\left( {{\phi _s} \wedge \phi _i^{con}} \right)\phi _i^{con}, \end{align*} where $A_{x}^{\infty}$, $A_{x}^{con}$ are defined in Remark \ref{3sect}, and $W$ is given by \begin{align}\label{iuo9} W\varphi = -2 {h^{ii}}A_i^\infty {\partial _i}\varphi -{h^{ii}}A_i^\infty A_i^\infty \varphi - {h^{ii}}\left( {\varphi \wedge \phi _i^\infty } \right)\phi _i^\infty-h^{ii}(\partial_iA^{\infty}_{i}-\Gamma^k_{ii}A^{\infty}_k)\varphi. \end{align} Furthermore, $-\Delta+W$ is a self-adjoint operator in $L^2(\Bbb H^2;\Bbb C^2)$, and it is strictly positive if $0<\mu_1\ll1$. \end{lemma} \begin{proof} By (\ref{heating}), expanding $D_{x,t}$ as $\partial_{t,x}+A_{t,x}$ implies \begin{align} &\partial _t^2{\phi _s} - {\Delta}{\phi _s}\nonumber\\ & = - 2{A_t}{\partial _t}{\phi _s} - {A_t}{A_t}{\phi _s} - {\partial _t}{A_t}{\phi _s} + {h^{ii}}{A_i}{A_i}{\phi _s} + {h^{ii}}\left( {{\partial _i}{A_i} - \Gamma _{ii}^k{A_k}} \right){\phi _s} \nonumber\\ &+2 {h^{ii}}{A_i}{\partial _i}{\phi _s}+ {\partial _s}\mathfrak{W} + {h^{ii}}\mathbf{R}({\partial _s}\widetilde{u},{\partial _i}\widetilde{u})({\partial _i}\widetilde{u}) + \mathbf{R}({\partial _t}\widetilde{u},{\partial _s}\widetilde{u})({\partial _t}\widetilde{u}).\label{huta} \end{align} By Remark \ref{3sect}, $A_i=A^{\infty}_i+A^{con}_i$, $\phi_i=\phi^{\infty}_{i}+\phi^{con}_i$. Then, fixing $\Xi$ to be $(\Theta_1(Q),\Theta_2(Q))$, we see that (\ref{huta}) reduces to \begin{align*}
&\partial _t^2{\phi _s} - {\Delta}{\phi _s} - 2{h^{ii}}A_i^\infty {\partial _i}{\phi _s} - {h^{ii}}A_i^\infty A_i^\infty {\phi _s} - {h^{ii}}\left( {{\phi _s} \wedge \phi _i^\infty } \right)\phi _i^\infty-h^{ii}(\partial_iA^{\infty}_i-\Gamma^k_{ii}A^{\infty}_k)\phi_s \\
&= - 2{A_t}{\partial _t}{\phi _s} - {A_t}{A_t}{\phi _s} - {\partial _t}{A_t}{\phi _s} + {\partial _s}w +(\phi _t\wedge\phi_s)\phi_t + 2{h^{ii}}A_i^{con}{\partial _i}{\phi _s} + {h^{ii}}A_i^{con}A_i^\infty {\phi _s}\\
&+ {h^{ii}}A_i^\infty A_i^{con}{\phi _s}+ {h^{ii}}A_i^{con}A_i^{con}{\phi _s} + {h^{ii}}\left( {{\partial _i}A_i^{con} - \Gamma _{ii}^kA_k^{con}} \right){\phi _s} + {h^{ii}}\left( {{\phi _s} \wedge \phi _i^\infty } \right)\phi _i^{con}\\
&+ {h^{ii}}\left( {{\phi _s} \wedge \phi _i^{con}} \right)\phi _i^\infty + {h^{ii}}\left( {{\phi _s} \wedge \phi _i^{con}} \right)\phi _i^{con}. \end{align*} Then, from the non-positivity of the sectional curvature of the target $N=\Bbb H^2$ and the skew-symmetry of the connection matrices $A^{\infty}_i$, a direct calculation shows that $W$ is a nonnegative symmetric operator in $L^2(\Bbb H^2;\Bbb C^2)$; see Lemma \ref{symm} in Section 7. The self-adjointness of $-\Delta+W$ follows from Kato's perturbation theorem. In fact, there exists a self-adjoint realization, denoted by $(\Delta_{col},D(\Delta_{col}))$, of $(\Delta,C^{\infty}_c(\Bbb H^2,\Bbb C^2))$. It is known that $D(\Delta_{col})$ consists of the functions $f\in L^2$ whose distributional Laplacian $\Delta f$ belongs to $L^2$; see for instance \cite{Stri}. Write $W$ as $W=V_1+V_2\nabla$; then $V_1$ and $V_2$ decay exponentially as $d(x,0)\to\infty$ by Lemma \ref{xuejin} and Definition 1.1. For any fixed $\varepsilon>0$, take $R>0$ sufficiently large such that
$${\left\| {{V_1}(x)} \right\|_{L_{d(x,0) \ge R}^\infty }} \le \varepsilon ,\mbox{ }{\left\| {{V_2}(x)} \right\|_{L_{d(x,0) \ge R}^\infty }} \le \varepsilon, $$ then for any $f\in C^{\infty}_c(\Bbb H^2,\Bbb C^2)$, \begin{align}\label{huli}
{\left\| {{V_1}(x)f + {V_2}\nabla f} \right\|_{L_{d(x,0) \ge R}^2}} \le \varepsilon {\left\| f \right\|_{{L^2}}} + \varepsilon {\left\| {\nabla f} \right\|_{{L^2}}}. \end{align} For this $R$, the compactness of Sobolev embedding in bounded domains implies there exists $C(\varepsilon, R)$ such that \begin{align}\label{huli2}
{\left\| {{V_1}(x)f + {V_2}\nabla f} \right\|_{L_{d(x,0) \le R}^2}} \le C(\varepsilon ,R){\left\| f \right\|_{{L^2}}} + \varepsilon {\left\| {\Delta f} \right\|_{{L^2}}}. \end{align} Hence by (\ref{huli}) and (\ref{huli2}), one has for any $\varepsilon>0$ there exists $C(\varepsilon)$ such that \begin{align}
{\left\| {{V_1}(x)f + {V_2}\nabla f} \right\|_{{L^2}}} \le C(\varepsilon){\left\| f \right\|_{{L^2}}} + \varepsilon {\left\| {\Delta f} \right\|_{{L^2}}}. \end{align} Since $C^{\infty}_c(\Bbb H^2,\Bbb C^2)$ is a core of $\Delta_{col}$, Kato's perturbation theorem shows that $-\Delta+W$ is self-adjoint in $L^2$ with domain $D(\Delta_{col})$. \end{proof}
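In other words, the last estimate (extended from the core $C^{\infty}_c(\Bbb H^2,\Bbb C^2)$ to $D(\Delta_{col})$ by density) states that the symmetric operator $W$ is $\Delta_{col}$-bounded with relative bound $\varepsilon<1$,
\begin{align*}
\|Wf\|_{L^2}\le \varepsilon\|\Delta_{col} f\|_{L^2}+C(\varepsilon)\|f\|_{L^2},\qquad f\in D(\Delta_{col}),
\end{align*}
which is precisely the hypothesis of the Kato--Rellich perturbation theorem.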
\section{Bootstrap for the heat tension field}
\subsection{Strichartz estimates for the wave equation with a magnetic potential} Theorem 5.2 and Remark 5.5 of Anker and Pierfelice \cite{AP} provide Strichartz estimates for the linear wave/Klein-Gordon equation: let $((p,q),(\widetilde{p},\widetilde{q}))$ be a $(\sigma,\tilde{\sigma})$ admissible couple, i.e., $({p^{ - 1}},{q^{ - 1}})$ and $({\widetilde{p}}^{\,-1},{\widetilde{q}}^{\,-1})$ belong to \begin{align*} &\left\{ {({p^{ - 1}},{q^{ - 1}}) \in (0,\frac{1}{2}] \times (0,\frac{1}{2}):\frac{1}{p} > \frac{1}{2}(\frac{1}{2} - \frac{1}{q})} \right\} \cup \left\{ {\left( {0,\frac{1}{2}} \right)} \right\}\\ \intertext{and} &\sigma \ge \frac{3}{2}\left( {\frac{1}{2} - \frac{1}{q}} \right),\quad\tilde \sigma \ge \frac{3}{2}\left( {\frac{1}{2} - \frac{1}{{\tilde q}}} \right). \end{align*} If $u$ solves $\partial_t^2u-\Delta u=g$ with initial data $(f_0,f_1)$, then
$$ {\left\| {\widetilde D_x^{ - \sigma + \frac{1}{2}}u} \right\|_{L_t^pL_x^q}} + {\left\| {\widetilde D_x^{ - \sigma - \frac{1}{2}}{\partial _t}u} \right\|_{L_t^pL_x^q}} \lesssim {\left\| {\widetilde D_x^{\frac{1}{2}}f_0} \right\|_{{L^2}}} + {\left\| {\widetilde D_x^{ - \frac{1}{2}}f_1} \right\|_{{L^2}}} + {\left\| {\widetilde D_x^{\tilde \sigma - \frac{1}{2}}g} \right\|_{L_t^{\tilde p'}L_x^{\tilde q'}}}, $$ where $\widetilde D=(-\Delta-\frac{1}{4}+\kappa^2)^{\frac{1}{2}}$ for some $\kappa>\frac{1}{2}$.
Let $\rho(x)=e^{-d(x,0)}$. The endpoint and non-endpoint Strichartz estimates for magnetic wave equations in the small potential case were obtained in the first author's work [Corollary 1.1 and Proposition 3.1 of \cite{Lize1}]. We recall them for the reader's convenience. Consider the magnetic wave equation on $\Bbb H^2$, \begin{align}\label{wavem} \left\{ \begin{array}{l}
\partial _t^2f - \Delta f + {B_0}(x)f + \sum^2_{i=1}{h^{ii}}{B_i}(x){\partial _i}f = F \\
f(0,x) = {f_0}(x),{\partial _t}f(0,x) = {f_1}(x) \\
\end{array} \right. \end{align}
\begin{lemma}[\cite{Lize1}]\label{poi9nmb} Assume that $B_0,B_1,B_2$ in (\ref{wavem}) satisfy for some $\varrho>0$ \begin{align}
\|B_0\|_{L^2\cap e^{-r\varrho}L^{\infty}}+\sum^2_{i=1}\|\sqrt{h^{ii}}B_i\|_{L^2\cap e^{-r\varrho}L^{\infty}}\le \mu_1. \end{align} Assume also that the Schr\"odinger operator $H=-\Delta+B_0+h^{ii}B_i\partial_i$ is symmetric. If $0<\mu_1\ll 1$ and $f$ solves (\ref{wavem}), then for any $0<\sigma\ll\varrho$ and $p\in(2,6)$ \begin{align*}
&{\left\| {{\rho ^{\sigma}}\nabla f} \right\|_{L_t^2L_x^2}}+{\left\| (-\Delta)^{\frac{1}{4}}f \right\|_{L_t^2L_x^{p}}} +
{\left\| {{\partial _t}f} \right\|_{L_t^\infty L_x^2}} + {\left\| {\nabla f} \right\|_{L_t^\infty L_x^2}}\\
&\lesssim {\left\| {\nabla {f_0}} \right\|_{{L^2}}} + {\left\| {{f_1}} \right\|_{{L^2}}} + {\left\| F \right\|_{L_t^1L_x^2}}. \end{align*} \end{lemma}
Hence by Lemma \ref{xuejin}, Lemma \ref{hushuo} and Lemma \ref{poi9nmb}, we have: \begin{proposition}\label{tianxia} Let $W$ be defined as above and let $0<\mu_1\ll 1$, $0<\sigma\ll \varrho\ll 1$. Then we have the following weighted and endpoint Strichartz estimates for the magnetic wave equation: if $f$ solves the equation \begin{align*} \left\{ \begin{array}{l} \partial _t^2f - {\Delta}f + Wf = F \\ f(0,x) = {f_0},{\partial _t}f(0,x) = {f_1} \\ \end{array} \right. \end{align*} then it holds for any $p\in(2,6)$ and $0<\sigma\ll \varrho$ that \begin{align}
&{\left\| {{{\left| D \right|}^{\frac{1}{2}}}f} \right\|_{L_t^2L_x^{p}}} +{\left\| {{\rho ^{\sigma}}\nabla f} \right\|_{L_t^2L_x^2}}+
{\left\| {{\partial _t}f} \right\|_{L_t^\infty L_x^2}} + {\left\| {\nabla f} \right\|_{L_t^\infty L_x^2}}\nonumber\\
&\lesssim {\left\| {\nabla {f_0}} \right\|_{{L^2}}} + {\left\| {{f_1}} \right\|_{{L^2}}} + {\left\| F \right\|_{L_t^1L_x^2}}.\label{gutu4} \end{align} \end{proposition}
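A word on how Proposition \ref{tianxia} is deduced (the identification below is only a sketch): comparing (\ref{iuo9}) with (\ref{wavem}), we apply Lemma \ref{poi9nmb} with
\begin{align*}
h^{ii}B_i\partial_i f=-2h^{ii}A_i^{\infty}\partial_i f,\qquad
B_0f=-h^{ii}A_i^{\infty}A_i^{\infty}f-h^{ii}\big(f\wedge\phi_i^{\infty}\big)\phi_i^{\infty}-h^{ii}\big(\partial_iA_i^{\infty}-\Gamma_{ii}^kA_k^{\infty}\big)f.
\end{align*}
The smallness and exponential decay required of $B_0$ and $\sqrt{h^{ii}}B_i$ then follow from Lemma \ref{xuejin} together with Definition 1.1, and the symmetry of $H=-\Delta+W$ is part of Lemma \ref{hushuo}.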
\begin{remark}\label{tataru0}
For all $\sigma\in \Bbb R$, $p\in(1,\infty)$, $\|\widetilde D^{\sigma} f\|_{p}$ is equivalent to $\|(-\Delta)^{\sigma/2}f\|_{p}$. Tataru \cite{Tataru4} shows for all $p\in(1,\infty)$, $\|\Delta f\|_{p}$ is equivalent to $\|\nabla^2f\|_{p}+\|\nabla f\|_{p}+\|f\|_{p}$. \end{remark}
\subsection{Setting of Bootstrap}
We fix the constants $\mu_2,\mu_1,\varepsilon_1, \varrho, \sigma$ so that \begin{align}\label{dozuoki} 0<\mu_2<\mu_1\ll\varepsilon_1\ll 1,\mbox{ }0<\sigma\ll\varrho\ll 1. \end{align} Let $L>0$ be sufficiently large, say $L=100$. Define $\omega:\Bbb R^+\to \Bbb R^+$ and $a:\Bbb R^+\to \Bbb R^+$ by $$\omega(s) = \left\{ \begin{array}{l}
{s^{\frac{1}{2}}}\mbox{ }{\rm{when}}\mbox{ }0 \le s \le 1 \\
{s^L}\mbox{ }\mbox{ }{\rm{when}}\mbox{ }s \ge 1 \\
\end{array} \right.,a(s) = \left\{ \begin{array}{l}
s^{\frac{3}{4}}\mbox{ }\mbox{ }{\rm{when}}\mbox{ }0 \le s \le 1 \\
{s^L}\mbox{ }\mbox{ }{\rm{when}}\mbox{ }s \ge 1 \\
\end{array} \right. $$
\begin{proposition}\label{oures} Let $\mathcal{A}$ be the set of $T\in[0,T_*)$ such that, for any $2<q<6+2\gamma$ and $p\in(2,6)$ with some fixed $0<\gamma\ll1$, \begin{align}
{\left\| {(du,\partial_tu)} \right\|_{L_t^\infty L_x^2([0,T] \times {\Bbb H^2})}}
+ {\left\| (\nabla{\partial _t}u,\nabla du) \right\|_{L_t^\infty L_x^2([0,T] \times {\Bbb H^2})}}\nonumber\\
+{\left\| {{\partial _t}u} \right\|_{L_t^2L^q_x([0,T] \times {\Bbb H^2})}} &\le {\varepsilon _1}.\label{boot2}\\
{\left\| {\omega (s)|D{|^{ - \frac{1}{2}}}{\partial _t}{\phi _s}} \right\|_{L_s^\infty L_t^2L_x^p}} + {\left\| {\omega (s){\partial _t}{\phi _s}} \right\|_{L_s^\infty L_t^\infty L_x^2}}\nonumber\\
+{\left\| {\omega (s)\nabla {\phi _s}} \right\|_{L_s^\infty L_t^\infty L_x^2}} + {\left\| {\omega (s){{\left| D \right|}^{\frac{1}{2}}}{\phi _s}} \right\|_{L_s^\infty L_t^2L_x^p}} &\le {\varepsilon _1}.\label{boot5} \end{align} Then for all $T\in \mathcal{A}$ we have \begin{align}
&{\left\| {\omega (s){{\left| D \right|}^{ - \frac{1}{2}}}{\partial _t}{\phi _s}} \right\|_{L_s^\infty L_t^2L_x^p([0,T] \times {\Bbb H^2})}} + {\left\| {\omega (s){{\left| D \right|}^{\frac{1}{2}}}{\phi _s}} \right\|_{L^{\infty}_sL_t^2L_x^p([0,T] \times {\Bbb H^2})}}\nonumber\\
&+ {\left\| {\omega (s){\partial _t}{\phi _s}} \right\|_{L_s^\infty L_t^\infty L_x^2([0,T] \times {\Bbb H^2})}}
+ {\left\| {\omega (s)\nabla {\phi _s}} \right\|_{L_s^\infty L_t^\infty L_x^2([0,T] \times {H^2})}} \le \varepsilon _1^2. \label{huojiq} \end{align} and for any $r\in(2,6+2\gamma]$ it holds that \begin{align}
{\left\| {(du,\partial_tu)} \right\|_{L_t^\infty L_x^2([0,T] \times {\Bbb H^2})}} + {\left\| {(\nabla{\partial _t}u,\nabla du)} \right\|_{L_t^\infty L_x^2([0,T] \times {\Bbb H^2})}}&\le {\varepsilon^2 _1}\label{boot8q}\\
{\left\| {{\partial _t}u} \right\|_{L_t^2L_x^r([0,T] \times {\Bbb H^2})}} &\le {\varepsilon^2_1}.\label{boot9q} \end{align} Moreover we have \begin{align}
{\left\| {du} \right\|_{L_t^\infty L_x^2([0,{T}] \times {\Bbb H^2})}}& + {\left\| {{\partial _t}u} \right\|_{L_t^\infty L_x^2([ 0,{T}] \times {\Bbb H^2})}} + {\left\| {\nabla du} \right\|_{L_t^\infty L_x^2([0,{T}] \times {\Bbb H^2})}} \nonumber\\
&+ {\left\| {\nabla {\partial _t}u} \right\|_{L_t^\infty L_x^2([0,{T}] \times {\Bbb H^2})}} + {\left\| {{\partial _t}u} \right\|_{L_t^2L_x^6([ 0,{T}] \times {\Bbb H^2})}} \le \varepsilon _1^2.\label{hua12xde} \end{align} \end{proposition} The proof of Proposition \ref{oures} will be divided into several lemmas below. (\ref{huojiq}) is proved in Proposition 5.11. (\ref{boot8q}), (\ref{boot9q}) and (\ref{hua12xde}) are proved in Proposition 5.13 and Corollary 5.15 respectively.
The bootstrap scheme we apply here is based on the design of \cite{LOS,Tao7}. The essential refinement is that we add a spacetime bound $\|\partial_tu\|_{L^2_tL^p_x}$ to the original bootstrap assumption. The most important new ingredient in this part is that we use the weighted Strichartz estimates of Section 5.1 to control the first order derivative terms of $\phi_s$.
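Let us also indicate, schematically, how Proposition \ref{oures} closes the bootstrap (a standard continuity argument, stated here under the assumption that the quantities in (\ref{boot2}) and (\ref{boot5}) depend continuously on $T$ and that small times belong to $\mathcal{A}$ by the local theory): the conclusion improves the constant $\varepsilon_1$ in the assumptions to $\varepsilon_1^2\ll\varepsilon_1$, so $\mathcal{A}$ is simultaneously nonempty, closed and open in $[0,T_*)$, and therefore $\mathcal{A}=[0,T_*)$.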
\begin{proposition}\label{aaop} Assume that (\ref{boot2}) holds. Then we have for any $\eta>0$ \begin{align}
{\big\| {{A_t}} \big\|_{L_t^\infty L_x^\infty }} &\le \varepsilon_1\label{aaaw811}\\
{\big\| {{h^{ii}} {\partial _i}{A_i}(s)} \big\|_{L_t^\infty L_x^\infty }} &\le \varepsilon_1 \max(1,{s^{ -\eta}})\label{butterfly}\\
{\big\| {\sqrt {{h^{ii}}} {\partial _t}{A_i}(s)} \big\|_{L_t^\infty L_x^\infty }} &\le \varepsilon_1 {s^{ - \frac{1}{2}}}\label{u82}\\
{\big\| {\sqrt {{h^{ii}}} {A_i}(s)} \big\|_{L_t^\infty L_x^\infty }} &\le \varepsilon_1 \label{u81}. \end{align} \end{proposition} \begin{proof}
By the commutator identity and the facts that $|\partial_t\widetilde{u}|\le e^{s\Delta}|\partial_tu|$ and $|\partial_s\widetilde{u}|\le e^{s\Delta}|\partial_su|$, we can bound (\ref{aaaw811}) using Lemma \ref{ktao1}: \begin{align*}
{\left\| {{A_t}} \right\|_{L_t^\infty L_x^\infty }} &\le {\big\| {\int_s^\infty {{{\big\| {{\phi _t}} \big\|}_{L_x^\infty }}{{\big\| {{\phi _s}} \big\|}_{L_x^\infty }}} d\kappa} \big\|_{L_t^\infty }} \le \mathop {\sup }\limits_{t \in [0,T]} {\big\| {{\phi _t}} \big\|_{L_s^2L_x^\infty }}{\big\| {{\phi _s}} \big\|_{L_s^2L_x^\infty }} \\
&\le \mathop {\sup }\limits_{t \in [0,T]} {\big\| {{\partial _t}u} \big\|_{L_x^2}}{\big\| {{\partial _s}u} \big\|_{L_x^2}} \le {\varepsilon _1}. \end{align*} By the commutator identity, \begin{align*}
&{\big\| {\sqrt {{h^{ii}}} {\partial _t}{A_i}} \big\|_{L_t^\infty L_x^\infty }} \le \int_s^\infty {{{\big\| {\sqrt {{h^{ii}}} {\partial _t}\left( {{\phi _i} \wedge {\phi _s}} \right)} \big\|}_{L_t^\infty L_x^\infty }}} d\kappa\\
& \le \int_s^\infty {{{\big\| {\sqrt {{h^{ii}}} {\partial _t}{\phi _i}} \big\|}_{L_t^\infty L_x^\infty }}\big\| {{\phi _s}} \big\|_{L_t^\infty L^{\infty}_x }} d\kappa+ \int_s^\infty {{{\big\| {\sqrt {{h^{ii}}} {\phi _i}} \big\|}_{L_t^\infty L_x^\infty }}\big\| {{\partial _t}{\phi _s}} \big\|_{L_t^\infty L^{\infty}_x}} d\kappa. \end{align*} Using the relation between the induced derivative $D_{i,t}$ and the covariant derivative on $u^*(TN)$, one obtains
$| {\sqrt {{h^{ii}}} {\partial _t}{\phi _i}}| \le | {\nabla {\partial _t}\widetilde{u}}| + | {\sqrt {{h^{ii}}} {A_t}{\phi _i}}|+ | {\sqrt {{h^{ii}}} {A_i}{\phi _t}}|$ and similarly
$| {{\partial _t}{\phi _s}}| \le | {{\nabla _t}{\partial _s}\widetilde{u}}| + | {A_t}{\phi _s}|$. Hence it suffices to prove \begin{align}
{\int_s^\infty {\left\| {\left| {d\widetilde{u}} \right|\left| {{\nabla _t}{\partial _s}\widetilde{u}} \right|} \right\|} _{L_t^\infty L_x^\infty }}d\kappa + {\int_s^\infty {\left\| {\left| {{\partial _s}\widetilde{u}} \right|\left| {\nabla {\partial _t}\widetilde{u}} \right|} \right\|} _{L_t^\infty L_x^\infty }}d\kappa&\le \varepsilon_1s^{-\frac{1}{2}}\label{po987}\\
\int_s^\infty {{{\| {\sqrt {{h^{ii}}} {A_i}{\phi _t}{\phi _s}}\|}_{L_t^\infty L_x^\infty }}}d\kappa+\int_s^\infty {{{\| {\sqrt {{h^{ii}}} {A_t}{\phi _i}{\phi _s}}\|}_{L_t^\infty L_x^\infty }}}d\kappa&\le \varepsilon_1s^{-\frac{1}{2}} \label{pojn89} \end{align}
For $s\in(0,1]$, Proposition \ref{sl} and $|d\widetilde{u}|\le e^{s\Delta}|du|$ give \begin{align}\label{0918}
{\left\| {\left| {d\widetilde{u}} \right|\left| {{\nabla _t}{\partial _s}\widetilde{u}} \right|} \right\|_{L_t^\infty L_x^\infty }} + {\left\| {\left| {{\partial _s}\widetilde{u}} \right|\left| {\nabla {\partial _t}\widetilde{u}} \right|} \right\|_{L_t^\infty L_x^\infty }} \le \varepsilon_1 {s^{ - \frac{1}{2}}}{s^{ -1}} + \varepsilon_1 {s^{ -\frac{1}{2}}}{s^{ - 1}}. \end{align} For $s\ge1$, we have by Proposition \ref{sl} \begin{align}\label{01918}
{\left\| {\left| {d\widetilde{u}} \right|\left| {{\nabla _t}{\partial _s}\widetilde{u}} \right|} \right\|_{L_t^\infty L_x^\infty }} + {\left\| {\left| {{\partial _s}\widetilde{u}} \right|\left| {\nabla {\partial _t}\widetilde{u}} \right|} \right\|_{L_t^\infty L_x^\infty }} \le \varepsilon_1 {e^{ - \delta s}}. \end{align} Therefore (\ref{01918}) and (\ref{0918}) yield for all $s\in(0,\infty)$
$${\left\| {\left| {d\widetilde{u}} \right|\left| {{\nabla _t}{\partial _s}\widetilde{u}} \right|} \right\|_{L_t^\infty L_x^\infty }} + {\left\| {\left| {{\partial _s}\widetilde{u}} \right|\left| {\nabla {\partial _t}\widetilde{u}} \right|} \right\|_{L_t^\infty L_x^\infty }} \le \varepsilon_1 {s^{ -3/2}}. $$ Hence we obtain (\ref{po987}). (\ref{pojn89}) and (\ref{u81}) can be proved similarly. By (\ref{christ}) and direct calculations similar to Lemma \ref{xuejin}, \begin{align}\label{ipu83}
|h^{ii}\partial_iA^{\infty}_i|\lesssim |\nabla dQ|+|dQ|. \end{align} And the same route as (\ref{u82}) shows for any $\eta>0$ \begin{align}\label{ipu82}
|h^{ii}\partial_iA^{con}_i|\le \varepsilon_1 s^{-\eta}. \end{align} Thus (\ref{butterfly}) follows from (\ref{ipu82}) and (\ref{ipu83}). \end{proof}
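We record the elementary integral behind the last step of (\ref{po987}): inserting the pointwise bound $\varepsilon_1\kappa^{-3/2}$ into the $\kappa$-integral gives
\begin{align*}
\int_s^\infty \varepsilon_1\,\kappa^{-\frac{3}{2}}\,d\kappa=2\varepsilon_1\,s^{-\frac{1}{2}},
\end{align*}
which, up to a harmless absolute constant, is the decay claimed in (\ref{po987}).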
\begin{lemma}\label{aoao1} Assume that (\ref{boot2}) and (\ref{boot5}) hold. Then we have \begin{align}
\left\| \sqrt{h^{pp}}|\partial_p(h^{ii} {\partial _i}{A_i}(s))| \right\|_{L_t^\infty L_x^{\infty}} &\le \varepsilon_1 \max(s^{ -1},1)\label{1q2} \end{align} \end{lemma} \begin{proof} By Remark \ref{3sect}, it suffices to bound the $A^{\infty}$ and $A^{con}$ parts separately. Direct calculations as in Lemma \ref{xuejin}, together with (\ref{christ}), show that the $A^{\infty}$ part is bounded by \begin{align*}
\sqrt{h^{pp}}|\partial_p(h^{ii} {\partial _i}{A^{\infty}_i}(s))| \le |\nabla^2 dQ|+ |\nabla dQ|+|dQ|. \end{align*} Thus (\ref{as4}) shows the $A^{\infty}$ part is bounded by \begin{align*}
\|\sqrt{h^{pp}}|\partial_p(h^{ii} {\partial _i}{A^{\infty}_i}(s))|\|_{L^{\infty}_x}\le \varepsilon_1. \end{align*} By (\ref{christ}) and direct calculations, \begin{align*}
&\sqrt{h^{pp}}|\partial_p(h^{ii} {\partial _i}({\phi_i\wedge\phi_s})(s))|\\
&\le |\nabla^2\partial_s\widetilde{u}| |d\widetilde{u}|+\sqrt{h^{ii}h^{pp}}|A_iA_p||\partial_s\widetilde{u}| |d\widetilde{u}|
+|\partial_s\widetilde{u}| |\nabla^2d\widetilde{u}|+
|\nabla\partial_s\widetilde{u}||\nabla d\widetilde{u}|\\
&+\sqrt{h^{ii}}|A_i||\nabla\partial_s\widetilde{u}||d\widetilde{u}|+
\sqrt{h^{ii}}|A_i||\partial_s\widetilde{u}||\nabla d\widetilde{u}|+
\sqrt{h^{ii}}|A_i||\nabla\partial_s\widetilde{u}| |d\widetilde{u}|\\
&+\sqrt{h^{pp}h^{ii}}|\partial_pA_i||\nabla\partial_s\widetilde{u}| |d\widetilde{u}|+\sqrt{h^{pp}h^{ii}}|\partial_pA_i||\partial_s\widetilde{u}| |\nabla d\widetilde{u}|. \end{align*} Thus the $A^{con}$ part follows by Lemma \ref{fotuo1} and interpolation. \end{proof}
\begin{proposition}\label{bootstrap} Suppose that (\ref{boot2}), (\ref{boot5}) hold. Then we have for $p\in(2,6)$ \begin{align}
{\big\| {a(s){{\left\| {{\partial _t}{\phi _s}} \right\|}_{L_t^2L_x^p}}} \big\|_{L_s^\infty}}&\le {\varepsilon _1}\label{huojikn1} \\
{\big\| {a(s){{\left\| {\nabla {\phi _s}} \right\|}_{L_t^2L_x^p}}} \big\|_{L_s^\infty}} &\le {\varepsilon _1}\label{huojikn}. \end{align} Generally we have for $\theta\in[0,2]$ \begin{align}
{\big\| {\omega_\theta (s){{\left( { - \Delta } \right)}^\theta }{\phi _s}} \big\|_{L_s^\infty L_t^2L_x^p}}&\le {\varepsilon _1}\label{uojiknmpx3}\\
\big\| \omega_1(s)|D|{\partial _t}{\phi _s} \big\|_{L_s^\infty L_t^2L_x^p} &\le {\varepsilon _1},\label{9o0o} \end{align} where $\omega_\theta (s)=s^{\theta+\frac{1}{4}}$ when $s\in[0,1]$ and $\omega_\theta (s)=s^{L}$ when $s\ge1$. \end{proposition} \begin{proof} By (\ref{991}) and Duhamel principle we have \begin{align}
{\| {{{\left( { - \Delta } \right)}^{\frac{1}{2}}}{\phi _s}(s)}\|_{L_t^2L_x^p}} &\le {\| {{{\left( { - \Delta } \right)}^{\frac{1}{2}}}{e^{\frac{s}{2}\Delta }}{\phi _s}(\frac{s}{2})} \|_{L_t^2L_x^p}}\nonumber\\
&+ {\big\| {\int_{\frac{s}{2}}^s {{{\left( { - \Delta } \right)}^{\frac{1}{2}}}{e^{(s - \tau )\Delta }}{h^{ii}}{A_i}{\partial _i}{\phi _s}(\tau )} d\tau }\big\|_{L_t^2L_x^p}} \label{wulaso}\\
&+ {\big\| {\int_{\frac{s}{2}}^s {{{\left( { - \Delta } \right)}^{\frac{1}{2}}}{e^{(s - \tau )\Delta }}G(\tau)}d\tau } \big\|_{L_t^2L_x^p}}.\label{wulasoo}
\end{align}
where $G(\tau)={{h^{ii}}\left( {{\partial _i}{A_i}} \right){\phi _s} - {h^{ii}}\Gamma _{ii}^k{A_k}{\phi _s} + {h^{ii}}{A_i}{A_i}{\phi _s} + {h^{ii}}\left( {{\phi _s} \wedge {\phi _i}} \right){\phi _i}}.$ For (\ref{wulaso}), the smoothing effect and (\ref{u81}) show \begin{align*}
&s^{\frac{3}{4}}{\big\| {\int_{\frac{s}{2}}^s {{{\left( { - \Delta } \right)}^{\frac{1}{2}}}{e^{(s - \tau )\Delta }}{h^{ii}}{A_i}{\partial _i}{\phi _s}(\tau )} d\tau } \big\|_{L_t^2L_x^p}} \\
&\lesssim s^{\frac{3}{4}}\int_{\frac{s}{2}}^s {{{{{(s - \tau )}^{-\frac{1}{2}}}}}{{\left\| {{h^{ii}}{A_i}{\partial _i}{\phi _s}(\tau )} \right\|}_{L_t^2L_x^p}}} d\tau \\
&\lesssim s^{\frac{3}{4}}\int_{\frac{s}{2}}^s {{{{{(s - \tau )}^{-\frac{1}{2}}}}}{{\left\| {\nabla {\phi _s}(\tau )} \right\|}_{L_t^2L_x^p}}{{\big\| {\sqrt {{h^{ii}}} {A_i}} \big\|}_{L_t^\infty L_x^\infty }}d} \tau \\
&\lesssim s^{\frac{3}{4}}\varepsilon_1\int_{\frac{s}{2}}^s {{{{{(s - \tau )}^{-\frac{1}{2}}}}}{{\left\| {\nabla {\phi _s}(\tau )} \right\|}_{L_t^2L_x^p}}d} \tau.
\end{align*} Thus we conclude when $s\in[0,1]$ \begin{align}\label{w2}
&s^{\frac{3}{4}}{\big\| {\int_{\frac{s}{2}}^s {{{\left( { - \Delta } \right)}^{\frac{1}{2}}}{e^{(s - \tau )\Delta }}{h^{ii}}{A_i}{\partial _i}{\phi _s}(\tau )} d\tau } \big\|_{L_t^2L_x^p}}\nonumber\\
&\le \varepsilon_1{\left\| {s^{\frac{3}{4}}{{\left\| {\nabla {\phi _s}(s)} \right\|}_{L_t^2L_x^p}}} \right\|_{L_s^\infty }}. \end{align} Similarly we have for (\ref{wulasoo}) that \begin{align*}
&s^{\frac{3}{4}}\int_{\frac{s}{2}}^s (s - \tau )^{-\frac{1}{2}}\|G(\tau)\|_{L_t^2L_x^p} d\tau \\
&\le s^{\frac{3}{4}}\int_{\frac{s}{2}}^s {{{{{(s - \tau )}^{-\frac{1}{2}}}}}{{\left\| {{h^{ii}}{\partial _i}{A_i}} \right\|}_{L_t^\infty L_x^\infty }}{{\left\| {{\phi _s}} \right\|}_{L_t^2L_x^p}}} d\tau + s^{\frac{3}{4}}\int_{\frac{s}{2}}^s {{{\left\| {{A_2}} \right\|}_{L_t^\infty L_x^\infty }}{{\left\| {{\phi _s}} \right\|}_{L_t^2L_x^p}}} d\tau \\
&+ s^{\frac{3}{4}}\int_{\frac{s}{2}}^s {{{{{(s - \tau )}^{-\frac{1}{2}}}}}\left( {{{\left\| {{h^{ii}}{A_i}{A_i}} \right\|}_{L_t^\infty L_x^\infty }} + {{\left\| {{h^{ii}}{\phi _i}{\phi _i}} \right\|}_{L_t^\infty L_x^\infty }}} \right)} {\left\| {{\phi _s}} \right\|_{L_t^2L_x^p}}d\tau. \end{align*} Thus by Proposition \ref{aaop} and Proposition \ref{sl}, we have for all $s\in[0,1]$ \begin{align}\label{wulaso2}
(\ref{wulasoo})\lesssim {\left\| {{s^{\frac{1}{2}}}{{\left\| {{\phi _s}(s)} \right\|}_{L_t^2L_x^p}}} \right\|_{L_s^\infty }}. \end{align} For $s\ge1$, we also have by Duhamel principle \begin{align*}
&{s^L}{\big\| {{{\left( { - \Delta } \right)}^{\frac{1}{2}}}{\phi _s}(s)} \big\|_{L_t^2L_x^p}}\\
&\le {s^L}{\big\| {{{\left( { - \Delta } \right)}^{\frac{1}{2}}}{e^{\frac{s}{2}\Delta }}{\phi _s}(\frac{s}{2})} \big\|_{L_t^2L_x^p}} + {s^L}{\big\| {\int_{\frac{s}{2}}^s {{{\left( { - \Delta } \right)}^{\frac{1}{2}}}{e^{(s - \tau )\Delta }}G_1(\tau )} d\tau } \big\|_{L_t^2L_x^p}},
{s^L}{\big\| {{{\left( { - \Delta } \right)}^{\frac{1}{2}}}{e^{\frac{s}{2}\Delta }}{\phi _s}(\frac{s}{2})} \big\|_{L_t^2L_x^p}} \le {s^L}{e^{ - \frac{1}{16} s}}{\big\| {{\phi _s}(\frac{s}{2})} \big\|_{L_t^2L_x^p}}. \end{align*} By Proposition \ref{aaop} and smoothing effect, the first term in $G_1$ is bounded as \begin{align*}
&{s^L}{\big\| {\int_{\frac{s}{2}}^s {{{\left( { - \Delta } \right)}^{\frac{1}{2}}}{e^{(s - \tau )\Delta }}{h^{ii}}{A_i}{\partial _i}{\phi _s}(\tau )} d\tau } \big\|_{L_t^2L_x^p}}\\
&\le {s^L}\int_{\frac{s}{2}}^s {{{{{\left( {s - \tau } \right)}^{-\frac{1}{2}}}}}{e^{ -\delta (s - \tau )}}{{\big\| {\nabla {\phi _s}(\tau )} \big\|}_{L_t^2L_x^p}}{{\big\| {\sqrt {{h^{ii}}} {A_i}} \big\|}_{L_t^\infty L_x^\infty }}} d\tau \\
&\le \varepsilon_1{s^L}\int_{\frac{s}{2}}^s {{e^{ -\delta (s - \tau )}}{\tau ^{ - L}}{{{{\left( {s - \tau } \right)}^{-\frac{1}{2}}}}}{{\big\| {{\tau ^L}\nabla {\phi _s}(\tau )} \big\|}_{L_t^2L_x^p}}d} \tau. \end{align*} The other terms in $G_1$ can be estimated similarly, thus we obtain for $s\ge1$ \begin{align}\label{wulaso3}
{s^L}{\left\| {{{\left( { - \Delta } \right)}^{\frac{1}{2}}}{\phi _s}(s)} \right\|_{L_t^2L_x^p}} \le \varepsilon_1{\left\| {{s^L}{{\left\| {\nabla {\phi _s}(s)} \right\|}_{L_t^2L_x^p}}} \right\|_{L_s^\infty (s \ge 1)}} + {\left\| {{s^L}{{\left\| {{\phi _s}(s)} \right\|}_{L_t^2L_x^p}}} \right\|_{L_s^\infty (s \ge 1)}}. \end{align} Combining (\ref{wulaso}) and (\ref{wulasoo}) with (\ref{wulaso3}) gives the corresponding estimate in (\ref{huojikn}) for $\nabla\phi_s$. It remains to prove the estimate in (\ref{huojikn1}) for $\partial_t\phi_s$. Denote the inhomogeneous term in (\ref{9923}) by $G_3$; then Duhamel principle gives \begin{align*}
s^{\frac{3}{4}}{\left\| {{\partial _t}{\phi _s}(s)} \right\|_{L_t^2L_x^p}} \le s^{\frac{3}{4}}{\big\| {{e^{\Delta \frac{s}{2}}}{\partial _t}{\phi _s}(\frac{s}{2})} \big\|_{L_t^2L_x^p}} + s^{\frac{3}{4}}{\big\| {\int_{\frac{s}{2}}^s {{e^{\Delta (s - \tau )}}G_3(\tau )d\tau } } \big\|_{L_t^2L_x^p}}. \end{align*} The first term of $G_3$ is bounded by \begin{align*}
s^{\frac{3}{4}}{\int_{\frac{s}{2}}^s {\big\| {{e^{\Delta (s - \tau )}}{h^{ii}}\left( {{\partial _t}{A_i}} \right){\partial _i}{\phi _s}(\tau )} \big\|} _{L_t^2L_x^p}}d\tau\le s^{\frac{3}{4}}\int_{\frac{s}{2}}^s {\big\| {\sqrt {{h^{ii}}}{\partial _t}{A_i}} \big\|_{L_t^\infty L_x^\infty }}\big\|{\nabla}{\phi_s} \big\|_{L_t^2 L_x^p } d\tau. \end{align*} This is acceptable by Proposition \ref{aaop}. The second term in $G_3$ is bounded as \begin{align}
&s^{\frac{3}{4}}{\int_{\frac{s}{2}}^s {\big\| {{e^{\Delta (s - \tau )}}2{h^{ii}}{A_i}{\partial _i}{\partial _t}{\phi _s}(\tau )} \big\|} _{L_t^2L_x^p}}d\tau\nonumber\\
&\le s^{\frac{3}{4}}{\int_{\frac{s}{2}}^s {\big\| {{e^{\Delta (s - \tau )}}\sqrt {{h^{ii}}} {\partial _i}\left( {\sqrt {{h^{ii}}} {A_i}{\partial _t}{\phi _s}} \right)} \big\|} _{L_t^2L_x^p}}d\tau+ s^{\frac{3}{4}}{\int_{\frac{s}{2}}^s {\big\| {{e^{\Delta (s - \tau )}}{h^{ii}}{\partial _i}{A_i}{\partial _t}{\phi _s}} \big\|} _{L_t^2L_x^p}}d\tau\nonumber\\ &\triangleq I+II. \end{align} $I$ is bounded by the smoothing effect, boundedness of Riesz transform and Proposition \ref{aaop} \begin{align*}
I &\le {s^{\frac{3}{4}}}\int_{\frac{s}{2}}^s {{{\big\| {{{\left( { - \Delta } \right)}^{\frac{1}{2}}}{e^{\Delta (s - \tau )}}\big( {\sqrt {{h^{ii}}} {A_i}{\partial _t}{\phi _s}} \big)} \big\|}_{L_t^2L_x^p}}} d\tau \\
&\le {s^{\frac{3}{4}}}\int_{\frac{s}{2}}^s {{{{{\big( {s - \tau } \big)}^{-\frac{1}{2}}}}}{{\big\| {\sqrt {{h^{ii}}} {A_i}} \big\|}_{L_t^\infty L_x^\infty }}{{\big\| {{\partial _t}{\phi _s}} \big\|}_{L_t^2L_x^p}}} d\tau\\
&\le {s^{\frac{3}{4}}}\int_{\frac{s}{2}}^s {{{{{\left( {s - \tau } \right)}^{-\frac{1}{2}}}}}} {\varepsilon _1}{\big\| {{\partial _t}{\phi _s}} \big\|_{L_t^2L_x^p}}d\tau. \end{align*} $II$ is estimated as the first term of $G_3$ above. The third term of $G_3$ is bounded as \begin{align}
&{s^{\frac{3}{4}}}\int_{\frac{s}{2}}^s {{{\big\| {{e^{\Delta (s - \tau )}}{h^{ii}}\left( {{\partial _i}{\partial _t}{A_i}} \right){\phi _s}} \big\|}_{L_t^2L_x^p}}} d\tau \nonumber \\
&\le {s^{\frac{3}{4}}}{\int_{\frac{s}{2}}^s {\big\| {{e^{\Delta (s - \tau )}}\sqrt {{h^{ii}}} {\partial _i}\big( {\sqrt {{h^{ii}}} {\partial _t}{A_i}{\phi _s}} \big)} \big\|} _{L_t^2L_x^p}}d\tau\nonumber \\
&+ {s^{\frac{3}{4}}}\int_{\frac{s}{2}}^s {{{\big\| {{e^{\Delta (s - \tau )}}{h^{ii}}{\partial _t}{A_i}{\partial _i}{\phi _s}} \big\|}_{L_t^2L_x^p}}} d\tau.\label{woshi} \end{align} The remaining arguments are almost the same as those for $I$ and $II$, and the other nine terms in $G_3$ can be estimated in the same way. Hence the desired estimate in (\ref{huojikn1}) for $\partial_t\phi_s$ when $s\in(0,1]$ is verified. It remains to prove (\ref{huojikn1}) for $\partial_t\phi_s$ when $s\ge1$; the proof of this part closely parallels the estimates of $\nabla\phi_s$ when $s\ge1$ and those of $I,II$. (\ref{9o0o}) follows by the same arguments as (\ref{huojikn1}), applying the smoothing effect of the heat semigroup. By interpolation, in order to verify (\ref{uojiknmpx3}), it suffices to prove \begin{align}
\left\| \omega_{1}(s) (-\Delta){\phi _s} \right\|_{L_s^\infty L_t^2L_x^p}\le {\varepsilon _1}. \end{align} By (\ref{991}), Duhamel principle and the smoothing effect we have \begin{align*}
&\|(- \Delta){\phi _s}(s)\|_{L_t^2L_x^p} \le s^{-1}e^{-\frac{\delta}{2}s}\|{\phi _s}(\frac{s}{2})\|_{L_t^2L_x^p}\\
&+ \int^{s}_{\frac{s}{2}} (s-\tau)^{-\frac{1}{2}}e^{-\delta(s-\tau)}\big(\|\nabla (h^{ii}A_i\partial_i\phi_s)\|_{{L_t^2L_x^p}}+\|\nabla G\|_{{L_t^2L_x^p}}\big)d\tau. \end{align*} Then by Lemma \ref{aoao1}, Proposition \ref{aaop}, (\ref{huojikn}), (\ref{boot2}), (\ref{boot5}), one obtains \begin{align*}
\left\| \omega_{1}(s) (-\Delta){\phi _s} \right\|_{L_s^\infty L_t^2L_x^p}\le {\varepsilon _1}\left\| \omega_{1} (s) \nabla^2{\phi _s} \right\|_{L_s^\infty L_t^2L_x^p}+\varepsilon_1. \end{align*} Thus (\ref{uojiknmpx3}) follows by Remark \ref{tataru0}. \end{proof}
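We also note why the weight $\omega_\theta(s)=s^{\theta+\frac14}$ (for $s\in[0,1]$) is compatible with the interpolation used above: since the exponent is affine in $\theta$, for $a,b\in[0,2]$ and $\lambda\in[0,1]$ one has
\begin{align*}
\omega_a(s)^{1-\lambda}\,\omega_b(s)^{\lambda}=s^{(a+\frac14)(1-\lambda)+(b+\frac14)\lambda}=s^{(1-\lambda)a+\lambda b+\frac14}=\omega_{(1-\lambda)a+\lambda b}(s),\qquad 0<s\le1,
\end{align*}
while for $s\ge1$ all these weights equal $s^L$.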
\begin{lemma} Assume that (\ref{boot2}) and (\ref{boot5}) hold. Then for $q\in(2,6+2\gamma]$ \begin{align}
{\left\| {{\phi _t}(s)} \right\|_{L_s^\infty L_t^2L_x^q}} &\le {\varepsilon _1}\label{time2}\\
{\left\| {{A_t}} \right\|_{L_t^1L_x^\infty }} &\le \varepsilon _1^2\label{aaop11} \end{align} \end{lemma} \begin{proof}
First notice that $\phi_t$ satisfies $(\partial_s-\Delta)|\phi_t|\le 0$; thus, by comparison with the linear heat flow, one has the pointwise estimate \begin{align*}
|\phi_t(s,t,x)|\le \big(e^{s\Delta}|\phi_t(0,t,\cdot)|\big)(x)=\big(e^{s\Delta}|\partial_t u(t,\cdot)|\big)(x). \end{align*} Hence (\ref{time2}) follows from (\ref{boot2}) and the $L^q_x$ boundedness of $e^{s\Delta}$. From the commutator identity we have \begin{align}\label{qx1}
{\left\| {{A_t}} \right\|_{L_t^1L_x^\infty }} \le \int_0^\infty {{{\left\| {{\partial _t}\widetilde{u}} \right\|}_{L_t^2L_x^\infty }}{{\left\| {{\partial _s}\widetilde{u}} \right\|}_{L_t^2L_x^\infty }}} ds.
\|\phi_s\|_{L^{\infty}_x}\le \||D|^{\frac{1}{2}}\phi_s\|_{L^{p_*}_x}. \end{align}
Since $|\partial_t \widetilde{u}|$ satisfies $(\partial_s-\Delta)|\partial_t \widetilde{u}|\le 0$, we also have \begin{align}\label{0yu0cv}
\|\phi_t(s)\|_{L^{\infty}_x}\lesssim s^{-1/{p_*}}e^{-\delta s}\|\phi_t(\frac{s}{2})\|_{L^{p_*}_x}. \end{align} By (\ref{0yu0cv}), (\ref{yu0cv}) and (\ref{boot5}), \begin{align}\label{0yu0cv9}
\int^{1}_{0}\|\phi_t(s)\|_{L^2_tL^{\infty}_x}\|\phi_s(s)\|_{L^2_tL^{\infty}_x}ds&\lesssim \int^1_0s^{-\frac{1}{2}-\frac{1}{p_*}}\|\phi_t(\frac{s}{2})\|_{L^2_tL^{p_*}_x}s^{\frac{1}{2}}\||D|^{\frac{1}{2}}\phi_s(s)\|_{L^2_tL^{p_*}_x} ds.\\
\int^{\infty}_{1}\|\phi_t(s)\|_{L^2_tL^{\infty}_x}\|\phi_s(s)\|_{L^2_tL^{\infty}_x}ds&\lesssim \int^{\infty}_1s^{-4L}\|\phi_t(\frac{s}{2})\|_{L^2_tL^{p_*}_x}\||D|^{\frac{1}{2}}\phi_s(s)\|_{L^2_tL^{p_*}_x}ds. \end{align} Thus (\ref{aaop11}) is obtained by (\ref{boot2}) and (\ref{boot5}). \end{proof}
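For clarity, the first $s$-integral above converges because, after inserting (\ref{time2}) and (\ref{boot5}), its integrand is of size
\begin{align*}
\varepsilon_1^2\, s^{-\frac{1}{2}-\frac{1}{p_*}}\cdot s^{\frac{1}{2}}=\varepsilon_1^2\, s^{-\frac{1}{p_*}},
\end{align*}
which is integrable on $(0,1]$ since $p_*>1$; the second integral converges thanks to the factor $s^{-4L}$.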
\begin{lemma}\label{gdie1} Assume that (\ref{boot2}) and (\ref{boot5}) hold. Then for $p\in(2,6+2\gamma]$ with $0<\gamma\ll1$, $\phi_t$ satisfies \begin{align}
{\left\| {\omega (s)|D|{\phi _t}(s)} \right\|_{L_s^\infty L_t^2L_x^p}} &\le {\varepsilon _1}\label{time}\\
{\left\| {\omega_{\frac{3}{4}} (s)\Delta{\phi _t}(s)} \right\|_{L_s^\infty L_t^2L_x^p}} &\le {\varepsilon _1}\label{timeling} \end{align} \end{lemma} \begin{proof} By Duhamel principle and (\ref{yfcvbn}) \begin{align*}
{s^{\frac{1}{2}}}{\big\| {{{\left( { - \Delta } \right)}^{\frac{1}{2}}}{\phi _t}(s)} \big\|_{L_t^2L_x^p}} &\le {s^{\frac{1}{2}}}{\big\| {{{\left( { - \Delta } \right)}^{\frac{1}{2}}}{e^{\frac{s}{2}\Delta }}{\phi _t}(\frac{s}{2})} \big\|_{L_t^2L_x^p}} \\
&+ {s^{\frac{1}{2}}}\int_{\frac{s}{2}}^s {{{\big\| {{{\left( { - \Delta } \right)}^{\frac{1}{2}}}{e^{(s - \tau )\Delta }}\mathcal{G}(\tau )} \big\|}_{L_t^2L_x^p}}} d\tau, \end{align*} where $\mathcal{G}$ denotes the inhomogeneous terms. By smoothing effect and Proposition \ref{aaop}, the first term in $\mathcal{G}$ is bounded by \begin{align*}
&{s^{\frac{1}{2}}}\int_{\frac{s}{2}}^s {{{\big\| {{{\left( { - \Delta } \right)}^{\frac{1}{2}}}{e^{(s - \tau )\Delta }}{h^{ii}}{A_i}{\partial _i}{\phi _t}} \big\|}_{L_t^2L_x^p}}} d\tau \\
&\le {s^{\frac{1}{2}}}\int_{\frac{s}{2}}^s {{{{{(s - \tau )}^{-\frac{1}{2}}}}}{{\big\| {\nabla {\phi _t}} \big\|}_{L_t^2L_x^p}}} {\big\| {\sqrt {{h^{ii}}} {A_i}} \big\|_{L_t^\infty L_x^\infty }}d\tau \\
&\le \varepsilon_1{s^{\frac{1}{2}}}\int_{\frac{s}{2}}^s {{{{{(s - \tau )}^{-\frac{1}{2}}}}}{{\big\| {\nabla {\phi _t}} \big\|}_{L_t^2L_x^p}}} d\tau. \end{align*} The large time estimates follow by the same route. Similar estimates for the remaining terms in $\mathcal{G}$, together with (\ref{time2}), yield (\ref{time}). By Duhamel principle and the smoothing effect, we have \begin{align*}
\| \Delta\phi_t\|_{L^2_tL^p_x}
\lesssim s^{-\frac{1}{2}}e^{-\delta \frac{s}{2}}\|\nabla\phi_t\|_{L^2_tL^p_x}+\int_{\frac{s}{2}}^s (s-\tau)^{-\frac{1}{2}}e^{-\delta(s-\tau)}\|\nabla \mathcal{G} \|_{L_t^2L_x^p} d\tau. \end{align*} Then Lemma \ref{aoao1}, Proposition \ref{aaop}, (\ref{time}), (\ref{boot2}), (\ref{boot5}) give \begin{align*}
\| \omega_{\frac{3}{4}}(s)\Delta\phi_t\|_{L^{\infty}_sL^2_tL^p_x}
\lesssim \varepsilon_1+\varepsilon_1\| \omega_{\frac{3}{4}}(s)\nabla^2\phi_t\|_{L^{\infty}_sL^2_tL^p_x}. \end{align*} Thus (\ref{timeling}) follows by Remark \ref{tataru0}. \end{proof}
\begin{lemma} Suppose that (\ref{boot2}) and (\ref{boot5}) hold. Then the wave map tension field $\mathfrak{W}$ satisfies \begin{align}
{\left\|s^{-\frac{1}{2}} \mathfrak{W}(s) \right\|_{L^\infty_s L_t^1L_x^2}}\le \varepsilon^2_1\label{time5}\\
{\left\| \nabla \mathfrak{W}(s) \right\|_{L^\infty_s L_t^1L_x^2}}\le \varepsilon^2_1\label{time6}\\
{\left\| s^{\frac{1}{2}}\Delta \mathfrak{W}(s) \right\|_{L^\infty_s L_t^1L_x^2}}\le \varepsilon^2_1\label{time7}\\
{\left\| \omega(s)\partial_s \mathfrak{W}(s) \right\|_{L^\infty_s L_t^1L_x^2}}\le \varepsilon^2_1.\label{time8} \end{align} \end{lemma} \begin{proof} Recall the equation for $\mathfrak{W}$ evolving along $s$: \begin{align} {\partial _s}\mathfrak{W} &= \Delta \mathfrak{W} + 2{h^{ii}}{A_i}{\partial _i}\mathfrak{W} + {h^{ii}}{A_i}{A_i}\mathfrak{W} + {h^{ii}}{\partial _i}{A_i}\mathfrak{W} - {h^{ii}}\Gamma _{ii}^k{A_k}\mathfrak{W} + {h^{ii}}\left( {\mathfrak{W} \wedge {\phi _i}} \right){\phi _i}\nonumber\\ &+ 3{h^{ii}}({\partial _t}\widetilde{u} \wedge {\partial _i}{\widetilde{u}}){\nabla _t}{\partial _i}\widetilde{u}.\label{gurenjim} \end{align} Since $\mathfrak{W}(0,s,x)=0$ for all $(s,x)\in\Bbb R^+\times \Bbb H^2$, Duhamel principle gives $$(-\Delta)^k\mathfrak{W}(s,t,x)= \int_0^s {{e^{(s - \tau )\Delta }}{{\left( { - \Delta } \right)}^k}G_2(\tau )d\tau }, $$ where $G_2$ denotes the inhomogeneous term. \\ {\bf{Step One.}} In this step we consider the short-time behavior, and all $L^{\infty}_s$ norms below are taken over $s\in[0,1]$. By (\ref{u81}) and (\ref{butterfly}),
\int_0^s {{{\left\| {h^{ii}{A_i}{A_i}\mathfrak{W}} \right\|}_{L_t^1L_x^2}}} d\kappa &\le \int_0^s {{{\left\| {h^{ii}{A_i}{A_i}} \right\|}_{L_t^\infty L_x^\infty }}} {\left\| \mathfrak{W} \right\|_{L_t^1L_x^2}}d\kappa \le {s^{\frac{3}{2}}}\varepsilon _1^2{\left\| {\mathfrak{W}{s^{ - \frac{1}{2}}}} \right\|_{L_s^\infty L_t^1L_x^2}} \\
\int_0^s {{{\left\| {{h^{ii}}{\partial _i}{A_i}\mathfrak{W}} \right\|}_{L_t^1L_x^2}}} d\kappa &\le \int_0^s {{{\left\| {{h^{ii}}{\partial _i}{A_i}} \right\|}_{L_t^\infty L_x^\infty }}} {\left\| \mathfrak{W} \right\|_{L_t^1L_x^2}}d\kappa \le s^{\frac{1}{2}}\varepsilon _1^2{\left\| {\mathfrak{W}{s^{ - \frac{1}{2}}}} \right\|_{L_s^\infty L_t^1L_x^2}}. \end{align*} By Proposition \ref{sl}, \begin{align}
\int_0^s {{{\left\| {{h^{ii}}\left( {\mathfrak{W} \wedge {\phi _i}} \right){\phi _i}} \right\|}_{L_t^1L_x^2}}} d\kappa \le \int_0^s {{{\left\| {{h^{ii}}{\phi _i}{\phi _i}} \right\|}_{L_t^\infty L_x^\infty }}} {\left\|\mathfrak{W} \right\|_{L_t^1L_x^2}}d\kappa \le s\varepsilon _1^2{\left\| {\mathfrak{W}{s^{ - \frac{1}{2}}}} \right\|_{L_s^\infty L_t^1L_x^2}}. \end{align} By (\ref{time}), (\ref{u81}) and Proposition \ref{sl}, \begin{align*}
&\int_0^s {{{\left\| {{h^{ii}}\left( {{\partial _t}\widetilde{u} \wedge {\partial _i}\widetilde{u}} \right){\nabla _i}{\partial _t}\widetilde{u}} \right\|}_{L_t^1L_x^2}}} d\kappa \\
&\le \int_0^s {{{\left\| {d\widetilde{u}} \right\|}_{L_t^\infty L_x^6}}{{\left\| {\nabla {\partial _t}\widetilde{u}} \right\|}_{L_t^2L_x^6}}} {\left\| {{\partial _t}\widetilde{u}} \right\|_{L_t^2L_x^6}}d\kappa \\
&\le \int_0^s {{{\left\| {d\widetilde{u}} \right\|}_{L_t^\infty L_x^6}}{{\left\| {\nabla {\phi _t}} \right\|}_{L_t^2L_x^6}}} {\left\| {{\partial _t}\widetilde{u}} \right\|_{L_t^2L_x^6}}d\kappa + \int_0^s {{{\left\| {d\widetilde{u}} \right\|}_{L_t^\infty L_x^6}}{{\left\| {\sqrt {{h^{ii}}} {A_i}{\phi _t}} \right\|}_{L_t^2L_x^6}}} {\left\| {{\partial _t}\widetilde{u}} \right\|_{L_t^2L_x^6}}d\kappa \\
&\le {s^{\frac{1}{2}}}\varepsilon _1^2.
\end{align*} By the smoothing effect and the boundedness of Riesz transform, we have \begin{align*}
&\int_0^s {{{\left\| {{e^{(s - \kappa)\Delta }}{h^{ii}}{A_i}{\partial _i}\mathfrak{W}} \right\|}_{L_t^1L_x^2}}} d\kappa \\
&\le \int_0^s {{{\left\| {{e^{(s - \kappa)\Delta }}{h^{ii}}{\partial _i}\left( {{A_i}\mathfrak{W}} \right)} \right\|}_{L_t^1L_x^2}}} d\kappa
+ \int_0^s {{{\left\| {{e^{(s - \kappa)\Delta }}{h^{ii}}{\partial _i}{A_i}\mathfrak{W}} \right\|}_{L_t^1L_x^2}}} d\kappa \\
&\le \int_0^s {{{(s - \kappa)}^{ - \frac{1}{2}}}{{\left\| {\sqrt {{h^{ii}}} {A_i}\mathfrak{W}} \right\|}_{L_t^1L_x^2}}} d\kappa + \int_0^s {{{\left\| {{h^{ii}}{\partial _i}{A_i}\mathfrak{W}} \right\|}_{L_t^1L_x^2}}} d\kappa \\
&\le {s^{\frac{1}{2}}}\varepsilon _1^2{\left\| {\mathfrak{W}{s^{ - \frac{1}{2}}}} \right\|_{L_s^\infty L_t^1L_x^2}}. \end{align*} Hence we conclude (\ref{time5}) for $s\in[0,1]$ by choosing $\varepsilon_1$ sufficiently small. In order to prove (\ref{time6}), we instead use the following Duhamel formula, which allows us to apply (\ref{time5}): $${\left( { - \Delta } \right)^{\frac{1}{2}}}\mathfrak{W}(s) = {\left( { - \Delta } \right)^{\frac{1}{2}}}{e^{\frac{s}{2}\Delta }}\mathfrak{W}(\frac{s}{2}) + \int_{\frac{s}{2}}^s {{{\left( { - \Delta } \right)}^{\frac{1}{2}}}{e^{(s - \tau )\Delta }}{G_2}(\tau )d\tau }. $$ Then (\ref{time6}) follows by (\ref{time5}) and the smoothing effect. Again by Duhamel principle and the smoothing effect,
$$\|\left( { - \Delta } \right)\mathfrak{W}(s)\|_{L^2_x} \le \|\left( { - \Delta } \right){e^{\frac{s}{2}\Delta }}\mathfrak{W}(\frac{s}{2})\|_{L^2_x} + \int_{\frac{s}{2}}^s (s-\tau)^{-\frac{1}{2}}e^{-\delta (s-\tau)}\|\left( { - \Delta } \right)^{\frac{1}{2}}{G_2}(\tau )\|_{L^2_x}d\tau. $$ Thus Lemma \ref{aoao1}, Proposition \ref{aaop}, (\ref{boot2}), (\ref{boot5}), Remark \ref{tataru0} and Lemma \ref{gdie1} give (\ref{time7}) for $s\in[0,1]$. For $s\in[0,1]$, (\ref{time8}) now arises from (\ref{time5})-(\ref{time7}).\\ {\bf{Step Two.}} We prove (\ref{time5})-(\ref{time7}) for $s\ge1$. This can easily be obtained by the same arguments as above with the help of the $s^{-L}$ decay in the long-time case.\\ {\bf{Step Three.}} We prove the large-time behavior. The Duhamel formula we use is again $${\left( { - \Delta } \right)^k}\mathfrak{W}(s) = {\left( { - \Delta } \right)^k}{e^{\frac{s}{2}\Delta }}\mathfrak{W}(\frac{s}{2}) + \int_{\frac{s}{2}}^s {{{\left( { - \Delta } \right)}^k}{e^{(s - \tau )\Delta }}{G_2}(\tau )d\tau }. $$ For $s\ge1$, applying the smoothing effect we obtain
$${s^L}{\left\| {\mathfrak{W}(s)} \right\|_{L_t^1L_x^2}} \le {s^L}{e^{ - s/8}}{\left\| {\mathfrak{W}(\frac{s}{2})} \right\|_{L_t^1L_x^2}} + {s^L}\int_{\frac{s}{2}}^s {{e^{ - (s - \tau )/8}}{{\left\| {{G_2}(\tau )} \right\|}_{L_t^1L_x^2}}d\tau }. $$ Then by Hausdorff-Young and (\ref{time5})-(\ref{time7}), for $s\ge1$ \begin{align}\label{1wuhusd}
\|\mathfrak{W}\|_{L^1_tL^2_x}\le \varepsilon^2_1s^{-L}. \end{align} Similarly, we have for $s\in[1,\infty)$ \begin{align}\label{wuhusd}
\|\nabla \mathfrak{W}\|_{L^1_tL^2_x}+\|\Delta \mathfrak{W}\|_{L^1_tL^2_x}\le \varepsilon^2_1s^{-L}. \end{align} Thus the long-time part of (\ref{time8}) now follows from (\ref{1wuhusd}) and (\ref{wuhusd}). \end{proof}
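We briefly indicate how (\ref{time8}) is read off from (\ref{gurenjim}) once (\ref{time5})--(\ref{time7}) are available: each term on the right-hand side of (\ref{gurenjim}) is estimated in $L^1_tL^2_x$ as in Step One; for instance, for $0<s\le1$,
\begin{align*}
\omega(s)\|\Delta\mathfrak{W}(s)\|_{L^1_tL^2_x}\le s^{\frac12}\cdot s^{-\frac12}\,\big\|s^{\frac12}\Delta\mathfrak{W}\big\|_{L^\infty_sL^1_tL^2_x}\le \varepsilon_1^2,
\end{align*}
and the remaining terms are handled by (\ref{u81}), (\ref{butterfly}), (\ref{time5}), (\ref{time6}) and the bound on the cubic term obtained in Step One.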
\begin{lemma} Suppose that (\ref{boot2}) and (\ref{boot5}) hold. Then for $0<\gamma\ll1$ \begin{align}
\left\| s^{-\frac{1}{2}}\mathfrak{W}(s)\right\|_{L_s^\infty L_t^2L_x^{3+\gamma}}+\left\| \omega(s)\mathfrak{W}(s)\right\|_{L_s^\infty L_t^2L_x^{3+\gamma}}& \le {\varepsilon _1}\label{time90}\\
{\left\|{\omega (s){\partial _t}{\phi _t}(s)} \right\|_{L_s^\infty L_t^2L_x^{3+\gamma}}} &\le \varepsilon_1.\label{time3}\\
{\left\|{{\partial _t}{A_t}(s)} \right\|_{L_s^\infty L_t^2L_x^{3+\gamma}}} &\le \varepsilon_1.\label{time37} \end{align} \end{lemma} \begin{proof} (\ref{time3}) is a direct corollary of (\ref{time90}). In fact, the definition of the wave map tension field gives \begin{align*} D_t\phi_t=\phi_s+\mathfrak{W}(s). \end{align*}
Hence $|\partial_t\phi_t|$ is bounded by $|\phi_s|+|A_t\phi_t|+|\mathfrak{W}|$, and (\ref{time3}) follows from (\ref{boot5}), (\ref{time90}), (\ref{time2}) and (\ref{aaaw811}). (\ref{time90}) follows by the same arguments as (\ref{time8}); the only difference is that we use \begin{align*}
{\left\| {{h^{ii}}\left( {{\partial _t}\widetilde{u} \wedge {\partial _i}\widetilde{u}} \right){\nabla _i}{\partial _t}\widetilde{u}} \right\|_{L_t^2L_x^{3+\gamma}}} \le {\left\| {\nabla {\partial _t}\widetilde{u}} \right\|_{L_t^2L_x^{6+2\gamma}}}{\left\| {{\partial _t}\widetilde{u}} \right\|_{L_t^\infty L_x^{12+4\gamma}}}{\left\| {d\widetilde{u}} \right\|_{L_t^\infty L_x^{12+4\gamma}}}, \end{align*}
where the term ${\left\| {{\partial _t}\widetilde{u}} \right\|_{L_t^\infty L_x^{12+4\gamma}}}{\left\| {d\widetilde{u}} \right\|_{L_t^\infty L_x^{12+4\gamma}}}$ is bounded by Sobolev embedding and Proposition \ref{sl}. It remains to prove (\ref{time37}). By the definition of $D_t$ and $A_t$, we have \begin{align*}
\left| {{\partial _t}{A_t}(s)} \right| &\le \int_s^\infty {\left| {{\partial _t}{\phi _t}} \right|\left| {{\phi _s}} \right|} d\kappa + \int_s^\infty {\left| {{\partial _t}{\phi _s}} \right|\left| {{\phi _t}} \right|d\kappa} \\
&\le \int_s^\infty {\left| {{D_t}{\phi _t}} \right|\left| {{\phi _s}} \right|} d\kappa + \int_s^\infty {\left| {{A_t}} \right|\left| {{\phi _t}} \right|\left| {{\phi _s}} \right|d\kappa} + \int_s^\infty {\left| {{\partial _t}{\phi _s}} \right|\left| {{\phi _t}} \right|d\kappa}. \end{align*} By $\mathfrak{W}=D_t\phi_t-\phi_s$ and H\"older, \begin{align}
&{\left\| {\int_s^\infty {\left| {{D_t}{\phi _t}} \right|\left| {{\phi _s}} \right|} d\kappa} \right\|_{L_t^2L_x^{3+\gamma}}}\nonumber\\
&\le \int_s^\infty {{{\left\| w \right\|}_{L_t^2L_x^{3+\gamma}}}{{\left\| {{\partial _s}\widetilde{u}} \right\|}_{L_t^\infty L_x^\infty }}} d\kappa + \int_s^\infty {{{\left\| {{\phi _s}} \right\|}_{L_t^\infty L_x^{6+2\gamma}}}} {\left\| {{\phi _s}} \right\|_{L_t^2L_x^{6+2\gamma}}}d\kappa.\label{huqiancvb} \end{align}
Since ${\| {{\phi _s}}\|_{L_x^{6+2\gamma}}} \le {\| {{{\left| D \right|}^{\frac{1}{2}}}{\phi _s}} \|_{L_x^p}}$ for $p\in(4,6)$, then (\ref{huqiancvb}) is acceptable by Proposition \ref{sl} and Proposition \ref{bootstrap}. Again by H\"older and Sobolev embedding, for $\frac{1}{m}+\frac{1}{4}=\frac{1}{3+\gamma}$ \begin{align*}
{\left\| {\int_s^\infty {\left| {{\partial _t}{\phi _s}} \right|\left| {{\phi _t}} \right|d\kappa} } \right\|_{L_t^2L_x^{3+\gamma}}} &\le \int_s^\infty {{{\left\| {{\partial _s}{\phi _t}} \right\|}_{L_t^2L_x^4}}} {\left\| {{\phi _t}} \right\|_{L_t^\infty L_x^m}}d\kappa \\
&\le \int_s^\infty {{{\left\| {{\partial _s}{\phi _t}} \right\|}_{L_t^2L_x^4}}} {\left\| {\nabla {\partial _t}\widetilde{u}} \right\|_{L_x^2}}d\kappa. \end{align*}
Since $|\partial_s\phi_t|\le|\partial_t\phi_s|+|A_t\phi_s|$, this is also acceptable by Proposition \ref{sl}, Proposition \ref{bootstrap}, (\ref{aaop11}), and (\ref{aaaw811}). Thus (\ref{time37}) follows. \end{proof}
\begin{proposition}\label{cuihua} Suppose that (\ref{boot2}) and (\ref{boot5}) hold. Then we have for $p\in(2,6)$ \begin{align}
&{\left\| {\omega (s){{\left| D \right|}^{ - \frac{1}{2}}}{\partial _t}{\phi _s}} \right\|_{L_s^\infty L_t^2L_x^p([0,T] \times {\Bbb H^2})}} + {\left\| {\omega (s){{\left| D \right|}^{\frac{1}{2}}}{\phi _s}} \right\|_{L^{\infty}_sL_t^2L_x^p([0,T] \times {\Bbb H^2})}}\nonumber\\
&+ {\left\| {\omega (s){\partial _t}{\phi _s}} \right\|_{L_s^\infty L_t^\infty L_x^2([0,T] \times {\Bbb H^2})}}
+ {\left\| {\omega (s)\nabla {\phi _s}} \right\|_{L_s^\infty L_t^\infty L_x^2([0,T] \times {\Bbb H^2})}} \le \varepsilon _1^2. \label{huoji} \end{align} \end{proposition} \noindent{\bf{Proof.}} By Lemma \ref{hushuo} and Proposition \ref{tianxia}, we obtain for any $p\in(2,6)$ \begin{align}
&{\omega(s)}{\left\| {{\partial _t}{\phi _s}} \right\|_{L_t^\infty L_x^2}} + {\omega(s)}{\left\| {\nabla {\phi _s}} \right\|_{L_t^\infty L_x^2}} + {\omega(s)}{\left\| {{{\left| \nabla \right|}^{\frac{1}{2}}}{\phi _s}} \right\|_{L_t^2L_x^p}}\nonumber\\
&+ {\omega(s)}{\left\| {{{\left| D \right|}^{ - \frac{1}{2}}}{\partial _t}{\phi _s}} \right\|_{L_t^2L_x^p}} +{\omega(s)}{\left\| {{\rho ^\sigma }\nabla {\phi _s}} \right\|_{L_t^2L_x^2}} \nonumber\\
&\lesssim {\omega(s)}{\left\| {{\partial _t}{\phi _s}(0,s,x)} \right\|_{L_x^2}} + {\omega(s)}{\left\| {\nabla {\phi _s}(0,s,x)} \right\|_{L_x^2}} + {\omega(s)}{\left\| G_4 \right\|_{L_t^1L_x^2}}.\label{chunjie1} \end{align} where $G_4$ denotes the inhomogeneous term. First, the $\phi_s(0,s,x)$ term is acceptable by Proposition \ref{sl}, $\mu_2\ll \varepsilon_1$ and $$
|\nabla_{t,x}\phi_s(0,s,x)|\le |\nabla_{t,x}\partial_sU|+\sqrt{h^{\gamma\gamma}}|A_{\gamma}||\partial_sU|, $$ where $U(s,x)$ is the heat flow initiated from $u_0$. Second, the three terms involving $A_t$ are bounded by \begin{align*}
{\omega(s)}{\left\| {{A_t}{\partial _t}{\phi _s}} \right\|_{L_t^1L_x^2}} &\le {\left\| {{A_t}} \right\|_{L_t^1L_x^\infty }}{\omega(s)}{\left\| {{\partial _t}{\phi _s}} \right\|_{L_t^\infty L_x^2}} \\
{\omega(s)}{\left\| {{A_t}{A_t}{\phi _s}} \right\|_{L_t^1L_x^2}} &\le {\left\| {{A_t}} \right\|_{L_t^1L_x^\infty }}{\left\| {{A_t}} \right\|_{L_t^\infty L_x^\infty }}{\omega(s)}{\left\| {{\phi _s}} \right\|_{L_t^\infty L_x^2}} \\
{\omega(s)}{\left\| {{\partial _t}{A_t}{\phi _s}} \right\|_{L_t^1L_x^2}} &\le {\left\| {{\partial _t}{A_t}} \right\|_{L_t^2L_x^{3+\gamma}}}{\omega(s)}{\left\| {{\phi _s}} \right\|_{L_t^2L_x^k}}, \end{align*} where $\frac{1}{k}+\frac{1}{3+\gamma}=\frac{1}{2},$ and $k\in(2,6)$. They are admissible by (\ref{aaaw811}), (\ref{aaop11}) and (\ref{time37}). The $\partial_t\widetilde{u}$ term is bounded by \begin{align*}
{\omega(s)}{\left\| {\mathbf{R}({\partial _t}\widetilde{u},{\partial _s}\widetilde{u})({\partial _t}\widetilde{u})} \right\|_{L_t^1L_x^2}} \le {\left\| {{\partial _t}\widetilde{u}} \right\|_{L_t^2L_x^{6+2\gamma}}}{\left\| {{\partial _t}\widetilde{u}} \right\|_{L_t^\infty L_x^{6+2\gamma}}}{\omega(s)}{\left\| {{\phi _s}} \right\|_{L_t^2L_x^k}}, \end{align*} where $\frac{1}{k}+\frac{1}{3+\gamma}=\frac{1}{2},$ and $k\in(2,6)$. The $\partial_s\mathfrak{W}$ term is bounded by (\ref{time8}). The $A^{con}_i$ terms should be dealt with separately. We present the estimates for these terms as a lemma.
\begin{lemma}[Continuation of Proof of Proposition \ref{cuihua}] Under the assumption of Proposition \ref{cuihua}, we have \begin{align}
{\omega(s)}{\left\| {{h^{ii}}A_i^{con}{\partial _i}{\phi _s}} \right\|_{L_t^1L_x^2}} &\le {\varepsilon _1}{\omega(s)}{\left\| {{\rho ^\sigma }\nabla {\phi _s}} \right\|_{L_t^2L_x^2}} + \varepsilon _1^2\label{cuihua1}\\
{\omega(s)}{\left\| {{h^{ii}}A_i^{con}A_i^\infty {\phi _s}} \right\|_{L_t^1L_x^2}}&\le \varepsilon _1^2\label{cuihua2}\\
{\omega(s)}{\left\| {{h^{ii}}A_i^{con}A_i^{con} {\phi _s}} \right\|_{L_t^1L_x^2}}&\le \varepsilon _1^2\label{cuihua3}\\
{\omega(s)}{\left\| {{h^{ii}}\partial_iA_i^{con}{\phi _s}} \right\|_{L_t^1L_x^2}}&\le \varepsilon _1^2\label{cuihua4}\\
{\omega(s)}{\left\| {{h^{ii}}\Gamma _{ii}^kA_k^{con}} \phi_s\right\|_{L_t^1L_x^2}}&\le \varepsilon _1^2\label{cuihua5}. \end{align} \end{lemma} \begin{proof} Expanding $\phi_i(\kappa)$ as $\phi^{\infty}_i+\int^{\infty}_{\kappa}\partial_s \phi_i(\tau)\,d\tau$ yields \begin{align}
A_i^{con} = \int_s^\infty {{\phi _i} \wedge {\phi _s}} d\kappa = \int_s^\infty {\left( {\int_{\kappa}^\infty {{\partial _s}} {\phi _i}(\tau )d\tau + \phi _i^\infty } \right) \wedge {\phi _s}} (\kappa)d\kappa. \end{align} Hence we get \begin{align*}
&{\omega(s)}{\left\| {{h^{ii}}A_i^{con}{\partial _i}{\phi _s}} \right\|_{L_t^1L_x^2}} \\
&\le {\omega(s)}{\left\| {{h^{ii}}(\int_s^\infty \phi _i^\infty\wedge {{\phi _s}(\kappa)d\kappa} }) {\partial _i}{\phi _s}\right\|_{L_t^1L_x^2}}\\
&+ {\omega(s)}{\left\| {{h^{ii}} \left( {\int_s^\infty {{\phi _s}(\kappa) \wedge \left( {\int_{\kappa}^\infty {{\partial _s}} {\phi _i}(\tau )d\tau } \right)d\kappa} } \right)} {\partial _i}{\phi _s}\right\|_{L_t^1L_x^2}} \\ &\buildrel \Delta \over = B_1 + B_2 \end{align*} The $B_1$ term is bounded by \begin{align}
B_1 &\lesssim {\omega(s)}{\left\| {{\rho ^\sigma }\nabla {\phi _s}} \right\|_{L_t^2L_x^2}}{\left\| {\int_s^\infty {{\rho ^{ - \sigma }}\phi _i^\infty\sqrt{h^{ii}} {\phi _s}(\kappa)d\kappa} } \right\|_{L_t^2L_x^{\infty}}}\nonumber \\
&\le {\omega(s)}{\left\| {{\rho ^\sigma }\nabla {\phi _s}} \right\|_{L_t^2L_x^2}}{\left\| {{\rho ^{ - \sigma }}\phi _i^\infty }\sqrt{h^{ii}} \right\|_{L_x^{\infty}}}\int_s^\infty {{{\left\| {{\phi _s}(\kappa)} \right\|}_{L_t^2L_x^{\infty}}}d\kappa} \nonumber \\
&\lesssim {\omega(s)}{\left\| {{\rho ^\sigma }\nabla {\phi _s}} \right\|_{L_t^2L_x^2}}{\left\| {{\rho ^{ - \sigma }}\sqrt{h^{ii}}\phi _i^\infty } \right\|_{L_x^{\infty}}}{\left\| {a(s){{\left\| {\nabla{\phi _s}(s)} \right\|}_{L_t^2L_x^{4}}}} \right\|_{L_s^\infty }},\label{guihua} \end{align} where we have used the Sobolev embedding in the last step. Hence Proposition \ref{bootstrap} gives an acceptable bound, \begin{align*}
{B_1} \le C\mu_1\varepsilon_1{\omega(s)}{\left\| {{\rho ^\sigma }\nabla {\phi _s}} \right\|_{L_t^2L_x^2}}. \end{align*} The $B_2$ term is bounded by \begin{align*}
{B_2} \lesssim {\omega(s)}{\left\| {\nabla {\phi _s}} \right\|_{L_t^\infty L_x^2}}\int_s^\infty {{{\left\| {{\phi _s}(\kappa)} \right\|}_{L_t^2L_x^\infty }}\left( {\int_{\kappa}^\infty {{{\left\| {\nabla {\phi _s}(\tau )} \right\|}_{L_t^2L_x^\infty }}} d\tau } \right)d\kappa}. \end{align*} Meanwhile, Sobolev embedding and Proposition \ref{bootstrap} give when $\tau \in (0,1)$ \begin{align*}
{\left\| {\nabla {\phi _s}(\tau )} \right\|_{L_t^2L_x^\infty }} &\le {\left( {{\tau ^{\frac{3}{4}}}{{\left\| {\nabla {\phi _s}(\tau )} \right\|}_{L_t^2L_x^5}}} \right)^{3/5}}{\left( {{\tau ^{5/4}}{{\left\| {{\nabla ^2}{\phi _s}(\tau )} \right\|}_{L_t^2L_x^5}}} \right)^{2/5}}{\tau ^{ - \frac{1}{2} -9/20}} \\
&\le {\varepsilon _1}{\tau ^{ -19/20}},
\end{align*}
and when $\tau \in [1,\infty)$
\begin{align*}
{\left\| {\nabla {\phi _s}(\tau )} \right\|_{L_t^2L_x^\infty }} &\le {\left( {{\tau ^L}{{\left\| {\nabla {\phi _s}(\tau )} \right\|}_{L_t^2L_x^5}}} \right)^{3/5}}{\left( {{\tau ^L}{{\left\| {{\nabla ^2}{\phi _s}(\tau )} \right\|}_{L_t^2L_x^5}}} \right)^{2/5}}{\tau ^{ - L}} \le {\varepsilon _1}{\tau ^{ - L}}. \end{align*}
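For the reader's convenience we record the exponent bookkeeping in the last two displays: the weights of $\tau$ are inserted so that Proposition \ref{bootstrap} applies to each bracketed quantity, and they cancel exactly. In the regime $\tau\in(0,1)$ this amounts to
$$\frac{3}{5}\cdot\frac{3}{4}+\frac{2}{5}\cdot\frac{5}{4}=\frac{9}{20}+\frac{1}{2}=\frac{1}{2}+\frac{9}{20},$$
while in the regime $\tau\in[1,\infty)$ it is immediate since $(\frac35+\frac25)L=L$. Hence only the prefactors $\tau^{-\frac{19}{20}}$ and $\tau^{-L}$ survive; these are integrable near $\tau=0$ and (provided $L>1$) near $\tau=\infty$ respectively, which is what is needed to conclude (\ref{guihua1}) below.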
Similarly we deduce by Sobolev embedding $\|f\|_{L^{\infty}}\le \||D|^{\frac{1}{2}}f\|_{L^5}$ that \begin{align*}
{\left\| {{\phi _s}(\tau )} \right\|_{L_t^2L_x^\infty }} \le {\varepsilon _1}{\tau ^{ - \frac{1}{2}}},\mbox{ }{\rm{when}}\mbox{ }\tau \in (0,1);\mbox{ }{\left\| {{\phi _s}(\tau )} \right\|_{L_t^2L_x^\infty }} \le {\varepsilon _1}{\tau ^{ - L}},\mbox{ }{\rm{when}}\mbox{ }\tau \in [1,\infty ). \end{align*} Therefore we conclude \begin{align}
B_2 \le \varepsilon _1^2{\omega(s)}{\left\| {\nabla {\phi _s}} \right\|_{L_t^\infty L_x^2}}\label{guihua1}. \end{align} Proposition \ref{bootstrap} together with (\ref{guihua}), (\ref{guihua1}) yields (\ref{cuihua1}). Next we prove (\ref{cuihua2}). H\"older yields \begin{align*}
{\omega(s)}{\| {{h^{ii}}A_i^{con}A_i^\infty {\phi _s}}\|_{L_t^1L_x^2}} \le {\| {\sqrt {{h^{ii}}} A_i^{con}} \|_{L_t^2L_x^{\frac{10}{3}}}}{\omega(s)}{\| {{\phi _s}} \|_{L_t^2L_x^5}}. \end{align*} Using the expression $A_i^{con} = \int_s^\infty {{\phi _i} \wedge {\phi _s}} d\kappa$, we obtain \begin{align}
{\left\| {\sqrt {{h^{ii}}} A_i^{con}} \right\|_{L_t^2L_x^{\frac{10}{3}}}} &\lesssim {\left\| {\int_s^\infty {\sqrt {{h^{ii}}} {\phi _i} \wedge {\phi _s}} d\kappa} \right\|_{L_t^2L_x^{\frac{10}{3}}}} \le {\| {d\widetilde{u}}\|_{L_t^\infty L_x^{10}}}\int_s^\infty {{{\left\| {{\phi _s}} \right\|}_{L_t^2L_x^5}}} d\kappa \nonumber\\
&\lesssim {\| {\nabla d\widetilde{u}}\|_{L_t^\infty L_x^2}}{\| {\omega(s){{\| {{\phi _s}(s)} \|}_{L_t^2L_x^5}}}\|_{L_s^\infty }}.\label{guihua7} \end{align} Therefore Proposition \ref{bootstrap} gives (\ref{cuihua2}). Third, we verify (\ref{cuihua3}). H\"older yields \begin{align*}
{\omega(s)}{\| {{h^{ii}}A_i^{con}A_i^{con}{\phi _s}} \|_{L_t^1L_x^2}} \le {\| {\sqrt {{h^{ii}}} A_i^{con}} \|_{L_t^2L_x^{\frac{10}{3}}}}{\| {\sqrt {{h^{ii}}} A_i^{con}} \|_{L_t^\infty L_x^\infty }}{\omega(s)}{\| {{\phi _s}} \|_{L_t^2L_x^5}}. \end{align*}
The term ${\| {\sqrt {{h^{ii}}} A_i^{con}}\|_{L_t^2L_x^{\frac{10}{3}}}}$ has been estimated in (\ref{guihua7}). The ${\| {\sqrt {{h^{ii}}} A_i^{con}} \|_{L_t^{\infty}L_x^\infty}}$ term is bounded by \begin{align*}
{\| {\sqrt {{h^{ii}}} A_i^{con}}\|_{L_t^{\infty}L_x^\infty }} \lesssim \left\| {\int_s^\infty {{{\| {d\widetilde{u}} \|}_{L^\infty_{x} }}{{\| {{\phi _s}} \|}_{L^\infty _{x}}}} d\kappa} \right\|_{L^{\infty}_t}. \end{align*} This is acceptable by Proposition \ref{sl} and Lemma \ref{ktao1}. Fourth, we prove (\ref{cuihua4}). H\"older yields \begin{align*}
{\omega(s)}{\left\| {{h^{ii}}\left( {{\partial _i}A_i^{con}} \right){\phi _s}} \right\|_{L_t^1L_x^2}} \le {\left\| {{h^{ii}}{\partial _i}A_i^{con}} \right\|_{L_t^2L_x^4}}{\omega(s)}{\left\| {{\phi _s}} \right\|_{L_t^2L_x^4}}. \end{align*} The ${h^{ii}}\partial_iA_i^{con}$ term is bounded by \begin{align*}
& {\left\| {{h^{ii}}{\partial _i}A_i^{con}} \right\|_{L_t^2L_x^4}} = {\left\| {\int_s^\infty {{h^{ii}}{\partial _i}} {\phi _i}{\phi _s}d\kappa + \int_s^\infty {{h^{ii}}} {\phi _i}{\partial _i}{\phi _s}d\kappa} \right\|_{L_t^2L_x^4}} \\
&\le \int_s^\infty {{{\left\| {{h^{ii}}{\partial _i}{\phi _i}} \right\|}_{L_t^\infty L_x^{20}}}} {\left\| {{\phi _s}} \right\|_{L_t^2L_x^5}}d\kappa + {\int_s^\infty {\left\| {d\widetilde{u}} \right\|} _{L_t^{\infty}L_x^\infty }}{\left\| {\nabla {\phi _s}} \right\|_{L_t^2L_x^4}}d\kappa \\
&\le \int_s^\infty {\left( {{{\left\| {\nabla d\widetilde{u}} \right\|}_{L_t^\infty L_x^{20}}} + {{\left\| {{h^{ii}}{A_i}{\phi _i}} \right\|}_{L_t^\infty L_x^{20}}}} \right)} {\left\| {{\phi _s}} \right\|_{L_t^2L_x^5}}d\kappa \\
&+ {\int_s^\infty {\left\| {d\widetilde{u}} \right\|} _{L_t^{\infty}L_x^\infty }}{\left\| {\nabla {\phi _s}} \right\|_{L_t^2L_x^4}}d\kappa. \end{align*}
Thus this is acceptable by Proposition \ref{aaop} and interpolation between the $\|\nabla d\widetilde{u}\|_{L^{\infty}}$ bound and the $\|\nabla d\widetilde{u}\|_{L^{2}}$ bound in Proposition \ref{sl}. Finally we notice that (\ref{cuihua5}) is a consequence of (\ref{guihua7}) and
$${\omega(s)}{\left\| {{h^{ii}}\Gamma _{ii}^kA_k^{con}} \phi_s\right\|_{L_t^1L_x^2}} \le {\left\| {A_2^{con}} \right\|_{L_t^2L_x^{\frac{10}{3}}}}{\omega(s)}{\left\| {{\phi _s}} \right\|_{L_t^2L_x^5}}. $$ \end{proof} $\blacksquare$
Proposition \ref{bootstrap} combined with Proposition \ref{cuihua} yields the following. \begin{proposition}\label{xiaozi} Assume that the solution to (\ref{wmap1}) satisfies (\ref{boot5}) and (\ref{boot2}). Then for any $p\in(2,6)$, $\theta\in[0,2]$ \begin{align*}
{\left\| \omega(s){\nabla\phi_s} \right\|_{L^{\infty}_sL_t^\infty L_x^2}} + {\left\|\omega(s) |D|^{\frac{1}{2}}{\phi_s} \right\|_{L^{\infty}_sL_t^2 L_x^p}} &\le {\varepsilon^2 _1}\\
{\left\| \omega_1(s)|D|{\partial _t}\phi_s \right\|_{L^{\infty}_sL_t^2L_x^p}}
+\left\|\omega_{\theta}(s) (-\Delta)^{\theta}{\phi_s} \right\|_{L^{\infty}_sL_t^2L_x^p}&\le {\varepsilon^2 _1}. \end{align*} \end{proposition}
\subsection{Closing the bootstrap} \begin{lemma} Assume that the solution to (\ref{wmap1}) satisfies (\ref{boot5}) and (\ref{boot2}). Then for any $p\in(2,6+2\gamma]$ \begin{align}
{\left\| {(du,\partial_tu)} \right\|_{L_t^\infty L_x^2([0,T] \times {\Bbb H^2})}} + {\left\| {(\nabla{\partial _t}u,\nabla du)} \right\|_{L_t^\infty L_x^2([0,T] \times {\Bbb H^2})}}&\le {\varepsilon^2 _1}\label{boot8}\\
{\left\| {{\partial _t}u} \right\|_{L_t^2L_x^p([0,T] \times {\Bbb H^2})}} &\le {\varepsilon^2_1}.\label{boot9} \end{align} \end{lemma} \begin{proof} First we prove (\ref{boot9}). By $D_s\phi_t=D_t\phi_s$, $A_s=0$, one has \begin{align}
{\left\| {{\phi _t}(0,t,x)} \right\|_{L_t^2L_x^p}} \le {\left\| {\int_0^\infty {\left| {{\partial _s}{\phi _t}} \right|} ds} \right\|_{L_t^2L_x^p}} \le {\left\| {{\partial _t}{\phi _s}} \right\|_{L_s^1L_t^2L_x^p}}+{\left\| {{A _t}{\phi _s}} \right\|_{L_s^1L_t^2L_x^p}}. \end{align} Sobolev embedding gives
$${\left\| {{\partial _t}{\phi _s}} \right\|_{L_x^{6 + 2\gamma }}} + {\left\| {{\phi _s}} \right\|_{L_x^{6 + 2\gamma }}} \le {\left\| {{{\left( { - \Delta } \right)}^\vartheta }{\partial _t}{\phi _s}} \right\|_{L_x^{6 - \eta }}} + {\left\| {{{\left( { - \Delta } \right)}^\vartheta }{\phi _s}} \right\|_{L_x^{6 - \eta }}}, $$ where $\frac{\vartheta }{2} = \frac{1}{{6 - \eta }} - \frac{1}{{6 + 2\gamma }}$, $0<\eta\ll1,0<\gamma\ll1$. Thus (\ref{boot9}) follows by Proposition \ref{xiaozi} and Proposition \ref{aaop}. Second, we prove (\ref{boot8}). By Remark \ref{3sect}, ${\phi _i}(0,t,x) = \phi _i^\infty + \int_0^\infty {{\partial _s}{\phi _i}d\kappa}.$
Since $|d\widetilde{u}|\le \sqrt{h^{ii}}|\phi_i|$ and $\|\sqrt{h^{ii}}\phi _i^\infty\|_{L^2_x}\le \|dQ\|_{L^2}\le \mu_1$, it suffices to verify for any $(t,x)\in[0,T]\times\Bbb H^2$
$$\int_0^\infty \|\sqrt{h^{ii}}{\partial _s}{\phi _i}\|_{L^2_x}d\kappa\le \varepsilon^2_1. $$
This is acceptable by Proposition \ref{aaop}, Proposition \ref{xiaozi} and $|\sqrt{h^{ii}}\partial_s\phi_i|\le |\nabla \phi_s|+\sqrt{h^{ii}}|A_i||\phi_s|$. Recalling (\ref{poijn}) for the equation of $\phi_s$ evolving along the heat flow, we have by integration by parts, \begin{align*}
\frac{d}{{ds}}\left\| {\tau (\widetilde{u})} \right\|_{L_x^2}^2 &= \frac{d}{{ds}}\left\langle {{\phi _s},{\phi _s}} \right\rangle = 2\left\langle {{D_s}{\phi _s},{\phi _s}} \right\rangle \\
&= 2{h^{ii}}\left\langle {{D_i}{D_i}{\phi _s} - \Gamma _{ii}^k{D_k}{\phi _s},{\phi _s}} \right\rangle +\left\langle h^{ij}(\phi_s\wedge\phi_i)\phi_j,\phi_s\right\rangle\\
&= - 2{h^{ii}}\left\langle {{D_i}{\phi _s},{D_i}{\phi _s}} \right\rangle+\left\langle h^{ij}(\phi_s\wedge\phi_i)\phi_j,\phi_s\right\rangle. \end{align*}
Hence $\|\partial_s\widetilde{u}\|_{L^2_x}\le e^{-\delta s}$ shows \begin{align}
&\left\| {\tau (\widetilde{u}(0,t,x))} \right\|_{L_x^2}^2\lesssim \int_0^\infty {{h^{ii}}\left\langle {{D_i}{\phi _s},{D_i}{\phi _s}} \right\rangle } ds \nonumber\\
&\lesssim \int_0^\infty {\left\langle {\nabla {\phi _s},\nabla {\phi _s}} \right\rangle } ds + \int_0^\infty {{h^{ii}}\left\langle {{A_i}{\phi _s},{A_i}{\phi _s}} \right\rangle } ds+\int^{\infty}_0|d\widetilde{u}|^2|\phi_s|^2ds.\label{muijbnc} \end{align} The non-positive sectional curvature of $N=\Bbb H^2$ together with integration by parts implies
$$\left\| {\nabla d\widetilde{u}} \right\|_{L_x^2}^2 \lesssim \left\| {\tau (\widetilde{u})} \right\|_{L_x^2}^2 + \left\| {d\widetilde{u}} \right\|_{L_x^2}^2. $$ Hence (\ref{muijbnc}) gives \begin{align}
&\left\| {\nabla d\widetilde{u}(0,t,x)} \right\|_{L_x^2}^2 \nonumber\\ &\lesssim \int_0^\infty {\left\langle {\nabla {\phi _s},\nabla {\phi _s}} \right\rangle } ds
+\int^{\infty}_0|d\widetilde{u}|^2|\phi_s|^2ds+\int_0^\infty {{h^{ii}}\left\langle {{A_i}{\phi _s},{A_i}{\phi _s}} \right\rangle } ds+\|d\widetilde{u}(0,t,x)\|^2_{L^2_x}.\label{hjbn7v89} \end{align}
Since the $|d\widetilde{u}|$ term has been estimated, Proposition \ref{xiaozi}, Proposition \ref{aaop} and (\ref{hjbn7v89}) give
$$ {\left\| {\nabla d\widetilde{u}} \right\|_{L_t^\infty L_x^2([0,T] \times {\Bbb H^2})}}\le \varepsilon^2_1. $$
Finally we prove the desired estimates for $|\nabla\partial_t\widetilde{u}|$. Integration by parts yields, \begin{align*}
&\frac{d}{{ds}}\left\| {\nabla {\partial _t}\widetilde{u}} \right\|_{{L^2}}^2 = \frac{d}{{ds}}{h^{ii}}\left\langle {{D_i}{\phi _t},{D_i}{\phi _t}} \right\rangle = 2{h^{ii}}\left\langle {{D_s}{D_i}{\phi _t},{D_i}{\phi _t}} \right\rangle \\
&= 2{h^{ii}}\left\langle {{D_i}{D_t}{\phi _s},{D_i}{\phi _t}} \right\rangle + 2{h^{ii}}\left\langle {({\phi _s} \wedge {\phi _i}){\phi _t},{D_i}{\phi _t}} \right\rangle \\
&= - 2{h^{ii}}\left\langle {{D_t}{\phi _s},{D_i}{D_i}{\phi _t}} \right\rangle + 2\left\langle {{D_t}{\phi _s},{D_2}{\phi _t}} \right\rangle + 2{h^{ii}}\left\langle {({\phi _s} \wedge {\phi _i}){\phi _t},{D_i}{\phi _t}} \right\rangle \\
&= - 2\left\langle {{D_t}{\phi _s},{h^{ii}}{D_i}{D_i}{\phi _t} - {h^{ii}}\Gamma _{ii}^k{D_k}{\phi _t}} \right\rangle + 2{h^{ii}}\left\langle {({\phi _s} \wedge {\phi _i}){\phi _t},{D_i}{\phi _t}} \right\rangle. \end{align*} Recall (\ref{yfcvbn}), the parabolic equation of $\phi_t$ along heat flow, then
$$\frac{d}{{ds}}\left\| {\nabla {\partial _t}\widetilde{u}} \right\|_{{L^2}}^2 = - 2\left\langle {{D_t}{\phi _s},{D_s}{\phi _t}} \right\rangle + 2{h^{ii}}\left\langle {({\phi _s} \wedge {\phi _i}){\phi _t},{D_i}{\phi _t}} \right\rangle + 2{h^{ii}}\left\langle {{D_t}{\phi _s},({\phi _t} \wedge {\phi _i}){\phi _i}} \right\rangle. $$ Hence we conclude, \begin{align*}
&\left\| {\nabla {\partial _t}\widetilde{u}(0,t,x)} \right\|_{{L^2}}^2\\
&\lesssim \int_0^\infty {\left\langle {{\partial _t}{\phi _s},{\partial _t}{\phi _s}} \right\rangle d\kappa} + \int_0^\infty {\left\langle {{A_t}{\phi _s},{A_t}{\phi _s}} \right\rangle d\kappa} \\
&+ \int_0^\infty {{{\left\| {{\phi _s}} \right\|}_{L_x^2}}{{\left\| {d\widetilde{u}} \right\|}_{L_x^\infty }}{{\left\| {{\partial _t}\widetilde{u}} \right\|}_{L_x^\infty }}{{\left\| {\nabla {\partial _t}\widetilde{u}} \right\|}_{L_x^2}}d\kappa} + {\int_0^\infty {\left\| {{\partial _t}\widetilde{u}} \right\|} _{L_x^2}}\left\| {d\widetilde{u}} \right\|_{L_x^\infty }^2{\left\| {{D_t}{\phi _s}} \right\|_{L_x^2}}d\kappa. \end{align*} Thus by Proposition \ref{sl}, Proposition \ref{xiaozi} and Proposition \ref{aaop}, we have
$$\left\| {\nabla {\partial _t}\widetilde{u}(0,t,x)} \right\|_{{L^2}}^2 \le \varepsilon _1^4. $$ Therefore, we have proved all estimates in (\ref{boot8}) and (\ref{boot9}). \end{proof}
We summarize what we have proved in the following corollary. \begin{corollary} Assume $(-T^*,T_*)$ is the lifespan of the solution to (\ref{wmap1}) and let $\mu_1,\mu_2$ be sufficiently small. Then we have \begin{align*}
{\left\| {du} \right\|_{L_t^\infty L_x^2([0,{T_*}] \times {\Bbb H^2})}}& + {\left\| {{\partial _t}u} \right\|_{L_t^\infty L_x^2([ 0,{T_*}] \times {\Bbb H^2})}} + {\left\| {\nabla du} \right\|_{L_t^\infty L_x^2([0,{T_*}] \times {\Bbb H^2})}} \\
&+ {\left\| {\nabla {\partial _t}u} \right\|_{L_t^\infty L_x^2([0,{T_*}] \times {\Bbb H^2})}} + {\left\| {{\partial _t}u} \right\|_{L_t^2L_x^6([ 0,{T_*}] \times {\Bbb H^2})}} \le \varepsilon _1^2. \end{align*} Thus by Proposition \ref{global}, we conclude that $(u,\partial_tu)$ is a global solution to (\ref{wmap1}). \end{corollary}
\section{Proof of Theorem 1.1} Finally, we prove Theorem 1.1 based on Proposition \ref{xiaozi}. \begin{proposition} Let $u$ be the solution to (\ref{wmap1}) in $\mathcal{X}_{[0,\infty)}$. Then as $t\to \infty$, $u(t,x)$ converges uniformly to a harmonic map, namely $$ \mathop {\lim }\limits_{t \to \infty } \mathop {\sup }\limits_{x\in\Bbb H^2 }{\rm{dist}}_{\Bbb H^2}(u(t,x),Q(x))= 0, $$ where $Q:\Bbb H^2\to \Bbb H^2$ is the unperturbed harmonic map. \end{proposition} \begin{proof} For $u(t,x)$, by Proposition \ref{3.3}, we know that the corresponding heat flow converges to some harmonic map uniformly for $x\in\Bbb H^2$. Then by the definition of the distance on complete manifolds, we have \begin{align}\label{ppo0}
{\rm{dist}_{{\Bbb H^2}}}(u(t,x),Q(x)) \le \int_0^\infty {{{\left\| {{\partial _s}\widetilde{u}} \right\|}_{L_x^\infty }}ds}. \end{align}
For any $T>0$, $\mu>0$, since $|\partial_s\widetilde{u}|$ satisfies $(\partial_s-\Delta)|\partial_s\widetilde{u}|\le 0$, one has \begin{align}
\int_T^\infty {{{\left\| {{\partial _s}\widetilde{u}(s,t,x)} \right\|}_{L_x^\infty }}ds} &\lesssim \int_T^\infty {{e^{ - \frac{1}{8}s}}{{\left\| {\tau(u(t,x))} \right\|}_{L_x^2}}} ds \lesssim {e^{ - T/8}}{\left\| {\nabla du(t,x)} \right\|_{L_x^2}} \label{mu1}\\
\int_0^\mu {{{\left\| {{\partial _s}\widetilde{u}(s,t,x)} \right\|}_{L_x^\infty }}ds} &\lesssim \int_0^\mu {{{\left\| {{e^{s{\Delta _{{\Bbb H^2}}}}}\tau(u(t,x))} \right\|}_{L_x^\infty }}} ds \le \int_0^\mu {{s^{ - \frac{1}{2}}}{{\left\| {\nabla du(t,x)} \right\|}_{L_x^2}}} ds \nonumber\\
&\lesssim {\mu ^{\frac{1}{2}}}{\left\| {\nabla du(t,x)} \right\|_{L_x^2}} \label{mu2}
\end{align} Similarly, we have \begin{align}
\int_\mu ^T {{{\left\| {{\partial _s}\widetilde{u}(s,t,x)} \right\|}_{L_x^\infty }}ds}
&\lesssim \int_\mu ^T {{{\left\| {{e^{(s - \frac{\mu }{2}){\Delta _{{\Bbb H^2}}}}}{\partial _s}\widetilde{u}(\frac{\mu}{2},t,x)} \right\|}_{L_x^\infty }}} ds \nonumber\\
&\lesssim \int_\mu ^T {{{(s - \frac{\mu}{2})}^{ - \frac{1}{4}}}{{\left\| {{\partial _s}\widetilde{u}(\frac{\mu}{2},t,x)} \right\|}_{L_x^4}}} ds \nonumber\\
&\lesssim {\mu ^{ - \frac{1}{4}}}\int_\mu ^T {{{\left\| {{\phi _s}(\frac{\mu}{2},t,x)} \right\|}_{L_x^4}}} ds.\label{mu3} \end{align} Therefore it suffices to prove for a fixed $\mu>0$ \begin{align}\label{8ding}
\mathop {\lim }\limits_{t \to \infty } {\left\| {{\phi _s}(\mu )} \right\|_{L_x^4}} = 0. \end{align}
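To make the bookkeeping explicit (using the uniform-in-$t$ bound on $\|\nabla du(t)\|_{L_x^2}$ provided by the previous section), the estimates (\ref{ppo0})--(\ref{mu3}) combine into
$$\mathop {\sup }\limits_{x\in\Bbb H^2 }{\rm{dist}}_{\Bbb H^2}(u(t,x),Q(x))\lesssim \big({e^{ - T/8}} + {\mu ^{\frac{1}{2}}}\big){\left\| {\nabla du(t)} \right\|_{L_x^2}} + {\mu ^{ - \frac{1}{4}}}(T - \mu ){\left\| {{\phi _s}(\tfrac{\mu}{2},t, \cdot )} \right\|_{L_x^4}},$$
so, given $\epsilon>0$, one first fixes $T$ large and $\mu$ small to make the first term small uniformly in $t$, and then lets $t\to\infty$ in the second term using (\ref{8ding}) (applied with $\mu/2$ in place of $\mu$).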
Proposition \ref{xiaozi} implies ${\mu ^{\frac{1}{2}}}{\left\| {{\phi _s}(\mu )} \right\|_{L_t^2L_x^4}}+{\mu ^{\frac{1}{2}}}{\left\| \partial_t{{\phi _s}(\mu )} \right\|_{L_t^2L_x^4}}< \infty $, thus for any $\epsilon>0$ there exists a $T_0$ such that \begin{align}\label{7vgj}
{\left\| {{\phi _s}(\mu )} \right\|_{L_t^2L_x^4([{T_0},\infty ) \times {\Bbb H^2})}}+ {\left\|\partial_t {{\phi _s}(\mu )} \right\|_{L_t^2L_x^4([{T_0},\infty ) \times {\Bbb H^2})}}< \epsilon. \end{align} Particularly, for any interval $[a,a+1]$ of length one with $a\ge T_0$, there exists some $t_{a}\in[a,a+1]$ such that \begin{align}\label{shipo}
{\left\| {{\phi _s}(\mu ,{t_{a}})} \right\|_{L_x^4}} \le \epsilon/2. \end{align} Then by the fundamental theorem of calculus, for any $t'\in[a,a+1]$, \begin{align}\label{gouq9}
\left| {{{\left\| {{\phi _s}(\mu ,t')} \right\|}_{L_x^4}} - {{\left\| {{\phi _s}(\mu ,{t_a})} \right\|}_{L_x^4}}} \right| \le \int_{{t_a}}^{t'} {\left| {{\partial _t}{{\left\| {{\phi _s}(\mu ,t)} \right\|}_{L_x^4}}} \right|} dt. \end{align}
Since $|{\partial _t}{\left\| {{\phi _s}(\mu ,t)} \right\|_{L_x^4}}|\le {\left\| {{\partial _t}{\phi _s}(\mu ,t)} \right\|_{L_x^4}}$, H\"older's inequality, (\ref{gouq9}) and (\ref{7vgj}) show \begin{align*}
\left| {{{\left\| {{\phi _s}(\mu ,t')} \right\|}_{L_x^4}} - {{\left\| {{\phi _s}(\mu ,{t_a})} \right\|}_{L_x^4}}} \right| \le {\left\| {{\partial _t}{\phi _s}(\mu ,t)} \right\|_{L_t^2L_x^4}}{\left| t' - t_a \right|^{\frac{1}{2}}} \le {\left\| {{\partial _t}{\phi _s}(\mu ,t)} \right\|_{L_t^2L_x^4}}.
{\left\| {{\phi _s}(\mu ,t)} \right\|_{L_x^4}}\le \epsilon. \end{align*} Since $a$ is arbitrarily chosen, we obtain (\ref{8ding}). Therefore, Theorem 1.1 is proved. \end{proof}
\section{Proof of remaining lemmas and claims} We first collect some useful inequalities for the harmonic maps. \begin{lemma} Suppose that $Q$ is an admissible harmonic map in Theorem 1.1. If $0<\mu_1\ll 1$, then \begin{align}
\|\nabla dQ\|_{L^2}&\lesssim \mu_1\label{shubai4}\\
\|\nabla^2 dQ\|_{L^2}&\lesssim \mu_1.\label{tianren78} \end{align} \end{lemma} \begin{proof} By integration by parts and the non-positive sectional curvature of $N=\Bbb H^2$, \begin{align*}
\|\nabla dQ\|^2_{L^2}&\lesssim \|dQ\|^2_{L^2}+\|\tau(Q)\|^2_{L^2}\\
\|\nabla^2 dQ\|^2_{L^2}&\lesssim \|\nabla\tau(Q)\|^2_{L^2}+\|\nabla dQ\|^3_{L^2}+\|\nabla dQ\|^2_{L^4}\| dQ\|^2_{L^4}+\| dQ\|^6_{L^2}. \end{align*} Hence, since $\tau(Q)=0$, we have (\ref{shubai4}). Then (\ref{tianren78}) follows from (\ref{as4}), the Gagliardo-Nirenberg inequality and Sobolev embedding. \end{proof}
Now we prove Corollary 2.1. \begin{lemma} Fix $R_0>0$ and let $0<\mu_1,\mu_2\ll\mu_3\ll1$. Then the initial data $(u_0,u_1)$ in Theorem 1.1 satisfy \begin{align}
\|du_0\|_{L^2}+\|u_1\|_{L^2}+\|\nabla du_0\|_{L^2}+\|\nabla u_1\|_{L^2}\le \mu_3.\label{ojvbhuy} \end{align} \end{lemma} \begin{proof} First by (\ref{shubai4}), the harmonic map $Q$ satisfies \begin{align}\label{kijncvbn}
\|\nabla dQ\|_{L^2}+\|dQ\|_{L^2}\le \mu_1. \end{align} By (1.4) and Sobolev embedding, \begin{align}
\|u^k_0-Q^k\|_{L^{\infty}}\lesssim \|u^k_0-Q^k\|_{H^2}\le \mu_2. \end{align}
Hence $|u^1_0|+|u^2_0|\lesssim R_0+\mu_2.$ Then choosing $R=CR_0+C\mu_2$ in [Lemma 2.3,\cite{LZ}], we have \begin{align}\label{zxzxsdu}
\|du_0\|_{L^2}+\|\nabla du_0\|_{L^2}\le Ce^{8(CR_0+C\mu_2)}\big(\|\nabla^2 u^k_0\|_{L^2}+\|\nabla^2 u^k_0\|_{L^2}^2\big). \end{align} Again by [Lemma 2.3,\cite{LZ}] and (\ref{kijncvbn}), \begin{align}\label{zw34vbg}
\|\nabla^2Q^k\|_{L^2}\le Ce^{8(R_0)}\big(\|\nabla dQ\|_{L^2}+\|\nabla dQ\|_{L^2}^2\big)\le Ce^{8(R_0)}\mu_1. \end{align} Therefore, (1.4), (\ref{zw34vbg}) and (\ref{zxzxsdu}) give \begin{align}
\|du_0\|_{L^2}+\|\nabla du_0\|_{L^2}\le Ce^{8(CR_0+C\mu_2)}(\mu_1+\mu_2). \end{align} Letting $\mu_1$ and $\mu_2$ be sufficiently small depending on $R_0$, we obtain \begin{align}
\|du_0\|_{L^2}+\|\nabla du_0\|_{L^2}\le \mu_3. \end{align} \end{proof}
\begin{lemma}\label{symm} Let $W$ be the magnetic operator defined in Lemma \ref{hushuo} as \begin{align} W\varphi = - 2 {h^{ii}}A_i^\infty {\partial _i}\varphi -{h^{ii}}A_i^\infty A_i^\infty \varphi -{h^{ii}}\left( {\varphi \wedge \phi _i^\infty } \right)\phi _i^\infty-h^{ii}(\partial_iA^{\infty}_{i}-\Gamma^k_{ii}A^{\infty}_k). \end{align} Then $W$ is symmetric with domain $C^{\infty}_c(\Bbb H^2,\Bbb C^2)$. Moreover, $-\Delta+W$ is strictly positive if $\mu_1$ is sufficiently small. \end{lemma} \begin{proof} Since we work with complex valued functions here, the wedge operator $\wedge$ should first be extended to the complex number field by taking the inner product in (\ref{nb890km}) to be the complex inner product. By the explicit formula for $\Gamma^{k}_{ii}$ and $h^{ii}$, one has \begin{align}\label{kjc6789} h^{ii}\Gamma^k_{ii}A^{\infty}_k=h^{11}\Gamma^2_{11}A^{\infty}_2=A^{\infty}_2. \end{align} It is easy to see by the non-positiveness and symmetry of the sectional curvature that $\varphi\longmapsto -{h^{ii}}\left( {\varphi \wedge \phi _i^\infty } \right)\phi _i^\infty$ is a non-negative and symmetric operator on $L^2(\Bbb H^2,\Bbb C^2)$. Moreover, by the skew-symmetry of $A^{\infty}_i$, $\varphi\longmapsto -{h^{ii}}\left( {\varphi \wedge A _i^\infty } \right)A_i^\infty$ is a non-negative and symmetric operator on $L^2(\Bbb H^2,\Bbb C^2)$. We claim that $$\varphi\longmapsto 2 {h^{ii}}A_i^\infty {\partial _i}\varphi +h^{ii}(\partial_iA^{\infty}_{i}-\Gamma^k_{ii}A^{\infty}_k) $$ is a symmetric operator on $L^2(\Bbb H^2,\Bbb C^2)$ as well. Indeed, by the skew-symmetry of $A^{\infty}_i$, $\partial_iA^{\infty}_i$, integration by parts and (\ref{kjc6789}), \begin{align*} &\left\langle {2{h^{ii}}A_i^\infty {\partial _i}f + {h^{ii}}({\partial _i}A_i^\infty - \Gamma _{ii}^kA_k^\infty )f,g} \right\rangle \\ &= \left\langle {2{h^{ii}}A_i^\infty {\partial _i}f + {h^{ii}}{\partial _i}A_i^\infty f - A_2^\infty f,g} \right\rangle \\ &= \left\langle {{h^{ii}}{\partial _i}A_i^\infty f - A_2^\infty f,g} \right\rangle - \left\langle {2{h^{ii}}{\partial _i}A_i^\infty f,g} \right\rangle - \left\langle {2{h^{ii}}A_i^\infty f,{\partial _i}g} \right\rangle + \left\langle {2{h^{22}}A_2^\infty f,g} \right\rangle \\ &= \left\langle { - {h^{ii}}{\partial _i}A_i^\infty f + A_2^\infty f,g} \right\rangle - \left\langle {2{h^{ii}}A_i^\infty f,{\partial _i}g} \right\rangle \\ &= \left\langle {f,{h^{ii}}{\partial _i}A_i^\infty g - A_2^\infty g} \right\rangle + \left\langle {f,2{h^{ii}}A_i^\infty {\partial _i}g} \right\rangle \\ &= \left\langle {f,2{h^{ii}}A_i^\infty {\partial _i}g + {h^{ii}}{\partial _i}A_i^\infty g - A_2^\infty g} \right\rangle. \end{align*} It remains to prove that $-\Delta+W$ is strictly positive. Since we have shown $\varphi\longmapsto -{h^{ii}}\left( {\varphi \wedge \phi _i^\infty } \right)\phi _i^\infty$ and $\varphi\longmapsto -{h^{ii}}\left( {\varphi \wedge A _i^\infty } \right)A_i^\infty$ are nonnegative, it suffices to prove for some $\delta>0$ $$\left\langle { - \Delta f + 2{h^{ii}}A_i^\infty {\partial _i}f + {h^{ii}}({\partial _i}A_i^\infty - \Gamma _{ii}^kA_k^\infty )f,f} \right\rangle \ge \delta \left\langle {f,f} \right\rangle. $$ By the skew-symmetry of $A^{\infty}_i$ and $\partial_iA^{\infty}_i$, it reduces to \begin{align*} \left\langle { - \Delta f + 2{h^{ii}}A_i^\infty {\partial _i}f,f} \right\rangle \ge \delta \left\langle {f,f} \right\rangle. \end{align*} H\"older, (\ref{uv111}) and (\ref{xuejin}) imply for some universal constant $c>0$ \begin{align*}
& \left\langle { - \Delta f + 2{h^{ii}}A_i^\infty {\partial _i}f,f} \right\rangle \ge \left\| {\nabla f} \right\|_2^2 - 2{\left\| {\sqrt {{h^{ii}}} A_i^\infty } \right\|_\infty }{\left\| {\nabla f} \right\|_2}{\left\| f \right\|_2} \\
&\ge \frac{1}{2}\left\| {\nabla f} \right\|_2^2 + c\left\| f \right\|_2^2 - 2{\left\| {\sqrt {{h^{ii}}} A_i^\infty } \right\|_\infty }{\left\| {\nabla f} \right\|_2}{\left\| f \right\|_2} \\
&\ge \frac{1}{2}\left\| {\nabla f} \right\|_2^2 + c\left\| f \right\|_2^2 - 2{\mu _1}{\left\| {\nabla f} \right\|_2}{\left\| f \right\|_2}. \end{align*} Let $\mu_1$ be sufficiently small, then \begin{align*} \left\langle { - \Delta f + 2{h^{ii}}A_i^\infty {\partial _i}f,f} \right\rangle\ge \delta \left\langle {f,f} \right\rangle. \end{align*} \end{proof}
Recall the equation of the tension field $\phi_s$: \begin{lemma} The evolution of differential fields and the heat tension field along the heat flow are given by the following: \begin{align} &\partial_s\phi_s=h^{ii}D_iD_i\phi_s-h^{ii}\Gamma^k_{ii}D_k\phi_s+h^{ii}(\phi_s\wedge\phi_i)\phi_i \label{poijn}\\ &{\partial_s}{\phi _s}-\Delta {\phi _s}= 2{h^{ii}}{A_i}{\partial _i}{\phi _s} + {h^{ii}}\left( {{\partial _i}{A_i}} \right){\phi _s} - {h^{ii}}\Gamma _{ii}^k{A_k}{\phi _s} + {h^{ii}}{A_i}{A_i}{\phi _s}\nonumber \\ &+ {h^{ii}}\left( {{\phi _s} \wedge {\phi _i}} \right){\phi _i}\label{991}\\ &{\partial _s}{\phi _t} - \Delta {\phi _t} = 2h^{ii}{A_i}{\partial _i}{\phi _t} + h^{ii}{A_i}{A_i}{\phi _t} + h^{ii}{\partial _i}{A_i}{\phi _t} - {h^{ii}}\Gamma _{ii}^k{A_k}{\phi _t} \nonumber \\ &+ {h^{ii}}\left( {{\phi _t} \wedge {\phi _i}} \right){\phi _i}.\label{yfcvbn}\\ &{\partial _s}{\partial _t}{\phi _s}= \Delta {\partial _t}{\phi _s} + 2{h^{ii}}\left( {{\partial _t}{A_i}} \right){\partial _i}{\phi _s} + 2{h^{ii}}{A_i}{\partial _i}{\partial _t}{\phi _s} + {h^{ii}}\left( {{\partial _i}{\partial _t}{A_i}} \right){\phi _s}\nonumber\\ &+ {h^{ii}}\left( {{\partial _i}{A_i}} \right){\partial _t}{\phi _s} - {h^{ii}}\Gamma _{ii}^k\left( {{\partial _t}{A_k}} \right){\phi _s} - {h^{ii}}\Gamma _{ii}^k{A_k}{\partial _t}{\phi _s} + {h^{ii}}\left( {{\partial _t}{A_i}} \right){A_i}{\phi _s}\nonumber\\ &+ {h^{ii}}{A_i}\left( {{\partial _t}{A_i}} \right){\phi _s} + {h^{ii}}{A_i}{A_i}{\partial _t}{\phi _s} + {h^{ii}}\left( {{\partial _t}{\phi _s} \wedge {\phi _i}} \right){\phi _i}+ {h^{ii}}\left( {{\phi _s} \wedge {\partial _t}{\phi _i}} \right){\phi _i}\nonumber\\ & + {h^{ii}}\left( {{\phi _s} \wedge {\phi _i}} \right){\partial _t}{\phi _i}.\label{9923} \end{align} \end{lemma} \begin{proof} Recall that we use the orthogonal coordinates (\ref{vg}) throughout the paper. Recall the equation of $\phi_s$: \begin{align}\label{who} \phi_s=h^{ii}D_i\phi_i-h^{ii}\Gamma^k_{ii}\phi_k. \end{align} Applying $D_s$ to (\ref{who}) yields \begin{align*}
&{D_s}{\phi _s} = {h^{ii}}{D_s}{D_i}{\phi _i} - {h^{ii}}\Gamma _{ii}^k{D_s}{\phi _k} = {h^{ii}}{D_i}{D_i}{\phi _s} - {h^{ii}}\Gamma _{ii}^k{D_k}{\phi _s} + {h^{ii}}\left( {{\phi _s} \wedge {\phi _i}} \right){\phi _i}\\
&= \Delta {\phi _s} + 2{h^{ii}}{A_i}{\partial _i}{\phi _s} + {h^{ii}}\left( {{\partial _i}{A_i}} \right){\phi _s} - {h^{ii}}\Gamma _{ii}^k{A_k}{\phi _s} + {h^{ii}}{A_i}{A_i}{\phi _s} + {h^{ii}}\left( {{\phi _s} \wedge {\phi _i}} \right){\phi _i}. \end{align*} The torsion-free identity and commutator identity give \begin{align*} {D_s}{\phi _t} &= {D_t}{\phi _s} = {D_t}\left( {{h^{ii}}{D_i}{\phi _i} - {h^{ii}}\Gamma _{ii}^k{\phi _k}} \right) = {h^{ii}}{D_t}{D_i}{\phi _i} - {h^{ii}}\Gamma _{ii}^k{D_t}{\phi _k} \\ &= {h^{ii}}{D_i}{D_i}{\phi _t} - {h^{ii}}\Gamma _{ii}^k{D_t}{\phi _k} + {h^{ii}}\left( {{\partial _t}u \wedge {\partial _i}u} \right){\partial _i}u. \end{align*} Therefore the differential field $\phi_t$ satisfies \begin{align*} {\partial _s}{\phi _t} - \Delta {\phi _t} = 2h^{ii}{A_i}{\partial _i}{\phi _t} + h^{ii}{A_i}{A_i}{\phi _t} + h^{ii}{\partial _i}{A_i}{\phi _t} - {h^{ii}}\Gamma _{ii}^k{A_k}{\phi _t} + {h^{ii}}\left( {{\phi _t} \wedge {\phi _i}} \right){\phi _i}. \end{align*} Applying $\partial_t$ to (\ref{991}) gives (\ref{9923}). \end{proof}
\section{Acknowledgments} We owe our thanks to the anonymous referee for helpful comments which greatly improved this paper. We thank Prof. Daniel Tataru for helpful comments on our work, and especially for pointing out the necessity of adding Remark 1.1.
The first version of this paper is Chapter 3 of Li's thesis; we have divided it into two parts for publication. Li owes gratitude to Prof. Youde Wang, Hao Yin, and Cong Song for guidance on geometric PDEs and geometric analysis.
\end{document}
\begin{document}
\begin{abstract} We develop a theory of abstract intermediate function spaces on a compact convex set $X$ and study the behaviour of multipliers and centers of these spaces. In particular, we provide some criteria for coincidence of the center with the space of multipliers and a general theorem on boundary integral representation of multipliers. We apply the general theory in several concrete cases, among others to strongly affine Baire functions, to the space $A_f(X)$ of fragmented affine functions, to the space $(A_f(X))^\mu$, the monotone sequential closure of $A_f(X)$, to their natural subspaces formed by Borel functions, or, in some special cases, to the space of all strongly affine functions. In addition, we prove that the space $(A_f(X))^\mu$ is determined by extreme points and provide a large number of illustrating examples and counterexamples. \end{abstract}
\maketitle
\tableofcontents
\section{Introduction}
We investigate centers, spaces of multipliers and their integral representation for distinguished spaces of affine functions on compact convex sets. The story starts with results on affine continuous functions. If $X$ is a compact convex set in a (Hausdorff) locally convex space, we denote by \gls{AcX} the space of all real-valued continuous affine functions on $X$. This space equipped with the supremum norm and pointwise order is an example of a unital $F$-space (see the next section for the notions unexplained here). Therefore, it possesses the \emph{center}\index{center!of $A_c(X)$} \gls{ZAcX}, i.e., the set of all elements of the form $T1_X$, where $T\colon A_c(X)\to A_c(X)$ is an operator satisfying $-\alpha I\le T\le \alpha I$ for some $\alpha\ge 0$. This notion may be viewed as a generalization of the center of a unital C$^*$-algebra. Indeed, if $A$ is a unital C$^*$-algebra, its state space $S(A)$ is a compact convex set and $A_c(S(A))$ is canonically identified with the self-adjoint part of $A$. Moreover, by \cite[Lemma 4.4]{wils} the center $Z(A_c(S(A)))$ is in this way identified with the self-adjoint part of the standard center of $A$.
It is shown in \cite[Theorem II.7.10]{alfsen} that the center of $A_c(X)$ has two more natural representations. On the one hand, it coincides with the space of \emph{multipliers}\index{multiplier!in $A_c(X)$} in $A_c(X)$, i.e., the functions $m\in A_c(X)$ such that for each $a\in A_c(X)$ there exists $b\in A_c(X)$ with $m(x)a(x)=b(x)$ for each $x\in \ext X$, where \gls{extX} stands for the set of all extreme points of $X$. On the other hand, $Z(A_c(X))$ may be identified with the space of functions on $\ext X$ that are continuous in the facial topology. This identification is given by the restriction mapping.
In \cite{edwards} these notions were studied for the monotone sequential closure $(A_c(X))^\mu$ of $A_c(X)$ and analogous results were proved. In \cite{smith-london} these results were extended to the sequential closure $(A_c(X))^\sigma$ of $A_c(X)$ and also to the monotone closure of the differences of semicontinuous affine functions (this space is denoted by $(A_s(X))^\mu$). It is shown in these papers that all these spaces are $F$-spaces with unit such that their center coincides with the space of multipliers. A crucial property of these spaces is that they are determined by the values on $\ext X$, i.e., that every function $u$ from these spaces satisfies $$ \inf u(\ext X)\le u\le \sup u(\ext X).$$
In addition, a central integral representation is available for these spaces. More precisely, the following result is proved in \cite[Theorem 5.5]{smith-london} (see also \cite[Theorem 1.2]{smith-pacjm}).
\begin{thma}\label{T:smith} There exists a $\sigma$-algebra $\mathcal R$ of subsets of $\ext X$ such that the restriction mapping $\pi$ maps the center $Z((A_s(X))^\mu)$ onto the space of all bounded $\mathcal R$-measurable functions on $\ext X$. Moreover, there exists a unique affine map $x\mapsto \nu_x$ from $X$ into the set of probability measures on $\mathcal R$ satisfying, for $z\in Z((A_s(X))^\mu)$, $z(x)=\int_{\ext X} \pi(z)\,\mbox{\rm d}\nu_x$. \end{thma}
These results have profound consequences in the theory of $C^*$-algebras, where in fact this line of research has originated (see \cite{davies,pedersen-scand,pedersen-weak,pedersen-appl} or \cite[Section 5]{edwards}). Later on, the concept of the center was considered for arbitrary Banach spaces, see \cite{edwards-banach} and \cite{behrends}.
The original motivation of our research contained in the present paper was to extend the theory of \cite{edwards} and \cite{smith-london} to the case of the so-called \emph{fragmented} affine functions on $X$ (denoted by $A_f(X)$). This class of functions includes both semicontinuous affine functions as well as the affine functions of the first Baire class. The fact that this task should be interesting is witnessed by the result of \cite{dostal-spurny} saying that the space $A_f(X)$ is determined by extreme points. One of our results, Theorem~\ref{extdeter} below, extends this theorem to the space $(A_f(X))^\mu$, the monotone sequential closure of $A_f(X)$.
However, it has appeared that it is natural and useful to develop a more abstract theory of \emph{intermediate functions spaces}\index{intermediate function space} which are closed spaces $H$ satisfying $A_c(X)\subset H\subset A_b(X)$ (here \gls{AbX} stands for the space of all affine bounded functions on $X$). In this abstract context we investigate the relationship of the center and the space of multipliers, preservation of extreme points, characterizations by a measurability condition on $\ext X$ and related results on integral representation. We also collect a number of examples and counterexamples.
We point out that the proofs of Theorem~\ref{T:smith} and its predecessors from \cite{edwards} use the same abstract scheme. We unify this approach in Theorem~\ref{T:integral representation H} and extend it in Theorems~\ref{t:aha-regularita} and~\ref{T: meritelnost H=H^uparrow cap H^downarrow} to a more general context. This not only covers the results of \cite{edwards,smith-london} but can be applied in many other cases, in particular to fragmented functions and their limits. Moreover, in some cases (for example for strongly affine Baire functions) we provide a more concrete description of the analogues of the $\sigma$-algebra $\mathcal R$ from Theorem~\ref{T:smith}.
The paper is organized as follows:
In the following section we collect basic definitions, facts and notation from the Choquet theory, topology and descriptive set theory needed throughout the paper. In Section~\ref{s:IFS} we introduce intermediate function spaces and collect their basic properties and representations.
Section~\ref{s:multi-atp} is devoted to centers and multipliers. We introduce and compare these notions. In addition, we introduce a new notion of strong multipliers and relate it to the other notions. It should be stressed that for $A_c(X)$, the space of affine continuous functions, all multipliers are strong and both generalizations are natural.
In Section~\ref{sec:mult-acx} we provide a characterization of multipliers on $A_c(X)$ which is, in many cases, easy to apply.
Section~\ref{s:preserveext} is devoted to the investigation of preservation of extreme points to larger intermediate function spaces. This is closely related to the coincidence of the center and the spaces of multipliers. We provide some sufficient conditions and also some counterexamples.
In Section~\ref{s:determined} we use the results of \cite{teleman} on boundary integral representation of semicontinuous functions to show that the space $(A_s(X))^\sigma$ is determined by extreme points. We also prove the above-mentioned result that the space $(A_f(X))^\mu$ is determined by extreme points.
Section~\ref{s:reprez-abstraktni} contains an abstract version of Theorem~\ref{T:smith} on integral representation of spaces of multipliers and also its generalization to a more general context.
In Section~\ref{s:meas sm} we give a better version of some results of Section~\ref{s:reprez-abstraktni} for strong multipliers, using suitable families of split faces.
Section~\ref{sec:splifaces} is devoted to extension results for strongly affine functions on split faces which are later used to formulate concrete versions of the abstract results from Section~\ref{s:reprez-abstraktni} in a more descriptive way.
In Section~\ref{sec:baire} we give concrete versions of the abstract results for spaces of strongly affine Baire functions and Section~\ref{sec:beyond} is devoted to the results on larger spaces (defined using semicontinuous, Borel or fragmented functions).
The last two sections are devoted to examples and counterexamples. Section~\ref{sec:strange} contains several strange counterexamples showing that some natural inclusions are not automatic. Finally, the last section is devoted to an analysis of a special class of simplices defined using the porcupine topology and considered by Stacey \cite{stacey}. This analysis illustrates that some sufficient conditions for certain representations are natural and inspires several open problems.
\section{Basic notions}
In this section we fix the notation, collect definitions of basic notions and provide some background results needed in the paper. We start by pointing out that we will deal only with real vector spaces. In particular, we consider only real-valued affine functions.
\subsection{Compact spaces and important classes of sets and functions} \label{ssc:csp}
If $X$ is a compact (Hausdorff) topological space, we denote by \gls{C(X)} the Banach space of all real-valued continuous functions on $X$ equipped with the sup-norm. The dual of $C(X)$ will be identified (by the Riesz representation theorem) with $M(X)$, the space of signed (complete) Radon measures on $X$ equipped with the total variation norm and the respective weak$^*$ topology. Let \gls{M1(X)} stand for the set of all probability Radon measures on $X$. A set $B\subset X$ is \emph{universally measurable}\index{set!universally measurable} if it is measurable with respect to any probability Radon measure on $X$. If $B\subset X$ is universally measurable, we write $M_1(B)$ for the set of all $\mu\in M_1(X)$ with $\mu(X\setminus B)=0$. Let \gls{1B} denote the characteristic function of $B\subset X$; we often write $1=1_X$ for the constant function $1$.
Before coming to specific classes of functions, let us introduce some general notation for families of functions. Let $F$ be a family of bounded functions on an~abstract set $\Gamma$. We set \begin{itemize}
\item \gls{Fsigma} to be the smallest family of functions on $\Gamma$ that contains $F$ and is closed with respect to taking limits of pointwise converging bounded sequences;
\item \gls{Fmu} to be the smallest family of functions on $\Gamma$ that contains $F$ and is closed with respect to taking pointwise limits of monotone bounded sequences;
\item \gls{Fdown} to be the set of all pointwise limits of bounded non-increasing sequences from $F$;
\item \gls{Fup} to be the set of all pointwise limits of bounded non-decreasing sequences from $F$. \end{itemize} Obviously $F^\mu\subset F^\sigma$. The converse holds whenever $F$ is a lattice (with pointwise operations), but not in general (see, e.g., Proposition~\ref{P:Baire-srovnani}(ii) below). We will also repeatedly use the following easy observation.
\begin{lemma}\label{L:muclosed is closed}
Let $\Gamma$ be a set and let $F\subset \ell^\infty(\Gamma)$ be a linear subspace containing constant functions such that $F^\mu=F$. Then $F$ is uniformly closed. \end{lemma}
\begin{proof} This follows from the beginning of the proof of \cite[Lemma 3.5]{edwards}. But the quoted lemma has stronger assumptions (even though they are not used at this step), so we provide the easy argument.
It is enough to show that, given a sequence $(f_n)$ in $F$ such that $\sum_n\norm{f_n}<\infty$, the sum $f=\sum_n f_n$ (which exists in $\ell^\infty(\Gamma)$) belongs to $F$. So, let $(f_n)$ be such a sequence and denote $k=\sum_n\norm{f_n}\in[0,\infty)$ and $f=\sum_n f_n\in\ell^\infty(\Gamma)$. Set $g_n=f_n+\norm{f_n}$. By our assumptions $g_n\in F$ and $g_n\ge0$ for each $n\in\mathbb N$. The sequence of partial sums $(\sum_{j=1}^n g_j)_n$ is a non-decreasing sequence in $F$ uniformly bounded by $2k$, hence $g=\sum_{j=1}^{\infty} g_j\in F$. Then $f=g-k\in F$ as well.
\end{proof}
If $\mathcal A$ and $\mathcal B$ are two families of subsets of a set $\Gamma$, we denote, as usual, by
\item \gls{Asigma} the family of sets which may be expressed as a countable union of elements of $\mathcal A$;
\item \gls{Adelta} the family of sets which may be expressed as a countable intersection of elements of $\mathcal A$;
\item \gls{sigma(A)} the $\sigma$-algebra generated by $\mathcal A$;
\item \gls{AwedgeB} the system $\{A\cap B;\, A\in\mathcal A,B\in\mathcal B\}$;
\item \gls{AveeB} the system $\{A\cup B;\, A\in\mathcal A,B\in\mathcal B\}$.
\end{itemize}
If $Y$ is a Tychonoff (i.e., completely regular and Hausdorff) space, we denote by \gls{Ba1(Y)} the space of all \emph{Baire-one functions}\index{function!Baire-one} on $Y$ (i.e., pointwise limits of sequences of continuous functions) and by \gls{Ba1b(Y)} the space of all bounded Baire-one functions on $Y$. The space of \emph{Baire functions}\index{function!Baire} on $Y$ (i.e., the smallest space containing continuous functions and closed under pointwise limits of sequences) is denoted by \gls{Ba(Y)}, bounded Baire functions are denoted by \gls{Bab(Y)}.
Baire functions may be characterized using a measurability condition. Recall that a \emph{zero set}\index{set!zero} is a set $Z\subset Y$ of the form $$Z=[f=0]=\{y\in Y;\, f(y)=0\}$$ for a continuous function $f$. Complements of zero sets are called \emph{cozero sets}\index{set!cozero}. The families of zero and cozero sets will be denoted by \gls{Zer} and \gls{Coz}, respectively. It is known that $f\in \Ba_1(Y)$ if and only if it is $\operatorname{Zer}_\sigma$-measurable (see \cite[Exercise 3.A.1]{lmz}). Further, the $\sigma$-algebra generated by zero (or cozero) sets is called \emph{Baire $\sigma$-algebra}\index{sigma-algebra@$\sigma$-algebra!Baire} and its elements are \emph{Baire sets}\index{set!Baire}. Another known result says that $f\in\Ba(Y)$ if and only if it is Baire measurable (i.e., $f^{-1}(U)$ is a Baire set for any open set $U\subset\mathbb R$).
As usual, \emph{Borel sets}\index{set!Borel} are elements of the $\sigma$-algebra generated by open sets and a function is called \emph{Borel}\index{function!Borel} if the inverse image of any open set is a Borel set. We write \gls{Bo(Y)} for the family of all Borel functions on $Y$. An~important subclass of Borel functions is the family \gls{Bo1(Y)} of functions of the \emph{first Borel class}\index{function!of the first Borel class}, i.e., $(F\wedge G)_\sigma$-measurable functions, where $F\wedge G$ denotes the family of sets obtained as the intersection of a closed set and an open set. (This class of functions coincides with the family $\operatorname{Bof}_1(Y,\mathbb R)$ from \cite{spurny-amh}.)
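For instance, if $Y$ is metrizable, then every closed set $F\subset Y$ is a zero set, witnessed by the (continuous) distance function:
$$F=[\operatorname{dist}(\cdot,F)=0],$$
so in the metrizable case zero sets coincide with closed sets, and consequently Baire sets coincide with Borel sets and Baire functions with Borel functions.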
Now we come to fragmented functions and related classes of functions and sets. A function $f:Y\to\mathbb R$ \begin{itemize}
\item is called \emph{fragmented}\index{function!fragmented}, if for any $\epsilon>0$ and nonempty closed set $F\subset Y$ there exists a relatively open nonempty set $U\subset F$ such that $\operatorname{diam} f(U)<\epsilon$;
\item is said to have the \emph{point of continuity property} if $f|_F$ has a point of continuity for any nonempty closed $F\subset Y$. \end{itemize} Further, a set $A\subset Y$ is \emph{resolvable}\index{set!resolvable} if the characteristic function $1_A$ is fragmented. This definition is used in \cite[Section 2]{koumou}, where such sets are also called \emph{$H$-sets}. The term resolvable is used in \cite[Chapter I, \S12]{kuratowski} where an alternative definition is used. The two definitions are equivalent by \cite[Lemma 2.2]{koumou} or \cite[Chapter I, \S12.V]{kuratowski}. We will denote the family of resolvable sets by \gls{H}.
The following theorem revealing relationships of these classes follows from \cite[Theorem 2.3]{koumou}.
\begin{thma}\label{T:a} Let $Y$ be a Tychonoff space and let $f:Y\to\mathbb R$ be a function. Then we have the following. \begin{enumerate}[$(a)$] \item $f$ has the point of continuity property $\Longrightarrow$ $f$ is fragmented $\Longrightarrow$ $f$ is $\mathcal H_\sigma$-measurable \item If $Y$ is compact (or, more generally, hereditarily Baire), then the three conditions from $(a)$ are equivalent. \end{enumerate} \end{thma}
With a few exceptions we will consider fragmented functions only on a compact space $X$.
In such a case we write \gls{Fr(X)} (or \gls{Frb(X)}) for the space of all fragmented (or bounded fragmented) functions on $X$. It easily follows from Theorem~\ref{T:a} that fragmented functions form a Banach lattice and algebra, see \cite[Theorem~5.10]{lmns}. It also readily follows that any semicontinuous function $f\colon X\to \mathbb R$ is fragmented.
Further, it follows from \cite[Lemma 4.1]{koumou} that any resolvable set is universally measurable. Therefore also any fragmented function on a compact space is universally measurable (in the usual sense that the preimage of any open set is universally measurable).
We further note that fragmented functions may be viewed as a generalization of Baire-one functions. This is witnessed by the following known result:
\begin{thma}\label{T:b} Let $X$ be a compact Hausdorff space and let $f:X\to\mathbb R$ be a function. Then we have the following. \begin{enumerate}[$(a)$]
\item $f$ is Baire-one $\Longrightarrow$ $f$ is of the first Borel class $\Longrightarrow$ $f$ is fragmented.
\item If $X$ is metrizable, then the three conditions from (a) are equivalent. \end{enumerate} \end{thma}
Indeed, any Baire-one function is $\operatorname{Zer}_\sigma$-measurable, and thus $F_\sigma$-measurable, so it is of the first Borel class. Further, resolvable sets form an algebra containing open sets (by \cite[Proposition 2.1(i)]{koumou}), so any $(F\wedge G)_\sigma$ set belongs to $\mathcal H_\sigma$ and thus any function of the first Borel class is $\mathcal H_\sigma$-measurable.
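In particular, the observation above that semicontinuous functions on a compact space are fragmented can be verified along the same lines: if $f$ is upper semicontinuous, then for all $a<b$
$$f^{-1}((a,b))=\bigcup_{n\in\mathbb N}\big(\{f<b\}\cap\{f\ge a+\tfrac1n\}\big)\in (F\wedge G)_\sigma\subset\mathcal H_\sigma,$$
and since every open subset of $\mathbb R$ is a countable union of such intervals, $f$ is $\mathcal H_\sigma$-measurable and Theorem~\ref{T:a}$(b)$ applies; the lower semicontinuous case follows by passing to $-f$.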
Further, if $X$ is metrizable, then a subset of $X$ is resolvable if and only if it is both $F_\sigma$ and $G_\delta$ (see, e.g., \cite[Chapter II, \S24.III]{kuratowski}), hence any fragmented function is $F_\sigma$-measurable, thus Baire-one.
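Outside the metrizable setting the implications in Theorem~\ref{T:b}$(a)$ cannot be reversed in general. A standard example: take $X=\{0,1\}^{\omega_1}$ and $x\in X$. Then $1_{\{x\}}$ is upper semicontinuous (as $\{x\}$ is closed), hence fragmented, but it is not a Baire function: every Baire subset of $\{0,1\}^{\omega_1}$ depends only on countably many coordinates, whereas the singleton
$$\{x\}=\bigcap_{\alpha<\omega_1}\{y\in X;\, y(\alpha)=x(\alpha)\}$$
does not.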
We finish this section by collecting known results on transferring of topological properties of mappings by continuous surjections between compact spaces.
\begin{lemma}\label{L:kvocient}
Let $X$ and $Y$ be compact spaces, let $\varphi:X\to Y$ be a continuous surjection and let $f:Y\to\mathbb R$ be a function.
Then we have the following equivalences:
\begin{enumerate}[$(a)$]
\item $f$ is continuous if and only if $f\circ \varphi$ is continuous.
\item $f$ is lower semicontinuous if and only if $f\circ \varphi$ is lower semicontinuous.
\item $f\in\Ba_1(Y)\Longleftrightarrow f\circ \varphi\in\Ba_1(X)$.
\item $f\in\Ba(Y)\Longleftrightarrow f\circ \varphi\in\Ba(X)$.
\item $f\in\Bo_1(Y)\Longleftrightarrow f\circ \varphi\in\Bo_1(X)$.
\item $f\in\Bo(Y)\Longleftrightarrow f\circ \varphi\in\Bo(X)$.
\item $f$ is fragmented if and only if $f\circ \varphi$ is fragmented.
\item $f$ is $\sigma(\mathcal H)$-measurable if and only if $f\circ \varphi$ is $\sigma(\mathcal H)$-measurable.
\end{enumerate} \end{lemma}
\begin{proof}
Assertions $(a)$ and $(b)$ follow easily from the fact that $\varphi$ is a closed mapping (and hence a quotient map).
The remaining assertions follow for example from \cite[Theorem A]{affperf}. (Assertions $(e)$, $(f)$ and $(h)$ follow directly from the quoted theorem. To get assertions $(c)$ and $(d)$ we need to use moreover the above-mentioned facts that Baire-one functions are characterized by $\operatorname{Zer}_\sigma$-measurability and Baire functions are characterized by measurability with respect to the Baire $\sigma$-algebra. To prove $(g)$ we use in addition Theorem~\ref{T:a}$(b)$.) \end{proof}
\subsection{Compact convex sets} \label{ssc:ccs} Let $X$ be a compact convex set in a locally convex (Hausdorff) topological vector space. Given a Radon probability measure $\mu$ on $X$, we write \gls{r(mu)} for the \emph{barycenter of $\mu$}\index{barycenter}, i.e., the unique point $x\in X$ satisfying $a(x)=\int_X a \,\mbox{\rm d}\mu$ for each affine continuous function $a$ on $X$ (see \cite[Proposition~I.2.1]{alfsen} or \cite[Chapter 7, \S\,20]{lacey}). Conversely, for a point $x\in X$, we denote by $M_{x}(X)$ the set of all Radon probability measures on $X$ with the barycenter $x$ (i.e., the set of all probabilities \emph{representing}\index{measure!representing a point} $x$).
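For instance, if $\mu=\frac12(\varepsilon_y+\varepsilon_z)$ for points $y,z\in X$, where $\varepsilon_y$ denotes the Dirac measure at $y$, then
$$r(\mu)=\tfrac12(y+z),$$
since $a(\tfrac12(y+z))=\tfrac12(a(y)+a(z))=\int_X a\,\mbox{\rm d}\mu$ for every affine continuous function $a$ on $X$.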
We recall that $x \in X$ is an \emph{extreme point}\index{extreme point} of $X$ if whenever $x=\frac12(y+ z)$ for some $y, z \in X$, then $x=y=z$. We write $\ext X$ for the set of extreme points of $X$.
The usual dilation order $\prec$ on the set $M_1(X)$ of Radon probability measures on $X$ is defined as \gls{muprecnu} if and only if $\mu(f)\le \nu(f)$ for any real-valued convex continuous function $f$ on $X$. (Recall that \gls{mu(f)} is a shortcut for $\int f\,\mbox{\rm d}\mu$.) A measure $\mu\in M_1(X)$ is said to be \emph{maximal}\index{measure!maximal} if it is maximal with respect to the dilation order. If $B\supset \ext X$ is a Baire set, then $\mu(B)=1$ for any maximal measure $\mu\in M_1(X)$ (this follows from \cite[Corollary I.4.12]{alfsen}). Also, maximal measures are supported by $\overline{\ext X}$, see \cite[Theorem 3.79(c)]{lmns}.
Further, in case $X$ is metrizable, maximal probability measures are exactly the probabilities carried by the $G_\delta$ set $\ext X$ of extreme points of $X$ (see, e.g., \cite[p. 35]{alfsen} or \cite[Corollary 3.62]{lmns}).
By the Choquet representation theorem, for any $x\in X$ there exists a maximal representing measure (see \cite[p. 192, Corollary]{lacey} or \cite[Theorem I.4.8]{alfsen}). A compact convex set $X$ is termed \emph{simplex}\index{simplex} if this maximal measure is uniquely determined for each $x\in X$. It is a \emph{Bauer simplex}\index{simplex!Bauer} if moreover the set of extreme points is closed. In this case, the set $X$ is affinely homeomorphic with the set $M_1(\ext X)$ (see \cite[Corollary II.4.2]{alfsen}).
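A basic example to keep in mind: for a compact Hausdorff space $K$, the set $X=M_1(K)$, equipped with the weak$^*$ topology, is a Bauer simplex whose extreme points are exactly the Dirac measures,
$$\ext M_1(K)=\{\varepsilon_x;\, x\in K\},$$
a set homeomorphic to $K$ and, in particular, closed.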
We will also need the following generalization of extreme points. A subset $A\subset X$ is called \emph{extremal}\index{set!extremal} if, whenever $\frac12(y+ z)\in A$ for some $y, z \in X$, then $y,z\in A$. A convex extremal set is called a \emph{face}\index{face}.
Let $A\subset X$ be a face. The \emph{complementary set}\index{set!complementary to a face} $A'$ is the union of all faces disjoint from $A$. Then $A'$ is clearly extremal, so it is a face as soon as it is convex. In such a case it is called the \emph{complementary face}\index{face!complementary} of $A$. Further, a face $A$ is said to be \begin{itemize}
\item a \emph{parallel face}\index{face!parallel} if $A'$ is convex and for each $x\in X\setminus (A\cup A')$ there is a unique $\lambda\in (0,1)$ such that $x=\lambda a+(1-\lambda)a'$ for some $a\in A$, $a'\in A'$;
\item a \emph{split face}\index{face!split} if $A'$ is convex and for each $x\in X\setminus (A\cup A')$ there is a unique $\lambda\in (0,1)$ and a unique pair $a\in A$, $a'\in A'$ such that $x=\lambda a+(1-\lambda)a'$. \end{itemize} Clearly, any split face is parallel but the converse is not true. If $A$ is a split face (or at least a parallel face), there is a canonical mapping assigning to each $x\in X\setminus (A\cup A')$ the unique value $\lambda$ from the definition. If we extend it by the values $1$ on $A$ and $0$ on $A'$, we clearly obtain an affine function on $X$. We will denote it by \gls{lambdaA}. We finish this section by noting that in a simplex any closed face is split by \cite[Theorem II.6.22]{alfsen}.
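To illustrate these notions in the simplest possible case: if $X=[0,1]\subset\mathbb R$, then $A=\{0\}$ is a closed split face with complementary face $A'=\{1\}$, and the associated affine function is
$$\lambda_A(x)=1-x,\qquad x\in[0,1],$$
since every $x\in(0,1)$ is written in a unique way as $x=\lambda\cdot 0+(1-\lambda)\cdot 1$ with $\lambda=1-x\in(0,1)$.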
\subsection{Distinguished spaces of affine functions}\label{ssc:meziprostory}
Given a compact convex set $X$, we denote by \gls{AbX} the space of all real-valued bounded affine functions on $X$. This space equipped with the supremum norm and the pointwise order is an ordered Banach space. It will serve as the basic surrounding space for our investigation.
We will further consider the following distinguished subsets of $A_b(X)$:
\begin{itemize} \item \gls{AcX} stands for the space of all affine continuous functions on $X$. It is a closed linear subspace of $A_b(X)$.
\item \gls{A1(X)} stands for the space of all affine Baire-one functions on $X$. It is a closed subspace of $A_b(X)$. We also note that by the Mokobodzki theorem (see, e.g., \cite[Theorem 4.24]{lmns}) any affine Baire-one function is a pointwise limit of a bounded sequence of affine continuous functions, i.e., it is \emph{of the first affine class}.\index{function!of the first affine class}
\item \gls{Al(X)} denotes the set of all real-valued lower semicontinuous affine functions on $X$. This is not a linear space, but it is a convex cone contained in $A_b(X)$. \item \gls{As(X)} denotes the space $A_l(X)-A_l(X)$. It is a linear subspace of $A_b(X)$. This space need not be closed in $A_b(X)$ by Proposition~\ref{P:Baire-srovnani}(b) below. So, we will consider also its closure $\overline{A_s(X)}$.
\item $A_b(X)\cap\Bo_1(X)$ is the space of all affine functions of the first Borel class on $X$ (note that any such function is automatically bounded). It is a closed subspace of $A_b(X)$ which contains $A_1(X)\cup\overline{A_s(X)}$.
\item \gls{Af(X)} stands for the space of all fragmented affine functions on $X$. Recall that any fragmented affine function on $X$ has a point of continuity (due to Theorem~\ref{T:a}) and thus it is bounded on $X$ by \cite[Lemma 4.20]{lmns}. It easily follows from Theorem~\ref{T:a} that $A_f(X)$ is a closed subspace of $A_b(X)$. Moreover, by Theorem~\ref{T:b} we deduce that $A_b(X)\cap \Bo_1(X)\subset A_f(X)$. \item \gls{Asa(X)} denotes the space of all \emph{strongly affine} functions\index{function!strongly affine}, i.e., the space of all universally measurable functions $f\colon X\to\mathbb R$ satisfying $\mu(f)=f(r(\mu))$ for each $\mu\in M_1(X)$. It is clear that any strongly affine function is affine and it follows from the proof of \cite[Satz 2.1.(c)]{krause} that any strongly affine function is bounded. Hence, $A_{sa}(X)$ is a closed subspace of $A_b(X)$. Moreover, any fragmented affine function is strongly affine, see \cite[Theorem 4.21]{lmns}. \end{itemize} We summarize the above-mentioned inclusions: \begin{equation}\label{eq:prvniinkluze} \begin{array}{ccccccccc} A_c(X)&\subset& A_1(X)&\subset& A_b(X)\cap\Bo_1(X)&\subset&A_f(X)&\subset&A_{sa}(X) \\ \cap&&&&\cup &&&&\cap \\ A_l(X)&\subset& A_s(X) &\subset& \overline{A_s(X)}& && &A_b(X). \end{array} \end{equation}
In case $X$ is metrizable, the situation is simpler. More specifically, in this case we have: \begin{equation}\label{eq:prvniinkluze-metriz} \begin{aligned} A_c(X)&\subset A_l(X)\subset A_s(X)\subset\overline{A_s(X)}\subset A_1(X)=A_b(X)\cap \Bo_1(X)=A_f(X)\\&\subset A_{sa}(X)\subset A_b(X).\end{aligned} \end{equation}
We will further consider spaces $$(A_c(X))^\mu, (A_1(X))^\mu, (A_c(X))^\sigma, (A_s(X))^\mu, (A_b(X)\cap\Bo_1(X))^\mu, (A_f(X))^\mu.$$ By Lemma~\ref{L:muclosed is closed} all these families are closed linear subspaces of $A_b(X)$, in particular $\overline{A_s(X)}\subset (A_s(X))^\mu$.
\subsection{$F$-spaces and their centres} \label{ssc:fspaces}
An \emph{$F$-space with unit}\index{F-space with unit@$F$-space with a unit} is a partially ordered Banach space $A$ with closed positive cone $A^+$ together with an element $e\in A$ of unit norm satisfying \[ -\norm{a}e\le a\le \norm{a}e,\quad a\in A. \] The element $e$ is then called \emph{the unit} of $A$. An abstract theory of $F$-spaces and related types of ordered Banach spaces is developed in \cite{perdrizet}. Examples of $F$-spaces with unit include the spaces $A_c(X)$, $A_b(X)$ and other spaces defined in Section~\ref{ssc:meziprostory} above. The role of the element $e$ is played by the constant function equal to $1$.
In fact, any $F$-space with unit may be represented as $A_c(X)$ for a suitable $X$. Indeed, let $A$ be an $F$-space with unit $e$. Denote $$\gls{S(A)}=\{\varphi\in A^*;\, \norm{\varphi}\le 1\ \&\ \varphi(e)=1\}.$$ Then $S(A)$ is obviously a convex weak$^*$-compact subset of $A^*$. It is called the \emph{state space}\index{state space!of an $F$-space} of $A$. Then $A$ is identified with $A_c(S(A))$ by the following lemma which is essentially known in the theory of $F$-spaces.
\begin{lemma}\label{L:F-spaces}
Let $A$ be an $F$-space with unit $e$. Then the following assertions hold.
\begin{enumerate}[$(i)$]
\item $B_{A^*}=\operatorname{conv}(S(A)\cup (-S(A)))$.
\item The operator $T:A\to A_c(S(A))$ defined by
$$T(a)(\varphi)=\varphi(a),\quad a\in A, \varphi\in S(A)$$
is a linear order-preserving isometry of $A$ onto $A_c(S(A))$.
\end{enumerate}
\end{lemma}
\begin{proof}
$(i)$: Inclusion `$\supset$' is obvious. Further, the set on the right-hand side is clearly a convex symmetric weak$^*$-compact set. So, to prove the equality it is enough to show that $S(A)\cup (-S(A))$ is a $1$-norming subset of $B_{A^*}$. To this end fix any $a\in A\setminus\{0\}$. We may find maximal $\alpha\in\mathbb R$ and minimal $\beta\in\mathbb R$ such that $\alpha e\le a\le \beta e$. Then $\norm{a}=\max\{-\alpha,\beta\}$. Up to passing to $-a$ if necessary we may and shall assume that $\norm{a}=\beta>0$. We get $\norm{a+e}=\beta+1$. The Hahn-Banach theorem yields some $\varphi\in B_{A^*}$ with $\varphi(a+e)=\beta+1$. It follows that
$$\beta=\norm{a}\ge\varphi(a)\ge \beta+1-\varphi(e),$$
hence $\varphi(e)=1$ (thus $\varphi\in S(A)$) and $\varphi(a)=\beta=\norm{a}$. This completes the argument.
$(ii)$: It is clear that $T$ is a well-defined linear operator on $A$ with values in $A_c(S(A))$ and that it is order-preserving. By the very definition we get $\norm{T}\le 1$. In the proof of (i) we have shown that
$S(A)\cup (-S(A))$ is a $1$-norming subset of $B_{A^*}$. It follows that $T$ is an isometry. It remains to prove that $T$ is surjective. To this end fix any $f\in A_c(S(A))$. Then we define $\widetilde{f}:B_{A^*}\to \mathbb R$ by
$$\widetilde{f}(t\varphi_1-(1-t)\varphi_2)=t f(\varphi_1)-(1-t) f(\varphi_2),\quad \varphi_1,\varphi_2\in S(A), t\in[0,1].$$
By (i) we know that each $\varphi\in B_{A^*}$ may be expressed as $t\varphi_1-(1-t)\varphi_2$ for some $\varphi_1,\varphi_2\in S(A)$ and $t\in[0,1]$. Further, $\widetilde{f}$ is well defined as whenever $$t\varphi_1-(1-t)\varphi_2=s\psi_1-(1-s)\psi_2,$$ we get $$t\varphi_1+(1-s)\psi_2=s\psi_1+(1-t)\varphi_2.$$ By \cite[Part II, Theorem 6.2]{alfsen-effros-ann} the norm is additive on the positive cone of $A^*$, so $t+1-s=s+1-t$, in other words, $s=t$. Now, using the affinity of $f$, we easily deduce that $$t f(\varphi_1)+(1-s)f(\psi_2)=sf(\psi_1)+(1-t)f(\varphi_2).$$ It follows that $\widetilde{f}$ is a well-defined affine function. Moreover, $\widetilde{f}(0)=0$ and $\widetilde{f}$ is weak$^*$-continuous by a quotient argument. (Indeed, the mapping $q:S(A)\times S(A)\times [0,1]\to B_{A^*}$ defined by $q(\varphi_1,\varphi_2,t)=t\varphi_1-(1-t)\varphi_2$ is a quotient mapping and $\widetilde{f}\circ q$ is continuous.) It follows by the Banach-Dieudonn\'e theorem that $f=T(a)$ for some $a\in A$. \end{proof}
If $W$ is a partially ordered Banach space, we denote by $\mathfrak{D}(W)$ the \emph{ideal center}\index{ideal center} of the ordered algebra $L(W)$ of bounded linear operators on $W$, i.e., $$\gls{D(W)}=\{T\in L(W);\, \exists\lambda \ge0\colon -\lambda I\le T\le \lambda I\}.$$ Then $\mathfrak{D}(W)$ is an algebra.
We refer the reader to \cite{alfsen-effros-ann} or \cite{wils} for a detailed discussion on properties of these algebras. We note that in \cite[\S7]{alfsen}, elements of $\mathfrak{D}(W)$ are called \emph{order-bounded} operators, but we do not use this term in order to avoid confusion with one of the standard notions from the theory of Banach lattices.
If $A$ is an $F$-space with unit $e$, the \emph{center}\index{center!of an $F$-space} $Z(A)$ of $A$ is defined by $$\gls{Z(A)}=\{Te;\, T\in\mathfrak{D}(A)\}$$ (see \cite[page 159]{alfsen} for details).
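As a simple illustration, let $A=C(K)$ for a compact space $K$; this is an $F$-space with unit $e=1_K$. For every $g\in C(K)$ the multiplication operator $M_g\colon f\mapsto g\cdot f$ satisfies $-\norm{g}I\le M_g\le\norm{g}I$ (indeed, $(\norm{g}I\pm M_g)(f)=(\norm{g}\pm g)\cdot f\ge0$ whenever $f\ge0$), hence $M_g\in\mathfrak{D}(C(K))$ and $g=M_g(1_K)\in Z(C(K))$. Since always $Z(A)\subset A$, we conclude that $Z(C(K))=C(K)$.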
\begin{lemma}\label{L:extenze na bidual}
Let $A$ be a Banach space. For any bounded linear operator $T:A\to A^{**}$ we define
$$\widehat{T}=\left(T^*\circ \kappa_{A^*}\right)^*,$$
where $\kappa_E$ denotes the canonical embedding of a Banach space $E$ into $E^{**}$. Then the following assertions hold.
\begin{enumerate}[$(a)$]
\item $\widehat{T}$ is a bounded linear operator on $A^{**}$ such that $\widehat{T}\circ \kappa_A=T$ (i.e., $\widehat{T}$ extends $T$).
\item $\widehat{\kappa_A}=I_{A^{**}}$.
\item If $A$ is an ordered Banach space and $T\ge0$, then $\widehat{T}\ge0$.
\item Assume that $A$ is an $F$-space with unit $e$ and $T$ satisfies $-\lambda\kappa_A\le T\le\lambda\kappa_A$ for some $\lambda>0$. Then $\widehat{T}\in\mathfrak{D}(A^{**})$ and hence $T(e)\in Z(A^{**})$.
\end{enumerate} \end{lemma}
\begin{proof}
$(a)$: It is clear that $\widehat{T}$ is a bounded linear operator on $A^{**}$. To prove the equality fix any $a\in A$ and $a^*\in A^*$. Then
$$\begin{aligned}
\widehat{T}(\kappa_A(a))(a^*)&=(T^*\circ\kappa_{A^*})^*(\kappa_A(a))(a^*)=\kappa_A(a)(T^*(\kappa_{A^*}(a^*)))\\&=T^*(\kappa_{A^*}(a^*))(a)=\kappa_{A^*}(a^*)(T(a))=T(a)(a^*), \end{aligned}$$
which completes the argument.
$(b)$: Fix $a^{**}\in A^{**}$ and $a^*\in A^*$ and compute:
$$\widehat{\kappa_A}(a^{**})(a^*)=((\kappa_A)^*\circ \kappa_{A^*})^*(a^{**})(a^*)
=a^{**}((\kappa_A)^*(\kappa_{A^*}(a^*))).$$
Given $a\in A$ we have
$$(\kappa_A)^*(\kappa_{A^*}(a^*))(a)
=\kappa_{A^*}(a^*)(\kappa_A(a))=
\kappa_A(a)(a^*)=a^*(a),$$
so $(\kappa_A)^*(\kappa_{A^*}(a^*))=a^*$ and the proof is complete.
$(c)$: Assume $T\ge 0$. Fix $a^{**}\in A^{**}$ such that $a^{**}\ge0$ and $a^*\in A^*$ with $a^*\ge0$. Then
$$\widehat{T}(a^{**})(a^*)=(T^*\circ \kappa_{A^*})^*(a^{**})(a^*)
=a^{**}(T^*(\kappa_{A^*}(a^*))).$$
Given $a\in A$ with $a\ge0$ we have
$$T^*(\kappa_{A^*}(a^*))(a)=\kappa_{A^*}(a^*)(T(a))=T(a)(a^*)\ge0,$$
hence $T^*(\kappa_{A^*}(a^*))\ge0$ and thus also $a^{**}(T^*(\kappa_{A^*}(a^*)))\ge0$.
$(d)$: Under the given assumptions we deduce from $(b)$ and $(c)$ that $-\lambda I_{A^{**}}\le \widehat{T}\le\lambda I_{A^{**}}$. Since $\kappa_A(e)$ is the unit of $A^{**}$ and by (a) we have $\widehat{T}(\kappa_A(e))=T(e)$, the proof is complete. \end{proof}
We will use several times the following easy abstract lemma on continuity.
\begin{lemma}\label{L:spojitost monot}
Let $A$ be an ordered vector space. Let $T:A\to A$ be a linear operator such that $0\le T\le I$.
If $(x_\nu)$ is a non-decreasing net in $A$ with supremum $x\in A$, then $(Tx_\nu)$ is a non-decreasing net in $A$ with supremum $Tx$.
I.e., we have
$$x_\nu\nearrow x\mbox{ in }A\Longrightarrow T(x_\nu)\nearrow T(x) \mbox{ in }A.$$ \end{lemma}
\begin{proof}
Assume that $(x_\nu)$ is a non-decreasing net with supremum $x$. Since $T\ge0$, $(Tx_\nu)$ is a non-decreasing net and $Tx$ is its upper bound. It remains to prove that it is the least upper bound.
So, let $y$ be any upper bound of $(Tx_\nu)$.
For each $\nu$ we have $x-x_\nu\ge0$, hence
$$0\le T(x)-T(x_\nu)\le x-x_\nu,$$
so $$y\ge T(x_\nu)\ge T(x)-x+x_\nu.$$
Hence, $y$ is an upper bound of $(T(x)-x+x_\nu)$. But this net has supremum $T(x)-x+x=T(x)$, so $y\ge T(x)$.
Therefore $T(x)$ is the least upper bound of $(T(x_\nu))$ and the proof is complete. \end{proof}
\subsection{Function spaces}\label{ssc:ch-fs}
An important source of examples of compact convex sets is provided by state spaces of function spaces. Therefore we recall basic facts from the theory of function spaces described in detail in \cite[Chapter 3]{lmns}.
If $K$ is a compact (Hausdorff) space and $E\subset C(K)$ is a subspace of $C(K)$ containing constant functions and separating points of $K$, we call $E$ a \emph{function space}\index{function space}. Function spaces generalize spaces of affine continuous functions -- if $X$ is a compact convex set, then $E=A_c(X)$ is a function space.
Let \[ S(E)=\{\varphi\in E^*;\, \norm{\varphi}=\varphi(1)=1\} \] be endowed with the weak$^*$ topology. Then $S(E)$ is a compact convex set. Its elements are called \emph{states on $E$} and $S(E)$ is called the \emph{state space}\index{state space!of a function space} of $E$. We also note that $\varphi\in S(E)$ if and only if $\varphi$ is a positive functional of norm one.
Let $\gls{phi}\colon K\to S(E)$ be defined as the evaluation mapping. Then $\phi$ is a homeomorphic injection. Further, define $\Phi\colon E\to A_c(S(E))$ by \[ \gls{Phi}(h)(\varphi)=\varphi(h),\quad \varphi\in S(E), h\in E. \] Then $\Phi$ is an isometric isomorphism of $E$ into $A_c(S(E))$. If $E$ is closed, $\Phi$ is moreover surjective (see \cite[Proposition 4.26]{lmns}).
For each $x\in K$, let \[ \gls{Mx(E)}=\{\mu\in M_1(K);\, \mu(h)=h(x) \mbox{ for each } h\in E\}. \] Then $M_x(E)$ is nonempty since it contains at least the Dirac measure $\varepsilon_x$. Further, we define the Choquet boundary $\Ch_E K$\index{Choquet boundary} as \[ \gls{ChE(K)}=\{x\in K;\, M_x(E)=\{\varepsilon_x\}\}. \] Then $\phi(\Ch_E K)=\ext S(E)$ (see \cite[Proposition 4.26(d)]{lmns}).
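To illustrate these notions, consider the classical example $K=[0,1]$ and $$E=\{f\in C(K);\, f(\tfrac12)=\tfrac12(f(0)+f(1))\},$$ a closed function space (it contains the constants and the identity). Here $E^\perp$ is the linear span of the measure $\varepsilon_{1/2}-\frac12(\varepsilon_0+\varepsilon_1)$, so $\mu\in M_x(E)$ if and only if $\mu=\varepsilon_x+t\left(\varepsilon_{1/2}-\frac12(\varepsilon_0+\varepsilon_1)\right)$ for some $t\in\mathbb R$ with $\mu\ge0$; checking positivity of the coefficients yields $\Ch_E K=[0,1]\setminus\{\tfrac12\}$.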
If $K=X$ is a compact convex set and $E=A_c(X)$, we obtain that $\Ch_{E} X=\ext X$ (see \cite[Theorem 2.40]{lmns}).
\begin{remark}
Let $E$ be a closed function space on a compact space $K$. Then $E$ (equipped with the inherited norm and the pointwise order) is an $F$-space with unit (the unit is the constant function $1_K$).
The state space defined in the present section coincides with the state space from Section~\ref{ssc:fspaces}. Moreover, the operator $\Phi$ defined above coincides with the operator $T$ from Lemma~\ref{L:F-spaces}(ii). \end{remark}
\section{Intermediate function spaces}\label{s:IFS}
Let $X$ be a compact convex set. Any closed subspace $H\subset A_b(X)$ containing $A_c(X)$ will be called an \emph{intermediate function space}\index{intermediate function space}. Further, we say that an intermediate function space $H$ is \emph{determined by extreme points}\index{intermediate function space!determined by extreme points} if $$\forall u\in H\colon \inf u(\ext X)\le u\le \sup u(\ext X).$$
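For instance, $A_c(X)$ is always determined by extreme points: by the Bauer maximum principle each $u\in A_c(X)$ attains its maximum over $X$ at some point of $\ext X$, so $\sup u(X)=\sup u(\ext X)$, and applying this to $-u$ gives also $\inf u(X)=\inf u(\ext X)$.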
Examples of intermediate function spaces include the closed subspaces of $A_b(X)$ mentioned in Section~\ref{ssc:meziprostory}. The present section is devoted mainly to various representations and general properties of intermediate function spaces. We start by observing that any intermediate function space is an $F$-space with unit $1_X$ in the terminology from Section~\ref{ssc:fspaces} above. Therefore we may consider the state space $S(H)$\index{state space!of an intermediate function space}. The following lemma is a more precise version of Lemma~\ref{L:F-spaces} in this context.
\begin{lemma}\label{L:intermediate} Let $H$ be an intermediate function space. \begin{enumerate}[$(a)$]
\item The operator $T:H\to A_c(S(H))$ defined by
$$T(a)(\varphi)=\varphi(a),\quad a\in H, \varphi\in S(H)$$
is a linear order-preserving isometry of $H$ onto $A_c(S(H))$.
\item For $x\in X$ define
$$\gls{iota}(x)(a)=a(x),\quad a\in H.$$
Then $\iota(x)\in S(H)$. Moreover, $\iota:X\to S(H)$ is a one-to-one affine mapping and $\iota(X)$ is dense in $S(H)$.
\item If $H=A_c(X)$, then $\iota$ is an affine homeomorphism of $X$ onto $S(A_c(X))$.
\item $H$ is determined by extreme points if and only if $\ext S(H)\subset\overline{\iota(\ext X)}$.
\item For any $\varphi\in S(H)$ there is a unique $\gls{pi}(\varphi)\in X$ such that
$$\varphi(a)=a(\pi(\varphi)),\quad a\in A_c(X).$$
Moreover, $\pi:S(H)\to X$ is a continuous affine surjection.
\item $\pi\circ\iota=\mbox{\rm id}_X$. \end{enumerate} \end{lemma}
\begin{proof} Assertion $(a)$ follows from Lemma~\ref{L:F-spaces}.
Define $\iota$ as in $(b)$. Obviously $\iota(x)\in S(H)$ for $x\in X$ and $\iota$ is clearly an affine mapping. Since $A_c(X)$ separates points of $X$ by the Hahn-Banach theorem and $H\supset A_c(X)$, we deduce that $\iota$ is one-to-one. Further, given any $f\in A_c(S(H))$, by $(a)$ we find $a\in H$ with $f=T(a)$.
Since $T$ is an isometry, we deduce
$$
\begin{aligned}
\norm{f}&=\norm{a}=\sup\{\abs{a(x)};\, x\in X\} =\sup\{\abs{\iota(x)(a)};\, x\in X\}\\ &=\sup\{\abs{f(\iota(x))};\, x\in X\}.
\end{aligned}
$$ It follows that $\iota(X)$ is dense in $S(H)$ and the proof of $(b)$ is complete.
$(c)$: If $H=A_c(X)$, then $\iota$ is continuous (from $X$ to the weak$^*$-topology), hence $\iota(X)$ is compact. We conclude using $(b)$.
$(d)$: Assume first that $H$ is determined by extreme points. It follows that for each $a\in H$ we have
$$\sup\{ \iota(x)(a);\, x\in \ext X\}=\sup\{ \iota(x)(a);\, x\in X\}=\sup\{ \varphi(a);\, \varphi\in S(H)\},$$
where the second equality follows from the density of $\iota(X)$ in $S(H)$. Hence, the Hahn-Banach separation theorem implies that
$S(H)=\overline{\operatorname{conv} \iota(\ext X)}$.
Milman's theorem now yields
$\ext S(H)\subset \overline{\iota(\ext X)}$.
Conversely, assume $\ext S(H)\subset \overline{\iota(\ext X)}$. Let $a\in H$ and $c\in\mathbb R$ be such that $a\le c$ on $\ext X$. Then $T(a)\le c$ on $\iota(\ext X)$. Since $T(a)\in A_c(S(H))$, we deduce that $T(a)\le c$ on $\overline{\iota(\ext X)}\supset\ext S(H)$. Thus $T(a)\le c$ on $S(H)$ and we deduce that $a\le c$ on $X$.
$(e)$: Let $\varphi\in S(H)$. Then $\varphi|_{A_c(X)}\in S(A_c(X))$ and clearly $\varphi\mapsto \varphi|_{A_c(X)}$ is a continuous affine surjection of $S(H)$ onto $S(A_c(X))$. We may conclude using $(c)$.
Assertion $(f)$ is obvious. \end{proof}
As a consequence we get the following representation of states on $H$.
\begin{lemma}\label{L:reprezentace} Let $H$ be an intermediate function space. Then for each $\varphi\in S(H)$ there is a net $(x_\nu)$ in $X$ such that $$\varphi(a)=\lim_\nu a(x_\nu),\quad a\in H.$$ Moreover, the net $(x_\nu)$ converges in $X$ to $\pi(\varphi)$.
Conversely, if $(x_\nu)$ is a net in $X$ such that $\lim_\nu a(x_\nu)$ exists for each $a\in H$, then $$\varphi(a)=\lim_\nu a(x_\nu),\quad a\in H$$ defines a state on $H$ and $\lim_\nu x_\nu=\pi(\varphi)$. \end{lemma}
\begin{proof} Let $\varphi\in S(H)$. By Lemma~\ref{L:intermediate}$(b)$ there is a net $(x_\nu)$ in $X$ such that $\iota(x_\nu)\to\varphi$ in $S(H)$. This proves the existence of $(x_\nu)$. Using moreover assertion $(e)$ of Lemma~\ref{L:intermediate} we get that $$\lim_\nu a(x_\nu)=a(\pi(\varphi)),\quad a\in A_c(X).$$ Thus it follows from assertion $(c)$ that $x_\nu\to \pi(\varphi)$.
The converse is obvious. \end{proof}
Lemma~\ref{L:intermediate} shows, in particular, that given an intermediate function space $H$ on a compact convex set $X$, there is a compact convex set $Y=S(H)$ and two affine mappings $\iota$ and $\pi$ with certain properties. The next lemma provides a converse -- given a pair of compact convex sets $X,Y$ and two affine mappings with certain properties, we may reconstruct the respective intermediate function space. Therefore there is a canonical correspondence between pairs $(X,H)$, where $X$ is a compact convex set and $H$ an intermediate function space, and pairs of compact convex sets accompanied by compatible pairs of affine mappings.
\begin{lemma}\label{L:dva kompakty} Let $X$ and $Y$ be compact convex sets. Assume there is an affine continuous surjection $\varpi:Y\to X$ and an affine injection $\jmath:X\to Y$ with $\jmath(X)$ dense in $Y$ and such that $\varpi\circ\jmath=\mbox{\rm id}_X$. Then the following assertions are valid. \begin{enumerate}[$(a)$]
\item The operator $U:A_c(Y)\to A_b(X)$ defined by $U(f)=f\circ\jmath$ is an isometric injection and $H=U(A_c(Y))$ is an intermediate function space.
\item $H$ is determined by extreme points if and only if $\ext Y\subset\overline{\jmath(\ext X)}$.
\item Let $T$, $\pi$ and $\iota$ be the mappings associated to $H$ by Lemma~\ref{L:intermediate}. Further, let $\iota_Y:Y\to S(A_c(Y))$ be the mapping associated to $A_c(Y)$ (in place of $H$) by Lemma~\ref{L:intermediate}. Then the following holds:
\begin{enumerate}[$(i)$]
\item The dual operator $U^*$ maps $S(H)$ homeomorphically onto $S(A_c(Y))$;
\item $\iota=(U^*|_{S(H)})^{-1}\circ \iota_Y\circ\jmath$;
\item $\pi=\varpi\circ (\iota_Y)^{-1}\circ U^*|_{S(H)}$;
\item $Tf=U^{-1}(f)\circ(\iota_Y)^{-1}\circ U^*|_{S(H)}$ for $f\in H$.
\end{enumerate} \end{enumerate} \end{lemma}
\begin{proof} $(a)$: Any $f\in A_c(Y)$ is bounded and $\jmath(X)$ is dense in $Y$, hence $U$ is an isometric embedding of $A_c(Y)$ into $A_b(X)$. Moreover, if $a\in A_c(X)$, then $a\circ\varpi\in A_c(Y)$ and $$U(a\circ\varpi)=a\circ\varpi\circ\jmath=a,$$ so $A_c(X)\subset U(A_c(Y))$. Hence, $H=U(A_c(Y))$ is indeed an intermediate function space.
Assertion $(c)$ follows by a straightforward calculation and assertion $(b)$ is a consequence of $(c)$ and Lemma~\ref{L:intermediate}$(d)$. \end{proof}
There is one more view on intermediate function spaces. We may look at them as spaces in between $A_c(X)$ and $(A_c(X))^{**}$. The following lemma explains it.
\begin{lemma}\label{L:bidual}
Let $X$ be a compact convex set. Let $\iota:X\to S(A_c(X))$ be the mapping from Lemma~\ref{L:intermediate}. Set $Y=(B_{(A_c(X))^*},w^*)$.
Then the following assertions hold.
\begin{enumerate}[$(a)$]
\item There is a (unique) linear surjective isometry $\Psi:A_b(X)\to (A_c(X))^{**}$ such that
$$\Psi(f)(\iota(x))=f(x),\quad x\in X, f\in A_b(X).$$
Moreover, $\Psi$ is order-preserving and a homeomorphism from the topology of pointwise convergence to the weak$^*$ topology.
\item $\Psi(f)|_Y$ is continuous if and only if $f\in A_c(X)$;
\item $\Psi(f)|_Y\in (A_c(Y))^\sigma$ if and only if $f\in (A_c(X))^\sigma$;
\item $\Psi(f)|_Y$ is Baire-one if and only if $f\in A_1(X)$;
\item $\Psi(f)|_Y$ is a Baire function if and only if $f$ is a Baire function;
\item $\Psi(f)|_Y$ is of the first Borel class if and only if $f\in A_b(X)\cap\Bo_1(X)$;
\item $\Psi(f)|_Y$ is fragmented if and only if $f\in A_f(X)$.
\item $\Psi(f)|_Y$ is strongly affine if and only if $f$ is strongly affine.
\end{enumerate} \end{lemma}
\begin{proof}
Assertion $(a)$ is just a reformulation of \cite[Proposition 4.32]{lmns}.
Let us continue by proving assertions $(b)$, $(d)$--$(g)$.
To prove the `only if' parts it is enough to observe that $f=\Psi(f)\circ \iota$ and that $\iota$ is continuous (see Lemma~\ref{L:intermediate}).
To prove the `if' parts we will use a quotient argument as in the proof of \cite[Lemma 5.39(a)]{lmns} (which in fact covers all the cases except for $(g)$). Let $q:X\times X\times [0,1]\to B_{(A_c(X))^*}$ be defined by
$$q(x,y,t)=t\iota(x)-(1-t)\iota(y),\quad x,y\in X, t\in[0,1].$$
By combining Lemma~\ref{L:intermediate}(c) with Lemma~\ref{L:F-spaces}(i) we see that $q$ is a continuous surjection. Moreover, given any $f\in A_b(X)$ we have
$$(\Psi(f)\circ q)(x,y,t)=\Psi(f)(t\iota(x)-(1-t)\iota(y))=t f(x)-(1-t) f(y).$$
Hence, the properties of $f$ easily transfer to $\Psi(f)\circ q$ and then by Lemma~\ref{L:kvocient} to $\Psi(f)|_Y$.
Assertion $(c)$ follows from $(b)$ using assertion $(a)$.
Assertion $(h)$ follows from \cite[Lemma 5.39(b)]{lmns}.
\end{proof}
In \cite[Proposition 3.2 and Proposition 4.1]{edwards} it is proved that $Z(A_c(X))\subset Z((A_c(X))^\mu)\subset Z(A_b(X))$. This result was extended in \cite[Proposition 4.7 and Proposition 4.11]{smith-london} to get $Z(A_c(X))\subset Z((A_c(X))^\mu)\subset Z((A_c(X))^\sigma)\subset Z(A_b(X))$ and $Z(A_c(X))\subset Z((A_s(X))^\mu)\subset Z(A_b(X))$. The next proposition says, in particular, that the fact that some centers are contained in $Z(A_b(X))$ is not specific to these concrete spaces but holds for any intermediate function space.
\begin{prop}\label{P:ZH subset ZAbX}
Let $X$ be a compact convex set.
\begin{enumerate}[$(i)$]
\item Let $T:A_c(X)\to A_b(X)$ be a linear operator satisfying
$$-\lambda f\le T(f)\le \lambda f,\quad f\in A_c(X), f\ge0,$$
for some $\lambda>0$. Then $T$ may be extended to an operator $\widetilde{T}\in \mathfrak{D}(A_b(X))$.
\item $Z(H)\subset Z(A_b(X))$ for any intermediate function space $H$ on $X$.
\end{enumerate} \end{prop}
\begin{proof}
$(i)$: Set $S=\Psi\circ T$ where $\Psi$ is from Lemma~\ref{L:bidual}. Then $S$ satisfies the assumptions of Lemma~\ref{L:extenze na bidual}(d). The just quoted lemma provides an operator $\widehat{S}$. It remains to set $\widetilde{T}=\Psi^{-1}\circ\widehat{S}\circ\Psi$.
$(ii)$: Let $H$ be any intermediate function space and let $h\in Z(H)$. Then there is $T\in \mathfrak{D}(H)$ with $T(1)=h$. Note that $T|_{A_c(X)}$ satisfies the assumptions of $(i)$, hence there is $S\in\mathfrak{D}(A_b(X))$ with $S|_{A_c(X)}=T|_{A_c(X)}$. In particular $h=T(1)=S(1)\in Z(A_b(X))$.
\iffalse
Let $h\in Z(H)$. Then there is $T\in \mathfrak{D}(H)$ with $T(1)=h$. Without loss of generality we may assume that $0\le T\le I$.
Let $\iota:X\to S(H)$ and $\iota_0:X\to S(A_c(X))$ be the mappings provided by Lemma~\ref{L:intermediate}.
For each $x\in X$ we set $Rx=\iota(x)\circ T|_{A_c(X)}$, i.e.,
$$Rx(a)=T(a)(x),\quad a\in A_c(X).$$
Clearly $Rx\in (A_c(X))^*$ and $\norm{R(x)}\le \norm{T}\le1$. Then $R:X\to (A_c(X))^*$ is an affine mapping. Thus there is a unique linear mapping $S:(A_c(X))^*\to (A_c(X))^*$ such that
$R=S\circ \iota_0$ (see, e.g., \cite[Lemma 2.2]{affperf}).
We claim that $0\le S\le I$. To prove this we show that $0\le S(\varphi)\le \varphi$ for each positive element of $(A_c(X))^*$. Since positive elements of $(A_c(X))^*$ are exactly positive multiples of $\iota_0(x)$, $x\in X$ (see Lemma~\ref{L:intermediate}$(c)$), it is enough to prove it in case $\varphi=\iota_0(x)$ for some $x\in X$.
So, fix $x\in X$ and $a\in A_c(X)$, $a\ge 0$. Then
$$S(\iota_0(x))(a)=Rx(a)=Ta(x)\in [0,a(x)]=[0,\iota_0(x)(a)].$$
Let $\Psi$ be the operator from Lemma~\ref{L:bidual} and
$$\widetilde{S}=\Psi^{-1}S^*\Psi.$$
Then $\widetilde{S}:A_b(X)\to A_b(X)$ is a linear operator satisfying $0\le \widetilde{S}\le I$. Moreover,
$$\begin{aligned}
\widetilde{S}(1_X)(x)&=\Psi^{-1}S^*\Psi(1_X)(x)
=S^*\Psi(1_X)(\iota_0(x))=\Psi(1_X)(S(\iota_0(x)))\\&=\Psi(1_X)(Rx)=\Psi(1_X)(\iota(x)\circ T|_{A_c(X)}) =(\iota(x)\circ T|_{A_c(X)})(1_X)\\&=\iota(x)(T(1_X))=\iota(x)(h)=h(x)\end{aligned}
$$
So, $\widetilde{S}(1_X)=h$ and hence $h\in Z(A_b(X))$. \fi \end{proof}
We note that in the proof of (ii) we do not claim that $S$ extends $T$, only that $S$ extends the restriction of $T$ to $A_c(X)$. We further note that the inclusion $Z(A_c(X))\subset Z(H)$ is not automatic, as witnessed by Example~\ref{ex:inkluzeZ} below.
We present one more way of constructing intermediate function spaces, starting from a function space in the sense of Section~\ref{ssc:ch-fs}. This will be useful mainly to simplify constructions of concrete examples.
\begin{lemma}\label{L:function space} Let $K$ be a compact space and let $E\subset C(K)$ be a closed function space. Denote \[ \ell^\infty(K)\cap E^{\perp\perp}=\left\{h\in\ell^\infty(K)\text{ universally measurable};\, \int h\,\mbox{\rm d}\mu=0\mbox{ if }\mu\in E^\perp\right\}, \] where $$E^\perp=\{\mu\in M(K);\, \int f\,\mbox{\rm d}\mu=0\mbox{ for each }f\in E\}$$ is the annihilator of $E$ in $M(K)=C(K)^*$. Let $X=S(E)$ denote the state space of $E$ and $\phi\colon K\to X$ be the evaluation mapping.
Given $f\in \ell^\infty(K)\cap E^{\perp\perp}$ and $\varphi\in X$, we set $$\gls{V}(f)(\varphi)=\int f\,\mbox{\rm d}\mu\mbox{ whenever }\mu\in M_1(K)\mbox{ and }\varphi(a)=\int a\,\mbox{\rm d}\mu\mbox{ for }a\in E.$$
Then the following assertions are valid: \begin{enumerate}[$(a)$]
\item $V$ is a well-defined linear isometry of $\ell^\infty(K)\cap E^{\perp\perp}$ onto $A_{sa}(X)$. Its inverse is given by the mapping $f\mapsto f\circ \phi$, $f\in A_{sa}(X)$.
Moreover, $V$ is order-preserving and $V(f_n)\to V(f)$ pointwise whenever $(f_n)$ is a bounded sequence in $ \ell^\infty(K)\cap E^{\perp\perp}$ pointwise converging to $f$.
\item $V|_E=\Phi$, and thus $V(E)=A_c(X)$.
Moreover,
$$\begin{gathered}
V(\Ba^b_1(K)\cap E^{\perp\perp})=A_1(X), V(\Ba^b(K)\cap E^{\perp\perp})=A_{sa}(X)\cap \Ba(X), \\
V(\Bo^b_1(K)\cap E^{\perp\perp})=A_b(X)\cap\Bo_1(X),
V(\Fr^b(K)\cap E^{\perp\perp})=A_f(X),\\
V(\{f\in \ell^\infty(K)\cap E^{\perp\perp};\, f\mbox{ is lower semicontinuous on }K\})=A_l(X). \end{gathered}
$$
\item Let $H\subset \ell^\infty(K)\cap E^{\perp\perp}$ be a closed subspace containing $E$. Then $V(H)$ is an intermediate function space on $X$. If $\iota\colon X\to S(V(H))$ is the mapping provided by Lemma~\ref{L:intermediate} and $\imath\colon K\to S(V(H))$ is defined by $\imath(x)(Vh)=h(x)$ for $h\in H$ and $x\in K$, then
$$\iota(\phi(x))=\imath(x),\quad x\in K.$$
\item The following assertions are equivalent:
\begin{enumerate}[$(i)$]
\item $V(H)$ is determined by extreme points of $S(E)$;
\item $H$ is determined by the Choquet boundary of $E$, i.e., for each $h\in H$ we have $\inf h(\Ch_E K)\le h\le \sup h(\Ch_E K)$;
\item $\ext S(V(H))\subset\overline{\imath (\Ch_E K)}.$
\end{enumerate}
\end{enumerate} \end{lemma}
\begin{proof} $(a)$: The fact that $V$ is well defined and maps $\ell^\infty(K)\cap E^{\perp\perp}$ onto $A_{sa}(X)$ is proved in \cite[Theorem 5.40 and Corollary 5.41]{lmns}. But we recall the scheme leading to the proof because it will be needed to prove the remaining assertions.
Let $U\colon \ell^\infty(K)\cap E^{\perp\perp}\to A_{b}(M_1(K))$ be defined by $$U(h)(\mu)=\int h\,\mbox{\rm d}\mu,\quad \mu\in M_1(K), h\in \ell^\infty(K)\cap E^{\perp\perp}.$$ Since elements of $\ell^\infty(K)\cap E^{\perp\perp}$ are universally measurable and bounded, $U$ is a well-defined linear operator and $\norm{U}\le 1$. Since $M_1(K)$ contains Dirac measures, $U$ is an isometric injection. By \cite[Proposition 5.30]{lmns} or \cite[Proposition 3.1]{spurnyrepre} the range of $U$ is contained in $A_{sa}(M_1(K))$.
Let $\rho\colon M_1(K)\to X$ be defined by $$\rho(\mu)(f)=\int f\,\mbox{\rm d}\mu,\quad f\in E, \mu\in M_1(K).$$ Then $\rho$ is an affine continuous surjection of $M_1(K)$ onto $X$. (This is an easy consequence of the Hahn-Banach theorem, cf. \cite[Section 4.3]{lmns}.)
Fix $h\in \ell^\infty(K)\cap E^{\perp\perp}$. If $\mu_1,\mu_2\in M_1(K)$ are such that $\rho(\mu_1)=\rho(\mu_2)$, then $\mu_1-\mu_2\in E^\perp$ and hence $\int h\,\mbox{\rm d}\mu_1=\int h\,\mbox{\rm d}\mu_2$. It follows that there is a (unique) function $V(h)\colon X\to\mathbb R$ with $U(h)=V(h)\circ \rho$. Since $\rho$ is an affine continuous surjection, we deduce that $V(h)$ is a strongly affine function and $\norm{V(h)}=\norm{U(h)}$ (see \cite[Proposition 5.29]{lmns}). The linearity of $V$ is clear.
Thus $V$ maps $\ell^\infty(K)\cap E^{\perp\perp}$ into $A_{sa}(X)$. For the proof of its surjectivity, let $f\in A_{sa}(X)$ be given. Then $h=f\circ \phi$ is a bounded universally measurable function on $K$. If $\mu_1,\mu_2\in M_1(K)$ satisfy $\mu_1-\mu_2\in E^\perp$, then $\phi(\mu_1),\phi(\mu_2)$ are probability measures on $X$ with the same barycenter $\rho(\mu_1)=\rho(\mu_2)$ (see \cite[Proposition 4.26(c)]{lmns}). Hence \[ \begin{aligned} \int_K h\,\mbox{\rm d}\mu_1&=\int_K f\circ\phi\,\mbox{\rm d}\mu_1=\int_X f\,\mbox{\rm d}(\phi(\mu_1))=f(\rho(\mu_1))=f(\rho(\mu_2))=\cdots\\ &=\int_K h\,\mbox{\rm d}\mu_2. \end{aligned} \] Thus $h\in E^{\perp\perp}$. Further, for $\varphi\in X$ we find a measure $\mu\in M_1(K)$ with $\rho(\mu)=\varphi$. Then \[ Vh(\varphi)=Uh(\mu)=\mu(h)=\mu(f\circ \phi)=(\phi(\mu))(f)=f(r(\phi(\mu)))=f(\rho(\mu))=f(\varphi). \] Hence $Vh=f$ and $V$ is surjective.
So far we have proved that $V$ is a well defined surjective isometry and the mapping $f\mapsto f\circ \phi$, $f\in A_{sa}(X)$, is its inverse.
It is clear that $V$ is order-preserving. The sequential continuity follows from the Lebesgue dominated convergence theorem.
$(b)$: The equality $V|_E=\Phi$ follows from the definitions. To prove the remaining equalities
we will use the scheme recalled in the proof of $(a)$.
To prove inclusions `$\supset$' we fix $f$ in the space on the right-hand side. Then $f$ is strongly affine (see \cite[Theorem 4.21]{lmns}) and hence $f\circ \phi\in E^{\perp\perp}$. Moreover, since $\phi$ is a homeomorphic injection, $f\circ \phi$ shares the descriptive properties of $f$.
Conversely, assume $h$ belongs to the space on the left-hand side. Then $V(h)\in A_{sa}(X)$ by $(a)$. Moreover, $U(h)$ belongs to the respective descriptive class on $M_1(K)$ by \cite[Lemma 3.2]{lusp} in the case of fragmented functions and by \cite[Proposition 5.30]{lmns} in the remaining cases. Hence $V(h)$ belongs to the same class on $X$ by Lemma~\ref{L:kvocient}.
\iffalse(b) Let $h\in \ell^{\infty}(K)$ be a pointwise limit of a bounded sequence $\{h_n\}$ from $E$. Then $h\in E^{\perp\perp}$ by the Lebesgue dominated convergence theorem and the functions $\{\Phi(h_n)\}$ converge by the same reasoning pointwise to $Vh$. Hence $Vh\in A_1(X)$.
Conversely, if $Vh\in A_1(X)$ for some $h\in \ell^\infty(K)\cap E^{\perp\perp}$ and $\{f_n\}$ in $A_c(X)$ is bounded sequence converging pointwise to $f$, then $h_n=f_n\circ \phi\in E$ and converge pointwise to $h=Vh\circ \phi$.
[If $h\in \ell^{\infty}(K)$ is fragmented, $Uh$ is fragmented as well by \cite[Lemma 3.2]{lusp}. Since $\rho$ is a continuous mapping of a compact space $M_1(K)$ onto a compact space $X$, $Vh=Uh\circ\rho$ is fragmented by \cite[Theorem 3]{HoSp} and Theorem~\ref{T:a}.] \fi
$(c)$: Let $\iota\colon S(E)\to S(V(H))$ and $\imath\colon K\to S(V(H))$ be as in the statement. Then for $x\in K$ and $h\in H$ we have \[ \imath(x)(Vh)=h(x)=Uh(\varepsilon_x)= Vh(\rho(\varepsilon_x))=Vh(\phi(x))=(\iota(\phi(x)))(Vh), \] and thus $\imath(x)=\iota(\phi(x))$, $x\in K$.
$(d)$: By Lemma~\ref{L:intermediate}(d), $V(H)$ is determined by extreme points of $S(E)$ if and only if $ \ext S(V(H))\subset \overline{\iota (\ext S(E))}$. Since \[ \iota(\ext S(E))=\iota(\phi(\Ch_E K))=\imath(\Ch_E K), \] we have the equivalence of $(i)$ and $(iii)$.
Finally, for each $x\in K$ and $h\in H$ we have \[ Vh(\phi(x))=Vh(\rho(\varepsilon_x))=Uh(\varepsilon_x)=h(x). \] Further, $\phi(\Ch_E K)=\ext S(E)$ (see \cite[Proposition 4.26(d)]{lmns}). From these observations it is easy to see that $(i)$ is equivalent to $(ii)$. \end{proof}
\section{Multipliers and central elements in intermediate function spaces}\label{s:multi-atp}
Let $X$ be a compact convex set and let $H$ be an intermediate function space on $X$. Since $H$ is an $F$-space, we know from Section~\ref{ssc:fspaces} what its center $Z(H)$ is. There is another important subspace of $H$ -- that of multipliers. In this section we investigate the structure of this subspace and its relationship to the center. This is motivated by \cite[Theorem II.7.10]{alfsen} where this relationship is clarified for $H=A_c(X)$ and by its extensions in \cite[Proposition 4.4 and Proposition 4.9]{smith-london} to $H=(A_c(X))^\sigma$ and $H=(A_s(X))^\mu$.
We start with two definitions. A function $u\in H$ is a \emph{multiplier}\index{multiplier!of an intermediate function space} if for any $a\in H$ there is some $b\in H$ such that $b=u\cdot a$ on $\ext X$. The set of multipliers is denoted by $M(H)$. I.e., we have $$\gls{M(H)} =\{ u\in H ;\, \forall a\in H\;\exists b\in H\colon b=u\cdot a \mbox{ on }\ext X\}.$$ Further, we define the family of \emph{strong multipliers}\index{multiplier!strong} of $H$ as \begin{equation*} \begin{aligned} \gls{Ms(H)} = \{ u\in H ;\, \forall a\in H\;\exists b\in H\colon b&=u\cdot a\quad \mu\text{-almost everywhere} \\& \text{ for each maximal } \mu \in M_1(X)\}. \end{aligned} \end{equation*} Since for each $x\in\ext X$ the Dirac measure $\varepsilon_x$ is maximal, $M^s(H) \subset M(H)$ for each intermediate function space $H$. The converse inclusion is not true in general (see Example~\ref{ex:dikous-mezi-new} below). However, it holds in many important cases, as we show below. First we recall the notion of a standard compact convex set.
\begin{definition} \label{d:standard} A compact convex set $X$ is called a \emph{standard compact convex set}\index{standard compact convex set}, provided $\mu(A)=1$ for each maximal $\mu\in M_1(X)$ and each universally measurable $A\supset \ext X$. \end{definition}
It follows from \cite[Theorem 3.79(b)]{lmns} that a compact convex set $X$ is standard whenever $\ext X$ is Lindel\"of.
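In particular, every metrizable compact convex set is standard, since in the metrizable case $\ext X$ is a $G_\delta$ subset of $X$ and hence Lindel\"of.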
\begin{prop}\label{P:rovnostmulti} Let $H$ be an intermediate function space on a compact convex set $X$. Assume that \begin{itemize} \item either $H\subset \Ba^b(X)$, \item or functions in $H$ are universally measurable and $X$ is a standard compact convex set. \end{itemize} Then $M^s(H)=M(H)$. \end{prop}
\begin{proof} Let $u \in M(H)$ and $a \in H$ be given. We find $b \in H$ such that $b=u \cdot a$ on $\ext X$. We fix a maximal measure $\mu \in M_1(X)$, and we shall show that $b=u \cdot a$ $\mu$-almost everywhere. Assuming that $H$ consists of Baire functions, the set $[b=u \cdot a]$ is a Baire set containing $\ext X$, and thus $\mu([b=u \cdot a])=1$ by \cite[Theorem 3.79(a)]{lmns}. If, on the other hand, $X$ is a standard compact convex set and the functions from $H$ are universally measurable, then $[b=u \cdot a]$ is a universally measurable set containing $\ext X$, and thus $\mu([b=u \cdot a])=1$ as well. The proof is finished. \end{proof}
We note that unlike the notion of a multiplier, the notion of a strong multiplier seems to be completely new. This is not surprising, since it follows from the above proposition that for the spaces $(A_c(X))^\mu$ and $(A_c(X))^\sigma$ studied in previous works on centers and multipliers, the notions of a multiplier and a strong multiplier coincide. In Proposition~\ref{P:multi strong pro As} below we complement the previous proposition by showing the equality $M^s(H)=M(H)$ for $H\subset (A_s(X))^\sigma$. However, when dealing with abstract intermediate function spaces, both these notions seem to be useful. As we have already remarked, they may differ, as witnessed by Example~\ref{ex:dikous-mezi-new} below, but we know no counterexample within `natural' intermediate function spaces (cf. Question~\ref{q:m=ms} below).
If $H$ is determined by extreme points, the function $b$ from the definition of a multiplier is uniquely determined by $u$ and $a$, so any multiplier defines an operator. This is the content of the following proposition.
\begin{prop}\label{P:mult} Let $H$ be an intermediate function space on $X$ determined by extreme points. Let $u\in M(H)$. Then there is a unique mapping $T:H\to H$ such that $$T(a)(x)=u(x)\cdot a(x),\quad x\in\ext X, a\in H.$$ Moreover, $T$ is a linear operator, $T(1)=u$ and $$\inf u(X)\cdot I\le T\le \sup u(X)\cdot I,$$ where $I$ is the identity operator on $H$.
In particular, $M(H)\subset Z(H)$.
\end{prop}
\begin{proof} The proof is completely straightforward. \end{proof}
Now we focus on the converse inclusion, i.e., on the validity of equality $Z(H)=M(H)$. It holds in some special cases, for example if $H=A_c(X)$ (see \cite[Theorem II.7.10]{alfsen}), $H=(A_c(X))^\sigma$ or $H=(A_s(X))^\mu$ (see \cite{smith-london}). We will study its validity in the abstract setting.
The first tool is the following lemma.
\begin{lemma}\label{L:nasobeni} Let $X$ be a compact convex set and $H_1,H_2$ two intermediate function spaces on $X$. Assume that $H_1\subset H_2$,
$H_2$ is determined by extreme points and for any $x\in\ext X$ the functional $\iota_1(x)$ is an extreme point of $S(H_1)$. (Note that $\iota_1:X\to S(H_1)$ is the mapping provided by Lemma~\ref{L:intermediate}.) Then for any operator $T\in\mathfrak{D}(H_2)$ we have $$\forall a\in H_1\colon T(a)=T(1)\cdot a\mbox{ on }\ext X.$$ \end{lemma}
\begin{proof} First assume that $0\le T\le I$. Set $z=T(1)$. Fix $x\in \ext X$. We wish to show that $T(a)(x)=z(x)\cdot a(x)$ for $a\in H_1$.
Since $0\le T\le I$, we deduce that $z(x)\in [0,1]$. Assume first $z(x)=0$. Given $a\in H_1$, we get $$(\inf a) T(1)=T(\inf a)\le T(a)\le T(\sup a)=(\sup a)T(1),$$ hence $T(a)(x)=0$. If $z(x)=1$, replace $T$ by $S=I-T$ and deduce that $S(a)(x)=0$, hence $T(a)(x)=a(x)=a(x)z(x)$ for $a\in H_1$.
Assume $z(x)\in(0,1)$. Then $$a(x)=Ta(x) + (I-T)(a)(x)=z(x)\cdot\frac{T(a)(x)}{z(x)}+(1-z(x))\frac{(I-T)a(x)}{1-z(x)}, \quad a\in H_1.$$ Set $$\varphi_1(a)=\frac{T(a)(x)}{z(x)}\quad\mbox{and}\quad\varphi_2(a)=\frac{(I-T)a(x)}{1-z(x)}$$ for $a\in H_1$. Then $\varphi_1,\varphi_2\in S(H_1)$ and $\iota_1(x)=z(x)\varphi_1+(1-z(x))\varphi_2$. By the assumption we deduce that $\varphi_1=\varphi_2=\iota_1(x)$, hence $T(a)(x)=a(x)\cdot z(x)$ for $a\in H_1$ and the proof is complete.
Now let $T$ be a general operator in $\mathfrak{D}(H_2)$. Then there is a linear mapping $S$ with $0\le S\le I$ and $\alpha,\beta\in\mathbb R$ such that $T=\alpha I+\beta S$. Then, given any $a\in H_1$, $$T(a)=\alpha a+\beta S(a)= \alpha a +\beta S(1)\cdot a=(\alpha +\beta S(1))a= T(1)\cdot a$$ on $\ext X$. \end{proof}
For $H_1=H_2=A_c(X)$ we get the above-mentioned known result covered by \cite[Theorem II.7.10]{alfsen}. There are two important more general cases where the previous lemma may be applied; they are covered by the following corollary.
\begin{cor}\label{cor:rovnost na ext} Let $H$ be an intermediate function space determined by extreme points. \begin{enumerate}[$(a)$]
\item If $T\in\mathfrak{D}(H)$ is an operator, then
$$\forall a\in A_c(X)\colon T(a)=T(1)\cdot a\mbox{ on }\ext X.$$
\item If $\iota(x)\in\ext S(H)$ for each $x\in \ext X$, then $Z(H)=M(H)$. \end{enumerate} \end{cor}
The next lemma shows that the connection between multipliers and operators in the ideal center is very tight.
\begin{lemma}\label{L:ob-soucin} Let $H$ be an intermediate function space determined by extreme points. Let $T\in\mathfrak{D}(H)$ be an operator. Then $$T(1)\in M(H) \Longleftrightarrow \forall a\in H\colon T(a)=T(1)\cdot a\mbox{ on }\ext X.$$ \end{lemma}
\begin{proof} Implication `$\Longleftarrow$' is obvious. To prove the converse assume that $T(1)\in M(H)$. By Proposition~\ref{P:mult}
we obtain an operator $S\in\mathfrak{D}(H)$ such that $$S(a)=T(1)\cdot a\mbox{ on }\ext X \mbox{ for }a\in H.$$ Note that $S(1)=T(1)$.
Now we use the identification $H=A_c(S(H))$ provided by Lemma~\ref{L:intermediate}$(a)$. Then both $S$ and $T$ are operators in $\mathfrak{D}(A_c(S(H)))$. Hence by Corollary~\ref{cor:rovnost na ext}, $$\begin{gathered}T(a)=T(1)\cdot a\mbox{ on }\ext S(H)\mbox{ for }a\in A_c(S(H)),\\ S(a)=S(1)\cdot a\mbox{ on }\ext S(H)\mbox{ for }a\in A_c(S(H)).\end{gathered}$$ Since $S(1)=T(1)$, we deduce $S=T$. Thus $T(a)=T(1)\cdot a$ on $\ext X$ for each $a\in H$ and the proof is complete. \end{proof}
This lemma, together with the identification from Lemma~\ref{L:intermediate}$(a)$ used in the proof, inspires a characterization of intermediate function spaces $H$ satisfying the equality $Z(H)=M(H)$. To formulate the characterization we introduce the following notation. For a compact convex set $X$ we set \begin{equation}\label{eq:m(X)} \gls{m(X)}=\{x\in X;\, \forall T\in\mathfrak{D}(A_c(X))\,\forall a\in A_c(X)\colon T(a)(x)=T(1)(x)\cdot a(x)\}.\end{equation} Obviously, $m(X)$ is a closed subset of $X$. By the above we know it contains $\ext X$.
The promised characterization of equality $Z(H)=M(H)$ is contained in the following proposition which follows from Lemma~\ref{L:ob-soucin} using the identifications $H=A_c(S(H))$ provided by Lemma~\ref{L:intermediate}.
\begin{prop}\label{p:zh-mh} Let $H$ be an intermediate function space determined by extreme points. Then $Z(H)=M(H)$ if and only if $\iota(\ext X)\subset m(S(H))$ (where $\iota$ is the mapping from Lemma~\ref{L:intermediate}). \end{prop}
The next section will be devoted to a description of $m(X)$ which enables us to apply Proposition~\ref{p:zh-mh} in concrete cases.
We continue with an example pointing out that in some cases there are only trivial multipliers.
\begin{example}\label{ex:symetricka}
Let $X$ be a centrally symmetric compact convex subset of a locally convex space which is not a segment and let $H$ be an intermediate function space on $X$. Then the following assertions hold.
\begin{enumerate}[$(a)$]
\item $Z(H)$ contains only constant functions.
\item If $H$ is determined by extreme points, then $M(H)$ contains only constant functions.
\end{enumerate} \end{example}
\begin{proof}
$(b)$: By shifting the set $X$ if necessary we may assume that $X$ is symmetric around $0$. Assume there is a non-constant $u\in M(H)$. Up to adding a constant function, we may assume that $u(0)=0$. Since $H$ is determined by extreme points, there is $x\in\ext X$ with $u(x)\ne0$. Up to multiplying $u$ by a non-zero constant we may assume $u(x)=1$. Then clearly $u(-x)=-1$.
Let $y\in \ext X\setminus\{x,-x\}$. Such a point exists as $X$ is not a segment. Set $\alpha=u(y)$. Then $u(-y)=-\alpha$. Since $u$ is a multiplier, there is $b\in H$ such that $b=u^2$ on $\ext X$. Since $b$ is affine, $b(0)=\frac12(b(x)+b(-x))=1$ and also $b(0)=\frac12(b(y)+b(-y))=\alpha^2$, hence $\alpha^2=1$. So, up to replacing $y$ by $-y$ if necessary, we may assume that $u(y)=1$ (and $u(-y)=-1$).
Let $a_0$ be the linear functional on the linear span of $\{x,y\}$ such that $a_0(x)=1$ and $a_0(y)=0$. By the Hahn-Banach extension theorem it may be extended to some $a\in A_c(X)$.
Since $A_c(X)\subset H$, we deduce that $a\in H$. Thus there is $v\in H$ such that $v=ua$ on $\ext X$. Since $a$ is (the restriction of) a linear functional, we have $a(-x)=-1$ and $a(-y)=0$, so $v(x)=v(-x)=1$ and $v(y)=v(-y)=0$. As $v$ is affine, this yields $v(0)=\frac12(v(x)+v(-x))=1$ and at the same time $v(0)=\frac12(v(y)+v(-y))=0$, a contradiction.
$(a)$: Since $Z(H)$ is canonically identified with $M(A_c(S(H)))$, it is enough to show that $S(H)$ is centrally symmetric and then use $(b)$. Let $\iota:X\to S(H)$ be the mapping from Lemma~\ref{L:intermediate}; note that $S(H)$ is not a segment, since it contains $\iota(X)$, an affine copy of $X$. We claim that $\iota(0)$ is the center of symmetry of $S(H)$. To prove this it is enough to observe that $2\iota(0)-\varphi\in S(H)$ whenever $\varphi\in S(H)$. So, take any $\varphi\in S(H)$ and let $\psi=2\iota(0)-\varphi$. Then $\psi$ is clearly a continuous linear functional on $H$. Moreover, $\psi(1_X)=2-\varphi(1_X)=1$. Further, assume $f\in H$, $f\ge0$. Then for any $x\in X$ we have
$f(x)\le f(x)+f(-x)=2f(0)$, hence $2f(0)-f\ge0$. So,
$$\psi(f)=2f(0)-\varphi(f)=\varphi(2f(0)-f)\ge0.$$
We deduce that $\psi\in S(H)$ and the proof is complete.
\iffalse
The symmetry is then defined by
$$\eta(\varphi)=2\iota(0)-\varphi,\quad\varphi\in S(H).$$
So, take any $\varphi\in S(H)$. Then $\eta(\varphi)$ is clearly a continuous linear functional on $H$. Moreover, $\eta(\varphi)(1_X)=2-\varphi(1_X)=1$. Further, assume $f\in H$, $f\ge0$. Then for any $x\in X$ we have
$f(x)\le f(x)+f(-x)=2f(0)$, hence $2f(0)-f\ge0$. So,
$$\eta(\varphi)(f)=2f(0)-\varphi(f)=\varphi(2f(0)-f)\ge0.$$
We deduce that $\eta(\varphi)\in S(H)$. Since $\eta(\iota(0))=\iota(0)$ and $\eta^{-1}=\eta$, the proof is complete.\fi \end{proof}
Note that if $X$ is a segment, necessarily $M(H)=H=A_c(X)=A_b(X)$.
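For a concrete instance of Example~\ref{ex:symetricka}, take for $X$ a square in the plane, say $X=[-1,1]^2$. It is centrally symmetric and not a segment, so $Z(H)$ consists only of constant functions for every intermediate function space $H$ on $X$, and the same is true of $M(H)$ whenever $H$ is determined by extreme points.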
Now we pass to some basic properties of centers and spaces of multipliers. We start by proving they are closed subspaces.
\begin{lemma}\label{L:uzavrenost} Let $X$ be a compact convex set and let $H$ be an intermediate function space on $X$. \begin{enumerate}[$(i)$]
\item $Z(H)$ is a closed linear subspace of $H$.
\item If $H$ is determined by extreme points, then both $M(H)$ and $M^s(H)$ are closed linear subspaces of $H$.
\end{enumerate}
\end{lemma}
\begin{proof}
$(ii)$: Assume that $H$ is determined by extreme points. It is clear that $M(H)$ and $M^s(H)$ are linear subspaces of $H$.
Let $(m_n)$ be a sequence in $M(H)$ converging in $H$ to some $m\in H$. We are going to prove that $m\in M(H)$.
Let $a\in H$. For each $n\in \mathbb N$ there is $b_n\in H$ such that $b_n=m_n \cdot a$ on $\ext X$. Then the restrictions $b_n|_{\ext X}$ converge uniformly to $(m \cdot a)|_{\ext X}$. Since $H$ is determined by extreme points, we deduce that $(b_n)$ is uniformly Cauchy on $X$. Thus there is $b\in H$ such that $b_n\to b$ in $H$. Clearly $b=m \cdot a$ on $\ext X$, and thus $M(H)$ is closed.
To show that $M^s(H)$ is closed, we proceed similarly. Let $(m_n)$ be a sequence in $M^s(H)$ converging in $H$ to $m\in H$. Let $a\in H$ be arbitrary. Let $b_n$ and $b$ be as above. Fix any maximal measure $\mu \in M_1(X)$. Since $m_n\in M^s(H)$ and $H$ is determined by extreme points, we deduce that
$b_n=m_n \cdot a$ $\mu$-almost everywhere. Hence (using the $\sigma$-additivity of $\mu$) we find a $\mu$-null set $N\subset X$ such that
$$\forall x\in X\setminus N\;\forall n\in\mathbb N\colon b_n(x)=m_n(x)\cdot a(x),$$
thus $b(x)=m(x)\cdot a(x)$ for $x\in X\setminus N$. Therefore $b=m\cdot a$ $
\mu$-almost everywhere.
This completes the proof.
\iffalse
Let $a\in H$, and we fix a maximal measure $\mu \in M_1(X)$. For each $n\in \mathbb N$ we find $b_n\in H$ such that $b_n=m_n \cdot a$ $\mu$-almost everywhere. Thus we can find a set $A_n \subset X$ such that $\mu(A_n)=1$ and $b_n=m_n \cdot a$ on $A_n$. Further, as above we find a function $b \in H$ which is the uniform limit of $(b_n)$. Now, it is clear that the set $b=m \cdot a$ contains $\bigcap_{n \in \mathbb N} A_n$, hence $b=m \cdot a$ $\mu$-almost everywhere. Thus $m \in M^s(H)$, which completes the proof. \fi
$(i)$: Let $T:H\to A_c(S(H))$ be the operator from Lemma~\ref{L:intermediate}.
Then $T(Z(H))=Z(A_c(S(H)))=M(A_c(S(H)))$, which is a closed subspace of $A_c(S(H))$ by $(ii)$. Since $T$ is an isometry of $H$ onto $A_c(S(H))$, it follows that $Z(H)$ is closed in $H$.
\end{proof}
When $H$ is an intermediate function space, the spaces $H^\mu$ and $H^\sigma$ are closed subspaces of $A_b(X)$ by Lemma~\ref{L:muclosed is closed}. Hence, they are intermediate function spaces as well. Let us now look at the relationship between multipliers of $H$, $H^\mu$ and $H^\sigma$. The following result partially settles this question.
\begin{prop}
\label{p:multi-pro-mu}
Let $H$ be an intermediate function space.
\begin{enumerate}[$(i)$]
\item Assume that $H^\mu$ is determined by extreme points. Then $(M(H))^\mu\subset M(H^\mu)$ and $(M^s(H))^\mu\subset M^s(H^\mu)$.
In particular, if $H=H^\mu$, then $M(H)=(M(H))^\mu$ and $M^s(H)=(M^s(H))^\mu$.
\item Assume that $H \subset A_{sa}(X)$ and $H^\sigma$ is determined by extreme points. Then $(M^s(H))^\sigma\subset M^s(H^\sigma)$. In particular,
\[M^s(H) \subset M^s(H^\mu) \subset M^s(H^\sigma).\]
If, additionally, $H=H^\sigma$, then $M^s(H)=(M^s(H))^\sigma$.
\end{enumerate} \end{prop}
\begin{proof} $(i)$: We start by showing that $M(H)\subset M(H^\mu)$. To this end let $m\in M(H)$ be given. We assume that $m\ge0$, otherwise we would just add a suitable constant to it. Let \[ \mathcal F=\{a\in H^\mu;\, \exists b\in H^\mu\colon b=am\text{ on }\ext X\}. \] Then $\mathcal F$ is clearly a linear subspace of $H^\mu$ containing $H$. Further, $\mathcal F$ is stable with respect to taking pointwise limits of monotone bounded sequences. Indeed, let $(a_n)$ be a non-decreasing bounded sequence in $\mathcal F$ with the limit $a\in A_b(X)$. Then we find functions $b_n\in H^\mu$ such that $b_n=a_nm$ on $\ext X$. Then $b_{n+1}-b_n=(a_{n+1}-a_n)m\ge 0$ on $\ext X$. Since $H^\mu$ is determined by extreme points, we deduce that $(b_n)$ is non-decreasing and bounded. It follows that there exists its limit $b=\lim b_n$, which belongs to $H^\mu$. Since $b=am$ on $\ext X$, the function $a$ belongs to $\mathcal F$. We conclude that $\mathcal F=H^\mu$ and therefore $m\in M(H^\mu)$.
To finish the proof that $(M(H))^\mu\subset M(H^\mu)$, it remains to show that $M(H^\mu)$ is closed with respect to pointwise limits of bounded monotone sequences.
To this end fix a sequence $(m_n)$ in $M(H^\mu)$ such that $m_n\nearrow m\in A_b(X)$. Clearly $m\in H^\mu$.
We continue by proving that $m\in M(H^\mu)$. To this end we fix $a\in H^\mu$ and we are looking for $b\in H^\mu$ such that $b=ma$ on $\ext X$.
Since all the functions in question are bounded and $1\in H\subset H^\mu$, we may assume without loss of generality that $a\ge 0$ and $m_1\ge0$.
By the assumptions we find $b_n\in H^\mu$ such that $b_n=m_na$ on $\ext X$. We deduce that
$$b_n=m_n a\nearrow m a\mbox{ on }\ext X.$$
Since $H^\mu$ is determined by extreme points, the sequence $(b_n)$ is non-decreasing. Using the determinacy by extreme points once more, we deduce that each $b_n$ is bounded from above by $\norm{m}\cdot\norm{a}$, so $b_n\nearrow b$ for some $b\in H^\mu$. Since $b=ma$ on $\ext X$, we conclude that $m\in M(H^\mu)$.
This settles the case of $M(H)$, including the `in particular' part. The case of $M^s(H)$ is similar: The first step is again to prove that $M^s(H)\subset M^s(H^\mu)$. We assume that $m\in M^s(H)$ is positive, and we consider the family \begin{equation*} \begin{aligned} \mathcal F^s=\{a\in H^\mu;\, \exists b\in H^\mu\colon b=am \ &\mu\text{-almost everywhere} \\ &\text{ for each maximal } \mu \in M_1(X)\}. \end{aligned} \end{equation*} Again, $\mathcal F^s$ is a linear subspace of $H^\mu$ containing $H$. As above, it is enough to show that the family $\mathcal F^s$ is stable with respect to taking pointwise limits of monotone bounded sequences. Thus we pick a non-decreasing bounded sequence $(a_n)$ in $\mathcal F^s$ with the limit $a\in A_b(X)$. We find $b_n$ and $b$ as above. Fix any maximal measure $\mu \in M_1(X)$. Since $m\in M^s(H)$ and $H$ is determined by extreme points, we deduce that
$b_n=m \cdot a_n$ $\mu$-almost everywhere. Hence (using the $\sigma$-additivity of $\mu$) we find a $\mu$-null set $N\subset X$ such that
$$\forall x\in X\setminus N\;\forall n\in\mathbb N\colon b_n(x)=m(x)\cdot a_n(x),$$
hence $b(x)=m(x)\cdot a(x)$ for $x\in X\setminus N$. Therefore $b=ma$ $\mu$-almost everywhere.
Hence $a\in\mathcal F^s$.
\iffalse We find functions $b_n\in H^\mu$ such that the equality $b_n=a_nm$ holds $\mu$-almost everywhere for each maximal measure $\mu \in M_1(X)$, in particular it holds on $\ext X$. Thus as above, we can find the limit $b=\lim b_n$, which is in $H^\mu$. It remains to show that for each maximal measure $\mu \in M_1(X)$ and $\mu$-almost every point $x \in X$, $b(x)=a(x)m(x)$. For a fixed such measure $\mu$ and each $n \in \mathbb N$ we can find a set $B_n$ such that $\mu(B_n)=1$ and $b_n=a_nm$ on $B_n$. It then follows that the equality $b=am$ holds on $\bigcap_{n \in \mathbb N} B_n$, which finishes the argument. \fi
The second step is again to show that $M^s(H^\mu)$ is closed with respect to taking pointwise limits of bounded monotone sequences. We proceed as above, we just assume that the sequence $(m_n)$ belongs to $M^s(H^\mu)$ instead of $M(H^\mu)$. We fix a maximal measure $\mu \in M_1(X)$. Then $b_n=m_na$ $\mu$-almost everywhere for each $n\in\mathbb N$, hence $b=ma$ $\mu$-almost everywhere. The proof is finished.
$(ii)$: Assume that $H$ consists of strongly affine functions and $H^\sigma$ is determined by extreme points. Then $H^{\sigma}$ consists of strongly affine functions as well. As above, the first step is to prove that $M^s(H)\subset M^s(H^\sigma)$. Let $m\in M^s(H)$ be given and let \begin{equation*}
\begin{aligned} \mathcal F=\{a\in H^\sigma;\, \exists b\in H^\sigma\colon b=am \ &\mu\text{-almost everywhere} \\& \text{ for each maximal } \mu \in M_1(X) \}. \end{aligned} \end{equation*} Then $\mathcal F$ is a linear subspace of $H^\sigma$ containing $H$. Further, $\mathcal F$ is closed with respect to taking pointwise limits of bounded sequences. Indeed, let $(a_n)$ be such a sequence in $\mathcal F$ with limit $a\in A_b(X)$. Then clearly $a\in H^\sigma$. For each $n\in\mathbb N$ we find $b_n\in H^\sigma$ such that $b_n=a_nm$ $\mu$-almost everywhere for each maximal measure $\mu \in M_1(X)$.
Further, we fix a point $x\in X$ and a maximal measure $\nu \in M_x(X)$. Since the functions $b_n$ are strongly affine, $b_n(x)=\nu(b_n)$. Since $b_n=a_nm$ $\nu$-almost everywhere, we deduce $b_n(x)=\nu(a_nm)$. By the Lebesgue dominated convergence theorem we deduce that $b_n(x)\to \nu(am)$. So, the sequence $(b_n)$ pointwise converges to some $b\in A_b(X)$. Then $b\in H^\sigma$ is strongly affine. Further, let $\mu\in M_1(X)$ be any maximal measure. Since $b_n\to b$, $a_nm\to am$ and $b_n=a_nm$ $\mu$-almost everywhere for each $n\in\mathbb N$, we deduce that $b=am$ $\mu$-almost everywhere. Hence $a\in \mathcal F$. It follows that $\mathcal F=H^\sigma$ and thus $m\in M^s(H^\sigma)$.
The second step is to show that $M^s(H^\sigma)$ is closed with respect to pointwise limits of bounded sequences.
To this end fix a bounded sequence $(m_n)$ in $M^s(H^\sigma)$ such that $m_n\to m\in A_b(X)$. Clearly $m\in H^\sigma$.
We continue by proving $m\in M^s(H^\sigma)$. To this end we fix $a\in H^\sigma$. For each $n\in\mathbb N$ there is $b_n\in H^\sigma$ such that $b_n=m_na$ $\mu$-almost everywhere for any maximal measure $\mu\in M_1(X)$. Fix $x\in X$ and take a maximal measure $\nu\in M_x(X)$. Then
$$b_n(x)=\nu(b_n)=\nu(m_na)\to\nu(ma).$$
Thus the sequence $(b_n)$ pointwise converges to some $b\in A_b(X)$. Then $b\in H^\sigma$. Further, similarly as above we show that $b=ma$ $\mu$-almost everywhere for any maximal $\mu\in M_1(X)$. Hence $m\in M^s(H^\sigma)$.
The `in particular' part follows from $(i)$ and the above combined with the fact that $(H^\mu)^\sigma=H^\sigma$. The proof is finished. \end{proof}
\begin{remarks}\label{rem:Ms(H1)} (1) In assertion $(i)$ of Proposition~\ref{p:multi-pro-mu} it is not sufficient to assume that just $H$ is determined by extreme points. Indeed, Example~\ref{ex:deter-ext-body} below shows that there is an intermediate function space $H$ determined by extreme points such that the space $H^\mu$ is not.
(2) Assertion $(ii)$ of Proposition~\ref{p:multi-pro-mu} may be refined (using essentially the same proof): Assume that $H\subset A_{sa}(X)$. Let $H_1$ be the space of all limits of bounded pointwise converging sequences from $H$. Then $H_1$ is a linear subspace of $A_{sa}(X)$ (not necessarily closed) and $(M^s(H))_1\subset M^s(H_1)$. \end{remarks}
We finish this section by asking the following open question on the validity of assertion $(ii)$ of the previous proposition for $M(H)$ instead of $M^s(H)$.
\begin{ques}
Assume that $H$ is an intermediate function space such that $H^\sigma$ is determined by extreme points. Is $M(H)^\sigma\subset M(H^\sigma)$? \end{ques}
\section{On multipliers for $A_c(X)$}\label{sec:mult-acx}
In this section we will deal only with continuous affine functions. Our aim is to describe $m(X)$ and to characterize multipliers on $A_c(X)$. A description of $m(X)$ is given in Proposition~\ref{P:m(X)} below; a characterization of multipliers is provided in Proposition~\ref{P:mult-charact} in the general case and in Proposition~\ref{P:simplex mult} in case $X$ is a simplex. These characterizations are related to the characterization from \cite[Theorem II.7.10]{alfsen} using the continuity with respect to the facial topology, but our version is more understandable and easier to apply in concrete cases (as witnessed by the examples at the end of this section).
We start with the following lemma, which is a key step towards a description of $m(X)$.
\begin{lemma}\label{L:platnost soucinu} Let $T\in\mathfrak{D}(A_c(X))$ be an operator and $u=T(1)$. Let $x\in X$. Then the following assertions are equivalent. \begin{enumerate}[$(1)$]
\item $T(a)(x)=u(x)\cdot a(x)$ for each $a\in A_c(X)$.
\item $T(u)(x)=u(x)^2$.
\item $u$ is constant on $\operatorname{spt}\mu$ for any probability measure $\mu$ supported by $\overline{\ext X}$ such that $r(\mu)=x$.
\item $u$ is constant on $\operatorname{spt}\mu$ for some probability measure $\mu$ supported by $\overline{\ext X}$ such that $r(\mu)=x$. \end{enumerate} \end{lemma}
\begin{proof} $(1)\implies(2)$: This is trivial.
$(2)\implies (3)$: Assume $T(u)(x)=u(x)^2$. Let $\mu$ be a probability measure supported by $\overline{\ext X}$ with $r(\mu)=x$. Then $$\begin{aligned} \int (u-u(x))^2\,\mbox{\rm d}\mu&= \int u^2\,\mbox{\rm d}\mu - 2 \int u(x)u\,\mbox{\rm d}\mu+\int u(x)^2\,\mbox{\rm d}\mu =\int u^2\,\mbox{\rm d}\mu-u(x)^2\\ &=\int T(u)\,\mbox{\rm d}\mu -u(x)^2=T(u)(x)-u(x)^2=0. \end{aligned}$$ In the second equality we used the assumption $r(\mu)=x$ and $u\in A_c(X)$. In the third equality we used the fact that $T(u)=u^2$ on $\ext X$ (by Corollary~\ref{cor:rovnost na ext}(a)) and so by continuity also on $\overline{\ext X}$. The fourth equality uses again that $r(\mu)=x$ and the last one follows from (2).
It follows that $u=u(x)$ $\mu$-a.e. Hence, by continuity, $u=u(x)$ on $\operatorname{spt}\mu$.
$(3)\implies(4)$: This is trivial.
$(4)\implies(1)$: Let $\mu$ be such a measure. Then for each $a\in A_c(X)$ we have $$T(a)(x)=\int T(a)\,\mbox{\rm d}\mu=\int ua\,\mbox{\rm d}\mu=\int u(x)a\,\mbox{\rm d}\mu=u(x)\int a\,\mbox{\rm d}\mu=u(x)a(x).$$ In the second equality we used the fact that $T(a)=T(1)\cdot a=u \cdot a$ on $\ext X$ (Corollary~\ref{cor:rovnost na ext}(a)) and so by continuity also on $\overline{\ext X}$. The third one follows from the assumption $u=u(x)$ on $\operatorname{spt}\mu$. \end{proof}
As a corollary we get the following characterization of $m(X)$.
\begin{prop}\label{P:m(X)} Let $X$ be a compact convex set and $x\in X$. Then the following assertions are equivalent. \begin{enumerate}[$(1)$]
\item $x\in m(X)$.
\item There is a probability measure $\mu$ supported by $\overline{\ext X}$ with barycenter $x$ such that any $u\in M(A_c(X))$ is constant on $\operatorname{spt}\mu$.
\item If $\mu$ is any probability measure supported by $\overline{\ext X}$ with barycenter $x$, then any $u\in M(A_c(X))$ is constant on $\operatorname{spt}\mu$. \end{enumerate} \end{prop}
This proposition follows immediately from Lemma~\ref{L:platnost soucinu}. Although this proposition provides a complete characterization of $m(X)$, its use is limited by our knowledge of $M(A_c(X))$. Multipliers on $A_c(X)$ can be characterized by continuity in the facial topology (see \cite[Theorem II.7.10]{alfsen}) but this is hard to use in concrete cases. In the rest of this section we provide some more accessible necessary conditions and characterizations.
\begin{lemma}\label{L:mult-nutne} Let $X$ be a compact convex set and let $u\in M(A_c(X))$. Then the following conditions are satisfied. \begin{enumerate}[$(i)$]
\item If $\mu\in M_1(\overline{\ext X})$ has the barycenter in $\overline{\ext X}$, then $u$ is constant on $\operatorname{spt}\mu$.
\item If $\mu,\nu\in M_1(\overline{\ext X})$ have the same barycenter and $B\subset \mathbb R$ is a Borel set, then $\mu(u^{-1}(B))=\nu(u^{-1}(B))$ and, moreover, if this number is strictly positive, then the probabilities
$$\frac{\mu_{|u^{-1}(B)}}{\mu(u^{-1}(B))}\mbox{ and }\frac{\nu_{|u^{-1}(B)}}{\nu(u^{-1}(B))}$$ have the same barycenter. \item If $a,b,c,d\in\overline{\ext X}$ are four distinct points and the segments $[a,b]$ and $[c,d]$ intersect in a point which is internal in both segments, then $u(a)=u(b)=u(c)=u(d)$. \item If $a,b,c,d,e\in\overline{\ext X}$ are five distinct points and $$t_1a+t_2b+t_3c=sd+(1-s)e$$ for some $t_1,t_2,t_3,s\in(0,1)$ such that $t_1+t_2+t_3=1$, then $u(a)=u(b)=u(c)=u(d)=u(e)$. \end{enumerate} \end{lemma}
\begin{proof} $(i)$: Any $x\in\overline{\ext X}$ belongs to $m(X)$, hence the assertion follows from Proposition~\ref{P:m(X)}, implication $(1)\implies(3)$.
$(ii)$: Assume $\mu,\nu\in M_1(\overline{\ext X})$ have the same barycenter $x$.
Set $$E=\overline{\operatorname{span}}\{u^n;\, n\ge 0\}\subset C(X).$$ By the Stone-Weierstrass theorem we get $$E=\{f\circ u;\, f\in C(u(X))\}.$$ Indeed, the mapping $f\mapsto f\circ u$ is a linear isometry of $C(u(X))$ into $C(X)$, so its range is closed; it contains all polynomials in $u$, and polynomials are dense in $C(u(X))$.
Moreover, since $u$ is a multiplier, we claim that $(f\cdot g)|_{\overline{\ext X}}$ may be extended to an affine continuous function on $X$ for any $f\in A_c(X)$ and $g\in E$. Indeed, the mapping
$$R:f\mapsto f|_{\overline{\ext X}}$$ is an isometric linear injection of $A_c(X)$ into $C(\overline{\ext X})$. So, its range is a closed linear subspace of $C(\overline{\ext X})$. Hence, given any $f\in A_c(X)$, the set
$$\{g\in C(X);\, (f\cdot g)|_{\overline{\ext X}}\in R(A_c(X))\}$$ is a closed linear subspace of $C(X)$, containing constant functions and stable under the multiplication by $u$. It follows that this subspace contains $E$.
We deduce $$\int a\cdot g\,\mbox{\rm d}\mu=\int a\cdot g\,\mbox{\rm d}\nu,\quad a\in A_c(X), g\in E,$$ hence \begin{equation}\label{eq:aaa}
\int a\cdot f\circ u\,\mbox{\rm d}\mu=\int a\cdot f\circ u\,\mbox{\rm d}\nu,\quad a\in A_c(X), f\in C(u(X)).\end{equation} Applying this to $a=1$ we get $$\int f\circ u\,\mbox{\rm d}\mu=\int f\circ u\,\mbox{\rm d}\nu,\quad f\in C(u(X)),$$ so $$\int f\,\mbox{\rm d} u(\mu)=\int f\,\mbox{\rm d} u(\nu),\quad f\in C(u(X)),$$ i.e., $u(\mu)=u(\nu)$. In other words, $\mu(u^{-1}(B))=\nu(u^{-1}(B))$ for any Borel set $B\subset \mathbb R$. This proves the first part of the assertion.
To prove the second part, observe that \eqref{eq:aaa} implies (using the Lebesgue dominated convergence theorem) that $$\int a\cdot f\circ u\,\mbox{\rm d}\mu=\int a\cdot f\circ u\,\mbox{\rm d}\nu,\quad a\in A_c(X), f\in \Ba^b(u(X)).$$ In particular, applying to $f=1_B$ for a Borel set $B\subset \mathbb R$ (note that Borel and Baire subsets of $\mathbb R$ coincide) we get $$\int a\cdot 1_B\circ u\,\mbox{\rm d}\mu=\int a\cdot 1_B\circ u\,\mbox{\rm d}\nu,\quad a\in A_c(X),$$ i.e., $$\int_{u^{-1}(B)} a\,\mbox{\rm d}\mu=\int_{u^{-1}(B)} a\,\mbox{\rm d}\nu,\quad a\in A_c(X).$$ By the first part we know that $\mu(u^{-1}(B))=\nu(u^{-1}(B))$. If this number is zero, the above equality holds trivially; if it is strictly positive, the above equality means exactly that the probabilities
$$\frac{\mu_{|u^{-1}(B)}}{\mu(u^{-1}(B))}\mbox{ and }\frac{\nu_{|u^{-1}(B)}}{\nu(u^{-1}(B))}$$ have the same barycenter. This completes the proof of assertion $(ii)$.
$(iii)$: By the assumption there are $s,t\in(0,1)$ such that $$sa+(1-s)b=tc+(1-t)d.$$ Define measures $$\mu=s\varepsilon_a+(1-s)\varepsilon_b,\quad \nu=t\varepsilon_c+(1-t)\varepsilon_d.$$ These two measures satisfy the assumptions of $(ii)$, so $u(\mu)=u(\nu)$.
If $u(a)=u(b)=\lambda$, then $u(\mu)=\varepsilon_\lambda$. Since then $u(\nu)=\varepsilon_\lambda$ as well, necessarily $u(c)=u(d)=\lambda$. If $u(c)=u(d)$, we proceed similarly.
So assume $u(a)\ne u(b)$ and $u(c)\ne u(d)$. Up to relabeling the points we may assume $u(a)<u(b)$ and $u(c)<u(d)$. Fix $\omega\in(u(a),u(b))$ and let $B=(-\infty,\omega)$. Then $\mu|_{u^{-1}(B)}=s\varepsilon_a$ and $\mu(u^{-1}(B))=s$. Since $\nu(u^{-1}(B))=\mu(u^{-1}(B))=s$ and simultaneously $\nu(u^{-1}(B))\in\{0,t,1\}$, we deduce $t=s$. But then the barycenters of the normalized measures are $a$ and $c$. By $(ii)$, these barycenters must be equal, which is a contradiction. This completes the proof.
$(iv)$: Define measures $$\mu=t_1\varepsilon_a+t_2\varepsilon_b+t_3\varepsilon_c,\quad \nu=s\varepsilon_d+(1-s)\varepsilon_e.$$ These two measures satisfy the assumptions of $(ii)$, so $u(\mu)=u(\nu)$.
If $u(d)=u(e)=\lambda$, then $u(\nu)=\varepsilon_\lambda$. Since then $u(\mu)=\varepsilon_\lambda$ as well, necessarily $u(a)=u(b)=u(c)=\lambda$.
Assume $u(d)\ne u(e)$. Then $$ \begin{aligned} \mu(u^{-1}(\mathbb R\setminus \{u(d),u(e)\}))&=\nu(u^{-1}(\mathbb R\setminus \{u(d),u(e)\}))=0,\\ \mu(u^{-1}(\{u(d)\}))&=\nu(u^{-1}(\{u(d)\}))=s>0,\\ \mu(u^{-1}(\{u(e)\}))&=\nu(u^{-1}(\{u(e)\}))=1-s>0. \end{aligned} $$ It follows that $u(\{a,b,c\})=\{u(d),u(e)\}$. So, $u$ attains the same value at two of the points $a,b,c$ and a different value at the third one. Up to relabelling we may assume $$u(a)=u(b)=u(d)\ne u(c)=u(e).$$
Then $t_1+t_2=s$, $t_3=1-s$. Since the normalized measures $\frac{\mu|_{u^{-1}(\{u(e)\})}}{\mu(u^{-1}(\{u(e)\}))}$ and
$\frac{\nu|_{u^{-1}(\{u(e)\})}}{\nu(u^{-1}(\{u(e)\}))}$ have the same barycenter (by $(ii)$), we deduce that $c=e$. This contradiction completes the proof. \end{proof}
\begin{remark}\label{rem:nutne} (1) Condition $(i)$ of the previous lemma follows from condition $(ii)$ applied to $\mu$ and the Dirac measure at the barycenter of $\mu$ in place of $\nu$. However, we formulate this condition separately because it is easier and, moreover, in case $X$ is a simplex it serves as a characterization (see Proposition~\ref{P:simplex mult} below).
(2) Condition $(ii)$ of the previous lemma can be reformulated using measures annihilating $A_c(X)$ as follows: \begin{enumerate}[$(ii')$]
\item If $\mu\in M(\overline{\ext X})\cap A_c(X)^\perp$ and $B\subset \mathbb R$ is a Borel set, then $\mu(u^{-1}(B))=0$ and $\mu|_{u^{-1}(B)}\in A_c(X)^\perp$. \end{enumerate}
(3) In conditions $(iii)$ and $(iv)$ of the previous lemma it is essential that we consider either two pairs of points or a pair and a triple of points. Example \ref{E:more points}(4) below illustrates that a similar condition does not hold for larger families of points. \end{remark}
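To illustrate condition $(iii)$ of Lemma~\ref{L:mult-nutne} on a concrete set, consider a square $X\subset\mathbb R^2$. Its two diagonals join two pairs of extreme points and meet at the centre of the square, which is an internal point of both, so condition $(iii)$ yields that every $u\in M(A_c(X))$ attains the same value at all four vertices. Since every point of $X$ is a convex combination of the vertices and $u$ is affine, $u$ is constant; as constant functions are always multipliers, $M(A_c(X))$ consists precisely of the constant functions and, consequently, $m(X)=X$ by Proposition~\ref{P:m(X)}$(2)$.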
Next we are going to show that condition $(ii)$ from Lemma~\ref{L:mult-nutne} characterizes multipliers. To this end we will use the following lemma.
\begin{lemma}\label{L:extending} Assume that $X$ is a compact convex set. Let $f\in C(\overline{\ext X})$. Then $f$ may be extended to an affine continuous function on $X$ if and only if \[ \forall\mu,\nu\in M_1(\overline{\ext X})\colon r(\mu)=r(\nu) \Rightarrow \int_{\overline{\ext X}} f\,\mbox{\rm d}\mu=\int_{\overline{\ext X}} f\,\mbox{\rm d}\nu. \] \end{lemma}
\begin{proof} The `only if part' is obvious and the `if part' follows from \cite[Theorem II.4.5]{alfsen}. \iffalse Let $F\colon M_1(\overline{\ext X})\to \mathbb R$ be defined as $F(\mu)=\int f\,\mbox{\rm d}\mu$, $\mu\in M_1(\overline{\ext X})$. Then $F$ is an affine continuous function on the compact convex set $M_1(\overline{\ext X})$. Let the extension $h$ of $f$ be given by \[ h(x)=\int f\,\mbox{\rm d}\mu,\quad \mu\in M_1(\overline{\ext X}), r(\mu)=x. \] Then $h$ is well defined. If $r\colon M_1(\overline{\ext X})\to X$ denotes the barycentric mapping restricted on $M_1(\overline{\ext X})$, then $r$ is an affine continuous surjection (see \cite[Theorem 3.81]{lmns}). Further, $F=h\circ r$. Now it is easy to observe from the properties of $r$ that $h$, as well as $F$, is an affine continuous function on $X$. Since it extends $f$, the proof is finished.\fi \end{proof}
\begin{prop}\label{P:mult-charact} Let $u\in A_c(X)$. Then the following assertions are equivalent. \begin{enumerate}[$(1)$]
\item $u$ is a multiplier for $A_c(X)$.
\item If $\mu,\nu\in M_1(\overline{\ext X})$ have the same barycenter and $B\subset \mathbb R$ is a Borel set, then $\mu(u^{-1}(B))=\nu(u^{-1}(B))$ and, moreover, if this number is strictly positive, then the probabilities
$$\frac{\mu_{|u^{-1}(B)}}{\mu(u^{-1}(B))}\mbox{ and }\frac{\nu_{|u^{-1}(B)}}{\nu(u^{-1}(B))}$$ have the same barycenter.
\end{enumerate} \end{prop}
\begin{proof} $(1)\implies(2)$ is proved in Lemma~\ref{L:mult-nutne}$(ii)$.
$(2)\implies(1)$: Let $h\in A_c(X)$. It is enough to prove that $h\cdot u|_{\overline{\ext X}}$ may be extended to an affine continuous function. We will check the condition from Lemma~\ref{L:extending}. Take two probability measures $\mu,\nu$ on $\overline{\ext X}$ with the same barycenter. By $(2)$ we get that $$\int h\cdot g\,\mbox{\rm d}\mu=\int h\cdot g\,\mbox{\rm d}\nu,$$ whenever $g$ is the characteristic function of $u^{-1}(B)$, where $B\subset \mathbb R$ is a Borel set. It follows that this equality holds for any bounded function $g$ measurable with respect to the $\sigma$-algebra generated by $$u^{-1}(B), B\subset\mathbb R \mbox{ Borel}.$$ In particular, the choice $g=u$ is possible, which completes the proof. \end{proof}
Now we are going to show that condition $(i)$ from Lemma~\ref{L:mult-nutne} characterizes multipliers on simplices. To this end we need the following lemma:
\begin{lemma}\label{L:simplex extension} Assume that $X$ is a simplex. Let $f\in C(\overline{\ext X})$. Then $f$ may be extended to an affine continuous function on $X$ if and only if $$\forall\mu\in M_1(\overline{\ext X})\colon r(\mu)\in\overline{\ext X}\Rightarrow f(r(\mu))=\int f\,\mbox{\rm d}\mu.$$ \end{lemma}
\begin{proof} The `only if part' is obvious; let us prove the `if part'. For each $x\in X$, let $\delta_x$ denote the unique maximal measure representing $x$. Let $\tilde{f}\in C(X)$ be an extension of $f$ provided by the Tietze theorem. Then the function $h(x)=\int_{X} \tilde{f}\,\mbox{\rm d}\delta_x$, $x\in X$, is a strongly affine function on $X$ by \cite[Theorem 6.8(c)]{lmns}. Since maximal measures are supported by $\overline{\ext X}$ and $\tilde{f}|_{\overline{\ext X}}=f$, we deduce that $h=f$ on $\overline{\ext X}$ by the assumption. Since $h$ is strongly affine and continuous on $\overline{\ext X}$, $h$ is continuous on $X$ by \cite[Theorem 3.5]{lusp}. Hence $h\in A_c(X)$ is the desired extension of $f$.
\begin{prop}\label{P:simplex mult} Assume that $X$ is a simplex. Let $u\in A_c(X)$. Then $u\in M(A_c(X))$ if and only if $$\forall\mu\in M_1(\overline{\ext X})\colon r(\mu)\in\overline{\ext X}\Rightarrow u\mbox{ is constant on }\operatorname{spt}\mu.$$ \end{prop}
\begin{proof} Implication $\implies$ is proved in Lemma~\ref{L:mult-nutne}$(i)$. Let us prove the converse implication. Assume that $u$ satisfies the given property and fix $f\in A_c(X)$. We will show that there is $g\in A_c(X)$ coinciding with $u\cdot f$ on $\overline{\ext X}$.
We will check the condition provided by Lemma~\ref{L:simplex extension}. To this end take any $\mu\in M_1(\overline{\ext X})$ such that $r(\mu)\in\overline{\ext X}$. Then $$\int uf\,\mbox{\rm d}\mu=\int u(r(\mu)) f\,\mbox{\rm d}\mu= u(r(\mu)) \int f\,\mbox{\rm d}\mu= u(r(\mu)) f(r(\mu)),$$ where in the first equality we used that $u$ is constant on $\operatorname{spt}\mu$. This completes the argument. \end{proof}
We now collect some examples illustrating the use of the above characterizations on concrete compact convex sets. Most of them are defined as the state spaces of certain function spaces, hence we use the notation from Section~\ref{ssc:ch-fs}.
\begin{example2}\label{E:more points} (1) Let $X$ be a Bauer simplex, i.e., $X=M_1(K)$ for some compact topological space $K$. Then $M(A_c(X))=A_c(X)\, (=C(K))$. This follows easily from the definitions. Alternatively, it follows from Proposition~\ref{P:simplex mult} as $\ext X$ is closed and the only probabilities carried by $\overline{\ext X}=\ext X$ having barycenter in $\overline{\ext X}=\ext X$ are the Dirac measures. Moreover, in this case $m(X)=\ext X\ (=\{\varepsilon_t;\, t\in K\})$. This follows from Proposition~\ref{P:m(X)} together with the Urysohn lemma.
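For instance, the inclusion $m(X)\subset\ext X$ may be seen as follows: if $x\in M_1(K)$ is not a Dirac measure, then its image under the embedding $t\mapsto\varepsilon_t$ is a probability measure supported by $\ext X$ with barycenter $x$ whose support contains two distinct points $\varepsilon_{t_1},\varepsilon_{t_2}$; the Urysohn lemma provides $u\in C(K)=M(A_c(X))$ with $u(t_1)\ne u(t_2)$, so $x\notin m(X)$ by Proposition~\ref{P:m(X)}. The converse inclusion holds since any point of $\overline{\ext X}$ belongs to $m(X)$ (cf. the proof of Lemma~\ref{L:mult-nutne}$(i)$).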
(2) Let $K=[0,1]$, $$E=\{f\in C([0,1]);\, f(\tfrac12)=\tfrac12(f(0)+f(1))\}.$$ Then $E$ is a function space, $\Ch_E K=[0,\frac12)\cup(\frac12,1]$. Moreover, from \cite[Theorem 2]{stacey} it follows that $X=S(E)$ is a simplex. Indeed, it is not difficult to verify that the mapping \[ t\mapsto\begin{cases}\varepsilon_t,& t\in K\setminus\{\frac12\},\\ \frac12(\varepsilon_0+\varepsilon_1),&t=\frac12, \end{cases} \] satisfies the assumptions of the aforementioned theorem.
Using Proposition~\ref{P:simplex mult} we see that \begin{equation} \label{eq:priklad-multi-konst} M(A_c(X))=\{\Phi(f);\, f\in E, f(0)=f(1)=f(\tfrac12)\}. \end{equation} Using Proposition~\ref{P:m(X)} we deduce $$m(X)=\{\phi(t);\, t\in [0,1]\} \cup [\phi(0),\phi(1)],$$ where $[\phi(0),\phi(1)]$ denotes the respective segment in $X$.
Indeed, the inclusion `$\supset$' follows from \eqref{eq:priklad-multi-konst} and Proposition~\ref{P:m(X)}$(2)$.
To see the inclusion `$\subset$', let $s\in m(X)$ be given. Then there exists a probability measure $\mu\in M_s(X)\cap M_1(\overline{\ext X})$. Then $\mu$ is supported by $\{\phi(t);\, t\in [0,1]\}$. By Proposition~\ref{P:m(X)}$(3)$ we obtain that $\mu$ is either the Dirac measure at some $\phi(t)$, $t\in [0,1]\setminus\{\frac12\}$ or $\mu$ is supported by the set $\{\phi(0),\phi(\frac12),\phi(1)\}$. Then it follows that $s\in [\phi(0),\phi(1)]$.
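For instance, the necessity of the condition $f(0)=f(1)=f(\tfrac12)$ in \eqref{eq:priklad-multi-konst} may also be checked directly: the measure $\mu=\frac12(\varepsilon_{\phi(0)}+\varepsilon_{\phi(1)})$ is supported by $\ext X$ and satisfies $$\mu(\Phi(f))=\tfrac12(f(0)+f(1))=f(\tfrac12)=\Phi(f)(\phi(\tfrac12)),\quad f\in E,$$ so $r(\mu)=\phi(\tfrac12)$, which belongs to $\overline{\ext X}$ as the limit of $\phi(t)$, $t\to\tfrac12$. Hence any multiplier $\Phi(f)$ is constant on $\operatorname{spt}\mu=\{\phi(0),\phi(1)\}$ by Proposition~\ref{P:simplex mult}, i.e., $f(0)=f(1)$, and then also $f(\tfrac12)=\tfrac12(f(0)+f(1))=f(0)$.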
(3) Let $K=[0,1]$ and $$E=\left\{f\in C([0,1]);\, f(0)=\int_0^1 f\right\}.$$ Then it is easy to check that $E$ is a function space and $\Ch_E K=(0,1]$. Using \cite[Theorem 2]{stacey} we obtain that $X=S(E)$ is a simplex. By Proposition~\ref{P:simplex mult} we deduce that $M(A_c(X))$ consists only of constant functions. In particular, $m(X)=X$.
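For instance, applying Proposition~\ref{P:simplex mult} to the image $\phi(\lambda)$ of Lebesgue measure $\lambda$ on $[0,1]$ shows this directly: for every $f\in E$ we have $$\int f\,\mbox{\rm d}\lambda=f(0),$$ so $r(\phi(\lambda))=\phi(0)$, which belongs to $\overline{\ext X}$ as the limit of $\phi(t)$, $t\to0+$, while $\operatorname{spt}\phi(\lambda)=\phi([0,1])$. Hence any multiplier $\Phi(f)$ is constant on $\phi([0,1])$, i.e., $f$ is constant.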
(4) Let $K=[0,5]$, $$E=\{f\in C([0,5]);\, f(1)=\tfrac12(f(0)+f(2)),f(4)=\tfrac12(f(3)+f(5))\}.$$ Then $E$ is a function space, $\Ch_E K=[0,5]\setminus\{1,4\}$ and $X=S(E)$ is a simplex (again we use \cite[Theorem 2]{stacey}). Using Proposition~\ref{P:simplex mult} we see that $$M(A_c(X))=\{\Phi(f);\, f\in E, f(0)=f(1)=f(2)\ \&\ f(3)=f(4)=f(5)\}.$$ Using Proposition~\ref{P:m(X)} we deduce $$m(X)=\{\phi(t);\, t\in [0,5]\} \cup [\phi(0),\phi(2)]\cup[\phi(3),\phi(5)].$$
This example is just a more complicated variant of example (2) above. But it may be used to illustrate the optimality of assertions $(iii)$ and $(iv)$ of Lemma~\ref{L:mult-nutne}. Indeed, set $$\mu_1=\tfrac14(\varepsilon_{\phi(0)}+\varepsilon_{\phi(2)})+\tfrac12\varepsilon_{\phi(4)}, \quad\nu_1=\tfrac14(\varepsilon_{\phi(3)}+\varepsilon_{\phi(5)})+\tfrac12\varepsilon_{\phi(1)}.$$ Then $\mu_1$ and $\nu_1$ are mutually orthogonal probabilities supported by $\overline{\ext X}$ with the same barycenter. Their supports have exactly three points, but there are multipliers which are not constant on the union of these supports.
Further, set $$ \mu_2=\tfrac14(\varepsilon_{\phi(0)}+\varepsilon_{\phi(2)}+\varepsilon_{\phi(3)}+\varepsilon_{\phi(5)}), \quad \nu_2=\tfrac12(\varepsilon_{\phi(1)}+\varepsilon_{\phi(4)}).$$ Again, $\mu_2$ and $\nu_2$ are mutually orthogonal probabilities supported by $\overline{\ext X}$ with the same barycenter. The support of $\mu_2$ has four points and the support of $\nu_2$ has two points, but there are multipliers which are not constant on the union of these supports.
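A quick check that each pair indeed has a common barycenter: evaluating on $\Phi(f)$, $f\in E$, and using the relations $f(1)=\tfrac12(f(0)+f(2))$ and $f(4)=\tfrac12(f(3)+f(5))$, we obtain $$\mu_1(\Phi(f))=\tfrac14(f(0)+f(2))+\tfrac12 f(4)=\tfrac12 f(1)+\tfrac12 f(4)=\tfrac14(f(3)+f(5))+\tfrac12 f(1)=\nu_1(\Phi(f))$$ and $$\mu_2(\Phi(f))=\tfrac14(f(0)+f(2)+f(3)+f(5))=\tfrac12(f(1)+f(4))=\nu_2(\Phi(f)).$$ Mutual orthogonality is clear, as the measures in each pair have disjoint supports.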
(5) Let $K=[0,1]$ and $$E=\{f\in C([0,1]);\, f(0)+f(\tfrac13)=f(\tfrac23)+f(1)\}.$$ Then $E$ is a function space with $\Ch_E K=[0,1]$ (this is easy to see from the fact that for each $t\in K$ there exists a nonnegative function $f\in E$ that attains $0$ precisely at $t$). The measures $\frac12(\varepsilon_{\phi(0)}+\varepsilon_{\phi(1/3)})$ and $\frac12(\varepsilon_{\phi(2/3)}+\varepsilon_{\phi(1)})$ are then two maximal measures on $X=S(E)$ with the same barycenter in $S(E)$, and hence $X$ is not a simplex. We claim that $$M(A_c(X))=\{\Phi(f);\, f\in E, f(0)=f(\tfrac13)=f(\tfrac23)=f(1)\}.$$ Inclusion `$\subset$' follows from Lemma~\ref{L:mult-nutne}$(iii)$. To prove the converse we will use Proposition~\ref{P:mult-charact} using the reformulation from Remark~\ref{rem:nutne}(2).
Take any $\mu\in M(K)$ such that $\mu\in E^\perp$. It follows from the definition of $E$ and the bipolar theorem that $\mu$ is a multiple of $$\varepsilon_0+\varepsilon_{1/3}-\varepsilon_{2/3}-\varepsilon_1.$$
Assume that $f\in E$ is such that $f(0)=f(\frac13)=f(\frac23)=f(1)$. Then clearly $\mu(f)=0$. Since $f$ is constant on the support of $\mu$, given any Borel set $B\subset\mathbb R$, the set $f^{-1}(B)$ either contains the support of $\mu$ or is disjoint from the support of $\mu$. In both cases $\mu|_{f^{-1}(B)}\in E^\perp$.
Using Proposition~\ref{P:m(X)} we get $$m(X)=\{\phi(t);\, t\in[0,1]\}\cup\operatorname{conv}\{\phi(0),\phi(1/3),\phi(2/3),\phi(1)\}.$$
(6) Let $K=[0,1]$ and $\mu,\nu\in M_1(K)$ be two mutually orthogonal probabilities such that $\mu,\nu\notin \{\varepsilon_t;\, t\in K\}$. Let $$E=\left\{f\in C([0,1]);\, \int f\,\mbox{\rm d}\mu=\int f\,\mbox{\rm d}\nu\right\}.$$ We claim that $E$ is a function space and $\Ch_E K=[0,1]$.
Obviously, $E$ contains constant functions. If two different points $t,s\in K$ satisfy $f(t)=f(s)$ for every $f\in E$, then by the bipolar theorem there exists $c\in\mathbb R$ such that $\varepsilon_t-\varepsilon_s=c(\mu-\nu)$. Up to relabelling $s$ and $t$ we may and shall assume that $c\ge0$. Then by the orthogonality of $\mu$ and $\nu$ we have \[ 2=\norm{\varepsilon_t-\varepsilon_s}=c\norm{\mu-\nu}=2c, \] i.e., $c=1$ and hence $\varepsilon_t+\nu=\mu+\varepsilon_s$. Then \[ 1+\nu(\{t\})=\mu(\{t\}). \] Since $\mu(\{t\})\le 1$, we have $\nu(\{t\})=0$ and $\mu=\varepsilon_t$. But this contradicts the choice of $\mu$. Hence $E$ separates the points of $K$, so it is a function space.
Further we check that $M_t(E)=\{\varepsilon_t\}$ for every $t\in K$. Indeed, let $t\in K$ be arbitrary and $\lambda\in M_t(E)\setminus\{\varepsilon_t\}$. Then we may assume without loss of generality that $\lambda(\{t\})=0$. Then $\varepsilon_t-\lambda$ is again by the bipolar theorem a multiple of $\mu-\nu$, hence $\varepsilon_t-\lambda=c(\mu-\nu)$ for some $c\in\mathbb R$. Up to relabelling $\mu$ and $\nu$ we may and shall assume that $c\ge0$. As above we have \[ 2=\norm{\varepsilon_t-\lambda}=c\norm{\mu-\nu}=2c, \] which gives $c=1$. Hence $\varepsilon_t+\nu=\mu+\lambda$, which yields \[ 1+\nu(\{t\})=\mu(\{t\})\le 1. \] We obtain $\nu(\{t\})=0$ and $\mu(\{t\})=1$. Hence $\mu=\varepsilon_t$ and $\nu=\lambda$. But this contradicts our assumptions on measures $\mu$ and $\nu$. This completes the proof that $\Ch_E K=[0,1]$.
Next we verify that $X=S(E)$ is not a simplex. Indeed, the measures $\phi(\mu)$ and $\phi(\nu)$ are different measures supported by $\ext X=\phi(K)$ (and hence maximal) with the same barycenter. Thus $X$ is not a simplex.
We claim that $$M(A_c(X))=\{\Phi(f);\, f\in E, f\mbox{ is constant on }\operatorname{spt}\mu\cup\operatorname{spt}\nu\}.$$ Indeed, by the bipolar theorem $E^\perp$ is formed by the multiples of $\mu-\nu$. Inclusion `$\supset$' follows from Proposition~\ref{P:mult-charact} similarly as in example (5) above. To prove the converse assume that $f\in E$ but $f$ is not constant on $\operatorname{spt}\mu\cup\operatorname{spt}\nu$. It follows that there is
an open set $U\subset\mathbb R$ such that $0<\mu(f^{-1}(U))+\nu(f^{-1}(U))<2$. Then $\lambda=\mu-\nu\in E^\perp$, but $\lambda|_{f^{-1}(U)}$ is not a multiple of $\mu-\nu$ and hence it does not belong to $E^\perp$. It thus follows from Lemma~\ref{L:mult-nutne} that $\Phi(f)\notin M(A_c(X))$.
Using Proposition~\ref{P:m(X)} we now deduce that $$m(X)=\{\phi(t);\, t\in [0,1]\}\cup \overline{\operatorname{conv}}\{\phi(t);\, t\in\operatorname{spt}\mu\cup\operatorname{spt}\nu\}.$$
(7) Let $K=[0,7]$ and $$E=\{f\in C([0,7]);\, f(0)+f(1)=f(2)+f(3)\ \&\ f(4)+f(5)=f(6)+f(7)\}.$$ Then $E$ is a function space, $\Ch_E K=[0,7]$ and $X=S(E)$ is not a simplex (the reasoning is similar as in example (6)). By the bipolar theorem $E^\perp$ is formed by linear combinations of $\varepsilon_0+\varepsilon_1-\varepsilon_2-\varepsilon_3$ and $\varepsilon_4+\varepsilon_5-\varepsilon_6-\varepsilon_7$. Similarly as in (5) and (6) we get $$\begin{aligned}
M(A_c(X))=\{\Phi(f), f\in E, f(0)=f(1)&=f(2)=f(3)\\
&\&\ f(4)=f(5)=f(6)=f(7)\}.
\end{aligned}$$ In particular, $$\tfrac14(\varepsilon_{\phi(0)}+\varepsilon_{\phi(1)}+\varepsilon_{\phi(4)}+\varepsilon_{\phi(5)})\quad\mbox{ and } \quad\tfrac14(\varepsilon_{\phi(2)}+\varepsilon_{\phi(3)}+\varepsilon_{\phi(6)}+\varepsilon_{\phi(7)})$$ are two mutually orthogonal maximal probability measures on $X$ with the same barycenter (for $f\in E$ the defining relations give $f(0)+f(1)+f(4)+f(5)=f(2)+f(3)+f(6)+f(7)$), but there are multipliers which are not constant on the supports of these measures. \end{example2}
\section{On preservation of extreme points}\label{s:preserveext}
In this section we investigate the relationship between $\ext X$ and $\ext S(H)$ for an intermediate function space $H$ on a compact convex set $X$. This is motivated, among others, by the need to clarify when Corollary~\ref{cor:rovnost na ext}$(b)$ may be applied.
Given a compact convex set $X$ and an intermediate function space $H$ on $X$, $\iota\colon X\to S(H)$ and $\pi\colon S(H)\to X$ will be the mappings provided by Lemma~\ref{L:intermediate}.
We start by the following easy lemma.
\begin{lemma}\label{L:ext a fibry} Let $X$ be a compact convex set and let $H$ be an intermediate function space. \begin{enumerate}[$(a)$]
\item Let $x\in \ext X$. Then $\pi^{-1}(x)$ is a closed face of $S(H)$, so it contains an extreme point of $S(H)$. In particular, $\pi(\ext S(H))\supset\ext X$.
\item If $H$ is determined by extreme points, then $\pi(\ext S(H))\subset \overline{\ext X}$. \end{enumerate} \end{lemma}
\begin{proof} Assertion $(a)$ is obvious (using the Krein-Milman theorem). Assertion $(b)$ follows easily from Lemma~\ref{L:intermediate}$(d)$. \end{proof}
\begin{example2} It may happen that for some $\varphi\in\ext S(H)$ we have $\pi(\varphi)\notin\ext X$, even if $H$ is determined by extreme points:
Let $K=[0,1]$ and $$E=\{f\in C([0,1]);\, f(\tfrac{1}{2})=\tfrac{1}{2}(f(0)+f(1))\}.$$ Then $X=S(E)$ is a simplex and $\Ch_E K=[0,\tfrac12)\cup(\tfrac12,1]$ (see Example~\ref{E:more points}(2)).
Let $$\widetilde{H}=\{f\in \Ba_1^b([0,1]);\, f(\tfrac{1}{2})=\tfrac{1}{2}(f(0)+f(1))\}.$$ Then $\widetilde{H}$ is formed exactly by pointwise limits of uniformly bounded sequences from $E$. In particular, $\widetilde{H}$ is determined by $\Ch_EK$. Clearly, $\widetilde{H}$ fits to the scheme from Lemma~\ref{L:function space}$(c)$. Hence, using notation from the quoted lemma, $H=V(\widetilde{H})$ is an intermediate function space on $X$. Let $$F=\left\{\varphi\in S(H);\, \varphi\left(V\left(1_{\left[0,\tfrac12-\delta\right]\cup\left\{\tfrac12\right\}\cup\left[\tfrac12+\delta,1\right]}\right)\right)=0\mbox{ for each }\delta\in(0,\tfrac12)\right\}.$$ Then $F$ is nonempty, it contains for example any cluster point of the sequence $\imath(\frac12-\tfrac{1}{n+1})$ in $S(H)$ (where $\imath\colon K\to S(H)$ is the mapping from Lemma~\ref{L:function space}$(c)$). Further, it is clear that $F$ is a closed face of $S(H)$, so it contains some extreme points of $S(H)$.
Finally observe that $\pi(F)=\{\phi({\frac12})\}$, where $\phi\colon K\to X$ is the evaluation mapping from Section~\ref{ssc:ch-fs}. Indeed, assume $\varphi\in F$. By Lemma~\ref{L:reprezentace} there is a net $(x_\alpha)$ in $X=S(E)$ such that $\iota(x_\alpha)\to\varphi$. By the Hahn-Banach theorem each $x_\alpha$ may be extended to a state on $C([0,1])$, represented by a probability measure $\mu_\alpha$ on $[0,1]$. Then for each $\delta\in(0,\frac12)$ we have $$\mu_\alpha([0,\tfrac12-\delta]\cup\{\tfrac12\}\cup[\tfrac12+\delta,1])\to0,$$ i.e., $$\mu_\alpha((\tfrac12-\delta,\tfrac12)\cup(\tfrac12,\tfrac12+\delta))\to1.$$ Let $\mu$ be any weak$^*$-cluster point of the net $(\mu_\alpha)$. Then $\mu([\tfrac12-\delta,\tfrac12+\delta])=1$ for each $\delta\in(0,\tfrac{1}{2})$. Thus $\mu=\varepsilon_{\frac12}$. It follows that $\mu_\alpha\to\varepsilon_{\frac12}$ in the weak$^*$-topology. We conclude that $\pi(\varphi)=\phi(\frac12)$.
Since $\phi(\frac12)\notin\ext X=\phi(\Ch_E K)$, the proof is complete. \end{example2}
In view of Lemma~\ref{L:platnost soucinu} it is important to know when $\iota(x)$ is an extreme point of $S(H)$ whenever $x\in \ext X$. A general characterization is given in the following lemma.
\begin{lemma}\label{L:zachovavani ext} Let $X$ be a compact convex set and let $H$ be an intermediate function space. Let $x\in \ext X$. Then the following are equivalent: \begin{enumerate}[$(1)$]
\item $\iota(x)\in \ext S(H)$.
\item $\iota(x)\in\ext\pi^{-1}(x)$.
\item If $(x_i)$ and $(y_i)$ are two nets in $X$ converging to $x$ such that $$a(\tfrac{x_i+y_i}2)\to a(x) \mbox{ for each }a\in H,$$ then $a(x_i)\to a(x)$ and $a(y_i)\to a(x)$ for each $a\in H$. \end{enumerate}
\end{lemma}
\begin{proof} Implication $(1)\implies(2)$ is trivial. To show $(2)\implies(1)$ we recall that $\pi^{-1}(x)$ is a closed face of $S(H)$.
$(3)\implies(2)$: Assume that $\iota(x)=\frac12(\varphi_1+\varphi_2)$ for some $\varphi_1,\varphi_2\in \pi^{-1}(x)$. By Lemma~\ref{L:reprezentace} there are nets $(x_i)$ and $(y_i)$ in $X$ converging to $x$ such that $\iota(x_i)\to\varphi_1$ and $\iota(y_i)\to\varphi_2$ in $S(H)$.
Further, $$\frac12(\iota(x_i)+\iota(y_i)) \to \frac12(\varphi_1+\varphi_2)=\iota(x)\mbox{ in }S(H),$$ hence $a(\frac{x_i+y_i}2)\to a(x)$ for each $a\in H$. The assumption yields $a(x_i)\to a(x)$ and $a(y_i)\to a(x)$ for each $a\in H$, i.e., $\iota(x_i)\to \iota(x)$ and $\iota(y_i)\to \iota(x)$ in $S(H)$. In other words, $\varphi_1=\varphi_2=\iota(x)$. Hence, $\iota(x)\in\ext \pi^{-1}(x)$.
$(1)\implies(3)$: Assume that $(x_i)$ and $(y_i)$ are nets satisfying the given properties. Up to passing to subnets we may assume that $\iota(x_i)\to \varphi_1$ and $\iota(y_i)\to\varphi_2$ in $S(H)$. Then, given $a\in H$ we have $$\frac12(\varphi_1(a)+\varphi_2(a))=\lim_i a(\tfrac{x_i+y_i}2)=a(x),$$ so $\frac12(\varphi_1+\varphi_2)=\iota(x)$. By the assumption we get that $\varphi_1=\varphi_2=\iota(x)$, hence $a(x_i)\to a(x)$ and $a(y_i)\to a(x)$ for each $a\in H$. This completes the proof. \end{proof}
The following two observations provide a sufficient condition for a point $x\in\ext X$ to satisfy $\iota(x)\in\ext S(H)$.
\begin{lemma}\label{lem:contin} Let $X$ be a compact convex set and $H$ an intermediate function space. Let $x\in\ext X$. Assume that each $f\in H$ is continuous at $x$. Then $\pi^{-1}(x)=\{\iota(x)\}$, in particular, $\iota(x)\in \ext S(H)$. \end{lemma}
\begin{proof} Let $\varphi\in \pi^{-1}(x)$. Let $(x_\nu)$ be a net provided by Lemma~\ref{L:reprezentace}. Then for each $f\in H$ we have $$\varphi(f)=\lim_\nu f(x_\nu)=f(x),$$ so $\varphi=\iota(x)$. \end{proof}
\begin{cor}\label{c:spoj-ext} Let $X$ be a compact convex set and $H$ an intermediate function space. Assume that each function from $H$ is strongly affine. Let $x\in \ext X$. \begin{enumerate}[$(i)$]
\item Assume that for each $f\in H$ its restriction $f|_{\overline{\ext X}}$ is continuous at $x$. Then $\pi^{-1}(x)=\{\iota(x)\}$, in particular, $\iota(x)\in \ext S(H)$.
\item If $x$ is an isolated extreme point, then $\pi^{-1}(x)=\{\iota(x)\}$, in particular, $\iota(x)\in \ext S(H)$. \end{enumerate} \end{cor}
\begin{proof}
Since $(ii)$ follows from $(i)$, it is enough to verify $(i)$. To this end we need to prove that any function $f\in H$ is continuous at $x\in\ext X$ provided its restriction $f|_{\overline{\ext X}}$ is continuous at $x$. So let $\varepsilon>0$ be given. We find an open neighbourhood $U$ of $x$ such that $\abs{f(y)-f(x)}<\varepsilon$ for every $y\in U\cap \overline{\ext X}$. By \cite[Lemma 2.2]{rs-fragment} there exists a neighbourhood $V$ of $x$ such that for each $y\in V$ and $\mu\in M_y(X)$ we have $\mu(U)>1-\varepsilon$. For arbitrary $y\in V$ we pick a measure $\mu_y\in M_y(X)\cap M_1(\overline{\ext X})$. Then \[ \begin{aligned} \abs{f(y)-f(x)}&=\abs{\mu_y(f)-\mu_y(f(x))}\le \int_{\overline{\ext X}}\abs{f-f(x)}\,\mbox{\rm d}\mu_y\\ &\le\int_{\overline{\ext X}\cap U} \abs{f-f(x)}\,\mbox{\rm d}\mu_y+2\norm{f}\mu_y(X\setminus U)\\ &\le \varepsilon+2\varepsilon\norm{f}. \end{aligned} \] Hence $f$ is continuous at $x$ and Lemma~\ref{lem:contin} applies.
\begin{example}\label{ex:Bauer}
There are metrizable Bauer simplices $X_1, X_2$ and intermediate function spaces $H_1,H_2$ on $X_1, X_2$, respectively, such that the following properties are satisfied (for $i=1,2$):
\begin{enumerate}[$(i)$]
\item $H_i\subset A_1(X_i)$.
\item $Z(H_i)=H_i$ and $M(H_i)=A_c(X_i)$.
\item There is $x\in\ext X_1$ with $\iota(x)\notin \ext S(H_1)$ and $\iota(\ext X_1\setminus\{x\})\subset\ext S(H_1)$.
\item $\iota(\ext X_2)\cap \ext S(H_2)=\emptyset$.
\item $S(H_i)$ is a Bauer simplex and $S(H_1)$ is metrizable.
\end{enumerate}
\end{example}
\begin{proof} We will use the approach of Lemma~\ref{L:function space}, together with the respective notation. We construct first $X_1$ and $H_1$ and then the second pair.
(1) Let $K_1=[0,1]$ and $E_1=C(K_1)$ and set $X_1=S(E_1)=M_1(K_1)$. Then $X_1$ is a metrizable Bauer simplex and we have $\Ch_{E_1} K_1=[0,1]$.
Set $$\begin{aligned} \widetilde{H_1}=\Bigl\{f:[0,1]\to\mathbb R;\, f&\mbox{ is continuous on }[0,\tfrac12)\cup(\tfrac12,1], \\ & \lim_{t\to\frac12-} f(t) \mbox{ and } \lim_{t\to\frac12+} f(t)\mbox{ exist in }\mathbb R \\& \mbox{ and } f(\tfrac12)=\tfrac12(\lim_{t\to\frac12-} f(t)+\lim_{t\to\frac12+} f(t)) \Bigr\}.\end{aligned}$$ Clearly $\widetilde{H_1}\subset\Ba_1^b(K_1)$. Further, $\widetilde{H_1}$ fits into the scheme from Lemma~\ref{L:function space} and $H_1=V(\widetilde{H_1})\subset A_1(X_1)$. In particular, $H_1$ is determined by $$\ext X_1=\phi(\Ch_{E_1} K_1)=\{\varepsilon_t;\, t\in [0,1]\}.$$
Let us characterize $\ext S(H_1)$. We fix $t\in [0,1]$ and describe $\pi^{-1}(\phi(t))$. If $t\ne\frac12$, then any $f\in H_1$ is continuous at $\phi(t)$ when restricted to $\overline{\ext X_1}$, hence $\pi^{-1}(\phi(t)) =\{\imath(t)\}$ by Corollary~\ref{c:spoj-ext}. In particular, in this case $\imath(t)=\iota(\varepsilon_t)\in\ext S(H_1)$.
Next assume that $t=\frac12$. Let $\varphi$ be an extreme point of $\pi^{-1}(\phi(t))$.
It follows from Lemma~\ref{L:function space}$(d)$ that there is a net $(t_\nu)$ in $[0,1]$ converging to $\frac{1}{2}$ such that $\varphi=\lim_\nu\imath(t_\nu)$. Up to passing to a subnet we may assume that one of the following possibilities takes place:
\begin{itemize}
\item $t_\nu=\frac12$ for all $\nu$. Then $\varphi=\imath(\frac12)$.
\item $t_\nu<\frac12$ for all $\nu$. Then
$$\varphi(Vf)=\lim_{s\to\frac12-} f(s),\quad f\in \widetilde{H_1}.$$
Denote this functional by $\varphi_-$.
\item $t_\nu>\frac12$ for all $\nu$. Then
$$\varphi(Vf)=\lim_{s\to\frac12+} f(s),\quad f\in \widetilde{H_1}.$$
Denote this functional by $\varphi_+$.
\end{itemize} Thus, $\ext\pi^{-1}(\phi(\frac12))\subset\{\imath(\frac12),\varphi_-,\varphi_+\}$. Since $\imath(\frac12)=\frac12(\varphi_-+\varphi_+)$, we deduce that $\ext\pi^{-1}(\phi(\frac12))=\{\varphi_-,\varphi_+\}$ and $\pi^{-1}(\phi(\frac12))$ is the segment connecting $\varphi_-$ and $\varphi_+$.
We deduce that $$\ext S(H_1)=\{\imath(t);\, t\in [0,1]\setminus \{\tfrac12\}\}\cup\{\varphi_-,\varphi_+\}.$$ Indeed, inclusion `$\supset$' follows from Lemma~\ref{L:ext a fibry}$(a)$ using the above analysis of the fibers of $\pi$. The converse inclusion follows by using moreover Lemma~\ref{L:ext a fibry}$(b)$.
Therefore, $\ext S(H_1)$ is a metrizable compact set homeomorphic to the union of two disjoint closed intervals. Since $H_1$ is canonically linearly isometric to the space $C(\ext S(H_1))$, we deduce that $S(H_1)$ is also a metrizable Bauer simplex.
It follows that $$Z(A_c(S(H_1)))=M(A_c(S(H_1)))=A_c(S(H_1))\quad\mbox{and}\quad m(S(H_1))=\ext S(H_1).$$ In particular, $Z(H_1)=H_1$.
Further, $\frac12\in\Ch_{E_1}K_1$, but $\imath(\frac12)\notin \ext S(H_1)$. Hence, not only is the extreme point $\phi(\frac12)$ not preserved, but also $\imath(\frac12)\notin m(S(H_1))$, so $M(H_1)\subsetneqq Z(H_1)$. It follows easily from Lemma~\ref{L:ob-soucin} and Lemma~\ref{L:platnost soucinu} that $M(H_1)=A_c(X_1)$ (and hence it can be identified with $E_1$).
(2) Let $K_2=\mathbb T=\{z\in\mathbb C;\, \abs{z}=1\}$, $E_2=C(\mathbb T)$ and set $X_2=S(E_2)$. Then $X_2$ is a metrizable Bauer simplex and $\Ch_{E_2} \mathbb T=\mathbb T$.
Set \[ \begin{aligned} \widetilde {H_2}=\Bigl\{f\in\ell^\infty(\mathbb T);\, \forall z\in \mathbb T \colon & \lim_{w\to z-} f(w) \mbox{ and } \lim_{w\to z+} f(w)\mbox{ exist in }\mathbb R \\& \mbox{ and } f(z)=\tfrac12(\lim_{w\to z-} f(w)+\lim_{w\to z+} f(w)) \Bigr\}. \end{aligned} \] To unify the meaning of one-sided limits we assume that the circle is oriented counterclockwise.
It is clear that $E_2\subset \widetilde{H_2}$. Further, given any $f\in \widetilde{H_2}$ we define the gap at a point $z\in\mathbb T$ as the difference of the one-sided limits at that point. It follows from the existence of one-sided limits at each point that, given $\varepsilon>0$, there are only finitely many points where the absolute value of the gap is above $\varepsilon$ (otherwise such points would accumulate at some $z\in\mathbb T$ and one of the one-sided limits at $z$ could not exist). Therefore, $f$ is continuous except at countably many points and hence $f$ is a Baire-one function.
Thus $\widetilde{H_2}$ fits into the scheme from Lemma~\ref{L:function space}. Then $H_2=V(\widetilde{H_2})$ is an intermediate function space on $X_2$ contained in $A_1(X_2)$. Similarly as in (1) above we see that $$\ext S(H_2)=\{\varphi_{z+},\varphi_{z-};\, z\in \mathbb T\},$$ where $$\begin{aligned} \varphi_{z+}(Vf)&=\lim_{w\to z+} f(w),\quad f\in \widetilde{H_2},\\ \varphi_{z-}(Vf)&=\lim_{w\to z-} f(w),\quad f\in \widetilde{H_2}. \end{aligned}$$ The set $\ext S(H_2)$ is compact -- it is a circular variant of the double arrow space (cf. \cite[Theorem 2.3.1]{fabian-kniha} or \cite{kalenda-stenfra}). Indeed, the topology on $\ext S(H_2)$ is generated by `open arcs', i.e., by the sets of the form $$\{\varphi_{e^{ia}-}\}\cup \{\varphi_{e^{it}+},\varphi_{e^{it}-};\, t\in (a,b)\} \cup \{\varphi_{e^{ib}+}\},\quad a,b\in\mathbb R,a<b.$$ We again get that $S(H_2)$ is a Bauer simplex (this time non-metrizable) and hence $$Z(A_c(S(H_2)))=A_c(S(H_2)) \mbox{ and } m(S(H_2))=\ext S(H_2).$$ In particular, $Z(H_2)=H_2$.
For each $z\in \mathbb T$ we have $$\imath(z)=\frac12(\varphi_{z-}+\varphi_{z+}),$$ hence $\imath(K_2)\cap \ext S(H_2)=\emptyset$, i.e., no extreme point is preserved.
Finally, using Lemma~\ref{L:ob-soucin} and Lemma~\ref{L:platnost soucinu} we easily get $M(H_2)=A_c(X_2)$. \end{proof}
The previous examples show that, given $x\in\ext X$, $\iota(x)$ need not be an extreme point of $S(H)$, even if $X$ is a Bauer simplex. Moreover, in these cases we even have $M(H)\subsetneqq Z(H)$. Next we focus on some sufficient conditions. The first result is a refinement of \cite[Lemma 3]{cc}.
To formulate it we use the notion of a split face recalled in Section~\ref{ssc:ccs} above together with the respective notation. We further need to recall that, given a bounded function $f\colon X\to \mathbb R$ (where $X$ is a compact convex set), its \emph{upper envelope}\index{upper envelope} $f^*$ is defined as \[ \gls{f*}(x)=\inf\left\{a(x);\, a\in A_c(X),a\ge f\right\},\quad x\in X \] (see also \cite[p. 4]{alfsen} where the upper envelope is denoted by $\widehat{f}$). The relationship of upper envelopes and split faces is revealed by the following observation which follows from \cite[Proposition II.6.5]{alfsen}: \begin{equation}\label{eq:lambda=1*}
F\subset X\mbox{ a closed split face}\implies \lambda_F=1_F^*. \end{equation}
\begin{lemma}\label{l:lemma-split} Let $X$ be a compact convex set and let $H$ be an intermediate function space. Let $x\in \ext X$.
If $\{x\}$ is a split face of $X$ and $1_{\{x\}}^*\in H$, then $\{\iota(x)\}$ is a split face of $S(H)$. In particular, $\iota(x)$ is an extreme point of $S(H)$. \end{lemma}
\begin{proof} Assume that $\{x\}$ is a split face. By \eqref{eq:lambda=1*} we know that $\lambda_{\{x\}}=1_{\{x\}}^*$ and hence $F=(1_{\{x\}}^*)^{-1}(0)$ is the complementary face. Set $$\widetilde{F}=\{\varphi\in S(H);\, \varphi(1_{\{x\}}^*)=0\}.$$ Since $1_{\{x\}}^*\in H$, $\widetilde{F}$ is a closed face of $S(H)$ containing $\iota(F)$. We are going to show that $\{\iota(x)\}$ is a split face of $S(H)$ and its complementary face is $\widetilde{F}$.
To this end fix an arbitrary $\varphi\in S(H)$. By Lemma~\ref{L:reprezentace} there is a net $(y_\alpha)$ in $X$ such that $\iota(y_\alpha)\to\varphi$ in $S(H)$. We know that for each $\alpha$ we have $$y_\alpha=\lambda_\alpha x+(1-\lambda_\alpha)z_\alpha, \mbox{ where }\lambda_\alpha=1_{\{x\}}^*(y_\alpha)\mbox{ and }z_\alpha\in F.$$ Then $$\varphi(1_{\{x\}}^*)=\lim_\alpha \iota(y_\alpha)(1_{\{x\}}^*)=\lim_\alpha 1_{\{x\}}^*(y_\alpha)=\lim_\alpha \lambda_\alpha,$$ i.e., $$\lambda_\alpha\to \lambda:= \varphi(1_{\{x\}}^*).$$ If $\lambda=1$, then $$\iota(y_\alpha)=\lambda_\alpha\iota(x)+(1-\lambda_\alpha)\iota(z_\alpha)\to \iota(x),$$ hence $\varphi=\iota(x)$.
If $\lambda<1$, then $$\lim_\alpha \iota(z_\alpha)=\lim_\alpha\frac{\iota(y_\alpha)-\lambda_\alpha\iota(x)}{1-\lambda_\alpha}=\frac{\varphi-\lambda\iota(x)}{1-\lambda}.$$ So, $\iota(z_\alpha)\to\psi$ for some $\psi\in S(H)$ such that $$\varphi=\lambda\iota(x)+(1-\lambda)\psi.$$ Since $\varphi(1_{\{x\}}^*)=\lambda$, necessarily $\psi(1_{\{x\}}^*)=0$, so $\psi\in \widetilde{F}$.
It follows that $\operatorname{conv}(\{\iota(x)\}\cup \widetilde F)=S(H)$. Since $\widetilde F$ is a face, this completes the proof. \end{proof}
\begin{cor}\label{cor:iotax} Let $X$ be a compact convex set such that each $x\in\ext X$ forms a split face (in particular, this takes place if $X$ is a simplex). Let $H$ be an intermediate function space containing all affine semicontinuous functions (i.e., $A_s(X)\subset H$). Then $\iota(x)\in\ext S(H)$ for any $x\in\ext X$.
This applies among others in the following cases: \begin{enumerate}[$(a)$]
\item $A_b(X)\cap\Bo_1(X)\subset H$;
\item $X$ is metrizable and $A_1(X)\subset H$. \end{enumerate}
If $H$ is moreover determined by extreme points, then $Z(H)=M(H)$. \end{cor}
\begin{proof} Assume each $x\in \ext X$ forms a split face. By the very definition of the upper envelope we see that functions $1_{\{x\}}^*$ are upper semicontinuous. By \eqref{eq:lambda=1*} they are also affine and hence they belong to $H$ by our assumptions. We then conclude by Lemma~\ref{l:lemma-split}.
The `in particular' part follows from \cite[Theorem II.6.22]{alfsen}.
The equality $Z(H)=M(H)$ then follows from Proposition~\ref{p:zh-mh}. \end{proof}
Note that Corollary~\ref{cor:iotax} does not cover (among others) the case $H=A_1(X)$ if $X$ is a non-metrizable simplex. However, in this case we can use a separable reduction method. This is the content of assertion $(a)$ of the
following proposition. Assertion $(b)$ shows the equality $Z(A_1(X))=M(A_1(X))$ in another special case.
\begin{prop}\label{p:postacproa1} Let $X$ be a compact convex set and let $H$ be an intermediate function space on $X$. \begin{enumerate}[$(a)$]
\item Assume that $X$ is a simplex and $A_1(X)\subset H\subset (A_{c}(X))^\sigma$. Then $\iota(x)\in\ext S(H)$ whenever $x\in\ext X$. Hence $Z(H)=M(H)$.
\item If $H=A_1(X)$ and $\ext X$ is a Lindel\"of\ resolvable set, then $Z(H)=M(H)$.
\end{enumerate} \end{prop}
\begin{proof} $(a)$: Let $x\in \ext X$ be given. We want to verify condition (3) in Lemma~\ref{L:zachovavani ext}. Let $(x_i)$, $(y_i)$ be nets converging to $x$ such that $a(\tfrac{x_i+y_i}{2})\to a(x)$ for each $a\in H$. Let $b\in H$ be given. We shall verify that $b(x_i)\to b(x)$ and $b(y_i)\to b(x)$.
By \cite[Theorem 9.12]{lmns}, there exists a metrizable simplex $Y$, an affine continuous surjection $\varphi\colon X\to Y$ and a function $\tilde{b}\in (A_c(X))^\sigma$ such that $\varphi(x)\in\ext Y$ and $b=\tilde{b}\circ \varphi$. Set $$\widetilde{H}=\{\tilde{f}\in (A_c(Y))^\sigma ;\, \tilde{f}\circ\varphi\in H\}.$$ Then $\widetilde{H}$ is a closed subspace of $(A_c(Y))^\sigma$. Further, clearly $A_1(Y)\subset \widetilde{H}$.
We have $\varphi(x_i)\to\varphi(x)$, $\varphi(y_i)\to \varphi(x)$ and for each $\tilde{f}\in \widetilde{H}$ \[ \tilde{f}(\tfrac{\varphi(x_i)+\varphi(y_i)}{2})=(\tilde{f}\circ \varphi)(\tfrac{x_i+y_i}{2})\to (\tilde{f}\circ \varphi)(x)=\tilde{f}(\varphi(x)). \] By Corollary~\ref{cor:iotax}$(b)$ and Lemma~\ref{L:zachovavani ext}, $\tilde{f}(\varphi(x_i))\to \tilde{f}(\varphi(x))$ as well as $\tilde{f}(\varphi(y_i))\to \tilde{f}(\varphi(x))$ for each $\tilde{f}\in \widetilde{H}$. In particular, \[ b(x_i)=\tilde{b}(\varphi(x_i))\to \tilde{b}(\varphi(x))=b(x) \] for our function $b$. Similarly we have $b(y_i)\to b(x)$. Hence $\iota(x)\in \ext S(H)$ by Lemma~\ref{L:zachovavani ext}. The equality $Z(A_1(X))=M(A_1(X))$ then follows from Proposition~\ref{p:zh-mh}.
$(b)$: Since $H=A_1(X)$ is determined by $\ext X$, we have immediately $M(H)\subset Z(H)$ (see Proposition~\ref{P:mult}). To show the converse inclusion, let $T\in\mathfrak{D}(H)$ be given and $m=T(1)$. We want to show that $m\in M(A_1(X))$. Using Lemma~\ref{L:nasobeni} we infer that $T(a)=m\cdot a$ on $\ext X$ for each $a\in A_c(X)$. Let $a\in H=A_1(X)$ be given. We will find a function $b\in A_1(X)$ such that $b=m\cdot a$ on $\ext X$.
To this end, let $\{a_n\}$ be a bounded sequence in $A_c(X)$ converging to $a$ on $X$. Then the sequence of $\{T(a_n)\}$ in $A_1(X)$ satisfies $T(a_n)=m\cdot a_n$ on $\ext X$. Since $\{m\cdot a_n\}$ converges pointwise on $\ext X$, the bounded sequence $\{T(a_n)\}$ converges pointwise on $X$, say to a function $b$ on $X$. Then $b$ is a strongly affine Baire function on $X$, which equals $m\cdot a$ on $\ext X$. Since $m\cdot a$ is a Baire-one function on $\ext X$, from \cite[Theorem 6.4]{lusp} we obtain that $b\in A_1(X)$. Thus $b$ is the desired function and $m=T(1)\in M(A_1(X))$. Thus $Z(H)\subset M(H)$ and the proof is complete. \end{proof}
\begin{example}\label{ex:4prostory}
There is a (non-metrizable) Bauer simplex $X$ and four intermediate function spaces on $X$ satisfying
$$A_c(X)\subsetneqq H_1\subsetneqq H_2=A_1(X)\subsetneqq H_3\subsetneqq H_4=A_b(X)\cap\Bo_1(X)$$
such that the following assertions hold (where $\iota_i:X\to S(H_i)$ is the mapping from Lemma~\ref{L:intermediate}):
\begin{enumerate}[$(i)$]
\item $M(H_1)=A_c(X)$, $Z(H_1)=H_1$, $\iota_1(\ext X)\setminus\ext S(H_1)\ne\emptyset$;
\item $M(H_2)=Z(H_2)=H_2$, $\iota_2(\ext X)\subset \ext S(H_2)$;
\item $M(H_3)=H_2$, $Z(H_3)=H_3$, $\iota_3(\ext X)\setminus\ext S(H_3)\ne\emptyset$;
\item $M(H_4)=Z(H_4)=H_4$, $\iota_4(\ext X)\subset \ext S(H_4)$.
\end{enumerate}
\end{example}
\begin{proof} We start by noticing that assertions $(ii)$ and $(iv)$ are valid for the given choices of intermediate function spaces as soon as $X$ is a simplex (by Proposition~\ref{p:postacproa1}$(a)$ and Corollary~\ref{cor:iotax}$(a)$). So, we need to construct $X$, $H_1$ and $H_3$ such that $(i)$ and $(iii)$ are fulfilled.
We will again proceed using Lemma~\ref{L:function space} and the respective notation. Let $K=\omega_1+1+\omega_1^{-1}$, equipped with the order topology. By $\omega_1$ we mean the set of all countable ordinals with the standard well order, by $\omega_1^{-1}$ the same set with the inverse order. Then $K$ is a compact space (it coincides with the space from \cite[Example 1.10(iv)]{kalenda-survey}). Set $E=C(K)$ and $X=S(E)$. Then $X$ is a Bauer simplex which may be canonically identified with $M_1(K)$. The evaluation mapping $\phi:K\to X$ from Section~\ref{ssc:ch-fs} assigns to each $x\in K$ the Dirac measure $\varepsilon_x$.
We fix two specific points in $K$ -- by $a$ we will denote the first accumulation point (from the left, it is usually denoted by $\omega$) and by $b$ the `middle point', i.e., the unique point of uncountable cofinality. Let us define the following spaces: $$\begin{alignedat}{4} \widetilde{H_1}&=\Bigl\{f\in \ell^\infty(K);\, &&f\mbox{ is continuous at each }x\in K\setminus\{a\} \\ &&& \lim_{n<\omega} f(2n) \mbox{ and }\lim_{n<\omega} f(2n+1) \mbox{ exist in }\mathbb R \\ &&& \mbox{ and } f(a)=\tfrac12(\lim_{n<\omega} f(2n)+\lim_{n<\omega} f(2n+1))\Bigr\}, \\ \widetilde{H_2}&=\Ba_1^b(K), && \\ \widetilde{H_3}&=\Bigl\{f\in \ell^\infty(K);\, && \lim_{x\to b-} f(x) \mbox{ and }\lim_{x\to b+} f(x) \mbox{ exist in }\mathbb R \\ &&& \mbox{ and } f(b)=\tfrac12(\lim_{x\to b-} f(x)+\lim_{x\to b+} f(x))\Bigr\}, \\ \widetilde{H_4}&=\Bo_1^b(K).&& \end{alignedat}$$ Then clearly $$E\subsetneqq \widetilde{H_1}\subsetneqq \widetilde{H_2}\subsetneqq \widetilde{H_3}.$$ Moreover, any function on $K$ with finite one-sided limits at $b$ must be constant on the sets $(\alpha,b)$ and $(b,\beta)$ for some $\alpha\in\omega_1$ and $\beta\in\omega_1^{-1}$. So, any such function is of the first Borel class. Thus $\widetilde{H_3}\subsetneqq\widetilde{H_4}$. It follows that $\widetilde{H_4}$ (and hence also the smaller spaces) fits into the scheme from Lemma~\ref{L:function space}. So, we set $H_j=V(\widetilde{H_j})$ for $j=1,2,3,4$. Then the chain of inclusions and equalities is satisfied and, moreover, assertions $(ii)$ and $(iv)$ are fulfilled as explained above.
Similarly as in the proof of Example~\ref{ex:Bauer} (part (1)) we see that $$\ext S(H_1)=\{\imath_1(x);\, x\in K\setminus \{a\}\}\cup\{\varphi_o,\varphi_e\},$$ where $$\varphi_o(Vf)=\lim_{n<\omega} f(2n+1)\quad\mbox{and}\quad\varphi_e(Vf)=\lim_{n<\omega} f(2n)$$ for $f\in \widetilde{H_1}$. And in the same way we deduce that $\imath_1(a)\notin m(S(H_1))$ and hence assertion $(i)$ is fulfilled.
Finally, let us prove assertion $(iii)$. It follows from Lemma~\ref{l:lemma-split} that $\imath_3(x)\in \ext S(H_3)$ for $x\in K\setminus\{b\}$. On the other hand, $\imath_3(b)=\frac12(\varphi_-+\varphi_+)$, where $$\varphi_-(Vf)=\lim_{x\to b-} f(x)\quad\mbox{and}\quad\varphi_+(Vf)=\lim_{x\to b+} f(x)$$ for $f\in \widetilde{H_3}$. Since $\varphi_-$ and $\varphi_+$ are distinct points of $S(H_3)$, we deduce that $\imath_3(b)\notin \ext S(H_3)$.
Further, it is easy to check that $$M(H_3)=V(\{f\in\widetilde{H_3};\, f\mbox{ is continuous at }b\})=H_2.$$ It remains to check that $Z(H_3)=H_3$. To this end we observe that $$\forall f,g\in \widetilde{H_3}\, \exists h\in\widetilde{H_3}\colon h=fg\mbox{ on }K\setminus\{b\}.$$ In other words, $$\forall f,g\in H_3\, \exists h\in H_3\colon h=fg\mbox{ on }\ext X\setminus\{\varepsilon_b\}.$$ Let $T:H_3\to A_c(S(H_3))$ be the mapping from Lemma~\ref{L:intermediate}. Then the above formula may be rephrased as \begin{equation}\label{eq:H3}
\forall u,v\in A_c(S(H_3))\, \exists w\in A_c(S(H_3))\colon w=uv\mbox{ on }\iota_3(\ext X\setminus\{\varepsilon_b\}).\end{equation} Since $H_3$ is determined by extreme points, Lemma~\ref{L:intermediate}$(d)$ yields that $$\ext S(H_3)\subset\overline{\iota_3(\ext X)} =\overline{\iota_3(\ext X\setminus\{\varepsilon_b\})}\cup\{\iota_3(\varepsilon_b)\}. $$ Since $\iota_3(\varepsilon_b)\notin \ext S(H_3)$ (as we have shown above), we deduce that $$\ext S(H_3)\subset \overline{\iota_3(\ext X\setminus\{\varepsilon_b\})}.$$ Using this inclusion and \eqref{eq:H3} we deduce that $$ \forall u,v\in A_c(S(H_3))\, \exists w\in A_c(S(H_3))\colon w=uv\mbox{ on }\ext S(H_3),$$ so $M(A_c(S(H_3)))=A_c(S(H_3))$, which means that $Z(H_3)=H_3$. \end{proof}
The message of Example~\ref{ex:4prostory} is twofold. Firstly, it shows that the upper bound on $H$ in Proposition~\ref{p:postacproa1}$(a)$ cannot be dropped. Secondly, it shows that the equality $Z(H)=M(H)$ and the preservation of extreme points depend not only on the size of $H$ -- when $H$ is enlarged, these properties may be restored (as in passing from $H_1$ to $H_2$ or from $H_3$ to $H_4$) or spoiled (as in passing from $A_c(X)$ to $H_1$ or from $H_2$ to $H_3$).
The next example shows that the assignments $H\mapsto Z(H)$, $H\mapsto M(H)$ and $H\mapsto M^s(H)$ are not monotone.
\begin{example}\label{ex:inkluzeZ}
There is a metrizable Bauer simplex $X$ and two intermediate function spaces $H_1, H_2$ on $X$ such that the following properties are fulfilled:
\begin{enumerate}[$(i)$]
\item $H_1\subset H_2\subset A_1(X)$;
\item $S(H_2)$ is a metrizable Bauer simplex;
\item $M(H_2)=Z(H_2)=H_2$;
\item $M(H_1)=Z(H_1)\subsetneqq A_c(X)=Z(A_c(X))=M(A_c(X))$.
\end{enumerate}
Moreover, for these spaces all multipliers are strong. \end{example}
\begin{proof}
Let $K=[0,2]\cup[3,5]$, $E=C(K)$ and $X=S(E)=M_1(K)$. Then $X$ is a metrizable Bauer simplex. In particular, $A_c(X)=Z(A_c(X))=M(A_c(X))$.
We further set
$$\begin{alignedat}{3}
\widetilde{H_2}&=\{f\in\ell^\infty(K);\, &&\mbox{ the restrictions }f|_{[0,1]}, f|_{(1,2]}, f|_{[3,4]}, f|_{(4,5]} \mbox{ are continuous},\\
&&& \lim_{t\to 1+} f(t), \lim_{t\to 4+} f(t) \mbox{ exist in }\mathbb R\} \\
\widetilde{H_1}&=\{f\in \widetilde{H_2} ;\, && f(1)+f(4)= \lim_{t\to 1+} f(t) + \lim_{t\to 4+} f(t)\}
\end{alignedat}$$
Then $\widetilde{H_1}$ and $\widetilde{H_2}$ fit into the scheme of Lemma~\ref{L:function space} and $\widetilde{H_1}\subset \widetilde{H_2}\subset\Ba_1^b(K)$. Set $H_1=V(\widetilde{H_1})$ and $H_2=V(\widetilde{H_2})$. Thus $(i)$ is valid.
Similarly as in Example~\ref{ex:Bauer} (part (1)) we see that the extreme points of $S(H_2)$ are $$\ext S(H_2)=\{\imath_2(t);\, t\in K\}\cup \{\varphi_1,\varphi_2\},$$ where $$\varphi_1(Vf)=\lim_{t\to 1+} f(t),\quad \varphi_2(Vf)=\lim_{t\to 4+} f(t),\quad f\in\widetilde{H_2}.$$ So, $\ext S(H_2)$ is homeomorphic to the union of four disjoint closed intervals. Since $H_2$ is canonically identified with $C(\ext S(H_2))$, we deduce that $S(H_2)$ is a metrizable Bauer simplex. So, $(ii)$ is proved and, moreover, $Z(H_2)=H_2$. By Proposition~\ref{p:zh-mh} we get $Z(H_2)=M(H_2)$, so $(iii)$ is proved as well.
The restriction map $\pi_{21}:S(H_2)\to S(H_1)$ is clearly one-to-one on $\ext S(H_2)$ and maps $\ext S(H_2)$ onto $\ext S(H_1)$. By definition of $H_1$ we see that $$\pi_{21}(\varphi_1)+\pi_{21}(\varphi_2)=\imath_1(1)+\imath_1(4).$$ Using Lemma~\ref{L:mult-nutne} we see that $$M(H_1)=V(\{f\in C(K);\, f(1)=f(4)\})\subsetneqq V(E)=A_c(X).$$ Finally, by Proposition~\ref{p:zh-mh} we get $M(H_1)=Z(H_1)$. Taking into account that $X$ is a Bauer simplex, the proof of $(iv)$ is complete.
The `moreover statement' follows from Proposition~\ref{P:rovnostmulti}.
\end{proof}
The next example shows that extreme points are not automatically preserved even if $H$ is one of the spaces $A_1(X)$, $A_b(X)\cap\Bo_1(X)$, $A_f(X)$.
\begin{example}\label{ex:slozity} There is a compact convex set $X$ such that for any intermediate function space $H$ on $X$ satisfying $A_b(X)\cap\Bo_1(X)\subset H\subset A_f(X)$ the following assertions hold: \begin{enumerate}[$(a)$] \item There is some $y\in\ext X$ such that $\iota(y)\notin \ext S(H)$;
\item $Z(H)=M(H)$. \end{enumerate} Moreover, there are both metrizable and non-metrizable variants of such $X$. (Note that, in case $X$ is metrizable, $H=A_1(X)$.) \end{example}
\begin{proof} The proof is divided into several steps:
\noindent{\tt Step 1:} The construction of $X$.
Fix any compact space $K$ which is not scattered and which contains a one-to-one convergent sequence. Let $(x_n)$ be a one-to-one sequence in $K$ converging to $x\in K$. Without loss of generality we may assume that $x\notin\{x_n;\, n\in\mathbb N\}$. Further, fix $y\in K\setminus(\{x_n;\, n\in\mathbb N\}\cup \{x\})$.
Since $K$ is not scattered, there is a continuous Radon probability $\mu$ on $K$. Since $\mu$ is atomless, using \cite[215D]{fremlin2} it is easy to construct by induction Borel sets $$B_s, s\in \bigcup_{n\in\mathbb N\cup\{0\}} \{0,1\}^n$$ satisfying the following properties for each $s$: \begin{itemize}
\item[(i)] $B_\emptyset=K$;
\item[(ii)] $B_s=B_{s,0}\cup B_{s,1}$;
\item[(iii)] $B_{s,0}\cap B_{s,1}=\emptyset$;
\item[(iv)] $\mu(B_s)=2^{-n}$ where $n$ is the length of $s$. \end{itemize} For $n\in\mathbb N$ we define a function $f_n$ on $K$ by setting $$f_n(t)=\begin{cases}
1 & t\in B_s \mbox{ for some $s$ of length $n$ ending with }0,\\
-1 & t\in B_s \mbox{ for some $s$ of length $n$ ending with }1. \end{cases}$$ Then $(f_n)$ is an orthonormal sequence in $L^2(\mu)$, so $f_n\to0$ weakly in $L^2(\mu)$ and hence also in $L^1(\mu)$. Note that $\norm{f_n}_1=1$ for each $n$. Let $\mu_n$ be the measure on $K$ with density $f_n$ with respect to $\mu$. Then $(\mu_n)$ is a sequence in the unit sphere of $C(K)^*$ weakly converging to $0$. Note that $\mu_n$ are continuous measures and $\mu_n(K)=0$ for each $n$.
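Let us briefly indicate why $(f_n)$ is orthonormal. The function $f_n$ is constant (equal to $\pm1$) on each $B_s$ with $s$ of length $n$, and for $m>n$ each such $B_s$ is the union of the sets $B_{s'}$ with $s'$ extending $s$ and of length $m$, half of them ending with $0$ and half with $1$, all of measure $2^{-m}$. Hence, for $m>n$, $$\int_K f_nf_m\,\mbox{\rm d}\mu=\sum_{s}\pm\int_{B_s}f_m\,\mbox{\rm d}\mu=0,$$ where the sum runs over all $s$ of length $n$, while $\int_K f_n^2\,\mbox{\rm d}\mu=\mu(K)=1$.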
For $n\in\mathbb N$ set $$u_n=\varepsilon_y+\mu_n+\varepsilon_{x_n}-\varepsilon_x\quad\mbox{and}\quad v_n=\varepsilon_y+\mu_n+\varepsilon_{x}-\varepsilon_{x_n}.$$ The sought $X$ is now defined by $$X=\wscl{\operatorname{conv}(M_1(K)\cup\{u_n,v_n;\, n\in\mathbb N\})}.$$ It is obviously a compact convex set.
\noindent{\tt Step 2:} A representation of elements of $X$.
Set $$L=\{\varepsilon_y\}\cup\{u_n,v_n;\, n\in\mathbb N\}.$$ Since $u_n\to \varepsilon_y$ and $v_n\to\varepsilon_y$ in $X$, we deduce that $L$ is a countable weak$^*$-compact set. Hence \[
\wscl{\operatorname{conv} L}=\left\{\lambda\varepsilon_y+(1-\lambda)\sum_{n=1}^\infty (s_n u_n+t_n v_n);\, \lambda\in[0,1], s_n,t_n\ge 0,
\sum_{n=1}^\infty (s_n+t_n)=1\right\}. \] We note that
$$X=\operatorname{conv}(M_1(K)\cup\wscl{\operatorname{conv} L}),$$ hence any element of $X$ is of the form $$\lambda_1\sigma + (1-\lambda_1)(\lambda_2\varepsilon_y+(1-\lambda_2)\sum_{n=1}^\infty (s_n u_n+t_n v_n)),$$ where $\sigma\in M_1(K)$, $\lambda_1,\lambda_2\in [0,1]$, $s_n,t_n\ge0$ and $\sum_{n=1}^\infty (s_n+t_n)=1$. Given such a representation, we set $\lambda=\lambda_1+(1-\lambda_1)\lambda_2$. Then $\lambda\in[0,1]$ and $1-\lambda=(1-\lambda_1)(1-\lambda_2)$. Thus, any element of $X$ is represented as \begin{equation}\label{eq:X-reprez}
\lambda\nu+(1-\lambda)\sum_{n=1}^\infty (s_n u_n+t_n v_n),\end{equation} where $\nu\in M_1(K), \lambda\in[0,1], s_n,t_n\ge 0, \sum_{n=1}^\infty (s_n+t_n)=1$. It is not hard to check that this representation is not unique.
\noindent{\tt Step 3:} A representation of $A_f(X)$.
Set $F=\{\varepsilon_t;\, t\in K\}\cup\{u_n,v_n;\, n\in\mathbb N\}$. Then $F$ is a closed subset of $X$ and $X$ is the closed convex hull of $F$. It thus follows from the Milman theorem that $\ext X\subset F$. In fact, the equality holds, as we will see later. Currently only the inclusion is essential.
Recall that affine fragmented functions are strongly affine (by \eqref{eq:prvniinkluze}) and determined by extreme points (by \cite{dostal-spurny}, cf. Theorem~\ref{extdeter} below). So, any $f\in A_f(X)$ may be reconstructed from its restriction to $F$ by integration with respect to suitable probability measures.
Hence, any function $f\in A_f(X)$ is represented by a bounded fragmented function $h$ on $K$ and a pair of bounded sequences $(a_n),(b_n)$ of real numbers, in such a way that $$f(\lambda\nu+(1-\lambda)\sum_n(s_n u_n+t_n v_n))=\lambda\int h\,\mbox{\rm d}\nu+(1-\lambda)\sum_n(s_n a_n+t_n b_n).$$ Not all choices of $h$, $(a_n)$, $(b_n)$ are possible. In fact, $f$ is determined by its values on $\{\varepsilon_t;\, t\in K\}$, i.e., by the function $h$. Indeed, if $\nu\in M_1(K)\subset X$, then necessarily $f(\nu)=\int h\,\mbox{\rm d}\nu$.
Further, fix $n\in\mathbb N$. Then $\nu_n:=\frac23(\mu_n^-+\varepsilon_{x})\in M_1(K)$ and $$\tfrac25\left(u_n+\tfrac32\nu_n\right)=\tfrac25(\varepsilon_y+\mu_n^++\varepsilon_{x_n})\in M_1(K).$$ Thus $$\tfrac25(f(u_n)+\tfrac32f(\nu_n))=f\left(\tfrac25(\varepsilon_y+\mu_n^++\varepsilon_{x_n})\right),$$ so $$\begin{aligned}f(u_n)&=\tfrac52 f\left(\tfrac25(\varepsilon_y+\mu_n^++\varepsilon_{x_n})\right)-\tfrac32f(\nu_n)\\& =\tfrac52\left(\tfrac25\left(h(y)+\int h\,\mbox{\rm d}\mu_n^++h(x_n)\right)\right)-\tfrac32\left(\tfrac23\left(\int h\,\mbox{\rm d}\mu_n^-+h(x)\right)\right) \\&=h(y)+\int h\,\mbox{\rm d}\mu_n^++h(x_n)-\int h\,\mbox{\rm d}\mu_n^--h(x) =\int h\,\mbox{\rm d} u_n \end{aligned}$$ Similarly, $$f(v_n)=\int h\,\mbox{\rm d} v_n.$$
Hence, there is a one-to-one correspondence between $A_f(X)$ and $\Fr^b(K)$. If $h\in \Fr^b(K)$, then the corresponding function from $A_f(X)$ is given by $$\mu\mapsto \int h\,\mbox{\rm d}\mu.$$ Note that $h\ge0$ does not imply that the corresponding function is positive.
\noindent{\tt Step 4:} Set $$\widetilde{H}=\left\{ h\in\Fr^b(K);\, \mbox{the function }\mu\mapsto \int h\,\mbox{\rm d}\mu\mbox{ belongs to }H\right\}.$$ Then $\widetilde{H}$ is a closed subspace of $\Fr^b(K)$ containing $\Bo_1^b(K)$ and there is a canonical one-to-one correspondence between $\widetilde{H}$ and $H$.
\noindent{\tt Step 5:} If $t\in K\setminus\{y,x,x_n;\, n\in\mathbb N\}$, then $\{\varepsilon_{t}\}$ is a split face of $X$ and, consequently, $\iota(\varepsilon_{t})\in\ext S(H)$.
Fix $t\in K\setminus\{y,x,x_n;\, n\in\mathbb N\}$. The function $$f_t:\sigma\mapsto\sigma(\{t\})$$ is an affine function on $X$ with values in $[0,1]$ such that $f_t(\varepsilon_t)=1$.
Further, fix any $\sigma\in X\setminus\{\varepsilon_t\}$. Consider its representation \eqref{eq:X-reprez}. Then $$f_t(\sigma)=\sigma(\{t\})=\lambda\nu(\{t\})<1.$$ Indeed, if $\lambda\nu(\{t\})=1$, then both $\lambda=1$ and $\nu(\{t\})=1$, i.e., $\sigma=\nu=\varepsilon_t$.
So, $f_t(\varepsilon_{t})=1$ and $f_t(\sigma)<1$ for $\sigma\in X\setminus\{\varepsilon_{t}\}$, hence $\varepsilon_{t}\in \ext X$.
Let us continue analyzing properties of $\sigma\in X\setminus\{\varepsilon_{t}\}$ represented by \eqref{eq:X-reprez}. Set $\alpha=\nu(\{t\})$. Then $\alpha\in[0,1]$ and there is $\widetilde{\nu}\in M_1(K)$ such that $\widetilde{\nu}(\{t\})=0$ and $$\nu=\alpha\varepsilon_{t} + (1-\alpha) \widetilde{\nu}.$$ It follows that $$\begin{aligned} \sigma&= \lambda\nu+(1-\lambda)\sum_{n=1}^\infty (s_n u_n+t_n v_n)\\ &=\lambda(\alpha\varepsilon_{t} + (1-\alpha) \widetilde{\nu})+(1-\lambda)\sum_{n=1}^\infty (s_n u_n+t_n v_n) \\&=\lambda\alpha \varepsilon_{t} +(1-\lambda\alpha)\left(\frac{\lambda(1-\alpha)}{1-\lambda\alpha}\widetilde{\nu}+ \frac{1-\lambda}{1-\lambda\alpha}\sum_{n=1}^\infty (s_n u_n+t_n v_n)\right) \\&=\lambda\alpha \varepsilon_{t} +(1-\lambda\alpha)\widetilde{\sigma}=f_t(\sigma) \varepsilon_{t} +(1-f_t(\sigma))\widetilde{\sigma} . \end{aligned}$$ Note that $\widetilde{\sigma}\in X$ and $f_t(\widetilde{\sigma})=\widetilde{\sigma}(\{t\})=0$. It now easily follows that $\{\varepsilon_{t}\}$ is a split face and the zero level $[f_t=0]$ is the complementary face. Since $1_{\varepsilon_t}^*=f_t$ is upper semicontinuous and hence of the first Borel class, we deduce $1_{\varepsilon_t}^*\in H$, thus Lemma~\ref{l:lemma-split} yields $\iota(\varepsilon_{t})\in \ext S(H)$.
\noindent{\tt Step 6:} $\{\varepsilon_{x}\}\cup\{\varepsilon_{x_n},u_n,v_n;\, n\in\mathbb N\}\subset\ext X.$
Let us consider functions $f_x$ and $f_{x_n}$ for $n\in\mathbb N$ defined in the same way as $f_t$ in Step 5 above. They are affine and, if $\sigma\in X$ is represented by \eqref{eq:X-reprez}, we get $$\begin{aligned} f_{x_n}(\sigma)&=\lambda\nu(\{x_n\}) + (1-\lambda)(s_n-t_n),\\ f_x(\sigma)&=\lambda\nu(\{x\}) + (1-\lambda)\left(\sum_n t_n-\sum_n s_n\right). \end{aligned}$$
Observe that $f_{x_n}$ attains values in $[-1,1]$. We get: \begin{itemize}
\item $\{v_n\}=[f_{x_n}=-1]$,
hence $v_n\in\ext X$.
\item $[f_{x_n}=1]$ is the segment $[\varepsilon_{x_n},u_n]$. So, this segment is a face, thus $u_n,\varepsilon_{x_n}\in\ext X$. \end{itemize}
Finally, $f_x$ also attains values in $[-1,1]$ and $[f_x=1]$ consists of the (possibly infinite) convex combinations of the elements $\varepsilon_x,v_n$, $n\in\mathbb N$. Note that $f_{x_n}$ attains values in $[-1,0]$ on $[f_x=1]$ and $$\{\varepsilon_{x}\}=[f_x=1]\cap\bigcap_{n\in\mathbb N}[f_{x_n}=0],$$ thus $\varepsilon_x\in\ext X$.
\noindent{\tt Step 7:} $\{\iota(\varepsilon_{x})\}\cup\{\iota(\varepsilon_{x_n}),\iota(u_n),\iota(v_n);\, n\in\mathbb N\}\subset\ext S(H).$
The elements $u_n,v_n$ are isolated extreme points, so for them the conclusion follows from Corollary~\ref{c:spoj-ext}$(ii)$.
Next we fix $n\in\mathbb N$ and we are going to show that $\iota(\varepsilon_{x_n})\in \ext S(H)$. So, let $\varphi_1,\varphi_2\in\pi^{-1}(\varepsilon_{x_n})$ be such that $\frac12(\varphi_1+\varphi_2)=\iota(\varepsilon_{x_n})$. Plugging in $f_{x_n}$ (note that $f_{x_n}\in H$, as it is of the first Borel class by \cite[Lemma 3.1]{miryanalznam}), we deduce that
$$1=f_{x_n}(\varepsilon_{x_n})=\iota(\varepsilon_{x_n})(f_{x_n})=\tfrac12(\varphi_1(f_{x_n})+\varphi_2(f_{x_n})).$$
Since $f_{x_n}$ has values in $[-1,1]$, we deduce that $\varphi_1(f_{x_n})=\varphi_2(f_{x_n})=1$.
Let us first analyze $\varphi_1$.
By Lemma~\ref{L:reprezentace} we may find a net $(\sigma_i)$ in $X$ converging to $\varepsilon_{x_n}$ such that $\iota(\sigma_i)\to \varphi_1$.
Assume that $\sigma_i$ is represented as in \eqref{eq:X-reprez} and the coefficients carry an additional upper index $i$. Then $$\lambda^i\nu^i(\{x_n\})+(1-\lambda^i)(s_n^i-t_n^i)=\sigma_i(\{x_n\})=f_{x_n}(\sigma_i)=\iota(\sigma_i)(f_{x_n})\to \varphi_1(f_{x_n})=1.$$ Up to passing to a subnet we may assume that $\lambda^i\to\lambda\in[0,1]$. Let us distinguish some cases:
Case 1: $\lambda=1$. Then necessarily $\nu^i(\{x_n\})\to 1$. Thus for any $h\in \widetilde{H}$ and the corresponding $h'\in H$ we get $$\begin{aligned} \varphi_1(h')&=\lim_i \int h\,\mbox{\rm d}\sigma_i \\ &= \lim_i \left( \lambda^i\int h\,\mbox{\rm d}\nu^i + (1-\lambda^i) \sum_k \left(s_k^i\int h\,\mbox{\rm d} u_k+ t_k^i\int h\,\mbox{\rm d} v_k\right)\right)\\ &=\lim_i \int h\,\mbox{\rm d}\nu^i= \lim_i \left(h(x_n)\nu^i(\{x_n\})+\int_{K\setminus\{x_n\}}h\,\mbox{\rm d}\nu^i\right)=h(x_n), \end{aligned}$$ so $\varphi_1=\iota(\varepsilon_{x_n})$.
Case 2: $\lambda\in(0,1)$. Then necessarily $\nu^i(\{x_n\})\to1$, $s_n^i\to 1$ and $t_n^i\to0$. Thus for any $h\in \widetilde{H}$ and the corresponding $h'\in H$ we get $$\begin{aligned} \varphi_1(h')&=\lim_i \int h\,\mbox{\rm d}\sigma_i \\ &= \lim_i \left( \lambda^i\int h\,\mbox{\rm d}\nu^i + (1-\lambda^i) \sum_k \left(s_k^i\int h\,\mbox{\rm d} u_k+ t_k^i\int h\,\mbox{\rm d} v_k\right)\right)\\ &= \lim_i \left(\lambda^i h(x_n)\nu^i(\{x_n\}) + (1-\lambda^i)s_n^i \int h\,\mbox{\rm d} u_n\right)\\ &= \lambda h(x_n)+(1-\lambda)\int h\,\mbox{\rm d} u_n, \end{aligned}$$ so $\varphi_1=\iota(\lambda \varepsilon_{x_n}+(1-\lambda)u_n)$. This contradicts the assumption that $\varphi_1\in \pi^{-1}(\varepsilon_{x_n})$.
Case 3: $\lambda=0$. Using a similar computation we get that $\varphi_1=\iota(u_n)$, which again contradicts the assumption.
Summarizing, Case 1 must take place, hence $\varphi_1=\iota(\varepsilon_{x_n})$. The same works for $\varphi_2$, so $\iota(\varepsilon_{x_n})\in\ext S(H)$.
We continue by showing that also $\iota(\varepsilon_x)\in \ext S(H)$. We proceed similarly as above. Let $\varphi_1,\varphi_2\in\pi^{-1}(\varepsilon_{x})$ be such that $\frac12(\varphi_1+\varphi_2)=\iota(\varepsilon_{x})$. Plugging in $f_{x}$, we deduce that
$$1=f_{x}(\varepsilon_{x})=\iota(\varepsilon_{x})(f_{x})=\tfrac12(\varphi_1(f_{x})+\varphi_2(f_{x})).$$
Since $f_{x}$ has values in $[-1,1]$, we deduce that $\varphi_1(f_{x})=\varphi_2(f_{x})=1$.
Let us first analyze $\varphi_1$.
By Lemma~\ref{L:reprezentace} we may find a net $(\sigma_i)$ in $X$ converging to $\varepsilon_{x}$ such that $\iota(\sigma_i)\to \varphi_1$.
Assume that $\sigma_i$ is represented as in \eqref{eq:X-reprez} and the coefficients carry an additional upper index $i$. Then $$\lambda^i\nu^i(\{x\})+(1-\lambda^i)(\sum_n t_n^i-\sum_n s_n^i)\to 1.$$ Up to passing to a subnet we may assume that $\lambda^i\to\lambda\in[0,1]$. Let us distinguish some cases:
Case 1: $\lambda=1$. Then we get, similarly as above, that $\varphi_1=\iota(\varepsilon_x)$.
Case 2: $\lambda\in(0,1)$. Then $$\nu^i(\{x\})\to 1, \sum_n t_n^i\to 1, \sum_n s_n^i\to 0.$$ Thus for any $h\in \widetilde{H}$ and the corresponding $h'\in H$ we get $$\begin{aligned} \varphi_1(h')&=\lim_i \int h\,\mbox{\rm d}\sigma_i \\ &= \lim_i \left( \lambda^i\int h\,\mbox{\rm d}\nu^i + (1-\lambda^i) \sum_n \left(s_n^i\int h\,\mbox{\rm d} u_n+ t_n^i\int h\,\mbox{\rm d} v_n\right)\right)\\ &= \lim_i \left(\lambda^i h(x)\nu^i(\{x\}) + (1-\lambda^i)\sum_n t_n^i \int h\,\mbox{\rm d} v_n\right)\\&= \lambda h(x)+(1-\lambda)\lim_i \sum_n t_n^i \int h\,\mbox{\rm d} v_n. \end{aligned}$$ Set $$\tau_n^i=\frac{t_n^i}{\sum_k t_k^i}.$$ Since $\sum_k t_k^i\to 1$, this definition makes sense for $i$ large enough. Then $$\begin{aligned} \varphi_1(h')&=\lambda h(x)+(1-\lambda)\lim_i \sum_n \tau_n^i \int h\,\mbox{\rm d} v_n \\ &= \lim_i \left(\lambda h(x)+(1-\lambda) \sum_n \tau_n^i \int h\,\mbox{\rm d} v_n\right), \end{aligned}$$ hence $$\iota(\lambda \varepsilon_x+(1-\lambda)\sum_n \tau_n^i v_n)\to \varphi_1 \mbox{ in }S(H).$$ It follows that $$ \lambda\varepsilon_x+(1-\lambda)\sum_n \tau_n^i v_n\to \varepsilon_x\mbox{ in }X.$$ But this is a contradiction: any cluster point $\rho$ of $(\sum_n \tau_n^i v_n)_i$ must lie in $\wscl{\operatorname{conv} L}$, and then $\varepsilon_x=\lambda\varepsilon_x+(1-\lambda)\rho$ forces $\rho=\varepsilon_x$, which is impossible since every element of $\wscl{\operatorname{conv} L}$ has mass $1$ at $\{y\}$. So Case 2 cannot take place.
Case 3: $\lambda=0$. We get a contradiction similarly as in Case 2.
Thus $\varphi_1=\iota(\varepsilon_x)$. The same applies to $\varphi_2$, hence $\iota(\varepsilon_x)\in\ext S(H)$.
\noindent{\tt Step 8:} $\varepsilon_y\in\ext X$.
Consider the function $f_y(\sigma)=\sigma(\{y\})$, $\sigma\in X$. Then $f_y$ is affine and attains values in $[0,1]$. Further, $f_y(\sigma)=1$ if and only if $\sigma\in \wscl{\operatorname{conv} L}$ (see the notation in Step 2). Hence $\wscl{\operatorname{conv} L}$ is a face. A general element of this set is $$\sigma=\lambda\varepsilon_y+(1-\lambda)\sum_{n=1}^\infty (s_n u_n+t_n v_n).$$ To prove that $\varepsilon_y$ is an extreme point, it is enough to show that $$\varepsilon_y=\lambda\varepsilon_y+(1-\lambda)\sum_{n=1}^\infty (s_n u_n+t_n v_n)\Rightarrow \lambda=1.$$ So, assume that $\varepsilon_y$ is represented as above. Then $$0=f_{x_n}(\varepsilon_y)=(1-\lambda)(s_n-t_n).$$ Hence, if $\lambda\ne1$, we get $s_n=t_n$. Since $u_n+v_n=2(\varepsilon_y+\mu_n)$, we deduce that $$\varepsilon_y=\lambda\varepsilon_y+2(1-\lambda)\sum_{n=1}^\infty s_n(\varepsilon_y+\mu_n)=\varepsilon_y+2(1-\lambda)\sum_n s_n\mu_n,$$ hence $$\sum_n s_n\mu_n=0.$$ Since $\sum_n s_n=\frac12$ and $(\mu_n)$ is orthogonal in $L^2(\mu)$ (see the construction in Step 1), this is a contradiction.
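Indeed, the measure $\sum_n s_n\mu_n$ has density $\sum_n s_n f_n$ with respect to $\mu$, and since the functions $f_n$ are orthonormal in $L^2(\mu)$, we get $$\norm{\sum_n s_n f_n}_{L^2(\mu)}^2=\sum_n s_n^2>0,$$ as $\sum_n s_n=\frac12$ forces $s_n>0$ for at least one $n$.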
This completes the proof that $\varepsilon_{y}\in \ext X$.
\noindent{\tt Step 9:} $\iota(\varepsilon_y)\notin\ext S(H)$.
Recall that $u_n\to \varepsilon_y$ and $v_n\to\varepsilon_y$ in $X$ and $\frac12(u_n+v_n)=\varepsilon_y+\mu_n\to \varepsilon_y$ weakly. In particular, $f(\tfrac12(u_n+v_n))\to f(\varepsilon_{y})$ for each $f\in H$. But $f_x\in H$, $f_x(u_n)=-1$, $f_x(v_n)=1$ and $f_x(\varepsilon_y)=0$. We conclude by Lemma~\ref{L:zachovavani ext}.
\noindent{\tt Step 10:} Summary and conclusion.
If we summarize the results on extreme points, we have proved that $$\ext X=\{\varepsilon_{t};\, t\in K\}\cup \{u_n,v_n;\, n\in\mathbb N\},$$ $\iota(\varepsilon_y)\notin \ext S(H)$ and $\iota(\sigma)\in \ext S(H)$ for $\sigma\in \ext X\setminus\{\varepsilon_y\}$.
We further observe that $\iota(\varepsilon_y)\in m(S(H))$. Indeed, let $\psi$ be any cluster point of $(\iota(\varepsilon_{x_n}))$ in $S(H)$. Then $$\varphi_1=\iota(\varepsilon_y)+\psi-\iota(\varepsilon_x)\quad\mbox{and}\quad\varphi_2=\iota(\varepsilon_y)+\iota(\varepsilon_x)-\psi$$ belong to $\overline{\ext S(H)}$ and $\iota(\varepsilon_y)=\frac12(\varphi_1+\varphi_2)$. Since there are many different cluster points of $(\iota(\varepsilon_{x_n}))$, we deduce using Lemma~\ref{L:mult-nutne}$(c)$ and Proposition~\ref{P:m(X)} that $\iota(\varepsilon_y)\in m(S(H))$. In particular, in this case we get $Z(H)=M(H)$.
Finally note that $X$ may be metrizable -- it is enough to start with an uncountable metrizable compact $K$, for example $K=[0,1]$. A non-metrizable example $X$ is obtained if we start, for example, with $K=[0,1]^\Gamma$ for an uncountable $\Gamma$, or with $K=(B_H,w)$, the closed unit ball of a non-separable Hilbert space equipped with the weak topology. \end{proof}
It seems that the following question remains open.
\begin{ques}
Let $X$ be a compact convex set and $H$ be one of the spaces $A_1(X)$, $A_b(X)\cap\Bo_1(X)$, $A_f(X)$. Is $M(H)=Z(H)$? \end{ques}
\section{Spaces determined by extreme points} \label{s:determined}
In this section we prove two results on determination by extreme points. Let us start by explaining the context. The space $A_c(X)$ is determined by extreme points by the Krein-Milman theorem. More generally, the space $A_{sa}(X)\cap\Ba(X)$ is determined by extreme points because any Baire set containing $\ext X$ carries all maximal measures (see, e.g., the proof of \cite[Lemma 3.3]{smith-london}). Hence, a fortiori, the spaces $(A_c(X))^\mu$ and $(A_c(X))^\sigma$ are determined by extreme points. Further, the same argument shows that $A_{sa}(X)$ is determined by extreme points whenever $X$ is a standard compact convex set. However, for a general compact convex set $X$ this is no longer true; a counterexample is given in \cite{talagrand}.
Therefore it is natural to ask which subspaces of $A_{sa}(X)$ are determined by extreme points in general. In \cite[Theorem 3.7]{smith-london} this was proved for $(A_s(X))^\mu$ and in \cite{dostal-spurny} for $A_f(X)$ (hence for $A_b(X)\cap\Bo_1(X)$). We are going to extend these results to $(A_s(X))^\sigma$ and to $(A_f(X))^\mu$.
Before passing to the results we point out that, even though determinacy by extreme points seems to be related to strong affinity of functions, there is no implication between these two properties. This is witnessed on the one hand by the above-mentioned counterexample from \cite{talagrand} and on the other hand by the examples from Section~\ref{sec:strange}.
Now we pass to the result on $(A_s(X))^\sigma$. It is based on the integral representation of semicontinuous affine functions proved in \cite{teleman}. Let us recall the related notions and constructions. If $X$ is a compact convex set, the \emph{Choquet topology}\index{Choquet topology} is the topology $\tau_{\Ch}$ on $\ext X$ such that \gls{tauCh}-closed sets are precisely the sets of the form $F\cap \ext X$, where $F\subset X$ is a closed extremal set. As explained in \cite[Section 2]{batty-cambr} this topology coincides with the topology $\tau_{\ext}$ from \cite[Chapter 9]{lmns}. It is easy to check that $(\ext X, \tau_{\Ch})$ is a $T_1$ compact (in general non-Hausdorff -- see \cite[Theorem 9.10]{lmns}) topological space. The topology $\tau_{\Ch}$ is an important tool to define a canonical correspondence between maximal measures and certain measures on extreme points. This correspondence is established by the following lemma, which is proved in \cite[Theorem 9.19]{lmns}.
\begin{lemma}\label{L:miry na ext} Let $X$ be a compact convex set. Let $\Sigma$ denote the $\sigma$-algebra on $X$ generated by all Baire sets in $X$ and all closed extremal sets in $X$. Let $$\gls{Sigma'}=\{A\cap\ext X;\, A\in\Sigma\}.$$ Then $\Sigma'$ is a $\sigma$-algebra on $\ext X$ and for any maximal measure $\mu\in M_1(X)$ there is a (unique) probability measure \gls{mu'} on the measurable space $(\ext X, \Sigma')$ such that, \begin{equation}
\label{eq:mira1} \begin{aligned} \mu(A)&=\mu'(A\cap \ext X),\quad A\in\Sigma,\\ \mu'(A)&=\sup\{\mu'(F);\, F\subset A\text{ is }\tau_{\Ch}\text{-closed}\},\quad A\in \Sigma',\\ \mu'(F)&=\inf\{\mu'(B\cap \ext X);\, B\cap \ext X\supset F, B\text{ Baire in }X\},\\ &\qquad\qquad F\subset \ext X\text{ is }\tau_{\Ch}\text{-closed}.
\end{aligned}
\end{equation} \end{lemma}
We continue by a representation theorem for $(A_s(X))^\sigma$. (We note that measurability and integrals are tacitly considered with respect to the completion of the respective measure.)
\begin{thm}\label{T:Assigma-reprez}
Let $X$ be a compact convex set. Let $\mu\in M_1(X)$ be a maximal measure and $\mu'$ the measure provided by Lemma~\ref{L:miry na ext}. Then for each $f\in (A_s(X))^\sigma$ the restriction $f|_{\ext X}$ is $\mu'$-measurable and
$$\int_{\ext X} f\,\mbox{\rm d}\mu'=f(r(\mu)).$$ \end{thm}
\begin{proof}
If $f$ is semicontinuous, the result follows from \cite[Theorem 1]{teleman}. The general case then follows easily by the linearity of the integral and the Lebesgue dominated convergence theorem. \end{proof}
The following result on determinacy by extreme points is an easy consequence of the previous theorem.
\begin{thm}
\label{t:assigma-deter} Let $X$ be a compact convex set. Then the intermediate function space $(A_s(X))^\sigma$ is determined by extreme points. \end{thm}
We continue by the promised result for the space $(A_f(X))^\mu$.
\begin{thm} \label{extdeter} Let $X$ be a compact convex set. Then the space $(A_f(X))^\mu$
is determined by extreme points. \end{thm}
\begin{proof}
We will adapt some ideas of the proof of \cite[Theorem 3.7]{smith-london}. To this end we define the set $L$ by $$\begin{aligned}
L=\{ l:X\to\mathbb R\cup\{+\infty\};\,& l\mbox{ is concave and lower bounded}, \\ & l\mbox{ is universally measurable},
\\ & l(r(\mu))\ge \int_X l\,\mbox{\rm d}\mu \mbox{ for }\mu\in M_1(X), \\
& \{x\in F;\, l|_F \text{ lower semicontinuous at $x$}\}
\\&\qquad\mbox{is residual in $F$ whenever $F\subset X$ is closed}\}. \end{aligned}$$ Further, let \[ U=\{a\in A_b(X);\, \forall \varepsilon>0\,\forall x\in X\,\exists\, l,-u\in L: u\le a\le l\ \&\ l(x)-u(x)<\varepsilon \}. \] The formula for $U$ is taken from \cite[p. 102]{smith-london}, but in our case the set $L$ is much larger. We proceed by a series of claims.
\begin{claim}
If $u\in -L$ and $x\in \ext X$ is such that $u|_{\overline{\ext X}}$ is upper semicontinuous at $x$, then $u$ is upper semicontinuous at $x$. \end{claim}
\begin{proof}[Proof of Claim 1]
This claim is an ultimate generalization of \cite[Lemma 2.4]{rs-fragment} and the proof is quite similar: The function $u$ is convex and upper bounded, say by a constant $M\ge 0$. Further, for each $\mu\in M_1(X)$ we have $u(r(\mu))\le \mu(u)$. Let $h\in\mathbb R$ satisfying $h>u(x)$ be given. We aim to find a neighbourhood $V$ of $x$ such that $u(y)< h$ for each $y\in V$. We select $h'\in (u(x),h)$. Using the assumption, we next find a neighbourhood $U\subset X$ of $x$ such that $u(t)<h'$ for each $t\in U\cap \overline{\ext X}$. Let $\varepsilon>0$ be such that $h'+\varepsilon M<h$. By \cite[Lemma 2.2]{rs-fragment} there exists a neighbourhood $V$ of $x$ such that for each $t\in V$ and $\mu\in M_t(X)$ we have $\mu(U)>1-\varepsilon$. Then for arbitrary $y\in V$ we pick a measure $\mu_y\in M_y(X)$ with support in $\overline{\ext X}$. Then \[ u(y)\le \mu_y(u)=\int_{\overline{\ext X}} u\,\mbox{\rm d}\mu_y=\int_{\overline{\ext X}\cap U}u\,\mbox{\rm d}\mu_y+\int_{\overline{\ext X}\setminus U}u\,\mbox{\rm d}\mu_y \le h'+\varepsilon M<h. \] Thus $u$ is upper semicontinuous at $x$. \end{proof}
\begin{claim} If $u\in -L$ and $c\in\mathbb R$ are such that $u\le c$ on $\ext X$, then $u\le c$ on $X$. \end{claim}
\begin{proof}[Proof of Claim 2] This claim is a generalization of \cite[Corollary 2.6]{rs-fragment} and the proof is similar to that of \cite[Theorem 2.5]{rs-fragment}. Let $u\in -L$ satisfy $u\le c$ on $\ext X$. Assuming that $u(t)>c$ for some $t\in X$, let $\eta\in (c,u(t))$ and let $H=[u\ge \eta]$. Since $u$ is convex, $H$ is a semi-extremal subset of $X$ (i.e., $X\setminus H$ is convex), which does not intersect $\ext X$. Let $E=\overline{\operatorname{conv}} H$; then by the Milman theorem $\ext E\subset \overline{H}$. Let \[
C=\{x\in \overline{\ext E};\, u|_{\overline{\ext E}}\text{ is upper semicontinuous at }x\}. \]
Since $u\in -L$, the set $C$ is residual in $\overline{\ext E}$. Let $G\subset C$ be a dense $G_\delta$ set in $\overline{\ext E}$. If $U\subset \overline{\ext E}$ is open and dense in $\overline{\ext E}$, then $U\cap \ext E$ is open and dense in $\ext E$. Since $\ext E$ is a Baire space (see \cite[Theorem I.5.13]{alfsen}), the set $G\cap \ext E$ is nonempty. Thus there exists a point $x\in C\cap \ext E$, which means that $x$ is an extreme point of $E$ such that $u|_{\overline{\ext E}}$ is upper semicontinuous at $x$. By Claim 1, $u|_{E}$ is upper semicontinuous at $x$. Since $\ext E\subset \overline{H}$, we obtain that $u(x)\ge \eta$, i.e., $x\in H$. Thus $x\in H\cap \ext E$. In particular, $x$ is not an interior point of any segment with endpoints in $\overline{H}$, i.e., $x\in H\cap \ext \overline{H}$ in the terminology of \cite{rs-fragment}. But this contradicts \cite[Lemma 2.1]{rs-fragment}.
\begin{claim} The set $L$ is a convex cone (i.e., $f+g, \alpha f\in L$ whenever $f,g\in L$ and $\alpha\ge 0$) containing $A_f(X)$ and closed with respect to taking pointwise limits of nondecreasing sequences. \end{claim}
\begin{proof}[Proof of Claim 3]
It is clear that $L$ is a convex cone. Further, given $f\in A_f(X)$, the function $f$ is strongly affine (cf.\ Section~\ref{ssc:meziprostory}), hence it is concave, bounded and satisfies $\mu(f)=f(r(\mu))$ for each $\mu\in M_1(X)$. If $F$ is a nonempty closed set in $X$, then $f|_F$ has the point of continuity property (by Theorem~\ref{T:a}), and thus the set $C_f=\{x\in F;\, f|_F \text{ continuous at }x\}$ is a dense $G_\delta$ set in $F$ due to \cite[Theorem 2.3]{koumou}. Hence the set $L_f=\{x\in F;\, f|_F \text{ lower semicontinuous at }x\}\supset C_f$ is residual in $F$. Thus, indeed, $f\in L$.
Let now $\{l_n\}$ be a nondecreasing sequence of functions in $L$. Then $l=\lim l_n$ is lower bounded, concave and has values in $\mathbb R\cup\{+\infty\}$. If $\mu\in M_1(X)$, $l$ is $\mu$-measurable (since the functions $l_n$, $n\in\mathbb N$, are $\mu$-measurable) and due to the Lebesgue monotone convergence theorem we have \[ \mu(l)=\int_X (\lim_{n\to\infty} l_n)\,\mbox{\rm d}\mu=\lim_{n\to \infty} \int_X l_n\,\mbox{\rm d}\mu\le \lim_{n\to\infty} l_n(r(\mu))=l(r(\mu)). \] Finally, let $F\subset X$ be a nonempty closed set. Then the sets \[
C_{l_n}=\{x\in F;\, l_n|_F \text{ lower semicontinuous at }x\},\quad n\in\mathbb N \]
are residual in $F$. Hence $C=\bigcap_{n=1}^\infty C_{l_n}$ is residual in $F$ and for a point $x\in C$ we have $x\in C_l=\{x\in F;\, l|_F \text{ lower semicontinuous at }x\}$. Indeed, if $c<l(x)$ is given, let $n\in \mathbb N$ satisfy $c<l_n(x)\le l(x)$. Let $U\subset X$ be a neighbourhood of $x$ such that $c<l_n(y)$ for each $y\in U\cap F$. Then for $y\in U\cap F$ we have $l(y)\ge l_n(y)>c$, which gives that $l|_F$ is lower semicontinuous at $x$. Thus $C_l$ is residual as well and $l\in L$. \end{proof}
\begin{claim} The set $U$ is a vector space containing $A_f(X)$ and closed with respect to taking pointwise limits of bounded monotone sequences. \end{claim}
\begin{proof}[Proof of Claim 4] We proceed similarly as in the proof of \cite[Proposition 3.5]{smith-london}: It is obvious from the definition and Claim 3 that $U$ is a vector space. Further, since $A_f(X)\subset L\cap -L$, we have $A_f(X)\subset U$.
We continue by showing that $U$ is closed with respect to taking pointwise limits of bounded monotone sequences. Since we already know that $U$ is a vector space, it is enough to consider nondecreasing sequences. So, let $\{a_n\}$ be a bounded nondecreasing sequence of functions from $U$ with pointwise limit $a\in A_b(X)$. Let $x\in X$ and $\varepsilon>0$ be given. Fix $n\in\mathbb N$ such that $a(x)-a_n(x)<\frac{\varepsilon}{4}$. Since $a_n\in U$, there exists $u\in -L$ with $u\le a_n$ and $a_n(x)-u(x)<\frac{\varepsilon}{4}$. Then $u\le a_n\le a$ and $a(x)-u(x)=(a(x)-a_n(x))+(a_n(x)-u(x))<\frac{\varepsilon}{2}$.
Further, given $n\in\mathbb N$, set $b_n=a_{n+1}-a_n$. Then $b_n\in U$ and thus there exists a function $l_n\in L$ such that \[ b_n\le l_n\quad\text{and} \quad l_n(x)-b_n(x)<\frac{\varepsilon}{2^{n+2}}. \] Then the partial sums $m_k=l_1+\cdots+l_k$, $k\in\mathbb N$, are contained in $L$ due to Claim 3 and the sequence $\{m_k\}$ is nondecreasing as $l_n\ge b_n=a_{n+1}-a_n\ge 0$ for each $n\in\mathbb N$. Hence $m=\lim_{k\to\infty} m_k$ belongs to $L$ by Claim 3. Moreover, \[ m_k=l_1+\cdots+l_k\ge (a_2-a_1)+(a_3-a_2)+\cdots+(a_{k+1}-a_k)=a_{k+1}-a_1,\quad k\in\mathbb N \] and \[ \begin{aligned} &m_k(x)-\left(a_{k+1}(x)-a_1(x)\right)=\\ &=\left(l_1(x)+\cdots+l_k(x)\right)-\left((a_2(x)-a_1(x))+\cdots+(a_{k+1}(x)-a_k(x))\right)\\ &=(l_1(x)-b_1(x))+(l_2(x)-b_2(x))+\cdots +(l_k(x)-b_k(x))\\ &<\sum_{n=1}^k \frac{\varepsilon}{2^{n+2}}<\frac{\varepsilon}{4}. \end{aligned} \] For the function $m$ we thus obtain inequalities \[ m\ge a-a_1\quad\text{and}\quad m(x)-\left(a(x)-a_1(x)\right)\le\frac{\varepsilon}{4}. \] Since $a_1\in U$, we can find $l_0\in L$ such that $a_1\le l_0$ and $l_0(x)-a_1(x)<\frac{\varepsilon}{4}$. Then $l=m+l_0\in L$ satisfies \[ l=m+l_0\ge a-a_1+a_1=a \] and \[ l(x)-a(x)=(m(x)-a(x)+a_1(x))+(l_0(x)-a_1(x))<\frac{\varepsilon}{4}+\frac{\varepsilon}{4}=\frac{\varepsilon}{2}. \] Thus $l(x)-u(x)<\varepsilon$ and $a\in U$. \end{proof}
\begin{claim} The space $(A_f(X))^\mu$ is determined by its values on $\ext X$. \end{claim}
\begin{proof}[Proof of Claim 5] Since $(A_f(X))^\mu\subset U$ by Claim 4, it is enough to show that $U$ is determined by its values on $\ext X$. Let $a\in U$ satisfy $\alpha\le a(x)\le \beta$ for each $x\in \ext X$.
Fix any $x\in X$ and $\varepsilon>0$. Let $-u,l\in L$ be such that $u\le a\le l$ and $l(x)-u(x)<\varepsilon$. Since $u\le \beta$ and $l\ge \alpha$ on $\ext X$, using Claim 2 we deduce $l(x)\ge \alpha$ and $u(x)\le \beta$, so $$\alpha-\varepsilon\le l(x)-\varepsilon <u(x)\le a(x)\le l(x)< u(x)+\varepsilon\le \beta+\varepsilon.$$ Since $\varepsilon>0$ is arbitrary, we conclude $\alpha\le a(x)\le \beta$ which completes the proof.
\end{proof} \end{proof}
We continue by an obvious consequence.
\begin{cor}
Let $X$ be a compact convex set. Then each of the spaces
$$A_b(X)\cap\Bo_1(X), (A_b(X)\cap\Bo_1(X))^\mu, A_f(X)$$
is determined by extreme points. \end{cor}
We note that the case of $A_f(X)$ (and hence also that of $A_b(X)\cap\Bo_1(X)$) is covered already by \cite{dostal-spurny}.
If we compare Theorem~\ref{t:assigma-deter} with Theorem~\ref{extdeter} (and its corollary), the following question seems to be natural.
\begin{ques}\label{q:sigma-deter}
Let $X$ be a compact convex set. Are the spaces $(A_b(X)\cap \Bo_1(X))^\sigma$ and $(A_f(X))^\sigma$ determined by extreme points? \end{ques}
In fact, even the following question seems to be open.
\begin{ques}
Let $X$ be a compact convex set. Is the space $A_{sa}(X)\cap\Bo(X)$ of Borel strongly affine functions determined by extreme points? \end{ques}
Note that the methods of proving Theorems~\ref{t:assigma-deter} and~\ref{extdeter} are very different. A natural attempt to answer Question~\ref{q:sigma-deter} in the positive would be to try to extend Theorem~\ref{T:Assigma-reprez} to larger classes of functions. There are two possible ways of such an extension. Either we take the statement as it is and we just consider more general functions. I.e., we require that $f$ is $\mu'$-measurable for each maximal probability measure $\mu$ and the formula holds. It is not hard to find $X$ and $f\in A_b(X)\cap\Bo_1(X)$ such that this condition is not satisfied (see Example~\ref{ex:teleman-ne} below).
The second possibility is to weaken our requirements -- for any $x\in X$ there should be some probability measure $\mu$ defined on some $\sigma$-algebra on $\ext X$ such that each $f$ from the respective class is $\mu$-measurable and satisfies $\mu(f)=f(x)$. Even this weaker condition would be sufficient to derive determinacy by extreme points. However, under continuum hypothesis it is not satisfied for $A_b(X)\cap \Bo_1(X)$ in general (see Example~\ref{ex:AMC}$(1)$ below). But it is not clear whether we may find a counterexample without additional set-theoretical assumptions (cf.\ Example~\ref{ex:AMC}$(2)$).
This is also related to \cite[Question 1' on p. 97]{teleman} asking essentially whether Theorem~\ref{T:Assigma-reprez} holds for strongly affine functions. The negative answer (even for the weaker variant) follows already from \cite{talagrand}, where a strongly affine function not determined by extreme points is constructed.
\section{Boundary integral representation of spaces of multipliers}\label{s:reprez-abstraktni}
The main results of \cite{edwards,smith-london} include boundary integral representations of the centers of the spaces $(A_c(X))^\mu$, $(A_c(X))^\sigma$ and $(A_s(X))^\mu$. If we use the approach of \cite[Theorem 1.2]{smith-pacjm}, this means that the centers of these spaces are characterized by a measurability condition on $\ext X$ and, moreover, a mapping assigning to each $x\in X$ a representing measure is constructed. The proofs of these results follow the same pattern. This inspired us to provide a common roof for them. This is the content of the present section; the promised common roof is Theorem~\ref{T:integral representation H}. We also investigate possible generalizations in Theorems~\ref{T: meritelnost H=H^uparrow cap H^downarrow} and~\ref{t:aha-regularita}.
There is, however, one difference in comparison with the results of \cite{edwards,smith-london}. One of the key ingredients of the quoted results was the coincidence of the center and the space of multipliers. In general these two spaces are different, so we focus on spaces of multipliers.
We start with an abstract lemma on function algebras. The lemma is known, but we give a proof for the sake of completeness.
\begin{lemma}\label{L:lattice}
Let $\Gamma$ be a nonempty set and let $A\subset \ell^\infty(\Gamma)$ be a linear subspace.
\begin{enumerate}[$(a)$]
\item Assume that $A$ is a norm-closed subalgebra of $\ell^\infty(\Gamma)$. Then:
\begin{itemize}
\item[$\circ$] $A$ is also a sublattice of $\ell^\infty(\Gamma)$.
\item[$\circ$] If $f\in A$ and $f\ge c$ for some strictly positive $c\in\mathbb R$, then $\frac1f\in A$. (In particular, in this case $A$ contains constant functions.)
\end{itemize}
\item If $A$ is a subalgebra closed with respect to pointwise limits of bounded monotone sequences, then $A$ is norm-closed and, moreover, closed
with respect to limits of bounded pointwise converging sequences.
\item Assume that $A$ is a norm-closed sublattice of $\ell^\infty(\Gamma)$ containing constant functions. Then it is also a subalgebra of $\ell^\infty(\Gamma)$.
\end{enumerate} \end{lemma}
\begin{proof}
$(a)$: Assume $A$ is a norm-closed subalgebra. To show it is a sublattice, it is enough to prove that $\abs{f}\in A$ whenever $f\in A$. So, fix $f\in A$. Then $f$ is bounded, so there is some $r>0$ such that the range of $f$ is contained in $[-r,r]$. By the classical Weierstrass theorem there is a sequence $(p_n)$ of polynomials such that $p_n(t)\rightrightarrows \abs{t}$ on $[-r,r]$. Up to replacing $p_n$ by $p_n-p_n(0)$ we may assume that $p_n(0)=0$. Since $A$ is a subalgebra, we deduce $p_n\circ f\in A$ for each $n$. Since $p_n\circ f\rightrightarrows \abs{f}$ on $\Gamma$ and $A$ is closed, we conclude that $\abs{f}\in A$.
The second assertion is similar. Since $f$ is bounded, there is some $r>c$ such that the range of $f$ is contained in $[c,r]$. Let
$g:[0,r]\to \mathbb R$ be defined by
$$g(t)=\begin{cases}
\frac t{c^2} & t\in[0,c],\\ \frac1t & t\in [c,r].
\end{cases}$$
Then $g$ is continuous on $[0,r]$ and $g(0)=0$. By the classical Weierstrass theorem we find a sequence $(p_n)$ of polynomials converging to $g$ uniformly on $[0,r]$.
Up to replacing $p_n$ by $p_n-p_n(0)$ we may assume that $p_n(0)=0$. Since $A$ is an algebra, we deduce that $p_n\circ f\in A$ and this sequence uniformly converges to $\frac1f$. Thus $\frac1f\in A$. To see the `in particular part', observe that $1=f\cdot\frac1f\in A$ whenever both $f\in A$ and $\frac1f\in A$.
$(b)$: $A$ is norm-closed by Lemma~\ref{L:muclosed is closed}. By $(a)$ we deduce it is a sublattice. It follows that $A$ is closed under pointwise suprema of bounded countable sets. Indeed, if $(a_n)$ is a bounded sequence, the lattice property yields that $b_n=\max\{a_1,\dots,a_n\}\in A$ for each $n$. Since $b_n\nearrow \sup_n a_n$, we deduce that the supremum belongs to $A$. Finally, it follows that $A$ is closed with respect to taking the pointwise limsup of bounded sequences, which completes the proof.
$(c)$: Assume that $A$ is a norm-closed sublattice containing constant functions. To prove it is a subalgebra, it is enough to show that $f^2\in A$ whenever $f\in A$. (Indeed, we have $fg=\frac12((f+g)^2-f^2-g^2)$.) So, fix $f\in A$. Then $f$ is bounded, so there is some $r>0$ such that the range of $f$ is contained in $[-r,r]$. By a classical result there is a sequence of piecewise linear functions $l_n$ such that $l_n(t)\rightrightarrows t^2$ on $[-r,r]$. The assumptions yield that $l_n\circ f\in A$ for each $n$. Since $l_n\circ f\rightrightarrows f^2$ on $\Gamma$ and $A$ is norm-closed, we conclude that $f^2\in A$. \end{proof}
We continue by the following result, which is a generalization of \cite[Proposition II.7.9]{alfsen} and of some important steps in the proofs of the main results of \cite{edwards,smith-london}.
\begin{prop}\label{P:algebra ZH}
Let $H$ be an intermediate function space determined by extreme points. Let us define the restriction operator
$\gls{R}:H\to \ell^\infty(\ext X)$ by $R(a)=a|_{\ext X}$ for $a\in H$.
Then the following assertions are valid:
\begin{enumerate}[$(a)$]
\item $R$ is a linear order-isomorphic isometric inclusion and its range is a closed linear subspace of $\ell^\infty(\ext X)$ containing constant functions.
\item $R(H)$ is a sublattice $\Longleftrightarrow$ $R(H)$ is a subalgebra $\Longleftrightarrow$ $M(H)=H$.
\item $R(M(H))$ is a closed subalgebra and sublattice of $\ell^\infty(\ext X)$. If, moreover, $H^\mu=H$, then $R(M(H))$ is closed with respect to pointwise limits of bounded sequences.
\item All the properties from $(c)$ are fulfilled also by $R(M^s(H))$.
\end{enumerate}
\end{prop}
\begin{proof} $(a)$: It is clear that $R$ is a linear mapping. Since $H$ is determined by extreme points, $R$ is both an order isomorphism onto its range and an isometry, so its range is a closed linear subspace of $\ell^\infty(\ext X)$. Obviously it contains constant functions.
$(c)$: By Lemma~\ref{L:uzavrenost}$(ii)$ $M(H)$ is closed, so $R(M(H))$ is closed as well (by $(a)$). Let us continue by showing that $R(M(H))$ is a subalgebra. Let $a_1,a_2\in R(M(H))$. Let $m_1,m_2\in M(H)$ be such that $R(m_1)=a_1$ and $R(m_2)=a_2$. Since $m_1\in M(H)$, there is $m\in H$ such that $m=m_1m_2$ on $\ext X$. Then clearly $m|_{\ext X}=a_1a_2$. It remains to show that $m\in M(H)$. To this end fix $a\in H$. Since $m_1\in M(H)$, there is $b_1\in H$ such that $b_1=m_1a$ on $\ext X$. Since $m_2\in M(H)$, there is $b_2\in H$ such that $b_2=m_2b_1$ on $\ext X$. Then $b_2=m_2m_1a$ on $\ext X$, hence $b_2=ma$ on $\ext X$. This completes the proof that $m\in M(H)$. Thus $R(M(H))$ is a closed subalgebra of $\ell^\infty(\ext X)$, so by Lemma~\ref{L:lattice}$(a)$ we deduce that it is also a sublattice.
Finally, assume that, moreover, $H=H^\mu$. Then by Proposition~\ref{p:multi-pro-mu}$(i)$ we get $M(H)=(M(H))^\mu$. It follows that $R(M(H))$ is closed with respect to pointwise limits of bounded monotone sequences. Indeed, let $(g_n)$ be a non-decreasing sequence in $R(M(H))$ pointwise converging to some $g\in\ell^\infty(\ext X)$. Find $m_n\in M(H)$ with $R(m_n)=g_n$. Since $H$ is determined by extreme points, the sequence $(m_n)$ is non-decreasing and bounded. So $m=\lim_n m_n\in M(H)$. Then $g=R(m)\in R(M(H))$. We conclude by Lemma~\ref{L:lattice}$(b)$.
$(d)$: This is completely analogous to $(c)$.
$(b)$: It follows from $(a)$ and Lemma~\ref{L:lattice} that $R(H)$ is a subalgebra if and only if it is a sublattice. If $M(H)=H$, then $R(H)$ is a subalgebra by $(c)$. Finally, if $R(H)$ is a subalgebra, the very definition of multipliers yields that $M(H)=H$. \end{proof}
The following lemma is proved in \cite[Lemma 3.5]{edwards}.
\begin{lemma}\label{L:sigmaalgebra} Let $\Gamma$ be a set and let $A\subset\ell^\infty(\Gamma)$ be a subalgebra containing constant functions and closed with respect to taking limits of pointwise converging bounded sequences. Then
$$\mathcal A=\{E\subset \Gamma;\, 1_E\in A\}$$
is a $\sigma$-algebra and
$$A=\{f\in\ell^\infty(\Gamma);\, f^{-1}(U)\in\mathcal A\mbox{ for each }U\subset\mathbb R\mbox{ open}\},$$
i.e., $A$ is formed exactly by bounded $\mathcal A$-measurable functions on $\Gamma$. \end{lemma}
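Let us only indicate why $\mathcal A$ is a $\sigma$-algebra (the complete argument is given in \cite[Lemma 3.5]{edwards}): since $A$ is an algebra containing constants and closed under limits of bounded pointwise converging sequences, for $E,F,E_n\in\mathcal A$ we have $$1_{\Gamma\setminus E}=1-1_E\in A,\qquad 1_{E\cap F}=1_E\cdot 1_F\in A,\qquad 1_{\bigcup_n E_n}=\lim_{N\to\infty}\Bigl(1-\prod_{n=1}^N(1-1_{E_n})\Bigr)\in A,$$ the last limit being a pointwise limit of a bounded nondecreasing sequence of functions from $A$.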
Now we are ready to formulate the promised common roof of the main results of \cite{edwards,smith-london}.
\begin{thm}\label{T:integral representation H}
Let $H$ be an intermediate function space determined by extreme points satisfying $H^\mu=H$. Then the following assertions hold.
\begin{enumerate}[$(i)$]
\item The systems
$$\gls{AH}=\{E\subset \ext X;\, \exists m\in M(H)\colon m|_E=1 \ \&\ m|_{\ext X\setminus E}=0\}\index{sigma-algebra@$\sigma$-algebra!AH@$\mathcal A_H$}$$
and
$$\gls{AHs}=\{E\subset \ext X;\, \exists m\in M^s(H)\colon m|_E=1 \ \&\ m|_{\ext X\setminus E}=0\}\index{sigma-algebra@$\sigma$-algebra!AHs@$\mathcal A_H^s$}$$
are $\sigma$-algebras.
\item Let $u\in \ell^\infty(\ext X)$. Then $u$ may be extended to an element of $M(H)$ or $M^s(H)$ if and only if $u$ is $\mathcal A_H$ or $\mathcal A^s_H$-measurable, respectively.
\item For any $x\in X$ there is a unique probability $\mu=\mu_{H,x}$ on $(\ext X,\mathcal A_H)$ such that
$$\forall m\in M(H)\colon m(x)=\int m\,\mbox{\rm d}\mu.$$
\end{enumerate} \end{thm}
\begin{proof}
Let $R$ be the restriction map from Proposition~\ref{P:algebra ZH}. It follows from the quoted proposition that $R(M(H))$ and $R(M^s(H))$ satisfy the assumptions of Lemma~\ref{L:sigmaalgebra}. By applying this lemma we obtain assertions $(i)$ and $(ii)$.
$(iii)$: Fix $x\in X$. Define a functional $\varphi:R(M(H))\to \mathbb R$ by
$$\varphi(g)=R^{-1}(g)(x),\quad g\in R(M(H)).$$
Since $H$ is determined by extreme points, $\varphi$ is a positive linear functional of norm one. By $(ii)$, $R(M(H))$ coincides with the space of bounded $\mathcal A_H$-measurable functions on $\ext X$, hence by \cite[Theorem IV.5.1]{DS1} the functional $\varphi$ is represented by a finitely additive probability measure $\mu$ on $(\ext X,\mathcal A_H)$.
It remains to observe that $\mu$ is $\sigma$-additive. To this end fix $(E_n)$, a disjoint sequence in $\mathcal A_H$. Let $m_n\in M(H)$ be such that $m_n|_{E_n}=1$ and $m_n|_{\ext X\setminus E_n}=0$ and set $f_n=m_1+\dots+m_n$. Then $(f_n)$ is a non-decreasing sequence in $M(H)$ upper bounded by $1$, so $f=\lim_n f_n\in M(H)$. Observe that $f=1$ on $\bigcup_n E_n$ and $f=0$ on $\ext X\setminus \bigcup_n E_n$. Then
$$\begin{aligned}
\mu\left(\bigcup_n E_n\right)&=\varphi(1_{\bigcup_n E_n})=f(x)=\lim_n f_n(x)=\lim_n\varphi(1_{E_1\cup\dots\cup E_n})
\\&=\lim_n \mu(E_1\cup\dots\cup E_n)
=\sum_{n=1}^\infty \mu(E_n).\end{aligned}$$
This proves the existence of $\mu$. The uniqueness is obvious: for $E\in\mathcal A_H$ we necessarily have
$$\mu(E)=\int 1_E\,\mbox{\rm d}\mu=R^{-1}(1_E)(x).$$ \end{proof}
\begin{cor}
Let $H$ be an intermediate function space determined by extreme points satisfying $H^\mu=H$.
Then $(M(H))^\sigma=M(H)$ and $(M^s(H))^\sigma=M^s(H)$. \end{cor}
\begin{proof}
We give the proof for $M(H)$; the case of $M^s(H)$ is basically the same. Let $R$ be the restriction map from Proposition~\ref{P:algebra ZH}. Let $(f_n)$ be a bounded sequence in $M(H)$ pointwise converging to some $f\in A_b(X)$. By Proposition~\ref{P:algebra ZH}$(c)$, $f|_{\ext X}$ belongs to $R(M(H))$, so there is some $g\in M(H)$ such that $f|_{\ext X}=g|_{\ext X}$. Fix $x\in X$ and let $\mu=\mu_{H,x}$ be the probability measure provided by Theorem~\ref{T:integral representation H}$(iii)$. Then
$$f(x)=\lim_n f_n(x)=\lim_n \int_{\ext X} f_n\,\mbox{\rm d}\mu=\int_{\ext X}f\,\mbox{\rm d}\mu=\int_{\ext X}g\,\mbox{\rm d}\mu=g(x),$$
where we used the Lebesgue dominated convergence theorem.
So, $f=g$ and thus $f\in M(H)$. \end{proof}
\begin{remark}\label{rem:intrepH} (1) Theorem~\ref{T:integral representation H} is an abstract common roof of several results.
The case $H=(A_c(X))^\mu$ is addressed in \cite{edwards}. The results are formulated in a different language using the spectrum $\widehat{A_c(X)}$ in place of $\ext X$, but this approach is equivalent due to \cite[Proposition 2.3]{edwards}. Assertions $(i)$ and $(ii)$ of Theorem~\ref{T:integral representation H} then correspond to \cite[Theorem 3.6]{edwards} and assertion $(iii)$ corresponds to \cite[Proposition 4.9]{edwards}.
This research was continued in \cite{smith-london}. The case $H=(A_c(X))^\sigma$ corresponds to \cite[Theorems 5.2 and 5.3]{smith-london} and the case $H=(A_s(X))^\mu$ corresponds to \cite[Theorems 5.4 and 5.5]{smith-london}. In \cite{smith-london} the terminology from \cite{edwards} is used, in \cite[Theorem 1.2]{smith-pacjm} the case $H=(A_s(X))^\mu$ is reformulated using functions on $\ext X$, which corresponds to our setting.
Our result is abstract; the only assumption is that $H$ is an intermediate function space determined by extreme points and satisfying $H^\mu=H$. By the results of Section~\ref{s:determined} the natural spaces to which Theorem~\ref{T:integral representation H} applies include $(A_s(X))^\sigma$, $(A_f(X))^\mu$ and $(A_b(X)\cap\Bo_1(X))^\mu$. If $X$ is a standard compact convex set (for example if it is metrizable), the theorem applies also to $A_{sa}(X)$.
(2) Let us compare Theorem~\ref{T:integral representation H} with Theorem~\ref{T:Assigma-reprez}. Although both theorems are devoted to boundary integral representation, they have different meaning and they apply in different situations.
In Theorem~\ref{T:Assigma-reprez} one canonical $\sigma$-algebra is considered and the representing measures are derived from the respective maximal measures representing the given points in the usual way. The theorem says that this kind of canonical representation holds for a certain class of affine functions (more precisely, for functions from $(A_s(X))^\sigma$). In some cases the same representation holds for a larger class of functions (for example for strongly affine functions if $X$ is metrizable).
On the other hand, Theorem~\ref{T:integral representation H} provides a $\sigma$-algebra depending on $H$, the representing measures result from an abstract theorem on representation of dual spaces and are not directly related to the measures representing in the usual sense. Further, the representation works only for multipliers. However, there is an additional ingredient (not present in Theorem~\ref{T:Assigma-reprez}) -- a characterization of multipliers (and strong multipliers) using a kind of measurability of the restriction to $\ext X$. This becomes even more interesting if there is a more descriptive characterization of the $\sigma$-algebra. This task is addressed in Sections~\ref{s:meas sm} and~\ref{sec:baire} and also in concrete examples in Section~\ref{sec:stacey}.
(3) We note that the results in \cite{edwards,smith-london,smith-pacjm} are formulated for $Z(H)$ in place of $M(H)$, but in all three cases the equality $Z(H)=M(H)$ holds (see \cite[Propositions 4.4 and 4.9]{smith-london} for proofs for $(A_c(X))^\sigma$ and $(A_s(X))^\mu$. The case of $(A_c(X))^\mu$ may be proved using the method of \cite[Proposition 4.9]{smith-london}.)
Our results work for a general intermediate function space $H$ which is determined by extreme points and satisfies $H^\mu=H$, but they are formulated for $M(H)$. If $Z(H)=M(H)$, the results may be obviously formulated for $Z(H)$ in place of $M(H)$.
If $M(H)\subsetneqq Z(H)$, then $Z(H)$ still admits a structure of an algebra and a lattice since it is isometric to $Z(A_c(S(H)))=M(A_c(S(H)))$. But the respective algebraic and lattice operations are not necessarily pointwise on $\ext X$ and, moreover, it seems not to be clear whether the assumption $H^\mu=H$ would guarantee some kind of monotone completeness of $Z(H)$.
(4) It is natural to ask whether in assertion $(iii)$ of Theorem~\ref{T:integral representation H} the converse holds as well. I.e., given a probability $\mu$ on $(\ext X,\mathcal A_H)$, is there some $x\in X$ such that $\mu=\mu_{H,x}$? We note that probability measures on $(\ext X,\mathcal A_H)$ represent exactly
`$\sigma$-normal states' on $M(H)$ (cf. \cite[Proposition 4.7]{edwards} for a special case, the general case may be proved in the same way). So, the question is whether the only `$\sigma$-normal states' on $M(H)$ are the evaluation functionals in points of $X$.
In \cite[paragraph before Proposition 4.11]{edwards} it is claimed that this is not clear for $H=(A_c(X))^\mu$. \end{remark}
Another easy consequence of Theorem~\ref{T:integral representation H} is the following description of extreme points in spaces of multipliers.
\begin{cor} \label{c:ext body multiplikatoru} Let $X$ be a compact convex set and $H=H^{\mu}$ be an intermediate function space determined by extreme points. Let $E=M(H)$ or $E=M^s(H)$ and let $B^+$ denote the positive part of the unit ball of $E$. Then $$\ext B^+=\{m\in E;\, m(\ext X) \subset \{0, 1\}\} \mbox{ and } B^+=\overline{\operatorname{conv}\ext B^+}.$$
\end{cor}
\begin{proof} By Proposition \ref{P:algebra ZH} and Theorem \ref{T:integral representation H}, there exist $\sigma$-algebras $\mathcal A_H$ and $\mathcal A^s_H$ on $\ext X$ such that the spaces $M(H)$ and $M^s(H)$ are canonically isometric to subspaces of $\ell^{\infty}(\ext X)$ consisting of $\mathcal A_H$ or $\mathcal A^s_H$-measurable functions, respectively. From this identification the statement easily follows. \end{proof}
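Let us add, for the reader's convenience, a few words on the last step of the preceding proof. Under the above identification, $B^+$ corresponds to the set of all measurable functions on $\ext X$ with values in $[0,1]$ (with respect to $\mathcal A_H$ or $\mathcal A^s_H$, respectively). A function $u$ from this set is an extreme point if and only if it attains only the values $0$ and $1$, i.e., if it is the characteristic function of a measurable set (if $u$ attains a value in $(0,1)$, then for a suitable $k\in\mathbb N$ the set $A=[\frac1k\le u\le 1-\frac1k]$ is nonempty and the functions $u\pm\frac1k 1_A$ witness that $u$ is not extreme). Moreover, for any measurable $u:\ext X\to[0,1]$ and $k\in\mathbb N$ we have $$\norm{u-\frac1k\sum_{j=1}^k 1_{[u\ge j/k]}}_\infty\le\frac1k,$$ and $\frac1k\sum_{j=1}^k 1_{[u\ge j/k]}$ is a convex combination of characteristic functions of measurable sets; this yields $B^+=\overline{\operatorname{conv}\ext B^+}$.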
The assumption $H^\mu=H$ is one of the key ingredients of Theorem~\ref{T:integral representation H} -- it enables us to use Lemma~\ref{L:sigmaalgebra} to get assertions $(i)$ and $(ii)$. However, for $H=A_c(X)$ there is an analogue of assertion $(ii)$ in \cite[Theorem II.7.10]{alfsen}. The quoted theorem says (among other things) that $u\in \ell^\infty(\ext X)$ may be extended to an element of $M(A_c(X))$ if and only if it is continuous with respect to the facial topology. It is natural to ask whether a similar statement is valid for more general intermediate function spaces. We continue by analyzing this question in the abstract setting. The first step is the following abstract lemma.
\begin{lemma}\label{L:jen algebra} Let $\Gamma$ be a set and let $A\subset\ell^\infty(\Gamma)$ be a norm-closed subalgebra containing constant functions.
Set
$$\mathcal A=\{ [f>0];\, f\in A\}.$$ \begin{enumerate}[$(a)$]
\item
Then $\mathcal A$ is a family of subsets of $\Gamma$ containing $\emptyset$ and $\Gamma$, and closed with respect to finite intersections and countable unions.
\item Set
$$B=\{f\in\ell^\infty(\Gamma);\, f^{-1}(U)\in\mathcal A\mbox{ for each }U\subset\mathbb R\mbox{ open}\}.$$
Then $B$ is a norm-closed subalgebra of $\ell^\infty(\Gamma)$ containing $A$.
\item The following assertions are equivalent:
\begin{enumerate}[$(i)$]
\item $A=B$;
\item $\frac fg\in A$ whenever $f,g\in A$, $0\le f\le g$ and $g$ does not attain zero;
\item given $E,F\subset\Gamma$ disjoint such that $\Gamma\setminus E,\Gamma\setminus F\in \mathcal A$, there is $f\in A$ with $0\le f\le 1$ such that $f=0$ on $E$ and $f=1$ on $F$.
\end{enumerate} \end{enumerate} \end{lemma}
\begin{proof}
First observe that by Lemma~\ref{L:lattice} $A$ is necessarily a sublattice of $\ell^\infty(\Gamma)$.
$(a)$: Since $A$ contains constant functions $0$ and $1$, we get $\emptyset\in\mathcal A$ and $\Gamma\in\mathcal A$.
Given $f,g\in A$, we get $f^+\cdot g^+\in A$ and $$[f>0]\cap [g>0]=[f^+>0]\cap [g^+>0]=[f^+\cdot g^+>0]\in\mathcal A,$$ so $\mathcal A$ is closed with respect to finite intersections.
Given a sequence $(f_n)$ in $A$, the functions $g_n=\min\{1,f_n^+\}$ belong to $A$ and hence also $g=\sum_{n=1}^\infty2^{-n}g_n$ belongs to $A$. Since
$$\bigcup_{n\in\mathbb N}[f_n>0]=\bigcup_{n\in\mathbb N}[g_n>0]=[g>0]\in\mathcal A,$$ we conclude that $\mathcal A$ is closed with respect to countable unions.
$(b)$: Using $(a)$ we deduce that $$B=\{f\in\ell^\infty(\Gamma);\, [f>c]\in\mathcal A\mbox{ and }[f<c]\in\mathcal A\mbox{ for each }c\in\mathbb R\}.$$ Given $f\in A$ and $c\in\mathbb R$, then $$[f>c]=[f-c>0]\in\mathcal A\quad\mbox{and}\quad[f<c]=[c-f>0]\in\mathcal A,$$ thus $A\subset B$.
If $f,g\in B$ and $c\in\mathbb R$, then $$[f+g>c]=\bigcup\{ [f>p]\cap [g>q];\, p,q\in\mathbb Q, p+q>c\},$$ which belongs to $\mathcal A$ by (a). Similarly we see that $[f+g<c]\in \mathcal A$. Hence $f+g\in B$. Similarly we may show that $B$ is stable under multiplication, so it is an algebra.
To see that $B$ is uniformly closed, fix a sequence $(f_n)$ in $B$ uniformly converging to $f\in\ell^\infty(\Gamma)$. Up to passing to a subsequence we may assume that $\norm{f_n-f}_\infty<\frac1n$. Observe that $$[f>c]=\bigcup_{n\in\mathbb N}[f_n>c+\tfrac1n]\in \mathcal A,$$ and similarly $[f<c]\in\mathcal A$, thus $f\in B$.
$(c)$: Let us prove the individual implications:
$(i)\Rightarrow(ii)$: Let $f,g$ be as in $(ii)$. Then $\frac fg$ is well defined and bounded. Moreover, we easily see that $\frac fg\in B$, thus assuming $(i)$ we deduce $\frac fg\in A$.
$(ii)\Rightarrow(iii)$: Let $E,F$ be as in $(iii)$. Then there are $f,g\in A$ such that $E=[f\le 0]$ and $F=[g\le0]$. Since $A$ is a sublattice, up to replacing $f,g$ by $f^+$ and $g^+$, we may assume that $f,g$ are non-negative. Then $E=[f=0]$ and $F=[g=0]$. Since $E,F$ are disjoint, $f+g$ is a strictly positive function from $A$. By $(ii)$ we get that $\frac{f}{f+g}\in A$ and this function has the required properties.
$(iii)\Rightarrow(i)$: Fix $f\in B$ and $\varepsilon>0$. Since $f$ is bounded, there are $t_1,\dots,t_n\in\mathbb R$ such that $$f(\Gamma)\subset\bigcup_{j=1}^n (t_j-\tfrac\varepsilon2,t_j+\tfrac\ep2).$$ For each $j$ set $$E_j=[t_j-\tfrac\ep2\le f\le t_j+\tfrac\ep2]\quad\mbox{and}\quad F_j=[f\le t_j-\varepsilon]\cup[f\ge t_j+\varepsilon].$$
By $(iii)$ there is $g_j\in A$ such that $0\le g_j\le 1$, $g_j|_{F_j}=0$ and $g_j|_{E_j}=1$. Since
$E_1\cup\dots\cup E_n=\Gamma$, we get $g_1+\dots+g_n\ge 1$. By Lemma~\ref{L:lattice}$(a)$ we deduce that $\frac{1}{g_1+\dots+g_n}\in A$. Set $h_j=\frac{g_j}{g_1+\dots +g_n}$. Then $h_j\in A$, thus
$$g=t_1h_1+\dots+t_nh_n\in A$$
and $\norm{g-f}_\infty\le\varepsilon$; indeed, if $h_j(x)>0$ for some $x\in\Gamma$ and some $j$, then $x\notin F_j$, so $\abs{f(x)-t_j}<\varepsilon$, and hence $\abs{g(x)-f(x)}\le\sum_{j=1}^n h_j(x)\abs{t_j-f(x)}\le\varepsilon$. Since $\varepsilon>0$ is arbitrary and $A$ is closed, we deduce that $f\in A$. \end{proof}
Assertion $(c)$ of the above lemma shows that it is not automatic that a norm-closed subalgebra of $\ell^\infty(\Gamma)$ containing the constants may be described using a kind of measurability. In some cases it is true -- for functions continuous with respect to some topology or for Baire-one functions with respect to a Tychonoff topology. In some cases it fails -- for example for the algebra $c$ of convergent sequences considered as a subalgebra of $\ell^\infty$.
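To indicate the failure in the last-named case, note that for an arbitrary set $E\subset\mathbb N$ the function $f(n)=\frac1n 1_E(n)$ belongs to $c$ and satisfies $[f>0]=E$. Hence for $A=c$ the family $\mathcal A$ from Lemma~\ref{L:jen algebra} consists of all subsets of $\mathbb N$, so the corresponding space $B$ is all of $\ell^\infty$, which is strictly larger than $c$.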
Let us now discuss how Lemma~\ref{L:jen algebra} may be applied to intermediate function spaces. Such an analysis is contained in the following proposition.
\begin{prop} \label{p:system-aha} Let $H$ be an intermediate function space determined by extreme points. Then the following assertions hold. \begin{enumerate}[$(a)$]
\item Set
$$\gls{AH}=\{[m>0]\cap\ext X;\, m\in M(H)\}. \index{system of sets!AH@$\mathcal A_H$}$$
The family $\mathcal A_H$ contains the empty set, the whole set $\ext X$ and is closed with respect to finite intersections and countable unions.
Moreover, $m|_{\ext X}$ is $\mathcal A_H$-measurable for each $m\in M(H)$.
\item The following assertions are equivalent:
\begin{enumerate}[$(i)$]
\item A bounded function on $\ext X$ can be extended to an element of $M(H)$ if and only if it is $\mathcal A_H$-measurable.
\item If $u,v\in M(H)$ satisfy $0\le u\le v$ and $v$ does not attain $0$ on $\ext X$, then there is $m\in M(H)$ such that $m=\frac uv$ on $\ext X$.
\item If $u,v\in M(H)$ are such that $u\ge0$, $v\ge0$ and the sets $E=[u=0]\cap \ext X$ and $F=[v=0]\cap \ext X$ are disjoint, there is $m\in M(H)$ such that $0\le m\le 1$, $m=0$ on $E$ and $m=1$ on $F$.
\end{enumerate}
\item $\mathcal A_H=\{E \subset \ext X;\, \exists m\in (M(H))^\uparrow \colon 0\le m\le 1\ \&\ m|_E=1 \ \&\ m|_{\ext X\setminus E}=0\}.$
\item For each $E\in\mathcal A_H$ there are (not necessarily closed) faces $F_1,F_2$ of $X$ such that
$E=F_1\cap \ext X$ and $\ext X\setminus E= F_2\cap \ext X$.
\item If $H^\mu=H$, then $\mathcal A_H$ coincides with the $\sigma$-algebra from Theorem~\ref{T:integral representation H}. \end{enumerate} The same statements are valid when $M(H)$ is replaced by $M^s(H)$ and $\mathcal A_H$ is replaced by
$$\gls{AHs}=\{[m>0]\cap \ext X;\, m\in M^s(H)\}. \index{system of sets!AHs@$\mathcal A_H^s$}$$
\end{prop}
\begin{proof}
Let $R$ be the restriction mapping from Proposition~\ref{P:algebra ZH}. Then $R(M(H))$ is (by the quoted proposition) a closed subalgebra and sublattice of $\ell^\infty(\ext X)$. It easily follows that $$\mathcal A_H= \{[m>0]\cap \ext X;\, m\in M(H), m\ge0\}.$$
Assertions $(a)$ and $(b)$ thus follow from Lemma~\ref{L:jen algebra}.
$(c)$: Assume $E\in \mathcal A_H$. Using the definitions and the lattice property of $R(M(H))$ we deduce that there is
$f\in M(H)$ such that $0\le f\le 1$ and $E=[f<1]\cap \ext X$.
Given $n\in\mathbb N$, by Proposition~\ref{P:algebra ZH}$(c)$ there is $f_n\in M(H)$ such that $f_n=1-f^n$ on $\ext X$. The sequence $(f_n)$ is bounded and non-decreasing; let $m$ denote its limit. Then $m\in(M(H))^\uparrow$, $0\le m\le1$, $m|_E=1$ and $m|_{\ext X\setminus E}=0$.
This completes the proof of inclusion `$\subset$'.
To prove the converse, assume that $E\subset \ext X$, $m\in (M(H))^\uparrow$, $m|_E=1$ and $m|_{\ext X\setminus E}=0$. Let $(f_n)$ be a sequence
in $M(H)$ with $f_n\nearrow m$. We observe that
$$E=\bigcup_n [f_n>0]\cap\ext X,$$
so $E\in\mathcal A_H$ by $(a)$.
$(d)$: Let $E\in \mathcal A_H$ and let $m\in (M(H))^\uparrow$ be provided by $(c)$. Then clearly $F_1=[m=1]$ and $F_2=[m=0]$ are faces satisfying $E=F_1\cap\ext X$ and $\ext X\setminus E=F_2\cap\ext X$.
$(e)$: If additionally $H^\mu=H$, then $(M(H))^\uparrow=M(H)$ (by Proposition~\ref{p:multi-pro-mu}$(i)$), so the assertion easily follows.
It is simple to check that the proof works also in the case when $M(H)$ is replaced by $M^s(H)$ and $\mathcal A_H$ is replaced by $\mathcal A^s_H$. \end{proof}
In case an intermediate function space $H$ satisfies that $H^\mu$ is determined by $\ext X$, by Theorem~\ref{T:integral representation H} we know that for each $x\in X$ there is a unique measure $\mu_{H^\mu,x}$ defined on $(\ext X,\mathcal A_{H^\mu})$ such that $m(x)=\int m\,\mbox{\rm d}\mu_{H^\mu,x}$ for each $m\in M(H^\mu)$. Since $M(H)\subset (M(H))^\mu\subset M(H^\mu)$ by Proposition~\ref{p:multi-pro-mu}$(i)$, we can use this integral representation for elements from $M(H)$. The next result shows that the measure $\mu_{H^\mu,x}$ is ``regular'' on the $\sigma$-algebra generated by $\mathcal A_H$.
\begin{thm}
\label{t:aha-regularita}
Let $H$ be an intermediate function space on a compact convex set $X$ such that $H^\mu$ is determined by extreme points. For $x\in X$, let $\mu=\mu_{H^\mu,x}$ be the measure on $(\ext X,\mathcal A_{H^\mu})$ given by Theorem~\ref{T:integral representation H}$(iii)$.
Then the following assertions hold.
\begin{enumerate}[$(i)$] \item The $\sigma$-algebra $\sigma(\mathcal A_H)$ in $\ext X$ generated by $\mathcal A_H$ is contained in $\mathcal A_{H^\mu}$. \item For any $A\in \sigma(\mathcal A_H)$ and $\varepsilon>0$ there exist sets $F\in (\mathcal A_H)^c$ and $G\in\mathcal A_H$ such that $F\subset A\subset G$ and $\mu(G\setminus F)<\varepsilon$. \end{enumerate} \end{thm}
\begin{proof} $(i)$: Since $M(H)\subset M(H^\mu)$ by Proposition~\ref{p:multi-pro-mu}$(i)$, we obtain inclusion $\mathcal A_H\subset \mathcal A_{H^\mu}$. This proves inclusion $\sigma(\mathcal A_H)\subset \mathcal A_{H^\mu}$, because $\mathcal A_{H^\mu}$ is a $\sigma$-algebra by Theorem~\ref{T:integral representation H}$(i)$.
$(ii)$: We set \[ \mathcal E=\{A\in \sigma(\mathcal A_H);\, \forall \varepsilon>0\ \exists F\in (\mathcal A_{H})^c, G\in \mathcal A_H\colon F\subset A\subset G \And \mu(G\setminus F)<\varepsilon\}. \] We claim that $\mathcal E$ is a Dynkin class of sets (see \cite[136A Lemma]{fremlin1}) containing $(\mathcal A_H)^c$.
Indeed, $\mathcal E$ contains $\emptyset$ and it is obviously closed with respect to complements (in $\ext X$). Finally, $\mathcal E$ is closed with respect to taking countable unions of disjoint families of sets. To see this, take a disjoint sequence $(A_n)$ in $\mathcal E$ and let $\varepsilon>0$. We denote $A=\bigcup_{n} A_n$. There exists $k\in\mathbb N$ such that $\mu(A)-\varepsilon\le \mu(\bigcup_{n=1}^k A_n)$. We select $F_i\in (\mathcal A_H)^c, G_i\in \mathcal A_H$ such that $F_i\subset A_i\subset G_i$ and $\mu(G_i\setminus F_i)<2^{-i}\varepsilon$, $i\in\mathbb N$. Then $F=F_1\cup \cdots \cup F_k\in (\mathcal A_H)^c$, $G=\bigcup_{i} G_i\in \mathcal A_H$, $F\subset A\subset G$ and \[ \begin{aligned} \mu(G\setminus F)&\le \mu(G\setminus A)+\mu(A\setminus F)\le \sum_{i\in\mathbb N} \mu(G_i\setminus F_i)+\mu(A\setminus \bigcup_{i=1}^k A_i)+\sum_{i=1}^k\mu(A_i\setminus F_i)\\ &\le \varepsilon+\varepsilon+\varepsilon=3\varepsilon. \end{aligned} \]
To verify that $(\mathcal A_H)^c\subset \mathcal E$, let $E\in(\mathcal A_H)^c$ and $\varepsilon>0$ be given. By Proposition~\ref{p:system-aha}$(c)$ (applied to $\ext X\setminus E$) there exists $m\in (M(H))^\downarrow$ such that $m|_E=1$ and $m|_{\ext X\setminus E}=0$. Let $(m_j)$ be a sequence in $M(H)$ such that $m_j\searrow m$. Then the sets $E_j=\ext X\cap [m_j> \frac12]$, $j\in\mathbb N$, belong to $\mathcal A_H$, they form a nonincreasing sequence and $E=\bigcap_{j\in\mathbb N} E_j$. Hence $\mu(E)=\lim_{j\to\infty} \mu(E_j)$. Thus we can select a set $E_j$ such that $\mu(E_j\setminus E)<\varepsilon$. Since $E\in(\mathcal A_H)^c$ and $E_j\in\mathcal A_H$, the pair $F=E$, $G=E_j$ witnesses that $E\in\mathcal E$. Hence $\mathcal E\supset (\mathcal A_H)^c$.
By \cite[136B Theorem]{fremlin1}, $\mathcal E$ contains the $\sigma$-algebra $\sigma(\mathcal A_H)$ generated by $\mathcal A_H$. Hence $\mathcal E=\sigma(\mathcal A_H)$, which concludes the proof. \end{proof}
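Let us explicitly note one consequence of assertion $(ii)$: for each $A\in\sigma(\mathcal A_H)$ we have $\mu(A)=\inf\{\mu(G);\, G\in\mathcal A_H,\ G\supset A\}$, and hence the restriction of $\mu$ to $\sigma(\mathcal A_H)$ is uniquely determined by the values of $\mu$ on the system $\mathcal A_H$.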
Next we provide a sufficient condition ensuring that measurability with respect to systems $\mathcal A_H$ and $\mathcal A^s_H$ characterizes multipliers on $H$.
\begin{thm}
\label{T: meritelnost H=H^uparrow cap H^downarrow}
Let $H$ be an intermediate function space satisfying that $H=H^{\uparrow}\cap H^{\downarrow}$ and such that $H^{\uparrow}$ is determined by extreme points. Let $\mathcal A_H$, $\mathcal A^s_H$ be the families of sets from Proposition \ref{p:system-aha}. Then:
\begin{enumerate}[$(i)$]
\item $M(H)=M(H)^{\uparrow} \cap M(H)^{\downarrow}$ and $M^s(H)=M^s(H)^{\uparrow} \cap M^s(H)^{\downarrow}$.
\item A bounded function on $\ext X$ can be extended to an element of $M(H)$ if and only if it is $\mathcal A_H$-measurable.
\item A bounded function on $\ext X$ can be extended to an element of $M^s(H)$ if and only if it is $\mathcal A^s_H$-measurable.
\end{enumerate} \end{thm}
\begin{proof} $(i)$: Clearly, $M(H) \subset M(H)^{\uparrow} \cap M(H)^{\downarrow}$. To prove the reverse inclusion, we assume that we are given a function $m$ such that there are sequences $(a_n), (b_n)$ from $M(H)$ with $a_n \nearrow m$ and $b_n \searrow m$. Then $m \in H^\uparrow\cap H^\downarrow=H$. To show that $m \in M(H)$, let $h \in H$ be given. We may assume that $h \geq 0$, otherwise we would add a suitable constant to it. Then there exist sequences of functions $(f_n), (g_n)$ from $H$ such that $f_n=a_n \cdot h$ and $g_n=b_n \cdot h$ on $\ext X$ (for $n\in\mathbb N$). Since $H$ is determined by extreme points, the sequence $(f_n)$ is non-decreasing and the sequence $(g_n)$ is non-increasing on $X$. Let $f$ and $g$ denote the pointwise limits of sequences $(f_n)$ and $(g_n)$, respectively. Then $f\in H^\uparrow$, $g\in H^\downarrow$ and $f=g=m \cdot h$ on $\ext X$. Since $f-g\in H^\uparrow$ and $f-g=0$ on $\ext X$, we deduce that $f=g$ on $X$. Thus $f=g \in H^{\uparrow}\cap H^{\downarrow}=H$, which proves that $m \in M(H)$. The proof that $M^s(H)=M^s(H)^{\uparrow} \cap M^s(H)^{\downarrow}$ is basically the same, except that we use the monotone convergence theorem.
For the proof of $(ii)$ we will verify condition $(ii)$ from Proposition \ref{p:system-aha}$(b)$. Assume $u,v\in M(H)$ are such that $0\le u\le v$ and $v$ does not attain $0$ on $\ext X$. Let $R$ be the restriction map from Proposition~\ref{P:algebra ZH}. From Proposition~\ref{P:algebra ZH}$(c)$ and Lemma~\ref{L:lattice}$(a)$ we deduce that $\frac{R(u)}{R(v)+\frac1n},\frac{R(u)+\frac1n}{R(v)+\frac1n}\in R(M(H))$ for each $n\in\mathbb N$. We observe that $$\frac{R(u)}{R(v)+\frac1n}\nearrow \frac{R(u)}{R(v)}\quad\mbox{and}\quad\frac{R(u)+\frac1n}{R(v)+\frac1n}\searrow\frac{R(u)}{R(v)}.$$ Using $(i)$ we now see that $\frac{R(u)}{R(v)}\in R(M(H))$, which completes the argument.
Assertion $(iii)$ can be proven similarly as assertion $(ii)$. The proof is finished. \end{proof}
We continue by describing a rather general situation and several concrete cases where the previous theorem may be applied.
\begin{lemma}\label{L:YupcapYdown=Y}
Let $\Gamma$ be a set and let $\mathcal B$ be a family of subsets of $\Gamma$ containing the empty set and the whole set $\Gamma$ and closed with respect to taking countable unions and finite intersections. Let $Y$ denote the space of all bounded $\mathcal B$-measurable functions on $\Gamma$. Then
$$Y=Y^\uparrow\cap Y^\downarrow=\overline{Y^\uparrow}\cap \overline{Y^\downarrow}.$$ \end{lemma}
\begin{proof}
It is clear that $Y\subset Y^\uparrow\cap Y^\downarrow\subset\overline{Y^\uparrow}\cap \overline{Y^\downarrow}$.
To prove the converse observe that
$[f>c]\in \mathcal B$ whenever $f\in Y^\uparrow$ and $c\in\mathbb R$. Moreover, this property is preserved by uniform limits (see the proof of Lemma~\ref{L:jen algebra}$(b)$). Hence
$[f>c]\in \mathcal B$ whenever $f\in \overline{Y^\uparrow}$ and $c\in\mathbb R$.
Similarly we get that $[f<c]\in \mathcal B$ whenever $f\in \overline{Y^\downarrow}$ and $c\in\mathbb R$.
Therefore, if $f\in \overline{Y^\uparrow}\cap \overline{Y^\downarrow}$ and $c\in\mathbb R$, we deduce $[f>c]\in\mathcal B$ and $[f<c]\in\mathcal B$. Using the properties of $\mathcal B$ we conclude that $f$ is $\mathcal B$-measurable, hence $f\in Y$. \end{proof}
\begin{cor}\label{cor:hup+hdown pro A1aj}
Let $X$ be a compact convex set and $H$ be one of the spaces
$$A_1(X), A_b(X)\cap\Bo_1(X), A_f(X).$$
Then $H=H^\uparrow\cap H^\downarrow=\overline{H^\uparrow}\cap\overline{H^\downarrow}$.
In particular, Theorem~\ref{T: meritelnost H=H^uparrow cap H^downarrow} applies to these spaces. \end{cor}
\begin{proof}
The named spaces are formed by affine functions which are $\operatorname{Zer}_\sigma$-measurable, $(F\wedge G)_\sigma$-measurable or $\mathcal H_\sigma$-measurable, respectively (see Section~\ref{ssc:csp}). Thus the statement follows immediately from Lemma~\ref{L:YupcapYdown=Y}. \end{proof}
\begin{remarks}\label{rem:o meritelnosti} (1) The key results of Theorem~\ref{T:integral representation H} and Theorem~\ref{T: meritelnost H=H^uparrow cap H^downarrow} include a characterization of multipliers (and strong multipliers) by a measurability condition on $\ext X$. If $H^\mu=H$, this is a measurability condition with respect to a $\sigma$-algebra. This $\sigma$-algebra is canonical and unique (because $1_E$ is measurable if and only if $E$ belongs to the $\sigma$-algebra). However, the canonical description from Theorem~\ref{T:integral representation H}$(i)$ is not very descriptive. In the sequel we try to give a better description for some spaces $H$. A characterization for strongly affine Baire functions is given in Theorem~\ref{T:baire-multipliers}$(b)$ below and, for some concrete examples, in Section~\ref{sec:stacey}.
(2) For spaces not closed under monotone limits the situation is more complicated. Firstly, there are some cases when Theorem~\ref{T: meritelnost H=H^uparrow cap H^downarrow} may be applied (see Corollary~\ref{cor:hup+hdown pro A1aj}) and some cases when the characterization fails (see Proposition~\ref{P:dikous-lsc--new}$(f)$). Further, if multipliers are characterized by a measurability condition, the respective family of sets need not be uniquely determined. In such a case the family $\mathcal A_H$ is the smallest one, but perhaps some bigger one works as well (cf.\ Proposition~\ref{P:dikous-spoj-new}, assertions $(b)$ and $(c)$). Therefore, in some cases we provide a more descriptive measurability condition characterizing multipliers but without claiming it is a description of the family $\mathcal A_H$ (see Theorem~\ref{t:a1-lindelof-h-hranice}, Theorem~\ref{t:Bo1-fsigma-hranice} and Theorem~\ref{t:af-fsigma-hranice} below). \end{remarks}
\section{Measurability of strong multipliers in terms of split faces}\label{s:meas sm}
From the previous section we already know that, given an intermediate function space $H$ with certain properties, multipliers and strong multipliers may be characterized by measurability with respect to the system $\mathcal A_H$ or $\mathcal A_H^s$, respectively, on $\ext X$ (see Theorem~\ref{T:integral representation H} and Theorem~\ref{T: meritelnost H=H^uparrow cap H^downarrow}). However, the systems $\mathcal A_H$ and $\mathcal A_H^s$ are described using $M(H)$ and $M^s(H)$, not just using $H$. This implies some limitations of possible applications of these characterizations. In this section we try to partially overcome this issue for strong multipliers. We do not know how to do a similar thing for multipliers themselves. This was, in fact, one of our main motivations to introduce the concept of a strong multiplier. The first step is the following improvement of Proposition~\ref{p:system-aha}$(c)$ for strong multipliers.
\begin{thm}\label{T:meritelnost-strongmulti} Let $H \subset A_{sa}(X)$ be an intermediate function space such that $H^{\uparrow}$ is determined by extreme points. Then \[\mathcal A^s_H=\{F \cap \ext X: F \text{ is a split face such that } \lambda_F \in M^s(H)^{\uparrow}\}.\] \end{thm}
Before we prove the result we need some definitions and a couple of lemmas. We start by the following stronger notions of convexity and extremality.
\begin{definition} A universally measurable set $F\subset X$ is called \emph{measure convex}\index{set!measure convex} provided $r(\mu)\in F$ whenever $\mu\in M_1(X)$ is carried by $F$ (i.e., $\mu(F)=1$). The set $F$ is called \emph{measure extremal}\index{set!measure extremal} provided $\mu(F)=1$ whenever $\mu\in M_1(X)$ and $r(\mu)\in F$. \end{definition}
\begin{remarks} (1) It is clear that any measure convex set is convex and any measure extremal set is extremal.
(2) Any closed, open or resolvable convex set is also measure convex, see \cite[Proposition~2.80]{lmns}. Similarly, a closed, open or resolvable extremal set is also measure extremal, see \cite[Proposition~2.92]{lmns}.
(3) There exist an $F_\sigma$ face $F$ and a $G_\delta$ face $G$ in $X=M_1([0,1])$ which are not measure convex (see \cite[Propositions~2.95 and 2.96]{lmns}). \end{remarks}
We continue by a lemma on a complementary pair of faces.
\begin{lemma} \label{l:complementarni-facy} Let $A, B$ be two disjoint faces of a compact convex set $X$ which are both measure convex and measure extremal, and such that $A \cup B$ carries each maximal measure on $X$. Then the following assertions hold. \begin{enumerate}[$(i)$]
\item $X=\operatorname{conv}(A\cup B)$.
\item $A^{\prime}=B$.
\item If $f, h$ are strongly affine functions such that $f=h \cdot 1_A$ $\mu$-almost everywhere for each maximal measure $\mu \in M_1(X)$, then $f=h \cdot 1_A$ on $A \cup B$. \end{enumerate} \end{lemma}
\begin{proof} $(i)$: Let $x\in X$ be given. We pick a maximal measure $\mu\in M_x(X)$. Then $\mu(A\cup B)=1$ by the assumption.
If $\mu(A)=1$, then $x \in A$ since $A$ is measure convex. Similarly $x\in B$ provided $\mu(B)=1$. Thus we may assume that $\lambda=\mu(A)\in (0,1)$. Then $\mu_A=\lambda^{-1}\mu|_A$ and $\mu_B=(1-\lambda)^{-1}\mu|_B$ are probability measures on $X$. By the measure convexity we see that $x_A=r(\mu_A)\in A$ and $x_B=r(\mu_B)\in B$. Since the barycentric mapping is affine, we have \[ x=r(\mu)=r(\lambda\mu_A+(1-\lambda)\mu_B)=\lambda x_A+(1-\lambda)x_B\in\operatorname{conv}(A\cup B). \]
$(ii)$: Let $C\subset X$ be a face disjoint from $A$ and $x\in C$ be given. By $(i)$ we can write $x=\lambda x_A+(1-\lambda) x_B$ for some $\lambda\in[0,1]$ and $x_A\in A$, $x_B\in B$. If $\lambda=1$, then $x=x_A \in A$, which is impossible as $C\cap A=\emptyset$. If $\lambda \in (0,1)$, then $x_A, x_B\in C$ because $C$ is a face. Again we have a contradiction as $x_A\in C\cap A$. Hence the only possibility is $\lambda=0$, which means $x=x_B\in B$. Thus $A'\subset B$. Since obviously $B\subset A'$, the proof of $(ii)$ is finished.
$(iii)$: Let $f,h$ be as in the assumptions. Take an arbitrary $x \in A$. Fix a maximal measure $\mu \in M_1(X)$ with $x=r(\mu)$. Since $A$ is measure extremal, we get $\mu(A)=1$, and it follows that $$h(x)1_A(x)=h(x)=\int_{A} h(y) \,\mbox{\rm d}\mu(y)=\int_A f(y)\,\mbox{\rm d}\mu(y)=f(x),$$ hence $f(x)=h(x)1_A(x)$. The case $x \in B$ is treated similarly. The proof is finished. \end{proof}
The next lemma shows that monotone limits of multipliers share some properties of multipliers.
\begin{lemma}\label{L:mult-uparrow}
Let $H$ be an intermediate function space such that $H^\uparrow$ is determined by extreme points. Let $m\in (M(H))^\uparrow$ and $a\in H$ be a non-negative function. Then the following assertions hold.
\begin{enumerate}[$(a)$]
\item There is a unique $b\in H^\uparrow$ such that $b=ma$ on $\ext X$.
\item If $m\in (M^s(H))^\uparrow$, then $b=ma$ $\mu$-almost everywhere for every maximal $\mu\in M_1(X)$.
\end{enumerate} \end{lemma}
\begin{proof} Fix a sequence $(m_n)$ in $M(H)$ with $m_n\nearrow m$. Let $b_n\in H$ be such that $b_n=m_n a$ on $\ext X$. Then $b_n\nearrow ma$ on $\ext X$. Since $H$ is determined by extreme points, the sequence $(b_n)$ is non-decreasing on $X$, hence $b_n\nearrow b$ for some $b\in H^\uparrow$. Clearly $b=ma$ on $\ext X$. Since $H^\uparrow$ is determined by extreme points, such $b$ is unique. This completes the proof of $(a)$.
If $m\in (M^s(H))^\uparrow$, we may assume that $m_n\in M^s(H)$. Let $\mu$ be any maximal measure on $X$. Then $b_n=m_n a$ $\mu$-almost everywhere for each $n\in\mathbb N$, hence $b=ma$ $\mu$-almost everywhere. This completes the proof of $(b)$. \end{proof}
\begin{lemma} \label{l:multi-face} Let $H$ be an intermediate function space such that $H^\uparrow$ is determined by extreme points. Let $m \in (M^s(H))^\uparrow$ be such that $m(\ext X) \subset \{0, 1\}$. Then the following assertions hold: \begin{enumerate}[$(a)$]
\item The set $[m=0]\cup[m=1]$ carries all maximal measures.
\item If $H\subset A_{sa}(X)$, then $F=[m=1]$ is a split face with $\lambda_F=m$. Moreover, $F^{\prime}=[m=0]$ and both $F, F^{\prime}$ are measure convex and measure extremal. \end{enumerate} \end{lemma}
\begin{proof} $(a)$: Let $(m_n)$ be a sequence in $M^s(H)$ such that $m_n\nearrow m$. By Proposition~\ref{P:algebra ZH}$(d)$ there is a sequence $(f_n)$ in $M^s(H)$ such that
\begin{itemize}
\item $f_1=\max\{0,m_1\}$ on $\ext X$;
\item $f_{n+1}=\max\{m_{n+1},f_n\}$ on $\ext X$ for $n\in\mathbb N$.
\end{itemize}
Then $0\le f_n\le 1$, the sequence $(f_n)$ is non-decreasing and $f_n\nearrow m$ on $\ext X$. Since $H^\uparrow$ is determined by extreme points, we deduce that $f_n\nearrow m$ on $X$.
Let $g_n\in H$ be such that $g_n=f_n^2$ on $\ext X$. Since $0\le f_n\le f_{n+1}$ on $\ext X$ and $H$ is determined by extreme points, $(g_n)$ is a non-decreasing sequence in $H$; let $g$ denote its limit.
Let $\mu\in M_1(X)$ be a maximal measure. Since $f_n\in M^s(H)$, we get
$$\forall n\in\mathbb N\colon g_n=f_n^2\quad\mu\mbox{-almost everywhere}.$$
Hence there is a set $N\subset X$ with $\mu(N)=0$ such that
$$\forall x\in X\setminus N\,\forall n\in\mathbb N\colon g_n(x)=(f_n(x))^2.$$
Passing to the limit we get
$$\forall x\in X\setminus N\colon g(x)=(m(x))^2,$$
hence $g=m^2$ $\mu$-almost everywhere.
Applying to $\mu=\varepsilon_x$ for $x\in\ext X$, we deduce $g=m^2$ on $\ext X$. Since $m^2=m$ on $\ext X$ and $H^\uparrow$ is determined by extreme points, we deduce that $g=m$.
Hence, for any maximal measure $\mu$ we have $m=m^2$ $\mu$-almost everywhere, i.e., $m(x)\in\{0,1\}$ for $\mu$-almost all $x\in X$ which completes the proof.
$(b)$: By $(a)$ we know that $[m=0]\cup[m=1]$ carries all maximal measures. Since $m$ is a strongly affine function and $m(X) \subset [0, 1]$, it follows using elementary methods that both sets $F=[m=1]$ and $[m=0]$ are measure convex and measure extremal faces. Therefore, by Lemma~\ref{l:complementarni-facy} we get $F'=[m=0]$ and $X=\operatorname{conv}(F\cup F')$.
Further, let $x \in X\setminus (F\cup F')$ be such that \[ x=\lambda_1x_1+(1-\lambda_1)y_1=\lambda_2 x_2+(1-\lambda_2)y_2 \] for some $x_1,x_2\in F$, $y_1,y_2\in F'$ and $\lambda_1,\lambda_2\in [0,1]$. An application of $m$ yields $\lambda_1=\lambda_2=m(x)\in(0,1)$. This already shows that $F$ is a parallel face and $\lambda_F=m$. Let $\lambda=m(x)$ stand for the common value.
If $x_1\neq x_2$, let $h\in A_c(X)$ be a positive function satisfying $h(x_1)<h(x_2)$. Since $h \in H$ and $m \in (M^s(H))^\uparrow$, by Lemma~\ref{L:mult-uparrow} there exists a function $a \in H^\uparrow$ such that for each maximal measure $\mu \in M_1(X)$, $a=h \cdot m$ $\mu$-almost everywhere. Thus $a=h \cdot m$ on $F \cup F^{\prime}$ by Lemma~\ref{l:complementarni-facy}$(iii)$. Consequently, \[ \lambda h(x_1)=\lambda a(x_1)=\lambda a(x_1)+(1-\lambda) a(y_1)=a(x)=\lambda a(x_2)+(1-\lambda) a(y_2)=\lambda h(x_2). \] Hence $h(x_1)=h(x_2)$, a contradiction completing the proof. \end{proof}
\begin{proof}[Proof of Theorem~\ref{T:meritelnost-strongmulti}]
Let $F$ be a split face with $\lambda_F\in (M^s(H))^\uparrow$. Then $\lambda_F|_F=1$ and $\lambda_F|_{F'}=0$. Since $F\cup F'\supset\ext X$, Proposition~\ref{p:system-aha}$(c)$ (applied to strong multipliers) shows that $F\cap\ext X\in\mathcal A_H^s$.
Conversely, assume $E\in\mathcal A_H^s$. By Proposition~\ref{p:system-aha}$(c)$ (applied to strong multipliers) we find $m\in (M^s(H))^\uparrow$ such that $m|_E=1$ and $m|_{\ext X\setminus E}=0$. By Lemma~\ref{l:multi-face} we deduce that $F=[m=1]$ is a split face and $m=\lambda_F$. This completes the proof. \end{proof}
\begin{remark} The proof of Theorem~\ref{T:meritelnost-strongmulti} illustrates
the crucial difference between strong multipliers and ordinary multipliers. To be more precise, an~analogue of Lemma~\ref{l:multi-face} for multipliers does not hold. It may happen that $H$ is determined by extreme points, $H\subset A_{sa}(X)$ and $H=H^\mu$, and yet there is $m\in M(H)$ such that $m(\ext X)\subset\{0,1\}$ while $[m=1]$ is not a split face. An example illustrating this is described in Example~\ref{ex:dikous-mezi-new}. \end{remark}
We further note that an important role is played by the condition that $F\cup F'$ carries all maximal measures. We do not know whether this condition is automatically satisfied for nice split faces.
\begin{ques}
Let $X$ be a compact convex set and let $F\subset X$ be a split face such that both $F$ and $F'$ are measure convex and measure extremal. Does the set $F\cup F'$ carry all maximal measures? \end{ques}
The answer is positive if $X$ is a simplex:
\begin{obs}
Let $X$ be a simplex and let $A,B\subset X$ be two convex measure extremal sets such that $X=\operatorname{conv}(A\cup B)$. Then $A\cup B$ carries all maximal measures. \end{obs}
\begin{proof}
Let $\mu\in M_1(X)$ be maximal. Let $x=r(\mu)$ be its barycenter.
Then $x=\lambda a+(1-\lambda)b$ for some $\lambda\in [0,1]$, $a\in A$ and $b\in B$. Let $\mu_a$ and $\mu_b$ be maximal measures representing $a$ and $b$, respectively. By measure extremality we deduce that $\mu_a$ is carried by $A$ and $\mu_b$ is carried by $B$. Thus $\lambda\mu_a+(1-\lambda)\mu_b$ is a maximal measure carried by $A\cup B$ with barycenter $x$. By simpliciality this measure coincides with $\mu$, thus $\mu$ is carried by $A\cup B$. \end{proof}
Theorem~\ref{T:meritelnost-strongmulti} provides a better characterization of the system $\mathcal A_H^s$ than Proposition~\ref{p:system-aha}. However, it still uses $M^s(H)$ and not just $H$. But it can be used, at least for some spaces $H$, to provide a characterization of strong multipliers just in terms of $H$. Recall from Theorem~\ref{T: meritelnost H=H^uparrow cap H^downarrow} that, under some assumptions on $H$, a bounded function on $\ext X$ can be extended to an element of $M^s(H)$ if and only if it is $\mathcal A^s_H$-measurable. It is easy to see that $\mathcal A^s_H$ is the smallest system with this property. If moreover $H=H^{\mu}$, then $\mathcal A^s_H$ is a $\sigma$-algebra and hence it is the unique $\sigma$-algebra with this property. However, when $H$ is not equal to $H^{\mu}$, there might be systems larger than $\mathcal A^s_H$ which characterize strong multipliers. This motivates the following notion.
\begin{definition} For a system $\mathcal A$ of subsets of $\ext X$ and an intermediate function space $H$, we say that $M^s(H)$ \emph{is determined by} $\mathcal A$\index{system of sets!determining strong multipliers} if a bounded function on $\ext X$ can be extended to an element of $M^s(H)$ if and only if it is $\mathcal A$-measurable. \end{definition}
Apart from the above-defined system $\mathcal A^s_H$, there are other canonical systems of subsets of extreme points that we may consider.
\begin{definition} \label{d:es-ha} For an intermediate function space $H$, let \[\begin{aligned} \gls{SH}&=\{F\cap\ext X;\, F\mbox{ is split face with }\lambda_F\in H^\uparrow\}, \\ \gls{ZH}&=\{[f=1]\cap\ext X;\, f\in H^\uparrow, f(\ext X)\subset\{0,1\}\}.\end{aligned} \]\index{system of sets!SH@$\mathcal S_H$}\index{system of sets!ZH@$\mathcal Z_H$} \end{definition}
If $H \subset A_{sa}(X)$ is an intermediate function space such that $H^{\uparrow}$ is determined by extreme points, by Theorem~\ref{T:meritelnost-strongmulti} we know that $\mathcal A^s_H \subset \mathcal S_H$. It appears that in some cases the equality holds (see Theorem~\ref{T:baire-multipliers}, Proposition~\ref{P:As pro simplex}, Theorem~\ref{t:metriz-sa-splitfacy} and Proposition~\ref{P:shrnutidikousu} below) or at least $M^s(H)$ is determined by $\mathcal S_H$ (see Theorem~\ref{t:a1-lindelof-h-hranice}, Theorem~\ref{t:Bo1-fsigma-hranice} and Theorem~\ref{t:af-fsigma-hranice} below). We note that unlike the systems $\mathcal A^s_H$, the systems $\mathcal S_H$ clearly reflect the inclusions between intermediate function spaces. This is applied in the following proposition.
\begin{prop} \label{P: silnemulti-inkluze} Let $H_1, H_2$ be intermediate function spaces on a compact convex set $X$ such that $H_1\subset H_2\subset A_{sa}(X)$. Assume that $H_2^\uparrow$ is determined by extreme points and $M^s(H_2)$ is determined by $\mathcal S_{H_2}$. Then $M^s(H_1) \subset M^s(H_2)$. \end{prop}
\begin{proof}
Let $m \in M^s(H_1)$. Then $m|_{\ext X}$ is $\mathcal A^s_{H_1}$-measurable by Proposition \ref{p:system-aha}. By Theorem~\ref{T:meritelnost-strongmulti} we get $\mathcal A^s_{H_1} \subset \mathcal S_{H_1}$ and thus $m|_{\ext X}$ is $\mathcal S_{H_1}$-measurable. Since clearly $\mathcal S_{H_1} \subset \mathcal S_{H_2}$, $m|_{\ext X}$ is $\mathcal S_{H_2}$-measurable. By the assumption on $H_2$ there exists $f \in M^s(H_2)$ such that $f|_{\ext X}=m|_{\ext X}$. Since $H_2$ is determined by extreme points, $m=f \in M^s(H_2)$. \end{proof}
Next observe that $\mathcal S_H \subset \mathcal Z_{H}$ for any intermediate function space $H$ on $X$. The inclusion may be proper in case $X$ is not a simplex (for example if $X$ is a square in the plane; this is sketched after the proof of Lemma~\ref{L:SH=ZH} below). However, there are important cases when the equality holds:
\begin{lemma}\label{L:SH=ZH}
Let $X$ be a simplex and let $H\subset A_{sa}(X)$ be an intermediate function space. Assume moreover that at least one of the following conditions is satisfied:
\begin{itemize}
\item $H\subset \Ba(X)$;
\item $X$ is a standard compact convex set.
\end{itemize}
Then $\mathcal S_H=\mathcal Z_H$. \end{lemma}
\begin{proof}
Let $E\in\mathcal Z_H$. Fix $f\in H^\uparrow$ such that $f(\ext X)\subset \{0,1\}$ and $E=[f=1]\cap \ext X$. Assuming one of the conditions, we deduce that the set $[f=1]\cup[f=0]$ carries all maximal measures. Set $F=[f=1]$.
By Lemma~\ref{l:complementarni-facy} we deduce that $F'=[f=0]$ and $X=\operatorname{conv}(F\cup F')$.
Next we proceed similarly as in the proof of Lemma~\ref{l:multi-face}. Let $x \in X\setminus (F\cup F')$ be such that \[ x=\lambda_1x_1+(1-\lambda_1)y_1=\lambda_2 x_2+(1-\lambda_2)y_2 \] for some $x_1,x_2\in F$, $y_1,y_2\in F'$ and $\lambda_1,\lambda_2\in [0,1]$. Let $\mu_1,\mu_2,\nu_1,\nu_2\in M_1(X)$ be maximal measures representing $x_1,x_2,y_1,y_2$, respectively. Since $F$ and $F'$ are clearly measure extremal, $\mu_1,\mu_2$ are carried by $F$ and $\nu_1,\nu_2$ are carried by $F'$. Then $$\lambda_1\mu_1+(1-\lambda_1)\nu_1,\lambda_2\mu_2+(1-\lambda_2)\nu_2$$ are two maximal measures representing $x$, hence by simpliciality they are equal. Since $F$ and $F'$ are two disjoint universally measurable sets, we deduce that $$\lambda_1\mu_1=\lambda_2\mu_2 \quad\mbox{and}\quad (1-\lambda_1)\nu_1=(1-\lambda_2)\nu_2.$$ Thus $\lambda_1=\lambda_2\in (0,1)$, $\mu_1=\mu_2$ and $\nu_1=\nu_2$, and consequently $x_1=x_2$ and $y_1=y_2$. It follows that $F$ is a split face and $\lambda_F=f\in H^\uparrow$. Thus $E\in\mathcal S_H$. \end{proof}
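To illustrate that without simpliciality the inclusion $\mathcal S_H\subset\mathcal Z_H$ may indeed be proper, let us sketch the example of the square mentioned above. Let $X=[0,1]^2$ and $H=A_c(X)$. The function $f(s,t)=s$ belongs to $A_c(X)\subset H^\uparrow$ and satisfies $f(\ext X)\subset\{0,1\}$, so $E=[f=1]\cap\ext X=\{(1,0),(1,1)\}\in\mathcal Z_H$. On the other hand, the only face of $X$ whose intersection with $\ext X$ equals $E$ is the edge $F=\{1\}\times[0,1]$, and $F$ is not a split face -- its complementary set is the opposite edge and, for instance, the point $(\frac12,\frac12)$ admits infinitely many representations $\frac12(1,t)+\frac12(0,1-t)$, $t\in[0,1]$. Hence $E\notin\mathcal S_H$.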
We observe that the assumption that $X$ is a standard compact convex set is important as Proposition~\ref{P:shrnutidikousu}$(c)$ below shows.
We proceed to the final result of this section which provides a characterization of (strong) multipliers for a special type of intermediate function spaces defined via a topological property on $\ext X$. This will be applied later in Theorem~\ref{t:a1-lindelof-h-hranice} and in Section~\ref{ssce:fsigma-hranice}.
To formulate it we need to introduce some natural notation concerning split faces. We start by an easy lemma.
\begin{lemma} \label{l:extense-splitface} Let $X$ be a compact convex set and let $F\subset X$ be a split face with the complementary face $F'$. Then for each affine function $a$ on $F$ and $a'$ on $F'$ there exists a unique affine function $b:X\to\mathbb R$ satisfying $b=a$ on $F$ and $b=a'$ on $F'$.
Moreover, $b$ is bounded whenever $a$ and $a'$ are bounded. \end{lemma}
\begin{proof} The uniqueness is obvious as $X=\operatorname{conv}(F\cup F')$. To prove the existence, given $x\in X$ we set \[ b(x)=\lambda_F(x) a(y)+(1-\lambda_F(x)) a'(y'), \] where $y\in F$ and $y'\in F'$ satisfy $x=\lambda_F(x)y+(1-\lambda_F(x))y'$. By a routine verification, the function $b$ is affine. Further, if $a$ and $a'$ are bounded, $b$ is obviously bounded as well. \end{proof}
The case $a'=0$ in the previous lemma is especially important. It inspires the following definition.
\begin{definition}\label{d:teckoacko} If $F$ is a split face in $X$ and $a$ is an affine function defined at least on $F$, we denote by $\gls{Upsilon} a$\index{operator Upsilon@operator $\Upsilon$} the unique affine function $b$ satisfying $b=a$ on $F$ and $b=0$ on $F'$. We write $\Upsilon_F$ instead of $\Upsilon$ in case we need to stress it is related to $F$. \end{definition}
Next we collect basic properties of the extension operator. Their proofs are completely straightforward.
\begin{obs}\label{obs:operator rozsireni}
Let $X$ be a compact convex set and let $F\subset X$ be a split face.
\begin{enumerate}[$(i)$]
\item $\Upsilon$ is a positive linear operator from the linear space of affine functions on $F$ to the linear space of affine functions on $X$.
\item If $(a_n)$ is a sequence of affine functions on $F$ pointwise converging to an affine function $a$, then $\Upsilon a_n\to\Upsilon a$ pointwise on $X$.
\item $\Upsilon$ maps $A_b(F)$, the space of bounded affine functions on $F$, isometrically into $A_b(X)$.
\item $\Upsilon 1_F=\lambda_F$.
\end{enumerate} \end{obs}
The operator $\Upsilon$ is closely related to multipliers. More precisely, if $f\in A_b(X)$ is arbitrary, then $\Upsilon (f|_F)\in A_b(X)$ and $\Upsilon (f|_F)=\lambda_F\cdot f$ on $\ext X$; a short verification is given below. This will be used in the following result. In the next section we will investigate in more detail when this feature produces a real multiplier in our sense.
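To verify the identity, observe first that $\ext X\subset F\cup F'$. If $x\in F\cap\ext X$, then $\lambda_F(x)=1$ and $\Upsilon(f|_F)(x)=f(x)$; if $x\in\ext X\setminus F$, then $\{x\}$ is a face disjoint from $F$, so $x\in F'$, and hence $\lambda_F(x)=0$ and $\Upsilon(f|_F)(x)=0$. In both cases $\Upsilon(f|_F)(x)=\lambda_F(x)f(x)$. The membership $\Upsilon(f|_F)\in A_b(X)$ follows from Observation~\ref{obs:operator rozsireni}$(iii)$.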
\begin{prop} \label{P:meritelnost multiplikatoru pomoci topologickych split facu} Let $X$ be a standard compact convex set and let $T\subset\ell^\infty(\ext X)$ be a linear subspace with the following properties: \begin{enumerate}[$(i)$]
\item $af \in T^{\uparrow}$ for each $a \in T$ positive and $f \in T^{\uparrow}$;
\item $\overline{T^{\downarrow}} \cap \overline{T^{\uparrow}}=T$;
\item $f|_{\ext X}\in T$ for each $f\in A_c(X)$. \end{enumerate} Set
$$H=\{f\in A_{sa}(X);\, f|_{\ext X}\in T\}.$$ Then $H$ is an intermediate function space and $M^s(H)=M(H)$ is determined by the system $$\begin{aligned} \mathcal B^s_{H}= \{ F\cap\ext X;\,& F \text{ is a split face such that } 1_{F \cap \ext X} \in T^{\uparrow} \\& \text{and }
\Upsilon_F(a|_F) \in A_{sa}(X)\text{ for every }a\in H \}. \end{aligned}$$
\end{prop}
\begin{proof} It is clear that $H$ is a linear subspace of $A_{sa}(X)$ containing $A_c(X)$. Moreover, by the assumptions we know that $A_{sa}(X)$ is determined by extreme points, so it follows from condition $(ii)$ that $H$ is closed. It is thus an intermediate function space. By Proposition~\ref{P:rovnostmulti} we get $M(H)=M^s(H)$.
By Proposition~\ref{p:system-aha}$(a)$ any $m\in M^s(H)$ is $\mathcal A_H^s$-measurable. To prove it is also $\mathcal B_H^s$-measurable it is enough to observe that $\mathcal A_H^s\subset\mathcal B_H^s$. So, assume $E\in \mathcal A_H^s$. By Theorem~\ref{T:meritelnost-strongmulti} we find a split face $F\subset X$ such that $\lambda_F\in (M^s(H))^\uparrow$ and $E=F\cap\ext X$. Since $\lambda_F|_{\ext X}=1_{F\cap\ext X}$, this function belongs to $T^\uparrow$. Finally, let $a\in H$. There is $c\in\mathbb R$ such that $b=a+c\cdot1_X\ge 0$. By Lemma~\ref{L:mult-uparrow} there is $f\in H^\uparrow$ such that $f=\lambda_F\cdot b$ on $\ext X$. Then $g=f-c\cdot 1_X\in A_{sa}(X)$ and $g=\lambda_F\cdot a$ on $\ext X$. Since $X$ is standard, the equality holds $\mu$-almost everywhere for each maximal $\mu\in M_1(X)$. Further, since $\lambda_F$ is strongly affine, $F$ and $F'$ are measure convex and measure extremal. By Lemma~\ref{l:multi-face}$(a)$ we deduce that $F\cup F'$ carries all maximal measures. By Lemma~\ref{l:complementarni-facy}$(iii)$ we conclude that $g=\lambda_F\cdot a$ on $F\cup F'$, i.e., $g=\Upsilon(a|_F)$.
The converse implication will be proved in several steps:
{\tt Step 1:} Let $f\in\ell^\infty(\ext X)$ be $\mathcal B^s_H$-measurable and $a\in H$, $a\ge 0$. Then there is $h\in A_{sa}(X)$ such that \begin{itemize}
\item $h=af$ on $\ext X$;
\item $h|_{\ext X}\in\overline{T^\uparrow}$. \end{itemize}
Let us first observe that we may assume without loss of generality that $f\ge0$. Indeed, if $f$ is a general bounded function, then there is some $\lambda>0$ such that $f_1=f+\lambda \cdot 1_{\ext X}\ge 0$. If $h_1$ with the required properties corresponds to $f_1$, the function $h=h_1-\lambda a$ corresponds to $f$. (Note that $a|_{\ext X}\in T$.)
So, assume $f\ge0$; rescaling, we may moreover assume that $0 \leq f \leq 1$. For each $n\in \mathbb N$ we set \[ C_{n,i}= [f> \tfrac{i-1}{2^n}],\quad i\in\{1,\dots, 2^n\}. \] By the assumption we find split faces $F_{n,i}$ from the system which defines $\mathcal B^s_{H}$ with $C_{n,i}=F_{n,i} \cap \ext X$. Set \[
a_n=2^{-n}\sum_{i=1}^{2^n} \Upsilon_{F_{n,i}}(a|_{F_{n,i}}). \]
Then $a_n \in A_{sa}(X)$. Further, since $a|_{\ext X} \in T$ and $1_{F_{n,i} \cap \ext X} \in T^{\uparrow}$, we get $a_n|_{\ext X} \in T^{\uparrow}$ (by condition $(i)$). Moreover, it is simple to check that \[ a(x) \cdot f(x)\le a_n(x)\le a(x) \cdot (f(x)+2^{-n})\mbox{ for }x \in \ext X \]
(cf.\ the proof of \cite[Theorem II.7.2]{alfsen}).
Hence $(a_n)$ is a uniformly convergent sequence on $X$ (as $A_{sa}(X)$ is determined by extreme points). Denote the limit by $h$. Then clearly $h \in A_{sa}(X)$, $h=af$ on $\ext X$ and $h|_{\ext X} \in \overline{T^{\uparrow}}$.
{\tt Step 2:} Let $f\in\ell^\infty(\ext X)$ be $\mathcal B^s_H$-measurable and $a\in H$. Then there is $h\in A_{sa}(X)$ such that \begin{itemize}
\item $h=af$ on $\ext X$;
\item $h|_{\ext X}\in\overline{T^\uparrow}$. \end{itemize}
Find $\lambda\ge0$ such that $b=a+\lambda\cdot 1_X\ge 0$. Observe that $-f$ is also $\mathcal B^s_H$-measurable. We apply Step 1 to the pairs $f,b$ and $-f,\lambda\cdot 1_X$
and obtain strongly affine functions $h_1,h_2$ such that $h_1=f b$ and $h_2=-\lambda f$ on $\ext X$, and $h_i|_{\ext X}\in \overline{T^{\uparrow}}$ (for $i=1,2$). Then $h=h_1+h_2$ is strongly affine, $h|_{\ext X} \in \overline{T^{\uparrow}}$ and $h=fb-\lambda f=fa$ on $\ext X$.
{\tt Step 3:} Let $f\in\ell^\infty(\ext X)$ be $\mathcal B^s_H$-measurable and $a\in H$. Then there is $h\in H$ such that $h=af$ on $\ext X$.
Recall that $-f$ is also $\mathcal B^s_H$-measurable. We apply Step 2 to the pairs $f,a$ and $-f,a$. We get strongly affine functions $h_1,h_2$ such that $h_1=fa$ and $h_2=-fa$ on $\ext X$ and $h_i|_{\ext X}\in \overline{T^{\uparrow}}$ (for $i=1,2$). Then $h_1=-h_2$ on $\ext X$, so $h_1=-h_2$ (as $A_{sa}(X)$ is determined by extreme points). Hence $h_1|_{\ext X}=-h_2|_{\ext X}\in\overline{T^{\downarrow}}$, so $h_1|_{\ext X}\in \overline{T^{\downarrow}} \cap \overline{T^{\uparrow}}=T$ (by condition $(ii)$). Thus $h=h_1\in H$ is the required function.
{\tt Step 4:} Let $f\in\ell^\infty(\ext X)$ be $\mathcal B^s_H$-measurable. Apply Step 3 to the pair $f,1_X$. The resulting function $h\in H$ satisfies $h=f$ on $\ext X$, and Step 3 (applied to an arbitrary $a\in H$) shows that $h$ is a multiplier. Since $M(H)=M^s(H)$, this finishes the proof. \end{proof}
\section{Extending affine functions from split faces} \label{sec:splifaces}
In this section we collect results on split faces needed in the sequel with focus on extending affine mappings. More precisely, we investigate in more detail properties of the extension operator $\Upsilon$ from Definition~\ref{d:teckoacko}.
Let us briefly recall some notation from Section~\ref{ssc:ccs}. Let $X$ be a compact convex set, $F\subset X$ a split face and $F'$ the complementary face. By $\lambda_F$ we denote the unique function $\lambda_F\colon X\to [0,1]$ such that for each $x\in X$ there are $y\in F$ and $y'\in F'$ such that \begin{equation}\label{eq:split}
x=\lambda_F(x)y+(1-\lambda_F(x))y'.\end{equation} We recall that $\lambda_F$ is affine, $F=[\lambda_F=1]$, $F'=[\lambda_F=0]$ and, moreover, if $x\in X\setminus(F\cup F')$, the points $y$ and $y'$ are uniquely determined.
\subsection{More on measure convex split faces}
In this section we study in more detail measure convex split faces with measure convex complementary face. We start by a lemma providing an automatic partial strong affinity.
\begin{lemma}\label{L:mc-partial-sa}
Let $X$ be a compact convex set and let $A\subset X$ be a measure convex split face such that $A'$ is also measure convex.
Let $a$ be a strongly affine function on $A$. Then $\Upsilon a$ is a bounded affine function on $X$ which satisfies the barycentric formula for any probability measure on $X$ carried by $A\cup A'$. \end{lemma}
\begin{proof} The strong affinity of $a$ on $A$ is defined in the natural sense (it is reasonable as $A$ is measure convex). Further, a strongly affine function on a measure convex set is bounded as the proof of \cite[Satz 2.1.(c)]{krause} shows. Therefore $\Upsilon a$ is a bounded affine function on $X$ by Lemma~\ref{l:extense-splitface}.
Let $\mu\in M_1(X)$ be carried by $A\cup A'$. Then $\mu=t\mu_1+(1-t)\mu_2$ for some $t\in [0,1]$ and probability measures $\mu_1,\mu_2$ carried by $A,A'$, respectively. By measure convexity we get $r(\mu_1)\in A$ and $r(\mu_2)\in A'$. Thus $$ \int \Upsilon(a)\,\mbox{\rm d}\mu=\int a\,\mbox{\rm d} (t\mu_1)=ta(r(\mu_1))$$ as $a$ is strongly affine on $A$. Further, $$r(\mu)=tr(\mu_1)+(1-t)r(\mu_2).$$ As $A$ is a split face, $t=\lambda_A(r(\mu))$ and $r(\mu_1)=y$ (in the notation from \eqref{eq:split}). Hence, $$\Upsilon(a)(r(\mu))=ta(r(\mu_1)).$$ We conclude by comparing the formulas. \end{proof}
We continue by a result which characterizes split faces among measure convex faces with measure convex complementary set. It is a generalization of \cite[Theorem II.6.12]{alfsen} where the case of closed faces is addressed. It also has a similar flavour to condition $(ii')$ from Remark~\ref{rem:nutne}$(2)$.
We recall that a signed measure $\mu\in M(X)$ is called \emph{boundary}\index{measure!boundary} provided its variation $\abs{\mu}$ is maximal on $X$.
\begin{thm}
\label{t:split-mc-me-char}
Let $X$ be a compact convex set and let $A\subset X$ be a measure convex face such that the complementary set $A'$ is also measure convex. Assume that $A\cup A'$ carries every maximal measure. Then $A$ is a split face if and only if $\mu|_A\in (A_c(X))^\perp$ for each boundary measure $\mu\in (A_c(X))^\perp$. \end{thm}
\begin{proof} $\implies$: Let $A$ be a split face and $\mu\in (A_c(X))^\perp$ be a boundary measure. Then $\mu(X)=0$, hence $\mu^+(X)=\mu^-(X)$. Hence, we may assume without loss of generality that $\mu^+$ and $\mu^-$ are maximal probability measures. The assumption $\mu\in (A_c(X))^\perp$ then means that $\mu^+$ and $\mu^-$ have the same barycenter, denote it by $x$. Further, $\mu^+$ and $\mu^-$ are carried by $A\cup A'$. It follows that $$\mu^+=a_1\nu_1+a_2\nu_2\quad\mbox{and}\quad\mu^-=a_3\nu_3+a_4\nu_4,$$ where $a_1,a_2,a_3,a_4\ge 0$, $a_1+a_2=a_3+a_4=1$, and $\nu_1,\nu_3\in M_1(A)$, $\nu_2,\nu_4\in M_1(A')$. Then $$a_1r(\nu_1)+a_2r(\nu_2)=r(\mu^+)=r(\mu^-)=a_3 r(\nu_3)+a_4r(\nu_4).$$
By measure convexity we deduce that $r(\nu_1),r(\nu_3)\in A$ and $r(\nu_2),r(\nu_4)\in A'$. Since $A$ is a split face, we deduce $a_1=a_3$ and, provided $a_1>0$, $r(\nu_1)=r(\nu_3)$. Thus $a_1\nu_1-a_3\nu_3=\mu|_A\in(A_c(X))^\perp$.
$\Longleftarrow$: Since $A\cup A'$ carries every maximal measure, $X=\operatorname{conv} (A\cup A')$. Indeed, let $x\in X$ be given. We find a maximal measure $\mu\in M_1(X)$ with $r(\mu)=x$ and using the assumption we decompose it as $\mu=a_1\mu_1+a_2\mu_2$, where $a_1+a_2=1$, $a_1,a_2\ge 0$ and $\mu_1\in M_1(A), \mu_2\in M_1(A')$. Then their barycenters satisfy $x_1=r(\mu_1)\in A$, $x_2=r(\mu_2)\in A'$ and $x=a_1x_1+a_2x_2$. Hence $x\in \operatorname{conv} (A\cup A')$ and $X=\operatorname{conv} (A\cup A')$.
To observe that $A$ is split, we first verify that both $A$ and $A'$ are measure extremal with respect to maximal measures. More precisely, we claim that given $x\in A$ and $\mu\in M_x(X)$ maximal, then $\mu(A)=1$. To see this, assume that $\mu=a_1\mu_1+a_2\mu_2$, where $a_1,a_2\ge 0$, $a_1+a_2=1$, and $\mu_1\in M_1(A)$, $\mu_2\in M_1(A')$. If $a_2>0$, we write \[ x=r(\mu)=a_1r(\mu_1)+a_2r(\mu_2) \] and observe that, since $x\in A$ and $A$ is a face, $r(\mu_2)\in A$; on the other hand, the measure convexity of $A'$ yields $r(\mu_2)\in A'$, which is impossible as $A\cap A'=\emptyset$. Hence $a_2=0$ and $\mu\in M_1(A)$. Similarly we check that $A'$ is measure extremal with respect to maximal measures.
To check the uniqueness of convex combinations, take $x\in X\setminus (A\cup A')$ and assume that \[ x=\lambda_1x_1+(1-\lambda_1)x_2=\lambda_2x_3+(1-\lambda_2)x_4 \] are convex combinations with $x_1,x_3\in A$ and $x_2,x_4\in A'$. We find maximal measures $\mu_i\in M_{x_i}(X)$ for $i=1,\dots, 4$. Then $\mu_1,\mu_3\in M_1(A)$ and $\mu_2,\mu_4\in M_1(A')$. Since \[ \nu=(\lambda_1\mu_1+(1-\lambda_1)\mu_2)-(\lambda_2\mu_3+(1-\lambda_2)\mu_4) \] is boundary and $\nu\in (A_c(X))^\perp$, by the assumption we have \[
\nu|_{A}=\lambda_1\mu_1-\lambda_2\mu_3\in (A_c(X))^\perp. \] Hence $\lambda_1=\lambda_2$ and $x_1=r(\mu_1)=r(\mu_3)=x_3$. This proves that $A$ is a split face. \end{proof}
If $X$ is a simplex, then $(A_c(X))^\perp$ contains no nonzero boundary measure, hence we get the following corollary.
\begin{cor} \label{c:simplex-facejesplit} Let $X$ be a simplex and let $A\subset X$ be a measure convex face such that the complementary set $A'$ is also measure convex and $A\cup A'$ carries every maximal measure. Then $A$ is a split face. \end{cor}
\subsection{Strongly affine functions on compact convex sets with $K$-analytic boundary} \label{ss:measure-splitfaces}
This section is devoted to split faces $F$ with strongly affine $\lambda_F$ and to the problem of extending strongly affine functions from a split face to strongly affine functions on the whole compact convex set. To this end we restrict ourselves to compact convex sets with $K$-analytic boundary. We start by recalling one of the equivalent definitions of $K$-analytic spaces.
\begin{definition}
A Tychonoff space $Y$ is called \emph{$K$-analytic} if it is a continuous image of an $F_{\sigma\delta}$ subset of a compact space.\index{K-analytic space@$K$-analytic space} \end{definition}
This is the original definition due to Choquet \cite{choquet59}. Note that any $K$-analytic space is Lindel\"of and that a Polish space (i.e., a separable completely metrizable space) is $K$-analytic.
The main result of this section is the following theorem.
\begin{thm}
\label{t:srhnuti-splitfaceu-metriz} Let $X$ be a compact convex set with $\ext X$ being $K$-analytic.
(This is satisfied, in particular, if $X$ is metrizable or if $\ext X$ is $F_\sigma$.)
Let $A\subset X$ be a split face.
Then the following assertions are equivalent.
\begin{enumerate}[$(i)$]
\item Both $A$ and $A'$ are measure convex.
\item $A$ is measure convex and $\Upsilon a\in A_{sa}(X)$ whenever $a$ is a strongly affine function on $A$.
\item $\Upsilon(a|_A)\in A_{sa}(X)$ whenever $a\in A_{sa}(X)$.
\item The function $\lambda_A$ is strongly affine.
\item Both $A$ and $A'$ are measure convex and measure extremal.
\end{enumerate} \end{thm}
\begin{proof}[Easy part of the proof.] We start by observing that some implications are easy and hold without any assumptions on $X$. More precisely:
$(ii)\implies(iii)$: Note that $a|_A$
is strongly affine on $A$ whenever $A$ is measure convex and $a\in A_{sa}(X)$.
$(iii)\implies(iv)$: Recall that $\lambda_A=\Upsilon(1_X|_A)$.
$(iv)\implies(v)$: Observe that $A=[\lambda_A=1]$ and $A'=[\lambda_A=0]$.
$(v)\implies(i)$: This is trivial. \end{proof}
It remains to prove implication $(i)\implies(ii)$. In its proof we will use the assumption on $\ext X$. It will be done using some auxiliary results.
\iffalse The first one is the following easy observation.
\begin{lemma}
Let $A$ be a measure convex split face such that its complementary face $A'$ is also measure convex. Let $x\in X\setminus (A\cup A')$ be given and $y\in A$ and $y'\in A'$ are given such that \eqref{eq:split} is valid. Let $\mu\in M_x(X)$ be carried by $A\cup A'$. Then $\lambda_A(x)=\mu(A)$ and $y=r(\frac{\mu|_A}{\mu(A)})$. \end{lemma}
\begin{proof} If $\mu(A)=0$, then $\mu(A')=1$ and by measure convexity, $x=r(\mu)\in A'$. Hence $\mu(A)>0$. Analogously we obtain $\mu(A')>0$. We write \[
\mu=\mu(A)\frac{\mu|_A}{\mu(A)}+\mu(A')\frac{\mu|_{A'}}{\mu(A')}. \] Then \[
x=r(\mu)=\mu(A)r\left(\frac{\mu|_A}{\mu(A)}\right)+\mu(A')r\left(\frac{\mu|_{A'}}{\mu(A')}\right). \]
By measure convexity, the barycenters are in $A$, $A'$, respectively. It follows from the uniqueness of the decomposition that $\mu(A)=\lambda_A(x)$ and $y=r(\frac{\mu|_A}{\mu(A)})$. \end{proof} \fi
The first one can be viewed as a variant of Lemma~\ref{L:kvocient} for continuous affine surjections and strongly affine functions.
\begin{lemma}
\label{l:perfect-k-analytic}
Let $X, Z$ be compact convex sets and let $Y\subset Z$ be a measure convex $K$-analytic subset. Let $\rho\colon Z\to X$ be a continuous affine mapping such that $\rho(Y)=X$. Let $f\colon X\to \mathbb R$ be a bounded function such that $f\circ\rho$ is strongly affine on $Y$. Then $f$ is strongly affine on $X$.
\end{lemma}
\begin{proof}
Let $\mu\in M_1(X)$ be given. We are going to verify that $f$ is $\mu$-measurable and $\mu(f)=f(r(\mu))$. Let us denote $s=\rho|_Y$.
By \cite[Corollary~432G]{fremlin4} there is a Radon measure $\nu\in M_1(Z)$ such that $\nu(Y)=1$ and $s(\nu)=\mu$.
We continue by proving $f$ is $\mu$-measurable. So, fix an open set
$U\subset \mathbb R$. The function $\widetilde{f}=f\circ s=(f\circ\rho)|_Y$ is universally measurable, hence the set $\widetilde{f}^{-1}(U)$ is $\nu$-measurable. Since $\nu$ is Radon, we find a $\sigma$-compact set $K\subset \widetilde{f}^{-1}(U)$ with $\nu(K)=\nu(\widetilde{f}^{-1}(U))$. Then $s(K)$ is a $\sigma$-compact set and $$s(K)\subset s(\widetilde{f}^{-1}(U))=f^{-1}(U).$$ Moreover, $$s^{-1}(f^{-1}(U)\setminus s(K))\subset s^{-1}(f^{-1}(U))\setminus K=
\widetilde{f}^{-1}(U)\setminus K,$$
which is a $\nu$-null set. It follows that $f^{-1}(U)\setminus s(K)$ is $\mu$-null and thus $f^{-1}(U)$ is $\mu$-measurable.
Finally, we check the barycentric formula. Let $x=r(\mu)$ be the barycenter of $\mu$ and $z=r(\nu)\in Z$ be the barycenter of $\nu$. Since $Y$ is measure convex, $z\in Y$. Further, $s(z)=x$. Indeed, given $a\in A_c(X)$, we have $a\circ \rho\in A_c(Z)$, and thus
\[
\begin{aligned}
a(x)&=a(r(\mu))=\mu(a)=s(\nu)(a)=\nu(a\circ s)=\int_Y (a\circ s )\,\mbox{\rm d}\nu=\int_Z (a\circ\rho)\,\mbox{\rm d}\nu\\
&=(a\circ\rho)(z)=a(s(z)).
\end{aligned}
\]
Since continuous affine functions on $X$ separate points of $X$, we obtain $x=s(z)$.
Thus
\[
\begin{aligned} \mu(f)&=s(\nu)(f)=\nu(f\circ s)=\int_Y (f\circ s)\,\mbox{\rm d}\nu=\int_Y (f\circ \rho)\,\mbox{\rm d}\nu=(f\circ \rho)(z)=f(s(z))\\ &=f(x).
\end{aligned}
\]
Since $\mu$ is arbitrary, we deduce that $f$ is strongly affine.
\iffalse
Let $\mu\in M_1(X)$ be given. We denote $s=\rho|_Y$. We aim to verify that $f$ is $\mu$-measurable and $\mu(f)=f(r(\mu))$. To this end we use \cite[Corollary~432G]{fremlin4} to find a Radon measure $\nu\in M_1(X)$ such that $\nu(Y)=1$ and $s_\sharp\nu=\mu$.
Let $U\subset \mathbb R$ be an open set. Then $\widetilde{f}^{-1}(U)=s^{-1}(f^{-1}(U))$, i.e., $f^{-1}(U)=s(\widetilde{f}^{-1}(U))$. We find countable many compact sets $K_n$ in $\widetilde{f}^{-1}(U)$ and a set $N\subset \widetilde{f}^{-1}(U)$ such that, denoting $K=\bigcup_n K_n$, we have $\nu(N)=0$ and $\nu(\widetilde{f}^{-1}(U))=\nu(K)$. Then $\widetilde{K}=s^{-1}(s(K))$ is a closed subset of $\widetilde{f}^{-1}(U)$ such that $\widetilde{N}=\widetilde{f}^{-1}(U)\setminus \widetilde{K}$ is $\nu$-null and $\widetilde{N}=s^{-1}(s(\widetilde{N}))$. Let $H\subset Y$ be a countable union of compact sets such that $H\cap \widetilde{N}=\emptyset$ and $\nu(H)=1$. Then $s(H)\cap s(\widetilde{N})=\emptyset$ and $s(H)$ is a countable union of compact sets in $X$. Since \[ \mu(s(H))=(s_\sharp\nu)(s(H))=\nu(s^{-1}(s(H)))\ge \nu(H)=1, \]
the set $s(\widetilde{N})$ is $\mu$-null. Hence $f^{-1}(U)=s(\widetilde{K})\cup s(\widetilde{N})=s(K)\cup s(\widetilde{N})$, as a union of countably many compact sets and $\mu$-null set, is $\mu$-measurable.
Let now $x=r(\mu)$ be the barycenter of $\mu$ and $z=r(\nu)\in Z$ be the barycenter of $\nu$. Since $Y$ is measure convex, $z\in Y$. Further, $s(z)=x$. Indeed, given $a\in A_c(X)$, we have $a\circ \rho\in A_c(Z)$, and thus
\[
\begin{aligned}
a(x)&=a(r(\mu))=\mu(a)=(s_\sharp \nu)(a)=\nu(a\circ s)=\int_Y (a\circ s )\,\mbox{\rm d}\nu=\int_Y (a\circ\rho)\,\mbox{\rm d}\nu\\
&=(a\circ\rho)(z)=a(s(z)).
\end{aligned}
\]
Since continuous affine functions on $X$ separate points of $X$, we obtain $x=s(z)$.
Thus
\[ \mu(f)=(s_\sharp \nu)(f)=\nu(f\circ s)=\int_Y (f\circ s)\,\mbox{\rm d}\nu=\int_Y (f\circ \rho)\,\mbox{\rm d}\nu=(f\circ \rho)(z)=f(s(z))=f(x).
\]
Hence $f$ is strongly affine.\fi \end{proof}
We continue by a sufficient condition for strong affinity on compact convex sets with $K$-analytic boundary.
\begin{prop}
\label{p:sa-k-analytic}
Let $X$ be a compact convex set with $\ext X$ being $K$-analytic and let $f\colon X\to\mathbb R$ be a bounded function such that for each $\mu\in M_1(X)$ maximal, $f$ is $\mu$-measurable and $\mu(f)=f(r(\mu))$. Then $f$ is strongly affine.
\end{prop}
\begin{proof} Since $\ext X$ is a $K$-analytic set, it is universally measurable by \cite[Corollary 2.9.3]{ROJA} and a measure $\mu\in M_1(X)$ is carried by $\ext X$ if and only if $\mu$ is maximal (see \cite[Theorem 3.79]{lmns}). Hence the function $$g=\begin{cases}
f&\text{on }\ext X,\\
0&\text{elsewhere}, \end{cases}$$
is universally measurable. Indeed, given a measure $\mu\in M_1(X)$, we can decompose $\mu=\mu|_{\ext X}+\mu|_{X\setminus \ext X}$ and $g$ is measurable with respect to both parts.
Let $Z=M_1(X)$ and $Y=\{\mu\in Z;\, \mu(\ext X)=1\}$. Then $Y$ is a $K$-analytic subset of $Z$ by \cite[Theorem 3(a)]{Hol-Kal}, which is moreover measure convex. Indeed, the function $h=1_{\ext X}$ is universally measurable, and thus $\widetilde{h}(\mu)=\mu(h)$ is a strongly affine function on $Z$ by \cite[Proposition 5.30]{lmns}. Hence $Y=[\widetilde{h}=1]$ is a measure convex face in $Z$.
By \cite[Proposition 5.30]{lmns}, the function $\widetilde{g}\colon Z\to \mathbb R$ defined by $\widetilde{g}(\mu)=\mu(g)$, $\mu\in Z$, is strongly affine on $Z$. Thus its restriction $\widetilde{f}=\widetilde{g}|_Y$ to $Y$ is strongly affine on $Y$. To finish the proof it is enough to observe that, denoting $r\colon Z\to X$ the barycentric mapping, we have $\widetilde{f}=f\circ r$ on $Y$ by the assumption. (We observe that $r\colon Y\to X$ is surjective by \cite[Theorem 3.79 and Theorem 3.65]{lmns}.) Hence Lemma~\ref{l:perfect-k-analytic} finishes the reasoning.
\end{proof}
\iffalse \begin{lemma} \label{l:extense-sa-kanalytic} Let $X$ be a compact convex set with $\ext X$ being $K$-analytic and $A\subset X$ be a measure convex split face such that its complementary face $A'$ is also measure convex. Then $\Upsilon a$ is strongly affine for any $a\in A_{sa}(A)$. \end{lemma} \fi
Now we are ready to prove the remaining implication.
\begin{proof}[Proof of implication $(i)\implies(ii)$ from Theorem~\ref{t:srhnuti-splitfaceu-metriz}.] Assume $A$ and $A'$ are measure convex and fix a strongly affine function $a$ on $A$. By Lemma~\ref{l:extense-splitface}, $b=\Upsilon a$ is a bounded affine function on $X$. To prove it is strongly affine we will use Proposition~\ref{p:sa-k-analytic}. So, let $\mu\in M_1(X)$ be maximal and $x=r(\mu)$. Then $\mu$ is carried by $\ext X$ (by \cite[Theorem 3.79]{lmns}) and hence by $A\cup A'$ (recall that any extreme point of $X$ outside $A$ forms a face disjoint from $A$ and thus belongs to $A'$). Therefore we conclude by Lemma~\ref{L:mc-partial-sa}. \iffalse
So, $b$ is clearly $\mu$-measurable.
It remains to prove that $b(x)=\mu(b)$. If $\mu$ is carried by $A$, then $x\in A$ and we use the assumption that $a$ is strongly affine on $A$. If $\mu$ is carried by $A'$, then $x\in A'$ and we use the fact that $b=a=0$ on $A'$.
Finally, assume $\mu(A)>0$ and $\mu(A')>0$. We write \[
\mu=\mu(A)\tfrac{\mu|_A}{\mu(A)}+\mu(A')\tfrac{\mu|_{A'}}{\mu(A')}. \] Then \[
x=r(\mu)=\mu(A)r\left(\tfrac{\mu|_A}{\mu(A)}\right)+\mu(A')r\left(\tfrac{\mu|_{A'}}{\mu(A')}\right). \]
By measure convexity, the barycenters are in $A$, $A'$, respectively. It follows from the uniqueness of the decomposition that $\mu(A)=\lambda_A(x)$ and $y=r(\frac{\mu|_A}{\mu(A)})$ (using the notation from \eqref{eq:split}). Thus
$$b(x)=\lambda_A(x) a(x)=\mu(A)a\left(r\left(\tfrac{\mu|_A}{\mu(A)}\right)\right) = \mu(a) \int_A a\,\mbox{\rm d} \tfrac{\mu|A}{\mu(A)}=\int b\,\mbox{\rm d}\mu, $$ which completes the argument.
Finally, by Proposition~\ref{p:sa-k-analytic} we deduce that $b$ is strongly affine. \fi \end{proof}
The following example witnesses that in Theorem~\ref{t:srhnuti-splitfaceu-metriz}$(i)$ the assumption that both $A$ and $A'$ are measure convex is essential.
\begin{example}\label{ex:d+s}
There exists a split face $A\subset X=M_1([0,1])$ with the following properties:
\begin{enumerate}[$(i)$]
\item Both $A$ and $A'$ are Baire sets;
\item $A$ is measure convex and $A'$ is measure extremal;
\item $A\cup A'$ carries every maximal measure;
\item $\lambda_A$ is an affine Baire function which is not strongly affine.
\end{enumerate} \end{example}
\begin{proof} Let \[ A=\{\mu\in X;\, \mu(\{x\})=0, x\in [0,1]\}. \] Then $A$ is a $G_\delta$ face of $X$ (see \cite[Proposition~2.58]{lmns}). Let \[ B=\{\mu\in X;\, \mu\text{ is discrete}\}. \] Then $B=\{\mu\in X;\, \mu_d([0,1])=1\}$ (here $\mu_d$ denotes the discrete part of $\mu$) is a Borel face of $X$ by \cite[Proposition~2.63]{lmns}. We claim that $A$ is a split face and $B=A'$.
To this end we notice that each $\mu\in X$ can be uniquely decomposed as $\mu=\mu_c+\mu_d$, where $\mu_c$ is the continuous part of $\mu$ and $\mu_d$ is the discrete part of $\mu$. Hence $X$ is the direct convex sum of $A$ and $B$ (see \cite[Chapter II, \S 6]{alfsen}).
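Explicitly, if $\mu\in X$ satisfies $\mu_c\neq0$ and $\mu_d\neq0$, this decomposition reads
\[
\mu=\mu_c([0,1])\cdot\frac{\mu_c}{\mu_c([0,1])}+\mu_d([0,1])\cdot\frac{\mu_d}{\mu_d([0,1])},
\]
where $\frac{\mu_c}{\mu_c([0,1])}\in A$ and $\frac{\mu_d}{\mu_d([0,1])}\in B$; if $\mu_c=0$ or $\mu_d=0$, then $\mu\in B$ or $\mu\in A$, respectively.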
To finish the proof that $A$ is a split face it is enough to show that $B=A'$. Obviously, $B\subset A'$. For the converse inclusion, let $\mu\in A'$ be given. Then $\mu=\alpha \mu_1+(1-\alpha)\mu_2$, where $\alpha\in [0,1]$, $\mu_1\in A$ and $\mu_2\in B$. Let $C\subset X$ be a face containing $\mu$ and disjoint from $A$. If $\alpha\in (0,1]$, then $\alpha\mu_1+(1-\alpha)\mu_2=\mu\in C$ implies $\mu_1\in C$, which is impossible as $C\cap A=\emptyset$. Hence $\alpha=0$, and consequently $\mu=\mu_2\in B$. Hence $A'\subset B$.
So far we have verified that $A$ is a Borel split face such that $A'=B$ is Borel as well and $A\cup A'\supset \ext X$ carries any maximal measure. Since $X$ is metrizable, Baire and Borel sets coincide. We have thus verified conditions $(i)$ and $(iii)$.
To check that $A$ is measure convex, we observe that \[ A=\bigcap_{x\in [0,1]}\{\mu\in X;\, \mu(\{x\})=0\}. \] Since each function $\mu\mapsto \mu(\{x\})$ is upper semicontinuous and affine on $X$ and has values in $[0,1]$, it is strongly affine and thus each set $\{\mu\in X;\, \mu(\{x\})=0\}$ is measure convex. Since $A$ is Borel, it follows that $A$ is measure convex.
To check that $B=A'$ is measure extremal, fix $\mu\in B$ and $\Lambda\in M_1(X)$ with $r(\Lambda)=\mu$. Then $\mu=\sum_{n=1}^\infty t_n\varepsilon_{x_n}$ for a sequence $(x_n)$ in $[0,1]$ and a sequence $(t_n)$ of non-negative numbers with $\sum_{n=1}^\infty t_n=1$. The function $H(\nu)=\sum_{n=1}^\infty \nu(\{x_n\})$, $\nu\in X$, is strongly affine (each summand is a bounded upper semicontinuous affine function, hence strongly affine, and the partial sums increase to $H$, so the barycentric formula for $H$ follows by the monotone convergence theorem). Since $H\le 1$ and $H(\mu)=1$, we get $\Lambda(H)=H(r(\Lambda))=1$, and thus $H=1$ $\Lambda$-almost everywhere. Hence $\Lambda$ is carried by $B$, and we have verified $(ii)$.
Finally, the function $\lambda_A$ is an affine function of the second Baire class which is not strongly affine, since $\lambda_A(\mu)=1-\mu_d([0,1])$ for $\mu\in X$, and \cite[Proposition~2.63]{lmns} applies. So, $(iv)$ is verified and the proof is complete. \end{proof}
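For illustration, the failure of the barycentric formula for $\lambda_A$ in Example~\ref{ex:d+s} can also be witnessed directly. Let $\lambda$ denote Lebesgue measure on $[0,1]$ and let $\Lambda\in M_1(X)$ be the image of $\lambda$ under the mapping $x\mapsto\varepsilon_x$. Then $\Lambda$ is carried by the set of Dirac measures, which is contained in $B=A'$, while $r(\Lambda)=\lambda\in A$ (indeed, $\int_X \mu(g)\,\mbox{\rm d}\Lambda(\mu)=\int_{[0,1]}g\,\mbox{\rm d}\lambda$ for each $g\in C([0,1])$, and the functions $\mu\mapsto\mu(g)$ are continuous affine functions separating the points of $X$). Hence
\[
\int_X \lambda_A\,\mbox{\rm d}\Lambda=0\neq 1=\lambda_A(r(\Lambda)).
\]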
\iffalse \begin{definition} Let $X$ be a compact convex set and $A\subset X$ be measure convex. We call a function $a\colon A\to \mathbb R$ \emph{strongly affine} if $\mu(a)=a(r(\mu))$, whenever $\mu$ is a probability measure supported by $A$. \end{definition}
\begin{remark}
\label{r:omezen-sa-naA}
We remind that a strongly affine function on a measure convex set $A\subset X$ is bounded as the proof of \cite[Satz 2.1.(c)]{krause} shows. \end{remark} \fi
We note that Theorem~\ref{t:srhnuti-splitfaceu-metriz} implies, in particular, that $\lambda_A$ is a (strong) multiplier for $A_{sa}(X)$ provided $X$ is a compact convex set with $K$-analytic boundary and $A$ is a split face with $\lambda_A$ strongly affine. Later we will discuss a converse and similar results. We note that Example~\ref{ex:dikous-divnyspitface} below shows that the assumption on $\ext X$ in Theorem~\ref{t:srhnuti-splitfaceu-metriz} cannot be dropped. However, the following problems seem to be open.
\begin{ques}
Let $X$ be a compact convex set and let $A\subset X$ be a split face.
\begin{enumerate}[$(1)$]
\item Assume both $A$ and $A'$ are measure convex and measure extremal. Is $\lambda_A$ strongly affine?
\item Assume $\lambda_A$ is strongly affine. Does $\Upsilon$ map strongly affine functions to strongly affine functions?
\end{enumerate} \end{ques}
\iffalse \begin{lemma} \label{l:extense-sa-metriz} Let $X$ be a metrizable compact convex set and $A\subset X$ be a measure convex split face such that its complementary face $A'$ is also measure convex. Then $\Upsilon a$ is strongly affine for any $a\in A_{sa}(A)$. \end{lemma}
\begin{proof} Fix $a\in A_{sa}(A)$ and $\mu\in M_1(X)$. If $\mu$ is carried by $A$, then $\Upsilon a$ is $\mu$-measurable as $\Upsilon a=a$ on $A$. Since $A$ is measure convex and $a$ is strongly affine on $A$, we get $r(\mu)\in A$ and $\Upsilon a(r(\mu))=\int\Upsilon a\,\mbox{\rm d}\mu$. If $\mu$ is carried by $A'$, then $\Upsilon a=0$ $\mu$-almost everywhere (as $\Upsilon a=0$ on $A'$) and hence it is $\mu$-measurable. Moreover, $r(\mu)\in A'$ (as $A'$ is measure convex) and $\int \Upsilon a\,\mbox{\rm d}\mu=0=\Upsilon a (r(\mu))$.
Next assume that $\mu$ is carried by $X\setminus (A\cup A')$. We find a maximal measure $\nu\in M_1(X)$ with $\mu\prec\nu$. By \cite[Theorem 3.92(iii)]{lmns}, there exists a measure $\omega\in M_1(X\times X)$ such that \begin{itemize}
\item [(a)] $\pi_1(\omega)=\mu$, $\pi_2(\omega)=\nu$, where $\pi_1,\pi_2$ are the canonical projections;
\item [(b)] for each $B\subset X$ universally measurable and $f\in A_c(X)$ it holds
\[
\int_{B\times X} f\circ\pi_1\,\mbox{\rm d}\omega=\int_{B\times X} f\circ \pi_2\,\mbox{\rm d}\omega.
\] \end{itemize} Let $\mathcal{S}_\mu$ and $\mathcal{S}_\nu$ denote the $\sigma$-algebras of $\mu$- and $\nu$-measurable sets, respectively. Let $\mathcal{S}$ be the $\sigma$-algebra on $X\times X$ generated by sets $\{S_1\times S_2;\, S_1\in\mathcal{S}_\mu, S_2\in\mathcal{S}_\nu\}$.
We apply \cite[Theorem 452M]{fremlin4} to get a family $(D_x)_{x\in X}$ such that the following properties are fulfilled: \begin{itemize}
\item [(c)] Given $x\in X$, $D_x$ is a complete probability measure on $X$ such that
any compact subset of $X$ is $D_x$-measurable and $D_x$ is inner regular with respect to compact sets.
\item [(d)] For any bounded $\mathcal{S}$-measurable function $h$ on $X\times X$ we have
\[ \int_{X\times X} h\,\mbox{\rm d}\omega=\int_X\left(\int_X h(x,y)\,\mbox{\rm d} D_x(y)\right)\,\mbox{\rm d}\mu(x). \] \end{itemize}
Property (c) implies that each $D_x$ is a Radon probability measure on $X$. Property (d) yields that the function $x\mapsto \int_X h(x,y)\,\mbox{\rm d} D_x(y)$ is $\mu$-measurable for each bounded $\mathcal{S}$-measurable $h$ on $X\times X$.
The next step is to verify that $D_x$ is carried by $A\cup A'$ for $\mu$-almost all $x\in X$. To this end, consider $f=1_{X\setminus(A\cup A')}$. Then $f$ is $\mathcal{S}_\nu$-measurable and thus the function $(x,y)\mapsto f(y)$ is $\mathcal{S}$-measurable. We obtain \[\begin{aligned} \int_X D_x(X\setminus A\cup A') \,\mbox{\rm d}\mu(x)&= \int_X\left(\int_X f(y)\,\mbox{\rm d} D_x(y)\right)\,\mbox{\rm d}\mu(x)=\int_{X\times X} (f\circ\pi_2)(x,y)\,\mbox{\rm d}\omega\\&=\int_X f\,\mbox{\rm d}\pi_2(\omega)=\int_X f\,\mbox{\rm d}\nu=0.\end{aligned} \] Hence $D_x(X\setminus A\cup A')=0$ (i.e., $D_x$ is carried by $A\cup A'$) for $\mu$-almost every $x\in X$. Let $N\subset X$ be a set of $\mu$-measure zero such that $A\cup A'\subset N$ and $D_x(X\setminus A\cup A')=0$ for $x\in X\setminus N$.
We claim that $r(D_x)=x$ for $\mu$-almost all $x\in X$. To this end we consider a function $f\in A_c(X)$. Then (a) combined with (d) yields for each Borel set $B\subset X$ equality \[ \begin{aligned} \int_B f(x)\,\mbox{\rm d}\mu(x)&=\int_{B\times X} f(x)\,\mbox{\rm d}\omega(x,y)=\int_{B\times X}(f\circ\pi_2)\,\mbox{\rm d}\omega\\ &=\int_{X\times X} 1_B(x)f(y)\,\mbox{\rm d}\omega(x,y)=\int_X\left(\int_X 1_B(x)f(y)\,\mbox{\rm d} D_x(y)\right)\,\mbox{\rm d}\mu(x)\\ &=\int_B \left(\int_X f(y)\,\mbox{\rm d} D_x(y)\right)\,\mbox{\rm d}\mu(x)=\int_B f(r(D_x))\,\mbox{\rm d}\mu(x). \end{aligned} \] Since $B\subset X$ Borel is arbitrary, $f(x)=f(r(D_x)$ $\mu$-almost everywhere. Thus for each $f\in A_c(X)$ there exists a $\mu$-null set $N_f$ such that $f(x)=f(r(D_x))$ for $x\in X\setminus N_f$.
Using metrizability of $X$ we find a countable dense set $\{f_n;\, n\in \mathbb N\}$ in $A_c(X)$. Let $M=N\cup \bigcup_{n=1}^\infty N_{f_n}$, then $\mu(M)=0$. For $x\in X\setminus M$ we have $f_n(x)=f_n(r(D_x))$, $n\in\mathbb N$, which gives $f(x)=f(r(D_x))$ for $f\in A_c(X)$. Hence $r(D_x)=x$ for $x\in X\setminus M$.
Given $x\in X\setminus M$, let $y(x)\in A$ and $y'(x)\in A'$ be such that \eqref{eq:split} is fulfilled. Since $D_x$ is carried by $A\cup A'$, Lemma~\ref{l:split-miry} gives
$D_x(A)=\lambda_A(x)$ and $y(x)=r(\frac{D_x|_A}{D_x(A)})$. Thus \[
(\Upsilon a)(x)=\lambda(x)a(y(x))=D_x(A)a(r(\frac{D_x|_A}{D_x(A)}))=D_x(A)\frac{D_x|_A(a)}{D_x(A)}=D_x(a1_A). \] The function $a1_A$ is $\nu$-measurable, and thus $(x,y)\mapsto a(y)1_A(y)$ is $\mathcal{S}$-measurable. Thus the function $x\mapsto D_x(a1_A)$ is $\mu$-measurable. Since it equals $\Upsilon a$ $\mu$-almost everywhere, $\Upsilon a$ is $\mu$-measurable.
Moreover, let $z=r(\mu)$. Since $\Upsilon a(x)=D_x(a1_A)$ $\mu$-almost everywhere, we get \[ \begin{aligned} \int_X (\Upsilon a)(x)\,\mbox{\rm d}\mu(x)&=\int_X\left(\int_X a(y)1_A(y)\,\mbox{\rm d} D_x(y)\right)\,\mbox{\rm d}\mu(x)=\int_{X\times X} a(y)1_A(y)\,\mbox{\rm d}\omega\\
&=\int_X a(y)1_A(y)\,\mbox{\rm d}\pi_2(\omega)(y)=\int_A a\,\mbox{\rm d}\nu=\nu(A)\int_X a\,\mbox{\rm d}\tfrac{\nu|_A}{\nu(A)}\\&=\nu(A)a\left(r\left(\frac{\nu|_A(a)}{\nu(A)}\right)\right) =\lambda(z)a(y(z))=(\Upsilon a)(z). \end{aligned} \] This completes the proof. \end{proof}
\begin{cor}
\label{c:split-extremal-metriz} Let $A$ be a measure convex split face in a compact convex set $X$ with $\ext X$ being $K$-analytic such that its complementary face $A'$ is also measure convex. Then both $A$ and $A'$ are measure extremal. \end{cor}
\begin{proof} By Lemma~\ref{l:extense-sa-kanalytic} applied to the function $a=1$, $\Upsilon 1$ is strongly affine. Since $A=[\Upsilon 1=1]$ and $A'=[\Upsilon 1=0]$, the measure extremality easily follows. \end{proof} \fi
\subsection{Baire split faces} \label{ss:extension-splitfaces}
The aim of this section is to characterize split faces $A\subset X$ with $\lambda_A$ Baire and strongly affine. This is the content of the following proposition, which is an analogue of Theorem~\ref{t:srhnuti-splitfaceu-metriz}. We point out that in this case $X$ is a general compact convex set (with no additional assumptions on $\ext X$), but stronger assumptions are imposed on the face.
\begin{prop}
\label{p:shrnuti-splitfaceu-baire}
Let $X$ be a compact convex set and let $A\subset X$ be a split face. Then the following assertions are equivalent.
\begin{enumerate}[$(i)$]
\item Both $A$ and $A'$ are Baire and measure convex.
\item $A$ is measure convex and $\Upsilon a\in A_{sa}(X)\cap\Ba(X)$ whenever $a$ is a strongly affine Baire function on $A$.
\item $\Upsilon(a|_A)\in A_{sa}(X)\cap\Ba(X)$ whenever $a\in A_{sa}(X)\cap\Ba(X)$.
\item The function $\lambda_A$ is strongly affine and Baire.
\item Both $A$ and $A'$ are Baire, measure convex and measure extremal.
\end{enumerate} \end{prop}
The proof follows the same scheme as that of Theorem~\ref{t:srhnuti-splitfaceu-metriz}. Again, implications $(ii)\implies(iii)\implies(iv)\implies(v)\implies(i)$ are easy and may be proved exactly in the same way. To prove the remaining implication $(i)\implies(ii)$ we will use two auxiliary results. The first one is a purely topological extension result.
\begin{lemma} \label{l:extense-K-anal} Let $X$ be a compact convex set and $A\subset X$ a $K$-analytic split face such that $A'$ is also $K$-analytic. Then
$\Upsilon a$ is a Baire function on $X$ for each affine Baire function $a$ on $A$.
In particular, both $A$ and $A'$ are Baire.
\end{lemma}
\begin{proof}
Let $a$ be an affine Baire function on $A$ and set $b=\Upsilon a$. To verify that $b$ is Baire, we proceed as in \cite{stacey-maan}. Since $A$ and $A'$ are $K$-analytic, the space $H=A\times A'\times [0,1]$ is $K$-analytic as well (see \cite[Theorem~2.5.5]{ROJA}). The mapping $\varphi\colon H\to X$ defined by $\varphi([y,y',\lambda])=\lambda y +(1-\lambda) y'$ is a continuous affine surjection. Further, the function $h\colon H\to \mathbb R$ defined as $h([y,y',\lambda])=\lambda a(y)$ is a Baire function on $H$ satisfying $b\circ \varphi=h$. Hence, for each open $U\subset \mathbb R$, the sets \[ b^{-1}(U)=\varphi(h^{-1}(U))\quad\text{and}\quad b^{-1}(\mathbb R\setminus U)=\varphi(h^{-1}(\mathbb R\setminus U)) \] are disjoint, $K$-analytic (as continuous images of $K$-analytic sets) and cover $X$. By \cite[Section~3.3]{ROJA}, they are Baire sets. Hence $b$ is Baire measurable and thus it is a Baire function.
In particular, if we apply it to $a=1$, we deduce that $A$ and $A'$ are Baire sets. \end{proof}
The second one is a sufficient condition for strong affinity of Baire functions, a counterpart of Proposition~\ref{p:sa-k-analytic}.
\begin{lemma}
\label{l:sa-baireovske}
Let $a$ be a bounded Baire function on a compact convex set $X$ satisfying $a(r(\mu))=\mu(a)$ for each maximal $\mu\in M_1(X)$. Then $a$ is strongly affine. \end{lemma} \begin{proof}
By a separable reduction result (see \cite[Theorem 9.12]{lmns}), there exist a metrizable compact convex set $Y$, a continuous affine surjection $\varphi\colon X\to Y$ and a Baire function $b\colon Y\to \mathbb R$ such that $a=b\circ\varphi$.
Then $b$ is bounded and $b(r(\nu))=\nu(b)$ for each maximal $\nu\in M_1(Y)$.
Indeed, let $\nu\in M_1(Y)$ maximal be given. Using \cite[Proposition 7.49]{lmns} we find a maximal measure $\mu\in M_1(X)$ with $\varphi(\mu)=\nu$.
Given $h\in A_c(Y)$, we have
\[
h(\varphi(r(\mu)))=(h\circ \varphi)(r(\mu))=\mu(h\circ\varphi)=(\varphi(\mu))(h)=\nu(h)=h(r(\nu)).
\]
Thus $\varphi(r(\mu))=r(\nu)$. Now we have \[ \nu(b)=(\varphi(\mu))(b)=\mu(b\circ\varphi)=(b\circ\varphi)(r(\mu))=b(\varphi(r(\mu)))=b(r(\nu)). \]
Thus we have proved that $b$ satisfies the barycentric formula with respect to each maximal measure on $Y$. Since $Y$ is metrizable, $\ext Y$ is $K$-analytic, hence Proposition~\ref{p:sa-k-analytic} yields that $b$ is strongly affine on $Y$. \iffalse Let $\mu\in M_1(Y)$ be given. By \cite[Theorem 11.41]{lmns}, there exists a Borel measurable mapping $m\colon Y\to M_1(Y)$ such that $r(m(x))=x$, $x\in Y$, and $m(x)$ is maximal for each $x\in Y$. Let $\nu\in M_1(Y)$ be defined by the formula \[ \nu(g)=\int_Y \left(\int_Y g(t)\,\mbox{\rm d} m(x)(t)\right)\,\mbox{\rm d} \mu(x), \quad g\in C(Y). \] Note that $\nu$ is indeed a positive linear functional on $C(Y)$ satisfying $\nu(1_Y)=1$, so it can be viewed as an element of $M_1(Y)$. Further, \[ h(r(\nu))=\nu(h)=\int_Y m(x)(h)\,\mbox{\rm d}\mu(x)=\int_Y h(x)\,\mbox{\rm d}\mu(x)=h(r(\mu)),\quad h\in A_c(Y), \]\ hence $r(\nu)=r(\mu)$. Further, for each $g\in C(Y)$ we have \[ \nu(g^*)=\int_Y m(x)(g^*)\,\mbox{\rm d}\mu(x)=\int_Y m(x)(g)\,\mbox{\rm d}\mu(x)=\nu(g), \] and thus $\nu$ is maximal. Hence we obtain \[ b(r(\mu))=b(r(\nu))=\nu(b)=\int_Y m(x)(b)\,\mbox{\rm d}\mu(x)=\int_Y b(x)\,\mbox{\rm d}\mu(x)=\mu(b). \] Hence $b\in A_{sa}(Y)$.\fi
By \cite[Proposition~5.29]{lmns}, the function $a=b\circ \varphi$ is strongly affine as well. \end{proof}
\iffalse \begin{lemma} \label{l:urysohn-F} Let $X$ be a compact convex set and $A\subset X$ a Baire split face such that $A'$ is also Baire. \begin{itemize}
\item[(a)] $\Upsilon a$ is Baire for each Baire affine function on $A$.
\item[(b)] Assume moreover that both $A$ and $A'$ are measure convex. Then $\Upsilon a\in A_{sa}(X)\cap \Ba(X)$ for each $a\in A_{sa}(X)\cap \Ba(X)$. \end{itemize} \end{lemma}
Before embarking on the proof of the lemma, we need to recall that a topological space is $K$-analytic if it is the image of a Polish space under a usco (an upper semicontinuous compact valued) map, see \cite[p. 11]{ROJA}. \fi
\begin{proof}[Proof of implication $(i)\implies(ii)$ from Proposition~\ref{p:shrnuti-splitfaceu-baire}.]
Assume that both $A$ and $A'$ are Baire and measure convex and let $a$ be a strongly affine Baire function on $A$. By Lemma~\ref{l:extense-K-anal} we know that $\Upsilon a\in\Ba(X)$ (recall that Baire subsets of a compact space are $K$-analytic). It remains to show that $b=\Upsilon a$ is strongly affine. By Lemma~\ref{l:sa-baireovske} it is enough to prove that $b$ satisfies the barycentric formula with respect to every maximal measure. Since $A\cup A'$ is a Baire set containing $\ext X$, it carries all maximal measures. Hence, we conclude by Lemma~\ref{L:mc-partial-sa}. \iffalse
So, let $\mu\in M_1(X)$ be maximal with $r(\mu)=x$. Then $\mu(A\cup A')=1$, as $A\cup A'$ is a Baire set containing $\ext X$. If $\mu$ is carried by $A$, then $x\in A$ by measure convexity of $A$ and $\mu(b)=\mu(a)=a(x)=b(x)$ as $b=a$ on $A$ and $a$ is strongly affine on $A$. If $\mu$ is carried by $A'$, then $x\in A'$ by measure convexity of $A'$ and $\mu(b)=0=b(x)$ as $b=0$ on $A'$.
Hence we may assume that $\mu(A)\in (0,1)$. Then \[
\mu=\mu(A)\frac{\mu|_A}{\mu(A)}+\mu(A')\frac{\mu|_{A'}}{\mu(A')}, \] hence \[
x=r(\mu)=\mu(A)r\left(\frac{\mu|_A}{\mu(A)}\right)+\mu(A')r\left(\frac{\mu|_{A'}}{\mu(A')}\right), \]
where $r\left(\frac{\mu|_A}{\mu(A)}\right)\in A$ and $r\left(\frac{\mu|_{A'}}{\mu(A')}\right)\in A'$ by measure convexity. Since $A$ is a split face, this convex combination is uniquely determined, and thus \[ \begin{aligned}
b(x)&=(\Upsilon a)(x)=\mu(A)a\left(r\left(\frac{\mu|_A}{\mu(A)}\right)\right)=
\mu(A)\int_X a\,\mbox{\rm d}\frac{\mu|_A}{\mu(A)}=\int_X a1_A\,\mbox{\rm d}\mu\\ &=\int_{A\cup A'}a1_A\,\mbox{\rm d}\mu=\int_X b\,\mbox{\rm d}\mu. \end{aligned} \]
This finishes the proof.\fi \end{proof}
\subsection{Extending strongly affine functions from closed split faces}
In this section we focus on extending affine functions of various descriptive classes from closed split faces. To this end, we need the following lemma.
\begin{lemma} \label{l:frag-selekce} Let $r\colon M\to N$ be a continuous surjection of a compact space $M$ onto a compact space $N$. Let $g\colon M\to \mathbb R$ be in $\Bo_1(M)$ or a fragmented function. Then there exists a function $\phi\colon N\to M$ such that $r(\phi(y))=y$, $y\in N$, and $g\circ \phi$ is in $\Bo_1(N)$ or fragmented, respectively. \end{lemma}
\begin{proof} We first handle the case of fragmented functions. Let $(U_n)$ be a countable base of open sets in $\mathbb R$. By Theorem~\ref{T:a}, each set $g^{-1}(U_n)$ can be written as a countable union of resolvable sets in $M$ called, say, $F_{n,k}$. Let $\mathcal A=\{F_{n,k};\, n,k\in\mathbb N\}$. By \cite[Lemma 1]{HoSp}, there exists a selection $\phi\colon N\to M$ from the multi-valued mapping $y\mapsto r^{-1}(y)$ such that $\phi^{-1}(A)$ is resolvable in $N$ for each $A\in \mathcal A$. Then for each $U_n$ the set \[ (g\circ \phi)^{-1}(U_n)=\phi^{-1}(g^{-1}(U_n))=\phi^{-1}\left(\bigcup_{k=1}^\infty F_{n,k}\right)=\bigcup_{k=1}^\infty \phi^{-1}(F_{n,k}) \] is a countable union of resolvable sets in $N$. By Theorem~\ref{T:a}, $g\circ\phi$ is fragmented.
If $g\in\Bo_1(M)$, we proceed as above but we use the selection lemma \cite[Lemma 8]{HoSp} instead. \end{proof}
\begin{lemma} \label{l:rozsir-splitface} Let $X$ be a compact convex set and let $F\subset X$ be a closed split face. Then the following assertions are valid. \begin{enumerate}[$(a)$]
\item Both $F$ and $F'$ are measure convex, measure extremal and $F\cup F'$ carries all maximal measures.
\item $\Upsilon a\in A_s(X)$ whenever $a\in A_s(F)$.
\item $\Upsilon a\in A_b(X)\cap \Bo_1(X)$ whenever $a\in A_b(F)\cap \Bo_1(F)$.
\item $\Upsilon a\in A_f(X)$ whenever $a\in A_f(F)$. \end{enumerate} \end{lemma}
\begin{proof} $(a)$: The face $F$, being closed, is measure convex. Further, by \cite[Proposition II.6.5 and Proposition II.6.9]{alfsen} we deduce that $\lambda_F=(1_F)^*$, so it is upper semicontinuous. It follows that $\lambda_F$ is strongly affine by \eqref{eq:prvniinkluze}, hence we easily get that both $F$ and $F'$ are measure convex and measure extremal. Further, $$F\cup F'=[\lambda_F=1]\cup[\lambda_F=0]=[(1_F)^*=1_F],$$ so this set carries each maximal measure by \cite[Theorem 3.58]{lmns} (as $1_F$ is upper semicontinuous).
Let us proceed to $(b)-(d)$: Since $\Upsilon 1_F=1_F^*\in A_s(X)\subset A_b(X)\cap \Bo_1(X)\subset A_f(X)$ and $\Upsilon$ is linear, it is enough to prove the statements assuming $0\le a\le1$.
Given $x\in X$, let $y(x)\in F$ and $y'(x)\in F'$ be such that \eqref{eq:split} is fulfilled. We recall that in this case $\Upsilon a(x)=\lambda_F(x) a(y(x))$ for any $a\in A_b(F)$. Further, it follows from Lemma~\ref{L:mc-partial-sa} that $$\Upsilon a(x)=\int_F a\,\mbox{\rm d}\omega\mbox{\quad for any maximal }\omega\in M_x(X).$$ \iffalse
Further, let $\omega\in M_1(X)$ be any maximal representing $x$. Since $\lambda_F$ is strongly affine, we deduce $\lambda_F(x)=\omega(F)$. We further claim that $$\Upsilon a(x)=\int_F a\,\mbox{\rm d}\omega.$$
If $\omega(F)=0$, we have $x\in F'$ and thus $\Upsilon a(x)=0=\int_F a\,\mbox{\rm d}\omega|_F$. Analogously, if $\omega(F)=1$, $x\in F$ and $\Upsilon a(x)=a(x)=\int_F a\,\mbox{\rm d} \omega$ as $a$ is strongly affine on $F$.
If $\omega(F)\in (0,1)$, let $y_1=r\left(\frac{\omega|_F}{\omega(F)}\right)\in F$ and $y_2=r\left(\frac{\omega|_{F'}}{\omega(F')}\right)\in F'$. Then $x=\omega(F)y_1+\omega(F')y_2$, which by the uniqueness of the decomposition means $y_1=y(x)$ and $y'(x)=y_2$. Thus \[ \Upsilon a(x)=\lambda_F(x)a(y(x))=\lambda_F(x)a(y_1)=\lambda_F(x)\int_F a\,\mbox{\rm d}\tfrac{\omega}{\omega(F)}=\int_F a\,\mbox{\rm d}\omega. \]\fi In other words, if we set \[ d=\begin{cases} a,&\text{on } F,\\ 0,&\text{on }X\setminus F, \end{cases} \] then for any maximal $\omega\in M_x(X)$ we have $$\Upsilon a(x)=\int_X d\,\mbox{\rm d}\omega.$$
Now we verify that $\Upsilon a$ is in the relevant function system.
$(b)$: It is enough to show that $\Upsilon a\in A_s(X)$ for any positive upper semicontinuous affine function $a$ on $F$. In this case $d$ is an upper semicontinuous convex function (here we use that $F$ is a closed face and $a\ge 0$) and we claim that $\Upsilon a=d^*$; since $d^*$ is upper semicontinuous, this will finish the proof. First, let $x\in X$ be given and let $\nu$ be a maximal measure representing $x$. Then $$\Upsilon a(x)=\int_F a\,\mbox{\rm d} \nu=\nu(d)\le d^*(x)$$ by \cite[Corollary I.3.6 and the following remark]{alfsen}. Conversely, fix $\mu\in M_x(X)$. If $\mu(F)>0$, let $z=r\left(\tfrac{\mu|_F}{\mu(F)}\right)$; then $z\in F$ by measure convexity of $F$ and $\mu(d)=\mu(F)a(z)$. If moreover $\mu(F)<1$, let $w=r\left(\tfrac{\mu|_{X\setminus F}}{1-\mu(F)}\right)$; decomposing $w$ by \eqref{eq:split} and using the uniqueness of the decomposition of $x=\mu(F)z+(1-\mu(F))w$ we obtain $$\Upsilon a(x)=\lambda_F(x)a(y(x))=\mu(F)a(z)+(1-\mu(F))\lambda_F(w)a(y(w))\ge \mu(F)a(z)=\mu(d)$$ (the cases $\mu(F)=0$ and $\mu(F)=1$ are obvious). Since $d$ is upper semicontinuous, $d^*(x)=\sup\{\mu(d);\, \mu\in M_x(X)\}$ by \cite[Corollary I.3.6 and the following remark]{alfsen}, and therefore $d^*(x)\le \Upsilon a(x)$. Hence $\Upsilon a=d^*$ is upper semicontinuous.
$(c)$: Assume $a\in A_b(F)\cap\Bo_1(F)$. Since the space $\Bo_1(X)$ is uniformly closed, it is enough to show that $\Upsilon a$ may be uniformly approximated by functions from $\Bo_1(X)$. To this end, fix $\varepsilon>0$. Let $0=\alpha_0<\alpha_1<\cdots<\alpha_n=1$ be a division of $[0,1]$ with $\alpha_{i}-\alpha_{i-1}<\varepsilon$, $i\in \{1,\dots, n\}$.
For each $i\in\{0,\dots, n-1\}$ we consider the set \[ M_i=\{\omega\in M_1(X);\, \omega(F)\ge \alpha_i\}. \] Then each $M_i$ is a closed set in $M_1(X)$ and $N_i=r(M_i)$ is a closed set containing $[\alpha_i\le \lambda_F<\alpha_{i+1}]$. Let $\tilde{a}\colon M_1(X)\to\mathbb R$ be defined as
$$\tilde{a}(\omega)=\omega|_F(a)=\omega(d),\quad\omega\in M_1(X).$$
Since $d\in\Bo_1(X)$, \cite[Proposition 5.30]{lmns} implies that $\tilde{a}\in\Bo_1(M_1(X))$. In particular, $\tilde{a}|_{M_i}\in \Bo_1(M_i)$ for $i\in\{0,\dots, n-1\}$.
We use Lemma~\ref{l:frag-selekce} to find a selection function $\phi_i\colon N_i\to M_i$ such that $\tilde{a}\circ \phi_i\in \Bo_1(N_i)$. We define the sought function $b$ as \[ b(x)= \begin{cases}
a(x) & x\in F,\\ (\tilde{a}\circ \phi_i)(x)& x\in [\alpha_i\le \lambda_F<\alpha_{i+1}], i\in \{0,\dots,n-1\}.\end{cases} \]
Clearly, the restriction of $b$ to each of the sets $[\alpha_i\le \lambda_F<\alpha_{i+1}]$ is of the first Borel class. By the very assumption $b|_F=a$ is also of the first Borel class. Since $\lambda_F$ is upper semicontinuous, the sets $[\alpha_i\le \lambda_F<\alpha_{i+1}]$ are of the form $U\cap H$, where $U$ open and $H$ closed. Thus $b\in \Bo_1(X)$.
We are going to show that $\norm{b-\Upsilon a}_\infty<\varepsilon$. To this end, fix $x\in X$. If $x\in F$, then $b(x)-\Upsilon a(x)=0$.
If $x\in X\setminus F$, we fix $i\in\{0,\dots,n-1\}$ with $x\in [\alpha_i\le \lambda_F<\alpha_{i+1}]$. We set $\mu=\phi_i(x)$ and decompose it as $\mu=\mu|_F+\mu|_{X\setminus F}$. Let $\nu_1$ and $\nu_2$ be maximal measures with $\mu|_F\prec \nu_1$ and $\mu|_{X\setminus F}\prec \nu_2$. Then $\nu=\nu_1+\nu_2$ is a maximal measure with $\mu\prec \nu$. Hence $r(\nu)=r(\mu)=x$. If $\mu(F)=0$, then $\nu_1=0$. If $\mu(F)>0$, then -- using the fact that $F$ is measure convex and measure extremal -- we deduce that $\nu_1$ is carried by $F$. Thus we have \[\begin{gathered}
\alpha_i\le \mu(F)=\nu_1(F)\quad \text{and}\quad\\ \nu_1(F)+\nu_2(F)=\nu(F)=\int 1_F\,\mbox{\rm d}\nu\le\int \lambda_F\,\mbox{\rm d}\nu=\lambda_F(x)<\alpha_{i+1}, \end{gathered}\]
where we used that $1_F\le\lambda_F$, that $\lambda_F$ is strongly affine (so that $\nu(\lambda_F)=\lambda_F(r(\nu))=\lambda_F(x)$) and that $x\in[\alpha_i\le \lambda_F<\alpha_{i+1}]$. We deduce that $\nu_2(F)<\alpha_{i+1}-\alpha_i$. Since $a$ is strongly affine on $F$ and $\mu|_F\prec\nu_1$, we have $\mu|_F(a)=\nu_1(a)$. Hence \[ \begin{aligned}
\abs{b(x)-\Upsilon a(x)}&=\abs{(\tilde{a}\circ \phi_i)(x)-\Upsilon a(x)}=\abs{\mu|_F(a)-\nu|_F(a)}\\
&=\abs{\nu_1(a)-\nu_1(a)-\nu_2|_F(a)}\le \nu_2(F)<\alpha_{i+1}-\alpha_i. \end{aligned} \] This completes the proof that $\norm{b-\Upsilon a}<\varepsilon$. Thus $\Upsilon a$ is the uniform limit of a sequence of functions in $\Bo_1(X)$, which implies that $\Upsilon a\in \Bo_1(X)$ as well.
$(d)$: Assume $a\in A_f(F)$. We proceed as in the proof of $(c)$; we only use \cite[Lemma 3.2]{lusp} to show that $\tilde{a}$ is fragmented and subsequently Lemma~\ref{l:frag-selekce} to find selections $\phi_i$ such that the functions $\tilde{a}\circ\phi_i$ are fragmented. The resulting function $b$ is then fragmented as well. So, $\Upsilon a$ can be uniformly approximated by fragmented functions, and hence $\Upsilon a$ is fragmented.
\end{proof}
The following statement follows from the previous lemma together with Observation~\ref{obs:operator rozsireni}.
\begin{cor}
Let $X$ be a compact convex set and let $F\subset X$ be a closed split face. Then the following assertions are valid. \begin{enumerate}[$(a)$]
\item $\Upsilon a\in \overline{A_s(X)}$ whenever $a\in \overline{A_s(F)}$.
\item $\Upsilon a\in (A_s(X))^\mu$ whenever $a\in (A_s(F))^\mu$.
\item $\Upsilon a\in (A_s(X))^\sigma$ whenever $a\in (A_s(F))^\sigma$.
\item $\Upsilon a\in (A_b(X)\cap \Bo_1(X))^\mu$ whenever $a\in (A_b(F)\cap \Bo_1(F))^\mu$.
\item $\Upsilon a\in (A_f(X))^\mu$ whenever $a\in (A_f(F))^\mu$. \end{enumerate} \end{cor}
\section{Strongly affine Baire functions}\label{sec:baire}
In this section we focus on canonical intermediate function spaces of Baire functions and multipliers on them. We start by comparing four such spaces which are closed with respect to pointwise limits of monotone sequences.
\begin{prop} \label{P:Baire-srovnani} Let $X$ be a compact convex set. Then we have the following: \begin{enumerate}[$(i)$]
\item $(A_c(X))^\mu\subset(A_1(X))^\mu\subset(A_c(X))^\sigma\subset A_{sa}(X)\cap \Ba^b(X)$.
\item Any of the inclusions from assertion $(i)$ may be strict.
\item If $X$ is a simplex, the four subspaces from assertion $(i)$ coincide. \end{enumerate} \end{prop}
To provide examples proving assertion $(ii)$ of the previous proposition we will use the following lemma.
\begin{lemma}\label{L:symetricke X} Let $X$ be a compact convex subset of a locally convex space which is symmetric (i.e., $X=-X$). Let $f\in A_b(X)$ and $x\in X$ be given. Then we have the following: \begin{enumerate}[$(i)$]
\item If $f$ is lower semicontinuous at $x$, then $f$ is upper semicontinuous at $-x$.
\item If $f$ is lower semicontinuous both at $x$ and at $-x$, then $f$ is continuous at $x$.
\item Let $(f_\alpha)$ be a non-decreasing net in $A_b(X)$ with $f_\alpha\nearrow f$. If each $f_\alpha$ is continuous at $x$, then $f$ is continuous at $x$ as well. \end{enumerate} \end{lemma}
\begin{proof}
$(i)$: Fix $\varepsilon>0$. Then there is $U$, a neighborhood of $x$, such that $f(y)>f(x)-\varepsilon$ for each $y\in U$. Then $-U$ is a neighborhood of $-x$ and for any $y\in -U$ we have
$$f(y)=2f(0)-f(-y)<2f(0)-(f(x)-\varepsilon)=f(-x)+\varepsilon,$$
where we used that $-y\in U$ and the assumption that $f$ is affine (since $0=\frac12 z+\frac12(-z)$, affinity yields $f(z)+f(-z)=2f(0)$ for each $z\in X$). This proves $(i)$.
Assertion $(ii)$ follows immediately from $(i)$.
$(iii)$ By applying $(i)$ both to $f_\alpha$ and $-f_\alpha$ we deduce that $f_\alpha$ is continuous also at $-x$. Now it follows that $f$ is lower semicontinuous both at $x$ and at $-x$, so it is continuous at $x$ by $(ii)$. \end{proof}
Now we are ready to prove the above proposition:
\begin{proof}[Proof of Proposition~\ref{P:Baire-srovnani}] Assertion $(i)$ is obvious. Let us prove assertion $(ii)$. Let $X_1=(B_{C([0,1])^*},w^*)$. Then $X_1$ is symmetric, so Lemma~\ref{L:symetricke X}$(iii)$ yields $(A_c(X_1))^\mu=A_c(X_1)$. Note that $X_1$ may be represented as the space of signed Radon measures on $[0,1]$ with total variation at most $1$. The function $f_1(\mu)=\mu(\{0\})$ clearly belongs to $A_1(X_1)$ and is not continuous. Hence $(A_c(X_1))^\mu\subsetneqq (A_1(X_1))^\mu$.
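For instance, the membership $f_1\in A_1(X_1)$ can be seen from the formula
\[
f_1(\mu)=\mu(\{0\})=\lim_{n\to\infty}\int_{[0,1]}\max(0,1-nt)\,\mbox{\rm d}\mu(t),\qquad \mu\in X_1,
\]
where each of the integrals defines a $w^*$-continuous affine function on $X_1$ and the limit exists by the dominated convergence theorem (applied to $\mu^+$ and $\mu^-$); on the other hand, $f_1$ is not continuous, since $\varepsilon_{1/n}\to\varepsilon_0$ in the $w^*$-topology while $f_1(\varepsilon_{1/n})=0\not\to1=f_1(\varepsilon_0)$.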
Further, any $f\in A_1(X_1)$ is continuous at all points of a dense $G_\delta$-subset of $X_1$. It easily follows from Lemma~\ref{L:symetricke X}$(iii)$ (using the Baire category theorem) that the same is true for any $f\in (A_1(X_1))^\mu$. Define the function $f_2$ by $f_2(\mu)=\mu([0,1]\cap \mathbb Q)$. Then $f_2(\mu)=\sum_{q\in[0,1]\cap\mathbb Q}\mu(\{q\})$ is a pointwise limit of the partial sums of this series, which are affine functions of the first Baire class, and hence $f_2\in A_2(X_1)\subset (A_c(X_1))^\sigma$. On the other hand, $f_2$ has no point of continuity. Indeed, fix $\mu\in X_1$ and a $w^*$-neighborhood $V$ of $\mu$. If $\norm{\mu}<\frac12$, we pick a rational $q$ and an irrational $a$ so close to each other that $\nu=\mu+\frac14(\varepsilon_q-\varepsilon_a)\in V$; then $\nu\in X_1$ and $f_2(\nu)-f_2(\mu)=\frac14$. If $\norm{\mu}\ge\frac12$, we write $\mu=\mu^+-\mu^-$ and take a partition of $[0,1]$ into finitely many intervals, fine enough that the following two discretizations belong to $V$: let $\nu_1$ be obtained by replacing, on each interval $I$ of the partition, $\mu^+|_I$ by the point mass $\mu^+(I)\varepsilon_q$ at a rational $q\in I$ and $\mu^-|_I$ by the point mass $\mu^-(I)\varepsilon_a$ at an irrational $a\in I$, and let $\nu_2$ be obtained by interchanging the roles of rational and irrational points. Then $\nu_1,\nu_2\in V\cap X_1$ and $f_2(\nu_1)-f_2(\nu_2)=\norm{\mu^+}+\norm{\mu^-}=\norm{\mu}\ge\frac12$. Hence the oscillation of $f_2$ on $V$ is at least $\frac14$, so $f_2$ has no point of continuity. We deduce that $(A_1(X_1))^\mu\subsetneqq (A_c(X_1))^\sigma$.
Further, let $E$ be the Banach space provided by \cite{talagrand} and $X_2=(B_{E^*},w^*)$. Then $(A_c(X_2))^\sigma\subsetneqq A_{sa}(X_2)\cap \Ba^b(X_2)$. Indeed, $E$ has the Schur property, and hence $(A_c(X_2))^\sigma=A_c(X_2)$; moreover, in \cite{talagrand} an element $\phi\in E^{**}\setminus E$ is constructed such that $\phi|_{X_2}$ is strongly affine and of the second Baire class.
Finally, if we set $X=X_1\times X_2$, it is easy to check that
$$(A_c(X))^\mu\subsetneqq(A_1(X))^\mu\subsetneqq(A_c(X))^\sigma\subsetneqq A_{sa}(X)\cap \Ba^b(X).$$
$(iii)$: Let $a\in A_{sa}(X)\cap \Ba^b(X)$ be given. Using \cite[Theorem 9.12]{lmns} we find a metrizable simplex $Y$ along with a continuous affine surjection $\varphi\colon X\to Y$ such that there exists a Baire function $\tilde{a}\colon Y\to \mathbb R$ with $a=\tilde{a}\circ \varphi$. Then $\tilde{a}$ is a strongly affine Baire function on $Y$ by \cite[Proposition 5.29]{lmns}. By \cite[Corollary]{alfsen-note-scand}, $\tilde{a}\in (A_c(Y))^\mu$. Hence $a=\tilde{a}\circ \varphi\in (A_c(X))^\mu$. \end{proof}
\begin{remark}
It is possible to consider a more detailed transfinite hierarchy of spaces of strongly affine Baire functions. In particular, we may consider spaces $(A_\alpha(X))^\mu$, $(A_{sa}(X)\cap \Ba_\alpha(X))^\mu$ or $(A_{sa}(X)\cap \Ba_\alpha(X))^\sigma$, where $\alpha\in[2,\omega_1)$ is an ordinal. It is probably not known whether all these spaces may be different. \end{remark}
Next we focus on describing the spaces of multipliers. Recall that
by Proposition~\ref{P:rovnostmulti} $M^s(H)=M(H)$ holds for any $H\subset \Ba(X)\cap A_{sa}(X)$, in particular for the four spaces addressed above. Further, by \cite[Proposition 4.4]{smith-london} we know that $M((A_c(X))^\sigma)=Z((A_c(X))^\sigma)$. We obtain from the proof of \cite[Proposition 4.9]{smith-london} that $M((A_c(X))^\mu)=Z((A_c(X))^\mu)$ as well. It seems not to be clear whether an analogous equality holds for the remaining two spaces:
\begin{ques} Let $X$ be a compact convex set.
Let $H=(A_1(X))^\mu$ or $H=\Ba(X)\cap A_{sa}(X)$. Is $M(H)=Z(H)$? \end{ques}
The four above mentioned spaces are intermediate function spaces closed under monotone limits, hence Theorem~\ref{T:integral representation H} applies to each of them. We shall look in
more detail at the respective $\sigma$-algebras.
\begin{thm}\label{T:baire-multipliers} Let $X$ be a compact convex set and let $H=A_{sa}(X)\cap\Ba(X)$. Then we have the following. \begin{enumerate}[$(a)$]
\item
$\begin{aligned}[t]
\mathcal A_H&=\mathcal A^s_H=\mathcal S_H\\&=\{ F\cap \ext X ;\, F\mbox{ is a split face, $F$ and $F'$ are Baire and measure convex}\}. \end{aligned}$
\noindent{}If $X$ is a simplex, then $$\mathcal A_H=\mathcal Z_H=\{[f=1]\cap\ext X;\, f\in A_{sa}(X)\cap\Ba(X), f(\ext X)\subset\{0,1\}\}.$$
\item $M(H')\subset M(H)$ for each intermediate function space $H' \subset H $. \end{enumerate}
\end{thm}
\begin{proof}
$(a)$: Equality $\mathcal A_H=\mathcal A_H^s$ follows from Theorem~\ref{T:integral representation H}$(ii)$ using the fact that $M(H)=M^s(H)$. Inclusion $\mathcal A_H^s\subset\mathcal S_H$ follows from Theorem~\ref{T:meritelnost-strongmulti}. Inclusion `$\subset$' from the last equality is obvious.
Finally, assume that $F\subset X$ is a split face such that both $F$ and $F'$ are Baire and measure convex. It follows from Proposition~\ref{p:shrnuti-splitfaceu-baire} that $\lambda_F\in M(H)$. Thus $F\cap\ext X\in \mathcal A_H$ by the very definition of $\mathcal A_H$.
If $X$ is a simplex, Lemma~\ref{L:SH=ZH} yields $\mathcal S_H=\mathcal Z_H$ and so the proof is complete.
$(b)$: This follows from $(a)$ and Proposition~\ref{P: silnemulti-inkluze}. \end{proof}
We continue by a comparison of several spaces of multipliers.
\begin{prop}\label{P:baire-multipliers-inclusions} Let $X$ be a compact convex set. Then we have the following inclusions:
$$\begin{array}{ccccc}
M(A_c(X))&\subset& M((A_c(X))^\mu)&\subset &M((A_c(X))^\sigma)\subset M(A_{sa}(X)\cap \Ba(X))\\
\cap&&&\nesubset&\\
M(A_1(X))&\subset&M((A_1(X))^\mu)&&\end{array}$$
\end{prop}
\begin{proof}
First recall that by Proposition~\ref{P:rovnostmulti} for all spaces in question multipliers and strong multipliers coincide. Thus inclusions $M(A_c(X))\subset M((A_c(X))^\mu)\subset M((A_c(X))^\sigma)$ follow from Proposition~\ref{p:multi-pro-mu} (one can use also the old results of \cite{edwards,smith-london}).
Further, inclusion $M(A_c(X))\subset M(A_1(X))$ follows from Remark~\ref{rem:Ms(H1)}.
Inclusions $M(A_1(X))\subset M((A_1(X))^\mu)\subset M((A_c(X))^\sigma)$ follow from Proposition~\ref{p:multi-pro-mu} applied to $H=A_1(X)$.
Finally, inclusion $M((A_c(X))^\sigma)\subset M(A_{sa}(X)\cap\Ba(X))$ follows from Theorem~\ref{T:baire-multipliers}$(b)$. \end{proof}
We continue by two natural open problems.
\begin{ques}
Let $X$ be a compact convex set. Is $M((A_c(X))^\mu)\subset M((A_1(X))^\mu)$? \end{ques}
Note that this inclusion is missing in Proposition~\ref{P:baire-multipliers-inclusions}. We have no idea how to attack it.
\begin{ques}
Let $X$ be a compact convex set. Is $M((A_c(X))^\mu)= M(A_{sa}(X)\cap\Ba(X))$? \end{ques}
By Proposition~\ref{P:baire-multipliers-inclusions} we know that inclusion `$\subset$' holds, but we do not know any counterexample to the converse inclusion. Note that in two extreme cases the equality holds: If $X$ is a simplex, the equality follows from Proposition~\ref{P:Baire-srovnani}$(iii)$. On the other hand, if $X$ is symmetric, inclusions from Proposition~\ref{P:Baire-srovnani}$(i)$ may be strict, but in this case the spaces of multipliers are the same and trivial by Example~\ref{ex:symetricka}.
\iffalse \begin{remark}
\label{r:simplex-basa}
In case $X$ is a simplex, the family $\mathcal A$ in Theorem~\ref{T:baire-multipliers}(b) can be alternatively described by the following observation. A set $F\subset X$ is a split face such that $F$ and $F'$ are Baire and measure convex if and only if there exists a strongly affine Baire function $m\colon X\to [0,1]$ such that $F=[m=1]$ and
$m(\ext X)\subset \{0,1\}$.
To see this, we first observe that given a Baire measure convex split face $F$ whose complementary face $F'$ is also Baire and measure convex, the function $m=\lambda_F$ is strongly affine and Baire by Proposition~\ref{p:shrnuti-splitfaceu-baire}.
On the other hand, given a strongly affine Baire function $m\colon X\to [0,1]$ with $F=[m=1]$ and $m(\ext X)\subset \{0,1\}$, then $F$ is a Baire measure convex and measure extremal face as well as the set $[m=0]$. Since $[m=1]\cup [m=0]$ is a Baire set containing $\ext X$, it carries every maximal measure. By Lemma~\ref{l:complementarni-facy}(i), $[m=0]=F'$. From Corollary~\ref{c:simplex-facejesplit} we get that $F$ is a split face. \end{remark} \fi
\iffalse Theorem~\ref{T:baire-multipliers}(b) enables us to show that the split faces creating the family $\mathcal A$ in Theorem~\ref{T:baire-multipliers}(b) form a Boolean sublattice of the family $2^X$ of all subsets of $X$.
\begin{cor} \label{c:splitfacy-lattice} Let $H=A_{sa}(X)\cap B^b(X)$ and $\mathcal{A}$ be the system described in Theorem~\ref{T:baire-multipliers}(b). Let $\mathcal F$ denote the family of all Baire measure convex split faces in $X$ such that its complementary face is also Baire and measure convex.
Then there is a bijection $R$ between the family $\mathcal F$ and $\mathcal A$ given by $R(F)=F\cap \ext X$. Moreover, the family $(\mathcal F,\subset)$ is a $\sigma$-complete Boolean lattice with the operations given by \[ \begin{aligned} F_1\vee F_2&=R^{-1}(R(F_1)\cup R(F_2)),\\ F_1\wedge F_2&= R^{-1}(R(F_1)\cap R(F_2)), \end{aligned} \quad F_1,F_2\in \mathcal F. \] \end{cor}
\begin{proof} We know from the proof of Theorem~\ref{T:baire-multipliers}(b) that $\mathcal A$ is equal precisely to the traces of split faces from $\mathcal F$. Hence $R\colon \mathcal F\to \mathcal A$ given by the restriction to $\ext X$ is a bijection.
We observe that, given $F,G\in\mathcal F$, then $F\subset G$ if and only if $F\cap \ext X\subset G\cap \ext X$. To see this, let $m_F, m_G\in M^s(H)$ be the strong multipliers satisfying $[m_F=1]\cap \ext X=F\cap \ext X$ and $[m_G=1]\cap \ext X=G\cap\ext X$. Then $m_F\le m_G$ on $\ext X$, which implies $m_F\le m_G$ on $X$ by the minimum principle. If $x\in F$, then $1=\Upsilon_F(1|_F)=m_F(x)$. Hence $m_G(x)=1$, which implies $x\in G$.
If $F_n\in\mathcal F, n\in\mathbb N$, are now given, the face $F=R^{-1}(\bigcup_{n=1}^\infty R(F_n))$ is an element of $\mathcal F$ satisfying $F\cap \ext X=\bigcup_{n=1}^\infty(R(F_n)\cap \ext X)$. By the considerations above, $F$ is the supremum of the family $\{F_n;\, n\in\mathbb N\}$ in $(\mathcal F,\subset)$.
Analogously we would construct the infimum of the family $\{F_n;\, n\in\mathbb N\}$ by taking $F=R^{-1}(\bigcap_{n=1}^\infty R(F_n))$. \end{proof} \fi
Let us now pass to the space $A_1(X)$. We start by the following easy consequence of Corollary~\ref{cor:hup+hdown pro A1aj}.
\begin{prop}\label{P:A1 determined}
Let $X$ be a compact convex set. Then a bounded function on $\ext X$ may be extended to an element of $M(A_1(X))=M^s(A_1(X))$ if and only if it is $\mathcal A_{A_1(X)}$-measurable. I.e., the space $M(A_1(X))=M^s(A_1(X))$ is determined by $\mathcal A_{A_1(X)}=\mathcal A^s_{A_1(X)}$. \end{prop}
It is not clear whether this characterization can be improved:
\begin{ques}\label{q:A1-split}
Let $X$ be a compact convex set and let $H=A_1(X)$. Is $\mathcal A_H=\mathcal S_H$? Or, at least, is $A_1(X)$ determined by $\mathcal S_H$? \end{ques}
The answer is positive for the class of simplices addressed in Section~\ref{sec:stacey} (see Proposition~\ref{P:dikous-A1-new}). We further get a better characterization for compact convex sets $X$ with $\ext X$ being Lindel\"of\ resolvable:
\begin{thm}
\label{t:a1-lindelof-h-hranice}
Let $X$ be a compact convex set such that $\ext X$ is a Lindel\"of\ resolvable set and let $H=A_1(X)$. Then $M(H)=M^s(H)$ is determined by
\[
\begin{aligned}
\mathcal B=\{\ext X \setminus F;\, &F\text{ is a }\operatorname{Coz}_\delta\text{ measure convex split face}\\
&\text{such that }F'\text{ is Baire and measure convex}\}.
\end{aligned}
\]
Consequently, $M(H)=M^s(H)$ is determined by $\mathcal S_H$, and hence for each intermediate function space $H' \subset H$, $M(H') \subset M(H)$. \end{thm}
\begin{proof} We first prove the `consequently' part. It is easy to see that $\mathcal S_H \subset \mathcal B$. By Theorem~\ref{T:meritelnost-strongmulti} we know that $\mathcal A^s_H\subset\mathcal S_H$. By the first part of the theorem, $M^s(H)$ is determined by $\mathcal B$. By Proposition~\ref{P:A1 determined} we deduce that $M^s(H)$ is determined by $\mathcal A^s_H$. Thus $M^s(H)$ is determined by $\mathcal S_H$. The rest follows from Proposition \ref{P: silnemulti-inkluze}.
To prove the first part of the statement, we use Proposition~\ref{P:meritelnost multiplikatoru pomoci topologickych split facu} for $H=A_1(X)$ and $T=\Ba_1^b(\ext X)$. First we need to verify that $T$ satisfies conditions $(i)-(iii)$ of the quoted proposition. Condition $(i)$ follows easily from the fact that Baire-one functions are stable with respect to the pointwise multiplication. Condition $(ii)$ follows from Lemma~\ref{L:YupcapYdown=Y} and condition $(iii)$ is obvious. Further, the formula for $H$ follows from \cite[Theorem 6.4]{lusp} due to the assumption on $\ext X$.
Therefore, the assumptions of Proposition~\ref{P:meritelnost multiplikatoru pomoci topologickych split facu} are satisfied, hence $M^s(H)$ is determined by $\mathcal B^s_H$. To complete the proof it is enough to show that $\mathcal B\subset\mathcal B^s_H$. So, fix $E\in \mathcal B$. Let $F$ be a $\operatorname{Coz}_\delta$ measure convex split face with $F'$ Baire and measure convex such that $E=\ext X\setminus F$. Let $(U_n)$ be a decreasing sequence of cozero subsets of $X$ with $\bigcap_n U_n=F$. Then $1_{U_n\cap\ext X}\in T$ for each $n\in\mathbb N$ and hence $1_{F'\cap\ext X}\in T^\uparrow$.
Further, by Proposition~\ref{p:shrnuti-splitfaceu-baire}, $\Upsilon_{F'}(a|_{F'})\in A_{sa}(X)\cap\Ba(X)$ for each $a\in H$. Thus $E=F'\cap\ext X\in\mathcal B^s_H$. This concludes the proof. \end{proof}
\begin{remarks}\label{r:fsigma-resolvable} (1) By Proposition~\ref{p:postacproa1}, $Z(A_1(X))=M(A_1(X))$ in case $\ext X$ is Lindel\"of\ resolvable set. Hence we have obtained a measurable characterization of $Z(A_1(X))$ in Theorem~\ref{t:a1-lindelof-h-hranice} for this class of compact convex sets.
(2) In case $X$ is metrizable the assumptions on $X$ in Theorem~\ref{t:a1-lindelof-h-hranice} are equivalent to the fact that $\ext X$ is an $F_\sigma$ set. Indeed, if $\ext X$ is an $F_\sigma$ set in $X$, its characteristic function is Baire-one (as $\ext X$ is a $G_\delta$ set) and thus $1_{\ext X}$ is a fragmented function. Conversely, if $1_{\ext X}$ is a fragmented function, by Theorem B we see that $\ext X$ is both $F_\sigma$ and $G_\delta$.
(3) We also point out that the assumption in Theorem~\ref{t:a1-lindelof-h-hranice} is satisfied provided $\ext X$ is an $F_\sigma$ set. To see this it is enough to show that $\ext X$ is a resolvable set in $X$. To this end, assume that $F\subset X$ is a nonempty closed set such that both $F \cap \ext X$ and $F\setminus \ext X$ are dense in $F$. By \cite[Th\'eor\`eme 2]{tal-kanal}, we can write $\ext X=\bigcap_{n=1}^\infty (H_n \cup V_n)$, where $H_n\subset X$ is closed and $V_n\subset X$ is open, $n \in\mathbb N $. Thus both $F\setminus \ext X$ and $F\cap \ext X$ are comeager disjoint sets in $F$, in contradiction with the Baire category theorem. Hence $\ext X$ is a resolvable set. \end{remarks}
\section{Beyond Baire functions}\label{sec:beyond}
In this section we investigate intermediate function spaces which are not necessarily contained in the space of Baire functions. More precisely, we consider functions derived from semicontinuous affine functions, from affine functions of the first Borel class or from fragmented affine functions.
\subsection{Comparison of the spaces} We start by collecting basic properties and mutual relationship of these spaces. Before formulating the first proposition, we introduce another piece of notation:
If $K$ is a compact space, we denote by \gls{Lb(K)} the space of differences of bounded lower semicontinuous functions on $K$.
\begin{prop}\label{P:vetsi prostory srov} Let $X$ be a compact convex set. Then the following assertions are valid.
\begin{enumerate}[$(a)$]
\item $\begin{array}[t]{ccccccc}
A_s(X) &\subset &\overline{A_s(X)}&\subset& A_b(X)\cap\Bo_1(X) & \subset & A_f(X) \\
& & \cap & & \cap & & \cap \\
&& (A_s(X))^\mu & \subset &(A_b(X)\cap \Bo_1(X))^\mu & \subset & (A_f(X))^\mu \\
&& \cap &&&& \cap \\
&& (A_s(X))^\sigma&&\subset && A_{sa}(X).
\end{array}$
\item All the inclusions in assertion $(a)$ except for $(A_s(X))^\mu\subset (A_b(X)\cap \Bo_1(X))^\mu$ and $(A_s(X))^\mu\subset (A_s(X))^\sigma$ may be strict even if $X$ is a Bauer simplex.
\item Inclusion $(A_s(X))^\mu\subset (A_b(X)\cap \Bo_1(X))^\mu$ may be strict even if $X$ is a simplex, but in case $X$ is a Bauer simplex, the equality holds.
\item Inclusion $(A_s(X))^\mu\subset (A_s(X))^\sigma$ may be strict, but if $X$ is a simplex, the equality holds.
\end{enumerate} \end{prop}
Before passing to the proof itself, we give the following easy lemma.
\begin{lemma}\label{L:borel=Lbmu}
Let $L$ be a compact space.
\begin{enumerate}[$(a)$]
\item $\Lb(L)\subset\overline{\Lb(L)}\subset\Bo^b_1(L)$;
\item $(\Lb(L))^\mu=(\Lb(L))^\sigma=(\Bo_1^b(L))^\mu=(\Bo_1^b(L))^\sigma=\Bo^b(L)$.
\end{enumerate} \end{lemma}
\begin{proof}
Assertion $(a)$ is well known. Let us prove $(b)$. Since both $\Lb(L)$ and $\Bo^b_1(L)$ are lattices, we deduce that $(\Lb(L))^\mu=(\Lb(L))^\sigma$ and $(\Bo_1^b(L))^\mu=(\Bo_1^b(L))^\sigma$. Further, we clearly have
$(\Lb(L))^\sigma\subset (\Bo_1^b(L))^\sigma\subset \Bo^b(L)$.
Next we observe that if $G\subset L$ is open, then $1_G\in \Lb(L)$, because this function is lower semicontinuous. Thus the set \[ \{B\subset L;\, 1_B\in (\Lb(L))^\sigma\} \] contains all open sets and is easily seen to be a $\sigma$-algebra. Hence $1_B\in (\Lb(L))^\sigma$ for each Borel set $B\subset L$, which implies $f\in (\Lb(L))^\sigma$ for each bounded Borel function $f$ on $L$ (as simple functions are dense in $\Bo^b(L)$). \end{proof}
Now we proceed to the proof of the above proposition:
\begin{proof}[Proof of Proposition~\ref{P:vetsi prostory srov}.]
Assertion $(a)$ is clear.
$(c)$: If $X$ is a Bauer simplex, the equality follows from Lemma~\ref{L:borel=Lbmu}$(b)$ using Lemma~\ref{L:function space}. An example showing that the inclusion may be strict for a simplex $X$ is provided by Proposition~\ref{p:vztahy-multi-ifs}$(a)$ below.
$(d)$: If $X$ is a simplex, the equality is proved in \cite[p. 104]{smith-london} (using results of \cite{krause}).
An example when the inclusion is proper is provided by Lemma~\ref{L:symetricke X}. Indeed, let $X$ be a symmetric compact convex set. By assertions $(ii)$ and $(iii)$ of the quoted lemma we have $(A_s(X))^\mu=(A_c(X))^\mu=A_c(X)$ and $(A_s(X))^\sigma=(A_c(X))^\sigma$. Thus if we take $X$ to be (for example) the dual unit ball of the Banach space $E=c_0$, then \[ (A_s(X))^\sigma=(A_c(X))^\sigma\neq A_c(X)=(A_c(X))^\mu=(A_s(X))^\mu. \]
$(b)$: We will use Lemma~\ref{L:function space}. Therefore we first look at inclusions of the respective spaces of bounded functions on a compact space.
Let $K_1=[0,1]$. It follows from \cite[Proposition 5.1]{odell-rosen} that there are two functions $f_1,f_2:K_1\to \mathbb R$ such that
$f_1\in\overline{\Lb(K_1)}\setminus\Lb(K_1)$ and $f_2\in\Ba_1^b(K_1)\setminus\overline{\Lb(K_1)}$. Further, let $A\subset K_1$ be an analytic set which is not Borel. Then $f_3=1_A$ is universally measurable but not Borel, hence neither in $(\Fr^b(K_1))^\mu$ nor in $(\Lb(K_1))^\sigma$ (as $K_1$ is metrizable). The function $f_4=1_{K_1\cap\mathbb Q}$ is Borel but not fragmented.
Further, let $K_2$ be the long line, i.e., $K_2=[0,\omega_1)\times[0,1)\cup\{(\omega_1,0)\}$ equipped with the order topology induced by the lexicographic order. Let $S\subset [0,\omega_1)$ be a stationary set whose complement is also stationary (i.e., both $S$ and its complement intersect each closed unbounded subset of $[0,\omega_1)$). We define the following functions on $K_2$: \begin{itemize}
\item Let
$$g_1(\alpha,t)=\begin{cases}
1 & \alpha\in S,\\ 0 & \mbox{ otherwise}.
\end{cases}$$
Then $g_1$ is a fragmented non-Borel function on $K_2$.
\item Let $$g_2(\alpha,t)=\begin{cases}
1 & \alpha\in S, t\in\mathbb Q,\\ 0 &\mbox{otherwise}.
\end{cases}$$
Then $g_2$ belongs to $(\Fr^b(K_2))^\mu$ but it is neither fragmented nor Borel. \end{itemize}
Now we conclude using Lemma~\ref{L:function space}. Indeed, for $i=1,2$ we set $E_i=C(K_i)$, $X_i=S(E_i)=M_1(K_i)$ and let $V_i$ be the operator from Lemma~\ref{L:function space}. Then we have: $$\begin{gathered}
V_1(f_1)\in \overline{A_s(X_1)}\setminus A_s(X_1), V_1(f_2)\in A_b(X_1)\cap\Bo_1(X_1)\setminus \overline{A_s(X_1)},\\
V_2(g_1)\in A_f(X_2)\setminus (A_b(X_2)\cap\Bo_1(X_2))^\mu,
V_1(f_4)\in (A_b(X_1)\cap\Bo_1(X_1))^\mu\setminus A_f(X_1),\\
V_2(g_2)\in (A_f(X_2))^\mu\setminus (A_f(X_2)\cup (A_b(X_2)\cap\Bo_1(X_2))^\mu),\\
V_1(f_3)\in A_{sa}(X_1)\setminus \bigl((A_f(X_1))^\mu\cup(A_s(X_1))^\sigma\bigr). \end{gathered}$$
Since $(A_s(X_i))^\mu=(A_s(X_i))^\sigma=(A_b(X_i)\cap\Bo_1(X_i))^\mu$ (by $(c)$ and $(d)$), we deduce that no more inclusions hold. \end{proof}
\begin{remarks} \label{r:as-sigma-mu}
(1) Assertion $(b)$ of Proposition~\ref{P:vetsi prostory srov} in particular shows that $A_s(X)$ need not be a Banach space, even if $X$ is a Bauer simplex. This answers a question in \cite[p. 100]{smith-london}.
(2) Assertion $(d)$ of Proposition~\ref{P:vetsi prostory srov} answers in the negative a question asked in \cite[p. 104]{smith-london}. \end{remarks}
In view of the validity of $(A_s(X))^\sigma=(A_s(X))^\mu$ for a simplex, the following question is natural:
\begin{ques}
Assume $X$ is a simplex. Do the equalities $(A_b(X)\cap\Bo_1(X))^\sigma=(A_b(X)\cap\Bo_1(X))^\mu$ and $(A_f(X))^\sigma=(A_f(X))^\mu$ hold? \end{ques}
We note that these equalities clearly hold if $X$ is a Bauer simplex and also for simplices addressed in Section~\ref{sec:stacey} below (see Proposition~\ref{P:dikous-Afmu-new}$(a)$ and Proposition~\ref{P:dikous-bo1-mu-new}$(a)$). But the method used for $A_s(X)$ in \cite{krause} fails.
We continue by a comparison of the spaces addressed in this section with spaces of Baire functions from the previous section.
\begin{prop}\label{P:srovnani baire a vetsich}
Let $X$ be a compact convex set. Then the following assertions are valid.
\begin{enumerate}[$(a)$]
\item $\begin{array}[t]{ccccccc}
A_s(X)&\subset&\overline{A_s(X)}&\subset& (A_s(X))^\mu & \subset & (A_s(X))^\sigma \\
\cup &&&&\cup&&\cup \\
A_c(X)&&\subset&&(A_c(X))^\mu&\subset&(A_c(X))^\sigma \\
&\sesubset&&&\cap&\nesubset&\\
&& A_1(X)&\subset&(A_1(X))^\mu&& \\
&&\cap&&\cap&& \\
&&A_b(X)\cap\Bo_1(X)&\subset&(A_b(X)\cap\Bo_1(X))^\mu.&&
\end{array}$
No more inclusions between spaces of Baire functions and the other ones are valid in general.
\item Assume that $X$ is metrizable. Then:
$$\begin{array}{ccccccc}
A_s(X) &\subset &\overline{A_s(X)}&\subset&(A_c(X))^\mu & = & (A_s(X))^\mu \\
\cup&&\cap&&\cap &&\cap\\
A_c(X)&&A_1(X)&\subset&(A_1(X))^\mu &\subset& (A_c(X))^\sigma
\\
&&\parallel&&\parallel &&\parallel\\
&&A_b(X)\cap\Bo_1(X)&\subset&(A_b(X)\cap\Bo_1(X))^\mu &\subset& (A_s(X))^\sigma\\
&&\parallel&&\parallel &&\parallel\\
&&A_f(X)&\subset&(A_f(X))^\mu &\subset& (A_b(X)\cap\Bo_1(X))^\sigma
\\
&&&&&&\parallel\\
&&&&&& (A_f(X))^\sigma.
\end{array}$$
No more inclusions are valid in general.
\end{enumerate} \end{prop}
\begin{proof}
$(a)$: The validity of inclusions is clear. To illustrate that the relevant inclusions may be proper, even for a Bauer simplex, we use the long line, i.e., the compact space $K_2$ from the proof of Proposition~\ref{P:vetsi prostory srov}$(b)$. The function $f=1_{\{(\omega_1,0)\}}$ is upper semicontinuous and hence of the first Borel class, but it is not a Baire function.
Hence $V_2(f)\in A_s(X_2)\setminus (A_c(X_2))^\sigma$ and also $V_2(f)\in A_b(X_2)\cap\Bo_1(X_2)\setminus (A_c(X_2))^\sigma$.
$(b)$: Assume $X$ is metrizable. Then clearly $A_l(X)\subset (A_c(X))^\mu$, so $(A_s(X))^\mu=(A_c(X))^\mu$. The remaining equalities follow from Theorem~\ref{T:b}$(b)$. No more inclusions hold by Proposition~\ref{P:Baire-srovnani} and the proof of Proposition~\ref{P:vetsi prostory srov}$(b)$ (note that the space $X_1$ is metrizable). \end{proof}
\subsection{On spaces derived from semicontinuous affine functions} Now we pass to the investigation of multipliers of the spaces of our interest. We start by looking at the spaces derived from semicontinuous functions.
\begin{prop}\label{P:multi strong pro As}
Let $X$ be a compact convex set and let $H\subset (A_s(X))^\sigma$ be an intermediate function space. Then $M(H)=M^s(H)$. \end{prop}
\begin{proof} The proof will be done in three steps:
{\tt Step 1:} Assume that $m\in M(H)$ satisfies $m(\ext X)\subset\{0,1\}$. We will prove that $m\in M^s(H)$.
Set $F=[m=1]$ and $E=[m=0]$. Then both $F$ and $E$ are measure convex and measure extremal disjoint faces. First we shall verify that $F\cup E$ carries every maximal measure. To this end, let $\mu\in M_1(X)$ maximal be given. Let $(\ext X, \Sigma_\mu, \widehat{\mu})$ be the completion of the measure space provided by Lemma~\ref{L:miry na ext}. Denote $c=\widehat{\mu}(F\cap \ext X)$ (note that then $1-c=\widehat{\mu}(E\cap \ext X)$). By equation~\eqref{eq:mira1}, there exist closed extremal sets $F_n$, $n\in\mathbb N$, such that $F_n\cap \ext X\subset F\cap \ext X$ and $\mu(F_n)=\widehat{\mu}(F_n\cap \ext X)\nearrow c$.
We claim that $F_n\subset F$. To see this, consider $x\in F_n$ and a maximal representing measure $\nu\in M_x(X)$. Let $(\ext X,\Sigma_\nu, \widehat{\nu})$ be the completion of the measure space provided by Lemma~\ref{L:miry na ext} for $\nu$. Since $F_n$ is measure extremal, \[ 1=\nu(F_n)=\widehat{\nu}(F_n\cap \ext X)\le \widehat{\nu}(F\cap \ext X)\le 1. \] Hence $m(x)=\int_{\ext X} m\,\mbox{\rm d}\widehat{\nu}=1$, i.e., $x\in F$.
From this observation it now follows that $c=\lim \mu(F_n)\le \mu(F)$. Similarly we obtain $\mu(E)\ge 1-c$. Hence $\mu(F)=c$ and $\mu(E)=1-c$, which gives that $F\cup E$ carries $\mu$.
We will show that $m$ is a strong multiplier. To this end, let $f\in H$ be given and let $g\in H$ be such that $g=mf$ on $\ext X$. Let $\mu\in M_1(X)$ be any maximal measure. We need to check that $\mu([g=mf])=1$. This will be achieved by showing that $F\cup E\subset [g=mf]$. So let $x\in F$ be given. We select a maximal measure $\nu\in M_x(X)$ and consider the induced measure $\widehat{\nu}$. Then \[ 1=m(x)=\int_{\ext X} m\,\mbox{\rm d}\widehat{\nu}=\widehat{\nu}(F\cap \ext X). \] Thus \[ \begin{aligned} g(x)&=\int_{\ext X}g\,\mbox{\rm d}\widehat{\nu}=\int_{\ext X} mf\,\mbox{\rm d}\widehat{\nu}=\int_{F\cap \ext X} f\,\mbox{\rm d}\widehat{\nu}=\int_{\ext X} f\,\mbox{\rm d}\widehat{\nu}\\ &=f(x)=m(x)f(x). \end{aligned} \] Similarly we obtain $E\subset [g=mf]$. Hence $m$ is a strong multiplier of $H$.
{\tt Step 2:} Assume $H^\mu=H$. Then $M(H)=M^s(H)$.
By Step 1 we deduce that $\mathcal A_H\subset\mathcal A^s_H$. Theorem~\ref{T:integral representation H}$(ii)$ then yields that $M(H)=M^s(H)$.
{\tt Step 3:} $M(H)=M^s(H)$ for general $H$.
Assume $H\subset (A_s(X))^\sigma$ and $m\in M(H)$. Then $H^\mu\subset (A_s(X))^\sigma$. By Proposition~\ref{p:multi-pro-mu}$(i)$ we get $m\in M(H^\mu)$, so by Step 2 we deduce $m\in M^s(H^\mu)$. Let us now check that $m\in M^s(H)$:
Fix $f\in H$. Then there is $g\in H$ such that $g=mf$ on $\ext X$. Simultaneously, there is $h\in H^\mu$ such that $h=mf$ $\mu$-almost everywhere for each maximal $\mu\in M_1(X)$. Finally, since $H^\mu$ is determined by extreme points, we get $h=g$. This completes the proof. \end{proof}
We continue by looking at the relationship of the spaces of multipliers and centers.
\begin{prop}\label{P:centrum pro As}
Let $X$ be a compact convex set and let $H$ be any of the spaces
$$\overline{A_s(X)}, (A_s(X))^\mu, (A_s(X))^\sigma.$$
Then
$$Z(H)=M(H)=M^s(H).$$ \end{prop}
\begin{proof}
Equality $M(H)=M^s(H)$ follows from Proposition~\ref{P:multi strong pro As}. Inclusion $M(H)\subset Z(H)$ follows from Proposition~\ref{P:mult} and Theorem~\ref{t:assigma-deter}. It remains to prove $Z(H)\subset M(H)$.
The case $H=(A_s(X))^\mu$ is proved in \cite[Proposition 4.9]{smith-london}. We will adapt the proof for the other cases:
Let $T\in\mathfrak{D}(H)$ be arbitrary. Without loss of generality we assume $0\le T\le I$. Set $m=T(1)$. By Corollary~\ref{cor:rovnost na ext} we get
$$\forall f\in A_c(X)\colon T(f)=mf\mbox{ on }\ext X.$$
By the argument of \cite[Proposition 4.9]{smith-london} we deduce
$$\forall f\in A_l(X)\colon T(f)=mf\mbox{ on }\ext X.$$
Since $T$ is a bounded linear operator, we get
$$\forall f\in \overline{A_s(X)}\colon T(f)=mf\mbox{ on }\ext X.$$
This completes the proof if $H=\overline{A_s(X)}$.
As in \cite[Proposition 4.9]{smith-london} we infer
$$\forall f\in (A_s(X))^\mu\colon T(f)=mf\mbox{ on }\ext X,$$
which proves the case $H=(A_s(X))^\mu$.
Finally, assume $H=(A_s(X))^\sigma$ and consider
$$\mathcal F=\{ f\in H;\, \exists g\in H\colon g=mf\mbox{ on }\ext X\}.$$
It is clearly a linear space. By the above it contains $A_s(X)$. It remains to prove that it is closed with respect to pointwise limits of bounded convergent sequences. So, assume $(f_n)$ is a bounded sequence in $\mathcal F$ pointwise converging to some $f\in H$. For each $n\in\mathbb N$ let $g_n\in H$ be such that $g_n=mf_n$ on $\ext X$.
Let $x\in X$ be arbitrary. Let $\mu$ be a maximal measure representing $x$ and let $\widehat{\mu}$ be the respective measure provided by Lemma~\ref{L:miry na ext}. Then
$$g_n(x)= \int_{\ext X} g_n\,\mbox{\rm d}\widehat{\mu}= \int_{\ext X} mf_n\,\mbox{\rm d}\widehat{\mu} \overset{n}{\longrightarrow} \int_{\ext X} mf\,\mbox{\rm d}\widehat{\mu}.$$
Thus the bounded sequence $(g_n)$ pointwise converges to some function $g$, which belongs to $H$ since $H=(A_s(X))^\sigma$ is closed under limits of bounded pointwise convergent sequences. Applying the above computation to $x\in\ext X$ we deduce $g=mf$ on $\ext X$. Thus $f\in\mathcal F$. Hence $\mathcal F=H$, i.e., $m\in M(H)$, which completes the proof.
\end{proof}
For a simplex, the situation is easier:
\begin{prop}\label{P:As pro simplex}
Let $X$ be a simplex and let $H$ be any of the spaces
$$\overline{A_s(X)}, (A_s(X))^\mu, (A_s(X))^\sigma.$$
Then $M(H)=H$. In particular, in this case
$$\mathcal A_H=\mathcal S_H=\mathcal Z_H.$$ \end{prop}
\begin{proof}
Assume $X$ is a simplex. By Proposition~\ref{P:vetsi prostory srov}$(d)$ we have $(A_s(X))^\mu=(A_s(X))^\sigma$. Further, by \cite[Lemma 2.3]{krause} the spaces $A_s(X)$ and $(A_s(X))^\mu$ are lattices, if the lattice operations are defined pointwise on $\ext X$ (i.e., the images under the restriction operator $R$ from Proposition~\ref{P:algebra ZH} are sublattices of $\ell^\infty(\ext X)$). This property clearly passes to $\overline{A_s(X)}$. So, we conclude by Proposition~\ref{P:algebra ZH}$(b)$.
The `in particular' part follows from Proposition~\ref{p:system-aha}$(c)$. \end{proof}
The previous proposition inspires the following question.
\begin{ques}
Let $X$ be a compact convex set. Is $M((A_s(X))^\mu)=M((A_s(X))^\sigma)$? \end{ques}
Note that the answer is trivially positive if $X$ is a simplex by Proposition~\ref{P:vetsi prostory srov}$(d)$.
It is further not clear whether some of the properties of the spaces generated by semicontinuous affine functions pass to larger spaces. Let us formulate the relevant questions:
\begin{ques}\label{q:m=ms}
Let $X$ be a compact convex set and let $H$ be any of the spaces
$$A_b(X)\cap \Bo_1(X),A_f(X),(A_b(X)\cap \Bo_1(X))^\mu,(A_f(X))^\mu.$$
Is it true that $M(H)=M^s(H)$? \end{ques}
Note that Example~\ref{ex:dikous-mezi-new} below provides an example of an intermediate function space $H$ on a simplex $X$ such that $M^s(H)\subsetneqq M(H)$. Moreover, $H^\mu=H$ and $H\subset (A_b(X)\cap\Bo_1(X))^\mu$. However, we know of no counterexample among the natural intermediate function spaces named above (in particular, for the special simplices from Section~\ref{sec:stacey} the answer is positive by Proposition~\ref{P:shrnutidikousu}).
\begin{ques}
Let $X$ be a compact convex set and let $H$ be any of the spaces
$$A_b(X)\cap \Bo_1(X),A_f(X),(A_b(X)\cap \Bo_1(X))^\mu,(A_f(X))^\mu.$$
Is it true that $Z(H)=M(H)$? \end{ques}
By Corollary~\ref{cor:iotax} the answer is positive if $X$ is a simplex (or, more generally, if each $x\in \ext X$ forms a split face of $X$). But it is not clear whether this assumption is needed.
\subsection{More on measurability of strong multipliers} Now we look in more detail at the strong multipliers and the systems $\mathcal A^s_H$ for natural choices of $H$. We start by clarifying for which of the spaces that are not closed under monotone limits the multipliers are determined by measurability on $\ext X$.
\begin{prop}\label{P:determ-Afaj}
Let $X$ be a compact convex set. Then the following assertions hold.
\begin{enumerate}[$(a)$]
\item Let $H$ be one of the spaces $A_b(X)\cap\Bo_1(X)$, $A_f(X)$. Then $M(H)$ is determined by $\mathcal A_H$ and $M^s(H)$ is determined by $\mathcal A^s_H$.
\item It may happen that $H=\overline{A_s(X)}$ is not determined by $\mathcal A_H$ (even if $X$ is a simplex).
\end{enumerate} \end{prop}
\begin{proof}
Assertion $(a)$ follows from Corollary~\ref{cor:hup+hdown pro A1aj}. Assertion $(b)$ follows from Proposition~\ref{P:dikous-lsc--new}$(f)$ below. \end{proof}
We continue by two theorems providing a lower bound for systems $\mathcal A^s_H$.
\begin{thm} \label{t:mh-asfrag} Let $H$ be one of the spaces $$\overline{A_s(X)},A_b(X)\cap \Bo_1(X), A_f(X).$$ \begin{enumerate}[$(i)$]
\item Any bounded facially upper semicontinuous function on $\ext X$ may be (uniquely) extended to an element of $M^s(H)$.
\item $\mathcal A^s_H$ contains the algebra generated by the facial topology.
\item $M(A_c(X))\subset M^s(H)$. \end{enumerate}
\end{thm}
\begin{proof} $(i)$: Let $f\colon \ext X\to\mathbb R$ be bounded and facially upper semicontinuous. Since $M^s(H)$ is a linear space containing constant functions, we may assume without loss of generality that $0\le f\le 1$.
For each $n\in \mathbb N$ we set \[ C_{n,i}=\{x\in\ext X;\, f(x)\ge \tfrac{i}{2^n}\},\quad i\in\{0,\dots, 2^n\}. \] Each of these sets is facially closed, so we can find a closed split face $F_{n,i}$ with $C_{n,i}=F_{n,i}\cap \ext X$. It follows from Lemma~\ref{l:rozsir-splitface} that $\lambda_{F_{n,i}}\in M^s(H)$. Set \[ a_n=2^{-n}\sum_{i=1}^{2^n}\lambda_{F_{n,i}}. \] Then $a_n\in M^s(H)$ and \[ f(x)-2^{-n}\le a_n(x)\le f(x),\quad x\in \ext X \] (cf. the proof of \cite[Theorem II.7.2]{alfsen}). Hence $(a_n)$ is a uniformly convergent sequence on $X$ whose limit $a\in M^s(H)$ satisfies $f(x)=a(x)$ for each $x\in \ext X$. This proves the existence. Uniqueness follows from the fact that $H$ is determined by extreme points.
$(ii)$: Let $F$ be a closed split face. It follows from Lemma~\ref{l:rozsir-splitface} that $\lambda_F\in M^s(H)$.
Then $F=[\lambda_F>0]\cap \ext X$ and $\ext X\setminus F=[\lambda_F<1]\cap\ext X$ belong to $\mathcal A^s_H$. So, we conclude by applying properties of $\mathcal A^s_H$ from Proposition~\ref{p:system-aha}.
$(iii)$: Let $u\in M(A_c(X))$. By \cite[Theorem II.7.10]{alfsen} the restriction $u|_{\ext X}$ is facially continuous, hence, a fortiori, facially upper semicontinuous. By $(i)$ we deduce that $u\in M^s(H)$. \end{proof}
\begin{thm} \label{t:mh-asfragmu} Let $H$ be one of the spaces $$(A_s(X))^\mu,(A_s(X))^\sigma,(A_b(X)\cap \Bo_1(X))^\mu, (A_f(X))^\mu.$$ \begin{enumerate}[$(i)$]
\item Any bounded facially upper semicontinuous function on $\ext X$ may be (uniquely) extended to an element of $M^s(H)$.
\item $\mathcal A^s_{H}$ contains the $\sigma$-algebra of facially Borel sets.
\item $M(A_c(X))\subset M^s(H)$. \end{enumerate}
\end{thm}
\begin{proof}
Assertions $(i)$ and $(iii)$ follow from Theorem~\ref{t:mh-asfrag} together with Proposition~\ref{p:multi-pro-mu}. To prove $(ii)$ we moreover use that $\mathcal A^s_{H}$ is a $\sigma$-algebra. \end{proof}
We continue by some natural open problems.
\begin{ques}\label{q:fr-split}
Let $X$ be a compact convex set and let $H$ be one of the spaces
$$\overline{A_s(X)},(A_s(X))^\mu,(A_s(X))^\sigma,A_f(X),(A_f(X))^\mu.$$
Is $\mathcal A_H=\mathcal S_H$? \end{ques}
\begin{remarks}
(1) If $X$ is a simplex, a positive answer for the first three spaces is provided by Proposition~\ref{P:As pro simplex}. But we have no idea how to attack the general case.
(2) For the last two spaces the answer is positive for a class of simplices addressed in Section~\ref{sec:stacey} (see Proposition~\ref{P:dikous-af-new} and Proposition~\ref{P:dikous-Afmu-new} below).
(3) The analogous question for spaces $A_b(X)\cap\Bo_1(X)$ and $(A_b(X)\cap\Bo_1(X))^\mu$ has a negative answer, even if $X$ is a simplex (see Proposition~\ref{P:dikous-Bo1-new} and Proposition~\ref{P:dikous-bo1-mu-new} below). \end{remarks}
The next problem is related to inclusions between spaces of multipliers.
\begin{ques}
Let $X$ be a compact convex set. Is it true that
$$\begin{gathered}
M^s(\overline{A_s(X)})\subset M^s(A_b(X)\cap \Bo_1(X))\subset M^s(A_f(X)),\\
M((A_c(X))^\mu)\subset M^s((A_s(X))^\mu)\subset M^s((A_b(X)\cap \Bo_1(X))^\mu)\subset M^s((A_f(X))^\mu)?
\end{gathered}$$
Do the analogous inclusions hold for the spaces of multipliers? \end{ques}
We note that these inclusions hold within the class of simplices addressed in Section~\ref{sec:stacey} (see Proposition~\ref{p:vztahy-multi-ifs} below).
\subsection{Affine functions on compact convex sets with $F_\sigma$ boundary} \label{ssce:fsigma-hranice}
In this section we prove a result on transferring some topological properties of strongly affine functions from an $F_\sigma$ set containing $\ext X$ to the whole set $X$. This result will be applied
to get an analogue of Theorem~\ref{t:a1-lindelof-h-hranice} for affine functions of the first Borel class and for affine fragmented functions on compact convex sets with $F_\sigma$ boundary.
The promised transfer result is the following theorem which may be seen as a generalization of the respective cases of \cite[Theorem 3.5]{lusp} (where a transfer of properties from $\overline{\ext X}$ to $X$ is addressed). We think that a full generalization may be proved, but we restrict ourselves to functions of the first class, which is the case important for the present paper.
\begin{thm}
\label{t:prenos-fsigma}
Let $X$ be a compact convex set, $F\supset \ext X$ be an $F_\sigma$ set and $f\in A_{sa}(X)$. Then the following assertions are valid.
\begin{enumerate}[$(i)$]
\item If $f|_{F}\in \Bo_1(F)$, then $f\in \Bo_1(X)$.
\item If $f|_{F}\in \Fr(F)$, then $f\in \Fr(X)$.
\end{enumerate} \end{thm}
\begin{proof} Without loss of generality we may assume that $\norm{f}\le 1$.
$(i)$: We will show that $f$ can be approximated uniformly by functions from $\Bo_1(X)$; since $\Bo_1(X)$ is stable under uniform limits, this will yield $f\in \Bo_1(X)$. To this end, let $\eta>0$ be given. Let $r\colon M_1(X)\to X$ denote the barycenter mapping. We write $F=\bigcup_{n=1}^\infty F_n$, where the sets $F_n$ are closed and satisfy $F_1\subset F_2\subset F_3\subset F_4\subset \cdots$.
Set $M_n=\{\omega\in M_1(X);\, \omega(F_n)\ge 1-\eta\}$. Then $M_n$ are closed sets in $M_1(X)$. Further, $N_n=r(M_n)$, $n\in\mathbb N$, are closed sets in $X$ satisfying $\bigcup_{n=1}^\infty N_n=X$. (Indeed, for $x\in X$ pick a maximal measure $\nu\in M_x(X)$. Then $\nu(F)=1$, and thus there exists $n\in\mathbb N$ satisfying $\nu(F_n)\ge 1-\eta$.)
Fix $n\in\mathbb N$. Let $g_n\colon M_n\to \mathbb R$ be defined as $g_n(\mu)=\mu(f|_{F_n})$. Then $g_n\in \Bo_1(M_n)$ by Lemma~\ref{L:function space}$(b)$. Using Lemma~\ref{l:frag-selekce} we find a mapping $s_n\colon N_n\to M_n$ such that $g_n\circ s_n\in \Bo_1(N_n)$ and $r(s_n(x))=x$, $x\in N_n$. Then we have for $x\in N_n$ an estimate \[ \begin{aligned}
\abs{g_n(s_n(x))-f(x)}&=\abs{(s_n(x))(f|_{F_n})-f(r(s_n(x)))}=\abs{(s_n(x))(f|_{F_n})-s_n(x)(f)}\\ &=\abs{\int_{X\setminus F_n} f\,\mbox{\rm d} (s_n(x))}\le \eta. \end{aligned} \] Hence $\abs{g_n\circ s_n-f}\le \eta$ on $N_n$.
Now we denote $N_0=\emptyset$ and define a function $g\colon X\to \mathbb R$ as \[ g(x)=g_n(s_n(x)),\quad x\in N_{n}\setminus N_{n-1}, n\in\mathbb N. \] Then $g$ is $(F\wedge G)_\sigma$-measurable on $X$, because \[ g^{-1}(U)=\bigcup_{n=1}^\infty \left((g_n\circ s_n)^{-1}(U)\cap (N_n\setminus N_{n-1})\right) \] is of type $(F\wedge G)_\sigma$ in $X$ for every open $U\subset \mathbb R$. Thus $g\in \Bo_1(X)$ and satisfies $\norm{f-g}\le \eta$. Since $\eta>0$ is arbitrary, we deduce $f\in \Bo_1(X)$.
$(ii)$: We proceed as in $(i)$, so let $M_n$ and $N_n$ be as above. Then $g_n$ defined as in $(i)$ is a fragmented function on $M_n$ by Lemma~\ref{L:function space}$(b)$. We use again Lemma~\ref{l:frag-selekce} to obtain mappings $s_n\colon N_n\to M_n$ such that $r(s_n(x))=x$, $x\in N_n$, and $g_n\circ s_n\in \Fr(N_n)$. Then $g_n\circ s_n$ is $\mathcal H_\sigma$-measurable on $N_n$ by Theorem~\ref{T:a}$(a)$. As above we see that $\abs{g_n\circ s_n-f}\le \eta$ on $N_n$. The function \[ g(x)=g_n(s_n(x)),\quad x\in N_{n}\setminus N_{n-1}, n\in\mathbb N, \] then satisfies $\norm{g-f}\le \eta$, and it is $\mathcal H_\sigma$-measurable. Indeed, for any $U\subset \mathbb R$ open we have \[ g^{-1}(U)=\bigcup_{n=1}^\infty \left((g_n\circ s_n)^{-1}(U)\cap (N_n\setminus N_{n-1})\right), \] which is a set of type $\mathcal H_\sigma$. Hence $g\in \Fr(X)$ by Theorem~\ref{T:a}. Since $\eta>0$ is arbitrary, we deduce $f\in\Fr(X)$. \end{proof}
\iffalse
\begin{thm}
\label{t:af-prenos-zhranice}
Let $X$ be a compact convex set with $\ext X$ being $F_\sigma$ set and let $f\in A_{sa}(X)$. Then
$$f\in A_f(X)\Longleftrightarrow f|_{\ext X}\mbox{ is fragmented}. $$ \end{thm}
To prove this theorem we need two lemmas on $\varepsilon$-fragmented functions. We recall that a function $f\colon Y\to\mathbb R$ is \emph{$\varepsilon$-fragmented} (where $\varepsilon>0$) if for any nonempty (closed) set $F\subset Y$ there is a nonempty relatively open set $U\subset F$ with $\operatorname{diam} f(U)<\varepsilon$.
\begin{lemma} \label{l:distance} Let $M$ be a compact space and $a\colon M\to [-1,1]$ be $\varepsilon$-fragmented. Then there exists a fragmented function $b\colon M\to [-1,1]$ such that $\norm{a-b}\le \varepsilon$. \end{lemma}
\begin{proof} We start by constructing a suitable transfinite sequence of open subsets of $M$. Set $U_0=\emptyset$. Let us assume $\gamma>0$ is an ordinal such that the open sets $U_\xi$ are constructed for every $\xi<\gamma$. If $\gamma$ is limit, we set $U_\gamma=\bigcup_{\xi<\gamma} U_\xi$. If $\gamma$ is not limit, it has a predecessor $\gamma'$ with $\gamma=\gamma'+1$. If $U_{\gamma'}=M$, we set $\kappa=\gamma'$ and stop the construction. Otherwise we use the $\varepsilon$-fragmentability of $a$ to find a nonempty relatively open set $U\subset M\setminus U_{\gamma'}$ such that $\operatorname{diam} a(U)\le \varepsilon$. Then $U_{\gamma}=U_{\gamma'}\cup U$ is an open set. This finishes the construction.
For each $\gamma<\kappa$ fix some $x_\gamma\in U_{\gamma+1}\setminus U_\gamma$ and define $$b(x)=a(x_\gamma),\quad x\in U_{\gamma+1}\setminus U_\gamma, \gamma<\kappa.$$
Then $b$ is a function on $M$ with $\|a-b\|\le\varepsilon$. Moreover, $b$ is clearly fragmented. Indeed, let $F\subset M$ be a nonempty closed set. Let $\gamma$ be the smallest ordinal such that $U_\gamma\cap F\ne \emptyset$. Then $b$ is constant on $U_\gamma\cap F$. \end{proof}
\begin{lemma} \label{l:fsigma-frag}
Let $X$ be a compact convex set and let $a\in A_{sa}(X)$ be such that $\norm{a}\le1$. Assume that $\varepsilon>0$ and that there is an $F_\sigma$ set $F\supset \ext X$ such that $a|_F$ is $\varepsilon$-fragmented. Then $a$ is $6\varepsilon$-fragmented on $X$. \end{lemma}
\begin{proof} Let $r:M_1(X)\to X$ denote the barycenter mapping. We write $F=\bigcup_{n=1}^\infty F_n$, where the sets $F_n$ are closed and satisfy $F_1\subset F_2\subset F_3\subset\cdots$.
Pick $\eta\in (0,\frac{\varepsilon}{2})$ and set $M_n=\{\omega\in M_1(X);\, \omega(F_n)\ge 1-\eta\}$. Then $r(M_n)$ are closed sets in $X$ satisfying $\bigcup_{n=1}^\infty r(M_n)=X$. (Indeed, for $x\in X$ pick a maximal measure $\nu\in M_x(X)$. Then $\nu(F)=1$, and thus there exists $n\in\mathbb N$ satisfying $\nu(F_n)\ge 1-\eta$.)
Let us continue by showing that $a$ is $6\varepsilon$-fragmented. Let $H\subset X$ be a closed nonempty set. By the Baire category theorem there exists $m\in \mathbb N$ such that $r(M_m)\cap H$ has nonempty interior in $H$. Using Lemma~\ref{l:distance} we find a fragmented function $b\colon F_m\to [-1,1]$ such that $\norm{a|_{F_m}-b}\le \varepsilon$. Let $\tilde{b}\colon M_m\to[-1,1] $ be defined as $\tilde{b}(\omega)=\omega(b|_{F_m})$. Then $\tilde{b}$ is a fragmented function by \cite[Lemma 3.2]{lusp}, and thus Lemma~\ref{l:frag-selekce} provides a selection function $\phi\colon r(M_m)\to M_m$ such that $\tilde{b}\circ \phi$ is fragmented and $r(\phi(z))=z$, $z\in r(M_m)$.
Let $x\in H\cap r(M_m)$ be given. Then \[ \begin{aligned}
\abs{a(x)-\tilde{b}(\phi(x))}&=\abs{\phi(x)(a)-\phi(x)(b|_{F_m})}=\abs{\phi(x)|_{F_m}(a-b)+\phi(x)|_{X\setminus F_m}(a)}\\ &\le \varepsilon+\eta\le 2\varepsilon. \end{aligned} \] Thus $\norm{a-\tilde{b}\circ\phi}\le 2\varepsilon$ on $H\cap r(M_m)$. Let $U$ be a nonempty open subset of $H$ such that $U\subset H\cap r(M_m)$. Let $V\subset U$ be a nonempty open subset of $H$ such that $\operatorname{diam} (\tilde{b}\circ \phi)(V)\le \varepsilon$. Then $\operatorname{diam} a(V)\le 5\varepsilon$, which shows that $a$ is $6\varepsilon$-fragmented. \end{proof}
Now we are ready to prove the theorem:
\begin{proof}[Proof of Theorem~\ref{t:af-prenos-zhranice}.]
Let $f\in A_{sa}(X)$ be given such that $f|_{\ext X}$ is fragmented on $\ext X$. Without loss of generality $\norm{f}\le1$. For any $\varepsilon>0$, $f|_{\ext X}$ is $\varepsilon$-fragmented, Thus $f$ is $6\varepsilon$-fragmented on $X$ by Lemma~\ref{l:fsigma-frag}. Since $\varepsilon>0$ is arbitrary, $f$ is fragmented. \end{proof}
\fi
Now we can formulate the main results of this section. The first one applies to the functions of the first Borel class.
\begin{thm} \label{t:Bo1-fsigma-hranice}
Let $X$ be a compact convex set with $\ext X$ being an $F_\sigma$ set and $H=A_b(X)\cap\Bo_1(X)$. Then $M(H)=M^s(H)$ is determined by
\[
\begin{aligned}
\mathcal B=\{\ext X\setminus F;\, &F\text{ is an }(F\vee G)_\delta\text{ measure convex split face}\\
&\text{such that }F'\text{ is measure convex}\}.
\end{aligned}
\]
Consequently, $M^s(H)$ is determined by $\mathcal S_H$, and thus for each intermediate function space $H' \subset H$, $M(H')\subset M(H)$. \end{thm}
\begin{proof}
The `consequently' part follows as in Theorem~\ref{t:a1-lindelof-h-hranice}: Indeed, it is easy to check that $\mathcal S_H \subset \mathcal B$. By Theorem~\ref{T:meritelnost-strongmulti} we know that $\mathcal A^s_H\subset\mathcal S_H$. By the first part of the theorem, $M^s(H)$ is determined by $\mathcal B$. Further, by Proposition~\ref{P:determ-Afaj} $M^s(H)$ is determined also by $\mathcal A_H^s$. It follows that $M^s(H)$ is determined by $\mathcal S_H$. Now it is enough to use Proposition \ref{P: silnemulti-inkluze}.
For the first part of the theorem, we use Proposition~\ref{P:meritelnost multiplikatoru pomoci topologickych split facu} for $H=A_b(X)\cap\Bo_1(X)$ and $T=\Bo_1^b(\ext X)$. First we need to verify that $T$ satisfies conditions $(i)-(iii)$ of the quoted proposition. Condition $(i)$ follows easily from the fact that functions of the first Borel class are stable under pointwise multiplication. Further, condition $(ii)$ follows from Lemma~\ref{L:YupcapYdown=Y} and condition $(iii)$ is obvious. The formula for $H$ follows from Theorem~\ref{t:prenos-fsigma} due to the assumption on $\ext X$.
We deduce that the assumptions of Proposition~\ref{P:meritelnost multiplikatoru pomoci topologickych split facu} are fulfilled and hence $M^s(H)$ is determined by the system $\mathcal B^s_H$. Hence, to complete the proof it is enough to show that $\mathcal B\subset\mathcal B^s_H$.
So, let $F$ be an $(F\vee G)_\delta$ split face such that both $F$ and $F'$ are measure convex. Let $(R_n)$ be a sequence of $(F\vee G)$ sets in $X$ such that $X\setminus F=\bigcup_n R_n$. Then $1_{(R_1\cup\dots\cup R_n)\cap\ext X}\nearrow 1_{F'\cap\ext X}=\lambda_{F'}|_{\ext X}$, thus $\lambda_{F'}|_{\ext X}\in T^\uparrow$. Finally, using implication $(i)\implies(iii)$ of Theorem~\ref{t:srhnuti-splitfaceu-metriz} we complete the proof that $\ext X\setminus F=F'\cap \ext X\in\mathcal B^s_H$. \end{proof}
\begin{thm} \label{t:af-fsigma-hranice}
Let $X$ be a compact convex set with $\ext X$ being an $F_\sigma$ set and $H=A_f(X)$. Then $M(H)=M^s(H)$ is determined by
\[
\begin{aligned}
\mathcal B=\{\ext X\setminus F;\, &F\text{ is an }\mathcal H_\delta\text{ measure convex split face}\\
&\text{such that }F'\text{ is measure convex}\}.
\end{aligned}
\]
Consequently, $M^s(H)$ is determined by $\mathcal S_H$, and thus for each intermediate function space $H' \subset H$, $M(H')\subset M(H)$. \end{thm}
\begin{proof} The proof is completely analogous to that of Theorem~\ref{t:Bo1-fsigma-hranice}; we just replace functions of the first Borel class by fragmented functions and $(F\vee G)$ sets by resolvable sets. In the proof we additionally use Theorem~\ref{T:a} to identify fragmented functions with $\mathcal H_\sigma$-measurable functions.
In this way we may simply copy the above proof except for checking the validity of condition $(ii)$ of Proposition~\ref{P:meritelnost multiplikatoru pomoci topologickych split facu}:
We additionally observe that by \cite[Th\'eor\`eme 2]{tal-kanal} $\ext X$, being $F_\sigma$, is necessarily an $(F\vee G)_\delta$ set, hence an $\mathcal H_\delta$ set. Since it is simultaneously an $\mathcal H_\sigma$ set, it easily follows from \cite[Proposition 2.1(iii)]{koumou} that $\ext X$ is in fact a resolvable set, hence hereditarily Baire. Hence, by Theorem~\ref{T:a}$(b)$ the space $T=\Fr^b(\ext X)$ coincides with bounded $\mathcal H_\sigma$-measurable functions on $\ext X$. Therefore, condition $(ii)$ follows from Lemma~\ref{L:YupcapYdown=Y}. \iffalse
The `consequently' part follows as in Theorem~\ref{t:a1-lindelof-h-hranice}: Indeed, it is easy to check that $\mathcal S_H \subset \mathcal B$. By Theorem~\ref{T:meritelnost-strongmulti} we know that $\mathcal A^s_H\subset\mathcal S_H$. By the first part of the theorem, $M^s(H)$ is determined by $\mathcal B$. Further, by Proposition~\ref{P:determ-Afaj} $M^s(H)$ is determined also by $\mathcal A_H^s$. It follows that $M^s(H)$ is determined by $\mathcal S_H$. Now it is enough to use Proposition \ref{P: silnemulti-inkluze}.
For the first part of the theorem, we use Proposition~\ref{P:meritelnost multiplikatoru pomoci topologickych split facu} for $H=A_f(X)$ and $T=\Fr^b(\ext X)$. First we need to verify that $T$ satisfies conditions $(i)-(iii)$ of the quoted proposition. Condition $(i)$ follows easily from the fact that fragmented functions are stable with respect to the pointwise multiplication, condition $(iii)$ is obvious. To prove condition $(ii)$ we first observe that by \cite[Th\'eor\`eme 2]{tal-kanal} $\ext X$, being $F_\sigma$, is necessarily an $(F\vee G)_\delta$ set, hence an $\mathcal H_\delta$ set. Since it is simultaneously an $\mathcal H_\sigma$ set, it easily follows from \cite[Proposition 2.1(iii)]{koumou} that $\ext X$ is in fact a resolvable set, hence hereditarily Baire. Hence, by Theorem~\ref{T:a}$(b)$ the space $T$ coincides with bounded $\mathcal H_\sigma$-measurable functions on $\ext X$. Therefore, condition $(ii)$ follows from Lemma~\ref{L:YupcapYdown=Y}. Further, the formula for $H$ follows from Theorem~\ref{t:af-prenos-zhranice} due to the assumption on $\ext X$.
We deduce that the assumptions of Proposition~\ref{P:meritelnost multiplikatoru pomoci topologickych split facu} are fulfilled and hence $M^s(H)$ is determined by the system $\mathcal B^s_H$. Hence, to complete the proof it is enough to show that $\mathcal B\subset\mathcal B^s_H$.
So, let $F$ be an $\mathcal H_\delta$-face such that both $F$ and $F'$ are measure convex. Let $(R_n)$ be a sequence of resolvable sets in $X$ such that $R_n\nearrow X\setminus F$. Then $1_{R_n\cap\ext X}\nearrow 1_{F'\cap\ext X}=\lambda_{F'}|_{\ext X}$, thus $\lambda_{F'}|_{\ext X}\in T^\uparrow$. Finally, using implication $(i)\implies(iii)$ of Theorem~\ref{t:srhnuti-splitfaceu-metriz} we complete the proof that $\ext X\setminus F=F'\cap \ext X\in\mathcal B^s_H$.\fi \end{proof}
We continue by a natural open problem.
\begin{ques} Do Theorem~\ref{t:Bo1-fsigma-hranice} and Theorem~\ref{t:af-fsigma-hranice} hold under the assumption that $\ext X$ is a Lindel\"of\ resolvable set? \end{ques}
If $\ext X$ is $F_\sigma$, then $\ext X$ is Lindel\"of (being $\sigma$-compact) and resolvable (see the proof of Theorem~\ref{t:af-fsigma-hranice}). Hence a positive answer would give a natural generalization of the two above theorems and would be a more precise analogue of Theorem~\ref{t:a1-lindelof-h-hranice}. The key problem is whether Theorem~\ref{t:prenos-fsigma} can be generalized.
\iffalse
\begin{thm} \label{t:multi-inkluze-fsigma-a1-af} Let $X$ be a compact convex set with $\ext X$ being an $F_\sigma$ set. Then $M(A_1(X))\subset M(A_f(X))$. \end{thm}
\begin{proof} Let $m\in M(A_1(X))$ be given. Since $\ext X$ is $K$-analytic, $m$ is a multiplier for $A_{sa}(X)$ by Theorem~\ref{t:inkluze-kanalytic}. Hence for a given function $a\in A_f(X)$, the function $ma$ can be extended to a strongly affine function $b$ with $b=am$ on $\ext X$. Since $b=am$ is fragmented on $\ext X$, $b$ is fragmented as well by Lemma~\ref{l:fsigma-frag}. Hence $m\in M(A_f(X))$. \end{proof}
If $H$ is an intermediate function space satisfying the assumptions of Theorem~\ref{T:integral representation H} (for example, one of the spaces mentioned in Remark~\ref{rem:intrepH} or one of the spaces from Theorem~\ref{T:intrepr-dalsi}), we get the $\sigma$-algebra $\mathcal A_H$ which may be used to characterize $M(H)$ via measurability on $\ext X$. This $\sigma$-algebra is described in Theorem~\ref{T:integral representation H}, but this description does not indicate how rich $\mathcal A_H$ is, or how the $\sigma$-algebras corresponding to different spaces are related. Let us formulate the following easy observation:
\begin{obs} Let $X$ be a compact convex set. Assume that $H_1$ and $H_2$ are two intermediate functions spaces on $X$ determined by extreme points. Then $$M(H_1)\subset M(H_2) \Longrightarrow \mathcal A_{H_1}\subset \mathcal A_{H_2}.$$ If, moreover, $H_1\subset H_2$ and $H_2$ is closed with respect to monotone limits, we get $$M(H_1)\subset M(H_2) \Longleftrightarrow \mathcal A_{H_1}\subset \mathcal A_{H_2}.$$ \end{obs}
\begin{proof}
The first implication follows directly from the definition on $\mathcal A_H$ in Proposition~\ref{p:system-aha}.
Assume that $H_1\subset H_2$, $H_2$ is closed with respect to monotone limits and $\mathcal A_{H_1}\subset\mathcal A_{H_2}$. Let $m\in M(H_1)$. By Proposition~\ref{p:system-aha} we deduce $m|_{\ext X}$ is $\mathcal A_{H_1}$-measurable, so it is also $\mathcal A_{H_2}$-measurable. Since $m\in H_1\subset H_2$ and $H_2$ is determined by extreme points, Theorem~\ref{T:integral representation H}(b) yields $m\in M(H_2)$. \end{proof}
So, it is natural to investigate inclusions between spaces of multipliers (and centers) of different intermediate function spaces.
Le us further recall that \cite[Theorem II.7.10]{alfsen} says (among others) that a bounded function on $\ext X$ may be extended to an element of $M(A_c(X))=Z(A_c(X))$ if and only if it is continuous in the facial topology. We recall that the facial topology on $\ext X$ consists exactly of sets of the form $\ext X\setminus F$, where $F$ is a closed split face of $X$ (see \cite[p. 143]{alfsen}).
A natural question arises how rich are the $\sigma$-algebras characterizing multipliers of intermediate function spaces mentioned in Theorem~\ref{T:intrepr-dalsi}. A partial information follows from Lemma~\ref{l:rozsir-splitface}.
\begin{thm} \label{t:measur} Let $X$ be a compact convex set and $H$ be either $(A_f(X))^\mu$ or $(\Bo_1(X)\cap A_{sa}(X))^\mu$. Let $\mathcal A_H$ denote the $\sigma$-algebra given by Theorem~\ref{T:integral representation H}. Then $\mathcal A_H$ contains all sets of the form $\ext X\cap F$, where $F$ is a closed split face in $X$. \end{thm}
\begin{proof} Let $H=(\Bo_1(X)\cap A_{sa}(X))^\mu$ and $F\subset X$ be a closed split face. We want to check that $f=1_{F\cap \ext X}$ is the restriction of an element in $M(H)$. Since $f$ is facially upper semicontinuous on $\ext X$, for each $a\in \Bo_1(X)\cap A_{sa}(X)$ Theorem~\ref{t:mh-asfrag} along with Lemma~\ref{l:rozsir-splitface} provides a pair $b_1,b_2\in \Bo_1(X)\cap A_{sa}(X)$ of functions satisfying on $\ext X$ equalities \[ b_1=(a+\lambda)f\quad\text{and}\quad b_2=-\lambda f, \] where $\lambda\ge 0$ is chosen such that $a+\lambda\ge 0$. Then $b=b_1+b_2$ is a function in $\Bo_1(X)\cap A_{sa}(X)$ with $b=af$ on $\ext X$. Hence $f$ is the restriction of an element in $M(\Bo_1(X)\cap A_{sa}(X))$. By Proposition~\ref{p:multi-pro-mu} we have that $f$ is the restriction of an element in $M((\Bo_1(X)\cap A_{sa}(X))^\mu)$, and hence is $f$ is $\mathcal A_{H}$-measurable. Thus $F\cap \ext X\in \mathcal A_H$.
Similarly we proceed in the case $H=(A_f(X))^\mu$, which finishes the proof. \end{proof}
\begin{ques} Let $X$ be a compact convex set. Is it true that \[ Z((A_s(X))^\mu)\subset Z((A_f(X))^\mu) ? \] \end{ques}
We also do not know the answer to the following question posed in \cite{smith-london}.
\begin{ques} Let $X$ be a compact convex set. Is it true that \[ Z((A_c(X))^\mu)\subset Z(A_s(X))^\mu) ? \] \end{ques}
\fi
\subsection{Multipliers of strongly affine functions} \label{ssce:x-kanalytic}
The space $A_{sa}(X)$ of strongly affine functions should serve as a natural roof for intermediate function spaces determined by extreme points. However, as we have already noticed above, the situation is more difficult. Firstly, by \cite{talagrand} strongly affine functions need not be determined by extreme points and, secondly, due to examples from Section~\ref{sec:strange} below there are intermediate function spaces which are determined by extreme points but not contained in $A_{sa}(X)$. Anyway, the space $A_{sa}(X)$ remains a natural object of interest. The following natural question seems to be open.
\begin{ques}
Let $X$ be a compact convex set such that $A_{sa}(X)$ is determined by extreme points. Is $M(A_{sa}(X))=M^s(A_{sa}(X))$? \end{ques}
The answer is positive if $X$ is a standard compact convex set (see Proposition~\ref{P:rovnostmulti}). But there are compact convex sets which are not standard and for which strongly affine functions are still determined by extreme points; see, e.g., Proposition~\ref{P:dikous-sa-new} and Lemma~\ref{L:maxmiry-dikous}$(2)$. For these examples the answer is also positive (see Proposition~\ref{p:vztahy-multi-ifs}$(d)$).
Another intriguing question is the following:
\begin{ques}
Let $X$ be a compact convex set such that $H=A_{sa}(X)$ is determined by extreme points. Is $\mathcal A^s_H=\mathcal S_H$?
Is it true at least assuming $X$ is standard? \end{ques}
The answer is positive for the class of simplices addressed in Section~\ref{sec:stacey}, see Proposition~\ref{P:shrnutidikousu}. The general case remains open, but there is one more special case in which the answer is positive, namely when $\ext X$ is $K$-analytic. This is the content of the following theorem.
\begin{thm} \label{t:metriz-sa-splitfacy} Let $X$ be a compact convex set with $\ext X$ being $K$-analytic and $H=A_{sa}(X)$. Then \[\begin{aligned} \mathcal A_H&=\mathcal A^s_H=\mathcal S_H\\&=\{F\cap \ext X;\, F\text{ is a split face such that both }F,F'\text{ are measure convex}\}. \end{aligned}\]
Consequently, $M(H')\subset M(H)$ for each intermediate function space $H' \subset H$. \end{thm}
\begin{proof} Since $X$ is standard, $\mathcal A_H=\mathcal A^s_H$ by Proposition~\ref{P:rovnostmulti}. Inclusion $\mathcal A^s_H\subset\mathcal S_H$ follows from Theorem~\ref{T:meritelnost-strongmulti}. The last equality follows from Theorem~\ref{t:srhnuti-splitfaceu-metriz} (equivalence $(iv)\Longleftrightarrow(i)$). Finally, if $F$ is a split face such that both $F$ and $F'$ are measure convex, Theorem~\ref{t:srhnuti-splitfaceu-metriz} (implication $(i)\implies (iii)$) shows that $\lambda_F\in M(H)$. Therefore $F\cap\ext X=[\lambda_F=1]\cap\ext X\in\mathcal A_H$. This proves the first part of the theorem.
Since $M^s(H)$ is determined by $\mathcal A^s_H$ due to Theorem~\ref{T:integral representation H}$(ii)$ and $\mathcal A^s_H=\mathcal S_H$,
the `consequently' part follows from Proposition \ref{P: silnemulti-inkluze} (using again Proposition~\ref{P:rovnostmulti}). \end{proof}
\section{Examples of strange intermediate function spaces}\label{sec:strange}
In this section we collect three examples. The first two show, in particular, that there is no relationship between strongly affine functions and determinacy by extreme points. The third one shows that split faces need not induce multipliers (even on a metrizable Bauer simplex).
Let us now pass to the first example. We note that most of the intermediate function spaces we address in this paper are formed by strongly affine functions and are determined by extreme points. But in general, these two properties are incomparable. On the one hand, by \cite{talagrand} there is a strongly affine function not determined by extreme points. On the other hand, it is not hard to find an affine function which is not strongly affine but is determined by extreme points. We present something more, namely intermediate function spaces generated by such functions. The first example is the following:
\begin{example}
\label{ex:deter-ext-body}
Let $X=M_1([0,1])$ and let $\mathbb I$ stand for the set of all irrational points in $[0,1]$. We define a function $G\colon X\to [0,1]$ by $$G(\mu)=\mu_d(\mathbb I), \quad\mu\in X,$$
where $\mu_d$ denotes the discrete part of $\mu$. Let $H=\operatorname{span} (A_c(X)\cup\{G\})$. Then $X$ is a metrizable Bauer simplex and $H$
is an intermediate function space on $X$ with the
following properties.
\begin{enumerate}[$(a)$] \item $H\subset\Ba_2(X)$; \item $G$ is not strongly affine and hence $H$ is not contained in $A_{sa}(X)$; \item $H$ is determined by $\ext X$ but $H^\mu$ is not determined by $\ext X$; \item $M(H)$ contains only constant functions.
\end{enumerate} \end{example}
\begin{proof} It is clear that $X$ is a metrizable Bauer simplex and $H$ is an intermediate function space on $X$.
To prove properties $(a)$ and $(b)$ it is enough to show that $G$ is a Baire-two function on $X$ that is not strongly affine.
To see this, we notice that the function $\mu\mapsto \mu(\{x\})$, $\mu\in X$, is upper semicontinuous on $X$ for each $x\in [0,1]$. Hence the function \[ G_1(\mu)=\mu_d(\mathbb Q)=\sum_{q\in\mathbb Q} \mu(\{q\}),\quad \mu\in X, \] is Baire-two. Since $G(\mu)=\mu_d([0,1])-G_1(\mu)$ and $\mu\mapsto \mu_d([0,1])$ is Baire-two (see \cite[Corollary I.2.9]{alfsen}), the function $G$ is Baire-two as well.
To verify that $G$ is not strongly affine, let $\lambda$ denote the Lebesgue measure on $[0,1]$. If $\phi\colon [0,1]\to X$ is the evaluation mapping from Section~\ref{ssc:ch-fs}, then $\phi(\lambda)$ is a probability measure on $X$ whose barycenter is $\lambda$ (see \cite[Proposition 2.54]{lmns}). Then $G(\lambda)=\lambda_d(\mathbb I)=0$. On the other hand, for $t\in\mathbb I$ we have $G(\varepsilon_t)=1$. Since $\phi(\lambda)(\{\varepsilon_t;\, t\in\mathbb I\})=1$, we get $\int_X G(\mu)\,\mbox{\rm d}\phi(\lambda)(\mu)=1$. Hence $G$ is not strongly affine.
$(c)$: Note that $H=\{F+cG;\, F\in A_c(X), c\in\mathbb R\}$. We will show that $H$ is determined by extreme points. To this end, let $F\in A_c(X)$ and $c\in \mathbb R$ be given. By Section~\ref{ssc:ch-fs}, there exists a function $f\in C([0,1])$ such that $F(\mu)=\mu(f)$, $\mu\in X$. Then the function $K=F+cG$ satisfies \[ K(\varepsilon_t)=\begin{cases} f(t)+c,& t\in \mathbb I,\\
f(t),& t\in \mathbb Q. \end{cases} \] Since $\sup f(\mathbb Q)=\sup f(\mathbb I)=\sup f([0,1])$, we obtain \[ \sup K(\ext X)=\max\left\{c+\sup f(\mathbb I), \sup f(\mathbb Q)\right\}=\max\left\{c+\sup f([0,1]), \sup f([0,1])\right\}. \] Analogously we obtain that \[ \inf K(\ext X)=\min\left\{c+\inf f(\mathbb I), \inf f(\mathbb Q)\right\}=\min\left\{c+\inf f([0,1]), \inf f([0,1])\right\}. \] For any $\mu\in X$ we have \[ \inf f([0,1])+cG(\mu)\le K(\mu)=\mu(f)+cG(\mu)\le \sup f([0,1])+cG(\mu). \] If $c\ge 0$, we have \[ \inf K(\ext X)=\inf f([0,1])\le K(\mu)\le \sup f([0,1])+c=\sup K(\ext X). \] If $c<0$, we obtain \[ \inf K(\ext X)=\inf f([0,1])+c\le K(\mu)\le \sup f([0,1])=\sup K(\ext X). \] Hence $H$ is determined by extreme points.
We continue by showing that $H^\mu$ is not determined by extreme points. Let $f=-1_{\mathbb Q}$ and $c=-1$. Then the function $K(\mu)=\mu(f)-G(\mu)$, $\mu\in X$, is in $H^\mu$ (as $1_{\mathbb Q}\in (C([0,1]))^\mu$). Further, \[ K(\varepsilon_t)=-1,\quad t\in [0,1], \] but for the Lebesgue measure $\lambda$ we obtain $K(\lambda)=0.$ Hence $H^\mu$ is not determined by extreme points.
$(d)$: Given $K\in H$ we set $\widehat K(t)=K(\varepsilon_t)$ for $t\in[0,1]$. Any such function is of the form $\widehat K=f+c 1_{\mathbb I}$ for some $f\in C([0,1])$ and $c\in\mathbb R$. Then \begin{equation}\label{eq:rozdil}
\limsup_{s\to t} \widehat K(s)-\liminf_{s\to t}\widehat K(s)=\abs{c} \mbox{ for each }t\in[0,1].\end{equation} Here we use that $f$ is continuous and that both $\mathbb Q$ and $\mathbb I$ are dense in $[0,1]$.
Assume now $K\in H$ is a non-constant function. Then $\widehat{K}$ is not constant either. Let $\widehat K=f+c 1_{\mathbb I}$ as above. There are two possibilities:
Case 1: $c=0$ and $f$ is not constant. Then there is no $L\in H$ with $L=KG$ on $\ext X$. Indeed, we would have $\widehat{L}=f\cdot 1_{\mathbb I}$ and the difference from \eqref{eq:rozdil} would not be constant on $[0,1]$.
Case 2: $c\ne 0$. Let $F(\mu)=\int_{[0,1]} t\,\mbox{\rm d}\mu(t)$ for $\mu\in X$. Then there is no $L\in H$ with $L=KF$ on $\ext X$. Indeed, we would have
$$\widehat L(t)=tf(t)+ct\, 1_{\mathbb I}(t), \qquad t\in[0,1],$$
and, again, the difference from \eqref{eq:rozdil} would not be constant on $[0,1]$.
Thus $K\notin M(H)$ and the proof is complete. \end{proof}
Before presenting another variant of the preceding example we need the following lemma.
\begin{lemma} \label{l:baire-one-cont}
Let $f\in \Ba_1([0,1])$ be such that the restrictions $f|_{\mathbb Q}$ and $f|_{\mathbb I}$ are continuous. Then $f\in C([0,1])$. \end{lemma}
\begin{proof} Assume that $x\in \mathbb Q$ is such that $f$ is not continuous at $x$. Then there exist $\eta>0$ and a sequence $(x_n)$ of irrational numbers converging to $x$ such that $\abs{f(x)-l}>\eta$, where $l=\lim f(x_n)$. Let $U$ be a neighborhood of $x$ such that $\abs{f(x)-f(y)}<\frac\eta4$ for each $y\in U\cap \mathbb Q$. Pick $n\in\mathbb N$ such that $x_n\in U$ and $\abs{f(x_n)-l}<\frac\eta4$. Let $V\subset U$ be a neighborhood of $x_n$ such that $\abs{f(x_n)-f(y)}<\frac\eta4$ for each $y\in V\cap \mathbb I$. Then $f$ has no point of continuity on $V$ because \[ f(y)\in (f(x)-\tfrac\eta4,f(x)+\tfrac\eta4),\quad y\in V\cap \mathbb Q, \] and \[ f(y)\in (l-\tfrac\eta4,l+\tfrac\eta4), \quad y\in V\cap \mathbb I. \] But this is a contradiction with the fact that $f\in \Ba_1([0,1])$.
The case $x\in \mathbb I$ can be treated similarly, thus the proof is done. \end{proof}
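Let us note that the assumption $f\in\Ba_1([0,1])$ cannot be omitted in the previous lemma. A standard example is the function
\[
f=1_{\mathbb Q\cap[0,1]},
\]
whose restrictions to $\mathbb Q$ and to $\mathbb I$ are constant (hence continuous), while $f$ itself has no point of continuity; it is a Baire-two function which is not Baire-one.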
\begin{example} \label{ex:hacko-mu-deter} Let $Y=B_{M([0,1])}$. We define a function $G\colon Y\to [-1,1]$ by $$G(\mu)=\mu_d(\mathbb I), \quad\mu\in Y,$$ where we use notation from Example~\ref{ex:deter-ext-body}. Set $H=\operatorname{span}(A_c(Y)\cup\{G\})$. Then $Y$ is a metrizable compact convex set and $H$ is an intermediate function space on $Y$ with the following properties: \begin{enumerate}[$(a)$] \item $H\subset \Ba_2(Y)$; \item $H$ is not contained in $A_{sa}(Y)$; \item $H$ is determined by $\ext Y$; \item $H=H^\mu$; \item $M(H)$ contains only constant functions.
\end{enumerate}
\end{example}
\begin{proof} It is clear that $Y$ is a metrizable compact convex set and $H$ is an intermediate function space on $Y$.
Let $X$ be the compact convex set from Example~\ref{ex:deter-ext-body}. We observe that $X\subset Y$ and $Y=\operatorname{conv} (X\cup -X)$. Then $\ext Y=(\ext X)\cup (-\ext X)$. Further, $G|_X$ coincides with the function $G$ from Example~\ref{ex:deter-ext-body}.
$(a)$: We need to show that $G$ is a Baire-two function on $Y$. We know that $G|_X$ and $G|_{-X}$ are Baire-two functions (by Example~\ref{ex:deter-ext-body}(a)) and $G$ is affine. Consider the mapping \[ q\colon X\times(-X)\times[0,1]\to Y,\quad q(x_1,-x_2,\lambda)=\lambda x_1-(1-\lambda)x_2. \] Then $q$ is a continuous surjection and $\tilde{G}(x_1,-x_2,\lambda)=\lambda G(x_1)-(1-\lambda)G(x_2)$ is Baire-two. Since $\tilde{G}=G\circ q$, $G$ is Baire-two by \cite[Theorem 5.26]{lmns}.
$(b)$: The function $G$ is not strongly affine as Example~\ref{ex:deter-ext-body}(b) says that $G|_X$ is not strongly affine.
$(c)$: The space $H$ is determined by extreme points. Let $F\in A_c(Y)$ and $c\in\mathbb R$ be given. Then there exist a function $f\in C([0,1])$ and $d\in\mathbb R$ such that $(F+cG)(\mu)=\mu(f)+d+cG(\mu)$ for $\mu\in Y$. From Example~\ref{ex:deter-ext-body}(c) we know that $$\sup (F+cG)(X)\le \sup (F+cG)(\ext X)\le \sup (F+cG)(\ext Y)$$ and $$\begin{aligned} \sup(F+cG)(-X)&=-\inf(F+cG)(X)\le-\inf(F+cG)(\ext X)\\&=\sup(F+cG)(-\ext X)\le \sup (F+cG)(\ext Y).\end{aligned}$$ Since $F+cG$ is affine and $Y=\operatorname{conv}(X\cup(-X))$, we deduce $\sup(F+cG)(Y)\le\sup (F+cG)(\ext Y)$. Similarly we infer the inequality for infimum.
$(d)$: Let a bounded non-decreasing sequence $(K_n)$ in $H$ pointwise converge to $K$ on $Y$. Then $K$ is affine on $Y$, and thus we can assume by adding a suitable constant that $K(0)=0$. We find continuous functions $k_n\in C([0,1])$, $d_n, c_n\in\mathbb R$ such that $K_n(\mu)=\mu(k_n)+d_n+c_nG(\mu)$. Then $d_n=K_n(0)\nearrow K(0)=0$, and thus $(d_n)$ converges to $0$. In particular, the sequence $(d_n)$ is bounded. By the very assumption the sequence $(K_n)$ is bounded, so $(K_n-d_n)$ is bounded as well. For $t\in\mathbb Q$ we have $K_n(\varepsilon_t)-d_n=k_n(t)$, so the sequence $(k_n|_{\mathbb Q})$ is bounded. Since each $k_n$ is continuous, $\norm{k_n}_\infty=\norm{k_n|_{\mathbb Q}}_\infty$, so the sequence $(k_n)$ is bounded as well. Now we deduce that also the sequence $(c_n)$ is bounded. By choosing a suitable subsequence we may assume that $c_n\to c\in\mathbb R$.
\iffalse For $t\in \mathbb Q$ we have \[ K_n(\varepsilon_t)=k_n(t)+d_n\nearrow K(\varepsilon_t). \] Since $K$ is bounded, $(\abs{k_n(t)})$ is bounded by $M=\norm{K}+\norm{(d_n)}_\infty$. Hence $\sup_{t\in [0,1]} \abs{k_n(t)}=\sup_{t\in\mathbb Q} \abs{k_n(t)}\le M$. If we fix $t\in\mathbb I$, then \[ K_n(\varepsilon_t)=k_n(t)+d_n+c_n\nearrow K(\varepsilon_t) \] implies that $(c_n)$ is bounded. By choosing a suitable subsequence we may assume that $c_n\to c$. \fi
Now we have for $t\in\mathbb Q$ \[ \begin{aligned} K_n(\varepsilon_t)&=k_n(t)+d_n\nearrow K(\varepsilon_t)\quad\text{and}\\
K_n(-\varepsilon_t)&=-k_n(t)+d_n\nearrow K(-\varepsilon_t)=-K(\varepsilon_t). \end{aligned} \] Hence both the functions $t\mapsto K(\varepsilon_t)$ and $t\mapsto -K(\varepsilon_t)$ are limits of non-decreasing sequences of continuous functions on $\mathbb Q$, and hence the function $t\mapsto K(\varepsilon_t)$ is continuous on $\mathbb Q$.
Similarly we have for $t\in\mathbb I$ \[ \begin{aligned} K_n(\varepsilon_t)&=k_n(t)+d_n+c_n\nearrow K(\varepsilon_t)\quad\text{and}\\ K_n(-\varepsilon_t)&=-k_n(t)+d_n-c_n\nearrow K(-\varepsilon_t)=-K(\varepsilon_t). \end{aligned} \] As above, $t\mapsto K(\varepsilon_t)$ is continuous on $\mathbb I$. Thus the function $k(t)=K(\varepsilon_t)$, $t\in [0,1]$, is continuous on $\mathbb Q$ and on $\mathbb I$. Further, the functions $k_n$ satisfy \[ k_n(t)= \begin{cases} k_n(t)+d_n-d_n=K_n(\varepsilon_t)-d_n\to k(t),& t\in \mathbb Q,\\
k_n(t)+d_n+c_n-d_n-c_n=K_n(\varepsilon_t)-d_n-c_n\to k(t)-c,& t\in\mathbb I.
\end{cases} \]
Hence the function $l(t)=k(t)-c1_{\mathbb I}$ is Baire-one on $[0,1]$. Also, $l|_{\mathbb Q}$ and $l|_{\mathbb I}$ are continuous. It follows from Lemma~\ref{l:baire-one-cont} that $l\in C([0,1])$.
Hence the functions $L_n(\mu)=\mu(k_n)$ and $L(\mu)=\mu(l)$, $\mu\in Y$, satisfy for $\mu\in Y$ \[ L(\mu)=\lim_{n\to\infty}L_n(\mu)=\lim_{n\to\infty}\left(\mu(k_n)+d_n+c_nG(\mu)-d_n-c_nG(\mu)\right) =K(\mu)-cG(\mu). \] Since $L$ is strongly affine, $K-cG$ is strongly affine as well. Since \[ \begin{aligned} L(\varepsilon_t)&=\begin{cases} k(t), &t\in\mathbb Q,\\
k(t)-c, &t\in\mathbb I,
\end{cases},\\ L(-\varepsilon_t)&=\begin{cases} -k(t), &t\in\mathbb Q,\\
-k(t)+c, &t\in\mathbb I,
\end{cases} \end{aligned} \] we have $L(\mu)=\mu(l)$, $\mu\in\ext Y$. But the function on the right hand side is continuous on $\ext Y$, and thus $L$ is continuous on $\ext Y$. By \cite[Corollary 5.32]{lmns}, $K-cG$ is continuous on $Y$. Hence $K=(K-cG)+cG\in H$.
$(e)$: $M(H)$ contains only constant functions by Example~\ref{ex:symetricka}. \end{proof}
We continue by the third example showing that split faces need not generate multipliers even on a metrizable Bauer simplex.
\begin{example} \label{ex:multi-analytic} There exist an intermediate function space $H$ on the metrizable Bauer simplex $X=M_1([0,1])$ and a subset $F\subset X$ such that the following properties hold. \begin{enumerate}[$(a)$] \item $H^\sigma=H$, $H\subset A_{sa}(X)$ and $H$ is determined by extreme points. \item $F$ is a measure convex split face, its complementary face $F'$ is also measure convex and $\lambda_F\in H$. \item $\lambda_F\notin M(H)$. \end{enumerate} \end{example}
\begin{proof} $(a)$: Let $\tilde{H}=\operatorname{span}(\Ba^b([0,1])\cup\{\tilde{f}\})$, where $\tilde{f}=1_A$ for $A=A_1\cup A_2$ with $A_1\subset [0,\frac13]$ analytic non-Borel and $A_2\subset [\frac23,1]$ analytic non-Borel. Using Lemma~\ref{L:function space} we obtain an intermediate function space $H=V(\tilde{H})$ on the Bauer simplex $X=M_1([0,1])$ with $H\subset A_{sa}(X)$. Since $X$ is a standard compact convex set, $H$ is determined by extreme points.
Now we show that $H^\sigma=H$. To this end, let a bounded sequence $h_n=b_n+c_n f$, where $b_n=V(\tilde{b}_n)$ for some $\tilde{b}_n\in \Ba^b([0,1])$, $c_n\in\mathbb R$ and $f=V(\tilde{f})$, be such that $h_n\to h\in A_{sa}(X)$. If we identify points of $[0,1]$ with $\ext X$, we have $h_n\to h$ on $[0,1]$.
We claim that $(c_n)$ is bounded. If this is not the case, we may assume that $\abs{c_n}\to \infty$. Then the set $B=\{x\in [0,1];\, \abs{b_n(x)}\to\infty\}$ is Borel and contains $A$. Indeed, for $x\in A$ we have \[ \abs{b_n(x)}=\abs{b_n(x)+c_n-c_n}=\abs{h_n(x)-c_n}\ge \abs{c_n}-\norm{h_n}\to\infty. \] Since $A$ is not Borel, there exists $x\in B\setminus A$. Then $\abs{h_n(x)}=\abs{b_n(x)}\to \infty$, which contradicts the boundedness of $(h_n)$. Hence $(c_n)$ is bounded. Thus we may assume that $c_n\to c$ for some $c\in\mathbb R$. Then \[ b_n=h_n-c_nf\to h-cf\text{ on }\ext X. \] Hence $h-cf$ is a Borel function on $\ext X$, and thus it has a Borel strongly affine extension $b$ to $X$. Further, $h=b+cf$ on $\ext X$. Since $h$, $b$ and $f$ are strongly affine, $h=b+cf$ on $X$. Hence $H=H^\sigma$.
$(b)$: Let $F=[f=1]$. Then $F$ is a measure convex face with $F'=[f=0]$ (this follows from Lemma~\ref{l:complementarni-facy}(i)). We aim to check that $F$ is a split face. To this end, let $s\in X$ be given. Then $s$ is a probability measure on $[0,1]$. If $s(A)=1$, then $s\in F$. Similarly, if $s([0,1]\setminus A)=1$, we have $s\in F'$. So let $s(A)\in (0,1)$. Then \[
s=s(A)\frac{s|_{A}}{s(A)}+(1-s(A))\frac{s|_{[0,1]\setminus A}}{s([0,1]\setminus A)}, \]
where $\frac{s|_{A}}{s(A)}\in F$ and
$\frac{s|_{[0,1]\setminus A}}{s([0,1]\setminus A)}\in F'$. Hence this provides a decomposition of $s$ into a convex combination of elements of $F$ and $F'$. To check its uniqueness, let $s=\lambda t+(1-\lambda)t'$, where $\lambda\in (0,1)$ and $t\in F$, $t'\in F'$. The application of the function $f$ yields $\lambda=s(A)$. Since $t\in F$, we have $1=f(t)=t(1_{{A}})=t({A})$, and thus $t$ is carried by ${A}$. Similarly we obtain that $t'$ is carried by $[0,1]\setminus {A}$. Given a universally measurable $D\subset {A}$, we have \[ s(D)=s({A})t(D). \]
Hence $t=\frac{s|_{{A}}}{s({A})}$.
Then $t'=\frac{s|_{[0,1]\setminus {A}}}{s([0,1]\setminus{A})}$ and $F$ is a split face.
Since $\lambda_F=f$, we have $\lambda_F\in H$.
$(c)$: Nevertheless, $\lambda_F=f$ is not a multiplier for $H$. Indeed, let $g\colon [0,1]\to[0,1]$ be a continuous function with $g=1$ on $[0,\frac13]$ and $g=0$ on $[\frac23,1]$. Identifying $\ext X$ with $[0,1]$ as above, we have $\lambda_F\cdot V(g)=\tilde{f}g$ on $\ext X$. Thus it is enough to show that $\tilde{f}g\notin \{h|_{\ext X};\, h\in H\}$. But any function $h$ from this set is of the form $h=b+c\tilde{f}$ for some bounded Borel function $b$ and some $c\in\mathbb R$, and the equality $\tilde{f}g=b+c\tilde{f}$ implies $b+c\tilde{f}=0$ on $[\frac23,1]$. This implies $c=0$ as $A_2$ is non-Borel. But then $\tilde{f}g=b$, so $\tilde{f}=b$ on $[0,\frac13]$, which is impossible as $A_1$ is non-Borel. Hence the conclusion follows. \end{proof}
\section{Examples of Stacey's compact convex sets} \label{sec:stacey}
The aim of this section is to illustrate the abstract results obtained in the previous sections by means of the concrete examples of Stacey's simplices. Their construction provides (in particular) examples distinguishing intermediate function spaces and their multipliers.
\subsection{Construction} \label{subs:construction}
Let $L$ be a (Hausdorff) compact topological space and let $A\subset L$ be an arbitrary subset. Let the set $$\gls{KLA}=(L\times\{0\}) \cup (A\times \{-1,1\})$$ be equipped with the porcupine topology. I.e., points of $A\times\{-1,1\}$ are isolated and a neighborhood basis of $(t,0)$ for $t\in L$ is formed by \[ (U\times \{-1,0,1\})\cap K_{L,A} \setminus\{(t,-1),(t,1)\},\quad U\mbox{ is a neighborhood of $t$ in }L. \] Then $K_{L,A}$ is a compact Hausdorff space (this was observed in \cite[Section VII]{bi-de-le} and it is easy to check). We further set \[ E=\gls{ELA}=\{f\in C(K_{L,A});\, f(t,0)=\tfrac{1}{2}(f(t,-1)+f(t,1)) \mbox{ for }t\in A\}. \] Then $E_{L,A}$ is a function space on $K_{L,A}$ and its Choquet boundary is \[ \Ch_{E_{L,A}}K_{L,A}=(A\times\{-1,1\}) \cup ((L\setminus A)\times\{0\}) \] (see \cite[formula (7.1) on p. 328]{bi-de-le}). Denote $X=\gls{XLA}=S(E_{L,A})$ the respective state space. By \cite[Theorem 3]{stacey} each $X_{L,A}$ is a simplex (see the explanation in \cite[Section 2]{kalenda-bpms}).
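To illustrate the topology, note that if $t\in L$ and $(t_\alpha)$ is a net in $L\setminus\{t\}$ converging to $t$, then $(t_\alpha,i_\alpha)\to(t,0)$ in $K_{L,A}$ for any choice of $i_\alpha\in\{-1,0,1\}$ with $(t_\alpha,i_\alpha)\in K_{L,A}$, while the points of $A\times\{-1,1\}$ themselves are isolated. In particular, every $f\in C(K_{L,A})$ satisfies $f(t_\alpha,i_\alpha)\to f(t,0)$ along any such net.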
We will describe several intermediate function spaces, their centers and multipliers on $X$. We adopt the notation from Section~\ref{ssc:ch-fs} and from Lemma~\ref{L:function space}. In particular, $\Phi:E_{L,A}\to A_c(X_{L,A})$ will denote the canonical isometry, $\phi:K_{L,A}\to X_{L,A}$ the evaluation mapping and $V:\ell^\infty(K_{L,A})\cap (E_{L,A})^{\perp\perp} \to A_{sa}(X_{L,A})$ the isometry from Lemma~\ref{L:function space}. We further define mappings $\jmath\colon L\to K_{L,A}$ and $\psi\colon K_{L,A}\to L$ by \[ \gls{jmath}(t)=(t,0) \mbox{ for }t\in L \quad\mbox{ and }\quad \gls{psi}(t,i)=t\mbox{ for }(t,i)\in K_{L,A}. \] Then $\jmath$ is a homeomorphic injection, $\psi$ is a continuous surjection and $\psi\circ\jmath$ is the identity on $L$.
We conclude this section with a few easy formulas which we will use several times. Assume that $K=K_{L,A}$, $E=E_{L,A}$ and $B\subset \Ch_E K$ is arbitrary. Then we have the following equalities: \begin{equation}\label{eq:podmny ChEK}
\begin{aligned}
\psi(B)\cap\psi(\Ch_E K\setminus B)&=\{t\in A;\, B\cap\{(t,-1),(t,1)\}\\ &\qquad\qquad\mbox{ contains exactly one point} \},\\
\psi(B)\setminus\psi(\Ch_E K\setminus B)&=\{t\in L\setminus A;\, (t,0)\in B\} \\&\qquad\cup\{t\in A;\, \{(t,-1),(t,1)\}\subset B\}
\\&=\{t\in L;\,\psi^{-1}(t)\cap \Ch_E K\subset B\}.\end{aligned} \end{equation}
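For instance, if $B=\{(t_0,1)\}$ for some $t_0\in A$, then $\psi(B)=\{t_0\}$ and $(t_0,-1)\in\Ch_E K\setminus B$, so \eqref{eq:podmny ChEK} gives $\psi(B)\cap\psi(\Ch_E K\setminus B)=\{t_0\}$ and $\psi(B)\setminus\psi(\Ch_E K\setminus B)=\emptyset$.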
\subsection{Description of topological properties of functions on $K_{L,A}$} \label{ssec:topology}
We start with a characterization of topological properties of bounded functions on $K_{L,A}$.
\begin{prop}\label{p:topol-stacey} Let $f\in \ell^\infty(K_{L,A})$ and let $K=K_{L,A}$. Then the following assertions hold. \begin{enumerate}[$(a)$]
\item $f\in C(K)$ if and only if $f\circ\jmath$ is continuous on $L$ and for all $\varepsilon>0$ the set
\[
\{t\in A;\, \abs{f(t,0)- f(t,-1)}\ge\varepsilon\mbox{ or }\abs{f(t,0)-f(t,1)}\ge\varepsilon\}
\]
is finite.
\item $f\in \Ba_1^b(K)$ if and only if $f\circ\jmath\in \Ba_1^b(L)$ and the set
\[
\{t\in A;\, f(t,-1)\ne f(t,0)\mbox{ or }f(t,1)\ne f(t,0)\}
\]
is countable.
\item $f\in \Ba^b(K)$ if and only if $f\circ\jmath\in \Ba^b(L)$ and the set
\[
\{t\in A;\, f(t,-1)\ne f(t,0)\mbox{ or }f(t,1)\ne f(t,0)\}
\]
is countable.
\item $f$ is lower semicontinuous if and only if the following two conditions are satisfied:
\begin{enumerate}[$(i)$]
\item $f\circ\jmath$ is lower semicontinuous on $L$;
\item for each $\varepsilon>0$ and each accumulation point $t_0$ of the set $$A_\varepsilon=\{t\in A;\, f(t,0)\ge\min\{f(t,-1),f(t,1)\}+\varepsilon\}$$ we have $f(t_0,0)\le\liminf\limits_{t\to t_0, t\in A_\varepsilon}f(t,0)-\varepsilon$.
\end{enumerate}
\item $f\in \Bo_1^b(K)$ if and only if $f\circ\jmath\in \Bo_1^b(L).$
\item $f\in\Bo^b(K)$ if and only if $f\circ\jmath\in \Bo^b(L).$
\item $f\in \Fr^b(K)$ if and only if $f\circ\jmath\in \Fr^b(L).$
\item $f\in (\Fr^b(K))^\mu$ if and only if $f\circ\jmath\in (\Fr^b(L))^\mu.$
\item $f$ is universally measurable if and only if $f\circ\jmath$ is universally measurable. \end{enumerate} \end{prop}
\begin{proof} $(a)$: This easily follows from the definitions and is proved in \cite{stacey} within the proof of Theorem 3.
$(b)$: Inclusion `$\subset$' follows from $(a)$. To prove the converse take any $f$ satisfying the condition. Let $\{t_n;\, n\in\mathbb N\}$ be the respective countable set. Let $(g_n)$ be a bounded sequence in $C(L)$ pointwise converging to $f\circ\jmath$. Define $f_n$ by
$$f_n(t,i) = \begin{cases}
f(t,i)&\mbox{if }t=t_k\mbox{ for some }k\le n\mbox{ and }i\in\{-1,1\},\\
g_n(t) &\mbox{otherwise}.
\end{cases}$$
Then $(f_n)$ is a bounded sequence of continuous functions on $K$ pointwise converging to $f$.
$(c)$: This follows easily from the proof of $(b)$.
$(d)$: $\implies$: Assume $f$ is lower semicontinuous. Clearly, $f\circ \jmath$ is also lower semicontinuous, hence $(i)$ is satisfied. To prove $(ii)$ fix $\varepsilon>0$ and let $t_0$ be an accumulation point of $A_\varepsilon$. Let $(t_\alpha)$ be a net in $A_\varepsilon$ converging to $t_0$ such that $\lim_\alpha f(t_\alpha,0)=\liminf\limits_{t\to t_0, t\in A_\varepsilon} f(t,0)$.
Since $t_\alpha\in A_\varepsilon$, we may find $i_\alpha\in\{-1,1\}$ such that $f(t_\alpha,i_\alpha)\le f(t_\alpha,0)-\varepsilon$. In the topology of $K$ we have $(t_\alpha,i_\alpha)\to(t_0,0)$, hence
\[
f(t_0,0)\le \liminf_{\alpha} f(t_\alpha,i_\alpha)\le \liminf_{\alpha} f(t_\alpha,0)-\varepsilon=\liminf\limits_{t\to t_0, t\in A_\varepsilon}f(t,0)-\varepsilon
\]
and the proof is complete.
$\impliedby$: Assume $f$ satisfies conditions $(i)$ and $(ii)$. Let us show $f$ is lower semicontinuous. Since the points of $A\times\{-1,1\}$ are isolated, it is enough to prove the lower semicontinuity at $(t,0)$ for each $t\in L$. So, fix any $t_0\in L$.
It follows from $(i)$ that $f(t_0,0)\le\liminf_{s\to t_0}f(s,0)$.
If $t_0$ is not an accumulation point of $A$, the proof is complete.
Assume now that $t_0$ is an accumulation point of $A$. We need to prove that $f(t_0,0)\le\liminf\limits_{\alpha}f(t_\alpha, i_\alpha)$ whenever $(t_\alpha)_{\alpha\in I}$ is a net in $A\setminus \{t_0\}$ converging to $t_0$ and $i_\alpha\in\{-1,1\}$. Assume the contrary, i.e., there is a net $\left((t_\alpha, i_\alpha)\right)_{\alpha\in I}$ such that $\lim\limits_{\alpha} f(t_\alpha, i_\alpha)<f(t_0,0)$. Up to passing to a subnet we may assume that $f(t_\alpha,0)\to c\in\mathbb R$. Then $c\ge f(t_0,0)$. Let $\varepsilon>0$ be such that $\varepsilon<c-\lim\limits_{\alpha} f(t_\alpha, i_\alpha)$. Then $t_0$ is an accumulation point of $A_\varepsilon$ and hence by $(ii)$ we deduce
that
$f(t_0,0)\le \liminf\limits_{\alpha} f(t_\alpha,0)-\varepsilon=c-\varepsilon$. Since $\varepsilon$ may be arbitrarily close to $c-\lim\limits_\alpha f(t_\alpha, i_\alpha)$, we infer $f(t_0,0)\le\lim\limits_\alpha f(t_\alpha,i_\alpha)$, which is a contradiction with our assumption. Hence $f$ is lower semicontinuous at $t_0$.
$(e)$: Since $K\setminus \jmath(L)$ is an open discrete set in $K$, any function $f$ on $K$ with $f\circ \jmath\in \Bo_1(L)$ satisfies the condition that for each open $U\subset \mathbb R$ the set $f^{-1}(U)$ is expressible as a countable union of sets contained in the algebra generated by open sets.
On the other hand, if $f\in \Bo_1^b(K)$, then $f\circ \jmath\in \Bo_1^b(L)$ as $\jmath$ is a homeomorphic embedding.
$(f), (g), (h), (i)$: This also follows from the fact that any function on $K\setminus \jmath(L)$ is Borel and fragmented. \end{proof}
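To illustrate assertions $(a)$ and $(b)$, let $C\subset A$ be countable and $f=1_{C\times\{1\}}-1_{C\times\{-1\}}$. Then $f\circ\jmath=0$, the set in $(b)$ equals $C$, and for $\varepsilon\le1$ the set in $(a)$ equals $C$ as well; hence $f\in\Ba_1^b(K)$, while $f\in C(K)$ if and only if $C$ is finite.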
\subsection{Strongly affine functions on $X_{L,A}$} \label{ssec:sa-functions-onX}
This section is devoted to describing strongly affine functions and their multipliers on $X_{L,A}$. We start by describing maximal measures and characterizing the standard compact convex sets among the simplices $X_{L,A}$.
\begin{lemma}\label{L:maxmiry-dikous}
Let $K=K_{L,A}$, $E=E_{L,A}$ and $X=X_{L,A}$.
\begin{enumerate}[$(1)$]
\item A measure $\mu\in M_1(X)$ is maximal if and only if $\mu=\phi(\nu)$ for some $\nu\in M_1(K)$ such that the discrete part of $\nu$ is carried by $\Ch_E K$.
\item $X$ is a standard compact convex set if and only if $A$ contains no compact perfect subset.
\end{enumerate} \end{lemma}
\begin{proof}
$(1)$ Assume that $\mu\in M_1(X)$ is maximal. Then $\mu$ is carried by $\overline{\ext X}$ (see \cite[Proposition I.4.6]{alfsen}), hence $\mu=\phi(\nu)$ for some $\nu\in M_1(K)$. Moreover, if $\nu(\{x\})>0$ for some $x\in K\setminus\Ch_E K$, it is easy to check that $\mu$ is not maximal (cf. the characterization of simple boundary measures in \cite[p. 35]{alfsen}). This proves the `only if' part.
To prove the converse we observe that discrete measures carried by $\ext X$ are maximal. Further, assume that $\nu$ is a continuous measure on $K$. It is obviously carried by $\jmath(L)$. Then $\nu$ is $E$-maximal by the Mokobodzki maximal test \cite[Theorem 3.58]{lmns}. Indeed, let $f\in C(K)$ and $\varepsilon>0$. Let
$$C=\{t\in A;\, \abs{f(t,0)-f(t,1)}\ge\varepsilon\mbox{ or } \abs{f(t,0)-f(t,-1)}\ge\varepsilon\}.$$
By Proposition~\ref{p:topol-stacey} the set $C$ is finite.
Fix any $t_0\in L\setminus C$ and find a continuous function $h:L\to[0,\infty)$ such that $h(t_0)=0$ and
$$h(t)=\max\{ \abs{f(t,0)-f(t,1)},\abs{f(t,0)-f(t,-1)} \}\mbox{ for }t\in C.$$
Then
$$g=(f\circ\jmath+\varepsilon+h)\circ\psi \in E\mbox{ and }g\ge f.$$
In particular,
$$f^*(t_0,0)\le g(t_0,0)\le f(t_0,0)+\varepsilon.$$
Hence $\{t\in L;\, f^*(t,0)>f(t,0)+\varepsilon\}\subset C$, so it is a finite set. We conclude that $[f^*\ne f]\cap \jmath(L)$ is countable and hence $\nu$-null. So, we have verified condition $(iv)$ from \cite[Theorem 3.58]{lmns} and hence we conclude that $\nu$ is $E$-maximal. Thus $\phi(\nu)$ is a maximal measure on $X$ by \cite[Proposition 4.28(d)]{lmns}. Since maximal measures form a convex set, the proof of the `if part' is complete.
$(2)$ Assume that $A$ contains a compact perfect set $P$. Then $P$ carries a continuous Radon probability $\nu$. By $(1)$ the measure $\mu=\phi(\jmath(\nu))$ is maximal. Since $\mu$ is carried by $\phi(\jmath(P))$, which is a compact set disjoint from $\ext X$, $X$ is not standard.
To prove the converse, assume that $A$ contains no compact perfect set. Let $N\supset \ext X$ be a universally measurable set. Then $B=\phi^{-1}(N\cap\overline{\ext X})$ is a universally measurable subset of $K$ containing $\Ch_E K$. In particular, $\jmath(L)\setminus B$ is a universally measurable subset of $\jmath(A)$, so it is universally null (as it contains no compact perfect set). Now it easily follows from $(1)$ that $N$ carries all maximal measures. \end{proof}
We continue by introducing a piece of notation which we will repeatedly use in this section.
\begin{notation} If $f$ is any function on $K$, we define \[ \gls{ftilde}(t,i)=\begin{cases}
\frac12(f(t,-1)+f(t,1)),& t\in A, i=0,\\
f(t,i)&\mbox{otherwise}.
\end{cases} \] \end{notation}
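For instance, if $t_0\in A$ and $f=1_{\{(t_0,1)\}}$, then $\widetilde{f}=1_{\{(t_0,1)\}}+\tfrac12 1_{\{(t_0,0)\}}$. In general, $\widetilde{f}$ is the unique function which coincides with $f$ outside $A\times\{0\}$ and satisfies $\widetilde{f}(t,0)=\tfrac12(\widetilde{f}(t,-1)+\widetilde{f}(t,1))$ for every $t\in A$.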
In the next result we describe the structure of multipliers for the intermediate function space of strongly affine functions on $X_{L,A}$.
\begin{prop} \label{P:dikous-sa-new} Let $K=K_{L,A}$, $E=E_{L,A}$, $X=X_{L,A}$ and $H=A_{sa}(X)$. Then the following assertions are valid. \begin{enumerate}[$(a)$]
\item The space $H$ is determined by extreme points.
\item $H=V(\ell^\infty(K)\cap E^{\perp\perp})$ and
$$\begin{aligned}
\ell^\infty(K)\cap E^{\perp\perp}=\{f\in \ell^\infty(K);\, & f\circ\jmath\mbox{ is universally measurable and }
\\&
f(t,0)=\tfrac{1}{2}(f(t,-1)+f(t,1)) \mbox{ for }t\in A\}.\end{aligned}$$
\item
$\begin{aligned}[t]
M^s(H)=M(H)=V(\{f\in \ell^\infty(K)\cap E^{\perp\perp} ;\, & \{t\in A;\, f(t,1)\ne f(t,-1)\}\\&\mbox{ is a universally null set}\}).\end{aligned}$
\item
$\begin{alignedat}[t]{4}
\mathcal A_H=\mathcal A^s_H&= \Big\{ &&\phi((B\setminus A)\times\{0\} \cup (B\cap A)\times \{-1,1\}\cup N_1\times\{-1\} \cup N_2\times\{1\});\, \\&&&
B\subset L\mbox{ universally measurable}, N_1,N_2\subset A \mbox{ universally null} \Big\}
\\&=\Big\{&&\phi(B);\, B\subset\Ch_E K, \psi(B)\mbox{ is universally measurable},
\\&&&\qquad\psi(B)\cap\psi(\Ch_E K\setminus B)\mbox{ is universally null}\Big\}
\end{alignedat}
$
\end{enumerate} \end{prop}
\begin{proof} $(a)$: This follows from an easy observation that $\overline{\ext X}\subset\operatorname{conv}(\ext X)$.
$(b)$: The first equality follows from Lemma~\ref{L:function space}$(a)$. Let us prove the second one. Inclusion `$\subset$' is clear as $\frac12(\varepsilon_{(t,-1)}+\varepsilon_{(t,1)})-\varepsilon_{(t,0)}\in E^\perp$ for each $t\in A$.
Let us continue by showing the converse.
Given $f\in C(K)$, Proposition~\ref{p:topol-stacey}$(a)$ yields that $f$ and $\widetilde{f}$ differ only at a countable set, in particular $\widetilde{f}$ is a Borel function.
It is proved in \cite[Section 2]{kalenda-bpms} that
$$E^\perp=\{\nu\in M(K);\, \int_{K}\widetilde f \,\mbox{\rm d}\nu=0\mbox{ for each }f\in C(K)\}.$$
Fix any $\nu\in E^\perp$. By the above formula we deduce that for any $g\in C(L)$ we have $\int_{K} g\circ\psi\,\mbox{\rm d}\nu=0$, thus
$\psi(\nu)=0$. Since $\nu$ is a Radon measure, it may be expressed
in the form
$$\nu=\jmath(\nu_0) + \sum_{j\in\mathbb N} (a_j \varepsilon_{(t_j,-1)}+b_j\varepsilon_{(s_j,1)}),$$
where $\nu_0\in M(L)$, $s_j,t_j\in A$ and $\sum_{j\in\mathbb N}(\abs{a_j}+\abs{b_j})<\infty$. Equality $\psi(\nu)=0$ then means that $\nu_0+ \sum_{j\in\mathbb N} (a_j \varepsilon_{t_j}+b_j\varepsilon_{s_j})=0$.
We deduce that $\nu$ may be expressed as
$$\nu=\sum_{j\in\mathbb N} (a_j\varepsilon_{(t_j,-1)}+b_j \varepsilon_{(t_j,1)}-(a_j+b_j)\varepsilon_{(t_j,0)})$$
for some $t_j\in A$ and real numbers $a_j,b_j$ satisfying $\sum_{j\in\mathbb N}(\abs{a_j}+\abs{b_j})<\infty$.
For each $j\in\mathbb N$ let $f_j=1_{\{(t_j,1)\}}-1_{\{(t_j,-1)\}}$. Then $f_j\in C(K)$ and $\widetilde{f_j}=f_j$, so $\int f_j\,\mbox{\rm d}\nu=0$. It follows that $a_j=b_j$, i.e.,
$$\nu=\sum_{j\in\mathbb N} a_j(\varepsilon_{(t_j,-1)}+ \varepsilon_{(t_j,1)}-2\varepsilon_{(t_j,0)}),$$
thus clearly $\int f\,\mbox{\rm d}\nu=0$ for each $f$ from the set on the right-hand side. This completes the proof of assertion $(b)$.
$(c)$:
It follows from Lemma~\ref{L:function space}, Lemma~\ref{L:maxmiry-dikous}, assertion $(b)$ and the definitions that, given $f\in \ell^\infty(K)\cap E^{\perp\perp}$ we have
$$\begin{gathered}
V(f)\in M(A_{sa}(X))\Longleftrightarrow \forall g\in \ell^\infty(K)\cap E^{\perp\perp}\colon \widetilde{fg}\circ\jmath\mbox{ is universally measurable}, \\
V(f)\in M^s(A_{sa}(X))\Longleftrightarrow \forall g\in \ell^\infty(K)\cap E^{\perp\perp}\colon [\widetilde{fg}\ne (fg)] \mbox{ is universally null}.\end{gathered}$$
Therefore, any function from the last set is a strong multiplier. Since $M^s(H)\subset M(H)$ always holds, it remains to prove that any multiplier belongs to the last set.
To this end, assume that $B=\{t\in A;\, f(t,1)\ne f(t,-1)\}$ is not universally null. It follows that there is a continuous measure $\nu\in M_1(L)$ with $\nu^*(B)>0$. Then there is a set $C\subset B$ which is not $\nu$-measurable. Let $g=\chi_{C\times \{1\}}-\chi_{C\times\{-1\}}$. Then $g\in \ell^\infty(K)\cap E^{\perp\perp}$, but $\widetilde{fg}\circ\jmath$ is not $\nu$-measurable. Thus $V(f)\notin M(A_{sa}(X))$.
$(d)$: This follows from $(c)$ and Theorem~\ref{T:integral representation H} (using moreover \eqref{eq:podmny ChEK}). \end{proof}
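To illustrate assertion $(c)$, take any set $C\subset A$ and let $f=1_{C\times\{1\}}-1_{C\times\{-1\}}$. Since $f\circ\jmath=0$, the function $V(f)$ is strongly affine by $(b)$, and since $\{t\in A;\, f(t,1)\ne f(t,-1)\}=C$, assertion $(c)$ shows that $V(f)$ is a (strong) multiplier of $A_{sa}(X)$ if and only if $C$ is universally null. In particular, every countable $C$ yields a multiplier, while for $L=A=[0,1]$ and $C=[0,1]$ we obtain a strongly affine function which is not a multiplier.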
To compare systems $\mathcal A^s_H$ and $\mathcal S_H$ for the individual intermediate function spaces $H$ we need a description of nice split faces. Such a description is contained in the following lemma.
\begin{lemma}\label{L:dikous-split-new} Let $K=K_{L,A}$, $E=E_{L,A}$ and $X=X_{L,A}$. Let $B\subset\Ch_{E}K$. Then the following assertions are equivalent: \begin{enumerate}[$(1)$]
\item There is a split face $F$ of $X$ such that $F\cap\ext X=\phi(B)$ and $\lambda_F$ is strongly affine.
\item There is a split face $F$ of $X$ such that $F\cap\ext X=\phi(B)$ and both $F$ and $F'$ are measure extremal.
\item $\psi(B)$ is universally measurable and
\[
\{t\in A;\, B\cap\{(t,-1),(t,1)\} \mbox{ contains exactly one point}\}
\]
is a universally null set. \end{enumerate} \end{lemma}
\begin{proof}
$(1)\implies(2)$: This is obvious.
$(2)\implies(3)$: Let $f= \lambda_F\circ \phi$ and $$C=\{t\in A;\, B\cap\{(t,-1),(t,1)\} \mbox{ contains exactly one point}\}.$$
Since $\lambda_F$ is affine and $[\lambda_F=1]\cup[\lambda_F=0]=F\cup F'\supset \ext X$, we deduce that $f$ attains only values $0,\frac12,1$ and
$$C=\{t\in L;\, f(t,0)=\tfrac12\}.$$ Since $[f=0]$ and $[f=1]$ are universally measurable, we deduce that $C$ is also a universally measurable set.
Assume it is not universally null. It follows there is a continuous probability $\mu\in M_1(L)$ supported by a compact subset of $C$. Let $\nu=\jmath(\mu)$. By Lemma~\ref{L:maxmiry-dikous} we deduce that $\phi(\nu)$ is a maximal measure on $X$. Let $z$ be the barycenter of $\phi(\nu)$ in $X$. Then $z=\lambda_F(z)y+(1-\lambda_F(z))y'$ for some $y\in F$ and $y'\in F'$. Let $\mu_y$ and $\mu_{y'}$ be maximal measures representing $y$ and $y'$. Since $F$ and $F'$ are measure extremal, we deduce that $\mu_y$ is supported by $F$ and $\mu_{y'}$ is supported by $F'$. Since $\lambda_F(z)\mu_y+(1-\lambda_F(z))\mu_{y'}$ is a maximal measure representing $z$, the simpliciality of $X$ implies
$\phi(\nu)=\lambda_F(z)\mu_y+(1-\lambda_F(z))\mu_{y'}$, so $\phi(\nu)$ is supported by $F\cup F'=[\lambda_F=1]\cup[\lambda_F=0]$, i.e., $\nu$ is supported by $[f=0]\cup[f=1]$. But by the construction $\nu$ is supported by $C=[\lambda_F=\frac12]$. This contradiction completes the argument.
$(3)\implies(1)$: Define $$C=\{t\in A;\, B\cap\{(t,-1),(t,1)\} \mbox{ contains exactly one point}\}.$$ By the assumption $C$ is universally null. Set
$$f(t,i)=\begin{cases}
1, & (t,i)\in B,\\
0, & (t,i)\in \Ch_{E}K\setminus B,\\
1, & t\in A, i=0, \{(t,-1),(t,1)\}\subset B,\\
0, & t\in A, i=0, \{(t,-1),(t,1)\}\cap B=\emptyset,\\
\frac12, & t\in C, i=0.
\end{cases}$$
Then $V(f)$ is strongly affine by Proposition~\ref{P:dikous-sa-new}. Since $0\le V(f)\le 1$,
clearly $F_1=[V(f)=1]$ and $F_2=[V(f)=0]$ are faces of $X$. Since $V(f)$ is strongly affine, these faces are measure convex and measure extremal. By Lemma~\ref{L:maxmiry-dikous} we deduce that $F_1\cup F_2$ carries all maximal measures. Lemma~\ref{l:complementarni-facy} then yields that $F_2=F_1'$. By Corollary~\ref{c:simplex-facejesplit} we know that $F_1$ is a split face. Since obviously $V(f)=\lambda_{F_1}$, the proof is complete.
\end{proof}
\begin{cor}\label{cor:dikous-split-me} Let $K,E,X$ be as in Proposition~\ref{P:dikous-sa-new}.
Let $F\subset X$
be a split face such that both $F$ and $F'$ are measure extremal. Then the function $\lambda_F$ is strongly affine. \end{cor}
\begin{proof}
Assume that $F$ is such a split face. Define $f$ and $C$ as in the proof of implication $(2)\implies(3)$ of the previous lemma. Then $f$ is universally measurable and $C$ is universally null (by the quoted implication). By Proposition~\ref{P:dikous-sa-new} we deduce that $f\in \ell^\infty(K)\cap E^{\perp\perp}$ and hence $V(f)$ is strongly affine. It is thus enough to prove that $\lambda_F=V(f)$. Let $x\in X$ be arbitrary. Then $x=\lambda_F(x)y+(1-\lambda_F(x))y'$ for some $y\in F$ and $y'\in F'$. Let $\mu_y$ and $\mu_{y'}$ be maximal measures representing $y$ and $y'$. Since $F$ and $F'$ are measure extremal, we deduce that $\mu_y$ is supported by $F$ and $\mu_{y'}$ is supported by $F'$. Let $\nu_y$ and $\nu_{y'}$ be the unique measures on $K$ with $\phi(\nu_y)=\mu_y$ and $\phi(\nu_{y'})=\mu_{y'}$. Let
$\nu=\lambda_F(x)\nu_y+(1-\lambda_F(x))\nu_{y'}$. Then $\phi(\nu)$ is a maximal measure representing $x$. Therefore
$$V(f)(x)=\int f\,\mbox{\rm d}\nu = \lambda_F(x)\int f\,\mbox{\rm d}\nu_y=\lambda_F(x)$$
as $\nu_y$ is supported by $[f=1]$. This completes the proof. \end{proof}
\begin{cor}\label{cor:dikous-sa-split}
Let $K,E,X,H$ be as in Proposition~\ref{P:dikous-sa-new}. Then
$$\begin{aligned}
\mathcal A^s_H&= \mathcal A_H=\mathcal S_H\\&=\{\ext X\cap F;\, F\subset X\mbox{ is a split face with $F$ and $F'$ measure extremal}\}
\subset \mathcal Z_H. \end{aligned} $$
The converse to the last inclusion holds if and only if $A$ does not contain any compact perfect subset. \end{cor}
\begin{proof}
The equalities follow by combining Proposition~\ref{P:dikous-sa-new}$(d)$ with Lemma~\ref{L:dikous-split-new}. The inclusion is obvious.
Assume $A$ contains no compact perfect subset. By Lemma~\ref{L:maxmiry-dikous} we get that $X$ is a standard compact convex set. So, Lemma~\ref{L:SH=ZH} shows that the equality holds.
Conversely, assume that $A$ contains a compact perfect subset $D$. Let $$f=1_{D\times\{1\}}+\frac12 1_{D\times\{0\}}.$$ Then $V(f)\in A_{sa}(X)$ and $\ext X\subset [V(f)=1]\cup[V(f)=0]$. Since $D$ is not universally null, Lemma~\ref{L:dikous-split-new} shows that $[V(f)=1]\cap \ext X\in\mathcal Z_H\setminus\mathcal S_H$. \end{proof}
The following example shows that in condition $(2)$ of Lemma~\ref{L:dikous-split-new} the assumption of measure extremality cannot be replaced by measure convexity. It also witnesses that in Theorem~\ref{t:srhnuti-splitfaceu-metriz} some assumption on $X$ is needed.
\begin{example}\label{ex:dikous-divnyspitface}
Let $L=A=[0,1]$, $K=K_{L,A}$, $E=E_{L,A}$, $X=X_{L,A}$ and $H=A_{sa}(X)$. Then there is a split face $F\subset X$ with the following properties:
\begin{enumerate}[$(i)$]
\item Both $F$ and $F'$ are measure convex.
\item $\lambda_F$ is not strongly affine.
\item $F\cap\ext X\notin \mathcal S_H$.
\item There are maximal measures not carried by $F\cup F'$.
\end{enumerate} \end{example}
\begin{proof} The proof will be done in several steps.
{\tt Step 1:} Set $$\begin{aligned}
N_1&=\{\mu\in M_1(K);\, \mu(L\times\{-1\})=\mu([\tfrac12,1]\times\{0\})=\mu_d(L\times\{0\})=0\},\\
N_2&=\{\mu\in M_1(K);\, \mu(L\times\{1\})=\mu([0,\tfrac{1}{2}]\times\{0\})=\mu_d(L\times\{0\})=0\}, \end{aligned}$$ where $\mu_d$ denotes the discrete part of $\mu$. Then $N_1$ and $N_2$ are Borel measure convex subsets of $M_1(K)$.
It is enough to prove the statement for $N_1$. This set is the intersection of the following three sets: \begin{itemize}
\item $\{\mu\in M_1(K);\, \mu (L\times\{-1\})=0\}$: This is a closed convex set (note that $L\times\{-1\}$ is an open subset of $K$ and hence the function $\mu\mapsto \mu (L\times\{-1\})$ is lower semicontinuous), so it is a Borel measure convex set.
\item $\{\mu\in M_1(K);\,\mu([\tfrac12,1]\times\{0\})=0\}$: It follows for example from Lemma~\ref{L:function space}$(b)$ that $\mu\mapsto \mu([\tfrac12,1]\times\{0\})$ is a strongly affine nonnegative Borel function. Hence the set is Borel and measure convex.
\item $\{\mu\in M_1(K);\, \mu_d (L\times\{0\})=0\}$: It follows from \cite[Proposition 2.58]{lmns} that it is a $G_\delta$ set. Further, it is measure convex by the proof of Example~\ref{ex:d+s}. \end{itemize} So, $N_1$ is indeed a Borel measure convex set, being the intersection of three Borel measure convex sets.
{\tt Step 2:} Let $\theta:M_1(K)\to X$ be defined by $\theta(\mu)=r(\phi(\mu))$ for $\mu\in M_1(K)$. Then $\theta$ is an affine continuous surjection of $M_1(K)$ onto $X$. Moreover, $N_1=\theta^{-1}(\theta(N_1))$ and $N_2=\theta^{-1}(\theta(N_2))$.
The mapping $\theta$ is the composition of two mappings -- the mapping $\mu\mapsto\phi(\mu)$, which is an affine homeomorphism of $M_1(K)$ onto $M_1(\overline{\ext X})$, and the barycenter mapping, which is affine and continuous and, due to the Krein-Milman theorem, maps $M_1(\overline{\ext X})$ onto $X$. Therefore $\theta$ is an affine continuous surjection.
It is enough to prove the equality for $N_1$. Assume that $\mu_1\in N_1$ and $\mu_2\in M_1(K)$ are such that $\theta(\mu_1)=\theta(\mu_2)$. Then $\mu_2-\mu_1\in E^\perp$. It follows from the proof of Proposition~\ref{P:dikous-sa-new}$(b)$ that there is a sequence $(t_j)$ in $L$ and a summable sequence $(a_j)$ of real numbers such that $$\mu_2=\mu_1+\sum_j a_j(\varepsilon_{(t_j,1)}+\varepsilon_{(t_j,-1)}-2\varepsilon_{(t_j,0)}).$$ Since $\mu_2\ge0$ and $\mu_1(\{(t_j,-1)\})=\mu_1(\{(t_j,0)\})=0$, necessarily $a_j=0$ for each $j\in\mathbb N$. Thus $\mu_2=\mu_1$ and the argument is complete.
{\tt Step 3:} Let $F_1=\theta(N_1)$ and $F_2=\theta(N_2)$. Then $F_1$ and $F_2$ are disjoint Borel measure convex subsets of $X$.
Since $N_1$ and $N_2$ are clearly disjoint, by Step 2 we get that also $F_1$ and $F_2$ are disjoint. Further, by Step 1 we know that $N_1$ and $N_2$ are Borel sets, hence $F_1$ and $F_2$ are Borel sets by Lemma~\ref{L:kvocient}$(f)$ and Step 2. Finally, let us prove that $F_1$ is measure convex (the case of $F_2$ is completely analogous): Let $\mu\in M_1(X)$ be carried by $F_1$. Since $\theta$ is a continuous surjection (by Step 2), the mapping from $M_1(M_1(K))$ to $M_1(X)$ assigning to a measure its image measure is also surjective, so there is $\nu\in M_1(M_1(K))$ such that $\theta(\nu)=\mu$. Then $\nu(N_1)=\nu(\theta^{-1}(F_1))=\mu(F_1)=1$, thus $\nu$ is carried by $N_1$. Let $\sigma=r(\nu)$ be the barycenter of $\nu$. By Step 1 we know that $\sigma\in N_1$. Since clearly $r(\mu)=r(\theta(\nu))=\theta(r(\nu))=\theta(\sigma)\in F_1$, the argument is complete.
{\tt Step 4:} Given $x\in X$, there are $\lambda\in[0,1]$, $y_1\in F_1$ and $y_2\in F_2$ such that $x=\lambda y_1+(1-\lambda)y_2$. Moreover, $\lambda$ is uniquely determined. If $x\in X\setminus(F_1\cup F_2)$, then $y_1$ and $y_2$ are uniquely determined as well.
Let $x\in X$. Let $\mu$ be a maximal measure representing $x$. Then $\mu$ is carried by $\overline{\ext X}$, so there is $\nu\in M_1(K)$ with $\phi(\nu)=\mu$. It follows from Lemma~\ref{L:maxmiry-dikous} that $\nu_d(L\times\{0\})=0$. Hence there is $\lambda\in[0,1]$, $\nu_1\in N_1$ and $\nu_2\in N_2$ such that $\nu=\lambda\nu_1+(1-\lambda)\nu_2$. Then $$x=\theta(\nu)=\lambda \theta(\nu_1)+(1-\lambda)\theta(\nu_2),$$ which proves the existence.
To prove the uniqueness, assume that $$x=ty_1+(1-t)y_2$$ for some $t\in[0,1]$, $y_1\in F_1$ and $y_2\in F_2$. Let $\sigma_1\in N_1$ and $\sigma_2\in N_2$ be such that $y_1=\theta(\sigma_1)$ and $y_2=\theta(\sigma_2)$. Let $\sigma=t\sigma_1+(1-t)\sigma_2$. Then $x=\theta(\sigma)$, i.e., $x$ is the barycenter of $\phi(\sigma)$. Since $\sigma_d(L\times\{0\})=0$, Lemma~\ref{L:maxmiry-dikous} says that $\phi(\sigma)$ is maximal. Since $X$ is a simplex, we deduce that $\phi(\sigma)=\mu$, hence $\sigma=\nu$. It follows that $$t=\sigma(L\times\{1\}\cup [0,\tfrac12]\times\{0\})=\nu(L\times\{1\}\cup [0,\tfrac12]\times\{0\})=\lambda.$$ Moreover, if $x\in X\setminus(F_1\cup F_2)$, then $\lambda\in (0,1)$, so $\sigma_1=\nu_1$ and $\sigma_2=\nu_2$. Hence $y_1=\theta(\nu_1)$ and $y_2=\theta(\nu_2)$, which completes the argument.
{\tt Step 5:} $F_1$ is a split face of $X$ and $F_1'=F_2$.
Let $x\in F_1$ and $x=\frac12(y+z)$ for some $y,z\in X$. Let $y=t y_1+(1-t) y_2$ and $z= s z_1+(1-s)z_2$ be the decompositions provided by Step 4. If $s=t=0$, then $x\in F_2$, which is impossible. If $s+t\in(0,2)$, then $$x=\tfrac12(ty_1+(1-t)y_2+s z_1+(1-s) z_2)= \tfrac{t+s}{2}\cdot\tfrac{t y_1+s z_1}{t+s} +\tfrac{2-t-s}{2}\cdot\tfrac{(1-t)y_2+(1-s)z_2}{2-t-s}.$$ Step 4 then shows that $s+t=2$, a contradiction. Finally, if $s=t=1$, then $y,z\in F_1$. This completes the proof that $F_1$ is a face. In the same way we see that $F_2$ is also a face. Hence clearly $F_2\subset F_1'$. The converse inclusion can be proved by repeating the proof of Lemma~\ref{l:complementarni-facy}$(ii)$.
{\tt Step 6:} $\lambda_{F_1}$ is not strongly affine.
Let $\mu$ be the normalized Lebesgue measure on $[0,\frac12]\times\{0\}$ and $x=\theta(\mu)$. Since $\mu\in N_1$, we get $x\in F_1$ and hence $\lambda_{F_1}(x)=1$. On the other hand, $x$ is the barycenter of $\phi(\mu)$, the measure $\phi(\mu)$ is carried by $\phi([0,\frac12]\times\{0\})$, and $\lambda_{F_1}=\frac12$ on this set (for each $t\in L$ the point $\phi(t,0)$ is the midpoint of $\phi(t,1)\in F_1$ and $\phi(t,-1)\in F_2=F_1'$). Thus $\int\lambda_{F_1}\,\mbox{\rm d}\phi(\mu)=\frac12\ne1=\lambda_{F_1}(x)$, so $\lambda_{F_1}$ is not strongly affine.
{\tt Step 7:} $F_1\cap\ext X\notin \mathcal S_H$.
It is enough to observe that $F_1\cap\ext X=\phi(L\times\{1\})$ and this set is not in $\mathcal S_H$ by Lemma~\ref{L:dikous-split-new}.
{\tt Step 8:} Let $\mu$ be the Lebesgue measure on $L\times\{0\}$. Then $\phi(\mu)$ is a maximal measure (by Lemma~\ref{L:maxmiry-dikous}) not carried by $F_1\cup F_2$, since $\lambda_{F_1}=\frac12$ on $\phi(L\times\{0\})$ by Step 6 and hence this set is disjoint from $F_1\cup F_2$.
\end{proof}
\subsection{Continuous affine functions on $X_{L,A}$} \label{ssec:cont-af-X}
Now we focus on the space of continuous affine functions on $X_{L,A}$.
\begin{prop}\label{P:dikous-spoj-new} Let $K=K_{L,A}$, $E=E_{L,A}$, $X=X_{L,A}$ and $H=A_c(X)$. Then the following assertions are valid.
\begin{enumerate}[$(a)$]
\item $M^s(H)=M(H)=\Phi(\{f\circ\psi;\, f\in C(L)\})$.
\item The facial topology on $\ext X$ is formed by sets
\[\begin{aligned}
\phi((\psi^{-1}(U)\cap \Ch_{E}K) \setminus F),\quad &
U\subset L\mbox{ open}, F\subset A\times\{-1,1\}\\&\mbox{ such that $\psi(F)$ has no accumulation point in }U.\end{aligned}
\]
\item $\mathcal A^s_H=\mathcal A_H=\{\phi(\psi^{-1}(U)\cap\Ch_E K);\, U\subset L\mbox{ open}\}.$
\end{enumerate} \end{prop}
\begin{proof}
$(a)$: The first equality follows from Proposition~\ref{P:rovnostmulti}; let us look at the second one. Inclusion `$\supset$' is clear. The converse follows from Lemma~\ref{L:mult-nutne}(i) since $\phi(t,0)$ is the barycenter of $\frac12(\varepsilon_{\phi(t,-1)}+\varepsilon_{\phi(t,1)})$ for each $t\in A$.
$(b)$: Since $X$ is a simplex, \cite[Theorem II.6.2]{alfsen} implies that the facial topology is
$$\{\ext X\setminus G;\, G\subset X\mbox{ a closed face}\}.$$
So, assume $G$ is a closed face of $X$. Then $B=\phi^{-1}(G)$ is a closed subset of $K$. Let
$$F=\{(t,i)\in B\cap (A\times\{-1,1\});\, (t,-i)\notin B\}.$$
If $t_0$ is an accumulation point of $\psi(F)$ in $L$, then $(t_0,0)\in\overline{F}\subset B$. Hence, if $t_0\in A$, we deduce that $\{t_0\}\times\{-1,0,1\}\subset B$ (as $G$ is a face).
Then $B'=B\setminus F$ is a closed subset of $K$ with $\psi^{-1}(\psi(B'))=B'$, hence $U=L\setminus\psi(B')$ is an open subset of $L$ and no accumulation point of $\psi(F)$ belongs to $U$. Clearly
$$\Ch_{E}K\setminus B= (\psi^{-1}(U)\cap\Ch_{E}K)\setminus F,$$
so any facially open set is of the given form.
Conversely, assume $U$ and $F$ satisfy the above conditions. Let
$$B=\psi^{-1}(L\setminus U)\cup F.$$
Then $B$ is a closed subset of $K$. Further, it is easy to check that it is a Choquet set in the sense of \cite[Definition 8.27]{lmns}. Since $X$ is a simplex, it follows from \cite[Theorem 8.60 and Proposition 8.42]{lmns} that $\overline{\operatorname{conv}\phi(B)}$ is a split face. Since clearly
$$\ext X\setminus \overline{\operatorname{conv}\phi(B)} = \phi(\Ch_{E}K\setminus B)= \phi((\psi^{-1}(U)\cap \Ch_{E}K) \setminus F),$$
the proof is complete.
$(c)$: This follows from $(a)$ and Proposition~\ref{p:system-aha}. \end{proof}
\begin{remark}
In the situation from the previous proposition, $M(H)$ is determined by $\mathcal A_H$ (this follows easily from Theorem~\ref{T: meritelnost H=H^uparrow cap H^downarrow}) and also by the facial topology (see \cite[Theorem II.7.10]{alfsen}). But $\mathcal A_H$ is a proper subfamily of the facial topology whenever $A\ne\emptyset$.
This illustrates the feature addressed in Remark~\ref{rem:o meritelnosti}$(2)$. \end{remark}
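For a concrete instance, let $C\subset A$ be a finite nonempty set and $f=1_{C\times\{1\}}-1_{C\times\{-1\}}$. Then $f\in E$ (it is continuous by Proposition~\ref{p:topol-stacey}$(a)$ and satisfies the defining barycentric condition), so $\Phi(f)\in A_c(X)$. However, $f$ is not of the form $g\circ\psi$ for any $g\in C(L)$, so $\Phi(f)\notin M(A_c(X))$ by $(a)$; on the other hand, the corresponding strongly affine function is a strong multiplier of the larger space $A_{sa}(X)$ by Proposition~\ref{P:dikous-sa-new}$(c)$, as finite sets are universally null.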
\subsection{Baire strongly affine functions on $X_{L,A}$} \label{ssec:baire-af-X}
Next we look at spaces of Baire strongly affine functions. The relevant results are contained in two propositions -- the first one is devoted to Baire-one functions and the second one to general Baire functions. Before coming to these results we give a topological lemma.
\begin{lemma}\label{L:countable Baire}
Let $L$ be a compact space and let $B\subset L$ be a countable set. Then $B$ is a Baire set if and only if it consists of $G_\delta$ points of $L$. \end{lemma}
\begin{proof}
Any closed $G_\delta$ subset of $L$ is $\operatorname{Coz}_\delta$, hence Baire. Since Baire sets form a $\sigma$-algebra, the proof of the `if part' is complete.
To prove the converse, assume $B$ is a Baire set and fix any $x\in B$. Enumerate $B\setminus\{x\}=\{b_n;\, n\in\mathbb N\}$. By the Urysohn lemma there is a continuous function $f_n:L\to [0,1]$ such that $f_n(x)=1$ and $f_n(b_j)=0$ for $j\le n$. Then $B_n=B\cap[f_n>0]$ is a Baire set.
We conclude that $\{x\}=\bigcap_n B_n$ is a Baire set as well. As the complement $L\setminus\{x\}$ is also Baire, hence Lindel\"of, it is $F_\sigma$. It follows that $x$ is a $G_\delta$ point. \end{proof}
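For instance, if $L$ is metrizable, then every point of $L$ is $G_\delta$ and so every countable subset of $L$ is a Baire set, while in $L=[0,1]^\Gamma$ with $\Gamma$ uncountable no point is $G_\delta$, and hence no nonempty countable subset is Baire.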
\begin{prop}\label{P:dikous-A1-new}
Let $K=K_{L,A}$, $E=E_{L,A}$, $X=X_{L,A}$ and $H=A_1(X)$. Then the following assertions are valid:
\begin{enumerate}[$(a)$]
\item $\begin{aligned}[t]
H=V\Big(\Big\{f\in\ell^\infty(K);\,& f\circ\jmath\in\Ba_1(L), f(t,0)=\tfrac12(f(t,-1)+f(t,1))\mbox{ for }t\in A, \\& \{t\in A;\, f(t,1)\neq f(t,-1)\}\mbox{ is countable}\Big\}\Big).
\end{aligned}$
\item $\begin{aligned}[t]
M&^s(H)=M(H)\\&=V\Big(\Big\{f\in\ell^\infty(K);\, f\circ\jmath\in\Ba_1(L), f(t,0)=\tfrac12(f(t,-1)+f(t,1))\mbox{ for }t\in A, \\& \quad\forall\varepsilon>0\colon\{t\in A;\, \abs{f(t,1)- f(t,-1)}\ge\varepsilon\}\mbox{ is a countable $G_\delta$ subset of }L\Big\}\Big)
\end{aligned}$
\item $\begin{aligned}[t]
\mathcal A_H&=\mathcal A^s_H=\{\phi(\psi^{-1}(F)\cap \Ch_{E}(K)\cup C_1\times\{1\}\cup
C_2\times\{-1\});\, F\subset L\mbox{ a }\operatorname{Zer}_\sigma\mbox{ set},\\
&\qquad\qquad C_1,C_2\subset A\mbox{ countable sets consisting of $G_\delta$ points in }L\}
\\ &=\{ \phi(B);\, B\subset\Ch_E K, \psi(B)\mbox{ is a $\operatorname{Zer}_\sigma$ subset of }L, \\ & \quad \psi(B)\cap\psi(\Ch_E K\setminus B)
\mbox{ is a countable set consisting of $G_\delta$ points in }L\}.
\end{aligned}$
\item $\begin{aligned}[t]
\mathcal A_H&=\mathcal A^s_H=\mathcal S_H\\&=\{\ext X\setminus F;\, F\mbox{ is a $\operatorname{Coz}_\delta$ split face such that $F'$ is a Baire set}
\\&\qquad\qquad \mbox{ and
$F,F'$ are measure convex}\}=\mathcal Z_H. \end{aligned} $
\end{enumerate} \end{prop}
\begin{proof}
$(a)$: Inclusion `$\subset$' follows from Proposition~\ref{p:topol-stacey}$(b)$ and Proposition~\ref{P:dikous-sa-new}$(b)$. Conversely, assume that $f$ satisfies the conditions on the right-hand side. By Proposition~\ref{p:topol-stacey}$(b)$ and Proposition~\ref{P:dikous-sa-new}$(b)$ we deduce that $f\in \Ba_1^b(K)\cap E^{\perp\perp}$, and thus $V(f)\in A_1(X)$ by Lemma~\ref{L:function space}$(b)$.
$(b)$: The first equality follows from Proposition~\ref{P:rovnostmulti}; let us look at the second one. Fix a function $f$ on $K$ such that $V(f)\in A_1(X)$. Using $(a)$ and Proposition~\ref{p:topol-stacey}$(b)$ we see that
$V(f)\in M(H)$ if and only if
$$V(g)\in H\Longrightarrow \widetilde{fg}\circ\jmath\in\Ba_1(L).$$
Let us now prove the two inclusions:
`$\supset$': Assume $f$ satisfies the condition on the right-hand side and fix any $g$ with $V(g)\in A_1(X)$. Then $fg\in \Ba_1(K)$, $fg=\widetilde{fg}$ on $A\times \{-1,1\}$ and on $(L\setminus A)\times \{0\}$. For $t\in A$ we have
\begin{equation*}
\begin{aligned}
f(t,0)g(t,0)-\widetilde{fg}(t,0)&=
\tfrac14(f(t,-1)+f(t,1))(g(t,-1)+g(t,1))\\&\qquad-\tfrac12(f(t,-1)g(t,-1)+f(t,1)g(t,1))
\\&=\tfrac14(f(t,-1)-f(t,1))(g(t,1)-g(t,-1)),
\end{aligned}
\end{equation*}
hence
$$\abs{f(t,0)g(t,0)-\widetilde{fg}(t,0)}\le\tfrac12\norm{g}_\infty\abs{f(t,1)-f(t,-1)},$$
so, for each $\varepsilon>0$,
\[
\{t\in L;\, \abs{f(t,0)g(t,0)-\widetilde{fg}(t,0)}\ge\varepsilon\}
\subset
\{t\in A;\, \tfrac12\norm{g}_\infty\abs{f(t,1)-f(t,-1)}\ge \varepsilon\}.
\] The latter set is a countable $G_\delta$ subset of $L$, and hence the former set is also a countable $G_\delta$ set in $L$.
So, $(fg-\widetilde{fg})\circ\jmath\in \Ba_1(L)$, because Baire-one functions on normal spaces are characterized via $G_\delta$ measurability of their level sets (see \cite[Exercise 3.A.1]{lmz}). Thus $\widetilde{fg}\circ \jmath\in\Ba_1(L)$ and therefore $V(f)\in M(A_1(X))$.
`$\subset$': Assume that $V(f)\in M(H)$ and $\varepsilon>0$ is given. Let
\[ B=\{t\in A;\, \abs{f(t,1)-f(t,-1)}\ge\varepsilon\}.
\] By Proposition~\ref{p:topol-stacey}$(b)$, $B$ is countable. We use the condition from the beginning of the proof for $g=f$, i.e., we use the fact that $\widetilde{f^2}\circ\jmath\in \Ba_1(L)$. Since $f^2\circ\jmath\in \Ba_1(L)$, the function \[ t\in L\mapsto \abs{f^2(t,0)-\widetilde{f^2}(t,0)}=\frac14\abs{(f(t,1)-f(t,-1))^2} \] is Baire-one on $L$. Hence the level sets of this function are $G_\delta$ sets, from which it follows that $B$ is a $G_\delta$ set.
$(c)$: The first equality follows from $(b)$; let us prove the second one:
`$\supset$': Let $F\subset L$ be a $\operatorname{Zer}_\sigma$ set. Then there is a Baire-one function $g:L\to[0,1]$ such that $F=[g>0]$ (see, e.g., \cite[Proposition 2]{kalenda-spurny}). Let $f=g\circ\psi$. Then $V(f)\in M(H)$ by $(b)$. Moreover,
$[f>0] = \psi^{-1}(F)$ and hence
$$\phi(\psi^{-1}(F)\cap\Ch_{E}K)\in\mathcal A_{H}$$
by Proposition~\ref{p:system-aha}.
Further, fix any $t\in A$ such that $\{t\}$ is a $G_\delta$ set in $L$ and $i\in\{-1,1\}$. Let $h=1_{\{(t,i)\}}-1_{\{(t,-i)\}}$. By $(b)$ we see that $V(h)\in M(H)$ and hence
\[
\{\phi(t,i)\}=[V(h)>0]\cap\ext X\in \mathcal A_{H}.
\]
Since $\mathcal A_{H}$ is closed with respect to countable unions by Proposition~\ref{p:system-aha}$(a)$, the proof of inclusion `$\supset$' is complete.
`$\subset$': Let $f$ be such that $V(f)\in M(H)$. For each $n\in\mathbb N$ set
\[
F_n=\{t\in L;\, f(t,0)>\tfrac1n\}\quad \mbox{and}\quad C_n=\{t\in A;\, \abs{f(t,1)-f(t,-1)}>\tfrac1n\}.
\]
Then $F_n$ is a $\operatorname{Zer}_\sigma$ set and $C_n$ is a countable $G_\delta$ set in $L$ (by $(b)$). Hence $C_n$ consists of $G_\delta$ points in $L$. The equality
\[
\begin{aligned}\relax
[f>0]\cap\Ch_{E}K&= \psi^{-1}\left(\bigcup_{n\in\mathbb N} (F_n\setminus C_n)\right)\cap \Ch_{E}K \cup\\
&\quad \bigcup_{n\in\mathbb N}\{(t,i)\in C_n\times\{-1,1\};\, f(t,i)>0\}.
\end{aligned}
\] now yields the assertion. Indeed, each set $C_n$, as a countable $G_\delta$ set, is a $\operatorname{Coz}_\delta$ set, and thus $F_n\setminus C_n$ is a $\operatorname{Zer}_\sigma$ set. Hence their union is a $\operatorname{Zer}_\sigma$ set.
Finally, the last equality follows from \eqref{eq:podmny ChEK}.
$(d)$: The first equality is repeated from $(c)$. Inclusion $\mathcal A^s_H\subset\mathcal S_H$ follows from Theorem~\ref{T:meritelnost-strongmulti}. Inclusion `$\subset$' from the third equality is obvious.
Let us prove the converse inclusion. Let $F\subset X$ be a $\operatorname{Coz}_\delta$ split face such that $F'$ is a Baire set and both $F$ and $F'$ are measure convex. By Proposition~\ref{p:shrnuti-splitfaceu-baire} we get that $\lambda_F$ is Baire and strongly affine. Let $f$ be the function on $K$ such that $V(f)=\lambda_{F'}$ and let $B=\phi^{-1}(F)$. Let $$ \begin{aligned}
C&=\{t\in L;\, f(t,0)=\tfrac{1}{2}\}\\
&=\{t\in A;\, B\cap\{(t,1),(t,-1)\}\mbox{ contains exactly one point}\}. \end{aligned} $$ It follows from Proposition~\ref{p:topol-stacey}$(c)$ that $C$ is a countable Baire set, hence by Lemma~\ref{L:countable Baire} it consists of $G_\delta$ points.
Let us enumerate $C=\{t_n;\, n\in\mathbb N\}$ and for each $n\in\mathbb N$ let $i_n\in \{-1,1\}$ be such that $(t_n,i_n)\notin B$.
Let $g\in \Ba_1(L)$ be such that $0\le g\le 1$ and $[g=1]=B$. Such a function exists for example by \cite[Proposition 2]{kalenda-spurny}. We define a sequence $(g_n)$ of functions on $K$ by
$$g_n(t,i)=\begin{cases}
1- g(t)^n, & t\in L\setminus\{t_1,\dots,t_n\},\\
f(t,i), & t\in\{t_1,\dots,t_n\}.
\end{cases}$$ Then $g_n\in \Ba_1(K)\cap E^{\perp\perp}$ and $g_n\nearrow f$. It follows that $V(g_n)\in A_1(X)$ and $V(g_n)\nearrow V(f)=\lambda_{F'}$. Thus $\lambda_{F'}\in H^\uparrow$ and the argument is complete.
Finally, $\mathcal S_H=\mathcal Z_H$ by Lemma~\ref{L:SH=ZH}.
\end{proof}
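To illustrate assertion $(b)$, let $L=A=[0,1]$ and $f=1_{C\times\{1\}}-1_{C\times\{-1\}}$ for a countable set $C\subset[0,1]$. Then $V(f)\in A_1(X)$ by $(a)$, and the sets appearing in $(b)$ equal $C$ for $\varepsilon\le2$ and are empty otherwise, so $V(f)\in M(A_1(X))$ if and only if $C$ is $G_\delta$. In particular, $C=\mathbb Q\cap[0,1]$ yields a Baire-one strongly affine function which is not a multiplier of $A_1(X)$.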
We pass to the case of general strongly affine Baire functions.
\begin{prop}\label{P:dikous-Bair-new}
Let $K=K_{L,A}$, $E=E_{L,A}$, $X=X_{L,A}$ and $H=(A_c(X))^\mu$. Then the following assertions are valid.
\begin{enumerate}[$(a)$]
\item $H=(A_c(X))^\sigma=\Ba^b(X)\cap A_{sa}(X)$.
\item $\begin{aligned}[t]
H=V\Big(\Big\{f\in\ell^\infty(K);\,& f\circ\jmath\in\Ba(L), f(t,0)=\tfrac12(f(t,-1)+f(t,1))\mbox{ for }t\in A, \\& \{t\in A;\, f(t,1)\neq f(t,-1)\}\mbox{ is countable}\Big\}\Big)\end{aligned}$
\item $\begin{aligned}[t]
M&^s(H)=M(H)\\&=V\Big(\Big\{f\in\ell^\infty(K);\, f\circ\jmath\in\Ba(L), f(t,0)=\tfrac12(f(t,-1)+f(t,1))\mbox{ for }t\in A, \\& \{t\in A;\, f(t,1)\neq f(t,-1)\}\mbox{ is a countable Baire subset of }L\Big\}\Big)\end{aligned}$
\item $\begin{aligned}[t]
\mathcal A_H&=\mathcal A^s_H=\{\phi(\psi^{-1}(F)\cap \Ch_{E}(K)\cup C_1\times\{1\}\cup
C_2\times\{-1\});\, F\subset L\ \mbox{Baire},\\
&\qquad\qquad C_1,C_2\subset A\mbox{ countable sets consisting of $G_\delta$ points in }L\}
\\ &=\{ \phi(B);\, B\subset\Ch_E K, \psi(B), \psi(\Ch_E K\setminus B)\mbox{ are Baire subsets of }L, \\ & \qquad \qquad\psi(B)\cap\psi(\Ch_E K\setminus B)
\mbox{ is countable}\}.
\end{aligned}$
\item $\begin{aligned}[t]
\mathcal A_H&=\mathcal A^s_H=\mathcal S_H\\&=\{\ext X\cap F;\, F\mbox{ is a split face such that}\\&\qquad\qquad F, F'\mbox{ are Baire and measure convex}\}= \mathcal Z_H. \end{aligned} $
\end{enumerate} \end{prop}
\begin{proof} $(a)$: Since $X$ is a simplex, the equality $H=(A_c(X))^\sigma=\Ba^b(X)\cap A_{sa}(X)$ follows from Proposition~\ref{P:Baire-srovnani}$(iii)$.
$(b)$: This follows by combining Proposition~\ref{p:topol-stacey}$(c)$, Proposition~\ref{P:dikous-sa-new}$(b)$ and Lemma~\ref{L:function space}$(b)$.
$(c)$: The first equality follows from Proposition~\ref{P:rovnostmulti}; let us look at the second one. We first observe that $V(g\circ\psi)\in M(H)$ whenever $g\in \Ba^b(L)$. So, given $f$ such that $V(f)\in H$, we get $V(f)\in M(H)\Leftrightarrow V(f-f\circ\jmath\circ\psi)\in M(H)$. Moreover, $$f(t,1)-(f\circ\jmath\circ\psi)(t,1) -(f(t,-1)-(f\circ\jmath\circ\psi)(t,-1))=f(t,1)-f(t,-1),$$ so it is enough to prove the equality for the functions satisfying $f\circ\jmath=0$ on $L$.
`$\subset$': Assume that $f\circ\jmath=0$, $V(f)\in M(H)$ and $B=\{t\in A;\, f(t,1)\neq f(t,-1)\}$. Then $V(f)\in H$ and hence $f$ satisfies the conditions in $(b)$.
Further, the function \[ g(t,i)=\begin{cases}
1,& t\in B, f(t,i)>0,\\
-1,& t\in B, f(t,i)<0,\\
0&\text{ otherwise} \end{cases} \] satisfies $V(g)\in H$ by $(b)$. Since $V(f)\in M(H)$, $\widetilde{fg}\circ\jmath$ is a Baire function on $L$. Since \[ \widetilde{fg}(t,0)\begin{cases}
=0,& t\in L\setminus B,\\
>0,& t\in B, \end{cases} \] the set $B$ is a Baire subset of $L$. (It is countable by $(b)$.)
`$\supset$': Let $f\circ\jmath=0$ and $f$ satisfy the conditions on the right-hand side. Then $V(f)\in H$ by $(b)$. Further, let $g$ with $V(g)\in H$ be given. If $\widetilde{fg}(t,0)$ is nonzero for some $t\in A$, then $f(t,1)\neq f(t,-1)$. But the set $$B=\{t\in A;\, \abs{f(t,1)-f(t,-1)}>0\}$$ is a countable Baire set in $L$, and thus each of its subsets is also Baire (this follows from Lemma~\ref{L:countable Baire}). From this we obtain that $\widetilde{fg}$ is Baire on $K$, and consequently $V(f)\in M(H)$.
$(d)$: The first equality follows from $(c)$; let us continue with the second one.
`$\supset$': Let $F\subset L$ be a Baire set. Then $1_F$ is a Baire function on $L$. Let $f=1_F\circ\psi$. Then $V(f)\in M(H)$ by $(c)$. Then
$[f=1] = \psi^{-1}(F)$ and hence
$$\phi(\psi^{-1}(F)\cap\Ch_{E}K)\in\mathcal A_{H}$$
by Theorem~\ref{T:integral representation H}(i).
Further, fix any countable $C\subset A$ such that $C$ is a Baire set in $L$. Let $h=1_{C\times \{1\}}-1_{C\times\{-1\}}$. By $(c)$ we see that $V(h)\in M(H)$ and hence
\[
\phi(C\times\{1\})=[V(h)>0]\cap\ext X\in \mathcal A_{H}\mbox{ and } \phi(C\times\{-1\})=[V(h)<0]\cap\ext X\in \mathcal A_{H}
\]
by Theorem~\ref{T:integral representation H}. Since $\mathcal A_H$ is a $\sigma$-algebra, the proof of inclusion `$\supset$' is complete.
`$\subset$': Let $f$ be such that $V(f)\in M(H)$ and $\ext X\subset[V(f)=1]\cup [V(f)=0]$. Set
\[
F=\{t\in L;\, f(t,0)=1\}\mbox{ and }C=\{t\in A;\, f(t,1)\ne f(t,-1)\}.
\]
Then $F$ is a Baire set and $C$ is a countable Baire set in $L$ by $(c)$.
The equality
\[
\begin{aligned}\relax
[f=1]\cap\Ch_{E}K&= \psi^{-1}\left(F\right)\cap \Ch_{E}K \cup\{(t,i)\in C\times\{-1,1\};\, f(t,i)=1\}
\end{aligned}
\] now yields the assertion.
$(e)$: This is just a special case of Theorem~\ref{T:baire-multipliers}$(a)$. \end{proof}
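In the setting $L=A=[0,1]$, consider again $f=1_{C\times\{1\}}-1_{C\times\{-1\}}$ for a countable set $C\subset[0,1]$. The set $\{t\in A;\, f(t,1)\ne f(t,-1)\}=C$ is automatically a countable Baire subset of $[0,1]$, so $V(f)\in M(H)=M^s(H)$ for $H=(A_c(X))^\mu$ by $(c)$. In particular, for $C=\mathbb Q\cap[0,1]$ we obtain a multiplier of $(A_c(X))^\sigma$ which is not a multiplier of $A_1(X)$ (cf.\ Proposition~\ref{P:dikous-A1-new}$(b)$).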
\subsection{Fragmented affine functions on $X_{L,A}$} \label{ssec:fragmented-dikous}
We continue by analyzing spaces $A_f(X_{L,A})$ and $(A_f(X_{L,A}))^\mu$. Their subclasses consisting of Borel functions will be analyzed in the subsequent section. We start by providing a topological lemma on fragmented functions and resolvable sets.
\begin{lemma}\label{L:dedicneres} Let $L$ be a compact space.
\begin{enumerate}[$(a)$]
\item Let $B\subset L$. Then $B$ is scattered if and only if each subset of $B$ is resolvable.
\item Let $B\subset L$. Then $B$ is $\sigma$-scattered if and only if each subset of $B$ belongs to $\sigma(\mathcal H)$ (the $\sigma$-algebra generated by resolvable sets).
\item $(\Fr^b(L))^\mu=(\Fr^b(L))^\sigma$ and this system coincides with the family of all bounded $\sigma(\mathcal H)$-measurable functions on $L$.
\end{enumerate} \end{lemma}
\begin{proof}
$(a)$: Assume $B$ is scattered. Then any subset of $B$ is also scattered, hence resolvable. Conversely, assume $B$ is not scattered, i.e., there exists a nonempty relatively closed set $H\subset B$ without isolated points. Then $F=\overline{H}$ is a compact set in $L$ without isolated points. We assume first that $F\setminus H$ is not nowhere dense in $F$. Then there exists a closed set $C\subset F$ such that $\overline{C\cap B}=\overline{C\setminus B}=C$. Thus $B$ is not resolvable.
In case $F\setminus H$ is nowhere dense in $F$, we use \cite[Theorem 3.7]{comfort} to get a pair $D_1,D_2\subset F$ of disjoint dense sets in $F$. Then both the sets $D_1\cap H$ and $D_2\cap H$ are dense in $H$. It follows that $D_1\cap H$ is a subset of $B$ which is not resolvable.
$(b)$: The `only if' part follows from $(a)$.
To prove the `if' part we argue by contraposition and assume that $B$ is not $\sigma$-scattered. If $B\notin\sigma(\mathcal H)$, the proof is complete. Thus we assume that $B\in\sigma(\mathcal H)$. It follows from \cite[Theorem 6.13]{hansell-DT} that $B$ is `scattered $K$-analytic'. By combining \cite[Theorem 1]{holicky-cmj01} with \cite[Lemma 2.3]{namioka-pol} we deduce that $B$ is `almost \v{C}ech-analytic' in the sense of \cite{namioka-pol}. Hence we deduce from \cite[Corollary~5.4]{namioka-pol} that $B$ contains a nonempty compact set $P$ without isolated points. Then $P$ can be mapped onto $[0,1]$ by a continuous function $\varphi\colon P\to [0,1]$. We select a set $C\subset [0,1]$ that is not analytic (by \cite[Theorem 29.7]{kechris} it is enough to take a non-measurable set). Then $D=\varphi^{-1}(C)\subset P$ is not scattered $K$-analytic. Indeed, if this were the case, then by \cite[Corollary 7]{HoSp} $C$ would be scattered $K$-analytic and hence analytic by \cite[Proposition 1]{holicky-cmj93} and \cite[Theorem 25.7]{kechris}. Thus $D\notin\sigma(\mathcal H)$ (by \cite[Theorem 6.13]{hansell-DT}).
$(c)$: Inclusion $(\Fr^b(L))^\mu\subset (\Fr^b(L))^\sigma$ is obvious. The converse follows easily from the fact that $\Fr^b(L)$ is a lattice. Further, the family of $\sigma(\mathcal H)$-measurable functions is closed under limits of pointwise converging sequences and by Theorem~\ref{T:a} it contains $\Fr^b(L)$. So, it contains $(\Fr^b(L))^\mu$. Conversely, it is easy to see that $1_D\in (\Fr^b(L))^\mu$ for any $D\in\sigma(\mathcal H)$ and that simple functions are uniformly dense in bounded $\sigma(\mathcal H)$-measurable functions, which proves the converse inclusion. \end{proof}
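For instance, in $L=[0,1]$ the set $\{0\}\cup\{\tfrac1n;\, n\in\mathbb N\}$ is scattered, so by $(a)$ all its subsets are resolvable. On the other hand, $\mathbb Q\cap[0,1]$ is $\sigma$-scattered (being countable) but not scattered; it is not resolvable itself (both it and its complement are dense in $[0,1]$), yet each of its subsets, being countable, belongs to $\sigma(\mathcal H)$.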
\begin{prop}\label{P:dikous-af-new} Let $K=K_{L,A}$, $E=E_{L,A}$, $X=X_{L,A}$ and $H=A_f(X)$. Then the following assertions are valid. \begin{enumerate}[$(a)$]
\item $\begin{aligned}[t]
H=V\big(\{f\in\ell^\infty(K);\, &f\circ\jmath\in \Fr^b(L) \\&\mbox{ and }f(t,0)=\tfrac12(f(t,1)+f(t,-1))\mbox{ for }t\in A\}\big).\end{aligned}$ \item $\begin{aligned}[t]
M^s&(H)=M(H)\\&=V\big(\{f\in\ell^\infty(K);\, f\circ\jmath\in \Fr^b(L), f(t,0)=\tfrac12(f(t,1)+f(t,-1))\mbox{ for }t\in A
\\&\qquad\qquad
\mbox{and }\forall\varepsilon>0\colon \{t\in A;\, \abs{f(t,1)-f(t,-1)}\ge\varepsilon\}\mbox{ is scattered}\}\big).\end{aligned}$ \item $\begin{aligned}[t]
\mathcal A_H=\mathcal A_H^s&= \{\phi(\psi^{-1}(F)\cap \Ch_{E} K\cup C_1\times\{1\}\cup
C_2\times\{-1\});\, \\&\qquad\qquad F\subset L\text{ is an } \mathcal H_\sigma\mbox{ set},
C_1,C_2\subset A\text{ are $\sigma$-scattered}\}
\\&=\{\phi(B);\, B\subset \Ch_E K, \psi(B)\in\mathcal H_\sigma, \\&\qquad\qquad\psi(B)\cap\psi(\Ch_E K\setminus B)\mbox{ is $\sigma$-scattered}\}. \end{aligned}$
\item $\begin{aligned}[t]
\mathcal A_H&=\mathcal A^s_H=\mathcal S_H\\&=\{\ext X\setminus F;\, F\subset X \mbox{ is a split face}, F\in\mathcal H_\delta, F'\in\sigma(\mathcal H),\\&\qquad\qquad F, F'\mbox{ are measure extremal}\}\subset\mathcal Z_H\end{aligned}$
Moreover, the converse to the last inclusion holds if and only if $A$ contains no compact perfect subset.
\end{enumerate}
\end{prop}
\begin{proof} $(a)$: The assertion follows from Proposition~\ref{p:topol-stacey}$(g)$, Proposition~\ref{P:dikous-sa-new}$(b)$ and Lemma~\ref{L:function space}$(b)$.
$(b)$: Inclusion $M^s(H)\subset M(H)$ always holds. To prove the remaining ones we proceed similarly to the case of Baire functions. First observe that $V(g\circ\psi)\in M^s(H)$ for each $g\in \Fr^b(L)$. Hence $V(f)\in M^s(H)$ if and only if $V(f-f\circ\jmath\circ\psi)\in M^s(H)$ and also $V(f)\in M(H)$ if and only if $V(f-f\circ\jmath\circ\psi)\in M(H)$. So, as above, it is enough to prove the remaining inclusions for functions satisfying $f\circ\jmath=0$.
Let us continue by inclusion `$\subset$' from the second equality. Assume that $f\circ\jmath=0$ and $V(f)\in M(H)$. Fix $\varepsilon>0$ and let $$B=\{t\in A;\, \abs{f(t,1)-f(t,-1)}\ge \varepsilon\}.$$ Assume $B$ is not scattered. By Lemma~\ref{L:dedicneres}$(a)$ we find a subset $C\subset B$ which is not resolvable. Let $g$ be the function on $K$ defined by \[ g(t,i)=\begin{cases}
1, & t\in C, f(t,i)>0,\\
-1, &t\in C, f(t,i)<0,\\
0, &\text{otherwise}. \end{cases} \] Then $V(g)\in H$ (by $(a)$). Since $V(f)\in M(H)$, we deduce $V(\widetilde{fg})\in H$. But we have \[ \widetilde{fg}(t,0)=\begin{cases}
0,& t\in L\setminus C,\\
\frac12\abs{f(t,1)-f(t,-1)}\ge\frac{\varepsilon}{2}, & t\in C. \end{cases} \] Since $C$ is not resolvable, we easily deduce that $\widetilde{fg}\circ \jmath$ is not fragmented, a contradiction.
To prove the remaining inclusion assume that $f$ satisfies the conditions on the right-hand side and $f\circ\jmath=0$. We aim to prove that $V(f)\in M^s(H)$. Given $\varepsilon>0$ let $$B_\varepsilon=\{t\in A;\, \abs{f(t,1)-f(t,-1)}\ge \varepsilon\} \quad\mbox{and}\quad f_\varepsilon(t,i)=\begin{cases}
f(t,i), & t\in B_\varepsilon,\\ 0, & \mbox{otherwise}. \end{cases}$$
Then $\|f-f_\varepsilon\|\le\frac\ep2$. Since $M^s(H)$ is closed, it is enough to show that $V(f_\varepsilon)\in M^s(H)$. So, let a function $g$ with $V(g)\in H$ be given. Then $\widetilde{f_\varepsilon g}(t,0)=0$ for $t\in L\setminus B_\varepsilon$. Since $B_\varepsilon$ is scattered, it follows that $\widetilde{f_\varepsilon g}\circ \jmath$ is fragmented and so $V(\widetilde{f_\varepsilon g})\in H$. Moreover, $$[\widetilde{f_\varepsilon g}\ne f_\varepsilon g]\subset B_\varepsilon\times\{0\}.$$ Since $B_\varepsilon$ is scattered and hence universally null, it follows from Lemma~\ref{L:maxmiry-dikous} that $V(\widetilde{f_\varepsilon g})=V(f_\varepsilon)V(g)$ $\mu$-almost everywhere for each maximal measure $\mu\in M_1(X)$. Thus $V(f_\varepsilon)\in M^s(H)$ and the proof is complete.
$(c)$: The first equality follows from $(b)$; let us look at the second one:
`$\supset$': Let $F\subset L$ be an $\mathcal H_\sigma$ set. Then there exists a function $g\in \Fr^b(L)$ such that $[g>0]=F$. Then $f=g\circ\psi$ satisfies $V(f)\in M(H)$, and thus $\phi(\psi^{-1}(F)\cap \Ch_E K)\in \mathcal A_H$. If $C\subset A$ is scattered, the function $f=1_{C\times \{1\}}-1_{C\times \{-1\}}$ satisfies $V(f)\in M(H)$ by $(b)$. Hence $\phi(C\times\{1\})\in \mathcal A_H$, because $C\times\{1\}=[f>0]$. Similarly $\phi(C\times\{-1\})\in\mathcal A_H$. Now we conclude by noticing that $\mathcal A_H$ is closed with respect to finite intersections and countable unions, which finishes the proof of the inclusion `$\supset$'.
`$\subset$': Let $f$ with $V(f)\in M(H)$ be given. For each $n\in\mathbb N$ set
\[
F_n=\{t\in L;\, f(t,0)>\tfrac1n\}\quad \mbox{and}\quad C_n=\{t\in A;\, \abs{f(t,1)-f(t,-1)}\ge\tfrac1n\}.
\]
Then $F_n$ is an $\mathcal H_\sigma$ set and $C_n$ is a scattered set in $L$ (by $(b)$). The equality
\[
\begin{aligned}\relax
[f>0]\cap\Ch_{E}K&= \psi^{-1}\left(\bigcup_{n\in\mathbb N} (F_n\setminus C_n)\right)\cap \Ch_{E}K \cup\\
&\quad \bigcup_{n\in\mathbb N}\{(t,i)\in C_n\times\{-1,1\};\, f(t,i)>0\}.
\end{aligned}
\] now yields the assertion. Indeed, each $C_n$ is a resolvable set, and thus $F_n\setminus C_n$ is an $\mathcal H_\sigma$ set. Hence their union is an $\mathcal H_\sigma$ set.
$(d)$: The first equality is repeated from $(c)$. Inclusion $\mathcal A^s_H\subset\mathcal S_H$ follows from Theorem~\ref{T:meritelnost-strongmulti}. Inclusion `$\subset$' from the third equality is obvious.
Next assume that $F$ is a split face such that $F\in\mathcal H_\delta$, $F'\in\sigma(\mathcal H)$ and $F,F'$ are measure extremal. By Corollary~\ref{cor:dikous-split-me} we know that $\lambda_F$ is strongly affine. Let $f$ be the function on $K$ such that $\lambda_F=V(f)$. Then $f$ attains only values $0,\frac12,1$. Since $[f=1]\in\mathcal H_\delta$ and $[f=0]\in\sigma(\mathcal H)$, we deduce that $[f=\frac12]\in\sigma(\mathcal H)$ as well. Further, by Lemma~\ref{L:dikous-split-new} the set $[f=\frac12]$ is universally null and hence it contains no compact perfect subset.
It follows from the proof of Lemma~\ref{L:dedicneres}$(b)$ that $[f=\frac12]$ is $\sigma$-scattered. Let
$$B=\phi^{-1}(\ext X\setminus F)=[f=0]\cap\Ch_E K.$$
Then
$$L\setminus\psi(B)=\jmath^{-1}([f=1]\cap\jmath(L))\in\mathcal H_\delta,$$
thus $\psi(B)\in\mathcal H_\sigma$. Further, $\psi(B)\cap\psi(\Ch_E K\setminus B)=\jmath^{-1}([f=\frac12])$ is $\sigma$-scattered.
Thus $\phi(B)\in\mathcal A_H$ by $(c)$. This completes the proof of the equalities.
The last inclusion is obvious.
Assume $A$ contains no compact perfect subset. By Lemma~\ref{L:maxmiry-dikous} we know that $X$ is standard, so the equality follows from Lemma~\ref{L:SH=ZH}.
Conversely, assume that $A$ contains a compact perfect subset $D$. Let $f=1_{D\times\{1\}}+\frac12 1_{D\times\{0\}}$. Then $V(f)\in A_f(X)$, $\ext X\subset [V(f)=1]\cup[V(f)=0]$, but $[V(f)>0]\cap \ext X\notin \mathcal A_H$ by $(c)$. \end{proof}
\begin{prop}\label{P:dikous-Afmu-new} Let $K=K_{L,A}$, $E=E_{L,A}$, $X=X_{L,A}$ and $H=(A_f(X))^\mu$. Then the following assertions are valid.
\begin{enumerate}[$(a)$]
\item $\begin{aligned}[t]
H=(A_f(X))^\sigma=V\big(\{f\in&\ell^\infty(K);\, f\circ\jmath\in (\Fr^b(L))^\sigma\mbox{ and }\\&f(t,0)=\tfrac12(f(t,1)+f(t,-1))\mbox{ for }t\in A\}\big).\end{aligned}$
\item $\begin{aligned}[t]
M^s(H)=M(H)=V\big(\{f\in&\ell^\infty(K);\, f\circ\jmath\in (\Fr^b(L))^\sigma,\\&f(t,0)=\tfrac12(f(t,1)+f(t,-1))\mbox{ for }t\in A,
\\ & \{t\in A;\, f(t,1)\neq f(t,-1)\} \mbox{ is $\sigma$-scattered}
\}\big).\end{aligned}$
\item $\begin{aligned}[t]
\mathcal A_H=\mathcal A^s_H=&
\{\phi(\psi^{-1}(F)\cap \Ch_{E} K\cup C_1\times\{1\}\cup
C_2\times\{-1\});\,\\&\qquad\qquad F\subset L, F\in\sigma(\mathcal H),
C_1,C_2\subset A\mbox{ are $\sigma$-scattered}\}
\\=&\{\phi(B);\, B\subset\Ch_E K, \psi(B)\in\sigma(\mathcal H), \\&\qquad\qquad\psi(B)\cap\psi(\Ch_E K\setminus B)\mbox{ is $\sigma$-scattered}\}.
\end{aligned}
$
\item $\begin{aligned}[t]
\mathcal A_H&=\mathcal A_H^s=\mathcal S_H\\&=\{F\cap\ext X;\, F\mbox{ is a split face}, F,F'\mbox{ belong to }\sigma(\mathcal H)\\&\qquad\mbox{ and are measure extremal}\}\subset\mathcal Z_H. \end{aligned} $
Moreover, the converse to the last inclusion holds if and only if $A$ contains no compact perfect subset.
\end{enumerate} \end{prop}
\begin{proof}
$(a)$: Inclusion `$\subset$' in the first equality is obvious, in the second one it follows from Proposition~\ref{P:dikous-af-new}$(b)$.
For the proof of the converse inclusions we set \[ \begin{aligned} \mathcal A&=\{f\in\ell^\infty(K);\, f\circ \jmath=0 \text{ on } L\mbox{ and }f(t,0)=\tfrac12(f(t,1)+f(t,-1))\mbox{ for }t\in A\},\\
\mathcal B&=\{f\in\ell^\infty(K);\, f\circ\jmath\in (\Fr^b(L))^\mu\mbox{ and }f(t,0)=\tfrac12(f(t,1)+f(t,-1))\mbox{ for }t\in A\},\\ \mathcal C&=\{f\in\ell^\infty(K);\, f\circ\jmath\in (\Fr^b(L))^\mu\mbox{ and }f=f\circ \jmath\circ\psi\},\\ \mathcal D&=\{f\in\ell^\infty(K);\, f\circ\jmath\in \Fr^b(L)\mbox{ and }f=f\circ \jmath\circ\psi\}.\\ \end{aligned} \] Note that due to Lemma~\ref{L:dedicneres}$(c)$ it is enough to verify that $V(\mathcal B)\subset (A_f(X))^\mu$. To this end, let $b\in \mathcal B$ be given. Then we set $c=b\circ\jmath\circ\psi$ and $a=b-c$.
Then $a\in \mathcal A$, $c\in \mathcal C$ and $b=a+c$. It follows that $\mathcal B\subset\mathcal A+\mathcal C$. Since
$V(\mathcal A)\subset A_f(X)\subset (A_f(X))^\mu$ by Proposition~\ref{P:dikous-af-new}$(a)$, it is enough to check that $V(\mathcal C)\subset (A_f(X))^\mu$. But $V(\mathcal D)\subset A_f(X)$ by Proposition~\ref{P:dikous-af-new}$(a)$ and clearly $\mathcal C=\mathcal D^\mu$, so $V(\mathcal C)=V(\mathcal D^\mu)=V(\mathcal D)^\mu\subset (A_f(X))^\mu$, which finishes the proof.
$(b)$: We proceed similarly as in the proof of Proposition~\ref{P:dikous-af-new}$(b)$: Inclusion $M^s(H)\subset M(H)$ holds always. Again, $V(f\circ\psi)\in M^s(H)$ whenever $f\in(\Fr^b(L))^\sigma$, hence it is enough to prove the remaining inclusions within functions satisfying
$f\circ\jmath=0$ (i.e., $f\in\mathcal A$ using the notation from the proof of $(a)$).
Let us prove inclusion `$\subset$' from the second equality. Assume that $f\circ\jmath=0$ and $V(f)\in M(H)$. Let \[ B=\{t\in A;\, f(t,1)\neq f(t,-1)\}. \] We want to prove that $B$ is $\sigma$-scattered. Assume not. Then Lemma~\ref{L:dedicneres}$(b)$ provides a subset $C\subset B$ such that $C\notin\sigma(\mathcal H)$. Set \[ g(t,i)=\begin{cases}
1,&t\in C, f(t,i)>0,\\
-1,&t\in C, f(t,i)<0,\\
0&\text{ otherwise}. \end{cases} \] Then $g$ satisfies $V(g)\in A_f(X)\subset H$. It follows from $(a)$ that $\widetilde{fg}\circ \jmath\in (\Fr^b(L))^\sigma$. But \[ \{t\in L;\, \widetilde{fg}(t,0)>0\}=C\notin\sigma(\mathcal H). \]
Hence $\widetilde{fg}\circ \jmath\notin (\Fr^b(L))^\sigma$ (by Lemma~\ref{L:dedicneres}$(c)$), a contradiction.
Hence $B$ is $\sigma$-scattered.
To prove the remaining inclusion assume that $f\circ\jmath=0$ and that the set \[ B=\{t\in A;\, f(t,1)\neq f(t,-1)\} \] is $\sigma$-scattered. Let now $g$ with $V(g)\in H$ be given. Then $\widetilde{fg}$ is nonzero only on some subset $C$ of $B$, which is $\sigma$-scattered. Hence it follows by Lemma~\ref{L:dedicneres}$(c)$ that $\widetilde{fg}\circ j\in (\Fr^b(L))^\sigma$, which proves that $V(\widetilde{fg})\in H$ by $(a)$. Moreover, $[\widetilde{fg}\ne fg]\subset B\times\{0\}$. Since $B$ is $\sigma$-scattered and hence universally null, it follows from Lemma~\ref{L:maxmiry-dikous} that $V(\widetilde{fg})=V(f)V(g)$ $\mu$-almost everywhere for each maximal measure $\mu\in M_1(X)$. Hence $V(f)\in M^s(H)$.
$(c)$: The first equality follows from $(b)$. Let us prove the second one.
`$\subset$': Let $f$ with $V(f)\in M(H)$ and $\ext X\subset [V(f)=0]\cup[V(f)=1]$ be given. Set
\[
F=\{t\in L;\, f(t,0)=1\}\quad \mbox{and}\quad C=\{t\in A;\, f(t,1)\ne f(t,-1)\}.
\] Then $F\in\sigma(\mathcal H)$ and $C$ is a $\sigma$-scattered set in $L$ (by $(b)$). The equality
\[
\begin{aligned}\relax
[f=1]\cap\Ch_{E}K&= \psi^{-1}\left(F\right)\cap \Ch_{E}K \cup
\{(t,i)\in C\times\{-1,1\};\, f(t,i)=1\}
\end{aligned}
\] now yields the assertion.
`$\supset$': Let $F\subset L$ be a $\sigma(\mathcal H)$ set. Then $1_F\in (\Fr^b(L))^\sigma$ (by Lemma~\ref{L:dedicneres}$(c)$), hence $f=1_F\circ\psi$ satisfies $V(f)\in M(H)$, and thus $\phi(\psi^{-1}(F)\cap \Ch_E K)\in \mathcal A_H$. If $C\subset A$ is $\sigma$-scattered, the function $f=1_{C\times \{1\}}+\frac12\cdot 1_{C\times \{0\}}$ satisfies $V(f)\in M(H)$ by $(b)$. Hence $\phi(C\times\{1\})\in \mathcal A_H$. Similarly $\phi(C\times\{-1\})\in\mathcal A_H$. Now we conclude by noticing that $\mathcal A_H$ is closed with respect to finite intersections and countable unions, which finishes the proof of the inclusion ``$\supset$''.
The last equality follows from \eqref{eq:podmny ChEK}.
$(d)$: The first equality is repeated from $(c)$. Inclusion $\mathcal A_H^s\subset \mathcal S_H$ follows from Theorem~\ref{T:meritelnost-strongmulti}. Inclusion `$\subset$' from the third equality is obvious.
Next assume that $F$ is a split face such that $F,F'\in\sigma(\mathcal H)$ and $F,F'$ are measure extremal. By Corollary~\ref{cor:dikous-split-me} we know that $\lambda_F$ is strongly affine. Let $f$ be the function on $K$ such that $\lambda_F=V(f)$. Then $f$ attains only values $0,\frac12,1$. Since $[f=1]\in\sigma(\mathcal H)$ and $[f=0]\in\sigma(\mathcal H)$, we deduce that $[f=\frac12]\in\sigma(\mathcal H)$ as well. Further, by Lemma~\ref{L:dikous-split-new} the set $[f=\frac12]$ is universally null and hence it contains no compact perfect subset.
It follows from the proof of Lemma~\ref{L:dedicneres}$(b)$ that $[f=\frac12]$ is $\sigma$-scattered.
Let
$$B=\phi^{-1}(F\cap\ext X)=[f=1]\cap\Ch_E K.$$
Then
$$\psi(B)=\jmath^{-1}\big(([f=1]\cup[f=\tfrac{1}{2}])\cap\jmath(L)\big)\in\sigma(\mathcal H)$$
and $\psi(B)\cap\psi(\Ch_E K\setminus B)=\jmath^{-1}([f=\frac12])$ is $\sigma$-scattered.
Thus $\phi(B)\in\mathcal A_H$ by $(c)$. This completes the proof of the equalities.
The last inclusion is obvious.
Assume $A$ contains no compact perfect subset. By Lemma~\ref{L:maxmiry-dikous} we know that $X$ is standard, so the equality follows from Lemma~\ref{L:SH=ZH}.
Finally, assume that $A$ contains a compact perfect subset $D$. Let $f=1_{D\times\{1\}}+\frac12 1_{D\times\{0\}}$. Then $V(f)\in A_f(X)\subset H$, $\ext X\subset [V(f)=1]\cup[V(f)=0]$, but $[V(f)=1]\cap \ext X\notin \mathcal A_H$ by $(c)$. \end{proof}
\subsection{Borel strongly affine functions on $X_{L,A}$}
In this section we investigate the spaces $\overline{A_s(X_{L,A})}$, $(A_s(X_{L,A}))^\mu$, $A_b(X_{L,A})\cap \Bo_1(X_{L,A})$ and $(A_b(X_{L,A})\cap \Bo_1(X_{L,A}))^\mu$. So, these spaces consist of Borel strongly affine functions which are not necessarily Baire. We start by describing the space $\overline{A_s(X)}$. To this end, we need some basic facts on the derivation of a topological space. If $B$ is a topological space, let $B^{(0)}=B$ and let $B^{(1)}$ denote the set of all accumulation points of $B$ (in $B$). Inductively we define $B^{(n+1)}=(B^{(n)})^{(1)}$ for $n\in\mathbb N$. We say that $B$ is \emph{of finite height} provided $B^{(n)}=\emptyset$ for some $n\in\mathbb N\cup\{0\}$. We further recall that a set $B$ is \emph{isolated} if all its points are isolated (i.e., $B^{(1)}=\emptyset$).
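To illustrate the notion of height, consider for instance the ordinal interval $B=[0,\omega^2]$ with its order topology. Then
\[
B^{(1)}=\{\omega\cdot n;\, n\in\mathbb N\}\cup\{\omega^2\},\qquad B^{(2)}=\{\omega^2\},\qquad B^{(3)}=\emptyset,
\]
so $B$ is of finite height. On the other hand, the interval $[0,\omega^\omega]$ is scattered but not of finite height, since all its derived sets contain the point $\omega^\omega$.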
\begin{lemma}\label{L:finite rank}
Let $B$ be a topological space. Then the following two assertions are equivalent.
\begin{enumerate}[$(i)$]
\item $B$ is of finite height.
\item $B$ is a finite union of isolated subsets of $B$.
\end{enumerate} \end{lemma}
\begin{proof}
$(i)\implies(ii)$: Assume $B^{(n)}=\emptyset$. Then the sets
$$B^{(0)}\setminus B^{(1)},B^{(1)}\setminus B^{(2)},\dots, B^{(n-1)}\setminus B^{(n)}$$
are isolated and their union is $B$.
$(ii)\implies(i)$: If $B$ is isolated, then $B^{(1)}=\emptyset$ and hence $B$ has finite height. To complete the proof it remains to show that $B$ has finite height provided $B=B_1\cup B_2$, where $B_1$ has finite height and $B_2$ is isolated.
This may be proved by induction on the height of $B_1$. If $(B_1)^{(0)}=\emptyset$, then $B=B_2$ is isolated, hence of finite height. Assume that $n\in\mathbb N\cup\{0\}$ and the statement holds if $(B_1)^{(n)}=\emptyset$.
Next, assume $(B_1)^{(n+1)}=\emptyset$. Let $x\in B_1$ be any isolated point of $B_1$. Then there is an open set $U\subset B$ with $U\cap B_1=\{x\}$, so $U\subset B_2\cup\{x\}$. It follows that $U^{(2)}=\emptyset$. Since $x$ is arbitrary, we deduce that $B^{(2)}\subset (B_1)^{(1)}\cup B_2$. Since $((B_1)^{(1)})^{(n)}=(B_1)^{(n+1)}=\emptyset$, the induction hypothesis says that $B^{(2)}$ is of finite height. Thus $B$ is of finite height as well. \end{proof}
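Let us point out that assertion $(ii)$ cannot be strengthened to `$B$ is isolated', as the union of two isolated sets need not be isolated. For instance, in $[0,1]$ the sets $B_1=\{\tfrac1n;\, n\in\mathbb N\}$ and $B_2=\{0\}$ are isolated, while $B=B_1\cup B_2$ satisfies
\[
B^{(1)}=\{0\},\qquad B^{(2)}=\emptyset,
\]
so $B$ is of finite height but not isolated.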
\begin{lemma}\label{L:sigma isolated} Let $L$ be a compact space and $B\subset L$. \begin{enumerate}[$(a)$]
\item If $B$ is isolated, then $B$ is an $(F\wedge G)$ set.
\item If $B$ is of finite height, then $1_B\in\Bo_1(L)$.
\item If $B$ is $\sigma$-isolated, then each subset of $B$ is an $(F\wedge G)_\sigma$ set. \end{enumerate} \end{lemma}
\begin{proof} These results are known and easy to check. For the sake of completeness we provide the short proofs.
$(a)$: Assume $B$ is isolated. For each $t\in B$ let $U_t\subset L$ be open such that $U_t\cap B=\{t\}$. Then $G=\bigcup_{t\in B}U_t$ is an open set and $B=\overline{B}\cap G$.
$(b)$: If $B$ is isolated, the assertion follows from $(a)$. The general case then follows by Lemma~\ref{L:finite rank}.
Assertion $(c)$ follows from $(a)$. \end{proof}
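As a concrete illustration of $(a)$, consider the isolated set $B=\{\tfrac1n;\, n\in\mathbb N\}$ in $L=[0,1]$. Here $\overline{B}=B\cup\{0\}$ and, with the open set $G=(0,1]$,
\[
B=\overline{B}\cap G=\left(B\cup\{0\}\right)\cap(0,1],
\]
so $B$ is indeed the intersection of a closed set and an open set.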
This easy lemma will be used in the following propositions. There are some interesting related open problems discussed in the following section.
\begin{prop}\label{P:dikous-lsc--new}
Let $K=K_{L,A}$, $E=E_{L,A}$, $X=X_{L,A}$ and $H=\overline{A_s(X)}$. Then the following assertions are valid.
\begin{enumerate}[$(a)$]
\item Let $f\in\ell^\infty(K)\cap E^{\perp\perp}$ be such that $V(f)\in A_l(X)$. Then $f\circ\jmath$ is lower semicontinuous on $L$ and for each $\varepsilon>0$ the set $B_\varepsilon=\{t\in A;\, \abs{f(t,1)-f(t,-1)}\ge \varepsilon\}$ is of finite height.
\item $\begin{aligned}[t]
H=V\big(\{f\in& \ell^\infty(K);\, f\circ \jmath\in \overline{\Lb(L)}, f(t,0)=\tfrac{1}{2}(f(t,1)+f(t,-1))\mbox{ for }t\in A,\\ &
\forall\varepsilon>0\colon \{t\in A;\, \abs{f(t,1)-f(t,-1)}\ge \varepsilon\}\mbox{ is of finite height}\}\big).
\end{aligned}$
\item $M^s(H)=M(H)=H$.
\item $\begin{aligned}[t]
\mathcal A_H=\mathcal A^s_H&= \{\phi(\psi^{-1}(F)\cap \Ch_{E} K\cup C_1\times\{1\}\cup
C_2\times\{-1\});\,\\
&\qquad\qquad F\subset L\text{ is an } (F\wedge G)_\sigma\mbox{ set},C_1,C_2\subset A\text{ are $\sigma$-isolated}\}
\\&=\{\phi(B);\, B\subset\Ch_E K, \psi(B)\mbox{ is an $(F\wedge G)_\sigma$ set},\\&\qquad\qquad
\psi(B)\cap\psi(\Ch_E K\setminus B)\mbox{ is $\sigma$-isolated}\}.
\end{aligned}$
\item $\mathcal A_H=\mathcal A^s_H=\mathcal S_H=\mathcal Z_H$.
\item In case $L=[0,1]$ the space $M(H)=M^s(H)$ is not determined by the family $\mathcal A_H=\mathcal A^s_H$.
\end{enumerate} \end{prop}
\begin{proof} $(a)$: Assume that $V(f)\in A_l(X)$. Then $f$ is lower semicontinuous on $K$ and hence $f\circ \jmath$ is lower semicontinuous on $L$. Fix $\varepsilon>0$ and let $B=B_\varepsilon$. Let $M>\norm{f}=\norm{V(f)}$ be chosen. We claim that $f(t,0)\le M-k\frac{\varepsilon}{2}$ for each $t\in B^{(k)}$ (where $k\in\mathbb N\cup\{0\}$).
To this end we proceed inductively. For $k=0$ the assertion obviously holds true. Assume that for some $k\in\mathbb N\cup\{0\}$ the claim holds. Let $t\in B^{(k+1)}$ be given. Then there exists a net $(t_\alpha)\subset B^{(k)}$ such that $t_\alpha\to t$. Since $\abs{f(t_\alpha, 1)-f(t_\alpha,-1)}\ge \varepsilon$, we can select $i_\alpha\in \{-1,1\}$ such that $f(t_\alpha, i_\alpha)\le f(t_\alpha,0)-\frac{\varepsilon}{2}$. By Proposition~\ref{p:topol-stacey}$(d)$ and the induction hypothesis we get \[ f(t,0)\le \liminf_\alpha f(t_\alpha,0)-\tfrac{\varepsilon}{2}\le M-k\tfrac{\varepsilon}{2}-\tfrac{\varepsilon}{2}= M-(k+1)\tfrac{\varepsilon}{2}. \] This proves the claim.
The claim now yields that $B^{(n)}=\emptyset$ whenever $n\in\mathbb N$ satisfies $M-n\frac{\varepsilon}2<-M$. Hence $B$ is of finite height.
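For illustration, the claim yields an explicit bound: any $n\in\mathbb N$ with $n>\frac{4M}{\varepsilon}$ already satisfies $B^{(n)}=\emptyset$, since otherwise a point $t\in B^{(n)}$ would give
\[
-M<-\norm{f}\le f(t,0)\le M-n\tfrac{\varepsilon}{2}<M-2M=-M,
\]
which is impossible.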
$(b)$: '$\subset$': If $V(f)\in \overline{A_s(X)}$, then clearly $f\circ\jmath\in\overline{\Lb(L)}$. Further, let $$\mathcal F=\{f\in\ell^\infty(K)\cap E^{\perp\perp};\, \forall\varepsilon>0\colon B_\varepsilon(f)\mbox{ has finite height}\},$$ where $B_\varepsilon(f)$ is given by the formula from $(a)$. By assertion $(a)$ we know that $f\in\mathcal F$ whenever $V(f)$ is lower semicontinuous. It is easy to see that $\mathcal F$ is a closed linear subspace (note that Lemma~\ref{L:finite rank} implies that the union of two sets of finite height has finite height). It follows that $f\in\mathcal F$ whenever $V(f)\in H=\overline{A_s(X)}$.
Inclusion `$\supset$' will be proved in several steps:
{\tt Step 1:} Let $B\subset A$ be isolated and $f\in\ell^\infty(K)\cap E^{\perp\perp}$ satisfy $f=0$ on $\jmath(L)\cup ((A\setminus B)\times\{1,-1\})$. Then there exists a bounded lower semicontinuous function $g$ on $L$ such that $g\circ\psi+f$ is a lower semicontinuous function on $K$. In particular, $V(f)\in A_s(X)$.
To prove the claim, fix $M>\norm{f}$. Since $B$ is isolated, for each $t\in B$ we find an open neighborhood $U_t$ of $t$ in $L$ such that $U_t\cap B=\{t\}$. Let $G=\bigcup\{U_t;\, t\in B\}$. Then $G$ is an open set and we set $g=M\cdot 1_G$. Clearly, $g$ is bounded and lower semicontinuous. We will check that $h=g\circ\psi +f$ is lower semicontinuous on $K$.
To this end fix $(t,i)\in K$. If $i\in\{1,-1\}$, then $(t,i)$ is an isolated point of $K$ and hence the function $h$ is even continuous at $(t,i)$. So, assume $i=0$. If $t\in L\setminus G$, then $h(t,0)=0$. Since clearly $h\ge0$ on $K$, $h$ is lower semicontinuous at $(t,0)$. Finally, assume $t\in G$. It means that there is some $s\in B$ with $t\in U_s$. Then $U=\psi^{-1}(U_s)\setminus\{(s,1),(s,-1)\}$ is an open neighborhood of $(t,0)$ and $h=M$ on $U$. Thus $h$ is continuous at $(t,0)$.
{\tt Step 2:} Let $B\subset A$ be of finite height and $f\in\ell^\infty(K)\cap E^{\perp\perp}$ satisfy $f=0$ on $\jmath(L)\cup ((A\setminus B)\times\{1,-1\})$. Then $V(f)\in A_s(X)$.
Since $B$ is a finite union of isolated sets (by Lemma~\ref{L:finite rank}), $f$ is the sum of a finite number of functions satisfying assumptions of Step 1. It is now enough to use that $A_s(X)$ is a linear space.
{\tt Step 3:} We complete the proof. Let $f$ satisfy the conditions on the right-hand side. Let $\varepsilon>0$ be given. Then there is $g\in \Lb(L)$ such that $\norm{f\circ\jmath-g}<\varepsilon$. Further, the set $B=\{t\in A;\, \abs{f(t,1)-f(t,-1)}\ge \varepsilon\}$ is of finite height. Let \[ h(t,i)=\begin{cases}
f(t,i)-f(t,0),& t\in B, i\in\{-1,1\},\\
0,&\text{otherwise}.
\end{cases} \]
By Step 2 we deduce that $V(h)\in A_s(X)$. By Proposition~\ref{P:dikous-sa-new}$(v)$ and Lemma~\ref{L:function space}$(b)$ we deduce that $V(g\circ \psi)\in A_s(X)$. So, $V(h+g\circ\psi)\in A_s(X)$ as well. Let us estimate $\|f-h-g\circ\psi\|$:
If $t\in L$, then \[ \begin{aligned} \abs{f(t,0)-h(t,0)-g(t)}&=\abs{(f\circ \jmath)(t)-g(t)}<\varepsilon. \end{aligned} \] If $t\in A\setminus B$ and $i\in\{1,-1\}$, then \[ \begin{aligned} \abs{f(t,i)-h(t,i)-g(t)}&\le\abs{f(t,i)-f(t,0)}+\abs{f(t,0)-g(t)}\\&=\tfrac12\abs{f(t,1)-f(t,-1)}+\abs{(f\circ \jmath)(t)-g(t)} <\tfrac\ep2+\varepsilon<2\varepsilon. \end{aligned} \] Finally, if $t\in B$ and $i\in\{-1,1\}$, then \[ \begin{aligned} \abs{f(t,i)-h(t,i)-g(t)}&=\abs{f(t,0)-g(t)}=\abs{(f\circ \jmath)(t)-g(t)}<\varepsilon. \end{aligned} \] Thus $\norm{f-h-g\circ\psi}< 2\varepsilon$. Since $\varepsilon>0$ is arbitrary, we deduce $V(f)\in\overline{A_s(X)}=H$.
$(c)$: The first equality follows from Proposition~\ref{P:multi strong pro As}. Since $X$ is a simplex, the second equality follows from Proposition~\ref{P:As pro simplex}.
$(d)$: The first equality follows from $(c)$. Let us prove the second one.
`$\subset$': Let $f$ with $V(f)\in M(H)=H$ be given. For each $n\in\mathbb N$ set
\[
F_n=\{t\in L;\, f(t,0)>\tfrac1n\}\quad \mbox{and}\quad C_n=\{t\in A;\, \abs{f(t,1)-f(t,-1)}\ge\tfrac1n\}.
\]
Since $f\circ\jmath\in\overline{\Lb(L)}\subset\Bo_1(L)$, we deduce that $F_n$ is an $(F\wedge G)_\sigma$ set. By $(b)$ each $C_n$ is of finite height. The equality
\[
\begin{aligned}\relax
[f>0]\cap\Ch_{E}K&= \psi^{-1}\left(\bigcup_{n\in\mathbb N} (F_n\setminus C_n)\right)\cap \Ch_{E}K \cup\\
&\quad \bigcup_{n\in\mathbb N}\{(t,i)\in C_n\times\{-1,1\};\, f(t,i)>0\}.
\end{aligned}
\] now yields the assertion. Indeed, each $C_n$ is a finite union of isolated sets (by Lemma~\ref{L:finite rank}) and thus $F_n\setminus C_n$ is an $(F\wedge G)_\sigma$ set (by Lemma~\ref{L:sigma isolated}(b)). Hence their union is an $(F\wedge G)_\sigma$ set.
`$\supset$': Let $G\subset L$ be an open set. Then $V(1_G\circ \psi)\in A_s(X)$, hence $\phi(\psi^{-1}(G)\cap\Ch_E K)\in\mathcal A_H$. The same holds for $F\subset L$ closed. Hence, by Proposition~\ref{p:system-aha} we deduce that $\phi(\psi^{-1}(F)\cap\Ch_E K)\in\mathcal A_H$ for any $(F\wedge G)_\sigma$ set $F\subset L$.
If $C\subset A$ is isolated, the function $f=1_{C\times \{1\}}-1_{C\times \{-1\}}$ satisfies $V(f)\in H$ by $(b)$. Hence $\phi(C\times\{1\})\in \mathcal A_H$, because $C\times\{1\}=[f>0]$. Similarly $\phi(C\times\{-1\})\in\mathcal A_H$. Now we conclude by noticing that $\mathcal A_H$ is closed with respect to finite intersections and countable unions, which finishes the proof of the inclusion `$\supset$'.
Finally, the last equality follows from \eqref{eq:podmny ChEK}.
$(e)$: The first equality is repeated from $(d)$. The remaining equalities follow from Proposition~\ref{P:As pro simplex}.
$(f)$: Assume $L=[0,1]$. Within the proof of Proposition~\ref{P:vetsi prostory srov}$(b)$
we found a function $f\in\Bo_1^b(L)\setminus\overline{\Lb(L)}$. Then $V(f\circ\psi)\notin H$ (by $(b)$), but $V(f\circ\psi)|_{\ext X}$ is $\mathcal A_H$-measurable by $(d)$. \end{proof}
\begin{prop}\label{P:dikous-Asmu-new}
Let $K=K_{L,A}$, $E=E_{L,A}$, $X=X_{L,A}$ and $H=(A_s(X))^\mu$. Then the following assertions are valid.
\begin{enumerate}[$(a)$]
\item $\begin{aligned}[t]
H=(A_s(X))^\sigma=V\big(\{f\in \ell^\infty(K)&;\,
f\circ\jmath\in\Bo^b(L),\\&f(t,0)=\tfrac{1}{2}(f(t,1)+f(t,-1))\mbox{ for }t\in A, \\& \{t\in A;\, f(t,1)\ne f(t,-1)\}\mbox{ is $\sigma$-isolated}\}\big)\end{aligned}$
\item $M^s(H)=M(H)=H$.
\item $\begin{aligned}[t]
\mathcal A_H=\mathcal A_H^s&= \{\phi(\psi^{-1}(F)\cap \Ch_{E} K\cup C_1\times\{1\}\cup
C_2\times\{-1\});\,\\ &\qquad\qquad F\subset L\mbox{ is a Borel set, }
C_1,C_2\subset A\mbox{ are $\sigma$-isolated}\}
\\&=\{\phi(B);\, B\subset\Ch_E K, \psi(B)\mbox{ is a Borel set},\\&\qquad\qquad \psi(B)\cap\psi(\Ch_E K\setminus B)\mbox{ is $\sigma$-isolated}\}.
\end{aligned}$
\item $\mathcal A_H=\mathcal A^s_H=\mathcal S_H=\mathcal Z_H=\Sigma'$, where $\Sigma'$ is the $\sigma$-algebra from Lemma~\ref{L:miry na ext}.
\end{enumerate} \end{prop}
\begin{proof}
$(a)$: The first equality follows from Proposition~\ref{P:vetsi prostory srov}$(d)$ as $X$ is a simplex. Inclusion `$\subset$' in the second equality follows from Proposition~\ref{P:dikous-lsc--new}$(b)$ using Lemma~\ref{L:finite rank}. To prove the converse we proceed similarly as in the proof of Proposition~\ref{P:dikous-Afmu-new}$(a)$: We set
\[ \begin{aligned} \mathcal A&=\{f\in\ell^\infty(K);\, f\circ \jmath=0 \text{ on } L\mbox{ and }f(t,0)=\tfrac12(f(t,1)+f(t,-1))\mbox{ for }t\in A\},\\
\mathcal B&=\{f\in\ell^\infty(K);\, f\circ\jmath\in (\Lb(L))^\mu\mbox{ and }f(t,0)=\tfrac12(f(t,1)+f(t,-1))\mbox{ for }t\in A\},\\ \mathcal C&=\{f\in\ell^\infty(K);\, f\circ\jmath\in (\Lb(L))^\mu\mbox{ and }f=f\circ \jmath\circ\psi\},\\ \mathcal D&=\{f\in\ell^\infty(K);\, f\circ\jmath\in \overline{\Lb(L)}\mbox{ and }f=f\circ \jmath\circ\psi\},\\ \end{aligned} \]
Note that due to Lemma~\ref{L:borel=Lbmu}$(b)$ it is enough to verify that $V(\mathcal B)\subset (A_s(X))^\mu$. To this end, let $b\in \mathcal B$ be given. We set $c=b\circ\jmath\circ\psi$ and $a=b-c$.
Then $a\in \mathcal A$, $c\in \mathcal C$ and $b=a+c$. It follows that $\mathcal B\subset\mathcal A+\mathcal C$. Since
$V(\mathcal A)\subset \overline{A_s(X)}\subset (A_s(X))^\mu$ by Proposition~\ref{P:dikous-lsc--new}$(b)$, it is enough to check that $V(\mathcal C)\subset (A_s(X))^\mu$. But $V(\mathcal D)\subset \overline{A_s(X)}$ by Proposition~\ref{P:dikous-lsc--new}$(b)$ and clearly $\mathcal C=\mathcal D^\mu$, so $V(\mathcal C)=V(\mathcal D^\mu)=V(\mathcal D)^\mu\subset (A_s(X))^\mu$, which finishes the proof.
$(b)$: The first equality follows from Proposition~\ref{P:multi strong pro As}. Since $X$ is a simplex, the second equality follows from Proposition~\ref{P:As pro simplex}.
$(c)$: The first equality follows from $(b)$; let us prove the second one.
`$\subset$': Let $f$ with $V(f)\in M(H)=H$ and $\ext X\subset [V(f)=0]\cup[V(f)=1]$ be given. Set
\[
F=\{t\in L;\, f(t,0)=1\}\mbox{ and }C=\{t\in A;\, f(t,1)\ne f(t,-1)\}.
\] Then $F$ is Borel and $C$ is a $\sigma$-isolated set in $L$ (by $(a)$). The equality
\[
\begin{aligned}\relax
[f=1]\cap\Ch_{E}K&= \psi^{-1}\left(F\right)\cap \Ch_{E}K \cup
\{(t,i)\in C\times\{-1,1\};\, f(t,i)=1\}.
\end{aligned}
\] now yields the assertion.
`$\supset$': Let $F\subset L$ be a Borel set. Then $1_F$ is a Borel function, hence $f=1_F\circ\psi$ satisfies $V(f)\in H= M(H)$ by $(a)$, and thus $\phi(\psi^{-1}(F)\cap \Ch_E K)\in \mathcal A_H$. If $C\subset A$ is $\sigma$-isolated, the function $f=1_{C\times \{1\}}+\frac12\cdot 1_{C\times \{0\}}$ satisfies $V(f)\in H= M(H)$ by $(a)$ (and Lemma~\ref{L:sigma isolated}$(c)$). Hence $\phi(C\times\{1\})\in \mathcal A_H$. Similarly $\phi(C\times\{-1\})\in\mathcal A_H$. Now we conclude by noticing that $\mathcal A_H$ is closed with respect to finite intersections and countable unions, which finishes the proof of the inclusion `$\supset$'.
Finally, the last equality follows from \eqref{eq:podmny ChEK}.
$(d)$: The first equality is repeated from $(c)$, the second and the third one follow from Proposition~\ref{P:As pro simplex}. Let us prove the last one. We will prove two inclusions:
$\Sigma'\subset \mathcal A_H$: Let $B\subset X$ be a Baire set. Then $B'=\phi^{-1}(B)$ is a Baire subset of $K$. Let $$\begin{gathered}
C=\{t\in A;\, B'\cap \{(t,-1),(t,0),(t,1)\} \mbox{ contains one or two points}\},\\ D=\{t\in L;\, (t,0)\in B'\}.\end{gathered}$$ By Proposition~\ref{p:topol-stacey}$(c)$ we deduce that $D$ is a Baire set and $C$ is countable. Thus $D\setminus C$ is Borel and $C$ is $\sigma$-isolated and $$B\cap\ext X=\phi(B'\cap\Ch_E K)=\phi( \psi^{-1}(D\setminus C)\cap\Ch_E K\cup (C\times\{-1,1\}\cap B'))\in\mathcal A_H$$ by $(c)$.
Further, let $F\subset X$ be a closed extremal set. Let $F'=\phi^{-1}(F)$. Then $F'$ is a closed set. Let $$\begin{gathered}
C=\{t\in A;\, F'\cap \{(t,-1),(t,0),(t,1)\} \mbox{ contains one or two points}\},\\ D=\{t\in L;\, (t,0)\in F'\}.\end{gathered}$$ Then $D$ is closed and, moreover, $\psi^{-1}(D)\subset F'$ as $F$ is extremal. Therefore $(t,0)\notin F'$ whenever $t\in C$. It follows that any accumulation point of $C$ belongs to $D$, thus $C$ is isolated. We get that $$F\cap\ext X=\phi(F'\cap\Ch_E K)=\phi( \psi^{-1}(D)\cap\Ch_E K\cup (C\times\{-1,1\}\cap F'))\in\mathcal A_H$$ by $(c)$.
This completes the proof of inclusion $\Sigma'\subset\mathcal A_H$.
$\mathcal A_H\subset\Sigma'$: Assume $F\subset L$ is closed. Then $F'=\phi(\psi^{-1}(F))$ is a closed extremal set, thus $$\phi(\psi^{-1}(F)\cap\Ch_E K)=F'\cap\ext X\in \Sigma'.$$ Since $\Sigma'$ is a $\sigma$-algebra, we deduce that $\phi(\psi^{-1}(B)\cap\Ch_E K)\in\Sigma'$ for any Borel set $B\subset L$.
Further, let $C\subset A$ be an isolated set. Then $D=\overline{C}\setminus C$ is a closed subset of $L$ and $\phi(\psi^{-1}(D)\cup C\times\{1\})$ is a closed extremal set in $X$. Thus $$\phi(\psi^{-1}(D)\cap\Ch_E K\cup C\times\{1\})=\phi(\psi^{-1}(D)\cup C\times\{1\})\cap\ext X\in\Sigma'.$$ Since $\Sigma'$ is a $\sigma$-algebra, using the previous paragraph we deduce that \[ \phi(C\times\{1\})=\phi(\psi^{-1}(D)\cap\Ch_E K\cup C\times\{1\})\setminus \phi(\psi^{-1}(D)\cap\Ch_E K)\in\Sigma'. \] Similarly $\phi(C\times\{-1\})\in\Sigma'$. Using $(c)$ we now easily conclude that $\mathcal A_H\subset\Sigma'$ and the proof is complete. \end{proof}
Now we come to the case of affine functions of the first Borel class on $X$. To simplify its formulation we introduce the following piece of notation. If $L$ is a compact space, we set $$\begin{aligned} \mathcal H_B&=\{D\subset L;\, \forall C\subset D\colon 1_C\in\Bo_1(L)\} \\&= \{D\subset L;\, \forall C\subset D\colon C\in (F\wedge G)_\sigma\ \&\ C\in(F\vee G)_\delta\}.\end{aligned}$$ It follows from Lemma~\ref{L:sigma isolated}$(b)$ that any set of finite height belongs to $\mathcal H_B$. Further, by Lemma~\ref{L:dedicneres}$(a)$ we get that any element of $\mathcal H_B$ is scattered.
\begin{prop}\label{P:dikous-Bo1-new} Let $K=K_{L,A}$, $E=E_{L,A}$, $X=X_{L,A}$ and $H=\Bo_1(X)\cap A_b(X)$. Then the following assertions are valid. \begin{enumerate}[$(a)$]
\item $\begin{aligned}[t]
H=V(\{f\in\ell^\infty(K);\, f\circ\jmath&\in \Bo_1(L), \mbox{ and }\\&f(t,0)=\tfrac12(f(t,1)+f(t,-1))\mbox{ for }t\in A\}).\end{aligned}$ \item $\begin{aligned}[t]
M^s(H)=M(H)=V\big(\{f\in&\ell^\infty(K);\, f\circ\jmath\in \Bo_1(L),\\&f(t,0)=\tfrac12(f(t,1)+f(t,-1))\mbox{ for }t\in A,\\
& \forall\varepsilon>0\colon \{t\in A;\, \abs{f(t,1)-f(t,-1)}\ge\varepsilon\}\in\mathcal H_B \}\big).\end{aligned}$ \item $\begin{aligned}[t]
\mathcal A_H=\mathcal A^s_H&=\{\phi(\psi^{-1}(F)\cap \Ch_{E} K\cup C_1\times\{1\}\cup
C_2\times\{-1\});\, \\
&\qquad\qquad F\subset L\text{ is an } (F\wedge G)_\sigma\mbox{ set},\ C_1,C_2\subset A,\ C_1,C_2\in(\mathcal H_B)_\sigma\}
\\&=\{\phi(B);\, B\subset \Ch_E K, \psi(B)\in(F\wedge G)_\sigma, \\ &\qquad\qquad
\psi(B)\cap\psi(\Ch_E K\setminus B)\in (\mathcal H_B)_\sigma\} \end{aligned}$
\item $\mathcal A_H=\mathcal A^s_H\subset\mathcal S_H\subset\mathcal Z_H$.
The first inclusion is proper if, for example, $A$ contains a homeomorphic copy of the ordinal interval $[0,\omega_1)$.
The converse to the second inclusion holds if and only if $A$ contains no compact perfect subset.
\end{enumerate} \end{prop}
\begin{proof}
$(a)$: This follows by combining Proposition~\ref{p:topol-stacey}$(e)$, Proposition~\ref{P:dikous-sa-new}$(b)$ and Lemma~\ref{L:function space}$(b)$.
$(b)$: Inclusion $M^s(H)\subset M(H)$ holds always. To prove the remaining ones we proceed similarly as in the previous proofs. First observe that $V(g\circ\psi)\in M^s(H)$ for each $g\in \Bo_1^b(L)$. So, as above, it is enough to prove the remaining inclusions within functions satisfying $f\circ\jmath=0$.
We continue by inclusion `$\subset$' from the second equality. Assume that $f\circ\jmath=0$ and $V(f)\in M(H)$. Fix $\varepsilon>0$ and let $B=\{t\in A;\, \abs{f(t,1)-f(t,-1)}\ge \varepsilon\}$. Assume there is $C\subset B$ such that $1_C\notin \Bo_1(L)$. Let $g$ be the function on $K$ defined by \[ g(t,i)=\begin{cases}
1, & t\in C, f(t,i)>0,\\
-1, &t\in C, f(t,i)<0,\\
0, &\text{otherwise}. \end{cases} \] Then $V(g)\in H$ (by $(a)$). Since $V(f)\in M(H)$, we deduce $V(\widetilde{fg})\in H$. But we have \[ \widetilde{fg}(t,0)=\begin{cases}
0,& t\in L\setminus C,\\
\frac12\abs{f(t,1)-f(t,-1)}\ge\frac{\varepsilon}{2}, & t\in C. \end{cases} \] Since $1_C\notin\Bo_1(L)$, we easily deduce that $\widetilde{fg}\circ\jmath\notin\Bo_1(L)$, a contradiction.
To prove the remaining inclusion assume that $f$ satisfies the conditions on the right-hand side and $f\circ\jmath=0$. Given $\varepsilon>0$ let $$B_\varepsilon=\{t\in A;\, \abs{f(t,1)-f(t,-1)}\ge \varepsilon\} \quad\mbox{and}\quad f_\varepsilon(t,i)=\begin{cases}
f(t,i), & t\in B_\varepsilon,\\ 0, & \mbox{otherwise}. \end{cases}$$
Then $\|f-f_\varepsilon\|\le\frac\ep2$. Since $M^s(H)$ is closed, it is enough to show that $V(f_\varepsilon)\in M^s(H)$. So, let $g$ with $V(g)\in H$ be given. Then $\widetilde{f_\varepsilon g}(t,0)=0$ for $t\in L\setminus B_\varepsilon$. Our assumption implies that any function on $L$ which is zero outside $B_\varepsilon$ belongs to $\Bo_1(L)$. Hence $V(\widetilde{f_\varepsilon g})\in H$. Moreover, $[\widetilde{f_\varepsilon g}\ne f_\varepsilon g]\subset B_\varepsilon\times\{0\}$. Since $B_\varepsilon\in\mathcal H_B$, it is scattered (see the remarks before this proposition) and hence universally null. It follows from Lemma~\ref{L:maxmiry-dikous} that $V(\widetilde{f_\varepsilon g})=V(f_\varepsilon)V(g)$ $\mu$-almost everywhere for each maximal $\mu\in M_1(X)$. Hence $V(f_\varepsilon)\in M^s(H)$ and this completes the proof.
$(c)$: The first equality follows from $(b)$. Let us prove the second one.
`$\supset$': Let $F\subset L$ be a $(F\wedge G)_\sigma$ set. Then there exists a bounded function $g\in \Bo_1(L)$ such that $[g>0]=F$. Then $f=g\circ\psi$ satisfies $V(f)\in M(H)$, and thus $\phi(\psi^{-1}(F)\cap \Ch_E K)\in \mathcal A_H$. If $C\subset A$ is such that $C\in\mathcal H_B$, the function $f=1_{C\times \{1\}}-1_{C\times \{-1\}}$ satisfies $V(f)\in M(H)$ by $(b)$. Hence $\phi(C\times\{1\})\in \mathcal A_H$, because $C\times\{1\}=[f>0]$. Similarly $\phi(C\times\{-1\})\in \mathcal A_H$. Now we conclude by noticing that $\mathcal A_H$ is closed with respect to finite intersections and countable unions, which finishes the proof of the inclusion `$\supset$'.
`$\subset$': Let $f$ with $V(f)\in M(H)$ be given. For each $n\in\mathbb N$ set
\[
F_n=\{t\in L;\, f(t,0)>\tfrac1n\}\mbox{ and }C_n=\{t\in A;\, \abs{f(t,1)-f(t,-1)}\ge\tfrac1n\}.
\]
Then $F_n$ is a $(F\wedge G)_\sigma$ set and $C_n\in\mathcal H_B$ (by $(b)$). The equality
\[
\begin{aligned}\relax
[f>0]\cap\Ch_{E}K&= \psi^{-1}\left(\bigcup_{n\in\mathbb N} (F_n\setminus C_n)\right)\cap \Ch_{E}K \cup\\
&\quad \bigcup_{n\in\mathbb N}\{(t,i)\in C_n\times\{-1,1\};\, f(t,i)>0\}.
\end{aligned}
\] now yields the assertion. Indeed, each $C_n$ is also a $(F\vee G)_\delta$ set, and thus $F_n\setminus C_n$ is a $(F\wedge G)_\sigma$ set. Hence their union is a $(F\wedge G)_\sigma$ set.
Finally, the last equality follows from \eqref{eq:podmny ChEK}.
$(d)$: The first equality is repeated from $(c)$. Inclusion $\mathcal A_H^s\subset\mathcal S_H$ follows from Theorem~\ref{T:meritelnost-strongmulti}. The second inclusion is obvious.
Assume that $B\subset A$ is homeomorphic to $[0,\omega_1)$. Let $f=1_{B\times\{1\}}+\frac12 1_{B\times\{0\}}$. Then $V(f)\in H$ (as $f\circ\jmath=\frac12\cdot 1_B\in\Bo_1(L)$). Further, $[V(f)=1]$ is a split face by Lemma~\ref{L:dikous-split-new} as $B$ is scattered and hence universally null. Hence $\ext X\cap [V(f)=1]\in\mathcal S_H$. However, $\ext X \cap [V(f)=1] \notin\mathcal A_H$ by $(c)$ as $B$ contains a non-Borel subset. (It is enough to take a stationary set whose complement is also stationary, provided for example by \cite[Lemma 8.8]{jech-book}, and use \cite[Lemma 1]{raorao71}.)
Assume $A$ contains no compact perfect subset. By Lemma~\ref{L:maxmiry-dikous} we know that $X$ is standard and hence $\mathcal S_H=\mathcal Z_H$ by Lemma~\ref{L:SH=ZH}.
Finally, assume that $A$ contains a compact perfect subset $D$. Let $f=1_{D\times\{1\}}+\frac12 1_{D\times\{0\}}$. Then $V(f)\in H$, $\ext X\subset [V(f)=1]\cup[V(f)=0]$, but $[V(f)=1]\cap\ext X\notin\mathcal S_H$ by Lemma~\ref{L:dikous-split-new}. \end{proof}
\begin{prop}\label{P:dikous-bo1-mu-new} Let $K=K_{L,A}$, $E=E_{L,A}$, $X=X_{L,A}$ and $H=(\Bo_1(X)\cap A_b(X))^\mu$. Then the following assertions are valid. \begin{enumerate}[$(a)$]
\item $\begin{aligned}[t]
H=(\Bo_1(X)\cap A_b(X))^\sigma=V&(\{f\in\ell^\infty(K);\, f\circ\jmath\in \Bo^b(L)\mbox{ and }\\&f(t,0)=\tfrac12(f(t,1)+f(t,-1))\mbox{ for }t\in A\}). \end{aligned}$
\item $\begin{aligned}[t]
M^s&(H)=M(H)\\&=V\big(\{f\in\ell^\infty(K);\, f\circ\jmath\in \Bo^b(L),f(t,0)=\tfrac12(f(t,1)+f(t,-1))\mbox{ for }t\in A,\\& \qquad\qquad
\{t\in A;\, f(t,1)\neq f(t,-1)\} \mbox{ is a hereditarily Borel subset of }L
\}\big). \end{aligned}$
\item $\begin{aligned}[t]
\mathcal A_H=\mathcal A^s_H&=\{\phi(\psi^{-1}(F)\cap \Ch_{E} K\cup C_1\times\{1\}\cup
C_2\times\{-1\});\, \\& \qquad\qquad F\subset L\text{ Borel}, C_1,C_2\subset A\mbox{ hereditarily Borel subsets of }L\}
\\&=\{\phi(B);\, B\subset\Ch_E K, \psi(B)\mbox{ Borel},\\&
\qquad\qquad\psi(B)\cap\psi(\Ch_E K\setminus B)\mbox{ hereditarily Borel}\}.
\end{aligned}$
\item $\mathcal A_H=\mathcal A^s_H\subset\mathcal S_H\subset\mathcal Z_H$.
The first inclusion is proper if, for example, $A$ contains a homeomorphic copy of the ordinal interval $[0,\omega_1)$.
The converse to the second inclusion holds if and only if $A$ contains no compact perfect subset.
\end{enumerate} \end{prop}
\begin{proof} (a) Inclusions `$\subset$' in both equalities are obvious.
For the proof of the converse inclusion we proceed as in the preceding proofs. We set \[ \begin{aligned} \mathcal A&=\{f\in\ell^\infty(K);\, f\circ \jmath=0 \text{ on } L\mbox{ and }f(t,0)=\tfrac12(f(t,1)+f(t,-1))\mbox{ for }t\in A\},\\
\mathcal B&=\{f\in\ell^\infty(K);\, f\circ\jmath\in \Bo^b(L)\mbox{ and }f(t,0)=\tfrac12(f(t,1)+f(t,-1))\mbox{ for }t\in A\},\\ \mathcal C&=\{f\in\ell^\infty(K);\, f\circ\jmath\in \Bo^b(L)\mbox{ and }f=f\circ \jmath\circ\psi\},\\ \mathcal D&=\{f\in\ell^\infty(K);\, f\circ\jmath\in \Bo_1^b(L)\mbox{ and }f=f\circ \jmath\circ\psi\} \end{aligned} \] We want to verify that $V(\mathcal B)\subset H$. To this end, let $b\in \mathcal B$ be given. Then we set $c=b\circ\jmath\circ\psi$ and $a=b-c$. Then $a\in \mathcal A$, $c\in \mathcal C$ and $b=a+c$. Thus $\mathcal B\subset\mathcal A+\mathcal C$. Since $V(\mathcal A)\subset A_b(X)\cap\Bo_1(X)\subset H$ by Proposition~\ref{P:dikous-Bo1-new}$(a)$, it is enough to show that $V(\mathcal C)\subset H$. But $\mathcal C=\mathcal D^\mu$ by Lemma~\ref{L:borel=Lbmu}, so $$V(\mathcal C)=V(\mathcal D^\mu)\subset V(\mathcal D)^\mu\subset(A_b(X)\cap\Bo_1(X))^\mu=H,$$ where we used Proposition~\ref{P:dikous-Bo1-new}$(a)$.
$(b)$: Inclusion $M^s(H)\subset M(H)$ holds always. Similarly as above, it is enough to prove the remaining inclusions within functions satisfying $f\circ\jmath=0$.
Let us prove inclusion `$\subset$' from the second equality. Let $f$ with $f\circ\jmath=0$ and $V(f)\in M(H)$ be given. Let $C\subset B=\{t\in A;\, f(t,1)\neq f(t,-1)\}$ be arbitrary. Then the function \[ g(t,i)=\begin{cases}
1,&t\in C, f(t,i)>0,\\
-1,&t\in C, f(t,i)<0,\\
0,&\text{ otherwise} \end{cases} \] satisfies $V(g)\in H$, and thus $\widetilde{fg}\circ \jmath\in \Bo^b(L)$. Since $C=\{t\in L;\, \widetilde{fg}(t,0)>0\}$, the set $C$ is a Borel subset of $L$.
To prove the remaining inclusion assume that $f$ satisfies the condition on the right-hand side and $f\circ\jmath=0$. Set $$B=\{t\in A;\, f(t,1)\ne f(t,-1)\}.$$ We shall show that $V(f)\in M^s(H)$. Let $g$ be given such that $V(g)\in H$. Then $\{t\in L;\, \widetilde{fg}(t,0)\ne 0\}\subset B$ and thus $\widetilde{fg}\circ j\in\Bo^b(L)$. Moreover, $[\widetilde{fg}\ne fg]\subset B\times\{0\}$. Since $B$ is hereditarily Borel, it is $\sigma$-scattered by Lemma~\ref{L:dedicneres}$(b)$, in particular it is universally null. It follows from Lemma~\ref{L:maxmiry-dikous} that $V(\widetilde{fg})=V(f)V(g)$ $\mu$-almost everywhere for each maximal $\mu\in M_1(X)$. This completes the proof that $V(f)\in M^s(H)$.
$(c)$: The first equality follows from $(b)$; let us prove the second one.
`$\subset$': Let $f$ with $V(f)\in M(H)$ and $\ext X\subset [V(f)=0]\cup[V(f)=1]$ be given. Set
\[
F=\{t\in L;\, f(t,0)=1\}\quad \mbox{and}\quad C=\{t\in A;\, f(t,1)\ne f(t,-1)\}.
\] Then $F$ is Borel and $C$ is hereditarily Borel in $L$ (by $(b)$). The equality
\[
\begin{aligned}\relax
[f=1]\cap\Ch_{E}K&= \psi^{-1}\left(F\right)\cap \Ch_{E}K \cup
\{(t,i)\in C\times\{-1,1\};\, f(t,i)=1\}.
\end{aligned}
\] now yields the assertion.
`$\supset$': Let $F\subset L$ be a Borel set. Then $1_F$ is a Borel function, hence $f=1_F\circ\psi$ satisfies $V(f)\in M(H)$ by $(b)$, and thus $\phi(\psi^{-1}(F)\cap \Ch_E K)\in \mathcal A_H$. If $C\subset A$ is hereditarily Borel in $L$, the function $f=1_{C\times \{1\}}+\frac12\cdot 1_{C\times \{0\}}$ satisfies $V(f)\in M(H)$ by $(b)$. Hence $\phi(C\times\{1\})\in \mathcal A_H$. Similarly $\phi(C\times\{-1\})\in\mathcal A_H$. Now we conclude by noticing that $\mathcal A_H$ is closed with respect to finite intersections and countable unions, which finishes the proof of the inclusion `$\supset$'.
$(d)$: The proof can be done by copying the proof of Proposition~\ref{P:dikous-Bo1-new}$(d)$. \end{proof}
\subsection{Overview and counterexamples} \label{ssec:relations-stacey-ifs}
The aim of this part is to summarize relations between the considered intermediate function spaces on Stacey's simplices. We also relate these examples to the general theory, give some counterexamples and formulate some questions. First we provide a basic overview of the considered spaces in the following proposition.
\begin{prop} \label{p:vztahy-multi-ifs} Let $K=K_{L,A}$, $E=E_{L,A}$ and $X=X_{L,A}$. Then the following assertions hold (we omit the letter $X$). \begin{enumerate}[$(a)$] \item We have the following inclusions between intermediate function spaces: \begin{equation}
\label{eq:table-ifs} \begin{array}{ccccccccc}
& &A_c &\subset &A_1 &\subset &(A_{sa}\cap \Ba^b)& =& (A_{c})^\mu \\
& &\cap & &\cap & & & & \\
& &\overline{A_s}&\subset &A_b\cap \Bo_1 & \subset & A_f & & \\
& & \cap & &\cap & &\cap & & \\
(A_c)^\mu&\subset &(A_s)^\mu &\subset & (A_b\cap\Bo_1)^\mu&\subset &(A_f)^\mu & \subset & A_{sa}. \end{array} \end{equation} Moreover, it may happen that all the inclusions are proper and no more inclusions hold.
\item Whenever we have an inclusion in table~\eqref{eq:table-ifs}, the same inclusion holds for the spaces of multipliers. Moreover, it may happen that all the inclusions are strict except possibly for $M((A_s)^\mu)\subset M((A_b\cap\Bo_1)^\mu)$.
\item If $H$ is any of the spaces $A_c, A_1,\overline{A_s}, A_b\cap \Bo_1, A_f$, then $H^\mu=H^\sigma$.
\item If $H$ is any of the spaces
listed in table~\eqref{eq:table-ifs}, then $Z(H)=M(H)=M^s(H)$.
\item If $H$ is any of the spaces $A_1$, $\overline{A_s}$, $A_f$, then $M(H^\mu)=(M(H))^\mu$.
Further, $(M(A_c))^\mu = M((A_c)^\mu)$ if and only if $A$ contains no $G_\delta$-point of $L$. \end{enumerate} \end{prop}
\begin{proof} $(a)$ and $(b)$: The relations in table \eqref{eq:table-ifs} are valid for any simplex $X$. The validity in our case is also witnessed by the above-given characterizations. The validity of the same inclusions of the spaces of multipliers follows from the above description of these spaces (together with Lemmata~\ref{L:dedicneres}, \ref{L:finite rank} and~\ref{L:sigma isolated}).
To distinguish these spaces we choose $L=A=[0,1]^{\mathbb R}$:
Denote by $p:L\to[0,1]$ the projection onto the first coordinate. Then the function $1_{\{0\}}\circ p\circ\psi$ witnesses that $A_c\subsetneqq A_1$ and $1_{\mathbb Q}\circ p\circ \psi$ witnesses that $A_1\subsetneqq (A_c)^\mu$. The same functions witness that $M(A_c)\subsetneqq M(A_1)\subsetneqq M((A_c)^\mu)$.
Let $f\in \Ba^b_1([0,1])\setminus\overline{\Lb([0,1])}$ be the function provided by \cite[Proposition 5.1]{odell-rosen}. Then $f\circ p\circ \psi$ witnesses that $A_1\not\subset\overline{A_s}$ and hence $\overline{A_s}\subsetneqq A_b\cap\Bo_1$. The same function witnesses that $M(A_1)\not\subset M(\overline{A_s})$ and $M(\overline{A_s})\subsetneqq M(A_b\cap\Bo_1)$. We also deduce that $\overline{A_s}\subsetneqq(A_s)^\mu$ and $M(\overline{A_s})\subsetneqq M((A_s)^\mu)$.
Let $x\in L$ be arbitrary. Since $x$ is not a $G_\delta$ point, the function $1_{\{x\}}\circ \psi$ witnesses that $\overline{A_s}\not\subset (A_c)^\mu$ and hence $A_c\subsetneqq \overline{A_s}$, $A_1\subsetneqq A_b\cap\Bo_1$ and $(A_c)^\mu\subsetneqq (A_s)^\mu$. The same function shows that $M(\overline{A_s})\not\subset M((A_c)^\mu)$, $M(A_c)\subsetneqq M(\overline{A_s})$, $M(A_1)\subsetneqq M(A_b\cap\Bo_1)$ and $M((A_c)^\mu)\subsetneqq M((A_s)^\mu)$.
The function $1_{L\times\{1\}}-1_{L\times\{-1\}}$ witnesses that $(A_s)^\mu\subsetneqq (A_b\cap \Bo_1)^\mu$. (Note that it says nothing about the respective spaces of multipliers.)
Further, the space $L$ is separable, so we may take a countable dense set $C\subset L$.
The function $1_C\circ\psi$ witnesses that $(A_c)^\mu\not\subset A_f$ and $A_b\cap\Bo_1\subsetneqq(A_b\cap\Bo_1)^\mu$. The same function witnesses that $M((A_c)^\mu)\not\subset M(A_f)$ and $M(A_b\cap\Bo_1)\subsetneqq M((A_b\cap\Bo_1)^\mu)$.
There is also $D\subset L$ homeomorphic to the ordinal interval $[0,\omega_1]$ and $N\subset D$ non-Borel (this follows from \cite[Lemma 8.8]{jech-book} and \cite[Lemma 1]{raorao71} as remarked in the proof of Proposition~\ref{P:dikous-Bo1-new}$(d)$) . Then $1_N\circ \psi$ witnesses that $A_b\cap\Bo_1\subsetneqq A_f$ and $A_f\not\subset (A_b\cap \Bo_1)^\mu$. The same function shows that $M(A_b\cap\Bo_1)\subsetneqq M(A_f)$ and $M(A_f)\not\subset M((A_b\cap \Bo_1)^\mu)$. Moreover, $(1_N+1_C)\circ\psi$ witnesses that $A_f\subsetneqq (A_f)^\mu$ and $(A_b\cap\Bo_1)^\mu\subsetneqq (A_f)^\mu$. The same function shows that $M(A_f)\subsetneqq M((A_f)^\mu)$ and $M((A_b\cap\Bo_1)^\mu)\subsetneqq M((A_f)^\mu)$. Finally, if $S\subset [0,1]$ is an analytic non-Borel set, then $1_S\circ p\circ\psi$ witnesses that $(A_f)^\mu\subsetneqq A_{sa}$ and $M((A_f)^\mu)\subsetneqq M(A_{sa})$.
$(c)$: The case of $A_c$ and $A_1$ follows from Proposition~\ref{P:Baire-srovnani}(iii). The remaining cases follow from Propositions~\ref{P:dikous-Asmu-new}, \ref{P:dikous-bo1-mu-new} and~\ref{P:dikous-Afmu-new}.
$(d)$: Equality $M(H)=M^s(H)$ in all the cases follows from the above propositions. Equality $Z(H)=M(H)$ follows from \cite[Theorem II.7.10]{alfsen} for $A_c$, from Proposition~\ref{p:postacproa1}$(a)$ for $A_1$ and $(A_c)^\mu$,
from Corollary~\ref{cor:iotax} and Corollary~\ref{cor:rovnost na ext}$(b)$ for the remaining spaces.
$(e)$: This follows easily from the above characterizations. \end{proof}
We point out that assertion $(d)$ of the previous proposition provides a partial positive answer to Question~\ref{q:m=ms}. On the other hand, the following question seems to be open:
\begin{ques}
Let $X=X_{L,A}$. Is it true that
$$\begin{gathered}
M((A_s(X))^\mu)=M((A_b(X)\cap\Bo_1(X))^\mu)\mbox{ and }\\ M((A_b(X)\cap\Bo_1(X))^\mu)=(M(A_b(X)\cap\Bo_1(X)))^\mu \ ? \end{gathered}$$ \end{ques}
In view of the above characterizations this question is related to the following topological problem which is, to the best of our knowledge, open.
\begin{ques}
Let $L$ be a compact space and let $B\subset L$ be a hereditarily Borel set. Is $B$ necessarily $\sigma$-isolated? \end{ques}
Such a set $B$ must be $\sigma$-scattered by Lemma~\ref{L:dedicneres}$(b)$, but we do not know whether this stronger conclusion is valid.
We continue with an overview of the important special case of a metrizable starting space $L$.
\begin{prop}
Let $K=K_{L,A}$, $E=E_{L,A}$ and $X=X_{L,A}$. Assume that $L$ is metrizable.
Then the following assertions hold (we omit the letter $X$). \begin{enumerate}[$(a)$]
\item We have the following inclusions between intermediate function spaces: \begin{equation}
\label{eq:table-ifs-metriz} \begin{array}{ccccccccc}
A_c& \subset & \overline{A_s} &\subset &A_1 &\subset &(A_{sa}\cap \Ba^b)& =& (A_{c})^\mu \\
& & & &\cap & & \cap & & \parallel \\
& && &A_b\cap \Bo_1 & \subset & (A_b\cap\Bo_1)^\mu & & (A_s)^\mu\\
& & & &\parallel & &\parallel & & \\
& & & &A_f&\subset &(A_f)^\mu & \subset & A_{sa}. \end{array}\end{equation} Moreover, it may happen that all the inclusions are proper and no more inclusions hold. \item We have the following inclusions between the respective spaces of multipliers:
\[\begin{aligned}
M(A_c)&\subset M(\overline{A_s})\subset M(A_1)=M(A_b\cap \Bo_1)=M(A_f)
\subset A_1\\&\subset M((A_c)^\mu)=M((A_b\cap \Bo_1)^\mu)=M((A_f)^\mu)=(A_c)^\mu\subset M(A_{sa}).\end{aligned}
\]
Moreover, it may happen that all the inclusions are proper. \end{enumerate}
\end{prop}
\begin{proof} Since $L$ is metrizable, any lower semicontinuous function on $L$ is of the first Baire class and any isolated set in $L$ is countable. It follows that $\overline{A_s}\subset A_1$. Further, $A_c\subset A_s\subset A_1$ and $(A_c)^\mu=(A_1)^\mu$ imply that $(A_s)^\mu=(A_c)^\mu$. Moreover, since $L$ is metrizable, fragmented, Borel one and Baire one functions on $L$ coincide (see Theorem~\ref{T:b}), thus $A_f=A_b\cap\Bo_1$. These facts together with table \eqref{eq:table-ifs} yield the validity of \eqref{eq:table-ifs-metriz}.
Let us continue by proving the inclusions and equalities from assertion $(b)$. To this end we use the above-mentioned fact that fragmented, Borel one and Baire one functions on $L$ coincide and that any scattered subset of $L$ is countable and $G_\delta$. Indeed, assume $B$ is a scattered subset of $L$. If $B$ were uncountable, $B$ would contain an uncountable set without isolated points, which is impossible. Hence $B$ is countable. If it were not $G_\delta$, using the Hurewicz theorem (see \cite[Theorem 21.18]{kechris}) we would find a relatively closed subset of $B$ homeomorphic to $\mathbb Q$. This is again impossible, and thus $B$ is of type $G_\delta$.
Using the two mentioned facts and the above characterizations we easily see that the chain of inclusions and equalities in $(b)$ is valid.
Finally, let $L=A=[0,1]$. We will check that no more inclusions hold. The function $1_{\{0\}}\circ\psi$ witnesses that $A_c\subsetneqq \overline{A_s}$ and $M(A_c)\subsetneqq M(\overline{A_s})$. There is a subset $D\subset[0,1]$ homeomorphic to the ordinal interval $[0,\omega^\omega]$. Then the function $1_{D\times \{1\}}-1_{D\times\{-1\}}$ witnesses that $\overline{A_s}\subsetneqq A_1$ and $M(\overline{A_s})\subsetneqq M(A_1)$. The function $1_{L\times\{1\}}-1_{L\times\{-1\}}$ witnesses that $A_1\subsetneqq A_b\cap \Bo_1$. The function $1_{\mathbb Q\times\{1\}}-1_{\mathbb Q\times\{-1\}}$ shows that $M(A_1)\subsetneqq A_1$. The function $1_{\mathbb Q}\circ\psi$ witnesses that $A_1\subsetneqq (A_c)^\mu$. The function $1_{L\times\{1\}}-1_{L\times\{-1\}}+1_{\mathbb Q}\circ\psi$ witnesses that $(A_c)^\mu\subsetneqq (A_b\cap\Bo_1)^\mu$ and $A_b\cap\Bo_1\subsetneqq (A_b\cap\Bo_1)^\mu$. If $S\subset [0,1]$ is an analytic non-Borel set, the function $1_S\circ \psi$ witnesses that $(A_f)^\mu\subsetneqq A_{sa}$ and $(A_c)^\mu\subsetneqq M(A_{sa})$. \end{proof}
We continue with the following example showing, in particular, the difference between multipliers and strong multipliers.
\begin{example}
\label{ex:dikous-mezi-new} Let $K=K_{L,A}$, $E=E_{L,A}$ and $X=X_{L,A}$.
For any bounded function $f$ on $K$ let us define three functions on $L$:
$$\begin{gathered}
f_0(t)=f(t,0),\ t\in L,\quad f_1(t)=\begin{cases}
f(t,1), & t\in A,\\
f(t,0), & t\in L\setminus A,
\end{cases} \\
f_{-1}(t)=\begin{cases}
f(t,-1), & t\in A,\\
f(t,0), & t\in L\setminus A.
\end{cases} \end{gathered}$$ We set
$$\widetilde{H}=\{f\in \ell^\infty (K);\, f_0,f_1,f_{-1}\mbox{ are Borel functions and }f_0=\tfrac12(f_1+f_{-1})\}$$
and $H=V(\widetilde{H})$.
Then we have the following:
\begin{enumerate}[$(a)$]
\item $H$ is an intermediate function space such that $H^\mu=H$.
\item $(A_s(X))^\mu\subset H\subset (A_f(X))^\mu$.
\item $M(H)=H$.
\item $M^s(H)=V(\{f\in \widetilde{H};\,\{t\in A;\, f(t,1)\ne f(t,-1)\}\mbox{ is $\sigma$-scattered}\})$.
\item The following assertions are equivalent:
\begin{enumerate}[$(i)$]
\item $M^s(H)=M(H)$;
\item $M(H)\subset M((A_f(X))^\mu)$;
\item $A$ contains no compact perfect subset.
\end{enumerate}
In particular, if, for example, $L=A=[0,1]$, we have $M(H)\not\subset M((A_f(X))^\mu)$ and $M^s(H)\subsetneqq M(H)$.
\item $\begin{aligned}[t]
\mathcal A_H&=\{\phi(B);\, B\subset\Ch_E K, \psi(B) \mbox{ and }\psi (\Ch_E K\setminus B)\mbox{ are Borel subsets of }L\}\\ &= \mathcal Z_H,\\
\mathcal A_H^s&=\{\phi(B);\, B\subset\Ch_E K, \phi(B)\in\mathcal A_H, \\&\qquad\{t\in A;\, B\cap\{(t,1),(t,-1)\}\mbox{ contains exactly one point$\}$ is $\sigma$-scattered}\}
\\&=\{\phi(B);\, B\subset\Ch_E K, \psi(B) \mbox{ and }\psi (\Ch_E K\setminus B)\mbox{ are Borel subsets of }L,\\&\qquad
\psi(B)\cap\psi(\Ch_E K\setminus B)\mbox{ is $\sigma$-scattered}\}=\mathcal S_H.
\end{aligned}$
\end{enumerate} \end{example}
\begin{proof} Assertions $(a)$ and $(c)$ are obvious. Assertion $(b)$ follows by combining Propositions~\ref{P:dikous-Asmu-new} and~\ref{P:dikous-Afmu-new} with Lemma~\ref{L:sigma isolated}.
$(d)$: `$\supset$': Assume $B=\{t\in A;\, f(t,1)\ne f(t,-1)\}$ is $\sigma$-scattered. It follows that for each $g\in\widetilde{H}$ we have $(fg)\circ\jmath=\widetilde{fg}\circ\jmath$ on $L\setminus B$. Since $B$ is universally null, we deduce that $V(\widetilde{fg})=V(f)V(g)$ $\mu$-almost everywhere for any maximal measure $\mu$. It follows that $V(f)\in M^s(H)$.
`$\subset$': Assume that the set $B=\{t\in A;\, f(t,1)\ne f(t,-1)\}$ is not $\sigma$-scattered. Since it is Borel, by Lemma~\ref{L:dedicneres}$(b)$ it contains a compact perfect set $D$. Then there is a continuous probability measure $\mu$ carried by $D$. By Lemma~\ref{L:maxmiry-dikous} $\phi(\jmath(\mu))$ is maximal and $\{t\in L;\, (f(t,0))^2=\widetilde{f^2}(t,0)\}=L\setminus B$ has $\mu$-measure zero. Thus $V(f)\notin M^s(H)$.
$(e)$: Assume $A$ contains no compact perfect set. Let $f\in \widetilde{H}$. Then $$B=\{t\in A;\, f(t,1)\ne f(t,-1)\}$$ is a Borel subset of $A$. Since $A$ contains no compact perfect set, by Lemma~\ref{L:dedicneres} we deduce that $B$ is $\sigma$-scattered. Thus $V(f)\in M^s(H)$ by $(d)$ and $V(f)\in M((A_f(X))^\mu)$ by Proposition~\ref{P:dikous-Afmu-new}.
Conversely, assume that $A$ contains a compact perfect set $D$. Let $f=1_{D\times\{1\}}-1_{D\times\{-1\}}$. Then $f\in \widetilde{H}$, so $V(f)\in M(H)=H$. By $(d)$ we get $V(f)\notin M^s(H)$ and by Proposition~\ref{P:dikous-Afmu-new} we deduce $V(f)\notin M((A_f(X))^\mu)$.
The `in particular' part is obvious.
$(f)$: Since $M(H)=H$, $\mathcal A_H=\mathcal Z_H$ by the very definition. Further, it is easy to see that the second and the third sets coincide.
The formulas for $\mathcal A_H^s$ follow easily from the proof of $(d)$, \eqref{eq:podmny ChEK} and Lemma~\ref{L:dikous-split-new}. \end{proof}
We continue with an overview of the relationship between the systems $\mathcal A_H$, $\mathcal A_H^s$, $\mathcal S_H$ and $\mathcal Z_H$. It is contained in the following proposition.
\begin{prop}\label{P:shrnutidikousu}
Let $K=K_{L,A}$, $E=E_{L,A}$ and $X=X_{L,A}$. Then the following assertions hold.
\begin{enumerate}[$(a)$]
\item If $H$ is one of the spaces
$$A_1,\overline{A_s},A_f, (A_1)^\mu,(A_s)^\mu,(A_f)^\mu,A_{sa},$$
then $\mathcal A_H=\mathcal A^s_H=\mathcal S_H$.
\item If $H$ is one of the spaces
$$A_b\cap\Bo_1, (A_b\cap \Bo_1)^\mu,$$
then $\mathcal A_H=\mathcal A^s_H\subset\mathcal S_H$. The inclusion may be proper even if $X$ is standard.
\item If $H$ is one of the spaces $A_1$, $(A_c)^\mu$, $\overline{A_s}$, $(A_s)^\mu$ then $\mathcal A_H=\mathcal A^s_H=\mathcal S_H=\mathcal{Z}_H$.
\item If $H$ is one of the spaces
$$ A_b\cap\Bo_1, A_f, (A_b\cap \Bo_1)^\mu,(A_f)^\mu,A_{sa},$$
then $\mathcal S_H=\mathcal{Z}_H$ if and only if $X$ is a standard compact convex set. Moreover, it may happen that this condition is not satisfied.
\item If $H$ is the space from Example~\ref{ex:dikous-mezi-new}, then
$$\mathcal A^s_H=\mathcal S_H\subset\mathcal A_H=\mathcal Z_H.$$
The inclusion is an equality if and only if $X$ is a standard compact convex set. Moreover, it may happen that this condition is not satisfied.
\end{enumerate} \end{prop}
\begin{proof} This follows from the above propositions on the individual intermediate function spaces and Lemma~\ref{L:maxmiry-dikous}$(2)$. The case $L=A=[0,1]$ serves as an example of a non-standard $X$. An example of a standard $X$ illustrating the proper inclusion in $(b)$ is provided by the choice $L=A=[0,\omega_1]$. \end{proof}
The previous proposition shows that the assumptions in Lemma~\ref{L:SH=ZH} are natural and cannot simply be dropped. It also illustrates that the assumptions of Proposition~\ref{P: silnemulti-inkluze} are not automatically satisfied. Further, it provides a partial positive answer to Question~\ref{q:A1-split} and Question~\ref{q:fr-split}.
We continue with examples illustrating the limits of possible extensions of Theorem~\ref{T:Assigma-reprez}. The first one shows that the quoted theorem cannot be extended to functions of the first Borel class.
\begin{example}\label{ex:teleman-ne}
Let $L=A=[0,1]$ and $X=X_{L,A}$. Then there are a function $f\in A_b(X)\cap\Bo_1(X)$ and a maximal measure $\mu\in M_1(X)$ such that
$f|_{\ext X}$ is not $\mu'$-measurable (using the notation from Lemma~\ref{L:miry na ext}). \end{example}
\begin{proof} Let $K=K_{L,A}$ and $E=E_{L,A}$. Let $h=1_{L\times \{1\}}-1_{L\times\{-1\}}$. By Proposition~\ref{P:dikous-Bo1-new}$(a)$ we get that $f=V(h)\in A_b(X)\cap\Bo_1(X)$. Let $\lambda$ be the Lebesgue measure on $L=[0,1]$. Then $\mu=\phi(\jmath(\lambda))$ is a maximal measure on $X$ (by Lemma~\ref{L:maxmiry-dikous}). Let us prove that the set $U=[f>0]\cap\ext X=\phi(L\times\{1\})$ is not $\mu'$-measurable.
Let $B\subset X$ be a closed extremal set with $B\cap\ext X\subset U$. Let $F=\phi^{-1}(B)$. Then $F$ is a closed subset of $K$ contained in $L\times\{0,1\}$. The extremality of $B$ implies that $F\subset L\times\{1\}$, so $F$ is finite. By \eqref{eq:mira1} we deduce that $\mu'(B\cap \ext X)=0$. Since $B$ is arbitrary, $\mu'(U)=0$. In the same way we see that $\mu'(\ext X\setminus U)=0$, so $\mu'=0$, a contradiction. \end{proof}
In the second example we address the weaker version of the representation discussed at the end of Section~\ref{s:determined}. It turns out that its validity depends on additional axioms of set theory.
\begin{example}\label{ex:AMC}
Let $L=A=[0,1]$ and $X=X_{L,A}$.
\begin{enumerate}[$(1)$]
\item Assume the continuum hypothesis. Then there is $x\in X$ such that there is no probability measure $\mu$ defined on some $\sigma$-algebra on $\ext X$ such that
$$\forall f\in A_b(X)\cap\Bo_1(X)\colon f|_{\ext X}\mbox{ is $\mu$-measurable and } f(x)=\int f\,\mbox{\rm d}\mu.$$
\item Assume there is an atomless measurable cardinal. Then for any $x\in X$ there is a
probability measure $\mu$ defined on some $\sigma$-algebra on $\ext X$ such that
$$\forall f\in A_{sa}(X)\colon f|_{\ext X}\mbox{ is $\mu$-measurable and } f(x)=\int f\,\mbox{\rm d}\mu.$$
\end{enumerate} \end{example}
\begin{proof}
Let $K=K_{L,A}$.
$(1)$: Let $B\subset L$ be arbitrary. Then $V(1_{B\times\{1\}}-1_{B\times\{-1\}})\in A_b(X)\cap\Bo_1(X)$. It follows that the domain $\sigma$-algebra of any $\mu$ with the required property should be the power set of $\ext X$. But, by the continuum hypothesis this set has cardinality $\aleph_1$ and thus by Ulam's theorem (see, e.g., \cite[Lemma 10.13]{jech-book}) any measure defined on this $\sigma$-algebra is discrete. Let $\lambda$ be the Lebesgue measure on $L=[0,1]$ and let $x$ be the barycenter of $\phi(\jmath(\lambda))$. Assume that $\mu$ is a measure with the required properties. Since it is discrete, there is some $t\in L$ such that $\mu(\{\phi(t,1),\phi(t,-1)\})>0$. Let $f=1_{\{t\}}\circ\psi$. Then $V(f)\in A_1(X)\subset A_b(X)\cap\Bo_1(X)$ and
$$0<\int V(f)\,\mbox{\rm d}\mu= V(f)(x)=\int V(f)\,\mbox{\rm d}\phi(\jmath(\lambda))=\int f\,\mbox{\rm d}\jmath(\lambda) =\int 1_{\{t\}}\,\mbox{\rm d}\lambda=0,$$
a contradiction.
$(2)$: Under this assumption any continuous Borel measure $\mu$ on $[0,1]$ admits a $\sigma$-additive extension $\widehat{\mu}$ defined on the power set of $[0,1]$. Indeed, let $\mu$ be any continuous measure on $[0,1]$; by rescaling we may assume that $\mu$ is a probability measure. Then the function $t\mapsto\mu([0,t])$ is a continuous non-decreasing mapping of $[0,1]$ onto $[0,1]$. Thus we may define a kind of inverse:
$$m(t)=\min\{ s\in[0,1];\, \mu([0,s])=t\}, \quad t\in [0,1].$$
Then $m$ is increasing and it is easily seen to be left continuous; in particular it is a Borel injection of $[0,1]$ into $[0,1]$. Denote by $\lambda$ the Lebesgue measure on $[0,1]$. We claim that $\mu=m(\lambda)$. Indeed, for any $t\in(0,1]$
$$\begin{aligned}
m(\lambda)([0,t])&=\lambda(m^{-1}([0,t]))= \sup\{s\in[0,1];\, m(s)\le t\}
\\&=\sup\{s\in[0,1];\, m(s)< t\} = \sup\{\mu([0,s]);\, s\in[0,1],\, s< t\}=\mu([0,t]),
\end{aligned}$$
hence the equality holds for all intervals and thus for all Borel sets. Further, it is proved in \cite[p. 123]{jech-book} that there is the required extension $\widehat{\lambda}$ for the Lebesgue measure. It remains to set $\widehat{\mu}=m(\widehat{\lambda})$.
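To illustrate the construction with a concrete example of our own (not taken from the references): if $\mu$ is the measure with density $2$ on $[0,\tfrac12]$, then $\mu([0,s])=\min(2s,1)$, the inverse is $m(t)=\tfrac t2$ for every $t\in[0,1]$, and $m(\lambda)([0,t])=\lambda(\{s\in[0,1];\, s/2\le t\})=\min(2t,1)=\mu([0,t])$, so indeed $m(\lambda)=\mu$.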
Next, given $x\in X$ let $\delta_x$ be the unique maximal representing measure. It follows from Lemma~\ref{L:maxmiry-dikous} that
$$\delta_x=\phi(\jmath(\nu_x)+\sigma_x),$$
where $\nu_x$ is a continuous measure on $L=[0,1]$ and $\sigma_x$ is a discrete measure on $L\times\{-1,1\}$. Define
$$\mu_x(F\times\{1\}\cup G\times\{-1\})=\tfrac12(\widehat{\nu_x}(F)+\widehat{\nu_x}(G)),\quad F,G\subset L.$$
We claim that $\phi(\mu_x+\sigma_x)$ is the right choice.
Indeed, let $f$ be a function such that $V(f)\in A_{sa}(X)$. Then
$$\begin{aligned}
\int V(f)\,\mbox{\rm d} \phi(\mu_x)&=\frac12\left(\int f(t,1)\,\mbox{\rm d}\widehat{\nu_x}(t)+ \int f(t,-1)\,\mbox{\rm d}\widehat{\nu_x}(t)\right)=
\int f(t,0)\,\mbox{\rm d}\widehat{\nu_x}(t)\\&=\int f(t,0)\,\mbox{\rm d}\nu_x(t)=
\int V(f)\,\mbox{\rm d}\phi(\jmath(\nu_x)),\end{aligned}$$
hence
$$\int V(f)\,\mbox{\rm d} \phi(\mu_x+\sigma_x)=\int V(f)\,\mbox{\rm d} \phi(\jmath(\nu_x)+\sigma_x)=\int V(f)\,\mbox{\rm d}\delta_x=V(f)(x).$$
This completes the proof.
\end{proof}
The previous example inspires the following question.
\begin{ques}
Assume that there is an atomless measurable cardinal.
Let $X$ be a compact convex set.
\begin{enumerate}[$(a)$]
\item Is it true that for each $x\in X$ there is a probability measure $\mu_x$ defined on some $\sigma$-algebra on $\ext X$ such that
$$\forall f\in (A_{f}(X))^\mu\colon f|_{\ext X}\mbox{ is $\mu_x$-measurable and } f(x)=\int f\,\mbox{\rm d}\mu_x\ ?$$
\item Assume moreover that $A_{sa}(X)$ is determined by extreme points. Is the representation from $(a)$ valid for all strongly affine functions?
\end{enumerate} \end{ques}
\printglossary[title=List of notation,type=symbols,style=list,nogroupskip]
\printindex
\end{document}
\begin{document}
\title{Moment characterization of the weak disorder phase for directed polymers in a class of unbounded environments} \author[R.~Fukushima]{Ryoki Fukushima} \address{Department of Mathematics, University of Tsukuba, 1-1-1 Tennodai, Tsukuba, Ibaraki 305--8571, Japan} \email{[email protected]}
\author[S.~Junk]{Stefan Junk} \address{Advanced Institute for Materials Research Tohoku University Mathematical Group, 2-1-1 Katahira, Aoba-ku, Sendai, 980-8577 Japan} \email{[email protected]}
\begin{abstract}
For a directed polymer model in random environment, a characterization of the weak disorder phase in terms of the moment of the renormalized partition function has been proved in [S.~Junk: Communications in Mathematical Physics 389, 1087--1097 (2022)]. We extend this characterization to a large class of unbounded environments which includes many commonly used distributions. \end{abstract}
\maketitle
\section{Introduction} \label{sec:intro} We consider a model of directed polymer in random environment. Let $(X=(X_j)_{j\ge 0}, P^{\text{SRW}})$ be the simple random walk on $\mathbb{Z}^d$ starting at the origin and $((\omega_{j,x})_{(j,x)\in \mathbb{N}\times\mathbb{Z}^d},\mathbb{P})$ be a sequence of independent and identically distributed random variables satisfying \begin{align}\label{eq:expmom}
e^{\lambda(\beta)}:=\mathbb{E}\big[e^{\beta \omega_{0,0}}\big]<\infty\text{ for all $\beta\ge 0$.} \end{align}
Then we define the law of the polymer of length $n$ at the inverse temperature $\beta \ge 0$ by \begin{equation}
\label{eq:polymer}
\mu_{\omega,n}^\beta(\text{d} X)=\frac{1}{Z_n^\beta(\omega)}\exp\Big(\beta\sum_{j=1}^n \omega_{j,X_j}\Big)P^\text{SRW}(\text{d} X), \end{equation} where $Z_n^\beta(\omega)=E^{\text{SRW}}[\exp(\beta\sum_{j=1}^n \omega_{j,X_j})]$ is the normalizing constant, called the \emph{partition function} of the model. Under this measure, the random walk is attracted by the sites where $\omega$ is positive, and repelled by the sites where it is negative. Thus we expect that the behavior of the polymer is strongly affected by the environment when $\beta$ is large.
This intuition is made precise in~\cite{CSY03,CY06} under the assumption $\mathbb{E}\big[e^{\beta \omega_{0,0}}\big]<\infty$ for all $\beta\in \mathbb{R}$. In spatial dimension $d\geq 3$, there exists $\beta_{cr}\in(0,\infty)$ such that for $\beta<\beta_{cr}$, \begin{align}
\label{eq:WD}
e^{-n\lambda(\beta)}Z_n^\beta(\omega) \xrightarrow{n\to \infty}W_\infty^\beta(\omega)>0, \quad \mathbb{P}\text{-a.s.},
\end{align} whereas for $\beta>\beta_{cr}$, \begin{align}
\label{eq:SD}
e^{-n\lambda(\beta)}Z_n^\beta(\omega) \xrightarrow{n\to \infty}0, \quad \mathbb{P}\text{-a.s.} \end{align} As one can readily verify, the annealed partition function satisfies $\mathbb{E}[Z_n^\beta]=e^{n\lambda(\beta)}$, so the above shows that the quenched and annealed partition functions are of the same order for $\beta<\beta_{cr}$, while the quenched one becomes negligible compared to the annealed one for $\beta>\beta_{cr}$. This indicates that the effect of disorder is weak in the former phase and strong in the latter phase, with a drastic change in behavior across $\beta_{cr}$. We refer the interested reader to~\cite{CSY03,CY06}.
The proof of the aforementioned results relies on the fact that $W_n^\beta(\omega):=e^{-n\lambda(\beta)}Z_n^\beta(\omega)$ is a non-negative martingale under $\mathbb{P}$ with the filtration $\mathcal{F}_n:=\sigma(\omega_{j,x}\colon j\le n, x\in\mathbb{Z}^d)$, and one can further show that the phase~\eqref{eq:WD} is characterized by the uniform integrability of $W_n^\beta(\omega)$. But in order to further analyze the weak disorder phase, it is desirable to have a stronger property for $(W_n^\beta(\omega))_{n\ge 0}$. The second author has recently proved in~\cite{J21_1} that for $\beta<\beta_{cr}$, the martingale $(W_n^\beta(\omega))_{n\ge 0}$ is $L^p$-bounded for some $p>1$, under the assumption that the random potential $\omega$ is bounded from above. The main result of this paper extends this characterization to a large class of unbounded environments.
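As a purely numerical illustration (this sketch is ours and is not part of the analysis in this paper or in the cited references), the renormalized partition function $W_n^\beta=e^{-n\lambda(\beta)}Z_n^\beta$ can be computed exactly by dynamic programming over the point-to-point quantities $W^\beta_{k,x}$, which satisfy $W^\beta_{k,x}=e^{\beta\omega_{k,x}-\lambda(\beta)}\sum_{y:|y-x|=1}\tfrac{1}{2d}W^\beta_{k-1,y}$ and $W_n^\beta=\sum_x W^\beta_{n,x}$. The Python sketch below assumes $d=1$ and a standard Gaussian environment (so $\lambda(\beta)=\beta^2/2$); since it is known that $W_\infty^\beta=0$ for every $\beta>0$ when $d\le 2$, the script only illustrates how $W_n^\beta$ is built, not the phase transition.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
beta, n = 0.5, 200
lam = beta ** 2 / 2                 # lambda(beta) for standard Gaussian omega

W = {0: 1.0}                        # pinned values W_{0,x}: only x = 0 is reachable
for k in range(1, n + 1):
    omega = {x: rng.standard_normal() for x in range(-k, k + 1)}
    W_new = {}
    for x in range(-k, k + 1):
        srw = 0.5 * (W.get(x - 1, 0.0) + W.get(x + 1, 0.0))
        if srw > 0.0:               # x is reachable by the walk at time k
            W_new[x] = srw * np.exp(beta * omega[x] - lam)
    W = W_new

print("W_n =", sum(W.values()))     # renormalized partition function W_n^beta
\end{verbatim}
Storing only the sites reachable at time $k$ keeps the cost of the $k$-th step linear in $k$.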
\section{Main result} \label{sec:result}
We introduce the following condition for the environment $\omega$. \begin{condition}\label{cond:1} For $\beta>0$, there exist $A_1=A_1(\beta)>1$ and $c_1=c_1(\beta)>0$ such that, for all $A>A_1$,
\begin{align}\label{eq:cond1}
\mathbb{E}\left[\left.e^{\beta\omega}\;\right|\;\omega>A\right]\leq c_1e^{\beta A}. \end{align} \end{condition} This condition strengthens the assumption \eqref{eq:expmom} of finite exponential moments by requiring control of the overshoot when $\omega$ is conditioned to be large. It does not seem to be very restrictive and holds for many commonly used distributions, although we stress that there are distributions that satisfy \eqref{eq:expmom} but not Condition \ref{cond:1}. We elaborate on these matters in Section~\ref{sec:cond}.
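As a concrete illustration (this elementary verification is ours; more general sufficient conditions are given in Section~\ref{sec:cond}), let $\omega$ be standard Gaussian with density $\varphi$. Then $\mathbb{E}[e^{\beta\omega}\mathbbm 1_{\{\omega>A\}}]=e^{\beta^2/2}\,\mathbb{P}(\omega>A-\beta)$, and the standard tail bounds $\frac{x}{1+x^2}\varphi(x)\le\mathbb{P}(\omega>x)\le\frac{\varphi(x)}{x}$ for $x>0$ give, for $A\ge\max(2\beta,1)$,
\begin{align*}
\mathbb{E}\left[e^{\beta\omega}\;\middle|\;\omega>A\right]=e^{\beta^2/2}\,\frac{\mathbb{P}(\omega>A-\beta)}{\mathbb{P}(\omega>A)}\le \frac{1+A^2}{A(A-\beta)}\,e^{\beta A}\le 4e^{\beta A},
\end{align*}
so Condition~\ref{cond:1} holds with, for example, $A_1=2\beta+1$ and $c_1=4$.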
The following is the main result of this paper. \begin{theorem}
\label{thm:main}
Let $\beta$ be such that $\mathbb{P}(W_\infty^\beta>0)>0$ and assume that $\omega$ satisfies \eqref{eq:expmom} and Condition \ref{cond:1}. Then there exists $p=p(\beta)>1$ such that
\begin{align}
\label{eq:L^p-bdd}
\sup_{n\in \mathbb{N}}\|W_n^\beta\|_p<\infty.
\end{align}
Moreover, the set of $p>1$ such that \eqref{eq:L^p-bdd} holds is open.
\end{theorem}
\begin{remark} If $\lim_{n\to\infty}W_n^\beta=0$, then $(W_n^\beta)_{n\ge 0}$ is not uniformly integrable and hence~\eqref{eq:L^p-bdd} necessarily fails. Thus weak disorder is characterized by the $L^p$-boundedness of $(W_n^\beta)_{n\ge 0}$ for some $p>1$. \end{remark}
\begin{remark}
In \cite{J21_1}, it was further shown that if $\omega$ is bounded from below, then $\sup_n\mathbb{E}[W_n^{-\varepsilon}]<\infty$ for some $\varepsilon>0$. The argument in this paper can easily be generalized to show that the same holds whenever $\omega$ satisfies the straightforward generalization of Condition \ref{cond:1} to the negative tail. \end{remark}
\begin{remark} It is an interesting problem to describe the dependence of the optimal exponent $p^*(\beta):=\sup\{p\colon (W_n^\beta)_{n\in\mathbb{N}}\text{ is $L^p$ bounded}\}$ as a function of $\beta$. For bounded environments, it has been shown in \cite{J22} that ${p^*}(\beta)\geq 1+2/d$ whenever $W_\infty^\beta>0$, so that $\beta\mapsto p^*(\beta)$ has a discontinuity at $\beta_{cr}$. It is natural to expect that the same holds in general. \end{remark} \section{Extension of Condition~\ref{cond:1}}
As will be explained in detail below, the main step in proving Theorem \ref{thm:main} is to control the overshoot of $W_\tau$ at a stopping time $\tau$, which takes the form \begin{align}\label{eq:convex_comb}
\frac{W_{\tau}^\beta}{W_{\tau-1}^\beta}= \sum_x \alpha_x e^{\beta\omega_x-\lambda(\beta)} \end{align} for a certain choice of probability weights $(\alpha_x)_{x\in\mathbb{Z}^d}$. The purpose of this section is to translate the Condition \ref{cond:1} on $\omega$ into a statement about such convex combinations.
First, we state a condition satisfied by $e^{\beta\omega-\lambda(\beta)}$ whenever $\omega$ satisfies Condition \ref{cond:1}. \begin{condition}
\label{cond:2} The random variable $Y$ is non-negative with $\mathbb{E}[Y]=1$, $\mathbb{E}[Y^2]<\infty$ and there exist $A_2>1$ and $c_2>0$ such that, for all $p\in [1,2]$ and $A\ge A_2$, \begin{align} \label{eq:cond2}
\mathbb{E}\left[Y^p \;\middle|\; Y> A\right] \le c_2A^p. \end{align} \end{condition}
The next condition requires additionally that \eqref{eq:cond2} extends to convex combinations. \begin{condition}\label{cond:3} The random variable $Y$ is non-negative with $\mathbb{E}[Y]=1$, $\mathbb{E}[Y^2]<\infty$ and there exist $A_3>1$ and $c_3>0$ such that the following holds: If $(Y_i)_{i\in I}$ are i.i.d. copies of $Y$ and $(\alpha_{i})_{i \in I}$ is a collection of non-negative numbers with $\sum_{i\in I}\alpha_i=1$, then for all $p\in[1,2]$ and $A\geq A_3$ \begin{align}\label{eq:cond3}
\mathbb{E}\left[\Big(\textstyle{\sum_{i\in I}\,} \alpha_i Y_i\Big)^p \;\middle|\; \textstyle{\sum_{i\in I}\,} \alpha_i Y_i > A\right] \le c_3A^p. \end{align} \end{condition}
We now show that both conditions follow from Condition \ref{cond:1}.
\begin{lemma}\label{lem:claim2} \begin{enumerate} \item [(i)] If $\omega$ satisfies Condition \ref{cond:1}, then $Y:=e^{\beta\omega-\lambda(\beta)}$ satisfies Condition \ref{cond:2}. \item [(ii)] If a random variable $Y$ satisfies Condition \ref{cond:2}, then it also satisfies Condition \ref{cond:3}. \end{enumerate} \end{lemma}
\begin{proof} The proof of \textbf{part (i)} is simple. For $A\geq A_2:=e^{\beta A_1(2\beta)-\lambda(\beta)}$, we can use Condition~\ref{cond:1} to get \begin{align*}
\mathbb{E}[Y^2|Y>A]&=\mathbb{E}\Big[e^{2\beta\omega}\;\Big|\;\omega>\frac{1}{\beta}(\log A+\lambda(\beta))\Big]e^{-2\lambda(\beta)}\\
&\leq c_1(2\beta) e^{2\beta \frac{1}{\beta}(\log A+\lambda(\beta))}e^{-2\lambda(\beta)}\\
&= c_1(2\beta)\, A^2 =: c_2 A^2. \end{align*} The extension to $p\in[1,2)$ follows from Jensen's inequality.
The proof of \textbf{part (ii)} is more involved. In the following, we use $C$ for positive constants depending only on $\mathbb{E}[Y_i^2]$, $A_2$ and $c_2$, whose values may change from line to line. Let $A\ge A_3:=A_2$ and $N:=\sum_i \mathbbm 1_{\{\alpha_iY_i>A\}}$. We separately consider the case where all the summands are small ($N=0$) and the cases where the event $\sum_i\alpha_iY_i>A$ is realized due to a single large summand ($N\geq 1$). In the first case, we have \begin{equation}\label{eq:case1}
\mathbb{E}\Big[\Big(\textstyle{\sum_{i}\,} \alpha_i Y_i\Big)^2\mathbbm 1_{\{N=0\}}\;\Big|\; \textstyle{\sum_i\,} \alpha_i Y_i>A\Big]
\leq \mathbb{E}\Big[\Big(\textstyle{\sum_i\,} \alpha_i Y_i\mathbbm 1_{\{\alpha_iY_i\leq A\}}\Big)^2\;\Big|\; \textstyle{\sum_{i}\,} \alpha_i Y_i>A\Big] \end{equation} since $Y_i=Y_i\mathbbm 1_{\{\alpha_iY_i\leq A\}}$ for all $i$ on $\{N=0\}$. Let $\tau:=\inf\{i\colon \sum_{j\leq i}\alpha_jY_j>A\}$ and observe that on $\{\sum_i \alpha_i Y_i>A\}=\{\tau<\infty\}$, \begin{align*} \sum_{i\leq \tau}\alpha_iY_i\mathbbm 1_{\{\alpha_iY_i\leq A\}}\leq \sum_{i<\tau}\alpha_iY_i+\alpha_\tau Y_{\tau}\mathbbm 1_{\{\alpha_\tau Y_\tau \leq A\}}\leq 2A. \end{align*} Note also that conditioned on $\tau=i$, the remaining variables $(Y_{j+i})_{j\ge 1}$ obey the unconditioned law $\mathbb{P}$. Therefore, \begin{equation}
\label{eq:case1-bound} \begin{split}
\mathbb{E}\left[\Big(\textstyle{\sum_{i}\,} \alpha_i Y_i\Big)^2\mathbbm 1_{\{N=0\}}\;\middle|\; \textstyle{\sum_{i}\,} \alpha_i Y_i>A\right]
&\leq \mathbb{E}\left[\Big(2A+\textstyle{\sum_{i>\tau}\,}\alpha_i Y_i\Big)^2\;\middle|\; \tau<\infty\right]\\
&\leq \mathbb{E}\left[\Big(2A+\textstyle{\sum_{i\in I}\,} \alpha_i Y_i\Big)^2\right]\\
&\leq C(A^2+1), \end{split} \end{equation} where in the last line, we have used $\sum_{i\in I}\alpha_i=1$ and that $Y_1$ has a finite second moment. In the second case $N\ge 1$, we use $\{N\geq 1\}\subseteq\{\sum_i\alpha_iY_i>A\}$ to obtain \begin{equation}\label{eq:case2}
\mathbb{E}\Big[\Big(\textstyle{\sum_{i}\,} \alpha_i Y_i\Big)^2\mathbbm 1_{\{N\geq 1\}}\;\Big|\; \textstyle{\sum_{i}\,} \alpha_i Y_i>A\Big]
\leq \mathbb{E}\Big[\Big(\textstyle{\sum_{i}\,} \alpha_i Y_i\Big)^2\;\Big|\; N\geq 1\Big].
\end{equation} Let $q_i:=\mathbb{P}(\alpha_iY_i>A\mid N\geq 1)$ and observe that \begin{equation}
\label{eq:square} \begin{split} \alpha_i^2\mathbb{E}[Y_i^2\mid N\geq 1] &\le \alpha_i^2\mathbb{E}[Y_i^2\mid \alpha_iY_i>A]q_i+\alpha_i^2\mathbb{E}[Y_i^2\mid \alpha_iY_i\leq A]\\ &\leq c_2A^2q_i+\alpha_i^2\mathbb{E}[Y_i^2], \end{split} \end{equation} where we have used Condition~\ref{cond:2} for the first term (note that $A/\alpha_i\geq A_2$) and the negative correlation between $Y_i^2$ and $\mathbbm 1_{\{\alpha_iY_i\leq A\}}$ for the second term.
Similarly, for $i\neq j$, let $q_{i,j}:=\mathbb{P}(\alpha_iY_i>A,\alpha_jY_j>A\mid N\geq 1)$ and observe that \begin{equation}
\label{eq:cross} \begin{split} \alpha_i\alpha_j\mathbb{E}[Y_iY_j\mid N\geq 1]&\leq \alpha_i\alpha_j\big(q_{i,j}\mathbb{E}[Y_i\mid \alpha_i Y_i>A]\mathbb{E}[Y_j\mid \alpha_jY_j>A]\\ &\quad+q_i\mathbb{E}[Y_i\mid \alpha_i Y_i>A]+q_j\mathbb{E}[Y_j\mid \alpha_jY_j>A]\\ &\quad+\mathbb{E}[Y_iY_j\mathbbm 1_{\{\alpha_i Y_i\le A\}}\mathbbm 1_{\{\alpha_j Y_j\le A\}}\mid \textstyle{\max_{k\neq i,j}\alpha_kY_k}> A]\big)\\ &\leq C\big(q_{i,j} A^2+\alpha_jq_i A+\alpha_iq_j A+\alpha_i\alpha_j\big), \end{split} \end{equation} where we have used Condition~\ref{cond:2} for the first three terms and that $Y_iY_j\mathbbm 1_{\{\alpha_i Y_i\le A\}}\mathbbm 1_{\{\alpha_j Y_j\le A\}}$ is independent of $\{\max_{k\neq i,j}\alpha_kY_k> A\}$ for the last term. To bound the right-hand side of \eqref{eq:case2}, we are going to sum~\eqref{eq:square} over $i$ and~\eqref{eq:cross} over $i\neq j$. Note that $\sum_{i}q_i=\mathbb{E}[N\mid N\geq 1]$ and $\sum_{i\neq j}q_{i,j}\le \mathbb{E}[N^2\mid N\geq 1]$. We have the following bounds on these quantities. \begin{lemma} \label{lem:N} In the above setup, it holds that $\mathbb{E}[N\mid N\geq 1]\leq 2$ and $\mathbb{E}[N^2\mid N\geq 1]\leq 5$. \end{lemma} The proof of this lemma will be given below. Now, summing~\eqref{eq:square} over $i$ and~\eqref{eq:cross} over $i\neq j$ and then using Lemma~\ref{lem:N}, it follows that the left-hand side of~\eqref{eq:case2} is bounded by $C(A^2+1)$. Combining this with~\eqref{eq:case1-bound} and recalling $A\ge 1$, we get
\begin{align*}
\mathbb{E}\Big[\Big(\textstyle{\sum_{i}\,} \alpha_i Y_i\Big)^2 \;\Big|\; \textstyle{\sum_{i}\,} \alpha_i Y_i > A\Big] \le C (A^2+1)\leq 2CA^2. \end{align*}
Finally, the claim for $p\in[1,2)$ follows as before by applying Jensen's inequality to the above. \end{proof}
\begin{proof}[Proof of Lemma \ref{lem:N}]
Let $\sigma=\inf\{i \colon \alpha_i Y_i >A\}$ and write $N=\mathbbm 1_{\sigma<\infty}+\sum_{i>\sigma} \mathbbm 1_{\{\alpha_iY_i>A\}}$. Conditioned on $\sigma=i$, the random variables $(Y_{i+j})_{j\geq 1}$ obey the unconditioned law $\mathbb{P}$. Therefore,
\begin{align*}
\mathbb{E}[N\mid N\geq 1]&=\mathbb{E}[N\mid \sigma<\infty]\leq 1+\mathbb{E}[N],\\
\mathbb{E}[N^2\mid N\geq 1]&=\mathbb{E}[N^2\mid \sigma<\infty]\leq \mathbb{E}[(1+N)^2],
\end{align*}
and hence it suffices to prove that $\mathbb{E}[N]\leq 1$ and $\mathbb{E}[N^2]\leq 2$. Both follow from the Markov inequality:
\begin{align*}
\mathbb{E}[N]&=\sum_i\mathbb{P}(Y_i>A/\alpha_i)\leq \mathbb{E}[Y_1]\sum_{i}\frac{\alpha_i}A=\frac{\mathbb{E}[Y_1]}{A},\\
\mathbb{E}[N^2]&=\sum_i\mathbb{P}(Y_i>A/\alpha_i)+\sum_{i\neq j}\mathbb{P}(Y_i>A/\alpha_i)\mathbb{P}(Y_j>A/\alpha_j)\\
&\leq \frac{\mathbb{E}[Y_1]}{A}+\frac{\mathbb{E}[Y_1]^2}{A^2}.
\end{align*} Recalling $\mathbb{E}[Y_1]=1$ and $A\geq 1$, this implies the desired bounds. \end{proof}
\section{Proof of Theorem~\ref{thm:main}} In this section, we prove Theorem~\ref{thm:main}. Let us start by recalling the relevant steps of the proof of~\cite[Theorem 1.1 (ii)]{J21_1}. For $t>1$, define the stopping time \begin{equation}
\tau(t):=\inf\left\{n\in \mathbb{N}\colon W_n^\beta \ge t\right\} \end{equation} and the pinned version of $W_n^\beta$ as follows: \begin{equation}
W_{n,x}^\beta:=E^\text{SRW}\left[\exp\left(\sum_{t=1}^n (\beta\omega_{t,X_t}-\lambda(\beta))\right); X_n=x\right] \end{equation} Then, by using the Markov property for the simple random walk, we write on $\{\tau(t)\leq n\}$ \begin{equation}
\label{eq:stopping} \begin{split}
W_n^\beta&=\sum_{x\in\mathbb{Z}^d} W_{\tau(t),x}^\beta \left(W_{n-\tau(t)}^\beta \circ \theta_{\tau(t),x}\right)\\
&=W_{\tau(t)}^\beta\sum_{x\in\mathbb{Z}^d} \mu_{\omega,\tau(t)}^\beta(X_{\tau(t)}=x) \left(W_{n-\tau(t)}^\beta \circ \theta_{\tau(t),x}\right), \end{split} \end{equation} where $\theta_{k,x}$ stands for the time-space shift of the environment. By \eqref{eq:stopping} and Jensen's inequality, we have \begin{align}\label{eq:argument}
\mathbb{E}\left[\big(W_n^\beta\big)^p \mathbbm 1_{\{\tau(t)=k\}}\right]
\le \mathbb{E}\left[\big(W_k^\beta\big)^p \mathbbm 1_{\{\tau(t)=k\}}\right]\mathbb{E}\left[\big(W_{n-k}^\beta\big)^p\right]. \end{align} To continue the argument, we need the following bound on the first factor, uniformly in $t>1$ and $p\in[1,2]$: \begin{align} \label{eq:need} \mathbb{E}\left[\big(W_k^\beta\big)^p \mathbbm 1_{\{\tau(t)=k\}}\right]\leq Ct^p\mathbb{P}(\tau(t)=k). \end{align} In \cite{J21_1}, the assumption $\omega_{t,x} \le K$ was used to ensure that $(W_{k}^\beta)^p \le e^{2 \beta K}t^p$ on $\{\tau(t)=k\}$, that is, the martingale does not overshoot much at the stopping time $\tau(t)$.
We replace this part of the argument by using Lemma~\ref{lem:claim2}. Let $c_3$ and $A_3$ be the constants obtained by applying Lemma~\ref{lem:claim2}(i)--(ii). We now bound the left-hand side in \eqref{eq:need} by considering the cases $W_k^\beta\le A_3t$ and $W_k^\beta>A_3t$ separately. The first case is simple: \begin{equation}
\label{eq:firstcase}
\mathbb{E}\left[\big(W_k^\beta\big)^p \mathbbm 1_{\{\tau(t)=k,W_k^\beta\le A_3t\}}\right]\le (A_3t)^p\mathbb{P}\big(\tau(t)=k,W_k^\beta\le A_3t\big). \end{equation} In the second case, we consider the conditional expectation given $\mathcal{F}_{k-1}$ to write \begin{equation}
\label{eq:stop@k}
\mathbb{E}\left[\big(W_k^\beta\big)^p \mathbbm 1_{\{\tau(t)=k,W_k^\beta> A_3t\}}\right]
=\mathbb{E}\left[\big(W_{k-1}^\beta\big)^p \mathbbm 1_{\{\tau(t)>k-1\}}\mathbb{E}\left[\left({W_k^\beta}/{W_{k-1}^\beta}\right)^p \mathbbm 1_{\{W_k^\beta>A_3t\}}\;\middle|\; \mathcal{F}_{k-1}\right]\right]. \end{equation} We further rewrite\footnote{In the following equation, we regard $\mu_{\omega,k-1}^\beta$ as a measure on the space of \emph{infinite} paths, while the interaction with the environment is restricted to the time interval $[0,k-1]$.} \begin{align*}
{W_k^\beta}/{W_{k-1}^\beta}=\textstyle{\sum_{x}\,} \alpha_x Y_x\text{ and }\big\{W_k^\beta>A_3t\big\}=\big\{\textstyle{\sum_{x}\,}\alpha_xY_x>A\big\}, \end{align*} where $\alpha_x:=\mu_{\omega,k-1}^\beta(X_k=x)$, $Y_x:=e^{\beta\omega_{k,x}-\lambda(\beta)}$ and $A:=A_3t/W_{k-1}^\beta$. Then, noting that \begin{itemize}
\item $(e^{\beta\omega_{k,x}-\lambda(\beta)})_{x\in\mathbb{Z}^d}$ is independent of $\mathcal{F}_{k-1}$,
\item $\mu_{\omega,k-1}^\beta(X_k=x)$ is an $\mathcal{F}_{k-1}$-measurable probability measure on $\mathbb{Z}^d$ and
\item $t/W_{k-1}^\beta \ge 1$ on $\{\tau(t)>k-1\}$, \end{itemize} we can apply Lemma~\ref{lem:claim2} under $\mathbb{P}(\cdot\mid\mathcal{F}_{k-1})$ to obtain \begin{align*}
\mathbb{E}\left[\big({W_k^\beta}/{W_{k-1}^\beta}\big)^p \mathbbm 1_{\{W_k^\beta>A_3t\}}\;\middle|\; \mathcal{F}_{k-1}\right]
\le c_3\big(A_3t/W_{k-1}^\beta\big)^p\mathbb{P}\left(W_k^\beta/W_{k-1}^\beta> A_3t/W_{k-1}^\beta\;\middle|\; \mathcal{F}_{k-1}\right). \end{align*} Substituting this into~\eqref{eq:stop@k} yields \begin{equation}
\label{eq:secondcase}
\mathbb{E}\left[\big(W_k^\beta\big)^p \mathbbm 1_{\{\tau(t)=k,W_k^\beta>A_3t\}}\right]
\le c_3A_3^2t^p \mathbb{P}(\tau(t)=k,W_k^\beta>A_3t). \end{equation} Combining this bound with \eqref{eq:firstcase}, we obtain \eqref{eq:need} and can thus repeat the argument in~\cite[eq.(20)]{J21_1}.
Since the other parts of the proof of~\cite[Theorem~2.1 (ii)]{J21_1} do not rely on the boundedness assumption, the same argument proves Theorem~\ref{thm:main}.
\section{Discussion on Condition~\ref{cond:1}} \label{sec:cond} In this section, we discuss Condition~\ref{cond:1}. First, although it looks natural, it does not hold in general. For example, if $\omega$ is supported on $\{k^2\}_{k\in \mathbb{N}}$, then regardless of the concrete form of the distribution of $\omega$, we have \begin{align*} \mathbb{E}\left[e^{\beta\omega} \;\middle|\; \omega> k^2\right]&=\mathbb{E}\left[e^{\beta\omega} \;\middle|\; \omega \ge (k+1)^2\right]\geq e^{\beta(k+1)^2}
\end{align*} and hence Condition~\ref{cond:1} fails.
Next, we see that Condition \ref{cond:1} is valid under a one-sided tail regularity assumption, which holds under certain upper and lower bounds on the tail. \begin{proposition}\label{prop:sufficient} Let $\omega$ be a real-valued random variable.
\begin{enumerate}
\item [(i)] Assume that there exist $K>0$ and $M>2\beta$ such that \begin{align}\label{eq:dom-var2}
\limsup_{x\to\infty}\sup_{y\geq K}\frac{\mathbb{P}(\omega>x+y)e^{My}}{\mathbb{P}(\omega>x)}<\infty. \end{align} Then Condition \ref{cond:1} holds.
\item[(ii)] Assume that there exist $c>0$ and a convex function $f$ satisfying $\lim_{x\to\infty}\frac{f(x)}x=\infty$ such that, for $x$ large enough,
\begin{align}\label{eq:convex}
c^{-1}e^{-f(x)}\leq \mathbb{P}(\omega > x)\leq ce^{-f(x)}.
\end{align} Then Condition \ref{cond:1} holds for all values of $\beta$. \item[(iii)] Assume that there exist $c>0$ and an increasing
function $f$ satisfying $f(x+y)\geq f(x)f(y)$ such that, for $x$ large enough, \begin{align}\label{eq:convex2}
c^{-1}e^{-cf(x)}\leq \mathbb{P}(\omega > x)\leq ce^{-f(x)/c}. \end{align} Then Condition \ref{cond:1} holds for all values of $\beta$. \end{enumerate} \end{proposition} This proposition covers many commonly used distributions. \begin{itemize}
\item If $\omega$ has a logarithmically concave Lebesgue density, then $x\mapsto \mathbb{P}(\omega>x)$ is also logarithmically concave (see \cite[Theorem 2]{P71}) and hence \eqref{eq:convex} holds with $f(x):=-\log \mathbb{P}(\omega>x)$. Note also that $\lim_{x\to\infty}\frac{f(x)}{x}=\infty$ already follows from \eqref{eq:expmom}. This covers, for example, the Gaussian distribution or the Weibull distribution (with $\mathbb{P}(\omega>x)=ce^{-c'x^\alpha}$ for $\alpha> 1$).
\item For the Poisson distribution, it is not hard to check \eqref{eq:dom-var2} directly.
\item The (negative) Gumbel distribution, with $\mathbb{P}(\omega> x)=\exp({-e^{(x-c)/c'}})$, further satisfies \eqref{eq:convex2}. More generally, we can take $f(x)=e^{x^\alpha}$ with $\alpha\ge 1$ in \eqref{eq:convex2}.
\end{itemize}
\begin{proof}
\textbf{Part (i)}: By \eqref{eq:dom-var2}, there exist $K>0$, $M>2\beta$, $A_1>1$ and $C>0$ such that, for $A>A_1$ and $u>K+A$, \begin{align*}
\mathbb{P}(\omega>u)\leq C\mathbb{P}(\omega>A)e^{-Mu+MA}. \end{align*} Thus, for $A\geq A_1$, \begin{align*}
\mathbb{E}[e^{2\beta\omega}\mathbbm 1_{\omega>A}]&\leq \mathbb{P}(\omega>A)e^{2\beta (A+K)}+\mathbb{E}[e^{2\beta\omega}\mathbbm 1_{\omega>A+K}]\\
&=\mathbb{P}(\omega>A)e^{2\beta (A+K)}+\int_{e^{2\beta (A+K)}}^\infty \mathbb{P}(\omega>\log(t)/2\beta)\text{d} t\\
&\leq \mathbb{P}(\omega>A) \Big(e^{2\beta (A+K)}+Ce^{MA}\int_{e^{2\beta (A+K)}}^\infty e^{-M\log(t)/2\beta} \text{d} t\Big)\\
&=\mathbb{P}(\omega>A) \Big(e^{2\beta (A+K)}+\frac{C}{M/2\beta-1}e^{MA}e^{2\beta(A+K)(1-M/2\beta)}\Big)\\
&=: c_3\mathbb{P}(\omega>A)e^{2\beta A}, \end{align*} where we have used the assumption $M>2\beta$ to ensure the convergence of the last integral.
For \textbf{part (ii)}, it is now enough to verify \eqref{eq:dom-var2}. The convexity and the assumption on superlinear growth imply that there exists $x_0>0$ such that the right derivative $D_+f(x_0) \ge 3\beta$. Then for $x\geq x_0$ and $ y>0$, we have $f(x+y)-f(x) \ge 3\beta y$ and hence \begin{align*}
\frac{\mathbb{P}(\omega>x+y)}{\mathbb{P}(\omega>x)}\leq c^2e^{-(f(x+y)-f(x))}
\leq c^2 e^{-3\beta y}. \end{align*} This implies \eqref{eq:dom-var2}.
For \textbf{part (iii)}, note that by the super-additive theorem there exists $C>0$ such that $f(x)\geq e^{Cx}$, hence for $y>2\log(c)/C$ and $x$ large enough, \begin{align*}
\frac{\mathbb{P}(\omega>x+y)}{\mathbb{P}(\omega>x)}\leq c^2\exp\Big(-f(x)\Big(\frac{f(x+y)}{cf(x)}-c\Big)\Big)\leq c^2e^{-f(x)(f(y)/c-c)}\leq c^2 e^{-3\beta y}. \end{align*} This again implies \eqref{eq:dom-var2} and we are done. \end{proof}
\begin{remark} In Section~\ref{sec:cond}, we rephrased Condition~\ref{cond:1} in terms of the random variable $Y:=e^{\beta\omega-\lambda(\beta)}$. Since some authors use this $Y$ as the random potential in the directed polymer model (see, for example, \cite{IS88,Sep12}), it might be of interest to rephrase also \eqref{eq:dom-var2}, which reads \begin{align}\label{eq:dom-var}
\text{there exist } K>1\text{ and } M>2\text{ such that }\limsup_{y\to \infty}\sup_{\lambda\ge K}\lambda^{M}\frac{\mathbb{P}(Y > \lambda y)}{\mathbb{P}(Y> y)} <\infty. \end{align} This is a one-sided regular variation condition. It appears, for example, in~\cite[Theorem~2.0.1]{BGT} and inspecting its proof, one can see that \eqref{eq:dom-var} follows from \begin{align}
\text{there exist } K>1, M>2 \text{ and } \rho<K^{-M} \text{ such that }\sup_{\lambda\in[K,K^2]}\limsup_{y\to\infty}\frac{\mathbb{P}(Y > \lambda y)}{\mathbb{P}(Y> y)}<\rho. \end{align}
There are plenty of distributions that satisfy~\eqref{eq:dom-var}. For instance, if there exist $c,C>0$ and $\gamma>0$ such that \begin{align}
\label{eq:Weibull}
c\exp(-C y^\gamma)\le \mathbb{P}(Y> y)\le C\exp(-c y^\gamma) \end{align} holds for all sufficiently large $y$, then \begin{align*}
\frac{\mathbb{P}(Y > \lambda y)}{\mathbb{P}(Y> y)}
&\le \frac{C}{c}\exp(-(c\lambda^\gamma-C) y^\gamma),
\end{align*} and~\eqref{eq:dom-var} follows. A similar argument applies to the case where $y^\gamma$ in~\eqref{eq:Weibull} is replaced by $\exp(y^\gamma)$ ($\gamma>0$) or $\exp(\log^\alpha y)$ ($\alpha>1$). \end{remark}
\section*{Acknowledgment} This work was supported by KAKENHI 21K03286, 22H00099 and 18H03672.
\end{document}
\begin{document}
\title{The ample cone of moduli spaces of sheaves on the plane}
\date{\today} \author[I. Coskun]{Izzet Coskun} \author[J. Huizenga]{Jack Huizenga}
\address{Department of Mathematics, Statistics and CS \\University of Illinois at Chicago, Chicago, IL 60607} \email{[email protected]} \email{[email protected]} \thanks{During the preparation of this article the first author was partially supported by the NSF CAREER grant DMS-0950951535, and the second author was partially supported by a National Science Foundation Mathematical Sciences Postdoctoral Research Fellowship} \subjclass[2010]{Primary: 14J60. Secondary: 14E30, 14J26, 14D20, 13D02} \keywords{Moduli space of stable vector bundles, Minimal Model Program, Bridgeland Stability Conditions, ample cone}
\begin{abstract} Let $\xi$ be the Chern character of a stable sheaf on $\mathbb{P}^2$. Assume either $\rk(\xi)\leq 6$ or $\rk(\xi)$ and $c_1(\xi)$ are coprime and the discriminant $\Delta(\xi)$ is sufficiently large. We use recent results of Bayer and Macr\`i \cite{BayerMacri2} on Bridgeland stability to compute the ample cone of the moduli space $M(\xi)$ of Gieseker semistable sheaves on $\mathbb{P}^2$. We recover earlier results, such as those by Str\o mme \cite{Stromme} and Yoshioka \cite{Yoshioka}, as special cases. \end{abstract}
\maketitle \setcounter{tocdepth}{1} \tableofcontents
\section{Introduction}
Let $\xi$ be the Chern character of a stable sheaf on $\mathbb{P}^2$. The moduli space $M(\xi)$ parameterizes $S$-equivalence classes of Gieseker semistable sheaves with Chern character $\xi$. It is an irreducible, normal, factorial, projective variety \cite{LePotierLectures}. In this paper, we determine the ample cone of $M(\xi)$ when either $\rk(\xi)\leq 6$ or $\rk(\xi)$ and $c_1(\xi)$ are coprime and the discriminant $\Delta(\xi)$ is sufficiently large.
The {\em ample cone} $\Amp(X)$ of a projective variety $X$ is the open convex cone in the N\'{e}ron-Severi space spanned by the classes of ample divisors. It controls embeddings of $X$ into projective space and is among the most important invariants of $X$. Its closure, the {\em nef cone} $\Nef(X)$, is spanned by divisor classes that have non-negative intersection with every curve on $X$ and is dual to the Mori cone of curves on $X$ under the intersection pairing (see \cite{Lazarsfeld}). We now describe our results on $\Amp(M(\xi))$ in greater detail.
Let $\xi$ be an integral Chern character with rank $r>0$. We record such a character as a triple $(r,\mu,\Delta)$, where $$\mu = \frac{\ch_1}{r} \qquad \textrm{and} \qquad \Delta = \frac{1}{2}\mu^2 - \frac{\ch_2}{r}$$ are the \emph{slope} and \emph{discriminant}, respectively. We call the character $\xi$ (semi)stable if there exists a (semi)stable sheaf of character $\xi$. Dr\'{e}zet and Le Potier give an explicit curve $\delta(\mu)$ in the $(\mu, \Delta)$-plane such that the moduli space $M(\xi)$ is positive dimensional if and only if $\Delta \geq \delta(\mu)$ \cite{DLP}, \cite{LePotierLectures}. The vector bundles whose Chern characters satisfy $\Delta = \delta(\mu)$ are called {\em height zero bundles}. Their moduli spaces have Picard group isomorphic to $\mathbb{Z}$. Hence, their ample cone is spanned by the positive generator and there is nothing further to discuss. Therefore, we will assume $\Delta>\delta(\mu)$, and say $\xi$ has \emph{positive height}.
There is a nondegenerate symmetric bilinear form on the $K$-group $K(\mathbb{P}^2)$ sending a pair of Chern characters $\xi, \zeta$ to the Euler Characteristic $\chi(\xi^*, \zeta)$. When $\xi$ has positive height, the Picard group of the moduli space $M(\xi)$ is naturally identified with the orthogonal complement $\xi^\perp$ and is isomorphic to $\mathbb{Z} \oplus \mathbb{Z}$ \cite{LePotierLectures}. Correspondingly, the N\'{e}ron-Severi space is a two-dimensional vector space. In order to describe $\Amp(M(\xi))$, it suffices to specify its two extremal rays.
The moduli space $M(\xi)$ admits a surjective, birational morphism $j: M(\xi)\rightarrow M^{DUY}(\xi)$ to the Donaldson-Uhlenbeck-Yau compactification $M^{DUY}(\xi)$ of the moduli space of stable vector bundles (see \cite{JunLiDonaldson} and \cite{HuybrechtsLehn}). As long as the locus of singular (i.e., non-locally-free) sheaves in $M(\xi)$ is nonempty (see Theorem \ref{thm-singular}), the morphism $j$ is not an isomorphism and contracts curves (see Proposition \ref{prop-DUY}). Consequently, the line bundle $\mathcal{L}_1$ defining $j$ is base-point-free but not ample (see \cite{HuybrechtsLehn}). It corresponds to a Chern character $u_1\in \xi^\perp \cong \Pic M(\xi)$ and spans an extremal ray of $\Amp(M(\xi))$. For all the characters $\xi$ that we will consider in this paper there are singular sheaves in $M(\xi)$, so one edge of $\Amp(M(\xi))$ is always spanned by $u_1$. We must compute the other edge of the cone, which we call the {\em primary edge}.
We now state our results. Let $\xi = (r, \mu, \Delta)$ be a stable Chern character. Let $\xi'= (r', \mu', \Delta')$ be the stable Chern character satisfying the following defining properties: \begin{itemize} \item $0< r' \leq r$ and $\mu'< \mu$, \item Every rational number in the interval $(\mu', \mu)$ has denominator greater than $r$, \item The discriminant of any stable bundle of slope $\mu'$ and rank at most $r$ is at least $\Delta'$, \item The minimal rank of a stable Chern character with slope $\mu'$ and discriminant $\Delta'$ is $r'$. \end{itemize} The character $\xi'$ is easily computed using Dr\'ezet and Le Potier's classification of stable bundles.
\begin{theorem}\label{thm-asymptotic} Let $\xi = (r,\mu,\Delta)$ be a positive height Chern character such that $r$ and $c_1$ are coprime. Suppose $\Delta$ is sufficiently large, depending on $r$ and $\mu$. The cone $\Amp(M(\xi))$ is spanned by $u_1$ and a negative rank character in $(\xi')^\perp$. \end{theorem}
The required lower bound on $\Delta$ can be made explicit; see Remark \ref{rem-explicit}. Our second result computes the ample cone of small rank moduli spaces.
\begin{theorem}\label{thm-smallRank} Let $\xi = (r,\mu,\Delta)$ be a positive height Chern character with $r\leq 6$. \begin{enumerate} \item If $\xi$ is not a twist of $(6,\frac{1}{3},\frac{13}{18})$, then $\Amp(M(\xi))$ is spanned by $u_1$ and a negative rank character in $(\xi')^\perp$. \item If $\xi = (6,\frac{1}{3},\frac{13}{18})$, then $\Amp(M(\xi))$ is spanned by $u_1$ and a negative rank character in $(\ch \mathcal{O}_{\mathbb{P}^2})^\perp$. \end{enumerate} \end{theorem}
The new ingredient that allows us to calculate $\Amp(M(\xi))$ is Bridgeland stability. Bridgeland \cite{bridgeland:stable}, \cite{Bridgeland} and Arcara, Bertram \cite{ArcaraBertram} construct Bridgeland stability conditions on the bounded derived category of coherent sheaves on a projective surface. On $\mathbb{P}^2$, these stability conditions $\sigma_{s,t} = (\mathcal A_s, Z_{s,t})$ are parameterized by a half plane $H := \{ (s,t) | s,t \in \mathbb{R}, t>0\}$ (see \cite{ABCH} and \S \ref{sec-prelim}). Given a Chern character $\xi$, $H$ admits a finite wall and chamber decomposition, where in each chamber the collection of $\sigma_{s,t}$-semistable objects with Chern character $\xi$ remains constant. These walls are disjoint and consist of a vertical line $s = \mu$ and nested semicircles with center along $t=0$ \cite{ABCH}. In particular, there is a largest semicircular wall $W_{\max}$ to the left of the vertical wall. We will call this wall the {\em Gieseker wall}. Outside this wall, the moduli space $M_{\sigma_{s,t}}(\xi)$ of $\sigma_{s,t}$-semistable objects is isomorphic to the Gieseker moduli space $M(\xi)$ \cite{ABCH}.
Let $\sigma_0$ be a stability condition on the Gieseker wall for $M(\xi)$. Bayer and Macr\`i \cite{BayerMacri2} construct a nef divisor $\ell_{\sigma_0}$ on $M(\xi)$ corresponding to $\sigma_0$. They also characterize the curves $C$ in $M(\xi)$ which have intersection number $0$ with $\ell_{\sigma_0}$, as follows: $\ell_{\sigma_0}.C=0$ if and only if two general sheaves parameterized by $C$ are $S$-equivalent with respect to $\sigma_0$ (that is, their Jordan-H\"older factors with respect to the stability condition $\sigma_0$ are the same). The divisor $\ell_{\sigma_0}$ is therefore an extremal nef divisor if and only if such a curve in $M(\xi)$ exists. This divisor can also be constructed via the GIT methods of Li and Zhao \cite{LiZhao}.
In light of the results of Bayer and Macr\`i, our proofs of Theorems \ref{thm-asymptotic} and \ref{thm-smallRank} amount to the computation of the Gieseker wall. For simplicity, we describe our approach to proving Theorem \ref{thm-asymptotic}; the basic strategy for the proof of Theorem \ref{thm-smallRank} is similar.
\begin{theorem}\label{thm-introWall} Let $\xi$ be as in Theorem \ref{thm-asymptotic}. The Gieseker wall for $M(\xi)$ is the wall $W(\xi',\xi)$ where $\xi$ and $\xi'$ have the same Bridgeland slope. \end{theorem}
There are two parts to the proof of this theorem. First, we show that $W_{\max}$ is no larger than $W(\xi',\xi)$. This is a numerical computation based on Bridgeland stability. The key technical result (Theorem \ref{thm-excludeHighRank}) is that if a wall is larger than $W(\xi',\xi)$, then the rank of a destabilizing subobject corresponding to the wall is at most $\rk(\xi)$. We then find that the extremality properties defining $\xi'$ guarantee that $W(\xi',\xi)$ is at least as large as any wall for $M(\xi)$ (Theorem \ref{thm-main}).
In the other direction, we must show that $W(\xi',\xi)$ is an actual wall for $M(\xi)$. Define a character $\xi'' = \xi-\xi'$. Our next theorem produces a sheaf $E\in M(\xi)$ which is destabilized along $W(\xi',\xi)$.
\begin{theorem}\label{thm-introSt} Let $\xi$ be as in Theorem \ref{thm-asymptotic}. Fix general sheaves $F\in M(\xi')$ and $Q\in M(\xi'')$. Then the general sheaf $E$ given by an extension $$0\to F\to E\to Q\to 0$$ is Gieseker stable. Furthermore, we obtain curves in $M(\xi)$ by varying the extension class. \end{theorem}
If $E$ is a Gieseker stable extension as in the theorem, then $E$ is strictly semistable with respect to a stability condition $\sigma_0$ on $W(\xi',\xi)$, and not semistable with respect to a stability condition below $W(\xi',\xi)$. Thus $W(\xi',\xi)$ is an actual wall for $M(\xi)$, and it is the Gieseker wall. Any two Gieseker stable extensions of $Q$ by $F$ are $S$-equivalent with respect to $\sigma_0$, so any curve $C$ in $M(\xi)$ obtained by varying the extension class satisfies $\ell_{\sigma_0}.C=0$. Therefore, $\ell_{\sigma_0}$ spans an edge of the ample cone. Dually, $C$ spans an edge of the Mori cone of curves.
The natural analogs of Theorems \ref{thm-introWall} and \ref{thm-introSt} are almost true when instead $\rk(\xi)\leq 6$ as in Theorem \ref{thm-smallRank}; some minor adjustments to the statements need to be made for certain small discriminant cases. See Theorems \ref{thm-smallRankCurves}, \ref{thm-mainsmall}, \ref{thm-mainSporadic}, and Propositions \ref{prop-sporadic} and \ref{prop-sporadic2} for precise statements. As the rank increases beyond $6$, these exceptions become more common, and many more ad hoc arguments are required when using current techniques.
Bridgeland stability conditions were effectively used to study the birational geometry of Hilbert schemes of points on $\mathbb{P}^2$ in \cite{ABCH} and moduli spaces of rank 0 semistable sheaves in \cite{Woolf}. The ample cone of $M(\xi)$ was computed earlier for some special Chern characters. The ample cone of the Hilbert scheme of points on $\mathbb{P}^2$ was computed in \cite{li} (see also \cite{ABCH}, \cite{Ohkawa}). Str\o mme computed $\Amp(M(\xi))$ when the rank of $\xi$ is two and either $c_1$ or $c_2 - \frac{1}{4}c_1^2$ is odd \cite{Stromme}. Similarly, when the slope is $\frac{1}{r}$, Yoshioka \cite{Yoshioka} computed the ample cone of $M(\xi)$ and described the first flip. Our results contain these as special cases. Bridgeland stability has also been effectively used to compute ample cones of moduli spaces of sheaves on other surfaces. For example, see \cite{ArcaraBertram}, \cite{BayerMacri2}, \cite{BayerMacri3}, \cite{MYY1}, \cite{MYY2} for K3 surfaces, \cite{MM}, \cite{Yoshioka2}, \cite{YanagidaYoshioka} for abelian surfaces, \cite{Nuer} for Enriques surfaces, and \cite{BertramCoskun} for the Hilbert scheme of points on Hirzebruch surfaces and del Pezzo surfaces.
\subsection*{Organization of the paper} In \S \ref{sec-prelim}, we will introduce the necessary background on $M(\xi)$ and Bridgeland stability conditions on $\mathbb{P}^2$. In \S \ref{sec-hp} and \S \ref{sec-extremal}, we study the stability of extensions of sheaves and prove the first statement in Theorem \ref{thm-introSt}. In \S \ref{sec-elementaryMod} and \ref{sec-smallRank}, we prove the analogue of the first assertion in Theorem \ref{thm-introSt} for $\rk(\xi) \leq 6$. In \S \ref{sec-curves}, we complete the proof of Theorem \ref{thm-introSt} (and its small-rank analogue) by constructing the desired curves of extensions. Finally, in \S \ref{sec-ample}, we compute the Gieseker wall, completing the proofs of Theorems \ref{thm-asymptotic} and \ref{thm-smallRank}.
\section{Preliminaries}\label{sec-prelim} In this section, we recall basic facts concerning the classification of stable vector bundles on $\mathbb{P}^2$ and Bridgeland stability.
\subsection{Stable sheaves on $\mathbb{P}^2$} Let $\xi$ be the Chern character of a (semi)stable sheaf on $\mathbb{P}^2$. We will call such characters {\em (semi)stable characters}. The classification of stable characters on $\mathbb{P}^2$ due to Dr\'{e}zet and Le Potier is best stated in terms of the slope $\mu$ and the discriminant $\Delta$. Let $$P(m)= \frac{1}{2}(m^2 + 3m +2)$$ denote the Hilbert polynomial of $\mathcal{O}_{\mathbb{P}^2}$. In terms of these invariants, the Riemann-Roch formula reads $$\chi(E,F) = \rk(E) \rk(F) ( P( \mu(F) - \mu(E)) - \Delta(E) - \Delta(F)).$$
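As a quick sanity check of the normalization (this example is ours), take $E=\mathcal{O}_{\mathbb{P}^2}$ and $F=\mathcal{O}_{\mathbb{P}^2}(n)$ with $n\ge 0$; both have rank $1$ and discriminant $0$, and $\mu(F)-\mu(E)=n$, so the formula gives $$\chi(\mathcal{O}_{\mathbb{P}^2},\mathcal{O}_{\mathbb{P}^2}(n))=P(n)=\frac{(n+1)(n+2)}{2},$$ which is indeed $h^0(\mathbb{P}^2,\mathcal{O}_{\mathbb{P}^2}(n))$, as it must be since the higher cohomology of $\mathcal{O}_{\mathbb{P}^2}(n)$ vanishes for $n\geq 0$.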
An {\em exceptional bundle} $E$ on $\mathbb{P}^2$ is a stable bundle such that $\Ext^1(E,E)=0$. The exceptional bundles are rigid; their moduli spaces consist of a single reduced point \cite[Corollary 16.1.5]{LePotierLectures}. They are the stable bundles $E$ on $\mathbb{P}^2$ with $\Delta(E) < \frac{1}{2}$ \cite[Proposition 16.1.1]{LePotierLectures}. Examples of exceptional bundles include line bundles $\mathcal{O}_{\mathbb{P}^2}(n)$ and the tangent bundle $T_{\mathbb{P}^2}$. All exceptional bundles can be obtained from line bundles via a sequence of mutations \cite{DrezetBeilinson}. An {\em exceptional slope} $\alpha\in \mathbb{Q}$ is the slope of an exceptional bundle. If $\alpha$ is an exceptional slope, there is a unique exceptional bundle $E_\alpha$ with slope $\alpha$. The rank of the exceptional bundle is the smallest positive integer $r_{\alpha}$ such that $r_{\alpha} \alpha$ is an integer. The discriminant $\Delta_{\alpha}$ is then given by $$\Delta_{\alpha} = \frac{1}{2} \left( 1 - \frac{1}{r_{\alpha}^2}\right).$$ The set $\F E$ of exceptional slopes is well-understood (see \cite{DLP} and \cite{CoskunHuizengaWoolf}).
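For instance (a standard computation which we include for concreteness), the Euler sequence $0\to\mathcal{O}_{\mathbb{P}^2}\to\mathcal{O}_{\mathbb{P}^2}(1)^{\oplus 3}\to T_{\mathbb{P}^2}\to 0$ gives $\ch(T_{\mathbb{P}^2})=3\ch(\mathcal{O}_{\mathbb{P}^2}(1))-\ch(\mathcal{O}_{\mathbb{P}^2})=2+3H+\tfrac32 H^2$, where $H$ is the hyperplane class. Thus $\mu(T_{\mathbb{P}^2})=\tfrac32$, $r_{3/2}=2$ and $$\Delta(T_{\mathbb{P}^2})=\frac{1}{2}\left(\frac32\right)^2-\frac{3/2}{2}=\frac38=\frac{1}{2}\left(1-\frac{1}{r_{3/2}^2}\right)=\Delta_{3/2},$$ in agreement with the formula above.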
The classification of positive dimensional moduli spaces of stable vector bundles on $\mathbb{P}^2$ is expressed in terms of a fractal-like curve $\delta$ in the $(\mu, \Delta)$-plane. For each exceptional slope $\alpha \in \F E$, there is an interval $I_{\alpha} = ( \alpha-x_{\alpha}, \alpha + x_{\alpha})$ with $$x_{\alpha} = \frac{3- \sqrt{5+8 \Delta_{\alpha}}}{2}$$ such that the function $\delta(\mu)$ is defined on $I_{\alpha}$ by the function $$\delta(\mu) = P(-|\mu - \alpha|) - \Delta_{\alpha}, \ \ \mbox{if} \ \ \alpha - x_{\alpha} < \mu < \alpha + x_{\alpha}.$$ The graph of $\delta(\mu)$ is an increasing concave up parabola on the interval $[\alpha - x_{\alpha}, \alpha]$ and a decreasing concave up parabola on the interval $[\alpha, \alpha + x_{\alpha}]$. The function $\delta$ is invariant under translation by integers. The main classification theorem of Dr\'{e}zet and Le Potier is as follows.
\begin{theorem}[\cite{DLP}, \cite{LePotierLectures}] There exists a positive dimensional moduli space of Gieseker semistable sheaves $M(\xi)$ with integral Chern character $\xi$ if and only if $\Delta \geq \delta(\mu)$. In this case, $M(\xi)$ is a normal, irreducible, factorial projective variety of dimension $r^2(2 \Delta -1) + 1$. \end{theorem}
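For example (a well-known special case which we recall for orientation), the ideal sheaf of a length $n$ subscheme of $\mathbb{P}^2$ has invariants $(r,\mu,\Delta)=(1,0,n)$, while $\delta(0)=P(0)-\Delta_0=1$; the theorem therefore gives a positive dimensional moduli space exactly when $n\ge 1$, namely the Hilbert scheme of $n$ points in $\mathbb{P}^2$, whose dimension $2n$ matches $r^2(2\Delta-1)+1$.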
\subsection{Singular sheaves on $\mathbb{P}^2$} For studying one extremal edge of the ample cone, we need to understand the locus of singular sheaves in $M(\xi)$. The following theorem, which is likely well-known to experts, characterizes the Chern characters where the locus of singular sheaves in $M(\xi)$ is nonempty. We include a proof for lack of a convenient reference.
\begin{theorem}\label{thm-singular} Let $\xi= (r, \mu, \Delta)$ be an integral Chern character with $r>0$ and $\Delta \geq \delta(\mu)$. The locus of singular sheaves in $M(\xi)$ is empty if and only if $\Delta - \delta(\mu) < \frac{1}{r}$ and $\mu$ is not an exceptional slope. \end{theorem}
\begin{proof} Let $F$ be a singular sheaf in $M(\xi)$. Then $F^{**}$ is a $\mu$-semistable, locally free sheaf \cite[\S 8]{HuybrechtsLehn} with invariants $$\rk(F^{**}) = r, \ \ \mu(F^{**}) = \mu, \ \ \mbox{and} \ \ \Delta(F^{**}) \leq \Delta(F) - \frac{1}{r}.$$
Since the set $\mathbb{R} - \cup_{\alpha \in \F{E}} I_{\alpha}$ does not contain any rational numbers \cite{DLP}, \cite[Theorem 4.1]{CoskunHuizengaWoolf}, $\mu \in I_{\alpha}$ for some exceptional slope $\alpha$. Let $E_{\alpha}$ with invariants $(r_{\alpha}, \alpha, \Delta_{\alpha})$ be the corresponding exceptional bundle.
If $\Delta - \delta(\mu)< \frac{1}{r}$ and $F$ is a singular sheaf in $M(\xi)$, then $\Delta(F^{**})< \delta(\mu)$. If $\alpha > \mu$, then $\hom(E_{\alpha}, F^{**})>0$. If $\alpha< \mu$, then $\hom(F^{**}, E_{\alpha})>0$. In either case, these homomorphisms violate the $\mu$-semistability of $F^{**}$, leading to a contradiction. Therefore, if $\Delta - \delta(\mu) < \frac{1}{r}$ and $\mu$ is not an exceptional slope, then the locus of singular sheaves in $M(\xi)$ is empty.
To prove the converse, we construct singular sheaves using elementary modifications. If $\Delta - \delta(\mu) \geq \frac{1}{r}$, then $\zeta = (r, \mu, \Delta - \frac{1}{r})$ is a stable Chern character. Let $G$ be a $\mu$-stable bundle in $M(\zeta)$, which exists by \cite[Corollary 4.12]{DLP}. Choose a point $p\in \mathbb{P}^2$ and let $G \rightarrow \mathcal{O}_p$ be a general surjection. Then the kernel sheaf $F$ defined by $$0 \rightarrow F \rightarrow G \rightarrow \mathcal{O}_p \rightarrow 0$$ is a $\mu$-stable, singular sheaf with Chern character $\xi$ (see \S \ref{sec-elementaryMod} for more details on elementary modifications).
We are reduced to showing that if $\mu = \alpha$ and $\Delta - \delta(\alpha) < \frac{1}{r}$, then the locus of singular sheaves in $M(\xi)$ is nonempty. Since $c_1(E_{\alpha})$ and $r_{\alpha}$ are coprime, the rank of any bundle with slope $\alpha$ is a multiple of $r_{\alpha}$. Write $$r= k r_{\alpha}^2 + m r_{\alpha}, \ \ 0 \leq k, \ 0 < m \leq r_{\alpha}.$$ By integrality, there exists an integer $N$ such that $\Delta - \frac{N}{r} = \Delta_{\alpha}$. Our choice of $k$ implies $$ \Delta= \Delta_{\alpha} + \frac{k+1}{r}.$$
First, assume $k=0$. If $r'<r$, then $\Delta_{\alpha} + \frac{1}{r'} > \Delta_{\alpha} + \frac{1}{r}$. Consequently, the only Gieseker semistable sheaves of character $(r',\mu,\Delta')$ with $r'<r$ and $\Delta'<\Delta$ are semi-exceptional sheaves $E_{\alpha}^{\oplus \ell}$ with $\ell < m$. Let $F$ be a general elementary modification of the form $$0 \rightarrow F \rightarrow E_{\alpha}^{\oplus m} \rightarrow \mathcal{O}_p \rightarrow 0.$$ Then $F$ is a $\mu$-semistable singular sheaf with Chern character $\xi$. If $F$ were not Gieseker semistable, then it would admit an injective map $\phi: E_{\alpha} \rightarrow F$. By Lemma \ref{lem-Segre} below, for a general surjection $\psi: E_{\alpha}^{\oplus m} \rightarrow \mathcal{O}_p$, there does not exist an injection $E_{\alpha} \rightarrow E_{\alpha}^{\oplus m}$ which maps to $0$ under $\psi$. Composing $\phi$ with the maps in the exact sequence defining $F$, we get a contradiction. We conclude that $F$ is Gieseker semistable. This constructs singular sheaves when $k=0$.
Next assume $k>0$. If $m= r_{\alpha}$, then we can construct a singular sheaf in $M(\xi)$ as a $(k+1)$-fold direct sum of a semistable singular sheaf constructed in the case $k=0$, $m=r_\alpha$. Hence, we may assume that $m< r_{\alpha}$. Let $G$ be a $\mu$-stable vector bundle with Chern character $$\zeta= \left(kr_{\alpha}^2, \alpha, \delta(\alpha)=\Delta_{\alpha} + \frac{1}{r_{\alpha}^2}\right).$$ Note that $(\mu(\zeta),\Delta(\zeta))$ lies on the curve $\delta$, hence $\chi(E_{\alpha}, G) = \chi(G, E_{\alpha}) =0$. Every locally free sheaf in $M(\zeta)$ has a two-step resolution in terms of exceptional bundles orthogonal to $E_{\alpha}$ \cite{DLP}. Consequently, $\hom(G, E_{\alpha})=0$. We also have $\hom(E_\alpha, G)=0$ by stability.
Let $ \phi: E_{\alpha}^{\oplus m} \oplus G \rightarrow \mathcal{O}_p$ be a general surjection and let $F$ be defined as the corresponding elementary modification $$0 \rightarrow F \rightarrow E_{\alpha}^{\oplus m} \oplus G \rightarrow \mathcal{O}_p \rightarrow 0.$$ We first check that the Chern character of $F$ is $\xi$. Clearly, $\rk(F)=r$ and $\mu(F)= \alpha$. The discriminant equals $$ \Delta(F) = \frac{1}{r} \left( m r_{\alpha} \Delta_{\alpha} + k r_{\alpha}^2 \left(\Delta_{\alpha} + \frac{1}{r_{\alpha}^2}\right)\right) + \frac{1}{r} = \Delta_{\alpha} + \frac{k+1}{r} = \Delta.$$ Hence, $F$ is a singular sheaf with the correct invariants. It remains to check that it is Gieseker semistable. Note that $F$ is at least $\mu$-semistable.
Suppose $\psi: U \rightarrow F$ is an injection from a Gieseker stable sheaf $U$ that destabilizes $F$. Since $F$ is $\mu$-semistable, $\mu(U) = \alpha$ and $\Delta(U) < \Delta$. Then we claim that either $U = E_{\alpha}$ or $\rk(U) > m r_{\alpha}$. Suppose $U \not= E_{\alpha}$ and $\rk(U) = s r_{\alpha}$. Then $$\Delta = \Delta_{\alpha} + \frac{k+1}{r}> \Delta(U) \geq \Delta_{\alpha} + \frac{1}{s r_{\alpha}}.$$ Hence, $$ s > \frac{k r_{\alpha} + m}{k+1} > \frac{km + m} {k+1} = m.$$
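In more detail, the displayed inequalities give $\frac{k+1}{r} > \frac{1}{s r_{\alpha}}$, so $s(k+1)r_{\alpha} > r = kr_{\alpha}^2+mr_{\alpha}$ and hence $s(k+1)>kr_{\alpha}+m$; since $m<r_{\alpha}$ in the case under consideration, we also have $kr_{\alpha}+m>km+m=m(k+1)$.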
If $U \neq E_{\alpha}$, composing $\psi$ with the inclusion to $E_{\alpha}^{\oplus m} \oplus G$ gives an injection $\psi' : U \rightarrow E_{\alpha}^{\oplus m} \oplus G$. Since $\rk (U) > m r_{\alpha}$, the projection to $G$ cannot be zero. Hence, we get a nonzero map $\vartheta : U \rightarrow G$. Let $V = \im \vartheta$. We have $\rk V = \rk G$ by the $\mu$-stability of $G$. We claim that $\vartheta$ is in fact surjective. The quotient $G/V$ is $0$-dimensional by stability, and, if it is nonzero, then $$\Delta(U) < \Delta < \delta(\alpha) + \frac{1}{r}< \delta(\alpha) + \frac{1}{kr_\alpha^2} \leq \Delta(V).$$ This violates the stability of $U$, so $V = G$ and $\vartheta$ is surjective. If $\rk(U) = \rk(G)$, then $U\cong G$ and $\psi'$ maps $U$ isomorphically onto $G \subset E_{\alpha}^{\oplus m} \oplus G$. A general hyperplane in the fiber $(E_\alpha^{\oplus m} \oplus G)_p$ is transverse to $G_p$, so this contradicts the fact that $\phi \circ \psi' = 0$ and $\phi$ is general. Suppose $\rk(U)>\rk(G)$, and write $$\rk(U) = kr_{\alpha}^2+nr_\alpha$$ with $0< n<m$. Then we find $$\Delta_\alpha+\frac{k+1}{r} =\Delta>\Delta(U) \geq \Delta_\alpha + \frac{k+1}{\rk(U)},$$ contradicting $\rk(U) < r$. We conclude that if $U\neq E_\alpha$, then $U$ cannot destabilize $F$.
On the other hand, if $U = E_{\alpha}$, then by the semistability of $G$ the composition of $\psi'$ with the projection to $G$ must be $0$. A general hyperplane in the fiber $(E_\alpha^{\oplus m} \oplus G)_p$ intersects $(E_{\alpha}^{\oplus m})_p$ in a hyperplane $H\subset (E_\alpha^{\oplus m})_p$. Since $m\leq r_\alpha$, Lemma \ref{lem-Segre} shows the composition of $\psi'$ with $\phi$ is nonzero, a contradiction. We conclude that $F$ is Gieseker semistable. \end{proof}
\begin{lemma}\label{lem-Segre} Let $E_{\alpha}$ be an exceptional bundle of rank $r_{\alpha}$. Let $H$ be a general codimension $c$ subspace of the fiber of $E_{\alpha}^{\oplus m}$ over a point $p$. Then there exists an injection $\phi: E_{\alpha} \rightarrow E_\alpha^{\oplus m}$ such that $\phi_p(E_{\alpha}) \subset H$ if and only if $c r_{\alpha} \leq m-1$. \end{lemma} \begin{proof} For simplicity set $E=E_\alpha $ and $r =r_\alpha$. Let $S$ denote the Segre embedding of $\mathbb{P}^{r-1} \times \mathbb{P}^{m-1}$ in $\mathbb{P}^{rm-1}$. Let $q_1, q_2$ denote the two projections from $S$ to $\mathbb{P}^{r-1}$ and $\mathbb{P}^{m-1}$, respectively. We will call a linear $\mathbb{P}^{r-1}$ in $S$ contracted by $q_2$ a {\em $\mathbb{P}^{r-1}$ fiber}.
Let $\phi: E \rightarrow E^{\oplus m}$ be an injection. Composing $\phi$ with the $m$ projections, we get $m$ morphisms $E\rightarrow E$. Since $E$ is simple, the resulting maps are all homotheties, given by scalars $\lambda_1,\dots,\lambda_m$. Hence, on the fiber over $p$, the map $\phi_p$ is given by the $rm\times r$ matrix $M = (\lambda_1 I \ \ \lambda_2 I \ \ \dots \ \ \lambda_m I)^T$, where $I$ is the $r\times r$ identity matrix. For $\vec{x} = (x_1, \dots, x_r)^T\in E_p$ we have $$M \vec{x} = (\lambda_1 x_1, \lambda_1 x_2, \dots, \lambda_1 x_r, \dots, \lambda_m x_1, \dots, \lambda_m x_r)^T.$$ If we projectivize, we see that $\mathbb{P} (\phi_p(E_p))$ is a $\mathbb{P}^{r -1}$ fiber contained in the Segre embedding of $\mathbb{P}^{r -1} \times \mathbb{P}^{m-1}$ in $\mathbb{P}((E^{\oplus m})_p)$. Conversely, every $\mathbb{P}^{r-1}$ fiber in $S$ is obtained by fixing a point $(\lambda_1, \dots, \lambda_m) \in \mathbb{P}^{m-1}$ and, hence, is the fiber of an injection $E \rightarrow E^{\oplus m}$. The lemma thus reduces to the statement that a general codimension $c$ linear subspace of $\mathbb{P}^{m r -1}$ contains a $\mathbb{P}^{r -1}$ fiber in $S$ if and only if $c r \leq m-1$.
Consider the incidence correspondence $$J = \{ (A, H) : H \cong \mathbb{P}^{m r - 1 -c}, A \subset H \cap S \ \ \mbox{is a } \ \mathbb{P}^{r-1} \ \mbox{fiber}\}.$$ Then the first projection $\pi_1$ maps $J$ onto $\mathbb{P}^{m-1}$. The fiber of $\pi_1$ over a linear space $A$ is the set of codimension $c$ linear spaces that contain $A$, hence it is isomorphic to the Grassmannian $G((m-1)r - c, (m-1)r)$. By the theorem on the dimension of fibers, $J$ is irreducible of dimension $(cr+1)(m-1) - c^2$. The second projection cannot dominate $G(mr-c, mr)$ if $\dim(J) < \dim G(mr-c, mr) = c(mr-c)$. Comparing these dimensions, we conclude that if $c r > m-1$, the second projection is not dominant. Hence, the general codimension $c$ linear space does not contain a $\mathbb{P}^{r-1}$ fiber in $S$.
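Explicitly, $$(cr+1)(m-1)-c^2 < c(mr-c) \iff crm-cr+m-1 < cmr \iff m-1 < cr,$$ so the dimension of $J$ falls short of $\dim G(mr-c,mr)$ precisely when $cr>m-1$.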
To see the converse, we check that if $r \leq m-1$, then a general hyperplane contains a codimension $r$ locus of linear $\mathbb{P}^{r-1}$ fibers of $S$. In the coordinates above, the point of $S$ determined by $(\lambda_1,\dots,\lambda_m)$ and $\vec{x}$ has $Z_{(j-1)r+i} = \lambda_j x_i$. Consider the hyperplane $H$ defined by $\sum_{i=1}^{r} Z_{(i-1)r + i} =0.$ Substituting the equations of the Segre embedding, we see that a $\mathbb{P}^{r-1}$ fiber is contained in $H$ if and only if $\sum_{i=1}^{r} \lambda_i x_i = 0$. Since this equation must hold for every choice of $x_i$, we conclude that $\lambda_1 = \cdots = \lambda_r =0$. Hence, the locus of $\mathbb{P}^{r-1}$ fibers of $S$ contained in $H$ is a codimension $r$ linear space in $\mathbb{P}^{m-1}$. A codimension $c$ linear space is the intersection of $c$ hyperplanes. Moreover, the intersection of $c$ codimension $r$ subvarieties of $\mathbb{P}^{m-1}$ is nonempty if $c r \leq m-1$. Hence, if $c r \leq m-1$, every codimension $c$ linear space contains a $\mathbb{P}^{r-1}$ fiber of $S$. This suffices to prove the converse.
\end{proof}
\subsection{The Picard group and Donaldson-Uhlenbeck-Yau compactification} Stable vector bundles with $\Delta = \delta(\mu)$ are called {\em height zero} bundles. Their moduli spaces have Picard rank one. The ample generator spans the ample cone and there is nothing further to discuss.
For the rest of the subsection, suppose $\xi=(r,\mu,\Delta)$ is a \emph{positive height} Chern character, meaning $\Delta> \delta(\mu)$. There is a pairing on $K(\mathbb{P}^2)$ given by $(\xi, \zeta) = \chi(\xi^*, \zeta)$. When $\Delta > \delta(\mu)$, Dr\'{e}zet proves that the Picard group of $M(\xi)$ is a free abelian group on two generators naturally identified with $\xi^{\perp}$ in $K(\mathbb{P}^2)$ \cite{LePotierLectures}. In $M(\xi)$, linear equivalence and numerical equivalence coincide and the N\'{e}ron-Severi space $\NS(M(\xi)) = \Pic(M(\xi)) \otimes \mathbb{R}$ is a two-dimensional vector space. In order to specify the ample cone, it suffices to determine its two extremal rays.
In $\xi^\perp \cong \Pic(M(\xi))$ there is a unique character $u_1$ with $\rk(u_1) = 0$ and $c_1(u_1) = -r$. The corresponding line bundle $\mathcal L_1$ is base-point-free and defines the Jun Li morphism $j: M(\xi) \to M^{DUY}(\xi)$ to the Donaldson-Uhlenbeck-Yau compactification \cite[\S 8]{HuybrechtsLehn}.
\begin{proposition}\label{prop-DUY} Let $\xi= (r,\mu,\Delta)$ be a positive height character, and suppose that there are singular sheaves in $M(\xi)$. Then $u_1$ spans an extremal edge of $\Amp(M(\xi))$. \end{proposition}
\begin{proof} We show that $j$ contracts a curve in $M(\xi)$. Two stable sheaves $E,E'\in M(\xi)$ are identified by $j$ if $E^{**} \cong (E')^{**}$ and the sets of singularities of $E$ and $E'$ are the same (counting multiplicity). The proof of Theorem \ref{thm-singular} constructs singular sheaves via an elementary modification that arises from a surjection $E= E_{\alpha}^{\oplus m} \oplus G \rightarrow \mathcal{O}_p$. Here $m=0$ if $\Delta - \delta(\mu) \geq \frac{1}{r}$ or $\mu$ is not exceptional. Otherwise, $1 \leq m < r_{\alpha}$. Note that $\hom(E, \mathcal{O}_p)= r$ and $\dim (\Aut(E)) = m^2 +1$ if $G\not= 0$ and $\dim (\Aut(E)) = m^2$ if $G=0$. Hence, if $r>1$, varying the surjection $E \rightarrow \mathcal{O}_p$ gives a positive dimensional family of nonisomorphic Gieseker semistable sheaves with the same singular support and double dual. If instead $r = 1$ and $\Delta\geq 2$, then (up to a twist) $j$ is the Hilbert-Chow morphism to the symmetric product, and the result is still true. \end{proof}
\begin{corollary}\label{cor-DUY} Let $\xi = (r,\mu,\Delta)$ be a positive height character. If $\Delta$ is sufficiently large or if $r\leq 6$ then $u_1$ spans an edge of $\Amp(M(\xi))$. \end{corollary} \begin{proof} In either case, this follows from Theorem \ref{thm-singular} and Proposition \ref{prop-DUY}. \end{proof}
\subsection{Bridgeland stability conditions on $\mathbb{P}^2$}
We now recall basic facts concerning Bridgeland stability conditions on $\mathbb{P}^2$ developed in \cite{ABCH}, \cite{CoskunHuizenga} and \cite{HuizengaPaper2}.
A {\em Bridgeland stability condition} $\sigma$ on the bounded derived category $\mathcal{D}^b(X)$ of coherent sheaves on a smooth projective variety $X$ is a pair $\sigma = (\mathcal A, Z)$, where $\mathcal A$ is the heart of a bounded $t$-structure and $Z$ is a group homomorphism $$Z: K(\mathcal{D}^b(X)) \rightarrow \mathbb{C}$$ satisfying the following two properties.
\begin{enumerate}
\item (Positivity) For every object $0 \not= E \in \mathcal A$, $Z(E) \in \{r e^{i \pi \theta} | r> 0, 0 < \theta \leq 1\}$. Positivity allows one to define the slope of a non-zero object in $\mathcal A$ by setting $$\mu_Z(E) = - \frac{\Re(Z(E))}{\Im(Z(E))}.$$ An object $E$ of $\mathcal A$ is called {\em (semi)stable} if for every proper subobject $F\subset E$ in $\mathcal A$ we have $\mu_Z(F) < (\leq) \mu_Z (E)$.
\item (Harder-Narasimhan Property) Every object of $\mathcal A$ has a finite Harder-Narasimhan filtration. \end{enumerate}
Bridgeland \cite{Bridgeland} and Arcara and Bertram \cite{ArcaraBertram} have constructed Bridgeland stability conditions on projective surfaces. In the case of $\mathbb{P}^2$, the relevant Bridgeland stability conditions have the following form. Any torsion-free coherent sheaf $E$ on $\mathbb{P}^2$ has a Harder-Narasimhan filtration $$0= E_0 \subset E_1 \subset \cdots \subset E_n = E$$ with respect to the Mumford slope with semistable factors $\mathrm{gr}_i = E_i / E_{i-1}$ such that $$\mu_{\max}(E) = \mu(\mathrm{gr}_1) > \cdots > \mu(\mathrm{gr}_n) = \mu_{\min}(E).$$ Given $s\in \mathbb{R}$, let $\mathcal Q_s$ be the full subcategory of $\coh(\mathbb{P}^2)$ consisting of the sheaves $Q$ such that the quotient of $Q$ by its torsion subsheaf has $\mu_{\min} > s$. Similarly, let $\mathcal F_s$ be the full subcategory of $\coh(\mathbb{P}^2)$ consisting of torsion-free sheaves $F$ with $\mu_{\max}(F) \leq s$. Then the abelian category $$\mathcal A_s := \{ E \in \mathcal{D}^b(\mathbb{P}^2) : \mbox{H}^{-1}(E) \in \mathcal F_s, \mbox{H}^0(E) \in \mathcal Q_s, \mbox{H}^i(E) = 0 \ \mbox{for} \ i \not= -1, 0 \}$$ obtained by tilting the category of coherent sheaves with respect to the torsion pair $(\mathcal F_s, \mathcal Q_s)$ is the heart of a bounded $t$-structure. Let $$Z_{s,t} (E) = - \int_{\mathbb{P}^2} e^{-(s+it)H} \ch(E),$$ where $H$ is the hyperplane class on $\mathbb{P}^2$. The pair $(\mathcal A_s, Z_{s,t})$ is a Bridgeland stability condition for every $s \in \mathbb{R}$ and $t > 0$. We thus obtain a half-plane of Bridgeland stability conditions on $\mathbb{P}^2$ parameterized by $(s, t)$, $t>0$.
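Concretely, writing $\ch(E) = \ch_0(E)+\ch_1(E)H+\ch_2(E)H^2$ and expanding the exponential (using $\int_{\mathbb{P}^2}H^2=1$), we have $$Z_{s,t}(E) = -\ch_2(E) + (s+it)\ch_1(E) - \frac{(s+it)^2}{2}\ch_0(E), \qquad \textrm{so} \qquad \Im Z_{s,t}(E) = t\left(\ch_1(E)-s\ch_0(E)\right).$$ In particular, a torsion-free sheaf $E\in\mathcal Q_s$ of positive rank has $\Im Z_{s,t}(E) = t\rk(E)(\mu(E)-s)>0$ when $t>0$.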
\subsection{Bridgeland walls} If we fix a Chern character $\xi\in K(\mathbb{P}^2)$, the $(s,t)$-plane of stability conditions for $\mathbb{P}^2$ admits a finite wall and chamber structure: the set of objects in $\mathcal A_s$ with Chern character $\xi$ that are stable with respect to the stability condition $(\mathcal A_s, Z_{s,t})$ remains constant within the interior of a chamber (\cite{ABCH}, \cite{Bridgeland}, \cite{BayerMacri}, \cite{BayerMacri2}). An object $E$ is destabilized along a wall $W(E, F)$ by $F$ if $E$ is semistable on one side of the wall, but on the other side there is a subobject $F \subset E$ in the category $\mathcal A_s$ with $\mu_{s,t} (F) > \mu_{s,t}(E)$. We call these walls {\em Bridgeland walls}. The equation of the wall $W(E,F)$ can be computed using the relation $\mu_{s,t} (F) = \mu_{s,t}(E)$, which holds along the wall.
Suppose $\xi,\zeta\in K(\mathbb{P}^2)\otimes \mathbb{R}$ are two linearly independent real Chern characters. A \emph{potential Bridgeland wall} is a set in the $(s,t)$-half-plane of the form $$W(\xi,\zeta) = \{(s,t):\mu_{s,t}(\xi) = \mu_{s,t}(\zeta)\},$$ where $\mu_{s,t}$ is the slope associated to $Z_{s,t}$. Bridgeland walls are always potential Bridgeland walls. The \emph{potential Bridgeland walls for $\xi$} are all the potential walls $W(\xi,\zeta)$ as $\zeta$ varies in $K(\mathbb{P}^2)\otimes \mathbb{R}$. If $E,F\in \mathcal{D}^b(\mathbb{P}^2)$, we also write $W(E,F)$ as a shorthand for $W(\ch(E),\ch(F))$.
The potential walls $W(\xi,\zeta)$ can be easily computed in terms of the Chern characters $\xi$ and $\zeta $. \begin{enumerate} \item If $\mu(\xi) = \mu(\zeta)$ (where the Mumford slope is interpreted as $\infty$ if the rank is $0$), then the wall $W(\xi,\zeta)$ is the vertical line $s= \mu(\xi)$ (interpreted as the empty set when the slope is infinite). \item Otherwise, without loss of generality assume $\mu(\xi)$ is finite, so that $r\neq 0$. The walls $W(\xi,\zeta)$ and $W(\xi,\xi+\zeta)$ are equal, so we may further reduce to the case where both $\xi$ and $\zeta$ have nonzero rank. Then we may encode $\xi = (r_1,\mu_1,\Delta_1)$ and $\zeta = (r_2,\mu_2,\Delta_2)$ in terms of slope and discriminant instead of $\ch_1$ and $\ch_2$. The wall $W(\xi,\zeta)$ is the semicircle centered at the point $(s,0)$ with $$s = \frac{1}{2}(\mu_1+\mu_2)-\frac{\Delta_1-\Delta_2}{\mu_1-\mu_2}$$ and having radius $\rho$ given by $$\rho^2 = (s-\mu_1)^2-2\Delta_1.$$ \end{enumerate}
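For example, take $\xi = (1,0,n)$, the Chern character in this encoding of an ideal sheaf of $n$ points, and $\zeta = \ch \mathcal{O}_{\mathbb{P}^2}(-1) = (1,-1,0)$. The formulas above give $$s = \frac{1}{2}(0+(-1))-\frac{n-0}{0-(-1)} = -n-\frac{1}{2} \qquad \textrm{and} \qquad \rho^2 = \left(-n-\frac{1}{2}\right)^2-2n = \left(n-\frac{1}{2}\right)^2,$$ so the potential wall $W(\xi,\zeta)$ is the semicircle of radius $n-\frac{1}{2}$ centered at $(-n-\frac{1}{2},0)$.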
In the principal case of interest, the Chern character $\xi = (r,\mu,\Delta)$ has nonzero rank $r\neq 0$ and nonnegative discriminant $\Delta\geq 0$. In this case, the potential walls for $\xi$ consist of a vertical wall $s=\mu$ together with two disjoint nested families of semicircles on either side of this line \cite{ABCH}. Specifically, for any $s$ with $|s-\mu| > \sqrt{2\Delta}$, there is a unique semicircular potential wall with center $(s,0)$ and radius $\rho$ satisfying $$\rho^2 = (s-\mu)^2 - 2\Delta.$$ The semicircles are centered along the $s$-axis, with smaller semicircles having centers closer to the vertical wall. Every point in the $(s,t)$-half-plane lies on a unique potential wall for $\xi$. When $r>0$, only the family of semicircles left of the vertical wall is interesting, since an object $E$ with Chern character $\xi$ can only be in categories $\mathcal A_s$ with $s<\mu$.
Since the number of Bridgeland walls is finite, there exists a largest semicircular Bridgeland wall $W_{\max}$ to the left of the vertical line $s = \mu$ that contains all other semicircular walls. Furthermore, for every $(s,t)$ with $s< \mu$ and contained outside $W_{\max}$, the moduli space of Bridgeland stable objects in $\mathcal A_s$ with respect to $Z_{s,t}$ and Chern character $\xi$ is isomorphic to the moduli space $M(\xi)$ \cite{ABCH}. We call $W_{\max}$ the \emph{Gieseker wall}.
\subsection{A nef divisor on $M(\xi)$}\label{ssec-BayerMacriPlan} Let $(\mathcal A,Z) = \sigma_0 \in W_{\max}$ be a stability condition on the Gieseker wall. Bayer and Macr\`{i} \cite{BayerMacri2} construct a nef divisor $\ell_{\sigma_0}$ on $M(\xi)$ corresponding to $\sigma_0$. They also compute its class and describe geometrically the curves $C\subset M(\xi)$ with $C \cdot \ell_{\sigma_0} = 0$.
To describe the class of $\ell_{\sigma_0}$ in $\xi^\perp \cong \Pic M(\xi)$, consider the functional \begin{align*}K(\mathbb{P}^2)\otimes\mathbb{R} & \to \mathbb{R}\\ \xi' &\mapsto \Im\left( -\frac{Z(\xi')}{Z(\xi)}\right).\end{align*} Since the pairing $(\xi,\zeta) = \chi(\xi\otimes \zeta)$ is nondegenerate, we can write this functional as $(\zeta,-)$ for some unique $\zeta\in \xi^\perp$. In terms of the isomorphism $\xi^\perp \cong \Pic M(\xi)$, we have $\zeta = [\ell_{\sigma_0}].$ Considering $(\zeta,\ch \mathcal{O}_p)$ shows that $\zeta$ has negative rank. Furthermore, if $W_{\max} = W(\xi',\xi)$ (so that $Z(\xi')$ and $Z(\xi)$ are real multiples of one another), then $\zeta$ is a negative rank character in $(\xi')^\perp$. The ray in $N^1(M(\xi))$ determined by $\sigma_0$ depends only on $W_{\max}$, and not the particular choice of $\sigma_0$.
A curve $C\subset M(\xi)$ is orthogonal to $\ell_{\sigma_0}$ if and only if two general sheaves parameterized by $C$ are $S$-equivalent with respect to $\sigma_0$. This gives an effective criterion for determining when the Bayer-Macr\`i divisor $\ell_{\sigma_0}$ is an extremal nef divisor. In every case where we compute the ample cone of $M(\xi)$, the divisor $\ell_{\sigma_0}$ is in fact extremal.
\section{Admissible decompositions}\label{sec-hp}
In this section, we introduce the notion of an admissible decomposition of a Chern character of positive rank. Each such decomposition corresponds to a potential Bridgeland wall. In the cases when we can compute the ample cone, the Gieseker wall will correspond to a certain admissible decomposition.
\begin{definition}\label{def-admissible} Let $\xi$ be a stable Chern character of positive rank. A \emph{decomposition} of $\xi$ is a triple $\Xi = (\xi',\xi,\xi'')$ such that $\xi = \xi'+\xi''$. We say $\Xi$ is an \emph{admissible decomposition} if furthermore \begin{enumerate}[label=(D\arabic*)] \item \label{cond-Fstable} $\xi'$ is semistable, \item \label{cond-Qstable} $\xi''$ is stable, \item\label{cond-rank} $0 < \rk(\xi') \leq \rk(\xi)$, \item \label{cond-Fslope} $\mu(\xi') < \mu(\xi)$, and \item \label{cond-slopeDiff} if $\rk(\xi'')>0$, then $\mu(\xi'')-\mu(\xi')<3$. \end{enumerate} \end{definition}
\begin{remark} The Chern characters in an admissible decomposition $\Xi$ span a $2$-plane in $K(\mathbb{P}^2)$. We write $W(\Xi)$ for the potential Bridgeland wall where characters in this plane have the same slope.
Condition \ref{cond-Fstable} means that $\xi'$ is either semiexceptional or stable. We require $\xi''$ to be stable since this holds in all our examples and makes admissibility work better with respect to elementary modifications; see \S\ref{sec-elementaryMod}. \end{remark}
There are a couple of numerical properties of decompositions which will frequently arise.
\begin{definition}\label{def-numericalProps} Let $\Xi = (\xi',\xi,\xi'')$ be a decomposition. \begin{enumerate} \item $\Xi$ is \emph{coprime} if $\rk(\xi)$ and $c_1(\xi)$ are coprime.
\item $\Xi$ is \emph{torsion} if $\rk(\xi'')=0$, and \emph{torsion-free} otherwise. \end{enumerate} \end{definition}
The conditions in the definition of an admissible decomposition ensure that there is a well-behaved space of extensions of the form $$0\to F\to E\to Q\to 0$$ with $F\in M(\xi')$ and $Q\in M(\xi'')$.
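In the numerical arguments below we repeatedly use Riemann-Roch on $\mathbb{P}^2$ in the bilinear form $$\chi(\zeta,\theta) = \rk(\zeta)\rk(\theta)\bigl(P(\mu(\theta)-\mu(\zeta))-\Delta(\zeta)-\Delta(\theta)\bigr)$$ for Chern characters $\zeta,\theta$ of positive rank, where, as before, $P(x) = \frac{1}{2}(x^2+3x+2)$ is the Hilbert polynomial of $\mathcal{O}_{\mathbb{P}^2}$.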
\begin{lemma}\label{existenceOfExtensions} Let $\Xi = (\xi',\xi,\xi'')$ be an admissible torsion-free decomposition. We have $\chi(\xi'',\xi')<0$. In particular, for any $F\in M(\xi')$ and $Q\in M(\xi'')$ there are non-split extensions $$0\to F\to E \to Q \to 0.$$ Furthermore, $\Ext^1(Q,F)$ has the expected dimension $-\chi(\xi'',\xi')$ for any $F\in M(\xi')$ and $Q\in M(\xi'')$. \end{lemma} \begin{proof} From \ref{cond-Fslope} and the torsion-free hypothesis, we have $\mu(\xi)<\mu(\xi'')$. Let $F\in M(\xi')$ and $Q\in M(\xi'')$. By stability, $\Hom(Q,F) = 0$. Using Serre duality with condition \ref{cond-slopeDiff}, we have $\Ext^2(Q,F)=0$. Therefore $\ext^1(Q,F) = -\chi(\xi'',\xi')$ and $\chi(\xi'',\xi')\leq 0$.
To prove $\chi(\xi'',\xi')<0$, first suppose $\xi'$ is semiexceptional. Then $$\chi(\xi'',\xi')=\chi(\xi,\xi')-\chi(\xi',\xi')<\chi(\xi,\xi').$$ As in the previous paragraph, $\chi(\xi,\xi')\leq 0$, hence $\chi(\xi'',\xi')<0$.
A similar argument works if $\xi''$ is semiexceptional.
Assume that neither $\xi'$ nor $\xi''$ is semiexceptional. Then $-3<\mu(\xi')-\mu(\xi'')<0$ and $\Delta(\xi')+\Delta(\xi'')>1$. Since $P(x)<1$ for $-3< x < 0$, we conclude $\chi(\xi'',\xi')<0$ by the Riemann-Roch formula. \end{proof}
We now introduce a notion of stability for an admissible decomposition $\Xi$. Let $F_{s'}/S'$ (resp. $Q_{s''}/S''$) be a complete flat family of semistable sheaves with Chern character $\xi'$ (resp. $\xi''$), parameterized by a smooth and irreducible base variety. Since $\ext^1(Q_{s''},F_{s'})$ does not depend on $(s',s'')\in S'\times S''$, there is a projective bundle $S$ over $S'\times S''$ such that the fiber over a point $(s',s'')$ is $\mathbb{P}\Ext^1(Q_{s''},F_{s'})$. Then $S$ is smooth, irreducible, and it carries a universal extension sheaf $E_s/S$.
We wish to examine the stability properties of the general extension $E_s/S$. If $E_s$ is \mbox{$(\mu$-)(semi)stable} for some $s\in S$, then the general $E_s$ has the same stability property. Since the moduli spaces $M(\xi')$ and $M(\xi'')$ are irreducible, the general $E_s$ will be $(\mu$-)(semi)stable if and only if there exists some extension $$0\to F \to E\to Q\to 0$$ where $F\in M(\xi')$, $Q\in M(\xi'')$, and $E$ is $(\mu$-)(semi)stable. Since the families over $S'$ and $S''$ are complete, any such extension with $E$ $(\mu$-)(semi)stable is necessarily non-split and hence isomorphic to $E_s$ for some $s\in S$; thus we do not need to know in advance that $E$ is parameterized by a point of $S$.
\begin{definition}\label{def-stableTriple} Let $\Xi$ be an admissible decomposition. We say that $\Xi$ is \emph{generically} \mbox{$(\mu$-)}(semi)stable if there is some extension $$0\to F\to E\to Q\to 0$$ where $F\in M(\xi')$, $Q\in M(\xi'')$, and $E$ is $(\mu$-)(semi)stable. \end{definition}
\section{Extremal triples}\label{sec-extremal} We now introduce the decomposition of a Chern character $\xi$ which frequently corresponds to the primary edge of the ample cone of $M(\xi)$.
\begin{definition}\label{def-extremal} We call a triple $\Xi=(\xi',\xi,\xi'')$ of Chern characters \emph{extremal} if it is an admissible decomposition of $\xi$ with the following additional properties: \begin{enumerate}[label=(E\arabic*)] \item \label{cond-slopeClose} $\xi'$ and $\xi$ are \emph{slope-close}: we have $\mu(\xi') < \mu(\xi)$, and every rational number in the interval $(\mu(\xi'),\mu(\xi))$ has denominator larger than $\rk(\xi)$. \item \label{cond-discMinimal} $\xi'$ is \emph{discriminant-minimal}: if $\theta'$ is a stable Chern character with $0<\rk(\theta')\leq \rk(\xi)$ and $\mu(\theta') = \mu(\xi')$, then $\Delta(\theta')\geq \Delta(\xi')$. \item \label{cond-rankMinimal} $\xi'$ is \emph{rank-minimal}: if $\theta'$ is a stable Chern character with $\mu(\theta')=\mu(\xi')$ and $\Delta(\theta') = \Delta(\xi')$, then $\rk(\theta')\geq \rk(\xi')$. \end{enumerate} \end{definition}
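For instance, for $\xi = (2,\frac{1}{2},\frac{3}{8})$ the triple $\Xi = ((1,0,0),(2,\frac{1}{2},\frac{3}{8}),(1,1,1))$ is extremal: no rational number in $(0,\frac{1}{2})$ has denominator at most $2$, the character $\xi'=(1,0,0)=\ch \mathcal{O}_{\mathbb{P}^2}$ has the smallest discriminant and rank among stable characters of slope $0$ and rank at most $2$, and $\xi''=(1,1,1)$ is the (stable) Chern character of a twisted ideal sheaf $I_p(1)$.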
\begin{remark}\label{rem-extremalRemark} If $\Xi$ is an extremal triple, then it is uniquely determined by $\xi$. The wall $W(\Xi)$ thus also only depends on $\xi$. Not every stable character $\xi$ can be decomposed into an extremal triple $\Xi = (\xi',\xi,\xi'')$, but the vast majority can; see Lemma \ref{lem-extremalExist}.
Condition \ref{cond-discMinimal} in Definition \ref{def-extremal} is motivated by the formula for the center $(s,0)$ of $W(\Xi)$: $$s = \frac{\mu(\xi')+\mu(\xi)}{2}-\frac{\Delta(\xi')-\Delta(\xi)}{\mu(\xi')-\mu(\xi)}.$$ If $\Delta(\xi')$ decreases while the other invariants are held fixed, then the center of $W(\Xi)$ moves left. Correspondingly, the wall becomes larger. As we are searching for the largest walls, intuitively we should restrict our attention to triples with minimal $\Delta(\xi')$.
Similarly, condition \ref{cond-slopeClose} typically helps make the wall $W(\Xi)$ large. In the formula for $s$, the term $$-\frac{\Delta(\xi')-\Delta(\xi)}{\mu(\xi')-\mu(\xi)}$$ will dominate the expression if $\Delta(\xi)$ is sufficiently large and $\mu(\xi')$ is sufficiently close to $\mu(\xi)$.
Condition \ref{cond-rankMinimal} forces $\xi'$ to be stable, since semiexceptional characters are multiples of exceptional characters. \end{remark}
The next lemma shows the definition of an extremal triple is not vacuous. \begin{lemma}\label{lem-extremalExist} Let $\xi = (r,\mu,\Delta)$ be a stable Chern character, and suppose either \begin{enumerate} \item $\Delta$ is sufficiently large (depending on $r$ and $\mu$) or \item $r\leq 6$. \end{enumerate} Then there is a unique extremal triple $\Xi = (\xi',\xi,\xi'')$. \end{lemma} \begin{proof} Let $(r^\bullet, \mu^\bullet, \Delta^\bullet)$ denote the rank, slope and discriminant of $\xi^\bullet$. The Chern character $\xi'$ is uniquely determined by conditions \ref{cond-Fstable}, \ref{cond-rank}, and \ref{cond-slopeClose}-\ref{cond-rankMinimal}; it depends only on $r$ and $\mu$, and not $\Delta$. Set $\xi''=\xi-\xi'$, and observe that $r''$ and $\mu''$ depend only on $r$ and $\mu$. We must check that $\xi''$ is stable and $\mu''-\mu'<3$ if $r''>0$. If $r''=0$, then $c_1(\xi'')>0$, so stability is automatic.
Suppose $r''>0$. Let us show $\mu''-\mu'<3$. By \ref{cond-slopeClose} we have $\mu'\geq\mu-\frac{1}{r}$, so $$r''\mu''=r\mu - r'\mu'\leq (r-r')\mu+\frac{r'}{r}<r''\mu+1$$ and $$\mu''-\mu' < \mu+\frac{1}{r''}-\mu+\frac{1}{r} = \frac{1}{r''}+\frac{1}{r}\leq \frac{3}{2}.$$
If $r\leq 6$, we will see that $\xi''$ is stable in \S\ref{sec-smallRank}. Suppose $\Delta$ is sufficiently large. We have a relation $$r\Delta = r'\Delta'+r''\Delta''-\frac{r'r''}{2r}(\mu'-\mu'')^2.$$ The invariants $r',\mu',\Delta',r'',\mu''$ depend only on $r$ and $\mu$. By making $\Delta$ large, we can make $\Delta''$ as large as we want, and thus we can make $\xi''$ stable. \end{proof}
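For instance, for the triple $((1,0,0),(3,\frac{1}{3},\frac{5}{9}),(2,\frac{1}{2},\frac{7}{8}))$ the relation in the proof reads $3\cdot\frac{5}{9} = 1\cdot 0+2\cdot\frac{7}{8}-\frac{1\cdot 2}{2\cdot 3}\left(0-\frac{1}{2}\right)^2$, that is, $\frac{5}{3}=\frac{7}{4}-\frac{1}{12}$.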
It is easy to prove a weak stability result for extremal triples.
\begin{proposition}\label{prop-slopeSemistable} Let $\Xi=(\xi',\xi,\xi'')$ be an extremal torsion-free triple. Then $\Xi$ is generically $\mu$-semistable. \end{proposition} \begin{proof} By Lemma \ref{existenceOfExtensions}, there is a non-split extension $$0\to F \to E \to Q\to 0$$ with $F\in M^s(\xi')$ stable and $Q\in M(\xi'')$. We will show $E$ is $\mu$-semistable. Since $F$ and $Q$ are torsion-free, $E$ is torsion-free as well.
Suppose $E$ is not $\mu$-semistable. Then there is some surjection $E\to C$ with $\mu(C)<\mu(E)$ and $\rk(C)<\rk(E)$. By passing to a suitable quotient of $C$, we may assume $C$ is stable. Using slope-closeness \ref{cond-slopeClose}, we find $\mu(C)\leq \mu(F)$.
First assume $\mu(C)<\mu(F)$. By stability, the composition $F\to E\to C$ is zero, and thus $E\to C$ induces a map $Q\to C$. This map is zero by stability, from which we conclude $E\to C$ is zero, a contradiction.
Next assume $\mu(C)= \mu(F)$. If $\Delta(C)>\Delta(F)$, then we have an inequality $p_C<p_F$ of reduced Hilbert polynomials, so $F\to C$ is zero by stability and we conclude as in the previous paragraph. On the other hand, $\Delta(C)<\Delta(F)$ cannot occur by the minimality condition \ref{cond-discMinimal}.
Finally, suppose $\mu(C) = \mu(F)$ and $\Delta(C) = \Delta(F)$. Since $C$ and $F$ are both stable, any nonzero map $F\to C$ is an isomorphism. Composing $E\to C$ with the inverse isomorphism then gives a retraction of $F\to E$, which splits the sequence and contradicts our choice of a non-split extension. \end{proof}
The following corollary gives the first statement of Theorem \ref{thm-introSt} in the torsion-free case.
\begin{corollary}\label{cor-slopeStable} If $\Xi$ is a coprime, torsion-free, extremal triple, then it is generically $\mu$-stable. \end{corollary}
\section{Elementary modifications}\label{sec-elementaryMod} Many stability properties of an admissible decomposition $\Xi=(\xi',\xi,\xi'')$ are easier to understand when the discriminant $\Delta(\xi)$ is small. Elementary modifications allow us to reduce to the small discriminant case.
\begin{definition} Let $G$ be a coherent sheaf and let $G\to \mathcal{O}_p$ be a surjective homomorphism. Then the kernel $$0\to G'\to G\to \mathcal{O}_p\to 0$$ is called an \emph{elementary modification} of $G$. \end{definition}
If $G$ has positive rank, we observe the equalities $$\rk(G') = \rk(G) \qquad \mu(G') = \mu(G) \qquad \Delta(G') = \Delta(G) + \frac{1}{\rk(G)} \qquad \chi(G') =\chi(G)-1.$$ Indeed, $\ch(G') = \ch(G) - \ch(\mathcal{O}_p)$ only changes $\ch_2$ by $-1$, so the rank and slope are preserved, the discriminant increases by $\frac{1}{\rk(G)}$, and the Euler characteristic drops by $\chi(\mathcal{O}_p)=1$. The next lemma is immediate.
\begin{lemma} If $G$ is $\mu$-(semi)stable, then any elementary modification of $G$ is $\mu$-(semi)stable. \end{lemma}
\begin{warning} Elementary modifications do not generally preserve Gieseker (semi)stability. This is our reason for focusing on $\mu$-stability of extensions. \end{warning}
Given a short exact sequence of sheaves, there is a natural induced sequence involving compatible elementary modifications.
\begin{proposition-definition}\label{def-elementaryModSequence} Suppose $$0\to F \to E \to Q\to 0$$ is a short exact sequence of sheaves. Let $Q'$ be the elementary modification of $Q$ corresponding to a homomorphism $Q\to \mathcal{O}_p$, and let $E'$ be the elementary modification of $E$ corresponding to the composition $E\to Q\to \mathcal{O}_p$. Then there is a natural short exact sequence $$0\to F \to E'\to Q'\to 0.$$ This sequence is called an \emph{elementary modification} of the original sequence. \end{proposition-definition} \begin{proof} A straightforward argument shows that there is a natural commuting diagram $$\xymatrix{ &&0\ar[d]&0\ar[d]&\\ 0\ar[r]&F\ar@{=}[d]\ar[r]&E'\ar[d]\ar[r]&Q'\ar[d]\ar[r]&0\\ 0\ar[r]&F\ar[r]&E\ar[d]\ar[r]&Q\ar[d]\ar[r]&0\\ &&\mathcal{O}_p\ar[d]\ar@{=}[r]&\mathcal{O}_p\ar[d]&\\ &&0&0& }$$ with exact rows and columns. \end{proof}
We similarly extend the notion of elementary modifications to decompositions of Chern characters.
\begin{definition} Let $\Xi = (\xi',\xi,\xi'')$ be a decomposition. Let $\Theta = (\theta',\theta,\theta'')$ be the decomposition such that \begin{enumerate} \item $\theta'=\xi'$, \item $\theta$ and $\xi$ have the same rank and slope, and \item $\Delta(\theta) = \Delta(\xi) + \frac{1}{\rk(\xi)}$. \end{enumerate} We call $\Theta$ the \emph{elementary modification} of $\Xi$. If $\Xi$ is admissible, then $\Theta$ is admissible as well.
If $\Xi$ and $\Theta$ are admissible decompositions, we say $\Theta$ lies \emph{above} $\Xi$, and write $\Xi\preceq \Theta$, if conditions (1) and (2) are satisfied and $\Delta(\xi)\leq \Delta(\theta)$. Finally, $\Xi$ is \emph{minimal} if it is a minimal admissible decomposition with respect to $\preceq$. \end{definition}
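For instance, the elementary modification of the admissible triple $((1,0,0),(2,\frac{1}{2},\frac{3}{8}),(1,1,1))$ is $((1,0,0),(2,\frac{1}{2},\frac{7}{8}),(1,1,2))$, and iterating once more yields $((1,0,0),(2,\frac{1}{2},\frac{11}{8}),(1,1,3))$; each triple in this chain lies above the previous one.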
The next result follows from the integrality of the Euler characteristic and the Riemann-Roch formula.
\begin{lemma} Let $\Xi$ and $\Theta$ be admissible decompositions. Then $\Xi\preceq \Theta$ if and only if $\Theta$ is an iterated elementary modification of $\Xi$. \end{lemma}
Extremality is preserved by elementary modifications.
\begin{lemma} Suppose $\Xi$ and $\Theta$ are admissible decompositions with $\Xi\preceq \Theta$. If one decomposition is extremal, then the other is as well. \end{lemma}
Combining our results so far in this section, we obtain the following tool for proving results on generic $\mu$-stability of triples.
\begin{proposition}\label{prop-minimalReduction} Suppose $\Xi$ is a minimal admissible decomposition and that $\Xi$ is generically $\mu$-stable. Then any $\Theta$ which lies above $\Xi$ is also generically $\mu$-stable. \end{proposition}
\section{Stability of small rank extremal triples}\label{sec-smallRank}
The goal of this section is to prove the following theorem.
\begin{theorem}\label{thm-slopeCloseStable} Let $\Xi = (\xi',\xi,\xi'')$ be an extremal triple with $\rk(\xi)\leq 6$. Then $\Xi$ is generically $\mu$-stable. \end{theorem}
By Proposition \ref{prop-minimalReduction}, we only need to consider cases where $\Xi$ is minimal. We also assume $\Xi$ is torsion-free and defer to \S\ref{ssec-torsion} for the torsion case. By twisting, we may assume $0<\mu(\xi)\leq 1$. After these reductions, there are a relatively small number of triples to consider, which we list in Table \ref{table-slopeClose}. For each triple, we also indicate the strategy we will use to prove the triple is generically $\mu$-stable.
\begin{center} \renewcommand*{\arraystretch}{1.3} \begin{longtable}{cccccccccccc} \caption[]{The minimal, extremal, torsion-free triples $\Xi = (\xi',\xi,\xi'') = ((r',\mu',\Delta'),(r,\mu,\Delta),(r'',\mu'',\Delta''))$ which must be considered in Theorem \ref{thm-slopeCloseStable}.}\label{table-slopeClose}\\ \toprule $\xi'$ & $\xi$ & $\xi''$ && Strategy &$\qquad$ & $\xi'$ & $\xi$ & $\xi''$ && Strategy\\\midrule \endfirsthead \multicolumn{12}{l}{{\small \it continued from previous page}}\\ \toprule $\xi'$ & $\xi$ & $\xi''$ && Strategy &$\qquad$ & $\xi'$ & $\xi$ & $\xi''$ && Strategy\\\midrule \endhead \bottomrule \multicolumn{12}{r}{{\small \it continued on next page}} \\ \endfoot \bottomrule \endlastfoot $(1,0,0)$ & $(2,\frac{1}{2},\frac{3}{8})$ & $(1,1,1)$ && Coprime &&$(2,\frac{1}{2},\frac{3}{8})$&$(5,\frac{3}{5},\frac{12}{25})$&$(3,\frac{2}{3},\frac{5}{9})$ &&Coprime\\ $(1,0,0)$ & $(3,\frac{1}{3},\frac{5}{9})$ & $(2,\frac{1}{2},\frac{7}{8})$ && Coprime &&$(4,\frac{3}{4},\frac{21}{32})$ & $(5,\frac{4}{5},\frac{18}{25})$ & (1,1,1) && Coprime\\ $(2,\frac{1}{2},\frac{3}{8})$ & $(3,\frac{2}{3},\frac{5}{9})$ & $(1,1,1)$ && Coprime &&$(1,0,0)$ & $(6,\frac{1}{6},\frac{55}{72})$ & $(5,\frac{1}{5},\frac{23}{25})$ && Coprime\\ $(1,0,0)$ & $(4,\frac{1}{4},\frac{21}{32})$ & $(3,\frac{1}{3},\frac{8}{9})$ && Coprime && $(4,\frac{1}{4},\frac{21}{32})$ & $(6,\frac{1}{3},\frac{5}{9})$ & $(2,\frac{1}{2},\frac{3}{8})$ && Complete\\ $(3,\frac{1}{3},\frac{5}{9})$ & $(4,\frac{1}{2},\frac{5}{8})$ & $(1,1,1)$ && Complete && $(5,\frac{2}{5},\frac{12}{25})$ & $(6,\frac{1}{2},\frac{17}{24})$ & $(1,1,2)$ && Prop. \ref{prop-rank6adHoc}\\ $(3,\frac{2}{3},\frac{5}{9})$ & $(4,\frac{3}{4},\frac{21}{32})$ & $(1,1,1)$ && Coprime && $(5,\frac{3}{5},\frac{12}{25})$ & $(6,\frac{2}{3},\frac{5}{9})$ & $(1,1,1)$ && Complete\\ $(1,0,0)$ & $(5,\frac{1}{5},\frac{18}{25})$ & $(4,\frac{1}{4},\frac{29}{32})$ && Coprime && $(5,\frac{4}{5},\frac{18}{25})$ & $(6,\frac{5}{6},\frac{55}{72})$ & $(1,1,1)$ && Coprime\\ $(3,\frac{1}{3},\frac{5}{9})$ &$(5,\frac{2}{5},\frac{12}{25})$ & $(2,\frac{1}{2},\frac{3}{8})$ && Coprime \\ \end{longtable} \end{center}
Observing that $\xi''$ is always stable in Table \ref{table-slopeClose} completes the proof of Lemma \ref{lem-extremalExist} as promised. The triples labelled ``Coprime'' are all generically $\mu$-stable by Corollary \ref{cor-slopeStable}. We turn next to the triples labelled ``Complete.''
\begin{definition} An admissible decomposition $\Xi$ is called \emph{complete} if the general $E\in M(\xi)$ can be expressed as an extension $$0\to F \to E \to Q\to 0$$ with $F\in M(\xi')$ and $Q\in M(\xi'')$. \end{definition}
\begin{remark} Suppose $\Xi$ is admissible and generically semistable. Recall the universal extension sheaf $E_s/S$ discussed preceding Definition \ref{def-stableTriple}. If $U\subset S$ is the open subset parameterizing semistable sheaves, then $\Xi$ is complete if and only if the moduli map $U\to M(\xi)$ is dominant. By generic smoothness, $E_s/U$ is a complete family of semistable sheaves over a potentially smaller dense open subset. \end{remark}
If $\xi$ is stable, then the general sheaf in $M(\xi)$ is $\mu$-stable by a result of Dr\'{e}zet and Le Potier \cite[4.12]{DLP}. Thus if $\Xi$ is complete, then $\Xi$ is generically $\mu$-stable.
\begin{proposition}\label{prop-completeTriples} Let $\Xi$ be one of the three triples in Table \ref{table-slopeClose} labelled ``Complete.'' Then $\Xi$ is complete, and in particular generically $\mu$-stable. \end{proposition} \begin{proof} First suppose $\Xi=(\xi',\xi,\xi'')$ is one of $$((3,\tfrac{1}{3},\tfrac{5}{9}),(4,\tfrac{1}{2},\tfrac{5}{8}),(1,1,1)) \qquad \textrm{or} \qquad ((4,\tfrac{1}{4},\tfrac{21}{32}),(6,\tfrac{1}{3},\tfrac{5}{9}),(2,\tfrac{1}{2},\tfrac{3}{8})).$$ Let $E$ be a $\mu$-stable sheaf of character $\xi$, and let $Q\in M(\xi'')$ be semistable. We have $\chi(E,Q)>0$ in either case, which implies $\hom(E,Q)>0$ by stability. Pick a nonzero homomorphism $f:E\to Q$, and let $R\subset Q$ be the image of $f$. By stability considerations, $R$ must have the same rank and slope as $Q$, and $\Delta(R)\geq \Delta(Q)$. Letting $F\subset E$ be the kernel of $f$, we find that $\rk(F) = \rk(\xi')$, $\mu(F)=\mu(\xi')$, and $\Delta(F)\leq \Delta(\xi')$, with equality if and only if $f$ is surjective. Furthermore, $F$ is $\mu$-semistable. Indeed, if there is a subsheaf $G\subset F$ with $\mu(G)>\mu(F)$, then $\mu(G)\geq \mu(E)$ by slope-closeness \ref{cond-slopeClose}, so $G\subset E$ violates $\mu$-stability of $E$. Then discriminant minimality \ref{cond-discMinimal} forces $\ch F = \xi'$. Furthermore, since $\rk(\xi')$ and $c_1(\xi')$ are coprime, $F$ is actually semistable. Thus $E$ is expressed as an extension $$0\to F\to E\to Q\to 0$$ of semistable sheaves as required.
For the final triple $((5,\frac{3}{5},\frac{12}{25}),(6,\frac{2}{3},\frac{5}{9}),(1,1,1))$ a slight modification to the previous argument is needed. Fix a $\mu$-stable sheaf $E$ of character $\xi$. This time $\chi(\xi,\xi'')=0$, so the expectation is that if $Q\in M(\xi'')$ is general, then there is no nonzero map $E\to Q$. Consider the locus $$D_E = \{Q\in M(\xi''):\hom(E,Q)\neq 0\}.$$ Then either $D_E = M(\xi'')$ or $D_E$ is an effective divisor, in which case we can compute its class to show $D_E$ is nonempty. Either way, there is some $Q\in M(\xi'')$ which admits a nonzero homomorphism $E\to Q$. The argument can now proceed as in the previous cases. \end{proof}
The next proposition treats the last remaining case, completing the proof of Theorem \ref{thm-slopeCloseStable}.
\begin{proposition}\label{prop-rank6adHoc} The triple $\Xi =(\xi',\xi,\xi'')=((5,\frac{2}{5},\frac{12}{25}),(6,\frac{1}{2},\frac{17}{24}), (1,1,2))$ is generically $\mu$-stable. \end{proposition} \begin{proof} Observe that $\xi'$ is the Chern character of the exceptional bundle $F = E_{2/5}$ and $\xi''$ is the Chern character of an ideal sheaf $Q=I_Z(1)$, where $Z$ has degree $2$. Let $Q_{s''}/M(\xi'')$ be the universal family. Then the projective bundle $S$ over $M(\xi'')$ with fibers $\mathbb{P}\Ext^1(Q_{s''},F)$ is smooth and irreducible of dimension $$\dim S = \dim M(\xi'')-\chi(\xi'',\xi')-1=4+11-1=14,$$ and there is a universal extension $E_s/S$. Every $E_s$ is $\mu$-semistable by Proposition \ref{prop-slopeSemistable}.
A simple computation shows $\Hom(F,Q_{s''})=0$ for every $Q_{s''}$. If $E$ is any sheaf which sits as an extension $$0\to F\to E\to Q_{s''}\to 0,$$ then we apply $\Hom(F,-)$ to see $\Hom(F,E) \cong \Hom(F,F) = \mathbb{C}$. Thus the homomorphism $F\to E$ is unique up to scalars, the sheaf $Q_{s''}$ is determined as the cokernel, and since $Q_{s''}$ is simple the corresponding extension class in $\Ext^1(Q_{s''},F)$ is determined up to scalars. We find that distinct points of $S$ parameterize non-isomorphic sheaves. A straightforward computation further shows that the Kodaira-Spencer map $T_sS \to \Ext^1(E_s,E_s)$ is injective for every $s\in S$.
We now proceed to show that the general $E_s$ also satisfies stronger notions of stability.
\emph{Step 1: the general $E_s$ is semistable}. If $E_s$ is not semistable, it has a Harder-Narasimhan filtration of length $\ell\geq 2$, and all factors have slope $\frac{1}{2}$. For each potential set of numerical invariants of a Harder-Narasimhan filtration, we check that the corresponding Shatz stratum, that is, the locus of $s\in S$ for which the Harder-Narasimhan filtration of $E_s$ has invariants of that form, has positive codimension in $S$.
There are only a handful of potential numerical invariants of the filtration. A non-semistable $E_s$ has a semistable subsheaf $G$ with $\mu(G) = \mu(E) = \frac{1}{2}$ and $\Delta(G) < \Delta(E) = \frac{17}{24}$. Then the Chern character of $G$ must be one of \begin{equation}\tag{$\ast$} (2,\tfrac{1}{2},\tfrac{3}{8}), \qquad (4,\tfrac{1}{2},\tfrac{3}{8}),
\qquad \textrm{or} \qquad (4,\tfrac{1}{2},\tfrac{5}{8}).\end{equation} We can rule out the first two cases immediately by an ad hoc argument. In either of these cases $E$ has a subsheaf isomorphic to $T_{\mathbb{P}^2}(-1)$. Then there is a sequence $$0\to T_{\mathbb{P}^2}(-1)\to E\to R\to 0.$$ Applying $\Hom(F,-)$, we see $\Hom(F,T_{\mathbb{P}^2}(-1))$ injects into $\Hom(F,E)=\mathbb{C}$. But $\chi(F,T_{\mathbb{P}^2}(-1))=3$ and $\Ext^2(F,T_{\mathbb{P}^2}(-1))$ vanishes by stability and Serre duality, so $\hom(F,T_{\mathbb{P}^2}(-1))\geq 3$, which is absurd.
Thus the only Shatz stratum we must consider is the locus of sheaves with a filtration $$0 \subset G_1 \subset G_2 = E_s$$ having $\ch \mathrm{gr}_1 = \zeta_1 := (4,\frac{1}{2},\frac{5}{8})$ and $\ch \mathrm{gr}_2 := \zeta_2 = (2,\frac{1}{2},\frac{7}{8})$. Let $$\Sigma = \Flag(E/S;\zeta_1,\zeta_2) \xrightarrow{\pi} S$$ be the relative flag variety parameterizing sheaves with a filtration of this form. By the uniqueness of the Harder-Narasimhan filtration, $\pi$ is injective, and its image is the Shatz stratum. The differential of $\pi$ at a point $t = (s,G_1)\in \Sigma$ can be analyzed via the exact sequence $$0 \to \Ext^0_+(E_s,E_s)\to T_t \Sigma\xrightarrow{T_t\pi} T_s S\xrightarrow{\omega_+} \Ext_+^1(E_s,E_s).$$ We have $\Ext_+^0(E_s,E_s)=0$ by \cite[Proposition 15.3.3]{LePotierLectures}, so $T_t\pi$ is injective and $\pi$ is an immersion. The codimension of the Shatz stratum near $s$ is at least $\rk \omega_+$.
The map $\omega_+$ is the composition $T_sS\to \Ext^1(E_s,E_s)\to \Ext^1_+(E_s,E_s)$ of the Kodaira-Spencer map with the canonical map from the long exact sequence of $\Ext_{\pm}$. The Kodaira-Spencer map is injective, and $\Ext^1(E_s,E_s)\to \Ext_+^1(E_s,E_s)$ is surjective since $\Ext_-^2(E_s,E_s)=0$. We have $$\dim T_sS = 14, \qquad \ext^1(E_s,E_s)=16, \qquad \textrm{and} \qquad \ext^1_+(E_s,E_s)=-\chi(\mathrm{gr}_1,\mathrm{gr}_2) = 4,$$ so the image of $T_sS$ in $\Ext^1(E_s,E_s)$ is $14$-dimensional while the kernel of the surjection onto $\Ext^1_+(E_s,E_s)$ is only $12$-dimensional; hence $\rk \omega_+\geq 14-12=2$. Therefore the Shatz stratum is a proper subvariety of $S$. We conclude $\Xi$ is generically semistable.
\emph{Step 2: the general $E_s$ is $\mu$-stable}. Note that a semistable sheaf in $M(\xi)$ is automatically stable. Then the moduli map $S\to M^s(\xi)$ is injective, and its image has codimension $2$ in $M^s(\xi)$.
If a sheaf $E\in M^s(\xi)$ is not $\mu$-stable, then there is a filtration $$0\subset G_1 \subset G_2 = E$$ such that the quotients $\mathrm{gr}_i$ are semistable of slope $\frac{1}{2}$ and $\Delta(\mathrm{gr}_1) > \Delta(E) > \Delta(\mathrm{gr}_2)$ (see the proof of \cite[Theorem 4.11]{DLP}). Then as in the previous step $\zeta_2 = \ch(\mathrm{gr}_2)$ is one of the characters $(\ast)$, and $\zeta_1 = \ch(\mathrm{gr}_1)$ is determined by $\zeta_2$. For each of the three possible filtrations, the Shatz stratum in $M^s(\xi)$ of sheaves with a filtration of the given form has codimension at least $\ext^1_+(E,E) = -\chi(\mathrm{gr}_1,\mathrm{gr}_2)$.
When $\zeta_2 = (4,\frac{1}{2},\frac{3}{8})$ we compute $-\chi(\mathrm{gr}_1,\mathrm{gr}_2) = 6$, and when $\zeta_2 = (4,\frac{1}{2},\frac{5}{8})$ we have $-\chi(\mathrm{gr}_1,\mathrm{gr}_2) = 4$. In particular, the corresponding Shatz strata have codimension bigger than $2$. On the other hand, for $\zeta_2 = (2,\frac{1}{2},\frac{3}{8})$ we only find the stratum has codimension at least $2$, and it is a priori possible that it contains the image of $S\to M^s(\xi)$.
To get around this final problem, we must show that the general sheaf $E_s$ parameterized by $S$ does not admit a nonzero map $E_s\to T_{\mathbb{P}^2}(-1)$. This can be done by an explicit calculation. Put $Q = I_Z(1)$, where $Z = V(x,y^2)$. By stability, $\Hom(Q,T(-1))=0$, so there is an exact sequence $$\xymatrix{ 0 \ar[r]& \Hom(E_s,T_{\mathbb{P}^2}(-1))\ar[r] & \Hom(F,T_{\mathbb{P}^2}(-1))\ar[r]^{f} \ar@{=}[d] & \Ext^1(Q,T_{\mathbb{P}^2}(-1))\ar@{=}[d]\\ &&\mathbb{C}^3&\mathbb{C}^{4}&}$$ and we must see $f$ is injective. The map $f$ is the contraction of the canonical map $$\Ext^1(Q,F) \otimes \Hom(F,T_{\mathbb{P}^2}(-1))\to \Ext^1(Q,T_{\mathbb{P}^2}(-1))$$ corresponding to the extension class of $E$ in $\Ext^1(Q,F)$. This canonical map can be explicitly computed using the standard resolutions $$\xymatrix@R=1mm{ 0 \ar[r] &\mathcal{O}_{\mathbb{P}^2}(-2)\ar[r] &\mathcal{O}_{\mathbb{P}^2}^6 \ar[r] &F \ar[r] &0\\ 0\ar[r] &\mathcal{O}_{\mathbb{P}^2}(-2) \ar[r]&\mathcal{O}_{\mathbb{P}^2}(-1)\oplus \mathcal{O}_{\mathbb{P}^2} \ar[r] & Q\ar[r]& 0\\ 0\ar[r]& \mathcal{O}_{\mathbb{P}^2}(-1)\ar[r]& \mathcal{O}_{\mathbb{P}^2}^3 \ar[r]& T_{\mathbb{P}^2}(-1)\ar[r]& 0}$$ with the special form of $Q$ simplifying the calculation. Injectivity of $f$ for a general $E_s$ follows easily from this computation. \end{proof}
\section{Curves of extensions}\label{sec-curves}
\subsection{General results} Let $F$ and $Q$ be sheaves, and suppose the general extension $E$ of $Q$ by $F$ is semistable of Chern character $\xi$. In this section, we study the moduli map $$\mathbb{P} \Ext^1(Q,F)\dashrightarrow M(\xi).$$ In particular, we would like to be able to show this map is nonconstant.
\begin{definition} Let $\Xi$ be a generically semistable admissible decomposition. We say $\Xi$ \emph{gives curves} if for a general $F\in M(\xi')$ and $Q\in M(\xi'')$, the map $\mathbb{P} \Ext^1(Q,F)\dashrightarrow M(\xi)$ is nonconstant. \end{definition}
There are three essential ways that $\Xi$ could fail to give curves. \begin{enumerate} \item If $-\chi(\xi'',\xi') = 1$, then $\mathbb{P}\Ext^1(Q,F)$ is a point. \item Sheaves parameterized by $\mathbb{P} \Ext^1(Q,F)$ might all be strictly semistable and $S$-equivalent. \item The sheaves parameterized by $\mathbb{P}\Ext^1(Q,F)$ might all be isomorphic. \end{enumerate} Possibility (1) is easy to check for any given triple. If $\Xi$ is generically stable, then possibility (2) cannot arise when $F$ and $Q$ are general, so this is also easy to rule out. The third case requires the most work to deal with.
\begin{lemma}\label{lem-nonIsoExtensions} Let $F$ and $Q$ be simple sheaves with $\Hom(F,Q) = 0$. Then distinct points of $\mathbb{P}\Ext^1(Q,F)$ parameterize nonisomorphic sheaves. \end{lemma} \begin{proof} Suppose $E$ is a sheaf which can be realized as an extension $$0\to F\to E\to Q\to 0.$$ Since $F$ is simple and $\Hom(F,Q) = 0$ we find $\hom(F,E) = 1$. Similarly, since $Q$ is simple and $\Hom(F,Q)=0$, we have $\hom(E,Q)=1$. This means that the corresponding class in $\mathbb{P}\Ext^1(Q,F)$ depends only on the isomorphism class of $E$. \end{proof}
The lemma gives us a simple criterion for proving a triple $\Xi$ gives curves.
\begin{proposition}\label{prop-curveCriterion} Let $\Xi=(\xi',\xi,\xi'')$ be an admissible, generically stable triple, and assume $\xi'$ is stable. Suppose either \begin{enumerate} \item $\Xi$ is not minimal, or \item $-\chi(\xi'',\xi') \geq 2$. \end{enumerate} If $\Hom(F,Q)=0$ for a general $F\in M(\xi')$ and $Q\in M(\xi'')$, then $\Xi$ gives curves. \end{proposition} \begin{proof} If $\Xi$ is not minimal, then $-\chi(\xi'',\xi')\geq 2$ holds automatically. Indeed, since $\Xi$ is not minimal it is an elementary modification of another admissible triple $\Theta = (\theta',\theta,\theta'')$. Then $\chi(\xi'',\xi')<\chi(\theta'',\theta')<0$ by the Riemann-Roch formula and Lemma \ref{existenceOfExtensions}.
Since $\xi'$ and $\xi''$ are stable, we can choose stable sheaves $F\in M(\xi')$ and $Q\in M(\xi'')$ such that $\Hom(F,Q) = 0$ and the general extension of $Q$ by $F$ is stable. By Lemma \ref{lem-nonIsoExtensions}, $\Xi$ gives curves. \end{proof}
We also observe that elementary modifications behave well with respect to the notion of giving curves.
\begin{lemma}\label{lem-curveBump} Suppose $\Xi$ is admissible and generically $\mu$-stable. If $\Xi \preceq \Theta$ and $\Xi$ gives curves, then $\Theta$ gives curves. \end{lemma} \begin{proof} Suppose $\Theta$ is obtained from $\Xi$ by a single elementary modification. Let $F\in M(\xi')$ and $Q\in M(\xi'')$ be general. Take $U \subset \mathbb{P}\Ext^1(Q,F)$ to be the dense open subset parameterizing $\mu$-stable sheaves $E$ which are locally free at a general fixed point $p\in \Supp Q$. Let $Q\to \mathcal{O}_p$ be a surjective homomorphism. Given any extension $$0\to F \to E \to Q \to 0$$ corresponding to a point of $U$, we get an exact sequence of compatible elementary modifications $$0\to F\to E'\to Q'\to 0$$ as in Definition \ref{def-elementaryModSequence}. As $E$ can be recovered from $E'$ and the map $U\to M(\xi)$ is nonconstant, we conclude that the map $\mathbb{P}\Ext^1(Q',F)\dashrightarrow M(\theta)$ is nonconstant. Thus $\Theta$ gives curves. \end{proof}
\subsection{Curves from coprime triples with large discriminant} Our next result provides the dual curves we will need to prove Theorem \ref{thm-asymptotic}.
\begin{theorem}\label{thm-curves} Let $\Xi = (\xi',\xi,\xi'')$ be a coprime extremal triple, and suppose $\Delta(\xi)$ is sufficiently large, depending on $\rk(\xi)$ and $\mu(\xi)$. Then $\Xi$ gives curves. \end{theorem} \begin{proof} The triple $\Xi$ is not minimal since $\Delta(\xi)$ is large. By Proposition \ref{prop-curveCriterion}, we only need to show that if $F\in M(\xi')$ and $Q\in M(\xi'')$ are general and $\Delta(\xi)$ is sufficiently large, then $\Hom(F,Q) = 0$. Fix a general $F\in M(\xi')$ and $Q\in M(\xi'')$. If $\Hom(F,Q)\neq 0$, choose a nonzero homomorphism $F\to Q$. We can find a surjective homomorphism $Q\to \mathcal{O}_p$ such that $F\to Q\to \mathcal{O}_p$ is also surjective. Then applying $\Hom(F,-)$ to the elementary modification sequence $$0\to Q'\to Q\to \mathcal{O}_p\to 0$$ we find that $\Hom(F,Q')$ is a proper subspace of $\Hom(F,Q)$. Repeating this process, we can find some $\Theta \succeq \Xi$ such that $\Hom(F,Q)=0$ for general $F\in M(\theta')$ and $Q\in M(\theta'')$. Then $\Theta$ gives curves, and by Lemma \ref{lem-curveBump} any $\Lambda \succeq \Theta$ also gives curves. \end{proof}
\subsection{Curves from small rank triples} We now discuss extremal curves in the moduli space $M(\xi)$ when the rank is small. For all but a handful of characters $\xi$ we can apply the next theorem.
\begin{theorem}\label{thm-smallRankCurves}\label{thm-curvessmall} Let $\Xi = (\xi',\xi,\xi'')$ be an extremal triple with $\rk(\xi)\leq 6$. Suppose $\chi(\xi',\xi'')\leq 0$. Then $\Xi$ gives curves. \end{theorem}
\begin{proof} As with the proof of Theorem \ref{thm-slopeCloseStable}, we assume $0<\mu(\xi)\leq 1$. By Lemma \ref{lem-curveBump}, it is enough to consider triples $\Xi$ such that any admissible $\Theta$ with $\Theta \prec \Xi$ has $\chi(\theta',\theta'')>0$. We also assume $\Xi$ is torsion-free, and handle the torsion case in \S\ref{ssec-torsion}. We list the relevant triples together with $\chi(\xi',\xi'')$ in Table \ref{table-curves}.
\begin{center} \renewcommand*{\arraystretch}{1.3} \begin{longtable}{cccccccccccc} \caption[]{Triples to be considered for the proof of Theorem \ref{thm-smallRankCurves}.}\label{table-curves}\\ \toprule $\xi'$ & $\xi$ & $\xi''$ && $\chi(\xi',\xi'')$ &$\qquad$ & $\xi'$ & $\xi$ & $\xi''$ && $\chi(\xi',\xi'')$\\\midrule \endfirsthead \multicolumn{12}{l}{{\small \it continued from previous page}}\\ \toprule $\xi'$ & $\xi$ & $\xi''$ && $\chi(\xi',\xi'')$ &$\qquad$ & $\xi'$ & $\xi$ & $\xi''$ && $\chi(\xi',\xi'')$\\\midrule \endhead \bottomrule \multicolumn{12}{r}{{\small \it continued on next page}} \\ \endfoot \bottomrule \endlastfoot $(1,0,0)$ & $(2,\frac{1}{2},\frac{11}{8})$ & $(1,1,3)$ && $0$ &&$(2,\frac{1}{2},\frac{3}{8})$&$(5,\frac{3}{5},\frac{17}{25})$&$(3,\frac{2}{3},\frac{8}{9})$ &&0\\ $(1,0,0)$ & $(3,\frac{1}{3},\frac{11}{9})$ & $(2,\frac{1}{2},\frac{15}{8})$ && $0$ &&$(4,\frac{3}{4},\frac{21}{32})$ & $(5,\frac{4}{5},\frac{18}{25})$ & (1,1,1) && $-1$\\ $(2,\frac{1}{2},\frac{3}{8})$ & $(3,\frac{2}{3},\frac{8}{9})$ & $(1,1,2)$ && $-1$ &&$(1,0,0)$ & $(6,\frac{1}{6},\frac{79}{72})$ & $(5,\frac{1}{5},\frac{33}{25})$ && 0\\ $(1,0,0)$ & $(4,\frac{1}{4},\frac{37}{32})$ & $(3,\frac{1}{3},\frac{14}{9})$ && 0 && $(4,\frac{1}{4},\frac{21}{32})$ & $(6,\frac{1}{3},\frac{13}{18})$ & $(2,\frac{1}{2},\frac{7}{8})$ && $-1$\\ $(3,\frac{1}{3},\frac{5}{9})$ & $(4,\frac{1}{2},\frac{7}{8})$ & $(1,1,2)$ && $-1$ && $(5,\frac{2}{5},\frac{12}{25})$ & $(6,\frac{1}{2},\frac{17}{24})$ & $(1,1,2)$ && $-2$\\ $(3,\frac{2}{3},\frac{5}{9})$ & $(4,\frac{3}{4},\frac{21}{32})$ & $(1,1,1)$ && 0 && $(5,\frac{3}{5},\frac{12}{25})$ & $(6,\frac{2}{3},\frac{13}{18})$ & $(1,1,2)$ && $-4$\\ $(1,0,0)$ & $(5,\frac{1}{5},\frac{28}{25})$ & $(4,\frac{1}{4},\frac{45}{32})$ && 0 && $(5,\frac{4}{5},\frac{18}{25})$ & $(6,\frac{5}{6},\frac{55}{72})$ & $(1,1,1)$ && $-2$\\ $(3,\frac{1}{3},\frac{5}{9})$ &$(5,\frac{2}{5},\frac{17}{25})$ & $(2,\frac{1}{2},\frac{7}{8})$ && $-1$ \\ \end{longtable} \end{center}
Every triple in the table satisfies $\chi(\xi'',\xi')\leq -2$. To apply Proposition \ref{prop-curveCriterion}, we need to see that if $F\in M(\xi')$ and $Q\in M(\xi'')$ are general, then $\Hom(F,Q)= 0$. This can easily be checked case by case via standard sequences or by Macaulay2. We omit the details.\end{proof}
\subsection{Sporadic small rank triples}\label{ssec-sporadic} Here we discuss the few sporadic Chern characters not addressed by Theorem \ref{thm-smallRankCurves}. Suppose $\xi$ has positive height and $\rk(\xi)\leq 6$, and let $\Xi = (\xi',\xi,\xi'')$ be extremal. If $\Xi$ is torsion-free and $\chi(\xi',\xi'')>0$, then $\xi' = (1,0,0)$ and $$\xi\in \{(2,\tfrac{1}{2},\tfrac{7}{8}),(3,\tfrac{1}{3},\tfrac{8}{9}),(4,\tfrac{1}{4},\tfrac{29}{32}),(5,\tfrac{1}{5},\tfrac{23}{25}),(6,\tfrac{1}{6},\tfrac{67}{72})\}.$$ That is, $$\xi = (r,\tfrac{1}{r},P(-\tfrac{1}{r})+\tfrac{1}{r})$$ for some $r$ with $2\leq r \leq 6$. In this case, we have $\chi(\xi',\xi) = 2$, so the general sheaf $E\in M(\xi)$ admits \emph{two} maps from $\mathcal{O}_{\mathbb{P}^2}$. Along the wall $W(\Xi)$, the destabilizing subobject of $E$ should therefore be $\mathcal{O}_{\mathbb{P}^2}^2$ instead of $\mathcal{O}_{\mathbb{P}^2}$. Furthermore, assuming there are sheaves $E\in M(\xi)$ which are Bridgeland stable just outside $W(\Xi)$, the wall $W(\Xi)$ must be the collapsing wall. We will show in the next section that $W(\Xi)$ is actually the Gieseker wall. This implies that the primary edges of the ample and effective cones of divisors on $M(\xi)$ coincide. Our study of the effective cone in \cite{CoskunHuizengaWoolf} easily implies the next result.
\begin{proposition}\label{prop-sporadic} Let $r\geq 2$, and let $\Xi=(\xi',\xi,\xi'')$ be the admissible decomposition with $\xi' = (2,0,0)$ and $\xi = (r,\tfrac{1}{r},P(-\tfrac{1}{r})+\tfrac{1}{r})$. Then $\Xi$ is complete, and it gives curves. \end{proposition}
While we had to modify the original extremal triple $\Xi$ in order to make it give curves, note that the wall $W(\Xi)$ is unchanged by this modification. When showing that $W(\Xi)$ is the largest wall in \S\ref{sec-ample}, we will not need to handle these cases separately.
In the case of the Chern character $\xi = (6,\frac{1}{3},\frac{13}{18})$, Theorem \ref{thm-smallRankCurves} shows that the corresponding extremal triple $\Xi$ gives curves. However, the wall $W(\Xi)$ is actually empty. In this case $\chi(\mathcal{O}_{\mathbb{P}^2},E) = 5$ for $E\in M(\xi)$ and the Chern character $\xi' = (5,0,0)$ will correspond to the primary edges of both the effective and ample cones. Again, curves dual to this edge of the effective cone are given by \cite{CoskunHuizengaWoolf}.
\begin{proposition}\label{prop-sporadic2} The admissible decomposition $\Xi = ((5,0,0),(6,\frac{1}{3},\frac{13}{18}),(1,2,6))$ is complete and gives curves. \end{proposition}
\subsection{Torsion triples}\label{ssec-torsion}
Let $\Xi=(\xi',\xi,\xi'')$ be an extremal torsion triple, and let $r=\rk(\xi) = \rk(\xi')$. We have $\mu(\xi)-\mu(\xi') = \frac{1}{r}$ by the slope-closeness condition \ref{cond-slopeClose}. Write $$\mu(\xi') = \frac{a}{b} \qquad \textrm{and}\qquad \mu(\xi) = \frac{c}{d}$$ in lowest terms. The numbers $\mu(\xi')$ and $\mu(\xi)$ are consecutive terms in the \emph{Farey sequence} of order $r$, so $\mu(\xi) - \mu(\xi') = \frac{1}{bd}$ and we deduce that $bd=r$. Also, the \emph{mediant} $$\frac{a+c}{b+d}$$ must have denominator $b+d$ larger than $r$. The two conditions $bd = r$ and $b+d>r$ together imply that either $b=1$ or $d=1$, since $b,d\geq 2$ would give $b+d\leq bd=r$. That is, either $\mu(\xi')$ or $\mu(\xi)$ is an integer.
If $\mu(\xi')$ is an integer, then by discriminant minimality \ref{cond-discMinimal} and rank minimality \ref{cond-rankMinimal} we have $\xi' = (1,\mu(\xi'),0)$. Thus $r=1$, and in every case $\mu(\xi)$ is also an integer.
We may assume $\mu(\xi) = 1$. Consider the triple $\Xi$ with $$\xi' = (r,1-\tfrac{1}{r},P(-\tfrac{1}{r})) \qquad \textrm{and} \qquad \xi = (r,1,1).$$ The character $\xi''$ is then $\ch \mathcal{O}_L$, where $L\subset \mathbb{P}^2$ is a line. We have $\chi(\xi,\xi'')=0$, and $\Xi$ is complete (hence generically $\mu$-stable) by a similar argument to Proposition \ref{prop-completeTriples}. If $r\geq 2$, then $$\dim M(\xi')+\dim M(\xi'')=(r^2-3r+2)+2< r^2+1 = \dim M(\xi),$$ so $\Xi$ must give curves. When $r=1$, for any $p\in L$ there is a sequence $$0\to \mathcal{O}_{\mathbb{P}^2}\to I_p(1) \to \mathcal{O}_L\to 0,$$ so $\Xi$ gives curves when $r=1$ as well. Applying elementary modifications, we conclude our discussion with the following.
\begin{proposition} Let $\Xi = (\xi',\xi,\xi'')$ be a torsion extremal triple. If $\xi$ is not the Chern character of a line bundle, then $\Xi$ is generically $\mu$-stable and gives curves. \end{proposition}
\section{The ample cone}\label{sec-ample}
\subsection{Notation} We begin by fixing notation for the rest of the paper. Let $\xi$ be a stable Chern character of positive height. We assume one of the following three hypotheses holds:
\begin{enumerate}[label=(H\arabic*)] \item \label{hyp-asymptotic} $\rk(\xi)$ and $c_1(\xi)$ are coprime and $\Delta(\xi)$ is sufficiently large, \item \label{hyp-smallRank} $\rk(\xi) \leq 6$ and $\xi$ is not a twist of $(6,\frac{1}{3},\frac{13}{18})$, or \item \label{hyp-exceptionalCase} $\xi = (6,\frac{1}{3},\frac{13}{18})$. \end{enumerate} Suppose we are in case \ref{hyp-asymptotic} or \ref{hyp-smallRank}. There is an extremal triple $\Xi = (\xi',\xi,\xi'')$. Either $\Xi$ gives curves or we are in one of the cases of Proposition \ref{prop-sporadic}, in which case there is a decomposition of $\xi$ which gives curves and has the same corresponding wall. As discussed in \S\ref{ssec-BayerMacriPlan}, to show the primary edge of the ample cone corresponds to $W(\Xi)$ it will be enough to show that $W(\Xi)$ is the Gieseker wall $W_{\max}$. Note that $W_{\max}$ cannot be strictly nested inside $W(\Xi)$, since then by our work so far there are sheaves $E\in M(\xi)$ destabilized along $W(\Xi)$. We must show $W_{\max}$ is not larger than $W(\Xi)$.
Let $E\in M(\xi)$ be a sheaf which is destabilized along some wall. For any $(s,t)$ on the wall we have an exact sequence $$0\to F\to E\to Q\to 0$$ of $\sigma_{s,t}$-semistable objects of the same slope, where the sequence is exact in any of the corresponding categories $\mathcal A_s$ along the wall. Above the wall, $\mu_{s,t}(F)<\mu_{s,t}(E)$, and below the wall the inequality is reversed. Let $\Theta = (\theta',\theta,\theta'') = (\ch F,\ch E,\ch Q)$ be the corresponding decomposition of $\xi = \theta$, so that the wall is $W(\Theta)$. Our job is to show that $W(\Theta)$ is no larger than $W(\Xi)$ by imposing numerical restrictions on $\theta'$. We begin by imposing some easy restrictions on $\theta'$.
\begin{lemma}\label{lem-boundsTrivial} The object $F$ is a nonzero torsion-free sheaf, so $\rk(F)\geq 1$. We have $\mu(F) < \mu(E)$, and every Harder-Narasimhan factor of $F$ has slope at most $\mu(E)$. \end{lemma} \begin{proof} Fix a category $\mathcal A_s$ along $W(\Theta)$. Taking cohomology sheaves of the destabilizing sequence of $E$, we get a long exact sequence $$0\to {\rm H}^{-1}(F)\to 0 \to {\rm H}^{-1}(Q) \to {\rm H}^0(F)\to {\rm H}^0(E)\to {\rm H}^0(Q)\to 0$$ since $E\in \mathcal Q_s$. Thus $F$ is a sheaf in $\mathcal Q_s$. We write $K = {\rm H}^{-1}(Q)$ and $C = {\rm H}^{0}(Q)$, so $K\in \mathcal F_s$, $C\in \mathcal Q_s$, and we have an exact sequence of sheaves $$0\to K\to F \to E \to C\to 0.$$ Since $E$ is torsion-free, the torsion subsheaf of $F$ is contained in $K$. Since $K\in \mathcal F_s$ is torsion-free, we conclude $F$ is torsion-free. Clearly also $F$ is nonzero, for otherwise $(\theta',\theta,\theta'')$ wouldn't span a $2$-plane in $K(\mathbb{P}^2)$. We conclude $\rk(F)\geq 1$.
Let $$\{0\} \subset F_1\subset \cdots \subset F_\ell = F$$ be the Harder-Narasimhan filtration of $F$. If $\mu(F_1) > \mu(E)$ then $F_1\to E$ is zero and $F_1\subset K$. Since $K\in \mathcal F_s$ for any $s$ along $W(\Theta)$, this is absurd. Therefore $\mu(F_1)\leq \mu(E)$. We can't have $\mu(F) = \mu(E)$ since then $W(\Theta)$ would be the vertical wall, so we conclude $\mu(F)<\mu(E)$. \end{proof}
\subsection{Excluding higher rank walls} In this subsection, we bound the rank of $F$ under the assumption that $W(\Theta)$ is larger than $W(\Xi)$. In general, there will be walls corresponding to ``higher rank'' subobjects. We show that the Gieseker wall cannot correspond to such a subobject.
\begin{theorem}\label{thm-excludeHighRank} Keep the notation and hypotheses from above. \begin{enumerate} \item Suppose hypothesis \ref{hyp-asymptotic} or \ref{hyp-smallRank} holds. If $W(\Theta)$ is larger than $W(\Xi)$, then $1\leq \rk(\theta') \leq
\rk(\xi)$.
\item If $\xi = (6,\frac{1}{3},\frac{13}{18})$, then the same result holds for the decomposition $\Xi = (\xi',\xi,\xi'')$ with $\xi' = (5,0,0)$.
\end{enumerate} \end{theorem}
The next inequality is our main tool for proving the theorem.
\begin{proposition}\label{prop-highRank} If $\rk(F)>\rk(E)$, then the radius $\rho_\Theta$ of $W(\Theta)$ satisfies $$\rho_\Theta^2 \leq \frac{\rk(E)^2}{2(\rk(E)+1)}\Delta(E).$$ \end{proposition} \begin{proof} Consider the exact sequence of sheaves $$0\to K^k\to F^f\to E^e\to C^c\to 0,$$ with the superscripts denoting the ranks of the sheaves. Since $F$ is in the categories $\mathcal Q_s$ along $W(\Theta)$, we have $$f(s_\Theta+\rho_\Theta)\leq f\mu(F) = c_1(F) = c_1(K)+c_1(E) - c_1(C)= k\mu(K)+e\mu(E)-c_1(C).$$
Next, since $K$ is nonzero and in $\mathcal F_s$ along $W(\Theta)$, we have $\mu(K)\leq s_\Theta-\rho_\Theta$, and thus $$f(s_\Theta+\rho_\Theta)\leq k(s_\Theta-\rho_\Theta)+e\mu(E)-c_1(C).$$ Rearranging, $$(k+f)\rho_\Theta\leq (k-f)s_\Theta+e\mu(E) - c_1(C).$$ If $C$ is zero or torsion, then $k-f=-e$ and $c_1(C)\geq 0$, from which we get \begin{equation}\label{eqn1} (k+f)\rho_\Theta\leq (k-f)(s_\Theta-\mu(E))\end{equation} This inequality also holds if $C$ is not torsion. In that case, we have $k-f=c-e$ and since $E$ is semistable $c_1(C) = c\mu(C) \geq c\mu(E)$, from which the inequality follows.
Both sides of Inequality (\ref{eqn1}) are positive. Since $W(\Theta)$ is a potential wall for $\ch E$, we have $(s_\Theta-\mu(E))^2=\rho_\Theta^2+2\Delta(E)$, so squaring both sides gives $$(k+f)^2\rho_\Theta^2\leq (k-f)^2(\rho_{\Theta}^2+2\Delta(E)).$$ We conclude $$\rho_\Theta^2 \leq \frac{(k-f)^2}{2kf} \Delta(E).$$ This inequality is as weak as possible when the coefficient $(k-f)^2/(2kf)$ is maximized. Viewing $e$ as fixed, $k$ and $f$ are integers satisfying $f\geq e+1$ and $f-e\leq k\leq f$. It is easy to see the coefficient is maximized when $f = e+1$ and $k=1$, which corresponds to the inequality we wanted to prove. \end{proof}
\begin{proof}[Proof of Theorem \ref{thm-excludeHighRank}] We recall $$\rho_\Xi^2 = \left(\frac{\mu(\xi')-\mu(\xi)}{2}-\frac{\Delta(\xi)-\Delta(\xi')}{\mu(\xi)-\mu(\xi')}\right)^2-2\Delta(\xi).$$ If we view $\rho_\Xi^2$ as a function of $\Delta(\xi)$, then it grows quadratically as $\Delta(\xi)$ increases. Suppose $\Delta(\xi)$ is large enough so that $$\rho_\Xi^2 \geq \frac{\rk(\xi)^2}{2(\rk(\xi)+1)}\Delta(\xi).$$ Then if $\rk(\theta')>\rk(\xi)$, we have $\rho^2_\Theta\leq \rho^2_\Xi$. This proves the theorem if hypothesis \ref{hyp-asymptotic} holds.
Next suppose \ref{hyp-smallRank} holds, and write $\xi = (r,\mu,\Delta)$. View $\Delta$ as variable, and consider the quadratic equation in $\Delta$ $$\rho_\Xi^2 = \frac{r^2}{2(r+1)}\Delta;$$ this equation depends only on $r$ and $\mu$. Assuming this equation has roots, let $\Delta_1(r,\mu)$ be the larger of the two roots. Then the theorem is true for $\xi$ if $\Delta\geq \Delta_1(r,\mu)$. Let $\Delta_0(r,\mu)$ be the minimal discriminant of a rank $r$, slope $\mu$ sheaf satisfying \ref{hyp-smallRank}. We record the values of $\Delta_0(r,\mu)$ and $\Delta_1(r,\mu)$ for all pairs $(r,\mu)$ with $1\leq r\leq 6$ and $0 < \mu \leq 1$ in Table \ref{table-discValues}. For later use, we also record the value of the right endpoint $(x^+(r,\mu),0)$ of the wall $W(\Xi)$ corresponding to the character $(r,\mu,\Delta_0)$.
\begin{center} \renewcommand*{\arraystretch}{1.3} \begin{longtable}{ccccccccccccccccc} \caption[]{Computation of $\Delta_0(r,\mu)$, $\Delta_1(r,\mu)$, and $x^+(r,\mu)$.}\label{table-discValues}\\ \toprule $r$ & $\mu$ & $\Delta_0$ & $\Delta_1 $ &$x^+$ &$\qquad$& $r$ & $\mu$ & $\Delta_0$ & $\Delta_1$ &$x^+$&$\qquad$&$r$ & $\mu$ & $\Delta_0$ & $\Delta_1$ &$x^+$\\\midrule \endfirsthead \multicolumn{17}{l}{{\small \it continued from previous page}}\\ \toprule $r$ & $\mu$ & $\Delta_0$ & $\Delta_1 $ &$x^+$ &$\qquad$& $r$ & $\mu$ & $\Delta_0$ & $\Delta_1$ &$x^+$&$\qquad$&$r$ & $\mu$ & $\Delta_0$ & $\Delta_1$ &$x^+$\\\midrule \endhead \bottomrule \multicolumn{17}{r}{{\small \it continued on next page}} \\ \endfoot \bottomrule \endlastfoot 1 & 1 & 2 & 1.00 & 0 && 4 & $\frac{1}{2}$ & $\frac{7}{8}$ & 0.81 & 0 && 5 & 1 & $\frac{6}{5}$ & 1.13 &0.46 \\ 2 & $\frac{1}{2}$ & $\frac{7}{8}$ & 0.25 & 0 && 4 & $\frac{3}{4}$ & $\frac{29}{32}$ & 0.67 & 0.53 && 6 & $\frac{1}{6}$ & $\frac{67}{72}$ & 0.03 & 0 \\ 2 & 1 & $\frac{3}{2}$ & 1.11 & 0.30 && 4 & 1 & $\frac{5}{4}$ & 1.13 & 0.44 && 6 & $\frac{1}{3}$ & $\frac{8}{9}$ & 0.79 & 0 \\ 3 & $\frac{1}{3}$ & $\frac{8}{9}$ & 0.11 & 0 && 5 & $\frac{1}{5}$ & $\frac{23}{25}$ & 0.04 &0 && 6 & $\frac{1}{2}$ & $\frac{17}{24}$ & 0.64 &0.17 \\ 3 & $\frac{2}{3}$ & $\frac{8}{9}$ & 0.57 & 0.37 && 5 & $\frac{2}{5}$ & $\frac{17}{25}$ & 0.65 &0 && 6 & $\frac{2}{3}$ & $\frac{13}{18}$ & 0.58 & 0.46 \\ 3 & 1 & $\frac{4}{3}$ & 1.13 &0.39 && 5 & $\frac{3}{5}$ & $\frac{17}{25}$ & 0.48 &0.37 && 6 & $\frac{5}{6}$ & $\frac{67}{72}$ & 0.78 &0.68 \\ 4 & $\frac{1}{4}$ & $\frac{29}{32}$ & 0.06 & 0 && 5 & $\frac{4}{5}$ & $\frac{23}{25}$ & 0.73 &0.62 && 6 & 1 & $\frac{7}{6}$ & 1.13 & 0.48 \\ \end{longtable} \end{center} In every case, we find that $\Delta_0(r,\mu) \geq \Delta_1(r,\mu)$ as required. We note that $\Delta_1(6,\frac{1}{3})> \frac{13}{18}$, so the proof does not apply to $\xi = (6,\frac{1}{3},\frac{13}{18})$.
When $\xi = (6,\frac{1}{3},\frac{13}{18})$, we put $\Xi = ((5,0,0),(6,\frac{1}{3},\frac{13}{18}),(1,2,6))$ and compute $\rho_\Xi^2=4$. If $\rk(\theta')> 6$ then Proposition \ref{prop-highRank} gives $\rho_{\Theta}^2\leq \frac{13}{7}$, so $W(\Xi)$ is not nested in $W(\Theta)$ in this case either. \end{proof}
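We note that the value $\rho_\Xi^2=4$ can be read off directly from the displayed formula for $\rho_\Xi^2$ above: here $\mu(\xi')=0$, $\Delta(\xi')=0$, $\mu(\xi)=\frac{1}{3}$ and $\Delta(\xi)=\frac{13}{18}$, so $$\rho_\Xi^2=\left(-\frac{1}{6}-\frac{13}{6}\right)^2-\frac{13}{9}=\frac{49}{9}-\frac{13}{9}=4,$$ while for a destabilizing subobject of rank larger than $6$ Proposition \ref{prop-highRank} gives $\rho_\Theta^2\leq \frac{36}{14}\cdot\frac{13}{18}=\frac{13}{7}<4$.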
The proof of the theorem also gives the following nonemptiness result.
\begin{corollary} If hypothesis \ref{hyp-asymptotic} or \ref{hyp-smallRank} holds, then $W(\Xi)$ is nonempty. If $\xi = (6,\frac{1}{3},\frac{13}{18})$, the wall corresponding to $\xi'=(5,0,0)$ is nonempty. \end{corollary}
\subsection{The ample cone, large discriminant case}
Here we finish the proof that $W(\Xi)$ is the Gieseker wall if $\xi$ satisfies hypothesis \ref{hyp-asymptotic}. View $\xi = \xi(\Delta) = (\rk(\xi),\mu(\xi),\Delta)$ as having fixed rank and slope and variable $\Delta$, so that the extremal triple $\Xi=\Xi(\Delta)$ decomposing $\xi(\Delta)$ depends on $\Delta$. We begin with the following lemma that will also be useful in the small rank case \ref{hyp-smallRank}.
\begin{lemma}\label{lem-rightPoint} The right endpoint $x^+_{\Xi(\Delta)} = s_{\Xi(\Delta)} + \rho_{\Xi(\Delta)}$ of $W(\Xi(\Delta))$ is a strictly increasing function of $\Delta$, and $$\lim_{\Delta\to \infty} x_{\Xi(\Delta)}^+ = \mu(\xi').$$ \end{lemma} \begin{proof} The statement that the function is increasing follows as in the second paragraph of Remark \ref{rem-extremalRemark}. The walls $W(\Xi(\Delta))$ are all potential walls for the Chern character $\xi'$, so they form a nested family of semicircles foliating the quadrant left of the vertical wall $s = \mu(\xi')$. If the radius of such a wall is arbitrarily large, then its right endpoint is arbitrarily close to the vertical wall. We saw in the proof of Theorem \ref{thm-excludeHighRank} that if $\Delta$ is arbitrarily large, then the radius of $W(\Xi(\Delta))$ is arbitrarily large. \end{proof}
\begin{theorem}\label{thm-main} Suppose $\xi$ satisfies \ref{hyp-asymptotic}, and let $\Xi$ be the extremal triple decomposing $\xi$. Then $W(\Xi) = W_{\max}$, and the primary edge of the ample cone of $M(\xi)$ corresponds to $W(\Xi)$. \end{theorem} \begin{proof} Let $\Delta(\xi)$ be large enough that there is an extremal $\Xi$ that gives curves. Also assume $\Delta(\xi)$ is large enough that Theorem \ref{thm-excludeHighRank} holds. If necessary, further increase $\Delta(\xi)$ so that no rational numbers with denominator at most $\rk(\xi)$ lie in the interval $[x^+_\Xi,\mu(\xi'))$.
Suppose the decomposition $\Theta$ corresponds to an actual wall $W(\Theta)$ which is at least as large as $W(\Xi)$, and let $$0\to F\to E \to Q\to 0$$ be a destabilizing sequence along $W(\Theta)$. Since $F\in \mathcal Q_s$ along $W(\Xi)$, we have $\mu(F) \geq x_\Xi^+$. Now $\rk(F) \leq \rk(\xi)$, so by the choice of $\Delta(\xi)$ and the slope-closeness condition \ref{cond-slopeClose} we conclude $\mu(F) = \mu(\xi')$.
Furthermore, $F$ is $\mu$-semistable. If it were not, by Lemma \ref{lem-boundsTrivial} the only possibility would be that $F$ has a subsheaf of slope $\mu(\xi)$. Then $F$ must have a Harder-Narasimhan factor of slope smaller than $\mu(\xi')$, and this violates that $F\in \mathcal Q_s$ for all $s$ along $W(\Theta)$ by our choice of $\Delta(\xi)$.
Finally, the $\mu$-semistability of $F$ implies $\Delta(F) \geq \Delta(\xi')$ by the discriminant minimality condition \ref{cond-discMinimal}. If $\Delta(F)>\Delta(\xi')$, then $W(\Theta)$ is nested inside $W(\Xi)$. We conclude $\Delta(F) = \Delta(\xi')$, and $W(\Theta) = W(\Xi)$. Therefore $W(\Xi)$ is the Gieseker wall. \end{proof}
\begin{remark}\label{rem-explicit} The lower bound on $\Delta$ needed for our proof of Theorem \ref{thm-main} can be made explicit. We have increased $\Delta$ on several occasions throughout the paper. If $r$ and $\mu$ are fixed, then $\Delta$ needs to be large enough that the following statements hold. \begin{enumerate} \item $\xi''$ is stable (Lemma \ref{lem-extremalExist}). \item $\Xi$ gives curves. Alternately, it is enough to know that if $F\in M(\xi')$ and $Q\in M(\xi'')$ are general, then $\Hom(F,Q)=0$. The proof of Theorem \ref{thm-curves} allows us to give a lower bound for $\Delta$ if $\hom(F,Q)$ can be computed for some extremal triple $\Xi$ decomposing a character $\xi$ with rank $r$ and slope $\mu$. \item The wall $W(\Xi)$ is large enough to imply the destabilizing subobject along $W_{\max}$ has rank at most $r$ (Proposition \ref{prop-highRank}). \item The right endpoint $x_\Xi^+$ of $W(\Xi)$ is close enough to $\mu(\xi')$ that every rational number in $[x_{\Xi}^+,\mu(\xi'))$ has denominator larger than $r$. \end{enumerate} \end{remark}
\begin{remark}\label{rem-Yoshioka} As an application of Theorem \ref{thm-main} and the discussion in the preceding remark, we explain how our results recover Yoshioka's computation \cite{Yoshioka} of the ample cone of $M(\xi)$ in case $c_1(\xi) = 1$ and $r\geq 2$. Let $\theta' = (2,0,0)$ and $\theta = (r,\frac{1}{r},P(-\frac{1}{r})+\frac{1}{r})$, and let $\Theta$ be the corresponding admissible triple. By Proposition \ref{prop-sporadic}, $\Theta$ is complete and gives curves. Any triple $\Lambda \succeq \Theta$ also gives curves by Lemma \ref{lem-curveBump}.
Now suppose $\xi$ has positive height and $c_1(\xi) = 1$. Then either $\xi=\theta$ or $\xi$ is an elementary modification of $\theta$. Let $\Xi$ be the extremal triple decomposing $\xi$. If $W(\Xi)$ is the Gieseker wall, then the curves in $M(\xi)$ constructed in the previous paragraph are orthogonal to the divisor class on $M(\xi)$ coming from $W(\Xi)$, so we only need to check that $W(\Xi)$ is the Gieseker wall. To do this, we verify that if $\Delta \geq P(-\frac{1}{r})+\frac{1}{r}$, then statements (1), (3), and (4) in Remark \ref{rem-explicit} hold. It is clear that $\xi''$ is stable, so (1) holds.
To check (3) and (4), it is enough to verify they hold for the decomposition $\Theta$. For (3), by Proposition \ref{prop-highRank} we must show $$\left(r-\frac{1}{2}\right)^2=\rho_\Theta^2 \geq \frac{r^2}{2(r+1)}\Delta(\theta)=\frac{2r^2-r+1}{4r+4},$$ which is clear for $r\geq 2$. For (4), we need $x_\Theta^+>-\frac{1}{r}$; in fact, $x_\Theta^+ = 0$ holds, since $\mu(\theta')=\Delta(\theta')=0$ forces every potential wall for $\theta'$ to satisfy $s^2=\rho^2$, so that its right endpoint is $s+\rho=0$. \end{remark}
\subsection{The ample cone, small rank case} We next compute the Gieseker wall in the small rank case.
\begin{theorem}\label{thm-mainsmall} Suppose $\xi$ satisfies \ref{hyp-smallRank}, and let $\Xi$ be the extremal triple decomposing $\xi$. Then $W(\Xi) = W_{\max}$, and the primary edge of the ample cone of $M(\xi)$ corresponds to $W(\Xi)$. \end{theorem} \begin{proof} We may assume $0 < \mu(\xi) \leq 1$. Suppose $\Theta = (\theta',\theta,\theta'')$ is a decomposition of $\xi$ corresponding to an actual wall $W(\Theta)$ which is larger than $W(\Xi)$. Let $F\to E$ be a destabilizing inclusion corresponding to $W(\Theta)$. We will show that $\mu(\theta')>\mu(\xi')$. Combining this with Lemma \ref{lem-boundsTrivial}, Theorem \ref{thm-excludeHighRank}, and slope-closeness \ref{cond-slopeClose} then gives a contradiction.
To prove $\mu(\theta')>\mu(\xi')$, we first derive two auxiliary inequalities. We will make use of the nondegenerate symmetric bilinear form $(\xi,\zeta) = \chi(\xi \otimes \zeta)$ on $K(\mathbb{P}^2)$. Let $\gamma$ be a Chern character of positive rank such that $\gamma^\perp = \langle \xi',\xi\rangle$.
\emph{First inequality:} Since $\mu(\theta')<\mu(\xi)$, the assumption that $W(\Theta)$ is bigger than $W(\Xi)$ means $(\theta',\gamma) > 0$. Indeed, $W(\Theta) = W(\Xi)$ if and only if $\theta'\in \gamma^\perp$. If $\Delta(\theta')$ is decreased starting from a character on $\gamma^\perp$, then $(\theta',\gamma)$ increases and the wall $W(\Theta)$ gets bigger.
\emph{Second inequality:} Put $\zeta_1 = \ch \mathcal{O}_{\mathbb{P}^2}(-1)$ and $\zeta_2 = \ch \mathcal{O}_{\mathbb{P}^2}(-3).$ We observe that $\xi'$ lies in either $\zeta_1^\perp$ or $\zeta_2^\perp$. Let $i$ be such that $\xi'\in \zeta_i^\perp$; we will show that $(\theta',\zeta_i)\leq 0$ in either case.
\emph{Case 1: $(\xi',\zeta_1)=0$.} If $(\theta',\zeta_1)>0$, then $\chi(\mathcal{O}_{\mathbb{P}^2}(1),F)>0$. Suppose $\Ext^2(\mathcal{O}_{\mathbb{P}^2}(1),F)=0$. Then there is a nonzero homomorphism $\mathcal{O}_{\mathbb{P}^2}(1)\to F$, and composing with the inclusion $F\to E$ gives a nonzero homomorphism $\mathcal{O}_{\mathbb{P}^2}(1)\to E$. Since $\xi$ has slope at most $1$ and positive height this contradicts semistability of $E$.
It remains to prove $\Ext^2(\mathcal{O}_{\mathbb{P}^2}(1),F)=0$. Dually, we must show $\Hom(F,\mathcal{O}_{\mathbb{P}^2}(-2))=0$. If $x^+_\Xi \geq -2$, then since $W(\Theta)$ is larger than $W(\Xi)$ we will have $F\in \mathcal Q_{-2}$, proving this vanishing. By Lemma \ref{lem-rightPoint}, we only have to check this inequality when $\Delta(\xi)$ is minimal subject to satisfying \ref{hyp-smallRank}. We carried out this computation in Table \ref{table-discValues}.
\emph{Case 2: $(\xi',\zeta_2)=0$.} If $(\theta',\zeta_2)>0$, then either $\Hom(\mathcal{O}_{\mathbb{P}^2}(3),F)$ or $\Ext^2(\mathcal{O}_{\mathbb{P}^2}(3),F)=\Hom(F,\mathcal{O}_{\mathbb{P}^2})^*$ is nonzero. Clearly $\Hom(\mathcal{O}_{\mathbb{P}^2}(3),F)=0$ by Lemma \ref{lem-boundsTrivial}. We must show $\Hom(F,\mathcal{O}_{\mathbb{P}^2})=0$. This follows from $x_{\Xi}^+\geq 0$, which is again true.
Now we use the inequalities $(\theta',\gamma)>0$ and $(\theta',\zeta_i)\leq 0$ to prove $\mu(\theta')>\mu(\xi')$. There is a character $\nu$ such that if $\eta$ has positive rank, then $(\eta,\nu)\geq 0$ (resp. $>$) if and only if $\mu(\eta) \geq \mu(\xi')$ (resp. $>$). We summarize the known information about the signs of various pairs of characters here, noting that $(\xi,\zeta_i)<0$ since $\xi$ has positive height. $$
\begin{array}{c|ccc} (-,-)& \gamma & \zeta_i & \nu\\ \hline \xi & 0 & <0 & >0 \\ \xi' & 0 & 0 & 0 \\ \theta' & >0 & \leq 0 & \\ \end{array} $$ The character $\nu$ is in $(\xi')^\perp$, and $\gamma$ and $\zeta_i$ form a basis for $(\xi')^\perp$. Write $\nu = a\gamma+b \zeta_i$ as a linear combination. Since $0<(\xi,\nu) = b(\xi,\zeta_i)$, we find $b<0$. The character $\nu$ has rank $0$, so $0=\rk(\nu)=a\rk(\gamma)+b\rk(\zeta_i)$; since $\rk(\gamma)$ and $\rk(\zeta_i)$ are positive and $b<0$, this forces $a>0$. We conclude $$(\theta',\nu) = a(\theta',\gamma)+b(\theta',\zeta_i)>0,$$ so $\mu(\theta')>\mu(\xi')$. \end{proof}
We finish the paper by considering the last remaining case.
\begin{theorem}\label{thm-mainSporadic} Let $\xi = (6,\frac{1}{3},\frac{13}{18})$, and let $\Xi$ be the decomposition of $\xi$ with $\xi' = (5,0,0)$. Then $W(\Xi) = W_{\max}$, and the primary edge of the ample cone corresponds to $W(\Xi)$. \end{theorem} \begin{proof} We use the same notation as in the proof of the previous theorem. We compute $x_{\Xi}^+ = 0$, so since $W(\Theta)$ is larger than $W(\Xi)$ we have $F\in \mathcal Q_\epsilon$ for some small $\epsilon>0$. Consider the Harder-Narasimhan filtration $$0\subset F_1\subset \cdots \subset F_\ell = F.$$ Every quotient $\mathrm{gr}_i$ of this filtration satisfies $0<\mu(\mathrm{gr}_i)\leq \frac{1}{3}$. Since $\mathrm{gr}_i$ is semistable, we find $\chi(\mathrm{gr}_i,\mathcal{O}_{\mathbb{P}^2})\leq 0$ for all $i$. It follows that $\chi(\theta',\ch(\mathcal{O}_{\mathbb{P}^2}))\leq 0$ as well. A straightforward computation using this inequality and $\chi(\theta',\gamma)>0$ shows $\mu(\theta')> \frac{2}{7}$. This contradicts Theorem \ref{thm-excludeHighRank} since every rational number in the interval $(\frac{2}{7},\frac{1}{3})$ has denominator larger than $6$. \end{proof}
\end{document}
\begin{document}
\title{Hom-Groups, Representations and Homological Algebra} \author { Mohammad Hassanzadeh }
\curraddr{University of Windsor, Department of Mathematics and Statistics, Lambton Tower, Ontario, Canada. } \email{[email protected]} \subjclass[2010]{ 17D99, 06B15, 20J05}
\keywords{ Nonassociative rings and algebras, Representation theory, Homological methods in group theory} \maketitle \begin{abstract}
A Hom-group $G$ is a nonassociative version of a group in which associativity, invertibility, and unitality are twisted by a map $\alpha: G\longrightarrow G$. Introducing the Hom-group algebra $\mathbb{K}G$, we observe that Hom-groups provide examples of Hom-algebras, Hom-Lie algebras and Hom-Hopf algebras.
We introduce two types of modules over a Hom-group $G$. To find out more about these modules, we introduce Hom-group (co)homology with coefficients in them. Our (co)homology theories generalize group (co)homology. In contrast to the associative case, we observe that the coefficients needed for Hom-group homology are different from those needed for Hom-group cohomology.
We show that the inverse elements provide a relation between Hom-group (co)homology with coefficients in right and in left $G$-modules. It is shown that our (co)homology theories for Hom-groups can be reduced to the Hochschild (co)homology of Hom-group algebras. For certain coefficients, the functoriality of Hom-group (co)homology is established.
\end{abstract}
\section{ Introduction}
The notion of Hom-Lie algebra is a generalization of Lie algebras which first appeared in $q$-deformations of Witt and Virasoro algebras, where the Jacobi identity is deformed by a linear map \cite{as}, \cite{ckl}, \cite{cz}. There are several interesting examples of Hom-Lie algebras. For example, the authors in \cite{gr} have shown that any algebra of dimension 3 is a Hom-Lie algebra. The related algebra structure, called a Hom-algebra, was introduced in \cite{ms1}.
Later, other objects such as Hom-bialgebras and Hom-Hopf algebras were studied in \cite{ms2}, \cite{ms3}, \cite{ya2}, \cite{ya3}, \cite{ya4}.
We refer the reader to \cite{cs}, \cite{hls}, \cite{ls}, \cite{bm} for more work on Hom-Lie algebras, to \cite{gmmp}, \cite{fg}, \cite{hms} for Hom-algebras, and to \cite{cq}, \cite{gw}, \cite{pss} for representations of Hom-objects. It is well known that the study of Hopf algebras is closely related to groups and Lie algebras. The set of group-like elements and the set of primitive elements of a Hopf algebra form a group and a Lie algebra, respectively. Conversely, any group gives a Hopf algebra, called the group algebra, and any Lie algebra gives rise to its universal enveloping algebra. There has been much work relating Hom-Lie algebras and Hom-Hopf algebras.
However, some of these relations were missing in the context of Hom-type objects due to the lack of Hom-type notions of groups and group algebras.
Here we briefly explain how Hom-groups came into the context of Hom-type objects. The universal enveloping algebra of a Hom-Lie algebra has a Hom-bialgebra structure, see \cite{ya4}. However, it does not have a Hom-Hopf algebra structure in the sense of \cite{ms2}. This is due to the fact that the antipode is not an inverse of the identity map in the convolution product. This motivated the authors in \cite{lmt} to modify the notion of invertibility in Hom-algebras and to introduce a new definition for the antipode of Hom-Hopf algebras. In solving this problem, they arrived at the axioms of Hom-groups, which naturally appear in the structure of the group-like elements of Hom-Hopf algebras. They were also motivated by the problem of constructing a Hom-Lie group integrating a Hom-Lie algebra. Simultaneously with this paper, the author in \cite{h1} introduced and studied several fundamental notions for Hom-groups for which the twisting map $\alpha$ is invertible. It was shown that Hom-groups are examples of quasigroups. Furthermore, the Lagrange theorem for finite Hom-groups was proved. The Hom-Hopf algebra structure of the Hom-group algebra $\mathbb{K}G$ was introduced in \cite{h2}.\\
In this paper we investigate different aspects of Hom-groups such as modules and homological algebra. In Section 2, we study the basics of Hom-groups. We introduce the Hom-algebra associated to a Hom-group $G$, which we call the Hom-group algebra and denote by $\mathbb{K}G$. It has been shown in \cite{ms1} that the commutator of a Hom-associative algebra $A$ is a Hom-Lie algebra $\mathfrak{g}_A$. The authors in \cite{ya4}, \cite{lmt} showed that the universal enveloping algebra of a Hom-Lie algebra is endowed with a Hom-Hopf algebra structure. Therefore Hom-groups are sources of examples of Hom-algebras, Hom-Lie algebras, and Hom-Hopf algebras as follows
$$ G\hookrightarrow \mathbb{K}G\hookrightarrow \mathfrak{g}_{\mathbb{K}G}\hookrightarrow U(\mathfrak{g}_{\mathbb{K}G}).$$ We refer the reader to \cite{h1} for more examples and fundamental notions for Hom-groups.
In Section 3, we introduce two types of modules over Hom-groups. The first type is called dual $G$-modules. Using inverse elements in a Hom-group $G$, we show that a dual left $G$-module can be turned into a dual right $G$-module and vice versa. Then we introduce $G$-modules, and we show that for any left $G$-module $M$ the algebraic dual $\mathop{\rm Hom}\nolimits(M, \mathbb{K})$ is a dual right $G$-module, where $\mathbb{K}$ is a field.
It is known that the group (co)homology provides an important set of tools for studying modules over a group.
This motivates us to introduce (co)homology theories for Hom-groups to find out more about representations of Hom-groups.
Generally, introducing homological algebra for non-associative objects is a difficult task. The first attempts to introduce homological tools for Hom-algebras and Hom-Lie algebras appeared in \cite{aem}, \cite{ms3}, \cite{ms4}, \cite{ya1}. The authors in \cite{hss} defined Hochschild and cyclic (co)homology for Hom-algebras.
In Section 4, we introduce Hom-group cohomology with coefficients in dual left (right) $G$-modules. The conditions on $M$ in our work also appeared in other contexts, such as \cite{cg}, where the authors used the category of Hom-modules over Hom-algebras to obtain a monoidal category of modules over Hom-bialgebras. A noticeable difference between the homology theories of Hom-algebras introduced in \cite{hss} and the ones for Hom-groups in this paper is that the first one needs bimodules over Hom-algebras while the second one requires one-sided modules (left or right). We show that Hom-group cohomology with coefficients in a dual right $G$-module is isomorphic to Hom-group cohomology with coefficients in the dual left $G$-module where the left action is given by the inverse elements. We compute $0$- and $1$-cocycles, and we show the functoriality of Hom-group cohomology for certain coefficients. Since any Hom-group gives rise to its Hom-group algebra, a natural question is the relation between the cohomology theories of these two different objects. We show that the Hom-group cohomology of a Hom-group $G$ with coefficients in a dual left module is isomorphic to the Hom-Hochschild cohomology of the Hom-group algebra $\mathbb{K}G$ with coefficients in the dual $\mathbb{K}G$-bimodule whose right $\mathbb{K}G$-action is trivial. Later we introduce Hom-group homology with coefficients in left (right) $G$-modules.
In contrast to the associative case, the Hom-associativity condition leads us to use different types of representations for the cohomology and homology theories of Hom-groups. We establish similar results in the homology case.
The (co)homology theories for Hom-algebras in \cite{hss}, \cite{aem}, \cite{ms3} and for Hom-groups in this paper give us hope of solving the open problem of introducing homological tools for other non-associative objects such as Jordan algebras and alternative algebras.
\tableofcontents
\section{Hom-groups } Here we recall the definition of a Hom-group from \cite{lmt}. \begin{definition}\label{def-hom}{\rm
A Hom-group consists of a set $G$ together with a distinguished element $1$ of $G$, a set map $\alpha: G\longrightarrow G$, an operation $\mu: G\times G\longrightarrow G$, and an operation written as $^{-1}:G\longrightarrow G$. These pieces of structure are subject to the following axioms:\\
i) The product map $\mu: G\times G\longrightarrow G$ satisfies the Hom-associativity property
$$\mu(\alpha(g), \mu(h, k))= \mu(\mu(g,h), \alpha(k)).$$
For simplicity when there is no confusion we omit the sign $\mu$.
ii) The map $\alpha $ is multiplicative, i.e, $\alpha(gk)=\alpha(g)\alpha(k)$.
iii) The element $1$ is called the unit and satisfies the Hom-unitality condition
$$g1=1g=\alpha(g), \quad\quad ~~~~~ \alpha(1)=1.$$
iv) The map $g\longrightarrow g^{-1}$ satisfies the anti-morphism property $(gh)^{-1}=h^{-1} g^{-1}$.
v) For any $g\in G$
there exists a natural number $n$ satisfying the Hom-invertibility condition
$$\alpha^n(g g^{-1})=\alpha^n(g^{-1}g)=1.$$
The smallest such
$n$ is called the invertibility index of $g$.
} \end{definition}
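In particular, taking $\alpha=\mathrm{id}_G$ recovers the usual notion of a group: Hom-associativity becomes associativity, the Hom-unitality condition becomes $g1=1g=g$, and the Hom-invertibility condition becomes $gg^{-1}=g^{-1}g=1$.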
Since the inversion is given as a map $g\longmapsto g^{-1}$, the inverse of any element $g\in G$ is unique by definition, although different elements may have different invertibility indices.
The inverse of the unit element $1$ of a Hom-group $(G, \alpha)$ is itself because $\alpha(\mu(1, 1))=\alpha(1)=1$.
For any Hom-group $(G, \alpha)$ we have $\alpha(g)^{-1}=\alpha(g^{-1})$. This is because if $g^{-1}$ is the unique inverse of $g$ and its invertibility index is $k$, then $$\alpha^{k-1}( \alpha(g) \alpha(g^{-1}))=\alpha^k(g) \alpha^k(g^{-1})=1.$$ So the invertibility index of $\alpha(g)$ is at most $k-1$ when $k\geq 2$; if $k=1$, then $\alpha(g)\alpha(g^{-1})=1$ and the invertibility index of $\alpha(g)$ is also one. The non-associativity of the product prevents us from easily defining the notion of the order of an element $g$. Therefore many basic results of group theory are affected by the missing associativity condition.
\begin{example}\label{deformation of groups}
{\rm
Let $(G, \mu, 1)$ be any group and $\alpha: G\longrightarrow G$ be a group homomorphism. We define a new product $\mu_{\alpha}: G\times G\longrightarrow G$ given by $$\mu_{\alpha}(g, h)= \alpha(\mu(g, h))=\mu(\alpha(g), \alpha(h)).$$ Then $(G, \mu_{\alpha})$ is a Hom-group, which we denote by $G_{\alpha}$. We note that the inverse of any element $g\in G_{\alpha}$ is again $g^{-1}$ because
$$\alpha(\mu_{\alpha}(g, g^{-1}))= \alpha(g)\alpha(g^{-1})=\alpha(1)=1.$$
The invertibility index of every element of $G_{\alpha}$ is one.
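As a minimal concrete instance of this construction, one can take $G=(\mathbb{Z},+)$ and $\alpha(n)=2n$: then $\mu_{\alpha}(m,n)=2(m+n)$, the unit is $0$, the inverse of $n$ is $-n$, and Hom-associativity reads $\mu_{\alpha}(\alpha(m),\mu_{\alpha}(n,p))=4(m+n+p)=\mu_{\alpha}(\mu_{\alpha}(m,n),\alpha(p))$.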
} \end{example} \begin{remark}
{\rm
In this paper we use the general definition of Hom-groups in Definition \ref{def-hom}. However, the author in \cite{h1} considered the special case in which $\alpha$ is invertible. In that setting, the invertibility axiom is replaced by the requirement that for any $g\in G$ there exists $g^{-1}\in G$ with
$$gg^{-1}= g^{-1}g=1.$$
It was shown that the inverse element $g^{-1}$ is unique and that $(gh)^{-1}= h^{-1}g^{-1}$. Therefore some of the axioms in Definition \ref{def-hom} follow from Hom-associativity in that case; see \cite{h1}.
} \end{remark}
\begin{definition}{\rm
Let $\mathbb{K}$ be a field. For any Hom-group $(G, \alpha)$ we can define a Hom-algebra $\mathbb{K}G$, free as a $\mathbb{K}$-module, which is called the Hom-group algebra.
More precisely, $\mathbb{K}G$ denotes the set of all formal linear combinations $\sum cg$ where $c\in \mathbb{K}$ and $g\in G$. The multiplication of $\mathbb{K}G$ is defined
by $(cg)(c'g')= (cc')(gg')$ for all $c,c'\in \mathbb{K}$ and $g,g'\in G$, extended bilinearly.
For the Hom-algebra structure we extend $\alpha:G\longrightarrow G$ to a $\mathbb{K}$-linear map $\mathbb{K}G\longrightarrow \mathbb{K}G$ in the obvious way.
}
\end{definition}
\begin{remark}{\rm
It is shown in \cite{ms1} that the commutator of a Hom-associative algebra $A$, given by $[a,b]=ab-ba$, is a Hom-Lie algebra $\mathfrak{g}_A$. Furthermore, the authors in \cite{ya4}, \cite{lmt} showed that the universal enveloping algebra of a Hom-Lie algebra is endowed with a Hom-Hopf algebra structure. One notes that the Hom-Hopf algebra structure in \cite{ya4} is different from the one in \cite{lmt}. Therefore Hom-groups are a source of examples of Hom-algebras, Hom-Lie algebras, and Hom-Hopf algebras as follows
$$ G\hookrightarrow \mathbb{K}G\hookrightarrow \mathfrak{g}_{\mathbb{K}G}\hookrightarrow U(\mathfrak{g}_{\mathbb{K}G}).$$ }
\end{remark}
By \cite{lmt}, an element $x$ in a unital Hom-algebra $(A, \alpha, 1)$ is invertible if there is an element $x^{-1}\in A$ and a
non-negative integer $k$ such that $$ \alpha^k(x x^{-1})= \alpha^k(x^{-1}x)=1.$$
The element $x^{-1}$ is called the Hom-inverse of $x$.
The Hom-inverse of an element in a Hom-algebra may not be unique. This is different from
Hom-groups, where the inverse of an element is unique. This prevents the set of Hom-invertible elements in a Hom-algebra from being a Hom-group in general.
The authors in \cite{lmt} showed that for any unital Hom-algebra, the unit 1 is Hom-invertible, the product of any two Hom-invertible elements is Hom-invertible and every inverse of a Hom-invertible element is Hom-invertible. Furthermore they proved that the set of group-like elements in a Hom-Hopf algebra is a Hom-group.
The inverse of an element $cg$ in the Hom-group algebra $\mathbb{K}G$ is the unique element $c^{-1}g^{-1}$, where $c\in \mathbb{K}$ and $g\in G$.
\begin{definition}
{\rm
A subset $H$ of a Hom-group $(G,\alpha)$ is called a Hom-subgroup of $G$ if $(H,\alpha)$ itself is a Hom-group.
}
\end{definition}
One notes that if $H$ is a Hom-subgroup of $G$ then $\alpha(h) = 1 h\in H$ for all $h\in H$. Therefore $\alpha(H)\subseteq H$.
\begin{example}
{\rm
Let $G$ be a group and $\alpha: G\longrightarrow G$ be a group homomorphism. If $H$ is a subgroup of
$G$ with $\alpha(H)\subseteq H$, then $(H_{\alpha}, \alpha)$ is a Hom-subgroup of $G_{\alpha}$.
}
\end{example} \begin{definition}
{\rm
Let $(G, \alpha)$ and $(H, \beta)$ be two Hom-groups. A map $f: G\longrightarrow H$ is called a morphism of Hom-groups if $\beta(f(g))=f(\alpha(g))$ and $f(gk)=f(g)f(k)$ for all $g,k\in G$.
Two Hom-groups $G$ and $H$ are called isomorphic if there exists a bijective morphism of Hom-groups $f: G\longrightarrow H$.
} \end{definition}
\begin{proposition}
Let $(G, \alpha)$ and $(H, \beta)$ be two Hom-groups and $f: G\longrightarrow H$ be a morphism of Hom-groups.
If the invertibility index of the element $f(1_G)\in H$ is $n$ then $\beta^{n+2}(f(1_G))=1_H$.
\end{proposition} \begin{proof} Since $f$ is multiplicative, we have $$f(1_G) f(1_G)= f(1_G 1_G) = f(1_G).$$
Also $$1_H f(1_G)=\beta(f(1_G))= f(\alpha(1_G))= f(1_G).$$
Therefore $$f(1_G) f(1_G)= 1_H f(1_G).$$
Applying $\beta^n$ to both sides, we get $\beta^n(f(1_G)) \beta^n(f(1_G))= \beta^n(1_H) \beta^n(f(1_G))$.
So we have $\beta^n(f(1_G)) \beta^n(f(1_G))= 1_H \beta^n(f(1_G))$. Then $$[\beta^n(f(1_G)) \beta^n(f(1_G))] \beta^{n+1}(f(1_G)^{-1})= [1_H \beta^n(f(1_G))] \beta^{n+1}(f(1_G)^{-1}).$$
So
$$ \beta^{n+1}(f(1_G)) [ \beta^n(f(1_G) f(1_G)^{-1})] = \beta(1_H) [\beta^n(f(1_G) f(1_G)^{-1})] .$$ Then
$$\beta^{n+1}(f(1_G))\, 1_H = \beta(1_H)\, 1_H. $$ Therefore $\beta^{n+2}(f(1_G))= \beta^2(1_H)= 1_H.$
\end{proof} This proposition shows that, for a morphism of Hom-groups $f: G\longrightarrow H$, the unitality condition $f(1_G)=1_H$ does not hold in general.
\begin{lemma}
Let $(G, \alpha)$ and $(H, \beta)$ be two Hom-groups and $f: G\longrightarrow H$ be a morphism of Hom-groups. If $f(1_G)=1_H$ then
$f(g^{-1})= f(g)^{-1}$.
\end{lemma} \begin{proof}
We suppose that the invertibility index of $g$ is $n$. Therefore $$ \beta^n(f(g) f(g^{-1}))=f(\alpha^n(g g^{-1}))= f(1_G)=1_H.$$
So $f(g^{-1})= f(g)^{-1}$. \end{proof}
\begin{lemma}
Let $(G, \alpha)$ and $(H, \beta)$ be two Hom-groups and $f: G\longrightarrow H$ be a morphism of Hom-groups. If $f(1_G)=1_H$ then
$\ker f=\{ g\in G~:~ f(g)=1_H\}$ is a Hom-subgroup of $G$. \end{lemma} \begin{proof}
Since $f$ is multiplicative, $\ker f$ is closed under multiplication. Moreover $1_G\in \ker f$, and $f(\alpha(x))=\beta(f(x))=\beta(1_H)=1_H$ for $x\in \ker f$, so $\ker f$ is closed under $\alpha$. Finally, if $x\in \ker f$ then $x^{-1}\in \ker f$ because, by the previous lemma,
$$f(x^{-1})= f(x)^{-1}= 1_H^{-1}=1_H.$$ \end{proof}
\section{$G$-modules }
In this section we introduce two different types of modules over a Hom-group $G$. The first type is called dual $G$-modules and we use them later to introduce a cohomology theory for Hom-groups. The other type is called $G$-modules and they will be used to define a homology theory of Hom-groups.
\begin{definition}
Let $(G, \alpha)$ be a Hom-group. An abelian group $M$ is called a dual left $G$-module if there are linear maps $\cdot: G\times M\longrightarrow M$, and $\beta: M\longrightarrow M$ where
\begin{equation}\label{left-dual-module}
g\cdot (\alpha(h)\cdot m)=\beta((gh)\cdot m), \quad\quad g,h\in G,
\end{equation}
and $$1\cdot m= \beta(m).$$
Similarly, $M$ is called a dual right $G$-module if \begin{equation}\label{right-dual-module}
(m\cdot \alpha(h))\cdot g= \beta(m\cdot (hg)), \quad\quad m\cdot 1=\beta(m). \end{equation}
Finally, we call $M$ a dual $G$-bimodule if it is both a dual left and a dual right $G$-module with the following bimodule property $$\alpha(g)\cdot (m\cdot h)=(g\cdot m)\cdot \alpha(h).$$ \end{definition}
\begin{lemma}\label{result of left dual module}
If $ (G, \alpha)$ is a Hom-group and $M$ a dual left $G$-module, then
\begin{equation}
g\cdot \beta(m)= \beta(\alpha(g)\cdot m), \quad\quad g\in G, m\in M.
\end{equation}
Similarly for a dual right $G$-module we have
\begin{equation}
\beta(m\cdot \alpha(g))= \beta(m)\cdot g.
\end{equation} \end{lemma} \begin{proof}
This follows by substituting $h=1$ in \eqref{left-dual-module} and \eqref{right-dual-module} and using $\alpha(1)=1$, $1g=g1=\alpha(g)$, and $1\cdot m=\beta(m)= m\cdot 1$.
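Explicitly, in the left case: setting $h=1$ in \eqref{left-dual-module} gives $g\cdot(\alpha(1)\cdot m)=\beta((g1)\cdot m)$, and since $\alpha(1)=1$, $1\cdot m=\beta(m)$ and $g1=\alpha(g)$, this reads $g\cdot\beta(m)=\beta(\alpha(g)\cdot m)$.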
\end{proof}
It is known that for every group $G$, a right $G$-module $M$ can be turned into a left $G$-module $\widetilde{M}=M$
where the left action is given by $g\cdot m:= mg^{-1}$. This process can also be done for Hom-groups as follows.
\begin{lemma}\label{right to left}
Let $(G, \alpha)$ be a Hom-group. A dual right $G$-module $M$ can be turned into a dual left $G$-module $\widetilde{M}=M$ by the left action given by
\begin{equation}
g\cdot m:= m\cdot g^{-1}.
\end{equation}
\end{lemma}
\begin{proof}
This follows from
\begin{align*}
g\cdot (\alpha(k)\cdot m)&= g\cdot (m\cdot \alpha(k)^{-1})\\
&= (m\cdot \alpha(k)^{-1})\cdot g^{-1}= (m\cdot \alpha(k^{-1}))\cdot g^{-1}\\
&= \beta(m\cdot (k^{-1}g^{-1}))=\beta(m\cdot (gk)^{-1}) =\beta((gk)\cdot m),\\
\end{align*} and also \begin{equation*}
1\cdot m=m\cdot 1^{-1}=m\cdot 1=\beta(m). \end{equation*}
\end{proof}
The following notion of modules over Hom-groups will be used to introduce Hom-group homology.
\begin{definition}\label{modules over Hom-groups} Let $(G, \alpha)$ be a Hom-group. An abelian group $M$ equipped with maps $\cdot :G \times M \longrightarrow M$, $(g, m)\mapsto g\cdot m$, and $\beta:M\longrightarrow M$, is called a left $G$-module if
\begin{equation}\label{aux-Hom-module} (gk)\cdot \beta(m) = \alpha(g)\cdot (k\cdot m), \quad\quad~~~~~~~~ 1\cdot m=\beta(m), \end{equation} for all $g,k\in G$ and $m\in M$.
Similarly, $(M,\beta)$ is called a right $G$-module if \begin{equation*} \beta(m)\cdot(gk)= (m\cdot g)\cdot \alpha(k), \quad\quad ~~~~ m\cdot 1= \beta(m). \end{equation*} Furthermore $M$ is called an $G$-bimodule if \begin{equation}\label{aux-A-bimodule} \alpha(g)\cdot (m \cdot k) = (g \cdot m)\cdot \alpha(k), \end{equation} for all $g,k\in G$, and $m \in M$. \end{definition}
\begin{example}\rm{
For a Hom-group $G$, the Hom-group algebra $\mathbb{K}G$ is a bimodule over $G$ by the left and right actions defined by its multiplication and $\beta=\alpha$.
More precisely, the left action is defined by $g\cdot (ch)= c(gh)$, where $g,h\in G$ and $c\in \mathbb{K}$.
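To spell out one of the axioms (the others are checked in the same way): for $g,h,k\in G$ and $c\in\mathbb{K}$ we have $\alpha(g)\cdot((ch)\cdot k)=c\,(\alpha(g)(hk))=c\,((gh)\alpha(k))=(g\cdot(ch))\cdot\alpha(k)$ by Hom-associativity, which is the $G$-bimodule condition \eqref{aux-A-bimodule}.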
} \end{example}
\begin{lemma}\label{right to left-2}
Let $(G, \alpha)$ be a Hom-group. A right $G$-module $M$ can be turned into a left $G$-module $\widetilde{M}=M$ by the left action
\begin{equation}
g\cdot m:= m\cdot g^{-1}.
\end{equation}
\end{lemma} \begin{proof} This is because
\begin{align*}
&(gk)\cdot\beta(m)=\beta(m) \cdot (k^{-1}g^{-1})= (m\cdot k^{-1})\cdot \alpha(g^{-1})\\
&= (m\cdot k^{-1})\cdot \alpha(g)^{-1}= \alpha(g)\cdot (m\cdot k^{-1}) =\alpha(g)\cdot (k\cdot m)
\end{align*} \end{proof} \begin{example}\rm{
Let $G$ be a Hom-group and $M$ be a right $G$-module. If $\mathbb{K}$ is a field, then the algebraic dual
${{M}}^*= \mathop{\rm Hom}\nolimits(M,\mathbb{K})$ can be turned into a dual left $G$-module by the dual left action given by \begin{equation}
(g\cdot f)(m)= f(m\cdot g). \end{equation}
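A short check that this indeed defines a dual left $G$-module, where we take the structure map on $M^*$ to be $\beta^*(f)=f\circ\beta$ (the natural choice, though it is not forced by the statement): for $g,h\in G$, $f\in M^*$ and $m\in M$,
$$\big(g\cdot(\alpha(h)\cdot f)\big)(m)= f\big((m\cdot g)\cdot\alpha(h)\big)= f\big(\beta(m)\cdot (gh)\big)=\big((gh)\cdot f\big)(\beta(m))=\beta^*\big((gh)\cdot f\big)(m),$$
using the right $G$-module axiom $\beta(m)\cdot(gh)=(m\cdot g)\cdot\alpha(h)$, and $(1\cdot f)(m)=f(m\cdot 1)=f(\beta(m))=\beta^*(f)(m)$.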
} \end{example}
\section{Hom-group cohomology}
In this section we introduce Hom-group cohomology. To do this we need to use dual modules as the coefficients.
\begin{theorem}\label{Hom-group cohomology for left}
Let $(G, \alpha)$ be a Hom-group and $(M, \beta)$ be a dual left $G$-module. Let $C^n_{Hom}(G, M)$ be the space of all maps $\varphi: G^{\times n}\longrightarrow M$. Then \begin{equation*} C_{Hom}^\ast(G, M)=\bigoplus_{n\geq0} C_{Hom}^n(G, M), \end{equation*} with the coface maps \begin{align}\label{aux-cosimplisial-structure-vp} \begin{split} &\delta_0\varphi(g_1, \cdots , g_{n+1})=g_1\cdot \varphi(\alpha(g_2), \cdots , \alpha(g_{n+1}))\\ &\delta_i\varphi(g_1 , \cdots ,g_{n+1})=\beta(\varphi(\alpha(g_1), \cdots , g_i g_{i+1}, \cdots , \alpha(g_{n+1}))), ~~ 1\leq i \leq n\\ &\delta_{n+1}\varphi(g_1, \cdots , g_{n+1})= \beta(\varphi(\alpha(g_1), \cdots , \alpha(g_{n}))),\\ \end{split} \end{align} is a cosimplicial module. \end{theorem} \begin{proof} We need to show that $\delta_i \delta_j= \delta_j \delta_{i-1}$ for $0\leq j< i$. Let us first show that $\delta_1\delta_0=\delta_0\delta_0$. \begin{align*} \delta_0(\delta_0\varphi)(g_1,\cdots , g_{n+2}) &=g_1\cdot \delta_0\varphi(\alpha(g_2),\cdots ,\alpha(g_{n+2}))\\ &= g_1\cdot (\alpha(g_2)\cdot\varphi(\alpha^2(g_3),\cdots ,\alpha^2(g_{n+2}))) \\ & =\beta((g_1g_2)\cdot\varphi(\alpha^2(g_3),\cdots ,\alpha^2(g_{n+2}))) \\ &= \beta(\delta_0\varphi(g_1g_2, \alpha(g_3),\cdots ,\alpha(g_{n+2})))\\ &=\delta_1\delta_0\varphi(g_1,\cdots ,g_{n+2}). \end{align*} We used the left dual module property in the third equality. Now we show that $\delta_{n+1} \delta_n= \delta_n \delta_{n}$. \begin{align*}
\delta_{n+1} \delta_n\varphi(g_1,\cdots, g_{n+1}) &=\beta(\delta_n\varphi(\alpha(g_1), \cdots, \alpha(g_{n})))\\
&=\beta^2(\varphi(\alpha(g_1), \cdots, \alpha(g_{n-1})))\\
&=\beta(\delta_n\varphi(\alpha(g_1),\cdots,\alpha( g_{n-1}), \alpha(g_{n}g_{n+1})))\\
&=\beta(\delta_n\varphi(\alpha(g_1),\cdots,\alpha( g_{n-1}), \alpha(g_{n})\alpha(g_{n+1})))\\
&=\delta_n \delta_{n}\varphi(g_1,\cdots, g_{n+1}). \end{align*} We used the multiplicativity of $\alpha$ in the fourth equality. The following demonstrates that $\delta_{n+1}\delta_0=\delta_0\delta_{n}$. We have
\begin{align*}
\delta_{n+1}\delta_0\varphi(g_1,\cdots , g_{n+1}) &=\beta(\delta_0\varphi(\alpha(g_1),\cdots ,\alpha(g_{n})))\\
&=\beta(\alpha(g_1)\cdot \varphi(\alpha^2(g_2),\cdots ,\alpha^2(g_{n})))\\
&=g_1 \cdot\beta(\varphi(\alpha^2(g_2),\cdots ,\alpha^2(g_{n})) )\\
&=g_1\cdot \delta_n\varphi(\alpha(g_2),\cdots, \alpha(g_{n+1}))\\
&=\delta_0\delta_n\varphi(g_1,\cdots ,g_{n+1}). \end{align*}
We used Lemma \ref{result of left dual module} in the third equality. The relations $\delta_{j+1} \delta_j= \delta_j \delta_{j}$ follow from the Hom-associativity of $G$. \end{proof} Similarly, we have the following result.
\begin{proposition}
Let $(G, \alpha)$ be a Hom-group and $(M, \beta)$ be a dual right $G$-module. Let $C^n_{Hom}(G, M)$ be the space of all maps $\varphi: G^{\times n}\longrightarrow M$. Then \begin{equation*} C_{Hom}^\ast(G, M)=\bigoplus_{n\geq0} C_{Hom}^n(G, M), \end{equation*} with the coface maps \begin{align}\label{aux-cosimplisial-structure-vp} \begin{split} &\delta_0\varphi(g_1, \cdots , g_{n+1})= \varphi(\alpha(g_1), \cdots , \alpha(g_{n}))\cdot g_{n+1}\\ &\delta_i\varphi(g_1 , \cdots ,g_{n+1})=\beta(\varphi(\alpha(g_1), \cdots , g_ig_{i+1}, \cdots , \alpha(g_{n+1}))), ~~ 1\leq i \leq n\\ &\delta_{n+1}\varphi(g_1, \cdots , g_{n+1})= \beta(\varphi(\alpha(g_2), \cdots , \alpha(g_{n+1}))),\\ \end{split} \end{align} is a cosimplicial module. \end{proposition} \begin{proof} Here we show that $\delta_{n+1}\delta_0=\delta_0\delta_{n}$. \begin{align*}
\delta_{n+1}\delta_0\varphi(g_1,\cdots , g_{n+1}) &=\beta(\delta_0\varphi(\alpha(g_2),\cdots , \alpha(g_{n+1})))\\
&=\beta(\varphi(\alpha^2(g_2),\cdots ,\alpha^2(g_{n}))\cdot\alpha(g_{n+1}))\\
&= \beta(\varphi(\alpha^2(g_2),\cdots, \alpha^2(g_{n})) )\cdot g_{n+1}\\
&= \delta_n\varphi(\alpha(g_1),\cdots ,\alpha(g_{n}))\cdot g_{n+1}\\
&=\delta_0\delta_n\varphi(g_1,\cdots , g_{n+1}). \end{align*} We used the Lemma \ref{result of left dual module} in the third equality.
The rest of the relations can be proved similarly to Theorem \ref{Hom-group cohomology for left}.
\end{proof} Now we define the coboundary map $b= \sum_{i=0}^{n+1} (-1)^i\delta_i$. The cosimplicial relations of the previous Theorem and Proposition imply $b^2=0$. The cohomology of the cochain complex $$ \begin{CD} 0 @>b>> C_{Hom}^0(G, M) @>b>> C_{Hom}^1(G, M) @>b>> C_{Hom}^2(G, M) @>b>> C_{Hom}^3(G, M) \ldots \end{CD}\\ $$ is called the Hom-group cohomology of $G$ with coefficients in $M$. Here $M= C_{Hom}^0(G, M)$. The following proposition shows the relation between Hom-group cohomology with coefficients in dual left and in dual right modules.
\begin{proposition}
Let $(G, \alpha)$ be a Hom-group and $M$ be a dual right $G$-module. Then $\widetilde{M} =M$ with the left action $g\cdot m= m\cdot g^{-1}$ is a dual left $G$-module. Furthermore
$$H^*(G, M)\cong H^*(G, \widetilde{M}).$$
\end{proposition}
\begin{proof}
The space $\widetilde{M}$ is a dual left $G$-module by the Lemma \ref{right to left}. We define $$F: C^n(G, \widetilde{M})\longrightarrow C^n(G, M),$$ given by $$F(\varphi)(g_1, \dots , g_n)=\varphi(g_n^{-1}, \cdots , g_1^{-1}).$$ Here we show $F \delta^{\widetilde{M}}_0= \delta^{M}_0 F$ where $\delta^{\widetilde{M}}$ and $\delta^{M}$ stand for the coface maps when the coefficients are $\widetilde{M} $ and $M$, respectively. \begin{align*}
F \delta^{\widetilde{M}}_0\varphi(g_1, \cdots , g_{n+1})&=\delta^{\widetilde{M}}_0\varphi(g_{n+1}^{-1}, \cdots, g_1^{-1})\\
&=g_{n+1}^{-1}\cdot \varphi(\alpha(g_n^{-1}), \cdots , \alpha(g_1^{-1}))\\
&=\varphi(\alpha(g_n^{-1}), \cdots , \alpha(g_1^{-1}))\cdot g_{n+1}\\
&=\varphi(\alpha(g_n)^{-1}, \cdots , \alpha(g_1)^{-1})\cdot g_{n+1}\\
&= F\varphi(\alpha(g_1), \cdots , \alpha(g_n))\cdot g_{n+1}\\
&=\delta^{M}_0 F\varphi(g_1, \cdots , g_{n+1}). \end{align*}
Similarly, $F$ commutes with all the $\delta_i$'s and therefore with the coboundary maps $b=\sum_i (-1)^i\delta_i$. Thus $F$ is a
map of cochain complexes and induces a map on the level of cohomology.
Furthermore $F$ is a bijection on the level of cochain complexes because inverse elements are unique in Hom-groups.
\end{proof}
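Before turning to examples, it is convenient to unwind the coboundary in low degrees for a dual right $G$-module $M$, using the coface maps of the preceding proposition and the alternating-sum coboundary: for $m\in C_{Hom}^0(G,M)=M$ we get $bm(g)=m\cdot g-\beta(m)$, and for $f\in C_{Hom}^1(G,M)$ we get $bf(g,h)=f(\alpha(g))\cdot h-\beta(f(gh))+\beta(f(\alpha(h)))$. These formulas are used in the next two examples.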
The following two examples show that the cohomology classes could contain important information about a Hom-group. \begin{example}{\rm \textbf{({$H^0$} and twisted invariant elements) }\\
Let $(G, \alpha)$ be a Hom-group and $M$ be a dual right $G$-module.
Then $$H^0(G, M)= \{ m\in M,~~ m\cdot g=\beta(m),~ \forall g\in G\}.$$
So the zeroth cohomology group is the subspace of $M$ consisting of those elements that are invariant under the $G$-action up to the twisting by $\beta$.
} \end{example} \begin{example}
{\rm \textbf{($H^1$ and twisted crossed homomorphisms )}\\
Let $(G, \alpha)$ be a Hom-group and $M$ be a dual right $G$-module. To compute $H^1(G, M)$ we need to compute $\ker b$, which
consists of the $1$-cochains $f: G\longrightarrow M$ with $bf(g, h)=0$. This means
$$ f(\alpha(g))\cdot h -\beta(f(gh))+\beta(f(\alpha(h)))=0,$$ or
$$\beta(f(gh))= f(\alpha(g))\cdot h +\beta(f(\alpha(h))).$$ These maps are called twisted crossed homomorphisms of $G$. Also, the image of $b$ in degree one consists of all $ \varphi: G \longrightarrow M$ for which there exists $m\in M$ such that
$\varphi(g)=m\cdot g - \beta(m)$. These maps are called twisted principal crossed homomorphisms of $G$. Therefore the first cohomology is the quotient of the group of
twisted crossed homomorphisms by the group of twisted principal crossed homomorphisms.
} \end{example}
\begin{example}{\rm
We recall that for a Hom-group $G$ the Hom-group algebra $V=\mathbb{K}G$ is a $G$-bimodule by multiplication of $G$.
Therefore, by the examples of the previous section, $(\mathbb{K}G)^\ast$ is a dual $G$-bimodule.
Now we consider the Hom-group cohomology of $G$ with coefficients in the dual $G$-bimodule $(\mathbb{K}G)^\ast$.
We show that the coboundary map can be written differently in this case. One identifies $\varphi\in C^n(G,(\mathbb{K}G)^\ast)$ with \begin{equation*}\label{aux-identification} \phi:G^{\times\,n+1}\longrightarrow \mathbb{K},\qquad \phi(g_0, g_1, \cdots, g_n):=\varphi(g_1, \cdots, g_n)(g_0). \end{equation*}
As a result the coboundary map becomes \begin{align*} b:C^n(G,(\mathbb{K}G)^\ast)&\longrightarrow C^{n+1}(G,(\mathbb{K}G)^\ast),\\ b\phi(g_0, \cdots, g_{n+1})&=\phi(g_0g_1, \alpha(g_2) , \cdots , \alpha(g_{n+1}))\\ &\quad+\sum_{j=1}^n (-1)^j\phi(\alpha(g_0), \cdots, g_jg_{j+1}, \cdots , \alpha(g_{n+1}))\\ &\quad+(-1)^{n+1} \phi(g_{n+1}g_0, \alpha(g_1), \cdots, \alpha(g_n)). \end{align*} Also the cosimplicial structure is translated into \begin{align}\label{aux-cosimplisial-structure-phi} \begin{split}
&\delta_0\phi(g_0 , \cdots , g_n)= \phi(g_0g_1, \alpha(g_2) , \cdots, \alpha(g_{n}))\\
&\delta_i\phi(g_0 , \cdots , g_n)=\phi(\alpha(g_0) , \cdots , g_i g_{i+1} , \cdots , \alpha(g_{n})), ~~ 1\leq i \leq n-1\\
&\delta_{n}\phi(g_0 , \cdots , g_n)=\phi(g_{n}g_0, \alpha(g_1), \cdots ,\alpha(g_n)). \end{split} \end{align} } \end{example}
The following proposition shows the functoriality of Hom-group cohomology with certain coefficients. \begin{proposition}
Let $(G, \alpha_{G})$ and $(G', \alpha_{G'})$ be two Hom-groups. Then any morphism $f: G\longrightarrow G'$ of Hom-groups induces the map \begin{equation*} \widehat{f}: H_{Hom}^n(G', (\mathbb{K}G')^\ast)\longrightarrow H_{Hom}^n(G, ({\mathbb{K}G})^\ast) \end{equation*}
\end{proposition} \begin{proof}
We define $F: C^n(G',(\mathbb{K}G')^\ast)\longrightarrow C^n(G,(\mathbb{K}G)^\ast)$ by
$$F\varphi(g_0, \cdots, g_n)= \varphi (f(g_0), \cdots, f(g_n)).$$
The map $F$ commutes with all differentials $\delta_i$ in \eqref{aux-cosimplisial-structure-phi} because $f(\alpha_G(g))=\alpha_{G'}(f(g))$ and $ f(gk)=f(g) f(k)$. Here we only show that $F$ commutes with $\delta_0$ and we leave the other commutativity relations to the reader.
\begin{align*}
&\delta_0^GF\varphi (g_0, \cdots , g_n)\\
&= F\varphi(g_0g_1, \alpha_G(g_2), \cdots , \alpha_G(g_n))\\
&=\varphi(f(g_0g_1), f(\alpha_G(g_2)), \cdots , f(\alpha_G(g_n)))\\
&=\varphi(f(g_0) f(g_1), \alpha_{G'}(f(g_2)), \cdots, \alpha_{G'}(f(g_n)))\\
&=\delta_0^{G'}\varphi(f(g_0), \cdots , f(g_n))\\
& =F\delta_0^{G'}\varphi (g_0, \cdots , g_n).
\end{align*} \end{proof} One notes that even in the case of ordinary (associative) groups, $H^*(G, \mathbb{K}G)$ does not have a similar functoriality property. This reminds us that the coefficients $(\mathbb{K}G)^*$, as a dual $G$-module, play an important role in Hom-group cohomology.
\begin{example}{\rm \textbf{(Trace $0$-cocycles})
Let $(G, \alpha)$ be a Hom-group. Using the differentials in \eqref{aux-cosimplisial-structure-phi} we have \begin{equation*}
H_{Hom}^{0}(G, \mathbb{K}G^*)=\{ \varphi: G\longrightarrow k, \quad\quad \varphi(gh)=\varphi(hg)\}. \end{equation*} More precisely, $0$-cocycles of $G$ are trace maps on $G$. } \end{example}
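Indeed, with the identification above a $0$-cochain is just a map $\phi: G\longrightarrow \mathbb{K}$, and the two coface maps give $b\phi(g_0,g_1)=\phi(g_0g_1)-\phi(g_1g_0)$, so the $0$-cocycle condition is exactly the trace condition $\phi(gh)=\phi(hg)$.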
Here we aim to find out the relation between Hom-group cohomology of a Hom-group $G$ and the Hochschild cohomology of the Hom-group algebra $\mathbb{K}G$.
For this we recall the Hochschild cohomology of Hom-algebras introduced in \cite{hss}. First we need to recall the definition of dual modules for Hom-algebras from \cite{hss}.
Let $(\mathcal{A}, \alpha)$ be a Hom-algebra. A vector space $V$ is called a dual left $\mathcal{A}$-module if there are linear maps $\cdot: \mathcal{A}\otimes V\longrightarrow V$, and $\beta: V\longrightarrow V$ where
\begin{equation}
a\cdot (\alpha(b)\cdot v)=\beta((ab)\cdot v).
\end{equation} Similarly, $V$ is called a dual right $\mathcal{A}$-module if $(v\cdot \alpha(a))\cdot b= \beta(v\cdot (ab))$. Finally, we call $V$ a dual $\mathcal{A}$-bimodule if $\alpha(a)\cdot (v\cdot b)=(a\cdot v)\cdot \alpha(b).$ Let $(\mathcal{A}, \alpha)$ be a Hom-algebra, $(M, \beta)$ be a dual $\mathcal{A}$-bimodule and $C^n(\mathcal{A}, M)$ be the space of all $\mathbb{K}$-linear maps $\varphi: \mathcal{A}^{\otimes n}\longrightarrow M$. Then the authors in \cite{hss} showed that \begin{equation*} C^\ast(\mathcal{A}, M)=\bigoplus_{n\geq0} C^n(\mathcal{A}, M), \end{equation*} with the coface maps \begin{align}\label{aux-cosimplisial-structure-vp} \begin{split} &d_0\varphi(a_1\otimes\cdots\otimes a_{n+1})=a_1\cdot \varphi(\alpha(a_2)\otimes\cdots\otimes \alpha(a_{n+1}))\\ &d_i\varphi(a_1\otimes\cdots\otimes a_{n+1})=\beta(\varphi(\alpha(a_1)\otimes\cdots\otimes a_i a_{i+1}\otimes\cdots\otimes \alpha(a_{n+1}))), ~~ 1\leq i \leq n\\ &d_{n+1}\varphi(a_1\otimes\cdots\otimes a_{n+1})= \varphi(\alpha(a_1)\otimes\cdots\otimes \alpha(a_{n}))\cdot a_{n+1}.\\ \end{split} \end{align} is a cosimplicial module. The cohomology of the complex $(C^\ast(\mathcal{A}, M),b)$, where $b=\sum_{i=0}^{n+1}(-1)^i d_i$, is the Hochschild cohomology of the Hom-algebra $\mathcal{A}$ with coefficients in $M$, and is denoted by $H^\ast(\mathcal{A}, M)$.
The following theorem shows that if the dual left $G$-module $M$ satisfies the extra condition $\alpha(a)\cdot \beta(m)= \beta(a\cdot m)$ for $a\in \mathbb{K}G$ and $m\in M$, then the Hom-group cohomology of $G$ with coefficients in $M$ reduces to the Hochschild cohomology of the Hom-group algebra $\mathbb{K}G$ with coefficients in the dual $\mathbb{K}G$-bimodule $\widetilde{M}=M$, where the left action comes from the left action of $G$ and the right action is trivial. One notes that if $\alpha=\beta=\mathop{\rm Id}\nolimits$ then we obtain the corresponding well-known result in the associative case.
\begin{theorem}\label{homology-relations}
Let $(G, \alpha)$ be a Hom-group and $M$ a dual left $G$-module.
If $\alpha(a)\cdot \beta(m)= \beta(a\cdot m)$, then
$\widetilde{M}=M$ is a dual $\mathbb{K}G$-bimodule where the left action is coming from the
original left action of $G$ and the right action is the trivial action $m\cdot g := m\cdot 1 = \beta(m)$. Furthermore
$$H^*(G, M)\cong H^\ast(\mathbb{K}G,\widetilde{M}).$$
\end{theorem}
\begin{proof} The condition $\alpha(a)\cdot \beta(m)= \beta(a\cdot m)$ ensures that $\widetilde{M}$, with the given dual left action and the trivial right action $m\cdot g := m\cdot 1 = \beta(m)$, is a dual $G$-bimodule, and therefore a dual $\mathbb{K}G$-bimodule, because $$\alpha(g)\cdot (m\cdot k)= \alpha(g)\cdot \beta(m)= \beta(g\cdot m)=(g\cdot m)\cdot \alpha(k).$$ Now all coface maps $d_i$ of the Hochschild cohomology of $\mathbb{K}G$ coincide with the coface maps $\delta_i$ of the Hom-group cohomology of $G$. Therefore the identity map $\mathop{\rm Id}\nolimits: C^n_{Hom}(G, M)\longrightarrow C^n(\mathbb{K}G, \widetilde{M})$ (extending cochains linearly) induces an isomorphism on the level of complexes.
\end{proof}
One knows that if $G$ is an ordinary (associative) group and $\mathbb{K}G$ its group algebra, then any $\mathbb{K}G$-bimodule $M$ can be turned into a right (or left) $G$-module by the adjoint action. It is not clear how to carry out a similar construction for Hom-groups, especially because we do not know how to define the adjoint action for Hom-groups.
\section{Hom-group homology}
In this section we introduce a homology theory for Hom-groups. To do this we use modules, instead of dual modules, as the coefficients.
\begin{theorem}
Let $(G, \alpha)$ be a Hom-group and $(M, \beta)$ be a right
$G$-module satisfying
$$\beta(m\cdot g)= \beta(m)\cdot \alpha(g).$$
Let $C_n^{Hom}(G, M)= M\times G^{\times n}$. Then \begin{equation*} C_\ast^{Hom}(G, M)=\bigoplus_{n\geq0} C_n^{Hom}(G, M), \end{equation*} with the face maps \begin{align}\label{aux-simplicial-structure-right} \begin{split} &d_0(m, g_1, \cdots , g_{n})=(m\cdot g_1, \alpha(g_2), \cdots , \alpha(g_{n}))\\ &d_i(m, g_1 , \cdots ,g_{n})=(\beta(m), \alpha(g_1), \cdots , g_i g_{i+1}, \cdots , \alpha(g_{n})), ~~ 1\leq i \leq n-1\\ &d_{n}(m, g_1, \cdots , g_{n})= (\beta(m), \alpha(g_1), \cdots , \alpha(g_{n-1})),\\ \end{split} \end{align} is a simplicial module. \end{theorem} \begin{proof} First we show $d_0d_0=d_0d_1$.
\begin{align*}
d_0d_0(m, g_1, \cdots , g_{n})&=d_0(m\cdot g_1, \alpha(g_2), \cdots , \alpha(g_{n}))\\
&= ((m\cdot g_1)\cdot \alpha(g_2), \alpha^2(g_3), \cdots , \alpha^2(g_{n}))\\
&= (\beta(m)\cdot (g_1g_2), \alpha^2(g_3), \cdots , \alpha^2(g_{n}))\\
&= d_0(\beta(m), g_1g_2, \alpha(g_3), \cdots , \alpha(g_{n}))\\
&=d_0d_1(m, g_1, \cdots , g_{n}).
\end{align*}
We used the right module property in the third equality. Here we show $d_0 d_j=d_{j-1}d_0$ for $j> 1$.
\begin{align*}
d_0d_j(m, g_1,&\cdots, g_n)\\
&=d_0(\beta(m), \alpha(g_1), \cdots, g_j g_{j+1}, \cdots, \alpha(g_n)) \\
&=(\beta(m)\cdot\alpha(g_1), \alpha^2(g_2), \cdots, \alpha(g_j g_{j+1}), \cdots, \alpha^2(g_n)) \\
&=(\beta(m\cdot g_1), \alpha^2(g_2), \cdots, \alpha(g_j)\alpha(g_{j+1}), \cdots, \alpha^2(g_n)) \\
&=d_{j-1}(m\cdot g_1, \alpha(g_2), \cdots, \alpha(g_n)) \\
&=d_{j-1}d_0(m, g_1, \cdots, g_n).
\end{align*}
We used the condition $\beta(m\cdot g)= \beta(m)\cdot\alpha(g)$ in the third equality. Now we show $d_0d_n=d_{n-1}d_0$. We have,
\begin{align*}
d_0d_n(m, g_1&, \cdots, g_n)\\
&=d_0(\beta(m), \alpha(g_1), \cdots , \alpha(g_{n-1}))\\
&=(\beta(m)\cdot \alpha(g_1), \alpha^2(g_2), \cdots, \alpha^2(g_{n-1}))\\
&=(\beta(m\cdot g_1), \alpha^2(g_2), \cdots, \alpha^2(g_{n-1}))\\
&=d_{n-1}(m\cdot g_1, \alpha(g_2), \cdots, \alpha(g_n))\\
&=d_{n-1}d_0(m, g_1, \cdots, g_n).
\end{align*}
Now we show $d_i d_n=d_{n-1}d_i$.
\begin{align*}
d_i d_n(m, g_1&, \cdots ,g_n)\\
&=d_i (\beta(m), \alpha(g_1), \cdots, \alpha(g_{n-1}))\\
&= (\beta^2(m), \alpha^2(g_1), \cdots, \alpha(g_i)\alpha(g_{i+1}), \cdots , \alpha^2(g_{n-1}))\\
&= d_{n-1}(\beta(m), \alpha(g_1), \cdots, g_i g_{i+1}, \cdots, \alpha(g_n))\\
&=d_{n-1}d_i(m, g_1, \cdots ,g_n).
\end{align*}
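One of the remaining relations, $d_id_j=d_{j-1}d_i$, is checked the same way; for instance, for $1\leq i<j-1$ and $j\leq n-1$ (a case spelled out here for completeness) only the multiplicativity of $\alpha$ is needed:
\begin{align*}
d_id_j(m, g_1,&\cdots, g_n)\\
&=d_i(\beta(m), \alpha(g_1), \cdots, g_jg_{j+1}, \cdots, \alpha(g_n))\\
&=(\beta^2(m), \alpha^2(g_1), \cdots, \alpha(g_i)\alpha(g_{i+1}), \cdots, \alpha(g_jg_{j+1}), \cdots, \alpha^2(g_n))\\
&=(\beta^2(m), \alpha^2(g_1), \cdots, \alpha(g_ig_{i+1}), \cdots, \alpha(g_j)\alpha(g_{j+1}), \cdots, \alpha^2(g_n))\\
&=d_{j-1}(\beta(m), \alpha(g_1), \cdots, g_ig_{i+1}, \cdots, \alpha(g_n))\\
&=d_{j-1}d_i(m, g_1, \cdots, g_n).
\end{align*}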
The rest of the commutativity relations can also be verified. \end{proof}
We define the boundary map $b= \sum_{i=0}^{n} (-1)^i d_i$. By the previous Theorem we have $b^2=0$. The homology of the chain complex $$ \begin{CD} 0 @<b<< M= C^{Hom}_0(G, M) @<b<< C^{Hom}_1(G, M) @<b<< C^{Hom}_2(G, M) @<b<< C^{Hom}_3(G, M) \ldots \end{CD}\\ $$ is called the Hom-group homology of $G$ with coefficients in $M$. Similarly one has Hom-group homology with coefficients in left modules as follows.
\begin{proposition}
Let $(G, \alpha)$ be a Hom-group and $(M, \beta)$ be a left
$G$-module satisfying
$$\beta(g\cdot m)= \alpha(g)\cdot \beta(m) .$$
Let $C_n^{Hom}(G, M)=G^{\times n}\times M$. Then \begin{equation*} C_\ast^{Hom}(G, M)=\bigoplus_{n\geq0} C_n^{Hom}(G, M), \end{equation*} with the face maps \begin{align}\label{aux-simplicial-structure-left} \begin{split} &d_0( g_1, \cdots , g_{n}, m)=(\alpha( g_1), \alpha(g_2), \cdots ,\alpha( g_{n-1}), g_n\cdot m)\\ &d_i(g_1 , \cdots ,g_{n}, m)=( \alpha(g_1), \cdots , g_i g_{i+1}, \cdots , \alpha(g_{n}), \beta(m)), ~~ 1\leq i \leq n-1\\ &d_{n}(g_1, \cdots , g_{n}, m)= ( \alpha(g_2), \cdots , \alpha(g_{n}), \beta(m)).\\ \end{split} \end{align} is a simplicial module.
\end{proposition} \begin{proof}
Similar to the previous theorem. \end{proof} Here we state the relation between Hom-group homology with coefficients in left and right modules.
\begin{proposition}
Let $(G, \alpha)$ be a Hom-group and $M$ be a right $G$-module.
Then $\widetilde{M} =M$ with the left action $g\cdot m= m\cdot g^{-1}$, is a left $G$-module and
$$H_*(G, M)\cong H_*(G, \widetilde{M}).$$
\end{proposition}
\begin{proof}
The space $\widetilde{M}$ is a left module by the Lemma \ref{right to left-2}. We set $$F: C_n(G, \widetilde{M})\longrightarrow C_n(G, M),$$ given by $$F(g_1, \dots , g_n, m)=(m, g_n^{-1}, \cdots , g_1^{-1}).$$ Here we show $d_0^M F= F d_0^{\widetilde{M}}$. \begin{align*}
&d_0^M F( g_1, \cdots , g_{n}, m)\\
&=d_0^M(m, g_n^{-1}, \cdots , g_1^{-1})\\
&=(m\cdot g_n^{-1}, \alpha(g_{n-1}^{-1}), \cdots , \alpha(g_1^{-1}))\\
&=(m\cdot g_n^{-1}, \alpha(g_{n-1})^{-1}, \cdots , \alpha(g_1)^{-1})\\
&=F( \alpha(g_1), \cdots , \alpha(g_{n-1}), m\cdot g_n^{-1})\\
&=F( \alpha(g_1), \cdots , \alpha(g_{n-1}), g_n\cdot m)\\
&=F d_0^{\widetilde{M}}( g_1, \cdots , g_{n}, m). \end{align*} Now we prove
$d_n^M F= F d_n^{\widetilde{M}}$. \begin{align*}
&d_n^M F( g_1, \cdots , g_{n}, m)\\
&= d_n^M(m, g_n^{-1}, \cdots , g_1^{-1})\\
&= (\beta(m), \alpha(g_n^{-1}), \cdots , \alpha(g_2^{-1}))\\
&=(\beta(m), \alpha(g_n)^{-1}, \cdots , \alpha(g_2)^{-1})\\
&=F(\alpha( g_2), \cdots , \alpha(g_{n}), \beta(m))\\
&= F d_n^{\widetilde{M}}( g_1, \cdots , g_{n}, m). \end{align*}
\end{proof}
Here we show that the Hom-group homology of a Hom-group with coefficients in a right (left) module
reduces to the Hochschild homology of the Hom-group algebra with coefficients in a certain bimodule.
To do this we remind that the authors in \cite{hss} introduce Hochschild homology of a Hom-algebra $A$ as follows. Let $(A,\mu, \alpha)$ be a Hom-algebra, and $(V, \beta)$ be an $A$-bimodule \cite{hss} such that $$\beta(v\cdot a) = \beta(v)\cdot \alpha(a) \quad \hbox{and} \quad \beta(a\cdot v)=\alpha(a)\cdot \beta(v).$$ Then \begin{equation*} C^{Hom}_\ast(A, V)=\bigoplus_{n\geq 0}C^{Hom}_n(A, V),\qquad C^{Hom}_n(A, V):=V\otimes A^{\otimes n}, \end{equation*} with the face maps \begin{align*} &\delta_0(v\otimes a_1\otimes \cdots \otimes a_{n})= v \cdot a_1 \otimes \alpha(a_2)\otimes \cdots \otimes \alpha(a_{n})\\ &\delta_i(v\otimes a_1\otimes \cdots \otimes a_{n})=\beta(v)\otimes \alpha(a_1) \cdots \otimes a_i a_{i+1}\otimes \cdots\otimes \alpha(a_{n}), ~~ 1\leq i \leq n-1\\ &\delta_{n}(v\otimes a_1\otimes \cdots \otimes a_n)= a_{n} \cdot v \otimes \alpha(a_1)\otimes \cdots\otimes \alpha(a_{n-1}),
\end{align*} is a simplicial module. Similar to the cohomology case we have the following result.
\begin{theorem}
Let $(G, \alpha)$ be a Hom-group and $M$ be a right $G$-module satisfying $\beta(m\cdot g)= \beta(m) \cdot \alpha(g) .$
Let $\widetilde{M}=M$ be a left module with the trivial left action $ g \cdot m:= m\cdot 1 = \beta(m)$. Then $\widetilde{M}$ will be a $\mathbb{K}G$-bimodule and furthermore
$$H_*(G, M)\cong H_\ast(\mathbb{K}G, \widetilde{M}).$$
\end{theorem} \begin{proof} The condition $\beta(m\cdot g)= \beta(m) \cdot \alpha(g) $ ensures that $\widetilde{M}$ with the given right action and the trivial left action $g \cdot m:= m\cdot 1 = \beta(m)$ is a $G$-bimodule, and therefore a $\mathbb{K}G$-bimodule, because
$$\alpha(g)\cdot (m\cdot k)= \beta (m\cdot k)= \beta(m) \cdot\alpha(k)=(g\cdot m)\cdot \alpha(k).$$ Hence all differentials $\delta_i$ of the Hochschild homology of $\mathbb{K}G$ will be the same as the ones, $d_i$, for the group homology of $G$. Therefore the identity map $\mathop{\rm Id}\nolimits: C_n(G, M)\longrightarrow C_n(\mathbb{K}G, \widetilde{M})$ induces an isomorphism on the level of complexes. \end{proof}
The conditions on $M$ in the previous theorem also appeared in other contexts, such as \cite{cg}, where the authors used the category of Hom-modules over Hom-algebras to obtain a monoidal category of modules over Hom-bialgebras.
The functoriality of Hom-group homology is shown as follows. \begin{proposition}
Let $(G, \alpha_{G})$ and $(G', \alpha'_{G'})$ be two Hom-groups. The morphism $f: G\longrightarrow G'$ of Hom-groups induces the map \begin{equation*} \widehat{f}: H^{Hom}_n(G, \mathbb{K}G)\longrightarrow H^{Hom}_n(G', \mathbb{K}G') \end{equation*}
given by $$(g_0, \cdots , g_n)\mapsto (f(g_0), \cdots, f(g_n)).$$ \end{proposition} \begin{proof}
The map $\widehat{f}$ commutes with all faces $\delta_i$ because $f(\alpha(g))=\alpha'(f(g))$ and $ f(gk)=f(g) f(k)$. \end{proof}
\end{document}
\begin{document}
\theoremstyle{plain}
\newtheorem{theorem}{Theorem}
\newtheorem{corollary}[theorem]{Corollary}
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{proposition}[theorem]{Proposition}
\theoremstyle{definition}
\newtheorem{definition}[theorem]{Definition}
\newtheorem{example}[theorem]{Example}
\newtheorem{conjecture}[theorem]{Conjecture}
\theoremstyle{remark}
\newtheorem{remark}[theorem]{Remark}
\begin{center}
\vskip 1cm{\LARGE\bf On the Density of Spoof Odd Perfect Numbers}
\vskip 1cm
\large
L\'aszl\'o T\'oth\\
Rue des Tanneurs 7 \\
L-6790 Grevenmacher \\
Grand Duchy of Luxembourg \\
\href{mailto:[email protected]}{\tt [email protected]}
\end{center}
\vskip .2 in
\begin{abstract}
We study the set $\mathcal{S}$ of odd positive integers $n$ with the property ${2n}/{\sigma(n)} - 1 = 1/x$, for positive integer $x$, i.e., the set that relates to odd perfect and odd ``spoof perfect'' numbers. As a consequence, we find that if $D=pq$ denotes a spoof odd perfect number other than Descartes' example, with pseudo-prime factor $p$, then $q>10^{12}$. Furthermore, we find irregularities in the ending digits of integers $n\in\mathcal{S}$ and study aspects of its density, leading us to conjecture that the number of elements of $\mathcal{S}$ below $k$ is $\sim10 \log(k)$.
\end{abstract}
\section{Introduction}
Let $\mathcal{S}$ denote the set of odd positive integers $n$ with the property
$$
\frac{2n}{\sigma(n)} - 1 = \frac1x,
$$
where $\sigma$ denotes the sum-of-divisors function and $x$ is a positive integer. These numbers are interesting for several reasons. For instance, if $x$ is prime, $nx$ is an odd perfect number. No such number is currently known, and the many restrictions an odd perfect number is known to satisfy (such as those enumerated by Voight \cite{Voight03} and Nielsen \cite{Nielsen07}) suggest that these numbers are either extremely rare or do not exist.
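Indeed, the defining relation can be rewritten as $2nx=\sigma(n)(x+1)$; if $x$ is a prime that, in addition, does not divide $n$ (a hypothesis we make here only for this quick check), then multiplicativity of $\sigma$ gives
$$
\sigma(nx)=\sigma(n)\sigma(x)=\sigma(n)(x+1)=2nx,
$$
so that $nx$ is indeed perfect.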
On the other hand, if $x$ is odd but not prime, the number $nx$ is an \textit{odd spoof perfect number}, i.e., an odd number that would be perfect if only $x$ were prime. These numbers, also referred to as \textit{Descartes numbers} after the discoverer of the only currently known member,
$$
D = 198585576189,
$$
for which we have $n=9018009$ and $x=22021$, have been subject to considerable research in light of their connection with odd perfect numbers. Despite this, not much is known about such numbers; for instance, it is not even known whether there are infinitely many positive integers with this property.
A few results however exist. Banks, G\"{u}lo\u{g}lu, Nevans and Saidak \cite{Banks06} showed in 2006 that the only cube-free odd spoof perfect number with fewer than seven distinct prime divisors is Descartes' example. Furthermore, they showed that cube-free spoof odd perfect numbers not divisible by 3 have over a million distinct prime divisors.
Then in 2014, Dittmer \cite{Dittmer13} gave a formal definition of spoof odd perfect numbers and showed that Descartes' example is the only spoof with fewer than seven distinct quasi-prime factors. In doing so, he defined a \textit{quasi-prime factorization} of a positive integer as a set of pairs $X=\{(x_1,\alpha_1),\ldots,(x_k,\alpha_k)\}$ such that
$$
n=\prod_{i=1}^{k} x_i^{\alpha_i},
$$
where $x_i\geq2$ for all $i$, but without the condition that the $x_i$ must be relatively prime. He then used this factorization to define the \textit{spoof $\sigma$-function}, $\tilde{\sigma}$, whose role is analogous to $\sigma$ but is related to $X$ instead of the prime factorization. Under such a factorization, we can say that $n$ is \textit{spoof perfect} if $\tilde{\sigma}(X)=2n$ (although the notation used by Dittmer \cite{Dittmer13} is slightly different).
We can thus also define ``\textit{spoof abundant}'' and ``\textit{spoof deficient}'' numbers as those having the property $\tilde{\sigma}(X) > 2n$ and $\tilde{\sigma}(X) < 2n$, respectively, under the corresponding quasi-prime factorization $X$.
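As a concrete illustration of these definitions (a verification included here for the reader's convenience), Descartes' example corresponds to the quasi-prime factorization $X=\{(3,2),(7,2),(11,2),(13,2),(22021,1)\}$ of $D=3^2\cdot 7^2\cdot 11^2\cdot 13^2\cdot 22021=9018009\cdot 22021$, where $22021=19^2\cdot 61$ is treated as if it were prime. Then
$$
\tilde{\sigma}(X)=\sigma(3^2)\,\sigma(7^2)\,\sigma(11^2)\,\sigma(13^2)\,(22021+1)=13\cdot 57\cdot 133\cdot 183\cdot 22022=397171152378=2D,
$$
so $D$ is spoof perfect.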
\subsection{Scope of this paper}
Instead of considering the number of quasi-prime factors in an odd spoof perfect number $D=pq$, like Dittmer did, we find a new lower bound for $q$ by examining the properties of the set $\mathcal{S}$. In particular, our results show that:
\begin{theorem}
Let $D$ denote an odd spoof perfect number such that $D=pq$, where $q\in\mathcal{S}$ and $p$ is the pseudo-prime factor, and suppose that $D\neq198585576189$. Then $q>10^{12}$.
\end{theorem}
In other words, the non-pseudo-prime component of a spoof odd perfect number (other than Descartes' example) is greater than $10^{12}$.
The OEIS sequence \seqnum{A222263} already contains the first 500 terms of $\mathcal{S}$. Using the results presented in this paper, we extend this sequence up to $10^{12}$ (one trillion), then use this dataset to examine various properties of $\mathcal{S}$.
In the remainder of this paper, let $\pi_{\mathcal{S}}(n)$ denote the number of elements in $\mathcal{S}$ up to and including $n$. We define the asymptotic density of the set $\mathcal{S}$ as $\displaystyle \lim \inf \frac{\pi_{\mathcal{S}}(n)}{n}$ and its \textit{Schnirelmann density} as the greatest lower bound of $\displaystyle \frac{\pi_{\mathcal{S}}(n)}{n}$. We examine these densities in the sections below and submit a few conjectures on their values.
\section{Computational methods}
The computations were performed on a 6-core Intel i7-7800X processor @ 3.50GHz and took approximately $8$ months to complete. Our algorithm checked, for each odd $n$ up to $n=10^{12}$, whether $\displaystyle \frac{2n}{\sigma(n)} - 1 = \frac1x$ for some positive integer $x$. An outline of the algorithm is shown below.
\begin{algorithm}[h!]
\caption{Finding positive integers $n\in\mathcal{S}$ smaller than $k$}\label{algo}
\begin{algorithmic}[0]
\State \textbf{Input:} $k$
\State \textbf{Output:} $n\in\mathcal{S}$ smaller than $k$
\State $results \gets \{\emptyset\}$
\For{\textbf{all odd} $i \leq k$}
\State $m \gets $ \texttt{DivisorSigma}$[1,i] / (2i)$
\State $num \gets $ \texttt{Numerator}$[m]$
\State $den \gets $ \texttt{Denominator}$[m]$
\State $diff \gets den - num$
\If{$diff=1$}
\State $results \gets results \cup \{i\}$
\If{$num\equiv 0 \ (\rm{mod}\ 2)$}
\State \textbf{print} 'Even spoof perfect number found: $i$'
\Else{}
\State \textbf{print} 'Odd spoof perfect number found: $i$'
\EndIf
\EndIf
\EndFor
\State \Return $results$
\end{algorithmic}
\end{algorithm}
This algorithm was executed using Wolfram Mathematica 11.1 and used the built-in \texttt{DivisorSigma[]} function to compute the sum of divisors for each candidate.
Note that this algorithm is naive as it makes no assumptions on the admissibility of a candidate before processing it. However, in the absence of a sufficiently developed theoretical framework there are no practical alternatives to this, to the best of this author's knowledge.
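For readers who wish to reproduce or extend the search, the following is a minimal Python rendering of the same naive procedure (an illustrative sketch with names of our own choosing; the computations reported below were carried out with the Mathematica implementation outlined above):
\begin{verbatim}
from fractions import Fraction

def sigma(n):
    # sum of divisors of n, by naive trial division
    total, d = 0, 1
    while d * d <= n:
        if n % d == 0:
            total += d
            if d != n // d:
                total += n // d
        d += 1
    return total

def members_of_S(bound):
    # odd n <= bound with 2n/sigma(n) - 1 = 1/x for a positive integer x
    found = []
    for n in range(1, bound + 1, 2):
        m = Fraction(sigma(n), 2 * n)         # sigma(n)/(2n) in lowest terms
        if m.denominator - m.numerator == 1:  # the same "diff = 1" test as above
            found.append((n, m.numerator))    # m.numerator is the corresponding x
    return found

# members_of_S(1000) returns the pairs (n, x) for
# n = 1, 3, 15, 135, 315, 585, 819 -- seven values, matching pi_S(10^3) = 7
# in the table of the Results section.
\end{verbatim}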
\section{Results}
Our computations revealed no odd spoof perfect number $pq$ other than Descartes' example up to $q=10^{12}$. On the other hand, we found many more even spoof perfect numbers than listed in the OEIS sequence \seqnum{A222263}. Note that these correspond to an even pseudo-prime factor $p$. Table \ref{table-res1} shows the distribution of these numbers within intervals of size $10^k$, for $1 \leq k \leq 12$.
\begin{table}[h!] \caption{Distribution of integers $n\in\mathcal{S}$ in intervals of size $10^k$, $1 \leq k \leq 12$} \label{table-res1}
\centering
\begin{tabular}{|l|l|l|}\hline
$k$ & $\pi_{\mathcal{S}}(10^k)$ & $\pi_{\mathcal{S}}(10^k)-\pi_{\mathcal{S}}(10^{k-1})$ \\ \hline
$1$ & $2$ & $2$ \\ \hline
$2$ & $3$ & $1$ \\ \hline
$3$ & $7$ & $4$ \\ \hline
$4$ & $15$ & $8$ \\ \hline
$5$ & $28$ & $13$ \\ \hline
$6$ & $48$ & $20$ \\ \hline
$7$ & $81$ & $33$ \\ \hline
$8$ & $143$ & $62$ \\ \hline
$9$ & $227$ & $84$ \\ \hline
${10}$ &$319$ & $92$ \\ \hline
${11}$ &$459$ & $140$ \\ \hline
${12}$ &$692$ & $233$ \\ \hline
\end{tabular}
\end{table}
Before examining the density of $\mathcal{S}$ through the lens of our result set, we first take a look at some of its characteristics. We begin with the congruence classes formed by the $n\in\mathcal{S}$, as shown in Table \ref{table-res2}. Our results seem to indicate that the $n$ are uniformly distributed into residue classes mod $8$.
\begin{table}[h!] \caption{Congruence of integers $n\in\mathcal{S}$ up to $10^{12}$} \label{table-res2}
\centering
\begin{tabular}{|l|l|}\hline
Residue class & Number of integers $n\in\mathcal{S}$ in this residue class \\ \hline
$1$ mod $8$ & $149$ \\ \hline
$3$ mod $8$ & $183$ \\ \hline
$5$ mod $8$ & $175$ \\ \hline
$7$ mod $8$ & $185$ \\ \hline
\end{tabular}
\end{table}
In hopes of finding an irregularity among the members of our data set, we examined the distribution of their ending digits. And indeed, our results show a strong bias in favour of numbers ending in $5$ over those ending in other digits. The distribution of ending digits is shown in Table \ref{table-res3}.
\begin{table}[h!] \caption{Distribution of ending digits of integers $n\in\mathcal{S}$ up to $10^{12}$} \label{table-res3}
\centering
\begin{tabular}{|l|l|}\hline
$l$ & Number of $n\in\mathcal{S}$ with $l$ as last digit \\ \hline
$1$ & $51$ \\ \hline
$3$ & $46$ \\ \hline
$5$ & $492$ \\ \hline
$7$ & $54$ \\ \hline
$9$ & $49$ \\ \hline
\end{tabular}
\end{table}
\subsection{Density}
We pursue our analysis by studying the density of $\mathcal{S}$, i.e., the ratio $\displaystyle \frac{\pi_{\mathcal{S}}(n)}{n}$ for positive integer $n$. We have plotted this density at each $n\in\mathcal{S}$ on a log-log graph, shown in Figure \ref{fig-density} below.
\begin{figure}
\caption{Density of $\mathcal{S}$ up to $k=10^{12}$}
\label{fig-density}
\end{figure}
This suggests that the density follows a curve of the type
$$
A(k) = \frac{\alpha \log(k)}{k},
$$
with real $\alpha > 0$. We have found that a value around $\alpha=10$ provides a good fit to our experimental data, which we show in Figure \ref{fig-density2} below on a log-linear and log-log graph.
\begin{figure}
\caption{Density of $\mathcal{S}$ (blue) and $A(k)=10 \log(k)/k$ (orange) up to $k=10^{12}$}
\label{fig-density2}
\end{figure}
If this is true, it would naturally follow that:
\begin{conjecture}
$\pi_{\mathcal{S}}(n) \sim 10 \log(n)$.
\end{conjecture}
These results suggest several interesting properties of $\mathcal{S}$. On the one hand, we are tempted to conjecture that:
\begin{conjecture} \label{conj-density-zero}
The asymptotic density of $\mathcal{S}$ is $0$.
\end{conjecture}
On the other hand, even though $1\in\mathcal{S}$, our data suggests that the fraction $\displaystyle \frac{\pi_{\mathcal{S}}(k)}{k}$ tends to $0$ as $k\to\infty$, and so we also conjecture that:
\begin{conjecture}
The Schnirelmann density of the set $\mathcal{S}$ is $0$.
\end{conjecture}
\section{Conclusion and further work}
Nothing in our results suggests that another spoof perfect number exists, or even that there are an infinite number of positive integers in $\mathcal{S}$. On the other hand, we made a not entirely unreasonable conjecture that the density of $\mathcal{S}$ is $0$, providing further evidence of the scarcity of such numbers.
There are several ways to extend the results in this paper. First, it is easy to continue computations above $10^{12}$ with sufficient computing resources. Furthermore, one might be inspired to write a more clever algorithm by taking a look at the candidate integers before computing their sum-of-divisors function. This operation is expensive and could be avoided if we have additional information about such numbers. However, this is not currently the case, making it difficult to escape combing through odd integers up to a given number. Perhaps the considerable effort already expended on similar questions regarding odd perfect numbers could be adapted to spoof odd perfect numbers as well.
Classification: 11A51, 11N25, 11B83
\end{document}
\begin{document}
\title{Simultaneous cores with restrictions and a question of Zaleski and Zeilberger} \begin{abstract}
The main new result of this paper is to count the number of $(n,n+1)$-core partitions with odd parts, answering a question of Zaleski and Zeilberger \cite{ZZodd} whose bounty was a charitable contribution to the OEIS. Along the way, we prove a general theorem giving a recurrence for $(n, n+1)$-core partitions whose smallest part $\lambda_\ell$ and consecutive part differences $\lambda_i-\lambda_{i+1}$ lie in an arbitrary $M\subset \mathbb{N}$, which unifies many known results about $(n, n+1)$-core partitions with restrictions. We end with discussions of enrichments of this theorem that keep track of the largest part, number of parts, and size of the partition, and of a few cases where the same methods work on more general simultaneous cores.
\end{abstract}
The paper is laid out as follows. Section \ref{sec:intro} gives a quick background to the question and states our main theorems. Section \ref{sec:recursion} reviews the abacus construction, and uses it to prove Theorem \ref{thm: }, a general theorem giving recursions for simultaneous $(n,n+1)$-cores meeting certain restrictions. Section \ref{sec:odd} relates the count of simultaneous cores with odd parts to that of even parts, answering a question of Zeilberger and Zaleski. Section \ref{sec:enriched} briefly discusses enriching the recursion of the main theorem to take into account other partition statistics, and Section \ref{sec:ab} briefly touches on a few cases where the basic recursion can move beyond $(n, n+1)$-cores.
\section{Introduction} \label{sec:intro}
A \emph{partition of $n$} is a nonincreasing sequence $\lambda_1\geq \lambda_2\geq\cdots\geq \lambda_{k}> 0$ of positive integers such that $\sum \lambda_i=n$. We call $n$ the $\emph{size}$ of $\lambda$ and denote it by $|\lambda|$; we call $k$ the \emph{length} of $\lambda$ and denote it by $\ell(\lambda)$.
We frequently identify $\lambda$ with its Young diagram; there are many conventions for this. We draw $\lambda$ in the first quadrant, with the parts of $\lambda$ as the columns of a collection of boxes. \begin{definition} The \emph{arm} $a(\square)$ of a cell $\square$ is the number of cells contained in $\lambda$ and above $\square$, and the \emph{leg} $l(\square)$ of a cell is the number of cells contained in $\lambda$ and to the right of $\square$. The \emph{hook length} $h(\square)$ of a cell is $a(\square)+l(\square)+1$. \end{definition} \begin{example} The cell $(2,1)$ of $\lambda=3+2+2+1$ is marked $s$; the cells in the leg and arm of $s$ are labeled $a$ and $l$, respectively. \begin{center} \begin{tikzpicture}[scale=.5] \draw[thin, gray] (0,0) grid (1,3); \draw[thin, gray] (1,0) grid (3,2); \draw[thick] (0,0)--(0,3)--(1,3)--(1,2)--(3,2)--(3,1)--(4,1)--(4,0)--cycle; \draw (1.5,.5) node{$s$}; \draw (1.5,1.5) node{$a$}; \draw (2.5,.5) node{$l$}; \draw (3.5,.5) node {$l$}; \draw (8.5,1.5) node[align=left] {$a(s)=\# a=1$ \\ $l(s)=\# l=2$ \\ $h(s)=4$}; \end{tikzpicture} \end{center} \end{example} \begin{definition} An $a$-\emph{core} is a partition that has no hook lengths of size $a$. An $(a,b)$-\emph{core} is a partition that is simultaneously an $a$-core and a $b$-core. We use $\mathcal{C}_{a,b}$ to denote the set of $(a,b)$-cores. \end{definition} \begin{example} We have labeled each cell $\square$ of $\lambda=3+2+2+1$ with its hook length $h(\square)$. \begin{center} \begin{tikzpicture}[scale=.5] \draw[thin, gray] (0,0) grid (1,3); \draw[thin, gray] (1,0) grid (3,2); \draw[thick] (0,0)--(0,3)--(1,3)--(1,2)--(3,2)--(3,1)--(4,1)--(4,0)--cycle; \draw (.5,.5) node{$6$}; \draw (1.5,.5) node{$4$}; \draw (2.5,.5) node{$3$}; \draw (3.5,.5) node{$1$}; \draw (.5,1.5) node {$4$}; \draw (1.5,1.5) node{$2$}; \draw (2.5,1.5) node{$1$}; \draw (.5,2.5) node{$1$}; \end{tikzpicture} \end{center} We see that $\lambda$ is \emph{not} an $a$-core for $a\in \{1,2,3,4,6\}$; but \emph{is} an $a$-core for all other $a$. \end{example}
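Since everything below is phrased in terms of hook lengths, here is a small Python sketch (an aside, not used in the proofs) that computes them in the convention above -- parts drawn as columns, arms above, legs to the right -- and reproduces the example:
\begin{verbatim}
def hooks(cols):
    # hook lengths of a partition given by its column heights (parts as columns)
    out = []
    for c, h in enumerate(cols):
        for r in range(h):
            arm = h - 1 - r                                  # cells above, same column
            leg = sum(1 for h2 in cols[c + 1:] if h2 > r)    # cells to the right, same row
            out.append(arm + leg + 1)
    return out

def is_core(cols, a):
    return a not in hooks(cols)

print(sorted(hooks([3, 2, 2, 1])))
# [1, 1, 1, 2, 3, 4, 4, 6], matching the labelled diagram above
print([a for a in range(1, 8) if not is_core([3, 2, 2, 1], a)])
# [1, 2, 3, 4, 6], as in the example
\end{verbatim}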
The study of simultaneous core partitions began with the work of Anderson \cite{anderson}, who proved that when $a$ and $b$ are relatively prime, there are $\frac{1}{a+b}\binom{a+b}{a}$ of them. Her proof uses the abacus to get a bijection with rational Dyck paths. The study of simultaneous core partitions has exploded in recent years, largely due to the work of Armstrong, Hanusa and Jones \cite{AHJ} relating them to rational Catalan combinatorics.
One direction of this study, initiated by Amdeberhan \cite{Amdeberhan}, is to study partitions that are simultaneously core for more than two integers, and simultaneous core partitions that satisfy more classical conditions on partitions. In \cite{Amdeberhan}, Amdeberhan made many conjectures about counts of such simultaneous core partitions, which now all seem to be proven. Our first result is a general theorem that gives a unified recursion proving many of Amdeberhan's conjectures.
The two most important examples for us are the following. First, Amdeberhan conjectured that the number of $(n, n+1)$-core partitions with distinct parts was the $n+1$st Fibonacci number $F_{n+1}$; this was proven independently by Xiong \cite{XiongDistinct} and Straub \cite{Straub}.
Second, Amdeberhan conjectured a formula for the number $P_d(n)$ of $(n, n+1)$-core partitions all of whose parts are divisible by $d$, which was proven by Zhou \cite{ZhouRaney}. Specifically, if $n=qd+r$ with $0\leq r<d$, then $$P_d(n)=\frac{r+1}{n+1}\binom{n+q}{n},$$ which are known as Raney or Fuss--Catalan numbers ($P_d(n)=A_q(d-1, r-1)$ in the notation used on Wikipedia).
Our perspective is to unify both of these conditions (having distinct parts, and having all parts divisible by $d$) into the general family of conditions given by restricting the difference between consecutive parts of the partitions. Let $\mathbb{N}=\{0,1,2,\dots\}$ be the natural numbers, and let $\mathcal{P}$ denote the set of all partitions. Let $\ell(\lambda)$ be the number of parts, and define $\lambda_{\ell+1}=0$ by convention. We denote the empty partition by $0$.
\begin{definition} Let $M\subset\mathbb{N}$. We define
$$\mathcal{P}^M=\{\lambda\in\mathcal{P} : \lambda_i-\lambda_{i+1}\in M \text{ for } 1\leq i \leq\ell-1\}$$
$$\mathcal{Q}^M=\{\lambda\in\mathcal{P} : \lambda_i-\lambda_{i+1}\in M \text{ for } 1\leq i \leq \ell\}$$ That is, $\mathcal{P}^M$ is the set of partitions where the difference between consecutive parts is in $M$, and $\mathcal{Q}^M\subset\mathcal{P}^M$ asks that in addition that the smallest part $\lambda_\ell$ is in $M$. \end{definition}
From our perspective, $\mathcal{Q}^M$ is more natural, and we will focus on it. For most choices of $M$, $\mathcal{P}^M$ and $\mathcal{Q}^M$ are very unnatural sets of partitions; we highlight a few choices of $M$ that give more familiar classes of partitions.
\begin{example} Let $\mathbb{N}_+=\{1,2,\dots\}$. Then $\mathcal{P}^{\mathbb{N}_+}=\mathcal{Q}^{\mathbb{N}_+}$ is the set of partitions with distinct parts. \end{example}
\begin{example} For $k>1$ an integer, let $k\mathbb{N}=\{0,k,2k,\dots \}$. Then $\mathcal{P}^{k\mathbb{N}}$ is the set of partitions with all parts congruent modulo $k$, and $\mathcal{Q}^{k\mathbb{N}}$ is the set of partitions with all parts divisible by $k$.
In particular, $(\mathcal{P}^{2\mathbb{N}}\setminus \mathcal{Q}^{2\mathbb{N}})\cup \{0\}$ is the set of partitions with odd parts. \end{example} \begin{example} Combining the first two, taking $M=k\mathbb{N}_+=\{k,2k,3k,\dots\}$, then $\mathcal{Q}^{k\mathbb{N}_+}$ consists of partitions with distinct parts, all of which are divisible by $k$, and $\mathcal{P}^{k\mathbb{N}_+}$ consists of partitions with distinct parts, all of which are congruent mod $k$.
\begin{example} Let $\mathbb{N}+r=\{r,r+1,r+2,\dots\}$ denote the set of integers greater than or equal to $r$. Then $\mathcal{P}^{\mathbb{N}+r}$ is the set of partitions with difference between consecutive parts at least $r$, and $\mathcal{Q}^{\mathbb{N}+r}$ asks, in addition, that the smallest part is at least $r$. When $r=2$, these two classes of partitions take part in the Rogers-Ramanujan identities: \begin{align*}
\sum_{\lambda\in \mathcal{P}^{\mathbb{N}+2}} q^{|\lambda|}&=\prod_{m\geq 0} \frac{1}{(1-q^{5m+1})(1-q^{5m+4})} \\
\sum_{\lambda\in \mathcal{Q}^{\mathbb{N}+2}} q^{|\lambda|}&=\prod_{m\geq 0} \frac{1}{(1-q^{5m+2})(1-q^{5m+3})}
\end{align*}
\end{example}
\begin{example} \label{ex:rdistinct}
Let $[r]=\{0,1,\dots, r\}$. Then $\lambda\in\mathcal{Q}^{[r]}$ if and only if $\lambda^T$, the conjugate of $\lambda$, has no parts repeating more than $r$ times. These partitions take part in Glaisher's theorem:
$$\sum_{\lambda\in\mathcal{Q}^{[r]}}q^{|\lambda|}=\prod_{r+1\nmid m} \frac{1}{1-q^m}.$$ In the case $r=1$, this is Euler's theorem that the number of partitions of $n$ into distinct parts is equal to the number of partitions of $n$ into odd parts. \end{example}
Our general theorem is a recurrence for $|\mathcal{C}_{n,n+1}\cap \mathcal{Q}^M|$ for \emph{any} subset $M$. The recurrence has a different form depending on whether or not $0\in M$.
\begin{theorem}
For $M\subset \mathbb{N}$, let $C^M_n=|\mathcal{C}_{n,n+1}\cap \mathcal{Q}^M|$ be the count of simultaneous $(n, n+1)$-core partitions with smallest part and consecutive part differences in $M$. Then $C^M_0=1$ (the empty partition), and $C^M_n$ is determined from this by the recurrence:
\begin{enumerate}
\item If $0\in M$,
$$C^M_{n+1}=\sum_{\substack{k\in M\\k\leq n}}C^M_kC^M_{n-k}$$
\item If $0\notin M$,
$$C^M_{n+1}=1+\sum_{\substack{k\in M\\k\leq n}}C^M_{n-k}$$
\end{enumerate} \end{theorem}
We might term these the restricted Catalan and Fibonacci recurrences, respectively; when $M=\mathbb{N}$, the recurrence is the standard recurrence for the Catalan numbers; when $M=\mathbb{N}\setminus \{0\}$, the recurrence is equivalent to the familiar formula for the sum of the first $n$ Fibonacci numbers: $$F_{n+1}=1+\sum_{k=1}^{n-1} F_{n-k}$$ When $M$ is only a subset of these two sets, the recurrence only has those terms corresponding to elements of $M$.
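Before turning to generating functions, we note that the recurrences are easy to iterate numerically. The short Python sketch below (an illustration only, not used elsewhere in the paper) reproduces the Catalan, Fibonacci, Fuss--Catalan and Padovan-type specializations discussed in the corollaries that follow:
\begin{verbatim}
def restricted_counts(in_M, N):
    # C^M_0, ..., C^M_N from the recurrence of the Theorem;
    # in_M(k) should return True exactly when k belongs to M.
    C = [1]
    zero_in_M = in_M(0)
    for n in range(N):
        if zero_in_M:
            C.append(sum(C[k] * C[n - k] for k in range(n + 1) if in_M(k)))
        else:
            C.append(1 + sum(C[n - k] for k in range(1, n + 1) if in_M(k)))
    return C

print(restricted_counts(lambda k: True, 7))
# M = N:    1, 1, 2, 5, 14, 42, 132, 429   (Catalan)
print(restricted_counts(lambda k: k > 0, 7))
# M = N_+:  1, 1, 2, 3, 5, 8, 13, 21       (Fibonacci)
print(restricted_counts(lambda k: k % 2 == 0, 7))
# M = 2N:   1, 1, 1, 2, 3, 7, 12, 30       (Fuss-Catalan P_2)
print(restricted_counts(lambda k: k > 0 and k % 2 == 0, 9))
# M = 2N_+: 1, 1, 1, 2, 2, 3, 4, 5, 7, 9   (Padovan-type)
\end{verbatim}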
Both identities can be rephrased in terms of generating series, though if $0\in M$ the statement requires the Hadamard product: recall that if $f=\sum_{n\geq 0} a_nq^n$ and $g=\sum_{n\geq 0} b_nq^n$ are formal power series, their Hadamard product is $(f\star g)(q)=\sum_{n\geq 0} a_nb_nq^n$. \begin{corollary} For $M \subset \mathbb{N}$, let $$\chi_M(q)=\sum_{m\in M} q^m,\qquad
C^M(q)=\sum_{n\geq 0} C^M_nq^n.$$
Then if $0\in M$, we have
$$C^M(q)=1+q\,(C^M(q)\star \chi_M(q))\,C^M(q). $$
If $0\notin M$, then
$$C^M(q)=\frac{1}{1-q}+q\chi_M(q)C^M(q)$$
and hence
$$C^M(q)=\frac{1}{1-q}\cdot\frac{1}{1-q\chi_M(q)}.$$ \end{corollary}
Before we give the proof, we illustrate some examples of this theorem.
\begin{corollary}Let $D(n)$ denote the number of $(n, n+1)$-cores with distinct parts; then $D(n)=|\mathcal{C}_{n,n+1}\cap \mathcal{Q}^{\mathbb{N}\setminus \{0\}}|$, and so we have
$$D(n+1)=1+\sum_{k=1}^n D(n-k)$$
In particular, for $n=0$ the sum on the right hand side is empty, and so $D(1)=1$, and otherwise, removing the first term of the sum and reindexing gives
$$D(n+1)=1+\sum_{k=1}^n D(n-k)=D(n-1)+[1+\sum_{k=1}^{n-1} D(n-1-k)]=D(n-1)+D(n)$$
and so $D(n)=F(n)$, the Fibonacci numbers.
\end{corollary}
\begin{corollary}Let $D_r(n)$ denote the number of $(n,n+1)$-cores with no part occurring more than $r$ times; by Example \ref{ex:rdistinct} we have $D_r(n)=|\mathcal{C}_{n,n+1}\cap \mathcal{Q}^{[r]}|$, and hence $$D_{r}(n+1)=\sum_{k=0}^r D_r(k)D_r(n-k)$$ For $n\leq r$, this is the usual recurrence for Catalan numbers, and so in this range we have $D_r(n)=C_n$. If we take $r=1$, this directly recovers the standard Fibonacci recurrence for simultaneous cores with distinct parts, since $D_1(0)=D_1(1)=1$. In general, since the recursion does not differ from the standard Catalan recursion until $n>r$, we have $D_r(n)=C_n$ for $n\leq r$, and hence in general $D_r(n+1)=\sum_{k=0}^r C_k D_r(n-k)$. Taking $r=2$, we have $$D_2(n+1)=D_2(n)+D_2(n-1)+2D_2(n-2)$$
\end{corollary}
\begin{corollary}Let $P_d(n)$ denote the number of $(n, n+1)$-core partitions with all parts divisible by $d$; this is the case $M=\{0,d,2d,3d,\dots\}$. The recurrence becomes: $$P_d(n+1)=\sum_{k=0}^{\lfloor n/d\rfloor} P_d(dk)P_d(n-dk)$$
which is Riordan's recurrence for the Fuss-Catalan numbers. In particular, in this case the Theorem reproves Amdeberhan's conjecture that, if $n=qd+r$ with $0\leq r<d$, then
$$P_d(n)=\frac{r+1}{n+1}\binom{n+q}{q}$$ \end{corollary}
\begin{corollary} Let $D_d(n)$ denote the number of $(n, n+1)$-core partitions with distinct parts, all divisible by $d$; this is the case $M=\{d,2d,3d,\dots\}$. Let $D_d(q)$ be the corresponding generating function. We have $\chi_M(q)=q^d/(1-q^d)$, and a little algebra gives
$$D_d(q)=\frac{1+q+q^2+\cdots +q^{d-1}}{1-q^d-q^{d+1}}$$
and hence $D_d(n)$ is the series determined by the recurrence $D_d(n)=D_d(n-d)+D_d(n-d-1)$ and the initial values $D_d(0)=D_d(1)=\cdots=D_d(d-1)=1$; when $d=2$ this is $1,1,1,2,2,3,4,5,7,9,12,16,\dots$ which up to reindexing is \href{https://oeis.org/A000931}{A000931} the Padovan numbers.
\end{corollary}
Finally, let $O(n)$ and $E(n)$ be the number of $(n,n+1)$-cores with all parts odd or even, respectively. Then $E(n)=P_2(n)$ is already determined by the previous recurrence, and analysis of the abacus allows us to derive the formula $$E(n+2)=2O(n)-O(n-2)$$ and thus determine $O(n)$; our derivation of this last expression is, at present, not as elegant as the expression seems to merit.
\section{The abacus construction} In this section we briefly recall the abacus construction for $n$-core partitions.
Let $w(\lambda)=a_1a_2a_3a_4\cdots$ be the boundary path of $\lambda$, with each $a_i$ being either $E$ or $S$, normalized so that $a_1$ is the first $E$ step, and padded out with trailing $E$s if necessary. Then $\lambda$ is an $n$-core if and only if $a_k=E$ implies that $a_{k+n}=E$.
The general idea behind the $n$-abacus construction is to split the $a_i$ among $n$ rows periodically. If we write $i=kn+r$ with $0\leq k, 1\leq r\leq n$, then we will put a black circle at the $r$th row and $k$th column if $a_i=E$, and a white circle in that spot if $a_i=S$. Note that this means our rows start with 1 but our columns start with 0.
On the $n$-abacus, the condition for being an $n$-core translate to once a spot on a given row is black, every later spot on that row must also be black. Our choice of normalization means that for $n$-cores, the first row will always be filled entirely with black dots.
Thus, we can record the $n$-core as a function $f:\{0,1,\dots, n-1\}\to \mathbb{N}$ by letting $f(i)$ be the first column where the $i$th row has a black dot. Note that we always have $f(0)=0$, giving a bijection from $n$-cores to $\mathbb{N}^{n-1}$, but including this value will prove useful for describing $(n, n+1)$-cores.
The condition to see an $n+1$-core on the $n$ abacus is that if the circle on the $i$th row and $j$th column is black, with $i<n$, then the circle at the $i+1$st row and $j+1$st column is also black. In terms of the function $f$, this condition is $f(i+1)\leq f(i)+1$. In addition to this, if the bead on the $n$th row (last row) and $j$th column is filled, then so must be the bead on the 1st row and $j+2$ column. Since we only work with $n$-cores, the first row is always completely filled, and this condition is vacuous. This is a large part of what makes $(n, n+1)$-cores easier to handle than general $(a,b)$-cores.
Summarizing, we have shown the following: \begin{lemma}The set of $(n, n+1)$-core partitions are in bijection with functions $f:\{0,\dots, n-1\}\to \mathbb{N}$ satisfying: \begin{enumerate} \item $f(0)=0$ \item $f(k+1)\leq f(k)+1$ \end{enumerate} \end{lemma}
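This description is easily made algorithmic. The following Python sketch (an illustration, not needed for the proofs) enumerates these functions for small $n$, reconstructs the corresponding partition from the boundary word, and checks that the total count is Anderson's $\frac{1}{2n+1}\binom{2n+1}{n}$, i.e.\ the $n$th Catalan number:
\begin{verbatim}
from math import comb

def abacus_functions(n):
    # all f: {0,...,n-1} -> N with f(0) = 0 and f(k+1) <= f(k) + 1
    def extend(prefix):
        if len(prefix) == n:
            yield tuple(prefix)
            return
        for v in range(prefix[-1] + 2):
            yield from extend(prefix + [v])
    yield from extend([0])

def partition_of(f):
    # the (n, n+1)-core encoded by f: position i = kn + r of the boundary
    # word (row r, column k, both 0-indexed here) is an E step iff k >= f(r);
    # each E step contributes a part equal to the number of later S steps.
    n, cols = len(f), max(f) + 1
    word = ['E' if k >= f[r] else 'S' for k in range(cols) for r in range(n)]
    s_left, parts = word.count('S'), []
    for step in word:
        if step == 'S':
            s_left -= 1
        elif s_left > 0:
            parts.append(s_left)
    return parts

for n in range(1, 7):
    cores = [partition_of(f) for f in abacus_functions(n)]
    assert len(cores) == comb(2 * n + 1, n) // (2 * n + 1)   # Catalan number
    print(n, len(cores))
\end{verbatim}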
\section{Proof of the general theorem}
\subsection{Main ideas} Proving Theorem \ref{ } depends on two things: understanding how the standard Catalan number recurrence is proven in terms of the abacus functions, and understanding how to spot whether or not an $n$-core is in $\mathcal{Q}^M$ from its abacus.
The standard Catalan recurrence is often proven in terms of decomposing Dyck paths by the first return to the diagonal. There is a natural bijection from abacus functions to Dyck paths -- $f(k)$ measures how far beneath the line $y=x$ the Dyck path is at the $k$th step. Thus, we can prove the standard recurrence for Catalan numbers by decomposing the abacus functions by their first nontrivial 0.
\begin{tikzpicture}[scale=.8] \foreach \x in {0,1,...,3} \foreach \y in {0,1,...,7} \draw (\x,\y) circle (.4); \begin{scope}
\clip (-.5,7.5)--(-.5,6.5)--(.5,6.5)--(.5,5.5)--(1.5,5.5)--(1.5,4.5)--(-.5,4.5)--(-.5,1.5)--(.5,1.5)--(.5,-.5)--(3.5,-.5)--(3.5,7.5)--(-.5,7.5); \foreach \x in {0,1,...,3} \foreach \y in {0,1,...,7} \filldraw (\x,\y) circle (.4); \end{scope} \begin{scope}[shift={(-.5,8.5)}, yscale=-1] \foreach \i in {1,...,8}
\node at (0,\i) {\i}; \end{scope} \node at (.5,7.5) {9}; \node at (.5,6.5) {10}; \node at (.5,5.5) {11}; \begin{scope}[xshift=5cm]
\node at (0,7) {$f(0)=0$};
\node at (0,6) {$f(1)=1$};
\node at (0,5) {$f(2)=2$};
\node at (0,4) {$f(3)=0$};
\node at (0,3) {$f(4)=0$};
\node at (0,2) {$f(5)=0$};
\node at (0,1) {$f(6)=1$};
\node at (0,0) {$f(7)=1$};
\end{scope}
\begin{scope}[xshift=7cm]
\draw[very thin, gray] (0,0) grid (8,8);
\draw[very thin, dashed, gray] (0,0)--(8,8);
\draw[ultra thick](0,0)--(3,0)--(3,3)--(4,3)--(4,4)--(5,4)--(5,5)--(7,5)--(7,6)--(8,6)--(8,8);
\node at (-.2,.2) {0};
\node at (.8,1.2) {1};
\node at (1.8,2.2) {2};
\node at (2.8,3.2) {0};
\node at (3.8,4.2) {0};
\node at (4.8,5.2) {0};
\node at (5.8,6.2) {1};
\node at (6.8,7.2) {1};
\end{scope}
\begin{scope}[yshift=10cm] \draw[very thin, gray] (0,0) grid (7,6); \draw[ultra thick] (0,5)--(1,5)--(1,3)--(4,3)--(4,1)--(6,1)--(6,0); \node at (.5,5.2) {1}; \node at (1.2, 4.5) {2}; \node at (1.2, 3.5) {3}; \node at (1.5,3.2) {4}; \node at (2.5,3.2) {5}; \node at (3.5,3.2) {6}; \node at (4.2, 2.5) {7}; \node at (4.2, 1.5) {8}; \node at (4.5,1.2) {9}; \node at (5.5,1.2) {10}; \node at (6.2, .5) {11};
\end{scope}
\end{tikzpicture}
To recognize whether or not the partition corresponding to a given abacus is in $\mathcal{Q}^M$, we must discuss how to recognize consecutive part differences $\lambda_i-\lambda_{i+1}$ in terms of the abacus.
\begin{lemma} \label{lem:runbeads} An $n$-core partition $\lambda$ is in $\mathcal{Q}^M$ if and only if the length of every run of white beads in a column is in $M$.
\end{lemma} \begin{proof} Since the black circles are horizontal steps, and the white circles are vertical steps, the differences between consecutive parts (and the smallest part) are exactly the lengths of the runs of consecutive white beads, that is, the number of white beads between any two consecutive black beads on the abacus.
Since the first row is all black beads, none of these runs of beads will run across more than one column. \end{proof}
The proof of Theorem 1 simply combines the standard derivation of the Catalan recurrence with Lemma \ref{lem:runbeads}. \subsection{Proof of Theorem 1} Let $f$ be the abacus function of an $(n+1,n+2)$-core partition.
The proof is essentially contained in the diagram above; in words, it runs as follows.
First, we consider the case $0\notin M$. We argue that every bead in the second column is already black. Indeed, the top bead of the second column is black because $\lambda$ is an $n$-core, and the second bead of the second column is black because $\lambda$ is an $n+1$-core; but since $0\notin M$, if we ever have two consecutive black beads then all the beads from that point on must be black. So $\lambda$ is determined by its first column.
Now consider the first column. Although $0\notin M$, there is still one partition for which the second bead could also be black -- the empty partition. Otherwise, the second bead must occur after $k$ white beads, where $k\in M$. Removing the first $k+1$ rows, the remaining $n+1-(k+1)=n-k$ rows must be the abacus of an $(n-k, n-k+1)$-core, giving the recurrence.
This paragraph still holds in the case $0\in M$; now deleting the first row and first column from the first $k+1$ rows gives the abacus of a $(k, k+1)$-core.
Now, consider the first column; the top bead is black.
In particular, looking after the first column; the dot on runner 0 is black, and so there must be another black dot on at least one of runner $1,2...r+1$, or we would have $r+1$ consecutive white dots in a column.
Let $\ell>0$ be the first positive runner that has a black dot. Then, on the one hand we can read runners $\ell$ to $r-1$ as an $n-\ell$ abacus. On the other hand, looking at runners $0..\ell-1$, the first column is fixed as a black dot and $\ell-1$ white dots. Looking at the second column, the first two dots must be black. So, looking at the first
\section{Simultaneous Cores with all odd parts} We now turn to addressing simultaneous cores with all odd parts.
\begin{theorem}Let $E(n)$ (respectively $O(n)$) be the number of simultaneous $(n,n+1)$-cores with all parts even (respectively odd). Then
$$2O(n)-O(n-2)=E(n+2)$$
Since we can calculate $E(n)$, this determines $O(n)$. \end{theorem}
Our proof of this theorem seems needlessly complicated -- presumably there's a more direct proof.
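Before giving the proof, we note that the identity is easy to spot-check by brute force for small $n$; the following snippet (an illustration only) reuses the \texttt{abacus\_functions} and \texttt{partition\_of} helpers from the sketch in the abacus section above:
\begin{verbatim}
def count_parity(n, parity):
    # number of (n, n+1)-cores all of whose parts are even (parity = 0)
    # or odd (parity = 1); the empty partition is counted in both cases.
    return sum(1 for f in abacus_functions(n)
               if all(p % 2 == parity for p in partition_of(f)))

for n in range(3, 8):
    lhs = count_parity(n + 2, 0)                            # E(n+2)
    rhs = 2 * count_parity(n, 1) - count_parity(n - 2, 1)   # 2 O(n) - O(n-2)
    print(n, lhs, rhs)
# e.g. n = 3 gives E(5) = 7 = 2 O(3) - O(1) = 2*4 - 1.
\end{verbatim}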
\begin{proof}
If $\lambda$ has all odd parts, then $\lambda_{i}-\lambda_{i+1}$ is always even, except for $\lambda_{\ell}-\lambda_{\ell+1}=\lambda_\ell$, the smallest part.
In terms of the abacus diagram, any run of white dots has even length, except the last run has odd length. Thus, we see that $n$-core partitions with all parts odd seem very closely related to $n$-core partitions with all parts even, and one might hope to relate them by just adding or removing an extra white bead to the last run. If this run is in the first column, this works fine. However, if the last run is not in the first column, then adding an extra column here will make one of the earlier columns have an odd run of white beads.
\begin{lemma} Let $DE(n)$ and $DO(n)$ be the number of $(n,n+1)$-core partitions with distinct parts, all of which are even or odd, respectively. Then $DO(n)=DE(n+1)$. \end{lemma}
\begin{proof} We define a bijection $\varphi:DO(n)\to DE(n+1)$. First, $\varphi$ takes the empty partition to the empty partition. Since in both cases the partitions are required to have distinct parts, they are determined by the first column of their abacus.
Let $\lambda\in DO(n)$ be nonempty; then its $n$-abacus has at least one run of white beads. All runs of white beads in its $n$-abacus have an even length, except the last run has an odd length. We define the $n+1$-abacus of $\varphi(\lambda)$ by its first column -- it is the same as that of $\lambda$, except with one white bead added to its last run. The result clearly has distinct parts and all runs of white beads having even length, and hence is in $DE(n+1)$.
Clearly $\varphi$ is invertible, with $\varphi^{-1}$ removing one row from the $n+1$-abacus.
\end{proof}
There are two cases. The simple case is if all the white circles are in the first column of the $n$-abacus of $\lambda$ (that is, $f(k)\leq 1$). Given such a simultaneous core with all parts odd, we may simply add one white bead to the odd run of beads, adding an extra row to the abacus, and getting the abacus diagram of an $(n+1, n+2)$-core with all parts even. Similarly, given an $(n+1, n+2)$-core with all parts even, whose abacus is all in the first column, we may just remove a white dot from the last run of white dots and get an $(n, n+1)$-core with all parts odd.
Let $SE(n), SO(n)$ be the number of $(n, n+1)$-cores whose $n$-abacus has all white circles in the first column, and all parts even or odd, respectively. The previous paragraph shows $SO(n)=SE(n+1)$. On the other hand, we can derive a recursion for $SE(n)$ just as in our main theorem, by decomposing over the location of the second black circle in the first column. This gives
$$SE(n)=SE(n-1)+SE(n-3)+SE(n-5)+\cdots=SE(n-1)+SE(n-2)$$
by removing the first term of the sum and using the identity on the remaining terms. Note though that $SE(0)=SE(1)=SE(2)=1$, as the second equation only holds if $n\geq 3$. Thus, $SO(n-1)=SE(n)=F(n)$.
The complicated case is when the abacus diagram of $\lambda$ has white dots in at least two columns (that is, there is some $x$ with $f(x)\geq 2$). Denote by $CE(n)$ and $CO(n)$ the number of such $(n, n+1)$-cores with all parts even (or odd, respectively).
If $\lambda\in CO(n)$, we may not simply insert a single extra row into our abacus diagram with a white dot in the last column and get a partition in $CE(n+1)$. Although doing so would make the last run of white circles have even length, the runs of white circles in the previous columns would now have an \emph{odd} number of beads.
In terms of the function $f$, suppose the last run was in the $m$th column, from rows $\ell+1$ to $\ell+k$; i.e., $f(\ell)=m-1, f(\ell+1)=f(\ell+2)=\cdots=f(\ell+k)=m$, and $f(\ell+k+i)<m$ for $i\geq 1$. Then the description above attempts to define a new function $g$ by
$$g(i)=\begin{cases} f(i) & \mbox{if } i\leq \ell+k \\ m & \mbox{if } i=\ell+k+1 \\ f(i-1) & \mbox{if }i\geq \ell+k+2 \end{cases}$$ The final run of white circles in the abacus of $g$ does indeed have one more than that of $f$, but the run of beads in the $m-1$ column just before also has one more white bead in it; since this run was by assumption even before, it is now odd, a problem.
To fix this, we will insert \emph{two} new rows into the abacus diagram. One of these rows will add a white bead to the last run of white beads; the other will add a black bead immediately before or after this run, and have white beads in all previous columns. Thus, we've added one white bead to the very last run of whites, but to every other run of white beads we've either added no white beads or two white beads, and the result is indeed in $CE(n+2)$.
However, there are two different ways we could add two columns as above -- we can stick the black bead immediately \emph{before} the last run, or immediately \emph{after} the last run. In terms of the function $f$ defining the abacus, these two new functions $g_b$ and $g_a$ (for before and after) are
$$g_b(i)=\begin{cases} f(i) & \mbox{if } i\leq \ell \\
m-1 & \mbox{if } i=\ell+1 \\
m & \mbox{if } i=\ell+2 \\
f(i-2) & \mbox{if }i\geq \ell+3 \end{cases}$$
$$g_a(i)=\begin{cases} f(i) & \mbox{if } i\leq \ell+k \\
m & \mbox{if } i=\ell+k+1 \\
m-1 & \mbox{if } i=\ell+k+2 \\
f(i-2) & \mbox{if }i\geq \ell+k+3 \end{cases}$$ \begin{tikzpicture}[scale=.6] \foreach \x in {0,1,...,2} \foreach \y in {0,1,...,4} \draw (\x,\y) circle (.4); \begin{scope}
\clip (-.5,4.5)--(-.5,3.5)--(.5,3.5)--(.5,1.5)--(1.5,1.5)--(1.5,.5)--(.5,.5)--(.5,-.5)--(2.5,-.5)--(2.5,4.5)--(-.5,4.5); \foreach \x in {0,1,...,2} \foreach \y in {0,1,...,4} \filldraw (\x,\y) circle (.4); \end{scope} \node at (1,-1) {$f$};
\begin{scope}[xshift=4cm] \foreach \x in {0,1,...,2} \foreach \y in {0,1,...,6} \draw (\x,\y) circle (.4); \begin{scope}
\clip (-.5,6.5)--(-.5,5.5)--(.5,5.5)--(.5,2.5)--(1.5,2.5)--(1.5,.5)--(.5,.5)--(.5,-.5)--(2.5,-.5)--(2.5,6.5)--(-.5,6.5); \foreach \x in {0,1,...,2} \foreach \y in {0,1,...,6} \filldraw (\x,\y) circle (.4); \end{scope} \draw[thin, gray] (-.5,1.5)--(-.5,3.5)--(2.5,3.5)--(2.5,1.5)--cycle; \node at (1,-1) {$g_b$}; \node at (3.5,2.5) {\begin{tabular}{c}Added \\ rows\end{tabular}};
\end{scope}
\begin{scope}[xshift=10cm] \foreach \x in {0,1,...,2} \foreach \y in {0,1,...,6} \draw (\x,\y) circle (.4); \begin{scope}
\clip (-.5,6.5)--(-.5,5.5)--(.5,5.5)--(.5,3.5)--(1.5,3.5)--(1.5,1.5)--(.5,1.5)--(.5,-.5)--(2.5,-.5)--(2.5,6.5)--(-.5,6.5); \foreach \x in {0,1,...,2} \foreach \y in {0,1,...,6} \filldraw (\x,\y) circle (.4); \end{scope} \draw[thin, gray] (-.5,.5)--(-.5,2.5)--(2.5,2.5)--(2.5,.5)--cycle; \node at (1,-1) {$g_a$}; \node at (3.5,1.5) {\begin{tabular}{c}Added \\ rows\end{tabular}}; \end{scope}
\end{tikzpicture}
Thus, there are $2CO(n)$ abacus diagrams in $CE(n+2)$ obtained in this way. However, some of the abacus diagrams in $CE(n+2)$ are obtained twice in this way, while others aren't obtained at all.
An abacus diagram in $CE(n+2)$ is obtained twice in this way if both immediately before and immediately following this run there's an ``extra'' black bead in that column. More precisely, if the last run of white circles is in the $m$th column from $\ell+1$ to $\ell+k$, then we must have $f(\ell)=m-1$ and $f(\ell+1)=\cdots=f(\ell+k)=m$. We say that there is an extra black bead \emph{before} if in addition $f(\ell-1)=m-1$, and we say there is an extra black bead \emph{after} if in addition $f(\ell+k+1)=m-1$.
In case there is an extra black bead both before and after, we may remove both rows of the abacus containing these ``extra'' black beads and get an abacus diagram in $CE(n)$, and vice versa. Thus, we are double counting $CE(n)$ different elements.
\begin{tikzpicture}[scale=.6] \foreach \x in {0,1,...,2} \foreach \y in {0,1,...,6} \draw (\x,\y) circle (.4); \begin{scope}
\clip (-.5,6.5)--(-.5,5.5)--(.5,5.5)--(.5,3.5)--(1.5,3.5)--(1.5,1.5)--(.5,1.5)--(.5,-.5)--(2.5,-.5)--(2.5,6.5)--(-.5,6.5); \foreach \x in {0,1,...,2} \foreach \y in {0,1,...,6} \filldraw (\x,\y) circle (.4); \end{scope} \node at (1,-1) {\begin{tabular}{c} extra before \\ and after \end{tabular}}; \draw[thin, gray] (-.5,.5)--(2.5,.5)--(2.5,1.5)--(-.5,1.5)--cycle; \draw[thin, gray] (-.5,4.5)--(-.5,5.5)--(2.5,5.5)--(2.5,4.5)--cycle;
\begin{scope}[xshift=4cm]
\foreach \x in {0,1,...,2} \foreach \y in {0,1,...,4} \draw (\x,\y) circle (.4); \begin{scope}
\clip (-.5,4.5)--(-.5,3.5)--(.5,3.5)--(.5,2.5)--(1.5,2.5)--(1.5,1.5)--(.5,1.5)--(.5,-.5)--(2.5,-.5)--(2.5,4.5)--(-.5,4.5); \foreach \x in {0,1,...,2} \foreach \y in {0,1,...,4} \filldraw (\x,\y) circle (.4); \end{scope} \node at (1,-1) {\begin{tabular}{c}add two \\ before this\end{tabular}}; \end{scope}
\begin{scope}[xshift=8cm]
\foreach \x in {0,1,...,2} \foreach \y in {0,1,...,4} \draw (\x,\y) circle (.4); \begin{scope}
\clip (-.5,4.5)--(-.5,3.5)--(.5,3.5)--(.5,1.5)--(1.5,1.5)--(1.5,.5)--(.5,.5)--(.5,-.5)--(2.5,-.5)--(2.5,4.5)--(-.5,4.5); \foreach \x in {0,1,...,2} \foreach \y in {0,1,...,4} \filldraw (\x,\y) circle (.4); \end{scope} \node at (1,-1) {\begin{tabular}{c}add two \\ after this\end{tabular}}; \end{scope}
\begin{scope}[xshift=12cm]
\foreach \x in {0,1,...,2} \foreach \y in {0,1,...,4} \draw (\x,\y) circle (.4); \begin{scope}
\clip (-.5,4.5)--(-.5,3.5)--(.5,3.5)--(.5,2.5)--(1.5,2.5)--(1.5,.5)--(.5,.5)--(.5,-.5)--(2.5,-.5)--(2.5,4.5)--(-.5,4.5); \foreach \x in {0,1,...,2} \foreach \y in {0,1,...,4} \filldraw (\x,\y) circle (.4); \end{scope} \node at (1,-1) {\begin{tabular}{c}extra rows\\ removed\end{tabular}}; \end{scope}
\end{tikzpicture}
The diagrams in $CE(n+2)$ that aren't obtained at all are those that have an ``extra'' black bead neither before nor after the last run of white beads. In this case, we see that immediately above the black bead that's the upper bound of this run, we must have another run of white beads; otherwise, the second to last column would have an odd run of white beads.
More long-windedly in terms of the function $f$: since the run of white beads is the last one, we must have $f(\ell+k+1)\leq m-2$. What about $f(\ell-1)$? Since $f(\ell)=m-1$, we must have $f(\ell-1)\geq m-2$. Since the last row of beads is at height $m$, we must have $f(\ell-1)\leq m$. Since there is no extra bead in front of the run by assumption, we cannot have $f(\ell-1)=m-1$. Finally, we cannot have $f(\ell-1)=m-2$, as then we'd have an odd length run of beads in the $m-1$st column in rows $\ell$ to $\ell+k$. Therefore, we must have $f(\ell-1)=m$.
\begin{tikzpicture}[scale=.6]
\foreach \x in {0,1,...,2} \foreach \y in {0,1,...,11} \draw (\x,\y) circle (.4); \begin{scope}
\clip (-.5,11.5)--(-.5,10.5)--(.5,10.5)--(.5,9.5)--(1.5,9.5)--(1.5,5.5)--(.5,5.5)--(.5,4.5)--(1.5,4.5)--(1.5,.5)--(-.5,.5)--(-.5,-.5)--(2.5,-.5)--(2.5,11.5)--(-.5,11.5); \foreach \x in {0,1,...,2} \foreach \y in {0,1,...,11} \filldraw (\x,\y) circle (.4); \end{scope} \draw[thin, gray] (-.5,.5)--(-.5,6.5)--(2.5,6.5)--(2.5,.5)--cycle;
\end{tikzpicture}
Thus, in this case we can delete the whole final run of white beads (which has length $2k$ for some $k$), the black bead bounding it above, and the last white bead from the run before it (hence deleting $2k+2$ rows), and get a partition in $CO(n-2k)$. In terms of the function $f$, we are defining a new function $h$ by
$$h(i)=\begin{cases} f(i) & \mbox{if } i\leq \ell-2 \\
f(i+2k+2) & \mbox{if } i\geq \ell-1 \end{cases}$$ However, there's the added wrinkle that we don't get all partitions in $CO(n-2k)$ in this way, but only those with no ``extra'' black dots after the last run of white beads -- the function $h$ has its last odd length run ending at $h(\ell-2)=f(\ell-2)=m$, but $h(\ell-1)=f(\ell+2k+1)\leq m-2$.
But if we have an abacus diagram of a partition in $CO(n-2k)$ that \emph{does} have an ``extra'' black dot after the last run of white beads, we may just turn that extra black dot to white, obtaining an arbitrary diagram in $CE(n-2k)$. \begin{tikzpicture}[scale=.6] \foreach \x in {0,1,...,2} \foreach \y in {0,1,...,4} \draw (\x,\y) circle (.4); \begin{scope}
\clip (-.5,4.5)--(-.5,3.5)--(.5,3.5)--(.5,2.5)--(1.5,2.5)--(1.5,1.5)--(.5,1.5)--(.5,-.5)--(2.5,-.5)--(2.5,4.5)--(-.5,4.5); \foreach \x in {0,1,...,2} \foreach \y in {0,1,...,4} \filldraw (\x,\y) circle (.4); \end{scope} \node at (1,-1) {extra after};
\begin{scope}[xshift=8cm] \foreach \x in {0,1,...,2} \foreach \y in {0,1,...,4} \draw (\x,\y) circle (.4); \begin{scope}
\clip (-.5,4.5)--(-.5,3.5)--(.5,3.5)--(.5,2.5)--(1.5,2.5)--(1.5,.5)--(.5,.5)--(.5,-.5)--(2.5,-.5)--(2.5,4.5)--(-.5,4.5); \foreach \x in {0,1,...,2} \foreach \y in {0,1,...,4} \filldraw (\x,\y) circle (.4); \end{scope} \node at (1,-1) {turned 'extra' to white}; \end{scope}
\end{tikzpicture}
That is, we've seen the number of partitions in $CE(n+2)$ we've failed to count in $2CO(n)$ is $\sum (CO(n-2k)-CE(n-2k))$.
Putting it all together, we have:
$$CE(n+2)=2CO(n)-CE(n)+\sum_{k\geq 1} (CO(n-2k)-CE(n-2k))$$
or
$$CO(n)=\sum_{k\geq 0}\big( CE(n+2-2k)-CO(n-2k)\big)$$
Removing the first term of the sum, reindexing, and reapplying the equation, gives
\begin{align*}
CO(n)&=CE(n+2)-CO(n)+\sum_{k\geq 0}\big( CE(n-2+2-2k)-CO(n-2-2k)\big)\\
&=CE(n+2)-CO(n)+CO(n-2)
\end{align*}
and hence $CE(n+2)=2CO(n)-CO(n-2)$.
To get the corresponding identity for $O(n)$ and $E(n)$, we need to add back the one-column simultaneous cores counted by $SE(n)$ and $SO(n)$. But since $SO(n-1)=SE(n)=F(n)$, we have
$$SE(n+2)=F(n+2)=F(n+1)+F(n)=2F(n+1)-F(n-1)=2SO(n)-SO(n-2)$$
\end{proof} \section{Enriched recursions} \label{sec:enriched} It is natural to not just study the number of simultaneous cores, but various functions on them. We briefly summarize some results about how these play with our basic recursion.
The most obvious are the size $|\lambda|$, length (or number of parts) $\ell(\lambda)$, and largest part $\lambda_1$. In particular, we can enrich the generating function $C^M(q)$ to count the partitions $\lambda$ according to any or all of these variables. Armstrong, Hanusa and Jones \cite{AHJ} define $(q,t)$-rational Catalan numbers as a sum of this kind over simultaneous cores; Paramonov \cite{Paramonov} studies this sum over partitions with distinct parts.
The three basic statistics can all be read off easily from the abacus diagram: \begin{itemize} \item The length $\ell(\lambda)$ is the number of black beads coming before at least one white bead \item The largest part $\lambda_1$ is the number of white beads on the abacus
\item The size $|\lambda|$ is the number of inversions, that is, pairs $(a, b)$ where $a$ is a black bead, $b$ is a white bead, and $a$ comes before $b$
\end{itemize}
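As an illustration (ours, not part of the original argument), the following Python sketch computes the three statistics from a bead word encoded as a string over \texttt{B} (black) and \texttt{W} (white), read from the start of the abacus; the string encoding and the function name are assumptions made only for this example.
\begin{verbatim}
def abacus_stats(beads):
    """Return (length, largest part, size) of the partition encoded by a
    bead word, following the three rules in the list above.
    beads: a string over {'B', 'W'} read from the start of the abacus."""
    largest = beads.count('W')                 # number of white beads
    last_white = beads.rfind('W')
    # black beads that come before at least one white bead
    length = beads[:last_white].count('B') if last_white >= 0 else 0
    # inversions: pairs (a, b) with a black, b white, and a before b
    size = sum(beads[i + 1:].count('W')
               for i, bead in enumerate(beads) if bead == 'B')
    return length, largest, size

print(abacus_stats('BWWBW'))   # -> (2, 3, 4): length 2, largest part 3, size 4
\end{verbatim}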
When $0\notin M$, and the partition has distinct parts, all of these statistics play well with our basic recurrence.
Let $TL_M(q)$, $TP_M(q)$, and $TS_M(q)$ be the generating functions for the total largest part, the total length, and the total size, respectively. Then $$TL_M(q)=q^2\frac{d}{dq}\chi_M(q)\,F_M(q)\frac{1}{1-q\chi_M(q)},$$ $$TP_M(q)=q\chi_M(q)F_M(q)\frac{1}{1-q\chi_M(q)},$$ $$TS_M(q)=\left(q^2\frac{d}{dq}\chi_M(q)\,F_M(q)+q\chi_M(q)TL_M(q)\right)\frac{1}{1-q\chi_M(q)},$$ and hence $$TS_M(q)=TL_M(q)\frac{1}{1-q\chi_M(q)}.$$
$$TL=\frac{q^2\chi^\prime}{(1-q)(1-q\chi)^2}$$
$$TP=\frac{q\chi}{(1-q)(1-q\chi)^2}$$
$$TS=\frac{q^2\chi^\prime}{(1-q)(1-q\chi)^3}$$
\section{Beyond $(n, n+1)$-cores} \label{sec:ab}
\end{document}
\begin{document}
\newcommand{\ECM}{\em Departament d'Estructura i Constituents de la Mat\`eria
\\ Facultat de F\'\i sica, Universitat de Barcelona \\
Diagonal 647, E-08028 Barcelona, Spain \\
and \\
I. F. A. E.}
\renewcommand{\thefootnote}{\fnsymbol{footnote}} \pagestyle{empty} {
\parbox{6cm}{\begin{center} July 2000
\end{center}}}
\begin{center} \large{Triviality of GHZ operators of higher spin}
\vskip .6truein \centerline {J. Savinien, J. Taron{\footnote{e-mail: [email protected]}},
R. Tarrach} \end{center}
\begin{center} \ECM \end{center}
\centerline{\bf Abstract}
We prove that local observables of the set of GHZ operators for particles of spin higher than 1/2 reduce to direct sums of the spin 1/2 operators $\sigma_x$, $\sigma_y$ and, therefore, no new contradictions with local realism arise by considering them.
\pagestyle{plain}
\section{Introduction} The GHZ theorem \cite{ghz} provides a powerful test of quantum non-locality, which can be confirmed or refuted by the outcome of a single experiment \cite{mermin}. Formulated for three spin 1/2 particles \cite{mermin,peres}, the argument is based on the anti-commutative nature of the $2\times 2$ spin operators $\sigma_x$, $\sigma_y$. The values of the three mutually commuting observables \begin{equation} \sigma_x^a \otimes \sigma_y^b \otimes \sigma_y^c \equiv \sigma_x^a \sigma_y^b \sigma_y^c, \;\;\; \sigma_y^a \sigma_x^b \sigma_y^c, \;\;\; \sigma_y^a \sigma_y^b \sigma_x^c, \label{tres} \end{equation} and their product, $-\sigma_x^a \sigma_x^b \sigma_x^c$, cannot be obtained, consistently, by making
local assignments to each of the individual spin operators, $m_x^I, \; m_y^I=\pm 1$, $I=a,b,c$. This is not a contradiction of Quantum Mechanics: the state $|\psi\rangle=
\frac{1}{\sqrt{2}} \left( |\uparrow \uparrow \uparrow \rangle -
| \downarrow \downarrow \downarrow \rangle \right)$, for instance, is a common eigenstate of the three operators in (\ref{tres}) and of $\sigma_x^a \sigma_x^b \sigma_x^c$, with eigenvalues $\lambda_1=\lambda_2=\lambda_3=1$ and $\lambda_4=-1$, respectively. $|\psi \rangle$ is a highly correlated (entangled) state of the three parties which has no defined value for $\sigma_x^I, \sigma_y^I$.
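A quick numerical sanity check of these statements (ours, not part of the original paper) can be carried out with \texttt{numpy}; the basis convention $|\!\uparrow\rangle=(1,0)^T$, $|\!\downarrow\rangle=(0,1)^T$ is assumed.
\begin{verbatim}
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

O1, O2, O3 = kron3(sx, sy, sy), kron3(sy, sx, sy), kron3(sy, sy, sx)
XXX = kron3(sx, sx, sx)

# the three observables commute pairwise, and their product is -XXX
for A, B in [(O1, O2), (O1, O3), (O2, O3)]:
    assert np.allclose(A @ B, B @ A)
assert np.allclose(O1 @ O2 @ O3, -XXX)

# |psi> = (|uuu> - |ddd>)/sqrt(2) is a common eigenstate: eigenvalue +1
# for each of O1, O2, O3 and -1 for sigma_x sigma_x sigma_x
psi = np.zeros(8, dtype=complex)
psi[0], psi[7] = 1 / np.sqrt(2), -1 / np.sqrt(2)
for O, lam in [(O1, 1), (O2, 1), (O3, 1), (XXX, -1)]:
    assert np.allclose(O @ psi, lam * psi)
\end{verbatim}
A local assignment $m_x^I, m_y^I=\pm1$ would force the product of the three values to equal $m_x^a m_x^b m_x^c$, whereas the eigenvalues above give $1\cdot1\cdot1=1$ for the product and $-1$ for $\sigma_x^a\sigma_x^b\sigma_x^c$, which is the contradiction.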
In this note we address the question of how to generalize the argument to particles of higher spin and find that there are no non-trivial extensions other than direct sums of operators that can be brought into the form $\sigma_x$, $\sigma_y$ by means of local unitary transformations. (For odd-dimensional Hilbert spaces the direct sum is completed by a one-dimensional submatrix, i.e., a c-number on the diagonal.) We give a proof for the cases of spin 1 and 3/2. Similar problems have been addressed in \cite{adan}.
Let us look for observables $A$, $B$ such that $AB=\omega BA$ (their Hermiticity implies that $\omega$ is at most a phase): this is a necessary condition for the commutator relations $[A_1^a A_2^b A_3^c,B_1^a B_2^b B_3^c]= {\rm etc}...=0$ to hold. As we shall see, all interesting cases correspond to $\omega=-1$. Without loss of generality, $A$ can always be taken diagonal, $A={\rm diag}(\lambda_1,\lambda_2)$, for the simplest case $s=1/2$. The above condition reads \begin{equation} AB-\omega BA=\left( \begin{array}{cc} (1-\omega) \lambda_1 b_{11} & (\lambda_1-\omega \lambda_2) b_{12} \\ (\lambda_2-\omega \lambda_1) b_{12}^* & (1-\omega) \lambda_2 b_{22} \end{array} \right)=0. \label{matriu} \end{equation} If $\omega \neq 1$, a solution with non-vanishing off-diagonal elements is allowed if $\omega^2=1$, i.e., $\omega=-1$. This leads to \begin{equation} A=\left( \begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array}\right), \;\;\; B=\left( \begin{array}{cc} 0 & b \\ b^* & 0 \end{array} \right), \label{unmig} \end{equation} which can always be transformed to $\sigma_x$ and $\sigma_y$ by rotations and adequate normalization. These are the operators of the example (\ref{tres}). For spin 1/2 the set of GHZ operators is in this sense unique.
\section{Spin one} For higher spins the proof proceeds along the same lines. We find one case of interest, with $\omega=-1$, \begin{equation} A=\left( \begin{array}{ccc} 1 & &\\
&-1& \\
& & -1 \end{array}\right), \;\;\; B=\left( \begin{array}{ccc} 0 & b & c \\ b^* & 0 & 0 \\ c^* & 0 & 0 \end{array} \right). \label{un} \end{equation}
In the basis where $B$ is diagonal $A$ and $B$ read \begin{equation} A=-\left( \begin{array}{ccc} 1 & 0 &0\\
0 &0 &1 \\
0 & 1 & 0
\end{array}\right),\;\;\; B=\sqrt{|b|^2+|c|^2}\left( \begin{array}{ccc} 0 & & \\
& 1 & \\
& & -1 \end{array} \right) \label{adiag} \end{equation} which proves the assertion in the case of spin one, as a rotation around $x$ brings $B$ into the form $0 \oplus \sigma_y$, while $A$ is left as $1 \oplus \sigma_x$, up to normalizations.
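For concreteness, here is a small numerical check of this reduction (ours; the random choice of $b,c$ and the use of \texttt{numpy} are assumptions of the example):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
b, c = rng.normal(size=2) + 1j * rng.normal(size=2)

A = np.diag([1.0, -1.0, -1.0]).astype(complex)
B = np.array([[0, b, c], [np.conj(b), 0, 0], [np.conj(c), 0, 0]])

assert np.allclose(A @ B, -B @ A)        # A and B anticommute

r = np.sqrt(abs(b) ** 2 + abs(c) ** 2)
w, U = np.linalg.eigh(B)                 # eigenvalues come out as -r, 0, +r
assert np.allclose(w, [-r, 0.0, r])

# up to eigenvector phases, A becomes (minus) 1 + sigma_x in this basis;
# the absolute values of its entries show the block pattern
Aprime = U.conj().T @ A @ U
assert np.allclose(np.abs(Aprime), [[0, 0, 1], [0, 1, 0], [1, 0, 0]])
\end{verbatim}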
\section{Spin 3/2} For spin 3/2, in addition to cases that reduce straightforwardly to those of lower spins, we find: \begin{equation} A=\left( \begin{array}{cccc} 1 & & & \\
&-1& & \\
& &-1 & \\
& & &-1 \end{array} \right),\;\; B=\left( \begin{array}{cccc} 0 & a & b & c \\ a^* & 0 & 0 & 0 \\ b^* & 0 & 0 & 0 \\ c^* & 0 & 0 & 0 \end{array} \right). \label{quatre} \end{equation} In the basis where $B$ is diagonal $A$ and $B$ read \begin{equation} A=-\left( \begin{array}{cccc} 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array}
\right),\;\;\; B=\sqrt{|a|^2+|b|^2+|c|^2} \left( \begin{array}{cccc} 1 & & & \\
& -1 & & \\
& & 0 & \\
& & & 0 \end{array} \right) \label{cinc} \end{equation} which is again block diagonal, with two $2\times 2$ blocks.
The last case corresponds to \begin{equation} A=\left( \begin{array}{cccc} 1 & & & \\
&-1& & \\
& & 1 & \\
& & &-1 \end{array} \right),\;\; B=\left( \begin{array}{cccc} 0 & a & 0 & b \\ a^* & 0 & c^* & 0 \\ 0 & c & 0 & d \\ b^* & 0 & d^* & 0 \end{array} \right). \label{sis} \end{equation} The following sequence of unitary transformations brings these matrices to the desired form:
a) With \begin{equation} F=\left( \begin{array}{cccc} 1 &0 &0 &0 \\ 0 &0 &1 &0 \\ 0 &1 &0 &0 \\ 0 &0 &0 &1 \end{array} \right) \label{flip}, \end{equation} $F^\dagger=F=F^{-1}$, we find \begin{equation} A'=FAF= \left( \begin{array}{cc} I & \\
&-I \\ \end{array} \right),\;\; B'=FBF=\left( \begin{array}{cc}
& {\cal B} \\ {\cal B}^\dagger & \end{array} \right), \label{vuit} \end{equation} where $${\cal B}=\left( \begin{array}{cc} a & b \\ c & d \end{array} \right).$$
b) A unitary transformation of the form $U=\left( \begin{array}{cc} U_1& \\
& U_2 \end{array} \right)$ leaves $A'$ invariant and allows to diagonalize ${\cal B}$ \begin{equation} A''=A', \;\;\; B''=UB' U^\dagger=\left( \begin{array}{cc}
& U_1 {\cal B} U_2^\dagger \\ (U_1 {\cal B} U_2^\dagger)^\dagger \end{array} \right)=\left( \begin{array}{cccc}
& & m & 0 \\
& & 0 & n \\ m^* & 0 & & \\ 0 & n^* & & \end{array} \right). \label{nou} \end{equation} We have used the result that the generic matrix ${\cal B}$ can be brought to a diagonal form with two unitary matrices $U_1$, $U_2$.
c) Finally, acting with $F$ again, \begin{equation} A'''=A, \;\;\; B'''=\left( \begin{array}{cccc} 0 & m & & \\ m^* & 0 & & \\
& & 0 & n \\
& & n^* & 0 \end{array} \right), \end{equation} which completes the proof.
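The chain of transformations a)--c) can be checked numerically; the sketch below (ours, with randomly chosen entries) uses the singular value decomposition to supply the unitaries $U_1$, $U_2$ of step b).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
a, b, c, d = rng.normal(size=4) + 1j * rng.normal(size=4)

A = np.diag([1.0, -1.0, 1.0, -1.0]).astype(complex)
B = np.array([[0, a, 0, b],
              [np.conj(a), 0, np.conj(c), 0],
              [0, c, 0, d],
              [np.conj(b), 0, np.conj(d), 0]])
assert np.allclose(A @ B, -B @ A)

# step a): conjugate by the flip F, which swaps the 2nd and 3rd basis vectors
F = np.eye(4, dtype=complex)[[0, 2, 1, 3]]
A1, B1 = F @ A @ F, F @ B @ F
calB = B1[:2, 2:]                        # the 2x2 block "cal B"

# step b): an SVD gives U1, U2 with U1 calB U2^dagger diagonal, and the
# block-diagonal unitary diag(U1, U2) leaves A1 invariant
U1, s, V2 = np.linalg.svd(calB)
U = np.block([[U1.conj().T, np.zeros((2, 2))], [np.zeros((2, 2)), V2]])
A2, B2 = U @ A1 @ U.conj().T, U @ B1 @ U.conj().T
assert np.allclose(A2, A1) and np.allclose(B2[:2, 2:], np.diag(s))

# step c): flip back; A returns to its original form and B splits into
# two 2x2 off-diagonal blocks, as in the final display
A3, B3 = F @ A2 @ F, F @ B2 @ F
assert np.allclose(A3, A)
assert np.allclose(B3, [[0, s[0], 0, 0], [s[0], 0, 0, 0],
                        [0, 0, 0, s[1]], [0, 0, s[1], 0]])
\end{verbatim}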
\section{Conclusions} We conclude that the equation $AB=\omega BA$ is very restrictive on $\omega$ and on the possible forms of A and B; as the Hilbert space dimension increases, with increasing spin, all its solutions for $\omega \neq 1$ have $\omega=-1$ and are essentially direct sums of the two-dimensional $\sigma_x$ and $\sigma_y$. In this sense there are no solutions that could, in principle, enrich the possibilities opened by the GHZ theorem.
\vspace*{1cm} \section{Acknowledgments} J.S., J.T., R.T. acknowledge the Centro de Ciencias de Benasque for hospitality while this work was being done. J.S. also acknowledges the Department ECM for hospitality and financial support. J.T. and R.T. acknowledge financial support by CICYT project AEN 98-0431, CIRIT project 1998 SGR-00026 and CEC project IST-1999-11053.
\end{document}
\begin{document}
\begin{abstract} The R\'{e}nyi $\alpha$-entropy $H_{\alpha}$ of complete antisymmetric directed graphs (i.e., tournaments) is explored. We optimize $H_{\alpha}$ when $\alpha = 2$ and $3$, and find that as $\alpha$ increases $H_{\alpha}$'s sensitivity to what we refer to as `regularity' increases as well. A regular tournament on $n$ vertices is one with each vertex having out-degree $\frac{n-1}{2}$, but there is a lot of diversity in terms of structure among the regular tournaments; for example, a regular tournament may be such that each vertex's out-set induces a regular tournament (a doubly-regular tournament) or a transitive tournament (a rotational tournament). As $\alpha$ increases, on the set of regular tournaments, $H_{\alpha}$ has maximum value on doubly regular tournaments and minimum value on rotational tournaments. The more `regular', the higher the entropy. We show, however, that $H_2$ and $H_3$ are maximized, among all tournaments on any number of vertices by any regular tournament. We also provide a calculation that is equivalent to the von Neumann entropy, but may be applied to any directed or undirected graph and shows that the von Neumann entropy is a measure of how quickly a random walk on the graph or directed graph settles. \end{abstract}
\title{Entropy of Tournament Digraphs} \section{Introduction}
We present results about an entropy function applied to directed graphs, in particular to orientations of complete graphs --- also known as \emph{tournaments}. While there is a fair amount of recent research focusing on entropy applied to undirected graphs, there is not as much applied to directed graphs in spite of the fact that many real-world networks such as citation, communication, financial and neural are best modeled with directed graphs.
All graphs are finite and simple. The degree of vertex $v$ in an undirected graph $G$ will be denoted $\deg_G(v)$ (subscripts omitted if the context allows), we write $V(G)$ for the vertex set of graph $G$, $E(G)$ for the adjacency relation of $G$, and we write $xy \in E(G)$ to indicate that vertices $x,y \in V(G)$ are adjacent in $G$. If $G$ is a directed graph we will use $A(G)$ to denote the adjacency relation since we may refer to elements of $A(G)$ as \emph{arcs}, and write $x \to y \in A(G)$ or $x \to y$ in $G$ to denote the arc \emph{from} $x$ \emph{to} $y$ in $G$;
$x$ is the \emph{tail} of arc $x \to y$ and $y$ is the \emph{head}. For a directed graph $G$ and $x \in V(G)$, the set $N^+_G(x) = \{y \in V(G): x \to y \in A(G)\}$ is called the \emph{out-set} of $x$ in $G$ (subscript omitted in appropriate contexts). The nonnegative integer $|N^+_G(x)|$ is the \emph{out-degree} or \emph{score} of vertex $x$ in directed graph $G$ and will be denoted $d^+_G(v)$ (subscript omitted if context allows). We use $M(i,j)$ to denote entry $(i,j)$ of matrix $M$, $\mathrm{spec}(M)$ to denote the spectrum of $M$ (the multiset of eigenvalues of $M$), and $\mathrm{tr}(M)$ to denote the trace of $M$ ($\mathrm{tr}(M) = \sum_{i}M(i,i)$). Other notation defined as needed.
The entropy of an undirected graph has been defined in many ways, with many motivations, but the starting point for our investigation is the classical Shannon entropy that, with a sleight of hand, is applied to the spectrum of a matrix representing the graph's structure. Many other functions intended to represent the entropy of undirected graphs that are in contradistinction to those we explore are surveyed in \cite{dehmer2011history}. The Shannon entropy of a discrete probability distribution $\vec{p} = (p_1, \dots, p_n)$ is \begin{equation} S(\vec{p}) = \sum_{p_i \in \vec{p}}p_i \log_2\frac{1}{p_i}, \label{Shannon ent def} \end{equation} and $S(\vec{p})$ is intended to be a measure of the information content in messages transmitted over a channel in which bit $i$ occurs with probability $p_i$. In the field of quantum information theory the von Neumann entropy is used heavily; see \cite{nielsen2010quantum} and of course \cite{von2018mathematical}. The von Neumann entropy of a quantum state of a physical system is defined in terms of the eigenvalues of the \emph{density matrix} associated to the physical system. The density matrix is Hermitian, positive semi-definite, and has unit trace. Hence the spectrum of the density matrix has the characteristics of a discrete probability distribution; and thereby the entropy of the physical system is defined to be the Shannon entropy of the spectrum of the density matrix. Suppose $G$ is an undirected graph with $V(G) = \{v_1, \dots, v_n\}$. The Laplacian of $G$, denoted $L_G$, is the matrix with non-diagonal entry $(i,j) = -1$ if $v_iv_j \in E(G)$, $0$ otherwise, and diagonal entry $(i,i)$ equal to the degree of vertex $v_i$. Alternatively, we think of the Laplacian $L_G$ as $D_G - A_G$, where $A_G$ is the \emph{adjacency matrix} of $G$ ($A_G(i,j) = 1$ if $v_iv_j \in E(G)$ and $0$ otherwise), and $D_G$ is the \emph{degree matrix} of $G$ ($D_G(i,i) = \deg_G(v_i)$ and $D_G(i,j) = 0$ if $i \neq j$). In this paper, we define the \emph{normalized Laplacian} matrix of $G$, by $\overline{L}_G = \frac{1}{\mathrm{tr}(L_G)}L_G$. Note that $\overline{L}_G$ is symmetric, positive semi-definite, and has unit trace; therefore $\overline{L}_G$ may be thought of as the density matrix of a physical system with $G$ its representation as an undirected graph. The \emph{von Neumann entropy} of graph $G$, denoted $H(G)$, is the von Neumann entropy of $G$'s normalized Laplacian: \begin{equation} H(G) = \sum_{\lambda \in \mathrm{spec}(\overline{L}_G)} \lambda \log_2 \frac{1}{\lambda}, \label{von Neumann ent def} \end{equation} where $0 \log_2 \frac 10$ is conventionally taken to be $0$.
The entropy of an undirected graph has been defined to be the von Neumann entropy of its normalized Laplacian by many authors and for many reasons, see \cite{anand2011shannon, braunstein2006laplacian, dairyko2017note, de2016interpreting, ye2016jensen}. For example the von Neumann entropy's interpretation when applied to a graph is studied in \cite{de2016interpreting}, it is studied as a measure of network regularity in \cite{passerini1quantifying}, in the context of representing quantum information in \cite{belhaj2016weighted}, and in \cite{dairyko2017note} its connection to graph parameters among other things is studied. The variety of applications and interpretations in the aforementioned references, at least to some extent, substantiates saying that it is not clear what entropy of a graph, in particular its von Neumann entropy, is telling us. This paper is a contribution to that conversation in the context of directed graphs.
A directed graph's Laplacian, however, is not necessarily symmetric or positive semi-definite; consequently we cannot simply treat its spectrum as a discrete probability distribution. But, in this paper, we come to the entropy of a directed graph via a function developed by R\'{e}nyi in \cite{renyi1961measures} to generalize Shannon's entropy: \begin{equation} H_{\alpha}(\vec{p}) = \frac{1}{1-\alpha}\log_2\left(\sum_{p_i \in \vec{p}}p_i^{\alpha}\right), \label{Renyi-alpha def}\end{equation} where $\vec{p}$ is a discrete probability distribution as in the Shannon entropy, $\alpha >0$ and $\alpha \neq 1$. Suppose $\Gamma$ is a directed graph with $V(\Gamma) = \{v_1,\dots, v_n\}$; the Laplacian of $\Gamma$, $L_{\Gamma}$, is constructed the same way as is the Laplacian of an undirected graph: $$L_{\Gamma}(i,j) = \left\{ \begin{array}{cc} d^+(v_i) & \mbox{ if $i = j$} \\ -1 & \mbox{ if $v_i \to v_j\in A$} \\ 0 & \mbox{ if $v_i \to v_j \not\in A$} \end{array} \right ..$$
We define, for directed graph $\Gamma$ with \emph{normalized} Laplacian $\overline{L}_{\Gamma}$ whose spectrum is $\Lambda_\Gamma$, its \emph{R\'{e}nyi $\alpha$-entropy} to be $H_{\alpha}(\Gamma) = H_{\alpha}(\Lambda_\Gamma)$. Note that $S(\vec{p}) = \lim_{\alpha \to 1}H_{\alpha}(\vec{p})$ (see \cite{renyi1961measures}) but we focus on positive integer values of $\alpha$ greater than $1$; doing this makes moot the inconvenient characteristics of the spectrum of a directed graph's Laplacian and also allows us to use combinatorial arguments to compute entropy. To wit, suppose $\Gamma$ is a directed graph whose normalized Laplacian is $\overline{L} = \frac{1}{\mathrm{tr}(D-A)}\left(D - A\right)$, where $A$ is $\Gamma$'s adjacency matrix, $D$ the diagonal matrix with out-degrees of vertices of $\Gamma$ as its diagonal entries, and $L$ its Laplacian; then using the various properties of the trace function\footnote{Recall the trace is linear, and that for any square matrix $M$, $\mathrm{tr}(M) = \sum_{i}M(i,i) = \sum_{\lambda \in \mathrm{spec}(M)}\lambda$. Also, if $\lambda$ is an eigenvalue of $M$, then $\lambda^k$ is an eigenvalue of $M^k$ and so $\mathrm{tr}(M^k) = \sum_{\lambda \in \mathrm{spec}(M)}\lambda^k$, and $\mathrm{tr}(AB) = \mathrm{tr}(BA)$ for (in particular) square matrices $A$ and $B$.} and focusing on the argument of the logarithm, we have \begin{eqnarray*} \sum_{\lambda \in \Lambda_\Gamma} \lambda^2= \mathrm{tr}\left(\overline{L}^2\right) &=& \mathrm{tr}\left(\left(\frac{1}{\mathrm{tr}(D-A)}(D -A)\right)^2\right) \\ &= & \mathrm{tr}(D-A)^{-2}\left(\mathrm{tr}(D^2) - \mathrm{tr}(AD) - \mathrm{tr}(DA) + \mathrm{tr}(A^2)\right). \end{eqnarray*} Noting that $A^{\alpha}$ records the number of walks of length $\alpha$ between vertices, we see that the computation of $H_{\alpha}(\Gamma)$ will involve $\Gamma$'s out-degree raised to powers and the number of walks of length $\alpha$ from vertices to themselves.
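To make the computation concrete, here is a minimal \texttt{numpy} sketch (ours, not from the paper) that checks the identity above on a randomly generated tournament; the helper names are assumptions, while the conventions ($A_{ij}=1$ iff $v_i\to v_j$, $L=D-A$ normalized by its trace) are those of this paper.
\begin{verbatim}
import numpy as np

def random_tournament(n, seed=0):
    rng = np.random.default_rng(seed)
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < 0.5:
                A[i, j] = 1          # orient the edge as i -> j ...
            else:
                A[j, i] = 1          # ... or as j -> i
    return A

A = random_tournament(7)
D = np.diag(A.sum(axis=1))                       # out-degree matrix
Lbar = (D - A) / np.trace(D - A)                 # normalized Laplacian

lhs = np.sum(np.linalg.eigvals(Lbar) ** 2).real  # sum of lambda^2
rhs = (np.trace(D @ D) - np.trace(A @ D) - np.trace(D @ A) + np.trace(A @ A)) \
      / np.trace(D - A) ** 2
assert np.isclose(lhs, rhs)                      # matches the trace expansion
\end{verbatim}
(For a tournament the traces $\mathrm{tr}(AD)$, $\mathrm{tr}(DA)$, and $\mathrm{tr}(A^2)$ vanish, which is what the next paragraphs exploit.)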
A directed graph $T$ with $|V(T)| = n$ is an \emph{$n$-tournament} if for each pair of vertices $x,y \in V(T)$ we have \emph{either} $x \to y \in A(T)$ or $y\to x \in A(T)$; in other terms, an $n$-tournament is an orientation of the complete graph on $n$ vertices. Note that if $M$ is the adjacency matrix of an $n$-tournament, then $M + M^t = J_n - I_n$, where $J_n$ is the $n \times n$ matrix all of whose entries equal $1$, and $I_n$ is the $n \times n$ identity matrix.
Now suppose $\Gamma$ is an $n$-tournament, then the trace of its Laplacian is $\binom{n}{2}$, and there are no walks of length $2$ from any vertex to itself and so in the computation of $\mathrm{tr}\left(\overline{L}^{\alpha}\right)$, with $\alpha$ an integer greater than or equal to $2$, terms such as $\mathrm{tr}\left(D^{\alpha -2}A^2\right), \mathrm{tr}\left(AD^{\alpha-2}A \right)$, and $\mathrm{tr}\left(A^2D^{\alpha-2}\right)$ equate to zero.
More generally, we have the following result we will use in the sequel and which follows from the same properties of the trace used above and those of tournaments.
\begin{lemma}\label{lem:powerequaltrace} Suppose $L = g(D -A)$ is the normalized Laplacian of an $n$-tournament \emph{(so $g = \binom{n}{2}^{-1}$)}, and let $\Lambda=\mathrm{spec}(L)$, then \[\sum_{\lambda \in \Lambda} \lambda^3 = \mathrm{tr}\left( g^3(D-A)^3 \right)=g^3\left(\mathrm{tr}\left(D^{3}\right)-\mathrm{tr}\left(A^3\right)\right),\] and \[\sum_{\lambda \in \Lambda} \lambda^4 = \mathrm{tr}\left( g^4(D-A)^4 \right)=g^4\left(\mathrm{tr}\left(D^{4}\right)-2\mathrm{tr}\left(DA^3\right) - 2\mathrm{tr}\left(A^3D\right) + \mathrm{tr}\left(A^4\right) \right).\] \end{lemma}
An $n$-tournament is \emph{regular} if the score of each vertex is $\frac{n-1}{2}$. The number of $3$-cycles in a labeled $n$-tournament $T$ with $V(T) = \{v_1, \dots, v_n\}$ is obtained via \begin{eqnarray}\binom{n}{3} - \sum_{1 \leq i \leq n} \binom{d^+(v_i)}{2} \label{number_of_3-cycles}\end{eqnarray} and this number is maximized when $T$ is regular (and $n$ is necessarily odd).
On the other hand an $n$-tournament $T$ has no cycles if and only if $T$ is transitive: $T$ is \emph{transitive} if, for all $x,y,z \in V(T)$, $x \to y$ and $y \to z$ implies $x \to z$. Also, an $n$-tournament is transitive if and only if its vertices can be labeled $v_0, v_1, \dots, v_{n-1}$ so that $d^+(v_i) = i$; that is, its \emph{score sequence} is $(0,1,2, \dots, n-1)$. There is one transitive $n$-tournament up to isomorphism for each integer $n \geq 1$. In contrast, up to isomorphism, there are 1,223 and 1,495,297 regular $11$-tournaments and regular $13$-tournaments, respectively. We will show that for $\alpha = 2,3$ and $n > 4$, the transitive and regular $n$-tournaments yield minimum and maximum R\'{e}nyi $\alpha$-entropy, respectively.
But this is reductive in the case of regular $n$-tournaments, for $n > 5$; the R\'{e}nyi $\alpha$-entropy distinguishes among regular tournaments and gives a continuum of `\emph{regularity}' -- for lack of a better term. If $n$ is odd, then for $\alpha = 2$ and $\alpha = 3$, $H_{\alpha}(T)$ is minimum on the set of $n$-tournaments if and only if $T$ is transitive; $H_{\alpha}(T)$ is maximum if and only if $T$ is regular.
\subsection{Small Tournaments} Let $\mathcal{T}_n$ denote the set of all $n$-tournaments up to isomorphism. In the hope of shedding light on what the R\'{e}nyi entropy is telling us, and to foreshadow sequel sections, we examine the R\'{e}nyi entropy's behavior on $\mathcal{T}_4$, $\mathcal{T}_5$, and $\mathcal{T}_3$.
Up to isomorphism there are $4$ distinct $4$-tournaments. The \emph{score sequence} of an $n$-tournament on vertices $v_1, \dots, v_n$ is the list $(s_1, \dots, s_n)$ with, relabeling if necessary, $s_i = d^+(v_i)$ and $s_1 \leq s_2 \leq \cdots \leq s_n$. The $4$-tournament $TS_4$ in Figure \ref{fig:4_tournaments} represents the isomorphism class of all $4$-tournaments with score sequence $(1,1,2,2)$. The other isomorphism classes of $4$-tournaments are determined by their score sequences (this is the case only for $n$-tournaments with $n \leq 4$); the other $4$-tournament score sequences are $(0,2,2,2)$, $(1,1,1,3)$, and $(0,1,2,3)$, which have $TK_4$, $TO_4$, and $TT_4$, respectively, as their associated tournaments.
\begin{figure}
\caption{All $4$-tournaments}
\label{fig:4_tournaments}
\end{figure}
By Lemma \ref{lem:powerequaltrace} \begin{eqnarray*}H_2(TS_4) & =& -\log_2\left(\mathrm{tr}\left(\overline{L}_T^2\right)\right) = -\log_2\left(\mathrm{tr}\left(\frac{1}{36}(D - A)^2\right)\right)\\ & = &-\log_2\left(\frac{1}{36}\left(\mathrm{tr}(D^2) - 2\mathrm{tr}(DA) + \mathrm{tr}(A^2)\right)\right), \end{eqnarray*} and since no vertex of a tournament has a walk of length $2$ from itself to itself, the trace of its adjacency matrix squared is zero. Also, $\mathrm{tr}(DA) = \mathrm{tr}(AD) = 0$. Therefore, $H_2(TS_4) = -\log_2\left(\mathrm{tr}(D^2)/36\right) = -\log_2\left(\sum_{1 \leq i \leq 4}(d^+(v_i))^2/36\right)=-\log_2\left(\left(1^2+1^2+2^2+2^2\right)/36\right)$. Indeed, for any $n$-tournament $T$ on vertices $v_1, \dots, v_n$, $$H_2(T)=-\log_2\left(\binom{n}{2}^{-2}\sum_{1\leq i \leq n} d^+(v_i)^2\right).$$ With $\alpha =3$, the calculation is $$H_3(T) = -\frac{1}{2}\log_2\left(\binom{n}{2}^{-3}\left(\sum_{1\leq i \leq n} d^+(v_i)^3 - \sum_{1\leq i \leq n}c_3(i,i)\right) \right),$$ where $c_3(i,j)$ is the number of walks of length $3$ from $v_i$ to $v_j$.
The table at (\ref{table:H2&H3on4tournies}) displays essentially $H_2$ and $H_3$ for all $4$-tournaments; what is actually displayed is $\sum_{\lambda \in \mathrm{spec}(L_T)} \lambda^{\alpha}$ for the unnormalized Laplacian $L_T$, for $\alpha = 2, 3$ and each $T \in \mathcal{T}_4$.
\begin{equation}
\begin{array}{| c | c | c |} \hline
& \sum \lambda^2 & \sum \lambda^3 \\ \hline TS_4 & 10 & 12 \\ TK_4 & 12 & 21 \\ TO_4 & 12 & 27 \\ TT_4 & 14 & 36 \\
\hline \end{array}\label{table:H2&H3on4tournies} \end{equation}
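The entries of this table can be reproduced directly from the score sequences (a check of ours, using the score-sequence formulas above together with the $3$-cycle count (\ref{number_of_3-cycles}); see also Proposition \ref{prop:renyi2newformula} below):
\begin{verbatim}
from math import comb

def f2_f3(scores):
    """Power sums of the eigenvalues of the (unnormalized) Laplacian of a
    tournament, computed from its score sequence."""
    n = len(scores)
    c3 = comb(n, 3) - sum(comb(s, 2) for s in scores)   # number of 3-cycles
    return sum(s**2 for s in scores), sum(s**3 for s in scores) - 3 * c3

for name, scores in [("TS_4", (1, 1, 2, 2)), ("TK_4", (0, 2, 2, 2)),
                     ("TO_4", (1, 1, 1, 3)), ("TT_4", (0, 1, 2, 3))]:
    print(name, f2_f3(scores))
# TS_4 (10, 12)   TK_4 (12, 21)   TO_4 (12, 27)   TT_4 (14, 36)
\end{verbatim}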
Though both $H_2$ and $H_3$ are functions only of the score sequence, $H_3$ seems to quantify something more than $H_2$ does, and distinguishes each tournament in $\mathcal{T}_4$.
We now explore $\mathcal{T}_5$. There are $12$ distinct $5$-tournaments up to isomorphism and $9$ distinct score sequences. The score sequences $(1,2,2,2,3)$ and $(1,1,2,3,3)$ have $3$ and $2$ distinct tournaments associated with them, see Figure \ref{fig:5_tournaments_(1,2,2,2,3)} and Figure \ref{fig:5_tournaments_(1,1,2,3,3)}.
\begin{figure}
\caption{Non-isomorphic $5$-tournaments with score sequence $(1,2,2,2,3)$. Arcs not depicted are directed downward.}
\label{fig:5_tournaments_(1,2,2,2,3)}
\end{figure}
\begin{figure}
\caption{Non-isomorphic $5$-tournaments with score sequence $(1,1,2,3,3)$. Arcs not depicted are directed downward.}
\label{fig:5_tournaments_(1,1,2,3,3)}
\end{figure}
Table \ref{table:H2&H3&H4on5tournies} shows the R\'{e}nyi $\alpha$-entropy values for all the $5$-tournaments, for $\alpha = 2,3,4$. Actually, again, what is shown is $\sum_{\lambda \in \mathrm{spec}(L_T)}\lambda^{\alpha}$, $\alpha = 2,3,4$, and $T \in \mathcal{T}_5$. We use $T_{\vec{s}}$ to denote the (unique in this case) tournament corresponding to the score sequence $\vec{s}$. $TT_5$ is the transitive $5$-tournament, $R_5$ is the $5$-tournament with score sequence $(2,2,2,2,2)$.
\begin{equation}
\begin{array}{| c | c | c | c |} \hline
& \sum \lambda^2 & \sum \lambda^3 & \sum \lambda^4 \\ \hline R_5 & 20 & 25 & -20\\ UR_1 & 22 & 40 & 46\\ UR_2 & 22 & 40 & 46\\ UR_3 & 22 & 40 & 50 \\ U_1 & 24 & 55 & 116 \\ U_2 & 24 & 55 & 120 \\ T_{(0,2,2,3,3)} \; (\mbox{``$E$''}) & 26 & 76 & 258 \\ T_{(1,1,2,2,4)} \; (\mbox{``$D$''}) & 26 & 64 & 138 \\ T_{(1,1,1,3,4)} \; (\mbox{``$C$''}) & 28 & 79 & 208 \\ T_{(0,2,2,2,4)} \; (\mbox{``$B$''}) & 28 & 85 & 280 \\ T_{(0,1,3,3,3)} \; (\mbox{``$A$''}) & 28 & 91 & 328 \\ TT_5 & 30 & 100 & 354 \\
\hline \end{array}\label{table:H2&H3&H4on5tournies} \end{equation} Notice that as $\alpha$ increases the number of distinct entropy values increases. Consider the partial order induced by the R\'{e}nyi entropy, where $T_1 <_{\alpha} T_2$ if $H_{\alpha}(T_1) < H_{\alpha}(T_2)$. Figure \ref{fig:Hasse_of_H2H3H4_on_5tourns} shows the Hasse diagrams for the orders $<_i$, for $i = 2,3,4$. We see fewer incomparabilities as $\alpha$ increases, but $<_{i}$ is not necessarily a refinement of $<_{i-1}$. For example, $C <_2 E$, $C <_3 E$, but $E <_4 C$.
\begin{figure}
\caption{Hasse diagrams of the partial orders determined by $H_2$, $H_3$, and $H_4$.}
\label{fig:Hasse_of_H2H3H4_on_5tourns}
\end{figure}
We now compare the R\'enyi $\alpha$-entropy of the two distinct $3$-tournaments as a function of $\alpha$ -- in what remains of this section $\alpha$ is not necessarily an integer. We treat this case last (out of $n = 3,4,5$) because it is a bit different, but the results are consistent with the overarching claims we make about the R\'{e}nyi $\alpha$-entropy: that it is a measure of how regular a tournament is; the higher the entropy value, the more regular the tournament is. Moreover, and this will not be shown until the penultimate section, there is more to `regular tournaments' than score sequences.
\begin{figure}
\caption{The $3$-tournament which is a cycle (left) and the transitive $3$-tournament (right)}
\label{fig:3_tourns}
\end{figure}
With $C_3$ denoting the $3$-tournament that is a cycle, \begin{equation*} \overline{L}(C_3) = \begin{bmatrix} \frac{1}{3} & -\frac{1}{3} & 0 \\ 0 & \frac{1}{3} & -\frac{1}{3} \\ -\frac{1}{3} & 0 & \frac{1}{3} \end{bmatrix} \text{ has spectrum } \left\{0, \frac{1}{2} \pm \frac{\sqrt{3}}{6}i\right\} \text{, so} \end{equation*} \begin{align*} H_\alpha(C_3) &= \frac{1}{1 - \alpha} \log_2 \left(2\left(\frac{1}{\sqrt{3}}\right)^\alpha \cos \left(\frac{\pi}{6}\alpha\right)\right) \\
&= \frac{1 - \alpha \log_2 \sqrt 3}{1 - \alpha} + \log_2 \left(\cos^{\frac{1}{1 - \alpha}} \left(\frac{\pi}{6}\alpha\right)\right). \end{align*} Now consider the domain for which this function gives a real-valued entropy. If the cosine evaluates to 0, as is the case for $\alpha = 3$, then $H_\alpha$ is not defined, and we see a vertical asymptote as $\alpha \to 3$. If the cosine value is negative, then the value of $H_\alpha$ is real only if $\alpha$ is of the form $2p/(2q +1)$ with $p,q \in \mathbb Z$.
As far as end behavior, $H_\alpha$ has no limit as $\alpha$ approaches infinity, but it does have a lower bound. We note that $H_\alpha$ has local minima at or near $12k$, with $k \in \mathbb{Z}^+$. Then \begin{align*} H_{12k}(C_3) &= \frac{1-12k \log_2 \sqrt 3}{1-12k} + \frac{1}{1 - 12k}\log_2\left(\cos(2\pi k)\right)\\ &= \frac{1-12k \log_2 \sqrt 3}{1-12k}. \end{align*} As $k \to \infty$, the first term tends to $0$, and $$\lim_{k \to \infty} H_{12k}(C_3) = \log_2 \sqrt{3} \approx 0.7925.$$
With $TT_3$ denoting the transitive $3$-tournament, we see that all eigenvalues are real, and the entropy is more well-behaved. \begin{equation*} \overline{L}(TT_3) = \begin{bmatrix} \frac{2}{3} & -\frac{1}{3} & -\frac{1}{3} \\ 0 & \frac{1}{3} & -\frac{1}{3} \\ 0 & 0 & 0 \end{bmatrix} \text{ has spectrum } \left\{0, \frac{1}{3}, \frac{2}{3}\right\}, \text{ so} \end{equation*} \begin{align*} H_\alpha(TT_3) &= \frac{1}{1 - \alpha} \log_2 \left(\left(\frac{1}{3}\right)^\alpha + \left(\frac{2}{3}\right)^\alpha \right). \end{align*} This function is continuous on $(1, \infty)$, and we can evaluate $\lim_{\alpha \to \infty}H_{\alpha}(T)$ by applying L'H\^opital's Rule (when the base is not specified `$\log$' is the natural logarithm): \begin{align*} \lim_{\alpha \to \infty} H_\alpha(TT_3) &= \lim_{\alpha \to \infty} \frac{\log_2\left(\left(\frac{1}{3}\right)^\alpha + \left(\frac{2}{3}\right)^\alpha\right)}{1 - \alpha} \\ &= \lim_{\alpha \to \infty} \frac{\frac{1}{\log 2}\frac{ (\log \frac{1}{3}) (\frac{1}{3})^\alpha + (\log \frac{2}{3}) (\frac{2}{3})^\alpha }{\left(\frac{1}{3}\right)^\alpha + \left(\frac{2}{3}\right)^\alpha}}{-1} \\ &= \lim_{\alpha \to \infty} \frac{1}{\log 2}\frac{ (\log 3) (\frac{1}{3})^\alpha + (\log \frac{3}{2}) (\frac{2}{3})^\alpha }{\left(\frac{1}{3}\right)^\alpha + \left(\frac{2}{3}\right)^\alpha} \\ &= \lim_{\alpha \to \infty} \frac{1}{\log 2} \frac{\log 3 + (\log \frac{3}{2}) 2^\alpha}{1 + 2^\alpha} \\ &= \lim_{u \to \infty} \frac{1}{\log 2} \frac{\log 3 + (\log \frac{3}{2}) u}{1 + u} \\ &= \frac{\log \frac{3}{2}}{\log{2}} \\ &= \log_2 3 - 1 \\ &\approx 0.5850. \end{align*}
\section{R\'{e}nyi $2$- and $3$-entropy: Min, Max, and What's in Between}
We focus on $H_2$ and $H_3$ on $\mathcal{T}_n$ in this section. The results give a strong indication that the R\'{e}nyi $\alpha$-entropy is a measurement of how regular a tournament is, similar to \cite{passerini1quantifying}. On the other hand, in \cite{landau1951dominance} Landau defined, for an $n$-tournament $T$ with score sequence $(s_1, \dots, s_n)$, what he called the \emph{hierarchy} score $h(T) = \frac{12}{n^3 - n}\sum_{i = 1}^n \left(s_i - \frac{n-1}{2} \right)^2$; this was Landau's measurement of how close $T$ is to the transitive tournament. It is straightforward to transform $H_2(T)$ into $h(T)$ and vice-versa, given Proposition \ref{prop:renyi2newformula}; hence $H_2$ is equivalent to Landau's hierarchy. We also enumerate the distinct $H_2$- and $h$-classes, and it can then be seen that $H_2$ and $h$ distinguish tournament structure less than the score sequence does. The same goes for $H_3$. But this is not so for $H_{\alpha}$ with $\alpha > 3$; indeed, $H_4$ distinguishes between some $n$-tournaments with the same score sequence for $n \geq 4$.
Lemma \ref{lem:powerequaltrace} together with equation \ref{number_of_3-cycles} yields the following proposition.
\begin{proposition}\label{prop:renyi2newformula}
Suppose $T$ is a tournament on vertices $\{v_1, \dots, v_n\}$ with $d^+(v_i) = s_i$. Then
\[\sum_{\lambda \in \Lambda_T} \lambda^2 = \binom{n}{2}^{-2}\sum_{i=1}^n s_i^2\]
and if $c_3(T)$ is the number of 3-cycles in $T$, then
\[\sum_{\lambda \in \Lambda_T} \lambda^3=
\binom{n}{2}^{-3}\left(\sum_{i =1}^n s_i^3 - 3c_3(T) \right)=
\binom{n}{2}^{-3}\left(\sum_{i =1}^n s_i^3 - 3\binom{n}{3} + 3\sum_{i=1}^{n}\binom{s_i}{2}\right).\]
\end{proposition}
Define the function $f_k$ on $\mathcal{T}_n$ by $f_k(T) = \sum_{\lambda \in \Lambda_T}\lambda^k$.
\begin{theorem}\label{thm:minimumpowersums} On $\mathcal{T}_n$, $f_2$ and $f_3$ are minimized by regular tournaments when $n$ is odd and by nearly-regular tournaments when $n$ is even. \end{theorem}
\begin{proof} Consider a tournament $T$ with score sequence $(s_1, \dots, s_n)$. Suppose $s_i + 2 \leq s_j$ for some $i,j$. If $j \rightarrow i$, then construct a new tournament $T'$ by reversing the arc so that $i \rightarrow j$. Otherwise, if $i \to j \in A(T)$, consider the tournament $\hat{T}$ induced on $\{i, j\} \cup N^+(j)$. Note that $j$ is a king in $\hat{T}$ (a vertex from which every other vertex can be reached by a path of length at most $2$), so there is a path $P$ of length $2$ from $j$ to $i$, say $P=(j,u,i)$. Construct $T'$ by reversing the arcs on $P$ so that $i \rightarrow u$ and $u \rightarrow j$ are arcs of $T'$. This reversal lowers the score of $j$ by 1 and increases the score of $i$ by 1, while the score of $u$ is unchanged. So, in either case, the score sequence of $T'$ is $s_1, \ldots, s_i +1, \ldots, s_j -1, \ldots, s_n$. It is not difficult to show that \begin{equation}\label{eqn:regarcreverse}
s_i^2 + s_j^2 > (s_i +1)^2 + (s_j -1)^2. \end{equation} Let $(s'_1, \ldots, s'_n)$ be the score sequence of $T'$, $E = \mathrm{spec}(\overline{L}_T)$, and $E' = \mathrm{spec}(\overline{L}_{T'})$. Notice that $s_k = s'_k$ for $k \neq i$ and $k \neq j$. Also $s'_i = s_i +1$ and $s'_j = s_j-1$. By Proposition \ref{prop:renyi2newformula}, $ \sum_{\lambda \in E} \lambda^2= \binom{n}{2}^{-2}\sum_{i =1}^n s_i^2$ and $\sum_{\lambda \in E'} \lambda^2= \binom{n}{2}^{-2} \sum_{i = 1}^n (s'_i)^2$. These equalities together with equation (\ref{eqn:regarcreverse}) imply that $\sum_{\lambda \in E} \lambda^2 > \sum_{\lambda \in E'} \lambda^2$. Repeatedly applying the construction above until there are no scores that differ by at least 2 results in a regular tournament when $n$ is odd and a nearly-regular tournament when $n$ is even; after each step the sum of the squares of the eigenvalues of the resulting tournament decreases.
Now consider $\sum_{\lambda \in E'} \lambda^3$, where $T$ and $T'$ are as above with score sequences $(s_1, \dots, s_n)$ and $(s_1', \dots, s_n')$, respectively. By Proposition \ref{prop:renyi2newformula}, we have $$\sum_{\lambda \in E'} \lambda^3 = \binom{n}{2}^{-3}\left(\sum_{i =1}^n (s_i')^3 - 3\binom{n}{3} + 3\sum_{i=1}^{n}\binom{s_i'}{2}\right).$$ Consider the part of the sum affected by the algorithm: $(s'_i)^3 + (s'_j)^3 + 3\binom{s'_i}{2} + 3 \binom{s'_j}{2}$. Using $s_i + 2 \leq s_j$ (and hence $s_j \geq 2$), $s_i' = s_i+1$, and $s_j' = s_j-1$, the relationship $(s_i')^3 + (s_j')^3 +3\binom{s_i'}{2} + 3\binom{s_j'}{2} < s_i^3 + s_j^3 +3\binom{s_i}{2} + 3\binom{s_j}{2}$ may be obtained. Since $\binom{n}{2}^{-3}$ is constant for fixed $n$, as is $3\binom{n}{3}$, the R\'{e}nyi $3$-entropy will be maximized for small values of $\sum_{i=1}^n s_i^3 + 3\sum_{i=1}^{n}\binom{s_i}{2}$. Thus, by changing the scores of $T$ to create a tournament $T'$ in which $s'_j = s_j - 1$ and $s'_i = s_i+1$, we see that $H_3(T')>H_3(T)$. It follows that the tournament with maximum R\'{e}nyi $3$-entropy will have scores as close to equal as possible. This is achieved by any regular tournament if $n$ is odd, and any nearly-regular tournament if $n$ is even. \end{proof}
\begin{corollary} The R\'{e}nyi $2$- and R\'{e}nyi $3$-entropy are maximized by regular $n$-tournaments when $n$ is odd, otherwise by nearly-regular $n$-tournaments. \end{corollary}
To find the tournaments which minimize the R\'{e}nyi entropy, we use the following algorithm. Let $T_0$ be a tournament that is not transitive and therefore has a repeated score in its score sequence. For $i \geq 1$, obtain $T_i$ from $T_{i-1}$ by reversing the arc between any pair of vertices with the same score, say $s_m$. If $T_{i-1}$ has score sequence $(s_1, \ldots, s_m, \ldots, s_m, \ldots, s_n)$, then $T_i$ will have score sequence $(s_1, \ldots, s_m - 1, \ldots, s_m + 1, \ldots, s_n)$. Note that $(s_m - 1)^2 + (s_m + 1)^2 = 2s_m^2 + 2.$ Since there are a finite number of $n$-tournaments and each step increases the value of $f_2$ by 2, the algorithm is guaranteed to terminate. This happens when $T_i$ has no repeated scores, which is possible only if $T_i$ has score sequence $ (0,1, 2, \dots, n-1)$; that is, $T_i$ is the transitive $n$-tournament.
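A direct implementation of this procedure (a sketch of ours; the function name is not from the paper) makes the two claims visible: each step raises $f_2$ by exactly $2$, and the procedure halts at the transitive score sequence.
\begin{verbatim}
import numpy as np

def toward_transitive(A):
    """Repeatedly reverse an arc between two vertices of equal score;
    return the list of (sorted) score sequences encountered."""
    A = A.copy()
    history = [sorted(A.sum(axis=1))]
    while True:
        s = A.sum(axis=1)
        pair = next(((i, j) for i in range(len(A)) for j in range(i + 1, len(A))
                     if s[i] == s[j]), None)
        if pair is None:
            return history
        i, j = pair
        A[i, j], A[j, i] = A[j, i], A[i, j]      # reverse the arc between i and j
        history.append(sorted(A.sum(axis=1)))

rng = np.random.default_rng(3)
n = 6
U = np.triu((rng.random((n, n)) < 0.5).astype(int), 1)
A = U + (np.triu(np.ones((n, n), dtype=int), 1) - U).T   # a random tournament
hist = toward_transitive(A)
f2 = [sum(x * x for x in s) for s in hist]
assert all(b - a == 2 for a, b in zip(f2, f2[1:]))   # f_2 goes up by 2 each step
assert hist[-1] == list(range(n))                    # ends at (0, 1, ..., n-1)
\end{verbatim}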
\begin{theorem} \label{minimumtransitive} Among all tournaments on $n$ vertices, the R\'{e}nyi 2- and 3-entropy are minimized by the transitive tournament. \end{theorem}
\begin{proof} Let $T_0$ be any tournament on $n$ vertices. Apply the algorithm described above until the transitive tournament $TT_n$ is reached. We already established that $f_2$ strictly increases throughout the algorithm, so $H_2(T_0) > H_2(TT_n)$.
It remains to show that $f_3$ does the same. By Proposition \ref{prop:renyi2newformula}, we have \begin{align*}f_3(T_i) - f_3(T_{i-1}) &= \left((s_m - 1)^3 + 3\binom{s_m - 1}{2} + (s_m + 1)^3 + 3\binom{s_m + 1}{2}\right) - 2\left(s_m^3 + 3\binom{s_m}{2}\right) \\ &= (s_m + 1)^3 + (s_m - 1)^3 - 2s_m^3 + 3\left[\binom{s_m + 1}{2} - \binom{s_m}{2} + \binom{s_m - 1}{2} - \binom{s_m}{2}\right] \\ &= 6s_m + 3\left[\binom{s_m}{1} - \binom{s_m - 1}{1}\right]\\ &= 6s_m + 3 \\ &\geq 9. \end{align*} (The last inequality uses $s_m \geq 1$, which holds because at most one vertex of a tournament can have score $0$, so a repeated score is at least $1$.) Indeed, the value of $f_3$ increases by at least $9$ with each step. Therefore, the transitive tournament maximizes $f_3$ and minimizes $H_3$. \end{proof}
The next and final result in this section gives precisely the number of distinct values of $H_2$ on $\mathcal{T}_n$.
\begin{theorem} For tournaments on $n$ vertices, the number of distinct values of $H_2$ is $$\left\{ \begin{array}{ll} \frac{1}{4} \binom{n + 1}{3} + 1 & \text{if $n$ is odd,} \\ 2\binom{\frac{n}{2} + 1}{3} + 1 & \text{if $n$ is even.} \end{array} \right.$$ \end{theorem}
\begin{proof} Using again the algorithm described above with $T_0$ (nearly-)regular, and hence minimizing $f_2$, we take advantage of the fact that each step increases the value of $f_2$ by 2 until the transitive $n$-tournament is reached and $f_2$ is maximized.
Since the sum of the scores of any $T \in \mathcal{T}_n$ is $\binom{n}{2}$, there are an even number of odd scores when $\binom{n}{2}$ is even and an odd number of odd scores when $\binom{n}{2}$ is odd. Therefore, the sum of the squares of the scores has the same parity as $\binom{n}{2}$. Hence the algorithm produces all possible values of $f_2$.
Now we count the number of values generated by counting the odd or even numbers between minimal and maximal values of $f_2$. For a transitive tournament, the score sequence is $(0, 1, 2, \ldots, n -1)$, which gives maximum value $$\sum_{i = 0}^{n - 1} i^2 = \frac{n(n - \frac{1}{2})(n - 1)}{3}.$$ If $n$ is odd, a regular tournament gives minimum value \begin{align*} \sum_{i = 1}^n s_i^2 &= n \left(\frac{n - 1}{2}\right)^2 \\ &= \frac{n(n - 1)^2}{4} \end{align*} The number of distinct values for odd $n$ is then \begin{align*} \frac{1}{2}\left(\frac{n(n - \frac{1}{2})(n - 1)}{3} - \frac{n(n - 1)^2}{4}\right) + 1 &= \frac{n(n - 1)}{24} \left(4\left(n - \frac{1}{2}\right) - 3(n - 1)\right) + 1 \\ &= \frac{n(n - 1)(n + 1)}{24} + 1 \\ &= \frac{1}{4}\binom{n + 1}{3} + 1. \end{align*} If $n$ is even, a nearly-regular tournament has $\frac{n}{2}$ vertices with score $\frac{n}{2} - 1$ and $\frac{n}{2}$ vertices with score $\frac{n}{2}$, so \begin{align*} \sum_{i = 1}^n s_i^2 &= \frac{n}{2}\left(\frac{n}{2} - 1\right)^2 + \frac{n}{2}\left(\frac{n}{2}\right)^2 \\ &= \frac{n\left((n - 2)^2 + n^2\right)}{8} \\ &= \frac{n(n^2 - 2n + 2)}{4}. \end{align*} Therefore, the number of distinct values for even $n$ is \begin{align*} &\frac{1}{2}\left(\frac{n(n - \frac{1}{2})(n - 1)}{3} - \frac{n(n^2 - 2n + 2)}{4}\right) + 1 \\ &= \frac{n}{24}\left(4\left(n - \frac{1}{2}\right)(n - 1) - 3(n^2 - 2n + 2)\right) +1 \\ &= \frac{n(n^2 - 4)}{24} + 1 \\ &= \frac{\frac{n}{2}(\frac{n}{2} - 1)(\frac{n}{2} + 1)}{3} + 1 \\ &= 2\binom{\frac{n}{2} + 1}{3} + 1. \end{align*} \end{proof}
Let $h_n^{\alpha}$ be the number of distinct values for $H_{\alpha}$ over $\mathcal{T}_n$, and $S_n$ denote the number of distinct score sequences of $n$-tournaments in $\mathcal{T}_n$. The table below shows $h_n^2$ and $S_n$ up to $n=10$. $S_n$ is sequence A000571 in the OEIS \cite{OEIS}.
\begin{equation}
\begin{array}{| c | c | c | c | c|c|c|c|c|c|} \hline n & 2 & 3 &4 & 5 & 6 & 7 & 8 & 9 & 10 \\ \hline S_n & 1 & 2 & 4 & 9 & 22 & 59 & 167 & 490 & 1486 \\ h^2_n & 1 & 2 & 3 & 6 & 9 & 15 & 21 & 31 & 41 \\
\hline \end{array}\label{table:number_score_sequences_vs_H2} \end{equation}
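The $h^2_n$ row of the table can be checked against the closed form of the theorem (a check of ours):
\begin{verbatim}
from math import comb

def h2_count(n):
    # number of distinct values of H_2 on n-tournaments (theorem above)
    return comb(n + 1, 3) // 4 + 1 if n % 2 else 2 * comb(n // 2 + 1, 3) + 1

print([h2_count(n) for n in range(2, 11)])
# [1, 2, 3, 6, 9, 15, 21, 31, 41] -- the h^2_n row above
\end{verbatim}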
We have observed that, as $\alpha$ increases, $h_n^{\alpha}/S_n$ increases and we make the following conjecture.
\begin{conjecture} For $\alpha$ sufficiently large, $\displaystyle \lim_{n \to \infty}\frac{h_n^{\alpha}}{S_n} > 1$. \end{conjecture}
\section{R\'{e}nyi $4$-entropy}
In this section we focus on $\alpha = 4$ and regular $n$-tournaments for $n>5$. Recall that the \emph{out-set} of a vertex $v$ is the set of vertices at the heads of arcs whose tail is at $v$.
For any $n$ there is up to isomorphism a unique transitive tournament on $n$ vertices, but the case is different for regular tournaments. For example there are $1, 3, 15, 1223$, and $1,495,297$ regular $n$-tournaments for $n=5, 7, 9, 11$, and $13$, respectively. Let $\mathcal{R}_n$ denote the set of regular tournaments in $\mathcal{T}_n$. The results of the previous section showed that regular and nearly-regular tournaments maximize the R\'{e}nyi $\alpha$-entropy for $\alpha = 2$ and $\alpha =3$. If $\alpha >3$, what can be said about $H_{\alpha}(T)$? What we have seen experimentally is that $H_{\alpha}(T)$ is among the largest values of $H_{\alpha}$ on $\mathcal{T}_n$ if $T \in \mathcal{R}_n$; that is, if $T \in \mathcal{R}_n$ and $T' \in \mathcal{T}_n \setminus \mathcal{R}_n$, then $H_{\alpha}(T') < H_{\alpha}(T)$. What we have proved is that $H_4$ partitions $\mathcal{R}_n$, and it is this effect we explore presently. For example, there are three regular $7$-tournaments, $QR_7$, $B$, and $R_7$ drawn in Figure \ref{fig:regular_7}, and $H_4$ gives a distinct value to each: $$H_4(R_7) < H_4(B) < H_4(QR_7).$$
\begin{figure}
\caption{The regular $7$-tournaments}
\label{fig:regular_7}
\end{figure}
The regular $7$-tournaments $QR_7$ and $R_7$ are distinguishable in several ways; for example, the out-set of every vertex in $QR_7$ induces the $3$-tournament $C_3$ of Figure \ref{fig:3_tourns}, while every out-set of $R_7$ induces $TT_3$ of Figure \ref{fig:3_tourns}. $QR_7$ and $R_7$ are examples of two classes of tournaments that will be of interest in this section.
For the next two definitions, suppose the $n$-tournaments have vertex set $\{0,1,2,\dots, n-1\}$. Let $S \subset \{0,1,2,\dots, n-1\}$ with $|S| = \frac{n-1}{2}$ and $i+j \not\equiv 0$ modulo $n$ for all $i,j \in S$. An $n$-tournament $T$ is \emph{rotational with symbol $S$} if $i \to j$ in $T$ if and only if $j - i$ is congruent modulo $n$ to an element of $S$. A \emph{doubly regular} $n$-tournament $T$ is a regular tournament with the additional property that
for any two vertices $x, y \in V(T)$, $|N^+(x) \cap N^+(y)| = k$; necessarily $n =4k+3$. Equivalently, a doubly regular $(4k+3)$-tournament is a regular tournament in which the out-set of each vertex induces a regular $(2k+1)$-tournament. $QR_7$ is doubly regular and is the rotational $7$-tournament with symbol $\{1,2,4\}$, the nonzero quadratic residues modulo $7$. $R_7$ is the rotational $7$-tournament with symbol $\{1,2,3\}$. We also identify the following class of tournaments.
A \emph{quasi doubly regular} tournament on $4k + 1$ vertices is a regular tournament with score of each vertex equal to $2k$ and, for any pair of vertices $x$ and $y$, $|N^+(x) \cap N^+(y)| \in \{k - 1, k\}$.
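These definitions are easy to experiment with; the sketch below (ours; the helper names are not from the paper) builds the rotational tournaments $QR_7$ and $R_7$ and compares the sizes of their common out-sets.
\begin{verbatim}
import numpy as np
from itertools import combinations

def rotational(n, symbol):
    """Adjacency matrix of the rotational n-tournament with the given
    symbol: i -> j iff (j - i) mod n lies in the symbol."""
    return np.array([[1 if (j - i) % n in symbol else 0 for j in range(n)]
                     for i in range(n)])

QR7 = rotational(7, {1, 2, 4})    # nonzero quadratic residues mod 7
R7 = rotational(7, {1, 2, 3})     # the consecutive rotational tournament

def common_outset_sizes(A):
    n = len(A)
    return {int(sum(A[x, v] * A[y, v] for v in range(n)))
            for x, y in combinations(range(n), 2)}

print(common_outset_sizes(QR7))   # {1}: doubly regular (n = 4k+3 with k = 1)
print(common_outset_sizes(R7))    # {0, 1, 2}: regular, but not doubly regular
\end{verbatim}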
For simplicity, and since the log function is an artifact of what was desired out of an entropy function (see \cite{renyi1961measures}), we focus on the power sums of the eigenvalues, and define, for a tournament or any directed or undirected graph $T$, $$H^*_{\alpha}(T) = -f_{\alpha}(T)= - \sum_{\lambda \in \mathrm{spec}(\overline{L}(T))} \lambda^\alpha.$$ Note that for $\alpha>1$ both $H^*_{\alpha}$ and $H_{\alpha}$ are decreasing functions of $f_\alpha$, so maximizing $H^*_{\alpha}$ maximizes $H_{\alpha}$ whenever $H_{\alpha}$ is defined.
We show that $H^*_4(T)$ is maximum on $\mathcal{R}_n$ if and only if $T$ is quasi doubly regular or doubly regular, according as $n = 4k+1$ or $n = 4k+3$. We will use the following lemma, which counts the number of distinct subtournaments isomorphic to $TS_4$ of Figure \ref{fig:4_tournaments}.
Let $T$ be an $n$-tournament and define: \begin{itemize}
\item $c_3(T)$ to be the number of subtournaments isomorphic to $C_3$ of Figure \ref{fig:3_tourns};
\item $c_4(T)$ to be the number of subtournaments of $T$ isomorphic to $TS_4$ of Figure \ref{fig:4_tournaments} -- the strongly connected\footnote{A digraph is \emph{strongly connected} if between any pair of vertices $x$ and $y$ there is a path from $x$ to $y$ and a path from $y$ to $x$.} $4$-tournament;
\item $t_4(T)$ to be the number of subtournaments of $T$ isomorphic to $TT_4$ of Figure \ref{fig:4_tournaments} -- the transitive $4$-tournament. \end{itemize}
We note that the following lemma addresses a problem similar to one considered in \cite{linial2016number} (their Proposition 1.1).
\begin{lemma} \label{lemma:c_3_c_4_t_4} Let $T$ be an $n$-tournament, with $c_3$, $c_4$, and $t_4$ defined as above; then $$c_4(T) = t_4(T) - \frac{n - 3}{4} \left(\binom{n}{3} - 4c_3(T)\right).$$ \end{lemma}
\begin{proof} Consider the four $4$-tournaments up to isomorphism: \begin{enumerate} \item $TS_4$: The strong 4-tournament; \item $TT_4$: The transitive 4-tournament; \item $TO_4$: The tournament with score sequence (1, 1, 1, 3); \item $TK_4$: The tournament with score sequence (0, 2, 2, 2). \end{enumerate} It is quickly verified that $$c_3(TS_4) = 2, \quad c_3(TT_4) = 0, \quad c_3(TK_4) = 1, \quad c_3(TO_4) = 1.$$ Now let $T$ be any $n$-tournament. Since each $3$-cycle (subtournament isomorphic to $C_3$) belongs to exactly $n - 3$ subtournaments of $T$ on $4$ vertices, we have \begin{equation} (n - 3)c_3(T) = 2c_4(T) + to_4(T) + tk_4(T), \end{equation} where $to_4(T)$ and $tk_4(T)$ are the number of $TO_4$'s and $TK_4$'s in $T$. Furthermore, the total number of subtournaments of $T$ on $4$ vertices is equal to \begin{equation} c_4(T) + t_4(T) + to_4(T) + tk_4(T) = \binom{n}{4}. \end{equation} Combining the last two equations, we obtain \begin{align*} c_4(T) &= t_4(T) - \binom{n}{4} + (n - 3)c_3(T) \\ &= t_4(T) - \frac{n - 3}{4}\left(\binom{n}{3} - 4c_3(T)\right). \end{align*}
\end{proof}
\begin{lemma}\label{lemma:H^*_4_max_when_t_4_min} For regular tournaments, $H_4^*(T)$ is maximized where $t_4(T)$ is minimized, and vice versa. \end{lemma} \begin{proof} Let $T = (V, A)$ be a regular tournament on $n = 2m + 1$ vertices. First note that for $\alpha \in \mathbb Z$ with $\alpha \geq 2$, we have $H^*_\alpha(T) = -\mathrm{tr}(\bar L(T)^\alpha)$. Furthermore, since $T$ is regular, we have $$\bar L(T) = \frac{1}{\binom{n}{2}}(mI - M),$$ where $M$ is the adjacency matrix of $T$. Therefore, by the linearity of the trace and using Lemma \ref{lemma:c_3_c_4_t_4}, we can express $H^*_4$ in terms of $t_4(T)$, noting that $TS_4$ is the only tournament on $4$ vertices with a walk of length $4$ from a vertex to itself. \begin{align*} H^*_4(T) &= -\binom{n}{2}^{-4}\text{Tr}\left(m^4I - 4m^3M + 6m^2M^2 - 4mM^3 + M^4\right) \\ &= -\binom{n}{2}^{-4}\left(m^4n - 12mc_3(T) + 4c_4(T)\right) \\ &= -\binom{n}{2}^{-4}\left(m^4n - 12mc_3(T) + 4t_4(T) - (n - 3 )\left(\binom{n}{3} - 4c_3(T)\right)\right). \end{align*} Note that $n$, $m$ and $c_3$ are all constant for regular tournaments on $n$ vertices. \end{proof}
We next identify the regular tournament which minimizes $H_4$ on $\mathcal{R}_n$; it is a rotational tournament. A rotational tournament is distinguished by its symbol $S$, and we call the rotational tournament with symbol $S = \left\{1,2,\dots, \frac{n-1}{2}\right\}$ the \emph{consecutive rotational $n$-tournament}.
\begin{theorem} On $\mathcal{R}_{2m+1}$, $H_4(T)$ is minimum if and only if $T$ is isomorphic to the consecutive rotational tournament. \end{theorem}
\begin{proof} Let $T$ be a regular tournament on $n = 2m + 1$ vertices. By Lemma \ref{lemma:H^*_4_max_when_t_4_min}, we look to maximize $t_4(T)$. Since each vertex has score $m$, each vertex is the source of at most $\binom{m}{3}$ $TT_4$'s, and this value is achieved if and only if the outset of that vertex is transitive. If each of the vertices in $T$ have this property, then the maximum value $n\binom{m}{3}$ of $t_4(T)$ is achieved. For each odd $n$, there is only one such tournament up to isomorphism, namely the consecutive rotational tournament.
To see this, let $N^+(x)$ be transitive for each $x \in V(T),$ and relabel the vertices the following way in $\mathbb Z_n$. Choose a vertex to label $0$. Label the source of $N^+(0)$ by $1$, the source of $N^+(0) \cap N^+(1)$ by $2$, and so on until $N^+(0)$ consists of $\{1, 2, \ldots, m\}$. Then $m$ is beaten by $0, \ldots, m - 1$, so $m$ must beat all of the remaining vertices, with $N^+(m)$ transitive. Label the source of $N^+(m)$ by $m + 1$, the source of $N^+(m) \cap N^+(m + 1)$ by $m + 2$, and so on until all of the vertices are labeled $0, \ldots, n - 1$. Now $m$ beats $m + 1, \ldots, n - 1, 0$, so $2, \ldots, m$ must beat $m + 1$. Then $1$ beats $2, \ldots, m + 1$, so $m + 2, \ldots, n - 1, 0$ must beat $1$. This means that $m + 2$ beats $m + 3, \ldots n - 1, 0, 1$, so $2, \ldots, m + 1$ must beat $m + 2$. Continuing in this fashion, we see that for vertices $x$ and $y$, $x \to y$ if and only if $y - x \in \{1, 2, \ldots, m\}$, so $T$ is isomorphic to the consecutive rotational $n$-tournament. \end{proof}
We now find the argument maximum of $H_4$ on $\mathcal{R}_{n}$.
\begin{theorem} A $(4k+3)$-tournament $T$ achieves the maximum value of $H_4$ on $\mathcal{R}_{4k+3}$ if and only if $T$ is doubly regular. A $(4k+1)$-tournament $T$ achieves the maximum value of $H_4$ on $\mathcal{R}_{4k+1}$ if and only if $T$ is quasi doubly regular. \end{theorem}
\begin{proof} Let $T$ be a regular tournament on $n = 2m + 1$ vertices. Now we look to minimize $t_4(T)$. Consider a vertex $x \in V(T)$ and the corresponding subtournament $T'$ on the $m$ vertices in $N^+(x)$. The number of transitive triples in $T'$ is given by \begin{align*}
t_3(T') &= \sum_{y \in N^+(x)} \binom{|N^+(x) \cap N^+(y)|}{2} \\
&= \frac{1}{2}\sum_{y \in N^+(x)}\left(|N^+(x) \cap N^+(y)| - \frac{m - 1}{2}\right)^2 \\
& \qquad + \frac{(m - 2)}{2}\sum_{y \in N^+(x)}|N^+(x) \cap N^+(y)| \;\;-\;\; \frac{1}{2}\!\!\sum_{y \in N^+(x)} \left(\frac{m - 1}{2}\right)^2 \\
&= \frac{1}{2}\left(\sum_{y \in N^+(x)}\left(|N^+(x) \cap N^+(y)| - \frac{m - 1}{2}\right)^2 + (m - 2)\binom{m}{2} - m\left(\frac{m - 1}{2}\right)^2\right). \end{align*}
If $n \equiv 3 \pmod 4$ and $n = 4k + 3$, then \begin{align*} t_3(T') &\geq \frac{1}{2}\left((m - 2)\binom{m}{2} - m\left(\frac{m - 1}{2}\right)^2\right)\\ &= \frac{m}{2}\left((2k - 1)k - k^2\right) \\ &= m\frac{2k^2 - k - k^2}{2} \\ &= m\binom{k}{2}, \end{align*}
with equality if and only if $|N^+(x) \cap N^+(y)| = \frac{m - 1}{2} = k$ for each $y \in N^+(x)$. Now, since $t_3(T')$ is also the number of $TT_4$'s in $T$ in which $x$ is the source, it follows that $$t_4(T) \geq nm\binom{k}{2},$$ with equality if and only if $T$ is doubly regular.
If $n \equiv 1 \pmod 4$ and $n = 4k + 1$, then \begin{align*} t_3(T') &\geq \frac{1}{2}\left(m\left(\frac{1}{2}\right)^2 + (m - 2)\binom{m}{2} - m\left(\frac{m - 1}{2}\right)^2\right) \\ &= \frac{m}{2}\left(\frac{1}{4} + (2k - 2)\frac{2k - 1}{2} - \left(k - \frac{1}{2}\right)^2\right) \\ &= k\left((k - 1)(2k - 1) - k^2 + k\right) \\ &= k(k - 1)^2, \end{align*}
with equality if and only if $\left|N^+(x) \cap N^+(y)| - \frac{m - 1}{2}\right| = \frac{1}{2}$ for each $y \in N^+(x)$. Therefore, $$t_4(T) \geq nk(k - 1)^2,$$ with equality if and only if $T$ is quasi doubly regular. \end{proof}
From this, we obtain the tight bounds for regular tournaments $$-\frac{n(n - 1)\left(3n^3 - 17n^2 + n - 3\right)}{48} \leq \binom{n}{2}^4H_4^*(T) \leq -\frac{n^2(n-1)(n^2 - 6n + 1)}{16} \quad \text{for } n \equiv 3 (\text{mod } 4);$$ $$-\frac{n(n - 1)\left(3n^3 - 17n^2 + n - 3\right)}{48} \leq \binom{n}{2}^4H_4^*(T) \leq -\frac{n(n - 1)^2(n^2 - 5n - 4)}{16} \quad \text{for } n \equiv 1 (\text{mod } 4).$$
Since $H_\alpha$ depends entirely on the spectrum, we know that as $\alpha$ increases there is no partitioning of $\mathcal{R}_n$ via $H_{\alpha}$ beyond the spectrum-level. The next result shows that all doubly regular $n$-tournaments have the same spectrum.
\begin{theorem} For any doubly regular tournament $T$ on $n = 2m + 1 = 4k + 3$ vertices, $$\text{spec}(\bar L(T)) = \left\{0, \frac{1}{n - 1}\left(1 \pm \frac{i}{\sqrt n}\right)^{(m)}\right\},$$ where $(m)$ in superscript denotes that the eigenvalue has multiplicity $m$. \end{theorem} \begin{proof} Let $A$ be the adjacency matrix of a doubly-regular tournament $T$ on $n = 2m + 1 = 4k + 3$ vertices. Then $A A^t = mI + k(J - I)$ and $A + A^t = J - I$, where $I$ is the identity matrix and $J$ is the all-ones matrix. Then \begin{align*} (A - \lambda I)(A - \lambda I)^t &= A A^t - \lambda(A + A^t) + \lambda^2 I \\ &= mI + k(J - I) - \lambda(J - I) + \lambda^2 I \\ &= (k - \lambda)J + (m - k + \lambda + \lambda^2)I. \end{align*}
Since $spec(J) = \{n, 0^{(n - 1)}\}$, we have $$spec((A - \lambda I)(A - \lambda I)^t) = \{n(k - \lambda) + m - k + \lambda + \lambda^2, (m - k + \lambda + \lambda^2)^{(n - 1)}\}.$$ Therefore, \begin{align*}
|A - \lambda I|^2 &=
|(A - \lambda I)(A - \lambda I)^t| \\ &= (n(k - \lambda) + m - k + \lambda + \lambda^2) (m - k + \lambda + \lambda^2)^{n - 1} \\ &= (m^2 - 2m\lambda + \lambda^2) (k + 1 + \lambda + \lambda^2)^{n - 1} \\ &= \left((m - \lambda)(k + 1 + \lambda + \lambda^2)^m\right)^2. \end{align*}
Therefore, since $n$ is odd, $$|A - \lambda I| = (m - \lambda)(k + 1 + \lambda + \lambda^2)^m$$ and $$spec(A) = \left\{m, \left(-\frac{1}{2} \pm \frac{i\sqrt n}{2}\right)^{(m)}\right\}.$$ Finally, if $\bar L$ is the normalized Laplacian matrix of $T$, then $\bar L$ and $A$ are related by $$\bar L = \frac{2}{n(n - 1)}(mI - A),$$ so \begin{align*} spec(\bar L) &= \left\{\frac{2}{n(n - 1)}(m - m), \frac{2}{n(n - 1)}\left(m + \frac{1}{2} \pm \frac{i\sqrt n}{2}\right)^{(m)}\right\} \\ &= \left\{0, \frac{1}{n - 1}\left(1 \pm \frac{i}{\sqrt n}\right)^{(m)}\right\}. \end{align*} \end{proof}
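A quick numerical confirmation for $QR_7$ (a check of ours; here $n=7$, $m=3$, $k=1$):
\begin{verbatim}
import numpy as np

n, m = 7, 3
S = {1, 2, 4}                                  # quadratic residues mod 7
A = np.array([[1.0 if (j - i) % n in S else 0.0 for j in range(n)]
              for i in range(n)])
L = (np.diag(A.sum(axis=1)) - A) / (n * (n - 1) / 2)   # normalized Laplacian
lam = np.linalg.eigvals(L)
for z, mult in [(0, 1), ((1 + 1j / np.sqrt(n)) / (n - 1), m),
                ((1 - 1j / np.sqrt(n)) / (n - 1), m)]:
    assert np.count_nonzero(np.isclose(lam, z)) == mult
\end{verbatim}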
\begin{corollary} For integer $\alpha >4$, $H_{\alpha}$ is maximized on $\mathcal{R}_{4k+3}$ via doubly regular tournaments. \end{corollary}
\section{Von Neumann Entropy and Random Walks}
We believe we have a compelling argument that, as far as directed graphs are concerned, the R\'{e}nyi entropy calculation quantifies the \emph{regularity} of the directed graph. Entropy is apparently sensitive to local regularity vis-\'{a}-vis the refinement of the R\'{e}nyi ordering we observe on the set of regular tournaments, with the highest entropy being associated to doubly regular tournaments, tournaments that are regular and locally regular. But this is either saying nothing, given that `regularity' has not been precisely defined, or we are simply defining `regularity' as \emph{the extent to which entropy is high relative to other directed graphs}.
In this section we more precisely describe what the von Neumann entropy calculation is quantifying in graphs and directed graphs. First we establish a lemma about the magnitudes of the eigenvalues of the scaled Laplacian. Let $L$ and $\overline{L}$ be the Laplacian and normalized Laplacian of some loopless directed graph $\Gamma$ on the set of $n$ vertices $\{v_1, \dots, v_n\}$, $\{\lambda_1, \dots, \lambda_n\}$ the multiset of eigenvalues of $\overline{L}$, and let $d_i^+ = d_{\Gamma}^+(v_i)$ denote the out-degree of vertex $v_i$ in $\Gamma$.
The next lemma is established in order to support the following extension of the von Neumann entropy to a directed or undirected graph: if $\overline{L}$ is the Laplacian of a (di)graph $\Gamma$ normalized as in this paper, then the \emph{von Neumann entropy} of $\Gamma$ is $$S(\vec \lambda) = \frac{1}{\log 2}\left(\text{tr}(I - \overline{L}) - \sum_{j = 2}^\infty \frac{\text{tr}\left((I - \overline{L})^j\right)}{j(j - 1)}\right).$$
\begin{lemma}\label{Lem:Markov} Regarding $\Gamma, L, \overline{L}$, and $\{\lambda_1, \dots, \lambda_n\}$ as described above:
$|\lambda_k - 1| \leq 1$ for $1 \leq k \leq n$. \end{lemma}
\begin{proof} Consider the family $\mathcal F$ of matrices of the form \begin{equation*} M = \left(I - S L\right)^t \qquad \text{with} \qquad S = \begin{bmatrix} \frac{1}{s_1} & 0 & \cdots & 0 \\ 0 & \frac{1}{s_2} & & 0 \\ \vdots & &\ddots & \vdots \\ 0 & 0 & \cdots & \frac{1}{s_n} \end{bmatrix}, \end{equation*} where $d^+_i \leq s_i \neq 0$ for all $i$.
Since the row sums of $L$ are all zero, and $S$ scales each row of $L$ individually, the same is true of the rows of $S L$. Therefore, the row sums of $M^t$, and consequently the column sums of $M$, equal $1$.
Furthermore, note that all elements of $M$ are between $0$ and $1$. On the diagonal, the $k^\text{th}$ element is $1 - d^+_k/s_k$. Off the diagonal, each element is either $0$ or $1/s_k$ for some $s_k$. Hence $M$ is a Markov matrix, which guarantees that each of its eigenvalues has modulus at most $1$.
Notice in particular that $\overline{L}$ is of the form $S L$ with $s_1 = s_2 = \cdots = s_n = \sum_{i=1}^n d_i^+$. Now suppose that $\lambda$ is an eigenvalue of $\overline{L}$. Then $\lambda$ is also an eigenvalue of $\overline{L}^t$, so $1 - \lambda$ is an eigenvalue of $M = I - \overline{L}^t = (I - \overline{L})^t$, where $M \in \mathcal F$. This means that $1 - \lambda$ has modulus at most 1. \end{proof}
Recall that the function \begin{equation*} f(\lambda) = \left\{\begin{array}{l l} \lambda \log_2 \frac{1}{\lambda} & \text {if }\lambda \neq 0 \\ 0 & \text{if } \lambda = 0 \end{array}\right. \end{equation*} can be expanded as the power sum
$$f(\lambda) = \frac{1}{\log 2}\left((1 - \lambda) - \sum_{j = 2}^\infty \frac{(1 - \lambda)^j}{j(j - 1)}\right)\qquad \text{for $|\lambda - 1| \leq 1.$}$$
By Lemma \ref{Lem:Markov} the eigenvalues of the scaled Laplacian matrix are all within the radius of convergence of $f$ and the von Neumann entropy can be expressed as \begin{align*} S(\vec \lambda) &= \sum_{k = 1}^n f(\lambda_k) \\ &= \sum_{k = 1}^n \frac{1}{\log 2}\left((1 - \lambda_k) - \sum_{j = 2}^\infty \frac{(1 - \lambda_k)^j}{j(j - 1)}\right) \\ &= \frac{1}{\log 2}\left(\sum_{k = 1}^n (1 - \lambda_k) - \sum_{j = 2}^\infty \frac{1}{j(j - 1)}\sum_{k = 1}^n (1 - \lambda_k)^j\right). \end{align*}
\noindent Also, by Lemma \ref{Lem:Markov}, we know that $\{1 - \lambda_k\}_{k = 1}^n$ is the spectrum of the Markov matrix $M = (I - \bar L_{\Gamma})^t$. Therefore, for $j \geq 1$, $$\sum_{k = 1}^n (1 - \lambda_k)^j = \text{tr}(M^j),$$ and since $\lim_{j \to \infty} \text{tr}(M^j) = 1$, because $M$ is a Markov matrix, we can write $$S(\vec \lambda) = \frac{1}{\log 2}\left(\text{tr}(M) - \sum_{j = 2}^\infty \frac{\text{tr}(M^j)}{j(j - 1)}\right).$$\\
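As a purely illustrative check (ours, not part of the original text), the following Python snippet computes the two truncated sums for a small digraph, once from the eigenvalues of $\overline{L}$ and once from the traces of powers of $M$; since $\text{tr}(M^j) = \sum_{k}(1 - \lambda_k)^j$, the two truncations agree term by term. The example digraph and the truncation depth are arbitrary choices of ours.
\begin{verbatim}
import numpy as np

# A small loopless digraph on 4 vertices, given by its out-adjacency matrix.
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)
n = A.shape[0]
dplus = A.sum(axis=1)                  # out-degrees
g = dplus.sum()
Lbar = (np.diag(dplus) - A) / g        # normalized Laplacian (s_i = sum of out-degrees)
M = (np.eye(n) - Lbar).T               # the Markov matrix M = (I - Lbar)^t

lam = np.linalg.eigvals(Lbar)
J = 60                                 # truncation depth (illustrative)

# Truncated entropy from the eigenvalue power series ...
S_eig = (np.sum(1 - lam).real
         - sum(np.sum((1 - lam) ** j).real / (j * (j - 1))
               for j in range(2, J + 1))) / np.log(2)

# ... and from the trace series, truncated at the same J.
Mj = M.copy()
S_tr = np.trace(M)
for j in range(2, J + 1):
    Mj = Mj @ M
    S_tr -= np.trace(Mj) / (j * (j - 1))
S_tr /= np.log(2)

print(S_eig, S_tr, np.isclose(S_eig, S_tr))   # the two truncated values coincide
\end{verbatim}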
Let $g$ denote the sum $\sum d_i^+$ of out-degrees of vertices of $\Gamma$. Let $w_j(v_k)$ be a random walk of length $j$ starting at vertex $v_k$, where at each step the walk moves to each vertex in the out-set of its current vertex with probability $1/g$, and stays where it is with the remaining probability. Then entry $l,k$ of the matrix $M^j$ is the probability that $w_j(v_k)$ ends at $v_l$, and $$\text{tr}(M^j) = \sum_{k = 1}^n P(\text{$w_j(v_k)$ ends at $v_k$}).$$
Therefore, the von Neumann entropy can be expressed as $$S(\vec \lambda) = \frac{1}{\log 2}\left(n - 1 - \sum_{k = 1}^n \sum_{j = 2}^\infty \frac{P(\text{$w_j(v_k)$ ends at $v_k$})}{j(j - 1)}\right).$$ In this sense, the von Neumann entropy is a measure of how quickly a random walk will move away from its initial state and settle in to its limiting state.
Also, this viewpoint allows us to place general bounds on the von Neumann entropy. \begin{observation} \label{entropy bounds} For any loopless directed graph $\Gamma$, $S(\Gamma) \leq S(\vec d^+)$, where \[\vec d^+ = (d^+(v_1)/g, \ldots, d^+(v_n)/g)\] is the distribution of out-degrees in $\Gamma$, and equality holds if and only if $\Gamma$ has no (directed) cycles. \end{observation} \begin{proof} Clearly, $$P(\text{$w_j(v_k)$ ends at $v_k$}) \geq P(\text{$w_j(v_k)$ never leaves $v_k$}) = (1 - d^+_k/g)^j,$$ with equality if and only if $\Gamma$ has no directed cycles. Therefore, \begin{align*} S(\vec \lambda) &\leq \frac{1}{\log 2} \left(n - 1 - \sum_{k = 1}^n \sum_{j = 2}^\infty \frac{(1 - d^+(v_k)/g)^j}{j(j - 1)}\right) \\ &= \frac{1}{\log 2}\left(n - 1 - \sum_{k = 1}^n \left(\frac{d^+(v_k)}{g} \log \frac{d^+(v_k)}{g} + 1 - \frac{d^+(v_k)}{g}\right)\right) \\ &= -\sum_{k = 1}^n \frac{d^+(v_k)}{g} \log_2 \frac{d^+(v_k)}{g} \\ &= S(\vec d^+). \end{align*} \end{proof} Note that the condition for equality is equivalent to $\bar L_{\Gamma}$ being permutation equivalent to an upper-triangular matrix. This makes sense, since in that case the eigenvalues of $L_{\Gamma}$ are the out-degrees of the vertices of $\Gamma$.
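The following Python sketch (ours, purely illustrative) compares $S(\vec \lambda)$ and $S(\vec d^+)$ for a directed path, where the two agree, and for a directed $3$-cycle, where the inequality is strict. For $S(\vec \lambda)$ it uses the closed form $f(\lambda) = \text{Re}(-\lambda \log \lambda)/\log 2$ with the principal branch of the logarithm, which agrees with the defining series on the disk $|\lambda - 1| \le 1$.
\begin{verbatim}
import numpy as np

def vn_entropy(A):
    """S(lambda) for the digraph with out-adjacency matrix A."""
    dplus = A.sum(axis=1)
    Lbar = (np.diag(dplus) - A) / dplus.sum()
    lam = np.linalg.eigvals(Lbar)
    f = [0.0 if abs(z) < 1e-12 else (-z * np.log(z)).real for z in lam]
    return sum(f) / np.log(2)

def degree_entropy(A):
    """Shannon entropy S(d+) of the out-degree distribution."""
    p = A.sum(axis=1) / A.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Directed path a -> b -> c (acyclic): equality S(lambda) = S(d+) = 1.
path = np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]], dtype=float)
# Directed 3-cycle: strict inequality S(lambda) < S(d+) = log2(3).
cycle = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]], dtype=float)

for name, A in [("path", path), ("3-cycle", cycle)]:
    print(name, round(vn_entropy(A), 4), round(degree_entropy(A), 4))
\end{verbatim}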
\begin{corollary} For any loopless directed graph $\Gamma$, $S(\Gamma) < \log_2 n$. \end{corollary} \begin{proof} Since $\vec d^+$ is a probability distribution on $n$ outcomes, we have $S(\Gamma) = S(\vec \lambda) \leq S(\vec d^+) \leq \log_2 n$. If $S(\vec d^+) = \log_2 n$, then $d^+_1 = \ldots = d^+_n > 0$, and $\Gamma$ must have a directed cycle, so $S(\vec \lambda) < S(\vec d^+)$. \end{proof}
\end{document}
\begin{document}
\title{\bf Interval-valued fuzzy graphs} \normalsize \author{{\bf Muhammad Akram$^{\bf a}$\ and \ Wieslaw A. Dudek$^{\bf b}$} \\ {\small {\bf a.} Punjab University College of Information Technology, University of the Punjab,}\\ {\small Old Campus, Lahore-54000, Pakistan.}\\
{\small E-mail: [email protected],
[email protected]}\\
{\small {\bf b.} Institute of Mathematics and Computer Science, Wroclaw University of Technology,}\\
{\small Wyb. Wyspianskiego 27, 50-370,Wroclaw, Poland.}\\
{\small E-mail: [email protected]} } \date{}
\maketitle
\hrule \begin{abstract} We define the Cartesian product, composition, union and join on interval-valued fuzzy graphs and investigate some of their properties. We also introduce the notion of interval-valued fuzzy complete graphs and present some properties of self complementary and self weak complementary interval-valued fuzzy complete graphs. \end{abstract} {\bf Keywords}: Interval-valued fuzzy graph, Self complementary, Interval-valued fuzzy complete graph.\\
{\bf Mathematics Subject Classification 2000}: 05C99\\
\hrule
\footnote{Corresponding Author:\\ M. Akram ([email protected], [email protected])}
\section{Introduction} In 1975, Zadeh \cite{LA1} introduced the notion of interval-valued fuzzy sets as an extension of fuzzy sets \cite{LA} in which the values of the membership degrees are intervals of numbers instead of single numbers.
Interval-valued fuzzy sets provide a more adequate description of uncertainty than traditional fuzzy sets. It is therefore important to use interval-valued fuzzy sets in applications, such as fuzzy control. One of the computationally most intensive parts of fuzzy control is defuzzification \cite{JMM}.
Since interval-valued fuzzy sets are widely studied and used, we describe briefly the work of Gorzalczany on approximate reasoning \cite{MB1, MB2}, Roy and Biswas on medical diagnosis \cite{MK}, Turksen on multivalued logic \cite{IB} and Mendel on intelligent control \cite{JMM}. \\ Fuzzy graph theory, as a generalization of Euler's graph theory, was first introduced by Rosenfeld \cite{RA} in 1975. The fuzzy relations between fuzzy sets were first considered by Rosenfeld and he developed the structure of fuzzy graphs obtaining analogs of several graph theoretical concepts. Later, Bhattacharya \cite{BP} gave some remarks on fuzzy graphs, and some operations on fuzzy graphs were introduced by Mordeson and Peng \cite{JN1}. The complement of a fuzzy graph was defined by Mordeson \cite{JN2} and further studied by Sunitha and Vijayakumar \cite{MS}. Bhutani and Rosenfeld introduced the concept of $M$-strong fuzzy graphs in \cite{KR2} and studied some properties. The concept of strong arcs in fuzzy graphs was discussed in \cite{KR1}. Hongmei and Lianhua gave the definition of interval-valued graph in \cite{JH}.\\ In this paper, we define the operations of Cartesian product, composition, union and join on interval-valued fuzzy graphs and investigate some properties. We also study whether isomorphism (resp. weak isomorphism) between interval-valued fuzzy graphs is an equivalence relation (resp. a partial order). We introduce the notion of interval-valued fuzzy complete graphs and present some properties of self complementary and self weak complementary interval-valued fuzzy complete graphs.\\
The definitions and terminologies that we used in this paper are standard. For other notations, terminologies and applications, the readers are referred to \cite{MA08, MA, AA, KT, FH, KP, SMS2, JN11,AM1, AP, LAZ75}.
\section{Preliminaries}
A {\it graph} is an ordered pair $G^*=(V,E),$ where $V$ is the set of vertices of $G^*$ and $E$ is the set of edges of $G^*$. Two vertices $x$ and $y$ in a graph $G^*$ are said to be adjacent in $G^*$ if $\{x,y\}$ is an edge of $G^*$. (For simplicity an edge $\{x,y\}$ will be denoted by $xy$.) A {\it simple graph} is a graph without loops and multiple edges. A {\it complete graph} is a simple graph in which every pair of distinct vertices is connected by an edge. The complete graph on $n$ vertices has $n(n-1)/2$ edges. We will consider only graphs with a finite number of vertices and edges.
By a {\it complementary graph} $\overline{G^*}$ of a simple graph $G^*$ we mean a graph having the same vertices as $G^*$ and such that two vertices are adjacent in $\overline{G^*}$ if and only if they are not adjacent in $G^*$.
An {\it isomorphism} of graphs $G^*_1$ and $G^*_2$ is a bijection $f$ between the vertex sets of $G^*_1$ and $G^*_2$ such that any two vertices $v_1$ and $v_2$ of $G^*_1$ are adjacent in $G^*_1$ if and only if $f(v_1)$ and $f(v_2)$ are adjacent in $G^*_2$. Isomorphic graphs are denoted by $G^*_1 \simeq G^*_2.$
Let $G^*_1=(V_1, E_1)$ and $G^*_2=(V_2, E_2)$ be two simple graphs, we can construct several new graphs. The first construction called the {\it Cartesian product} of $G^*_1$ and $G^*_2$ gives a graph $G^*_1 \times G^*_2=(V, E)$ with $V=V_1 \times V_2$ and \[
E= \{(x,x_2)(x,y_2)| x\in V_1, x_2y_2\in E_2\}\cup\{(x_1,z)(y_1, z)|x_1y_1 \in E_1,z\in V_2 \}. \] The {\it composition} of graphs $G^*_1$ and $G^*_2$ is the graph $G^*_1[G^*_2]=(V_1 \times V_2,E^0)$, where $$
E^0= E\cup\{(x_1,x_2)(y_1,y_2)|x_1y_1 \in E_1, x_2\neq y_2\} $$ and $E$ is defined as in $G^*_1 \times G^*_2$. Note that in general $G^*_1[G^*_2]\neq G^*_2[G^*_1].$
The {\it union} of graphs $G^*_1$ and $G^*_2$ is defined as $G^*_1 \cup G^*_2=(V_1\cup V_2, E_1\cup E_2)$.
The {\it join} of $G^*_1$ and $G^*_2$ is the simple graph $G^*_1 + G^*_2=(V_1 \cup V_2, E_1 \cup E_2 \cup E')$, where $E'$ is the set of all edges joining the nodes of $V_1$ and $V_2$. In this construction it is assumed that $V_1\cap V_2=\emptyset$.
By a {\it fuzzy subset} $\mu$ on a set $X$ we mean a map $\mu :X\to [0,1]$. A map $\nu: X\times X\to [0,1]$ is called a {\it fuzzy relation} on $X$ if $\nu(x,y)\leq \min(\mu(x),\mu(y))$ for all $x,y\in X$. A fuzzy relation $\nu$ is {\it symmetric} if $\nu(x, y)= \nu(y, x)$ for all $x,y\in X$.
An {\it interval number} $D$ is an interval $[a^{-}, a^{+}]$ with $0\leq a^-\leq a^+\leq 1$. The interval $[a,a]$ is identified with the number $a\in [0,1]$. $D[0,1]$ denotes the set of all interval numbers.
For interval numbers $D_1=[a_1^{-}, b_1^{+}]$ and $D_2=[a_2^{-}, b_2^{+}]$, we define \begin{itemize} \item ${\rm rmin}(D_1, D_2)={\rm rmin}([a_1^{-}, b_1^{+}], [a_2^{-}, b_2^{+}])= [\min\{a_1^{-}, a_2^{-}\}, \min\{b_1^{+}, b_2^{+}\}]$, \item ${\rm rmax}(D_1, D_2)={\rm rmax}([a_1^{-}, b_1^{+}], [a_2^{-}, b_2^{+}])= [\max\{a_1^{-}, a_2^{-}\}, \max\{b_1^{+}, b_2^{+}\}]$, \item $D_1 + D_2=[a_1^-+a_2^--a_1^-\cdot a_2^-, b_1^++b_2^+-b_1^+\cdot b_2^+]$, \item $D_1 \leq D_2$ $\Longleftrightarrow$ $a_1^{-} \leq a_2^{-}$ and $b_1^{+} \leq b_2^{+}$, \item $D_1=D_2$ $\Longleftrightarrow$ $a_1^{-} = a_2^{-}$ and $b_1^{+} = b_2^{+}$, \item $D_1 <D_2$ $\Longleftrightarrow$ $D_1 \leq D_2$ and $D_1 \neq D_2$, \item $kD= k[a_1^{-}, b_1^{+}]= [ka_1^{-}, kb_1^{+}]$, where $ 0 \leq k \leq 1$. \end{itemize} Then, $(D[0,1],\leq,\vee,\wedge)$ is a complete lattice with $[0,0]$ as the least element and $[1,1]$ as the greatest.
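As an aside (not in the original text), these operations are straightforward to implement; the following Python sketch, in which an interval number $[a^-, a^+]$ is represented as a pair of floats, is only meant to make the definitions concrete.
\begin{verbatim}
# Minimal sketch (our own illustration) of the interval-number operations above;
# an interval number [a-, a+] is a tuple (a, b) with 0 <= a <= b <= 1.

def rmin(D1, D2):
    return (min(D1[0], D2[0]), min(D1[1], D2[1]))

def rmax(D1, D2):
    return (max(D1[0], D2[0]), max(D1[1], D2[1]))

def isum(D1, D2):
    # D1 + D2 = [a1 + a2 - a1*a2, b1 + b2 - b1*b2]
    return (D1[0] + D2[0] - D1[0] * D2[0], D1[1] + D2[1] - D1[1] * D2[1])

def leq(D1, D2):
    # D1 <= D2 iff a1 <= a2 and b1 <= b2 (a partial, not total, order)
    return D1[0] <= D2[0] and D1[1] <= D2[1]

D1, D2 = (0.2, 0.5), (0.3, 0.4)
print(rmin(D1, D2), rmax(D1, D2), isum(D1, D2), leq(D1, D2), leq(D2, D1))
# -> (0.2, 0.4) (0.3, 0.5) (0.44, 0.7) False False   (D1 and D2 are incomparable)
\end{verbatim}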
The {\it interval-valued fuzzy set} $A$ in $V$ is defined by \[ A=\{(x, [\mu^-_A(x), \mu^+_A(x)]): x \in V \}, \] where $\mu^-_A$ and $\mu^+_A$ are fuzzy subsets of $V$ such that $\mu^-_A (x)\leq \mu^+_A(x)$ for all $x\in V.$ For any two interval-valued fuzzy sets $A=[\mu^-_A(x),\mu^+_A(x)]$ and $B=[\mu^-_B(x), \mu^+_B(x)]$ in $V$ we define: \begin{itemize} \item $A\bigcup B=\{(x,\max(\mu^-_A(x),\mu^-_B(x)),\max(\mu^+_A(x),\mu^+_B(x))):x\in V\}$, \item $A\bigcap B=\{(x,\min(\mu^-_A(x),\mu^-_B(x)),\min(\mu^+_A(x),\mu^+_B(x))):x\in V\}$. \end{itemize}
If $G^*=(V,E)$ is a graph, then by an {\it interval-valued fuzzy relation} $B$ on a set $E$ we mean an interval-valued fuzzy set such that \[ \mu^-_B(xy)\leq\min(\mu^-_A(x),\mu^-_A(y)), \] \[
\mu^+_B(xy) \leq \min(\mu^+_A(x),\mu^+_A(y)) \] for all $xy\in E$.
\section{Operations on interval-valued fuzzy graphs}
Throughout this paper, $G^*$ is a crisp graph, and $G$ is an interval-valued fuzzy graph.
\begin{definition} By an {\it interval-valued fuzzy graph} of a graph $G^*=(V,E)$ we mean a pair $G=(A,B)$, where $A=[\mu^-_A,\mu^+_A]$ is an interval-valued fuzzy set on $V$ and $B=[\mu^-_B,\mu^+_B]$ is an interval-valued fuzzy relation on $E$. \end{definition}
\begin{example} Consider a graph $G^*=(V, E)$ such that $V=\{x,y,z\}$, $E=\{xy, yz,zx\}$. Let $A$ be an interval-valued fuzzy set of $V$ and let $B$ be an interval-valued fuzzy set of $E \subseteq V \times V$ defined by
\[A=< (\frac{x}{0.2}, \frac{y}{0.3}, \frac{z}{0.4}), (\frac{x}{0.4}, \frac{y}{0.5}, \frac{z}{0.5})
>, \]
\[B=< (\frac{xy}{0.1}, \frac{yz}{0.2}, \frac{zx}{0.1}), (\frac{xy}{0.3}, \frac{yz}{0.4}, \frac{zx}{0.4})
>. \]
\begin{center} \begin{tikzpicture}[scale=3]
\path (1,0.30) node (l) {$y$};
\path (1.9,-1.55) node (l){$G$}; \path (2,0.30) node (l) {$z$}; \path (2,-1.30) node (l) {$x$}; \path (1.5,0.1) node (l) {\tiny {$[0.2, 0.4]$}}; \path (2.20,-0.5) node (l) {\tiny{$[0.1,0.4]$}}; \path (1.7,-0.5) node (l) {\tiny{$[0.1, 0.3]$}};
\tikzstyle{every node}=[draw,shape=circle];
\path (2,0) node (z) {\tiny{$[0.4, 0.5]$}}; \path (2,-1) node (x) {\tiny{$[0.2, 0.4]$}}; \path (1,0) node (y) {\tiny{$[0.3, 0.5]$}};
\draw
(y) -- (z)
(y) -- (x)
(z) -- (x); \end{tikzpicture} \end{center}
By routine computations, it is easy to see that $G=(A, B)$ is an interval-valued fuzzy graph of $G^*$. \end{example}
\begin{definition}\label{D-33} The {\it Cartesian product $G_1\times G_2$ of two interval-valued fuzzy graphs} $G_1=(A_1,B_1)$ and $G_2=(A_2,B_2)$ of the graphs $G^*_1=(V_1,E_1)$ and $G^*_2=(V_2,E_2)$ is defined as a pair $(A_1\times A_2,B_1\times B_2)$ such that \begin{itemize} \item [\rm(i)] $\left\{\begin{array}{ll}(\mu^-_{A_1} \times \mu^-_{A_2})(x_1, x_2)=\min(\mu^-_{A_1}(x_1),\mu^-_{A_2}(x_2)) \\ (\mu^+_{A_1}\times\mu^+_{A_2})(x_1, x_2)=\min(\mu^+_{A_1}(x_1), \mu^+_{A_2}(x_2))\end{array}\right.$
for all\ $(x_1, x_2) \in V,$
\item [\rm(ii)] $\left\{\begin{array}{ll}(\mu^-_{B_1}\times\mu^-_{B_2})((x,x_2)(x,y_2))=\min(\mu^-_{A_1}(x), \mu^-_{B_2}(x_2y_2))\\ (\mu^+_{B_1} \times\mu^+_{B_2})((x,x_2)(x,y_2))= \min(\mu^+_{A_1}(x), \mu^+_{B_2}(x_2y_2))\end{array}\right.$
for all $x\in V_1$ and $x_2y_2 \in E_2$,
\item [\rm(iii)] $\left\{\begin{array}{ll}(\mu^-_{B_1}\times \mu^-_{B_2})((x_1,z)(y_1,z))=\min(\mu^-_{B_1}(x_1y_1),\mu^-_{A_2}(z))\\ (\mu^+_{B_1}\times\mu^+_{B_2})((x_1,z)(y_1,z))=\min(\mu^+_{B_1}(x_1y_1),\mu^+_{A_2}(z))\end{array}\right.$
for all $z\in V_2$ and $x_1y_1 \in E_1$. \end{itemize} \end{definition}
\begin{example}\label{Ex-34} Let $G^*_1=(V_1, E_1)$ and $G^*_2=(V_2, E_2)$ be graphs such that $V_1=\{a, b\}$, $V_2=\{c, d\}$, $E_1=\{ab\}$ and $E_2=\{cd\}$. Consider two interval-valued fuzzy graphs $G_1=(A_1,B_1)$ and $G_2=(A_2,B_2)$, where
\[A_1=< (\frac{a}{0.2}, \frac{b}{0.3}), (\frac{a}{0.4}, \frac{b}{0.5})
>, \ \ \ \ \ \ B_1=< \frac{ab}{0.1}, \frac{ab}{0.2} >, \]
\[A_2=< (\frac{c}{0.1}, \frac{d}{0.2}), (\frac{c}{0.4}, \frac{d}{0.6})
>, \ \ \ \ \ \ B_2=< \frac{cd}{0.1}, \frac{cd}{0.3} >. \] Then, as it is not difficult to verify \[ (\mu^-_{B_1}\times\mu^-_{B_2})((a,c)(a,d))=0.1, \ \ \ \ \ \ (\mu^+_{B_1}\times\mu^+_{B_2})((a,c)(a,d))=0.3,\] \[ (\mu^-_{B_1}\times\mu^-_{B_2})((a,c)(b,c))=0.1, \ \ \ \ \ \ (\mu^+_{B_1}\times\mu^+_{B_2})((a,c)(b,c))=0.2,\] \[ (\mu^-_{B_1}\times\mu^-_{B_2})((a,d)(b,d))=0.1, \ \ \ \ \ \ (\mu^+_{B_1}\times\mu^+_{B_2})((a,d)(b,d))=0.2,\] \[ (\mu^-_{B_1}\times\mu^-_{B_2})((b,c)(b,d))=0.1, \ \ \ \ \ \ (\mu^+_{B_1}\times\mu^+_{B_2})((b,c)(b,d))=0.3.\] \begin{center}
\begin{tikzpicture}[scale=3] \path (1,0.30) node (l) {$c$}; \path (1,-1.50) node (l) {$G_2$}; \path (0,0.30) node (l) {$a$}; \path (0,-1.50) node (l) {$G_1$}; \path (0,-1.30) node (l) {$b$}; \path (1,-1.30) node (l) {$d$}; \path (-0.2,-0.5) node (l) {\tiny{$[0.1,0.2]$}}; \path (1.20,-0.5) node (l) {\tiny{$[0.1,0.3]$}}; \tikzstyle{every node}=[draw,shape=circle];
\path (0,0) node (a) {\tiny{$[0.2,0.4]$}}; \path (1,0) node (b) {\tiny{$[0.1,0.4]$}}; \path (0,-1) node (d) {\tiny{$[0.3,0.5]$}}; \path (1,-1) node (e) {\tiny{$[0.2,0.6]$}};
\draw (a) -- (d)
(b) -- (e);
\end{tikzpicture} \begin{tikzpicture}[scale=3]
\path (0.5,-0.5) node (l) {$G_1 \times G_2$}; \path (1,0.30) node (l) {$(a,d)$}; \path (0,0.30) node (l) {$(a,c)$}; \path (0,-1.30) node (l) {$(b,c)$}; \path (1,-1.30) node (l) {$(b,d)$}; \path (0.5,0.1) node (l) {\tiny {$[0.1,0.3]$}};
\path (-0.2,-0.5) node (l) {\tiny{$[0.1,0.2]$}}; \path (1.20,-0.5) node (l) {\tiny{$[0.1,0.2]$}}; \path (0.5,-1.10) node (l) {\tiny{$[0.1,0.3]$}};
\tikzstyle{every node}=[draw,shape=circle];
\path (0,0) node (a) {\tiny{$[0.1,0.4]$}}; \path (1,0) node (b) {\tiny{$[0.2,0.4]$}}; \path (0,-1) node (d) {\tiny{$[0.1,0.4]$}}; \path (1,-1) node (e) {\tiny{$[0.2,0.5]$}};
\draw (a) -- (b)
(a) -- (d)
(d) -- (e)
(b) -- (e); \end{tikzpicture} \end{center} By routine computations, it is easy to see that $G_1\times G_2$ is an interval-valued fuzzy graph of $G^*_1\times G^*_2$. \end{example}
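The `routine computations' above are easily mechanized. The following Python sketch (our own illustration, with memberships stored as dictionaries of interval numbers) implements conditions (i)--(iii) of the definition of the Cartesian product and reproduces the edge memberships of this example.
\begin{verbatim}
# Illustrative sketch (ours) of the Cartesian product defined above, applied
# to the data of the example: memberships map names to interval numbers (lo, hi).
A1 = {'a': (0.2, 0.4), 'b': (0.3, 0.5)}
B1 = {('a', 'b'): (0.1, 0.2)}
A2 = {'c': (0.1, 0.4), 'd': (0.2, 0.6)}
B2 = {('c', 'd'): (0.1, 0.3)}

def rmin(D1, D2):
    return (min(D1[0], D2[0]), min(D1[1], D2[1]))

# (i)   vertex memberships of G1 x G2
A = {(x1, x2): rmin(A1[x1], A2[x2]) for x1 in A1 for x2 in A2}
# (ii)  edges (x,x2)(x,y2) with x in V1 and x2y2 in E2
B = {((x, x2), (x, y2)): rmin(A1[x], B2[(x2, y2)]) for x in A1 for (x2, y2) in B2}
# (iii) edges (x1,z)(y1,z) with z in V2 and x1y1 in E1
B.update({((x1, z), (y1, z)): rmin(B1[(x1, y1)], A2[z]) for z in A2 for (x1, y1) in B1})

for e, val in sorted(B.items()):
    print(e, val)
# ((a,c),(a,d)) -> (0.1, 0.3), ((a,c),(b,c)) -> (0.1, 0.2),
# ((a,d),(b,d)) -> (0.1, 0.2), ((b,c),(b,d)) -> (0.1, 0.3), matching the example.
\end{verbatim}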
\begin{proposition} The Cartesian product $G_1\times G_2=(A_1\times A_2,B_1\times B_2)$ of two interval-valued fuzzy graphs of the graphs $G^*_1$ and $G^*_2$ is an interval-valued fuzzy graph of $G^*_1 \times G^*_2$. \end{proposition} \begin{proof} We verify only conditions for $B_1\times B_2$ because conditions for $A_1\times A_2$ are obvious.
Let $x\in V_1$, $x_2y_2\in E_2$. Then \begin{eqnarray*} (\mu^-_{B_1} \times \mu^-_{B_2})((x, x_2)(x, y_2)) &=& \min(\mu^-_{A_1}(x), \mu^-_{B_2}(x_2y_2)) \\
&\leq& \min(\mu^-_{A_1}(x), \min(\mu^-_{A_2}(x_2), \mu^-_{A_2}(y_2))) \\
&=& \min(\min(\mu^-_{A_1}(x), \mu^-_{A_2}(x_2)), \min(\mu^-_{A_1}(x),
\mu^-_{A_2}(y_2)))\\
&=& \min((\mu^-_{A_1} \times \mu^-_{A_2})(x, x_2), (\mu^-_{A_1} \times \mu^-_{A_2})(x,
y_2)),\\[4pt] (\mu^+_{B_1} \times \mu^+_{B_2})((x, x_2)(x, y_2)) &=& \min(\mu^+_{A_1}(x), \mu^+_{B_2}(x_2y_2)) \\
&\leq& \min(\mu^+_{A_1}(x), \min(\mu^+_{A_2}(x_2), \mu^+_{A_2}(y_2))) \\
&=& \min(\min(\mu^+_{A_1}(x), \mu^+_{A_2}(x_2)), \min(\mu^+_{A_1}(x),
\mu^+_{A_2}(y_2)))\\
&=& \min((\mu^+_{A_1} \times \mu^+_{A_2})(x, x_2), (\mu^+_{A_1} \times \mu^+_{A_2})(x,
y_2)). \end{eqnarray*}
Similarly for $z\in V_2$ and $x_1y_1\in E_1$ we have
\begin{eqnarray*} (\mu^-_{B_1} \times \mu^-_{B_2})((x_1, z)(y_1, z)) &=&\min(\mu^-_{B_1}(x_1 y_1),\mu^-_{A_2}(z)) \\
&\leq& \min( \min(\mu^-_{A_1}(x_1), \mu^-_{A_1}(y_1)), \mu^-_{A_2}(z)) \\
&=& \min(\min(\mu^-_{A_1}(x_1), \mu^-_{A_2}(z)), \min(\mu^-_{A_1}(y_1),
\mu^-_{A_2}(z)))\\
&=& \min((\mu^-_{A_1} \times \mu^-_{A_2})(x_1, z), (\mu^-_{A_1} \times \mu^-_{A_2})(y_1, z)),\\[4pt] (\mu^+_{B_1}\times\mu^+_{B_2})((x_1, z)(y_1, z)) &=&\min(\mu^+_{B_1}(x_1 y_1),\mu^+_{A_2}(z)) \\
&\leq& \min( \min(\mu^+_{A_1}(x_1), \mu^+_{A_1}(y_1)), \mu^+_{A_2}(z)) \\
&=& \min(\min(\mu^+_{A_1}(x_1), \mu^+_{A_2}(z)), \min(\mu^+_{A_1}(y_1),
\mu^+_{A_2}(z)))\\
&=& \min((\mu^+_{A_1} \times \mu^+_{A_2})(x_1, z), (\mu^+_{A_1} \times \mu^+_{A_2})(y_1, z)). \end{eqnarray*} This completes the proof. \end{proof}
\begin{definition}\label{D-36} The {\it composition $G_1[G_2]=(A_1\circ A_2,B_1\circ B_2)$ of two interval-valued fuzzy graphs} $G_1$ and $G_2$ of the graphs $G^*_1$ and $G^*_2$ is defined as follows: \begin{itemize}
\item [\rm (i)] $\left\{\begin{array}{ll}(\mu^-_{A_1} \circ \mu^-_{A_2})(x_1, x_2)=\min(\mu^-_{A_1}(x_1),
\mu^-_{A_2}(x_2)) \\ (\mu^+_{A_1} \circ \mu^+_{A_2})(x_1, x_2)=\min(\mu^+_{A_1}(x_1),
\mu^+_{A_2}(x_2))\end{array}\right.$
for all $(x_1, x_2) \in V,$
\item [\rm(ii)] $\left\{\begin{array}{ll}(\mu^-_{B_1}\circ \mu^-_{B_2})((x,x_2)(x,y_2))=\min(\mu^-_{A_1}(x),\mu^-_{B_2}(x_2y_2))\\ (\mu^+_{B_1}\circ\mu^+_{B_2})((x,x_2)(x,y_2))=\min(\mu^+_{A_1}(x), \mu^+_{B_2}(x_2y_2))\end{array}\right.$
for all $x\in V_1$ and $x_2y_2 \in E_2$,
\item [\rm(iii)] $\left\{\begin{array}{ll}(\mu^-_{B_1} \circ \mu^-_{B_2})((x_1,z)(y_1, z))=\min(\mu^-_{B_1}(x_1y_1), \mu^-_{A_2}(z))\\ (\mu^+_{B_1} \circ \mu^+_{B_2})((x_1,z)(y_1,z))= \min(\mu^+_{B_1}(x_1y_1), \mu^+_{A_2}(z))\end{array}\right.$
for all $z\in V_2$ and $x_1y_1\in E_1$,
\item [\rm(iv)] $\left\{\begin{array}{ll}(\mu^-_{B_1}\circ\mu^-_{B_2})((x_1, x_2)(y_1, y_2))= \min(\mu^-_{A_2}(x_2), \mu^-_{A_2}(y_2),\mu^-_{B_1}(x_1y_1))\\ (\mu^+_{B_1} \circ \mu^+_{B_2})((x_1, x_2) (y_1,y_2))= \min(\mu^+_{A_2}(x_2), \mu^+_{A_2}(y_2), \mu^+_{B_1}(x_1y_1))\end{array}\right.$
for all $(x_1, x_2)(y_1, y_2) \in E^0-E$. \end{itemize} \end{definition}
\begin{example}\label{Ex-37} Let $G^*_1$ and $G^*_2$ be as in the previous example. Consider two interval-valued fuzzy graphs $G_1=(A_1,B_1)$ and $G_2=(A_2,B_2)$ defined by
\[A_1=< (\frac{a}{0.2}, \frac{b}{0.3}), (\frac{a}{0.5}, \frac{b}{0.5})
>, \ \ \ \ \ \ B_1=< \frac{ab}{0.2},\frac{ab}{0.4}>, \]
\[A_2=< (\frac{c}{0.1}, \frac{d}{0.3}), (\frac{c}{0.4}, \frac{d}{0.6})
>, \ \ \ \ \ \ B_2=<\frac{cd}{0.1}, \frac{cd}{0.3} >. \] Then we have \[(\mu^-_{B_1}\circ\mu^-_{B_2})((a,c)(a,d))=0.1,\ \ \ \ \ \ (\mu^+_{B_1}\circ\mu^+_{B_2})((a,c)(a,d))=0.3, \] \[(\mu^-_{B_1}\circ\mu^-_{B_2})((b,c)(b,d))=0.1,\ \ \ \ \ \ (\mu^+_{B_1}\circ\mu^+_{B_2})((b,c)(b,d))=0.3, \] \[(\mu^-_{B_1}\circ\mu^-_{B_2})((a,c)(b,c))=0.1,\ \ \ \ \ \ (\mu^+_{B_1}\circ\mu^+_{B_2})((a,c)(b,c))=0.4, \] \[(\mu^-_{B_1}\circ\mu^-_{B_2})((a,d)(b,d))=0.2,\ \ \ \ \ \ (\mu^+_{B_1}\circ\mu^+_{B_2})((a,d)(b,d))=0.4, \] \[(\mu^-_{B_1}\circ\mu^-_{B_2})((a,c)(b,d))=0.1,\ \ \ \ \ \ (\mu^+_{B_1}\circ\mu^+_{B_2})((a,c)(b,d))=0.4, \] \[(\mu^-_{B_1}\circ\mu^-_{B_2})((b,c)(a,d))=0.1,\ \ \ \ \ \ (\mu^+_{B_1}\circ\mu^+_{B_2})((b,c)(a,d))=0.4. \] \begin{center}
\begin{tikzpicture}[scale=3] \path (1,0.30) node (l) {$c$}; \path (1,-1.50) node (l) {$G_2$}; \path (0,0.30) node (l) {$a$}; \path (0,-1.50) node (l) {$G_1$}; \path (0,-1.30) node (l) {$b$}; \path (1,-1.30) node (l) {$d$}; \path (-0.2,-0.5) node (l) {\tiny{$[0.2,0.4]$}}; \path (1.20,-0.5) node (l) {\tiny{$[0.1,0.3]$}}; \tikzstyle{every node}=[draw,shape=circle];
\path (0,0) node (a) {\tiny{$[0.2,0.5]$}}; \path (1,0) node (b) {\tiny{$[0.1,0.4]$}}; \path (0,-1) node (d) {\tiny{$[0.3,0.5]$}}; \path (1,-1) node (e) {\tiny{$[0.3,0.6]$}};
\draw (a) -- (d)
(b) -- (e);
\end{tikzpicture} \begin{tikzpicture}[scale=3] \path (1,0.30) node (l) {$(a,d)$}; \path (0.5,-1.50) node (l) {$G_1 [G_2]$}; \path (0,0.30) node (l) {$(a,c)$}; \path (0,-1.30) node (l) {$(b,c)$}; \path (1,-1.30) node (l) {$(b,d)$}; \path (0.5,0.1) node (l) {\tiny {$[0.1,0.3]$}};
\path (-0.2,-0.5) node (l) {\tiny{$[0.1,0.4]$}}; \path (1.20,-0.5) node (l) {\tiny{$[0.2,0.4]$}}; \path (0.5,-1.10) node (l) {\tiny{$[0.1,0.3]$}}; \path (0.7,-0.6) node (l)[rotate=-45] {\tiny{$[0.1,0.4]$}}; \path (0.30,-0.6) node (l)[rotate=45] {\tiny{$[0.1,0.4]$}};
\tikzstyle{every node}=[draw,shape=circle];
\path (0,0) node (a) {\tiny{$[0.1,0.4]$}}; \path (1,0) node (b) {\tiny{$[0.2,0.5]$}}; \path (0,-1) node (d) {\tiny{$[0.1,0.4]$}}; \path (1,-1) node (e) {\tiny{$[0.3,0.5]$}};
\draw (a) -- (b)
(a) -- (d)
(d) -- (e)
(b) -- (e)
(a) -- (e)
(b) -- (d); \end{tikzpicture}
\end{center} By routine computations, it is easy to see that $G_1[G_2]= (A_1 \circ A_2, B_1 \circ B_2)$ is an interval-valued fuzzy graph of $G^*_1[G^*_2]$.
\end{example}
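Again, the computation can be mechanized. The following Python sketch (ours) applies conditions (ii)--(iv) of the definition of the composition to the data of this example and prints the corresponding edge memberships; the dictionary representation is our own choice.
\begin{verbatim}
# Illustrative sketch (ours) of the composition defined above, applied to the
# data of this example.
A1 = {'a': (0.2, 0.5), 'b': (0.3, 0.5)}
B1 = {('a', 'b'): (0.2, 0.4)}
A2 = {'c': (0.1, 0.4), 'd': (0.3, 0.6)}
B2 = {('c', 'd'): (0.1, 0.3)}

def rmin(*Ds):
    return (min(D[0] for D in Ds), min(D[1] for D in Ds))

B = {}
# (ii)  (x,x2)(x,y2) with x in V1 and x2y2 in E2
for x in A1:
    for (x2, y2) in B2:
        B[((x, x2), (x, y2))] = rmin(A1[x], B2[(x2, y2)])
# (iii) (x1,z)(y1,z) with z in V2 and x1y1 in E1
for z in A2:
    for (x1, y1) in B1:
        B[((x1, z), (y1, z))] = rmin(B1[(x1, y1)], A2[z])
# (iv)  (x1,x2)(y1,y2) with x1y1 in E1 and x2 != y2
for (x1, y1) in B1:
    for x2 in A2:
        for y2 in A2:
            if x2 != y2:
                B[((x1, x2), (y1, y2))] = rmin(A2[x2], A2[y2], B1[(x1, y1)])

for e, val in sorted(B.items()):
    print(e, val)
\end{verbatim}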
\begin{proposition}\label{P-38} The composition $G_1[G_2]$ of interval-valued fuzzy graphs $G_1$ and $G_2$ of $G^*_1$ and $G^*_2$ is an interval-valued fuzzy graph of $G^*_1[G^*_2]$. \end{proposition} \begin{proof} Similarly as in the previous proof we verify the conditions for $B_1\circ B_2$ only.
In the case $x\in V_1$, $x_2 y_2\in E_2$, according to $(ii)$ we obtain \begin{eqnarray*} (\mu^-_{B_1} \circ \mu^-_{B_2})((x, x_2)(x, y_2)) &=& \min(\mu^-_{A_1}(x), \mu^-_{B_2}(x_2y_2)) \\
&\leq& \min(\mu^-_{A_1}(x), \min(\mu^-_{A_2}(x_2), \mu^-_{A_2}(y_2))) \\
&=& \min(\min(\mu^-_{A_1}(x), \mu^-_{A_2}(x_2)), \min(\mu^-_{A_1}(x),
\mu^-_{A_2}(y_2)))\\
&=& \min((\mu^-_{A_1} \circ \mu^-_{A_2})(x, x_2), (\mu^-_{A_1} \circ \mu^-_{A_2})(x,
y_2)),\\[4pt] (\mu^+_{B_1}\circ \mu^+_{B_2})((x, x_2)(x, y_2)) &=& \min(\mu^+_{A_1}(x), \mu^+_{B_2}(x_2y_2)) \\
&\leq& \min(\mu^+_{A_1}(x), \min(\mu^+_{A_2}(x_2), \mu^+_{A_2}(y_2))) \\
&=& \min(\min(\mu^+_{A_1}(x), \mu^+_{A_2}(x_2)), \min(\mu^+_{A_1}(x),
\mu^+_{A_2}(y_2)))\\
&=& \min((\mu^+_{A_1}\circ \mu^+_{A_2})(x, x_2), (\mu^+_{A_1} \circ \mu^+_{A_2})(x,
y_2)). \end{eqnarray*}
In the case $z\in V_2$, $x_1y_1\in E_1$ the proof is similar.
In the case $(x_1, x_2)(y_1, y_2)\in E^0-E$ we have $x_1y_1\in E_1$ and $x_2\neq y_2$, which according to $(iv)$ implies \begin{eqnarray*} (\mu^-_{B_1}\circ\mu^-_{B_2})((x_1,x_2)(y_1,y_2)) &=&\min(\mu^-_{A_2}(x_2),\mu^-_{A_2}(y_2),\mu^-_{B_1}(x_1y_1)) \\
&\leq& \min(\mu^-_{A_2}(x_2), \mu^-_{A_2}(y_2), \min(\mu^-_{A_1}(x_1), \mu^-_{A_1}(y_1)) )\\
&=& \min(\min(\mu^-_{A_1}(x_1), \mu^-_{A_2}(x_2)), \min(\mu^-_{A_1}(y_1),
\mu^-_{A_2}(y_2)))\\
&=&\min((\mu^-_{A_1}\circ\mu^-_{A_2})(x_1,
x_2),(\mu^-_{A_1}\circ\mu^-_{A_2})(y_1,y_2)),\\[4pt] (\mu^+_{B_1}\circ \mu^+_{B_2})((x_1, x_2)(y_1, y_2)) &=&\min(\mu^+_{A_2}(x_2),\mu^+_{A_2}(y_2),\mu^+_{B_1}(x_1y_1)) \\
&\leq& \min(\mu^+_{A_2}(x_2), \mu^+_{A_2}(y_2), \min(\mu^+_{A_1}(x_1), \mu^+_{A_1}(y_1)) )\\
&=& \min(\min(\mu^+_{A_1}(x_1), \mu^+_{A_2}(x_2)), \min(\mu^+_{A_1}(y_1),
\mu^+_{A_2}(y_2)))\\
&=& \min((\mu^+_{A_1} \circ \mu^+_{A_2})(x_1, x_2), (\mu^+_{A_1} \circ \mu^+_{A_2})(y_1, y_2)). \end{eqnarray*} This completes the proof. \end{proof}
\begin{definition}\label{D-39} The {\it union $G_1 \cup G_2=(A_1 \cup A_2, B_1 \cup B_2)$ of two interval-valued fuzzy graphs} $G_1$ and $G_2$ of the graphs $G^*_1$ and $G^*_2$ is defined as follows:
\begin{itemize} \item[(A)] \ $\left\{\begin{array}{ll}
(\mu^-_{A_1} \cup\mu^-_{A_2})(x)=\mu^-_{A_1}(x) \ \ \ {\rm if } \ x\in V_1 \ {\rm and } \ x\not\in{V_2}, \\ (\mu^-_{A_1} \cup \mu^-_{A_2})(x)=\mu^-_{A_2}(x) \ \ \ {\rm if } \ x\in V_2 \ {\rm and } \ x\not\in{V_1},\\ (\mu^-_{A_1}\cup\mu^-_{A_2})(x)=\max(\mu^-_{A_1}(x),\mu^-_{A_2}(x)) \ \ {\rm if } \ x\in V_1\cap V_2, \end{array}\right.$ \item[(B)] \ $\left\{\begin{array}{ll}
(\mu^+_{A_1} \cup \mu^+_{A_2})(x)=\mu^+_{A_1}(x) \ \ \ {\rm if } \ x\in V_1 \ {\rm and } \ x\not\in{V_2}, \\ (\mu^+_{A_1}\cup\mu^+_{A_2})(x)=\mu^+_{A_2}(x) \ \ \ {\rm if } \ x\in V_2 \ {\rm and } \ x\not\in{V_1},\\ (\mu^+_{A_1}\cup\mu^+_{A_2})(x)=\max(\mu^+_{A_1}(x),\mu^+_{A_2}(x)) \ \ {\rm if } \ x\in V_1\cap V_2, \end{array}\right.$ \item[(C)] \ $\left\{\begin{array}{ll}
(\mu^-_{B_1} \cup \mu^-_{B_2})(xy)=\mu^-_{B_1}(xy) \ \ \ {\rm if } \ xy\in E_1 \ {\rm and } \ xy\not\in{E_2}, \\ (\mu^-_{B_1}\cup\mu^-_{B_2})(xy)= \mu^-_{B_2}(xy) \ \ \ {\rm if } \ xy\in E_2 \ {\rm and } \ xy\not\in{E_1},\\ (\mu^-_{B_1}\cup\mu^-_{B_2})(xy)=\max(\mu^-_{B_1}(xy),\mu^-_{B_2}(xy)) \ \ {\rm if } \ xy\in E_1\cap E_2, \end{array}\right.$ \item[(D)] \ $\left\{\begin{array}{ll}
(\mu^+_{B_1} \cup \mu^+_{B_2})(xy)=\mu^+_{B_1}(xy) \ \ \ {\rm if } \ xy\in E_1 \ {\rm and } \ xy\not\in{E_2},\\ (\mu^+_{B_1}\cup\mu^+_{B_2})(xy)=\mu^+_{B_2}(xy) \ \ \ {\rm if } \ xy\in E_2 \ {\rm and } \ xy\not\in{E_1},\\ (\mu^+_{B_1}\cup\mu^+_{B_2})(xy)=\max(\mu^+_{B_1}(xy),\mu^+_{B_2}(xy)) \ \ {\rm if } \ xy\in E_1\cap E_2. \end{array}\right.$ \end{itemize} \end{definition}
\begin{example}\label{Ex-310} Let $G^*_1=(V_1, E_1)$ and $G^*_2=(V_2, E_2)$ be graphs such that $V_1=\{a, b, c, d, e\}$, $E_1=\{ab, bc, be, ce, ad, ed\}$, $V_2=\{a, b, c, d, f\}$ and $E_2=\{ab, bc, cf, bf, bd\}$. Consider two interval-valued fuzzy graphs $G_1=(A_1, B_1)$ and $G_2=(A_2, B_2)$ defined by \[A_1=< (\frac{a}{0.2}, \frac{b}{0.4}, \frac{c}{0.3}, \frac{d}{0.3}, \frac{e}{0.2}), (\frac{a}{0.4}, \frac{b}{0.5}, \frac{c}{0.6}, \frac{d}{0.7}, \frac{e}{0.6})>, \] \[B_1=< (\frac{ab}{0.1}, \frac{bc}{0.2}, \frac{ce}{0.1}, \frac{be}{0.2}, \frac{ad}{0.1}, \frac{de}{0.1} ), (\frac{ab}{0.3}, \frac{bc}{0.4}, \frac{ce}{0.5}, \frac{be}{0.5}, \frac{ad}{0.3}, \frac{de}{0.6})>,\] \[A_2=< (\frac{a}{0.2}, \frac{b}{0.2}, \frac{c}{0.3}, \frac{d}{0.2}, \frac{f}{0.4}), (\frac{a}{0.4}, \frac{b}{0.5}, \frac{c}{0.6}, \frac{d}{0.6}, \frac{f}{0.6})>, \] \[B_2=< (\frac{ab}{0.1}, \frac{bc}{0.2}, \frac{cf}{0.1}, \frac{bf}{0.1}, \frac{bd}{0.2} ), (\frac{ab}{0.2}, \frac{bc}{0.4}, \frac{cf}{0.5}, \frac{bf}{0.2},\frac{bd}{0.5})>.\] Then, according to the above definition:
$\begin{array}{ccccc} &(\mu^-_{A_1} \cup \mu^-_{A_2})(a)=0.2,& (\mu^-_{A_1} \cup \mu^-_{A_2})(b)=0.4,&\\ &(\mu^-_{A_1} \cup \mu^-_{A_2})(c)=0.3,& (\mu^-_{A_1} \cup \mu^-_{A_2})(d)=0.3,& \\ &(\mu^-_{A_1}\cup\mu^-_{A_2})(e)=0.2,&(\mu^-_{A_1} \cup \mu^-_{A_2})(f)=0.4,&\\ &(\mu^+_{A_1} \cup \mu^+_{A_2})(a)=0.4,&(\mu^+_{A_1} \cup \mu^+_{A_2})(b)=0.5,&\\ (\mu^+_{A_1} \cup \mu^+_{A_2})(c)=0.6,&(\mu^+_{A_1} \cup \mu^+_{A_2})(d)=0.7,& (\mu^+_{A_1} \cup \mu^+_{A_2})(e)=0.6,& (\mu^+_{A_1} \cup \mu^+_{A_2})(f)=0.6,\\ (\mu^-_{B_1} \cup \mu^-_{B_2})(ab)=0.1,&(\mu^-_{B_1} \cup \mu^-_{B_2})(bc)=0.2,&(\mu^-_{B_1} \cup \mu^-_{B_2})(ce)=0.1,& (\mu^-_{B_1}\cup\mu^-_{B_2})(be)=0.2, \\ (\mu^-_{B_1}\cup\mu^-_{B_2})(ad)=0.1,&(\mu^-_{B_1} \cup \mu^-_{B_2})(de)=0.1,&(\mu^-_{B_1} \cup \mu^-_{B_2})(bd)=0.2,& (\mu^-_{B_1}\cup\mu^-_{B_2})(bf)=0.1,\\ (\mu^+_{B_1}\cup \mu^+_{B_2})(ab)=0.3,&(\mu^+_{B_1} \cup \mu^+_{B_2})(bc)=0.4,&
(\mu^+_{B_1}\cup\mu^+_{B_2})(ce)=0.5,& (\mu^+_{B_1} \cup
\mu^+_{B_2})(be)=0.5,\\ (\mu^+_{B_1}\cup \mu^+_{B_2})(ad)=0.3,&(\mu^+_{B_1} \cup \mu^+_{B_2})(de)=0.6,&(\mu^+_{B_1}\cup \mu^+_{B_2})(bd)=0.5,& (\mu^+_{B_1}\cup\mu^+_{B_2})(bf)=0.2.\end{array}$ \begin{center} \begin{tikzpicture}[scale=3]
\path (1,0.80) node (l){$G_1$}; \path (1,0.30) node (l) {$b$}; \path (0,0.30) node (l) {$a$}; \path (2,0.30) node (l) {$c$}; \path (0,-1.30) node (l) {$d$}; \path (1,-1.30) node (l) {$e$}; \path (0.5,0.1) node (l) {\tiny {$[0.1,0.3]$}}; \path (1.5,0.1) node (l) {\tiny {$[0.2,0.4]$}}; \path (-0.2,-0.5) node (l) {\tiny{$[0.1,0.3]$}}; \path (0.8,-0.5) node (l) {\tiny{$[0.2,0.5]$}}; \path (0.5,-1.10) node (l) {\tiny{$[0.1,0.6]$}}; \path (1.7,-0.5) node (l) {\tiny{$[0.1,0.5]$}};
\tikzstyle{every node}=[draw,shape=circle];
\path (0,0) node (a) {\tiny{$[0.2,0.4]$}}; \path (1,0) node (b) {\tiny{$[0.4,0.5]$}}; \path (2,0) node (c) {\tiny{$[0.3,0.6]$}}; \path (0,-1) node (d) {\tiny{$[0.3,0.7]$}}; \path (1,-1) node (e) {\tiny{$[0.2,0.6]$}};
\draw (a) -- (b)
(b) -- (c)
(a) -- (d)
(d) -- (e)
(b) -- (e)
(c) -- (e); \end{tikzpicture}
\begin{tikzpicture}[scale=3]
\path (1,0.80) node (l) {$G_2$}; \path (1,0.30) node (l) {$b$}; \path (0,0.30) node (l) {$a$}; \path (2,0.30) node (l) {$c$}; \path (0,-1.30) node (l) {$d$}; \path (2,-1.30) node (l) {$f$}; \path (0.5,0.1) node (l) {\tiny {$[0.1,0.2]$}}; \path (1.5,0.1) node (l) {\tiny {$[0.2,0.4]$}}; \path (0.8,-0.5) node (l) {\tiny{$[0.2,0.5]$}}; \path (2.20,-0.5) node (l) {\tiny{$[0.1,0.5]$}}; \path (1.7,-0.5) node (l) {\tiny{$[0.1,0.2]$}};
\tikzstyle{every node}=[draw,shape=circle];
\path (0,0) node (a) {\tiny{$[0.2,0.4]$}}; \path (1,0) node (b) {\tiny{$[0.2,0.5]$}}; \path (2,0) node (c) {\tiny{$[0.3,0.6]$}}; \path (0,-1) node (d) {\tiny{$[0.2,0.6]$}}; \path (2,-1) node (f) {\tiny{$[0.4,0.6]$}};
\draw (a) -- (b)
(b) -- (c)
(b) -- (f)
(b) -- (d)
(c) -- (f); \end{tikzpicture}
\begin{tikzpicture}[scale=3]
\path (1,-1.80) node (l) {$G_1 \cup G_2$}; \path (1,0.30) node (l) {$b$}; \path (0,0.30) node (l) {$a$}; \path (2,0.30) node (l) {$c$}; \path (0,-1.30) node (l) {$d$}; \path (1,-1.30) node (l) {$e$}; \path (2,-1.30) node (l) {$f$}; \path (0.5,0.1) node (l) {\tiny {$[0.1,0.3]$}}; \path (1.5,0.1) node (l) {\tiny {$[0.2,0.4]$}}; \path (-0.2,-0.5) node (l) {\tiny{$[0.1,0.3]$}}; \path (0.8,-0.5) node (l) {\tiny{$[0.2,0.5]$}}; \path (0.5,-1.10) node (l) {\tiny{$[0.1,0.6]$}}; \path (1.7,-0.5) node (l) {\tiny{$[0.1,0.5]$}}; \path (2.20,-0.5) node (l) {\tiny{$[0.1,0.5]$}}; \path (0.4,-0.5) node (l)[rotate=40] {\tiny{$[0.2,0.5]$}}; \path (1.5,-0.2) node (l) {\tiny{$[0.1,0.2]$}}; \tikzstyle{every node}=[draw,shape=circle];
\path (0,0) node (a) {\tiny{$[0.2,0.4]$}}; \path (1,0) node (b) {\tiny{$[0.4,0.5]$}}; \path (2,0) node (c) {\tiny{$[0.3,0.6]$}}; \path (0,-1) node (d) {\tiny{$[0.3,0.7]$}}; \path (1,-1) node (e) {\tiny{$[0.2,0.6]$}}; \path (2,-1) node (f) {\tiny{$[0.4,0.6]$}};
\draw (a) -- (b)
(b) -- (c)
(a) -- (d)
(d) -- (e)
(b) -- (e)
(c) -- (e)
(c) -- (f)
(b) -- (d)
(b) -- (f); \end{tikzpicture} \end{center} Clearly, $G_1 \cup G_2=(A_1\cup A_2,B_1\cup B_2)$ is an interval-valued fuzzy graph of the graph $G_1^*\cup G_2^*$. \end{example}
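The following Python sketch (ours) computes the union according to the definition above for the data of this example; memberships are stored as dictionaries of interval numbers, and keys present in only one of the two graphs are simply copied over, so the output reproduces the values listed above.
\begin{verbatim}
# Illustrative sketch (ours) of the union defined above, applied to this example.
A1 = {'a': (0.2, 0.4), 'b': (0.4, 0.5), 'c': (0.3, 0.6), 'd': (0.3, 0.7), 'e': (0.2, 0.6)}
B1 = {('a','b'): (0.1, 0.3), ('b','c'): (0.2, 0.4), ('c','e'): (0.1, 0.5),
      ('b','e'): (0.2, 0.5), ('a','d'): (0.1, 0.3), ('d','e'): (0.1, 0.6)}
A2 = {'a': (0.2, 0.4), 'b': (0.2, 0.5), 'c': (0.3, 0.6), 'd': (0.2, 0.6), 'f': (0.4, 0.6)}
B2 = {('a','b'): (0.1, 0.2), ('b','c'): (0.2, 0.4), ('c','f'): (0.1, 0.5),
      ('b','f'): (0.1, 0.2), ('b','d'): (0.2, 0.5)}

def rmax(D1, D2):
    return (max(D1[0], D2[0]), max(D1[1], D2[1]))

def union(M1, M2):
    out = dict(M1)
    for k, v in M2.items():
        out[k] = rmax(out[k], v) if k in out else v
    return out

A, B = union(A1, A2), union(B1, B2)
print(A)   # e.g. A['e'] = (0.2, 0.6), A['f'] = (0.4, 0.6)
print(B)   # e.g. B[('a','b')] = (0.1, 0.3), B[('c','f')] = (0.1, 0.5)
\end{verbatim}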
\begin{proposition}\label{P-311} The union of two interval-valued fuzzy graphs is an interval-valued fuzzy graph. \end{proposition} \begin{proof} Let $G_1=(A_1, B_1)$ and $G_2=(A_2, B_2)$ be interval-valued fuzzy graphs of $G^*_1$ and $G^*_2$, respectively. We prove that $G_1\cup G_2=(A_1 \cup A_2, B_1 \cup B_2)$ is an interval-valued fuzzy graph of the graph $G_1^*\cup G_2^*$. Since all conditions for $A_1\cup A_2$ are automatically satisfied we verify only conditions for $B_1\cup B_2$.
At first we consider the case when $xy\in E_1\cap E_2$. Then \begin{eqnarray*}
(\mu^-_{B_1} \cup \mu^-_{B_2} )(xy) &=& \max(\mu^-_{B_1}(xy), \mu^-_{B_2}(xy)) \\
&\leq& \max( \min(\mu^-_{A_1}(x), \mu^-_{A_1}(y)), \min(\mu^-_{A_2}(x), \mu^-_{A_2}(y))) \\
&=& \min( \max(\mu^-_{A_1}(x), \mu^-_{A_2}(x)), \max(\mu^-_{A_1}(y), \mu^-_{A_2}(y))) \\
&=& \min((\mu^-_{A_1} \cup \mu^-_{A_2})(x), (\mu^-_{A_1} \cup
\mu^-_{A_2})(y)),\\
(\mu^+_{B_1} \cup \mu^+_{B_2} )(xy) &=& \max(\mu^+_{B_1}(xy), \mu^+_{B_2}(xy)) \\
&\leq& \max( \min(\mu^+_{A_1}(x), \mu^+_{A_1}(y)), \min(\mu^+_{A_2}(x), \mu^+_{A_2}(y))) \\
&=& \min( \max(\mu^+_{A_1}(x), \mu^+_{A_2}(x)), \max(\mu^+_{A_1}(y), \mu^+_{A_2}(y))) \\
&=& \min((\mu^+_{A_1} \cup \mu^+_{A_2})(x), (\mu^+_{A_1} \cup \mu^+_{A_2})(y)). \end{eqnarray*}
If $xy\in E_1$ and $xy\not\in{E_2}$, then \[ (\mu^-_{B_1} \cup \mu^-_{B_2} )(xy)\leq \min((\mu^-_{A_1} \cup \mu^-_{A_2})(x), (\mu^-_{A_1} \cup \mu^-_{A_2})(y)), \]
\[ (\mu^+_{B_1} \cup \mu^+_{B_2} )(xy)\leq \min((\mu^+_{A_1} \cup \mu^+_{A_2})(x), (\mu^+_{A_1} \cup \mu^+_{A_2})(y)). \]
If $xy\in E_2$ and $xy\not\in{E_1}$, then \[ (\mu^-_{B_1} \cup \mu^-_{B_2} )(xy)\leq \min((\mu^-_{A_1} \cup \mu^-_{A_2})(x),
(\mu^-_{A_1} \cup \mu^-_{A_2})(y)), \] \[ (\mu^+_{B_1} \cup \mu^+_{B_2} )(xy)\leq \min((\mu^+_{A_1} \cup \mu^+_{A_2})(x),
(\mu^+_{A_1} \cup \mu^+_{A_2})(y)). \]
This completes the proof. \end{proof}
\begin{definition}\label{D-312} The {\it join $G_1 + G_2=(A_1 + A_2, B_1 + B_2)$ of two interval-valued fuzzy graphs} $G_1$ and $G_2$ of the graphs $G^*_1$ and $G^*_2$ is defined as follows: \begin{itemize} \item [(A)] \ $\left\{\begin{array}{ll} (\mu^-_{A_1}+\mu^-_{A_2})(x)=(\mu^-_{A_1}\cup\mu^-_{A_2})(x)\\ (\mu^+_{A_1} + \mu^+_{A_2})(x)=(\mu^+_{A_1}\cup\mu^+_{A_2})(x) \end{array}\right.$ \ \ \
if $x\in V_1\cup V_2$, \item[(B)] \ $\left\{\begin{array}{ll} (\mu^-_{B_1}+\mu^-_{B_2})(xy)=(\mu^-_{B_1}\cup\mu^-_{B_2})(xy)\\ (\mu^+_{B_1} + \mu^+_{B_2})(xy)=(\mu^+_{B_1} \cup\mu^+_{B_2})(xy) \end{array}\right.$ \ \ \
if $xy \in E_1 \cap E_2$, \item [(C)] \ $\left\{\begin{array}{ll} (\mu^-_{B_1}+\mu^-_{B_2})(xy)=\min(\mu^-_{A_1}(x), \mu^-_{A_2}(y))\\ (\mu^+_{B_1} + \mu^+_{B_2})(xy)= \min(\mu^+_{A_1}(x),\mu^+_{A_2}(y)) \end{array}\right.$ \ \ \
if $xy\in E'$, where $E'$ is the set of all edges joining the nodes of $V_1$ and $V_2$. \end{itemize} \end{definition}
\begin{proposition}\label{P-313} The join of interval-valued fuzzy graphs is an interval-valued fuzzy graph. \end{proposition} \begin{proof} Let $G_1=(A_1, B_1)$ and $G_2=(A_2, B_2)$ be interval-valued fuzzy graphs of $G^*_1$ and $G^*_2$, respectively. We prove that $G_1+G_2=(A_1+A_2, B_1+B_2)$ is an interval-valued fuzzy graph of the graph $G_1^*+G_2^*$. In view of Proposition \ref{P-311} it is sufficient to verify the case when $xy \in E'$. In this case we have \begin{eqnarray*}
(\mu^-_{B_1} + \mu^-_{B_2} )(xy) &=& \min(\mu^-_{A_1}(x), \mu^-_{A_2}(y)) \\
&\leq& \min( (\mu^-_{A_1} \cup \mu^-_{A_2})(x),(\mu^-_{A_1} \cup \mu^-_{A_2})(y)) \\
&=& \min((\mu^-_{A_1} + \mu^-_{A_2})(x), (\mu^-_{A_1} +
\mu^-_{A_2})(y)),\\[4pt] (\mu^+_{B_1} + \mu^+_{B_2} )(xy) &=& \min(\mu^+_{A_1}(x), \mu^+_{A_2}(y)) \\
&\leq& \min( (\mu^+_{A_1} \cup \mu^+_{A_2})(x),(\mu^+_{A_1} \cup \mu^+_{A_2})(y)) \\
&=& \min((\mu^+_{A_1} + \mu^+_{A_2})(x), (\mu^+_{A_1} + \mu^+_{A_2})(y)). \end{eqnarray*} This completes the proof. \end{proof}
\begin{proposition}\label{P-314} Let $G^*_1=(V_1, E_1)$ and $G^*_2=(V_2, E_2)$ be crisp graphs with $V_1 \cap V_2 =\emptyset$. Let $A_1$, $A_2$, $B_1$ and $B_2$ be interval-valued fuzzy subsets of $V_1$, $V_2$, $E_1$ and $E_2$, respectively. Then $G_1 \cup G_2=(A_1 \cup A_2, B_1 \cup B_2)$ is an interval-valued fuzzy graph of $G_1^*\cup G_2^*$ if and only if $G_1=(A_1, B_1)$ and $G_2=(A_2, B_2)$ are interval-valued fuzzy graphs of $G^*_1$ and $G^*_2$, respectively. \end{proposition} \begin{proof} Suppose that $G_1 \cup G_2=(A_1 \cup A_2, B_1 \cup B_2)$ is an interval-valued fuzzy graph of $G_1^*\cup G_2^*$. Let $xy \in E_1$. Then $xy \notin E_2$ and $x,y\in V_1-V_2$. Thus \begin{eqnarray*} \mu^-_{B_1}(xy) &=& (\mu^-_{B_1} \cup \mu^-_{B_2})(xy) \\
&\leq& \min((\mu^-_{A_1} \cup \mu^-_{A_2})(x), (\mu^-_{A_1} \cup \mu^-_{A_2})(y)) \\
&=& \min(\mu^-_{A_1}(x), \mu^-_{A_1}(y)),\\
\mu^+_{B_1}(xy) &=& (\mu^+_{B_1} \cup \mu^+_{B_2})(xy) \\
&\leq& \min((\mu^+_{A_1} \cup \mu^+_{A_2})(x), (\mu^+_{A_1} \cup \mu^+_{A_2})(y)) \\
&=& \min(\mu^+_{A_1}(x), \mu^+_{A_1}(y)). \end{eqnarray*} This shows that $G_1=(A_1, B_1)$ is an interval-valued fuzzy graph. Similarly, we can show that $G_2=(A_2, B_2)$ is an interval-valued fuzzy graph.
The converse statement is given by Proposition \ref{P-311}. \end{proof}
As a consequence of Propositions \ref{P-313} and \ref{P-314} we obtain \begin{proposition} Let $G^*_1=(V_1, E_1)$ and $G^*_2=(V_2, E_2)$ be crisp graphs and let $V_1 \cap V_2 =\emptyset$. Let $A_1$, $A_2$, $B_1$ and $B_2$ be interval-valued fuzzy subsets of $V_1$, $V_2$, $E_1$ and $E_2$, respectively. Then $G_1 + G_2=(A_1 + A_2, B_1 + B_2)$ is an interval-valued fuzzy graph of $G_1^* + G_2^*$ if and only if $G_1=(A_1, B_1)$ and $G_2=(A_2, B_2)$ are interval-valued fuzzy graphs of $G^*_1$ and $G^*_2$, respectively. \end{proposition}
\section{Isomorphisms of interval-valued fuzzy graphs}
In this section we characterize various types of (weak) isomorphisms of interval-valued fuzzy graphs. \begin{definition} Let $G_1=(A_1, B_1)$ and $G_2=(A_2, B_2)$ be two interval-valued fuzzy graphs. A {\it homomorphism} $f:G_1 \to G_2$ is a mapping $f:V_1 \to V_2$ such that \begin{itemize} \item [\rm (a)] \ $\mu^-_{A_1}(x_1)\leq \mu^-_{A_2}(f(x_1))$, \ \ \ $\mu^+_{A_1}(x_1)\leq \mu^+_{A_2}(f(x_1))$, \item [\rm (b)] \ $\mu^-_{B_1}(x_1y_1)\leq \mu^-_{B_2}(f(x_1)f(y_1))$, \ \ \ $\mu^+_{B_1}(x_1y_1)\leq \mu^+_{B_2}(f(x_1)f(y_1))$ \end{itemize} for all $x_1 \in V_1$, $x_1y_1 \in E_1$. \end{definition}
A bijective homomorphism with the property \begin{itemize} \item [\rm (c)] \ $\mu^-_{A_1}(x_1)= \mu^-_{A_2}(f(x_1))$, \ \ \ $\mu^+_{A_1}(x_1)=\mu^+_{A_2}(f(x_1))$, \end{itemize} is called a {\it weak isomorphism}. A weak isomorphism preserves the weights of the nodes but not necessarily the weights of the arcs.
A bijective homomorphism preserving the weights of the arcs but not necessarily the weights of nodes, i.e., a bijective homomorphism $f:G_1 \to G_2$ such that \begin{itemize}
\item [\rm (d)] $\mu^-_{B_1}(x_1y_1)= \mu^-_{B_2}(f(x_1)f(y_1))$, $\mu^+_{B_1}(x_1y_1) = \mu^+_{B_2}(f(x_1)f(y_1))$ \end{itemize} for all $x_1y_1 \in E_1$ is called a {\it weak co-isomorphism}.
A bijective mapping $f:G_1 \to G_2$ satisfying $(c)$ and $(d)$ is called an {\it isomorphism}.
\begin{example} Consider graphs $G^*_1=(V_1, E_1)$ and $G^*_2=(V_2, E_2)$ such that $V_1=\{a_1, b_1\}$, $V_2=\{a_2, b_2\}$, $E_1= \{a_1 b_1\}$ and $E_2= \{a_2 b_2\}$. Let $A_1$, $A_2$, $B_1$ and $B_2$ be interval-valued fuzzy subsets defined by \[ A_1=< (\frac{a_1}{0.2}, \frac{b_1}{0.3}), (\frac{a_1}{0.5}, \frac{b_1}{0.6}) >, \ \ \ B_1=<\frac{a_1b_1}{0.1}, \frac{a_1b_1}{0.3}>, \] \[A_2=<(\frac{a_2}{0.3}, \frac{b_2}{0.2}), (\frac{a_2}{0.6}, \frac{b_2}{0.5}) >, \ \ \ B_2=<\frac{a_2b_2}{0.1},\frac{a_2b_2}{0.4}>. \] Then, as it is easy to see, $G_1=(A_1, B_1)$ and $G_2=(A_2, B_2)$ are interval-valued fuzzy graphs of $G^*_1$ and $G^*_2$, respectively. The map $f:V_1\to V_2$ defined by $f(a_1)=b_2$ and $f(b_1)=a_2$ is a weak isomorphism but it is not an isomorphism.
\end{example}
\begin{example} Let $G^*_1$ and $G^*_2$ be as in the previous example and let $A_1$, $A_2$, $B_1$ and $B_2$ be interval-valued fuzzy subsets defined by \[ A_1=< (\frac{a_1}{0.2}, \frac{b_1}{0.3}), (\frac{a_1}{0.4}, \frac{b_1}{0.5}) >, \ \ \ B_1=<\frac{a_1b_1}{0.1}, \frac{a_1b_1}{0.3}>, \] \[A_2=<(\frac{a_2}{0.4}, \frac{b_2}{0.3}), (\frac{a_2}{0.5}, \frac{b_2}{0.6}) >, \ \ \ B_2=<\frac{a_2b_2}{0.1}, \frac{a_2b_2}{0.3}>. \] Then $G_1=(A_1, B_1)$ and $G_2=(A_2, B_2)$ are interval-valued fuzzy graphs of $G^*_1$ and $G^*_2$, respectively. The map $f:V_1\to V_2$ defined by $f(a_1)=b_2$ and $f(b_1)=a_2$ is a weak co-isomorphism but it is not an isomorphism. \end{example}
\begin{proposition} An isomorphism between interval-valued fuzzy graphs is an equivalence relation. \end{proposition}
\noindent{\bf Problem.} Prove or disprove that weak isomorphism (co-isomorphism) between interval-valued fuzzy graphs is a partial ordering relation.
\section{Interval-valued fuzzy complete graphs}
\begin{definition} An interval-valued fuzzy graph $G=(A, B)$ is called {\it complete} if \[\mu^-_B(xy)= \min(\mu^-_A(x), \mu^-_A(y)) ~~~ {\rm and}~~~\mu^+_B(xy)= \min(\mu^+_A(x), \mu^+_A(y))~~~{\rm for~ all}~~ xy\in E. \]
\end{definition}
\begin{example} Consider a graph $G^*=(V, E)$ such that $V=\{x, y, z\}$, $E= \{xy, yz, zx\}$. If $A$ and $B$ are interval-valued fuzzy subsets defined by
\[A=< (\frac{x}{0.2}, \frac{y}{0.3}, \frac{z}{0.4}), (\frac{x}{0.4}, \frac{y}{0.5}, \frac{z}{0.5})
>, \]
\[B=< (\frac{xy}{0.2}, \frac{yz}{0.3}, \frac{zx}{0.2}), (\frac{xy}{0.4}, \frac{yz}{0.5}, \frac{zx}{0.4})
>, \] then $G=(A,B)$ is an interval-valued fuzzy complete graph of $G^*$. \end{example}
As a consequence of Proposition \ref{P-38} we obtain \begin{proposition} If $G=(A,B)$ is an interval-valued fuzzy complete graph, then $G[G]$ is also an interval-valued fuzzy complete graph.
\end{proposition}
\begin{definition} The {\it complement} of an interval-valued fuzzy complete graph $G=(A,B)$ of $G^*=(V,E)$ is an interval-valued fuzzy complete graph $\overline{G}=(\overline{A},\overline{B})$ on $\overline{G^*}=(V,\overline{E})$, where $\overline{A}=A=[\mu^-_A,\mu^+_A]$ and $\overline{B}=[\overline{\mu^-}_B, \overline{\mu^+}_B]$ is defined by
$$ \begin{aligned}\overline{\mu^-_B}(x y) & = \begin{cases}
0 & \hbox{if \ $\mu^-_B(x y) >0$,} \\
\min( \mu^-_A(x), \mu^-_A(y)) & \hbox{if \ $\mu^-_B(x y)=0$, }\\ \end{cases}
\end{aligned}$$ $$ \begin{aligned}\overline{\mu^+_B}(x y) & = \begin{cases}
0 & \hbox{if \ $\mu^+_B(x y) >0$,} \\
\min( \mu^+_A(x), \mu^+_A(y)) & \hbox{if \ $\mu^+_B(x y)=0$. }\\ \end{cases}
\end{aligned}$$ \end{definition} \begin{definition} An interval-valued fuzzy complete graph $G=(A, B)$ is called {\it self complementary} if $\overline{\overline{G}}=G$. \end{definition}
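A minimal Python sketch (ours) of this complement is given next, applied to the data of the self-complementarity example that follows; recording non-edges explicitly with membership $[0,0]$ is a representation choice of ours. Taking the complement twice returns the original graph, illustrating self complementarity.
\begin{verbatim}
# Minimal sketch (ours) of the complement defined above, checked on the data of
# the example below; the non-edge ac is stored with membership (0, 0).
A = {'a': (0.1, 0.3), 'b': (0.2, 0.4), 'c': (0.3, 0.5)}
B = {('a', 'b'): (0.1, 0.3), ('b', 'c'): (0.2, 0.4), ('a', 'c'): (0.0, 0.0)}

def complement(A, B):
    Bbar = {}
    for (x, y), (lo, hi) in B.items():
        Bbar[(x, y)] = (0.0 if lo > 0 else min(A[x][0], A[y][0]),
                        0.0 if hi > 0 else min(A[x][1], A[y][1]))
    return Bbar

print(complement(A, complement(A, B)) == B)   # True: the double complement is G
\end{verbatim}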
\begin{example} Consider a graph $G^*=(V, E)$ such that $V=\{a, b, c\}$, $E= \{ab, bc\}$. Then an interval-valued fuzzy graph $G=(A,B)$, where \[ A=<(\frac{a}{0.1},\frac{b}{0.2},\frac{c}{0.3}),(\frac{a}{0.3},\frac{b}{0.4}, \frac{c}{0.5})>, \] \[ B=<(\frac{ab}{0.1},\frac{bc}{0.2}),(\frac{ab}{0.3},\frac{bc}{0.4})>, \] is self complementary. \end{example}
\begin{proposition} In a self complementary interval-valued fuzzy complete graph $G=(A,B)$ we have
$a)$ \ $\sum\limits_{x\neq y} \mu^-_B(x y)=\sum\limits_{x\neq y} \min( \mu^-_A(x), \mu^-_A(y))$,
$b)$ \ $\sum\limits_{x\neq y} \mu^+_B(x y)=\sum\limits_{x\neq y} \min( \mu^+_A(x), \mu^+_A(y))$. \end{proposition} \begin{proof} Let $G=(A,B)$ be a self complementary interval-valued fuzzy complete graph. Then there exists an automorphism $f:V \to V$ such that $\mu^-_A(f(x))=\mu^-_A(x)$, $\mu^+_A(f(x))=\mu^+_A(x)$, $\overline{\mu^-_B}(f(x)f(y))=\mu^-_B(xy)$ and $\overline{\mu^+_B}(f(x)f(y))=\mu^+_B(xy)$ for all $x,y\in V$. Hence, for $x,y\in V$ we obtain \[ \mu^-_B(x y)=\overline{\mu^-}_B(f(x)f(y))=\min(\mu^-_A(f(x)),\mu^-_A(f(y)))=\min(\mu^-_A(x), \mu^-_A(y)), \] which implies $a)$. The proof of $b)$ is analogous. \end{proof} \begin{proposition}
Let $G=(A, B)$ be an interval-valued fuzzy complete graph. If $ \mu^-_B(x y)= \min( \mu^-_A(x), \mu^-_A(y))$ and $ \mu^+_B(x y)= \min( \mu^+_A(x), \mu^+_A(y))$ for all $x$, $y$ $\in V$, then $G$ is self complementary. \end{proposition} \begin{proof} Let $G=(A, B)$ be an interval-valued fuzzy complete graph such that $ \mu^-_B(x y)= \min( \mu^-_A(x), \mu^-_A(y))$ and $ \mu^+_B(x y)= \min( \mu^+_A(x), \mu^+_A(y))$ for all $x$, $y$ $\in V$. Then
$G=\overline{G}$ under the identity map $I: V \to V$. So
$\overline{\overline{G}}=G$. Hence $G$ is self complementary.
\end{proof}
\begin{proposition}
Let $G_1=(A_1, B_1)$ and $G_2=(A_2, B_2)$ be interval-valued fuzzy complete graphs. Then $G_1 \cong G_2$ if and only if $\overline{G}_1 \cong \overline{G}_2$. \end{proposition} \begin{proof}
Assume that $G_1$ and $G_2$ are isomorphic. Then there exists a
bijective map $f: V_1 \to V_2$ satisfying
\[ \mu^-_{A_1}(x)= \mu^-_{A_2}(f(x)), ~ \mu^+_{A_1}(x)= \mu^+_{A_2}(f(x)) ~~{\rm for ~ all } ~x \in V_1,\] \[ \mu^-_{B_1}(x y)= \mu^-_{B_2}(f(x)f(y)), ~ \mu^+_{B_1}(xy)= \mu^+_{B_2}(f(x)f(y)) ~~{\rm for ~ all } ~xy \in E_1.\] By definition of complement, we have \[ \overline{\mu^-}_{B_1}(xy)=\min(\mu^-_{A_1}(x), \mu^-_{A_1}(y))=\min(\mu^-_{A_2}(f(x)), \mu^-_{A_2}(f(y)))=\overline{\mu^-}_{B_2}(f(x)f(y)), \] \[ \overline{\mu^+}_{B_1}(xy)=\min(\mu^+_{A_1}(x), \mu^+_{A_1}(y))=\min(\mu^+_{A_2}(f(x)), \mu^+_{A_2}(f(y)))=\overline{\mu^+}_{B_2}(f(x)f(y)) ~~{\rm for ~ all} ~xy \in E_1. \] Hence $\overline{G}_1\cong \overline{G}_2$.
The proof of the converse part is straightforward. \end{proof}
\section{Conclusions} It is well known that interval-valued fuzzy sets constitute a generalization of the notion of fuzzy sets. Interval-valued fuzzy models give more precision, flexibility and compatibility to a system than classical and fuzzy models do. We have therefore introduced interval-valued fuzzy graphs and presented several of their properties in this paper. The study of interval-valued fuzzy graphs may be extended further in the following directions: \begin{itemize}
\item an application of interval-valued fuzzy graphs in database theory
\item an application of interval-valued fuzzy graphs in an expert system
\item an application of interval-valued fuzzy graphs in neural networks
\item an interval-valued fuzzy graph method for finding the shortest paths in networks
\end{itemize} \noindent{\bf\large Acknowledgement.} The authors are thankful to the referees for their valuable comments and suggestions.
\end{document}
\begin{document}
\def\mathbb{F}{\mathbb{F}} \def\mathbb{F}_q^n{\mathbb{F}_q^n} \def\mathbb{F}_q{\mathbb{F}_q} \def\mathbb{F}_p{\mathbb{F}_p} \def\mathbb{D}{\mathbb{D}} \def\mathbb{E}{\mathbb{E}} \def\mathbb{Z}{\mathbb{Z}} \def\mathbb{Q}{\mathbb{Q}} \def\mathbb{C}{\mathbb{C}} \def\mathbb{R}{\mathbb{R}} \def\mathbb{N}{\mathbb{N}} \def\mathbb{H}{\mathbb{H}} \def\mathbb{P}{\mathbb{P}} \def\mathbb{T}{\mathbb{T}} \def\, (\text{mod }p){\, (\text{mod }p)} \def\, (\text{mod }N){\, (\text{mod }N)} \def\, (\text{mod }q){\, (\text{mod }q)} \def\, (\text{mod }1){\, (\text{mod }1)} \def\mathbb{Z}/N \mathbb{Z}{\mathbb{Z}/N \mathbb{Z}} \def\mathbb{Z}/p \mathbb{Z}{\mathbb{Z}/p \mathbb{Z}} \defa^{-n}\mathbb{Z}/ \mathbb{Z}{a^{-n}\mathbb{Z}/ \mathbb{Z}} \defa^{-l} \Z / \Z{a^{-l} \mathbb{Z} / \mathbb{Z}} \def\text{Pr}{\text{Pr}}
\def\leftsize{\left| \left\{}
\def\rightsize{\right\} \right|}
\title{Phase relations and pyramids}
\begin{abstract} We develop tools to study the averaged Fourier uniformity conjecture and extend its known range of validity to intervals of length at least $\exp(C (\log X)^{1/2} (\log \log X)^{1/2})$. \end{abstract}
\section{Introduction}
In this article we shall establish the following result.
\begin{teo} \label{1}
Given $0 < \rho < 1$ and $\eta > 0$, there exists some $C>0$ such that, for every $\exp(C (\log X)^{1/2} (\log \log X)^{1/2} ) \le H \le X^{1/2}$ and every complex-valued multiplicative function $g$ with $|g| \le 1$ satisfying \begin{equation} \label{A}
\int_X^{2X} \sup_{\alpha} \left| \sum_{x \le n \le x+H} g(n) e(\alpha n) \right| dx \ge \eta H X,
\end{equation} we have $\mathbb{D} (g; CX^2/H^{2-\rho},C) \le C$. \end{teo}
Here, we are writing $\mathbb{D}(g;T,Q)$ for the 'pretentious' distance \cite{GS}: $$ \mathbb{D}(g;T,Q) = \inf \left( \sum_{p \le T} \frac{1 - \text{Re}(g(p)p^{it}\chi(p))}{p} \right)^{1/2} ,$$
with the infimum taken over all $|t| \le T$ and all Dirichlet characters of modulus at most $Q$. In particular, Theorem \ref{1} implies that (\ref{A}) cannot hold for the Möbius and Liouville functions.
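For the reader who wishes to see the quantity inside the infimum, the following Python sketch (ours, purely illustrative) evaluates $\sum_{p \le T} (1 - \text{Re}(g(p)p^{it}\chi(p)))/p$ for the Liouville function $g(p) = -1$, the trivial character and a few values of $t$; the cutoff and the grid of $t$ are arbitrary choices, and the actual distance $\mathbb{D}(g;T,Q)$ takes an infimum over all $|t| \le T$ and all characters of modulus at most $Q$.
\begin{verbatim}
import numpy as np

def primes_up_to(N):
    sieve = np.ones(N + 1, dtype=bool)
    sieve[:2] = False
    for p in range(2, int(N ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = False
    return np.nonzero(sieve)[0].astype(float)

T = 10 ** 5
primes = primes_up_to(T)

def dist_sq(t):
    # sum_{p <= T} (1 - Re(g(p) p^{it})) / p  with  g(p) = -1  (Liouville)
    return float(np.sum((1 - np.real(-primes ** (1j * t))) / primes))

for t in [0.0, 1.0, 10.0]:
    print(t, np.sqrt(dist_sq(t)))
\end{verbatim}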
Theorem \ref{1} improves on estimates obtained in \cite{MRT} and \cite{MRTTZ}, where it was shown that for any $\varepsilon > 0$ the result holds for intervals of length at least $X^{\varepsilon}$ and $\exp((\log X)^{5/8+\varepsilon})$, respectively. The methods of this article should adapt to nilsequences, thus yielding corresponding progress on the higher uniformity conjecture. It is known that after passing to logarithmic averages, establishing this conjecture for intervals of length at least $(\log X)^{\varepsilon}$, for every $\varepsilon >0$, would imply both Chowla's and Sarnak's conjectures \cite{MRTTZ,T}.
We will proceed through the same general framework as in previous articles \cite{MRT, MRTTZ,W}, where it is shown that a function $g$ satisfying (\ref{A}), for the corresponding values of $H$, must correlate with $n \mapsto e(an/q) n^{2 \pi i T}$ on many of these intervals and for certain fixed choices of $T$ and $q$. The results of \cite{MR,MR2, MRT2} then imply that $g$ must behave globally like a function of this form, yielding the desired conclusion.
To obtain this local correlation one exploits Elliott's inequality and the large sieve (see \cite{MRT, MRTTZ}), which combined guarantee that under (\ref{A}) we may associate to many intervals $I \subseteq [X,2X]$ a frequency $\alpha_I \in \mathbb{R}$ in such a way that we may find many quadruples consisting of a pair of intervals $I=[x,x+H], J= [y,y+H]$ and a pair of primes $p,q \le H$, such that $|x/p-y/q|$ is small and $p \alpha_I$ close to $q \alpha_J$ mod $Q$, for some large integer $Q$. The problem then becomes that of showing that these relations force $\alpha_I$ to be close mod $1$ to $\frac{m_I}{q} + \frac{T}{x}$ for certain $m_I,q \in \mathbb{N}$ and $T \in \mathbb{R}$, which would then imply the desired local correlation.
To accomplish this we begin in Section \ref{phase} by studying 'pre-paths' consisting of a sequence of elements $\alpha_1, \ldots, \alpha_{k+1} \in \mathbb{Z} / Q \mathbb{Z}$ and primes $p_1, \ldots, p_k, q_1, \ldots, q_k$ with $p_i \alpha_i$ close to $q_i \alpha_{i+1}$, for every $1 \le i \le k$. Building on ideas of \cite{W}, we show how to construct certain 'pyramids' of frequencies that help relate the elements of the sequence and, in particular, a 'top' element $\alpha$ such that $\alpha_i$ is close to $\left( \prod_{j=1}^{i-1} p_j \prod_{j=i}^k q_j \right) \alpha$, for every $1 \le i \le k+1$, with an error that depends on the relative sizes of the primes involved.
After developing our general setting further in Section 3, we proceed in Section 4 to show how the additional 'physical' information that $|x/p - y/q|$ is small can be used to obtain uniform bounds for the approximations of the previous paragraph. In particular, the conclusions attained would work equally well for $H$ in the poly-logarithmic range and seem likely to be useful tools for future work on these problems.
The purpose of Section 5 is then to show that for many of the intervals we are studying the corresponding frequencies are connected by paths of the above form. Some connectedness of this type seems necessary in order to be able to find a fixed choice of $q$ and $T$ that works for many of these intervals and here is where the required lower bound on $H$ in Theorem \ref{1} emerges. The reason for it is that when studying paths of length $k$, one is naturally led to some losses of the order of $C^k$ in the bounds, for some absolute constant $C>1$. Such losses become problematic once $C^{k}$ is comparable to $H$ and since one needs to consider paths of length around $\frac{\log X}{\log H}$ in order to be able to have enough of the intervals connected with each other, we end up in that situation once $H$ goes below $\exp( C (\log X)^{1/2})$ (essentially the same observation can already be found in \cite{MRT, MRTTZ}). In fact, due to the density of prime numbers one is also led to some additional factors of the order of $(\log H)^k$, which is the reason for the exact lower bound on $H$ in Theorem \ref{1}.
Finally, we complete the proof in Section 6. Once enough intervals have been connected to a fixed interval $I_0$, one can relatively easily use the properties of the 'pyramids' obtained in the first sections to show that a fixed choice of $q$ and $T$ works for many of the intervals.
We notice that the methods of this article end up using auxiliary phases as in \cite{W}, but also the advantage of working with higher moduli as in \cite{MRT, MRTTZ}. As such, they can be seen as a middle point between both arguments. On the other hand, an outcome of this article is that neither the contagion arguments of \cite{W} nor the mixing lemmas of \cite{MRT, MRTTZ} end up being necessary to cover the natural range of $\exp( (\log X)^{1/2+\varepsilon})$. However, such tools may very well end up being useful when trying to lower the value of $H$ further.
\begin{notation}
We will write $X \lesssim Y$ or $X=O(Y)$ to mean that there is some absolute constant $C$ with $|X| \le C Y$ and $X \sim Y$ if both $X \lesssim Y$ and $Y \lesssim X$ hold. If the implicit constants depend on some additional parameters, we shall use a subscript to indicate this. For a finite set $S$ we write $|S|$ for its cardinality. We abbreviate $e(x):=e^{2 \pi i x}$ and given $Q \in \mathbb{N}$ we write $\| \cdot \|_Q$ for the distance to $0$ in $\mathbb{R} / Q \mathbb{Z}$. \end{notation}
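Concretely, for $x \in \mathbb{R}$ and $Q \in \mathbb{N}$ we thus have $\| x \|_Q = \min_{m \in \mathbb{Z}} |x - mQ|$; for instance, $\| \tfrac{3}{4} Q \|_Q = \tfrac{Q}{4}$ and $\| x + mQ \|_Q = \| x \|_Q$ for every $m \in \mathbb{Z}$. We record this only to fix the meaning of $\| \cdot \|_Q$, since it will be used constantly below.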
\section{Pyramids} \label{phase}
We begin with the following extension of \cite[Lemma 2.1]{W}.
\begin{lema} \label{astart}
Let $\epsilon_1,\epsilon_2 > 0$ and let $Q \in \mathbb{N}$. Let $\alpha_1,\alpha_2 \in \mathbb{R} / Q \mathbb{Z}$ and let $p_1,p_2$ be distinct primes not dividing $Q$ with $\| p_1 \alpha_1 - p_2 \alpha_2\|_Q < \epsilon_1 + \epsilon_2$. Then, there exists $\alpha \in \mathbb{R} / Q \mathbb{Z}$ with $\| p_i \alpha - \alpha_j \|_Q < \frac{\epsilon_j}{p_j}$ if $i \neq j$. \end{lema}
\begin{proof}
For $\left\{ i, j \right\}= \left\{ 1,2 \right\}$, let $\alpha^{(i)} \in \mathbb{R} / Q \mathbb{Z}$ be such that $p_i \alpha^{(i)} = \alpha_j$. Adding integer multiples of $Q/p_i$ to $\alpha^{(i)}$ we may assume that $\| \alpha^{(1)} - \alpha^{(2)} \|_Q \le \frac{Q}{2 p_1 p_2}$. On the other hand, we have by hypothesis that $\| p_1 p_2 (\alpha^{(1)} - \alpha^{(2)} ) \|_Q < \epsilon_1 + \epsilon_2$. Combining both estimates we see that in fact $\| \alpha^{(1)} - \alpha^{(2)} \|_Q < \frac{\epsilon_1 + \epsilon_2}{p_1 p_2}$. Take $\alpha= \alpha^{(1)} - \frac{\epsilon_2}{\epsilon_1+\epsilon_2} (\alpha^{(1)}-\alpha^{(2)})$. Then $\| \alpha - \alpha^{(1)} \|_Q < \frac{\epsilon_2}{p_1 p_2}$ and $\| \alpha - \alpha^{(2)} \|_Q < \frac{\epsilon_1}{p_1 p_2}$, so multiplying by $p_1$ and $p_2$ respectively gives $\| p_1 \alpha - \alpha_2 \|_Q < \frac{\epsilon_2}{p_2}$ and $\| p_2 \alpha - \alpha_1 \|_Q < \frac{\epsilon_1}{p_1}$, as desired. \end{proof}
Lemma \ref{astart} immediately implies the following corollary.
\begin{lema} \label{tcp}
Let $\epsilon_j,\epsilon_j'>0$ for every $1 \le j \le k$ and let $Q \in \mathbb{N}$. Let $(\alpha_1^{(1)},\ldots,\alpha^{(1)}_{k+1})$ and $(\alpha_2^{(0)},\ldots,\alpha_{k+1}^{(0)})$ be tuples of elements of $\mathbb{R} / Q \mathbb{Z}$ and let $p_1,\ldots,p_{k}$, $q_1,\ldots,q_{k}$ be a sequence of distinct primes not dividing $Q$ such that, for every $1 \le j \le k$, we have $\| q_{j} \alpha^{(1)}_{j+1} - \alpha^{(0)}_{j+1} \|_Q < \epsilon_j$ and $\| p_j \alpha^{(1)}_j - \alpha^{(0)}_{j+1} \|_Q < \epsilon'_j$. Then, there exists a tuple $(\alpha^{(2)}_1,\ldots,\alpha^{(2)}_{k})$ of elements of $\mathbb{R} /Q \mathbb{Z}$ such that, for every $1 \le j \le k$, we have $\| p_j \alpha^{(2)}_j - \alpha^{(1)}_{j+1} \|_Q < \frac{\epsilon_j}{q_j}$ and $\| q_j \alpha^{(2)}_j - \alpha^{(1)}_j \|_Q < \frac{\epsilon_j'}{p_j}$. \end{lema}
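For the reader's convenience, we spell out the deduction for a fixed $1 \le j \le k$ (this is merely a restatement of the above). By the triangle inequality, the hypotheses give
$$ \| p_j \alpha^{(1)}_j - q_j \alpha^{(1)}_{j+1} \|_Q \le \| p_j \alpha^{(1)}_j - \alpha^{(0)}_{j+1} \|_Q + \| q_j \alpha^{(1)}_{j+1} - \alpha^{(0)}_{j+1} \|_Q < \epsilon_j' + \epsilon_j, $$
so Lemma \ref{astart}, applied with $(\alpha_1, \alpha_2, p_1, p_2, \epsilon_1, \epsilon_2) = (\alpha^{(1)}_j, \alpha^{(1)}_{j+1}, p_j, q_j, \epsilon_j', \epsilon_j)$, produces the desired element $\alpha^{(2)}_j$.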
This can be iterated in the following way: after we have used sets of frequencies $(\alpha_1^{(j)},\ldots,\alpha^{(j)}_{k+2-j})$, $(\alpha_2^{(j-1)},\ldots,\alpha_{k+2-j}^{(j-1)})$ and primes $p_1,\ldots,p_{k+1-j}$, $q_j,\ldots,q_{k}$ not dividing $Q$ to obtain a new set $(\alpha^{(j+1)}_1,\ldots,\alpha^{(j+1)}_{k+1-j})$, we can then use the sets of frequencies $(\alpha^{(j+1)}_1,\ldots,\alpha^{(j+1)}_{k+1-j})$, $(\alpha_2^{(j)},\ldots,\alpha^{(j)}_{k+1-j})$ and primes $p_1,\ldots,p_{k-j}$, $q_{j+1},\ldots,q_{k}$ to obtain a new set $(\alpha^{(j+2)}_1,\ldots,\alpha^{(j+2)}_{k-j})$. We thus arrive, for every $1 \le j \le k+1$, at a tuple $(\alpha_1^{(j)},\ldots,\alpha_{k+2-j}^{(j)})$ of elements of $\mathbb{R} / Q \mathbb{Z}$. It will be convenient to name these objects.
\begin{defi} A \emph{pre-path} mod $Q$ is a choice of (ordered) tuples $(\alpha_1^{(1)},\ldots,\alpha^{(1)}_{k+1})$, $(\alpha_2^{(0)},\ldots,\alpha_{{k+1}}^{(0)})$ of elements of $\mathbb{R} / Q \mathbb{Z}$, real numbers $\epsilon_1, \ldots, \epsilon_{k}, \epsilon_1', \ldots, \epsilon'_{k}>0$ and distinct primes $p_1,\ldots,p_{k}$, $q_1,\ldots,q_{k}$ not dividing $Q$ satisfying the hypotheses of Lemma \ref{tcp}. We say that a corresponding sequence $(\alpha_1^{(j)})_{1 \le j \le k+1}$ obtained as in the previous paragraph is a \emph{pyramid} associated with this pre-path. We call $\alpha_1^{(k+1)}$ the \emph{top element} of the pyramid and $k$ the \emph{length} of the pre-path. \end{defi}
The following will be our main input for the study of pre-paths.
\begin{lema} \label{premanli} Let $0 < \epsilon < 1$. Given a pre-path of length $k$ with $\epsilon_i=\epsilon_i'=\epsilon$ for every $1 \le i \le k$, we have that any associated pyramid $(\alpha_1^{(j)})_{1 \le j \le k+1}$ satisfies
$$ \| q_j \alpha_1^{(j+1)} - \alpha_1^{(j)} \|_Q < \epsilon \left( \prod_{i=1}^{\lfloor (j-1)/2 \rfloor+1} p_i \right)^{-1} \left( \prod_{i=1}^{\lceil (j-1)/2 \rceil } q_{j-i} \right)^{-1},$$ for every $1 \le j \le k$. \end{lema}
\begin{proof} We will proceed by induction on the length of the pre-path. If $k=1$ the claim is immediate from Lemma \ref{tcp}, so we let $k \ge 2$ and assume the result has already been established for all pre-paths of length at most $k-1$. If $j \le k-1$ observe that the parameters $(\alpha_1^{(1)},\ldots,\alpha_{j+1}^{(1)})$, $(\alpha_2^{(0)},\ldots,\alpha_{j+1}^{(0)})$, $p_1,\ldots,p_{j}$, $q_1,\ldots,q_{j}$ and $\epsilon_1=\epsilon'_1=\ldots=\epsilon_j=\epsilon_j'=\epsilon$ form a pre-path of length $j$ and $(\alpha_1^{(i)})_{1 \le i \le j+1}$ is a pyramid for this pre-path, so the result follows by induction in this case. We thus only need to treat $j=k$. By Lemma \ref{tcp} we know that \begin{equation} \label{ppp14}
\| q_{k} \alpha_1^{(k+1)} - \alpha_1^{(k)} \|_Q \le \frac{\| p_1 \alpha_1^{(k)} - \alpha_2^{(k-1)} \|_Q}{p_1}.
\end{equation} Here we are using that, after iterating Lemma \ref{tcp}, $q_{k}$ and $p_1$ are the primes that end up relating $\alpha_1^{(k+1)}$ with $\alpha_1^{(k)}$ and $\alpha_2^{(k)}$, respectively. Notice now that the 'inverted' parameters $(\alpha_{k}^{(1)},\ldots,\alpha_1^{(1)})$, $(\alpha_{k}^{(0)},\ldots,\alpha_2^{(0)})$, $q_{k-1},\ldots,q_1$, $p_{k-1},\ldots,p_1$ and $\epsilon$ form a pre-path of length $k-1$ and that $(\alpha_{k+1-j}^{(j)})_{1 \le j \le k}$ is a pyramid for this pre-path (notice that the roles of the primes $p_i$ and $q_i$ have also been inverted). It thus follows by induction that the right-hand side of (\ref{ppp14}) is $$ <\frac{\epsilon}{p_1} \left( \prod_{i=1}^{\lfloor k/2 \rfloor} q_{k-i} \right)^{-1} \left( \prod_{i=1}^{\lceil k/2-1 \rceil } p_{i+1} \right)^{-1}.$$ The result then follows upon rearranging. \end{proof}
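To illustrate the shape of the bound in Lemma \ref{premanli} (this is a direct specialisation of the statement, recorded only for orientation and assuming $k \ge 3$), the first three cases read
$$ \| q_1 \alpha_1^{(2)} - \alpha_1^{(1)} \|_Q < \frac{\epsilon}{p_1}, \qquad \| q_2 \alpha_1^{(3)} - \alpha_1^{(2)} \|_Q < \frac{\epsilon}{p_1 q_1}, \qquad \| q_3 \alpha_1^{(4)} - \alpha_1^{(3)} \|_Q < \frac{\epsilon}{p_1 p_2 q_2}, $$
so that at the $j$th step roughly half of the primes encountered so far appear in the denominator.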
\section{General setting}
In this section we will invoke some results from \cite{MRT} and \cite{MRTTZ} that will serve as the setting for our approach and will also establish a regularity estimate that will be useful in the rest of the article.
As mentioned in the introduction, once we have shown that $g$ correlates with $n \mapsto e(an/q) n^{2 \pi i T}$ on many of the intervals and for a certain fixed choice of $q$ and $T$, Theorem \ref{1} will then follow from the results of Matom\"aki and Radziwi\l\l \, \cite{MR,MR2, MRT2}. More precisely, we will be using the following reduction, which follows from the last part of \cite[Section 6]{MRTTZ} and relies on the power saving bounds of \cite{MR2}.
\begin{prop} \label{2}
Let $g$ be a complex-valued multiplicative function with $|g| \le 1$. Let $\rho>0$ be sufficiently small, $C >0$ sufficiently large with respect to $\rho$ and $X \ge 1$ sufficiently large with respect to $\rho$ and $C$. Let $H$ be as in Theorem \ref{1}. Also, let $T \in \mathbb{R}$ and $q \in \mathbb{N}$ satisfy $|T| \le C X^2/H^{2-\rho}$ and $q \le C H^{\rho}$. Assume that for $\ge X/H^{1+\rho}$ disjoint intervals $I \subseteq [X,2X]$ of length $H^* \in [H^{1-\rho}, H]$ we can find some integer $a_I$ with $|\sum_{n \in I} g(n) n^{2 \pi iT} e(a_I n/q)| \ge cH^*$, for some $c >0$. Then $\mathbb{D} (g ; BX^2/H^{2-\rho},B) = O_{\rho,C,c}(1)$ for some $B =O_{\rho,C,c}(1)$. \end{prop}
Our task is then reduced to establishing the following estimate.
\begin{teo} \label{3}
Let $g,\rho,\eta$ and $H$ be as in Theorem \ref{1}. Then, if $C$ is sufficiently large with respect to $\eta$ and $\rho$ and $X$ is sufficiently large with respect to $C$, we can find $T \in \mathbb{R}$ and $q \in \mathbb{N}$ with $|T| \le C X^2/H^{2-\rho}$ and $q \le CH^{\rho}$ such that, for $\ge \frac{X}{H^{1+\rho}}$ disjoint intervals $I_x=[x,x+H^*] \subseteq [X/H^{\rho},2X]$ of length $H^* \in [H^{1-\rho}, H]$ there exists an integer $a_x$ with $|\sum_{n \in I_x} g(n) e(\frac{(n-x) a_x }{q} + \frac{(n-x)T}{x})| \ge H^*/C$. \end{teo}
Using Theorem \ref{3}, the Taylor expansion of $(n/x)^{2 \pi i T}$ and the pigeonhole principle, we can then locate an appropriate dyadic interval in $[X/H^{\rho},2X]$ where the hypotheses of Proposition \ref{2} are satisfied, after adjusting $X$, $C$, $H^*$ and $\rho$ if necessary. Thus, in order to prove Theorem \ref{1}, it will suffice to establish Theorem \ref{3}.
\begin{defi} \label{conf}
Given $R > 1$, $c > 0$ and an interval $I \subseteq \mathbb{R}$, we say a finite set $\mathcal{J} \subseteq I \times \mathbb{R}$ is a $(c,R)$-configuration if $|\mathcal{J}| \ge c|I|/R$ and the first coordinates are $R$-separated points in $I$ (i.e. $|x-y| \ge R$ if $x \neq y$). \end{defi}
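For instance (purely as an illustration), if $I = [X,2X]$ and $1 < R \le X$, then the set $\left\{ (X + jR, \alpha_j) : 1 \le j \le \lfloor X/R \rfloor \right\}$, with the $\alpha_j \in \mathbb{R}$ arbitrary, is a $(\tfrac{1}{2},R)$-configuration, since its first coordinates are $R$-separated points of $I$ and $\lfloor X/R \rfloor \ge \tfrac{1}{2} |I| / R$.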
If $g$ satisfies (\ref{A}), then one can use Elliott's inequality and the large sieve to obtain a configuration as in Definition \ref{conf} for which its elements are highly related to each other through a set of primes whose size is a small power of $H$. Concretely, we have the following estimate from \cite{MRT}.
\begin{lema}[\cite{MRT}, Proposition 3.2] \label{base} Let the notation and assumptions be as in Theorem \ref{3}. Let $c_0, \varepsilon >0$ be sufficiently small with respect to $\eta$ and $\rho$ and $X$ sufficiently large with respect to $c_0$ and $\varepsilon$. Then, there exists a $(c_0,H/K)$-configuration $\mathcal{J} \subseteq [X/(10K),2X/K] \times \mathbb{R}$, for some $K \in [H^{\varepsilon^2},H^{\varepsilon}]$, such that for every $(x,\alpha) \in \mathcal{J}$ we have
$$ \left| \sum_{x \le n \le x+H/K} g(n) e(\alpha n) \right| \gtrsim H/K,$$
and a pair $P,P' \gtrsim_{\rho,\eta} H^{\varepsilon^2}$ with $P P'=K$ such that, for $\gtrsim_{\rho,\eta} \frac{X}{H} \left( \frac{P}{\log P} \right)^2$ choices of $(x_1,\alpha_1), (x_2,\alpha_2) \in \mathcal{J}$ and $p,q$ primes in $[P,2P]$, we have $|x_1/p - x_2/q| \lesssim_{\rho,\eta} H/(PK)$ and $\| p \alpha_1 - q \alpha_2 \|_{p'} \lesssim_{\rho,\eta} PK/H$ for $\gtrsim_{\rho,\eta} \left( \frac{P'}{\log P'} \right)$ primes $p' \in [P',2P']$. Furthermore, there exist disjoint sets $\mathcal{P}_1, \mathcal{P}_2 \subseteq [P,2P]$ of size $\gtrsim_{\rho,\eta} \frac{P}{\log P}$ such that the same claim holds (up to the implicit constants) if we additionally require $(p,q) \in \mathcal{P}_1 \times \mathcal{P}_2$. \end{lema}
\begin{proof} The proof proceeds exactly as in Proposition 3.2 of \cite{MRT}, except for the last claim which simply follows from the pigeonhole principle. \end{proof}
\begin{notation} For the rest of this article we will work with a specific choice of $\eta, \rho, X$ and $H$ as in Theorem \ref{3} and of $c_0, \varepsilon, P, P', K, \mathcal{J}, \mathcal{P}_1$ and $\mathcal{P}_2$ as provided in Lemma \ref{base}. All implicit constants will be allowed to depend on $\eta$ and $\rho$, but will be uniform in our choice of $X$ and $H$. \end{notation}
We will require the following definition in order to study the properties of pre-paths arising from the configuration $\mathcal{J}$.
\begin{defi} Let $Q \in \mathbb{N}$ and $\mathcal{J}' \subseteq \mathcal{J}$. We define a \emph{path mod} $Q$ \emph{of length} $k$ \emph{in} $\mathcal{J}'$ to be a pre-path mod $Q$ of length $k$ with parameters $\epsilon_1,\epsilon_1',\ldots,\epsilon_k,\epsilon_k' \lesssim PK/H$, $p_1,\ldots,p_k, q_1,\ldots,q_{k} \in [P,2P]$, $(\alpha_i^{(1)})_{i=1}^{k+1} \subseteq \mathbb{R} / Q \mathbb{Z}$ and a set of elements $(x_1,\alpha_1),\ldots, (x_{k+1},\alpha_{k+1}) \in \mathcal{J}'$ such that $\alpha_i \equiv \alpha_i^{(1)}$ (mod $Q$) for every $1 \le i \le k+1$ and $x_i \frac{q_i}{p_i} = x_{i+1}+O(H/K)$ for every $1 \le i \le k$. We then say $(x_1,\alpha_1)$ and $(x_{k+1},\alpha_{k+1})$ are \emph{connected} by a path mod $Q$ of length $k$ in $\mathcal{J}'$. We call $(x_1,\alpha_1)$ the \emph{initial point} and $(x_{k+1},\alpha_{k+1})$ the \emph{end point} of the path. We additionally say the path is \emph{split} if $p_1,\ldots,p_k \in \mathcal{P}_1$ and $q_1,\ldots,q_{k} \in \mathcal{P}_2$. \end{defi}
\begin{notation} Given a path $\ell$ mod $Q$ of length $k$ consisting of elements $(x_1,\alpha_1), \ldots, (x_{k+1}, \alpha_{k+1})$ and primes $(p_1,\ldots,p_{k},q_1,\ldots,q_{k})$ we will sometimes need to consider the 'inverted' path $\ell^{-1}$ having initial point $(x_{k+1}, \alpha_{k+1})$ and end point $(x_1,\alpha_1)$, with the corresponding ordered set of primes given by $(q_{k},\ldots,q_1,p_{k},\ldots,p_1)$. Notice that if $\alpha$ is the top element of a pyramid associated to $\ell$, then it is also the top element of a pyramid associated with $\ell^{-1}$. Also, suppose we are given an additional path $\ell'$ mod $Q$ of length $m$ consisting of elements $(y_1,\beta_1),\ldots,(y_{m+1},\beta_{m+1})$. If $(y_1,\beta_1)=(x_{k+1},\alpha_{k+1})$, we may consider the combined path $\ell + \ell'$ of length $k+m$ with initial point $(x_1,\alpha_1)$ and endpoint $(y_{m+1},\beta_{m+1})$. Notice that if $(\alpha_1^{(j)})_{1 \le j \le k+1}$ is a pyramid associated to $\ell$, then $\ell + \ell'$ will admit a pyramid $(\beta_1^{(j)})_{j=1}^{k+m+1}$ with $\alpha_1^{(j)}=\beta_1^{(j)}$ for every $1 \le j \le k+1$. Finally, we observe that $(\ell + \ell')^{-1} = (\ell')^{-1} + \ell^{-1}$. \end{notation}
\begin{defi}
Let $Q \in \mathbb{N}$ and $c > 0$. We say $\mathcal{J}' \subseteq \mathcal{J}$ is a $(c,Q)$-\emph{regular} subset of $\mathcal{J}$ if $|\mathcal{J}'| \ge c |\mathcal{J}|$ and every $(x,\alpha) \in \mathcal{J}'$ is connected to $\ge c \frac{P^2}{(\log P)^2}$ other elements of $\mathcal{J}'$ by a split path mod $Q$ of length $1$. \end{defi}
We finish this section with the following combinatorial estimate which plays the role of the Blakley-Roy inequality in \cite{MRT, MRTTZ} and turns out to be rather advantageous in practice.
\begin{lema} \label{regular} There exists $c \sim 1$ such that, for $\gtrsim \frac{P'}{\log P'}$ primes $p' \in [P',2P']$, we can find a $(c,p')$-regular subset $\mathcal{J}_{p'}$ of $\mathcal{J}$. \end{lema}
\begin{proof}
For each prime $p' \in [P',2P']$ let $A_{p'}$ be the number of quadruples $((x_1,\alpha_1), (x_2,\alpha_2),p,q) \in \mathcal{J} \times \mathcal{J} \times \mathcal{P}_1 \times \mathcal{P}_2$ with $\| p \alpha_1 - q \alpha_2 \|_{p'} \lesssim PK/H$ and $|x_1/p - x_2/q| \lesssim H/(PK)$. By construction of $\mathcal{J}$, we know that $$\sum_{p' \in [P',2P']} A_{p'} \gtrsim \frac{X}{H} \left( \frac{P}{\log P} \right)^2 \frac{P'}{\log P'}.$$ By the prime number theorem, this means that we can find $\gtrsim \frac{P'}{\log P'}$ primes $p' \in [P',2P']$ with $A_{p'} \ge c_1 \frac{X}{H} \left( \frac{P}{\log P} \right)^2$, for some $c_1 \gtrsim 1$. In particular, we have $\ge c_1 \frac{X}{H} \left( \frac{P}{\log P} \right)^2$ paths mod $p'$ of length $1$ in $\mathcal{J}$.
Fix now such a choice of $p'$. Let $\delta > 0$ be a sufficiently small constant and let $\mathcal{J}_1$ be the set of elements of $\mathcal{J}$ that are connected by a path mod $p'$ of length $1$ to at most $\delta \frac{P^2}{(\log P)^2}$ elements of $\mathcal{J}$. Recursively, let $\mathcal{J}_k$ be the set of elements of $\mathcal{J} \setminus \bigcup_{j=1}^{k-1} \mathcal{J}_{j}$ that are connected by a path mod $p'$ of length $1$ to at most $\delta \frac{P^2}{(\log P)^2}$ elements of $\mathcal{J} \setminus \bigcup_{j=1}^{k-1} \mathcal{J}_j$. Let $k_0$ be the largest integer such that $\mathcal{J}_{k_0}$ is nonempty (which exists, since $\mathcal{J}$ is finite). Clearly, this means that every element of $\mathcal{J}_{p'} :=\mathcal{J} \setminus \bigcup_{j=1}^{k_0} \mathcal{J}_j$ is connected by a path mod $p'$ of length $1$ to at least $\delta \frac{P^2}{(\log P)^2}$ elements of $\mathcal{J}_{p'}$. Furthermore, since given $(x_1,\alpha_1), (x_2,\alpha_2) \in \mathcal{J}$ there is at most one choice of $(p,q) \in [P,2P]$ with $|x_1/p - x_2/q| \lesssim H/(PK)$, the number of paths mod $p'$ of length $1$ in $\mathcal{J}_{p'}$ is at least
$$ c_1 \frac{P^2}{(\log P)^2} \frac{X}{H} - 2 \delta \frac{P^2}{(\log P)^2} \sum_{j=1}^{k_0} |\mathcal{J}_j| \ge (c_1-4\delta) \frac{P^2}{(\log P)^2} \frac{X}{H}.$$
Since every element of $\mathcal{J}_{p'}$ belongs to $\lesssim \frac{P^2}{(\log P)^2}$ paths mod $p'$ of length $1$, if $\delta$ is sufficiently small we must have $|\mathcal{J}_{p'}| \gtrsim X/H$. The result follows. \end{proof}
\section{Uniformity}
In this section we will boost the results of Section \ref{phase} by exploiting the additional information that paths have on the relative sizes of the primes involved. We begin with the following observation in this direction.
\begin{lema} \label{uni} There exists $c \gtrsim 1$ such that, given a path mod $Q$ in $\mathcal{J}$ of length $k \le c \log (X/H)$ consisting of primes $p_1, \ldots, p_k$, $q_1, \ldots, q_k$, we have $\frac{\prod_{i=1}^m q_i}{\prod_{i=1}^m p_i} \sim 1$ for every $1 \le m \le k$. \end{lema}
\begin{proof} Fix the path and a choice of $1 \le m \le k$. Since for each $1 \le i \le m$ there is some pair $x_i,x_{i+1}$ in $[X/(10K),2X/K]$ with $(q_i/p_i) x_i = x_{i+1} + O(H/K)$, we have that $$ \frac{q_i}{p_i} = \frac{x_{i+1}}{x_i} + O\left(\frac{H}{X} \right).$$ In particular, $$ \prod_{i=1}^m \frac{q_i}{p_i} = \prod_{i=1}^m \left( \frac{x_{i+1}}{x_i} + O \left(\frac{H}{X} \right) \right).$$ We expand the product on the right into $2^m$ terms, the first of which is $x_{m+1}/x_1 \sim 1$. Since for every $S \subseteq \left\{ 1, \ldots, m \right\}$ we clearly have $\prod_{i \in S} \frac{x_{i+1}}{x_i} \lesssim B^k$ for some $B \sim 1$, if we choose $c \gtrsim 1$ sufficiently small in the statement of the current lemma we see that the contribution of the remaining terms is $$ \lesssim (2B)^{k} \frac{H}{X} \lesssim 1.$$ This proves that $\frac{\prod_{i=1}^m q_i}{\prod_{i=1}^m p_i} \lesssim 1$, while the reverse inequality follows from considering the 'inverted' path and applying the same reasoning. \end{proof}
\begin{coro} \label{pd21} Let the hypothesis be as in Lemma \ref{uni}. Let $1 \le i < j \le k+1$. If $(x_i,\alpha_i), (x_j,\alpha_j)$ are the $i$th and $j$th elements of the path, then
$$|x_i \prod_{t=i}^{j-1} q_t/p_t - x_j| = O((j-i)H/K).$$ \end{coro}
\begin{proof} This follows immediately upon iterating the relation $x_{t+1} = x_t q_t/p_t + O(H/K)$ and using Lemma \ref{uni}. \end{proof}
We now insert this information into our previous estimate on pre-paths.
\begin{lema} \label{alf} Let the hypothesis be as in Lemma \ref{uni}. Then, for every pyramid $(\alpha_1^{(j)})_{1 \le j \le k+1}$ associated to the path and every $2 \le j \le k+1$ and $1 \le m < j$, we have
$$ \| \left( \prod_{i=m}^{j-1} q_i \right) \alpha_1^{(j)} - \alpha_1^{(m)} \|_Q \lesssim k \frac{K}{H} \frac{1}{\prod_{i=1}^{m-1} q_i}.$$ \end{lema}
\begin{proof} By considering shorter paths inside the original path, it will suffice to cover the case in which $\alpha_1^{(j)}$ is the top element of a pyramid associated to a path of length $j-1$. For every $m \le t \le j-1$ we have that $q_t \alpha_1^{(t+1)} \equiv \alpha_1^{(t)} + c_t \, \, (\text{mod }Q)$, where by Lemma \ref{premanli} and Lemma \ref{uni} we know that $c_t$ can be taken to be a real number satisfying
$$ | c_t \left( \prod_{i=1}^{t-1} q_i \right) | \lesssim \frac{P K}{H} \frac{1 }{p_{ \lfloor (t-1)/2 \rfloor+1}} \prod_{i=1}^{ \lfloor (t-1)/2 \rfloor} q_i/p_i \lesssim \frac{K}{H}.$$ In particular,
$$ | c_t \left( \prod_{i=m}^{t-1} q_i \right) | \lesssim \frac{K}{H} \frac{1}{\prod_{i=1}^{m-1} q_i}.$$ The result then follows evaluating $\left( \prod_{i=m}^{j-1} q_i \right) \alpha_1^{(j)} = \left( \prod_{i=m}^{j-2} q_i \right) (\alpha_1^{(j-1)} + c_{j-1})$ recursively. \end{proof}
This, in turn, leads to the following bound.
\begin{lema} \label{morealf} Let the hypothesis be as in Lemma \ref{uni}. Then
$$ \| \left( \prod_{i=1}^{j-1} p_i \prod_{i=j}^{k} q_i \right) \alpha_1^{(k+1)} - \alpha_j^{(1)} \|_Q \lesssim k \frac{K}{H},$$ for every $1 \le j \le k+1$. \end{lema}
\begin{proof} By Lemma \ref{alf} we know that
$$ \| \left( \prod_{i=j}^{k} q_i \right) \alpha_1^{(k+1)} - \alpha_1^{(j)} \|_Q \lesssim k \frac{K}{H} \frac{1}{\prod_{i=1}^{j-1} q_i}.$$ Considering the 'inverted' path that goes from $ \alpha_j^{(1)}$ to $\alpha_1^{(1)}$ and applying Lemma \ref{alf} again, we also see that
$$ \| \left( \prod_{i=1}^{j-1} p_i \right) \alpha_1^{(j)} - \alpha_j^{(1)} \|_Q \lesssim k \frac{K}{H}.$$ The result then follows from the triangle inequality and Lemma \ref{uni}. \end{proof}
\section{Connectedness}
The purpose of this section is to show that a significant proportion of the elements of $\mathcal{J}$ can be connected with each other through a path of reasonable length. Our first observation is that once the initial point of a path of short length is fixed, there are only a few ways of reaching the same end point.
\begin{lema} \label{olden} Let $1 \le k \le \frac{\log (X/(HB \log X))}{2 \log (2P)}$ for some sufficiently large constant $B \sim 1$. Then, for each pair $(x,\alpha), (y,\beta) \in \mathcal{J}$, there are at most $(2k)!$ paths mod $1$ of length $k$ that connect them.
\end{lema}
\begin{proof} It will suffice to show that if $(x,\alpha), (y, \beta) \in \mathcal{J}$ are connected by two paths consisting of primes $\left\{ p_1,\ldots,p_{k},q_1,\ldots,q_{k} \right\}$ and $\left\{ p_1',\ldots,p_{k}',q_1',\ldots,q_{k}' \right\}$, respectively, then these two sets of primes must coincide. Let us proceed by contradiction. By Corollary \ref{pd21} we know that
$$ \left| \left( \prod_{i=1}^{k} \frac{q_i}{p_i} \right) x - y \right| \lesssim \frac{k H}{K}.$$ Similarly, considering the other path and using Lemma \ref{uni} we see that we must also have
$$ \left| \left( \prod_{i=1}^{k} \frac{p_i'}{q'_i} \right) y - x \right| \lesssim \frac{k H}{K}.$$ Combining both estimates using Lemma \ref{uni} again and the triangle inequality, we obtain \begin{equation} \label{old28}
\left| \prod_{i=1}^{k} \frac{q_i}{p_i} \prod_{i=1}^{k} \frac{p_i'}{q'_i} - 1\right| \lesssim \frac{kH}{X}.
\end{equation} Since the primes in each path are all distinct by definition and the paths do not consist of exactly the same primes, the left-hand side cannot be $0$. Writing it over the common denominator $\prod_{i=1}^{k} p_i q_i' \le (2P)^{2k}$, its numerator is then a non-zero integer, so the left-hand side has size at least $(2P)^{-2k}$. The result then follows from our size hypothesis on $k$.
\end{proof}
We will be needing the following bound from \cite{MRTTZ}.
\begin{lema}[\cite{MRTTZ}, Lemma 6.1]
\label{products28} Let $r \in \mathbb{N}$, $A \ge 1$ and $P_0,N \ge 3$. Then, the number of $2r$-tuples of primes $(p_{1,1},\ldots,p_{1,r},p_{2,1},\ldots,p_{2,r}) \in [P_0,2P_0]^{2r}$ satisfying
$$ \left| \prod_{i=1}^r p_{1,i} - \prod_{i=1}^r p_{2,i} \right| \le A \frac{(2P_0)^{r}}{N}$$ is bounded by $$ \lesssim A r!^2 (2 P_0)^r \left( \frac{(2 P_0)^r}{N} + 1 \right).$$ \end{lema}
The use of this lemma is that it will allow us to assume that no pair of paths mod $1$ of reasonable length that connect the same points share any primes in common. Essentially the same argument was already used in \cite{MRT, MRTTZ}. Precisely, we have the following bound on the number of exceptions.
\begin{coro} \label{ls28} Let $(x,\alpha) \in \mathcal{J}$ and let $k$ be as in Lemma \ref{uni}. Then, the number of pairs of split paths mod $1$ of length $k$ with initial point $(x,\alpha)$, sharing the same end point and having at least one prime in common is bounded by $$ \lesssim (2k)!^2 \frac{(2 P)^{2k}}{\log P} \left( \frac{kH}{X}(2 P)^{2k-1} + 1 \right).$$ \end{coro}
\begin{proof} Let $(p_1,\ldots,p_k,q_1,\ldots,q_k)$ and $(p_1',\ldots,p_k',q_1',\ldots,q_k')$ be the sets of distinct primes corresponding to two paths with initial point $(x,\alpha)$ and common end point $(y,\beta)$, for some $(y,\beta) \in \mathcal{J}$. Using Corollary \ref{pd21} as in the proof of Lemma \ref{olden} we obtain (\ref{old28}), which we may rewrite as
$$ \left| \prod_{i=1}^{k} q_i p_i' - \prod_{i=1}^{k} p_i q_i' \right| \lesssim \frac{kH}{X} (2P)^{2k}.$$ Since $\mathcal{P}_1$ and $\mathcal{P}_2$ are disjoint and the paths share at least a prime in common, we can find a prime appearing in both products on the left-hand side. Factoring it out and applying Lemma \ref{products28} with $r=2k-1$ and $N=\frac{X}{kH}$, we see that the ordered tuple of $4k-2$ remaining primes belongs to a set $S$ of size $$ \lesssim (2k-1)!^2 (2 P)^{2k-1} \left( \frac{kH}{X}(2 P)^{2k-1} + 1 \right).$$ The result follows observing that, for each such element of $S$, there are at most $(2k)^2 \frac{P}{\log P}$ choices of ordered sets of primes $(p_1,\ldots,p_k,q_1,\ldots,q_k)$ and $(p_1',\ldots,p_k', q_1',\ldots,q_k')$ that can lead to it in the above manner. \end{proof}
We now come to the main estimate of this section, which shows that a large number of elements $(y,\beta) \in \mathcal{J}$ can be connected to a fixed element $(x_0,\alpha_0) \in \mathcal{J}$ through two different paths of small length. In the next section, these two paths will be combined into a single path going through $(y,\beta)$ and having $(x_0,\alpha_0)$ as both its initial and end point.
\begin{lema}
\label{pairs}
Let $k_0 = \lfloor \frac{\log (X/(HB \log X))}{2 \log (2P)} \rfloor$ for some sufficiently large $B \sim 1$. Fix $\delta > 0$. Assume $\varepsilon > 0$ in Lemma \ref{base} is sufficiently small with respect to $\delta$, $H \ge \exp(C (\log X)^{1/2} (\log \log X)^{1/2})$ with $C$ sufficiently large with respect to $\delta$ and $\varepsilon$ and $X$ is sufficiently large with respect to $\delta, \varepsilon$ and $C$. Then, there exist $\tau \sim 1$, $(x_0,\alpha_0) \in \mathcal{J}$ and $\mathcal{J}_0 \subseteq \mathcal{J}$ with $|\mathcal{J}_0| \gtrsim \frac{X}{H^{1+\delta}}$ such that the following holds. For every $(y,\beta) \in \mathcal{J}_0$ there exists some $Q_y$ which is the product of $\gtrsim \tau^{k_0} \frac{P'}{\log P'}$ different primes in $[P',2P']$ and such that there are two split paths mod $Q_y$ of length $k_0+2$ connecting $(x_0,\alpha_0)$ with $(y,\beta)$. Furthermore, the two paths share no prime in common.
\end{lema}
\begin{proof} During the proof we will let $c \sim 1$ denote a constant that may change at each occurrence. By Lemma \ref{regular} and the pigeonhole principle, we may locate some $(x_0,\alpha_0) \in \mathcal{J}$ that belongs to $\mathcal{J}_{p'}$ for $\gtrsim \frac{P'}{\log P'}$ primes $p' \in [P',2P']$. By regularity of $\mathcal{J}_{p'}$ this means that for each such $p'$ there are $\gtrsim c^{k_0} \left( \frac{P}{\log P} \right)^{2k_0+4} $ split paths mod $p'$ of length $k_0+2$ in $\mathcal{J}_{p'}$ with initial point $(x_0,\alpha_0)$.
Since there are at most $D^{k_0} \left( \frac{P}{\log P} \right)^{2k_0+4}$ such paths mod $1$ in $\mathcal{J}$ for some $D \sim 1$, we conclude that there are $\gtrsim c^{k_0} \left( \frac{P}{\log P} \right)^{2k_0+4}$ split paths mod $1$ of length $k_0+2$ with initial point $(x_0,\alpha_0)$ that are also split paths mod $p'$ for $\gtrsim c^{k_0} \frac{P'}{\log P'}$ choices of $p' \in [P',2P']$ (which depend on each path). In particular, this means they are split paths mod $Q$, where $Q$ is the product of such primes. Write $\mathcal{R}$ for this set of paths.
Write $\mathcal{R}_y$ for the paths in $\mathcal{R}$ that have endpoint $(y,\beta)$. By construction of $\mathcal{R}$ and the Cauchy-Schwarz inequality, we see that there must be $\gtrsim c^{k_0} |\mathcal{R}_y|^2$ pairs of paths in $\mathcal{R}_y$ such that both paths are split paths mod $p'$ for $\gtrsim c^{k_0} \frac{P'}{\log P'}$ primes $p' \in [P',2P']$ in common. In particular, they will both be split paths mod $Q$, with $Q$ the product of these primes. It follows that the number of pairs of paths in $\mathcal{R}$ having the same endpoint and with both being split paths mod $p'$ for $\gtrsim c^{k_0} \frac{P'}{\log P'}$ primes $p' \in [P',2P']$ is, by Cauchy-Schwarz, at least
$$ \gtrsim c^{k_0} \frac{|\mathcal{R}|^2 H}{X} \gtrsim c^{k_0} \frac{H}{X} \left( \frac{P}{\log P} \right)^{4k_0+8}.$$ Since by our hypothesis on $\varepsilon, C$ and $X$ we have that $$ c^{-k_0} (2(k_0+2))!^2 k_0 (\log P)^{4 k_0+7} (\log X) < P^{1/2},$$ say, it also follows from Corollary \ref{ls28} and our choice of $k_0$ that at least half of these pairs of paths, say, do not share any primes from $[P,2P]$ in common.
Let now $(y,\beta) \in \mathcal{J}$. If $\ell$ is a path mod $1$ in $\mathcal{R}_y$, it can be written as $\ell_0+\ell_1$, where $\ell_0$ is a path mod $1$ of length $k_0$ and $\ell_1$ is a path mod $1$ of length $2$ with end point $(y,\beta)$. There are $\lesssim \left( \frac{P}{\log P} \right)^4$ choices of $\ell_1$ and each such choice fixes the end point of $\ell_0$. On the other hand, by Lemma \ref{olden} we know that there are at most $(2k_0)!$ choices of $\ell_0$ with the same endpoint. We thus conclude that $(y,\beta)$ is the common endpoint of $\lesssim (2k_0)!^2 \left( \frac{P}{\log P} \right)^8$ of the pairs of paths we have constructed and therefore, by our lower bound on $|\mathcal{R}|$ and our choice of $k_0$, there are
$$ \gtrsim \frac{c^{k_0}}{(2k_0)!^2} \frac{|\mathcal{R}|^2 H}{X}\left( \frac{P}{\log P} \right)^{-8} \gtrsim \frac{c^{k_0}}{(2k_0)!^2} \frac{X}{H (\log X)^2} P^{-4} (\log P)^{-4 k_0},$$ different elements $(y,\beta) \in \mathcal{J}$ that are connected to $(x_0,\alpha_0)$ by a pair of paths of the desired form. The result then follows, since $$ c^{-k_0} (2k_0)!^2 P^4 (\log P)^{4 k_0} (\log X)^2 < H^{\delta},$$ under our hypothesis on $\varepsilon, H$ and $X$. \end{proof}
\section{Creating a global frequency}
In this section we conclude the proof of Theorem \ref{1}. We begin by finding choices of $T_y$ and $q_y$ that work for each $(y,\beta) \in \mathcal{J}_0$ and then deduce that a common choice of $T$ and $q$ works for many of these elements.
\begin{prop}
Let the notation be as in Lemma \ref{pairs}. Then, for every $(y,\beta) \in \mathcal{J}_0$ there exists some $T_y \in \mathbb{R}$ with $|T_y| \lesssim X^2/H^{2-\delta}$ such that \begin{equation} \label{ff2} \alpha_0 \equiv \frac{a_y}{d_y} Q_y + \frac{T_y}{x_0} + O ( H^{\delta-1} ) \, \, (\text{mod }Q_y),
\end{equation} and \begin{equation} \label{ff28}
\beta \equiv \frac{b_y}{d_y} Q_y + \frac{T_y}{y} + O(H^{2\delta-1}) \, \, (\text{mod }Q_y),
\end{equation} for some integers $a_y, b_y, d_y \lesssim H^{\delta}$. \end{prop}
\begin{proof} Fix $(y,\beta) \in \mathcal{J}_0$. Write $\ell_1$ and $\ell_2$ for a pair of paths mod $Q_y$ of length $k:=k_0+2$ of the kind provided by Lemma \ref{pairs}. If we write $(p_1,\ldots,p_k, q_1,\ldots,q_k)$ and $(p_1',\ldots,p_k',q_1',\ldots,q_k')$ for the primes in $[P,2P]$ associated to $\ell_1$ and $\ell_2$ respectively, then $\ell_1+(\ell_2)^{-1}$ will be a path mod $Q_y$ of length $2k$ consisting of (distinct) primes $(p_1,\ldots,p_k,q_k',\ldots,q_1',q_1,\ldots,q_k,p_k',\ldots,p_1')$ having $(x_0,\alpha_0)$ as its initial and end point. If $\alpha_y \in \mathbb{R} / Q_y \mathbb{Z}$ is the top element of a pyramid associated to this path, then by Lemma \ref{morealf} applied with $j=1$ and $j=2k+1$ and the triangle inequality, we see that
$$ \| \left( \prod_{i=1}^k p_i q_i' - \prod_{i=1}^k q_i p_i' \right) \alpha_y \|_{Q_y} \lesssim k \frac{K}{H},$$ and therefore \begin{align*}
\alpha_y &\equiv \frac{u_y}{d_y} Q_y + O \left( k \frac{K}{H} \right) \, \, (\text{mod }Q_y) \\
&\equiv \frac{u_y}{d_y} Q_y + \frac{T_y}{x_0 \prod_{i=1}^k q_i p_i'} \, \, (\text{mod }Q_y),
\end{align*} for some $T_y \in \mathbb{R}$ that by our choice of $k$ can be taken to satisfy
$$|T_y| \lesssim k \frac{K}{H} x_0 \left( \prod_{i=1}^k q_i p_i' \right) \lesssim k \frac{K}{H} \frac{X}{K} \left(\frac{X}{H} (2P)^4 \right) \lesssim \frac{X^2}{H^{2-\delta}},$$
and for some integers $|u_y| \le d_y$ with $d_y$ dividing $\left| \prod_{i=1}^k p_i q_i' - \prod_{i=1}^k q_i p_i' \right|$. Using Corollary \ref{pd21} as in the deduction of (\ref{old28}) we see that
$$ \left| \prod_{i=1}^k p_i q_i' - \prod_{i=1}^k q_i p_i' \right| \lesssim \frac{k H}{X} (2P)^{2k} \lesssim k (2P)^4,$$
by our choice of $k$. In particular, we have $|d_y| \lesssim H^{\delta}$. Applying Lemma \ref{morealf} with $j=1$ again, we deduce that we can find some integer $|a_y| \le d_y$ with $$ \alpha_0 \equiv \left( \prod_{i=1}^k q_i p_i' \right) \alpha_y + O \left( k \frac{K}{H} \right) \equiv \frac{a_y}{d_y} Q_y + \frac{T_y}{x_0} + O \left( H^{\delta-1} \right) \, \, (\text{mod }Q_y).$$
Similarly, applying Lemma \ref{morealf} with $j=k+1$, we obtain an integer $|b_y| \le d_y$ with $$ \beta \equiv \frac{b_y}{d_y} Q_y + \frac{T_y}{x_0} \frac{\prod_{i=1}^k p_i p_i' }{ \prod_{i=1}^k q_i p_i'} + O \left( H^{\delta-1} \right) \, \, (\text{mod }Q_y).$$ Removing the primes $p_1', \ldots, p_k'$ appearing in both numerator and denominator, and using that by Lemma \ref{uni} and Corollary \ref{pd21} we have
$$ \left| \frac{T_y}{x_0} \frac{\prod_{i=1}^k p_i }{ \prod_{i=1}^k q_i} - \frac{T_y}{y} \right| \lesssim \frac{|T_y|}{(X/K)^2} \left|y- \left(\prod_{i=1}^k q_i/p_i \right) x_0 \right| \lesssim H^{2 \delta-1},$$ we obtain the result. \end{proof}
We can now conclude the proof of Theorem \ref{3}. Let $\delta >0$ be sufficiently small and let the notation be as in Lemma \ref{pairs}. By Cauchy-Schwarz and the pigeonhole principle, we know from Lemma \ref{pairs} that we can find some $c \sim 1$ and $(y_0,\beta_0) \in \mathcal{J}_0$ such that $$\tilde{Q}_y := \text{gcd}(Q_{y_0},Q_y) \ge (P')^{c^{k_0} \frac{P'}{\log P'}} \ge \exp(c^{k_0} P') $$
for $\ge c^{k_0} |\mathcal{J}_0| \gtrsim H^{-\delta} |\mathcal{J}_0|$ elements $(y,\beta) \in \mathcal{J}_0$. If we define $T := T_{y_0}$, $Q := Q_{y_0}$, $a := a_{y_0}$ and $d := d_{y_0}$, we see from (\ref{ff2}) that for each such $(y,\beta) \in \mathcal{J}_0$ we have
$$ \alpha_0 \equiv \frac{a_y}{d_y} Q_y + \frac{T_y}{x_0} + O(H^{\delta-1}) \equiv \frac{a}{d} Q + \frac{T}{x_0} + O(H^{\delta-1}) \, \, (\text{mod }\tilde{Q}_y).$$
From our bounds on the individual quantities involved, it follows that it must necessarily be
$$ \frac{a_y}{d_y} Q_y \equiv \frac{a}{d} Q \, \, (\text{mod }\tilde{Q}_y),$$
and therefore it must also be $|T - T_y| \lesssim |x_0|/H^{1-\delta}$. We then conclude from (\ref{ff28}) that
\begin{equation}
\label{llu2}
\beta \equiv \frac{b_y}{d_y} Q_y + \frac{T}{y} + O(H^{2\delta-1}) \, \, (\text{mod }\tilde{Q}_y).
\end{equation}
By the pigeonhole principle, it follows that there is some $q \lesssim H^{\delta}$ such that (\ref{llu2}) holds with $d_y = q$ for $\gtrsim H^{-2 \delta} |\mathcal{J}_0| \gtrsim X/H^{1+3 \delta}$ elements $(y,\beta) \in \mathcal{J}$. For each such $(y,\beta)$, we know by construction of $\mathcal{J}$ and the triangle inequality that we can find some interval $[z,z+H^*] \subseteq [y,y+H/K]$ of length $H^* := H^{1-3 \delta}$ with
$$ \Big| \sum_{z \le n < z+H^*} g(n) e(\beta n) \Big| \gtrsim H^*.$$
Since $|T/y-T/z| \lesssim H^{2\delta-1}$, it then follows that
$$ \Big| \sum_{z \le n < z+H^*} g(n) e\Big(\frac{(n-z)b_y}{q} Q_y + \frac{(n-z)T}{z}\Big) \Big| \gtrsim H^*.$$
Choosing $\delta$ sufficiently small with respect to $\rho$, this concludes the proof of Theorem \ref{3} and therefore of Theorem \ref{1}.
\end{document}
\begin{document}
\newcommand{\diam} {\operatorname{diam}} \newcommand{\Scal} {\operatorname{Scal}} \newcommand{\scal} {\operatorname{scal}} \newcommand{\Ric} {\operatorname{Ric}} \newcommand{\Hess} {\operatorname{Hess}} \newcommand{\grad} {\operatorname{grad}} \newcommand{\Rm} {\operatorname{Rm}} \newcommand{\Rc} {\operatorname{Rc}} \newcommand{\Curv} {S_{B}^{2}\left( \mathfrak{so}(n) \right) } \newcommand{ \tr } {\operatorname{tr}} \newcommand{ \id } {\operatorname{id}} \newcommand{ \Riczero } {\stackrel{\circ}{\Ric}} \newcommand{ \ad } {\operatorname{ad}} \newcommand{ \Ad } {\operatorname{Ad}} \newcommand{ \dist } {\operatorname{dist}} \newcommand{ \rank } {\operatorname{rank}} \newcommand{\operatorname{Vol}}{\operatorname{Vol}} \newcommand{\operatorname{dVol}}{\operatorname{dVol}} \newcommand{ \zitieren }[1]{ \hspace{-3mm} \cite{#1}} \newcommand{ \pr }{\operatorname{pr}} \newcommand{\operatorname{diag}}{\operatorname{diag}} \newcommand{\mathcal{L}}{\mathcal{L}} \newcommand{\operatorname{av}}{\operatorname{av}}
\newtheorem{theorem}{Theorem}[section] \newtheorem{definition}[theorem]{Definition} \newtheorem{example}[theorem]{Example} \newtheorem{remark}[theorem]{Remark} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{assumption}[theorem]{Assumption} \newtheorem{acknowledgment}[theorem]{Acknowledgment} \newtheorem{DefAndLemma}[theorem]{Definition and lemma} \newtheorem{questionroman}[theorem]{Question}
\newenvironment{remarkroman}{\begin{remark} \normalfont }{\end{remark}} \newenvironment{exampleroman}{\begin{example} \normalfont }{\end{example}} \newenvironment{question}{\begin{questionroman} \normalfont }{\end{questionroman}}
\renewcommand{(\alph{enumi})}{(\alph{enumi})} \newtheorem{maintheorem}{Theorem}[] \renewcommand*{\themaintheorem}{\Alph{maintheorem}} \newtheorem*{theorem*}{Theorem} \newtheorem*{corollary*}{Corollary} \newtheorem*{remark*}{Remark} \newtheorem*{example*}{Example} \newtheorem*{question*}{Question}
\newcommand{\mathbb{R}}{\mathbb{R}} \newcommand{\mathbb{N}}{\mathbb{N}} \newcommand{\mathbb{Z}}{\mathbb{Z}} \newcommand{\mathbb{Q}}{\mathbb{Q}} \newcommand{\mathbb{C}}{\mathbb{C}} \newcommand{\mathbb{F}}{\mathbb{F}} \newcommand{\mathcal{X}}{\mathcal{X}} \newcommand{\mathcal{D}}{\mathcal{D}} \newcommand{\mathcal{C}}{\mathcal{C}}
\begin{abstract} This paper studies cohomogeneity one Ricci solitons. If the isotropy representation of the principal orbit $G/K$ consists of two inequivalent $\Ad_K$-invariant irreducible summands, the existence of continuous families of non-homothetic complete steady and expanding Ricci solitons on non-trivial bundles is shown. These examples were detected numerically by Buzano-Dancer-Gallaugher-Wang. The analysis of the corresponding Ricci flat trajectories is used to reconstruct Einstein metrics of positive scalar curvature due to B\"ohm. The techniques also apply to $m$-quasi-Einstein metrics. \end{abstract}
\maketitle
\section*{Introduction}
A Riemannian manifold $(M,g)$ is called {\em Ricci soliton} if there exists a smooth vector field $X$ on $M$ and a real number $\varepsilon \in \mathbb{R}$ such that \begin{equation*} \Ric + \frac{1}{2} L_X g + \frac{\varepsilon}{2} g = 0, \end{equation*} where $L_X g$ denotes the Lie derivative of the metric $g$ with respect to $X.$ Ricci solitons are generalisations of Einstein manifolds and will be called {\em non-trivial} if $X$ is not a Killing vector field. If $X$ is the gradient of a smooth function $u \colon M \to \mathbb{R}$ then it is called a {\em gradient} Ricci soliton. It is called {\em shrinking}, {\em steady} or {\em expanding} depending on whether $\varepsilon<0,$ $\varepsilon=0$ or $\varepsilon>0.$ Ricci solitons were introduced by Hamilton \cite{HamiltonRFonSurfaces} as self-similar solutions to the Ricci flow and play an important role in its singularity analysis.
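A basic example, recalled here only for orientation, is the {\em Gaussian soliton}: for any $\varepsilon \in \mathbb{R},$ the flat metric on $\mathbb{R}^n$ together with the gradient field $X = \grad u$ of the potential $u(x) = -\frac{\varepsilon}{4} |x|^2$ satisfies \begin{equation*} \Ric + \frac{1}{2} L_X g + \frac{\varepsilon}{2} g = \Hess u + \frac{\varepsilon}{2} g = -\frac{\varepsilon}{2} g + \frac{\varepsilon}{2} g = 0, \end{equation*} since the metric is flat and $\frac{1}{2} L_{\grad u} g = \Hess u.$ In particular the same manifold carries shrinking, steady and expanding gradient soliton structures, cf. also the rigid solitons recalled below.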
This paper studies the Ricci soliton equation under the assumption of a large symmetry group. For example, Lauret \cite{LauretHomogeneousRS} has constructed non-gradient, homogeneous expanding Ricci solitons. However, Petersen-Wylie \cite{PWRigidityWithSymmetry} have shown that any homogeneous gradient Ricci soliton is rigid, i.e. it is isometric to a quotient of $N \times \mathbb{R}^k,$ where $N$ is Einstein with Einstein constant $\lambda$ and $\mathbb{R}^k$ is equipped with the Euclidean metric and soliton potential $\frac{\lambda}{2} |x|^2.$
Therefore it is natural to assume that the Ricci soliton is of {\em cohomogeneity one.} That is, a Lie group acts isometrically on $(M,g)$ and the generic orbit is of codimension one. This will be the setting of this paper. A systematic investigation was initiated by Dancer-Wang \cite{DWCohomOneSolitons} who set up the general framework. Previous examples include the first non-trivial compact Ricci soliton due to Cao \cite{CaoSoliton} and Koiso \cite{KoisoSoliton} or the examples of Feldman-Ilmanen-Knopf \cite{FIKSolitons}, which include the first non-compact shrinking Ricci soliton. It is worth noting that all of these examples, as well as their generalisations due to Dancer-Wang \cite{DWCohomOneSolitons}, are {\em K\"ahler.} In fact, all currently known non-trivial compact Ricci solitons are K\"ahler. On the other hand, Angenent-Knopf \cite{AngenentKnopfRSConicalSingNonuniqueness} constructed non-compact, non-K\"ahler shrinking Ricci solitons.
Hamilton's cigar is also K\"ahler, whereas its higher dimensional analogue, the rotationally symmetric steady soliton on $\mathbb{R}^n$, $n>2,$ the Bryant soliton, is {\em non}-K\"ahler. By extending these examples in a series of papers and then in joint work with Buzano and Gallaugher, Dancer-Wang constructed steady and expanding Ricci solitons of multiple warped product type \cite{DWExpandingSolitons, DWSteadySolitons}, \cite{BDGWExpandingSolitons}, \cite{BDWSteadySolitons}. They also numerically investigated the case where the isotropy representation of the principal orbit $G/K$ consists of two inequivalent $\Ad_K$-invariant irreducible real summands and found numerical evidence for the existence of continuous families of complete steady and expanding Ricci solitons on certain non-trivial vector bundles in \cite{BDGWExpandingSolitons, BDGWSteadySolitons}. This paper gives a rigorous construction thereof:
Let $G$ be a compact Lie group and let $K \subset H \subset G$ be closed subgroups such that $H/K=S^{d_S}$. Then $H$ acts linearly on $\mathbb{R}^{d_S+1}$ and the associated vector bundle $G \times_H \mathbb{R}^{d_S+1}$ is a cohomogeneity one manifold. Examples where the isotropy representation of $G/K$ decomposes into two inequivalent $\Ad_K$-invariant irreducible real summands include the triples \begin{align} (G,H,K) & = (Sp(1) \times Sp(m+1), Sp(1) \times Sp(1) \times Sp(m), Sp(1) \times Sp(m)), \nonumber \\ (G,H,K) & = (Sp(m+1), Sp(1) \times Sp(m), U(1) \times Sp(m)), \label{GroupDiagrams} \\ (G,H,K) & = (Spin(9), Spin(8), Spin(7)). \nonumber \end{align} These examples come from the Hopf fibrations, cf. \cite{BesseEinstein}. In the first and third case, the associated vector bundle is diffeomorphic to $\mathbb{H}P^{m+1} \setminus \left\{ \text{ point } \right\}$ and $CaP^2 \setminus \left\{ \text{ point } \right\},$ respectively. The main theorem is the following:
\begin{maintheorem} On $CaP^2 \setminus \left\{ \text{ point } \right\},$ $\mathbb{H}P^{m+1} \setminus \left\{ \text{ point } \right\}$ for $m \geq 1$ and on the vector bundle associated to $(G,H,K) = (Sp(m+1), Sp(1) \times Sp(m), U(1) \times Sp(m))$ for $m \geq 3,$ there exist a $1$-parameter family of non-homothetic complete steady and a $2$-parameter family of non-homothetic complete expanding Ricci solitons.
The steady Ricci solitons are asymptotically paraboloid and thus non-collapsed. The expanding Ricci solitons are asymptotically conical. \label{MainTheoremTwoSummands} \end{maintheorem}
Notice that non-trivial gradient steady and expanding Ricci solitons must be non-compact. Furthermore, due to Perelman's \cite{Perelman1} no-local collapsing theorem, blow up limits of finite time Ricci flow singularities are necessarily non-collapsed.
The construction of the Ricci solitons in Theorem \ref{MainTheoremTwoSummands} partially carries over to the case of complex line bundles over Fano K\"ahler-Einstein manifolds, where Cao \cite{CaoSoliton} and Feldman-Ilmanen-Knopf \cite{FIKSolitons} previously constructed {\em K\"ahler} Ricci solitons. In contrast, Theorem \ref{MainTheoremRSOnLineBundles} exhibits continuous families of complete {\em non}-K\"ahler steady and expanding Ricci solitons.
\begin{maintheorem} Let $(V,J,g)$ be a Fano K\"ahler-Einstein manifold of real dimension $d$. Suppose that the first Chern class is given by $c_1(V,J) = p \rho$ for an indivisible class $\rho \in H^2(V,J)$ and $\Ric_g = pg.$ For $q \in \mathbb{Z}$ let $\pi \colon P_q \to V$ be the principal circle bundle with Euler class $q \pi^{*} \rho$ and let $L_q$ be the total space of the associated complex line bundle.
If $2p^2 > (d+2)q^2>0$ there exist a $1$-parameter family of non-homothetic complete steady Ricci solitons and a $2$-parameter family of non-homothetic complete expanding Ricci solitons on $L_q.$ In particular there exist non-K\"ahler Ricci solitons on $L_q.$ \label{MainTheoremRSOnLineBundles} \end{maintheorem}
In the steady case these Ricci solitons were independently discovered by Stolarski \cite{StolarskiSteadyRSOnCxLineBundles} and Appleton \cite{AppletonSteadyRS}, who use different techniques.
The proof of Theorem \ref{MainTheoremTwoSummands} establishes that the Ricci soliton metrics correspond to trajectories in a bounded region of a phase space, which implies completeness. This method also applies to Einstein metrics. In particular, in the situation of Theorem \ref{MainTheoremTwoSummands}, the methods of this paper provide an alternative construction of Ricci flat metrics and Einstein metrics with negative scalar curvature due to B\"ohm \cite{BohmNonCompactEinstein}, see also remark \ref{RemarkBoehmSetUpAndProofCompleteness}.
The associated coordinate change moreover allows good control on the trajectories close to the singular orbit. In the Einstein case this also yields an alternative approach to the following result of B\"ohm \cite{BohmInhomEinstein, BohmNonCompactEinstein}: The two summands Einstein metrics converge to explicit solutions with conical singularities as the volume of the singular orbit tends to zero. In comparison to B\"ohm's work, the main technical simplification is that the methods of this paper do not use the Poincar\'e-Bendixson theorem, see remark \ref{RemarkConvergenceConeSolutions}. As an application, an analysis of the {\em Ricci flat} trajectories will be used to reconstruct Einstein metrics of positive scalar curvature due to B\"ohm \cite{BohmInhomEinstein}.
The vector bundles associated to the two families of group diagrams in \eqref{GroupDiagrams} also admit {\em explicit} Ricci flat metrics in the lowest dimensional case $m=1.$ These are in fact of special holonomy $G_2$ and $Spin(7),$ respectively, and were discovered earlier by Bryant-Salamon \cite{BSExceptionalHolonomy} and Gibbons-Page-Pope \cite{GPPEinsteinOnSphereR3R4bundles}. However, it is worth noting that these metrics correspond to {\em linear} trajectories in the above phase space, see theorem \ref{ExplicitRFTrajectories}.
The techniques in this paper moreover apply if the Bakry-\'Emery Ricci tensor $\Ric + \Hess u$ is replaced with the more general version $\Ric + \Hess u - \frac{1}{m} du \otimes du.$ For any $m \in (0, \infty]$ this leads to the notion of $m$-quasi-Einstein metrics, i.e. Riemannian manifolds which satisfy the curvature condition \begin{equation*} \Ric + \Hess u - \frac{1}{m} du \otimes du + \frac{\varepsilon}{2} g = 0 \end{equation*} for $u \in C^{\infty}(M)$ and $\varepsilon \in \mathbb{R}.$ These metrics play an important role in the study of Einstein warped products, cf. \cite{CaseSMMSAndQEM} or \cite{HPWUniquenessWarpedProductEinstein} and references therein.
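For orientation, we recall the well-known link with warped products in the normalisation used here (see \cite{CaseSMMSAndQEM, HPWUniquenessWarpedProductEinstein} for precise statements): setting $f = e^{-u/m}$ for a positive integer $m,$ one computes \begin{equation*} \Hess u - \frac{1}{m} du \otimes du = - \frac{m}{f} \Hess f, \end{equation*} so the $m$-quasi-Einstein equation becomes $\Ric - \frac{m}{f} \Hess f + \frac{\varepsilon}{2} g = 0.$ This is exactly the base equation for a warped product metric $g + f^2 g_F$ on $M \times F^m$ to be Einstein with Einstein constant $-\frac{\varepsilon}{2},$ provided the Einstein constant of the fibre $(F^m,g_F)$ is chosen appropriately (it is determined by a second scalar equation in $u$).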
The initial value problem for cohomogeneity one $m$-quasi-Einstein manifolds will be discussed in the spirit of Eschenburg-Wang \cite{EWInitialValueEinstein} and Buzano \cite{BuzanoInitialValueSolitons}, see theorem \ref{QEMInitialValueTheorem}, and the $m$-quasi-Einstein analogue of Theorem \ref{MainTheoremTwoSummands} is proven in theorem \ref{TwoSummandsQEM}.
Furthermore, the setting of $m$-quasi-Einstein metrics allows a unified proof of the existence of Einstein metrics and Ricci soliton metrics on $\mathbb{R}^{d_1+1} \times M_2 \times \ldots \times M_r,$ for $d_1 \geq 1,$ where $(M_i, g_i)$ are Einstein manifolds with positive scalar curvature. This summarises earlier work due to B\"ohm \cite{BohmNonCompactEinstein}, Dancer-Wang \cite{DWSteadySolitons, DWExpandingSolitons} for $d_1 > 1$ and Buzano-Dancer-Gallaugher-Wang \cite{BDGWExpandingSolitons, BDWSteadySolitons} for $d_1 = 1:$
\begin{maintheorem} Let $M_2, \ldots, M_r$ be Einstein manifolds with positive scalar curvature and let $d_1 \geq 1$ and $m \in (0, \infty].$
Then there is an $(r -1)$-parameter family of non-trivial, non-homothetic, complete, smooth Bakry-\'Emery flat $m$-quasi-Einstein metrics and an $r$-parameter family of non-trivial, non-homothetic, complete, smooth $m$-quasi-Einstein metrics with quasi-Einstein constant $\frac{\varepsilon}{2} > 0$ on $\mathbb{R}^{d_1+1} \times M_2 \times \ldots \times M_r.$ \label{MainTheoremQEM} \end{maintheorem}
\textit{Structure of the paper.} Section \ref{CohomOneRicciSolitonEQ} reviews the Ricci soliton equation on cohomogeneity one manifolds and recalls some structure theorems. Section \ref{SectionNewSolitons} focuses on the two summands case, with section \ref{SectionSolitonsFromCircleBundles} discussing the case of complex line bundles over Fano K\"ahler-Einstein manifolds. Completeness of the metrics in Theorem \ref{MainTheoremTwoSummands} is shown in section \ref{CompletenessTwoSummands} and the asymptotic behaviour is studied in section \ref{SectionTwoSummandsAsymptotics}. Applications to convergence to cone solutions and B\"ohm's Einstein metrics of positive scalar curvature follow in sections \ref{SectionConvergenceToConeSolutions} and \ref{SectionBohmEinsteinMetricsPosScal}, respectively. Finally, section \ref{SectionQuasiEinsteinMetrics} discusses $m$-quasi-Einstein metrics and the proof of Theorem \ref{MainTheoremQEM}.
\textit{Acknowledgements.} I wish to thank my PhD advisor Prof. Andrew Dancer for constant support, helpful comments and numerous discussions.
\section{The cohomogeneity one Ricci soliton equation} \label{CohomOneRicciSolitonEQ} \subsection{The general set-up} \label{SectionCohomOneSetUp}
The general framework for cohomogeneity one Ricci solitons has been set up by Dancer-Wang \cite{DWCohomOneSolitons}: Let $(M,g)$ be a Riemannian manifold and let $G$ be a compact connected Lie group which acts isometrically on $(M,g).$ The action is of {\em cohomogeneity one} if the orbit space $M / G$ is one-dimensional. In this case, choose a unit speed geodesic $\gamma \colon I \to M$ that intersects all principal orbits perpendicularly. Let $K=G_{\gamma(t)}$ denote the principal isotropy group. Then $\Phi \colon I \times G/K \to M_0,$ $(t,gK) \mapsto g \cdot \gamma(t)$ is a $G$-equivariant diffeomorphism onto an open dense subset $M_0$ of $M$ and the pullback metric is of the form $\Phi^{*}g=dt^2 + g_t,$ where $g_t$ is a $1$-parameter family of metrics on the principal orbit $P=G/K.$ Let $N = \Phi_*( \frac{\partial}{\partial t})$ be a unit normal vector field and let $L_t = \nabla N$ denote the shape operator of the hypersurface $\Phi(\left\lbrace t\right\rbrace \times P).$ Via $\Phi,$ $L_t$ can be regarded as a one-parameter family of $G$-equivariant, $g_t$-symmetric endomorphisms of $TP$ which satisfies $\dot{g}_t = 2 g_t L_t.$ Similarly, let $\Ric_t$ be the Ricci curvature corresponding to $g_t.$ According to Eschenburg-Wang \cite{EWInitialValueEinstein} the Ricci curvature of the cohomogeneity one manifold $(M,g)$ is given by \begin{align*} \Ric(X,N) & = - g_t(\delta^{\nabla^t} L_t, X) - d ( \tr(L_t) ) (X), \\ \Ric(N,N) & = -\tr(\dot{L})-\tr(L_t^2), \\ \Ric(X,Y) & = -g_t(\dot{L}(X),Y) - \tr(L_t)g_t(L_t(X),Y) + \Ric_t(X,Y), \end{align*} where $X, Y \in TP,$ $ \delta^{\nabla^t} \colon T^*P \otimes TP \to TP$ is the codifferential, and $L_t$ is regarded as a $TP$-valued $1$-form on $TP.$ Dancer-Wang \cite{DWCohomOneSolitons} observed that, since $G$ is compact, any cohomogeneity one Ricci soliton induces a Ricci soliton with a $G$-invariant vector field. Hence, in the case of gradient Ricci solitons, the soliton potential can be assumed to be $G$-invariant. The gradient Ricci soliton equation $\Ric + \Hess u + \frac{\varepsilon}{2} g= 0$ then takes the form \begin{align} -( \delta^{\nabla^t}L_t)^{\flat} - d(\tr(L_t)) & = 0, \label{CohomOneRSa}\\ - \tr( \dot{L}_t) - \tr(L_t^2) + \ddot{u} + \frac{\varepsilon}{2} & =0, \label{CohomOneRSb}\\ - \dot{L}_t - (- \dot{u} + \tr(L_t)) L_t + r_t + \frac{\varepsilon}{2} \mathbb{I} & = 0, \label{CohomOneRSc} \end{align} where $r_t = g_t \circ \Ric_t$ is the Ricci endomorphism, i.e. $g_t(r_t(X),Y)=\Ric_t(X,Y)$ for all $X,Y \in TP.$ Conversely, the above system induces a gradient Ricci soliton on $I \times P$ provided that the metric $g_t$ is defined via $\dot{g}_t = 2 g_t L_t.$ The special case of constant $u$ recovers the cohomogeneity one Einstein equations.
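As a simple illustration of these equations (recorded only as a consistency check and not needed in the sequel), consider the rotationally symmetric case $P = S^n$ with $g_t = f(t)^2 g_{S^n},$ where $g_{S^n}$ denotes the round metric of curvature $1.$ Then $L_t = \frac{\dot{f}}{f} \mathbb{I}$ and $r_t = \frac{n-1}{f^2} \mathbb{I},$ and equations \eqref{CohomOneRSb} and \eqref{CohomOneRSc} reduce to \begin{equation*} -n \frac{\ddot{f}}{f} + \ddot{u} + \frac{\varepsilon}{2} = 0, \qquad - \frac{\ddot{f}}{f} + (n-1) \frac{1 - \dot{f}^2}{f^2} + \dot{u} \frac{\dot{f}}{f} + \frac{\varepsilon}{2} = 0, \end{equation*} which are, up to conventions, the equations underlying the rotationally symmetric examples such as the Bryant soliton mentioned above.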
From now on, for simplicity, the $t$-dependence may not be stated explicitly.
It is an immediate consequence of \eqref{CohomOneRSb} that the mean curvature with respect to the volume element $e^{-u}d \operatorname{Vol}_g$ is a Lyapunov function if $\varepsilon \leq 0.$
\begin{proposition} Fix $\varepsilon \leq 0.$ Then the generalised mean curvature $-\dot{u} + \tr(L)$ is monotonically decreasing along the flow of the cohomogeneity one Ricci soliton equation. \end{proposition} If the Ricci soliton metric is at least $C^3$-regular, then the second Bianchi identity implies that the {\em conservation law} \begin{equation} \ddot{u} + (-\dot{u}+\tr(L)) \dot{u} = C+ \varepsilon u \label{GeneralConservationLaw} \end{equation} has to be satisfied for some constant $C \in \mathbb{R}.$ Using the equations \eqref{CohomOneRSb} and \eqref{CohomOneRSc} it can be reformulated as \begin{equation} \tr (r) + \tr ( L^2 ) - \left( - \dot{u} + \tr \left( L \right) \right) ^2 + (n-1) \frac{\varepsilon}{2} = C +\varepsilon u. \label{ReformulatedGeneralConsLaw} \end{equation} Recall that the scalar curvature $R$ of a cohomogeneity one Riemannian manifold $(M^{n+1},g)$ is given by
$R= \tr(r) - \tr(L^2)- \tr(L)^2 - 2 \tr(\dot{L}).$ Hence it follows with \eqref{CohomOneRSc} that the conservation law \eqref{ReformulatedGeneralConsLaw} is just the cohomogeneity one version of Hamilton's \cite{HamiltonSingularites} general identity $R + | \nabla u |^2 + \varepsilon u = \overline{C}$ for gradient Ricci solitons (where $\overline{C}= - C -\frac{n+1}{2} \varepsilon$). This also provides a formula for the scalar curvature in terms of the soliton potential: \begin{equation} R = - C - \varepsilon u - \dot{u}^2 - (n+1) \frac{\varepsilon}{2}. \label{ScalarCurvatureRicciSoliton} \end{equation}
\subsection{Ricci solitons with a singular orbit} \label{MetricWithASingularOrbit} From now on, assume that there is a singular orbit $Q = G/H$ at $t=0.$ That is, the orbit at $t=0$ is of dimension strictly less than the dimension of the principal orbit, and let $H=G_{\gamma(0)}$ denote its isotropy group.
Building up on an idea of Back \cite{BackLocalTheoryofEquiv}, see also \cite{EWInitialValueEinstein}, Dancer-Wang \cite{DWCohomOneSolitons} have shown that in the presence of a singular orbit, equation \eqref{CohomOneRSc} implies \eqref{CohomOneRSa} automatically, provided that the metric is at least $C^2$-regular and the soliton potential is of class $C^3.$ Moreover, if in this case the conservation law \eqref{GeneralConservationLaw} is satisfied, then equation \eqref{CohomOneRSb} holds as well. Conversely, any trajectory of the Ricci soliton equations \eqref{CohomOneRSb}, \eqref{CohomOneRSc} that describes a $C^3$-regular metric with a singular orbit has to satisfy the conservation law \eqref{ReformulatedGeneralConsLaw}.
The initial value problem for gradient cohomogeneity one Ricci solitons has been considered by Buzano \cite{BuzanoInitialValueSolitons}. Extending Eschenburg-Wang's work \cite{EWInitialValueEinstein} on the Einstein case, it is shown there that, under a simplifying technical assumption, the initial value problem can be solved close to a singular orbit regardless of whether the soliton is shrinking, steady or expanding. However, the solution may not be unique. For a precise statement, see theorem \ref{QEMInitialValueTheorem}.
Notice that $u(0)=0$ can be assumed, as the Ricci soliton equation is invariant under changing the potential by an additive constant. Furthermore, the existence of a singular orbit at $t=0$ imposes the smoothness condition $\dot{u}(0)=0$ on the soliton potential $u.$ If $d_S$ denotes the dimension of the collapsing sphere at the singular orbit, then the trace of the shape operator grows like $\tr(L) = \frac{d_S}{t} + O(t)$ as $t \to 0.$ Therefore the conservation law \eqref{GeneralConservationLaw} implies $\ddot{u}(0)= \frac{C}{d_S + 1}.$ To summarize: \begin{equation} u(0)=0, \ \ \ \dot{u}(0)=0, \ \ \ \ddot{u}(0)=\frac{C}{d_S +1}. \label{InitialConditionsPotentialFunction} \end{equation}
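For the reader's convenience, the last of these identities follows by letting $t \to 0$ in \eqref{GeneralConservationLaw}: since $u(0) = \dot{u}(0) = 0,$ we have $\dot{u}(t) = \ddot{u}(0) t + o(t),$ and together with $\tr(L) = \frac{d_S}{t} + O(t)$ this yields \begin{equation*} \left( - \dot{u} + \tr(L) \right) \dot{u} = \left( \frac{d_S}{t} + O(t) \right) \left( \ddot{u}(0) t + o(t) \right) \to d_S \, \ddot{u}(0) \end{equation*} as $t \to 0,$ so that $\ddot{u}(0) + d_S \ddot{u}(0) = C,$ as claimed.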
The existence of a singular orbit has consequences for the behaviour of the soliton potential. Proposition \ref{PotentialFunctionOfExpandingRSAlongCohomOneFlow} below follows from \cite[Propositions 2.3 and 2.4]{BDWSteadySolitons} and \cite[Proposition 1.11]{BDGWExpandingSolitons}. It should be emphasised that the properties hold along the flow of the Ricci soliton equation and completeness of the metric is not required.
\begin{proposition} Consider any Ricci soliton trajectory with $\varepsilon \geq 0$ and $C<0$ in \eqref{InitialConditionsPotentialFunction} that corresponds to a cohomogeneity one manifold of dimension $n+1$ with a singular orbit at $t=0.$ Then, for $t > 0$ and as long as the solution exists, the soliton potential satisfies $u(t)<0$ and $\dot{u}(t)<0,$ and moreover $\ddot{u}(t)<0$ if $\varepsilon >0,$ or if $\varepsilon = 0$ and $L_t \neq 0.$
Furthermore, if $\varepsilon = 0$ and $C \leq 0,$ there holds $\tr(L_t) \leq \frac{n}{t}$ for $t>0$ and as long as the solution exists. \label{PotentialFunctionOfExpandingRSAlongCohomOneFlow} \end{proposition}
\begin{remarkroman} The quantity $\frac{\tr(L)}{-\dot{u}+\tr(L)}$ will appear frequently in later calculations. It is useful to note that it satisfies the differential equation \begin{equation*} \frac{d}{dt} \frac{\tr(L)}{-\dot{u}+\tr(L)} = \frac{1}{-\dot{u}+\tr(L)} \left\lbrace \left( \frac{\tr(L)}{-\dot{u}+\tr(L)} -1 \right)\left( \tr(L^2) - \frac{\varepsilon}{2} \right) + \ddot{u} \right\rbrace. \end{equation*} In particular, in the steady case, proposition \ref{PotentialFunctionOfExpandingRSAlongCohomOneFlow} shows that $\frac{\tr(L)}{-\dot{u}+\tr(L)}$ is monotonically decreasing as long as $-\dot{u}+\tr(L) >0.$ According to proposition \ref{CompleteSteadyRSAsymptotics} below, this is always true if the metric corresponds to a complete steady Ricci soliton. In this case, moreover, it follows that $\frac{\tr(L)}{-\dot{u}+\tr(L)} \to 0$ as $t \to \infty.$ \label{RemarkEvolutionOftrLOverGeneralisedMeanCurvature} \end{remarkroman}
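For completeness, the differential equation in the preceding remark can be verified as follows. Comparing the formula $R= \tr(r) - \tr(L^2)- \tr(L)^2 - 2 \tr(\dot{L})$ and \eqref{ScalarCurvatureRicciSoliton} with the conservation laws \eqref{GeneralConservationLaw} and \eqref{ReformulatedGeneralConsLaw} gives $\tr(\dot{L}) = \ddot{u} - \tr(L^2) + \frac{\varepsilon}{2},$ and hence $\frac{d}{dt} \left( -\dot{u}+\tr(L) \right) = \frac{\varepsilon}{2} - \tr(L^2).$ The quotient rule then yields \begin{align*} \frac{d}{dt} \frac{\tr(L)}{-\dot{u}+\tr(L)} & = \frac{\tr(\dot{L})}{-\dot{u}+\tr(L)} - \frac{\tr(L)}{\left( -\dot{u}+\tr(L) \right)^2} \left( \frac{\varepsilon}{2} - \tr(L^2) \right) \\ & = \frac{1}{-\dot{u}+\tr(L)} \left\lbrace \left( \frac{\tr(L)}{-\dot{u}+\tr(L)} -1 \right) \left( \tr(L^2) - \frac{\varepsilon}{2} \right) + \ddot{u} \right\rbrace. \end{align*}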
\subsection{Consequences of completeness} \label{SectionConsequencesOfCompleteness} If the solution corresponds to a non-trivial {\em complete} Ricci soliton metric, further restrictions on the asymptotics of the soliton potential and the metric are known.
In the steady case, according to a result of Chen \cite{ChenStrongUniquenessRF}, the ambient scalar curvature of a complete steady Ricci soliton satisfies $R \geq 0$ with equality if and only if the metric is Ricci flat. Then \eqref{ScalarCurvatureRicciSoliton} implies that $C \leq 0$ is a necessary condition for completeness and that $C=0$ precisely corresponds to the Ricci flat case. Munteanu-Sesum \cite{MSgradientRicciSolitons} have shown that non-trivial complete steady Ricci solitons have at least linear volume growth, and Buzano-Dancer-Wang used this to show in \cite[Proposition 2.4 and Corollary 2.6]{BDWSteadySolitons}: \begin{proposition} Along any trajectory which corresponds to a non-trivial {\em complete} steady cohomogeneity one Ricci soliton of dimension $n+1$ with a singular orbit at $t=0$ and integrability constant $C<0,$ the estimates \begin{align*} 0 < \tr(L) \leq \frac{n}{t}
\ \text{ and } \ 0 < - \dot{u} \tr(L) < R < 2 \sqrt{-C} \frac{n}{t} + \frac{n^2}{t^2} \end{align*} hold for $t > 0$ and the soliton potential satisfies \begin{align*} -\dot{u}(t) \to \sqrt{-C} \ \text{ and } \ \ddot{u}(t) \to 0 \end{align*} as $t \to \infty.$ \label{CompleteSteadyRSAsymptotics} \end{proposition}
In the case of expanding Ricci solitons, a similar result of Chen \cite{ChenStrongUniquenessRF} implies that the scalar curvature $R$ of a non-trivial, complete expanding Ricci soliton satisfies $R > - \frac{\varepsilon}{2}(n+1).$ It follows from \eqref{ScalarCurvatureRicciSoliton} that $0 \geq - \dot{u} ^2 > C + \varepsilon u$ holds on any complete expanding Ricci soliton. The smoothness condition \eqref{InitialConditionsPotentialFunction} at the singular orbit therefore requires $C<0$ as a {\em necessary} condition to construct non-trivial, complete expanding Ricci solitons. Conversely, Einstein metrics with negative scalar curvature correspond to trajectories with $C=0.$
Once the Ricci soliton is known to be complete, the results of Buzano-Dancer-Gallaugher-Wang \cite{BDGWExpandingSolitons} apply: any non-trivial, complete, gradient expanding Ricci soliton has at least logarithmic volume growth. This has consequences for the asymptotic behaviour of the soliton, see \cite[Equation (1.10) and Proposition 1.18]{BDGWExpandingSolitons}: there exist constants $a_0, a_1 >0$ and a time $t_0>0$ such that for all $t>t_0$ \begin{equation}
| \tr(L_t) |< \sqrt{\frac{n}{2} \varepsilon} \ \text{ and } \ a_1 t + a_0 < - \dot{u}(t) < \frac{\varepsilon}{2} t + \sqrt{-C} \label{GeneralAsymptoticsExpandingRS} \end{equation} i.e. $- \dot{u}$ grows approximately linearly for $t$ large enough.
\subsection{The B\"ohm functional} \label{BohmFunctionalSection}
B\"ohm \cite{BohmNonCompactEinstein} introduced the functional $\mathscr{F}_0$ to the study of Einstein manifolds of cohomogeneity one. Subsequently it was considered by Dancer-Wang and their collaborators Buzano, Gallaugher and Hall in the context of cohomogeneity one Ricci solitons \cite{BDWSteadySolitons, BDGWExpandingSolitons, DHWShrinkingSolitons}. The significance of $\mathscr{F}_0$ lies in the fact that it is monotonic under mild assumptions.
To define it, let $v(t) = \sqrt{\det g_t}$ denote the relative volume of the principal orbits and let $L^{(0)} = L - \frac{1}{n}\tr(L) \mathbb{I}$ denote the trace-free part of the shape operator. Then the B\"ohm functional is given by \begin{equation} \mathscr{F}_0= v ^{\frac{2}{n}} \left( \tr(r_t) + \tr(( L^{(0)})^2 )\right). \label{BohmFunctional} \end{equation}
The following proposition is due to Dancer-Hall-Wang \cite[Proposition 2.17]{DHWShrinkingSolitons}.
\begin{proposition} Along the flow of a $C^3$-regular cohomogeneity one gradient Ricci soliton the B\"ohm functional $\mathscr{F}_0$ satisfies \begin{equation} \frac{d}{dt} \mathscr{F}_0 = - 2 v^{\frac{2}{n}} \tr((L^{(0)})^2) \left( -\dot{u} + \frac{n-1}{n} \tr(L) \right). \label{DerivativeOfBohmFunctional} \end{equation} \label{PropositionBohmFunctional} \end{proposition}
\begin{remarkroman} The $C^3$-regularity condition guarantees that the conservation law \eqref{GeneralConservationLaw} is satisfied. On the other hand the existence of a singular orbit along the trajectory is not required to prove \eqref{DerivativeOfBohmFunctional}. \end{remarkroman}
\section{New Examples of Ricci solitons} \label{SectionNewSolitons}
\subsection{The geometric set-up} \label{SectionGeometricSetUp}
Let $(M^{n+1},g)$ be a Riemannian manifold and suppose that $G$ is a compact connected Lie group which acts isometrically on $(M,g)$. Assume that the orbit space is a half open interval and let $K \subset H$ denote the isotropy groups of the principal and the singular orbit, respectively. It follows that $M$ is diffeomorphic to the open disc bundle $G \times_H D^{d_S+1} \to G/H,$ where $D^{d_S+1}$ denotes the normal disc to the singular orbit $G/H$ and $S^{d_S} = H/K$ is the collapsing sphere. Conversely, let $G$ be a compact connected Lie group and let $K \subset H$ be closed subgroups such that $H /K$ is a sphere. Then $G \times_H \mathbb{R}^{d_S+1}$ is a cohomogeneity one manifold with principal orbit $G/K.$ Suppose that the non-principal orbit $G/H$ is singular, i.e. of dimension strictly less than that of $G/K.$
Choose a bi-invariant metric $b$ on $G$ which induces the metric of constant curvature $1$ on $H/K.$ The {\em two summands case} assumes that the space of $G$-invariant metrics on the principal orbit is two dimensional: Let $\mathfrak{g}=\mathfrak{k} \oplus \mathfrak{p}$ be an $\Ad(K)$-invariant decomposition of the Lie algebra of $G$ and suppose furthermore that $\mathfrak{p}$ decomposes into two inequivalent, $b$-orthogonal, irreducible $K$-modules, $\mathfrak{p} = \mathfrak{p}_1 \oplus \mathfrak{p}_2.$ In fact, $\mathfrak{p}_1$ can be identified with the tangent space to the collapsing sphere $S^{d_S} = H /K$ and $\mathfrak{p}_2$ with the tangent space of the singular orbit $Q=G/H.$ Let $g_S = b_{|\mathfrak{p}_1}$ and $g_Q = b_{|\mathfrak{p}_2}$ denote the induced metrics. Then, away from the singular orbit $Q,$ the metric on $M$ is given by \begin{equation} g_{M \setminus Q} = dt^2 +f_1(t)^2 g_S + f_2(t)^2 g_Q \label{TwoSummandsMetric} \end{equation} and the shape operator of the principal orbit takes the form \begin{align*} L_t=\left( \frac{\dot{f}_1}{f_1} \mathbb{I}_{d_1}, \frac{\dot{f}_2}{f_2} \mathbb{I}_{d_2} \right), \end{align*} where $d_1=d_S$ is the dimension of the collapsing sphere and $d_2$ is the dimension of the singular orbit. Furthermore, it follows from the theory of Riemannian submersions and the O'Neill calculus, cf. \cite{BohmInhomEinstein}, that the Ricci endomorphism takes the form \begin{align} r_t = \left( \left\lbrace \frac{A_1}{d_1} \frac{1}{f_1^2} + \frac{A_3}{d_1} \frac{f_1^2}{f_2^4} \right\rbrace \mathbb{I}_{d_1},
\left\lbrace \frac{A_2}{d_2} \frac{1}{f_2^2} - \frac{2 A_3}{d_2} \frac{f_1^2}{f_2^4} \right\rbrace \mathbb{I}_{d_2} \right). \label{RicciEndomorphismTwoSummands} \end{align}
Here the constants $A_i \geq 0$ are defined as follows: $A_1 = d_1 (d_1 -1)$, $A_2 = d_2 \Ric^Q,$ where $\Ric^Q$ is the Einstein constant of the isotropy irreducible space $(Q,g_Q),$ and $A_3 = d_2 || A ||^2$, where $ || A || \geq 0$ appears naturally in the theory of Riemannian submersions, cf. \cite{BohmInhomEinstein}: Fix the background metric $g_P=g_S + g_Q$ on the principal orbit $P$ and let $\nabla^{g_P}$ be the corresponding Levi-Civita connection. If $H_1, \ldots, H_{d_2}$ is an orthonormal basis of horizontal vector fields with respect to the Riemannian submersion $(G/K,g_P) \to (G/H,g_Q)$, then $|| A ||^2 = \sum_{i=1}^{d_2} g_S( (\nabla_{H_1}^{g_P} H_i)_{|v}, (\nabla_{H_1}^{g_P} H_i)_{|v} )$ is the norm of an O'Neill tensor associated to the above Riemannian submersion, where $( \cdot )_{|v}$ denotes the projection onto the tangent space of the fibre $S^{d_1}=S^{d_S}.$
Warped product metrics with two homogeneous summands provide examples with $||A||=0.$ Examples with $||A|| > 0$ are given by the total spaces of non-trivial disc bundles which are induced by the Hopf fibrations, cf. \cite{BesseEinstein}. The following table, which lists the corresponding group diagrams and associated constants, is taken from \cite[Table 1]{BohmInhomEinstein}.
\begin{table}[!ht]
$\begin{array}{l|l|l|l|l} \text{} & \mathbb{C}P^{m+1} & \mathbb{H}P^{m+1} & F^{m+1} & CaP^2 \\ \hline G & U(m+1) & Sp(1) \times Sp(m+1) & Sp(m+1) & Spin(9) \\ H & U(1) \times U(m) & Sp(1) \times Sp(1) \times Sp(m) & Sp(1) \times Sp(m) & Spin(8) \\ K & U(m) & Sp(1) \times Sp(m) & U(1) \times Sp(m) & Spin(7) \\ d_1 & 1 & 3 & 2 & 7 \\ d_2 & 2m & 4m & 4m & 8 \\
||A||^2 & 1 & 3 & 8 & 7 \\ \Ric^Q & 2m+2 & 4m+8 & 4m+8 & 28 \end{array}$ \caption{Group diagrams associated to Hopf fibrations\label{HopfFibrationsTable}} \end{table}
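A small illustrative script (it is not part of any construction or proof, and the helper names are ad hoc) passes from table \ref{HopfFibrationsTable} to the constants of the two summands system via the definitions $A_1 = d_1(d_1-1),$ $A_2 = d_2 \Ric^Q$ and $A_3 = d_2 ||A||^2$ from above.
\begin{verbatim}
# Illustration only: evaluate the two summands constants for Table 1, here m = 1.
m = 1
table = {
    "CP^{m+1}": dict(d1=1, d2=2 * m, A_norm_sq=1, ricQ=2 * m + 2),
    "HP^{m+1}": dict(d1=3, d2=4 * m, A_norm_sq=3, ricQ=4 * m + 8),
    "F^{m+1}":  dict(d1=2, d2=4 * m, A_norm_sq=8, ricQ=4 * m + 8),
    "CaP^2":    dict(d1=7, d2=8,     A_norm_sq=7, ricQ=28),
}

for name, c in table.items():
    A1 = c["d1"] * (c["d1"] - 1)        # A_1 = d_1 (d_1 - 1)
    A2 = c["d2"] * c["ricQ"]            # A_2 = d_2 * Ric^Q
    A3 = c["d2"] * c["A_norm_sq"]       # A_3 = d_2 * ||A||^2
    print(name, "n =", c["d1"] + c["d2"], "A1 =", A1, "A2 =", A2, "A3 =", A3)
\end{verbatim}
For instance, the $\mathbb{H}P^{m+1}$ column with $m=1$ gives $d_1=3,$ $d_2=4,$ $A_1=6,$ $A_2=48,$ $A_3=12;$ these values are reused in the numerical sketches below.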
The soliton potential $u$ will be assumed to be invariant under the action of $G$, $u=u(t),$ and $u(0)=0$ will be fixed. If $u$ satisfies the smoothness conditions \eqref{InitialConditionsPotentialFunction} and the functions $f_1,$ $f_2$ satisfy \begin{equation} f_1(0)=0, \ \dot{f}_1(0)=1 \ \text{ and } \ f_2(0)= \bar{f} > 0, \ \dot{f}_2(0)=0, \label{SmoothnessMetricTwoSummandsGeometricSetUpSection} \end{equation} then the work of Buzano \cite{BuzanoInitialValueSolitons} implies that there is a unique local solution of the Ricci soliton equations with these initial conditions, and it extends the soliton potential and the metric smoothly over the singular orbit.
\begin{remarkroman} The two summands case is also the set-up for B\"ohm's work \cite{BohmInhomEinstein, BohmNonCompactEinstein} on Einstein manifolds. In fact, the Lyapunov function \eqref{LyapunovForNonTrivialBundles} is motivated by B\"ohm's work; his construction, however, relies on the Poincar\'e-Bendixson theorem. In the Ricci soliton case the extra degree of freedom coming from the soliton potential does not allow a similar reduction of the Ricci soliton equations to a planar ODE, and a new proof is required, see remark \ref{RemarkConvergenceConeSolutions}. Conversely, the methods of section \ref{CompletenessTwoSummands} recover B\"ohm's non-compact Einstein manifolds. \label{RemarkBoehmSetUpAndProofCompleteness} \end{remarkroman}
\subsection{Qualitative ODE analysis} \label{CompletenessTwoSummands}
The Ricci soliton equations for the two summands system can be read off from the discussion in section \ref{SectionGeometricSetUp} and equations \eqref{CohomOneRSb} and \eqref{CohomOneRSc}. However, in this form, the equations become singular at the singular orbit. Therefore, a rescaling will be introduced which smooths the Ricci soliton equation close to the initial value. It was effectively used by Dancer-Wang \cite{DWSteadySolitons} and is motivated by Ivey's work \cite{IveyNewExamplesRS}. Notice that under the coordinate change \begin{align} X_i & = \frac{1}{- \dot u + \tr(L)} \frac{\dot{f}_i}{f_i}, \ \ \ \ Y_i = \frac{1}{- \dot u + \tr(L)} \frac{1}{f_i}, \ \text{for} \ i=1,2, \label{RescaledTwoSummandsVariables} \\ \mathcal{L} & = \frac{1}{- \dot u + \tr(L)}, \ \ \ \ \ \ \frac{d}{ds} = \frac{1}{-\dot{u}+\tr(L)} \frac{d}{dt} \nonumber \end{align} the cohomogeneity one two summands Ricci soliton equations reduce to the ODE system \begin{align} X_1^{'} & = X_1 \left( \sum_{i=1}^2 d_i X_i^2 - \frac{\varepsilon}{2} \mathcal{L}^2 -1 \right) + \frac{A_1}{d_1} Y_1^2+\frac{\varepsilon}{2} \mathcal{L}^2 + \frac{A_3}{d_1} \frac{Y_2^4}{Y_1^2}, \label{RescaledTwoSummandsODE} \\ X_2^{'} & = X_2 \left( \sum_{i=1}^2 d_i X_i^2 - \frac{\varepsilon}{2} \mathcal{L}^2 -1 \right) + \frac{A_2}{d_2} Y_2^2+\frac{\varepsilon}{2} \mathcal{L}^2 - \frac{2 A_3}{d_2} \frac{Y_2^4}{Y_1^2}, \nonumber \\ Y_j^{'} & =Y_j \left( \sum_{i=1}^2 d_i X_i^2 - \frac{\varepsilon}{2} \mathcal{L}^2 - X_j \right), \nonumber \\ \mathcal{L}^{'} & =\mathcal{L} \left( \sum_{i=1}^2 d_i X_i^2 - \frac{\varepsilon}{2} \mathcal{L}^2 \right). \nonumber \end{align} Here and in the following, the $\frac{d}{ds}$ derivative is denoted by a prime $'.$ On the other hand, the $\frac{d}{dt}$ derivative will always correspond to a dot $\dot{}$ .
To establish some basic properties of this ODE system, it will be enough to assume that $d_1, d_2 > 0,$ $A_1, A_2 > 0$ and $A_3 \geq 0.$ However, in the main body of the paper $d_1 > 1$ and $A_1, A_2, A_3 > 0$ will be assumed.
\begin{remarkroman} (a) The case $A_3 = 0$ is already well understood from works on multiple warped products, see \cite{IveyNewExamplesRS}, \cite{GKExpandingRS}, \cite{DWExpandingSolitons, DWSteadySolitons}, \cite{BDGWExpandingSolitons}, \cite{BDWSteadySolitons} and \cite{AngenentKnopfRSConicalSingNonuniqueness}.
(b) The case $d_1=1$ implies $A_1=0$ in geometric applications. In this case Cao-Koiso \cite{CaoSoliton},\cite{KoisoSoliton} and Feldman-Ilmanen-Knopf \cite{FIKSolitons} found explicit solutions to the associated {\em K\"ahler} Ricci solitons equations. {\em Non-K\"ahler} steady and expanding Ricci solitons will be constructed in section \ref{SectionSolitonsFromCircleBundles}. In the steady case these were independently found by Stolarski \cite{StolarskiSteadyRSOnCxLineBundles} and Appleton \cite{AppletonSteadyRS}, who use different techniques. \end{remarkroman}
Notice that time, metric and soliton potential can be recovered from the ODE via \begin{align*} t(s) = t(s_0) + \int_{s_0}^{s} \mathcal{L}( \tau ) d \tau \ \text{ and } \ f_i = \frac{\mathcal{L}}{Y_i}, \ \text{for } \ i=1,2, \ \text{ and } \ \dot{u} = \frac{\sum_{i=1}^2 d_i X_i - 1}{\mathcal{L}}. \end{align*}
In the new coordinate system, the smoothness conditions for the metric in \eqref{SmoothnessMetricTwoSummandsGeometricSetUpSection} and the soliton potential in \eqref{InitialConditionsPotentialFunction} correspond to the stationary point \begin{align} X_1 = Y_1 = \frac{1}{d_1} \ \text{ and } \ X_2 = Y_2 = 0 \ \text{ and } \ \mathcal{L} =0. \label{InitialCriticalPoint} \end{align} Trajectories emanating from \eqref{InitialCriticalPoint} will be parametrised so that \eqref{InitialCriticalPoint} corresponds to $s = - \infty.$
The conservation law \eqref{ReformulatedGeneralConsLaw} takes the form \begin{equation} \sum_{i=1}^2 d_i X_i^2 + \sum_{i=1}^2 A_i Y_i^2 - A_3 \frac{Y_2^4}{Y_1^2} + (n-1)\frac{\varepsilon}{2} \mathcal{L}^2 = 1 + \left( C + \varepsilon u \right) \mathcal{L}^2. \label{GeneralTwoSummandsConsLaw} \end{equation}
Consider the functions \begin{align*} \mathcal{S}_1 & = \sum_{i=1}^2 d_i X_i^2 + \sum_{i=1}^2 A_i Y_i^2 - A_3 \frac{Y_2^4}{Y_1^2} + (n-1)\frac{\varepsilon}{2} \mathcal{L}^2 -1, \\ \mathcal{S}_2 & = \sum_{i=1}^2 d_i X_i -1. \end{align*}
Notice that $\mathcal{S}_1$ occurs in the conservation law and $\mathcal{S}_2 = \frac{\dot{u}}{- \dot{u} + \tr(L)}$ encodes the derivative of the soliton potential in the rescaled coordinates.
Fix $\varepsilon \geq 0$ and recall from section \ref{SectionConsequencesOfCompleteness} that $C \leq 0$ is a necessary condition to obtain trajectories that correspond to complete steady or expanding Ricci solitons and that $C=0$ is the Einstein case. Due to the initial conditions \eqref{InitialConditionsPotentialFunction} and proposition \ref{PotentialFunctionOfExpandingRSAlongCohomOneFlow}, the soliton potential satisfies $u, \dot{u} \leq 0$ if $C \leq 0,$ and away from the singular orbit equality can only occur in the Einstein case.
Therefore, any trajectory with $\varepsilon \geq 0$ and $C \leq 0$ satisfies $\mathcal{S}_1, \mathcal{S}_2 \leq 0.$ Equality occurs at the initial stationary point \eqref{InitialCriticalPoint} and then {\em Einstein} trajectories lie in the locus \begin{equation} \left\{ \mathcal{S}_1 = 0\right\} \cap \left\{ \mathcal{S}_2 = 0\right\} \label{EinsteinLocus} \end{equation} whereas trajectories of {\em complete non-trivial} Ricci solitons are contained in the locus \begin{equation} \left\{ \mathcal{S}_1 < 0\right\} \cap \left\{ \mathcal{S}_2 < 0\right\}. \label{SolitonLocus} \end{equation}
Conversely, trajectories in these loci correspond to Einstein metrics and non-trivial Ricci solitons.
The invariance of the above loci for $\varepsilon \geq 0$ follows from the Ricci soliton ODE as a direct calculation verifies \begin{align*} \frac{1}{2} \frac{d}{ds} \mathcal{S}_1 & = \left( \sum_{i=1}^2 d_i X_i^2 - \frac{\varepsilon}{2} \mathcal{L}^2 \right) \mathcal{S}_1 + \frac{\varepsilon}{2} \mathcal{L}^2 \cdot \mathcal{S}_2 , \\
\frac{d}{ds} \mathcal{S}_2 & = \mathcal{S}_1 + \left( \sum_{i=1}^2 d_i X_i^2 - \frac{\varepsilon}{2} \mathcal{L}^2 - 1 \right) \mathcal{S}_2. \end{align*}
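This direct calculation can also be double-checked symbolically; the following sketch (an illustration only, any computer algebra system would do, and all names are ad hoc) verifies the two identities above with \texttt{sympy}.
\begin{verbatim}
import sympy as sp

X1, X2, Y1, Y2, L = sp.symbols('X1 X2 Y1 Y2 L', positive=True)
d1, d2, A1, A2, A3, eps = sp.symbols('d1 d2 A1 A2 A3 eps', positive=True)
n = d1 + d2
Sig = d1 * X1**2 + d2 * X2**2

# right hand sides of the rescaled two summands Ricci soliton ODE
dX1 = X1 * (Sig - eps * L**2 / 2 - 1) + A1 / d1 * Y1**2 + eps * L**2 / 2 \
      + A3 / d1 * Y2**4 / Y1**2
dX2 = X2 * (Sig - eps * L**2 / 2 - 1) + A2 / d2 * Y2**2 + eps * L**2 / 2 \
      - 2 * A3 / d2 * Y2**4 / Y1**2
dY1 = Y1 * (Sig - eps * L**2 / 2 - X1)
dY2 = Y2 * (Sig - eps * L**2 / 2 - X2)
dL = L * (Sig - eps * L**2 / 2)

S1 = Sig + A1 * Y1**2 + A2 * Y2**2 - A3 * Y2**4 / Y1**2 + (n - 1) * eps * L**2 / 2 - 1
S2 = d1 * X1 + d2 * X2 - 1

def flow(F):
    """Derivative of F along the flow of the rescaled ODE."""
    pairs = [(X1, dX1), (X2, dX2), (Y1, dY1), (Y2, dY2), (L, dL)]
    return sum(sp.diff(F, v) * dv for v, dv in pairs)

print(sp.simplify(flow(S1) / 2 - ((Sig - eps * L**2 / 2) * S1 + eps * L**2 / 2 * S2)))  # 0
print(sp.simplify(flow(S2) - (S1 + (Sig - eps * L**2 / 2 - 1) * S2)))                   # 0
\end{verbatim}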
Now the existence of trajectories which lie in one of the above loci and in the unstable manifold of the critical point \eqref{InitialCriticalPoint} will be discussed. Different trajectories will correspond to non-homothetic Einstein or Ricci soliton metrics.
The linearisation of the Ricci soliton ODE at the initial stationary point \eqref{InitialCriticalPoint} is given by \begin{equation*} \begin{pmatrix} \frac{3}{d_1}-1 & 0 & \frac{2(d_1-1)}{d_1} & 0 & 0 \\ 0 & \frac{1}{d_1}-1 & 0 & 0 & 0 \\ \frac{1}{d_1} & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & \frac{1}{d_1} & 0 \\ 0 & 0 & 0 & 0 & \frac{1}{d_1} \end{pmatrix}. \end{equation*}
The corresponding eigenvalues are hence $\frac{2}{d_1},$ and both $\frac{1}{d_1}-1$ and $\frac{1}{d_1}$ appear twice. In particular, the critical point is {\em hyperbolic} if $d_1 >1.$ The corresponding eigenspaces are given by $E_{\frac{2}{d_1}} = \operatorname{span} \left\lbrace (2,0,1,0,0) \right\rbrace ,$ $E_{\frac{1}{d_1}-1}= \operatorname{span} \left\lbrace (0,1,0,0,0), (d_1-1,0,-1,0,0) \right\rbrace$ and $E_{\frac{1}{d_1}} = \operatorname{span} \left\lbrace (0,0,0,1,0), (0,0,0,0,1) \right\rbrace.$ Notice that the stationary point \eqref{InitialCriticalPoint} lies in the set $\left\{ \mathcal{S}_1 = 0\right\} \cap \left\{ \mathcal{S}_2 = 0\right\}.$ Furthermore, $\left\{ \mathcal{S}_1 = 0\right\}$ is a submanifold of $\mathbb{R}^5$ if $Y_1 \neq 0$ and its tangent space at \eqref{InitialCriticalPoint} is $\operatorname{span} \left\lbrace (1,0,d_1-1,0,0) \right\rbrace^{\perp}.$ Similarly, $\left\{ \mathcal{S}_2 = 0\right\}$ is a submanifold with tangent space $\operatorname{span} \left\lbrace (d_1,d_2,0,0,0) \right\rbrace^{\perp}$ at \eqref{InitialCriticalPoint}. Notice that both tangent spaces contain $E_{\frac{1}{d_1}}$ but not $E_{\frac{2}{d_1}}$ and that $E_{\frac{1}{d_1}} \oplus E_{\frac{2}{d_1}}$ is the tangent space to the unstable manifold.
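The stated spectrum can likewise be recovered symbolically; the sketch below (an illustration only) computes the Jacobian of \eqref{RescaledTwoSummandsODE} at \eqref{InitialCriticalPoint}, with the geometric value $A_1 = d_1(d_1-1)$ inserted as in the matrix above.
\begin{verbatim}
import sympy as sp

X1, X2, Y1, Y2, L = sp.symbols('X1 X2 Y1 Y2 L')
d1, d2, A2, A3, eps = sp.symbols('d1 d2 A2 A3 eps', positive=True)
A1 = d1 * (d1 - 1)                       # geometric value of A_1
Sig = d1 * X1**2 + d2 * X2**2

rhs = sp.Matrix([
    X1 * (Sig - eps * L**2 / 2 - 1) + A1 / d1 * Y1**2 + eps * L**2 / 2
        + A3 / d1 * Y2**4 / Y1**2,
    X2 * (Sig - eps * L**2 / 2 - 1) + A2 / d2 * Y2**2 + eps * L**2 / 2
        - 2 * A3 / d2 * Y2**4 / Y1**2,
    Y1 * (Sig - eps * L**2 / 2 - X1),
    Y2 * (Sig - eps * L**2 / 2 - X2),
    L * (Sig - eps * L**2 / 2),
])
J = rhs.jacobian([X1, X2, Y1, Y2, L])
J0 = sp.simplify(J.subs({X1: 1 / d1, Y1: 1 / d1, X2: 0, Y2: 0, L: 0}))
print(J0)               # reproduces the matrix displayed above
print(J0.eigenvals())   # 2/d1 once, 1/d1 - 1 twice, 1/d1 twice
\end{verbatim}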
According to the above discussion, trajectories in the unstable manifold of \eqref{InitialCriticalPoint} that either remain in the set $\left\{ \mathcal{S}_1 = 0\right\} \cap \left\{ \mathcal{S}_2 = 0\right\}$ or flow into $\left\{ \mathcal{S}_1 < 0\right\} \cap \left\{ \mathcal{S}_2 < 0\right\}$ need to be considered. Notice, however, that if $\varepsilon = 0$ the ODE for $\mathcal{L}$ decouples. Hence, the soliton system effectively reduces to a system in $X_i, Y_i$ for $i=1,2.$ Counting trajectories with respect to the possibly reduced system then gives the following result.
\begin{proposition} Suppose that $d_1 > 1.$ If $\varepsilon \neq 0,$ then there exists a $1$-parameter family of trajectories lying both in the unstable manifold of \eqref{InitialCriticalPoint} and the Einstein locus \eqref{EinsteinLocus} and a $2$-parameter family of trajectories lying both in the unstable manifold of \eqref{InitialCriticalPoint} and the Ricci soliton locus \eqref{SolitonLocus}.
If $\varepsilon =0,$ then the unstable manifold of \eqref{InitialCriticalPoint} with respect to the reduced two summands ODE in $X_1, X_2$ and $Y_1, Y_2$ contains a unique trajectory lying in the Einstein locus \eqref{EinsteinLocus} and a $1$-parameter family of trajectories lying in the Ricci soliton locus \eqref{SolitonLocus}. These give rise to an (up to scaling) unique Ricci flat metric and a $1$-parameter family of Ricci solitons with soliton potential $u=0$ at the singular orbit. \label{NumberOfParameterFamilies} \end{proposition}
Proposition \ref{NumberOfParameterFamilies} is in agreement with the theory of solutions to the initial value problem for cohomogeneity one Ricci solitons and Einstein metrics developed by Buzano \cite{BuzanoInitialValueSolitons} and Eschenburg-Wang \cite{EWInitialValueEinstein}, respectively. Their methods also carry over to the case $d_1 = 1.$
Notice that the ODE system \eqref{RescaledTwoSummandsODE} and the initial stationary point \eqref{InitialCriticalPoint} are invariant under changing the signs of $Y_2,$ $\mathcal{L}.$ Since $\mathcal{L}^{-1}=-\dot{u} + \tr(L) \to + \infty$ as $t \to 0,$ $\mathcal{L}>0$ will be assumed along the trajectories. The choice $f_2(0)= \bar{f}>0$ in \eqref{SmoothnessMetricTwoSummandsGeometricSetUpSection} implies $f_2(t)>0$ for small $t>0$ and thus $Y_2 > 0$ will be assumed. Recall that $\lim_{s \to - \infty} Y_1(s)=1.$ The ODEs for $Y_1, Y_2, \mathcal{L}$ imply that positivity of the variables is preserved along the flow.
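The qualitative behaviour discussed in the rest of this section can also be explored numerically. The following crude experiment (a sketch only, not used anywhere in the arguments; the constants are the $\mathbb{H}P^{m+1}$ data of table \ref{HopfFibrationsTable} with $m=1,$ while $\varepsilon = 2$ and the size of the perturbation are arbitrary sample choices) integrates \eqref{RescaledTwoSummandsODE} from a small perturbation of \eqref{InitialCriticalPoint} with $Y_2, \mathcal{L}>0$ and records the signs of $X_2$ and $\mathcal{S}_2.$
\begin{verbatim}
from scipy.integrate import solve_ivp

# Crude numerical experiment (illustration only): d1 = 3, d2 = 4, A1 = 6,
# A2 = 48, A3 = 12 (HP^{m+1}, m = 1) and a sample value eps = 2.
d1, d2, A1, A2, A3, eps = 3, 4, 6.0, 48.0, 12.0, 2.0

def rhs(s, v):
    X1, X2, Y1, Y2, L = v
    S = d1 * X1**2 + d2 * X2**2 - eps * L**2 / 2
    return [X1 * (S - 1) + A1 / d1 * Y1**2 + eps * L**2 / 2 + A3 / d1 * Y2**4 / Y1**2,
            X2 * (S - 1) + A2 / d2 * Y2**2 + eps * L**2 / 2 - 2 * A3 / d2 * Y2**4 / Y1**2,
            Y1 * (S - X1), Y2 * (S - X2), L * S]

delta = 1e-4      # small perturbation with Y2, L > 0 and S1, S2 slightly negative
v0 = [1 / d1 - 2 * delta, 0.0, 1 / d1 - delta, delta, delta]
sol = solve_ivp(rhs, (0.0, 35.0), v0, rtol=1e-9, atol=1e-12)

X1, X2, Y1, Y2, L = sol.y
S2 = d1 * X1 + d2 * X2 - 1
print("min X2 =", X2.min(), "  max S2 =", S2.max())
# expected (cf. the lemma and proposition below): X2 stays nonnegative, S2 stays <= 0
\end{verbatim}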
The following lemma shows a basic dynamical property of the Ricci soliton ODE and sets up the discussion of the long time behaviour.
\begin{lemma} Let $\varepsilon \geq 0$ and consider a trajectory of the two summands Ricci soliton ODE that emanates from \eqref{InitialCriticalPoint} at $s = - \infty$ and enters either \eqref{EinsteinLocus} or \eqref{SolitonLocus}.
Then there holds $X_1 >0$ for all finite $s$ and $X_2$ is positive for sufficiently negative $s.$ Moreover, suppose there is an $s_0 \in \mathbb{R}$ such that $X_2(s_0) <0.$ Then $X_2(s) < 0$ for all $s \geq s_0.$ \label{XVariablesPositiveInitially} \end{lemma} \begin{proof} Recall that $\lim_{s \to - \infty} X_1 = 1/d_1 >0$ and in particular $X_1$ is positive initially. If there is an $s \in \mathbb{R}$ such that $X_1(s) = 0,$ then $X_1^{'}(s)>0.$ By continuity this implies $X_1 > 0$ everywhere.
The conservation law \eqref{GeneralTwoSummandsConsLaw} implies that $\sum_{i=1}^2 d_i X_i^2 -1 \leq A_3 \frac{Y_2^4}{Y_1^2} - \sum_{i=1}^2 A_i Y_i^2 < 0$ close to \eqref{InitialCriticalPoint} as $Y_1 \to \frac{1}{d_1}$ and $Y_2 \to 0.$ Similarly, $\frac{A_2}{d_2} Y_2^2 - \frac{2 A_3}{d_2} \frac{Y_2^4}{Y_1^2} >0$ for sufficiently negative times. If $X_2(s_0) < 0$ in this region, then the ODE \begin{align*} X_2^{'} = X_2 \left( \sum_{i=1}^2 d_i X_i^2 - \frac{\varepsilon}{2} \mathcal{L}^2 -1 \right) + \frac{A_2}{d_2} Y_2^2+\frac{\varepsilon}{2} \mathcal{L}^2 - \frac{2 A_3}{d_2} \frac{Y_2^4}{Y_1^2} \end{align*} implies that $X_2^{'}(s_0) > 0$ as $\varepsilon \geq 0.$ In particular $X_2(s) \leq X_2(s_0) < 0$ for all $s \leq s_0.$ This contradicts $X_2 \to 0$ as $s \to - \infty.$
If the last statement is not true, then there exist $s_{*} < s^{*}$ such that $X_2 < 0$ on $(s_{*}, s^{*})$ and \begin{align*} X_2(s_{*}) & = 0 \ \text{ and } \ X_2^{'}(s_{*}) \leq 0, \\ X_2(s^{*}) & = 0 \ \text{ and } \ X_2^{'}(s^{*}) \geq 0. \end{align*} It follows that $ \frac{A_2}{d_2} Y_2^2(s_{*}) + \frac{\varepsilon}{2} \mathcal{L}^2(s_{*}) - 2 \frac{A_3}{d_2} \frac{Y_2^4}{Y_1^2}(s_{*}) \leq 0$ which is equivalent to \begin{equation*} \frac{A_2}{d_2} \leq \left[ 2 \frac{A_3}{d_2} \left( \frac{Y_2}{Y_1} \right) ^2 - \frac{\varepsilon}{2} \left( \frac{\mathcal{L}}{Y_2} \right) ^2 \right](s_{*}). \end{equation*} Similarly, the second condition implies the reverse inequality at $s^{*}.$ Therefore, \begin{align*} 0 & \leq \left[ 2 \frac{A_3}{d_2} \left( \frac{Y_2}{Y_1} \right) ^2 - \frac{\varepsilon}{2} \left( \frac{\mathcal{L}}{Y_2} \right) ^2 \right](s_{*}) - \left[ 2 \frac{A_3}{d_2} \left( \frac{Y_2}{Y_1} \right) ^2 - \frac{\varepsilon}{2} \left( \frac{\mathcal{L}}{Y_2} \right) ^2 \right](s^{*}) \\
& = \frac{d}{ds} \left[ 2 \frac{A_3}{d_2} \left( \frac{Y_2}{Y_1} \right) ^2 - \frac{\varepsilon}{2} \left( \frac{\mathcal{L}}{Y_2} \right) ^2 \right](\xi) \cdot (s_{*} - s^{*} ) \end{align*} for some $\xi \in (s_{*}, s^{*}).$ On the other hand, observe that \begin{align*} \frac{d}{ds} \frac{Y_2}{Y_1} = \frac{Y_2}{Y_1} (X_1-X_2) \ \text{ and } \ \frac{d}{ds} \frac{\mathcal{L}}{Y_2} = \frac{\mathcal{L}}{Y_2} X_2. \end{align*} Therefore, $X_2( \xi) < 0,$ $\varepsilon \geq 0$ and $s_{*} < s^{*}$ imply \begin{align*} 0 & \leq \frac{d}{ds} \left[ 2 \frac{A_3}{d_2} \left( \frac{Y_2}{Y_1} \right) ^2 - \frac{\varepsilon}{2} \left( \frac{\mathcal{L}}{Y_2} \right) ^2 \right](\xi) \cdot (s_{*} - s^{*} ) \\ & = \ 2 \ \left[ 2 \frac{A_3}{d_2} \left( \frac{Y_2}{Y_1} \right) ^2 (X_1-X_2) - \frac{\varepsilon}{2} \left( \frac{\mathcal{L}}{Y_2} \right) ^2 X_2 \right](\xi) \cdot (s_{*} - s^{*} ) < 0, \end{align*} which is a contradiction. \end{proof}
\begin{remarkroman} In fact, the possibility that $X_2 < 0$ is the only obstruction to long time existence. Geometrically this says that along the trajectory of an incomplete metric the shape operator cannot remain positive definite.
If $A_3=0,$ then $X_2 > 0$ is immediate and the Einstein and Ricci soliton loci \eqref{EinsteinLocus} and \eqref{SolitonLocus}, respectively, are bounded regions in phase space. Completeness of the metric then follows as in propositions \ref{CompletenessEpsZeroTwoSummands} and \ref{CompletenessEpsPosTwoSummands} below. Geometrically the case $A_3=0$ corresponds to the doubly warped product situation which was considered by Ivey \cite{IveyNewExamplesRS}, Gastel-Kronz \cite{GKExpandingRS}, Dancer-Wang \cite{DWExpandingSolitons, DWSteadySolitons} and Angenent-Knopf \cite{AngenentKnopfRSConicalSingNonuniqueness}. \end{remarkroman}
If $A_3>0,$ the ODE for $X_2$ shows that $X_2$ remains positive as long as $\frac{Y_2}{Y_1} < \sqrt{\frac{A_2}{2 A_3}}.$ Therefore, the quotient \begin{equation*} \omega = \frac{Y_2}{Y_1} \end{equation*} plays a central role in the discussion. Observe that $\omega$ satisfies \begin{align} \omega^{'} = \omega (X_1-X_2). \label{DByDsOmega} \end{align} In fact this implies that the Ricci soliton equation is equivalent to an ODE system with polynomial right hand side.
In order to obtain an a priori bound for $\omega,$ fix $d_1>1$ and consider the function \begin{align} \widehat{\mathcal{G}}(\omega) = \frac{A_1}{d_1} \frac{\omega^{2(d_1-1)}}{2(d_1-1)} - \frac{A_2}{d_2} \frac{\omega^{2d_1}}{2d_1} + A_3 \left( \frac{1}{d_1} + \frac{2}{d_2} \right) \frac{\omega^{2(d_1+1)}}{2(d_1+1)}. \label{FunctionGHatTwoSummandsCase} \end{align} Along trajectories of the two summands Ricci soliton ODE there holds \begin{align*} \frac{d}{ds} \widehat{\mathcal{G}}(\omega) = \omega^{2(d_1-1)} \left\{ \frac{A_1}{d_1} - \frac{A_2}{d_2} \omega^2 + A_3 \left( \frac{1}{d_1} + \frac{2}{d_2} \right) \omega^4 \right\} \left( X_1 - X_2 \right) \end{align*} and non-zero roots of $\widehat{\mathcal{G}}$ are of the form \begin{align*} \omega^2 = \frac{1}{2} \frac{A_2}{A_3} \frac{d_1+1}{2d_1+d_2} \left\lbrace 1 \pm \sqrt{1 - 4 \frac{A_1 A_3}{A_2^2} \frac{d_2 (2d_1+d_2)}{(d_1-1)(d_1 + 1)}} \right\rbrace. \end{align*} In particular, there exist two positive roots $0 < \hat{\omega}_1 < \hat{\omega}_2$ if and only if \begin{equation} \widehat{D} = \frac{A_2^2}{d_2^2} - 4 \frac{A_1}{d_1(d_1-1)} \frac{A_3}{d_2} \frac{d_1}{d_1+1} (2d_1 +d_2) > 0. \label{DefinitionDHat} \end{equation} Moreover, in this case, $\hat{\omega}_1^2 < \frac{A_2}{2 A_3}.$ \begin{proposition} Suppose that $d_1 > 1,$ $\widehat{D} >0$ and $\varepsilon \geq 0.$ Then the set \begin{align*} \left\{ \ X_2 > 0 \ \text{ and } \ 0 < \frac{Y_2}{Y_1} < \hat{\omega}_1 \ \right\} \end{align*} contains any trajectory of the two summands Ricci soliton ODE that emanates from \eqref{InitialCriticalPoint} and flows into either \eqref{EinsteinLocus} or \eqref{SolitonLocus}. \label{X2VariablePositive} \end{proposition} \begin{proof} The ODE for $X_2$ shows that $X_2$ remains positive if $\frac{Y_2^2}{Y_1^2} = \omega^2 < \frac{A_2}{2 A_3}.$ Since $\hat{\omega}_1^2 < \frac{A_2}{2 A_3},$ it suffices to show that $\omega < \hat{\omega}_1$ as long as $X_2 > 0.$ Consider the function \begin{align} \mathcal{K} = \frac{1}{2} \omega^{2(d_1-1)} \left( \frac{X_1 - X_2}{Y_1} \right)^2 - \widehat{\mathcal{G}} \left( \omega \right), \label{LyapunovForNonTrivialBundles} \end{align} which was introduced by B\"ohm in the Einstein case \cite{BohmInhomEinstein}. On the set $X_2 >0$ it is a Lyapunov function since \begin{align*} \frac{d}{ds} \mathcal{K} = \omega^{2(d_1-1)} \left( \frac{X_1 - X_2}{Y_1} \right)^2 \left\{ \sum_{i=1}^2 d_i X_i - 1 - (n-1) X_2 \right\} \end{align*} and $\sum_{i=1}^2 d_i X_i - 1 \leq 0$ holds in both loci. Notice that $\lim_{s \to - \infty} \mathcal{K} = 0$ and $\mathcal{K} \geq 0$ if $\frac{Y_2}{Y_1} = \omega = \hat{\omega}_1.$ However, $\mathcal{K}$ is non-increasing and strictly decreasing close to \eqref{InitialCriticalPoint}. This completes the proof. \end{proof}
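As a quick numerical illustration (a sketch only), $\widehat{D}$ and the roots $\hat{\omega}_1,$ $\hat{\omega}_2$ can be evaluated for the $\mathbb{H}P^{m+1}$ data of table \ref{HopfFibrationsTable} with $m=1.$ The quadratic solved below is obtained from $\widehat{\mathcal{G}}(\omega)=0$ by dividing by $\omega^{2(d_1-1)}$ and clearing denominators; it is equivalent to the closed formula for the roots stated above.
\begin{verbatim}
import numpy as np

# HP^{m+1} data of Table 1 with m = 1
d1, d2, A1, A2, A3 = 3, 4, 6.0, 48.0, 12.0

D_hat = (A2 / d2)**2 - 4 * A1 / (d1 * (d1 - 1)) * A3 / d2 * d1 / (d1 + 1) * (2 * d1 + d2)
print("D_hat =", D_hat)                 # 54.0 > 0, so two positive roots exist

# omega^2 solves  A3 (2 d1 + d2)/(d2 (d1 + 1)) x^2 - (A2/d2) x + A1/(d1 - 1) = 0
a = A3 * (2 * d1 + d2) / (d2 * (d1 + 1))
b = -A2 / d2
c = A1 / (d1 - 1)
w2_1, w2_2 = sorted(np.roots([a, b, c]).real)
print("omega_hat_1^2 =", w2_1, " omega_hat_2^2 =", w2_2)
print("omega_hat_1^2 < A2/(2 A3):", w2_1 < A2 / (2 * A3))
\end{verbatim}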
\begin{corollary} Suppose that $d_1 >1,$ $\widehat{D} > 0$ and $\varepsilon \geq 0.$ Then along trajectories emanating from \eqref{InitialCriticalPoint} and flowing into \eqref{EinsteinLocus} or \eqref{SolitonLocus} there holds $X_1,$ $X_2 >0$ for all finite times. Moreover, the variables $X_1, X_2$ and $Y_1, Y_2$ and $\omega$ are bounded, and if $\varepsilon >0$ then $\mathcal{L}$ is bounded too. In particular, the rescaled flow exists for all times. \label{FlowExistsForAllTimes} \end{corollary} \begin{proof} According to lemma \ref{XVariablesPositiveInitially} one has $X_1,$ $X_2 > 0$ initially and $X_1 > 0$ is preserved along the flow. Positivity of $X_2$ follows from proposition \ref{X2VariablePositive} and $X_1,$ $X_2$ remain bounded as $0 \leq d_1 X_1 + d_2 X_2 \leq 1$ due to \eqref{EinsteinLocus} and \eqref{SolitonLocus}.
Then the ODE for $\mathcal{L}$ implies that $\mathcal{L}$ cannot blow up in finite time as $\varepsilon \geq 0.$ By the same argument, this also holds for $Y_1,$ $Y_2.$
Alternatively, it follows from the bound $\frac{Y_2^2}{Y_1^2} < \hat{\omega}_1^2 < \frac{A_2}{2 A_3}$ that $A_2 - A_3 \frac{Y_2^2}{Y_1^2} > \frac{A_2}{2}.$ The Einstein and Ricci soliton loci \eqref{EinsteinLocus} and \eqref{SolitonLocus} are therefore contained in the bounded region $\left\lbrace \ \sum_{i=1}^2 d_i X_i^2 + A_1 Y_1^2 + \frac{A_2}{2} Y_2^2 + (n-1) \frac{\varepsilon}{2} \mathcal{L}^2 \leq 1 \ \right\rbrace.$ By considering $\omega = \frac{Y_2}{Y_1}$ as an independent variable, one obtains an ODE system with polynomial right hand side. Since $\omega < \hat{\omega}_1$ is bounded, standard ODE theory implies that the flow exists for all times. \end{proof}
In order to prove that the corresponding metrics are complete, it suffices to show that $t_{\max} = \infty.$ Recall from the coordinate change that \begin{equation} t(s) = t(s_0) + \int_{s_0}^s \mathcal{L} (\tau) d \tau. \label{TimeRescaling} \end{equation} Therefore it is necessary to estimate the asymptotic behaviour of $\mathcal{L}.$ This needs to be considered separately for the cases $\varepsilon = 0$ and $\varepsilon >0.$
\begin{proposition} Suppose that $d_1>1$ and $\widehat{D}>0.$ Then the corresponding steady Ricci soliton and Ricci flat metrics are complete. \label{CompletenessEpsZeroTwoSummands} \end{proposition} \begin{proof} A special feature of the case $\varepsilon = 0$ is that $\mathcal{L} = \frac{1}{-\dot{u}+\tr(L)}$ is in fact a {\em Lyapunov} function, since $\mathcal{L}^{'} = \mathcal{L} \sum_{i=1}^2 d_i X_i^2 \geq 0.$ As $\mathcal{L}$ is positive along the flow and monotonically increasing, it is bounded away from zero on $[s_0, \infty)$ for any $s_0 \in \mathbb{R}.$ The time rescaling \eqref{TimeRescaling} then shows that $t \to \infty$ as $s \to \infty,$ i.e. the metrics are complete. \end{proof}
\begin{remarkroman} The Ricci flat metrics in proposition \ref{CompletenessEpsZeroTwoSummands} have already been constructed by B\"ohm \cite{BohmNonCompactEinstein} by different means, see remark \ref{RemarkConvergenceConeSolutions}. \end{remarkroman}
The cases of expanding Ricci solitons and Einstein metrics with negative scalar curvature correspond to $\varepsilon >0.$ It will be sufficient to have an upper bound on $\mathcal{L}$ to prove completeness of the metric.
\begin{lemma} Let $d_1>1,$ $\widehat{D} > 0$ and $\varepsilon > 0.$ Then along trajectories that emanate from \eqref{InitialCriticalPoint} and flow into \eqref{EinsteinLocus} or \eqref{SolitonLocus} there holds \begin{equation*} 0 < \frac{\varepsilon}{2} \mathcal{L}^2 \leq \max \left\lbrace \frac{1}{d_1}, \frac{1}{d_2} \right\rbrace. \end{equation*} Moreover, in the Einstein case, given $s_0 \in \mathbb{R}$ there holds \begin{equation*} \frac{\varepsilon}{2} \mathcal{L}^2(s) \geq \min \left\lbrace \frac{\varepsilon}{2} \mathcal{L}^2(s_0), \frac{1}{n} \right\rbrace \end{equation*} for all $s \geq s_0.$ In particular, $\mathcal{L}(s)$ is bounded away from zero for all $s \geq s_0.$ \label{LemmaLBoundedAwayFromZero} \end{lemma} \begin{proof} Notice that $0 \leq \sum_{i=1}^2 d_i X_i^2 \leq \max \left\lbrace \frac{1}{d_1}, \frac{1}{d_2} \right\rbrace $ on $X_1, X_2 \geq 0$ and $\sum_{i=1}^2 d_i X_i \leq 1.$ Therefore, if there is an $s_0$ such that $\frac{\varepsilon}{2} \mathcal{L}^2(s_0) > \max \left\lbrace \frac{1}{d_1}, \frac{1}{d_2} \right\rbrace $ then $\mathcal{L}^{'}(s_0) < 0.$ This yields $\mathcal{L}(s) \geq \mathcal{L}(s_0)$ for all $s \leq s_0,$ which contradicts $\lim_{s \to - \infty} \mathcal{L} =0.$
To prove the second statement, suppose that $\frac{\varepsilon}{2} \mathcal{L}^2(s_0) < \frac{1}{n}.$ Since $\sum_{i=1}^2 d_i X_i = 1$ in the Einstein locus, one has $\sum_{i=1}^2 d_i X_i^2 \geq \frac{1}{n}$ and therefore $\mathcal{L}^{'}(s_0) > 0.$ Hence, $\frac{\varepsilon}{2}\mathcal{L}^2$ is monotonically increasing whenever it is less than $\frac{1}{n}.$ \end{proof}
\begin{corollary} Suppose that $d_1>1$ and $\widehat{D}>0.$ Then the corresponding expanding Ricci solitons and Einstein metrics with negative scalar curvature are complete. \label{CompletenessEpsPosTwoSummands} \end{corollary} \begin{proof} Suppose for contradiction that $t_{\max} < \infty.$ Due to \eqref{TimeRescaling} this is equivalent to saying that $\Vert \mathcal{L} \Vert_{L^1(0, \infty)} < \infty.$ However, since $\mathcal{L}$ is bounded due to lemma \ref{LemmaLBoundedAwayFromZero}, this implies $\mathcal{L} \in L^2(0, \infty).$ Hence the ODE for $\mathcal{L}$ yields $\mathcal{L}(s) \geq \mathcal{L}(0) \exp \left( - \frac{\varepsilon}{2} \Vert \mathcal{L} \Vert_{L^2(0, \infty)}^2 \right) > 0$ and $\mathcal{L}$ is bounded away from zero for $s \geq 0.$ However, this contradicts $\mathcal{L} \in L^1(0, \infty).$ \end{proof}
\begin{remarkroman} The Einstein metrics of negative scalar curvature in corollary \ref{CompletenessEpsPosTwoSummands} have already been constructed by B\"ohm \cite{BohmNonCompactEinstein} by different means, see remark \ref{RemarkConvergenceConeSolutions}. \end{remarkroman}
\subsection{Ricci solitons from circle bundles} \label{SectionSolitonsFromCircleBundles}
The two summands case allows the possibility $d_1 = 1$ and $A_1=0.$ Geometrically this case is realised by manifolds which are foliated by principal circle bundles over a Fano K\"ahler-Einstein manifold $(V,J,g).$ In this setting, examples of {\em K\"ahler} Ricci solitons have been found by Cao-Koiso \cite{CaoSoliton},\cite{KoisoSoliton} and Feldman-Ilmanen-Knopf \cite{FIKSolitons}. {\em Non-K\"ahler} examples have also been constructed independently by Stolarski \cite{StolarskiSteadyRSOnCxLineBundles} and Appleton \cite{AppletonSteadyRS}.
The precise geometric set-up is as follows: Recall that due to a theorem of Kobayashi \cite{KobayashiCompactFanos} any Fano manifold $V$ is simply connected and hence $H^2(V, \mathbb{Z})$ is torsion free. Therefore the first Chern class is $c_1(V, J)= p \rho$ for a positive integer $p$ and an indivisible class $\rho \in H^2(V, \mathbb{Z}).$ Suppose that the Ricci curvature of $(V,g)$ is normalised to be $\Ric = p g.$ If $\pi \colon P \to V$ is the principal circle bundle with Euler class $q \rho$ for some $q \in \mathbb{Z} \setminus \left\{ 0 \right\}$ and $\theta$ the principal $S^1$-connection with curvature form $\Omega = q \pi^{*} \eta,$ where $\eta$ is the K\"ahler form associated to $g,$ then the Ricci soliton equation on $I \times P$ corresponding to the metric \begin{equation*} dt^2 + f_1^2(t) \theta \otimes \theta + f_2^2(t) \pi^{*} g \end{equation*} is described by the two summands system with $d_1=1,$ $d_2=d=\dim_{\mathbb{R}} V$ and $A_1=0,$ $A_2=d_2 p,$ $A_3 = \frac{d_2 q^2}{4}.$ Notice also that the structure of the ODE has changed since $A_1=0.$ If the smoothness conditions \eqref{SmoothnessMetricTwoSummandsGeometricSetUpSection} are satisfied, this construction induces a smooth metric on the associated complex line bundle over $V.$
Metrics whose curvature tensor is invariant under the complex structure are considered by Dancer-Wang \cite{DWCohomOneSolitons} in the Ricci soliton case and by Wang-Wang \cite{WWEinsteinS2Bundles} in the Einstein case. This condition is equivalent to saying that \begin{align*} \frac{\dot{f}_2^2}{f_2^2} - \frac{q^2}{4} \frac{f_1^2}{f_2^4} = \left( - \dot{u} + \tr(L) + \frac{\dot{f}_1}{f_1} \right) \frac{\dot{f}_2}{f_2} - \frac{p}{f_2^2} + \frac{\varepsilon}{2}. \end{align*} As a special case, the {\em K\"ahler} condition reads \begin{equation*} \frac{\dot{f}_2}{f_2}= - \frac{q}{2} \frac{f_1}{f_2^2} \end{equation*} and it is preserved by the flow. In both cases, the equations can actually be integrated {\em explicitly.} In order to investigate {\em non-K\"ahler} trajectories, the Ricci soliton ODE will be studied qualitatively as before. To adjust the argument in proposition \ref{X2VariablePositive} to the conditions $d_1=1$ and $A_1=0,$ adopt the convention $\frac{A_1}{d_1(d_1-1)} = 1.$ That is, consider \begin{align*} \widehat{\mathcal{G}}( \omega ) = \frac{1}{2} - \frac{p}{2} \omega^2 + \frac{d+2}{16}q^2 \omega^4 \ \text{ and } \ \mathcal{K} = \frac{1}{2} \left( \frac{X_1-X_2}{Y_1} \right) ^2 - \widehat{\mathcal{G}}( \omega ) \end{align*} and note that $\widehat{\mathcal{G}}$ has two positive roots $0<\hat{\omega}_1 < \hat{\omega}_2$ if $2p^2 > (d+2)q^2.$ Then the proof of proposition \ref{X2VariablePositive} shows
\begin{proposition} Suppose that $d_1=1,$ $A_1=0$ and $2p^2 > (d+2)q^2 >0.$ If $\varepsilon \geq 0,$ the set \begin{align*} \left\lbrace X_2 > 0 \ \text{ and } \ 0 < \frac{Y_2}{Y_1} <\hat{\omega}_1 \right\rbrace \end{align*} contains any trajectory of the Ricci soliton ODE that emanates from \eqref{InitialCriticalPoint} and flows into either \eqref{EinsteinLocus} or \eqref{SolitonLocus}. \label{X2PositiveCircleBundles} \end{proposition}
Completeness of the metric can then be established as in proposition \ref{CompletenessEpsZeroTwoSummands} and corollary \ref{CompletenessEpsPosTwoSummands}. Notice in particular that long time existence still follows from corollary \ref{FlowExistsForAllTimes}. As the proof shows, even though $Y_1$ is not controlled by the conservation law \eqref{GeneralTwoSummandsConsLaw} anymore since $A_1 =0,$ it cannot blow up in finite time.
\begin{corollary} Let $d_1=1,$ $A_1 = 0,$ $2p^2 > (d+2)q^2 >0$ and $\varepsilon \geq 0.$ Then any trajectory of the Ricci soliton ODE which emanates from the critical point \eqref{InitialCriticalPoint} and lies in the Einstein locus \eqref{EinsteinLocus} or Ricci soliton locus \eqref{SolitonLocus} corresponds to a complete Einstein or Ricci soliton metric, respectively. \end{corollary}
\begin{remarkroman} (a) Notice on the contrary that the construction of K\"ahler Ricci solitons due to Feldman-Ilmanen-Knopf \cite{FIKSolitons} requires the condition $-q=p$ in the steady case and $-q > p$ in the expanding case, see also \cite[Theorem 4.20 and Remark 4.21]{DWCohomOneSolitons}. For example, in the case of $\mathbb{C}P^n$ one has $p=\frac{d+2}{2}$ and one thus requires $p>q^2>0$ for the argument of proposition \ref{X2PositiveCircleBundles} to work. In particular, the K\"ahler examples due to Feldman-Ilmanen-Knopf are not covered by the corollary. In the case of $\mathbb{C}P^n,$ these K\"ahler Ricci soliton metrics have also been investigated by Chave-Valent \cite{CVQasuiEinsteinRenormalization}.
(b) Explicit K\"ahler and non-K\"ahler Einstein metrics have already been described by Calabi \cite{CalabiKaehlerMetrics}, B{\'e}rard-Bergery \cite{BerardBergerySurDeNouvellesEintein}, Page-Pope \cite{PagePopeEinsteinMetricsOnCxLineBundles} and Wang-Wang \cite{WWEinsteinS2Bundles}.
(c) If $q > p,$ Appleton \cite{AppletonSteadyRS} proves that there cannot exist a complete Ricci flat metric on the associated complex line bundle. In particular, the corresponding trajectory cannot satisfy the bound $ \omega < \frac{4p}{(d+2)q^2}$ for all times. \label{RemarkCircleBundleSolutions} \end{remarkroman}
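The hypothesis $2p^2 > (d+2)q^2$ of proposition \ref{X2PositiveCircleBundles} is easy to test in examples. The sketch below (with sample values of $p,$ $q,$ $d$ only, and an ad hoc helper name) evaluates $\hat{\omega}_1,$ $\hat{\omega}_2$ when they exist; for $V=\mathbb{C}P^{d/2},$ i.e. $p=\frac{d+2}{2},$ the condition reduces to the criterion $p>q^2$ from remark \ref{RemarkCircleBundleSolutions}.
\begin{verbatim}
import numpy as np

def omega_hats(p, q, d):
    """Positive roots of 1/2 - (p/2) w^2 + ((d+2)/16) q^2 w^4, if they exist."""
    if 2 * p**2 <= (d + 2) * q**2:
        return None
    w2 = np.roots([(d + 2) * q**2 / 16.0, -p / 2.0, 0.5])   # quadratic in w^2
    return tuple(np.sqrt(sorted(w2.real)))

print(omega_hats(p=3, q=1, d=4))   # V = CP^2: p = 3 > q^2 = 1, two roots
print(omega_hats(p=3, q=2, d=4))   # V = CP^2: p = 3 < q^2 = 4, returns None
\end{verbatim}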
Observe that the initial stationary point \eqref{InitialCriticalPoint} is not hyperbolic in the case $d_1=1.$ Therefore a center manifold exists and the analysis before proposition \ref{NumberOfParameterFamilies} does not carry over. However, the work of Buzano \cite{BuzanoInitialValueSolitons} and Eschenburg-Wang \cite{EWInitialValueEinstein} still applies and the existence of Ricci soliton trajectories can be deduced, see also \cite{StolarskiSteadyRSOnCxLineBundles} or \cite{AppletonSteadyRS} for different arguments. Theorem \ref{MainTheoremRSOnLineBundles} is therefore a consequence of the following result:
\begin{theorem} Suppose that $d_1=1,$ $A_1=0$ and $2p^2 > (d+2)q^2 >0.$
If $\varepsilon = 0$ there exists a $1$-parameter family and if $\varepsilon > 0$ a $2$-parameter family of trajectories lying in both the unstable manifold of \eqref{InitialCriticalPoint} and the Ricci soliton locus \eqref{SolitonLocus}. In particular, these give rise to complete Ricci soliton metrics on the total spaces of the corresponding complex line bundles over Fano K\"ahler-Einstein manifolds.
Similarly, there exist a (up to homotheties) unique complete Ricci flat metric and a $1$-parameter family of complete Einstein metrics with negative scalar curvature on these spaces. \label{SolitonsOnCircleBundles} \end{theorem}
It follows from the work of Appleton \cite{AppletonSteadyRS} that the Ricci solitons in theorem \ref{SolitonsOnCircleBundles} are asymptotically conical. Furthermore, recall from remark \ref{RemarkCircleBundleSolutions} that the existence of Einstein metrics is well known.
\subsection{Asymptotics} \label{SectionTwoSummandsAsymptotics}
This section discusses the asymptotic behaviour of the metrics which were constructed in section \ref{CompletenessTwoSummands}. In particular it will be shown that the steady Ricci solitons are asymptotically paraboloid and the expanding Ricci solitons are asymptotically conical.
\subsubsection{Cone solutions} \label{SectionConeSolutions} The concrete asymptotics of the metrics depend on the following well known construction, cf. \cite{BohmNonCompactEinstein} or \cite{DHWShrinkingSolitons}.
\begin{proposition} Let $(P, g_E)$ be a homogeneous space with $\Ric = (n-1) g_E.$ Then the metrics \begin{align*} dt^2 & + \sin^2(t) g_E \ \text{ for } \ t \in (0, \pi), \\ dt^2 + t^2 g_E & \ \text{ and } \ dt^2 + \sinh^2(t) g_E \ \text{ for } \ t > 0 \end{align*} define cohomogeneity one Einstein metrics on $(0, \pi) \times P$ and $(0, \infty) \times P$ with Einstein constant $- \frac{\varepsilon}{2} = n, 0, -n,$ respectively. Any of these solutions will be called a {\em cone solution.}
Furthermore, the Ricci flat metrics together with the soliton potential $- \dot{u}(t) = \frac{\varepsilon}{2}t$ induce a shrinking or expanding Ricci soliton on $(0, \infty) \times P$ depending on whether $\varepsilon < 0$ or $\varepsilon >0.$ If $\varepsilon < 0$ these solutions are called {\em conical Gaussians.} \label{ExplicitConeSolutionsProposition} \end{proposition}
The above metrics have conical singularities at the singular orbits unless each singular orbit consists of a point. In this case the metrics correspond to the standard metrics on $S^{n+1},$ $\mathbb{R}^{n+1}$ and $\mathbb{H}^{n+1},$ respectively. To obtain concrete formulae in the two summands case, the following definitions are required.
\begin{definition} Positive solutions $(c_1, c_2)$ to the equations \begin{align} (n-1) d_1 = \frac{A_1}{c_1^2} + A_3 \frac{c_1^2}{c_2^4} \ \text{ and } \ (n-1) d_2 = \frac{A_2}{c_2^2} - 2 A_3 \frac{c_1^2}{c_2^4} \label{DefConeSolutionOne} \end{align} are called {\em cone solutions}. \label{DefinitionConeSolutions} \end{definition}
\begin{remarkroman} If $A_3 >0,$ the cone solutions take the explicit form \begin{align*} c_1^2 & = \frac{1}{2d_1+d_2} \left( \frac{A_2^2 d_1 + 4 A_1 A_3 (2d_1+d_2)}{2 A_3 (n-1)(2d_1+d_2)} \mp \frac{A_2}{n-1} \sqrt{D} \right), \\ c_2^2 & = \frac{1}{2d_1+d_2} \cdot \frac{A_2 n \pm 2 A_3 (2d_1+d_2) \sqrt{D}}{(n-1) d_2}, \end{align*} where the discriminant $D$ is given by \begin{equation} D = \left( \frac{A_2}{2 A_3} \frac{d_1}{2d_1+d_2} \right)^2 - \frac{A_1}{A_3} \frac{d_2}{2d_1+d_2}. \label{DiskriminatConeSolutions} \end{equation} Inserting the geometric definitions of the constants $A_1,$ $A_2,$ $A_3$ into \eqref{DiskriminatConeSolutions}, one obtains \begin{align*}
D \geq 0 \ \text{ if and only if } \ \frac{(\Ric^{G/H})^2}{4 ||A||^2} \geq (2d_1+d_2) \frac{d_1-1}{d_1}. \end{align*} Suppose that there are two real cone solutions. For a cone solution $(c_1, c_2),$ set $\omega=\frac{c_1}{c_2}.$ Then the ordering $\omega_1 < \omega_2$ defines the {\em first} and {\em second} cone solution.
In particular, if $\widehat{D}>0,$ cf. \eqref{DefinitionDHat}, there exist two cone solutions and it is easy to check that $\omega_1 < \hat{\omega}_1 < \omega_2 < \hat{\omega}_2$ in this case. This has also been observed by B\"ohm \cite{BohmInhomEinstein}. \label{ExplicitFormulaConeSolutions} \end{remarkroman}
Let $D \geq 0$ and let $(c_1, c_2)$ be a cone solution as in definition \ref{DefinitionConeSolutions}. With the normalisation of the Einstein constant $- \frac{\varepsilon}{2} \in \left\{ -n, 0, n \right\},$ the two summands Einstein cone solutions of proposition \ref{ExplicitConeSolutionsProposition} take the form \begin{align} f_i(t)= c_i \sin(t) \ \text{ for } \ t \in (0, \pi) \label{ExplicitConeSolutionScalPos} \end{align} in the case of positive scalar curvature and \begin{align} f_i(t)= c_i t \ \text{ and } \ f_i(t)= c_i \sinh(t) \ \text{ for } \ t > 0 \label{ExplicitConeSolutionScalNonPos} \end{align} in the Ricci flat and negative scalar curvature case, respectively. Such a cone solution is called the {\em first} cone solution if the corresponding pair $(c_1,c_2)$ is the first cone solution in the sense of remark \ref{ExplicitFormulaConeSolutions}.
\begin{example} \normalfont Recall the examples of group diagrams in table \ref{HopfFibrationsTable}, which induce the Hopf fibrations. In the $\mathbb{H}P^{m+1}$-example the cone solutions are \begin{align*} c_1^2 & = \frac{9+14m+4m^2}{(1+2m)(3+2m)^2} \ \text{ and } \ c_2^2 = \frac{9+14m+4m^2}{(1+2m)(3+2m)}, \\ c_1^2 & = c_2^2 = 1, \end{align*} in the $F^{m+1}$-example they are given by \begin{align*} c_1^2 & = \frac{(1+m)^2+m}{(1+m)^2(1+4m)} \ \text{ and } \ c_2^2 = 4 \frac{(1+m)^2 + m}{(2m+1)^2+m}, \\ c_1^2 & = \frac{1+m}{1+4m} \ \text{ and } \ c_2^2 = 4 c_1^2, \end{align*} and in the $CaP^2$-example they are $c_1^2 = \frac{57}{121},$ $c_2^2 = \frac{19}{11}$ and $c_1^2 = c_2^2 = 1.$ In all cases, the first pair also describes the first cone solution. \label{ExamplesConeSolutionsFromHopfFibrations} \end{example}
The following elementary but useful characterisation of $\omega_1$ and $\omega_2$ is immediate from definition \ref{DefinitionConeSolutions} and remark \ref{ExplicitFormulaConeSolutions}.
\begin{proposition} Let $D > 0.$ Then the two positive roots of the function \begin{equation} f( \omega ) = \frac{A_1}{d_1} - \frac{A_2}{d_2} \omega^2 + A_3 \left( \frac{1}{d_1} + \frac{2}{d_2} \right) \omega^4 \label{FunctionDeterminingSecondDerivativeOmega} \end{equation} are the ratios $\omega_1,$ $\omega_2$ of the first and second cone solution, respectively, i.e. \begin{align*} \omega_{1}^2 = \frac{A_2}{2A_3} \frac{d_1}{2d_1+d_2} - \sqrt{ D } \ \ \text{and} \ \ \omega_{2}^2 = \frac{A_2}{2A_3} \frac{d_1}{2d_1+d_2} + \sqrt{ D }. \end{align*} In particular, it follows that $\omega_1^2 < \frac{A_2}{4A_3}$ and $\omega_2^2 < \frac{A_2}{2A_3}.$ \label{CharacterisationOfConeSolutionRatio} \end{proposition}
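The cone solutions can also be cross-checked numerically. The sketch below (an illustration only) combines the roots from proposition \ref{CharacterisationOfConeSolutionRatio} with the relation $c_2^2 = \frac{A_2 - 2 A_3 \omega^2}{(n-1) d_2},$ which follows from the second equation of \eqref{DefConeSolutionOne} after substituting $c_1^2 = \omega^2 c_2^2;$ for the $\mathbb{H}P^{m+1}$ data with $m=1$ it reproduces the values of example \ref{ExamplesConeSolutionsFromHopfFibrations}, namely $c_1^2 = \frac{27}{75},$ $c_2^2=\frac{27}{15}$ and $c_1^2=c_2^2=1.$
\begin{verbatim}
import numpy as np

# HP^{m+1} data of Table 1 with m = 1
d1, d2, A1, A2, A3 = 3, 4, 6.0, 48.0, 12.0
n = d1 + d2

D = (A2 / (2 * A3) * d1 / (2 * d1 + d2))**2 - A1 / A3 * d2 / (2 * d1 + d2)
for sign in (-1, +1):                  # -1: first cone solution, +1: second
    w2 = A2 / (2 * A3) * d1 / (2 * d1 + d2) + sign * np.sqrt(D)
    c2_sq = (A2 - 2 * A3 * w2) / ((n - 1) * d2)
    c1_sq = w2 * c2_sq
    print("omega^2 = %.4f, c1^2 = %.4f, c2^2 = %.4f" % (w2, c1_sq, c2_sq))
\end{verbatim}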
\subsubsection{Steady Ricci solitons} \label{SteadyRSAymptoticsSection}
The rotationally symmetric Bryant soliton on $\mathbb{R}^n,$ $n \geq 3,$ is asymptotically paraboloid and therefore non-collapsed. It will be shown that this is also the case for the non-trivial steady Ricci solitons constructed in section \ref{CompletenessTwoSummands}.
Recall from proposition \ref{CompleteSteadyRSAsymptotics} that on a complete, non-trivial cohomogeneity one steady Ricci soliton there holds $-\dot{u}(t) \to \sqrt{-C}$ as $t \to \infty$ and $0 < \tr(L) \leq \frac{n}{t}$ for $t >0.$ Therefore, if the shape operator remains positive definite, it follows that $\frac{\dot{f}_i}{f_i} \to 0$ as $t \to \infty.$ According to corollary \ref{FlowExistsForAllTimes}, this automatically holds in the two summands case if $d_1 > 1$ and $\widehat{D}>0.$
In order to obtain the concrete asymptotics of the metric if $A_3 >0$, an understanding of the long time behaviour of $\omega$ is essential:
\begin{proposition} Let $d_1 >1,$ $\widehat{D}>0$ and $\varepsilon = 0.$ Then along trajectories of non-trivial steady Ricci solitons the limit $\omega_{\infty} = \lim_{t \to \infty} \omega(t)$ exists and $\lim_{t \to \infty} \dot{\omega}(t) = 0.$ \label{OmegaConverges} \end{proposition} \begin{proof} Let $v(t) = \sqrt{\det g_t} = f_1^{d_1}(t) f_2^{d_2}(t)$ denote the relative volume of the principal orbit and consider the variables $\frac{v^{1/n}}{f_i}$ and $\frac{v^{2/n} \dot{f}_i}{f_i}$ for $i=1,2.$ Observe that $\frac{v^{1/n}}{f_1} = \frac{1}{\omega^{d_2/n}}$ and $\frac{v^{1/n}}{f_2} = \omega^{d_1/n}.$
Therefore, the B\"ohm functional has the lower bound \begin{align*} \mathscr{F}_0 = v ^{\frac{2}{n}} \left( \tr(r_t) + \tr(( L^{(0)})^2 )\right) \geq v ^{\frac{2}{n}} \tr(r_t) = \frac{A_1}{\omega^{2 d_2/n}} + A_2 \omega^{2 d_1/n} - A_3 w^{2(2 d_1 + d_2)/n}. \end{align*} Since $\mathscr{F}_0$ is non-increasing, $v ^{\frac{2}{n}} \tr(r_t)$ is bounded from above for $t \geq t_0 > 0$ and hence $\omega$ is bounded away from zero for these $t.$ As $\omega < \hat{\omega}_1,$ the variables $\frac{v^{1/n}}{f_i}$ are hence bounded for $t \geq t_0.$
Furthermore, the variables $\frac{v^{2/n} \dot{f}_i}{f_i}$ satisfy the ODE system \begin{align*} \frac{d}{dt} \frac{v^{2/n} \dot{f}_1}{f_1} & = - ( - \dot{u} + \frac{n-2}{n} \tr(L) ) \frac{v^{2/n} \dot{f}_1}{f_1} + \frac{A_1}{d_1} \left( \frac{v^{1/n}}{f_1} \right)^2 + \frac{A_3}{d_1} \omega^2 \left( \frac{v^{1/n}}{f_2} \right)^2, \\ \frac{d}{dt} \frac{v^{2/n} \dot{f}_2}{f_2} & = - ( - \dot{u} + \frac{n-2}{n} \tr(L) ) \frac{v^{2/n} \dot{f}_2}{f_2} + \frac{A_2}{d_2} \left( \frac{v^{1/n}}{f_2} \right)^2 - \frac{2 A_3}{d_2} \omega^2 \left( \frac{v^{1/n}}{f_2} \right)^2. \end{align*} Due to the known asymptotics, the coefficient of $\frac{v^{2/n} \dot{f}_i}{f_i}$ tends to $- \sqrt{-C}$ and the remaining polynomial terms are bounded. Hence, by comparison, the variables $\frac{v^{2/n} \dot{f}_i}{f_i}$ remain bounded. Therefore, one can pass to the $\omega$-limit set $\Omega.$ Due to its monotonicity, $\mathscr{F}_0$ converges. Its derivative \eqref{DerivativeOfBohmFunctional} has to vanish on $\Omega$ and therefore the limiting value $\mathscr{F}_0 = (v ^{\frac{2}{n}} \tr(r))_{\infty}$ can be expressed in terms of $\omega$ as above. In particular, $\omega$ converges.
The asymptotics of $\dot{\omega}$ simply follow from the ODE $\dot{\omega} = \omega \left\lbrace \frac{\dot{f}_1}{f_1} - \frac{\dot{f}_2}{f_2} \right\rbrace$ and the fact that $\frac{\dot{f}_i}{f_i} \to 0$ as $t \to \infty.$ \end{proof}
\begin{remarkroman} It is also possible to derive an integral formula for $\dot{\omega}.$ Indeed, it is straightforward to check that \begin{align*} \frac{d}{dt} \left\lbrace \dot{\omega} e^{-u} f_1^{d_1-1} f_2^{d_2+1} \right\rbrace = f(\omega) e^{-u} f_1^{d_1-2} f_2^{d_2} , \end{align*} where $f(\omega)$ is defined in \eqref{FunctionDeterminingSecondDerivativeOmega}. If $d_1 > 1,$ it follows that \begin{align*} \dot{\omega}(t) = \frac{e^{u(t)}}{f_1^{d_1-1}(t) f_2^{d_2+1}(t)} \cdot \int_{0}^{t} f( \omega(s) ) e^{-u(s)} f_1^{d_1-2}(s) f_2^{d_2}(s) ds. \end{align*} Since $f_1, f_2$ are monotonic and $e^{u(t)} \int_{0}^t e^{-u(s)} ds \to \frac{1}{\sqrt{-C}}$ as $t \to \infty$ due to L'H\^{o}pital's rule, one has the bound $\dot{\omega}(t) \leq \overline{C} \cdot \frac{1}{f_1(t) f_2(t)}$ for some constant $\overline{C} > 0.$ \label{IntegralFormulaOmegaDot} \end{remarkroman}
Now the asymptotics of the metric can be deduced:
\begin{proposition} Let $d_1>1,$ $A_1>0, \widehat{D}>0$ and suppose that $(d_1+1) \ddot{u}(0) = C < 0.$ Then the corresponding two summands steady Ricci soliton metrics satisfy \begin{align*} - \dot{u}(t) \to \sqrt{-C} \ \text{ and } \ \frac{f_i^2(t)}{t} \to \frac{2}{\sqrt{-C}}(n-1) c_i^2 \end{align*} as $t \to \infty,$ where $(c_1, c_2)$ denotes the first cone solution. In particular, $\omega \to \omega_1$ as $t \to \infty.$ \label{SummarisingAsymptoticsSteadyRicciSoliton} \end{proposition} \begin{proof} Recall that $- \dot{u}(t) \to \sqrt{-C}$ as $t \to \infty$ due to proposition \ref{CompleteSteadyRSAsymptotics}. Notice that $f_1,$ $f_2$ satisfy \begin{align*} \ddot{f}_1 & = - ( - \dot{u} - d_2 \frac{\dot{\omega}}{\omega} ) \dot{f}_1 - (n-1) \frac{\dot{f}_1^2}{f_1} + \frac{A_1+A_3 \omega^4}{d_1 f_1}, \\ \ddot{f}_2 & = - ( - \dot{u} + d_1 \frac{\dot{\omega}}{\omega} ) \dot{f}_2 - (n-1) \frac{\dot{f}_2^2}{f_2} + \frac{A_2-2 A_3 \omega^2}{d_2 f_2}. \end{align*}
As $\omega < \hat{\omega}_1 < \sqrt{\frac{A_2}{2 A_3}},$ $A_1 > 0$ and $\omega$ converges, both $f_1, f_2$ satisfy a differential equation of the form \begin{align*} \ddot{f} = - a_1 \dot{f} - (n-1) \frac{\dot{f}^2}{f} + \frac{a_2}{2 f}, \end{align*} where $a_i \colon [0,\infty) \to \mathbb{R}$ are smooth functions with $\lim_{t \to \infty} a_i(t) = a_i^{\ast} >0.$ Set $A= \min \lbrace a_1^{\ast}, a_2^{\ast}\rbrace.$ It is shown in \cite[Lemma 6.2]{AppletonSteadyRS} that for every $\varepsilon \in (0, A)$ and every solution $f: [0, \infty) \to \mathbb{R}$ with $f(0), \dot{f}(0)>0$ there exists $t_0>0$ such that \begin{align*} f(t_0)^2 + \gamma_{-} \left(1+ \varepsilon \right)^{-1} \left( t-t_0 \right) \leq f^2(t) \leq f(t_0)^2 + \gamma_{+} \left(t-t_0 \right) \end{align*} for all $t >t_0$, where $\gamma_{\pm} = \frac{a_2^{\ast} \pm \varepsilon}{a_1^{\ast} \mp \varepsilon}$.
It follows that $\frac{\gamma_{1,{-}}}{\gamma_{2,{+}}} \leq \omega_{\infty}^2 \leq \frac{\gamma_{1,{+}}}{\gamma_{2,{-}}}$ for every sufficiently small $\varepsilon >0,$ where $\gamma_{i, \pm}$ denote the constants $\gamma_{\pm}$ associated to $f_i.$ In the limit as $\varepsilon \to 0$ one obtains equality and thus \begin{align*} \omega_{\infty}^2 =\frac{d_2}{d_1} \frac{A_1+A_3 \omega_{\infty}^4}{A_2 - 2 A_3 \omega_{\infty}^2}, \end{align*} which says precisely that $f(\omega_{\infty})=0.$ In particular, $0 < \omega_{\infty} \leq \hat{\omega}_1$ is a root of $f(\omega)$ and due to the characterisation of the cone solutions in proposition \ref{CharacterisationOfConeSolutionRatio}, it follows that $\omega_{\infty} = \omega_1$ is the ratio of the first cone solution.
The asymptotic behaviour of $f_1,$ $f_2$ now follows with the formulae in definition \ref{DefinitionConeSolutions}. \end{proof}
\begin{remarkroman} For $d_1 >1,$ $\widehat{D}>0$ and $\varepsilon = 0,$ the asymptotics of the rescaled Ricci soliton ODE of section \ref{CompletenessTwoSummands} are \begin{align*} X_1, X_2 \to 0 \ \text{ and } \ Y_1, Y_2 \to 0 \ \text{ and } \ \mathcal{L} \to \frac{1}{\sqrt{-C}} \end{align*} as $s \to \infty.$ \label{VariablesBoundedSteadySolitonTwoSummandsCase} \end{remarkroman}
\subsubsection{Expanding Ricci solitons} \label{ExpandingRSAymptoticsSection}
It will be shown that the expanding Ricci solitons are asymptotically conical and that the soliton potential grows quadratically at infinity.
Recall from \eqref{GeneralAsymptoticsExpandingRS} that on a complete, non-trivial cohomogeneity one expanding Ricci soliton $-\dot{u}$ is asymptotically linear and the mean curvature of the principal orbit is bounded. Furthermore, corollary \ref{FlowExistsForAllTimes} implies that the shape operator is positive definite in the two summands case. The definition of the rescaled variables in \eqref{RescaledTwoSummandsVariables} thus implies:
\begin{proposition} Suppose that $d_1 >1$ and $\widehat{D} > 0$ and consider the flow of the Ricci soliton ODE in the phase space of expanding Ricci solitons. Then \begin{align*} X_1, X_2 \to 0 \ \text{ and } \ Y_1, Y_2 \to 0 \ \text{ and } \ \mathcal{L} \to 0 \end{align*} as $s \to \infty.$ \end{proposition}
A modification of the discussion in \cite{DWExpandingSolitons} can now be used to deduce the claimed asymptotically conical geometry at infinity:
\begin{proposition} Suppose that $d_1 >1$ and $\widehat{D} > 0.$ Then along trajectories corresponding to non-trivial expanding Ricci soliton metrics the soliton potential and shape operator satisfy \begin{align*} \frac{- \dot{u}(t)}{t} \to \frac{\varepsilon}{2} \ \text{ and } \ t \cdot L_t \to \mathbb{I}_n \end{align*} as $t \to \infty.$ \label{AsymptoticsOfExpandingTwoSummandsRS} \end{proposition} \begin{proof} Consider the ODE system \begin{align*} \frac{d}{ds} \frac{X_1}{\mathcal{L}^2} & = \left(- \sum_{i=1}^2 d_i X_i^2 -1 \right) \frac{X_1}{\mathcal{L}^2} + \frac{\varepsilon}{2} \left( 1 + X_1 \right) + \frac{A_1}{d_1} \left( \frac{Y_1}{\mathcal{L}} \right)^2 + \frac{A_3}{d_1} \left( \frac{Y_2}{\mathcal{L}} \right)^2 \left( \frac{Y_2}{Y_1} \right)^2, \\ \frac{d}{ds} \frac{X_2}{\mathcal{L}^2} & = \left(- \sum_{i=1}^2 d_i X_i^2 -1 \right) \frac{X_2}{\mathcal{L}^2} + \frac{\varepsilon}{2} \left( 1 + X_2 \right) + \frac{A_2}{d_2} \left( \frac{Y_2}{\mathcal{L}} \right)^2 - \frac{2 A_3}{d_2} \left( \frac{Y_2}{\mathcal{L}} \right)^2 \left( \frac{Y_2}{Y_1} \right)^2 \end{align*} and notice that $\frac{d}{ds} \frac{Y_i}{\mathcal{L}} = - \frac{Y_i}{\mathcal{L}} X_i$ implies that both limits $\hat{y}_i = \lim_{s \to \infty} \frac{Y_i}{\mathcal{L}} \in [0, \infty)$ exist.
If $\hat{y}_1 = 0$ then, as $\omega = \frac{Y_2}{Y_1}$ remains bounded, one necessarily also has $\hat{y}_2 = 0.$ In this case one can proceed as in \cite[Lemma 3.15]{DWExpandingSolitons} to show that $\frac{X_i}{\mathcal{L}^2} \to \frac{\varepsilon}{2}$ as $s \to \infty$ because the extra terms involving $A_3$ tend to zero. Similarly, integrating the ODE for $\mathcal{L}$ implies $\mathcal{L}^2 \cdot s \to \frac{1}{\varepsilon}$ as $s \to \infty.$ Since $dt = \mathcal{L} ds,$ this yields $s \sim \frac{\varepsilon}{4} t^2$ and hence $\mathcal{L} \cdot t \to \frac{2}{\varepsilon}$ as $t \to \infty$. The claim then follows from the definition of the coordinate change in \eqref{RescaledTwoSummandsVariables}.
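Here the change of the time variable can be spelled out as follows: $\mathcal{L}^2 \cdot s \to \frac{1}{\varepsilon}$ gives $\mathcal{L}(s) \sim \frac{1}{\sqrt{\varepsilon s}}$ and hence \begin{align*} t = \int \mathcal{L} \, ds \sim \int \frac{ds}{\sqrt{\varepsilon s}} = 2 \sqrt{\frac{s}{\varepsilon}}, \end{align*} so that $s \sim \frac{\varepsilon}{4} t^2$ and $\mathcal{L} \cdot t \sim \frac{1}{\sqrt{\varepsilon s}} \cdot 2 \sqrt{\frac{s}{\varepsilon}} = \frac{2}{\varepsilon}.$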
It remains to rule out that possibly $\hat{y}_1 > 0.$ In this case the existence of $\omega_{\infty} = \lim_{s \to \infty} \frac{Y_2}{Y_1}$ is immediate. Hence the ODEs imply \begin{align*} \frac{X_1}{\mathcal{L}^2} \to \frac{\varepsilon}{2} + \frac{1}{d_1} \left( A_1 \hat{y}_1^2+A_3 \hat{y}_2^2 \omega_{\infty}^2 \right) \ \text{ and } \ \frac{X_2}{\mathcal{L}^2} \to \frac{\varepsilon}{2} + \frac{1}{d_2} \left( A_2 \hat{y}_2^2 - 2 A_3 \hat{y}_2^2 \omega_{\infty}^2 \right) \end{align*} as $s \to \infty$ and both limits are positive as $\varepsilon > 0$ and $\omega_{\infty}^2 < \frac{A_2}{2 A_3}.$ Set $\Lambda_i = \lim_{s \to \infty} \frac{X_i}{\mathcal{L}^2} > 0.$ It follows that \begin{align*} \frac{Y_i^{'}}{\mathcal{L}^{'}} = \frac{Y_i \left(\sum_{i=1}^{2} d_i X_i^2 - \frac{\varepsilon}{2} \mathcal{L}^2 - X_i\right)}{\mathcal{L}\left(\sum_{i=1}^{2} d_i X_i^2 - \frac{\varepsilon}{2} \mathcal{L}^2 \right)} \to \hat{y}_i \cdot \frac{\varepsilon + 2 \Lambda_i}{\varepsilon} \end{align*} as $s \to \infty,$ but if $\hat{y}_1 > 0$ L'H\^{o}pital's rule implies $\Lambda_1=0$ and thus a contradiction. \end{proof}
\subsubsection{Ricci flat metrics} \label{SubSectionRFMetrics}
The rescaled coordinates of section \ref{CompletenessTwoSummands} are particularly suited to analyse the Ricci flat trajectories. The induced Ricci flat metric is asymptotically conical and in fact is asymptotic to the first cone solution $f_i(t) = c_i t.$ This also follows from B\"ohm's \cite{BohmNonCompactEinstein} original construction, see remark \ref{RemarkConvergenceConeSolutions}.
\begin{proposition} Let $d_1 >1$ and $\widehat{D}>0.$ Along trajectories of the Ricci flat system $X_i \to \frac{1}{n}$ and $Y_i \to \frac{1}{n c_i}$ as $s \to \infty,$ where $(c_1,c_2)$ denotes the first cone solution. \label{VariablesBoundedRicciFlatTwoSummandsCase} \end{proposition} \begin{proof} Recall from corollary \ref{FlowExistsForAllTimes} that the variables $X_i, Y_i$ for $i=1,2$ are all positive and bounded along the flow since $d_1 >1$ and $\widehat{D}>0.$ To deduce the asymptotics, consider the function \begin{equation*} \mathcal{G}= Y_1^{d_1} Y_2^{d_2}, \end{equation*} which is in fact the inverse of B\"ohm's Lyapunov \eqref{BohmFunctional} in the $X$-$Y$-coordinates. Its derivative is given by \begin{equation*} \mathcal{G}^{'} = n \mathcal{G} \left\{ \sum_{i=1}^2 d_i X_i^2 - \frac{1}{n} \right\} \end{equation*} and hence it is non-decreasing and bounded. Thus, it converges to a finite positive limit as $s \to \infty.$ This also shows that $Y_1, Y_2$ are bounded away from zero as $s \to \infty.$ Standard ODE theory now implies that the $\omega$-limit set $\Omega$ of the flow of $X_1, X_2,$ $Y_1, Y_2$ is non-empty, compact, connected and flow invariant. As $\mathcal{G}$ is monotonic and bounded, it must be constant on $\Omega.$ But since $d_1 X_1 + d_2 X_2 =1$ there holds $\mathcal{G}^{'} = 0$ if and only if $X_1 = X_2 = \frac{1}{n}.$ Moreover, this yields \begin{align*} 0 & = X_1^{'} = \frac{1}{n} \left( \frac{1}{n} - 1 \right) + \frac{A_1}{d_1} Y_1^2 + \ \frac{A_3}{d_1} \frac{Y_2^4}{Y_1^2}, \\ 0 & = X_2^{'} = \frac{1}{n} \left( \frac{1}{n} - 1 \right) + \frac{A_2}{d_2} Y_2^2 - 2 \frac{A_3}{d_2} \frac{Y_2^4}{Y_1^2} \end{align*} on the $\omega$-limit set. In particular, the pair $((n Y_1)^{-1}, (n Y_2)^{-1})$ satisfies the equations \eqref{DefConeSolutionOne} of the cone solutions. Since the bound $Y_2/Y_1 < \hat{\omega}_1$ holds along the flow and $\omega_1 < \hat{\omega}_1 < \omega_2 < \hat{\omega}_2,$ it follows that $Y_i \to \frac{1}{nc_i}$ as $s \to \infty,$ where $(c_1,c_2)$ describes the first cone solution. This completes the proof. \end{proof}
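Note that the monotonicity of $\mathcal{G}$ used in the proof is simply the Cauchy-Schwarz inequality: since $d_1 X_1 + d_2 X_2 = 1$ along the flow, \begin{align*} 1 = \left( \sum_{i=1}^2 d_i X_i \right)^2 \leq \left( \sum_{i=1}^2 d_i \right) \left( \sum_{i=1}^2 d_i X_i^2 \right) = n \sum_{i=1}^2 d_i X_i^2, \end{align*} with equality if and only if $X_1 = X_2 = \frac{1}{n}.$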
The asymptotic behaviour of the metric can be deduced from $\dot{f}_i = \frac{X_i}{Y_i} \to c_i$ as $t \to \infty$. The metric is therefore asymptotically conical at infinity.
\subsubsection{Ricci flat metrics: Explicit trajectories and rotational behaviour} \label{RicciFlatPlanarSystem}
It is a special feature of the Ricci flat equation that it reduces to a planar system for the variables $X_1, Y_1$ as the variable $\mathcal{L}$ decouples completely. More generally, in the Einstein case there holds $X_2= \frac{1}{d_2}(1 - d_1 X_1)$ and the conservation law \eqref{GeneralTwoSummandsConsLaw} then determines $Y_2$ in terms of $X_1, Y_1$ and $\frac{\varepsilon}{2} \mathcal{L}^2.$ Explicitly, it is given by \begin{equation*} Y_2^2 = \frac{A_2}{2 A_3} Y_1^2 \pm \frac{1}{2 A_3} \sqrt{A_2^2 Y_1^4 + 4 A_3 \left( \sum_{i=1}^2 d_i X_i^2 +(n-1) \frac{\varepsilon}{2} \mathcal{L}^2 - 1 + A_1 Y_1^2 \right) Y_1^2 }. \end{equation*} Initially, $Y_2$ is given by the solution corresponding to '$-$' as $\lim_{s \to - \infty} Y_2 =0.$ Notice that the discriminant vanishes if and only if $Y_2^2/Y_1^2= \frac{A_2}{2 A_3}$ and recall that if $d_1 >1,$ $\widehat{D} > 0$ and $\varepsilon \geq 0$ the estimate $Y_2^2/Y_1^2 < \hat{\omega}_1^2 < \frac{A_2}{2 A_3}$ has been established. Hence, in this case, only the '$-$' solution is realised by the flow.
Therefore, consider the ODE system \begin{align*} X_1^{'} = \ & \left( X_1 + \frac{1}{d_1} \right) \left( n \frac{d_1}{d_2} X_1^2 - 2 \frac{d_1}{d_2} X_1 + \frac{1}{d_2} - \frac{\varepsilon}{2} \mathcal{L}^2 - 1 \right) \\
& \ + \left( 2 A_1 + \frac{A_2^2}{2 A_3} \right) \frac{Y_1^2}{d_1}+ \frac{\varepsilon}{2}\left( 1 + \frac{n}{d_1} \right) \mathcal{L}^2 \\
& \ - \frac{A_2}{2 d_1 A_3} \sqrt{A_2^2 Y_1^4 + 4 A_3 \left( \sum_{i=1}^2 d_i X_i^2 +(n-1) \frac{\varepsilon}{2} \mathcal{L}^2 - 1 + A_1 Y_1^2 \right) Y_1^2 }, \\ Y_1^{'} = & \ Y_1 \left( n \frac{d_1}{d_2} X_1^2 - 2 \frac{d_1}{d_2} X_1 + \frac{1}{d_2} - \frac{\varepsilon}{2} \mathcal{L}^2 - X_1 \right), \\ \mathcal{L}^{'} = & \ \mathcal{L} \ \left( n \frac{d_1}{d_2} X_1^2 - 2 \frac{d_1}{d_2} X_1 + \frac{1}{d_2} - \frac{\varepsilon}{2} \mathcal{L}^2 \right). \end{align*} In the Ricci flat case this yields indeed a $2$-dimensional system for $X_1$ and $Y_1.$ Moreover, one has $\mathcal{L}(s) = \mathcal{L}(s_0) \exp \left[ \int_{s_0}^s \left( n \frac{d_1}{d_2} X_1^2 - 2 \frac{d_1}{d_2} X_1 + \frac{1}{d_2} \right) d \tau \right].$
Recall from proposition \ref{VariablesBoundedRicciFlatTwoSummandsCase} that one expects $(X_1,Y_1) \to (\frac{1}{n}, \frac{1}{n c_1})$ as $s \to \infty$ if the cone solutions are real. To study the dynamics of the planar $(X_1, Y_1)$-system close to the stationary point $(\frac{1}{n},\frac{1}{n c_1}),$ consider its linearisation at that point. It is described by the matrix \begin{equation*} \begin{pmatrix} -\frac{n-1}{n} & 2 \frac{c_1}{n}\left[ n-1 - 2 A_3 \frac{c_1^2}{c_2^4}\left( \frac{1}{d_1} + \frac{1}{d_2} \right) \right] \\ -\frac{1}{c_1 n} & 0 \end{pmatrix}. \end{equation*} The eigenvalues are the solutions to the quadratic equation \begin{equation*} \lambda^2 + \frac{n-1}{n} \lambda + \frac{2}{n^2} \left[ n-1 - 2 A_3 \frac{c_1^2}{c_2^4}\left( \frac{1}{d_1} + \frac{1}{d_2} \right) \right] = 0 \end{equation*} and it is therefore easy to deduce:
\begin{corollary} The limiting point of the Ricci flat trajectories is a stable spiral if and only if \begin{equation} \frac{(n-1)(n-9)}{8} + 2 A_3 \frac{c_1^2}{c_2^4} \left( \frac{1}{d_1} + \frac{1}{d_2} \right) < 0. \end{equation} In particular, if $A_3 =0$ this is equivalent to $2 \leq n \leq 8.$ Otherwise, it is a stable node. \label{StableSpiral} \end{corollary}
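For the convenience of the reader, here is the elementary computation behind corollary \ref{StableSpiral}. Abbreviate (say) $\kappa = 2 A_3 \frac{c_1^2}{c_2^4} \left( \frac{1}{d_1} + \frac{1}{d_2} \right).$ The quadratic equation above has non-real roots if and only if its discriminant is negative, i.e. if and only if \begin{align*} \left( \frac{n-1}{n} \right)^2 < \frac{8}{n^2} \left( n-1 - \kappa \right), \end{align*} which is equivalent to $(n-1)^2 - 8(n-1) + 8 \kappa < 0$ and hence to $\frac{(n-1)(n-9)}{8} + \kappa < 0.$ In this case the roots have real part $- \frac{n-1}{2n} < 0,$ so the stationary point is a stable spiral. If $A_3 = 0,$ then $\kappa = 0$ and the criterion reads $(n-1)(n-9) < 0,$ which holds precisely for $2 \leq n \leq 8.$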
The reduction to the planar $(X_1,Y_1)$-system can also be used to describe explicit trajectories. Trajectories which correspond to smooth complete Ricci flat metrics must emanate from $( \frac{1}{d_1}, \frac{1}{d_1} )$ and are expected to converge to $( \frac{1}{n}, \frac{1}{n c_1} ).$ In low dimensional examples, these trajectories are actually realised by straight lines! This can be seen by introducing polar coordinates centred at $( \frac{1}{d_1}, \frac{1}{d_1} )$, and a straightforward calculation verifies that the angle remains constant.
This provides a new coordinate representation of metrics of special holonomy considered by Bryant-Salamon \cite{BSExceptionalHolonomy} and Gibbons-Page-Pope \cite{GPPEinsteinOnSphereR3R4bundles}.
\begin{theorem} On the open disc bundles associated to the group diagrams $G=Sp(2),$ $H=Sp(1) \times Sp(1),$ $K= U(1) \times Sp(1)$ and $G=Sp(1) \times Sp(2),$ $H=Sp(1) \times Sp(1) \times Sp(1),$ $K=Sp(1) \times Sp(1)$ the trajectories of the complete Ricci flat two summands metrics are line segments when represented in the above coordinate system. \label{ExplicitRFTrajectories} \end{theorem}
\subsubsection{Einstein metrics with negative scalar curvature} \label{SubSectionAsymptoticsEinsteinScalNeg} It will be shown that in this case the B\"ohm functional $\mathscr{F}_0$ asymptotically approaches the value of the first cone solution, and hence work of B\"ohm implies that the metric is in fact asymptotic to the first cone solution $f_i(t) = c_i \sinh(t)$.
\begin{proposition} Let $d_1 >1$ and $\widehat{D}>0.$ Then the asymptotic behaviour of trajectories corresponding to complete Einstein metrics with negative scalar curvature is given by \begin{align*} X_1, X_2 \to \frac{1}{n} \ \text{ and } \ Y_1, Y_2 \to 0, \ \omega = \frac{Y_2}{Y_1} \to \omega_1 \ \text{ and } \ \mathcal{L} \to \sqrt{\frac{2}{n \varepsilon}} \end{align*} as $s \to \infty,$ where $\omega_1 = \frac{c_1}{c_2}$ is the ratio of the first cone solution.
Furthermore, $\mathscr{F}_0 \to n(n-1) c_{1}^{{2d_1}/{n}} c_{2}^{{2d_2}/{n}}$ as $s \to \infty,$ which is the value of $\mathscr{F}_0$ evaluated on the first cone solution $(c_1, c_2).$ \label{TwoSummandsEinsteinMetricsWithNegativeScalarCurvature} \end{proposition} \begin{proof} As in the proof of corollary \ref{FlowExistsForAllTimes}, introduce the variable $\omega = \frac{Y_2}{Y_1}$ in order to view the Ricci soliton equation as an ODE with polynomial right hand side. Furthermore, all variables remain bounded along the flow and hence the $\omega$-limit set $\Omega$ is non-empty, connected, compact and flow-invariant.
Recall from lemma \ref{LemmaLBoundedAwayFromZero} that $\mathcal{L}(s)$ is bounded away from zero for $s \geq 0.$ As the quotients $\frac{Y_i}{\mathcal{L}}$ satisfy $\frac{d}{ds} \frac{Y_i}{\mathcal{L}} = - \frac{Y_i}{\mathcal{L}} X_i,$ they are monotonically decreasing and hence converge as $s \to \infty.$ Moreover, the quotients are well-defined on $\Omega.$ Therefore their derivatives vanish, which implies $Y_i \cdot X_i = 0$ on $\Omega.$ But due to the bounds on $\mathcal{L}$ in lemma \ref{LemmaLBoundedAwayFromZero}, $X_1$ is bounded away from zero and in particular is non-zero on $\Omega.$ This implies $0 < Y_2 < \hat{\omega}_1 Y_1 \to 0$ as $s \to \infty.$
Now consider the evolution of the B\"ohm functional $\mathscr{F}_0 = v ^{\frac{2}{n}} \left( \tr(r_t) + \tr(( L^{(0)})^2 )\right),$ which was introduced in \eqref{BohmFunctional}. In the current coordinate system it is given by \begin{align*} \mathscr{F}_0 & = \prod_{i=1}^2 Y_i^{-2d_i / n} \left\lbrace \sum_{i=1}^2 A_i Y_i^2 - A_3 \frac{Y_2^4}{Y_1^2} + \sum_{i=1}^2 d_i X_i^2 - \frac{1}{n}\left( \sum_{i=1}^2 d_i X_i \right) ^2 \right\rbrace \\ & = \prod_{i=1}^2 Y_i^{-2d_i / n} \left\lbrace A_1 Y_1^2 + Y_2^2 \left( A_2 - A_3 \frac{Y_2^2}{Y_1^2} \right) + \sum_{i=1}^2 d_i X_i^2 - \frac{1}{n} \right\rbrace. \end{align*} Observe that it is bounded from below by zero as $\frac{Y_2^2}{Y_1^2} = \omega^2 < \hat{\omega}_1^2 < \frac{A_2}{2 A_3}.$ Furthermore, according to \eqref{DerivativeOfBohmFunctional}, $\mathscr{F}_0$ is non-increasing and therefore converges as $s \to \infty.$ However, for $\mathscr{F}_0$ to be finite on the $\omega$-limit set $\Omega,$ one has to have $\sum_{i=1}^2 d_i X_i^2 = \frac{1}{n},$ which forces $X_1 = X_2 = \frac{1}{n}$ in the Einstein locus $\sum_{i=1}^2 d_i X_i = 1$ as $X_1, X_2 \geq 0.$ Therefore $X_1$ is constant on $\Omega$ and then also $\mathcal{L}$ due to the ODE for $X_1.$ Finally, the ODE for $\mathcal{L}$ itself shows that $\frac{\varepsilon}{2} \mathcal{L}^2 = \frac{1}{n}$ on $\Omega.$
To deduce the asymptotic behaviour of $\omega$, first observe that the monotonicity of $\mathscr{F}_0$ and \begin{align*} \mathscr{F}_0 & = \frac{1}{\omega^{2 d_2 / n}} \left( A_1 + A_2 \omega^2 - A_3 \omega^4 \right) + v^{2/n} \tr( ( L^{(0)})^2) \\ & = \frac{A_1}{\omega^{2 d_2/n}} + A_2 \omega^{2 d_1/n} - A_3 \omega^{2(2 d_1 + d_2)/n} + v^{2/n} \tr( ( L^{(0)})^2) \end{align*} imply that $\omega$ is bounded away from zero for $t \geq t_0 > 0.$ Notice furthermore that \begin{align*} \frac{d}{dt} v^{2/n} \tr( ( L^{(0)})^2) & = \frac{d}{dt} \mathscr{F}_0 - \frac{d}{dt} \left\lbrace \frac{A_1}{\omega^{2 d_2/n}} + A_2 \omega^{2 d_1/n} - A_3 \omega^{2(2 d_1 + d_2)/n} \right\rbrace \\ & = -2 \frac{n-1}{n} v^{2/n} \tr( ( L^{(0)})^2) - \frac{2 d_1 d_2}{n} \omega^{-2 d_2/n-1} f(\omega), \end{align*} where the polynomial $f(\omega)$ is defined in \eqref{FunctionDeterminingSecondDerivativeOmega}. Therefore, $v^{2/n} \tr( ( L^{(0)})^2)$ can be treated as an independent variable, which is nonnegative, bounded by $\mathscr{F}_0$ and satisfies a well-defined ODE on the $\omega$-limit set $\Omega.$
Since $\mathscr{F}_0$ takes a finite value on $\Omega$ and $\frac{d}{dt} \mathscr{F}_0 = -2 \frac{n-1}{n} v^{2/n} \tr( ( L^{(0)})^2),$ it follows that $v^{2/n} \tr( ( L^{(0)})^2) \to 0$ as $t \to \infty.$ This in turn implies $f(\omega) \to 0$ and thus $\omega \to \omega_1$ as $t \to \infty$ due to proposition \ref{CharacterisationOfConeSolutionRatio}.
This also implies $\mathscr{F}_0 \to \frac{1}{\omega_1^{2 d_2 / n}} \left( A_1 + A_2 \omega_1^2 - A_3 \omega_1^4 \right)$ as $t \to \infty,$ which is easily seen to be the value of the first cone solution by using the identities in definition \ref{DefinitionConeSolutions}. \end{proof}
Notice that $\frac{\dot{f}_i}{f_i} = \frac{X_i}{\mathcal{L}} \to \sqrt{\frac{\varepsilon}{2n}}$ as $t \to \infty$ immediately implies that $f_1, f_2$ grow exponentially at infinity. In fact, the metric is asymptotic to the first cone solution at infinity. This follows from a more general result of B\"ohm \cite[Corollary 2.4]{BohmNonCompactEinstein}: If the scalar curvature of the principal orbit is positive and $\mathscr{F}_0$ is bounded from below, then any Einstein trajectory that takes a constant value on $\mathscr{F}_0$ is a cone solution. An argument specifically adapted to the two summands case is given in the proof of proposition \ref{ConvergenceToConeSolution}, see also remark \ref{RemarkConvergenceConeSolutions} (a).
\subsection{Convergence to cone solutions} \label{SectionConvergenceToConeSolutions}
The results in sections \ref{SubSectionRFMetrics} and \ref{SubSectionAsymptoticsEinsteinScalNeg} show that the non-compact Ricci flat metrics and Einstein metrics with negative scalar curvature of section \ref{CompletenessTwoSummands} are asymptotic to the cone solutions at infinity. In this section it will be shown that the asymptotics of the {\em Ricci flat} trajectories also imply that the metric actually converges to the cone solution as the volume of the singular orbit tends to zero, i.e. as $f_2(0) = \bar{f} \to 0.$ In fact this follows for any sign of the Einstein constant and recovers convergence results due to B\"ohm \cite{BohmInhomEinstein, BohmNonCompactEinstein}. In comparison to B\"ohm's work, the main technical simplification is that the proof does not rely on the Poincar\'e-Bendixson theorem, see also remark \ref{RemarkConvergenceConeSolutions}.
Recall from \eqref{TwoSummandsMetric} that the metric is given by \begin{equation*} g_{M \setminus Q} = dt^2 +f_1(t)^2 g_S + f_2(t)^2 g_Q \end{equation*} away from the singular orbits. It follows from the results of Eschenburg-Wang \cite{EWInitialValueEinstein} that there exists a unique one parameter family $c_{\bar{f}}(t) = (f_1,\dot{f}_1,f_2,\dot{f}_2)(t)$ of solutions to the Einstein equations \eqref{CohomOneRSb}, \eqref{CohomOneRSc} with initial condition $c_{\bar{f}}(0)=(0,1,\bar{f},0)$ for any $\bar{f}>0.$ Moreover, B\"ohm \cite{BohmInhomEinstein} has observed that it depends {\em continuously} on the initial condition $\bar{f}>0.$ Notice also that \eqref{CohomOneRSc} implies $(d_1+1) \ddot{f}_2(0) = \frac{\varepsilon}{2} \bar{f} + \frac{A_2}{d_2} \frac{1}{\bar{f}} >0$ if either $\varepsilon \geq 0$ or $\bar{f}^2< - \frac{2}{\varepsilon} \frac{A_2}{d_2}$ and $\varepsilon < 0.$ However, the equations are a priori not well defined if $\bar{f}=0.$ This singular condition corresponds geometrically to the collapse of the full principal orbit.
To describe the behaviour of the Einstein equations as the volume of the singular orbit tends to zero more concretely, the following observation is key: In the $(X_i,Y_i, \mathcal{L})$-coordinate system defined in \eqref{RescaledTwoSummandsVariables}, the initial condition $(0,1,\bar{f},0)$ of the trajectory $c_{\bar{f}}$ corresponds to the stationary point \eqref{InitialCriticalPoint}, which is independent of $\bar{f}.$ Furthermore, the initial condition $f_2(0)=\bar{f}$ can be recovered via $\bar{f} = \lim_{s \to - \infty} \frac{\mathcal{L}}{Y_2}.$ In particular, $\bar{f} = 0$ is the limit of trajectories with $\mathcal{L} \equiv 0.$
However, the two coordinate systems are only equivalent along trajectories with $\mathcal{L} > 0.$ Nonetheless, due to the continuous dependence on the initial condition, any trajectory with $\mathcal{L} \equiv 0$ can be viewed as a continuous limit of Einstein trajectories. Hence, the collapse $\bar{f} \to 0$ is described in the $(X_i,Y_i, \mathcal{L})$-coordinates by the solution of the {\em Ricci flat} equations. By construction this solution lies in the unstable manifold of \eqref{InitialCriticalPoint} and due to proposition \ref{NumberOfParameterFamilies} it is indeed unique.
Furthermore, due to the uniqueness of solutions $c_{\bar{f}}$ of the Einstein equations with initial condition $c_{\bar{f}}(0)=(0,1,\bar{f},0)$, one might expect that the limit as $\bar{f} \to 0$ is a cone solution. This intuition is confirmed in proposition \ref{ConvergenceToConeSolution}.
In the case of Einstein metrics with positive scalar curvature, the proof of proposition \ref{ConvergenceToConeSolution} requires the concept of {\em maximal volume orbits:}
Notice that the volume $V$ of the principal orbit satisfies $\dot{V}=V \tr(L),$ where $\tr(L)$ is the mean curvature. Along trajectories corresponding to Einstein metrics with positive scalar curvature, every critical point of $V$ is a maximum or a singular orbit is reached. Therefore, if the maximal volume orbit exists, it is unique and characterised by $\tr(L) = 0.$
In the two summands case, if $A_3 = 0,$ due to a result of B\"ohm \cite[section 4, (e)]{BohmInhomEinstein}, the maximal volume orbit always exists. An alternative argument is discussed below, mainly to introduce a natural coordinate system which extends past the maximal volume orbit.
\begin{lemma} If $A_3 = 0$ and $\varepsilon <0,$ then any Einstein trajectory has a maximal volume orbit. \label{ExistenceMaxVolumeOrbit} \end{lemma} \begin{proof} In analogy to \eqref{RescaledTwoSummandsVariables}, introduce the variables \begin{equation} \widehat{X}_i = \frac{\dot{f}_i}{f_i}, \ \widehat{Y}_i = \frac{1}{f_i}, \ \text{ for } \ i=1,2, \ \text{ and } \ \widehat{\mathcal{L}} = {\tr(L)}. \label{HatVariables} \end{equation}
Due to the assumption $A_3=0$ the two summands Einstein equations take the form \begin{align*} \frac{d}{dt} & \widehat{X}_i = - \widehat{X}_i \widehat{\mathcal{L}}+ \frac{A_i}{d_i} \widehat{Y}_i^2 + \frac{\varepsilon}{2} \\ \frac{d}{dt} & \widehat{Y}_i = - \widehat{X}_i \widehat{Y}_i, \\ \frac{d}{dt} & \widehat{\mathcal{L}} = \frac{\varepsilon}{2} - \sum_{i=1}^2 d_i \widehat{X}_i^2 \end{align*} and the conservation law is \begin{equation} \sum_{i=1}^2 d_i \widehat{X}_i^2 + \sum_{i=1}^2 A_i \widehat{Y}_i^2 + (n-1) \frac{\varepsilon}{2} = \widehat{\mathcal{L}}^2. \label{UnrescaledConsLaw} \end{equation} Notice that the time slice has not been rescaled and that the conservation law \eqref{UnrescaledConsLaw} and $\widehat{\mathcal{L}} = \sum_{i=1}^2 d_i \widehat{X}_i$ describe the rescaled Einstein locus \eqref{EinsteinLocus}.
Clearly the above system is an ODE system with polynomial right hand side. In particular, a solution can only develop a finite time singularity if the norm of $( \widehat{X}_i, \widehat{Y}_i, \widehat{\mathcal{L}} )$ blows up. However, the conservation law \eqref{UnrescaledConsLaw} shows that this can only be the case if $\widehat{\mathcal{L}}$ blows up. At the first singular orbit, i.e. at time $t=0,$ one has $\widehat{\mathcal{L}}=+ \infty$ and $\widehat{\mathcal{L}}$ is strictly decreasing for all $t >0$ as $\varepsilon <0.$ Hence, the finite time singularity corresponds to $\widehat{\mathcal{L}}=- \infty$ and in particular there exists a time with $\tr(L) = \widehat{\mathcal{L}} = 0,$ the maximal volume orbit. \end{proof}
From now on fix the normalisation $- \frac{\varepsilon}{2} \in \left\{ -n, 0, n \right\}$ of the Einstein constant $- \frac{\varepsilon}{2}$ and recall that in this case the corresponding cone solutions are given by \eqref{ExplicitConeSolutionScalPos}, \eqref{ExplicitConeSolutionScalNonPos}. The following proposition recovers the convergence results of B\"ohm \cite[Theorem 5.7]{BohmInhomEinstein}, \cite[Theorem 11.1]{BohmNonCompactEinstein}.
\begin{proposition} Suppose that $d_1 >1$ and $A_3 = 0.$ As $\bar{f} \to 0,$ the solution $c_{\bar{f}}$ to the two summands Einstein equations converges to the first cone solution on every relatively compact subset of $(0, \pi)$ if $-\frac{\varepsilon}{2}=n$ and $(0, \infty)$ if $-\frac{\varepsilon}{2} \in \left\{ -n, 0 \right\},$ respectively. \label{ConvergenceToConeSolution} \end{proposition} \begin{proof} Recall that the limit trajectory with $\bar{f} = 0$ corresponds to a trajectory with $\mathcal{L} \equiv 0,$ more precisely the unique solution of the Ricci flat system in $X_i,$ $Y_i$ in the unstable manifold of \eqref{InitialCriticalPoint}. According to proposition \ref{VariablesBoundedRicciFlatTwoSummandsCase}, the Ricci flat trajectory asymptotically approaches the first cone solution, which takes the constant value $X_i = \frac{1}{n}$ and $Y_i=\frac{1}{nc_i}$ for $i=1,2.$ Notice that this is in fact the value at $t=0$ of all cone solutions. Therefore it will be called {\em base point} of the cone solution.
If $\varepsilon \geq 0,$ notice as in the proof of proposition \ref{TwoSummandsEinsteinMetricsWithNegativeScalarCurvature} that the variables $X_i, Y_i$ are bounded, that the B\"ohm functional $\mathscr{F}_0$ is bounded from below and non-increasing, and that it has a critical point on the cone solution. In fact, any Einstein trajectory that takes a constant value on $\mathscr{F}_0$ is a cone solution and $\mathscr{F}_0 = n(n-1) c_{1}^{{2d_1}/{n}} c_{2}^{{2d_2}/{n}}.$ However, since $A_3=0,$ the cone solution is unique and hence realises the minimum of $\mathscr{F}_0.$
If $-\frac{\varepsilon}{2}=n,$ then \eqref{DerivativeOfBohmFunctional} implies that $\mathscr{F}_0$ achieves its minimum along a trajectory $c_{\bar{f}}$ on the maximal volume orbit. On any maximal volume orbit the coordinates \eqref{HatVariables} satisfy $\sum_{i=1}^2 d_i \widehat{X}_i = \widehat{\mathcal{L}}=0$ and the conservation law \eqref{UnrescaledConsLaw} hence implies that the variables $\widehat{X}_i, \widehat{Y}_i$ are bounded. Thus, $\mathscr{F}_0 = n(n-1) \prod_{i=1}^2 \widehat{Y}_i^{-2d_i/n}$ has a minimum on the maximal volume orbit, which is achieved by the value of the cone solution.
However, $\mathscr{F}_0$ is constant on the cone solution and since the solution $c_{\bar{f}}$ approaches the base point of the cone solution as $\bar{f} \to 0$, the claim follows. \end{proof}
\begin{remarkroman} (a) The simplifying assumption $A_3=0$ can be relaxed. For the geometric examples in \ref{ExamplesConeSolutionsFromHopfFibrations}, one can calculate directly that the first cone solution realises the minimum. So the exact same proof works if $\widehat{D}>0$ and $\varepsilon \geq 0$ due to proposition \ref{X2VariablePositive}.
(b) The behaviour of the B\"ohm functional close to cone solutions was studied in a more general context in \cite{BohmNonCompactEinstein}. In particular, B\"ohm shows that any {\em stable} cone solution is a local attractor of the cohomogeneity one Einstein equations. In the two summands case, the cone solutions are stable if $d_1>1$. However, the cone solutions corresponding to the circle bundle construction of section \ref{SectionSolitonsFromCircleBundles} are {\em un}stable.
(c) In the original proof, B\"ohm \cite{BohmInhomEinstein} uses a coordinate system specifically adapted to the cone solution to find a limit trajectory, which solves a planar ODE. The limit trajectory lies in a compact planar domain and the Poincar\'e-Bendixson theorem is applied to prove convergence to the base point. Stability of the first cone solution then follows via an attractor function, a version of which is \eqref{LyapunovForNonTrivialBundles} in the Einstein case.
The planar ODE in B\"ohm's work is similar to the reduction of the Ricci flat equations to a planar ODE in section \ref{RicciFlatPlanarSystem}. However, in the Ricci soliton case, the extra degree of freedom of the soliton potential prevents a similar reduction and a different proof is required.
(d) B\"ohm's \cite{BohmNonCompactEinstein} construction of the complete, non-compact Einstein metrics which were recovered in section \ref{CompletenessTwoSummands} relies on the above convergence result, i.e. on the fact that for $f_2(0)= \bar{f} \to 0$ the trajectories remain close to the cone solution and are thus defined for all times. The proof in section \ref{CompletenessTwoSummands} shows moreover that one obtains an Einstein metric for {\em all} $f_2(0)>0.$ Notice that in the Ricci flat case the metric is unique up to scaling. \label{RemarkConvergenceConeSolutions} \end{remarkroman}
\subsection{B\"ohm's Einstein metrics of positive scalar curvature} \label{SectionBohmEinsteinMetricsPosScal}
For the convenience of the reader, this section explains how the refined asymptotics of the {\em Ricci flat} equations in section \ref{RicciFlatPlanarSystem} and proposition \ref{ConvergenceToConeSolution} yield B\"ohm's \cite{BohmInhomEinstein} Einstein metrics of positive scalar curvature on $S^5, \ldots, S^9$ and other low dimensional spaces, including $S^2 \times S^3, \ldots, S^2 \times S^7$ or $S^4 \times S^5$.
It should be emphasised that the overall strategy of the construction due to B\"ohm remains the same.
\subsubsection{Symmetric solutions} \label{SymmetricSolutions}
A solution $c_{\bar{f}} = (f_1,\dot{f}_1,f_2,\dot{f}_2)$ of the two summands Einstein equations with initial condition $c_{\bar{f}}(0)=(0,1,\bar{f},0)$ is called {\em symmetric} if there is $\tau>0$ such that $c_{\bar{f}}(\tau)=(0,-1,\bar{f},0)$. In fact, $c_{\bar{f}}$ is symmetric if and only if there exists $t_0>0$ such that $c_{\bar{f}}(t_0)=(f_1(t_0),0,f_2(t_0),0)$ with $f_1(t_0),f_2(t_0) >0.$ In particular, reflection along the maximal volume orbit, the unique orbit with $\tr(L)=0,$ is an isometry precisely for symmetric solutions.
Moreover, since $\omega = \frac{f_1}{f_2}$ satisfies $\dot{\omega} = \omega( \frac{\dot{f}_1}{f_1} - \frac{\dot{f}_2}{f_2}),$ any symmetric solution is characterised by a critical point of $\omega$ on the maximal volume orbit. It is an important observation due to B\"ohm \cite[Lemma 4.2.1]{BohmInhomEinstein} that critical points of $\omega$ are {\em non-degenerate}. The non-degeneracy of the critical points of $\omega$ allows the application of the following general counting principle.
\begin{lemma} Let $T_{\bar{f}}, \varepsilon_{\bar{f}}$ be continuous, positive functions of the real parameter $\bar{f}.$ Suppose that $c_{\bar{f}} \colon [0,T_{\bar{f}} + \varepsilon_{\bar{f}}) \to \mathbb{R}^n$ is a family of $C^{1}$-maps which depends continuously on $\bar{f}$ and $\omega \in C^{1}$ is a real valued map such that any critical point of $\omega = \omega \circ c_{\bar{f}}$ is non-degenerate and $\dot{\omega}(0) >0$ for all $\bar{f}.$
Let $\mathcal{C}(\bar{f}) = \mathcal{C}(\bar{f}, T_{\bar{f}})$ denote the number of critical points of $\omega$ along $c_{\bar{f}}$ before $T_{\bar{f}}.$ Fix $\bar{f}_1 < \bar{f}_2.$ Then the following statements hold: \begin{enumerate} \item If $\dot{\omega}(T_{\bar{f}}) \neq 0$ for all $\bar{f} \in [\bar{f}_1, \bar{f}_2]$ then $\mathcal{C}(\bar{f})$ is constant on $[\bar{f}_1, \bar{f}_2].$ \item If $\bar{f}^{*}$ is the unique value of $\bar{f} \in [\bar{f}_1, \bar{f}_2]$ with $\dot{\omega}(T_{\bar{f}})=0$ then \begin{equation*}
| \mathcal{C}(\bar{f}^{'}) - \mathcal{C}(\bar{f}^{''}) | \leq 1 \end{equation*} for all $\bar{f}^{'}, \bar{f}^{''} \in [\bar{f}_1, \bar{f}_2]$ with $\bar{f}^{'} < \bar{f}^{*} < \bar{f}^{''}.$ \end{enumerate}
In particular, for any $\bar{f}^{'}, \bar{f}^{''} \in [\bar{f}_1, \bar{f}_2]$ there exist at least $| \mathcal{C}(\bar{f}^{'}) - \mathcal{C}(\bar{f}^{''}) | $ solutions with $\dot{\omega}(T_{\bar{f}})=0$ for $\bar{f} \in [\bar{f}^{'}, \bar{f}^{''}].$ \label{GeneralCountingArgument} \end{lemma}
\begin{remarkroman} (a) In fact, if $c_{\bar{f}}$ is just continuous, the lemma can still be used to count roots of continuous functions along $c_{\bar{f}}.$ In this case $c_{\bar{f}}$ has to intersect the zero set of the function transversally.
(b) B\"ohm proved the counting principle explicitly in the case where $T_{\bar{f}}$ is the time when the maximal volume orbit is reached, \cite[Lemmas 4.4 and 4.5]{BohmInhomEinstein}. More recently it was used by Foscolo-Haskins in the construction of nearly K\"ahler metrics, \cite[Lemma 7.2]{FHNearlyKaehler}. \end{remarkroman}
\begin{theorem}[B\"ohm] Let $d_1 >1,$ $A_3=0$ and $-\frac{\varepsilon}{2} = n.$ If the dimension of the principal orbit satisfies $2 \leq n \leq 8,$ there exist infinitely many symmetric solutions to the two summands Einstein equations. \label{SymmetricEinsteinMetrics} \end{theorem} \begin{proof} Recall that symmetric solutions are induces by critical points of $\omega$ at the maximal volume orbit. With the normalisation $A_2 = d_2 (d_2-1)>0,$ i.e. in geometric applications $\Ric^Q = d_2-1 > 0,$ the metric of the round sphere $(f_1,f_2)(t) = (\sin(t), \cos(t))$ induces a solution to the two summands Einstein equations without any critical point of $\omega$ before the maximal volume orbit.
According to lemma \ref{GeneralCountingArgument}, it suffices to show that there are trajectories with an arbitrarily high number of critical points of $\omega$ before the maximal volume orbit. Recall that the maximal volume orbit of a trajectory is achieved exactly when $\tr(L)=0.$ In the $(X_i,Y_i, \mathcal{L})$-coordinates \eqref{RescaledTwoSummandsVariables} this corresponds to the blow up time of $\mathcal{L}.$ In particular, critical points of $\omega$ which are detected by the rescaled system occur before the maximal volume orbit. Recall that $\omega^{'} = \omega \left( X_1 - X_2 \right)$ and that every critical point in the rescaled variables also corresponds to a critical point of $\omega$ in the original time frame $t.$ Since the Einstein trajectories lie in the subvariety $d_1 X_1 + d_2 X_2 = 1,$ critical points occur if and only if $X_1= \frac{1}{n}.$
Recall that by proposition \ref{VariablesBoundedRicciFlatTwoSummandsCase} the trajectory of the {\em Ricci flat} system satisfies $X_i \to \frac{1}{n}$ and $Y_i \to \frac{1}{c_i n}$ where $(c_1,c_2)$ denotes the first cone solution. Moreover, observe that the Ricci flat system is realised by solutions to the two summands system for any value of $\varepsilon \in \mathbb{R}$ by the trajectory with $\mathcal{L} \equiv 0,$ as $\varepsilon$ and $\mathcal{L}$ only occur in the combination $\frac{\varepsilon}{2}\mathcal{L}^2.$ However, as explained in section \ref{SectionConvergenceToConeSolutions}, the limit $\mathcal{L} \equiv 0$ exactly corresponds to a smoothing of the trajectory $c_{\bar{f}}$ in the limit $\bar{f}=0.$ Due to the continuous dependence of the solution on the initial condition, for any $\varepsilon \in \mathbb{R}$ and $\bar{f} >0$ small enough, the solution to the two summands system approaches the base point of the first cone solution along a trajectory which is $C^{0}$-close to the Ricci flat trajectory $\gamma_{\text{RF}}$ of proposition \ref{VariablesBoundedRicciFlatTwoSummandsCase} with $\mathcal{L} \equiv 0,$ and then remains close to the actual cone solution in the sense of proposition \ref{ConvergenceToConeSolution}.
The dimension assumption and corollary \ref{StableSpiral} imply that the projection of the Ricci flat trajectory $\gamma_{\text{RF}}$ onto the $(X_1, Y_1)$-plane rotates infinitely often around the stationary point $(\frac{1}{n}, \frac{1}{c_1 n}),$ which is the base point of the first cone solution. Hence, the variable $X_1$ takes the value $X_1 = \frac{1}{n}$ arbitrarily often, which implies that $\mathcal{C}(\bar{f}, T_{\bar{f}}) \to \infty$ as $\bar{f} \to 0,$ where $T_{\bar{f}}$ denotes the time of the maximal volume orbit.
A direct computation of curvatures shows that the metrics are inhomogeneous and non-isometric, cf. \cite[section 6]{BohmInhomEinstein}. \end{proof}
As an explicit application, theorem \ref{SymmetricEinsteinMetrics} recovers B\"ohm's Einstein metrics on certain low dimensional spaces \cite[Theorem 3.4]{BohmInhomEinstein}.
\begin{corollary}[B\"ohm] Let $d_S >1$ and suppose that $Q$ is a compact, connected, isotropy irreducible homogeneous space of positive Ricci curvature and of dimension $d_Q.$ If $2 \le d_S, d_Q$ and $d_S + d_Q \leq 8,$ then there exist infinitely many non-isometric cohomogeneity one Einstein metrics of positive scalar curvature on $S^{d_S+1} \times Q.$ \end{corollary}
\begin{remarkroman} By considering the linearisation of the Einstein equations along the cone solutions, B\"ohm was also able to construct a symmetric cohomogeneity one Einstein metric on $\mathbb{H}P^{2} \# \overline{\mathbb{H}P}^{2}.$ \end{remarkroman}
\subsubsection{B\"ohm's Einstein metrics on low dimensional spheres} \label{EinsteinMetricsOnSpheres}
Cohomogeneity one Einstein manifolds with singular orbits of (possibly different) dimensions $d_1,$ $d_2$ can be constructed via solutions $c_{\bar{f}} = (f_1,\dot{f}_1,f_2,\dot{f}_2)$ with $c_{\bar{f}}(0)=(0,1,\bar{f},0)$, $c_{\bar{f}}(\tau)=(\bar{f}^{'},0,0,-1)$ and $\bar{f}, \bar{f}^{'}, \tau>0$. In the case of $SO(d_1+1) \times SO(d_2+1)$-invariant doubly warped product metrics on spheres, the convergence theory from section \ref{SectionConvergenceToConeSolutions} can be applied to give B\"ohm's \cite{BohmInhomEinstein} inhomogeneous Einstein metrics on $S^5, \ldots, S^9.$
For the rest of the section, fix $A_3 = 0$ and normalise the Ricci curvature of the singular orbit to be $\Ric^Q = d_2-1,$ so that $A_i=d_i (d_i-1)$ holds for $i=1,2.$ As before, the trajectory $c_{\bar{f}}$ will always correspond to an Einstein metric on a tubular neighbourhood of a singular orbit of dimension $d_2$ and with a principal orbit of dimension $n=d_1+d_2.$
As in section \ref{SectionConvergenceToConeSolutions}, in the $(X_i, Y_i, \mathcal{L})$-coordinates the limit trajectory with $\bar{f} = 0$ corresponds to the unique solution of the Ricci flat system in the unstable manifold of \eqref{InitialCriticalPoint} with $\mathcal{L} \equiv 0$. Recall from proposition \ref{VariablesBoundedRicciFlatTwoSummandsCase} that this trajectory approaches the base point $(\frac{1}{n}, \frac{1}{n c_1})$ of the cone solution asymptotically. Under the dimensional assumptions $d_1 >1$ and $2 \leq n \leq 8,$ the base point $(\frac{1}{n}, \frac{1}{n c_1})$ is a stable spiral due to corollary \ref{StableSpiral}. As in the proof of theorem \ref{SymmetricEinsteinMetrics}, it follows from the continuous dependence on the initial value, that also $c_{\bar{f}},$ for $\bar{f}>0$ small enough, exhibits a rotational behaviour as it approaches the base point at $t=0$ of the first cone solution $\gamma.$ Proposition \ref{ConvergenceToConeSolution} then says that given any compact set $K \subset \subset (0, \pi),$ the trajectory $c_{\bar{f}}$ remains $C^{0}$-close to $\gamma$ on $K$ if $\bar{f}>0$ is small enough. In fact, $c_{\bar{f}}$ obeys a rotational behaviour in every slice around the cone solution as $\bar{f} \to 0.$ To make this precise, notice that the variables $(\widehat{X}_1,\widehat{Y}_1,\widehat{\mathcal{L}})$ of \eqref{HatVariables} form a local coordinate system along the Einstein trajectories away from the singular orbits. For example, the first cone solution has coordinates $(\cot(t), \frac{1}{c_1 \sin(t)}, n \cot(t)).$ One should think of $\widehat{\mathcal{L}}$ as the time variable. For any fixed value $\widehat{\mathcal{L}}$ and for $\bar{f} >0$ sufficiently small, the trajectory $c_{\bar{f}}$ intersects the $(\widehat{X}_1,\widehat{Y}_1,\widehat{\mathcal{L}})$-plane $P_{\widehat{\mathcal{L}}}$ in a unique point. As $\bar{f}>0$ varies, the intersection points describe a continuous curve in this plane.
\begin{proposition} Let $A_3 = 0$ and $2 \leq n \leq 8.$ Then in any coordinate slice $P_{\widehat{\mathcal{L}}},$ the intersection points of $c_{\bar{f}}$ with a disc around the first cone solution $\gamma$ in $P_{\widehat{\mathcal{L}}}$ exhibit the same rotational behaviour as $\bar{f} \to 0.$ \end{proposition} This follows from the general counting principle \ref{GeneralCountingArgument} applied to the time $T_{\bar{f}}$ when $c_{\bar{f}}$ intersects the disc, the observation that $\mathcal{C}(c_{\bar{f}},T_{\bar{f}}) \to \infty$ as $\bar{f} \to 0,$ and lemma \ref{OccurrenceOfCriticalPoints} below.
\begin{lemma} Along any trajectory $c_{\bar{f}},$ critical points of $\omega$ occur if and only if $\widehat{X}_1 = \frac{\widehat{\mathcal{L}}}{n}$ and $\omega$ is increasing if $\widehat{X}_1 > \frac{\widehat{\mathcal{L}}}{n}$ and decreasing if $\widehat{X}_1 < \frac{\widehat{\mathcal{L}}}{n}.$
Moreover, if $A_3 = 0,$ then the $\widehat{Y}_1$-coordinate of $\omega$ in $P_{\widehat{\mathcal{L}}}$ satisfies $\widehat{Y}_1(\omega) > \widehat{Y}_1(\gamma)$ at any maximum and $\widehat{Y}_1(\omega) < \widehat{Y}_1(\gamma)$ at any minimum, where $\gamma$ is the first cone solution.
\label{OccurrenceOfCriticalPoints} \end{lemma} \begin{proof} The first statement follows from $\dot{\omega} = \omega ( \widehat{X}_1 - \widehat{X}_2 )$ and $\sum_{i=1}^2 d_i \widehat{X}_i = \widehat{\mathcal{L}}.$ If $A_3 = 0,$ the identity $\ddot{\omega} = \omega \left( \frac{A_1}{d_1} \widehat{Y}_1^2 - \frac{A_2}{d_2} \widehat{Y}_2^2 \right)$ holds at every critical point of $\omega.$ However, as $\dot{\omega} = \ddot{\omega} = 0$ only occurs on the cone solution, the claim follows. \end{proof}
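For completeness, the identity used in the proof follows by differentiating $\dot{\omega} = \omega ( \widehat{X}_1 - \widehat{X}_2 )$ once more. By the equations for $\widehat{X}_1, \widehat{X}_2$ in the proof of lemma \ref{ExistenceMaxVolumeOrbit} with $A_3 = 0,$ \begin{align*} \ddot{\omega} = \dot{\omega} ( \widehat{X}_1 - \widehat{X}_2 ) + \omega \left( - ( \widehat{X}_1 - \widehat{X}_2 ) \widehat{\mathcal{L}} + \frac{A_1}{d_1} \widehat{Y}_1^2 - \frac{A_2}{d_2} \widehat{Y}_2^2 \right), \end{align*} and at a critical point of $\omega$ one has $\widehat{X}_1 = \widehat{X}_2,$ so that only the last two terms survive.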
Suppose that $d_{\overline{F}}=(F_1,\dot{F}_1, F_2, \dot{F}_2)$ is also an Einstein trajectory which satisfies $d_{\overline{F}}(0)=(0,1,\overline{F},0)$ but instead induces a metric on a tubular neighbourhood of a singular orbit of dimension $d_1$ and with a principal orbit of dimension $n= d_1+d_2.$ Then the above considerations also apply to $d_{\overline{F}}.$ Clearly, the trajectories depend continuously on the parameters $d_1, d_2>1.$ Hence, in the dimension range $2 \leq n \leq 8,$ the trajectories $c_{\bar{f}}$ and $d_{\overline{F}}$ have the {\em same} rotational behaviour as they approach their respective base point of the cone solution at $t=0.$
Now consider the twisted trajectory $d_{\overline{F}}^{\text{twisted}}(t)=(F_2, - \dot{F}_2, F_1, - \dot{F}_1)(t).$ If $\tau >0$ is small enough, then $d_{\overline{F}}^{\text{twisted}}(\tau - t)$ is an actual solution to the Einstein equations due to the symmetries of the equations in $d_1, d_2$ as $A_3=0.$ That is, $d_{\overline{F}}^{\text{twisted}}(t)$ runs through the Einstein equations in `opposite direction', starting at $d_{\overline{F}}^{\text{twisted}}(0) = (\overline{F},0,0,-1)$ and then approaching the same cone solution as $c_{\bar{f}}$ but at the base point corresponding to $t = \pi.$ In particular, it has the {\em opposite} rotational behaviour to $c_{\bar{f}}.$ Due to proposition \ref{ConvergenceToConeSolution}, for $\bar{f}, \overline{F} >0$ small enough, both $c_{\bar{f}}$ and $d_{\overline{F}}^{\text{twisted}}$ intersect the plane $\{(\widehat{X}_1,\widehat{Y}_1, n)\},$ which is the slice of the maximal volume orbit of the cone solution, in a unique point. Since both trajectories in fact wind around the cone solution arbitrarily often as $\bar{f}, \overline{F} \to 0,$ respectively, and $c_{\bar{f}},$ $d_{\overline{F}}^{\text{twisted}}$ have the opposite rotational behaviour, there are infinitely many intersection points in this (or any other) slice. For any such, there exist $t_0, t_1 > 0$ such that the matching condition $c_{\bar{f}}(t_0) = d_{\overline{F}}^{\text{twisted}}(t_1)$ holds. Then \begin{align*} \tilde{c}_{\bar{f},\overline{F}}(t) = \begin{cases} c_{\bar{f}}(t) & \ \text{ for } \ 0 \leq t \leq t_0 \\
d_{\overline{F}}^{\text{twisted}}(t_0+t_1-t) & \ \text{ for } \ t_0 \leq t \leq t_0+t_1 \end{cases} \end{align*} satisfies $\tilde{c}_{\bar{f},\overline{F}}(0)=(0,1,\bar{f},0)$ and $\tilde{c}_{\bar{f},\overline{F}}(t_0+t_1)=(\overline{F},0,0,-1)$ and it is a {\em smooth} solution to the Einstein equation as required. Smoothness indeed follows from the uniqueness of solutions to ODEs with fixed initial conditions, since $\tilde{c}_{\bar{f},\overline{F}}$ clearly solves the Einstein equations on both intervals. In particular, any such pair $(\bar{f}, \overline{F})$ induces an Einstein metric $g{(\bar{f}, \overline{F})}$ on $S^{n+1}.$
\begin{remarkroman} A direct curvature computation shows that the metrics are indeed inhomogeneous. Moreover, if the metrics $g{(\bar{f}, \overline{F})}$ and $g{(\bar{f}^{'}, \overline{F}{'})}$ on $S^{n+1}$ are isometric, it follows that $\bar{f}=\bar{f}^{'}$ if $d_1 \neq d_2$ and $\bar{f}=\bar{f}^{'}$ or $\bar{f}=\overline{F}{'}$ if $d_1 = d_2,$ since isometries must map orbits onto orbits, see \cite[Section 7]{BohmInhomEinstein}. \end{remarkroman}
This recovers B\"ohm's Einstein metrics on low dimensional spheres \cite[Theorem 3.6]{BohmInhomEinstein}:
\begin{corollary}[B\"ohm] On $S^5$ and $S^6$ there exists one, on $S^7$ and $S^8$ there exist two, and on $S^9$ there exist three infinite families of non-isometric, strictly cohomogeneity one Einstein metrics of positive scalar curvature. \end{corollary}
\section{Quasi-Einstein Metrics} \label{SectionQuasiEinsteinMetrics}
\subsection{Introduction} \label{QEMIntroSection}
In the study of smooth metric measure spaces the $m$-Bakry-\'Emery Ricci tensor $\Ric + \Hess u - \frac{1}{m} du \otimes du$ plays a central role, cf. \cite{CaseSMMSAndQEM}. It also naturally appears in the context of warped product Einstein manifolds, where it has led to the notion of $m$-{\em quasi-Einstein metrics} or $(\lambda, n+m)$-Einstein metrics in the terminology of He-Petersen-Wylie, cf. \cite{HPWUniquenessWarpedProductEinstein}:
\begin{definition} Let $(M,g)$ be an $n$-dimensional Riemannian manifold, $u \in C^{\infty}(M)$ and $m \in (0,\infty].$ Then $(M,g,e^{-u} d \operatorname{Vol}_M)$ is called {\em $m$-quasi-Einstein manifold} if \begin{equation} \Ric + \Hess u - \frac{1}{m} du \otimes du + \frac{\varepsilon}{2} g = 0. \label{QEMequation} \end{equation} The sum $m+n$ is called {\em effective dimension} and $-\frac{\varepsilon}{2}$ is the {\em quasi-Einstein constant.} \end{definition}
Kim-Kim \cite{KimKim} observed that any connected $m$-quasi-Einstein manifold with $m < \infty$ satisfies the following conservation law: There exists a constant $\mu \in \mathbb{R},$ called {\em characteristic constant,} such that \begin{equation}
\Delta u - | \nabla u |^2 + m \mu e^{2u / m} + m \frac{\varepsilon}{2} = 0. \label{QEMConsLaw} \end{equation}
In this case, Kim-Kim \cite{KimKim} proved that if $m>1$ is an integer and $(N^m,h)$ is Einstein with $\Ric_h = \mu h$, then the warped product \begin{equation} (M \times N, g + e^{-2u /m} h) \label{EQWarpedProductKimKim} \end{equation} is Einstein. Conversely, if $(M \times N, g + e^{-2u /m} h)$ is Einstein, then $(M,g,e^{-u} d \operatorname{Vol}_M)$ must be $m$-quasi-Einstein.
This point of view on Einstein warped products was successfully used by Case-Shu-Wei \cite{CSWRigidityQEM} to show that any compact {\em K\"ahler} $m$-quasi-Einstein metric with $m < \infty$ is Einstein. In contrast, recall that all {\em known} non-trivial compact Ricci solitons are K\"ahler.
Hall \cite{HallQEinstein} constructed $m$-quasi-Einstein metrics on total spaces of complex vector bundles associated to principal circle bundles over products of Fano K\"ahler-Einstein manifolds. Due to the induced hypersurface foliation their geometry can in fact be described using the cohomogeneity one equations from section \ref{CohomOneQEMsection}. The case of a single base factor is due to L\"u-Page-Pope \cite{LuPagePopeQEinstein}. Remarkably, the L\"u-Page-Pope metrics are conformally K\"ahler and the associated K\"ahler class is a multiple of the first Chern class as shown by Batat-Hall-Jizany-Murphy \cite{BHJMConfKahlerQEM}.
\subsection{The initial value problem for cohomogeneity one quasi-Einstein metrics} \label{CohomOneQEMsection}
The formulae for the Ricci curvature of a cohomogeneity one manifold in section \ref{SectionCohomOneSetUp} yield that the $m$-quasi-Einstein equation takes the form \begin{align} -( \delta^{\nabla^t}L_t)^{\flat} - d(\tr(L_t)) & = 0, \label{QEMequationA} \\ - \tr( \dot{L}_t) - \tr(L_t^2) + \ddot{u} - \frac{1}{m} \dot{u}^2 + \frac{\varepsilon}{2} & =0, \label{QEMequationB} \\ - \dot{L}_t - (- \dot{u} + \tr(L_t)) L_t + r_t + \frac{\varepsilon}{2} \mathbb{I} & = 0, \label{QEMequationC} \end{align} and the conservation law \eqref{QEMConsLaw} is given by \begin{equation} \ddot{u} + (- \dot{u} + \tr(L)) \dot{u} + m \mu e^{2u/m} + m \frac{\varepsilon}{2} = 0. \label{CohomOneQEMConsLaw} \end{equation}
\begin{remarkroman} Notice that for $f=e^{-u/m}$ the conservation law is equivalent to \begin{align*} \frac{d}{dt} \frac{\dot{f}}{f} = - \left( m \frac{\dot{f}}{f} + \tr(L) \right) \frac{\dot{f}}{f} + \frac{\mu}{f^2} + \frac{\varepsilon}{2}, \end{align*} and hence it is the Einstein equation for the added factor in Kim-Kim's \cite{KimKim} warped product construction \eqref{EQWarpedProductKimKim}. \label{ConsLawIsEinsteinEQ} \end{remarkroman}
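To verify the equivalence claimed in remark \ref{ConsLawIsEinsteinEQ}, note that $f = e^{-u/m}$ satisfies $\frac{\dot{f}}{f} = - \frac{\dot{u}}{m},$ $\frac{d}{dt} \frac{\dot{f}}{f} = - \frac{\ddot{u}}{m}$ and $\frac{\mu}{f^2} = \mu e^{2u/m},$ so that the displayed equation becomes \begin{align*} - \frac{\ddot{u}}{m} = \left( - \dot{u} + \tr(L) \right) \frac{\dot{u}}{m} + \mu e^{2u/m} + \frac{\varepsilon}{2}, \end{align*} which is \eqref{CohomOneQEMConsLaw} after multiplication by $-m.$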
The following proposition generalises an observation due to Back \cite{BackLocalTheoryofEquiv} in the Einstein case, see also \cite{EWInitialValueEinstein} and \cite{DWCohomOneSolitons}.
\begin{proposition} Let $M$ be a connected manifold and $g$ a $C^2$-Riemannian metric on $M.$ Suppose that $G$ is a compact Lie group which acts isometrically and with cohomogeneity one on $(M,g)$ and that the action has a singular orbit. Let $u \in C^3(M)$ be $G$-invariant.
Then \eqref{QEMequationC} implies \eqref{QEMequationA} and if the conservation law \eqref{CohomOneQEMConsLaw} is satisfied, then \eqref{QEMequationB} holds as well. \label{ReducedQEMsystem} \end{proposition} \begin{proof} The fact that \eqref{QEMequationC} implies \eqref{QEMequationA} follows as in the Ricci soliton case, cf. \cite[Proposition 3.19]{DWCohomOneSolitons}, as the equations are identical.
Let $v_t$ be the relative volume of the principal orbit $(P,g_t).$ Then it follows that $\frac{d}{dt} v = \tr(L) v$ and due to \cite[Formula (3.16)]{DWCohomOneSolitons} there holds \begin{equation*}
\frac{d}{dt} \left( v^2 \left( \Ric(N,N) + \frac{\varepsilon}{2} \right) \right) + v^2 \left( 2 \dot{u} \tr(L^2)+ \frac{d}{dt} \left( \dot{u}\tr(L) \right) \right) = 0. \end{equation*} By combining this with $\Ric(N,N)= - \tr(\dot{L}) - \tr(L^2)$ and the conservation law \eqref{CohomOneQEMConsLaw}, one obtains \begin{equation*} \frac{d}{dt} \left( v^2 \left( \Ric(N,N) + \ddot{u} - \frac{1}{m} \dot{u}^2 + \frac{\varepsilon}{2} \right) \right)
= 2 \dot{u} v^2 \left( \Ric(N,N) + \ddot{u} - \frac{1}{m} \dot{u}^2 + \frac{\varepsilon}{2} \right). \end{equation*} Therefore $v^2 \left( \Ric(N,N) + \ddot{u} - \frac{1}{m} \dot{u}^2 + \frac{\varepsilon}{2} \right)$ is a multiple of $e^{2 u}$ which vanishes at the singular orbit, and thus vanishes identically. \end{proof}
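In the last step, write (say) $F = v^2 \left( \Ric(N,N) + \ddot{u} - \frac{1}{m} \dot{u}^2 + \frac{\varepsilon}{2} \right).$ Then the displayed identity reads $\frac{d}{dt} F = 2 \dot{u} F$ and integrates to \begin{align*} F(t) = F(t_0) \, e^{2 ( u(t) - u(t_0) )} \end{align*} for all $t, t_0 > 0.$ Letting $t_0$ tend to the singular orbit, where the relative volume $v$ tends to zero while $u$ and the bracket remain bounded, yields $F \equiv 0.$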
\begin{proposition} Let $M$ be a smooth manifold of dimension $\dim M \geq 3.$ Suppose that a solution of the $m$-quasi-Einstein equation on $M$ is given by a $C^2$-Riemannian metric $g$ and $u \in C^3(M).$ Then $g$ and $u$ are real analytic in harmonic and geodesic normal coordinates. \label{QEMregularity} \end{proposition} \begin{proof} The $m$-quasi-Einstein equation and the contracted second Bianchi identity give rise to the PDE \begin{align*} \Ric + \Hess u - \frac{1}{m} du \otimes du + \frac{\varepsilon}{2} g & = 0, \\ \Delta(du) + \Ric( \cdot, \grad u) - \frac{2}{m} \left( \Delta u \right) du & = 0 \end{align*} for $(g,u).$ Notice that the $\frac{1}{m}$-terms are of lower order and thus the principal symbol is the same as in the Ricci soliton case. Hence, $(g,u)$ is a solution of a quasi-linear elliptic system and the regularity analysis in\cite[Lemma 3.2]{DWCohomOneSolitons} carries over without any changes, see also \cite[Theorem 5.2]{dTKRegularity}. \end{proof}
The initial value problem for $m$-quasi-Einstein metrics at a singular orbit can be solved analogously to Buzano's \cite{BuzanoInitialValueSolitons} approach in the Ricci soliton case: Due to proposition \ref{ReducedQEMsystem} it suffices to consider \eqref{QEMequationC}, \eqref{CohomOneQEMConsLaw} and the relation $\dot{g}_t = 2 g_t L_t.$ Setting up an ODE system for $(g_t, L_t, u)$ as in \cite{BuzanoInitialValueSolitons}, one observes that the $-\frac{1}{m} \dot{u}^2$-term is simply absorbed into the error terms that occur in Buzano's proof because it is of lower order. In particular, the construction of a formal power series solution is unchanged. Due to the real analyticity of $m$-quasi-Einstein metrics as in proposition \ref{QEMregularity}, a theorem of Malgrange \cite[Theor{\`e}me 7.1]{MalgrangeEquationsDifferentielle} then yields a genuine solution. Alternatively, a Picard iteration may be applied as in \cite{EWInitialValueEinstein}.
\begin{theorem} Let $G$ be a compact Lie group acting isometrically on a connected Riemannian manifold $(M,g)$ and suppose there exists a singular orbit $Q = G / H.$ Choose $q \in M$ such that $Q = G \cdot q$ and denote by $V = T_qM / T_qQ$ the normal space of $Q$ at q. Then $H$ acts linearly and orthogonally on $V$ and a tubular neighbourhood of $Q$ may be identified with its normal bundle $E = G \times_H V.$ The principal orbits are $P = G / K = G \cdot v$ for any $v \in V \setminus \left\lbrace 0\right\rbrace.$ These can be identified with the sphere bundle of $E$ (with respect to an $H$-invariant scalar product on $V$). Let $\mathfrak{g} = \mathfrak{h} \oplus \mathfrak{p}_{-}$ be a decomposition of the Lie algebra of $G$ where $\mathfrak{p}_{-}$ is an $Ad_H$-invariant complement of $\mathfrak{h}= \operatorname{Lie}(H).$
Assume that $V$ and $\mathfrak{p}_{-}$ have no common irreducible factors as $K$-representations.
Then for any $\varepsilon \in \mathbb{R},$ any $m \in (0, \infty],$ any $G$-invariant metric $g_Q$ on $Q$ and any shape operator $L \colon E \to \operatorname{Sym}^2(T^{*}Q)$ there exists a $G$-invariant $m$-quasi-Einstein metric on some open disc bundle of $E$. \label{QEMInitialValueTheorem} \end{theorem}
\begin{remarkroman} The assumption that $V$ and $\mathfrak{p}_{-}$ have no common irreducible factors as $K$-representations is primarily a technical simplification but as Eschenburg-Wang point out in \cite[Remark 2.7]{EWInitialValueEinstein} it is also natural in the context of the Kaluza-Klein construction. \end{remarkroman}
\subsection{New quasi-Einstein metrics} The analysis of the two summands case in section \ref{CompletenessTwoSummands} can be adapted to the $m$-quasi-Einstein case for $m < \infty.$ Recall that the metric restricted to the principal orbit is given by $g_t = f_1(t)^2 g_S + f_2(t)^2 g_Q,$ and set $f_3(t) = e^{-u(t)/m}.$ Due to proposition \ref{ReducedQEMsystem}, it suffices to consider \eqref{QEMequationC} and \eqref{CohomOneQEMConsLaw}, and thus the two summands $m$-quasi-Einstein equations take the form \begin{align*} \frac{d}{dt} \left( \frac{\dot{f}_1}{f_1} \right) & = - \tr ( \widehat{L} ) \frac{\dot{f}_1}{f_1} + \frac{\varepsilon}{2} + \frac{A_1}{d_1} \frac{1}{f_1^2} + \frac{A_3}{d_1} \frac{f_1^2}{f_2^4}, \\ \frac{d}{dt} \left( \frac{\dot{f}_2}{f_2} \right) & = - \tr ( \widehat{L} ) \frac{\dot{f}_2}{f_2} + \frac{\varepsilon}{2} + \frac{A_2}{d_2} \frac{1}{f_2^2} - 2 \frac{A_3}{d_2} \frac{f_1^2}{f_2^4}, \\ \frac{d}{dt} \left( \frac{\dot{f}_3}{f_3} \right) & = - \tr ( \widehat{L} ) \frac{\dot{f}_3}{f_3} + \frac{\varepsilon}{2} + \frac{\mu}{f_3^2}, \end{align*} where $\widehat{L} = \operatorname{diag} \left( \frac{\dot{f}_1}{f_1} \mathbb{I}_{d_1}, \frac{\dot{f}_2}{f_2} \mathbb{I}_{d_2}, \frac{\dot{f}_3}{f_3} \mathbb{I}_{m} \right)$ corresponds to the shape operator in Kim-Kim's \cite{KimKim} warped product construction. Notice that $\widehat{L}$ is only well-defined if $m \in \mathbb{N},$ but its trace always is.
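In particular, the equation for $f_3$ is precisely the conservation law in the form of remark \ref{ConsLawIsEinsteinEQ}: since $f_3 = e^{-u/m},$ one has \begin{align*} \tr ( \widehat{L} ) = \tr(L) + m \frac{\dot{f}_3}{f_3} = - \dot{u} + \tr(L) \ \text{ and } \ \frac{\mu}{f_3^2} = \mu e^{2u/m}. \end{align*}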
Due to the regularity theorem \ref{QEMInitialValueTheorem} the metric can be smoothly extended over the singular orbit if the initial conditions \begin{align*} f_1(0)=0, \ \dot{f}_1(0)=1 \ \text{ and } \ f_2(0)= \bar{f} > 0, \ \dot{f}_2(0)=0 \end{align*} are imposed. Clearly one may fix $u(0)=0$ and then \begin{align*} f_3(0) = 1 \ \text{ and } \ \dot{f}_3(0) = 0 \ \text{ and } \ (d_1+1) \ddot{f}_3(0) = \frac{\varepsilon}{2} + \mu \end{align*} are the corresponding smoothness conditions for $f_3$.
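The value of $\ddot{f}_3(0)$ is forced by the equation for $f_3$: expanding $\frac{\dot{f}_3}{f_3} = \ddot{f}_3(0) \, t + O(t^3)$ and $\tr ( \widehat{L} ) = \frac{d_1}{t} + O(t)$ near $t=0$ and comparing the constant terms gives \begin{align*} \ddot{f}_3(0) = - d_1 \ddot{f}_3(0) + \frac{\varepsilon}{2} + \mu, \end{align*} in complete analogy with the formula for $\ddot{f}_2(0)$ in section \ref{SectionConvergenceToConeSolutions}.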
Fix $\varepsilon \geq 0$ and $\mu > 0$. It follows that $\dot{f}_i(t) > 0$ for $i=1, 2, 3$ and sufficiently small $t>0.$ In analogy to \eqref{RescaledTwoSummandsVariables}, set \begin{align*} \mathcal{L} = \frac{1}{\tr( \widehat{L} )}, \ \frac{d}{ds} = \mathcal{L} \cdot \frac{d}{dt} \ \text{and} \ X_i = \mathcal{L} \cdot \frac{\dot{f}_i}{f_i}, \ Y_i = \mathcal{L} \cdot \frac{1}{f_i} \ \text{for} \ i=1,2,3. \end{align*}
In particular, $\mathcal{L}, X_i, Y_i$ are positive initially. Set $d_3 =m.$ Then $\sum_{i=1}^3 d_i X_i =1$ and the rescaled two summands $m$-quasi-Einstein equations take the form \begin{align*} X_1^{'} & = X_1 \left( \sum_{i=1}^3 d_i X_i^2 - \frac{\varepsilon}{2} \mathcal{L}^2 -1 \right) + \frac{A_1}{d_1} Y_1^2+\frac{\varepsilon}{2} \mathcal{L}^2 + \frac{A_3}{d_1} \frac{Y_2^4}{Y_1^2}, \\ X_2^{'} & = X_2 \left( \sum_{i=1}^3 d_i X_i^2 - \frac{\varepsilon}{2} \mathcal{L}^2 -1 \right) + \frac{A_2}{d_2} Y_2^2+\frac{\varepsilon}{2} \mathcal{L}^2 - \frac{2 A_3}{d_2} \frac{Y_2^4}{Y_1^2}, \\ X_3^{'} & = X_3 \left( \sum_{i=1}^3 d_i X_i^2 - \frac{\varepsilon}{2} \mathcal{L}^2 -1 \right) + \mu Y_3^2+\frac{\varepsilon}{2} \mathcal{L}^2, \end{align*} \begin{align*} Y_j^{'} & =Y_j \left( \sum_{i=1}^3 d_i X_i^2 - \frac{\varepsilon}{2} \mathcal{L}^2 - X_j \right) \ \text {for} \ j=1,2,3, \\ \mathcal{L}^{'} & =\mathcal{L} \left( \sum_{i=1}^3 d_i X_i^2 - \frac{\varepsilon}{2} \mathcal{L}^2 \right). \end{align*} It follows that $\mathcal{L},$ $X_i$, $Y_i >0$ holds along the flow, except possibly for $X_2.$ In the situations of proposition \ref{X2VariablePositive} and proposition \ref{X2PositiveCircleBundles}, the respective proofs carry over to show that both $X_2 > 0$ and $\frac{Y_2}{Y_1} < \hat{\omega}_1$ are preserved. Since $\hat{\omega}_1^2 < \frac{A_2}{2A_3},$ the conservation law \begin{align*} \sum_{i=1}^3 d_i X_i^2 + A_1 Y_1^2 + A_2 Y_2^2 + m \mu Y_3^2 - A_3 \frac{Y_2^4}{Y_1^2} + (n-1) \frac{\varepsilon}{2} \mathcal{L}^2 = 1 \end{align*} implies that $X_i$, $Y_i$ are bounded for $i=1,2,3.$ Thus $\mathcal{L}$ cannot blow up in finite time either. Completeness of the metric now follows as in proposition \ref{CompletenessEpsZeroTwoSummands} and corollary \ref{CompletenessEpsPosTwoSummands}.
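The qualitative picture can also be checked numerically. The following sketch (our own addition, not part of the argument) integrates the rescaled two summands $m$-quasi-Einstein system with SciPy, starting from a small perturbation of the critical point $X_1 = Y_1 = \frac{1}{d_1},$ all other variables zero, which encodes the smoothness conditions at the singular orbit. The dimensions $d_1, d_2,$ the constants $A_1, A_2, A_3, \mu,$ the choice $\varepsilon = 0$ and the size of the perturbation are sample values of our own choosing, picked so that the hypotheses of theorem \ref{TwoSummandsQEM} below are satisfied; since the perturbation is only a crude stand-in for the unstable manifold, the printed drift of $\sum_i d_i X_i - 1$ and of the conservation law merely indicates how close the numerical trajectory stays to a genuine metric trajectory.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

d1, d2, m = 2, 3, 2                      # d3 = m
A1, A2, A3 = d1 * (d1 - 1), 8.0, 1.0     # (d1+1) A2^2 > 4 d1 d2 (2 d1 + d2) A3 > 0
eps, mu = 0.0, 1.0                       # sample values only
d = np.array([d1, d2, m], dtype=float)

def rhs(s, v):
    X, Y, L = v[0:3], v[3:6], v[6]
    S = np.dot(d, X**2) - 0.5 * eps * L**2
    ratio = Y[1]**4 / Y[0]**2
    source = np.array([A1 / d1 * Y[0]**2 + A3 / d1 * ratio,
                       A2 / d2 * Y[1]**2 - 2.0 * A3 / d2 * ratio,
                       mu * Y[2]**2])
    dX = X * (S - 1.0) + 0.5 * eps * L**2 + source
    dY = Y * (S - X)
    dL = L * S
    return np.concatenate([dX, dY, [dL]])

# start near the critical point X_1 = Y_1 = 1/d_1 (all other variables zero),
# pushed slightly into the positive cone
delta = 1e-3
v0 = np.array([1 / d1, delta, delta, 1 / d1, delta, delta, delta])
sol = solve_ivp(rhs, (0.0, 10.0), v0, rtol=1e-10, atol=1e-12)

X, Y, L = sol.y[0:3], sol.y[3:6], sol.y[6]
lin = d @ X - 1.0                        # vanishes on genuine metric trajectories
cons = (d @ X**2 + A1 * Y[0]**2 + A2 * Y[1]**2 + m * mu * Y[2]**2
        - A3 * Y[1]**4 / Y[0]**2 - 1.0)  # conservation law (eps = 0 term dropped)
print("min X_2 along the flow :", X[1].min())
print("max |sum d_i X_i - 1|  :", np.abs(lin).max())
print("max |conservation law| :", np.abs(cons).max())
\end{verbatim}
The printed minimum of $X_2$ should stay positive, in accordance with the discussion above.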
If $\varepsilon > 0$ and $\mu = 0,$ then $\mathcal{L},$ $X_i,$ $Y_i$ are bounded due to the conservation law, except possibly $Y_3$. However, the ODE for $Y_3$ implies that $Y_3$ cannot blow up in finite time and a similar argument applies. This shows:
\begin{theorem} Let $d_1 \geq 1,$ $A_1=d_1(d_1-1)$ and $(d_1+1)A_2^2 > 4 d_1 d_2 (2d_1+d_2) A_3>0$ and fix $m>0.$
Then the associated two summands ODE gives rise to a $1$-parameter family of complete, non-trivial, non-homothetic $m$-Bakry-\'Emery Ricci flat metrics and a $2$-parameter family of non-trivial, complete, non-homothetic $m$-quasi-Einstein metrics with quasi-Einstein constant $- \frac{\varepsilon}{2} < 0,$ all of which have positive characteristic constant.
Furthermore, there exists a $1$-parameter family of complete, non-trivial non-homothetic $m$-quasi-Einstein metrics with quasi-Einstein constant $- \frac{\varepsilon}{2} < 0$ and vanishing characteristic constant. \label{TwoSummandsQEM} \end{theorem}
\begin{remarkroman} Case \cite{CaseNonExistenceQEM} has shown that any complete, non-trivial $m$-Bakry-\'Emery Ricci flat quasi-Einstein manifold has positive characteristic constant. \end{remarkroman}
Notice that if $A_3 = 0$ and $m \in \mathbb{N},$ the above construction gives rise to a triply warped product Einstein metric.
Multiple warped product Einstein metrics of nonpositive scalar curvature were constructed by B\"ohm \cite{BohmNonCompactEinstein} on $\mathbb{R}^{d_1+1} \times M_2 \times \ldots \times M_r$ if $d_1>1,$ for Einstein manifolds $(M_i,g_i)$ of positive scalar curvature $\mu_i >0.$ The corresponding steady and expanding Ricci solitons have been constructed by Dancer-Wang \cite{DWExpandingSolitons, DWSteadySolitons} who in joint work with Buzano and Gallaugher \cite{BDGWExpandingSolitons, BDWSteadySolitons} also settled the case $d_1 =1.$
Away from the singular orbit, on $(0, \infty) \times S^{d_1-1} \times M_2 \times \ldots \times M_r,$ the metrics are of the form $dt^2 + \sum_{i=1}^{r} f_i^2(t) g_i.$ Notice that the corresponding $m$-quasi-Einstein equations are \begin{align*} \frac{d}{dt} \frac{\dot{f}_i}{f_i} & = - (- \dot{u} + \tr(L) ) \frac{\dot{f}_i}{f_i} + \frac{\varepsilon}{2} + \frac{\mu_i}{f_i^2} \ \text{ for } \ i=1, \ldots, r, \end{align*} where $\mu_1 = d_1-1.$ If $m < \infty,$ then $f_{r+1} = e^{-u/m}$ satisfies an analogous equation, where $\mu_{r+1} > 0$ will be the characteristic constant of the induced $m$-quasi-Einstein metric.
Due to the regularity theorem \ref{QEMregularity} one induces a smooth $m$-quasi-Einstein metric on the trivial $\mathbb{R}^{d_1+1}$-bundle over $M_2 \times \ldots \times M_r$ by imposing the initial conditions $f_1=0,$ $\dot{f}_1=1$ and $f_i>0$ for $i \geq 2$ at $t=0$ and by requiring that $f_i(t)$ for $i \geq 2$ and $u(t)$ are even.
Set $d_{r+1} = m$ if $m<\infty$ and $d_{r+1}= \mu_{r+1}=0$ if $m=\infty.$ In terms of the rescaled coordinates $\mathcal{L},$ $X_i,$ $Y_i$ of \eqref{RescaledTwoSummandsVariables} the above initial conditions correspond to the stationary point \begin{equation*} X_1 = \frac{1}{d_1}, Y_1=\frac{1}{d_1} \ \text{and} \ X_i=Y_i=\mathcal{L} = 0 \ \text{for} \ i \geq 2 \end{equation*} of the Ricci soliton ODE \begin{align*} \mathcal{L}^{'} & = \mathcal{L} \left( \sum_{j=1}^{r+1} d_j X_j^2 - \frac{\varepsilon}{2} \mathcal{L}^2 \right), \\ X_i^{'} &= X_i \left( \sum_{j=1}^{r+1} d_j X_j^2 - \frac{\varepsilon}{2} \mathcal{L}^2 - 1 \right) + \frac{\varepsilon}{2} \mathcal{L}^2 + \mu_i Y_i^2, \\ Y_i^{'} &= Y_i \left( \sum_{j=1}^{r+1} d_j X_j^2 - \frac{\varepsilon}{2} \mathcal{L}^2 - X_i \right). \end{align*}
Notice that $f_i(t)>0,$ $\dot{f}_i(t) >0$ for $t >0$ small and thus the rescaled coordinates $\mathcal{L},$ $X_i,$ $Y_i$ are also positive initially. Moreover, for $\varepsilon \geq 0$ positivity is preserved by the flow of the Ricci soliton ODE.
Consider \begin{align*} \mathcal{S}_{1, m} = \sum_{i=1}^{r+1} d_i X_i^2 + \sum_{i=1}^{r+1} \mu_i Y_i^2 + (n-1)\frac{\varepsilon}{2} \mathcal{L}^2 -1 \ \text{ and } \ \mathcal{S}_{2,m} = \sum_{i=1}^{r+1} d_i X_i -1. \end{align*} In analogy to \eqref{EinsteinLocus}, \eqref{SolitonLocus} it follows that trajectories lying in the preserved locus $\left\{ \mathcal{S}_{1,m} = 0\right\} \cap \left\{ \mathcal{S}_{2,m} = 0\right\}$ correspond to non-trivial $m$-quasi-Einstein metrics for $m < \infty.$ Similarly, non-trivial Ricci soliton metrics correspond to trajectories in $\left\{ \mathcal{S}_{1,\infty} < 0\right\} \cap \left\{ \mathcal{S}_{2,\infty} < 0\right\}$ and Einstein metrics to trajectories in $\left\{ \mathcal{S}_{1,\infty} = 0\right\} \cap \left\{ \mathcal{S}_{2,\infty} = 0\right\}$.
In all cases, if $\varepsilon \geq 0,$ the variables $X_i, Y_i \geq 0$ are bounded, except possibly $Y_1$ if $d_1=1$. However, the ODEs for $\mathcal{L},$ $Y_1$ show as before that $\mathcal{L},$ $Y_1$ cannot blow up in finite time. Completeness of the metric again follows as in proposition \ref{CompletenessEpsZeroTwoSummands} and corollary \ref{CompletenessEpsPosTwoSummands}.
Thus this construction yields $m$-quasi-Einstein metrics on multiple warped products as in Theorem \ref{MainTheoremQEM}, and provides a unified approach to the constructions of B\"ohm \cite{BohmNonCompactEinstein} and Buzano-Dancer-Gallaugher-Wang \cite{DWSteadySolitons, DWExpandingSolitons, BDGWExpandingSolitons, BDWSteadySolitons}.
\end{document}
\begin{document}
\title{{\bf Global Solution of Atmospheric Circulation Models with Humidity Effect} \thanks{Foundation item: the National Natural Science Foundation of China (NO. 11271271).}} \author{{\sl Hong Luo} \\{\small College of Mathematics and Software Science, Sichuan Normal University,} \\
{\small Chengdu, Sichuan 610066, China }} \date{} \maketitle
\begin{minipage}{5.5 in}
\noindent{\bf Abstract:}\,\ \ The atmospheric circulation models are
deduced from the very complex original atmospheric circulation equations,
based on the actual background and meteorological data. The models are able to show the main features of
atmospheric circulation and are easy to study.
Existence of global solutions to the atmospheric
circulation models is proved with the use of $T$-weakly continuous operators.
{\bf Key Words:}\ \ Atmospheric Circulation Models; Humidity Effect; Global Solution
{\bf 2000 Mathematics Subject Classification:} \ 35A01, 35D30, 35K20\\ \end{minipage}
\section{Introduction} Mathematics is a summary and abstraction of the production practices and the natural sciences, and is a powerful tool to explain natural phenomena and reveal the laws of nature as well. Atmospheric circulation is one of
the main factors affecting the global climate, so it is very necessary
to understand and master its mysteries and laws. Atmospheric circulation
is an important mechanism to complete the transports and balance of atmospheric
heat and moisture and the conversion between various forms of energy. Conversely,
it is also an important result of these physical transports, balances and conversions.
Thus it is necessary to study the characteristics, formation, preservation, change
and effects of the atmospheric circulation and to master its evolution law, which is not
only an essential part of our understanding of nature, but also helpful for
improving the accuracy of weather forecasts, exploring global climate change,
and making effective use of climate resources. \par The atmosphere and ocean around the earth are rotating geophysical fluids, which are also two important components of the climate system. The phenomena of the atmosphere and ocean are extremely rich in their organization and complexity, and a lot of them cannot be produced by laboratory experiments. The atmosphere or the ocean or the coupled atmosphere and ocean can be viewed as an initial and boundary value problem \cite{Phillips} \cite{Rossby}, or an infinite dimensional dynamical system \cite{Lions1}\cite{Lions2}\cite{Lions3}. These phenomena involve a broad range of temporal and spatial scales \cite{Charney2}. For example, according to \cite{von Neumann}, the motion of the atmosphere can be divided into three categories depending on the time scale of the prediction. They are motions corresponding respectively to the short time, medium range and long term behavior of the atmosphere. The understanding of these complicated and scientific issues necessitate a joint effort of scientists in many fields. Also, as \cite{von Neumann} pointed out, this difficult problem involves a combination of modeling, mathematical theory and scientific computing.
Some authors have studied the atmospheric motions viewed as an infinite dimensional dynamical system. In \cite{Lions1}, the authors study as a first step towards this long-range project which is widely considered as the basic equations of atmospheric dynamics in meteorology, namely the primitive equations of the atmosphere. The mathematical formulation and attractors of the primitive equations, with or without vertical viscosity, are studied. First of all, by integrating the diagnostic equations they present a mathematical setting, and obtain the existence and time analyticity of solutions to the equations. They then establish some physically relevant estimates for the Hausdorff and fractal dimensions of the attractors of the problems. In \cite{Li}, based on the complete dynamical equations of the moist atmospheric motion, the qualitative theory of nonlinear atmosphere with dissipation and external forcing and its applications are systematically discussed by new theories and methods on the infinite dimensional dynamical system. In \cite{Wang}, by Lions theorem in the Hilbert space, the existence and uniqueness of the weak solution of water vapour-equation with the first boundary-value problem are proven, and the scheme of the finite-element method according to the weak solution is proposed. In \cite{Huang}, the model of climate for weather forecast is studied, and the existence of the weak solution is proved by Galerkin method. The asymptotic behaviors of the weak solution is described by the trajectory attractors.
In this article, atmospheric circulation equations with humidity effect are considered, which differs from the previous research. Previous studies are based on two kinds of equations: one concerns the heat and humidity transfer (\cite{Li},\cite{Wang}), without considering the diffusion of heat and of humidity; the other is the p-coordinate equations in (\cite{Lions1},\cite{Huang}), used to describe the horizontal movement of the atmosphere, where the vertical component of the velocity is transformed into pressure and topography. The atmospheric circulation equations with humidity effect studied here are derived from the original equations in \cite{Richardson}; they take the diffusion of heat and of humidity into account, are not restricted to p-coordinates, and can be reformulated according to different concerns. In the last part of the article, existence of global solutions to the atmospheric circulation models is obtained, which implies that the atmospheric circulation evolves in its own way as the humidity and heat sources change, and confirms that the atmospheric circulation models are reasonable.
The paper is organized as follows. In Section 2 we present the derivation of the atmospheric circulation models. In Section 3, we prove that the atmospheric circulation models possess global solutions by the method of spatial sequences.
\section{Derivation of Atmospheric Circulation Models with \\Humidity Effect} \subsection{Original Model} The hydrodynamical equations governing the atmospheric circulation are the Navier-Stokes equations with the Coriolis force generated by the earth's rotation, coupled with the first law of thermodynamics. \par Let $(\varphi, \theta, r)$ be the spherical coordinates, where $\varphi$ represents the longitude, $\theta$ the latitude, and $r$ the radial coordinate. The unknown functions include the velocity field $u=(u_\varphi, u_\theta, u_r)$, the temperature function $T$, the humidity function $q$, the pressure $p$ and the density function $\rho$. Then the equations governing the motion and states of the atmosphere consist of the momentum equation, the continuity equation, the first law of thermodynamics, the diffusion equation for humidity, and the equation of state (for ideal gas): \begin{eqnarray} \label{eqa1} \rho[\frac{\partial u}{\partial t}+\nabla_u u+2\overrightarrow{\Omega} \times u]+\nabla p+\overrightarrow{\kappa} \rho g=\mu \Delta u, \end{eqnarray} \begin{eqnarray} \label{eqa2} \frac{\partial \rho}{\partial t}+div(\rho u)=0, \end{eqnarray} \begin{eqnarray} \label{eqa3} \rho c_v[\frac{\partial T}{\partial t}+u\cdot\nabla T] +p div u=Q+\kappa_T \Delta T, \end{eqnarray} \begin{eqnarray} \label{eqa4} \rho[\frac{\partial q}{\partial t}+(u \cdot \nabla) q]=G+\kappa_q \Delta q, \end{eqnarray} \begin{eqnarray} \label{eqa5} p=R_0\rho T, \end{eqnarray} where $-\infty <\varphi <+\infty$, $-\frac{\pi}{2}<\theta <\frac{\pi}{2}$, $r_0<r<r_0+h$, $r_0$ is the radius of the earth, $h$ is the height of the troposphere, $\Omega$ is the earth's rotating angular velocity, $g$ is the gravitational constant, $\mu, \kappa_T, \kappa_q, c_v, R_0$ are constants, $Q$ and $G$ are heat and humidity sources, and $\overrightarrow{\kappa} = (0, 0, 1)$. The differential operators used are as follows: \par 1. The gradient and divergence operators are given by: $$ \nabla=(\frac{1}{r\cos\theta}\frac{\partial}{\partial \varphi}, \frac{1}{r} \frac{\partial}{\partial \theta}, \frac{\partial}{\partial r}), $$ $$ div u=\frac{1}{r^2}\frac{\partial}{\partial r}(r^2 u_r)+\frac{1}{r \cos\theta} \frac{\partial (u_\theta \cos \theta)}{\partial \theta}+\frac{1}{r \cos\theta} \frac{\partial u_\varphi}{\partial \varphi}, $$ \par 2. In the spherical geometry, although the Laplacian for a scalar is different from the Laplacian for a vectorial function, we use the same notation $\Delta$ for both of them: $$ \Delta u=( \Delta u_\varphi+\frac{2}{r^2 \cos\theta} \frac{\partial u_r }{\partial \varphi}+\frac{2 \sin \theta}{r^2 \cos^2 \theta} \frac{\partial u_\theta}{\partial \varphi}- \frac{u_\varphi}{r^2 \cos^2 \theta}, $$ $$ \Delta u_\theta+\frac{2}{r^2} \frac{\partial u_r }{\partial \theta}-\frac{u_\theta}{r^2 \cos^2 \theta}- \frac{2 \sin\theta}{r^2 \cos^2 \theta}\frac{\partial u_\varphi}{\partial \varphi}, $$ $$
\Delta u_r+\frac{2 u_r}{r^2}+\frac{2}{r^2 \cos \theta} \frac{\partial (u_\theta \cos\theta)}{\partial \theta}- \frac{2}{r^2 \cos \theta}\frac{\partial u_\varphi}{\partial \varphi}), $$ $$ \Delta f=\frac{1}{r^2 \cos\theta} \frac{\partial}{\partial\theta}(\cos\theta \frac{\partial f}{\partial \theta})+\frac{1}{r^2 \cos^2 \theta} \frac{\partial^2 f}{\partial \varphi^2}+\frac{1}{r^2} \frac{\partial}{\partial r}(r^2 \frac{\partial f}{\partial r}), $$ \par 3. The convection terms are given by $$ \nabla_u u=(u\cdot \nabla u_\varphi+\frac{u_\varphi u_r}{r}-\frac{u_\varphi u_\theta}{r} \tan \theta, $$ $$ u\cdot \nabla u_\theta+\frac{u_\theta u_r}{r}+\frac{u^2_\varphi}{r} \tan \theta, u\cdot \nabla u_r-\frac{u^2_\varphi+ u^2_\theta}{r}), $$ \par 4. The Coriolis term $2 \overrightarrow{\Omega}\times u$ is given by $$ 2 \overrightarrow{\Omega}\times u=2 \Omega(\cos \theta u_r-\sin\theta u_\theta, \sin \theta u_{\varphi}, -\cos\theta u_\varphi), $$ Here $\overrightarrow{\Omega}$ is the angular velocity vector of the earth, and $\Omega$ is the magnitude of the angular velocity. \par They are supplemented with the following initial value conditions \begin{eqnarray} \label{eqa100} (u, T, q ) = (\varphi_{10},\varphi_{20}, \varphi_{30}) \quad \hbox{at} \quad t = 0. \end{eqnarray} \par
Boundary conditions are needed at the top and bottom boundaries $(r_0, r_0+h)$. At the top and bottom boundaries $(r=r_0,r_0+h)$, either the Dirichlet boundary condition or the free boundary condition is given \begin{eqnarray} \label{eqa101} (\hbox{Dirichlet})\quad \left\{
\begin{array}{ll} (u, T,q ) =(0, T_0, q_0), & r=r_0,\\ (u, T,q )=(0, T_1, q_1), & r=r_0+h, \end{array} \right. \end{eqnarray} \begin{eqnarray} \label{eqa102} (\hbox{free})\quad \left\{
\begin{array}{ll} (u, T,q ) =(0, T_0, q_0), & r=r_0,\\ (u_r,T,q )=(0,T_1, q_1),\quad \frac{\partial(u_\varphi, u_\theta)}{\partial r} =
0 & r=r_0+h, \end{array} \right. \end{eqnarray} \par For $\varphi$, periodic conditions are usually used:
for any integer $k_1\in Z$ \begin{eqnarray} \label{eqa103} (u, T, q )(\varphi + 2k_1 \pi, \theta , r) = (u, T, q )(\varphi, \theta , r). \end{eqnarray} \par Because the domain is periodic in $\varphi$, the space domain is taken as $(0,2\pi) \times (-\frac{\pi}{2}, \frac{\pi}{2}) \times (r_0,r_0+h)$ and the periodic condition is written as \begin{eqnarray} \label{eqa104} (u, T, q )(0,\theta, r) = (u, T, q )(2\pi,\theta, r). \end{eqnarray}
For simplicity, we study the problem with the Dirichlet boundary conditions; all results hold true as well for other combinations of boundary conditions. The atmospheric convection problem then consists of (\ref{eqa1})-(\ref{eqa101}) and (\ref{eqa104}).
\par The above equations were basically the equations used by L. F. Richardson in his pioneering work \cite{Richardson}. However, they are in general too complicated to conduct theoretical analysis. As practiced by the earlier workers such as J. Charney, and from the lessons learned by the failure of Richardson's pioneering work, one tries to be satisfied with simplified models approximating the actual motions to a greater or lesser degree instead of attempting to deal with the atmosphere in all its complexity. By starting with models incorporating only what are thought to be the most important of atmospheric influences, and by gradually bringing in others, one is able to proceed inductively and thereby to avoid the pitfalls inevitably encountered when a great many poorly understood factors are introduced all at once. The simplifications are usually done by taking into consideration of some main characterizations of the large-scale atmosphere. One such characterization is the small aspect ratio between the vertical and horizontal scales, leading to hydrostatic equation replacing the vertical momentum equation. The resulting system of equation are called the primitive equations (PEs); see among others \cite{Lions1}. The another characterization of the large scale motion is the fast rotation of the earth, leading to the celebrated quasi-geostrophic equations \cite{Charney1}. \par
For convenience of research, the approximations we adopt involves the following components: \par First, we often use Boussinesq assumption, where the density is treated as a constant except in the buoyancy term and in the equation of state. \par Second, because the air is generally compressible, we do not use the equation of state for ideal gas, rather, we use the following empirical formula, which can by considered as the linear approximation of \begin{eqnarray} \label{eqa6} \rho=\rho_0[1-\alpha_T(T-T_0)+\alpha_q(q-q_0)], \end{eqnarray} where $\rho_0$ is the density at $T = T_0$ and $q = q_0$, and $\alpha_T$ and $\alpha_q$ are the coefficients of thermal and humidity expansion. \par Third, since the aspect ratio between the vertical scale and the horizontal scale is small, the spheric shell the air occupies is treated as a product space $S^2_{r_0}\times(r_0, r_0+ h)$. This approximation is extensively adopted in geophysical fluid dynamics. Under the above simplification, we have the following equations governing the motion and states of large scale atmospheric circulations: \begin{eqnarray} \label{eqa7} \frac{\partial u}{\partial t}+\nabla_u u=\nu \Delta u-2\overrightarrow{\Omega} \times u-\frac{1}{\rho_0}\nabla p-[1-\alpha_T(T-T_0)+\alpha_q(q-q_0)] g e_z, \end{eqnarray} \begin{eqnarray} \label{eqa8} \frac{\partial T}{\partial t}+(u\cdot\nabla) T =Q+\kappa_T \Delta T, \end{eqnarray} \begin{eqnarray} \label{eqa9} \frac{\partial q}{\partial t}+(u \cdot \nabla)q=G+\kappa_q \Delta q, \end{eqnarray} \begin{eqnarray} \label{eqa10} div u=0, \end{eqnarray} where $(\varphi, \theta, z)\in M = S^2_{r_0}\times(r_0, r_0 + h),$ $\nu=\frac{\mu}{\rho_0}$ is the kinematic viscosity, $u=u_\varphi e_\varphi+u_\theta e_\theta+u_r e_r,$ $(e_\varphi, e_\theta, e_r)$ the local normal basis in the sphereric coordinates, and $$ \nabla_u u=((u\cdot \nabla) u_\varphi+\frac{u_\varphi u_z}{r_0}-\frac{u_\varphi u_\theta}{r_0} \tan \theta)e_\varphi+ $$ $$ ((u\cdot \nabla) u_\theta+\frac{u_\theta u_z}{r_0}+\frac{u^2_\varphi}{r_0} \tan \theta)e_\theta+((u\cdot \nabla) u_z-\frac{u^2_\varphi+ u^2_\theta}{r_0})e_z, $$ $$ \Delta u=(\Delta u_\varphi+\frac{2}{r_0^2 \cos\theta} \frac{\partial u_z }{\partial \varphi}+\frac{2 \sin \theta}{r_0^2 \cos^2 \theta} \frac{\partial u_\theta}{\partial \varphi}- \frac{u_\varphi}{r_0^2 \cos^2 \theta})e_\varphi+ $$ $$ (\Delta u_\theta+\frac{2}{r_0^2} \frac{\partial u_z}{\partial \theta}-\frac{u_\theta}{r_0^2 \cos^2 \theta}- \frac{2 \sin\theta}{r_0^2 \cos^2 \theta}\frac{u_\varphi}{\partial \varphi})e_\theta+ $$ $$
(\Delta u_z+\frac{2 u_0}{r^2}+\frac{2}{r_0^2 \cos \theta} \frac{\partial (u_\theta \cos\theta)}{\partial \theta}- \frac{2}{r_0^2 \cos \theta}\frac{u_\varphi}{\partial \varphi})e_z, $$ $$ \nabla p=\frac{1}{r_0 \cos \theta}\frac{\partial p}{\partial \theta}e_\varphi+\frac{1}{r} \frac{\partial p}{\partial \theta}e_\theta+\frac{\partial p}{\partial z}e_z, $$ $$ div u=\frac{1}{r_0 \cos\theta} \frac{\partial u_\varphi}{\partial \varphi}+\frac{1}{r_0 \cos\theta} \frac{\partial (u_\theta \cos \theta)}{\partial \theta}+\frac{\partial u_z}{\partial z}, $$ and the differential operators $(u\cdot \nabla)$ and $\triangle$ are expressed as $$ (u \cdot \nabla)=\frac{u_\varphi}{r_0 \cos\theta} \frac{\partial}{\partial \varphi}+\frac{u_\theta}{r_0}\frac{\partial}{\partial \theta}+u_z \frac{\partial}{\partial z}, $$ $$ \Delta=\frac{1}{r_0^2 \cos\theta} \frac{\partial^2}{\partial \varphi^2}+\frac{1}{r_0^2 \cos\theta} \frac{\partial}{\partial \theta}(\cos\theta \frac{\partial}{\partial \theta})+\frac{\partial^2}{\partial z^2}. $$ \par Equations (\ref{eqa7})-(\ref{eqa10}) are supplemented with boundary conditions (\ref{eqa101}), (\ref{eqa104}).
\subsection{Simplification of Model} Atmospheric circulation is the large-scale motion of the air, which is essentially a thermal convection process caused by the temperature and humidity difference between the earth's surface and the tropopause. It is a crucial means by which heat and humidity are distributed on the surface of the earth. Air circulates within the troposphere, limited vertically by the tropopause at about 8-10km. Atmospheric motion in the troposphere, together with the oceanic circulation, plays a crucial role in global climate change and evolution on the earth. There are two types of circulation cells: the latitudinal circulation and the longitudinal circulation. The latitudinal circulation is characterized by the Polar cell, the Ferrel cell, and the Hadley cell, which are major players in global heat and humidity transport, and do not act alone. The zonal circulation consists of six circulation cells over the equator. The overall atmospheric motion is known as the zonal overturning circulation, and also called the Walker circulation. The most remarkable feature of the global atmospheric circulation is that the equatorial Walker circulation divides the whole earth into three invariant regions of atmospheric flow: the northern hemisphere, the southern hemisphere, and the equatorial zone. We also note the important fact that the large-scale structure of the zonal overturning circulation varies from year to year, but its basic structure remains fairly stable and never vanishes. Based on these natural phenomena, we here present the Zone Hypothesis for atmospheric dynamics, which amounts to saying that the global atmospheric system can be divided into three sub-systems: the North-Hemispheric System, the South-Hemispheric System, and the Tropical Zone System, which are relatively independent, and have less influence on each other in their basic structure. More precisely, the Atmospheric Zone Hypothesis is stated in the following form \cite{Ma2}. \par {\bf Atmospheric Zone Hypothesis.} The atmospheric circulation has three invariant regions: the northern hemisphere domain ($0 <\theta\leq\frac{\pi}{2}$), the southern hemisphere domain ($- \frac{\pi}{2}\leq \theta < 0$), and the equatorial zone ($\theta = 0$). Namely, the large-scale circulations in their invariant regions can act alone with less influence on the others. In particular, the velocity field $u = (u_\varphi, u_\theta, u_z)$ of the atmospheric circulation has a vanishing latitudinal component in a narrow equatorial zone, i.e., $u_\theta = 0,$ for $-\varepsilon<\theta<\varepsilon$, where $\varepsilon> 0$ is a small number. \par
Atmospheric Zone Hypothesis is based on the following several evidences from theory and practice: \par (1) The global atmospheric motion equations (\ref{eqa7})-(\ref{eqa10}) are of $\theta-$reflexive symmetry, i.e. under the $\theta-$reflexive transformation $(\varphi, \theta, z)\rightarrow (\varphi,-\theta, z)$, the velocity field $u$ becomes $(u_\varphi, u_\theta, u_z)\rightarrow (u_\varphi,-u_\theta, u_z)$, and equations (\ref{eqa7})-(\ref{eqa10}) are invariant, which implies that this system is compatible with the Atmospheric Zone Hypothesis. \par (2) Climatic observation data show that when the El Ni$\tilde{n}o$-Southern Oscillation (the behavior that the Walker circulation cell in the Western Pacific stops or reverses its direction) takes place, no oscillation occurs in the latitudinal cells. It demonstrates the relative independence of these circulations in their invariant domain. \par (3) When a cold current moves southward from Siberia, or a violent typhoon sweeps northward from the tropics, the weather in Southern Hemisphere has no response. Atmospheric Zone Hypothesis provides a theoretic basis for the study of atmospheric dynamics, by which we can establish locally simplified models to treat many difficult problems in atmospheric science. \par (4) For the three-dimensional atmospheric circulation equation, it is too difficult to study. So we study the equatorial atmospheric circulation. \par The atmospheric motion equations over the tropics are the equations restricted on $\theta= 0$, where the meridional velocity component $u_\theta$ is set to zero, and the effect of the turbulent friction is taking into considering \begin{eqnarray} \label{eqa13}
\frac{\partial u_\varphi}{\partial t}=-(u \cdot\nabla ) u_\varphi-\frac{u_\varphi u_z}{r_0}+\nu (\Delta u_\varphi+\frac{2 }{r_0^2}\frac{\partial u_z}{\partial \varphi}-\frac{2u_\varphi}{r_0^2})-\sigma_0 u_\varphi-2 \Omega u_z-\frac{1}{\rho_0 r_0} \frac{\partial p}{\partial \varphi}, \end{eqnarray} $$
\frac{\partial u_z}{\partial t}=-( u\cdot \nabla) u_z+\frac{u_\varphi^2}{r_0}+\nu (\Delta u_z+\frac{2 }{r_0^2}\frac{\partial u_\varphi}{\partial \varphi}-\frac{2u_z}{r_0^2})-\sigma_1 u_z-2 \Omega u_\varphi $$ \begin{eqnarray} \label{eqa14} -\frac{1}{\rho_0} \frac{\partial p}{\partial z}-[1-\alpha_T(T-T_0)+\alpha_q(q-q_0)] g, \end{eqnarray} \begin{eqnarray} \label{eqa15} \frac{\partial T}{\partial t}=-(u\cdot\nabla) T +\kappa_T \Delta T+Q, \end{eqnarray} \begin{eqnarray} \label{eqa16} \frac{\partial q}{\partial t}=-(u \cdot \nabla)q+\kappa_q \Delta q+G, \end{eqnarray} \begin{eqnarray} \label{eqa17}\frac{1}{r_0}\frac{\partial u_\varphi}{\partial\varphi}+\frac{\partial u_z}{\partial z}=0, \end{eqnarray} Here $\sigma_i = C_i h^2 (i = 0, 1)$ represent the turbulent friction, $r_0$ is the radius of the earth, the space domain is taken as $M = S^1_{r_0} \times (r_0, r_0 + h)$ with $S^1_{r_0}$ being the one-dimensional circle with radius $r_0$, and $$ (u\cdot\nabla)=\frac{u_\varphi}{r_0}\frac{\partial}{\partial \varphi}+u_z \frac{\partial}{\partial z}, \quad \Delta=\frac{1}{r_0^2}\frac{\partial^2}{\partial \varphi^2}+\frac{\partial^2}{\partial z^2} $$
For simplicity, we denote $$ (x_1,x_2)=(r_0 \varphi, z), \quad (u_1,u_2)=(u_\varphi,u_z). $$ \par The atmospheric motion equations(\ref{eqa13})-(\ref{eqa17}) can be written as \begin{eqnarray} \label{eqa18}
\frac{\partial u_1}{\partial t}=-(u \cdot\nabla ) u_1-\frac{u_1 u_2}{r_0}+\nu (\Delta u_1+\frac{2 }{r_0}\frac{\partial u_2}{\partial x_1}-\frac{2u_1}{r_0^2})-\sigma_0 u_1-2 \Omega u_2-\frac{1}{\rho_0} \frac{\partial p}{\partial x_1}, \end{eqnarray} $$
\frac{\partial u_2}{\partial t}=-(u \cdot \nabla) u_2+\frac{u_1^2}{r_0}+\nu (\Delta u_2+\frac{2 }{r_0}\frac{\partial u_1}{\partial x_1}-\frac{2u_2}{r_0^2})-\sigma_1 u_2-2 \Omega u_1 $$ \begin{eqnarray} \label{eqa19} -\frac{1}{\rho_0} \frac{\partial p}{\partial x_2}-[1-\alpha_T(T-T_0)+\alpha_q(q-q_0)] g, \end{eqnarray} \begin{eqnarray} \label{eqa20} \frac{\partial T}{\partial t}=-(u\cdot\nabla) T +\kappa_T \Delta T+Q, \end{eqnarray} \begin{eqnarray} \label{eqa21} \frac{\partial q}{\partial t}=-(u \cdot \nabla)q+\kappa_q \Delta q+G, \end{eqnarray} \begin{eqnarray} \label{eqa22}\frac{\partial u_1}{\partial x_1}+\frac{\partial u_2}{\partial x_2}=0. \end{eqnarray} \par To make the nondimensional form, let $$ x=hx^{'}, \quad t=\frac{h^2}{\kappa_T} t^{'} \quad u=\frac{\kappa_T}{h} u^{'}, $$ $$ T=T_0-(T_0-T_1)\frac{x_2}{h}+(T_0-T_1)T^{'}, $$ $$ q=q_0-(q_0-q_1)\frac{x_2}{h}+(q_0-q_1)q^{'}, $$ $$ p=\frac{\rho_0 \nu \kappa_T p^{'}}{h^2}-g\rho_0[x_2+\frac{\alpha_T}{2}(T_0-T_1)\frac{x^2_2}{h}-\frac{\alpha_q}{2}(q_0-q_1)\frac{x^2_2}{h}], $$ \par The nondimensional form of (\ref{eqa18})-(\ref{eqa22}) reads $$
\frac{\partial u^{'}_1}{\partial t^{'}}=-(u^{'} \cdot \nabla ) u^{'}_1+\frac{\nu}{\kappa_T} \Delta u^{'}_1-\frac{\sigma_0 h^2}{\kappa_T} u^{'}_1-\frac{2 \Omega h^2}{\kappa_T} u^{'}_2-\frac{\nu}{k_T} \frac{\partial p^{'}}{\partial x^{'}_1} $$ \begin{eqnarray} \label{eqa23} -\frac{h}{r_0}u_1u_2+\frac{2h}{r_0 k_T}\frac{\partial u^{'}_2}{\partial x_1^{'}}-\frac{2h^2}{r_0 \kappa_T} u_1^{'}, \end{eqnarray} $$
\frac{\partial u^{'}_2}{\partial t^{'}}=-(u^{'} \cdot \nabla) u^{'}_2+\frac{\nu}{\kappa_T} \Delta u^{'}_2-\frac{\sigma_1 h^2}{\kappa_T} u^{'}_2-\frac{2 \Omega h^2}{\kappa_T} u^{'}_1-\frac{h}{r_0}u_1^2+\frac{h}{r_0 k_T}\frac{\partial u^{'}_1}{\partial x_2^{'}}-\frac{2h^2}{r_0^2 \kappa_T} u_2^{'} $$ \begin{eqnarray} \label{eqa24} -\frac{\nu}{\kappa_T} \frac{\partial p^{'}}{\partial x^{'}_2}+\frac{gh^3}{\kappa_T^2}[\alpha_T(T^0-T_1)T^{'}-\alpha_q(q^0-q_1)q^{'}], \end{eqnarray} \begin{eqnarray} \label{eqa25} \frac{\partial T^{'}}{\partial t^{'}}=-(u^{'}\cdot\nabla) T^{'} + \Delta T^{'}+u_2^{'}+\frac{h^2}{(T_0-T_1)\kappa_T}Q, \end{eqnarray} \begin{eqnarray} \label{eqa26} \frac{\partial q^{'}}{\partial t^{'}}=-(u^{'} \cdot \nabla)q^{'}+\frac{\kappa_q}{\kappa_T} \Delta q^{'}+u_2^{'}+\frac{h^2}{(T_0-T_1)\kappa_T}G, \end{eqnarray} \begin{eqnarray} \label{eqa27}div u^{'}=0. \end{eqnarray} \par
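As a consistency check of the rescaling (this short symbolic computation is our own addition and not part of the derivation), one may verify with SymPy that the substitutions above indeed turn the dimensional temperature equation (\ref{eqa20}) into the nondimensional equation (\ref{eqa25}); the humidity equation (\ref{eqa21}), (\ref{eqa26}) is treated in exactly the same way with $\kappa_q$ in place of $\kappa_T$.
\begin{verbatim}
import sympy as sp

X1, X2, TAU = sp.symbols("x1p x2p tp")              # primed (nondimensional) variables
h, kT, T0, T1, Q = sp.symbols("h kappa_T T_0 T_1 Q", real=True)

Tp  = sp.Function("Tp")(X1, X2, TAU)                # nondimensional fields
u1p = sp.Function("u1p")(X1, X2, TAU)
u2p = sp.Function("u2p")(X1, X2, TAU)

# dimensional fields expressed through the primed ones (note x_2/h = x2p)
T  = T0 - (T0 - T1) * X2 + (T0 - T1) * Tp
u1 = kT / h * u1p
u2 = kT / h * u2p

# chain rule: d/dt = (kappa_T/h^2) d/dtp and d/dx_i = (1/h) d/dx_ip
ddt  = lambda f: kT / h**2 * sp.diff(f, TAU)
ddx1 = lambda f: sp.diff(f, X1) / h
ddx2 = lambda f: sp.diff(f, X2) / h

# residual of the dimensional temperature equation
res_dim = (ddt(T) + u1 * ddx1(T) + u2 * ddx2(T)
           - kT * (ddx1(ddx1(T)) + ddx2(ddx2(T))) - Q)

# residual of its claimed nondimensional form
res_nd = (sp.diff(Tp, TAU) + u1p * sp.diff(Tp, X1) + u2p * sp.diff(Tp, X2)
          - sp.diff(Tp, X1, 2) - sp.diff(Tp, X2, 2) - u2p
          - h**2 / ((T0 - T1) * kT) * Q)

# the residuals agree up to the overall factor (T_0 - T_1) kappa_T / h^2
print(sp.simplify(res_dim - (T0 - T1) * kT / h**2 * res_nd))   # prints 0
\end{verbatim}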
Because $r_0$ is far larger than $u_1$, $u_2$, the atmospheric motion equations (\ref{eqa23})-(\ref{eqa27}) can be read \begin{eqnarray} \label{eqa123} \frac{\partial u^{'}_1}{\partial t^{'}}=-(u^{'} \cdot \nabla ) u^{'}_1+\frac{\nu}{\kappa_T} \Delta u^{'}_1-\frac{\sigma_0 h^2}{\kappa_T} u^{'}_1-\frac{2 \Omega h^2}{\kappa_T} u^{'}_2-\frac{\nu}{k_T} \frac{\partial p^{'}}{\partial x^{'}_1}, \end{eqnarray} $$
\frac{\partial u^{'}_2}{\partial t^{'}}=-(u^{'} \cdot \nabla) u^{'}_2+\frac{\nu}{\kappa_T} \Delta u^{'}_2-\frac{\sigma_1 h^2}{\kappa_T} u^{'}_2-\frac{2 \Omega h^2}{\kappa_T} u^{'}_1 $$ \begin{eqnarray} \label{eqa124} -\frac{\nu}{\kappa_T} \frac{\partial p^{'}}{\partial x^{'}_2}+\frac{gh^3}{\kappa_T^2}[\alpha_T(T_0-T_1)T^{'}-\alpha_q(q_0-q_1)q^{'}], \end{eqnarray} \begin{eqnarray} \label{eqa125} \frac{\partial T^{'}}{\partial t^{'}}=-(u^{'}\cdot\nabla) T^{'} + \Delta T^{'}+u_2^{'}+\frac{h^2}{(T_0-T_1)\kappa_T}Q, \end{eqnarray} \begin{eqnarray} \label{eqa126} \frac{\partial q^{'}}{\partial t^{'}}=-(u^{'} \cdot \nabla)q^{'}+\frac{\kappa_q}{\kappa_T} \Delta q^{'}+u_2^{'}+\frac{h^2}{(T_0-T_1)\kappa_T}G, \end{eqnarray} \begin{eqnarray} \label{eqa127}div u^{'}=0. \end{eqnarray} \par Let $P_r=\frac{\nu}{\kappa_T}$, $L_e=\frac{\kappa_q}{\kappa_T}$, $R=\frac{g\alpha_T(T^0-T_1)h^3}{\kappa_T \nu}$, $\tilde{R}=\frac{g\alpha_q(q^0-q_1)h^3}{\kappa_T \nu}$, $\sigma_i^{'}=\frac{\sigma_i h^2}{\nu}$, $\omega=\frac{2 \Omega h^2}{\nu}$, $Q^{'}=\frac{h^2}{(T_0-T_1)\kappa_T}Q$, $G^{'}=\frac{h^2}{(T_0-T_1)\kappa_T}G$, omitting the primes, the nondimensional form of (\ref{eqa123})-(\ref{eqa127}) reads \begin{eqnarray} \label{eqa28}
\frac{\partial u}{\partial t}=P_r(\Delta u-\nabla p-\sigma u)+P_r(RT-\tilde{R}q)\vec{\kappa}-(\nabla \cdot u) u, \end{eqnarray} \begin{eqnarray} \label{eqa29} \frac{\partial T}{\partial t}=\Delta T+u_2-(u\cdot\nabla) T + Q, \end{eqnarray} \begin{eqnarray} \label{eqa30} \frac{\partial q}{\partial t}=L_e \Delta q+u_2-(u \cdot \nabla)q+G, \end{eqnarray} \begin{eqnarray} \label{eqa31}div u=0, \end{eqnarray} where $\sigma$ is constant matrix $$
\sigma=\left(
\begin{array}{ll} \sigma_0 & \omega\\ \omega & \sigma_1 \end{array} \right). $$ \par The problem (\ref{eqa28})-(\ref{eqa31}) are supplemented with the following Dirichlet boundary condition at $x_2=0,1$ and periodic condition for $x_1$: \begin{eqnarray} \label{eqa32} (u, T,q ) =0, \quad x_2=0,1, \end{eqnarray} \begin{eqnarray} \label{eqa33} (u, T, q )(0, x_2) = (u, T, q )(2 \pi, x_2), \end{eqnarray} and initial value conditions \begin{eqnarray} \label{eqa34} (u, T, q ) = (u_0, T_0, q_0), \ \ \ t=0. \end{eqnarray}
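To illustrate the structure of the model, the following toy sketch (our own addition) advances the temperature equation in (\ref{eqa29}) alone by explicit finite differences, using the boundary conditions (\ref{eqa32})-(\ref{eqa33}). It is not a solver for the coupled problem (\ref{eqa28})-(\ref{eqa34}): the grid, time step and the divergence-free stream-function velocity are arbitrary placeholder choices standing in for the dynamically determined $u$.
\begin{verbatim}
import numpy as np

nx, nz = 64, 33                   # x1 in [0, 2*pi) periodic, x2 in [0, 1] Dirichlet
dx = 2 * np.pi / nx
dz = 1.0 / (nz - 1)
x1 = np.arange(nx) * dx
x2 = np.linspace(0.0, 1.0, nz)
X1g, X2g = np.meshgrid(x1, x2, indexing="ij")

# prescribed divergence-free velocity vanishing at x2 = 0, 1:
# stream function psi = sin(x1) sin(pi x2)^2, u1 = dpsi/dx2, u2 = -dpsi/dx1
u1 = 2 * np.pi * np.sin(X1g) * np.sin(np.pi * X2g) * np.cos(np.pi * X2g)
u2 = -np.cos(X1g) * np.sin(np.pi * X2g) ** 2

Q = np.zeros_like(X1g)            # heat source switched off in this toy run
T = np.zeros_like(X1g)            # initial temperature deviation
dt = 0.2 * min(dx, dz) ** 2       # crude explicit stability margin

def step(T):
    Tx = (np.roll(T, -1, 0) - np.roll(T, 1, 0)) / (2 * dx)
    Txx = (np.roll(T, -1, 0) - 2 * T + np.roll(T, 1, 0)) / dx ** 2
    Tz = np.zeros_like(T)
    Tzz = np.zeros_like(T)
    Tz[:, 1:-1] = (T[:, 2:] - T[:, :-2]) / (2 * dz)
    Tzz[:, 1:-1] = (T[:, 2:] - 2 * T[:, 1:-1] + T[:, :-2]) / dz ** 2
    Tnew = T + dt * (Txx + Tzz + u2 - u1 * Tx - u2 * Tz + Q)
    Tnew[:, 0] = 0.0              # Dirichlet condition at x2 = 0, 1
    Tnew[:, -1] = 0.0
    return Tnew

for _ in range(2000):
    T = step(T)
print("max |T| after 2000 steps:", float(np.abs(T).max()))
\end{verbatim}
The same stencil applies verbatim to the humidity equation (\ref{eqa30}), with the diffusion term multiplied by $L_e$ and the source $G$ in place of $Q$.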
\section{Global Solution of Atmospheric Circulation Equations with \\Humidity Effect} \setcounter{equation}{0} \subsection{Preliminaries}
Let $X$ be a linear space, $X_1$, $X_2$ two separable reflexive Banach spaces, and $H$ a Hilbert space. $X_1$, $X_2$ and $H$ are completions of $X$ under the respective norms, and the embeddings $X_1, X_2\subset H$ are dense. $F:X_2 \times (0,\infty)\rightarrow X_1^*$ is a continuous mapping. We consider the abstract equation \begin{eqnarray} \label{eqc101} \left\{
\begin{array}{ll} \frac{du}{dt}=Fu, & 0<t<\infty, \\
\\
u(0)=\varphi, & \\ \end{array} \right. \end{eqnarray} where $\varphi \in H$, $u: [0,+\infty)\rightarrow H$ is unknown. \par {\bf Definition 3.1} Let $\varphi \in H$ be a given initial value. $u \in L^p((0,T),X_2)\cap L^\infty((0,T), H)$, ($0<T<\infty$) is called a global solution of Eq(\ref{eqc101}), if $u$ satisfies $$ (u(t),v)_H=\int_0^t<Fu,v>dt+(\varphi,v)_H, \quad \forall v \in X_1 \subset H. $$ \par {\bf Definition 3.2} Let $u_n, u_0 \in L^p((0,T), X_2)$. $u_n \rightarrow u_0$ is called uniformly weak convergence in $L^p((0,T), X_2)$, if $\{u_n\} \subset L^\infty((0,T),H)$ is bounded, and \begin{eqnarray} \label{eqc102} \left\{
\begin{array}{ll} u_n \rightharpoonup u_0, &\hbox{in} \ \ \ L^p((0,T), X_2), \\
\\
\lim_{n\rightarrow \infty}\int_0^T |<u_n-u_0,v>_H|^2 dt=0, & \forall v \in H.\\ \end{array} \right. \end{eqnarray}
\par {\bf Definition 3.3} A mapping $F: X_2 \times (0,\infty)\rightarrow X_1^*$ is called $T$-weakly continuous, if for $p=(p_1, \ldots, p_m)$, $0<T<\infty$, and any $u_n$ converging uniformly weakly to $u_0$ in the sense of Eq(\ref{eqc102}), we have $$ \lim_{n\rightarrow \infty}\int_0^T <Fu_n,v>dt=\int_0^T <Fu_0,v>dt, \quad \forall v\in X_1. $$ \par {\bf Lemma 3.4} Let $\{u_n\}\subset L^p((0,T), W^{m,p})$ $(m\geq 1)$ be a bounded sequence, and let $\{u_n\}$ converge uniformly weakly to $u_0 \in L^p((0,T), W^{m,p})$, i.e. \begin{eqnarray} \label{eqc100} \left\{
\begin{array}{ll} u_n \rightharpoonup u_0 \ \ \ \hbox{in}\ \ L^p((0,T), W^{m,p}),& p\geq 2, \\
\\ \lim_{n\rightarrow \infty}\int_0^T [\int_\Omega (u_n-u_0)v dx ]^2dt=0, & \forall v \in C_0^\infty(\Omega).\\ \end{array} \right. \end{eqnarray}
Then for all $|\alpha|\leq m-1$, we have $$ D^\alpha u_n \rightarrow D^\alpha u_0\ \ \ \ \hbox{in}\ \ \ \ \ L^2((0,T)\times \Omega). $$ \par {\bf Lemma 3.5} $^{\cite{Ma3}}$ Assume $F: X_2 \times (0,\infty)\rightarrow X_1^*$ is $T$-weakly continuous, and satisfies \par (A1) there exists $p=(p_1, \cdots, p_m)$, $p_i>1(1\leq i \leq m)$, such that $$
<Fu,u> \leq -C_1\|u\|^p_{X_2}+C_2 \|u\|_H^2+f(t),\quad \forall u\in X, $$ where $C_1,C_2$ are constants, $f\in L^1(0,T)(0<T<\infty)$,
$\|\cdot\|^p_{X_2}=\sum_{i=1}^m|\cdot|_i^{p_i}$, $|\cdot|_i$ is seminorm of $X_2$, $\|\cdot\|_{X_2}=\sum_{i=1}^m |\cdot|_i$, \par (A2) there exists $0<\alpha<1$, for all $0<h<1$ and $u \in C^1([0,\infty),X)$, $$
|\int_t^{t+h}<Fu,v>dt|\leq Ch^\alpha, \quad \forall v\in X \quad \hbox{and} \quad 0\leq t<T, $$
where $C>0$ depends only on $T$, $\|v\|_{X_1}$, $\int_0^t
\|u\|^p_{X_2}dt$ and $\sup_{0\leq t \leq T} \|u\|_H$.\\
Then for all $\varphi \in H$, Eq(\ref{eqc101}) has a global weak solution
$$ u\in L^\infty((0,T),H) \cap L^p((0,T), X_2), \quad 0<T<\infty, \quad p\hbox{\ \ in \ \ (A1)}.
$$
\par
{\bf Remark 3.6} $\|\cdot\|_X$ denotes norm of $X$, and $C_i$ are
variable constants.
\subsection{Existence of Global Solution}
We introduce spatial sequences $$
X=\{\phi=(u,T,q) \in C^\infty(\Omega, R^4)| (u,T,q) \ \ \hbox{satisfy} \ \ (\ref{eqa31})-(\ref{eqa33})\}, $$ $$
H=\{\phi=(u,T,q) \in L^2(\Omega, R^4)| (u,T,q)\ \ \hbox{satisfy}\ \ (\ref{eqa31})-(\ref{eqa33})\}, $$ $$
H_1=\{\phi=(u,T,q) \in H^1(\Omega, R^4)| (u,T,q) \ \ \hbox{satisfy}\ \ (\ref{eqa31})-(\ref{eqa33})\}, $$
{\bf Theorem 3.7} If $\phi_0=(u_0,T_0,q_0)\in H$, $Q,G \in L^2(\Omega)$, then the problem (\ref{eqa28})-(\ref{eqa34}) possesses
a global solution $$ (u,T,q) \in L^\infty((0,T),H)\cap L^2((0,T),H_1), \quad 0<T<\infty. $$ \par {\bf Proof.} Define $F: H_1 \rightarrow H_1^*$ as $$
\begin{array}{lrl} <F\phi, \psi>&=&\int_\Omega [-P_r \nabla u \nabla v-P_r \sigma u \cdot v+P_r (RT-\tilde{R}q)v_2-( u\cdot \nabla) u \cdot v \\\\ &&-\nabla T \nabla S+u_2 S-(u\cdot \nabla)T S+Q S-L_e \nabla q \nabla z+u_2 z \\\\ &&-(u\cdot \nabla)q z+G z]dx, \quad \forall \psi=(v,S,z) \in H_1.
\end{array}
$$
\par
Let $\psi=\phi$. Then $$
\begin{array}{lrl}
<F\phi, \phi>&=&\int_\Omega [-P_r |\nabla u|^2-P_r \sigma u \cdot u+P_r(RT-\tilde{R}q)u_2-(u\cdot\nabla ) u \cdot u \\\\
&&-|\nabla T|^2+u_2 T-(u\cdot \nabla)T T +Q T-L_e |\nabla q|^2 \\\\ &&+u_2 q-(u\cdot \nabla)q q+G q]dx \\\\
&=&\int_\Omega [-P_r |\nabla u|^2-P_r \sigma u \cdot u+(P_r R+1)u_2 T-(P_r \tilde{R}-1)qu_2 \\\\
&&-|\nabla T|^2+ Q T-L_e |\nabla q|^2+G q]dx \\\\
&\leq&- C_1 \int_\Omega [|\nabla u|^2+|\nabla T|^2+ |\nabla q|^2]dx
+C_2 \int_\Omega[|u|^2+|u||T|+|q||u| \\\\
&&+|Q| |T|+|G| |q|]dx \\\\
&\leq& - C_1 \int_\Omega [|\nabla u|^2+|\nabla T|^2+ |\nabla q|^2]dx
+C_2 \int_\Omega[|u|^2+|T|^2+|q|^2]dx \\\\
&&+C_3 \int_\Omega[|Q|^2+|G|^2]dx \\\\
&\leq& - C_1 \|\phi\|_{H_1}^2 +C_2 \|\phi\|_{H}^2+C_4,
\end{array}
$$ which implies $(A_1)$. \par For all $\phi \in L^2((0,T), H_1)\cap L^\infty((0,T),H)$, $\psi \in X$ and $h$ $(0<h<1)$, we have $$
\begin{array}{lrl}
&&|\int_t^{t+h}<F\phi, \psi>dt| \\\\
&=&|\int_t^{t+h} \int_\Omega [-P_r \nabla u \nabla v-P_r \sigma u \cdot v+P_r (RT-\tilde{R}q)v_2-( u\cdot \nabla) u \cdot v-\nabla T \nabla S \\\\ &&+u_2 S-(u\cdot \nabla)T S+ Q S-L_e \nabla q \nabla z+u_2 z-(u\cdot
\nabla)q z+G z]dxdt|
\end{array} $$ $$ \begin{array}{lrl}
&\leq& C\int_t^{t+h} \int_\Omega [|\nabla u| |\nabla v|+|u||v|+
|T||v|+|q||v|+|\nabla T| |\nabla S|+|u| |S| \\\\
&&+|Q| |S| +|\nabla q| |\nabla z|+|u| |z|+|G| |z|]dxdt+C\int_t^{t+h}
[|\int_\Omega ( u\cdot \nabla) u \cdot vdx| \\\\
&&+|\int_\Omega (u\cdot \nabla)T Sdx|+|\int_\Omega (u\cdot \nabla)q zdx|]dt \\\\
&\leq& C\int_t^{t+h}[ (\int_\Omega |\nabla u|^2dx)^{\frac{1}{2}}
(\int_\Omega|\nabla v|^2dx)^{\frac{1}{2}}+
(\int_\Omega|T|^2dx)^{\frac{1}{2}}(\int_\Omega|v|^2dx)^{\frac{1}{2}} \\\\
&&+(\int_\Omega|q|^2dx)^{\frac{1}{2}}
(\int_\Omega|v|^2dx)^{\frac{1}{2}}+(\int_\Omega|\nabla T|^2dx)^{\frac{1}{2}} (\int_\Omega|\nabla S|^2dx)^{\frac{1}{2}} \\\\
&&+(\int_\Omega|u|^2dx)^{\frac{1}{2}}
(\int_\Omega|S|^2dx)^{\frac{1}{2}}+(\int_\Omega|Q|^2dx)^{\frac{1}{2}}
(\int_\Omega|S|^2dx)^{\frac{1}{2}} \\\\
&&+(\int_\Omega|\nabla q|^2dx)^{\frac{1}{2}} (\int_\Omega|\nabla z|^2dx)^{\frac{1}{2}}+(\int_\Omega|u|^2dx)^{\frac{1}{2}}
(\int_\Omega|z|^2dx)^{\frac{1}{2}} \\\\
&&+(\int_\Omega|G|^2dx)^{\frac{1}{2}}
(\int_\Omega|z|^2dx)^{\frac{1}{2}}]dt+C\int_t^{t+h}
[|\sum_{i,j=1}^2\int_\Omega ( u_i u_j \frac{\partial v_j}{\partial x_i}dx| \\\\
&&+|\sum_{i=1}^2\int_\Omega u_iT \frac{\partial S}{\partial x_i}dx|+|\sum_{i=1}^2\int_\Omega u_iq\frac{\partial z}{\partial x_i}dx|]dt \\\\
&\leq& C(\|u\|_{L^2(0,T;H^1)} \|D v\|_{L^2}h^{\frac{1}{2}}+
\|T\|_{L^2(0,T;L^2)}\|v\|_{L^2}h^{\frac{1}{2}}+\|q\|_{L^2(0,T;L^2)}
\|v\|_{L^2}h^{\frac{1}{2}} \\\\
&&+\|T\|_{L^2(0,T;H^1)} \|D S\|_{L^2}h^{\frac{1}{2}}+\|u\|_{L^\infty(0,T;L^2)}\|S\|_{L^2}h^{\frac{1}{2}}
+\|Q\|_{L^2}\|S\|_{L^2}h \\\\
&&+\|
q\|_{L^2(0,T;H^1)} \|D z\|_{L^2}h^{\frac{1}{2}}+\|u\|_{L^\infty(0,T;H)}
\|z\|_{L^2}h+\|G\|_{L^2}\|z\|_{L^2}h \\\\
&&+\|v\|_{C^1}\|u\|_{L^\infty(0,T,L^2)}h+\|S\|_{C^1}
\|u\|_{L^\infty(0,T;L^2)}^{\frac{1}{2}}\|T\|_{L^\infty(0,T;L^2)}^{\frac{1}{2}}h \\\\
&&+\|z\|_{C^1}
\|u\|_{L^\infty(0,T;L^2)}^{\frac{1}{2}}\|q\|_{L^\infty(0,T;L^2)}^{\frac{1}{2}}h) \\\\ &\leq& C h^{\frac{1}{2}},
\end{array} $$ which implies $(A_2)$. \par We will prove that $F: H_1 \rightarrow H_1^*$ is $T$-weakly continuous. Let $\phi_n=(u^n,T^n,q^n) \rightharpoonup \phi_0 =(u^0,T^0,q^0)$ converge uniformly weakly, i.e., $\{\phi_n\} \subset L^\infty((0,T);H)$ is bounded, and
$$
\left\{
\begin{array}{lcl} \phi_n \rightharpoonup \phi_0 & \hbox{in} & L^p((0,T);H_1),
\\ \lim_{n\rightarrow \infty}
\int_0^T|<\phi_n-\phi_0,\psi>_H|^2dt=0,&&\forall \psi \in H. \end{array} \right. $$ \par From Lemma 3.4, we known that $\phi_n \rightarrow \phi_0$ in $L^2(\Omega \times (0,T))$. \par Then $\forall \psi \in X \subset C^\infty(\Omega, R^4)\cap H_1$, we have $$ \lim_{n \rightarrow \infty}\int_0^t\int_\Omega(u^n \cdot \nabla)u^n \cdot v dxdt=\lim_{n \rightarrow \infty}\int_0^t\int_\Omega\sum_{i,j=1}^2 u^n_i \frac{\partial u^n_j}{\partial x_i} v_jdxdt $$ $$ =-\lim_{n \rightarrow \infty}\int_0^t\int_\Omega\sum_{i,j=1}^2 u^n_i u^n_j\frac{\partial v_j}{\partial x_i} dxdt $$ $$ =-\int_0^t\int_\Omega(u^0 \cdot \nabla)v \cdot u^0dxdt $$ $$ =\int_0^t\int_\Omega(u^0 \cdot \nabla)u^0 \cdot v dxdt, $$ and $$ \lim_{n \rightarrow \infty}\int_0^t\int_\Omega(u^n \cdot \nabla)T^n S dxdt=\lim_{n \rightarrow \infty}\int_0^t\int_\Omega\sum_{i=1}^2 u^n_i \frac{\partial T^n}{\partial x_i} Sdxdt $$ $$ =-\lim_{n \rightarrow \infty}\int_0^t\int_\Omega\sum_{i=1}^2 u^n_i T^n\frac{\partial S}{\partial x_i} dxdt $$ $$ =-\int_0^t\int_\Omega(u^0 \cdot \nabla)S T^0dxdt $$ $$ =\int_0^t\int_\Omega(u^0 \cdot \nabla)T^0 S dxdt, $$ and $$ \lim_{n \rightarrow \infty}\int_0^t\int_\Omega(u^n \cdot \nabla)q^n z dxdt=\lim_{n \rightarrow \infty}\int_0^t\int_\Omega\sum_{i=1}^2 u^n_i \frac{\partial q^n}{\partial x_i} zdxdt $$ $$ =-\lim_{n \rightarrow \infty}\int_0^t\int_\Omega\sum_{i=1}^2 u^n_i q^n\frac{\partial z}{\partial x_i} dxdt $$ $$ =-\int_0^t\int_\Omega(u^0 \cdot \nabla)z q^0dxdt $$ $$ =\int_0^t\int_\Omega(u^0 \cdot \nabla)q^0 z dxdt, $$ Thus, \begin{eqnarray} \label{eqc9} \lim_{n \rightarrow \infty}\int_0^t <F\phi_n, \psi>dt=\int_0^t <F\phi_0, \psi>dt. \end{eqnarray} \par Because $X$ is dense in $H_1$, Eq(\ref{eqc9}) holds for $\psi \in H_1$. In other words, the mapping $F: H_1 \rightarrow H_1^*$ is $T$-weakly continuous. \par From Lemma 3.5, Eq(\ref{eqa28})-(\ref{eqa34}) has a global weak solution $$ (u,T,q) \in L^\infty((0,T),H)\cap L^2((0,T),H_1), \quad 0<T<\infty. $$ $\Box$ \par {\bf Remark 3.8} Existence of global solutions to the atmospheric circulation models implies that atmospheric circulation has its own running way as humidity source and heat source change, and confirms that the atmospheric circulation models are reasonable.
\begin {thebibliography}{90}
\bibitem{Charney1} Charney J., The dynamics of long waves in a baroclinic westerly current, {\it J. Meteorol.}, {\bf 4}(1947), 135-163.
\bibitem{Charney2} Charney J., On the scale of atmospheric motion, {\it Geofys. Publ.}, {\bf 17}(2)(1948), 1-17.
\bibitem{Lions1} Lions, J.L., Temam R., Wang S.H., New formulations of the primitive equations of atmosphere and applications. {\it Nonlinearity}, {\bf 5}(2)(1992), 237-288.
\bibitem{Lions2} Lions, J.L., Temam R., Wang S.H., On the equations of the large-scale ocean. {\it Nonlinearity}, {\bf 5}(5)(1992), 1007-1053.
\bibitem{Lions3} Lions, J.L., Temam R., Wang S.H., Models for the coupled atmosphere and ocean. (CAO I), {\it Comput. Mech. Adv.}, {\bf 1}(1)(1993), 1-54.
\bibitem{Ma2} Ma T., Wang S.H., Phase Transition Dynamics in Nonlinear Sciences, New York, Springer, 2013.
\bibitem{Ma3} Ma T., Theories and Methods in partial differential equations, Academic Press, China, 2011(in Chinese).
\bibitem{Phillips} Phillips, N.A., The general circulation of the atmosphere: A numerical experiment. {\it Quart J Roy Meteorol Soc}, {\bf 82}(1956), 123-164.
\bibitem{Richardson} Richardson L.F., {\it Weather Prediction by Numerical Process}, Cambridge University Press, 1922.
\bibitem{Rossby} Rossby, C.G., On the solution of problems of atmospheric motion by means of model experiment, { \it Mon. Wea. Rev.}, {\bf 54}(1926), 237-240.
\bibitem{von Neumann} von Neumann, J., Some remarks on the problem of forecasting climatic fluctuations. In R. L. Pfeffer (Ed.), Dynamics of climate, pp. 9-12. Pergamon Press, 1960.
\bibitem{Li} Li J.P., Chou J.F., The Qualitative Theory of the Dynamical Equations of Atmospheric Motion and Its Applications, {\it Scientia Atmospherica Sinica}, {\bf 22(4)} (1998), 443-453.
\bibitem{Wang} Wang B.Z., Well-Posed Problems of the Weak Solution about Water Vapour Equation, {\it China. J. Atmos. Sci.}, {\bf 23(5)} (1999), 590-596.
\bibitem{Huang} Huang H.Y., Guo B.l., The Existence of Weak Solutions and the Trajectory Attractors to the Model of Climate Dynamics. {\it Acta Mathematica Scientia}, {\bf 27A(6)} (2007), 1098-1110.
\end{thebibliography}
\end{document}
\begin{document}
\title{\bf Implementing Turing Machines \\
in Dynamic Field Architectures\footnote{Appeared in M. Bishop und Y. J. Erden (Eds.) \emph{Proceedings of AISB12 World Congress 2012 --- Alan Turing 2012}, 5th AISB Symposium on Computing and Philosophy: Computing, Philosophy and the Question of Bio-Machine Hybrids, 36 -- 40.} }
\author{Peter beim Graben\institute{
Institut f\"ur Deutsche Sprache und Linguistik,
Humboldt-Universit\"at zu Berlin,
email: [email protected].}
\and
Roland Potthast\institute{
Dept. of Mathematics and Statistics,
University of Reading,
email: [email protected].} }
\maketitle
\begin{abstract} Cognitive computation, such as e.g. language processing, is conventionally regarded as Turing computation, and Turing machines can be uniquely implemented as nonlinear dynamical systems using generalized shifts and subsequent G\"odel encoding of the symbolic repertoire. The resulting nonlinear dynamical automata (NDA) are piecewise affine-linear maps acting on the unit square that is partitioned into rectangular domains. Iterating a single point, i.e. a microstate, by the dynamics yields a trajectory of, in principle, infinitely many points scattered through phase space. Therefore, the NDA's microstate dynamics does not necessarily terminate in contrast to its counterpart, the symbolic dynamics obtained from the rectangular partition. In order to regain the proper symbolic interpretation, one has to prepare ensembles of randomly distributed microstates with rectangular supports. Only the resulting macrostate evolution then corresponds to the original Turing machine computation. However, the introduction of random initial conditions into a deterministic dynamics is not really satisfactory. As a possible solution for this problem we suggest a change of perspective. Instead of looking at point dynamics in phase space, we consider functional dynamics of probability distribution functions (p.d.f.s) over phase space. This is generally described by a Frobenius-Perron integral transformation that can be regarded as a neural field equation over the unit square as feature space of a dynamic field theory (DFT). Solving the Frobenius-Perron equation yields that uniform p.d.f.s with rectangular support are mapped onto uniform p.d.f.s with rectangular support again. Thus, the symbolically meaningful NDA macrostate dynamics becomes represented by iterated function dynamics in DFT; hence we call the resulting representation \emph{dynamic field automata}. \end{abstract}
\section{INTRODUCTION} \label{sec:intro}
According to the central paradigm of classical cognitive science and to the Church-Turing thesis of computation theory (cf., e.g., \cite{Anderson95, HopcroftUllman79, Siegelmann95, Turing37}), cognitive processes are essentially rule-based manipulations of discrete symbols in discrete time that can be carried out by Turing machines. On the other hand, cognitive and computational neuroscience increasingly provide experimental and theoretical evidence, how cognitive processes might be implemented by neural networks in the brain.
The crucial question, how to bridge the gap, how to realize a Turing machine \cite{Turing37} by state and time continuous dynamical systems has been hotly debated by ``computationalists'' (such as Fodor and Pylyshyn \cite{FodorPylyshyn88}) and ``dynamicists'' (such as Smolensky \cite{Smolensky88a}) over the last decades. While computationalists argued that dynamical systems, such as neural networks, and symbolic architectures were either incompatible to each other, or the former were mere implementations of the latter, dynamicists have retorted that neural networks could be incompatible with symbolic architectures because the latter cannot be implementations of the former; see \cite{Graben04, Tabor09} for discussion.
Moore \cite{Moore90, Moore91a} has proven that a Turing machine can be mapped onto a generalized shift as a generalization of symbolic dynamics \cite{LindMarcus95}, which in turn becomes represented by a piecewise affine-linear map on the unit square using G\"odel encoding and symbologram reconstruction \cite{CvitanovicGunaratneProcaccia88, KennelBuhl03}. These \emph{nonlinear dynamical automata} have been studied and further developed by \cite{GrabenJurishEA04, GrabenGerthVasishth08}. Using a similar representation of the machine tape but a localist one of the machine's control states, Siegelmann and Sontag have proven that a Turing machine can be realized as a recurrent neural network with rational synaptic weights \cite{SiegelmannSontag95}. Along a different vein, deploying sequential cascaded networks, Pollack \cite{Pollack91} and later Moore \cite{Moore98} and Tabor \cite{Tabor00a, Tabor09} introduced and further generalized \emph{dynamical automata} as nonautonomous dynamical systems (see \cite{GrabenPotthast09a} for a unified treatment of these different approaches).
Inspired by population codes studied in neuroscience, Sch\"oner and co-workers devised \emph{dynamic field theory} as a framework for cognitive architectures and embodied cognition where symbolic representations correspond to regions in abstract feature spaces (e.g. the visual field, color space, limb angle spaces) \cite{ErlhagenSchoner02, SchonerDineva07}. Because dynamic field theory relies upon the same dynamical equations as \emph{neural field theory} investigated in theoretical neuroscience \cite{Amari77b, WilsonCowan73}, one often speaks also about \emph{dynamic neural fields} in this context.
In this communication we unify the abovementioned approaches. Starting from a nonlinear dynamical automaton as point dynamics in phase space in \Sec{sec:nda}, which bears interpretational peculiarities, we consider uniform probability distributions evolving in function space in \Sec{sec:dfa}. There we prove the central theorem of our proposal, that uniform distributions with rectangular support are mapped onto uniform distributions with rectangular support by the underlying NDA dynamics. Therefore, the corresponding dynamic field, implementing a Turing machine, shall be referred to as \emph{dynamic field automaton}. In the concluding \Sec{sec:discu} we discuss possible generalizations and advances of our approach. Additionally, we point out that symbolic computation in a dynamic field automaton can be interpreted in terms of contextual emergence \cite{AtmanspacherGraben07, AtmanspacherGraben09, BishopAtmanspacher06}.
\section{NONLINEAR DYNAMICAL AUTOMATA} \label{sec:nda}
A nonlinear dynamical automaton (NDA: \cite{GrabenPotthast09a, GrabenGerthVasishth08, GrabenJurishEA04}) is a triple $M_{NDA} = (X, \mathcal{P}, \Phi)$ where $(X, \Phi)$ is a time-discrete dynamical system with phase space $X = [0, 1]^2 \subset \mathbb{R}^2$, the unit square, and flow $\Phi : X \to X$. $\mathcal{P} = \{ D_\nu | \nu = (i, j), 1 \le i \le m, 1 \le j \le n, m, n \in \mathbb{N} \}$ is a rectangular partition of $X$ into pairwise disjoint sets, $D_\nu \cap D_\mu = \emptyset$ for $\nu \ne \mu$, covering the whole phase space $X = \bigcup_\nu D_\nu$, such that $D_\nu = I_i \times J_j$ with real intervals $I_i, J_j \subset [0, 1]$ for each bi-index $\nu = (i, j)$. Moreover, the cells $D_\nu$ are the domains of the branches of $\Phi$ which is a piecewise affine-linear map \begin{equation}\label{eq:ndamap}
\Phi(\vec{x}) =
\begin{pmatrix}
a^{\nu}_x \\
a^{\nu}_y
\end{pmatrix} +
\begin{pmatrix}
\lambda^{\nu}_x & 0 \\
0 & \lambda^{\nu}_y
\end{pmatrix} \cdot
\begin{pmatrix}
x \\
y
\end{pmatrix} \:, \end{equation} when $\vec{x} = (x, y)^T \in D_\nu$. The vectors $(a^{\nu}_x, a^{\nu}_y )^T \in \mathbb{R}^2$ characterize parallel translations, while the matrix coefficients $\lambda^{\nu}_x, \lambda^{\nu}_y \in \mathbb{R}_0^+$ mediate either stretchings ($\lambda > 1$), squeezings ($\lambda < 1$), or identities ($\lambda = 1$) along the $x$- and $y$-axes, respectively.
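As a concrete illustration (this sketch is our addition, not part of the original construction), the simplest instance of such a map is obtained for a trivial partition in $x$ and two cells in $y$: with the branch parameters below, $\Phi$ is the classical baker's map, which, under the G\"odel encoding introduced further below, realizes the plain left shift on two binary symbols.
\begin{verbatim}
cells = {
    # nu = (i, j):  (a_x, a_y, lambda_x, lambda_y)
    (1, 1): (0.0,  0.0, 0.5, 2.0),  # y in J_1 = [0, 1/2): x' = x/2,       y' = 2y
    (1, 2): (0.5, -1.0, 0.5, 2.0),  # y in J_2 = [1/2, 1): x' = 1/2 + x/2, y' = 2y - 1
}

def branch(x, y):
    """Locate the partition cell D_nu = I_i x J_j containing (x, y)."""
    return (1, 1) if y < 0.5 else (1, 2)

def Phi(x, y):
    """One application of the piecewise affine-linear NDA map."""
    a_x, a_y, lam_x, lam_y = cells[branch(x, y)]
    return a_x + lam_x * x, a_y + lam_y * y

# iterating a single microstate x_{t+1} = Phi(x_t)
pt = (0.8125, 0.40625)    # an initial microstate (an arbitrary dyadic point)
for t in range(6):
    print(t, pt)
    pt = Phi(*pt)
\end{verbatim}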
The NDA's dynamics, obtained by iterating an orbit $\{ \vec{x}_t \in X | t \in \mathbb{N}_0 \}$ from initial condition $\vec{x}_0$ through \begin{equation}
\label{eq:ndadyn}
\vec{x}_{t + 1} = \Phi(\vec{x}_t) \end{equation} describes a symbolic computation by means of a generalized shift \cite{Moore90, Moore91a} when subjected to the coarse-graining $\mathcal{P}$. To this end, one considers the set of bi-infinite, ``dotted'' symbolic sequences \begin{equation}\label{eq:symseq}
s = \ldots a_{i_{-3}} a_{i_{-2}} a_{i_{-1}} . a_{i_{0}} a_{i_{1}} a_{i_{2}} \ldots \end{equation} with symbols $a_{i_k} \in \mathbf{A}$ taken from a finite set, an alphabet $\mathbf{A}$. In \Eq{eq:symseq} the dot denotes the observation time $t = 0$ such that the symbol right to the dot, $a_{i_{0}}$, displays the current state, dissecting the string $s$ into two one-sided infinite strings $s = (s'_L, s_R)$ with $s'_L = a_{i_{-1}} a_{i_{-2}} a_{i_{-3}} \ldots$ as the left-hand part in reversed order and $s_R = a_{i_{0}} a_{i_{1}} a_{i_{2}} \ldots$ as the right-hand part. Applying a G\"odel encoding \begin{eqnarray}
\label{eq:goedel}
x &=& \psi(s'_L) = \sum_{k = 1}^\infty \psi(a_{i_{-k}}) b_L^{-k} \\
y &=& \psi(s_R) = \sum_{k = 0}^\infty \psi(a_{i_k}) b_R^{-k - 1} \nonumber \end{eqnarray} to the pair $s = (s'_L, s_R)$, where $\psi(a_j) \in \mathbb{N}_0$ is an integer G\"odel number for symbol $a_j \in \mathbf{A}$ and $b_L, b_R \in \mathbb{N}$ are the numbers of symbols that could appear either in $s_L$ or in $s_R$, respectively, yields the so-called symbol plane or symbologram representation $(x, y)^T$ of $s$ in the unit square $X$ \cite{CvitanovicGunaratneProcaccia88, KennelBuhl03}.
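As a concrete illustration, the following Python sketch evaluates the G\"odel encoding \pref{eq:goedel} for finite dotted words over a binary alphabet; truncating the two infinite sums to the given finite strings (i.e.\ padding with the G\"odel number $0$) is the only simplification.
\begin{verbatim}
def goedel(left_reversed, right, psi, b_L, b_R):
    """Symbologram coordinates (x, y) of a dotted sequence s = (s'_L, s_R).

    left_reversed : a_{i_-1}, a_{i_-2}, ...  (left part, in reversed order)
    right         : a_{i_0}, a_{i_1}, ...    (right part)
    psi           : dict of integer Goedel numbers for the symbols
    b_L, b_R      : number of admissible symbols left / right of the dot
    """
    x = sum(psi[a] * b_L ** -(k + 1) for k, a in enumerate(left_reversed))
    y = sum(psi[a] * b_R ** -(k + 1) for k, a in enumerate(right))
    return x, y

psi = {'0': 0, '1': 1}
print(goedel('101', '110', psi, 2, 2))   # the dotted word ...000101.110000...
\end{verbatim}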
A generalized shift emulating a Turing machine\footnote{
A generalized shift becomes a Turing machine by interpreting $a_{i_{-1}}$ as the current tape symbol underneath the head and $a_{i_{0}}$ as the current control state $q$. Then the remainder of $s_L$ is the tape left to the head and the remainder of $s_R$ is the tape right to the head. The DoD is the word $w = a_{i_{-1}} . a_{i_{0}}$ of length $d = 2$. }
is a pair $M_{GS} = (\mathbf{A}^\mathbb{Z}, \Psi)$ where $\mathbf{A}^\mathbb{Z}$ is the space of bi-infinite, dotted sequences with $s \in \mathbf{A}^\mathbb{Z}$ and $\Psi : \mathbf{A}^\mathbb{Z} \to \mathbf{A}^\mathbb{Z}$ is given as \begin{equation}\label{eq:genshift1}
\Psi(s) = \sigma^{F(s)}(s \oplus G(s)) \end{equation} with \begin{eqnarray} \label{eq:genshift}
\label{eq:genshift2} F: \mathbf{A}^\mathbb{Z} &\to& \mathbb{Z} \\
\label{eq:genshift3} G: \mathbf{A}^\mathbb{Z} &\to& \mathbf{A}^e \:, \end{eqnarray} where $\sigma : \mathbf{A}^\mathbb{Z} \to \mathbf{A}^\mathbb{Z}$ is the usual left-shift from symbolic dynamics \cite{LindMarcus95}, $F(s) = l$ dictates a number of shifts to the right ($l < 0$), to the left ($l > 0$) or no shift at all ($l = 0$), $G(s)$ is a word $w'$ of length $e \in \mathbb{N}$ in the domain of effect (DoE) replacing the content $w \in \mathbf{A}^d$, which is a word of length $d \in \mathbb{N}$, in the domain of dependence (DoD) of $s$, and $s \oplus G(s)$ denotes this replacement function.
From a generalized shift $M_{GS}$ with DoD of length $d$ an NDA $M_{NDA}$ can be constructed as follows: In the G\"odel encoding \pref{eq:goedel}, the word contained in the DoD at the left-hand side of the dot partitions the $x$-axis of the symbologram into intervals $I_i$, while the word contained in the DoD at the right-hand side of the dot partitions its $y$-axis into intervals $J_j$, such that the rectangle $D_\nu = I_i \times J_j$ ($\nu = (i, j)$) becomes the image of the DoD. Moore \cite{Moore90, Moore91a} has proven that the map $\Psi$ is then represented by a piecewise affine-linear (yet, globally nonlinear) map $\Phi$ with branches at $D_\nu$.
In general, a Turing machine has a distinguished blank symbol, $\sqcup$, delimiting the machine tape, and also some distinguished final states indicating termination of a computation \cite{HopcroftUllman79}. If there are no final states, the automaton is said to terminate with the empty tape $s = \sqcup^\infty . \sqcup^\infty$. By mapping $\psi(\sqcup) = 0$ through the G\"odel encoding, the terminating state becomes a fixed point attractor $(0, 0)^T \in X$ in the symbologram representation. Moreover, sequences of finite length are then described by pairs of rational numbers by virtue of \Eq{eq:goedel}. Therefore, NDA Turing machine computation becomes essentially rational dynamics.
In the framework of generalized shifts and nonlinear dynamical automata, however, another solution appears to be more appropriate, for at least three important reasons: Firstly, Siegelmann \cite{Siegelmann96b} further generalized generalized shifts to so-called analog shifts, where the length $e$ of the DoE in \Eq{eq:genshift3} could be infinite (e.g. by replacing the finite word $w$ in the DoD by the infinite binary representation of $\pi$). Secondly, the NDA representation of a generalized shift should preserve structural relationships of the symbolic description, such as the word semigroup property of strings. Beim Graben et al. \cite{GrabenJurishEA04} have shown that a representation of finite strings by means of equivalence classes of infinite strings, the so-called cylinder sets in symbolic dynamics \cite{McMillan53}, leads to monoid homomorphisms from symbolic sequences to the symbologram representation. Then, the empty word $\varepsilon$, the neutral element of the word semigroup, is represented by the unit interval $[0, 1]$ of real numbers. And thirdly, beim Graben et al. \cite{GrabenGerthVasishth08} combined NDAs with dynamical recognizers \cite{Pollack91, Moore98, Tabor00a} to describe interactive computing, where symbols from an information stream were represented as operators on the symbologram phase space of an NDA. There, a similar semigroup representation theorem holds.
For these reasons, we briefly recapitulate the cylinder set approach here. In symbolic dynamics, a cylinder set is a subset of the space $\mathbf{A}^\mathbb{Z}$ of bi-infinite sequences from an alphabet $\mathbf{A}$ that agree in a particular building block of length $n \in \mathbb{N}$ from a particular instance of time $t \in \mathbb{Z}$, i.e. \begin{eqnarray}\label{eq:cylinder}
C(n, t) &=& [\folg a {i_1} {i_n} ]_t \nonumber \\
&=& \{ s \in \mathbf{A}^{\mathbb{Z}} \,| \, s_{t + k - 1} = a_{i_k} , \quad k = 1, \dots, n \} \end{eqnarray}
is called an $n$-cylinder at time $t$. If $t < 0$ and $n > |t| + 1$, the cylinder contains the dotted word $w = s_{-1} . s_0$ and can therefore be decomposed into a pair of cylinders $(C'(|t|, t), C(|t| + n - 1, 0))$, where $C'$ again denotes reversed order of the defining strings. In the G\"odel encoding \pref{eq:goedel} each cylinder has a lower and an upper bound, given by the G\"odel numbers 0 and $b_L - 1$, $b_R - 1$, respectively. Then \begin{eqnarray*}
\inf(\psi(C'(|t|, t))) &=& \psi(\folg a {i_{|t|}} {i_1}) \\
\sup(\psi(C'(|t|, t))) &=& \psi(\folg a {i_{|t|}} {i_1}) + b_L^{-|t|} \\
\inf(\psi(C(|t| + n - 1, 0))) &=& \psi(\folg a {i_{|t| + 1}} {i_n}) \\
\sup(\psi(C(|t| + n - 1, 0))) &=& \psi(\folg a {i_{|t| + 1}} {i_n}) + b_R^{-|t| - n + 1} \:, \end{eqnarray*} where the suprema have been evaluated by means of geometric series. Thereby, each part cylinder $C$ is mapped onto a real interval $[\inf(C), \sup(C)] \subset [0, 1]$ and the complete cylinder $C(n, t)$ onto the Cartesian product of intervals $R = I \times J \subset [0, 1]^2$, i.e. onto a rectangle in the unit square. In particular, the empty cylinder, corresponding to the empty tape $\varepsilon . \varepsilon$, is represented by the complete phase space $X = [0, 1]^2$.
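A minimal numerical illustration of these interval bounds is the following Python sketch; it assumes a binary alphabet and ignores the indexing of observation times used above, keeping only the essential rule that a fixed finite prefix determines the infimum and the geometric-series tail determines the supremum.
\begin{verbatim}
def cylinder_interval(prefix, psi, b):
    """Goedel image [inf, sup] of the cylinder of sequences starting with `prefix`."""
    inf = sum(psi[a] * b ** -(k + 1) for k, a in enumerate(prefix))
    return inf, inf + b ** -len(prefix)     # tail of the geometric series

psi = {'0': 0, '1': 1}
print(cylinder_interval('1', psi, 2))    # (0.5, 1.0)
print(cylinder_interval('10', psi, 2))   # (0.5, 0.75)
print(cylinder_interval('', psi, 2))     # (0.0, 1.0): the empty cylinder
\end{verbatim}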
Fixing the prefixes of both part cylinders and allowing for random symbolic continuation beyond the defining building blocks results in a cloud of randomly scattered points across a rectangle $R$ in the symbologram. These rectangles are consistent with the symbol processing dynamics of the NDA, while individual points $\vec{x} \in [0, 1]^2$ no longer have an immediate symbolic interpretation. Therefore, we refer to arbitrary rectangles $R \subseteq [0, 1]^2$ as NDA macrostates, distinguishing them from NDA microstates $\vec{x}$ of the underlying dynamical system. In other words, the symbolically meaningful macrostates are emergent on the microscopic NDA dynamics. We discuss in \Sec{sec:discu} how a particular concept, called contextual emergence, could describe this phenomenon \cite{AtmanspacherGraben07, AtmanspacherGraben09, BishopAtmanspacher06}.
\section{DYNAMIC FIELD AUTOMATA} \label{sec:dfa}
From a conceptual point of view it does not seem very satisfactory to include such a kind of stochasticity into a deterministic dynamical system. However, as we shall demonstrate in this section, this apparent defect can easily be remedied by a change of perspective. Instead of iterating clouds of randomly prepared initial conditions according to a deterministic dynamics, one could also study the deterministic dynamics of probability measures over phase space. At this higher level of description, introduced by Koopman et al. \cite{Koopman31, KoopmanVonNeumann32} into theoretical physics, the point dynamics in phase space is replaced by functional dynamics in Banach or Hilbert spaces. This approach has its counterpart in neural \cite{Amari77b, WilsonCowan73} and dynamic field theory \cite{ErlhagenSchoner02, SchonerDineva07} in theoretical neuroscience.
In dynamical system theory the abovementioned approach is derived from the conservation of probability as expressed by a Frobenius-Perron equation \cite{Ott93} \begin{equation}\label{eq:froper}
\rho(\vec{x}, t) = \int_X \delta(\vec{x} - \Phi^{t - t'}(\vec{x}')) \rho(\vec{x}', t') \, \mathrm{d} \vec{x}' \:, \end{equation} where $\rho(\vec{x}, t)$ denotes a probability density function over the phase space $X$ at time $t$ of a dynamical system, $\Phi^t : X \to X$ refers to either a continuous-time ($t \in \mathbb{R}_0^+$) or discrete-time ($t \in \mathbb{N}_0$) flow and the integral over the delta function expresses the probability summation of alternative trajectories all leading into the same state $\vec{x}$ at time $t$.
\subsection{Temporal Evolution} \label{sec:evolv}
In the case of an NDA, the flow is discrete and piecewise affine-linear on the domains $D_\nu$ as given by \Eq{eq:ndamap}. As initial probability distribution densities $\rho(\vec{x}, 0)$ we consider uniform distributions with rectangular support $R_0 \subset X$, corresponding to an initial NDA macrostate, \begin{equation}\label{eq:iniuni}
u(\vec{x}, 0) = \frac{1}{|R_0|} \chi_{R_0}(\vec{x}) \:, \end{equation}
where $|R_0| = \mathrm{vol}(R_0)$ is the ``volume'' (actually the area) of $R_0$ and \begin{equation}\label{eq:charfn}
\chi_{A}(\vec{x}) = \begin{cases}
0 & \quad:\quad \vec{x} \notin A \\
1 & \quad:\quad \vec{x} \in A
\end{cases} \end{equation} is the characteristic function for a set $A \subset X$. A crucial requirement for these distributions is that they must be consistent with the partition $\mathcal{P}$ of the NDA, i.e. there must be a bi-index $\nu = (i, j)$ such that the support $R_0 \subset D_\nu$.
Inserting \pref{eq:iniuni} into the Frobenius-Perron equation \pref{eq:froper} yields for one iteration \begin{equation}\label{eq:froper2}
u(\vec{x}, t + 1) = \int_X \delta(\vec{x} - \Phi(\vec{x}')) u(\vec{x}', t) \, \mathrm{d} \vec{x}' \:. \end{equation}
In order to evaluate \pref{eq:froper2}, we first use the product decomposition of the involved functions: \begin{equation}\label{eq:decouni1}
u(\vec{x}, 0) = u_x(x, 0) u_y(y, 0) \end{equation} with \begin{eqnarray}
\label{eq:decounix}
u_x(x, 0) &=& \frac{1}{|I_0|} \chi_{I_0}(x) \\
\label{eq:decouniy}
u_y(y, 0) &=& \frac{1}{|J_0|} \chi_{J_0}(y) \end{eqnarray} and \begin{equation}\label{eq:decdelta}
\delta(\vec{x} - \Phi(\vec{x}')) = \delta(x - \Phi_x(\vec{x}')) \delta(y - \Phi_y(\vec{x}')) \:, \end{equation} where the intervals $I_0, J_0$ are the projections of $R_0$ onto $x$- and $y$-axes, respectively. Correspondingly, $\Phi_x$ and $\Phi_y$ are the projections of $\Phi$ onto $x$- and $y$-axes, respectively. These are obtained from \pref{eq:ndamap} as \begin{eqnarray}
\label{eq:mapx}
\Phi_x(\vec{x}') &=& a^{\nu}_x + \lambda^{\nu}_x x' \\
\label{eq:mapy}
\Phi_y(\vec{x}') &=& a^{\nu}_y + \lambda^{\nu}_y y' \:. \end{eqnarray} Using this factorization, the Frobenius-Perron equation \pref{eq:froper2} separates into \begin{eqnarray}
\label{eq:fpx}
u_x(x, t + 1) &=& \int_{[0, 1]} \delta(x - a^{\nu}_x - \lambda^{\nu}_x x') u_x(x', t) \, \mathrm{d} x' \\
\label{eq:fpy}
u_y(y, t + 1) &=& \int_{[0, 1]} \delta(y - a^{\nu}_y - \lambda^{\nu}_y y') u_y(y', t) \, \mathrm{d} y' \end{eqnarray}
Next, we evaluate the delta functions according to the well-known lemma \begin{equation}\label{eq:evadelta}
\delta(f(x)) = \sum_{l : \text{simple zeros}} |f'(x_l)|^{-1} \delta(x - x_l) \:, \end{equation} where $f'(x_l)$ indicates the first derivative of $f$ in $x_l$. \Eq{eq:evadelta} yields for the $x$-axis \begin{equation}\label{eq:zeros}
x_\nu = \frac{x - a^{\nu}_x}{\lambda^{\nu}_x} \:, \end{equation} i.e. one zero for each $\nu$-branch, and hence \begin{equation}\label{eq:slope}
|f'(x_\nu)| = \lambda^{\nu}_x \:. \end{equation} Inserting \pref{eq:evadelta}, \pref{eq:zeros} and \pref{eq:slope} into \pref{eq:fpx} gives \begin{eqnarray*}
u_x(x, t + 1) &=& \sum_\nu \int_{[0, 1]} \frac{1}{\lambda^{\nu}_x} \delta\left( x' - \frac{x - a^{\nu}_x}{\lambda^{\nu}_x} \right) u_x(x', t) \, \mathrm{d} x' \\
&=& \sum_\nu \frac{1}{\lambda^{\nu}_x} u_x\left( \frac{x - a^{\nu}_x}{\lambda^{\nu}_x}, t \right) \end{eqnarray*}
Next, we take into account that the distributions must be consistent with the NDA's partition. Therefore, for given $\vec{x} \in D_\nu$ there is only one branch of $\Phi$ contributing a simple zero to the sum above. Hence, \begin{equation}\label{eq:uniter}
u_x(x, t + 1) =
\sum_\nu \frac{1}{\lambda^{\nu}_x} u_x\left( \frac{x - a^{\nu}_x}{\lambda^{\nu}_x}, t \right) =
\frac{1}{\lambda^{\nu}_x} u_x\left( \frac{x - a^{\nu}_x}{\lambda^{\nu}_x}, t \right) \:. \end{equation}
\begin{theorem} The evolution of uniform p.d.f.s with rectangular support according to the NDA dynamics \Eq{eq:froper2} is governed by \begin{equation}\label{eq:fpuniform}
u(\vec{x}, t) = \frac{1}{|\Phi^t(R_0)|} \chi_{\Phi^t(R_0)}(\vec{x}) \:. \end{equation} \end{theorem}
\noindent\textbf{Proof (by means of induction).}
\noindent 1. Inserting the initial uniform density distribution \pref{eq:iniuni} for $t = 0$ into \Eq{eq:uniter}, we obtain by virtue of \pref{eq:decounix} \[
u_x(x, 1) = \frac{1}{\lambda^{\nu}_x} u_x\left( \frac{x - a^{\nu}_x}{\lambda^{\nu}_x}, 0 \right) =
\frac{1}{\lambda^{\nu}_x} \frac{1}{|I_0|} \chi_{I_0}\left( \frac{x - a^{\nu}_x}{\lambda^{\nu}_x} \right) \:. \] Deploying \pref{eq:charfn} yields \[
\chi_{I_0}\left( \frac{x - a^{\nu}_x}{\lambda^{\nu}_x} \right) =
\begin{cases}
0 & \quad:\quad \frac{x - a^{\nu}_x}{\lambda^{\nu}_x} \notin I_0 \\
1 & \quad:\quad \frac{x - a^{\nu}_x}{\lambda^{\nu}_x} \in I_0 \:.
\end{cases} \] Writing now $I_0 = [p_0, q_0] \subset [0, 1]$, we get \begin{eqnarray*}
&& \frac{x - a^{\nu}_x}{\lambda^{\nu}_x} \in I_0 \\
&\Longleftrightarrow&
p_0 \le \frac{x - a^{\nu}_x}{\lambda^{\nu}_x} \le q_0 \\
&\Longleftrightarrow&
\lambda^{\nu}_x p_0 \le x - a^{\nu}_x \le \lambda^{\nu}_x q_0 \\
&\Longleftrightarrow&
a^{\nu}_x + \lambda^{\nu}_x p_0 \le x \le a^{\nu}_x + \lambda^{\nu}_x q_0 \\
&\Longleftrightarrow&
\Phi_x(p_0) \le x \le \Phi_x(q_0) \\
&\Longleftrightarrow&
x \in \Phi_x(I_0) \:, \end{eqnarray*} where we made use of \pref{eq:mapx}.
Moreover, we have \[
\lambda^{\nu}_x |I_0| = \lambda^{\nu}_x (q_0 - p_0) = q_1 - p_1 = |I_1| \] with $I_1 = [p_1, q_1] = \Phi_x(I_0)$. Therefore, \[
u_x(x, 1) = \frac{1}{|I_1|} \chi_{I_1}(x) \:. \]
The same argumentation applies to the $y$-axis, such that we eventually obtain \begin{equation}\label{eq:uniter2}
u(\vec{x}, 1) = \frac{1}{|R_1|} \chi_{R_1}(\vec{x}) \:, \end{equation} with $R_1 = \Phi(R_0) $ the image of the initial rectangle $R_0 \subset X$. Thus, the image of a uniform density function with rectangular support is a uniform density function with rectangular support again.
2. Assume \pref{eq:fpuniform} is valid for some $t \in \mathbb{N}$. Then it is obvious that \pref{eq:fpuniform} also holds for $t + 1$ by inserting the $x$-projection of \pref{eq:fpuniform} into \pref{eq:uniter} using \pref{eq:decounix}, again. Then, the same calculation as under 1. applies when every occurrence of $0$ is replaced by $t$ and every occurrence of $1$ is replaced by $t + 1$.
By means of this construction we have implemented an NDA by a dynamically evolving field. Therefore, we call this representation \emph{dynamic field automaton (DFA)}.
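The theorem can also be illustrated numerically: the following Python sketch propagates a rectangular macrostate under the branch maps $\Phi_x, \Phi_y$ and tracks the height $1/|R_t|$ of the corresponding uniform density. The branch parameters are the same placeholder values as in the sketch of \Sec{sec:nda}, and the cell lookup assumes that the support stays inside a single partition cell, as required above.
\begin{verbatim}
PARAMS = {  # illustrative branch parameters per cell, as in the earlier sketch
    (0, 0): (0.5, 0.0, 0.5, 0.5),
    (0, 1): (0.0, 0.5, 0.5, 0.5),
    (1, 0): (0.5, 0.5, 0.5, 0.5),
    (1, 1): (0.0, 0.0, 0.5, 0.5),
}

def evolve_rectangle(I, J):
    """One Frobenius-Perron step for a uniform density supported on I x J."""
    nu = (0 if I[1] <= 0.5 else 1, 0 if J[1] <= 0.5 else 1)
    ax, ay, lx, ly = PARAMS[nu]
    return (ax + lx * I[0], ax + lx * I[1]), (ay + ly * J[0], ay + ly * J[1])

I, J = (0.0, 0.25), (0.5, 0.75)               # initial macrostate R_0
for t in range(4):
    height = 1.0 / ((I[1] - I[0]) * (J[1] - J[0]))
    print(f"t={t}: R_t = {I} x {J}, uniform height 1/|R_t| = {height:.1f}")
    I, J = evolve_rectangle(I, J)
\end{verbatim}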
\subsection{Kernel Construction} \label{sec:kern}
The Frobenius-Perron equation \pref{eq:froper2} can be regarded as a time-discretized Amari dynamic neural field equation \cite{Amari77b} which is generally written as \begin{equation}\label{eq:Amari}
\tau \frac{\partial u(\vec{x}, t)}{\partial t} + u(\vec{x}, t) =
\int_X w(\vec{x}, \vec{x}') f(u(\vec{x}', t)) \; \, \mathrm{d} \vec{x}' \:. \end{equation} Here, $\tau$ is the characteristic time constant of activation decay, $w(\vec{x}, \vec{x}')$ denotes the synaptic weight kernel, describing the connectivity between sites $\vec{x}, \vec{x}' \in X$ and $f$ is a typically sigmoidal activation function for converting membrane potential $u(\vec{x}, t)$ into spike rate $f(u(\vec{x}, t))$.
Discretizing time according to Euler's rule with increment $\Delta t = \tau$ yields \begin{eqnarray*}
\tau \frac{u(\vec{x}, t + \tau) - u(\vec{x}, t)}{\tau} + u(\vec{x}, t) &=&
\int_X w(\vec{x}, \vec{x}') f(u(\vec{x}', t)) \; \, \mathrm{d} \vec{x}' \\
u(\vec{x}, t + \tau) &=& \int_X w(\vec{x}, \vec{x}') f(u(\vec{x}', t)) \; \, \mathrm{d} \vec{x}' \:. \end{eqnarray*} For $\tau = 1$ and $f(u) = u$ the Amari equation becomes the Frobenius-Perron equation \pref{eq:froper2} when we set \begin{equation}\label{eq:nftkernel}
w(\vec{x}, \vec{x}') = \delta(\vec{x} - \Phi(\vec{x}')) \:. \end{equation} This is the general solution of the kernel construction problem \cite{PotthastGraben09a, GrabenPotthast09a}. Note that $\Phi$ is not injective, i.e.\ for fixed $x$ the kernel is a sum of delta functions coding the influence from different parts of the space $X = [0,1]^2$. Note further that higher-order discretization methods of explicit or implicit type such as the Runge-Kutta scheme could be applied to \Eq{eq:Amari} as well. But in this case the relationship between the Turing dynamics as expressed by the Frobenius-Perron equation \pref{eq:froper} and the neural field dynamics would become much more involved. We leave this as an interesting question for further research.
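Numerically, the singular kernel \pref{eq:nftkernel} can be approximated on a grid by an Ulam-type transfer matrix, whose entry $W_{ij}$ records whether grid cell $j$ is mapped into grid cell $i$ by $\Phi$. The following Python sketch, which reuses the placeholder branches of the earlier sketches and a crude one-sample-per-cell quadrature, performs one time-discretized step with $\tau = 1$ and $f(u) = u$ and reproduces the rectangle-to-rectangle evolution of \Sec{sec:evolv}; it is only an illustrative discretization, not part of the construction above.
\begin{verbatim}
import numpy as np

PARAMS = {(0, 0): (0.5, 0.0), (0, 1): (0.0, 0.5),     # translations; all branch
          (1, 0): (0.5, 0.5), (1, 1): (0.0, 0.0)}     # factors lambda = 0.5

def phi(x, y):
    ax, ay = PARAMS[(0 if x < 0.5 else 1, 0 if y < 0.5 else 1)]
    return ax + 0.5 * x, ay + 0.5 * y

n, h = 32, 1.0 / 32
W = np.zeros((n * n, n * n))                  # Ulam-type transfer matrix
for jx in range(n):
    for jy in range(n):
        x, y = phi((jx + 0.5) * h, (jy + 0.5) * h)    # one sample per cell
        ix, iy = min(int(x / h), n - 1), min(int(y / h), n - 1)
        W[ix * n + iy, jx * n + jy] = 1.0

u = np.zeros(n * n)                           # uniform density on [0,0.25]x[0.5,0.75]
for jx in range(n // 4):
    for jy in range(n // 2, 3 * n // 4):
        u[jx * n + jy] = 16.0
u_next = W @ u                                # one discretized step, tau = 1, f = id
print("support (cells):", np.count_nonzero(u), "->", np.count_nonzero(u_next))
print("uniform height :", u.max(), "->", u_next.max())
\end{verbatim}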
\section{DISCUSSION} \label{sec:discu}
In this communication we combined nonlinear dynamical automata as implementations of Turing machines by nonlinear dynamical systems with dynamic field theory, where computations are characterized as evolution in function spaces over abstract feature spaces. Choosing the unit square of NDAs as feature space we demonstrated that Turing computation becomes represented as dynamics in the space of uniform probability density functions with rectangular support.
The suggested framework of dynamic field automata may exhibit several advantages. First of all, massively parallel computation could become possible by extending the space of admissible p.d.f.s. By allowing either for supports that overlap the partition of the underlying NDA or for multimodal distribution functions, one could prepare as many symbolic representations as one wants and process them in parallel by the DFA. Moreover, DFAs could be easily integrated into wider dynamic field architectures for object recognition or movement preparation. They could be programmed for problem-solving, logical inferences or syntactic language processing. In particular, Bayesian inference or the processing of stochastic grammars could be implemented by means of appropriate p.d.f.s.
For those applications, DFAs should be embedded into time-continuous dynamics. This involves the construction of more complicated kernels through solving inverse problems along the lines of Potthast et al. \cite{PotthastGraben09a, GrabenPotthast09a}. We shall leave these questions for future research.
The construction of DFAs has also interesting philosophical implications. One of the long-standing problems in philosophy of science was the precise relationship between point mechanics, statistical mechanics and thermodynamics in theoretical physics: Is thermodynamics merely reducible to point mechanics via statistical mechanics? Or are thermodynamic properties such as temperature emergent on mechanical descriptions?
According to the careful analysis of Bishop and Atmanspacher \cite{BishopAtmanspacher06}, point mechanics and statistical mechanics simply provide two different levels of description: On one hand, point mechanics deals with the dynamics of microstates in phase space. On the other hand, statistical mechanics, in the formulation of Koopman et al. \cite{Koopman31, KoopmanVonNeumann32} (see \Sec{sec:dfa}), deals with the evolution of probability distributions over phase space, namely macrostates, in abstract function spaces. Both are completely disparate descriptions, neither reducible to the other. However, the huge space of (largely unphysical) macrostates must be restricted to a subspace of physically meaningful thermal equilibrium states that obey a particular stability criterion (essentially the maximum-entropy principle). This restriction of states bears upon a contingent context, and in this sense, thermodynamic properties have been called \emph{contextually emergent} by \cite{BishopAtmanspacher06}.
Our construction of DFAs exhibits an interesting analogy to the relationship between mechanical micro- and thermal macrostates: Starting from microscopic nonlinear dynamics of an NDA, we used the Frobenius-Perron equation for probability density functions in order to derive an evolution law of macrostates: The time-discretized Amari equation \pref{eq:Amari} with kernel \pref{eq:nftkernel}. However, with respect to the underlying NDA, not every p.d.f. can be interpreted as a symbolic representation of a Turing machine configuration. Therefore, we had to restrict the space of all possible p.d.f.s, by taking only uniform p.d.f.s with rectangular support into account. For those macrostates we were able to prove that the resulting DFA implements the original Turing machine. In this sense, the restriction to uniform p.d.f.s with rectangular support introduces a contingent context from which symbolic computation emerges. (Note that uniform p.d.f.s also have maximal entropy).
\ack This research was supported by a Heisenberg grant (GR 3711/1-1) of the German Research Foundation (DFG) awarded to PbG. Preliminary results have been presented at a special session ``Cognitive Architectures in Dynamical Field Theory'', that was partially funded by an EuCogIII grant, at the 2nd International Conference on Neural Field Theory, hosted by the University of Reading (UK). We thank Yulia Sandamirskaya, Slawomir Nasuto and Gregor Sch\"oner for inspiring discussions.
\end{document}
\begin{document}
\title{Electric Field effects on quantum correlations in semiconductor quantum dots}
\author{S.~Shojaei} \altaffiliation{ Author to whom correspondence should be addressed; electronic mail: [email protected], [email protected]}
\affiliation{ Photonics Group, Research Institute for Applied Physics and Astronomy (RIAPA), University of Tabriz, 51665-163 Tabriz, Iran}
\author{M.~Mahdian}
\affiliation{ Faculty of Physics, Theoretical and astrophysics department , University of Tabriz, 51665-163 Tabriz, Iran}
\author{R.~Yousefjani}
\affiliation{ Faculty of Physics, Theoretical and astrophysics department , University of Tabriz, 51665-163 Tabriz, Iran}
\begin{abstract}
We study the effect of an external electric bias on the quantum correlations in an array of optically excited coupled semiconductor quantum dots. The correlations are characterized by the quantum discord and the concurrence and are observed using excitonic qubits. We employ the lower bound of concurrence for the thermal density matrix at different temperatures. The effect of the F\"{o}rster interaction on the correlations is studied. Our theoretical model detects a nonvanishing quantum discord when the electric field is on while the concurrence dies, ensuring the existence of nonclassical correlations as measured by the quantum discord.
\end{abstract}
\pacs{78.67.-n, 78.67.Hc, 03.67.-a, 03.67.Mn}
\maketitle
\section*{1 Introduction}
In recent years, quantum discord has attracted a lot of interest because it indicates that entangled states are not the only kind of quantum states exhibiting nonclassical features\cite{zurek, Zurek2, Henderson, Luo, Vedral, Datta}. Quantum discord is a measure of the discrepancy between two natural yet different quantum analogs of the classical mutual information. Quantum discord captures the fundamental feature of the quantumness of nonclassical correlations, much like quantum entanglement, but it goes beyond quantum entanglement because quantum discord is present even in separable quantum states. Quantum discord has strong potential in the study of dynamical processes\cite{zurek3, oppenheim, Horodecki, mahdian} and of quantum information processes such as the broadcasting of quantum states\cite{piani1, piani}, quantum state merging \cite{cavalcanti, madhok}, quantum entanglement distillation\cite{cornelio, piani3}, and entanglement of formation \cite{fanchini}.\\ Besides, there has been considerable interest in the quantum information properties of solid-state environments, particularly semiconductor quantum dots (QDs), due to their well-defined, controllable atom-like and molecule-like properties\cite{filippo, filippo2,hawrylak}. For studying the electro-optical properties of QDs (especially coupled ones), one of the most interesting parameters is the exciton-exciton interaction. This kind of interaction between first-neighbor dots allows one to implement a scheme for quantum information processing on QDs \cite{chen}. It was shown that the biexcitonic shift due to the dipolar interaction allows for subpicosecond quantum gate operations\cite{rinaldis02}. It was also proven that optically driven QDs carrying excitons are good candidates for the implementation of quantum gates and quantum computation\cite{xiaoqin,Herschbach,Friedrich}.
In order to study the amount of concurrence and discord in the array of excitons inside the QDs, we employ the lower bound of concurrence for the thermal density matrix of identical and equidistant coupled QDs at different bath temperatures. Furthermore, the manipulation of entanglement and discord by means of an electric field is discussed. We show that quantum correlations may be enhanced upon increasing the temperature and are decreased by increasing the electric field across the system. Our theoretical model detects a nonvanishing quantum discord, ensuring the existence of nonclassical correlations as measured by the quantum discord.
\section{2 Theory: quantum dot model and Hamiltonian}
The model sample used to study the quantum correlation properties of optically driven QDs is a series of coupled InAs QDs with small, equal spacing between them along a common axis. This model can be realized experimentally (see for example Ref.~\cite{Nishibayashi}). In this model, the F\"{o}rster mechanism\cite{Forster} is a valid model to explain the energy transfer between QDs through the dipolar interaction between the excitons. Here, the qubits are the excitonic electric dipole moments located in each QD, which can only orient along ($|0\rangle$) or against ($|1\rangle$) the external electric field. For such a system, tuning and controlling the quantum correlation between the dipoles is of great importance. The governing Hamiltonian of the dipoles in the presence of an external electric field reads:
\begin{eqnarray} H &=& \hbar\sum_{i=1}^{n}\omega_i\left[S_z^i+\frac{1}{2}\right]+\hbar\sum_{i=1}^{n}\Omega_i S_z^{i}+\hbar\sum_{i=1}^{n}J_z S_+^i S_-^j\nonumber\\ &&+\,\frac{1}{2}\sum_{i,j=1}^{n}\lambda\left[S_+^iS_-^j+S_-^jS_+^i\right], \end{eqnarray}
where $S_+^i=|0\rangle\langle1|$, $S_-^i=|1\rangle\langle0|$, and $S_z^i=\frac{1}{2}(|0\rangle\langle0|-|1\rangle\langle1|)$. Here $\omega_i$ denotes the frequency of the exciton in the $i$th QD, and $\Omega_i$ is the frequency related to the dipole moment (exciton), which is a function of the dipole moment and the external electric field $E$ at the $i$th QD:
\begin{equation}
\hbar\Omega_i=|\vec{d}.\vec{E}|, \end{equation}
where $\vec{d}$ is the electric dipole moment carried by the exciton, which is assumed to be the same for each QD. $\lambda$ denotes the F\"{o}rster interaction, which transfers an exciton from one QD to another. $J_z$ is the exciton-exciton dipolar interaction energy, which reads:
\begin{eqnarray} \hbar J_z=\frac{d^{\,2}(1-3\cos^2\theta)}{r_{ij}^3}, \end{eqnarray}
where $r_{ij}$ is the distance between dipoles $i$ and $j$, whose separation vector is assumed to lie along the $z$ axis, and $\theta$ is the angle between the dipole moments and this axis.
For a qualitative discussion of the quantum correlations we assume that the dipole moment carried by the exciton is of the order of ten debye and that a typical experimental electric field is $10^6$ $V/m$, so that the dipolar interaction parameter ($\hbar J_z$) is of the order of meV. The F\"{o}rster interaction energy ($\hbar \lambda$) and $\hbar \Omega$ are also assumed to be of the order of meV. These values are consistent with experimental observations and calculations\cite{zhu}.
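These orders of magnitude can be checked with a quick back-of-the-envelope computation, sketched below in Python in SI units. The dipole moment, inter-dot distance and field strength are assumed, illustrative values (not taken from a specific sample), the factor $4\pi\varepsilon_0$ is written explicitly because of the SI units, and the angular factor $(1-3\cos^2\theta)$, being of order one, is dropped.
\begin{verbatim}
from math import pi

eps0 = 8.854e-12            # vacuum permittivity, F/m
meV  = 1.602e-22            # joule per meV

d = 10 * 3.336e-30          # assumed exciton dipole moment: 10 debye, in C*m
r = 5e-9                    # assumed inter-dot distance: 5 nm
E = 20e6                    # assumed applied field, V/m

J_z   = d**2 / (4 * pi * eps0 * r**3)    # dipolar scale, angular factor dropped
Omega = d * E                            # |d . E|

print(f"hbar*J_z   ~ {J_z / meV:.1f} meV")     # ~0.5 meV
print(f"hbar*Omega ~ {Omega / meV:.1f} meV")   # ~4 meV
\end{verbatim}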
\subsection{2.1 Lower bound concurrence}
Since entanglement is conceived as a resource to perform various tasks of quantum information processing \cite{23,24,25,26}, knowledge about the amount of entanglement in a quantum state is very important. Indeed, knowing the value of the entanglement means knowing how well a certain task can be accomplished. The problem of quantifying entanglement is essentially solved only for bipartite systems in pure states \cite{27} and for two-qubit systems in mixed states \cite{28}. In multipartite systems, even in the pure-state case, this problem is not exactly solved and only lower bounds for the entanglement have been proposed \cite{29,30,31,32}. Here, in order to quantify the entanglement between the three dipoles (excitons), we use the lower bound of concurrence for three-qubit states recently suggested by Li et al. \cite{33} \begin{eqnarray}
\tau_{3}(\rho)=\frac{1}{\sqrt{3}}(\sum_{j=1}^{6}(C_{j}^{12|3})^{2}+(C_{j}^{13|2})^{2}+(C_{j}^{23|1})^{2})^{\frac{1}{2}}, \end{eqnarray}
where the $C_{j}^{12|3}$ are the terms of the bipartite concurrence between qubits $12$ and $3$, given by \begin{eqnarray}
C_{j}^{12|3}=\max \{0,\lambda_{j}^{12|3}(1)-\lambda_{j}^{12|3}(2)-\lambda_{j}^{12|3}(3)-\lambda_{j}^{12|3}(4)\}. \end{eqnarray}
In this notation, $\lambda_{j}^{12|3}(\kappa)$, $(\kappa=1,\ldots,4)$, are the square roots of the eigenvalues, in decreasing order, of the non-Hermitian matrix $\rho \tilde{\rho}_{j}^{12|3}$. The matrices $\tilde{\rho}_{j}^{12|3}$ are obtained by rotating the complex conjugate of the density operator, $\rho^{*}$, with the operator $S_{j}^{12|3}$, as $\tilde{\rho}_{j}^{12|3}=S_{j}^{12|3}\, \rho^{*} \, S_{j}^{12|3} $. The rotation operators $S_{j}^{12|3}$ are given by the tensor product of the six generators of the group SO(4), $L_{j}^{12}$, and the single generator of the group SO(2), $L_{0}^{3}$, that is, $S_{j}^{12|3}=L_{j}^{12} \otimes L_{0}^{3}$. Since the matrix $S_{j}^{12|3}$ has four rows and columns which are identically zero, the rank of the non-Hermitian matrix $\rho \tilde{\rho}_{j}^{12|3}$ cannot be larger than 4, i.e., $\lambda_{j}^{12|3}(\kappa)=0$ for $\kappa\geq 5$. The bipartite concurrences $C^{13|2}$ and $C^{23|1}$ are defined in a similar way to $C^{12|3}$.
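A numerical evaluation of the ingredients $C_{j}^{12|3}$ can be sketched along the following lines in Python. The construction of the rotation operators from the SO(4) and SO(2) generators follows the recipe above, but the ordering and normalization conventions of Ref.~\cite{33} may differ in detail, so the sketch is only illustrative; the terms for the other two bipartitions are obtained analogously after permuting the qubit ordering of $\rho$.
\begin{verbatim}
import numpy as np
from itertools import combinations

def so_generators(n):
    """Antisymmetric generators L_ab (a < b) of SO(n) as n x n matrices."""
    gens = []
    for a, b in combinations(range(n), 2):
        L = np.zeros((n, n))
        L[a, b], L[b, a] = 1.0, -1.0
        gens.append(L)
    return gens

def bipartite_concurrences_12_3(rho):
    """The six terms C_j^{12|3} entering the lower bound tau_3."""
    L0 = np.array([[0.0, 1.0], [-1.0, 0.0]])       # single SO(2) generator
    terms = []
    for L in so_generators(4):                     # six SO(4) generators (qubits 1, 2)
        S = np.kron(L, L0)
        rho_tilde = S @ rho.conj() @ S
        lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(rho @ rho_tilde).real)))[::-1]
        terms.append(max(0.0, lam[0] - lam[1:4].sum()))
    return terms

ghz = np.zeros(8); ghz[0] = ghz[7] = 1 / np.sqrt(2)    # GHZ test state
C = bipartite_concurrences_12_3(np.outer(ghz, ghz))
print(sum(c**2 for c in C) ** 0.5)    # close to 1: the 12|3 concurrence of GHZ
\end{verbatim}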
In order to calculate the thermal entanglement, we need the temperature-dependent density matrix. For a system in equilibrium at a temperature $T$ it reads $\rho=\exp(-\beta\hat{H})/Z$, where $\beta=1/KT$, $K$ is the Boltzmann constant, and $Z=Tr(\exp(-\beta\hat{H}))$ is the partition function. In this case, the partition function is
\begin{eqnarray} Z(T)=\sum_{i=1}g_ie^{-\beta\lambda_i}, \end{eqnarray}
where $\lambda_i$ is the $i$th eigenvalue and $g_i$ its degeneracy, and the corresponding density matrix can be written as
\begin{eqnarray}
\rho(T)=\frac{1}{Z}\sum_i^N e^{-\beta\lambda_i}|\Phi_i\rangle\langle\Phi_i|, \end{eqnarray}
where $|\Phi_i\rangle$ is the $i$th eigenfunction. The density matrix for the system considered here has the form
\begin{equation}
\rho(T) = \left( \begin{array}{cccccccc} \rho_{11}&0&0&0&0&0&0&0 \\ 0&\rho_{22}&\rho_{23}&0&\rho_{23}&0&0&0 \\ 0&\rho_{23}&\rho_{22}&0&\rho_{23}&0&0&0 \\ 0&0&0&\rho_{44}&0&\rho_{46}&\rho_{46}&0 \\ 0&\rho_{23}&\rho_{23}&0&\rho_{22}&0&0&0 \\ 0&0&0&\rho_{46}&0&\rho_{44}&\rho_{46}&0 \\ 0&0&0&\rho_{46}&0&\rho_{46}&\rho_{44}&0 \\ 0&0&0&0&0&0&0&\rho_{88}\end{array} \right), \end{equation}
with \begin{eqnarray} \rho_{11}&=&\frac{e^{\beta\lambda}}{1+e^{\beta(2d+Fz+w)}} \cr\cr&\times&[e^{\beta(2d+2Fz+w)}+e^{\beta\lambda}-e^{\beta(2d+Fz+w+\lambda)}\cr&& +e^{\beta(4d+2Fz+2w+\lambda)}+2e^{\beta(2d+2Fz+w+\frac{3}{2}\lambda)}]^{-1}, \cr\cr\cr \rho_{22}&=&\frac{1}{3}(1+2e^{\frac{3}{2}\beta\lambda})\cr&\times& [(1+e^{\beta(2d+Fz+w)})(1+2e^{\frac{3}{2}\beta\lambda})\cr&& + e^{\beta(\lambda-2d-2Fz-w)}+e^{\beta(4d+Fz+2w+\lambda)}]^{-1}, \cr\cr\cr \rho_{23}&=&\frac{e^{\beta(2d+2Fz+w)}(1-e^{\frac{3}{2}\beta\lambda})}{3(1+e^{\beta(2d+Fz+w)})} \cr\cr&\times& [e^{\beta(2d+2Fz+w)}+e^{\beta\lambda}-e^{\beta(2d+Fz+w+\lambda)}\cr&& +e^{\beta(4d+2Fz+2w+\lambda)}+2e^{\beta(2d+2Fz+w+\frac{3}{2}\lambda)}]^{-1}, \cr\cr\cr \rho_{44}&=&\frac{1}{3}(1+2e^{\frac{3}{2}\beta\lambda})\cr&\times& [(1+e^{-\beta(2d+Fz+w)})(1+2e^{\frac{3}{2}\beta\lambda})\cr&& + e^{\beta(\lambda-4d-3Fz-2w)}+e^{\beta(2d+w+\lambda)}]^{-1}, \cr\cr\cr \rho_{46}&=&\frac{e^{\beta(4d+3Fz+2w)}(1-e^{\frac{3}{2}\beta\lambda})}{3(1+e^{\beta(2d+Fz+w)})} \cr\cr&\times& [e^{\beta(2d+2Fz+w)}+e^{\beta\lambda}-e^{\beta(2d+Fz+w+\lambda)}\cr&& +e^{\beta(4d+2Fz+2w+\lambda)}+2e^{\beta(2d+2Fz+w+\frac{3}{2}\lambda)}]^{-1}, \cr\cr\cr \rho_{88}&=&e^{3\beta d} \cr\cr&\times&[e^{3\beta d}+e^{-3\beta(d+Fz+w)}+2e^{\beta(d-w+\frac{1}{2}\lambda)}\cr&& +e^{\beta(d-w-\lambda)}+e^{-\beta(d+Fz+2w+\lambda)}\cr &&+2e^{-\beta(d+Fz+2w+\frac{1}{2}\lambda)}]^{-1}. \end{eqnarray}
Since the explicit expressions of solutions of Eq. (7) are very complicated, here we skip the details and give our results in terms of figures. The discussion of results will be postponed to the next section.
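In practice, $\rho(T)$ may equally well be assembled numerically from the spectral decomposition of the Hamiltonian; a generic numpy sketch (written for an arbitrary Hermitian matrix, not specifically for the model Hamiltonian above) is the following.
\begin{verbatim}
import numpy as np

def thermal_state(H, kT):
    """rho(T) = exp(-H/kT)/Z from the eigendecomposition of a Hermitian H."""
    evals, evecs = np.linalg.eigh(H)
    w = np.exp(-(evals - evals.min()) / kT)   # shift avoids overflow, cancels in Z
    return (evecs * (w / w.sum())) @ evecs.conj().T

# sanity check on a random 8x8 Hermitian matrix (three-qubit Hilbert space)
A = np.random.randn(8, 8) + 1j * np.random.randn(8, 8)
rho = thermal_state((A + A.conj().T) / 2, kT=1.0)
print(np.trace(rho).real, np.allclose(rho, rho.conj().T))   # 1.0  True
\end{verbatim}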
\subsection{2.2 Quantum discord}
For a bipartite system $AB$, quantum discord is defined as the discrepancy between the quantum versions of two classically equivalent expressions for the mutual information \cite{20} \begin{eqnarray} \label{2-2} \textit{D}(\rho^{AB})=\textit{I}(\rho^{AB})- \textit{C}(\rho^{AB}), \end{eqnarray}
where $\textit{I}(\rho^{AB})=S(\rho^{A})+S(\rho^{B})-S(\rho^{AB})$ and the classical correlation is the maximum information about one subsystem ($\rho^{A}$ or $\rho^{B}$) that can be extracted by a measurement performed on the other subsystem, $\textit{C}(\rho^{AB})=\max[S(\rho^{A})-S(\rho^{AB}|\{\Pi_{k}\})]$, with $S(\rho)=-Tr[\rho \log_{2}\rho]$ the von Neumann entropy. Notice that the maximum is taken over the set of projective measurements $\{\Pi_{k}\}$ \cite{4}. \\Defining the conditional density operator $\rho^{AB}_{k}=\frac{1}{p_{k}}\{(I^{A}\otimes \Pi_{k}^{B})\rho^{AB}(I^{A}\otimes \Pi_{k}^{B})\}$, with $p_{k}=Tr[(I^{A}\otimes \Pi_{k}^{B})\rho^{AB}]$ the probability of obtaining the outcome $k$, we can define the conditional entropy of $A$ as
$S(\rho^{AB}|\{\Pi_{k}\})=\sum_{k}p_{k}S(\rho_{k}^{A})$ with $\rho_{k}^{A}=Tr_{B}[\rho_{k}^{AB}]$ and $S(\rho_{k}^{A})=S(\rho_{k}^{AB})$. It has been shown that $\textit{D}(\rho^{AB})\geq 0$, with equality only for classical correlations \cite{5}. \\Very recently, Rulli et al. \cite{3} have proposed a global measure of quantum discord based on a systematic extension of the bipartite quantum discord. The global quantum discord (GQD), which satisfies the basic requirements of a correlation function, is defined for an arbitrary multipartite state $\rho^{A_{1}...A_{N}}$ under a set of local measurements $\{\Pi_{j}^{A_{1}}\otimes ... \otimes \Pi_{j}^{A_{N}}\}$ as
\begin{eqnarray}
\textit{D}(\rho^{A_{1}...A_{N}})=\min_{\{\Pi_{k}\}}\,[S(\rho^{A_{1}...A_{N}}\|\Phi(\rho^{A_{1}...A_{N}}))\nonumber\\
-\sum_{j=1}^{N}S(\rho^{A_{j}}\|\Phi_{j}(\rho^{A_{j}}))]. \end{eqnarray}
where $\Phi_{j}(\rho^{A_{j}})=\sum_{i}\Pi_{i}^{A_{j}}\rho^{A_{j}}\Pi_{i}^{A_{j}}$ and $\Phi(\rho^{A_{1}...A_{N}})=\sum_{k}\Pi_{k}\rho^{A_{1}...A_{N}}\Pi_{k}$, with $\Pi_{k}=\Pi_{j_{1}}^{A_{1}}\otimes ... \otimes\Pi_{j_{N}}^{A_{N}}$ and $k$ denoting the index string ($j_{1}...j_{N}$). The dependence on the measurement is eliminated by minimizing over the set of projectors $\{\Pi_{j_{1}}^{A_{1}}, ... ,\Pi_{j_{N}}^{A_{N}}\}$. \\ Using a set of von Neumann measurements of the form
$$ \begin{array}{cc}
\Pi_{1}^{A_{j}}=\left(\begin{array}{cc}
\cos^{2}(\frac{\theta_{j}}{2})& e^{i\varphi_{j}}\cos(\frac{\theta_{j}}{2})\sin(\frac{\theta_{j}}{2}) \\
e^{-i\varphi_{j}}\cos(\frac{\theta_{j}}{2})\sin(\frac{\theta_{j}}{2}) & \sin^{2}(\frac{\theta_{j}}{2}) \\ \end{array} \right), \end{array} $$ $$ \begin{array}{cc}
\Pi_{2}^{A_{j}}= \left( \begin{array}{cc}
\sin^{2}(\frac{\theta_{j}}{2})& -e^{-i\varphi_{j}}\cos(\frac{\theta_{j}}{2})\sin(\frac{\theta_{j}}{2}) \\
-e^{i\varphi_{j}}\cos(\frac{\theta_{j}}{2})\sin(\frac{\theta_{j}}{2}) & \cos^{2}(\frac{\theta_{j}}{2})\\ \end{array} \right), \end{array} $$
with $\theta_{j}\in[0,\pi)$ and $\varphi_{j}\in[0,2\pi)$ for $j=1,2,3$, the definition of the global quantum discord reduces to
\begin{eqnarray}
\textit{D}(\rho(T))=\min_{\{\theta_{j},\varphi_{j}\}}\,[S(\rho(T)\|\Phi(\rho(T)))\nonumber\\
-\sum_{j=1}^{3}S(\rho^{A_{j}}\|\Phi_{j}(\rho^{A_{j}}))]. \end{eqnarray}
By tracing out two of the qubits, the one-qubit density matrices representing the individual subsystems are
$$ \begin{array}{cc}
\rho^{A_{j=1,2,3}}= \left(
\begin{array}{cc}
\rho_{11}+2\rho_{22}+\rho_{44}& 0 \\
0 & \rho_{22}+2\rho_{44}+\rho_{88} \\ \end{array} \right). \end{array} $$
To find the measurement bases that minimize the quantum discord, after some algebra we have found that adopting local measurements in the $\sigma_{z}$ eigenbasis of each particle minimizes it. This leads to $S(\rho^{A_{j}}\|\Phi_{j}(\rho^{A_{j}}))=0$ and
\begin{eqnarray} S(\Phi(\rho(T)))&=&-\{\rho_{11}\log_{2}(\rho_{11})+\rho_{88}\log_{2}(\rho_{88})\cr&+& 3\rho_{22}\log_{2}(\rho_{22})+3\rho_{44}\log_{2}(\rho_{44})\}. \end{eqnarray}
The entropy $S(\rho(T))$ can be obtained as
\begin{eqnarray} S(\rho(T))&=&-\{\rho_{11}\log_{2}(\rho_{11})+\rho_{88}\log_{2}(\rho_{88})\cr&+&2(\rho_{22}-\rho_{23})\log_{2}(\rho_{22}-\rho_{23})\cr &+&(\rho_{22}+2\rho_{23})\log_{2}(\rho_{22}+2\rho_{23})\cr&+& 2(\rho_{44}-\rho_{46})\log_{2}(\rho_{44}-\rho_{46})\cr&+&(\rho_{44}+2\rho_{46})\log_{2}(\rho_{44}+2\rho_{46})\}. \end{eqnarray}
In this case, the quantum discord is explicitly obtained as
\begin{eqnarray} \textit{D}(\rho(T))&=&-3\rho_{22}\log_{2}(\rho_{22})-3\rho_{44}\log_{2}(\rho_{44})\cr&+&2(\rho_{22}-\rho_{23})\log_{2}(\rho_{22}-\rho_{23})\cr &+&(\rho_{22}+2\rho_{23})\log_{2}(\rho_{22}+2\rho_{23})\cr&+& 2(\rho_{44}-\rho_{46})\log_{2}(\rho_{44}-\rho_{46})\cr&+&(\rho_{44}+2\rho_{46})\log_{2}(\rho_{44}+2\rho_{46}). \end{eqnarray}
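Given numerical values of the independent entries $\rho_{11},\rho_{22},\rho_{23},\rho_{44},\rho_{46},\rho_{88}$, this closed form is straightforward to evaluate. A minimal Python sketch with arbitrary illustrative entries (respecting unit trace, but not computed from the thermal expressions above) is the following.
\begin{verbatim}
import numpy as np

def xlog2x(p):
    """p * log2(p) with the convention 0 * log(0) = 0."""
    p = np.asarray(p, dtype=float)
    return np.where(p > 0, p * np.log2(np.where(p > 0, p, 1.0)), 0.0)

def global_discord(r11, r22, r23, r44, r46, r88):
    """D = S(Phi(rho)) - S(rho) for sigma_z measurements on each qubit."""
    S_phi = -(xlog2x(r11) + xlog2x(r88) + 3 * xlog2x(r22) + 3 * xlog2x(r44))
    S_rho = -(xlog2x(r11) + xlog2x(r88)
              + 2 * xlog2x(r22 - r23) + xlog2x(r22 + 2 * r23)
              + 2 * xlog2x(r44 - r46) + xlog2x(r44 + 2 * r46))
    return float(S_phi - S_rho)

# illustrative entries, unit trace: r11 + 3*r22 + 3*r44 + r88 = 1
print(global_discord(r11=0.4, r22=0.1, r23=0.05, r44=0.05, r46=0.02, r88=0.15))
\end{verbatim}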
\begin{figure}
\caption{(Color online) Discord (top panel) and concurrence (bottom panel) measures versus temperature for different values of the F\"{o}rster interaction. The electric field is zero.}
\label{fig1}
\end{figure}
\begin{figure}
\caption{(Color online) Discord (top panel) and concurrence (bottom panel) measures versus temperature for different values of the F\"{o}rster interaction. The applied electric field is about $20\times10^6$ $V/m$.}
\label{fig2}
\end{figure}
\section{3 Results and discussion}
In what follows, we theoretically test for the existence of quantum discord in the aforementioned system containing excitonic qubits and study how it behaves under an external electric field. Comparing the concurrence and the discord at different temperatures in Fig. 1 shows that the quantum discord survives at relatively high temperatures where the concurrence is zero; the figures also show that at very low temperatures both measures yield the same result. At higher temperatures, the concurrence suddenly diminishes while the quantum discord is still finite, slowly decreasing to zero. This is in accord with the fact that discord quantifies nonclassical correlations beyond entanglement. Generally, both quantum correlations decay with temperature due to thermal relaxation effects. The figures demonstrate that when the electrical bias is off the quantum correlations increase monotonically with increasing F\"{o}rster interaction. This can be explained by the increase of the excitonic interaction, which enhances the correlations. At F\"{o}rster interactions higher than about $10\,meV$ and at temperatures smaller than $20\,K$, the concurrence is slightly larger than the discord. In contrast, at lower F\"{o}rster interactions the opposite trend is observed and the discord exceeds the concurrence for almost all temperatures, because, as mentioned above, the discord indicates nonclassical correlations beyond entanglement even when $\lambda$ is low. The figures also make clear that at lower $\lambda$ temperature helps to build up the correlations: for very low temperatures, up to about $20\,K$ (for the discord) and $10\,K$ (for the concurrence), discord and concurrence increase with increasing temperature due to thermal entanglement effects\cite{Arnesen}.\\ Figure 2 displays the discord and concurrence when $\hbar\Omega$ is assumed to be 2.5 $meV$, which corresponds to an electric field $E$ of about $20\times10^6$ $V/m$ (this small value of the electric field can be realized very easily in experiments). It is clear that the discord and the concurrence are smaller than those observed in Fig. 1. This indicates that turning the electric field on decreases the correlations. This is because the electric field makes all the dipoles align in the same direction, which increases the repulsive dipolar interaction and immediately leads to a reduction of the Coulomb-induced correlations\cite{Saeid}. For all values of $\lambda$ the discord survives with increasing $E$, while the concurrence dies for smaller $\lambda$. Our results show that increasing the electric field overcomes the effect of the F\"{o}rster interaction and removes the concurrence for all values of $\lambda$; however, a nonzero discord can still be observed for higher values of the electric field (Fig. 3). This manifests the importance of the discord as a measure of the quantum correlations.
\begin{figure}
\caption{(Color online) Discord (top panel) and concurrence (bottom panel) measures versus temperature for different values of the F\"{o}rster interaction. The applied electric field is about $40\times10^6$ $V/m$.}
\label{fig3}
\end{figure}
\section{4. Conclusions}
In summary, we studied the quantum discord and concurrence measures in an array of optically driven coupled quantum dots. The qubits are excitons in each quantum dot, which can be modeled by dipoles. We used the lower bound of concurrence for the thermal density matrix of identical and equidistant coupled QDs at different temperatures. We found that the discord and the concurrence are enhanced by increasing the F\"{o}rster interaction parameter, which results from the growth of the correlations with this parameter. For very low temperatures and higher F\"{o}rster interaction, the concurrence is slightly larger than the discord. In stark contrast, at low F\"{o}rster interactions the opposite trend is observed. Also, in the very-low-temperature region both measures increase because of thermal entanglement, and then tend to zero at higher temperatures due to thermal relaxation effects. We observed that when the electric field is switched on the correlations diminish; however, the discord survives while the concurrence dies for all values of the F\"{o}rster interaction. This is because the electric field makes all the dipoles parallel, which increases the repulsive dipolar interaction and finally decreases the entanglement.
\section*{References}
\end{document}
\begin{document}
\begin{abstract} In this article, we are interested in the Einstein vacuum equations on a Lorentzian manifold displaying $\mathbb{U}(1)$ symmetry. We identify some freely prescribable initial data, solve the constraint equations and prove the existence of a unique, local-in-time solution at the $H^3$ level. In addition, we prove a blow-up criterion at the $H^2$ level. By doing so, we improve a result of Huneau and Luk in \cite{hunluk18} on a similar system, and our main motivation is to provide a framework adapted to the study of high-frequency solutions to the Einstein vacuum equations done in a forthcoming paper by Huneau and Luk. As a consequence we work in an elliptic gauge, particularly adapted to the handling of high-frequency solutions, which have large high-order norms. \end{abstract} \maketitle \tableofcontents
\section{Introduction}
\subsection{Presentation of the results}
In this article, we are interested in solving the Einstein vacuum equations \begin{equation*} R_{\mu\nu}-\frac{1}{2}Rg_{\mu\nu}=0 \end{equation*} on a four-dimensional Lorentzian manifold $(\mathcal{M}, ^{(4)}g)$, where $R_{\mu\nu}$ and $R$ are the Ricci tensor and the scalar curvature associated to $^{(4)}g$. We assume that the manifold $\mathcal{M}$ admits a translation Killing field, this symmetry being called the $\mathbb{U}(1)$ symmetry. Thanks to this symmetry, the $3+1$ Einstein vacuum equations reduce to the $2+1$ Einstein equations coupled with two scalar fields satisfying a wave map system:
\left\{ \begin{array}{l} \Box_g \varphi =-\frac{1}{2}e^{-4\varphi}\partial^\rho\omega\partial_\rho\omega\\ \Box_g \omega =4\partial^\rho\omega\partial_\rho\varphi\\ R_{\mu\nu}(g) =2\partial_{\mu}\varphi\partial_{\nu}\varphi+\frac{1}{2}e^{-4\varphi}\partial_\mu\omega\partial_\nu\omega \end{array} \right. \end{equation} where $\varphi$ and $\omega$ are the two scalar fields and $g$ is a $2+1$ lorentzian metric appearing in the decomposition of $^{(4)}g$ (see Section \ref{section U(1) symmetry} for more details).
The goal of this paper is to solve the previous system in an elliptic gauge. This particular choice of gauge for the $2+1$ spacetime will be precisely defined in Section \ref{section geometrie}, but let us just say for now that it allows us to recast the Einstein equations as a system of \textit{semilinear elliptic equations} for the metric coefficients. This gauge is therefore especially useful for low-regularity problems, since it offers additional regularity for the metric.
More precisely, we obtain two results on this system: local well-posedness with some precise smallness assumptions and a blow-up criterion. Both these results can be roughly stated as follows (see Theorems \ref{theoreme principal} and \ref{theo 2} for some precise statements):
\begin{thm}[Rough version of Theorem \ref{theoreme principal}]\label{rough theo 1} Given admissible initial data for $(\varphi,\omega)$ large in $H^3$ and small enough in $W^{1,4}$ (the smallness threshold being independent of the potentially large $H^3$-norms), there exists a unique solution to \eqref{notre système} in the elliptic gauge on $[0,T]\times \mathbb{R}^2$ for some $T>0$ depending on the initial $H^3$-norms. \end{thm}
\begin{thm}[Rough version of Theorem \ref{theo 2}]\label{rough theo 2} If the time of existence $T$ of the solution obtained in Theorem \ref{rough theo 1} is finite, then the $H^2$ norm of $(\varphi,\omega)$ diverges at $T$ or the smallness in $W^{1,4}$ no longer holds. \end{thm}
\subsection{Strategy of proof and main challenges}
Let us briefly discuss the strategy employed to prove the two previous theorems and point out the main challenges we face. We adopt the same global strategy as in the work of Huneau and Luk \cite{hunluk18}, and we will discuss the differences and similarities with this article in Section \ref{section comparaison}.
\subsubsection{Theorem \ref{rough theo 1}}
In order to prove Theorem \ref{rough theo 1}, we need to solve the Einstein equations in the elliptic gauge. As the name of the gauge suggests, the system \eqref{notre système} then reads \begin{equation}\label{système avec jauge}
\left\{ \begin{array}{l} \Box_g U =(\partial U)^2 \\ \Delta g = (\partial U)^2 + (\partial g)^2 \end{array} \right. \end{equation} where $U$ denotes either $\varphi$ or $\omega$ and in the second equation $g$ denotes any metric coefficient. One of the main challenges of solving such a system is therefore the inversion of the Laplacian operator on an unbounded set, here $\mathbb{R}^2$. Indeed, this implies that some of the metric coefficients, namely the lapse $N$ and the conformal factor $\gamma$ (see Section \ref{section geometrie} for their definitions), present a logarithmic growth at spacelike infinity. To counteract this growth, we work throughout the paper with weighted Sobolev spaces (see Definition \ref{wss}).
One major aspect of Theorem \ref{rough theo 1} is that the smallness assumed for the initial data is only at the $W^{1,4}$ level, while their higher order norms can be arbitrarily large. It is quite unusual to require some smallness of the initial data to prove only \textit{local} existence; usually one would only ask for smallness of the time of existence. Here however, the smallness of the time of existence can only be of help when performing energy estimates for the hyperbolic part of \eqref{système avec jauge}. When dealing with the non-linearities in the elliptic part of \eqref{système avec jauge}, we rely on the smallness of the solution to close the hierarchy of estimates we introduce.
However, one of the strengths of our result is that the smallness of the initial data is only assumed for their first derivatives in the $L^4$ topology. The higher order norms, i.e. the $L^2$ norms of their second and third derivatives, can be large, and this largeness is \textit{not} compensated by the smallness of the initial data, which concretely means that the smallness threshold in Theorem \ref{rough theo 1} does not depend on the $H^3$ norm of the initial data. This initial data regime (largeness in $H^3$ not compensated by smallness in $W^{1,4}$) is motivated by the main application of this article, namely the construction of high-frequency spacetimes in the context of the Burnett conjecture in general relativity. See Huneau and Luk's article \cite{hunluk} for the application of the present article and \cite{bur89} for the original paper of Burnett.
\par\leavevmode\par
Despite the particularities of the elliptic gauge we just discussed, the global strategy to solve the Einstein vacuum equations is standard: \begin{itemize} \item we first solve the \textit{constraint equations} and by doing so construct initial data for the metric on the slice $\{ t=0\}$ which in particular satisfies the gauge conditions, \item then, we solve a \textit{reduced system}, which in our case is a coupled system of elliptic, wave and transport equations, \item finally, we prove using the Bianchi identity that solving the reduced system actually implies the full Einstein vacuum equations and the propagation of the gauge conditions. \end{itemize}
As a final comment, note that the wave map structure of the hyperbolic part of \eqref{système avec jauge} plays no role in the proof of Theorem \ref{rough theo 1}.
\subsubsection{Theorem \ref{rough theo 2}}
Conversely, the wave map structure of the coupling between the wave equations for $\varphi$ and $\omega$ is at the heart of the proof of Theorem \ref{rough theo 2}. This result basically means that the $H^2$ norm of the initial data controls the time of existence of the solution (as long as the smallness in $W^{1,4}$ holds), whereas we need $H^3$ regularity to prove local existence. This is not a consequence of the standard energy method for the wave equation, since in dimension 2 it only allows for $H^{2+\varepsilon}$ regularity. Reaching $H^2$ therefore requires the use of another structure, in our case the wave map structure of the hyperbolic part of \eqref{système avec jauge}. Since the work of Choquet-Bruhat in \cite{CBwavemaps} it is well known that we can associate to any wave map system a \textit{third order energy estimate}, which we crucially use to reach $H^2$.
As explained above, we rely on the smallness of the initial data to prove local existence. This requirement has the following consequence: we are unable to prove local well-posedness at the $H^2$ level. Indeed, in order to obtain such a result, we would need (in addition to the third order energy estimate) to propagate the smallness in $W^{1,4}$ through the wave map system using only $H^2$ norms. This is not possible in dimension 2 using only energy estimates. Therefore, we need to assume that the $W^{1,4}$ smallness is propagated, which explains why we ``only'' prove a blow-up criterion and not local well-posedness at the $H^2$ level.
\subsection{Relation to previous works}
In this section we discuss the links between our work and the literature. To put it briefly, the proof of Theorem \ref{rough theo 1} draws from \cite{hunluk18} and the proof of Theorem \ref{rough theo 2} uses techniques from Choquet-Bruhat.
\subsubsection{An improvement of \cite{hunluk18}}\label{section comparaison}
This work has a lot in common with the work of Huneau and Luk in \cite{hunluk18}, where they also study the system \eqref{notre système}. In this section, let us detail the similarities and differences between these two works.
\par\leavevmode\par
The system actually solved in \cite{hunluk18} is the Einstein \textit{null dust} system in \textit{polarized} $\mathbb{U}(1)$ symmetry. The polarized assumption implies $\omega=0$, and thus simplifies the hyperbolic part of the Einstein equations: a classical linear wave equation replaces the wave map system, with its non-linear coupling, that arises in the non-polarized case we study here.
The Einstein null dust system is a particular case of the Einstein-Vlasov system; concretely, the system studied in \cite{hunluk18} is coupled to transport equations for massless particles along null geodesics. This involves solving the eikonal equation and thus requires the use of the null structure in $2+1$ dimensions to avoid a loss of derivatives. Since we solve the Einstein \textit{vacuum} equations, this difficulty disappears in our work.
\par\leavevmode\par
As explained earlier in this introduction, the actual structure of the hyperbolic part of \eqref{notre système} doesn't influence the proof of the local existence of solutions. The proof given here nevertheless differs from the one of \cite{hunluk18} because of the differences in terms of regularity of the initial data. In \cite{hunluk18}, the initial data enjoy $H^4$ regularity and are small in $W^{1,\infty}$. This should be compared to our assumptions: $H^3$ regularity with smallness in $W^{1,4}$ only. Because of this fact, the hierarchy of estimates we introduce during the bootstrap argument differs from the one introduced in \cite{hunluk18}.
\subsubsection{Symmetry and wave maps}
As explained in the seventh chapter of the appendix of \cite{cho09}, the presence of a symmetry group acting on the spacetime generically implies the reduction of the Einstein vacuum equations to a coupled system between Einstein-type equations and a wave map system. This is in particular the case for the $\mathbb{U}(1)$ symmetry. In \cite{Malone}, Moncrief performed this reduction, and in \cite{CBM} Choquet-Bruhat and Moncrief proved local existence at the $H^2$ level for a manifold of the form $\mathbb{R}_t \times \Sigma \times \mathbb{U}(1)$ where $\Sigma$ is a compact two-dimensional manifold. The compactness of $\Sigma$ allows them to use the Schauder fixed point theorem, thus avoiding the need for some initial smallness. This has to be compared to the present work, where we need some initial smallness to solve the PDE system.
As explained earlier in this introduction, the wave map structure is particularly important for the proof of Theorem \ref{rough theo 2}. Indeed, as noted by Choquet-Bruhat in \cite{CBwavemaps} in the most general case, it is always possible to associate to any wave map system a third order energy estimate (see also \cite{MR2387237}).
\section{Geometrical setting}
In this section, we first introduce our notations, and then we present the $\mathbb{U}(1)$ symmetry and the elliptic gauge.
\subsection{Notations}
In this section we introduce the notations of this article. We will be working on $\mathcal{M}\vcentcolon=I\times\mathbb{R}^2$, where $I\subset\mathbb{R}$ is an interval. This space will be given a coordinate system $(t,x^1,x^2)$. We will use $x^i$ with lower case Latin index $i=1,2$ to denote the spatial coordinates.
\paragraph{Convention with indices :} \begin{itemize}
\item Lower case Latin indices run through the spatial indices 1, 2, while lower case Greek indices run through all the spacetime indices. Moreover, repeated indices are always summed over their natural range.
\item Lower case Latin indices are always raised and lowered with respect to the standard Euclidean metric $\delta_{ij}$, while lower case Greek indices are raised and lowered with respect to the spacetime metric $g$. \end{itemize}
\paragraph{Differential operators :} \begin{itemize}
\item For a function $f$ defined on $\mathbb{R}^{2+1}$, we set $\partial f=(\partial_t f,\nabla f)$, where $\nabla f$ is the usual spatial gradient on $\mathbb{R}^2$. Likewise, $\Delta$ denotes the standard Laplacian on $\mathbb{R}^2$. If $A=(a_1,a_2)$ and $B=(b_1,b_2)$ are two vectors of $\mathbb{R}^2$, we use the dot notation for their scalar product
\begin{equation*}
A\cdot B=a_1b_1+a_2b_2= \delta^{ij}a_ib_j.
\end{equation*}
The notation $|\cdot|$ is reserved for the norm associated to this scalar product, meaning $|A|^2=A\cdot A$.
\item $\mathcal{L}$ denotes the Lie derivative, $D$ denotes the Levi-Civita connection associated to the spacetime metric $g$, and $\Box_g$ denotes the d'Alembertian operator acting on functions :
\begin{equation*}
\Box_gf=\frac{1}{\sqrt{|\det(g)|}}\partial_{\mu}\left( \left(g^{-1}\right)^{\mu\nu}\sqrt{|\det(g)|}\partial_{\nu}f\right).
\end{equation*}
\item $L$ denotes the Euclidean conformal Killing operator, acting on vectors on $\mathbb{R}^2$ to give a symmetric traceless (with respect to $\delta$) covariant 2-tensor (a short verification of the tracelessness is given right after this list) :
\begin{equation*}
(L\xi)_{ij}\vcentcolon=\delta_{j\ell}\partial_i\xi^{\ell}+\delta_{i\ell}\partial_j\xi^{\ell}-\delta_{ij}\partial_k\xi^k.
\end{equation*} \end{itemize}
\paragraph{Function spaces :} We will work with the standard function spaces $L^p$, $H^k$, $C^m$, $C^{\infty}_c$, etc., and assume their standard definitions. We use the following conventions : \begin{itemize}
\item All function spaces will be taken on $\mathbb{R}^2$ and the measures will be the 2D-Lebesgue measure.
\item When applied to quantities defined on a spacetime $I\times\mathbb{R}^2$, the norms $L^p$, $H^k$, $C^m$ denote fixed-time norms. In particular, if in an estimate the time $t\in I$ in question is not explicitly stated, then it means that the estimate holds for all $t\in I$ for the time interval $I$ that is appropriate for the context. \end{itemize}
We will also work in weighted Sobolev spaces, which are well-suited to elliptic equations. We recall here their definition, together with the definition of weighted Hölder space. The properties of these spaces that we need are listed in Appendix \ref{appendix B}. We use the standard notation $\langle x \rangle=\left( 1+|x|^2\right)^{\frac{1}{2}}$ for $x\in\mathbb{R}^2$.
\begin{mydef}\label{wss} Let $m\in\mathbb{N}$, $1<p<\infty$, $\delta\in\mathbb{R}$. The weighted Sobolev space $W^{m,p}_{\delta}$ is the completion of $C^{\infty}_0$ under the norm \begin{equation*}
\| u\|_{W^{m,p}_{\delta}}=\sum_{|\beta|\leq m} \left\| \langle x \rangle^{\delta+|\beta|}\nabla^{\beta}u \right\|_{L^p}. \end{equation*} We will use the notation $H^m_{\delta}=W^{m,2}_{\delta}$, $L^p_{\delta}=W^{0,p}_{\delta}$ and $W^{m,p}$ denotes the standard Sobolev spaces on $\mathbb{R}^2$.
The weighted Hölder space $C^m_{\delta}$ is the completion of $C^m_c$ under the norm \begin{equation*}
\| u\|_{C^m_{\delta}}=\sum_{|\beta|\leq m} \left\| \langle x \rangle^{\delta+|\beta|}\nabla^{\beta}u \right\|_{L^{\infty}}. \end{equation*} For a covariant 2-tensor $A_{ij}$ tangential to $\mathbb{R}^2$, we use the convention : \begin{equation*}
\| A\|_{X}=\sum_{i,j=1,2} \| A_{ij}\|_{X}, \end{equation*} where $X$ stands for any function spaces defined above. \end{mydef} We denote by $B_r$ the ball in $\mathbb{R}^2$ of radius $r$ centered at 0.
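As a simple illustration of the role of the weight $\delta$ (this example is only meant as orientation and is not used in the sequel), consider $u(x)=\langle x \rangle^{-s}$ for some $s\in\mathbb{R}$. Then
\begin{equation*}
\| u\|_{L^2_{\delta}}=\left\| \langle x \rangle^{\delta-s}\right\|_{L^2(\mathbb{R}^2)}<\infty \quad\Longleftrightarrow\quad s>\delta+1,
\end{equation*}
so that membership in $L^2_{\delta}$ encodes decay faster than $\langle x \rangle^{-(\delta+1)}$ in an $L^2$-averaged sense; each additional derivative in $W^{m,p}_{\delta}$ is measured with one additional power of $\langle x \rangle$.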
\subsection{Einstein vacuum equations with a translation Killing field}\label{section U(1) symmetry}
In this section, we present the $\mathbb{U}(1)$ symmetry. From now on, we consider a Lorentzian manifold $(I\times\mathbb{R}^3,^{(4)}g)$, where $I\subset\mathbb{R}$ is an interval, and $^{(4)}g$ is a Lorentzian metric, for which $\partial_3$ is a Killing field. Following Appendix VII of \cite{cho09}, this is equivalent to saying that $^{(4)}g$ has the following form : \begin{equation}
^{(4)}g=e^{-2\varphi}g+e^{2\varphi}(\mathrm{d} x^3+A_\alpha\mathrm{d} x^\alpha)^2, \end{equation} where $\varphi:I\times\mathbb{R}^2\longrightarrow\mathbb{R}$ is a scalar function, $g$ is a Lorentzian metric on $I\times\mathbb{R}^2$ and $A$ is a 1-form on $I\times\mathbb{R}^2$. The \textit{polarized} $\mathbb{U}(1)$ symmetry is the case where $A=0$. We extend $\varphi$ to a function on $I\times\mathbb{R}^3$ in such a way that $\varphi$ does not depend on $x^3$. Given this ansatz of the metric, the vector field $\partial_3$ is Killing and hypersurface orthogonal. Assuming that the metric $^{(4)}g$ satisfies the Einstein vacuum equations, i.e $R_{\mu\nu}(^{(4)}g)=0$, one can prove that there exists a function $\omega$ such that \begin{equation*} F=-e^{-3\varphi}*\mathrm{d}\omega \end{equation*} where $F_{\alpha\beta}=\partial_\alpha A_\beta-\partial_\beta A_\alpha$. \par\leavevmode\par The Einstein vacuum equations for $(I\times\mathbb{R}^3,^{(4)}g)$ are thus equivalent to the following system of equations : \begin{equation}\label{EVE}
\left\{ \begin{array}{l} \Box_g \varphi =-\frac{1}{2}e^{-4\varphi}\partial^\rho\omega\partial_\rho\omega\\ \Box_g \omega =4\partial^\rho\omega\partial_\rho\varphi\\ R_{\mu\nu}(g) =2\partial_{\mu}\varphi\partial_{\nu}\varphi+\frac{1}{2}e^{-4\varphi}\partial_\mu\omega\partial_\nu\omega \end{array}. \right. \end{equation} Solving the system \eqref{EVE} is the goal of this article. Note that the last equation of \eqref{EVE} is actually the Einstein equation $G_{\mu\nu}(g)=T_{\mu\nu}$ with the following stress-energy-momentum tensor : \begin{equation}
T_{\mu\nu}=2\partial_{\mu}\varphi\partial_{\nu}\varphi-g_{\mu\nu}g^{\alpha\beta}\partial_{\alpha}\varphi\partial_{\beta}\varphi+\frac{1}{2}e^{-4\varphi}\left( 2\partial_{\mu}\omega\partial_{\nu}\omega-g_{\mu\nu}g^{\alpha\beta}\partial_{\alpha}\omega\partial_{\beta}\omega\right).\label{tenseur energie impulsion } \end{equation}
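Let us also record, for later use, the trace-reversed form of this Einstein equation : since the spacetime dimension is $2+1$, taking the trace of $G_{\mu\nu}(g)=T_{\mu\nu}$ gives $-\frac{1}{2}R=\mathrm{tr}_gT$, hence
\begin{equation*}
R_{\mu\nu}(g)=T_{\mu\nu}+\frac{1}{2}R\,g_{\mu\nu}=T_{\mu\nu}-g_{\mu\nu}\,\mathrm{tr}_gT,
\end{equation*}
which is the form used when deriving the reduced system in Section \ref{section reduced system} (for instance in the equation $R_{00}=T_{00}-g_{00}\mathrm{tr}_gT$).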
\subsection{The elliptic gauge}\label{section geometrie}
In this section, we present the elliptic gauge. We first write the $(2+1)$-dimensional metric $g$ on $\mathcal{M}\vcentcolon=I\times\mathbb{R}^2$ in the usual form : \begin{equation}
g=-N^2\mathrm{d} t^2+\Bar{g}_{ij}\left(\mathrm{d} x^i+\beta^i\mathrm{d} t \right)\left(\mathrm{d} x^j+\beta^j\mathrm{d} t \right). \end{equation} Let $\Sigma_t\vcentcolon=\enstq{(s,x)\in\mathcal{M}}{s=t}$ and $e_0\vcentcolon=\partial_t-\beta^i\partial_i$, which is a future directed normal to $\Sigma_t$. The function $N$ is called the lapse and the vector field $\beta$ is the shift. We introduce $\mathbf{T}\vcentcolon=\frac{e_0}{N}$, the unit future directed normal to $\Sigma_t$. We introduce the second fundamental form of the embedding $\Sigma_t\xhookrightarrow{}\mathcal{M}$ \begin{equation}
K_{ij}\vcentcolon=-\frac{1}{2N}\mathcal{L}_{e_0}\Bar{g}_{ij}.\label{Kij} \end{equation} We decompose $K$ into its trace and traceless part : \begin{equation}
K_{ij}=H_{ij}+\frac{1}{2}\Bar{g}_{ij}\tau,\label{def H} \end{equation} where $\tau=\mathrm{tr}_{\Bar{g}}K$. We introduce the following gauge conditions, which define the elliptic gauge : \begin{itemize}
\item $\Bar{g}$ is conformally flat, i.e there exists a function $\gamma$ such that
\begin{equation}\label{g bar}
\Bar{g}_{ij}=e^{2\gamma}\delta_{ij}.
\end{equation}
\item the hypersurfaces $\Sigma_t$ are maximal, which means that $K$ is traceless, i.e
\begin{equation}
\tau=0.
\end{equation} \end{itemize} Thus, the metric takes the following form : \begin{equation}
g=-N^2\mathrm{d} t^2+e^{2\gamma}\delta_{ij}\left(\mathrm{d} x^i+\beta^i\mathrm{d} t \right)\left(\mathrm{d} x^j+\beta^j\mathrm{d} t \right).\label{metrique elliptique} \end{equation} The main computations in the elliptic gauge are performed in Appendix \ref{appendix A}. They show that \eqref{EVE} is schematically of the form \begin{equation}\label{EVE2}
\left\{ \begin{array}{l} \Box_g U =(\partial U)^2 \\ \Delta g = (\partial U)^2+ (\partial g)^2 \end{array} \right. \end{equation} where $U$ denotes either $\varphi$ or $\omega$ and in the second equation $g$ denotes any metric coefficient.
\section{Main results}
\subsection{Initial data}\label{subsection initial data}
We now describe our choice of initial data for the system \eqref{EVE}. We distinguish between \textit{admissible} initial data and \textit{admissible free} initial data.
For the rest of this paper, we choose a fixed smooth cutoff function $\chi:\mathbb{R}\to\mathbb{R}$ such that $\chi_{|[-1,1]}=0$ and $\chi_{|\mathbb{R}\setminus[-2,2]}=1$. The notation $\chi\ln$ stands for the function $x\in\mathbb{R}^2\longmapsto \chi(|x|)\ln(|x|)$.
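Let us record, for later use, the elementary bounds on $\chi\ln$ that follow from this definition : since $\chi$ is constant outside $[-2,2]$, we have $(\chi\ln)(x)=\ln|x|$ for $|x|\geq 2$, and therefore
\begin{equation*}
\left|\nabla^{a}(\chi\ln)(x)\right|\lesssim \langle x\rangle^{-|a|}\quad\text{for every multi-index }a\neq 0,\qquad \left|(\chi\ln)(x)\right|\lesssim_{\sigma} \langle x\rangle^{\sigma}\quad\text{for every }\sigma>0,
\end{equation*}
the region $|x|\leq 2$ only contributing bounded smooth terms. These bounds will be used repeatedly when estimating the $\chi\ln$ parts of $\gamma$ and $N$.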
\begin{mydef}[Admissible initial data] For $-1<\delta<0$ and $R>0$, an admissible initial data set with respect to the elliptic gauge for \eqref{EVE} consists of \begin{enumerate}
\item A conformally flat intrinsic metric $(e^{2\gamma}\delta_{ij})_{|\Sigma_0}$ which admits a decomposition
\begin{equation*}
\gamma=-\alpha\chi\ln+\Tilde{\gamma},
\end{equation*}
where $\alpha\geq 0$ is a constant and $\Tilde{\gamma}\in H^{4}_{\delta}$.
\item A second fundamental form $(H_{ij})_{|\Sigma_0}\in H^{3}_{\delta+1}$ which is traceless.
\item $\left( \mathbf{T}\varphi,\nabla\varphi \right)_{|\Sigma_0}\in H^2$, compactly supported in $B_R$.
\item $\left( \mathbf{T}\omega,\nabla\omega \right)_{|\Sigma_0}\in H^2$, compactly supported in $B_R$.
\item $\gamma$ and $H$ are required to satisfy the following constraint equations :
\begin{align*}
\partial^iH_{ij}&=-2e^{2\gamma}\mathbf{T}\varphi\partial_j\varphi-\frac{1}{2}e^{-4\varphi+2\gamma}\mathbf{T}\omega\partial_j\omega, \\
\Delta\gamma&=-\frac{e^{-2\gamma}}{2}\vert H\vert^2-e^{2\gamma}\left(\mathbf{T}\varphi\right)^2-\vert\nabla\varphi\vert^2-\frac{e^{-4\varphi}}{4}\left( e^{2\gamma}\left(\mathbf{T}\omega\right)^2+|\nabla\omega|^2 \right). \end{align*} \end{enumerate} \end{mydef}
We recall that the constraint equations for the Einstein vacuum equations are $G_{00}=T_{00}$ and $G_{0i}=T_{0i}$, where $G_{\mu\nu}=R_{\mu\nu}-\frac{1}{2}Rg_{\mu\nu}$ is the Einstein tensor, and $T_{\mu\nu}$ is the stress-energy-momentum tensor associated to the matter fields $\varphi$ and $\omega$, according to the RHS of system \eqref{EVE}. The fact that these equations reduce in the elliptic gauge to the previous equations on $H$ and $\gamma$ can be proved using the computations done in Appendix \ref{appendix A}. \par\leavevmode\par We define the notion of admissible free initial data as follows :
\begin{mydef}[Admissible free initial data]
We set $\mathring{\varphi}=e^{2\gamma}\mathbf{T}\varphi$ and $\mathring{\omega}=e^{2\gamma}\mathbf{T}\omega$, where $\gamma$ is as in \eqref{metrique elliptique}. For $-1<\delta<0$ and $R>0$, an admissible free initial data set with respect to the elliptic gauge for \eqref{EVE} is given by $(\mathring{\varphi},\nabla\varphi)_{|\Sigma_0}\in H^2$ and $(\mathring{\omega},\nabla\omega)_{|\Sigma_0}\in H^2$, all compactly supported in $B_R$, satisfying \begin{equation}\label{orthogonality condition}
\int_{\mathbb{R}^2}\left(2\mathring{\varphi}\partial_j\varphi+\frac{1}{2}e^{-4\varphi}\mathring{\omega}\partial_j\omega\right)\mathrm{d} x=0, \quad j=1,2. \end{equation} \end{mydef}
The interest of the admissible free initial data is that from them we can construct a set of admissible initial data, which in particular satisfies the constraint equations. Note that instead of prescribing $\mathbf{T} \varphi$ and $\mathbf{T}\omega$, we prescribe suitably rescaled versions of them, which allows the decoupling of the two constraint equations : we will first solve for $H$ and then for $\gamma$, as the following computation shows.
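Indeed, since $\mathbf{T}\varphi=e^{-2\gamma}\mathring{\varphi}$ and $\mathbf{T}\omega=e^{-2\gamma}\mathring{\omega}$, we have for instance
\begin{equation*}
e^{2\gamma}\mathbf{T}\varphi\,\partial_j\varphi=\mathring{\varphi}\,\partial_j\varphi,\qquad e^{2\gamma}\left(\mathbf{T}\varphi\right)^2=e^{-2\gamma}\mathring{\varphi}^2,
\end{equation*}
and similarly for $\omega$. Hence the momentum constraint no longer involves $\gamma$, while the Hamiltonian constraint becomes an equation for $\gamma$ alone once $H$ is known; the resulting system is precisely \eqref{C1}-\eqref{C2} in Section \ref{section constraints equations}.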
\subsection{Statement of the theorems}
The following is our main result on local well-posedness for \eqref{EVE}.
\begin{thm}\label{theoreme principal} Let $-1<\delta<0$ and $R>0$. Given an admissible free initial data set such that \begin{equation*}
\left\| \mathring{\varphi} \right\|_{L^4}+\left\| \nabla\varphi \right\|_{L^4}+\left\| \mathring{\omega} \right\|_{L^4}+\left\| \nabla\omega \right\|_{L^4}\leq \varepsilon, \end{equation*} and \begin{equation*}
C_{high}\vcentcolon=\left\| \mathring{\varphi} \right\|_{H^2}+\left\| \nabla\varphi \right\|_{H^2}+\left\| \mathring{\omega} \right\|_{H^2}+\left\| \nabla\omega \right\|_{H^2}<\infty, \end{equation*} where $C_{high}$ may be arbitrarily large, there exists a constant $\varepsilon_{0}=\varepsilon_{0}(\delta,R)>0$ independent of $C_{high}$ and a time $T=T(C_{high},\delta,R)>0$ such that, if $0<\varepsilon\leq\varepsilon_{0}$, there exists a unique solution to \eqref{EVE} in the elliptic gauge on $[0,T]\times\mathbb{R}^2$.
Moreover, defining $\delta'=\delta-\varepsilon$, there exists a constant $C_h=C_h(C_{high},\delta,R)>0$ such that \begin{itemize}
\item The fields $\varphi$ and $\omega$ satisfy for all $t\in [0,T]$
\begin{align*}
\left\| \partial_t^2\varphi \right\|_{H^1}+\left\| \partial_t\varphi \right\|_{H^2}+\left\| \nabla\varphi \right\|_{H^2}&\leq C_h,\\
\left\| \partial_t^2\omega \right\|_{H^1}+\left\| \partial_t\omega \right\|_{H^2}+\left\| \nabla\omega \right\|_{H^2}&\leq C_h,
\end{align*}
and their supports are both included in $ J^+\left(\Sigma_0\cap B_R\right)$, where $J^+$ denotes the causal future.
\item The metric components $\gamma$ and $N$ can be decomposed as
\begin{equation*}
\gamma=-\alpha\chi\ln+\Tilde{\gamma},\quad N=1+N_a\chi\ln+\Tilde{N},
\end{equation*}
with $\alpha\geq 0$ and $N_a(t)\geq 0$ a function of $t$ alone.
\item $\gamma$, $N$ and $\beta$ satisfy the following estimates for all $t\in[0,T]$ :
\begin{align*}
|\alpha|+\left\|\Tilde{\gamma}\right\|_{H^4_{\delta}}+\left\|\partial_t\Tilde{\gamma}\right\|_{H^3_{\delta}}+\left\|\partial_t^2\Tilde{\gamma}\right\|_{H^2_{\delta}} & \leq C_h,\\
\left|N_a\right|+\left|\partial_tN_a\right|+\left|\partial_t^2N_a\right| & \leq C_h,\\
\left\|\Tilde{N}\right\|_{H^4_{\delta}}+\left\|\partial_t\Tilde{N}\right\|_{H^3_{\delta}}+\left\|\partial_t^2\Tilde{N}\right\|_{H^2_{\delta}} & \leq C_h,\\
\left\| \beta \right\|_{H^4_{\delta'}} +\left\| \partial_t\beta \right\|_{H^3_{\delta'}}+\left\| \partial_t^2\beta \right\|_{H^2_{\delta'}}&\leq C_h.
\end{align*}
\item The following conservation laws hold :
\begin{align}
\int_{\mathbb{R}^2}\left(4e^{2\gamma}\mathbf{T}\varphi\partial_j\varphi+e^{-4\varphi+2\gamma}\mathbf{T}\omega\partial_j\omega\right)\mathrm{d} x&=0,\label{CL1}\\
\int_{\mathbb{R}^2}\left(2e^{-2\gamma}|H|^2+4e^{2\gamma}(\mathbf{T}\varphi)^2+e^{-4\varphi+2\gamma}(\mathbf{T}\omega)^2+4|\nabla\varphi|^2+e^{-4\varphi}|\nabla\omega|^2 \right)\mathrm{d} x &=4\alpha.\label{CL2}
\end{align} \end{itemize} \end{thm}
This theorem has the following corollary, which basically states that if we want to have $T=1$, it suffices to take $C_{high}$ small enough. We omit the details of its proof because it is actually simpler than the proof of Theorem \ref{theoreme principal}.
\begin{coro}\label{CORO} Suppose the assumptions of Theorem \ref{theoreme principal} hold. There exists $\varepsilon_{small}=\varepsilon_{small}(\delta,R)>0$ such that if $C_{high}$ and $\varepsilon$ in Theorem \ref{theoreme principal} satisfy \begin{equation*}
C_{high},\varepsilon\leq\varepsilon_{small}, \end{equation*} then the unique solution exists in $[0,1]\times\mathbb{R}^2$. Moreover, there exists $C_0=C_0(\delta,R)$ such that all the estimates in Theorem \ref{theoreme principal} hold with $C_h$ replaced by $C_0\varepsilon$. \end{coro}
\par\leavevmode\par We also prove the following theorem, which is a blow-up criterion (see the introduction of Section \ref{section theo 2} for a discussion of this theorem) :
\begin{thm}\label{theo 2} Let $T>0$ be the maximal time of existence of the solution of \eqref{EVE} obtained in Theorem \ref{theoreme principal}. If $T<+\infty$ and $\varepsilon_0$ is small enough (still independent of $C_{high}$), then one of the following holds : \begin{enumerate}[label=\roman*)] \item $\sup_{[0,T)} \left( \left\| \partial \varphi \right\|_{H^1} + \left\| \partial\omega \right\|_{H^1} \right)=+\infty$, \item $\sup_{[0,T)} \left( \left\| \partial \varphi \right\|_{L^4} + \left\| \partial\omega \right\|_{L^4} \right)>\varepsilon_0$. \end{enumerate} \end{thm}
As said in the introduction, one major feature of these two theorems is that the smallness constant $\varepsilon_0$ does not depend on $C_{high}$. Note in contrast that the time of existence $T$ does depend on $C_{high}$.
\subsection{The reduced system}\label{section reduced system}
In order to solve the Einstein equations, we will first solve the following system, which we call the reduced system; it is analogous to the one introduced in \cite{hunluk18}. \begin{align}
\Delta N&=e^{-2\gamma}N\vert H\vert^2+\frac{\tau^2}{2}e^{2\gamma}N+\frac{2e^{2\gamma}}{N}(e_0\varphi)^2+\frac{e^{2\gamma-4\varphi}}{2N}(e_0\omega)^2,\label{EQ N} \\
L\beta&=2e^{-2\gamma}NH,\label{EQ beta} \\
N\tau&=-2e_0\gamma+\mathrm{div}\beta,\label{EQ tau} \\
e_0H_{ij}&=-2e^{-2\gamma}NH_i^{\;\,\ell}H_{j\ell}+\partial_{(j}\beta^kH_{i)k}-\frac{1}{2}(\partial_i\Bar{\otimes}\partial_j)N+(\delta_i^k\Bar{\otimes}\partial_j\gamma)\partial_kN\nonumber\\&\qquad\qquad\qquad\qquad\qquad-(\partial_i\varphi\Bar{\otimes}\partial_j\varphi)N-\frac{1}{4}e^{-4\varphi}(\partial_i\omega\Bar{\otimes}\partial_j\omega)N,\label{EQ H} \\
\mathbf{T}^2\gamma-e^{-2\gamma}\Delta\gamma&=-\frac{\tau^2}{2}+\frac{1}{2}\mathbf{T}\left(\frac{\mathrm{div}(\beta)}{N}\right)+e^{-2\gamma}\left(\frac{\Delta N}{2N}+\vert\nabla\varphi\vert^2+\frac{1}{4}e^{-4\varphi}\left|\nabla\omega \right|^2\right),\label{EQ gamma} \\
\mathbf{T}^2\varphi-e^{-2\gamma}\Delta \varphi&=\frac{e^{-2\gamma}}{N}\nabla \varphi\cdot\nabla N+\tau\mathbf{T}\varphi+\frac{1}{2}e^{-4\varphi}\left( (e_0\omega)^2+|\nabla\omega|^2 \right),\label{EQ ffi}\\
\mathbf{T}^2\omega-e^{-2\gamma}\Delta \omega&=\frac{e^{-2\gamma}}{N}\nabla \omega\cdot\nabla N+\tau\mathbf{T}\omega-4e_0\omega e_0\varphi-4\nabla\omega\cdot\nabla\varphi,\label{EQ omega} \end{align} where we use the notation $u_i\Bar{\otimes} v_j=u_iv_j +u_jv_i-\delta_{ij}u^kv_k$.
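Note that $\Bar{\otimes}$ subtracts the $\delta$-trace : for any vectors $u$, $v$ of $\mathbb{R}^2$ we have
\begin{equation*}
\delta^{ij}\left(u_i\Bar{\otimes}v_j\right)=\delta^{ij}\left(u_iv_j+u_jv_i-\delta_{ij}u^kv_k\right)=u^jv_j+u^iv_i-2u^kv_k=0,
\end{equation*}
so that the corresponding terms on the right-hand side of \eqref{EQ H} are traceless with respect to $\delta$.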
Let us explain where equations \eqref{EQ N}-\eqref{EQ omega} come from : \begin{itemize}
\item Considering \eqref{appendix R00} and \eqref{JSP}, the equation $R_{00}=T_{00}-g_{00}\mathrm{tr}_gT$ without the term in $e_0\tau$ gives \eqref{EQ N}.
\item For $\beta$ and $\tau$, the equations \eqref{EQ beta} and \eqref{EQ tau} simply come from \eqref{appendix beta} and \eqref{appendix tau}.
\item To obtain the equation for $H$, we basically take the traceless part of $R_{ij}$. More precisely, using \eqref{appendix Rij}, \eqref{appendix trace ricci}, \eqref{JSP} and \eqref{JSP 3} the equation
\begin{equation*}
R_{ij}-\frac{1}{2}\delta_{ij}\delta^{k\ell}R_{k\ell}=T_{ij}-g_{ij}\mathrm{tr}_gT-\frac{1}{2}\delta_{ij}\delta^{k\ell}\left(T_{k\ell}-g_{k\ell}\mathrm{tr}_gT \right)
\end{equation*}
gives \eqref{EQ H}.
\item Considering \eqref{appendix trace ricci} and \eqref{JSP 2}, the equation $\delta^{ij}R_{ij}=\delta^{ij}(T_{ij}-g_{ij}\mathrm{tr}_gT)$ reads :
\begin{equation*}
\Delta\gamma=\frac{\tau^2}{2}e^{2\gamma}-\frac{e^{2\gamma}}{2}\mathbf{T}\tau-\frac{\Delta N}{2N}-\left|\nabla\varphi\right|^2-\frac{1}{4}e^{-4\varphi}\left|\nabla\omega \right|^2.
\end{equation*}
Using \eqref{appendix tau}, we can compute $\mathbf{T}\tau$ and inject it in the previous equation to obtain \eqref{EQ gamma} (this substitution is spelled out just below).
\item For the equations on the matter fields $\varphi$ and $\omega$, we simply use Proposition \ref{appendix box} to rewrite $\Box_g\varphi$ and $\Box_g\omega$. \end{itemize} After obtaining a solution to the reduced system, our next task will be to prove that this solution is in fact a solution of \eqref{EVE}.
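Before moving on, let us spell out the substitution leading to \eqref{EQ gamma} mentioned above. By \eqref{EQ tau} we have $\tau=-2\mathbf{T}\gamma+\frac{\mathrm{div}\beta}{N}$, hence $\mathbf{T}\tau=-2\mathbf{T}^2\gamma+\mathbf{T}\left(\frac{\mathrm{div}\beta}{N}\right)$. Injecting this into the displayed equation for $\Delta\gamma$ gives
\begin{equation*}
\Delta\gamma=\frac{\tau^2}{2}e^{2\gamma}+e^{2\gamma}\mathbf{T}^2\gamma-\frac{e^{2\gamma}}{2}\mathbf{T}\left(\frac{\mathrm{div}\beta}{N}\right)-\frac{\Delta N}{2N}-\left|\nabla\varphi\right|^2-\frac{1}{4}e^{-4\varphi}\left|\nabla\omega \right|^2,
\end{equation*}
and multiplying by $e^{-2\gamma}$ and rearranging yields exactly \eqref{EQ gamma}.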
Note that $H$ and $\gamma$ no longer satisfy elliptic equations, whereas in the ``full'' Einstein equations in the elliptic gauge they do. We follow this strategy to avoid having to propagate the two conservation laws \eqref{CL1} and \eqref{CL2}, which would have been essential for solving the elliptic equations and obtaining a suitable behavior at spacelike infinity for $H$ and $\gamma$. Since we assume these conservation laws to hold initially, we do obtain this behavior while solving the constraint equations.
Therefore, only $N$ and $\beta$ satisfy elliptic equations, and the reduced system is a coupled hyperbolic-elliptic-transport system.
\subsection{Outline of the proof}
We briefly discuss the structure of this article, which aims at proving Theorems \ref{theoreme principal} and \ref{theo 2}. \par\leavevmode\par First of all, in Section \ref{initial data}, we solve the constraint equations. More precisely, we prove that an admissible free initial data set gives rise to an actual admissible initial data set, thus satisfying the constraint equations. Then, we split the proof of Theorem \ref{theoreme principal} into two parts : \begin{itemize} \item in Section \ref{section solving the reduced system}, we solve the reduced system \eqref{EQ N}-\eqref{EQ omega} using an iteration scheme, with initial data given by Section \ref{initial data}. During this iteration scheme, we first prove that our sequence of approximate solutions is uniformly bounded (see Section \ref{uniforme boundedness}) and then that it is a Cauchy sequence (see Section \ref{Cauchy}). \item in Section \ref{section end of proof}, we prove that the solution to the reduced system is indeed a solution to $\eqref{EVE}$ and that it satisfies all the estimates stated in Theorem \ref{theoreme principal}. \end{itemize}
We prove Theorem \ref{theo 2} in Section \ref{section theo 2}, using a continuity argument based on a special energy estimate which suits the wave map structure of the coupled system satisfied by $\varphi$ and $\omega$.
\par\leavevmode\par Finally, this article contains two appendices : \begin{itemize} \item Appendix \ref{appendix A} presents the computations of the connection coefficients and of the Ricci tensor in the elliptic gauge, as well as some formulae related to the stress-energy-momentum tensor. \item Appendix \ref{appendix B} presents the main tools regarding the spaces $W^{m,p}_{\delta}$ : embedding results, product laws, and a theorem due to McOwen which allows us to solve elliptic equations on those spaces. It ends with some standard inequalities used in the proof. \end{itemize}
\section{Initial data and the constraint equations}\label{initial data}
In this section, we follow \cite{hun16} and discuss the initial data for the reduced system; in particular, we solve the constraint equations. More precisely, we will show that an admissible free initial data set gives rise to a unique admissible initial data set satisfying the constraint equations.
We will then derive the initial data for $N$ and $\beta$ and prove their regularity properties. Note that, since $\mathring{\varphi}$ and $\mathring{\omega}$ are prescribed, once we have the initial data for $N$, $\beta$ and $\gamma$, we obtain the initial data for $\partial_t\varphi$ and $\partial_t\omega$.
In the following estimates we will only keep track of the dependence on $\varepsilon$ and $C_{high}$, and we will use the notation $\lesssim$ when the implicit constant only depends on $\delta$, $R$ or on constants coming from embedding results.
\par\leavevmode\par Before solving the constraint equations, let us prove a simple lemma which will allow us to deal with the $e^{-4\varphi}$ and $e^{\pm2\gamma}$ factors, which occur many times in the equations.
\begin{lem}\label{useful lem}
Let $\gamma=-\alpha\chi\ln+\Tilde{\gamma}$ be a function on $\mathbb{R}^2$ such that $0\leq\alpha\leq 1$, $\|\Tilde{\gamma}\|_{H^2_{\delta}}\leq 1$, and let $\varphi\in H^3$ be a compactly supported function on $\mathbb{R}^2$ such that $\left\|\varphi\right\|_{W^{1,4}}\leq\varepsilon$. Then, for all functions $f$ on $\mathbb{R}^2$ and $\nu\in\mathbb{R}$, the following estimates hold for $k=0,1,2$ : \begin{align}
\left\|e^{-2\gamma}f \right\|_{H^k_{\nu}}&\lesssim \|f\|_{H^k_{\nu+2\alpha}}\label{useful gamma 1},\\
\left\|e^{2\gamma}f \right\|_{H^k_{\nu}}&\lesssim \|f\|_{H^k_{\nu}}\label{useful gamma 1a},\\
\left\| e^{-4\varphi}f\right\|_{H^{k}_\nu}&\lesssim \left\| f\right\|_{H^{k}_\nu}+k\left\|\nabla\varphi\right\|_{H^2}\left\| f\right\|_{H^{k-1}}+k(k-1)\left\|\nabla^2\varphi f\right\|_{L^2}\label{useful ffi 1}. \end{align}
Moreover, if in addition $\|\nabla\Tilde{\gamma}\|_{H^2_{\delta'+1}}<\infty$, the following estimate holds : \begin{align}
\left\|e^{-2\gamma}f \right\|_{H^3_{\nu}}&\lesssim \|f\|_{H^3_{\nu+2\alpha}}+\|\nabla\Tilde{\gamma}\|_{H^2_{\delta'+1}}\|f\|_{H^2_{\nu+2\alpha}},\label{useful gamma 2}\\
\left\|e^{2\gamma}f \right\|_{H^3_{\nu}}&\lesssim \|f\|_{H^3_{\nu}}+\|\nabla\Tilde{\gamma}\|_{H^2_{\delta'+1}}\|f\|_{H^2_{\nu}}.\label{useful gamma 2a} \end{align} \end{lem} \begin{proof}
We recall the embedding $H^2_{\delta}\xhookrightarrow{}L^{\infty}$, which implies that $\left| e^{-2\Tilde{\gamma}}\right|\lesssim 1$ and allows us to forget about these factors in the following computations. Similarly, we have $\left|e^{2\alpha\chi\ln} \right|\lesssim \langle x\rangle^{2\alpha}$, which is responsible for the change in the decay order (this remark also implies that proving \eqref{useful gamma 1} and \eqref{useful gamma 2} is enough to get \eqref{useful gamma 1a} and \eqref{useful gamma 2a}).
Moreover, we only prove \eqref{useful gamma 2}, since its proof clearly contains that of \eqref{useful gamma 1}. With these remarks in mind, we compute directly : \begin{align*}
\left\| e^{-2\gamma}f \right\|_{H^3_{\nu}} & \lesssim \left\|f \right\|_{H^3_{\nu+2\alpha}}+ \left\|\nabla\gamma f \right\|_{L^2_{\nu+2\alpha+1}}
+ \left\|\nabla^2\gamma f \right\|_{L^2_{\nu+2\alpha+2}}\\&\qquad+ \left\|\left( \nabla\gamma\right)^2f \right\|_{L^2_{\nu+2\alpha+2}} +\left\| \nabla\gamma\nabla f\right\|_{L^2_{\nu+2\alpha+2}} +\left\|\nabla^2\gamma\nabla f \right\|_{L^2_{\nu+2\alpha+3}}
\\&\qquad+\left\|\left(\nabla\gamma \right)^2\nabla f \right\|_{L^2_{\nu+2\alpha+3}}+\left\|\nabla\gamma\nabla^2 f \right\|_{L^2_{\nu+2\alpha+3}}
\\&\qquad+\left\| \nabla^3\gamma f\right\|_{L^2_{\nu+2\alpha+3}}+\left\|\nabla\gamma\nabla^2\gamma f \right\|_{L^2_{\nu+2\alpha+3}}+\left\|\left(\nabla\gamma \right)^3f \right\|_{L^2_{\nu+2\alpha+3}} \end{align*}
Because of $\left|\nabla^{a}(\chi\ln) \right|\lesssim \langle x\rangle^{-|a|}$ (which is valid for every multi-index $ a\neq 0$), we can forget about the $\chi\ln$ part in $\gamma$ and pretend that $\gamma$ is replaced by $\Tilde{\gamma}$. Using the product estimate (see Proposition \ref{prop prod}), we can deal with all these terms :
\begin{center} \begin{tabular}{ c c }
$\left\|\nabla\Tilde{\gamma} f \right\|_{L^2_{\nu+2\alpha+1}}\lesssim\|\nabla\Tilde{\gamma}\|_{H^1_{\delta+1}}\|f\|_{H^1_{\nu+2\alpha+1}},\quad$ & $\left\|\nabla^2\Tilde{\gamma} f \right\|_{L^2_{\nu+2\alpha+2}}\lesssim \left\| \nabla^2\Tilde{\gamma}\right\|_{L^2_{\delta+2}}\| f\|_{H^2_{\nu+2\alpha}}$, \\
$\left\|\left(\nabla\Tilde{\gamma}\right)^2 f \right\|_{L^2_{\nu+2\alpha+2}}\lesssim \|\nabla\Tilde{\gamma}\|_{H^1_{\delta+1}}^2\|f\|_{H^2_{\nu+2\alpha}},\quad$ & $\left\|\nabla\Tilde{\gamma} \nabla f \right\|_{L^2_{\nu+2\alpha+2}}\lesssim\| \nabla\Tilde{\gamma}\|_{H^1_{\delta+1}}\|\nabla f\|_{H^1_{\nu+2\alpha+1}}$, \\
$\left\|\nabla^2\Tilde{\gamma} \nabla f \right\|_{L^2_{\nu+2\alpha+3}}\lesssim \left\|\nabla^2\Tilde{\gamma} \right\|_{L^2_{\delta+2}}\|\nabla f\|_{H^2_{\nu+2\alpha+1}},\quad$ & $\left\|\left(\nabla\Tilde{\gamma}\right)^2 \nabla f \right\|_{L^2_{\nu+2\alpha+3}}\lesssim \|\nabla\Tilde{\gamma}\|_{H^1_{\delta+1}}^2\|\nabla f\|_{H^2_{\nu+2\alpha+1}},$ \\
$\left\|\nabla\Tilde{\gamma} \nabla^2f \right\|_{L^2_{\nu+2\alpha+3}}\lesssim \|\nabla\Tilde{\gamma}\|_{H^1_{\delta+1}}\left\|\nabla^2 f\right\|_{H^1_{\nu+2\alpha+2}},\quad$ & $\left\|\nabla^3\Tilde{\gamma} f \right\|_{L^2_{\nu+2\alpha+3}}\lesssim \left\|\nabla^3\Tilde{\gamma}\right\|_{L^2_{\delta'+3}}\|f\|_{H^2_{\nu+2\alpha}},$\\
$\left\|\nabla\Tilde{\gamma}\nabla^2\Tilde{\gamma} f \right\|_{L^2_{\nu+2\alpha+3}}\lesssim \|\nabla\Tilde{\gamma}\|_{H^2_{\delta'+1}}\left\|\nabla^2\Tilde{\gamma}f \right\|_{L^2_{\nu+2\alpha+2}},\quad$ & $\left\|\left(\nabla\Tilde{\gamma}\right)^3 f \right\|_{L^2_{\nu+2\alpha+3}}\lesssim \|\nabla\Tilde{\gamma}\|_{H^2_{\delta'+1}}\left\|\left(\nabla\Tilde{\gamma}\right)^2 f \right\|_{L^2_{\nu+2\alpha+3}}.$ \end{tabular} \end{center}
Note that the last two estimates involve $\left\|\nabla^2\Tilde{\gamma}f \right\|_{L^2_{\nu+2\alpha+2}}$ and $ \left\|\left(\nabla\Tilde{\gamma}\right)^2 f \right\|_{L^2_{\nu+2\alpha+3}}$, which have already been estimated. Looking at these estimates, we see that the only ones which use $\| \nabla\Tilde{\gamma}\|_{H^2_{\delta'+1}}$ are the last three. Those terms do not appear if we differentiate at most twice, which makes it clear why \eqref{useful gamma 1} is also proved. The proof of \eqref{useful ffi 1} is identical, using the embeddings $W^{1,4}\xhookrightarrow{}L^{\infty}$ and $H^2\xhookrightarrow{}L^{\infty}$. \end{proof}
\subsection{The constraint equations}\label{section constraints equations}
We are now ready to solve the constraint equations, which we rewrite in terms of $\mathring{\varphi}$ and $\mathring{\omega}$ : \begin{align}
\partial^iH_{ij}&=-2\mathring{\varphi}\partial_j\varphi-\frac{1}{2}e^{-4\varphi}\mathring{\omega}\partial_j\omega, \label{C1}\\
\Delta\gamma&=-e^{-2\gamma}\left(\mathring{\varphi}^2+\frac{1}{4}e^{-4\varphi}\mathring{\omega}^2+\frac{1}{2}\vert H\vert^2\right)-\vert\nabla\varphi\vert^2-\frac{1}{4}e^{-4\varphi}|\nabla\omega|^2.\label{C2} \end{align}
\begin{lem}\label{CI sur H} The equation \eqref{C1} admits a unique solution $H\in H^3_{\delta+1}$, a symmetric traceless covariant 2-tensor with $\Vert H\Vert_{H^1_{\delta+1}}\lesssim \varepsilon^2$. \end{lem}
\begin{proof} We look for a solution of the form $H=LY$ where $Y$ is a 1-form. We have $\partial^iH_{ij}=\Delta Y_j$ and $Y$ solves \begin{equation*}
\Delta Y_j=-2\mathring{\varphi}\partial_j\varphi-\frac{1}{2}e^{-4\varphi}\mathring{\omega}\partial_j\omega. \end{equation*} Using the definition of $L$, it is easy to check that $LY$ is a traceless symmetric 2-tensor. We use Theorem \ref{mcowens 1} with $p=2$ and $m=0$: the range of the Laplacian then consists of the functions $f\in H^0_{\delta+2}$ such that $\int f=0$. By assumption, $\int_{\mathbb{R}^2}\left(-2\mathring{\varphi}\partial_j\varphi-\frac{e^{-4\varphi}}{2}\mathring{\omega}\partial_j\omega\right)\mathrm{d} x=0$ and, thanks to the support property, the Hölder inequality and \eqref{useful ffi 1}, we have : \begin{equation*}
\left\| \Delta Y_j\right\|_{ H^0_{\delta+2}}\lesssim \left\| \mathring{\varphi}\partial_j\varphi\right\|_{ L^2}+\left\| \mathring{\omega}\partial_j\omega\right\|_{ L^2}\lesssim \left\|\mathring{\varphi}\right\|_{L^4}\left\|\nabla\varphi\right\|_{L^4}+\left\|\mathring{\omega}\right\|_{L^4}\left\|\nabla\omega\right\|_{L^4}\lesssim \varepsilon^2. \end{equation*} Thus, there exists a unique solution $Y_j\in H^2_{\delta}$. Moreover, we have $\Vert Y_j\Vert_{H^2_{\delta}}\lesssim \varepsilon^2$, which implies $\Vert H\Vert_{H^1_{\delta+1}}\lesssim \varepsilon^2$.
We can improve the regularity of $H$, by noting that \begin{equation*}
\Vert H\Vert_{H^3_{\delta+1}}\leq \Vert Y\Vert_{H^4_{\delta}}\lesssim\Vert \mathring{\varphi}\nabla\varphi\Vert_{H^2}+\Vert \mathring{\omega}\nabla\omega\Vert_{H^2}\lesssim C_{high}^2. \end{equation*} In the last inequality we use the fact that in dimension 2, $H^2$ is an algebra.
Our solution $H\in H^3_{\delta+1}$ is unique because of the following fact : the difference of two solutions is a traceless symmetric divergence-free 2-tensor in $H^3_{\delta+1}$, hence satisfies $\Delta H_{ij}=0$ componentwise (see the computation below), which implies that it vanishes, again thanks to Theorem \ref{mcowens 1}. \end{proof}
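Let us justify the componentwise harmonicity used in this argument. If $H$ is symmetric, traceless and divergence-free, then $H_{22}=-H_{11}$ and the divergence-free condition reads $\partial_1H_{11}+\partial_2H_{12}=0$ and $\partial_1H_{12}-\partial_2H_{11}=0$, so that
\begin{equation*}
\Delta H_{11}=\partial_1\left(-\partial_2H_{12}\right)+\partial_2\left(\partial_1H_{12}\right)=0,\qquad \Delta H_{12}=\partial_1\left(\partial_2H_{11}\right)+\partial_2\left(-\partial_1H_{11}\right)=0,
\end{equation*}
and $\Delta H_{22}=-\Delta H_{11}=0$. This observation will be used again in the proof of Lemma \ref{lem CI beta}.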
\begin{lem}\label{lem CI gamma} For $\varepsilon$ sufficiently small, the equation \eqref{C2} admits a unique solution $\gamma=-\alpha \chi\ln+\Tilde{\gamma}$ with $\Tilde{\gamma}\in H^4_{\delta}$, $\Vert \Tilde{\gamma}\Vert_{H^2_{\delta}}\lesssim \varepsilon^2$ and $0\leq\alpha\lesssim \varepsilon^2$. \end{lem}
\begin{proof} We are going to use a fixed point argument in $[0,\varepsilon]\times B_{H^2_{\delta}}(0,\varepsilon)$. We define on this space the map $\phi:(\alpha^{(1)},\Tilde{\gamma}^{(1)})\longmapsto(\alpha^{(2)},\Tilde{\gamma}^{(2)})$, where $\gamma^{(2)}$ is the unique solution of \begin{equation}
\Delta\gamma^{(2)}=-\vert\nabla\varphi\vert^2-\frac{1}{4}e^{-4\varphi}|\nabla\omega|^2-e^{-2\gamma^{(1)}}\left( \frac{1}{2}\vert H\vert^2+\mathring{\varphi}^2+\frac{1}{4}e^{-4\varphi}\mathring{\omega}^2\right),\label{contraction 1} \end{equation} with the notation $\gamma^{(i)}=-\alpha^{(i)}\chi(r)\ln(r)+\Tilde{\gamma}^{(i)}$. We want to prove that if $\varepsilon$ is small enough, $\phi$ is indeed a contraction. Let us show that the RHS of \eqref{contraction 1} is in $H^0_{\delta+2}$. By assumption on $\varphi$ and $\omega$ we can write, using Hölder's inequality and \eqref{useful ffi 1} : \begin{equation*}
\left\| \vert \nabla \varphi \vert^2+\frac{1}{4}e^{-4\varphi}|\nabla\omega|^2\right\|_{ H^0_{\delta+2}}\lesssim \left\| \vert \nabla \varphi \vert^2\right\|_{ L^2}+\left\| \vert \nabla \omega \vert^2\right\|_{ L^2}\lesssim \varepsilon^2. \end{equation*}
For the term $e^{-2\gamma^{(1)}}|H|^2$, we use \eqref{useful gamma 1}, the product estimate and choose $\varepsilon$ small enough : \begin{equation*}
\left\| e^{-2\gamma^{(1)}}|H|^2 \right\|_{ H^0_{\delta+2}}\lesssim \left\| |H|^2\right\|_{H^0_{\delta+2(1+\varepsilon)}} \lesssim \Vert H\Vert_{H^1_{\delta+1}}^2\lesssim \varepsilon^4. \end{equation*} The last term is handled with the same arguments : \begin{equation*}
\left\| e^{-2\gamma^{(1)}}\left(\mathring{\varphi}^2+\frac{1}{4}e^{-4\varphi}\mathring{\omega}^2 \right)\right\|_{H^0_{\delta+2}} \lesssim \left\| \mathring{\varphi}^2\right\|_{L^2} +\left\| \mathring{\omega}^2\right\|_{L^2}\lesssim \varepsilon^2. \end{equation*} We next prove the bound on $\alpha^{(2)} =-\frac{1}{2\pi}\int_{\mathbb{R}^2}(\text{RHS of \eqref{contraction 1}})$, its positivity being clear. We have \begin{align*}
\left|\alpha^{(2)}\right| & \lesssim \left\| \nabla\varphi \right\|_{L^2}^2+ \left\| \nabla\omega \right\|_{L^2}^2 + \left\|e^{-\gamma^{(1)}} H \right\|_{L^2}^2 + \left\| e^{-\gamma^{(1)}}\mathring{\varphi} \right\|_{L^2}^2+ \left\| e^{-\gamma^{(1)}-2\varphi}\mathring{\omega} \right\|_{L^2}^2
\\& \lesssim\left\| \nabla\varphi \right\|_{L^4}^2+\left\| \nabla\omega \right\|_{L^4}^2+ \left\|H\right\|_{H^0_{\varepsilon}}^2+\left\| \mathring{\varphi} \right\|_{L^4}^2+\left\| \mathring{\omega} \right\|_{L^4}^2
\\& \lesssim \varepsilon^2, \end{align*} where we used Hölder's inequality, \eqref{useful gamma 1} (for the last three terms) and the support property of $\varphi$, $\mathring{\varphi}$, $\omega$ and $\mathring{\omega}$. In conclusion, thanks to Corollary \ref{mcowens 2}, if $\varepsilon$ is small enough, $\phi$ is indeed a map from $[0,\varepsilon]\times B_{H^2_{\delta}}(0,\varepsilon)$ to itself, and we can prove in the same way that it is a contraction. \par\leavevmode\par We can improve the regularity of $\Tilde{\gamma}$, using \eqref{useful gamma 1} and \eqref{useful ffi 1} : \begin{align*}
\left\| \Tilde{\gamma}\right\|_{H^4_{\delta}}&\lesssim \left\| e^{-2\gamma}H^2 \right\|_{ H^2_{\delta+2}} + \left\| e^{-2\gamma}\mathring{\varphi}^2\right\|_{H^2}+ \left\| \vert \nabla \varphi \vert^2\right\|_{ H^2}
\\& \lesssim \left\| H^2 \right\|_{H^2_{\delta+2\varepsilon+2}}+\left\| \mathring{\varphi}^2\right\|_{H^2}+\left\| \vert \nabla \varphi \vert^2\right\|_{ H^2}
\\& \lesssim \|H\|_{H^3_{\delta+1}}^2+C_{high}^2, \end{align*} where in the last inequality, we used the product estimate (with $\varepsilon$ small enough) for the first term and the algebraic structure of $H^2$ for the remaining terms. Thanks to Lemma \ref{CI sur H}, the final quantity is finite, which concludes the proof.
\end{proof}
\subsection{Initial data for the reduced system}
The equations satisfied by $N$ and $\beta$ are : \begin{align}
\Delta N & =e^{-2\gamma}N\left(\vert H\vert^2+\mathring{\varphi}^2+\frac{1}{4}e^{-4\varphi}\mathring{\omega}^2\right) , \label{CI sur N}\\
L\beta & =2e^{-2\gamma}NH.\label{CI sur beta} \end{align} The equation \eqref{CI sur N} comes from \eqref{EQ N} in the case $\tau=0$, and the equation \eqref{CI sur beta} comes from \eqref{appendix beta}. \begin{lem}\label{lem CI N} For $\varepsilon$ sufficiently small, the equation \eqref{CI sur N} admits a unique solution $N=1+N_a\chi\ln+\Tilde{N}$ with $\Tilde{N}\in H^4_{\delta}$, $\Vert \Tilde{N}\Vert_{H^2_{\delta}}\lesssim \varepsilon^2$ and $0\leq N_a\lesssim \varepsilon^2$. \end{lem}
\begin{proof} We look for a solution of the form $N=1+N_a\chi(r)\ln(r)+\Tilde{N}$, with $N_a\geq 0$. On the space $[0,\varepsilon]\times B_{H^2_{\delta}}(0,\varepsilon)$, we define the map $\phi(N_a^{(1)},\Tilde{N}^{(1)})=(N_a^{(2)},\Tilde{N}^{(2)})$, where, with the notation $N^{(i)}=1+N_a^{(i)}\chi(r)\ln(r)+\Tilde{N}^{(i)}$, $N^{(2)}$ is the solution of \begin{equation}
\Delta N^{(2)}=e^{-2\gamma}N^{(1)}(\vert H\vert^2+\mathring{\varphi}^2+\frac{1}{4}e^{-4\varphi}\mathring{\omega}^2). \end{equation}
Let us show that the RHS is in $H^0_{\delta+2}$. Thanks to the support property of $\mathring{\varphi}$ and $\mathring{\omega}$, the first term is handled quite easily using \eqref{useful gamma 1}, \eqref{useful ffi 1} and the fact that $\left\|N^{(1)} \right\|_{L^{\infty}(B_R)}\lesssim 1$ (note the embedding $H^2_\delta\xhookrightarrow{}L^{\infty}$) : \begin{equation*}
\left\| e^{-2\gamma}N^{(1)}\left(\mathring{\varphi}^2+\frac{1}{4}e^{-4\varphi}\mathring{\omega}^2\right)\right\|_{H^0_{\delta+2}}\lesssim \left\|N^{(1)} \right\|_{L^{\infty}(B_R)}\left(\left\|\mathring{\varphi}^2\right\|_{L^2}+\left\|\mathring{\omega}^2\right\|_{L^2}\right)\lesssim\varepsilon^2. \end{equation*} Using again \eqref{useful gamma 1}, the fact that $\vert \chi\ln\vert\lesssim\langle x\rangle^{\frac{\delta+1}{2}} $, the embedding $H^2_{\delta}\xhookrightarrow{}L^{\infty}$ (used for $\Tilde{N}^{(1)}$) and the product estimate, we handle the second term : \begin{align*}
\left\| e^{-2\gamma}N^{(1)}|H|^2\right\|_{H^0_{\delta+2}} & \lesssim\left\| N^{(1)}|H|^2\right\|_{H^0_{\delta+2+2\varepsilon}}\nonumber\\ & \lesssim \left\| |H|^2\right\|_{H^0_{\delta+2+2\varepsilon}}\left(1+\left\|\Tilde{N}^{(1)}\right\|_{H^2_{\delta}}\right)+\varepsilon\left\| |H|^2\right\|_{H^0_{\delta+2+2\varepsilon+\frac{\delta+1}{2}}}
\\&\lesssim(1+\varepsilon)\left\| H\right\|_{H^1_{\delta+1}}^2
\\&\lesssim \varepsilon^4. \end{align*} We showed that, for $\varepsilon$ small enough, we have $\Vert\Delta N^{(2)}\Vert_{H^0_{\delta+2}}\lesssim \varepsilon^2$. \\ We have : \begin{equation}
2\pi N_a^{(2)}=\int_{\mathbb{R}^2}e^{-2\gamma}N^{(1)}|H|^2+\int_{\mathbb{R}^2}e^{-2\gamma}N^{(1)}\mathring{\varphi}^2+\frac{1}{4}\int_{\mathbb{R}^2}e^{-2\gamma-4\varphi}N^{(1)}\mathring{\omega}^2. \end{equation} If $\varepsilon$ is small enough, we have $N^{(1)}\geq 0$ (using the embedding $H^2_{\delta}\xhookrightarrow{}L^{\infty}$), so that $N_a^{(2)}\geq 0$. With the same kind of arguments as previously, we can easily show that $N_a^{(2)}\lesssim\varepsilon^2$. \\ This shows that $\phi$ is well defined (provided $\varepsilon$ is small, thanks to Corollary \ref{mcowens 2}); similar computations show that it is a contraction, since the equation is linear. \par\leavevmode\par We can improve the regularity of $\Tilde{N}$, using \eqref{useful gamma 1} and \eqref{useful ffi 1} : \begin{align}
\left\|\Tilde{N} \right\|_{H^4_{\delta}}&\lesssim \left\|e^{-2\gamma}N|H|^2 \right\|_{H^2_{\delta+2}}+ \left\|e^{-2\gamma}N\mathring{\varphi}^2 \right\|_{H^2}+ \left\|e^{-2\gamma-4\varphi}N\mathring{\omega}^2 \right\|_{H^2} \label{idem1}
\\& \lesssim \left\| |H|^2\right\|_{H^2_{\delta+2+2\varepsilon}}\left(1+\left\|\Tilde{N}^{(1)}\right\|_{H^2_{\delta}}\right)+\varepsilon\left\| \chi\ln |H|^2\right\|_{H^2_{\delta+2+2\varepsilon}}
+ \left\|N\right\|_{L^{\infty}(B_R)}\left(\left\| \mathring{\varphi}^2 \right\|_{H^2}+\left\|\mathring{\omega}^2\right\|_{H^2}\right).\nonumber \end{align}
Using $\vert \chi\ln\vert\lesssim\langle x\rangle^{\frac{\delta+1}{2}} $ and $\left| \nabla^a(\chi\ln)\right|\lesssim\langle x\rangle^{-|a|}$ (for $a\neq0$), we easily show that $\left\| \chi\ln |H|^2\right\|_{H^2_{\delta+2+2\varepsilon}}\lesssim \left\| |H|^2\right\|_{H^2_{\delta+2+2\varepsilon+\frac{\delta+1}{2}}}$ to obtain : \begin{equation*}
\left\| \Tilde{N}\right\|_{H^4_{\delta}}\lesssim (1+\varepsilon)\left\| H\right\|_{H^3_{\delta+1}}^2+\|\mathring{\varphi}\|_{H^2}^2+\|\mathring{\omega}\|_{H^2}^2, \end{equation*} which is finite, thanks to Lemma \ref{CI sur H}. \end{proof}
The following simple lemma will be useful in order to use Theorem \ref{mcowens 1} for $\beta$ :
\begin{lem}\label{divergence nulle} Let $m\in\mathbb{N}$, $\nu\in\mathbb{R}$ and let $u=(u_1,u_2)$ be a function from $\mathbb{R}^2$ to $\mathbb{R}^2$ such that $u_i\in H^m_{\nu}$. If $m\geq 2$ and $\nu>0$, then \begin{equation*}
\int_{\mathbb{R}^2}\mathrm{div}(u)=0 \end{equation*} \end{lem}
\begin{proof} We fix $r>0$ and use the Stokes formula : \begin{align*}
\bigg| \int_{B_r}\mathrm{div}(u)\bigg| = \bigg| \int_{\partial B_r}u\cdot n\,\mathrm{d}\sigma\bigg| \leq \int_{\partial B_r}\langle x\rangle^{-\nu-1}\langle x\rangle^{\nu+1}|u\cdot n|\,\mathrm{d}\sigma \lesssim \Vert u\Vert_{C^0_{\nu+1}}r^{-\nu} \end{align*} If $m\geq 2$ and $\nu>0$, we have the Sobolev embedding $H^m_{\nu}\subset C^0_{\nu+1}$, which concludes the proof since the last inequality implies \begin{equation*}
\lim_{r\to +\infty}\int_{B_r}\mathrm{div}(u)=0. \end{equation*} \end{proof}
\begin{lem}\label{lem CI beta} For $\varepsilon$ sufficiently small, the equation \eqref{CI sur beta} admits a unique solution $\beta\in H^4_{\delta'}$ with $\Vert \beta\Vert_{H^2_{\delta'}}\lesssim\varepsilon^2$. \end{lem}
\begin{proof} We take the divergence of \eqref{CI sur beta} to obtain the following elliptic equation : \begin{equation}
\Delta\beta_j=\partial^i(2Ne^{-2\gamma}H_{ij})\label{laplacien beta CI} \end{equation}
Thanks to Lemma \ref{divergence nulle}, $\int_{\mathbb{R}^2}\partial^i(2Ne^{-2\gamma}H_{ij})=0$ (the fact that $e^{-2\gamma}NH\in H^2_{\delta'+1}$ will be proved later in this proof). Thus, in order to apply Theorem \ref{mcowens 1}, it remains to show that $\Vert \partial^i(2Ne^{-2\gamma}H_{ij})\Vert_{H^0_{\delta'+2}}\lesssim \varepsilon^2$. For that, we use \eqref{useful gamma 1}, $\varepsilon|\chi\ln|\lesssim\langle x\rangle^{\frac{\varepsilon}{2}}$, and Lemmas \ref{CI sur H} and \ref{lem CI N} : \begin{align*}
\left\| \partial^i(2Ne^{-2\gamma}H_{ij})\right\|_{H^0_{\delta'+2}} &\lesssim \left\| e^{-2\gamma}NH\right\|_{H^1_{\delta'+1}}\\&\lesssim \left\| H\right\|_{H^1_{\delta+1}}\left(1+\left\|\Tilde{N} \right\|_{H^2_{\delta}} \right)+\left\| H\right\|_{H^1_{\delta'+1+C\varepsilon^2+\frac{\varepsilon}{2}}}
\\& \lesssim \varepsilon^2, \end{align*} where in the last inequality, we take $\varepsilon$ such that $C\varepsilon^2\leq\frac{\varepsilon}{2}$. Thus, we can apply Theorem \ref{mcowens 1} to obtain the existence of a solution to \eqref{laplacien beta CI}. We can improve the regularity of this solution using \eqref{useful gamma 2} : \begin{align*}
\left\|\beta \right\|_{H^4_{\delta'}}&\lesssim \left\| e^{-2\gamma}NH\right\|_{H^3_{\delta'+1}}
\\& \lesssim\left\|NH \right\|_{H^3_{\delta'+C\varepsilon^2+1}}+\left\|\nabla\Tilde{\gamma} \right\|_{H^2_{\delta'+1}}\left\| NH\right\|_{H^2_{\delta'+C\varepsilon^2+1}}
\\& \lesssim\left(1+\left\|\nabla\Tilde{\gamma} \right\|_{H^2_{\delta'+1}} \right)\left(\left\| H\right\|_{H^3_{\delta+1}}\left(1+\left\|\Tilde{N} \right\|_{H^4_{\delta}} \right)+\left\| H\right\|_{H^3_{\delta'+1+C\varepsilon^2+\frac{\varepsilon}{2}}}\right). \end{align*}
Taking $\varepsilon$ such that $C\varepsilon^2\leq\frac{\varepsilon}{2}$, we conclude using Lemmas \ref{CI sur H}, \ref{lem CI gamma} and \ref{lem CI N} that $\left\|\beta \right\|_{H^4_{\delta'}}<\infty$. \par\leavevmode\par It remains to show that our solution $\beta$ satisfies $L\beta=2Ne^{-2\gamma}H$. Since $\mathrm{div}(L\beta)_j=\Delta\beta_j$, we have shown that $L\beta-2Ne^{-2\gamma}H$ is a covariant symmetric traceless divergence-free 2-tensor, which implies that its components are harmonic and thus vanish (because they decay at infinity). We use the same argument to show that the solution is unique. \end{proof}
In order to have $\tau_{|\Sigma_0}=0$, in view of \eqref{EQ tau} we must make the following choice :
\begin{lem} We set $e_0\gamma=\frac{1}{2}\mathrm{div}(\beta)$ on $\Sigma_0$. Then, we have $e_0\gamma\in H^3_{\delta'+1}$ and $\Vert e_0\gamma\Vert_{H^1_{\delta'+1}}\lesssim\varepsilon^2$. \end{lem}
\begin{proof} It follows directly from the estimates on $\beta$ proved in Lemma \ref{lem CI beta} and from Lemma \ref{B1}. \end{proof}
We summarise in the next corollary our results about the constraint equations and the initial data :
\begin{coro}\label{coro premiere section}
For $\varepsilon$ sufficiently small depending only on $\delta$, given a free initial data set, there exists an initial data set for the reduced system such that the constraint equations are satisfied and $\tau_{|\Sigma_0}=0$. Moreover, we have the following estimates : \begin{itemize}[label=\textbullet]
\item there exists $C>0$ depending only on $\delta$ and $R$ such that :
\begin{equation}
\Vert H\Vert_{H^1_{\delta+1}}+\vert\alpha\vert+\left\|\Tilde{\gamma}\right\|_{H^2_{\delta}}+\Vert e_0\gamma\Vert_{H^1_{\delta'+1}}+\vert N_a\vert+\left\|\Tilde{N}\right\|_{H^2_{\delta}}+\Vert \beta\Vert_{H^2_{\delta}}\leq C\varepsilon^2\label{CI petit}
\end{equation}
\item there exists $C_i>0$ depending on $\delta$, $R$ and $C_{high}$ such that :
\begin{equation}
\Vert H\Vert_{ H^3_{\delta+1}}+\left\|\Tilde{\gamma}\right\|_{ H^4_{\delta}}+\Vert e_0\gamma\Vert_{H^3_{\delta'+1}}+\left\|\Tilde{N}\right\|_{ H^4_{\delta}}+\Vert\beta\Vert_{ H^4_{\delta'}}\leq C_i\label{CI gros}
\end{equation} \end{itemize} \end{coro}
\section{Solving the reduced system}\label{section solving the reduced system}
In this section, we solve the reduced system of equations introduced in Section \ref{section reduced system} by an iteration method. We first construct a sequence of iterates, defined in Section \ref{section iteration scheme}, and prove that it is bounded in a small space (this is done in Section \ref{uniforme boundedness}). Then we prove in Section \ref{Cauchy} that the sequence is Cauchy in a larger space, which implies the existence and uniqueness of solutions to the reduced system of equations.
\subsection{Iteration scheme}\label{section iteration scheme}
In order to solve the reduced system \eqref{EQ N}-\eqref{EQ omega}, we construct the sequence \begin{equation*}
(N^{(n)}=1+N_a^{(n)}\chi\ln+\Tilde{N}^{(n)},\tau^{(n)},H^{(n)},\beta^{(n)}, \gamma^{(n)}=-\alpha\chi\ln+\Tilde{\gamma}^{(n)},\varphi^{(n)},\omega^{(n)}) \end{equation*} iteratively as follows : for $n=1,2$, let $N^{(n)},\tau^{(n)},H^{(n)},\beta^{(n)}, \gamma^{(n)},\varphi^{(n)},\omega^{(n)}$ be time-independent, with initial data as in Section \ref{initial data}. For $n\geq 2$, given the $n$-th iterate, the $(n+1)$-st iterate is then defined by solving the following system with initial data as in Section \ref{initial data} : \begin{align}
\Delta N^{(n+1)}&=e^{-2\gamma^{(n)}}N^{(n)}\left| H^{(n)}\right|^2+\frac{\left(\tau^{(n)}\right)^2}{2}e^{2\gamma^{(n)}}N^{(n)}\nonumber\\&\qquad\quad\quad+\frac{2e^{2\gamma^{(n)}}}{N^{(n)}}\left(e_0^{(n-1)}\varphi^{(n)}\right)^2+\frac{e^{2\gamma^{(n)}-4\varphi^{(n)}}}{2N^{(n)}}\left(e_0^{(n-1)}\omega^{(n)}\right)^2 \label{reduced system N}\\
L\beta^{(n+1)}&=2e^{-2\gamma^{(n)}}N^{(n)}H^{(n)} \label{reduced system beta}\\
\tau^{(n+1)}&=-2\mathbf{T}^{(n-1)}\gamma^{(n)}+\frac{\mathrm{div}\left(\beta^{(n)}\right)}{N^{(n-1)}} \label{reduced system tau}\\
e_0^{(n+1)}\left(H^{(n+1)}\right)_{ij}&=-2e^{-2\gamma^{(n)}}N^{(n)}\left(H^{(n)}\right)_i^{\;\,\ell}\left(H^{(n)}\right)_{j\ell}+\partial_{(j}\left(\beta^{(n)}\right)^k\left(H^{(n)}\right)_{i)k}\nonumber\\&\qquad-\frac{1}{2}\left(\partial_i\Bar{\otimes}\partial_j\right)N^{(n)}+\left(\delta_i^k\Bar{\otimes}\partial_j\gamma^{(n)}\right)\partial_kN^{(n)}\label{reduced system H}\\&\qquad-\left(\partial_i\varphi^{(n)}\Bar{\otimes}\partial_j\varphi^{(n)}\right)N^{(n)}-\frac{1}{4}e^{-4\varphi^{(n)}}\left(\partial_i\omega^{(n)}\Bar{\otimes}\partial_j\omega^{(n)}\right)N^{(n)} \nonumber\\
\left(\mathbf{T}^{(n)}\right)^2\gamma^{(n+1)}-e^{-2\gamma^{(n)}}\Delta \gamma^{(n+1)}&=-\frac{\left(\tau^{(n)}\right)^2}{2}+\frac{1}{2N^{(n)}}e_0^{(n-1)}\left(\frac{\mathrm{div}\left(\beta^{(n)}\right)}{N^{(n-1)}}\right)\nonumber\\&\qquad\quad\quad+e^{-2\gamma^{(n)}}\left(\frac{\Delta N^{(n)}}{2N^{(n)}}+\left|\nabla\varphi^{(n)}\right|^2+\frac{1}{4}e^{-4\varphi^{(n)}}\left|\nabla\omega^{(n)} \right|^2\right) \label{reduced system gamma}\\
\left(\mathbf{T}^{(n)}\right)^2\varphi^{(n+1)}-e^{-2\gamma^{(n)}}\Delta \varphi^{(n+1)}&=\frac{e^{-2\gamma^{(n)}}}{N^{(n)}}\nabla \varphi^{(n)}\cdot\nabla N^{(n)}+\frac{\tau^{(n)} e_0^{(n-1)}\varphi^{(n)}}{N^{(n)}}\nonumber\\&\qquad\qquad\qquad +\frac{1}{2}e^{-4\varphi^{(n)}}\left( \left(e_0^{(n-1)}\omega^{(n)}\right)^2+\left|\nabla\omega^{(n)}\right|^2 \right)\label{reduced system fi}\\
\left(\mathbf{T}^{(n)}\right)^2\omega^{(n+1)}-e^{-2\gamma^{(n)}}\Delta \omega^{(n+1)}&=\frac{e^{-2\gamma^{(n)}}}{N^{(n)}}\nabla \omega^{(n)}\cdot\nabla N^{(n)}+\frac{\tau^{(n)} e_0^{(n-1)}\omega^{(n)}}{N^{(n)}}\nonumber\\&\qquad\qquad\qquad-4e_0^{(n-1)}\omega^{(n)} e_0^{(n-1)}\varphi^{(n)}-4\nabla\omega^{(n)}\cdot\nabla\varphi^{(n)},\label{reduced system omega} \end{align}
This system is not a linear system in the $(n+1)$-th iterate, because of the term $e_0^{(n+1)}H^{(n+1)}$ in \eqref{reduced system H} (which contains $\beta^{(n+1)}\cdot\nabla H^{(n+1)}$). The local well-posedness of this system follows from the estimates we are about to prove. Note that we use the following notation : \begin{equation*} e_0^{(k)} = \partial_t - \beta^{(k)}\cdot \nabla \quad \text{and}\quad \mathbf{T}^{(k)}=\frac{e_0^{(k)}}{N^{(k)}}. \end{equation*}
\subsection{Boundedness of the sequence}\label{uniforme boundedness}
The first step is to show that the sequence is uniformly bounded in appropriate function spaces. We proceed by strong induction and suppose that the following estimates hold for all $k$ up to some $n\geq 2$ and for all $t\in [0,T]$. Here, $A_0\ll A_1\ll A_2\ll A_3\ll A_4$ are all sufficiently large constants, independent of $\varepsilon$ and $n$, to be chosen later. We also set $\delta'=\delta-\varepsilon$ and take $\varepsilon$ small enough so that $-1<\delta'$. We also choose $\lambda>0$ a small constant such that $\lambda<\delta+1$.
\begin{itemize}[label=\textbullet]
\item $N^{(k)}$ is of the form $N^{(k)}=1+N_a^{(k)}\chi\ln+\Tilde{N}^{(k)}$ with $N_a^{(k)}\geq 0$ and
\begin{align}
\left| N_a^{(k)}\right|+\left\|\Tilde{N}^{(k)}\right\|_{H^2_{\delta}}&\leq \varepsilon\label{HR N 1},\\
\left|\partial_tN_a^{(k)}\right|+\left\|\Tilde{N}^{(k)}\right\|_{H^3_{\delta}}+\left\|\partial_t\Tilde{N}^{(k)}\right\|_{H^2_{\delta}}&\leq 2C_i\label{HR N 2},
\\ \left\| \Tilde{N}^{(k)}\right\|_{H^4_{\delta}} & \leq A_2 C_i^2\label{HR N 3}.
\end{align}
\item $\beta^{(k)}$ satisfies
\begin{align}
\left\|\beta^{(k)}\right\|_{H^2_{\mathbf{\delta'}}}&\leq \varepsilon\label{HR beta 1},\\
\left\|\beta^{(k)}\right\|_{H^3_{\mathbf{\delta'}}}&\leq A_0 C_i\label{HR beta 2},\\
\left\| \nabla e_0^{(k-1)}\beta^{(k)}\right\|_{L^2_{\delta'+1}}&\leq C_i\label{HR beta 2.5},\\
\left\| e_0^{(k-1)}\beta^{(k)}\right\|_{H^2_{\delta'}}&\leq A_1C_i\label{HR beta 3},\\
\left\| e_0^{(k-1)}\beta^{(k)}\right\|_{H^3_{\delta'}}&\leq A_4C_i^2.\label{HR beta 4}
\end{align}
\item $H^{(k)}$ satisfies
\begin{align}
\left\| H^{(k)}\right\|_{H^2_{\delta+1}}&\leq 2C_i\label{HR H 1},\\
\left\| e_0^{(k)}H^{(k)}\right\|_{L^2_{1+\lambda}}&\leq \varepsilon,\label{HR H 1.5}\\
\left\| e_0^{(k)}H^{(k)}\right\|_{H^1_{\delta+1}}&\leq A_0C_i,\label{HR H 2}\\
\left\| e_0^{(k)}H^{(k)}\right\|_{H^2_{\delta+1}}&\leq A_3C_i^2.\label{HR H 3}
\end{align}
\item $\tau^{(k)}$ satisfies
\begin{align}
\left\| \tau^{(k)}\right\|_{H^2_{\delta'+1}}&\leq A_1C_i\label{HR tau 1},\\
\left\| \partial_t\tau^{(k)}\right\|_{L^2_{\delta'+1}}&\leq A_2C_i\label{HR tau 2},\\
\left\| \partial_t\tau^{(k)}\right\|_{H^1_{\delta'+1}}&\leq A_3C_i.\label{HR tau 3}
\end{align}
\item $\gamma^{(k)}$ is of the form $\gamma^{(k)}=-\alpha\chi\ln+\Tilde{\gamma}^{(k)}$ with $\alpha$ as previously and $\Tilde{\gamma}^{(k)}$ satisfies
\begin{align}
\sum_{|\alpha|\leq 2} \left\Vert\mathbf{T}^{(k-1)}\nabla^{\alpha}\Tilde{\gamma}^{(k)}\right\Vert_{L^2_{\delta'+1+|\alpha|}}+\left\| \nabla \Tilde{\gamma}^{(k)}\right\|_{H^2_{\delta'+1}}&\leq 8 C_i,\label{HR gamma 1}
\\ \left\Vert\partial_t\left(\mathbf{T}^{(k-1)}\Tilde{\gamma}^{(k)}\right)\right\Vert_{L^2_{\delta'+1}}&\leq A_0C_i,\label{HR gamma 2}
\\ \left\Vert\partial_t\left(\mathbf{T}^{(k-1)}\Tilde{\gamma}^{(k)}\right)\right\Vert_{H^1_{\delta'+1}}&\leq A_2C_i.\label{HR gamma 3}
\end{align}
\item $\varphi^{(k)}$ and $\omega^{(k)}$ are compactly supported in
\begin{equation*}
\enstq{(t,x)\in[0,T]\times\mathbb{R}^2}{\vert x\vert\leq R+C_s(1+R^{\varepsilon})t},
\end{equation*}
where $C_s>0$ is to be chosen in Lemma \ref{support fi n+1}. Choosing $T$ smaller if necessary, we assume that the above set is a subset of $[0,T]\times B_{2R}$. Moreover, the following estimates hold :
\begin{align}
\left\|\partial_t\varphi^{(k)}\right\|_{H^2}+\left\|\nabla\varphi^{(k)}\right\|_{H^2}+\left\Vert\partial_t\left(\mathbf{T}^{(k-1)}\varphi^{(k)}\right)\right\Vert_{H^1}&\leq A_0C_i,\label{HR fi 1}\\
\left\|\partial_t\omega^{(k)}\right\|_{H^2}+\left\|\nabla\omega^{(k)}\right\|_{H^2}+\left\Vert\partial_t\left(\mathbf{T}^{(k-1)}\omega^{(k)}\right)\right\Vert_{H^1}&\leq A_0C_i,\label{HR omega 1}
\end{align} \end{itemize}
Recalling the statement of Theorem \ref{theoreme principal}, $C_{high}$ is a potentially large constant on which $T$ can depend, but $\varepsilon_{0}$ has to be independent of $C_{high}$ and $C_i$ (which, as explained in Corollary \ref{coro premiere section}, depends on $C_{high}$). Therefore, in the following estimates, we will keep track of $C_i$, and $\varepsilon C_i$ is not a small constant.
We will use the symbol $\lesssim$ where the implicit constants are independent of $A_0$, $A_1$, $A_2$, $A_3$, $A_4$ and $C_i$, and use $C$ as the notation for such a constant. Moreover, $C(A_i)$ will denote a constant depending on $A_i$, but not on $C_i$. At the end of the proof, we will choose $A_0$, $A_1$, $A_2$, $A_3$ and $A_4$ such that $C(A_i)\ll A_{i+1}$ for all $i=0,\dots,3$. \par\leavevmode\par Our goal now is to prove that all these estimates are still true for the next iterate. For most of them, we will in fact show that they hold with better constants on the RHS.
\subsubsection{Preliminary estimates}
The next result will be very useful in the sequel :
\begin{prop}\label{commutation estimate} The following estimate holds : \begin{equation*}
\left\|\mathbf{T}^{(n-1)}\Tilde{\gamma}^{(n)}\right\|_{H^2_{\delta'+1}}\leq 9C_i.\label{commutation estimate eq} \end{equation*} \end{prop}
\begin{proof}
In view of \eqref{HR gamma 1}, we have to commute $\mathbf{T}^{(n-1)}$ with $\nabla^{\alpha}$ (for $|\alpha|\leq 2$). Indeed : \begin{equation*}
\left\|\mathbf{T}^{(n-1)}\Tilde{\gamma}^{(n)}\right\|_{H^2_{\delta'+1}}\leq \sum_{|\alpha|\leq 2}\left( \left\Vert\mathbf{T}^{(n-1)}\nabla^{\alpha}\Tilde{\gamma}^{(n)}\right\Vert_{L^2_{\delta'+1+|\alpha|}}+\left\|\left[ \mathbf{T}^{(n-1)},\nabla^{\alpha}\right]\Tilde{\gamma}^{(n)}\right\|_{L^2_{\delta'+1+|\alpha|}}\right) \end{equation*} Using the commutation formula $[e_0^{(n-1)},\nabla]=\nabla\beta^{(n-1)}\nabla$, we compute \begin{equation*}
\left[\mathbf{T}^{(n-1)},\nabla\right] \Tilde{\gamma}^{(n)} =\frac{\nabla\beta^{(n-1)}}{N^{(n-1)}}\nabla \Tilde{\gamma}^{(n)}-\frac{\nabla N^{(n-1)}}{N^{(n-1)}}\mathbf{T}^{(n-1)}\Tilde{\gamma}^{(n)} \end{equation*}
We need smallness for the metric components, so we use, on the one hand, \eqref{HR beta 1}, the product estimate and the fact that $|\frac{1}{N^{(n-1)}}|\lesssim 1$, and, on the other hand, the fact that $|\nabla(\chi\ln)|\lesssim \langle x\rangle^{-1}$ together with \eqref{HR N 1}, to write \begin{align*}
\left\|\left[ \mathbf{T}^{(n-1)},\nabla\right]\Tilde{\gamma}^{(n)}\right\|_{L^2_{\delta'+2}}&\lesssim \left\|\nabla\beta^{(n-1)}\right\|_{H^1_{\delta'+1}}\left\|\nabla\Tilde{\gamma}^{(n)}\right\|_{H^1_{\delta'+1}}+\left|N_a^{(n-1)}\right|\left\|\mathbf{T}^{(n-1)}\Tilde{\gamma}^{(n)}\right\|_{L^2_{\delta'+1}} \\ & \quad+ \left\|\nabla\Tilde{N}^{(n-1)}\right\|_{H^1_{\delta+1}}\left\|\mathbf{T}^{(n-1)}\Tilde{\gamma}^{(n)}\right\|_{H^1_{\delta'+1}}\\
&\lesssim \varepsilon\left(\left\|\nabla\Tilde{\gamma}^{(n)}\right\|_{H^2_{\delta'+1}}+ \left\|\mathbf{T}^{(n-1)}\Tilde{\gamma}^{(n)}\right\|_{L^2_{\delta'+1}}\right)+\varepsilon\left\|\mathbf{T}^{(n-1)}\Tilde{\gamma}^{(n)}\right\|_{H^2_{\delta'+1}}. \end{align*} Now we compute $\left[\mathbf{T}^{(n-1)},\nabla^2\right] \Tilde{\gamma}^{(n)}$ : \begin{align*}
\left[\mathbf{T}^{(n-1)},\nabla^2\right] \Tilde{\gamma}^{(n)}&=2\frac{\nabla\beta^{(n-1)}}{N^{(n-1)}}\nabla^2 \Tilde{\gamma}^{(n)}-2\frac{\nabla N^{(n-1)}}{N^{(n-1)}}\mathbf{T}^{(n-1)}\nabla\Tilde{\gamma}^{(n)} +\left( \frac{\nabla^2\beta^{(n-1)}}{N^{(n-1)}}+\frac{\nabla N^{(n-1)}\nabla\beta^{(n-1)}}{\left(N^{(n-1)}\right)^2}\right)\nabla \Tilde{\gamma}^{(n)}\\
&-\left(\frac{\nabla^2 N^{(n-1)}}{N^{(n-1)}}+\left(\frac{\nabla N^{(n-1)}}{N^{(n-1)}} \right)^2 \right)\mathbf{T}^{(n-1)}\Tilde{\gamma}^{(n)}. \end{align*}
Using the product estimate and $|\frac{1}{N^{(n-1)}}|\lesssim 1$, we estimate : \begin{align*}
&\left\|\left[ \mathbf{T}^{(n-1)},\nabla^2\right] \Tilde{\gamma}^{(n)} \right\|_{L^2_{\delta'+3}}& \\&\lesssim \left\| \nabla^2\Tilde{\gamma}^{(n)} \right\|_{H^1_{\delta'+2}} \left\| \nabla\beta^{(n-1)} \right\|_{H^1_{\delta'+1}} +\left\|N_a^{(n-1)}\nabla(\chi\ln)\mathbf{T}^{(n-1)}\nabla\Tilde{\gamma}^{(n)} \right\|_{L^2_{\delta'+3}} +\left\| \nabla\Tilde{N}^{(n-1)} \right\|_{H^1_{\delta+1}}\left\|\mathbf{T}^{(n-1)}\nabla\Tilde{\gamma}^{(n)} \right\|_{H^1_{\delta'+2}} \\
& \qquad+\left\| \nabla\Tilde{\gamma}^{(n)} \right\|_{H^2_{\delta'+1}}\left(\left\| \nabla^2\beta^{(n-1)} \right\|_{L^2_{\delta'+2}}+\left\| \nabla N^{(n-1)}\nabla\beta^{(n-1)} \right\|_{L^2_{\delta'+2}} \right)\\
& \qquad+ \left\| \mathbf{T}^{(n-1)}\Tilde{\gamma}^{(n)} \right\|_{H^2_{\delta'+1}}\left( \left\| \nabla^2N^{(n-1)} \right\|_{L^2_{\delta+2}} +\left\| \left(\nabla\Tilde{N}^{(n-1)}\right)^2+N_a^{(n-1)}\nabla(\chi\ln)\nabla\Tilde{N}^{(n-1)} \right\|_{L^2_{\delta+2}} \right)\\
&\qquad+\left\| \left( N_a^{(n-1)}\nabla(\chi\ln)\right)^2\mathbf{T}^{(n-1)}\Tilde{\gamma}^{(n)} \right\|_{L^2_{\delta'+3}}. \end{align*}
Now using \eqref{HR N 1}, \eqref{HR beta 1}, $|\nabla(\chi\ln)|\lesssim \langle x\rangle^{-1}$ we have : \begin{align*}
\left\|\left[ \mathbf{T}^{(n-1)},\nabla^2\right] \Tilde{\gamma}^{(n)} \right\|_{L^2_{\delta'+3}} \lesssim & \;\varepsilon\left( \sum_{|\alpha|\leq 2} \left\Vert\mathbf{T}^{(n-1)}\nabla^{\alpha}\Tilde{\gamma}^{(n)}\right\Vert_{L^2_{\delta'+1+|\alpha|}}+\left\|\nabla\Tilde{\gamma}^{(n)}\right\|_{H^2_{\delta'+1}}\right) \\
& +\varepsilon\left\| \mathbf{T}^{(n-1)}\Tilde{\gamma}^{(n)} \right\|_{H^2_{\delta'+1}}+\varepsilon\left\| \mathbf{T}^{(n-1)}\nabla\Tilde{\gamma}^{(n)} \right\|_{H^1_{\delta'+2}} \end{align*} It remains to deal with the last term of this inequality. Using the same type of argument as above, we can show that : \begin{equation*}
\left\|\mathbf{T}^{(n-1)}\nabla\Tilde{\gamma}^{(n)} \right\|_{H^1_{\delta'+2}}\lesssim \left\|\mathbf{T}^{(n-1)}\nabla\Tilde{\gamma}^{(n)} \right\|_{L^2_{\delta'+2}} + \left\|\mathbf{T}^{(n-1)}\nabla^2\Tilde{\gamma}^{(n)} \right\|_{L^2_{\delta'+3}} +\varepsilon\left\| \nabla^2\Tilde{\gamma}^{(n)} \right\|_{H^1_{\delta'+2}} \end{equation*} Summarising, we get : \begin{align*}
\left\|\mathbf{T}^{(n-1)}\Tilde{\gamma}^{(n)}\right\|_{H^2_{\delta'+1}} \lesssim & (1+\varepsilon)\left(\sum_{|\alpha|\leq 2} \left\Vert\mathbf{T}^{(n-1)}\nabla^{\alpha}\Tilde{\gamma}^{(n)}\right\Vert_{L^2_{\delta'+1+|\alpha|}}+\left\|\nabla\Tilde{\gamma}^{(n)}\right\|_{H^2_{\delta'+1}}\right)\\&\qquad+\varepsilon \left\|\mathbf{T}^{(n-1)}\Tilde{\gamma}^{(n)}\right\|_{H^2_{\delta'+1}} \end{align*} By choosing $\varepsilon$ small enough, we can absorb the last term of the RHS into the LHS; using \eqref{HR gamma 1}, we then obtain the desired result. \end{proof}
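For the reader's convenience, let us also sketch how the constant $9$ in Proposition \ref{commutation estimate} is obtained; this is only a restatement of the last step above, where $C$ denotes the implicit constants, which are independent of $C_i$. The chain of estimates of the proof gives
\begin{equation*}
\left\|\mathbf{T}^{(n-1)}\Tilde{\gamma}^{(n)}\right\|_{H^2_{\delta'+1}}\leq (1+C\varepsilon)\left(\sum_{|\alpha|\leq 2} \left\Vert\mathbf{T}^{(n-1)}\nabla^{\alpha}\Tilde{\gamma}^{(n)}\right\Vert_{L^2_{\delta'+1+|\alpha|}}+\left\|\nabla\Tilde{\gamma}^{(n)}\right\|_{H^2_{\delta'+1}}\right)+C\varepsilon\left\|\mathbf{T}^{(n-1)}\Tilde{\gamma}^{(n)}\right\|_{H^2_{\delta'+1}},
\end{equation*}
so that, absorbing the last term and using \eqref{HR gamma 1},
\begin{equation*}
\left\|\mathbf{T}^{(n-1)}\Tilde{\gamma}^{(n)}\right\|_{H^2_{\delta'+1}}\leq \frac{1+C\varepsilon}{1-C\varepsilon}\, 8C_i\leq 9C_i,
\end{equation*}
provided $\varepsilon$ is small enough that $\frac{1+C\varepsilon}{1-C\varepsilon}\leq\frac{9}{8}$.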
We continue with a propagation of smallness result.
\begin{prop}\label{prop propsmall} For $T$ sufficiently small, the following estimates hold, with $C_p>0$ a constant depending only on $\delta$ and $R$ : \begin{align}
\left\|\partial_t\varphi^{(n)}\right\|_{L^4}+\left\|\nabla\varphi^{(n)}\right\|_{L^4}+\left\|\partial_t\omega^{(n)}\right\|_{L^4}+\left\|\nabla\omega^{(n)}\right\|_{L^4}&\leq C_p\varepsilon\label{propsmall fi},\\
\left\| H^{(n)}\right\|_{H^1_{\delta+1}}&\leq C_p\varepsilon^2\label{propsmall H},\\
\left\| \Tilde{\gamma}^{(n)}\right\|_{H^2_{\delta'}}&\leq C_p \varepsilon^2\label{propsmall gamma},\\
\left\|\mathbf{T}^{(n-1)}\Tilde{\gamma}^{(n)} \right\|_{H^1_{\delta'+1}}&\leq C_p\varepsilon^2\label{propsmall eo gamma},\\
\left\|\tau^{(n)}\right\|_{H^1_{\delta'+1}}&\leq C_p\varepsilon^2.\label{propsmall tau} \end{align} \end{prop}
\begin{proof} By Corollary \ref{coro premiere section}, all these quantities satisfy the desired smallness estimates at $t=0$. The fact that these estimates are true for all $t\in[0,T]$ will then follow from calculus inequalities of the type \begin{equation*}
\sup_{s\in[0,T]}\|u\|_{W^{m,p}_{\eta}}(s)\leq C'\left( \|u\|_{W^{m,p}_{\eta}}(0)+\int_0^T\|\partial_tu\|_{W^{m,p}_{\eta}}(s)\mathrm{d} s\right). \end{equation*} Therefore, it remains to show that the $\partial_t$ derivatives of all these terms (we recall that $\partial_t=e_0^{(n)}+\beta^{(n)}\cdot\nabla=e_0^{(n-1)}+\beta^{(n-1)}\cdot\nabla$), measured in the relevant norms, are bounded by a constant depending on $A_0$, $A_1$, $A_2$, $A_3$, $A_4$ or $C_i$, and then to choose $T$ small enough. We proceed as follows : \begin{itemize}
\item for $\nabla\varphi^{(n)}$ and $\nabla\omega^{(n)}$, we use the embedding $H^1\xhookrightarrow{}L^4$ and \eqref{HR fi 1} :
\begin{equation}
\left\|\partial_t\nabla\varphi^{(n)}\right\|_{L^4}\lesssim \left\|\nabla\partial_t\varphi^{(n)}\right\|_{H^1}\lesssim \left\|\partial_t\varphi^{(n)}\right\|_{H^2}\lesssim A_0C_i,\label{propsmall a}
\end{equation}
and we do the same for $\nabla\omega^{(n)}$, using \eqref{HR omega 1}.
\item for $\partial_t\varphi^{(n)}$ and $\partial_t\omega^{(n)}$, we use the support property of $\varphi^{(n)}$, the embedding $H^1\xhookrightarrow{}L^4$, \eqref{HR fi 1}, \eqref{HR beta 1}, \eqref{HR beta 3}, \eqref{HR N 2} and \eqref{propsmall a} :
\begin{align*}
\left\|\partial_t^2\varphi^{(n)}\right\|_{L^4}&\leq \left\|N^{(n-1)}\partial_t\left(\mathbf{T}^{(n-1)}\varphi^{(n)}\right)\right\|_{L^4}+\left\|\mathbf{T}^{(n-1)}\varphi^{(n)}\partial_tN^{(n-1)}\right\|_{L^4}+
\left\|\partial_t\left( \beta^{(n)}\cdot\nabla\varphi^{(n)}\right)\right\|_{L^4}\\
& \lesssim \left\|\partial_t\left(\mathbf{T}^{(n-1)}\varphi^{(n)}\right)\right\|_{H^1}+\left\|\partial_t\varphi^{(n)}\right\|_{H^2}+ \left\| \partial_t\beta^{(n)} \right\|_{H^2} \left\|\nabla\varphi^{(n)}\right\|_{H^2}+ \left\|\partial_t\nabla\varphi^{(n)}\right\|_{L^4}\\& \lesssim A_0C_i+ \varepsilon A_1C_i,
\end{align*}
and we do the same for $\partial_t\omega^{(n)}$, using \eqref{HR omega 1}.
\item for $H^{(n)}$, we use \eqref{HR H 1}, \eqref{HR H 2}, \eqref{HR beta 1} and the product estimate :
\begin{equation*}
\left\|\partial_t H^{(n)} \right\|_{H^1_{\delta+1}}\leq \left\| e_0^{(n)}H^{(n)} \right\|_{H^1_{\delta+1}}+\left\|\beta^{(n)}\nabla H^{(n)} \right\|_{H^1_{\delta+1}}\lesssim C_i.
\end{equation*}
\item for $\Tilde{\gamma}^{(n)}$, we use \eqref{commutation estimate eq}, \eqref{HR beta 1} and \eqref{HR gamma 1} :
\begin{align*}
\left\| \partial_t\Tilde{\gamma}^{(n)}\right\|_{H^2_{\delta'}}&\leq \left\| N^{(n-1)}\mathbf{T}^{(n-1)}\Tilde{\gamma}^{(n)} \right\|_{H^2_{\delta'}}+\left\|\beta^{(n-1)}\cdot\nabla\Tilde{\gamma}^{(n)} \right\|_{H^2_{\delta'}}\\
& \lesssim\left\| \mathbf{T}^{(n-1)}\Tilde{\gamma}^{(n)} \right\|_{H^2_{\delta'+1}}+ \left\|\nabla\Tilde{\gamma}^{(n)} \right\|_{H^2_{\delta'+1}}\left\|\beta^{(n-1)} \right\|_{H^2_{\delta'+1}}\\
&\lesssim C_i+A_0C_i^2.
\end{align*}
\item for $\mathbf{T}^{(n-1)}\Tilde{\gamma}^{(n)}$ and $\tau^{(n)}$, we simply use \eqref{HR gamma 3} and \eqref{HR tau 3}, which directly give the result. \end{itemize} \end{proof}
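To illustrate how small $T$ has to be taken in the above proof, consider for instance \eqref{propsmall tau}, the other estimates being treated in the same way. Writing $C_0$ for the constant (not named above) such that $\left\|\tau^{(n)}\right\|_{H^1_{\delta'+1}}(0)\leq C_0\varepsilon^2$, as provided by Corollary \ref{coro premiere section}, the calculus inequality of the proof together with \eqref{HR tau 3} gives
\begin{equation*}
\sup_{t\in[0,T]}\left\|\tau^{(n)}\right\|_{H^1_{\delta'+1}}(t)\leq C'\left( C_0\varepsilon^2+T\,A_3C_i\right)\leq C_p\varepsilon^2,
\end{equation*}
provided $C_p\geq 2C'C_0$ and $T\leq \frac{C_0\varepsilon^2}{A_3C_i}$. This is where the smallness of $T$, depending on the $A_i$ and $C_i$, is used.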
\subsubsection{Elliptic estimates}
We begin with the two elliptic equations (the ones for $N$ and $\beta$). These are the most difficult to handle, because we cannot rely on the smallness of a time parameter and therefore have to keep careful track of the constants $\varepsilon$, $C_i$ and $A_i$.
\begin{prop}\label{hr+1 N prop} For $n\geq 2$, $N^{(n+1)}$ admits a decomposition \begin{equation*}
N^{(n+1)}=1+N_a^{(n+1)}\chi\ln+\Tilde{N}^{(n+1)}, \end{equation*} with $N_a^{(n+1)}\geq 0$ and such that \begin{align}
\left| N_a^{(n+1)}\right|+\left\|\Tilde{N}^{(n+1)}\right\|_{H^2_{\delta}}&\lesssim \varepsilon^2\label{HR+1 N 1},\\
\left|\partial_tN_a^{(n+1)}\right|+\left\|\Tilde{N}^{(n+1)}\right\|_{H^3_{\delta}}+\left\|\partial_t\Tilde{N}^{(n+1)}\right\|_{H^2_{\delta}}&\lesssim \varepsilon C(A_3)C_i\label{HR+1 N 2},
\\ \left\| \Tilde{N}^{(n+1)}\right\|_{H^4_{\delta}}&\lesssim \varepsilon^2C(A_2)C_i^2+C(A_0)C_i^2.\label{HR+1 N 3}
\end{align} \end{prop}
\begin{proof} We claim that : \begin{equation*}
\left\| \text{RHS of \eqref{reduced system N}} \right\|_{L^2_{\delta+2}}\leq C\varepsilon^2. \end{equation*}
Except for the term $e^{2\gamma^{(n)}}N^{(n)}\left( \tau^{(n)}\right)^2$, all the terms in \eqref{reduced system N} can be estimated exactly as in Lemma \ref{lem CI N}, the only difference being that we estimate the norms using Proposition \ref{prop propsmall} instead of the assumptions on the reduced data and the estimates in Lemmas \ref{CI sur H} and \ref{lem CI gamma}. It therefore remains to control $e^{2\gamma^{(n)}}N^{(n)}\left( \tau^{(n)}\right)^2$. Using \eqref{useful gamma 1a} and \eqref{HR N 1}, we see that $\left\|e^{2\gamma^{(n)}} N^{(n)}\right\|_{C^0_\varepsilon}\lesssim 1$. We finally use the product estimate and \eqref{propsmall tau} to handle $\left( \tau^{(n)}\right)^2$ : \begin{equation*}
\left\|e^{2\gamma^{(n)}}N^{(n)}\left( \tau^{(n)}\right)^2 \right\|_{L^2_{\delta+2}} \lesssim \left\| e^{2\gamma^{(n)}} N^{(n)}\right\|_{C^0_\varepsilon} \left\| \tau^{(n)} \right\|_{H^1_{\delta'+1}}^2\lesssim \varepsilon^4. \end{equation*} This proves the claim. Applying Corollary \ref{mcowens 2} to $N^{(n+1)}-1$ yields the existence of the decomposition of $N^{(n+1)}$, as well as the estimate \eqref{HR+1 N 1}. \par\leavevmode\par We now turn to the proof of \eqref{HR+1 N 2}. To obtain the $H^3_{\delta}$ bound for $\Tilde{N}^{(n+1)}$, we need to control the RHS of \eqref{reduced system N} in $H^1_{\delta+2}$ : \begin{itemize}
\item for the term $e^{-2\gamma^{(n)}}N^{(n)}|H^{(n)}|^2$, we do exactly the same calculations as in \eqref{idem1}, but in $H^1_{\delta+2}$ instead of $H^2_{\delta+2}$. In contrast to \eqref{idem1}, here we have less freedom to bound the term $|H^{(n)}|^2$ (because we need $C_i$ and not $C_i^2$ bounds); we therefore use \eqref{HR H 1} and \eqref{propsmall H} to write
\begin{equation*}
\left\| e^{-2\gamma^{(n)}}N^{(n)}\left|H^{(n)}\right|^2\right\|_{H^1_{\delta+2}} \lesssim\left\| \left|H^{(n)}\right|^2\right\|_{H^1_{\delta+2+2\varepsilon+\frac{\delta+1}{2}}}\lesssim \left\|H^{(n)} \right\|_{H^1_{\delta+1}}\left\|H^{(n)} \right\|_{H^2_{\delta+1}}\lesssim \varepsilon^2C_i.
\end{equation*}
\item for the term $e^{2\gamma^{(n)}}N^{(n)}( \tau^{(n)})^2$, we note that $\tau^{(n)}$ and $H^{(n)}$ satisfy the exact same estimate (according to \eqref{HR H 1}, \eqref{HR tau 1}, \eqref{propsmall H} and \eqref{propsmall tau}), except for a slight difference of weights ($\delta'$ instead of $\delta$) and constants ($A_1$ compared to 2). Therefore we treat this term exactly as the previous one and omit the details.
\item we now discuss the term $\frac{e^{2\gamma^{(n)}}}{N^{(n)}}\left(e_0^{(n-1)}\varphi^{(n)}\right)^2$. Since the smallness for $e_0^{(n-1)}\varphi^{(n)}$ is at the $L^4$-level (thanks to \eqref{propsmall fi}), any spatial derivative of $e_0^{(n-1)}\varphi^{(n)}$ destroys the $\varepsilon$-smallness, and therefore we have to be precise. Thanks to \eqref{useful gamma 1a}, we can forget the $e^{2\gamma^{(n)}}$ factor, thanks to \eqref{HR N 1} we have $\left| \frac{1}{N^{(n)}}\right|\lesssim 1$ (we also forget about $\nabla(\chi\ln)$) and thanks to \eqref{HR N 2} and the embedding $H^2_{\delta+1}\xhookrightarrow{}L^{\infty}$ we have $\left\|\nabla \Tilde{N}^{(n)}\right\|_{L^{\infty}}\lesssim C_i$ :
\begin{align*}
\left\|\frac{e^{2\gamma^{(n)}}}{N^{(n)}}\left(e_0^{(n-1)}\varphi^{(n)}\right)^2 \right\|_{H^1_{\delta+2}}&\lesssim \left\|\left( e_0^{(n-1)}\varphi^{(n)}\right)^2 \right\|_{L^2}\left( 1+ \left\|\nabla \Tilde{N}^{(n)}\right\|_{L^{\infty}}\right)
+\left\|e_0^{(n-1)}\varphi^{(n)}\nabla\left( e_0^{(n-1)}\varphi^{(n)}\right) \right\|_{L^2}\\
&\lesssim \left\| e_0^{(n-1)}\varphi^{(n)} \right\|_{L^4}^2\left( 1+ \left\|\nabla \Tilde{N}^{(n)}\right\|_{L^{\infty}}\right)+\left\| e_0^{(n-1)}\varphi^{(n)} \right\|_{L^4}\left\| e_0^{(n-1)}\varphi^{(n)} \right\|_{H^2},
\end{align*}
where in the last inequality we used Hölder's inequality and the Sobolev injection $H^1\xhookrightarrow{}L^4$. We now use \eqref{propsmall fi} and \eqref{HR fi 1} to obtain :
\begin{align*}
\left\|\frac{e^{2\gamma^{(n)}}}{N^{(n)}}\left(e_0^{(n-1)}\varphi^{(n)}\right)^2 \right\|_{H^1_{\delta+2}}&\lesssim \varepsilon^2(1+C_i)+\varepsilon A_0C_i\lesssim \varepsilon C(A_0)C_i.
\end{align*}
\item the term $\frac{e^{2\gamma^{(n)}-4\varphi^{(n)}}}{N^{(n)}}\left(e_0^{(n-1)}\omega^{(n)}\right)^2$ is handled in a similar way, using first \eqref{useful ffi 1} to get rid of the $e^{-4\varphi^{(n)}}$ factor, and then using \eqref{HR omega 1} instead of \eqref{HR fi 1}. \end{itemize}
This concludes the proof of the estimate $\left\| \Tilde{N}^{(n+1)}\right\|_{H^3_{\delta}}\lesssim \varepsilon C(A_1)C_i$. \par\leavevmode\par We now turn to the estimate for $\partial_tN^{(n+1)}$, including both $\partial_tN_a^{(n+1)}$ and $\partial_t\Tilde{N}^{(n+1)}$. Since the RHS of \eqref{reduced system N} is differentiable in $t$, it is easy to see that $\partial_tN^{(n+1)}=\partial_tN_a^{(n+1)}\chi\ln+\partial_t\Tilde{N}^{(n+1)}$ is the solution given by Corollary \ref{mcowens 2} to the equation \begin{equation*}
\Delta f=\partial_t(\text{RHS of \eqref{reduced system N}}). \end{equation*} Therefore, to finish the proof of \eqref{HR+1 N 2}, it suffices to bound the integral of $\partial_t(\text{RHS of \eqref{reduced system N}})$ with respect to $\mathrm{d} x$ and to bound $\partial_t(\text{RHS of \eqref{reduced system N}})$ in $L^2_{\delta+2}$.
Since the estimates for $\partial_t\tau^{(n)}$ are worse than those for $\partial_tH^{(n)}$, and those for $\tau^{(n)}$ and $H^{(n)}$ are similar, we will treat the term $\partial_t \left( e^{2\gamma^{(n)}}N^{(n)}( \tau^{(n)})^2\right) $ and leave the easier term $\partial_t \left( e^{-2\gamma^{(n)}}N^{(n)}| H^{(n)}|^2\right) $ to the reader. We use \eqref{useful gamma 1a} for the $e^{2\gamma^{(n)}}$ factor and the fact that $|\chi\ln|\lesssim \langle x\rangle^{\varepsilon}$ : \begin{align*}
\left\|\partial_t \left( e^{2\gamma^{(n)}}N^{(n)}( \tau^{(n)})^2\right) \right\|_{L^2_{\delta+2}} &\lesssim\left\|N^{(n)} \right\|_{C^0_{\varepsilon}} \left(
\left\|\tau^{(n)}\partial_t\tau^{(n)} \right\|_{L^2_{\delta+2+\varepsilon}}+\left\|\partial_t\Tilde{\gamma}^{(n)}( \tau^{(n)})^2 \right\|_{L^2_{\delta+2+\varepsilon}}\right)\\&\qquad +\left|\partial_tN_a^{(n)} \right|\left\| ( \tau^{(n)})^2\right\|_{L^2_{\delta+2+3\varepsilon}} +\left\| \partial_t\Tilde{N}^{(n)}( \tau^{(n)})^2\right\|_{L^2_{\delta+2+2\varepsilon}}
\\& \lesssim\varepsilon^2C(A_3)C_i. \end{align*}
where in the last inequality we use $\left\|N^{(n)} \right\|_{C^0_{\varepsilon}}\lesssim 1$ (which comes from \eqref{HR N 1}), $\left\| \partial_t\Tilde{\gamma}^{(n)}\right\|_{H^2_{\delta'+1}}\lesssim C_i$ (which comes from \eqref{commutation estimate eq}), \eqref{HR N 2} and the product estimate together with \eqref{propsmall tau} (for $(\tau^{(n)})^2$). For $\tau^{(n)}\partial_t\tau^{(n)}$, we use Hölder's inequality ($L^4_{\delta'+2}\times L^4_{\delta'+1}\xhookrightarrow{}L^2_{\delta+2+\varepsilon}$), the embedding $H^1_{\delta'+1}\xhookrightarrow{}L^4_{\delta'+1}$, \eqref{propsmall tau} and \eqref{HR tau 3}.
We now turn to the compactly supported term $\partial_t\left(\frac{e^{2\gamma^{(n)}}}{N^{(n)}}\left(e_0^{(n-1)}\varphi^{(n)}\right)^2\right)$. We use \eqref{useful gamma 1a} for the $e^{2\gamma^{(n)}}$ factor and $\left|\frac{1}{N^{(n)}}\right|+\left\| \chi\ln\right\|_{L^{\infty}(B_{2R})}\lesssim 1$ : \begin{align*}
\left\| \partial_t\left(\frac{e^{2\gamma^{(n)}}}{N^{(n)}}\left(e_0^{(n-1)}\varphi^{(n)}\right)^2\right)\right\|_{L^2} &\lesssim \left\|e_0^{(n-1)}\varphi^{(n)}\partial_t\left(e_0^{(n-1)}\varphi^{(n)}\right) \right\|_{L^2}+\left\|\partial_t\Tilde{\gamma}^{(n)}\left(e_0^{(n-1)}\varphi^{(n)}\right)^2 \right\|_{L^2}\\&\qquad+\left\|\partial_t\Tilde{N}^{(n)}\left(e_0^{(n-1)}\varphi^{(n)}\right)^2 \right\|_{L^2}+\left|\partial_tN_a^{(n)}\right|\left\|\left(e_0^{(n-1)}\varphi^{(n)}\right)^2 \right\|_{L^2}
\\&\lesssim \left\|e_0^{(n-1)}\varphi^{(n)} \right\|_{L^4}\left(C_i\left\|e_0^{(n-1)}\varphi^{(n)} \right\|_{L^4}+ \left\|\partial_t\left(e_0^{(n-1)}\varphi^{(n)}\right) \right\|_{H^1}\right)
\lesssim \varepsilon C_i, \end{align*}
where in the last inequality we use $\left\| \partial_t\Tilde{\gamma}^{(n)}\right\|_{H^2_{\delta'+1}}\lesssim C_i$, \eqref{HR N 2} and the product estimate $L^4\times H^1\xhookrightarrow{}L^2$. The term $\partial_t\left(\frac{e^{2\gamma^{(n)}-4\varphi^{(n)}}}{N^{(n)}}\left(e_0^{(n-1)}\omega^{(n)}\right)^2\right)$ is handled in the same way, using \eqref{useful ffi 1} and \eqref{propsmall fi} to get rid of the $e^{-4\varphi^{(n)}}$ factor. Combining all these estimates concludes the proof of \eqref{HR+1 N 2}. \par\leavevmode\par We now turn to the proof of \eqref{HR+1 N 3}. To obtain the $H^4_{\delta}$ bound for $\Tilde{N}^{(n+1)}$, we need to control the RHS of \eqref{reduced system N} in $H^2_{\delta+2}$. Since we already know that the RHS of \eqref{reduced system N} is in $H^1_{\delta+2}$ with bound $\varepsilon C(A_1)C_i$, it remains to bound the $L^2_{\delta+4}$ norm of the second derivative of the RHS of \eqref{reduced system N} : \begin{itemize}
\item for the term $e^{-2\gamma^{(n)}}N^{(n)}|H^{(n)}|^2$, we first use \eqref{useful gamma 1}, and then the embedding $H^1_{\delta+1}\xhookrightarrow{}L^4_{\delta+1}$, \eqref{HR H 1}, \eqref{propsmall H}, \eqref{HR N 1}, \eqref{HR N 2} and \eqref{HR N 3} :
\begin{align*}
\left\| \nabla^2\left(e^{-2\gamma^{(n)}}N^{(n)}|H^{(n)}|^2\right) \right\|_{L^2_{\delta+4}} &\lesssim \left\|\nabla^2N^{(n)}(H^{(n)})^2 \right\|_{L^2_{\delta+4}} + \left\| \nabla N^{(n)}H^{(n)}\nabla H^{(n)} \right\|_{L^2_{\delta+4}} \\&\qquad+\left\| N^{(n)} H^{(n)} \nabla^2H^{(n)} \right\|_{L^2_{\delta+4}} +\left\| N^{(n)} (\nabla H^{(n)})^2 \right\|_{L^2_{\delta+4}}
\\& \lesssim\varepsilon^2 C(A_2) C_i^2
\end{align*}
where we used $L^\infty$ bounds for $N^{(n)}$, $\nabla N^{(n)}$ and $\nabla^2 N^{(n)}$ (see \eqref{HR N 1}, \eqref{HR N 2} and \eqref{HR N 3} respectively). Note that for $N$ the logarithmic growth is handled by adding a small weight. We also used \eqref{HR H 1} and \eqref{propsmall H} and the product law to handle the $H^{(n)}$ terms.
\item for the term $e^{2\gamma^{(n)}}N^{(n)}( \tau^{(n)})^2$, we note that $\tau^{(n)}$ and $H^{(n)}$ satisfy the exact same estimate (according to \eqref{HR H 1}, \eqref{HR tau 1}), except for a slight difference of weights ($\delta'$ instead of $\delta$) and constants ($A_1$ compared to 2). Therefore we treat this term exactly as the previous one and omit the details.
\item we next discuss the compactly supported term $\frac{e^{2\gamma^{(n)}}}{N^{(n)}}\left(e_0^{(n-1)}\varphi^{(n)}\right)^2$. We first use \eqref{useful gamma 1} :
\begin{align*}
\left\|\nabla^2\left( \frac{e^{2\gamma^{(n)}}}{N^{(n)}}\left(e_0^{(n-1)}\varphi^{(n)}\right)^2\right) \right\|_{L^2} &\lesssim \left\| e_0^{(n-1)}\varphi^{(n)}\nabla^2\left(e_0^{(n-1)}\varphi^{(n)}\right) \right\|_{L^2}+\left\|\left(\nabla\left( e_0^{(n-1)}\varphi^{(n)}\right)\right)^2 \right\|_{L^2}\\&+\left\|\nabla N^{(n)} e_0^{(n-1)}\varphi^{(n)}\nabla\left(e_0^{(n-1)}\varphi^{(n)}\right)\right\|_{L^2}+\left\|\nabla^2N^{(n)}\left( e_0^{(n-1)}\varphi^{(n)}\right)^2 \right\|_{L^2}\\&+\left\|\left( \nabla N^{(n)}\right)^2\left( e_0^{(n-1)}\varphi^{(n)}\right)^2 \right\|_{L^2} \\& \lesssim\varepsilon^2C(A_2)C_i^2+C(A_0)C_i^2,
\end{align*}
where we used \eqref{HR N 2}, \eqref{HR fi 1} and \eqref{propsmall fi}. The idea is to use $L^{\infty}$-bounds for $\nabla N^{(n)}$ and $\nabla^2 N^{(n)}$ and Hölder's inequality to deal with the product of terms depending on $\varphi^{(n)}$.
\item the term $\frac{e^{2\gamma^{(n)}-4\varphi^{(n)}}}{N^{(n)}}\left(e_0^{(n-1)}\omega^{(n)}\right)^2$ is handled in a similar way, but we have to be careful about the case where two derivatives hit $e^{-4\varphi^{(n)}}$. Using \eqref{useful ffi 1}, \eqref{useful gamma 1a} and $1\lesssim N^{(n)}$, this leads to estimating the following term :
\begin{align*}
\left\|\nabla^2\varphi^{(n)}\left(e_0^{(n-1)}\omega^{(n)}\right)^2 \right\|_{L^2}\lesssim \left\| \nabla^2\varphi^{(n)}\right\|_{L^4}\left\| e_0^{(n-1)}\omega^{(n)}\right\|_{L^4}\left\| e_0^{(n-1)}\omega^{(n)}\right\|_{L^\infty}\lesssim \varepsilon C(A_0)C_i^2,
\end{align*}
where we used \eqref{HR fi 1}, \eqref{HR omega 1} and \eqref{propsmall fi}.
\end{itemize} This concludes the proof of \eqref{HR+1 N 3}. \end{proof}
The following lemma will allow us to estimate the $H^1$ norm of solutions of elliptic equations.
\begin{lem} Let $\xi=(\xi^1,\xi^2)$ be a vector field on $\mathbb{R}^2$. For all $\sigma<1$, the following holds : \begin{align}
\left\|\nabla\xi \right\|_{L^2_{\sigma}} \lesssim \left\| L\xi\right\|_{L^2_{1}}\label{killing operator H1}. \end{align} \end{lem}
\begin{proof} We set $A_{ij}\vcentcolon=(L\xi)_{ij}$ and take the divergence to obtain $\Delta\xi^i=\delta^{ij}\partial^kA_{kj}$. Let $w(x)=\langle x\rangle^{2\sigma}$. We multiply this equation by $w\xi^{\ell}$, contract it with $\delta_{i\ell}$ and integrate over $\mathbb{R}^2$ to get (after integrating by parts) : \begin{equation*}
\delta_{i\ell}\int_{\mathbb{R}^2}\nabla(w\xi^{\ell})\cdot\nabla\xi^i\mathrm{d} x=\int_{\mathbb{R}^2}\partial^k(w\xi^{\ell})A_{k\ell}\mathrm{d} x, \end{equation*} which becomes \begin{equation*}
\left\| \nabla\xi\right\|_{L^2_{\sigma}}^2=\frac{1}{2} \int_{\mathbb{R}^2}\Delta w|\xi|^2\mathrm{d} x + \int_{\mathbb{R}^2}w\partial^k\xi^{\ell}A_{k\ell}\mathrm{d} x +\int_{\mathbb{R}^2}\partial^k w\xi^{\ell}A_{k\ell}\mathrm{d} x . \end{equation*} Using the Cauchy-Schwarz inequality and the trick $ab\leq \eta a^2+\frac{1}{\eta} b^2$, we have \begin{equation*}
\int_{\mathbb{R}^2}w\partial^k\xi^{\ell}A_{k\ell}\mathrm{d} x\lesssim \left\|\nabla\xi \right\|_{L^2_{\sigma}} \left\| A \right\|_{L^2_{\sigma}}\lesssim \eta\left\|\nabla\xi \right\|_{L^2_{\sigma}}^2+\frac{1}{\eta}\left\| A \right\|_{L^2_{\sigma}}^2. \end{equation*}
We note that $|\nabla w|\lesssim \langle x \rangle^{2\sigma-1}$ and $|\Delta w|\lesssim \langle x \rangle^{2\sigma-2}$, which imply that \begin{equation*}
\int_{\mathbb{R}^2}\partial^k w\xi^{\ell}A_{k\ell}\mathrm{d} x \lesssim \left\|\xi \right\|_{L^2_{\sigma-1}} \left\| A \right\|_{L^2_{\sigma}}\lesssim \left\|\xi \right\|_{L^2_{\sigma-1}}^2+\left\| A \right\|_{L^2_{\sigma}}^2\quad\text{and}\quad
\frac{1}{2} \int_{\mathbb{R}^2}\Delta w|\xi|^2\mathrm{d} x \lesssim \left\|\xi \right\|_{L^2_{\sigma-1}}^2. \end{equation*} Thus, \begin{equation*}
\left\| \nabla\xi\right\|_{L^2_{\sigma}}^2 \lesssim \left\|\xi \right\|_{L^2_{\sigma-1}}^2+\left( 1+\frac{1}{\eta}\right)\left\| A \right\|_{L^2_{\sigma}}^2+\eta\left\|\nabla\xi \right\|_{L^2_{\sigma}}^2. \end{equation*} We take $\eta$ small enough in order to absorb $\eta\left\|\nabla\xi \right\|_{L^2_{\sigma}}^2$ into the LHS. Taking the square root of the inequality we obtained, we get : \begin{equation*}
\left\| \nabla\xi\right\|_{L^2_{\sigma}} \lesssim \left\|\xi \right\|_{L^2_{\sigma-1}}+\left\| A \right\|_{L^2_{\sigma}}. \end{equation*} It remains to show that $\left\|\xi \right\|_{L^2_{\sigma-1}}\lesssim \left\| A\right\|_{L^2_1}$. For that, we start by using Lemma \ref{prop holder 2} : $\sigma<1$ so there exists $r>2$ such that $\sigma<\frac{2}{r}<1$. According to Lemma \ref{prop holder 2}, we have $\left\|\xi \right\|_{L^2_{\sigma-1}}\lesssim \left\|\xi\right\|_{L^r}$. Recalling that $\Delta\xi^i=\delta^{ij}\partial^kA_{kj}$ we have : \begin{equation*}
\xi^i(x)=\frac{\delta^{ij}}{2\pi}\int_{\mathbb{R}^2}\ln|x-y|\partial^kA_{kj}\mathrm{d} y=-\frac{\delta^{ij}}{2\pi}\int_{\mathbb{R}^2}\frac{y^k-x^k}{|x-y|^2}A_{kj}\mathrm{d} y. \end{equation*} Therefore, we can use the Hardy-Littlewood-Sobolev inequality (Proposition \ref{prop HLS}) to obtain \begin{equation}
\left\|\xi \right\|_{L^r}\lesssim \left\| A*\frac{1}{|\cdot|} \right\|_{L^r} \lesssim \left\| A\right\|_{L^{\frac{2r}{2+r}}} . \end{equation} We again use Lemma \ref{prop holder 2} to get the embedding $L^2_1\xhookrightarrow{}L^{\frac{2r}{2+r}}$ (recall that $r>2$), which concludes the proof of \eqref{killing operator H1}.
\end{proof}
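Two ingredients of the above proof are worth spelling out. First, the identity $\Delta\xi^i=\delta^{ij}\partial^kA_{kj}$ follows from a direct computation, assuming (as we do here, since the definition of $L$ is not restated in this section) that $L$ denotes the conformal Killing operator $(L\xi)_{ij}=\partial_i\xi_j+\partial_j\xi_i-\delta_{ij}\mathrm{div}(\xi)$ on $\mathbb{R}^2$ :
\begin{equation*}
\partial^kA_{ki}=\partial^k\partial_k\xi_i+\partial_i\mathrm{div}(\xi)-\partial_i\mathrm{div}(\xi)=\Delta\xi_i.
\end{equation*}
Second, the exponent $\frac{2r}{2+r}$ in the application of Proposition \ref{prop HLS} is dictated by the scaling relation for the kernel $\frac{1}{|\cdot|}$ in $\mathbb{R}^2$, namely $1+\frac{1}{r}=\frac{1}{p}+\frac{1}{2}$, which gives $p=\frac{2r}{2+r}$.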
\begin{prop}\label{hr+1 beta prop} For $n\geq 2$, the following estimates hold : \begin{align}
\left\|\beta^{(n+1)}\right\|_{H^2_{\delta'}}&\lesssim \varepsilon^2\label{HR+1 beta 1},\\
\left\|\beta^{(n+1)}\right\|_{H^3_{\delta'}}&\lesssim C_i\label{HR+1 beta 2},\\
\left\|\nabla e_0^{(n)}\beta^{(n+1)}\right\|_{L^2_{\delta'+1}}&\lesssim \varepsilon C_i\label{HR+1 beta 2.5},\\
\left\| e_0^{(n)}\beta^{(n+1)}\right\|_{H^2_{\delta'}}&\lesssim A_0C_i\label{HR+1 beta 3},\\
\left\| e_0^{(n)}\beta^{(n+1)}\right\|_{H^3_{\delta'}}&\lesssim A_3C_i^2.\label{HR+1 beta 4}
\end{align} \end{prop}
\begin{proof} In view of Proposition \ref{prop propsmall} and \eqref{HR N 1}, the existence and uniqueness of $\beta^{(n+1)}$ and the estimate \eqref{HR+1 beta 1} can be proven in exactly the same manner as in Lemma \ref{CI sur beta} and we omit the details. \par\leavevmode\par We begin with the proof of \eqref{HR+1 beta 2}. We take the divergence of \eqref{reduced system beta} to get \begin{equation}
\Delta(\beta^{(n+1)})^i=2\delta^{i\ell}\delta^{jk}\partial_k\left( e^{-2\gamma^{(n)}}N^{(n)}(H^{(n)})_{j\ell} \right).\label{laplacien beta} \end{equation}
Note that the RHS has 0 mean (by Lemma \ref{divergence nulle}) and therefore by Theorem \ref{mcowens 1}, in order to prove \eqref{HR+1 beta 2}, it suffices to bound the RHS of \eqref{laplacien beta} in $H^1_{\delta'+2}$ by $CC_i$. Using \eqref{useful gamma 1}, \eqref{HR H 1}, \eqref{HR N 1} and $\varepsilon\left|\chi\ln \right|\lesssim \langle x \rangle^{\frac{\varepsilon}{2}}$ and taking $\varepsilon$ small enough, we get : \begin{align}
\left\|\partial_k\left( e^{-2\gamma^{(n)}}N^{(n)}(H^{(n)})_{j\ell}\right)\right\|_{H^1_{\delta'+2}}&\lesssim \left\|N^{(n)}H^{(n)}\right\|_{H^2_{\delta'+2\varepsilon^2+1}} \label{NnHn}\\
&\lesssim \left\| H^{(n)}\right\|_{H^2_{\delta+1}}\left( 1+ \left\|\Tilde{N}^{(n)} \right\|_{H^2_{\delta}} \right)+
\left\| H^{(n)}\right\|_{H^2_{\delta'+2\varepsilon^2+\frac{\varepsilon}{2}+1}}\nonumber\\& \lesssim C_i.\nonumber \end{align}
We now turn to the proof of \eqref{HR+1 beta 2.5}. We have $e_0^{(n)}\beta^{(n+1)}=\partial_t\beta^{(n+1)}-\beta^{(n)}\cdot\nabla\beta^{(n+1)}$. Using \eqref{HR beta 1}, \eqref{HR+1 beta 1} and the product estimate we have \begin{equation*}
\left\| \beta^{(n)}\cdot\nabla\beta^{(n+1)}\right\|_{H^1_{\delta'}} \lesssim \left\|\beta^{(n)} \right\|_{H^2_{\delta'}}\left\| \beta^{(n+1)}\right\|_{H^2_{\delta'}} \lesssim \varepsilon^2. \end{equation*}
Applying $\partial_t$ to \eqref{reduced system beta}, we see that $\partial_t\beta^{(n+1)}$ satisfies $(L\partial_t\beta^{(n+1)})_{ij}=2\partial_t(e^{-2\gamma^{(n)}}N^{(n)}(H^{(n)})_{ij})$. We apply \eqref{killing operator H1} with $\sigma =\delta'+1$ and use $|\chi\ln|\lesssim \langle x\rangle^{\eta}$ (where $\eta$ is as small as we want) : \begin{align*}
\left\|\nabla\partial_t\beta^{(n+1)} \right\|_{L^2_{\delta'+1}} &\lesssim \left\|\partial_t\left(e^{-2\gamma^{(n)}}N^{(n)}(H^{(n)})_{ij}\right) \right\|_{L^2_{1}}
\\&\lesssim \left\| \partial_t\Tilde{\gamma}^{(n)}H^{(n)}\right\|_{L^2_{1+2\varepsilon^2+\eta}}+\left\| \partial_t\Tilde{\gamma}^{(n)}\Tilde{N}^{(n)}H^{(n)}\right\|_{L^2_{1+2\varepsilon^2}} + \left\|\partial_t N_a^{(n)}H^{(n)} \right\|_{L^2_{1+2\varepsilon^2+\eta}} \\&\qquad+ \left\| \partial_t\Tilde{N}^{(n)}H^{(n)} \right\|_{L^2_{1+2\varepsilon^2}} + \left\| \partial_tH^{(n)} \right\|_{L^2_{1+2\varepsilon^2+\eta}}+ \left\| \Tilde{N}^{(n)}\partial_tH^{(n)} \right\|_{L^2_{1+2\varepsilon^2}}
\\& \lesssim \varepsilon(1+ C_i), \end{align*} where we used \eqref{HR gamma 1}, \eqref{HR N 1}, \eqref{HR N 2}, \eqref{propsmall H} and \eqref{HR H 1.5} (with $\varepsilon$ and $\eta$ small enough, depending on $\lambda$).
We now turn to the proof of \eqref{HR+1 beta 3} and \eqref{HR+1 beta 4}. Applying $e_0^{(n)}$ to \eqref{laplacien beta}, we show that the following equation is satisfied : \begin{equation}
\Delta(e_0^{(n)}\beta^{(n+1)})^i=2\delta^{i\ell}\delta^{jk}e_0^{(n)}\partial_k\left( e^{-2\gamma^{(n)}}N^{(n)}(H^{(n)})_{j\ell} \right)+\left[ \Delta,e_0^{(n)} \right](\beta^{(n+1)})^i=\vcentcolon I+II.\label{laplacien e0 beta} \end{equation} It is easy to check that the RHS of \eqref{laplacien e0 beta} has 0 mean; as a consequence, we can apply Theorem \ref{mcowens 1}, so that in order to prove the estimate \eqref{HR+1 beta 3}, it suffices to bound the RHS of \eqref{laplacien e0 beta} in $L^2_{\delta'+2}$ by $C_i$ : \begin{itemize}
\item For $I$, we first commute $\nabla$ and $e_0^{(n)}$ :
\begin{equation}
| I| \lesssim \left|\nabla e_0^{(n)}\left( e^{-2\gamma^{(n)}}N^{(n)}H^{(n)} \right) \right|+\left| \nabla\beta^{(n)}\right| \left|\nabla\left( e^{-2\gamma^{(n)}}N^{(n)}H^{(n)} \right) \right|.\label{commutation estimate''}
\end{equation}
This implies, using \eqref{useful gamma 1} :
\begin{align}
\| I\|_{L^2_{\delta'+2}} & \lesssim \left\| (e_0^{(n)}\gamma^{(n)})N^{(n)}H^{(n)} \right\|_{H^1_{\delta'+2\varepsilon^2+1}}+\left\|(e_0^{(n)}N^{(n)})H^{(n)} \right\|_{H^1_{\delta'+2\varepsilon^2+1}}\label{I L^2}\\&\qquad+\left\|N^{(n)}(e_0^{(n)}H^{(n)}) \right\|_{H^1_{\delta'+2\varepsilon^2+1}}+\left\|\nabla\beta^{(n)} \right\|_{L^{\infty}} \left\|N^{(n)}H^{(n)} \right\|_{H^1_{\delta'+2\varepsilon^2+1}}.\nonumber
\end{align}
Thanks to \eqref{HR beta 2} and \eqref{propsmall H}, the last term is bounded by $\varepsilon A_0 C_i$. Thanks to \eqref{HR H 2}, the third term is handled as $N^{(n)}H^{(n)}$ in \eqref{NnHn} and is therefore bounded by $A_0C_i$. Thanks to \eqref{HR N 1}, \eqref{HR N 2}, \eqref{HR beta 1}, \eqref{HR H 1} and \eqref{propsmall H}, the second term is bounded by $C_i$. The first term is similar to the second, and actually easier to bound, so we omit the details. We have shown that $\| I\|_{L^2_{\delta'+2}}\lesssim A_0C_i$.
\item For $II$, we use the following commutation estimate :
\begin{equation}
\left| \left[ \Delta,e_0^{(n)} \right](\beta^{(n+1)})^i \right| \lesssim \left|\nabla\beta^{(n)} \right|\left|\nabla^2\beta^{(n+1)} \right|+\left|\nabla^2\beta^{(n)} \right|\left|\nabla\beta^{(n+1)} \right|.\label{commutation estimate'}
\end{equation}
Now, using in addition \eqref{HR beta 1}, \eqref{HR beta 2}, \eqref{HR+1 beta 1} and \eqref{HR+1 beta 2} and the product estimate :
\begin{align*}
\left\| II\right\|_{L^2_{\delta'+2}} &\lesssim \left\|\nabla\beta^{(n)} \right\|_{H^1_{\delta'+1}}\left\|\nabla^2\beta^{(n+1)} \right\|_{H^1_{\delta'+2}}+\left\| \nabla^2\beta^{(n)}\right\|_{H^1_{\delta'+2}}\left\|\nabla\beta^{(n+1)} \right\|_{H^1_{\delta'+1}} \\&\lesssim\varepsilon A_0C_i.
\end{align*} \end{itemize} Similarly, in order to prove \eqref{HR+1 beta 4}, we have to bound the RHS of \eqref{laplacien e0 beta} in $H^1_{\delta'+2}$ by $CC_i^2$ : \begin{itemize}
\item For $I$, we again use \eqref{commutation estimate''}. Instead of using $L^{\infty}$ bounds for $\nabla\beta^{(n)}$, we use the product estimate and then \eqref{useful gamma 1}. For the terms where $e_0^{(n)}$ appears, we simply use \eqref{useful gamma 1} :
\begin{align*}
\| I\|_{H^1_{\delta'+2}} & \lesssim \left\| (e_0^{(n)}\gamma^{(n)})N^{(n)}H^{(n)} \right\|_{H^2_{\delta'+2\varepsilon^2+1}}+\left\|(e_0^{(n)}N^{(n)})H^{(n)} \right\|_{H^2_{\delta'+2\varepsilon^2+1}}\\&\qquad+\left\|N^{(n)}(e_0^{(n)}H^{(n)}) \right\|_{H^2_{\delta'+2\varepsilon^2+1}}+\left\|\nabla\beta^{(n)}\right\|_{H^2_{\delta'+1}}\left\| N^{(n)}H^{(n)} \right\|_{H^2_{\delta'+2\varepsilon^2+1}}.\nonumber
\end{align*}
The last term is handled thanks to \eqref{HR beta 2} and \eqref{NnHn} and is indeed bounded by $A_0C_i^2$. The third term is similar to the last one (because $e_0^{(n)}H^{(n)}$ satisfies \eqref{HR H 3}) and is therefore handled as in \eqref{NnHn}; it is ultimately bounded by $A_3C_i^2$. We handle the first two terms as we did in \eqref{I L^2}, using \eqref{HR H 1} instead of \eqref{propsmall H}; this change explains why we get $C_i^2$ instead of $C_i$.
\item For $II$, we use again the commutation estimate \eqref{commutation estimate'}, \eqref{HR beta 2} and \eqref{HR+1 beta 2} and the product estimate :
\begin{align*}
\left\| \left[ \Delta,e_0^{(n)} \right](\beta^{(n+1)})^i\right\|_{L^2_{\delta'+2}} &\lesssim \left\|\nabla\beta^{(n)} \right\|_{H^2_{\delta'+1}}\left\|\nabla^2\beta^{(n+1)} \right\|_{H^1_{\delta'+2}}+\left\| \nabla^2\beta^{(n)}\right\|_{H^1_{\delta'+2}}\left\|\nabla\beta^{(n+1)} \right\|_{H^2_{\delta'+1}} \\&\lesssim A_0^2C_i^2.
\end{align*} \end{itemize}
\end{proof}
We have finished all the elliptic estimates. In the sequel, we deal with evolution equations, for which we will use the freedom of taking $T$ as small as we want in order to recover our estimates.
\subsubsection{The transport equation and $\tau^{(n+1)}$}
We begin this section by proving the estimates on $H^{(n+1)}$. We first prove a technical lemma about the transport equation :
\begin{lem}\label{transport inegalite} Let $\sigma\in\mathbb{R}$. If $f$ and $h$ satisfy \begin{equation*}
e_0^{(n+1)}f=h, \end{equation*} then, \begin{equation*}
\sup_{t\in[0,T]}\| f\|_{L^2_{\sigma}}(t)\leq 2 \| f\|_{L^2_{\sigma}}(0)+2\sqrt{T}\sup_{t\in[0,T]}\| h\|_{L^2_{\sigma}}(t) . \end{equation*} \end{lem}
\begin{proof}Let $w(x)=\langle x\rangle^{2\sigma}$. We multiply the equation $e_0^{(n+1)}f=h$ by $w f$ and integrate over $\mathbb{R}^2$. Writing $e_0^{(n+1)}=\partial_t-\beta^{(n+1)}\cdot\nabla$, we get : \begin{equation*}
\frac{\mathrm{d}}{\mathrm{d} t}\left(\|f\|_{L^2_{\sigma}}^2\right)=2\int_{\mathbb{R}^2}w fh\,\mathrm{d} x+\int_{\mathbb{R}^2}w \beta^{(n+1)}\cdot\nabla \left(f^2\right)\,\mathrm{d} x. \end{equation*} We integrate the last term by parts in order to get : \begin{align*}
\frac{\mathrm{d}}{\mathrm{d} t}\left(\|f\|_{L^2_{\sigma}}^2\right)&=2\int_{\mathbb{R}^2}w fh\,\mathrm{d} x-\int_{\mathbb{R}^2} f^2\mathrm{div}\left(w\beta^{(n+1)}\right)\mathrm{d} x . \end{align*}
For the last term, we use \eqref{HR+1 beta 2} (and the embedding $H^2_{\delta'+1}\xhookrightarrow{}C^0_1$) and $\left| \nabla w\right|\lesssim\frac{w}{\langle x \rangle}$ to obtain : \begin{align*}
-\int_{\mathbb{R}^2} f^2\mathrm{div}\left(w\beta^{(n+1)}\right)\mathrm{d} x \lesssim C_i \| f\|_{L^2_{\sigma}}^2 \end{align*} For the first term, we simply use the Cauchy-Schwarz inequality and $2ab\leq a^2+b^2$ to obtain : \begin{equation*}
2\int_{\mathbb{R}^2}w f h\,\mathrm{d} x\leq 2\left(\int_{\mathbb{R}^2}w f^2 \mathrm{d} x\right)^{\frac{1}{2}}\left(\int_{\mathbb{R}^2}w h^2 \mathrm{d} x\right)^{\frac{1}{2}}\leq \| f\|_{L^2_{\sigma}}^2+ \| h\|_{L^2_{\sigma}}^2. \end{equation*} Summarising, we get : \begin{equation*}
\frac{\mathrm{d}}{\mathrm{d} t}\left(\|f\|_{L^2_{\sigma}}^2\right)\leq C(C_i) \| f\|_{L^2_{\sigma}}^2+ \| h\|_{L^2_{\sigma}}^2. \end{equation*} We apply Gronwall's Lemma, take $T$ small enough and use $\sqrt{a^2+b^2}\leq a+b$ to get : \begin{equation*}
\sup_{t\in[0,T]}\| f\|_{L^2_{\sigma}}(t)\leq 2 \| f\|_{L^2_{\sigma}}(0)+2\sqrt{T}\sup_{t\in[0,T]}\| h\|_{L^2_{\sigma}}(t) . \end{equation*}
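In more detail, the last step goes as follows (a sketch; $C(C_i)$ denotes the constant of the previous display) : Gronwall's lemma gives
\begin{equation*}
\| f\|_{L^2_{\sigma}}^2(t)\leq e^{C(C_i)T}\left( \| f\|_{L^2_{\sigma}}^2(0)+T\sup_{s\in[0,T]}\| h\|_{L^2_{\sigma}}^2(s)\right),
\end{equation*}
and choosing $T$ so small that $e^{C(C_i)T}\leq 2$, taking square roots and using $\sqrt{a^2+b^2}\leq a+b$ together with $\sqrt{2}\leq 2$ yields the stated inequality.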
\end{proof}
\begin{prop}\label{HR+1 H prop} For $n\geq 2$, the following estimates hold : \begin{align}
\left\Vert H^{(n+1)}\right\Vert_{H^2_{\delta+1}}&\leq 2C_i\label{HR+1 H 1},\\
\left\Vert e_0^{(n+1)}H^{(n+1)}\right\Vert_{L^2_{1+\lambda}}&\lesssim \varepsilon^2,\label{HR+1 H 1.5}\\
\left\Vert e_0^{(n+1)}H^{(n+1)}\right\Vert_{H^1_{\delta+1}}&\lesssim C_i,\label{HR+1 H 2}\\
\left\Vert e_0^{(n+1)}H^{(n+1)}\right\Vert_{H^2_{\delta+1}}&\lesssim A_2C_i^2.\label{HR+1 H 3}
\end{align} \end{prop}
\begin{proof} To prove \eqref{HR+1 H 1.5} we just bound the RHS of \eqref{reduced system H} in $L^2_{\delta+1}$ using the weighted product estimates $H^1\times H^1\xhookrightarrow{} L^2$ and using weighted $L^{\infty}$ estimates for $N^{(n)}$ in the first and last terms. More concretely, we use \eqref{HR N 1}, \eqref{propsmall H}, \eqref{HR beta 1}, \eqref{propsmall gamma}, \eqref{propsmall fi}, \eqref{useful gamma 1} and \eqref{useful ffi 1}, and we recall that $\lambda<\delta+1$ : \begin{align*}
\left\Vert e_0^{(n+1)}H^{(n+1)}\right\Vert_{L^2_{1+\lambda}} &\lesssim \left\| N^{(n)} (H^{(n)})^2 \right\|_{L^2_{1+\lambda}} + \left\|\nabla\beta^{(n)} H^{(n)} \right\|_{L^2_{1+\lambda}}+\left\| \nabla^2 N^{(n)}\right\|_{L^2_{1+\lambda}} \\&\qquad+ \left\| \nabla\gamma^{(n)}\nabla N^{(n)}\right\|_{L^2_{1+\lambda}} + \left\| N^{(n)} (\nabla \varphi^{(n)})^2 \right\|_{L^2}+ \left\| N^{(n)} (\nabla \omega^{(n)})^2 \right\|_{L^2}\\&\lesssim \varepsilon^2. \end{align*}
We continue with the proof of \eqref{HR+1 H 2} and \eqref{HR+1 H 3}, which amounts to bounding the $H^1_{\delta+1}$ and $H^2_{\delta+1}$ norms of the RHS of \eqref{reduced system H}. First notice that the terms $e^{-2\gamma^{(n)}}N^{(n)}(H^{(n)})_i^{\;\,\ell}(H^{(n)})_{j\ell}$, $(\partial_i\varphi^{(n)}\Bar{\otimes}\partial_j\varphi^{(n)})N^{(n)}$ and $(\partial_i\omega^{(n)}\Bar{\otimes}\partial_j\omega^{(n)})N^{(n)}$ are analogous to terms in \eqref{reduced system N} (because $\nabla\varphi^{(n)}$ and $\partial_t\varphi^{(n)}$ satisfy the same estimates, and likewise for $\omega^{(n)}$), and can be treated as in Proposition \ref{hr+1 N prop}. We recall the estimates obtained : \begin{align*}
\left\| e^{-2\gamma^{(n)}}N^{(n)}(H^{(n)})_i^{\;\,\ell}(H^{(n)})_{j\ell} \right\|_{H^1_{\delta+1}} & \lesssim \varepsilon^2 C_i,\\
\left\| e^{-2\gamma^{(n)}}N^{(n)}(H^{(n)})_i^{\;\,\ell}(H^{(n)})_{j\ell}\right\|_{H^2_{\delta+1}} & \lesssim \varepsilon^2 C(A_2) C_i^2,\\
\left\|(\partial_i\varphi^{(n)}\Bar{\otimes}\partial_j\varphi^{(n)})N^{(n)} \right\|_{H^1_{\delta+1}}+\left\| e^{-4\varphi}(\partial_i\omega^{(n)}\Bar{\otimes}\partial_j\omega^{(n)})N^{(n)} \right\|_{H^1_{\delta+1}} & \lesssim \varepsilon C(A_0)C_i,\\
\left\|(\partial_i\varphi^{(n)}\Bar{\otimes}\partial_j\varphi^{(n)})N^{(n)} \right\|_{H^2_{\delta+1}}+\left\| e^{-4\varphi}(\partial_i\omega^{(n)}\Bar{\otimes}\partial_j\omega^{(n)})N^{(n)} \right\|_{H^2_{\delta+1}} & \lesssim \varepsilon^2 C(A_2) C_i^2+C(A_0)C_i^2. \end{align*} The remaining terms are treated as follows : \begin{itemize}
\item We first use \eqref{HR H 1} and \eqref{HR beta 1} and the product estimate :
\begin{align*}
\left\|\partial_{(j}(\beta^{(n)})^k(H^{(n)})_{i)k} \right\|_{H^1_{\delta+1}}& \lesssim \left\|H^{(n)}\right\|_{H^2_{\delta+1}}\left\|\beta^{(n)}\right\|_{H^2_{\delta'+1}}\lesssim \varepsilon C_i.
\end{align*}
Then we use \eqref{propsmall H}, \eqref{HR H 1} and \eqref{HR beta 2} and the product estimate :
\begin{align*}
\left\|\partial_{(j}(\beta^{(n)})^k(H^{(n)})_{i)k} \right\|_{H^2_{\delta+1}}&\lesssim \left\| \nabla\beta^{(n)}H^{(n)} \right\|_{H^1_{\delta+1}}+\left\| \nabla^3\beta^{(n)}H^{(n)}\right\|_{L^2_{\delta+3}}\\&\qquad+\left\| \nabla^2\beta^{(n)}\nabla H^{(n)}\right\|_{L^2_{\delta+3}}+\left\| \nabla\beta^{(n)}\nabla^2H^{(n)}\right\|_{L^2_{\delta+3}}
\\&\lesssim\left\|\nabla\beta^{(n)}\right\|_{H^2_{\delta'+1}}\left\|H^{(n)}\right\|_{H^1_{\delta+1}} + \left\|\nabla^3\beta^{(n)} \right\|_{L^2_{\delta'+3}}\left\| H^{(n)} \right\|_{H^2_{\delta+1}} \\&\qquad+\left\|\nabla^2\beta^{(n)} \right\|_{H^1_{\delta'+2}} \left\|\nabla H^{(n)} \right\|_{H^1_{\delta+2}} +\left\| \nabla\beta^{(n)}\right\|_{H^2_{\delta'+1}} \left\|\nabla^2H^{(n)}\right\|_{L^2_{\delta+3}}\\
&\lesssim \varepsilon^2 A_0 C_i+A_0C_i^2.
\end{align*}
\item We use \eqref{HR N 2} and the fact that $\langle x\rangle^{\alpha}\in L^2$ if and only if $\alpha<-1$ :
\begin{equation*}
\left\| (\partial_i\Bar{\otimes}\partial_j)N^{(n)} \right\|_{H^1_{\delta+1}}\leq \left\|\nabla^2\Tilde{N}^{(n)}\right\|_{H^1_{\delta+1}}+\left| N_a^{(n)} \right|\left\|\nabla^2(\chi\ln)\right\|_{H^1_{\delta+1}}\lesssim C_i.
\end{equation*}
We then use \eqref{HR N 3} for the $H^2$ estimate :
\begin{equation*}
\left\| (\partial_i\Bar{\otimes}\partial_j)N^{(n)} \right\|_{H^2_{\delta+1}} \leq \left\|\nabla^2\Tilde{N}^{(n)}\right\|_{H^2_{\delta+1}}+\left| N_a^{(n)} \right|\left\|\nabla^2(\chi\ln)\right\|_{H^2_{\delta+1}}\lesssim A_2 C_i^2.
\end{equation*}
\item For the following term, we get both $H^1$ and $H^2$ estimates by using \eqref{HR N 1}, \eqref{HR N 2} and \eqref{HR gamma 1} and the product estimate :
\begin{align*}
\left\| (\delta_i^k\Bar{\otimes}\partial_j\gamma^{(n)})\partial_kN^{(n)}\right\|_{H^2_{\delta+1}}&\leq \left\|\nabla\Tilde{\gamma}^{(n)}\nabla\Tilde{N}^{(n)}\right\|_{H^2_{\delta+1}}+|\alpha|\left\|\nabla(\chi\ln)\nabla\Tilde{N}^{(n)}\right\|_{H^2_{\delta+1}}\\&\quad+\left|N_a^{(n)}\right|\left\|\nabla(\chi\ln)\nabla\Tilde{\gamma}^{(n)}\right\|_{H^2_{\delta+1}}+|\alpha|\left|N_a^{(n)}\right|\left\|(\nabla(\chi\ln))^2\right\|_{H^2_{\delta+1}}
\\&\lesssim
\left\|\nabla\Tilde{\gamma}^{(n)}\right\|_{H^2_{\delta+1}}
\left\|\nabla\Tilde{N}^{(n)}\right\|_{H^2_{\delta+1}}
+\varepsilon\left( \left\|\nabla\Tilde{N}^{(n)}\right\|_{H^2_{\delta}}+\left\|\nabla\Tilde{\gamma}^{(n)}\right\|_{H^2_{\delta}} \right)\\&\qquad+
\varepsilon^2\left\|\langle x\rangle^{\delta-1}\right\|_{L^2}
\\&\lesssim\varepsilon C_i^2.
\end{align*} \end{itemize} We now prove \eqref{HR+1 H 1}. We recall the following commutation formula : \begin{align*}
\left|\left[ e_0^{(n+1)},\nabla \right]H^{(n+1)}\right|&\lesssim \left|\nabla\beta^{(n+1)} \right|\left|\nabla H^{(n+1)} \right|,\\
\left|\left[ e_0^{(n+1)},\nabla^2 \right]H^{(n+1)}\right|&\lesssim \left|\nabla\beta^{(n+1)} \right|\left|\nabla^2 H^{(n+1)} \right|+\left|\nabla^2\beta^{(n+1)} \right|\left|\nabla H^{(n+1)} \right|. \end{align*} Hence, using \eqref{HR+1 beta 2} : \begin{equation}
\left\|e_0^{(n+1)}\nabla^{\alpha}H^{(n+1)}_{ij} \right\|_{L^2_{\delta+1+|\alpha|}}\lesssim \left\|e_0^{(n+1)}H^{(n+1)}_{ij} \right\|_{H^2_{\delta+1}}+C_i\left\|H^{(n+1)}\right\|_{H^2_{\delta+1}}\label{transport H} \end{equation}
where $|\alpha|\leq 2$. We apply Lemma \ref{transport inegalite} with $\sigma=\delta+1+|\alpha|$ and $f=\nabla^{\alpha} H^{(n+1)}$ : \begin{align*}
\sup_{t\in[0,T]}\left\| \nabla^{\alpha} H^{(n+1)} \right\|_{L^2_{\delta+1+|\alpha|}}(t) &\leq 2\left\| \nabla^{\alpha} H^{(n+1)} \right\|_{L^2_{\delta+1+|\alpha|}}(0)+2\sqrt{T}\sup_{t\in[0,T]}\left\|e_0^{(n+1)}\nabla^{\alpha}H^{(n+1)}_{ij} \right\|_{L^2_{\delta+1+|\alpha|}}
\\& \lesssim 2\left\| \nabla^{\alpha} H^{(n+1)} \right\|_{L^2_{\delta+1+|\alpha|}}(0) +2\sqrt{T}C_i^2+2C_i\sqrt{T}\left\|H^{(n+1)}\right\|_{H^2_{\delta+1}}, \end{align*} where in the last inequality we use \eqref{transport H} and \eqref{HR+1 H 3}.
We sum over all $|\alpha|\leq 2$ and absorb the term $\left\|H^{(n+1)}\right\|_{H^2_{\delta+1}}$ of the RHS into the LHS (choosing $T$ small enough). Recalling that $\left\| H^{(n+1)} \right\|_{H^2_{\delta+1}}(0)\leq C_i$ ends the proof of \eqref{HR+1 H 1}.
\end{proof}
Next, we prove the estimates for $\tau^{(n+1)}$, gathered in the following proposition :
\begin{prop}\label{hr+1 tau prop} For $n\geq 2$, the following estimates hold : \begin{align}
\left\| \tau^{(n+1)}\right\|_{H^2_{\delta'+1}}&\lesssim A_0C_i\label{HR+1 tau 1},\\
\left\| \partial_t\tau^{(n+1)}\right\|_{L^2_{\delta'+1}}&\lesssim A_1C_i\label{HR+1 tau 2},\\
\left\| \partial_t\tau^{(n+1)}\right\|_{H^1_{\delta'+1}}&\lesssim A_2 C_i\label{HR+1 tau 3}. \end{align} \end{prop}
\begin{proof} In view of $\eqref{reduced system tau}$, the estimates for $\tau^{(n+1)}$ can be obtained by directly controlling \begin{equation*}
-2\mathbf{T}^{(n-1)}\gamma^{(n)}+\frac{\mathrm{div}\left(\beta^{(n)}\right)}{N^{(n-1)}} . \end{equation*}
We bound the two terms separately, using first \eqref{HR beta 2}, \eqref{HR N 1}, \eqref{HR N 2} and $\left|\frac{1}{N^{(n-1)}} \right|\lesssim 1$ : \begin{align*}
\left\| \frac{\mathrm{div}\left(\beta^{(n)}\right)}{N^{(n-1)}} \right\|_{H^2_{\delta'+1}} & \lesssim
\left\| \beta^{(n)}\right\|_{H^3_{\delta'}} + \left\| \nabla\Tilde{N}^{(n-1)}\nabla \beta^{(n)} \right\|_{L^2_{\delta'+2}} \\&\qquad+ \left\|\nabla^2\Tilde{N}^{(n-1)}\nabla \beta^{(n)} \right\|_{L^2_{\delta'+3}}+\left\|\nabla\Tilde{N}^{(n-1)}\nabla^2 \beta^{(n)} \right\|_{L^2_{\delta'+3}}\\
&\lesssim \left\| \beta^{(n)}\right\|_{H^3_{\delta'}}+ \left\| \nabla\Tilde{N}^{(n-1)}\right\|_{H^1_{\delta+1}}\left\|\nabla\beta^{(n)} \right\|_{H^1_{\delta'+1}} \\& \qquad +\left\|\nabla^2\Tilde{N}^{(n-1)}\right\|_{H^1_{\delta+2}} \left\|\nabla\beta^{(n)} \right\|_{H^1_{\delta'+1}}+\left\| \nabla\Tilde{N}^{(n-1)}\right\|_{H^1_{\delta+1}}\left\| \nabla^2\beta^{(n)}\right\|_{H^1_{\delta'+2}}
\\& \lesssim (1+\varepsilon)A_0C_i. \end{align*} Using in addition Proposition \ref{commutation estimate} we get : \begin{align*}
\left\|\mathbf{T}^{(n-1)}\gamma^{(n)} \right\|_{H^2_{\delta'+1}} \leq \left\| \mathbf{T}^{(n-1)}\Tilde{\gamma}^{(n)} \right\|_{H^2_{\delta'+1}} +|\alpha|\left\|\frac{\beta^{(n-1)}\cdot\nabla(\chi\ln)}{N^{(n-1)}} \right\|_{H^2_{\delta'+1}}\lesssim (1+\varepsilon)C_i, \end{align*} which concludes the proof of \eqref{HR+1 tau 1}. \par\leavevmode\par We now turn to the estimates concerning $\partial_t\tau^{(n+1)}$, which has the following expression : \begin{equation*}
\partial_t\tau^{(n+1)}=-2\partial_t\left( \mathbf{T}^{(n-1)}\Tilde{\gamma}^{(n)} \right)+2\alpha\nabla(\chi\ln)\cdot\partial_t\left( \frac{\beta^{(n-1)}}{N^{(n-1)}} \right)+\partial_t\left( \frac{\mathrm{div}(\beta^{(n)})}{N^{(n-1)}} \right) \end{equation*}
By \eqref{HR gamma 2}, $\left\|\partial_t( \mathbf{T}^{(n-1)}\Tilde{\gamma}^{(n)})\right\|_{L^2_{\delta'+1}}\leq A_0 C_i$. Then we note that, thanks to \eqref{HR beta 1} and \eqref{HR beta 3}, we have $\|\partial_t\beta^{(n)}\|_{H^1_{\delta'+1}}\lesssim A_1C_i$ (and the same with $n$ replaced by $n-1$). For the second term, we do the following : \begin{align}
\left\| 2\alpha\nabla(\chi\ln)\cdot\partial_t\left( \frac{\beta^{(n-1)}}{N^{(n-1)}} \right)\right\|_{L^2_{\delta'+1}}&\lesssim\;\varepsilon\left( \left\|\partial_t\beta^{(n-1)}\right\|_{H^1_{\delta'+1}}+\left| \partial_tN_a^{(n-1)} \right|\left\|\beta^{(n-1)}\right\|_{L^2_{\delta'}}+\left\|\partial_t\Tilde{N}^{(n-1)}\right\|_{L^2_{\delta}}\left\|\beta^{(n-1)}\right\|_{L^{\infty}}\right)\nonumber\\&\lesssim \varepsilon A_1C_i,\label{lkj} \end{align} where we used \eqref{HR N 2} and \eqref{HR beta 1}. The third term is very similar : \begin{align*}
\left\| \partial_t\left( \frac{\mathrm{div}(\beta^{(n)})}{N^{(n-1)}} \right)\right\|_{L^2_{\delta'+1}}&\lesssim \left\|\partial_t\beta^{(n)}\right\|_{H^1_{\delta'}}+ \left\|\nabla\beta^{(n)}\right\|_{H^1_{\delta'+1}}\left( \left\|\partial_t\Tilde{N}^{(n-1)}\right\|_{H^1_{\delta}} +\left| \partial_tN_a^{(n-1)} \right|\right)\\& \lesssim A_1C_i. \end{align*} This finishes the proof of \eqref{HR+1 tau 2}.
\par\leavevmode\par We now turn to the proof of \eqref{HR+1 tau 3}. In view of \eqref{HR+1 tau 2}, we just have to bound $\|\nabla \partial_t\tau^{(n+1)}\|_{L^2_{\delta'+2}}$ by $A_2C_i$. We have the following expression : \begin{align*}
\nabla \partial_t\tau^{(n+1)}&=-2\nabla\partial_t\left( \mathbf{T}^{(n-1)}\Tilde{\gamma}^{(n)} \right)+2\alpha\nabla^2(\chi\ln)\cdot \partial_t\left( \frac{\beta^{(n-1)}}{N^{(n-1)}} \right)+2\alpha\nabla(\chi\ln)\partial_t\left( \frac{\nabla\beta^{(n-1)}}{N^{(n-1)}} \right)\\
&\quad -2\alpha\nabla(\chi\ln)\partial_t\left( \frac{\beta^{(n-1)} \nabla N^{(n-1)}}{\left(N^{(n-1)}\right)^2} \right)+ \partial_t\left( \frac{\mathrm{div}(\nabla\beta^{(n)})}{N^{(n-1)}} \right)-\partial_t\left( \frac{\nabla N^{(n-1)}\mathrm{div}(\beta^{(n)})}{\left(N^{(n-1)}\right)^2} \right)
\\&=\vcentcolon I + II + III+IV+V+VI. \end{align*}
The term $I$ is easily handled thanks to \eqref{HR gamma 3} : we have $\left\|I\right\|_{L^2_{\delta'+2}}\leq A_2 C_i$. For the other terms, we make the following remarks : \begin{itemize}
\item the term $VI$ is worse than the term $IV$,
\item the term $V$ is worse than the terms $II$ and $III$. \end{itemize} Thus, it only remains to bound the terms $V$ and $VI$, for which we use \eqref{HR N 2}, \eqref{HR beta 1} and \eqref{HR beta 3} : \begin{align*}
\left\| V\right\|_{L^2_{\delta'+2}} &\lesssim \left\|\partial_t\beta^{(n)} \right\|_{H^2_{\delta'}} +\left\|\nabla^2\beta^{(n)} \right\|_{L^2_{\delta'+2}}\left\|\partial_t\Tilde{N}^{(n-1)} \right\|_{H^2_{\delta}}\lesssim A_1C_i+\varepsilon C_i\lesssim A_1 C_i.\\
\left\| VI \right\|_{L^2_{\delta'+2}} & \lesssim \left\|\nabla\Tilde{N}^{(n-1)} \right\|_{H^1_{\delta+1}}\left\|\partial_t\beta^{(n)} \right\|_{H^1_{\delta'}}+\left\|\nabla\partial_t\Tilde{N}^{(n-1)} \right\|_{H^1_{\delta+1}}\left\|\nabla\beta^{(n-1)} \right\|_{H^1_{\delta'+1}}\\&\qquad +\left\|\partial_t\Tilde{N}^{(n-1)} \right\|_{H^2_{\delta}}\left\|\nabla\Tilde{N}^{(n-1)} \right\|_{H^1_{\delta+1}}\left\|\nabla\beta^{(n-1)}
\right\|_{H^1_{\delta'+1}}
\\&\lesssim \varepsilon C_i. \end{align*} This concludes the proof of \eqref{HR+1 tau 3}. \par\leavevmode\par
\end{proof}
\subsubsection{Energy estimate for $\Box_{g^{(n)}}$}
In this section, we establish the usual energy estimate for the operator $\Box_{g^{(n)}}$.
\begin{lem}\label{inegalite d'energie lemme 1} Let $\sigma\in\mathbb{R}$. If $h$ is a solution of \begin{equation}\label{partie principale de box}
\left(\mathbf{T}^{(n)} \right)^2h-e^{-2\gamma^{(n)}}\Delta h=f, \end{equation} then, if $T$ is sufficiently small, we have for all $t\in[0,T]$ \begin{align}
\left\Vert\mathbf{T}^{(n)}h\right\Vert_{L^2_{\sigma}}(t)&+\left\Vert e^{-\gamma^{(n)}}\nabla h\right\Vert_{L^2_{\sigma}}(t)\nonumber\\& \leq 2\left( \left\Vert\mathbf{T}^{(n)}h\right\Vert_{L^2_{\sigma}}(0)+\left\Vert e^{-\gamma^{(n)}}\nabla h\right\Vert_{L^2_{\sigma}}(0)+\sqrt{2T}\sup_{s\in[0,T]}\left\| fN^{(n)}\right\|_{L^2_{\sigma}}\right).\label{inégalité d'énergie} \end{align} \end{lem} \begin{proof} Let $w(x)=\langle x \rangle^{2\sigma}$. We multiply the equation by $w e_0^{(n)}h$ and we integrate over $\mathbb{R}^2$ with respect to $\mathrm{d} x$. After integration by parts we obtain : \begin{equation}
\int_{\mathbb{R}^2}\frac{w}{2}e_0^{(n)}\left(\mathbf{T}^{(n)}h\right)^2\mathrm{d} x+\int_{\mathbb{R}^2}\nabla h\cdot\nabla\left(e^{-2\gamma^{(n)}}w e_0^{(n)}h\right)\mathrm{d} x=\int_{\mathbb{R}^2}w f e_0^{(n)}h\,\mathrm{d} x.\label{IPP} \end{equation}
We define the energy $E(t)\vcentcolon=\int_{\mathbb{R}^2}w\left( \left(\mathbf{T}^{(n)}h\right)^2+e^{-2\gamma^{(n)}}|\nabla h|^2\right)(t,x)\mathrm{d} x$ and compute its time derivative, writing $\partial_t=e_0^{(n)}+\beta^{(n)}\cdot\nabla$ and integrating by parts the terms coming from $\beta^{(n)}\cdot\nabla$ : \begin{align*}
\frac{\mathrm{d} E}{\mathrm{d} t}(t) = \int_{\mathbb{R}^2} we_0^{(n)}\left(\mathbf{T}^{(n)}h\right)^2\mathrm{d} x + \int_{\mathbb{R}^2} w e_0^{(n)}& \left(e^{-2\gamma^{(n)}} |\nabla h|^2 \right) \mathrm{d} x \\&-\int_{\mathbb{R}^2}\mathrm{div} (w\beta^{(n)})\left( \left(\mathbf{T}^{(n)}h\right)^2+e^{-2\gamma^{(n)}}|\nabla h|^2\right)\mathrm{d} x \end{align*} We now use \eqref{IPP} to express the first integral in $\frac{\mathrm{d} E}{\mathrm{d} t}$ : \begin{align*} \frac{\mathrm{d} E}{\mathrm{d} t}(t)& =2\int_{\mathbb{R}^2}w f e_0^{(n)}h\,\mathrm{d} x - 2\int_{\mathbb{R}^2}\nabla h\cdot\nabla\left(e^{-2\gamma^{(n)}}w e_0^{(n)}h\right)\mathrm{d} x
\\& \quad + \int_{\mathbb{R}^2} w e_0^{(n)} \left(e^{-2\gamma^{(n)}} |\nabla h|^2 \right) \mathrm{d} x -\int_{\mathbb{R}^2}\mathrm{div} (w\beta^{(n)})\left( \left(\mathbf{T}^{(n)}h\right)^2+e^{-2\gamma^{(n)}}|\nabla h|^2\right)\mathrm{d} x \end{align*} We now expand the second integral and commute $\nabla$ and $e_0^{(n)}$ : \begin{align*}
- 2\int_{\mathbb{R}^2}\nabla h\cdot\nabla\left(e^{-2\gamma^{(n)}}w e_0^{(n)}h\right)\mathrm{d} x & =- \int_{\mathbb{R}^2}w e^{-2\gamma^{(n)}} e_0^{(n)}\left( |\nabla h|^2 \right) \mathrm{d} x +2 \int_{\mathbb{R}^2} w e^{-2\gamma^{(n)}} \partial_i h \nabla h \cdot \nabla\beta^{(n)i} \mathrm{d} x \\& \quad -2 \int_{\mathbb{R}^2} e^{-2\gamma^{(n)}}e_0^{(n)}h \nabla h\cdot\nabla w \mathrm{d} x -2 \int_{\mathbb{R}^2} w e_0^{(n)} h\nabla h\cdot\nabla\left( e^{-2\gamma^{(n)}}\right) \mathrm{d} x \end{align*} With this, we see that the $\partial h \partial^2h$ terms in $\frac{\mathrm{d} E}{\mathrm{d} t}$ cancel each other. Thus, we obtain the following energy equality : \begin{align}
\frac{\mathrm{d} E}{\mathrm{d} t}(t)& = -\int_{\mathbb{R}^2}\mathrm{div} (w\beta^{(n)})\left( \left(\mathbf{T}^{(n)}h\right)^2+e^{-2\gamma^{(n)}}|\nabla h|^2\right)\mathrm{d} x \nonumber
+2 \int_{\mathbb{R}^2} w e^{-2\gamma^{(n)}} \partial_i h \nabla h \cdot \nabla\beta^{(n)i} \mathrm{d} x\\& \quad -2 \int_{\mathbb{R}^2} e^{-2\gamma^{(n)}}e_0^{(n)}h \nabla h\cdot\nabla w \mathrm{d} x + 2\int_{\mathbb{R}^2}w f e_0^{(n)}h\,\mathrm{d} x + R_{\gamma^{(n)}}(t) \label{energy equality} \end{align} with \begin{equation*}
R_{\gamma^{(n)}}(t)\vcentcolon=-2\int_{\mathbb{R}^2}e^{-2\gamma^{(n)}}w|\nabla h|^2\partial_t\gamma^{(n)}\mathrm{d} x+ 4\int_{\mathbb{R}^2}e^{-2\gamma^{(n)}}w e_0^{(n)}h\nabla h\cdot\nabla\gamma^{(n)}\mathrm{d} x-2\int_{\mathbb{R}^2} e^{-2\gamma^{(n)}}w|\nabla h|^2\beta^{(n)}\cdot \nabla\gamma^{(n)}\mathrm{d} x. \end{equation*} This remainder contains all the terms involving derivatives of $\gamma^{(n)}$; it would therefore vanish if $e^{-2\gamma^{(n)}}$ did not appear in \eqref{partie principale de box}. We now show that the first three integrals in \eqref{energy equality} can be bounded by $E(t)$ : \begin{itemize}
\item Let's show that $|\mathrm{div}(w\beta^{(n)})|\leq C(C_i)w$. We have $\mathrm{div}(w\beta^{(n)})=w\mathrm{div}(\beta^{(n)})+\nabla w\cdot\beta^{(n)}$. Since $|\nabla w|\lesssim\frac{ w}{\langle x\rangle}$ and $\beta^{(n)}$ is bounded, $\nabla w\cdot\beta^{(n)}$ is indeed bounded by $w$. For $w\mathrm{div}(\beta^{(n)})$, we use the embedding $H^2_{\delta'+1}\xhookrightarrow{}L^{\infty}$ and the estimate \eqref{HR beta 2}. This shows that
\begin{equation*}
-\int_{\mathbb{R}^2}\mathrm{div} (w\beta^{(n)})\left( \left(\mathbf{T}^{(n)}h\right)^2+e^{-2\gamma^{(n)}}|\nabla h|^2\right)\mathrm{d} x\lesssim A_0C_iE(t).
\end{equation*}
\item Let's show that $|e^{-\gamma^{(n)}}N^{(n)}\nabla w|\lesssim w$. We have $|e^{-\gamma^{(n)}}|\lesssim \langle x\rangle^{\varepsilon^2}$ and $|N^{(n)}|\lesssim\langle x\rangle^{\frac{1}{2}}$ so $|e^{-\gamma^{(n)}}N^{(n)}\nabla w|\lesssim w \langle x\rangle^{\varepsilon^2-\frac{1}{2}}\lesssim w$, provided $\varepsilon$ is small. This allows us to do the following (using $2ab\leq a^2+b^2$) :
\begin{equation*}
-2\int_{\mathbb{R}^2}e^{-2\gamma^{(n)}}e_0^{(n)}h\nabla h\cdot\nabla w\,\mathrm{d} x \lesssim \int_{\mathbb{R}^2} w\left|\frac{e_0^{(n)}h}{N^{(n)}}\right|e^{-\gamma^{(n)}}|\nabla h|\,\mathrm{d} x\lesssim E(t).
\end{equation*}
\item We already used the fact that $\nabla\beta^{(n)}$ is bounded by $A_0C_i$, so we simply estimate
\begin{equation*}
2\int_{\mathbb{R}^2}e^{-2\gamma^{(n)}}w\partial_ih\nabla h\cdot\nabla\beta^{(n)i}\,\mathrm{d} x\lesssim A_0C_i\int_{\mathbb{R}^2}e^{-2\gamma^{(n)}}w|\nabla h|^2\,\mathrm{d} x \lesssim A_0C_iE(t).
\end{equation*} \end{itemize} We now show that $R_{\gamma^{(n)}}(t)$ can also be bounded by $E(t)$ : \begin{itemize}
\item Let's show that $\partial_t\gamma^{(n)}$ is bounded. Since $\alpha$ does not depend on time, we have $\partial_t\gamma^{(n)}=N^{(n-1)}\mathbf{T}^{(n-1)}\Tilde{\gamma}^{(n)}+\beta^{(n)}\cdot\nabla\Tilde{\gamma}^{(n)}$. For the first term we use Proposition \ref{commutation estimate} and the embedding $H^2_{\delta'+1}\xhookrightarrow{}C^0_1$ (together with the fact that $|N^{(n-1)}|\lesssim\langle x\rangle$). For the second term we simply use the embedding $H^2_{\delta'}\xhookrightarrow{}L^{\infty}$. We thus get
\begin{equation*}
2\int_{\mathbb{R}^2}e^{-2\gamma^{(n)}} w|\nabla h|^2\partial_t\gamma^{(n)}\mathrm{d} x \leq C(C_i) E(t).
\end{equation*}
\item Let's show that $e^{-\gamma^{(n)}}N^{(n)}\nabla\gamma^{(n)}$ is bounded. We only deal with the $\chi\ln$ part of $N^{(n)}$ (since $\Tilde{N}^{(n)}$ is bounded), and only with the $\nabla\Tilde\gamma^{(n)}$ part in $\nabla\gamma^{(n)}$ (because $\nabla(\chi\ln)$ decays faster than $e^{-\gamma^{(n)}}$ grows). Using $|\chi\ln|\lesssim \langle x\rangle^{\varepsilon^2}$ and $e^{-\gamma^{(n)}}\lesssim \langle x\rangle^{\varepsilon^2}$, we write
\begin{equation*}
\left|e^{-\gamma^{(n)}}\chi\ln\nabla\Tilde{\gamma}^{(n)}\right|\lesssim \|\nabla\Tilde{\gamma}^{(n)}\|_{C^0_{2\varepsilon^2}}.
\end{equation*}
If $\varepsilon$ is small enough we have the embedding $H^2_{\delta'+1}\xhookrightarrow{}C^0_{2\varepsilon^2}$ which together with \eqref{HR gamma 1} allows us to say
\begin{equation*}
4\int_{\mathbb{R}^2}e^{-2\gamma^{(n)}}w e_0^{(n)}h\nabla h\cdot\nabla\gamma^{(n)}\mathrm{d} x \lesssim C(C_i) \int_{\mathbb{R}^2}w\left|\frac{e_0^{(n)}h}{N^{(n)}}\right|e^{-\gamma^{(n)}}|\nabla h|\,\mathrm{d} x \leq C(C_i) E(t).
\end{equation*}
\item We already used multiple times that $\beta^{(n)}$ and $\nabla\gamma^{(n)}$ are bounded (by $\varepsilon$ and $C(C_i)$ respectively), and thus
\begin{equation*}
2\int_{\mathbb{R}^2} e^{-2\gamma^{(n)}}w|\nabla h|^2\beta^{(n)}\cdot \nabla\gamma^{(n)}\mathrm{d} x\leq C(C_i) E(t).
\end{equation*} \end{itemize} For the last integral in \eqref{energy equality} we apply Cauchy-Schwarz inequality : \begin{equation*}
2\int_{\mathbb{R}^2}w f e_0^{(n)}h\,\mathrm{d} x\leq 2\left(\int_{\mathbb{R}^2}w f^2 N^{(n)2}\mathrm{d} x\right)^{\frac{1}{2}}\left(\int_{\mathbb{R}^2}w \left(\mathbf{T}^{(n)}h\right)^2 \mathrm{d} x\right)^{\frac{1}{2}}\leq E(t)+\int_{\mathbb{R}^2}w f^2 N^{(n)2}\mathrm{d} x . \end{equation*} Summarising all the estimates, we get : \begin{equation*}
\frac{\mathrm{d} E}{\mathrm{d} t}(t)\leq C(C_i)E(t)+\int_{\mathbb{R}^2}w f^2 N^{(n)2}(t,x)\,\mathrm{d} x . \end{equation*} We apply Gronwall's inequality with $T$ sufficiently small to obtain \begin{equation*}
E(t)\leq 2\left( E(0)+T\sup_{s\in[0,T]}\int_{\mathbb{R}^2}w f^2 N^{(n)2}(s,x)\,\mathrm{d} x\right). \end{equation*} We recognize in $E(t)$ a weighted Sobolev norm, and using inequalities such as $\frac{1}{\sqrt{2}}(a+b)\leq \sqrt{a^2+b^2}\leq a+b$, we obtain the inequality of the lemma.
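For concreteness, the Gronwall step leading to the previous display can be spelled out as follows (writing $F(t)\vcentcolon=\int_{\mathbb{R}^2}w f^2 N^{(n)2}(t,x)\,\mathrm{d} x$ for the source term) :
\begin{equation*}
E(t)\leq e^{C(C_i)t}\left( E(0)+\int_0^t F(s)\,\mathrm{d} s\right)\leq e^{C(C_i)T}\left( E(0)+T\sup_{s\in[0,T]}F(s)\right),
\end{equation*}
and it suffices to take $T$ small enough that $e^{C(C_i)T}\leq 2$.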
\end{proof}
\begin{lem}\label{inegalite d'energie lemme} If $h$ is a solution of \eqref{partie principale de box} and $T$ is sufficiently small, then for all $t\in[0,T]$ we have \begin{align}
&\sum_{|\alpha|\leq 2} \left( \left\Vert\mathbf{T}^{(n)}\nabla^{\alpha}h\right\Vert_{L^2_{\delta'+1+|\alpha|}}(t)+\left\Vert e^{-\gamma^{(n)}}\nabla (\nabla^{\alpha}h)\right\Vert_{L^2_{\delta'+1+|\alpha|}}(t)\right)\nonumber \\
& \leq 3\sum_{|\alpha|\leq 2} \left( \left\Vert\mathbf{T}^{(n)}\nabla^{\alpha}h\right\Vert_{L^2_{\delta'+1+|\alpha|}}(0)+\left\Vert e^{-\gamma^{(n)}}\nabla (\nabla^{\alpha}h)\right\Vert_{L^2_{\delta'+1+|\alpha|}}(0)\right)+C(C_i)\sqrt{T}\sup_{s\in[0,T]}\left\| fN^{(n)}\right\|_{H^2_{\delta'+1}}(s). \label{inegalite d'energie equation} \end{align} \end{lem}
\begin{proof} For the sake of clarity, we set \begin{equation*}
\mathcal{E}^{(n)}[h](t)=\sum_{|\alpha|\leq 2} \left( \left\Vert\mathbf{T}^{(n)}\nabla^{\alpha}h\right\Vert_{L^2_{\delta'+1+|\alpha|}}(t)+\left\Vert e^{-\gamma^{(n)}}\nabla (\nabla^{\alpha}h)\right\Vert_{L^2_{\delta'+1+|\alpha|}}(t)\right) \end{equation*} If $h$ satisfies \eqref{partie principale de box}, then, applying $\nabla^{\alpha}$ to the equation, we show that $\nabla^{\alpha}h$ satisfies \begin{equation}
\left( \mathbf{T}^{(n)} \right)^2\nabla^{\alpha}h-e^{-2\gamma^{(n)}}\Delta (\nabla^{\alpha}h)=\nabla^{\alpha}f+\left[\nabla^{\alpha},e^{-2\gamma^{(n)}}\right]\Delta h+\left[\left( \mathbf{T}^{(n)} \right)^2,\nabla^{\alpha}\right]h. \label{eq derivee} \end{equation}
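For the reader's convenience, this is simply the commutator bookkeeping obtained by applying $\nabla^{\alpha}$ to \eqref{partie principale de box} and using that $\nabla^{\alpha}$ commutes with $\Delta$ :
\begin{align*}
\nabla^{\alpha}f&=\nabla^{\alpha}\left(\left( \mathbf{T}^{(n)} \right)^2h\right)-\nabla^{\alpha}\left(e^{-2\gamma^{(n)}}\Delta h\right)\\
&=\left( \mathbf{T}^{(n)} \right)^2\nabla^{\alpha}h-\left[\left( \mathbf{T}^{(n)} \right)^2,\nabla^{\alpha}\right]h-e^{-2\gamma^{(n)}}\Delta(\nabla^{\alpha}h)-\left[\nabla^{\alpha},e^{-2\gamma^{(n)}}\right]\Delta h,
\end{align*}
and rearranging gives \eqref{eq derivee}.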
Thanks to the previous lemma, in order to prove \eqref{inegalite d'energie equation}, we have to bound $\sum_{|\alpha|\leq 2}\Vert (\text{RHS of \eqref{eq derivee}})\times N^{(n)}\Vert_{L^2_{\delta'+1+|\alpha|}}$. \begin{itemize}
\item The first step is to bound $\Vert N^{(n)}\nabla^{\alpha}f \Vert_{L^2_{\delta'+1+|\alpha|}}$ (using the fact that $\frac{1}{N^{(n)}}\in L^{\infty}$, $|\nabla^{\alpha}(\chi\ln)|\leq \langle x\rangle^{-|\alpha|}$, the product estimates and \eqref{HR N 2}). If $|\alpha|=1$ :
\begin{align*}
\Vert N^{(n)}\nabla^{\alpha}f \Vert_{L^2_{\delta'+2}} &\lesssim \Vert\nabla^{\alpha}(fN^{(n)})\Vert_{L^2_{\delta'+2}}+ \Vert f\nabla^{\alpha} N^{(n)}\Vert_{L^2_{\delta'+2}}\\
&\lesssim \Vert fN^{(n)}\Vert_{H^2_{\delta'+1}}+\Vert N^{(n)} f\nabla^{\alpha}(\chi\ln)\Vert_{L^2_{\delta'+2}}+\Vert N^{(n)} f\nabla^{\alpha}\Tilde{N}^{(n)}\Vert_{L^2_{\delta'+2}}\\
&\lesssim \Vert fN^{(n)}\Vert_{H^2_{\delta'+1}} \left( 2+ \Vert \nabla^{\alpha}\Tilde{N}^{(n)}\Vert_{H^{2}_{\delta+1}}\right)\\
&\leq C(C_i)\Vert fN^{(n)}\Vert_{H^2_{\delta'+1}}.
\end{align*}
If $|\alpha|=2$, then there exist $\alpha_1,\alpha_2$ with $|\alpha_1|=|\alpha_2|=1$ such that :
\begin{equation*}
\Vert N^{(n)}\nabla^{\alpha}f\Vert_{L^2_{\delta'+3}}\lesssim \Vert \nabla^{\alpha}(fN^{(n)}) \Vert_{L^2_{\delta'+3}}+\Vert f\nabla^{\alpha}N^{(n)} \Vert_{L^2_{\delta'+3}}+\Vert \nabla^{\alpha_1}N^{(n)}\nabla^{\alpha_2}f \Vert_{L^2_{\delta'+3}}.
\end{equation*}
The first two terms can be handled as in the case $|\alpha|=1$. For the last term we proceed as follows :
\begin{align*}
\Vert \nabla^{\alpha_1}N^{(n)}\nabla^{\alpha_2}f \Vert_{L^2_{\delta'+3}} & \lesssim \Vert N^{(n)} \nabla^{\alpha_1}N^{(n)}\nabla^{\alpha_2}f \Vert_{L^2_{\delta'+3}}\\
&\lesssim \Vert N^{(n)}\nabla^{\alpha_2}f \Vert_{L^2_{\delta'+2}}\Vert \nabla^{\alpha_1}N^{(n)} \Vert_{H^2_{\delta+1}}\\
&\leq C(C_i)\Vert fN^{(n)}\Vert_{H^2_{\delta'+1}},
\end{align*}
where in the last inequality we use the calculation of the $|\alpha|=1$ case. Summarising, we get :
\begin{equation}
\sum_{|\alpha|\leq 2}\|N^{(n)}\nabla^{\alpha}f \|_{L^2_{\delta'+1+|\alpha|}}\leq C(C_i)\Vert fN^{(n)}\Vert_{H^2_{\delta'+1}}\label{first step}.
\end{equation}
\item The second step is to bound $\left\| N^{(n)}\left[\nabla^{\alpha},e^{-2\gamma^{(n)}}\right]\Delta h \right\|_{L^2_{\delta'+1+|\alpha|}}$. If $|\alpha|=1$ we have $\left[\nabla^{\alpha},e^{-2\gamma^{(n)}}\right]\Delta h=-2e^{-2\gamma^{(n)}}\nabla^{\alpha}\gamma^{(n)}\Delta h$.
Using the fact that $\left|\Tilde{N}^{(n)}\right|\leq\varepsilon$, \eqref{useful gamma 1}, $|\chi\ln|\lesssim\langle x\rangle^{\varepsilon^2}$ and the expression of $\gamma^{(n)}$ we have
\begin{align*}
\left\| e^{-2\gamma^{(n)}}N^{(n)}\nabla^{\alpha}\gamma^{(n)}\Delta h\right\|_{L^2_{\delta'+2}} & \lesssim \left\| \nabla^{\alpha}\gamma^{(n)}\Delta h\right\|_{L^2_{\delta'+2+3\varepsilon^2}} \\ & \lesssim\left\| \nabla^{\alpha}(\chi\ln)\Delta h\right\|_{L^2_{\delta'+2+3\varepsilon^2}}+\left\| \nabla^{\alpha}\Tilde{\gamma}^{(n)}\Delta h\right\|_{L^2_{\delta'+2+3\varepsilon^2}}.
\end{align*}
Since $|\alpha|=1$, we have $|\nabla^{\alpha}(\chi\ln)|\lesssim \langle x \rangle^{-1}$ and $\nabla^{\alpha}\Tilde{\gamma}^{(n)}\in H^2_{\delta'+1}$ which embeds in $C^0_1$. This implies :
\begin{equation*}
\left\| e^{-2\gamma^{(n)}}N^{(n)}\nabla^{\alpha}\gamma^{(n)}\Delta h\right\|_{L^2_{\delta'+2}} \leq C(C_i) \|\Delta h\|_{L^2_{\delta'+1+3\varepsilon^2}}\leq C(C_i)\sum_{|\alpha'|=1}\left\Vert e^{-\gamma^{(n)}}\nabla (\nabla^{\alpha'}h)\right\Vert_{L^2_{\delta'+2}},
\end{equation*}
where in the last inequality, we used that $1\lesssim |e^{-\gamma^{(n)}}|$ and took $\varepsilon$ small enough. If $|\alpha|=2$, then there exist $\alpha_1,\alpha_2$ with $|\alpha_1|=|\alpha_2|=1$ such that :
\begin{align*}
\left\| N^{(n)}\left[\nabla^{\alpha},e^{-2\gamma^{(n)}}\right]\Delta h \right\|_{L^2_{\delta'+3}} &\lesssim \left\|e^{-2\gamma^{(n)}}N^{(n)}\nabla^{\alpha}\gamma^{(n)} \Delta h \right\|_{L^2_{\delta'+3}}+\left\|e^{-2\gamma^{(n)}}N^{(n)}\nabla^{\alpha_1}\gamma^{(n)} \nabla^{\alpha_2}\Delta h \right\|_{L^2_{\delta'+3}}\\
&\quad + \left\| e^{-2\gamma^{(n)}} N^{(n)} \nabla^{\alpha_1}\gamma^{(n)}\nabla^{\alpha_2}\gamma^{(n)}\Delta h \right\|_{L^2_{\delta'+3}}
\\& \lesssim \left\|\nabla^{\alpha}\gamma^{(n)} \Delta h\right\|_{L^2_{\delta'+3+3\varepsilon^2}}+\left\|\nabla^{\alpha_1}\gamma^{(n)} \nabla^{\alpha_2}\Delta h \right\|_{L^2_{\delta'+3+3\varepsilon^2}}
\\&\quad + \left\| \nabla^{\alpha_1}\gamma^{(n)}\nabla^{\alpha_2}\gamma^{(n)}\Delta h \right\|_{L^2_{\delta'+3+3\varepsilon^2}}
\end{align*}
For the first term, we use the fact that $|\nabla^{\alpha}(\chi\ln)|\lesssim \langle x \rangle^{-2}$, $\nabla^{\alpha}\Tilde{\gamma}^{(n)}\in H^1_{\delta'+2}$ (since $|\alpha|=2$) and the product estimate to get :
\begin{equation*}
\left\|\nabla^{\alpha}\gamma^{(n)} \Delta h\right\|_{L^2_{\delta'+3+3\varepsilon^2}}\leq C(C_i) \|\Delta h\|_{H^1_{\delta'+2}}\leq C(C_i)\sum_{|\alpha'|=1,2}\left\|e^{-\gamma^{(n)}}\nabla(\nabla^{\alpha'}h)\right\|_{L^2_{\delta'+1+|\alpha'|}}.
\end{equation*}
For the second term, we again use that $\nabla^{\alpha_1}\Tilde{\gamma}^{(n)},\nabla^{\alpha_1}(\chi\ln)\in C^0_1$ (since $|\alpha_1|=1$) to get
\begin{equation*}
\left\|\nabla^{\alpha_1}\gamma^{(n)} \nabla^{\alpha_2}\Delta h \right\|_{L^2_{\delta'+3+3\varepsilon^2}}\leq C(C_i) \|\nabla^{\alpha_2}\Delta h\|_{L^2_{\delta'+2+3\varepsilon^2}}\leq C(C_i)\sum_{|\alpha'|=2}\left\|e^{-\gamma^{(n)}}\nabla(\nabla^{\alpha'}h)\right\|_{L^2_{\delta'+3}}.
\end{equation*}
The third term is easier to handle than the first one. Summarising, we get :
\begin{equation}
\sum_{|\alpha|\leq 2}\left\| N^{(n)}\left[\nabla^{\alpha},e^{-2\gamma^{(n)}}\right]\Delta h \right\|_{L^2_{\delta'+1+|\alpha|}}\leq C(C_i) \mathcal{E}^{(n)}[h].\label{second step}
\end{equation}
\item The third step is to bound $\left\| N^{(n)}\left[\left( \mathbf{T}^{(n)}\right)^2,\nabla^{\alpha}\right]h \right\|_{L^2_{\delta'+1+|\alpha|}}$.
Given the expression of $\mathcal{E}^{(n)}[h]$, we are allowed to bound this term by norms involving $\mathbf{T}^{(n)}\nabla^{\mu}$, $\nabla^{\nu}$ (for $|\mu|\leq 2$ and $|\nu|\leq 3$) and $\left( \mathbf{T}^{(n)}\right)^2$. The strategy is then to express $N^{(n)}\left[\left( \mathbf{T}^{(n)}\right)^2,\nabla \right]h$ and $N^{(n)}\left[\left( \mathbf{T}^{(n)}\right)^2,\nabla^2 \right]h$ in terms of those operators acting on $h$, using the commutation formula
\begin{equation*}
\left[\mathbf{T}^{(n)},\nabla\right] h =\frac{\nabla\beta^{(n)}}{N^{(n)}}\nabla h-\frac{\nabla N^{(n)}}{N^{(n)}}\mathbf{T}^{(n)}h.
\end{equation*}
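Concretely, the second-order commutators appearing here can be reduced to this first-order one through the elementary operator identity
\begin{equation*}
\left[\left( \mathbf{T}^{(n)}\right)^2,\nabla\right]h=\mathbf{T}^{(n)}\left(\left[ \mathbf{T}^{(n)},\nabla\right]h\right)+\left[ \mathbf{T}^{(n)},\nabla\right]\left(\mathbf{T}^{(n)}h\right),
\end{equation*}
into which the commutation formula above is substituted; two spatial derivatives are handled similarly, writing schematically $\left[\left( \mathbf{T}^{(n)}\right)^2,\nabla^2\right]=\left[\left( \mathbf{T}^{(n)}\right)^2,\nabla\right]\nabla+\nabla\left[\left( \mathbf{T}^{(n)}\right)^2,\nabla\right]$.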
Doing so, we find the following formula (we don't write the irrelevant numerical constants) :
\begin{align}
N^{(n)}\left[\left( \mathbf{T}^{(n)}\right)^2,\nabla \right]h & =\left(e_0^{(n)}\left(\frac{\nabla\beta^{(n)}}{N^{(n)}}\right) +\frac{\left(\nabla\beta^{(n)}\right)^2}{N^{(n)}} \right)\nabla h +\left(\frac{\nabla\beta^{(n)}\nabla N^{(n)}}{N^{(n)}} +e_0^{(n)}\left( \frac{\nabla N^{(n)}}{N^{(n)}} \right) \right)\mathbf{T}^{(n)}h\label{commutateur 1}\\& \qquad\qquad+2\nabla\beta^{(n)}\mathbf{T}^{(n)}\nabla h -2\nabla N^{(n)}\left( \mathbf{T}^{(n)}\right)^2h.\nonumber
\end{align}
We recall that $\left( \mathbf{T}^{(n)}\right)^2h=e^{-2\gamma^{(n)}}\Delta h+f$, so that the $\left( \mathbf{T}^{(n)}\right)^2$ term in \eqref{commutateur 1} has already been estimated during the first two steps. The coefficients in front of $\nabla h$ and $\mathbf{T}^{(n)} h$ are all in $\mathcal{C}^1_0$ except the two involving $e_0\nabla N^{(n)}$, for which we use the product law $H^1\times H^1$ and \eqref{HR N 2} :
\begin{align*}
\left\| e_0^{(n)}\left( \frac{\nabla N^{(n)}}{N^{(n)}} \right)\mathbf{T}^{(n)}h \right\|_{L^2_{\delta'+2}} & \lesssim \left\|\nabla \partial_t N^{(n)} \right\|_{H^1_{\delta+1}} \left\| \mathbf{T}^{(n)} h \right\|_{H^1_{\delta'+1}} \lesssim C(C_i)\mathcal{E}^{(n)}[h].
\end{align*}
We only need to bound the coefficient in front of $\mathbf{T}^{(n)}\nabla$ in $L^{\infty}$, which is easily done thanks to \eqref{HR N 3}. This allows us to handle the case $|\alpha|=1$ : \begin{align}
&\left\| N^{(n)}\left[\left( \mathbf{T}^{(n)}\right)^2,\nabla^{\alpha}\right]h \right\|_{L^2_{\delta'+2}}\nonumber\\&\leq C(C_i)\left( \sum_{|\alpha'|\leq 1} \left( \left\Vert\frac{e_0^{(n)}\nabla^{\alpha'}h}{N^{(n)}}\right\Vert_{L^2_{\delta'+1+|\alpha'|}}+\left\| e^{-\gamma^{(n)}}\nabla (\nabla^{\alpha'}h)\right\|_{L^2_{\delta'+1+|\alpha'|}} \right)+\Vert fN^{(n)}\Vert_{H^2_{\delta'+1}} \right).\label{alpha 1}
\end{align}
Before turning to the case $|\alpha|=2$, let's remark that, in view of \eqref{eq derivee}, so far we have proved that
\begin{equation}
\left\| \left( \mathbf{T}^{(n)}\right)^2\nabla h\right\|_{L^2_{\delta'+2}}\leq C(C_i) \left( \mathcal{E}^{(n)}[h]+\Vert fN^{(n)}\Vert_{H^2_{\delta'+1}}\right).\label{L carré}
\end{equation}
This means that, even if $\left( \mathbf{T}^{(n)}\right)^2\nabla h$ doesn't appear in the expression of $\mathcal{E}^{(n)}[h]$, we are allowed to use it in the remainder of the third step.
We now turn to the case $|\alpha|=2$ and push our calculations further to get (we still don't write the irrelevant numerical constants) :
\begin{align}
& N^{(n)}\left[\left( \mathbf{T}^{(n)}\right)^2,\nabla^2 \right]h =\nabla N^{(n)}\left( \mathbf{T}^{(n)}\right)^2\nabla h +\nabla N^{(n)}\left[\left( \mathbf{T}^{(n)}\right)^2,\nabla \right]h \nonumber\\
& +\left( N^{(n)}\nabla\mathbf{T}^{(n)}\left( \frac{\nabla\beta^{(n)}}{N^{(n)}}\right)+N^{(n)}\nabla\left(\left(\frac{\nabla\beta^{(n)}}{N^{(n)}}\right)^2\right) +\frac{\nabla N^{(n)}\left(\nabla\beta^{(n)}\right)^2}{N^{(n)} }+ \nabla\beta^{(n)}\mathbf{T}^{(n)}\left(\frac{\nabla N^{(n)}}{N^{(n)}}\right)\right)\nabla h \nonumber\\
&+\left( e_0^{(n)}\left(\frac{\nabla\beta^{(n)}}{N^{(n)}} \right)+\frac{\left(\nabla\beta^{(n)}\right)^2}{N^{(n)}}\right)\nabla^2 h +N^{(n)}\nabla\left(\frac{\nabla N^{(n)}}{N^{(n)}}\right) \left( \mathbf{T}^{(n)}\right)^2 h +
\nabla\beta^{(n)}\mathbf{T}^{(n)}\nabla^2 h\nonumber\\
& +\left( N^{(n)}\nabla\left(\frac{\nabla\beta^{(n)}\nabla N^{(n)}}{N^{(n)}}\right)+N^{(n)}\nabla\mathbf{T}^{(n)}\left( \frac{\nabla N^{(n)}}{N^{(n)}}\right) +\frac{\nabla \beta^{(n)}\left(\nabla N^{(n)}\right)^2}{N^{(n)} }+ \nabla N^{(n)}\mathbf{T}^{(n)}\left(\frac{\nabla N^{(n)}}{N^{(n)}}\right) \right)\mathbf{T}^{(n)} h\nonumber\\
& +\left( \nabla\beta^{(n)}\nabla N^{(n)} +e_0^{(n)}\left( \frac{\nabla N^{(n)}}{N^{(n)}}\right)+N^{(n)}\nabla\left( \frac{\nabla\beta^{(n)}}{N^{(n)}} \right)+\frac{\nabla\beta^{(n)}\nabla N^{(n)}}{N^{(n)}} \right)\mathbf{T}^{(n)}\nabla h\label{commutateur 2}.
\end{align}
We need to estimate the $L^2_{\delta'+3}$ norm of \eqref{commutateur 2}. The term $\left( \mathbf{T}^{(n)}\right)^2 h$ has already been handled since $h$ satisfies \eqref{partie principale de box}. Since $\nabla N^{(n)}\in C^0_1$ we can use $\eqref{L carré}$ to estimate the term $\left( \mathbf{T}^{(n)}\right)^2\nabla h$. With the same argument, using \eqref{alpha 1} we handle the $\left[\left( \mathbf{T}^{(n)}\right)^2,\nabla \right]h$ term. Thanks to \eqref{HR beta 2} and \eqref{HR beta 3}, the coefficients in front of $\mathbf{T}^{(n)}\nabla^2h$ and $\nabla^2h$ are in the appropriate weighted $L^\infty$-based spaces ($L^\infty$ and $C^0_1$ respectively). The only problematic terms are the ones where two spatial derivatives hit $\beta^{(n)}$ or when at least one spatial derivative and $\mathbf{T}^{(n)}$ hit $N^{(n)}$. For them, we use the product estimate (see Proposition \ref{prop prod}). Let us give two examples, the first one using the embedding $H^1\times H^1\xhookrightarrow{}L^2$ (with appropriate weights) : \begin{align*} \left\| \nabla^2\beta^{(n)} \mathbf{T}^{(n)}\nabla h \right\|_{L^2_{\delta'+3}} & \lesssim \left\|\nabla^2\beta^{(n)} \right\|_{H^1_{\delta'+2}} \left\| \mathbf{T}^{(n)}\nabla h \right\|_{H^1_{\delta'+2}} \lesssim C(C_i) \mathcal{E}^{(n)}[h]. \end{align*} The second example uses the embedding $L^2\times H^2\xhookrightarrow{}L^2$ (with appropriate weights) : \begin{align*} \left\| \nabla \mathbf{T}^{(n)} \nabla N^{(n)}\mathbf{T}^{(n)}h\right\|_{L^2_{\delta'+3}} \lesssim \left( 1+ \left\| \nabla^2 \partial_t \Tilde{N}^{(n)} \right\|_{L^2_{\delta+2}} \right) \left\| \mathbf{T}^{(n)} h \right\|_{H^2_{\delta'+1}} \lesssim C(C_i) \mathcal{E}^{(n)}[h]. \end{align*}
This allows us to handle the case $|\alpha|=2$ entirely. Summarising the third step, we get : \begin{equation}
\sum_{|\alpha|\leq 2}\left\| N^{(n)}\left[\left( \mathbf{T}^{(n)}\right)^2,\nabla^{\alpha}\right]h \right\|_{L^2_{\delta'+1+|\alpha|}} \leq C(C_i)\left( \mathcal{E}^{(n)}[h]+\Vert fN^{(n)}\Vert_{H^2_{\delta'+1}}\right).\label{third step} \end{equation} \end{itemize} Combining \eqref{first step}, \eqref{second step} and \eqref{third step}, we get for all $t\in [0,T]$ : \begin{equation}
\mathcal{E}^{(n)}[h](t)\leq 2 \mathcal{E}^{(n)}[h](0)+C(C_i)\sqrt{T}\left( \sup_{s\in[0,T]}\Vert fN^{(n)}\Vert_{H^2_{\delta'+1}}(s)+\mathcal{E}^{(n)}[h](t)\right). \end{equation}
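For concreteness, the absorption argument can be spelled out : if $T$ is small enough that $C(C_i)\sqrt{T}\leq\frac{1}{3}$, then the previous display gives
\begin{equation*}
\mathcal{E}^{(n)}[h](t)\leq 3\, \mathcal{E}^{(n)}[h](0)+\frac{3}{2}C(C_i)\sqrt{T}\sup_{s\in[0,T]}\Vert fN^{(n)}\Vert_{H^2_{\delta'+1}}(s),
\end{equation*}
which is \eqref{inegalite d'energie equation} up to renaming $C(C_i)$.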
This choice of $T$ absorbs the term $\mathcal{E}^{(n)}[h](t)$ on the RHS into the LHS and concludes the proof of the lemma. \end{proof}
With this energy estimate, we are ready to prove estimates on $\Tilde{\gamma}^{(n+1)}$, $\varphi^{(n+1)}$ and $\omega^{(n+1)}$. The spatial term in the energy $\mathcal{E}^{(n)}[h]$ is different from what appears in \eqref{HR gamma 1}, but \eqref{propsmall gamma} implies that $1\lesssim e^{-\gamma^{(n)}}$, so we will recover the estimates we want for $\Tilde{\gamma}^{(n+1)}$, $\varphi^{(n+1)}$ and $\omega^{(n+1)}$ by bounding $\mathcal{E}^{(n)}$ via Lemma \ref{inegalite d'energie lemme}.
\subsubsection{Hyperbolic estimates}
We use our energy estimate to prove the estimates on $\Tilde{\gamma}^{(n+1)}$. Since we are not getting $\gamma^{(n+1)}$ from an elliptic equation, we cannot obtain the decomposition $\gamma^{(n+1)}=-\alpha\chi\ln+\Tilde{\gamma}^{(n+1)}$ directly. Our strategy is to solve for $\gamma^{(n+1)}+\alpha\chi\ln$ to artificially recover our decomposition after having set $\Tilde{\gamma}^{(n+1)}\vcentcolon=\gamma^{(n+1)}+\alpha\chi\ln$. For the sake of clarity, we gather in the following lemma the estimates of the extra terms due to $\alpha\chi\ln$ :
\begin{lem}\label{op chi ln lemme} We set $\Psi^{(n)}= N^{(n)}\left(\left(\mathbf{T}^{(n)}\right)^2(\chi\ln)- e^{-2\gamma^{(n)}}\Delta (\chi\ln) \right)$. For $n\geq 2$, the following estimates hold : \begin{align}
\left\| \Psi^{(n)} \right\|_{H^2_{\delta'+1}}&\leq C(C_i),\label{op chi ln 1}\\
\left\| \Psi^{(n)} \right\|_{H^1_{\delta'+1}}&\leq A_0C_i.\label{op chi ln 3} \end{align} \end{lem}
\begin{proof} To prove \eqref{op chi ln 1}, we don't need to be very precise about the dependence on $C_i$ of the bound, so we don't give many details. For the first part of $\Psi^{(n)}$ : \begin{align*}
\left\| N^{(n)}\left(\mathbf{T}^{(n)}\right)^2(\chi\ln)\right\|_{H^2_{\delta'+1}} & \lesssim \left\| \partial_t\left( \frac{\beta^{(n)}}{N^{(n)}}\right) \right\|_{H^2_{\delta'}}+\left\|\beta^{(n)} \right\|_{H^2_{\delta'}}\left( \left\|\frac{\beta^{(n)}}{N^{(n)}} \right\|_{H^2_{\delta'-1}}+\left\|\nabla\left( \frac{\beta^{(n)}}{N^{(n)}} \right) \right\|_{H^2_{\delta'}} \right)\\
&\leq C(C_i). \end{align*} For the second part of $\Psi^{(n)}$, we notice that $\Delta(\chi\ln)=\Delta(\chi)\ln+\nabla\chi\cdot\nabla\ln$ is a smooth compactly supported function (its support is included in $B_2$) and in particular belongs to all $C^k$ spaces. Using \eqref{useful gamma 1} and \eqref{HR N 1}, we get : \begin{align*}
\left\|e^{-2\gamma^{(n)}} N^{(n)}\Delta(\chi\ln) \right\|_{H^2_{\delta'+1}} & \lesssim\left\|\Delta(\chi\ln) \right\|_{C^2} \left\| N^{(n)} \right\|_{H^2(B_2)}\lesssim 1. \end{align*}
We now turn to the proof of \eqref{op chi ln 3}. For the first part of $\Psi^{(n)}$, we use \eqref{HR beta 1} (and the product estimate $H^2_{\delta'}\times H^1_{\eta}\xhookrightarrow{}H^1_{\eta}$), \eqref{HR N 1}, \eqref{HR N 2} and \eqref{HR beta 3}; actually the only terms that bring a factor $C_i$ are $\partial_t\beta^{(n)}$ and $\partial_t N^{(n)}$ : \begin{align*}
\left\| N^{(n)}\left(\mathbf{T}^{(n)}\right)^2(\chi\ln)\right\|_{H^1_{\delta'+1}} & \lesssim \left\|\partial_t\left(\frac{\beta^{(n)}}{N^{(n)}} \right)\right\|_{H^1_{\delta'}}+\left\|\beta^{(n)} \right\|_{H^2_{\delta'}}\left( \left\|\frac{\beta^{(n)}}{N^{(n)}} \right\|_{H^1_{\delta'-1}}+\left\|\nabla\left( \frac{\beta^{(n)}}{N^{(n)}} \right) \right\|_{H^1_{\delta'}} \right)
\\&\lesssim A_0C_i+\varepsilon. \end{align*} For the second part of $\Psi^{(n)}$, we again use the properties of $\Delta(\chi\ln)$, \eqref{useful gamma 1} and \eqref{HR N 1} : \begin{align*}
\left\|e^{-2\gamma^{(n)}} N^{(n)}\Delta(\chi\ln) \right\|_{H^1_{\delta'+1}}
&\lesssim \left\|\Delta(\chi\ln) \right\|_{C^1} \left\| N^{(n)} \right\|_{H^1(B_2)}\lesssim 1. \end{align*}
\end{proof}
\begin{prop}\label{hr+1 gamma prop} For $n\geq 2$ the following estimates hold : \begin{align}
\sum_{|\alpha|\leq 2} \left\Vert\mathbf{T}^{(n)}\nabla^{\alpha}\Tilde{\gamma}^{(n+1)}\right\Vert_{L^2_{\delta'+1+|\alpha|}}+\left\|\nabla\Tilde{\gamma}^{(n+1)}\right\|_{H^2_{\delta'+1}}&\leq 8 C_i,\label{HR+1 gamma 1}\\
\left\| \partial_t\left(\mathbf{T}^{(n)}\Tilde{\gamma}^{(n+1)}\right)
\right\|_{L^2_{\delta'+1}}&\lesssim C_i,\label{HR+1 gamma 2}\\
\left\| \partial_t\left(\mathbf{T}^{(n)}\Tilde{\gamma}^{(n+1)}\right)\right\|_{H^1_{\delta'+1}}&\lesssim
A_1C_i.\label{HR+1 gamma 3}
\end{align} \end{prop}
\begin{proof} The strategy is to recover the decomposition of $\gamma^{(n+1)}$ by setting $\Tilde{\gamma}^{(n+1)}\vcentcolon=\gamma^{(n+1)}+\alpha\chi\ln$. In view of \eqref{reduced system gamma}, $\Tilde{\gamma}^{(n+1)}$ is a solution of \begin{equation}
\left(\mathbf{T}^{(n)}\right)^2\Tilde{\gamma}^{(n+1)}-e^{-2\gamma^{(n)}}\Delta \Tilde{\gamma}^{(n+1)}=\left( \text{RHS of \eqref{reduced system gamma}} \right)+ \alpha\frac{\Psi^{(n)}}{N^{(n)}}.\label{eq gamma tilde} \end{equation}
In order to prove \eqref{HR+1 gamma 1} and in view of \eqref{inegalite d'energie equation}, we have to bound $\left\| \text{RHS of \eqref{eq gamma tilde}}\times N^{(n)} \right\|_{H^2_{\delta'+1}}$; thanks to the factor $\sqrt{T}$ in \eqref{inegalite d'energie equation}, any bound of the form $C(C_i)$ will do. Thanks to \eqref{op chi ln 1}, it remains to deal with the RHS of \eqref{reduced system gamma} multiplied by $N^{(n)}$, which gives the following expression : \begin{align*}
-\frac{N^{(n)}\left(\tau^{(n)}\right)^2}{2}+\frac{1}{2}e_0^{(n-1)}\left(\frac{\mathrm{div}(\beta^{(n)})}{N^{(n-1)}}\right)&+e^{-2\gamma^{(n)}}\frac{\Delta N^{(n)}}{2}+e^{-2\gamma^{(n)}}N^{(n)}|\nabla\varphi^{(n)}|^2\\&+\frac{1}{4}e^{-2\gamma^{(n)}-4\varphi^{(n)}}N^{(n)} |\nabla\omega^{(n)}|^2=\vcentcolon I+II+III+IV+V. \end{align*} \begin{itemize}
\item For $I$, we mainly use the fact that $H^2_{\delta'+1}$ is an algebra and \eqref{HR tau 1} (and the product estimate to deal with $\chi\ln$) :
\begin{equation*}
\|I\|_{H^2_{\delta'+1}}\lesssim \left\| \tau^{(n)}\right\|^2_{H^2_{\delta'+1}}\left( 1+ \left\|\Tilde{N}^{(n)}\right\|_{H^2_{\delta'+1}}\right)\lesssim C(C_i).
\end{equation*}
\item For $II$, we use the computations already performed about $\left[\mathbf{T}^{(n-1)},\nabla \right]$, \eqref{HR N 2} (which implies that $\left|\frac{1}{N^{(n-1)}}\right|$ and $\left\|\nabla N^{(n-1)}\right\|_{C^1}$ are bounded) to get rid of the $\frac{1}{N^{(n-1)}}$ factors, in order to get :
\begin{align}
\| II\|_{H^2_{\delta'+1}} & \lesssim \left\|e_0^{(n-1)}N^{(n-1)} \nabla\beta^{(n)} \right\|_{H^2_{\delta'+1}}+\left\| \mathbf{T}^{(n-1)}\beta^{(n)} \right\|_{H^3_{\delta'}}\label{II}\\&\qquad +\left\|\nabla N^{(n-1)}\mathbf{T}^{(n-1)}\beta^{(n)} \right\|_{H^2_{\delta'+1}}+\left\|\nabla\beta^{(n-1)}\nabla\beta^{(n)} \right\|_{H^2_{\delta'+1}}\nonumber.
\end{align}
Using \eqref{HR beta 4}, it's easy to see that $\left\| \mathbf{T}^{(n-1)}\beta^{(n)} \right\|_{H^3_{\delta'}}\leq C(C_i)$, and we recall the embedding $H^3_{\delta'}\xhookrightarrow{}H^2_{\delta'+1}$. Using $|\nabla(\chi\ln)|\lesssim \langle x\rangle^{-1}$ and \eqref{HR N 2}, we see that $\left\|\nabla N^{(n-1)}\right\|_{ H^3_{\delta}}\lesssim C_i$ and thus we use the product estimate to write :
\begin{equation*}
\left\| \mathbf{T}^{(n-1)}\beta^{(n)} \right\|_{H^3_{\delta'}}+\left\|\nabla N^{(n-1)}\mathbf{T}^{(n-1)}\beta^{(n)} \right\|_{H^2_{\delta'+1}} \lesssim \left\| \mathbf{T}^{(n-1)}\beta^{(n)} \right\|_{H^3_{\delta'}}\left( 1+\left\|\nabla N^{(n-1)} \right\|_{H^2_{\delta}} \right) \leq C(C_i).
\end{equation*}
Using $\nabla\beta^{(n)}\in H^3_{\delta'+1}\xhookrightarrow{}H^2_{\delta'+2}$, $\eqref{HR N 2}$, and $\nabla N^{(n-1)}\in H^2_{\delta}$ :
\begin{align*}
\left\|e_0^{(n-1)}N^{(n-1)} \nabla\beta^{(n)} \right\|_{H^2_{\delta'+1}}& \lesssim \left| \partial_t N_a^{(n-1)}\right|\left\| \chi\ln \nabla\beta^{(n)} \right\|_{H^2_{\delta'+1}}+\left\|\partial_t\Tilde{N}^{(n-1)}\nabla\beta^{(n)} \right\|_{H^2_{\delta'+1}}\\&\qquad+\left\|\beta^{(n-1)}\nabla N^{(n-1)}\nabla\beta^{(n)} \right\|_{H^2_{\delta'+1}}
\\& \lesssim\left(\left| \partial_t N_a^{(n-1)}\right|+\left\|\partial_t\Tilde{N}^{(n-1)}\right\|_{H^2_{\delta}}\right)\left\|
\nabla\beta^{(n)} \right\|_{H^2_{\delta'+2}}\\&\qquad+\left\|\beta^{(n-1)}\right\|_{H^2_{\delta'+2}}\left\|\nabla N^{(n-1)}\right\|_{H^2_{\delta}}\left\|\nabla\beta^{(n)} \right\|_{H^2_{\delta'+2}} \\& \leq C(C_i).
\end{align*}
The last term in \eqref{II} doesn't present any difficulty and we get $\| II\|_{H^2_{\delta'+1}}\leq C(C_i)$.
\item For $III$, we first notice that $\Delta(\chi\ln)=\Delta(\chi)\ln+\nabla\chi\cdot\nabla\ln$ is a smooth compactly supported function and therefore belongs to all $H^k$ spaces.
We then use \eqref{useful gamma 1} and \eqref{HR N 3} :
\begin{align*}
\|III\|_{H^2_{\delta'+1}} &\lesssim \left| N_a^{(n)}\right|\left\|\Delta(\chi\ln) \right\|_{H^2}+\left\|\Delta\Tilde{N}^{(n)} \right\|_{H^2_{\delta+1}}\leq C(C_i).
\end{align*}
\item For $IV$, thanks to the support property of $\varphi^{(n)}$, we do not need to worry about the spatial decay of our functions. We simply use \eqref{useful gamma 1}, \eqref{HR N 3} (which implies that $\left\| N^{(n)}\right\|_{C^2(B_{2R})}\leq C(C_i)$) and the fact that $H^2$ is an algebra :
\begin{align*}
&\| IV\|_{H^2}\lesssim \left\| N^{(n)}\right\|_{C^2(B_{2R})} \left\|\nabla\varphi^{(n)} \right\|_{H^2}^2\leq C(C_i).
\end{align*}
\item For $V$, we do as for $IV$, using in addition \eqref{useful ffi 1}, which implies that it remains to deal with the following term :
\begin{align*}
\left\|\nabla^2\varphi^{(n)}|\nabla\omega^{(n)}|^2 \right\|_{L^2} \lesssim \left\| \nabla^2\varphi^{(n)}\right\|_{L^2}\left\|\nabla\omega^{(n)} \right\|_{H^2}^2\leq C(C_i).
\end{align*} \end{itemize} Using Lemma \ref{inegalite d'energie lemme}, we get for all $t\in [0,T]$ : \begin{equation*}
\mathcal{E}^{(n)}\left[ \Tilde{\gamma}^{(n+1)} \right](t)\leq 3\, \mathcal{E}^{(n)}\left[ \Tilde{\gamma}^{(n+1)} \right](0)+C(C_i)\sqrt{T}. \end{equation*} It remains to show that $\mathcal{E}^{(n)}\left[ \Tilde{\gamma}^{(n+1)} \right](0)$ is bounded by $C_i$. To this end, the following calculations will be performed on $\Sigma_0$, so we can forget about the indices $(n)$ or $(n+1)$ and use the estimates \eqref{CI petit} and \eqref{CI gros}, which are more convenient. Using the calculations performed in Proposition \ref{commutation estimate}, we show that : \begin{align}
\sum_{|\alpha|\leq 2} \left\Vert\mathbf{T}\nabla^{\alpha}\Tilde{\gamma}\right\Vert_{L^2_{\delta'+1+|\alpha|}} \leq (1+C\varepsilon)\| \mathbf{T}\Tilde{\gamma}\|_{H^2_{\delta+1}}+C\varepsilon\mathcal{E}\left[ \Tilde{\gamma} \right](0)+C\varepsilon\|\Tilde{\gamma}\|_{H^4_{\delta}}.\label{E0 temps} \end{align} Using the same ideas as in the second step of the proof of Lemma \ref{inegalite d'energie lemme}, we show that : \begin{align}
\sum_{|\alpha|\leq 2} \left\Vert e^{-\gamma}\nabla( \nabla^{\alpha}\Tilde{\gamma})\right\Vert_{L^2_{\delta'+1+|\alpha|}}\leq (1+C\varepsilon)\|\Tilde{\gamma}\|_{H^4_{\delta}}.\label{E0 espace} \end{align} Putting together \eqref{E0 temps} and \eqref{E0 espace} and using \eqref{CI petit} and \eqref{CI gros}, we get \begin{equation*}
\mathcal{E}\left[ \Tilde{\gamma} \right](0) \leq (1+C\varepsilon) \left( \| \mathbf{T}\Tilde{\gamma}\|_{H^2_{\delta+1}} +\|\Tilde{\gamma}\|_{H^4_{\delta}}\right)+C\varepsilon\mathcal{E}\left[ \Tilde{\gamma} \right](0) \leq2 (1+C\varepsilon)C_i +C\varepsilon\mathcal{E}\left[ \Tilde{\gamma} \right](0). \end{equation*} We can absorb the last term of the RHS into the LHS by choosing $\varepsilon$ small enough. Taking $T$ small enough and remembering that $1\lesssim e^{-\gamma^{(n)}}$, we finish the proof of \eqref{HR+1 gamma 1}. \par\leavevmode\par We now turn to the proof of \eqref{HR+1 gamma 2} and \eqref{HR+1 gamma 3} which amounts to estimating $\partial_t\left(\mathbf{T}^{(n)}\Tilde{\gamma}^{(n+1)}\right)$, which, thanks to \eqref{eq gamma tilde}, has the following expression : \begin{align}
\partial_t\left(\mathbf{T}^{(n)}\Tilde{\gamma}^{(n+1)}\right)&=e^{-2\gamma^{(n)}}N^{(n)}\Delta\Tilde{\gamma}^{(n+1)}-\frac{N^{(n)}\left(\tau^{(n)}\right)^2}{2}+\frac{1}{2}e_0^{(n-1)}\left(\frac{\mathrm{div}(\beta^{(n)})}{N^{(n-1)}}\right)\\&\qquad+e^{-2\gamma^{(n)}}\frac{\Delta N^{(n)}}{2}+e^{-2\gamma^{(n)}}N^{(n)}\left|\nabla\varphi^{(n)}\right|^2\\&\qquad+\frac{1}{4}e^{-2\gamma^{(n)}-4\varphi^{(n)}}N^{(n)} |\nabla\omega^{(n)}|^2+ \alpha \Psi^{(n)}\\&=\vcentcolon I+II+III+IV+V+VI+VII. \end{align} The term $VII$ is handled thanks to \eqref{op chi ln 3}. For the remaining terms, we first bound their $L^2_{\delta'+1}$ norms by $C_i$, and then the $H^1_{\delta'+1}$ norms of their derivatives by $C_i^2$. \begin{itemize}
\item For $I$, we first perform the $H^1$ estimate, using \eqref{useful gamma 1}, $\varepsilon|\chi\ln|\lesssim \langle x \rangle^{\varepsilon}$, \eqref{HR N 1} and \eqref{HR+1 gamma 1} :
\begin{align*}
\| I\|_{H^1_{\delta'+1}}&\lesssim \left\| N^{(n)}\Delta\Tilde{\gamma}^{(n+1)}\right\|_{H^1_{\delta'+1}}\lesssim \left\|\Delta\Tilde{\gamma}^{(n+1)} \right\|_{H^1_{\delta'+2}}\left(1+\left\|\Tilde{N}^{(n)} \right\|_{H^2_{\delta}} \right)\lesssim C_i.
\end{align*}
To get the $L^2$ estimate, we simply use the embedding $H^1_{\delta'+1}\xhookrightarrow{}L^2_{\delta'+1}$.
\item For $II$, we first use \eqref{HR N 1} and \eqref{propsmall tau} :
\begin{equation*}
\|II\|_{L^2_{\delta'+1}}\lesssim \left\|\tau^{(n)}\right\|_{H^1_{\delta'+1}}^2\left(1+ \left\| \Tilde{N}^{(n)}\right\|_{H^2_{\delta}} \right)\lesssim \varepsilon^2.
\end{equation*}
For the $H^1$ estimate, we use \eqref{HR N 1}, \eqref{propsmall tau} and \eqref{HR tau 1} :
\begin{equation*}
\|II\|_{H^1_{\delta'+1}}\lesssim \left\|\tau^{(n)}\right\|_{H^1_{\delta'+1}}\left\|\tau^{(n)}\right\|_{H^2_{\delta'+1}}\left(1+ \left\| \Tilde{N}^{(n)}\right\|_{H^2_{\delta}} \right)\lesssim \varepsilon A_1C_i.
\end{equation*}
\item For $III$, we use \eqref{HR beta 2.5}, \eqref{HR beta 1} and \eqref{HR N 2} :
\begin{align*}
\left\| III \right\|_{L^2_{\delta'+1}} &\lesssim \left\|\nabla\partial_t\beta^{(n)} \right\|_{L^2_{\delta'+1}} + \left\| \nabla\beta^{(n)}\right\|_{H^1_{\delta'+1}}\left\|\partial_t\Tilde{N}^{(n-1)}\right\|_{H^2_{\delta}}+ \left\|\beta^{(n-1)}\right\|_{H^2_{\delta'}}\left\|\nabla\beta^{(n)} \right\|_{H^1_{\delta'+1}}\\&\lesssim C_i.
\end{align*}
For the $H^1$ estimate, we write :
\begin{align*}
\left\| III \right\|_{H^1_{\delta'+1}}& \lesssim \left\| \nabla\partial_t\beta^{(n)}\right\|_{H^1_{\delta'+1}} + \left\|\nabla\beta^{(n)}\partial_t\Tilde{N}^{(n-1)} \right\|_{H^1_{\delta'+1}} + \left\| \beta^{(n-1)}\nabla\beta^{(n)}\right\|_{H^1_{\delta'+1}} \\
&\lesssim \left\| \nabla\partial_t\beta^{(n)}\right\|_{H^1_{\delta'+1}} + \left\|\nabla\beta^{(n)}\right\|_{H^1_{\delta'+1}} \left\| \partial_t\Tilde{N}^{(n-1)}\right\|_{H^2_{\delta}} +\left\|\beta^{(n-1)} \right\|_{H^2_{\delta'}} \left\|\nabla\beta^{(n)} \right\|_{H^1_{\delta'+1}}
\\&\lesssim A_1 C_i + \varepsilon C_i +\varepsilon^2.
\end{align*}
\item For $IV$, we recall that $\Delta(\chi\ln)$ is a smooth compactly supported function. We only perform the $H^1$ estimate, because the $L^2$ estimate will be a consequence of the embedding $H^1_{\delta'+1}\xhookrightarrow{}L^2_{\delta'+1}$. We use \eqref{useful gamma 1} and \eqref{HR N 2} :
\begin{equation*}
\| IV\|_{H^1_{\delta'+1}} \lesssim \left| N_a^{(n)}\right|\left\|\Delta(\chi\ln) \right\|_{H^1}+\left\|\Delta\Tilde{N}^{(n)}\right\|_{H^1_{\delta+1}}\lesssim C_i.
\end{equation*}
\item For $V$, thanks to the support property of $\varphi^{(n)}$, we do not need to worry about the spatial decay of our functions. We first use \eqref{useful gamma 1}, the fact that $N^{(n)}\in L^{\infty}(B_{2R})$ (which comes from \eqref{HR N 1}), Hölder's inequality and \eqref{propsmall fi} :
\begin{equation*}
\| V\|_{L^2}\leq \left\|N^{(n)} \right\|_{L^{\infty}(B_{2R})} \left\|\nabla\varphi^{(n)} \right\|_{L^4}^2\lesssim \varepsilon^2.
\end{equation*}
For the $H^1$ estimate, we use \eqref{useful gamma 1}, \eqref{HR N 1}, \eqref{HR N 2} and \eqref{HR fi 1} :
\begin{align*}
\| \nabla V\|_{L^2}&\lesssim \left\|\nabla\varphi^{(n)}\nabla^2\varphi^{(n)} \right\|_{L^2}+ \left\|\nabla\Tilde{N}^{(n)} \left|\nabla\varphi^{(n)}\right|^2\right\|_{L^2}
\\& \lesssim \left\| \nabla^2\varphi^{(n)} \right\|_{H^1} \left\| \nabla\varphi^{(n)}\right\|_{L^4} + \left\| \nabla\Tilde{N}^{(n)}\right\|_{H^2_{\delta+1}}\left\|\nabla\varphi^{(n)} \right\|_{L^4}^2
\\& \lesssim\varepsilon A_0C_i.
\end{align*}
\item For $VI$, we do as for $V$, using in addition \eqref{useful ffi 1}. \end{itemize} \end{proof}
We are now interested in proving estimates for $\varphi^{(n+1)}$ and $\omega^{(n+1)}$. We first prove their support property :
\begin{lem}\label{support fi n+1} There exists $C_s>0$ such that for $\varepsilon$, $T$ sufficiently small (depending on $R$), $\varphi^{(n+1)}$ is supported in \begin{equation*}
\enstq{(t,x)\in[0,T]\times\mathbb{R}^2}{\vert x\vert\leq R+C_s(1+R^{\varepsilon})t}. \end{equation*} In particular, choosing $T$ small enough, $\mathrm{supp}(\varphi^{(n+1)})\subset [0,T]\times B_{2R}$. \end{lem}
\begin{proof} Since the initial data for $\varphi^{(n+1)}$ and $\partial_t\varphi^{(n+1)}$ are compactly supported and $\Box_{g^{(n)}}\varphi^{(n+1)}$ is compactly supported in \begin{equation*} A\vcentcolon=\enstq{(t,x)\in[0,T]\times\mathbb{R}^2}{\vert x\vert \leq R+C_s(1+R^{\varepsilon})t} \end{equation*}
we just have to show that $\partial A$ is a spacelike hypersurface. We set $f(x,t)=-|x|+C_s(1+R^{\varepsilon})t$, in order to have $\partial A=f^{-1}(-R)$. Thus, we have to show that $(g^{(n)})^{-1}(\mathrm{d} f,\mathrm{d} f)$ is non-positive on this hypersurface. We have $\mathrm{d} f=-\frac{x_i}{|x|}\mathrm{d} x^i+C_s(1+R^{\varepsilon})\mathrm{d} t$, which implies : \begin{align*}
\left(g^{(n)}\right)^{-1}(\mathrm{d} f,\mathrm{d} f)&=\frac{x_ix_j}{|x|^2}\left(g^{(n)}\right)^{ij}+C_s^2(1+R^{\varepsilon})^2\left(g^{(n)}\right)^{tt}-\frac{2x_i}{|x|}C_s(1+R^{\varepsilon})\left(g^{(n)}\right)^{it}\\
&=e^{-2\gamma^{(n)}}-\left(\frac{x\cdot\beta^{(n)}}{|x|N^{(n)}}\right)^2-\left( \frac{C_s(1+R^{\varepsilon})}{N^{(n)}}\right)^2-\frac{2(x\cdot\beta^{(n)})C_s(1+R^{\varepsilon})}{(N^{(n)})^2|x|} \end{align*}
We have $e^{-2\gamma^{(n)}}\lesssim \langle x\rangle^{2\varepsilon^2}$, $\frac{|x\cdot\beta^{(n)}|}{(N^{(n)})^2|x|}+\left(\frac{x\cdot\beta^{(n)}}{|x|N^{(n)}}\right)^2\lesssim \varepsilon$, so choosing the parameters appropriately, one easily sees that $(g^{(n)})^{-1}(\mathrm{d} f,\mathrm{d} f)$ is non-positive on the hypersurface. \end{proof}
\begin{prop}\label{hr+1 fi prop} For $n\geq 2$, the following estimates hold : \begin{align}
\left\|\partial_t\varphi^{(n+1)}\right\|_{H^2}+\left\|\nabla\varphi^{(n+1)}\right\|_{H^2} & \lesssim C_i ,\label{esti ondes phi 1}\\
\left\Vert \partial_t\left( \mathbf{T}^{(n)}\varphi^{(n+1)} \right) \right\Vert_{H^1} & \lesssim C_i.\label{esti ondes phi 2} \end{align} \end{prop}
\begin{proof} First, note that since $\varphi^{(n+1)}$ is compactly supported in $B_{2R}$ for all time by the previous lemma, we do not need to worry about the spatial decay in this proof. We recall the wave equation satisfied by $\varphi^{(n+1)}$ : \begin{align*}
\left( \mathbf{T}^{(n)} \right)^2\varphi^{(n+1)}-e^{-2\gamma^{(n)}}\Delta \varphi^{(n+1)}&=\frac{e^{-2\gamma^{(n)}}}{N^{(n)}}\nabla \varphi^{(n)}\cdot\nabla N^{(n)}+\frac{\tau^{(n)} e_0^{(n-1)}\varphi^{(n)}}{N^{(n)}}\\&\qquad+\frac{1}{2}e^{-4\varphi^{(n)}}\left( \left(e_0^{(n-1)}\omega^{(n)}\right)^2+\left|\nabla\omega^{(n)}\right|^2 \right). \end{align*} Using our energy estimate for this wave equation we see that to prove the first part of the proposition we have to bound \begin{equation*}
\left\Vert e^{-2\gamma^{(n)}}\nabla \varphi^{(n)}\cdot\nabla N^{(n)}\right\Vert_{H^2}+\left\Vert \tau^{(n)} e_0^{(n-1)}\varphi^{(n)}\right\Vert_{H^2}+\left\| e^{-4\varphi^{(n)}}N^{(n)}(e_0^{(n-1)}\omega^{(n)})^2 \right\|_{H^2} +\left\| e^{-4\varphi^{(n)}}N^{(n)} |\nabla\omega^{(n)}|^2\right\|_{H^2} . \end{equation*} We mainly use the fact that in dimension 2, $H^2$ is an algebra. Noting that every norm is taken not on the whole space but only on $B_{2R}$, using \eqref{useful gamma 1} and \eqref{useful ffi 1} and thanks to the estimates made on the $n$-th iterate, it's easy to see that this quantity is bounded by some constant $C(A_i,C_i)$. We also recall that : \begin{equation*}
\mathcal{E}^{(n)}\left[ \varphi^{(n+1)} \right](0)\lesssim C_i. \end{equation*} Thanks to the Lemma \ref{inegalite d'energie lemme}, if $T$ is small enough, we have for all $t\in[0,T]$ \begin{equation*}
\mathcal{E}^{(n)}\left[ \varphi^{(n+1)} \right](t)\lesssim C_i. \end{equation*}
Thanks to the support property of $\varphi^{(n+1)}$, the fact that $1\lesssim e^{-\gamma^{(n)}}$ and $|N^{(n)}|\lesssim 1$ (on $B_{2R}$), we have : \begin{equation}
\left\|\partial_t\varphi^{(n+1)}\right\|_{H^2}+\left\|\nabla\varphi^{(n+1)}\right\|_{H^2}\lesssim \mathcal{E}^{(n)}\left[ \varphi^{(n+1)} \right]+C_i,\label{commutation energie} \end{equation} which concludes the proof of \eqref{esti ondes phi 1}. \par\leavevmode\par We next prove the estimate \eqref{esti ondes phi 2}. We use the equation satisfied by $\varphi^{(n+1)}$ to express the term we want to estimate: \begin{align*}
\partial_t\left( \mathbf{T}^{(n)}\varphi^{(n+1)} \right) & =e^{-2\gamma^{(n)}}N^{(n)}\Delta\varphi^{(n+1)}+e^{-2\gamma^{(n)}}\nabla \varphi^{(n)}\cdot\nabla N^{(n)}+\tau^{(n)} e_0^{(n-1)}\varphi^{(n)}+\beta^{(n)}\cdot\nabla\left( \mathbf{T}^{(n)}\varphi^{(n+1)} \right)\\&\qquad+\frac{1}{2}e^{-4\varphi^{(n)}}N^{(n)} \left(e_0^{(n-1)}\omega^{(n)}\right)^2+\frac{1}{2}e^{-4\varphi^{(n)}}N^{(n)}\left|\nabla\omega^{(n)}\right|^2 \\
& =\vcentcolon I+II+III+IV+V+VI. \end{align*} Thus, it remains to bound those terms by $C_i$ in $H^1$, and the main difficulty is avoiding any $C_i^2$ bound. We mainly use the embedding of $H^1$ in $L^q$ for all $q\geq 2$ and the Hölder inequality, in particular the $L^4\times L^4\xhookrightarrow{}L^2$ and $L^8\times L^8\xhookrightarrow{}L^4$ case (note that in the following we do not write down the factors that are trivially in $L^{\infty}$) : \begin{itemize}[label=\textbullet]
\item for $I$, the only issues are the terms where $\Tilde{N}^{(n)}$ or $\Tilde{\gamma}^{(n)}$ get one derivative :
\begin{align*}
\|I\|_{H^1} & \lesssim \Vert\nabla\varphi^{(n+1)}\Vert_{H^2}+\|\nabla\Tilde{N}^{(n)}\Delta\varphi^{(n+1)}\|_{L^2}+\|\nabla\Tilde{\gamma}^{(n)}\Delta\varphi^{(n+1)}\|_{L^2}\\
& \lesssim \Vert\nabla\varphi^{(n+1)}\Vert_{H^2}\left(1+\|\nabla\Tilde{N}^{(n)}\|_{H^1}+\|\nabla\Tilde{\gamma}^{(n)}\|_{H^1}\right)\\
& \lesssim C_i.
\end{align*}
\item for $II$, we forget about the $\chi\ln$ in $N^{(n)}$, which is less problematic than $\Tilde{N}^{(n)}$ :
\begin{align*}
\|II\|_{H^1} & \lesssim\|\nabla\varphi^{(n)}\cdot\nabla N^{(n)}\|_{L^2}+\|\nabla^2\varphi^{(n)}\nabla N^{(n)}\|_{L^2}+\|\nabla\varphi^{(n)}\nabla^2 N^{(n)}\|_{L^2}+\|\nabla\Tilde{\gamma}^{(n)}\nabla\varphi^{(n)}\nabla N^{(n)}\|_{L^2}\\
& \lesssim\|\nabla\varphi^{(n)}\|_{L^4}\|\Tilde{N}^{(n)}\|_{H^4}+\|\nabla\varphi^{(n)}\|_{H^2}\|\Tilde{N}^{(n)}\|_{H^2}\left( 1+\|\Tilde{\gamma}^{(n)}\|_{H^2}\right)\\
& \lesssim C_i.
\end{align*}
\item for $III$, we use \eqref{HR tau 1} when no derivative hits $e_0^{(n-1)}\varphi^{(n)}$ and \eqref{propsmall tau} when one derivative hits $e_0^{(n-1)}\varphi^{(n)}$ :
\begin{align*}
\| III\|_{H^1}&\lesssim \|\tau^{(n)}\|_{H^2}\left( \Vert\partial_t\varphi^{(n)}\Vert_{L^4}+\Vert\nabla\varphi^{(n)}\Vert_{L^4}\left( 1+\left\| \nabla\beta^{(n-1)}\right\|_{H^1}\right) \right)\\& \quad+\|\tau^{(n)}\|_{H^1}\left( \|\nabla^2\varphi^{(n)}\|_{H^1}+\|\partial_t\nabla\varphi^{(n)}\|_{H^1}\right)\\& \lesssim C_i.
\end{align*}
\item for $IV$, we just notice that, applying the same type of arguments as in Proposition \ref{commutation estimate}, it's easy to deduce from the first part of this proof that $\left\|\mathbf{T}^{(n)}\varphi^{(n+1)}\right\|_{H^2}\lesssim C_i$ :
\begin{align*}
\|IV\|_{H^1} & \lesssim \left\|\mathbf{T}^{(n)}\varphi^{(n+1)}\right\|_{H^1}+ \left\|\nabla\beta^{(n)}\nabla\left( \mathbf{T}^{(n)}\varphi^{(n+1)} \right)\right\|_{L^2}+\left\|\beta^{(n)}\nabla^2\left( \mathbf{T}^{(n)}\varphi^{(n+1)} \right)\right\|_{L^2}\\
& \lesssim \left\|\mathbf{T}^{(n)}\varphi^{(n+1)}\right\|_{H^2}\left( 1+\|\nabla\beta^{(n)}\|_{H^1}\right)\\
& \lesssim C_i.
\end{align*}
\item for $V$, we use first \eqref{useful ffi 1} and the fact that $N^{(n)}$ and $\nabla N^{(n)}$ are bounded, and then \eqref{propsmall fi} and \eqref{HR omega 1} :
\begin{align*}
\left\| V\right\|_{H^1} & \lesssim \left( 1 + \left\| \nabla\Tilde{N}^{(n)} \right\|_{H^2} \right) \left\| \left(e_0^{(n-1)}\omega^{(n)}\right)^2 \right\|_{L^2}+ \left\| e_0^{(n-1)}\omega^{(n)}\nabla e_0^{(n-1)}\omega^{(n)} \right\|_{L^2}
\\& \lesssim C_i \left\| e_0^{(n-1)}\omega^{(n)}\right\|_{L^4}^2+ \left\| e_0^{(n-1)}\omega^{(n)}\right\|_{L^4}\left\|\nabla e_0^{(n-1)}\omega^{(n)}\right\|_{H^1}
\\&\lesssim \varepsilon^2 C_i+\varepsilon C_i
\end{align*}
\item for $VI$, we do as for $V$, since $\nabla\omega^{(n)}$ and $e_0^{(n-1)}\omega^{(n)}$ satisfy the same estimates.
\end{itemize} \end{proof}
\begin{prop}\label{hr+1 omega prop} For $n\geq 2$, the following estimates hold : \begin{align}
\left\|\partial_t\omega^{(n+1)}\right\|_{H^2}+\left\|\nabla\omega^{(n+1)}\right\|_{H^2} & \lesssim C_i ,\label{esti ondes omega 1}\\
\left\Vert \partial_t\left( \mathbf{T}^{(n)}\omega^{(n+1)} \right) \right\Vert_{H^1} & \lesssim C_i.\label{esti ondes omega 2} \end{align} \end{prop}
\begin{proof} The proof of Proposition \ref{hr+1 omega prop} uses the same estimates as that of Proposition \ref{hr+1 fi prop} (since $\varphi^{(n)}$ and $\omega^{(n)}$ satisfy the same estimates), so we omit the details. \end{proof}
Looking at the estimates we proved for the $(n+1)$-th iterate in Propositions \ref{hr+1 N prop}, \ref{hr+1 beta prop}, \ref{HR+1 H prop}, \ref{hr+1 tau prop}, \ref{hr+1 gamma prop}, \ref{hr+1 fi prop} and \ref{hr+1 omega prop}, we see that in order to recover the estimates \eqref{HR N 1}-\eqref{HR fi 1}, we have to choose the constants $A_0$, $A_1$, $A_2$, $A_3$ and $A_4$ such that $C(A_i)\ll A_{i+1}$ for all $i=0,\dots,3$, and to take $\varepsilon$ small, depending on the constants $A_i$. We make such a choice.
This concludes the proof of the fact claimed above : the sequence constructed in Section \ref{section iteration scheme} is uniformly bounded. Moreover, the bounds \eqref{HR N 1}-\eqref{HR omega 1} hold for every $k\in\mathbb{N}$ and every $t\in[0,T]$.
\subsection{Convergence of the sequence}\label{Cauchy}
In this section, we show that the sequence we constructed in fact converges to a limit in larger functional spaces than those used in the previous section, where we only proved boundedness. To this end, we will show that the sequence is a Cauchy sequence. We introduce the following distances, as in \cite{hunluk18} : \begin{align}
d_1^{(n)} &\vcentcolon= \left\|\Tilde{\gamma}^{(n+1)}-\Tilde{\gamma}^{(n)} \right\|_{H^1_{\delta'}} +\left\|\partial_t\left(\Tilde{\gamma}^{(n+1)}-\Tilde{\gamma}^{(n)} \right) \right\|_{L^2_{\delta'}}+\left\|H^{(n+1)}-H^{(n)} \right\|_{H^1_{\delta+1}}\nonumber\\&\qquad+\left\|\tau^{(n+1)}-\tau^{(n)} \right\|_{L^2_{\delta'+1}}+\left\|\partial_t \left(\varphi^{(n+1)}-\varphi^{(n)} \right) \right\|_{H^1} +\left\|\nabla\left(\varphi^{(n+1)}-\varphi^{(n)}\right) \right\|_{H^1}
\\&\qquad +\left\|\partial_t \left(\omega^{(n+1)}-\omega^{(n)} \right) \right\|_{H^1} +\left\|\nabla\left(\omega^{(n+1)}-\omega^{(n)}\right) \right\|_{H^1},\nonumber\\
d_2^{(n)} &\vcentcolon= \left| N_a^{(n+1)}-N_a^{(n)}\right|+\left\|\Tilde{N}^{(n+1)}-\Tilde{N}^{(n)} \right\|_{H^2_{\delta}}+\left\| \beta^{(n+1)}-\beta^{(n)}\right\|_{H^2_{\delta'}},\\
d_3^{(n)} &\vcentcolon= \left\| \partial_t \left(\mathbf{T}^{(n)}\varphi^{(n+1)}-\mathbf{T}^{(n-1)}\varphi^{(n)} \right)\right\|_{L^2}+\left\| \partial_t \left(\mathbf{T}^{(n)}\omega^{(n+1)}-\mathbf{T}^{(n-1)}\omega^{(n)} \right)\right\|_{L^2},\\
d_4^{(n)} &\vcentcolon= \left\|e_0^{(n+1)}H^{(n+1)}-e_0^{(n)}H^{(n)} \right\|_{H^1_{\delta+1}},\\
d_5^{(n)} &\vcentcolon= \left\|\partial_t \left(\mathbf{T}^{(n)}\Tilde{\gamma}^{(n+1)}-\mathbf{T}^{(n-1)}\Tilde{\gamma}^{(n)} \right) \right\|_{L^2_{\delta'}}+\left\|\partial_t\left(\tau^{(n+1)}-\tau^{(n)} \right) \right\|_{L^2_{\delta'+1}},\\
d_6^{(n)} &\vcentcolon= \left|\partial_t\left(N_a^{(n+1)}-N_a^{(n)} \right) \right|+\left\|\partial_t\left(\Tilde{N}^{(n+1)}-\Tilde{N}^{(n)} \right) \right\|_{H^2_{\delta}}+\left\|e_0^{(n)}\beta^{(n+1)}-e_0^{(n-1)}\beta^{(n)} \right\|_{H^2_{\delta'}}. \end{align}
The goal is to show that each series $\sum_{n\geq 0}d_i^{(n)}$ converges. This is a consequence of the following proposition. At this low level of regularity, its proof is identical to the corresponding one in \cite{hunluk18} (see Proposition $8.19$ and Corollary $8.20$ in that article).
\begin{prop}\label{distance prop} If $T$ and $\varepsilon$ are small enough (where $\varepsilon$ does not depend on $C_i$), the following bounds hold for every $n\geq 3$ : \begin{align*} d_1^{(n)} + d_2^{(n)} + d_3^{(n)} + d_4^{(n)} + d_5^{(n)} + d_6^{(n)} \lesssim 2^{-n}. \end{align*} \end{prop}
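To illustrate how these bounds are used, consider for instance the $\Tilde{N}$ component : for $m>n\geq 3$, the triangle inequality and Proposition \ref{distance prop} give
\begin{equation*}
\left\|\Tilde{N}^{(m)}-\Tilde{N}^{(n)} \right\|_{H^2_{\delta}}\leq\sum_{k=n}^{m-1}\left\|\Tilde{N}^{(k+1)}-\Tilde{N}^{(k)} \right\|_{H^2_{\delta}}\leq\sum_{k=n}^{m-1}d_2^{(k)}\lesssim 2^{-n},
\end{equation*}
so that $\left(\Tilde{N}^{(n)}\right)_{n}$ is a Cauchy sequence in $H^2_{\delta}$; the other components are treated in the same way with the corresponding distances.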
This shows that, in the function spaces involved in the definition of the distances $d^{(n)}_i$, the sequence we constructed is Cauchy and therefore converges to some \begin{equation}\label{strong limit} (N=1+N_a\chi\ln+\Tilde{N},\tau,H,\beta,\gamma=-\alpha\chi\ln+\Tilde{\gamma},\varphi,\omega). \end{equation} Since the sequence is bounded in a smaller space, we can find a subsequence weakly converging to some limit, which has to coincide with the strong limit \eqref{strong limit}. Consequently, \eqref{strong limit} satisfies the estimates \eqref{HR N 1}-\eqref{HR omega 1}, from which we can prove that it is a solution to the reduced system \eqref{EQ N}-\eqref{EQ omega}.
If there are two solutions to the reduced system, we can control their difference using the distances $d^{(n)}_i$, and arguing as in Proposition \ref{distance prop} we show that these two solutions coincide. This proves the uniqueness of the solution to the reduced system.
We summarize this discussion in the following corollary :
\begin{coro}\label{coro reduced sytem} Given the initial conditions in Section \ref{initial data}, there exists a unique solution \begin{equation}
(N,\beta,\tau,H,\gamma,\varphi,\omega)\label{sol} \end{equation} to the reduced system \eqref{EQ N}-\eqref{EQ omega} such that : \begin{itemize}
\item $\gamma$ and $N$ admit the decompositions
\begin{equation*}
\gamma=-\alpha\chi\ln+\Tilde{\gamma},\qquad N=1+N_a\chi\ln+\Tilde{N},
\end{equation*}
where $\alpha\geq0$ is a constant, $N_a(t)\geq 0$ is a function of $t$ alone and
\begin{equation*}
\Tilde{\gamma}\in H^3_{\delta'}, \quad \mathbf{T}\Tilde{\gamma}\in H^2_{\delta'+1}, \quad \partial_t\mathbf{T}\Tilde{\gamma}\in H^1_{\delta'+1}, \quad \Tilde{N}\in H^4_{\delta}, \quad \partial_t\Tilde{N}\in H^2_{\delta},
\end{equation*}
with estimates depending on $C_i$, $\delta$ and $R$.
\item $(\beta,\tau,H)$ are in the following spaces :
\begin{equation*}
\beta\in H^3_{\delta'},\quad e_0\beta\in H^3_{\delta'},\quad \tau\in H^2_{\delta'+1},\quad \partial_t\tau\in H^1_{\delta'+1}, \quad H,e_0H\in H^2_{\delta'+1},
\end{equation*}
with estimates depending on $C_i$, $\delta$ and $R$.
\item The smallness conditions in \eqref{HR N 1} and \eqref{HR beta 1} and Proposition \ref{prop propsmall} hold (without the $(n)$). \end{itemize} \end{coro}
\section{End of the proof of Theorem \ref{theoreme principal} }\label{section end of proof}
In this section we conclude the proof of Theorem \ref{theoreme principal} in two steps. As a first step, we show that the unique solution of the reduced system obtained in Corollary \ref{coro reduced sytem} is actually a solution of the full system \eqref{EVE}. As we will see in Proposition \ref{Gij et Goo}, this involves, among other things, propagating the gauge condition $\tau =0$ (the condition $\Bar{g}=e^{2\gamma}\delta$ is also a gauge condition but we don't need to propagate it). As in the harmonic gauge, this step is done using the Bianchi equation and the constraint equations. While in the harmonic gauge the Bianchi equation implies a second order hyperbolic system for the gauge, here we obtain a transport system (see Proposition \ref{usage de bianchi}).
In a second step, we prove the remaining estimates stated in Theorem \ref{theoreme principal}, i.e.\ the bound on the $H^4$ norm of the metric coefficients, with a loss of one order of regularity for each time derivative. For this, we use the full Einstein equations in the elliptic gauge, thanks to the first step.
\subsection{Solving the Einstein equations}
In order to solve \eqref{EVE} in the elliptic gauge, it only remains to prove that $G_{\mu\nu}=T_{\mu\nu}$ (the wave equations for $\varphi$ and $\omega$ being already included in the reduced system) and that $\tau=0$.
To properly define the tensors $G$ and $T$, we need to define a metric. Let $g$ be the metric on $\mathbb{R}^2\times [0,T]$ defined by the geometric quantities $N$, $\gamma$ and $\beta$ (obtained from \eqref{sol}) as in \eqref{metrique elliptique}. To compute the Einstein tensor of $g$, we need the second fundamental form and its traceless part. We define $K$ with $H$, $\gamma$ and $\tau$ (obtained from \eqref{sol}) according to \eqref{def H} and \eqref{g bar}. Thanks to \eqref{EQ beta} and \eqref{EQ tau} we have \begin{align*}
K_{ij}=H_{ij}+\frac{1}{2}e^{2\gamma}\tau\delta_{ij}
= -\frac{1}{2N}e_0\left( e^{2\gamma}\right)\delta_{ij}+\frac{e^{2\gamma}}{2N}\left( \partial_i\beta_j+\partial_j\beta_i \right). \end{align*} By \eqref{seconde forme fonda}, this proves that $K_{ij}$ is the second fundamental form of $\Sigma_t$. On the other hand, by \eqref{EQ tau}, we know that $\tau$ is the mean curvature of $\Sigma_t$. This implies that $H_{ij}$ is the traceless part of $K_{ij}$ with respect to $\Bar{g}=e^{2\gamma}\delta$. We also define the tensor $T$ with $g$ and $(\varphi,\omega)$ (obtained from \eqref{sol}) according to \eqref{tenseur energie impulsion }.
We can now use both our computations in the elliptic gauge and the reduced system to compute $G_{00}-T_{00}$ and $G_{ij}-T_{ij}$.
\begin{prop}\label{Gij et Goo} Given a solution to \eqref{EQ N}-\eqref{EQ omega}, the Einstein tensor in the basis $(e_0,\partial_i)$ is given by : \begin{align}
G_{00}&=\frac{N}{2}e_0\tau+T_{00},\label{G 00}\\
G_{ij}&=\frac{e^{2\gamma}e_0\tau}{2N}\delta_{ij}+T_{ij}.\label{G ij} \end{align} Moreover, we have $D^{\mu}T_{\mu\nu}=0$. \end{prop}
\begin{proof} In this proof we just have to put together our calculations about $R_{\mu\nu}$ and $T_{\mu\nu}$ performed in Appendix \ref{appendix A} and the reduced system \eqref{EQ N}-\eqref{EQ omega}. Note that putting \eqref{EQ tau} and \eqref{EQ gamma} together gives back an elliptic equation satisfied by $\gamma$ : \begin{equation}
\Delta\gamma=\frac{\tau^2}{2}e^{2\gamma}-\frac{e^{2\gamma}}{2N}e_0\tau-\frac{\Delta N}{2N}-\left|\nabla\varphi\right|^2-\frac{1}{4}e^{-4\varphi}\left|\nabla\omega \right|^2.\label{vraie eq sur gamma} \end{equation}
In order to compute $G_{\mu\nu}$, we need the scalar curvature $R$, which, thanks to \eqref{EQ N}, \eqref{vraie eq sur gamma} and \eqref{appendix R}, has the following expression : \begin{equation*}
R= -\mathbf{T}\tau+2e^{-2\gamma}|\nabla\varphi|^2-\frac{2}{N^2}(e_0\varphi)^2+\frac{1}{2}e^{-2\gamma-4\varphi}|\nabla\omega|^2-\frac{1}{2N^2}e^{-4\varphi}(e_0\omega)^2. \end{equation*} We also recall the expression of $g_{\mu\nu}$ in the $(e_0,\partial_i)$ basis : $g_{00}=-N^2$, $g_{ij}=e^{2\gamma}\delta_{ij}$ and $g_{0i}=0$. Since $N$ satisfies \eqref{EQ N} and thanks to \eqref{appendix R00} we get : \begin{align*}
G_{00}&=R_{00}-\frac{1}{2}g_{00}R\\
&=\frac{N}{2}e_0\tau+(e_0\varphi)^2+N^2e^{-2\gamma}|\nabla\varphi|^2+\frac{1}{4}e^{-4\varphi}\left( (e_0\omega)^2+e^{-2\gamma}N^2|\nabla\omega|^2\right), \end{align*} which, looking at \eqref{T 00}, gives \eqref{G 00}. Thanks to \eqref{EQ H} and \eqref{appendix Rij} we get : \begin{equation*}
R_{ij}=\delta_{ij}\left(-\Delta\gamma+\frac{\tau^2}{2}e^{2\gamma}-\frac{e^{2\gamma}}{2}\mathbf{T}\tau-\frac{\Delta N}{2N}\right)+2\partial_i\varphi\partial_j\varphi-\delta_{ij}|\nabla\varphi|^2+\frac{1}{2}e^{-4\varphi}\partial_i\omega\partial_j\omega-\frac{1}{4}e^{-4\varphi}\delta_{ij}|\nabla\omega|^2, \end{equation*} which, using \eqref{vraie eq sur gamma} gives $R_{ij}=2\partial_i\varphi\partial_j\varphi+\frac{1}{2}e^{-4\varphi}\partial_i\omega\partial_j\omega$. It gives us \begin{equation*}
G_{ij}=\frac{1}{2}e^{2\gamma}\mathbf{T}\tau\delta_{ij}+2\partial_i\varphi\partial_j\varphi+\frac{e^{2\gamma}}{N^2}(e_0\varphi)^2\delta_{ij}-|\nabla\varphi|^2\delta_{ij}+\frac{1}{4}e^{-4\varphi}\left( 2\partial_i\omega\partial_j\omega+\frac{e^{2\gamma}}{N^2}(e_0\omega)^2\delta_{ij}-|\nabla\omega|^2\delta_{ij}\right), \end{equation*} which, looking at \eqref{T ij}, gives \eqref{G ij}. The conservation law $D^{\mu}T_{\mu\nu}=0$ is just a consequence of \eqref{EQ ffi}, \eqref{EQ omega} and \eqref{divergence de T}. \end{proof}
By Proposition \ref{Gij et Goo}, in order to show that a solution to \eqref{EQ N}-\eqref{EQ omega} is indeed a solution to \eqref{EVE} it remains to show that $\tau=0$ and $G_{0i} - T_{0i}=0$. These will be shown simultaneously and the Bianchi identities \begin{equation*}
D^{\mu}G_{\mu\nu}=0 \end{equation*} are used in the following proposition to obtain a coupled system for these two quantities. For the sake of clarity, we use the following notations : \begin{equation*}
A_i\vcentcolon=G_{0i}-T_{0i},\qquad B_i\vcentcolon=G_{0i}-T_{0i}-\frac{N}{2}\partial_i\tau, \end{equation*}
and $\mathrm{div}(A)=\delta^{ij}\partial_iA_j$. The important remark about these quantities is that if we manage to show that $e_0\tau=0$, $A_i=0$ and $B_i=0$, then we first have $G_{0i}-T_{0i}=0$, which, in view of the expression of $B_i$, implies that $\nabla\tau =0$; together with $e_0\tau=0$ and $\tau_{|\Sigma_0}=0$, this implies that $\tau=0$ in the whole space-time.
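Schematically, recalling that $e_0=\partial_t-\beta\cdot\nabla$ and that $N$ does not vanish :
\begin{equation*}
A_i=B_i=0\ \Longrightarrow\ \frac{N}{2}\partial_i\tau=A_i-B_i=0\ \Longrightarrow\ \nabla\tau=0,\qquad\text{and then}\qquad e_0\tau=0\ \Longrightarrow\ \partial_t\tau=\beta\cdot\nabla\tau=0,
\end{equation*}
so that $\tau$ is constant on $[0,T]\times\mathbb{R}^2$ and vanishes identically since $\tau_{|\Sigma_0}=0$.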
\begin{prop}\label{usage de bianchi} The quantities $A_i$, $B_i$ and $e_0\tau$ satisfy the following coupled system : \begin{align}
e_0A_i &=\frac{N}{2}\partial_ie_0\tau+\frac{\partial_iN}{2}e_0\tau+\left(\mathbf{T} N+N\tau\right)A_i+
\partial_i\beta^j A_j ,\label{coupled system 1}\\
e_0B_i & = \frac{\partial_iN}{2}e_0\tau+N\tau A_i+\mathbf{T} NB_i+\partial_i\beta^jB_j,\label{coupled system 3}\\
e_0\left( e_0\tau\right)&=2e^{-2\gamma}N\mathrm{div}(A)+2e^{-2\gamma}\delta^{ij}\partial_iNA_j+(2N\tau+\mathbf{T} N) e_0\tau .\label{coupled system 2} \end{align} \end{prop}
\begin{proof}
The equation \eqref{coupled system 3} follows from \eqref{coupled system 1} and we omit the proof, which is a direct computation. Thanks to the previous proposition and the Bianchi identity, we have $D^{\mu}(G_{\mu\nu}-T_{\mu\nu})=0$.
We first prove \eqref{coupled system 1}. By \eqref{covariante 1} and \eqref{covariante 2}, \begin{align}
D_0(G_{0i}-T_{0i}) & = e_0(G_{0i}-T_{0i})-\mathbf{T} N(G_{0i}-T_{0i})-e^{-2\gamma}\delta^{jk}N\partial_kN(G_{ij}-T_{ij})
\nonumber\\&\quad-\frac{\partial_iN}{N}(G_{00}-T_{00})-\frac{1}{2}\left(2\delta^{j}_ie_0\gamma+\partial_i\beta^j-\delta_{ik}\delta^{j\ell}\partial_{\ell}\beta^k \right)(G_{0j}-T_{0j})
\nonumber\\ & =e_0(G_{0i}-T_{0i})-\mathbf{T} N(G_{0i}-T_{0i})-\partial_iNe_0\tau\label{DoAoi}\\&\quad-\frac{1}{2}\left(2\delta^{j}_ie_0\gamma+\partial_i\beta^j-\delta_{ik}\delta^{j\ell}\partial_{\ell}\beta^k \right)(G_{0j}-T_{0j}),\nonumber \end{align} where in the last equality we have used \eqref{G 00} and \eqref{G ij}. Similarly, by \eqref{covariante 4} and \eqref{G 00}-\eqref{G ij}, \begin{align}
g^{jk}D_j(G_{ki}-T_{ki}) = \frac{1}{2}\partial_i\mathbf{T}\tau-\frac{\tau}{N}(G_{0i}-T_{0i})-\frac{1}{2N^2}\left(2\delta_i^ke_0\gamma-\delta_{\ell}^k\partial_i\beta^{\ell}-\delta_{i\ell}\delta^{jk}\partial_j\beta^{\ell} \right)(G_{0k}-T_{0k})
\label{DjAji} \end{align} Thanks to $D^{\mu}(G_{\mu i}-T_{\mu i})=0$, we have \begin{equation*}
\text{\eqref{DoAoi}}-N^2\times\text{\eqref{DjAji}}=0, \end{equation*} which, after some straightforward simplifications, gives exactly \eqref{coupled system 1}.
We now prove \eqref{coupled system 2}. By \eqref{covariante 1} and \eqref{G 00}, \begin{equation}
D_0(G_{00}-T_{00})=\frac{N}{2}e_0(e_0\tau)-\frac{e_0 N}{2}e_0\tau-2e^{-2\gamma}\delta^{ij}N\partial_iN(G_{0j}-T_{0j}).\label{DoAoo} \end{equation} On the other hand, by \eqref{covariante 3}-\eqref{covariante 4} and \eqref{G 00}-\eqref{G ij}, \begin{equation}
g^{ij}D_i(G_{j0}-T_{j0})=e^{-2\gamma}\delta^{ij}\partial_{i}(G_{j0}-T_{j0})-e^{-2\gamma}\delta^{ij}\frac{\partial_iN}{N}(G_{j0}-T_{j0})+\tau e_0\tau.\label{DiAi0} \end{equation} Thanks to $D^{\mu}(G_{\mu 0}-T_{\mu 0})=0$, we have \begin{equation*}
-\frac{1}{N}\times\text{\eqref{DoAoo}}+N\times\text{\eqref{DiAi0}}=0, \end{equation*} which, after some straightforward simplifications, gives exactly \eqref{coupled system 2}. \end{proof}
\begin{prop}
Suppose that the solution to \eqref{EQ N}-\eqref{EQ omega}, as constructed in Section \ref{section solving the reduced system}, arises from initial data with $\tau_{|\Sigma_0}=0$ and that the constraint equations are initially satisfied. Then the solution satisfies \begin{align*}
\tau&=0,\\
G_{0i}&=T_{0i}. \end{align*} As a consequence, the solution to \eqref{EQ N}-\eqref{EQ omega} is indeed a solution to \eqref{EVE}. \end{prop}
\begin{proof} We set the following energy : \begin{equation*}
E(t)\vcentcolon=\left\| e_0\tau\right\|_{L^2}^2+\sum_{i=1,2}\left(\left\|2 e^{-\gamma}A_i\right\|_{L^2}^2+\left\| B_i\right\|_{L^2}^2 \right). \end{equation*}
We first note that $E(0)=0$ because our solution arises from initial data satisfying the constraint equations (which implies that $(G_{0i}-T_{0i})_{|\Sigma_0}=0$) and because $\tau_{|\Sigma_0}=0$. Our goal is to show that $E(t)=0$ for all $t\in[0,T]$.
We first multiply \eqref{coupled system 3} by $B_i$ and sum the two resulting equations over $i=1,2$. We integrate over $\mathbb{R}^2$ and write $e_0=\partial_t-\beta\cdot\nabla$ to obtain (after an integration by parts on the last term) : \begin{align*}
\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d} t}\sum_{i=1,2}\left\| B_i\right\|_{L^2}^2&=\int_{\mathbb{R}^2}\sum_{i=1,2}\frac{\partial_iN}{2}B_ie_0\tau+\int_{\mathbb{R}^2}\sum_{i=1,2}N\tau B_iA_i+\int_{\mathbb{R}^2}\sum_{i=1,2}\mathbf{T} NB_i^2\\&\qquad+\int_{\mathbb{R}^2}\sum_{i=1,2}\partial_i\beta^jB_iB_j-\int_{\mathbb{R}^2}\sum_{i=1,2}\frac{1}{2}\mathrm{div}(\beta)B_i^2. \end{align*} Using Corollary \ref{coro reduced sytem} and Proposition \ref{embedding}, we see that the quantities $\nabla N$, $N\tau$, $\mathbf{T} N$ and $\nabla\beta$ are bounded (for $N\tau$ and $N\partial_iN$, we use the decay property of $\tau$ and $\partial_i N$ to deal with the logarithmic growth of $N$). Using the trick $2ab\leq a^2+b^2$ and the fact that $1\lesssim e^{-\gamma}$, we get : \begin{equation}
\frac{\mathrm{d}}{\mathrm{d} t}\sum_{i=1,2}\left\| B_i\right\|_{L^2}^2\leq C E(t).\label{coupled energy 1} \end{equation}
Similarly, multiplying \eqref{coupled system 2} by $e_0\tau$ , we get : \begin{align}
\frac{\mathrm{d}}{\mathrm{d} t}\left\|e_0\tau \right\|_{L^2}^2 & = 4\int_{\mathbb{R}^2}e^{-2\gamma}N\,\mathrm{div}(A)e_0\tau+4\int_{\mathbb{R}^2}\sum_{i=1,2}e^{-2\gamma}e_0\tau\partial_iN A_i+2\int_{\mathbb{R}^2}(2N\tau+\mathbf{T} N)\left( e_0\tau\right)^2\nonumber\\&\qquad\qquad-\int_{\mathbb{R}^2}\mathrm{div}(\beta)\left( e_0\tau\right)^2
\nonumber\\&=-4\int_{\mathbb{R}^2}e^{-2\gamma}NA_i\partial_ie_0\tau+O(E(t)),\label{coupled energy 2} \end{align}
where we integrated the first term by parts and bounded the other terms just as we did for $\|B_i\|_{L^2}$, mainly using Corollary \ref{coro reduced sytem}. Now, writing $\partial_t=e_0+\beta\cdot\nabla$ and integrating by parts, we get : \begin{align*}
\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d} t}\sum_{i=1,2}\|e^{-\gamma} A_i\|_{L^2}^2&=\int_{\mathbb{R}^2}\sum_{i=1,2}e^{-2\gamma}A_ie_0A_i-\frac{1}{2}\int_{\mathbb{R}^2}\sum_{i=1,2}\mathrm{div}\left(e^{-2\gamma}\beta\right)A_i^2+\frac{1}{2}\int_{\mathbb{R}^2}\sum_{i=1,2}\partial_t(-2\Tilde{\gamma})e^{-2\gamma}A_i^2
\\& =\int_{\mathbb{R}^2}\sum_{i=1,2}e^{-2\gamma}A_ie_0A_i+O(E(t)). \end{align*} Using \eqref{coupled system 1}, we thus get : \begin{align}
\frac{\mathrm{d}}{\mathrm{d} t}\sum_{i=1,2}\|2e^{-\gamma} A_i\|_{L^2}^2= 4\int_{\mathbb{R}^2}\sum_{i=1,2}e^{-2\gamma}NA_i\partial_ie_0\tau+O(E(t)).\label{coupled energy 3} \end{align} Looking at \eqref{coupled energy 2} and \eqref{coupled energy 3}, we see that our choice of scaling in the expression of $E(t)$ implies a cancellation and we finally get, recalling \eqref{coupled energy 1} : \begin{equation*}
\frac{\mathrm{d}}{\mathrm{d} t}E(t)\leq CE(t) \end{equation*} which, using Grönwall's lemma and $E(0)=0$, implies that $E(t)=0$ for all $t\in [0,T]$, and hence the desired result. \end{proof}
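For the reader's convenience, we record the elementary differential form of Grönwall's lemma used in this last step (a standard statement, recalled here only as a convenience) : if $E$ is non-negative, absolutely continuous and satisfies \begin{equation*}
\frac{\mathrm{d}}{\mathrm{d} t}E(t)\leq CE(t)\text{ on }[0,T]\quad\text{and}\quad E(0)=0,
\end{equation*} then $E(t)\leq E(0)e^{Ct}=0$ for all $t\in[0,T]$.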
\subsection{Improved regularity}
To conclude the proof of Theorem \ref{theoreme principal}, it only remains to prove the bounds stated in this theorem. Notice that some of the estimates have already been obtained in Corollary \ref{coro reduced sytem}. This improvement of regularity comes from the fact that we now know that the solution of the reduced system is also a solution to the system \eqref{EVE}, and therefore all the metric components solve elliptic equations.
\begin{lem}\label{lemme elliptique} The metric components $N$, $\gamma$ and $\beta$ satisfy the following elliptic equations : \begin{align}
\Delta N & =e^{-2\gamma}N|H|^2+\frac{2e^{2\gamma}}{N}(e_0\varphi)^2+\frac{e^{2\gamma-4\varphi}}{2N}(e_0\omega)^2,\label{elliptic N}\\
\Delta\gamma & = -|\nabla\varphi|^2-\frac{1}{4}e^{-4\varphi}|\nabla\omega|^2-\frac{e^{2\gamma}}{N^2}(e_0\varphi)^2-\frac{e^{2\gamma-4\varphi}}{4N^2}(e_0\omega)^2-\frac{e^{-2\gamma}}{2}|H|^2,\label{elliptic gamma}\\
\Delta\beta^j & =\delta^{jk}\delta^{i\ell}(L\beta)_{ik}\left(\frac{\partial_{\ell}N}{2N}-\partial_{\ell}\gamma \right)-2\delta^{kj}e_0\varphi\partial_k\varphi-\frac{1}{2}e^{-4\varphi}\delta^{kj}e_0\omega\partial_k\omega.\label{elliptic beta} \end{align} \end{lem}
\begin{proof} Since we solved \eqref{EVE}, we have $R_{00}=T_{00}-g_{00}\mathrm{tr}_{g}T$, which, according to \eqref{appendix R00} and \eqref{JSP}, easily implies \eqref{elliptic N}.
Using \eqref{appendix R00}, \eqref{appendix R} and the fact that $\tau=0$, we get that \begin{equation*}
G_{00}=N^2e^{-2\gamma}\left(- \Delta\gamma-\frac{e^{-2\gamma}}{2}|H|^2\right). \end{equation*} Using \eqref{T 00} and the fact that $G_{00}=T_{00}$, we get \eqref{elliptic gamma}.
The equation $R_{0j}=2e_0\varphi\partial_j\varphi+\frac{1}{2}e^{-4\varphi}e_0\omega\partial_j\omega$ and the fact that $\tau=0$, together with \eqref{appendix R0j} and \eqref{appendix beta}, give \eqref{elliptic beta}.
\end{proof}
In the following proposition, we state and prove the missing estimates :
\begin{prop} Taking $\varepsilon_0$ smaller if necessary, the following estimates hold : \begin{align}
\left\|\Tilde{\gamma}\right\|_{H^4_{\delta}}+ \left\| \beta \right\|_{H^4_{\delta'}} & \leq C_h,\label{improved regularity 1}\\
\left\|\partial_t\Tilde{\gamma}\right\|_{H^3_{\delta}}+ \left\|\partial_t\Tilde{N}\right\|_{H^3_{\delta}}+\left\| \partial_t\beta \right\|_{H^3_{\delta'}} & \leq C_h,\label{improved regularity 2}\\
\left|\partial_t^2N_a\right|+\left\|\partial_t^2\Tilde{N}\right\|_{H^2_{\delta}}+\left\| \partial_t^2\beta \right\|_{H^2_{\delta'}}+\left\|\partial_t^2\Tilde{\gamma}\right\|_{H^2_{\delta}} & \leq C_h.\label{improved regularity 3}
\end{align} \end{prop}
\begin{proof} The idea is simply to apply Corollary \ref{mcowens 2} to the equations \eqref{elliptic N}-\eqref{elliptic gamma}-\eqref{elliptic beta}, after having proven, using the regularity obtained in Corollary \ref{coro reduced sytem}, that the right-hand sides of these equations belong to the appropriate spaces. For the estimates involving time derivatives, we proceed in the same way after having differentiated the equations \eqref{elliptic N}-\eqref{elliptic gamma}-\eqref{elliptic beta} once or twice. We omit the details, since the computations are straightforward (mainly because we no longer have to worry about the constants in the estimates).
\end{proof}
This concludes the proof of Theorem \ref{theoreme principal}.
\section{Proof of Theorem \ref{theo 2}}\label{section theo 2}
\subsection{Almost $H^2$ well-posedness}
At this stage, thanks to Theorem \ref{theoreme principal}, we have proved that the system \eqref{EVE} is well-posed locally in time with initial data $(\partial\varphi,\partial\omega)\in H^2$. The next step would be to consider initial data $(\partial\varphi,\partial\omega)$ which are only in $H^1$. In order to obtain well-posedness in this setting, we could regularize the initial data with a sequence $(\partial\varphi_n,\partial\omega_n)\in H^2$ to which we can apply Theorem \ref{theoreme principal}, thus obtaining a sequence of solutions to \eqref{EVE} on $[0,T_n)$. A priori, if $(\partial\varphi,\partial\omega)$ only belongs to $H^1$, the $H^2$ norm of $(\partial\varphi_n,\partial\omega_n)$ blows up as $n$ tends to $+\infty$, and therefore the sequence $(T_n)_{n\in\mathbb{N}}$ converges to $0$, preventing us from defining a limit on some non-trivial interval.
To prevent this from happening, we need to prove that the $H^2$ and $L^4$ estimates for each $(\partial\varphi_n,\partial\omega_n)$ can be propagated on some fixed interval using only their $H^1$ norms (which are bounded by the $H^1$ norm of the initial data) and the system that $\varphi$ and $\omega$ solve, i.e.\ the system \eqref{WM ffi}-\eqref{WM omega} below. As we will see in the rest of this section, it is possible to improve the $H^2$ norm in this way. Unfortunately, we cannot improve the $L^4$ estimates using only the $H^1$ norm and the system \eqref{WM ffi}-\eqref{WM omega}. Note that this difficulty already occurred in the proof of Theorem \ref{theoreme principal}, but there we bypassed it by taking advantage of the smallness of the time of existence (see Proposition \ref{prop propsmall}), something that we cannot do in this approximation procedure.
Therefore, we cannot prove local well-posedness at the $H^2$ level. Instead, we prove a blow-up criterion, meaning that the $L^4$ estimate that we cannot propagate is assumed to hold from the start. It then only remains to improve the $H^2$ estimates.
\subsection{The wave map structure}
To prove Theorem \ref{theo 2}, we argue by contradiction and assume throughout this section that the following statements both hold on $[0,T)$ : \begin{align} \left\| \partial \varphi \right\|_{H^1} + \left\| \partial\omega \right\|_{H^1}&\leq C_0,\label{propa H1}\\ \left\| \partial \varphi \right\|_{H^1} + \left\| \partial\omega \right\|_{L^4}&\leq \varepsilon_0,\label{propa L4} \end{align} for some $C_0>0$, and $\varepsilon_0>0$ defined in Theorem \ref{theoreme principal}, and where $T$ is the maximal time of existence of a solution to \eqref{EVE}. The goal is to show that we can actually bound the $H^2$ norm of $\partial\varphi$ and $\partial\omega$ on $[0,T)$, and hence up to $T$, using \eqref{propa H1}. Then, using in addition \eqref{propa L4} and applying Theorem \ref{theoreme principal}, we construct a solution of \eqref{EVE} beyond the time $T$. This would contradict the maximality of $T$, and thus prove Theorem \ref{theo 2}. \par\leavevmode\par In order to estimate the $H^2$ norm of $\partial\varphi$ and $\partial\omega$ using \eqref{propa H1}, we are going to use the wave map structure of the coupled wave equations solved by $\varphi$ and $\omega$, which we recall : \begin{align} \Box_g\varphi&=-\frac{1}{2}e^{-4\varphi}\partial^\rho\omega\partial_\rho\omega,\label{WM ffi}\\ \Box_g\omega&=4\partial^\rho\omega\partial_\rho\varphi.\label{WM omega} \end{align} We also recall the expression of the operator $\Box_g$ in the case $\tau=0$ : \begin{equation} \Box_g f= -\mathbf{T}^2f+\frac{e^{-2\gamma}}{N}\mathrm{div}(N\nabla f),\label{expression de box} \end{equation} where $f$ is any function on $\mathcal{M}$. Note the following notation for the rest of this section : $U$ stands for $\varphi$ or $\omega$, $g$ stands for any metric coefficient, meaning $N$, $\gamma$ and $\beta$.
\subsubsection{The naive energy estimate}
We want to control the $H^2$ norm of $\partial U$. As $U$ satisfies a wave equation, we could use Lemma \ref{inegalite d'energie lemme}. With our formal notation, this wave equation reads \begin{equation*} \Box_g U= g^{-1}(\partial U)^2. \end{equation*} Thus, Lemma \ref{inegalite d'energie lemme} would basically imply that \begin{align*} \left\| \partial\nabla^2 U \right\|_{L^2}^2 & \lesssim C_{high}^2 + \int_0^t \left\| \partial\nabla^2 U \nabla^2\left( g^{-1}(\partial U)^2\right) \right\|_{L^1} \\& \lesssim C_{high}^2 +\int_0^t\left\|(\partial\nabla^2 U)^2\partial U \right\|_{L^1} + \cdots, \end{align*} where the dots represent terms easily bounded by $\left\|\partial U \right\|_{H^2}^2$. The problem is that, using only \eqref{propa H1} and \eqref{propa L4}, the term $\left\|(\partial\nabla^2 U)^2\partial U \right\|_{L^1}$ cannot be bounded by $\left\|\partial U \right\|_{H^2}^2$; it necessarily requires a bound of the form $\left\|\partial U \right\|_{H^2}^{2+\eta}$ with $\eta>0$. Thus, a continuity argument aiming at proving boundedness in $H^2$ would be impossible to carry out.
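To make this obstruction concrete, here is a sketch of the best bound one can extract from \eqref{propa H1} alone, using the Gagliardo-Nirenberg inequality of Proposition \ref{GN} in the form $\left\|\partial U\right\|_{L^\infty}\lesssim\left\|\partial U\right\|_{L^2}^{\frac{1}{2}}\left\|\nabla^2\partial U\right\|_{L^2}^{\frac{1}{2}}$; it exhibits the exponent $\eta=\frac{1}{2}$ : \begin{equation*}
\left\|(\partial\nabla^2 U)^2\partial U \right\|_{L^1}\leq \left\|\partial U\right\|_{L^\infty}\left\|\partial\nabla^2 U\right\|_{L^2}^2\lesssim \left\|\partial U\right\|_{L^2}^{\frac{1}{2}}\left\|\nabla^2\partial U\right\|_{L^2}^{\frac{1}{2}}\left\|\partial\nabla^2 U\right\|_{L^2}^2\lesssim C_0^{\frac{1}{2}}\left\|\partial U\right\|_{H^2}^{\frac{5}{2}}.
\end{equation*}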
Therefore, we need to make deeper use of the structure of the coupled equations \eqref{WM ffi} and \eqref{WM omega}. This structure will allow us to define a third order energy, which has the property of avoiding $\left\|\partial U \right\|_{H^2}^{2+\eta}$ terms in the energy estimate.
\subsubsection{The third order energy}
The system \eqref{WM ffi}-\eqref{WM omega} actually has more structure than one might expect : it is a wave map system, as shown in \cite{Malone} (see also the consistency check recorded at the end of this subsection). More precisely, if we consider the map $u=(\varphi,\omega)$, then $u$ is a wave map from $([0,T)\times \mathbb{R}^2,g)$ to $(\mathbb{R}^2, h)$, with $h$ being the following metric : \begin{equation*} 2(\mathrm{d} x)^2 + \frac{1}{2}e^{-4x}(\mathrm{d} y)^2. \end{equation*} For such wave map systems, Choquet-Bruhat noted in \cite{CBwavemaps} that one can define a third order energy, which in our case is \begin{equation*} \mathscr{E}_3\vcentcolon=\mathscr{E}_3^\varphi+\mathscr{E}_3^\omega, \end{equation*} with \begin{align*}
\mathscr{E}_3^\varphi & \vcentcolon= \int_{\mathbb{R}^2} 2 \left[ \frac{1}{N^2}\left(e_0\partial_j\partial_i\varphi+\frac{1}{2}e^{-4\varphi}\partial_j\partial_i\omega e_0\omega \right)^2 +e^{-2\gamma}\left|\nabla\partial_j\partial_i\varphi+\frac{1}{2}e^{-4\varphi}\partial_j\partial_i\omega \nabla\omega \right|^2\right]\mathrm{d} x, \\
\mathscr{E}_3^\omega &\vcentcolon = \int_{\mathbb{R}^2}\frac{1}{2}e^{-4\varphi}\left[ \frac{1}{N^2}\left( e_0\partial_j\partial_i\omega -2\partial_j\partial_i\omega e_0\varphi-2\partial_j\partial_i\varphi e_0\omega \right)^2 \right.\\&\qquad\qquad\qquad\qquad\qquad \left.+e^{-2\gamma}\left|\nabla\partial_j\partial_i\omega -2\partial_j\partial_i\omega \nabla\varphi-2\partial_j\partial_i\varphi \nabla\omega \right|^2\right] \mathrm{d} x . \end{align*} Our goal is to show that we can estimate $\mathscr{E}_3$ by $\left\|\partial U \right\|_{H^2}^{2}$. We start by commuting $\Box_g$ with $\partial_i\partial_j$ to obtain :
\begin{align} \Box_g\partial_j\partial_i\varphi+e^{-4\varphi}g^{\alpha\beta}\partial_\alpha\partial_j\partial_i\omega\partial_\beta\omega & = F^\varphi_{ij}\label{WM dd ffi},\\ \Box_g\partial_j\partial_i\omega - 4g^{\alpha\beta}\partial_\alpha\partial_j\partial_i\omega\partial_\beta\varphi- 4g^{\alpha\beta}\partial_\alpha\omega\partial_\beta\partial_j\partial_i\varphi&= F^\omega_{ij},\label{WM dd omega} \end{align} where we set \begin{align} F^\varphi_{ij} & \vcentcolon= \left[\Box_g ,\partial_j\partial_i \right]\varphi-\frac{1}{2}\partial_i\partial_j\left(e^{-4\varphi}g^{\alpha\beta} \right)\partial_\alpha\omega\partial_\beta\omega\nonumber \\&\qquad\qquad\qquad-\partial_{(i}\left(e^{-4\varphi}g^{\alpha\beta} \right)\partial_\alpha\partial_{j)}\omega\partial_\beta\omega - e^{-4\varphi}g^{\alpha\beta}\partial_\alpha\partial_i\omega\partial_\beta\partial_j\omega,\label{F ffi}\\ F^\omega_{ij} & \vcentcolon=\left[\Box_g ,\partial_j\partial_i \right]\omega+ 4\partial_i\partial_jg^{\alpha\beta}\partial_\alpha\omega\partial_\beta\varphi\nonumber\\&\qquad +4\partial_{(i}g^{\alpha\beta}\partial_\alpha\omega\partial_\beta\partial_{j)}\varphi+4\partial_{(i}g^{\alpha\beta}\partial_\alpha\partial_{j)}\omega\partial_\beta\varphi+4g^{\alpha\beta}\partial_\alpha\partial_{(i}\omega\partial_\beta\partial_{j)}\varphi.\label{F omega} \end{align} We also define the following quantity : \begin{align}\label{def Reste} \mathscr{R}&\vcentcolon = \left\|\partial_t\gamma \right\|_{L^\infty}\left(\left\|\partial \nabla^2 U\right\|_{L^2}^2 + \left\|\partial U\nabla^2 U \right\|_{L^2}^2\right)\nonumber \\& \quad \; +\left(\left\|\partial \nabla^2 U\right\|_{L^2} + \left\|\partial U\nabla^2 U \right\|_{L^2}\right) \left( \left\|\nabla ^2 U (\partial U)^2 \right\|_{L^2}+\left\| (\partial U)^3\right\|_{L^2}+\left\| F^U\right\|_{L^2}\right.\\&\qquad\qquad\qquad\qquad\qquad\qquad\qquad\left.+\left\| (\partial\nabla U)^2 \right\|_{L^2} +\left\|\nabla g\nabla^3U \right\|_{L^2} + \left\|\nabla g\nabla U\nabla^2 U \right\|_{L^2} \right) \nonumber \end{align} where by $F^U$ we mean either $F^\varphi$ or $F^\omega$. For clarity, the computations for the time derivative of the energy $\mathscr{E}_3$ are done in Appendix \ref{appendix C}, where we prove the following proposition. \begin{prop}\label{dernier coro} The energy $\mathscr{E}_3$ satisfies \begin{equation*} \frac{\mathrm{d}}{\mathrm{d} t}\mathscr{E}_3=O(\mathscr{R}(t)). \end{equation*} \end{prop} This proposition shows the interest of the energy $\mathscr{E}_3$ : its time derivative does not include terms of the form $\left\|(\partial\nabla^2U)^2\partial U \right\|_{L^1}$, unlike the usual energy estimate of Lemma \ref{inegalite d'energie lemme}.
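For the reader's convenience, we also record why \eqref{WM ffi}-\eqref{WM omega} is indeed the wave map system associated with the target metric $h$ above; this is only a consistency check and is not used in the sequel. The only non-vanishing Christoffel symbols of $h=2(\mathrm{d} x)^2+\frac{1}{2}e^{-4x}(\mathrm{d} y)^2$ are \begin{equation*}
\Gamma^x_{yy}=\frac{1}{2}e^{-4x},\qquad\Gamma^y_{xy}=\Gamma^y_{yx}=-2,
\end{equation*} so that the wave map equations $\Box_g u^a+\Gamma^a_{bc}(u)\partial^\rho u^b\partial_\rho u^c=0$, evaluated at $u=(\varphi,\omega)$, read \begin{equation*}
\Box_g\varphi+\frac{1}{2}e^{-4\varphi}\partial^\rho\omega\partial_\rho\omega=0,\qquad\Box_g\omega-4\partial^\rho\omega\partial_\rho\varphi=0,
\end{equation*} which are exactly \eqref{WM ffi} and \eqref{WM omega}.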
\subsection{Continuity argument}
Before starting the continuity argument, we need to show that $\mathscr{R}$ can be bounded by $\left\| \partial U \right\|_{H^2}^2$ (Lemmas \ref{estimation F} and \ref{majorer R}) and to compare $\mathscr{E}_3$ with $\left\| \partial U \right\|_{H^2}^2$ (Lemma \ref{comparaison}). To this end, we will use the following key estimates : \begin{align} \left\| u \right\|_{L^4}&\lesssim \left\| u\right\|_{L^2}^{\frac{1}{2}}\left\| u\right\|_{H^1}^{\frac{1}{2}},\label{estimate A}\\ \left\| u\right\|_{L^\infty}&\lesssim \left\| u\right\|_{L^2}^{\frac{1}{2}}\left\| \nabla^2u\right\|_{L^2}^{\frac{1}{2}}.\label{estimate B} \end{align} Both are consequences of the Gagliardo-Nirenberg interpolation inequality, see Proposition \ref{GN}. We will use without mention the fact that $\left\|\varphi\right\|_{L^2}+ \left\| \omega\right\|_{L^2}\lesssim \left\|\varphi\right\|_{L^4}+ \left\| \omega\right\|_{L^4}\lesssim \varepsilon_0$ (since $\varphi$ and $\omega$ are compactly supported functions and because of \eqref{propa L4}), and also the fact that $\left\| g\right\|_{H^2}\lesssim \varepsilon_0$. We also need to estimate $\nabla^3g$. For this, we apply the usual elliptic estimate to the equation $\Delta g = (\nabla g)^2 + (\partial U)^2$ (this is the type of equation solved by the metric coefficients in the elliptic gauge, see Lemma \ref{lemme elliptique}). It first gives : \begin{align*} \left\| \nabla g \right\|_{W^{2,\frac{4}{3}}} & \lesssim \left\| \nabla^2 g \nabla g \right\|_{L^\frac{4}{3}} + \left\| \partial \nabla U \partial U \right\|_{L^\frac{4}{3}} \\& \lesssim \left\|\nabla g \right\|_{H^1}^2 + \left\|\partial U \right\|_{H^1}^2 \end{align*} where we used Hölder's inequality $L^2\times L^4 \xhookrightarrow{} L^\frac{4}{3}$ and the embedding $H^1\xhookrightarrow{}L^4$. The embedding $W^{2,\frac{4}{3}}\xhookrightarrow{}L^\infty$ then gives : \begin{equation} \left\| \nabla g \right\|_{L^\infty} \lesssim \varepsilon_0^2 + C_0^2. \label{L infini dg} \end{equation} The $L^2$ elliptic estimate implies : \begin{align*} \left\| \nabla g \right\|_{H^2} & \lesssim \left\| \nabla^2 g \nabla g \right\|_{L^2} + \left\| \partial \nabla U \partial U \right\|_{L^2} \\ & \lesssim \varepsilon_0 \left\| \nabla g \right\|_{H^2} + \varepsilon_0 C_0 \left\| \partial U\right\|_{H^2}^\frac{1}{2} \end{align*} where we used $\left\| g \right\|_{H^2}\lesssim \varepsilon_0$, Hölder's inequality and \eqref{estimate A}. Taking $\varepsilon_0$ small enough, this gives : \begin{equation} \left\| \nabla g \right\|_{H^2}\lesssim C(C_0)\left\|\partial U \right\|_{H^2}^\frac{1}{2}. \label{H 2 dg} \end{equation} In the sequel, we will commute $\partial$ and $\nabla$ without further mention, since $[e_0,\nabla]=\nabla\beta \nabla$ and $\nabla\beta$ can be bounded using \eqref{L infini dg}.
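For the reader's convenience, here is one admissible choice of parameters in Proposition \ref{GN} (with the dimension equal to $2$) giving \eqref{estimate A} and \eqref{estimate B}; this is only a routine verification : \begin{align*}
&\text{for \eqref{estimate A} :}\quad j=0,\ m=1,\ q=r=2,\ \alpha=\frac{1}{2},\qquad \frac{1}{p}=\left(\frac{1}{2}-\frac{1}{2}\right)\frac{1}{2}+\frac{1}{2}\cdot\frac{1}{2}=\frac{1}{4},\\
&\text{for \eqref{estimate B} :}\quad j=0,\ m=2,\ q=r=2,\ \alpha=\frac{1}{2},\qquad \frac{1}{p}=\left(\frac{1}{2}-1\right)\frac{1}{2}+\frac{1}{2}\cdot\frac{1}{2}=0,
\end{align*} together with the bound $\left\|\nabla u\right\|_{L^2}\leq\left\| u\right\|_{H^1}$ in the first case.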
\subsubsection{The energy $\mathscr{R}$}
We start with the estimates for $F^U$ :
\begin{lem}\label{estimation F} There exists $C(C_0)>0$ such that \begin{align*} \left\| F^\varphi_{ij}\right\|_{L^2}+\left\| F^\omega_{ij}\right\|_{L^2}& \lesssim C(C_0)\left\|\partial U \right\|_{H^2}. \end{align*} \end{lem}
\begin{proof} The expressions of $F^\varphi_{ij}$ and $F^\omega_{ij}$ are given by \eqref{F ffi} and \eqref{F omega}. We start by estimating the commutator $[\Box_g,\nabla^2]U$. Looking at the expression \eqref{expression de box}, we start with the spatial part of $\Box_g$ : \begin{align*} \left\| \left[ \nabla^2, g\nabla (g\nabla\cdot) \right] U \right\|_{L^2} & \lesssim \left\| g\nabla^3 g \nabla U \right\|_{L^2} + \left\| \nabla g \nabla^2 g \nabla U \right\|_{L^2} + \left\| g\nabla^2 g \nabla^2 U \right\|_{L^2} + \left\| (\nabla g)^2 \nabla^2 U \right\|_{L^2} \\&\quad + \left\| g\nabla g \nabla^3 U \right\|_{L^2}. \end{align*} For the last two terms, we simply bound $\nabla g$ using \eqref{L infini dg} and put $\partial U$ in $H^2$ : \begin{align*} \left\| (\nabla g)^2 \nabla^2 U \right\|_{L^2} + \left\| g\nabla g \nabla^3 U \right\|_{L^2} & \lesssim C(C_0) \left( \left\| \nabla^2 U \right\|_{L^2} + \left\| \nabla^3 U \right\|_{L^2} \right). \end{align*} We do the same for the second term, using in addition $\left\|\nabla^2 g \right\|_{L^2}\lesssim 1$ and the embedding $H^2\xhookrightarrow{}L^\infty$ : \begin{align*} \left\| \nabla g \nabla^2 g \nabla U \right\|_{L^2} & \lesssim C(C_0) \left\| \nabla U \right\|_{L^\infty}. \end{align*} We deal with the third term using first Hölder's inequality, the embedding $H^1\xhookrightarrow{}L^4$, \eqref{H 2 dg} and \eqref{estimate A} : \begin{align*} \left\| g\nabla^2 g \nabla^2 U \right\|_{L^2} & \lesssim \varepsilon_0 \left\| \nabla^2g \right\|_{H^1} \left\| \nabla^2U \right\|_{L^4} \\& \lesssim C(C_0) \left\| \partial U \right\|_{H^2}^{\frac{1}{2}} \left\| \nabla^2 U \right\|_{H^1}^{\frac{1}{2}}. \end{align*} For the first term, we put $\nabla^3g$ in $L^2$ and $\nabla U$ in $L^\infty$, and then use \eqref{H 2 dg} and \eqref{estimate B} : \begin{align*} \left\| g\nabla^3 g \nabla U \right\|_{L^2} & \lesssim C(C_0) \left\| \partial U \right\|_{H^2}^{\frac{1}{2}} \left\| \nabla U \right\|_{H^2}^{\frac{1}{2}}. \end{align*} Summarizing everything, we obtain : \begin{equation} \left\| \left[ \nabla^2, g\nabla (g\nabla\cdot) \right] U \right\|_{L^2} \lesssim C(C_0) \left\|\partial U \right\|_{H^2}.\label{spatial} \end{equation} We now estimate the contribution of $\mathbf{T}^2$ to the commutator. We have : \begin{align*} \left\| \left[ \nabla^2, \mathbf{T}^2 \right] U \right\|_{L^2} & \lesssim \left\| \nabla^2 g \partial_t^2 U \right\|_{L^2} + \left\| \nabla g \nabla \partial_t^2 U \right\|_{L^2} + \left\|\nabla^2\partial_t g\partial U \right\|_{L^2} +\left\|\nabla^2g \nabla \partial U \right\|_{L^2} + \left\|\nabla g \nabla^2\partial U \right\|_{L^2} \end{align*} The last two terms have already been estimated during the proof of \eqref{spatial}. For the first two terms, we use the equation $\Box_g U= (\partial U)^2$ satisfied by $U$ to express $\partial_t^2 U$. It shows that \begin{align}
|\partial_t^2 U| & \lesssim \left( 1 + |\nabla g| \right) |\partial U |^2 + |g\nabla^2U|
\lesssim C(C_0) \left( |\partial U |^2 + |\nabla^2U|\right)\label{dt2} \end{align} where we also used \eqref{L infini dg}. We can put $\nabla^2g$ in $L^2$ and $\partial U$ in $L^\infty$ using \eqref{estimate B} (note that the second term has already been estimated) : \begin{align*} \left\| \nabla^2 g \partial_t^2 U \right\|_{L^2} & \lesssim \left\|\nabla^2g (\partial U)^2\right\|_{L^2} + \left\| \nabla^2g \nabla^2 U\right\|_{L^2} \lesssim C(C_0) \left\| \partial U \right\|_{H^2} \end{align*} The equation $\Box_g U= (\partial U)^2$ also gives us \begin{equation*}
|\nabla \partial_t^2 U|\lesssim |\nabla \partial U \partial U | + |\nabla^2 g \nabla U| + |\nabla^3 U| + |\nabla g \nabla^2 U|. \end{equation*} Because of \eqref{L infini dg}, $\left\| \nabla g \nabla \partial_t^2 U \right\|_{L^2} \lesssim C(C_0)\left\| \nabla \partial_t^2 U \right\|_{L^2} $ and the previous estimate therefore shows that all the terms in $\left\| \nabla g \nabla \partial_t^2 U \right\|_{L^2} $ have already been estimated. This gives : \begin{equation*} \left\| \nabla g \nabla \partial_t^2 U \right\|_{L^2} \lesssim C(C_0) \left\| \partial U \right\|_{H^2}. \end{equation*} It remains to deal with the term involving $\partial_t g$. This quantity satisfies the following equation : \begin{equation*} \Delta \partial_t g = \nabla \partial_t g \nabla g + \partial_t^2 U \nabla U + \partial_t U \nabla \partial_t U. \end{equation*} The usual elliptic estimate gives us \begin{align*} \left\|\partial_t g \right\|_{H^2} & \lesssim \left\| \nabla \partial_t g \nabla g \right\|_{L^2} + \left\|\partial_t^2 U \nabla U \right\|_{L^2} + \left\|\partial_t U \nabla \partial_t U \right\|_{L^2} \\& \lesssim \varepsilon_0 \left\| \nabla \partial_t g \right\|_{H^1} + C(C_0) \left\| \partial U \right\|_{H^2}^\frac{1}{2} \end{align*} where we used \eqref{estimate A} and \eqref{estimate B}. Taking $\varepsilon_0$ small enough, this shows that $\left\|\partial_t g \right\|_{H^2}\lesssim C(C_0) \left\| \partial U \right\|_{H^2}^\frac{1}{2}$. With this, we estimate the remaining term in the commutator $\left[ \nabla^2,\mathbf{T}^2\right]$ using in addition \eqref{estimate B} : \begin{align*} \left\|\nabla^2\partial_t g\partial U \right\|_{L^2} & \lesssim \left\| \nabla^2\partial_t g \right\|_{L^2} \left\| \partial U \right\|_{L^\infty} \lesssim C(C_0) \left\| \partial U \right\|_{H^2} \end{align*} Thus, we obtain : \begin{equation} \left\| \left[ \nabla^2, \mathbf{T}^2 \right] U \right\|_{L^2} \lesssim C(C_0) \left\| \partial U \right\|_{H^2}.\label{time} \end{equation} Putting \eqref{spatial} and \eqref{time} together, we finally obtain : \begin{equation*} \left\| \left[ \Box_g, \nabla^2 \right] U \right\|_{L^2} \lesssim C(C_0) \left\| \partial U \right\|_{H^2}. \end{equation*} This proves the lemma, since all the remaining terms in $F^U$ have already been estimated in the proof of \eqref{spatial} and \eqref{time}. \end{proof}
We now estimate $\mathscr{R}$ :
\begin{lem}\label{majorer R} There exists $C'(C_0)>0$ such that \begin{equation*} \mathscr{R}\leq C'(C_0) \left\| \partial U \right\|_{H^2}^2. \end{equation*} \end{lem}
\begin{proof} First, note that the previous lemma handles the term $\left\| F^U \right\|_{L^2}$. Most of the remaining terms in $\mathscr{R}$ can simply be estimated using Hölder's inequality, \eqref{estimate A} and \eqref{estimate B} : \begin{align*} \left\|\partial U \nabla^2 U \right\|_{L^2} & \lesssim C_0 \left\|\partial U \right\|_{H^2}, \\ \left\| \nabla^2U (\partial U)^2\right\|_{L^2} & \lesssim \varepsilon_0 C_0 \left\|\partial U \right\|_{H^2}, \\ \left\| (\partial U)^3\right\|_{L^2} & \lesssim \varepsilon_0^2 \left\| \partial U \right\|_{H^2}, \\ \left\| (\partial\nabla U)^2 \right\|_{L^2} & \lesssim C_0 \left\| \partial U \right\|_{H^2}, \\ \left\| \nabla g \nabla U \nabla^2 U \right\|_{L^2} & \lesssim \varepsilon_0C(C_0)\left\|\partial U \right\|_{H^2} . \end{align*} Let us give the details only for the last one. We bound $\nabla g$ with \eqref{L infini dg}, and then use Hölder's inequality $L^4 \times L^4\xhookrightarrow{} L^2$ and the embedding $H^1\xhookrightarrow{} L^4$ : \begin{align*} \left\| \nabla g \nabla U \nabla^2 U \right\|_{L^2} & \lesssim \left\| \nabla g \right\|_{L^\infty} \left\| \nabla U \right\|_{L^4} \left\| \nabla^2 U \right\|_{L^4}\lesssim \varepsilon_0C(C_0)\left\|\partial U \right\|_{H^2} . \end{align*}
Because of \eqref{appendix tau} and the gauge condition $\tau=0$, we have $|\partial_t\gamma|\lesssim |\nabla g|$, so we can estimate $\left\| \partial_t\gamma \right\|_{L^\infty}$ with \eqref{L infini dg}. Likewise, using \eqref{L infini dg} we estimate the very last term appearing in $\mathscr{R}$ : \begin{align*} \left\| \nabla g \nabla^3 U \right\|_{L^2} \lesssim (\varepsilon_0^2+C_0^2) \left\| \partial U \right\|_{H^2}. \end{align*} \end{proof}
In the next lemma, we compare $\mathscr{E}_3$ with the $H^2$ norm of $\partial U$. We omit the proof since all the terms involved have already been estimated in the two previous lemmas. \begin{lem}\label{comparaison} There exists $K(C_0)>0$ such that \begin{align*} \mathscr{E}_3& \leq K(C_0) \left\| \partial U \right\|_{H^2}^2,\\ \left\| \nabla^2\partial U \right\|_{L^2}^2& \leq K(C_0)\mathscr{E}_3 +\varepsilon_0^2K(C_0) \left\|\partial U \right\|_{H^2}^2. \end{align*}
\end{lem}
\subsubsection{Conclusion} Putting everything together, we can now complete the continuity argument by propagating the $H^2$ regularity. We consider the following bootstrap assumption : \begin{equation} \left\| \partial U \right\|_{H^2}(t) \leq C_1\exp(C_1t),\label{bootstrap H2} \end{equation} with $C_1>0$ to be chosen later. Let $T_0<T$ be the maximal time such that \eqref{bootstrap H2} holds for all $0\leq t\leq T_0$. Note that if $C_1$ is large enough we have $T_0>0$, since $\partial\varphi$ and $\partial\omega$ are initially in $H^2$.
\begin{prop} If $\varepsilon_0$ is small enough (still independent of $C_{high}$) and $C_1$ is large enough, the following holds on $[0,T_0]$ : \begin{equation} \left\| \partial U \right\|_{H^2}(t) \leq \frac{1}{2}C_1\exp(C_1t). \end{equation} \end{prop}
\begin{proof} The $H^1$ norm of $\partial U$ is already controlled, so it suffices to prove the bound stated in the proposition for $\left\| \nabla^2\partial U \right\|_{L^2}$. For this, we use Proposition \ref{dernier coro}, which implies that for $t\in[0,T_0]$ (we also use Lemma \ref{comparaison} and \eqref{bootstrap H2}) : \begin{align*} \left\| \nabla^2\partial U \right\|_{L^2}^2(t) & \leq K^2\left\|\partial U \right\|_{H^2}^2(0)+\varepsilon_0^2K \left\|\partial U \right\|_{H^2}^2(t)+CK\int_0^t\mathscr{R}(s)\mathrm{d} s \\&\leq K^2C_{high}^2+\varepsilon_0^2K C_1^2\exp(2C_1t)+CK\int_0^t\mathscr{R}(s)\mathrm{d} s, \end{align*} for some $C>0$ given by Proposition \ref{dernier coro}. We now use Lemma \ref{majorer R} : \begin{align*} \left\| \nabla^2\partial U \right\|_{L^2}^2(t) & \leq K^2C_{high}^2+\varepsilon_0^2K C_1^2\exp(2C_1t)+CKC'(C_0)\int_0^t\left\|\partial U \right\|_{H^2}^2(s)\mathrm{d} s \\& \leq K^2C_{high}^2+\varepsilon_0^2K C_1^2\exp(2C_1t) + \frac{1}{2}CKC'(C_0)C_1\exp(2C_1t). \end{align*} We now choose $C_1\geq \max\left(3CKC'(C_0),\sqrt{6}KC_{high}\right)$ and $\varepsilon_0\leq \frac{1}{\sqrt{6K}}$, so that each term of the previous inequality is bounded by $\frac{1}{6}C_1^2\exp(2C_1t)$. This concludes the proof.
\end{proof}
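Explicitly, the choice of constants made in the previous proof bounds each of the three terms by $\frac{1}{6}C_1^2\exp(2C_1t)$, as claimed; this is only a routine check : \begin{align*}
K^2C_{high}^2&\leq\frac{1}{6}C_1^2\exp(2C_1t)\quad\text{since } C_1\geq\sqrt{6}KC_{high}\text{ and }\exp(2C_1t)\geq 1,\\
\varepsilon_0^2K C_1^2\exp(2C_1t)&\leq\frac{1}{6}C_1^2\exp(2C_1t)\quad\text{since }\varepsilon_0\leq\frac{1}{\sqrt{6K}},\\
\frac{1}{2}CKC'(C_0)C_1\exp(2C_1t)&\leq\frac{1}{6}C_1^2\exp(2C_1t)\quad\text{since }C_1\geq 3CKC'(C_0).
\end{align*}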
By continuity of the quantities involved, the previous proposition contradicts the maximality of $T_0$, and thus proves that $T_0=T$. As explained at the beginning of Section \ref{section theo 2}, this concludes the proof of Theorem \ref{theo 2}.
\appendix
\section{Computations in the elliptic gauge}\label{appendix A}
In this section, we collect some computations for the spacetime metric in the elliptic gauge defined in Section \ref{section geometrie}. See also \cite{hunluk18}.
\subsection{Connection coefficients}\label{connection coefficients} The 2+1 metric $g$ has the form \begin{equation*}
g=-N^2\mathrm{d} t^2+\Bar{g}_{ij}\left(\mathrm{d} x^i+\beta^i\mathrm{d} t\right)\left(\mathrm{d} x^j+\beta^j\mathrm{d} t\right), \end{equation*} with $\Bar{g}=e^{2\gamma}\delta$. In the basis $(e_0,\partial_i)$, we have $g_{00}=-N^2$, $g_{0i}=0$ and $g_{ij}=e^{2\gamma}\delta_{ij}$, which gives $\det(g)=-e^{4\gamma}N^2$.
In the basis $(\partial_t,\partial_i)$ we have : \begin{equation}\label{inverse de g}
g^{-1}=\frac{1}{N^2}
\begin{pmatrix}
-1 & \beta^1 & \beta^2 \\
\beta^1 & N^2e^{-2\gamma}-\left(\beta^1\right)^2 & -\beta^1\beta^2 \\
\beta^2 & -\beta^1\beta^2 & N^2e^{-2\gamma}-\left(\beta^2\right)^2
\end{pmatrix} \end{equation} This allows us to compute $\Box_gh$ for $h$ a function on $\mathcal{M}$ : \begin{prop}\label{appendix box} If $h$ is a function on $\mathcal{M}$, we have \begin{align*}
\Box_gh & =-\mathbf{T}^2h+\frac{e^{-2\gamma}}{N}\mathrm{div}(N\nabla h)+\tau\mathbf{T} h
\\&=-\mathbf{T}^2h+e^{-2\gamma}\Delta h+\frac{e^{-2\gamma}}{N}\nabla h\cdot\nabla N+\tau\mathbf{T} h. \end{align*} \end{prop}
\begin{proof} By definition of $\Box_g$, we have : \begin{align*}
\Box_g h & = \frac{1}{\sqrt{|\det(g)|}}\partial_\beta\left(g^{\beta\alpha}\sqrt{|\det(g)|}\partial_\alpha h \right) \\& = \frac{e^{-2\gamma}}{N}\partial_t \left(\frac{e^{2\gamma}}{N} \left( -\partial_t h +\beta^1\partial_1h+\beta^2\partial_2h \right) \right) \\& \quad + \frac{e^{-2\gamma}}{N} \partial_1 \left( \frac{e^{2\gamma}}{N} \left( \beta^1\partial_t h+\left(N^2e^{-2\gamma}-\left(\beta^1\right)^2\right) \partial_1 h -\beta^1\beta^2\partial_2h \right) \right) \\& \quad + \frac{e^{-2\gamma}}{N} \partial_2 \left( \frac{e^{2\gamma}}{N} \left( \beta^2\partial_t h -\beta^1\beta^2\partial_1h+\left(N^2e^{-2\gamma}-\left(\beta^2\right)^2\right) \partial_2 h \right) \right), \end{align*} where we used the expression of $g^{-1}$ in the basis $(\partial_t,\partial_i)$ (see expression \eqref{inverse de g}). By rearranging the terms, we get : \begin{align*} \Box_g h & =- \frac{1}{N}\partial_t\mathbf{T} h -\frac{2\partial_t\gamma}{N}\mathbf{T} h + \frac{e^{-2\gamma}}{N}\mathrm{div}\left( e^{2\gamma}\mathbf{T} h\beta + N \nabla h \right) \\& = -\mathbf{T}^2h+\frac{e^{-2\gamma}}{N}\mathrm{div}(N\nabla h)+\left(-2\mathbf{T}\gamma+\frac{\mathrm{div}(\beta)}{N} \right)\mathbf{T} h. \end{align*} This proves the proposition, by looking at \eqref{appendix tau}. \end{proof}
We now compute the connection coefficients for the metric \eqref{metrique elliptique} in the basis $(e_0,\partial_i)$. Notice that $[e_0,\partial_i]=\partial_i\beta^j\partial_j$. Using this, we compute : \begin{align*}
g(D_0e_0,e_0)&=\frac{1}{2}e_0g_{00}=-\frac{1}{2}e_0(N^2)=-Ne_0N,\\
g(D_ie_0,e_0)&=\frac{1}{2}\partial_ig_{00}=-\frac{1}{2}\partial_i(N^2)=-N\partial_iN,\\
g(D_0e_0,\partial_i)&=-\frac{1}{2}\partial_ig_{00}-g(e_0,[e_0,\partial_i])=N\partial_iN,\\
g(D_ie_0,\partial_j)&=\frac{1}{2}\left(e_0g_{ij}-g(\partial_i,[e_0,\partial_j])-g(\partial_j,[e_0,\partial_i])\right)=\frac{e^{2\gamma}}{2}\left(2e_0\gamma\delta_{ij}-\partial_i\beta^k\delta_{jk}-\partial_j\beta^k\delta_{ik}\right), \\
g(D_0\partial_i,e_0)&=\frac{1}{2}\partial_ig_{00}+g(e_0,[e_0,\partial_i])=-N\partial_iN,\\
g(D_0\partial_i,\partial_j)&=\frac{1}{2}\left(e_0g_{ij}+g(\partial_i,[\partial_j,e_0])+g(\partial_j,[e_0,\partial_i])\right)=\frac{e^{2\gamma}}{2}\left(2e_0\gamma\delta_{ij}+\partial_i\beta^k\delta_{jk}-\partial_j\beta^k\delta_{ik}\right),\\
g(D_i\partial_j,e_0)&=\frac{1}{2}\left(-e_0g_{ij}-g(\partial_i,[\partial_j,e_0])+g(\partial_j,[e_0,\partial_i])\right)=-\frac{e^{2\gamma}}{2}\left(2e_0\gamma\delta_{ij}-\partial_i\beta^k\delta_{jk}-\partial_j\beta^k\delta_{ik}\right),\\
g(D_i\partial_j,\partial_k)&=e^{2\gamma}\left(\delta_{ik}\partial_j\gamma+\delta_{jk}\partial_i\gamma-\delta_{ij}\delta^{\ell}_k\partial_{\ell}\gamma \right).\\ \end{align*} The first two expressions are derived using $X(g(Y,Z))=g(D_XY,Z)+g(Y,D_XZ)$ and the other ones with the Koszul formula : \begin{equation*}
2g(D_VW,X)=Vg(W,X)+Wg(X,V)-Xg(V,W)-g(V,[W,X])+g(W,[X,V])+g(X,[V,W]). \end{equation*} From the above calculations, we obtain \begin{align}
D_0e_0 &=\mathbf{T} N e_0 +e^{-2\gamma}\delta^{ij}N\partial_iN\partial_j , \label{covariante 1}\\
D_0\partial_i &=\partial_iN\mathbf{T}+\frac{1}{2}\left(2\delta^{j}_ie_0\gamma+\partial_i\beta^j-\delta_{ik}\delta^{j\ell}\partial_{\ell}\beta^k \right)\partial_j ,\label{covariante 2}\\
D_ie_0 &=\partial_iN\mathbf{T}+\frac{1}{2}\left(2\delta^{j}_ie_0\gamma-\partial_i\beta^j-\delta_{ik}\delta^{j\ell}\partial_{\ell}\beta^k \right)\partial_j ,\label{covariante 3}\\
D_i\partial_j &=\frac{e^{2\gamma}}{2N}\left(2\delta_{ij}e_0\gamma-\left(\partial_i\beta^k \right)\delta_{jk}-\left(\partial_j\beta^k \right)\delta_{ik} \right)\mathbf{T}+\left(\delta^k_i\partial_j\gamma+\delta^k_j\partial_i\gamma-\delta_{ij}\delta^{k\ell}\partial_{\ell}\gamma\right)\partial_k.\label{covariante 4} \end{align}
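For instance, \eqref{covariante 1} is obtained from the scalar products above as follows; this is only a sample computation, the formulas \eqref{covariante 2}-\eqref{covariante 4} are derived in the same way. Writing $D_0e_0=a\,e_0+b^j\partial_j$ and using $g_{00}=-N^2$, $g_{ij}=e^{2\gamma}\delta_{ij}$ and $\mathbf{T}=N^{-1}e_0$, we get \begin{equation*}
a=-\frac{g(D_0e_0,e_0)}{N^2}=\frac{Ne_0N}{N^2}=\mathbf{T} N,\qquad b^j=e^{-2\gamma}\delta^{ij}g(D_0e_0,\partial_i)=e^{-2\gamma}\delta^{ij}N\partial_iN.
\end{equation*}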
\subsection{Decomposition of the Ricci tensor}\label{ricci tensor}
\begin{prop} Given $g$ of the form \eqref{metrique elliptique}, we have the following identities : \begin{align}
K_{ij}&=-\frac{\delta_{ij}}{2}\mathbf{T}\left( e^{2\gamma}\right)+\frac{e^{2\gamma}}{2N}\left(\partial_i\beta_j+\partial_j\beta_i\right),\label{seconde forme fonda}\\
H_{ij}&=\frac{e^{2\gamma}}{2N}(L\beta)_{ij},\label{appendix beta}\\
\tau &=-2\mathbf{T}\gamma+\frac{\mathrm{div}\left(\beta\right)}{N}.\label{appendix tau} \end{align} \end{prop}
\begin{proof} The equation \eqref{seconde forme fonda} follows from \eqref{Kij}, and \eqref{appendix beta} and \eqref{appendix tau} follow from \eqref{seconde forme fonda}. \end{proof}
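For instance, \eqref{appendix tau} follows from \eqref{seconde forme fonda} by taking the trace with respect to $\Bar{g}$, recalling that $\tau$ denotes the mean curvature $\mathrm{tr}_{\Bar{g}}K$ (this is only a sample computation) : \begin{equation*}
\tau=\Bar{g}^{ij}K_{ij}=e^{-2\gamma}\delta^{ij}K_{ij}=-e^{-2\gamma}\mathbf{T}\left(e^{2\gamma}\right)+\frac{\mathrm{div}(\beta)}{N}=-2\mathbf{T}\gamma+\frac{\mathrm{div}(\beta)}{N}.
\end{equation*}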
\begin{prop} Given $g$ of the form \eqref{metrique elliptique}, the components of the Ricci tensor in the basis $(e_0,\partial_i)$ are given by \begin{align}
R_{ij} &=\delta_{ij}\left(-\Delta\gamma+\frac{\tau^2}{2}e^{2\gamma}-\frac{e^{2\gamma}}{2}\mathbf{T}\tau-\frac{\Delta N}{2N}\right) -\mathbf{T} H_{ij}-2e^{-2\gamma}H_i^{\;\ell}H_{j\ell}\label{appendix Rij}\\& \quad+\frac{1}{N}\left( \partial_j\beta^kH_{ki}+\partial_i\beta^kH_{kj}\right)-\frac{1}{N}\left( \partial_i\partial_jN-\frac{1}{2}\delta_{ij}\Delta N-\left( \delta_i^k\partial_j\gamma+\delta_j^k\partial_i\gamma-\delta_{ij}\delta^{\ell k}\partial_{\ell}\gamma \right)\partial_k N \right)\nonumber,\\
R_{0j} &= N\left(\frac{1}{2}\partial_j\tau-e^{-2\gamma}\partial^iH_{ij} \right),\label{appendix R0j}\\
R_{00} &= N\left(e_0\tau-e^{-4\gamma}N\left|H\right|^2-\frac{N\tau^2}{2}+e^{-2\gamma}\Delta N \right)\label{appendix R00}. \end{align} Moreover, \begin{align}
\delta^{ij}R_{ij}&= 2\left(-\Delta\gamma+\frac{\tau^2}{2}e^{2\gamma}-\frac{e^{2\gamma}}{2}\mathbf{T}\tau-\frac{\Delta N}{2N}\right),\label{appendix trace ricci}\\
R&=-2\mathbf{T}\tau +\frac{3}{2}\tau^2 + e^{-4\gamma}\left|H\right|^2-2e^{-2\gamma}\frac{\Delta N}{N} - 2e^{-2\gamma}\Delta\gamma.\label{appendix R} \end{align} \end{prop}
\begin{proof} From Chapter 6 of \cite{cho09}, we have \begin{align}
R_{ij} & = \Bar{R}_{ij}+K_{ij}\mathrm{tr}_{\Bar{g}}K-2K_i^{\;\ell}K_{j\ell}- N^{-1}\left( \mathcal{L}_{e_0}K_{ij}+D_i\partial_jN\right),\label{Rij CB}\\
R_{0j} & = N\left( \partial_j( \mathrm{tr}_{\Bar{g}}K)-D_{\ell}K^{\ell}_{\;j}\right) ,\label{R0j CB}\\
R_{00} & = N\left( e_0(\mathrm{tr}_{\Bar{g}}K)-N|K|^2+\Delta_{\Bar{g}} N \right),\label{R00 CB} \end{align} where $\Bar{D}$, $\Bar{R}_{ij}$ and $\Delta_{\Bar{g}}$ are defined with respect to $\Bar{g}$. First, by \eqref{def H} and the connection coefficients computations, \eqref{Rij CB} becomes \begin{align}
R_{ij} & =-\delta_{ij}\Delta\gamma+\tau\left( H_{ij}+\frac{1}{2}e^{2\gamma}\delta_{ij}\tau\right)-2e^{-2\gamma}\left( H_i^{\;\ell} +\frac{1}{2}e^{2\gamma}\delta_i^{\ell}\tau\right) \left( H_{j\ell} +\frac{1}{2}e^{2\gamma}\delta_{j\ell}\tau\right) \label{Rij inter}\\&\quad -\frac{1}{N} \left( \mathcal{L}_{e_0}K_{ij}+\partial_i\partial_jN- \left( \delta_i^k\partial_j\gamma+\delta_j^k\partial_i\gamma-\delta_{ij}\delta^{k\ell}\partial_{\ell}\gamma\right) \partial_kN \right). \nonumber \end{align} To proceed, we compute $\mathcal{L}_{e_0}K_{ij}$ by considering $H_{ij}$ and $\tau$ : \begin{align*}
\mathcal{L}_{e_0}H_{ij}&=e_0H_{ij}-\partial_j\beta^kH_{ki}-\partial_i\beta^kH_{kj},\\
\mathcal{L}_{e_0}(\tau\Bar{g}_{ij})&=e^{2\gamma}\delta_{ij}e_0\tau-2N\tau K_{ij}. \end{align*} Therefore, using \eqref{def H} and plugging $\mathcal{L}_{e_0}K_{ij}$ into \eqref{Rij inter}, we obtain \eqref{appendix Rij}.
The expression of $R_{0j}$ in \eqref{appendix R0j} follows from \eqref{R0j CB} and the fact that for any covariant symmetric 2-tensor $A_{ij}$, \begin{equation*}
\Bar{g}^{ik}\Bar{D}_kA_{ij}=e^{-2\gamma}\partial^iA_{ij}-\partial_j\gamma\mathrm{tr}_{\Bar{g}}A. \end{equation*}
Using \eqref{R00 CB} and the conformal invariance of the Laplacian we easily get \eqref{appendix R00}.
To prove \eqref{appendix trace ricci}, we first note that \begin{equation*}
\delta^{ij}(\partial_j\beta^k H_{ki}+\partial_i\beta^k H_{kj})=H_{ij}(L\beta)^{ij}. \end{equation*} Combining this with \eqref{appendix beta}, we obtain \begin{equation*}
\delta^{ij}\left( -2e^{-2\gamma}H_i^{\;\ell}H_{j\ell}+\frac{1}{N}(\partial_j\beta^k H_{ki}+\partial_i\beta^k H_{kj}) \right)=0. \end{equation*} Taking the trace of \eqref{appendix Rij} and using this identity yield \eqref{appendix trace ricci}.
Finally, by combining \eqref{metrique elliptique}, \eqref{appendix R00} and \eqref{appendix trace ricci}, we easily get \eqref{appendix R}. \end{proof}
\subsection{The stress-energy-momentum tensor}\label{T mu nu}
Define $T_{\mu\nu}$ by \begin{equation*}
T_{\mu\nu}=2\partial_{\mu}\varphi\partial_{\nu}\varphi-g_{\mu\nu}g^{\alpha\beta}\partial_{\alpha}\varphi\partial_{\beta}\varphi+\frac{1}{4}e^{-4\varphi}\left( 2\partial_{\mu}\omega\partial_{\nu}\omega-g_{\mu\nu}g^{\alpha\beta}\partial_{\alpha}\omega\partial_{\beta}\omega\right). \end{equation*}
\begin{prop} The following identities are satisfied (with respect to the $(e_0,\partial_i)$ basis) : \begin{align}
T_{00}&= (e_0\varphi)^2+e^{-2\gamma}N^2|\nabla\varphi|^2+\frac{1}{4}e^{-4\varphi}\left( (e_0\omega)^2+e^{-2\gamma}N^2|\nabla\omega|^2\right),\label{T 00}\\
T_{0j}&= 2e_0\varphi\partial_j\varphi+\frac{1}{2}e^{-4\varphi}e_0\omega\partial_j\omega,\label{T 0j}\\
T_{ij}&= 2\partial_i\varphi\partial_j\varphi+\frac{e^{2\gamma}}{N^2}(e_0\varphi)^2\delta_{ij}-|\nabla\varphi|^2\delta_{ij}\label{T ij}\\&\qquad+\frac{1}{4}e^{-4\varphi}\left( 2\partial_i\omega\partial_j\omega+\frac{e^{2\gamma}}{N^2}(e_0\omega)^2\delta_{ij}-|\nabla\omega|^2\delta_{ij}\right),\nonumber\\
\mathrm{tr}_gT &= -g^{\alpha\beta}\partial_{\alpha}\varphi\partial_{\beta}\varphi-\frac{1}{4}e^{-4\varphi}g^{\alpha\beta}\partial_{\alpha}\omega\partial_{\beta}\omega,\\
T_{00}-g_{00}\mathrm{tr}_gT &= 2\left(e_0\varphi\right)^2+\frac{1}{2}e^{-4\varphi}(e_0\omega)^2,\label{JSP}\\
T_{ij}-g_{ij}\mathrm{tr}_gT &= 2\partial_{i}\varphi\partial_{j}\varphi+\frac{1}{2}e^{-4\varphi}\partial_{i}\omega\partial_{j}\omega,\label{JSP 2}\\
\delta^{ij}\left( T_{ij}-g_{ij}\mathrm{tr}_gT\right) & =2\left|\nabla\varphi \right|^2+\frac{1}{2}e^{-4\varphi}\left|\nabla\omega \right|^2,\label{JSP 3}\\
D^{\mu}T_{\mu\nu}&=2(\Box_g\varphi)\partial_{\nu}\varphi+\frac{1}{2}e^{-4\varphi}(\Box_g\omega)\partial_{\nu}\omega \label{divergence de T}\\&\qquad-e^{-4\varphi}\partial^\mu\varphi \left( 2\partial_{\mu}\omega\partial_{\nu}\omega-g_{\mu\nu}g^{\alpha\beta}\partial_{\alpha}\omega\partial_{\beta}\omega\right) .\nonumber \end{align} \end{prop}
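As an illustration of how these identities are obtained (the remaining ones are similar direct computations), \eqref{JSP} follows from \eqref{T 00}, the relation $g_{00}=-N^2$ and the expression $g^{\alpha\beta}\partial_{\alpha}\varphi\partial_{\beta}\varphi=-\frac{1}{N^2}(e_0\varphi)^2+e^{-2\gamma}|\nabla\varphi|^2$ (and its analogue for $\omega$) : \begin{align*}
T_{00}-g_{00}\mathrm{tr}_gT&=T_{00}+N^2\,\mathrm{tr}_gT\\
&=(e_0\varphi)^2+e^{-2\gamma}N^2|\nabla\varphi|^2+(e_0\varphi)^2-e^{-2\gamma}N^2|\nabla\varphi|^2\\
&\quad+\frac{1}{4}e^{-4\varphi}\left((e_0\omega)^2+e^{-2\gamma}N^2|\nabla\omega|^2+(e_0\omega)^2-e^{-2\gamma}N^2|\nabla\omega|^2\right)\\
&=2\left(e_0\varphi\right)^2+\frac{1}{2}e^{-4\varphi}(e_0\omega)^2.
\end{align*}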
\section{Weighted Sobolev spaces}\label{appendix B} Here are some results about weighted Sobolev spaces on $\mathbb{R}^2$, which are used systematically throughout the proof. Most of them can be found in Appendix I of \cite{cho09}.
\begin{lem}\label{B1} Let $m\geq 1$, $p\in[1,\infty)$ and $\delta\in\mathbb{R}$, then \begin{align*}
\|\nabla u\|_{W^{m-1,p}_{\delta+1}} & \lesssim \| u\|_{W^{m,p}_{\delta}},\\
\|\nabla u\|_{C^{m-1}_{\delta}} & \lesssim \| u\|_{C^{m}_{\delta+1}}. \end{align*} \end{lem} We have an easy embedding result, which is a straightforward application of Hölder’s inequality : \begin{lem}\label{prop holder 2} If $1\leq p_1\leq p_2\leq \infty$ and $\delta_2-\delta_1>2\left( \frac{1}{p_1}-\frac{1}{p_2} \right)$, then we have the continuous embedding \begin{equation*}
L^{p_2}_{\delta_2}\xhookrightarrow{}L^{p_1}_{\delta_1}. \end{equation*} \end{lem} Next, we have Sobolev embedding theorems for weighted Sobolev spaces : \begin{prop}\label{embedding} Let $s,m\in\mathbb{N}\cup\{0\}$, $1<p<\infty$. \begin{itemize}
\item If $s>\frac{2}{p}$ and $\beta\leq \delta+\frac{2}{p}$, then we have the continuous embedding
\begin{equation*}
W^{s+m,p}_{\delta}\xhookrightarrow{}C^m_{\beta}.
\end{equation*}
\item If $s<\frac{2}{p}$, then we have the continuous embedding
\begin{equation*}
W^{s+m,p}_{\delta}\xhookrightarrow{}W^{m,\frac{2p}{2-sp}}_{\delta+s}.
\end{equation*} \end{itemize} \end{prop} We will also need a product estimate. \begin{prop}\label{prop prod} Let $s,s_1,s_2\in\mathbb{N}\cup\{0\}$, $p\in[1,\infty]$, $\delta,\delta_1,\delta_2\in\mathbb{R}$ such that $s\leq\min(s_1,s_2)$, $s<s_1+s_2-\frac{2}{p}$ and $\delta<\delta_1+\delta_2+\frac{2}{p}$. Then we have the continuous multiplication property \begin{equation*}
W^{s_1,p}_{\delta_1}\times W^{s_2,p}_{\delta_2} \xhookrightarrow{}W^{s,p}_{\delta}. \end{equation*} \end{prop}
The following simple lemma will be useful as well. \begin{lem}
Let $\alpha\in\mathbb{R}$ and $g\in L^{\infty}_{loc}$ be such that $|g(x)|\lesssim \langle x \rangle^{\alpha}$. Then multiplication by $g$ maps $L^2_{\delta+\alpha}$ to $L^2_{\delta}$ with operator norm bounded by $\sup_{x\in\mathbb{R}^2}\frac{|g(x)|}{\langle x\rangle^{\alpha}}$. \end{lem} The next result, which is due to McOwen, concerns the invertibility of the Laplacian on weighted Sobolev spaces. Its proof can be found in \cite{mco79}. \begin{thm}\label{mcowens 1} Let $m,s\in\mathbb{N}\cup\{0\}$ and $-1+m<\delta<m$. The Laplace operator $\Delta:H^{s+2}_{\delta}\longrightarrow H^{s}_{\delta+2}$ is an injection with closed range \begin{equation*}
\enstq{f\in H^{s}_{\delta+2}}{ \forall v\in \cup_{i=0}^m\mathcal{H}_i,\; \int_{\mathbb{R}^2}fv=0}, \end{equation*} where $\mathcal{H}_i$ is the set of harmonic polynomials of degree $i$. Moreover, any $u\in H^{s+2}_{\delta}$ obeys the estimate \begin{equation*}
\| u\|_{H^{s+2}_{\delta}}\leq C(\delta,m,p)\|\Delta u\|_{H^{s}_{\delta+2}}. \end{equation*} \end{thm} The following is a corollary of Theorem \ref{mcowens 1} : \begin{coro}\label{mcowens 2} Let $-1<\delta<0$ and $f\in H^0_{\delta+2}$. Then there exists a solution u of \begin{equation*}
\Delta u=f, \end{equation*} which can be written \begin{equation*}
u=\frac{1}{2\pi}\left(\int_{\mathbb{R}^2}f\right)\chi(|x|)\ln(|x|)+v, \end{equation*}
where $\chi$ is as in Section \ref{subsection initial data} and $\|v\|_{H^2_{\delta}}\leq C(\delta)\|f\|_{H^0_{\delta+2}}$. \end{coro}
We will also use some classical inequalities, which we recall here, even if they are not related to weighted Sobolev spaces. The proof of the next property can be found in Appendix A of \cite{tao06}. \begin{prop}\label{littlewood paley} If $s\in\mathbb{N}$, then \begin{equation*}
\| uv\|_{H^s}\lesssim \|u\|_{H^s}\|v\|_{L^{\infty}}+\|v\|_{H^s}\|u\|_{L^{\infty}}. \end{equation*} \end{prop} We recall the Hardy-Littlewood-Sobolev inequality : \begin{prop}\label{prop HLS} If $0<\alpha<2$ and $1<p<r<\infty$ and $\frac{1}{r}=\frac{1}{p}-\frac{\alpha}{2}$, then \begin{equation*}
\left\| u* \frac{1}{|\cdot|^{2-\alpha}} \right\|_{L^r}\lesssim \left\| u \right\|_{L^p}. \end{equation*} \end{prop} We recall the Gagliardo-Nirenberg inequality, for which a proof can be found in \cite{fri69} : \begin{prop}\label{GN} Let $1\leq q,r\leq +\infty$, $m\in\mathbb{N}^*$. Let $\alpha\in\mathbb{R}$ and $j\in\mathbb{N}$ such that \begin{equation*} \frac{j}{m}\leq\alpha\leq 1. \end{equation*} Then : \begin{equation*} \left\| \nabla^j u \right\|_{L^p} \lesssim \left\| \nabla ^mu\right\|_{L^r}^\alpha \left\| u \right\|_{L^q}^{1-\alpha}, \end{equation*} with \begin{equation*} \frac{1}{p}=\frac{j}{2}+\left(\frac{1}{r}-\frac{m}{2} \right)\alpha+\frac{1-\alpha}{q}. \end{equation*} \end{prop}
\section{Third order energy estimate}\label{appendix C}
In this section, we prove Proposition \ref{dernier coro}. We split the proof into two lemmas : their goal is to point out the dependence of $\frac{\mathrm{d}}{\mathrm{d} t}\mathscr{E}_3^\varphi$ and $\frac{\mathrm{d}}{\mathrm{d} t}\mathscr{E}_3^\omega$ on non-linear terms in $\partial\nabla^2 U$.
\begin{lem} The energy $\mathscr{E}_3^\varphi$ satisfies \begin{align*} \frac{\mathrm{d}}{\mathrm{d} t}\mathscr{E}_3^\varphi & = \int_{\mathbb{R}^2}2e^{-4\varphi}e_0 \partial_j\partial_i\varphi \left( -\mathbf{T} \partial_j\partial_i\omega \mathbf{T}\omega+e^{-2\gamma}\nabla \partial_j\partial_i\omega \cdot\nabla\omega\right)\mathrm{d} x \\&\quad +\int_{\mathbb{R}^2}2e^{-2\gamma}e^{-4\varphi}\nabla \partial_j\partial_i\varphi\cdot \left( e_0\partial_j\partial_i\omega \nabla\omega-e_0\omega\nabla \partial_j\partial_i\omega \right)\mathrm{d} x +O(\mathscr{R}(t)). \end{align*} \end{lem}
\begin{proof} We split $\mathscr{E}_3^\varphi$ into two parts $A^\varphi+B^\varphi$ : \begin{equation*}
\mathscr{E}_3^\varphi =\underbrace{\int_{\mathbb{R}^2} \frac{2}{N^2}\left(e_0\partial_j\partial_i\varphi+\frac{1}{2}e^{-4\varphi}\partial_j\partial_i\omega e_0\omega \right)^2\mathrm{d} x}_{A^\varphi\vcentcolon=} +\underbrace{\int_{\mathbb{R}^2}2e^{-2\gamma}\left|\nabla\partial_j\partial_i\varphi+\frac{1}{2}e^{-4\varphi}\partial_j\partial_i\omega \nabla\omega \right|^2\mathrm{d} x}_{B^\varphi\vcentcolon=}. \end{equation*} We start with $A^\varphi$, by writing $\partial_t=e_0+\beta\cdot\nabla$. Note that if for some function $f$ we have $\left\| \nabla g f \right\|_{L^1}=O(\mathscr{R})$, then by integration by parts we have : \begin{equation*} \int_{\mathbb{R}^2}\partial_tf\mathrm{d} x = \int_{\mathbb{R}^2}e_0f\mathrm{d} x+\int_{\mathbb{R}^2}\beta\cdot\nabla f\mathrm{d} x = \int_{\mathbb{R}^2}e_0f\mathrm{d} x - \int_{\mathbb{R}^2}\mathrm{div}(\beta)f\mathrm{d} x= \int_{\mathbb{R}^2}e_0f\mathrm{d} x+O(\mathscr{R}). \end{equation*} Therefore, in what follows, we can forget about the $\beta\cdot\nabla$-part in $\partial_t$, which only contributes to $O(\mathscr{R})$. We now compute : \begin{align*} \frac{\mathrm{d}}{\mathrm{d} t}A^\varphi & = \int_{\mathbb{R}^2}4\left(e_0\partial_j\partial_i\varphi+\frac{1}{2}e^{-4\varphi}\partial_j\partial_i\omega e_0\omega \right)\left( \mathbf{T}^2\partial_j\partial_i\varphi+\frac{1}{2}e^{-4\varphi}\mathbf{T} \partial_j\partial_i\omega \mathbf{T}\omega+\frac{1}{2}e^{-4\varphi}\partial_j\partial_i\omega \mathbf{T}^2\omega \right)\mathrm{d} x \\&\quad +O(\mathscr{R}(t)). \end{align*} We then replace terms involving $\mathbf{T}^2$ according to \eqref{expression de box}, and then replace $\Box_g \partial_j\partial_i\varphi$ according to \eqref{WM dd ffi} ($F^\varphi_{ij}$ and $\partial_j\partial_i\omega\Box_g\omega$ only contributes to $O(\mathscr{R})$) : \begin{align*} \frac{\mathrm{d}}{\mathrm{d} t}A^\varphi& = \int_{\mathbb{R}^2}4\left(e_0\partial_j\partial_i\varphi+\frac{1}{2}e^{-4\varphi}\partial_j\partial_i\omega e_0\omega \right) \left[ -\Box_g \partial_j\partial_i\varphi+\frac{e^{-2\gamma}}{N}\mathrm{div}(N\nabla \partial_j\partial_i\varphi)+\frac{1}{2}e^{-4\varphi}\mathbf{T} \partial_j\partial_i\omega \mathbf{T}\omega \right. \\&\qquad\qquad\qquad\qquad\qquad\qquad\qquad \left. -\frac{1}{2}e^{-4\varphi}\partial_j\partial_i\omega \Box_g\omega +\frac{e^{-2\gamma}}{2N}e^{-4\varphi}\partial_j\partial_i\omega \mathrm{div}(N\nabla\omega) \right]\mathrm{d} x+O(\mathscr{R}(t)) \\& = \int_{\mathbb{R}^2}4\left(e_0\partial_j\partial_i\varphi+\frac{1}{2}e^{-4\varphi}\partial_j\partial_i\omega e_0\omega \right) \left[ -\frac{1}{2}e^{-4\varphi}\mathbf{T} \partial_j\partial_i\omega \mathbf{T}\omega+e^{-4\varphi}e^{-2\gamma}\nabla \partial_j\partial_i\omega \cdot\nabla\omega\right. \\&\qquad\qquad\qquad\qquad\qquad\qquad\qquad \left. +\frac{e^{-2\gamma}}{N}\mathrm{div}(N\nabla \partial_j\partial_i\varphi)+\frac{e^{-2\gamma}}{2N}e^{-4\varphi}\partial_j\partial_i\omega \mathrm{div}(N\nabla\omega) \right]\mathrm{d} x+O(\mathscr{R}(t)). 
\end{align*} We integrate by parts the terms with a divergence and expand : \begin{align*} \frac{\mathrm{d}}{\mathrm{d} t}A^\varphi & = \int_{\mathbb{R}^2}4\left(e_0\partial_j\partial_i\varphi+\frac{1}{2}e^{-4\varphi}\partial_j\partial_i\omega e_0\omega \right) \left( -\frac{1}{2}e^{-4\varphi}\mathbf{T} \partial_j\partial_i\omega \mathbf{T}\omega+e^{-4\varphi}e^{-2\gamma}\nabla \partial_j\partial_i\omega \cdot\nabla\omega\right)\mathrm{d} x \\& \quad -\int_{\mathbb{R}^2}4e^{-2\gamma}\nabla \partial_j\partial_i\varphi\cdot \nabla\left(e_0\partial_j\partial_i\varphi+\frac{1}{2}e^{-4\varphi}\partial_j\partial_i\omega e_0\omega \right)\mathrm{d} x \\& \quad -\int_{\mathbb{R}^2}2e^{-4\varphi}e^{-2\gamma}\nabla\omega\cdot\nabla\left( \partial_j\partial_i\omega \left(e_0\partial_j\partial_i\varphi+\frac{1}{2}e^{-4\varphi}\partial_j\partial_i\omega e_0\omega \right)\right)\mathrm{d} x+O(\mathscr{R}(t)) \\& = \int_{\mathbb{R}^2}4\left(e_0\partial_j\partial_i\varphi+\frac{1}{2}e^{-4\varphi}\partial_j\partial_i\omega e_0\omega \right) \left( -\frac{1}{2}e^{-4\varphi}\mathbf{T} \partial_j\partial_i\omega \mathbf{T}\omega+e^{-4\varphi}e^{-2\gamma}\nabla \partial_j\partial_i\omega \cdot\nabla\omega\right)\mathrm{d} x \\& \quad -\int_{\mathbb{R}^2}4e^{-2\gamma}\nabla \partial_j\partial_i\varphi\cdot\nabla e_0 \partial_j\partial_i\varphi \mathrm{d} x -\int_{\mathbb{R}^2}2e^{-4\varphi}e^{-2\gamma}\nabla \partial_j\partial_i\varphi\cdot\nabla (\partial_j\partial_i\omega e_0\omega)\mathrm{d} x \\& \quad -\int_{\mathbb{R}^2}2e^{-4\varphi}e^{-2\gamma}(\nabla\omega\cdot\nabla \partial_j\partial_i\omega )\left(e_0\partial_j\partial_i\varphi+\frac{1}{2}e^{-4\varphi}\partial_j\partial_i\omega e_0\omega \right)\mathrm{d} x \\& \quad -\int_{\mathbb{R}^2}2e^{-4\varphi}e^{-2\gamma}\partial_j\partial_i\omega \nabla\omega\cdot\nabla e_0\partial_j\partial_i\varphi\mathrm{d} x - \int_{\mathbb{R}^2}e^{-8\varphi}e^{-2\gamma}\partial_j\partial_i\omega \nabla\omega\cdot\nabla(\partial_j\partial_i\omega e_0\omega)\mathrm{d} x +O(\mathscr{R}(t)) \\& = \int_{\mathbb{R}^2}2e_0\partial_j\partial_i\varphi \left( -e^{-4\varphi}\mathbf{T} \partial_j\partial_i\omega \mathbf{T}\omega+e^{-4\varphi}e^{-2\gamma}\nabla \partial_j\partial_i\omega \cdot\nabla\omega\right)\mathrm{d} x \\& \quad -\int_{\mathbb{R}^2}2e^{-4\varphi}e^{-2\gamma}e_0\omega\nabla \partial_j\partial_i\varphi\cdot\nabla \partial_j\partial_i\omega \mathrm{d} x
-\int_{\mathbb{R}^2}4e^{-2\gamma}\nabla \partial_j\partial_i\varphi\cdot\nabla e_0 \partial_j\partial_i\varphi \mathrm{d} x \\& \quad-\int_{\mathbb{R}^2}2e^{-4\varphi}e^{-2\gamma}\partial_j\partial_i\omega \nabla\omega\cdot\nabla e_0\partial_j\partial_i\varphi\mathrm{d} x +O(\mathscr{R}(t)) \end{align*} We now deal with $B^\varphi$ : \begin{align*} \frac{\mathrm{d}}{\mathrm{d} t}B^\varphi & = \int_{\mathbb{R}^2}4e^{-2\gamma} \left( \nabla \partial_j\partial_i\varphi+\frac{1}{2}e^{-4\varphi}\partial_j\partial_i\omega \nabla\omega \right)\cdot e_0\left( \nabla \partial_j\partial_i\varphi+\frac{1}{2}e^{-4\varphi}\partial_j\partial_i\omega \nabla\omega \right)\mathrm{d} x +O(\mathscr{R}(t)) \\& = \int_{\mathbb{R}^2}2e^{-2\gamma}e^{-4\varphi}e_0\partial_j\partial_i\omega \nabla\partial_j\partial_i\varphi\cdot \nabla\omega \mathrm{d} x \\&\quad+ \int_{\mathbb{R}^2}4e^{-2\gamma}\nabla \partial_j\partial_i\varphi\cdot \nabla e_0 \partial_j\partial_i\varphi \mathrm{d} x +\int_{\mathbb{R}^2}2e^{-2\gamma}e^{-4\varphi}\partial_j\partial_i\omega \nabla\omega\cdot\nabla e_0\partial_j\partial_i\varphi \mathrm{d} x +O(\mathscr{R}(t)). \end{align*} We see that the terms which contain $\nabla e_0 \partial_j\partial_i\varphi $ in $A^\varphi$ and $B^\varphi$ cancel each other, and that all the terms which are linear in $\partial \nabla^2 U$ only contribute to $O(\mathscr{R}(t))$, so that : \begin{align*} \frac{\mathrm{d}}{\mathrm{d} t}\mathscr{E}_3^\varphi & = \int_{\mathbb{R}^2}2e^{-4\varphi}e_0 \partial_j\partial_i\varphi \left( -\mathbf{T} \partial_j\partial_i\omega \mathbf{T}\omega+e^{-2\gamma}\nabla \partial_j\partial_i\omega \cdot\nabla\omega\right)\mathrm{d} x \\&\quad +\int_{\mathbb{R}^2}2e^{-2\gamma}e^{-4\varphi}\nabla \partial_j\partial_i\varphi\cdot \left( e_0\partial_j\partial_i\omega \nabla\omega-e_0\omega\nabla \partial_j\partial_i\omega \right)\mathrm{d} x +O(\mathscr{R}(t)). \end{align*}
\end{proof}
\begin{lem} The energy $\mathscr{E}_3^\omega$ satisfies \begin{align*} \frac{\mathrm{d}}{\mathrm{d} t}\mathscr{E}_3^\omega & =\int_{\mathbb{R}^2}2e^{-4\varphi}e_0\partial_j\partial_i\omega \left( \mathbf{T}\partial_j\partial_i\varphi\mathbf{T}\omega -e^{-2\gamma}\nabla\partial_j\partial_i\varphi\cdot\nabla\omega \right) \mathrm{d} x
\\& \quad +\int_{\mathbb{R}^2}2e^{-2\gamma}e^{-4\varphi}\nabla\partial_j\partial_i\omega \cdot \left( e_0\omega\nabla\partial_j\partial_i\varphi-e_0\partial_j\partial_i\varphi\nabla\omega \right) \mathrm{d} x+O(\mathscr{R}(t)). \end{align*} \end{lem}
\begin{proof} The proof of this lemma is very similar to that of the previous lemma, except that we also differentiate the coefficient $e^{-4\varphi}$ in the energy $\mathscr{E}_3^\omega$. We split $\mathscr{E}_3^\omega$ into two parts $A^\omega+B^\omega$: \begin{align*}
\mathscr{E}_3^\omega & = \underbrace{\int_{\mathbb{R}^2} \frac{1}{2N^2}e^{-4\varphi} \left( e_0\partial_j\partial_i\omega -2\partial_j\partial_i\omega e_0\varphi-2\partial_j\partial_i\varphi e_0\omega \right)^2\mathrm{d} x}_{A^\omega\vcentcolon=} \\&\qquad\qquad\qquad\qquad+\underbrace{\int_{\mathbb{R}^2}\frac{1}{2}e^{-4\varphi}e^{-2\gamma}\left|\nabla\partial_j\partial_i\omega -2\partial_j\partial_i\omega \nabla\varphi-2\partial_j\partial_i\varphi \nabla\omega \right|^2\mathrm{d} x}_{B^\omega\vcentcolon=}. \end{align*} We start with $A^\omega$: \begin{align*} \frac{\mathrm{d}}{\mathrm{d} t}A^\omega & = \int_{\mathbb{R}^2}e^{-4\varphi} \left( e_0\partial_j\partial_i\omega -2\partial_j\partial_i\omega e_0\varphi-2 \partial_j\partial_i\varphi e_0\omega \right)\left( \mathbf{T}^2\partial_j\partial_i\omega -2\mathbf{T} \partial_j\partial_i\omega \mathbf{T}\varphi -2\partial_j\partial_i\omega \mathbf{T}^2\varphi\right.\\& \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\left.-2\mathbf{T} \partial_j\partial_i\varphi \mathbf{T}\omega -2 \partial_j\partial_i\varphi \mathbf{T}^2\omega \right) \mathrm{d} x \\&\quad -\int_{\mathbb{R}^2}2e^{-4\varphi}e_0\varphi \left( \mathbf{T} \partial_j\partial_i\omega -2\partial_j\partial_i\omega \mathbf{T}\varphi-2 \partial_j\partial_i\varphi \mathbf{T}\omega \right)^2\mathrm{d} x +O(\mathscr{R}(t)) \\& = \int_{\mathbb{R}^2}e^{-4\varphi} \left( e_0\partial_j\partial_i\omega -2\partial_j\partial_i\omega e_0\varphi-2 \partial_j\partial_i\varphi e_0\omega \right) \left[ -\Box_g\partial_j\partial_i\omega +\frac{e^{-2\gamma}}{N}\mathrm{div}(N\nabla \partial_j\partial_i\omega ) + 2\partial_j\partial_i\omega \Box_g\varphi \right. \\& \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \left.- \frac{2e^{-2\gamma}}{N}\partial_j\partial_i\omega \mathrm{div}(N\nabla\varphi) +2 \partial_j\partial_i\varphi \Box_g\omega \right. \\& \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\left.- \frac{2e^{-2\gamma}}{N} \partial_j\partial_i\varphi \mathrm{div}(N\nabla\omega) -2\mathbf{T} \partial_j\partial_i\omega \mathbf{T}\varphi-2\mathbf{T} \partial_j\partial_i\varphi \mathbf{T}\omega \right]\mathrm{d} x \\&\qquad\qquad\qquad-\int_{\mathbb{R}^2}2e^{-4\varphi}e_0\varphi \left( \mathbf{T} \partial_j\partial_i\omega \right)^2\mathrm{d} x+O(\mathscr{R}(t)). \end{align*} We integrate by parts the terms with a divergence (note that we differentiate the $e^{-4\varphi}$, but the one with $\mathrm{div}(N\nabla\partial_j\partial_i\omega)$ in front is the only divergence term which gives a main term): \begin{align*} \frac{\mathrm{d}}{\mathrm{d} t}A^\omega & = \int_{\mathbb{R}^2}2e^{-4\varphi} \left( e_0\partial_j\partial_i\omega -2\partial_j\partial_i\omega e_0\varphi-2 \partial_j\partial_i\varphi e_0\omega \right)\left( \mathbf{T} \partial_j\partial_i\omega \mathbf{T}\varphi+ \mathbf{T} \partial_j\partial_i\varphi \mathbf{T}\omega \right. \\& \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\left. 
-2e^{-2\gamma}\nabla \partial_j\partial_i\omega \cdot\nabla\varphi-2e^{-2\gamma}\nabla \partial_j\partial_i\varphi \cdot\nabla\omega \right) \mathrm{d} x \\& \quad -\int_{\mathbb{R}^2}e^{-4\varphi}e^{-2\gamma}\nabla \partial_j\partial_i\omega \cdot\nabla\left( e_0\partial_j\partial_i\omega -2\partial_j\partial_i\omega e_0\varphi-2 \partial_j\partial_i\varphi e_0\omega \right)\mathrm{d} x \\&\quad+\int_{\mathbb{R}^2}4e^{-4\varphi}e^{-2\gamma}\nabla\varphi\cdot\nabla\partial_j\partial_i\omega \left( e_0\partial_j\partial_i\omega -2\partial_j\partial_i\omega e_0\varphi-2 \partial_j\partial_i\varphi e_0\omega \right) \mathrm{d} x \\&\quad +\int_{\mathbb{R}^2}2e^{-4\varphi}e^{-2\gamma}\nabla\varphi\cdot \nabla\left( \partial_j\partial_i\omega \left( e_0\partial_j\partial_i\omega -2\partial_j\partial_i\omega e_0\varphi-2 \partial_j\partial_i\varphi e_0\omega \right)\right)\mathrm{d} x \\&\quad +\int_{\mathbb{R}^2}2e^{-4\varphi}e^{-2\gamma}\nabla\omega\cdot\nabla \left( \partial_j\partial_i\varphi \left( e_0\partial_j\partial_i\omega -2\partial_j\partial_i\omega e_0\varphi-2 \partial_j\partial_i\varphi e_0\omega \right)\right)\mathrm{d} x \\& \quad-\int_{\mathbb{R}^2}2e^{-4\varphi}e_0\varphi \left( \mathbf{T} \partial_j\partial_i\omega \right)^2\mathrm{d} x+O(\mathscr{R}(t)). \end{align*} We now expand all the terms and note again that the linear terms in $\partial\nabla^2 U$ only contribute to $O(\mathscr{R}(t))$ : \begin{align*} \frac{\mathrm{d}}{\mathrm{d} t}A^\omega & = \int_{\mathbb{R}^2}2e^{-4\varphi} \left( e_0\partial_j\partial_i\omega -2\partial_j\partial_i\omega e_0\varphi-2 \partial_j\partial_i\varphi e_0\omega \right) \left( \mathbf{T} \partial_j\partial_i\omega \mathbf{T}\varphi+ \mathbf{T} \partial_j\partial_i\varphi \mathbf{T}\omega \right. \\& \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \left.-2e^{-2\gamma}\nabla \partial_j\partial_i\omega \cdot\nabla\varphi-2e^{-2\gamma}\nabla \partial_j\partial_i\varphi \cdot\nabla\omega \right) \mathrm{d} x
\\& \quad +\int_{\mathbb{R}^2}2e^{-4\varphi}e^{-2\gamma}e_0\varphi|\nabla \partial_j\partial_i\omega|^2 \mathrm{d} x +\int_{\mathbb{R}^2}2e^{-4\varphi}e^{-2\gamma}e_0\omega\nabla \partial_j\partial_i\omega \cdot\nabla \partial_j\partial_i\varphi \mathrm{d} x \\&\quad +\int_{\mathbb{R}^2}6e^{-4\varphi}e^{-2\gamma}(\nabla\varphi\cdot\nabla \partial_j\partial_i\omega ) e_0\partial_j\partial_i\omega \mathrm{d} x +\int_{\mathbb{R}^2}2e^{-4\varphi}e^{-2\gamma}(\nabla\omega\cdot\nabla \partial_j\partial_i\varphi ) e_0\partial_j\partial_i\omega \mathrm{d} x \\&\quad +\int_{\mathbb{R}^2}2e^{-4\varphi}e^{-2\gamma} \partial_j\partial_i\varphi \nabla\omega\cdot\nabla e_0\partial_j\partial_i\omega \mathrm{d} x -\int_{\mathbb{R}^2}e^{-4\varphi}e^{-2\gamma}\nabla \partial_j\partial_i\omega \cdot\nabla e_0\partial_j\partial_i\omega \mathrm{d} x \\&\quad +\int_{\mathbb{R}^2}2e^{-4\varphi}e^{-2\gamma}\partial_j\partial_i\omega \nabla\varphi\cdot\nabla e_0\partial_j\partial_i\omega \mathrm{d} x -\int_{\mathbb{R}^2}2e^{-4\varphi}e_0\varphi \left( \mathbf{T} \partial_j\partial_i\omega \right)^2\mathrm{d} x+O(\mathscr{R}(t)) \\& = \int_{\mathbb{R}^2}2e^{-4\varphi} e_0\partial_j\partial_i\omega \left( \mathbf{T} \partial_j\partial_i\varphi \mathbf{T}\omega +e^{-2\gamma}\nabla \partial_j\partial_i\omega \cdot\nabla\varphi-e^{-2\gamma}\nabla \partial_j\partial_i\varphi \cdot\nabla\omega \right) \mathrm{d} x
\\& \quad +\int_{\mathbb{R}^2}2e^{-4\varphi}e^{-2\gamma}e_0\varphi|\nabla \partial_j\partial_i\omega|^2 \mathrm{d} x +\int_{\mathbb{R}^2}2e^{-4\varphi}e^{-2\gamma}e_0\omega\nabla \partial_j\partial_i\omega \cdot\nabla \partial_j\partial_i\varphi \mathrm{d} x \\&\quad +\int_{\mathbb{R}^2}e^{-4\varphi}e^{-2\gamma}(-\nabla \partial_j\partial_i\omega +2\partial_j\partial_i\omega \nabla\varphi+2 \partial_j\partial_i\varphi \nabla\omega)\cdot \nabla e_0\partial_j\partial_i\omega \mathrm{d} x +O(\mathscr{R}(t)) \end{align*} We now deal with $B^\omega$ : \begin{align*} \frac{\mathrm{d}}{\mathrm{d} t}B^\omega & =\int_{\mathbb{R}^2}e^{-4\varphi}e^{-2\gamma}\left(\nabla \partial_j\partial_i\omega -2\partial_j\partial_i\omega \nabla\varphi-2 \partial_j\partial_i\varphi \nabla\omega \right)\cdot e_0\left(\nabla \partial_j\partial_i\omega -2\partial_j\partial_i\omega \nabla\varphi-2 \partial_j\partial_i\varphi \nabla\omega \right)\mathrm{d} x
\\&\quad -\int_{\mathbb{R}^2}2e^{-4\varphi}e^{-2\gamma}e_0\varphi\left|\nabla \partial_j\partial_i\omega -2\partial_j\partial_i\omega \nabla\varphi-2 \partial_j\partial_i\varphi \nabla\omega \right|^2\mathrm{d} x \\& = \int_{\mathbb{R}^2}e^{-4\varphi}e^{-2\gamma}(\nabla \partial_j\partial_i\omega -2\partial_j\partial_i\omega \nabla\varphi-2 \partial_j\partial_i\varphi \nabla\omega)\cdot \nabla e_0\partial_j\partial_i\omega \mathrm{d} x \\& \quad -\int_{\mathbb{R}^2}2e^{-4\varphi}e^{-2\gamma}e_0\partial_j\partial_i\omega\nabla \partial_j\partial_i\omega\cdot \nabla\varphi\mathrm{d} x -\int_{\mathbb{R}^2}2e^{-4\varphi}e^{-2\gamma}e_0 \partial_j\partial_i\varphi\nabla \partial_j\partial_i\omega \cdot \nabla\omega\mathrm{d} x
\\&\quad- \int_{\mathbb{R}^2}2e^{-4\varphi}e^{-2\gamma}e_0\varphi|\nabla \partial_j\partial_i\omega |^2\mathrm{d} x+O(\mathscr{R}(t)). \end{align*} We see that the terms which contain $\nabla e_0 \partial_j\partial_i\omega $ in $A^\omega$ and $B^\omega$ cancel each other, therefore: \begin{align*} \frac{\mathrm{d}}{\mathrm{d} t}\mathscr{E}_3^\omega & = \int_{\mathbb{R}^2}2e^{-4\varphi} e_0\partial_j\partial_i\omega \left( \mathbf{T} \partial_j\partial_i\varphi \mathbf{T}\omega +e^{-2\gamma}\nabla \partial_j\partial_i\omega \cdot\nabla\varphi-e^{-2\gamma}\nabla \partial_j\partial_i\varphi \cdot\nabla\omega \right) \mathrm{d} x \\& \quad +\int_{\mathbb{R}^2}2e^{-4\varphi}e^{-2\gamma}e_0\omega\nabla \partial_j\partial_i\omega \cdot\nabla \partial_j\partial_i\varphi \mathrm{d} x
-\int_{\mathbb{R}^2}2e^{-4\varphi}e^{-2\gamma}e_0\partial_j\partial_i\omega\nabla \partial_j\partial_i\omega\cdot \nabla\varphi\mathrm{d} x \\& \quad -\int_{\mathbb{R}^2}2e^{-4\varphi}e^{-2\gamma}e_0 \partial_j\partial_i\varphi\nabla \partial_j\partial_i\omega \cdot \nabla\omega\mathrm{d} x
+O(\mathscr{R}(t))
\\& = \int_{\mathbb{R}^2}2e^{-4\varphi}e_0\partial_j\partial_i\omega \left( \mathbf{T}\partial_j\partial_i\varphi\mathbf{T}\omega -e^{-2\gamma}\nabla\partial_j\partial_i\varphi\cdot\nabla\omega \right) \mathrm{d} x
\\& \quad +\int_{\mathbb{R}^2}2e^{-2\gamma}e^{-4\varphi}\nabla\partial_j\partial_i\omega \cdot \left( e_0\omega\nabla\partial_j\partial_i\varphi-e_0\partial_j\partial_i\varphi\nabla\omega \right) \mathrm{d} x+O(\mathscr{R}(t)). \end{align*}
Adding the two previous lemmas, we see that the main parts of $\frac{\mathrm{d}}{\mathrm{d} t}\mathscr{E}_3^\varphi$ and $\frac{\mathrm{d}}{\mathrm{d} t}\mathscr{E}_3^\omega$ cancel each other, and we obtain Proposition \ref{dernier coro}.
\end{proof}
\end{document}
\begin{document}
\title{Rapid parametric density estimation} \author{\IEEEauthorblockN{Jarek Duda}\\ \IEEEauthorblockA{Jagiellonian University, Golebia 24, 31-007 Krakow, Poland, Email: \emph{[email protected]}}} \maketitle
\begin{abstract} Parametric density estimation, for example as a Gaussian distribution, is the basis of the field of statistics. Machine learning requires inexpensive estimation of much more complex densities, while the standard approach, maximum likelihood estimation (MLE), is relatively costly. We discuss inexpensive density estimation, for example literally fitting a polynomial (or Fourier series) to the sample, whose coefficients are calculated by just averaging monomials (or sines/cosines) over the sample. Another basic application discussed here is fitting a distortion of some standard distribution like the Gaussian - analogously to ICA, but additionally allowing one to reconstruct the disturbed density. Finally, by using weighted averages, the method can also be applied to the estimation of non-probabilistic densities, like modelling a mass distribution, or to various clustering problems by using negative (or complex) weights: fitting a function whose sign (or argument) determines the clusters. The estimated parameters approach the optimal values with error dropping like $1/\sqrt{n}$, where $n$ is the sample size. \end{abstract} \textbf{Keywords:} machine learning, statistics, density estimation, independent component analysis, clustering \section{Introduction} \begin{figure}
\caption{Two basic examples of fitting parameters for an assumed family of probability densities: polynomials (left column) on the range $[-1,1]$ and polynomials multiplied by $e^{-x^2/2}$ (right) on $\mathbb{R}$, based on a random sample of size $n=25$ (top row), $n=100$ (middle) or $n=400$ values (bottom), generated using the assumed probability distribution - represented as the thick blue line. Every plot also contains 10 thin red lines representing the results of 10 independent experiments of estimating the parameters based on the obtained size-$n$ sample. The inaccuracy drops like $1/\sqrt{n}$, which can be seen in the dispersion dropping by approximately half in each row. For convenience we will work with an orthogonal family of functions, for example polynomials, making their parameters independent - each calculated as just the average of the value of a given function over the obtained sample. The polynomial formula used for the left column can also be expressed as $\rho=\frac{5}{8}\left(1-3x^2+[x](15x-21x^3)+[x^2](9x^2-3)+[x^3](35x^3-21x)\right)$,
where $[f]$ denotes the average of function $f$ over the sample.}
\label{overv}
\end{figure} \noindent The complexity of our world makes it useful to imagine a sample obtained from some measurements, in nearly all kinds of science, as coming from some probability distribution. The natural approach to model this distribution is to use a parametric function, whose parameters should be estimated based on the sample. The best known example is the Gaussian (normal) distribution, whose parameters are estimated from averages over the sample of all monomials up to degree 2.
To model real data we usually need more complex densities. In machine learning, kernel density estimators (KDE)~(\cite{ker1,ker2}) are popular; they smooth the sample by convolving it with a kernel: a nonnegative function integrating to 1, for instance a Gaussian distribution. However, one issue is that this requires arbitrarily choosing the width of the kernel: if it is too narrow we get a series of spikes, if too wide we lose the details. Another problem is that such an estimated density is a sum of a potentially large number of functions - it is very costly to work with directly.
Hence, it would usually be more convenient to have a parametric model with a relatively small number of parameters, for example to locally approximate the density with a polynomial. The standard approach to estimating such parameters is maximum likelihood estimation~(MLE)~(\cite{MLE1,MLE2}), which finds the parameters maximizing the likelihood of obtaining the given sample. However, beyond really simple examples, such maximization requires costly numerical procedures like gradient descent.\\
As in the examples from fig. \ref{overv}, we will discuss here inexpensive parametric estimation, especially with a linear combination of a chosen family of functions, for instance with a polynomial or Fourier series in a finite region. It is based on mean-square fitting of the optimal parameters to the sample smoothed with a kernel (KDE). Surprisingly, the mean-square ($L^2$) fitting gives the best agreement in the zero-width limit of the kernel: degenerated to a Dirac delta, like in fig. \ref{spike}, incidentally removing the very inconvenient requirement of arbitrarily choosing the kernel width. Intuitively, it fits a function to a series of spikes, which makes it much simpler to calculate and asymptotically leads to the real parameters of the underlying density: the error drops like $1/\sqrt{n}$ with $n$ being the sample size.
The parameters of a linear combination are obtained analogously to an algebraic projection onto a preferably (complete) orthogonal base of functions, making the estimated coefficient for a given function just the average of the value of this function over the sample. For fitting an order-$m$ polynomial in a finite region (preferably a hyperrectangle: a product of ranges), we just need to calculate averages over the sample of all monomials of power up to $m$ (estimators of moments). Orthogonality of the used family allows us to ask about each coefficient independently, which makes it possible to work directly with as high an order of polynomial approximation as needed, eventually neglecting coefficients which turn out close to zero. Finally, the Stone-Weierstrass theorem says that any continuous function on a closed interval can be uniformly approximated as closely as desired by polynomials, making it theoretically possible to asymptotically recreate any continuous density using the discussed approach. The situation is analogous for Fourier series, where the coefficients are obtained as just averages of the corresponding sine or cosine over the sample.
Besides being much less expensive to calculate than MLE, minimization of the mean-square error might lead to more appropriate densities for some machine learning problems. Specifically, MLE categorically requires all probabilities to be positive on the sample, which seems a strong restriction, for example when fitting polynomials. Mean-square optimization sometimes leads to negative values of the density, which obviously may bring some issues, but can also lead to better global agreement of the fitted function.
Another basic application discussed here, besides literally fitting a polynomial to a sample in a finite region, is estimating and describing distortion from some simple expected approximate distribution, like the uniform or Gaussian distribution - for example for testing long-range correlations of a pseudorandom number generator (PRNG). Taking length-$D$ sequences of its values as points from the $[0,1]^D$ hypercube, ideally they should come from the $\rho=1$ uniform density. The discussed method allows one to choose some testing function (preferably orthogonal to $\rho=1$: integrating to 0) describing our suspicion of distortion from this uniform density, like $\prod_i (x_i-1/2)$, and estimate its coefficient based on the sample - testing whether the average of its values approaches zero as expected.
\begin{figure}
\caption{Examples of smoothing (KDE) a sample of $n=25$ points from the density represented by the thick blue line. The three thin lines represent smoothing with a Gaussian kernel of standard deviation $\epsilon=0.5,\ 0.1$ and $0.001$. We can fit, for example, a polynomial to such a smoothed sample. A large $\epsilon$ intuitively blurs the information, so we focus here on mean-square fitting to the sharpest picture: the $\epsilon\to 0$ limit of spikes, a sum of Dirac deltas.}
\label{spike}
\end{figure}
A more important example is modelling distortions from the Gaussian distribution. The standard way is to calculate higher (central) moments; however, it is a difficult problem to translate them back into the actual probability density - the so-called "problem of moments"~\cite{mom}.
The most widely used methods for understanding distortion from the Gaussian distribution, seen as noise in which we would often like to find the real signal, are probably the large family of independent component analysis (ICA)~\cite{ica} methods. They usually start with normalization: shifting to the mean value, and rescaling the directions of eigenvectors of the covariance matrix according to the corresponding eigenvalues, getting a normalized sample: with identity covariance matrix. Then one searches for distortions of this normalized sample from the perfect Gaussian (the noise), for example as directions maximizing kurtosis - candidates for the real signal. The discussed method allows one to directly fit parameters of the distortion as a linear combination of some chosen family of functions, like polynomials multiplied by $e^{-x^2/2}$. Its advantages compared to ICA are the possibility to model multiple modes for every direction (like degrees of a polynomial) and the ability to reconstruct the disturbed density function. It can also work with tails different from the Gaussian one, like $e^{-|x|}$ or $1/|x|^K$. \\
While the main focus here is on probabilistic densities: nonnegative and integrating to 1, this approach can also be used to model more general or abstract densities: fitting a relatively simple function like a polynomial to a smoothed set of points. For example, the original motivation for this approach was fitting a low order polynomial to the distribution of atomic masses or electronegativity along the longest axis of a chemical molecule~\cite{mole}. The coefficients of this polynomial describe some properties of the molecule and can be used as part of the information in its fingerprint (descriptor) for virtual screening. Another basic application can be trend-seeking estimation of the probability of a symbol based on its previous appearances in adaptive data compressors: exploiting trends like rising or dropping probability by fitting and extrapolating a polynomial.
Finally, this approach can also be applied to various clustering problems. By thresholding the estimated density function we can determine regions of relatively high density, interpreted as clusters for unsupervised learning. For a supervised problem, with some additional information about the classification of points into clusters, we can use these labels as weights while averaging. For two classes we can use weights of opposite signs to distinguish them (an under-represented class should have correspondingly higher absolute weights); then the sign of the fitted density can be used to determine the cluster. We can also directly distinguish a larger number of classes with a single function, for example by using complex (or vector) weights (like $W=\pm 1 \pm \sqrt{-1}$), and then use its argument to determine the cluster (like $\lfloor \frac{2}{\pi}\arg(\rho)\rfloor$).
The standard approaches in machine learning, like neural networks or SVMs, are based on linear classifiers with parameters found e.g. by backpropagation. The discussed approach not only allows one to generalize them to directly use, for example, higher order polynomials, but also directly fits their parameters by just averaging over the sample. Generalization to higher order polynomials, with clusters defined for example as thresholded polynomials (semi-algebraic sets), gives much stronger separation power. For example, the sign of the $xy$ monomial already classifies the XOR pattern. A paraboloid, as a second order polynomial, can directly separate an ellipsoid, and much more is possible for higher order polynomials. This approach can be used for example as one of the layers of a neural network with initially directly fitted coefficients, then improved e.g. through backpropagation. Due to the simplicity of calculating the coefficients, the formulas for fitting them can also be directly put into the backpropagation process. \section{General case} \noindent Imagine we have a sample of $n$ points in $D$ dimensional space: $S=\{\textbf{x}^i\}_{i=1..n}$, where $\textbf{x}^i =\{x^i_1,\ldots,x^i_D\}\in \mathbb{R}^D$. Assuming it comes from some probability distribution, we will try to approximate (estimate) its probability density function (PDF) as a function from some chosen parameterized family $\rho_\textbf{a}:\mathbb{R}^D\to \mathbb{R}$ with $m$ parameters $\textbf{a}=\{a_1,\ldots,a_m\}$. In other words, we would like to estimate the real parameters based on the observed sample. For generality, assume each point also has a weight $W(\textbf{x})$, which is $W=1$ for probability estimation, but generally it can represent the mass of an object, can be negative for separation into two classes, or can even be a complex number or a vector, for example for simultaneous classification into multiple classes.
Preferably, a probabilistic $\rho_\textbf{a}$ should be smooth, nonnegative and integrate to one. However, as we will mostly focus on $\rho_\textbf{a}$ being a linear combination, in some cases negative values will appear as an artifact of the method. Probabilistic normalization (integration to 1) will be enforced by construction or by additional constraints.
A natural approach is smoothing the sample (before fitting) into a continuous function, for example by convolution with a kernel (KDE): \begin{equation} g_\epsilon(\textbf{x}) := \frac{1}{n}\sum_{\textbf{y}\in S} W(\textbf{y})\, k_{\epsilon}(\textbf{x}-\textbf{y})\end{equation}
\noindent where $k$ is nonnegative and integrates to 1, for example is the Gaussian kernel: $k_\epsilon(\textbf{x}) = (2\pi \epsilon^2)^{-D/2}\, e^{-\textbf{x}\cdot \textbf{x}/2\epsilon^2}$. Obviously, for probabilistic $W=1$, $g_\epsilon$ is also nonnegative and integrates to 1.\\
We can now formulate the basic problem as finding parameters minimizing some distance from $g_\epsilon$:
\begin{equation} \min_{\textbf{a}} \| \rho_\textbf{a} - g_\epsilon \| \label{min} \end{equation}
However, there remains a difficult question of choosing the parameter $\epsilon$. In fact it is even worse as the kernel assumed here is spherically symmetric, while for the real data more appropriate might be for example some elliptic kernel with a larger number of parameters to choose, which additionally should be able to vary with position.
Surprisingly, we can remove this dependency on $\epsilon$ by choosing the mean-square norm ($L^2$), which allows us to perform the $\epsilon\to 0$ limit of the kernel: to a Dirac delta as in fig. \ref{spike}. In this limit $g_\epsilon$ becomes a series of spikes, no longer the type of function expected while fitting with a family of smooth functions. However, it turns out to be well behaved from the mathematical point of view. Intuitively, while the $\epsilon\to \infty$ limit would mean smoothing the sample into a flat function - losing all the details, the $\epsilon\to 0$ limit allows us to exploit the sharpest possible picture.
Assume we would like to minimize mean-square norm $\|f\|:=\sqrt{\langle f,f \rangle }$ for scalar product: \begin{equation} \langle f,g \rangle:=\int_{\mathbb{R}^D} f(\textbf{x})\, g(\textbf{x})\,w(\textbf{x}) \,d\textbf{x} \end{equation}
\noindent where we will usually use constant weight $w=1$. However, for fitting only local behavior, it might be also worth to consider some vanishing $w\to 0$ while going away from the point of interest.
For mean-square norm, the minimization (\ref{min}) is equivalent to:
$$ \min_{\textbf{a}} \| \rho_\textbf{a} - g_\epsilon \|^2 = \min_{\textbf{a}} \langle \rho_\textbf{a} - g_\epsilon , \rho_\textbf{a} - g_\epsilon\rangle \ =$$
\begin{equation} \min_{\textbf{a}}\ \|\rho_\textbf{a}\|^2 - 2 \langle g_\epsilon,\rho_\textbf{a} \rangle + \|g_\epsilon\|^2 \label{min1}\end{equation}
In the $\epsilon\to 0$ limit we would have $\|g_\epsilon\|\to \infty$. However, as we are only interested in the optimal parameters $\textbf{a}$ here, and $\|g_\epsilon\|$ does not depend on them, we can just remove this term resolving the issue with infinity. The term $ \langle g_\epsilon,\rho_\textbf{a} \rangle$ becomes $\frac{1}{n}\sum_{\textbf{x}\in S} w(\textbf{x})W(\textbf{x})\, \rho_\textbf{a}(\textbf{x})$ in the $\epsilon\to 0$ limit, finally getting:
\begin{df}
The general considered minimization problem is
\end{df} \begin{equation} \min_{\textbf{a}}\ \langle \rho_\textbf{a}, \rho_\textbf{a}\rangle - \frac{2}{n} \sum_{\textbf{x}\in S} w(\textbf{x})W(\textbf{x})\, \rho_\textbf{a}(\textbf{x}). \label{min2}\end{equation}
Its necessary differential condition is:
\begin{equation} \langle \rho_\textbf{a}, \, \partial_{a_j} \rho_\textbf{a}\rangle = \frac{1}{n} \sum_{\textbf{x}\in S} w(\textbf{x})W(\textbf{x})\, (\partial_{a_j} \rho_\textbf{a})(\textbf{x}) \label{nec} \end{equation} for all $j\in\{1,\ldots,m\}$.
\section{Density estimation with \\a linear combination} \noindent The most convenient application of the discussed method is density estimation with a linear combination of some family of functions, for instance polynomials or sines and cosines. In this and the following section we assume:
\begin{equation} \rho_\textbf{a} = \sum_{i=1}^m a_i f_i \end{equation} for some functions $f_i: \mathbb{R}^D\to \mathbb{R}$ (not necessarily non-negative in the entire domain).
While in practice we can only work with a finite family, it can in fact come from an infinite complete orthogonal base - allowing us to approximate any continuous function as closely as needed, which is the case for example for polynomials and the Fourier base on a finite closed interval.
\subsection{Basic formula} In the linear combination case, $\partial_{a_j} \rho_\textbf{a} = f_j$, and the necessary condition (\ref{nec}) becomes just: \begin{equation} \sum_i a_i\, \langle f_i,f_j \rangle = \frac{1}{n} \sum_{\textbf{x}\in S} w(\textbf{x})W(\textbf{x})\, f_j(\textbf{x})\end{equation} for $j=1,\ldots,m$. Denoting $(\langle f_i,f_j \rangle)$ as the $m\times m$ matrix of scalar products, the optimal coefficients become:
\begin{equation} \textbf{a}^T = \, (\langle f_i,f_j \rangle)^{-1} \cdot ([f_1],\ldots,[f_m])^T \label{gene}\end{equation} \begin{equation} \textrm{where}\quad [f]:=\frac{1}{n} \sum_{\textbf{x}\in S} w(\textbf{x})W(\textbf{x})\, f(\textbf{x}) \end{equation} is estimator of the expected value of $f$. Assuming orthonormal set of functions: $\langle f_i,f_j \rangle = \delta_{ij}$ and $w=W=1$ standard weights, we get a surprisingly simple formula: \begin{equation} a_i = [f_i] = \frac{1}{n} \sum_{\textbf{x}\in S} f_i(\textbf{x}) \label{ai}\end{equation}
\begin{equation} \rho_{\textbf{a}} (\textbf{x}) = \sum_i\, [f_i]\, f_i(\textbf{x}) \label{rho} \end{equation}
We see that using an orthonormal family of functions is very convenient as it allows us to calculate the estimated coefficients independently, each one being just the (weighted) average over the sample of the corresponding function. It is analogous to making algebraic projections of the sample onto some orthonormal base. The independence allows this base to be potentially infinite, preferably complete to allow for approximation of any function, for example a base of orthogonal polynomials. Orthonormality (independence) allows us to inexpensively estimate with as high an order of polynomial as needed. Without orthonormality, the more general formula (\ref{gene}) can be used instead.
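As a concrete illustration of (\ref{ai}) and (\ref{rho}), the following short Python sketch (ours, not part of the original experiments; the function names and the choice of NumPy are our own assumptions) estimates the coefficients of an orthonormal family by plain sample averages:
\begin{verbatim}
import numpy as np

def fit_orthonormal_density(sample, basis, weights=None):
    # Estimate a_i = [f_i]: the (weighted) sample average of each
    # orthonormal basis function f_i, and return the fitted density.
    sample = np.asarray(sample, dtype=float)
    if weights is None:
        weights = np.ones(len(sample))
    coeffs = [np.mean(weights * f(sample)) for f in basis]
    density = lambda x: sum(a * f(x) for a, f in zip(coeffs, basis))
    return np.array(coeffs), density

# Orthonormal Legendre basis on [-1, 1] (the first four functions).
legendre_basis = [
    lambda x: np.full_like(x, 1 / np.sqrt(2)),
    lambda x: np.sqrt(3 / 2) * x,
    lambda x: np.sqrt(5 / 8) * (3 * x**2 - 1),
    lambda x: np.sqrt(7 / 8) * (5 * x**3 - 3 * x),
]

rng = np.random.default_rng(0)
sample = rng.uniform(-1, 1, 400) ** 3        # some non-uniform sample in [-1, 1]
coeffs, rho = fit_orthonormal_density(sample, legendre_basis)
print(coeffs)                                # a_0 is always 1/sqrt(2)
print(rho(np.linspace(-1, 1, 5)))            # estimated density at a few points
\end{verbatim}
The only data-dependent work is a single pass over the sample per basis function.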
As the estimated coefficients are just (weighted) average of functions over the sample, it can be naturally generalized to continuous samples, for example representing some arbitrary knowledge, where the average can be calculated by integration.
\begin{equation} a_i = [f_i]=\frac{1}{C} \int_S w(\textbf{x})W(\textbf{x})\,f_i(\textbf{x}) d\textbf{x} \end{equation} with some normalization if needed, for example $C=\int_S w(\textbf{x})W(\textbf{x}) d\textbf{x}$. For clustering applications $C=1$ can be used. We can also mix some continuous arbitrary knowledge with a sample of points from measurements by treating the points as Dirac deltas, combining the sum with the integrations.
Figures \ref{2spir} and \ref{4class} show such examples using spirals as continuous arbitrary knowledge. They also use $W=\pm 1$ or $W=\pm 1 \pm \sqrt{-1}$ sample weights to determine the clusters based on the sign of the fitted function, or its argument in the complex case.
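The following minimal sketch (our illustration, not the code used for the figures) shows the supervised two-class variant on an XOR-like pattern: the two classes get weights $W=\pm 1$, the coefficients are weighted averages of a small ad hoc set of trigonometric features (our choice, not exactly normalized, which does not affect the sign), and the sign of the fitted combination classifies the points:
\begin{verbatim}
import numpy as np

def features(X):
    # A few hand-picked 2D trigonometric features on [-1,1]^2.
    x, y = X[:, 0], X[:, 1]
    return np.stack([np.sin(np.pi * x),
                     np.sin(np.pi * y),
                     np.sin(np.pi * x) * np.sin(np.pi * y),
                     np.sin(2 * np.pi * x),
                     np.sin(2 * np.pi * y)], axis=1)

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (2000, 2))
labels = X[:, 0] * X[:, 1] > 0                    # XOR-like ground truth
W = np.where(labels, 1.0, -1.0)                   # weights +1 / -1 per class

coeffs = (W[:, None] * features(X)).mean(axis=0)  # a_i = [W f_i]
pred = features(X) @ coeffs > 0                   # sign of the fitted function
print("training accuracy:", (pred == labels).mean())
\end{verbatim}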
\begin{figure}
\caption{Examples of generalization to separate two regions (clusters) in 2D: defined by the red and green spiral. One of them was used with weight $W=1$, the second with $W=-1$ to fit order 6 (top) or 7 (bottom) polynomial (left), or Fourier base (right): $(\sin(im),\cos(im))$ for $i$ up to 2 (top) or 3 (bottom). The marked blue region was determined by $\rho>0$ condition. As there was used orthonormal family of functions, each coefficient is just average of a given function. This time instead of discrete samples, we have some arbitrary knowledge given by continuous sets (spirals): the average was made by integration. Due to symmetries, most of coefficients have turned out zero here as specified in the figure, generally suggesting to neglect functions with low coefficient in the considered linear combination. While it properly reconstructs complex boundaries, we can also see introduced some artifacts far from the samples.}
\label{2spir}
\end{figure}
\begin{figure}
\caption{Example of generalizing the approach from fig. \ref{2spir} to simultaneously separate a larger number of classes - this time into 4 classes defined by arbitrary knowledge given as the 4 spirals of different colors. Instead of using $\pm 1$ weights as previously, this time 4 complex weights were used: $W=\pm 1 \pm \sqrt{-1}$. Classification was then made based on the argument (angle) of the fitted linear combination: $\lfloor \frac{2}{\pi}\arg(\rho)\rfloor$ (in which of the 4 quadrants this function lies). The Fourier base $(\sin(im),\cos(im))$ for $i$ up to 5 was used, giving 200 real coefficients, 100 of which are nonzero.}
\label{4class}
\end{figure}
\subsection{Asymptotic behavior ($n\to \infty$)} Assume $W=w=1$ weights, orthonormal family of functions $(\langle f_i,f_j\rangle = \delta_{ij})$ and that the real density can be indeed written as $\rho=\sum_i a_i f_i$ (not necessarily finite).
For estimation of $a_i$, the (\ref{rho}) formula says to use $a_i\approx [f_i]$: average value of $f_i$ over the obtained sample. With the number of points going to infinity, it approaches the expected value of $f_i$ for this probability distribution:
$$ [f_i] \xrightarrow{n\to\infty} \int f_i\, \rho\, d\textbf{x} =\left\langle f_i,\, \sum_j a_j f_j\right\rangle=a_i $$
\noindent As needed, $[f_i]$ approaches the exact value of $a_i$ from the assumed probability distribution. From the Central Limit Theorem, the error of $i$-th coefficient comes from approximately a normal distribution of width being standard deviation of $f_i$ (assuming it is finite) divided by $\sqrt{n}$: \begin{equation} [f_i]-a_i \sim \mathcal{N}\left(0,\frac{1}{\sqrt{n}}\sqrt{\int (f_i-a_i)^2 \rho\, d\textbf{x}}\right) \end{equation} Hence, to obtain twice smaller errors, we need to sample 4 times more points.
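This $1/\sqrt{n}$ behaviour is easy to check numerically; the sketch below (ours, with an arbitrarily chosen true density) compares the dispersion of the estimator $[f_1]$ for $n=100$ and $n=400$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
f1 = lambda x: np.sqrt(3 / 2) * x         # first non-constant Legendre function

def coeff_std(n, trials=2000):
    # Draw `trials` samples of size n from rho(x) = (1+x)/2 on [-1,1]
    # (inverse CDF: x = 2*sqrt(u) - 1) and return the spread of [f1].
    u = rng.uniform(0, 1, (trials, n))
    x = 2 * np.sqrt(u) - 1
    return np.std(f1(x).mean(axis=1))

print(coeff_std(100), coeff_std(400))     # the second is roughly half the first
\end{verbatim}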
\subsection{Nondegenerate kernel corrections} The zero-width kernel limit (to a Dirac delta) was only used to replace the scalar product with a sum while going from (\ref{min1}) to (\ref{min2}). For a general kernel $k$ (nonnegative, integrating to 1), an orthonormal base and the $D=1$ case, we would analogously obtain the estimation:
$$ a_i =\frac{1}{n}\int \sum_{x\in S} k(y-x)\, f_i(y) dy\approx $$ $$\approx\frac{1}{n}\sum_{x\in S} \left(f_i(x)+\frac{1}{2}f''_i(x)\int h^2 k(h)dh \right) $$
\noindent where we have used second order Taylor expansion $\left(f_i(x+h)\approx f_i(x)+h\,f'_i(x)+\frac{1}{2}h^2\,f''_i(x)\right)$ and assumed symmetry of kernel $(k(-h)=k(h))$, zeroing the first order correction. Finally we got the second order correction: \begin{equation} a_i \approx [f_i + v f''_i]\qquad\textrm{for}\quad 0\leq v:= \frac{1}{2} \int h^2 k(h)dh \end{equation} \noindent with $v$ characterizing the width (variance) of the kernel. For $v\neq 0$, we would approach a bit different coefficients than previously, which as discussed were optimal for the $\rho=\sum_i a_i f_i$ density.
Adding to a function its second derivative times a positive value corresponds to smoothing this function, analogously to evolution of diffusion equation ($\partial_t f=\partial_{xx}f$). Hence, the $v\to 0$ limit (to Dirac delta) intuitively corresponds to the sharpest possible fit.
\subsection{Normalization of density} At least in theory, probabilistic density functions should integrate to 1. For polynomials and Fourier series it will be enforced by construction: all but the zero order $f_i$ will integrate to 0, hence, normalization of density can be obtained by just fixing the zero order coefficient. There are also situations, especially in machine learning, where the necessity of density integrating exactly to 1 is not crucial.
Let us now briefly focus on situations where density normalization is required, but cannot be directly enforced by construction. A trivial solution is just performing additional final normalization step: divide obtained $\rho_\textbf{a}$ by $\int\rho_\textbf{a}(\textbf{x})\,d\textbf{x}$.
However, more accurate values should be obtained by using the density normalization condition as a constraint in the minimization (\ref{min}), which can be done with the Lagrange multiplier method. Denoting \begin{equation} F_i:=\int f_i(\textbf{x}) d\textbf{x} \end{equation} \noindent the density normalization condition becomes $\sum_i a_i F_i = C$, where usually $C=1$, but generally it can also be for example $C=\frac{1}{n}\sum_{\textbf{x}\in S} W(\textbf{x})$. For an orthonormal set of functions we analogously get the following equations:
$$\sum_j a_j F_j = C,\qquad\ a_i = [f_i] + \lambda F_i\quad\textrm{for all}\ i.$$ \noindent Substituting to the first equation and transforming: $$\sum_j ([f_j] + \lambda F_j) F_j = C $$ $$ \lambda = \left(C-\sum_j [f_j]F_j\right) / \sum_j (F_j)^2 $$ \begin{equation} a_i = [f_i] + \frac{C-\sum_j [f_j]F_j}{\sum_j (F_j)^2}\, F_i \label{nor} \end{equation}
The experiments from the right column of fig. \ref{overv} were made using this normalized formula for $C=1$.
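A minimal sketch of the correction (\ref{nor}) (our illustration; the integrals $F_i$ have to be supplied for the chosen basis, and the numbers below are made up):
\begin{verbatim}
import numpy as np

def normalized_coeffs(raw, F, C=1.0):
    # Lagrange-multiplier correction a_i = [f_i] + lambda*F_i enforcing
    # sum_i a_i F_i = C, with lambda = (C - sum_j [f_j]F_j) / sum_j F_j^2.
    raw, F = np.asarray(raw, float), np.asarray(F, float)
    lam = (C - raw @ F) / (F @ F)
    return raw + lam * F

raw = np.array([0.40, 0.10, -0.05])   # plain sample averages [f_i]
F   = np.array([2.00, 0.50,  0.00])   # integrals of the basis functions
a = normalized_coeffs(raw, F)
print(a, a @ F)                        # the second printed value is exactly 1
\end{verbatim}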
\section{Polynomials, Fourier and Gaussian} \noindent There will be now briefly discussed three natural examples of fitting with a linear combination, assuming weight $w=1$. As discussed, choosing this family as orthogonal allows to estimate the parameters independently. The first considered example are (Legendre) polynomials, the second is Fourier series, the last one are (Hermite) polynomials multiplied by $e^{-x^2/2}$.
\subsection{Fitting a polynomial in a finite region} A natural first application is parameterizing the density function as a polynomial. The zeroth order function should be constant, but also needs to integrate to a finite value. Hence, we have to restrict to a finite region here (of volume $v<\infty$), preferably a range like $[-1,1]$ as in fig. \ref{overv}, or a product of ranges (hyperrectangle) in a higher dimension. We will only use points inside this region to approximate (estimate) the density inside this region with a polynomial - this region should be chosen as a relatively small one containing the behaviour of interest. All integrals and sums here are restricted to this finite region, for example by setting $w=0$ outside.
This zero order function is $f_0=1/\sqrt{v}$ to get normalization $\|f_0\|=1$. Its estimated coefficient is average of this function over the sample, which is always $a_0 = 1/\sqrt{v}$. Hence, the zero order estimation is just $\rho\approx a_0 f_0=1/v$ constant density (integrates to 1).
The following functions (polynomials) should be orthogonal to $f_0$, which means they integrate to 0 $(\int f_i\, d\textbf{x}=0)$, hence
\begin{equation} \rho_\textbf{a} = \frac{1}{v} +\sum_{i\geq 1} a_i f_i \end{equation}
\noindent always integrates to 1: probabilistic normalization is enforced for any choice of the $a_i$ parameters ($i\geq 1$). However, we need to remember that such a density may sometimes take negative values in the assumed region, like some of the red lines going below 0 in fig. \ref{overv}.
Orthonormal polynomials for $[-1,1]$ range and $w=1$ weight are known as Legendre polynomials. The first four are:
$$\frac{1}{\sqrt{2}},\ \sqrt{\frac{3}{2}}x,\ \sqrt{\frac{5}{8}}(3x^2-1),\ \sqrt{\frac{7}{8}}(5x^3-3x) $$
Thanks to orthogonality, we can independently ask about their coefficients. Finally, density estimation in $[-1,1]$ range with second order polynomial becomes:
\begin{equation} \rho\approx \frac{1}{2}+\frac{3}{2}[x]x+\frac{5}{8}\left(3[x^2]-1\right)\left(3x^2-1\right) \label{square} \end{equation}
\noindent where we have used the linearity $[\alpha f+\beta g]=\alpha [f]+\beta [g]$, which makes it sufficient to calculate only averages of monomials over the sample - estimators of the moments. The analogous third order formula was used (directly written with grouped $[x^i]$) in the experiments presented in the left column of fig. \ref{overv}.
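For instance, a direct implementation of the second order formula (\ref{square}) only needs the moment estimators $[x]$ and $[x^2]$; a small Python sketch of ours:
\begin{verbatim}
import numpy as np

def rho_second_order(sample):
    # rho(x) = 1/2 + (3/2)[x] x + (5/8)(3[x^2]-1)(3x^2-1) on [-1,1]
    s = np.asarray(sample, float)
    m1, m2 = s.mean(), (s**2).mean()
    return lambda x: 0.5 + 1.5*m1*x + 0.625*(3*m2 - 1)*(3*x**2 - 1)

rng = np.random.default_rng(2)
sample = 2 * rng.beta(2, 5, 500) - 1        # some skewed sample inside [-1,1]
rho = rho_second_order(sample)
print(rho(np.linspace(-1, 1, 5)))
\end{verbatim}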
For a different interval, the Legendre polynomials should be properly shifted and rescaled. In the $D$ dimensional case, if the region is a product of ranges (hyperrectangle), the orthogonal polynomials can be chosen as products of $D$ (rescaled) Legendre polynomials. Otherwise, the Gram-Schmidt orthonormalization procedure can be used to get an orthonormal base.
For example for $[-1,1]^D$ hypercube, the polynomial for density can be naturally chosen as: $$\rho(\textbf{x})=\frac{1}{2^D} + \sum_{i_1,\ldots,i_D} a_{i_1\ldots i_D}\,f_{i_1}(x_1)\cdot\ldots\cdot f_{i_D}(x_D) $$
The number of coefficients grows exponentially with dimension: fitting an order-$m$ polynomial in $D$ dimensions requires $m^D$ coefficients. However, some of them might be small and can then be neglected in the linear combination.
Before applying such a fit, it is crucial to properly choose the finite region of focus (for instance around the boundary between two classes), for example by normalizing it to the $[-1,1]^D$ hypercube. This approach often brings some artifacts far from the sample, especially near the boundaries of the region. To reduce this effect, better results can be obtained by using functions vanishing at these boundaries, like $\sin(j \pi x)$.
\subsection{Fourier series} Like in Fig. \ref{2spir}, for some type of data Fourier base might be more appropriate, still requiring working on a finite region (bounded or torus), preferably a hyperrectangle. Its zero order term is again constant $1/v$ and guards the normalization. The orthonormal base for $[-1,1]$ range is formed by $\sin(j\pi x)$ and $\cos(j \pi x)$ functions, which do not influence the normalization:
\begin{equation} \rho = \frac{1}{2}+\sum_{j\geq 1} a_j\sin(j\pi x) + b_j \cos(j \pi x)\end{equation} where $a_j=[\sin(j\pi x)]$, $b_j=[\cos(j\pi x)]$ are just the averages of this function over the sample. For $i$ up to $m$ in $D$ dimensions we need $(2m)^D$ coefficients here.
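The same recipe for the Fourier base on $[-1,1]$ can be sketched as follows (our illustration; the sample below is arbitrary):
\begin{verbatim}
import numpy as np

def fourier_density(sample, m):
    # rho(x) = 1/2 + sum_j a_j sin(j pi x) + b_j cos(j pi x),
    # with a_j, b_j obtained as plain averages over the sample.
    s = np.asarray(sample, float)
    a = [np.mean(np.sin(j * np.pi * s)) for j in range(1, m + 1)]
    b = [np.mean(np.cos(j * np.pi * s)) for j in range(1, m + 1)]
    def rho(x):
        out = 0.5 * np.ones_like(np.asarray(x, float))
        for j in range(1, m + 1):
            out = out + a[j-1]*np.sin(j*np.pi*x) + b[j-1]*np.cos(j*np.pi*x)
        return out
    return rho

rng = np.random.default_rng(3)
sample = rng.uniform(-1, 1, 1000) ** 3
print(fourier_density(sample, m=3)(np.linspace(-1, 1, 5)))
\end{verbatim}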
Observe that sine terms vanish at the boundaries of interval (hypercube) - using only them we can reduce artifacts at the boundaries.
If we need to work on a two-dimensional sphere, the complete orthonormal base of spherical harmonics can be used. In a more general case, the orthonormal base can be obtained by Gram-Schmidt orthonormalization procedure.
\subsection{Global fitting combination of vanishing functions}
Estimating the density function with polynomials or Fourier series requires restriction to some finite region, which is a strong limitation and often brings some artifacts. To overcome it, we can for example use a combination of vanishing functions instead: with values dropping to zero away from the central point, for example like $e^{-x^2}$, $e^{-|x|}$ or $1/|x|^K$. For the convenience of applying formula (\ref{rho}), it is preferred that this set of functions is orthogonal.
A well known example of such an orthogonal family of functions are the Hermite polynomials multiplied by $e^{-x^2/2}$. Denoting by $h_i$ the $i$-th Hermite polynomial, the following set of functions is orthonormal for the scalar product $\langle f,g\rangle = \int_{\mathbb{R}} f(x)g(x) dx$ (weight $w=1$):
\begin{equation} f_i(x) = \frac{1}{\sqrt{2^i\, i!\,\sqrt{\pi} }}\ h_i(x)\ e^{-x^2/2} \label{herm} \end{equation}
This time the index starts with $i=0$. The (physicists') Hermite polynomials $h_i$ for $i=0,1,2,3,4,5$, for which the normalization in (\ref{herm}) holds, are correspondingly:
$$ 1,\ 2x,\ 4x^2-2,\ 8x^3-12x,\ 16x^4-48x^2+12,\ 32x^5-160x^3+120x $$
As the space has infinite volume here, we cannot use a constant function in the considered base. Previously it was the constant zero order term that enforced $F_i = \int f_i\, dx =0$ for all the remaining orthogonal functions, making $\int \rho\, dx=1$ guaranteed by the zero order term alone and not changed by the other parameters; this is no longer automatic here.
The odd order terms are asymmetric here, hence $F_i=0$ for them. However, even order terms integrate to nonzero values, hence the normalized formula (\ref{nor}) should lead to a bit better accuracy. It was used to obtain the right column of fig. \ref{overv}.\\
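A sketch of this fitting step (ours; it uses the physicists' Hermite polynomials from \texttt{numpy.polynomial.hermite}, for which (\ref{herm}) is orthonormal, and the test samples below are arbitrary):
\begin{verbatim}
import numpy as np
from numpy.polynomial import hermite as H     # physicists' Hermite polynomials
from math import factorial, pi, sqrt

def hermite_function(i, x):
    # f_i(x) = H_i(x) exp(-x^2/2) / sqrt(2^i i! sqrt(pi)), orthonormal on R
    c = np.zeros(i + 1); c[i] = 1.0
    return H.hermval(x, c) * np.exp(-x**2 / 2) / sqrt(2**i * factorial(i) * sqrt(pi))

def gaussian_distortion_coeffs(sample, order=5):
    # Normalize the sample first (as in the text), then a_i = [f_i];
    # for an exact standard Gaussian all a_i with i >= 1 vanish in expectation.
    s = np.asarray(sample, float)
    s = (s - s.mean()) / s.std()
    return np.array([np.mean(hermite_function(i, s)) for i in range(order + 1)])

rng = np.random.default_rng(4)
print(gaussian_distortion_coeffs(rng.normal(size=5000)))        # only noise
print(gaussian_distortion_coeffs(rng.normal(size=5000)
                                 + rng.exponential(size=5000))) # visible distortion
\end{verbatim}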
The used $e^{-x^2/2}$ is characteristic of a Gaussian distribution with standard deviation equal to 1. The remaining terms of the linear combination can slightly modify this standard deviation; however, the natural first step before applying the discussed fitting is normalization. As in ICA: first shift to the mean value to centre the sample, then perform PCA (principal component analysis): find eigenvectors and eigenvalues of the covariance matrix, and rescale these directions correspondingly - getting a normalized sample: with mean 0 and identity covariance matrix.
For such normalized sample we can start looking for distortions from the Gaussian distribution (often corresponding to a noise). Such sample is prepared to directly start fitting the $f_i$ from formula (\ref{herm}). However, it might be worth to first rotate it to emphasize directions with the largest distortion from the Gaussian distribution - which are suspected to carry the real signal. In ICA these directions are usually chosen as the ones maximizing kurtosis. Here we could experiment with searching for directions maximizing some $a_i$ coefficient instead: average of $f_i$ in the given direction over the sample.
Observe that in contrast to ICA, the discussed approach also allows to reconstruct the modelled distortion of Gaussian distribution. It can also model distortion of different probability distributions.
Generally, while high dimensional cases might require relatively huge base, most of the coefficients might turn out practically zero and so can be removed from the sum for density. Like in ICA, it might be useful to search for the really essential distortions (both direction and order), which may represent the real signal. For example orthogonal base can be built by searching for a function orthonormal to the previously found, which maximizes own coefficient - then adding it to the base and so on.
\section{Some further possibilities} \noindent The discussed approach is very general, here are some possibilities of extensions, optimizations for specific problems.
One line of possibilities is using a linear combination of different family of functions, for example obtained by Gram-Schmidt orthonormalization. For instance, while there was discussed perturbation of Gaussian distribution with Hermite polynomials, in some situations heavy tails might be worth to consider instead, vanishing for example like $e^{-|x|}$ or $1/|x|^K$.
While we have focused on global fitting in the considered region, there could be also considered local ones, for example as a linear combination of lattice of vanishing functions like wavelets. Or divide the space into (hyper)rectangles, fit polynomial in each of them and smoothen on the boundaries using a weighted average.
A different approach to local fitting is through modifying the weight $w$: focus on some point by using a weight vanishing while going away from this point. Then for example fit polynomial describing behavior of density around this point, getting estimation of derivatives of density function in this point, which then could be combined in a lattice by using some splines.\\
Another line of possibility is that while there were only discussed linear combinations as the family of density functions, it might be worth to consider also some different families, for example enforcing being positive and vanishing, like: $$f_{\textbf{a}}(x) = e^{-\sum_i a_i\, x^i}\quad\textrm{or}\quad f_{\textbf{a}}(x) = \frac{1}{\sum_i a_i\, x^i}$$ where in both cases the polynomial needs to have even order and positive dominant coefficient. In the latter case it additionally needs to be strictly positive. Their advantage is that they are both vanishing and nonnegative, as a density function should be.\\
Finally, a wide space of possibilities is using the discussed method for tasks other than estimation of a probability density integrating to 1. For example, predicting the probability of the current symbol based on its previous occurrences, which is a crucial task in data compression. Or the sampled points may carry some weights, based on which we might want to parameterize a density profile, for example of a chemical molecule~\cite{mole}.
Very promising is also application for the clustering problem, especially the supervised situation: where we need to model a complex boundary between two labeled classes - using polynomials generalizes linear separators used in standard approaches, and here we can directly calculate the coefficients as just averages, eventually improving them later e.g. by backpropagation. Such classification can be done by assigning weights of opposite signs (or complex) to points from different classes, and look for the sign of the fitted density function (or argument). Its zero value manifold models the boundary. Often some classes are under-represented, what should be compensated by increasing weights of its representatives.
\section{Conclusions} \noindent We have presented and discussed a very general and powerful, yet basic-looking approach. It might already be known, but it seems to be missing from the standard literature. Its basic cases discussed here are literally fitting a polynomial or Fourier series to the sample (in a finite region), or fitting a distortion from a standard probability distribution like the Gaussian. There is a wide range of potential applications, starting with modelling the local density of data of various origins, testing PRNGs, finding the real signal in noise in analogy to ICA, reconstructing the density of this distortion, or various clustering problems. This article only introduces the topic, discussing very basic possibilities and mentioning only some of the many ways to expand this general approach of mean-square fitting in the Dirac delta limit of smoothing the sample with a kernel.
\end{document}
\begin{document}
\title{Simulating Arbitrary Pair-Interactions by a Given Hamiltonian:
Graph-Theoretical Bounds on the Time Complexity}
\abstract{ We use an $n$-spin system with permutation symmetric $zz$-interaction for simulating arbitrary pair-interaction Hamiltonians. The calculation of the required time overhead is mathematically equivalent to a separability problem of $n$-qubit density matrices. We derive lower and upper bounds in terms of chromatic index and the spectrum of the interaction graph. The complexity measure defined by such a computational model is related to gate complexity and a continuous complexity measure introduced in a former paper. We use majorization of graph spectra for classifying Hamiltonians with respect to their computational power. }
\section{Introduction} The most common models for quantum computers use single and two qubit gates as basic transformations in order to generate arbitrary unitary operations on the quantum registers. Most discussions about the generation of quantum algorithms, quantum codes and possible realizations have successfully been based on this concept. Mostly, even the definition of quantum complexity refers to such a model \cite{nielsen}. Nevertheless there is a priori no reason why two qubit gates should be considered as basic operations for future quantum computers. In principle every quantum system could serve as a quantum register provided that its time evolution can be controlled in a universal way. At first sight, every definition of quantum complexity seems hence to be adequate only for a specific model of quantum computation. But it seems to be a rather general feature of Hamiltonians available in nature that particles interact with other particles in such a form that the total Hamiltonian is a sum of pair-interactions. Therefore we want to base quantum complexity theory only on such a general feature.\footnote{This might be seen in the spirit of D.~Deutsch's statement ``What computers can or cannot compute is determined by the laws of physics alone and not by pure mathematics.''\cite{nielsen}, Chapter II} This feature justifies the following control theoretic model: If $n$ qubits are assumed to be physically represented by $n$ particles, the only part of the system's Hamiltonian which can be changed by external access is the free Hamiltonian of each qubit. These $1$-particle Hamiltonians might be controllable since they are only effective Hamiltonians which are phenomenologically given by an interaction with many external particles (mean-field approximation \cite{duffield}). Based on results of quantum control theory in $2$-spin systems \cite{kha} we investigate the problem of simulating arbitrary pair-interaction Hamiltonians by a given one. We assume that the Hamiltonian of the $n$-system is a permutation invariant $zz$-interaction and show that the computational power\footnote{in the sense of time required to generate unitaries} of this Hamiltonian (together with local transformations on each spin) is at least as large as the power of quantum computers with $2$ qubit gates. For infinitesimal time evolutions, it turns out to be even stronger.
We develop a theory, where the computational power of a Hamiltonian for simulating arbitrary Hamiltonians is characterized by features of the interactions graphs. Standard concepts of graph theory like chromatic index and spectrum of the adjacency matrix together with majorization turn out to provide lower and upper bounds on the simulation overhead. Here we are interested in the exact overhead and not only in polynomial equivalence as in \cite{dodd}.
\section{Our model of computation} Based on the approach of \cite{kha} we consider the following model. The quantum system is a spin system, i.e.\ its Hilbert space is $\mathcal{H}_n=(\mathbb{C}^2)^{\otimes n}$, and its Hamiltonian $H_d\in\mathfrak{su}(2^n)$ consists only of pair-interactions, i.e.\ \begin{equation} H_d=\sum_{1\le k<l\le n} H_{k,l} \end{equation} where $H_{kl}$ acts only on the Hilbert space of the qubits $k$ and $l$. We assume that for every $k$ and $l$ the Hamiltonian $H_{kl}$ describes a non-trivial coupling and is traceless. The system's Hamiltonian $H_d$ is also called the \emph{drift} Hamiltonian since it is always present. We assume that we can perform all unitaries in the \emph{control} group $K=SU(2)\otimes\cdots\otimes SU(2)$ arbitrarily fast compared to the time evolution of the internal couplings between the qubits. Let $G$ be the unitary Lie group $SU(2^n)$ and $u\in G$ be a unitary we want to realize. To achieve this all we can do is perform $v_1\in K$, wait $t_1$, perform $v_2\in K$, wait $t_2\,,\ldots,$ perform $v_p\in K$ and wait $t_p$. The resulting unitary is $$u=\exp(i H_d t_p) v_p \cdots \exp(i H_d t_2) v_2 \exp(i H_d t_1) v_1\,. $$ This can be written as $$ u=k_p \exp(i k_p^\dagger H_d k_p) \cdots \exp(i k_2^\dagger H_d k_2)
\exp(i k_1^\dagger H_d k_1)$$ where $k_i=v_i \cdots v_1$ for $i=1,\ldots,p$. This is just the solution of a time-dependent Schr\"odinger equation with piecewise constant Hamiltonians -- conjugates of the drift Hamiltonian $H_d$ by unitaries of $K$ -- followed by the unitary $k_p\in K$. Let $Ad_K(H_d)$ denote the conjugacy class $$ Ad_K(H_d)=\{Ad_k(H_d)=k^{\dagger} H_d k\mid k\in K\}\,. $$ \begin{Definition} A continuous time algorithm $A$ of running time $T$ is a piecewise constant function $t\mapsto H(t)$ from the interval $[0,T]$ into the set $Ad_K(H_d)$ followed by some local unitary $k\in K$. We say $A$ implements $u$ if $u=k u(T)$ where $(u(t))_{t\in [0,T]}$ is the solution of the time-dependent Schr\"odinger equation $(d/dt) u(t) = -iH(t) u(t)$ with $u(0)=I$. \end{Definition} The complexity of a unitary in this model is the running time of an optimal continuous time algorithm.
Let $\sigma_\alpha^i$ denote the Pauli spin matrix $\sigma_\alpha$ ($\alpha=x,y,z$) acting on the $i$th spin. For simplicity, we assume that the drift Hamiltonian is \begin{equation} H_d=\sum_{1\le k<l\le n} \sigma_z^k\sigma_z^l\,. \end{equation} The physical systems we have in mind might be, for example, solid-state systems with long-range interactions. Of course one might object that the interaction strength always decreases with the distance between the interacting particles. It will turn out that the assumption of non-decreasing interaction strengths makes our model rather strong with respect to its computational power. One should understand our assumptions as the attempt to use a strong computational model which is still physically justifiable. Many aspects of our theory can be developed in strong analogy for more general drift Hamiltonians.
In Section~\ref{LowUpBounds} we will compare the computational power of our model with the power of quantum computers based on $2$-qubit gates. Our arguments always refer to infinitesimal time evolutions, i.e., we will show that our model can implement quantum gates without overhead since it can simulate the time evolution that implements parallelized quantum gates.
First we have to define what we mean by simulating the time evolution $\exp(i H t)$ during a small time interval $[0,\epsilon]$ where $H$ is an arbitrary pair-interaction Hamiltonian. Assume we have written $H$ as a positive linear combination $H=\sum_j \mu_j H_j$ with $\mu_j>0$, where each $H_j$ is an element of the conjugacy class $Ad_K(H_d)$. For small $\epsilon$ the unitary \[ \prod_j \exp(i\epsilon\mu_j H_j) \] is a good approximation of \[ \exp(i\epsilon H)=\exp(i\epsilon\sum_j\mu_j H_j)\,. \] This approximation is implemented if the system evolves for the time $\epsilon\mu_j$ with respect to the Hamiltonian $H_j$. The sum $\mu=\sum_j\mu_j$ is exactly the time overhead of the simulation. Hence the problem is to express $H$ as a positive linear combination such that the overhead $\mu$ is minimal. Of course such a procedure might not be optimal if one were interested in the implementation of $\exp(iHs)$ for one specific value of $s$. Here we want to imitate the whole dynamical time evolution $(\exp(iHs))_{s>0}$ in arbitrarily small steps $\epsilon$. Then the optimization clearly reduces to the convex problem stated above.
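To make the approximation step concrete, the following small Python sketch (our own illustration, not part of the original argument; it assumes that NumPy and SciPy are available, and the operators and weights are chosen ad hoc) compares the product of the short evolutions with $\exp(i\epsilon H)$ for a two-qubit example in which both $H_j$ are local conjugates of $\sigma_z\otimes\sigma_z$; the deviation is of order $\epsilon^2$.

\begin{verbatim}
# First-order check that the product of short evolutions approximates
# exp(i*eps*H) when H = sum_j mu_j H_j (illustrative 2-qubit case).
import numpy as np
from scipy.linalg import expm

sz = np.diag([1.0, -1.0])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
H1 = np.kron(sz, sz)   # the drift term sigma_z sigma_z
H2 = np.kron(sx, sx)   # a local conjugate of the drift (Hadamard on both qubits)
mu = [0.5, 0.5]        # positive coefficients; overhead = sum(mu) = 1
H = mu[0] * H1 + mu[1] * H2

eps = 1e-3
target = expm(1j * eps * H)
product = expm(1j * eps * mu[1] * H2) @ expm(1j * eps * mu[0] * H1)
print(np.linalg.norm(product - target, 2))   # of order eps**2
\end{verbatim}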
In the following it will be convenient to use a concise representation for the drift Hamiltonian and the interaction to be simulated: a pair interaction Hamiltonian between qubits $k$ and $l$ can be written as \begin{equation} H_{kl} = \sum_{\alpha,\beta=x,y,z} J_{kl;\alpha\beta} \sigma_{\alpha}^k\sigma_{\beta}^l\,. \end{equation} The strengths of the components are represented by the pair-interaction matrix \begin{equation} J_{kl}=\left( \begin{array}{ccc} J_{kl;xx} & J_{kl;xy} & J_{kl;xz} \\ J_{kl;yx} & J_{kl;yy} & J_{kl;yz} \\ J_{kl;zx} & J_{kl;zy} & J_{kl;zz} \end{array} \right)\in\mathbb{R}^{3\times 3}\,. \end{equation} The total Hamiltonian $H$ is represented by the $J$-matrix \begin{equation} J=\left(
\begin{array}{c|c|c|c|c} 0 & J_{12} & J_{13} & \cdots & J_{1n} \\ \hline J_{21} & 0 & J_{23} & \cdots & J_{2n} \\ \hline J_{31} & J_{32} & 0 & \cdots & J_{3n} \\ \hline \vdots & \vdots & \vdots & \ddots & \vdots \\ \hline J_{n1} & J_{n2} & J_{n3} & \cdots & 0 \end{array} \right)\in\mathbb{R}^{3n\times 3n}\,. \end{equation}
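As a data-structure illustration (our own sketch, assuming NumPy; the helper \verb|j_matrix| is ours and qubit indices are $0$-based), the following Python function assembles the $3n\times 3n$ $J$-matrix from the $3\times 3$ pair-interaction blocks:

\begin{verbatim}
# Assemble the 3n x 3n J-matrix from the 3x3 pair-interaction blocks J_kl.
import numpy as np

def j_matrix(n, blocks):
    """blocks maps a pair (k, l) with k < l to the 3x3 array J_kl."""
    J = np.zeros((3 * n, 3 * n))
    for (k, l), Jkl in blocks.items():
        J[3 * k:3 * k + 3, 3 * l:3 * l + 3] = Jkl
        J[3 * l:3 * l + 3, 3 * k:3 * k + 3] = Jkl.T   # J_lk = J_kl^T
    return J

# Example: the complete zz-interaction on n = 3 qubits (J_kl;zz = 1, all else 0).
zz = np.zeros((3, 3)); zz[2, 2] = 1.0
J = j_matrix(3, {(0, 1): zz, (0, 2): zz, (1, 2): zz})
print(J.shape, np.allclose(J, J.T))
\end{verbatim}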
To explain more explicitly why our simulation problem is a convex optimization problem, we recall that every convex combination $\mu H_1 + (1-\mu)H_2$ of two Hamiltonians $H_1$ and $H_2$ can be simulated with overhead $1$ if $H_1$ and $H_2$ can be. Remarkably, the problem of specifying the set of Hamiltonians which can be simulated with overhead $1$ is related to the problem of generalizing Bell inequalities to $n$-qubit states. More specifically, the convex problem can be reduced to the question `how strong can $2$-spin correlations be in a separable $n$-qubit quantum state?'
\begin{Theorem}[Optimal simulation]\label{optimal} The Hamiltonian $H$ can be simulated with overhead $\mu$ if and only if there is a separable quantum state $\rho$ in $(\mathbb{C}^2)^{\otimes n}$ such that $$ \frac{1}{\mu}J+I= (\mathrm{tr}(\rho\sigma_{\alpha}^k\sigma_{\beta}^l))_{kl;\alpha\beta}$$ where $J$ denotes the $J$-matrix of $H$ and $I$ the $3n\times 3n$ identity matrix. \end{Theorem}
Proof: By rescaling the considered Hamiltonian, it is sufficient to show that the statement is true for all Hamiltonians in $Ad_K(H_d)$ with $\mu=1$. Assume we have written $H$ as a convex combination $H=\sum_j\mu_j H_j$ with $H_j\in Ad_K(H_d)$. In order to show that there is a separable state of the desired form it is sufficient to show that the $J$-matrix of each Hamiltonian $H_j$ satisfies the equation of the theorem for an appropriate separable state. Let $H_j=uH_du^\dagger$. Then $H_j$ can be represented by $n$ three-dimensional real unit vectors: to each qubit we associate the vector
$|J_k\rangle=(J_{k;x},J_{k;y},J_{k;z})^t\in \mathbb{R}^3$ where $u_k\sigma_z u_k^\dagger=J_{k;x}\sigma_x + J_{k;y}\sigma_y + J_{k;z}\sigma_z$ and $u=u_1\otimes\ldots\otimes u_n$. The pair-interaction matrices are given
by the matrix products $J_{kl}=|J_k\rangle\langle J_l|$.
By the Bloch sphere representation we have a correspondence between the unit vectors $|J_k\rangle$ and the projections $\rho_k$ in $\mathbb{C}^2$ defined by $J_{k;\alpha}=\mathrm{tr}(\rho_k \sigma_{\alpha})$. Let $\rho$ be the product state $\rho:=\rho_1\otimes\ldots\otimes\rho_n$. Then we have $J_{kl;\alpha\beta}=\mathrm{tr}(\rho \sigma_{\alpha}^k\sigma_{\beta}^l)$ for all $k\neq l$. Note that the product of two different Pauli matrices is the third Pauli matrix multiplied by a scalar. The only problem that remains is that we may have $\mathrm{tr}(\rho\sigma_\alpha^k\sigma_\beta^k)\neq 0$ for $\alpha\neq\beta$. We substitute $\rho$ by a state $\bar{\rho}$ in such a way that the expectation values of all traceless $1$-qubit observables vanish and the
expectation values of all considered $2$-qubit observables remain unchanged. For every $|J_k\rangle$ we can find $U'_k\in SO(3)$ such that
$U'_k|J_k\rangle=-|J_k\rangle$. This rotation corresponds to conjugation of the qubit $k$ by a unitary $u'_k$. To
$-|J_k\rangle$ corresponds the projection $\rho'_k := I_2-\rho_k$. Let $$ \bar{\rho}:= \frac{1}{2}
(\rho_1\otimes\ldots\otimes\rho_n + \rho'_1\otimes\ldots\otimes\rho'_n) \,. $$ Then we have $\mathrm{tr}(\bar{\rho} \sigma_{\alpha}^k\sigma_{\beta}^l)= \mathrm{tr}(\rho \sigma_{\alpha}^k\sigma_{\beta}^l)$ for all $k\neq l$ (all Bloch vectors are multiplied by $-1$ and therefore there is no effect on the pair terms), while the diagonal blocks $(\mathrm{tr}(\bar{\rho}\,\sigma_{\alpha}^k\sigma_{\beta}^k))_{\alpha\beta}$ become the $3\times 3$ identity matrix.
Assume conversely that we have a separable state of the desired form. Take its decomposition into pure product states. By the Bloch sphere representation we obtain the required conjugations of the drift Hamiltonian $H_d$. The times for which they have to be applied are given by the coefficients in the convex decomposition.
$\Box$
\section{Lower and upper bounds}\label{LowUpBounds} A simple lower bound on the simulation time overhead can be derived from the fact that $J/\mu+I$ has to be a positive semidefinite matrix, which is an easy conclusion from Theorem~\ref{optimal}.
\begin{Corollary}[Lower bound] The absolute value of the smallest eigenvalue of the $J$-matrix is a lower bound on the simulation overhead of $H$. \end{Corollary}
Proof: The matrix $$ (\mathrm{tr}(\rho\sigma_{\alpha}^k\sigma_{\beta}^l))_{kl;\alpha\beta} $$ is positive semidefinite for every state $\rho$ in $(\mathbb{C}^2)^{\otimes n}$:
let $|d\rangle=(d_{k;\alpha})$ be an arbitrary vector and $A=\sum_{k,\alpha} d_{k;\alpha} \sigma_{\alpha}^k$. Then we have $$ \sum_{k,l,\alpha,\beta} d_{k;\alpha} \mathrm{tr}(\rho\sigma_{\alpha}^k\sigma_{\beta}^l) d_{l;\beta} = \mathrm{tr}(\rho AA^*)\ge 0\,. $$
$\Box$
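For a concrete evaluation of this bound, the following Python sketch (our own illustration, assuming NumPy; the helper \verb|zz_j_matrix| is ours and indices are $0$-based) builds the $J$-matrix of $-H_d$ for the complete $zz$-interaction on $n=4$ qubits; its smallest eigenvalue is $-(n-1)$, so inverting the drift requires an overhead of at least $n-1$ (compare also the discussion of time reversal in the last section).

\begin{verbatim}
# Evaluate the eigenvalue lower bound for -H_d, the negated complete
# zz-interaction on n qubits: the smallest eigenvalue of its J-matrix
# is -(n-1), hence the inversion overhead is at least n-1.
import numpy as np

def zz_j_matrix(n, weights):
    """3n x 3n J-matrix of sum_{k<l} weights[k, l] sigma_z^k sigma_z^l."""
    J = np.zeros((3 * n, 3 * n))
    for k in range(n):
        for l in range(k + 1, n):
            J[3 * k + 2, 3 * l + 2] = weights[k, l]
            J[3 * l + 2, 3 * k + 2] = weights[k, l]
    return J

n = 4
w = -(np.ones((n, n)) - np.eye(n))   # couplings of -H_d on the complete graph
print(np.linalg.eigvalsh(zz_j_matrix(n, w)).min())   # -(n-1) = -3.0
\end{verbatim}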
Now we show that our computational model is at least as powerful as the usual model with $2$-qubit gates, even if one also cares about constant factors in the overhead. Here we briefly describe the quantum circuit model and introduce the \emph{weighted depth} following \cite{jan}. It is a complexity measure for unitary transformations based on the quantum circuit model. We assume that two qubit gates acting on disjoint pairs of qubits can be implemented simultaneously and define:
\begin{Definition} A {\em quantum circuit} $A$ {\em of depth} $s$ is a sequence of $s$ steps $\{A_1,\dots,A_s\}$ where every step consists of a set of two qubit gates $\{u_{kl}\}_{k,l}$ acting on disjoint pairs $(k,l)$ of qubits. Every step $i$ defines a unitary operator $v_i$ by taking the product of all corresponding unitaries in any order. The product $u:=\Pi_{i\leq s} v_i$ is the `unitary operator implemented by $A$'. \end{Definition} The following quantity measures the deviation of a unitary operator from the identity: \begin{Definition} The {\em angle} of an arbitrary unitary operator $u\in SU(4)$
is the smallest possible norm\footnote{Here $\|.\|$ denotes the operator norm
given by $\|a\|:=\max_x \|ax\|$ where $x$ runs over the unit vectors of the
corresponding Hilbert space.} $\|a\|$ of a self-adjoint operator $a\in\mathfrak{su}(4)$ which satisfies $\exp(ia)=u$. \end{Definition} It coincides with the time required for the implementation of $u$ if the norm of the used Hamiltonian is $1$. We consider only the angle of two-qubit gates, i.e.\ we do not include the angle of local gates in the definition of the weighted depth. The notion of angle allows us to formulate a modification of the term `depth' which will later turn out to be decisive in connecting complexity measures of discrete and continuous algorithms: \begin{Definition} Let $\alpha_i$ be the maximal angle of the unitaries performed in step $i$. Then the {\em weighted depth} is defined to be the sum $\alpha=\sum_i\alpha_i$. \end{Definition}
Assuming that the implementation time of a unitary is proportional to its angle, the weighted depth is the running time of the algorithm. We first need two technical lemmas to show that such an algorithm can be simulated by our computational model with complete $zz$-Hamiltonian without any time overhead. \begin{Lemma} Let $M$ be a set of qubit pairs, such that no two pairs contain a common qubit. Then we can simulate \begin{equation} H_M=\sum_{(k,l)\in M} \sigma_z^k\sigma_z^l \end{equation} with overhead $1$. \end{Lemma}
Proof: This has been noted in \cite{leung}. Lemma~\ref{independent} proves a more general statement.
$\Box$ \begin{Lemma} Let $H_d=\sigma_z\otimes\sigma_z$ be the drift Hamiltonian of a $2$-spin system. All Hamiltonians $H\in\mathfrak{su}(4)$ can be simulated with overhead at most
$\|H\|$. \end{Lemma}
Proof: We first assume that $H$ contains no local terms, i.e.\ $H=\sum_{\alpha,\beta} J_{\alpha\beta} \sigma_{\alpha}\otimes\sigma_{\beta}$. Let $J_{12}$ be the matrix representing $H$. Conjugation of $H$ by $k=u\otimes v\in SU(2)\otimes SU(2)$ corresponds to multiplication of $J_{12}$ by $U\in SO(3)$ from the left and by $V\in SO(3)$ from the right. By the singular value decomposition \cite{horn} there are $U,V\in SO(3)$ such that $J_{12}=U\mbox{diag}(s_x,s_y,s_z)V$ where $s_x,s_y,s_z$ are the singular values of $J_{12}$. Equivalently, there is $k\in SU(2)\otimes SU(2)$ such that $kHk^\dagger=H_{s_x,s_y,s_z}$ where $H_{s_x,s_y,s_z}= s_x\sigma_x\otimes\sigma_x+s_y\sigma_y\otimes\sigma_y+s_z\sigma_z\otimes\sigma_z$. By computing the eigenvalues we see that
$\|H_{s_x,s_y,s_z}\|=\sum_{\alpha} |s_\alpha|$. The simulation time overhead cannot be more than the right hand side since each term $s_\alpha \sigma_\alpha \otimes \sigma_\alpha$ can be simulated with overhead $|s_\alpha|$.
Let $H$ contain local terms, i.e.\ $H=\sum_{\alpha} J_{\alpha\alpha} \sigma_{\alpha}\otimes\sigma_{\alpha} + 1\otimes a + b\otimes 1$. We can split $H=H'+H''$ where $H'$ is the non-local part and $H''$ the local one. By the Trotter formula we can simulate the parts
independently. The simulation of $H''$ takes no time by assumption. It remains to show that $\|H'\|\le\|H\|$. We may assume that $H$ is invariant with respect to qubit permutation since
$\|\frac{1}{2}H+\frac{1}{2}H_{ex}\|\le\|H\|$ where $H_{ex}$ is the Hamiltonian obtained from $H$ by exchanging the qubits. By conjugation we can obtain a Hamiltonian of the form $H=H_{s_x,s_y,s_z} + s (1\otimes\sigma_x + \sigma_x\otimes 1)$.
By computing the eigenvalues we see that $\|H_{s_x,s_y,s_z}\|\le\|H\|$.
$\Box$
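The norm identity used in this proof can be checked quickly; the following Python sketch (ours, assuming NumPy, with randomly chosen coefficients) verifies $\|s_x\,\sigma_x\otimes\sigma_x+s_y\,\sigma_y\otimes\sigma_y+s_z\,\sigma_z\otimes\sigma_z\|=|s_x|+|s_y|+|s_z|$.

\begin{verbatim}
# Check that || s_x XX + s_y YY + s_z ZZ || = |s_x| + |s_y| + |s_z|.
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1, -1]).astype(complex)

s = np.random.default_rng(2).normal(size=3)
H = s[0] * np.kron(X, X) + s[1] * np.kron(Y, Y) + s[2] * np.kron(Z, Z)
print(np.abs(np.linalg.eigvalsh(H)).max(), np.abs(s).sum())   # equal
\end{verbatim}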
\begin{Corollary}\label{step} Let $A=\{u_{kl}\}_{k,l}$ be a step of a quantum circuit and $\alpha$ its weighted depth. Then the $zz$-model can simulate the unitary implemented by $A$ with overhead $\alpha$. \end{Corollary}
Proof: Let $M=\{(k,l)\}$ be the set of the pairs which the two-qubit gates act on. No two pairs in $M$ contain a common qubit and therefore we can simulate the Hamiltonian $H_M=\sum_{(k,l)\in M}\sigma_z^k\sigma_z^l$ with overhead $1$. Let $H_{kl}$ be the Hamiltonian of minimal norm such that $u_{kl}=\exp(i H_{kl})$ for every $(k,l)\in M$. Now we can simulate all the $H_{kl}$ in parallel, each with overhead at most $\|H_{kl}\|$, by conjugating $H_M$; the total overhead is therefore at most $\alpha$.
$\Box$
Our goal is to compare interactions with respect to the simulation complexity in our model given by the complete $zz$-interaction and the quantum circuit model. To do so, we need some basic concepts of graph theory \cite{bol}. A graph is an ordered pair $G=(V,E)$ with $V\subseteq\{1,2,\ldots, n\}$ and $E=\{e_1,e_2,\ldots,e_m\}\subseteq V\times V$. Elements of $V$ are called vertices. They label the qubits. Elements of $E$ are called edges. They label the pair-interactions between the qubits. An edge $e=(k,l)$ is an ordered pair of vertices $k$ and $l$ called the ends of $e$. We consider only undirected graphs with no loops. To have a unique representation we require that $k<l$. Two distinct edges are called adjacent if and only if they have a common end vertex. A subset $M$ of the edge set $E$ is called independent if no two edges of $M$ are adjacent in $G$. A graph $G$ is called complete if every pair of distinct vertices of $G$ is adjacent in $G$; such a graph is denoted by $K_n$. Rephrased in this language, our drift Hamiltonian is of the form $$ H_d = \sum_{(k,l)\in E(K_n)} \sigma_z^k\sigma_z^l$$ and is called the complete $zz$-Hamiltonian in the following.
\begin{Definition} Let $H$ be an arbitrary pair-interaction Hamiltonian. For every non-negative real number $r$ we define the {\em interaction graph} $G_r$ as follows:
Let the qubits $\{1,\dots,n\}$ label the vertices and let the edges be all the pairs $(k,l)$ with the property $\|H_{k,l}\|> r$. \end{Definition} The chromatic index $\chi'$ is the minimum number of colors permitting an edge-coloring such that no two adjacent edges receive the same color or equivalently a partition $E=M_1\cup M_2\cup \ldots\cup M_{\chi'}$ into independent subsets of $E$. The following quantity turns out to be an upper bound on the overhead. \begin{Definition} We define the {\em weighted chromatic index} of $H$ \begin{equation} \chi':=\int_0^\infty \chi'_r dr \end{equation} where $\chi'_r$ denotes the chromatic index of $G_r$. \end{Definition}
In a former paper \cite{jan} we have introduced the weighted chromatic index as a complexity measure of the interaction. This point of view has been justified by two arguments, where the first one is an observation in \cite{jan}:
\begin{Theorem}\label{janInfinite} The evolution generated by a pair-interaction Hamiltonian $H$ during the infinitesimal time period $dt$ can be simulated by a parallelized $2$-qubit gate network with weighted depth $\chi'\, dt$ if $\chi'$ is the weighted chromatic index of $H$. \end{Theorem}
The second argument to consider the chromatic index as a complexity measure for the interaction is only intuitive: in general, it should be easy to control interactions on disjoint qubit pairs, whereas one should expect that it is unlikely that one can {\it control} the interaction between qubits $1$ and $2$ and the interaction between qubits $1$ and $3$ simultaneously. This `a priori'-assumption of \cite{jan} can be partly justified by the following corollary which is an easy conclusion of Corollary~\ref{step} and Theorem~\ref{janInfinite}.
\begin{Corollary} The time overhead for simulating the Hamiltonian $H$ in the $zz$-model is at most the weighted chromatic index of $H$. \end{Corollary}
The assumption that the drift Hamiltonian contains only pair-interactions of the form $\sigma_z\otimes\sigma_z$ can be dropped. Let $H=\sum_{\alpha,\beta} J_{\alpha\beta} \sigma_\alpha\otimes\sigma_\beta$ be an arbitrary pair-interaction. By conjugating $H$ with $\{I\otimes I,I\otimes\sigma_z,\sigma_z\otimes I,\sigma_z\otimes\sigma_z\}$ we obtain $J_{zz} \sigma_z\otimes\sigma_z$. This can be done with overhead $1$. The bounds of the corollary must then be divided by the minimum of $|J_{zz}|$ over all pair-interactions occurring in the drift Hamiltonian.
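The averaging step can be verified directly; the following Python sketch (ours, assuming NumPy, with a randomly chosen pair-interaction) checks that conjugating by the four operators above and averaging leaves only the $J_{zz}\,\sigma_z\otimes\sigma_z$ term.

\begin{verbatim}
# Averaging a generic pair-interaction over conjugation by
# {I x I, I x Z, Z x I, Z x Z} keeps only the J_zz sigma_z sigma_z term.
import itertools
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1, -1]).astype(complex)
paulis = (X, Y, Z)

J = np.random.default_rng(3).normal(size=(3, 3))
H = sum(J[a, b] * np.kron(paulis[a], paulis[b])
        for a in range(3) for b in range(3))

avg = sum(np.kron(u, v) @ H @ np.kron(u, v)
          for u, v in itertools.product((I2, Z), repeat=2)) / 4
print(np.allclose(avg, J[2, 2] * np.kron(Z, Z)))   # True
\end{verbatim}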
\section{Applications} The graph theoretical nature of our optimization problems becomes even stronger if we restrict our attention to a single type of interaction, namely $zz$-interactions. Then the desired Hamiltonian is completely described by a weighted graph.
We consider the problem of simulating the time evolution generated by \begin{equation} H=\sum_{(k,l)} J_{kl}\, \sigma^k_z \sigma^l_z \end{equation} when the complete $zz$-Hamiltonian is present. We first show that in this case it is sufficient to use conjugation by $\sigma_x$ only. Let $H':=\sigma_z\otimes\sigma_z$. Note that $(\sigma_x\otimes I) H' (\sigma_x\otimes I)=-H'$ and $(\sigma_x\otimes\sigma_x) H' (\sigma_x\otimes\sigma_x)=H'$. In the following we denote conjugation by $\sigma_x$ by $-$ and no conjugation by $+$. The Hamiltonian to be simulated contains only terms of the form $J_{kl;zz} \sigma^k_z \sigma_z^l$ by assumption. If it is written as a convex combination of elements of $Ad_K(H_d)$, it is sufficient to show that for each of these elements there is a procedure which cancels the terms $J_{kl;\alpha\beta}$ for $(\alpha, \beta)\neq (z,z)$ without any effect on the $J_{kl;zz}$ terms. Therefore consider $\tilde{H}=kH_dk^\dagger$. Then we can also achieve the Hamiltonian $\tilde{H}_{zz}=\sum_{(k,l)} \tilde{J}_{kl;zz}\sigma^k_z\sigma^l_z$ with overhead $1$. For every qubit $i$ there is a $\tilde{J}_{i;z}$ such that $\tilde{J}_{kl;zz}=\tilde{J}_{k;z} \tilde{J}_{l;z}$ for all edges $(k,l)$. We express each $\tilde{J}_{i;z}=c^{+}_i-c^{-}_i$ with $0\le c^+_i,c^-_i\le 1$ and $c^+_i+c^-_i=1$. Let $K=\{I,\sigma_x\}\otimes\ldots\otimes\{I,\sigma_x\}$. We conjugate the drift Hamiltonian by $u=u_1\otimes u_2\otimes\ldots\otimes u_n\in K$ for time $t(u)=\prod_{i=1}^n c_i(u)$ where $c_i(u)=c^+_i$ if $u_i=I$ and $c_i(u)=c^-_i$ if $u_i=\sigma_x$. We have $$ \sum_{u\in K} t(u)\, u H_d u^{\dagger} = \tilde{H}_{zz}\,. $$
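This identity can be checked numerically for small $n$; the Python sketch below (our own illustration, assuming NumPy and SciPy, with randomly chosen local unitaries and a hypothetical helper \verb|embed|) verifies it for $n=3$.

\begin{verbatim}
# Check for n = 3 qubits: averaging the complete zz-drift H_d over
# u in {I, sigma_x}^3 with the weights t(u) yields H~_zz.
import itertools
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1, -1]).astype(complex)

def embed(op, k, n):
    """Place a single-qubit operator op on qubit k of an n-qubit register."""
    out = np.array([[1.0 + 0j]])
    for i in range(n):
        out = np.kron(out, op if i == k else I2)
    return out

n = 3
rng = np.random.default_rng(0)
H_d = sum(embed(Z, k, n) @ embed(Z, l, n)
          for k in range(n) for l in range(k + 1, n))

# random local unitaries u_k; Jz[k] is the z-component of u_k Z u_k^dagger
u = [expm(1j * sum(c * P for c, P in zip(rng.normal(size=3), (X, Y, Z))))
     for _ in range(n)]
Jz = [0.5 * np.trace(Z @ uk @ Z @ uk.conj().T).real for uk in u]
cp = [(1 + j) / 2 for j in Jz]                       # c_i^+
cm = [(1 - j) / 2 for j in Jz]                       # c_i^-

H_zz = sum(Jz[k] * Jz[l] * embed(Z, k, n) @ embed(Z, l, n)
           for k in range(n) for l in range(k + 1, n))

avg = np.zeros((2 ** n, 2 ** n), dtype=complex)
for bits in itertools.product([0, 1], repeat=n):     # 0 = I, 1 = sigma_x
    t = np.prod([cm[i] if b else cp[i] for i, b in enumerate(bits)])
    U = np.eye(2 ** n, dtype=complex)
    for i, b in enumerate(bits):
        if b:
            U = U @ embed(X, i, n)
    avg = avg + t * (U @ H_d @ U.conj().T)
print(np.allclose(avg, H_zz))                        # True
\end{verbatim}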
Since we restrict our attention to interactions with $zz$-terms only, a shorter notation will be useful. To each edge $e=(k,l)$ of $G$, we associate a real number $w_{kl}$ called the weight of $e$. The resulting graph is called a weighted graph. Its adjacency matrix $J$ is the real symmetric matrix with zeros on the diagonal defined by \begin{equation} J_{ii}:=0\,,\quad J_{kl}:=w_{kl} \mbox{ and } J_{lk}:=w_{kl} \end{equation} for all edges $(k,l)$ of $G$. An unweighted graph can be considered as a weighted graph all of whose edges have weight $1$.
A (unweighted) graph is bipartite if its vertex set can be partitioned into two nonempty subsets $X$ and $Y$ such that each edge of $G$ has one end in $X$ and the other in $Y$. The pair $(X,Y)$ is called a bipartition of the bipartite graph. The complete bipartite graph with bipartition $(X,Y)$ is denoted by $G(X,Y)$.
The Seidel matrix of an (unweighted) graph is a modified adjacency matrix $S=(s_{kl})$ defined in the following way \cite{cvet}: $$ s_{kl}=\left\{ \begin{array}{rl} -1 & \mbox{ if $k$ and $l$ are adjacent, } k\neq l \\
1 & \mbox{ if $k$ and $l$ are non-adjacent} \\ \end{array} \right. $$ and $s_{kk}=0$. Obviously, $S=K-I-2J$, where $K$ denotes the square matrix all of whose entries are equal to $1$ and $J$ the adjacency matrix of $G$.
\begin{Theorem}[Optimal simulation] A weighted graph $G$ can be simulated with overhead $1$ if and only if its adjacency matrix can be expressed as a convex combination \begin{equation} J=\sum_i t_i S_i \end{equation} where the sum runs over the Seidel matrices of all complete bipartite graphs on the vertex set of $G$, i.e., over $2^{n-1}$ possible matrices. \end{Theorem}
Proof: By assigning to each vertex either $+$ or $-$ we obtain a bipartition of the vertex set: $X$ contains all vertices with $+$ and $Y$ all vertices with $-$. The sign of the edge $(k,l)$ is $-$ if and only if the edge has one end in $X$ and the other end in $Y$, and $+$ otherwise. The edges with $-$ define the complete bipartite graph $G(X,Y)$. We also include the degenerate case in which one of the parts is empty, which covers the case when the same sign is assigned to all vertices. Therefore all we can achieve in one step is $K-I-2J(X,Y)$.
$\Box$
\begin{Corollary}[Lower bound] The absolute value of the smallest eigenvalue of the $J$-matrix is a lower bound on the simulation overhead. \end{Corollary}
We present now some upper bounds on the overhead. A graph $G=(V',E')$ is called a subgraph of $G'$ if $V'\subseteq V$ and $E'\subseteq E$. A clique of $G$ is a complete subgraph of $G$. A clique of $G$ is called a maximal clique of $G$ if it is not properly contained in another clique of $G$. A clique partition $P$ of $G$ is a partition of $E(G)$ such that its classes induce maximal cliques of $G$. Given a set $C$ of $h$ colors, an $h$-coloring of $P$ in $G$ is a mapping from $P$ to $C$, such that cliques sharing a vertex have different colors. Let the clique coloring index $c(G)$ be the smallest $h$ such that there is a partition $P$ permitting an $h$-coloring \cite{wallis}. We say the graph $G$ consists of independent cliques if $c(G)=1$. \begin{Lemma}[Upper bound]\label{independent} Let $G$ be a graph consisting of independent cliques. We can simulate the Hamiltonian $H_E$ with overhead $1$ which is optimal. \end{Lemma}
Proof: Let $\omega\ge 2$ be the number of maximal cliques. We construct $\omega$ vectors $s_i$ of length $2^{\omega-1}$ as follows: $$ \begin{array}{lcl} s_1 & = & (++++++++\cdots) \\ s_2 & = & (+-+-+-+-\cdots) \\ s_3 & = & (++--++--\cdots) \\
& \vdots & \\ s_\omega & = & (\underbrace{++\,\cdots\,+}_{2^{\omega-2}}
\underbrace{--\,\cdots\,-}_{2^{\omega-2}})\\ \end{array} $$ where $+$ stands for $1$ and $-$ for $-1$. The scalar products are $\langle s_i,s_j\rangle=2^{\omega-1} \delta_{ij}$. We partition the time interval into $2^{\omega-1}$ intervals of equal length. In the $m$th interval we conjugate all qubits of the $i$th clique by $\sigma_x$ if $s_{i,m}=-$ and do nothing otherwise. This is optimal since $q\le -1$, where $q$ is the smallest eigenvalue of the adjacency matrix of $G$.
$\Box$
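The combinatorial core of this scheme is easily reproduced; the Python sketch below (ours, assuming NumPy; indices start at $0$ and the helper \verb|sign_vectors| is ours) constructs the vectors $s_i$ and checks the orthogonality relation $\langle s_i,s_j\rangle=2^{\omega-1}\delta_{ij}$, which is what guarantees that couplings between different cliques average out.

\begin{verbatim}
# The omega sign vectors of length 2^(omega-1) used in the proof,
# with pairwise scalar products 2^(omega-1) * delta_ij.
import numpy as np

def sign_vectors(omega):
    L = 2 ** (omega - 1)
    s = np.ones((omega, L), dtype=int)       # row 0 is the all-plus vector s_1
    for i in range(1, omega):                # rows 1.. are s_2, ..., s_omega
        s[i] = [(-1) ** ((m >> (i - 1)) & 1) for m in range(L)]
    return s

s = sign_vectors(4)
print(s @ s.T)       # 2^(omega-1) times the identity, here 8 * I
\end{verbatim}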
The scheme used in the proof is time-optimal, but the number of conjugations grows exponentially with the number of cliques $\omega$. It is possible to use conjugation schemes based on Hadamard matrices \cite{leung}, where the number of conjugations grows only quadratically with $\omega$.
\begin{Corollary}[Upper bound] Let $G$ be an arbitrary graph. Then the Hamiltonian $H_G$ can be simulated with overhead $c(G)$. \end{Corollary} Note that if $M$ is an independent set then the graph $G=(V,M)$ consists of independent cliques. Therefore the chromatic index is an upper bound on the clique coloring index. However, this bound is not always tight. Consider e.g.\ the graph $G$ containing all edges that do not have $1$ as end vertex. Then the chromatic index of $G$ is still high but the clique coloring index is only $1$.
Since the optimal simulation of a graph consisting of independent cliques has overhead $1$, one might think that the clique coloring index is always the smallest possible overhead. But this is not so, as the following example shows. Consider the star $G=(V,E)$ with $V=\{1,\ldots,5\}$ and $E=\{(1,2),(1,3),(1,4),(1,5)\}$. The clique coloring index of $G$ is $4$ but the optimal simulation has overhead $2$ only. The vectors can be chosen as $s_1=(++++),\, s_2=(-+++),\, s_3=(+-++),\, s_4=(++-+),\, s_5=(+++-)$ and each of the four intervals has length $1/2$. This is optimal since the smallest eigenvalue of the adjacency matrix of $G$ is $-2$.
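The following Python sketch (ours, assuming NumPy) verifies this example numerically: the four intervals of length $1/2$ with the sign vectors above turn the complete $zz$-drift on five qubits into the star, and the smallest eigenvalue of the star's adjacency matrix is indeed $-2$.

\begin{verbatim}
# Verify the 5-qubit star example: accumulated couplings and lower bound.
import numpy as np

s = np.array([[+1, +1, +1, +1],    # qubit 1
              [-1, +1, +1, +1],    # qubit 2
              [+1, -1, +1, +1],    # qubit 3
              [+1, +1, -1, +1],    # qubit 4
              [+1, +1, +1, -1]])   # qubit 5
t = np.full(4, 0.5)                # interval lengths, total time 2

C = (s * t) @ s.T                  # C[k, l] = sum_m t_m s_{k,m} s_{l,m}
np.fill_diagonal(C, 0)

A_star = np.zeros((5, 5))
A_star[0, 1:] = A_star[1:, 0] = 1
print(np.allclose(C, A_star))             # True: the star is simulated
print(np.linalg.eigvalsh(A_star).min())   # -2.0: matching lower bound
\end{verbatim}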
\section{Quasi-order of Hamiltonians} Let $H$ and $\tilde{H}$ be arbitrary pair-interaction Hamiltonians. We investigate the question whether $\tilde{H}$ can be simulated by $H$ with overhead $\mu$. Note that this defines a quasi-order of the pair-interaction Hamiltonians for $\mu=1$. A partial characterization of the quasi-order is expressed in terms of majorization of the spectra of the corresponding matrices $J$ and $\tilde{J}$. Similar methods have been used to derive conditions for a class of entanglement transformations and to characterize mixing and measurement in quantum mechanics \cite{nielsen1,nielsen2}.
Suppose that $x=(x_1,\ldots,x_d)$ and $y=(y_1,\ldots,y_d)$ are two $d$-dimensional real vectors. We introduce the notation $\downarrow$ to denote the components of a vector rearranged into non-increasing order, so $x^\downarrow=(x_1^\downarrow,\ldots,x_d^\downarrow)$, where $x_1^\downarrow\ge x_2^\downarrow\ge\ldots\ge x_d^\downarrow$. We say that $x$ is majorized by $y$ and write $x\prec y$, if $$ \sum_{j=1}^k x_j^\downarrow\le \sum_{j=1}^k y_j^\downarrow\,, $$ for $k=1,\ldots,d-1$, and with equality when $k=d$ \cite{bha}.
Let $\mathrm{Spec}(X)$ denote the spectrum of the Hermitian matrix $X$, i.e.\ the vector of eigenvalues, and $\lambda(X)$ denote the vector of components of $\mathrm{Spec}(X)$ arranged so that they appear in non-increasing order. Ky Fan's maximum principle \cite{nielsen2} states that for any Hermitian matrix $A$, the sum of the $k$ largest eigenvalues of $A$ is the maximum value of $\mathrm{tr}(AP)$, where the maximum is taken over all $k$-dimensional projections $P$, $$ \sum_{j=1}^k \lambda_j(A)=\max_P\mathrm{tr}(AP)\,. $$ It gives rise to a useful constraint on the eigenvalues of a sum of two Hermitian matrices $C:=A+B$, namely $\lambda(C)\prec\lambda(A)+\lambda(B)$: choose a $k$-dimensional projection $P$ attaining the maximum for $C$, so that \begin{equation} \sum_{j=1}^k\lambda_j(C) = \mathrm{tr}(CP) = \mathrm{tr}(AP)+\mathrm{tr}(BP) \le \sum_{j=1}^k \lambda_j(A)+\sum_{j=1}^k \lambda_j(B)\,. \label{EigIneq} \end{equation} This permits us to derive a lower bound on the simulation overhead. \begin{Lemma}[Majorization] Let $H$ and $\tilde{H}$ be arbitrary pair-interaction Hamiltonians. A necessary condition that $\tilde{H}$ can be simulated with overhead $\mu$ by $H$ is that $\mathrm{Spec}(\tilde{J})\prec\mu\mathrm{Spec}(J)$. \end{Lemma}
Proof: By representing the Hamiltonians by their $J$-matrices we see that $\tilde{H}$ can be simulated with overhead $\mu$ if and only if there is a sequence of orthogonal matrices $U_j=U_{j1}\oplus\ldots\oplus U_{jn}\in SO(3)\oplus\ldots\oplus SO(3)$ and $\mu_j>0$ with $\sum_j\mu_j=\mu$ such that $$ \tilde{J}=\sum_j\mu_j U_j J U_j^T\,. $$ The proof now follows from the inequality~(\ref{EigIneq}).
$\Box$
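The eigenvalue inequality~(\ref{EigIneq}) behind this lemma can be tested numerically; the following Python sketch (ours, assuming NumPy; the matrices are random and the helper \verb|majorized_by| is ours) checks $\lambda(A+B)\prec\lambda(A)+\lambda(B)$ directly.

\begin{verbatim}
# Numerical test of lambda(A+B) majorized by lambda(A) + lambda(B).
import numpy as np

def majorized_by(x, y, tol=1e-9):
    """True if the real vector x is majorized by y."""
    xs, ys = np.sort(x)[::-1], np.sort(y)[::-1]
    return (np.all(np.cumsum(xs) <= np.cumsum(ys) + tol)
            and abs(xs.sum() - ys.sum()) < tol)

rng = np.random.default_rng(1)
M = rng.normal(size=(6, 6)) + 1j * rng.normal(size=(6, 6))
N = rng.normal(size=(6, 6)) + 1j * rng.normal(size=(6, 6))
A, B = M + M.conj().T, N + N.conj().T             # random Hermitian matrices

lam = lambda H: np.linalg.eigvalsh(H)[::-1]       # eigenvalues, non-increasing
print(majorized_by(lam(A + B), lam(A) + lam(B)))  # True
\end{verbatim}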
We consider now the problem of reversing the time evolution $\exp(iH_d t)$, i.e.\ we ask for the overhead of simulating $-H_d$ when $H_d$ is present. \begin{Lemma}[Lower bound on inverting] Let $r$ be the greatest eigenvalue and $q$ the smallest eigenvalue of $J$. Then $\mu\ge\frac{r}{-q}$ is a lower bound on the overhead for simulating $-H_d$ by $H_d$. \end{Lemma} Proof: This is a direct consequence of the Weyl inequality (see \cite{bha}, Theorem~III.2) $\lambda_d(A+B)\ge\lambda_d(A)+\lambda_d(B)$ for the sum of two Hermitian matrices, where $\lambda_d$ denotes the smallest eigenvalue.
$\Box$
Let $G$ be a connected graph and $H_d=\sum_{(k,l)\in E(G)}\sigma_z^k\sigma_z^l$. If $G$ is not connected then the components can be treated independently. For the spectrum the following statements hold (see \cite{cvet}, Theorem~0.13): $$ 1\le r\le n-1\,,\quad -r\le q\le -1\,. $$ It is interesting to note that this gives a tight bound for simulating $-H_d$ when $G=K_n$ since for a complete graph we have $r=n-1$ and $q=-1$. An upper bound is the (weighted) chromatic index $\chi'(K_n)$, which is $n-1$ (if $n$ is even) or $n$ (otherwise); for even $n$ the two bounds coincide. This simple example shows that the inverse of the natural time evolution may have a relatively high complexity.
\begin{Lemma} Let the drift Hamiltonian $H_d$ be an arbitrary pair-interaction Hamiltonian. If the interaction graph $G_0(H_d)$ is bipartite then we can invert the time evolution with overhead at most $3$. \end{Lemma}
Proof: Let $S=\{\sigma_x,\sigma_y,\sigma_z\}$. We have $\sum_{u\in S} uau^\dagger=-a$ for all $a\in\mathfrak{su}(2)$. Let $X,Y$ be the bipartition of $G_0(H_d)$. By conjugating all qubits in $X$ with each of the three elements of $S$ for equal times we obtain $-H_d$.
$\Box$
If $H_d$ contains only $\sigma_z\otimes\sigma_z$ interactions then the overhead is $1$. This is optimal since the spectrum of the adjacency matrix of a bipartite graph is symmetric about $0$ and hence $\mu\ge r/(-q)=1$.
\end{document}
\begin{document}
\begin{abstract} In this paper we are concerned with finite soluble groups $G$ admitting a factorisation $G=AB$, with $A$ and $B$ proper subgroups having coprime order. We are interested in bounding the Fitting height of $G$ in terms of some group-invariants of $A$ and $B$: including the Fitting heights and the derived lengths. \end{abstract}
\maketitle
\section{Introduction}\label{intro}
In this paper, all groups considered are finite and soluble, and hence the word ``group'' should always be understood as ``finite soluble group''.
We investigate groups $G$ in which a \textit{factorisation} $$G=AB=\{ab\mid a\in A,\,b\in B\}$$
with $A$ and $B$ subgroups of $G$ of coprime order is given. We are interested in obtaining some upper bounds on the \textit{Fitting height} $h(G)$ of $G$, in terms of the Fitting heights ($h(A)$ and $h(B)$) and of the \textit{derived lengths} ($d(A)$ and $d(B)$) of $A$ and $B$. (Our notation is standard, see Section~\ref{1.1} for undefined terminology.)
\begin{theorem}\label{thrmA}
Let $G=AB$ be a finite soluble group factorised by its proper subgroups $A$ and $B$ with $\gcd(|A|,|B|)=1$. If $|B|$ is odd, then \begin{equation}\label{eq:1} h(G)\leq h(A)+h(B)+2d(B)-1. \end{equation} If $B$ is nilpotent, then \begin{equation}\label{eq:2} h(G)\leq h(A)+2d(B). \end{equation} \end{theorem}
Before continuing with our discussion we need to introduce some notation. Given a group $G$, we write \begin{equation*} \delta(G):=\max\{d(S)\mid S \textrm{ Sylow subgroup of }G\}, \end{equation*} that is, $\delta(G)$ is the maximal derived length of the Sylow subgroups of $G$. We also bound the Fitting height of $G$ in terms of the group-invariants $\delta(A)$ and $\delta(B)$.
\begin{theorem}\label{thrmB}
Let $G=AB$ be a finite soluble group factorised by its proper subgroups $A$ and $B$ with $\gcd(|A|,|B|)=1$. Then \begin{equation*} h(G)\leq h(A)+(2\delta(B)+1)h(B)-1. \end{equation*} \end{theorem}
Both Theorems~\ref{thrmA} and~\ref{thrmB} extend and generalise some well-known results on groups admitting a factorisation with subgroups of coprime order, see for example the two monographs~\cite[Chapter~$2$]{AFD} and~\cite[pages~$133$--$135$]{BB}. Observe that when $A$ and $B$ are both nilpotent, we have $h(A)=h(B)=1$ and the inequality in Theorem~\ref{thrmB} specialises to the inequality of the main result in~\cite{Gemma}.
When $B$ is nilpotent, we have $\delta(B)=d(B)$ and $h(B)=1$, and thus Theorem~\ref{thrmA}~\eqref{eq:2} follows immediately from Theorem~\ref{thrmB}.
The hypothesis of $|B|$ being odd in Theorem~\ref{thrmA}~\eqref{eq:1} is important in our proof because at a critical juncture we apply a remarkable theorem of Kazarin~\cite{Ka} (which requires $B$ to have odd order). However, we believe that this hypothesis is merely an artefact of our proof and in fact we pose the following: \begin{conjecture}
Let $G=AB$ be a finite soluble group factorised by its proper subgroups $A$ and $B$ with $\gcd(|A|,|B|)=1$. Then $$ h(G)\leq h(A)+h(B)+2d(B)-1.$$ \end{conjecture}
We also prove: \begin{theorem}\label{thrmC}
Let $G=AB$ be a finite soluble group factorised by its proper subgroups $A$ and $B$ with $\gcd(|A|,|B|)=1$. Then \begin{equation*} h(G)\leq h(A)\delta(A)+h(B)\delta(B). \end{equation*} \end{theorem}
Finally, with an immediate application of Theorem~\ref{thrmA} and of the machinery developed in Section~\ref{section3}, we prove: \begin{corollary}\label{corcor}
Let $G=AB$ be a finite soluble group factorised by its proper subgroups $A$ and $B$ with $\gcd(|A|,|B|)=1$. For each $p\in \pi(B)$, let $B_p$ be a Sylow $p$-subgroup of $B$. Then $$h(G)\leq h(A)+2\sum_{p\in\pi(B)}d(B_p).$$ In particular, $h(G)\leq h(A)+2|\pi(B)|\delta(B)$. \end{corollary}
In Section~\ref{1.1} we introduce some basic notation and some preliminary results that we use throughout the whole paper. In Section~\ref{section3} we present our main tool (the \textit{towers} as defined by Turull~\cite{Tu}) and we prove some auxiliary results. Section~\ref{section4} is dedicated to the proof of Theorems~\ref{thrmA} and~\ref{thrmB} and of Corollary~\ref{corcor}. The proof of Theorem~\ref{thrmC} (which requires a slightly different machinery) is postponed to Section~\ref{section5}.
\section{Notation and preliminary results}\label{1.1}
Given a group $G$, we denote by $\F G$ the \textit{Fitting subgroup} of $G$ (that is, the largest normal nilpotent subgroup of $G$). Moreover, the Fitting series of $G$ is defined inductively by $\FF 0 G:=1$ and $\FF {i+1}G/\FF i G:=\F{G/\FF i G}$, for every $i\ge 0$. Clearly, $\FF i G<\FF {i+1} G$ when $\FF i G<G$, and the minimum natural number $h$ with $\FF h G=G$ is called the \textit{Fitting height} (or Fitting length) of $G$ and is denoted by $h(G)$. Similarly, the \textit{derived length} of $G$ is indicated by $d(G)$.
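For illustration (this standard example is ours and is not needed in the sequel), consider $G=S_4$: the largest normal nilpotent subgroup is $\F {S_4}=V_4$, the Klein four-group; since $S_4/V_4\cong S_3$ and $\F {S_3}=A_3$, we get $\FF 2{S_4}=A_4$; finally $S_4/A_4\cong C_2$ gives $\FF 3{S_4}=S_4$. Hence $h(S_4)=3$, and the derived series $S_4>A_4>V_4>1$ shows that also $d(S_4)=3$.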
We let $|G|$ denote the order of $G$ and we let $\pi(G)$ denote the set of prime divisors of $|G|$. Given a prime number $p$, we write $G_p$ for a Sylow $p$-subgroup of $G$. A \textit{Sylow basis} of $G$ is a family $\{G_p\}_{p\in \pi(G)}$ of Sylow subgroups of $G$ such that $G_{p}G_q=G_qG_p$ for any $p,q\in \pi(G)$. By a pioneering result of Philip Hall~\cite[$9.1.7$,~$9.1.8$ and $9.2.1$~(ii)]{Rob}, every (finite soluble) group has a Sylow basis. In particular, for every set of primes $\pi$, $G$ contains a Hall $\pi$-subgroup, which will be denoted by $G_{\pi}$.
Given a set $\pi$ of prime numbers, we set $\pi':=\{p\textrm{ prime}\mid p\notin \pi\}$. Moreover, when $\pi=\{p\}$, for simplicity we write $p'$ for $\pi'$. As usual, $\O \pi G$ is the largest normal $\pi$-subgroup of $G$ and the upper $\pi'\pi$-\textit{series} of $G$ is generated by applying ${\bf O}_{\pi'}$ and ${\bf O}_\pi$ (in this order) repeatedly to $G$, that is, the series $1=P_0\leq N_0\leq P_1\leq N_1\leq \cdots \leq P_i\leq N_i\leq \cdots $ defined by \[ N_i/P_i:=\O{\pi'}{G/P_i}\quad\textrm{and}\quad P_{i+1}/N_i:=\O \pi {G/N_i}. \] This is a series of characteristic subgroups having factor groups $\pi'$- and $\pi$-groups, alternately. The minimum natural number $\ell$ such that the $\pi'\pi$-series terminates is named the $\pi$-\textit{length} of $G$ and denoted by $\ell_\pi(G)$. When $\pi=\{p\}$, we write simply $\O p G$ and $\ell_p(G)$.
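To illustrate these notions with the same example (again ours), take $G=S_4$ and $\pi=\{2\}$: here $\O {2'}{S_4}=1$ and $\O 2{S_4}=V_4$; modulo $V_4$ we have $S_4/V_4\cong S_3$, whose largest normal $2'$-subgroup is $A_3$, and the remaining factor $S_4/A_4\cong C_2$ is a $2$-group. The upper $2'2$-series is therefore $1\leq 1\leq V_4\leq A_4\leq S_4$ and $\ell_2(S_4)=2$, in accordance with the bound $\ell_2(G)\leq d(G_2)$ of Theorem~\ref{thrm2.2} below, a Sylow $2$-subgroup of $S_4$ being dihedral of order $8$ and hence of derived length $2$.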
We first state a basic elementary result which will be used repeatedly and without comment.
\begin{lemma}\label{lemma:2.1}
Let $G=AB$ be a group factorised by $A$ and $B$ with $\gcd(|A|,|B|)=1$. Then there exists a Sylow basis $\{G_p\}_{p\in \pi(G)}$ with $A=\prod_{p\in \pi(A)}G_p$ and $B=\prod_{p\in \pi(B)}G_p$. \end{lemma} \begin{proof} From~\cite[Lemma~$1.3.2$]{AFD}, we see that for every $p\in \pi(G)$ there exists a Hall $p'$-subgroup $A_{p'}$ of $A$ and a Hall $p'$-subgroup $B_{p'}$ of $B$ such that $A_{p'}B_{p'}$ is a Hall $p'$-subgroup of $G$. Now, for each $p\in \pi(G)$, define $G_p:=\bigcap_{q\in \pi(G)\setminus\{p\}}A_{q'}B_{q'}$. A computation shows that $\{G_p\}_{p\in \pi(G)}$ is a Sylow basis of $G$ (see for example~\cite[$9.2.1$]{Rob}). Moreover, $A=\prod_{p\in \pi(A)}G_p$ and $B=\prod_{p\in \pi(B)}G_p$. \end{proof}
The next two results are crucial for our proofs of Theorems~\ref{thrmA} and~\ref{thrmB}.
\begin{theorem}\label{thrm2.2} Let $G$ be a group and let $p$ be a prime. Then $\ell_p(G)\leq d(G_p)$. \end{theorem} \begin{proof} When $p$ is odd, this is~\cite[Theorem~$A$~(i)]{HH}. The analogous result for $p=2$ is proved in~\cite{Br}. \end{proof}
Kazarin~\cite{Ka} has proved Theorem~\ref{thrm2.2} for arbitrary sets of primes $\pi$ with $2\notin \pi$. We state this generalisation in a form tailored to our needs.
\begin{theorem}\label{thrm2.3} Let $G$ be a group and let $\pi$ be a set of primes. If $2\notin \pi$ or if $G_\pi$ is nilpotent, then $\ell_\pi(G)\leq d(G_\pi)$. \end{theorem} \begin{proof} When $2\notin \pi$, this is the main result of~\cite{Ka} (see also~\cite[Theorem~$1.7.20$]{BB}). When $G_\pi$ is nilpotent, the proof follows from Theorem~\ref{thrm2.2}. \end{proof}
\section{Our toolkit: towers}\label{section3} We start this section with a pivotal definition introduced by Turull~\cite{Tu}. (The definition of $B$-\textit{tower} in~\cite[Definition~$1.1$]{Tu} is actually more general than the one we give here and coincides with ours when $B=1$.)
\begin{definition}\label{Ttower}{\rm Let $G$ be a group. A family $\mathfrak{T} := (P_i\mid i\in \{1,\ldots,h\})$ is said to be a \textit{tower of length} $h$ of $G$ if the following are satisfied. \begin{enumerate} \item $P_i$ is a $p_i$-subgroup of $G$ and $p_i\in \pi(G)$. \item If $1\leq i\leq j\leq h$, then $P_i$ normalises $P_j$. \item Define inductively $\overline{P_h}:=P_h$, and $\overline{P_i}:=P_i/\cent {P_i}{\overline{P_{i+1}}}$ for $i\in \{1,\ldots,h-1\}$. Then $\overline{P_i}\neq 1$, for every $i\in \{1,\ldots,h\}$. \item $p_i\neq p_{i+1}$, for every $i\in\{1,\ldots,h-1\}$. \end{enumerate} } \end{definition}
A concept that resembles the definition of tower was originally introduced by Dade in~\cite{Dade} for investigating the Fitting height of a group. The relationship between Fitting height and towers was uncovered by Turull. \begin{lemma}[{{\cite[Lemma~$1.9$]{Tu}}}]\label{lemma31}Let $G$ be a group. Then \[h(G)=\max\{h\mid G \textrm{ admits a tower of length }h\}.\] \end{lemma} In view of Lemma~\ref{lemma31} we give the following: \begin{definition}\label{Fittingtower}{\rm We say that the tower $\mathfrak{T}$ of $G$ is a \textit{Fitting tower} if $\mathfrak{T}$ has length $h(G)$.} \end{definition}
The following is an easy consequence of~\cite[Lemma~$1.5$]{Tu}. For simplifying the notation, given a $p$-group $P$, we write $\pi^*(P)=p$ when $P\neq 1$, and $\pi^*(P)=1$ when $P=1$. Observe that when $P\neq 1$ we have $\pi(P)=\{\pi^*(P)\}$.
\begin{lemma}\label{lemma33} Let $G$ be a group, let $\mathfrak{T}=(P_i\mid i\in \{1,\ldots,h\})$ be a tower of $G$, let $j\in \{1,\ldots,h\}$, let $s\geq 0$ be an integer and let $\mathfrak{T}'=(P_i\mid i\in \{1,\ldots,h\}\setminus\{j,j+1,\ldots,j+s-1,j+s\})$. Then either $\mathfrak{T}'$ is a tower of $G$, or $1<j\leq j+s<h$ and $\pi^*(P_{j-1})=\pi^*(P_{j+s+1})$. \end{lemma} \begin{proof} Lemma~$1.5$ in~\cite{Tu} says that, for every $h_0$ with $1\leq h_0\leq h$ and for every increasing function $f:\{1,\ldots,h_0\}\to \{1,\ldots,h\}$, the family $(P_{f(i)}\mid i\in \{1,\ldots,h_0\})$ satisfies the conditions ~$(1)$,~$(2)$ and~$(3)$ in Definition~$\ref{Ttower}$. Applying this with $h_0:=h-s-1$ and with $f:\{1,\ldots,h_0\}\to \{1,\ldots,h\}$ defined by \[ f(i)=\begin{cases} i&\textrm{if }1\leq i< j,\\ i+s+1&\textrm{if }j\leq i\leq h_0, \end{cases} \] we obtain that $\mathfrak{T}'$ satisfies the conditions~$(1)$,~$(2)$ and~$(3)$ of Definition~\ref{Ttower}. As $\mathfrak{T}$ satisfies Definition~\ref{Ttower}~$(4)$, we immediately get that either $\mathfrak{T'}$ satisfies also~$(4)$ (and hence is a tower of $G$), or $1<j\leq j+s<h$ and $\pi^*(P_{j-1})=\pi^*(P_{j+s+1})$. \end{proof}
\begin{definition}{\rm
Let $G$ be a group, let $\mathfrak{T}=(P_i\mid i\in \{1,\ldots,h\})$ be a tower of $G$ and let $\sigma$ be a set of primes. We set $$\nu_\sigma(\mathfrak{T}):=|\{i\in \{1,\ldots,h\}\mid \pi^*(P_i)\in \sigma\}|.$$ Clearly, $\nu_\sigma(\mathfrak{T})=0$ when $\sigma$ has no element in common with $\{\pi^*(P_1),\ldots,\pi^*(P_h)\}$.
Now, set $P_0:=1$ and $P_{h+1}:=1$. For $i,j\in \{1,\ldots,h\}$ with $i\leq j$, the sequence $(P_\ell\mid i\leq \ell\leq j)$ of consecutive elements of $\mathfrak{T}$ is said to be a $\sigma$-\textit{block} if \begin{itemize} \item $\pi^*(P_{i+s})\in \sigma$ for every $s$ with $0\leq s\leq j-i$, and \item $\pi^*(P_{i-1})\notin\sigma$, $\pi^*(P_{j+1})\notin \sigma$. \end{itemize} Moreover, we denote by $\beta_\sigma(\mathfrak{T})$ the number of $\sigma$-blocks of $\mathfrak{T}$.} \end{definition}
The main result of this section is Lemma~\ref{lemma34}: before proceeding to its proof we single out two basic observations.
\begin{lemma}\label{elementary} Let $\mathfrak{T}=(P_i\mid i\in \{1,\ldots,h\})$ be a tower of $G$. Then, for $j\in \{1,\ldots,h-1\}$, we have $\cent {P_j}{P_h}\le \cent {P_j}{\overline{P_{j+1}}}$. \end{lemma} \begin{proof} We argue by induction on $h-j$. If $j=h-1$, then $\overline{P_h}=P_h$ and hence there is nothing to prove. Suppose $h-j>1$ and set $R:=\cent {P_j}{P_h}$. We have $[R,P_h,P_{j+1}]=1$, and also $[P_h,P_{j+1},R]\leq [P_h,R]=1$ by Definition~\ref{Ttower}~$(2)$. Thus the Three Subgroups Lemma yields $[P_{j+1},R,P_h]=1$, that is, $[P_{j+1},R]\leq \cent {P_{j+1}}{P_h}$. Now the inductive hypothesis gives $[P_{j+1},R]\leq \cent {P_{j+1}}{\overline{P_{j+2}}}$, and hence $[\overline{P_{j+1}},R]=1$. Therefore $\cent {P_j}{P_h}=R\leq \cent {P_j}{\overline{P_{j+1}}}$. \end{proof}
\begin{lemma}\label{elementaryy} Let $\mathfrak{T}=(P_i\mid i\in \{1,\ldots,h\})$ be a tower of $G$ and let $N$ be a normal subgroup of $G$ with \begin{equation}\label{eq1} P_j\cap N\leq \cent {P_j}{P_h}, \end{equation} for every $j\in \{1,\ldots,h-1\}$. Then $\mathfrak{T}':=(P_iN/N\mid i\in 1,\ldots,h-1\})$ is a tower of $G/N$. \end{lemma} \begin{proof} From~\eqref{eq1} and Lemma~\ref{elementary}, we have $P_j\cap N\leq \cent {P_j}{\overline{P_{j+1}}}$ for $j<h$. Set $R_h:=1$, and set $R_j:=\cent {P_j}{\overline{P_{j+1}}}$ for $j<h$. Thus $\overline{P_j}=P_j/R_j$, for every $j$.
Now, for $j<h$, we have $$ P_j\cap N= R_j\cap N$$ and hence \begin{equation}\label{Pjbar} \frac{P_jN}{R_jN}=\frac{P_j(R_jN)}{R_jN}\cong \frac{P_j}{P_j\cap R_jN}=\frac{P_j}{R_j(P_j\cap N)}=\frac{P_j}{R_j(R_j\cap N)}=\frac{P_j}{R_j}=\overline{P_j}. \end{equation}
For each $j\in \{1,\ldots,h-1\}$, set $Q_j:=P_jN/N$, and define $\overline{Q_{h-1}}:=Q_{h-1}$, and $\overline{Q_j}:=Q_j/\cent {Q_j}{\overline{Q_{j+1}}}$ for $j<h-1$. In particular, for each $j\in \{1,\ldots,h-2\}$, there exists $L_j\leq P_j$ with $\cent {Q_j}{\overline{Q_{j+1}}}=L_jN/N$. Moreover, set $L_{h-1}:=1$.
We show (by induction on $h-j$) that $L_j\leq R_j$, for each $j\in \{1,\ldots,h-1\}$. If $h-j=1$, then $L_j=L_{h-1}=1\leq R_{h-1}=R_j$. Assume then that $h-j>1$ and let $x\in L_{j}$. As $[xN,\overline{Q_{j+1}}]=1$, we get $[xN,Q_{j+1}]\leq \cent {Q_{j+1}}{\overline{Q_{j+2}}}=L_{j+1}N/N$ when $h-j>2$, and $[xN,Q_{j+1}]=1$ when $h-j=2$. In both cases, applying the inductive hypothesis, we obtain $$[x,Q_{j+1}]\leq \frac{L_{j+1}N}{N}\leq \frac{R_{j+1}N}{N}.$$ This gives $$[x,P_{j+1}]\leq P_{j+1}\cap R_{j+1}N=R_{j+1}(P_{j+1}\cap N).$$ Combining~\eqref{eq1}, Lemma~\ref{elementary} and the definition of $R_{j+1}$, we have $P_{j+1}\cap N\leq \cent {P_{j+1}}{P_h}\leq \cent {P_{j+1}}{\overline{P_{j+2}}}=R_{j+1}$. Therefore $[x,P_{j+1}]\leq R_{j+1}$ and hence $x\in \cent {P_{j}}{P_{j+1}/R_{j+1}}=\cent {P_j}{\overline{P_{j+1}}}=R_j$. Thus $L_j\leq R_j$ and the induction is proved.
Observe that \begin{equation}\label{Qjbar} \overline{Q_j}=\frac{P_jN/N}{L_{j}N/N}\cong \frac{P_jN}{L_jN}. \end{equation} As $L_jN\le R_jN \le P_jN$, from~\eqref{Pjbar} and~\eqref{Qjbar}, we see that $\overline{P_j}$ is an epimorphic image of $\overline Q_j$. Finally, since $\mathfrak{T}$ is a tower of $G$, it follows immediately that $\mathfrak{T}'$ is a tower of $G/N$. \end{proof}
Given a tower $\mathfrak{T}=(P_i\mid i\in \{1,\ldots,h\})$ and $j\in \{1,\ldots,h\}$, we set $T_j:=P_hP_{h-1}\cdots P_j$. Observe that from Definition~\ref{Ttower}~(2), we have $T_j\unlhd T_1$.
We are now ready to prove one of the main tools of our paper. \begin{lemma}\label{lemma34} Let $G$ be a group, let $\sigma$ be a non-empty subset of $\pi(G)$, let $A$ be a Hall $\sigma$-subgroup of $G$ and let $\mathfrak{T}:=(P_i\mid i\in \{1,\ldots,h\})$ be a tower of $G$. Then \begin{enumerate} \item $h(A)\geq \nu_\sigma(\mathfrak{T})-\beta_\sigma(\mathfrak{T})+1$, and \item $\ell_\sigma(G)\geq\beta_\sigma(\mathfrak{T})$. \end{enumerate} \end{lemma} \begin{proof}Observe that $h(A),\ell_\sigma(G)\geq 1$ because $\emptyset\neq\sigma\subseteq \pi(G)$. In particular, we may assume that $\nu_\sigma(\mathfrak{T}),\beta_{\sigma}(\mathfrak{T})\neq 0$ and hence $\sigma_0:=\sigma\cap \{\pi^*(P_i)\mid 1\leq i\leq h\}\neq\emptyset$. Let $A_0$ be a Hall $\sigma_0$-subgroup of $T_1$. Observe that $\mathfrak{T}$ is a tower of $T_1$ and that the hypotheses of this lemma are satisfied with $(G,\sigma,A)$ replaced by $(T_1,\sigma_0,A_0)$. As $h(A_0)\leq h(A)$ and $\ell_\sigma(T_1)\leq \ell_\sigma(G)$, for proving parts~$(1)$ and~$(2)$ we may assume that $G=T_1$, $\sigma=\sigma_0$ and $A=A_0$.
Part~(1): We argue by induction on $h+|G|$. If $h=1$, then $\nu_\sigma(\mathfrak{T})=\beta_\sigma(\mathfrak{T})=1$ and the proof follows.
Assume that $\pi^*(P_h)\notin \sigma$. Write $\mathfrak{T}':=(P_i\mid i\in\{1,\ldots,h-1\})$. From Lemma~\ref{lemma33}, the family $\mathfrak{T}'$ is a tower of $G$. As $\nu_{\sigma}(\mathfrak{T}')=\nu_{\sigma}(\mathfrak{T})$, $\beta_\sigma(\mathfrak{T}')=\beta_\sigma(\mathfrak{T})$ and $\mathfrak{T}'$ has length $h-1$, the proof follows by induction.
Assume that $\pi^*(P_h)\in \sigma$. Let $t\in\{1,\ldots, h\}$ be such that $(P_\ell\mid t\le \ell\le h)$ is a $\sigma$-block of $\mathfrak{T}$ and recall that $T_t=P_hP_{h-1}\cdots P_t$. Suppose first that this is the only $\sigma$-block of $\mathfrak{T}$. Thus $\nu_\sigma(\mathfrak{T})=h-t+1$, $\beta_\sigma(\mathfrak{T})=1$ and $T_t$ is a Hall $\sigma$-subgroup of $G$. Moreover, since $T_t\unlhd T_1=G$, we have $A=T_t$. Write $\mathfrak{T}':=(P_i\mid i\in \{t,\ldots,h\})$. From Lemma~\ref{lemma33}, the family $\mathfrak{T}'$ is a tower of $G$ and hence a tower of $A$. As $\mathfrak{T}'$ has length $h-t+1$, from Lemma~\ref{lemma31}, we get $h(A)\geq h-t+1$ and the proof follows.
Suppose now that $(P_\ell\mid t\le \ell\le h)$ is not the only $\sigma$-block of $\mathfrak{T}$, and let $j\in \{1,\ldots,t-1\}$ be maximal with $\pi^*(P_j)\in\sigma$. Suppose $\pi^*(P_j)\neq \pi^*(P_t)$. Then Lemma~\ref{lemma33} yields that $\mathfrak{T}':=(P_i\mid i\in \{1,\ldots,h\}\setminus\{j+1,\ldots,t-1\})$ is a tower of $G$. Since $\mathfrak{T}'$ has length $h-(t-j-1)<h$, from our induction we deduce $$h(A)\geq \nu_\sigma(\mathfrak{T}')-\beta_\sigma(\mathfrak{T}')+1=\nu_\sigma(\mathfrak{T})- (\beta_\sigma(\mathfrak{T})-1)+1=\nu_\sigma(\mathfrak{T})-\beta_\sigma(\mathfrak{T})+2.$$
Finally, suppose that $\pi^*(P_j)=\pi^*(P_t)$. In particular, either $\pi^*(P_{j-1})\neq \pi^*(P_t)$ or $j=1$. Now, Lemma~\ref{lemma33} gives that $\mathfrak{T}':=(P_i\mid i\in \{1,\ldots,h\}\setminus\{j,\ldots,t-1\})$ is a tower of $G$. As $\mathfrak{T}'$ has length $h-(t-j)<h$, the inductive hypothesis yields $$h(A)\geq \nu_\sigma(\mathfrak{T}')-\beta_\sigma(\mathfrak{T}')+1=(\nu_\sigma(\mathfrak{T})-1)- (\beta_\sigma(\mathfrak{T})-1)+1=\nu_\sigma(\mathfrak{T})-\beta_\sigma(\mathfrak{T})+1.$$
Part~(2): As in Part~$(1)$, we proceed by induction on $h+|G|$. Assume $\pi^*(P_h)\notin\sigma$. Then $\mathfrak{T}':=(P_i\mid i\in \{1,\ldots,h-1\})$ is a tower of $G$ of length $h-1$ with $\beta_\sigma(\mathfrak{T}')=\beta_\sigma(\mathfrak{T})$. Thus the proof follows by induction.
Assume that $\pi^*(P_{h})\in \sigma$. Write $N:=\O {\sigma'} G$ and assume first that $N\neq 1$. For $j\in \{1,\ldots,h-1\}$, we have $[P_j\cap N,P_h]\leq N\cap P_h=1$ and hence $P_j\cap N\leq \cent {P_j}{P_h}$. In particular, by Lemma~\ref{elementaryy}, $\mathfrak{T}':=(P_iN/N\mid i\in \{1,\ldots,h-1\})$ is a tower of $G/N$ and, by induction, $\beta_\sigma(\mathfrak{T}')\leq \ell_{\sigma}(G/N)$. Since $\beta_\sigma(\mathfrak{T})=\beta_\sigma(\mathfrak{T}')$ and $\ell_\sigma(G)\geq \ell_{\sigma}(G/N)$, we get $\beta_\sigma(\mathfrak{T})\leq \ell_{\sigma}(G)$. Assume then that $N=1$.
Write $\mathfrak{T}':=(P_i\mid i\in \{1,\ldots,h-1\})$. By Lemma~\ref{lemma33}, $\mathfrak{T}'$ is a tower of $G$ of length $h-1$. If $P_h$ is not a $\sigma$-block, then $\beta_\sigma(\mathfrak{T}')=\beta_\sigma(\mathfrak{T})$ and, by induction, $\beta_\sigma(\mathfrak{T})\leq \ell_\sigma(G)$. Suppose that $P_h$ is a $\sigma$-block, that is, $\pi^*(P_{h-1})\notin\sigma$. Clearly, $ \beta_\sigma(\mathfrak{T})=\beta_{\sigma}(\mathfrak{T}')+1$.
Write $M:=\O {\sigma}G$ and observe that $M\neq 1$ and $\ell_\sigma(G)=\ell_{\sigma}(G/M)+1$ because $\O {\sigma'} G=N=1$. For $j\in \{1,\ldots,h-2\}$, we have $[ P_j\cap M,P_{h-1}]\leq M\cap P_{h-1}=1$ and hence $P_j\cap M\leq \cent {P_j}{P_{h-1}}$. In particular, by Lemma~\ref{elementaryy} (applied to $\mathfrak{T}'$), $\mathfrak{T}{''}:=(P_jM/M\mid j\in \{1,\ldots,h-2\})$ is a tower of $G/M$. Now, by induction, $\ell_\sigma(G/M)\geq \beta_\sigma(\mathfrak{T}^{''})=\beta_\sigma(\mathfrak{T}')$ from which it follows that $\ell_\sigma(G)\geq\beta_\sigma(\mathfrak{T})$. \end{proof}
\section{Factorisations: Proofs of Theorems~\ref{thrmA} and~\ref{thrmB} and Corollary~\ref{corcor}}\label{section4} We start by proving the following.
\begin{lemma}\label{lemma41} Let $G$ be a group, let $\sigma$ be a non-empty proper subset of $\pi(G)$ and let $G=AB$ be a factorisation, with $A$ a $\sigma$-subgroup of $G$ and $B$ a $\sigma'$-subgroup of $G$. Then \[ h(G)\leq h(A)+h(B)+\ell_\sigma(G)+\ell_{\sigma'}(G)-2 \] and \[ h(G)\leq h(A)+h(B)+2\min\{\ell_\sigma(G),\ell_{\sigma'}(G)\}-1. \] \end{lemma} \begin{proof} Let $\mathfrak{T}$ be a Fitting tower of $G$ (see Definition~\ref{Fittingtower}). Using first Lemma~\ref{lemma34} part~(1) and then part~(2), we have \begin{eqnarray*} (\dag)\qquad h(G)&=&\nu_\sigma(\mathfrak{T})+\nu_{\sigma'}(\mathfrak{T})\leq (h(A)+\beta_\sigma(\mathfrak{T})-1)+(h(B)+\beta_{\sigma'}(\mathfrak{T})-1)\\ &=&h(A)+h(B)+\beta_{\sigma}(\mathfrak{T})+\beta_{\sigma'}(\mathfrak{T})-2\\ &\leq &h(A)+h(B)+\ell_{\sigma}(G)+\ell_{\sigma'}(G)-2. \end{eqnarray*}
Observe that, for each set of prime numbers $\pi$, from the definition of $\pi'\pi$-series we have $\ell_{\pi'}(G)\leq \ell_\pi(G)+1$. Applying this remark with $\pi=\sigma$ and with $\pi=\sigma'$, from $(\dag)$ we get $h(G)\leq h(A)+h(B)+2\min\{\ell_\sigma(G),\ell_{\sigma'}(G)\}-1.$ \end{proof}
\begin{proof}[Proof of Theorem~$\ref{thrmA}$]
Write $\sigma:=\pi(A)$ and $\sigma':=\pi(B)$. If $|B|$ is odd or if $B$ is nilpotent, then Theorem~\ref{thrm2.3} yields $\ell_{\sigma'}(G)\leq d(B)$. In the first case, Eq.~\eqref{eq:1} follows directly from Lemma~\ref{lemma41}. In the second case, $h(B)=1$ and now Eq.~\eqref{eq:2} follows again from Lemma~\ref{lemma41}. \end{proof}
We now show that the bounds in Theorem~\ref{thrmA} are (in some cases) best possible. (We denote by $C_n$ a cyclic group of order $n$.) \begin{example}\label{ex1}{\rm Let $p,q,r$ and $t$ be distinct primes and let $n\geq 1$. Define $H_0:=C_p\mathop{\mathrm{wr}} C_q$ and $H_1:=(H_0\mathop{\mathrm{wr}} C_r)\mathop{\mathrm{wr}} (C_q\mathop{\mathrm{wr}} C_p)$. Now, for each $i\geq 1$, define inductively $H_{2i}:=(H_{2i-1}\mathop{\mathrm{wr}} C_r)\mathop{\mathrm{wr}} (C_p\mathop{\mathrm{wr}} C_q)$ and $H_{2i+1}:=(H_{2i}\mathop{\mathrm{wr}} C_r)\mathop{\mathrm{wr}} (C_q\mathop{\mathrm{wr}} C_p)$.
We let $H:=H_{n}$ and $G:=C_t\mathop{\mathrm{wr}} H$. Let $A$ be a Hall $\{p,q\}$-subgroup of $G$ and let $B$ be a Hall $\{r,t\}$-subgroup of $G$. A computation shows that $h(A)=n+2$, $h(B)=2$, $h(G)=3n+3$ and $d(B)=n+1$. Theorem~\ref{thrmA}~\eqref{eq:1} predicts $h(G)\leq h(A)+h(B)+2d(B)-1$, and in fact in this example the equality is met. } \end{example}
\begin{example}\label{ex2}{\rm Let $p$ and $q$ be distinct primes and let $n\geq 0$. Define $G_0:=C_p$ and $G_1:=G_0\mathop{\mathrm{wr}} C_q$. Now, for each $i\geq 1$, define inductively $G_{2i}:=G_{2i-1}\mathop{\mathrm{wr}} C_p$ and $G_{2i+1}:=G_{2i}\mathop{\mathrm{wr}} C_q$.
Let $G:=G_{2n}$, let $A$ be a Sylow $p$-subgroup of $G$ and let $B$ be a Sylow $q$-subgroup of $G$. A computation shows that $h(A)=1$, $d(B)=n$ and $h(G)=2n+1$. Theorem~\ref{thrmA}~\eqref{eq:2} predicts $h(G)\leq h(A)+2d(B)$, and in fact in this example the equality is met.} \end{example}
\begin{proof}[Proof of Corollary~$\ref{corcor}$] From Lemma~\ref{lemma:2.1}, there exists a Sylow basis $\{G_p\}_{p\in\pi(G)}$ of $G$ with $A=\prod_{p\in\pi(A)}G_p$ and $B=\prod_{p\in \pi(B)}G_p$.
Now, we argue by induction on $|\pi(B)|$. If $|\pi(B)|=1$, then $B$ is nilpotent and hence the proof follows from Theorem~\ref{thrmA}~\eqref{eq:2}. Suppose that $|\pi(B)|>1$. Fix $q\in \pi(B)$ and write $B_{q'}:=\prod_{p\in \pi(B)\setminus\{q\}}G_p$. Clearly, $G=AB=(AG_q)B_{q'}$ and hence, by induction, \begin{eqnarray*} h(G)&\leq& h(AG_q)+2\sum_{p\in \pi(B_{q'})}d(G_p)\leq (h(A)+2d(G_q))+2\sum_{p\in\pi(B_{q'})}d(G_p)\\ &=&h(A)+2\sum_{p\in \pi(B)}d(G_p). \end{eqnarray*} \end{proof}
The proof of Theorem~\ref{thrmB} will follow at once from the following lemma, which (in our opinion) is of independent interest.
\begin{lemma}\label{lemma4.4} Let $G$ be a group, let $\sigma$ be a non-empty subset of $\pi(G)$ and let $H$ be a Hall $\sigma$-subgroup of $G$. Then $\ell_\sigma(G)\leq \delta(H)h(H)$. \end{lemma} \begin{proof}
When $|\sigma|=1$, the proof follows immediately from Theorem~\ref{thrm2.2}. In particular, we may assume that $|\sigma|>1$. Now we proceed by induction on $|G|+|\sigma|$.
Clearly, $\ell_\sigma(G)=\ell_\sigma(G/\O{\sigma'}G)$ and $H\O{\sigma'} G/\O{\sigma'}G\cong H$ is a Hall $\sigma$-subgroup of $G/\O{\sigma'}G$. When $\O{\sigma'}G\neq 1$, the proof follows by induction, and hence we may assume that $\O{\sigma'}G=1$.
Suppose that $G$ contains two distinct minimal normal subgroups $N$ and $M$. Clearly, $\pi(N),\pi(M)\subseteq \sigma$. As $N\cap M=1$, the group $G$ embeds into $G/N\times G/M$ and hence $\ell_\sigma(G)=\max\{\ell_\sigma(G/N),\ell_\sigma(G/M)\}$; replacing $N$ by $M$ if necessary, we may assume that $\ell_\sigma(G/N)=\ell_\sigma(G)$. Moreover, by induction, $\ell_\sigma(G/N)\leq \delta(H/N)h(H/N)\leq \delta(H)h(H)$. This gives $\ell_\sigma(G)\leq\delta(H)h(H)$, and hence we may assume that $G$ contains a unique minimal normal subgroup. This yields $\F G=\O p G$, for some $p\in \sigma$. As $\cent G{\O p G}\leq \O p G$ and $\O p G\leq H$, we have $\F H=\O p H$.
Write $\tau:=\sigma\setminus\{p\}$. Observe that $\ell_\sigma(G)\leq \ell_p(G)+\ell_\tau(G)$. As $G_\tau$ is isomorphic to a subgroup of $H/\F H$, we get $h(G_\tau)\leq h(H/\F H)=h(H)-1$. Now, from the inductive hypothesis, we obtain \begin{eqnarray*} \ell_\sigma(G)&\leq& \ell_p(G)+\ell_\tau(G)\leq \delta(G_p)h(G_p)+\delta(G_\tau)h(G_\tau)\\ &\leq& \delta(H)+\delta(H)(h(H)-1)\leq \delta(H)h(H). \end{eqnarray*} \end{proof}
\begin{proof}[Proof of Theorem~$\ref{thrmB}$] Write $\sigma:=\pi(A)$ and $\sigma':=\pi(B)$. From Lemma~\ref{lemma4.4}, we get $\ell_\sigma(G)\leq \delta(A)h(A)$ and $\ell_{\sigma'}(G)\leq \delta(B)h(B)$. Now the proof follows from the second inequality in Lemma~\ref{lemma41}. \end{proof}
\section{Factorisations: Proof of Theorem~\ref{thrmC}}\label{section5} Before proceeding with the proof of Theorem~\ref{thrmC} we need to introduce some auxiliary notation.
Given a group $G$, we denote with $\R G $ the \textit{nilpotent residual} of $G$, that is, the smallest (with respect to inclusion) normal subgroup $N$ of $G$ with $G/N$ nilpotent. Then, we define inductively the descending normal series $\{\RR i G\}_i$ by $\RR 0 G:=G$ and $\RR {i+1} G:=\R{\RR i G}$, for every $i\geq 0$. Observe that if $h=h(G)$, then for every $i\in \{0,\ldots,h\}$ we have $\RR {h-i}G\leq \FF i G$.
Now, let $A$ be a Hall subgroup of $G$ and, for $i\in \{1,\ldots,h(A)\}$, define \[ \ell^i(G,A):=\max\{\ell_p(G)\mid p\in \pi(\RR{i-1} A /\RR i A)\}\quad\textrm{and}\quad \Lambda_G(A):=\sum_{i=1}^{h(A)}\ell^i(G,A). \] It is clear that, for every normal subgroup $N$ of $G$, $\Lambda_{G/N}(AN/N)\leq \Lambda_G(A)$.
\begin{lemma}\label{thrm48}Let $G=AB$ be a finite soluble group factorised by its proper subgroups $A$ and $B$ with $\gcd(|A|,|B|)=1$. Then $h(G)\leq \Lambda_G(A)+\Lambda_G(B)$. \end{lemma}
\begin{proof}
We argue by induction on $|G|$. Suppose that $G$ contains two distinct minimal normal subgroups $N$ and $M$. As $N\cap M=1$, the group $G$ embeds into $G/N\times G/M$ and hence $h(G)=\max\{h(G/N),h(G/M)\}$; replacing $N$ by $M$ if necessary, we may assume $h(G/N)=h(G)$. Then by induction $h(G)\leq \Lambda_{G/N}(AN/N)+\Lambda_{G/N}(BN/N)\leq \Lambda_G(A)+\Lambda_G(B)$. In particular, we may assume that $G$ contains a unique minimal normal subgroup $N$ and, replacing $A$ by $B$ if necessary, that $\{p\}=\pi(N)\subseteq \pi(A)$. This yields $\F G=\O p G$. As $\cent G{\O p G}\leq \O p G$ and $\O p G\leq A$, we have $\F A=\O p A$.
Write $h:=h(A)$. Now, $\RR{h-1}A\leq \FF 1 A=\F A$ and hence $\RR {h-1}A$ is a $p$-group. Thus $$\Lambda_G(A)=\ell_p(G)+\sum_{i=1}^{h-1}\ell^i(G,A).$$ Since $\ell_p(G/\F G)=\ell_p(G)-1$, we get $\Lambda_{G/\F G}(A/\F G)\leq \Lambda_G(A)-1$. Moreover, since $p\notin \pi(B)$, we have $\Lambda_{G/\F G}(B\F G/\F G)=\Lambda_{G}(B)$. Therefore the inductive hypothesis gives \begin{eqnarray*} h(G)&=&h(G/\F G)+1\\ &\leq& \Lambda_{G/\F G}(A\F G/\F G)+\Lambda_{G/\F G}(B\F G/\F G)+1\\ &\leq& \Lambda_G(A)+\Lambda_G(B), \end{eqnarray*} and the proof is complete. \end{proof}
\begin{proof}[Proof of Theorem~$\ref{thrmC}$] For each $p\in \pi(A)$, Theorem~\ref{thrm2.2} yields $\ell_p(G)\leq d(G_p)$ and hence $\ell_p(G)\leq \delta(A)$. It follows that $\Lambda_G(A)\leq \delta(A)h(A)$. The same argument applied to $B$ gives $\Lambda_G(B)\leq \delta(B)h(B)$. Now the proof follows from Lemma~\ref{thrm48}. \end{proof}
A weaker form of Theorem~\ref{thrmC} can be deduced from the results in Section~\ref{section4}. Indeed, from the first inequality in Lemma~\ref{lemma41} and from Lemma~\ref{lemma4.4}, we get \begin{eqnarray*} h(G)&\leq &h(A)+h(B)+\ell_{\sigma}(G)+\ell_{\sigma'}(G)-2\\ &\leq& h(A)+h(B)+\delta(A)h(A)+\delta(B)h(B)-2\\ &=&(\delta(A)+1)h(A)+(\delta(B)+1)h(B)-2. \end{eqnarray*} Clearly, Theorem~\ref{thrmC} always offers an estimate on $h(G)$ that is at least as good, the difference between the two bounds being $h(A)+h(B)-2\geq 0$.
\end{document}
\begin{document}
\title[Order Induced Proximity]{Proximity Induced by Order Relations}
\author[M.Z. Ahmad]{M.Z. Ahmad$^{\alpha}$} \email{[email protected]} \address{\llap{$^{\alpha}$\,} Computational Intelligence Laboratory, University of Manitoba, WPG, MB, R3T 5V6, Canada} \thanks{\llap{$^{\alpha}$\,}The research has been supported by University of Manitoba Graduate Fellowship and Gorden P. Osler Graduate Scholarship.}
\author[J.F. Peters]{J.F. Peters$^{\beta}$} \email{[email protected]} \address{\llap{$^{\beta}$\,} Computational Intelligence Laboratory, University of Manitoba, WPG, MB, R3T 5V6, Canada and Department of Mathematics, Faculty of Arts and Sciences, Ad\.{i}yaman University, 02040 Ad\.{i}yaman, Turkey} \thanks{\llap{$^{\beta}$\,}The research has been supported by the Natural Sciences \& Engineering Research Council of Canada (NSERC) discovery grant 185986 and Instituto Nazionale di Alta Matematica (INdAM) Francesco Severi, Gruppo Nazionale per le Strutture Algebriche, Geometriche e Loro Applicazioni grant 9 920160 000362, n.prot U 2016/000036.}
\subjclass[2010]{Primary 54E05 (Proximity); Secondary 68U05 (Computational Geometry)}
\date{}
\dedicatory{Ju. M. Smirnov and S.A. Naimpally}
\begin{abstract} This paper introduces an order proximity on a collection of objects induced by a partial order, using the Smirnov closeness measure on a Sz\'{a}z relator space. A Sz\'{a}z relator is a nonempty family of relations defined on a nonvoid set $K$. The Smirnov closeness measure provides a straightforward means of assembling partially ordered collections of pairwise close sets. In its original form, the Ju. M. Smirnov closeness measure satisfies $\delta(A,B) = 0$ for a pair of nonempty sets $A,B$ with nonvoid intersection and $\delta(A,B) = 1$ for non-close sets. A main result in this paper is that the graph obtained from the induced proximity is equivalent to the Hasse diagram of the order relation that induces it. This paper also includes an application of order proximity in detecting sequences of video frames that are close with respect to the induced proximity. \end{abstract}
\maketitle \section{Introduction}\label{sec:intro} This paper introduces proximities in terms of order relations on sets of objects with proximity induced by the order. Axioms for the closeness (proximity) of nonempty sets have been given in terms of set intersections~\cite{Cech1966,Lodato1964,Lodato1966}. Different proximities consider different associated sets such as set closures~\cite{Wallman1938}, set interiors~\cite{peters2015strongly} and set descriptions~\cite{Peters2016CP}.
Traditionally, the closeness (proximity) of nonempty sets is viewed in terms of asymptotically close sets or those sets that have points in common. With descriptive proximity, closeness of non-disjoint as well as disjoint nonempty sets occurs in cases where sets have matching descriptions. A description of a set is a feature vector whose components are feature values that quantify the characteristics of a set. Instead of the usual forms of proximity, the paper considers proximities relative to a partial ordering of elements of a set or collections of elements of sets or collections of nonempty sets or collections of cells in a cell complex or descriptions of collections of nonempty sets using a combination of a Sz\'{a}z relator~\cite{Szaz1987,Szaz2000,Szaz2014} on a nonempty set and the Smirnov closeness measure~\cite{Smirnov1952,Smirnov1952a}. The main result in this paper states the equivalence of graphs obtained by proximity to the Hasse diagrams of order relations from which they are induced(see Theorem~\ref{thm:proxhassec}).
\section{Preliminaries}\label{sec:prelim} Recall that a \emph{binary relation} is a subset of the Cartesian product $X \times Y$. Then, \begin{definition}\label{def:porder}
A \emph{partial order} is a binary relation $\leq$, over a set $X$ satisfying for any $a,b,c \in X$:
\begin{compactenum}[1$^o$]
\item \textbf{Reflexivity:} $a \leq a$
\item \textbf{Antisymmetry:} if $a \leq b$ and $b \leq a$, then $a=b$
\item \textbf{Transitivity:} if $a \leq b$ and $b \leq c$, then $a \leq c$
\end{compactenum} \end{definition} Let us now define a total order. \begin{definition}\label{def:torder}
If a partial order $\leq$ also satisfies,
\begin{description}
\item[4$^o$] \textbf{Connex property or Totality:} either $a \leq b$ or $b \leq a$
\end{description}
then $\leq$ is termed a \emph{total order}. Hence, every total order is a partial order. \end{definition} If $\leq$ is a partial (total) order on a set $X$, then the pair $(X,\leq)$ is called a \emph{partially (totally) ordered set}. \begin{figure}
\caption{This figure shows the Hasse diagram of ordered sets with different types of order. Fig.~\ref{subfig:integers} represents a total order on $\mathbb{Z}$. Fig.~\ref{subfig:subsetinc} is a partially ordered set formed by the power set of $\{x,y,z\}$ under set inclusion $\subseteq$. Fig.~\ref{subfig:cycle} is the cyclic order on the set $\{v_1,v_2,v_3,v_4,v_5\}$.}
\label{subfig:integers}
\label{subfig:subsetinc}
\label{subfig:cycle}
\end{figure} \begin{example}\label{ex:example1}
Consider the set of integers $\mathbb{Z}=\{\cdots,-2,-1,0,1,2,\cdots\}$ and the relation $\leq$ (less than or equal to); then $(\mathbb{Z},\leq)$ is a totally ordered set. We can write $\cdots \leq -1 \leq 0 \leq 1 \leq \cdots$ and for all $a,b \in \mathbb{Z}$ it is clear that either $a \leq b$ or $b \leq a$.
Let us now consider a set $X=\{a,b,c\}$ and its power set $2^X=\{\emptyset,\{a\},\{b\},\{c\},\{a,\linebreak b\},\{a,c\},\{b,c\},\{a,b,c\}\}$. The pair $(2^X,\subseteq)$ is a poset that is not totally ordered, since $\{a\} \not\subseteq \{b\}$ and $\{b\} \not\subseteq \{a\}$. The same holds for any two distinct singletons among $\{a\},\{b\},\{c\}$. \qquad \eot \end{example}
Let $(X,\leq)$ be a set with an order; consider $X$ as the vertex set and for each $a \leq b$ draw a directed edge $a \rightarrow b$. This graph is represented as $\mathop{\mathcal{O}}\limits_\leq(X)$. The transitive reduction of this graph is called the \emph{Hasse diagram} and is represented as $\mathop{\mathcal{H}s}\limits_\leq(X)$. A \emph{transitive reduction} of a directed graph $D_1$ is a directed graph $D_2$ with the same vertex set and as few edges as possible, such that there is a path between vertices $v_1$ and $v_2$ in $D_2$ whenever there is a path between them in $D_1$. \begin{example}\label{ex:example2}
Let us consider the Hasse diagrams for the ordered sets considered in example \ref{ex:example1}. Since the relation $\leq$ on $\mathbb{Z}$ is a total order, the Hasse diagram, Fig.~\ref{subfig:integers}, is a path graph (similar to a line, in which each node except a terminating node is connected to two other nodes). That is why a total order is also called a linear order.
Now let us look at the Hasse diagram for $(2^X,\subseteq)$, where $X=\{x,y,z\}$, shown in Fig.~\ref{subfig:subsetinc}. Comparing this diagram with the diagram for the total order in Fig.~\ref{subfig:integers}, the difference between partial and total orders becomes evident. In a total order we can compare any two elements, as there is a path between them. Whereas for the partial order, as shown in Fig.~\ref{subfig:subsetinc}, there is no path between $\{x\}$ and $\{y\}$. This is also true for all the singletons pairwise and for all the subsets of size $2$, such as $\{x,y\}$.
\qquad \eot \end{example}
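The construction in Example~\ref{ex:example2} can also be carried out computationally. The following minimal sketch (in Python, assuming the \texttt{networkx} package is available; it is not part of the original example) builds the order graph of $(2^{\{x,y,z\}},\subseteq)$ and takes its transitive reduction, which reproduces the Hasse diagram of Fig.~\ref{subfig:subsetinc}.
\begin{verbatim}
from itertools import combinations
import networkx as nx  # assumed to be available

X = {'x', 'y', 'z'}
nodes = [frozenset(c) for r in range(len(X) + 1) for c in combinations(X, r)]

# Order graph: an edge A -> B for every proper inclusion A < B
# (reflexive loops are omitted so that the graph is acyclic).
O = nx.DiGraph()
O.add_nodes_from(nodes)
O.add_edges_from((A, B) for A in nodes for B in nodes if A < B)

# The Hasse diagram is the transitive reduction of the order graph.
Hs = nx.transitive_reduction(O)
for A, B in Hs.edges():
    print(sorted(A), '->', sorted(B))
\end{verbatim}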
A \emph{ternary relation} on three sets $X,Y,Z$ is a subset of $X \times Y \times Z$ and extends the notion of a binary relation. Using this notion, \begin{definition}\label{def:corder}
Define a ternary relation $\gamma$ that yields triples $[a,b,c]$ such that when proceeding from $a$ to $c$ one has to pass through $b$. For $a,b,c,d \in X$, if this ternary relation $\gamma$ satisfies:
\begin{compactenum}[1$^o$]
\item \textbf{Cyclicity:} if $[a,b,c]$, then $[b,c,a]$
\item \textbf{Antisymmetry:} if $[a,b,c]$ then not $[c,b,a]$
\item \textbf{Transitivity:} if $[a,b,c]$ and $[a,c,d]$, then $[a,b,d]$
\item \textbf{Totality:} if $a,b,c$ are distinct then either,$[a,b,c]$ or $[c,b,a]$
\end{compactenum}
then it is called a \emph{cyclic order}. \end{definition}
If the \emph{totality condition} is not satisfied we get a \emph{partial cyclic order}, but in this paper we will restrict ourselves to the total cyclic order. \begin{example}\label{ex:example3}
Let us explain the triples $[a,b,c]$ yielded by the ternary relation $\gamma$. A triple means that the points are ordered so that in order to move from $a$ to $c$ one has to pass through $b$. Thus, the order is defined using three elements of the set rather than two, as in total/partial orders. It can be seen that if we have a cycle represented as $v_1\rightarrow v_2\rightarrow v_3\rightarrow v_4\rightarrow v_5\rightarrow v_1$, then any triple of elements taken along this sequence (as long as we move in the same direction) satisfies the properties of the cyclic order defined above. To illustrate, $[v_4,v_5,v_1]$ and $[v_5,v_2,v_4]$ are valid choices but $[v_4,v_2,v_5]$ and $[v_1,v_5,v_4]$ are not. It can be seen that the Hasse diagram of this particular order is a cycle graph, hence the name cyclic order. In the Hasse diagram we ignore ordered triples like $[v_2,v_4,v_1]$, as it is a combination of $[v_2,v_3,v_4]$ and $[v_4,v_5,v_1]$. \end{example} \begin{figure}
\caption{This figure illustrates the concept of proximity spaces. Fig.~\ref{subfig:proximity} is the space $X$ that has been equipped by proximity relations. Fig.~\ref{subfig:Lodato} is the graph obtained by considering the spatial Lodato proximity $\delta$. Fig.~\ref{subfig:desLodato} is the graph obtained using the descriptive Lodato proximity $\delta_{\Phi}$.}
\label{subfig:proximity}
\label{subfig:Lodato}
\label{subfig:desLodato}
\end{figure}
Proximity is used to study whether sets in a space are near or far. It can be defined as a binary relation on a space that yields pairs of subsets that are near each other. We will use the Smirnov functional notation for proximity, i.e. $\delta(A,B)=0$ if $A$ and $B$ are near and $\delta(A,B)=1$ if they are far.
Different criteria for nearness yield varying proximities. Suppose $A,B \subset X$. Then $A \cap B \neq \emptyset \Rightarrow \delta(A,B)=0$ is the central axiom of the \v{C}ech\cite{Cech1966} and spatial Lodato\cite{Lodato1964,Lodato1966} proximities. The Wallman proximity\cite{Wallman1938} uses $\delta(A,B)=0 \Leftrightarrow cl\,A \cap cl\, B \neq \emptyset$, where $cl\, A$ is the closure of $A$. Strong proximity\cite{peters2015strongly} uses $int\, A \cap int\, B \neq \emptyset \Rightarrow \mathop{\delta}\limits^{\doublewedge}(A,B)=0$, where $int\, A$ is the interior of $A$. The idea of a proximity can be extended to near sets using the notion of a probe function $\phi$ that assigns a feature vector to each point of the space, with $\phi(A)=\{\phi(x):x\in A\}$ denoting the set of descriptions of the points of $A$. We generalize the classical intersection as: \begin{align*} A \mathop{\cap}\limits_{\Phi} B= \{x\in A \cup B: \phi(x) \in \phi(A)\ \mbox{and}\ \phi(x) \in \phi(B)\} \end{align*} The descriptive Lodato proximity\cite{Peters2016CP} uses $A \mathop{\cap}\limits_{\Phi} B \neq \emptyset \Rightarrow \delta_{\Phi}(A,B)=0$ and the descriptive strong proximity uses $int\,A \mathop{\cap}\limits_{\Phi} int\,B \neq \emptyset \Rightarrow \mathop{\delta_{_{\Phi}}}\limits^{\doublewedge}(A,B)=0$. We can also get a proximity graph by treating the subsets of a space as the vertices and drawing an edge between $A$ and $B$ if $\delta(A,B)=0$. \begin{example}\label{ex:example4}
Let us consider the space $X$ with the family of subsets $\{A_1,A_2,A_3,A_4,\linebreak A_5\}$ as shown in Fig.~\ref{subfig:proximity}. It can be seen that if we consider the spatial Lodato proximity, then $\{A_1,A_2,A_3\}$ are pairwise proximal as they share a common point. $\delta(A_4,A_5)=0$ as they share a common edge. Now, before we move on to drawing the proximity graph for the spatial Lodato proximity $\delta$, it must be noted that all the proximity relations that we have considered are symmetric, i.e. $\delta(A,B)=0 \Leftrightarrow \delta(B,A)=0$. This is different from the order relations, which are antisymmetric. Hence, the proximity diagram for the spatial Lodato proximity shown in Fig.~\ref{subfig:Lodato} is an undirected graph. Now let us consider the descriptive Lodato proximity $\delta_{\Phi}$. We can see that $\delta_{\Phi}(A_1,A_4)=0$ and $\delta_{\Phi}(A_3,A_5)=0$ as their interiors have the same color. The proximity diagram in this case, as shown in Fig.~\ref{subfig:desLodato}, is also undirected as it is a symmetric relation.
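The descriptive computation above can be illustrated with a small sketch (in Python, using made-up point sets and colour labels rather than the actual regions of Fig.~\ref{subfig:proximity}); it evaluates the descriptive intersection and the corresponding Smirnov value.
\begin{verbatim}
# Hypothetical regions and colour descriptions (illustrative only).
A1 = {(0, 0), (0, 1)}
A4 = {(5, 5), (5, 6)}
A5 = {(9, 9)}
colour = {(0, 0): 'green', (0, 1): 'green',
          (5, 5): 'green', (5, 6): 'green',
          (9, 9): 'blue'}

def descriptive_intersection(A, B):
    # points of A u B whose description occurs in both description sets
    descA = {colour[p] for p in A}
    descB = {colour[p] for p in B}
    return {p for p in A | B if colour[p] in descA and colour[p] in descB}

def delta_phi(A, B):
    # Smirnov-style value: 0 when descriptively near, 1 otherwise
    return 0 if descriptive_intersection(A, B) else 1

print(delta_phi(A1, A4), delta_phi(A1, A5))  # 0 1
\end{verbatim}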
\qquad \eot \end{example} \section{Order Induced Proximities}\label{sec:results} Previous works define proximity using a set of axioms. In this work we use different order relations to induce a proximity relation. We have introduced the notion of an ordered set, which is a pair $(X,\leq)$ where $X$ is a set and $\leq$ is an order relation. We generalize this structure using the notion of a relator space $(X,\mathcal{R})$, where $X$ is a set and $\mathcal{R}$ is a family of relations. We begin by studying the proximity induced by a partial order defined in def.~\ref{def:porder}. \begin{definition}\label{def:poprox}
Let there be a relator space $(X,\{\leq\})$, where $\leq$ is a partial order. Then we define, for $a,b \in X$,
\begin{align*}
(a \leq b) \wedge (\not\exists x \in X \setminus \{a,b\}\; s.t.\;a\leq x \leq b) \Leftrightarrow\; \ponear{\leq}(a,b)=0 \\
(a \leq b) \wedge (\exists x \in X \setminus \{a,b\}\; s.t.\;a\leq x \leq b) \Leftrightarrow\; \ponear{\leq}(a,b)=1
\end{align*}
Here, $\ponear{\leq}$ is the proximity induced by the partial order $\leq$. \end{definition} We formulate the following theorem regarding the properties of the proximity induced by the partial order $\ponear{\leq}$. \begin{theorem}\label{thm:ponearprop}
Let $(X,\{\leq\})$ be a relator space where $X$ is a set and $\leq$ is a partial order. Let $a,b,c \in X$ be three elements in $X$. Then, the proximity induced on $X$ by the partial order $\leq$ represented by $\ponear{\leq}$ satisfies the following conditions:
\begin{compactenum}[1$^o$]
\item \textbf{Reflexivity:} $\ponear{\leq}(a,a)=0$
\item \textbf{Antisymmetry:} if $\ponear{\leq}(a,b)=0$ and $\ponear{\leq}(b,a)=0$ then $a=b$
\item \textbf{Antitransitivity:} if $a,b,c$ are pairwise distinct, $\ponear{\leq}(a,b)=0$ and $\ponear{\leq}(b,c)=0$, then $\ponear{\leq}(a,c)=1$
\end{compactenum} \end{theorem} \begin{proof}
\begin{compactenum}[$1^o$]
\item It follows directly from the reflexivity condition of the partial order as defined in def.~\ref{def:porder} stating that $a \leq a$ and the antisymmetry condition which states that if $a \leq b$ and $b \leq a$, then $a=b$. Hence, there is no distinct $x \in X$ such that $a \leq x \leq a$, the only such $x$ is $a$. Thus, from def.~\ref{def:poprox} we get $\ponear{\leq}(a,a)=0$.
\item From def.~\ref{def:poprox} it can be seen that $\ponear{\leq}(a,b)=0$ means that $a \leq b$ and $\ponear{\leq}(b,a)=0$ means that $b \leq a$. Now, invoking the antisymmetry property of the partial order as defined in def.~\ref{def:porder}, if $a \leq b$ and $b \leq a$ then $a=b$.
\item From def.~\ref{def:poprox}, $\ponear{\leq}(a,b)=0$ states $a \leq b$ and $\ponear{\leq}(b,c)=0$ is equivalent to $b \leq c$. This gives us $a \leq b \leq c$. From def.~\ref{def:poprox} we can see that $\ponear{\leq}(a,c)=1$, as there exists an $x \in X\setminus\{a,c\}$, namely $x=b$, such that $a \leq x \leq c$.
\end{compactenum} \end{proof} Given a relator space $(X,\{\leq\})$ we draw a graph over the vertex set $X$ with an edge $a \rightarrow b$ whenever $a \leq b$. This graph is represented as $\mathop{\check{\mathcal{O}}}\limits_\leq(X)$. We can draw another graph over the vertex set $X$ such that an edge $a \rightarrow b$ exists if $\ponear{\leq}(a,b)=0$. This graph is represented as $\mathop{\check{\mathcal{P}}}\limits_\leq(X)$. \begin{lemma}\label{lm:reduction}
Let $(X,\{\leq\})$ be a relator space where $X$ is a set and $\leq$ be a partial order. Then if there exists a path between two vertices in the graph $\mathop{\check{\mathcal{O}}}\limits_\leq(X)$, then there also exists a path between these vertices in $\mathop{\check{\mathcal{P}}}\limits_\leq(X)$. \end{lemma} \begin{proof}
Suppose there is a path between two vertices $a$ and $b$ in $\mathop{\check{\mathcal{O}}}\limits_\leq(X)$. This means that $a\leq b$ for $a,b \in X$. Transitivity of the partial order (def.~\ref{def:porder}) dictates that there can be two cases. Either there exists no $x \in X\setminus\{a,b\}$ such that $a \leq x \leq b$, or there exists a family of elements $\{d_i\} \subseteq X$ such that $a \leq d_1 \leq \cdots \leq d_n \leq b$ and no element $x \in X$ can be inserted at any location in this sequence. For the case $\not \exists x \in X\setminus\{a,b\}\;s.t. \; a \leq x\leq b$, from def.~\ref{def:poprox} we have $\ponear{\leq}(a,b)=0$. Hence, there exists an edge $a \rightarrow b$ in the graph $\mathop{\check{\mathcal{P}}}\limits_\leq(X)$. Now consider the case in which $a \leq d_1 \leq \cdots \leq d_n \leq b$ and no element can be inserted in this sequence. From def.~\ref{def:poprox} we can write $\ponear{\leq}(a,d_1)=\ponear{\leq}(d_1,d_2)=\cdots=\ponear{\leq}(d_{n-1},d_n)=\ponear{\leq}(d_n,b)=0$. Hence, there exists a sequence of edges $a \rightarrow d_1 \rightarrow d_2 \rightarrow \cdots \rightarrow d_n \rightarrow b$ in the graph $\mathop{\check{\mathcal{P}}}\limits_\leq(X)$ constituting a path between $a$ and $b$.
Let $(X,\{\leq\})$ be a realtor space where $X$ is a set, $\leq$ is a partial order and $\ponear{\leq}$ is the induced proximity as defined by def.~\ref{def:poprox}. Then, $\mathop{\check{\mathcal{P}}}\limits_\leq(X)$ is equivalent to $\mathop{\mathcal{H}s}\limits_\leq(X)$. \end{theorem} \begin{proof}
We know that $\mathop{\mathcal{H}s}\limits_\leq(X)$ is a transitive reduction of $\mathop{\check{\mathcal{O}}}\limits_\leq(X)$. This means that for every path between two vertices in $\mathop{\check{\mathcal{O}}}\limits_\leq(X)$, there exists a path in $\mathop{\mathcal{H}s}\limits_\leq(X)$. From lemma~\ref{lm:reduction} we can see that $\mathop{\check{\mathcal{P}}}\limits_\leq(X)$ has a path between every pair of vertices that are connected by a path in $\mathop{\check{\mathcal{O}}}\limits_\leq(X)$. Moreover, by def.~\ref{def:poprox} an edge $a \rightarrow b$ with $a\neq b$ belongs to $\mathop{\check{\mathcal{P}}}\limits_\leq(X)$ exactly when no element lies strictly between $a$ and $b$, that is, exactly when $b$ covers $a$; these covering edges are precisely the edges of $\mathop{\mathcal{H}s}\limits_\leq(X)$, so the two graphs coincide. \end{proof} \begin{example}\label{ex:example5}
Consider $X=\{x,y,z\}$ and its power set $2^X=\{\emptyset,\{x\},\{y\},\{z\},\{x,y\},\{x,z\},\{y,z\},\{x,y,z\}\}$, as in example~\ref{ex:example1} (with the elements relabelled to match Fig.~\ref{subfig:subsetinc}). It has been discussed that $(2^X,\subseteq)$ is a partially ordered set and can be visually represented as the Hasse diagram shown in Fig.~\ref{subfig:subsetinc}. Let us see how the proximity relation $\ponear{\subseteq}$ is induced by def.~\ref{def:poprox}. It can be seen that as there is no $A \in 2^X\setminus\{\emptyset,\{x\}\}$ such that $\emptyset \subseteq A \subseteq \{x\}$, we have $\ponear{\subseteq}(\emptyset, \{x\})=0$. Whereas $\emptyset \subseteq \{x\} \subseteq \{x,y\}$, hence $\ponear{\subseteq}(\emptyset, \{x,y\})=1$. Moreover, it can be seen that $\{x\} \not\subseteq \{y\}$ and $\{y\} \not\subseteq \{x\}$, thus the two elements $\{x\},\{y\}$ have no order relation between them. This is why $\subseteq$ is called a partial order. Following the same argument, there is no induced proximity relation $\ponear{\subseteq}$ between $\{x\}$ and $\{y\}$, hence we cannot say whether they are far or near based on the proximity induced by the partial order $\subseteq$. It can be seen from Thm.~\ref{thm:proxhassepo} that the proximity graph $\mathop{\check{\mathcal{P}}}\limits_\subseteq(2^X)$ is the same as the Hasse diagram shown in Fig.~\ref{subfig:subsetinc}.
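This agreement can be checked mechanically. The following minimal sketch (in Python, assuming the \texttt{networkx} package is available; it is not part of the original example) computes $\ponear{\subseteq}$ directly from def.~\ref{def:poprox} and compares the resulting edges with the transitive reduction of the full order graph, as asserted by Thm.~\ref{thm:proxhassepo}.
\begin{verbatim}
from itertools import combinations
import networkx as nx  # assumed to be available

X = {'x', 'y', 'z'}
nodes = [frozenset(c) for r in range(len(X) + 1) for c in combinations(X, r)]

def near(a, b):
    # def:poprox for distinct a, b: near iff a < b with nothing strictly between
    if not a < b:
        return None                     # incomparable: proximity not defined
    blocked = any(a < c < b for c in nodes if c not in (a, b))
    return 1 if blocked else 0

prox_edges = {(a, b) for a in nodes for b in nodes if near(a, b) == 0}

O = nx.DiGraph([(a, b) for a in nodes for b in nodes if a < b])
hasse_edges = set(nx.transitive_reduction(O).edges())
print(prox_edges == hasse_edges)  # True
\end{verbatim}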
\qquad \eot \end{example} Let us study the proximity induced by the total order defined in def.~\ref{def:torder}. \begin{definition}\label{def:toprox}
Let there be a relator space $(X,\{\leq\})$, where $\leq$ is a total order. Then we define, for $a,b \in X$,
\begin{align*}
(a \leq b) \wedge (\not\exists x \in X \setminus \{a,b\}\; s.t.\;a\leq x \leq b) \Leftrightarrow\; \tonear{\leq}(a,b)=0 \\
(a \leq b) \wedge (\exists x \in X \setminus \{a,b\}\; s.t.\;a\leq x \leq b) \Leftrightarrow\; \tonear{\leq}(a,b)=1
\end{align*} Here, $\tonear{\leq}$ is the proximity induced by the total order $\leq$. \end{definition} Let us formulate properties of the proximity induced by the total order $\tonear{\leq}$. \begin{theorem}\label{thm:tonearprop}
Let $(X,\{\leq\})$ be a relator space where $X$ is a set and $\leq$ is a total order. Let $a,b,c \in X$ be three elements in $X$. Then, the proximity induced on $X$ by the total order $\leq$ represented by $\tonear{\leq}$ satisfies the following conditions:
\begin{compactenum}[$1^o$]
\item \textbf{Reflexivity:} $\tonear{\leq}(a,a)=0$
\item \textbf{Antisymmetry:} if $\tonear{\leq}(a,b)=0$ and $\tonear{\leq}(b,a)=0$ then $a=b$
\item \textbf{Antitransitivity:} if $a,b,c$ are pairwise distinct, $\tonear{\leq}(a,b)=0$ and $\tonear{\leq}(b,c)=0$, then $\tonear{\leq}(a,c)=1$
\item \textbf{Totality:} either $\big( \exists A=\{a,d_1,\cdots,d_n,b\} \subseteq X \; s.t. \; \mathop{\sum}\limits_{i=1}^{n+1}\tonear{\leq}(A_i,A_{i+1})=0 \big)$ or $\big( \exists B=\{b,e_1,\cdots,e_m,a\} \subseteq X \; s.t. \; \mathop{\sum}\limits_{i=1}^{m+1}\tonear{\leq}(B_i,B_{i+1})=0 \big)$
\end{compactenum} where $A_i$ is the $i-$th element of set $A$ and $m,n \in \mathbb{Z}^+$. \end{theorem} \begin{proof}
\begin{compactenum}[1$^o$]
\item From the reflexivity property of the total order (def.~\ref{def:torder}) we get $a \leq a$. Taking into account the antisymmetry property of the total order (def.~\ref{def:torder}), which states that if $a \leq b$ and $b \leq a$ then $a=b$, we conclude that $a \leq x \leq a$ dictates $x=a$. From def.~\ref{def:toprox}, since there is no $x \in X\setminus\{a\}$ such that $a \leq x \leq a$, we get $\tonear{\leq}(a,a)=0$.
\item From def.~\ref{def:toprox} we can establish that $\tonear{\leq}(a,b)=0$ means $a \leq b$ and $\tonear{\leq}(b,a)=0$ means $b \leq a$. Now, from the antisymmetry property of the total order as defined in def.~\ref{def:torder}, we can conclude that if $a \leq b$ and $b \leq a$ then $a=b$.
\item From def.~\ref{def:toprox} we can say that $\tonear{\leq}(a,b)=0$ means $a \leq b$ and $\tonear{\leq}(b,c)=0$ means $b \leq c$. Thus $a \leq b \leq c$. Now from def.~\ref{def:toprox} we know that if there exists $x\in X\setminus\{a,c\}$ such that $a \leq x \leq c$ then $\tonear{\leq}(a,c)=1$. Here $x=b$ is such an element, hence $\tonear{\leq}(a,c)=1$.
\item From the totality property of the total order(def.~\ref{def:torder}) we can conclude that either $a \leq b$ or $b \leq a$. Using the transitive property of the total order(def.~\ref{def:torder}) we conclude that $a \leq b$ means that either there is no $x \in X$ such that $a \leq x \leq b$ in which case from def.~\ref{def:toprox} $\tonear{\leq}(a,b)=0$, or there exists a set of elements $\{d_1,\cdots,d_n\}\subseteq X$ such that $a \leq d_1 \leq \cdots \leq d_n \leq b$. Assuming that there exists no $x \in X$ that can be added to this chain at any point. In this case using the def.~\ref{def:toprox} we can conclude that $\tonear{\leq}(a,d_1)=\tonear{\leq}(d_1,d_2)=\cdots=\tonear{\leq}(d_{n-1},d_n)=\tonear{\leq}(d_n,b)=0$. This can be written as $\exists A=\{a,d_1,\cdots,d_n,b\}\subseteq X$ such that $\mathop{\sum}\limits_{i=1}^{n+1}\tonear{\leq}(A_i,A_{i+1})=0 $. Replicating this argument for $b \leq a$ we can obtain $\exists B=\{b,e_1,\cdots,e_m,a\} \subseteq X$ such that $\mathop{\sum}\limits_{i=1}^{m+1}\tonear{\leq}(B_i,B_{i+1})=0$.
\end{compactenum} \end{proof} For the relator space $(X,\{\leq\})$, where $X$ is a set and $\leq$ is a total order, we have a graph with $X$ as the vertex set and an edge $a \rightarrow b$ for $a,b \in X$ such that $a \leq b$. This is represented as $\mathop{\bar{\mathcal{O}}}\limits_\leq(X)$. Another graph over the vertex set $X$ is obtained by adding an edge $a \rightarrow b$ if $\tonear{\leq}(a,b)=0$, and is represented as $\mathop{\bar{\mathcal{P}}}\limits_\leq(X)$. \begin{lemma}\label{lm:toreduction}
Let $(X,\{\leq \})$ be a relator space where $X$ is a set and $\leq$ is a total order. Then, if there exists a path between two vertices in the graph $\mathop{\bar{\mathcal{O}}}\limits_\leq(X)$, there also exists a path between these vertices in $\mathop{\bar{\mathcal{P}}}\limits_\leq(X)$. \end{lemma} \begin{proof}
A path $a \rightarrow b$ in $\mathop{\bar{\mathcal{O}}}\limits_\leq(X)$ means $a \leq b$. By the transitivity of the total order (def.~\ref{def:torder}) we can conclude that this leads to one of two cases: either $\not\exists x \in X\setminus\{a,b\}\; s.t. \; a \leq x \leq b$, or $\exists \{d_i\} \subseteq X \; s.t.\; a\leq d_1 \leq \cdots \leq d_n \leq b$ such that no $x \in X$ can be inserted at any location in this sequence. For the first case, from def.~\ref{def:toprox} we can write $\tonear{\leq}(a,b)=0$ and hence there is an edge $a \rightarrow b$ in $\mathop{\bar{\mathcal{P}}}\limits_\leq(X)$. For the second case, using def.~\ref{def:toprox} we can write $\tonear{\leq}(a,d_1)=\tonear{\leq}(d_1,d_2)=\cdots=\tonear{\leq}(d_{n-1},d_n)=\tonear{\leq}(d_n,b)=0$. Thus, there exists a sequence of edges $a \rightarrow d_1 \rightarrow \cdots \rightarrow d_{n}\rightarrow b$ in $\mathop{\bar{\mathcal{P}}}\limits_\leq(X)$ constituting a path between $a$ and $b$.
Let $(X,\{\leq\})$ be a relator space where $X$ is a set, $\leq$ is a total order and $\tonear{\leq}$ is the induced proximity as defined by def.~\ref{def:toprox}. Then, $\mathop{\bar{\mathcal{P}}}\limits_\leq(X)$ is equivalent to $\mathop{\mathcal{H}s}\limits_\leq(X)$. \end{theorem} \begin{proof}
From lemma~\ref{lm:toreduction} we know that there is a path between two vertices in $\mathop{\bar{\mathcal{P}}}\limits_\leq(X)$ if there is a path between them in $\mathop{\bar{\mathcal{O}}}\limits_\leq(X)$. Moreover, by def.~\ref{def:toprox} the edges of $\mathop{\bar{\mathcal{P}}}\limits_\leq(X)$ are exactly the covering pairs of $\leq$, so $\mathop{\bar{\mathcal{P}}}\limits_\leq(X)$ is a transitive reduction of $\mathop{\bar{\mathcal{O}}}\limits_\leq(X)$. We know that by definition $\mathop{\mathcal{H}s}\limits_\leq(X)$ is also a transitive reduction of $\mathop{\bar{\mathcal{O}}}\limits_\leq(X)$, with the same covering pairs as edges; hence the two graphs coincide. \end{proof} \begin{example}\label{ex:example6}
Now consider $\mathbb{Z}$ as in example \ref{ex:example1}, with the total order $\leq$. $(\mathbb{Z},\{\leq \})$ is a totally ordered set that can be visualized by the aid of the Hasse diagram displayed in Fig.~\ref{subfig:integers}.
Now, we move on to understanding how this order induces a proximity $\tonear{\leq}$ as per def.~\ref{def:toprox}. It can be seen that as there is no $x \in \mathbb{Z}\setminus\{0,1\}$ such that $0 \leq x \leq 1$, we have $\tonear{\leq}(0,1)=0$. Moreover, as $0 \leq 1 \leq 2$, we have $\tonear{\leq}(0,2)=1$. It can also be seen that $1 \not\leq 0$, hence we cannot talk about $\tonear{\leq}(1,0)$. Thus, the proximity inherits the antisymmetric nature of the underlying order: it is symmetric only when both elements are the same.
We can see that if $a,b \in \mathbb{Z}$ then either $a \leq b$ or $b \leq a$, which is the totality (connex) property. This results in the linear structure of Fig.~\ref{subfig:integers} and is the reason why the total order is also called a linear order. The connex property dictates that any two elements $a,b \in \mathbb{Z}$ are related in one of two ways. Either there is a set of elements $\{a,x_1,\cdots,x_n,b\}$ in $\mathbb{Z}$ such that $\tonear{\leq}(a,x_1)=0 \wedge\tonear{\leq}(x_1,x_2)=0\wedge \cdots \wedge \tonear{\leq}(x_n,b)=0$, or there exists a set $\{b,y_1,\cdots,y_m,a\}$ such that $\tonear{\leq}(b,y_1)=0\wedge \tonear{\leq}(y_1,y_2)=0\wedge \cdots \wedge \tonear{\leq}(y_m,a)=0$. Moreover, by Thm.~\ref{thm:proxhasseto} the proximity graph $\mathop{\bar{\mathcal{P}}}\limits_\leq(\mathbb{Z})$ is the same as the Hasse diagram shown in Fig.~\ref{subfig:integers}. \qquad \eot \end{example} We move on to the study of the proximity induced by the cyclic order defined in def.~\ref{def:corder}. \begin{definition}\label{def:cprox}
Let there be a relator space $(X,\{\gamma\})$, where $\gamma$ is a cyclic order which yields triples $[a,b,c]$ implying that moving from $a$ to $c$ one passes through $b$. For $a,b \in X$ we define
\begin{align*}
(\exists c \in X\; s.t.\;[a,b,c]) \wedge (\not\exists x \in X \setminus \{a,b\}\; s.t.\;[a,x,b]) \Leftrightarrow\; \cnear{\gamma}(a,b)=0 \\
(\exists c \in X\; s.t.\;[a,b,c]) \wedge (\exists x \in X \setminus \{a,b\}\; s.t.\;[a,x,b]) \Leftrightarrow\; \cnear{\gamma}(a,b)=1
\end{align*}
where $\cnear{\gamma}$ is the proximity induced by the cyclic order $\gamma$. \end{definition} Now, we establish the properties of proximity induced by the cyclic order $\cnear{\gamma}$. \begin{theorem}\label{thm:cnearprop}
Let $(X,\{\gamma\})$ be a relator space where $X$ is a set and $\gamma$ is a cyclic order. Let $a,b,c \in X$ be three elements in $X$. Then, the proximity induced on $X$ by the cyclic order $\gamma$, represented by $\cnear{\gamma}$, satisfies the following conditions:
\begin{compactenum}[1$^o$]
\item \textbf{Irreflexivity:} $\cnear{\gamma}(a,a)=1$
\item \textbf{Antisymmetry:} if $\cnear{\gamma}(a,b)=0$ then $\cnear{\gamma}(b,a)=1$
\item \textbf{Antitransitivity:} if $\cnear{\gamma}(a,b)=0$ and $\cnear{\gamma}(b,c)=0$ then $\cnear{\gamma}(a,c)=1$
\item \textbf{Totality:} either $\big( \exists A=\{a,d_1,\cdots,d_n,b\} \subseteq X \; s.t. \; \mathop{\sum}\limits_{i=1}^{n+1}\cnear{\gamma}(A_i,A_{i+1})=0 \big)$ or $\big( \exists B=\{b,e_1,\cdots,e_m,a\} \subseteq X \; s.t. \; \mathop{\sum}\limits_{i=1}^{m+1}\cnear{\gamma}(B_i,B_{i+1})=0 \big)$
\item \textbf{Cyclicity:} if $ \big( \exists A=\{a,d_1,\cdots,d_m,b,e_1,\cdots,e_n,c\} \; s.t. \; \mathop{\sum}\limits_{i=1}^{n+m+2}\cnear{\gamma}(A_i,A_{i+1})=0 \big)$ then $\big( \exists B=\{b,f_1,\cdots,f_o,c,g_1,\cdots,g_p,a\} \; s.t. \; \mathop{\sum}\limits_{i=1}^{o+p+2}\cnear{\gamma}(B_i,B_{i+1})=0 \big)$
\end{compactenum} where $A_i$ is the $i-$th element of set $A$ and $m,n,o,p \in \mathbb{Z}^+$. \end{theorem} \begin{proof}
\begin{compactenum}[1$^o$]
\item Cyclic order requires three elements to define it. If there exists $[a,b,c]$ meaning that the elements are ordered in such a way that moving from $a$ to $c$ we must pass from $b$, then from the cyclicity property of the cyclic order(def.~\ref{def:corder}) we have $[b,c,a]$. Now using the transitivity property of the cyclic order(def.~\ref{def:corder}) if $[a,b,c]$ and $[b,c,a]$ then we can write $[a,c,a]$. Hence, from def.~\ref{def:cprox} we can coclude that $\cnear{\gamma}(a,a)=1$ as there is $c \in X$ such that $[a,c,a]$.
\item It must be noted that symmetry in a function corresponds to switching the locations of input variables.From def.~\ref{def:cprox} we can conclude that $\cnear{\gamma}(a,b)=0$ means that $[a,b,c]$ and $\cnear{\gamma}(b,a)=0$ means that $[b,a,c]$. Using the cyclicity property of the cyclic order(def.~\ref{def:corder}) we can see that $[b,a,c] \Rightarrow [a,c,b] \Rightarrow [c,b,a]$. Now using the antisymmetry property of the cyclic order(def.~\ref{def:corder}) if $[a,b,c]$ then not $[c,b,a]$.
\item From def.~\ref{def:cprox} we can conclude that $\cnear{\gamma}(a,b)=0$ means that there exists a path from $a$ passing through $b$ and moving onwards. Moreover, there exists no such $x\in X$, for which $[a,x,b]$. Using the same reasoning we can conclude from $\cnear{\gamma}(b,c)=0$ that there exists no such $x \in X$ for which $[b,x,c]$. Combining the two using transitivity of the cyclic order(def.~\ref{def:corder}) we can write $[a,b,c]$. Now, using the def.~\ref{def:cprox} we can see that $\cnear{\gamma}(a,c)=1$.
\item From the totality property of the cyclic order (def.~\ref{def:corder}) it can be seen that for three distinct points $a,x,b \in X$ either $[a,x,b]$ or $[b,x,a]$. Using the transitive property of the cyclic order (def.~\ref{def:corder}) we can conclude that there exists a set of elements $\{d_1,\cdots,d_n\} \subseteq X$ such that $[a,d_1,d_2],[d_1,d_2,d_3],\cdots,[d_{n-2},d_{n-1},d_n],[d_{n-1},d_n,b]$. Moreover there exists no arbitrary $x\in X$ such that $[a,x,d_1],[d_1,x,d_2],\cdots,[d_{n-1},\linebreak x,d_n],[d_n,x,b]$. This means that there exists a sequence of adjacent elements traversing which we can go from $a$ to $b$. Using def.~\ref{def:cprox} we can rewrite this as $\cnear{\gamma}(a,d_1)=\cnear{\gamma}(d_1,d_2)=\cdots=\cnear{\gamma}(d_{n-1},d_n)=\cnear{\gamma}(d_n,b)=0$. This can be rewritten as $\exists A=\{a,d_1,\cdots,d_n,b\} \subseteq X$ such that $\mathop{\sum}\limits_{i=1}^{n+1}\cnear{\gamma}(A_i,A_{i+1})=0$. Using a similar line of argument we can conclude that $[b,x,a]$ can be rewritten as $\exists B=\{b,e_1,\cdots,e_m,a\} \subseteq X$ such that $\mathop{\sum}\limits_{i=1}^{m+1}\cnear{\gamma}(B_i,B_{i+1})=0$.
\item We know that the ternary relation $\gamma$ yields $[a,b,c]$ which means that elements are ordered in such a way that one has to go through $b$ when moving from $a$ to $c$, from def.~\ref{def:corder}. We know that $\exists A=\{a,d_1,\cdots,d_m,b,e_1,\cdots,e_n,c\} \; s.t. \; \mathop{\sum}\limits_{i=1}^{n+m+2}\cnear{\gamma}(A_i,A_{i+1}) \linebreak =0$ means that $\cnear{\gamma}(a,d_1)=\cnear{\gamma}(d_1,d_2)=\cdots=\cnear{\gamma}(d_{m-1},d_m)=\cnear{\gamma}(d_m,b)=\cnear{\gamma}(b,e_1)=\cnear{\gamma}(e_{n-1},e_n)=\cnear{\gamma}(e_n,c)=0$. Let us consider two adjacent terms in this chain e.g. $\cnear{\gamma}(a,d_1)=\cnear{\gamma}(d_1,d_2)=0$. From def.~\ref{def:cprox} $\cnear{\gamma}(a,d_1)=0$ means that $a$ and $d_1$ are adjacent in some path i.e. $[a,d_1,y]$ where $y \in X$ and there exists no $x \in X$ such that $[a,x,d_1]$ . Combining this with $\cnear{\gamma}(d_1,d_2)=0$ which yields that $[d_1,d_2,\tilde{y}]$ for some $\tilde{y} \in X$ and there exists no $\tilde{x} \in X$ such that $[d_1,\tilde{x},d_2]$, we get $[a,d_1,d_2]$. If we go on doing this we can get $[a,b,c]$. Now using the cyclicity property of the cyclic order(def.~\ref{def:corder}) if $[a,b,c]$ then $[b,c,a]$. Using the transativity property of the cyclic order(def.~\ref{def:corder}) we can decompose $[b,c,a]$ into $[b,c,x]$ and $[c,x,a]$ where $x \in X$. Using subsequent decompositions such that we have paths in which all the elements are adjacents(i.e. they cannot be further decomposed in this way) we get $[b,f_1,f_2],[f_1,f_2,f_3],\cdots,[f_{o-1},f_{o},c],[f_{o},c,g_1],[c,g_1,g_2],\cdots,[g_{p-1},g_p,a]$ where $\{f_i\} \linebreak ,\{g_j\} \in X$ and $p,q \in \mathbb{Z}^+$. Using def.~\ref{def:cprox} $[b,f_1,f_2]$ yields $\cnear{\gamma}(b,f_1)=0$. In a similar fashion we can write $\cnear{\gamma}(b,f_1)=\cnear{\gamma}(f_1,f_2)=\cdots=\cnear{\gamma}(f_{o-1},f_o)=\cnear{\gamma}(f_o,c)=\cnear{\gamma}(c,g_1)=\cdots=\cnear{\gamma}(g_{p-1},g_p)=\cnear{\gamma}(g_p,a)=0$. This can inturn be simplified as $\exists B=\{b,f_1,\cdots,f_o,c,g_1, \cdots,g_p,a\} \subseteq X \; s.t. \; \mathop{\sum}\limits_{i=1}^{o+p+2}\cnear{\gamma}(B_i,B_{i+1})=0$
\end{compactenum} \end{proof} Given a relator space $(X,\{\gamma\})$, where $X$ is a set and $\gamma$ is a cyclic order, the graph $\mathop{\mathring{\mathcal{O}}}\limits_\gamma(X)$ has the vertex set $X$ with a sequence of edges $a \rightarrow b \rightarrow c$ if $[a,b,c]$. The graph $\mathop{\mathring{\mathcal{P}}}\limits_\gamma(X)$ also has the vertex set $X$ but there exists an edge $a \rightarrow b$ if $\cnear{\gamma}(a,b)=0$. \begin{lemma}\label{lm:creduction} Let $(X,\{\gamma \})$ be a relator space where $X$ is a set and $\gamma$ is a cyclic order. Then, if there exists a path between two vertices in the graph $\mathop{\mathring{\mathcal{O}}}\limits_\gamma(X)$, then there also exists a path between these vertices in $\mathop{\mathring{\mathcal{P}}}\limits_\gamma(X)$. \end{lemma} \begin{proof}
It must be noted that $\gamma$ is a ternary relation and yields $[a,b,c]$ stating that the elements are ordered in such a way that when moving from $a$ to $c$ we must pass through $b$. Hence, a path $a \rightarrow b \rightarrow c$ in $\mathop{\mathring{\mathcal{O}}}\limits_\gamma(X)$ means $[a,b,c]$. Using the transitivity of the cyclic order (def.~\ref{def:corder}) we can see that there can be two cases. Either there exist no $x,y \in X$ such that $[a,x,b]$ and $[b,y,c]$, or there exist $\{d_i\},\{e_j\} \subseteq X$ such that $[a,d_1,d_2],[d_1,d_2,d_3],\cdots,[d_{n-1},d_n,b],[d_n,b,e_1],\cdots,[e_{m-1},e_m,c]$ and there exists no $x \in X$ which can be inserted in this sequence at any place. For the first case, using def.~\ref{def:cprox} we can write $\cnear{\gamma}(a,b)=\cnear{\gamma}(b,c)=0$, hence there exists a sequence of edges $a \rightarrow b \rightarrow c$ in the graph $\mathop{\mathring{\mathcal{P}}}\limits_\gamma(X)$ and hence a path between $a$ and $c$. For the second case, using def.~\ref{def:cprox} we can write $\cnear{\gamma}(a,d_1)=\cnear{\gamma}(d_1,d_2)=\cdots=\cnear{\gamma}(d_n,b)=\cnear{\gamma}(b,e_1)=\cnear{\gamma}(e_1,e_2)=\cdots=\cnear{\gamma}(e_m,c)=0$. Thus we have a sequence of edges $a \rightarrow d_1 \rightarrow \cdots \rightarrow d_n \rightarrow b \rightarrow e_1 \rightarrow \cdots \rightarrow e_m \rightarrow c$ in $\mathop{\mathring{\mathcal{P}}}\limits_\gamma(X)$ and hence a path between $a$ and $c$.
\end{proof} \begin{theorem}\label{thm:proxhassec} Let $(X,\{\gamma \})$ be a relator space where $X$ is a set, $\gamma$ is a cyclic order and $\cnear{\gamma}$ is the induced proximity as defined by def.~\ref{def:cprox}. Then, $\mathop{\mathring{\mathcal{P}}}\limits_\gamma(X)$ is equivalent to $\mathop{\mathcal{H}s}\limits_\gamma(X)$. \end{theorem} \begin{proof}
From lemma~\ref{lm:creduction} it can be seen that $\mathop{\mathring{\mathcal{P}}}\limits_\gamma(X)$ has a path between every pair of vertices that are connected by a path in $\mathop{\mathring{\mathcal{O}}}\limits_\gamma(X)$. Thus it is a transitive reduction of $\mathop{\mathring{\mathcal{O}}}\limits_\gamma(X)$. We know that by definition $\mathop{\mathcal{H}s}\limits_\gamma(X)$ is also a transitive reduction of $\mathop{\mathring{\mathcal{O}}}\limits_\gamma(X)$. \end{proof} \begin{example}\label{ex:example7}
Consider $X=\{v_1,v_2,v_3,v_4,v_5\}$ with the relation $\gamma$ as in example \ref{ex:example3}. We know that $\gamma$ yields triples $[a,b,c]$ which mean that when moving from $a$ to $c$ one has to pass through $b$. For more on the relation $\gamma$ consult example \ref{ex:example3}. Let us consider the relator space $(X,\{\gamma\})$ such that $\gamma$ is a cyclic order as per def.~\ref{def:corder}. The relator space can be visualized as the Hasse diagram shown in Fig.~\ref{subfig:cycle}.
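Before examining the induced proximity in detail, the adjacency relation can be computed directly from def.~\ref{def:cprox}. The following minimal sketch (in Python, with an assumed list encoding of the forward direction of the cycle; it is not part of the original example) recovers exactly the five directed edges of the Hasse diagram in Fig.~\ref{subfig:cycle}.
\begin{verbatim}
# Assumed encoding: the cyclic order is stored as the forward sequence
# of the cycle v1 -> v2 -> v3 -> v4 -> v5 -> v1.
cycle = ['v1', 'v2', 'v3', 'v4', 'v5']
n = len(cycle)
pos = {v: i for i, v in enumerate(cycle)}

def ordered(a, b, c):
    # [a,b,c]: moving forward from a, b is met strictly before c
    db, dc = (pos[b] - pos[a]) % n, (pos[c] - pos[a]) % n
    return 0 < db < dc

def cnear(a, b):
    # def:cprox -- near iff no x lies strictly between a and b going forward
    if a == b:
        return 1
    blocked = any(ordered(a, x, b) for x in cycle if x not in (a, b))
    return 1 if blocked else 0

print([(a, b) for a in cycle for b in cycle if cnear(a, b) == 0])
# [('v1','v2'), ('v2','v3'), ('v3','v4'), ('v4','v5'), ('v5','v1')]
\end{verbatim}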
Let us look at how the cyclic order $\gamma$ induces the proximity $\cnear{\gamma}$. It can be seen that for adjacent elements such as $v_1$ and $v_2$ there exists no $x \in X$ such that $[v_1,x,v_2]$, hence we have $\cnear{\gamma}(v_1,v_2)=0$. Here, we must note that the totality property of def.~\ref{def:corder} dictates that any three distinct elements of $X$ are part of some path. Hidden in the above is the assumption that there exists a $c$ such that $[v_1,v_2,c]$, hence the existence of a path between $v_1$ and $v_2$ is guaranteed. Moreover, from Thm.~\ref{thm:proxhassec} the proximity graph $\mathop{\mathring{\mathcal{P}}}\limits_\gamma(X)$ is the same as the Hasse diagram shown in Fig.~\ref{subfig:cycle}. \qquad \eot \end{example} \section{Applications}\label{sec:apps} This section considers possible applications of order induced proximities. \subsection{Maximal centroidal vortices on images}\label{subsec:mcv} For this purpose we consider a triangulated space. A set of triangles with a nonempty intersection is a \emph{nerve}. The nerve with the maximal number of triangles is termed a \emph{maximal nuclear cluster} (MNC), and the common intersection of its triangles is the \emph{nucleus}. Each of the triangles in the MNC is termed a \emph{$1$-spoke} ($sk_1$). To keep it simple we assume that there is only one MNC in the triangulation.
Generalizing these structures we have $k$-spokes ($sk_k$), that are triangles sharing an intersection with $sk_{k-1}$ but not with $sk_{k-2}$. This is a recursive definition for which the base case is $sk_0$, the nucleus. The collection of all the $sk_k$ is the \emph{$k$-spoke complex} ($skcx_k$). We can see that the MNC is $skcx_1$, and hence spoke complexes generalize the notion of MNC. Associated with the $skcx_k$ is the $k$-maximal cycle ($mcyc_k$), that is the cycle formed by the centroids of all the $sk_k$. A collection $\{mcyc_k\}_{k\in \mathbb{Z}^+}$ is termed a \emph{vortex}. For further detail we refer the reader to \cite{ahmad2018maximal}.
The notion of order gives us a systematic way of constructing such cycles. Let $\phi:2^K \rightarrow \mathbb{R}$ be a real valued probe function that attaches a description to the subsets of $K$. Thus, for $skcx_k$ we have a set $X_k=\{\phi(\triangle_i)\; s.t.\; \triangle_i \in skcx_k\}$. We can use $\leq$ to establish a total order (def.~\ref{def:torder}) on $X_k$ such that $\phi(\triangle_1) \leq \cdots \leq \phi(\triangle_{n-1}) \leq \phi(\triangle_n)$. Let us define a ternary relation $\gamma_\leq$ induced by $\leq$ as $(a \leq b \leq c) \lor (b \leq c \leq a) \lor(c \leq a \leq b) \Leftrightarrow [a,b,c]$. \begin{lemma}\label{lm:app}
Let $(X,\leq)$ be a relator space where $X$ is a set and $\leq$ is a total order. Define a ternary relation $\gamma_\leq$ such that for $a,b,c \in X$, $(a \leq b \leq c) \lor (b \leq c \leq a) \lor(c \leq a \leq b) \Leftrightarrow [a,b,c]$. Then, $\gamma_\leq$ satisfies the following properties:
\begin{compactenum}[1$^o$]
\item \textbf{Cyclicity:} if $[a,b,c]$, then $[b,c,a]$
\item \textbf{Antisymmetry:} if $[a,b,c]$ then not $[c,b,a]$
\item \textbf{Transitivity:} if $[a,b,c]$ and $[a,c,d]$, then $[a,b,d]$
\item \textbf{Totality:} if $a,b,c$ are distinct then either,$[a,b,c]$ or $[c,b,a]$
\end{compactenum} \end{lemma} \begin{proof}
\begin{compactenum}[1$^o$]
\item From the definition of $\gamma_\leq$ we can see that $[a,b,c]$ is equivalent to $(a \leq b \leq c) \lor (b \leq c \leq a) \lor(c \leq a \leq b)$. It can also be seen by substitution that $[b,c,a]$ is equivalent to $(b \leq c \leq a) \lor (c \leq a \leq b) \lor (a \leq b \leq c)$. Hence $[a,b,c]$ and $[b,c,a]$ are equivalent.
\item We can see from the definition of $\gamma_\leq$ that $[a,b,c]$ stands for $(a \leq b \leq c) \lor (b \leq c \leq a) \lor(c \leq a \leq b)$. Moreover, not $[c,b,a]$ stands for $\neg \big((c \leq b \leq a) \lor (b \leq a \leq c) \lor (a \leq c \leq b)\big)$, which can be written as $\neg(c \leq b \leq a) \wedge \neg(b \leq a \leq c) \wedge \neg(a \leq c \leq b)$. For distinct $a,b,c$, each condition of $[a,b,c]$ contradicts each condition of $[c,b,a]$ by the antisymmetry of $\leq$, so whenever $[a,b,c]$ holds, $\neg(c \leq b \leq a) \wedge \neg(b \leq a \leq c) \wedge \neg(a \leq c \leq b)$ holds as well.
\item From the definition of $\gamma_\leq$ we can write $[a,b,c]$ as $(a \leq b \leq c) \lor (b \leq c \leq a) \lor(c \leq a \leq b)$ and $[a,c,d]$ as $(a \leq c \leq d) \lor (c \leq d \leq a) \lor(d \leq a \leq c)$. Out of the $9$ possible combinations only $4$ can occur. Combinations such as $b \leq c \leq a$ and $a \leq c \leq d$ cannot occur, as one forces $c \leq a$ and the other forces $a \leq c$, so that $a=c$, contradicting the distinctness of the elements. The answers for the possible combinations are:
\begin{align*}
(a \leq b \leq c)\, and\, (a \leq c \leq d) \Rightarrow (a \leq b \leq c \leq d) \Rightarrow (a \leq b \leq d)\\
(a \leq b \leq c)\, and\, (d \leq a \leq c) \Rightarrow (d \leq a \leq b \leq c) \Rightarrow (d \leq a \leq b)\\
(b \leq c \leq a)\, and\, (c \leq d \leq a) \Rightarrow (b \leq c \leq d \leq a) \Rightarrow (b \leq d \leq a)\\
(c \leq a \leq b)\, and\, (c \leq d \leq a) \Rightarrow (c \leq d \leq a \leq b) \Rightarrow (d \leq a \leq b)
\end{align*}
We can confirm from the definition of $\gamma_\leq$ that $(a \leq b \leq d) \lor (b \leq d \leq a) \lor (d \leq a \leq b)$ is equivalent to $[a,b,d]$.
\item We know that the totality of the total order (def.~\ref{def:torder}) implies that for any two elements either $a \leq b$ or $b \leq a$. When we extend this to three elements we can compute that there are $6$ different possibilities $(a \leq b \leq c),(b \leq c \leq a),(c \leq a \leq b),(c \leq b \leq a),(b \leq a \leq c),(a \leq c \leq b)$. For visualization we can consider these as the permutations of $3$ numbers. One of these six possibilities has to hold, as dictated by the totality of the underlying total order $\leq$. Furthermore, we know from the definition of $\gamma_\leq$ that $(a \leq b \leq c) \lor (b \leq c \leq a) \lor(c \leq a \leq b)$ is equivalent to $[a,b,c]$ and $(c \leq b \leq a) \lor (b \leq a \leq c) \lor (a \leq c \leq b)$ is equivalent to $[c,b,a]$. We can see that $[a,b,c]$ and $[c,b,a]$ are mutually exclusive and together cover all six possibilities. Hence, one of them has to occur.
\end{compactenum} \end{proof} \begin{theorem}
Let $(X,\leq)$ be a relator space where $X$ is a set and $\leq$ is a total order. Define a ternary relation $\gamma_\leq$ such that for $a,b,c \in X$, $(a \leq b \leq c) \lor (b \leq c \leq a) \lor(c \leq a \leq b) \Leftrightarrow [a,b,c]$. Then, $\gamma_\leq$ is a cyclic order. \end{theorem} \begin{proof}
The proof follows directly from lemma \ref{lm:app} and the definition of cyclic order given in def.~\ref{def:corder}. \end{proof} \begin{figure}\label{subfig:orientorder}
\label{subfig:mcyc1}
\end{figure} Now that we have shown that $\gamma_\leq$ is a cyclic order, it induces a proximity $\cnear{\gamma_\leq}$, as defined by def.~\ref{def:cprox}, on the relator space $(X_k,\{\gamma_\leq\})$, where $X_k$ is the set of real valued descriptions of all triangles in $skcx_k$. It can be seen that for $X_k=\{\phi(\triangle_i) \, s.t. \, \triangle_i \in skcx_k\}$ ordered as $\phi(\triangle_1) \leq \phi(\triangle_2)\leq\cdots \leq \phi(\triangle_{n-1}) \leq \phi(\triangle_n)$, the proximity induced by the ternary relation $\gamma_\leq$ (lemma \ref{lm:app}) would yield $\cnear{\gamma_\leq}(\phi(\triangle_1),\phi(\triangle_2))=\cnear{\gamma_\leq}(\phi(\triangle_2),\phi(\triangle_3))=\cdots=\cnear{\gamma_\leq}(\phi(\triangle_{n-1}),\phi(\triangle_{n}))=\cnear{\gamma_\leq}(\phi(\triangle_n),\phi(\triangle_1))=0$. Now, if we were to draw a path $cnt(\triangle_1)\rightarrow cnt(\triangle_2)\rightarrow \cdots \rightarrow cnt(\triangle_{n-1})\rightarrow cnt(\triangle_n) \rightarrow cnt(\triangle_1)$, where $cnt(\triangle_i)$ is the centroid, it would be a $k$-maximal cycle or $mcyc_k$. It is similar to drawing the proximal graph $\mathop{\mathring{\mathcal{P}}}\limits_{\gamma_\leq}(X_k)$ for $(X_k,\gamma_\leq)$ but using $cnt(\triangle_i)$ instead of $\phi(\triangle_{i})$. Let us look at an example to clarify this procedure.
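Before the worked example, the axioms established in Lemma~\ref{lm:app} can also be verified exhaustively on a small set. The following minimal sketch (in Python, over an arbitrary five-element chain; it is not part of the construction above) checks cyclicity, antisymmetry, totality and transitivity of $\gamma_\leq$ by brute force.
\begin{verbatim}
from itertools import product

X = [1, 2, 3, 4, 5]   # any totally ordered set of distinct values

def cyc(a, b, c):
    # gamma_<= of Lemma lm:app
    return (a <= b <= c) or (b <= c <= a) or (c <= a <= b)

triples = [t for t in product(X, repeat=3) if len(set(t)) == 3]
cyclicity    = all(cyc(b, c, a) for a, b, c in triples if cyc(a, b, c))
antisymmetry = all(not cyc(c, b, a) for a, b, c in triples if cyc(a, b, c))
totality     = all(cyc(a, b, c) or cyc(c, b, a) for a, b, c in triples)
transitivity = all(cyc(a, b, d)
                   for a, b, c, d in product(X, repeat=4)
                   if len({a, b, c, d}) == 4 and cyc(a, b, c) and cyc(a, c, d))
print(cyclicity, antisymmetry, totality, transitivity)  # True True True True
\end{verbatim}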
\begin{example}
Consider the triangulation(shown in Figs.~\ref{subfig:orientorder},\ref{subfig:mcyc1}) that is a subset of the $\mathbb{R}^2$. We can see that all the triangles are a part of the MNC or the $skcx_1$. The nucleus is shown as black diamond and the centroids of the triangles are shown as red crosses. It is evident that each of the triangles in $skcx_1$ shares a nonempty intersection with the nucleus and is hence proximal to it and each other as per Lodato proximity. This is an important point, that under the Lodato proximity w.r.t. the nucleus each of the $sk_1$s is equivalent. How to connect them in a cycle so that we can obtain the $mcyc_1$?
We can use the fact that the triangulation is embedded in $\mathbb{R}^2$ to our advantage. We can calculate the orientation of each of the centroids $v_1,v_2, \cdots,v_6$ from the $x$ axis. Suppose the coordinates of the centroids are $v_i=(x_i,y_i)$; then the orientation is $\theta_i=\arctan(\frac{y_i}{x_i})$. We can arrange the centroids in order of increasing orientation angle. For Fig.~\ref{subfig:orientorder} we get $\{\theta_1,\theta_2,\theta_3,\theta_4,\theta_5,\theta_6\}$. We can see that in this case $\phi:2^X \rightarrow \mathbb{R}$ is the function that calculates the orientation of the centroid of a triangle. Now, defining a ternary relation $\gamma_\leq$ as in lemma~\ref{lm:app} and then inducing the proximity from the cyclic order, we can write $\cnear{\gamma_\leq}(\triangle_1,\triangle_2)=\cnear{\gamma_\leq}(\triangle_2,\triangle_3)=\cnear{\gamma_\leq}(\triangle_3,\triangle_4)=\cnear{\gamma_\leq}(\triangle_4,\triangle_5)=\cnear{\gamma_\leq}(\triangle_5,\triangle_6)=\cnear{\gamma_\leq}(\triangle_6,\triangle_1)=0$. Substituting $cnt(\triangle_{i})$ for $\triangle_{i}$, this yields a proximity graph that is a cycle $v_1 \rightarrow v_2 \rightarrow v_3 \rightarrow v_4 \rightarrow v_5 \rightarrow v_6 \rightarrow v_1$. This graph is displayed in Fig.~\ref{subfig:mcyc1}.
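A minimal sketch of this ordering step is given below (in Python, with illustrative centroid coordinates taken relative to the nucleus rather than the actual values of Fig.~\ref{subfig:orientorder}; \texttt{atan2} is used so that the full angular range is covered).
\begin{verbatim}
import math

# Illustrative centroids of the six 1-spokes, relative to the nucleus.
centroids = {'v1': (1.0, 0.2), 'v2': (0.5, 0.9), 'v3': (-0.6, 0.8),
             'v4': (-1.0, -0.1), 'v5': (-0.4, -0.9), 'v6': (0.6, -0.8)}

def orientation(p):
    # angle of the centroid measured from the x axis, in [0, 2*pi)
    x, y = p
    return math.atan2(y, x) % (2 * math.pi)

# Total order by orientation; closing the list into a cycle gives mcyc_1.
ordered_centroids = sorted(centroids, key=lambda v: orientation(centroids[v]))
mcyc1 = ordered_centroids + [ordered_centroids[0]]
print(' -> '.join(mcyc1))  # v1 -> v2 -> v3 -> v4 -> v5 -> v6 -> v1
\end{verbatim}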
We must note that by choosing a different $\phi$ we can have the cycle ordered in a different way. Another possible choice could be to arrange the triangles in increasing order of area. \qquad \eot \end{example}
\subsection{Order induced proximities on video frames}\label{subsec:frameord} We consider approaches to establish an order on video frames that will lead to an induced proximity relation. The first approach considers an order based on the areas of the maximal nuclear clusters (MNC) in the triangulated frames. As previously stated, a nerve is a collection of sets (triangles) with a nonempty intersection, and a nerve with the largest number of sets (triangles) is the MNC.
\begin{algorithm}[!ht]
\caption{Order Induced Proximity on MNCs in Video Frames based on area}
\label{alg:mncarea_ord}
\SetKwData{Left}{left}
\SetKwData{This}{this}
\SetKwData{Up}{up}
\SetKwFunction{Union}{Union}
\SetKwFunction{FindCompress}{FindCompress}
\SetKwInOut{Input}{Input}
\SetKwInOut{Output}{Output}
\SetKwComment{tcc}{/*}{*/}
\Input{digital video $\mathcal{V}$,Number of keypoints $n$}
\Output{A set $X=\{\{f_i,mnc_j\}:f_i \in \mathcal{V},mnc_j \subseteq f_i\,and\,i,j \in \mathbb{Z}^+\}$ arranged in increasing order of MNC area}
\emph{$\mathcal{X} \mathrel{\mathop :}= \{\},X \mathrel{\mathop :}= \{\}$, declare empty array}\;
\ForEach{$f_i \in \mathcal{V}$}{
\emph{$f_i \longmapsto S=\{s_1,\cdots,s_n\}$, where $S$ is the set of keypoints}\;
\emph{$S \longmapsto \mathcal{T}(S)$, where $\mathcal{T}(S)$ is delaunay triangulation on the keypoints $S$}\;
\emph{$\mathcal{T}(S) \longmapsto \mathcal{M}$, where $\mathcal{M}=\{mnc_j:j \in \mathbb{Z}^+\}$ is the set of all the MNCs in $\mathcal{T}(S)$}\;
\ForEach{$mnc_j \in \mathcal{M}$}{
\emph{$mnc_j \longmapsto a_{ij}$, where $a_{ij}$ is the area of $mnc_j \subset f_i$}\;
\emph{$\mathcal{X} \mathrel{\mathop :}= \{\mathcal{X},\{f_i,mnc_j,a_{ij}\} \}$, appending at the end}\;
}
}
\emph{$\mathcal{X} \rightarrow \mathcal{X}_{sort}$, where $\mathcal{X}_{sort}$ contains all the $3$-tuples in $\mathcal{X}$ arranged in ascending order of $a_{ij}$}\;
\ForEach{$\{f_i,mnc_j,a_{ij}\} \in \mathcal{X}_{sort}$}{
\emph{$\{f_i,mnc_j,a_{ij}\} \longmapsto \{f_i,mnc_j\}$}\;
\emph{$X \mathrel{\mathop :}= \{X,\{f_i,mnc_j\} \}$, appending at the end}\;
} \end{algorithm}
Using these notions let us explain the approach stated in algorithm~\ref{alg:mncarea_ord}. Let $\mathcal{V}$ be the video, that is, a collection of frames $\{f_1,\cdots,f_n\}$. For each frame $f_i$ we select keypoints $S=\{s_1, \cdots, s_n\}$ to serve as seeds for the Delaunay triangulation $\mathcal{T}(S)$. Once we have the triangulation, we proceed to determining the MNCs, represented as $\mathcal{M}=\{mnc_1,\cdots,mnc_j\}$. The area of $mnc_j \subset f_i$ is represented as $a_{ij}$. It must be noted that $a_{ij}$ is the sum of the areas of all $\triangle_k \in mnc_j \subset f_i$. We have $3$-tuples $\{f_i,mnc_j,a_{ij}\}$ for each frame-MNC pair. These $3$-tuples form the set $\mathcal{X}$, which is then sorted in ascending order of the area $a_{ij}$, yielding the set $\mathcal{X}_{sort}$. By selecting the first two elements of each $3$-tuple $\{f_i,mnc_j,a_{ij}\} \in \mathcal{X}_{sort}$ we get the corresponding $\{f_i,mnc_j\} \in X$.
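A minimal sketch of this sorting stage is given below (in Python). The callables \texttt{keypoints}, \texttt{delaunay} and \texttt{find\_mncs} are hypothetical stand-ins for the keypoint selection, triangulation and MNC detection steps described in the text; only the area computation and the ordering are spelled out.
\begin{verbatim}
def mnc_area(mnc):
    # area of an MNC = sum of the areas of its triangles (shoelace formula)
    def tri_area(t):
        (x1, y1), (x2, y2), (x3, y3) = t
        return abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)) / 2.0
    return sum(tri_area(t) for t in mnc)

def order_frame_mnc_pairs(frames, keypoints, delaunay, find_mncs, n):
    # keypoints, delaunay, find_mncs are hypothetical helpers (see above)
    triples = []
    for i, frame in enumerate(frames):
        tri = delaunay(keypoints(frame, n))
        for j, mnc in enumerate(find_mncs(tri)):
            triples.append((i, j, mnc_area(mnc)))
    triples.sort(key=lambda t: t[2])      # ascending MNC area
    return [(i, j) for i, j, _ in triples]
\end{verbatim}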
For ease of understanding consider that $X=\{x_1,\cdots,x_m\}$, where each $x_i$ is the $2$-tuple that represents a particular frame-MNC pair. Let $a(x_i)$ be the area of the MNC represented by $x_i$. It can be seen that $a(x_i)\leq a(x_j)$ for all $j \geq i$ due to the sorting performed in algorithm~\ref{alg:mncarea_ord}. We present some important results regarding this set $X$ sorted with respect to the MNC area.
Let $X=\{x_1,\cdots,x_m\}$, where each $x_i$ is a pair $\{f,mnc\}$ consisting of a video frame $f$ and an MNC $mnc \subset f$. Define a function $a:2^{\mathbb{R}^2} \rightarrow \mathbb{R}$, such that $a(x_i)$ is the area of the MNC in the frame-MNC pair $x_i$. Then, the inequality relation $\leq$ over the set $A=\{a(x_i): x_i \in X \}$ satisfies the following conditions:
\begin{compactenum}[1$^o$]
\item \textbf{Reflexivity:} $a(x_i) \leq a(x_i)$
\item \textbf{Antisymmetry:} if $a(x_i) \leq a(x_j)$ and $a(x_j) \leq a(x_i)$, then $a(x_i)=a(x_j)$
\item \textbf{Transitivity:} if $a(x_i) \leq a(x_j)$ and $a(x_j) \leq a(x_k)$, then $a(x_i) \leq a(x_k)$
\item \textbf{Connex property or Totality:} either $a(x_i) \leq a(x_j)$ or $a(x_j) \leq a(x_i)$
\end{compactenum} \end{lemma} \begin{proof}
\begin{compactenum}[1$^o$]
\item It follows directly from the fact that area of each MNC is equal to itself.
\item The area function $a$ outputs real numbers, and for two real numbers $r,s$ we know that $(r \leq s) \wedge (s \leq r) \Rightarrow (r=s)$.
\item The area function output is a real number, and for three real numbers $r,s,t$ it is known that $(r \leq s) \wedge (s \leq t) \Rightarrow (r \leq t)$.
\item The area of an MNC is a real number, and for any two real numbers $r,s$ it is known that either $r \leq s$ or $s \leq r$.
\end{compactenum} \end{proof} From this lemma we can arrive at the following result. \begin{theorem}\label{thm:mncarea_ord}
Let $X=\{x_1,\cdots,x_m\}$ be such that each $x_k=\{f_i,mnc_j\}$, where $f_i$ is a video frame and $mnc_j \subset f_i$ is an MNC in this frame. Define a function $a:2^{\mathbb{R}^2} \rightarrow \mathbb{R}$ such that $a(x_i)$ is the area of the MNC in the frame-MNC pair $x_i$. Then, the inequality relation $\leq$ over the set $A=\{a(x_i): x_i \in X \}$ is a total order. \end{theorem} \begin{proof}
This follows directly from lemma \ref{lm:mncarea_ord} and definition \ref{def:torder}. \end{proof} Since the area is a real-valued function, the areas of the elements $x_i \in X$ form a total order analogous to $(\mathbb{Z},\leq)$ considered in example~\ref{ex:example1}. Thus, we can also conclude that the Hasse diagram for the areas $a(x_i)$ is similar to the one shown in Fig.~\ref{subfig:integers}. Moreover, this order induces a proximity on the set $A=\{a(x_i):x_i \in X \}$ as given by def.~\ref{def:toprox}, such that $\tonear{\leq}(a(x_i),a(x_j))=0$ iff $j=i+1$ and $\tonear{\leq}(a(x_i),a(x_j))=1$ otherwise. We can use $\tonear{\leq}$ on $A$ to define a notion of proximity for $X$, since the set $X$ has the frame-MNC pairs arranged in order of increasing MNC area. We define a proximity relation $\tonear{a_\leq}$ on $X$ such that $\tonear{a_\leq}(x_i,x_j)=0$ iff $j=i+1$ and $\tonear{a_\leq}(x_i,x_j)=1$ otherwise. Let us explain this proximity on frame-MNC pairs with the help of the following example. \begin{algorithm}[!ht]
\caption{Extracting subgraphs for a specific frame from the Proximity graph of an order induced proximity on Video frames}
\label{alg:mncarea_subgraph}
\SetKwData{Left}{left}
\SetKwData{This}{this}
\SetKwData{Up}{up}
\SetKwFunction{Union}{Union}
\SetKwFunction{FindCompress}{FindCompress}
\SetKwInOut{Input}{Input}
\SetKwInOut{Output}{Output}
\SetKwComment{tcc}{/*}{*/}
\Input{A set $X=\{\{f_i,mnc_j\}:f_i \in \mathcal{V},mnc_j \subseteq f_i\,and\,i,j \in \mathbb{Z}^+\}$ arranged in some order, frame number under consideration $k$}
\Output{$G_k$ the subgraph representing order relations for video frame number $k$}
\emph{$V \mathrel{\mathop :}= \{\},E \mathrel{\mathop :}= \{\}$, declare empty array}\;
\ForEach{$\{f_i,mnc_j\} \in X\; s.t.\;i=k$}{
\emph{$\{f_i,mnc_j\} \longmapsto ind$, the index of $2-$tuple in $X$}\;
\uIf{$ind=1$}{
\emph{$V=\{V,X(ind),X(ind+1)\}$ append to the end}\;
\emph{$E=\{E,de(X(ind),X(ind+1))\}$, $de(a,b)$ is a directed edge from $a$ to $b$}\;
}
\uElseIf{$ind=length(X)$}{
\emph{$V=\{V,X(ind-1),X(ind)\}$ }\;
\emph{$E=\{E,de(X(ind-1),X(ind))\}$}\;
}
\Else{
\emph{$V=\{V,X(ind-1),X(ind),X(ind+1)\}$ }\;
\emph{$E=\{E,de(X(ind-1),X(ind)),de(X(ind),X(ind+1))\}$}\;
}
}
\emph{$G_k \mathrel{\mathop :}= (V,E)$, where $V$ is the vertex set and $E$ is the edge set}\;
\end{algorithm} \begin{figure}\label{fig:framesim_area}
\end{figure}
\begin{example}
In this example we use the stock MATLAB video "Traffic.mp4" to illustrate how a total order on the areas of the MNCs can be used to induce a proximity on frame-MNC pairs. The process has been detailed in algorithm~\ref{alg:mncarea_ord}. We represent a frame-MNC pair as $(f_i,mnc_j)$ and group these pairs in a set $X=\{x_1,\cdots,x_m\}$. The elements of $X$ are arranged in increasing order of MNC area, $a(x_i)$. We have seen that $(A,\leq)$, where $A=\{a(mnc_j): mnc_j \in x_i \in X\}$, is a totally ordered set. Def.~\ref{def:toprox} induces a proximity on $A$ such that $\tonear{\leq}(a(x_i),a(x_j))=0$ iff $j=i+1$ and $\tonear{\leq}(a(x_i),a(x_j))=1$ otherwise. This is a direct result of the sorting performed on $X$ in ascending order w.r.t. $a(x_i)$. The condition $(a(x_i) \leq a(x_j)) \wedge (\not\exists a(x_k) \in A \setminus \{a(x_i),a(x_j)\}\; s.t.\;a(x_i)\leq a(x_k) \leq a(x_j))$ in def.~\ref{def:toprox} is reduced to $j=i+1$. Thus, we can write $a(x_1)\leq a(x_2)\leq \cdots \leq a(x_m)$, and the induced proximity relation can be represented as $\tonear{\leq}(a(x_1),a(x_2))=0 \wedge \tonear{\leq}(a(x_2),a(x_3))=0 \wedge \cdots \wedge \tonear{\leq}(a(x_{m-1}),a(x_m))=0$. Now we can lift the proximity from $a(x_i)$ to $x_i$ and rewrite this as $\tonear{a_\leq}(x_1,x_2)=0 \wedge \tonear{a_\leq}(x_2,x_3)=0 \wedge \cdots \wedge \tonear{a_\leq}(x_{m-1},x_m)=0$. This gives us a proximity relation on the frame-MNC pairs $x_i$.
It can be seen from the definition of induced proximity (def.~\ref{def:toprox}) that a frame-MNC pair can be related to at most two other pairs, with $x_{i-1}$ being proximal to $x_{i}$ and $x_i$ being proximal to $x_{i+1}$. A frame $f_i$ can have multiple MNCs and thus a corresponding number of frame-MNC pairs. All such relations for a particular frame are found using algorithm~\ref{alg:mncarea_subgraph}, which implements these relations for each of the frame-MNC pairs in a frame.
We illustrate this using $f_{21}$ from the video, which has three MNCs and the corresponding pairs $(f_{21},mnc_1),(f_{21},mnc_2),(f_{21},mnc_3)$. Figure~\ref{fig:framesim_area} summarizes the relations for each of these pairs in a path graph corresponding to each of them. The images are labeled with the corresponding $(f_i,mnc_j)$ and the edges with the ratio $\frac{a(x_{i+1})}{a(x_i)}$. It must be noted that $(f_{40},mnc_1)\leq(f_{21},mnc_1)\leq(f_{63},mnc_1)$ and no other frame-MNC pair can be inserted in this chain. MNCs of varying shape can have similar areas, as represented by $(f_{40},mnc_1),(f_{21},mnc_1)$ and $(f_{21},mnc_3),(f_{91},mnc_1)$. Moreover, it can be seen that this relation of proximity transcends the temporal order of the frames, as $\tonear{a_\leq}((f_{21},mnc_1),(f_{63},mnc_1))=0$. \qquad \eot \end{example}
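A compact Python rendering of algorithm~\ref{alg:mncarea_subgraph} is sketched below; it assumes that the ordered list $X$ produced by algorithm~\ref{alg:mncarea_ord} is available as a list of index pairs, and the three cases of the pseudocode are folded into two symmetric checks on the predecessor and the successor.
\begin{verbatim}
# Illustrative sketch of algorithm alg:mncarea_subgraph (not the authors'
# code).  X: ordered list of frame-MNC pairs (i, j); k: frame number.
def order_subgraph_for_frame(X, k):
    V, E = [], []
    for ind, (f, _) in enumerate(X):
        if f != k:
            continue
        if ind > 0:                           # predecessor in the order
            V.extend([X[ind - 1], X[ind]])
            E.append((X[ind - 1], X[ind]))    # directed edge de(a, b)
        if ind < len(X) - 1:                  # successor in the order
            V.extend([X[ind], X[ind + 1]])
            E.append((X[ind], X[ind + 1]))
    V = list(dict.fromkeys(V))                # drop duplicate vertices
    return V, E                               # the subgraph G_k
\end{verbatim}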
\begin{algorithm}[!ht]
\caption{Order Induced Proximity on $1$-maximal cycles in Video Frames based on length}
\label{alg:vortlen_ord}
\SetKwData{Left}{left}
\SetKwData{This}{this}
\SetKwData{Up}{up}
\SetKwFunction{Union}{Union}
\SetKwFunction{FindCompress}{FindCompress}
\SetKwInOut{Input}{Input}
\SetKwInOut{Output}{Output}
\SetKwComment{tcc}{/*}{*/}
\Input{digital video $\mathcal{V}$,Number of keypoints $n$}
\Output{A set $X=\{\{f_i,mnc_j\}:f_i \in \mathcal{V},mnc_j \subseteq f_i\,and\,i,j \in \mathbb{Z}^+\}$ arranged in increasing order of vortex length}
\emph{$\mathcal{X} \mathrel{\mathop :}= \{\},X \mathrel{\mathop :}= \{\},C_{sort} \mathrel{\mathop :}= \{\}$, declare empty array}\;
\ForEach{$f_i \in \mathcal{V}$}{
\emph{$f_i \longmapsto S=\{s_1,\cdots,s_n\}$, where $S$ is the set of keypoints}\;
\emph{$S \longmapsto \mathcal{T}(S)$, where $\mathcal{T}(S)$ is delaunay triangulation on the keypoints $S$}\;
\emph{$\mathcal{T}(S) \longmapsto \mathcal{M}$, where $\mathcal{M}=\{mnc_j:j \in \mathbb{Z}^+\}$ is the set of all the MNCs in $\mathcal{T}(S)$}\;
\ForEach{$mnc_j \in \mathcal{M}$}{
\emph{$mnc_j \longmapsto C=\{c_1, \cdots,c_i\}$, where $c_i$ is centroid of $\triangle_i \in mnc_j$}\;
\emph{$C \longmapsto u$, where $u$ is the centroid of points in $C$}\;
\emph{$C \longmapsto \Theta=\{\{c_1,\theta_1\},\cdots,\{c_j,\theta_j\}\}$, where $\theta_j=\arctan(\frac{c_{j}(2)-u(2)}{c_{j}(1)-u(1)})$}\;
\emph{$\Theta \longmapsto \Theta_{sort}$, where $\Theta_{sort}$ is arranged in order of increasing $\theta$}\;
\ForEach{$\{c_k,\theta_k \} \in \Theta_{sort}$}{
\emph{$\{c_k, \theta_k\} \longmapsto c_k$}\;
\emph{$C_{sort} \mathrel{\mathop :}= \{C_{sort},c_k\}$}\;
}
\emph{$\Theta_{sort} \longmapsto C_{sort}$, project onto the first coordinate}\;
\emph{$C_{sort}=\{cs_1,\cdots,cs_j\} \longmapsto v_j=cyc(cs_1,\cdots,cs_j)$}\;
\emph{$v_j \longmapsto l_{ij}$, $l_{ij}$ is the length of $v_j$}
\emph{$\mathcal{X} \mathrel{\mathop :}= \{\mathcal{X},\{f_i,v_j,l_{ij}\} \}$, appending at the end}\;
}
}
\emph{$\mathcal{X} \rightarrow \mathcal{X}_{sort}$, where $\mathcal{X}_{sort}$ contains all the $3$-tuples in $\mathcal{X}$ arranged in ascending order of $l_{ij}$}\;
\ForEach{$\{f_i,v_j,l_{ij}\} \in \mathcal{X}_{sort}$}{
\emph{$\{f_i,v_j,l_{ij}\} \longmapsto \{f_i,v_j\}$}\;
\emph{$X \mathrel{\mathop :}= \{X,\{f_i,v_j\} \}$, appending at the end}\;
} \end{algorithm} Another method to induce a proximity relation is to use the length of the $1$-maximal cycle $mcyc_1$, that is, the cycle constructed using the centroids of the triangles in the MNC. A method for constructing such cycles has been discussed in section~\ref{subsec:mcv}, and the procedure is detailed in algorithm~\ref{alg:vortlen_ord}. For each frame $f_i \in \mathcal{V}$, we select keypoints $S=\{s_1,\cdots,s_n\}$ upon which the Delaunay triangulation $\mathcal{T}(S)$ is constructed. We determine $\mathcal{M}=\{mnc_1,\cdots,mnc_j\}$, the set of MNCs in $f_i$. For each $mnc_j \in \mathcal{M}$ we determine the centroids of the triangles constituting it, $C=\{c_1,\cdots,c_i\}$.
We calculate the centroid of the points in $C$, denoting it by $u$, and then, using $u$ as the origin, we calculate the orientation $\theta_i$ of the position vector of each $c_i \in C$. We then connect these points in increasing order of $\theta_i$ and close the loop to obtain $mcyc_1$. The $mcyc_1$ for $mnc_j$ is denoted $v_j$. The length of $v_j \subset f_i$ is $l_{ij}$. We form an array of $3$-tuples $\mathcal{X}=\{\{f_i,v_j,l_{ij}\}: f_i \in \mathcal{V},v_j \subset mnc_j\}$. This array is sorted in ascending order of $l_{ij}$ to yield $\mathcal{X}_{sort}$. Projection of the $3$-tuple $\{f_i,v_j,l_{ij}\}$ onto its first two elements gives us the frame-cycle pair $\{f_i,v_j \}$. The set $X$ is the collection of all such pairs arranged in increasing order of the length $l_{ij}$.
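The construction of $v_j$ and of its length $l_{ij}$ admits a short Python sketch, given below under the assumption that the triangle centroids of one MNC are available as an $N\times 2$ array; \texttt{arctan2} is used in place of the $\arctan$ of the ratio purely for numerical robustness.
\begin{verbatim}
# Illustrative sketch of the cycle construction in algorithm alg:vortlen_ord
# (not the authors' code).
import numpy as np

def mcyc1_length(centroids):
    C = np.asarray(centroids, dtype=float)  # centroids c_1,...,c_N
    u = C.mean(axis=0)                      # centroid u of the points in C
    # orientation of each position vector c_k - u
    theta = np.arctan2(C[:, 1] - u[1], C[:, 0] - u[0])
    cycle = C[np.argsort(theta)]            # connect in increasing theta
    closed = np.vstack([cycle, cycle[:1]])  # close the loop
    length = np.sum(np.linalg.norm(np.diff(closed, axis=0), axis=1))
    return cycle, length                    # v_j and its length l_ij
\end{verbatim}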
We express this set $X$ as $\{x_1,\cdots,x_m\}$, where each $x_k=\{f_i,v_j\}$ is a frame-cycle pair. Let $l(x_i)$ be the length of $v_j$, the cycle represented by $x_i$. Then we can see that, due to the sorting performed, $l(x_i) \leq l(x_j)$ for all $j \geq i$. We present the following results. \begin{lemma}\label{lm:vortlen_ord}
Let $X=\{x_1,\cdots,x_m\}$ be such that each $x_k=\{f_i,v_j\}$, where $f_i$ is a video frame and $v_j \subset f_i$ is a $mcyc_1$ in this frame. Define a function $l:2^{\mathbb{R}^2} \rightarrow \mathbb{R}$ such that $l(x_i)$ is the length of the $mcyc_1$ in the frame-cycle pair $x_i$. Then, the inequality relation $\leq$ over the set $L=\{l(x_i): x_i \in X \}$ satisfies the following conditions:
\begin{compactenum}[1$^o$]
\item \textbf{Reflexivity:} $l(x_i) \leq l(x_i)$
\item \textbf{Antisymmetry:} if $l(x_i) \leq l(x_j)$ and $l(x_j) \leq l(x_i)$, then $l(x_i)=l(x_j)$
\item \textbf{Transitivity:} if $l(x_i) \leq l(x_j)$ and $l(x_j) \leq l(x_k)$, then $l(x_i) \leq l(x_k)$
\item \textbf{Connex property or Totality:} either $l(x_i) \leq l(x_j)$ or $l(x_j) \leq l(x_i)$
\end{compactenum} \end{lemma} \begin{proof}
\begin{compactenum}[1$^o$]
\item This follows from the fact that the length of each $mcyc_1$ is a real number and $a \leq a$ for every real number $a$.
\item The range of $l$ is the set $\mathbb{R}$, and for any two real numbers $a,b$ we know that $(a \leq b) \wedge (b \leq a) \Rightarrow (a=b)$.
\item The output of the length function is a real number, and for any three real numbers $a,b,c$ it is known that $(a \leq b) \wedge (b \leq c) \Rightarrow (a \leq c)$.
\item The length is a real number, and for any two real numbers $a,b$ it is known that either $a \leq b$ or $b \leq a$.
\end{compactenum} \end{proof} This lemma leads to the following theorem. \begin{theorem}\label{thm:vortlen_ord}
Let $X=\{x_1,\cdots,x_m\}$ be such that each $x_k=\{f_i,v_j\}$, where $f_i$ is a video frame and $v_j \subset f_i$ is a $mcyc_1$ in this frame. Define a function $l:2^{\mathbb{R}^2} \rightarrow \mathbb{R}$ such that $l(x_i)$ is the length of the $mcyc_1$ in the frame-cycle pair $x_i$. Then, the inequality relation $\leq$ over the set $L=\{l(x_i): x_i \in X \}$ is a total order. \end{theorem} \begin{proof}
This follows directly from lemma \ref{lm:vortlen_ord} and definition \ref{def:torder}. \end{proof} Thus, similar to the case of MNC area, the length of a $mcyc_1$ is a real number and the order on the lengths of such cycles is a total order similar to $(\mathbb{Z},\leq)$ considered in example \ref{ex:example1}. Accordingly, the Hasse diagram is similar in structure to Fig.~\ref{subfig:integers}. Thus, this order induces a proximity on $L=\{l(x_i): x_i \in X \}$ as per definition~\ref{def:toprox}. It can be seen that $\tonear{\leq}(l(x_i),l(x_j))=0$ iff $j=i+1$ and $\tonear{\leq}(l(x_i),l(x_j))=1$ otherwise. This proximity on $L$ can be extended to a similar notion on the set $X$. The set $X$, as we know from algorithm~\ref{alg:vortlen_ord}, is the list of frame-cycle pairs arranged in increasing order of length. Thus, we define a proximity relation $\tonear{l_\leq}$ on $X$ such that $\tonear{l_\leq}(x_i,x_j)=0$ iff $j=i+1$ and $\tonear{l_\leq}(x_i,x_j)=1$ otherwise. We explain this with the following example. \begin{figure}\label{fig:framesim_length}
\end{figure} \begin{example}
In this example we consider how a total order (def.~\ref{def:torder}) on the lengths of $mcyc_1$ induces a proximity $\tonear{l_\leq}$ on the video frames. We use "Traffic.mp4", which is a stock video in MATLAB. We tessellate each frame $f_i$ and construct $v_j$ for each MNC $mnc_j \subset f_i$. It must be noted that we consider each frame-cycle pair as an entity, i.e. if a frame has multiple MNCs leading to multiple $mcyc_1$, each pair will be considered separately. As we have seen, $X=\{x_1,\cdots,x_m\}$, where each $x_k=\{f_i,v_j\}$ is a frame-cycle pair, is sorted by increasing length of $v_j$, represented as $l(v_j)$. $(L,\leq)$, where $L=\{l(v_i):v_i \in x_i \in X\}$, is a totally ordered set. Using def.~\ref{def:toprox} we can easily determine that $\tonear{\leq}(l(x_i),l(x_j))=0$ iff $j=i+1$ and $\tonear{\leq}(l(x_i),l(x_j))=1$ otherwise. This happens because we have sorted $X$ in ascending order w.r.t. $l(x_i)$, for $x_i \in X$. Thus the condition $(l(x_i) \leq l(x_j)) \wedge (\not\exists l(x_k) \in L \setminus \{l(x_i),l(x_j)\}\; s.t.\;l(x_i)\leq l(x_k) \leq l(x_j))$ in def.~\ref{def:toprox} is reduced to $j=i+1$. We then lift this proximity relation on the lengths $l(x_i)$ to the frame-cycle pairs $x_i$ and term it $\tonear{l_\leq}$. As discussed, it is defined by $\tonear{l_\leq}(x_i,x_j)=0$ iff $j=i+1$ and $\tonear{l_\leq}(x_i,x_j)=1$ otherwise. Thus, as we can write $l(x_1) \leq l(x_2) \leq \cdots \leq l(x_m)$, we can write the induced proximities as $\tonear{\leq}(l(x_1),l(x_2))=0 \wedge \tonear{\leq}(l(x_2),l(x_3))=0 \wedge \cdots \wedge \tonear{\leq}(l(x_{m-1}),l(x_m))=0$. By lifting the proximity from $l(x_i)$ to $x_i$ we obtain $\tonear{l_\leq}$, for which we can write $\tonear{l_\leq}(x_1,x_2)=0 \wedge \tonear{l_\leq}(x_2,x_3)=0 \wedge \cdots \wedge \tonear{l_\leq}(x_{m-1},x_m)=0$. Thus, we have a proximity relation on the frame-cycle pairs.
We can see that a frame-cycle pair $x_i$ can be related to at most two other pairs, with $x_{i-1}$ being proximal to $x_i$ and $x_i$ being proximal to $x_{i+1}$. For a particular frame we can have multiple frame-cycle pairs due to multiple MNCs. We determine all such relations for the frame-cycle pairs of a particular $f_i$. This is done using algorithm~\ref{alg:mncarea_subgraph}, which is an implementation of the explained relations. To illustrate our point we choose a frame $f_{21}$ having three $mcyc_1$, represented as $\{f_{21},v_1 \},\{f_{21},v_2 \},\{f_{21},v_3\}$. In fig.~\ref{fig:framesim_length} we represent the proximity relations for each of these pairs. We have a path graph for each of the three frame-cycle pairs. Each of the images is labeled with the corresponding $(f_i,v_j)$. The ratio $\frac{l(x_{i+1})}{l(x_i)}$ is given as a weight on each of the arrows. It must be noted that $l(\{f_{16},v_1\}) \leq l(\{f_{21},v_1\}) \leq l(\{f_{98},v_2\})$ and no other frame-cycle pair can be inserted between the terms of this inequality chain. Moreover, we see that $mcyc_1$ of very different shapes can have similar lengths, such as $\{f_{98},v_2\},\{f_{21},v_1\}$ and $\{f_{75},v_2\},\{f_{21},v_3\}$. An important thing to note is that this proximity relates frames that may not be temporally adjacent, such as $f_{98}$ and $f_{21}$. \qquad \eot \end{example}
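Both induced proximities considered above reduce, once the list of pairs has been sorted, to adjacency of indices; a minimal sketch of this predicate and of the lifting from the lengths (or areas) to the pairs themselves is the following.
\begin{verbatim}
# Illustrative sketch only: after sorting, def. toprox reduces to j = i + 1.
def near(i, j):
    return 0 if j == i + 1 else 1        # 0 = proximal, 1 = not proximal

def proximal_pairs(X):
    # exactly the consecutive elements of the sorted list are proximal
    return [(X[i], X[i + 1]) for i in range(len(X) - 1)]
\end{verbatim}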
\section{Conclusions}\label{sec:conclusions} This paper defines and studies the proximity relations induced by partial, total and cyclic orders. The equivalence between the Hasse diagram of the ordered set and the proximity graph of the induced relation has been established. Moreover, an application of this induced proximity to constructing maximal centroidal vortices has been presented. Another application, inducing a proximity relation on video frames, has also been presented.
\end{document}
\begin{document}
\title{Normality of Necessary Optimality Conditions for Calculus of Variations Problems with State Constraints}
\author{
N. Khalil\footnote{ {\it MODAL'X, Universit\'e Paris Ouest Nanterre La D\'efense, 200 Avenue de la R\'epublique, 92001 Paris Nanterre, France, e-mail: \/}
{\tt [email protected]}} ,
S. O. Lopes \footnote{ {\it CFIS and DMA, Universidade do Minho, Guimar\~{a}es, Portugal, e-mail: \/}
{\tt [email protected]}
This author was supported by POCI-01-0145-FEDER-006933-SYSTEC, PTDC/EEI-AUT/2933/2014, POCI-01-0145-FEDER-016858 TOCCATTA and POCI-01-0145-FEDER-028247 To Chair - funded by FEDER funds through COMPETE2020 - Programa Operacional Competitividade e Internacionaliza\c{c}\~{a}o (POCI) and by national funds (PIDDAC) through FCT/MCTES which is gratefully acknowledged. Financial support from the Portuguese Foundation for Science and Technology (FCT) in the framework of the Strategic Financing UID/FIS/04650/2013 is also acknowledged. \protect \includegraphics[height=5.0mm]{logo.png} }
}
\maketitle \begin{abstract}\noindent We consider non-autonomous calculus of variations problems with a state constraint represented by a given closed set. We prove that if the interior of the Clarke tangent cone of the state constraint set is non-empty (this is the constraint qualification that we suggest here), then the necessary optimality conditions apply in the normal form. We establish normality results for (weak) local minimizers and global minimizers, employing two different approaches and invoking slightly diverse assumptions. More precisely, for the local minimizers result, the Lagrangian is supposed to be Lipschitz with respect to the state variable, and just lower semicontinuous in its third variable. On the other hand, the approach for the global minimizers result (which is simpler) requires the Lagrangian to be convex with respect to its third variable, but the Lipschitz constant of the Lagrangian with respect to the state variable might now depend on time.
\end{abstract}
\vskip3ex \noindent {\bf Keywords:}{ Calculus of Variations $\cdot$ Constraint qualification $\cdot$ Normality $\cdot$ Optimal Control $\cdot$ Neighboring Feasible Trajectories}
\section{Introduction} We consider the following non-autonomous calculus of variations problem subject to a state constraint \begin{equation}\begin{cases}\label{problem: calculus of variations}
\begin{aligned}
& {\text{minimize}}
&& \int_S^{T} L (t,x(t),\dot{x}(t)) \ dt \\
&&& \hspace{-1.9cm} \text{over arcs } x(.) \in W^{1,1}([S,T], \mathbb{R}^{n}) \text{ satisfying} \\
&&& x(S) = x_0 \ , \\
&&& x(t) \in A \quad \quad {\rm for \, all }\; t \in [S,T] \ , \end{aligned}\end{cases} \tag{CV}\tagsleft@true\let\veqno\@@leqno \end{equation} for a given Lagrangian $L: [S,T]\times \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}$, an initial datum $x_0 \in \mathbb{R}^n$ and a closed set $A \subset \mathbb{R}^{n}$.
Briefly stated, the problem consists in minimizing the integral of a time-dependent Lagrangian $L$ over admissible absolutely continuous arcs $x(.)$ (that is, the left end-point and the state constraint of the problem (\ref{problem: calculus of variations}) are satisfied). We say that an admissible arc $\bar x$ is a $W^{1,1}-${\it local minimizer} if there exists $\epsilon > 0$ such that \[ \int_S^{T} L (t,\bar{x} (t),\dot{\bar x}(t))dt \leq \int_S^{T} L (t,x(t),\dot{x}(t))dt , \] for all admissible arcs $x$ satisfying \[
\left\| x(.) - \bar x (.) \right\|_{W^{1,1}} \leq \epsilon . \]
The purpose of this paper is to derive necessary optimality conditions in the normal form for problems like (\ref{problem: calculus of variations}); that is, conditions in which the trajectories satisfy a set of necessary conditions with a nonzero cost multiplier. Indeed, in the pathological situation of abnormal extrema, the objective function to minimize does not intervene in the selection of the candidates to be minimizers. To overcome this difficulty, additional hypotheses have to be imposed, known as {\it constraint qualifications}, which make it possible to identify classes of problems for which normality is guaranteed. There has been growing interest in the literature in ensuring the normality of necessary optimality conditions for state-constrained calculus of variations problems. For instance, \cite{ferreira_when_1994} deals with an autonomous Lagrangian, studied for $W^{1,1}-$local minimizers and with a state constraint expressed in terms of an inequality involving a twice continuously differentiable function. The constraint qualification for these smooth problems imposes that the gradient of the function representing the state constraint set is not zero at any point on the boundary of the state constraint set. The result in \cite{ferreira_when_1994} has been subsequently extended in \cite{fontes_normal_2013} to the nonsmooth case, for $L^\infty-$local minimizers, imposing a constraint qualification which makes use of some hybrid subgradients to cover situations in which the function which defines the state constraint set is not differentiable. More precisely, the idea of the constraint qualification in \cite{fontes_normal_2013} is the following: the angle between any couple of (hybrid) subgradients of the function that defines the state constraint set is `acute'. A useful technique employed in \cite{ferreira_when_1994} and \cite{fontes_normal_2013} in order to derive optimality conditions in the normal form consists in introducing an extra variable which reduces the reference calculus of variations problem to an optimal control problem with a terminal cost (this method is known as `state augmentation').
In this paper, the state constraint set is given in the intrinsic form (i.e. it is a given closed set) and the constraint qualification we suggest is to assume that the interior of the Clarke tangent cone to the state constraint set is nonempty. We emphasize that our constraint qualification generalizes the ones discussed in \cite{fontes_normal_2013} and \cite{ferreira_nondegenerate_1999} and can be therefore applied to a broader class of problems. This is clarified in Proposition \ref{Prop3} and Example \ref{ex0} where the constraint qualifications reported in \cite{fontes_normal_2013} and \cite{ferreira_nondegenerate_1999} imply the one that we suggest in our paper.
We propose two main theorems which establish the normality of the necessary optimality conditions for $W^{1,1}-$local minimizers and for global minimizers. For these two cases we identify two different approaches, both based on a state augmentation technique, but combined with: \begin{enumerate}[label= \arabic*), ref= \arabic*)]
\item\label{item: approach 1) for normality in CV} either a construction of a suitable control and a normality result for optimal control problems (this is for the $W^{1,1}-$local minimizers case);
\item\label{item: approach 2) for normality in CV} or a `distance estimate' result coupled with a standard maximum principle in optimal control (for the global minimizers case). \end{enumerate}
More precisely, in the approach for the first result (the case of $W^{1,1}-$local minimizers), we invoke some stability properties of the interior of the Clarke tangent cone. This allows us to select a particular control which pushes the dynamics of the control system inside the state constraint more than the reference minimizer. In such circumstances, necessary conditions in optimal control apply in the normal form: we opt here for the broader version (i.e. covering $W^{1,1}-$local minimizers) of the normality result for optimal control problems established in \cite{fontes_normal_2013} (which is concerned merely with $L^\infty-$local minimizers). The normality of the associated optimal control problem yields the desired normality property for our reference calculus of variations problem, considering $W^{1,1}-$local minimizers. This result is valid for Lagrangians $L=L(t,x,v)$ which are Lipschitz w.r.t. $x$ and lower semicontinuous w.r.t. $v$. The approach for the second result (which deals with global minimizers) requires slightly different assumptions on the data: here the Lipschitz constant of the Lagrangian w.r.t. $x$ might be an integrable function depending on the time variable, but $v \mapsto L(t,x,v)$ has to be convex. It is also necessary to impose a constraint qualification which is slightly stronger than the one considered for the first result (but still in the same spirit). The proof in this case is much shorter and simpler, and employs a neighboring feasible trajectories result satisfying $L^\infty-$linear estimates (cf. \cite{rampazzo_theorem_1999} and \cite{bettiol_l$infty$_2012}). This permits finding a global minimizer for an auxiliary optimal control problem to which we apply a standard maximum principle. The required normal form of the necessary conditions for the reference calculus of variations problem can therefore be derived.
This paper is organized as follows. In a short preliminary section, we provide some of the basic notions and results of nonsmooth analysis that are used throughout the paper. Section 3 provides the main results of this paper: necessary optimality conditions in a normal form for non-autonomous calculus of variations problems, for $W^{1,1}-$local and global minimizers. The last two sections are devoted to proving our main results, making use of a normality result for optimal control problems (for the case of $W^{1,1}-$local minimizers) and a neighboring feasible trajectory result (for the case of global minimizers).
\vskip2ex \noindent
{\bf Notation.} In this paper, $| . |$ refers to the Euclidean norm. $\mathbb{B}$ denotes the closed unit ball. Given a subset $X$ of $\mathbb{R}^{n}$, $\partial X$ is the boundary of $X$, ${\rm int}\ X$ is the interior of $X$ and ${\rm co}\ X$ is the convex hull of $X$. The Euclidean distance of a point $y$ from the set $X$ (that is $\inf \limits_{x \in X} | x-y | $) is written $d_X(y)$. Given a measure $\mu$ on the time interval $[S,T]$, we write $supp(\mu)$ for the support of the measure $\mu$. $W^{1,1}([S,T], \mathbb{R}^n)$ denotes the space of absolutely continuous functions defined on $[S,T]$ and taking values in $\mathbb{R}^{n}$.
\section{Some basic nonsmooth analysis tools} We introduce here some basic nonsmooth analysis tools which will be employed in this paper. For further details, we refer the reader for instance to the books \cite{aubin_set-valued_2009}, \cite{clarke_functional_2013}, \cite{clarke_nonsmooth_2008}, \cite{clarke_optimization_1990}, \cite{mordukhovich_variational_2006} and \cite{vinter_optimal_2010}. \vskip2ex \noindent
Take a closed set $A \subset \mathbb{R}^{n}$ and a point $\bar x \in A$. The \textit{proximal normal cone} of $A$ at $\bar{x}$ is defined as \begin{eqnarray*}
N_{A}^{P}(\bar{x}) := \{ \eta \in \mathbb{R}^{n} : \text{there exists } M > 0 \text{ such that } \eta \cdot (x-\bar{x}) \leq M \left\| x - \bar{x} \right\| ^{2} \text{ for all } x \in A \} \ . \end{eqnarray*}
The \textit{limiting normal cone} of $A$ at $\bar{x} \in A$ is defined to be \begin{eqnarray*}
N_A(\bar{x}) := \lbrace \eta \in \mathbb{R}^n : \text{there exists sequences } x_i \xrightarrow {A} \bar{x} \text{ and } \eta _i \rightarrow \eta \text{ such that } \eta_i \in N_{A}^{P}(x_{i}) \text{ for all } i \rbrace. \end{eqnarray*} Here $x_{i} \xrightarrow []{A} \bar{x}$ means that $x_{i} \rightarrow \bar{x}$ and $x_{i} \in A, \text{ for all } i.$
Given a lower semicontinuous function $f: \mathbb{R}^n \longrightarrow \mathbb{R} \cup \lbrace \infty \rbrace$, the \textit{limiting subdifferential} of $f$ at a point $\bar{x} \in \mathbb{R}^{n}$ such that $f(\bar{x}) < +\infty$ can be expressed in terms of the limiting normal cone to the epigraph of $f$ as follows: \[ \partial f(\bar x) := \{ \eta \ : \ (\eta,-1) \in N_{\text{epi }f} (\bar{x}, f(\bar{x})) \} \ . \]
We also make use of the \textit{hybrid subdifferential} of a Lipschitz continuous function $h: \mathbb{R}^n \to \mathbb{R}$ on a neighborhood of $\bar x$, defined as: \[ \partial^{>}h(\bar x) := {\rm co }\ \lbrace \gamma \in \mathbb{R}^{n} : \text{there exists } \; x_{i} \xrightarrow[]{h} \bar x \text{ such that } h(x_{i}) > 0, \; \text{for all } i \text{ and } \nabla h(x_{i}) \rightarrow \gamma \rbrace, \] where $x_i \xrightarrow[]{h} \bar x$ denotes $x_i \rightarrow \bar x$ and $h(x_i) \rightarrow h(\bar x)$ for all $i$. \noindent Moreover, we make reference to the Clarke tangent cone to the closed set $A$ at $\bar x \in A$, defined as follows: \begin{align*} T_A(\bar{x}):= \lbrace \xi \in \mathbb{R}^n : \; \text{ for all} \; x_i \xrightarrow[]{A} \bar{x} & \text{ and } \ t_i \downarrow 0, \text{ there exist } \lbrace a_i \rbrace \text{ in } A \text{ such that } \; t_i^ {-1} (a_i- x_i) \rightarrow \xi \rbrace. \end{align*}
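As a simple illustration of these notions, consider the closed convex cone $A=\{(x_1,x_2)\in\mathbb{R}^{2} : x_2 \geq |x_1|\}$ and the point $\bar x =(0,0)$. Then
\[
N^{P}_{A}(0,0)=N_{A}(0,0)=\{(\eta_1,\eta_2) : \eta_2 \leq -|\eta_1|\} \quad \text{ and } \quad T_{A}(0,0)=A \ ,
\]
so that ${\rm co}\ N_{A}(0,0)$ is pointed and ${\rm int}\ T_{A}(0,0)=\{(x_1,x_2) : x_2 > |x_1|\} \neq \emptyset$.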
We present also the following result:
\begin{proposition} \label{Prop2}
\begin{itemize}
\item[(i)] $\partial^{>} d_A(a) \subset {\rm co}\left(N_A(a) \cap \partial \mathbb{B}\right)$ for a closed set $A \subset \mathbb{R}^{n}$ and $a \in \partial A$.
\item[(ii)] If in addition co $N_A(a)$ is pointed, then $0 \not \in {\rm co} \left( N_A(a) \cap \partial \mathbb{B}\right)$.
\end{itemize} \noindent
(We recall that $ { \rm co} \ N_{A}(a)$ is `pointed' if for any nonzero elements $d_{1}, d_{2} \in {\rm co}\ N_{A}(a),$
$
d_{1} + d_{2} \neq 0.
$)
\end{proposition} \begin{proof}
The proof of (i) follows directly from the definition of the hybrid subdifferential of the distance function $d_A(.)$, and from the Carath\'eodory Theorem, while (ii) is proved by contradiction. \end{proof} \section{Main Results} This section provides the main results of this paper: necessary optimality conditions in a normal form for non-autonomous calculus of variations problems in the form of (\ref{problem: calculus of variations}). More precisely, we are interested in providing normality conditions for problems in which the state constraint is given in an implicit form: $A$ is just a closed set. Using the distance function, the state constraint $x(t) \in A$ can be equivalently written as a pathwise functional inequality \[ d_{A}(x(t)) \leq 0 \quad \quad {\rm for \; all }\ t \in [S,T] \ . \]
\noindent We provide first a normality result for $W^{1,1}-$local minimizers. We assume therefore that for a given reference arc $\bar x \in W^{1,1}([S,T], \mathbb{R}^n)$: \begin{enumerate}[label=(CV\arabic*), ref=CV\arabic*]
\item \label{item: CV1} $L(t,x,v)$ is measurable in $(t,v)$, bounded on bounded sets; and there exist $\epsilon'>0$, $K_L>0$ such that for all $t \in [S,T]$ and $x, x' \in \bar{x} (t)+ \epsilon'\mathbb{B}$
$$
|L(t,x,v)-L(t,x',v)| \leq K_L |x-x'| \quad \text{uniformly on } v \in \mathbb{R}^n.$$
\item \label{item: CV2} $v \mapsto L(t,\bar{x}(t),v)$ is lower semicontinuous for all $t \in [S,T]$.
\end{enumerate}
\begin{theorem}[$W^{1,1}-$Local Minimizers]\label{Thm1}
Let $\bar{x}$ be a $W^{1,1}-$local minimizer for (\ref{problem: calculus of variations}), and assume that hypotheses (\ref{item: CV1})-(\ref{item: CV2}) are satisfied. Suppose also that $\bar x$ is Lipschitz continuous and
\begin{enumerate}[label= $(CQ)$, ref=$CQ$]
\item \label{CQ for calculus of variations} \[\text{int } T_A(z) \neq \emptyset \ , \quad \text{for all } z \in \bar{x}([S,T])\cap \partial A.\]
\end{enumerate}
Then, there exist $p(.) \in W^{1,1}([S,T], \mathbb{R}^n)$, a function of bounded variation $\nu(\cdot) : [S,T] \rightarrow \mathbb{R}^n$, continuous from the right on $(S,T)$, such that: for some positive Borel measure $\mu$ on $[S,T]$, whose support satisfies
\[ \text{supp}(\mu) \ \subset \ \{ t \in [S,T] \ : \ \bar x(t) \in \partial A \}, \]
and some Borel measurable selection
\[ \gamma(t) \in \partial_x^> d_A(\bar x(t)) \quad \mu-\text{a.e. }\ t \in [S,T] \]
we have
\begin{enumerate}[label=(\alph*), ref=\alph*]
\item \label{0_chap5} $\nu(t) = \int_{[S,t]} \gamma(s) d\mu(s)$ \quad for all $t\in (S,T]$ ;
\item \label{1_chap5} $\dot{p}(t) \in \textrm{co } \partial_{x} L (t,\bar{x}(t), \dot{\bar{x}}(t)) \quad \textrm{ and } \quad q(t) \in \textrm{co } \partial_{\dot{x}} L (t,\bar{x}(t), \dot{\bar{x}}(t))$ \quad a.e. $t\in [S,T]$ ;
\item \label{2_chap5} $q(T)=0$ ;
\end{enumerate}
where
\begin{equation*} q(t) = \begin{cases}
p(S) \quad & t =S \\
p(t) +\int_{[S,t]} \gamma(s) d\mu(s) \quad &t \in (S,T] \ .
\end{cases}
\end{equation*} \end{theorem}
The following theorem concerns global minimizers, establishing the same necessary optimality conditions (in the normal form) as Theorem \ref{Thm1}. Here we invoke slightly different assumptions: we impose the convexity of the Lagrangian w.r.t. $v$, and the stronger constraint qualification (\ref{CQ for calculus of variations_second_technique}) below (valid for any point belonging to the boundary of $A$); however, we allow that the Lipschitz constant of $L$ w.r.t. $x$ might depend on $t$. More precisely, we assume that for some $R_0 >0$,
\begin{enumerate}[label=(CV\arabic*)$'$, ref=(CV\arabic*)$'$]
\item \label{item: CV1'} $L(.,x,v)$ is measurable for all $x\in \mathbb{R}^n$ and $v \in \mathbb{R}^n$, and there exists $\eta > 0$ such that the set-valued map $t \leadsto \{ L(t,x,v) : v \in R_0 \mathbb{B} \}$ is absolutely continuous from the left uniformly over $x \in (\partial A + \eta \mathbb{B}) \cap R_0 \mathbb{B}$. Moreover, $L(t,x,u)$ is bounded on bounded sets and, there exists an integrable function $K_L(.): [S,T] \to \mathbb{R}_+$ such that for all $t \in [S,T]$ and $x, x' \in R_0 \mathbb{B}$
$$
|L(t,x,v)-L(t,x',v)| \leq K_L(t) |x-x'| \quad \text{uniformly in } v\in \mathbb{R}^n. $$
\item \label{item: CV2'} $v \mapsto L(t,x,v)$ is convex, for all $(t,x) \in [S,T] \times \mathbb{R}^n$. \end{enumerate}
We recall that, given a set $X_0 \subset \mathbb{R}^n$ and a multifunction $F(.,.): [S,T] \times \mathbb{R}^n \leadsto \mathbb{R}^n$, we say that $F(.,x)$ is absolutely continuous from the left, uniformly over $x\in X_0$ if and only if the following condition is satisfied: given any $\epsilon>0$, we may find $\delta>0$ such that, for any finite partition of $[S,T]$ \[ S \le s_1 < t_1 \le s_2 < t_2 \le \ldots \le s_m < t_m \le T \] satisfying $\sum_{i=1}^{m} (t_i-s_i) < \delta$, we have \[ \sum_{i=1}^{m} d_{F(t_i,x)} (F(s_i,x)) < \epsilon. \]
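A simple sufficient condition for this property, given here only as an illustration: if $L(t,x,v)=a(t)\, \ell(x,v)$ with $a(.)$ absolutely continuous on $[S,T]$ and $|\ell| \leq M$ on $\left( (\partial A + \eta \mathbb{B}) \cap R_0 \mathbb{B} \right) \times R_0\mathbb{B}$, then for every partition as above
\[
\sum_{i=1}^{m} d_{F(t_i,x)} \left(F(s_i,x)\right) \ \leq \ M \sum_{i=1}^{m} |a(t_i)-a(s_i)| \ ,
\]
which can be made smaller than any given $\epsilon$ by choosing $\delta$ small enough; hence $t \leadsto \{ L(t,x,v) : v \in R_0 \mathbb{B} \}$ is absolutely continuous from the left uniformly over $x \in (\partial A + \eta \mathbb{B}) \cap R_0 \mathbb{B}$.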
\begin{theorem}[Global Minimizers]\label{Thm2}
Let $\bar{x}$ be a global minimizer for (\ref{problem: calculus of variations}), and assume that hypotheses \ref{item: CV1'}-\ref{item: CV2'} are satisfied. Suppose also that $\bar x$ is Lipschitz continuous and
\begin{enumerate}[label=($\widetilde{CQ}$), ref= $\widetilde{CQ}$]
\item \label{CQ for calculus of variations_second_technique} \[\text{int } T_A(z) \neq \emptyset \ , \quad \text{for all } z \in \partial A.\]
\end{enumerate}
Then, there exist $p(.) \in W^{1,1}([S,T], \mathbb{R}^n)$, a function of bounded variation $\nu(\cdot) : [S,T] \rightarrow \mathbb{R}^n$, continuous from the right on $(S,T)$, such that
conditions (\ref{0_chap5})-(\ref{2_chap5}) of Theorem \ref{Thm1} remain valid.
\end{theorem}
\begin{remark}
We observe that the validity of Theorem \ref{Thm1} and Theorem \ref{Thm2} requires the minimizer (local or global) $\bar x(.)$ to be Lipschitz. This assumption is not restrictive: not only does it cover previous results in this framework (such as \cite{fontes_normal_2013} and \cite{ferreira_when_1994}), but earlier work shows that Lipschitz regularity of minimizers can be obtained (for both autonomous and non-autonomous Lagrangians) under unrestrictive assumptions on the data. This is not a peculiarity of problems without state constraints (see for instance \cite{dal_maso_autonomous_2003}, \cite[Theorem 4.5.2 and Theorem 4.5.4]{clarke_necessary_2005}, \cite[Corollary 3.2]{clarke_regularity_1985}); it is well known also in the framework of state-constrained calculus of variations problems, see \cite[Corollary 16.19]{clarke_functional_2013}, \cite[Theorem 11.5.1]{vinter_optimal_2010} (for the autonomous case), and more recently \cite[Theorem 5.2]{bettiol_nonautonomous_2017} (for the nonautonomous case).
\end{remark}
We recall that the problem of normality of necessary conditions in calculus of variations has been previously investigated in \cite{ferreira_when_1994}, for the case of autonomous Lagrangian and smooth state constraint expressed in terms of a scalar inequality function $\{x \ : \ h(x) \leq 0 \}$ (that is, the function $h$ is of class $C^{2}$). This result has been successively extended to the case where the scalar function $h$ is nonsmooth in \cite{fontes_normal_2013}, always in the framework of autonomous Lagrangian. In both papers \cite{fontes_normal_2013} and \cite{ferreira_when_1994}, the Lagrangian is supposed to be Lipschitz w.r.t. $x$ and convex w.r.t. $v$. In our paper we deal with a non-autonomous Lagrangian and with a constraint qualification of different nature when we consider a state constraint condition in terms of a merely closed set $A$.
In the following proposition and example, it is shown that the constraint qualification invoked in \cite{fontes_normal_2013} (when we consider the state constraint to be expressed in terms of a distance function, that is $h(.) : = d_A(.)$) implies our constraint qualifications (\ref{CQ for calculus of variations}) and (\ref{CQ for calculus of variations_second_technique}), but the reverse implication is not true. Accordingly, (\ref{CQ for calculus of variations}) and (\ref{CQ for calculus of variations_second_technique}) can be applied to a larger class of problems.
\begin{proposition} \label{Prop3} Consider a closed set $A \subset \mathbb{R}^{n}$ and assume that the distance function to $A$, $d_{A}(.)$, satisfies the following condition: if for any given $y \in \partial A$ there exists $c >0$ such that
\begin{equation} \label{simpler}
\gamma_{1} \cdot \gamma_{2} > c \ , \quad \quad \text{for all } \ \gamma_{1} \ , \gamma_{2} \in \partial^{>}d_{A}(y),
\end{equation}
then, \; $ {\textrm int }\ T_{A}(y) \neq \emptyset.$
\noindent(Condition (\ref{simpler}) is a slightly weaker version of the constraint qualification considered in \cite{fontes_normal_2013}.) \end{proposition}
\begin{proof}[\textbf{Proof of Proposition \ref{Prop3}}]
Since hypothesis (\ref{simpler}) considered here clearly implies that $0 \notin \partial ^{>} d_{A}(y)$, the proof uses the same ideas as \cite[Proposition 2.1]{fontes_normality_2015}, which consist in showing that the cone $\mathbb{R}^{+} (\partial ^{>} d_{A}(y))$ is closed and pointed and that it is also equal to ${\rm co }\ N_{A}(y)$. That is, the convex hull of the limiting normal cone to $A$ is pointed and, consequently, its polar $T_{A}(y)$ has a nonempty interior. \end{proof}
\begin{example} \label{ex0} Consider the set $A := \{ (x,y) \ : \ h(x,y) \leq 0 \}$, where $h:\mathbb{R}^2 \rightarrow \mathbb{R}$ such that
\[ h(x,y) = |y| - x. \]
It is straightforward to check that $\partial^{>} h (0,0)= {\rm co }\ \{ \gamma_1, \gamma_2 \}$, where $ \gamma_1:=(-1,+1)$, and $ \gamma_2:=(-1,-1)$. Therefore, at the point $(0,0)$, condition (\ref{simpler}) is violated when a minimizer $\bar{x}$ is such that $(0,0) \in \bar{x}([S,T])$. This is because we could find two vectors $\gamma_1$ and $\gamma_2$ such that $\gamma_1 \cdot \gamma_2 =0$. However, $\text{int }T_A(0,0) \neq \emptyset$, and more in general $\text{int }T_A(y) \neq \emptyset$ for all $y\in \partial A$. Then, (\ref{CQ for calculus of variations}) (and also (\ref{CQ for calculus of variations_second_technique})) is always satisfied. \end{example}
This is a simple example which shows that we can find a state constraint set $A$ defined by a functional inequality (for some Lipschitz function $h$) such that (\ref{CQ for calculus of variations}) (and also (\ref{CQ for calculus of variations_second_technique})) is always verified but (\ref{simpler}) fails to hold true when a minimizer goes in a region where $A$ is nonsmooth.
\section{Proof of Theorem \ref{Thm1} ($W^{1,1}-$Local Minimizers)} In this section, we identify a class of optimal control problems whose necessary optimality conditions apply in the normal form, under some constraint qualifications. The main purpose of introducing such problems is that calculus of variations problems can be regarded as optimal control problems (owing to the `state augmentation' procedure). Therefore, the results on optimal control problems will be used to establish normality of the optimality conditions for the reference calculus of variations problem (\ref{problem: calculus of variations}). \vskip2ex
\subsection{Normality in Optimal Control Problems} We recall that `normality' means that the Lagrange multiplier associated with the objective function -- here written $\lambda$ -- is different from zero (it can be taken equal to 1).\\ Consider the fixed left end-point optimal control problem (\ref{problem: ocp with state constraint chap 5}) with a state constraint set $A \subset \mathbb{R}^n$ which is merely a closed set:
\begin{equation}\begin{cases}\label{problem: ocp with state constraint chap 5}
\begin{aligned}
& {\text{minimize}}
&& g( x(T)) \\
&&& \hspace{-1.9cm} \text{over } x\in W^{1,1}([S,T],\mathbb{R}^n) \text{ and measurable functions } u \text{ satisfying} \\
&&& \dot{x}(t) = f(t,x(t), u(t)) \quad \textrm{ a.e. } t \in [S,T] \\
&&& x(S)=x_{0} \\
&&& x(t) \in A \quad \text{ for all } t \in [S,T] \\
&&& u(t) \in U(t) \quad \textrm{a.e. } t \in [S,T] \ . \end{aligned}\end{cases}\tag{P}\tagsleft@true\let\veqno\@@leqno \end{equation} \ \\ \noindent The data for this problem comprise functions $g: \mathbb{R}^n \longrightarrow \mathbb{R}, f:[S,T] \times \mathbb{R}^n \times \mathbb{R}^m \longrightarrow \mathbb{R}^n,$ an initial state $x_0 \in \mathbb{R}^n,\text{ and a multifunction } U(.): [S,T] \leadsto \mathbb{R}^m.$ The set of control functions for (\ref{problem: ocp with state constraint chap 5}), denoted by $\mathcal{U}$, is the set of all measurable functions $u: [S,T] \longrightarrow \mathbb{R}^m \text{ such that } u(t) \in U(t) \text{ a.e. } t\in [S,T].$
\ \\
\noindent We say that an admissible process $(\bar{x}, \bar{u})$ is a $W^{1,1}-$\textit{local minimizer} if there exists $\epsilon >0$ such that \[ g(\bar{x}(T)) \leq g(x(T)),\] for all admissible processes $(x,u)$ satisfying \[ \| x(.)-\bar{x}(.) \| _{W^{1,1}} \leq \epsilon. \] \noindent There follows a `normal' version of the maximum principle for state constrained problems. For a $W^{1,1}-$local minimizer $(\bar{x}, \bar{u})$ and a positive scalar $\delta$, we assume the following: \begin{enumerate}[label= (H\arabic*), ref=H\arabic*]
\item \label{item: H1 ocp chap5} The function $(t,u) \mapsto f(t,x,u)$ is measurable for each $x\in \mathbb{R}^n$. There exists a measurable function $k(t,u)$ such that $t \mapsto k(t,\bar{u}(t))$ is integrable and \[ | f(t,x,u)- f(t,x',u) | \leq k(t,u) | x-x' | \] for $ x, x' \in \bar{x}(t) + \delta \mathbb{B}, \ u \in U(t), {\rm a.e.}\ t\in [S,T]$. Furthermore there exist scalars $K_f >0$ and $\varepsilon' >0 $ such that \[ | f(t,x,u)- f(t,x',u) | \leq K_f | x-x' | \] for $x, x' \in \bar{x}(S) + \delta \mathbb{B}, \ u \in U(t), \text{ a.e. } t\in [S,S+\varepsilon'].$
\item \label{item: H2 ocp chap5} Gr $U(.)$ is measurable.
\item\label{item: H3 ocp chap5} The function $g$ is Lipschitz continuous on $\bar{x}(T) + \delta \mathbb{B}$.
\end{enumerate} Reference is also made to the following constraint qualifications. There exist positive constants $K, \tilde \varepsilon, \tilde \beta, \tilde \rho$ and a control $\hat{u} \in \mathcal{U}$ such that \begin{enumerate}[label= (CQ\arabic*), ref= CQ\arabic*]
\item \label{item: CQ1 normality ocp chap 5}
\begin{equation} \label{boundedness boundary point}
| f(t,\bar{x}(t), \bar{u}(t))-f(t,\bar{x}(t),\hat{u}(t)) | \leq K, \quad \text{ for a.e. } t \in (\tau- \tilde \varepsilon, \tau] \cap [S,T]
\end{equation}
and
\[
\eta \cdot [f(t,\bar{x}(t), \hat{u}(t))-f(t,\bar{x}(t),\bar{u}(t))] < -\tilde \beta,
\]
for all $\eta \in \partial^{>}_x d_A(\bar x (s)), \text{ a.e. } s, \; t \in (\tau - \tilde \varepsilon, \tau] \cap [S,T] $ and for all $\tau \in \lbrace \sigma \in [S,T]\; : \; \bar{x}(\sigma) \in \partial A \rbrace.$
\item\label{item: CQ2 normality ocp chap 5} If $ x_0 \in \partial A,$ then for a.e. $t \in [S, S+ \tilde \varepsilon)$
\begin{equation} \label{boundedness initial point}
| f(t,x_0,\hat{u}(t)) | \leq K, \quad | f(t,x_0,\bar{u}(t)) | \leq K,
\end{equation}
and
\[
\eta \cdot [f(t,x_0, \hat{u}(t))-f(t,x_0,\bar{u}(t))] < - \tilde \beta \]
for all $\eta \in \partial^{>}_x d_A(x), \; x \in (x_0 + \tilde \rho \mathbb{B}) \cap \partial A .$
\item\label{item: CQ3 normality ocp chap 5} \index{Pointed Convex Cone}
\[
{ \rm co }\ N_{A}(\bar{x}(t)) \text{ is pointed for each } t \in [S,T].
\]
\end{enumerate} \begin{theorem}\label{Thm3}
Let $(\bar{x},\bar{u})$ be a $W^{1,1}-$local minimizer for (\ref{problem: ocp with state constraint chap 5}). Assume that hypotheses (\ref{item: H1 ocp chap5})-(\ref{item: H3 ocp chap5}) and the constraint qualifications (\ref{item: CQ1 normality ocp chap 5})-(\ref{item: CQ3 normality ocp chap 5}) hold. Then, there exist $p(.) \in W^{1,1}([S,T], \mathbb{R}^n)$, a Borel measure $\mu(.)$ and a $\mu-$integrable function $\gamma(.)$ such that
\begin{enumerate}[label= (\roman{*}), ref= \roman{*}]
\item $ - \dot{p}(t) \in {\rm co }\ \partial_{x} (q(t) \cdot f(t,\bar{x}(t), \bar{u}(t)) \; \; {\rm a.e.}\ t\in [S,T],$
\item $ -q(T) \in \partial g (\bar{x}(T)),$
\item $q(t) \cdot f(t,\bar{x}(t),\bar{u}(t)) = \max_{u\in U(t)} q(t) \cdot f(t,\bar{x}(t),u),$
\item $ \gamma(t) \in \partial^{>}d_A(\bar{x} (t))$ and $\textrm{supp} (\mu) \subset \lbrace t\in [S,T]\; : \; \bar{x}(t) \in \partial A \rbrace,$
\end{enumerate}
where
\begin{equation*} q(t) = \begin{cases}
p(S) & \quad t =S \\
p(t) + \int_{[S,t]} \gamma(s) d\mu(s) & \quad t \in (S,T].
\end{cases} \end{equation*}
\end{theorem}
\begin{proof}
This result was proved in \cite{fontes_normal_2013} for $L^{\infty}-$local minimizers and for a state constraint expressed by a functional inequality. It remains valid for the weaker case of $W^{1,1}-$local minimizers (by adding, if necessary, an extra variable with dynamics $\dot{y}(t) = |f(t,x(t),u(t))-\dot{\bar{x}}(t)|$) and for a state constraint expressed in terms of a closed set $A$.
\noindent
Indeed, let $(\bar x,\bar u)$ be merely a $W^{1,1}-$local minimizer for (\ref{problem: ocp with state constraint chap 5}). Then there exists $\alpha>0$ such that $(\bar x, \bar y \equiv 0, \bar u)$ is a $L^\infty-$local minimizer for
\begin{equation*}\begin{cases}
\begin{aligned} \label{augmented problem}
& {\text{minimize}}
&& g( x(T)) \\
&&& \hspace{-1.9cm} \text{over } x\in W^{1,1}([S,T],\mathbb{R}^n) \text{ and measurable functions } u \text{ satisfying} \\
&&& \dot{x}(t) = f(t,x(t), u(t)) \quad \textrm{ a.e. } t \in [S,T] \\
&&& \dot{y}(t) = |f(t,x(t),u(t))-\dot{\bar{x}}(t)| \quad \textrm{ a.e. } t \in [S,T] \\
&&& (x(S),y(S), y(T)) \in \{x_{0}\} \times \{ 0 \} \times \alpha\mathbb{B} \\
&&& x(t) \in A \quad \text{ for all } t \in [S,T] \\
&&& u(t) \in U(t) \quad \textrm{a.e. } t \in [S,T] \ . \end{aligned}\end{cases}\tag{P1}\tagsleft@true\let\veqno\@@leqno
\end{equation*}
(This can be checked by a simple contradiction argument.) Therefore we can apply \cite[Theorem 4.2]{fontes_normal_2013} in its normal formulation to the augmented problem (\ref{augmented problem}) with reference to the $L^\infty-$local minimizer $(\bar x, \bar y \equiv 0, \bar u)$. Writing out the corresponding necessary optimality conditions, we notice that the adjoint arc $r$ associated with the state variable $y$ is constant, because the right hand side of the dynamics does not depend on $y$. Moreover, $r=0$ owing to the transversality condition, since the constraint $\bar y(T) \in \alpha \mathbb{B}$ is inactive. Hence, we deduce the required necessary optimality conditions as stated in Theorem \ref{Thm3}, covering the case of $W^{1,1}-$local minimizers.
\end{proof}
\begin{remark} \label{remark1}
Normality of the necessary optimality conditions for state constrained optimal control problems has been well studied and many results exist in the literature where each result requires some regularity assumptions on the problem data (cf. \cite{fontes_normal_2013}, \cite{frankowska2009normality}, \cite{frankowska2013inward}, \cite{fontes_normality_2015} etc.). In \cite{fontes_normal_2013}, normality is established for free right-endpoint optimal control problems having $L^{\infty}-$local minimizers. The constraint qualifications assumed on the data are in the form of (\ref{item: CQ1 normality ocp chap 5}) and (\ref{item: CQ2 normality ocp chap 5}) but considered for an inequality state constraint of a possibly nonsmooth function. This result was extended in \cite{fontes_normality_2015} (considering always the case of $L^\infty-$local minimizers) to cover a larger class of problems where the state constraint is given in terms of a closed set and the right-endpoint (denoted by the set $K_1$ in \cite{fontes_normality_2015}) is considered to be a closed subset of $\mathbb{R}^n$. New (and weaker) constraint qualifications are discussed in \cite{fontes_normality_2015} to guarantee normality when $\bar{x}(T) \in \text{int }K_1$: the inward pointing condition has just to be satisfied for almost all times at which the optimal trajectory has an outward pointing velocity. Moreover in \cite{fontes_normality_2015} relations with previous constraint qualifications are examined.
\noindent
Theorem \ref{Thm3} might seem to be a particular case of \cite[Theorem 3.2]{fontes_normality_2015} when $K_1 = \mathbb{R}^n$ (possibly after considering the augmented problem in order to cover the case of $W^{1,1}-$local minimizers). However, looking closely at assumption (H3) in \cite{fontes_normality_2015}, we see that (H3) is a stronger assumption than the one considered in previous papers (see for instance \cite{ferreira_nondegenerate_1999}, \cite{lopes_constraint_2011}, \cite{fontes_normal_2013}) and than assumptions (\ref{boundedness boundary point}) and (\ref{boundedness initial point}) considered in our paper. Indeed, in (\ref{boundedness boundary point}) and (\ref{boundedness initial point}), the boundedness of the dynamics is required only for the optimal control $\bar u(.)$ and the constructed control $\hat u(.)$, at the initial datum $x_0$ and at the optimal trajectory points $\bar x(t)$, and only for times $t < \tau$ at which the trajectory hits the boundary of the state constraint; in \cite{fontes_normality_2015}, however, the boundedness concerns all controls $u \in U(t)$ and all $x$ near the optimal trajectory $\bar x(.)$, which is not implied by the assumptions of our paper.
\noindent
We underline the fact that to derive the desired necessary conditions stated in Theorem \ref{Thm1} above, a crucial requirement is the possibility to apply normality results for optimal control problems with dynamics which might be unbounded (cf. Remark \ref{remark2}).
\end{remark}
\subsection{Technical Lemmas}
We invoke now two technical lemmas which are crucial for establishing the proof of Theorem \ref{Thm1}.
The first lemma says that one can select a particular bounded control $v(.)$ which pulls the dynamics inward the state constraint set more than the reference minimizer.
\begin{lemma} \label{Lemma3} Let $\bar{x}(.)$ be a Lipschitz $W^{1,1}-$local minimizer for (\ref{problem: calculus of variations}) for which (\ref{item: CV1}) and (\ref{CQ for calculus of variations}) are satisfied. Then, there exist positive constants $\varepsilon, \; \rho, \; \beta, \; C, \; C_1$ and a measurable function $v(.)$ such that:
\begin{enumerate}[label=(\roman{*}), ref=\roman{*}]
\item\label{item: control existence bounded chapter CV} $ \| v -\dot{\bar{x}} \| _{L^{\infty}} \leq C$ and $\| v \| _{L^{\infty}} \leq C_{1}.$
\item \label{item: control existence ipc1 chapter CV}For all $ \tau \in \lbrace \sigma \in [S,T] : \bar{x}(\sigma) \in \partial A \rbrace$,
\[
\mathop {\sup }\limits_{\begin{array}{*{20}c}
{\eta \in {\rm co} \left(N_A(\bar{x}(s)) \cap \partial \mathbb{B}\right) } \\
\end{array}} (v(t)-\dot{\bar{x}}(t)) \cdot \eta < -\beta, \quad {\rm a.e. } \ s,t \in (\tau-\varepsilon, \tau].
\]
\item\label{item: control existence ipc2 chapter CV} If $x_0=\bar{x}(S) \in \partial A$, then
\[
\mathop {\sup }\limits_{\begin{array}{*{20}c}
{\eta \in {\rm co} \left(N_A(x) \cap \partial \mathbb{B}\right) } \\
{x \in (x_0 + \rho {\mathbb{B}}) \cap \partial A.} \\
\end{array}}
(v(t)- \dot{\bar{x}}(t))\cdot \eta < - \beta, \quad {\rm a.e. } \ t \in [S,S+\varepsilon). \]
\end{enumerate}
\end{lemma}
For the proof of Lemma \ref{Lemma3}, we shall invoke the following technical lemma:
\begin{lemma} \label{Lemma2} Fix $R>0$. Assume that ${\text{ int }}\ T_A(z) \neq \emptyset$ for all $z \in \partial A \cap (R+1) \mathbb{B}$. Then we can find positive numbers $ \beta, ~\epsilon_{0}, ~\epsilon_{1},\ldots, \epsilon_{k}$, points $z_{1}, \ldots, z_{k} \in \partial{A} \cap (R+1)\mathbb{B}$, and vectors $\zeta_{j} \in {\rm int }\ T_{A}(z_{j}), \text{ for } j=1,\ldots, k,$ \text{such that}
\begin{enumerate}[label = (\roman{*}), ref= \roman{*}]
\item \label{item: i lemma appendix chap 5} $ \mathop \bigcup \limits_{j = 1}^k (z_j+ \frac{\epsilon_j}{2} {\rm int }\ \mathbb{B}) \supset (\partial A + \epsilon_{0} \mathbb{B}) \cap (R+1)\mathbb{B}$ ,
\item \label{item: ii lemma appendix chap 5} \[ \mathop {\sup }\limits_{\begin{array}{*{20}c}
{\eta \in { \rm co }(N_A(z) \cap \partial \mathbb{B})} \\
{z \in (z_j + \epsilon_{j} {\mathbb{B}}) \cap \partial A.} \\
\end{array}}
\zeta_j \cdot \eta < -\beta \ , \quad \text{for all } j=1, \ldots, k \ . \] \end{enumerate}
\end{lemma}
\begin{proof} [\textbf{Proof of Lemma \ref{Lemma3}}]
Indeed, by defining the function $v(.): [S,T] \rightarrow \mathbb{R}^{n}$ as follows
\[ v(t) := \dot{\bar{x}}(t) + \sum \limits_{j=1}^{k} \chi_{z_{j}+ \frac{\epsilon_{j}}{2} {\rm int }\ \mathbb{B}} (\bar{x}(t)) \; \zeta_{j}, \quad \quad t \in [S,T] \]
where $z_j$, $\epsilon_j$ and $\zeta_j$ are respectively the points, positive numbers and vectors of Lemma \ref{Lemma2}. $ \chi_{Y}$ denotes the characteristic function of the subset $ Y \subset \mathbb{R}^{n}$, and owing to the assertions of Lemma \ref{Lemma2}, we can prove that the constructed $v(.)$ verifies the statement of Lemma \ref{Lemma3}. (We refer the reader to \cite{khalil:tel-01740334} for full details of the proof.)
\end{proof}
\subsection{Proof of Theorem \ref{Thm1}}
We first employ a standard argument, the so-called `state augmentation', which allows us to write the calculus of variations problem (\ref{problem: calculus of variations}) as an optimal control problem of type (\ref{problem: ocp with state constraint chap 5}). Indeed, it is enough to add an extra absolutely continuous state variable \[ \index{State Augmentation}z(t)= \int_{S}^{t} L(s,x(s), \dot{x}(s))ds\]
and consider the dynamics $\dot{x} =u$. We notice that $z(S)=0$ and $U(t)= \mathbb{R}^n$.
\vskip2ex
\noindent
Then, the problem (\ref{problem: calculus of variations}) can be written as the optimal control problem (\ref{ocp by state augmentation chap 5}):
\begin{equation}\begin{cases}
\begin{aligned} \label{ocp by state augmentation chap 5}
& {\text{minimize}}
&& z(T) \\
&&& \hspace{-1.9cm} \text{over } W^{1,1}-\text{ arcs } (x(.),z(.)) \text{ and measurable functions } u(.) \in \mathbb{R}^n \text{ satisfying} \\
&&& (\dot{x}(t), \dot z(t)) = (u(t),L(t,x(t),u(t))) \quad \textrm{ a.e. } t \in [S,T] \\
&&& (x(S), z(S)) = (x_{0},0) \\
&&& x(t) \in A \quad \text{ for all } t \in [S,T] \\
&&& u(t) \in U(t)=\mathbb{R}^n \quad \textrm{a.e. } t \in [S,T] \ . \end{aligned}\end{cases}\tag{P$'$}\tagsleft@true\let\veqno\@@leqno
\end{equation}
We set $w(t):=
\begin{pmatrix}
x(t) \\
z(t)
\end{pmatrix}$ and $\tilde{f}{(t,w(t), u(t))} :=
\begin{pmatrix}
u(t) \\ L(t,x(t),u(t))
\end{pmatrix}.$
Here the set of controls $\mathcal{U}$ is the set of dynamics $\dot{x}(t) \in \mathbb{R}^{n}$ for a.e. $t \in [S,T]$.
It is easy to prove that if $\bar x$ is a $W^{1,1}-$local minimizer for the reference calculus of variations problem (\ref{problem: calculus of variations}), then $\left(\bar{w} (t)= \begin{pmatrix}
\bar{x}(t) \\
\bar{z}(t)
\end{pmatrix}, \bar{u} \right)$ is a $W^{1,1}-$local minimizer for (\ref{ocp by state augmentation chap 5}) where $\dot{\bar{z}}(t)= L(t,\bar{x}(t), \bar{u}(t))$ and $\bar{u}= \dot{\bar{x}}$.
\vskip2ex
\noindent
The proof of Theorem \ref{Thm1} is given in three steps. The first step is devoted to showing that the constraint qualifications (\ref{item: CQ1 normality ocp chap 5})-(\ref{item: CQ3 normality ocp chap 5}) of Theorem \ref{Thm3} are mainly implied by (\ref{CQ for calculus of variations}) (of Theorem \ref{Thm1}). In Step 2, we verify that hypotheses (\ref{item: H1 ocp chap5})-(\ref{item: H3 ocp chap5}) of Theorem \ref{Thm3} can be deduced from hypothesis (\ref{item: CV1}) of Theorem \ref{Thm1}. In Step 3, we apply Theorem \ref{Thm3} to (\ref{ocp by state augmentation chap 5}) in order to obtain the assertions of Theorem \ref{Thm1}.
\vskip2ex
\noindent
{\bf Step 1.} Prove that the constraint qualifications (\ref{item: CQ1 normality ocp chap 5})-(\ref{item: CQ3 normality ocp chap 5}) of Theorem \ref{Thm3} are mainly implied by (\ref{CQ for calculus of variations}) (of Theorem \ref{Thm1}).
\vskip1ex
\noindent
The constraint qualifications (\ref{item: CQ1 normality ocp chap 5}) and (\ref{item: CQ2 normality ocp chap 5}) for the optimal control problem (\ref{ocp by state augmentation chap 5}) read as follows: there exist positive constants $K, \tilde \varepsilon, \tilde \beta, \tilde \rho$ and a control $\hat{u} \in \mathcal{U}$ such that
\begin{enumerate}[label= (CQ\arabic*)$'$ , ref= (CQ\arabic*)$'$]
\item \label{CQ1'}
\begin{eqnarray} \label{8}
| \tilde{f}{(t,(\bar{x}(t),\bar{z}(t)),\bar{u}(t))}-\tilde{f}{(t,(\bar{x}(t),\bar{z}(t)),\hat{u}(t))} | \leq K, \; \text{a.e. } t \in (\tau- \tilde \varepsilon, \tau] \cap [S,T]
\end{eqnarray}
and
\begin{equation} \label{9}
\tilde{\eta} \cdot [\tilde{f}{(t,(\bar{x}(t),\bar{z}(t)), \hat{u}(t))}-\tilde{f}{(t,(\bar{x}(t),\bar{z}(t)),\bar{u}(t))}] < -\tilde \beta,
\end{equation}
for all $\tilde{\eta} \in \partial^{>}_{w} d_{A}(\bar{x}(s)) \text{ a.e.}\ s, \; t \in (\tau - \tilde \varepsilon, \tau] \cap [S,T] \text{ and for all } \tau \in \lbrace \sigma \in [S,T]\; : \; \bar{x}(\sigma) \in \partial A \rbrace$. Here $
\partial_w^> d_A(\bar{x}(s)) :={\rm co\,} \{ (a,b): \text{there exists } \; s_i \rightarrow s \text{ such that } d_A(\bar{x}(s_i))>0 \, \, \text{for all } i \ , d_A(\bar{x}(s_i)) \rightarrow d_A(\bar{x}(s))\text{ and } \nabla_{w}d_A (\bar{x}(s_i)) \rightarrow(a,b) \}.
$
\item \label{CQ2'} If $ x_0 \in \partial A$ then for a.e. $ t \in [S, S+ \tilde \varepsilon)$
\begin{equation} \label{5}
| \tilde{f}{(t,(x_0,0),\hat{u}(t))} | \leq K \quad \text{ and } \quad
| \tilde{f}{(t,(x_0,0),\bar{u}(t))} | \leq K,
\end{equation}
and \begin{equation} \label{7} \tilde{\eta} \cdot [\tilde{f}{(t,(x_0,0), \hat{u}(t))}-\tilde{f}{(t,(x_0,0),\bar{u}(t))}] < -\tilde \beta,
\end{equation} for all $\tilde{\eta} \in \partial^{>}_{w} d_{A} (x), \; x \in (x_0 + \tilde \rho \mathbb{B}) \cap \partial A$. Here,
$
\partial_w^> d_A(x)={\rm co\,} \{ (a,b):\text{there exists } \; x_i \xrightarrow{d_A} x \text{ such that } d_A(x_i)>0 \, \, \text{for all } i
\; \text{ and } \; \nabla_{w}d_A (x_i) \rightarrow(a,b) \}.
$
\end{enumerate}
\vskip2ex
\noindent
{\bf 1.} We start with the proof of condition (\ref{5}).
\noindent
By the Lipschitz continuity of the minimizer $\bar{x}$, we take $R:=| x_0 | + \| \dot{\bar{x}} \|_{L^{\infty}} (1+T-S) >0$. Then $\bar{x}([S,T])\subset R \mathbb{B}$ and $| \dot{\bar{x}}(t) | \leq R$ for a.e. $t \in [S,T].$ Suppose that $x_0 \in \partial A$. Since the function $L(.,.,.)$ is bounded on bounded sets (owing to (\ref{item: CV1})), we obtain
\[
| \tilde{f}{(t,(x_0,0), \bar{u}(t))} | = \left| \begin{pmatrix}
\dot{\bar{x}}(t) \\
L(t,x_0,\dot{\bar{x}}(t))
\end{pmatrix} \right| \leq K \quad \text{ a.e. } t \in [S,T], \ \text{ for some } K > 0.
\]
Moreover, under (\ref{CQ for calculus of variations}), Lemma \ref{Lemma3} ensures the existence of a measurable function $v(.)$ and positive constants $C$ and $C_1$ such that $\| v \|_ {L^{\infty}} \leq C_1$ and $\| v - \dot{\bar{x}} \|_ {L^{\infty}} \leq C.$ Therefore, by choosing a control $\hat u$ such that $\hat u(t)= v(t)$ a.e. $t$, we deduce easily that
\[ | \tilde{f}(t,(x_0,0),\hat u(t)) | \le K \quad \text{a.e. } t \in [S,T] \quad \text{for some } K>0 . \] Condition (\ref{5}) is therefore satisfied.
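For concreteness, in the last display one may take, for instance,
\[
K \ := \ C_1 \ + \ \sup \big\{ \, |L(t,x_0,u)| \ : \ t \in [S,T], \ |u| \leq C_1 \, \big\} \ < \ \infty,
\]
since $|\tilde{f}(t,(x_0,0),\hat u(t))| \leq |\hat u(t)| + |L(t,x_0,\hat u(t))|$, $\| \hat u \|_{L^{\infty}} = \| v \|_{L^{\infty}} \leq C_1$, and $L(.,.,.)$ is bounded on bounded sets by (\ref{item: CV1}).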
\vskip2ex
\noindent
{\bf 2.} Condition (\ref{8}) also follows (for $K$ big enough) from the particular choice of $\hat u (t)$ to be the bounded control $v(.)$ of Lemma \ref{Lemma3}, and from (\ref{item: CV1}).
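More explicitly, since $\| v - \dot{\bar{x}} \|_{L^{\infty}} \leq C$, $|\dot{\bar{x}}(t)| \leq R$ and $|v(t)| \leq C_1$ a.e., one admissible choice is, for instance,
\[
K \ := \ C \ + \ 2\, \sup \big\{ \, |L(t,x,u)| \ : \ t \in [S,T], \ |x| \leq R, \ |u| \leq \max(R,C_1) \, \big\},
\]
using $|\tilde{f}(t,(\bar{x}(t),\bar{z}(t)),\bar{u}(t))-\tilde{f}(t,(\bar{x}(t),\bar{z}(t)),\hat{u}(t))| \leq |\dot{\bar{x}}(t)-v(t)| + |L(t,\bar{x}(t),\dot{\bar{x}}(t))| + |L(t,\bar{x}(t),v(t))|$ and the boundedness of $L(.,.,.)$ on bounded sets.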
\vskip2ex
\noindent
{\bf 3.} To prove condition (\ref{7}), we consider the same choice of the control function $\hat{u}(.)$, that is $\hat u(t):=v(t)$ for $t \in [S,S+\varepsilon]$, where $\varepsilon$ is the positive number and $v(.)$ the measurable function satisfying the properties of Lemma \ref{Lemma3}. Condition (iii) of Lemma \ref{Lemma3} implies that there exist positive constants $ \rho, \beta $ such that
\begin{equation}\label{equation: choice of control proof main theorem}
\eta \cdot(\hat{u}(t)-\dot{\bar x}(t)) < -\beta \quad \text{for a.e. } t \in [S,S+\varepsilon)
\end{equation}
for all $\eta \in {\rm co }(N_A(x) \cap \partial \mathbb{B})$ and for all $x \in (x_0 + \rho \mathbb{B}) \cap \partial A$. In particular, (\ref{equation: choice of control proof main theorem}) holds for all $\eta \in \partial^>_x d_A(x)$ (owing to Proposition \ref{Prop2}(i)); we choose $\tilde \varepsilon \in (0, \varepsilon]$, $\tilde \rho \in (0, \rho]$ and $\tilde \beta = \beta$. Take now any $\tilde \eta \in \partial_w^> d_A(x)$ where $x \in (x_0 + \tilde \rho \mathbb{B}) \cap \partial A$. By definition of $\partial^{>}_{w} d_{A}(x)$
\[
\begin{array}{l}
\tilde{\eta} \in \Big(\mathrm{co\,}\{a: \text{there exists } \; x_i \xrightarrow{d_A} x \text{ such that }
\, d_A(x_i)>0 \, \,
\text{for all } i,\, \text{ and } \nabla_x d_A(x_i)\rightarrow a\},0\Big).
\end{array}
\] This is because $\nabla_z d_A(x_i)=0$. We conclude that $\tilde{\eta}$ can be written as:
\begin{equation*}
\tilde{\eta} :=(\eta,0) \quad \text{ where }\eta\in \partial^> d_A(x) \ .
\end{equation*}
It follows that, owing to (\ref{equation: choice of control proof main theorem}):
\[
(\eta,0) \cdot [\tilde{f}{(t,(x_0,0), \hat{u}(t))}-\tilde{f}{(t,(x_0,0),\bar{u}(t))}] = \eta \cdot (\hat{u}(t)-\bar{u}(t)) < - \tilde \beta,
\] for all $\eta \in \partial ^{>} d _{A}(x)$ a.e. $t \in [S,S+\tilde \varepsilon)$ for all $x \in (x_0+\tilde \rho \mathbb{B}) \cap \partial A$. Condition (\ref{7}) is therefore confirmed.
\vskip2ex
\noindent
{\bf 4.}
For the proof of (\ref{9}), we take any $\tau \in \lbrace \sigma \in [S,T]\; : \; \bar{x}(\sigma) \in \partial A \rbrace$. Consider again the positive constants $\varepsilon$ and $\beta$ and the selection $v(.)$
provided by Lemma \ref{Lemma3}. Property (\ref{item: control existence ipc1 chapter CV}) of the latter lemma implies that for a.e. $s, \ t \in (\tau-\varepsilon, \tau]$
\begin{equation} \label{choice of control 2 proof main theorem chap 5} \eta \cdot (v(t) - \dot{\bar x} (t)) < -\beta \end{equation} for all $\eta \in {\rm co}(N_{A}(\bar x (s)) \cap \partial \mathbb{B})$. In particular, (\ref{choice of control 2 proof main theorem chap 5}) holds for all $\eta \in \partial^{>}_{x} d_{A}(\bar x (s))$ (owing to Proposition \ref{Prop2}(i)); we take $\tilde \varepsilon \in (0, \varepsilon]$ and $\tilde \beta = \beta$, and consider the control function $\hat u (t) := v(t)$ for $t\in (\tau - \varepsilon, \tau]$ (in particular for $t\in (\tau-\tilde{\varepsilon}, \tau]$). Now we take an element $\tilde \eta$ in $\partial^>_{w} d_A(\bar{x}(s))$ for $ s \in (\tau -\tilde \varepsilon, \tau] \cap [S,T]$.
\noindent
It follows that,
\[
\begin{array}{l}
\tilde{\eta} \in \Big(\mathrm{co\,}\{a: \text{there exists } \, s_i \rightarrow s\text{ s.t. }
\, d_A(\bar{x}(s_i))>0 \, \,
\text{for all } i,\, d_A(\bar{x}(s_i)) \rightarrow d_A(\bar{x}(s))\text{ and } \\ \hspace{3 in}\nabla_x d_A(\bar{x}(s_i))\rightarrow a \},0 \Big)
\end{array}
\]
which is equivalent to writing $\tilde{\eta}$ as
\begin{equation*}
\tilde{\eta} :=(\eta,0) \quad \text{where }\eta\in \partial^> d_A(\bar{x}(s))
\end{equation*}
and $ s \in (\tau - \tilde \varepsilon, \tau] \cap [S,T]$. Making use of (\ref{choice of control 2 proof main theorem chap 5}), it follows that
\[
(\eta,0) \cdot [\tilde{f}{(t,(\bar{x}(t),\bar{z}(t)), \hat{u}(t))}-\tilde{f}{(t,(\bar{x}(t),\bar{z}(t)),\bar{u}(t))}]= \eta \cdot (\hat{u}(t)-\bar{u}(t)) < - \tilde \beta,
\]
for all $\eta \in \partial^>d_A(\bar{x}(s))$ for a.e. $s, t \in (\tau - \tilde \varepsilon,\tau] \cap [S,T]$ and for all $\tau \in \lbrace \sigma \in [S,T] \; : \; \bar{x}(\sigma) \in \partial A \rbrace.$ We conclude that condition (\ref{9}) is satisfied.
\vskip2ex
\noindent
{\bf 5.} Finally, it is clear that (\ref{CQ for calculus of variations}) implies that (\ref{item: CQ3 normality ocp chap 5}) is satisfied.
\vskip4ex
\noindent
\textbf{Step 2.} It is an easy task to prove that hypotheses (\ref{item: H1 ocp chap5})-(\ref{item: H3 ocp chap5}) adapted to the optimal control problem (\ref{ocp by state augmentation chap 5}) are satisfied. This is a direct consequence of hypothesis (\ref{item: CV1}).
\vskip4ex
\noindent
\textbf{Step 3.} Apply Theorem \ref{Thm3} to (\ref{ocp by state augmentation chap 5}) and then obtain the assertions of Theorem \ref{Thm1}.
\vskip1ex
\noindent
Since $\bar x$ is a $W^{1,1}-$local minimizer for the reference calculus of variations problem (\ref{problem: calculus of variations}), $((\bar x, \bar z),\dot{\bar x}=\bar u)$ is a $W^{1,1}-$local minimizer for the optimal control problem (\ref{ocp by state augmentation chap 5}). Theorem \ref{Thm3} can therefore be applied to the problem (\ref{ocp by state augmentation chap 5}). Namely, there exist a pair of absolutely continuous functions $(p_1,p_2) \in W^{1,1}([S,T],\mathbb{R}^n) \times W^{1,1}([S,T],\mathbb{R})$, a Borel measure $\mu(.)$ and a $\mu-$integrable function $\gamma(.)$, such that:
\begin{enumerate}[label = (\roman{*})$'$, ref= (\roman{*})$'$]
\item \label{item: adjoint arc proof main thm chap 5} $-(\dot{p_1}(t), \dot{p_2}(t)) \in \text{ co } \partial_{(x,z)} ((q_1(t),q_2(t)) \cdot \tilde{f}(t,(\bar{x}(t),\bar{z}(t)),\bar{u}(t))) \text{ a.e. } t \in [S,T],$
\item \label{item: transversality condition proof main thm chap 5}$-(q_1(T), q_2(T)) \in \partial_{(x,z)} \bar{z}(T),$
\item \label{item: weirsterass condition proof main thm chap 5}$(q_1(t), q_2(t)) \cdot \tilde{f}(t,(\bar{x}(t),\bar{z}(t)), \bar u(t))= \max_{u \in U (t)} (q_1(t), q_2(t)) \cdot \tilde{f}(t,(\bar{x}(t),\bar{z}(t)), u) ,$
\item \label{item: element in the hybrid subdifferential proof main thm chap 5}$\gamma(t) \in \partial^> d_A(\bar{x}(t))$ \quad and \quad $\textrm{supp}(\mu) \subset \lbrace t\in [S,T]\; : \; \bar{x}(t) \in \partial A \rbrace,$
\end{enumerate}
where
\begin{equation*}
q_1(t)= \begin{cases}
p_1(S) & \quad t =S \\
p_1(t)+ \int_{[S,t]} \gamma(s) d\mu(s), & \quad t\in (S,T] \end{cases}
\end{equation*}
and
\[
q_2(t)= p_2(t), \quad \text{for } t \in [S,T].
\]
The transversality condition \ref{item: transversality condition proof main thm chap 5} ensures that $q_1(T)=0$. This implies that condition (\ref{2_chap5}) of Theorem \ref{Thm1} is satisfied. Furthermore, $q_2(T)=-1 =p_2(T)$.
\noindent
The adjoint system \ref{item: adjoint arc proof main thm chap 5} ensures, by expanding it and applying a well-known nonsmooth calculus rule, that:
\[ -(\dot{p_1}(t), \dot{p_2}(t)) \in \textrm{co } \partial_{(x,z)} \left(q_1(t) \cdot \bar{u}(t) + q_2(t) L(t,\bar{x}(t), \bar{u}(t)) \right).\]
By the Lipschitz continuity of $L(t,.,u)$ on a neighborhood of $\bar{x}(t)$, we obtain
\[
-(\dot{p}_1(t), \dot{p_2}(t)) \in q_2(t) \text{ co } \partial_{x} L(t,\bar{x}(t), \bar{u}(t)) \times \lbrace 0 \rbrace.
\]
Therefore, $\dot{p_2}(t)=0$ and $ - \dot{p_1}(t) \in q_2(t) \textrm{co } \partial_{x} L(t,\bar{x}(t), \bar{u}(t)).$
We deduce that $ p_2(t) = q_2(t)=-1 $ and $\dot{p_1}(t) \in {\rm co }\ \partial_{x} L(t,\bar{x}(t), \bar{u}(t))$ a.e. $t$.
\ \\
\noindent
The maximization condition \ref{item: weirsterass condition proof main thm chap 5} is equivalent to:
\[
q_1(t) \cdot \bar{u}(t) - L(t,\bar{x}(t), \bar{u}(t)) = \mathop {\max }\limits_{u \in \mathbb{R}^n} \lbrace q_1(t) \cdot u - L(t,\bar{x}(t), u) \rbrace.
\]
Since the maximum above is attained at $u=\dot{\bar x}(t)$, we obtain
\[
0 \in \textrm{co } \partial_{\dot{x}} \lbrace q_1(t) \cdot \bar{u}(t) - L(t,\bar{x}(t), \bar{u}(t)) \rbrace.
\]
Making use of the lower semicontinuity property of $v \mapsto L(t,\bar x(t),v)$ (cf. (\ref{item: CV2})) and using the sum rule \cite[page 45]{vinter_optimal_2010}, we have
\[
0 \in \text{ co }\lbrace \partial_{\dot{x}}(q_1(t) \cdot \dot{\bar{x}}(t)) + \partial_{\dot{x}} (-L(t,\bar{x}(t), \bar{u}(t))) \rbrace \quad {\rm a.e.}
\]
Moreover, since $ \text{co }\partial(-f) = - \text{co }\partial(f)$, we deduce that
\[ 0 \in q_1(t) - \textrm{co } \partial_{\dot{x}} L(t,\bar{x}(t),\bar{u}(t)).\] Equivalently, \[ q_1(t) \in \textrm{co } \partial_{\dot{x}} L(t,\bar{x}(t), \bar{u}(t)) \quad \text{a.e. } t. \] Condition (\ref{1_chap5}) of Theorem \ref{Thm1} is therefore satisfied. Condition (\ref{0_chap5}) is straightforward owing to \ref{item: element in the hybrid subdifferential proof main thm chap 5}. Consequently, the proof of Theorem \ref{Thm1} is complete. \qed
\begin{remark} \label{remark2}
One may expect that the proof of Theorem \ref{Thm1} follows directly from results like \cite[Theorem 3.2]{fontes_normality_2015} for the particular case of free right-hand point constraint (possibly after expressing (\ref{problem: calculus of variations}) as an optimal control problem and by subsequently considering the augmented problem to cover the case of $W^{1,1}-$local minimizers). However, \cite[Theorem 3.2]{fontes_normality_2015} cannot be applied in our paper. Indeed, by inspecting \cite{fontes_normality_2015}, we notice that in order to apply the normality result \cite[Theorem 3.2]{fontes_normality_2015}, some regularity assumptions on the data should be satisfied, coupled with constraint qualifications (obviously weaker than the ones proposed in our paper (\ref{item: CQ1 normality ocp chap 5})-(\ref{item: CQ3 normality ocp chap 5}), and those discussed in the series of papers \cite{fontes_normal_2013}, \cite{lopes_constraint_2011}, \cite{ferreira_nondegenerate_1999}). In particular, \cite{fontes_normality_2015} assumes the following strong boundedness condition on the dynamics (referred to as (H3) in \cite{fontes_normality_2015}): for a given $\delta>0$ and a local minimizer $\bar x(.)$,
\[ \text{there exists } C_u \ge 0 \text{ such that } |f(t, x, u)| \le C_u \text{ for } x \in \bar x(t) + \delta \mathbb{B} , \ u \in U (t),
\text{ and } t \in [S,T] \ . \]
It is clear that this regularity assumption is stronger than the one considered in our paper (cf. inequalities (\ref{boundedness boundary point}) and (\ref{boundedness initial point}) above), and it cannot be satisfied for our problem (\ref{ocp by state augmentation chap 5}). Indeed, the dynamics $\tilde{f}$ in problem (\ref{ocp by state augmentation chap 5}) is the following
\[\tilde{f}{(t,(x(t),z(t)), u(t))} :=
\begin{pmatrix}
u(t) \\ L(t,x(t),u(t))
\end{pmatrix} \qquad \text{where } u(t) \in \mathbb{R}^n \ . \]
The dynamics $\tilde{f}$ satisfies assumption (H3) of \cite{fontes_normality_2015} only when $u(.)$ and $x(.)$ are bounded (owing to the boundedness of $L$ on bounded sets by (\ref{item: CV1})). By contrast, nothing guarantees, under the assumptions considered in our paper, that $\tilde{f}$ satisfies (H3) of \cite{fontes_normality_2015} for any $x$ near $\bar x(.)$ and for any $u \in U(t)$, where $U(t)$ here has to coincide with the whole space $\mathbb{R}^n$ in order to derive the correct necessary conditions (in the normal form) for the class of problems considered in Theorem \ref{Thm1}.
\end{remark}
\section{Proof of Theorem \ref{Thm2} (Global Minimizers)}
In this section we give details of a shorter proof based on a simple technique using the neighboring feasible trajectory result with linear estimate (initially introduced in \cite{rampazzo_theorem_1999}), while regarding the calculus of variations problem as an optimal control problem with final cost. The result we aim to prove is valid for global minimizers and under the stronger constraint qualification
\begin{enumerate}[label=($\widetilde{CQ}$), ref= $\widetilde{CQ}$]
\item \[\text{int }\ T_A(z) \neq \emptyset \ , \quad \text{for all } z \in \partial A.\]
\end{enumerate}
We consider $\bar x(.)$ to be a global minimizer for the calculus of variations problem (\ref{problem: calculus of variations}).
We now employ the same standard argument (known as {\it state augmentation}) previously used in the proof of Theorem \ref{Thm1}. This allows us to write the problem of calculus of variations (\ref{problem: calculus of variations}) as an optimal control problem. Indeed, by adding an extra absolutely continuous state variable \[ z(t)= \int_{S}^{t} L(s,x(s), \dot{x}(s))ds\]
and by considering the dynamics $\dot{x} =u$, the problem (\ref{problem: calculus of variations}) can be written as the optimal control problem (\ref{problem: ocp second technique}):
\begin{equation}\begin{cases}
\begin{aligned} \label{problem: ocp second technique}
& {\text{minimize}}
&& z(T) \\
&&& \hspace{-1.9cm} \text{over } W^{1,1} \text{ arcs } (x(.),z(.)) \text{ satisfying} \\
&&& (\dot{x}(t), \dot z(t)) \in F(t,x(t)) \quad \textrm{ a.e. } t \in [S,T] \\
&&& (x(S), z(S)) = (x_{0},0) \\
&&& (x(t), z(t)) \in A \times \mathbb{R} \quad \text{ for all } t \in [S,T] \ . \end{aligned}\end{cases}\tag{$\widetilde{P}$}\tagsleft@true\let\veqno\@@leqno
\end{equation}
where
\begin{equation} \label{velocity set chap 5 second technique} F(t,x) \ := \ \{ (u,L(t,x,u)) \ : \ u \in M \mathbb{B} \} \end{equation}
for any $M>0$ large enough (for instance $M \ge 2 \| {\bar u(.)} \|_{L^\infty}$).
\noindent
It is easy to prove that if $\bar x(.)$ is a global minimizer for (\ref{problem: calculus of variations}), then $(\bar{x}(.), \bar{z}(.))$ is a global minimizer for (\ref{problem: ocp second technique}) where $\bar z(t)= \int_{S}^{t} L(s,\bar x(s), {\dot{\bar x}}(s))ds$.
\vskip2ex
\noindent
The proof of Theorem \ref{Thm2} is given in three steps. In Step 1, we show that the neighboring feasible trajectory theorem in \cite[Theorem 2.3]{bettiol_l$infty$_2012} holds true for our velocity set $F(.,.)$ and our state constraint set $A\times\mathbb{R}$. In Step 2, we combine a penalization method with the $L^\infty-$linear estimate provided by \cite[Theorem 2.3]{bettiol_l$infty$_2012}, which permits us to derive a minimizer for a problem involving a penalty term (in terms of the state constraint) in the cost. Step 3 is devoted to applying a standard Maximum Principle to an auxiliary problem and deducing the assertions of Theorem \ref{Thm2} in their normal form.
\vskip2ex
\noindent
{\bf Step 1. } In this step, we prove that $(F(.,.),A\times \mathbb{R})$ satisfies the hypotheses of \cite[Theorem 2.3]{bettiol_l$infty$_2012}. Indeed, the velocity set is nonempty and closed-valued owing to the Lipschitz continuity of $u \to L(t,x,u)$ (see \cite[Proposition 2.2.6]{clarke_optimization_1990}), and $F(.,x)$ is Lebesgue measurable for all $x\in \mathbb{R}^n$. Moreover, the set $F(t,x)$ is bounded on bounded sets, owing to the boundedness of $L(.,.,.)$ on bounded sets, and to the boundedness of $\bar u(t)$ a.e. $t\in[S,T]$ (we recall that the minimizer $\bar x(.)$ is assumed to be Lipschitz). The Lipschitz continuity of $F(t,.)$ and the absolute continuity from the left of $F(.,x)$ are a direct consequence of assumption \ref{item: CV1'}. Finally, we observe that
\[ F(t,x) \cap \text{int }T_{A\times \mathbb{R}}(x,z) \ = \ \big\{ (u, L(t,x,u)) \ : \ u \in M \mathbb{B} \cap \text{int }T_A(x) \big\} . \]
The constraint qualification (\ref{CQ for calculus of variations_second_technique}) that we suggest, namely
\[\text{int }\ T_A(x) \neq \emptyset \quad \text{for all } x \in \partial A \ , \]
and the boundedness of $\|\bar u \|_{L^\infty} = \| \dot{\bar x} \|_{L^\infty}$ guarantee that for $M>0$ chosen large enough
\[ F(t,x) \cap \text{int }T_{A\times \mathbb{R}}(x,z) \ \neq \ \emptyset . \]
Therefore, \cite[Theorem 2.3]{bettiol_l$infty$_2012} is applicable.
\vskip4ex
\noindent
{\bf Step 2.} In this step, we combine a penalization technique with the linear estimate given by \cite[Theorem 2.3]{bettiol_l$infty$_2012}. We obtain that $(\bar x, \bar z)$ is also a global minimizer for a new optimal control problem, in which an extra penalty term intervenes in the cost:
\begin{lemma}\label{lemma: minimizer for a problem with penalty term in the cost, second technique chap 5} Assume that all hypotheses of Theorem \ref{Thm2} are satisfied. Then, $(\bar x, \bar z)$ is a global minimizer for the problem:
\begin{equation}\begin{cases}\label{problem: penalized ocp chap 5}\begin{aligned}
& {\text{minimize}}
&& z(T) + K \max\limits_{t\in[S,T]} d_{A \times \mathbb{R}}(x(t),z(t)) =: J(x(.),z(.)) \\
&&& \hspace{-1.9cm} \text{over arcs }\ (x,z) \in W^{1,1}([S,T], \mathbb{R}^{n}\times \mathbb{R}) \text{ satisfying} \\
&&& (\dot x(t), \dot z(t)) \in F(t,x(t)) \quad \text{a.e. }\ t\in[S,T] \\
&&& (x(S),z(S)) = (x_0,0) , \quad x(S) \in A \ . \end{aligned}\end{cases} \tag{$\widetilde{P1}$}\tagsleft@true\let\veqno\@@leqno
\end{equation} Here, $ K$ is the constant provided by \cite[Theorem 2.3]{bettiol_l$infty$_2012}.
\end{lemma}
\begin{proof}
Suppose that there exists a global minimizer $(\hat{x}(.), \hat{z}(.))$ for (\ref{problem: penalized ocp chap 5}) such that
\[ J(\hat{x}(.), \hat{z}(.)) < J(\bar{x}(.), \bar{z}(.)) . \]
Denote by $\hat \varepsilon := \max\limits_{t\in[S,T]} d_{A\times \mathbb{R}}(\hat{x}(t),\hat{z}(t))$ the extent to which the trajectory $(\hat{x}(.),\hat{z}(.))$ violates the state constraint $A\times \mathbb{R}$. By the neighboring feasible trajectory result (with $L^\infty-$estimates) \cite[Theorem 2.3]{bettiol_l$infty$_2012}, there exist an $F-$trajectory $(x(.),z(.))$ and a constant $ K>0$ such that
\[
\begin{cases}
(x(S),z(S))=(x_0,0) \\
(x(t),z(t)) \in A \times \mathbb{R} \quad \text{for all } t \in [S,T] \\ \| (x(.),z(.)) - (\hat x(.), \hat z(.)) \|_{L^\infty(S,T)} \le K\hat \varepsilon.
\end{cases}\]
In particular
\[ |z(T) - \hat{z}(T)| \le K \hat{\varepsilon}. \] Therefore,
\begin{align*}
z(T) \le \hat{z}(T) + K \hat{\varepsilon} = J(\hat x(.),\hat{z}(.)) < J(\bar x(.),\bar{z}(.)) = \bar z(T).
\end{align*}
But this contradicts the minimality of $(\bar x,\bar z)$ for (\ref{problem: ocp second technique}). The proof is therefore complete.
\end{proof}
A consequence of Lemma \ref{lemma: minimizer for a problem with penalty term in the cost, second technique chap 5} is the following:
\begin{lemma}
$\bar X(.) := (\bar{x}(.),\bar{z}(.),\bar w(.) \equiv 0)$, where $\bar{z}(t)=\int_{S}^{t}L(s,\bar{x}(s),\dot{\bar x}(s))\ ds$, is a global minimizer for
\begin{equation*}\begin{cases}\begin{aligned}
& {\text{minimize}}
&& {g}(X(T)) \\
&&& \hspace{-1.9cm} \text{over }\ X(.)=(x(.),z(.),w(.)) \in W^{1,1}([S,T], \mathbb{R}^{n+2}) \text{ satisfying} \\
&&& \dot{X}(t) = (\dot x(t), \dot z(t), \dot{w}(t)) \in G(t,X(t)) \quad \text{a.e. } t\in[S,T] \\
&&& {h}(X(t)) \le 0 \quad \text{for all } t\in [S,T] \\
&&& (x(S),z(S),w(S)) \in \{x_0\} \times \{0\} \times \mathbb{R}^+ , \quad x(S) \in A
\ . \end{aligned}\end{cases}
\end{equation*}
The cost function ${g}$ is defined by
\[ {g}(X(T)):= z(T) + {K} w(T) . \]
The multivalued function $G$ is defined, for $X=(x,z,w)$, by
\[ G(t,X) := \{ (u,L(t,x,u),0) \ : \ u \in M \mathbb{B} \} \]
and the function ${h}: \mathbb{R}^n \times \mathbb{R} \times \mathbb{R} \to \mathbb{R}$ (which provides the state constraint in terms of a functional inequality) is given by:
\[{h}(X)= {h}(x,z,w) := d_{A}(x) - w \]
\end{lemma}
\begin{proof}
By contradiction, suppose that there exists a state trajectory $X(.):=(x(.),z(.),w(.))$ satisfying the state and dynamic constraints of the problem, such that
\[ {g}(X(T)) < {g}(\bar X(T)). \] Observe that $w(.)\equiv w \ge 0$, and the state constraint condition is equivalent to
\[ \max\limits_{t\in[S,T]} d_{A} (x(t)) \le w . \]
Then, we would obtain
\begin{align*}
J(x(.),z(.)) & = z(T) + {K} \max\limits_{t\in[S,T]}d_{A}(x(t)) \\ & \le z(T) + K w \\ & = g(X(T)) < g(\bar{X}(T)) = J(\bar{x}(.),\bar z(.)).
\end{align*}
This contradicts the fact that $(\bar x(.),\bar z(.))$ is a global minimizer for (\ref{problem: penalized ocp chap 5}).
\end{proof}
\vskip4ex
\noindent
\textbf{Step 3.} In this step we apply known necessary optimality conditions: there exist costate arcs $P(.)=(p_1(.),p_2(.),p_3(.)) \in W^{1,1}([S,T],\mathbb{R}^{n+2})$ associated with the minimizer $(\bar x(.), \bar{z}(.), \bar{w}\equiv 0)$, a Lagrange multiplier $\lambda \ge 0$, a Borel measure $\mu(.): [S,T] \to \mathbb{R}$ and a $\mu-$integrable function $\gamma(.)=(\gamma_1(.),\gamma_2(.),\gamma_3(.))$ such that:
\begin{enumerate}[label=(\roman{*}), ref= \roman{*}]
\item $(\lambda, P(.),\mu(.)) \ne (0,0,0)$ ,
\item $-\dot{P}(t) \in \text{co }\partial_{(x,z,w)} \bigg( Q(t) \cdot (\dot{\bar x}(t), L(t,\bar{x}(t), \dot{\bar x}(t)), 0) \bigg)$ ,
\item $(P(S),-Q(T)) \in \lambda \partial g (\bar x(T), \bar z(T), \bar w \equiv 0) + (\mathbb{R}^n \times \mathbb{R} \times \mathbb{R}^-) \times (\{0\}\times \{0\}\times \{0\})$ ,
\item $Q(t) \cdot (\dot{\bar x}(t), L(t,\bar{x}(t), \dot{\bar x}(t)), 0) = \max\limits_{u\in M\mathbb{B}} Q(t) \cdot (u, L(t,\bar x(t),u), 0) $
\item \label{condition_v} ${\gamma}(t) \in \partial^>_{(x,z,w)} {h}(\bar{x}(t),\bar{z}(t), \bar{w}\equiv0)$ \; $\mu-$a.e. \quad and \quad supp$(\mu) \subset \{ t \ : \ {h}(\bar{x}(t),\bar{z}(t), \bar{w}\equiv0) =0 \}$.
\end{enumerate}
Here, $Q(.): [S,T] \to \mathbb{R}^{n+2}$ is the function
\[ Q(t) := P(t) + \int_{[S,t]}\gamma(s) \ d\mu(s) \quad \text{for } t\in (S,T] \ . \]
Note that by condition (\ref{condition_v}), $\gamma(t) = (\gamma_1(t),\gamma_2(t),\gamma_3(t)) \in \partial^>_x d_A(\bar x(t)) \times \{0\} \times \{-1\} $ \; $\mu-$a.e.\\
\noindent
From the conditions above, we derive the following:
\begin{enumerate}[label=(\roman{*})$'$, ref= (\roman{*})$'$]
\item \label{item: nco1 second technique}$(\lambda, p_1(.),p_2(.), p_3(.),\mu(.)) \ne (0,0,0,0,0)$ ,
\item $-\dot{p}_1(t) \in p_2(t)\text{co }\partial_{x} L(t,\bar{x}(t), \dot{\bar x}(t))$ , a.e. $t\in[S,T]$ \quad and \quad $\dot{p_2}=\dot{p_3} \equiv 0$ ,
\item \label{item: nco3 second technique} $-(p_1(T) + \int_{S}^{T}{\gamma}_1(s) \ d\mu(s)) = 0$, \quad $p_3(S)=p_3 \le 0$, \quad $-(p_3(T)-\int_{[S,T]} d\mu(s)) = \lambda K$ \quad and \quad $p_2(T)=p_2=-\lambda$ ,
\item \label{item: nco4 second technique} $(p_1(t) + \int_{S}^{t}{\gamma}_1(s) \ d\mu(s)) \cdot \dot{\bar x}(t) - \lambda L(t,\bar{x}(t),\dot{\bar{x}}(t)) = \max\limits_{u\in M\mathbb{B}} \big\{ (p_1(t) + \int_{S}^{t}{\gamma}_1(s) \ d\mu(s)) \cdot u - \lambda L(t,\bar{x}(t),u) \big\} $
\item \label{item: nco5 second technique}${\gamma}_1(t) \in \partial^>_{x}d_{A}(\bar{x}(t))$ \; $\mu-$a.e. \quad and \quad supp$(\mu) \subset \{ t \ : \ {h}(\bar{x}(t),\bar{z}(t), \bar{w}\equiv0) =0 \}$
\end{enumerate}
We prove that conditions \ref{item: nco1 second technique}-\ref{item: nco5 second technique} apply in the normal form (i.e. with $\lambda=1$). Indeed, suppose that $\lambda=0$; then
\[ -p_3 + \int_{[S,T]} d\mu(s) =0, \quad p_1=0, \quad p_2=0, \quad p_3=0, \]
and hence also $\mu = 0$, since $\mu$ is a nonnegative measure with $\int_{[S,T]} d\mu(s) = p_3 = 0$. But this contradicts the nontriviality condition \ref{item: nco1 second technique}. Therefore, the relations \ref{item: nco1 second technique}-\ref{item: nco5 second technique} apply in the normal form.
Moreover, notice that the convexity of $L(t,x,.)$ yields that the maximality condition \ref{item: nco4 second technique} is verified globally (i.e. for all $u\in \mathbb{R}^n$). Since the maximum is attained at $u=\dot{\bar{x}}(t)$, making use of the max rule (\cite[Theorem 5.5.2]{vinter_optimal_2010}) we deduce that
\[ p_1(t) + \int_{[S,t]}\gamma_1(s) \ d\mu(s) \in \text{co }\partial_{\dot x}L(t,\bar{x}(t), \dot{\bar{x}}(t)) \quad \text{a.e. } t\in [S,T]. \]
This establishes the necessary optimality conditions of Theorem \ref{Thm2} in the normal form. \qed
\vskip2ex{\bf Acknowledgments:} The authors are thankful to the reviewer of the first version of the paper who suggested to use the $L^\infty-$linear distance estimate approach. Moreover, helpful comments and suggestions by Piernicola Bettiol are gratefully acknowledged.
\end{document}
\begin{document}
\def\title #1{\begin{center} {\Large {\sc #1}} \end{center}} \def\author #1{\begin{center} {#1} \end{center}}
\setstretch{1.1}
\begin{titlepage}
\phantomsection \label{Titlepage}
\addcontentsline{toc}{section}{Title page}
\renewcommand{\thefootnote}{\fnsymbol{footnote}}\addtocounter{footnote}{1} \title{\sc Influence in Weighted Committees \\
\ }
\author{Sascha Kurz\\ {\small Dept.\ of Mathematics, University of Bayreuth, Germany\\[email protected]}}
\author{Alexander Mayer\\ {\small Dept.\ of Economics, University of Bayreuth, Germany\\[email protected] }}
\author{Stefan Napel\\ {\small Dept.\ of Economics, University of Bayreuth, Germany\\[email protected] }}
\begin{center} {\bf {\sc Abstract}} \end{center} {\small A committee's decisions on more than two alternatives much depend on the adopted voting method, and so does the distribution of power among the committee members. We investigate how different aggregation methods such as plurality runoff, Borda count, or Copeland rule map asymmetric numbers of seats, shares, voting weights, etc.\ to influence on outcomes when preferences vary. A generalization of the Penrose-Banzhaf power index is proposed and applied to the IMF Executive Board's election of a Managing Director, extending a~priori voting power analysis from binary simple voting games to choice in weighted committees. }
\begin{description} {\small \item[Keywords:] weighted voting $\cdot$ voting power $\cdot$ weighted committee games $\cdot$ plurality runoff $\cdot$ Borda rule~$\cdot$ Copeland rule $\cdot$ Schulze rule $\cdot$
IMF Executive Board $\cdot$ IMF Director
} \end{description}
\noindent {\footnotesize We are grateful to Hannu Nurmi for stimulating discussions on the topic. We also benefitted from feedback on seminar or workshop presentations in Bamberg, Bayreuth, Berlin, Bremen, Dagstuhl, Delmenhorst, Graz, Hagen, Hamburg, Hanover, Leipzig, Moscow, Munich and Turku.}
\end{titlepage}
\addtocounter{footnote}{-1}
\setstretch{1.2}
\pagenumbering{arabic}
\section{Introduction}\label{sec:Introduction}
The aggregation of individual preferences by some form of voting is common in politics, business, and everyday life. Members of a board, council, or committee are rarely aware how much their collective choices depend on the adopted aggregation rule. The importance of the method may be identified a~posteriori by comparing voting outcomes for a given preference configuration. For instance, suppose a hiring committee involved three groups with $n_1=6$, $n_2=5$, and $n_3=3$ members each and the following preferences over five candidates $\{a,b,c,d,e\}$: $a \succ_1 d \succ_1 e \succ_1 c \succ_1 b $, $b \succ_2 c \succ_2 d \succ_2 e \succ_2 a$, and $c \succ_3 e \succ_3 d \succ_3 b \succ_3 a$. If the groups voted sincerely (for informational, institutional, or other reasons) then candidate $a$ would have received the position under \emph{plurality rule} with 6 vs.\ 5 vs.\ 3 votes. However,
requiring a \emph{runoff} between the front-runners, given that none has majority support, would have made $b$ the winner with 8 to 6 votes. Candidate $c$ would have won every pairwise comparison and been the \emph{Condorcet winner}; \emph{Borda rule} would have singled out $d$; candidate $e$ could have been the winner if \emph{approval voting} had been used.
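To make one of these computations explicit: under Borda rule each group assigns $4, 3, \ldots, 0$ points down its ranking, weighted by its size, so that the five candidates obtain
\[
a\colon 6\cdot 4 = 24, \qquad b\colon 5\cdot 4 + 3\cdot 1 = 23, \qquad c\colon 6\cdot 1 + 5\cdot 3 + 3\cdot 4 = 33,
\]
\[
d\colon 6\cdot 3 + 5\cdot 2 + 3\cdot 2 = 34, \qquad e\colon 6\cdot 2 + 5\cdot 1 + 3\cdot 3 = 26
\]
points; $d$ indeed has the highest Borda score.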
With enough posterior information, each voter group can identify a `most-prefer\-red voting method' for the decision at hand: group~1 should have tried to impose plurality rule in order to have its way; group~2 should have argued for plurality runoff; and group~3 for pairwise comparisons. It is less obvious, though, how adoption of one aggregation method rather than another will affect a group's success and influence \emph{a~priori}, i.e., not yet knowing what will be the applicable preferences, or averaged across many decisions. More generally, can we verify if small groups are enjoying a greater say when committees fill a top position, elect an official, etc.\ by pairwise votes or by the plurality runoff method? Which rules from a given list of suggestions tend to maximize (or minimize) voting power of a particularly sized group in a committee? There is a huge literature on voting power but questions of this kind have to our knowledge not been addressed by it yet.\footnote{The most closely related analysis seems to be the investigation of effectivity functions; cf.\ \citeN{Peleg:1984}. See \citeN{Felsenthal/Machover:1998}, \citeN{Laruelle/Valenciano:2008}, or \citeN{Napel:2018} for overviews on the measurement of voting power.} We seek to change this by generalizing tools that were developed for analysis of {simple voting games} with binary options (`yes' or `no') to collective choice from $m\ge 3$ alternatives.
The most prominent tools for analyzing the former are the \emph{Penrose-Banzhaf index} (\citeNP{Penrose:1946}; \citeNP{Banzhaf:1965}) and the \emph{Shapley-Shubik index} \cite{Shapley/Shubik:1954}. They evaluate the sensitivity of the outcome to changes in a given voter's preferences -- operationalized as the likelihood of this voter being \emph{pivotal} or \emph{critical}: flipping its vote would swing the collective decision -- under specific probabilistic assumptions. This paper applies the same logic to {weighted committee games}. These, more simply referred to as \emph{weighted committees}, are
tuples $(N,A, r|\mathbf{w})$ that specify a set $N=\{1, \ldots, n\}$ of players, a set $A=\{a_1, \ldots, a_m\}$ of alternatives, and the combination $r|\mathbf{w}$ of an anonymous voting method $r$ (e.g., Borda rule, plurality rule, and so on) with a vector $\mathbf{w}$ of integers that represents group sizes, voting shares, etc.
As in analysis of (binary) simple voting games, high \emph{a priori voting power} of player~$i\in N$ in a committee goes with high sensitivity of the outcome to $i$'s preferences. It can be quantified as the probability for a change in $i$'s preferences causing a change of the collective choice. Although other assumptions could be made, we focus on the simplest case in which all profiles of strict preference orderings are equally likely a~priori. This corresponds to the Penrose-Banzhaf index for $m=2$. The respective power indications help to assess who gains and loses from institutional reforms or whether the distribution of influence in a committee satisfies some fairness criterion; they can also inform stakeholders, lobbyists, or other committee outsiders with an interest in who has how much say in the committee.
\section{Related Work} The distribution of power in binary weighted voting games has received wide attention ever since \citeN[Ch.~10]{vonNeumann/Morgenstern:1953} formalized them as a subclass of so-called \emph{simple (voting) games}. See, e.g., \citeN{Mann/Shapley:1962}, \citeN{Riker/Shapley:1968}, \citeN{Owen:1975:presidential}, or \citeN{Brams:1978} for seminal investigations. The binary framework can be restrictive, however. Even for collective `yes'-or-`no' decisions, individual voters usually have more than two options. For instance they can abstain or not even attend a vote, and this may affect the outcome differently than casting a vote either way. Corresponding situations have been formalized as \emph{ternary voting games} (\citeNP{Felsenthal/Machover:1997}; \citeNP{Tchantcho/DiffoLambo/Pongou/Engoulou:2008}; \citeNP{Parker:2012}) and \emph{quaternary voting games} \cite{Laruelle/Valenciano:2012}. Players can also be allowed to express graded intensities of support: in \emph{$(j,k)$-games}, studied by \citeN{Hsiao/Raghavan:1993} and Freixas and Zwicker~(\citeyearNP{Freixas/Zwicker:2003}, \citeyearNP{Freixas/Zwicker:2009}), each player selects one of $j$ ordered levels of approval. The resulting $j$-partitions of players are mapped to one of $k$ ordered output levels; suitable power indices have been defined by Freixas~(\citeyearNP{Freixas:2005:Banzhaf}, \citeyearNP{Freixas:2005:Shapley}).
Linear orderings of actions and feasible outcomes, as required by $(j,k)$-games, are naturally given in many applications but fail to exist in others. Think of options that have multidimensional attributes -- for instance, candidates for office or an open position, policy programs, locations of a facility, etc. Pertinent extensions of simple games, along with corresponding power measures, have been introduced as \emph{multicandidate voting games} by \citeN{Bolger:1986} and taken up as \emph{simple $r$-games} by \citeN{Amer/Carreras/Magana:1998b} and as \emph{weighted plurality games} by \citeN{Chua/Ueng/Huang:2002}. They require each player to vote for a single candidate. This results in partitions of player set $N$ that, in contrast to $(j,k)$-games, are mapped to a winning candidate without ordering restrictions.
We will draw on the yet more general framework of \emph{weighted committee games} (\citeNP{Kurz/Mayer/Napel:2018}). Winners in these games can depend on the entire preference rankings of voters rather than just the respective top. We conceive of \emph{player~$i$'s influence} or \emph{voting power} as the sensitivity of joint decisions to $i$'s actions or likings. The resulting ability to affect collective outcomes is closely linked to the opportunity to manipulate social choices in the sense of \citeN{Gibbard:1973} and \citeN{Satterthwaite:1975}. Our investigation therefore relates to computational studies by \citeN{Nitzan:1985}, \citeN{Kelly:1993}, \citeN{Aleskerov/Kurbanov:1999}, or \citeN{Smith:1999} that have quantified the aggregate \emph{manipulability} of a given decision rule. The conceptual difference between corresponding manipulability indices and the power index defined below is that the latter is evaluating consequences of arbitrary preference perturbations, while the indicated studies only look at strategic preference misrepresentation that is beneficial from the perspective of a player's original preferences.\footnote{\citeN{Nitzan:1985} also checked if outcomes could be affected by arbitrary variations of preferences before assessing manipulation. He tracked this at the aggregate level, while we break it down to individuals in order to link outcome sensitivity to voting weights.} Voting power as we quantify it could be used for a player's strategic advantage but it need not. A `preference change' might also be purely idiosyncratic, result from log-rolling or external lobbying (where costs of persuasion
can relate more to preference intensity than a player's original ranking of options), or could be a demonstration of power for its own sake.
Other conceptualizations of the influence derived from a given collective choice rule track the sets of outcomes that can be induced by partial coalitions. For instance, \citeN{Moulin:1981} uses \emph{veto functions} in order to describe outcomes that given coalitions of players could jointly prevent; Peleg's \citeyear{Peleg:1984} \emph{effectivity functions} describe the power structure in a committee by a list of all sets of alternatives that specific coalitions of voters can force the outcome to lie in. We, by contrast, follow the literature pioneered by \citeN{Penrose:1946}, \citeN{Shapley/Shubik:1954}, and \citeN{Banzhaf:1965}, and try to assess individual influence on outcomes concisely by a number between zero and one.
\section{Preliminaries\label{sec:preliminaries}}
\subsection{Anonymous Voting Rules\label{sec:rules}} We consider finite sets $N=\{1,\dots,n\}$ of $n$ voters or players such that each voter~$i\in N$ has strict preferences $P_i$ over a set $A=\{a_1,\dots,a_m\}$ of $m\ge 2$ alternatives. We write $abc$ in abbreviation of $aP_ibP_ic$ when the player's identity is clear. The set of all $m!$ strict preference orderings on $A$ is denoted by $\mathcal{P}(A)$. A \emph{(resolute) voting rule} $r\colon \mathcal{P}(A)^n \to A$ maps each preference profile $\mathbf{P}=(P_1,\dots,P_n)$ to a single winning alternative $a^*=r(\mathbf{P})$. Rule $r$ is \emph{anonymous} if for any $\mathbf{P}\in \mathcal{P}(A)$ and any permutation $\pi\colon N\to N$ with $\pi(\mathbf{P})=(P_{\pi(1)}, \dots, P_{\pi(n)})$ we have $r(\mathbf{P})=r(\pi(\mathbf{P}))$.
\renewcommand{\arraystretch}{1.4} \begin{table}
\begin{center}
\begin{tabular}{|l|l|}
\hline\hline
\emph{Rule} &\emph{ Winning alternative at preference profile $\mathbf{P}$ }\\
\hline \hline
Borda
& $r^B(\mathbf{P})\in \argmax_{a\in A} \sum_{i\in N} b_i(a,\mathbf{P})$\\ \hline
Copeland
& $r^C(\mathbf{P})\in \argmax_{a\in A} \big|\{a'\in A \ |\ a \succ_M^\mathbf{P} a'\}\big|$ \\ \hline
Plurality
& $r^P(\mathbf{P})\in \argmax_{a\in A} \big|\{i\in N \ |\ \forall a'\neq a\in A\colon a P_i a'\}\big|$ \\
\hline
& \\[-0.6cm]
Plurality runoff
&
$r^{PR}(\mathbf{P})
\begin{dcases}
= r^P(\mathbf{P}) \; \text{ if } \; \big|\{i\in N \ |\ \forall a'\in A\setminus\{ r^P(\mathbf{P})\}\colon r^P(\mathbf{P}) P_i a'\}\big| > \tfrac{n}{2}\\
\in \argmax_{a\in \{a_{(1)}, a_{(2)}\}} \big|\{i\in N \ |\ \forall a'\neq a\in \{a_{(1)}, a_{(2)}\} \colon a P_i a'\}\big| \text{ otherwise}
\end{dcases}$
\\[0.55cm] \hline
Schulze & $r^S(\mathbf{P})$ -- see \citeN{Schulze:2011} \\
\hline \hline
\end{tabular}
\caption{Considered anonymous voting rules \label{table:rules}}
\end{center} \end{table}
We will restrict attention to truthful voting\footnote{In principle, power analysis could also be carried out for strategic voters. This would require specifying the mapping from profiles of players' preferences to the element of $A$ (or a probability distribution over $A$) which is induced by the selected voting equilibrium. Determination of the latter usually is a hard task in itself and here left aside.} under one of the five anonymous rules summarized in Table~\ref{table:rules}, assuming {lexicographic tie breaking}. See \citeN{Laslier:2012} on the pros and cons of a big variety of voting procedures. Our selection comprises two positional rules (Borda, plurality), two Condorcet methods (Copeland, Schulze), and a two-stage procedure that is used for filling political offices in many European jurisdictions (plurality runoff).
Under \emph{plurality rule} $r^P$, each voter simply names his or her top-ranked alternative and the alternative that is ranked first by the most voters is chosen. This is also the winner under \emph{plurality with runoff rule}~$r^{PR}$ if the obtained plurality constitutes a majority (i.e., more than 50\% of votes); otherwise a binary runoff between the alternatives $a_{(1)}$ and $a_{(2)}$ that obtained the highest and second-highest plurality scores in the first stage is conducted.
\emph{Borda rule} $r^B$ has each player~$i$ assign $m-1$, $m-2$, \ldots, $0$ points to the alternative that he or she ranks first, second, etc. These points $
b_i(a,\mathbf{P}):= \big|\{a'\in A \ |\ a P_i a'\}\big| $ equal the number of alternatives that $i$ ranks below $a$. The alternative with the highest total number of points, known as its {Borda score}, is selected.
\emph{Copeland rule} $r^C$ considers pairwise majority votes between the alternatives. They define the {majority relation} $
a\succ_M^\mathbf{P} a' \ :\Leftrightarrow \ \big|\{i\in N\ | \ a P_i a' \} \big| > \big|\{i\in N\ | \ a' P_i a \}\big| $ and the alternative that beats the most others according to $\succ_M^\mathbf{P}$ is selected.
Copeland rule is a {Condorcet method}: if some alternative $a$ beats all others, then $r^C(\mathbf{P})=a$. The same is true for \emph{Schulze rule} $r^S$. But
while $r^C$ just counts the number of direct pairwise victories, $r^S$ also considers indirect victories and invokes majority margins in order to evaluate their strengths. $r^S$ then picks the alternative that has the strongest chain of direct or indirect majority support (
see \citeNP{Schulze:2011} for details). The attention paid to margins makes $r^S$ more sensitive to voting weights than $r^C$ in case $\succ_M^\mathbf{P}$ is cyclical. Despite its non-trivial computation, $r^S$ has been applied, e.g., by the Wikimedia Foundation and Linux open-source communities.
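For readers who wish to experiment with these rules, the following short Python sketch implements the plurality, plurality runoff, Borda and Copeland winners of Table~\ref{table:rules} for a profile given as a list of ballots (Schulze rule is omitted; see \citeNP{Schulze:2011}). The function names and the representation of ballots as tuples are ours and purely illustrative; ties are broken lexicographically, as assumed throughout, and weighted committees are covered by simply replicating player~$i$'s ranking $w_i$ times, in line with the definition of $r|\mathbf{w}$ in the next subsection.
\begin{verbatim}
# Minimal sketch of the anonymous rules of Table 1 (Schulze omitted).
# A profile is a list of ballots; each ballot is a tuple listing the
# alternatives from best to worst.  Ties are broken lexicographically.

def plurality_scores(profile, A):
    return {a: sum(1 for P in profile if P[0] == a) for a in A}

def plurality(profile, A):
    s = plurality_scores(profile, A)
    # max() keeps the first maximizer, i.e. the lexicographically smallest
    return max(sorted(A), key=lambda a: s[a])

def plurality_runoff(profile, A):
    s = plurality_scores(profile, A)
    first = plurality(profile, A)
    if 2 * s[first] > len(profile):              # outright majority
        return first
    second = max(sorted(a for a in A if a != first), key=lambda a: s[a])
    votes = sum(1 for P in profile if P.index(first) < P.index(second))
    if 2 * votes == len(profile):                # runoff tie
        return min(first, second)
    return first if 2 * votes > len(profile) else second

def borda(profile, A):
    s = {a: 0 for a in A}
    for P in profile:
        for rank, a in enumerate(P):             # rank 0 = top
            s[a] += len(A) - 1 - rank
    return max(sorted(A), key=lambda a: s[a])

def copeland(profile, A):
    def beats(a, b):                             # strict pairwise majority
        return 2 * sum(1 for P in profile if P.index(a) < P.index(b)) > len(profile)
    wins = {a: sum(beats(a, b) for b in A if b != a) for a in A}
    return max(sorted(A), key=lambda a: wins[a])
\end{verbatim}
Applied to the replicated ballots of the introductory hiring example, these functions return $a$, $b$, $d$ and $c$, respectively, matching the outcomes discussed there.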
\subsection{Weighted Committees\label{sec:wcg}}
Anonymous rules treat any components $P_i$ and $P_j$ of a preference profile $\mathbf{P}$ like indistinguishable ballots. Still, when a committee conducts pairwise comparisons, plurality voting, etc., two individual preferences often feed into the collective decision rather asymmetrically because, e.g., stockholders have as many votes as they own shares or the relevant players $i\in N$ in a political committee cast bloc votes in proportion to party seats. The resulting (non-anonymous) mapping from preferences to outcomes is a combination of anonymous voting rule $r$ with weights $w_1, \ldots, w_n$ for players $1, \ldots,n$ that is defined for all $\mathbf{P}\in \mathcal{P}(A)^n$ by \begin{equation}
r|\mathbf{w}(\mathbf{P}):=r([P_1]^{w_1}, [P_2]^{w_2}, \ldots, [P_n]^{w_n})=r(\underbrace{P_1, \ldots, P_1}_{w_1 \text{ times}}, \underbrace{P_2, \ldots, P_2}_{w_2 \text{ times}},\ldots, \underbrace{P_n, \ldots, P_n}_{w_n \text{ times}}).
\end{equation}
The combination $(N,A,r|\mathbf{w})$ of a set of voters, a set of alternatives and a particular weighted voting rule defines a \emph{weighted committee (game)}. When the underlying anonymous rule is plurality rule $r^P$, then $(N,A,r^P|\mathbf{w})$ is called a \emph{(weighted) plurality committee}. Similarly, $(N,A,r^{PR}|\mathbf{w})$, $(N,A,r^B|\mathbf{w})$, $(N,A,r^C|\mathbf{w})$ and $(N,A,r^S|\mathbf{w})$ are referred to as a \emph{plurality runoff committee}, \emph{Borda committee}, \emph{Copeland committee}, and \emph{Schulze committee}. See \shortciteN{Kurz/Mayer/Napel:2018} on structural differences between them.
Weighted committee games $(N, A,r|\mathbf{w})$ and $(N, A,r'|\mathbf{w'})$ are \emph{equivalent} if the respective mappings from preference profiles
to outcomes $a^*$ coincide; that is, when $r|\mathbf{w}(\mathbf{P}) = r'|\mathbf{w'}(\mathbf{P})$ for all $\mathbf{P}\in \mathcal{P}(A)^n$. Equivalent games evidently come with equivalent expectations for individual players to influence the collective decision (voting power) and to obtain outcomes that match their own preferences (success). We will focus on voting power and non-equivalent committees that involve either the same rule $r$ but different weights $\mathbf{w}$ and $\mathbf{w'}$, or the same weights $\mathbf{w}$ but different rules $r$ and $r'$. Example questions that we would like to address are: to what extent does a change of voting weights, implied for example by the recent re-allocation of voting rights in the International Monetary Fund or party-switching of a member of parliament, shift the respective balance of power? How does players' attractiveness to lobbyists change when a committee replaces one voting method by another?
\section{Measuring Influence in Weighted Committees\label{sec:influence_measure}}
Some obvious shortcomings notwithstanding (see, e.g., \citeNP{Garrett/Tsebelis:1999}, \citeyearNP{Garrett/Tsebelis:2001}), voting power indices such as the Penrose-Banzhaf and the Shapley-Shubik index have received wide attention in theoretical and applied analysis of binary decisions. See, e.g., the contributions in \citeN{Holler/Nurmi:2013}. Most indices can be seen as operationalizing power of player~$i$ as \emph{expected sensitivity of collective decisions to player~$i$'s behavior} \cite{Napel/Widgren:2004}. Sensitivity in the binary case means that, taking the behavior of other players as given, the collective outcome would have been different had player~$i$ voted `no' instead of `yes', or `yes' instead of `no'. Distinct indices reflect distinct probabilistic assumptions about the voting configurations that are evaluated. For instance, the Penrose-Banzhaf index is predicated on the assumption that the preferences of all $n$ players are independent random variables
and the $2^n$ different `yes'-`no' configurations are equally likely.
\subsection{Power as (Normalized) Expected Sensitivity of the Outcome}
It is not hard to generalize the idea of measuring influence as outcome sensitivity to weighted committees $(N, A,r|\mathbf{w})$.
Continuing in the Penrose-Banzhaf tradition, we will assume that individual preferences are drawn independently from the uniform distribution over $\mathcal{P}(A)$, i.e., all profiles $\mathbf{P}\in \mathcal{P}(\{a_1, \ldots, a_m\})^n$ are equally likely.\footnote{This is also known as the \emph{impartial culture assumption}. It has limited empirical support (see, e.g., \citeNP[Ch.~1]{Regenwetter/Grofman/Marley/Tsetlin:2012}) but is commonly adopted as a starting point for assessing links between voting weights and power a~priori. We leave the pursuit of other options for future research (e.g., single-peakedness along a common dimension).} In order to assess the voting power of player~$i$ with weight $w_i$, we perturb $i$'s realized preferences $P_i$ to a random $P_i'\neq P_i \in \mathcal{P}(A)$ and check if this individual preference change would change the collective outcome.\footnote{One might also restrict attention to local perturbations, that is, only allow changes of $P_i$ to adjacent orderings. So when $m=3$ and $P_i=abc$, one would impose the constraint $P_i'\in\{acb,bac\}$. This would not change the qualitative observations in the IMF section below.}
Specifically, using notation $\mathbf{P}=(P_i,\mathbf{P}_{-i})$ with $\mathbf{P}_{-i}= (P_1,\dots,P_{i-1},P_{i+1},\dots,P_n)$, we are interested in the behavior of indicator function \begin{equation}
\Delta r|\mathbf{w}(\mathbf{P};{P'_i}):= \begin{cases}
1& \text{if} \quad r|\mathbf{w}(\mathbf{P}) \neq r|\mathbf{w}(P_i',\mathbf{P}_{-i}), \\
0 & \text{if} \quad r|\mathbf{w}(\mathbf{P}) = r|\mathbf{w}(P_i',\mathbf{P}_{-i}). \end{cases} \end{equation}
We stay agnostic about the precise source of perturbations: the switch from $P_i$ to $P_i'$ might reflect a spontaneous change of mind or intentional preference misrepresentation, e.g., because someone has bought $i$'s vote. Variations might also be the result of a mistake or of receiving last-minute private information about some of the candidates. Our important premise is only that: a committee member's input to the collective decision process matters more, the more influential player~$i$ is in the committee and vice~versa.
We can then quantify player~$i$'s a~priori influence or power -- and compare it to that of other players or for variations of $r|\mathbf{w}$ -- by taking expectations over all $(m!)^n$ conceivable $\mathbf{P}$ and all $m!-1$ possible perturbations of $P_i$ at any given $\mathbf{P}$:
\begin{equation}\label{eq:expected_Delta}
\widehat{\mathcal{I}}_i(N,A,r|\mathbf{w}) :=
\mathbb{E}\big[\Delta r|\mathbf{w}(\mathbf{P};{P'_i})\big]=\frac{\sum_{\mathbf{P} \in \mathcal{P}(A)^n} \sum_{P_i'\neq P_i \in \mathcal{P}(A)} \Delta r|\mathbf{w}(\mathbf{P};{P'_i})}{(m!)^n\cdot (m!-1)}, \;\; i \in N. \end{equation}
A value of $\widehat{\mathcal{I}}_i(N,A,r|\mathbf{w}) = 0.25$, for example,
means that 25\% of player $i$'s preference variations would change the outcome.
The expected value in (\ref{eq:expected_Delta}) equals the probability that a change of player $i$'s preferences from $P_i$ to random $P_i'\neq P_i$ at a randomly drawn profile $\mathbf{P}$ affects the outcome. It is zero if and only if
player $i$ is a \emph{null player}, i.e., its preferences never make a difference to the committee decision.
However, $\widehat{\mathcal{I}}_i(N,A,r|\mathbf{w})$ falls short of one for a \emph{dictator player}, i.e., when $r|\mathbf{w}(\mathbf{P})=a^*$ if and only if player~$i$ ranks $a^*$ top: since only changes of the dictator's top preference matter, only $(m!-(m-1)!)$ out of $m!-1$ perturbations of $P_i$ affect the outcome. We suggest to normalize power indications so that they range from zero to one. Specifically, we focus on the voting power index $\mathcal{I}(\cdot )$ with \begin{equation}\label{eq:I_definition}
\mathcal{I}_i(N,A,r|\mathbf{w}):=\frac{\mathbb{E}\big[\Delta r|\mathbf{w}(\mathbf{P};{P'_i})\big]}{(m!-(m-1)!)/(m!-1)}= \frac{\sum_{\mathbf{P} \in \mathcal{P}(A)^n} \sum_{P_i'
\in \mathcal{P}(A)} \Delta r|\mathbf{w}(\mathbf{P};{P'_i})}{(m!)^n\cdot(m!-(m-1)!)} , \;\; i \in N, \end{equation}
denoting \emph{player $i$'s a priori influence} or \emph{voting power} in weighted committee $(N,A,r|\mathbf{w})$.
The normalization destroys $\widehat{\mathcal{I}}_i(N,A,r|\mathbf{w})$'s interpretation as a probability but facilitates comparison across committees. Regardless of how many alternatives and players are involved,
$\mathcal{I}_i(N,A,r|\mathbf{w})\in[0,1]$ indicates how close player~$i$ is to being a dictator in $(N,A,r|\mathbf{w})$.
$\mathcal{I}_i(N,A,r|\mathbf{w})=0.5$, for instance, implies that $i$'s influence lies halfway between that of a null player and a dictator. So, on average, outcomes are half as sensitive to $i$'s preferences as they would be if $i$ commanded all votes.
\subsection{Relationship to the Penrose-Banzhaf Index\label{sec:PBI}}
For $m=2$ alternatives, the denominators in (\ref{eq:expected_Delta}) and (\ref{eq:I_definition}) equal $2^n$ and ${\mathcal{I}}_i(N,A,r|\mathbf{w})=\widehat{\mathcal{I}}_i(N,A,r|\mathbf{w})$. Moreover, for any of the rules~$r$ introduced above,
weighted committees coincide with the subclass of {simple voting games} $(N,v)$ where $v(S)\in \{0,1\}$ is given by $v(S)=1 \Leftrightarrow w(S)\ge \frac{1}{2}w(N)$ with $w(T):=\sum_{i\in T}w_i$ for all $T\subseteq N$.
If we consider the simple game $(N,v)$ induced by $(N,\{a_1,a_2\},r|\mathbf{w})$, its Penrose-Banzhaf index $PBI(N,v)$ turns out to coincide with
${\mathcal{I}}(N,A,r|\mathbf{w})$. Namely, $PBI(\cdot)$'s usual definition then specializes to \begin{align} PBI_i(N,v)&:=\frac{1}{2^{n-1}} \sum_{S\subseteq N\setminus \{i\}} [v(S \cup \{i\})-v(S)] \\
& =\frac{\big|\{S\subseteq N\setminus \{i\}: w(S)< \frac{1}{2}w(N)
\text{\ \ and\ \ } w(S\cup\{i\})\ge \frac{1}{2}w(N)\}\big|}{2^{n-1}}. \notag \end{align}
In this case, for $r\in \{r^B, r^C, r^P, r^{PR}, r^S\}$ with lexicographic tie-breaking, \begin{equation} \begin{array}{rcr}
\Delta r|\mathbf{w}(\mathbf{P};{P'_i})=1 \quad
& \Leftrightarrow & [ r|\mathbf{w}(P_i, \mathbf{P}_{-i})=a_1 \text{\ \ and \ \,} r|\mathbf{w}(P_i', \mathbf{P}_{-i})=a_2 ] \\
& & \text{ or \ \ } [ r|\mathbf{w}(P_i, \mathbf{P}_{-i})=a_2 \text{\ \ and \ \,} r|\mathbf{w}(P_i', \mathbf{P}_{-i})=a_1 ] \\ & \Leftrightarrow & w(\overbrace{\{j\neq i\in N: a_1 P_j a_2 \}}^{S^{\mathbf{P}_{-i}}}) + w_i \ge \frac{1}{2}w(N) \\ & & \text{ and \ \ } w(\{j\neq i\in N: a_2 P_j a_1 \})+ w_i > \frac{1}{2}w(N)
\\ & \Leftrightarrow & w(S^{\mathbf{P}_{-i}})< \frac{1}{2}w(N) \text{\ \ and \ } w(S^{\mathbf{P}_{-i}}\cup \{i\}) \ge \frac{1}{2}w(N). \end{array} \end{equation} The last line substitutes $w(\{j\neq i\in N: a_2 P_j a_1 \})=w(N)- w_i - w(S^{\mathbf{P}_{-i}})$ where $S^{\mathbf{P}_{-i}}:=\{j\neq i\in N: a_1 P_j a_2 \}$. Hence \begin{align}
\mathcal{I}_i(N,\{a_1,a_2\},r|\mathbf{w})
&=\frac{\sum_{\mathbf{P} \in \mathcal{P}(A)^n} \sum_{P_i' \in \mathcal{P}(A)} \Delta r|\mathbf{w}(\mathbf{P};{P'_i})}{2^n} \\
&= \frac{2\cdot \big|\{S\subseteq N\setminus \{i\}: w(S)< \frac{1}{2}w(N)
\text{\ \ and\ \ }w(S\cup\{i\})\ge \frac{1}{2}w(N)\}\big|}{2^n}
= PBI(N,v) \notag \end{align} as every $S=S^{\mathbf{P}_{-i}}\subseteq N\setminus \{i\}$ arises for two profiles $(P_i,\mathbf{P}_{-i})\in \mathcal{P}(A)^n$, one involving $a_1 P_i a_2$ and the other $a_2 P_i a_1$.
\section{Illustration} \label{sec:illustration} \subsection{A Toy Example\label{sec:example}} As a first illustration let us evaluate the distribution of voting power when our stylized hiring committee
(see Introduction) with three groups of 6, 5, and 3 members adopts Borda rule $r^B$, that is weighted committee $(N,A,r^B|(6,5,3))$.
With $|A|=2$ candidates, the applicant ranked first by any two groups wins. So all three players are symmetric and have identical power according to the Penrose-Banzhaf or any other established voting power index.
The symmetry is broken when three or more candidates are involved. Given $A=\{a,b,c\}$, $\mathcal{I}(N,A,r^B|(6,5,3))$ evaluates all $(3!)^{3}=216$ strict preference profiles $\mathbf{P}\in \mathcal{P}(A)^3$ and checks whether a change of player~$i$'s respective preference $P_i$ makes a difference to the Borda winner. Table~\ref{table:big_Borda_table} illustrates this for profile $\mathbf{P}=(bca, abc, cba)$. The Borda winner $b$ at $\mathbf{P}$ has a score of $20 =6\cdot 2+5\cdot 1+3\cdot 1$ points vs.\ 10 for $a$ vs.\ 12 for $c$ (first block of table). When preferences $P_1=bca$ of group~1 are varied (second block), changes to $P_1'\in \{abc, acb, cab, cba\}$ result in a new Borda winner (indicated by an asterisk) while $P_1'=bac$ does not. Similarly, three out of five perturbations $P_2'$ of player~2's preferences would change the outcome (third block); however, no variation of $P_3$ affects the committee choice (last block). The considered profile $\mathbf{P}=(bca, abc, cba)$ therefore contributes $(\sfrac{4}{864}, \sfrac{3}{864},0)$ to $\mathcal{I}(\cdot)$.
\begin{table}[htbp]
\begin{tabular}{||ccc||c|c|c|c||c|c|c|c||c|c|c|c||}
\hline\hline
$ $ & $\mathbf{P}=$ & & $P_1'$ & $a$ & $b$ & $c$ & $P_2'$ & $a$ & $b$ & $c$ & $P_3'$ & $a$ & $b$ & $c$ \\
\hline
$(bca,$ & $abc,$ & $cba)$ & $abc$ & $\mathbf{22}$* & 14 & 6 & - & - & - & - & $abc$ & 16 & $\mathbf{20}$ & 6 \\
& $\Downarrow$ & & $acb$ & $\mathbf{22}$* & 8 & 12 & $acb$ & 10 & 15 & $\mathbf{17}$* & $acb$ & 16 & $\mathbf{17}$ & 9 \\
$a$ & $b$ & $c$ & $bac$ & 16 & $\mathbf{20}$ & 6 & $bac$ & 5 & $\mathbf{25}$ & 12 & $bac$ & 13 & $\mathbf{23}$ & 6 \\
10 & $\mathbf{20}$ & 12 & - & - & - & - & $bca$ & 0 & $\mathbf{25}$ & 17 & $bca$ & 10 & $\mathbf{23}$ & 9 \\
& & & $cab$ & 16 & 8 & $\mathbf{18}$* & $cab$ & 5 & 15 & $\mathbf{22}$* & $cab$ & 13 & $\mathbf{17}$ & 12 \\
& & & $cba$ & 10 & 14 & $\mathbf{18}$* & $cba$ & 0 & 20 & $\mathbf{22}$* & - & - & - & -\\
\hline\hline \end{tabular}
\caption{\small Effect of perturbation of $\mathbf{P}=(bca, abc, cba)$ to $(P_i',\mathbf{P}_{-i}$) on Borda scores}
\label{table:big_Borda_table} \end{table} Aggregating the corresponding figures for all $\mathbf{P}\in \mathcal{P}(A)^n$ yields \begin{equation}\label{eq:example_I}
\mathcal{I}(N,A,r^B|(6,5,3))=(\sfrac{588}{864}, \sfrac{516}{864},\sfrac{312}{864})\approx(0.6806, 0.5972, 0.3611). \end{equation} So for the weights at hand, group~1 has almost 70\% of the opportunities to swing the collective choice that it would have when deciding alone. This figure is roughly halved for group~3 even though both are symmetric when choosing between two candidates. The comparison shows that traditional voting power indices for simple voting games, such as $PBI(\cdot)$, can yield highly misleading conclusions when decisions of the investigated institution involve more than $m=2$ alternatives. (This is one of the shortcomings alluded to earlier.) One can also see from the numbers in (\ref{eq:example_I}) that $\mathcal{I}(\cdot)$ need not add to one: the collective outcome at a given $\mathbf{P}$ may be sensitive to the preferences of several players at the same time, or to those of none.\footnote{Therefore, normalization $\overline{PBI}(N,v)=PBI(N,v)/\sum_i PBI_i(N,v)$ is sometimes applied in binary voting analysis.}
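The brute-force evaluation behind (\ref{eq:example_I}) can also be scripted in a few lines. The sketch below (plain Python, with hypothetical names) assumes weighted Borda scoring with lexicographic tie-breaking as in Table~\ref{table:big_Borda_table} and normalizes by the $6^3\cdot 4=864$ profile-perturbation pairs at which a dictator would be swung; up to reduction of the fractions, its output should coincide with (\ref{eq:example_I}) as long as the tie-breaking convention matches the one used in the text:
\begin{verbatim}
from fractions import Fraction
from itertools import permutations, product
from math import factorial

A = ('a', 'b', 'c')

def borda_winner(profile, weights):
    # weighted Borda scores; ties broken lexicographically (assumption)
    scores = {x: 0 for x in A}
    for P, w_i in zip(profile, weights):
        for rank, x in enumerate(P):
            scores[x] += w_i * (len(A) - 1 - rank)
    return max(sorted(A), key=lambda x: scores[x])

def influence(rule, weights):
    m, n = len(A), len(weights)
    prefs = list(permutations(A))
    swings = [0] * n
    for profile in product(prefs, repeat=n):
        outcome = rule(profile, weights)
        for i in range(n):
            for P_new in prefs:
                changed = profile[:i] + (P_new,) + profile[i + 1:]
                if rule(changed, weights) != outcome:
                    swings[i] += 1
    # dictator benchmark: m!^n profiles, each with m!-(m-1)! swinging perturbations
    denominator = factorial(m) ** n * (factorial(m) - factorial(m - 1))
    return [Fraction(s, denominator) for s in swings]

print(influence(borda_winner, (6, 5, 3)))
\end{verbatim}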
\begin{table}[htbp]
\small
\centering
\begin{tabular}{||c|c|c|c||}
\hline\hline
& $m=3$ & $m=4$ & $m=5$ \\
\hline
$\mathcal{I}(r^P|(6,5,3))$ & (0.6667, 0.4444, 0.4444) & (0.7500, 0.3750, 0.3750) & (0.8000, 0.3200, 0.3200) \\
\hline
$\mathcal{I}(r^{PR}|(6,5,3))$ & (0.5556, 0.5556, 0.5000) & (0.5833, 0.5833, 0.5000) & (0.6000, 0.6000, 0.5000) \\
\hline
$\mathcal{I}(r^{B}|(6,5,3))$ & (0.6806, 0.5972, 0.3611) & (0.7372, 0.6246, 0.3644) & (0.7631, 0.6462, 0.3839) \\
\hline
$\mathcal{I}(r^{C}|(6,5,3))$ & (0.5509, 0.5509, 0.5509) & (0.5851, 0.5851, 0.5851) & (0.6098, 0.6098, 0.6098) \\
\hline
$\mathcal{I}(r^{S}|(6,5,3))$ & (0.5972, 0.5278, 0.5278) & (0.6584, 0.5426, 0.5426) & (0.7011, 0.5515, 0.5515) \\
\hline\hline
\end{tabular}
\caption{\small Voting power in committee $(N, A, r|(6,5,3))$ for $|A|=m$ and $r\in\{r^P,r^{PR},r^B,r^C,r^S\}$}
\label{table:results_example} \end{table}
Of course, manual computations as in Table~\ref{table:big_Borda_table} are tedious. It is not difficult, though, to evaluate $\mathcal{I}(\cdot)$ with a standard desktop computer for up to five alternatives; and to compare the respective distribution of voting power to that arising from other voting rules. Findings are summarized for $r\in \{r^P,r^{PR},r^B,r^C,r^S\}$ in Table~\ref{table:results_example}. As the comparison between $m=2$ and 3 showed already, voting powers vary in the number of alternatives. Under plurality rule, for instance, player~1 is closer to having dictatorial influence, the more alternatives split the vote of players~2 and 3.
$\mathcal{I}(N,\{a_1,\ldots, a_m\},r^P|(6,5,3))$ tends to $(1,0,0)$ as $m\to \infty$ (given sincere voting and independent preferences).\footnote{Comparative statics are more involved for other rules: bigger $m$ tends to raise the share of profiles $\mathbf{P}$ at which \emph{some} perturbation of $P_i$ affects the outcome but lowers the fraction of perturbations $P_i'$ that do so for both player~$i$ and the hypothetical dictator used as normalization. Superposition of these effects here increases power for all players and $r\in \{r^{PR},r^B,r^C,r^S\}$, but this is not true in general.} That the power of all three players coincides under $r^C$ confirms that Copeland rule extends the symmetry relation between players for $m=2$ alternatives to any number $m>2$ (see \shortciteNP[Prop.~3]{Kurz/Mayer/Napel:2018}).
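The plurality figures in Table~\ref{table:results_example} for $m=3$ can be verified by hand. Under $r^P|(6,5,3)$, the winner is group~1's top candidate whenever groups~2 and~3 disagree about their top candidates; if they agree, their common top candidate wins ($8>6$ votes). Group~1 therefore swings the outcome exactly at the $216\cdot\frac{2}{3}=144$ profiles at which groups~2 and~3 disagree, and there each of the $4$ perturbations that change group~1's top candidate changes the winner, so
\begin{equation*}
\mathcal{I}_1(N,A,r^P|(6,5,3))=\frac{144\cdot 4}{864}=\frac{2}{3}\approx 0.6667.
\end{equation*}
A perturbation of group~2's preferences matters in two cases: when groups~2 and~3 agree on a top candidate that differs from group~1's ($48$ profiles, at each of which the $4$ perturbations changing group~2's top candidate hand the win to group~1), and when they disagree while group~3's top candidate differs from group~1's ($96$ profiles, at each of which the $2$ perturbations aligning group~2's top candidate with group~3's make the latter win). Hence $\mathcal{I}_2(N,A,r^P|(6,5,3))=(48\cdot 4+96\cdot 2)/864=\frac{4}{9}\approx 0.4444$; the analogous count for group~3 gives the same value, in line with the first row of Table~\ref{table:results_example}.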
\subsection{Election of the IMF's Managing Director\label{sec:IMF}}
Evaluation of $\mathcal{I}(\cdot)$ is, of course, more interesting for real-world institutions than toy examples. We pick the International Monetary Fund (IMF) as a case in point. Binary power indices have been applied to it many times. See, e.g., Leech~\citeyear{Leech:2002,Leech:2003}, \citeN{Aleskerov/Kalyagin/Pogorelskiy:2008}, or \citeN{Leech/Leech:2013}. We extend the analysis to election of the IMF's Managing Director from three candidates by the Executive Board.
The Executive Board consists of 24 members whose voting weights reflect financial contributions to the IMF, so-called \emph{quotas}. The six largest contributors -- USA, China, Japan, Germany, France and the UK -- and Saudi Arabia currently provide one Executive Director each. The remaining 182 countries are grouped into seventeen constituencies. Each supplies one Executive Director who represents the group members and wields their combined voting rights.
Various changes to the distribution of quotas have taken place since the IMF's inception in 1944 at Bretton Woods. The most recent reform was agreed in 2010 and started to be implemented in 2016. A significant share of votes has shifted from the USA and Europe to emerging and developing countries.
In particular, China's vote share has gone up to 6.1\% (compared to 3.8\% before). India's share increased to 2.6\% (2.3\%), Russia's to 2.6\% (2.4\%), Brazil's to 2.2\% (1.7\%) and Mexico's to 1.8\% (1.5\%).
On the occasion,
the IMF also modified the election process for its key representative, the Managing Director (currently: Christine Lagarde).
Prior to the reform, the process was criticized as intransparent and undemocratic: the Managing Director used to be a European chosen in backroom negotiations with the US. The new process is advertised as ``open, merit based, and transparent'' (IMF Press Release 16/19): all Executive Directors and IMF Governors may nominate candidates. If the number of nominees is too big, a shortlist of three candidates is drawn up based on indications of support. From this shortlist the new Managing Director is elected ``by a majority of the votes cast'' in the Executive Board.\footnote{IMF Press Release 16/19, Part~4, holds that ``Although the Executive Board may select a Managing Director by a majority of the votes cast, the objective of the Executive Board is to select the Managing Director \mbox{by consensus \ldots''.} The same is said in Part~3 about adoption of the ``shortlist''. Our analysis presumes that a consensus may not always be found -- or it actually arises in the shadow of the anticipated outcome of voting.}
\renewcommand{1.1}{1.1} \begin{table}
\centering
\begin{tabular}{||l|c|c|c|c|c|c|c|c||}
\hline \hline
& \multicolumn{2}{c|}{Vote share (\%)} & \multicolumn{2}{c|}{$\mathcal{I}(r^P|\mathbf{w})$} & \multicolumn{2}{c|}{$\mathcal{I}(r^{PR}|\mathbf{w})$} & \multicolumn{2}{c||}{$\mathcal{I}(r^C|\mathbf{w})$} \\
\cline{2-9}
&$\mathbf{w}_{\text{pre}}$ &$\mathbf{w}_{\text{post}}$ & $\mathbf{w}_{\text{pre}}$ & $\mathbf{w}_{\text{post}}$ & $\mathbf{w}_{\text{pre}}$ & $\mathbf{w}_{\text{post}}$ & $\mathbf{w}_{\text{pre}}$ & $\mathbf{w}_{\text{post}}$ \\
\hline
USA & 16.72 & 16.47& 0.7126 &0.7030 &0.6740 &0.6653 & 0.6880 &0.6790 \\
Japan & 6.22 & 6.13&0.1986 & 0.1989 &0.2239 & 0.2233& 0.2164 &0.2159 \\
China & 3.80 & 6.07&0.1216 &0.1967 &0.1404 &0.2209 &0.1340 &0.2135 \\
\emph{Netherlands} & 6.56 & 5.41 &0.2092 &0.1755 &0.2350 &0.1983 &0.2277 &0.1910 \\
Germany & 5.80 &5.31&0.1851 & 0.1720 &0.2097 & 0.1950&0.2024 & 0.1876\\
\emph{Spain} & 4.90 &5.29 & 0.1567 &0.1718 &0.1789 &0.1945 & 0.1717 & 0.1871\\
\emph{Indonesia} & 3.93 &4.33&0.1254 &0.1403 &0.1448 &0.1607 &0.1382 &0.1538 \\
\emph{Italy} & 4.22 &4.12&0.1349 &0.1337 &0.1551 & 0.1533& 0.1482 & 0.1465\\
France & 4.28 &4.02&0.1370 &0.1306 & 0.1574 &0.1499 &0.1507 &0.1432 \\
United Kingdom & 4.28 &4.02&0.1369 &0.1304 &0.1574 &0.1498 &0.1506 &0.1431 \\
\emph{Korea} & 3.48 &3.78&0.1114 &0.1226 & 0.1291 & 0.1410& 0.1230 &0.1345 \\
\emph{Canada} & 3.59 &3.37& 0.1150 &0.1093 &0.1332 &0.1265 &0.1268 & 0.1203\\
\emph{Sweden} & 3.39 &3.28& 0.1085 & 0.1063 & 0.1259 &0.1231 & 0.1198 & 0.1171\\
\emph{Turkey} & 2.91 & 3.22& 0.0932 & 0.1044 & 0.1088 & 0.1209& 0.1032 & 0.1149\\
\emph{South Africa} & 3.41 &3.09& 0.1091 & 0.1001 &0.1267 &0.1162 &0.1205 &0.1104 \\
\emph{Brazil} & 2.61 &3.06&0.0835 &0.0993 &0.0979 &0.1154 & 0.0927 & 0.1096\\
\emph{India} & 2.80 &3.04&0.0898 &0.0988 &0.1048 & 0.1147& 0.0993 & 0.1089\\
\emph{Switzerland} & 2.94 &2.88&0.0941 &0.0935 &0.1097 &0.1087 &0.1041 &0.1030 \\
\emph{Russian Federation} & 2.55 &2.83&0.0817 & 0.0920 &0.0957 &0.1070 &0.0905 &0.1015 \\
\emph{Iran} & 2.73 &2.54& 0.0874 &0.0823 & 0.1024 &0.0962 & 0.0970 & 0.0910\\
\emph{Utd.~Arab Emirates} & 2.57 &2.52& 0.0822 &0.0817 &0.0963 &0.0955 & 0.0911 & 0.0904\\
Saudi Arabia & 2.80 &2.01& 0.0896 &0.0652 & 0.1046 & 0.0767&0.0992 & 0.0723\\
\emph{Dem.~Rep. Congo} & 1.46 &1.62& 0.0465 &0.0526 &0.0555 &0.0621 & 0.0521 & 0.0584\\
\emph{Argentina} & 1.84 &1.59& 0.0587 &0.0515 &0.0695 &0.0610 &0.0654 & 0.0573\\
\hline
\hline
\end{tabular}
\caption{\small Influence
in IMF Executive Board for pre- and post-reform weights and $m=3$ \\
(\emph{groups} as of Dec.~2018 indicated by largest member, Nauru included in $\mathbf{w}_{\text{post}}$)
}
\label{table:IMF_results} \end{table}
The IMF has neither publicly nor upon our email request specified what ``majority of the votes cast'' exactly means to it for three candidates. We take the resulting room for interpretation as an opportunity to simultaneously investigate the voting power effects of the weight reform and of a procedural choice between using $(i)$~plurality rule, $(ii)$~plurality with a runoff if none of three shortlisted candidates secures 50\% of the votes, or $(iii)$~Copeland rule. In the spirit of earlier a~priori analysis of the IMF, we maintain the independence assumption that underlies $PBI(\cdot)$ and $\mathcal{I}(\cdot)$. This provides an a~priori assessment of how level is the playing field created by weights as such rather than an estimate of who wields how much influence on the next decision given prevailing political ties, economic interdependencies, etc.
Influence figures in Table~\ref{table:IMF_results} are based on Monte Carlo simulations with sufficiently many iterations so that differences within rows are significant with $\ge\! 95\%$ confidence.\footnote{
The only exception is that the difference between $\mathcal{I}_{\text{Japan}}(r^P|\mathbf{w}_{\text{pre}})$ and $\mathcal{I}_{\text{Japan}}(r^P|\mathbf{w}_{\text{post}})$ is not significant. The large number $6^{24}>4.7\cdot 10^{18}$ of preference profiles renders exact calculation of $\mathcal{I}(\cdot)$ impractical.} We find that 2016's increase of vote shares for emerging market economies has indeed raised their voting power, no matter which voting rule we consider. This is most pronounced for China, with an increase of more than 50\%.
Influence of the groups led by Brazil and Russia (incl.\ Syria) increased by about 18\% and 12\%, respectively; that of the Turkish and Indonesian group by about 11\% each; the Indian and Spanish (incl.\ Mexico and others) groups gained about 10\% and 9\%, respectively. Intended or not, the South African group lost about 8\% of its a~priori voting power; Saudi Arabia is the greatest loser with roughly 27\%. Germany, France and UK each lost between 5\% and 7\% while voting power of the USA stayed largely constant.
The computations exhibit
a simple pattern regarding the adopted interpretation of `majority': voting power of the USA is higher for plurality rule than for Copeland than for plurality runoff; the opposite applies to all other (groups of) countries. This echoes findings for our toy example: the largest player's influence was highest for $r^P$; small and medium players were more influential under $r^{PR}$ and $r^C$.
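The estimation procedure itself is straightforward. A minimal Monte Carlo sketch (plain Python, with hypothetical function and variable names; plurality rule and lexicographic tie-breaking are shown for concreteness) estimates the probability that a random perturbation of player~$i$'s preferences changes the outcome and, in the spirit of the definition of $\mathcal{I}(\cdot)$, normalizes it by the corresponding swing probability $\big(m!-(m-1)!\big)/m!$ of a dictator:
\begin{verbatim}
import math
import random
from itertools import permutations

def plurality_winner(profile, weights, alternatives):
    # weighted plurality; ties broken lexicographically (assumption)
    tally = {a: 0.0 for a in alternatives}
    for P, w_i in zip(profile, weights):
        tally[P[0]] += w_i
    return max(sorted(alternatives), key=lambda a: tally[a])

def mc_influence(i, weights, alternatives, rule, iterations=10**6, seed=1):
    rng = random.Random(seed)
    prefs = list(permutations(alternatives))
    m = len(alternatives)
    hits = 0
    for _ in range(iterations):
        profile = [rng.choice(prefs) for _ in weights]
        outcome = rule(profile, weights, alternatives)
        perturbed = list(profile)
        perturbed[i] = rng.choice(prefs)   # random perturbation of player i
        if rule(perturbed, weights, alternatives) != outcome:
            hits += 1
    # a dictator is swung by a random perturbation with this probability
    p_dictator = (math.factorial(m) - math.factorial(m - 1)) / math.factorial(m)
    return hits / iterations / p_dictator
\end{verbatim}
Standard binomial confidence bounds for the hit frequency then yield significance statements such as those reported above.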
\section{Towards More General Rule Comparisons\label{sec:general_comparison}}
It seems worthwhile to check whether observations like the ones above are robust: does the largest group benefit from plurality votes or the smallest from pairwise comparisons in general? And can recommendations for maximizing a player's influence be given also if information about the exact distribution of voting weights is vague or fluctuates? We take a first step beyond specific examples and look for possible size biases of the rules. Attention is restricted to small numbers of players and alternatives, namely $n=m=3$. The respective intuitions may still apply more generally to shareholder meetings, weighted voting in political bodies, etc.
We use the standard projection of the 3-dimensional simplex of relative weights to the plane in order to represent all possible weight distributions among three players: vertices give 100\% of voting weight to the indicated player, the midpoint corresponds to $(\sfrac{1}{3},\sfrac{1}{3},\sfrac{1}{3})$, etc.
Each figure
presents the result of comparing influence of player~1 under some rule~$\rho_A$ vs.\ rule~$\rho_B$. Areas colored green (red) indicate weight distributions for which $\mathcal{I}_1(N,A, \rho_A) >\!\!\mbox{$(<)$}\ \mathcal{I}_1(N,A, \rho_B)$.
Borda rule was found to be particularly sensitive to weight differences by \shortciteN{Kurz/Mayer/Napel:2018}; when it is involved in a comparison, we use darker tones of green or red to indicate greater influence differences.
\subsection{Borda vs.\ Plurality} The major cases that arise when we compare player~1's influence in Borda vs.\ plurality committees are numbered in Figure~\ref{fig:Borda_Plurality_3_3_example}. We focus on $w_1 \neq w_2 \neq w_3$ and write $\tilde w_i=w_i/(w_1+w_2+w_3)$, $ w_{-1}^+=\max \{\tilde w_2, \tilde w_3\}$, and $ w_{-1}^-=\min \{\tilde w_2, \tilde w_3\}$. The following recommendations could be given to an influence-maximizing player~1 if the procedural choice between $r^B$ and $r^P$ is at this player's discretion:
\begin{itemize}
\item If you wield the majority of votes (region 1) impose plurality rule.
\\
Namely, $\tilde w_1>\frac{2}{3}$ makes you both a plurality and Borda dictator (region 1a);
$\frac{2}{3}\ge\tilde{w_1}>\frac{1}{2} $ implies dictatorship only under plurality rule (region 1b).
\item Also impose plurality rule (region~5)
\begin{itemize} \item if your weight is smallest and both other players have a third to half of the votes each ($\frac{1}{3}\le w_{-1}^-<\frac{1}{2}$), or \item if you have less than a third of votes and the largest player falls short of the majority by no more than a quarter of the remaining player's votes
($\frac{1}{2}>w_{-1}^+\ge \frac{1}{2} -\frac{1}{4}w_{-1}^-$). \end{itemize}
\item Otherwise, as a good `rule of thumb', impose Borda rule.
\\
Namely, when some other player holds the majority (region~2, $w_{-1}^+>\frac{1}{2}$) the observations for region~1 essentially get reversed.
If nobody holds a majority, Borda comes with greater influence if you are second-largest with at least a third of the votes (region~4, $\frac{1}{2}>w_{-1}^+>\tilde{w_1}\ge \frac{1}{3}$). This extends weakly to when you are largest (region~3). The only exceptions to the rule are two small subregions where all weights are similar but $\tilde w_1 > w_{-1}^+ >\frac{1}{3}> w_{-1}^-$. \end{itemize}
\begin{figure}
\caption{\small Borda vs. plurality for
$m=3$. Regions colored green (yellow/red) indicate that Borda influence is greater than (equal to/smaller than) plurality influence}
\label{fig:Borda_Plurality_3_3_example}
\end{figure}
\subsection{Further Comparisons} Analogous pairwise influence comparisons are depicted in the Appendix for all possible weight distributions $\mathbf{w}$ and $r\in \{r^B, r^C, r^P, r^{PR}, r^S\}$. Again, Borda's high responsiveness to weight differences makes for more detailed case distinctions (see Figures~\ref{fig:Borda_Plurality_3_3}--\ref{fig:Borda_Schulze_3_3}). By contrast, when plurality rule is compared to either Copeland, plurality runoff, or Schulze rule (Figure~\ref{fig:Plurality_PluralityRunoffCopelandSchulze_3_3}), the recommendation always is simple and intuitive: plurality rule maximizes influence if you have the most votes. If anyone else has more votes, your influence is greater (at least weakly) under the respective other rule.
Recommendations to an influence-maximizing player are similar for Copeland vs.\ Schulze rule (Figure~\ref{fig:Copeland_Schulze_3_3}): if you wield a plurality of votes, Schulze comes with greater influence; in case someone else has more votes, it is the opposite. For Copeland vs.\ plurality runoff (Figure~\ref{fig:Copeland_PluralityRunoff_3_3}), the former gives greater influence to you if you have at least the second-most votes. If player~1 is to choose between plurality runoff and Schulze rule (Figure~\ref{fig:PluralityRunoff_Schulze_3_3}), Schulze rule gives greater influence if $w_1$ is either largest or smallest; otherwise it is better to adopt plurality runoff.
\subsection{Influence-maximizing Voting Rules}
\begin{figure}
\caption{\small Maximal influence map
for Borda, Copeland, plurality, plurality runoff, and Schulze rules for $m=3$.}
\label{fig:Best_rule_3_3}
\end{figure}
Complementing pairwise comparisons as in Figures~\ref{fig:Borda_Plurality_3_3_example} and \ref{fig:Borda_Plurality_3_3}--\ref{fig:PluralityRunoff_Schulze_3_3}, one can also check directly which of the considered voting rules maximizes a specific player's a~priori voting power at any given weight distribution. This is done in Figure~\ref{fig:Best_rule_3_3}: configurations of same color indicate the same set of influence-maximizing voting rules for player~1. (When the respective weight regions are lines or single points, we have manually enlarged them.) Tongue-in-cheek, Figure~\ref{fig:Best_rule_3_3} provides a `map' for influence-maximizing chairpersons -- or for whoever has a say on the adopted voting rule and cares about a~priori influence on decisions between three candidates. It also gives players~2 or 3 reason for criticizing adoption of a particular rule as biased.
\section{Concluding Remarks}
We have investigated how the distribution of voting power in a committee depends not only on voting weights but which of the many possible aggregation procedures for $m>2$ alternatives is adopted -- from simple plurality voting to the elaborate computation of Schulze winners. Traditional measures of voting power, such as the Penrose-Banzhaf or Shapley-Shubik indices, fail to capture this. They should be accompanied by power indices for three or more alternatives.
One such index has been proposed and illustrated here. It evaluates how sensitive the collective choice under the given aggregation rule and weights is to preference changes by an individual player. The index is proportional to the probability for a random individual preference perturbation to affect the outcome, assuming preferences are independently uniformly distributed a priori.\footnote{It is an open challenge to find properties or `axioms', like those used by \citeN{Dubey/Shapley:1979} or \citeN{Laruelle/Valenciano:2001} for the Penrose-Banzhaf index, that would characterize the proposed index without making specific probability assumptions. Our attempts have failed so far.}
A dictator player's voting power is normalized to one; a null player's power is zero. The extent to which the distribution of weights matters differs across rules. So do comparative statics regarding the number of alternatives.
How the adopted rule affects the distribution of voting power has been illustrated for (non-consensual) election of the IMF's Managing Director by its Executive Board.
Similar analysis might be conducted for multi-candidate election primaries, party conventions, corporate boards, etc.
Several case studies have shown with the benefit of hindsight how choice of a particular voting method may have affected big political decisions (cf., e.g., \citeNP{Leininger:1993}; \citeNP{Tabarrok/Spector:1999}; or \citeNP{Maskin/Sen:2016}). We deem it worthwhile to evaluate collective choice methods also from an a~priori perspective and `on average'.
For the simplest case with three options, we have considered the power implications of all conceivable weight distributions among three players and identified differences in how several prominent rules translate weights into voting power. Except for Borda rule, precise knowledge about the distribution of voting weights is typically not needed for gauging the sensitivity of outcomes to preferences of a large, middle, or small player respectively. It is of course desirable to obtain results also for bigger numbers of players and alternatives in the future.
Our analysis hopefully encourages the extension of voting power analysis to richer choice settings. There is a great variety of single-winner rules that could be added to the influence map in Figure~\ref{fig:Best_rule_3_3} (see, e.g., \citeNP{Aleskerov/Kurbanov:1999}; \citeNP[Ch.~7]{Nurmi:2006}; or \citeNP{Laslier:2012}). And nothing in principle would preclude similar analysis for multi-winner elections (see, e.g., \citeNP{Elkind/Faliszewski/Skowron/Slinko:2017}) or strategic voting equilibria, at least for restricted preference domains. It also seems worthwhile to investigate weight apportionment for two-tiered voting systems, such as US presidential elections, with more than two promising candidates. It is unknown, for instance, how the Penrose square root rule for the independent binary case (see, e.g., \citeNP[Ch.~3.4]{Felsenthal/Machover:1998}) or the Shapley linear rule for affiliated spatial preferences (cf.\ \citeNP{Kurz/Maaser/Napel:2018}) extend to two-tiered plurality decisions or plurality voting with a runoff.
\setlength{\labelsep}{-0.2cm}
\phantomsection \label{References} \addcontentsline{toc}{section}{References}
\section*{Appendix: Comparisons of voting rules for \emph{m}$\,\mathbf{=3}$} \phantomsection \addcontentsline{toc}{section}{Appendix}
\renewcommand{A-\arabic{figure}}{A-\arabic{figure}} \setcounter{figure}{0}
\begin{figure}
\caption{\small Borda vs. plurality (repeated from p.~\pageref{fig:Borda_Plurality_3_3_example}) }
\label{fig:Borda_Plurality_3_3}
\end{figure}
\begin{figure}
\caption{\small Borda vs. Copeland }
\label{fig:Borda_Copeland_3_3}
\end{figure}
\begin{figure}
\caption{\small Borda vs. plurality runoff }
\label{fig:Borda_PluralityRunoff_3_3}
\end{figure}
\vspace*{\floatsep}
\begin{figure}
\caption{\small Borda vs. Schulze }
\label{fig:Borda_Schulze_3_3}
\end{figure}
\begin{figure}
\caption{\small Plurality vs. plurality runoff, Copeland and Schulze }
\label{fig:Plurality_PluralityRunoffCopelandSchulze_3_3}
\end{figure}
\vspace*{\floatsep}
\begin{figure}
\caption{\small Copeland vs. Schulze }
\label{fig:Copeland_Schulze_3_3}
\end{figure}
\begin{figure}
\caption{\small Copeland vs. plurality runoff }
\label{fig:Copeland_PluralityRunoff_3_3}
\end{figure}
\vspace*{\floatsep}
\begin{figure}
\caption{\small Plurality runoff vs. Schulze }
\label{fig:PluralityRunoff_Schulze_3_3}
\end{figure}
\end{document}
\begin{document}
\thispagestyle{empty} \title{\bf Boundary feedback stabilization of a chain of serially connected strings} \author{ Ka\"{\i}s Ammari \thanks{UR Analysis and Control of Pde, UR 13ES64, Department of Mathematics, Faculty of Sciences of Monastir, University of Monastir, Tunisia, e-mail: [email protected]}
\, and \, Denis Mercier \thanks{UVHC, LAMAV, FR CNRS 2956, F-59313 Valenciennes, France, email: [email protected]}} \date{} \maketitle
\begin{quotation} {\bf Abstract.} {\small We consider $N$ strings connected one to another and forming a particular network, namely a chain of strings. We study a stabilization problem and, more precisely, we prove that the energy of the solutions of the dissipative system decays exponentially to zero as time tends to infinity, independently of the densities of the strings. Our technique is based on a frequency domain method and a special analysis for the resolvent. Moreover, by the same approach, we study the transfer function associated with the chain of strings and the stability of the Schr\"odinger system.} \end{quotation} 2010 Mathematics Subject Classification. 35L05, 35M10, 35R02, 47A10, 93D15, 93D20.\\ Key words and phrases. Network, wave equation, resolvent method, transfer function, boundary feedback stabilization.
\section{Introduction} \label{secintro}
\setcounter{equation}{0} We consider the evolution problem $(P)$ described by the following system of $N$ equations: \begin{equation*} \leqno(P) \left \{ \begin{array}{l} (\partial_t^2 u_{j}- \rho_j \partial_x^2u_{j})(t,x)=0,\, x\in(j,j+1),\, t\in(0,\infty),\, j = 0,...,N-1, \\ \rho_0 \, \partial_x u_0(t,0) = { \bf \partial_t u_0(t,0)},\ u_{N-1}(t,N)=0,\, t\in(0,\infty),\\ u_{j-1}(t,j)=u_{j}(t,j),\, t\in(0,\infty),\, j = 1,...,N-1,\\ - \rho_{j-1} \partial_x u_{j-1}(t,j)+ \rho_j \partial_x u_{j}(t,j)= 0,\, t\in(0,\infty),\, j = 1,...,N-1, \\ u_j(0,x)=u_j^0(x),\ \partial_t u_j(0,x)=u_j^1(x), \,x \in (j,j+1),\, j=0,...,N-1, \end{array} \right.\\ \end{equation*}
where $\rho_j > 0, \, \forall \, j=0,...,N-1$.
We can rewrite the system (P) as a first order hyperbolic system, by putting $$ V_j = \left( \begin{array}{ll} \partial_t u_j \\ \rho_j \, \partial_x u_j \end{array} \right), \; \hbox{and} \; V_j^0 = \left( \begin{array}{c} u^1_j \\ \rho_j \, \partial_x u^0_j \end{array} \right), \, 0 \leq j \leq N-1, $$
\begin{equation*} \leqno(P^\prime) \left \{ \begin{array}{l} (\partial_t V_{j}- B_j \, \partial_x V_{j})(t,x)=0,\, x\in(j,j+1),\, t\in(0,\infty),\, j = 0,...,N-1, \\ C_0 V_0(t,0) = 0, \, C_{N-1} V_{N-1} (t,N) = 0,\, t\in(0,\infty),\\ V_{j-1}(t,j)=V_{j}(t,j),\, t\in(0,\infty),\, j = 1,...,N-1,\\ V_j(0,x)= V_j^0 (x), \,x\in(j,j+1),\, j=0,...,N-1, \end{array} \right.\\ \end{equation*} where \begin{equation} \label{C0CN} B_j = \left(\begin{array}{ll} 0 & 1 \\ \rho_j & 0\end{array} \right), C_0 = \left(\begin{array}{lclc} 1 & - 1 \\ 0 & 0 \end{array} \right), \, C_{N-1} = \left(\begin{array}{ll} 1 & 0 \\ 0 & 0 \end{array} \right), \, 0 \leq j \leq N -1. \end{equation}
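Let us briefly check that $(P^\prime)$ is indeed a rewriting of $(P)$. With the above choice of $V_j$ one computes
$$ \partial_t V_{j}- B_j \, \partial_x V_{j} = \left( \begin{array}{c} \partial_t^2 u_j \\ \rho_j \, \partial_x \partial_t u_j \end{array} \right) - \left(\begin{array}{ll} 0 & 1 \\ \rho_j & 0\end{array} \right) \left( \begin{array}{c} \partial_x \partial_t u_j \\ \rho_j \, \partial_x^2 u_j \end{array} \right) = \left( \begin{array}{c} \partial_t^2 u_j - \rho_j \, \partial_x^2 u_j \\ 0 \end{array} \right), $$
so the first line of $(P^\prime)$ encodes the wave equations of $(P)$. Moreover, $C_0 V_0(t,0)=0$ is equivalent to the feedback condition $\rho_0 \, \partial_x u_0(t,0) = \partial_t u_0(t,0)$, $C_{N-1} V_{N-1}(t,N)=0$ is equivalent to $\partial_t u_{N-1}(t,N)=0$, which follows from $u_{N-1}(t,N)=0$, and the continuity of the $V_j$ at the interior nodes expresses the flux condition together with the time derivative of the continuity condition in $(P)$.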
Models of the transient behavior of some or all of the state variables describing the motion of flexible structures have been of great interest in recent years, for details about physical motivation for the models, see \cite{dagerzuazua}, \cite{lagnese} and the references therein. Mathematical analysis of transmission partial differential equations is detailed in \cite{lagnese}.
Let us first introduce some notation and definitions which will be used throughout the rest of the paper, in particular some which are linked to the notion of $C^{\nu }$- networks, $\nu \in \nline$ (as introduced in \cite{jvb}). \\ Let $\Gamma$ be a connected topological graph embedded in $\rline$, with $N$ edges ($N \in \nline^{*}$). Let $K=\{k_{j}\, :\, 0\leq j\leq N-1\}$ be the set of the edges of $\Gamma$. Each edge $k_{j}$ is a Jordan curve in $\rline$ and is assumed to be parametrized by its arc length $x_{j}$ such that the parametrization $\pi _{j}\, :\, [j,j+1]\rightarrow k_{j}\, :\, x_{j}\mapsto \pi _{j}(x_{j})$ is $\nu$-times differentiable, i.e. $\pi _{j}\in C^{\nu }([j,j+1],\rline)$ for all $0\leq j\leq N-1$. The density of the edge $k_j$ is $\rho_j>0$. The $C^{\nu}$- network $R$ associated with $\Gamma$ is then defined as the union $$R=\bigcup _{j=0}^{N-1}k_{j}.$$
We study a feedback stabilization problem for wave and Schr\"odinger equations on networks, see \cite{ammari1}-\cite{amjellk}, \cite{lagnese} and Figure \ref{fig}. \begin{figure}
\caption{Serially connected strings}
\label{fig}
\end{figure}
More precisely, we study a linear system modelling the vibrations of a chain of strings. For each edge $k_{j}$, the scalar function $u_j(t,x)$, for $x \in (j,j+1)$ and $t > 0$, describes the vertical displacement of the string $k_j$, $0 \leq j \leq N-1$.
Our aim is to study the behaviour of the resolvent of the spatial operator, which is defined in Section \ref{resolvent}, and to obtain a stability result for $(P)$.
We define the natural energy $E(t)$ of a solution $\underline{u} = (u_0,...,u_{N-1})$ of $(P)$ and the natural energy of a solution $V$ of $(P^\prime)$, respectively, by \begin{equation} \label{energy1}
E(t)=\frac{1}{2} \displaystyle \sum_{j=0}^{N-1} \left( \int_{j}^{j+1} \left(|\partial_t u_{j}(t,x)|^2+ \rho_j \, |\partial_x u_{j}(t,x)|^2\right){\rm d}x \right), \end{equation} \begin{equation} \label{energy1b}
e(t)=\frac{1}{2} \, \displaystyle \sum_{j=0}^{N-1} \left\|V_{j}\right\|^2_{L^2_{\rho_j}(j,j+1) \times L^2(j,j+1)}, \end{equation} where $L^2_{\rho_j} (j,j+1) = L^2_{\rho_j} ((j,j+1),dx) = L^2((j,j+1),\rho_j \, dx)$.
We note that $E(t) \asymp e(t), \, \forall \, t \geq 0$.
We can easily check that every sufficiently smooth solution of $(P)$ satisfies the following dissipation law \begin{equation}\label{dissipae1}
E^\prime(t) = - \displaystyle \bigl|\partial_t u_{0}(t,0)\bigr|^2\leq 0, \,
e^\prime(t) = - \displaystyle \bigl|C_0 V_{0}(t,0)\bigr|^2\leq 0, \end{equation} and therefore, the energy is a nonincreasing function of the time variable $t$.
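For the reader's convenience, here is the computation behind the first identity in \rfb{dissipae1}, written for a sufficiently smooth real-valued solution (in the complex-valued case one takes real parts):
\begin{align*}
E^\prime(t) &= \sum_{j=0}^{N-1} \int_{j}^{j+1}\left(\partial_t u_{j}\,\partial_t^2 u_{j} + \rho_j \,\partial_x u_{j}\,\partial_x \partial_t u_{j}\right){\rm d}x
= \sum_{j=0}^{N-1} \int_{j}^{j+1}\partial_x\left(\rho_j \,\partial_x u_{j}\,\partial_t u_{j}\right){\rm d}x \\
&= \sum_{j=0}^{N-1} \Big[\rho_j \,\partial_x u_{j}\,\partial_t u_{j}\Big]_{x=j}^{x=j+1},
\end{align*}
where the wave equations $\partial_t^2 u_j = \rho_j\,\partial_x^2 u_j$ have been used in the second equality. The contributions at the interior nodes cancel by the transmission conditions of $(P)$, the contribution at $x=N$ vanishes since $u_{N-1}(t,N)=0$ implies $\partial_t u_{N-1}(t,N)=0$, and the remaining term at $x=0$ equals $-\rho_0\,\partial_x u_0(t,0)\,\partial_t u_0(t,0) = -\left|\partial_t u_0(t,0)\right|^2$ by the feedback condition.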
The result concerns the well-posedness of the solutions of $(P)$ and the exponential decay of the energy $E(t)$ of the solutions of $(P)$.
The main result of this paper then concerns the precise asymptotic behaviour of the solutions of $(P)$. Our technique is based on a frequency domain method and a special analysis for the resolvent.
This paper is organized as follows: In Section \ref{wellposed}, we give the proper functional setting for system $(P)$ and prove that the system is well-posed. In Section \ref{resolvent}, we prove, by the frequency domain technique, the stabilization result for $(P)$: the energy of the solutions of $(P)$ tends to zero with an explicit exponential decay rate. Finally, in the last sections, we study the transfer function associated with a string network and the exponential stability of the Schr\"odinger system.
\section{\label{wellposed}Well-posedness of the system}
In order to study system $(P)$ we need a proper functional setting. We define the following spaces $$ H = \displaystyle \prod_{j=0}^{N-1} (L^2_{\rho_j}(j,j+1) \times L^2(j,j+1)) $$ and $$ V= \bigg \{\underline{u}=(u_0,...,u_{N-1}) \in \displaystyle \prod_{j=0}^{N-1} H^1(j,j+1),\\ u_{N-1}(N)=0, \, u_{j-1}(j) = u_{j} (j), \, j=1, \ldots ,N-1 \bigg \}, $$ equipped with the inner products \begin{equation}\label{ipVb} <V,\tilde{V}>_{H}=\sum_{j=0}^{N-1}\left(\int_j^{j+1} \rho_j \, u_{j}(x) \, \overline{\tilde{u}_{j}(x)} + v_{j}(x) \, \overline{\tilde{v}_{j}(x)} dx\right), \, V = \left(\begin{array}{c} \underline{u} \\ \underline{v} \end{array} \right), \tilde{V} = \left(\begin{array}{c} \underline{\tilde{u}} \\ \underline{\tilde{v}} \end{array} \right), \end{equation} \begin{equation}\label{ipV} <\underline{u},\,\underline{\tilde{u}}>_{V}=\sum_{j=0}^{N-1}\left(\int_j^{j+1} \rho_j \partial_x u_{j}(x)\partial_x \overline{\tilde{u}_{j}(x)} dx\right). \end{equation}
It is well-known that system $(P)$ may be rewritten as the first order evolution equation \begin{equation} \left\{ \begin{array}{l} U^\prime =\mathcal{A} U,\\ U(0)=(\underline{u}^{0},\,\underline{u}^{1})=U_0,\end{array}\right.\label{pbfirstorder}\end{equation} where $U$ is the vector $U=(\underbar{u},\,\partial_t \underbar{u})^t$ and the operator $\mathcal{A} : {\cal D}({\cal A}) \rightarrow {\cal H} = V\times \displaystyle \prod_{j=0}^{N-1} L^2(j,j+1)$ is defined by \[\mathcal{A} (\underline{u},\underline{v})^t:=(\underline{v}, (\rho_j \partial_x^2u_{j})_{0 \leq j\leq N-1})^t,\] with \begin{multline*} {\cal D}({\cal A}):=\left\{(\underline{u},\,\underline{v})\in \prod_{j=0}^{N-1} H^2(j,j+1) \times V \,: \mbox {\textrm{satisfies }} \,(\ref {e2}) \; \mbox{\textrm{to}} \; (\ref {e5}) \; \mbox{\textrm{hereafter}} \right\},\end{multline*} \begin{equation}\label{e2} \rho_0 \, \partial_x u_{0}(0) = v_{0}(0) \end{equation} \begin{equation}\label{e5} - \rho_j \partial_x u_{j}(j) + \rho_{j-1} \partial_x u_{j-1}(j)= 0, \quad j = 1,...,N-1. \end{equation}
It is clear that $\mathcal{H}$ is a Hilbert space, equipped with the usual inner product \begin{multline*} \left\langle\left(\begin{array}{c}\underline{u}\\\underline{v}\end{array}\right), \left(\begin{array}{c}\underline{\tilde{u}}\\ \underline{\tilde{v}}\end{array}\right)\right\rangle_{{\cal H}} = \sum_{j=0}^{N-1}\left(\int_{j}^{j+1}\left(v_{j}(x)\overline{\tilde{v}_{j}(x)} + \rho_j \partial_x u_{j}(x)\partial_x\overline{\tilde{u}_{j}(x)}\right){\rm d}x. \right. \end{multline*}
In the same way, we define the operator $A$ as follows: $$ A : {\cal D}(A) \subset H \rightarrow H, \quad A V = B \, \partial_x \, V, \, \forall \, V \in {\cal D}(A), $$ where $${\cal D}(A) = \left\{V = (V_j)_{0 \leq j \leq N-1} \in H \, : \, V_j \in (H^1(j,j+1))^2, \, V_{j-1} (j) = V_j (j), \, 1 \leq j \leq N-1, \right. $$ $$ \left. C_0 V_0 (0) = 0, \, C_{N-1} V_{N-1} (N) = 0 \right\} $$ and $B = (B_j)_{0 \leq j \leq N-1}.$
Now we can prove the well-posedness of system $(P)$ and that the solution of $(P)$ satisfies the dissipation law (\ref{dissipae1}).
\begin{proposition}\label{3exist1} (i) For an initial datum $U_{0}\in \mathcal{H}$, there exists a unique solution $U\in C([0,\,+\infty),\, \mathcal{H})$ to problem (\ref{pbfirstorder}). Moreover, if $U_{0}\in \mathcal{D}(\mathcal{A})$, then $$U\in C([0,\,+\infty),\, \mathcal{D}(\mathcal{A}))\cap C^{1}([0,\,+\infty),\, \mathcal{H}).$$
(ii) The solution $\underline{u}$ of $(P)$ with initial datum in $\mathcal{D}(\mathcal{A})$ satisfies \rfb{dissipae1}. Therefore the energy is decreasing. \end{proposition}
\begin{proof} (i) By Lumer-Phillips' theorem (see \cite{Pazy, tucsnakbook}), it suffices to show that $\mathcal{A}$ is dissipative and maximal.
We first prove that $\mathcal{A}$ is dissipative. Take $U=(\underline{u},\underline{v})^{t}\in \mathcal{D}(\mathcal{A})$. Then \begin{multline*} \left\langle\mathcal{A}U,\, U \right\rangle_{\mathcal{H}}=\sum_{j=0}^{N-1}\left(\int_{j}^{j+1}\left(\rho_j \partial_x^2u_{j}(x)\overline{v_{j}(x)} + \rho_j \partial_x v_{j}(x)\partial_x \overline{u_{j}(x)}\right){\rm d}x\right ). \end{multline*} By integration by parts and by using the transmission and boundary conditions, we have
\begin{equation}\label{dissipativeness}
\Re\left(\left\langle\mathcal{A}U,\, U \right\rangle_{\mathcal{H}}\right)=- \left|v_{0}(0)\right|^2\leq 0. \end{equation} This shows the dissipativeness of $\mathcal{A}$.
Let us now prove that $\mathcal{A}$ is maximal, i.e. that $\lambda I-\mathcal{A}$ is surjective for some $\lambda>0$.
Let $(\underline{f}, \underline{g})^{t}\in \mathcal{H}$. We look for $U=(\underline{u}, \underline{v})^{t}\in \mathcal{D}(\mathcal{A})$ solution of \begin{equation}\label{eqmaxmon} (\lambda I-\mathcal{A})\left(\begin{array}{c} \underline{u}\\\underline{v}\end{array}\right)=\left(\begin{array}{c} \underline{f}\\\underline{g}\end{array}\right),\end{equation} or equivalently \begin{equation} \left\{ \begin{array}{ll} \lambda u_{j}-v_{j}=f_{j} & \forall j\in\{0,...,N-1\},\\ \lambda v_{j}- \rho_j \partial^{2}_xu_{j}=g_{j} & \forall j\in\{0,...,N-1\}.\end{array}\right.\label{eqmaxmon2}\end{equation}
Suppose that we have found $\underline{u}$ with the appropriate regularity. Then for all $j\in\{0,...,N-1\},$ we have \begin{equation} v_{j}:=\lambda u_{j}-f_{j}\in V.\label{maxmonv}\end{equation} It remains to find $\underline{u}$. By (\ref{eqmaxmon2}) and (\ref{maxmonv}), $u_{j}$ must satisfy, for all $j=0,...,N-1$, $$ \lambda^{2}u_{j}- \rho_j \partial^{2}_xu_{j}=g_{j}+\lambda f_{j}. $$ Multiplying these identities by a test function $\underline{\phi}$, integrating in space and using integration by parts, we obtain \begin{multline*} \sum_{j=0}^{N-1}\int_j^{j+1} \left(\lambda^2u_{j}\overline{\phi_{j}}+ \rho_j \partial_x u_{j}\partial_x \overline{\phi_{j}}\right)dx -\sum_{j=0}^{N-1}\left[\rho_j \partial_xu_{j}\overline{\phi_{j}}\right]_j^{j+1} \\ =\sum_{j=0}^{N-1}\int_j^{j+1}\left( g_j+\lambda f_j \right)\overline{\phi_j} \, dx. \end{multline*} Since $(\underline{u},\underline{v})\in \mathcal{D}(\mathcal{A})$ and $(\underline{u},\underline{v})$ satisfies (\ref{maxmonv}), we then have \begin{multline}\label{maxmoneq1} \sum_{j=0}^{N-1}\int_j^{j+1}\left(\lambda^2u_{j}\overline{\phi_{j}}+\rho_j \partial_xu_{j}\partial_x\overline{\phi_{j}}\right)dx + \\ \left(\lambda u_0(0) - f_0(0) \right) \overline{\phi_0}(0) =\sum_{j=0}^{N-1}\int_j^{j+1}\left(g_j+\lambda f_j\right)\overline{\phi_j}dx. \end{multline} This problem has a unique solution $\underline{u}\in V$ by Lax-Milgram's lemma, because the left-hand side of (\ref{maxmoneq1}) is coercive on $V$. If we consider $\underline{\phi}\in \displaystyle \prod_{j=0}^{N-1}\mathcal{D}(j,j+1)\subset V$, then $\underline{u}$ satisfies $$\begin{array}{c} \displaystyle{\lambda^{2}u_{j}-\rho_j \partial_x^{2}u_{j}=g_{j}+\lambda f_{j} \quad\hbox{ in } \mathcal{D}^\prime (j,j+1),\quad j=0,\cdots,N-1.} \end{array}$$ This directly implies that $\underline{u}\in \displaystyle \prod_{j=0}^{N-1}H^{2}(j,j+1)$ and then $\underline{u}\in V\cap \displaystyle \prod_{j=0}^{N-1}H^{2}(j,j+1)$. Coming back to (\ref{maxmoneq1}) and by integrating by parts, we find $$\begin{array}{ll} - \displaystyle \sum_{j=0}^{N-1}\left(\rho_j \partial_x u_{j}(j) \overline{\phi_{j}}(j) - \rho_{j} \partial_x u_{j}(j+1)\overline{\phi_{j}}(j+1) \right)\\ + \left(\lambda u_0(0) - f_0(0) \right)\overline{\phi_0}(0) = 0. \end{array} $$ Consequently, by taking particular test functions $\underline{\phi}$, we obtain $$\begin{array}{c} \rho_0 \partial_x u_{0}(0)= v_0(0) \quad \hbox{and}\quad \rho_j \partial_x u_{j}(j) - \rho_{j-1} \partial_x u_{j-1}(j) =0, \, j=1,\cdots,N-1. \end{array}$$ In summary we have found $(\underline{u},\underline{v})^{t}\in \mathcal{D}(\mathcal{A})$ satisfying (\ref{eqmaxmon}), which finishes the proof of (i).
(ii) To prove (ii), it suffices to differentiate the energy (\ref{energy1}) for regular solutions and to use system $(P)$. The calculations are analogous to those in the proof of the dissipativeness of $\mathcal{A}$ in (i) and are therefore left to the reader. \end{proof}
\begin{remark} \label{opha} In the same way one can prove that the operator $A$ is an m-dissipative operator on $H$ and generates a $C_0$-semigroup of contractions on $H$. \end{remark}
\section{Exponential stability} \label{resolvent}
We prove a decay result for the energy of system $(P)$, independently of $N$ and of the densities, for all initial data in the energy space. Our technique is based on a frequency domain method and a special analysis for the resolvent. \begin{theorem} \label{lr} There exist constants $C, \omega >0$ such that, for all $(\underline{u}^0,\underline{u}^1)\in {\cal H}$, the solution of system $(P)$ satisfies the following estimate \BEQ{EXPDECEXP3nb} E(t) \le C \, e^{- \omega \,t} \, \left\Vert (\underline{u}^0,\underline{u}^1) \right\Vert_{{\cal H}}^2, \FORALL t > 0. \end{equation} \end{theorem}
{\it Proof.} By a classical result (see Huang \cite{huang} and Pr\"{u}ss \cite{pruss}), it suffices to show that ${\cal A}$, which generates a $C_0$ semigroup of contractions on a Hilbert space, satisfies the following two conditions:
\begin{equation}
\rho ({\cal A})\supset \bigr\{i \beta \bigm|\beta \in \rline \bigr\} \equiv i \rline, \label{1.8w} \end{equation}
and \begin{equation} \limsup_{|\beta |\to \infty } \|(i\beta -{\cal A})^{-1}\|_{{\cal L}({\cal H})} <\infty, \label{1.9} \end{equation} where $\rho({\cal A})$ denotes the resolvent set of the operator ${\cal A}$.
The proof of Theorem \ref{lr} is then based on the following two lemmas.
\begin{lemma} \label{condsp} The spectrum of ${\cal A}$ contains no point on the imaginary axis. \end{lemma}
\begin{proof} Since ${\cal A}$ has compact resolvent, its spectrum $\sigma({\cal A})$ only consists of eigenvalues of ${\cal A}$. We will show that the equation \begin{equation} {\cal A} Z = i \beta \, Z \label{1.10} \end{equation} with $Z= \left(\begin{array}{l} \underline{y} \cr \underline{v} \end{array} \right) \in {\cal D}({\cal A})$ and $\beta \neq 0$ has only the trivial solution.
By taking the inner product of (\ref{1.10}) with $Z \in {\cal H}$ and using \begin{equation} \label{1.7}
\Re <{\cal A}Z,Z>_{{\cal H}} = - \, \left| v_0 (0)\right|^2, \end{equation} we obtain that $v_0(0)=0$. Next, we eliminate $\underline{v}$ in (\ref{1.10}) to get a second order ordinary differential system: \begin{equation} \left\{ \begin{array}{l} \rho_j \, \frac{d^2 y_j}{dx^2} + \beta^2 \, y_j = 0, \, \hbox{ in } (j,j+1), \, j = 0,...,N-1,\\ y_0(0) = \frac{d y_0}{dx}(0)=0, \, y_{N-1} (N) = 0, \\ y_{j-1}(j) = y_{j}(j),\, j = 1,...,N-1. \end{array} \right. \label{1.11} \end{equation} The above system has only the trivial solution: since $y_0(0)=\frac{d y_0}{dx}(0)=0$, uniqueness for the Cauchy problem gives $y_0\equiv 0$ on $(0,1)$, and the transmission conditions contained in ${\cal D}({\cal A})$ propagate the zero Cauchy data across the interior nodes, so that $\underline{y}=0$, $\underline{v}=i\beta \underline{y}=0$ and $Z=0$.
\end{proof}
\begin{lemma}\label{lemresolvent} The resolvent operator of $\mathcal{A}$ satisfies condition \rfb{1.9}. \end{lemma}
\begin{proof} In order to prove \rfb{1.9}, or equivalently the following estimate, \begin{equation}
\limsup_{|\beta |\to \infty } \|(i\beta -A)^{-1}\|_{{\cal L}(H)} <\infty, \label{1.9n} \end{equation} we will compute and estimate the resolvent of the operator $A$ associated with the problem $(P^\prime)$. \\ More precisely, for $\lambda=i\beta$, $\beta \in \R$, and $G=(G_0,...,G_{N-1}) \in H$, we look for $W=(W_0,...,W_{N-1})\in {\cal D}(A)$ solution of \begin{equation} \label{respb} (i \beta - B \partial_x ) W =G, \end{equation} where $B=(B_0,...,B_{N-1})$ and $$B_j=\left(\begin{array}{ll} 0 & 1 \\ \rho_j & 0 \end{array} \right), \;j=0,...,N-1. $$ We want to prove that there exists a constant $C$ independent of $\beta$ such that \begin{equation} \label{estresolv} \norm{W}{H} \leq C \norm{G}{H}.\end{equation}
{\bf First step:} Computation of the resolvent.\\ From (\ref{respb}) we have $$\partial_x W_j = i\beta B_j^{-1} W_j - B_j^{-1}G_j, \quad j=0,...,N-1,$$ and therefore
\begin{equation}\label{W0} W_0(x)=e^{i \beta (x-1) B_0^{-1} }F_0-\int_1^x e^{i \beta (x-s) B_0^{-1} } B_0^{-1} G_0(s)\; ds, \forall x\in [0,1],\end{equation} and \begin{equation} \label{Wj} W_j(x)=e^{i \beta (x-j) B_j^{-1} }F_j-\int_j^x e^{i \beta (x-s) B_j^{-1} } B_j^{-1} G_j(s) \; ds, \forall x\in [j,j+1],j=1,2,...,N-1,\end{equation} where $F_0=W_0(1)$ and $F_j=W_j(j), j=1,...,N-1.$ For simplification we set \begin{equation}\label{Gjt} \tilde{G_j} (x)=\int_j^x e^{i \beta(x-s) B_j^{-1} } B_j^{-1} G_j(s) \; ds ,\; j=1,...,N-1, \qquad \tilde{G_0} (x)=\int_1^x e^{i \beta(x-s) B_0^{-1} } B_0^{-1} G_0(s) \; ds,\end{equation} the base point for $j=0$ being chosen in accordance with (\ref{W0}). Using the transmission conditions at nodes $j=1,...,N-1$ we have \begin{equation}\label{F1} F_1=F_0, \mbox{ and } F_j=W_{j-1}(j), j=2,...,N-1,\end{equation} which implies that \begin{equation}\label{Fj0} F_j=\left( \prod_{k=j-1}^{k=1} e^{i \beta B_k^{-1}}\right)F_0- \left(\sum_{p=2}^{j-1}\left( \prod_{k=j-1}^{k=p} e^{i \beta B_k^{-1}}\right) \tilde{G}_{p-1}(p) + \tilde{G}_{j-1}(j)\right) ,j=2,...,N-1.\end{equation} For all $j=1,...,N-1$ we set $$M_j(\beta)=\left( \prod_{k=j-1}^{k=1} e^{i \beta B_k^{-1}}\right)$$ and \begin{equation} \label{gammaj}\Gamma_j(\beta)=\sum_{p=2}^{j-1}\left( \prod_{k=j-1}^{k=p} e^{i \beta B_k^{-1}}\right) \tilde{G}_{p-1}(p)
+\tilde{G}_{j-1}(j),\end{equation} hence \begin{equation} \label{Fj} F_j=M_j(\beta) F_0-\Gamma_j(\beta).\end{equation} Note that the solution $W$ is completely determined if $F_0$ is known. Indeed, it suffices to insert the identity (\ref{Fj}) in (\ref{Wj}).
Thus, we give the equation satisfied by $F_0.$ The boundary conditions at nodes $x=0$ and $x=N$ are respectively $C_0 W_0(0)=\left(\begin{array}{l} 0 \\ 0 \end{array} \right),$ and $C_{N-1} W_{N-1}(N)=\left(\begin{array}{l} 0 \\ 0 \end{array} \right),$
where $C_0, C_{N-1}$ are the matrices given in (\ref{C0CN}). Since the second lines of $C_0$ and $C_{N-1}$ vanish, the previous equations may be written as $$(1,-1). W_0(0)=0,\; \mbox{\rm and } (1,0). W_{N-1}(N)=0,$$ where ``$.$'' represents the matrix product of a row vector by a column vector. These equations are equivalent to \begin{equation}\label{eq1} (1,-1).e^{-i\beta B_0^{-1} }F_0 = (1,-1).\tilde{G_0}(0) \end{equation} and $$(1,0).e^{i\beta B_{N-1}^{-1} }F_{N-1} = (1,0).\tilde{G_{N-1}}(N). $$ Inserting (\ref{Fj}) in the previous equation we get \begin{equation}\label{eq2}(1,0).e^{i\beta B_{N-1}^{-1} }M_{N-1}(\beta)F_0 = (1,0).\left( \tilde{G}_{N-1}(N)+e^{i\beta B_{N-1}^{-1} } \Gamma_{N-1}(\beta)\right). \end{equation} If we denote by $H_{N-1}$ the $2\times 2$ matrix whose first line is the row vector $(1,-1).e^{-i\beta B_0^{-1} }$ and whose second line is $(1,0).e^{i\beta B_{N-1}^{-1} }M_{N-1}(\beta)$, i.e.
\begin{equation}\label{HN} H_{N-1}=\left(\begin{array}{ll} (1,-1).e^{-i\beta B_0^{-1} }\\ (1,\;\;0).e^{i\beta B_{N-1}^{-1}} M_{N-1}(\beta) \end{array} \right)\end{equation} (with the convention that $M_{N-1}(\beta)$ is the identity matrix when $N=1$) and by $Y_{N-1}$ the $2\times 1$ column vector defined by
\begin{equation} \label{YN} Y_{N-1}=\left(\begin{array}{c} (1,-1).\tilde{G_0}(0)\\ (1,0).\left( \tilde{G_{N-1}}(N)+e^{i\beta B_{N-1}^{-1} } \Gamma_{N-1}(\beta)\right) \end{array} \right)\end{equation} then equation (\ref{eq1}) and (\ref{eq2}) are equivalent to the following system:
\begin{equation} \label{eq3} H_{N-1} F_0= Y_{N-1}. \end{equation}
{\bf Second step: estimate of $F_0$} \\
We start by giving an estimate of $\tilde{G}=(\tilde{G_0},...,\tilde{G}_{N-1})$, where the $\tilde{G_j}$ are defined in (\ref{Gjt}).
For all $j=0,...,N-1,$ the matrix $B_j$ is invertible, with $$B_j^{-1}=\left( \begin{array}{cc}
0 & \frac{1}{\rho_j} \\
1 & 0 \end{array} \right)$$
and, observing that $(B_j^{-1})^2=\rho_j^{-1} I_2$, we easily find that \begin{equation}\label{expbj} e^{i\beta x B_j^{-1}}=\left( \begin{array}{cc}
\cos(\dfrac{\beta x}{\sqrt{\rho_j}}) & \dfrac{i \sin(\dfrac{\beta x}{\sqrt{\rho_j}})}{\sqrt{\rho_j}} \\
i \sqrt{\rho_j}\sin(\dfrac{\beta x}{\sqrt{\rho_j}}) & \cos(\dfrac{\beta x}{\sqrt{\rho_j}}) \end{array} \right). \end{equation} Since $\beta \in \R,$ from the previous identity, we directly get the following estimates
\begin{equation} \label{est1}
|\tilde{G_j}(j)|
\lesssim \norm{G}{H},\;|\tilde{G_j}(j+1)| \lesssim \norm{G}{H}, j=0,...,N-1 \ \end{equation}
\begin{equation} \label{est2} \norm{\tilde{G}}{H} \lesssim \norm{G}{H}. \end{equation}
From the definition of $\Gamma_j$ in (\ref{gammaj}) we also get
\begin{equation} \label{est3} |\Gamma_j(\beta) | \lesssim \norm{G}{H}, j=1,...,N-1. \end{equation}
It follows that
\begin{equation} \label{est4} \|Y_{N-1}\| \lesssim \norm{G}{H}. \end{equation}
Note that, by (\ref{expbj}), the entries of $H_{N-1}$ are bounded, and so are the entries of the matrix $( \mbox{\rm com}\, H_{N-1})^T$ (the transpose of its comatrix). Assume for the moment that there exists a constant $\gamma_{N-1}>0$ such that
\begin{equation} \label{det} \forall \beta \in\R, |\det(H_{N-1})|\geq \gamma_{N-1},\end{equation} then it follows with (\ref{est4}) that
\begin{equation}\label{estF0} \|F_0\|=\left\| \dfrac{1} {\det H_{N-1}} ( \mbox{\rm com}\, H_{N-1})^T Y_{N-1} \right\| \lesssim \norm{G}{H}.\end{equation} It remains to prove (\ref{det}).
The idea of the proof is that (\ref{det}) is easily checked for $N=1$ and that this property propagates by induction on $N$.
First, similarly to (\ref{HN}), we define for all $N\in \N^*$ the matrix \begin{equation}\label{HNt} \tilde{H}_{N-1}=\left(\begin{array}{ll} (1,-1).e^{-i\beta B_0^{-1} }\\ (0,\;\;1).e^{i\beta B_{N-1}^{-1}} M_{N-1}(\beta) \end{array} \right)\end{equation} (again with $M_{N-1}(\beta)$ equal to the identity matrix when $N=1$) and we set $$D_{N-1}=\det(H_{N-1}),\;\; \tilde{D}_{N-1}=\det(\tilde{H}_{N-1}), \forall N\in \N^*.$$ In particular, for $N=1$ we have $$H_0=\left( \begin{array}{cc}
\cos(\dfrac{\beta}{\sqrt{\rho_0}})+i \sqrt{\rho_0} \sin(\dfrac{\beta}{\sqrt{\rho_0}}) & -\cos(\dfrac{\beta}{\sqrt{\rho_0}})-i\dfrac{ \sin(\dfrac{\beta}{\sqrt{\rho_0}})}{\sqrt{\rho_0}} \\
\cos(\dfrac{\beta}{\sqrt{\rho_0}}) & -i\dfrac{ \sin(\dfrac{\beta}{\sqrt{\rho_0}})}{\sqrt{\rho_0}} \end{array} \right),$$ and $$\tilde{H}_0=\left( \begin{array}{cc}
\cos(\dfrac{\beta}{\sqrt{\rho_0}})+i \sqrt{\rho_0} \sin(\dfrac{\beta}{\sqrt{\rho_0}}) & -\cos(\dfrac{\beta}{\sqrt{\rho_0}})-i\dfrac{ \sin(\dfrac{\beta}{\sqrt{\rho_0}})}{\sqrt{\rho_0}} \\ i \sqrt{\rho_0} \sin(\dfrac{\beta}{\sqrt{\rho_0}}) & \cos(\dfrac{\beta}{\sqrt{\rho_0}}) \end{array} \right).$$ Thus $$D_0= \cos(\dfrac{2\beta}{\sqrt{\rho_0}}) + i \dfrac{\sin(\dfrac{2\beta}{\sqrt{\rho_0}})}{\sqrt{\rho_0}},\; \tilde{D}_0= \cos(\dfrac{2\beta}{\sqrt{\rho_0}}) + i \sqrt{\rho_0} \sin(\dfrac{2\beta}{\sqrt{\rho_0}}),$$
and $$|D_0|^2=\cos^2(\dfrac{2\beta}{\sqrt{\rho_0}})+\dfrac{\sin^2(\dfrac{2\beta}{\sqrt{\rho_0}})}{\rho_0}\geq \min(1,\dfrac{1}{\rho_0})>0,$$
$$|\tilde{D}_0|^2=\cos^2(\dfrac{2\beta}{\sqrt{\rho_0}})+\rho_0\sin^2(\dfrac{2\beta}{\sqrt{\rho_0}})\geq \min(1,\rho_0)>0.$$ It is useful for the sequel to remark that $$\Re(D_0 \overline{\tilde{D}_0})=1.$$ Using (\ref{expbj}) we have the following identities $$\left\{\begin{array}{lll} D_{N-1} = \cos(\dfrac{\beta}{\sqrt{\rho_{N-1}}})D_{N-2}+\dfrac{i}{\sqrt{\rho_{N-1}}} \sin(\dfrac{\beta }{\sqrt{\rho_{N-1}}})\tilde{D}_{N-2}, \\ \tilde{D}_{N-1} = i \sqrt{\rho_{N-1}} \sin(\dfrac{\beta }{\sqrt{\rho_{N-1}}})D_{N-2}+\cos(\dfrac{\beta}{\sqrt{\rho_{N-1}}})\tilde{D}_{N-2}. \end{array} \right. $$ A simple computation shows that $$\Re(D_{N-1}\overline{\tilde{D}_{N-1}})=\Re(D_{N-2}\overline{\tilde{D}_{N-2}}),$$ and consequently $$\forall N \in \N^*, \;\Re(D_{N-1}\overline{\tilde{D}_{N-1}})=1.$$
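For completeness, the computation leading to the last two displays runs as follows: writing $\theta=\beta/\sqrt{\rho_{N-1}}$ and $\rho=\rho_{N-1}$ (so that $\cos\theta,\sin\theta\in\R$), we have
\begin{align*}
D_{N-1}\,\overline{\tilde{D}_{N-1}} &= \left(\cos\theta\, D_{N-2}+\frac{i}{\sqrt{\rho}}\sin\theta\, \tilde{D}_{N-2}\right)\left(-i\sqrt{\rho}\,\sin\theta\, \overline{D_{N-2}}+\cos\theta\, \overline{\tilde{D}_{N-2}}\right)\\
&= \cos^2\theta\, D_{N-2}\overline{\tilde{D}_{N-2}} + \sin^2\theta\, \overline{D_{N-2}\overline{\tilde{D}_{N-2}}} + i\sin\theta\cos\theta\left(\frac{1}{\sqrt{\rho}}\,|\tilde{D}_{N-2}|^2-\sqrt{\rho}\,|D_{N-2}|^2\right),
\end{align*}
and taking real parts yields $\Re(D_{N-1}\overline{\tilde{D}_{N-1}})=(\cos^2\theta+\sin^2\theta)\,\Re(D_{N-2}\overline{\tilde{D}_{N-2}})=\Re(D_{N-2}\overline{\tilde{D}_{N-2}})$; together with $\Re(D_0\overline{\tilde{D}_0})=1$, the claimed identity follows by induction on $N$.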
Now, since
$$|D_{N-1}|^2= (\cos(\dfrac{\beta}{\sqrt{\rho_{N-1}}}),\sin(\dfrac{\beta}{\sqrt{\rho_{N-1}}})) \left(\begin{array}{ll}
|D_{N-2}|^2 &\dfrac{1}{\sqrt{\rho_{N-1}}} \Im(D_{N-2}\overline{\tilde{D}_{N-2}}) \\
\dfrac{1}{\sqrt{\rho_{N-1}}}\Im(D_{N-2}\overline{\tilde{D}_{N-2}}) & |\dfrac{1}{\sqrt{\rho_{N-1}}}\tilde{D}_{N-2}|^2 \end{array} \right) \left(\begin{array}{l} \cos(\dfrac{\beta}{\sqrt{\rho_{N-1}}}) \\ \sin(\dfrac{\beta}{\sqrt{\rho_{N-1}}}) \end{array} \right), $$ it follows that
\begin{equation} \label{mu}|D_{N-1}|^2 \geq \mu_{min,N-2},\end{equation}
where $\mu_{min,N-2}$ is the smallest eigenvalue of the matrix in the previous identity. The determinant of this matrix is $$\dfrac{1}{\rho_{N-1}}\Re(D_{N-2}\overline{\tilde{D}_{N-2}})^2=\dfrac{1}{\rho_{N-1}}.$$ Since $D_{N-2}$ and $\tilde{D}_{N-2}$ are clearly bounded, the trace of this matrix is bounded, i.e.
$$\exists C'_{N-1}>0, \; |D_{N-2}|^2+ |\dfrac{1}{\sqrt{\rho_{N-1}}}\tilde{D}_{N-2}|^2 \leq C'_{N-1}.$$ Since the product of the two (positive) eigenvalues equals the determinant and the largest one is bounded by the trace, it follows that $$\mu_{min,N-2}\geq \dfrac{1}{\rho_{N-1} C'_{N-1}}.$$ Setting $\gamma_{N-1}=\sqrt{\dfrac{1}{\rho_{N-1} C'_{N-1}}},$ we see that (\ref{mu}) implies (\ref{det}). Consequently we have proved the estimate (\ref{estF0}) for $F_0.$
Finally, using estimates (\ref{estF0}), (\ref{est1}), (\ref{est2}) in (\ref{F1}) and (\ref{Fj0}) and the fact that the matrices involved in (\ref{Fj0}) are uniformly bounded we get
$$\|F_j\| \lesssim \norm{G}{H}, \quad \forall j=1,...,N-1.$$ Using (\ref{estF0}) and the previous estimates in (\ref{W0}) and (\ref{Wj}), we get (\ref{estresolv}).
This implies \rfb{1.9n}, hence \rfb{1.9}, and ends the proof of Theorem \ref{lr}.
\end{proof}
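The quantities appearing in the proof of Lemma \ref{lemresolvent} are easy to evaluate numerically. The following sketch (plain Python, with hypothetical function and variable names) implements the recursion for $D_{N-1}(\beta)$ and $\tilde{D}_{N-1}(\beta)$ and can be used to check, for a given choice of densities, the invariant $\Re\big(D_{N-1}\overline{\tilde{D}_{N-1}}\big)=1$ and the uniform lower bound (\ref{det}) on a grid of frequencies $\beta$:
\begin{verbatim}
import math

def D_pair(beta, rhos):
    # D_0 and tilde D_0 for the single string j = 0, followed by the
    # recursion over the remaining edges, as in the proof above.
    r0 = rhos[0]
    t = 2.0 * beta / math.sqrt(r0)
    D = complex(math.cos(t), math.sin(t) / math.sqrt(r0))
    Dt = complex(math.cos(t), math.sqrt(r0) * math.sin(t))
    for r in rhos[1:]:
        c = math.cos(beta / math.sqrt(r))
        s = math.sin(beta / math.sqrt(r))
        D, Dt = (c * D + 1j * s / math.sqrt(r) * Dt,
                 1j * math.sqrt(r) * s * D + c * Dt)
    return D, Dt

rhos = (1.0, 2.0, 0.5)                       # an arbitrary choice of densities
betas = [0.01 * k for k in range(1, 50001)]
values = [D_pair(b, rhos) for b in betas]
print(min(abs(D) for D, _ in values))        # stays bounded away from zero
print(max(abs((D * Dt.conjugate()).real - 1.0) for D, Dt in values))  # ~ 0
\end{verbatim}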
\section{Comments and related questions} The same strategy can be applied to stabilize the following models and to verify and compute the transfer function.
\subsection{Transfer function} We can use the same strategy to verify that the operator $H(\lambda) = \lambda \, C^*(\lambda^2 I +\underline{A} )^{-1} \, C \in {\cal L}(U), \, \lambda \in {\bl C}_+,$ satisfies the property (\ref{Hest}) of the following lemma, where $\underline{A}$ is the self-adjoint operator corresponding to the conservative problem associated with problem (P), namely the one obtained by replacing in (P) the boundary feedback condition by \begin{equation}\label{newe2} \rho_0 \, \partial_x u_{0}(t,0) =0, \end{equation} i.e.,
$\underline{A} : {\cal D}(\underline{A}) \subset \underline{H} = \displaystyle \prod_{j=0}^{N-1} L^2(j,j+1) \rightarrow \underline{H}$ is defined by \[\underline{A} (\underline{u}):=(- \, \rho_j \partial_x^2u_{j})_{0 \leq j\leq N-1}, \] with \begin{multline*} {\cal D}(\underline{A}):=\left\{\underline{u} \in \prod_{j=0}^{N-1} H^2(j,j+1)\,: \mbox {\textrm{satisfies }} \,(\ref {e2bn}) \; \mbox{\textrm{to}} \; (\ref {e5bn}) \; \mbox{\textrm{hereafter}} \right\},\end{multline*} \begin{equation}\label{e2bn} \rho_0 \, \partial_x u_{0}(0) = 0 \end{equation} \begin{equation}\label{e5bn} - \rho_j \partial_x u_{j}(j) + \rho_{j-1} \partial_x u_{j-1}(j)= 0, \quad j = 1,...,N-1. \end{equation} $$ C \in {\cal L}(\mathbb{C}, V^\prime = {\cal D}(\underline{A}^\half)^\prime), \, Ck = \sqrt{\rho_0} \, \underline{A}_{-1} \underline{N} k = k \, \left( \begin{array}{c} \frac{1}{\sqrt{\rho_0}} \, \delta_0 \\ . \\ . \\. \\ 0 \end{array} \right), \, \forall \, k \in \mathbb{C}, $$ $$ C^* \underline{u} = \left( \frac{1}{\sqrt{\rho_0}} \, u_0(0) \ 0 \; ... \; 0 \right), \, \forall \, \underline{u} \in V, $$ where $\underline{A}_{-1}$ is the extension of $\underline{A}$ to $({\cal D}(\underline{A}))^\prime$ (the duality is in the sense of $\underline{H}$) and $\underline{N}$ is the Neumann map, $$ \left\{ \begin{array}{lll} \rho_j \, \partial_{x}^2 (\underline{N}k)_j = 0, \, (j,j+1), \, 0 \leq j \leq N -1, \\ \partial_{x} (\underline{N} k)_0 (0) = k, \, (\underline{N}k)_{N -1} (N) = 0,\\ \rho_{j-1} \, \partial_{x} (\underline{N}k)_{j-1} (j) = \rho_{j} \, \partial_{x} (\underline{N}k)_{j} (j), \, 1 \leq j \leq N-1. \end{array} \right. $$
\begin{lemma} The transfer function $H$ satisfies the following estimate: \begin{equation}\label{Hest}
\sup_{\Re \lambda = \gamma} \left\| \lambda \, C^*(\lambda^2 I
+\underline{A})^{-1} \, C \right\|_{{\cal L}(U)} < \infty, \end{equation} for $\gamma > 0.$ \end{lemma} \begin{proof}
In the same way as the proof of Lemma \ref{lemresolvent}, we give an equivalent formulation of the function $H.$ For that purpose, we consider $W=(W_j)_{0 \leq j \leq N-1} \in H,W_j \in (H^1(j,j+1))^2$ solution of \begin{equation} \label{respbb} \begin{array}{l} (\lambda - B \partial_x ) W =0, \\ W_{j-1} (j) = W_j (j), 1 \leq j \leq N-1, \\ \tilde{C}_0 W_0 (0 ) = \left(\begin{array}{l} z \\ 0 \end{array} \right) ,\, C_{N-1} W_{N-1} (N) = 0 \end{array} \end{equation} where $z \in {\bl C},$ $B$ is defined as in the proof of Lemma \ref{lemresolvent}, $C_{N-1}$ is the matrix given in (\ref{C0CN}) and $\tilde{C}_0 =\left(\begin{array}{ll} 0 &1 \\ 0&0 \end{array} \right).$ Therefore, for $\lambda \in {\bl C}, \Re(\lambda)=\gamma>0,$ the transfer function is $$ H(\lambda) : z\in {\bl C} \mapsto (1,0). W_0(0)\in {\bl C}.$$ Consequently to prove (\ref{Hest}) it suffices to check that for a fixed $\gamma>0,$ there exists a constant $c_\gamma>0$ such that
\begin{equation} \label{448} \forall \lambda \in {\bl C}, \Re(\lambda)= \gamma,\;\; |(1,0). W_0(0)| \leq c_\gamma |z| . \end{equation}
Using (\ref{W0}), (\ref{Wj}), (\ref{Fj0}), we have $$W_0(x)=e^{\lambda (x-1) B_0^{-1}} F_0, W_j(x)=e^{\lambda (x-j)B_j^{-1}} F_j,j =1,...,N-1,$$ where $$ F_0=W_0(1),\; F_1=F_0,\; \mbox{\rm and } F_j=\left( \prod_{k=j-1}^{k=1} e^{\lambda B_k^{-1}}\right)F_0, \; j=2,...,N-1. $$ Therefore, from the boundary conditions at $x=0$ and $x=N,$ we find that $F_0$ is the solution of $$ H_{N-1}F_0= \left(\begin{array}{l} z \\ 0 \end{array} \right),$$ where $H_{N-1}$ is the $2\times2 $ matrix $$\left(\begin{array}{ll} (0,1).e^{-\lambda B_0^{-1}} \\ (1,0 ). \left(\displaystyle \prod_{k=N-1}^{k=1} e^{\lambda B_k^{-1}}\right) \end{array} \right), $$ with the convention that $\left(\displaystyle \prod_{k=N-1}^{k=1} e^{\lambda B_k^{-1}}\right) $ is the identity matrix if $N=1.$
{\bf Estimate of $F_0$}\\ Since for all $j$ $$e^{\lambda x B_j^{-1} }=\left( \begin{array}{cc}
\cosh(\frac{\lambda x}{c_j}) & \frac{\sinh(\frac{ \lambda x}{c_j})}{c_j} \\
c_j \sinh(\frac{\lambda x }{c_j}) & \cosh(\frac{ \lambda x}{c_j}) \end{array} \right), $$ it is clear that there exists a constant $c'_\gamma>0$ such that
\begin{equation} \label{up} \forall \lambda \in {\bl C}, \Re(\lambda)=\gamma,\; \|H_{N-1}\|\leq c'_\gamma.\end{equation} We need a similar estimate for $H_{N-1}^{-1};$ this will be done by giving a uniform lower bound on $|D_{N-1}|$ on the line $\Re(\lambda)=\gamma,$ where we have set $D_{N-1}=\det(H_{N-1}).$
Thus we introduce the matrix $$\tilde{H}_{N-1}= \left(\begin{array}{ll} (0,1).e^{-\lambda B_0^{-1}} \\ (0,1). \left( \prod_{k=N-1}^{k=1} e^{\lambda B_k^{-1}}\right) \end{array} \right), \; N\geq 1, $$ and set $\tilde{D}_{N-1}=\det(\tilde{H}_{N-1}).$ Now, we prove by iteration that $$ \Re (D_{N-1} \overline{\tilde{D}_{N-1}})\geq k_{N-1} >0.$$ For $N=1$ we have $$H_0=\left( \begin{array}{cc} -\sinh(\frac{\lambda }{c_0}) &\cosh(\frac{\lambda }{c_0}) \\
1 & 0 \end{array} \right),\; \tilde{H}_0=\left( \begin{array}{cc} -\sinh(\frac{\lambda }{c_0}) &\cosh(\frac{\lambda }{c_0}) \\
0 & 1 \end{array} \right), $$ thus $$\Re (D_0 \overline{\tilde{D}_0})=\dfrac{1}{2} \sinh(\frac{2 \gamma}{c_0} )>0.$$ Assume that there exists a constant $k_{N-2}>0$ such that \begin{equation} \label{rere} \Re (D_{N-2} \overline{\tilde{D}_{N-2}})\geq k_{N-2} >0.\end{equation}
We have the following easily checked identities $$D_{N-1}= \cosh(\frac{\lambda}{c_{N-1}}) D_{N-2}+\dfrac{1}{c_{N-1}}\sinh(\frac{\lambda}{c_{N-1}})\tilde{D}_{N-2} $$ $$\tilde{D}_{N-1}=c_{N-1} \sinh(\frac{\lambda}{c_{N-1}}) D_{N-2}+\cosh(\frac{\lambda}{c_{N-1}})\tilde{D}_{N-2}. $$ A computation leads to $$\begin{array}{lll} \Re (D_{N-1} \overline{\tilde{D}_{N-1}})&=&\Re(\cosh(\frac{\lambda}{c_{N-1}})\overline{\sinh(\frac{\lambda}{c_{N-1}})})
(c_{N-1}|D_{N-2}|^2+\dfrac{1}{c_{N-1}}|\overline{\tilde{D}_{N-2}}|^2)\\
&+&(|\cosh(\frac{\lambda}{c_{N-1}})|^2+|\sinh(\frac{\lambda}{c_{N-1}})|^2)\Re (D_{N-2} \overline{\tilde{D}_{N-2}})\\
&=& \dfrac{1}{2} \sinh(\frac{2 \gamma}{c_{N-1}} )(c_{N-1}|D_{N-2}|^2+\dfrac{1}{c_{N-1}}|\overline{\tilde{D}_{N-2}}|^2)\\ &+&\cosh(\frac{2 \gamma}{c_{N-1}} )\Re (D_{N-2} \overline{\tilde{D}_{N-2}})\\ &\geq& \cosh(\frac{2 \gamma}{c_{N-1}} ) k_{N-2}\\ &=& k_{N-1}>0. \end{array} $$ We have proved (\ref{rere}). It follows
$$|D_{N-1} \tilde{D}_{N-1}|\geq k_{N-1}>0.$$ But $|\tilde{D}_{N-1}|$ is obviously upper bounded on the line $\Re(\lambda)=\gamma,$ consequently there exists a constant $k'_{N-1}>0$ such that
$$\forall \lambda, \Re(\lambda)=\gamma,\; |D_{N-1}|\geq k'_{N-1}>0.$$ Finally, with (\ref{up}) we deduce that $H_{N-1}^{-1}$ is bounded on the line $\Re(\lambda)=\gamma$ and it follows that there exists $c_\gamma >0$ such that
$$\forall z \in {\bl C}, \forall \lambda : \Re(\lambda)=\gamma,\; |F_0|\leq c_\gamma |z|.$$
Conclusion: (\ref{448}) is a direct consequence of the previous estimate. The proof is complete.
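As a quick numerical sanity check (outside the formal argument), the following Python snippet verifies the algebraic identity used in the induction step above, namely $\Re (D_{N-1} \overline{\tilde{D}_{N-1}})=\dfrac{1}{2} \sinh(\frac{2 \gamma}{c_{N-1}} )(c_{N-1}|D_{N-2}|^2+\dfrac{1}{c_{N-1}}|\tilde{D}_{N-2}|^2)+\cosh(\frac{2 \gamma}{c_{N-1}} )\Re (D_{N-2} \overline{\tilde{D}_{N-2}})$ on the line $\Re(\lambda)=\gamma$; the values of $\gamma$ and $c_{N-1}$ below are hypothetical, and $D_{N-2}$, $\tilde{D}_{N-2}$ are random stand-ins.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
gamma, c = 0.7, 1.3                    # hypothetical gamma and c_{N-1}
for _ in range(100):
    lam = gamma + 1j * rng.normal(scale=20.0)              # a point on Re(lambda) = gamma
    D, Dt = rng.normal(size=2) + 1j * rng.normal(size=2)   # stand-ins for D_{N-2}, tilde D_{N-2}
    a, b = np.cosh(lam / c), np.sinh(lam / c)
    Dn, Dtn = a * D + b / c * Dt, c * b * D + a * Dt       # recursion for D_{N-1}, tilde D_{N-1}
    lhs = (Dn * np.conj(Dtn)).real
    rhs = (0.5 * np.sinh(2 * gamma / c) * (c * abs(D) ** 2 + abs(Dt) ** 2 / c)
           + np.cosh(2 * gamma / c) * (D * np.conj(Dt)).real)
    assert abs(lhs - rhs) < 1e-8 * max(1.0, abs(lhs))
\end{verbatim}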
\end{proof} As an application, the open loop system associated with $(P)$ satisfies a regularity property. \begin{corollary} Let $T > 0$. Then, for all $v \in L^2(0,T)$ the following problem \begin{equation*} \left \{ \begin{array}{l} (\partial_t^2 \psi_{j}- \rho_j \partial_x^2 \psi_{j})(t,x)=0,\, x\in(j,j+1),\, t\in(0,\infty),\, j = 0,...,N-1, \\ \rho_0 \, \partial_x \psi_0(t,0) = v(t),\ \psi_{N-1}(t,N)=0,\, t\in(0,\infty),\\ \psi_{j-1}(t,j)=\psi_{j}(t,j),\, t\in(0,\infty),\, j = 1,...,N-1,\\ - \rho_{j-1} \partial_x \psi_{j-1}(t,j)+ \rho_j \partial_x \psi_{j}(t,j)= 0,\, t\in(0,\infty),\, j = 1,...,N-1, \\ \psi_j(0,x)=0,\ \partial_t \psi_j(0,x)=0, \,x \in (j,j+1),\, j=0,...,N-1. \end{array} \right. \end{equation*} admits a unique solution $(\psi,\partial_t \psi) \in C(0,T; {\cal H})$ which satisfies the following regularity property (also called open loop admissibility): there exists a constant $C > 0$ such that $$
\int_0^T \left| \partial_t \psi_0 (t,0)\right|^2 \, dt \leq C \, \left\|v\right\|^2_{L^2(0,T)}, \, \forall \, v \in L^2(0,T). $$ \end{corollary}
Moreover, according to \cite[Theorem 2.2]{ammari}, we have that:
\begin{corollary} The system $(P)$ is exponentially stable in the energy space if and only if there exist $T, C > 0$ such that $$
\int_0^T \left| \partial_t \varphi_0 (t,0)\right|^2 \, dt \geq C \, \left\|(\varphi^0,\varphi^1)\right\|^2_{{\cal H}}, \, \forall \, (\varphi^0,\varphi^1) \in {\cal D}({\cal A}), $$ where \begin{multline*} {\cal D}({\cal A}):=\left\{(\underline{u},\,\underline{v})\in \prod_{j=0}^{N-1} H^2(j,j+1) \times V \,: \mbox {\textrm{satisfies }} \,(\ref {e2sd}) \; \mbox{\textrm{to}} \; (\ref {e5sd}) \; \mbox{\textrm{hereafter}} \right\},\end{multline*} \begin{equation}\label{e2sd} \rho_0 \, \partial_x u_{0}(0) = 0 \end{equation} \begin{equation}\label{e5sd} - \rho_j \partial_x u_{j}(j) + \rho_{j-1} \partial_x u_{j-1}(j)= 0, \quad j = 1,...,N-1. \end{equation} and $\varphi = (\varphi_0,...,\varphi_{N-1})$ satisfies the following problem \begin{equation*} \left \{ \begin{array}{l} (\partial_t^2 \varphi_{j}- \rho_j \partial_x^2 \varphi_{j})(t,x)=0,\, x\in(j,j+1),\, t\in(0,\infty),\, j = 0,...,N-1, \\ \rho_0 \, \partial_x \varphi_0(t,0) = 0,\ \varphi_{N-1}(t,N)=0,\, t\in(0,\infty),\\ \varphi_{j-1}(t,j)=\varphi_{j}(t,j),\, t\in(0,\infty),\, j = 1,...,N-1,\\ - \rho_{j-1} \partial_x \varphi_{j-1}(t,j)+ \rho_j \partial_x \varphi_{j}(t,j)= 0,\, t\in(0,\infty),\, j = 1,...,N-1, \\ \varphi_j(0,x)=\varphi_j^0(x),\ \partial_t \varphi_j(0,x)=\varphi_j^1(x), \,x \in (j,j+1),\, j=0,...,N-1. \end{array} \right. \end{equation*} \end{corollary}
\section{Schr\"{o}dinger system} \label{Schr}
\setcounter{equation}{0} We consider the evolution problem $(S)$ described by the following system of $N$ equations: \begin{equation*} \leqno(S) \left \{ \begin{array}{l} (\partial_t u_{j}+i \rho_j \partial_x^2u_{j})(t,x)=0,\, x\in(j,j+1),\, t\in(0,\infty),\, j = 0,...,N-1, \\ \rho_0 \, \partial_x u_0(t,0) = { \bf i u_0(t,0)},\ u_{N-1}(t,N)=0,\, t\in(0,\infty),\\ u_{j-1}(t,j)=u_{j}(t,j),\, t\in(0,\infty),\, j = 1,...,N-1,\\ - \rho_{j-1} \partial_x u_{j-1}(t,j)+ \rho_j \partial_x u_{j}(t,j)= 0,\, t\in(0,\infty),\, j = 1,...,N-1, \\ u_j(0,x)=u_j^0(x),\, j=0,...,N-1, \end{array} \right.\\ \end{equation*}
where $\rho_j > 0, \, \forall \, j=0,...,N-1$.
We define the natural energy $E(t)$ of a solution $u = (u_0,...,u_{N-1})$ of $(S)$ by \begin{equation} \label{schrenergy1}
E(t)=\frac{1}{2} \displaystyle \sum_{j=0}^{N-1} \int_{j}^{j+1} | u_{j}(t,x)|^2 {\rm d}x. \end{equation} We can easily check that every sufficiently smooth solution of $(S)$ satisfies the following dissipation law \begin{equation}\label{schrdissipae1}
E^\prime(t) = - \displaystyle \bigl| u_{0}(t,0)\bigr|^2\leq 0, \, \end{equation} and therefore, the energy is a nonincreasing function of the time variable $t$.
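For completeness, the computation behind (\ref{schrdissipae1}) runs as follows for smooth solutions: integrating by parts on each edge and using the transmission conditions, the Dirichlet condition at $x=N$ and the boundary condition at $x=0$ (the second term below is purely imaginary and thus has zero real part),
\begin{align*}
E^\prime(t) &= \Re \sum_{j=0}^{N-1} \int_{j}^{j+1} \partial_t u_{j}\, \overline{u_{j}}\, {\rm d}x
 = \Re \sum_{j=0}^{N-1} \int_{j}^{j+1} \bigl(-i \rho_j \partial_x^2 u_{j}\bigr)\, \overline{u_{j}}\, {\rm d}x \\
&= \Re \Bigl[ -i \sum_{j=0}^{N-1} \rho_j \bigl[ \partial_x u_{j}\, \overline{u_{j}} \bigr]_j^{j+1} \Bigr]
 + \Re \Bigl[ i \sum_{j=0}^{N-1} \rho_j \int_{j}^{j+1} |\partial_x u_{j}|^2 \,{\rm d}x \Bigr] \\
&= \Re \bigl[ i\, \rho_0\, \partial_x u_{0}(t,0)\, \overline{u_{0}(t,0)} \bigr]
 = \Re \bigl[ i \cdot i\, |u_{0}(t,0)|^2 \bigr] = - \bigl| u_{0}(t,0)\bigr|^2 .
\end{align*}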
In order to study system $(S)$ we introduce the following Hilbert space
$$ \mathcal{H}= \bigg \{u=(u_0,...,u_{N-1}) \in \displaystyle \prod_{j=0}^{N-1} L^2(j,j+1) \bigg \}, $$ equipped with the inner product $$ \left\langle u,\tilde{u}\right\rangle_{\mathcal{H}}=\sum_{j=0}^{N-1} \int_j^{j+1} u_{j}(x) \, \overline{\tilde{u}_{j}(x)} dx. $$
The system $(S)$ is a first order evolution equation which has the form \begin{equation} \left\{ \begin{array}{l} u' =\mathcal{A} u,\\ u(0)=u^{0},\,\end{array}\right.\label{schrfirstorder}\end{equation}
where $u^{0}=(u_0^0,u_1^0,...,u_{N-1}^0) \in {\cal H} $ and the operator $\mathcal{A} : {\cal D}({\cal A}) \rightarrow \mathcal{H}$ is defined by \[\mathcal{A} u:=(-i \rho_j \partial_x^2u_{j})_{0 \leq j\leq N-1},\] with \begin{multline*} {\cal D}({\cal A}):=\left\{u\in \prod_{j=0}^{N-1} H^2(j,j+1) \,: \mbox {\textrm{satisfies }} \,(\ref {schre2}) \; \mbox{\textrm{to}} \; (\ref {schre4}) \; \mbox{\textrm{hereafter}} \right\},\end{multline*} \begin{equation}\label{schre2} \rho_0 \, \partial_x u_0(0) = { \bf i u_0(0)}, u_{N-1}(N)=0, \end{equation} \begin{equation}\label{schre3} u_{j-1}(j)=u_{j}(j),\, j = 1,...,N-1,\\ \end{equation} \begin{equation}\label{schre4} -\rho_{j-1} \partial_x u_{j-1}(j)+ \rho_j \partial_x u_{j}(j)= 0,\, j = 1,...,N-1. \\ \end{equation}
Now we can prove the well-posedness of system $(S)$ and that the solution of $(S)$ satisfies the dissipation law (\ref{schrdissipae1}).
\begin{proposition}\label{schr3exist1} (i) For an initial datum $u^{0}\in \mathcal{H}$, there exists a unique solution $u\in C([0,\,+\infty),\, \mathcal{H})$ to problem (\ref{schrfirstorder}). Moreover, if $u^{0}\in \mathcal{D}(\mathcal{A})$, then $$u\in C([0,\,+\infty),\, \mathcal{D}(\mathcal{A}))\cap C^{1}([0,\,+\infty),\, \mathcal{H}).$$
(ii) The solution $u$ of $(S)$ with initial datum in $\mathcal{D}(\mathcal{A})$ satisfies \rfb{schrdissipae1}. Therefore the energy is decreasing. \end{proposition}
\begin{proof} (i) By Lumer-Phillips' theorem, it suffices to show that $\mathcal{A}$ is dissipative and maximal.
$\mathcal{A}$ is clearly dissipative. Indeed, by integration by parts and by using the transmission and boundary conditions, we have
\begin{equation}\label{schrdissipativeness}
\forall u \in \mathcal{D}(\mathcal{A}),\; \Re\left(\left\langle\mathcal{A}u,\, u \right\rangle_{\mathcal{H}}\right)=- \left|u_{0}(0)\right|^2\leq 0. \end{equation}
Now, let $f\in \mathcal{H}$. We look for $u\in \mathcal{D}(\mathcal{A})$ solution of \begin{equation}\label{schrbijection1} -\mathcal{A} u = f. \end{equation} or equivalently \begin{equation} i \rho_j \partial_x^2u_{j}=f_{j} ,\; \forall j\in\{0,...,N-1\}, \label{schrbijection2}\end{equation} and $u$ satisfies the boundary and transmission conditions (\ref{schre2})-(\ref{schre4}).
The general solution of (\ref{schrbijection2}) is $$u_j(x)=\dfrac{1}{i\rho_j} \int_{j}^{x}\left( \int_{j}^{y} f_j(s) ds \right ) dy + p_j(x),\; j=0,...,N-1,\; x\in[j,j+1],$$ where each $p_j$ is a polynomial of degree 1. It remains to find $p_j,\;j=0,...,N-1,$ such that the equations (\ref{schre2})-(\ref{schre4}) are satisfied. This amounts to solving a linear system with $2N$ equations and $2N$ unknowns, which admits a unique solution if and only if the corresponding homogeneous system admits only the trivial solution.
So we suppose that $p_j$, $j=0,...,N-1$, are polynomials of degree 1 which satisfy (\ref{schre2})-(\ref{schre4}). Then by integration by parts and using (\ref{schre2})-(\ref{schre4}) we get $$
\displaystyle \sum_{j=0}^{N-1} \rho_j \int_j^{j+1} |p_j'(x)|^2 dx= \sum_{j=0}^{N-1} \rho_j \bigl[p_j'(x) \overline{p_j(x)}\bigr]_j^{j+1} -\sum_{j=0}^{N-1} \rho_j \int_j^{j+1} p_j''(x) \overline{p_j(x) }dx=-i\,|p_0(0)|^2. $$ Since the left-hand side is real and the right-hand side is purely imaginary, both vanish. Consequently, the polynomials $p_j$ are constant and finally vanish from the continuity conditions and the right Dirichlet condition. Hence we have proved that (\ref{schrbijection1}) admits a unique solution. Consequently $\mathcal{A}$ is maximal.
(ii) To prove (ii), we use the same argument as in the proof of Theorem \ref{3exist1}. \end{proof}
\subsection{Exponential stability of the Schr\"{o}dinger system} \label{scrresolvent}
The stability result of system $(S)$ is given by \begin{theorem} \label{schlr} There exist constants $C>0$ and $\omega >0$ such that, for all $u^0\in {\cal H}$, the solution of system $(S)$ satisfies the following estimate \BEQ{schrEXPDECEXP3nb} E(t) \le C \, e^{- \omega \,t} \, \left\Vert u^0 \right\Vert_{{\cal H}}^2, \FORALL t > 0. \end{equation} \end{theorem}
{\it Proof.} As in the proof of Theorem \ref{lr}, the result is based on the following two lemmas.
\begin{lemma} \label{schrcondsp} The spectrum of ${\cal A}$ contains no point on the imaginary axis. \end{lemma}
\begin{proof} Since ${\cal A}$ has compact resolvent, its spectrum $\sigma({\cal A})$ only consists of eigenvalues of ${\cal A}$. We will show that the equation \begin{equation} {\cal A} u = i \beta \,u \label{schr1.9nn} \end{equation} with $u = (u_0,...,u_{N-1}) \in {\cal D}({\cal A})$ and $\beta \neq 0, \beta \in \R$ has only the trivial solution (the case $\beta = 0$ is already settled, since the proof of Proposition \ref{schr3exist1} shows that $0$ belongs to the resolvent set of ${\cal A}$).
By taking the inner product of (\ref{schr1.9nn}) with $u \in {\cal H}$ and using (\ref{schrdissipativeness}) we get that $u_0(0)=0$. From the left boundary condition we also deduce that $\partial_x u_0(0)=0.$ Since $u_0$ satisfies the second-order ODE $\rho_0 \partial^2_x u_0=-\beta u_0$ with vanishing Cauchy data at $x=0$, we get $u_0=0$. By iteration, using the continuity and transmission conditions at the interior nodes, we then easily find $u_j=0$, $j=1,...,N-1.$
Hence the system (\ref{schr1.9nn}) has only the trivial solution.
\end{proof}
\begin{lemma}\label{schrlemresolvent} The resolvent operator of $\mathcal{A}$ satisfies \begin{equation}
\limsup_{|\beta |\to \infty } \|(i\beta -\mathcal{A})^{-1}\|_{{\cal L}(\mathcal{H})} <\infty. \label{schr1.9n} \end{equation} \end{lemma}
\begin{proof} In order to prove \rfb{schr1.9n} we look for $u=(u_0,...,u_{N-1})\in {\cal D}({\cal A})$ solution of \begin{equation}\label{schrL2} (i \beta - {\cal A} ) u =g,\end{equation} where $\beta \in \R$ and $g=(g_0,...,g_{N-1}) \in \mathcal{H}.$
We will consider two cases since for each case the method is different.
{\bf First case : $\beta>0$. }
{\bf First step :} Computation of the resolvent
The solution of (\ref{schrL2}) satisfies \begin{equation} \label{schrequation} i \beta u_j +i \rho_j \partial_x^2 u_j =g_j,\; j=0,...,N-1.\end{equation} An easy calculation shows that \begin{equation}\label{defuj} u_j(x)=G_j(x) + c_j^1 \cos (\dfrac{\sqrt{\beta }(x-j)}{\sqrt{\rho_j}}) + c_j^2\, \dfrac{\sin (\dfrac{\sqrt{\beta }(x-j)}{\sqrt{\rho_j}})}{\sqrt{\beta}\sqrt{\rho_j}}, \; x\in[j,j+1],\; j=0,...,N-1,\end{equation} where \begin{equation} \label{defGJ}G_j(x)=\int_j^{x} \dfrac{\sin (\dfrac{\sqrt{\beta }(x-s)}{\sqrt{\rho_j}})}{ i\sqrt{\beta} \sqrt{\rho_j}}\,
g_j(s)\, ds,\end{equation} and $c_j^1, c_j^2 \in \C.$ Note that $c_j^1=u_j(j)$ and $\rho_j (\partial_x u_j)(j)=c_j^2.$
Now, let $F_j=\left( \begin{array}{l} c_j^1\\ c_j^2\\ \end{array} \right),\; j=0,...,N-1,$ then using the transmission conditions (\ref{schre3})-(\ref{schre4}) we have $$\begin{array}{lll} F_{j+1}&=&\left( \begin{array}{c} u_j(j+1)\\ \rho_j(\partial_x u_j)(j+1)\\ \end{array} \right)\\ &=& \left(\begin{array}{cc} \cos(\dfrac{\sqrt{\beta}}{\sqrt{\rho_j}}) &\dfrac{\sin(\dfrac{\sqrt{\beta}}{\sqrt{\rho_j}})}{\sqrt{\beta}\sqrt{\rho_j}} \\ -\sqrt{\beta}\sqrt{\rho_j}\sin(\dfrac{\sqrt{\beta}}{\sqrt{\rho_j}}) & \cos(\dfrac{\sqrt{\beta}}{\sqrt{\rho_j}}) \\ \end{array} \right)F_j+ \left( \begin{array}{l} G_j(j+1)\\ \rho_j\,(\partial_x G_j)(j+1)\\ \end{array} \right). \end{array} $$
For simplification we introduce the matrix $M_j$ and the vector $W_j$ as \begin{equation}\label{defMJWJ} M_j=\left(\begin{array}{cc} \cos(\dfrac{\sqrt{\beta}}{\sqrt{\rho_j}}) &\dfrac{\sin(\dfrac{\sqrt{\beta}}{\sqrt{\rho_j}})}{\sqrt{\beta}\sqrt{\rho_j}} \\ -\sqrt{\beta}\sqrt{\rho_j}\sin(\dfrac{\sqrt{\beta}}{\sqrt{\rho_j}}) & \cos(\dfrac{\sqrt{\beta}}{\sqrt{\rho_j}}) \\ \end{array} \right),\; W_j=\left( \begin{array}{l} G_j(j+1)\\ \rho_j\,(\partial_x G_j)(j+1)\\ \end{array} \right),\end{equation} hence the transmission conditions are \begin{equation} \label{defFj} F_{j+1}=M_j F_j +W_j, \; j=0,...,N-1.\end{equation}
It follows that \begin{equation}\label{schrfn} F_N=(\prod_{j=N-1}^0 M_j) F_0 +\sum_{k=0}^{N-2}(\prod_{j=N-1}^{k+1} M_j)W_k+W_{N-1}.\end{equation}
From the first boundary condition (\ref{schre2}) we have $F_0=c_0^1\left(\begin{array}{c} 1 \\ i \end{array} \right)$ (i.e., $c_0^2=i c_0^1$); therefore we now compute $c_0^1$ by using the second boundary condition (\ref{schre2}). For that we set
\begin{equation}\label{schalphagamma} \prod_{j=N-1}^0 M_j= \left(\begin{array}{cc} \alpha_{N,1} & \gamma_{N,1} \\ \alpha_{N,2} & \gamma_{N,2} \end{array} \right),\end{equation} and
\begin{equation}\label{schrsecondmembre} -\sum_{k=0}^{N-2}(\prod_{j=N-1}^{k+1} M_j)W_k-W_{N-1}= \left(\begin{array}{c} \omega_{N,1} \\ \omega_{N,2} \end{array} \right).\end{equation} Thus, writing that the first component of $F_N$ vanishes, we find that $$c_0^1=\dfrac{\omega_{N,1}}{\alpha_{N,1}+i \gamma_{N,1}}.$$
This last identity completely determines the solution $u$ of (\ref{schrL2}).
{\bf Second step :} Estimate of $F_0$ for $\beta$ large.
On the one hand, since each matrix $M_j$ has entries of order \begin{equation}\label{schorderM}\left(\begin{array}{cc} O(1) & O(\dfrac{1}{\sqrt{\beta}} )\\ O(\sqrt{\beta}) & O(1) \end{array} \right) \end{equation} it is easy to see that all the matrices involved in (\ref{schrsecondmembre}) have the same order.
On the other hand, from (\ref{defGJ}) we clearly have
\begin{equation}\label {schest1} |G_j(j+1)| \lesssim \dfrac{1}{\sqrt{\beta}}\, \| g_j\|\lesssim \dfrac{1}{\sqrt{\beta}}\, \| g\|,\; j=0,...,N-1\end{equation} and
\begin{equation}\label {schest2} |(\partial_x G_j)(j+1)| \lesssim \| g_j\|\lesssim \| g\|,\; j=0,...,N-1.\end{equation}
Therefore, using the order (\ref{schorderM}) and the estimates (\ref{schest1})-(\ref{schest2}) for $W_k$ in (\ref{schrsecondmembre}), we get \begin{equation}\label{schestomega1}
|\omega_{N,1}| \lesssim \dfrac{1}{\sqrt{\beta}}\, \| g\|. \end{equation}
Now, we remark that for all $j=0,...,N-1,\; \det M_j=1,$ which implies from (\ref{schalphagamma}) that $$ \alpha_{N,1} \gamma_{N,2}-\alpha_{N,2} \gamma_{N,1}=1.$$ Thus
$$|\alpha_{N,1}+i \gamma_{N,1}|\,|\alpha_{N,2}+i \gamma_{N,2}|\geq \bigl|\Im\bigl[(\alpha_{N,1}+i \gamma_{N,1})\,\overline{(\alpha_{N,2}+i \gamma_{N,2})}\bigr]\bigr|=|\alpha_{N,1} \gamma_{N,2}-\alpha_{N,2} \gamma_{N,1}|=1,$$
which, together with (\ref{schorderM}) and (\ref{schalphagamma}), implies that
$$\dfrac{1}{ |\alpha_{N,1}+i \gamma_{N,1}|}\leq |\alpha_{N,2}+i \gamma_{N,2}|\leq O(\sqrt{\beta}).$$ The previous estimate and (\ref{schestomega1}) lead to
\begin{equation}\label{schfinal1} |c_0^1| \lesssim \|g\| \mbox{ and } |c_0^2| \lesssim \|g\| . \end{equation}
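As an illustration of this step (not part of the proof), the following Python snippet checks numerically, for hypothetical values of the $\rho_j$ and a few large $\beta$, that the product $\prod_{j=N-1}^0 M_j$ has determinant $1$ and that $|\alpha_{N,1}+i \gamma_{N,1}|\,|\alpha_{N,2}+i \gamma_{N,2}|\geq 1$, which is what allows $1/|\alpha_{N,1}+i \gamma_{N,1}|$ to be controlled by $|\alpha_{N,2}+i \gamma_{N,2}|=O(\sqrt{\beta})$.
\begin{verbatim}
import numpy as np

def M(beta, rho):
    # transmission matrix M_j from (defMJWJ)
    s = np.sqrt(beta / rho)
    return np.array([[np.cos(s), np.sin(s) / (np.sqrt(beta) * np.sqrt(rho))],
                     [-np.sqrt(beta) * np.sqrt(rho) * np.sin(s), np.cos(s)]])

rhos = [1.0, 2.0, 0.5]                       # hypothetical rho_j, j = 0,...,N-1
for beta in [10.0, 1e3, 1e6]:
    P = np.eye(2)
    for rho in reversed(rhos):               # prod_{j=N-1}^{0} M_j
        P = P @ M(beta, rho)
    assert abs(np.linalg.det(P) - 1.0) < 1e-6
    z1 = P[0, 0] + 1j * P[0, 1]              # alpha_{N,1} + i gamma_{N,1}
    z2 = P[1, 0] + 1j * P[1, 1]              # alpha_{N,2} + i gamma_{N,2}
    assert abs(z1) * abs(z2) >= 1.0 - 1e-9
\end{verbatim}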
{\bf Last step :} Estimate of $u.$
First, from (\ref{defGJ}) we have
$$\|G_j\|\lesssim \|g_j\| \lesssim \|g\|, \; j=0,...,N-1.$$
Then, using (\ref{defFj}), (\ref{schorderM}) and (\ref{schfinal1}), we get by iteration that the components of $F_j$, $j=0,...,N-1,$ satisfy
$$|c_j^1| \lesssim \|g\| \mbox { and } |c_j^2| \lesssim \sqrt{\beta}\, \|g\|.$$
Consequently, using the two previous estimates in (\ref{defuj}) we directly obtain that the solution of (\ref{schrL2}) satisfies
\begin{equation} \label{schrfinal2} \|u\| \lesssim \|g\| \;\; ( \beta \rightarrow +\infty ).\end{equation}
{\bf Second case : $\beta<0$. }
If $\beta<0$ the previous procedure does not work but fortunately, in this case, we can get the estimate (\ref{schrfinal2}) directly. Indeed, multiplying (\ref{schrequation}) by $i\,\overline{u_j},$ integrating by parts, summing from $j=0$ to $N-1$ and using the boundary and transmission conditions we have $$\begin{array}{lll}
-\beta \displaystyle \sum_{j=0}^{N-1} \int_j^{j+1}|u_j(x)|^2dx &+&\displaystyle\sum_{j=0}^{N-1} \rho_j \int_j^{j+1}|\partial_x u_j(x)|^2dx \\
&+ & i|u_0(0)|^2\\ &=& i \, \displaystyle \sum_{j=0}^{N-1} \int_j^{j+1} g_j(x) \overline{u_j(x)} dx. \end{array} $$
Therefore,
$$ -\beta \sum_{j=0}^{N-1} \int_j^{j+1}|u_j(x)|^2dx \leq \Re\Bigl[\, i \sum_{j=0}^{N-1} \int_j^{j+1} g_j(x) \overline{u_j(x)} dx \Bigr] \leq \|u\| \|g\|, $$ and we find
\begin{equation} \label{schrfinal3} \|u\| \lesssim \|g\| \;\; ( \beta \rightarrow -\infty ).\end{equation} Finally the result follows from (\ref{schrfinal2})-(\ref{schrfinal3}). \end{proof}
\end{document}
\begin{document}
\maketitle \begin{abstract} We study networks of communicating learning agents that cooperate to solve a common nonstochastic bandit problem. Agents use an underlying communication network to get messages about actions selected by other agents, and drop messages that took more than $d$ hops to arrive, where $d$ is a delay parameter. We introduce \textsc{Exp3-Coop}, a cooperative version of the {\sc Exp3} algorithm and prove that with $K$ actions and $N$ agents the average per-agent regret after $T$ rounds is at most of order $\sqrt{\bigl(d+1 + \tfrac{K}{N}\alpha_{\le d}\bigr)(T\ln K)}$, where $\alpha_{\le d}$ is the independence number of the $d$-th power of the communication graph $G$.
We then show that, for any connected graph, the choice $d=\sqrt{K}$ yields a regret bound of order $K^{1/4}\sqrt{T}$ (up to logarithmic factors), which is strictly better than the minimax regret $\sqrt{KT}$ for noncooperating agents.
More informed choices of $d$ lead to bounds which are arbitrarily close to the full information minimax regret $\sqrt{T\ln K}$ when $G$ is dense. When $G$ has sparse components, we show that a variant of \textsc{Exp3-Coop}, allowing agents to choose their parameters according to their centrality in $G$, strictly improves the regret. Finally, as a by-product of our analysis, we provide the first characterization of the minimax regret for bandit learning with delay. \end{abstract}
\section{Introduction} \label{s:intro}
Delayed feedback naturally arises in many sequential decision problems. For instance, a recommender system typically learns the utility of a recommendation by detecting the occurrence of certain events (e.g., a user conversion), which may happen with a variable delay after the recommendation was issued. Other examples are the communication delays experienced by interacting learning agents. Concretely, consider a network of geographically distributed ad servers using real-time bidding to sell their inventory. Each server sequentially learns how to set the auction parameters (e.g., reserve price) in order to maximize the network's overall revenue, and shares feedback information with other servers in order to speed up learning. However, the rate at which information is exchanged through the communication network is slower than the typical rate at which ads are served. This causes each learner to acquire feedback information from other servers with a delay that depends on the network's structure.
Motivated by the ad network example, we consider networks of learning agents that cooperate to solve the same nonstochastic bandit problem, and study the impact of delay on the global performance of these agents. We introduce the {\sc Exp3-Coop} algorithm, a distributed and cooperative version of the {\sc Exp3} algorithm of \cite{auer2002nonstochastic}. {\sc Exp3-Coop} works within a distributed and synced model where each agent runs an instance of the same bandit algorithm ({\sc Exp3}). All bandit instances are initialized in the same way irrespective of the agent's location in the network (that is, agents have no preliminary knowledge of the network), and we assume the information about an agent's actions is propagated through the network with a unit delay for each crossed edge. In each round $t$, each agent selects an action and incurs the corresponding loss (which is the same for all agents that pick that action in round $t$). Besides observing the loss of the selected action, each agent obtains the information previously broadcast by other agents with a delay equal to the shortest-path distance between the agents. Namely, at time $t$ an agent learns what the agents at shortest-path distance $s$ did at time $t-s$ for each $s = 1, \ldots, d$, where $d$ is a delay parameter. In this scenario, we aim at controlling the growth of the regret averaged over all agents (the so-called average welfare regret).
In the noncooperative case, when agents ignore the information received from other agents, the average welfare regret grows like $\sqrt{KT}$ (the minimax rate for standard bandit setting), where $K$ is the number of actions and $T$ is the time horizon. We show that, using cooperation, $N$ agents with communication graph $G$ can achieve an average welfare regret of order $\sqrt{\bigl(d+1 + \tfrac{K}{N}\alpha_{\leq d}\bigr)(T\ln K)}$. Here $\alpha_{\leq d}$ denotes the independence number of the $d$-th power of $G$ (i.e., the graph $G$ augmented with all edges between any two pair of nodes at shortest-path distance less than or equal to $d$). When $d = \sqrt{K}$ this bound is at most $K^{1/4}\sqrt{T\ln K} + \sqrt{K}(\ln T)$
for any connected graph
---see Remark~\ref{rem:choiceofd} in Section~\ref{s:single-d}--- which is asymptotically better than $\sqrt{KT}$.
Networks of nonstochastic bandits were also investigated by~\cite{awerbuch2008competitive} in a setting where the distribution over actions is shared among the agents without delay. \cite{awerbuch2008competitive} prove a bound on the average welfare regret of order $\sqrt{\bigl(1 + \tfrac{K}{N}\bigr)T}$ ignoring polylog factors.\footnote{ The rate proven in~\citep[Theorem~2.1]{awerbuch2008competitive} has a worse dependence on $T$, but we believe this is due to the fact that their setting allows for dishonest agents and agent-specific loss vectors. } We recover the same bound as a special case of our bound when $G$ is a clique and $d=1$. In the clique case our bound is also similar to the bound $\sqrt{\tfrac{K}{N}(T\ln K)}$ achieved by~\cite{seldin2014prediction} in a single-agent bandit setting where, at each time step, the agent can choose a subset of $N \le K$ actions and observe their loss.
In the case when $N=1$ (single agent), our analysis can be applied to the nonstochastic bandit problem where the player observes the loss of each played action with a delay of $d$ steps.
In this case we improve on the previous result of $\sqrt{(d+1)KT}$ by~\cite{neu2010online,neu2014online},
and give the first characterization (up to logarithmic factors) of the minimax regret, which is of order $\sqrt{(d + K)\,T}$.
In principle, the problem of delays in online learning could be tackled by simple reductions. Yet, these reductions give rise to suboptimal results. In the single agent setting, where the delay is constant and equal to $d$, one can use the technique of~\cite{weinberger2002delayed} and run $d+1$ instances of an online algorithm for the nondelayed case, where each instance is used every $d+1$ steps. This delivers a suboptimal regret bound of $\sqrt{(d+1)KT}$. In the case of multiple delays, like in our multi-agent setting, one can repeat the same action for $d+1$ steps while accumulating information from the other agents, and then perform an update on scaled-up losses. The resulting (suboptimal) bound on the average welfare regret would be of the form $\sqrt{(d+1)\bigl(1 + \tfrac{K}{N}\alpha_{\leq d}\bigr)(T\ln K)}$.
Rather than using reductions, the analysis of {\sc Exp3-Coop} rests on quantifying the performance of
suitable importance weighted estimates. In fact, in the single-agent setting with delay parameter $d$, using {\sc Exp3-Coop} reduces to running the standard {\sc Exp3} algorithm performing an update as soon as a new loss becomes available. This implies that at any round $t > d$, {\sc Exp3} selects an action without knowing the losses incurred during the last $d$ rounds. The resulting regret is bounded by relating the standard analysis of {\sc Exp3} to a detailed quantification of the extent to which the distribution maintained by {\sc Exp3} can drift in $d$ steps.
In the multi-agent case, the importance weighted estimate of \textsc{Exp3-Coop} is designed in such a way that at each time $t > d$ the instance of the algorithm run by an agent $v$ updates all actions that were played at time $t-d$ by agent $v$ or by other agents not further away than $d$ from $v$. Compared to the single agent case, here each agent can exploit the information circulated by the other agents. However, in order to compute the importance weighted estimates used locally by each agent, the probabilities maintained by the agents must be propagated together with the observed losses. Here, further concerns may show up, like the amount of communication, and the location of each agent within the network. In particular, when $G$ has sparse components, we show that a variant of \textsc{Exp3-Coop}, allowing agents to choose their parameters according to their centrality within $G$, strictly improves on the regret of \textsc{Exp3-Coop}.
\section{Additional Related Work} \label{s:related}
Many important ideas in delayed online learning, including the observation that the effect of delays can be limited by controlling the amount of change in the agent strategy, were introduced by~\citet{mesterharm2005line} ---see also \cite[Chapter~8]{Mester2007}. A more recent investigation on delayed online learning is due to~\citet{neu2010online,neu2014online}, who analyzed exponential weights with delayed feedback. Further progress is made by~\citet{joulani2013online}, who also study delays in the general partial monitoring setting. Additional works~\citep{DBLP:conf/aaai/JoulaniGS16,NIPS2015_5833} prove regret bounds for the full-information case of the form $\sqrt{(D+T)\ln K}$, where $D$ is the total delay experienced over the $T$ rounds. In the stochastic case, bandit learning with delayed feedback was considered by \citet{DBLP:conf/uai/DudikHKKLRZ11,joulani2013online}.
To the best of our knowledge, the first paper about nonstochastic cooperative bandit networks is \citep{awerbuch2008competitive}. More papers analyze the stochastic setting, and the closest one to our work is perhaps~\citep{szorenyi2013gossip}. In that paper, delayed loss estimates in a network of cooperating stochastic bandits are analyzed using a dynamic P2P random networks as communication model. A more recent paper is~\citep{landgren2015distributed}, where the communication network is a fixed graph and a cooperative version of the UCB algorithm is introduced which uses a distributed consensus algorithm to estimate the mean rewards of the arms. The main result is an individual (per-agent) regret bound that depends on the network structure without taking delays into account.
Another interesting paper about cooperating bandits in a stochastic setting is~\citep{kar2011bandit}. Similar to our model, agents sit on the nodes of a communication network. However, only one designated agent observes the rewards of actions he selects, whereas the others remain in the dark. This designated agent broadcasts his sampled actions through the networks to the other agents, who must learn their policies relying only on this indirect feedback. The paper shows that in any connected network this information is sufficient to achieve asymptotically optimal regret. Cooperative bandits with asymmetric feedback are also studied by \cite{barrett2011ad}. In their model, an agent must teach the reward distribution to another agent while keeping the discounted regret under control. \cite{TekinS15} investigate a stochastic contextual bandit model where each agent can either privately select an action or have another agent select an action on his behalf. In a related paper, \cite{TekinZS14} look at a stochastic bandit model with combinatorial actions in a distributed recommender system setting, and study incentives among agents who can now recommend items taken from other agents' inventories.
Another line of relevant work involves problems of decentralized bandit coordination. For example, \cite{stranders2012dcops} consider a bandit coordination problem where the reward function is global and can be represented as a factor graph in which each agent controls a subset of the variables.
A parallel thread of research concerns networks of bandits that compete for shared resources. A paradigmatic application domain is that of cognitive radio networks, in which a number of channels are shared among many users and any two or more users interfere whenever they simultaneously try to use the same channel. The resulting bandit problem is one of coordination in a competitive environment, because every time two or more agents select the same action at the same time step they both get a zero reward due to the interference ---see~\citep{RosenskiSS15} for recent work on stochastic competitive bandits and~\citep{kleinberg2009multiplicative} for a study of more general congestion games in a game-theoretic setting.
Finally, there exists an extensive literature on the adaptation of gradient descent and related algorithms to distributed computing settings, where asynchronous processors naturally introduce delays ---see, e.g., \citep{NIPS2009_3888,NIPS2011_4247,li2013distributed,NIPS2014_5242,NIPS2015_5833,liu2015asynchronous,duchi2015asynchronous}. However, none of these works considers bandit settings, which are an essential ingredient for our analysis.
\section{Preliminaries}\label{s:prel}
We now establish our notation, along with basic assumptions and preliminary facts related to our algorithms. Notation and setting here both refer to the single agent case. The cooperative setting with multiple agents (and notation thereof) will be introduced in Section~\ref{s:multi}. Proofs of all the results stated here can be found in~\citep{cesa2016delay}.
Let $A = \{1,\dots,K\}$ be the action set. A learning agent runs an exponentially-weighted algorithm with weights $w_t(i)$, and learning rate $\eta > 0$. Initially, $w_1(i) = 1$ for all $i \in A$. At each time step $t=1,2,\dots$, the agent draws action $I_t$ with probability $\field{P}(I_t = i) = p_t(i) = w_t(i)/W_t$, where $W_t = \sum_{j \in A} w_t(j)$. After observing the loss $\ell_t(I_t) \in [0,1]$ associated with the chosen action $I_t$, and possibly some additional information, the agent computes, for each $i \in A$, nonnegative loss estimates $\widehat{\loss}_t(i)$, and performs the exponential update
\begin{equation} \label{eq:exp-upd}
w_{t+1}(i) = p_t(i)\,\exp\bigl(-\eta\,\widehat{\loss}_t(i)\bigr) \end{equation}
to these weights. The following two lemmas
are general results that control the evolution of the probability distributions in the exponentially-weighted algorithm. As we said in the introduction, bounding the extent to which the distribution used by our algorithms can drift in $d$ steps is key to controlling regret in a delayed setting. The first result bounds the {\em additive} change in the probability of any action, and it holds no matter how $\widehat{\loss}_t(i)$ is defined.
\begin{lemma} \label{l:sandwich} Under the update rule~(\ref{eq:exp-upd}), for all $t \ge 1$ and for all $i \in A$, \[
-\eta\,p_t(i)\widehat{\loss}_t(i) \le p_{t+1}(i) - p_t(i) \le \eta\,p_{t+1}(i)\sum_{j \in A} p_t(j)\widehat{\loss}_t(j) \] holds deterministically with respect to the agent's randomization. \end{lemma}
The second result delivers a {\em multiplicative} bound on the change in the probability of any action when the loss estimates $\widehat{\loss}_t(i)$ are of the following form:
\begin{equation}\label{e:lossestimate}
\widehat{\loss}_t(i) = \left\{ \begin{array}{cl}
\displaystyle{\frac{\ell_{t-d}(i)}{q_{t-d}(i)}}\, B_{t-d}(i) & \text{if $t > d$,}
\\
0 & \text{otherwise~,}
\end{array} \right. \end{equation}
where $d \geq 0$ is a delay parameter, $B_{t-d}(i) \in \{0,1\}$, for $i \in A$, are indicator functions, and $q_{t-d}(i) \ge p_{t-d}(i)$ for all $i$ and $t > d$. In all later sections, $B_{t-d}(i)$ will be instantiated to the indicator function of the event that action $i$ has been played at time $t-d$ by some agent, and $q_{t-d}(i)$ will be the (conditional) probability of this event.
\begin{lemma} \label{l:mult} Let $\widehat{\loss}_t(i)$ be of the form (\ref{e:lossestimate}) for each $t \ge 1$ and $i \in A$.
If $\eta \le \frac{1}{Ke(d+1)}$ in the update rule~(\ref{eq:exp-upd}), then
\[
p_{t+1}(i) \le \left(1 + \frac{1}{d}\right)p_t(i) \] holds for all $t \ge 1$ and $i \in A$, deterministically with respect to the agent's randomization.
\end{lemma}
As we said in Section~\ref{s:intro}, the idea of controlling the drift of the probabilities in order to bound the effects of delayed feedback is not new. In particular, variants of Lemma~\ref{l:sandwich} were already derived in the work of~\cite{neu2010online,neu2014online}. However, Lemma~\ref{l:mult} appears to be new, and this is the key result to achieving our improvements.
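To make the single-agent delayed update concrete, here is a minimal simulation sketch (with hypothetical parameter values, not taken from the paper) of update (\ref{eq:exp-upd}) combined with the delayed importance-weighted estimate (\ref{e:lossestimate}); the final assertion checks empirically the multiplicative drift bound of Lemma~\ref{l:mult}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
K, d, T = 5, 3, 2000                       # hypothetical number of actions, delay, horizon
eta = 1.0 / (K * np.e * (d + 1))           # learning-rate condition of Lemma l:mult

w = np.ones(K)
hist = []                                  # (played action, probability vector) per round
losses = rng.random((T, K))                # oblivious losses in [0, 1]
for t in range(T):
    p = w / w.sum()
    i_t = rng.choice(K, p=p)
    hist.append((i_t, p.copy()))
    ell_hat = np.zeros(K)
    if t >= d:                             # the loss of round t - d becomes available
        i_old, p_old = hist[t - d]
        ell_hat[i_old] = losses[t - d, i_old] / p_old[i_old]   # estimate (e:lossestimate)
    w = p * np.exp(-eta * ell_hat)         # update (eq:exp-upd)
    p_next = w / w.sum()
    assert np.all(p_next <= (1 + 1.0 / d) * p + 1e-12)         # drift bound of Lemma l:mult
\end{verbatim}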
\section{The Cooperative Setting on a Communication Network}\label{s:multi}
In our multi-agent bandit setting, there are $N$ agents sitting on the vertices of a connected and undirected communication graph $G = (V,E)$, with $V = \{1, \ldots, N\}$. The agents cooperate to solve the same instance of a nonstochastic bandit problem while limiting the communication among them. Let $N_s(v)$ be the set of nodes $v' \in V$ whose shortest-path distance $\mathrm{dist}_G(v,v')$ from $v$ in $G$ is exactly $s$. At each time step $t=1,2,\dots$, each agent $v \in V$ draws an action $I_t(v)$ from the common action set $A$. Note that each action $i \in A$ delivers the same loss $\ell_t(i) \in [0,1]$ to all agents $v$ such that $I_t(v) = i$. At the end of round $t$, each agent $v$ observes his own loss $\ell_t\bigl(I_t(v)\bigr)$, and sends to his neighbors in $G$ the message \[
m_t(v) = \Bigl\langle t,v,I_t(v),\ell_t\bigl(I_t(v)\bigr),\boldsymbol{p}_t(v) \Bigr\rangle \] where $\boldsymbol{p}_t(v) = \bigl(p_t(1,v),\dots,p_t(K,v)\bigr)$ is the distribution of $I_t(v)$. Moreover, $v$ also receives from his neighbors a variable number of messages $m_{t-s}(v')$. Each message $m_{t-s}(v')$ that $v$ receives from a neighbor is used to update $\boldsymbol{p}_t(v)$ and then forwarded to the other neighbors only if $s < d$,
otherwise it is dropped.\footnote { Dropping messages older than $d$ rounds is clearly immaterial with respect to proving bandit regret bounds. We added this feature just to prove a point about the message complexity of the protocol. See Remark~\ref{r:exo} in Section~\ref{s:many-d} for further discussion. } Here $d$ is the maximum delay, a parameter of the communication protocol. Therefore, at the end of round $t$, each agent $v$ receives one message $m_{t-s}(v')$ for each agent $v'$ such that $\mathrm{dist}_G(v,v') = s$, where $s\in\{1,\dots,d\}$.
Graph $G$ can thus be seen as a synchronous multi-hop communication network where messages are broadcast, each hop causing a delay of one time step.
Our learning protocol is summarized in Figure~\ref{f:protocol}, while Figure~\ref{f:example} contains a pictorial example.
Our model is
similar to the {\sc local} communication model in distributed computing \citep{Linial92,Suomela13}, where the output of a node depends only on the inputs of other nodes in a constant-size neighborhood of it,
and the goal is to derive algorithms whose running time is independent of the network size.
(The main difference is that the task here has no completion time; note, however, that in our model too a node is influenced only through a constant-size neighborhood.)
\begin{figure}
\caption{ The cooperative bandit protocol where all agents share the same delay parameter $d$. }
\label{st:2}
\label{st:3}
\label{f:protocol}
\end{figure}
One aspect deserving attention is that, apart from the common delay parameter $d$, the agents need not share further information. In particular, the agents need know neither the topology of the graph $G$ nor the total number of agents $N$. In Section~\ref{s:many-d}, we show that our distributed algorithm can also be analyzed when each agent $v$ uses a personalized delay $d(v)$, thus doing away with the need for a common delay parameter, and guaranteeing a generally better performance.
Further graph notation is needed at this point. Given $G$ as above, let us denote by $G_{\le d}$ the graph $(V,E_{\le d})$ where $(u,v) \in E_{\le d}$ if and only if the shortest-path distance between agents $u$ and $v$ in $G$ is {\em at most} $d$ (hence $G_{\le 1} = G$). Graph $G_{\le d}$ is sometimes called the $d$-th power of $G$. We also use $G_0$ to denote the graph $(V,\emptyset)$.
Recall that an independent set of $G$ is any subset $T \subseteq V$ such that no two $i,j \in T$ are connected by an edge in $E$.
The largest size of an independent set is the {\em independence number} of $G$, denoted by $\alpha(G)$.
Let ${d_G}$ be the {\em diameter} of $G$
(maximal length over all possible shortest paths between all pairs of nodes); then $G_{\le d_G}$ is a clique, and one can easily see that $N = \alpha(G_0) > \alpha(G) \ge \alpha(G_{\le 2}) \ge \cdots \ge \alpha(G_{\le {d_G}}) = 1$. We show in Section~\ref{s:single-d} that the collective performance of our algorithms depends on $\alpha(G_{\le d})$. If the graph $G$ under consideration is directed (see Section~\ref{s:many-d}), then $\alpha(G)$ is the independence number of the undirected graph obtained from $G$ by disregarding edge orientation.
The adversary generating losses is oblivious: loss vectors $\boldsymbol{\loss}_t = \big(\ell_t(1), \ldots, \ell_t(K)\big) \in [0,1]^K$ do not depend on the agents' internal randomization. The agents' goal is to control the {\em average welfare} regret $R_T^{\mathrm{coop}}$, defined as \[
R_T^{\mathrm{coop}} = \left( \frac{1}{N}\sum_{v \in V}\field{E}\left[ \sum_{t=1}^T \ell_t\bigl(I_t(v)\bigr)\right] - \min_{i \in A} \sum_{t=1}^T \ell_t(i) \right)~, \] the expectation being with respect to the internal randomization of each agent's algorithm. In the sequel, we write $\field{E}_{t}[\cdot]$ to denote the expectation w.r.t.\ the product distribution $\prod_{v \in V} \boldsymbol{p}_t(v)$, conditioned on $I_1(v),\dots,I_{t-1}(v)$, $v \in V$.
\begin{figure}
\caption{ In this example, $G$ is a line graph with $N = 6$ agents, and delay $d = 2$. At the end of time step $t$, agent $4$ sends to his neighbors $3$ and $5$ message $m_t(4)$,
receives from agent $3$ messages $m_{t-1}(3)$, and $m_{t-2}(2)$, and from agent $5$ messages $m_{t-1}(5)$ and $m_{t-2}(6)$. Finally, $4$ forwards to $5$ message $m_{t-1}(3)$ and forwards to $3$ message $m_{t-1}(5)$. Any message older than $t-1$ received by $4$ at the end of round $t$ will not be forwarded to his neighbors.
}
\label{f:example}
\end{figure}
\subsection{The Exp3-Coop algorithm} \label{s:single-d}
Our first algorithm, called {\sc Exp3-Coop} (Cooperative Exp3) is described in Figure \ref{f:exp3-coop}. The algorithm works in the learning protocol of Figure~\ref{f:protocol}. Each agent $v \in V$ runs the exponentially-weighted algorithm~(\ref{eq:exp-upd}), combined with a ``delayed'' importance-weighted loss estimate $\widehat{\loss}_t(i,v)$ that incorporates the delayed information sent by the other agents. Specifically, denote by $
N_{\le d}(v) = \bigcup_{s \le d} N_s(v) $ the set of nodes in $G$ whose shortest-path distance from $v$ is at most $d$, and note that, for all $v$, $\{v\} = N_{\leq 0}(v) \subseteq N_{\leq 1}(v) \subseteq N_{\leq 2}(v) \subseteq \cdots $\,. If any of the agents in $N_{\leq d}(v)$ has played action $i$ at time $t-d$ (that is, $B_{d,t-d}(i,v) = 1$ in Eq.~(\ref{eq:estimator})), then the corresponding loss $\ell_{t-d}(i)$ is incorporated by $v$ into $\widehat{\loss}_t(i,v)$.
The denominator $q_{d,t-d}(i,v)$ is simply,
conditioned on the history,
the probability of $B_{d,t-d}(i,v) = 1$,
i.e., $q_{d,t-d}(i,v)=\field{E}_t[B_{d,t-d}(i,v)]$.
Observe that $\{v\} \subseteq N_{\leq d}(v)$ for all $d \geq 0$ implies $q_{d,t-d}(i,v) \geq p_{t-d}(i,v)$, as required by (\ref{e:lossestimate}). It is also worth mentioning that, although this is not strictly required by our learning protocol, each agent $v$ actually exploits the loss information gathered from playing action $I_t(v)$ only $d$ time steps later. A relevant special case of this learning mode is when we only have a single bandit agent receiving delayed feedback (Section~\ref{s:delayed}).
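Since the agents draw their actions independently, the normalization $q_{d,t-d}(i,v)=\field{E}_t[B_{d,t-d}(i,v)]$ can be computed as $1-\prod_{v'\in N_{\le d}(v)}\bigl(1-p_{t-d}(i,v')\bigr)$. The following Python sketch (with hypothetical variable names, not the paper's pseudocode) computes the resulting estimate~(\ref{eq:estimator}) for a single agent from the delayed messages.
\begin{verbatim}
import math

def loss_estimate(v, i, loss_i, d, dist, P_old, played_old):
    """Delayed estimate of ell_{t-d}(i) used by agent v at round t.

    dist[u][w]    : shortest-path distance in G
    P_old[u]      : probability vector used by agent u at round t - d
    played_old[u] : action played by agent u at round t - d
    loss_i        : observed loss ell_{t-d}(i)
    """
    nbhd = [u for u in range(len(P_old)) if dist[v][u] <= d]   # N_{<=d}(v), contains v
    if all(played_old[u] != i for u in nbhd):                  # B_{d,t-d}(i,v) = 0
        return 0.0
    # q_{d,t-d}(i,v): probability that at least one agent in N_{<=d}(v) plays i
    q = 1.0 - math.prod(1.0 - P_old[u][i] for u in nbhd)
    return loss_i / q

# toy usage on a 3-agent path graph with d = 1 (hypothetical numbers)
dist = [[0, 1, 2], [1, 0, 1], [2, 1, 0]]
P_old = [[0.5, 0.5], [0.2, 0.8], [0.9, 0.1]]
print(loss_estimate(v=0, i=1, loss_i=0.3, d=1, dist=dist,
                    P_old=P_old, played_old=[0, 1, 0]))        # prints 0.3 / 0.9
\end{verbatim}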
\begin{figure}
\caption{ The Exp3-Coop algorithm where all agents share the same delay parameter $d$. }
\label{eq:estimator}
\label{f:exp3-coop}
\end{figure}
By their very definition, the loss estimates $\widehat{\loss}_t(\cdot,\cdot)$ at time $t$ are determined by the realizations of $I_s(\cdot)$, for $s=1,\dots,t-d$. This implies that the numbers $p_t(\cdot,\cdot)$ defining $q_{d,t-d}(\cdot,\cdot)$, are determined by the realizations of $I_s(\cdot)$ for $s=1,\dots,t-d-1$ (because the probabilities $\boldsymbol{p}_t(v)$ at time $t$ are determined by the loss estimates up to time $t-1$, see~(\ref{eq:exp-upd})). We have, for all $t > d$, $i \in A$, and $v \in V$,
\begin{equation} \label{eq:avevar}
\field{E}_{t-d}\Bigl[\widehat{\loss}_t(i,v)\Bigr] = \ell_{t-d}(i)~. \end{equation}
\sloppypar { Further, because of what we just said about $p_t(\cdot,\cdot)$ and $q_{d,t-d}(\cdot,\cdot)$ being determined by $I_1(\cdot),\dots,I_{t-d-1}(\cdot)$, we also have }
\begin{equation} \label{eq:aveprob}
\field{E}_{t-d}\Bigl[p_t(i,v)\widehat{\loss}_t(i,v)\Bigr] = p_t(i,v)\ell_{t-d}(i)~, \quad
\field{E}_{t-d}\Bigl[p_t(i,v)\widehat{\loss}_t(i,v)^2\Bigr] = p_t(i,v)\frac{\ell_{t-d}(i)^2}{q_{d,t-d}(i,v)}~. \end{equation}
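As a quick numerical sanity check of (\ref{eq:avevar}) (outside the formal argument), the following Python snippet simulates the independent draws of a small neighborhood at round $t-d$ and verifies that the importance-weighted estimate is unbiased; all numbers are hypothetical.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
K, agents, M = 4, 3, 200000                     # actions, |N_{<=d}(v)|, Monte Carlo size
p_old = np.full((agents, K), 1.0 / K)           # p_{t-d}(.,v') for v' in N_{<=d}(v)
loss_old = rng.random(K)                        # ell_{t-d}(.)
q = 1.0 - np.prod(1.0 - p_old, axis=0)          # q_{d,t-d}(i,v)

draws = np.stack([rng.choice(K, size=M, p=p_old[a]) for a in range(agents)])
B = np.zeros((M, K))
for a in range(agents):
    B[np.arange(M), draws[a]] = 1.0             # B_{d,t-d}(i,v): someone played action i
est = (B * loss_old / q).mean(axis=0)           # empirical mean of hat ell_t(.,v)
assert np.allclose(est, loss_old, atol=0.01)    # matches E_{t-d}[hat ell_t(i,v)] = ell_{t-d}(i)
\end{verbatim}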
The following theorem quantifies the behavior of {\sc Exp3-Coop} in terms of a free parameter $\gamma$ in the learning rate, the tuning of which will be addressed in the subsequent Theorem~\ref{th:main}.
\begin{theorem} \label{th:nontuned} The regret of {\sc Exp3-Coop} run over a network $G = (V,E)$ of $N$ agents, each using delay $d$ and learning rate $\eta = \gamma\big/\bigl(Ke(d+1)\bigr)$, for $\gamma \in (0,1]$, satisfies \[
R_T^{\mathrm{coop}} \le 2d + \frac{Ke(d+1)\ln K}{\gamma} + \gamma\left(\frac{\alpha(G_{\le d})}{2(1-e^{-1})(d+1)N} + \frac{3}{Ke}\right)T~. \]
\end{theorem}
With this bound handy, we might be tempted to optimize for $\gamma$. However, this is not a legal learning rate setting in a distributed scenario, for the optimized value of $\gamma$ would depend on the global quantities $N$ and $\alpha(G_{\leq d})$. Thus, instead of this global tuning, we let each agent set its own learning rate $\gamma$ through a ``doubling trick'' played locally. The doubling trick\footnote { There has been some recent work on adaptive learning rate tuning applied to nonstochastic bandit algorithms~\citep{k+14,neu15}. One might wonder whether the same techniques may apply here as well. Unfortunately, the specific form of our update~(\ref{eq:exp-upd}) makes this adaptation nontrivial, and this is why we resorted to a more traditional ``doubling trick". } works as follows. For each $v\in V$, we let
$\gamma_r(v) = Ke(d+1)\sqrt{(\ln K)/2^r}$
for each $r = r_0,r_0+1,\dots$, where $r_0 = \bigl\lceil\log_2\ln K + 2\log_2(Ke(d+1))\bigr\rceil$ is chosen in such a way that $\gamma_r(v) \le 1$ for all $r \ge r_0$. Let $T_r$ be the random set of consecutive time steps where the same $\gamma_r(v)$ was used. Whenever the local algorithm at $v$ is running with $\gamma_r(v)$ and detects $\sum_{s \in T_r} Q_s(v) > 2^r$, then we restart this algorithm with $\gamma(v) = \gamma_{r+1}(v)$.
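The following Python sketch mirrors the bookkeeping of this local doubling trick; the statistic $Q_s(v)$ is assumed to be supplied externally (its precise definition belongs to the algorithm and its analysis, which we do not reproduce here), and the interface below is hypothetical.
\begin{verbatim}
import math

class DoublingGamma:
    """Local doubling trick for one agent (sketch with hypothetical interface)."""

    def __init__(self, K, d):
        self.K, self.d = K, d
        # r_0 = ceil(log2 ln K + 2 log2(K e (d+1))), so that gamma_{r_0} <= 1
        self.r = math.ceil(math.log2(math.log(K)) + 2 * math.log2(K * math.e * (d + 1)))
        self.total = 0.0

    def gamma(self):
        # gamma_r = K e (d+1) sqrt(ln K / 2^r)
        return self.K * math.e * (self.d + 1) * math.sqrt(math.log(self.K) / 2 ** self.r)

    def observe(self, Q_s):
        self.total += Q_s                  # running sum of Q_s(v) over the current epoch
        if self.total > 2 ** self.r:       # threshold exceeded: move to the next epoch
            self.r += 1
            self.total = 0.0
            return True                    # the local learner should be restarted
        return False
\end{verbatim}
A {\tt True} return value signals that the local {\sc Exp3-Coop} instance is re-initialized and run with the smaller value $\gamma_{r+1}(v)$.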
We have the following result.
\begin{theorem} \label{th:main} The regret of {\sc Exp3-Coop} run over a network $G = (V,E)$ of $N$ agents, each using delay $d$, and an individual learning rate $\eta(v) = \gamma(v)/\bigl(Ke(d+1)\bigr)$, where $\gamma(v) \in (0,1]$ is adaptively selected by each agent through the above doubling trick, satisfies, when $T$ grows large,\footnote { The big-oh notation here hides additive terms that are independent of $T$ and do depend polynomially on the other parameters. }
\begin{align*}
R_T^{\mathrm{coop}} &=
\mathcal{O}\left(\sqrt{(\ln K)\left(d + 1+ \frac{K}{N}\,\alpha(G_{\le d})\right)T} + d\,\log T \right)~.
\end{align*}
\end{theorem}
\begin{remark} Theorem~\ref{th:main} shows a natural trade-off between delay and information. To make it clear, suppose $N \approx K$. In this case, the regret bound becomes of order $\sqrt{\bigl(d + \alpha(G_{\le d})\bigr)T\ln K} + d\ln T$. Now, if $d$ is as big as the diameter $d_G$ of $G$,
then $\alpha(G_{\le d})=1$. This means that at every time step all $N \approx K$ agents observe (with some delay) the losses of each other's actions. This is very much reminiscent of a full information scenario, and in fact our bound becomes of order $\sqrt{(d_G+1)T\ln K} + d_G\ln T$, which is close to the full information minimax rate $\sqrt{(d+1)T\ln K}$ when feedback has a constant delay $d$~\citep{weinberger2002delayed}. When $G$ is sparse (i.e., $d_G$ is likely to be large, say $d_G \approx N$), then agents have no advantage in taking $d = d_G$ since $d_G \approx N \approx K$. In this case, agents may even give up cooperation (choosing $d = 0$ in Figure~\ref{f:exp3-coop}),
and fall back on the standard bandit bound $\sqrt{TK\ln K}$, which corresponds to running {\sc Exp3-Coop} on the edgeless graph $G_0$. (No doubling trick is needed in this case, hence no extra $\log T$ term appears.) \end{remark}
\begin{remark} When $d = d_G$, each neighborhood $N_{\le d}(v)$ used in the loss estimate~(\ref{eq:estimator}) is equal to $V$, hence all agents receive the same feedback. Because they all start off from the same initial weights, the agents end up computing the same updates. This in turn implies that: (1) the individual regret incurred by each agent is the same as the average welfare regret $R_T^{\mathrm{coop}}$; (2) the messages exchanged by the agents (see Figure~\ref{f:protocol}) may be shortened by dropping the distribution part $\boldsymbol{p}_{t-s}(v')$. \end{remark}
\begin{remark} \label{rem:choiceofd}
An interesting question is whether the agents can come up with a reasonable choice for the value of $d$ even when they lack any information whatsoever about the global structure of $G$. A partial answer to this question follows. It is easy to show that the choice $d = \sqrt{K}$ in Theorem~\ref{th:main} yields a bound on the average welfare regret of the form $K^{1/4}\sqrt{T\ln K} + \sqrt{K}(\ln T)$ {\em for all} $G$ (and irrespective to the value of $N = |V|$), provided $G$ is connected. This holds because, for any connected graph $G$, the independence number $\alpha(G_{\le d})$ is always bounded by\footnote { Because it holds for a worst-case (connected) $G$, this upper bound on $\alpha(G_{\le d})$ can be made tighter when specific graph topologies are considered. } $\bigl\lceil 2N\big/(d+2)\bigr\rceil$. To see why this latter statement is true, observe that the neighborhood $N_{\leq d/2}(v)$ of any node $v$ in $G_{\le d/2}$ contains at least $d/2+1$ nodes (including $v$), and any pair of nodes $v', v'' \in N_{\leq d/2}(v)$ are adjacent in $G_{\le d}$. Therefore, no independent set of $G_{\le d}$ can have size bigger than $\lceil 2N\big/(d+2)\bigr\rceil$. A more detailed bound is contained, e.g., in~\citep{fh97}. \end{remark}
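As an illustration (not part of the argument), the following Python snippet brute-forces $\alpha(G_{\le d})$ on a small hypothetical line graph and checks it against the bound $\bigl\lceil 2N\big/(d+2)\bigr\rceil$ discussed in the remark.
\begin{verbatim}
from itertools import combinations

def alpha_power(adj, d):
    """Brute-force independence number of G_{<=d} for a small graph (adjacency sets)."""
    n = len(adj)
    dist = [[n + 1] * n for _ in range(n)]      # BFS shortest-path distances
    for s in range(n):
        dist[s][s], frontier, level = 0, [s], 0
        while frontier:
            level, nxt = level + 1, []
            for u in frontier:
                for w in adj[u]:
                    if dist[s][w] > n:
                        dist[s][w] = level
                        nxt.append(w)
            frontier = nxt
    for k in range(n, 0, -1):                   # largest set with pairwise distance > d
        for S in combinations(range(n), k):
            if all(dist[u][w] > d for u, w in combinations(S, 2)):
                return k
    return 0

N, d = 12, 4                                    # hypothetical line graph on N nodes
adj = [set() for _ in range(N)]
for u in range(N - 1):
    adj[u].add(u + 1)
    adj[u + 1].add(u)
assert alpha_power(adj, d) <= -(-2 * N // (d + 2))   # ceil(2N/(d+2)) bound
\end{verbatim}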
\section{Extensions: Cooperation with Individual Parameters}\label{s:many-d}
In this section, we analyze a modification of \textsc{Exp3-Coop} that allows each agent $v$ in the network to use a delay parameter $d(v)$ different from that of the other agents. We then show how such individual delays may improve the average welfare regret of the agents. In the previous setting, where all agents use the same delay parameter $d$, messages have an implicit time-to-live equal to $d$. In this setting, however, agents may not have a detailed knowledge of the delay parameters used by the other agents. For this reason we allow an agent $v$ to generate messages with a time-to-live $ttl(v)$ possibly different from the delay parameter $d(v)$. Note that the role of the two parameters $d(v)$ and $ttl(v)$ is inherently different. Whereas $d(v)$ rules the extent to which $v$ uses the messages received from the other agents, $ttl(v)$ limits the number of times a message from $v$ is forwarded to the other agents, thereby limiting the message complexity of the algorithm. In order to accommodate this additional parameter, we are required to modify the cooperative bandit protocol of Figure~\ref{f:protocol}. As in Section~\ref{s:multi}, we have an undirected communication network $G = (V,E)$ over the agents. However, in this new protocol the message that at the end of round $t$ each agent $v$ sends to his neighbors in $G$ has the format \[
m_t(v) = \Bigl\langle t,v,ttl(v),I_t(v),\ell_t\bigl(I_t(v)\bigr),\boldsymbol{p}_t(v) \Bigr\rangle \] where $ttl(v)$ is the time-to-live parameter of agent $v$. Each message $m_{t-s}(v')$, which $v$ receives from a neighbor, first has its time-to-live decremented by one. If the resulting value is positive, the message is forwarded to the other neighbors, otherwise it is dropped. Moreover, $v$ uses this message to update $\boldsymbol{p}_t(v)$ only if $s \leq d(v)$.
Hence, at time $t$ an agent $v$ uses the message sent at time $t-s$ by $v'$ if and only if $\mathrm{dist}_G(v',v) = s$ with $s \le \min\{d(v),ttl(v')\}$, where $\mathrm{dist}_G(v,v')$ is
the shortest-path distance from $v'$ to $v$ in $G$.
Based on the collection ${\mathcal{P}} = \{d(v),ttl(v)\}_{v\in V}$ of individual parameters, we define the directed graph $G_{\mathcal{P}} = (V,E_{\mathcal{P}})$ as follows: arc $(v',v) \in E_{\mathcal{P}}$ if and only if $\mathrm{dist}_G(v,v') \leq \min\{d(v),ttl(v')\}$.
The in-neighborhood $N^-_{\mathcal{P}}(v)$ of $v$ thus contains the set of all $v'\in V$ whose distance from $v$ is not larger than $\min\{d(v),ttl(v')\}$. Notice that, with this definition, $v\in N^-_{\mathcal{P}}(v)$, so that $(V,E_{\mathcal{P}})$ includes all self-loops $(v,v)$.
Figure~\ref{f:multi-d}(a) illustrates these concepts through a simple pictorial example.
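A minimal Python sketch of this construction (a hypothetical helper, not the paper's notation) builds the arc set $E_{\mathcal{P}}$ from the shortest-path distances and the individual parameters:
\begin{verbatim}
def build_GP(dist, d, ttl):
    """Arc set of G_P: (v', v) is an arc iff dist_G(v, v') <= min(d(v), ttl(v'))."""
    n = len(dist)
    return {(vp, v) for v in range(n) for vp in range(n)
            if dist[v][vp] <= min(d[v], ttl[vp])}

# toy usage on a 3-node path 0 - 1 - 2 (hypothetical parameters)
dist = [[0, 1, 2], [1, 0, 1], [2, 1, 0]]
print(sorted(build_GP(dist, d=[2, 0, 1], ttl=[1, 2, 1])))
# -> [(0, 0), (1, 0), (1, 1), (1, 2), (2, 2)]; note that all self-loops are present
\end{verbatim}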
\begin{remark} \label{r:exo} It is important to remark that the communication structure encoded by $\mathcal{P}$ is an exogenous parameter of the regret minimization problem, and so our algorithms cannot trade it off against regret. In addition to that, the parameterization $\mathcal{P} = \{d(v),ttl(v)\}_{v\in V}$ defines a simple and static communication graph which makes it relatively easy to express regret as a function of the amount of available communication. This would not be possible if we had each individual node $v$ decide whether to forward a message based, say, on its own local delay parameter $d(v)$. To see why, consider the situation where nodes $v$ and $v'$ are along the route of a message that is reaching $v$ before $v'$. The decision of $v$ to drop the message may clash with the willingness of $v'$ to receive it, and this may clearly happen when $d(v) < d(v')$. The structure of the communication graph resulting from this individual behavior of the nodes would be rather complicated. On the contrary, the time-to-live-based parametrization, which is commonly used in communication networks to control communication complexity, does not have this issue. \end{remark}
\begin{figure}\label{f:multi-d}
\end{figure}
Figure~\ref{f:exp3-coop2} contains our algorithm (called {\sc Exp3-Coop2}) for this setting. {\sc Exp3-Coop2} is a strict generalization of {\sc Exp3-Coop}, and so is its analysis. The main difference between the two algorithms is that {\sc Exp3-Coop2} deals with directed graphs. This fact prevents us from using the same techniques of Section~\ref{s:single-d} in order to control the regret. Intuitively, adding orientations to the edges reduces the information available to the agents and thus increases the variance of their loss estimates. Thus, in order to control this variance, we need a lower bound\footnote { We find it convenient to derive this lower bound without mixing with the uniform distribution over $A$ ---see, e.g., \citep{auer2002nonstochastic}--- but in a slightly different manner. This facilitates our delayed feedback analysis. } on the probabilities $p_t(i,v)$.
\begin{figure}
\caption{ The Exp3-Coop2 algorithm with individual delay and time-to-live parameters. }
\label{f:exp3-coop2}
\end{figure}
From Figure~\ref{f:exp3-coop2}, one can easily see that
\begin{equation}\label{e:ptildebound} 1 = \sum_{i\in A}\frac{w_t(i,v)}{W_t(v)} \leq \widetilde{P}_t(v) \leq \sum_{i\in A}\left( \frac{w_t(i,v)}{W_t(v)} + \frac{\delta}{K} \right) = 1 + \delta \end{equation}
implying the lower bound \( p_t(i,v) \geq \frac{\delta}{K(1 + \delta)}\,, \) holding for all $i$, $t$, and $v$.
The following theorem
is the main result of this section.
\begin{theorem}\label{t:main-individual} The regret of {\sc Exp3-Coop2} run over a network $G = (V,E)$ of $N$ agents, each agent $v$ using individual delay $d(v)$, individual time-to-live $ttl(v)$, exploration parameter $\delta = 1/T$, and learning rate $\eta$ such that $\eta \rightarrow 0$ as $T \rightarrow \infty$ satisfies, when $T$ grows large, \[ R_T^{\mathrm{coop}} = \mathcal{O}\left(\frac{\ln K}{\eta} + \eta\Big({\bar d}_V + \frac{K}{N}\,\alpha\left(G_{\mathcal{P}}\right)\,\ln(T N K)\Big)T\right)~,
\quad\text{where} \quad {\bar d}_V = \frac{1}{N}\,\sum_{v\in V} d(v)~. \] \end{theorem}
Using a doubling trick in much the same way we used it to prove Theorem~\ref{th:main}, we can prove the following result.
\begin{corollary}\label{c:main-individual-tuned} The regret of {\sc Exp3-Coop2} run over a network $G = (V,E)$ of $N$ agents, each agent $v$ using individual delay $d(v)$, individual time-to-live $ttl(v)$, exploration parameter $\delta = 1/T$, and individual learning rate $\eta(v)$ adaptively selected by each agent through a doubling trick, satisfies, when $T$ grows large, \[
R_T^{\mathrm{coop}} =
\mathcal{O}\left(\sqrt{(\ln K)\left({\bar d}_V + 1 + \frac{K}{N}\,\alpha(G_{\mathcal{P}})\ln(TNK)\right)T} + {\bar d}_V\,\big(\ln T + \ln\ln(TNK)\big) \right)~. \] \end{corollary}
To illustrate the advantage of having individual delays as opposed to sharing the same delay value, it suffices to consider a communication network including regions of different density. Concretely, consider the graph in Figure~\ref{f:multi-d}(b) with a large densely connected region (red agents) and a small sparsely connected region (black agents). In this example, the black agents prefer a large value of their individual delay so as to receive more information from nearby agents, but this comes at the price of a larger bias for their estimators $\widehat{\loss}_t(i,v)$. On the contrary, information from nearby agents is readily available to the red agents, so that they do not gain any regret improvement from a large delay parameter. A similar argument applies here to the individual time-to-live values: red agents $v$ will set a small $ttl(v)$ to reduce communication. Black agents $v'$ may decide to set $ttl(v')$ depending on their intention to reach the red nodes. But because the red agents have set a small $d(v)$, any effort made by $v'$ trying to reach them would be a communication waste. Hence, it is reasonable for a black agent $v'$ to set a moderately large value for $ttl(v')$, but perhaps not so large as to reach the red agents. One can read this off the bounds in both Theorem~\ref{t:main-individual} and Corollary~\ref{c:main-individual-tuned}, as explained next. Suppose for simplicity that $K \approx N$ so that, disregarding log factors, these bounds depend on the parameters $\mathcal{P}$ only through the quantity
$ H = {\bar d}_V + \alpha\left(G_{\mathcal{P}}\right) $.
Now, in the case of a common delay parameter $d$ (Section~\ref{s:single-d}), it is not hard to see that the best setting for $d$ in order to minimize $H$ is of the form $d = N^{1/4}$, resulting in $H = \Theta(N^{1/4})$. On the other hand, the best setting for the individual delays is $d(v) = 1$ when $v$ is red, and $d(v) = \sqrt{N}$ when $v$ is black, resulting in $H = \Theta(1)$.
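To make this computation concrete, here is one possible instantiation of the example (the region sizes are our own illustrative assumptions, since Figure~\ref{f:multi-d}(b) does not prescribe them): suppose the red region consists of $\Theta(N)$ agents with diameter $O(1)$, the black region consists of $\Theta(\sqrt{N})$ agents arranged along a path, and $ttl(v) \approx d(v)$ for all $v$, as discussed in the next paragraph. With a common delay $d$ we have ${\bar d}_V = d$, while $\alpha(G_{\le d}) = \Theta\bigl(1 + \sqrt{N}/d\bigr)$, because the $d$-th power of a path on $\Theta(\sqrt{N})$ vertices has independence number of order $\sqrt{N}/d$. Hence
\[
H = \Theta\left(d + \frac{\sqrt{N}}{d}\right)~,
\]
which is minimized (up to constants) by $d = N^{1/4}$, giving $H = \Theta\bigl(N^{1/4}\bigr)$. With the individual choice $d(v) = 1$ for red agents and $d(v) = \sqrt{N}$ for black agents, instead, ${\bar d}_V = \Theta(1)$ (the $\Theta(\sqrt{N})$ black agents contribute $\Theta(N)$ in total, averaged over $N$ agents), and both regions collapse to cliques in $G_{\mathcal{P}}$, so that $\alpha\left(G_{\mathcal{P}}\right) = O(1)$ and $H = \Theta(1)$.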
The time-to-live parameters $ttl(v)$ affect the regret bound only through $\alpha\left(G_{\mathcal{P}}\right)$, but they clearly play the additional role of bounding the message complexity of the algorithm. In our example of Figure \ref{f:multi-d}(b), we essentially have $d(v) \approx ttl(v)$ for all $v$. A typical scenario where agents may have $d(v) \neq ttl(v)$ is illustrated in Figure~\ref{f:multi-d}(c). In this case, we have a star-like graph where a central agent is connected through long rays to all other agents. The center $v$ prefers to set a small $d(v)$, since it has a large degree, but also a large $ttl(v)$ in order to reach the green peripheral nodes. The green nodes $v'$ reasonably do the opposite: a large $d(v')$ in order to gather information from other nodes, but also a smaller time-to-live than the center, for the information transmitted by $v'$ is comparatively less valuable to the whole network than the one transmitted by the center.
Agents can set their individual parameters in a topology-dependent manner using any algorithm for assessing the centrality of nodes in a distributed fashion ---e.g.,~\citep{wz13}, and references therein. This can be done at the beginning in a number of rounds which only depends on the network topology (but not on $T$). Hence, this initial phase would affect the regret bound only by an additive constant.
\section{Delayed Losses (for a Single Agent)}\label{s:delayed}
\textsc{Exp3-Coop} can be specialized to the setting where a single agent is facing a bandit problem in which the loss of the chosen action is observed with a fixed delay $d$. In this setting, at the end of each round $t$ the agent incurs loss $\ell_t(I_t)$ and observes $\ell_{t-d}(I_{t-d})$, if $t > d$, and nothing otherwise. The regret is defined in the usual way, \[
R_T = \field{E}\left[\sum_{t=1}^T \ell_t(I_t)\right] - \min_{i=1,\dots,K} \sum_{t=1}^T \ell_t(i)~. \] This problem was studied by~\cite{weinberger2002delayed} in the full information case, for which they proved that $\sqrt{(d+1)T\ln K}$ is the optimal order for the minimax regret. The result was extended to the bandit case by~\cite{neu2010online,neu2014online} ---see also~\cite{joulani2013online}--- whose techniques can be used to obtain a regret bound of order $\sqrt{(d+1)KT}$. Yet, no matching lower bound was available for the bandit case.
As a matter of fact, the upper bound $\sqrt{(d+1)KT}$ for the bandit case is easily obtained: just run in parallel $d+1$ instances of the minimax optimal bandit algorithm for the standard (no delay) setting, achieving $R_T \le \sqrt{KT}$ (ignoring constant factors). At each time step $t = (d+1)r + s$ (for $r=0,1,\dots$ and $s=0,\dots,d$), use instance $s+1$ for the current play. Hence, the no-delay bound applies to every instance and, assuming $d+1$ divides $T$, we immediately obtain
\(
R_T \le \sum_{s=1}^{d+1} \sqrt{K\frac{T}{d+1}} \le \sqrt{(d+1)KT}~,
\)
again, ignoring constant factors.
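For concreteness, here is a minimal Python sketch of this reduction; the \texttt{act}/\texttt{update} interface of the base (no-delay) bandit algorithm is a hypothetical placeholder and not part of the algorithms studied in this paper.
\begin{verbatim}
class DelayedBanditViaParallelCopies:
    """Round-robin over d+1 independent copies of a no-delay bandit algorithm.
    Copy s plays exactly the rounds t with t mod (d+1) == s, so the loss of
    its previous play (delayed by d rounds) is known before it plays again."""
    def __init__(self, base_factory, d):
        self.d = d
        self.copies = [base_factory() for _ in range(d + 1)]
        self.pending = [None] * (d + 1)   # arm awaiting its delayed loss
        self.t = 0

    def act(self):
        s = self.t % (self.d + 1)
        arm = self.copies[s].act()
        self.pending[s] = arm
        self.t += 1
        return arm

    def observe(self, delayed_loss):
        # called at the end of a round with the loss of the arm chosen d rounds
        # earlier; that round was handled by the copy that plays next
        s = self.t % (self.d + 1)
        if self.t > self.d and self.pending[s] is not None:
            self.copies[s].update(self.pending[s], delayed_loss)
\end{verbatim}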
Next, we show that the machinery we developed in Section \ref{s:single-d} delivers an improved upper bound on the regret for the bandit problem with delayed losses, and then we complement this result by providing a lower bound matching the upper bound up to log factors, thereby characterizing (up to log factors) the minimax regret for this problem.
\begin{corollary}\label{c:delayed} In the nonstochastic bandit setting with $K \ge 2$ actions and delay $d \ge 0$, where at the end of each round $t$ the predictor has access to the losses $\ell_1(I_1),\dots,\ell_s(I_s)\in [0,1]$ for $s = \max\{1,t-d\}$, the minimax regret is of order $
\sqrt{(K + d)T}~, $ ignoring logarithmic factors. \end{corollary}
\section{Conclusions and Ongoing Research}\label{s:conc}
We have investigated a cooperative and nonstochastic bandit scenario where cooperation comes at the price of delayed information. We have proven average welfare regret bounds that exhibit a natural tradeoff between the amount of cooperation and the delay, the tradeoff being ruled by the underlying communication network topology. As a by-product of our analysis, we have also provided the first characterization to date of the regret of learning with (constant) delayed feedback in an adversarial bandit setting.
There are a number of possible extensions which we are currently considering:
\begin{enumerate}
\item So far our analysis only delivers average welfare regret bounds. It would be interesting to show simultaneous regret bounds that hold for each agent individually. We conjecture that the individual regret bound of an agent $v$ is of the form $\sqrt{(\ln K)\left(d+\frac{K}{|N_{\le d}(v)|}\right)\,T}$, where $|N_{\le d}(v)|$ is the degree of $v$ in $G_{\le d}$ (plus one). Such a bound would in fact imply, e.g., the one in Theorem~\ref{th:main}. A possible line of attack to solve this problem could be the use of graph sparsity along the lines of~\citep{NIPS2015_5814,NIPS2013_4939,mania2015perturbed,NIPS2014_5242}.
\item It would be nice to characterize the average welfare regret by complementing our upper bounds with suitable {\em lower} bounds: Is the upper bound of Theorem~\ref{th:main} optimal in the communication model considered here?
\item The two algorithms we designed do not use the loss information in the most effective way, for they both postpone the update step by $d$ (Figure \ref{f:exp3-coop}) or $d(v)$ (Figure \ref{f:exp3-coop2}) time steps. In fact, we do have generalized versions of both algorithms where all losses $\ell_{t-s}(i)$ coming from agents at distance $s$ from any given agent $v$ are indeed used at time $t$ by agent $v$, i.e., as soon as these losses become available to $v$.
The resulting regret bounds mix delays and independence numbers of graphs at different levels of delay. (Details will be given in the full version of this paper.)
More ambitiously, it is natural to think of ways to adaptively tune our algorithms so as to automatically determine the best delay parameter $d$. For instance, disregarding message complexity, is there a way for each agent to adaptively tune $d$ locally so to minimize the bound in Theorem~\ref{th:main}?
\item Our messages $m_t(v)$ contain both action/loss information and distribution information. Is it possible to drop the distribution information and still achieve average welfare regret bounds similar to those in Theorems~\ref{th:nontuned} and~\ref{th:main}? \item Even for the single-agent setting, we do not know whether regret bounds of the form $\sqrt{(D+T)\ln K}$, where $D$ is the total delay experienced over the $T$ rounds, could be proven ---see~\citep{DBLP:conf/aaai/JoulaniGS16,NIPS2015_5833} for similar results in the full-information setting. In general, the study of learning on a communication network with time-varying delays, and its impact on the regret rates, is a topic which is certainly worthy of attention. \end{enumerate}
\subsection*{Acknowledgments} We thank the anonymous reviewers for their careful reading, and for their thoughtful suggestions that greatly improved the presentation of this paper. Yishay Mansour is supported in part by the Israeli Centers of Research Excellence (I-CORE) program, (Center No.~4/11), by a grant from the Israel Science Foundation (ISF), by a grant from United States-Israel Binational Science Foundation (BSF) and by a grant from the Len Blavatnik and the Blavatnik Family Foundation.
\appendix
\section{Proofs from Section \ref{s:prel}}\label{a:prel}
\proofof{Lemma \ref{l:sandwich}}
\begin{proof} Directly from the definition of the update~(\ref{eq:exp-upd}), $w_{t+1}(i) \le p_t(i)$ for all $i \in A$, so that $W_{t+1} \le 1$, which in turn implies $w_{t+1}(i) \le w_{t+1}(i)/W_{t+1} = p_{t+1}(i)$. Therefore
\begin{align*}
p_{t+1}(i) - p_t(i) &\ge
w_{t+1}(i) - p_t(i)\\ &=
p_t(i)\left(e^{-\eta\,\widehat{\loss}_t(i)} - 1\right)\\ &\ge
-\eta\,p_t(i)\widehat{\loss}_t(i)~, \end{align*}
the last inequality using $1-e^{-x} \leq x$ for $x \geq 0$.
Similarly, \begin{align*}
p_{t+1}(i) - p_t(i) &\le
p_{t+1}(i) - w_{t+1}(i)\\ &=
p_{t+1}(i) - p_{t+1}(i)W_{t+1}\\ &=
p_{t+1}(i)\sum_{j \in A} \bigl(p_t(j) - w_{t+1}(j)\bigr)\\ &=
p_{t+1}(i)\sum_{j \in A} p_t(j)\left(1 - e^{-\eta\,\widehat{\loss}_t(j)}\right)\\ &\le
\eta\,p_{t+1}(i)\sum_{j \in A} p_t(j)\widehat{\loss}_t(j) \end{align*}
concluding the proof. \end{proof}
\proofof{Lemma \ref{l:mult}} \begin{proof} We proceed by induction over $t$. For all $t \le d$, $\widehat{\loss}_t(\cdot) = 0$. Hence $p_t(\cdot) = 1/K$, and the lemma trivially holds. For $t > d$ we can write
\begin{align*}
\sum_{i\in A} p_t(i)\widehat{\loss}_t(i) &=
\sum_{i\in A} p_t(i)\frac{\ell_{t-d}(i)}{q_{t-d}(i)} B_{t-d}(i) \\ &\le
\sum_{i\in A} \frac{p_t(i)}{q_{t-d}(i)}
\qquad\qquad\qquad\ \ \ \text{(because $B_{t-d}(i)\ell_{t-d}(i) \leq 1$)} \\ &\le
\sum_{i\in A} \left(1 + \frac{1}{d}\right)^d\frac{p_{t-d}(i)}{q_{t-d}(i)}
\qquad \text{(by the inductive hypothesis)} \\ &\le
\left(1 + \frac{1}{d}\right)^d K
\qquad\qquad\qquad \text{(because $q_{t-d}(i) \ge p_{t-d}(i)$)} \\ &\le
Ke~. \end{align*}
Hence, using Lemma \ref{l:sandwich}, \[
p_{t+1}(i)\bigl(1-\eta\,Ke\bigr) \le
p_{t+1}(i)\left(1-\eta\,\sum_{j \in A} p_t(j)\widehat{\loss}_t(j)\right) \le
p_t(i) \] which implies $ p_{t+1}(i) \le \frac{p_t(i)}{1-\eta\,Ke} \le \frac{p_t(i)}{1-\frac{1}{d+1}} = \left(1 + \frac{1}{d}\right)p_t(i) $ whenever $\eta \le \frac{1}{Ke(d+1)}$. \end{proof}
\section{Proofs from Section \ref{s:single-d}}\label{a:single-d}
The next lemma relates the variance of the estimates~(\ref{eq:estimator}) to the structure of the communication graph $G$. The lemma is stated for a generic undirected communication graph $G$, but our application of it
actually involves graph $G_{\le d}$.
\begin{lemma} \label{l:q-bound}
Let $G = (V,E)$ be an undirected graph with independence number $\alpha(G)$. For each $v \in V$, let $N_{\le 1}(v)$ be the neighborhood of node $v$ (including $v$ itself), and $\boldsymbol{p}(v) = \bigl(p(1,v),\dots,p(K,v)\bigr)$ be a probability distribution over $A = \{1,\dots,K\}$. Then, for all $i \in A$, \[
\sum_{v \in V} \frac{p(i,v)}{q(i,v)} \le
\frac{1}{1-e^{-1}}\left(\alpha(G) + \sum_{v\in V} p(i,v) \right) \quad \text{where} \quad q(i,v) = 1 - \prod_{v' \in N_{\le 1}(v)}\bigl(1-p(i,v')\bigr)~. \] \end{lemma}
\begin{proof} Fix $i \in A$ and set for brevity $P(i,v) = \sum_{v' \in N_{\le 1}(v)} p(i,v')$. We can write
\begin{align*}
\sum_{v \in V} \frac{p(i,v)}{q(i,v)} &=
\underbrace{\sum_{v \in V \,:\, P(i,v) \ge 1} \frac{p(i,v)}{q(i,v)}}_{\mathrm{(I)}} \quad + \quad
\underbrace{\sum_{v \in V\,:\, P(i,v) < 1} \frac{p(i,v)}{q(i,v)}}_{\mathrm{(II)}}~, \end{align*}
and proceed by upper bounding the two terms~(I) and~(II) separately. Let $r(v)$ be the cardinality of $N_{\le 1}(v)$. We have, for any given $v \in V$, \[
\min\left\{ q(i,v) \,:\, \sum_{v' \in N_{\le 1}(v)} p(i,v') \ge 1 \right\} =
1-\left(1-\frac{1}{r(v)}\right)^{r(v)} \ge
1-e^{-1}~. \] The equality is due to the fact that the minimum is achieved when $p(i,v') = \frac{1}{r(v)}$ for all $v' \in N_{\le 1}(v)$ (as can be checked via the AM--GM inequality), and the inequality comes from $r(v) \ge 1$ (since $v \in N_{\le 1}(v)$). Hence
\begin{align*}
\mathrm{(I)} \le
\sum_{v \in V \,:\, P(i,v) \ge 1} \frac{p(i,v)}{1-e^{-1}} \le
\sum_{v \in V} \frac{p(i,v)}{1-e^{-1}}~. \end{align*}
As for~(II), using the inequality $1-x \leq e^{-x}, x\in [0,1]$, with $x = p(i,v')$, we can write
\begin{align*}
q(i,v) \ge
1-\exp\left(-\sum_{v' \in N_{\le 1}(v)} p(i,v')\right) =
1-\exp\left(- P(i,v)\right)~. \end{align*}
In turn, because $P(i,v) < 1$ in terms (II), we can use the inequality $1-e^{-x} \geq (1-e^{-1})\,x$, holding when $x \in [0,1]$, with $x = P(i,v)$, thereby concluding that \[
q(i,v) \ge (1-e^{-1})P(i,v) \]
Thus \begin{align*}
\mathrm{(II)} \le
\sum_{v \in V\,:\, P(i,v) < 1} \frac{p(i,v)}{(1-e^{-1})P(i,v)} \le
\frac{1}{1-e^{-1}}\,\sum_{v \in V} \frac{p(i,v)}{P(i,v)} \le
\frac{\alpha(G)}{1-e^{-1}}~, \end{align*}
where in the last step we used \cite[Lemma 10]{alon2014nonstochastic}. Notice that although the statement of this lemma refers to a directed graph and its maximum acyclic subgraph, in the special case of undirected graphs the size of the maximum acyclic subgraph coincides with the independence number. Moreover, observe that $p(i,1),\dots,p(i,N) \ge 0$ need not sum to one in order for this lemma to hold. \end{proof}
\proofof{Theorem \ref{th:nontuned}}
\begin{proof} The standard analysis of the exponentially-weighted algorithm with importance-sampling estimates (see, e.g., the proof of \cite[Lemma~1]{alon2014nonstochastic}) gives for each agent $v$ and each action $k$ the deterministic bound
\begin{equation} \label{eq:exp3}
\sum_{t=1}^T \sum_{i=1}^K p_t(i,v) \widehat{\loss}_t(i,v) \le \sum_{t=1}^T \widehat{\loss}_t(k,v) + \frac{\ln K}{\eta} + \frac{\eta}{2} \sum_{t=1}^T \sum_{i=1}^K p_t(i,v) \widehat{\loss}_t(i,v)^2~. \end{equation}
We take expectations of the three (double) sums in~(\ref{eq:exp3}) separately. As for the first sum, notice that an iterative application of Lemma~\ref{l:sandwich} gives, for $t > d$, \[ p_t(i,v) \ge p_{t-d}(i,v) - \eta\,\sum_{s=1}^d p_{t-s}(i,v)\widehat{\loss}_{t-s}(i,v)~, \] so that, setting for brevity $A_t(i,v) = \sum_{s=1}^d p_{t-s}(i,v)\widehat{\loss}_{t-s}(i,v)$, we have
\begin{align*} \sum_{t=1}^T \sum_{i=1}^K p_t(i,v) \widehat{\loss}_t(i,v) &\geq \sum_{t=2d+1}^T \sum_{i=1}^K p_t(i,v) \widehat{\loss}_t(i,v) \\ &\geq \sum_{t=2d+1}^T \sum_{i=1}^K p_{t-d}(i,v) \widehat{\loss}_t(i,v)
- \eta\,\sum_{t=2d+1}^T \sum_{i=1}^K A_t(i,v)\,\widehat{\loss}_t(i,v)~. \end{align*}
Hence
\begin{align*} \field{E}\left[\sum_{t=1}^T \sum_{i=1}^K p_t(i,v) \widehat{\loss}_t(i,v) \right] &\geq \field{E}\left[ \sum_{t=2d+1}^T \sum_{i=1}^K p_{t-d}(i,v) \widehat{\loss}_t(i,v) \right] - \eta\,\field{E}\left[\sum_{t=2d+1}^T \sum_{i=1}^K A_t(i,v)\,\widehat{\loss}_t(i,v) \right]\\ &= \field{E}\left[ \sum_{t=2d+1}^T \sum_{i=1}^K p_{t-d}(i,v)\,\field{E}_{t-d}\left[\widehat{\loss}_t(i,v)\right] \right]\\ &\qquad\qquad - \eta\,\field{E}\left[\sum_{t=2d+1}^T \sum_{i=1}^K A_t(i,v)\,\field{E}_{t-d}\left[\widehat{\loss}_t(i,v)\right] \right]\\ &{\mbox{(since $p_t(i,v)$ is determined by $I_1(\cdot), \ldots, I_{t-d-1}(\cdot)$)}}\\ &= \field{E}\left[ \sum_{t=2d+1}^T \sum_{i=1}^K p_{t-d}(i,v)\,\ell_{t-d}(i) \right] - \eta\,\field{E}\left[\sum_{t=2d+1}^T \sum_{i=1}^K A_t(i,v)\,\ell_{t-d}(i)\right]\\ &{\mbox{(using (\ref{eq:avevar}))}}\\ &\geq \field{E}\left[ \sum_{t=1}^T \sum_{i=1}^K p_{t}(i,v)\,\ell_{t}(i) \right] - 2d - \eta\,T\,d~. \end{align*}
The last step uses
\begin{align*} \field{E}\left[\sum_{i=1}^K A_t(i,v)\,\ell_{t-d}(i) \right] &\leq \field{E}\left[\sum_{i=1}^K A_t(i,v) \right] \\ &=\field{E}\left[\sum_{i=1}^K \sum_{s=1}^d p_{t-s}(i,v)\widehat{\loss}_{t-s}(i,v)\right]\\ &= \field{E}\left[\sum_{i=1}^K \sum_{s=1}^d p_{t-s}(i,v)\ell_{t-s-d}(i)\right]\\ &\leq \field{E}\left[\sum_{i=1}^K\sum_{s=1}^d p_{t-s}(i,v)\right]\\ &= d \end{align*}
holding for $t \geq 2d+1$.
Similarly, for the second sum in (\ref{eq:exp3}), we have
\begin{align*}
\field{E}\left[\sum_{t=1}^T \widehat{\loss}_t(k,v)\right] =
\sum_{t=d+1}^T \ell_{t-d}(k) \le
\sum_{t=1}^T \ell_t(k)~. \end{align*}
Finally, for the third sum in (\ref{eq:exp3}), an iterative application of Lemma \ref{l:mult} yields, for $t > d$, \[ p_t(i,v) \leq \left(1+\frac{1}{d}\right)^d p_{t-d}(i,v) \leq e\,p_{t-d}(i,v)~, \] so that we can write
\begin{align*}
\field{E}\left[\sum_{t=1}^T \sum_{i=1}^K p_t(i,v) \widehat{\loss}_t(i,v)^2 \right] &=
\field{E}\left[\sum_{t=d+1}^T \sum_{i=1}^K \field{E}_{t-d}\left[p_t(i,v) \widehat{\loss}_t(i,v)^2\right] \right] \\ &\le
\field{E}\left[\sum_{t=d+1}^T \sum_{i=1}^K \frac{p_t(i,v)}{q_{d,t-d}(i,v)} \right]
\qquad\,\,\, \text{(using~(\ref{eq:aveprob}) and $\ell_t(\cdot) \leq 1$)} \\ &\le
e\,\field{E}\left[\sum_{t=d+1}^T \sum_{i=1}^K \frac{p_{t-d}(i,v)}{q_{d,t-d}(i,v)} \right],
\end{align*}
the last inequality being due to an iterative application of Lemma~\ref{l:mult}, and the observation that $\left(1+\frac{1}{d}\right)^d \leq e$.
Hence, summing over all agents $v$, dividing by $N$, and using Lemma~\ref{l:q-bound} on $G_{\leq d}$ gives
\begin{align*}
\frac{1}{N}\,\field{E}\left[\sum_{t=1}^T \sum_{i=1}^K \sum_{v \in V} p_t(i,v) \widehat{\loss}_t(i,v)^2 \right] &\le
\frac{e}{N}\,\field{E}\left[\sum_{t=d+1}^T \sum_{i=1}^K \sum_{v \in V} \frac{p_{t-d}(i,v)}{q_{d,t-d}(i,v)} \right] \\ &\le
\frac{e}{(1-e^{-1})\,N}\,\field{E}\left[\sum_{t=d+1}^T\sum_{i=1}^K \left( \alpha(G_{\le d}) + \sum_{v \in V} p_{t-d}(i,v) \right) \right] \\ &\le
\frac{e}{1-e^{-1}}\,T\left(\frac{K}{N}\,\alpha(G_{\leq d}) + 1\right)~. \end{align*}
Finally, putting together as in (\ref{eq:exp3}), setting $\eta = \gamma\big/\bigl(Ke(d+1)\bigr)$, and overapproximating, we obtain the desired bound. \end{proof}
\proofof{Theorem \ref{th:main}}
\begin{proof} We start off from the first part of the proof of Theorem~\ref{th:nontuned} which, after rearranging terms, gives the following bound for each agent $v$:
\begin{align} \nonumber
\field{E}&\left[\sum_{t=1}^T \sum_{i=1}^K p_t(i,v) \ell_t(i)\right] - \sum_{t=1}^T \ell_t(k) \\ &\le \nonumber
2d + \field{E}\left[ \frac{\ln K}{\eta(v)} + \eta(v)\,d^2 + \eta(v)\,\sum_{t=d+1}^T \left(d + \frac{e}{2}\,\sum_{i=1}^K \frac{p_{t-d}(i,v)}{q_{d,t-d}(i,v)}\right) \right] \\ &\le \label{eq:Q-bound}
3d + \field{E}\left[\frac{Ke(d+1)\ln K}{\gamma(v)} + \frac{\gamma(v)}{Ke(d+1)}\sum_{t=1}^T \underbrace{\left(\Ind{t > d}\,d + \frac{e}{2}\,\left(\sum_{i=1}^K \frac{p_{t-d}(i,v)}{q_{d,t-d}(i,v)}\right)\Ind{t > d}\right)}_{Q_t(v)} \right]~. \end{align}
Note that the optimal tuning of $\gamma(v)$ depends on the random quantity \[
\overline{Q}_T(v) = \sum_{t=1}^T Q_t(v)~. \] We now apply the doubling trick to each instance of {\sc Exp3-Coop}. Recall that, for each $v\in V$, we let
$\gamma_r(v) = Ke(d+1)\sqrt{(\ln K)/2^r}$
for each $r = r_0,r_0+1,\dots$, where $r_0 = \bigl\lceil\log_2\ln K + 2\log_2(Ke(d+1))\bigr\rceil$ is chosen in a way that $\gamma_r(v) \le 1$ for all $r \ge r_0$. Let $T_r$ be the random set of consecutive time steps where the same $\gamma_r(v)$ was used. Whenever the algorithm is running with $\gamma_r(v)$ and detects $\sum_{s \in T_r} Q_s(v) > 2^r$, then we restart the algorithm with $\gamma(v) = \gamma_{r+1}(v)$. The largest $r = r(v)$ we need is $\bigl\lceil\log_2\overline{Q}_T(v)\bigr\rceil$ and \[
\sum_{r=r_0}^{\bigl\lceil \log_2\overline{Q}_T(v) \bigr\rceil} 2^{r/2} < 5\sqrt{\overline{Q}_T(v)}~. \]
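For completeness, the last numerical estimate can be verified as follows:
\[
\sum_{r=r_0}^{\bigl\lceil \log_2\overline{Q}_T(v) \bigr\rceil} 2^{r/2}
\le \sum_{r=-\infty}^{\bigl\lceil \log_2\overline{Q}_T(v) \bigr\rceil} 2^{r/2}
= \frac{2^{\bigl(\lceil \log_2\overline{Q}_T(v) \rceil +1\bigr)/2}}{\sqrt{2}-1}
\le \frac{2\sqrt{\overline{Q}_T(v)}}{\sqrt{2}-1}
< 5\sqrt{\overline{Q}_T(v)}~,
\]
where we used $2^{\lceil \log_2\overline{Q}_T(v) \rceil} \le 2\,\overline{Q}_T(v)$ and $2/(\sqrt{2}-1) < 4.83$.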
Because of~(\ref{eq:Q-bound}), the regret agent $v$ suffers when using $\gamma_r(v)$ within $T_r$ is at most $
3d + 2\sqrt{(\ln K)2^r} $. Now, since we pay at most regret $d$ at each restart, we have
\begin{align*}
\field{E}\left[\sum_{t=1}^T \sum_i p_t(i,v) \ell_t(i)\right] - \sum_{t=1}^T \ell_t(k) &\le
3d + 4Ke(d+1)\ln K \\ &\quad
+ \field{E}\left[10\sqrt{(\ln K)\overline{Q}_T(v)} + 3d\Bigl\lceil\log_2\overline{Q}_T(v)\Bigr\rceil \right]~. \end{align*} The term $3d + 4Ke(d+1)\ln K$ bounds the regret when the algorithm is never restarted implying that only $\gamma_{r_0}(v)$ is used.
Taking averages with respect to $v$, using Jensen's inequality multiple times, and applying the (deterministic) bound \[ \frac{1}{N}\,\sum_{v\in V} \overline{Q}_T(v) \leq \left(d + \frac{e}{2(1-e^{-1})}\,\frac{K\,(\alpha(G_{\le d})+1)}{N} \right)\,T \] derived with the aid of Lemma~\ref{l:q-bound} at the end of the proof of Theorem~\ref{th:nontuned}, gives
\begin{align*}
R_T^{\mathrm{coop}} &\le
3d + 4Ke(d+1)\ln K \\ &\quad
+ 10\sqrt{(\ln K)\field{E}\left[\frac{1}{N}\sum_{v\in V} \overline{Q}_T(v)\right]} + 3d\log_2\left(\field{E}\left[\frac{1}{N}\sum_{v\in V} \overline{Q}_T(v)\right]\right) \\ &\le
10\sqrt{(\ln K)\left(d + \frac{e}{2(1-e^{-1})}\,\frac{K\,(\alpha(G_{\le d})+1)}{N}\right)T}
+ 3d\log_2 T + C\,,
\end{align*}
where $C$ is independent of $T$ and depends polynomially on the other parameters. Hence, as $T$ grows large, \[
R_T^{\mathrm{coop}} = \mathcal{O}\left(\sqrt{(\ln K)\left(d+1 + \frac{K}{N}\,\alpha(G_{\le d})\right)T} + d\,\log T \right)~, \] as claimed. \end{proof}
\section{Proofs from Section~\ref{s:many-d}}\label{a:many-d}
We first need to adapt the preliminary Lemmas~\ref{l:sandwich} and~\ref{l:mult} to the new update rule of {\sc Exp3-Coop2} contained in Figure~\ref{f:exp3-coop2}.
\begin{lemma}\label{la:sandwich}
Under the update rule contained in Figure~\ref{f:exp3-coop2}, for all $t \geq 1$, for all $i \in A$, and for all $v \in V$
\begin{align*} -p_t(i,v)\left(\eta\widehat{\loss}_t(i,v)+\delta\right) &\leq p_{t+1}(i,v) - p_t(i,v)\\ & \leq p_{t+1}(i,v)\,\sum_{j=1}^{K} p_t(j,v)\left(1 - \Ind{\widetilde{p}_{t+1}(i,v) > \delta / K}\big(1-\eta\,\widehat{\loss}_t(i,v)\big)\right) \end{align*}
holds deterministically with respect to the agents' randomization. \end{lemma}
\begin{proof} For the lower bound, we have \[ p_{t+1}(i,v) - p_t(i,v) = \frac{\widetilde{p}_{t+1}(i,v)}{\widetilde{P}_{t+1}(v)} - p_t(i,v) \geq \frac{w_{t+1}(i,v)}{W_{t+1}(v)\,\widetilde{P}_{t+1}(v)} - p_t(i,v)~. \] Since $W_{t+1}(v) = \sum_{i\in A} p_t(i,v) e^{-\eta \widehat{\loss}_t(i,v)} \leq \sum_{i\in A} p_t(i,v) = 1$, and $\widetilde{P}_{t+1}(v) \leq 1 + \delta$ by~(\ref{e:ptildebound}), we can write
\begin{eqnarray*} p_{t+1}(i,v) - p_t(i,v) &\geq&
\frac{w_{t+1}(i,v)}{1 + \delta} - p_t(i,v)\\
&=&
p_t(i,v)\left(\frac{e^{-\eta\widehat{\loss}_t(i,v)}}{1 + \delta} - 1\right)\\ &\geq&
p_t(i,v)\left(\frac{1 - \eta\widehat{\loss}_t(i,v)}{1 + \delta} - 1\right)\qquad {\mbox{(using $e^{-x} \geq 1 - x$)}}\\
&\geq&
p_t(i,v)\left(-\delta -\eta\widehat{\loss}_t(i,v)\right) \end{eqnarray*} as claimed. As for the upper bound, we first claim that
\begin{equation}\label{e:claim} \frac{w_{t+1}(i,v)}{W_{t+1}(v)} \geq p_{t+1}(i,v)\Ind{\widetilde{p}_{t+1}(i,v) > \delta / K}\,. \end{equation}
To prove (\ref{e:claim}), we recall that $\widetilde{p}_{t+1}(i,v) = \max\left\{\frac{w_{t+1}(i,v)}{W_{t+1}(v)},\frac{\delta}{K}\right\}$. Then we distinguish two cases:
\begin{enumerate} \item If $\frac{w_{t+1}(i,v)}{W_{t+1}(v)} \leq \frac{\delta}{K}$, then $\widetilde{p}_{t+1}(i,v) = \delta / K$, and $w_{t+1}(i,v) / W_{t+1}(v) > 0$ by definition, hence (\ref{e:claim}) holds; \item If $\frac{w_{t+1}(i,v)}{W_{t+1}(v)} > \frac{\delta}{K}$ then $\widetilde{p}_{t+1}(i,v) = \frac{w_{t+1}(i,v)}{W_{t+1}(v)}$, so that \( p_{t+1}(i,v) \leq p_{t+1}(i,v)\,\widetilde{P}_{t+1}(v) = \widetilde{p}_{t+1}(i,v) \) and~(\ref{e:claim}) again holds. \end{enumerate}
Then, setting for brevity $C = \Ind{\widetilde{p}_{t+1}(i,v) > \delta / K}$, we can write
\begin{eqnarray*} p_{t+1}(i,v) - p_{t}(i,v) &\leq& p_{t+1}(i,v) - w_{t+1}(i,v) \qquad\qquad\qquad\ \text{(from the update~(\ref{eq:exp-upd}))} \\ &\leq& p_{t+1}(i,v) - W_{t+1}(v) p_{t+1}(i,v)\,C \qquad\text{(using~(\ref{e:claim}))}\\ &=& p_{t+1}(i,v)\big(1 - W_{t+1}(v)\,C\big)\\ &=& p_{t+1}(i,v)\left(\sum_{j\in A}\big(p_t(j,v) - C\,w_{t+1}(j,v)\big)\right)\\ &=& p_{t+1}(i,v)\,\sum_{j\in A} p_t(j,v)\left(1 - C\,e^{-\eta\,\widehat{\loss}_t(j,v)}\right)\\ &\le& p_{t+1}(i,v)\,\sum_{j\in A} p_t(j,v)\left(1 - C(1-\eta\,\widehat{\loss}_t(j,v))\right) \end{eqnarray*}
where in the last step we again used $e^{-x} \geq 1 - x$. This concludes the proof. \end{proof}
\begin{lemma}\label{la:mult} Under the update rule contained in Figure~\ref{f:exp3-coop2}, if $\delta \leq 1/d(v)$ and $\eta \leq \frac{1}{Ke(d(v)+1)}$, then
\begin{equation}\label{eq:mult-lemma-manyd} p_{t+1}(i,v) \leq \left(1 + \frac{1}{d(v)}\right)\,p_{t}(i,v) \end{equation}
holds for all $t \geq 1$ and $i \in A$, deterministically with respect to the agents' randomization. \end{lemma}
\begin{proof} If $\widetilde{p}_{t+1}(i,v) = \delta / K$ then, from (\ref{e:ptildebound}), we have $\delta / K = p_{t+1}(i,v)\widetilde{P}_{t+1}(v) \geq p_{t+1}(i,v)$, and $p_t(i,v) \geq \frac{\delta}{K(1 + \delta)}$. Hence, $\frac{p_{t+1}(i,v)}{p_{t}(i,v)} \leq \frac{\delta / K}{\delta / (K (1 + \delta))} = 1 + \delta$, so the claim follows from $\delta \leq \frac{1}{d(v)}$.
On the other hand, if $\widetilde{p}_{t+1}(i,v) > \delta / K$, then the proof is exactly the same as the proof of Lemma~\ref{l:mult}, for the second inequality in the statement of Lemma~\ref{la:sandwich} turns out to be exactly the same as the corresponding inequality in the statement of Lemma~\ref{l:sandwich}. \end{proof}
Next, we generalize Lemma~\ref{l:q-bound} to the case of directed graphs. This is where we need a lower bound on the probabilities $p_t(i,v)$. If $G = (V,E)$ is a directed graph, then for each $v \in V$ let $N^-_{\le 1}(v)$ be the in-neighborhood of node $v$ (i.e., the set of $v'\in V$ such that arc $(v',v) \in E$), including $v$ itself.
\begin{lemma}\label{la:q-bound}
Let $G = (V,E)$ be a directed graph with independence number $\alpha(G)$. Let $\boldsymbol{p}(v) = \bigl(p(1,v),\dots,p(K,v)\bigr)$ be a probability distribution over $A = \{1,\dots,K\}$ such that $p(i,v) \geq \frac{\delta}{K(1 + \delta)}$. Then, for all $i \in A$, \[
\sum_{v \in V} \frac{p(i,v)}{q(i,v)} \le
\frac{1}{1-e^{-1}}\left(6\,\alpha(G) \ln \left(1+ \frac{N^2 K(1+\delta)}{\delta}\right) + \sum_{v\in V} p(i,v) \right)~, \] where $q(i,v) = 1 - \prod_{v' \in N^-_{\le 1}(v)}\bigl(1-p(i,v')\bigr)$. \end{lemma}
\begin{proof} We follow the notation and the proof of Lemma \ref{l:q-bound}, where it is shown that \[
\sum_{v \in V} \frac{p(i,v)}{q(i,v)} \le
\frac{1}{1-e^{-1}}\,\sum_{v\in V}\left(\frac{p(i,v)}{P(i,v)} + p(i,v) \right)~. \] In order to bound from above the sum $\sum_{v\in V}\frac{p(i,v)}{P(i,v)}$, we combine \citep[Lemma~14 and~16]{alon2014nonstochastic} and derive the upper bound \[ \sum_{v\in V}\frac{p(i,v)}{P(i,v)} \leq 6\,\alpha(G) \ln \left(1+ \frac{N^2 K(1+\delta)}{\delta}\right) \] holding when $p(i,v) \geq \frac{\delta}{K(1 + \delta)}$. Again, the probabilities $p(i,1),\dots,p(i,N) \ge 0$ need not sum to one in order for this lemma to apply. \end{proof}
With the above three lemmas handy, we are ready to prove Theorem~\ref{t:main-individual}.
\proofof{Theorem \ref{t:main-individual}}
\begin{proof}
This proof is similar to the proof of Theorem~\ref{th:nontuned}, hence we only emphasize the differences between the two.
From the update rule in Figure~\ref{f:exp3-coop2}, we have, for each $v \in V$,
\begin{eqnarray*} W_{T+1}(v)
&=&
{\displaystyle \sum_{i=1}^{K} \frac{\widetilde{p}_T(i,v)}{\widetilde{P}_T(v)} e^{-\eta\widehat{\loss}_T(i,v)}}\\ &\geq&
\sum_{i=1}^{K} \frac{w_T(i,v)}{W_T(v) \widetilde{P}_T(v)} e^{-\eta\widehat{\loss}_T(i,v)} \qquad\qquad\qquad{\mbox{(since $\widetilde{p}_T(i,v) \geq w_T(i,v)/W_T(v)$)}}\\
&=&
\sum_{i=1}^{K} \frac{\widetilde{p}_{T-1}(i,v) e^{-\eta\widehat{\loss}_{T-1}(i,v)} e^{-\eta\widehat{\loss}_T(i,v)}}{W_T(v)\widetilde{P}_{T-1}(v)\widetilde{P}_T(v)}\\ &\vdots&\\ &\geq&
\sum_{i=1}^{K} \frac{\displaystyle w_1(i,v)\,e^{-\eta \sum_{t=1}^T \widehat{\loss}_t (i,v)}}{W_1(v) \cdots W_T(v) \widetilde{P}_1(v) \cdots \widetilde{P}_T(v)}~. \end{eqnarray*}
Now, because $w_1(i,v) = 1$, $W_1(v) = K$, and $\widetilde{P}_t(v) \leq 1+\delta$ for all $t$, see~(\ref{e:ptildebound}), the above chain of inequalities implies that, for any fixed action $k \in A$,
\begin{equation}\label{e:deterministic} (1 + \delta)^T\, K\,\left(\prod_{t=1}^{T} W_{t+1}(v)\right) \geq e^{-\eta \sum_{t=1}^T \widehat{\loss}_t (k,v)}~. \end{equation}
As usual, the quantity $W_{t+1}(v)$ can be upper bounded as
\begin{eqnarray*} W_{t+1}(v)
&=&
\sum_{i=1}^{K} p_t(i,v) e^{-\eta\widehat{\loss}_t(i,v)}\\ &\leq&
\sum_{i=1}^{K} p_t(i,v) \left(1 -\eta\widehat{\loss}_t(i,v) + \frac{\eta^2}{2}\widehat{\loss}_t(i,v)^2\right)\\ &&{\mbox{(from $e^{-x} \leq 1-x+x^2/2$ for all $x \geq 0$)}}\\ &=&
1 - \eta\sum_{i=1}^{K} p_t(i,v)\widehat{\loss}_t(i,v) + \frac{\eta^2}{2}\sum_{i=1}^{K} p_t(i,v)\widehat{\loss}_t(i,v)^2~. \end{eqnarray*}
Plugging back into (\ref{e:deterministic}) and taking logs of both sides gives
\[
T\ln(1+\delta) + \ln K + \sum_{t=1}^T \ln\left( 1 - \eta\sum_{i=1}^{K} p_t(i,v)\widehat{\loss}_t(i,v) + \frac{\eta^2}{2}\sum_{i=1}^{K} p_t(i,v)\widehat{\loss}_t(i,v)^2 \right) \geq
-\eta \sum_{i=1}^K \widehat{\loss}_t (k,v)\,. \] Finally, using $\ln(1+x) \leq x$, dividing by $\eta$, using $\delta = 1/T$, and rearranging yields
\begin{equation} \label{eq:pre-bound-manyd} \sum_{t=1}^T \sum_{i=1}^K p_t(i,v) \widehat{\loss}_t(i,v) \leq \frac{1+\ln K}{\eta} + \sum_{t=1}^T \widehat{\loss}_t (k,v) + \frac{\eta}{2} \sum_{t=1}^T \sum_{i=1}^K p_t(i,v) \widehat{\loss}_t(i,v)^2 \end{equation}
hence arriving at the counterpart to~(\ref{eq:exp3}).
From this point on, we proceed as in the proof of Theorem~\ref{th:nontuned} by taking expectation on the three sums in~(\ref{eq:pre-bound-manyd}). Notice that we do still have, for all $v \in V$, $t > d(v)$, and $i \in A$,
\begin{eqnarray*} \field{E}_{t-d(v)}\Bigl[\widehat{\loss}_t(i,v)\Bigr] &=& \ell_{t-d(v)}(i)\\
\field{E}_{t-d(v)}\Bigl[p_t(i,v)\widehat{\loss}_t(i,v)\Bigr] &=& p_t(i,v)\ell_{t-d(v)}(i)\\
\field{E}_{t-d(v)}\Bigl[p_t(i,v)\widehat{\loss}_t(i,v)^2\Bigr] &=& p_t(i,v)\frac{\ell_{t-d(v)}(i)^2}{q_{\mathcal{P},t-d(v)}(i,v)}~. \end{eqnarray*}
We can write
\begin{eqnarray*} \field{E}\left[\sum_{t=1}^T \sum_{i=1}^K p_t(i,v) \widehat{\loss}_t(i,v) \right] &\geq& \field{E}\left[ \sum_{t=1}^T \sum_{i=1}^K p_{t}(i,v)\,\ell_{t}(i) \right] - 2d(v) - (\eta+\delta)\,T\,d(v)\\ \field{E}\left[\sum_{t=1}^T \widehat{\loss}_t(k,v)\right] &\le& \sum_{t=1}^T \ell_t(k) \end{eqnarray*}
and, as in the proof of Theorem \ref{th:nontuned},
\begin{align*}
\field{E}\left[\sum_{t=1}^T \sum_{i=1}^K p_t(i,v) \widehat{\loss}_t(i,v)^2 \right] \le
e\,\field{E}\left[\sum_{t=d(v)+1}^T \sum_{i=1}^K \frac{p_{t-d(v)}(i,v)}{q_{\mathcal{P},t-d(v)}(i,v)} \right]~. \end{align*}
Summing over all agents $v$, dividing by $N$, and applying Lemma~\ref{la:q-bound} to the directed graph $G_{\mathcal{P}}$, the latter inequality gives \[ \frac{1}{N}\,\field{E}\left[\sum_{t=1}^T \sum_{i=1}^K \sum_{v \in V} p_t(i,v) \widehat{\loss}_t(i,v)^2 \right] \le \frac{e}{1-e^{-1}}\,T\left(\frac{6K}{N}\,\alpha\left(G_{\mathcal{P}}\right)\,\ln\left(1+2TN^2 K \right) + 1\right)~. \]
Combining as in (\ref{eq:pre-bound-manyd}), recalling that $\delta = 1/T$, and setting for brevity ${\bar d}_V = \frac{1}{N}\,\sum_{v\in V} d(v)$, we have thus obtained that the average welfare regret of {\sc Exp3-Coop2} satisfies
\begin{align*} R_T^{\mathrm{coop}} &\leq 3{\bar d}_V + \eta\,T\,{\bar d}_V + \frac{1+\ln K}{\eta} + \frac{e\eta}{2(1-e^{-1})}\,T\left(\frac{6K}{N}\,\alpha\left(G_{\mathcal{P}}\right)\,\ln\left(1+2TN^2 K \right) + 1\right)\\ & = \mathcal{O}\left(\eta\,T\,{\bar d}_V + \frac{\ln K}{\eta} + \frac{\eta\,T K}{N}\,\alpha\left(G_{\mathcal{P}}\right)\,\ln\left(T N K \right)\right) \end{align*}
as $T$ grows large. This concludes the proof.
\end{proof}
\section{Proofs regarding Section \ref{s:delayed}}\label{a:delayed}
\proofof{Corollary \ref{c:delayed}}
\begin{proof} In order to prove the upper bound, we use the exponentially-weighted algorithm with Estimate~(\ref{eq:estimator}) specialized to the case of one agent only, namely $B_{t-d}(i) = \Ind{I_{t-d} = i}$ and $q_{d,t-d}(i) = p_{t-d}(i)$. Notice that this amounts to running the standard Exp3 algorithm performing an update as soon as a new loss becomes available. In this case, because $N = \alpha(G_{\leq d}) = 1$, the bound of Theorem~\ref{th:nontuned}, with a suitable choice of $\gamma$ (which depends on $T$, $K$, and $d$), reduces to \[
R_T = \mathcal{O}\left(d + \sqrt{(K+d)\,T\ln K}\right)~. \]
We now prove a lower bound matching our upper bound up to logarithmic factors. The proof hinges on combining the known lower bound $\Omega\bigl(\sqrt{KT}\bigr)$ for bandits without delay of~\cite{auer2002nonstochastic} with the following argument by~\cite{weinberger2002delayed} that provides a lower bound for the full information case with delay.
The proof of the latter bound is by contradiction: we show that a low-regret full information algorithm for delay $d>0$ can be used to design a low-regret full information algorithm for the $d=0$ (no delay) setting. We then apply the known lower bound for the minimax regret in the no-delay setting to derive a lower bound for the setting with delay.
Fix $d > 0$ and let $\mathcal{A}$ be a predictor for the full-information online prediction problem with delay $d$. Let $\boldsymbol{p}_t$ be the probability distribution used by $\mathcal{A}$ at time $t$. We now apply algorithm $\mathcal{A}$ to design a new algorithm $\mathcal{A}'$ for a full information online prediction problem with arbitrary loss vectors $\boldsymbol{\loss}_1',\dots,\boldsymbol{\loss}_B' \in [0,1]^K$ and no delay. More specifically, we create a sequence $\boldsymbol{\loss}_1,\dots,\boldsymbol{\loss}_T \in [0,1]^K$ of loss vectors such that $T = (d+1)B$ and $\boldsymbol{\loss}_t = \boldsymbol{\loss}_b'$ where $b = \bigl\lceil t/(d+1) \bigr\rceil$. At each time $b=1,\dots,B$ algorithm $\mathcal{A}'$ uses the distribution \[
\boldsymbol{p}_b' = \frac{1}{d+1} \sum_{s=1}^{d+1} \boldsymbol{p}_{(d+1)(b-1)+s} \] where $\boldsymbol{p}_t = \bigl(\frac{1}{K},\dots,\frac{1}{K}\bigr)$ for all $t \le 1$. Note that $\boldsymbol{p}_b'$ is defined using $\boldsymbol{p}_{(d+1)(b-1)+1},\dots,\boldsymbol{p}_{(d+1)b}$. These are in turn defined using the same loss vectors $\boldsymbol{\loss}_1',\dots,\boldsymbol{\loss}_{b-1}'$ since, by definition, each $\boldsymbol{p}_{t+1}$ uses $\boldsymbol{\loss}_1,\dots,\boldsymbol{\loss}_{t-d}$, and $\bigl\lceil (t-d)/(d+1) \bigr\rceil = b-1$ for all $t = (d+1)(b-1),\dots,(d+1)b-1$. So $\mathcal{A}'$ is a legitimate full-information online algorithm for the problem $\boldsymbol{\loss}_1',\dots,\boldsymbol{\loss}_B'$ with no delay. As a consequence,
\begin{align*}
\sum_{t=1}^T \sum_{i=1}^K \ell_t(i) p_t(i) &=
\sum_{b=1}^B \sum_{s=1}^{d+1} \sum_{i=1}^K \ell_b'(i) p_{(d+1)(b-1)+s}(i) \\ &=
(d+1)\sum_{b=1}^B \sum_{i=1}^K \frac{1}{d+1}\sum_{s=1}^{d+1} \ell_b'(i) p_{(d+1)(b-1)+s}(i) \\ &=
(d+1)\sum_{b=1}^B \sum_{i=1}^K \ell_b'(i) p'_b(i)~. \end{align*}
Moreover, \[
\min_{k \in A} \sum_{t=1}^T \ell_t(k) = (d+1) \min_{k \in A} \sum_{b=1}^B \ell_b'(k)~. \] Since we know that for any predictor $\mathcal{A}'$ there exists a loss sequence $\boldsymbol{\loss}_1',\boldsymbol{\loss}_2',\dots$ such that the regret of $\mathcal{A}'$ is at least $\bigl(1 - o(1)\bigr)\sqrt{(T/2)\ln K}$, where $o(1) \to 0$ for $K,B \to \infty$, we have that the regret of $\mathcal{A}$ is at least \[
(d+1) R_{T/(d+1)}(\mathcal{A}') = \bigl(1 - o(1)\bigr)(d + 1)\sqrt{\frac{T}{2(d+1)}\ln K} = \bigl(1 - o(1)\bigr)\sqrt{(d + 1)\frac{T}{2}\ln K}~, \] where $R_{T/(d+1)}(\mathcal{A}')$ is the regret of $\mathcal{A}'$ over $T/(d+1)$ time steps.
The proof is completed by observing that the regret of any predictor in the bandit setting with delay $d$ cannot be smaller than the regret of the predictor in the bandit setting with no delay or smaller than the regret of the predictor in the full information setting with delay $d$. Hence, the minimax regret in the bandit setting with delay $d$ must be at least of order \[
\max\left\{ \sqrt{KT}, \sqrt{(d + 1)T\ln K} \right\} = \Omega\left(\sqrt{(K + d)\,T}\right)~. \] \end{proof}
\end{document}
\begin{document}
\title{A Two-Stage Fourth Order Time-Accurate Discretization for Lax-Wendroff Type Flow Solvers \\[3mm]
I. Hyperbolic Conservation Laws } \markboth{Jiequan Li and Zhifang Du}{A Two-Stage Fourth Order L-W Type Scheme }
\begin{abstract} In this paper we develop a novel two-stage fourth order time-accurate discretization for time-dependent flow problems, particularly for hyperbolic conservation laws. In contrast to the classical Runge-Kutta (R-K) temporal discretization, which uses first order Riemann solvers as its building blocks, the current approach is solely associated with Lax-Wendroff (L-W) type schemes as the building blocks. As a result, a two-stage procedure can be constructed to achieve a fourth order temporal accuracy, rather than the well-developed four stages of R-K methods. The generalized Riemann problem (GRP) solver is taken as a representative of L-W type schemes for the construction of a two-stage fourth order scheme. \end{abstract}
{\bf Key Words.} Lax-Wendroff Method, two-stage fourth order temporal accuracy, hyperbolic conservation laws, GRP solver.
\section{Introduction}
The design of high order accurate CFD methods has attracted much attention in the past decades. Successful examples include ENO \cite{Harten-ENO, Shu-Osher,Barth}, WENO \cite{Liu, Jiang}, DG \cite{DG-2}, the residual distribution (RD) method \cite{Abgrall-RD}, spectral methods \cite{Tang-Shen}, and many others; see also the references therein. Most of these methods use the Runge-Kutta (R-K) approach to achieve high order temporal accuracy, starting from first order numerical flux functions such as first order Riemann solvers. In order to achieve a fourth order temporal accuracy, four stages of R-K type iterations in time are usually adopted.
In this paper we develop a novel fourth order temporal discretization for time-dependent problems, particularly for hyperbolic conservation laws \begin{equation} \dfr{\partial \mathbf{u}}{\partial t} +\nabla\cdot \mathbf{f}(\mathbf{u})=0, \label{law} \end{equation} where $\mathbf{u}=(u_1,\cdots, u_m)^\top$ is a conservative vector, $\mathbf{f}(\mathbf{u})=(\mathbf{f}_1(\mathbf{u}),\cdots, \mathbf{f}_d(\mathbf{u}))$ is the associated flux vector function, $m\geq 1$, $d\geq 1$. The approach under investigation is based on the second order Lax-Wendroff (L-W) methodology and uses a two-stage procedure to achieve a fourth order accuracy, which is different from the classical R-K approach. This approach can be easily extended to many other time-dependent flow problems \cite{Du-Li}.
The Lax-Wendroff methodology \cite{L-W}, i.e., the Cauchy-Kovalevskaya method in the context of PDEs, is fundamental in the sense that it has second order accuracy both in space and time, and the underlying governing equations are fully incorporated into approximations of spatial and temporal evolution. In a finite volume framework, Eq. \eqref{law} is discretized as \begin{equation} \begin{array}{l}
\displaystyle \b\mathbf{u}_j^{n+1} =\b\mathbf{u}_j^n -\sum_{\ell}\frac{\Delta t}{|\Omega_j|}\mathbf{F}_{j\ell}(\mathbf{u}(\cdot, t_n+\frac{\Delta t}2),\Gamma_{j\ell},\mathbf{n}_{j\ell}) \label{scheme} \end{array} \end{equation} where $\b\mathbf{u}_j^n$ is the solution averaged over the control volume $\Omega_j$ at time $t=t_n$, $t_{n+1}=t_n+\Delta t$, $\Gamma_{j\ell}$ is the $\ell$-th side of $\Omega_j$, and $\mathbf{n}_{j\ell}$ is the unit outward normal direction. The numerical flux $\mathbf{F}_{j\ell}$ along the time-space surface $\Gamma_{j\ell}$ is based on the half time-step value $\mathbf{u}(\cdot, t+\Delta t/2)$, or an equivalent approximation. The L-W method achieves the time accuracy through the formulae \begin{equation} \begin{array}{l} \displaystyle \mathbf{u}(\mathbf{x}, t_n+\frac{\Delta t}2) =\mathbf{u}(\mathbf{x}, t_n) +\dfr{\Delta t}{2} \cdot \dfr{\partial \mathbf{u}}{\partial t}(\mathbf{x},t_n) +\mathcal{O}(\Delta t^2), \ \ \ \mathbf{x}\in \Gamma_{j\ell}, \\ \dfr{\partial \mathbf{u}}{\partial t}(\mathbf{x},t_n) =-\nabla\cdot \mathbf{f}(\mathbf{u})(\mathbf{x}, t_n), \end{array} \end{equation} which adopt two instantaneous values $\mathbf{u}(\mathbf{x}, t_n)$ and $\frac{\partial \mathbf{u}}{\partial t}(\mathbf{x}, t_n)$ at any point $(\mathbf{x}, t_n)$ on the boundary of a control volume. In particular, the time variation is related to the spatial derivatives of the solutions. The first order {\em Riemann solvers} \cite{Godunov, Toro} and second order {\em L-W solvers} have distinguishable procedures to define the two instantaneous values, respectively. The L-W method is a one-stage spatial-temporal coupled second order accurate method, which utilizes the information only at time $t=t_n$. If a R-K approach is preferred, usually two stages are needed to achieve a second order accuracy in time.
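As a minimal illustration of the one-stage L-W methodology, the following sketch (our own, for the linear advection equation $u_t + a u_x = 0$ with periodic boundary conditions; it is not the solver used later in this paper) implements the classical second order Lax-Wendroff update in Python.
\begin{verbatim}
import numpy as np

# classical Lax-Wendroff scheme for u_t + a u_x = 0 on a periodic grid
a, N, cfl, T = 1.0, 200, 0.8, 1.0
dx = 1.0 / N
x = (np.arange(N) + 0.5) * dx
u = np.sin(2 * np.pi * x)                   # smooth periodic initial data

t = 0.0
while t < T:
    dt = min(cfl * dx / abs(a), T - t)
    nu = a * dt / dx                        # Courant number
    up, um = np.roll(u, -1), np.roll(u, 1)  # u_{j+1}, u_{j-1}
    # second order in space and time, one stage (no Runge-Kutta iteration)
    u = u - 0.5 * nu * (up - um) + 0.5 * nu**2 * (up - 2.0 * u + um)
    t += dt

exact = np.sin(2 * np.pi * (x - a * T))
print("max error:", np.max(np.abs(u - exact)))
\end{verbatim}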
Our goal is to extend the L-W type schemes to even higher order accuracy. Based on a second order L-W type solver with the information $\mathbf{u}$ and $\frac{\partial \mathbf{u}}{\partial t}$, a two-stage procedure can be designed to obtain a fourth order (in time) accurate approximation for $\mathbf{u}(\cdot, t_{n+1})$: one stage at $t=t_n$ and the other stage at $t_n+\frac{\Delta t}2$. The algorithm is stated below.
\begin{enumerate} {\em \item[(i)] {\bf Lax-Wendroff step}. Given initial data $\mathbf{u}^n(\mathbf{x})$ to \eqref{law} at $t=t_n$, construct instantaneous values $\mathbf{u}(\mathbf{x},t_n+0)$ and $\frac{\partial\mathbf{u}}{\partial t}(\mathbf{x},t_n+0)$, which are symbolically denoted as \begin{equation} \mathbf{u}(\cdot, t_n+0) = \mathcal{M}(\mathbf{u}^n),\ \ \ \ \frac{\partial}{\partial t}\mathbf{u}(\cdot,t_n+0) = \mathcal{L}(\mathbf{u}^n). \end{equation} Then $\frac{\partial}{\partial t}\mathcal{L}(\mathbf{u})(\cdot,t_n+0)$ is subsequently obtained using the chain rule, \begin{equation} \dfr{\partial}{\partial t} \mathcal{L}(\mathbf{u}^n)=\dfr{\partial}{\partial\mathbf{u}}\mathcal{L}(\mathbf{u}^n) \dfr{\partial}{\partial t}\mathbf{u}(\cdot,t_n+0). \end{equation}
\item[(ii)] {\bf Solution advancing step}. Define the intermediate data $\mathbf{u}^*(\mathbf{x})$ \begin{equation} \mathbf{u}^* =\mathbf{u}^n +\dfr 12 \Delta t\mathcal{L}(\mathbf{u}^n) +\dfr 18\Delta t^2 \dfr{\partial }{\partial t}\mathcal{L}(\mathbf{u}^n), \end{equation} which can be used to reconstruct new initial data $\mathbf{u}^*(\mathbf{x})$ and get the solution $\frac{\partial }{\partial t}\mathcal{L}(\mathbf{u}^*)$.
Then the solution to the next time level $t_{n+1} =t_n+\Delta t$ can be updated by \begin{equation} \begin{array}{l} \mathbf{u}^{n+1} =\mathbf{u}^n + \Delta t \mathcal{L}(\mathbf{u}^n) + \dfr 16 \Delta t^2 \left(\dfr{\partial }{\partial t}\mathcal{L}(\mathbf{u}^n)+2\dfr{\partial }{\partial t}\mathcal{L}(\mathbf{u}^*)\right). \end{array} \end{equation} } \end{enumerate}
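To illustrate the two-stage procedure in the simplest possible setting, the following sketch (our own) applies it to the scalar ODE $u'=\mathcal{L}(u)=-u^2$, for which the chain rule gives $\frac{\partial}{\partial t}\mathcal{L}(u) = \frac{\partial}{\partial u}\mathcal{L}(u)\,\mathcal{L}(u) = 2u^3$ and the exact solution with $u(0)=1$ is $u(t)=1/(1+t)$; no spatial reconstruction is involved, so this only checks the temporal accuracy (halving $\Delta t$ should reduce the error by a factor of roughly $16$).
\begin{verbatim}
def L(u):                 # L(u) = -u^2
    return -u * u

def dtL(u):               # d/dt L(u) = L'(u) * L(u) = 2 u^3
    return -2.0 * u * L(u)

def two_stage_step(u, dt):
    lu, dlu = L(u), dtL(u)
    ustar = u + 0.5 * dt * lu + 0.125 * dt**2 * dlu              # L-W step
    return u + dt * lu + dt**2 / 6.0 * (dlu + 2.0 * dtL(ustar))  # advancing step

T = 1.0
for n in (10, 20, 40, 80):
    dt, u = T / n, 1.0
    for _ in range(n):
        u = two_stage_step(u, dt)
    print(n, abs(u - 1.0 / (1.0 + T)))  # expect ~16x error reduction per halving
\end{verbatim}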
The above updating scheme differs from the traditional R-K approach in the following aspects.
(i) The above approach is based on a second order L-W type solver to achieve fourth order temporal accuracy, which is different from traditional R-K methods for \eqref{law} based on first order solvers. It is a two-stage approach, while the R-K approach usually needs four stages to attain the same accuracy. Since data reconstruction is needed only at the initial and middle stages, this new approach removes two stages of data reconstruction from the standard R-K approach. Together with the numerical flux evaluation, at least $20\%$ of the computational cost can be saved for 1-D problems, and about $30\%$ for 2-D problems, with the current method.
(ii) The governing equation \eqref{law} is explicitly used in the L-W solver so that all useful information can be included in the solution approximation. More importantly, we stick to the utilization of the time derivatives of the solution, $\partial \mathbf{u}/\partial t$, to advance the solution, which appears to be more effective in capturing discontinuities sharply. See Remark \ref{rem-grp} in Section \ref{sec-law} for the explanation.
(iii) This approach can be applied in other frameworks, such as DG or finite difference methods. Moreover, it is not restricted to hyperbolic conservation laws \eqref{law}: other time-dependent problems can be solved by the above approach as well, once there is a corresponding Cauchy-Kovalevskaya theorem.
Since this method is based on L-W type solvers, the generalized Riemann problem (GRP) solver \cite{Ben-Artzi-84, Ben-Artzi-01, Li-1, Li-2} is used as the building block. The GRP solver is an extension of the first order Godunov solver to second order time accuracy based on MUSCL-type initial data \cite{Leer}. Its simplified acoustic version reduces to the so-called ADER solver \cite{Toro-ADER}, which can of course be used as the building block too. Another alternative choice is the gas kinetic solver (GKS) \cite{Xu-1, Xu-2}. Numerical experiments with the GRP solver demonstrate its suitability for the design of such a fourth order method.
This paper is organized as follows. After the introduction, a two-stage temporal discretization is presented in Section 2. In Section 3, this approach is applied to hyperbolic conservation laws in 1-D and 2-D, respectively.
In Section 4, numerical experiments for scalar conservation laws and the compressible Euler equations are carried out to validate the performance of the proposed approach. The last section presents discussions and some prospects of this approach.
\section{A high order temporal discretization for time-dependent problems}
In order to advance the solution of \eqref{law} with a fourth order temporal accuracy for the L-W type solvers, consider the following time-dependent equations, \begin{equation} \dfr{\partial \mathbf{u}}{\partial t}=\mathcal{L}(\mathbf{u}), \label{ODE} \end{equation} subject to the initial data at $t=t_n$, \begin{equation}
\mathbf{u}(t)|_{t=t_n}=\mathbf{u}^n, \end{equation} where $\mathcal{L}$ is an operator involving spatial derivatives. It is evident that the initial time variation of the solution at $t=t_n$ can be obtained using the chain rule and the Cauchy-Kovalevskaya method, \begin{equation} \dfr{\partial}{\partial t}\mathbf{u}(t_n) =\mathcal{L}(\mathbf{u}^n), \ \ \ \ \dfr{\partial^2}{\partial t^2}\mathbf{u}(t_n) =\dfr{\partial}{\partial t}\mathcal{L}(\mathbf{u}^n) = \dfr{\partial}{\partial \mathbf{u}}\mathcal{L}(\mathbf{u}^n)\mathcal{L}(\mathbf{u}^n). \end{equation} Let us consider a high order accurate approximation to $\mathbf{u}^{n+1}:= \mathbf{u}(t_n+\Delta t)$. We write \eqref{ODE} in integral form as \begin{equation} \mathbf{u}^{n+1} =\mathbf{u}^n +\int_{t_n}^{t_n+\Delta t} \mathcal{L}(\mathbf{u}(t))dt. \label{ODE-A} \end{equation} Introduce an intermediate value at time $t=t_n+ A\Delta t$ with a parameter $A$, accurate to third order, \begin{equation} \mathbf{u}^* = \mathbf{u}^n +A\Delta t \mathcal{L}(\mathbf{u}^n) +\dfr 12 A^2 \Delta t^2 \dfr{\partial}{\partial t}\mathcal{L}(\mathbf{u}^n), \label{appr-1} \end{equation} which subsequently determines the solution at the middle stage, \begin{equation} \dfr{\partial \mathbf{u}^*}{\partial t} =\mathcal{L}(\mathbf{u}^*), \ \ \ \ \dfr{\partial}{\partial t}\mathcal{L}(\mathbf{u}^*) =\dfr{\partial}{\partial \mathbf{u}}\mathcal{L}(\mathbf{u}^*) \mathcal{L}(\mathbf{u}^*). \label{appr-2} \end{equation} Set \begin{equation} \mathbf{u}^{n+1}=\mathbf{u}^n + \Delta t(B_0 \mathcal{L}(\mathbf{u}^n) + B_1 \mathcal{L}(\mathbf{u}^*)) +\frac 12 \Delta t^2 \left(C_0 \frac{\partial}{\partial t} \mathcal{L}(\mathbf{u}^n) + C_1 \frac{\partial}{\partial t}\mathcal{L}(\mathbf{u}^*)\right), \label{appr-3} \end{equation} where $B_0$, $B_1$, $C_0$ and $C_1$, together with $A$, are determined according to the accuracy requirement. We formulate this approximation in the form of a proposition.
\begin{prop} If the following parameters are taken, \begin{equation} A=\frac 12, \ \ \ B_0= 1, \ \ \ B_1=0, \ \ \ C_0 =\frac 13, \ \ \ \ C_1=\frac 23, \label{coeff-ode} \end{equation} the iterations \eqref{appr-1}--\eqref{appr-3} provide a fourth order accurate approximation to the solution $\mathbf{u}(t)$ at $t=t_n+\Delta t$. These parameters are uniquely determined by the fourth order accuracy requirement. \end{prop}
\begin{proof} The proof uses the standard Taylor series expansion, as usually done for the R-K approach. For notational simplicity, we denote \begin{equation} \mathcal{G}(\mathbf{u}) := \mathcal{L}_\mathbf{u}(\mathbf{u})\mathcal{L}(\mathbf{u}), \ \ \ \ \ \ \ \mathcal{L}_\mathbf{u}(\mathbf{u}):=\dfr{\partial}{\partial \mathbf{u}}\mathcal{L}(\mathbf{u}), \end{equation} and similarly for $\mathcal{L}_{\mathbf{u}\mathbf{u}}$, $\mathcal{L}_{\mathbf{u}\mathbf{u}\mathbf{u}}$, $\mathcal{G}_\mathbf{u}$, and $\mathcal{G}_{\mathbf{u}\mathbf{u}}$. Then we have the following expansions around $\mathbf{u}^n$, \begin{equation} \mathcal{L}(\mathbf{u}^*) =\mathcal{L}(\mathbf{u}^n) + \mathcal{L}_\mathbf{u} (\mathbf{u}^*-\mathbf{u}^n) +\frac{\mathcal{L}_{\mathbf{u}\mathbf{u}}}2 (\mathbf{u}^*-\mathbf{u}^n)^2 + \frac{\mathcal{L}_{\mathbf{u}\mathbf{u}\mathbf{u}}}6 (\mathbf{u}^*-\mathbf{u}^n)^3+\mathcal{O}(\mathbf{u}^*-\mathbf{u}^n)^4, \end{equation} and \begin{equation} \mathcal{G}(\mathbf{u}^*) =\mathcal{G}(\mathbf{u}^n) + \mathcal{G}_\mathbf{u} (\mathbf{u}^*-\mathbf{u}^n) +\frac{\mathcal{G}_{\mathbf{u}\mathbf{u}}}2 (\mathbf{u}^*-\mathbf{u}^n)^2 +\mathcal{O}(\mathbf{u}^*-\mathbf{u}^n)^3. \end{equation} Using \eqref{appr-1} and \eqref{appr-2}, as well as substituting the above two expansions into \eqref{appr-3}, we obtain \begin{equation} \begin{array}{rl} & \Delta t(B_0 \mathcal{L}(\mathbf{u}^n) + B_1 \mathcal{L}(\mathbf{u}^*)) +\frac 12 \Delta t^2 \left(C_0 \frac{\partial}{\partial t} \mathcal{L}(\mathbf{u}^n) + C_1 \frac{\partial}{\partial t}\mathcal{L}(\mathbf{u}^*)\right)\\[3mm] =& \Delta t(B_0+B_1)\mathcal{L}(\mathbf{u}^n)+ \Delta t^2 \left[AB_1+\frac 12(C_0+C_1)\right]\mathcal{L}_\mathbf{u}(\mathbf{u}^n)\mathcal{L}(\mathbf{u}^n)\\[3mm] &\displaystyle +\frac{\Delta t^3}6\left[3(A^2B_1 +AC_1)\right]\cdot [\mathcal{L}_{\mathbf{u}}^2(\mathbf{u}^n)\mathcal{L}(\mathbf{u}^n) +\mathcal{L}_{\mathbf{u}\mathbf{u}}(\mathbf{u}^n)\mathcal{L}^2(\mathbf{u}^n)]\\[3mm] &\displaystyle +\frac{\Delta t^4}{24} \left[6A^2 C_1 \mathcal{L}_\mathbf{u}^3(\mathbf{u}^n)\mathcal{L}(\mathbf{u}^n) + (12A^3 B_1+24A^2 C_1)\mathcal{L}_{\mathbf{u}\mathbf{u}}(\mathbf{u}^n)\mathcal{L}_\mathbf{u}(\mathbf{u}^n)\mathcal{L}^2(\mathbf{u}^n)\right. \\[3mm] &\displaystyle +\left.(4A^3 B_1+ 6A^2 C_1)\mathcal{L}_{\mathbf{u}\mathbf{u}\mathbf{u}}(\mathbf{u}^n)\mathcal{L}^3(\mathbf{u}^n)\right] +\mathcal{O}(\Delta t^5). \end{array} \label{t-1} \end{equation} Taking the Taylor series expansion directly for the time integration in \eqref{ODE-A} yields \begin{equation} \begin{array}{rl} &\displaystyle \int_{t_n}^{t_n+\Delta t} \mathcal{L}(\mathbf{u}(t))dt\\[3mm] &= \displaystyle \Delta t\mathcal{L}(\mathbf{u}^n) +\dfr{\Delta t^2}2 \mathcal{L}_\mathbf{u}(\mathbf{u}^n)\mathcal{L}(\mathbf{u}^n)+\dfr{\Delta t^3}6[\mathcal{L}_\mathbf{u}^2(\mathbf{u}^n) \mathcal{L}(\mathbf{u}^n) +\mathcal{L}_{\mathbf{u}\mathbf{u}}(\mathbf{u}^n) \mathcal{L}^2(\mathbf{u}^n)] \\[3mm] &+ \dfr{\Delta t^4}{24} [\mathcal{L}_\mathbf{u}^3(\mathbf{u}^n)\mathcal{L}(\mathbf{u}^n) +4 \mathcal{L}_{\mathbf{u}\mathbf{u}}(\mathbf{u}^n)\mathcal{L}_\mathbf{u}(\mathbf{u}^n)\mathcal{L}^2(\mathbf{u}^n) +\mathcal{L}_{\mathbf{u}\mathbf{u}\mathbf{u}}(\mathbf{u}^n)\mathcal{L}^3(\mathbf{u}^n)]+\mathcal{O}(\Delta t^5). 
\end{array} \label{t-2} \end{equation} The comparison of \eqref{t-1} and \eqref{t-2} gives \begin{equation} \begin{array}{c} B_0+B_1 =1, \ \ \ AB_1 +\frac 12(C_0+C_1) =\frac 12, \ \ \ \ A^2 B_1 +AC_1 =\frac 13, \\[2mm] A^2C_1=\frac 16, \ \ \ A^3 B_1 +2A^2C_1 = \frac 13, \ \ \ 2A^3 B_1 +3A^2 C_1 =\frac 12. \end{array} \end{equation} The above equations uniquely determine $A$, $B_0$, $B_1$, $C_0$ and $C_1$ with the values in \eqref{coeff-ode}. \end{proof}
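The unique solvability of the order conditions in the proof can also be confirmed symbolically. The following minimal Python sketch is our own check, not part of the original derivation (it assumes the sympy package is available); it solves five of the six conditions and verifies that the remaining one is satisfied by the resulting coefficients.
\begin{verbatim}
# Symbolic check of the order conditions; assumes sympy is installed.
from sympy import symbols, Rational, solve

A, B0, B1, C0, C1 = symbols('A B0 B1 C0 C1')
eqs = [B0 + B1 - 1,
       A*B1 + Rational(1, 2)*(C0 + C1) - Rational(1, 2),
       A**2*B1 + A*C1 - Rational(1, 3),
       A**2*C1 - Rational(1, 6),
       A**3*B1 + 2*A**2*C1 - Rational(1, 3),
       2*A**3*B1 + 3*A**2*C1 - Rational(1, 2)]

sols = solve(eqs[:5], [A, B0, B1, C0, C1], dict=True)
sols = [s for s in sols if eqs[5].subs(s) == 0]   # enforce the last condition
print(sols)  # expected: [{A: 1/2, B0: 1, B1: 0, C0: 1/3, C1: 2/3}]
\end{verbatim}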
Thus we present the algorithm for \eqref{ODE} explicitly.
\noindent{\bf Algorithm-general.} \begin{enumerate} \item[\bf Step 1.] Define intermediate values \begin{equation} \begin{array}{l} \mathbf{u}^* = \mathbf{u}^n +\frac 12\Delta t \mathcal{L}(\mathbf{u}^n) +\dfr 18 \Delta t^2 \dfr{\partial}{\partial t}\mathcal{L}(\mathbf{u}^n),\\ \dfr{\partial}{\partial t}\mathcal{L}(\mathbf{u}^*) =\dfr{\partial}{\partial \mathbf{u}}\mathcal{L}(\mathbf{u}^*) \mathcal{L}(\mathbf{u}^*). \end{array} \label{a-1} \end{equation}
\item[\bf Step 2.] Advance the solution using the formula \begin{equation} \mathbf{u}^{n+1} =\mathbf{u}^n + \Delta t \mathcal{L}(\mathbf{u}^n) + \dfr 16 \Delta t^2 \left(\dfr{\partial }{\partial t}\mathcal{L}(\mathbf{u}^n)+2\dfr{\partial }{\partial t}\mathcal{L}(\mathbf{u}^*)\right). \end{equation} \label{a-2} \end{enumerate}
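As a quick sanity check of the temporal order, Algorithm-general can be run on a scalar model equation for which $\mathcal{L}$ and $\frac{\partial}{\partial t}\mathcal{L}=\mathcal{L}_{\mathbf{u}}\mathcal{L}$ are available in closed form. The following minimal Python sketch is ours and not part of the original algorithm description; the model ODE and the step counts are illustrative choices only.
\begin{verbatim}
# Model problem: u' = L(u) = u^2, u(0) = 1, exact solution u(t) = 1/(1 - t).
# dL/dt is evaluated exactly as L_u(u) L(u) = 2u * u^2.
import numpy as np

L  = lambda u: u**2
Lt = lambda u: 2.0 * u**3

def two_stage_step(u, dt):
    # Step 1 of Algorithm-general
    ustar = u + 0.5 * dt * L(u) + 0.125 * dt**2 * Lt(u)
    # Step 2 of Algorithm-general
    return u + dt * L(u) + dt**2 / 6.0 * (Lt(u) + 2.0 * Lt(ustar))

def solve(T, n):
    u, dt = 1.0, T / n
    for _ in range(n):
        u = two_stage_step(u, dt)
    return u

T, exact = 0.5, 2.0
errs = [abs(solve(T, n) - exact) for n in (20, 40, 80, 160)]
print([np.log2(errs[i] / errs[i + 1]) for i in range(3)])  # approaches 4
\end{verbatim}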
\begin{rem} Note that $A=\frac 12$. The time $t_n+A\Delta t$ is in the middle of the interval $[t_n, t_n+\Delta t]$ and $\mathbf{u}^*$ is the mid-point value of $\mathbf{u}$. Therefore, the iterations \eqref{appr-1}--\eqref{appr-3} can actually be regarded as a Hermite-type approximation to \eqref{ODE-A}. In contrast, the classical R-K iteration method is written as \begin{equation} \begin{array}{l} \displaystyle \mathbf{u}^{(i)} = \sum_{k=0}^{i-1}\left( \alpha_{ki} \mathbf{u}^{(k)} + \Delta t\beta_{ki}\mathcal{L}(\mathbf{u}^{(k)})\right), \ \ \ \ i = 1, \dots ,m, \\ \mathbf{u}^{(0)} =\mathbf{u}^n, \ \ \ \ \ \mathbf{u}^{(m)} =\mathbf{u}^{n+1}, \end{array} \label{R-K} \end{equation} where $\alpha_{ki}\geq 0$, $\beta_{ki}>0$ are the integration weights, satisfying the compatibility condition $\displaystyle \sum_{k=0}^{i-1} \alpha_{ki} =1$.
Since the approximation \eqref{R-K} does not involve the derivative of $\mathcal{L}$, it can be regarded as a Simpson-type approximation to \eqref{ODE-A}.
\end{rem}
\section{Fourth order accurate temporal discretization for hyperbolic conservation laws}\label{sec-law}
In this section, we will extend the approach in the last section to hyperbolic conservation laws \eqref{law} to design a time-space fourth order accurate method. This extension is based on L-W type solvers with the instantaneous solution and its temporal derivative through the governing equation \eqref{law}.
We will first discuss the one-dimensional case, and then go to the two-dimensional case.
\subsection{One-dimensional hyperbolic conservation laws} Let us start with one-dimensional hyperbolic conservation laws \begin{equation} \mathbf{u}_t +\mathbf{f}(\mathbf{u})_x=0, \label{1d-law} \end{equation} where $\mathbf{u}$ is, as in \eqref{law}, a conserved variable and $\mathbf{f}(\mathbf{u})$ is the associated flux function. We integrate it over the cell $(x_{j-\frac 12},x_{j+\frac 12})$ to obtain the semi-discrete form \begin{equation} \dfr{d}{dt} \bar \mathbf{u}_j(t) =\mathcal{L}_j(\mathbf{u}): =-\dfr{1}{\Delta x_j} [\mathbf{f}_{j+\frac 12}-\mathbf{f}_{j-\frac 12}], \label{semi} \end{equation} where $\mathbf{f}_{j+\frac 12}$ is the numerical flux through the cell boundary $x=x_{j+\frac 12}$ at time $t$, $\Delta x_j=x_{j+\frac 12}-x_{j-\frac 12}$. We construct initial data for \eqref{semi} through a fifth order WENO or HWENO interpolation technique \cite{Jiang, Qiu}, \begin{equation} \mathbf{u}(x,t_n) =\mathbf{u}^n(x). \label{data-weno} \end{equation} Based on this initial condition, with possible discontinuities at the cell boundaries, the instantaneous values can be obtained, \begin{equation} \mathbf{u}_{j+\frac 12}^n : = \lim_{t\rightarrow t_n+0} \mathbf{u}(x_{j+\frac 12},t), \ \ \ \ \left(\dfr{\partial\mathbf{u}}{\partial t}\right)_{j+\frac 12}^n : = \lim_{t\rightarrow t_n+0} \dfr{\partial}{\partial t}\mathbf{u}(x_{j+\frac 12},t). \label{1d-grp} \end{equation} These values can be computed analytically by the generalized Riemann problem (GRP) solver \cite{Li-1,Li-2}, or approximately by ADER solvers \cite{Toro-ADER}. Intrinsically, the temporal derivative $(\partial\mathbf{u}/\partial t)_{j+\frac 12}^n$ is replaced by the spatial derivative at time $t=t_n$ using the governing equation \eqref{1d-law}, \begin{equation}
\left(\dfr{\partial\mathbf{u}}{\partial t}\right)_{j+\frac 12}^n =-\lim_{t\rightarrow t_n+0} \dfr{\partial}{\partial x}\mathbf{f}(\mathbf{u}(x_{j+\frac 12},t)), \end{equation} where the spatial derivative takes the wave propagation into account. This approach is called the L-W approach in the numerical context, or the Cauchy-Kovalevskaya approach in the context of PDE theory. In the numerical experiments in Section 4, we use the GRP solver developed in \cite{Li-1,Li-2} and construct the corresponding algorithm for \eqref{1d-law}. This two-stage approach for \eqref{1d-law} is proposed as follows.
\noindent{\bf Algorithm 1-D.} \begin{enumerate}
\item[\bf Step 1.] With the initial data $\mathbf{u}^n(x)$ in \eqref{data-weno} obtained by the HWENO interpolation, we compute the instantaneous values $\mathbf{u}_{j+\frac 12}^n$ and $(\partial\mathbf{u}/\partial t)_{j+\frac 12}^n$ analytically or approximately using an L-W type solver.
\item[\bf Step 2.] Construct the intermediate values $\mathbf{u}^*(x)$ at $t_*=t_n+\frac 12\Delta t$ using the formulae, \begin{equation} \begin{array}{l} \bar\mathbf{u}_j^* =\bar\mathbf{u}_j^n -\dfr{\Delta t}{2\Delta x_j}[\mathbf{f}_{j+\frac 12}^*-\mathbf{f}_{j-\frac 12}^*], \\[3mm] \displaystyle \mathbf{f}_{j+\frac 12}^* = \mathbf{f}(\mathbf{u}(x_{j+\frac 12},t_n+\frac 14\Delta t)), \\[3mm]
\displaystyle \mathbf{u}(x_{j+\frac 12},t_n+\frac 14\Delta t):= \mathbf{u}_{j+\frac 12}^n +\frac{\Delta t}{4}\left(\frac{\partial\mathbf{u}}{\partial t}\right)_{j+\frac 12}^n. \end{array} \label{1d-al} \end{equation} Then we use the HWENO interpolation again to construct $\mathbf{u}^*(x)$ and find the values $\mathbf{u}_{j+\frac 12}^*$ and $(\partial\mathbf{u}/\partial t)_{j+\frac 12}^*$ at the stage $t=t_n+\frac{\Delta t}2$, as done in Step 1.
\item[\bf Step 3.] Advance the solution to the next time level $t_n+\Delta t$, \begin{equation} \begin{array}{l} \bar\mathbf{u}^{n+1}_j =\bar \mathbf{u}_j^n -\dfr{\Delta t}{\Delta x_j} [\mathbf{f}_{j+\frac 12}^{4th} -\mathbf{f}_{j-\frac 12}^{4th}],\\[3mm]
\displaystyle \mathbf{f}_{j+\frac 12}^{4th}=\mathbf{f}(\mathbf{u}_{j+\frac 12}^n) +\dfr{\Delta t}{6} \left[\left.\dfr{\partial \mathbf{f}(\mathbf{u})}{\partial t} \right|_{(x_{j+\frac 12}, t_n) } +2\left. \dfr{\partial \mathbf{f}(\mathbf{u})}{\partial t}\right|_{ (x_{j+\frac 12}, t_*)}\right]. \\[3mm]
\end{array} \label{1d-al2} \end{equation}
\end{enumerate}
This is exactly a two-stage method: One stage at $t=t_n$ and the other at $t=t_n+\frac{\Delta t}2$. We only need to reconstruct data and use the L-W solver twice, at $t=t_n$ and $t=t_n+\frac 12 \Delta t$, respectively. The procedure to reconstruct the intermediate state $\mathbf{u}^*(x)$ and get the GRP solution at time $t=t_n+\frac{\Delta t}2$ is the same as that at time $t=t_n$.
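To illustrate only the two-stage structure of Algorithm 1-D, the Python sketch below is our own simplified rendition for linear advection $u_t+au_x=0$ with a central-slope piecewise-linear reconstruction in place of HWENO; for linear advection the L-W solver reduces to upwinding the reconstructed interface value and slope. The spatial accuracy is therefore deliberately lower than in the paper, and all names are illustrative.
\begin{verbatim}
# Two-stage update of Algorithm 1-D for u_t + a u_x = 0 (a > 0), periodic.
import numpy as np

def lw_solver(ubar, dx, a):
    # piecewise-linear reconstruction (central slopes, no limiter) and the
    # exact L-W solver for linear advection: upwind interface value and u_t
    slope = (np.roll(ubar, -1) - np.roll(ubar, 1)) / (2.0 * dx)
    return ubar + 0.5 * dx * slope, -a * slope

def step(ubar, dx, dt, a):
    u_n, ut_n = lw_solver(ubar, dx, a)              # stage at t_n
    f_star = a * (u_n + 0.25 * dt * ut_n)           # flux at t_n + dt/4
    ustar = ubar - dt / (2.0 * dx) * (f_star - np.roll(f_star, 1))
    u_s, ut_s = lw_solver(ustar, dx, a)             # stage at t_n + dt/2
    f4 = a * u_n + dt / 6.0 * (a * ut_n + 2.0 * a * ut_s)
    return ubar - dt / dx * (f4 - np.roll(f4, 1))

m, a = 200, 1.0
dx = 2.0 / m
x = (np.arange(m) + 0.5) * dx
u = np.sin(np.pi * x)
dt = 0.5 * dx / a
for _ in range(int(round(2.0 / (a * dt)))):         # one period on [0, 2]
    u = step(u, dx, dt, a)
# the error here reflects the simplified second order reconstruction
print(np.max(np.abs(u - np.sin(np.pi * x))))
\end{verbatim}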
\begin{rem} \label{rem-grp} The utilization of the time derivative $(\partial\mathbf{u}/\partial t)_{j+\frac 12}^n$ is one of the central points in our algorithm. Indeed, the exact finite volume form of \eqref{1d-law} is \begin{equation} \bar\mathbf{u}_j^{n+1} =\bar\mathbf{u}_j^n-\dfr{\Delta t}{\Delta x_j}\left[\dfr{1}{\Delta t}\int_{t_n}^{t_{n+1} }\mathbf{f}(\mathbf{u}(x_{j+\frac 12},t))dt-\dfr{1}{\Delta t} \int_{t_n}^{t_{n+1} }\mathbf{f}(\mathbf{u}(x_{j-\frac 12},t))dt\right]. \end{equation} It is crucial to approximate the flux at $x=x_{j+\frac 12}$ in the sense that \begin{equation} \mbox{Numerical flux at } x_{j+\frac 12} - \dfr{1}{\Delta t}\int_{t_n}^{t_{n+1} }\mathbf{f}(\mathbf{u}(x_{j+\frac 12},t))dt =\mathcal{O}(\Delta t^r), \ \ r>1. \end{equation} Many algorithms approximate the flux with an error measured by $\Delta\mathbf{u}$, the jump across the interface, \begin{equation}
\mbox{Numerical flux at } x_{j+\frac 12} - \dfr{1}{\Delta t}\int_{t_n}^{t_{n+1} }\mathbf{f}(\mathbf{u}(x_{j+\frac 12},t))dt =\mathcal{O}(\|\Delta \mathbf{u}\|^r), \end{equation}
which is not proportional to the mesh size $\Delta x_j$ or the time step length $\Delta t$ when the jump is large, e.g., across strong shocks,
\begin{equation}
\|\Delta \mathbf{u}\| \not\approx\mathcal{O}(\Delta x_j).
\end{equation}
It turns out that there is a large discrepancy when strong discontinuities are present in the solution. In order to overcome this difficulty, we have to solve the associated generalized Riemann problem (GRP) analytically and derive the value $(\partial\mathbf{u}/\partial t)_{j+\frac 12}^n$ and subsequently $(\partial\mathbf{u}/\partial t)_{j+\frac 12}^*$. \end{rem}
\begin{rem} Without using the data reconstruction for $\mathbf{u}^*(x)$, the above procedure could be regarded as a Hermite-type approximation to the time-averaged flux in the sense
\begin{equation} \dfr{1}{\Delta t}\int_{t_n}^{t_{n+1}} \mathbf{f}(\mathbf{u}(x_{j+\frac 12},t)) dt =\sum_{k=1}^2 \left[C_k \mathbf{f}(\mathbf{u}(x_{j+\frac 12},t_n+\alpha_k\Delta t)) +\Delta t D_k \dfr{\partial \mathbf{f}}{\partial t} (\mathbf{u}(x_{j+\frac 12},t_n+\alpha_k\Delta t))\right], \label{qua} \end{equation} where $t_n+\alpha_k\Delta t$ are the quadrature nodes, $C_k$, $D_k$ are the quadrature weights. For linear equations, the formula is exact. We can further verify through numerical examples that this two-stage method indeed provides a temporal discretization with fourth order accuracy.
\end{rem}
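Reading off the nodes and weights actually used in \eqref{1d-al2}, namely $\alpha_1=0$, $\alpha_2=\frac 12$, $C_1=1$, $C_2=0$, $D_1=\frac 16$ and $D_2=\frac 13$, the quadrature \eqref{qua} integrates polynomials in $t$ of degree up to three exactly, which is consistent with fourth order temporal accuracy. The short symbolic check below is ours (it assumes the sympy package).
\begin{verbatim}
# Exactness check of the two-node Hermite-type quadrature on [0, 1].
from sympy import symbols, Rational, integrate, diff, S, simplify

s = symbols('s')
nodes = [S(0), Rational(1, 2)]
C = [S(1), S(0)]
D = [Rational(1, 6), Rational(1, 3)]

for g in [S(1), s, s**2, s**3, s**4]:
    exact = integrate(g, (s, 0, 1))
    approx = sum(C[k]*g.subs(s, nodes[k]) + D[k]*diff(g, s).subs(s, nodes[k])
                 for k in range(2))
    print(g, simplify(exact - approx))  # zero up to s**3, nonzero for s**4
\end{verbatim}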
\subsection{Multidimensional hyperbolic conservation laws.}
For multidimensional cases of \eqref{law}, we still use the finite volume framework to develop a two-stage temporal-spatial fourth order accurate method. For simplicity of presentation, we only consider the two-dimensional (2-D) case with rectangular meshes in the present paper. All other cases can be treated analogously, e.g., over unstructured meshes.
We write the 2-D case of \eqref{law} as \begin{equation} \dfr{\partial \mathbf{u}}{\partial t} +\dfr{\partial \mathbf{f}(\mathbf{u})}{\partial x} +\dfr{\partial \mathbf{g}(\mathbf{u})}{\partial y}=0. \label{2D-law} \end{equation} The computational domain $\Omega$ is divided into rectangular meshes $K_{ij}$, $\Omega=\cup_{i\in I, j\in J} K_{ij}$, $K_{ij}=(x_{i-\frac 12},x_{i+\frac 12})\times (y_{j-\frac 12},y_{j+\frac 12})$ with $(x_i,y_j)$ as the center. Then \eqref{2D-law} reads over $K_{ij}$ \begin{equation} \dfr{d\bar\mathbf{u}_{i,j}(t)}{dt} =\mathcal{L}_{i,j}(\mathbf{u}): =-\dfr{1}{\Delta x_i}[\mathbf{f}_{i+\frac 12, j} -\mathbf{f}_{i-\frac 12,j}] -\dfr{1}{\Delta y_j} [ \mathbf{g}_{i,j+\frac 12}-\mathbf{g}_{i,j-\frac 12}], \end{equation} where $\Delta x_i=x_{i+\frac 12}-x_{i-\frac 12}$, $\Delta y_j=y_{j+\frac 12}-y_{j-\frac 12}$, and as convention, \begin{equation} \begin{array}{l} \displaystyle \bar\mathbf{u}_{i,j} =\dfr{1}{\Delta x_i\Delta y_j}\int_{K_{ij}} \mathbf{u}(x,y,t)dxdy, \\[3mm] \displaystyle \mathbf{f}_{i+\frac 12,j} (t)=\dfr{1}{\Delta y_j}\int_{y_{j-\frac 12}}^{y_{j+\frac 12}} \mathbf{f}(\mathbf{u}(x_{i+\frac 12},y,t))dy, \\[3mm]
\mathbf{g}_{i,j+\frac 12}(t) =\dfr{1}{\Delta x_i} \int_{x_{i-\frac 12}}^{x_{i+\frac 12}}\mathbf{g}(\mathbf{u}(x,y_{j+\frac 12},t))dx. \end{array} \end{equation} We use the Gauss quadrature to evaluate the above integrals to obtain numerical fluxes in order to guarantee the accuracy in space. For example, we evaluate $\mathbf{f}_{i+\frac 12,j}(t)$ for any time $t$, \begin{equation} \dfr{1}{\Delta y_j}\int_{y_{j-\frac 12}}^{y_{j+\frac 12}} \mathbf{f}(\mathbf{u}(x_{i+\frac 12},y,t))dy \approx \sum_{\ell=0}^k \omega_\ell \mathbf{f}(\mathbf{u}(x_{i+\frac 12},y_\ell,t)), \end{equation}
where $y_\ell\in(y_{j-\frac 12},y_{j+\frac 12})$, $\ell=0,1,\cdots, k$, are the Gauss points and $\omega_\ell$ are the corresponding weights. At each Gauss point $(x_{i+\frac 12},y_\ell,t_n)$, we solve the quasi 1-D generalized Riemann problem (GRP) for \eqref{2D-law}, \begin{equation} \begin{array}{l} \mathbf{u}(x,y_\ell,t_n) =\left\{ \begin{array}{ll} \mathbf{u}_{i,j}^n(x,y_\ell), \ \ \ &x<x_{i+\frac 12},\\ \mathbf{u}_{i+1,j}^n(x,y_\ell), \ \ \ &x>x_{i+\frac 12}.\\ \end{array} \right. \end{array} \label{2d-data} \end{equation} In analogy with the 1-D case in \eqref{1d-grp}, we obtain \begin{equation} \mathbf{u}_{i+\frac 12,j_\ell}^n: =\lim_{t\rightarrow t_n+0} \mathbf{u}(x_{i+\frac 12},y_\ell,t), \ \ \ \left(\frac{\partial \mathbf{u}}{\partial t}\right)_{i+\frac 12,j_\ell}^n =\lim_{t\rightarrow t_n+0} \frac{\partial \mathbf{u}}{\partial t}(x_{i+\frac 12},y_\ell,t), \end{equation} and similarly for the others. The GRP solver for \eqref{2D-law} and \eqref{2d-data} is given in Appendix \ref{app-B}. Thus we propose the following two-stage algorithm for the 2-D hyperbolic conservation laws \eqref{2D-law}.
\noindent{\bf Algorithm 2-D.} \begin{enumerate}
\item[\bf Step 1.] With the initial data $\mathbf{u}^n(x,y)$ obtained by the HWENO interpolation, we compute the instantaneous values $\mathbf{u}_{i_\ell, j+\frac 12}^n$, $\mathbf{u}_{i+\frac 12, j_\ell}^n$, $(\partial\mathbf{u}/\partial t)_{i_\ell,j+\frac 12}^n$ and $(\partial\mathbf{u}/\partial t)_{i+\frac 12,j_\ell}^n$ at every Gauss point.
\item[\bf Step 2.] Construct the intermediate values $\mathbf{u}^*(x,y)$ at $t_*=t_n+\frac 12\Delta t$ using the formulae, \begin{equation} \begin{array}{l} \bar\mathbf{u}_{i,j}^* =\bar\mathbf{u}_{i,j}^n -\dfr{\Delta t}{2\Delta x_i}[\mathbf{f}_{i+\frac 12,j}^*-\mathbf{f}_{i-\frac 12,j}^*]-\dfr{\Delta t}{2\Delta y_j}[\mathbf{g}_{i,j+\frac 12}^*-\mathbf{g}_{i,j-\frac 12}^*], \\[3mm] \displaystyle \mathbf{f}_{i+\frac 12,j}^* = \sum_{\ell=0}^k\omega_\ell \mathbf{f}(\mathbf{u}(x_{i+\frac 12},y_\ell, t_n+\frac 14\Delta t)), \ \ \ \mathbf{g}_{i,j+\frac 12}^* = \sum_{\ell=0}^k\omega_\ell \mathbf{g}(\mathbf{u}(x_\ell,y_{j+\frac 12}, t_n+\frac 14\Delta t)),\\[3mm]
\displaystyle \mathbf{u}(x_{i+\frac 12},y_\ell, t_n+\frac 14\Delta t):= \mathbf{u}_{i+\frac 12,j_\ell}^n +\frac{\Delta t}{4}\left(\frac{\partial\mathbf{u}}{\partial t}\right)_{i+\frac 12,j_\ell}^n,\\[3mm]
\displaystyle \mathbf{u}(x_\ell,y_{j+\frac 12}, t_n+\frac 14\Delta t):= \mathbf{u}_{i_\ell,j+\frac 12}^n +\frac{\Delta t}{4}\left(\frac{\partial\mathbf{u}}{\partial t}\right)_{i_\ell,j+\frac 12}^n. \end{array} \label{2d-al} \end{equation} Then we use the HWENO interpolation to reconstruct $\mathbf{u}^*(x,y)$ and find the values $\mathbf{u}_{i_\ell, j+\frac 12}^*$, $\mathbf{u}_{i+\frac 12, j_\ell}^*$, $(\partial\mathbf{u}/\partial t)_{i_\ell,j+\frac 12}^*$ and $(\partial\mathbf{u}/\partial t)_{i+\frac 12,j_\ell}^*$ at $t=t_n+\frac{\Delta t}2$ as done in Step 1.
\item[\bf Step 3.] Advance the solution to the next time level $t_n+\Delta t$, \begin{equation} \begin{array}{l} \bar \mathbf{u}^{n+1}_{i,j} =\bar \mathbf{u}_{i,j} ^n -\dfr{\Delta t}{\Delta x_i} [\mathbf{f}_{i+\frac 12,j}^{4th} -\mathbf{f}_{i-\frac 12,j}^{4th}]-\dfr{\Delta t}{\Delta y_j} [\mathbf{g}_{i,j+\frac 12}^{4th} -\mathbf{g}_{i,j-\frac 12}^{4th}],\\[3mm] \displaystyle \mathbf{f}_{i+\frac 12,j}^{4th} =\sum_{\ell=0}^k \omega_\ell\mathbf{f}_{i+\frac 12,j_\ell}^{4th},\ \ \ \mathbf{g}_{i,j+\frac 12}^{4th} =\sum_{\ell=0}^k \omega_\ell\mathbf{g}_{i_\ell,j+\frac 12}^{4th}; \\[3mm] \displaystyle \mathbf{f}_{i+\frac 12,j_\ell}^{4th}=\mathbf{f}(\mathbf{u}_{i+\frac 12,j_\ell}^n) + \dfr{\Delta t}6\left[\dfr{\partial \mathbf{f}}{\partial t}(\mathbf{u}_{i+\frac 12,j_\ell}^n)+ 2\dfr{\partial \mathbf{f}}{\partial t}(\mathbf{u}_{i+\frac 12,j_\ell}^*)\right],\\[3mm] \displaystyle \mathbf{g}_{i_\ell,j+\frac 12}^{4th}=\mathbf{g}(\mathbf{u}_{i_\ell,j+\frac 12}^n) + \dfr{\Delta t}6\left[\dfr{\partial \mathbf{g}}{\partial t}(\mathbf{u}_{i_\ell,j+\frac 12}^n)+ 2\dfr{\partial \mathbf{g}}{\partial t}(\mathbf{u}_{i_\ell,j+\frac 12}^*)\right].\\[3mm] \end{array} \label{2d-al2} \end{equation}
\end{enumerate}
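In the algorithms above, the Gauss points $y_\ell$ (or $x_\ell$) and the weights $\omega_\ell$ are obtained by mapping the standard Gauss--Legendre rule to the interface segment and normalizing the weights so that they sum to one. The following small Python sketch of this mapping is ours; the function name is an illustrative choice.
\begin{verbatim}
# Map the k-point Gauss-Legendre rule from [-1, 1] to a segment (y_lo, y_hi);
# the returned weights sum to one, as required for the averaged interface flux.
import numpy as np

def interface_gauss(y_lo, y_hi, k):
    xi, w = np.polynomial.legendre.leggauss(k)   # exact for degree 2k - 1
    y = 0.5 * (y_hi - y_lo) * xi + 0.5 * (y_hi + y_lo)
    return y, 0.5 * w
\end{verbatim}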
\section{Numerical Examples}
In this section we provide several examples to validate the performance of the proposed approach. The examples include linear and nonlinear scalar conservation laws, the 1-D Euler equations and the 2-D Euler equations. The order of accuracy will be tested. All results are obtained with CFL number $0.5$, except for the large pressure ratio problem (Example 6), for which the CFL number is taken to be $0.2$. We use GRP4-HWENO5 to denote the algorithm with the GRP solver and the HWENO fifth order accurate spatial reconstruction, and RK4-WENO5 to denote the algorithm with the WENO fifth order accurate spatial reconstruction and the fourth order accurate temporal R-K iteration.
\subsection{Scalar conservation laws}
We use our approach to solve two examples of scalar conservation laws.
\noindent {\bf Example 1. } The first example is a linear advection equation with a periodic boundary condition,
\begin{equation}
u_t + u_x=0, \ \ \ \ \ \ \ \ u(x,0)=\sin(\pi x).
\end{equation}
The solution is computed over the space interval $[0, 2]$ and the results are displayed in Table 1, which shows that the expected accuracy is achieved.
\begin{table*}[!htbp]
\centering
\begin{tabular}{|l|r|r|r|r|r|r|r|r|}
\hline
$m$& \multicolumn{4}{|c|}{RK4-WENO5} & \multicolumn{4}{|c|}{GRP4-HWENO5} \\\cline{2-9}
&$ L_1 $ error&order&$ L_\infty $ error&order&$ L_1 $ error&order&$ L_\infty $ error &order\\\hline
40 & 4.47(-4) &4.91& 3.81(-4) & 4.73 & 1.67(-4) & 5.07& 1.60(-4) & 4.91\\ 80 & 1.40(-5) &5.00& 1.27(-5) & 4.91 & 5.28(-6) & 4.99& 5.10(-6) & 4.97\\ 160& 4.37(-7) &5.00& 3.97(-7) & 5.00 & 1.79(-7) & 4.88& 1.60(-7) & 4.99\\ 320& 1.37(-8) &5.00& 1.25(-8) & 4.99 & 7.19(-9) & 4.64& 5.68(-9) & 4.82\\ 640& 4.30(-10) &4.99& 3.77(-10) & 5.05 & 3.60(-10) & 4.32& 2.86(-10) & 4.31\\\hline
\end{tabular}
\caption[small]{The comparison of $L_1$, $L_\infty$ errors and convergence order for a convection equation. The schemes are RK4-WENO5 and GRP4-HWENO5 with $ m $ cells. The results are shown at time $ t=10 $.}
\label{tab:scalar_smooth} \end{table*}
\noindent{\bf Example 2.} The second example is the Burgers equation with a periodic boundary condition \cite{DG-2}, \begin{equation} u_t +\left(\frac{u^2}{2}\right)_x=0, \ \ \ u(x,0) = \frac 14 + \frac 12\sin(\pi x). \end{equation} The solution is smooth up to the time $t=2/\pi$ and then develops a shock that moves to interact with a rarefaction, as shown in Figure \ref{Fig-scalar_burgers}. The errors and convergence order are shown in Table 2.
\begin{figure}\label{Fig-scalar_burgers}
\end{figure}
\begin{table*}[!htbp]
\centering
\begin{tabular}{|l|r|r|r|r|r|r|r|r|}
\hline
$m$& \multicolumn{4}{|c|}{RK4-WENO5} & \multicolumn{4}{|c|}{GRP4-HWENO5} \\\cline{2-9}
&$ L_1 $ error&order&$ L_\infty $ error&order&$ L_1 $ error&order&$ L_\infty $ error &order\\\hline
40 & 3.60(-5) &3.79& 1.74(-4) & 2.94 & 9.07(-6) & 4.39& 4.32(-5) & 3.58\\ 80 & 1.84(-6) &4.29& 8.17(-6) & 4.41 & 5.53(-7) & 4.03& 3.01(-6) & 3.84\\ 160& 7.06(-8) &4.71& 4.31(-7) & 4.25 & 2.48(-8) & 4.48& 1.83(-7) & 4.05\\ 320& 1.84(-9) &5.26& 9.53(-9) & 5.50 & 8.91(-10) & 4.80& 2.97(-9) & 5.94\\ 640& 4.63(-11) &5.31& 2.94(-10) & 5.02 & 4.93(-11) & 4.17& 1.96(-10) & 3.91\\\hline
\end{tabular}
\caption[small]{The comparison of $L_1$, $L_\infty$ errors and convergence order for the Burgers equation. The schemes are RK4-WENO5 and GRP4-HWENO5 with $ m $ cells. The results are shown at time $ t=1/\pi $.}
\label{tab:scalar_burgers} \end{table*}
\subsection{One-dimensional Euler equations} We provide several examples for 1-D compressible Euler equations,
\begin{equation}
\mathbf{u} =(\rho, \rho v, \rho E)^\top, \ \ \ \ \ \mathbf{f}(\mathbf{u}) =(\rho v, \rho v^2 +p, v(\rho E+p))^\top,
\label{Euler}
\end{equation}
where $\rho$ is the density, $v$ is the velocity, $p$ is the pressure and $E= v^2/2+ e$ is the total energy, $e=\frac{p}{(\gamma-1)\rho}$ is the internal energy for polytropic gases. We test several standard examples to validate the proposed scheme.
\noindent{\bf Example 3. Smooth problem.} In order to verify the numerical accuracy of the present fourth order accurate scheme, we check the numerical results for a smooth problem whose initial data is \begin{equation}\label{eq:smooth}
\rho(x,0) = 1+0.2\sin(x), \ \ \ v(x,0)=1, \ \ p(x,0)=1. \end{equation} Periodic boundary conditions are used again. The results are shown in Table 3 and verify the expected order of accuracy.
\begin{table*}[!htbp]
\centering
\begin{tabular}{|l|r|r|r|r|r|r|r|r|}
\hline
$m$& \multicolumn{4}{|c|}{RK4-WENO5} & \multicolumn{4}{|c|}{GRP4-HWENO5} \\\cline{2-9}
&$ L_1 $ error&order&$ L_\infty $ error&order&$ L_1 $ error &order&$ L_\infty $ error &order\\\hline
40 & 8.92(-5) & 4.91 & 7.64(-5) & 4.72 & 3.33(-5) & 5.07& 3.13(-5) & 4.90 \\ 80 & 2.78(-6) & 5.00 & 2.53(-6) & 4.91 & 1.04(-6) & 5.01& 9.86(-7) & 4.97 \\ 160& 8.61(-8) & 5.01 & 7.83(-8) & 5.02 & 3.31(-8) & 4.97& 3.05(-8) & 5.01 \\ 320& 2.59(-9) & 5.06 & 2.23(-9) & 5.13 & 1.12(-9) & 4.88& 1.01(-9) & 4.91 \\ 640& 7.07(-11) & 5.19 & 6.15(-11) & 5.18 & 4.37(-11) & 4.68& 4.19(-11) & 4.60 \\\hline
\end{tabular}
\caption[small]{The comparison of $L_1$, $L_\infty$ errors and convergence order for the Euler equations in Example 3. The schemes are RK4-WENO5 and GRP4-HWENO5 with $ m $ cells. The results are shown at time $ t=10 $.}
\label{tab:Euler_smooth} \end{table*}
\noindent{\bf Example 4. Shock-turbulence interaction problem.} This example was proposed in \cite{Shu-Osher} to model shock-turbulence interactions. The initial data is taken as
\begin{equation} (\rho,v,p)(x,0)=\left\{ \begin{array}{ll} (3.857143,2.629369, 10.333333), \ \ \ \ &\mbox{for } x<-4, \\[3mm]
(1 + 0.2 \sin(5x), 0, 1),&\mbox{for } x\geq -4.
\end{array}
\right.
\end{equation}
The result is shown in Figure \ref{Fig-Shu-Osher} and is comparable with those obtained by other schemes.
\begin{figure}\label{Fig-Shu-Osher}
\end{figure}
\noindent{\bf Example 5. Woodward-Colella problem.} This is the Woodward-Colella interacting blast wave problem. The gas is ideal with $\gamma=1.4$ and initially at rest, and the density is equal to one everywhere. The pressure is $p=1000$ for $0\leq x <0.1$ and $p=100$ for $0.9<x\leq 1.0$, while it is only $p=0.01$ in $0.1<x<0.9$. Reflecting boundary conditions are applied at both ends. Both the GRP4-HWENO5 scheme and the RK4-WENO5 scheme give a well-resolved solution using $800$ grid cells. See Figure \ref{fig:Wood}. The reference solution is computed with $4000$ grid cells.
\begin{figure}\label{fig:Wood}
\end{figure}
\noindent{\bf Example 6. Large pressure ratio problem. } The large pressure ratio problem was first presented in \cite{Tang-Liu} to test the ability to capture extremely strong rarefaction waves and their influence on the shock location. The original paper \cite{Tang-Liu} shows that most MUSCL-type schemes have difficulties in resolving the correct wave structure, even with very fine meshes. In this problem the initial pressure and density ratios between the two neighboring states are very high. The initial data is $(\rho,v,p)=(10000,0,10000)$ for $0\leq x<0.3$ and $(\rho,v,p)=(1,0,1)$ for $0.3\leq x \leq 1.0$. The results with $400$ grid points, computed by GRP4-HWENO5 and RK4-WENO5 respectively, are shown in Figure \ref{fig:ratio400}. With $400$ grid points, the GRP4-HWENO5 scheme gives well-resolved results, while RK4-WENO5 fails to achieve this.
\begin{figure}\label{fig:ratio400}
\end{figure}
\subsection{2-D Examples.} We provide several two-dimensional examples to validate the proposed approach. The governing equations are the 2-D Euler equations, \begin{equation} \begin{array}{l} \mathbf{u}=(\rho,\rho u,\rho v,\rho E)^\top, \\[3mm] \mathbf{f}(\mathbf{u}) =(\rho u, \rho u^2+p,\rho uv, u(\rho E+p))^\top,\\[3mm] \mathbf{g}(\mathbf{u}) =(\rho v, \rho uv, \rho v^2+p, v(\rho E+p))^\top, \end{array} \label{2d-Euler} \end{equation} where $(u,v)$ is the velocity, $E=\frac{u^2+v^2}2+e$, $e=\frac{p}{(\gamma-1)\rho}$. The first example is about the isentropic vortex problem to test the accuracy. The other examples aim to verify the expected performance of this approach.
\noindent{\bf Example 7. Isentropic vortex problem.} In this first 2-D isentropic vortex example we want to verify the numerical accuracy of our scheme. Initially the mean flow is given with $ \rho = 1 $, $ p = 1 $, and $ (u,v)=(1,1) $. Then an isentropic vortex is put on this mean flow \begin{equation*} \begin{array}{c}
(\delta u, \delta v) = \frac{\epsilon}{2\pi}e^{0.5(1-r^2)}(-\bar{y}, \bar{x}), \\[10pt]
\delta T = - \frac{(\gamma - 1) \epsilon^2}{8\gamma\pi^2}e^{1-r^2}, ~ \delta S=0, \end{array} \end{equation*} where $ (\bar{x},\bar{y}) = (x-5,y-5)$, $ r^2 = \bar{x}^2+\bar{y}^2 $, and the vortex strength is set to $ \epsilon = 5.0 $. The computation is performed in the domain $ [0,10]\times[0,10] $, extended periodically in both directions. The expected fourth order of accuracy is achieved, as shown in Table 4.
\begin{table*}[!htbp]
\centering
\begin{tabular}{|l|r|r|r|r|r|r|r|r|}
\hline
$m$& \multicolumn{4}{|c|}{RK4-WENO5} & \multicolumn{4}{|c|}{GRP4-HWENO5} \\\cline{2-9}
&$ L_1 $ error&order&$ L_\infty $ error&order&$ L_1 $ error &order&$ L_\infty $ error &order\\\hline
40 & 1.79(-4) & 3.80 & 3.29(-3) & 3.76 & 7.70(-3) & 3.87 & 1.92(-3) & 3.71 \\ 80 & 6.92(-6) & 4.69 & 1.96(-4) & 4.07 & 3.47(-4) & 4.47 & 1.26(-4) & 3.93 \\ 160& 2.03(-7) & 5.09 & 4.95(-6) & 5.31 & 1.05(-5) & 5.05 & 3.26(-6) & 5.28 \\ 320& 7.83(-9) & 4.70 & 1.96(-7) & 4.66 & 3.47(-7) & 4.92 & 1.54(-7) & 4.40 \\ 640& - & - & - & - & 1.03(-8) & 5.08 & 5.33(-9) & 4.86 \\ \hline
\end{tabular}
\caption[small]{The comparison of $L_1$, $L_\infty$ errors and convergence order for the isentropic vortex problem of the Euler equations. The schemes are RK4-WENO5 and GRP4-HWENO5 with $ m \times m $ cells. The results are given at time $ t=2 $.}
\label{tab:Isen_vortex} \end{table*}
\noindent{\bf Example 8. Two-dimensional Riemann problems.} We provide three examples of two-dimensional Riemann problems, as shown in Figure \ref{fig:Riemann4in1_A4H5}. These examples are taken from \cite{Han} and involve the interactions of shocks, the interaction of shocks with vortex sheets and the interaction of vortices. Here $S$ represents a shock, $J$ a vortex sheet and $R$ a rarefaction wave. The computation is implemented over the domain $[0,1]\times [0,1]$. The output time is specified below case by case.
\noindent {\bf a. Interaction of shocks and vortex sheets } $S_{12}^{+}J_{23}^{-}J_{34}^{+}S_{41}^{-}$.
The initial data are \begin{equation} (\rho,u,v,p)(x,y,0) =\left\{ \begin{array}{ll} (1.4,8,20,8), \ & 0.5<x<1.0,0.5<y<1.0,\\ (-4.125,4.125,-4.125,-4.125), & 0< x<0.5,0.5<y<1.0,\\ (-4.125,-4.125,-4.125,4.125), & 0<x<0.5,0<y<0.5,\\ (1,116.5,116.5,116.5), & 0.5<x<1.0,0<y<0.5. \end{array} \right. \end{equation} The output time is $0.26$.
\noindent {\bf b. Interaction of shocks, rarefaction waves and vortex sheets} $J_{12}^{+}S_{23}^{-}J_{34}^{-}R_{41}^{+}$.
The initial data are \begin{equation} (\rho,u,v,p)(x,y,0) =\left\{ \begin{array}{ll} (1, 2, 1.0625, 0.5179), \ & 0.5<x<1.0,0.5<y<1.0,\\ (0, 0, 0, 0), & 0< x<0.5,0.5<y<1.0,\\ (0.3, -0.3, 0.2145, -0.4259), & 0<x<0.5,0<y<0.5,\\ (1, 1, 0.4, 0.4), & 0.5<x<1.0,0<y<0.5. \end{array} \right. \end{equation} The output time is $ t=0.055 $.
\noindent { \bf c. Interaction of rarefaction waves and vortex sheets} $R_{12}^{+}J_{23}^{+}J_{34}^{-}R_{41}^{-}$.
The initial data are \begin{equation} (\rho,u,v,p)(x,y,0) =\left\{ \begin{array}{ll} (1,0.5197,0.8,0.5197), \ & 0.5<x<1.0,0.5<y<1.0,\\ (0.1,-0.6259,0.1,0.1), & 0< x<0.5,0.5<y<1.0,\\ (0.1,0.1,0.1,-0.6259), & 0<x<0.5,0<y<0.5,\\ (1,0.4,0.4,0.4), & 0.5<x<1.0,0<y<0.5. \end{array} \right. \end{equation} The output time is $0.3$.
From the results we can see that this scheme can capture very small-scale vortices resulting from the interaction of vortex sheets. The resolution of the vortices is comparable to that of the adaptive moving mesh GRP method (cf. \cite{Han}).
\begin{figure}
\caption{The density contours of three 2-D Riemann problems computed with GRP4-HWENO5. a. $ [J_{12}^{+}S_{23}^{-}J_{34}^{-}R_{41}^{+}] $ with $ 200 \times 200 $ cells. b. $ [S_{12}^{+}J_{23}^{-}J_{34}^{+}S_{41}^{-}] $ with $ 300 \times 300 $ cells. c. $ [R_{12}^{+}J_{23}^{+}J_{34}^{-}R_{41}^{-}] $ with $ 500 \times 500 $ cells. d. Local enlargement of c.}
\label{fig:Riemann4in1_A4H5}
\end{figure}
\noindent{\bf Example 9. The double Mach reflection problem.} This is again a standard test problem to display the performance of high resolution schemes. The computational domain for this problem is $ [0,4] \times [0,1] $, and $ [0,3] \times [0,1]$ is shown. The reflecting wall lies at the bottom of the computational domain, starting from $ x=\frac{1}{6} $. Initially a right-moving Mach 10 shock is positioned at $ x=\frac{1}{6} ,y=0$ and makes a $ \frac{\pi}{3} $ angle with the $x$-axis. The results are shown in Figures \ref{fig:DM240_A4H5} and \ref{fig:DMGRP-H480} with excellent performance.
\begin{figure}
\caption{The contours of the density for the double Mach reflection problem. GRP4-HWENO5 is implemented with $960\times 240$ cells and the result is shown at $ t = 0.2 $. 30 contours are drawn.}
\label{fig:DM240_A4H5}
\end{figure} \begin{figure}
\caption{The contours of the density for the double Mach reflection problem. GRP2-HWENO5 is implemented with $1920\times 480$ cells and the result is shown at $ t = 0.2 $. 30 contours are drawn.}
\label{fig:DMGRP-H480}
\end{figure}
\noindent{\bf Example 10. The shock-vortex interaction problem.} This example describes the interaction between a stationary shock and a vortex; the computational domain is taken to be $ [0,2] \times [0,1] $. A stationary Mach 1.1 shock is positioned at $ {x = 0.5} $ and normal to the $ x $-axis. Its left state is $ (\rho, u, v, p) = (1, \sqrt{\gamma}M, 0, 1)$, where $ M $ is the Mach number of the shock. A small vortex is superposed on the flow to the left of the shock, centered at $ (x_c, y_c) = (0.25, 0.5)$. The vortex can be considered as a perturbation to the mean flow. The perturbations to the velocity ($u$, $v$), the temperature ($ T=\frac{p}{\rho} $) and the entropy ($ S=\frac{p}{\rho^{\gamma}} $) are: \begin{equation*}
\begin{array}{c}
(\delta u, \delta v) = \frac{\epsilon}{r_c} e^{\alpha(1-\tau^2)}(\bar{y}, -\bar{x}), \\[10pt]
\delta T = - \frac{(\gamma - 1) \epsilon^2}{4\alpha\gamma}e^{2\alpha(1-\tau^2)}, ~ \delta S=0.
\end{array} \end{equation*} In our case, we set $ \epsilon=0.3$ and $ \alpha=0.204 $. The computation is performed on a $ 400 \times 100 $ uniform mesh. The results (the pressure contours) are shown in Figure \ref{fig:SVI_A4H5}. \begin{figure}
\caption{The contours of the pressure for the shock vortex interaction problem. GRP4-HWENO5 with $400\times 200$ cells is implemented and $30$ contours are drawn.}
\label{fig:SVI_A4H5}
\end{figure}
\section{Discussions and Prospects}
This paper proposes a two-stage fourth order accurate temporal discretization for time-dependent problems based on L-W type flow solvers. The particular applications are given for hyperbolic conservation laws. Based on the HWENO interpolation technique \cite{Qiu}, a scheme with fifth order accuracy in space and fourth order accuracy in time is developed. A number of numerical examples are provided to validate the accuracy of the scheme and its computational performance for complex flow problems.
The current temporal discretization is different from the classical R-K approach. As discussed in Sections 2 and 3, the present approach is of the Hermite type while the R-K approach is of Simpson type. The L-W approach with coupled space and time evolution is the basis for the development of the current high order method. Our approach can be viewed as the extension of the L-W method from second order to even higher order accuracy, without using successive replacement of temporal derivatives by spatial derivatives in the one-stage method. This technique is particularly useful for nonlinear systems.
In this paper we only apply this approach to hyperbolic conservation laws in the finite volume framework over rectangular meshes. However, this approach can be applied to any time-dependent problem as long as L-W type solvers are available, over any type of computational mesh. In future studies, we will extend this approach to other formulations, e.g., the DG formulation \cite{Li-Wang}, and to other systems, e.g., the Navier-Stokes equations \cite{Du-Li}.
This work is just a starting point for the design of high order accurate methods, and many theoretical problems, such as numerical stability, remain for further study. Nevertheless, the numerical experiments clearly show that the current fourth order scheme can use a CFL number as large as that for the second order GRP scheme. Indeed, the CFL number can be taken even larger than $1/2$ if the waves computed are not very strong. The large time step, in comparison with other high order schemes, does not decrease the accuracy of the scheme. So this approach will be efficient for the simulation of turbulent flows with multi-scale structures by taking a large time step and coupling the spatial and temporal numerical flow evolution.
\appendix \
\section{The GRP solver} \label{app-B}
This appendix includes the GRP solver used in the coding process, just for completeness and the reader's convenience. The details can be found in \cite{Li-1} for the Euler equations and \cite{Li-2} for general hyperbolic systems, \begin{equation} \mathbf{u}_t+\mathbf{f}(\mathbf{u})_x=\mathbf{g}(\mathbf{u},x), \label{1d-balance} \end{equation} where $\mathbf{g}(\mathbf{u},x)$ is a source term. This paper focuses on the homogeneous case, $\mathbf{g}(\mathbf{u},x)\ev0$, in 1-D. As far as the 2-D case is concerned, the effect tangential to cell interfaces can be regarded as a source term, and therefore the 2-D GRP solver can be derived using an idea similar to that for the 1-D GRP solver.
\subsection{1-D GRP}
The 1-D GRP solver assumes that the initial data consist of two pieces of polynomials, \begin{equation} \mathbf{u}(x,0) =\left\{ \begin{array}{ll} \mathbf{u}_-(x), \ \ \ & x<0,\\[3mm] \mathbf{u}_+(x), & x>0, \end{array} \right. \label{GRP-data} \end{equation} where $\mathbf{u}_\pm(x)$ are two polynomials with limiting states \begin{equation} \begin{array}{c} \displaystyle\mathbf{u}_\ell =\lim_{x\rightarrow 0-0} \mathbf{u}_-(x), \ \ \ \ \ \mathbf{u}_r=\lim_{x\rightarrow 0+0} \mathbf{u}_+(x); \\[3mm] \displaystyle \mathbf{u}_\ell'=\lim_{x\rightarrow 0-0} \mathbf{u}_-'(x), \ \ \ \ \ \mathbf{u}_r'=\lim_{x\rightarrow 0+0} \mathbf{u}'_+(x). \end{array} \end{equation} In the present study, we use the HWENO method in \cite{Qiu} to construct the initial data and therefore $\mathbf{u}_\pm(x)$ are two pieces of polynomials of order five.
The GRP solver has two versions: (i) Acoustic version; (ii) Genuinely nonlinear version.
\subsubsection{Acoustic GRP solver} The acoustic GRP deals with weak discontinuities or smooth flows and assumes that \begin{equation}
\|\mathbf{u}_\ell-\mathbf{u}_r\|\ll 1. \end{equation} However, we emphasize that the difference $\mathbf{u}_\ell'-\mathbf{u}_r' $ is not necessarily small. Then we write \begin{equation} \mathbf{u}_0\approx\mathbf{u}_\ell\approx \mathbf{u}_r, \end{equation} and linearize \eqref{1d-balance} around $\mathbf{u}_0$ as \begin{equation} \mathbf{u}_t+ A(\mathbf{u}_0) \mathbf{u}_x=0, \ \ \ \ \ A(\mathbf{u}_0) := \dfr{\partial \mathbf{f}(\mathbf{u}_0)}{\partial \mathbf{u}}. \end{equation} Then the instantaneous time derivative of $\mathbf{u}$ is computed as, \begin{equation} \left(\dfr{\partial \mathbf{u}}{\partial t}\right)_0 := \lim_{t\rightarrow 0+0} \dfr{\partial \mathbf{u}}{\partial t}(0,t) = -[R\Lambda^+ R^{-1} \mathbf{u}_\ell' + R\Lambda^- R^{-1} \mathbf{u}_r'], \end{equation} where $\Lambda=\mbox{diag}(\lambda_1, \cdots, \lambda_m)$, $\lambda_i$, $i=1,\cdots, m$, are the eigenvalues of $A(\mathbf{u}_0)$, $R$ is the matrix of (right) eigenvectors of $A(\mathbf{u}_0)$ so that $A(\mathbf{u}_0)=R\Lambda R^{-1}$, $\Lambda^+ =\mbox{diag}(\max(\lambda_i,0))$, $\Lambda^- =\mbox{diag}(\min(\lambda_i,0))$.
The acoustic GRP solver is named the $G_1$ scheme in the series of GRP papers, and it is consistent with the ADER solver by Toro \cite{Toro-ADER}.
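For a concrete system the acoustic formula above amounts to a few lines of linear algebra: build $R\Lambda^{\pm}R^{-1}$ from the Jacobian $A(\mathbf{u}_0)$ and apply them to the one-sided spatial derivatives. The Python sketch below is ours (function and variable names are illustrative) and assumes a hyperbolic system, so the eigenvalues are real.
\begin{verbatim}
# Acoustic GRP estimate (du/dt)_0 = -[R L+ R^{-1} u_l' + R L- R^{-1} u_r'].
import numpy as np

def acoustic_time_derivative(A, dul, dur):
    lam, R = np.linalg.eig(A)              # columns of R: right eigenvectors
    lam, R = np.real(lam), np.real(R)      # hyperbolicity: real spectrum
    Rinv = np.linalg.inv(R)
    Aplus  = R @ np.diag(np.maximum(lam, 0.0)) @ Rinv
    Aminus = R @ np.diag(np.minimum(lam, 0.0)) @ Rinv
    return -(Aplus @ dul + Aminus @ dur)

# example: linear wave system u_t + A u_x = 0 with characteristic speeds +-2
A = np.array([[0.0, 1.0], [4.0, 0.0]])
print(acoustic_time_derivative(A, np.array([1.0, 0.0]), np.array([0.0, 1.0])))
\end{verbatim}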
\subsubsection{Nonlinear GRP solver.} When the jump at $x=0$ is large, the acoustic GRP solver is not sufficient to resolve the resulting strong discontinuities. Any ``rough'' approximation is dangerous since the error is measured by the jump $\|\mathbf{u}_\ell-\mathbf{u}_r\|$, which is not proportional to the mesh size in practical computations and may lead to a large numerical discrepancy. Therefore, we have to analytically solve the associated generalized Riemann problem \eqref{1d-balance}-\eqref{GRP-data} and derive
the ``genuinely'' nonlinear GRP solver, which is named the $G_\infty$ GRP. This version can be interpreted as the L-W approach plus the tracking of strong discontinuities.
Here we include the resolution of the GRP \eqref{1d-balance}-\eqref{GRP-data} for the Euler equations \eqref{Euler}. The instantaneous value $\mathbf{u}_0$ is obtained by the Riemann solver, and $(\partial\mathbf{u}/\partial t)_0$ is obtained, essentially, by solving a pair of linear algebraic equations,
\begin{equation}
\begin{array}{ll}
a_\ell\left(\dfr{\partial v}{\partial t}\right)_0 + b_\ell \left(\dfr{\partial p}{\partial t}\right)_0=d_\ell,\\[3mm]
a_r\left(\dfr{\partial v}{\partial t}\right)_0 + b_r\left(\dfr{\partial p}{\partial t}\right)_0=d_r,
\end{array}
\end{equation}
where the coefficients $a_i, b_i, d_i$, $i=\ell,r$, are given explicitly in terms of the initial data \eqref{GRP-data}, and their formulae can be found in \cite{Li-1}.
Since the variation of entropy $s$ is precisely quantified, the instantaneous time derivative of the density is then obtained using the equation of state $p=p(\rho, s)$, \begin{equation} dp =c^2 d\rho +\dfr{\partial p}{\partial s} ds. \end{equation}
\subsection{Quasi-1-D GRP solver} When the two-dimensional case is dealt with, we need to solve a so-called quasi-1-D GRP \begin{equation} \begin{array}{l} \mathbf{u}_t +\mathbf{f}(\mathbf{u})_x +\mathbf{g}(\mathbf{u})_y=0, \\[3mm] \mathbf{u}(x,y,0)=\left\{ \begin{array}{ll} \mathbf{u}_\ell(x,y), \ \ \ \ & x<0, \\ \mathbf{u}_r(x,y), & x>0, \end{array} \right. \end{array} \label{Q1-GRP} \end{equation} where $\mathbf{u}_\ell(x,y)$ and $\mathbf{u}_r(x,y)$ are two polynomials defined on the two neighboring computational cells, respectively. Since we just want to construct the flux normal to cell interfaces, the tangential effect can be regarded as a source. Therefore, we rewrite \eqref{Q1-GRP} as \begin{equation} \begin{array}{l} \mathbf{u}_t +\mathbf{f}(\mathbf{u})_x=- \mathbf{g}(\mathbf{u})_y, \\[3mm] \mathbf{u}(x,\tilde y,0)=\left\{ \begin{array}{ll} \mathbf{u}_\ell(x,\tilde y), \ \ \ \ & x<0, \\ \mathbf{u}_r(x,\tilde y), & x>0, \end{array} \right. \end{array} \label{Q2-GRP} \end{equation} by fixing a $y$-coordinate. That is, we solve a 1-D GRP at a point $(0,\tilde y)$ on the interface, taking into account the effect tangential to the interface $x=0$. The value of $\mathbf{g}(\mathbf{u})_y$ at $(0,\tilde y)$ takes the local wave propagation into account.
Again, the quasi 1-D GRP solver for solving \eqref{Q1-GRP}, particularly for the Euler equations \eqref{2d-Euler}, has the following two versions. The difference from 1-D version is that the multi-dimensional effect is included.
\subsubsection{Quasi-1-D acoustic case.} At any point $(0,\tilde y)$, if $\mathbf{u}_\ell(0-0,\tilde y)\approx \mathbf{u}_r(0+0,\tilde y)$ and $\|\nabla\mathbf{u}_\ell\|\neq \|\nabla\mathbf{u}_r\|$, we view it as a quasi-1-D acoustic case. Denote $\mathbf{u}_0: = \mathbf{u}_\ell(0-0,\tilde y)\approx \mathbf{u}_r(0+0,\tilde y)$ and $A(\mathbf{u}_0) = \frac{\partial \mathbf{f}}{\partial \mathbf{u}}(\mathbf{u}_0)$. We make the decomposition $A(\mathbf{u}_0) =R \Lambda R^{-1}$, where $\Lambda=\mbox{diag}\{\lambda_i\}$ and $R$ is the matrix of (right) eigenvectors of $A(\mathbf{u}_0)$. Then the acoustic GRP solver takes \begin{equation} \begin{array}{rl} \displaystyle \left(\dfr{\partial \mathbf{u}}{\partial t}\right)_{(0,\tilde y, 0)} =& \displaystyle -R\Lambda^+ R^{-1} \left(\dfr{\partial \mathbf{u}_\ell}{\partial x}\right)_{(0-0,\tilde y)}-R I^+ R^{-1} \left(\dfr{\partial \mathbf{g}(\mathbf{u}_\ell)}{\partial y}\right)_{(0-0,\tilde y)}\\[3mm]
&-R\Lambda^- R^{-1} \left(\dfr{\partial \mathbf{u}_r}{\partial x}\right)_{(0+0,\tilde y)}-R I^- R^{-1} \left(\dfr{\partial \mathbf{g}(\mathbf{u}_r)}{\partial y}\right)_{(0+0,\tilde y)}, \end{array} \label{acoust} \end{equation} where $\Lambda^+ =\mbox{diag}\{\max(\lambda_i,0)\}$, $\Lambda^- =\mbox{diag}\{\min(\lambda_i,0)\}$, $I^+ =\frac 12 \mbox{diag}\{1+\mbox{sign}(\lambda_i)\}$, $I^- =\frac 12 \mbox{diag}\{1-\mbox{sign}(\lambda_i)\}$.
\subsubsection{Quasi-1-D nonlinear GRP solver.} At any point $(0,\tilde y)$, if the difference $\|\mathbf{u}_\ell(0-0,\tilde y)-\mathbf{u}_r(0+0,\tilde y)\|$ is large, we regard it as the genuinely nonlinear case and have to solve the quasi 1-D GRP analytically. A key ingredient is how to treat $\mathbf{g}(\mathbf{u})_y$. Here we construct the quasi 1-D GRP solver in two steps.
(i) We solve the local 1-D planar Riemann problem for \begin{equation} \begin{array}{l} \mathbf{v}_t +\mathbf{f}(\mathbf{v})_x =0,\ \ \ \ t>0, \\[3mm] \mathbf{v}(x,\tilde y,0) =\left\{ \begin{array}{ll} \mathbf{u}_\ell(0-0,\tilde y), \ \ \ &x<0,\\ \mathbf{u}_r(0+0,\tilde y), & x>0, \end{array} \right. \end{array} \label{q1d} \end{equation} to obtain the local Riemann solution $\mathbf{u}_0 =\mathbf{v}(0,\tilde y,0+0)$. Just as in the acoustic case, we decompose $A(\mathbf{u}_0) =\frac{\partial \mathbf{f}}{\partial \mathbf{u}}(\mathbf{u}_0) =R\Lambda R^{-1}$. Then we set \begin{equation} \mathbf{h}(x,\tilde y) = \left\{ \begin{array}{ll} -R I^+ R^{-1} \left(\dfr{\partial \mathbf{g}(\mathbf{u}_\ell)}{\partial y}\right)_{(0-0,\tilde y)}, \ \ \ &x<0,\\[3mm] -RI^- R^{-1} \left(\dfr{\partial \mathbf{g}(\mathbf{u}_r)}{\partial y}\right)_{(0+0,\tilde y)}, \ \ \ &x>0, \end{array} \right. \end{equation} where $I^\pm$ are defined the same as in \eqref{acoust}.
(ii) We solve the quasi 1-D GRP \begin{equation} \begin{array}{l} \mathbf{u}_t + \mathbf{f}(\mathbf{u})_x=\mathbf{h}(x,\tilde y), \ \ \ \ t>0,\\[3mm] \mathbf{u}(x,\tilde y, 0) =\left\{ \begin{array}{ll} \mathbf{u}_\ell(x,\tilde y), \ \ \ &x<0,\\ \mathbf{u}_r(x,\tilde y), & x>0, \end{array} \right. \end{array} \end{equation} to obtain $\frac{\partial \mathbf{u}}{\partial t}(0,\tilde y, 0+0)$. The details can be found in \cite{Li-2}.
\section*{Acknowledgement}
The authors thank Yue Wang and Kun Xu for their careful reading, which greatly improved the English presentation. Jiequan Li is supported by NSFC (No. 11371063, 91130021), the doctoral program from the Education Ministry of China (No. 20130003110004) and the Innovation Program from Beijing Normal University (No. 2012LZD08).
\end{document}
\begin{document}
\begin{abstract}
We introduce the computable FS-jump, an analog of the classical Friedman--Stanley jump in the context of equivalence relations on the natural numbers. We prove that the computable FS-jump is proper with respect to computable reducibility. We then study the effect of the computable FS-jump on computably enumerable equivalence relations (ceers). \end{abstract}
\maketitle
\section{Introduction}
The backdrop for our study is the notion of computable reducibility of equivalence relations. If $E,F$ are equivalence relations on ${\mathbb N}$ we say $E$ is \emph{computably reducible} to $F$, written $E\leq F$, if there exists a computable function $f\colon{\mathbb N}\to{\mathbb N}$ such that for all $n$,$n'$ \[n\mathrel{E}n'\iff f(n)\mathrel{F}f(n')\text{.} \] This notion was first studied in both \cite{ershov,bernardi-sorbi}; it has recently garnered further study for instance in \cite{gao-gerdes,fokina-friedman-classes,fokina-friedman-etal,coskey-hamkins-miller} and numerous other works including those cited below.
Computable reducibility of equivalence relations may be thought of as a computable analog to Borel reducibility of equivalence relations on standard Borel spaces. Here if $E,F$ are equivalence relations on standard Borel spaces $X,Y$ we say $E$ is \emph{Borel reducible} to $F$, written $E\leq_B F$, if there exists a Borel function $f\colon X\to Y$ such that $x\mathrel{E}x'\iff f(x)\mathrel{F}f(x')$. We refer the reader to \cite{gao} for the basic theory of Borel reducibility.
One of the major goals in the study of computable reducibility is to compare the relative complexity of classification problems on a countable domain. In this context, if $E\leq F$ we say that the classification up to $E$-equivalence is no harder than the classification up to $F$-equivalence. For instance, classically the rank~$1$ torsion-free abelian groups (the subgroups of $\mathbb{Q}$) may be classified up to isomorphism by infinite binary sequences up to almost equality. Since this classification may be carried out in a way which is computable in the indices, there is a computable reduction from the isomorphism equivalence relation on c.e.\ subgroups of $\mathbb{Q}$ to the almost equality equivalence relation on c.e.\ binary sequences.
A second major goal in this area is to study properties of the hierarchy of equivalence relations with respect to computable reducibility. The computable reducibility quasi-order is quite complex: for instance it is shown in \cite[Theorem~4.5]{bard} that it is at least as complex as the Turing degree order, and in \cite{andrews2021structure} that its theory is equivalent to second order arithmetic. In a portion of this article we will pay special attention to the sub-hierarchy consisting of just the ceers. An equivalence relation $E$ on ${\mathbb N}$ is called a \emph{ceer} if it is computably enumerable, as a set of pairs. Ceers were called positive equivalence relations in \cite{ershov}, subsequently named ceers in \cite{gao-gerdes}, and further studied in works such as \cite{andrews-etal-universal,andrews-sorbi-joins,andrews-sorbi-jumps}.
As with other complexity hierarchies, it is natural to study operations such as jumps. One of the most important jumps in Borel complexity theory is the Friedman--Stanley jump, which is defined as follows. If $E$ is a Borel equivalence relation on the standard Borel space $X$, then the \emph{Friedman--Stanley jump} of $E$, denoted $E^{+}$, is the equivalence relation defined on $X^{\mathbb N}$ by \[x\mathrel{E^{+}}x'\iff\{[x(n)]_E:n\in{\mathbb N}\}=\{[y(n)]_E:n\in{\mathbb N}\}. \] Friedman and Stanley showed in \cite{friedman-stanley} that the jump is \emph{proper}, that is, if $E$ is a Borel equivalence relation, then $E<_B E^{+}$. Moreover they studied the hierarchy of iterates of the jump and showed that any Borel equivalence relation induced by an action of $S_{\infty}$ is Borel reducible to some iterated jump of the identity.
In this article we study a computable analog of the Friedman--Stanley jump, called the computable FS-jump and denoted $E^{{\dot{+}}}$, in which the arbitrary sequences $x(n)$ are replaced by computable enumerations $\phi_e(n)$. In Section~2 we will give the formal definition of the computable FS-jump, and establish some of its basic properties.
In Section~3 we show that the computable FS-jump is proper, that is, if $E$ is a hyperarithmetic equivalence relation, then $E<E^{{\dot{+}}}$. We do this by showing that any hyperarithmetic set is many-one reducible to some iterated jump of the identity, and establishing rough bounds on the descriptive complexity of these iterated jumps.
In Section~4 we study the effect of the computable FS-jump on ceers. We show that if $E$ is a ceer with infinitely many classes, then $E^{{\dot{+}}}$ is bounded below by the identity relation $\mathsf{Id}$ on ${\mathbb N}$, and above by the equality relation $=^{ce}$ on c.e.\ sets. This leads to a natural investigation of the structure that the jump induces on the ceers, analogous to the study of the structure that the Turing jump induces on the c.e.\ degrees. For instance, we may say that a ceer $E$ is \emph{high for the computable FS-jump} if $E^{{\dot{+}}}$ is computably bireducible with $=^{ce}$. At the close of the section, we begin to investigate the question of which ceers are high for the computable FS-jump and which are not.
In the final section we present several open questions arising from these results.
\textbf{Acknowledgement.} This work includes a portion of the third author's master's thesis \cite{gianni-thesis}. The thesis was written at Boise State University under the supervision of the first and second authors. The authors would also like to thank the referee for suggesting numerous improvements.
\section{Basic properties of reducibility and the jump}
In this section we fix some notation, introduce the computable FS-jump, and exposit some of its basic properties.
In this and future sections, we will typically use the letter $e$ for an element of ${\mathbb N}$ which we think of as an index for a Turing program. We will use $\phi_e$ for the partial computable function of index $e$, and $W_e$ for the domain of $\phi_e$.
\begin{definition}
\label{def:jump}
Let $E$ be an equivalence relation on ${\mathbb N}$. The \emph{computable FS-jump} of $E$ is the equivalence relation on indices of c.e.\ subsets of ${\mathbb N}$ defined by
\[e\mathrel{E^{{\dot{+}}}}e'\iff
\{[\phi_e(n)]_E:n\in{\mathbb N}\}=\{[\phi_{e'}(n)]_E:n\in{\mathbb N}\}.
\]
When $E$ is defined on a countable set other than ${\mathbb N}$ (or a computable subset thereof) we define $E^{{\dot{+}}}$ similarly, considering $\phi_e$ to have its range in the domain of $E$; formally we may compose $\phi_e$ with a computable bijection from ${\mathbb N}$ to the domain of $E$.
Furthermore we define the iterated jumps $E^{{\dot{+}} n}$ inductively by $E^{{\dot{+}} 1}=E^{{\dot{+}}}$ and $E^{{\dot{+}} (n+1)}=(E^{ {\dot{+}} n})^{{\dot{+}}}$. \end{definition}
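For a concrete illustration, let $E$ be the ceer of congruence modulo $2$ on ${\mathbb N}$, and suppose $\phi_e$ enumerates the even numbers, $\phi_{e'}$ enumerates $\{0,1\}$, and $\phi_{e''}$ enumerates all of ${\mathbb N}$. Then
\[
\{[\phi_{e'}(n)]_E:n\in{\mathbb N}\}=\{[\phi_{e''}(n)]_E:n\in{\mathbb N}\}=\{[0]_E,[1]_E\},
\qquad
\{[\phi_e(n)]_E:n\in{\mathbb N}\}=\{[0]_E\},
\]
so that $e'\mathrel{E^{{\dot{+}}}}e''$ while $e\mathrel{\cancel{E^{{\dot{+}}}}}e'$.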
We remark that we could also have defined $E^{{\dot{+}}}$ by working with domains $W_e$ rather than ranges $\ran(\phi_e)$. While each choice has conveniences, we use Definition~\ref{def:jump} due to its analogy with the Friedman--Stanley jump.
We mention here that several other jumps of equivalence relations have been studied in the case of ceers. The halting jump and saturation jump were introduced in \cite{gao-gerdes}. The halting jump of $E$, denoted $E'$, is defined by setting $x \mathrel{E'} y$ iff $x = y$, or $\phi_x(x)\downarrow$, $\phi_y(y)\downarrow$, and $\phi_x(x) \mathrel{E} \phi_y(y)$. The halting jump and its transfinite iterates are investigated extensively in \cite{andrews-sorbi-jumps}. The saturation jump of $E$, denoted $E^{+}$, is defined on finite subsets of ${\mathbb N}$, where $x$ and $y$ are saturation jump equivalent if their $E$-saturations are equal as sets. The saturation jump may be viewed as a finite-sequence version of the computable FS-jump. As observed in \cite{gao-gerdes}, it is not always the case that $E < E'$ and $E < E^{+}$. It is worth noting that the computable FS-jump dominates the saturation jump under computable reducibility, and dominates the halting jump for ceers $E$.
Unless explicitly stated otherwise, any further use of the word ``jump'' will refer to the computable FS-jump.
We are now ready to establish some of the basic properties of the computable FS-jump. In the following, we let $\mathsf{Id}$ denote the identity equivalence relation on ${\mathbb N}$. It is worth noting that, although several of these results are direct analogues of results in Section~7 of \cite{gao-gerdes}, our results apply to an arbitrary equivalence relation $E$ and not only ceers (unless stated otherwise).
\begin{proposition}
\label{prop:monotone}
For any equivalence relations $E$ and $F$ on ${\mathbb N}$ we have:
\begin{enumerate}
\item[(a)] $E\leq E^{{\dot{+}}}$.
\item[(b)] If $E$ has only finitely many classes, then $E < E^{{\dot{+}}}$.
\item[(c)] If $E\leq F$ then $E^{{\dot{+}}}\leq F^{{\dot{+}}}$.
\end{enumerate} \end{proposition}
\begin{proof}
(a) Let $f$ be a computable function such that for all $e$ we have that $\phi_{f(e)}$ is the constant function with value $e$. (To see that there is such a computable function $f$, one can either ``write a Turing program'' for the machine indexed by $f(e)$ or employ the s-m-n theorem. In the future we will not comment on the computability of functions of this nature.) Then $e\mathrel{E}e'$ if and only if $[e]_E=[e']_E$, if and only if $f(e)\mathrel{E^{{\dot{+}}}}f(e')$.
(b) Note that if $E$ has $n$ classes, then $E^{{\dot{+}}}$ has $2^n$ classes.
(c) This is similar to \cite[Theorem~8.4]{gao-gerdes}. Let $f$ be a computable reduction from $E$ to $F$. Let $g$ be a computable function such that $\phi_{g(e)}(n)=f(\phi_e(n))$. Then it is straightforward to verify that $g$ is a computable reduction from $E^{{\dot{+}}}$ to $F^{{\dot{+}}}$. \end{proof}
Slightly less trivially we also note the following.
\begin{proposition}
\label{prop:double_plus}
For any $E$ with infinitely many classes we have $\mathsf{Id}\leq E^{{\dot{+}}{\dot{+}}}$. \end{proposition}
\begin{proof}
We define a reduction function $f$ that works simultaneously for all equivalence relations $E$ with infinitely many classes. Given $n$, let $f(n)$ be a code for a machine such that the sets enumerated by $\phi_{\phi_{f(n)}(i)}$, as $i$ ranges over ${\mathbb N}$, are exactly the $n$-element subsets of ${\mathbb N}$. Clearly since $E^{{\dot{+}}{\dot{+}}}$ is reflexive we have that $n=n'$ implies $f(n)\mathrel{E^{{\dot{+}}{\dot{+}}}}f(n')$. Conversely suppose $n\neq n'$, and assume without loss of generality that $n<n'$. Then for all $i\in{\mathbb N}$ we have that $[\phi_{f(n)}(i)]_{E^{{\dot{+}}}}$ is a code for at most $n$-many $E$-classes. On the other hand, since $E$ has infinitely many classes, there exists $i\in{\mathbb N}$ such that $[\phi_{f(n')}(i)]_{E^{{\dot{+}}}}$ is a code for exactly $n'$-many $E$-classes. It follows that $\{[\phi_{f(n)}(i)]_{E^{{\dot{+}}}}:i\in{\mathbb N}\}\neq\{[\phi_{f(n')}(i)]_{E^{{\dot{+}}}}:i\in{\mathbb N}\}$, or in other words, $f(n)\mathrel{\cancel{E^{{\dot{+}}{\dot{+}}}}}f(n')$. \end{proof}
In the following, we let $E\oplus F$ denote the equivalence relation defined on ${\mathbb N}\times\{0,1\}$ by $(m,i)(E\oplus F)(n,j)$ iff $(i=j=0)\wedge(m\mathrel{E}n)$ or $(i=j=1)\wedge(m\mathrel{F}n)$. Finally, we let $E\times F$ denote the equivalence relation defined on ${\mathbb N}\times{\mathbb N}$ by $(m,n)(E\times F)(m',n')$ iff $m\mathrel{E}m'\wedge n\mathrel{F}n'$.
\begin{proposition}
\label{prop:product}
$(E\oplus F)^{{\dot{+}}}$ is computably bireducible with $E^{{\dot{+}}}\times F^{{\dot{+}}}$. \end{proposition}
\begin{proof}
For the forward reduction, given an index $e$ for a function into ${\mathbb N}\times\{0,1\}$, let $\phi_{e_0}(n)=m$ if $\phi_{e}(n)=(m,0)$ and let $\phi_{e_1}(n)=m$ if $\phi_{e}(n)=(m,1)$; $\phi_{e_i}$ is undefined otherwise. Then the map $e\mapsto(e_0,e_1)$ is a reduction from $(E\oplus F)^{{\dot{+}}}$ to $E^{{\dot{+}}}\times F^{{\dot{+}}}$. For the reverse reduction, given a pair of indices $(e_0,e_1)$ we define $\phi_e(2n)=(\phi_{e_0}(n),0)$ and $\phi_e(2n+1)=(\phi_{e_1}(n),1)$. Once again it is easy to verify $(e_0,e_1)\mapsto e$ is a reduction from $E^{{\dot{+}}}\times F^{{\dot{+}}}$ to $(E\oplus F)^{{\dot{+}}}$. \end{proof}
In the next result we will briefly consider the connection between the computable FS-jump and the restriction of the classical FS-jump to c.e.\ sets. In the literature, the $n$th iterated classical FS-jump of $\mathsf{Id}$ is usually denoted $F_n$. For our purposes it will be convenient to regard each $F_n$ as an equivalence relation on $\mathcal P({\mathbb N})$. Thus we officially define $F_1$ as the equality relation on $\mathcal P({\mathbb N})$. Let $\langle\cdot,\cdot\rangle$ be the usual pairing function ${\mathbb N}^2\to{\mathbb N}$, and let $A^{[n]}$ denote the $n$th ``column'' of $A$, that is, $A^{[n]}=\{p\in{\mathbb N}:\langle n,p\rangle\in A\}$. We then officially define $A\mathrel{F_2}B$ iff $\{A^{[n]}:n\in{\mathbb N}\}=\{B^{[n]}:n\in{\mathbb N}\}$. Similarly for all $n$ we can officially define $F_n$ on $\mathcal P({\mathbb N})$ by means of a fixed uniformly computable family of bijections between ${\mathbb N}^n$ and ${\mathbb N}$. So defined, $F_n$ is naturally Borel bireducible with the literal $n$th iterated classical FS-jump of $\mathsf{Id}$.
Next, recall from \cite{coskey-hamkins-miller} that for any equivalence relation $E$ on $\mathcal P({\mathbb N})$ we can define its \emph{restriction to c.e.\ sets} $E^{ce}$ on ${\mathbb N}$ by \[e\mathrel{E^{ce}}e'\iff W_e\mathrel{E}W_{e'}. \] In particular, $(F_1)^{ce}$ is $=^{ce}$, which figures prominently in the theory of computable reducibility. We are now ready to state the following.
\begin{proposition}
\label{prop:idplus}
For any $n$, we have that $\mathsf{Id}^{{\dot{+}} n}$ is computably bireducible with $(F_n)^{ce}$. \end{proposition}
\begin{proof}[Proof sketch]
For $n=1$, we need to show that $\mathsf{Id}^{{\dot{+}}}$ is computably bireducible with $=^{ce}$, which amounts to the fact that one can pass effectively between presentations of a c.e.\ set as the range and as the domain of a partial computable function. Namely, let $f$ and $g$ be computable functions so that $W_{f(e)}=\ran(\phi_e)$ and $\ran(\phi_{g(e)})=W_e$; then $f$ and $g$ provide the respective reductions.
For the induction step, it is sufficient to show that for any $n$ we have that $((F_n)^{ce})^{{\dot{+}}}$ is computably bireducible with $(F_{n+1})^{ce}$. For notational simplicity, we briefly illustrate this just in the case when $n=1$. For the reduction from $((F_1)^{ce})^{{\dot{+}}}$ to $(F_2)^{ce}$, we define $f$ to be a computable function such that for all $n$ we have $(W_{f(e)})^{[n]}=W_{\phi_e(n)}$. For the reduction from $(F_2)^{ce}$ to $((F_1)^{ce})^{{\dot{+}}}$, we define $g$ to be a computable function such that for all $n$ we have $(W_{\phi_{g(e)}})^{[n]}=(W_e)^{[n]}$. \end{proof}
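To make the case $n=1$ concrete, one possible choice of the reductions (a sketch using only the conventions fixed above) is to take $f$ and $g$, via the s-m-n theorem, so that
\[
W_{f(e)}=\{m:\exists n\ \phi_e(n)\downarrow=m\}
\qquad\text{and}\qquad
\phi_{g(e)}(n)\simeq
\begin{cases}
n & \text{if $\phi_e(n)\downarrow$,}\\
\text{undefined} & \text{otherwise,}
\end{cases}
\]
so that indeed $W_{f(e)}=\ran(\phi_e)$ and $\ran(\phi_{g(e)})=W_e$.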
We shall make frequent use of the particular case that $\mathsf{Id}^{{\dot{+}}}$ is computably bireducible with $=^{ce}$.
To conclude the section, we define transfinite iterates of the computable FS-jump. The transfinite jumps allow one to extend results such as the previous proposition into the transfinite, and they also play a key role in the next section. For the definition, recall that Kleene's $\mathcal O$ consists of notations for ordinals and is defined as follows: $1\in\mathcal O$ is a notation for $0$, if $a\in\mathcal O$ is a notation for $\alpha$ then $2^a$ is a notation for $\alpha+1$, and if for all $n$ we have $\phi_e(n)$ is a notation for $\alpha_n$ with the notations increasing in $\mathcal{O}$ with respect to $n$, then $3\cdot 5^e$ is a notation for $\sup_n\alpha_n$. We refer the reader to \cite{sacks} for background on $\mathcal O$.
\begin{definition}
We define $E^{{\dot{+}} a}$ for $a\in\mathcal O$ recursively as follows.
\begin{align*}
E^{{\dot{+}} 1}&=E\\
E^{{\dot{+}} 2^b}&=(E^{{\dot{+}} b})^{{\dot{+}}}\\
E^{{\dot{+}} 3\cdot 5^e}&=\{(\langle m,x\rangle,\langle n,y\rangle):(m=n)\wedge (x\mathrel{E^{{\dot{+}} \phi_e(m)}}y)\}
\end{align*} \end{definition}
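For example, since $1$ is the notation for the ordinal $0$, unwinding the definition gives
\[
E^{{\dot{+}} 2^1}=E^{{\dot{+}}}
\qquad\text{and}\qquad
E^{{\dot{+}} 2^{2^1}}=E^{{\dot{+}}{\dot{+}}},
\]
so a notation for a finite ordinal $k$ yields $k$ applications of the jump. If $\phi_e(n)$ is the canonical notation for the ordinal $n$, obtained by iterating $b\mapsto 2^b$ starting from $1$, then $a=3\cdot 5^e$ is a notation for $\omega$, and $E^{{\dot{+}} a}$ relates $\langle m,x\rangle$ to $\langle m,y\rangle$ exactly when $x$ and $y$ become equivalent after $m$ applications of the jump to $E$.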
We remark that it is straightforward to extend Proposition~\ref{prop:idplus} into the transfinite as follows. Given a notation $a\in\mathcal O$ for $\alpha$, we may use $a$ to define an equivalence relation $F_a$ on $\mathcal P({\mathbb N})$ which is Borel bireducible with the $\alpha$-iterated FS-jump $F_\alpha$. We then have that $\mathsf{Id}^{{\dot{+}} a}$ is computably bireducible with $(F_a)^{ce}$. We do not know, however, whether $\mathsf{Id}^{{\dot{+}} a}$ and $\mathsf{Id}^{{\dot{+}} a'}$ are computably bireducible when $a$ and $a'$ are different notations for the same ordinal.
The following propositions will be used in the next section.
\begin{proposition}
\label{prop:closed}
If $E^{{\dot{+}}} \leq E$ then for any $a \in \mathcal{O}$ we have $E^{{\dot{+}} a} \leq E$. \end{proposition}
\begin{proof}
We proceed by recursion on $a \in \mathcal{O}$. It follows from our hypothesis together with Proposition~\ref{prop:monotone}(b) that $E$ has infinitely many classes. By Proposition~\ref{prop:double_plus}, we have $\mathsf{Id} \leq E^{{\dot{+}}{\dot{+}}}$ and hence $\mathsf{Id} \leq E$. From this we can see that $E\times\mathsf{Id}\leq E^{{\dot{+}}{\dot{+}}}$ as follows. Suppose $h\colon\mathsf{Id} \leq E$ and define $h'$ by arranging for $W_{h'(e,n)}=\{0,1\}$, $\phi_{h'(e,n)}(0)=$ a code for $\{e\}$, and $\phi_{h'(e,n)}(1)=$ a code for $\{h(n),h(n+1)\}$. Since $h(n)$ and $h(n+1)$ are $E$-inequivalent for each $n$, we can distinguish the code for $\{h(n),h(n+1)\}$ from the code for $\{e\}$, so the $E^{{\dot{+}}{\dot{+}}}$-class of $h'(e,n)$ determines $[e]_E$ and $n$, and hence $h'\colon E \times \mathsf{Id} \leq E^{{\dot{+}}{\dot{+}}}$. Hence we have $E \times \mathsf{Id} \leq E$, and we may fix a computable reduction function $g\colon E \times \mathsf{Id} \leq E$.
Now let $f\colon E^{{\dot{+}}} \leq E$ and define uniformly $f_a : E^{{\dot{+}} a} \leq E$ as follows. Let $f_1$ be the identity map. Given $f_a\colon E^{{\dot{+}} a}\leq E$, apply Proposition~\ref{prop:monotone}(c) to get $f^+_a:(E^{{\dot{+}} a})^{{\dot{+}}}\leq E^{{\dot{+}}}$, then define $f_{2^a}=f\circ f^+_a$. To define $f_{3 \cdot 5^e}$ it suffices to find a reduction from $E^{{\dot{+}} 3 \cdot 5^e}$ to $E \times \mathsf{Id}$ and compose with $g$; this follows from the fact that we have each $E^{{\dot{+}} \phi_e(n)}$ uniformly reducible to $E$ by the effectiveness of the recursion. \end{proof}
\begin{proposition} If $E \times \mathsf{Id} \leq E$ then for any $a \in \mathcal{O}$ we have $E^{{\dot{+}} a} \times \mathsf{Id} \leq E^{{\dot{+}} a}$. \end{proposition}
\begin{proof}
We proceed by recursion on $a \in \mathcal{O}$, noting that the induction will produce the reduction functions effectively from $a$. Suppose first that $E^{{\dot{+}} a} \times \mathsf{Id} \leq E^{{\dot{+}} a}$. Then $E^{{\dot{+}} 2^a} \times \mathsf{Id} = (E^{{\dot{+}} a})^{{\dot{+}}} \times \mathsf{Id} \leq (E^{{\dot{+}} a})^{{\dot{+}}} \times \mathsf{Id}^{{\dot{+}}}$, which is bireducible with $(E^{{\dot{+}} a} \oplus \mathsf{Id})^{{\dot{+}}}$ by Proposition~\ref{prop:product}. Since the hypothesis implies $\mathsf{Id} \leq E$, this is reducible to $(E^{{\dot{+}} a} \oplus E^{{\dot{+}} a})^{{\dot{+}}}$, which is reducible to $(E^{{\dot{+}} a} \times \mathsf{Id})^{{\dot{+}}}$, and hence reducible to $(E^{{\dot{+}} a})^{{\dot{+}}}=E^{{\dot{+}} 2^a}$.
For $E^{{\dot{+}} 3 \cdot 5^e}$, we assume that $E^{{\dot{+}} \phi_e(m)} \times \mathsf{Id} \leq E^{{\dot{+}} \phi_e(m)}$ uniformly in $m$, from which we see that $E^{{\dot{+}} 3 \cdot 5^e} \times \mathsf{Id} \leq (E \times \mathsf{Id})^{{\dot{+}} 3 \cdot 5^e} \leq E^{{\dot{+}} 3 \cdot 5^e}$. \end{proof}
Since a computable bijection from ${\mathbb N} \times {\mathbb N}$ to ${\mathbb N}$ shows $\mathsf{Id} \times \mathsf{Id} \leq \mathsf{Id}$, we get:
\begin{corollary}
\label{cor:absorbs_id}
For any $a \in \mathcal{O}$ we have $\mathsf{Id}^{{\dot{+}} a} \times \mathsf{Id} \leq \mathsf{Id}^{{\dot{+}} a}$. \end{corollary}
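Here, for definiteness, the computable bijection mentioned above may be taken to be the Cantor pairing function
\[
\langle x,y\rangle=\frac{(x+y)(x+y+1)}{2}+y,
\]
which is a computable bijection from ${\mathbb N}\times{\mathbb N}$ to ${\mathbb N}$ and hence witnesses $\mathsf{Id}\times\mathsf{Id}\leq\mathsf{Id}$ directly.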
\section{Properness of the jump}
In this section we establish the following main result.
\begin{theorem}
\label{thm:proper}
If $E$ is a hyperarithmetic equivalence relation on ${\mathbb N}$, then $E<E^{{\dot{+}}}$. \end{theorem}
Since we have $E \leq E^{{\dot{+}}}$ for each equivalence relation $E$, this amounts to showing that no hyperarithmetic equivalence relation is a fixed point of the computable FS-jump. We will in fact establish the following stronger result.
\begin{theorem} \label{thm:fixed_points} Let $E$ be an equivalence relation on ${\mathbb N}$ which is a fixed point for the computable FS-jump. Then $E$ is an upper bound in the $m$-degrees for all hyperarithmetic sets. \end{theorem}
The proof will proceed by showing that iterated jumps of the identity have cofinal descriptive complexity among hyperarithmetic sets. Specifically, we will show that every hyperarithmetic set is many-one reducible to $\mathsf{Id}^{{\dot{+}} a}$ for some $a \in \mathcal{O}$. The proof will involve an induction on the hyperarithmetic hierarchy, and we will utilize a particular type of many-one reduction which we now introduce.
\begin{definition} Given a relation $E$ on ${\mathbb N}$, we define the relation $\subseteq_E$ by setting $e \subseteq_E e'$ if the following holds:
\[\forall n [ \phi_e(n) \downarrow \ \Rightarrow \exists m ( \phi_{e'}(m) \downarrow \wedge \phi_e(n) \mathrel{E} \phi_{e'}(m))].
\] \end{definition} We write $e \supseteq_E e'$ when $e' \subseteq_E e$. Note that when $E$ is an equivalence relation, $\subseteq_E$ is a quasi-order and we have $e\mathrel{E^{{\dot{+}}}}e'$ iff $e\subseteq_E e'$ and $e'\subseteq_E e$.
\begin{definition} Given a set $P$ and an equivalence relation $E$, we say that \emph{$P$ is subset-reducible to $E^{{\dot{+}}}$} if there is a computable function $h$ and $e_0 \in {\mathbb N}$ so that for all $n$ we have $h(n) \subseteq_E e_0$, and $P(n) \iff h(n) \mathrel{E^{{\dot{+}}}} e_0$. We call the pair $(h, e_0)$ a \emph{subset-reduction}. \end{definition}
If $P$ is subset-reducible to $E^{{\dot{+}}}$ then it is clearly many-one reducible to $E^{{\dot{+}}}$; we will show that every hyperarithmetic set is subset-reducible to some iterated jump of $\mathsf{Id}$. Since in general $P$ may be many-one reducible to $E$ without $P^c$ being reducible to an iterated jump of $E$, we wish to use only ``positive'' induction steps, i.e., an inductive construction of the hyperarithmetic sets starting from computable sets and involving only effective unions and intersections. Also, since we need to uniformly produce reducing functions throughout the construction of a set, we want to consider the entire construction at once. To this end we introduce the notion of a computable Borel code for a hyperarithmetic set. There are many different presentations of computable Borel codes, all of which give the same collection of sets; the following definition is a slight variation of that given in Chapter~27 of \cite{miller}.
\begin{definition}
A \emph{computable Borel code} is a pair $(T,f)$ where $T$ is a computable well-founded tree on ${\mathbb N}$ such that $t \smallfrown n \in T$ for every non-terminal node $t$ of $T$ and every $n$, and $f$ is a computable function from the terminal nodes of $T$ to ${\mathbb N}$. Given a computable Borel code $(T,f)$, the set $B(T,f)$ is defined by recursion on $t \in T$ as follows. If $t$ is a terminal node, then $B_t(T,f)=\ran \phi_{f(t)}$, and if $t$ is not a terminal node, then $B_t(T,f)=\{ n: \forall p \exists q (n \in B_{t \smallfrown \langle p,q \rangle}(T,f) )\}$. We let $B(T,f)=B_{\emptyset}(T,f)$. \end{definition}
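For example, consider a code $(T,f)$ in which every node of length one is terminal. Unwinding the definition gives
\[
B(T,f)=\{n:\forall p\,\exists q\ (n\in\ran\phi_{f(\emptyset\smallfrown\langle p,q\rangle)})\},
\]
and the sets of this form are exactly the $\Pi^0_2$ sets.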
The following characterization then follows from the fact that a set is hyperarithmetic if and only if it is $\Delta^1_1,$ together with the Kleene Separation Theorem and the hyperarithmetic codes used in its proof (see, e.g., \cite[Chapter II]{sacks} and \cite[Theorem 27.1]{miller}).
\begin{theorem}
A set $B$ is hyperarithmetic if and only if there is a computable Borel code $(T,f)$ such that $B=B(T,f)$. \end{theorem}
From this characterization, we see that it will suffice to consider three types of inductive steps as described in Lemma~\ref{lem:proper_union}, Lemma~\ref{lem:proper_intersection}, and Lemma~\ref{lem:proper_limit}. We begin by considering the case of a $\Sigma^0_3$ set because it allows us to produce slightly better complexity bounds, as discussed later in this section, and introduces key ideas used in the subsequent proofs.
In the following, we will say that $e$ is \emph{an index for an enumeration} of the c.e.\ set $W$ if $\ran \phi_e = W$. We will repeatedly utilize the fact that we can effectively enumerate the $E^{{\dot{+}}}$-classes of the c.e. supersets of a given set, i.e., $\{[e]_{E^{{\dot{+}}}} : e \supseteq_{E} e_0\} = \{[W_i \cup e_0]_{E^{{\dot{+}}}} : i \in {\mathbb N}\}$, where we use $W_i \cup e_0$ to denote an index for an enumeration of $\ran \phi_{e_0} \cup W_i$. The analogous statement with $\subseteq_E$ replacing $\supseteq_E$ does not hold, as illustrated in Proposition~\ref{prop:counterexample} below, which is why we repeat this process twice to handle existential quantification. Recall that $=^{ce}$ is computably bireducible with $\mathsf{Id}^{{\dot{+}}}$.
\begin{lemma} \label{lem:proper_base} Let $P$ be $\Sigma^0_3$. Then $P$ is subset-reducible to $\left(=^{ce}\right)^{{\dot{+}}}$. \end{lemma}
\begin{proof} Choose $i_0$ with
$P(n) \iff \exists q \forall m\ \phi_{i_0}(\langle q,m,n\rangle) \downarrow$,
so that
\[ P(n) \iff \exists q\ \{ \langle q,m,n \rangle : m \in {\mathbb N} \} \subset W_{i_0} .\]
Letting $W_{g(q,n)} = W_{i_0} \cup \{ \langle q,m,n\rangle : m \in {\mathbb N}\}$, we then have
\[ P(n) \iff \exists q\ W_{g(q,n)} = W_{i_0} ,\]
with $W_{g(q,n)} \supset W_{i_0}$ for all $q$ and $n$. Then
\[ P(n) \iff \exists q\ \{W_i \cup W_{g(q,n)} : i \in {\mathbb N}\} = \{W_i \cup W_{i_0} : i \in {\mathbb N} \}, \]
with $\{W_i \cup W_{g(q,n)} : i \in {\mathbb N}\} \subset \{W_i \cup W_{i_0} : i \in {\mathbb N} \}$ for all $q$ and $n$. Hence
\[ P(n) \iff \{ W_i \cup W_{g(q,n)} : i \in {\mathbb N} \wedge q \in {\mathbb N}\} = \{W_i \cup W_{i_0} : i \in {\mathbb N} \}, \]
with $\{ W_i \cup W_{g(q,n)} : i \in {\mathbb N} \wedge q \in {\mathbb N}\} \subset \{W_i \cup W_{i_0} : i \in {\mathbb N} \}$ for all $q$ and $n$, and equality holding only when there is $q$ with $W_{g(q,n)} = W_{i_0}$.
Let $h(n)$ be such that $\phi_{h(n)}(\langle i,q\rangle)$ is an index for an enumeration of $W_i \cup W_{g(q,n)}$ and let $e_0$ be such that $\phi_{e_0}(i)$ is an index for an enumeration of $W_i \cup W_{i_0}$. Then we have
$P(n) \iff h(n) \mathrel{\left(=^{ce}\right)^{{\dot{+}}}} e_0$, with $h(n) \subseteq_{=^{ce}} e_0$ for all $n$.
\end{proof}
\begin{lemma} \label{lem:proper_union} Suppose $Q$ is subset-reducible to $E^{{\dot{+}}}$, and $P(n) \iff \exists q\ Q(\langle q,n\rangle)$. Then $P$ is subset-reducible to $E^{{\dot{+}}{\dot{+}}{\dot{+}}}$. Moreover, there are computable functions $\Psi$ and $\chi$ so that if $(\phi_i,d_0)$ is a subset-reduction from $Q$ to $E^{{\dot{+}}}$, then $(\phi_{\Psi(i)}, \chi(d_0))$ is a subset-reduction from $P$ to $E^{{\dot{+}} {\dot{+}} {\dot{+}}}$. \end{lemma}
\begin{proof}
Let $(f,d_0)$ be a subset-reduction from $Q$ to $E^{{\dot{+}}}$. We then have:
\begin{align*}
P(n) &\iff \exists q\ Q(\langle q,n\rangle) \\
& \iff \exists q\ f(\langle q,n \rangle) \mathrel{E^{{\dot{+}}}} d_0 \\
& \iff \exists q\ \{ [m]_E : m \in \ran \phi_{f(\langle q,n \rangle)} \} = \{ [m]_E : m \in \ran \phi_{d_0}\} \\
& \iff \exists q\ \{[e]_{E^{{\dot{+}}}} : e \supseteq_E f(\langle q,n\rangle) \} = \{[e]_{E^{{\dot{+}}}} : e \supseteq_E d_0\},
\end{align*}
with $\{[e]_{E^{{\dot{+}}}} : e \supseteq_E f(\langle q,n\rangle) \} \supset \{[e]_{E^{{\dot{+}}}} : e \supseteq_E d_0\}$ for all $q$ and $n$. Let $j$ be such that $\phi_{j(n,q)}(i)$ is an index for an enumeration of $W_i \cup \ran \phi_{f(\langle q,n\rangle)}$ for each $n$, $q$, and $i$, and let $j_0$ be such that $\phi_{j_0}(i)$ is an index for an enumeration of $W_i \cup \ran \phi_{d_0}$ for each $i$. Then we have
\[ P(n) \iff \exists q\ j(n,q) \mathrel{E^{{\dot{+}} {\dot{+}}}} j_0 ,\]
with $j(n,q) \supseteq_{E^{{\dot{+}}}} j_0$ for all $n$ and $q$. Hence
\[ P(n) \iff \exists q\ \{[e]_{E^{{\dot{+}} {\dot{+}} }} : e \supseteq_{E^{{\dot{+}}}} j(n,q) \} = \{[e]_{E^{{\dot{+}} {\dot{+}} }} : e \supseteq_{E^{{\dot{+}}}} j_0\},\]
with $\{[e]_{E^{{\dot{+}} {\dot{+}} }} : e \supseteq_{E^{{\dot{+}}}} j(n,q) \} \subset \{[e]_{E^{{\dot{+}} {\dot{+}} }} : e \supseteq_{E^{{\dot{+}}}} j_0\}$ for all $n$ and $q$. Hence we also have $\{[e]_{E^{{\dot{+}} {\dot{+}} }} :\exists q\ e \supseteq_{E^{{\dot{+}}}} j(n,q) \} \subset \{[e]_{E^{{\dot{+}} {\dot{+}} }} : e \supseteq_{E^{{\dot{+}}}} j_0\}$ for all $n$, and we claim that
\[ P(n) \iff \{[e]_{E^{{\dot{+}} {\dot{+}} }} : \exists q\ e \supseteq_{E^{{\dot{+}}}} j(n,q) \} = \{[e]_{E^{{\dot{+}} {\dot{+}} }} : e \supseteq_{E^{{\dot{+}}}} j_0\}.\]
To see this, note that if equality holds then $[j_0]_{E^{{\dot{+}}{\dot{+}}}}$ must be an element of the left-hand set, so there must be $q_0$ with $j_0 \supseteq_{E^{{\dot{+}}}} j(n,q_0)$. Since $j(n,q) \supseteq_{E^{{\dot{+}}}} j_0$ for all $q$, we thus have $j(n,q_0) \mathrel{E^{{\dot{+}} {\dot{+}}}} j_0$, so that $P(n)$ holds.
Finally, let $h$ be such that $\phi_{h(n)}(\langle i,q\rangle)$ is an index for an enumeration of $W_i \cup \ran \phi_{j(n,q)}$ for each $n$, $q$, and $i$, and let $e_0$ be such that $\phi_{e_0}(i)$ is an index for an enumeration of $W_i \cup \ran \phi_{j_0}$ for each $i$. Then $h(n) \subseteq_{E^{{\dot{+}} {\dot{+}}}} e_0$ for each $n$, and $P(n) \iff h(n) \mathrel{E^{{\dot{+}} {\dot{+}} {\dot{+}}}} e_0$, so that $(h,e_0)$ is a subset-reduction of $P$ to $E^{{\dot{+}} {\dot{+}} {\dot{+}}}$. The construction of $h$ and $e_0$ is uniform in $f$ and $d_0$, so we can produce the functions $\Psi$ and $\chi$ as described. \end{proof}
\begin{lemma} \label{lem:proper_intersection} Suppose $E \times \mathsf{Id} \leq E$, $Q$ is subset-reducible to $E^{{\dot{+}}}$, and $P(n) \iff \forall p Q(\langle p,n\rangle)$. Then $P$ is subset-reducible to $E^{{\dot{+}}}$. Moreover, there are computable functions $\Psi$ and $\chi$ so that if $(\phi_i,d_0)$ is a subset-reduction from $Q$ to $E^{{\dot{+}}}$, then $(\phi_{\Psi(i)}, \chi(d_0))$ is a subset-reduction from $P$ to $E^{{\dot{+}}}$. \end{lemma}
\begin{proof}
Let $(f,d_0)$ be a subset-reduction from $Q$ to $E^{{\dot{+}}}$, and let $g$ be a reduction from $E \times \mathsf{Id}$ to $E$. Define $h$ so that $h(n)$ is an index for an enumeration of
$ \{ g(m,p) : m \in \ran \phi_{f(\langle p,n \rangle)} \wedge p \in {\mathbb N}\} $
and let $e_0$ be an index for an enumeration of
$ \{ g(m,p) : m \in \ran \phi_{d_0} \wedge p \in {\mathbb N}\}$.
For all $n$ and $p$ we have $f(\langle p,n\rangle) \subseteq_E d_0$, so that $h(n) \subseteq_E e_0$, and for all $n$ we have:
\begin{align*}
P(n) &\iff \forall p\ Q(\langle p,n\rangle) \\
& \iff \forall p\ f(\langle p,n \rangle) \mathrel{E^{{\dot{+}}}} d_0 \\
& \iff \forall p\ \{ [m]_E : m \in \ran \phi_{f(\langle p,n \rangle)} \} = \{ [m]_E : m \in \ran \phi_{d_0}\} \\
& \iff \{ [(m,p)]_{E \times \mathsf{Id}} : m \in \ran \phi_{f(\langle p,n \rangle)} \wedge p \in {\mathbb N}\} = \\
&\qquad\qquad\qquad \{ [(m,p)]_{E \times \mathsf{Id}} : m \in \ran \phi_{d_0} \wedge p \in {\mathbb N} \} \\
& \iff \{ [g(m,p)]_E : m \in \ran \phi_{f(\langle p,n \rangle)} \wedge p \in {\mathbb N}\} = \\
&\qquad\qquad\qquad \{ [g(m,p)]_E : m \in \ran \phi_{d_0} \wedge p \in {\mathbb N} \} \\
& \iff h(n) \mathrel{E^{{\dot{+}}}} e_0 ,
\end{align*}
so that $(h,e_0)$ is a subset-reduction from $P$ to $E^{{\dot{+}}}$. The construction of $h$ and $e_0$ is uniform, so we can produce the functions $\Psi$ and $\chi$ as described. \end{proof}
\begin{lemma} \label{lem:proper_limit} Suppose $a=3 \cdot 5^e \in \mathcal{O}$, and for each $n$ we have that $(h_n,e_n)$ is a subset-reduction from $A^{[n]}=\{p\in{\mathbb N}:\langle n,p\rangle\in A\}$ to $(E^{{\dot{+}} \phi_e(n)})^{{\dot{+}}}$, with the sequences $\langle h_n \rangle_{n \in {\mathbb N}}$ and $\langle e_n \rangle_{n \in {\mathbb N}}$ computable. Then $A$ is subset-reducible to $(E^{{\dot{+}} a})^{{\dot{+}}}$. Moreover, there are computable functions $\Psi$ and $\chi$ so that $(\Psi(\langle h_n \rangle_{n \in {\mathbb N}}), \chi(\langle e_n \rangle_{n \in {\mathbb N}}))$ provides the subset-reduction. \end{lemma}
\begin{proof}
Define $h$ so that for each $n$ and $p$, $h(\langle n,p\rangle)$ is an index for an enumeration of
$ \{\langle n, q \rangle : q \in \ran \phi_{h_n(p)} \} \cup \{ \langle m, q \rangle : q \in \ran \phi_{e_m} \wedge m \neq n\}$ ,
and let $e$ be an index for an enumeration of $\{ \langle m, q \rangle : q \in \ran \phi_{e_m} \wedge m \in {\mathbb N} \}$. Then $(h,e)$ provides the desired subset-reduction since $\{\langle n, q \rangle : q \in \ran \phi_{h_n(p)} \} \subseteq_{E^{{\dot{+}} a}} \{ \langle n, q \rangle : q \in \ran \phi_{e_n}\}$ for all $n$ and $p$, with $\{\langle n, q \rangle : q \in \ran \phi_{h_n(p)} \} \mathrel{(E^{{\dot{+}} a})^{{\dot{+}}}} \{ \langle n, q \rangle : q \in \ran \phi_{e_n}\}$ iff $p\in A^{[n]}$. The existence of $\Psi$ and $\chi$ is clear. \end{proof}
We now prove the key result for establishing properness of the jump.
\begin{theorem} \label{thm:key_proper} For each hyperarithmetic set $B$ there is $a \in \mathcal{O}$ with $B \leq_m \mathsf{Id}^{{\dot{+}} a}$. \end{theorem}
\begin{proof}
We will show that for each computable Borel code $(T,f)$ there is $a_T \in \mathcal{O}$ so that $B(T,f) \leq_m \mathsf{Id}^{{\dot{+}} a_T}$.
For notational convenience we let $B_t=B_t(T,f)$ for $t \in T$. We will recursively define $a_t \in \mathcal{O}$ and let $E_t = \mathsf{Id}^{{\dot{+}} a_t}$ and establish by effective induction on $t \in T$ that $B_t$ is subset-reducible to $E_t^{{\dot{+}}}$ via $(h_t,e_t)$, with computable maps $t \mapsto a_t$, $t \mapsto h_t$, and $t \mapsto e_t$.
For $t$ a terminal node we have $B_t=\ran \phi_{f(t)}$ and we set $a_t=1$ so $E_t=\mathsf{Id}$ and $E_t^{{\dot{+}}}$ is bireducible with $=^{ce}$. Fix a single $e_t$ for all terminal $t$ so that $\ran \phi_{e_t}={\mathbb N}$, and let $h_t(n)$ be such that $\ran \phi_{h_t(n)} = {\mathbb N}$ if $n \in \ran \phi_{f(t)}$ and $\ran \phi_{h_t(n)} = \emptyset$ if $n \notin \ran \phi_{f(t)}$. Then $(h_t,e_t)$ is a subset-reduction from $B_t$ to $E_t^{{\dot{+}}}$.
Now let $t$ be a non-terminal node, and assume $a_{t \smallfrown \langle p,q \rangle}$, $h_{t \smallfrown \langle p,q \rangle}$, and $e_{t \smallfrown \langle p,q \rangle}$ have been defined for all $t \smallfrown \langle p,q \rangle \in T$. Fix a computable pairing function $(x,y) \mapsto \langle x,y \rangle$ with computable coordinate functions $(\langle x,y \rangle)_0 =x$ and $(\langle x,y \rangle)_1 =y$, and so that $\langle 0,0 \rangle =0$. Define $R_t$ so that $R_t(\langle q, \langle p,n \rangle \rangle) \iff B_{t \smallfrown \langle p,q \rangle}(n)$, so that $B_t(n) \iff \forall p \exists q\ R_t(\langle q, \langle p,n \rangle \rangle)$.
We first adjust ordinal ranks to produce an increasing sequence so that we can take their supremum in $\mathcal{O}$. Let $\tilde{a}_{t,0}=a_{t \smallfrown \langle 0,0 \rangle}$ and let \[ \tilde{a}_{t,m+1} = \tilde{a}_{t,m} +_{\mathcal{O}} a_{t \smallfrown \langle (m)_0,(m)_1 \rangle} +_{\mathcal{O}} 2, \] where $+_{\mathcal{O}}$ is addition in $\mathcal{O}$. Then let $\tilde{a}_{t} = 3 \cdot 5^{i_{t}}$ where $\phi_{i_{t}}(m)= \tilde{a}_{t,m}$ for all $m$. Observe that if $\psi: E \leq F$ then the map $\tilde{\psi}\colon E^{{\dot{+}}} \leq F^{{\dot{+}}}$ as produced in the proof of Proposition~\ref{prop:monotone}(c) will satisfy $e \subseteq_E e' \iff \tilde{\psi}(e) \subseteq_F \tilde{\psi}(e')$. Hence we can uniformly replace $E_{t \smallfrown \langle p,q \rangle}$, $e_{t \smallfrown \langle p,q \rangle}$, and $h_{t \smallfrown \langle p,q\rangle} $ by $\mathsf{Id}^{{\dot{+}} \tilde{a}_{t,\langle p,q \rangle}}$, a corresponding $\tilde{e}_{t \smallfrown \langle p,q \rangle}$, and a corresponding map $\tilde{h}_{t \smallfrown \langle p,q \rangle}$, respectively, while maintaining the conditions for subset-reductions.
Letting $A_t$ be such that $A_t^{[m]}=B_{t \smallfrown \langle (m)_0,(m)_1\rangle}$ for each $m$, we can then effectively produce a subset-reduction from $A_t$ to $(\mathsf{Id}^{{\dot{+}} \tilde{a}_t})^{{\dot{+}}}$ by Lemma~\ref{lem:proper_limit}. Since $A_t$ is computably isomorphic to $R_t$ in a uniform way, we can do the same for $R_t$. Letting $S_t(m) \iff \exists q\ R_t(\langle q,m \rangle)$, we then uniformly produce a subset-reduction from $S_t$ to $(\mathsf{Id}^{{\dot{+}} \tilde{a}_t})^{{\dot{+}} {\dot{+}} {\dot{+}}}$ by Lemma~\ref{lem:proper_union}. Recalling that $\mathsf{Id}^{{\dot{+}} a} \times \mathsf{Id} \leq \mathsf{Id}^{{\dot{+}} a}$ for all $a \in \mathcal{O}$ by Corollary~\ref{cor:absorbs_id}, we can then apply Lemma~\ref{lem:proper_intersection} to effectively obtain a subset-reduction $(h_t,e_t)$ from $B_t$ to $(\mathsf{Id}^{{\dot{+}} \tilde{a}_t})^{{\dot{+}} {\dot{+}} {\dot{+}}}$. Letting $a_t= \tilde{a}_t +_{\mathcal{O}} 2^2$, this completes the induction step for $t$. \end{proof}
We are now ready to conclude the proof of Theorem~\ref{thm:fixed_points}. Since the hyperarithmetic sets have no hyperarithmetic upper bound in terms of $m$-reducibility, this gives the main theorem of the section, Theorem~\ref{thm:proper}, as an immediate corollary.
\begin{proof}[Proof of Theorem~\ref{thm:fixed_points}]
Suppose $E^{{\dot{+}}} \leq E$. By Proposition~\ref{prop:monotone}(b) we can assume that $E$ has infinitely many classes. Thus by Proposition~\ref{prop:double_plus} we have $\mathsf{Id} \leq E^{{\dot{+}}{\dot{+}}} \leq E$. Hence by Proposition~\ref{prop:closed} we have $\mathsf{Id}^{{\dot{+}} a} \leq E^{{\dot{+}} a} \leq E$ for all $a \in \mathcal{O}$. But now by Theorem~\ref{thm:key_proper}, every hyperarithmetic set is $m$-reducible to $\mathsf{Id}^{{\dot{+}} a}$ for some $a \in \mathcal{O}$, and hence $m$-reducible to $E$. \end{proof}
The proof of Theorem~\ref{thm:key_proper} does not give optimal bounds on the number of iterates of the jump required. With a bit more care, we can show that every $\Pi^0_{\alpha}$ set is reducible to $\mathsf{Id}^{{\dot{+}} a}$ for some $a \in \mathcal{O}$ with $|a|=\alpha$.
We believe that the optimal bound should be that every $\Pi^0_{2 \cdot \alpha}$ set is reducible to $\mathsf{Id}^{{\dot{+}} a}$ for some $a \in \mathcal{O}$ with $|a|=\alpha$.
Lemma~\ref{lem:proper_base} and Lemma~\ref{lem:proper_intersection} show that $\left(=^{ce}\right)^{{\dot{+}}}$ (and hence $\mathsf{Id}^{{\dot{+}} {\dot{+}}}$) is $\Pi^0_4$-complete, and we can show by an \emph{ad hoc} argument that $\left(=^{ce}\right)^{{\dot{+}}{\dot{+}}}$ is $\Pi^0_6$-complete. The difficulty is that our induction technique requires two iterates of the jump at each step in order to reverse the direction of set containment twice. We would prefer to use $\subseteq_E$ rather than $\supseteq_E$ throughout, but we do not see how to effectively enumerate c.e.\ subsets of a given c.e.\ set up to $E^{{\dot{+}}}$-equivalence, whereas we can enumerate c.e.\ supersets. The natural attempt to do this fails as shown in the following example.
\begin{proposition} \label{prop:counterexample} There are $E$ and $e_0$ so that $\{[e]_{E^{{\dot{+}}}} : e \subseteq_{E} e_0\} \neq \{[W_i \cap e_0]_{E^{{\dot{+}}}} : i \in {\mathbb N}\}$, where $W_i \cap e_0$ denotes an index for an enumeration of $\ran \phi_{e_0} \cap W_i$. \end{proposition}
\begin{proof} Let $E$ be $=^{ce}$, and let $A \subset B$ be c.e.\ sets with $B-A$ not c.e. Let $e_0$ be such that \[ \ran \phi_{\phi_{e_0}(j)} = \begin{cases} \{k\} & \text{if $j=2k+1$} \\ \{k,k+1\} & \text{if $j=2k+2$} \\ \emptyset & \text{if $j=0$} \end{cases} \] and let $e$ be such that \[ \ran \phi_{\phi_{e}(k)} = \begin{cases} \{k\} & \text{if $k \in B-A$} \\ \{k,k+1\} & \text{if $k \in A$} \\ \emptyset & \text{if $k \notin B$} \end{cases} .\] Then $e \subseteq_E e_0$ but there is no $i$ with $e \mathrel{E^{{\dot{+}}}} e_0 \cap W_i$. For if there were, we would have $k \in B-A$ iff $\exists x (x \in W_i \wedge x = \phi_{e_0}(1+2k))$ so that $B-A$ would be c.e. \end{proof}
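For a concrete choice of the sets used in this proof, one may take $B={\mathbb N}$ and $A$ any noncomputable c.e.\ set, so that $B-A=A^c$ is not c.e.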
We have shown that the computable FS-jump of a hyperarithmetic equivalence relation is always strictly above the relation, so there are no hyperarithmetic fixed points up to bireducibility. If we consider non-hyperarithmetic equivalence relations we can find fixed points of the jump.
\begin{definition} Let $\cong_{\mathcal T}$ be the isomorphism relation on computable trees. \end{definition}
Here we can use any reasonable coding of computable trees by natural numbers. Then $\cong_{\mathcal{T}}$ is a $\Sigma^1_1$ equivalence relation which is not hyperarithmetic. In \cite[Theorem 2]{fokina-friedman-etal} it was shown that $\cong_{\mathcal{T}}$ is $\Sigma^1_1$ complete for computable reducibility, that is, $\cong_{\mathcal{T}}$ is $\Sigma^1_1$ and for every $\Sigma^1_1$ equivalence relation $E$, $E\leq \cong_{\mathcal T}$. We can see that $\cong_{\mathcal T}$ is a jump fixed point, i.e., $\cong_{\mathcal T}^{{\dot{+}}}$ is computably bireducible with $\cong_{\mathcal T}$. More generally:
\begin{proposition} Any $\Sigma^1_1$ or $\Pi^1_1$ complete equivalence relation $E$ is a jump fixed point, i.e., $E^{{\dot{+}}}$ is computably bireducible with $E$. \end{proposition}
\begin{proof}
It suffices to show that $E^{{\dot{+}}}$ is $\Sigma^1_1$ (resp.\ $\Pi^1_1$) for any $\Sigma^1_1$ (resp.\ $\Pi^1_1$) equivalence relation $E$. This follows immediately since the defining condition for $E^{{\dot{+}}}$ is obtained from instances of $E$ and arithmetic conditions using only number quantifiers, and both $\Sigma^1_1$ and $\Pi^1_1$ are closed under these operations. \end{proof}
\begin{corollary} $\cong_{\mathcal T}$ is a jump fixed point. \end{corollary}
We note that although every hyperarithmetic set is many-one reducible to $\mathsf{Id}^{{\dot{+}} a}$ for some $a \in \mathcal{O}$, we do not know whether every hyperarithmetic equivalence relation $E$ satisfies $E \leq \mathsf{Id}^{{\dot{+}} a}$ for some $a \in \mathcal{O}$.
\section{Ceers and the jump}
Recall from the introduction that $E$ is called a \emph{ceer} if it is a computably enumerable equivalence relation. In this section, we study the relationship between the computable FS-jump and the ceers.
We begin with the following upper bound on the complexity of the computable FS-jump of a ceer. In the statement, recall that if $E$ is an equivalence relation and $W\subset{\mathbb N}$, then $W$ is said to be \emph{$E$-invariant} if it is a union of $E$-equivalence classes.
\begin{proposition}
\label{prop:upperbound}
If $E$ is a ceer, then $E^{{\dot{+}}}\leq\mathord{=}^{ce}$. Moreover, we can find a reduction whose range is contained in the set $\{e\in{\mathbb N} : W_e\text{ is $E$-invariant}\}$. \end{proposition}
\begin{proof}
We define a computable function $f$ such that $W_{f(e)}=[\ran\phi_e]_E$. To see that there is such a computable function $f$, one can let $f(e)$ be a program which, on input $n$, searches through all triples $(a,b,c)$ such that $a\in\ran\phi_e$ and $(b,c)\in E$, and halts if and when it finds a triple of the form $(a,a,n)$. Since it is clear that $e\mathrel{E^{{\dot{+}}}}e'$ if and only if $[\ran\phi_e]_E=[\ran\phi_{e'}]_E$, we have that $f$ is a computable reduction from $E^{{\dot{+}}}$ to $=^{ce}$.
It is immediate from the construction that the range of $f$ is contained in $\{e\in{\mathbb N} : W_e\text{ is $E$-invariant}\}$. \end{proof}
The next result gives a lower bound on the complexity of the computable FS-jump of a ceer.
\begin{theorem}
\label{thm:lowerbound}
If $E$ is a ceer with infinitely many equivalence classes, then $\mathsf{Id}<E^{{\dot{+}}}$. \end{theorem}
\begin{proof}
We first show that $\mathsf{Id}\leq E^{{\dot{+}}}$. To do so, we first define an auxiliary set of pairs $A$ recursively as follows: Let $(n,j)\in A$ if and only if for every $i<j$ there exist $m<n$ and $i'$ with $(m,i')\in A$ such that $i\mathrel{E}i'$. It is immediate from the definition of $A$, the fact that $E$ is c.e., and the recursion theorem that $A$ is a c.e.\ set of pairs.
We observe that each column $A^{[n]}=\{j:(n,j)\in A\}$ of $A$ is an initial interval of ${\mathbb N}$. It is immediate from the definition that the first column $A^{[0]}$ is the singleton $\{0\}$. Next, since $E$ has infinitely many classes, we have that each $A^{[n]}$ is bounded. Moreover $A^{[n]}$ is precisely the interval $[0,j]$ where $j$ is the least value that is $E$-inequivalent to every element of $A^{[m]}$ for all $m<n$.
We now define $f$ to be any computable function such that for all $n$, the range of $\phi_{f(n)}$ is precisely $A^{[n]}$. Then as we have seen, $m<n$ implies there exists an element $j$ in the range of $\phi_{f(n)}$ such that $j$ is $E$-inequivalent to everything in the range of $\phi_{f(m)}$. In particular, $f$ is a computable reduction from $\mathsf{Id}$ to $E^{{\dot{+}}}$.
To establish strictness, assume to the contrary that $E^{{\dot{+}}}\leq\mathsf{Id}$. Then since $E\leq E^{{\dot{+}}}$, by Theorem~\ref{thm:proper} we have $E<\mathsf{Id}$. On the other hand, $E\leq\mathsf{Id}$ implies that $E$ is computable, and a computable equivalence relation with infinitely many classes is computably bireducible with $\mathsf{Id}$, contradicting the strictness. \end{proof}
In order to put the previous result in context, we pause our investigation of ceers briefly to consider the question of which $E$ satisfy $\mathsf{Id}\leq E^{{\dot{+}}}$. We first note that it follows from Proposition~\ref{prop:double_plus} that if $E$ is itself the jump of a relation with infinitely many classes, then $\mathsf{Id}\leq E^{{\dot{+}}}$. We now show on the other hand that there exist equivalence relations $E$ such that $\mathsf{Id}\not\leq E^{{\dot{+}}}$. To describe such an equivalence relation, we recall the following notation.
\begin{definition}
If $A\subset{\mathbb N}$ then the equivalence relation $E_A$ is defined by
\[m\mathrel{E_A}n\iff m=n\text{ or }m,n\in A\text{.}
\] \end{definition}
Thus the equivalence classes of $E_A$ are $A$ itself, together with the singletons $\{i\}$ for $i\notin A$. Note that $E_A\leq E_B$ if and only if $A$ is $1$-reducible to $B$ (see for instance \cite[Proposition~2.8]{coskey-hamkins-miller}).
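For example, $E_\emptyset=\mathsf{Id}$, while for the halting set $K$ the relation $E_K$ is a noncomputable ceer: fixing any $n_0\in K$, for $m\neq n_0$ we have
\[
m\in K\iff m\mathrel{E_K}n_0,
\]
so a decision procedure for $E_K$ would yield one for $K$.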
\begin{theorem} \label{thm:verydarkarithmetic}
There exists an arithmetic coinfinite set $A$ such that $\mathsf{Id}\not\leq E_A^{{\dot{+}}}$. \end{theorem}
\begin{proof}
Let $P$ be the Mathias forcing poset, that is, $P$ consists of pairs $(s,B)$ where $s\subset{\mathbb N}$ is finite, $B\subset{\mathbb N}$ is infinite, and every element of $s$ is less than every element of $B$. The ordering on $P$ is defined by $(s,B)\leq(t,C)$ if $s\supset t$, $B\subset C$, and $s- t\subset C$.
We first show that if $A^c$ is sufficiently Mathias generic, then $A$ satisfies $\mathsf{Id}\not\leq E_A^{{\dot{+}}}$. In order to do so, let $f$ be any total function so that the sets $\ran\phi_{f(i)}$ are pairwise distinct. Define:
\begin{align*}
D_f = \{(s,B)\in P:\,&(\exists i\neq j)\,[ (s\cup B)\cap(\ran\phi_{f(i)}\triangle\ran\phi_{f(j)})=\emptyset \wedge \\
&\ran\phi_{f(i)} \cap (s \cup B)^c \neq \emptyset \wedge \ran\phi_{f(j)} \cap (s \cup B)^c \neq \emptyset ]\}.
\end{align*}
We claim that $D_f$ is dense in $P$. To see this, let $(s,B)$ be given. Repeatedly applying the pigeonhole principle, we can find infinitely many indices $i_n$ such that the sets $\ran\phi_{f(i_n)}$ agree on $s$. Since the sets $\ran\phi_{f(i)}$ are pairwise distinct, at most one of these can be a subset of $s$, so there must be three among them, $i_0,i_1,i_2$, such that none of $\ran\phi_{f(i_0)},\ran\phi_{f(i_1)},\ran\phi_{f(i_2)}$ is a subset of $s$. Observe that
\begin{align*}
{\mathbb N} = &(\ran\phi_{f(i_0)}\triangle\ran\phi_{f(i_1)})^c
\cup(\ran\phi_{f(i_0)}\triangle\ran\phi_{f(i_2)})^c \cup \\
&(\ran\phi_{f(i_1)}\triangle\ran\phi_{f(i_2)})^c.
\end{align*}
In particular we may suppose without loss of generality that $i_0$ and $i_1$ are such that the set $B'=B\cap(\ran\phi_{f(i_0)}\triangle\ran\phi_{f(i_1)})^c$ is infinite. Then $\ran\phi_{f(i_0)}$ and $\ran\phi_{f(i_1)}$ agree on both $s$ and $B'$. Since neither $\ran\phi_{f(i_0)}$ nor $\ran\phi_{f(i_1)}$ is a subset of $s$, we can remove finitely many elements from $B'$ to ensure that $\ran\phi_{f(i_0)} \cap (s \cup B')^c \neq \emptyset$ and $\ran\phi_{f(i_1)} \cap (s \cup B')^c \neq \emptyset$, and so $(s,B')\in D_f$, completing the claim.
Now let $G\subset P$ be a filter satisfying the following conditions:
\begin{enumerate}
\item[(a)] $G$ meets $\{(s,B)\in P:|s|\geq m\}$ for all $m\in{\mathbb N}$.
\item[(b)] $G$ meets $D_f$ for all computable functions $f$ so that the sets $\ran\phi_{f(i)}$ are pairwise distinct.
\end{enumerate}
This is possible since the sets in condition~(a) are clearly dense, and we have shown that the $D_f$ are dense. We define the set $A$ by declaring that $A^c=\bigcup\{s:(s,B)\in G\}$. Condition~(a) implies that $A^c$ is infinite, and also that $A^c=\bigcap\{s \cup B : (s,B) \in G\}$; we wish to show that $\mathsf{Id}\not\leq E_A^{{\dot{+}}}$. For this we will show that if $f$ is a given computable function, then $f$ is not a reduction from $\mathsf{Id}$ to $E_A^{{\dot{+}}}$.
Assume, toward a contradiction, that $f$ is a reduction from $\mathsf{Id}$ to $E_A^{{\dot{+}}}$. Then the sets $\ran\phi_{f(i)}$ are pairwise distinct, so there is $(s,B) \in G \cap D_f$. Thus there exist $i\neq j$ such that both $\ran\phi_{f(i)}$ and $\ran\phi_{f(j)}$ intersect $(s \cup B)^c$, and $(s\cup B)\cap(\ran\phi_{f(i)}\triangle\ran\phi_{f(j)})=\emptyset$. Hence both $\ran\phi_{f(i)}$ and $\ran\phi_{f(j)}$ intersect $A$, and $A^c\cap(\ran\phi_{f(i)}\triangle\ran\phi_{f(j)})=\emptyset$. This means that $f(i)\mathrel{E_A^{{\dot{+}}}}f(j)$, so $f$ is not a reduction from $\mathsf{Id}$ to $E_A^{{\dot{+}}}$, as desired.
Finally, we can ensure $A$ is arithmetic by enumerating the dense sets described above, inductively defining a descending sequence $(s_n,B_n)$ meeting the dense sets, and letting $A^c=\bigcup_n s_n = \bigcap_n (s_n \cup B_n)$. More precisely, note that for any condition $(s,B)$, we can find an extension meeting $D_f$ for a suitable $f$ by intersecting $B$ with a $\Delta^0_2$ set, and the set of $i$ so that $f=\phi_i$ is suitable is $\Pi^0_3$, so the construction of this sequence may be done computably in $0^{(3)}$, from which we can produce an $A$ which is $\Delta^0_4$. \end{proof}
This result leaves open the question of what is the least complexity of an equivalence relation $E$ with infinitely many classes such that $\mathsf{Id}\not\leq E^{{\dot{+}}}$.
Returning to ceers, in view of the bounds from Proposition~\ref{prop:upperbound} and Theorem~\ref{thm:lowerbound}, it is natural to ask whether there is a ceer $E$ such that $E^{{\dot{+}}}$ lies properly between $\mathsf{Id}$ and $=^{ce}$. We first see that there is a large collection of ceers whose jumps are bireducible with $=^{ce}$. We recall the following terminology from \cite{andrews-sorbi-joins}:
\begin{definition} A ceer $E$ is said to be \emph{light} if $\mathsf{Id}\leq E$. $E$ is said to be \emph{dark} if $E$ has infinitely many classes but $\mathsf{Id}\not\leq E$. \end{definition}
Thus every ceer satisfies exactly one of finite, light, or dark.
\begin{proposition}
\label{prop:light-is-high}
If $E$ is a light ceer then $E^{{\dot{+}}}$ is computably bireducible with $=^{ce}$. \end{proposition}
\begin{proof}
This is an immediate consequence of Propositions~\ref{prop:monotone}(c), \ref{prop:idplus}, and~\ref{prop:upperbound}. \end{proof}
We will see that there are also dark ceers which satisfy this conclusion. We introduce the following terminology.
\begin{definition}
We say a ceer $E$ is \emph{high for the computable FS-jump} if $E^{{\dot{+}}}$ is computably bireducible with $=^{ce}$. \end{definition}
This generalizes the notion of lightness for ceers, but also implies that the computable FS-jump is as complicated as possible. As there is no least ceer with infinitely many classes, there does not seem to be a natural notion of low for the computable FS-jump.
In order to describe a dark ceer which is high for the computable FS-jump, recall that a c.e.\ set $A\subset{\mathbb N}$ is called \emph{simple} if there is no infinite c.e.\ set contained in $A^c$. Furthermore $A$ is called \emph{hyperhypersimple} if for all computable functions $f$ such that $\{W_{f(n)}:n\in{\mathbb N}\}$ is a pairwise disjoint family of finite sets, there exists $n\in{\mathbb N}$ such that $W_{f(n)}\subset A$. We refer the reader to \cite[Chapter~5]{soare2} for more about these properties, including examples.
\begin{theorem}
\label{thm:nonhhs-is-high}
Let $A\subset{\mathbb N}$ be a set which is simple and not hyperhypersimple. Then $E_A$ is a dark ceer and $E_A$ is high for the computable FS-jump. \end{theorem}
\begin{proof}
It follows from \cite[Proposition~4.5]{gao-gerdes} together with the assumption that $A$ is simple that $E_A$ is dark.
To see that $E_A^{{\dot{+}}}$ is computably bireducible with $=^{ce}$, first it follows from Proposition~\ref{prop:upperbound} that $E_A^{{\dot{+}}}\leq\mathord{=}^{ce}$. For the reduction in the reverse direction, since $A$ is not hyperhypersimple, there exists a computable function $f$ such that $\{W_{f(n)}:n\in{\mathbb N}\}$ is a pairwise disjoint family of finite sets and for all $n\in{\mathbb N}$ we have $W_{f(n)}\cap A^c\neq\emptyset$. Now given an index $e$ we compute an index $g(e)$ such that $\phi_{g(e)}$ is an enumeration of the set $\bigcup\{W_{f(n)}:n\in W_e\}$. Then since the $W_{f(n)}$ are pairwise disjoint and meet $A^c$, we have $W_e=W_{e'}$ if and only if $A^c\cap\ran\phi_{g(e)}$ and $A^c\cap\ran\phi_{g(e')}$ are equal as subsets of $A^c$. It follows that $e\mathrel{=^{ce}}e'$ if and only if $g(e)\mathrel{E_A^{{\dot{+}}}}g(e')$, as desired. \end{proof}
On the other hand, there also exist dark ceers $E$ such that $E$ is not high for the computable FS-jump. In order to state the results, we recall from \cite[Chapter~X]{soare} that a c.e.\ subset $A\subset{\mathbb N}$ is said to be \emph{maximal} if $A^c$ is infinite and for all c.e.\ sets $W$ either $W- A$ or $W^c- A$ is finite. We further note that if $A$ is maximal then it is hyperhypersimple.
\begin{theorem}
\label{thm:maximal}
Let $A$ be a maximal set. If $B$ is a c.e. set with $B\subsetneq A$, then $E_A^{{\dot{+}}} < E_B^{{\dot{+}}}$. In particular, $E_A$ is not high for the computable FS-jump. \end{theorem}
The proof begins with several preliminary results, which may be of independent interest.
\begin{lemma}
\label{lem:reverse}
If $A,B$ are c.e.\ sets and $B\subset A$, then $E_A^{{\dot{+}}} \leq E_B^{{\dot{+}}}$. \end{lemma}
\begin{proof}
If $B$ is non-hyperhypersimple, then the reduction $\mathord{=}^{ce}\leq E_B^{{\dot{+}}}$ constructed in the proof of Theorem~\ref{thm:nonhhs-is-high} applies, since that construction uses only that $B$ is not hyperhypersimple; combined with Proposition~\ref{prop:upperbound} this gives $E_A^{{\dot{+}}}\leq\mathord{=}^{ce}\leq E_B^{{\dot{+}}}$. If $B$ is hyperhypersimple, then by \cite[X.2.12]{soare} there exists a computable set $C$ such that $B\cup C=A$. Let $b\in B$ be arbitrary, and define
\[f(n)=\begin{cases}b&n\in C\\n&n\notin C\end{cases}
\]
It is easy to see that $f$ is a computable reduction from $E_A$ to $E_B$, and hence by Proposition~\ref{prop:monotone}(c) we have $E_A^{{\dot{+}}}\leq E_B^{{\dot{+}}}$ as desired. \end{proof}
In the next lemma we will use the following terminology about a function $f\colon{\mathbb N}\to{\mathbb N}$. We say that $f$ is \emph{$=^{ce}$-invariant} if $W_e=W_{e'}$ implies $W_{f(e)}=W_{f(e')}$, that $f$ is \emph{monotone} if $W_{e'}\subset W_{e}$ implies $W_{f(e')}\subset W_{f(e)}$, and that $f$ is \emph{inner-regular} if \begin{equation}
\label{eq:inner-regular}
W_{f(e)}=\bigcup\left\{W_{f(e')} : W_{e'}\subset W_e\text{ and }W_{e'}\text{ is finite}\right\}\text{.} \end{equation} We are now ready to state the lemma.
\begin{lemma}
\label{lem:monotone}
If $f$ is a computable function, the properties $=^{ce}$-invariant, monotone, and inner-regular are all equivalent. \end{lemma}
\begin{proof}
It is clear that inner-regular implies monotone, and monotone implies $=^{ce}$-invariant. We therefore need only show that $=^{ce}$-invariant implies inner-regular. Assume that $f$ is $=^{ce}$-invariant. Then \cite[Lemma~4.5]{coskey-hamkins-miller} gives that $f$ is monotone, so we have $W_{f(e)} \supset \bigcup\left\{W_{f(e')} : W_{e'}\subset W_e\text{ and }W_{e'}\text{ is finite}\right\}$.
For the subset inclusion of Equation~\eqref{eq:inner-regular}, we assume that $x\in W_{f(e)}$ and aim to show that there exists $e'$ such that $W_{e'}\subset W_e$, $W_{e'}$ is finite, and $x\in W_{f(e')}$.
For any $e$, let $W_{e,s}=\{n : n< s \wedge \phi_{e,s}(n) \downarrow\}$ be the partial enumeration of $W_e$ at stage $s$, so each $W_{e,s}$ is finite.
We can use the Recursion Theorem to find an index $e'$ which satisfies the following:
\[ W_{e',s} = \begin{cases}
W_{e,s} & \text{if $x \notin W_{f(e'),s}$} \\
W_{e,s'} & \text{if $s' \leq s$ is least with $x \in W_{f(e'),s'}$.}
\end{cases}
\]
We must show that $W_{e'}\subset W_e$, $W_{e'}$ is finite, and $x\in W_{f(e')}$. It is clear that $W_{e'}\subset W_e$. To show that $x\in W_{f(e')}$, assume to the contrary that $x\notin W_{f(e')}$. Then $W_{e',s}=W_{e,s}$ for all $s$, that is, we would have $W_{e'}=W_e$. Since $f$ is $=^{ce}$-invariant, we would have $W_{f(e')}=W_{f(e)}$. Our assumption that $x\in W_{f(e)}$ would therefore imply that $x\in W_{f(e')}$ after all.
Now that we know $x\in W_{f(e')}$, we know that $x$ appears in $W_{f(e')}$ at some stage; let $s'$ be least with $x \in W_{f(e'),s'}$. Then $W_{e',s}=W_{e,s'}$ for all $s\geq s'$, while $W_{e',s}=W_{e,s}\subset W_{e,s'}$ for $s<s'$, so $W_{e'}=W_{e,s'}$ is finite, as desired. \end{proof}
We note that the same conclusions remain true if we replace $=^{ce}$-invariance by $\mathsf{Id}^{{\dot{+}}}$-invariance, i.e., $f$ preserves equality of ranges rather than of domains. Thus we can apply this result to reductions among computable jumps.
\begin{corollary}
Let $A$ be a maximal set. If $E^{{\dot{+}}} \leq E_A^{{\dot{+}}}$, then any $E$-invariant c.e.\ set contains either finitely or cofinitely many $E$-classes. In particular, if $E_B^{{\dot{+}}} \leq E_A^{{\dot{+}}}$ then $B$ is maximal. \end{corollary}
\begin{proof}
Let $f$ be a computable reduction from $E^{{\dot{+}}}$ to $E_A^{{\dot{+}}}$. We can assume without loss of generality that for all $e$, $\ran\phi_{f(e)}$ is $E_A$-invariant. Indeed, we may modify $f$ to ensure that if $\phi_{f(e)}$ enumerates any element of $A$ then $\phi_{f(e)}$ enumerates the rest of $A$ too. Hence we can assume that $f$ is $=^{ce}$-invariant. If $W=\ran \phi_e$ is an $E$-invariant c.e.\ set, then $R=\ran \phi_{f(e)}$ is an $E_A$-invariant c.e.\ set, hence either $R - A$ is finite or $R$ is cofinite. If $R$ is cofinite, then $W$ must contain all but finitely many $E$-classes, or else there would be an infinite increasing chain of $E^{{\dot{+}}}$-inequivalent c.e.\ sets containing $W$, which would have to map to an infinite increasing chain of $E_A^{{\dot{+}}}$-inequivalent c.e.\ sets containing the cofinite set $R$, which is impossible. Suppose instead that $R - A$ is finite. Then by inner-regularity and $E_A$-invariance, there must be a finite set $F=\ran \phi_{e_0} \subset W$ so that $R= \ran \phi_{f(e_0)}$. But then $e_0 \mathrel{E^{{\dot{+}}}} e$, so $W$ must contain only finitely many $E$-classes. \end{proof}
We will use the following lemma, well-known in descriptive set theory as a consequence of the effective Reduction Property for the pointclass $\Sigma^0_1$. \begin{lemma} \label{lem:effective-reduction} Let $A_n$ be a uniformly c.e. sequence of c.e. sets. Then there is a uniformly c.e. sequence of c.e. sets $\tilde{A}_n$ so that $\tilde{A}_n \subset A_n$ for each $n$, $\tilde{A}_n \cap \tilde{A}_m = \emptyset$ for $n \neq m$, and $\bigcup_n \tilde{A}_n = \bigcup_n A_n$. \end{lemma}
\begin{proof}
Let $f$ be a computable function with $A_n=W_{f(n)}$ for each $n$. For $i \in \bigcup_n A_n$ there is a least code $\langle s,n\rangle$, with respect to a fixed computable pairing, such that $\phi_{f(n),s}(i)\downarrow$; we place $i$ into $\tilde{A}_n$ for this $n$. Explicitly, $\tilde{A}_n = \{i : \exists s\, (\phi_{f(n),s}(i)\downarrow \wedge (\forall \langle t,m \rangle < \langle s,n \rangle)\ \phi_{f(m),t}(i) \uparrow)\}$, which is uniformly c.e.\ since the inner condition involves only finitely many computable checks. The sets $\tilde{A}_n$ are pairwise disjoint since the least such code is unique, we have $\tilde{A}_n\subset A_n$, and $\bigcup_n \tilde{A}_n = \bigcup_n A_n$ since every $i$ in the union admits a least such code.
\end{proof}
We now give the main ingredient to the proof of Theorem~\ref{thm:maximal}.
\begin{definition}
An equivalence relation $E$ is \emph{self-full} if whenever $f$ is a computable reduction from $E$ to $E$, then the range of $f$ meets every $E$ class. \end{definition}
Letting $\mathsf{Id}_n$ denote the identity equivalence relation on $\{0,\ldots,n-1\}$, $E$ being self-full is equivalent to $E \oplus \mathsf{Id}_1 \not\leq E$, and so self-fullness is preserved under computable bireducibility. In the following, we say $f$ and $h$ are \emph{$E$-equivalent} if $f(n)\mathrel{E}h(n)$ for all $n$. We say that $h$ is \emph{induced by a finite support permutation of the $E_A$-classes} when there is an $E_A$-invariant permutation $\pi$ with finite support so that $\ran\phi_{h(e)} = \{\pi(n) : n \in \ran\phi_e\}$ for all $e$ such that $\ran\phi_e$ is $E_A$-invariant.
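For example, $\mathsf{Id}$ is not self-full: the map $n\mapsto n+1$ is a computable reduction from $\mathsf{Id}$ to itself whose range omits the class $\{0\}$, reflecting the fact that $\mathsf{Id}\oplus\mathsf{Id}_1\leq\mathsf{Id}$.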
\begin{lemma}
\label{lem:full}
If $A$ is maximal then $E_A^{{\dot{+}}}$ is self-full. In fact, if $f$ is a computable reduction from $E_A^{{\dot{+}}}$ to itself, then $f$ is $E_A^{{\dot{+}}}$-equivalent to a function $h$ induced by a finite support permutation of the $E_A$-classes. \end{lemma}
\begin{proof}
Suppose $f$ is a computable reduction from $E_A^{{\dot{+}}}$ to itself. We can assume without loss of generality that for all $e$, $\ran\phi_{f(e)}$ is $E_A$-invariant. Indeed, we may modify $f$ to ensure that if $\phi_{f(e)}$ enumerates any element of $A$ then $\phi_{f(e)}$ enumerates the rest of $A$ too. Having done so, we introduce the following mild abuse of notation: if $R=\ran\phi_e$ then we will write $f(R)$ for $\ran\phi_{f(e)}$. Due to our assumption about $f$, this notation is well-defined. Note that we thus have that $f$ is $=^{ce}$-invariant as well, and thus monotone and inner-regular.
We will exploit the following consequence of the monotonicity of $f$ several times: If $C$ and $D$ are c.e. sets with $C \subset D$ and $D - C$ finite and disjoint from $A$, then $|f(D)- f(C)| \geq |D - C|$. This follows since there is a chain of length $|D - C|+1$ of $E_A^{{\dot{+}}}$-inequivalent sets between $C$ and $D$, which must map to a chain of $E_A^{{\dot{+}}}$-inequivalent sets between $f(C)$ and $f(D)$. Similarly, if $C$ is cofinite with $A \subseteq C$, then $|f(C)^c| \geq |C^c|$. The maximality of $A$ then also implies that if $C - A$ is finite then so is $f(C) - A$.
The heart of the proof will be to show that there is a finite support permutation $\pi$ of ${\mathbb N}$ such that for any c.e.\ set $R$, we have $f(R)=\{\pi(n):n\in R\}$. In particular, this implies that $f$ meets every $E_A^{{\dot{+}}}$ class, as desired. We begin by seeing that the range of $f$ is almost covered by the images of singletons.
\begin{claim}
There is a finite set $C$ such that $f(A)\cup(f({\mathbb N})-\bigcup_n f(\{n\}))\subset f(C)$.
\end{claim}
\begin{claimproof}
First, observe that $\bigcup_n f(\{n\})$ is an infinite c.e.\ set (here we tacitly select indices for $\{n\}$ uniformly), and hence intersects $A$, and thus contains $A$. Moreover, there are infinitely many $n$ so that $f(\{n\})$ intersects $A$; otherwise we could omit such $n$ and have an infinite c.e. set disjoint from $A$. Thus $\{n : f(\{n\}) \cap A \neq \emptyset\}$ is an infinite c.e.\ set, and so intersects $A$. Hence there is $n \in A$ with $f(\{n\}) \cap A \neq \emptyset$, so $A\subset f(A)$. Next, since the sets $f(\{n\})$ are distinct for $n \notin A$, we have $\left( \bigcup_n f(\{n\}) \right) - A$ infinite, so the maximality of $A$ implies $\bigcup_n f(\{n\})$ is cofinite. Hence by the inner-regularity of $f$ we can find a finite set $C$ such that $f(A)\cup(f({\mathbb N})-\bigcup_n f(\{n\}))\subset f(C)$.
\end{claimproof}
We next see that we will be able to select distinct elements from the images of singletons.
\begin{claim} For $n\notin A\cup C$, we have $f(\{n\})-(f(C)\cup\bigcup_{m\neq n}f(\{m\}))\neq\emptyset$.
\end{claim}
\begin{claimproof}
Let $n\notin A\cup C$. Since $f$ is a reduction and is monotone, we can find $x\in f({\mathbb N})- f({\mathbb N}-\{n\})$. Using the definition of $C$, the fact that $n\notin C$, and monotonicity, we have $f({\mathbb N})-\bigcup_mf(\{m\})\subset f(C)\subset f({\mathbb N}-\{n\})$. In particular, $x\notin f(C)$ and therefore $x\in\bigcup_mf(\{m\})$. Again by monotonicity, $x\notin\bigcup_{m\neq n}f(\{m\})$, so we must have $x\in f(\{n\})$, completing the claim.
\end{claimproof}
We now construct a first approximation to the desired permutation $\pi$.
\begin{claim} There is a finite support permutation $\sigma$ such that for $n\notin A\cup C$ we have $\sigma(n)\in f(\{n\})- A$.
\end{claim}
\begin{claimproof}
From the previous claim, Lemma~\ref{lem:effective-reduction} gives a uniformly c.e.\ sequence $B_n$ of pairwise disjoint sets such that $B_n\subset f(\{n\})$ and $\bigcup_n B_n = \bigcup_n f(\{n\})$, so that for $n\notin A\cup C$ we have $B_n-(f(C)\cup\bigcup_{m\neq n}f(\{m\}))\neq\emptyset$; in particular $B_n - f(C) \neq \emptyset$. We may shrink $B_n$ so that $B_n$ is disjoint from the finite set $f(C)- A$ for all $n$; we may further shrink $B_n$ uniformly to a set $\tilde{B}_n$ so that $\tilde{B}_n- A$ is a singleton for all $n\notin A\cup C$. Note that we do not claim or require that this singleton is not in $\bigcup_{m\neq n}f(\{m\})$, but distinct $n$'s not in $A \cup C$ will produce distinct singletons.
Since $C - A$ is finite, we have that $f((C- A)^c)$ is cofinite, and monotonicity of $f$ implies that $|f((C- A)^c)^c|\geq|C- A|$ as discussed above. We may then let $p$ be any injection from $C- A\to f((C- A)^c)^c$. We now define:
\[G_n=\begin{cases}
A & n\in A \\
A\cup\{p(n)\} & n\in C- A \\
A\cup \tilde{B}_n & n\notin A\cup C
\end{cases}.
\]
Observe that $G_n$ is a uniformly c.e.\ sequence, since we may first check if $n \in C - A$; if not, we enumerate $\tilde{B}_n$ into $G_n$ until we see $n$ enumerated in $A$ (if ever), at which point we enumerate $A$ (which will then contain $\tilde{B}_n$) into $G_n$. We define $\sigma$ as follows:
\[\sigma(n)=\begin{cases}
n&n\in A\\
\text{the unique element of $G_n- A$} & n\notin A
\end{cases}
\]
This completes the definition of $\sigma$. Note that $\sigma(n) \in f(\{n\})$ for $n \notin (C - A)$, and $\sigma(n) \notin A$ for $n \notin A$, as required. We do not claim \emph{a priori} that $\sigma$ is computable (although this will follow later), but we can use the sequence $G_n$ to obtain effectiveness. Define the function $g(R)=\bigcup_{n\in R}G_n$, which is computable in the indices.
We check that $\sigma$ is a permutation with finite support. It is immediate from the construction that $\sigma$ is injective. To show $\sigma$ is surjective, assume $k$ is not in the range of $\sigma$. Consider the sequence defined by $R_0=A\cup\{k\}$ and $R_{n+1}=g(R_n)$. Since $k\notin A$ we have that $R_n- A$ is a singleton for all $n$. Moreover, the singletons are distinct, since $\sigma$ is injective and none of the singletons can equal $k$ for $n>0$. Applying Lemma~\ref{lem:effective-reduction} to the sequence $R_n$, we obtain a uniformly c.e.\ sequence of nonempty pairwise disjoint sets, all meeting $A^c$. This contradicts that $A$ is hyperhypersimple (see \cite[Exercise~X.2.16]{soare}). To see that $\sigma$ has finite support, first note that $\sigma$ cannot have an infinite orbit. Otherwise, we could similarly produce a uniformly c.e. sequence (using the function $g$) which contradicts that $A$ is hyperhypersimple. If $\sigma$ had infinitely many nontrivial orbits, let $R=\{n:(\exists k\geq n)\;n\in G_k\}$. Then $A\subset R$, and for $n\notin A$ we have $n\in R$ when $n$ is the least element of its orbit and $n\notin R$ when $n$ is the greatest element of a nontrivial orbit. Thus $R- A$ is infinite and co-infinite, again contradicting that $A$ is maximal.
\end{claimproof}
We are now ready to construct $\pi$ as follows. Let $\tilde C=(C- A)\cup\supp(\sigma)$. If $R$ is disjoint from $C- A$ then $g(R)\subset f(R)$, therefore if $R$ is disjoint from $\tilde C$ we have $R\subset f(R)$. By monotonicity of $f$, if $R$ is disjoint from $\tilde C$ and cofinite, then by the observation above we have $|f(R)^c| \geq |R^c|$, so $R=f(R)$. In particular $f(\tilde{C}^c)=\tilde{C}^c$. For any $k\in\tilde{C}$, since $k \notin A$ we have that $\tilde C^c\cup\{k\}$ is $E_A^{{\dot{+}}}$-inequivalent to $\tilde{C}^c$; therefore we must have that $f$ sends $\tilde C^c\cup\{k\}$ to $\tilde C^c\cup\{\pi(k)\}$ for some $\pi(k)\in\tilde C$ since the complement of $f(\tilde C^c\cup\{k\})$ must be at least as large as the complement of $\tilde C^c\cup\{k\}$, but must be smaller than $\tilde{C}$. This defines $\pi$ on the finite set $\tilde{C}$; as $\pi$ is injective it is a permutation of $\tilde{C}$. Additionally define $\pi$ to be the identity on $\tilde C^c$. This completes the definition of $\pi$.
We have that $\pi$ is an $E_A$-invariant permutation with finite support; it remains to verify that $\pi$ induces the desired function. Let $h$ be a computable function such that $h(R)=\{\pi(n):n\in R\}$.
\begin{claim} For all $E_A$-invariant c.e. sets $R$ we have $f(R)=h(R)$.
\end{claim}
\begin{claimproof}
We first establish this for sets of the form ${\mathbb N} - \{n\}$ for $n \notin A$.
Note that if $F \subset \tilde{C}$ then we have $f(\tilde{C}^c \cup F) = \tilde{C}^c \cup \{\pi(n):n \in F\}$ from monotonicity and the observation earlier; in particular we see that $f({\mathbb N})={\mathbb N}$ and $f(\tilde{C}^c)=\tilde{C}^c$. From this, we see that if $R$ is any cofinite set containing $A$, then $|f(R)^c|=|R^c|$. For any given $n \notin A$, we claim that $f({\mathbb N} - \{n\})={\mathbb N} - \{\pi(n)\}$. We know this already for $n \in \tilde{C}$. If $n \in \tilde{C}^c$ then $f({\mathbb N} - \{n\})={\mathbb N} - \{\pi(k)\}$ for some $k$; this $k$ can not be in $\tilde{C}$ since $f({\mathbb N} - \{k\})={\mathbb N} - \{\pi(k)\}$ and ${\mathbb N} - \{n\}$ and ${\mathbb N} - \{k\}$ are $E_A^{{\dot{+}}}$-inequivalent. But since $\tilde{C}^c- \{n\} \subset f({\mathbb N} - \{n\})$ we must have $\pi(k)=n=\pi(n)$ as claimed.
Now let $R$ be any infinite $E_A$-invariant c.e.\ set, so $R$ contains $A$. Then $R$ is the intersection of the sets ${\mathbb N} - \{n\}$ for $n \notin R$, so $f(R) \subset h(R)$ for all such $R$ by monotonicity. Suppose there were an infinite $E_A$-invariant c.e.\ set $R$ and some $k \in h(R) - f(R)$. Then $k=\pi(n)$ for some $n \in R - A$, so $k \notin f(A \cup \{n\}) \subset h(A \cup \{n\})=A \cup \{k\}$ and thus $f(A \cup \{n\})=A$, contradicting $A \subset f(A)$.
Finally, we consider finite $R$. If $n \notin A$, then $f(\{n\}) \subset f(A \cup \{n\}) = h(A \cup \{n\}) = A \cup \{\pi(n)\}$. We can not have $f(\{n\}) =A$, so $f(\{n\})=\{\pi(n)\}$. Let $R$ be any finite $E_A$-invariant c.e.\ set, so $R$ is disjoint from $A$. Then $h(R) \subset f(R)$. We also have that $f(R) \subset f(A \cup R)=h(A \cup R)=A \cup h(R)$, so we must have that $f(R) =h(R)$.
\end{claimproof}
Hence, $f$ is $E_A^{{\dot{+}}}$-equivalent to $h$, completing the proof of the lemma. \end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:maximal}]
Let $A$ be maximal and $B\subsetneq A$. Fix any $a\in A- B$. We observe that $E_{A-\{a\}}$ is computably bireducible with $E_A\oplus\mathsf{Id}_1$. Therefore by Proposition~\ref{prop:monotone}(c) we have that $E_{A-\{a\}}^{{\dot{+}}}$ is computably bireducible with $E_A^{{\dot{+}}}\times\mathsf{Id}_2$.
Now assume towards a contradiction that $E_B^{{\dot{+}}}\leq E_A^{{\dot{+}}}$. Then by Lemma~\ref{lem:reverse} we have $E_{A-\{a\}}^{{\dot{+}}}\leq E_A^{{\dot{+}}}$ and hence by the previous paragraph we have $E_A^{{\dot{+}}}\times\mathsf{Id}_2\leq E_A^{{\dot{+}}}$. But if $f$ is such a reduction, then by Lemma~\ref{lem:full} the restriction of $f$ to either copy of $E_A^{{\dot{+}}}$ has range meeting every $E_A^{{\dot{+}}}$ class. But for a reduction $f$ we cannot have this property true of both copies of $E_A^{{\dot{+}}}$, so we have reached a contradiction. \end{proof}
We record here several immediate consequences of Theorem~\ref{thm:maximal} and its proof.
\begin{corollary}
Let $A,B$ be maximal sets.
\begin{itemize}
\item If $a\in A$ then $E_A^{{\dot{+}}} < E_{A-\{a\}}^{{\dot{+}}}$, and if $b\notin A$ then $E_{A\cup\{b\}}^{{\dot{+}}} < E_{A}^{{\dot{+}}}$.
\item If $|A\triangle B|<\infty$, then $E_A^{{\dot{+}}} \leq E_B^{{\dot{+}}}$ iff $|B- A|\leq|A- B|$.
\item If a c.e. set $C$ is contained in a maximal set, then it is contained in a maximal set $D$ such that $E_D^{{\dot{+}}} < E_C^{{\dot{+}}}$.
\end{itemize} \end{corollary}
We conclude with a small refinement of the second statement of Theorem~\ref{thm:maximal}. Recall that a c.e.\ set $A$ is said to be \emph{quasi-maximal} if it is the intersection of finitely many maximal sets. We refer the reader to \cite[X.3.10]{soare} for more on this notion. In particular, every quasi-maximal set is simple (see \cite[X.3.10(b)]{soare}).
\begin{theorem}
\label{thm:quasimaximal}
If $A\subset{\mathbb N}$ is quasi-maximal then $E_A$ is not high for the computable FS-jump. \end{theorem}
\begin{proof}
Suppose towards a contradiction that $\mathord{=}^{ce}\leq E_A^{{\dot{+}}}$. By Proposition~\ref{prop:upperbound}, $E_A^{{\dot{+}}}$ is reducible to the restriction of $=^{ce}$ to the $E_A$-invariant sets, so by composing reductions there exists a computable reduction $f$ from $=^{ce}$ to $=^{ce}$ such that for all $e$, $W_{f(e)}$ is $E_A$-invariant. Since $A$ is simple, it follows that for all $e$ we have $A\subset W_{f(e)}$ iff $W_{f(e)}$ is infinite and $A\cap W_{f(e)}=\emptyset$ iff $W_{f(e)}$ is finite.
We claim that we may find such an $f$ so that for all $e$ we have $A\subset W_{f(e)}$. We first show that there exists $e_0$ such that $W_{e_0}$ is finite and $A\subset W_{f(e_0)}$. Let $e$ be any index such that $W_e={\mathbb N}$. By Lemma~\ref{lem:monotone}, $f$ is inner-regular. Since $f$ is a reduction, it follows from Equation~\ref{eq:inner-regular} that $W_{f(e)}$ is infinite and hence $A\subset W_{f(e)}$. Further examining Equation~\ref{eq:inner-regular}, together with the last sentence of the previous paragraph, we conclude there exists $e_0$ as desired.
Let $g$ be a computable function such that $W_{g(e)}=W_{e_0}\cup\{\max(W_{e_0})+x:x\in W_e\}$. Then replacing $f$ with $f\circ g$ completes the claim.
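On explicit finite sets the padding map $g$ from the previous sentence acts as in the following small Python sketch; the function name is ours and the snippet is purely illustrative.
\begin{verbatim}
def pad(W_e0, W_e):
    """Finite-set model of g: W_{g(e)} = W_{e0} union {max(W_{e0}) + x : x in W_e}."""
    return set(W_e0) | {max(W_e0) + x for x in W_e}

print(sorted(pad({0, 2, 5}, {1, 3})))  # [0, 2, 5, 6, 8]
\end{verbatim}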
It follows from the claim, together with the fact that $f$ is monotone, that the lattice of c.e.\ sets modulo finite may be embedded into the lattice of c.e.\ sets containing $A$ modulo finite. But the former lattice is infinite, and by \cite[X.3.10(a)]{soare} the latter lattice is finite, a contradiction. \end{proof}
\section{Additional remarks and open questions}
We close with some open questions and directions for further investigation.
\begin{question}
For a c.e.\ set $A$, when is $E_A^{{\dot{+}}}$ bireducible with $=^{ce}$? \end{question}
By Theorem \ref{thm:nonhhs-is-high}, if $A$ is not hyperhypersimple then $E_A$ is high for the jump, and by Theorem~\ref{thm:quasimaximal}, if $A$ is quasi-maximal then $E_A^{{\dot{+}}}<\mathord{=}^{ce}$. The question is: if $A$ is hyperhypersimple but not quasi-maximal, is $E_A$ high for the computable FS-jump? One construction of such a set is given in an exercise in \cite[IX.2.28f]{odifreddi}.
We do not know whether the choice of notation for a countable ordinal affects the iterated jump.
\begin{question}
If $a,b \in \mathcal{O}$ with $|a|=|b|$, is $E^{{\dot{+}} a}$ computably bireducible with $E^{{\dot{+}} b}$? \end{question}
Although we saw that every hyperarithmetic set is many-one reducible to some jump of the identity, we do not know if every hyperarithmetic equivalence relation is computably reducible to some iterated jump of the identity.
\begin{question}
If $E$ is hyperarithmetic, is there $a \in \mathcal{O}$ with $E \leq \mathsf{Id}^{{\dot{+}} a}$? \end{question}
For $E$ hyperarithmetic, we have $e \mathrel{E} e'$ iff $[e]_E=[e']_E$, so that $E$ is computably reducible to the relativized version of $=^{ce}$, denoted $=^{ce,E}$, considered in \cite{bard}. This question is then equivalent to asking if these relativized equivalence relations with hyperarithmetic oracles are computably reducible to iterated jumps of the unrelativized $=^{ce}$.
We also note that, unlike the case of the classical Friedman--Stanley jump, the equivalence relation $E_1$ is not an obstruction. Recall that $E_1$ may be defined on $\mathcal{P}({\mathbb N})$ by setting $A \mathrel{E_1} B$ when $A^{[n]}=B^{[n]}$ for all but finitely many $n$, and that $E_1$ is not Borel reducible to any iterated Friedman--Stanley jump of equality.
\begin{proposition}
$E_1^{ce} \leq \left(=^{ce}\right)^{{\dot{+}}}$. \end{proposition}
\begin{proof}
Given $e$, let $g(e)$ be such that $\phi_{g(e)}(\langle f,m\rangle)$ is an index for an enumeration of the set $\bigcup_{n<m} (W_f)^{[n]} \cup \bigcup_{n \geq m} (W_e)^{[n]}$. Then $e \mathrel{E_1^{ce}} e'$ if and only if $g(e) \mathrel{\left(=^{ce}\right)^{{\dot{+}}}} g(e')$. \end{proof}
We can also ask what other fixed points exist besides $\cong_{\mathcal{T}}$. We note that there is no known characterization of fixed points of the classical Friedman--Stanley jump.
\begin{question}
Characterize the fixed points of the computable FS-jump. \end{question}
We used the relations $\subseteq_E$ in establishing properness of the computable FS-jump, but the operation $E \mapsto \subseteq_E$ can be applied to other relations, and it may be of interest to study its effect on partial orders.
\begin{question} What can be said about the mapping $E \mapsto \subseteq_E$ as an operation on computable partial orders? \end{question}
Finally, we can ask when the computable FS-jump of an equivalence relation fails to be above the identity relation. The proof of Theorem~\ref{thm:verydarkarithmetic} shows that there is a $\Delta^0_4$ equivalence relation $E$ with infinitely many classes so that $\mathsf{Id}\not\leq E^{{\dot{+}}}$, and Theorem~\ref{thm:lowerbound} shows that no such $E$ can be $\Sigma^0_1$, but we do not know whether there can be such an $E$ which is, e.g., $\Sigma^0_2$ or $\Sigma^0_3$.
\begin{question} What is the least complexity of an equivalence relation $E$ with infinitely many classes such that $\mathsf{Id}\not\leq E^{{\dot{+}}}$? \end{question}
\end{document}
\begin{document}
\title{Macdonald polynomials and chromatic quasisymmetric functions}
\begin{abstract} We express the integral form Macdonald polynomials as a weighted sum of Shareshian and Wachs' chromatic quasisymmetric functions of certain graphs. Then we use known expansions of these chromatic quasisymmetric functions into Schur and power sum symmetric functions to provide Schur and power sum formulas for the integral form Macdonald polynomials. Since the (integral form) Jack polynomials are a specialization of integral form Macdonald polynomials, we obtain analogous formulas for Jack polynomials as corollaries. \end{abstract}
\tableofcontents
\section{Introduction}
The \emph{Macdonald polynomials} are a basis $\{P_{\mu}(x;q,t) : \text{partitions } \mu \}$ for the ring of symmetric functions which are defined by certain triangularity relations \cite{macdonald}. This basis has the additional property that it reduces to classical bases, such as the Schur functions and the monomial symmetric functions, after certain specializations of $q$ and $t$.
A second basis $\{J_{\mu}(x;q,t): \text{partitions } \mu \}$, known as the \emph{integral form Macdonald polynomials}, can be obtained by multiplying each $P_{\mu}(x;q,t)$ by a particular scalar in $\mathbb{Q}(q,t)$. These polynomials are known as integral forms because, when expanded into a suitably modified Schur function basis, the resulting coefficients are in $\mathbb{N}[q,t]$ \cite{haiman-positivity}. There is currently no combinatorial formula for these coefficients. The only result in this direction is a combinatorial formula for expanding integral form Macdonald polynomials into monomial symmetric functions \cite{hhl-nsym}.
In \cite{shareshian-wachs}, Shareshian and Wachs define the \emph{chromatic quasisymmetric function} $X_G(x;t)$ of a graph $G$, which is a $t$-analogue of Stanley's chromatic symmetric function \cite{stanley-chromatic}. Shareshian and Wachs prove that $X_G(x;t)$ is symmetric for a certain class of graphs $G$; in particular, the graphs in this paper will all have symmetric $X_G(x;t)$. These $X_G(x;t)$ also have known formulas for their Schur \cite{shareshian-wachs} and power sum \cite{ath-power} expansions.
Our main result is an expression for the integral form Macdonald polynomials in terms of chromatic quasisymmetric functions of certain graphs. In particular, given a partition $\mu$ we define two graphs: the \emph{attacking graph} of $\mu$, denoted $G_{\mu}$, and the \emph{augmented attacking graph} of $\mu$, denoted $G_{\mu}^{+}$. Both of these graphs have vertex set $\{1,2,\ldots,|\mu|\}$ and every edge of $G_{\mu}$ is an edge of $G_{\mu}^{+}$, which we write as $G_{\mu} \subseteq G_{\mu}^{+}$. Our main formula, which appears later as Theorem \ref{thm:integral-form-chromatic}, is \begin{align} J_{\mu^{\prime}}(x;q,t) &= t^{-n(\mu^{\prime}) + \binom{\mu_1}{2}} (1-t)^{\mu_1} \sum_{G_{\mu} \subseteq H \subseteq G_{\mu}^{+}} X_{H}(x;t) \\ &\times \prod_{\{u,v\} \in H \setminus G_{\mu}} - \left(1 - q^{\operatorname{leg}_{\mu}(u) + 1} t^{\operatorname{arm}_{\mu}(u)}\right) \nonumber \\ &\times \prod_{\{u,v\} \in G_{\mu}^{+} \setminus H} \left(1 - q^{\operatorname{leg}_{\mu}(u)+1}t^{\operatorname{arm}_{\mu}(u)+1}\right) . \nonumber \end{align} where the precise definitions are given in Section \ref{sec:background}.
We use this formula to obtain Schur and power sum expansions of integral form Macdonald polynomials. The coefficients in these expansions are not generally positive, so our formulas cannot be positive. Furthermore, our expansions do have some cancelation in general. We hope that this cancelation can be simplified, or even eliminated, in future work. These expansions, along with our main formula, are the focus of Section \ref{sec:integral-form}.
Finally, the \emph{Jack polynomials} are another basis for the ring of symmetric functions which are obtained by taking a certain limit of integral form Macdonald polynomials. In Section \ref{sec:jack}, we show how to use our previous identities from Section \ref{sec:integral-form} to obtain analogous results for Jack polynomials. The resulting identities are much simpler in this case, and a reader new to this subject may prefer to read Section \ref{sec:jack} before Section \ref{sec:integral-form}.
\section{Background} \label{sec:background}
\subsection{Integral form Macdonald polynomials} The \emph{integral form Macdonald polynomials} $J_{\mu}(x;q,t)$ are symmetric functions in variables $x_1, x_2, \ldots$ with coefficients in the field $\mathbb{Q}(q,t)$; we refer the reader to \cite{haglund-book} for a complete definition. In \cite{hhl-nsym}, Haglund, Haiman, and Loehr prove a combinatorial formula for $J_{\mu}(x;q,t)$. Their result can also be thought of as an expansion of $J_{\mu}(x;q,t)$ into monomial symmetric functions. We state Haglund, Haiman, and Loehr's formula here, which we will use as our ``definition'' for $J_{\mu}(x;q,t)$.
Given a partition $\mu$, let $\mu^{\prime}$ be the conjugate partition of $\mu$. A \emph{filling} of $\mu$ is a map from the cells of the (French) Ferrers diagram of $\mu$ to $\mathbb{Z}_{>0}$. Two cells in a Ferrers diagram \emph{attack} if they are in the same row, or if they are in adjacent rows and the cell in the upper row is strictly to the right of the cell in the lower row. A filling is \emph{non-attacking} if the entries in attacking cells are never equal.
\begin{defn} We make the following definitions for a cell $u$ in $\mu$. \begin{itemize} \item The \emph{arm} of $u$ in $\mu$, denoted $\operatorname{arm}_{\mu}(u)$, is the number of cells strictly to the right of $u$ in its row in $\mu$. \item The \emph{leg} of $u$ in $\mu$, denoted $\operatorname{leg}_{\mu}(u)$, is the number of cells strictly above $u$ in its column in $\mu$. \item $\operatorname{down}_{\mu}(u)$ is the cell immediately below $u$ in $\mu$ (if such a cell exists). \end{itemize} \end{defn}
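These statistics are easy to compute mechanically. The following Python sketch (our own helper; row $1$ denotes the bottom row of the French diagram and column $1$ the leftmost column) returns the arm, the leg, and the coordinates of $\operatorname{down}_{\mu}(u)$ for a given cell.
\begin{verbatim}
def arm_leg_down(mu, row, col):
    arm = mu[row - 1] - col                           # cells strictly to the right
    leg = sum(1 for r in range(row + 1, len(mu) + 1)  # cells strictly above
              if mu[r - 1] >= col)
    down = (row - 1, col) if row > 1 else None        # cell immediately below
    return arm, leg, down

# A cell of (4,3,3,2) with arm 1 and leg 2, as in the caption of Figure 1:
print(arm_leg_down((4, 3, 3, 2), 2, 2))  # (1, 2, (1, 2))
\end{verbatim}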
We also use certain statistics $\operatorname{maj}$ and $\operatorname{inv}$ on fillings of $\mu$. We will not define these statistics precisely now; for definitions, see the proof of Theorem \ref{thm:integral-form-chromatic} or the original source \cite{hhl-nsym}.
\begin{figure}
\caption{This is the Ferrers diagram of the partition $(4,3,3,2)$. The cell $u$ has $\operatorname{arm}_{\mu}(u) = 1$ (count the dots), $\operatorname{leg}_{\mu}(u) = 2$ (count the dashes), and $\operatorname{down}_{\mu}(u) = v$.}
\label{fig:arm-leg}
\end{figure}
\begin{thm}[\cite{hhl-nsym}] \label{thm:hhl-nsym} \begin{align} J_{\mu^{\prime}}(x;q,t) &= \sum_{\substack{\sigma: \mu \to \mathbb{Z}_{>0} \\ \sigma \text{ non-attacking}}} x^{\sigma} q^{\operatorname{maj}(\sigma, \mu)} t^{n(\mu^{\prime}) - \operatorname{inv}(\sigma, \mu)} \\ &\times \prod_{\substack{u \in \mu \\ \sigma(u) = \sigma(\operatorname{down}_{\mu}(u))}} \left( 1 - q^{\operatorname{leg}_{\mu}(u)+1} t^{\operatorname{arm}_{\mu}(u)+1} \right) \nonumber \\ &\times \prod_{\substack{u \in \mu \\ \sigma(u) \neq \sigma(\operatorname{down}_{\mu}(u))}} \left( 1 - t \right) . \nonumber \end{align} where each cell in the bottom row of $\mu$ contributes to the last product. \end{thm}
\subsection{Jack polynomials} The \emph{Jack polynomials} $J_{\mu}^{(\alpha)}(x)$ are symmetric functions in the variables $x_1, x_2, \ldots$ with an extra parameter $\alpha$. They can be obtained by taking a certain limit of integral form Macdonald polynomials.
\begin{defn} For any partition $\mu \vdash n$, the (integral form) Jack polynomial is \begin{align} J_{\mu}^{(\alpha)}(x) &= \lim_{t \to 1} \frac{J_{\mu} \left(x;t^{\alpha},t \right)}{(1-t)^n} . \end{align} \end{defn}
Setting $\alpha = 1$ yields a scalar multiple of the Schur function $s_{\mu}$. In \cite{knop-sahi}, Knop and Sahi prove a combinatorial formula for $J_{\mu}^{(\alpha)}(x)$, which we recount here.
\begin{defn} The \emph{$\alpha$-hook} of $u$ in $\mu$ is \begin{align} \operatorname{hook}^{(\alpha)}_{\mu}(u) = \alpha(\operatorname{leg}_{\mu}(u)+1) + \operatorname{arm}_{\mu}(u). \end{align} \end{defn}
\begin{thm}[\cite{knop-sahi}] \begin{align} J_{\mu^{\prime}}^{(\alpha)}(x) &= \sum_{\substack{\sigma: \mu \to \mathbb{Z}_{>0} \\ \sigma \text{ non-attacking}}} x^{\sigma} \prod_{\substack{u \in \mu \\ \sigma(u) = \sigma(\operatorname{down}_{\mu}(u))}} \left(1 + \operatorname{hook}^{(\alpha)}_{\mu}(u)\right) \end{align} \end{thm}
This formula can also be derived from Theorem \ref{thm:hhl-nsym}.
\subsection{Graphs} We will think of graphs as sets of edges $\{u,v\}$ where $u<v$ on some fixed vertex set $\{1,2,\ldots,n\}$.
\begin{defn}[\cite{stanley-chromatic}] The chromatic symmetric function of a graph $G$ is the symmetric function \begin{align} X_G(x) &= \sum_{\substack{\kappa : V(G) \to \mathbb{Z}_{>0} \\ \kappa \text{ proper coloring}}} x^{\kappa} \end{align} where \begin{align} x^{\kappa} &= \prod_{v \in V(G)} x_{\kappa(v)} . \end{align} Here a map $\kappa : V(G) \to \mathbb{Z}_{>0}$ is said to be a proper coloring if $\kappa(u) \neq \kappa(v)$ whenever $\{u,v\} \in G$. \end{defn}
Evaluating $X_G(x)$ at $x_1=x_2=\ldots=x_k=1$ and $x_{k+1}=x_{k+2}=\ldots=0$ recovers the value of the chromatic polynomial of $G$ at the positive integer $k$. In \cite{shareshian-wachs}, Shareshian and Wachs define a $t$-analogue of Stanley's chromatic symmetric function.
\begin{defn}[\cite{shareshian-wachs}] Given a proper coloring $\kappa$ of a graph $G$, let \begin{align} \operatorname{asc}(\kappa) = \# \{\{u,v\} \in G : u < v, \kappa(u) < \kappa(v)\}. \end{align} The chromatic quasisymmetric function of $G$ is defined to be \begin{align} X_G(x;t) &= \sum_{\substack{\kappa : V(G) \to \mathbb{Z}_{>0} \\ \kappa \text{ proper coloring}}} x^{\kappa} t^{\operatorname{asc}(\kappa)} \end{align} \end{defn}
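For small graphs one can tabulate $X_G(x;t)$ directly from this definition. The Python sketch below (our own illustration) brute-forces over proper colorings from a finite palette and records, for each multiset of colors, the coefficients of $1, t, t^2, \ldots$; it is only a truncation of $X_G(x;t)$ to finitely many variables.
\begin{verbatim}
from itertools import product

def X_truncated(n, edges, num_colors):
    """Vertices are 1..n; edges is a set of pairs (u, v) with u < v."""
    coeffs, max_asc = {}, len(edges)
    for kappa in product(range(1, num_colors + 1), repeat=n):
        if any(kappa[u - 1] == kappa[v - 1] for (u, v) in edges):
            continue                                  # not a proper coloring
        asc = sum(1 for (u, v) in edges if kappa[u - 1] < kappa[v - 1])
        key = tuple(sorted(kappa))                    # the monomial x^kappa
        coeffs.setdefault(key, [0] * (max_asc + 1))[asc] += 1
    return coeffs

# Path 1-2-3: the coefficient of x_1 x_2 x_3 is 1 + 4t + t^2.
print(X_truncated(3, {(1, 2), (2, 3)}, 3)[(1, 2, 3)])  # [1, 4, 1]
\end{verbatim}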
Although $X_G(x;t)$ is not generally symmetric in the $x_i$'s, it will be symmetric in all the cases we address.
\section{Expansions of integral form Macdonald polynomials} \label{sec:integral-form}
In this section, we give a formula for the integral form Macdonald polynomials as a weighted sum of chromatic quasisymmetric functions. Then we use formulas for the Schur and power sum expansion of chromatic quasisymmetric functions, proved by \cite{gasharov} and \cite{ath-power}, respectively, to give Schur and power sum formulas for the integral form Macdonald polynomials. First, we need to define certain graphs.
\begin{defn} Given a partition $\mu \vdash n$, we number the cells of $\mu$ in \emph{reading order} (left to right, top to bottom) with entries $1, 2, \ldots, n$. The \emph{attacking graph} $G_{\mu}$ has vertex set $1,2,\ldots,n$ and edges $\{u, v\}$ if and only if cells $u < v$ are attacking in $\mu$. The \emph{augmented attacking graph} $G_{\mu}^{+}$ consists of the edges of $G_{\mu}$ along with the extra edges $\{u, \operatorname{down}_{\mu}(u)\}$ for every cell $u$ not in the bottom row of $\mu$. \end{defn}
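The attacking and augmented attacking graphs can be generated mechanically from $\mu$. The following Python sketch (all names ours) numbers the cells in reading order and reproduces the graphs appearing in the example below.
\begin{verbatim}
def attacking_graphs(mu):
    """Return (cells, G_mu, G_mu_plus); cells maps a label to (row, col),
    where row 1 is the bottom row of the French diagram."""
    cells, k = {}, 0
    for r in range(len(mu), 0, -1):                   # rows from top to bottom
        for c in range(1, mu[r - 1] + 1):
            k += 1
            cells[k] = (r, c)
    G = set()
    for u in cells:
        for v in cells:
            if u >= v:
                continue
            (ru, cu), (rv, cv) = cells[u], cells[v]
            # same row, or v one row below u and strictly to u's left
            if ru == rv or (rv == ru - 1 and cv < cu):
                G.add((u, v))
    Gplus = set(G)
    for u, (ru, cu) in cells.items():                 # add the edges {u, down(u)}
        if ru > 1:
            v = next(w for w, rc in cells.items() if rc == (ru - 1, cu))
            Gplus.add((min(u, v), max(u, v)))
    return cells, G, Gplus

cells, G, Gplus = attacking_graphs((3, 2))
print(sorted(G))          # [(1, 2), (2, 3), (3, 4), (3, 5), (4, 5)]
print(sorted(Gplus - G))  # [(1, 3), (2, 4)]
\end{verbatim}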
\subsection{Chromatic quasisymmetric functions}
\begin{thm} \label{thm:integral-form-chromatic} Let $n(\lambda) = \sum_{i=1}^{\ell(\lambda)} (i-1) \lambda_i = \sum_{u \in \lambda} \operatorname{leg}_{\lambda}(u)$. Then \begin{align} J_{\mu^{\prime}}(x;q,t) &= t^{-n(\mu^{\prime}) + \binom{\mu_1}{2}} (1-t)^{\mu_1} \sum_{G_{\mu} \subseteq H \subseteq G_{\mu}^{+}} X_{H}(x;t) \\ &\times \prod_{\{u,v\} \in H \setminus G_{\mu}} - \left(1 - q^{\operatorname{leg}_{\mu}(u) + 1} t^{\operatorname{arm}_{\mu}(u)}\right) \nonumber \\ &\times \prod_{\{u,v\} \in G_{\mu}^{+} \setminus H} \left(1 - q^{\operatorname{leg}_{\mu}(u)+1}t^{\operatorname{arm}_{\mu}(u)+1}\right) . \nonumber \end{align} \end{thm}
Before we prove Theorem \ref{thm:integral-form-chromatic}, we work through the example $\mu = (3,2)$. The cells of $\mu$ are numbered as follows. \begin{align} \begin{ytableau} 1 & 2 \\ 3 & 4 & 5 \end{ytableau} \end{align} Then $G_{\mu}$ is the graph \begin{center} \begin{tikzpicture} \node (1) {1}; \node (2) [right of=1] {2}; \node (3) [right of=2] {3}; \node (4) [right of=3] {4}; \node (5) [right of=4] {5}; \path (1) edge node [right] {} (2) (2) edge node [right] {} (3) (3) edge node [right] {} (4) (5) edge[bend right] node [left] {} (3) (4) edge node [right] {} (5); \end{tikzpicture} \end{center} and $G_{\mu}^{+}$ is the graph obtained by adding edges $\{1,3\}$ and $\{2,4\}$: \begin{center} \begin{tikzpicture} \node (1) {1}; \node (2) [right of=1] {2}; \node (3) [right of=2] {3}; \node (4) [right of=3] {4}; \node (5) [right of=4] {5}; \path (1) edge node [right] {} (2) (2) edge node [right] {} (3) (3) edge node [right] {} (4) (5) edge[bend right] node [left] {} (3) (4) edge node [right] {} (5) (3) edge[bend right] node [left] {} (1) (4) edge[bend right] node [left] {} (2); \end{tikzpicture} \end{center} We note that $\operatorname{arm}_{\mu}(1) = 1$, $\operatorname{leg}_{\mu}(1) = \operatorname{arm}_{\mu}(2)=\operatorname{leg}_{\mu}(2) = 0$, and $n(\mu^{\prime}) = 2+2=4$. The formula in Theorem \ref{thm:integral-form-chromatic} gives \begin{align} J_{\mu^{\prime}}(x;q,t) &= t^{-1} (1-t)^3 \left[ \left(1 - qt^2 \right) \left( 1 - qt \right) X_{G_{\mu}} (x;t) \right. \\ &- \left( 1 - qt \right) \left( 1 - qt \right) X_{G_{\mu} \cup \{1,3\}} (x;t) \nonumber \\ &- \left(1 - qt^2 \right) \left( 1 - q \right) X_{G_{\mu} \cup \{2,4\}} (x;t)\nonumber \\ &+ \left. \left( 1 - qt \right) \left( 1 - q \right) X_{G_{\mu}^{+}} (x;t) \right] \nonumber \end{align} where $G_{\mu} \cup \{u,v\}$ means we add the edge $\{u, v\}$ to $G_{\mu}$. Note that, after distributing the $t^{-1}$ term, the coefficients in this expansion are, in general, only in $\mathbb{Z}[q,t,t^{-1}]$ but not $\mathbb{Z}[q,t]$. It is only after adding all the terms together that we get polynomial coefficients.
\begin{proof}[Proof of Theorem \ref{thm:integral-form-chromatic}] First, we precisely state Haglund, Haiman, and Loehr's formula for $J_{\mu^{\prime}}(x;q,t)$. Given an assignment $\sigma$ of a positive integer to each cell in the diagram of $\mu$, recall that $\sigma$ is non-attacking if $\sigma(u) \neq \sigma(v)$ for each pair of cells $u$ and $v$ that either share a row or $v$ is one row below $u$ and strictly to $u$'s left. (We say such pairs of cells are \emph{attacking}.) We also let \begin{itemize} \item $\operatorname{down}_{\mu}(u)$ be the cell just below the cell $u$, if such a cell exists, \item $\operatorname{Des}(\sigma, \mu)$ be the set of cells with $\sigma(u) > \sigma(\operatorname{down}_{\mu}(u))$, \item $\operatorname{maj}(\sigma, \mu) = \sum_{u \in \operatorname{Des}(\sigma, \mu)} (\operatorname{leg}_{\mu}(u) + 1)$, \item $\operatorname{Inv}(\sigma, \mu)$ be the set of attacking pairs of cells $u$ and $v$ such that $u$ appears before $v$ in reading order (as the cells are processed top down and left to right) and $\sigma(u) > \sigma(v)$, and \item $\operatorname{inv}(\sigma, \mu) = \# \operatorname{Inv}(\sigma, \mu) - \sum_{u \in \operatorname{Des}(\sigma, \mu)} \operatorname{arm}_{\mu}(u)$. \end{itemize} Then we have \begin{align} \label{hhl-proof} J_{\mu^{\prime}}(x;q,t) &= \sum_{\substack{\sigma: \mu \to \mathbb{Z}_{>0} \\ \sigma \text{ non-attacking}}} x^{\sigma} q^{\operatorname{maj}(\sigma, \mu)} t^{n(\mu^{\prime}) - \operatorname{inv}(\sigma, \mu)} \\ &\times \prod_{\substack{u \in \mu \\ \sigma(u) = \sigma(\operatorname{down}_{\mu}(u))}} \left( 1 - q^{\operatorname{leg}_{\mu}(u)+1} t^{\operatorname{arm}_{\mu}(u)+1} \right) \nonumber \\ &\times \prod_{\substack{u \in \mu \\ \sigma(u) \neq \sigma(\operatorname{down}_{\mu}(u))}} \left( 1 - t \right) . \nonumber \end{align} where each cell in the bottom row of $\mu$ contributes to the last product.
We need to modify the power of $t$ slightly. Recall that $n(\mu^{\prime}) = \sum_{u \in \mu} \operatorname{arm}_{\mu}(u)$. The number of possible attacking pairs in a filling of shape $\mu$ is \begin{align} n(\mu^{\prime}) + \sum_{u \in \mu : u \notin \mu_1} \operatorname{arm}_{\mu}(u) = 2 n(\mu^{\prime}) - \binom{\mu_1}{2} . \end{align} Let $\operatorname{Coinv}(\sigma, \mu)$ be the set of attacking pairs $u$ and $v$ such that $\sigma(u) < \sigma(v)$. Since $\sigma$ is a non-attacking filling, we have \begin{align} \# \operatorname{Inv}(\sigma, \mu) + \# \operatorname{Coinv}(\sigma, \mu) &= 2 n(\mu^{\prime}) - \binom{\mu_1}{2} \end{align} which implies \begin{align} n(\mu^{\prime}) - \operatorname{inv}(\sigma, \mu) &= n(\mu^{\prime}) - \# \operatorname{Inv}(\sigma, \mu) + \sum_{u \in \operatorname{Des}(\sigma, \mu)} \operatorname{arm}_{\mu}(u) \\ &= \# \operatorname{Coinv}(\sigma, \mu) - n(\mu^{\prime}) + \binom{\mu_1}{2} + \sum_{u \in \operatorname{Des}(\sigma, \mu)} \operatorname{arm}_{\mu}(u) . \end{align} Hence, for a fixed non-attacking filling $\sigma$, the contribution of $\sigma$ to \eqref{hhl-proof} is \begin{align} \label{hhl-chi} & x^{\sigma} t^{-n(\mu^{\prime}) + \binom{\mu_1}{2}} (1-t)^{\mu_1} \prod_{u, v \text{ attacking}} t^{\chi(\sigma(u) < \sigma(v))} \\ &\prod_{v = \operatorname{down}_{\mu}(u)} \left( 1 - q^{\operatorname{leg}_{\mu}(u)+1} t^{\operatorname{arm}_{\mu}(u)+1} \right)^{\chi(\sigma(u) = \sigma(v))} \\ &\times (1-t)^{\chi(\sigma(u) \neq \sigma(v))} \left( q^{\operatorname{leg}_{\mu}(u)+1} t^{\operatorname{arm}_{\mu}(u)} \right)^{\chi(\sigma(u)>\sigma(v)} \nonumber . \end{align} where $\chi$ evaluates to 1 if the inner statement is true and 0 if the statement is false.
For any non-attacking filling $\sigma$ of $\mu$, let $\kappa$ be the coloring of the vertices $1,2,\ldots,n$ given by $\kappa(u) = \sigma(u)$. We will show that the contribution of $\sigma$ to \eqref{hhl-proof} is equal to the contribution of $\kappa$ to the expression in Theorem \ref{thm:integral-form-chromatic}: \begin{align} \label{chromatic-proof}
&t^{-n(\mu^{\prime}) + \binom{\mu_1}{2}} (1-t)^{\mu_1} \sum_{G_{\mu} \subseteq H \subseteq G_{\mu}^{+}} X_{H}(x;t) \\ \nonumber
&\times \prod_{\{u,v\} \in H \setminus G_{\mu}} - \left(1 - q^{\operatorname{leg}_{\mu}(u) + 1} t^{\operatorname{arm}_{\mu}(u)}\right) \\ &\times \prod_{\{u,v\} \in G_{\mu}^{+} \setminus H} \left(1 - q^{\operatorname{leg}_{\mu}(u)+1}t^{\operatorname{arm}_{\mu}(u)+1}\right) . \nonumber \end{align} Clearly the term $x^{\kappa} t^{-n(\mu^{\prime}) + \binom{\mu_1}{2}} (1-t)^{\mu_1}$ appears equally in both expressions. For attacking pairs $u$ and $v$ in $\mu$, we get a contribution of $t^{\chi(\kappa(u) < \kappa(v))}$ to \eqref{chromatic-proof} coming from the definition of $X_{H}(x;t)$ for any $G_{\mu} \subseteq H \subseteq G_{\mu}^{+}$.
All that remains is to consider the cells $u,v$ where $v = \operatorname{down}_{\mu}(u)$. We get different contributions to \eqref{chromatic-proof} in each of three situations: when $\sigma(u) > \sigma(v)$, when $\sigma(u) = \sigma(v)$, and when $\sigma(u) < \sigma(v)$. We will show these contributions match the corresponding contributions to \eqref{hhl-chi}\footnote{One can consider the two factors in \eqref{chromatic-proof} as the solutions to a system of two unknowns and three equations. Of course, such a solution is not guaranteed to exist -- the fact that it does exist in this case is fortunate, although this phenomenon occurs often in the theory of Macdonald polynomials.}.
If $\sigma(u) < \sigma(v)$, we get a contribution of $1-t$ to \eqref{hhl-chi}. We get a contribution of $ \left(1 - q^{\operatorname{leg}_{\mu}(u)+1}t^{\operatorname{arm}_{\mu}(u)+1}\right)$ for each $H$ that does not contain $\{u,v\}$ and $- t \left(1 - q^{\operatorname{leg}_{\mu}(u) + 1} t^{\operatorname{arm}_{\mu}(u)}\right)$ for each $H$ which does contain $\{u,v\}$, since we have to include the factor from \eqref{chromatic-proof} as well as the extra $t$ that comes from a new ascent. Grouping on this condition and summing the contributions, we get \begin{align} \left(1 - q^{\operatorname{leg}_{\mu}(u)+1}t^{\operatorname{arm}_{\mu}(u)+1}\right) - t \left(1 - q^{\operatorname{leg}_{\mu}(u) + 1} t^{\operatorname{arm}_{\mu}(u)}\right) = 1-t \end{align} as desired.
If $\sigma(u) = \sigma(v)$, we get a term of $1-q^{\operatorname{leg}_{\mu}(u)+1} t^{\operatorname{arm}_{\mu}(u)+1}$ in \eqref{hhl-chi}. From \eqref{chromatic-proof}, we get $\left(1 - q^{\operatorname{leg}_{\mu}(u)+1} t^{\operatorname{arm}_{\mu}(u)+1} \right)$ for each $H$ that does not contain $\{u,v\}$ and no contribution from the other graphs, since $\kappa$ is not proper for those graphs. These factors are clearly equal.
Finally, if $\sigma(u) > \sigma(v)$ we get $q^{\operatorname{leg}_{\mu}(u)+1}t^{\operatorname{arm}_{\mu}(u)}(1-t)$ from \eqref{hhl-chi}. We get a contribution of $ \left(1 - q^{\operatorname{leg}_{\mu}(u)+1}t^{\operatorname{arm}_{\mu}(u)+1}\right)$ for each $H$ that does not contain $\{u,v\}$ and $- t \left(1 - q^{\operatorname{leg}_{\mu}(u) + 1} t^{\operatorname{arm}_{\mu}(u)}\right)$ for each $H$ which does contain $\{u,v\}$. Grouping on this condition and summing the two contributions, we get \begin{align} \left(1 - q^{\operatorname{leg}_{\mu}(u)+1}t^{\operatorname{arm}_{\mu}(u)+1}\right) - \left(1 - q^{\operatorname{leg}_{\mu}(u) + 1} t^{\operatorname{arm}_{\mu}(u)}\right) = q^{\operatorname{leg}_{\mu}(u)+1}t^{\operatorname{arm}_{\mu}(u)}(1-t) \end{align} as desired.
\end{proof}
\subsection{Schur functions}
In \cite{shareshian-wachs}, Shareshian and Wachs provide a formula for the Schur expansion of $X_G(x;t)$ for certain graphs $G$. By applying Theorem \ref{thm:integral-form-chromatic} to their Schur formula, we obtain a Schur formula for the integral form Macdonald polynomials. Since the Schur expansion of $J_{\mu}(x;q,t)$ is not positive, our formulas cannot be positive. In general, we will have some cancelation. In order to state our formula, we need to consider certain tableaux.
\begin{defn} Given two partitions $\mu, \lambda \vdash n$, we again number the cells of $\mu$ with entries $1,2,\ldots,n$ in reading order. An \emph{integral form tableau} $T$ of type $\mu$ and shape $\lambda$, is a bijection $T : \lambda \to \{1,2,\ldots,n\}$ such that \begin{itemize} \item the entries in each row increase from left to right, \item if $v$ is immediately to the right of $u$ then $\{u, v\} \notin G_{\mu}$, and \item if $v$ is immediately below $u$ and $u < v$ then $\{u, v\} \in G_{\mu}^{+}$. \end{itemize} We let $\operatorname{IFT}_{\mu}$ denote the collection of all integral form tableaux of type $\mu$ (of varying shapes $\lambda$). \end{defn}
For example, \begin{align} \label{integral-tableau} \begin{ytableau} 2 \\ 3 & 5 \\ 1 & 4 & 6 \end{ytableau} \end{align} is an integral form tableau of type $(2,2,2)$ and shape $(3,2,1)$. For reference, the reading order labeling of $(2,2,2)$ is \begin{align} \begin{ytableau} 1 & 2 \\ 3 & 4 \\ 5 & 6 \end{ytableau} \end{align} and one can check that this tableau satisfies all the conditions described above. Note that a given tableau can be an integral form tableau for many different types $\mu$. Given such a tableau $T$, we construct a $q,t$-weight for the tableau, denoted $\operatorname{wt}_{\mu}(T)$, as follows.
\begin{defn} For each $u < v$ such that $\{u, v\} \in G_{\mu}^{+} \setminus G_{\mu}$, i.e.\ $v = \operatorname{down}_{\mu}(u)$, we define a factor $\operatorname{wt}_{\mu}(T,u)$ by \begin{itemize} \item $\operatorname{wt}_{\mu}(T,u) = t^{-\operatorname{arm}_{\mu}(u)} \left(1 - q^{\operatorname{leg}_{\mu}(u)+1}t^{\operatorname{arm}_{\mu}(u)+1} \right)$ if $u$ appears immediately to the left of $v$ in $T$, \item $\operatorname{wt}_{\mu}(T,u) = -t^{-\operatorname{arm}_{\mu}(u)+1}\left(1 - q^{\operatorname{leg}_{\mu}(u)+1} t^{\operatorname{arm}_{\mu}(u)} \right)$ if $u$ appears immediately above $v$ in $T$, \item $\operatorname{wt}_{\mu}(T,u) = t^{-\operatorname{arm}_{\mu}(u)}(1-t)$ if $u$ appears in some row above $v$ in $T$ but not immediately above $v$ in $T$, and \item $\operatorname{wt}_{\mu}(T,u) = q^{\operatorname{leg}_{\mu}(u)+1}(1-t)$ otherwise. \end{itemize} We also let $\operatorname{Inv}_{\mu}(T)$ equal the set of edges $\{u,v\} \in G_{\mu}$ such that $u$ appears in a row strictly above $v$'s row and $\operatorname{inv}_{\mu}(T) = \# \operatorname{Inv}_{\mu}(T)$. Then we define \begin{align} \wt_{\mu}(T) &= t^{\operatorname{inv}_{\mu}(T)} \prod_{\{u,v\} \in G_{\mu}^{+} \setminus G_{\mu}} \operatorname{wt}_{\mu}(T,u) . \end{align} \end{defn}
Let us consider the tableau $T$ of type $(2,2,2)$ and shape $(3,2,1)$ depicted in \eqref{integral-tableau}. The edges that are in $G_{\mu}^{+}$ but not in $G_{\mu}$ are $\{1,3\}$, $\{2,4\}$, $\{3,5\}$, and $\{4,6\}$. These contribute the following factors, respectively: \begin{align} \operatorname{wt}_{\mu}(T,1) &= q^{\operatorname{leg}_{\mu}(1)+1} \left(1-t \right) = q\left(1-t\right) \\ \operatorname{wt}_{\mu}(T,2) &= t^{-\operatorname{arm}_{\mu}(2)}(1 - t) = 1-t \\ \operatorname{wt}_{\mu}(T,3) &= t^{-\operatorname{arm}_{\mu}(3)} \left(1 - q^{\operatorname{leg}_{\mu}(3)+1}t^{\operatorname{arm}_{\mu}(3)+1} \right) = t^{-1} \left(1 - q^2t^2 \right) \\ \operatorname{wt}_{\mu}(T,4) &= t^{-\operatorname{arm}_{\mu}(4)} \left(1 - q^{\operatorname{leg}_{\mu}(4)+1} t^{\operatorname{arm}_{\mu}(4)+1} \right) = 1 - q^2 t . \end{align} Multiplying the four terms together and then by $t^{\operatorname{inv}_{\mu}(T)} = t^3$, we get \begin{align} \wt_{\mu}(T) &= qt^2 \left(1-t\right)^2 \left(1-q^2t\right) \left(1-q^2t^2\right) . \end{align}
\begin{cor} \label{cor:integral-form-schur} For any partition $\mu \vdash n$, \begin{align} J_{\mu^{\prime}}(x;q,t) &= (1-t)^{\mu_1} \sum_{T \in \operatorname{IFT}_{\mu}} \operatorname{wt}_{\mu}(T) s_{\operatorname{shape}(T)} \end{align} where the sum is over all integral form tableaux of type $\mu$. \end{cor}
As an example, we compute the coefficient of $s_{2,2}$ in the Schur expansion of $J_{3,1}(x;q,t)$, so $\mu = (2,1,1)$. The only attacking pair in the reading order labeling of $(2,1,1)$ is $\{3,4\}$. We also note that 1 is immediately above 2 and 2 is immediately above 3 in this labeling. The integral form tableaux of type $(2,1,1)$ and shape $(2,2)$ are \begin{align} \begin{ytableau} 2 & 4 \\ 1 & 3 \end{ytableau} \hspace{15pt} \begin{ytableau} 2 & 3 \\ 1 & 4 \end{ytableau} \hspace{15pt} \begin{ytableau} 1 & 4 \\ 2 & 3 \end{ytableau} \hspace{15pt} \begin{ytableau} 1 & 3 \\ 2 & 4 \end{ytableau} \end{align} and their respective weights are $q(1-t)^2 $, $qt(1-t)(1-q^2 t)$, $ -t(1-q)(1-q^2t)$, and $ -q^2 t^2 (1-q) (1-t)$. Summing these weights and multiplying by $(1-t)^2$, we see that the coefficient \begin{align}
\left. J_{3,1}(x;q,t) \right|_{s_{2,2}} &= (1-t)^2 (q-t) (1-qt) (1+qt) . \end{align}
\begin{proof}[Proof of Corollary \ref{cor:integral-form-schur}]
First, we need to recall the Schur expansion of a chromatic quasisymmetric function given in \cite{shareshian-wachs}. If $G$ is the incomparability graph of a $(3+1)$-free poset, we define a \emph{$G$-tableau} of shape $\lambda$ to be a bijective filling of $\lambda$ with integers $1,2,\ldots, |\lambda|$ such that \begin{itemize} \item if $u$ is immediately left of $v$, then $\{u,v\} \notin G$, and \item if $u$ is immediately above $v$, then either $u > v$ or $\{u,v\} \in G$. \end{itemize} Given such a tableau $T$, we define $\operatorname{inv}_{G}(T)$ to be the number of $\{u,v\} \in G$ such that $u$ appears in a row strictly above the row containing $v$. In \cite{shareshian-wachs}, Shareshian and Wachs extend a result of Gasharov \cite{gasharov} to prove that \begin{align} X_G(x;t) &= \sum_{\text{$G$-tableaux } T} t^{\operatorname{inv}_G(T)} s_{\operatorname{shape}(T)} \end{align} for all such graphs $G$. Applying this result to Theorem \ref{thm:integral-form-chromatic}, we have \begin{align} \label{integral-schur-pf} J_{\mu^{\prime}}(x;q,t) &= t^{-n(\mu^{\prime}) + \binom{\mu_1}{2}} (1-t)^{\mu_1} \sum_{G_{\mu} \subseteq H \subseteq G_{\mu}^{+}} \prod_{\{u,v\} \in H \setminus G_{\mu}} - \left(1 - q^{\operatorname{leg}_{\mu}(u) + 1} t^{\operatorname{arm}_{\mu}(u)}\right) \\ &\times \prod_{\{u,v\} \in G_{\mu}^{+} \setminus H} \left(1 - q^{\operatorname{leg}_{\mu}(u)+1}t^{\operatorname{arm}_{\mu}(u)+1}\right) \nonumber \sum_{\text{$H$-tableaux } T} t^{\operatorname{inv}_H(T)} s_{\operatorname{shape}(T)} . \end{align} It is not hard to check that integral form tableaux $T$ of type $\mu$ are exactly the tableaux that are $H$-tableaux for (at least) one graph $H$ such that $G_{\mu} \subseteq H \subseteq G_{\mu}^{+}$. The main idea of the proof is to switch the order of the sums in \eqref{integral-schur-pf}.
We fix an integral form tableau $T$ of type $\mu$. We wish to calculate the contribution of $T$ to \eqref{integral-schur-pf}. The difficulty is that $T$ may be an $H$-tableau for more than one graph $H$. First, we know that each graph $H$ contains $G_{\mu}$, so we need to include $t^{\operatorname{inv}_{G_{\mu}}(T)} = t^{\operatorname{inv}_{\mu}(T)}$. The other contributions will come from edges $\{u,v\} \in G_{\mu}^{+} \setminus G_{\mu}$. We consider the possible configurations case by case and show that they match $\wt_{\mu}(T,u)$.
First, consider the case when $u$ appears immediately left of $v$ in $T$. Then $H$ must not contain the edge $\{u,v\}$, so we must choose the factor $1 - q^{\operatorname{leg}_{\mu}(u)+1}t^{\operatorname{arm}_{\mu}(u)+1}$ from the second product in \eqref{integral-schur-pf}. The $t^{-\operatorname{arm}_{\mu}(u)}$ factor comes from the power of $t$ at the front of \eqref{integral-schur-pf}. Multiplying these together, we obtain $\wt_{\mu}(T,u)$.
Next, if $u$ appears immediately above $v$ in $T$, then we must have $\{u,v\} \in H$, which forces us to take the factor $-\left(1 - q^{\operatorname{leg}_{\mu}(u) + 1} t^{\operatorname{arm}_{\mu}(u)}\right)$. Again, we take $t^{-\operatorname{arm}_{\mu}(u)}$ from the front of \eqref{integral-schur-pf}. We also get another power of $t$ in $t^{\operatorname{inv}_{H}(T)}$ that does not come from $t^{\operatorname{inv}_{\mu}(T)}$, so we multiply by $t$ to get $\wt_{\mu}(T,u)$.
If $u$ appears in a row above $v$ but not immediately above $v$, then $\{u,v\} \in H$ and $\{u,v\} \notin H$ are both possible. Since $T$ appears in both of these cases, we sum the weights from each of the two cases in \eqref{integral-schur-pf}. If $\{u,v\} \in H$, the contribution to \eqref{integral-schur-pf} \begin{align} -t^{-\operatorname{arm}_{\mu}+1} \left(1 - q^{\operatorname{leg}_{\mu}(u) + 1} t^{\operatorname{arm}_{\mu}(u)}\right) . \end{align} If $\{u,v\} \notin H$, then the contribution is \begin{align} t^{-\operatorname{arm}_{\mu}} \left(1 - q^{\operatorname{leg}_{\mu}(u) + 1} t^{\operatorname{arm}_{\mu}(u)+1}\right) . \end{align} Adding these two terms, we obtain $\operatorname{wt}_{\mu}(T,u) = t^{-\operatorname{arm}_{\mu}(u)}(1-t)$.
Finally, if none of these cases apply, then $u$ is in a row strictly south of $v$. Since we never have an additional inversion in this case, the total contribution to \eqref{integral-schur-pf} is \begin{align} &-t^{-\operatorname{arm}_{\mu}} \left(1 - q^{\operatorname{leg}_{\mu}(u) + 1} t^{\operatorname{arm}_{\mu}(u)}\right) + t^{-\operatorname{arm}_{\mu}} \left(1 - q^{\operatorname{leg}_{\mu}(u) + 1} t^{\operatorname{arm}_{\mu}(u)+1}\right) \\ &= q^{\operatorname{leg}_{\mu}(u)+1}(1-t) \end{align} which is equal to the desired weight. This allows us to rewrite \eqref{integral-schur-pf} as the statement in the corollary. \end{proof}
We noted previously that the coefficients in Theorem \ref{thm:integral-form-chromatic} are only rational functions, instead of polynomials, in $q$ and $t$. Here we prove that this problem disappears for Corollary \ref{cor:integral-form-schur}.
\begin{prop} For any integral form tableau $T$ of type $\mu$, \begin{align} \operatorname{wt}_{\mu}(T) \in \mathbb{Z}[q,t]. \end{align} \end{prop}
\begin{proof} We consider $\{u,v\} \in G_{\mu}^{+} \setminus G_{\mu}$ such that \begin{enumerate} \item $u$ appears immediately left of $v$ in $T$, \item $u$ appears immediately above $v$ in $T$, or \item $u$ appears above $v$ in $T$ but not immediately above $v$ in $T$. \end{enumerate} By definition, these are the only edges that have a negative power of $t$ in $\operatorname{wt}_{\mu}(T)$. Their relevant contributions are $t^{-\operatorname{arm}_{\mu}(u)}$, $t^{-\operatorname{arm}_{\mu}(u)+1}$, and $t^{-\operatorname{arm}_{\mu}(u)}$, respectively. Let \begin{align} \operatorname{Arm}_{\mu}(u) &= \{w > u : w \text{ is in the same row of $\mu$ as $u$} \}. \end{align} It is clear from the definition that the cardinality of $\operatorname{Arm}_{\mu}(u)$ is $\operatorname{arm}_{\mu}(u)$. Furthermore, we define \begin{align} \operatorname{inv}_{\mu}(T,u) &= \# \{w \in \operatorname{Arm}_{\mu}(u) : \{u,w\} \in \operatorname{Inv}_{\mu}(T)\} \\ &+ \# \{w \in \operatorname{Arm}_{\mu}(u) : \{w,v\} \in \operatorname{Inv}_{\mu}(T)\} \nonumber. \end{align} Our goal is to show that, in each of the three cases above, $\operatorname{inv}_{\mu}(T,u) \geq \operatorname{arm}_{\mu}(u)$. Since each inversion in $T$ contributes to exactly one $\operatorname{inv}_{\mu}(T,u)$, this will prove the claim. The key to our argument is that $w \in \operatorname{Arm}_{\mu}(u)$ cannot share a row in $T$ with either $u$ or $v$.
Consider case 1, i.e.\ $u$ appears immediately left of $v$ in $T$. By the definition of an integral form tableau, $w \in \operatorname{Arm}_{\mu}(u)$ may not appear in the row containing $u$ and $v$ in $T$. If it appears above this row, then $\{w,v\}$ is an inversion; if it appears below this row, then $\{u,w\}$ is an inversion. Therefore $\operatorname{inv}_{\mu}(T,u) = \operatorname{arm}_{\mu}(u)$.
In case 2, $w$ may not appear in the row containing $u$ or the row containing $v$ in $T$, and the argument from case 1 applies.
In case 3, we have three possible configurations. Listing the cells in their order of appearance from top to bottom we either have $wuv$, $uwv$, or $uvw$. By the same logic as above, each of these three orders contributes (at least) one inversion, so $\operatorname{inv}_{\mu}(T,u) \geq \operatorname{arm}_{\mu}(u)$. \end{proof}
\subsection{Power sum symmetric functions}
In a similar manner, we can use Theorem \ref{thm:integral-form-chromatic} along with the power sum expansion of $X_G$, conjectured in \cite{shareshian-wachs} and proved in \cite{ath-power}, to give a power sum formula for the integral form Macdonald polynomial.
First, we recall the formula for the power sum expansion of $X_H(x;t)$ for our graphs $H$. We translate the formula from the poset setting to the graph setting for the sake of consistency. Given a partition $\lambda \vdash n$ of length $k$ and a permutation $\sigma \in \mathfrak{S}_n$ in one-line notation, we break $\sigma$ into $k$ blocks of lengths $\lambda_1, \lambda_2, \ldots, \lambda_k$ from left to right. For a fixed $\lambda$, we say that $\sigma$ has an \emph{$H$-descent} at position $i$ if \begin{itemize} \item $i$ and $i+1$ are in the same block, \item $\sigma_i > \sigma_{i+1}$, and \item the edge $\{\sigma_{i+1}, \sigma_i\} \notin H$. \end{itemize} Similarly, we say that $\sigma$ has a \emph{nontrivial left-to-right $H$-maximum} at position $j$ if \begin{itemize} \item $j$ is not the first position in its block, \item $\sigma_i < \sigma_j$ for each $i < j$ in $j$'s block, and \item $\{\sigma_i, \sigma_j\} \notin H$ for each $i < j$ in $j$'s block. \end{itemize} Let $\mathcal{N}_{\lambda}(H)$ be the set of permutations $\sigma \in \mathfrak{S}_n$ without $H$-descents and with no nontrivial left-to-right $H$-maxima.
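These conditions are straightforward to check mechanically. The following Python sketch (function names ours) splits a permutation into blocks of sizes $\lambda_1,\lambda_2,\ldots$, tests membership in $\mathcal{N}_{\lambda}(H)$, and computes $\operatorname{inv}_H(\sigma)$.
\begin{verbatim}
from itertools import permutations

def blocks(lam):
    out, start = [], 1
    for part in lam:
        out.append(list(range(start, start + part)))
        start += part
    return out

def in_N(sigma, lam, H):
    """sigma is a tuple in one-line notation; H is a set of edges (u, v), u < v."""
    edge = lambda a, b: (min(a, b), max(a, b)) in H
    for block in blocks(lam):
        for idx, j in enumerate(block):
            if idx == 0:
                continue
            i = block[idx - 1]
            if sigma[i - 1] > sigma[j - 1] and not edge(sigma[i - 1], sigma[j - 1]):
                return False              # H-descent at position i
            earlier = [sigma[p - 1] for p in block[:idx]]
            if all(x < sigma[j - 1] for x in earlier) and \
               all(not edge(x, sigma[j - 1]) for x in earlier):
                return False              # nontrivial left-to-right H-maximum at j
    return True

def inv_H(sigma, H):
    n = len(sigma)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if sigma[i] > sigma[j]
               and (min(sigma[i], sigma[j]), max(sigma[i], sigma[j])) in H)

# Single edge on {1, 2} with lambda = (2): both permutations survive.
H = {(1, 2)}
print([(s, inv_H(s, H)) for s in permutations((1, 2)) if in_N(s, (2,), H)])
# [((1, 2), 0), ((2, 1), 1)]
\end{verbatim}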
\begin{thm}[\cite{ath-power}] \label{thm:ath-power} For any graph $H$ which is the incomparability graph of a natural unit interval order, we have \begin{align} \omega X_H(x;t) &= \sum_{\lambda \vdash n} \frac{p_{\lambda}}{z_{\lambda}} \sum_{\sigma \in \mathcal{N}_{\lambda}(H)} t^{\operatorname{inv}_{H}(\sigma)} \end{align} where $\operatorname{inv}_H(\sigma)$ is the number of pairs of indices $i < j$ such that $\sigma_i > \sigma_j$ and $\{\sigma_i, \sigma_j\} \in H$. \end{thm}
Now we must define a weight for each $\sigma \in \mathcal{N}_{\lambda}(G_{\mu}^{+})$.
\begin{defn} \label{defn:p-weight} For partitions $\mu, \lambda \vdash n$ and $\sigma \in \mathcal{N}_{\lambda}(G_{\mu}^{+})$, we define $\wt_{\mu, \lambda}(\sigma)$ to be the product over all $\{u,v\} \in G_{\mu}^{+} \setminus G_{\mu}$ where the term corresponding to $\{u,v\}$ is \begin{enumerate} \item $-t (1-q^{\operatorname{leg}_{\mu}(u)+1} t^{\operatorname{arm}_{\mu}(u)})$ if $u$ and $v$ form a $G_{\mu}$-descent in $\sigma$, \item $-(1-q^{\operatorname{leg}_{\mu}(u)+1} t^{\operatorname{arm}_{\mu}(u)})$ if $v$ is a nontrivial left-to-right $G_{\mu}$-maximum in $\sigma$, \item $1-t$ if neither of the preceding cases holds and $v$ appears to the left of $u$ in $\sigma$, and \item $q^{\operatorname{leg}_{\mu}(u)+1} t^{\operatorname{arm}_{\mu}(u)} (1-t)$ if neither of the preceding cases holds and $u$ appears to the left of $v$ in $\sigma$. \end{enumerate} \end{defn}
\begin{cor} \begin{align} \omega J_{\mu^{\prime}}(x; q, t) &= t^{-n(\mu^{\prime}) + \binom{\mu_1}{2}} (1-t)^{\mu_1} \sum_{\lambda \vdash n} \frac{p_{\lambda}}{z_{\lambda}} \sum_{\sigma \in \mathcal{N}_{\lambda}(G_{\mu}^{+})} \wt_{\mu, \lambda}(\sigma) \end{align} \end{cor}
\begin{proof} By Theorem \ref{thm:integral-form-chromatic} and \ref{thm:ath-power}, \begin{align} \omega J_{\mu^{\prime}}(x; q, t) &= t^{-n(\mu^{\prime}) + \binom{\mu_1}{2}} (1-t)^{\mu_1} \\ &\sum_{G_{\mu} \subseteq H \subseteq G_{\mu}^{+}} \prod_{\{u,v\} \in H \setminus G_{\mu}} - \left(1 - q^{\operatorname{leg}_{\mu}(u) + 1} t^{\operatorname{arm}_{\mu}(u)}\right) \nonumber \\ &\times \prod_{\{u,v\} \in G_{\mu}^{+} \setminus H} \left(1 - q^{\operatorname{leg}_{\mu}(u)+1}t^{\operatorname{arm}_{\mu}(u)+1}\right) . \nonumber \\ &\times \sum_{\lambda \vdash n} \frac{p_{\lambda}}{z_{\lambda}} \sum_{\sigma \in \mathcal{N}_{\lambda}(H)} t^{\operatorname{inv}_{H}(\sigma)} . \end{align} Now we consider each edge $\{u, v\} \in G_{\mu}^{+} \setminus G_{\mu}$ along with Definition \ref{defn:p-weight}. If $\sigma \in \mathcal{N}_{\lambda}(H)$ for any of the graphs $H$ it must be in $\mathcal{N}_{\lambda}(G_{\mu}^{+})$. If we see a $G$-descent $vu$, then we know that $\{u,v\} \in H$, so we multiply the factor $ - \left(1 - q^{\operatorname{leg}_{\mu}(u) + 1} t^{\operatorname{arm}_{\mu}(u)}\right)$ as well as by a $t$, since we get an $H$-inversion between $u$ and $v$. This yields the factor in (1) of Definition \ref{defn:p-weight}. The second possibility is that $v$ is a nontrivial left-to-right $G_{\mu}$-maximum. Again, this forces $\{u,v\} \in H$ and we get the factor appearing in (2) in Definition \ref{defn:p-weight}. If neither of these situations occurs, then we have freedom to either include or exclude $\{u,v\}$ from $H$. If $u$ is to the left of $v$ in $\sigma$, this results in simply adding the factors \begin{align}
- \left(1 - q^{\operatorname{leg}_{\mu}(u) + 1} t^{\operatorname{arm}_{\mu}(u)}\right) + \left(1 - q^{\operatorname{leg}_{\mu}(u)+1}t^{\operatorname{arm}_{\mu}(u)+1}\right)
\end{align}
which yields the factor in (4) of Definition \ref{defn:p-weight}. If $u$ is to the right of $v$ in $\sigma$, then we create a new $H$-inversion if we include the edge $\{u,v\}$, so we get
\begin{align}
- t\left(1 - q^{\operatorname{leg}_{\mu}(u) + 1} t^{\operatorname{arm}_{\mu}(u)}\right) + \left(1 - q^{\operatorname{leg}_{\mu}(u)+1}t^{\operatorname{arm}_{\mu}(u)+1}\right) = 1-t
\end{align}
which is the factor in (3). \end{proof}
\section{Expansions of Jack polynomials} \label{sec:jack}
In this section, we obtain results for Jack polynomials that are analogous to our results for integral form Macdonald polynomials. Often, the results here are simpler than the results in Section \ref{sec:integral-form}, since they are specializations of our previous results. All results follow directly from the analogous result in Section \ref{sec:integral-form} and the definition of $J_{\mu}^{(\alpha)}(x)$, so we omit proofs.
\subsection{Chromatic symmetric functions}
Recall that \begin{align} \operatorname{hook}^{(\alpha)}_{\mu}(u) = \alpha(\operatorname{leg}_{\mu}(u)+1) + \operatorname{arm}_{\mu}(u). \end{align}
\begin{thm} \label{thm:jack-chromatic} \begin{align} J_{\mu^{\prime}}^{(\alpha)}(x) &= \sum_{G_{\mu} \subseteq H \subseteq G_{\mu}^{+}} X_H(x) \prod_{\{u,v\} \in H \setminus G_{\mu}} \left(-\operatorname{hook}^{(\alpha)}_{\mu}(u) \right) \prod_{\{u,v\} \in G_{\mu}^{+} \setminus H} \left(1 + \operatorname{hook}^{(\alpha)}_{\mu}(u)\right) \end{align} \end{thm}
Let us consider the example $\mu^{\prime} = (2,2,1)$, so $\mu = (3,2)$. The cells of $\mu$ are numbered as follows. \begin{align} \begin{ytableau} 1 & 2 \\ 3 & 4 & 5 \end{ytableau} \end{align} Then $G_{\mu}$ is the following graph. \begin{center} \begin{tikzpicture} \node (1) {1}; \node (2) [right of=1] {2}; \node (3) [right of=2] {3}; \node (4) [right of=3] {4}; \node (5) [right of=4] {5}; \path (1) edge node [right] {} (2) (2) edge node [right] {} (3) (3) edge node [right] {} (4) (5) edge[bend right] node [left] {} (3) (4) edge node [right] {} (5); \end{tikzpicture} \end{center} The formula in Theorem \ref{thm:jack-chromatic} gives \begin{align} J_{\mu^{\prime}}^{(\alpha)}(x) &= \left(1+ \operatorname{hook}^{(\alpha)}_{\mu}(1) \right) \left(1+ \operatorname{hook}^{(\alpha)}_{\mu}(2) \right) X_{G_{\mu}} (x) \\ &+ \left(-\operatorname{hook}^{(\alpha)}_{\mu}(1)\right) \left(1+ \operatorname{hook}^{(\alpha)}_{\mu}(2)\right) X_{G_{\mu} \cup \{1,3\}} (x) \nonumber \\ &+ \left(1+ \operatorname{hook}^{(\alpha)}_{\mu}(1) \right) \left(-\operatorname{hook}^{(\alpha)}_{\mu}(2) \right) X_{G_{\mu} \cup \{2,4\}} (x)\nonumber \\ &+ \left(-\operatorname{hook}^{(\alpha)}_{\mu}(1)\right) \left( -\operatorname{hook}^{(\alpha)}_{\mu}(2) \right) X_{G_{\mu}^{+}} (x)\nonumber \end{align} We have $\operatorname{hook}^{(\alpha)}_{\mu}(1) = \alpha + 1$ and $\operatorname{hook}^{(\alpha)}_{\mu}(2) = \alpha$, which completes the computation.
\subsection{Schur functions}
In \cite{gasharov}, Gasharov gives a formula for the Schur expansion of $X_G(x)$ whenever $G$ is the incomparability graph of a (3+1)-free poset. One can check that all graphs appearing in Theorem \ref{thm:jack-chromatic} meet this condition, so we get a Schur function formula for Jack polynomials. First, we define a new weight function on integral form tableaux.
To compute $\operatorname{wt}_{\mu}^{(\alpha)}(T)$, we begin with 1 and multiply by \begin{itemize} \item $1+\operatorname{hook}^{(\alpha)}_{\mu}(u)$ if $u$ appears immediately left of $v$ in $T$, and \item $-\operatorname{hook}^{(\alpha)}_{\mu}(u)$ if $u$ appears immediately above $v$ in $T$ \end{itemize} for each $\{u,v\} \in G_{\mu}^{+} \setminus G_{\mu}$. In the example depicted in \eqref{integral-tableau}, the pairs $\{u,v\}$ are $\{1,3\}$, $\{2,4\}$, $\{3,5\}$, and $\{4,6\}$. Then the weight is \begin{align} \operatorname{wt}^{(\alpha)}_{\mu}(T) &= \left(1+\operatorname{hook}^{(\alpha)}_{\mu}(2) \right) \left(1+ \operatorname{hook}^{(\alpha)}_{\mu}(4) \right) = (\alpha+1)(2\alpha+1). \end{align}
\begin{cor} For any partition $\mu \vdash n$, \begin{align} J_{\mu^{\prime}}^{(\alpha)}(x) &= \sum_{T \in \operatorname{IFT}_{\mu}} \operatorname{wt}^{(\alpha)}_{\mu}(T) s_{\operatorname{shape}(T)} \end{align} where the sum is over all integral form tableaux of type $\mu$. \end{cor}
As an example, let us revisit the case where $\lambda = (2,2)$ and $\mu = (2,1,1)$. Again, the integral form tableaux of type $(2,1,1)$ and shape $(2,2)$ are as follows. \begin{align} \begin{ytableau} 2 & 4 \\ 1 & 3 \end{ytableau} \hspace{15pt} \begin{ytableau} 2 & 3 \\ 1 & 4 \end{ytableau} \hspace{15pt} \begin{ytableau} 1 & 4 \\ 2 & 3 \end{ytableau} \hspace{15pt} \begin{ytableau} 1 & 3 \\ 2 & 4 \end{ytableau} \end{align} Their respective weights are $1$, $\left(1+\operatorname{hook}^{(\alpha)}_{\mu}(2)\right)$, $-\operatorname{hook}^{(\alpha)}_{\mu}(1)\left(1+\operatorname{hook}^{(\alpha)}_{\mu}(2)\right)$, and $-\operatorname{hook}^{(\alpha)}_{\mu}(1)$. Adding these weights together and using the fact that $\operatorname{hook}^{(\alpha)}_{\mu}(1) = \alpha$ and $\operatorname{hook}^{(\alpha)}_{\mu}(2) = 2\alpha$, we obtain \begin{align}
\left. J_{(3,1)}^{(\alpha)}(x) \right|_{s_{2,2}} &= 1 + 2\alpha + 1 - (2\alpha^2 + \alpha) - \alpha = -2\alpha^2 + 2. \end{align}
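As a sanity check, this coefficient can be recomputed symbolically; the following minimal sketch simply re-adds the four tableau weights, with the hook values $\alpha$ and $2\alpha$ taken from the example above.

\begin{verbatim}
from sympy import symbols, expand

alpha = symbols('alpha')

# hook values for mu = (2,1,1), taken from the example above
hook1 = alpha        # hook^{(alpha)}_mu(1)
hook2 = 2 * alpha    # hook^{(alpha)}_mu(2)

# weights of the four integral form tableaux of type (2,1,1) and shape (2,2)
weights = [1,
           1 + hook2,
           -hook1 * (1 + hook2),
           -hook1]

print(expand(sum(weights)))   # -2*alpha**2 + 2
\end{verbatim}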
\subsection{Power sum symmetric functions}
We can also use Stanley's result in \cite{stanley-chromatic} on the power sum expansion of $X_G$ to obtain a power sum formula for Jack polynomials.
\begin{cor} \label{cor:jack-power} \begin{align}
J_{\mu^{\prime}}^{(\alpha)}(x) &= \sum_{H \subseteq G_{\mu}^{+}} (-1)^{|H|} p_{\lambda(H)} \prod_{\{u,v\} \in H \setminus G_{\mu}} -\operatorname{hook}^{(\alpha)}_{\mu}(u) \\
&= \sum_{H \subseteq G_{\mu}^{+}} (-1)^{\left| H \cap G_{\mu} \right|} p_{\lambda(H)} \prod_{\{u,v\} \in H \setminus G_{\mu}} \operatorname{hook}^{(\alpha)}_{\mu}(u). \nonumber \end{align}
where $|H|$ is the number of edges in $H$ and $\lambda(H)$ is the partition whose parts equal the sizes of the connected components of the graph induced by $H$. \end{cor}
For example, if $\mu^{\prime} = (3,1)$ then $\mu = (2,1,1)$, so $G_{\mu} = \{\{3,4\}\}$ and $G_{\mu}^{+} = \{\{1,2\},\{2,3\},\{3,4\}\}$. If we wish to compute the coefficient of $p_{2,2}$, we note that the only $H \subseteq G_{\mu}^{+}$ that has 2 connected components of size 2 is $H = \{\{1,2\}, \{3,4\}\}$. Therefore \begin{align}
\left. J_{(3,1)}^{(\alpha)}(x) \right|_{p_{2,2}} &= -\operatorname{hook}^{(\alpha)}_{\mu}(1) = -\alpha. \end{align}
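The same coefficient can be obtained by brute force; the sketch below (a minimal illustration, with the hook values for $\mu=(2,1,1)$ hardcoded from the example above) enumerates all subsets $H \subseteq G_{\mu}^{+}$, computes $\lambda(H)$ from the connected components on the vertex set $\{1,2,3,4\}$, and sums the contributions to the coefficient of $p_{2,2}$.

\begin{verbatim}
from itertools import combinations
from sympy import symbols

alpha = symbols('alpha')

vertices = [1, 2, 3, 4]
G_mu = {frozenset({3, 4})}
G_mu_plus = [frozenset({1, 2}), frozenset({2, 3}), frozenset({3, 4})]
# hook^{(alpha)}_mu(u) attached to the edges {1,2} (u=1) and {2,3} (u=2)
hook = {frozenset({1, 2}): alpha, frozenset({2, 3}): 2 * alpha}

def component_sizes(edges):
    """Sizes of the connected components of (vertices, edges), via union-find."""
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            v = parent[v]
        return v
    for e in edges:
        a, b = tuple(e)
        parent[find(a)] = find(b)
    sizes = {}
    for v in vertices:
        r = find(v)
        sizes[r] = sizes.get(r, 0) + 1
    return sorted(sizes.values(), reverse=True)

coeff = 0
for k in range(len(G_mu_plus) + 1):
    for H in combinations(G_mu_plus, k):
        if component_sizes(H) == [2, 2]:            # lambda(H) = (2,2)
            sign = (-1) ** len(set(H) & G_mu)
            prod = 1
            for e in set(H) - G_mu:
                prod *= hook[e]
            coeff += sign * prod
print(coeff)   # -alpha
\end{verbatim}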
\section{Open problems}
\subsection{Cancelation} As the examples we have computed show, there is generally cancelation in our Schur and power sum formulas. We would like to reduce some of this cancelation via involutions or other methods. It would also be interesting to quantify how much cancelation occurs in our current formulas. For example, one could hope to rigorously compare the amount of cancelation in our Schur formula with the amount of cancelation that comes from computing the Schur expansion using the inverse Kostka numbers.
\subsection{Hanlon's Conjecture} A specific cancelation problem comes from a conjecture of Hanlon. Given any bijection $T_0: \lambda \to \{1,2,\ldots,n\}$, let $\operatorname{RS}(T_0)$ and $\operatorname{CS}(T_0)$ be the row and column stabilizers of $T_0$; these are the subgroups of $\mathfrak{S}_n$ that consist of permutations that only permute elements that are in the same row or column of $T_0$, respectively. Hanlon conjectured that there is a function $f: \operatorname{RS}(T_0) \times \operatorname{CS}(T_0) \to \mathbb{N}$ such that \begin{align} J_{\lambda}^{(\alpha)}(x) = \sum_{\substack{\sigma \in \operatorname{RS}(T_0) \\ \tau \in \operatorname{CS}(T_0)}} \alpha^{f(\sigma, \tau)} \epsilon(\tau) p_{\operatorname{type}(\sigma \tau)} \end{align} where $\epsilon(\tau)$ is the sign of the permutation $\tau$ and the type of a permutation is the partition associated to its conjugacy class. This conjecture has been proved in the $\alpha=2$ case \cite{feray-jack}. It bears some intriguing similarities to Corollary \ref{cor:jack-power}, although we have not been able to clarify these similarities as of yet. We hope that Corollary \ref{cor:jack-power} may lead to a new approach to Hanlon's Conjecture.
\subsection{Specializations} One classical property of the Jack polynomials is that they recover other well-known functions under specialization, e.g.\ $\alpha = 1$ leads to a scalar multiple of the Schur functions and $\alpha = 2$ recovers the zonal spherical functions \cite{macdonald}. We would like these specializations to be evident in our formulas for $J_{\mu}^{(\alpha)}$, but that is not currently the case.
\subsection{Schur positivity} It is a conjecture of Haglund that \begin{align} \frac{J_{\mu}(x;q,q^k)}{(1-q)^n} \end{align} is Schur positive for any $\mu \vdash n$ and any positive integer $k$. Similarly, we have noticed that the specialization \begin{align} \frac{t^{k (n(\mu^{\prime}))}J_{\mu}(x;t^{-k},t)}{(1-t)^n} \end{align} is Schur positive and palindromic for $k \in \mathbb{N}$. We hope that our work may inspire progress on these conjectures.
\subsection{LLT polynomials} LLT polynomials were defined by Lascoux, Leclerc, and Thibon in \cite{llt} and are prominent objects in the study of Macdonald polynomials. The LLT polynomials associated to collections of single cells are a close relative of chromatic quasisymmetric functions; in fact, every such LLT polynomial can be written \begin{align} LLT_G(x; t) &= \sum_{\kappa : V(G) \to \mathbb{Z}_{>0}} x^{\kappa} t^{\operatorname{asc}(\kappa)} \end{align} for some $G$ which is the incomparability graph of a unit interval order. Note that the only difference between this definition and the definition for $X_G(x; t)$ is that we do not insist that $\kappa$ is a proper coloring in this case.
The relationship between $LLT_G(x; t)$ and $X_G(x; t)$ can also be captured using plethysm \cite{plethysm}. Specifically, if $G$ has $n$ vertices then \begin{align} \label{chromatic-llt} X_G(x; t) = (t-1)^{-n} LLT_G[(t-1)x; t] \end{align} where the brackets imply plethystic substitution and $x$ stands for the plethystic sum $x_1+x_2+\ldots$. This identity is equivalent to Proposition 3.4 in \cite{carlsson-mellit}, which the authors of that paper prove using results in \cite{hhl}. Equation \eqref{chromatic-llt} can be rewritten as \begin{align} LLT_G(x; t) &= (t-1)^n X_G [ x/(t-1); t ] . \end{align} As a result, every formula for a chromatic quasisymmetric function (of the incomparability graph of a unit interval order) leads to a plethystic formula for the corresponding LLT polynomial. This is especially nice in the power sum case, as plethysm is easiest to compute in the power sum basis. From Theorem \ref{thm:ath-power}, we get \begin{align} \omega LLT_G(x; t) &= (t-1)^n \sum_{\lambda \vdash n} \frac{p_{\lambda}[x/(t-1); t]}{z_{\lambda}} \sum_{\sigma \in \mathcal{N}_{\lambda}(G)} t^{\operatorname{inv}_G(\sigma)} \\ &= (t-1)^n \sum_{\lambda \vdash n} \frac{p_{\lambda}}{(t^{\lambda_1}-1) (t^{\lambda_2}-1) \ldots z_{\lambda}} \sum_{\sigma \in \mathcal{N}_{\lambda}(G)} t^{\operatorname{inv}_G(\sigma)} \\ \label{llt} &= \sum_{\lambda \vdash n} \frac{(t-1)^{n-\ell(\lambda)} p_{\lambda}}{[\lambda_1]_t [\lambda_2]_t \ldots z_{\lambda}} \sum_{\sigma \in \mathcal{N}_{\lambda}(G)} t^{\operatorname{inv}_G(\sigma)}. \end{align} where we use the usual notation for the $t$-analogue, $[n]_t = 1 + t + \ldots + t^{n-1}$. Furthermore, Shareshian and Wachs proved that the sum $\sum_{\sigma \in \mathcal{N}_{\lambda}(G)} t^{\operatorname{inv}_G(\sigma)}$ is divisible by the product $[\lambda_1]_t [\lambda_2]_t \ldots$ \cite{shareshian-wachs}, which we describe below.
Given a partition $\lambda \vdash n$ and a graph $G$ on $n$ vertices that is the incomparability graph of a unit interval order, let $\widetilde{\mathcal{N}}_{\lambda}(G)$ be the set of permutations $\sigma \in \mathfrak{S}_n$ such that when we break $\sigma$ (presented in one-line notation) into segments of lengths $\lambda_1, \lambda_2, \ldots$ then \begin{outline} \1 the leftmost entry in each segment is the smallest entry in the segment, and \1 within each segment, if we see consecutive entries $\sigma_i$, $\sigma_{i+1}$ such that $\sigma_i < \sigma_{i+1}$ then we must have $\{\sigma_i, \sigma_{i+1}\} \notin G$. \end{outline} We can use Proposition 7.8 in \cite{shareshian-wachs} to rewrite \eqref{llt} as \begin{align} \omega LLT_G(x; t) &= \sum_{\lambda \vdash n} \frac{(t-1)^{n-\ell(\lambda)} p_{\lambda}}{z_{\lambda}} \sum_{\sigma \in \widetilde{\mathcal{N}}_{\lambda}(G)} t^{\operatorname{inv}_G(\sigma)}. \end{align}
It would be valuable to see if there is a deeper relationship between LLT polynomials (possibly those that do not correspond to collections of single cells) and chromatic quasisymmetric functions. \subsection{A non-symmetric analogue}
The \emph{integral form non-symmetric Macdonald polynomials} $\{ \mathcal{E}_{\gamma}(x;q,t) : \gamma \in \mathbb{N}^n\}$ are polynomials in $x_1, x_2, \ldots, x_n$ with coefficients in $\mathbb{Q}(q,t)$ that form a basis for the polynomial ring in variables $x_1, x_2, \ldots, x_n$. They are closely related to double affine Hecke algebras \cite{cherednik} and are more easily extended to other root systems than the symmetric case \cite{cherednik-nsym}.
In \cite{hhl-nsym}, Haglund, Haiman, and Loehr obtain a combinatorial formula for $\mathcal{E}_{\gamma}(x;q,t)$ that is very similar to their formula for $J_{\mu}(x;q,t)$. In fact, it is so similar that our main result goes through in that setting, albeit with a new function replacing the chromatic quasisymmetric function.
\begin{defn} Given a graph $G$ with its vertices numbered $1,2,\ldots,n$ and a positive integer $r \leq n$, we define the chromatic non-symmetric function to be \begin{align} \mathcal{X}_{G,r}(x;t) &= \sum_{\kappa : V(G) \to \{1,2,\ldots,r\}} x^{\kappa} t^{\operatorname{asc}(\kappa)} \end{align} where the sum is over proper colorings $\kappa$ such that $\kappa(v) = n - v + 1$ if $n-r+1 \leq v \leq n$. \end{defn}
In \cite{hhl-nsym}, the authors define new notions of arms, legs, and attacking pairs for $\gamma \in \mathbb{N}^n$. We refer the reader to \cite{hhl-nsym} for this notation. We define the attacking and augmented attacking graphs $G_{\gamma}$ and $G_{\gamma}^{+}$ analogously for these new notions of attacking pairs and descents for $\gamma \in \mathbb{N}^n$. Our main theorem goes through for these new definitions.
\begin{prop} \label{prop:chromatic-nsym} \begin{align} \mathcal{E}_{\gamma}(x;q,t) &= t^{-\sum_{u \in \gamma} \operatorname{arm}_{\gamma}(u)} \sum_{G_{\gamma} \subseteq H \subseteq G_{\gamma}^{+}} \mathcal{X}_{H}(x;t) \\ &\times \prod_{\{u,v\} \in H \setminus G_{\gamma}} - \left(1 - q^{\operatorname{leg}_{\gamma}(u) + 1} t^{\operatorname{arm}_{\gamma}(u)}\right) \nonumber \\ &\times \prod_{\{u,v\} \in G_{\gamma}^{+} \setminus H} \left(1 - q^{\operatorname{leg}_{\gamma}(u)+1}t^{\operatorname{arm}_{\gamma}(u)+1}\right) . \nonumber \end{align} \end{prop}
There are a number of ways that this setting could be explored in future work. For example, it seems like the chromatic non-symmetric functions appearing in Proposition \ref{prop:chromatic-nsym} expand positively into key polynomials, also known as Demazure characters \cite{reiner-shimozono}. This implies that the chromatic non-symmetric functions may also appear as a character. One could also try to understand the coefficients of the expansion of chromatic non-symmetric functions into key polynomials and then use these coefficients along with Proposition \ref{prop:chromatic-nsym} to obtain an expansion of $\mathcal{E}_{\gamma}(x;q,t)$ into key polynomials; such an expansion would be unique because key polynomials are linearly independent.
\end{document}
\begin{document}
\title[] {Existence and uniqueness of solutions to the Bogomol'nyi equation on graphs}
\author{Yuanyang Hu} \address{School of Mathematics and Statistics,
Henan University, Kaifeng, Henan 475004, P. R. China.} \email{[email protected]}
\date{}
\begin{abstract} Let $G=(V,E)$ be a connected finite graph. We study the Bogomol'nyi equation \begin{equation*}
\Delta u= \mathrm{e}^{u}-1 +4 \pi \sum_{s=1}^{k} n_s \delta_{z_{s}} \quad \text { on } \quad G, \end{equation*} where $z_1, z_2,\dots, z_k$ are arbitrarily chosen distinct vertices on the graph, $n_j$ is a positive integer, $j=1,2,\cdots, k$ and $\delta_{z_{s}}$ is the Dirac mass at $z_s$. We obtain a necessary and sufficient condition for the existence and uniqueness of solutions to the Bogomol'nyi equation. \end{abstract}
\maketitle
\textit{ \footnotesize Mathematics Subject Classification (2010)} {\scriptsize 35A01, 35R02}.
\textit{ \footnotesize Key words:} {\scriptsize Bogomol'nyi equation, finite graph, equation on graphs}
\section{Introduction} Magnetic vortices play important roles in many areas of theoretical physics, including condensed-matter physics, cosmology, superconductivity theory, optics, electroweak theory, and the quantum Hall effect. Wang and Yang \cite{WY} established a necessary and sufficient condition for the existence of multivortex solutions of the Bogomol'nyi system. Recently, $(2+1)$-dimensional Chern-Simons gauge theory and the generalized Abelian Higgs model have attracted extensive attention. Topological, non-topological and doubly periodic multivortices for the generalized self-dual Chern-Simons model and the generalized Abelian Higgs model were established over the past two decades; see, for example, \cite{CI, Ha, T, TY, Y} and the references therein.
Analysis on graphs has attracted a considerable amount of attention over the past decade; see, for example, \cite{ ALY, Bdd, GJ, GC, Hu, HWY, HLY, LCT, LP, TZZ, WN, WC} and the references therein. In particular, Huang, Lin and Yau \cite{HLY} proved the existence of solutions to mean field equations on graphs.
Inspired by the work of Huang-Lin-Yau \cite{HLY}, we study the Bogomol'nyi equation
\begin{equation}\label{E}
\Delta u= \mathrm{e}^{u}-1 +4 \pi \sum_{s=1}^{k} n_s \delta_{z_{s}} ,
\end{equation} on $G$, where $G=(V,E)$ is a connected finite graph, $V$ denotes the vertex set and $E$ denotes the edge set. Let $\mu: V \to (0,+\infty)$ be a finite measure, and let $|V|= \text{Vol}(V)=\sum \limits_{x \in V} \mu(x)$ be the volume of $V$.
We state our main result as follows. \begin{theorem}\label{t1}
The equation \eqref{E} admits a unique solution if and only if $n_1 + n_2 + \cdots + n_k= N <\frac{|V|}{4 \pi} $. \end{theorem}
The paper is organized as follows. In Section 2, we present some basic results that will be used frequently in the following sections. In Section 3, we give the proof of Theorem \ref{t1}.
\section{Preliminary results}
For each edge $xy \in E$, we assume that its weight $w_{xy}>0$ and that $w_{xy}=w_{yx}$. For any function $u: V \to \mathbb{R}$, the Laplacian of $u$ is defined by \begin{equation}\label{1}
\Delta u(x)=\frac{1}{\mu(x)} \sum_{y \sim x} w_{y x}(u(y)-u(x)), \end{equation} where $y \sim x$ means $xy \in E$. The gradient form of $u$ reads \begin{equation}\label{g}
\Gamma(u, v)(x)=\frac{1}{2 \mu(x)} \sum_{y \sim x} w_{x y}(u(y)-u(x))(v(y)-v(x)). \end{equation} We denote the length of the gradient of $u$ by \begin{equation*}
|\nabla u|(x)=\sqrt{\Gamma(u,u)(x)}=\left(\frac{1}{2 \mu(x)} \sum_{y \sim x} w_{x y}(u(y)-u(x))^{2}\right)^{1 / 2}. \end{equation*} Denote, for any function $ u: V \rightarrow \mathbb{R}
$, an integral of $u$ on $V$ by $\int \limits_{V} u d \mu=\sum\limits_{x \in V} \mu(x) u(x)$. For $p \ge 1$, denote $|| u ||_{p}:=(\int \limits_{V} |u|^{p} d \mu)^{\frac{1}{p}}$. As in \cite{ALY}, we define a Sobolev space and a norm by \begin{equation*}
W^{1,2}(V)=\left\{u: V \rightarrow \mathbb{R}: \int \limits_{V} \left(|\nabla u|^{2}+u^{2}\right) d \mu<+\infty\right\}, \end{equation*} and \begin{equation*}
\|u\|_{H^{1}(V)}= \|u\|_{W^{1,2}(V)}=\left(\int \limits_{V}\left(|\nabla u|^{2}+u^{2}\right) d \mu\right)^{1 / 2}. \end{equation*}
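To make the discrete operators above concrete, here is a minimal numerical sketch on a toy weighted graph (the weights, measure and test function below are arbitrary choices made only for illustration); it evaluates the Laplacian \eqref{1}, the gradient norm, and checks that $\int_V \Delta u \, d\mu = 0$.

\begin{verbatim}
import math

# a toy connected graph on three vertices with symmetric weights w and measure mu
vertices = [0, 1, 2]
w = {(0, 1): 1.0, (1, 0): 1.0, (1, 2): 2.0, (2, 1): 2.0}
mu = {0: 1.0, 1: 0.5, 2: 2.0}

def neighbors(x):
    return [y for y in vertices if (x, y) in w]

def laplacian(u, x):
    # Delta u(x) = (1/mu(x)) * sum_{y ~ x} w_{yx} (u(y) - u(x))
    return sum(w[(y, x)] * (u[y] - u[x]) for y in neighbors(x)) / mu[x]

def grad_norm(u, x):
    # |nabla u|(x) = ( (1/(2 mu(x))) * sum_{y ~ x} w_{xy} (u(y) - u(x))^2 )^{1/2}
    return math.sqrt(sum(w[(x, y)] * (u[y] - u[x]) ** 2 for y in neighbors(x)) / (2 * mu[x]))

u = {0: 0.0, 1: 1.0, 2: -1.0}
print([laplacian(u, x) for x in vertices])
print([grad_norm(u, x) for x in vertices])
# the Laplacian integrates to zero over V: sum_x mu(x) * Delta u(x) = 0
print(sum(mu[x] * laplacian(u, x) for x in vertices))
\end{verbatim}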
The following Sobolev embedding and Maximum principle will be used later in the paper. \begin{lemma}\label{21}
{\rm (\cite[Lemma 5]{ALY})} Let $G=(V,E)$ be a finite graph. The Sobolev space $W^{1,2}(V)$ is precompact. Namely, if $\{u_j\}$ is bounded in $W^{1,2}(V)$, then there exists some $u \in W^{1,2}(V)$ such that, up to a subsequence, $u_j \to u$ in $W^{1,2}(V)$. \end{lemma}
\begin{lemma}\label{22}
{\rm (\cite[Lemma 4.1]{HLY})} Let $G=(V,E)$, where $V$ is a finite set, and let $K \ge 0$ be a constant. If a real function $u: V\to \mathbb{R}$ satisfies $(\Delta-K)u(x) \ge 0$ for all $x\in V$, then $u(x) \le 0$ for all $x \in V $. \end{lemma}
\section{The proof of Theorem \ref{t1}} Throughout this section, we assume that $N=\sum\limits_{s=1}^{k} n_s$ and that $f$ is a function on $V$ satisfying $\int \limits_{V} f d \mu=1$. It is well known that there exists a solution to the Poisson equation \begin{equation}\label{2}
\Delta u_{0}=-4 \pi N f +4 \pi \sum_{j=1}^{k} n_j \delta_{z_{j}}, \end{equation} which is unique up to an additive constant. Assume that $u$ is a solution of \eqref{E} and let $v:=u-u_0$; then $v$ satisfies \begin{equation}\label{3}
\Delta v=e^{v+u_0 }-1+ 4 \pi N f . \end{equation}
Define an operator $P:=\Delta-e^{u_0}-1: W^{1,2}(V) \to L^{2}(V)$, then we have the following proposition. \begin{proposition} \label{p1}
$P$ is bijective. \end{proposition} \begin{proof}
For any $u,v \in H^{1}(V)$, define
$$B(u,v):=\int\limits_{V} \Gamma (u,v)+(e^{u_0} +1) uv \,d\mu.$$ By the Cauchy--Schwarz and H\"{o}lder inequalities, we deduce that \begin{equation}
\begin{aligned}
|B(u, v)| & \leq \int_{V} |\Gamma(u, v)|+\left(e^{u_{0}}+1\right)|u||v| d \mu \\
& \leq \int_{V} \sum_{y \sim x} \frac{w_{x y}}{2 \mu(x)}|u(y)-u(x)||v(y)-v(x)| d \mu+\max _{V}\left(e^{u_{0}}+1\right)\left(\int_{V} u^{2} d \mu \right)^{\frac{1}{2}}\left(\int_{V} v^{2} d \mu\right)^{\frac{1}{2}} \\
& \leq \int_{V}\left(\sum_{y \sim x} \frac{w_{x y}}{2 \mu(x)}(u(y)-u(x))^{2}\right)^{\frac{1}{2}}\left(\sum_{y \sim x} \frac{w_{x y}}{2 \mu(x)}(v(y)-v(x))^{2}\right)^{\frac{1}{2}} d\mu+C_1 ||u||_{2}||v||_{2} \\
&=\int_{V}[\Gamma(u, u)]^{\frac{1}{2}}[\Gamma(v, v)]^{\frac{1}{2}} d \mu+C_1 ||u||_{2}||v||_{2} \\
&=\int_{V} | \nabla u|| \nabla v|d \mu+C_1 || u ||_{2} ||v||_{2} \\
& \leq\left(\int_{V}|\nabla u|^{2} d \mu\right)^{\frac{1}{2}}\left(\int_{V}|\nabla v|^{2} d \mu \right)^{\frac{1}{2}}+C_{1}||u||_{2}||v||_{2},
\end{aligned} \end{equation} where $C_1=\max\limits_{V} (e^{u_0}+1)$. By \eqref{g}, we have \begin{equation}\label{6}
B(u,u)\ge \int\limits_{V} |\nabla u|^2 + \min\limits_{V}(e^{u_0}+ 1) u^2 d\mu. \end{equation} Therefore, we can find constants $C>0$ and $c>0$ such that \begin{equation}
|B(u,v)| \le C||u||_{H^{1}(V)} ||v||_{H^{1}(V)}, \end{equation} and \begin{equation}
B(u,u) \ge c||u||^{2}_{H^{1}(V)}. \end{equation}
It is easy to check that $B:~H^{1}(V)\times H^{1}(V)\to \mathbb{R}$ is a bilinear mapping. Thus, by the Lax-Milgram Theorem, for any function $g$ on $V$, there exists a unique element $u\in H^{1}(V)$ such that \begin{equation}\label{5}
B(u,v)=-\int\limits_{V} gv d \mu, \end{equation} for any $v\in H^{1}(V)$. Since \eqref{5} is equivalent to $Pu=g$, we see that $P:H^{1}(V)\to L^{2}(V)$ is bijective.\end{proof} By Proposition \ref{p1}, we can define the inverse operator of $P$ by $P^{-1}$. Furthermore, we have the following proposition. \begin{proposition}\label{p2}
$P^{-1}: L^2 \to L^2$ is compact. \end{proposition} \begin{proof}
For any $g \in L^{2}(V)$, by Proposition \ref{p1}, there exists $u\in H^{1}(V)$ such that $Pu=g$, which is equivalent to
\begin{equation*}
B(u,v)=-\int\limits_{V} gv d \mu,
\end{equation*} for any $v \in H^{1}(V)$. By Cauchy's inequality with $\epsilon>0$, we see that
\begin{equation*}
B(u,u)=-\int\limits_{ V} gu d\mu \le \frac{1}{4 \epsilon}\int\limits_{ V} g^2 d\mu+ \epsilon \int\limits_{ V} u^2 d\mu.
\end{equation*} Thus, from \eqref{6}, we conclude that \begin{equation}\label{7}
\int\limits_{ V} |\nabla u|^2+C_2 u^2 d \mu \le \frac{1}{4 \epsilon}\int\limits_{ V} g^2 d\mu+ \epsilon \int\limits_{ V} u^2 d\mu, \end{equation} where $C_2:=\min\limits_{V}(e^{u_0}+1)>0$. Taking $\epsilon=\frac{C_2}{2}$ in \eqref{7}, we deduce that \begin{equation*}
\int\limits_{ V} |\nabla u|^2+\frac{C_2}{2} u^2 d \mu \le \frac{1}{2 C_2}\int\limits_{ V} g^2 d\mu. \end{equation*} Therefore, there exists $C_3>0$ such that \begin{equation*}
||u||^{2}_{H^{1}(V)} \le C_3 \int\limits_{V} g^{2} d\mu. \end{equation*} Thus, by Lemma \ref{21}, we know that $P^{-1}: L^2 \to L^2$ is compact. \end{proof}
Next, we give a necessary condition for equation \eqref{3} to have a solution.
\begin{lemma}\label{31}
If the equation \eqref{3} admits a solution, then $4\pi N< |V|.$
\end{lemma} \begin{proof}
Assume that $v$ is a solution of the equation \eqref{3}, then \begin{equation}
0=\int\limits_{ V} \Delta v d \mu=\int\limits_{V} e^{v+u_0}-1+4\pi Nf d\mu=\int\limits_{ V}e^{v+u_0 } d\mu -|V| +4\pi N.
\end{equation}
Thus, we have $4\pi N < |V|$. \end{proof}
The following lemma gives the uniqueness of solutions of the equation \eqref{3}. \begin{lemma}\label{32}
There exists at most one solution of \eqref{3}. \end{lemma} \begin{proof}
If $u$ and $v$ both satisfy equation \eqref{3}, then by the mean value theorem there exists $\xi$ such that
\begin{equation*}
\Delta u- \Delta v =e^{u+u_0}-e^{v+u_0}=e^{\xi+u_0}(u-v).
\end{equation*} Let $M=\max\limits_{V} (u-v)=(u-v) (x_0)$. We claim that $M\le 0$. Otherwise, $M>0$. Thus, we deduce that \begin{equation*}
\Delta (u-v) (x_0)=[e^{\xi+ u_0} (u-v)](x_0)>0. \end{equation*} By \eqref{1}, we have $\Delta(u-v)(x_0)\le 0.$ This is a contradiction. Thus, we have $u(x)\le v(x)$ on $V$. Exchanging the roles of $u$ and $v$, we also obtain $v(x)\le u(x)$ on $V$. Therefore, we obtain $u\equiv v$ on $V$. \end{proof}
\begin{lemma}\label{33}
Suppose that $|V|>4\pi N$. Then there exist $U$ and $Z$ satisfying $U\ge Z$ such that \begin{equation*}
\Delta U- e^{U+u_0}-(4\pi N f -1)\le 0,
\end{equation*} and \begin{equation*}
\Delta Z- e^{Z+u_0}-(4\pi N f -1)\ge 0. \end{equation*} \end{lemma} \begin{proof}
By Propositions \ref{p1} and \ref{p2}, we see that $-P^{-1}$ is a compact operator. If $(1+P^{-1}) U=0$, then we deduce that $(\Delta-e^{u_0}) U=0.$ By a similar argument as in Lemma \ref{32}, we obtain $U=0$. Thus, by the Fredholm alternative, we deduce that there exists $U\in H^{1}(V)$ such that $$(1+P^{-1}) U=P^{-1} (4\pi N f-1).$$ Thus, $U$ satisfies $$(\Delta-e^{u_0}) U=4\pi N f- 1.$$ Therefore, since $e^{U}\ge U$, we obtain \begin{equation*}
\Delta U-e^{U+u_0}-(4\pi N f-1)\le \Delta U-e^{U+u_0}-(4\pi N f -1)+e^{u_0}(e^{U}-U)=0.
\end{equation*}
Let $Z$ be a solution of \begin{equation*}
\Delta Z= 4\pi N f- \frac{4\pi N }{|V|} \end{equation*} satisfying $Z \le \log(1-\frac{4\pi N }{|V|}) -u_0$; such a choice is possible since solutions of this Poisson equation are unique up to an additive constant. It is easy to see that \begin{equation*}
\Delta Z-e^{Z+u_0}-(4\pi N f -1)=1-\frac{4\pi N }{|V|}-e^{Z+u_0} \ge 0. \end{equation*} Subtracting the differential inequality satisfied by $U$, we obtain \begin{equation*}
\Delta Z-\Delta U\ge e^{Z+u_0}-e^{U+u_0}. \end{equation*} By the mean value theorem, there exists $\eta$ such that $$\Delta Z-\Delta U\ge e^{Z+u_0}-e^{U+u_0}=e^{\eta +u_0}(Z-U). $$ By a similar argument as in Lemma \ref{32}, we deduce that $Z\le U$ on $V$. \end{proof}
\begin{lemma}\label{34}
Suppose that $N<\frac{|V|}{4 \pi}$ and $u_0$ satisfies the equation \eqref{2}. Then the equation \eqref{3} admits a unique solution $W(x)$. \end{lemma} \begin{proof}
Choose a constant $K> \max\limits_{V}e^{u_0+U}$ and define a sequence $\{w_n \}$ by the iterative scheme
\begin{equation}\label{8}
\begin{aligned}
(\Delta -K)w_{n+1}&=e^{u_0 +w_n}- K w_n+ 4\pi N f- 1, n=0,1,2,\cdots, \\
w_0&= U.
\end{aligned}
\end{equation}
We now prove \begin{equation*}
w_k \le U~ \text{for}~k\ge 1 \end{equation*} by induction. By \eqref{8}, we see that \begin{equation*}
\Delta(w_1 - w_0)\ge K (w_1- w_0).
\end{equation*} By a similar argument as in Lemma \ref{32}, we can show that $w_1 \le U$ on $V$. Suppose that $w_k\le U$ on $V$ for some integer $k\ge 1$; then \begin{equation}
\begin{aligned}
(\Delta- K) (w_{k+1} -U) &\ge e^{u_0+w_k}-e^{u_0+ U}+K (U-w_k) \\
&= (K-e^{u_0 +\lambda} )( U- w_k)\\
& \ge (K-e^{u_0 +U}) (U- w_k) \\
& \ge 0,
\end{aligned} \end{equation} where $w_k \le \lambda \le U$. By Lemma \ref{22}, we deduce that $w_{k+1}\le U$ on $V$.
We next show that \begin{equation*}
w_{n+1} \le w_n \le \dots \le w_0 \end{equation*} for any $n\ge 1$ by induction.
The base case $w_1\le w_0=U$ was established above. Assume that $w_k\le w_{k-1}$ on $V$ for some integer $k\ge 1$; then we deduce that \begin{equation*}
\begin{aligned}
\Delta(w_{k+1} - w_{k})- K(w_{k+1} - w_{k}) &=(e^{u_0+ w_k}- e^{u_0+w_{k-1}})-K (w_{k} - w_{k-1}) \\
&=(e^{u_0 + \eta} -K)(w_{k}-w_{k-1})\\
&\ge 0 ,
\end{aligned} \end{equation*} where $w_{k}\le \eta \le w_{k-1}$. By Lemma \ref{22}, we get $w_{k+1} \le w_{k}$ on $V$.
Finally, by Lemma \ref{33}, $Z\le U=w_0$. Suppose $Z\le w_{k}$ for some integer $k\ge 0$; then \begin{equation*}
\begin{aligned}
(\Delta-K)(Z-w_{k+1})&\ge e^{u_0 +Z}-e^{u_0+ w_{k}}- K(Z-w_{k})\\
&=(e^{u_0 +\xi } -K)(Z-w_k)\\
&\ge 0,
\end{aligned} \end{equation*} where $Z\le \xi \le w_k$. By Lemma \ref{22}, we have $Z \le w_{k+1}$ on $V$. Therefore, the sequence $\{w_n\}$ is nonincreasing and bounded below, and we can define $W(x):=\lim\limits_{n\to +\infty} w_{n} (x)$. Clearly, $Z\le W \le U$ on $V$. Letting $n\to +\infty$ in \eqref{8}, we deduce that $W(x)$ satisfies \eqref{3}. By Lemma \ref{32}, $W(x)$ is the unique solution of the equation \eqref{3}. \end{proof}
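The iterative scheme \eqref{8} also suggests a simple numerical procedure: on a finite graph each step is a linear solve with the matrix of $\Delta-K$. The sketch below is only an illustration of this idea on a toy path graph (the weights, measure, $u_0$, $f$ and $N$ are arbitrary choices, and the iteration is started from zero rather than from the supersolution $U$ used in the proof); it iterates until the update stalls and then checks the residual of the regular equation \eqref{3}.

\begin{verbatim}
import numpy as np

# toy data: a path graph on 4 vertices with unit weights and unit vertex measure
n = 4
W = np.zeros((n, n))
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0
mu = np.ones(n)

# graph Laplacian as a matrix: (L u)_x = (1/mu_x) * sum_y w_{xy} (u_y - u_x)
L = (W - np.diag(W.sum(axis=1))) / mu[:, None]

# arbitrary illustrative choices for u_0, f and 4*pi*N (which must be < |V| = 4)
u0 = np.array([0.1, -0.2, 0.0, 0.1])
f = np.ones(n) / mu.sum()          # so that the integral of f over V equals 1
fourpiN = 1.0

K = 2.0 * np.exp(u0).max()         # a rough choice of the constant K in the scheme
A = L - K * np.eye(n)

w = np.zeros(n)                    # the proof starts from U; here we simply start from 0
for _ in range(200):
    rhs = np.exp(u0 + w) - K * w + fourpiN * f - 1.0
    w_new = np.linalg.solve(A, rhs)
    if np.max(np.abs(w_new - w)) < 1e-12:
        w = w_new
        break
    w = w_new

# residual of the regular equation Delta w = e^{w+u_0} - 1 + 4*pi*N*f
print(np.max(np.abs(L @ w - (np.exp(u0 + w) - 1.0 + fourpiN * f))))
\end{verbatim}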
\begin{lemma}\label{35}
There exists at most one solution of the equation \eqref{E}. \end{lemma} \begin{proof}
For any two solutions $u$ and $v$ of the equation \eqref{E}, by the mean value theorem there exists $\zeta$ such that
\begin{equation}\label{m}
\begin{aligned}
\Delta(u-v) &= e^{u}-1- (e^{v}- 1) \\
&=e^{\zeta} (u-v).
\end{aligned}
\end{equation} Let $M:=\max\limits_{V} (u-v)=(u-v) (x_0)$. We assert that $M\le 0$. Otherwise, $M>0$. Then, by \eqref{m}, we conclude that $$\Delta(u-v)(x_0)>0.$$ By \eqref{1}, we conclude that $\Delta (u-v) (x_0)\le 0,$ which is a contradiction. Thus, we obtain $u\le v$ on $V$. By a similar discussion as above, we obtain $u\ge v$ on $V$. Therefore, we know that $u \equiv v$ on $V$. \end{proof}
\begin{proof}[Proof of Theorem \ref{t1}] The desired conclusion follows directly from Lemmas \ref{31}, \ref{34} and \ref{35}. \end{proof}
\end{document}
\begin{document}
\renewcommand{0.2}{0.2} \renewcommand{0.8}{0.8} \renewcommand{0.8}{0.8} \renewcommand{0.3}{0.3}
\title{High-precision evaluation of the Vibrational spectra of long-range molecules} \author{C. Tannous} \affiliation{Laboratoire de Magnétisme de Bretagne, UPRES A CNRS 6135, Université de Bretagne Occidentale, BP: 809 Brest CEDEX, 29285 FRANCE} \author{J. Langlois} \affiliation{Laboratoire des Collisions Electroniques et Atomiques, Université de Bretagne Occidentale, BP: 809 Brest CEDEX, 29285 FRANCE}
\date{March 13, 2001}
\begin{abstract} Vibrational spectra of long-range molecules are determined accurately and to arbitrary accuracy with the Canonical Function Method. The energy levels of the $0^-_g$ and $1_u$ electronic states of the $^{23}{\rm Na}_2$ molecule are determined from the Ground state up to the continuum limit. The method is validated by comparison with previous results obtained by Stwalley et al. \cite {Stwalley 78} using the same potential and Trost et al. \cite {Trost 98} whose
work is based on the Lennard-Jones potential adapted to long-range molecules. \pacs{PACS numbers: 03.65.-w,31.15.Gy,33.20.Tp}
\end{abstract}
\maketitle
\section{Introduction} A new kind of high precision molecular spectroscopy is probing long-range forces between constituent atoms of molecules. This spectroscopy is based on using light to combine two colliding cold-trapped atoms into a tenuous molecule.
The burgeoning field of ``Photoassociation Spectroscopy'' is allowing very precise measurements of the lifetimes of the first excited states of alkali atoms and the observation of retardation effects and long-range forces. It provides a means of probing accurately the weak interaction between these atoms \cite{Jones 96}.
The agreement between theory and experiment requires simultaneously a highly accurate representation of the interaction potential as well as a highly reliable method for the calculation of the corresponding energy levels.
Since our aim is directed towards the latter problem, we make use of an alternative method to evaluate the energy levels for the potential at hand instead of comparing to the experimental values in order to assess the validity of our results.
The determination of the vibrational spectra of these very tenuous molecules is extremely subtle, especially for the highest levels, which play an important role in photoassociation spectroscopy. Thus careful control of accuracy is needed in order to diagonalise the Hamiltonian reliably for all energies, including those close to the dissociation limit.
The magnitudes of potential energy, distance and mass values in these kinds of molecules stand several orders of magnitude above or below what is encountered in ordinary short-range molecules.
For instance, the typical intramolecular potential well depth at the equilibrium distance of about 100 $a_0$ (Bohr) is a fraction of a cm$^{-1}$, while the reduced mass is several tens of thousands of electron masses. All these extreme values require special numerical techniques in order to avoid roundoff, divergences, numerical instability and ill-conditioning during processing.
The method we use in this work adapts well to this extreme situation with the proviso of employing a series of isospectral scaling transformations we explain below.
Since accuracy and its control are of paramount importance in this work, the canonical function method (CFM) \cite {Kobeissi 82} is an excellent candidate because it bypasses the calculation of the eigenfunctions. This avoids losing accuracy associated with the numerical calculation specially with rapidly oscillating wave functions of highly excited states.
This method evaluates the full spectrum of the Hamiltonian, for any potential, to any desired accuracy up to the continuum limit. It has been tested successfully on long-range and short-range potentials for atomic and molecular states. It describes faithfully bound and free states and is straightforward to code.
This work is organised as follows: Section 2 is a description of the CFM and highlights the details of the method. Section 3 presents the results we obtain for the vibrational levels of the $^{23}{\rm Na}_2$ molecule $0^-_g$ and $1_u$ electronic states. Section 4 is an additional validation of the method with the Lennard-Jones molecular potential used in a similar situation. We conclude in section 5.
\section{The Canonical Function Method} In this work we consider rotationless long-range diatomic molecules only. The associated Radial Schr\"odinger equation (RSE) is given by: \begin{equation} (-\frac{\hbar^2}{2\mu}\frac{d^2}{dr^2}-V(r)+E)\psi(r)=0 \end{equation} where $\mu$ is the reduced mass and $V(r)$ is the potential energy of the interacting constituent atoms.
The diagonalisation of the RSE is described mathematically as a singular boundary value problem. The prime advantage of the CFM is in turning it into a regular initial value problem.
Primarily developed by Kobeissi and his coworkers \cite {Kobeissi 82}, the CFM consists, essentially, in writing the general solution as a function of the radial distance $r$ in terms of two basis functions $\alpha(E;r)$ and $\beta(E;r)$ for some energy $E$.
Picking an arbitrary point $r_0$ at which a well-defined set of initial conditions is chosen \cite {Kobeissi 90}, i.e. $\alpha(E;r_0)=1$ with $\alpha'(E;r_0)=0$ and $\beta(E;r_0)=0$ with $\beta'(E;r_0)=1$, the RSE is solved by progressing simultaneously towards the origin and $\infty$. The effect of using different algorithms for the numerical integration will be examined in the next section.
During the integration, the ratio of the functions is monitored until saturation signaling the stability of the eigenvalue spectrum \cite {Kobeissi 91}.
One may define two associated energy functions: \begin{equation} l_{+}(E)=\lim_{r \rightarrow +\infty} -\frac{\alpha(E;r)}{\beta(E;r)} \end{equation} and \begin{equation} l_{-}(E)= \lim_{r \rightarrow 0} -\frac{\alpha(E;r)}{\beta(E;r)}. \end{equation} The eigenvalue function is defined as \begin{equation} F(E)=l_{+}(E)-l_{-}(E). \end{equation} The saturation of the $\alpha(E;r)/\beta(E;r)$ ratio as $r$ progresses towards 0 or $\infty$ yields a position-independent eigenvalue function $F(E)$. Its zeroes yield the spectrum of the RSE.
Generally, one avoids using the wavefunction but if one needs it, the functions $\alpha(E;r)$ and $\beta(E;r)$ are used to determine the wavefunction at any energy with the expression: \begin{equation} \Psi(E;r)=\Psi(E;r_0) \alpha(E;r) + \Psi'(E;r_0) \beta(E;r) \end{equation} where $\Psi(E;r_0)$ and $\Psi'(E;r_0)$ are the wavefunction and its derivative at the start point $r_0$. The eigenfunctions are obtained for $E=E_k$ where $E_k$ is any zero of $F(E)$.
The graph of the eigenvalue function has a typical $\tan(|E|)$ shape versus the logarithm of the absolute value of the energy $E$, as displayed in Fig.~\ref{fig1}.
\begin{figure}
\caption{Behavior of the eigenvalue function $F(E)$ with energy on a semi-log scale for the Vibrational spectra of the $0^-_g$ electronic state of the $^{23}{\rm Na}_2$ molecule. The vertical lines indicate the eigenvalue position. Energies are in cm$^{-1}$.}
\label{fig1}
\end{figure}
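To illustrate how the CFM works in practice, the following minimal sketch (an illustration only, using a fixed-step RK4 integrator and a simple half-line oscillator potential instead of the Na$_2$ potentials treated below) integrates the canonical functions $\alpha(E;r)$ and $\beta(E;r)$ from $r_0$ towards the origin and towards large $r$, forms $F(E)=l_+(E)-l_-(E)$, and locates its zeroes; brackets that merely contain the $\tan$-like poles visible in Fig.~\ref{fig1} are rejected because $|F|$ remains large there.

\begin{verbatim}
import numpy as np

# Minimal CFM sketch for psi'' = q_E(r) psi on (0, infinity), tested on the
# half-line oscillator V(r) = r^2/2 with hbar = mass = 1.  With psi(0) = 0
# the exact levels are E = 1.5, 3.5, 5.5, ...

def q(r, E):
    return 2.0 * (0.5 * r**2 - E)

def canonical_ratio(E, r0, r_end, n_steps=2000):
    """Fixed-step RK4 integration of the canonical solutions alpha, beta
    from r0 to r_end; returns -alpha/beta at r_end."""
    h = (r_end - r0) / n_steps
    y = np.array([1.0, 0.0, 0.0, 1.0])          # (alpha, alpha', beta, beta')
    rhs = lambda r, y: np.array([y[1], q(r, E) * y[0], y[3], q(r, E) * y[2]])
    r = r0
    for _ in range(n_steps):
        k1 = rhs(r, y)
        k2 = rhs(r + h / 2, y + h / 2 * k1)
        k3 = rhs(r + h / 2, y + h / 2 * k2)
        k4 = rhs(r + h, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        r += h
    return -y[0] / y[2]

def F(E, r0=1.0):
    lplus = canonical_ratio(E, r0, 8.0)    # ratio saturates in the forbidden region
    lminus = canonical_ratio(E, r0, 1e-6)  # evaluated near the origin, where psi(0) = 0
    return lplus - lminus

# scan F(E) for sign changes, refine by bisection, and keep only true zeroes:
# brackets around the tan-like poles of F(E) are rejected since |F| stays large there
Es = np.linspace(0.5, 6.5, 121)
vals = [F(E) for E in Es]
for a, b, fa, fb in zip(Es[:-1], Es[1:], vals[:-1], vals[1:]):
    if fa * fb < 0:
        lo, hi, flo = a, b, fa
        for _ in range(30):
            mid = 0.5 * (lo + hi)
            fm = F(mid)
            if flo * fm <= 0:
                hi = mid
            else:
                lo, flo = mid, fm
        root = 0.5 * (lo + hi)
        if abs(F(root)) < 1e-3:
            print("eigenvalue near", root)
\end{verbatim}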
\section{Vibrational states of the $^{23}{\rm Na}_2$ molecule} We apply the CFM to the calculation of the vibrational energy levels of a diatomic molecule where the interaction between the atoms is given by the Movre and Pichler potential \cite {Movre 77}.
We start with the $0^-_g$ electronic state of the $^{23}{\rm Na}_2$ molecule. The corresponding potential is given by \begin{equation} V(r)=\frac{1}{2}[(1-3X)+\sqrt{1-6X+81X^2}] \end{equation} where $X={C(0^-_g)}/{9r^3\Delta}$. Here $r$ is the internuclear distance and the parameter $C(0^-_g)$ is such that \begin{equation} \lim_{r \rightarrow +\infty} V(r) \rightarrow -\frac{C_3(0^-_g)}{r^3} \end{equation} Identification of the large $r$ limit yields the result $C_3(0^-_g)=C(0^-_g)/3$. We have used in the calculations below $C_3(0^-_g)=6.390$ Hartree$\cdot a_0^3$, as in \cite{Stwalley 78}. The parameter $\Delta=1.56512\times 10^{-4}$ Rydbergs is the atomic spin-orbit splitting. Given $C_3(0^-_g)$ and $\Delta$, the equilibrium internuclear distance is $r_e=71.6 a_0$.
We scale all energies with a factor $E_0$ (usually cm$^{-1}$) with the use of equation (1). Then we scale all distances with a typical length $L_0$, transforming the RSE appropriately. This double transformation is generally reflected in the potential coefficients, thus preserving the functional form of the potential.
In order to gauge the accuracy of the spectra, we perform the integration of the RSE with two different methods: a fixed-step fourth-order Runge-Kutta method (RK4) and a Variable Step Controlled Accuracy (VSCA) method \cite {Kobeissi 91}.
The VSCA method is based on a series expansion of the potential and the corresponding solution to an order such that a required tolerance criterion is met. Ideally, the series coefficients are determined analytically to any order, otherwise loss of accuracy occurs leading quickly to numerical uncertainties as discussed later.
\begin{table}[htbp] \begin{center} \caption{Vibrational levels for the $0^-_g$ electronic state of the $^{23}{\rm Na}_2$ molecule as obtained with a fixed step RK4 method, Stwalley et al. \cite{Stwalley 78} results and the corresponding ratio. Levels 34-40 were not found by the RK4 method due to the precision limited to fourth order.} \label{tab1} \begin{tabular}{ c c c c } \hline* Index& RK4 (cm$^{-1}$)& Stwalley et al. (cm$^{-1}$)& Ratio \\ \hline
1& -1.7864563& -1.7887& 1.00126\\
2& -1.5595812& -1.5617& 1.00136\\
3& -1.3546211& -1.3566& 1.00146\\
4& -1.1704091& -1.1723& 1.00162\\
5& -1.0057168& -1.0075& 1.00177\\
6& -0.8592746& -0.86087& 1.00186\\
7& -0.7297888& -0.73125& 1.00200\\
8& -0.6159592& -0.61729& 1.00216\\
9& -0.5164938& -0.51770& 1.00234\\
10& -0.4301231& -0.43120& 1.00250\\
11& -0.3556114& -0.35657& 1.00270\\
12& -0.2917683& -0.29261& 1.00288\\
13& -0.2374567& -0.23820& 1.00313\\
14& -0.1916004& -0.19224& 1.00334\\
15& -0.1531898& -0.15374& 1.00359\\
16& -0.1212858& -0.12176& 1.00391\\
17& -9.5022481(-02)& -9.5438(-02)& 1.00437\\
18& -7.3608067(-02)& -7.3940(-02)& 1.00451\\
19& -5.6325397(-02)& -5.6599(-02)& 1.00486\\
20& -4.2530440(-02)& -4.2754(-02)& 1.00526\\
21& -3.1650256(-02)& -3.1831(-02)& 1.00571\\
22& -2.3180032(-02)& -2.3323(-02)& 1.00617\\
23& -1.6679434(-02)& -1.6791(-02)& 1.00669\\
24& -1.1768423(-02)& -1.1854(-02)& 1.00727\\
25& -8.1226859(-03)& -8.1873(-03)& 1.00795\\
26& -5.4687973(-03)& -5.5165(-03)& 1.00872\\
27& -3.5792655(-03)& -3.6136(-03)& 1.00959\\
28& -2.2675456(-03)& -2.2916(-03)& 1.01061\\
29& -1.3831324(-03)& -1.3995(-03)& 1.01183\\
30& -8.0680818(-04)& -8.1747(-04)& 1.01321\\
31& -4.4611287(-04)& -4.5276(-04)& 1.01490\\
32& -2.3077899(-04)& -2.3503(-04)& 1.01842\\
33& -9.4777816(-05)& -1.1252(-04)& 1.18720\\
34& &-4.8564(-05)& \\
35& &-1.8262(-05)& \\
36& &-5.6648(-06)& \\
37& &-1.3175(-06)& \\
38& &-1.9247(-07)& \\
39& &-1.1215(-08)& \\
40& &-4.1916(-11)& \\
\hline* \end{tabular} \end{center} \end{table}
Table \ref{tab1} shows the results we obtain with the RK4 method. The limitation of the RK4 method to fourth order prevents it from finding levels beyond the 33rd (see Table \ref{tab1}). In order to find the higher levels, we have to select an algorithm that enables us to tune the accuracy well beyond fourth order.
Pushing the accuracy within the framework of a fixed step method has the effect of reducing substantially the integration step. In order to avoid this problem, we use a variable step that adjusts itself to the desired accuracy, the VSCA method.
This method is powerful and flexible enough to find all the desired energy levels and allows us to find one additional level that was not detected before. It should be noted that the last three levels given by Stwalley et al. \cite{Stwalley 78} were extrapolated and not calculated. The agreement between our calculated levels and those of Stwalley et al. \cite{Stwalley 78} is quite good. We believe that the small discrepancy, which increases as we progress towards the dissociation limit, is due to a loss of accuracy associated with traditional methods, in sharp contrast with the CFM.
\begin{table}[htbp] \begin{center} \caption{Vibrational levels for the $0^-_g$ electronic state of the $^{23}{\rm Na}_2$ molecule as obtained with Stwalley et al. results, the variable step controlled accuracy method (VSCA) method and the corresponding ratio. Levels 38, 39 and 40 of Stwalley et al. \cite{Stwalley 78} are extrapolated with LeRoy and Bernstein \cite{LeRoy 70} semi-classical formulae.} \label{tab2} \begin{tabular}{ c c c c } \hline* Index& Stwalley et al. (cm$^{-1}$) & VSCA (cm$^{-1}$) & Ratio \\ \hline
1& -1.7887& -1.7864488& 1.00126\\
2& -1.5617& -1.5595638& 1.00137\\
3& -1.3566& -1.3546072& 1.00147\\
4& -1.1723& -1.1703990& 1.00162\\
5& -1.0075& -1.0057071& 1.00178\\
6& -0.86087& -0.8592631& 1.00187\\
7& -0.73125& -0.7297908& 1.00200\\
8& -0.61729& -0.6159534& 1.00202\\
9& -0.51770& -0.5164882& 1.00235\\
10& -0.43120& -0.4301217& 1.00251\\
11& -0.35657& -0.3556148& 1.00269\\
12& -0.29261& -0.2917693& 1.00288\\
13& -0.23820& -0.2374560& 1.00313\\
14& -0.19224& -0.1916002& 1.00334\\
15& -0.15374& -0.1531893& 1.00359\\
16& -0.12176& -0.1212854& 1.00391\\
17& -9.5438(-02)& -9.5022588(-02)& 1.00437\\
18& -7.3940(-02)& -7.3608452(-02)& 1.00450\\
19& -5.6599(-02)& -5.6325744(-02)& 1.00485\\
20& -4.2754(-02)& -4.2530867(-02)& 1.00525\\
21& -3.1831(-02)& -3.1650591(-02)& 1.00570\\
22& -2.3323(-02)& -2.3180420(-02)& 1.00615\\
23& -1.6791(-02)& -1.6679756(-02)& 1.00667\\
24& -1.1854(-02)& -1.1768655(-02)& 1.00725\\
25& -8.1873(-03)& -8.1228816(-03)& 1.00793\\
26& -5.5165(-03)& -5.4689541(-03)& 1.00869\\
27& -3.6136(-03)& -3.5793742(-03)& 1.00956\\
28& -2.2916(-03)& -2.2676168(-03)& 1.01058\\
29& -1.3995(-03)& -1.3831806(-03)& 1.01180\\
30& -8.1747(-04)& -8.0683859(-04)& 1.01318\\
31& -4.5276(-04)& -4.4613249(-04)& 1.01486\\
32& -2.3503(-04)& -2.3110168(-04)& 1.01700\\
33& -1.1252(-04)& -1.1035443(-04)& 1.01962\\
34& -4.8564(-05)& -4.7468345(-05)& 1.02308\\
35& -1.8262(-05)& -1.7767388(-05)& 1.02784\\
36& -5.6648(-06)& -5.4747950(-06)& 1.03471\\
37& -1.3175(-06)& -1.2597092(-06)& 1.04588\\
38& -1.9247(-07)& -1.2716754(-07)& 1.51352\\
39& -1.1215(-08)& \\
40& -4.1916(-11)& \\ \hline* \end{tabular} \end{center} \end{table}
The estimation of the accuracy of the results hinges basically on two operations: the integration and the determination of the zeroes of the eigenvalue function $F(E)$. The superiority of the VSCA method is observed in the determination of the upper levels that are not detected by the RK4 method (see Tables \ref{tab1} and \ref{tab2}). In addition, it is observed in the behavior of the eigenvalue ratio versus the index. While in both cases (RK4 and VSCA) the ratio increases steadily as the index increases, because we are probing higher excited states, in the RK4 case it rather blows up as dissociation is approached. We typically use a series expansion to order 12 in the VSCA with a tolerance of $10^{-8}$. In the root search of $F(E)$, the tolerance required for a zero to be considered as an eigenvalue is $10^{-15}$. This does not imply that we genuinely disagree by as much as $0.13\%$, for instance, with the ground state value (see Tables \ref{tab1} and \ref{tab2}) found by Stwalley et al. \cite {Stwalley 78}, for the simple reason that we use a splitting energy $\Delta=1.56512\times 10^{-4}$ Rydbergs corresponding to an equilibrium internuclear distance $r_e=71.6 a_0$. Stwalley et al. do not provide explicitly the value of $\Delta$ they use, and more recently Jones et al. \cite{Jones 96} provide a value that is slightly different.\\
We treat next the $1_u$ electronic state of the $^{23}{\rm Na}_2$ molecule. This state lies higher than the $0^-_g$ state, and the number of vibrational levels is smaller because the potential is shallower, as displayed in Fig.~\ref{fig2}.
\begin{figure}
\caption{Potential energy in cm$^{-1}$ for the $0^-_g$ and $1_u$ electronic states of the $^{23}{\rm Na}_2$ molecule. The radial distance r is in $a_0$ units.}
\label{fig2}
\end{figure}
The potential associated with the $1_u$ electronic state of the $^{23}{\rm Na}_2$ molecule is considerably more involved. Its VSCA implementation is particularly difficult because of the complex functional form of the potential, as we explain below. Analytically, the VSCA algorithm requires performing a Taylor series expansion to any order around an arbitrary point \cite{Kobeissi 90}.
This is still, numerically, an open problem for arbitrary functions, and the use of LISP-based symbolic manipulation techniques quickly produces cumbersome expressions. Special methods based on analytical fitting expressions are needed in order to turn the series coefficients into a more manageable form.
The first step is to determine the $1_u$ electronic state of the $^{23}{\rm Na}_2$ molecule by solving the Movre et al. \cite {Movre 77} secular equation, which gives \begin{equation} V(r)=\Delta[-2\sqrt{Q}\cos(\frac{\theta-2\pi}{3})-\frac{a}{3}-1] \end{equation} where $a=-2-6X$ and $X={C(1_u)}/{9r^3\Delta}$. In addition, $\theta=\cos^{-1}( \frac{1+270X^3}{\sqrt{(1+63X^2)^3}})$, and $Q=\frac{1+63X^2}{9}$.
The parameter $C(1_u)$ is such that: \begin{equation} \lim_{r \rightarrow +\infty} V(r) \rightarrow -\frac{C_3(1_u)}{r^3} \end{equation}
Identification of the large $r$ limit yields the result $C_3(1_u)=C(1_u)(\sqrt{7}-2)/9$.
We have used in the calculations below $C_3(1_u)=1.383$ Hartree$\cdot a_0^3$, as in Stwalley et al. \cite{Stwalley 78}. The parameter $\Delta=1.56512\times 10^{-4}$ Rydbergs is the same as for the $0^-_g$ state.
Table \ref{tab3} displays the results we obtain with the RK4 method, which cannot find more than 11 levels due to accuracy limitations.
\begin{table}[htbp] \begin{center} \caption{Vibrational levels for the $1_u$ electronic state of the $^{23}{\rm Na}_2$ molecule as obtained with the RK4 method, Stwalley et al. results and the corresponding ratio. RK4 found only 11 levels and levels 15 and 16 of Stwalley et al. are found by extrapolation.} \label{tab3} \begin{tabular}{ c c c c } \hline* Index& RK4 (cm$^{-1}$) & Stwalley et al.(cm$^{-1}$) & Ratio \\ \hline
1& 0.1319536 & 0.13212& 1.00126\\
2& 9.0057392(-02)& 9.0192(-02)& 1.00149\\
3& 5.9472159(-02)& 5.9574(-02)& 1.00171\\
4& 3.7821597(-02)& 3.7896(-02)& 1.00197\\
5& 2.3027828(-02)& 2.3080(-02)& 1.00227\\
6& 1.3324091(-02)& 1.3359(-02)& 1.00262\\
7& 7.2562182(-03)& 7.2787(-03)& 1.00310\\
8& 3.6714971(-03)& 3.6849(-03)& 1.00365\\
9& 1.6950002(-03)& 1.7024(-03)& 1.00437\\
10& 6.9533077(-04)& 6.9904(-04)& 1.00533\\
11& 2.4102039(-04)& 2.4492(-04)& 1.01618\\
12& & 6.8430(-05)& \\
13& & 1.3446(-05)& \\
14& & 1.4122(-06)& \\
15& & 3.8739(-08) & \\
16& & 1.2735(-12) & \\
\hline* \end{tabular} \end{center} \end{table}
The next results for the $1_u$ electronic state of the $^{23}{\rm Na}_2$ molecule are obtained with the VSCA method, as shown in Table \ref{tab4}. We find an additional 15th level, in contrast to Stwalley et al., who found fourteen and extrapolated the last two levels.
\begin{table}[htbp] \begin{center} \caption{Vibrational levels for the $1_u$ electronic state of the $^{23}{\rm Na}_2$ molecule as obtained by Stwalley et al., the variable step controlled accuracy method (VSCA) and the corresponding ratio. A new 15th level is obtained with the VSCA method.} \label{tab4} \begin{tabular}{ c c c c } \hline* Index& Stwalley et al. (cm$^{-1}$) & VSCA (cm$^{-1}$) & Ratio \\ \hline
1& 0.13212& 0.13244150& 1.00243\\
2& 9.0192(-02)& 9.07598688(-02)& 1.00630\\
3& 5.9574(-02)& 6.01645742(-02)& 1.00991\\
4& 3.7896(-02)& 3.83982868(-02)& 1.01325\\
5& 2.3080(-02)& 2.34584275(-02)& 1.01640\\
6& 1.3359(-02)& 1.36187753(-02)& 1.01944\\
7& 7.2787(-03)& 7.44243743(-03)& 1.02250\\
8& 3.6849(-03)& 3.77999552(-03)& 1.02581\\
9& 1.7024(-03)& 1.75281307(-03)& 1.02961\\
10& 6.9904(-04)& 7.23013793(-04)& 1.03430\\
11& 2.4492(-04)& 2.54856605(-04)& 1.04057\\
12& 6.8430(-05)& 7.18249908(-05)& 1.04961\\
13& 1.3446(-05)& 1.43120199(-05)& 1.06441\\
14& 1.4122(-06)& 1.53931042(-06)& 1.09001\\
15& 3.8739(-08)& 4.66022073(-09)& \\
16& 1.2735(-12)& \\ \hline* \end{tabular} \end{center} \end{table}
The corresponding graph of the eigenvalue function is displayed in Fig.~\ref{fig3} below.\\
\begin{figure}
\caption{Behavior of the eigenvalue function $F(E)$ with energy on a semi-log scale for the $1_u$ electronic state of the $^{23}{\rm Na}_2$ molecule. The vertical lines indicate the eigenvalue position. Energies are in cm$^{-1}$.}
\label{fig3}
\end{figure}
\section{Lennard-Jones molecules} We apply our methodology to the Lennard-Jones case. Our results are compared to the results obtained by Trost et al. \cite{Trost 98}. We start with the levels obtained with the RK4 method. The energy unit is $\epsilon$, the depth of the potential well of the asymmetric Lennard-Jones (ALJ) potential:
\begin{equation} V(r)=C_1 ({\frac{1}{r}})^\beta - C_2 ({\frac{1}{r}})^\alpha \end{equation}
The asymmetric Lennard-Jones (ALJ) potential depends on $C_1$ and $C_2$, yielding an equilibrium distance at $r=r_{min}$ and a potential depth of $-\epsilon$. Trost et al. \cite{Trost 98} use the general parameterisation: \begin{equation} C_1 = \frac{\epsilon}{(\beta-\alpha)}\alpha r_{min}^\beta \hspace{0.1cm}, \hspace{0.1cm} C_2=\frac{\epsilon}{(\beta-\alpha)}\beta r_{min}^\alpha \end{equation} It is scaled in such a way that the energy is expressed in units of the potential well depth $\epsilon$. When $\alpha=6$, $\beta=12$ we obtain $r_{min}=\sqrt[6]{\frac{2C_{1}}{C_2}}$ and $\epsilon= \frac{C_2^2}{4C_{1}}$. The radial distance is expressed in units of $\frac{a_0}{\sqrt{B}}$, where $B$ is a reduced scaled mass given by $B=2 \mu \epsilon {r_{min}}^2$. Numerically, Trost et al. use $B=10^4$, which is the order of magnitude encountered in long-range molecules within the framework of their system of units. The potential energy in these units is displayed in Fig.~\ref{fig4}. The RK4 method yields the results displayed in Table \ref{tab5}.
\begin{figure}\caption{Potential energy of the Lennard-Jones molecule in units of the well depth $\epsilon$, as a function of the radial distance in units of $a_0/\sqrt{B}$.}\label{fig4}
\end{figure}
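As a small consistency check of this parameterisation (a sketch for illustration only), one can verify numerically that with $C_1$ and $C_2$ defined as above the ALJ potential indeed has its minimum at $r=r_{min}$ with depth $-\epsilon$.

\begin{verbatim}
import numpy as np

eps, rmin, alpha, beta = 1.0, 1.0, 6, 12
C1 = eps * alpha * rmin**beta / (beta - alpha)
C2 = eps * beta * rmin**alpha / (beta - alpha)

def alj(r):
    # asymmetric Lennard-Jones potential V(r) = C1/r^beta - C2/r^alpha
    return C1 / r**beta - C2 / r**alpha

r = np.linspace(0.8, 3.0, 20001)
v = alj(r)
i = np.argmin(v)
print(r[i], v[i])                    # approximately r_min = 1 and -epsilon = -1
print((2.0 * C1 / C2) ** (1.0 / 6.0), C2**2 / (4.0 * C1))   # recovers r_min and epsilon
\end{verbatim}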
\begin{table}[htbp] \caption{Quantum levels of a Lennard-Jones molecule in $\epsilon$ units, the depth of the potential well as obtained by the RK4 method, Trost et al. and the corresponding ratio. Only the last 24th level was missed by the RK4 method.} \label{tab5} \begin{center} \begin{tabular}{ c c c c } \hline* Index& RK4 & Trost et al.& Ratio \\ \hline
1& -0.9410450& -0.9410460& 1.000001\\
2& -0.8299980& -0.8300020& 1.000005\\
3& -0.7276400& -0.7276457& 1.000008\\
4& -0.6336860& -0.6336930& 1.000011\\
5& -0.5478430& -0.5478520& 1.000017\\
6& -0.4698130& -0.4698229& 1.000021\\
7& -0.3992870& -0.3992968& 1.000025\\
8& -0.3359470& -0.3359561& 1.000027\\
9& -0.2794670& -0.2794734& 1.000023\\
10& -0.2295070& -0.2295117& 1.000021\\
11& -0.1857220& -0.1857237& 1.000009\\
12& -0.1477510& -0.1477514& 1.000003\\
13& -0.1152270& -0.1152259& 0.999990\\
14& -8.776970(-02)& -8.7766914(-02)& 0.999968\\
15& -6.498640(-02)& -6.4982730(-02)& 0.999944\\
16& -4.647400(-02)& -4.6469911(-02)& 0.999912\\
17& -3.181750(-02)& -3.1813309(-02)& 0.999868\\
18& -2.059000(-02)& -2.0586161(-02)& 0.999814\\
19& -1.235370(-02)& -1.2350373(-02)& 0.999731\\
20& -6.659580(-03)& -6.6570240(-03)& 0.999616\\
21& -3.048890(-03)& -3.0471360(-03)& 0.999425\\
22& -1.053690(-03)& -1.0527480(-03)& 0.999106\\
23& -1.645210(-04)& -1.9834000(-04)& 1.205560\\
24& & -2.6970000(-06) & \\
\hline* \end{tabular} \end{center} \end{table}
\begin{table}[htbp] \begin{center} \caption{Quantum levels of a Lennard-Jones molecule in $\epsilon$ units, the depth of the potential well as obtained by the VSCA method, Trost et al. and the corresponding ratio.} \label{tab6} \begin{tabular}{ c c c c } \hline* Index& VSCA & Trost et al. & Ratio \\ \hline
1& -0.9410443& -0.9410460& 1.000002\\
2& -0.8299963& -0.8300020& 1.000007\\
3& -0.7276415& -0.7276457& 1.000006\\
4& -0.6336915& -0.6336930& 1.000002\\
5& -0.5478480& -0.5478520& 1.000007\\
6& -0.4698206& -0.4698229& 1.000005\\
7& -0.3992947& -0.3992968& 1.000005\\
8& -0.3359533& -0.3359561& 1.000008\\
9& -0.2794718& -0.2794734& 1.000005\\
10& -0.2295109& -0.2295117& 1.000003\\
11& -0.1857222& -0.1857237& 1.000008\\
12& -0.1477498& -0.1477514& 1.000010\\
13& -0.1152247& -0.1152259& 1.000010\\
14& -8.7766358(-02)& -8.7766914(-02)& 1.000006\\
15& -6.4982534(-02)& -6.4982730(-02)& 1.000003\\
16& -4.6469838(-02)& -4.6469911(-02)& 1.000002\\
17& -3.1813146(-02)& -3.1813309(-02)& 1.000005\\
18& -2.0585953(-02)& -2.0586161(-02)& 1.000010\\
19& -1.2350173(-02)& -1.2350373(-02)& 1.000016\\
20& -6.6568735(-03)& -6.6570240(-03)& 1.000023\\
21& -3.0470500(-03)& -3.0471360(-03)& 1.000028\\
22& -1.0526883(-03)& -1.0527480(-03)& 1.000057\\
23& -1.9832170(-04)& -1.9834000(-04)& 1.000092\\
24& -2.6957891(-06)& -2.6970000(-06)& 1.000449\\ \hline* \end{tabular} \end{center} \end{table}
The accuracy limitation of the RK4 method results in the loss of the uppermost level 24. Thus we move on to the results obtained with the superior VSCA method. All levels are obtained with the VSCA, and the agreement with the Trost et al. results is excellent, as witnessed by the ratio values in Table \ref{tab6}. The eigenvalue function obtained with the VSCA method is displayed as a function of energy in Fig.~\ref{fig5}.
\begin{figure}
\caption{ Behavior of the eigenvalue function $F(E)$ with energy on a semi-log scale for the Trost et al. \cite{Trost 98} Lennard-Jones molecule. The vertical lines indicate the eigenvalue position. Energies are in potential depth $\epsilon$ units.}
\label{fig5}
\end{figure}
\section{Conclusion} The CFM is a very powerful method that enables one to find the vibrational spectra of tenuous molecules, for which energies and distances are so remote from the ordinary short-range molecule case that special techniques must be developed in order to avoid numerical instabilities and uncertainties.
The VSCA integration method used gives the right number of all the levels and the variation of the eigenvalue function $F(E)$ definitely determines the total number of levels. Generally it requires performing analytically Taylor series expansion to any order of an arbitrary potential function that might require the combination of numerical, symbolic manipulation and functional fitting techniques. Despite these difficulties, the results of the VSCA are rewarding.
The use of the RK4 and the VSCA methods jointly paves the way to a precise comparative evaluation of the accuracy of the spectra obtained. In practice, the RK4 method can be used in optimisation problems whereas the VSCA is more adapted to the direct evaluation of the spectra.
In this work, we did not consider molecular rotation; nevertheless, the CFM is suited to solve ro-vibrational problems accurately, as well as any RSE diagonalisation problem.
The methodology we describe in this work enables one to tackle precisely weakly bound states that are of great importance in low-energy scattering of atoms and molecules and generally in Bose-Einstein condensation problems \cite {Trost 98}.\\
{\bf Acknowledgements}: Part of this work was performed on the IDRIS machines at the CNRS (Orsay).\\
\end{document}
\begin{document}
\title{Generation of arbitrary two dimensional motional states of a trapped ion} \author{XuBo Zou, K. Pahlke and W. Mathis \\ \\Institute TET, University of Hannover,\\ Appelstr. 9A, 30167 Hannover, Germany } \date{}
\maketitle
\begin{abstract} {\normalsize We present a scheme to generate an arbitrary two-dimensional quantum state of motion of a trapped ion. This proposal is based on a sequence of laser pulses, which are tuned appropriately to control transitions on the sidebands of two modes of vibration. Not more than $(M+1)(N+1)$ laser pulses are needed to generate a pure state with phonon number limits $M$ and $N$.\\
PACS number:42.50.Dv, 42.50.Ct}
\end{abstract} The generation of nonclassical states was studied in the past theoretically and experimentally. The first significant advances were made in quantum optics by demonstrating antibunched light \cite{Paul} and squeezed light \cite{Loudon}. Various optical schemes for generating Schr\"{o}dinger cat states were studied \cite{YSB}, which led to an experimental realization in a quantized cavity field \cite{Brune}. Several schemes were proposed to generate any single-mode quantum state of a cavity field \cite{VAP,PMZK,LE} and traveling laser field \cite{DCKW}. Recently, possible ways of generating various two-mode entangled field states were proposed. For example, it was shown \cite{Sanders} that entangled coherent states, which can be a superposition of two-mode coherent states \cite{Sanders,Chai} can be produced using the nonlinear Mach-Zehnder interferometer. These quantum states can be considered as a two(multi)-mode generalization of single-mode Schr\"{o}dinger cat states \cite{Manko}. A method to generate another type of two-mode Schr\"{o}dinger cat states, which are known as SU(2) Schr\"{o}dinger cat states, was proposed \cite{Sanders2,GGJMO}. These quantum states result when two different SU(2) coherent states \cite{Buzek,Sanders2,GGJMO} are superposed. It was also shown that two-mode entangled number states can be generated by using nonlinear optical interactions \cite{Gerry}, which may then be used to obtain the maximum sensitivity in phase measurements set by the Heisenberg limit \cite{Bollinger} . In general, however, an experimental realization of nonclassical field states is difficult, because the quantum coherence can be destroyed easily by the interaction with the environment.\\ Recent advances in ion cooling and trapping have opened new prospects in nonclassical state generation. An ion confined in an electromagnetic trap can be described approximately as a particle in a harmonic potential. Its center of mass (c.m.) exhibits a simple quantum-mechanical harmonic motion. By driving the ion appropriately with laser fields, its internal and external degrees of freedom can be coupled to the extent that its center-of-mass motion can be manipulated precisely. One advantage of the trapped ion system is that decoherence effects are relatively weak due to the extremely weak coupling between the vibrational modes and the environment. It was realized that this advantage of the trapped ion system makes it a promising candidate for constructing quantum logic gates for quantum computation \cite{CZ} as well as for producing nonclassical states of the center-of-mass motion. In fact, single-mode nonclassical states, such as Fock states, squeezed states and Schr\"{o}dinger cat states of the ion's vibration mode were investigated \cite{CCVPZ,Gerry2,MM}. Recently, various schemes of producing two-mode nonclassical states of the vibration mode were proposed using ions in a two-dimensional trap \cite{Gerry2,GN,GGM}. In particular, several schemes were proposed to generate arbitrary two-dimensional motional quantum states of a trapped ion \begin{eqnarray}
\sum_{m=0}^{M}\sum_{n=0}^{N}C_{m,n}|m>_x|n>_y \, .\label{1} \end{eqnarray}
The ket-vectors $|m>_x$ and $|n>_y$ denote the Fock states of the quantized vibration along the $x$ and $y$ axes. In the schemes of Gardiner et al \cite{GCZ}, the number of required laser operations depends exponentially on the upper phonon numbers $M$ and $N$. The scheme introduced by Kneer and Law \cite{KL} involves $(2M+1)(N+1)+2N$ operations, while the scheme proposed by Drobny et al \cite{DHB} requires $2(M+N)^2$ operations. More recently S. B. Zheng presented a scheme \cite{ZH}, which requires only $(M+2)(N+1)$ operations. The reduction of the number of operations is important for further experimental realization. Notice that the quantum state (\ref{1}) involves $(M+1)(N+1)$ complex coefficients. The purpose of this paper is to present a scheme that generates the quantum state (\ref{1}) with not more than $(M+1)(N+1)$ quantum operations. We will show that each coefficient of the quantum state (\ref{1}) corresponds to exactly one laser pulse of this most efficient quantum state generation scheme. In order to describe this concept in a transparent way, we split the Hilbert space $H_{vib}$ of the two modes of vibration into subspaces $H_{vib}^{2J}$ with a constant total number of vibrational quanta $m+n=2J$. Thus, the Hilbert space is a direct sum: $H_{vib}=\bigoplus_{2J=0}^{\infty}H_{vib}^{2J}$ with
$2J\in$\nz. The quantum states are expressed in terms of the two-mode Fock states $|J,L>=|J+L>_x|J-L>_y$ with $L\in \{-J,-J+1,\cdots ,J\}$, so $J$ and $L$ can be half-integers. The subspaces $H_{vib}^{2J}$
are spanned by $|J,J>,|J,J-1>,\ldots,|J,-J>$. The quantum state (\ref{1}) can also be written in the form \begin{eqnarray}
\sum_{J=0}^{(M+N)/2}\sum_{L=-J}^{J}d_{J,L}|J,L>\,. \label{2} \end{eqnarray}
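As a purely illustrative aside (not part of the physical scheme itself), the following short Python sketch tabulates the operation counts quoted above for the cited schemes and lists the order in which the coefficients $d_{J,L}$ are addressed in the present scheme, one laser pulse per coefficient; the function names and the example values of $M$ and $N$ are ours and serve only as an illustration.
\begin{verbatim}
# Compare the pulse counts quoted in the text and enumerate the (J, L) order
# in which the coefficients d_{J,L} are generated (one pulse per coefficient).

def pulse_counts(M, N):
    return {
        "Kneer-Law":      (2 * M + 1) * (N + 1) + 2 * N,
        "Drobny et al.":  2 * (M + N) ** 2,
        "Zheng":          (M + 2) * (N + 1),
        "present scheme": (M + 1) * (N + 1),
    }

def jl_order(M, N):
    """Two-mode Fock states |m>_x |n>_y = |J,L> with J=(m+n)/2, L=(m-n)/2,
    ordered by increasing total phonon number 2J, then by m."""
    states = [(m, n) for m in range(M + 1) for n in range(N + 1)]
    states.sort(key=lambda mn: (mn[0] + mn[1], mn[0]))
    return [((m + n) / 2, (m - n) / 2) for m, n in states]

if __name__ == "__main__":
    M, N = 3, 3
    print(pulse_counts(M, N))   # the present scheme needs the fewest pulses
    print(jl_order(M, N)[:6])   # first few (J, L) pairs to be generated
\end{verbatim}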
In order to synthesize 2D quantum states of the vibration of one trapped ion we propose to use laser-stimulated Raman processes \cite{Steinbach}. We consider a trapped ion confined in a 2D harmonic potential characterized by the trap frequencies $\nu_x$
and $\nu_y$ in two orthogonal directions $x$ and $y$. The ion is irradiated along the $x$ and $y$ axes by two external laser fields with frequencies $\omega_x$, $\omega_y$ and wave vectors $k_x$, $k_y$. The two laser fields stimulate Raman transitions between two electronic ground state levels $|g_1>$ and $|g_2>$ via a third electronic level, which is far off resonance. The energy difference of these two electronic ground states is $\hbar \omega_0$. We concentrate our investigations on those transitions of the trapped ion which are in resonance with the laser fields, \begin{eqnarray} \omega_x-\omega_y=\omega_0-m\nu_x-n\nu_y;\quad m,n\in \mbox{\nz}\,. \label{resonance} \end{eqnarray} This equation can be interpreted as a condition of resonance, which relates the frequency difference of the two Raman laser beams to the quantum numbers of the two modes of vibration. The quantum state manipulation scheme which we present is based on appropriately tuned laser pulses that drive sideband transitions with different numbers $m$, $n$ selectively. Thus, we control multi-phonon transitions with simultaneous changes of the vibrational motion in both directions. We intend to generate each component of the quantum state (\ref{1}) from the ground state of the collective vibration. To make this concept reliable we assume incommensurate trap frequencies $\nu_x$ and $\nu_y$. This requirement can be fulfilled by trap design, since the number of phonons in the quantum state (\ref{1}) is bounded: $m\le M,n\le N$.\\ If the resonance condition (\ref{resonance}) holds for every laser pulse the modeling can be simplified. In the example of a single ion confined in a two-dimensional harmonic trap, Steinbach \cite{Steinbach} set a slightly different condition to describe the phonon exchange between different directions of vibrational motion. We perform a similar kind of modelling, but with a different goal. After the standard dipole and rotating wave approximations, the adiabatic elimination of the auxiliary off-resonant levels leads to an effective Hamiltonian \begin{eqnarray} H=H_0+H_{int} \label{3} \end{eqnarray} with \begin{eqnarray}
H_0=\frac{\omega_0}{2}(|g_2><g_2|-|g_1><g_1|)+\nu_xa^{\dagger}a+ \nu_yb^{\dagger}b\, ; \label{4} \end{eqnarray} \begin{eqnarray}
H_{int}=\Omega{e^{i\eta_x(a+a^{\dagger})+i\eta_y(b+b^{\dagger})+i(\omega_x-\omega_y)t}}|g_2><g_1|+h.c. \,.\label{5} \end{eqnarray} Here $a$ and $b$ ($a^{\dagger}$ and $b^{\dagger}$) are the annihilation (creation) operators of the quantized motion along the $x$ and $y$ axes, and $\eta_x$ and $\eta_y$ are the associated Lamb-Dicke parameters in the $x$ and $y$ directions. The complex parameter $\Omega$ denotes the effective Raman coupling constant, which includes the phase difference $\phi$ of the two Raman laser beams.\\
In order to generate the motional quantum state (\ref{2}), we consider the situation in which the ion is prepared in the ground state $|g_2>$ and the two c.m. modes in the vacuum states $|0>_x$
and $|0>_y$: \begin{eqnarray}
\Psi_{initial}=|g_2>|0>_x|0>_y=|g_2,0,0>\,.\label{ini} \end{eqnarray}
In the following we show that each term of the linear combination (\ref{2}) can be generated from the ground state term $|0>_x|0>_y$ most efficiently by one laser pulse.\\ In the rotating wave approximation we consider only those terms of the operator $H_{int}$ which are in resonance with the applied laser pulse. It is therefore convenient to write the interaction operator in the form \begin{eqnarray} H_{m,n}&=&\Omega{e^{-\eta_x^2/2-\eta_y^2/2}} \sum_{k_1,k_2=0}^{\infty}\frac{(i\eta_x)^{2k_1+m}(i\eta_y)^{2k_2+n}}{k_1!k_2!(k_1+m)!(k_2+n)!}
a^{\dagger{k_1}}a^{k_1}b^{\dagger{k_2}}b^{k_2}a^{m}b^{n}|g_2><g_1|\nonumber\\ &&+h.c. \,.\label{6} \end{eqnarray} The index $\{m,n\}$ indicates the relevant components of $H_{int}$, which correspond to the condition of resonance (\ref{resonance}). It follows, that the operator of time evolution $U_{mn}=\exp{(-iH_{m,n}t)}$ satisfies the relations \begin{eqnarray}
U_{mn}|g_2>|0>_x|0>_y&=&\cos(|\Omega_{m,n}|t)|g_2>|0>_x|0>_y\nonumber\\
&&-ie^{-i\phi_{m,n}}\sin(|\Omega_{m,n}|t)|g_1>|m>_x|n>_y \end{eqnarray} \begin{eqnarray}
U_{mn}|g_1>|k>_x|l>_y=|g_1>|k>_x|l>_y;\quad k+l<m+n \end{eqnarray} \begin{eqnarray}
U_{mn}|g_1>|k>_x|l>_y=|g_1>|k>_x|l>_y;\quad k+l=m+n;\, k\neq{m}\,. \label{7} \end{eqnarray}
Here $|\Omega_{m,n}|$ and $\phi_{m,n}$ are the amplitude and the phase of the corresponding Raman coupling constant \begin{eqnarray}
\Omega_{m,n}=|\Omega_{m,n}|\,e^{i\phi_{m,n}}= \Omega{e^{-\eta_x^2/2-\eta_y^2/2+i\phi}}\frac{(i\eta_x)^{m}(i\eta_y)^{n}}{m!n!}\, . \end{eqnarray} We begin from the ground state (\ref{ini}) by applying the first laser pulse. It fulfills the resonance condition (\ref{resonance})
in the case $m=0$, $n=0$. This laser pulse, which corresponds to $H_{0,0}$, has to generate the term $d_{0,0}|0,0>$ of the quantum state (\ref{2}). We choose the duration $t_{0,0}$ of the laser pulse to fulfill \begin{eqnarray}
-ie^{-i\phi_{0,0}}\sin(|\Omega_{0,0}|t_{0,0})=d_{0,0} \,.\label{9} \end{eqnarray} The system's state is transformed into \begin{eqnarray}
\Psi^{0,0}&=&\cos(|\Omega_{0,0}|t_{0,0})|g_2>|0,0>-ie^{-i\phi_{0,0}}\sin(|\Omega_{0,0}|t_{0,0})|g_1>|0,0> \label{8}\\
&=&\sqrt{1-|d_{0,0}|^2}|g_2>|0,0>+d_{0,0}|g_1>|0,0> \, .\label{10} \end{eqnarray} In order to drive the system with the effective interaction $H_{0,1}$ the next laser pulse is tuned according to Eq. (\ref{resonance}) to resonance: $(m;n)=(0;1)$. After this laser pulse has driven the ion with a duration $t_{0,1}$, the system's state becomes \begin{eqnarray}
\Psi^{0,1}&=&\sqrt{1-|d_{0,0}|^2}[\cos(|\Omega_{0,1}|t_{0,1})|g_2>|0,0> \nonumber\\
&&-ie^{-i\phi_{0,1}}\sin(|\Omega_{0,1}|t_{0,1})|g_1>|\frac{1}{2},-\frac{1}{2}>]+d_{0,0}|g_1>|0,0>\,. \label{11} \end{eqnarray} We choose the laser pulse duration $t_{0,1}$ to satisfy \begin{eqnarray}
-ie^{-i\phi_{0,1}}\sqrt{1-|d_{0,0}|^2}\sin(|\Omega_{0,1}|t_{0,1})=d_{\frac{1}{2},-\frac{1}{2}} \, .\label{12} \end{eqnarray} Thus, after the $(\frac{1}{2},-\frac{1}{2})$-component of Eq. (\ref{2}) is generated, the system's state becomes \begin{eqnarray}
\Psi^{0,1}&=&\sqrt{1-|d_{0,0}|^2-|d_{\frac{1}{2},-\frac{1}{2}}|^2}|g_2>|0,0> \nonumber\\
&&+d_{\frac{1}{2},-\frac{1}{2}}|g_1>|\frac{1}{2},-\frac{1}{2}>+d_{0,0}|g_1>|0,0> \, .\label{13} \end{eqnarray} We proceed with a laser pulse, which is characterized by $(m;n)=(1;0)$, to let the quantum state evolve into \begin{eqnarray}
\Psi^{1,0}&=&\sqrt{1-|d_{0,0}|^2-|d_{\frac{1}{2},-\frac{1}{2}}|^2}[\cos(|\Omega_{1,0}|t_{1,0})|g_2>|0,0> \nonumber\\
&&-ie^{-i\phi_{1,0}}\sin(|\Omega_{1,0}|t_{1,0})|g_1>|\frac{1}{2},\frac{1}{2}>]\nonumber\\
&&+d_{\frac{1}{2},-\frac{1}{2}}|g_1>|\frac{1}{2},-\frac{1}{2}>+d_{0,0}|g_1>|0,0>\, . \label{14} \end{eqnarray} With the choice \begin{eqnarray}
-ie^{-i\phi_{1,0}}\sqrt{1-|d_{0,0}|^2-|d_{\frac{1}{2},-\frac{1}{2}}|^2}\sin(|\Omega_{1,0}|t_{1,0})=d_{\frac{1}{2},\frac{1}{2}} \label{15} \end{eqnarray} we obtain \begin{eqnarray}
\Psi^{1,0}&=&\sqrt{1-|d_{0,0}|^2-|d_{\frac{1}{2},-\frac{1}{2}}|^2-|d_{\frac{1}{2},\frac{1}{2}}|^2}|g_2>|0,0> \nonumber\\
&&+d_{\frac{1}{2},\frac{1}{2}}|g_1>|\frac{1}{2},\frac{1}{2}>+d_{\frac{1}{2},-\frac{1}{2}}|g_1>|\frac{1}{2},-\frac{1}{2}>+d_{0,0}|g_1>|0,0> \,.\label{16} \end{eqnarray} If this procedure is done for the $[\frac{(i+j)(i+j+1)}{2}+i+1]-$th time, the quantum state of the system is \begin{eqnarray}
\Psi^{i,j}&=&\sum_{J=0}^{(i+j-1)/2}\sum_{L=-J}^{J}d_{J,L}|J,L>|g_1>+\sum_{L=-(i+j)/2}^{(i-j)/2}d_{(i+j)/2,L}|(i+j)/2,L>|g_1> \nonumber\\
&&+\sqrt{1-\sum_{J=0}^{(i+j-1)/2}\sum_{L=-J}^{J}|d_{J,L}|^2-\sum_{L=-(i+j)/2}^{(i-j)/2}|d_{(i+j)/2,L}|^2}|g_2>|0,0> \,.\label{17} \end{eqnarray} We now consider the $[\frac{(i+j)(i+j+1)}{2}+i+2]-$th operation by discussing the two possible cases.\\ In the special case of $j=0$ we drive the ion with the operator of interaction $H_{0,i+1}$. After the system has been driven by the corresponding laser pulse of duration $t_{0,i+1}$, the quantum state becomes \begin{eqnarray}
\Psi^{0,i+1}&=&\sum_{J=0}^{(i)/2}\sum_{L=-J}^{J}d_{J,L}|J,L>|g_1>\nonumber\\
&&+\sqrt{1-\sum_{J=0}^{(i)/2}\sum_{L=-J}^{J}|d_{J,L}|^2}[\cos(|\Omega_{0,i+1}|t_{0,i+1})|g_2>|0,0> \nonumber\\
&&-ie^{-i\phi_{0,i+1}}\sin(|\Omega_{0,i+1}|t_{0,i+1})|g_1>|(i+1)/2,-(i+1)/2>] \, .\label{18} \end{eqnarray} With the choice \begin{eqnarray}
-ie^{-i\phi_{0,i+1}}\sqrt{1-\sum_{J=0}^{(i)/2}\sum_{L=-J}^{J}|d_{J,L}|^2}\sin(|\Omega_{0,i+1}|t_{0,i+1})=d_{(i+1)/2,-(i+1)/2} \label{19} \end{eqnarray} we obtain \begin{eqnarray}
\Psi^{0,i+1}&=&\sum_{J=0}^{(i)/2}\sum_{L=-J}^{J}d_{J,L}|J,L>|g_1>+d_{(i+1)/2,-(i+1)/2}|(i+1)/2,-(i+1)/2>|g_1> \nonumber\\
&&+\sqrt{1-\sum_{J=0}^{(i)/2}\sum_{L=-J}^{J}|d_{J,L}|^2-|d_{(i+1)/2,-(i+1)/2}|^2}|g_2>|0,0> \,.\label{20} \end{eqnarray} In the other case ($j\neq{0}$) the ion is driven by a laser pulse, which corresponds to $H_{i+1,j-1}$. We choose the interaction time $t_{i+1,j-1}$ and phase of laser field to satisfy \begin{eqnarray}
d_{(i+j)/2,(i-j+2)/2}&=&-ie^{-i\phi_{i+1,j-1}}\sin(|\Omega_{i+1,j-1}|t_{i+1,j-1}) \nonumber\\
&&\times\sqrt{1-\sum_{J=0}^{(i+j-1)/2}\sum_{L=-J}^{J}|d_{J,L}|^2-\sum_{L=-(i+j)/2}^{(i-j)/2}|d_{(i+j)/2,L}|^2}
\label{21} \end{eqnarray} and to obtain the quantum state \begin{eqnarray}
\Psi^{i+1,j-1}&=&\sum_{J=0}^{(i+j-1)/2}\sum_{L=-J}^{J}d_{J,L}|J,L>|g_1>+\sum_{L=-(i+j)/2}^{(i-j+2)/2}d_{(i+j)/2,L}|(i+j)/2,L>|g_1> \nonumber\\
&&+\sqrt{1-\sum_{J=0}^{(i+j-1)/2}\sum_{L=-J}^{J}|d_{J,L}|^2-\sum_{L=-(i+j)/2}^{(i-j+2)/2}|d_{(i+j)/2,L}|^2}|g_2>|0,0> \, .\label{22} \end{eqnarray} After the procedure has been performed $\frac{(M+N+1)(M+N+2)}{2}$ times the system's state finally becomes \begin{eqnarray}
\Psi_{final}=\sum_{J=0}^{(M+N)/2}\sum_{L=-J}^{J}d_{J,L}|J,L>|g_1> \, . \label{23} \end{eqnarray} Thus, the system is prepared in a product state $\Psi$ of the desired vibrational quantum state (\ref{2}) and the ground state
$|g_1>$.\\
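The recursive rule encoded in Eqs. (\ref{9}), (\ref{12}), (\ref{15}), (\ref{19}) and (\ref{21}) can be summarized in a few lines of Python. The sketch below is only a bookkeeping illustration of how each target coefficient fixes $\sin(|\Omega_{m,n}|t_{m,n})$ and the laser phase through the population still left in $|g_2>|0,0>$; it does not simulate the ion dynamics, and the helper names and the example target state are ours.
\begin{verbatim}
import cmath, math

def pulse_parameters(d):
    """d maps (m, n) -> target coefficient C_{m,n} (assumed normalised);
    returns (m, n, sin(|Omega_{m,n}| t_{m,n}), laser phase) in order of application."""
    order = sorted(d, key=lambda mn: (mn[0] + mn[1], mn[0]))  # increasing m+n, then m
    remaining = 1.0                  # squared amplitude still on |g_2>|0,0>
    pulses = []
    for m, n in order:
        target = d[(m, n)]
        s = 0.0 if target == 0 else abs(target) / math.sqrt(remaining)
        # -i e^{-i phi} sqrt(remaining) sin(|Omega| t) = C_{m,n} fixes the phase phi:
        phase = -cmath.phase(1j * target) if target != 0 else 0.0
        pulses.append((m, n, s, phase))
        remaining -= abs(target) ** 2
    return pulses

if __name__ == "__main__":
    # example target: the entangled state (|0,0> + |1,1>)/sqrt(2)
    d = {(0, 0): 1 / math.sqrt(2), (0, 1): 0.0, (1, 0): 0.0, (1, 1): 1 / math.sqrt(2)}
    for pulse in pulse_parameters(d):
        print(pulse)
\end{verbatim}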
We have proposed a scheme to generate any two-dimensional quantum state of vibration of one trapped ion. This concept is based on the possibility of driving sideband transitions with different numbers $m$, $n$ selectively by tuning laser pulses appropriately. In this way we control multi-phonon transitions with simultaneous changes of the quantum numbers of vibration in both directions. Each component of the quantum state (\ref{1}) can be generated step by step from the ground state of vibration. To make our concept reliable we have assumed that the trap frequencies $\nu_x$ and $\nu_y$ can be made incommensurate by trap design.
However, there might exist additional resonant terms beyond the ones included in Eq. (8) due to the finite bandwidth of the laser pulses. If the ratio of the trap frequencies is chosen appropriately, so that the frequency of each laser pulse is sufficiently separated from the others, these terms can be neglected in the Lamb-Dicke limit. We assume equal values of the Lamb-Dicke parameters $\eta_x=\eta_y$ \cite{Steinbach} and require the ratio of trap frequencies to satisfy the condition $\nu_x/\nu_y>M+2N$. In this case the laser frequency of each sideband transition is sufficiently separated and we can neglect those unwanted resonant terms. Thus, in order to implement our scheme, we require a sufficiently large trap anisotropy. In addition, in the Lamb-Dicke limit the effective Rabi frequency $\Omega_{m,n}$ of Eq. (12) is proportional to $ \frac{(i\eta_x)^{m}(i\eta_y)^{n}}{m!n!}$. If the quantum numbers of vibration $m$ and $n$ are too large, the effective Rabi frequency $\Omega_{m,n}$ becomes too small. Thus, with respect to decoherence, the time required to complete the procedure might become too long. This problem represents the limitation of this scheme.\\ This scheme of quantum state generation requires not more than $(M+1)(N+1)$ laser pulses, which is the smallest possible number. Thus, with respect to the length of the laser pulse sequence, it can be considered as the optimal solution of this quantum state generation problem. The number of required laser pulses is reduced if there are coefficients of the target quantum state which are equal to zero. For example, if the coefficient $d_{J,L}$ of Eq.~(\ref{2}), which is generated with the $(2J^2+2J+L+1)$-th operation, is zero, it is not necessary to apply the corresponding laser pulse.
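To give a feeling for the limitation just discussed, the following sketch evaluates the relative coupling strength $e^{-(\eta_x^2+\eta_y^2)/2}\,\eta_x^m\eta_y^n/(m!n!)$ for a few sidebands; the value $\eta_x=\eta_y=0.2$ is merely an assumed, typical Lamb-Dicke parameter and is not taken from any particular experiment.
\begin{verbatim}
# How quickly the effective coupling of the (m, n) sideband falls off in the
# Lamb-Dicke regime; high sidebands are driven very slowly.
from math import exp, factorial

def relative_coupling(m, n, eta_x=0.2, eta_y=0.2):
    return (exp(-(eta_x ** 2 + eta_y ** 2) / 2)
            * eta_x ** m * eta_y ** n / (factorial(m) * factorial(n)))

if __name__ == "__main__":
    for m, n in [(0, 0), (1, 0), (1, 1), (2, 2), (3, 3)]:
        print(m, n, f"{relative_coupling(m, n):.2e}")
\end{verbatim}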
\end{document}
\begin{document}
\title {Marcel Riesz on N\"orlund Means}
\date{}
\author[P.L. Robinson]{P.L. Robinson}
\address{Department of Mathematics \\ University of Florida \\ Gainesville FL 32611 USA }
\email[]{[email protected]}
\subjclass{} \keywords{}
\begin{abstract}
We note that the necessary and sufficient conditions established by Marcel Riesz for the inclusion of {\it regular} N\"orlund summation methods are in fact applicable quite generally.
\end{abstract}
\maketitle
\section*{Introduction}
\medbreak
One of the simplest classes of summation methods for divergent series was introduced independently by N\"orlund [2] in 1920 and by Voronoi in 1901 with an annotated English translation [5] by Tamarkin in 1932. Explicitly, let $(p_n : n \geqslant 0)$ be a real sequence, with $p_0 > 0$ and with $p_n \geqslant 0$ whenever $n > 0$; when $n \geqslant 0$ let us write $P_n = p_0 + \cdots + p_n.$ To each sequence $s = (s_n : n \geqslant 0)$ is associated the sequence $N^p s$ of {\it N\"orlund means} defined by $$ (N^ps)_m = \frac{p_0 s_m + \cdots + p_m s_0}{p_0 + \cdots + p_m} = \frac{1}{P_m} \sum_{n = 0}^m p_{m - n} s_n.$$ We say that the sequence $s$ is $(N, p)$-{\it convergent} to $\sigma$ precisely when the sequence $N^p s$ converges to $\sigma$ in the ordinary sense, writing this as $$s \xrightarrow{(N, p)} \sigma$$ or as $s \rightarrow \sigma \; (N, p)$; viewing the formation of N\"orlund means as a summation method, when $(s_n : n \geqslant 0)$ happens to be the sequence of partial sums of the series $\sum_{n \geqslant 0} a_n$ we may instead say that this series is $(N, p)$-{\it summable} to $\sigma$ and write $$\sum_{n = 0}^{\infty} a_n = \sigma \: (N, p).$$
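For the reader who wishes to experiment, the definition above is transcribed directly into the short Python routine below; the chosen example, $p_n \equiv 1$ (the Ces\`aro method $(C,1)$) applied to the partial sums of $\sum_{n \geqslant 0}(-1)^n$, is only an illustration and the function names are ours.
\begin{verbatim}
# Noerlund means of a sequence s for a weight sequence p with p[0] > 0, p[n] >= 0.

def norlund_means(p, s):
    means = []
    for m in range(len(s)):
        P_m = sum(p[: m + 1])
        means.append(sum(p[m - n] * s[n] for n in range(m + 1)) / P_m)
    return means

if __name__ == "__main__":
    a = [(-1) ** n for n in range(50)]              # 1 - 1 + 1 - 1 + ...
    s = [sum(a[: n + 1]) for n in range(len(a))]    # partial sums: 1, 0, 1, 0, ...
    p = [1] * len(s)                                # (C,1), a regular Noerlund method
    print(norlund_means(p, s)[-3:])                 # tends to 1/2
\end{verbatim}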
\medbreak
An important question regarding N\"orlund summation methods (and summation methods in general) concerns inclusion. We say that $(N, q)$ {\it includes} $(N, p)$ precisely when each $(N, p)$-convergent sequence is $(N, q)$-convergent to the same limit; equivalently, when each $(N, p)$-summable series is $(N, q)$-summable to the same sum. This relationship will be symbolized by $(N, p) \rightsquigarrow (N, q)$. The important notion of regularity may be seen as a special case of inclusion: the N\"orlund method $(N, q)$ is said to be {\it regular} precisely when each ordinarily convergent sequence is $(N, q)$-convergent to the same limit; that is, precisely when $(N, u) \rightsquigarrow (N, q)$ where $u_0 = 1$ and where $u_n = 0$ whenever $n > 0$. Precise necessary and sufficient conditions for the {\it regular} N\"orlund method $(N, q)$ to include the {\it regular} N\"orlund method $(N, p)$ were determined by Marcel Riesz and communicated to Hardy in a letter, an extract from which appeared as [3]. The line of argument indicated by Riesz in his letter was amplified by Hardy in his classic treatise `{\it Divergent Series}' [1], which we recommend for further information regarding summation methods in general and N\"orlund methods in particular.
\medbreak
Our primary purpose here is to point out that the necessary and sufficient `Riesz' conditions in fact apply to N\"orlund methods quite generally, without regularity hypotheses.
\medbreak
\section*{Inclusive Riesz Conditions}
\medbreak
A celebrated theorem of Silverman, Steinmetz and Toeplitz gives necessary and sufficient conditions for a linear summation method to be regular, and proves to be very useful. The infinite matrix $[c_{m, n} : m, n \geqslant 0]$ yields a linear summation method $C$ by associating to each sequence $s = (s_n : n \geqslant 0)$ a corresponding sequence $t = (t_m : m \geqslant 0)$ given by $$t_m := \sum_{n = 0}^{\infty} c_{m, n} s_n$$ assumed convergent; to say that this linear summation method is {\it regular} is to say that, whenever the sequence $s$ is convergent, the sequence $t$ is convergent and $\lim_{m \rightarrow \infty} t_m = \lim_{n \rightarrow \infty} s_n$. The Silverman-Steinmetz-Toeplitz theorem may now be stated as follows.
\medbreak
\begin{theorem} \label{SST} The linear summation method $C$ with matrix $[c_{m, n} : m, n \geqslant 0]$ is regular precisely when each of the following conditions is satisfied: \\ {\rm (i)} there exists $H \geqslant 0$ such that for each $m \geqslant 0$
$$\sum_{n = 0}^{\infty} |c_{m, n}| \leqslant H;$$\\ {\rm (ii)} for each $n \geqslant 0$ $$\lim_{m \rightarrow \infty} c_{m, n} = 0;$$\\ {\rm (iii)} $$\lim_{m \rightarrow \infty} \sum_{n = 0}^{\infty} c_{m, n} = 1.$$ \end{theorem}
\begin{proof} This appears conveniently as Theorem 2 in [1]. \end{proof}
\medbreak
Now, let $(N, p)$ and $(N, q)$ be N\"orlund summation methods, or N\"orlunds for short. As $p_0$ is nonzero, the (triangular Toeplitz) system $$q_n = k_0 p_n + \cdots + k_n p_0 \; \; \; (n \geqslant 0)$$ is solved (recursively) by a unique sequence $k = (k_n : n \geqslant 0)$ of comparison coefficients; by summation, it follows that whenever $n \geqslant 0$ also $$Q_n = k_0 P_n + \cdots + k_n P_0.$$ The comparison sequence $k$ generates a (formal) power series $$k(x) = \sum_{n \geqslant 0} k_n x^n$$ while the N\"orlund sequences $p$ and $q$ also generate their own power series; the convolution relation $q = k * p$ between sequences corresponds to the relation $$q(x) = k(x) p(x)$$
between generating functions. We remark that if the N\"orlunds $(N, p)$ and $(N, q)$ are regular, their power series $p(x)$ and $q(x)$ converge whenever $|x| < 1$; the nonvanishing of $p(0) = p_0$ then ensures that the power series $k(x)$ converges when $|x|$ is small.
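The recursion determining the comparison coefficients is elementary enough to be checked numerically; the sketch below (an illustration only, with $p_n \equiv 1$ and $q_n = n+1$ chosen by us) solves the triangular Toeplitz system and verifies the summed identity $Q_n = k_0 P_n + \cdots + k_n P_0$.
\begin{verbatim}
# Solve q_n = k_0 p_n + ... + k_n p_0 recursively and check the summed identity.

def comparison_coefficients(p, q):
    k = []
    for n in range(len(q)):
        k.append((q[n] - sum(k[j] * p[n - j] for j in range(n))) / p[0])
    return k

if __name__ == "__main__":
    p = [1] * 10                       # (C,1):  P_n = n + 1
    q = [n + 1 for n in range(10)]     # q_n = n + 1,  Q_n = (n+1)(n+2)/2
    k = comparison_coefficients(p, q)
    P = [sum(p[: n + 1]) for n in range(10)]
    Q = [sum(q[: n + 1]) for n in range(10)]
    for n in range(10):
        assert abs(Q[n] - sum(k[j] * P[n - j] for j in range(n + 1))) < 1e-9
    print(k)                           # here k_n = 1 for every n
\end{verbatim}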
\medbreak
The introduction of the sequence $(k_n : n \geqslant 0)$ of comparison coefficients facilitates the following convenient expression for the N\"orlund means determined by $(N, q)$ in terms of the N\"orlund means determined by $(N, p)$.
\medbreak
\begin{theorem} \label{NN} If $r = (r_n : n \geqslant 0)$ is any sequence then $$ (N^q r)_m = \sum_{n = 0}^{\infty} c_{m, n} (N^p r)_n$$ where if $n > m$ then $c_{m, n} = 0$ while if $n \leqslant m$ then $c_{m, n} = k_{m - n} P_n / Q_m$. \end{theorem}
\begin{proof} Direct calculation: simply take the definition $$Q_m (N^q r)_m = q_0 r_m + \cdots + q_m r_0$$ and rearrange thus $$k_0 p_0 r_m + \cdots + (k_0 p_m + \cdots + k_m p_0) r_0 = k_0 (p_0 r_m + \cdots + p_m r_0) + \cdots + k_m (p_0 r_0)$$ to obtain $$Q_m (N^q r)_m = k_0 P_m (N^p r)_m + \cdots + k_m P_0 (N^p r)_0.$$ \end{proof}
\medbreak
We note that this result appears in the proof of [1] Theorem 19 but is there recorded only for regular N\"orlunds and established by comparing power series expansions; the argument presented here (essentially due to N\"orlund) is taken from [1] Theorem 17 and comes directly from the comparison coefficients without involving regularity.
\medbreak
The Riesz conditions ${\bf R}_{p q}$ associated to the N\"orlunds $(N, p)$ and $(N, q)$ may now be stated as follows: \medbreak ${\bf R}_{p q}^1$: there exists $H \geqslant 0$ such that for each $m \geqslant 0$
$$|k_0| P_m + \cdots + |k_m| P_0 \leqslant H Q_m;$$ \medbreak ${\bf R}_{p q}^2$: the sequence $(k_m / Q_m : m \geqslant 0)$ converges to zero. \medbreak As mentioned in the introduction, the fact that ${\bf R}_{p q}^1$ and ${\bf R}_{p q}^2$ are both necessary and sufficient for the inclusion $(N, p) \rightsquigarrow (N, q)$ between {\it regular} N\"orlunds appeared in [3] and was elaborated in [1] where it becomes Theorem 19. In what follows, we re-examine the line of argument taken in [3] and [1], deliberately stripping regularity hypotheses.
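As a numerical sanity check (suggestive only, since any computation necessarily truncates the sequences), the sketch below evaluates the two Riesz conditions for the concrete pair $p_n \equiv 1$, $q_n = n+1$, for which $k_n \equiv 1$: the quotient appearing in ${\bf R}_{p q}^1$ equals $1$ for every $m$, and $k_m/Q_m \to 0$, consistent with the inclusion $(N, p) \rightsquigarrow (N, q)$.
\begin{verbatim}
def riesz_check(p, q):
    """Evaluate the quantities appearing in R^1_{pq} and R^2_{pq} for finite data."""
    k = []
    for n in range(len(q)):            # comparison coefficients
        k.append((q[n] - sum(k[j] * p[n - j] for j in range(n))) / p[0])
    P = [sum(p[: n + 1]) for n in range(len(p))]
    Q = [sum(q[: n + 1]) for n in range(len(q))]
    H = max(sum(abs(k[m - n]) * P[n] for n in range(m + 1)) / Q[m]
            for m in range(len(q)))
    return H, [k[m] / Q[m] for m in range(len(q))]

if __name__ == "__main__":
    M = 200
    H, ratios = riesz_check([1] * M, [n + 1 for n in range(M)])
    print(H)            # bounded (here identically 1), as required by R^1
    print(ratios[-1])   # ~ 2/(M*(M+1)), tending to 0 as required by R^2
\end{verbatim}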
\medbreak
Henceforth, we shall write $C_{p q}$ for the linear summation method with matrix $[c_{m, n} : m, n \geqslant 0]$ expressing $(N, q)$ in terms of $(N, p)$ as in Theorem \ref{NN}.
\medbreak
On the one hand, we relate inclusion $(N, p) \rightsquigarrow (N, q)$ to regularity of $C_{p q}$.
\medbreak
\begin{theorem} \label{inclusionreg} The inclusion $(N, p) \rightsquigarrow (N, q)$ holds precisely when the linear summation method $C_{p q}$ is regular. \end{theorem}
\begin{proof} Assume $(N, p) \rightsquigarrow (N, q)$. Let $s = (s_n : n \geqslant 0)$ be any sequence. Note that $s = N^p r$ for a unique sequence $r = (r_n : n \geqslant 0)$ found by recursively solving the triangular Toeplitz system $$P_n s_n = p_0 r_n + \cdots + p_n r_0 \; \; \; (n \geqslant 0).$$ According to Theorem \ref{NN}, if $m \geqslant 0$ then $$t_m := \sum_{n = 0}^{\infty} c_{m, n} s_n = \sum_{n = 0}^{\infty} c_{m, n} (N^p r)_n = (N^q r)_m.$$ Now, let $s \rightarrow \sigma$: then $N^p r \rightarrow \sigma$ (by choice of $r$) hence $r \xrightarrow{(N, p)} \sigma$ (by definition of $(N, p)$-convergence) so that $r \xrightarrow{(N, q)} \sigma$ (by the $(N, p) \rightsquigarrow (N, q)$ assumption) whence $N^q r \rightarrow \sigma$ (by definition of $(N, q)$-convergence); that is, $t \rightarrow \sigma$. This proves that $C_{p q}$ is regular. \medbreak Assume that $C_{p q}$ is regular. Let $r = (r_n : n \geqslant 0)$ be $(N, p)$-convergent to $\sigma$: then $N^p r \rightarrow \sigma$ so Theorem \ref{NN} and the regularity of $C_{p q}$ yield $N^q r \rightarrow \sigma$; thus, $r$ is $(N, q)$-convergent to $\sigma$ also. This proves $(N, p) \rightsquigarrow (N, q)$. \end{proof}
\medbreak
On the other hand, we relate regularity of $C_{p q}$ to the Riesz conditions ${\bf R}_{p q}$.
\medbreak
\begin{theorem} \label{regRiesz} The linear summation method $C_{p q}$ is regular precisely when the Riesz conditions ${\bf R}_{p q}^1$ and ${\bf R}_{p q}^2$ are satisfied. \end{theorem}
\begin{proof} Assume $C_{p q}$ to be regular and invoke Theorem \ref{SST}. Part (i) furnishes $H \geqslant 0$ such that for each $m \geqslant 0$
$$\frac{|k_m| P_0 + \cdots + |k_0| P_m}{Q_m} = \sum_{n = 0}^{m} \Big|\frac{k_{m - n} P_n}{Q_m}\Big| = \sum_{n = 0}^{\infty} |c_{m, n}| \leqslant H$$ whence ${\bf R}_{p q}^1$ holds. Part (ii) says that $\lim_{m \rightarrow \infty} c_{m, n} = 0$ for each $n \geqslant 0$; in particular, $$0 = \lim_{m \rightarrow \infty} c_{m, 0} = \lim_{m \rightarrow \infty} \frac{k_m}{Q_m} P_0$$ whence ${\bf R}_{p q}^2$ holds. \medbreak Assume that ${\bf R}_{p q}^1$ and ${\bf R}_{p q}^2$ are satisfied. Theorem \ref{SST}(i) holds because
$$\sum_{n = 0}^{\infty} |c_{m, n}| = \sum_{n = 0}^{m} \Big|\frac{k_{m - n} P_n}{Q_m}\Big| = \frac{|k_m| P_0 + \cdots + |k_0| P_m}{Q_m} \leqslant H$$ on account of ${\bf R}_{p q}^1$. Theorem \ref{SST}(ii) holds because
$$|c_{m, n}| = \Big|\frac{k_{m - n} P_n}{Q_m}\Big| \leqslant \frac{|k_{m - n}|}{Q_{m - n}} P_n \rightarrow 0 \; \; {\rm as} \; \; m \rightarrow \infty$$ on account of ${\bf R}_{p q}^2$. Finally, Theorem \ref{SST}(iii) holds simply because of the relation $$Q_m = k_0 P_m + \cdots + k_m P_0.$$ Theorem \ref{SST} now guarantees that $C_{p q}$ is regular. \end{proof}
\medbreak
In conclusion, the Riesz conditions ${\bf R}_{p q}$ are both necessary and sufficient for the inclusion $(N, p) \rightsquigarrow (N, q)$ without any assumptions of regularity.
\medbreak
\begin{theorem} Let $(N, p)$ and $(N, q)$ be any N\"orlund methods. The inclusion $(N, p) \rightsquigarrow (N, q)$ holds precisely when the Riesz conditions ${\bf R}_{p q}^1$ and ${\bf R}_{p q}^2$ are satisfied. \end{theorem}
\begin{proof} Simply combine Theorem \ref{inclusionreg} and Theorem \ref{regRiesz}. \end{proof}
\bigbreak
\begin{center} {\small R}{\footnotesize EFERENCES} \end{center} \medbreak
[1] G.H. Hardy, {\it Divergent Series}, Clarendon Press, Oxford (1949).
\medbreak
[2] N.E. Norlund, {\it Sur une application des fonctions permutables}, Lunds Universitets \r{A}rsskrift (2) Volume 16 Number 3 (1920) 1-10.
\medbreak
[3] M. Riesz, {\it Sur l'\'equivalence de certaines m\'ethodes de sommation}, Proceedings of the London Mathematical Society (2) {\bf 22} (1924) 412-419.
\medbreak
[4] P.L. Robinson, {\it Finite N\"orlund Summation Methods}, arXiv 1712.06744 (2017).
\medbreak
[5] G.F. Voronoi, {\it Extension of the Notion of the Limit of the Sum of Terms of An Infinite Series}, Annals of Mathematics (2) Volume 33 Number 3 (1932) 422-428.
\end{document}
\begin{document}
\begin{center} \textbf{A characterization of compact operators via the non-connectedness of the attractors of a family of IFSs}
by
ALEXANDRU\ MIHAIL (Bucharest) and
RADU\ MICULESCU (Bucharest)
\end{center}
{\small Abstract}. {\small In this paper we present a result which establishes a connection between the theory of compact operators and the theory of iterated function systems. For a Banach space }$X${\small , }$S$ {\small \ and }$T${\small \ bounded linear operators from }$X${\small \ to }$X${\small \ such that }$\left\Vert S\right\Vert ,\left\Vert T\right\Vert <1$ {\small \ and }$w\in X${\small , let us consider the IFS }$S_{w}=(X,f_{1},f_{2})${\small , where }$f_{1},f_{2}:X\rightarrow X${\small \ are given by }$f_{1}(x)=S(x)${\small \ and }$f_{2}(x)=T(x)+w${\small , for all }$x\in X${\small . On one hand we prove that if the operator }$S${\small \ is compact, then there exists a family }$(K_{n})_{n\in \mathbb{N}}${\small \ of compact subsets of }$X${\small \ such that }$A_{\mathcal{S}_{w}}$ {\small \ is\ not connected, for all }$w\in X-\underset{n\in \mathbb{N}}{\cup }K_{n}${\small . On the other hand we prove that if }$H$ {\small is an infinite dimensional Hilbert space, then a bounded linear operator }$S:H\rightarrow H${\small \ having the property that }$\left\Vert S\right\Vert <1${\small \ is compact provided that for every bounded linear operator }$T:H\rightarrow H${\small \ such that }$\left\Vert T\right\Vert <1$ {\small \ there exists a sequence }$(K_{T,n})_{n}${\small \ of compact subsets of }$H${\small \ such that }$A_{\mathcal{S}_{w}}${\small \ is\ not connected for all }$w\in H-\underset{n}{\cup }K_{T,n}${\small . Consequently, given an infinite dimensional Hilbert space }$H$,{\small \ there exists a complete characterization of the compactness of an operator }$S:H\rightarrow H${\small \ by means of the non-connectedness of the attractors of a family of IFSs related to the given operator.}
\textbf{1. Introduction. }IFSs were introduced in their present form by John Hutchinson (see [9]) and popularized by Michael Barnsley (see [2]). They are one of the most common and most general ways to generate fractals. Although the fractals sets are defined by means of measure theory concepts (see [7]), they have very interesting topological properties. The connectivity of the attractor of an iterated function system has been studied, for example, in [14] (for the case of an iterated multifunction system) and in [6] (for the case of an infinite iterated function system).
\textit{2000 Mathematics Subject Classification}{\small : }Primary 28A80, 47B07; Secondary 54D05

\textit{Key words and phrases}: iterated function systems, attractors, connectivity, compact operators

The role of the theory of compact operators in functional analysis and, in particular, in the theory of integral equations is well known. In this framework, a natural question is to provide equivalent characterizations of compact operators. Let us mention some results in this direction. A bounded operator $T$ on a separable Hilbert space $H$ is compact if and only if $\underset{n\rightarrow \infty }{\lim } <Te_{n},e_{n}>=0$ (or equivalently $\underset{n\rightarrow \infty }{\lim } Te_{n}=0$), for each orthonormal basis $\{e_{n}\}$ for $H$ (see [1], [8], [16] and [17]) if and only if every orthonormal basis $\{e_{n}\}$ for $H$ has a rearrangement $\{e_{\sigma (n)}\}$ such that $\sum \frac{1}{n} \left\Vert Te_{\sigma (n)}\right\Vert <\infty $ (see [18]). In a more general framework, in [10] a characterization of the compact operators on a fixed Banach space in terms of a construction due to J.J.M. Chadwick and A.W. Wickstead (see [3]) is presented and in [11] a purely structural characterization of compact elements in a $C^{\ast }$ algebra is given.
In contrast to the above mentioned characterizations of the compact operators which are confined to the framework of the functional analysis, in this paper we present such a characterization by means of the non-connectedness of the attractors of a family of IFSs related to the considered operator.
In this way we establish an unexpected connection between the theory of compact operators and the theory of iterated function systems.
\textbf{2.} \textbf{Preliminary results. }In this paper, for a function $f$\ and $n\in \mathbb{N}$, by $f^{[n]}$ we mean the composition of $f$ by itself $n$ times.
DEFINITION\ 2.1. Let $(X,d)$\ be a metric space. A function $f:X\rightarrow X $\ is called a \textit{contraction} in case there exists $\lambda \in (0,1) $ such that \begin{equation*} d(f(x),f(y))\leq \lambda d(x,y)\text{,} \end{equation*} for all $x,y\in X$.
THEOREM\ 2.2 (The Banach-Caccioppoli-Picard contraction principle). \textit{ If }$X$\textit{\ is a complete metric space, then for each contraction }$ f:X\rightarrow X$\textit{\ there exists a unique fixed point }$x^{\ast }$ \textit{\ of }$f$\textit{.}
\textit{Moreover } \begin{equation*} x^{\ast }=\underset{n\rightarrow \infty }{\lim }f^{[n]}(x_{0})\text{\textit{, }} \end{equation*} \textit{\ for each }$x_{0}\in X$\textit{.}
NOTATION. Given a metric space $(X,d)$, by $K(X)$\ we denote the set of non-empty compact subsets of $X$.
DEFINITION\ 2.3. For a metric space $(X,d)$, the function $h:K(X)\times K(X)\rightarrow \lbrack 0,+\infty )$ defined by
\begin{equation*} h(A,B)=\max (d(A,B),d(B,A))= \end{equation*}
\begin{equation*} =\inf \{r\in \lbrack 0,\infty ):A\subseteq B(B,r)\text{ and }B\subseteq B(A,r)\}\text{,} \end{equation*} where \begin{equation*} B(A,r)=\{x\in X:d(x,A)<r\} \end{equation*} and
\begin{equation*} d(A,B)=\underset{x\in A}{\sup }d(x,B)=\underset{x\in A}{\sup }(\underset{ y\in B}{\inf }d(x,y))\text{,} \end{equation*} turns out to be a metric which is called the \textit{Hausdorff-Pompeiu metric }.
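For finite subsets of the real line the suprema and infima above are attained, and the Hausdorff-Pompeiu distance can be computed directly; the following few lines of Python (an illustration of the definition, with example sets chosen by us) also show that $d(A,B)$ alone is not symmetric.
\begin{verbatim}
def hausdorff(A, B):
    d = lambda X, Y: max(min(abs(x - y) for y in Y) for x in X)   # d(X, Y)
    return max(d(A, B), d(B, A))

if __name__ == "__main__":
    A, B = [0.0, 1.0], [0.0, 0.5, 1.0]
    print(hausdorff(A, B))   # 0.5:  d(A, B) = 0 but d(B, A) = 0.5
\end{verbatim}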
REMARK 2.4. The metric space $(K(X),h)$ is complete, provided that $(X,d)$ is a complete metric space.
DEFINITION\ 2.5. Let $(X,d)$\ be a complete metric space. An \textit{ iterated function system} (for short an IFS) on $X$, denoted by $ S=(X,(f_{k})_{k\in \{1,2,...,n\}})$, consists of a finite family of contractions $(f_{k})_{k\in \{1,2,...,n\}}$, $f_{k}:X\rightarrow X$.
THEOREM\ 2.6. \textit{Given} $\mathcal{S}=(X,(f_{k})_{k\in \{1,2,...,n\}})$ \textit{an iterated function system on }$X$, \textit{the function} $F_{ \mathcal{S}}:K(X)\rightarrow K(X)$ \textit{defined by}
\begin{equation*} F_{\mathcal{S}}(C)=\underset{k=1}{\overset{n}{\cup }}f_{k}(C)\text{,} \end{equation*} \textit{for all} $C\in K(X)$, \textit{which is called the set function associated to }$\mathcal{S}$\textit{, turns out to be a contraction and its unique fixed point, denoted by }$A_{\mathcal{S}}$\textit{, is called the attractor of the IFS }$\mathcal{S}$.
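Theorem 2.6 can be visualized in the simplest possible example: for the IFS on the real line with $f_1(x)=x/3$ and $f_2(x)=x/3+2/3$, the iterates $F_{\mathcal{S}}^{[n]}([0,1])$ converge in the Hausdorff-Pompeiu metric to the middle-thirds Cantor set. The Python sketch below (compact sets being replaced, approximately, by finite point grids; an illustration only, with names and parameters chosen by us) iterates the set function $F_{\mathcal{S}}$.
\begin{verbatim}
def iterate_ifs(maps, points, n_iter):
    """Apply the set function F_S(C) = f_1(C) union f_2(C) n_iter times."""
    for _ in range(n_iter):
        points = sorted({round(f(x), 12) for f in maps for x in points})
    return points

if __name__ == "__main__":
    maps = [lambda x: x / 3, lambda x: x / 3 + 2 / 3]
    start = [i / 20 for i in range(21)]      # a finite stand-in for [0, 1]
    A = iterate_ifs(maps, start, 8)
    print(len(A), A[:4])                     # points accumulate on the Cantor set
\end{verbatim}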
REMARK 2.7. For each $i\in \{1,2,...,n\}$, the fixed point of $f_{i}$\ is an element of $A_{\mathcal{S}}$.
REMARK 2.8. If $A\in K(X)$ has the property that $F_{\mathcal{S} }(A)\subseteq A$, then $A_{\mathcal{S}}\subseteq A$.
$\mathtt{Proof}$. The proof is similar to the one of Lemma 3.6 from [13]. $ \square $
DEFINITION\ 2.9. Let $(X,d)$\ be a metric space and $(A_{i})_{i\in I}$\ a family of nonempty subsets of $X$. The \textit{family} $(A_{i})_{i\in I}$\ is said to be \textit{connected} if for every $i,j\in I$, there exist $n\in \mathbb{N}$\ and $\{i_{1},i_{2},...,i_{n}\}\subseteq I$\ such that $i_{1}=i$ , $i_{n}=j$\ and $A_{i_{k}}\cap A_{i_{k+1}}\neq \emptyset $\ for every $k\in \{1,2,..,n-1\}$.
THEOREM\ 2.10\textit{\ }(see [12], Theorem 1.6.2, page 33)\textit{. Given an IFS }$\mathcal{S}=(X,(f_{k})_{k\in \{1,2,...,n\}})$\textit{, where }$(X,d)$ \textit{\ is a complete metric space, the following statements are equivalent:}
\textit{1)\ the family }$(f_{i}(A_{\mathcal{S}}))_{i\in \{1,2,...,n\}}$ \textit{\ is connected;}
\textit{2) }$A_{\mathcal{S}}$\textit{\ is arcwise connected.}
\textit{3) }$A_{\mathcal{S}}$\textit{\ is connected.}
PROPOSITION 2.11. \textit{For a given complete metric space }$(X,d)$\textit{ , let us consider the IFSs }$\mathcal{S}=(X,f_{1},f_{2})$\textit{\ and }$ \mathcal{S}^{^{\prime }}=(X,f_{1}^{[m]},f_{2})$\textit{, where }$m\in \mathbb{N}$\textit{.}
\textit{If }$A_{\mathcal{S}^{^{\prime }}}$\textit{\ is connected, then }$A_{ \mathcal{S}}$\textit{\ is connected.}
$\mathtt{Proof}$. Since $F_{\mathcal{S}^{^{\prime }}}(A_{\mathcal{S} })=f_{1}^{[m]}(A_{\mathcal{S}})\cup f_{2}(A_{\mathcal{S}})\subseteq A_{ \mathcal{S}}$, we get (using Remark 2.8) $A_{\mathcal{S}^{^{\prime }}}\subseteq A_{\mathcal{S}}$ and hence $f_{2}(A_{\mathcal{S}^{^{\prime }}})\subseteq f_{2}(A_{\mathcal{S}})$. Because $f_{1}^{[m]}(A_{\mathcal{S} ^{^{\prime }}})\subseteq f_{1}(A_{\mathcal{S}})$, it follows that $ f_{1}^{[m]}(A_{\mathcal{S}^{^{\prime }}})\cap f_{2}(A_{\mathcal{S}^{^{\prime }}})\subseteq f_{1}(A_{\mathcal{S}})\cap f_{2}(A_{\mathcal{S}})$ (*). Since $ A_{\mathcal{S}^{^{\prime }}}$\textit{\ }is connected, taking into account Theorem 2.10, we deduce that $f_{1}^{[m]}(A_{\mathcal{S}^{^{\prime }}})\cap f_{2}(A_{\mathcal{S}^{^{\prime }}})\neq \varnothing $, which, using $(\ast )$ , implies that $f_{1}(A_{\mathcal{S}})\cap f_{2}(A_{\mathcal{S}})\neq \varnothing $. Then, using again Theorem 2.10, we infer that $A_{\mathcal{S} } $\textit{\ }is connected. $\square $
PROPOSITION 2.12\textbf{\ }(see [5], page 238, lines 11-12). \textit{Assume that }$H$\textit{\ is a Hilbert space. Let us consider a self-adjoint operator }$N:H\rightarrow H$\textit{\ and }$E$ \textit{its spectral decomposition. Then for each }$\lambda \in \mathbb{R}$\textit{\ we have} \begin{equation*} NE((-\infty ,\lambda ))\leq \lambda E((-\infty ,\lambda )) \end{equation*} \textit{and} \begin{equation*} \lambda E((\lambda ,\infty ))\leq NE((\lambda ,\infty ))\text{,} \end{equation*} \textit{for all }$\lambda \in \mathbb{R}$\textit{.}
PROPOSITION 2.13\textbf{\ }(see [5], page 226, Observation 7). \textit{ Assume that }$H$\textit{\ is a Hilbert space. Let us consider two self-adjoint operators }$N_{1},N_{2}:H\rightarrow H$\textit{.}
\textit{If} \begin{equation*} 0\leq N_{1}\leq N_{2}\text{,} \end{equation*} \textit{then} \begin{equation*} \left\Vert N_{1}\right\Vert \leq \left\Vert N_{2}\right\Vert \text{.} \end{equation*}
PROPOSITION 2.14\textbf{\ }(see [19], ex. 25, page 344). \textit{Assume that }$H$\textit{\ is a Hilbert space. Let us consider a normal operator }$ N:H\rightarrow H$\textit{, }$g$ \textit{a bounded Borel function on }$\sigma (N)$\textit{\ and }$S=g(T)$\textit{. If }$E_{N}$\textit{\ and }$E_{S}$ \textit{\ are the spectral decomposition of }$N$\textit{\ and }$S$\textit{, then } \begin{equation*} E_{S}(\omega )=E_{N}(g^{-1}(\omega ))\text{,} \end{equation*} \textit{for every Borel set }$\omega \subseteq \sigma (S)$\textit{.}
PROPOSITION 2.15 (see [4], Proposition 4.1, page 278). \textit{Assume that }$ H$\textit{\ is a Hilbert space. Let us consider a normal operator} $ N:H\rightarrow H$ \textit{and} $E$ \textit{its spectral decomposition.\ Then }$N$\textit{\ is compact if and only if }$E(\{z\mid \left\vert z\right\vert >\varepsilon \})$\textit{\ has finite rank, for every }$\varepsilon >0$ \textit{.}
PROPOSITION 2.16. \textit{Assume that }$H$\textit{\ is a Hilbert space. Let us consider a bounded linear operator }$A:H\rightarrow H$ \textit{which is invertible. Then }$Id_{H}-A^{\ast }A$\textit{\ is compact if and only if }$ Id_{H}-AA^{\ast }$\textit{\ is compact.}
$\mathtt{Proof}$. According to the well known polar decomposition theorem there exists an unitary operator $U:H\rightarrow H$ and a positive operator $ P:H\rightarrow H$ such that $P^{2}=A^{\ast }A$ and $A=UP$. Then \begin{equation*} Id_{H}-AA^{\ast }=Id_{H}-UP(UP)^{\ast }=Id_{H}-UPP^{\ast }U^{\ast }=Id_{H}-UP^{2}U^{\ast }= \end{equation*} \begin{equation*} =UU^{\ast }-UP^{2}U^{\ast }=U(Id_{H}-P^{2})U^{\ast }=U(Id_{H}-A^{\ast }A)U^{\ast }\text{.} \end{equation*}
Hence $Id_{H}-AA^{\ast }=U(Id_{H}-A^{\ast }A)U^{\ast }$ and $Id_{H}-A^{\ast }A=U^{\ast }(Id_{H}-AA^{\ast })U$. From the last two relations we obtain the conclusion. $\square $
COROLLARY 2.17. \textit{Assume that }$H$\textit{\ is a Hilbert space. Let us consider a bounded linear operator }$S:H\rightarrow H$ \textit{such that }$ \left\Vert S\right\Vert <1$\textit{. Then }$S+S^{\ast }-SS^{\ast }$\textit{\ is compact if and only if }$S+S^{\ast }-S^{\ast }S$\textit{\ is compact.}
$\mathtt{Proof}$. The operator $A=Id_{H}-S$ is invertible since $\left\Vert S\right\Vert <1$. According to Proposition 2.16 $Id_{H}-A^{\ast }A$\textit{\ }is compact if and only if $Id_{H}-AA^{\ast }$ is compact i.e. $S+S^{\ast }-SS^{\ast }$ is compact if and only if $S+S^{\ast }-S^{\ast }S$ is compact. $\square $
PROPOSITION 2.18\textbf{\ }(see [19], ex. 14, page 324). \textit{Assume that }$H$\textit{\ is a Hilbert space and let us consider a bounded linear operator }$S:H\rightarrow H$\textit{. If }$S^{\ast }S$\textit{\ is a compact operator, then }$S$\textit{\ is compact.}
\textbf{3.}\ \textbf{A sufficient condition for the compactness of an operator. }In this section, $H$ is an infinite-dimensional Hilbert space. We shall use the notation $Id_{H}$ for the function $Id_{H}:H\rightarrow H$, given by $Id_{H}(x)=x$, for all $x\in H$. If $S$ and $T$ are bounded linear operators from $H$ to $H$ such that $\left\Vert S\right\Vert ,\left\Vert T\right\Vert <1$, then $S$ and $T$ are contractions. For $w\in H$, we consider the IFS $\mathcal{S}_{w}=(H,f_{1},f_{2})$, where $f_{1},f_{2}:H\rightarrow H$ are given by $f_{1}(x)=S(x)$ and $f_{2}(x)=T(x)+w$, for all $x\in H$.
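Although the theorems of this section concern infinite-dimensional spaces, the objects involved can be illustrated heuristically in $\mathbb{R}^2$: the sketch below approximates $A_{\mathcal{S}_{w}}$ by iterating a point cloud and then evaluates the criterion of Theorem 2.10 by measuring the gap between $f_1(A)$ and $f_2(A)$. The matrices, the choice of $w$ and the numerical parameters are ours and carry no significance beyond the illustration.
\begin{verbatim}
import numpy as np

def approx_attractor(S, T, w, n_iter=14, n_pts=400, seed=0):
    """Crude point-cloud approximation of the attractor of (R^2, Sx, Tx + w)."""
    rng = np.random.default_rng(seed)
    pts = rng.standard_normal((n_pts, 2))
    for _ in range(n_iter):
        pts = np.vstack([pts @ S.T, pts @ T.T + w])
        pts = pts[rng.choice(len(pts), n_pts, replace=False)]  # keep the cloud small
    return pts

def image_gap(S, T, w, pts):
    """Distance between f_1(A) and f_2(A); a clearly positive gap indicates,
    via Theorem 2.10, a disconnected attractor."""
    A1, A2 = pts @ S.T, pts @ T.T + w
    return np.linalg.norm(A1[:, None, :] - A2[None, :, :], axis=2).min()

if __name__ == "__main__":
    w = np.array([1.0, 0.0])
    for r in (0.6, 0.3):                    # contraction ratios chosen for illustration
        S = T = r * np.eye(2)
        pts = approx_attractor(S, T, w)
        print(r, image_gap(S, T, w, pts))   # ~0 for r = 0.6, a visible gap for r = 0.3
\end{verbatim}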
THEOREM\ 3.1. \textit{In the preceding framework, let us consider a bounded linear operator }$S:H\rightarrow H$ \textit{satisfying the condition} $ \left\Vert S\right\Vert <1$\textit{. If for every bounded linear operator }$ T:H\rightarrow H$\textit{\ such that }$\left\Vert T\right\Vert <1$\textit{\ there exists a sequence }$(K_{T,n})_{n}$\textit{\ of compact subsets of }$H$ \textit{\ having the property that }$A_{\mathcal{S}_{w}}$\textit{\ is\ not connected for all }$w\in H-\underset{n}{\cup }K_{T,n}$\textit{, then the operator }$S$\textit{\ is compact.}
$\mathtt{Proof}$. For each $m\in \mathbb{N}$ let us consider the bounded linear operator $U=S^{[m]}$. Obviously $\left\Vert U\right\Vert <1$. Let us consider $P_{\varepsilon }=E((-\infty ,1-\varepsilon ))$ and $\overset{\sim } {P_{\varepsilon }}=E((1+\varepsilon ,\infty ))$, where $E$ is the spectral decomposition of the positive (so self-adjoint, so normal) bounded linear operator \begin{equation*} N=(Id_{H}-U)^{\ast }(Id_{H}-U)=Id_{H}-U-U^{\ast }+U^{\ast }U\text{.} \end{equation*}
We claim\textit{\ that }$P_{\varepsilon }$\textit{\ has finite rank for every }$\varepsilon >0$.
Indeed, if there is to be an $\varepsilon _{0}>0$\textit{\ }such that $ P_{\varepsilon _{0}}$ has infinite rank, then\textit{\ }let us consider the operator $T=(Id_{H}-U)P_{\varepsilon _{0}}$ and remark that \begin{equation*} NP_{\varepsilon _{0}}=NP_{\varepsilon _{0}}^{2}=NP_{\varepsilon _{0}}^{\ast }P_{\varepsilon _{0}}=P_{\varepsilon _{0}}^{\ast }NP_{\varepsilon _{0}}=P_{\varepsilon _{0}}^{\ast }((Id_{H}-U)^{\ast }(Id_{H}-U))P_{\varepsilon _{0}}= \end{equation*} \begin{equation*} =((Id_{H}-U)P_{\varepsilon _{0}})^{\ast }((Id_{H}-U)P_{\varepsilon _{0}})\geq 0\text{.} \end{equation*} Hence, according to Proposition 2.12, we have $0\leq NP_{\varepsilon _{0}}\leq (1-\varepsilon _{0})P_{\varepsilon _{0}}$ and therefore, using Proposition 2.13, it follows that $\left\Vert NP_{\varepsilon _{0}}\right\Vert \leq 1-\varepsilon _{0}$. Consequently we obtain \begin{equation*} \left\Vert T\right\Vert ^{2}=\left\Vert T^{\ast }T\right\Vert =\left\Vert (Id_{H}-U)P_{\varepsilon _{0}})^{\ast }(Id_{H}-U)P_{\varepsilon _{0}}\right\Vert = \end{equation*} \begin{equation*} =\left\Vert P_{\varepsilon _{0}}^{\ast }(Id_{H}-U)^{\ast }(Id_{H}-U)P_{\varepsilon _{0}}\right\Vert =\left\Vert P_{\varepsilon _{0}}NP_{\varepsilon _{0}}\right\Vert \leq \end{equation*} \begin{equation*} \leq \left\Vert P_{\varepsilon _{0}}\right\Vert \left\Vert NP_{\varepsilon _{0}}\right\Vert =\left\Vert NP_{\varepsilon _{0}}\right\Vert \leq 1-\varepsilon _{0} \end{equation*} and thus \begin{equation*} \left\Vert T\right\Vert \leq \sqrt{1-\varepsilon _{0}}<1\text{.} \end{equation*}
For $w\in H$, let us consider, besides $\mathcal{S}_{w}$, the IFS $\mathcal{S }_{w}^{^{\prime }}=(H,f,f_{2})$, where $f:H\rightarrow H$ is given by $ f(x)=U(x)$, for all $x\in H$.
Now let us choose an arbitrary $w\in (Id_{H}-T)P_{\varepsilon _{0}}(H)$. On one hand, since $0$ is the fixed point of $f$, using Remark 2.7, we infer that $0\in A_{\mathcal{S}_{w}^{^{\prime }}}$. On the other hand, using the same argument, we get that $e$, the fixed point of $f_{2}$, belongs to $A_{\mathcal{S}_{w}^{^{\prime }}}$, that is $e=U^{-1}(w)=(Id_{H}-T)^{-1}(w)\in A_{\mathcal{S}_{w}^{^{\prime }}}$. Since $f(e)=f_{2}(0)=w$, we obtain $w\in f(A_{\mathcal{S}_{w}^{^{\prime }}})\cap f_{2}(A_{\mathcal{S}_{w}^{^{\prime }}})$, which implies $f(A_{\mathcal{S}_{w}^{^{\prime }}})\cap f_{2}(A_{\mathcal{S}_{w}^{^{\prime }}})\neq \emptyset $, and therefore, according to Theorem 2.10, $A_{\mathcal{S}_{w}^{^{\prime }}}$ is connected. We conclude (using Proposition 2.11) that $A_{\mathcal{S}_{w}}$ is connected.
Consequently there exists a bounded linear operator $T:H\rightarrow H$ having $\left\Vert T\right\Vert <1$ such that $A_{\mathcal{S}_{w}}$ is connected for every $w\in (Id_{H}-T)P_{\varepsilon _{0}}(H)$.
According to the hypothesis there exists a sequence $(K_{T,n})_{n}$\textit{\ }of compact subsets of $H$\ having the property that $A_{\mathcal{S}_{w}}$\ is\ not connected, for all $w\in H-\underset{n}{\cup }K_{T,n}$.
Therefore we obtain $(Id_{H}-T)P_{\varepsilon _{0}}(H)\subseteq \underset{n}{ \cup }K_{T,n}$ which (taking into account the fact that $(Id_{H}-T)P_{ \varepsilon _{0}}(H)$ is infinite dimensional, that the closed unit ball in a normed linear space $X$ is compact if and only if $X$ is finite dimensional, and Baire's theorem) generates a contradiction.
We assert \textit{that }$\overset{\sim }{P_{\varepsilon }}$\textit{\ has finite rank for every }$\varepsilon >0$.
Indeed, if by contrary we suppose that there exists $\varepsilon _{0}>0$ \textit{\ }such that $\overset{\sim }{P_{\varepsilon _{0}}}$ has infinite rank, let $R_{\varepsilon _{0}}$ designates the orthogonal projection of $H$ onto $(Id_{H}-U)\overset{\sim }{P_{\varepsilon _{0}}}(H)$ and let us consider the bounded linear operator $T=(Id_{H}-U)^{-1}R_{\varepsilon _{0}}$ . Based upon Proposition 2.12, we have \begin{equation*} N\overset{\sim }{P_{\varepsilon _{0}}}=(Id_{H}-U)^{\ast }(Id_{H}-U)\overset{ \sim }{P_{\varepsilon _{0}}}\geq (1+\varepsilon _{0})\overset{\sim }{ P_{\varepsilon _{0}}}\text{,} \end{equation*} which implies that \begin{equation*} \left\Vert (Id_{H}-U)\overset{\sim }{P_{\varepsilon _{0}}}(x)\right\Vert ^{2}=<N\overset{\sim }{P_{\varepsilon _{0}}}(x),\overset{\sim }{ P_{\varepsilon _{0}}}(x)>\geq (1+\varepsilon _{0})\left\Vert \overset{\sim }{ P_{\varepsilon _{0}}}(x)\right\Vert ^{2}\text{,} \end{equation*} i.e. \begin{equation} \sqrt{1+\varepsilon _{0}}\left\Vert \overset{\sim }{P_{\varepsilon _{0}}} (x)\right\Vert \leq \left\Vert (Id_{H}-U)\overset{\sim }{P_{\varepsilon _{0}} }(x)\right\Vert \text{,} \tag{0} \end{equation} for each $x\in H$. So, as for each $u\in H$ there exists $x_{u}\in H$ such that $R_{\varepsilon _{0}}(u)=(Id_{H}-U)\overset{\sim }{P_{\varepsilon _{0}}} (x_{u})$, we infer that \begin{equation*} \left\Vert T(u)\right\Vert =\left\Vert (Id_{H}-U)^{-1}R_{\varepsilon _{0}}(u)\right\Vert =\left\Vert (Id_{H}-U)^{-1}(Id_{H}-U)\overset{\sim }{ P_{\varepsilon _{0}}}(x_{u})\right\Vert = \end{equation*} \begin{equation*} =\left\Vert \overset{\sim }{P_{\varepsilon _{0}}}(x_{u})\right\Vert \overset{ (0)}{\leq }\frac{1}{\sqrt{1+\varepsilon _{0}}}\left\Vert (Id_{H}-U)\overset{ \sim }{P_{\varepsilon _{0}}}(x)\right\Vert = \end{equation*} \begin{equation*} =\frac{1}{\sqrt{1+\varepsilon _{0}}}\left\Vert R_{\varepsilon _{0}}(u)\right\Vert \leq \frac{1}{\sqrt{1+\varepsilon _{0}}}\left\Vert R_{\varepsilon _{0}}\right\Vert \left\Vert u\right\Vert =\frac{1}{\sqrt{ 1+\varepsilon _{0}}}\left\Vert u\right\Vert \end{equation*} i.e. $\left\Vert T(u)\right\Vert \leq \frac{1}{\sqrt{1+\varepsilon _{0}}} \left\Vert u\right\Vert $, for each $u\in H$, which takes on the form \begin{equation*} \left\Vert T\right\Vert \leq \frac{1}{\sqrt{1+\varepsilon _{0}}}<1\text{.} \end{equation*}
For $w\in H$, let us consider, besides $\mathcal{S}_{w}$, the IFS $\mathcal{S }_{w}^{^{\prime }}=(H,f,f_{2})$, where $f:H\rightarrow H$ is given by $ f(x)=U(x)$, for all $x\in H$.
Now let us choose an arbitrary $w\in (Id_{H}-T)\overset{\sim }{ P_{\varepsilon _{0}}}(H)$. Then there exists $u\in H$ such that $w=(Id_{H}-T) \overset{\sim }{P_{\varepsilon _{0}}}(u)$. On one hand, since $0$ is the fixed point of $f$, using Remark 2.7, we infer that $0\in A_{\mathcal{S} _{w}^{^{\prime }}}$. On the other hand, using the same argument, we get that $e$ (the fixed point of $f_{2}$) belongs to $A_{\mathcal{S}_{w}^{^{\prime }}} $, that is $e=U^{-1}(w)=(Id_{H}-T)^{-1}(w)\in A_{\mathcal{S} _{w}^{^{\prime }}}$, and therefore $f(e)\in A_{\mathcal{S}_{w}^{^{\prime }}}$ . Since $f(0)=0 $, on one hand we infer that \begin{equation} 0\in f(A_{\mathcal{S}_{w}^{^{\prime }}})\text{.} \tag{1} \end{equation}
On the other hand we have \begin{equation*} f_{2}(f(e))=TU(e)+w=TU(Id_{H}-T)^{-1}(w)+(Id_{H}-T)(Id_{H}-T)^{-1}(w)= \end{equation*} \begin{equation*} =(Id_{H}-T(Id_{H}-U))(Id_{H}-T)^{-1}(w)= \end{equation*} \begin{equation*} =(Id_{H}-T(Id_{H}-U))(Id_{H}-T)^{-1}(Id_{H}-T)\overset{\sim }{P_{\varepsilon _{0}}}(u)= \end{equation*} \begin{equation*} =(Id_{H}-T(Id_{H}-U))\overset{\sim }{P_{\varepsilon _{0}}}(u)=\overset{\sim } {P_{\varepsilon _{0}}}(u)-(Id_{H}-U)^{-1}R_{\varepsilon _{0}}(Id_{H}-U)) \overset{\sim }{P_{\varepsilon _{0}}}(u)= \end{equation*} \begin{equation*} =\overset{\sim }{P_{\varepsilon _{0}}}(u)-(Id_{H}-U)^{-1}(Id_{H}-U))\overset{ \sim }{P_{\varepsilon _{0}}}(u)=0\text{,} \end{equation*} so \begin{equation} 0\in f_{2}(A_{\mathcal{S}_{w}^{^{\prime }}})\text{.} \tag{2} \end{equation}
From $(1)$ and $(2)$ we obtain $0\in f(A_{\mathcal{S}_{w}^{^{\prime }}})\cap f_{2}(A_{\mathcal{S}_{w}^{^{\prime }}})$, i.e. $f(A_{\mathcal{S} _{w}^{^{\prime }}})\cap f_{2}(A_{\mathcal{S}_{w}^{^{\prime }}})\neq \varnothing $, so, relying on Theorem 2.10, $A_{S_{w}^{^{\prime }}}$ is connected. We appeal to Proposition 2.11 to deduce that $A_{\mathcal{S}_{w}}$ is connected.
Consequently there exists a bounded linear operator $T:H\rightarrow H$ having $\left\Vert T\right\Vert <1$ such that $A_{\mathcal{S}_{w}}$is connected for every $w\in (Id_{H}-T)\overset{\sim }{P_{\varepsilon _{0}}}(H)$ .
Taking into account the hypothesis there exists a sequence $(K_{T,n})_{n}$ \textit{\ }of compact subsets of $H$\ having the property that $A_{\mathcal{S}_{w}}$\ is\ not connected for all $w\in H-\underset{n}{\cup }K_{T,n}$.
Thus we obtain the inclusion $(Id_{H}-T)\overset{\sim }{P_{\varepsilon _{0}}} (H)\subseteq \underset{n}{\cup }K_{T,n}$ which generates a contradiction by invoking the same arguments that we used in the final part of the previous claim's proof.
Now\textit{\ we state that }$Id_{H}-(Id_{H}-U)^{\ast }(Id_{H}-U)$ \textit{is compact.}
If $\mathcal{E}$ is the spectral decomposition of $Id_{H}-N$, using Proposition 2.14, we obtain $E((-\infty ,1-\varepsilon )\cup (1+\varepsilon ,\infty ))=E(g^{-1}((-\infty ,-\varepsilon )\cup (\varepsilon ,\infty )))= \mathcal{E(}(-\infty ,-\varepsilon )\cup (\varepsilon ,\infty )\mathcal{)}$, where $g(x)=1-x$. Since, by the above two claims, the operator $E((-\infty ,1-\varepsilon )\cup (1+\varepsilon ,\infty ))=E((-\infty ,1-\varepsilon ))+E((1+\varepsilon ,\infty ))$ has finite rank, we get that $ \mathcal{E(}(-\infty ,-\varepsilon )\cup (\varepsilon ,\infty )\mathcal{)}$ has finite rank, for every $\varepsilon >0$. Proposition 2.15\ assures us that $Id_{H}-N$ is compact, i.e. $Id_{H}-(Id_{H}-U)^{\ast }(Id_{H}-U)=U+U^{\ast }-U^{\ast }U$ is compact.
\textit{Hence} \begin{equation*} S^{[m]}+(S^{[m]})^{\ast }-S^{[m]}(S^{[m]})^{\ast } \end{equation*} \textit{is compact}, \textit{for every }$m\in \mathbb{N}$.
For $m=1$, we get that $S+S^{\ast }-S^{\ast }S$ is compact. Note that, by Corollary 2.17, $S+S^{\ast }-SS^{\ast }$ is compact and hence $SS^{\ast }-S^{\ast }S$ is compact (3).
Consequently $S^{\ast }(S^{\ast }S-SS^{\ast })S=(S^{\ast })^{[2]}S^{[2]}-S^{\ast }SS^{\ast }S$ is compact. (4)
Moreover, for $m=2$, we obtain that $S^{[2]}+(S^{\ast })^{[2]}-(S^{\ast })^{[2]}S^{[2]}$ is compact. (5)
But \begin{equation*} (S+S^{\ast }-S^{\ast }S)(S+S^{\ast }-S^{\ast }S)= \end{equation*} \begin{equation*} =(S+S^{\ast })^{[2]}-(S+S^{\ast }-S^{\ast }S)S^{\ast }S-S^{\ast }S(S+S^{\ast }-S^{\ast }S)-S^{\ast }SS^{\ast }S \end{equation*} is compact.
Since $S+S^{\ast }-S^{\ast }S$ is compact, we infer that \begin{equation*} (S+S^{\ast })^{[2]}-S^{\ast }SS^{\ast }S= \end{equation*} \begin{equation*} =S^{[2]}+(S^{\ast })^{[2]}+SS^{\ast }+S^{\ast }S-S^{\ast }SS^{\ast }S= \end{equation*} \begin{equation*} =S^{[2]}+(S^{\ast })^{[2]}-(S^{\ast })^{[2]}S^{[2]}+SS^{\ast }+S^{\ast }S+(S^{\ast })^{[2]}S^{[2]}-S^{\ast }SS^{\ast }S \end{equation*} is compact. (6)
Then, from (4), (5) and (6), we get that $SS^{\ast }+S^{\ast }S$ is compact. (7)
From (3) and (7) we deduce that $S^{\ast }S$ is a compact operator and, using Proposition 2.18, we conclude that $S$ is compact. $\square $
\textbf{4. A necessary condition for the compactness of an operator. }In this section $X$ is a Banach space. We shall designate by $Id_{X}$ the function $Id_{X}:X\rightarrow X$, given by $Id_{X}(x)=x$, for all $x\in X$. If $S$ and $T$ are bounded linear operators from $X$ to $X$ such that $ \left\Vert S\right\Vert ,\left\Vert T\right\Vert <1$, then $S$ and $T$ are contractions and $T^{[n]}-Id_{X}$ is invertible, for each $n\in \mathbb{N}$. For $w\in X$, we consider the IFS $S_{w}=(X,f_{1},f_{2})$, where $ f_{1},f_{2}:X\rightarrow X$ are given by $f_{1}(x)=S(x)$ and $f_{2}(x)=T(x)+w$, for all $x\in X$.
THEOREM\ 4.1. \textit{In the above mentioned setting, if the operator }$S$ \textit{is compact, then} \textit{there exists a family }$(K_{n})_{n\in \mathbb{N}}$ \textit{of compact subsets of }$X$\textit{\ such that }$A_{ \mathcal{S}_{w}}$\textit{\ is\ not connected, for all }$w\in X-\underset{ n\in \mathbb{N}}{\cup }K_{n}$\textit{.}
$\mathtt{Proof}$. The proof given in Theorem 5, from [15], applies with little change. More precisely let $C_{0}$ be the compact set $\overline{ S(B(0,1))}$. Let $X^{^{\prime }},X_{1},X_{2},...,X_{n},...$ be given by \begin{equation*} X^{^{\prime }}=S(X)=\underset{k\in \mathbb{N}}{\cup }kC_{0} \end{equation*} and \begin{equation*} X_{n}=(T-Id_{X})(T^{[n]}-Id_{X})^{-1}(X^{^{\prime }}-T^{[n]}(X^{^{\prime }})) \text{,} \end{equation*} for each $n\in \mathbb{N}$. We have \begin{equation*} X_{n}=(T-Id_{X})(T^{[n]}-Id_{X})^{-1}(\underset{k\in \mathbb{N}}{\cup } kC_{0}-T^{[n]}(\underset{l\in \mathbb{N}}{\cup }lC_{0}))= \end{equation*} \begin{equation*} =(T-Id_{X})(T^{[n]}-Id_{X})^{-1}(\underset{k\in \mathbb{N}}{\cup }kC_{0}- \underset{l\in \mathbb{N}}{\cup }lT^{[n]}(C_{0}))= \end{equation*} \begin{equation*} =(T-Id_{X})(T^{[n]}-Id_{X})^{-1}(\underset{k,l\in \mathbb{N}}{\cup } (kC_{0}-lT^{[n]}(C_{0}))\text{,} \end{equation*} for each $n\in \mathbb{N}$ and since $kC_{0}-lT^{[n]}(C_{0})$ is compact for all $k,l\in \mathbb{N}$, we infer that $X_{n}$ is a countable union of compact subsets of $X$. Therefore there exists a family\textit{\ }$ (K_{n})_{n\in \mathbb{N}}$ of compact subsets of $X$ such that $\underset{ n\in \mathbb{N}}{\cup }X_{n}=\underset{n\in \mathbb{N}}{\cup }K_{n}$. The rest of the proof of the Theorem mentioned above does not require any modification.
Hence $A_{\mathcal{S}_{w}}$\textit{\ }is disconnected, for each $w\in X\smallsetminus \underset{n\in \mathbb{N}}{\cup }X_{n}=X\smallsetminus \underset{n\in \mathbb{N}}{\cup }K_{n}$. $\square $
REMARK 4.2. If $X$ is infinite dimensional, then $W\overset{not}{=} X\smallsetminus \underset{n\in \mathbb{N}}{\cup }X_{n}=X\smallsetminus \underset{n\in \mathbb{N}}{\cup }K_{n}$ is dense in $X$.
$\mathtt{Proof}$. Indeed, let us note that $K_{n}$ is a closed set. Moreover $\overset{\circ }{K_{n}}=\varnothing $ since if this is not the case, then the closure of the unit ball of the infinite-dimensional space $X$ is compact which is a contradiction. Consequently $X_{n}$ is nowhere dense, for each $n\in \mathbb{N}$, and therefore $W$ is dense in $X$.
\begin{center} \textbf{References} \end{center}
[1] D. Baki\'{c} and B. Gulja\v{s}, \textit{Which operators approximately annihilate orthonormal bases?}, Acta Sci. Math. (Szeged) 64 (1998), No.3-4, 601-607.
[2] M.F. Barnsley, \textit{Fractals everywhere}, Academic Press Professional, Boston, 1993.
[3] J. J. M. Chadwick and A.W. Wickstead, \textit{A quotient of ultrapowers of Banach spaces and semi-Fredholm operators}, Bull. London Math. Soc. 9 (1977), 321-325.
[4] J. B. Conway, \textit{A course in functional analysis}, Springer-Verlag, New York, Berlin, Heidelberg, Tokyo, 1985.
[5] R. Cristescu, \textit{No\c{t}iuni de Analiz\u{a} Func\c{t}ional\u{a} Liniar\u{a}}, Editura Academiei Rom\^{a}ne, Bucure\c{s}ti, 1998.
[6] D. Dumitru and A. Mihail, \textit{A sufficient condition for the connectedness of the attractors of an infinite iterated function systems}, An. \c{S}tiin\c{t}. Univ. Al. I. Cuza Ia\c{s}i. Mat. (N.S.), LV (2009), f.1, 87-94.
[7] K.J. Falconer, \textit{Fractal geometry, Mathematical foundations and applications}, John Wiley \& Sons, Ltd., Chichester, 1990.
[8] P. A. Fillmore and J.P. Williams, \textit{On operator ranges}, Adv. Math. 7 (1971), 254-281.
[9] J.E. Hutchinson, \textit{Fractals and self-similarity}, Indiana Univ. Math. J. 30 (1981), 713-747.
[10] K. Imazeki, \textit{Characterizations of compact operators and semi-Fredholm operators}, TRU Math. 16 (1980), No.2, 1-8.
[11] J. M. Isidro, \textit{Structural characterization of compact operators} , in Current topics in operator algebras, (Nara, 1990), 114-129, World Sci. Publ., River Edge, NJ, 1991.
[12] J. Kigami, \textit{Analysis on Fractals,} Cambridge University Press, 2001.
[13] R. Miculescu and A. Mihail, \textit{Lipscomb's space }$\omega ^{A}$ \textit{\ is the attractor of an infinite IFS containing affine transformations of }$l^{2}(A)$, Proc. Amer. Math. Soc. 136\ (2008), No. 2, 587-592.
[14] A. Mihail, \textit{On the connectivity of the attractors of iterated multifunction systems}, Real Anal. Exchange, 34 (2009), No. 1, 195-206.
[15] A. Mihail and R. Miculescu, \textit{On a family of IFSs whose attractors are not connected}, to appear in J. Math. Anal. Appl., DOI information: 10.1016/j.jmaa.2010.10.039.
[16] K. Muroi and K. Tamaki, \textit{On Ringrose's characterization of compact operators}, Math. Japon. 19 (1974), 259-261.
[17] J. R. Ringrose, \textit{Compact non-self-adjoint operators}, Van Nostrand Reinhold Company, London, 1971.
[18] \'{A}. Rod\'{e}s-Us\'{a}n, \textit{A characterization of compact operators}, in Proceedings of the tenth Spanish-Portuguese on mathematics, III (Murcia, 1985), 178-182, Univ. Murcia, Murcia, 1985.
[19] W. Rudin, \textit{Functional analysis}, 2nd ed., International Series in Pure and Applied Mathematics. New York, NY: McGraw-Hill, 1991.
Department of Mathematics
Faculty of Mathematics and Informatics
University of Bucharest
Academiei Street, no. 14
010014 Bucharest, Romania
E-mail: mihail\[email protected]
\qquad\ \ \ \ \ [email protected]
\end{document}
\begin{document}
\maketitle
\noindent{\bf Abstract} Let $u = \{u(t, x); (t,x)\in \mathbb R_+\times \mathbb R\}$ be the solution to a linear stochastic heat equation driven by a Gaussian noise, which is a Brownian motion in time and a fractional Brownian motion in space with Hurst parameter $H\in(0, 1)$. For any given $x\in \mathbb R$ (resp. $t\in \mathbb R_+$), we show a decomposition of the stochastic process $t\mapsto u(t,x)$ (resp. $x\mapsto u(t,x)$) as the sum of a fractional Brownian motion with Hurst parameter $H/2$ (resp. $H$) and a stochastic process with $C^{\infty}$-continuous trajectories. Some applications of those decompositions are discussed. \vskip0.3cm
\noindent {\bf Keywords} {Stochastic heat equation; fractional Brownian motion; path regularity; law of the iterated logarithm.}
\vskip0.3cm
\noindent {\bf Mathematics Subject Classification (2000)}{ 60G15, 60H15, 60G17.}
\section{Introduction}
Consider the following one-dimensional stochastic heat equation
\begin{equation}\label{eq SPDE}
\frac{\partial u}{\partial t}=\frac{\kappa}{2}\frac{\partial ^2 u}{\partial x^2}+\dot W, \ \ \ t\ge0, x\in \mathbb R,
\end{equation} with some initial condition $u(0,x) \equiv0$, $\dot W=\frac{\partial^2 W}{\partial t\partial x}$, where $W$ is a centered Gaussian process with covariance given by \begin{equation}\label{eq cov}
\mathbb E[W(s,x)W(t,y)]=\frac12\left(|x|^{2H}+|y|^{2H}-|x-y|^{2H} \right)(s\wedge t), \ \
\end{equation} for any $s,t\ge0, x, y\in\mathbb R$, with $H \in (0,1)$. That is, $W$ is a standard Brownian motion in time and a fractional Brownian motion (fBm for short) with Hurst parameter $H$ in space.
When $H=1/2$, $\dot W$ is a space-time white noise and $u$ is the classical stochastic convolution, which has been understood very well (see e.g., \cite{Walsh}). Theorem 3.3 in \cite{K} tells us that the stochastic process $t\mapsto u(t,x)$ (resp. $x\mapsto u(t,x)$) can be represented as the sum of a fBm with Hurst parameter $1/4$ (resp. $1/2$) and a stochastic process with $C^{\infty}$-continuous trajectories. Hence, locally $t\mapsto u(t,x)$ (resp. $x\mapsto u(t,x)$ ) behaves as a fBm with Hurst parameter $1/4$ (resp. $1/2$), and it has the same regularity (such as the H\"older continuity, the global and local moduli of continuities, Chung-type law of the iterated logarithm) as a fBm with Hurst parameter $1/4$ (resp. $1/2$). See
Lei and Nualart \cite{LN} for earlier related work.
When $H\in(1/2,1)$, Mueller and Wu \cite{MW} used such a decomposition to study the critical dimension for hitting points for the fBm; Tudor and Xiao \cite{TX} studied the sample regularities of the solution for the fractional-colored stochastic heat equation by using this decomposition.
When $H\in (0,1/2)$, the spatial covariance $\Lambda$, given in $$ \mathbb E\left[\dot W(s,x)\dot W(t,y)\right]=\delta_0(t-s)\Lambda(x-y), $$
is a distribution, which fails to be positive. The study of stochastic partial differential equations with this kind of noises lies outside the scope of application of the classical references (see, e.g., \cite{ Dal, PZ, DPZ}).
It seems that the decomposition results in \cite{MW} and \cite{TX} are very hard to extend to the case of $H\in (0,1/2)$.
Recently, stochastic partial differential equations driven by a fractional Gaussian noise in space with $H\in(0,1/2)$ have attracted many authors' attention.
For example, Balan et al. \cite{BJQ} studied the existence and uniqueness of a mild solution for the stochastic heat equation with affine multiplicative fractional noise in space, that is, when the diffusion coefficient is given by an affine function $\sigma(x)=ax+b$. They
established the H\"older continuity of the solution in \cite{BJQ2016}. The case of a general nonlinear coefficient $\sigma$, which has a Lipschitz derivative and satisfies $\sigma(0)=0$, has been studied in Hu et al. \cite{HHLNT}.
In this paper, we give unified forms of the decompositions of the stochastic convolution in both the temporal and the spatial variable when $H\in (0,1)$. That is, for any given $x\in \mathbb R$ (resp. $t\in \mathbb R_+$), we show a decomposition of the stochastic process $t\mapsto u(t,x)$ (resp. $x\mapsto u(t,x)$) as the sum of a fractional Brownian motion with Hurst parameter $H/2$ (resp. $H$) and a stochastic process with $C^{\infty}$-continuous trajectories.
Those decompositions not only lead to a better understanding of the H\"older regularity of the stochastic convolution \eqref{eq SPDE}, but also give the uniform and local moduli of continuities and Chung-type law of iterated logarithm.
Notice that our decompositions are natural extensions of \cite[Theorem 3.3]{K}, and they are given in different forms with that obtained in \cite{MW} and \cite{TX}.
The rest of this paper is organized as follows. In Section 2, we recall some results about the Gaussian noise and the stochastic convolution. The main results are given in Section 3, and their proofs are given in Section 4.
\section{The Gaussian noise and the stochastic convolution}
In this section, we introduce the Gaussian noise and corresponding stochastic integration, borrowed from \cite{BJQ} and \cite{PT}.
Let $\mathcal S$ be the space of Schwartz functions, and $\mathcal S'$ be its dual, the space of tempered distributions. The Fourier transform of a function $u\in \mathcal S$ is defined as
$$
\mathcal Fu(\xi)=\int_{\mathbb R}e^{-i\xi x}u(x)dx,
$$ and the inverse Fourier transform is given by $\mathcal F^{-1}u(\xi)=(2\pi)^{-1}\mathcal Fu(-\xi)$.
Given a domain $G\subset \mathbb R^n$ for some $n\ge1$, let $\mathcal D(G)$ be the space of all real-valued infinitely differential functions with compact support on $G$. According to \cite[Theorem 3.1]{PT}, the noise $W$ can be represented by a zero-mean Gaussian family $\{W(\phi), \phi\in \mathcal D((0,\infty)\times \mathbb R)\}$ defined on a complete probability space $(\Omega, \mathcal F, \mathbb P)$, whose covariance is given by \begin{equation}\label{eq cov 2}
\mathbb E[W(\phi)W(\psi)]=c_{1, H}\int_{\mathbb R_+\times \mathbb R} \mathcal F\phi(s,\xi)\overline{\mathcal F\psi(s,\xi)}|\xi|^{1-2H}dsd\xi,\ \ \ \ H \in (0,1), \end{equation} for any $\phi ,\psi \in \mathcal D((0,\infty)\times \mathbb R)$, where the Fourier transforms $\mathcal F \phi, \mathcal F\psi$ are understood as Fourier transforms in space only, $\bar z$ is the conjugation of a complex number $z$ and \begin{equation}\label{const c1H} c_{1, H}=\frac{1}{2\pi} \Gamma(2H+1)\sin(H \pi). \end{equation}
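As a brief consistency check (a formal sketch only, not needed in the sequel), the spectral formula \eqref{eq cov 2} is compatible with the covariance \eqref{eq cov}: taking formally $\phi=\mathbf 1_{[0,s]}\otimes\mathbf 1_{[0,x]}$ and $\psi=\mathbf 1_{[0,t]}\otimes\mathbf 1_{[0,y]}$ with $x,y>0$ (such indicators can be approximated by elements of $\mathcal D((0,\infty)\times \mathbb R)$), we have $\mathcal F\phi(r,\xi)\overline{\mathcal F\psi(r,\xi)}\,|\xi|^{1-2H}=\mathbf 1_{[0,s]}(r)\mathbf 1_{[0,t]}(r)\frac{(1-e^{-i\xi x})(1-e^{i\xi y})}{|\xi|^{1+2H}}$, and hence \eqref{eq cov 2} formally yields \begin{equation*} \mathbb E[W(s,x)W(t,y)]=(s\wedge t)\, c_{1, H}\int_{\mathbb R}\frac{(1-e^{-i\xi x})(1-e^{i\xi y})}{|\xi|^{1+2H}}d\xi=(s\wedge t)\,\frac12\left(|x|^{2H}+|y|^{2H}-|x-y|^{2H} \right), \end{equation*} the last equality being the classical harmonizable (spectral) representation of the fBm covariance with the normalization \eqref{const c1H}.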
Let $\mathcal H$ be the Hilbert space obtained by completing $\mathcal D (\mathbb R)$ under the inner product \begin{equation} \langle \phi, \psi\rangle_{\mathcal H}= \left\{ \begin{aligned}\label{eq inn}
&c_{2, H}^2\int_{\mathbb R^2} (\phi( x+y)-\phi(x))(\psi( x+y)-\psi(x))|y|^{2H-2} dxdy,& H&\in(0,1/2);\\
& \int_{\mathbb R} \phi(x)\psi(x)dx, & H&=1/2;\\
&c_{3, H}^2\int_{\mathbb R^2} \phi( x+y)\psi(x)|y|^{2H-2} dxdy, &H&\in(1/2,1), \end{aligned} \ \right. \end{equation} for any $\phi ,\psi \in \mathcal H $, where $c_{2, H}^2=H\left(\frac12-H\right)$, $c_{3, H}^2=H(2H-1).$
Denote $\|\phi\|_{\mathcal H}:=\sqrt{\langle \phi, \phi\rangle_{\mathcal H}}$ for any $\phi\in \mathcal H$. Then \begin{align}\label{eq cov 3}
\mathbb E[W(\phi)W(\psi)]
=\mathbb E\left[\int_{\mathbb R_+}\langle \phi(s), \psi(s)\rangle_{\mathcal H} ds\right]. \end{align} See e.g., \cite{HHLNT, PT, TTV}.
Let \begin{equation}\label{eq p} p_t(x)=\frac{1}{\sqrt{2\pi \kappa t}}e^{-\frac{x^2}{2\kappa t}} \end{equation} be the heat kernel on the real line related to $\frac{\kappa}{2}\Delta$. \begin{definition}\label{def solut}
We say that a random field $u=\{
u(t, x); t\in [0, T], x\in \mathbb R\}$ is a mild solution of \eqref{eq SPDE}, if $u$ is predictable and for any $(t, x)\in [0, T]\times \mathbb R$, \begin{equation}\label{eq solut} u(t,x)= \int_0^t\int_{\mathbb R}p_{t-s}(x-y)W(ds,dy),\ \ \ a.s.. \end{equation}
It is usually called the {\it stochastic convolution}. Denote $u(t,x)$ by $u_t(x)$ for any $(t,x)\in \mathbb R_+\times\mathbb R$. \end{definition}
\section{Main results}
Recall that a mean-zero Gaussian process $\{X_t\}_{t\ge0}$ is called a (two-sided) fractional Brownian motion with Hurst parameter $H\in (0,1)$, if it satisfies \begin{align}\label{eq fBm}
X_0=0, \ \ \ \mathbb E\left(|X_t-X_s|^2\right)=|t-s|^{2H}, \ \ \ t, s\in \mathbb R. \end{align}
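Equivalently (a standard computation, recorded here only for later reference), \eqref{eq fBm} determines the covariance \begin{equation*} \mathbb E[X_tX_s]=\frac12\left(|t|^{2H}+|s|^{2H}-|t-s|^{2H}\right),\qquad t,s\in \mathbb R, \end{equation*} which follows by expanding $\mathbb E\left(|X_t-X_s|^2\right)=\mathbb E(X_t^2)+\mathbb E(X_s^2)-2\mathbb E[X_tX_s]$ and using $\mathbb E(X_t^2)=|t|^{2H}$ (take $s=0$ in \eqref{eq fBm}); this is the same spatial structure as in \eqref{eq cov}.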
\begin{theorem}\label{thm main} For $H\in (0,1)$, the following results hold for the stochastic convolution $u=\{u_t(x); (t,x)\in \mathbb R_+\times \mathbb R\}$ given by \eqref{eq solut}: \begin{itemize}
\item[(a)] For every $x\in \mathbb R$, there exists a fBm $\{X_t\}_{t\ge0}$ with Hurst parameter $H/{2}$, such that $$ u_t(x)- C_{1, H, \kappa} X_t, \ \ \ \ t\ge0, $$ defines a mean-zero Gaussian process with a version that is continuous on $\mathbb R_+$ and infinitely differentiable on $(0, \infty)$, where \begin{equation}\label{eq c} C_{1, H, \kappa}:=\left( \frac{2^{1-H}\Gamma(2H)}{\kappa^{1-H}\Gamma(H)} \right)^{\frac12}. \end{equation}
\item[(b)] For every $t>0$, there exists a fBm $\{B(x)\}_{x\in\mathbb R}$ with Hurst parameter $H$,
such that
$$
u_t(x)- \kappa^{-\frac12} B(x),\ \ \ \ x\in\mathbb R,
$$
defines a Gaussian random field with a version that is continuous on $\mathbb R$ and infinitely differentiable on $\mathbb R$. \end{itemize} \end{theorem}
Let us observe that Theorem \ref{thm main} says that, locally, $t\mapsto u_t(x)$ behaves as a fBm with Hurst parameter $H/2$ and $x\mapsto u_t(x)$ behaves as a fBm with Hurst parameter $H$. Thus, for instance, combining this observation with known facts about fBm (see e.g., \cite{MRm}, \cite[Chapter 1]{Mis}, or \cite{Xiao2008}), we can obtain the following sample regularities of the stochastic convolution.
By applying Theorem \ref{thm main} and the H\"older continuity result for fBms \cite[Chapter 1]{Mis}, we have the following well-known results (see e.g., \cite[Chapter 3]{Walsh}, \cite[Theorem 1.1]{BJQ2016}). \begin{corollary} \begin{itemize}
\item[(a)]
For every $x\in \mathbb R$, the stochastic process
$t\mapsto u_t(x)$ is a.s. H\"older continuous of parameter $H/2-\varepsilon$ for every $\varepsilon >0$.
\item[(b)]
For every $t>0$, the stochastic process
$x\mapsto u_t(x)$ is a.s. H\"older continuous of parameter $H-\varepsilon$ for every $\varepsilon >0$. \end{itemize}
\end{corollary}
By applying Theorem \ref{thm main} and the variations of fBms (see e.g.,\cite{MRm}), we have the following results. \begin{corollary}\label{prop sample regularity11} Let $\mathcal N$ be a standard normal random variable. Then, for every $x\in \mathbb R$ and $[a,b]\subset \mathbb R_+$,
\begin{align}\label{eq variation 1}
\lim_{n\rightarrow \infty}\sum_{a2^n\le i\le b 2^n}\left[u_{(i+1)/2^n}(x)-u_{i/2^n}(x) \right]^{\frac 2 H}= (b-a)C_{1, H, \kappa}^{2/H}\mathbb E\left[|\mathcal N|^{2/H}\right], \ \ a.s.; \end{align}
and for every $t>0$, $[c,d]\subset \mathbb R$,
\begin{align}\label{eq variation 2}
\lim_{n\rightarrow \infty}\sum_{c2^n\le i\le d 2^n}\left[u_{t}((i+1)/2^n)-u_{t}(i/2^n) \right]^{\frac 1 H}= (d-c) C_{2, H, \kappa}^{1/H} \mathbb E\left[|\mathcal N|^{1/H}\right], \ \ a.s.. \end{align} \end{corollary}
By applying Theorem \ref{thm main} and the global and local moduli of continuity results for fBms (see e.g., \cite[Chapter 7]{MRm}), we have \begin{corollary}\label{prop sample regularity2 } \begin{itemize}
\item[(a)](Global moduli of continuity for intervals).
For every $x\in \mathbb R$ and $[a,b]\subset \mathbb R_+$, we have
\begin{align}\label{eq LIL 1}
\lim_{\varepsilon \rightarrow 0+}\sup_{s,t\in[a,b], 0<|t-s|\le \varepsilon}\frac{|u_{t}(x)-u_{s}(x)|}{ |t-s|^{H/2}\sqrt{2\ln (1/|t-s|)} }= C_{3, H, \kappa},\ \ \ a.s.; \end{align}
and for every $t>0$, $[c,d]\subset \mathbb R$, we have
\begin{align}\label{eq LIL 2}
\lim_{\varepsilon \rightarrow 0+}\sup_{x,y\in[c,d], 0<|x-y|\le \varepsilon}\frac{|u_{t}(x)-u_{t}(y)|}{ |x-y|^{H}\sqrt{2\ln (1/|x-y|)} }= C_{4, H, \kappa},\ \ \ a.s.. \end{align}
\item[(b)](Local moduli of continuity for intervals).
For every $t>0$ and $x\in \mathbb R$ , we have
\begin{align}\label{eq local moduli 1}
\varlimsup_{\varepsilon \rightarrow 0+}\frac{\sup_{0<|t-s|\le \varepsilon} |u_{t}(x)-u_{s}(x)|}{\varepsilon ^{H/2}\sqrt{2\ln\ln (1/\varepsilon )} }= C_{5, H, \kappa},\ \ \ a.s.;
\end{align}
and
\begin{align}\label{eq local moduli 2}
\varlimsup_{\varepsilon \rightarrow 0+}\frac{\sup_{0<|x-y|\le \varepsilon} |u_{t}(x)-u_{t}(y)|}{\varepsilon ^{H}\sqrt{2\ln\ln (1/\varepsilon)} }= C_{6, H, \kappa},\ \ \ a.s..
\end{align}
\end{itemize}
\end{corollary}
By applying Theorem \ref{thm main} and the Chung-type law of the iterated logarithm in \cite[Theorem 3.3]{MR}, we have \begin{corollary}\label{prop sample regularity3}
For every $t>0$ and $x\in \mathbb R$, we have
\begin{align}\label{eq CLIL 1}
\varliminf_{\varepsilon \rightarrow 0+}\frac{\sup_{0<|t-s|\le \varepsilon} |u_{t}(x)-u_{s}(x)|}{\left(\varepsilon/ \ln\ln (1/\varepsilon)\right)^{H/2} }= C_{7, H, \kappa},\ \ \ a.s.; \end{align} and
\begin{align}\label{eq CLIL 2}
\varliminf_{\varepsilon \rightarrow 0+}\frac{\sup_{0<|x-y|\le \varepsilon} |u_{t}(x)-u_{t}(y)|}{\left(\varepsilon/ \ln\ln (1/\varepsilon)\right)^{H} }= C_{8, H, \kappa},\ \ \ a.s.. \end{align}
\end{corollary}
\section{The proof of Theorem \ref{thm main}}
\subsection{The proof of (a) in Theorem \ref{thm main}}
The method of proof is similar to those of \cite[Theorem 3.3]{K} and \cite[Proposition 3.1]{FKM}, but is more involved in the fractional noise case.
Choose and fix some $x\in \mathbb R$. For every $t,\varepsilon >0$, by \eqref{eq solut}, we have \begin{align*} &u_{t+\varepsilon}(x)-u_t(x)\notag\\ =& \int_0^t\int_{\mathbb R}[p_{t+\varepsilon-s}(x-y)-p_{t-s}(x-y)] W(ds, dy)+ \int_t^{t+\varepsilon}\int_{\mathbb R} p_{t+\varepsilon-s}(x-y) W(ds, dy). \end{align*}
Let \begin{align*}
J_1:=&\int_0^t\int_{\mathbb R}[p_{t+\varepsilon-s}(x-y)-p_{t-s}(x-y)] W(ds, dy),\\
J_2:=&\int_t^{t+\varepsilon}\int_{\mathbb R} p_{t+\varepsilon-s}(x-y) W(ds, dy). \end{align*} The construction of the Gaussian noise $W$, which is white in time, ensures that $J_1$ and $J_2$ are independent mean-zero Gaussian random variables. Thus, \begin{align*}
\mathbb E\left[|u_{t+\varepsilon}(x)-u_t(x)|^2 \right] = \mathbb E(J_1^2)+\mathbb E(J_2^2). \end{align*} Let us compute their variances respectively. First, we compute the variance of $J_2$ by noting that \begin{align*}
\mathbb E(J_2^2)=&c_{1, H}\int_t^{t+\varepsilon} \int_{\mathbb R}\left|\mathcal F p_{t+\varepsilon-s}(\xi) \right|^{2}|\xi|^{1-2H}ds d\xi\\
=& c_{1, H} \int_t^{t+\varepsilon} \int_{\mathbb R} e^{-\kappa ( t+\varepsilon-s) |\xi|^2}|\xi|^{1-2H}ds d\xi\\
=& c_{1, H} \int_0^{\varepsilon} \int_{\mathbb R} e^{-\kappa s |\xi|^2}|\xi|^{1-2H}ds d\xi. \end{align*} The change of variables $\tau:=\sqrt{\kappa s}\xi$ yields \begin{align}\label{eq J2}
\mathbb E(J_2^2)=& c_{1, H} \int_0^{\varepsilon} \int_{\mathbb R} e^{-|\tau|^2}|\tau|^{1-2H} (\kappa s)^{-1+H} ds d\tau\notag\\ =& c_{1, H}\Gamma(1-H) H^{-1}\kappa^{H-1}\varepsilon^{H}. \end{align} For the term $J_1$, we have \begin{align}\label{eq J1}
\mathbb E(J_1^2)=& c_{1, H}\int_0^{t} \int_{\mathbb R}\left|\mathcal F p_{t+\varepsilon-s}(\xi)- \mathcal F p_{t-s}(\xi)\right|^{2}|\xi|^{1-2H}ds d\xi\notag\\
=& c_{1, H}\int_0^{t} \int_{\mathbb R}\left|\mathcal F p_{s+\varepsilon}(\xi)- \mathcal F p_{s}(\xi)\right|^{2}|\xi|^{1-2H}ds d\xi\notag\\
=&c_{1, H}\int_0^{t} \int_{\mathbb R}e^{-\kappa s|\xi|^2}\left(1-e^{-\frac{\kappa \varepsilon |\xi|^2}{2}}\right)^2 |\xi|^{1-2H}ds d\xi. \end{align} This integral is hard to evaluate. By the change of variables and Lemma \ref{lem integ} in appendix, we have \begin{align}\label{eq J12}
\int_0^{\infty} \int_{\mathbb R}e^{-\kappa s |\xi|^2}\left(1-e^{-\frac{\kappa \varepsilon |\xi|^2}{2}}\right)^2 |\xi|^{1-2H}ds d\xi =& \kappa^{-1}\int_{\mathbb R} \left(1-e^{-\frac{\kappa \varepsilon |\xi|^2}{2}}\right)^2 |\xi|^{-1-2H} d\xi\notag\\
=& \varepsilon^H \kappa ^{H-1}\int_{\mathbb R}\left(1-e^{-\frac{ |\tau|^2}{2}}\right)^2 |\tau|^{-1-2H} d\tau\notag\\ =&\Gamma(1-H) H^{-1}(2^{1-H}-1)\kappa^{H-1}\varepsilon^H. \end{align} Therefore, \begin{align}\label{eq J13} \mathbb E(J_1^2)=& c_{1, H} \Gamma(1-H) H^{-1}(2^{1-H}-1)\kappa^{H-1}\varepsilon^H\notag\\
&- c_{1, H}\int_t^{\infty} \int_{\mathbb R}e^{-\kappa s |\xi|^2}\left(e^{-\frac{\kappa \varepsilon |\xi|^2}{2}}-1\right)^2 |\xi|^{1-2H}ds d\xi, \end{align} and hence \begin{align}\label{eq J3}
\mathbb E(J_1^2)+\mathbb E(J_2^2)= &c_{1, H}\Gamma(1-H) H^{-1} 2^{1-H} \kappa^{H-1}\varepsilon^{H}\notag\\
&- c_{1, H}\int_t^{\infty} \int_{\mathbb R}e^{-\kappa s |\xi|^2}\left(1-e^{-\frac{\kappa \varepsilon |\xi|^2}{2}}\right)^2 |\xi|^{1-2H}ds d\xi. \end{align} Let $\eta$ denote a white noise on $\mathbb R$ independent of $W$, and consider the Gaussian process $\{T_t\}_{t\ge0}$ defined by $$
T_t:=\left(\frac{c_{1,H}}{\kappa}\right)^{\frac12}\int_{-\infty}^{\infty} \left( 1- e^{-\frac{ \kappa t |\xi|^2}{2}}\right) |\xi|^{-\frac12-H}\eta(d\xi),\ \ t\ge0. $$ Then $\{T_t\}_{t\ge0}$ is a well-defined mean-zero Wiener integral process, $T_0=0$, and $$
{\rm Var} (T_t)=\frac{c_{1,H}}{\kappa}\int_{-\infty}^{\infty} \left( 1- e^{-\frac{ \kappa t |\xi|^2}{2}}\right)^2 |\xi|^{-1-2H}d\xi<\infty, \ \ \text{for all } t>0. $$ Furthermore, we note that for any $t,\varepsilon>0$, \begin{align}\label{eq T}
\mathbb E(|T_{t+\varepsilon}-T_t|^2)=&\frac{c_{1,H}}{\kappa} \int_{-\infty}^{\infty} \left( e^{-\frac{ \kappa t |\xi|^2}{2}}- e^{-\frac{ \kappa (t+\varepsilon) |\xi|^2}{2}}\right)^2 |\xi|^{-1-2H}d\xi\notag\\
=&\frac{c_{1,H}}{\kappa}\int_{-\infty}^{\infty} e^{- \kappa t |\xi|^2} \left( 1 - e^{-\frac{ \kappa \varepsilon |\xi|^2}{2}}\right)^2 |\xi|^{-1-2H}d\xi\notag\\
=& c_{1, H} \int_t^{\infty} \int_{-\infty}^{\infty} e^{- \kappa s |\xi|^2} \left( 1 - e^{-\frac{ \kappa \varepsilon |\xi|^2}{2}}\right)^2 |\xi|^{1-2H} dsd\xi.
\end{align} This is precisely the missing integral in \eqref{eq J3}. Therefore, by the independence of $T$ and $u$, we can rewrite \eqref{eq J3} as follows: \begin{align}
&\mathbb E\left(\left|(u_{t+\varepsilon}(x)+T_{t+\varepsilon})-(u_t(x)+T_t)\right|^2\right)\notag\\ =&c_{1, H}\Gamma(1-H) H^{-1}\kappa^{H-1} 2^{1-H}\varepsilon^{H}\notag\\ =&\frac{ \sin( H \pi )\Gamma(2H+1)\Gamma(1-H) }{2^H H \kappa^{1-H} \pi} \varepsilon^{H}\notag\\ =& \frac{2^{1-H}\Gamma(2H)}{\kappa^{1-H}\Gamma(H)}\varepsilon^{H}. \end{align} This implies that $$ X_t:=\left(\frac{2^{1-H}\Gamma(2H)}{\kappa^{1-H}\Gamma(H)}\right)^{-\frac12}(u_t(x)+T_t), \ \ \ t\ge0, $$ is a fBm with Hurst parameter $H/2$. Using the same argument as in the proof of \cite[Lemma 3.6]{K}, we know that the random process $\{T_t\}_{t\ge0}$ has a version that is infinitely differentiable on $(0, \infty)$.
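The last simplification of the constant is, let us note as a brief sketch, a consequence of two classical identities for the Gamma function: $\Gamma(2H+1)=2H\,\Gamma(2H)$ and, by Euler's reflection formula, $\sin(H\pi)\Gamma(1-H)=\pi/\Gamma(H)$, so that indeed \begin{equation*} \frac{ \sin( H \pi )\Gamma(2H+1)\Gamma(1-H) }{2^H H \pi} =\frac{2H\,\Gamma(2H)}{2^H H}\cdot\frac{\sin(H\pi)\Gamma(1-H)}{\pi} =\frac{2^{1-H}\Gamma(2H)}{\Gamma(H)}. \end{equation*} The same reflection-formula argument (with $2H$, respectively $2H-1$, in place of $H$) identifies the constants appearing at the end of the proof of part (b) below.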
\subsection{The proof of (b) in Theorem \ref{thm main}} This result has been proved in \cite[Proposition 3.1]{FKM} when $H=1/2$. We will prove it for the case of $H\neq 1/2$.
For any $t>0, x\in \mathbb R$, let
\begin{equation}\label{eq S}
S_t(x):=\int_t^{\infty} \int_{\mathbb R} \left[p_s(x-w)-p_s(w)\right]\zeta(ds,dw),
\end{equation}
where $p_t(x)$ is given by \eqref{eq p} and $\zeta$ is a Gaussian noise independent of $W$, which is white in time and fractional in the space variable with Hurst parameter $H$. By the argument in the proof of \cite[Lemma 3.6]{K}, we know that $\{S_t(x)\}_{x\in \mathbb R}$ admits a $C^{\infty}$-version for any $t>0$.
Next, we will prove that \begin{equation}\label{eq claim}
\mathbb E\left[\left|\left(u_t(x+\varepsilon)+S_t(x+\varepsilon) \right)-\left(u_t(x)+S_t(x) \right) \right|^2\right]= \kappa^{-1}\varepsilon^{2H}. \end{equation} Then \begin{equation}\label{aim} B(x):= \kappa^{1/2}(u_t(x)+S_t(x)),\ \ \ \ x\in\mathbb R, \end{equation} is a two-sided fBm with Hurst parameter $H$, and (b) in Theorem \ref{thm main} holds.
In the remaining part, we will prove \eqref{eq claim} for $H\in (0,1/2)$ and $H\in (1/2,1)$, respectively.
\subsubsection{$(0<H<1/2)$}
For any fixed $t>0$ and $\varepsilon>0$, by Plancherel's identity with respect to $y$ and the explicit formula for $\mathcal F p_t$, we have \begin{align}
&\mathbb E\left(|u_t(x+\varepsilon)-u_t(x)|^2 \right) \notag \\ =&c_{2, H}^2\int_0^t\int_{\mathbb R^2}\Big[(p_{t-s}(x+\varepsilon-y+z)-p_{t-s}(x-y+z))-(p_{t-s}(x+\varepsilon-y)-p_{t-s}(x-y))\Big]^2 \notag \\
& \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \times |z|^{2H-2}dsdydz\notag\\
=& \frac{1}{2\pi}c_{2,H}^2 \int_0^t \int_{\mathbb R^2} e^{-\kappa (t-s) |\xi|^2}\left|e^{i\xi z}-1\right|^2\left|e^{i \xi \varepsilon}-1\right|^2 |z|^{2H-2}ds d\xi dz. \end{align}
Since $|e^{i\xi z}-1|^2=2(1-\cos (\xi z))$ and for any $\alpha\in(0,1)$, \begin{equation}\label{eq iden1} \int_0^{\infty} \frac{1-\cos(\xi z)}{z^{1+\alpha}}dz= \alpha^{-1}\Gamma(1-\alpha)\cos(\pi\alpha/2) \xi^{\alpha}, \end{equation} (see \cite[Lemma D.1]{BJQ}), we have \begin{align}\label{uoff}
&\mathbb E\left(|u_t(x+\varepsilon)-u_t(x)|^2 \right) \notag \\
=&\frac{2 \Gamma(2H)\sin( H \pi)}{(1-2H) \pi}c_{2, H}^2 \int_0^t \int_{\mathbb R} e^{-\kappa (t-s)|\xi|^2}
\left|e^{i \xi \varepsilon}-1\right|^2 |\xi|^{1-2H}dsd\xi \notag \\
=&\frac{4\Gamma(2H)\sin( H \pi)}{(1-2H) \kappa\pi }c_{2, H}^2 \int_{\mathbb R}\left(1- e^{- \kappa t |\xi|^2}\right)\left(\frac{1-\cos (\xi \varepsilon)}{|\xi|^{1+2H}}\right)d\xi \notag \\ =& \frac{4\Gamma(2H)\sin( H \pi)}{(1-2H) \kappa\pi }c_{2, H}^2 \notag \\
&\ \ \ \times\left(\frac{\Gamma(1-2H)\cos( H \pi)\varepsilon^{2H}}{H} - \int_{\mathbb R} e^{- \kappa t |\xi|^2} \left(\frac{1-\cos (\xi \varepsilon)}{|\xi|^{1+2H}}\right)d\xi \right). \end{align} Recall $S_t(x)$ defined by \eqref{eq S}. Using the same techniques above, we have \begin{align}
& \mathbb E\left(|S_t(x+\varepsilon)-S_t(x)|^2 \right) \notag \\
=& \frac{2\Gamma(2H)\sin( H \pi)}{ (1-2H)\pi}c_{2, H}^2 \int_t^{\infty} \int_{\mathbb R} e^{-\kappa s |\xi|^2}
\left|e^{i \xi \varepsilon}-1\right|^2 |\xi|^{1-2H}ds d\xi\notag \\
=&\frac{4\Gamma(2H)\sin( H \pi)}{(1-2H) \kappa\pi }c_{2, H}^2 \int_{\mathbb R} e^{- \kappa t |\xi|^2} \left(\frac{1-\cos (\xi \varepsilon)}{|\xi|^{1+2H}}\right)d\xi, \end{align} which is exactly the missing integral in \eqref{uoff}. By the independence of $W$ and $\zeta$, we know that $u_t(x)$ and $S_t(x)$ are independent. Therefore, we have \begin{equation} \begin{aligned}
&\mathbb E\left[\left|\left(u_t(x+\varepsilon)+S_t(x+\varepsilon) \right)-\left(u_t(x)+S_t(x) \right) \right|^2\right]\\ =& \frac{\sin(2 H \pi)\Gamma(2H)\Gamma(1-2H) }{\kappa \pi }
\varepsilon^{2H}\\
=& \kappa^{-1} \varepsilon^{2H}. \end{aligned} \end{equation} \quad
\subsubsection{$(1/2<H<1)$} For any fixed $t>0$ and $\varepsilon>0$, \begin{align}
&\mathbb E\left(|u_t(x+\varepsilon)-u_t(x)|^2 \right) \notag \\ =&c_{3, H}^2\int_0^t\int_{\mathbb R^2} (p_{t-s}(x+\varepsilon-y+z)-p_{t-s}(x-y+z))(p_{t-s}(x+\varepsilon-y)-p_{t-s}(x-y)) \notag \\
& \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \times |z|^{2H-2}dsdydz. \end{align} Since $p_{t-s}(x+y)p_{t-s}(x)=\frac12\left[p^2_{t-s}(x+y)+p^2_{t-s}(x)-(p_{t-s}(x+y)-p_{t-s}(x))^2\right]$, by Plancherel's identity with respect to $x$ and the explicit formula for $\mathcal F p_t$, we have
$$ \int_{\mathbb R}\int_{\mathbb R} p_{t-s}(x+y)p_{t-s}(x)|y|^{2H-2} dxdy = \frac{1}{2\pi}\int_{\mathbb R}\int_{\mathbb R} e^{-\kappa (t-s) |\xi|^2} \cos(\xi y) |y|^{2H-2} d\xi dy.$$ Therefore, \begin{align}
&\mathbb E\left(|u_t(x+\varepsilon)-u_t(x)|^2 \right) \notag \\
=&\frac{1}{2\pi}c_{3, H}^2\int_0^t\int_{\mathbb R^2}e^{-\kappa (t-s) |\xi|^2}\left[ 2\cos(\xi z)-\cos(\xi(\varepsilon+z))-\cos(\xi(\varepsilon-z))\right]|z|^{2H-2}ds d\xi dz \notag \\
=&\frac1\pi c_{3, H}^2\int_0^t\int_{\mathbb R^2}e^{-\kappa (t-s) |\xi|^2}\cos(\xi z)(1-\cos(\xi \varepsilon))|z|^{2H-2}ds d\xi dz. \end{align} By formula $\left( 3.761-9 \right)$ of \cite {GR}, we know that \begin{equation}\label{eq formula} \int_0^{\infty} \frac{\cos(ax)}{x^{1-\mu}}dx=\frac{\Gamma(\mu)}{a^\mu}\cos(\pi\mu/2),\ \ \ \text{for any} \ \mu \in (0,1), a>0. \end{equation} Since $H \in (1/2,1)$, using \eqref{eq formula} with $\mu=2H-1,$ we have \begin{align}
&\mathbb E\left(|u_t(x+\varepsilon)-u_t(x)|^2 \right) \notag \\
=&\frac{2\Gamma(2H-1)\cos(\pi(2H-1)/2)}{\pi}c_{3, H}^2\int_0^t\int_{\mathbb R}e^{-\kappa (t-s) |\xi|^2} (1-\cos(\xi \varepsilon)) |\xi|^{1-2H}ds d\xi \notag \\
=&\frac{2\Gamma(2H-1)\sin( H \pi)}{\kappa \pi}c_{3, H}^2 \int_{\mathbb R}\left(1- e^{- \kappa t |\xi|^2}\right)\left(\frac{1-\cos (\xi \varepsilon)}{|\xi|^{1+2H}}\right)d\xi \notag \\ =&\frac{2\Gamma(2H-1)\sin( H \pi)}{\kappa \pi}c_{3, H}^2 \notag \\
&\ \ \ \times \left( -\frac{\Gamma(2-2H)\cos( H \pi)}{H(2H-1)}\varepsilon^{2H} - \int_{\mathbb R} e^{- \kappa t |\xi|^2} \left(\frac{1-\cos (\xi \varepsilon)}{|\xi|^{1+2H}}\right)d\xi \right), \end{align} where in the last equality we used the identity \begin{equation} \int_0^{\infty} \frac{1-\cos(\xi z)}{z^{1+\alpha}}dz= -\alpha^{-1}(\alpha-1)^{-1}\Gamma(2-\alpha)\cos(\pi\alpha/2) \xi^{\alpha},\ \ \ \ \alpha\in(1,2), \end{equation} (see \cite[Lemma D.1]{BJQ}). Recall $S_t(x)$ given by \eqref{eq S}. Using the same techniques above, we have \begin{align}
& \mathbb E\left(|S_t(x+\varepsilon)-S_t(x)|^2 \right) \notag \\
=& \frac{\Gamma(2H-1)\sin( H \pi)}{\pi}c_{3, H}^2 \int_t^{\infty} \int_{\mathbb R} e^{-\kappa s |\xi|^2}
\left|e^{i \xi \varepsilon}-1\right|^2 |\xi|^{1-2H}ds d\xi\notag \\
=&\frac{2\Gamma(2H-1)\sin( H \pi)}{\kappa \pi}c_{3, H}^2 \int_{\mathbb R} e^{- \kappa t |\xi|^2} \left(\frac{1-\cos (\xi \varepsilon)}{|\xi|^{1+2H}}\right)d\xi. \end{align} By the independence of $W$ and $\zeta$, we know that $u_t(x)$ and $S_t(x)$ are independent. Therefore, we have \begin{equation} \begin{aligned}
&\mathbb E\left[\left|\left(u_t(x+\varepsilon)+S_t(x+\varepsilon) \right)-\left(u_t(x)+S_t(x) \right) \right|^2\right]\\ =& \frac{- \sin(2H \pi) \Gamma(2H-1) \Gamma(2-2H) }{\kappa \pi} \varepsilon^{2H}\\ =&\kappa^{-1} \varepsilon^{2H}. \end{aligned} \end{equation}
\section{Appendix} \begin{lemma}\label{lem integ} The following identity holds: $$\int_0^{\infty}\left(e^{-\frac{x^2}{2}}-1\right)^2 x^{-1-2H} dx= \Gamma(1-H) H^{-1}(2^{-H}-2^{-1}).$$ \end{lemma} \begin{proof} The proof is inspired by Lemma A.1 in \cite{K}. By the change of variables $w=x^2/2$, we have \begin{align*} \int_0^{\infty}\left(e^{-\frac{x^2}{2}}-1\right)^2 x^{-1-2H} dx =& 2^{-1-H} \int_0^{\infty}\left(e^{-w}-1\right)^2 w^{-1-H} dw\notag\\ =& 2^{-1-H} (I_{0,1}-I_{1,2}), \end{align*} where $I_{a, b}:=\int_0^{\infty}\left( e^{-aw}- e^{-b w}\right) w^{-1-H} dw$ for all $a,b\ge0$. Since $e^{-aw}-e^{-b w}=w\int_a^b e^{-rw}dr$, we have \begin{align*} I_{a, b}= \int_0^{\infty}\int_a^b e^{-rw} w^{-H}drdw = \Gamma(1-H) \int_a^b r ^{-1+H}dr = \Gamma(1-H) H^{-1} (b^H-a^H). \end{align*} Thus, $I_{0,1}-I_{1,2}=\Gamma(1-H) H^{-1} (2-2^H)$, and the lemma follows. \end{proof}
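The identity above can also be checked numerically; the following short script (an illustrative sketch only, not part of the proof, with an arbitrary test value of $H$) compares a quadrature approximation of the left-hand side with the closed-form right-hand side, using standard scientific Python tools only for concreteness.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

H = 0.7  # any value in (0, 1); arbitrary test choice

# Left-hand side: numerical quadrature of the integral in the lemma above.
integrand = lambda x: (np.exp(-x**2 / 2) - 1)**2 * x**(-1 - 2 * H)
lhs, _ = quad(integrand, 0.0, np.inf)

# Right-hand side: the closed-form value Gamma(1-H) * H^{-1} * (2^{-H} - 2^{-1}).
rhs = gamma(1 - H) / H * (2**(-H) - 0.5)

print(lhs, rhs)  # the two numbers agree up to the quadrature error
\end{verbatim}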
\vskip0.5cm
\vskip0.5cm
\noindent{\bf Acknowledgments}: R. Wang is supported by NSFC (11871382), Chinese State Scholarship Fund Award by the China Scholarship Council and Youth Talent Training Program by Wuhan University.
\vskip0.5cm
\end{document}
\begin{document}
\begin{abstract} In this paper we consider the steady water wave problem for waves that possess a merely $L_r-$integrable vorticity, with $r\in(1,\infty)$ being arbitrary. We first establish the equivalence of the three formulations--the velocity formulation, the stream function formulation, and the height function formulation-- in the setting of strong solutions, regardless of the value of $r$. Based upon this result and using a suitable notion of weak solution for the height function formulation, we then establish, by means of local bifurcation theory, the existence of small amplitude capillary and capillary-gravity water waves with a $L_r-$integrable vorticity. \end{abstract}
\maketitle
\section{Introduction}\label{Sec:1}
We consider the classical problem of traveling waves that propagate at the surface of a two-dimensional inviscid and incompressible fluid of finite depth. Our setting is general enough to incorporate the case when the vorticity of the fluid is merely $L_r-$integrable, with $r>1$ being arbitrary. The existence of solutions of the Euler equations in ${\mathbb R}^n$ describing flows with an unbounded vorticity distribution has been addressed lately by several authors, cf. \cite{ Ke11, MB02, Vi00} and the references therein, whereas for traveling free surface waves in two-dimensions there are so far no existence results which allow for a merely $L_r$-integrable vorticity.
In our setting, the hydrodynamical problem is modeled by the steady Euler equations, to which we refer as the {\em velocity formulation}. For classical solutions and in the absence of stagnation points, there are two equivalent formulations, namely the {\em stream function} and the {\em height function} formulation, the latter being related to the semi-Lagrangian Dubreil-Jacotin transformation. This equivalence property stays at the basis of the existence results of classical solutions with general H\"older continuous vorticity, cf. \cite{CoSt04} for gravity waves and \cite{W06b, W06a} for waves with capillarity. Very recently, taking advantage of the weak formulation of the governing equations, it was rigorously established that there exist gravity waves \cite{CS11} and capillary-gravity waves \cite{CM13xx, MM13x} with a discontinuous and bounded vorticity. The waves found in the latter references are obtained in the setting of strong solutions when the equations of motion are satisfied in $L_r,$ $r>2$ in \cite{CS11}, respectively $L_\infty$ in \cite{CM13xx, MM13x}. The authors of \cite{CS11} also prove the equivalence of the formulations in the setting of $L_r$-solutions, under the restriction that $r>2$. Our first main result, Theorem \ref{T:EQ}, establishes the equivalence of the three formulations for strong solutions that possess Sobolev and weak H\"older regularity. For this we rely on the regularity properties of such solutions, cf. \cite{AC11, EM13x, MM13x}. This equivalence holds for gravity, capillary-gravity, and pure capillary waves without stagnation points and with a $L_r$-integrable vorticity, without making any restrictions on $r\in(1,\infty).$
The equivalence result Theorem \ref{T:EQ} stays at the basis of our second main result, Theorem \ref{T:MT}, where we establish the existence of small amplitude capillary and capillary-gravity water waves having a $L_r-$integrable vorticity distribution for any $r\in(1,\infty).$ On physical grounds, studying waves with an unbounded vorticity is relevant in the setting of small-amplitude wind generated waves, when capillarity plays an important role. These waves may possess a shear layer of high vorticity adjacent to the wave surface \cite{O82, PB74}, a fact which motivates us to consider unbounded vorticity distributions. {Moreover, an unbounded vorticity at the bed is also physically relevant, for example when describing turbulent flows along smooth channels (see the empirical law on page 109 in \cite{B62}).}
In contrast to the irrotational case when, in the absence of an underlying current, the qualitative features of the flow are well-understood
\cite{C12, Co06, DH07}, in the presence of an underlying, even uniform \cite{CoSt10, HH12, SP88}, current many aspects of the flow are more difficult to study, or are even untraceable, and one often has to rely on numerical simulations, cf. \cite{KO1, KO2, SP88}.
For example, by allowing for a discontinuous vorticity, the latter studies display the influence of a favorable or adverse wind on
the amplitude of the waves, or describe extremely high rotational waves and the flow pattern of waves with eddies.
The rigorous existence of waves with capillarity was obtained first in the setting of irrotational waves \cite{MJ89, JT85, JT86, RS81} and it was only recently extended to the setting of waves with constant vorticity and stagnation points \cite{ CM13a, CM13, CM13x} (see also \cite{CV11}).
In the context of waves with a general H\"older continuous \cite{W06b, W06a} or discontinuous \cite{CM13xx, MM13x} vorticity the existence results are obtained by using the height function formulation and concern only small amplitude waves without stagnation points.
Theorem \ref{T:MT}, which is the first rigorous existence result for waves with unbounded vorticity, is obtained by taking advantage of the weak interpretation of the height function formulation.
More precisely, recasting the nonlinear second-order boundary condition on the surface into a nonlocal and nonlinear equation of order zero
enables us to introduce the
notion of weak (which is shown to be strong) solution for the problem in a suitable analytic setting. By means of local bifurcation theory and ODE techniques we then find local real-analytic curves consisting, with the exception of a single laminar flow solution, only of non-flat symmetric capillary (or capillary-gravity) water waves. The methods we apply are facilitated by the presence of capillary effects (see e.g. the proof of Lemma \ref{L:4}), though they do not depend on the value of the surface tension coefficient; the existence question for pure gravity waves with unbounded vorticity is left as an open problem.
The outline of the paper is as follows: we present in Section \ref{Sec:2} the mathematical setting and establish the equivalence of the formulations in Theorems \ref{T:EQ}. We end the section by stating our main existence result Theorem \ref{T:MT}, the Section \ref{Sec:3} being dedicated to its proof.
\section{Classical formulations of the steady water wave problem and the main results}\label{Sec:2}
Following a steady periodic wave from a reference frame which moves in the same direction as the wave and with the same speed $c$,
the equations of motion are the steady-state Euler equations \begin{subequations}\label{eq:P}
\begin{equation}\label{eq:Euler} \left\{ \begin{array}{rllll} ({u}-c) { u}_x+{ v}{ u}_y&=&-{ P}_x,\\ ({ u}-c) { v}_x+{ v}{ v}_y&=&-{ P}_y-g,\\ { u}_x+{v}_y&=&0 \end{array} \right.\qquad \text{in $\Omega_\eta,$} \end{equation} with $x$ denoting the direction of wave propagation and $y$ being the height coordinate. We assumed that the free surface of the wave is the graph $y=\eta(x),$ that the fluid has constant unitary density, and that the flat fluid bed is located at $y=-d$. Hereby, $\eta$ has zero integral mean over a period and $d>0$ is the average mean depth of the fluid. Moreover, $\Omega_\eta $ is the two-dimensional fluid domain \[ \Omega_\eta:=\{(x,y)\,:\,\text{$ x\in\mathbb{S} $ and $-d<y<\eta(x)$}\}, \] with $\mathbb{S}:={\mathbb R}/(2\pi{\mathbb Z})$ denoting the unit circle. This notation expresses the $2\pi$-periodicity in $x $ of $\eta,$ of the velocity field $(u, v), $ and of the pressure $P$.
The equations \eqref{eq:Euler} are supplemented by the following boundary conditions \begin{equation}\label{eq:BC} \left\{ \begin{array}{rllll}
P&=&{P}_0-\sigma\eta''/(1+\eta'^2)^{3/2}&\text{on $ y=\eta(x)$},\\
v&=&({ u}-c) \eta'&\text{on $ y=\eta(x)$},\\
v&=&0 &\text{on $ y=-d$}, \end{array} \right. \end{equation} the first relation being a consequence of Laplace-Young's equation which states that the pressure jump across an interface is proportional to the mean curvature of the interface. We used $ P_0$ to denote the constant atmospheric pressure and $\sigma>0$ is the surface tension coefficient. Finally, the vorticity of the flow is the scalar function \begin{equation*} \omega:= { u}_y-{ v}_x\qquad\text{in $\Omega_\eta$.} \end{equation*} \end{subequations}
The velocity formulation \eqref{eq:P} can be re-expressed in terms of the stream function $\psi $, which is introduced via the relation $\nabla \psi=(-v,u-c)$ in $\Omega_\eta$, cf. Theorem \ref{T:EQ}, and it becomes a free boundary problem \begin{equation}\label{eq:psi} \left\{ \begin{array}{rllll} \Delta \psi&=&\gamma(-\psi)&\text{in}&\Omega_\eta,\\
\displaystyle|\nabla\psi|^2+2g(y+d)-2\sigma\frac{\eta''}{(1+\eta'^2)^{3/2}}&=&Q&\text{on} &y=\eta(x),\\ \psi&=&0&\text{on}&y=\eta(x),\\ \psi&=&-p_0&\text{on} &y=-d. \end{array} \right. \end{equation} Hereby, the constant $p_0<0$ represents the relative mass flux, $Q\in{\mathbb R}$ is related to the total head, and the function $\gamma:(p_0,0)\to{\mathbb R}$ is the vorticity function, that is \begin{equation}\label{vor} \omega(x,y)=\gamma(-\psi(x,y)) \end{equation} for $(x,y)\in \Omega_\eta.$ The equivalence of the velocity formulation \eqref{eq:P} and of the stream function formulation \eqref{eq:psi} in the setting of classical solutions without stagnation points, that is when
\begin{equation}\label{SC} u-c<0\qquad\text{in $\overline \Omega_\eta$} \end{equation} has been established in \cite{Con11, CoSt04}. We emphasize that the assumption \eqref{SC} is crucial when proving the existence of the vorticity function $\gamma$. Additionally, the condition \eqref{SC} guarantees in the classical setting considered in these references that the semi-hodograph transformation
$\Phi:\overline\Omega_\eta\to\overline\Omega$ given by \begin{equation}\label{semH} \Phi(x,y):=(q,p)(x,y):=(x,-\psi(x,y))\qquad \text{for $(x,y)\in\overline\Omega_\eta$}, \end{equation} where $\Omega:=\mathbb{S}\times(p_0,0),$ is a diffeomorphism. This property is used to show that the previous two formulations \eqref{eq:P} and \eqref{eq:psi} can be re-expressed in terms of the so-called
height function $h:\overline \Omega\to{\mathbb R}$ defined by \begin{equation}\label{hodo} h(q,p):=y+d \qquad\text{for $(q,p)\in\overline\Omega$}. \end{equation}
More precisely, one obtains a quasilinear elliptic boundary value problem \begin{equation}\label{PB} \left\{ \begin{array}{rllll} (1+h_q^2)h_{pp}-2h_ph_qh_{pq}+h_p^2h_{qq}-\gamma h_p^3&=&0&\text{in $\Omega$},\\ \displaystyle 1+h_q^2+(2gh-Q)h_p^2-2\sigma \frac{h_p^2h_{qq}}{(1+h_q^2)^{3/2}}&=&0&\text{on $p=0$},\\ h&=&0&\text{on $ p=p_0,$} \end{array} \right. \end{equation} the condition \eqref{SC} being re-expressed as \begin{equation}\label{PBC}
\min_{\overline \Omega}h_p>0. \end{equation}
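For the reader's orientation we record the elementary computation behind this change of variables (a formal sketch; in the weak setting it is carried out rigorously in the proof of Theorem \ref{T:EQ} below). Differentiating the identity $h(x,-\psi(x,y))=y+d$ with respect to $y$ and $x$ yields \begin{equation*} h_p=-\frac{1}{\psi_y}=\frac{1}{c-u}\qquad\text{and}\qquad h_q=-\frac{\psi_x}{\psi_y}=\frac{v}{u-c}, \end{equation*} the partial derivatives of $h$ being evaluated at $(q,p)=\Phi(x,y)$; in particular, for the continuous velocity fields considered here, \eqref{SC} and \eqref{PBC} encode the same condition.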
The equivalence of the three formulations \eqref{eq:P}, \eqref{eq:psi}, and \eqref{PB} of the water wave problem, when the vorticity is only $ L_r-$integrable, has not been established yet for the full range $r\in(1,\infty)$. In the context of strong solutions, when the equations of motion are assumed to hold in $L_r$, there is a recent result \cite[Theorem 2]{CS11} established in the absence of capillary forces. This result though is restricted to the case when $r>2,$ this condition being related to Sobolev's embedding $W^2_r\hookrightarrow C^{1+\alpha}$ in two dimensions. In the same context, but for solutions that possess weak H\"older regularity, there is a further equivalence result \cite[Theorem 1]{VZ12}, but again one has to restrict the range of H\"older exponents. Our equivalence result, cf. Theorem \ref{T:EQ} and Remark \ref{R:-2} below, is true for all $r\in(1,\infty)$ and was obtained in the setting of strong solutions that possess, additionally to Sobolev regularity, weak H\"older regularity, the
H\"older exponent being related in our context to Sobolev's embedding in only one dimension.
This result enables us to establish, cf. Theorem \ref{T:MT} and Remark \ref{R:0}, the existence of small-amplitude capillary-gravity and pure capillary water waves with $L_r-$integrable vorticity function for any $r\in(1,\infty).$
We denote in the following by $\mathop{\rm tr}\nolimits_0$ the trace operator with respect to the boundary component $p=0$ of $\overline\Omega,$ that is $\mathop{\rm tr}\nolimits_0v=v(\cdot,0)$ for all $v\in C(\overline\Omega).$ We shall also use several times the product formula \begin{equation}\label{PF} \qquad \partial(uv)=u\partial v+v\partial u\qquad\text{for all $u,v\in W^1_{1,loc}$ with $uv, u\partial v+v\partial u\in L_{1,loc},$} \end{equation} cf. relation (7.18) in \cite{GT01}.
\begin{thm}[Equivalence of the three formulations]\label{T:EQ} Let $r\in(1,\infty)$ be given and define $\alpha=(r-1)/r\in(0,1).$ Then, the following are equivalent \begin{itemize} \item[$(i)$] the height function formulation together with \eqref{PBC} for $h\in C^{1+\alpha}(\overline\Omega)\cap W^2_r(\Omega)$ such that $\mathop{\rm tr}\nolimits_0 h \in W^2_r(\mathbb{S})$, and $\gamma\in L_r((p_0,0))$; \item[$(ii)$] the stream function formulation for $\eta\in W^2_r(\mathbb{S})$, $\psi\in C^{1+\alpha}(\overline\Omega_\eta)\cap W^2_r(\Omega_\eta)$ satisfying $\psi_y<0$ in $\overline\Omega_\eta$, and $\gamma\in L_r((p_0,0))$; \item[$(iii)$] the velocity formulation together with \eqref{SC} for $u,v, P\in C^{\alpha}(\overline\Omega_\eta)\cap W^1_r(\Omega_\eta),$ and $\eta\in W^2_r(\mathbb{S}).$
\end{itemize}
\end{thm}
\begin{rem}\label{R:-2}
Our equivalence result is true for both capillary and capillary-gravity water waves.
Moreover, it is also true for pure gravity waves, in which case the proof is similar, with modifications only when proving that $(iii)$ implies $(i)$: instead of using \cite[Theorem 5.1]{MM13x} one has to rely on the corresponding regularity
result established for gravity waves, cf. Theorem 1.1 in \cite{EM13x}.
We emphasize also that the condition $\mathop{\rm tr}\nolimits_0 h \in W^2_r(\mathbb{S})$ requested at $(i)$ is not a restriction.
In fact, as a consequence of $h\in C^{1+\alpha}(\overline\Omega)\cap W^2_r(\Omega)$ being a strong solution of \eqref{PB}-\eqref{PBC} for $\gamma\in L_r((p_0,0))$, we know that the wave surface and all the other streamlines
are real-analytic curves,
cf. \cite[Theorem 5.1]{MM13x}
and \cite[Theorem 1.1]{EM13x}.
Particularly, $\mathop{\rm tr}\nolimits_0 h$ is a real-analytic function, i.e. $\mathop{\rm tr}\nolimits_0 h\in C^\omega(\mathbb{S})$.
Furthermore, in view of the same references, all weak solutions $h\in C^{1+\alpha}(\overline\Omega)$ of \eqref{PB}, cf. Definition \ref{D:1} (or \cite{CS11} for gravity waves), satisfy $\mathop{\rm tr}\nolimits_0 h \in W^2_r(\mathbb{S})$.
\end{rem}
\begin{proof}[Proof of Theorem \ref{T:EQ}] Assume first $(i)$ and let
\begin{align}\label{DEF}
d:=\frac{1}{2\pi}\int_0^{2\pi} \mathop{\rm tr}\nolimits_0 h\, dq\in(0,\infty)\qquad\text{and}\qquad \eta:=\mathop{\rm tr}\nolimits_0 h-d\in W^2_r(\mathbb{S}).
\end{align} We prove that there exists a unique function $\psi\in C^{1+\alpha}(\overline\Omega_\eta)$ with the property that \begin{align}\label{PP1}
y+d-h(x,-\psi(x,y))=0\qquad\text{for all $(x,y)\in\overline\Omega_\eta.$} \end{align}
To this end, let $H:\mathbb{S}\times{\mathbb R}\to{\mathbb R}$ be a continuous extension of $h$ to $\mathbb{S}\times{\mathbb R}$, having the property that $H(q,\cdot)\in C^1({\mathbb R})$ is strictly increasing and has a bounded derivative for all $q\in\mathbb{S}.$ Moreover, define the function $F:\mathbb{S}\times{\mathbb R}\times{\mathbb R}\to {\mathbb R}$ by setting
F(x,y,p)=y+d-H(x,p). \end{align*} For every fixed $x\in\mathbb{S}$, we have \[\text{$F(x,\cdot,\cdot)\in C^1({\mathbb R}\times{\mathbb R},{\mathbb R}),$ \quad $F(x,\eta(x),0)=0$,\quad and $F_p(x,\cdot,\cdot)=-H_p(x,\cdot)<0$. }\] Using the implicit function theorem, we find a $C^1-$function $\psi(x,\cdot):(\eta(x)-\varepsilon,\eta(x)+\varepsilon)\to{\mathbb R}$ with the property that \begin{align*}
F(x,y,-\psi(x,y))=0 \qquad\text{for all $y\in (\eta(x)-\varepsilon,\eta(x)+\varepsilon)$}. \end{align*} As $\psi_y(x,y)=1/F_p(x,y,-\psi(x,y))$, we deduce that $\psi(x,\cdot) $ is a strictly decreasing function which maps, due to the boundedness of $H_p(x,\cdot)$, bounded intervals onto bounded intervals. Therefore, $\psi(x,\cdot) $ can be defined on $(-\infty,\eta(x)]$. In view of $F(x,-d,p_0)=0,$ we get that $\psi(x,-d)=-p_0$ for each $x\in\mathbb{S}.$ Observe also that, due to the periodicity of $H$ and $\eta$, $\psi$ is $2\pi-$periodic with respect to $x$, while, because $F\in C^1(\mathbb{S}\times{\mathbb R}\times[p_0,0]),$ we have $\psi\in C^1(\Omega_\eta)$. Since the relation \eqref{PP1} is satisfied in $\overline\Omega_\eta$, it is easy to see now that in fact $\psi\in C^{1+\alpha}(\overline\Omega_\eta).$
In order to show that $\psi$ is the desired stream function, we prove that $\psi\in W^2_r(\Omega_\eta).$ Noticing that the relation \eqref{PP1} yields \begin{align}\label{RE1}
\psi_y(x,y)=-\frac{1}{h_p(x,-\psi(x,y))}\qquad\text{and}\qquad \psi_x(x,y)=\frac{h_q(x,-\psi(x,y))}{h_p(x,-\psi(x,y))} \end{align} in $\overline\Omega_\eta,$ the variable transformation \eqref{semH}, integration by parts, and the fact that $h$ is a strong solution of \eqref{PB} yield \begin{align*} \Delta\psi[\widetilde\phi]=& -\int_{\Omega_\eta}\left(\psi_y\widetilde\phi_y+\psi_x\widetilde\phi_x\right)\, d(x,y)=-\int_{\Omega}\left(h_q\phi_q-\frac{1+h_q^2}{h_p}\phi_p\right)\, d(q,p)\\
=&\int_\Omega\left(h_{qq}-\frac{2h_qh_{pq}}{h_p}+\frac{(1+h_q^2)h_{pp}}{h_p^2}\right)\phi\, d(q,p)=\int_\Omega(\gamma \phi)h_p\, d(q,p) \\ =&\int_{\Omega_\eta} \gamma(-\psi)\widetilde \phi \, d(x,y) \end{align*}
for all $\widetilde \phi\in C^\infty_0(\Omega_\eta),$ whereby we set $\phi:=\widetilde\phi\circ\Phi^{-1}.$ This shows that $\Delta\psi=\gamma(-\psi)\in L_r(\Omega_\eta)$. Taking into account that $\psi(x,y)=p_0(y-\eta(x))/(d+\eta(x))$ for $(x,y)\in\partial\Omega_\eta,$ whereby in fact $\eta\in C^\omega(\mathbb{S}),$ cf. Remark \ref{R:-2}, we find by elliptic regularity, cf. e.g. \cite[Theorems 3.6.3 and 3.6.4]{CW98}, that $\psi\in W^2_r(\Omega_\eta).$ It is also easy to see that $(\eta,\psi)$ satisfies the second relation of \eqref{eq:psi}, and this completes our arguments in this case.
We now show that $(ii)$ implies $(iii)$. To this end, we define
\begin{align}\label{PP2}
(u-c,v)&:=(\psi_y,-\psi_x)\qquad\text{and}\qquad P:=-\frac{|\nabla\psi|^2}{2}-g(y+d)-\Gamma(-\psi)+P_0+\frac{Q}{2},
\end{align}
where $\Gamma$ is given by
\begin{equation}\label{E:G}\Gamma(p):=\int_0^p\gamma(s)\, ds\qquad\text{for $p\in[p_0,0].$}\end{equation}
Clearly, we have that $u,v\in C^{\alpha}(\overline\Omega_\eta) \cap W^1_r(\Omega_\eta) $ and $\Gamma\in C^\alpha([p_0,0])\cap W^1_r((p_0,0)).$
Moreover, because $\psi\in C^{1+ \alpha}(\overline\Omega_\eta)\cap W^2_r(\Omega_\eta),$ the formula \eqref{PF} shows that $|\nabla\psi|^2\in W^1_r(\Omega_\eta),$ and therefore also $P\in C^{\alpha}(\overline\Omega_\eta)\cap W^1_r(\Omega_\eta).$ The boundary conditions \eqref{eq:BC} are easy to check. Furthermore, the conservation of mass equation is a direct consequence of the first relation of \eqref{PP2}. We are left with the conservation of momentum equations. Therefore, we observe the function $\Gamma(-\psi)$ is differentiable almost everywhere and its partial derivatives belong to $L_r(\Omega_\eta)$, meaning that $\Gamma(-\psi)\in W^1_r(\Omega_\eta)$, cf. \cite{DD12}, the gradient $\nabla(\Gamma(-\psi))$ being determined by the chain rule.
Taking now the weak derivative with respect to $x$ and $y$, respectively, in the second equation of \eqref{PP2}, we obtain, in view of \eqref{PF}, the conservation of momentum equations.
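For orientation, we record the (formal) computation behind the last step; the weak formulation follows the same lines. Differentiating the definition of $P$ in \eqref{PP2} with respect to $x$ and using $\nabla\psi=(-v,u-c)$ together with $\Delta \psi=\gamma(-\psi)$, we get \begin{equation*} -P_x=\psi_x\psi_{xx}+\psi_y\psi_{xy}-\gamma(-\psi)\psi_x =\psi_y\psi_{xy}-\psi_x\psi_{yy} =(u-c)u_x+vu_y, \end{equation*} which is the first equation of \eqref{eq:Euler}; the second momentum equation is obtained in the same way by differentiating with respect to $y$, the gravity term $-g(y+d)$ producing the additional $-g$.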
We now prove that $(iii)$ implies $(ii)$.
Thus, choose $u,v, P\in C^{\alpha}(\overline\Omega_\eta)\cap W^1_r(\Omega_\eta) $ and $\eta\in W^2_r(\mathbb{S}) $ such that $(\eta, u-c, v,P)$ is a solution of the velocity formulation.
We define
\begin{equation}
\psi(x,y):=-p_0+\int_{-d}^{y} (u(x,s)-c)\, ds\qquad\text{for $(x,y)\in\overline\Omega_\eta,$}
\end{equation} with $p_0$ being a negative constant.
It is not difficult to see that the function $\psi$ belongs to $ C^{1+\alpha}(\overline\Omega_\eta)\cap W^2_r(\Omega_\eta) $ and that it satisfies $\nabla\psi=(-v,u-c).$ The latter relation allows us to pick $p_0$ such that $\psi=0$ on $y=\eta(x).$ Also, we have that $\psi=-p_0$ on the fluid bed. We next show that the vorticity of the flow satisfies the relation \eqref{vor} for some $\gamma\in L_r((p_0,0)).$ To this end, we proceed as in \cite{ BM11} and use the property that the mapping $\Phi$ given by \eqref{semH} is an isomorphism of class $C^{1+\alpha}$ to compute that \begin{align*}
\partial_q (\omega\circ \Phi^{-1})[ \phi]=&\int_{\Omega_\eta} (v_x-u_y)((u-c) \widetilde\phi_x+v\widetilde\phi_y)\, d(x,y) \end{align*} for all $ \phi\in C^\infty_0(\Omega).$ Again, we set $\widetilde\phi :=\phi\circ\Phi\in C^{1+\alpha}_0(\overline\Omega_\eta).$ Since our assumption $(iii)$ implies that $(u-c)^2$ and $v^2$ belong to $W^1_r(\Omega_\eta)$, cf. \eqref{PF}, density arguments, \eqref{eq:Euler}, and integration by parts yield \begin{align*}
\partial_q (\omega\circ \Phi^{-1})[ \phi]=&\int_{\Omega_\eta} ((u-c)v_x+vv_y)\widetilde\phi_x\, d(x,y)-\int_{\Omega_\eta} ((u-c)u_x+vu_y)\widetilde\phi_y\, d(x,y)\\[1ex] =&-\int_{\Omega_\eta} (P_y+g)\widetilde\phi_x\, d(x,y)+\int_{\Omega_\eta} P_x\widetilde\phi_y\, d(x,y)=0. \end{align*} Consequently, there exists $\gamma\in L_r((p_0,0))$ with the property that $\omega\circ\Phi^{-1}=\gamma$ almost everywhere in $\Omega$. This shows that \eqref{vor} is satisfied in $L_r(\Omega_\eta).$ Next, we observe that the same arguments used when proving that $(ii)$ implies $(iii)$ yield that the energy \[
E:=P+\frac{|\nabla\psi|^2}{2}+g(y+d)+\Gamma(-\psi) \] is constant in $\overline\Omega_\eta.$ Defining $Q:=2(E-P_0),$ one can now easily see that $(\eta,\psi)$ satisfies \eqref{eq:psi}, and we have established $(ii)$.
In the final part of the proof we assume that $(ii)$ is satisfied and we prove $(i)$. Therefore, we let $h:\overline\Omega\to{\mathbb R}$ be the mapping defined by \eqref{hodo} (or equivalently \eqref{PP1}). Then, we get that $ h\in C^{1+\alpha}(\overline\Omega)$ verifies the relations \eqref{PP1} and \eqref{RE1}. Consequently, $\mathop{\rm tr}\nolimits_0 h\in W^2_r(\mathbb{S})$ and one can easily see that the boundary conditions of \eqref{PB} and \eqref{PBC} are satisfied. In order to show that $h$ belongs to $W^2_r(\Omega)$ and it also solves the first relation of \eqref{PB}, we observe that the first equation of \eqref{eq:psi} can be written in the equivalent form \begin{equation}\label{eq:psi2} (\psi_x\psi_y)_x+\frac{1}{2}\left(\psi_y^2-\psi_x^2\right)_y+(\Gamma(-\psi))_y=0\qquad\text{in $L_r(\Omega_\eta)$}. \end{equation} Therewith and using the change of variables \eqref{semH}, we find \begin{align*}
&\int_\Omega\frac{h_q}{h_p} \phi_q-\left(\Gamma+\frac{1+h_q^2}{2h_p^2}\right)\phi_p\, d(q,p)\\[1ex]
&=-\int_{\Omega_\eta}\left((\psi_x\psi_y)_x+\frac{1}{2}\left(\psi_y^2-\psi_x^2\right)_y+(\Gamma(-\psi))_y\right)\widetilde\phi\, d(x,y)=0, \end{align*} for all $\phi\in C^1_0(\Omega)$ and with $\widetilde\phi :=\phi\circ\Phi\in C^{1+\alpha}_0(\overline\Omega_\eta)$. Hence, $h\in C^{1+\alpha}(\overline\Omega)$ is a weak solution of the height function formulation, cf. Definition \ref{D:1}. We are now in the position to use the regularity result Theorem 5.1 in \cite{MM13x} which states that the distributional derivatives $\partial_q^mh$ also belong to $C^{1+\alpha}(\overline\Omega)$ for all $m\geq1.$ Particularly, setting $m=1$, we find that $h_p$ is differentiable with respect to $q$ and $\partial_q(h_p)=\partial_p(h_q)\in C^\alpha(\overline\Omega).$ Exploiting the fact that $h$ is a weak solution, we see that the distributional derivatives \[ \partial_q\left(\Gamma+\frac{1+h_q^2}{2h_p^2}\right),\quad \partial_p\left(\Gamma+\frac{1+h_q^2}{2h_p^2}\right)=\partial_q\left( \frac{h_q}{h_p}\right) \]
belong both to $C^\alpha(\overline\Omega)\subset L_r(\Omega).$ Additionally, $1+h_q^2\in C^{1+\alpha}(\overline\Omega)$ and regarding $\Gamma$ as an element of $ W^1_r(\Omega)$, we obtain
\[
\frac{1 }{h_p^2}\in W^1_r(\Omega).
\] Because $h$ satisfies \eqref{PBC} and recalling that $h_p$ is a bounded function, \cite[Theorem 7.8]{GT01} implies that $h_p\in W^1_r(\Omega).$ Hence, $h\in C^{1+\alpha}(\overline\Omega)\cap W^2_r(\Omega)$ and it is not difficult to see that $h$ satisfies the first equation of \eqref{PB} in $L_r(\Omega)$, cf. \eqref{PF}. This completes our arguments. \end{proof}
We now state our main existence result.
\begin{thm}[Existence result]\label{T:MT} We fix $r\in(1,\infty)$ , $p_0\in(-\infty,0),$ and define the H\"older exponent $\alpha:=(r-1)/r\in(0,1).$
We also assume that the vorticity function $\gamma$ belongs to $L_r((p_0,0)).$
Then, there exists a {positive integer $N$} such that for each {integer $n\geq N$}
there exists a local real-analytic curve $ {\mathcal{C}_{n}}\subset C^{1+\alpha}(\overline\Omega)$ consisting only of strong solutions of the problem
\eqref{PB}-\eqref{PBC}.
Each solution $h\in {\mathcal{C}_{n}}$, {$n\geq N,$} satisfies additionally
\begin{itemize}
\item[$(i)$] $h\in W^2_r(\Omega)$,
\item[$(ii)$] $h(\cdot,p)$ is a real-analytic map for all $p\in[p_0,0].$
\end{itemize}
Moreover, each curve ${\mathcal{C}_{n}}$ contains a laminar flow solution
and all the other points on the curve describe waves that have minimal period $2\pi/n $, only one crest and trough per period, and are symmetric with respect to the crest line. \end{thm}
\begin{rem}\label{R:0} While proving Theorem \ref{T:MT} we make no restriction on the constant $g$, meaning that the result is true for capillary-gravity waves but also in the context of capillary waves (when we set $g=0$).
Sufficient conditions which allow us to choose {$N=1$} in Theorem \ref{T:MT} can be found in Lemma \ref{L:9}.
Also, if $\gamma\in C((p_0,0)),$ the solutions found in Theorem \ref{T:MT} are classical as one can easily show that, additionally to the regularity properties stated in Theorem \ref{T:EQ},
we also have $h\in C^2(\Omega),$ $\psi\in C^2(\Omega_\eta)$, and $(u,v,P)\in C^1(\Omega_\eta)$. \end{rem}
\section{Weak solutions for the height function formulation}\label{Sec:3} This last section is dedicated to proving Theorem \ref{T:MT}. Therefore, we pick $r\in(1,\infty)$ and let $\alpha=(r-1)/r\in(0,1)$ be fixed in the remainder of this paper. The formulation \eqref{PB} is very useful when trying to determine classical solutions of the water wave problem \cite{W06b, W06a}. However, when the vorticity function belongs to $L_r((p_0,0)),$ $r\in(1,\infty),$ the curvature term and the lack of regularity of the vorticity function give rise to several difficulties when trying to consider the equations \eqref{PB} in a suitable (Sobolev) analytic setting. For example, the trivial solutions of \eqref{PB}, see Lemma \ref{L:LFS} below, belong merely to $W^2_r(\Omega)\cap C^{1+\alpha}(\overline\Omega).$ When trying to prove the Fredholm property of the linear operator associated to the linearization of the problem around these trivial solutions, one has to deal with an elliptic equation in divergence form with coefficients merely in $W^1_r(\Omega)\cap C^{\alpha}(\overline\Omega),$ cf. \eqref{L1}. The solvability of elliptic boundary value problems in $W^2_r(\Omega)$, however, requires in general more regularity from the coefficients. Also, the trace $\mathop{\rm tr}\nolimits_0 h_{qq}$ which appears in the second equation of \eqref{PB} is meaningless for functions in $W^2_r(\Omega).$
Nevertheless, using the fact that the operator $(1-\partial_q^2):H^2(\mathbb{S})\to L_2(\mathbb{S})$ is an isomorphism and the divergence structure of the first equation of \eqref{PB}, that is \[\left(\frac{h_q}{h_p}\right)_q-\left(\Gamma+\frac{1+h_q^2}{2h_p^2}\right)_p=0\qquad\text{in $\Omega$,}\] with $\Gamma$ being defined by the relation \eqref{E:G}, one can introduce the following definition of a weak solution of \eqref{PB}. \begin{defn}\label{D:1} A function $h\in C^{1}(\overline\Omega)$ which satisfies \eqref{PBC} is called a {\em weak solution} of \eqref{PB} if we have \begin{subequations}\label{WF} \begin{align} h+(1-\partial_q^2)^{-1}\mathop{\rm tr}\nolimits_0\left( \frac{\left(1+h_q^2+(2gh-Q)h_p^2\right)(1+h_q^2)^{3/2}}{2\sigma h_p^2}-h\right)=&0\qquad\text{on $p=0$;}\label{PB0}\\[1ex] h=&0\qquad\text{on $p=p_0$;}\label{PB1} \end{align} and if $h$ satisfies the following integral equation \begin{equation}\label{PB2}
\int_\Omega\frac{h_q}{h_p}\phi_q-\left(\Gamma+\frac{1+h_q^2}{2h_p^2}\right)\phi_p\, d(q,p)=0\qquad\text{for all $\phi\in C^1_0(\Omega)$.} \end{equation} \end{subequations} \end{defn}
Clearly, any strong solution $h\in C^{1+\alpha}(\overline\Omega)\cap W^2_r(\Omega)$ with $\mathop{\rm tr}\nolimits_0 h \in W^2_r(\mathbb{S})$ is a weak solution of \eqref{PB}. Furthermore, because of \eqref{PB0}, any weak solution of \eqref{PB} has additional regularity on the boundary component $p=0,$ that is $\mathop{\rm tr}\nolimits_0 h\in C^2(\mathbb{S}).$ The arguments used in the last part of the proof of Theorem \ref{T:EQ} show in fact that any weak solution $h$ which belongs to $C^{1+\alpha}(\overline\Omega)$ is a strong solution of \eqref{PB} (as stated in Theorem \ref{T:EQ} $(i)$).
The formulation \eqref{WF} has the advantage that it can be recast as an operator equation in a functional setting that enables us to use bifurcation results to prove existence of weak solutions. To present this setting, we introduce the following Banach spaces: \begin{align*}
X&\!:=\!\left\{\widetilde h\in C^{1+\alpha}_{2\pi/n}(\overline\Omega)\,:\, \text{$\widetilde h$ is even in $q$ and $\widetilde h\big|_{p=p_0}=0$}\right\},\\
Y_1&\!:=\!\{f\in\mathcal{D}'(\Omega)\,:\, \text{$f=\partial_q\phi_1+\partial_p\phi_2$ for $\phi_1,\phi_2\in C^\alpha_{2\pi/n}(\overline\Omega)$ with $\phi_1$ odd and $\phi_2$ even in $q$}\},\\
Y_2&\!:=\!\{\varphi\in C^{1+\alpha}_{2\pi/n}(\mathbb{S})\,:\, \text{$\varphi$ is even}\}, \end{align*} the positive integer $n\in{\mathbb N}$ being fixed later on. The subscript $2\pi/n$ is used to indicate $2\pi/n-$periodicity in $q$. We recall that $Y_1$ is a Banach space with the norm \[
\|f\|_{Y_1}:=\inf\{\|\phi_1\|_\alpha+\|\phi_2\|_\alpha\,:\, f=\partial_q\phi_1+\partial_p\phi_2\}. \] In the following lemma we determine all laminar flow solutions of \eqref{WF}. They correspond to waves with a flat surface $\eta=0$ and parallel streamlines.
\begin{lemma}[Laminar flow solutions]\label{L:LFS} Let $\Gamma_{M}:=\max_{[p_0,0]}\Gamma$. For every $\lambda\in(2\Gamma_M,\infty)$, the function $H(\cdot;\lambda)\in W^2_r((p_0,0))$ with \[ H(p;\lambda):=\int_{p_0}^p \frac{1}{\sqrt{\lambda-2\Gamma(s)}}\, ds\qquad\text{for $p\in[p_0,0]$} \] is a weak solution of \eqref{WF} provided that \[ Q=Q(\lambda):=\lambda+2g\int_{p_0}^0\frac{1}{\sqrt{\lambda-2\Gamma(p)}}\, dp. \] There are no other weak solutions of \eqref{WF} that are independent of $q$. \end{lemma} \begin{proof}
It readily follows from \eqref{PB2} that if $H$ is a weak solution of \eqref{WF} that is independent of the variable $q$, then $\left(2\Gamma+1/H_p^2\right)_p=0$ in $\mathcal{D}'((p_0,0)),$ that is, $2\Gamma+1/H_p^2$ is constant on $[p_0,0].$
The expression for $H$ is obtained now by using the relation \eqref{PB1}.
When verifying the boundary condition \eqref{PB0}, the relation $(1-\partial_q^2)^{-1}\xi=\xi$ for all $\xi\in{\mathbb R}$ yields that $Q$ has to be equal to $Q(\lambda).$ \end{proof}
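For the reader's orientation we record the simplest instance of this lemma; this elementary example is only an illustration and is not needed in the sequel. In the irrotational case $\gamma\equiv0$ we have $\Gamma\equiv0$, $\Gamma_M=0$, and the laminar flow solutions are simply
\[
H(p;\lambda)=\frac{p-p_0}{\sqrt{\lambda}}\qquad\text{for $p\in[p_0,0]$,}\qquad Q(\lambda)=\lambda-\frac{2gp_0}{\sqrt{\lambda}},\qquad \lambda\in(0,\infty).
\]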
Because $H(\cdot;\lambda)\in W^2_r((p_0,0))$, we can interpret $H(\cdot;\lambda)$, by means of Sobolev's embedding, as an element of $X.$ We are now in a position to reformulate the problem \eqref{WF} as an abstract operator equation. To this end, we introduce the nonlinear and nonlocal operator $\mathcal{F}:=(\mathcal{F}_1,\mathcal{F}_2):(2\Gamma_M,\infty)\times X\to Y:=Y_1\times Y_2$ by the relations \begin{align*}
\mathcal{F}_1(\lambda,{\widetilde h})\!:=\!&\left(\frac{{\widetilde h}_q}{H_p+{\widetilde h}_p}\right)_q-\left(\Gamma+\frac{1+{\widetilde h}_q^2}{2(H_p+{\widetilde h}_p)^2}\right)_p,\\
\mathcal{F}_2(\lambda,{\widetilde h})\!:=\!&\mathop{\rm tr}\nolimits_0{\widetilde h}+(1-\partial_q^2)^{-1}\mathop{\rm tr}\nolimits_0\!\left(\!\frac{\left(1+{\widetilde h}_q^2+(2g(H+{\widetilde h})-Q)
(H_p+{\widetilde h}_p)^2\right)(1+{\widetilde h}_q^2)^{3/2}}{2\sigma(H_p+{\widetilde h}_p)^2}-{\widetilde h}\!\right) \end{align*} for $(\lambda,{\widetilde h})\in (2 \Gamma_M,\infty)\times X,$ whereby $H=H(\cdot;\lambda)$ and $Q=Q(\lambda)$ are defined in Lemma \ref{L:LFS}. The operator $\mathcal{F}$ is well-defined and it depends real-analytically on its arguments, that is \begin{align}\label{BP0}
\mathcal{F}\in C^\omega((2\Gamma_M,\infty)\times X, Y). \end{align} With this notation, determining the weak solutions of the problem \eqref{PB} reduces to determining the zeros $(\lambda,{\widetilde h})$ of the equation \begin{align}\label{BP}
\mathcal{F}(\lambda,{\widetilde h})=0\qquad\text{in $Y$ } \end{align} for which ${\widetilde h}+H(\cdot;\lambda)$ satisfies \eqref{PBC}. From the definition of $\mathcal{F}$ we know that the laminar flow solutions of \eqref{PB} correspond to the trivial solutions of $\mathcal{F}$ \begin{align}\label{BP1}
\mathcal{F}(\lambda,0)=0\qquad\text{for all $\lambda\in(2 \Gamma_M,\infty).$} \end{align} Actually, if $(\lambda,{\widetilde h})$ is a solution of \eqref{BP}, the function $h:={\widetilde h}+H(\cdot;\lambda)\in X$ is a weak solution of \eqref{PB} when $Q=Q(\lambda)$, provided that ${\widetilde h}$ is sufficiently small in $C^1(\overline\Omega).$ In order to use the theorem on bifurcation from simple eigenvalues due to Crandall and Rabinowitz \cite{CR71} in the setting of \eqref{BP}, we need to determine special values of $\lambda$ for which the Fr\'echet derivative $\partial_{\widetilde h}\mathcal{F}(\lambda,0)\in\mathcal{L}(X,Y)$, defined by \[ \partial_{\widetilde h}\mathcal{F}(\lambda,0)[w]:=\lim_{\varepsilon\to0}\frac{\mathcal{F}(\lambda,\varepsilon w)-\mathcal{F}(\lambda,0)}{\varepsilon}\qquad\text{for $w\in X$,} \] is a Fredholm operator of index zero with a one-dimensional kernel. To this end, we compute that $\partial_{\widetilde h} \mathcal{F} (\lambda,0)=:(L,T)$ with $L\in\mathcal{L}(X,Y_1) $ and $T\in\mathcal{L}(X,Y_2)$ being given by \begin{equation}\label{L1} \begin{aligned}
Lw:=& \left(\frac{w_q}{H_p}\right)_q+\left(\frac{w_p}{H_p^3}\right)_p,\\
Tw:=&\mathop{\rm tr}\nolimits_0 w+(1-\partial_q^2)^{-1} \mathop{\rm tr}\nolimits_0 \left(\frac{gw-\lambda^{3/2}w_p}{\sigma}-w\right) \end{aligned}\qquad\quad\text{for $w\in X,$} \end{equation} and with $H=H(\cdot;\lambda)$ as in Lemma \ref{L:LFS}.
We now study the properties of the linear operator $\partial_{\widetilde h} \mathcal{F} (\lambda,0)$, $\lambda>2\Gamma_M.$ Recalling that $H\in C^{1+\alpha}([p_0,0]),$ we obtain together with \cite[Theorem 8.34]{GT01} the following result. \begin{lemma}\label{L:2}
The Fr\' echet derivative $\partial_{\widetilde h} \mathcal{F}(\lambda,0)\in\mathcal{L}(X,Y)$ is a Fredholm operator of index zero for each $\lambda\in(2\Gamma_M,\infty).$ \end{lemma} \begin{proof} See the proof of Lemma 4.1 in \cite{MM13x}. \end{proof}
In order to apply the previously mentioned bifurcation result, we need to determine special values for $\lambda$ such that the kernel of $\partial_{\widetilde h} \mathcal{F}(\lambda,0)$ is a subspace of $X$ of dimension one. To this end, we observe that if $0\neq w\in X$ belongs to the kernel of $\partial_{\widetilde h} \mathcal{F}(\lambda,0)$, the relation $Lw=0$ in $Y_1$ implies that, for each $k\in{\mathbb N},$ the Fourier coefficient \[
w_k(p):=\langle w(\cdot, p)|\cos(kn\cdot)\rangle_{L_2}:=\int_0^{2\pi} w(q,p)\cos(knq)\, dq\qquad\text{for $p\in[p_0,0]$} \] belongs to $C^{1+\alpha}([p_0,0])$ and solves the equation \begin{align}\label{EQ:M}
\left(\frac{w_k'}{H_p^3}\right)'-\frac{(kn)^2w_k}{H_p}=0\qquad\text{in $\mathcal{D}'((p_0,0)).$} \end{align} Additionally, multiplying the relation $Tw=0$ by $\cos(knq)$ we determine, by virtue of the symmetry of the operator $(1-\partial_q^2)^{-1},$ that is \begin{align*}
\langle f|(1-\partial_q^2)^{-1}g\rangle_{L_2}=\langle (1-\partial_q^2)^{-1}f|g\rangle_{L_2}\qquad\text{for all $f,g\in L_2(\mathbb{S})$}, \end{align*}
a further relation \[(g+\sigma (kn)^2)w_k(0)=\lambda^{3/2}w_k'(0).\] Finally, because $w\in X$, we get $w_k(p_0)=0$. Since $W^1_r((p_0,0))$ is an algebra for any $r\in(1,\infty),$ cf. \cite{A75}, it is easy to see that $w_k$ belongs to $ W^2_r((p_0,0))$ and that it solves the system \begin{equation}\label{E:m} \left\{ \begin{array}{rlll}
(a^3(\lambda) w')'-\mu a(\lambda)w&=&0 &\text{in $L_r((p_0,0))$,}\\
(g+\sigma\mu)w(0)&=&\lambda^{3/2}w'(0),\\
w(p_0)&=&0,
\end{array}\right. \end{equation} when $\mu=(kn)^2,$ where, for simplicity, we have set $a(\lambda):=a(\lambda;\cdot):=\sqrt{\lambda-2\Gamma}\in W^1_r((p_0,0)).$
Our task is to determine special values for $\lambda$ with the property that the system \eqref{E:m} has nontrivial solutions, which form a one-dimensional subspace of $W^2_r((p_0,0))$, { only for $\mu=n^2.$} Therefore, given $(\lambda,\mu)\in(2 \Gamma_M,\infty)\times[0,\infty),$ we introduce the Sturm-Liouville type operator $R_{\lambda,\mu}:W^2_{r,0} \to L_r((p_0,0))\times {\mathbb R}$ by \begin{equation*}
R_{\lambda,\mu}w:=
\begin{pmatrix}
(a^3(\lambda) w')'-\mu a(\lambda)w\\
(g+\sigma\mu)w(0)-\lambda^{3/2}w'(0)
\end{pmatrix}\qquad\text{for $w\in W^2_{r,0},$} \end{equation*} whereby $W^2_{r,0}:=\{w\in W^2_r((p_0,0))\,:\, w(p_0)=0\}.$ Additionally, for $(\lambda,\mu)$ as above, we let $v_i\in W^2_r((p_0,0))$, with
$v_i:=v_i(\cdot;\lambda,\mu)$, denote the unique solutions of the initial value problems \begin{equation}\label{ERU} \left\{\begin{array}{lll}
(a^3(\lambda) v_1')'-\mu a(\lambda)v_1=0\qquad \text{in $L_r((p_0,0))$},\\[1ex]
v_1(p_0)=0,\quad v_1'(p_0)=1,
\end{array} \right.
\end{equation} and \begin{equation}\label{ERUa}
\left\{\begin{array}{lll}
(a^3(\lambda)v_2')'-\mu a(\lambda)v_2=0\qquad \text{in $L_r((p_0,0))$},\\[1ex]
v_2(0)=\lambda^{3/2},\quad v_2'(0)=g+\sigma\mu.
\end{array}
\right. \end{equation} As in the bounded vorticity case $\gamma\in L_\infty((p_0,0))$ considered in \cite{MM13x}, we have the following property.
\begin{prop}\label{P:2} Given $(\lambda,\mu)\in(2\Gamma_M,\infty)\times[0,\infty),$ $R_{\lambda,\mu}$ is a Fredholm operator of index zero and its kernel is at most one-dimensional.
Furthermore, the kernel of $R_{\lambda,\mu}$ is one-dimensional exactly when the functions $v_i$, $i=1,2,$ given by \eqref{ERU} and \eqref{ERUa}, are linearly dependent.
In the latter case we have $\mathop{\rm Ker}\nolimits R_{\lambda,\mu}=\mathop{\rm span}\nolimits\{v_1\}.$ \end{prop} \begin{proof}
First of all, $R_{\lambda,\mu}$ can be decomposed as the sum $R_{\lambda,\mu}=R_I+R_c$, whereby
\[
R_Iw:=
\begin{pmatrix}
(a^3(\lambda)w')'-\mu a(\lambda)w\\
-\lambda^{3/2}w'(0)
\end{pmatrix} \qquad \text{and}\qquad R_cw:=
\begin{pmatrix}
0\\
(g+\sigma\mu) w(0)
\end{pmatrix}
\]
for all $w\in W^2_{r,0}.$ It is not difficult to see that $R_c$ is a compact operator. Next, we show that $R_I:W^2_{r,0} \to L_r((p_0,0))\times {\mathbb R}$ is an isomorphism.
Indeed, if $w\in W^2_{r,0}$ solves the equation $R_Iw=(f,A), $ with $ (f,A)\in L_r((p_0,0))\times {\mathbb R}$, then, since $W^2_r((p_0,0))\hookrightarrow C^{1+\alpha}([p_0,0]),$ we have
\begin{equation}\label{VF}
\int_{p_0}^0\left(a^3(\lambda)w'\varphi'+\mu a(\lambda)w\varphi\right)dp=-A\varphi(0)-\int_{p_0}^0 f\varphi\, dp
\end{equation}
for all $\varphi\in H_*:=\{w\in W^1_2((p_0,0))\,:\, w(p_0)=0\}$.
The right-hand side of \eqref{VF} defines a linear functional in $\mathcal{L}(H_*,{\mathbb R})$, while the left-hand side corresponds to a bounded bilinear and coercive functional on $H_*\times H_*.$ Therefore, the existence and uniqueness of a solution $w_*\in H_*$ of \eqref{VF} follows from the Lax-Milgram theorem, cf. \cite[Theorem 5.8]{GT01}. In fact, one can easily see that $w_*\in W^2_{r,0}$, so that $R_I$ is indeed an isomorphism.
That the kernel of $R_{\lambda,\mu}$ is at most one-dimensional can be seen from the observation that if $w_1,w_2\in W^2_r((p_0,0))$ are solutions of $(a^3(\lambda) w')'-\mu a(\lambda)w=0$, then \begin{equation}\label{BV}a^3(\lambda)(w_1w_2'-w_2w_1')=const. \qquad\text{in $[p_0,0]$}.\end{equation} In particular, if $w_1, w_2\in W^2_{r,0},$ we obtain, in view of $a(\lambda)>0 $ in $[p_0,0],$ that $w_1$ and $w_2$ are linearly dependent. To finish the proof, we notice that if the functions $v_1$ and $v_2$, given by \eqref{ERU} and \eqref{ERUa}, are linearly dependent, then they both belong to $\mathop{\rm Ker}\nolimits R_{\lambda,\mu}.$ Moreover, if $0\neq v\in \mathop{\rm Ker}\nolimits R_{\lambda,\mu},$ the relation \eqref{BV} yields that $v$ is collinear with both $v_1 $ and $v_2$, an argument which completes our proof.
\end{proof}
In view of Proposition \ref{P:2}, we are left to determine $(\lambda,\mu)\in(2\Gamma_M,\infty)\times[0,\infty)$ for which the Wronskian \[
W(p;\lambda,\mu):=\left| \begin{array}{lll}
v_1&v_2\\
v_1'&v_2' \end{array}
\right| \] vanishes on the entire interval $[p_0,0].$ Recalling \eqref{BV}, we arrive at the problem of determining the zeros of the function $W(0;\cdot,\cdot):(2\Gamma_M,\infty)\times [0,\infty)\to{\mathbb R}$ defined by \begin{equation}\label{DEFG} W(0;\lambda,\mu):=\lambda^{3/2}v_1'(0;\lambda,\mu)-(g+\sigma\mu)v_1(0;\lambda,\mu), \end{equation} which is real-analytic, since \eqref{ERU} and \eqref{ERUa} can be seen as initial value problems for first order ordinary differential equations. We emphasize that the methods used in \cite{CM13xx, MM13x, W06b, W06a} in order to study the solutions of $W(0;\cdot,\cdot)=0$ cannot be used for general $L_r-$integrable vorticity functions. Indeed, the approach {chosen in the context of classical $C^{2+\alpha}-$solutions} in \cite{W06b, W06a} is based on regarding the Sturm-Liouville problem \eqref{E:m} as a non standard eigenvalue problem (the boundary condition depends on the eigenvalue $\mu$). For this, the author of \cite{W06b, W06a} introduces a Pontryagin space with an indefinite inner product and uses abstract results pertaining to this setting. In our context such considerations are possible only when restricting to $r\geq 2.$ On the other hand, the methods used in \cite{CM13xx, MM13x} are based on direct estimates for the solution of \eqref{ERU}, but these estimates rely to a large extent on the boundedness of $\gamma.$ Therefore, we need to find a new approach when allowing for general $L_r-$integrable vorticity functions.
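Although the irrotational case is not the object of this paper, it may serve as an orientation for the analysis below; the following simple computation is only an illustration and is not used in the proofs. If $\gamma\equiv 0$, then $a(\lambda)\equiv\sqrt{\lambda}$ and \eqref{ERU} reduces to $v_1''=(\mu/\lambda)v_1$ with $v_1(p_0)=0$, $v_1'(p_0)=1$, so that, for $\mu>0$,
\[
 v_1(p;\lambda,\mu)=\sqrt{\frac{\lambda}{\mu}}\sinh\Big(\sqrt{\frac{\mu}{\lambda}}\,(p-p_0)\Big),\qquad p\in[p_0,0].
\]
Writing $d:=-p_0$, the function defined in \eqref{DEFG} is then given by
\[
W(0;\lambda,\mu)=\lambda^{3/2}\cosh\Big(d\sqrt{\frac{\mu}{\lambda}}\Big)-(g+\sigma\mu)\sqrt{\frac{\lambda}{\mu}}\sinh\Big(d\sqrt{\frac{\mu}{\lambda}}\Big),
\]
and the equation $W(0;\lambda,\mu)=0$ is equivalent to
\[
\lambda\sqrt{\mu}=(g+\sigma\mu)\tanh\Big(d\sqrt{\frac{\mu}{\lambda}}\Big),
\]
which may be recognized as a form of the classical dispersion relation for capillary-gravity waves, with $\sqrt{\mu}=kn$ playing the role of the wavenumber.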
{Our strategy is as follows: in a first step we find a constant $\lambda_0\geq 2\Gamma_M $ such that the function $ W(0;\lambda,\cdot)$ changes sign on $(0,\infty)$ for all $\lambda>\lambda_0,$ cf. Lemmas \ref{L:1} and \ref{L:4}. For this, the estimates established in Lemma \ref{L:3} within the setting of ordinary differential equations are crucial. In a second step, cf. Lemmas \ref{L:5} and \ref{L:6}, we prove that $ W(0;\lambda,\cdot)$ changes sign exactly once on $(0,\infty)$, the particular value where $ W(0;\lambda,\cdot)$ vanishes being called $\mu(\lambda).$ The properties of the mapping $\lambda\mapsto \mu(\lambda)$ derived in Lemma \ref{L:6} are the core of the analysis of the kernel of $\partial_{\widetilde h}\mathcal{F}(\lambda,0).$ }
As a first result, we state the following lemma. \begin{lemma}\label{L:1}
There exists a unique minimal $\lambda_0\geq 2\Gamma_M$ such that $W(0;\lambda,0)>0$ for all $\lambda>\lambda_0.$ \end{lemma} \begin{proof} First, we note that given $(\lambda,\mu)\in(2\Gamma_M,\infty)\times[0,\infty)$, the function $v_1$ satisfies the following integral relation \begin{equation}\label{v1}
v_1(p)=\int_{p_0}^p\frac{a^3(\lambda;p_0)}{a^3(\lambda;{s})}\, d{s}+\mu\int_{p_0}^p\frac{1}{a^3(\lambda;s)}\int_{p_0}^sa(\lambda;r)v_1(r)\, dr\, ds\qquad\text{for $p\in[p_0,0].$} \end{equation} Particularly, $v_1$ is a strictly increasing function on $[p_0,0]$. Furthermore, since $a(\lambda;0)=\lambda^{1/2},$ we get \begin{align*}
W(0;\lambda,0)=&a^3(\lambda;p_0)-g\int_{p_0}^0\frac{a^3(\lambda;p_0)}{a^3(\lambda;p)}\, dp=a^3(\lambda;p_0)\left(1-g\int_{p_0}^0\frac{1}{a^3(\lambda;p)}\, dp\right)\to_{\lambda\to\infty}\infty. \end{align*} This proves the claim. \end{proof}
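In the illustrative irrotational setting recorded after \eqref{DEFG} (and only there), the previous proof can be made completely explicit: $v_1(p;\lambda,0)=p-p_0$, hence $W(0;\lambda,0)=\lambda^{3/2}+gp_0$, so that $\lambda_0=(-gp_0)^{2/3}$ when $g>0$, while $\lambda_0=2\Gamma_M=0$ when $g=0$.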
We note that if $g=0,$ then $\lambda_0=2\Gamma_M.$ In the context of capillary-gravity water waves it is possible to choose, in the case of a bounded vorticity function, $\lambda_0>2\Gamma_M$ as being the unique solution of the equation $W(0;\lambda_0,0)=0$. In contrast, for certain unbounded vorticity functions $\gamma\in L_r((p_0,0)),$ with $r\in(1,\infty),$ the latter equation has no zeros in $(2\Gamma_M,\infty).$ Indeed, if we set $\gamma(p):=\delta(-p)^{-1/(kr)}$ for $p\in(p_0,0),$ where $\delta>0$ and $k,r\in(1,3) $ satisfy $kr<3,$ then $\gamma\in L_r((p_0,0))$ and, for sufficiently large $\delta $ (or small $p_0$), we have
\begin{align*}
\inf_{\lambda>2\Gamma_M} W(0;\lambda,0)>0.
\end{align*} This property leads to restrictions on the wavelength of the water waves bifurcating from the laminar flow solutions found in Lemma \ref{L:LFS}, cf. Proposition \ref{P:3}.
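For the reader's convenience, we verify the integrability claim made above by an elementary computation: for $\gamma(p)=\delta(-p)^{-1/(kr)}$ with $k>1$ we have
\[
\int_{p_0}^0|\gamma(p)|^r\, dp=\delta^r\int_{p_0}^0(-p)^{-1/k}\, dp=\frac{\delta^r(-p_0)^{1-1/k}}{1-1/k}<\infty,
\]
so that $\gamma\in L_r((p_0,0))$, although $\gamma$ is unbounded near $p=0$.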
The estimates below will be used in Lemma \ref{L:4} to bound the integral mean and the first order moment of the solution $v_1$ of \eqref{ERU} on intervals $[p_1(\mu),0]$ with $p_1(\mu)\nearrow0$ as $\mu\to\infty.$ \begin{lemma}\label{L:3}
Let $p_1\in(p_0,0)$, $A, B\in(0,\infty),$ and $(\lambda,\mu)\in(2\Gamma_M,\infty)\times[0,\infty)$ be fixed and define the positive constants
\begin{equation}\label{constants}
\begin{aligned}
&\underline C:=\min_{p\in[p_1,0]}\frac{a^3(\lambda;p_1)}{a^3(\lambda;p)},\quad \overline C:=\max_{p\in[p_1,0]}\frac{a^3(\lambda;p_1)}{a^3(\lambda;p)},\\
&\underline D:=\min_{s,p\in[p_1,0]}\frac{a(\lambda;s)}{a^3(\lambda;p)},
\quad \overline D:=\max_{s,p\in[p_1,0]}\frac{a(\lambda;s)}{a^3(\lambda;p)}.
\end{aligned}
\end{equation} Then, if $v\in W^2_r((p_1,0))$ is the solution of \begin{equation}\label{EEE} \left\{\begin{array}{lll}
(a^3(\lambda) v')'-\mu a(\lambda)v=0\qquad \text{in $L_r((p_1,0))$},\\[1ex]
v(p_1)=A,\quad v'(p_1)=B,
\end{array}
\right. \end{equation} we have the following estimates \begin{align}
\int_{p_1}^0v(p)\, dp\!\leq& -\frac{A\mu^{-1/2}\sinh(p_1\sqrt{\overline D}\mu^{1/2})}{\sqrt{\overline D}}+\frac{B\overline C\mu^{-1}\left(\cosh(p_1\sqrt{\overline D}\mu^{1/2})-1\right)}{\overline D},\label{FE1}\\[1ex]
\int_{p_1}^0(-p)v(p)\, dp\!\geq& \frac{A\mu^{-1 }\left(\cosh(p_1\sqrt{\underline D}\mu^{1/2})-1\right)}{\underline D}+\frac{B\underline C\mu^{-1}\!\left(\sqrt{\underline D}p_1 - \sinh(p_1\sqrt{\underline D}\mu^{1/2}) \mu^{-1/2}\right)}{\underline D^{3/2}}\label{FE2}. \end{align} \end{lemma} \begin{proof}
It directly follows from \eqref{EEE} that
\begin{align}\label{Der}
v'(p)=\frac{a^3(\lambda;p_1)}{a^3(\lambda;p)}B+\mu\int_{p_1}^p\frac{a (\lambda;s)}{a^3(\lambda;p)}v(s)\, ds\qquad\text{for all $p\in[p_1,0],$}
\end{align} and therefore \[ v'(p)\leq B\overline C+\mu \overline D\int_{p_1}^pv(s)\, ds \qquad\text{for $p\in[p_1,0],$} \] cf. \eqref{constants}. Letting now $\overline u:[p_1,0]\to{\mathbb R}$ be the function defined by \[ \overline u(p):=\int_{p_1}^pv(s)\, ds\qquad\text{for $p\in[p_1,0],$} \] we find that $\overline u\in W^3_r((p_1,0))$ solves the following problem \begin{equation*}
\overline u''-\mu \overline D\overline u\leq B\overline C\quad \text{in $(p_1,0)$},\qquad \overline u(p_1)=0,\, \, \overline u'(p_1)=A. \end{equation*} It is not difficult to see that $\overline u\leq \overline z$ on $[p_1,0],$ where $\overline z$ denotes the solution of the initial value problem \begin{equation*}
\overline z''-\mu \overline D \overline z= B\overline C\quad \text{in $(p_1,0)$},\qquad \overline z(p_1)=0,\, \, \overline z'(p_1)=A. \end{equation*} The solution $\overline z$ of this problem can be determined explicitly \begin{align*}
\overline z(p)=\frac{A\sinh(\sqrt{\overline D}\mu^{1/2}(p-p_1))}{\sqrt{\overline D\mu}}+\frac{B\overline C\left(\cosh(\sqrt{\overline D}\mu^{1/2}(p-p_1))-1\right)}{\overline D\mu}, \qquad p\in[p_1,0], \end{align*} which gives, in virtue of $\overline u(0)\leq \overline z(0),$ the first estimate \eqref{FE1}.
In order to prove the second estimate \eqref{FE2}, we first note that integration by parts leads us to \begin{align*} \int_{p_1}^0(-p)v(p)\,dp =\int_{p_1}^0\int_{p_1}^p v(s)\, ds\,dp, \end{align*} so that it is natural to define the function $\underline u:[p_1,0]\to{\mathbb R}$ by the relation \[ \underline u(p):=\int_{p_1}^p\int_{p_1}^rv(s)\, ds\, dr\qquad\text{for $p\in[p_1,0].$} \] Recalling \eqref{Der}, we find similarly as before that \[ v'(p)\geq B\underline C+\mu \underline D\int_{p_1}^pv(s)\, ds \qquad\text{for $p\in[p_1,0],$} \] and integrating this inequality over $(p_1,p)$, with $p\in(p_1,0)$, we get \begin{align*}
v(p)\geq A+B\underline C(p-p_1)+\mu \underline D\int_{p_1}^p\int_{p_1}^rv(s)\, ds\, dr \qquad\text{for $p\in[p_1,0].$} \end{align*} Hence, $\underline u\in W^4_r((p_1,0))$ solves the problem \begin{equation*}
\underline u''-\mu \underline D\,\underline u\geq A+B\underline C(p-p_1)\quad \text{in $(p_1,0)$},\qquad \underline u(p_1)=0,\, \, \underline u'(p_1)=0. \end{equation*} As the right-hand side of the above inequality is positive, we find that $\underline u\geq \underline z$ on $[p_1,0],$ where $\underline z$ stands now for the solution of the problem \begin{equation*}
\underline z''-\mu \underline D \, \underline z= A+B\underline C(p-p_1)\quad \text{in $(p_1,0)$},\qquad \underline z(p_1)=0,\, \, \underline z'(p_1)=0. \end{equation*} One can easily verify that $\underline z$ has the following expression \begin{align*}
\underline z(p)=&\frac{A\left(\cosh(\sqrt{\underline D}\mu^{1/2}(p-p_1))-1\right)}{ \underline D\mu }\\
&+\frac{B\underline C\left( \underline D^{-1/2}\sinh(\sqrt{\underline D}\mu^{1/2}(p-p_1))\mu^{-1/2}-(p-p_1)\right)}{\underline D\mu} \end{align*} for $ p\in[p_1,0] $, and, since $\underline u(0)\geq \underline z(0),$ we obtain the desired estimate \eqref{FE2}. \end{proof}
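For the reader's convenience we include the elementary verification of the formula for $\underline z$ used in the preceding proof (this check is not needed elsewhere): writing $c:=\sqrt{\underline D\mu}$ and $s:=p-p_1$, we have $c^2\underline z=A\left(\cosh(cs)-1\right)+B\underline C\left(c^{-1}\sinh(cs)-s\right)$, hence
\[
\underline z''=A\cosh(cs)+\frac{B\underline C}{c}\sinh(cs)\qquad\text{and}\qquad \underline z''-\mu\underline D\,\underline z=A+B\underline C(p-p_1),
\]
while $\underline z(p_1)=\underline z'(p_1)=0$.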
The estimates \eqref{FE1} and \eqref{FE2} are the main tools when proving the following result. \begin{lemma}\label{L:4}
Given $\lambda> 2\Gamma_M,$ we have that
\begin{align}\label{ES}
\lim_{\mu\to\infty} W(0;\lambda,\mu)=-\infty.
\end{align} \end{lemma} \begin{proof}
Recalling the relations \eqref{DEFG} and \eqref{v1}, we write $W(0;\lambda,\mu)=T_1+\mu T_2,$
whereby we defined
\begin{align*}
T_1&:=a^3(\lambda;p_0)\left(1-(g+\sigma\mu)\int_{p_0}^0\frac{1}{a^3(\lambda;p)}\, dp\right),\\
T_2&:=\int_{p_0}^0a(\lambda;p)v_1(p)\, dp-(g+\sigma\mu)\int_{p_0}^0\frac{1}{a^3(\lambda;s)}\int_{p_0}^sa(\lambda;r)v_1(r)\, dr\, ds. \end{align*} Because $a(\lambda)$ is a continuous and positive function that does not depend on $\mu$, it is easy to see that $T_1\to-\infty$ as $\mu\to\infty.$ In the remainder of this proof we show that \begin{equation}\label{QE1}
\lim_{\mu\to\infty} T_2=-\infty. \end{equation} In fact, since $a(\lambda)$ is bounded from below and from above on $[p_0,0]$ by positive constants, we see, by using integration by parts, that \eqref{QE1} holds provided that there exists a constant $\beta\in(0,1)$ such that \begin{equation}\label{QE2} \lim_{\mu\to\infty} \left( \int_{p_0}^0v_1(p)\, dp-\mu^{\beta}\int_{p_0}^0(-p)v_1(p)\, dp\right)=-\infty. \end{equation} We now fix $\beta\in(1/2,1)$ and prove that \eqref{QE2} is satisfied if we make this choice for $\beta.$ Therefore, we first choose $\gamma\in(1/2,\beta)$ with \begin{align}\label{ch}
\frac{2\beta-1}{2\gamma-1}=4. \end{align} Because for sufficiently large $\mu$ we have \begin{align*} \int_{p_0}^{-\mu^{-\gamma}}v_1(p)\, dp-\mu^{\beta}\int_{p_0}^{-\mu^{-\gamma}}(-p)v_1(p)\, dp\leq&\int_{p_0}^{-\mu^{-\gamma}}v_1(p)\, dp-\mu^{\beta}\int_{p_0}^{-\mu^{-\gamma}}\mu^{-\gamma}v_1(p)\, dp\\[1ex] =&(1-\mu^{\beta-\gamma})\int_{p_0}^{-\mu^{-\gamma}}v_1(p)\, dp\to_{\mu\to\infty}-\infty, \end{align*} we are left to show that \begin{align}\label{QE3} \limsup_{\mu\to\infty}\left(\int_ {-\mu^{-\gamma}}^0v_1(p)\, dp-\mu^{\beta}\int_{-\mu^{-\gamma}}^0(-p)v_1(p)\, dp\right)<\infty. \end{align} The difficulty of showing \eqref{QE2} is mainly caused by the fact that the function $v_1$ grows very fast with $\mu.$ However, because the volume of the interval of integration in \eqref{QE3} decreases also very fast when $\mu\to\infty$, the estimates derived in Lemma \ref{L:3} are accurate enough to establish \eqref{QE3}. To be precise, for all $\mu>(-1/p_0)^{1/\gamma}$, we set $p_1:=-\mu^{-\gamma}$, $A:=v_1(p_1),$ $B:=v_1'(p_1)$, and obtain that the solution $v_1$ of \eqref{EEE} satisfies \begin{align}\label{QE4} \int_ {-\mu^{-\gamma}}^0v_1(p)\, dp-\mu^{\beta}\int_{-\mu^{-\gamma}}^0(-p)v_1(p)\, dp\leq \frac{A \sinh(\sqrt{\overline D}\mu^{1/2-\gamma})}{\underline D\mu^{1/2}}E_1+\frac{B\underline C}{\underline D\mu}E_2, \end{align} whereby $A, B, \overline C,\underline C,\overline D,\underline D$ are functions of $\mu$ now, cf. \eqref{constants}, and \begin{align*}
E_1&:=\frac{\underline D}{\sqrt{\overline D}}- \mu^{\beta-1/2 }\frac{ \cosh( \sqrt{\underline D}\mu^{1/2-\gamma})-1 }{ \sinh(\sqrt{\overline D}\mu^{1/2-\gamma})},\\[1ex]
E_2&:= \frac{\overline C\underline D}{\underline C\overline D} \left(\cosh(\sqrt{\overline D}\mu^{1/2-\gamma})-1\right) -\mu^{\beta -\gamma}\left(\frac{\sinh(\sqrt{\underline D}\mu^{1/2-\gamma})}{\sqrt{\underline D}\mu^{1/2-\gamma}}-1\right). \end{align*} Recalling that $\gamma>1/2$ and that $A$, $B,$ $\overline C,\underline C,\overline D,\underline D$ are all positive, it suffices to show that $E_1$ and $E_2$ are negative when $\mu$ is large. In order to prove this property, we infer from \eqref{constants} that, as $\mu\to\infty,$ we have \[ \overline D\to \lambda^{-1},\qquad \underline D\to \lambda^{-1},\qquad \overline C\to 1, \qquad \underline C\to1. \] Moreover, using the substitution $t:=\sqrt{\underline D}\mu^{1/2-\gamma} $ and l'Hospital's rule, we find \begin{align*}
\lim_{\mu\to\infty}E_1=&\lambda^{-1/2}-\lim_{\mu\to\infty}\mu^{\beta-1/2 }\frac{ \cosh( \sqrt{\underline D}\mu^{1/2-\gamma})-1 }{ \sinh(\sqrt{\underline D}\mu^{1/2-\gamma})}\frac{\sinh(\sqrt{\underline D}\mu^{1/2-\gamma})}{ \sinh(\sqrt{\overline D}\mu^{1/2-\gamma})}\\[1ex]
=&\lambda^{-1/2}-\lim_{\mu\to\infty}\mu^{\beta-1/2 }\frac{ \cosh( \sqrt{\underline D}\mu^{1/2-\gamma})-1 }{ \sinh(\sqrt{\underline D}\mu^{1/2-\gamma})} =\lambda^{-1/2}-\frac{1}{\lambda^2} \lim_{t\searrow0}\frac{ \cosh( t)-1 }{ t^4\sinh(t)}=-\infty, \end{align*} cf. \eqref{ch}, and by similar arguments \begin{align*}
\lim_{\mu\to\infty}E_2=&-\lim_{\mu\to\infty}\mu^{\beta -\gamma}\left(\frac{\sinh(\sqrt{\underline D}\mu^{1/2-\gamma})}{\sqrt{\underline D}\mu^{1/2-\gamma}}-1\right)=-\frac{1}{\lambda^{3/2}} \lim_{t\searrow0}\frac{ \sinh( t)-t }{ t^4}=-\infty. \end{align*} Hence, the right-hand side of \eqref{QE4} is negative when $\mu$ is sufficiently large, fact which proves the desired inequality \eqref{QE3}. \end{proof}
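We note, purely for illustration, that a concrete admissible choice of the exponents in the preceding proof is $\beta=3/4$ and $\gamma=9/16$: then $\gamma\in(1/2,\beta)$ and $(2\beta-1)/(2\gamma-1)=(1/2)/(1/8)=4$, as required in \eqref{ch}.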
Combining Lemmas \ref{L:1} and \ref{L:4}, we see that, for each $\lambda>\lambda_0,$ the equation $W(0;\lambda,\cdot)=0$ has at least one solution $\mu\in(0,\infty).$ Concerning the sign of the first order derivatives $W_\lambda(0;\cdot,\cdot)$ and $W_\mu(0;\cdot,\cdot)$ at the zeros of $W(0;\cdot,\cdot)$, which will be used below to show that $W(0;\lambda,\cdot)$ has a unique zero for each $\lambda>\lambda_0$, the results established for a H\"older continuous \cite{W06b, W06a} or for a bounded vorticity function \cite{CM13xx, MM13x} extend also to the case of an $L_r$-integrable vorticity function, without making any restriction on $r\in(1,\infty).$
\begin{lemma}\label{L:5} Assume that $(\overline\lambda,\overline\mu)\in(\lambda_0,\infty)\times(0,\infty)$ satisfies $W(0; \overline\lambda,\overline\mu)=0.$ Then, we have \begin{align}\label{slim}
W_\lambda(0; \overline\lambda,\overline\mu)>0\qquad\text{and}\qquad W_\mu(0; \overline\lambda,\overline\mu)<0. \end{align} \end{lemma} \begin{proof}
Proposition \ref{P:2} and the discussion following it show that
$\mathop{\rm Ker}\nolimits R_{\overline\lambda,\overline\mu} =\mathop{\rm span}\nolimits\{v_1\}$, whereby $v_1:=v_1(\cdot;\overline\lambda,\overline\mu)$. To prove the first claim, we note that the algebra property of $W^1_r((p_0,0))$ yields that the partial derivative $v_{1,\lambda}:=\partial_\lambda v_{1}(\cdot,\overline\lambda,\overline\mu)$ belongs to $W^2_r((p_0,0))$ and solves the problem \begin{equation}\label{v1l}
\left\{\begin{array}{lll}
(a^{3}(\overline\lambda)v_{1,\lambda}')'-\overline\mu a(\overline\lambda) v_{1,\lambda}=
-(3a^2(\overline\lambda)a_{\lambda}(\overline\lambda)v_1')'+\overline\mu a_{\lambda}(\overline\lambda)v_1\qquad\text{in $ L_{r}((p_0,0))$,}\\[1ex]
v_{1,\lambda}(p_0)= v_{1,\lambda}'(p_0)=0, \end{array}\right. \end{equation} where $a_{\lambda}(\overline\lambda)=1/(2a(\overline\lambda))$. Because of the embedding $W^2_r((p_0,0))\hookrightarrow C^{1+\alpha}([p_0,0]),$ we find, by multiplying the differential equation satisfied by $v_1$, cf. \eqref{ERU}, with $v_{1,\lambda}$ and the first equation of \eqref{v1l} with $v_1$, and after subtracting the resulting relations the first claim of \eqref{slim} \begin{align*} W_{\lambda}(0;\overline\lambda,\overline\mu)&=\overline\lambda^{3/2}v_{1,\lambda}'(0)+\frac{3}{2}\overline\lambda^{1/2}v_1'(0)-(g+\sigma\overline\mu)v_{1,\lambda}(0)\\ &=\frac{1}{v_1(0)}\left(\int_{p_0}^{0}\frac{3a(\overline\lambda)}{2} v_1'^{ 2}+\frac{\overline\mu}{2a(\overline\lambda)}v_1^2\, dp\right)>0.\end{align*}
For the second claim, we find as above that $v_{1,\mu}:=\partial_\mu v_{1}(\cdot,\overline\lambda,\overline\mu)\in W^2_r((p_0,0))$ is the unique solution of the problem \begin{equation}\label{v1mu}
\left\{\begin{array}{lll}(a^{3}(\overline\lambda)v_{1,\mu}')'-\overline\mu a(\overline\lambda) v_{1,\mu}=a(\overline\lambda)v_1\qquad \text{in $ L_{r}((p_0,0))$},\\[1ex]
v_{1,\mu}(p_0)=v_{1,\mu}'(p_0)=0.
\end{array}\right. \end{equation} Also, if we multiply the differential equation satisfied by $v_1$ by $v_{1,\mu}$ and the first equation of \eqref{v1mu} by $v_1$, we get, after taking the difference of these relations, \begin{align*}
\int_{p_0}^0\!a(\overline\lambda)v_1^2\, dp=\overline\lambda^{3/2}\!v_{1,\mu}'(0)v_1(0)-\overline\lambda^{3/2}\!v_1'(0)v_{1,\mu}(0)=v_1(0)\left(\!\overline\lambda^{3/2}v_{1,\mu}'(0)-(g+\sigma\overline \mu)v_{1,\mu}(0)\!\right)\!, \end{align*} the last equality being a consequence of the fact that $v_1$ and $v_2:=v_2(\cdot;\overline\lambda,\overline\mu) $ are collinear for this choice of the parameters. Therefore, we have \begin{align}\label{qqq}
W_\mu(0;\overline\lambda,\overline\mu)=\overline\lambda^{3/2}v_{1,\mu}'(0)-\sigma v_1(0)-(g+\sigma\overline\mu)v_{1,\mu}(0)=\frac{1}{v_1(0)} \left(\int_{p_0}^0a(\overline\lambda)v_1^2\, dp-\sigma v_1^2(0)\right). \end{align} In order to determine the sign of the latter expression, we multiply the first equation of \eqref{ERU} by $v_1$ and get, by using once more the collinearity of $v_1$ and $v_2,$ that \[ \int_{p_0}^0a(\overline\lambda)v_1^2\, dp-\sigma v_1^2(0)=\frac{1}{\overline\mu}\left(gv_1^2(0)-\int_{p_0}^0a^3(\overline\lambda)v_1'^2\, dp\right). \] If $g=0$, the latter expression is negative and we are done. On the other hand, if we consider gravity effects, because $\overline\mu>0,$ it is easy to see that $a^{3/2}(\overline\lambda)v_1'$ and $a^{-3/2}(\overline\lambda)$ are linearly independent functions, a fact which, together with Lemma \ref{L:1} and H\"older's inequality, ensures that \begin{align*} gv_1^2(0)&=g\left(\int_{p_0}^0a^{3/2}(\overline\lambda)v_1'\frac{1}{a^{3/2}(\overline\lambda)}\, dp\right)^2\\ &<g\left(\int_{p_0}^0a^{3 }(\overline\lambda)v_1'^2 \, dp\right)\left(\int_{p_0}^0 \frac{1}{a^{3 }(\overline\lambda)}\, dp\right)\leq \int_{p_0}^0a^{3 }(\overline\lambda)v_1'^2 \, dp, \end{align*} and the desired claim follows from \eqref{qqq}.
We conclude with the following result. \begin{lemma}\label{L:6}
Given $\lambda>\lambda_0,$ there exists a unique $\mu=\mu(\lambda)\in (0,\infty)$ such that $W(0;\lambda,\mu(\lambda))=0.$
The function
\[\mu:(\lambda_0,\infty)\to(\inf_{(\lambda_0,\infty)}\mu(\lambda),\infty),\qquad \lambda\mapsto\mu(\lambda)\] is strictly increasing, real-analytic, and bijective. \end{lemma} \begin{proof} Given $\lambda>\lambda_0,$ it follows from the Lemmas \ref{L:1} and \ref{L:4} that there exists a constant $\mu(\lambda)>0$ such that $W(0;\lambda,\mu(\lambda))=0.$ The uniqueness of this constant, and the real-analyticity and the monotonicity of $\lambda\mapsto\mu(\lambda)$ follow readily from Lemma \ref{L:5} and the implicit function theorem. To complete the proof, let us assume that we found a sequence $\lambda_n\to\infty$ such that $(\mu(\lambda_n))_n$ is bounded. Denoting by $v_{1n}$ the (strictly increasing) solution of \eqref{ERU} when $(\lambda,\mu)=(\lambda_n,\mu(\lambda_n)),$ we infer from \eqref{v1} that there exists a constant $C>0$ such that \[ v_{1n}(p)\leq C\left(1+\int_{p_0}^pv_{1n}(s)\, ds\right)\qquad\text{for all $n\geq 1$ and $p\in[p_0,0].$} \] Gronwall's inequality yields that the sequence $(v_{1n})_n$ is bounded in $C([p_0,0])$ and, together with \eqref{v1}, we find that \[0=W(0;\lambda_n,\mu(\lambda_n))\geq a^3(\lambda_n;p_0)-(g+\sigma\mu(\lambda_n))v_{1n}(0)\underset{n\to\infty}\to\infty.\] This is a contradiction, and the proof is complete. \end{proof}
We now choose the integer $N$ from Theorem \ref{T:MT} to be {the smallest positive integer which} satisfies \begin{equation}\label{eq:rest} { N^2}>\inf_{(\lambda_0,\infty)}\mu(\lambda). \end{equation} Invoking Lemma \ref{L:6}, we {find a sequence $(\lambda_n)_{n\geq N}\subset (\lambda_0,\infty)$ having the properties that $\lambda_n\nearrow\infty$ and \begin{equation}\label{eq:sec}
\text{$\mu(\lambda_n)=n^2$ \qquad for all $n\geq N.$} \end{equation}}
We conclude the previous analysis with the following result. \begin{prop}\label{P:3}
Let {$N\in{\mathbb N}$ be defined by \eqref{eq:rest}.
Then, for each $n\geq N $, the Fr\'echet derivative
$\partial_{\widetilde h} \mathcal{F}(\lambda_n,0)\in\mathcal{L}(X,Y)$, with $\lambda_n$ defined by \eqref{eq:sec},
is a Fredholm operator of index zero with a one-dimensional kernel $\mathop{\rm Ker}\nolimits\partial_{\widetilde h} \mathcal{F}(\lambda_n,0)=\mathop{\rm span}\nolimits\{w_n\}$,
whereby $w_n\in X$ is the function
$w_n(q,p):=v_1(p;\lambda_n,n^2)\cos(nq)$ for all $(q,p)\in\overline\Omega.$} \end{prop} \begin{proof}
The result is a consequence of Lemmas \ref{L:2} and \ref{L:6}, and of Proposition \ref{P:2}. \end{proof}
In order to apply the theorem on bifurcations from simple eigenvalues to the equation \eqref{BP}, we still have to verify the transversality condition \begin{equation}\label{eq:TC}
\partial_{\lambda {\widetilde h}}\mathcal{F}(\lambda_n,0)[w_n]\notin\mathop{\rm Im}\nolimits \partial_{\widetilde h} \mathcal{F}(\lambda_n,0) \end{equation}
for $n\geq N.$
\begin{lemma}\label{L:TC}
The transversality condition \eqref{eq:TC} is satisfied for all {$n\geq N$}.
\end{lemma} \begin{proof}
The proof is similar to that of the Lemmas 4.4 and 4.5 in \cite{MM13x}, and therefore we omit it. \end{proof}
We come to the proof of our main existence result. \begin{proof}[Proof of Theorem \ref{T:MT}] {Let $N$ be defined by \eqref{eq:rest}, and let $(\lambda_n)_{n\geq N}\subset (\lambda_0,\infty)$ be the sequence defined by \eqref{eq:sec}.}
Invoking the relations \eqref{BP0} and \eqref{BP1}, Proposition \ref{P:3}, and Lemma \ref{L:TC}, we see
that all the assumptions of the theorem on bifurcations from simple eigenvalues of Crandall and Rabinowitz \cite{CR71}
are satisfied for the equation \eqref{BP} at each of the points $\lambda=\lambda_n,$ $n\geq N.$
Therefore, for each $n\geq N$, there exists $\varepsilon_n>0$ and a real-analytic curve \[\text{$(\widetilde \lambda_n,{\widetilde h}_n):(\lambda_n-\varepsilon_n,\lambda_n+\varepsilon_n)\to (2\Gamma_M,\infty)\times X,$ }\] consisting only of solutions of the problem
\eqref{BP}. Moreover, as $s\to0$, we have that \begin{equation}\label{asex} \widetilde\lambda_n(s)=\lambda_n+O(s)\quad \text{in ${\mathbb R}$},\qquad {\widetilde h}_n(s)=sw_n+O(s^2)\quad \text{in $X$}, \end{equation}
whereby $w_n\in X$ is the function defined in Proposition \ref{P:3}. Furthermore, in a neighborhood of $(\lambda_n,0),$ the solutions of \eqref{BP} are either laminar or are located on the local curve $(\widetilde\lambda_n,{\widetilde h}_n)$. The constants $\varepsilon_n$ are chosen sufficiently small to guarantee that $H(\cdot;\widetilde\lambda_n(s))+{\widetilde h}_n(s)$ satisfies \eqref{PBC} for all $|s|<\varepsilon_n$ and all $n\geq N.$ For each integer $n\geq N,$ the curve ${\mathcal{C}_{n}}$ mentioned in Theorem \ref{T:MT} is parametrized by $[s\mapsto H(\cdot;\widetilde\lambda_n(s))+{\widetilde h}_n(s)]\in C^\omega((-\varepsilon_n,\varepsilon_n),X).$
We now pick a function $h$ on one of the local curves ${\mathcal{C}_{n}}$. In order to show that this weak solution of \eqref{WF} belongs to $ W^2_r(\Omega)$, we first infer from Theorem 5.1 in \cite{MM13x} that the distributional derivatives $\partial_q^mh$ also belong to $C^{1+\alpha}(\overline\Omega)$ for all $m\geq1.$ Using the same arguments as in the last part of the proof of Theorem \ref{T:EQ}, we find that $h\in C^{1+\alpha}(\overline\Omega)\cap W^2_r(\Omega)$ satisfies the first equation of \eqref{PB} in $L_r(\Omega)$. Because $(1-\partial_q^2)^{-1}\in \mathcal{L}(C^\alpha(\mathbb{S}), C^{2+\alpha}(\mathbb{S}))$, the equation \eqref{PB0} yields that $\mathop{\rm tr}\nolimits_0 h\in C^{2+\alpha}(\mathbb{S})$, and therefore $h$ is a strong solution of \eqref{PB}. Moreover, by \cite[Corollary 5.2]{MM13x}, a result which shows that the regularity properties of the streamlines of classical solutions \cite{Hen10, DH12} persist even for weak solutions with merely integrable vorticity, $[q\mapsto h(q,p)]$ is a real-analytic map for any $p\in[p_0,0]$. Finally, because of \eqref{asex}, it is not difficult to see that any solution $h=H(\cdot;\widetilde \lambda_n(s))+{\widetilde h}_n(s)\in{\mathcal{C}_{n}},$ with $s\neq0 $ sufficiently small, corresponds to waves that possess a single crest per period and which are symmetric with respect to the crest (and trough) line.
As noted in the discussion following Lemma \ref{L:1}, when $r\in(1,3),$ there are examples of vorticity functions $\gamma\in L_r((p_0,0))$ for which the mapping $\lambda\mapsto\mu(\lambda)$ defined in Lemma \ref{L:6} is bounded away from zero on $(\lambda_0,\infty)$. This property imposes restrictions (through the { positive integer $N$}) on the wavelength of the water wave solutions bifurcating from the laminar flows, cf. Theorem \ref{T:MT}.
The lemma below gives, in the context of capillary-gravity waves, sufficient conditions which ensure that $\mu:(\lambda_0,\infty)\to(0,\infty)$ is a bijective mapping, which corresponds to the choice $N=1$ in Theorem \ref{T:MT}, a situation in which no restriction is needed. On the other hand, when considering pure capillary waves and if $\mu:(\lambda_0,\infty)\to(0,\infty)$ is a bijective mapping, then necessarily $\Gamma_M=\Gamma(p_0),$ and the problems \eqref{ERU} and \eqref{ERUa} become singular as $\lambda\to \lambda_0=2\Gamma_M.$ Therefore, finding sufficient conditions in this setting appears to be much more involved.
\begin{lemma}\label{L:9}
Let $r\geq3$, $\gamma\in L_r((p_0,0))$ and assume that $g>0$.
Then, $\lambda_0>2\Gamma_M$ and {the integer $N$ in Theorem \ref{T:MT} satisfies $N=1$}, provided that
\begin{align}\label{eq.condCG}
\int_{p_0}^0a(\lambda_0)\left(\int_{p_0}^p\frac{1}{a^3(\lambda_0;s)}\, ds\right)^2\, dp<\frac{\sigma}{g^2}.
\end{align} \end{lemma} \begin{proof}
Let us assume that $\Gamma(p_1)=\Gamma_M$ for some $p_1\in[p_0,0)$ (the case when $p_1=0$ is similar).
Then, if $\delta<1$ is such that $p_1+\delta<0,$ we have
\begin{align*}
\lim_{\lambda\searrow2\Gamma_M}\int_{p_0}^0 \frac{dp}{a^3(\lambda;p)}&=\lim_{\varepsilon\searrow 0}\int_{p_0}^0\frac{dp}{\sqrt{\varepsilon+2(\Gamma(p_1)-\Gamma(p))}^3}\geq
c\lim_{\varepsilon\searrow 0}\int_{p_1}^{p_1+\delta}\frac{dp}{\varepsilon^{3/2}+\left|\int_{p_1}^p\gamma(s)\, ds\right|^{3/2}}\\
& \geq c\lim_{\varepsilon\searrow 0}\int_{p_1}^{p_1+\delta}\!\frac{dp}{\varepsilon^{3/2}+\left\| \gamma \right\|_{L_r}^{3/2}|p-p_1|^{3\alpha/2}}\geq
c\lim_{\varepsilon\searrow 0}\int_{p_1}^{p_1+\delta}\!\frac{dp}{\varepsilon+p-p_1}=\infty
\end{align*} with $\alpha=(r-1)/r$ and with $c$ denoting positive constants that are independent of $\varepsilon$. We have used the relation $3\alpha/2\geq1$ for $r\geq3.$ In view of Lemma \ref{L:1}, we find that $\lambda_0>2\Gamma_M$ is the unique zero of $W(0;\cdot,0).$ Recalling now \eqref{qqq} and the relation \eqref{v1}, one can easily see, because of $W(0;\lambda_0,0)=0,$ that the condition \eqref{eq.condCG} yields $W_\mu(0;\lambda_0,0)<0$. Since Lemma \ref{L:6} implies $W(0;\lambda_0, \inf_{(\lambda_0,\infty)}\mu)=0,$ the relation $W_\mu(0;\lambda_0,0)<0$ together with Lemma \ref{L:5} guarantee that $\inf_{(\lambda_0,\infty)}\mu=0$. This proves the claim. \end{proof}
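As a simple consistency check, which is only meant as an illustration, we evaluate the condition \eqref{eq.condCG} in the irrotational setting recorded after \eqref{DEFG}: there $a(\lambda_0)\equiv\sqrt{\lambda_0}$ with $\lambda_0=(gd)^{2/3}$ and $d:=-p_0$, so that
\[
\int_{p_0}^0a(\lambda_0)\left(\int_{p_0}^p\frac{1}{a^3(\lambda_0;s)}\, ds\right)^2 dp=\frac{1}{\lambda_0^{5/2}}\int_{p_0}^0(p-p_0)^2\, dp=\frac{d^3}{3(gd)^{5/3}},
\]
and \eqref{eq.condCG} reduces to the explicit smallness condition $gd^4<27\sigma^3$.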
\end{document}
\begin{document}
\title{Variation formulas for an extended Gompf invariant}
\begin{abstract}
In 1998, R. Gompf defined a homotopy invariant $\theta_G$ of oriented 2-plane fields in 3-manifolds. This invariant is defined for oriented 2-plane fields $\xi$ in a closed oriented 3-manifold $M$ when the first Chern class $c_1(\xi)$ is a torsion element of $H^2(M;\b Z)$. In this article, we define an extension of the Gompf invariant for all compact oriented 3-manifolds with boundary and we study its iterated variations under Lagrangian-preserving surgeries. It follows that the extended Gompf invariant is a degree two invariant with respect to a suitable finite type invariant theory. \end{abstract}
\section*{Introduction}
\renewcommand{\thetheorem}{\arabic{theorem}}
\subsection*{Context} In \cite{gompf}, R. Gompf defined a homotopy invariant $\theta_G$ of oriented 2-plane fields in 3-manifolds. This invariant is defined for oriented 2-plane fields $\xi$ in a closed oriented 3-manifold $M$ when the first Chern class $c_1(\xi)$ is a torsion element of $H^2(M;\b Z)$. This invariant appears, for instance, in the construction of an absolute grading for the Heegaard-Floer homology groups, see \cite{GH}. Since the positive unit normal of an oriented 2-plane field of a Riemannian 3-manifold $M$ is a section of its unit tangent bundle $UM$, homotopy classes of oriented 2-plane fields of $M$ are in one-to-one correspondence with homotopy classes of sections of $UM$. Thus, the invariant $\theta_G$ may be regarded as an invariant of homotopy classes of nowhere zero vector fields, also called \textit{combings}. In that setting, the Gompf invariant is defined for \textit{torsion combings} of closed oriented 3-manifolds $M$, \textit{ie}\ combings $X$ such that the Euler class $e_2(X^\perp)$ of the normal bundle $X^\perp$ is a torsion element of $H^2(M;\b Z)$. \\
In \cite{lescopcombing}, C. Lescop proposed an alternative definition of $\theta_G$ using a Pontrjagin construction from the combing viewpoint. Here, we use a similar approach to show how to define Pontrjagin numbers for torsion combings by using pseudo-parallelizations, which are a generalization of parallelizations. This enables us to define a relative extension of the Gompf invariant for torsion combings in all compact oriented 3-manifolds with boundary. We also study the iterated variations under Lagrangian-preserving surgeries of this extended invariant and prove that it is a degree two invariant with respect to a suitable finite type invariant theory. In such a study, pseudo-parallelizations prove to be decisive since they are, in some sense, compatible with Lagrangian-preserving surgeries while genuine parallelizations are not.
\subsection*{Conventions} In this article, compact oriented 3-manifolds may have boundary unless otherwise mentioned. All manifolds are implicitly equipped with Riemannian structures. The statements and the proofs are independent of the chosen Riemannian structures. \\
If $M$ is an oriented manifold and if $A$ is a submanifold of $M$, let $TM$, resp. $TA$, denote the tangent bundles to $M$, resp. $A$, and let $NA$ refer to the orthogonal bundle to $A$ in $M$, which is canonically isomorphic to the normal bundle to $A$ in $M$. The fibers of $NA$ are oriented so that $NA \oplus TA = TM$ fiberwise and the boundaries of all compact manifolds are oriented using the outward normal first convention. \\
If $A$ and $B$ are transverse submanifolds of an oriented manifold $M$, their intersection is oriented so that $N(A\cap B) = NA \oplus NB$, fiberwise. Moreover, if $A$ and $B$ have comple\-mentary dimensions, \textit{ie}\ if $\mbox{dim}(A)+\mbox{dim}(B)=\mbox{dim}(M)$, let $\varepsilon_{A\cap B}(x) = 1$ if $x \in A\cap B$ is such that $T_xA\oplus T_xB = T_xM$ and $\varepsilon_{A\cap B}(x) = -1$ otherwise. If $A$ and $B$ are compact transverse submani\-folds of an oriented manifold $M$ with complementary dimensions, the \textit{algebraic intersection of $A$ and $B$ in $M$} is $$ \langle A, B \rangle_M = \sum_{x \in A\cap B} \varepsilon_{A\cap B}(x). $$
Let $L_1$ and $L_2$ be two rational cycles of an oriented $n$-manifold $M$. Assume that $L_1$ and $L_2$ bound two rational chains $\Sigma_1$ and $\Sigma_2$, respectively. If $L_1$ is transverse to $\Sigma_2$, if $L_2$ is transverse to $\Sigma_1$ and if $\mbox{dim}(L_1)+\mbox{dim}(L_2) = n-1$, then the \textit{linking number of $L_1$ and $L_2$ in $M$} is $$ lk_M(L_1,L_2) = \langle \Sigma_1 , L_2 \rangle_M = (-1)^{\tiny{n-\mbox{dim}(L_2)}}\langle L_1 , \Sigma_2 \rangle_M. $$
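For instance, with these conventions, if $K_1\cup K_2$ denotes the positive Hopf link in $\b S^3$, then $K_1$ bounds a disk that $K_2$ meets transversally in a single point with sign $+1$, so that $lk_{\b S^3}(K_1,K_2)=1$; this elementary illustration is recorded here only because it reappears in the example following Theorem~\ref{thm_D2nd0}.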
\subsection*{Setting and statements}
A \textit{combing} $(X,\sigma)$ of a compact oriented 3-manifold $M$ is a section $X$ of the unit tangent bundle $UM$ together with a nonvanishing section $\sigma$ of the restriction $X^\perp_{|\partial M}$ of the normal bundle $X^\perp$ to $\partial M$. For simplicity's sake, the section $\sigma$ may be omitted in the notation of a combing. For any combing $(X,\sigma)$, note that $\rho(X)=(X_{|\partial M}, \sigma, X_{|\partial M} \wedge \sigma)$, where $\wedge$ denotes the cross product, is a trivialization of $TM_{|\partial M}$. So, a combing of a compact oriented 3-manifold $M$ may also be seen as a pair $(X,\rho)$ where $X$ is a section of $UM$ that is the first vector of a trivialization $\rho$ of $TM_{|\partial M}$ together with this trivialization. \\
Two combings $(X,\sigma_X)$ and $(Y,\sigma_Y)$ of a compact oriented 3-manifold $M$ are said to be \textit{transverse} when the graph $X(M)$ is transverse to $Y(M)$ and $-Y(M)$ in $UM$. The combings $(X,\sigma_X)$ and $(Y,\sigma_Y)$ are said to be \textit{$\partial$-compatible} when $X_{|\partial M}= Y_{|\partial M}$, $\sigma_X = \sigma_Y$, $X(\mathring M)$ is transverse to $Y(\mathring M)$ and $-Y(\mathring M)$ in $UM$, and $$
\overline{X(\mathring M)\cap Y(\mathring M)} \cap UM_{|\partial M} =\emptyset. $$ \iffalse If $X(\mathring M)$ is transverse to $Y(\mathring M)$ and $-Y(\mathring M)$ in $UM$ and if $$
\overline{X(\mathring M)\cap Y(\mathring M)} \cap UM_{|\partial M} =\emptyset $$
then $(X,\sigma_X)$ and $(Y,\sigma_Y)$ are \textit{almost transverse}. Two combings $(X,\sigma_X)$ and $(Y,\sigma_Y)$ of $M$ are said to be \textit{coincide on $\partial M$} if $X_{|\partial M}= Y_{|\partial M}$ and $\sigma_X=\sigma_Y$. \fi When $(X,\sigma_X)$ and $(Y,\sigma_Y)$ are $\partial$-compatible, define two links $L_{X=Y}$ and $L_{X=-Y}$ as follows. First, let $P_M$ denote the projection from $UM$ to $M$ and set $$ L_{X=-Y} = P_M (X(M) \cap (-Y)(M)). $$ Second, there exists a link $L_{X=Y}$ in $\mathring{M}$ such that $$ P_M (X(M) \cap Y(M)) = \partial M \sqcup L_{X=Y}. $$
If $(X,\sigma)$ is a combing of a compact oriented 3-manifold $M$, its \textit{relative Euler class} $e_2^M(X^{\perp}\hspace{-1mm}, \sigma)$ in $H^2(M,\partial M;\b Z)$ is an obstruction to extending the section $\sigma$ as a nonvanishing section of $X^\perp$. This obstruction is such that its Poincaré dual $P(e_2^M(X^\perp, \sigma))$ is represented by the zero set of a generic section of $X^\perp$ extending $\sigma$. This zero set is oriented by its coorientation induced by the orientation of $X^\perp$. When $M$ is closed, the \textit{Euler class} $e_2(X^\perp)$ of $X$ is just this obstruction to finding a nonvanishing section of $X^\perp$. \\
A combing $(X,\sigma)$ of a compact oriented 3-manifold $M$ is a \textit{torsion combing} if $e_2^M(X^\perp,\sigma)$ is a torsion element of $H^2(M,\partial M ; \b Z)$, \textit{ie}\ $\left[e_2^M(X^\perp,\sigma)\right]=0$ in $H^2(M,\partial M ; \b Q)$. \\
Let $M_1$ and $M_2$ be two compact oriented 3-manifolds. The manifolds $M_1$ and $M_2$ are said to have \textit{identified boundaries} if a collar of $\partial M_1$ in $M_1$ and a collar of $\partial M_2$ in $M_2$ are identified. In this case, ${TM_1}_{|\partial M_1}= \b R n_1 \oplus T \partial M_1$ is naturally identified with ${TM_2}_{|\partial M_2}= \b R n_2 \oplus T \partial M_2$ by an identification that maps the outward normal vector field $n_1$ to $M_1$ to the outward normal vector field $n_2$ to $M_2$. \\
If $\tau_1$ and $\tau_2$ are parallelizations of two compact oriented 3-manifolds $M_1$ and $M_2$ with identified boundaries such that $\tau_1$ and $\tau_2$ coincide on $\partial M_1 \simeq \partial M_2$, then the \textit{first relative Pontrjagin number of $\tau_1$ and $\tau_2$} is an element $p_1(\tau_1,\tau_2)$ of $\b Z$ which corresponds to the Pontrjagin obstruction to extending a specific trivialization $\tau(\tau_1,\tau_2)$ of $TW \otimes \b C$ defined on the boundary of a cobordism $W$ from $M_1$ to $M_2$ with signature zero (see Subsection~\ref{ssec_defpara} or \cite[Subsection 4.1]{lescopcombing}). In the case of a parallelization $\tau$ of a closed oriented 3-manifold $M$, we get an absolute version. The \textit{Pontrjagin number $p_1(\tau)$ of $\tau$} is the relative Pontrjagin number $p_1(\tau_\emptyset,\tau)$ where $\tau_\emptyset$ is the parallelization of the empty set. Hence, for two parallelizations $\tau_1$ and $\tau_2$ of some closed oriented 3-manifolds, $$ p_1(\tau_1,\tau_2) = p_1(\tau_2) - p_1(\tau_1). $$
In \cite{lescopcombing}, using an interpretation of the variation of Pontrjagin numbers of parallelizations as an intersection of chains, C. Lescop showed that such a variation can be computed using only the first vectors of the parallelizations. This led her to the following theorem, which contains a definition of the \textit{Pontrjagin numbers} for torsion combings of closed oriented 3-manifolds. \begin{theorem}[{\cite[Theorem 1.2 \& Subsection 4.3]{lescopcombing}}] \label{thm_defp1X} Let $M$ be a closed oriented 3-manifold. There exists a unique map $$ p_1 \ : \ \lbrace \mbox{homotopy classes of torsion combings of } M \rbrace \longrightarrow \b Q $$ such that : \begin{enumerate}[(i)] \item for any combing $X$ on $M$ such that $X$ extends to a parallelization $\tau$ of $M$ : $$ p_1([X])=p_1(\tau), $$ \item if $X$ and $Y$ are two transverse torsion combings of $M$, then $$ p_1([Y])-p_1([X])= 4 \cdot lk(L_{X=Y},L_{X=-Y}). $$ \end{enumerate} Furthermore, $p_1$ coincides with the Gompf invariant : for any torsion combing $X$, $$ p_1([X])=\theta_G(X^\perp). $$ \end{theorem}
In this article, we study the variations of the Pontrjagin numbers of torsion combings of compact oriented 3-manifolds with respect to specific surgeries, called \textit{Lagrangian-preserving surgeries}, which are defined as follows. \\
A \textit{rational homology handlebody of genus $g\in \b N$}, or $\b Q$HH for short, is a compact oriented 3-manifold with the same homology with coefficients in $\b Q$ as the standard genus $g$ handlebody. Note that the boundary of a genus $g$ rational homology handlebody is homeomorphic to the standard closed connected oriented surface of genus $g$. The \textit{Lagrangian} of a $\b Q$HH $A$ is $$ \go L_A := \mbox{ker}\left(i^A_* : H_1(\partial A; \b Q) \longrightarrow H_1(A;\b Q)\right) $$ where $i^A$ is the inclusion of $\partial A$ into $A$. An \textit{LP$_\b Q$-surgery datum} in a compact oriented 3-manifold $M$ is a triple $(A,B,h)$, or $(\sfrac{B}{A})$ for short, where $A \subset M$, where $B$ and $A$ are rational homology handlebodies and where $h : \partial A \rightarrow \partial B$ is an identification homeomorphism, called \textit{LP$_\b Q$-identification}, such that $h_*(\go L_A)=\go L_B$. Performing the \textit{LP$_\b Q$-surgery} associated with the datum $(A,B,h)$ in $M$ consists in constructing the manifold : $$ M \left( \sfrac{B}{A} \right) = \left( M \setminus \mathring A \right) \ \bigcup_h \ B. $$
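Two elementary examples, which are only meant as illustrations and are not needed in the sequel, may help fix the ideas. First, taking $B=A$ and $h=\mathrm{id}_{\partial A}$ gives back $M(\sfrac{B}{A})=M$. Second, if $A$ is a closed ball in $\mathring M$, viewed as a genus $0$ handlebody, and $B$ is any rational homology ball, then $H_1(\partial A;\b Q)=0$, so the Lagrangian condition is void and any identification homeomorphism $h:\partial A\rightarrow\partial B$ is an LP$_\b Q$-identification; in this case $M(\sfrac{B}{A})$ is the connected sum of $M$ with the rational homology sphere obtained from $B$ by gluing back a ball.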
If $(M,X)$ is a compact oriented 3-manifold equipped with a combing, if $(A,B,h)$ is an LP$_\b Q$-surgery datum in $M$, and if $X_{B}$ is a combing of $B$ that coincides with $X$ on $\partial A \simeq \partial B$, then $(A,B,h,X_{B})$, or $(\sfrac{B}{A},X_B)$ for short, is an \textit{LP$_\b Q$-surgery datum in $(M,X)$}. Performing the \textit{LP$_\b Q$-surgery} associated with the datum $(A,B,h,X_{B})$ in $(M,X)$ consists in constructing the manifold $M \left( \sfrac{B}{A} \right)$ equipped with the combing : $$ X(\sfrac{B}{A}) = \left\lbrace \begin{aligned} & X & &\mbox{on $M\setminus \mathring A$}, \\ & X_{B} & &\mbox{on $B$.} \end{aligned} \right. $$
The main result of this article is a variation formula for Pontrjagin numbers -- see Theorem~\ref{thm_D2nd} below -- which reads as follows in the special case of compact oriented 3-manifolds without boundary. \begin{theorem} \label{thm_D2nd0} Let $(M,X)$ be a closed oriented 3-manifold equipped with a combing and let $\lbrace (\sfrac{B_i}{A_i},X_{B_i}) \rbrace_{i \in \lbrace 1,2 \rbrace}$ be two disjoint LP$_\b Q$-surgeries in $(M,X)$ (\textit{ie}\ $A_1$ and $A_2$ are disjoint). For all $I \subset \lbrace 1,2 \rbrace$, let $(M_I,X^I)$ be the combed manifold obtained by performing the surgeries associated to the data $ \lbrace (\sfrac{B_i}{A_i},X_{B_i})\rbrace_{i \in I}$. If $\lbrace X^I \rbrace_{I \subset \lbrace 1,2 \rbrace}$ is a family of torsion combings of the $\lbrace M_I \rbrace_{I\subset \lbrace 1,2 \rbrace}$, then $$
\sum_{I\subset\lbrace 1 , 2 \rbrace} (-1)^{|I|} p_1([X^I]) = - 2 \cdot lk_M \left(L_{\lbrace X^I \rbrace}(\sfrac{B_1}{A_1}), L_{\lbrace X^I \rbrace}(\sfrac{B_2}{A_2})\right), $$ where the right-hand side of the equality is defined as follows. For all $i \in \lbrace 1 , 2 \rbrace$, let $$ H_1(A_i;\b Q) \stackrel{i^{A_i}_*}{\longleftarrow} \frac{H_1(\partial A_i;\b Q)}{\go L_{A_i}} \stackrel{h_*}{=} \frac{H_1(\partial B_i;\b Q)}{\go L_{B_i}} \stackrel{i^{B_i}_*}{\longrightarrow} H_1(B_i;\b Q) $$
be the sequence of isomorphisms induced by the inclusions $i^{A_i}$ and $i^{B_i}$. There exists a unique homology class $L_{\lbrace X^I\rbrace}(\sfrac{B_i}{A_i})$ in $H_1(A_i; \b Q)$ such that for any nonvanishing section $\sigma_i$ of $X^\perp_{|\partial A_i}$~: $$
L_{\lbrace X^I \rbrace}(\sfrac{B_i}{A_i}) = i^{A_i}_* \circ (i^{B_i}_*)^{-1}\left(P\big(e_2^{B_i}(X_{B_i}^\perp, \sigma_i)\big)\right) - P\big(e_2^{A_i}(X^\perp_{|A_i}, \sigma_i)\big), $$ where $P$ stands for Poincaré duality isomorphisms from $H^2(A_i,\partial A_i;\b Q)$ to $H_1(A_i;\b Q)$ or from $H^2(B_i,\partial B_i;\b Q)$ to $H_1(B_i;\b Q)$. Furthermore, the homology classes $L_{\lbrace X^I \rbrace}(\sfrac{B_1}{A_1})$ and $L_{\lbrace X^I \rbrace}(\sfrac{B_2}{A_2})$ are mapped to zero in $H_1(M;\b Q)$ and the map $$ lk_M : \mbox{\textup{ker}}\big(H_1(A_1;\b Q) \rightarrow H_1(M; \b Q)\big) \times \mbox{\textup{ker}}\big(H_1(A_2;\b Q) \rightarrow H_1(M; \b Q)\big) \longrightarrow \b Q $$ is well-defined. \end{theorem}
\begin{exampleemempty} Consider $\b S^3$ equipped with a parallelization $\tau : \b S^3 \times \b R^3 \rightarrow T\b S^3$ which extends the standard parallelization of the unit ball. In this ball, consider a positive Hopf link and let $A_1 \sqcup A_2$ be a tubular neighborhood of this link. Let $X$ be the combing $\tau(e_1)=\tau(.,e_1)$, where $e_1=(1,0,0)\in \b S^2$, and let $B_1 = A_1$ and $B_2=A_2$. Identify $A_1$ and $A_2$ with $\b D^2 \times \b S^1$ and consider a smooth map $g : \b D^2 \rightarrow \b S^2$ such that $g(\partial \b D^2) = e_1$, and such that $-e_1$ is a degree 1 regular value of $g$ with a single preimage $\omega$. Finally, for $i \in \lbrace 1,2 \rbrace$, let $X_{B_i}$ be the combing : $$ X_{B_i} : \left\lbrace \begin{aligned} \b D^2 \times \b S^1 &\longrightarrow UM \\ (z,u) &\longmapsto \tau((z,u),g(z)). \end{aligned} \right. $$
In this case, $L_{X(\lbrace \sfrac{B_i}{A_i} \rbrace_{i \in \lbrace 1,2 \rbrace})=-X} = L_{X_{B_1}=-X_{|A_1}} \cup L_{X_{B_2}=-X_{|A_2}}$, and, for $i \in \lbrace 1,2 \rbrace$, using the identification of $A_i$ with $\b D^2 \times \b S^1$, the link $L_{X_{B_i}=-X_{|A_i}}$ reads $\lbrace \omega \rbrace \times \b S^1$. As we will see in Proposition~\ref{prop_linksinhomologyI}, for $i \in \lbrace 1,2 \rbrace$, $L_{\lbrace X^I \rbrace}(\sfrac{B_i}{A_i}) = 2 [L_{X_{B_i}=-X_{|A_i}}]$. Finally, $$
\sum_{I\subset\lbrace 1 , 2 \rbrace} (-1)^{|I|} p_1([X^I]) = -8. $$ \end{exampleemempty}
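To see how the value $-8$ arises, here is a short computation sketch, which only uses the bilinearity of the linking number and the fact that the two components of the positive Hopf link have linking number $+1$ in $\b S^3$ : since each link $\lbrace \omega \rbrace \times \b S^1$ is isotopic in $A_i$ to the corresponding component of the Hopf link, $$ \sum_{I\subset\lbrace 1 , 2 \rbrace} (-1)^{|I|} p_1([X^I]) = - 2 \cdot lk_{\b S^3}\left( 2\, [L_{X_{B_1}=-X_{|A_1}}] , 2\, [L_{X_{B_2}=-X_{|A_2}}] \right) = - 8 \cdot lk_{\b S^3}\left( L_{X_{B_1}=-X_{|A_1}} , L_{X_{B_2}=-X_{|A_2}} \right) = -8. $$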
In general, for an LP$_\b Q$-surgery datum $(\sfrac{B}{A})$ in a compact oriented 3-manifold $M$, a trivialization of $TM_{|(M\setminus \mathring A)}$ cannot be extended as a parallelization of $M(\sfrac{B}{A})$. It follows that LP$_\b Q$-surgeries cannot be expressed as local moves on parallelized compact oriented 3-manifolds. This makes computing the variation of Pontrjagin numbers of torsion combings under LP$_\b Q$-surgeries tricky since Pontrjagin numbers of torsion combings are defined with respect to Pontrjagin numbers of parallelizations. \\
However, if $M$ is a compact oriented 3-manifold and if $\rho$ is a trivialization of $TM_{|\partial M}$, then the obstruction to finding a parallelization of $M$ which coincides with $\rho$ on $\partial M$ is an element of $H^2(M,\partial M; \sfrac{\b Z}{2\b Z})$ -- hence, its Poincaré dual is an element $[\gamma]$ of $H_1(M;\sfrac{\b Z}{2\b Z})$ -- and it is possible to get around such an obstruction thanks to the notion of \textit{pseudo-parallelization} developed by C.~Lescop. Let us postpone the formal definition to Subsection~\ref{ssec_defppara} (see also \cite{lescopcube}) and, for the time being, let us just mention that a pseudo-parallelization $\bar\tau$ of a compact oriented 3-manifold $M$ is a triple $(N(\gamma); \tau_e, \tau_d)$ where $N(\gamma)$ is a framed tubular neighborhood of a link $\gamma$ in $\mathring M$, $\tau_e$ is a parallelization of $M\setminus N(\gamma)$ and $\tau_d : N(\gamma)\times \b R^3 \rightarrow TN(\gamma)$ is a parallelization of $N(\gamma)$ such that there exists a section $E_1^d$ of $UM$ : $$ E_1^d : \left\lbrace \begin{aligned} m \in M\setminus \mathring N(\gamma) &\longmapsto \tau_e(m,e_1) \\ m \in N(\gamma) &\longmapsto \tau_d(m,e_1). \end{aligned} \right. $$ Let us finally mention that $\bar\tau$ also determines a section $E_1^g$ of $UM$ which coincides with $E_1^d$ on $M\setminus \mathring N(\gamma)$. The sections $E_1^d$ and $E_1^g$ are the \textit{Siamese sections} of $\bar\tau$ and the link $\gamma$ is the \textit{link of the pseudo-parallelization $\bar\tau$}. \\
C.~Lescop showed that it is possible to associate a \textit{complex trivialization}, defined up to homotopy, to any pseudo-parallelization -- see Definition~\ref{def_complextriv}. This leads to a natural extension of the notion of first relative Pontrjagin numbers from parallelizations to pseudo-parallelizations. Furthermore, as in the case of parallelizations, a pseudo-parallelization $\bar\tau$ of a compact oriented 3-manifold $M$ admits \textit{pseudo-sections} $\bar\tau(M\times\lbrace v\rbrace)$, which are 3-chains of $UM$, for all $v \in \b S^2$. In the special case $v = e_1$, the pseudo-section $\bar\tau(M\times \lbrace e_1 \rbrace)$ of $\bar\tau$ can be written as : $$ \bar\tau(M\times \lbrace e_1 \rbrace) = \frac{E_1^d(M) + E_1^g(M)}{2}. $$
A combing $(X,\sigma)$ of $M$ is said to be \textit{compatible with $\bar\tau$} if $(X,\sigma)$ is $\partial$-compatible with $(E_1^d,{E_2^e}_{|\partial M})$ and $(E_1^g,{E_2^e}_{|\partial M})$, where $E_2^e$ is the second vector of $\tau_e$, and if $$ L_{E_1^d=X} \cap L_{E_1^g=-X}=\emptyset \mbox{ \ and \ } L_{E_1^g=X} \cap L_{E_1^d=-X}=\emptyset. $$
If $(X,\sigma)$ and $\bar\tau$ are compatible, then $\rho(X)=\bar\tau_{|\partial M}$ and we get two disjoint rational combinations of oriented links in $\mathring M$ : $$ L_{\bar\tau = X} = \frac{L_{E_1^d= X} + L_{E_1^g= X}}{2} \mbox{ \ and \ } L_{\bar\tau = - X} = \frac{L_{E_1^d=- X} + L_{E_1^g=- X}}{2}. $$ Pseudo-parallelizations allow us to revisit the definition of Pontrjagin numbers and to generalize it to torsion combings of compact oriented 3-manifolds with non empty boundary as follows. Let $P_{\b S^2}$ denote the standard projection from $W\times \b S^2$ to $\b S^2$, for any manifold $W$. \begin{lemma} \label{lem1} Let $(X,\sigma)$ be a torsion combing of a compact oriented 3-manifold $M$, let $\bar\tau$ be a pseudo-parallelization of $M$, and let $E_1^d$ and $E_1^g$ be the Siamese sections of $\bar\tau$. If $\bar\tau$ and $(X,\sigma)$ are compatible, then the expression $$ 4\cdot lk_M(L_{\bar\tau=X} , L_{\bar\tau=-X}) - lk_{\b S^2} \left( e_1-(-e_1) \ , \ P_{\b S^2} \circ \tau_d^{-1} \circ X(L_{{E_1}^d={-E_1}^g}) \right) $$ depends only on the homotopy class of $(X,\sigma)$. It will be denoted $p_1(\bar\tau,[X])$ and its opposite will be written $p_1([X],\bar\tau)$. \end{lemma}
\begin{theorem} \label{thm_defp1Xb} Let $(X_1,\sigma_{X_1})$ and $(X_2,\sigma_{X_2})$ be torsion combings of two compact oriented 3-manifolds $M_1$ and $M_2$ with identified boundaries such that $(X_1,\sigma_{X_1})$ and $(X_2,\sigma_{X_2})$ coincide on the boundary. For $i\in\lbrace 1,2\rbrace$, let $\bar\tau_i$ be a pseudo-parallelization of $M_i$ such that $\bar\tau_i$ and $(X_i,\sigma_{X_i})$ are compatible. The expression $$ p_1([X_1],[X_2])= p_1( [X_1],\bar\tau_1) + p_1(\bar\tau_1, \bar\tau_2) + p_1(\bar\tau_2, [X_2]) $$ depends only on the homotopy classes of $(X_1,\sigma_{X_1})$ and $(X_2,\sigma_{X_2})$, and it defines \textup{the first relative Pontrjagin number of $(X_1,\sigma_{X_1})$ and $(X_2,\sigma_{X_2})$}. Moreover, if $M_1$ and $M_2$ are closed, then $$ p_1([X_1],[X_2])=p_1([X_2])-p_1([X_1]). $$ \end{theorem}
Under the assumptions of Theorem \ref{thm_D2nd0}, we see that it would be impossible to naively define $p_1([X_{|A_1}],[{X_{\lbrace 1 \rbrace}}_{|B_1}])$ as $p_1([X])-p_1([{X_{\lbrace 1 \rbrace}}])$, where $X$ extends $X_{|A_1}$ to the closed manifold $M$, and ${X_{\lbrace 1 \rbrace}}$ extends ${X_{\lbrace 1 \rbrace}}_{|B_1}$ in the same way to $M(\sfrac{B_1}{A_1})$. Indeed Theorem \ref{thm_D2nd0} and the example that follows it show that the expression $\left(p_1([X])-p_1([{X_{\lbrace 1 \rbrace}}])\right)$ depends on the combed manifold $(M,X)$ into which $(A_1,X_{|A_1})$ has been embedded. It even depends on the combing $X$ that extends the combing $X_{|A_1}$ of $A_1$ to $M$ for the fixed manifold $M$ of this example, since $$ \left( p_1([X]) - p_1([{X_{\lbrace 1 \rbrace}}]) \right) - \left( p_1([{X_{\lbrace 2 \rbrace}}]) - p_1([{X_{\lbrace 1,2 \rbrace}}]) \right) =-8 $$ there. \\
Theorem \ref{thm_defp1Xb} translates as follows in the closed case, and it bridges a gap between the two dissimilar generalizations of the Pontrjagin numbers of parallelizations, namely the one for pseudo-parallelizations and the one for torsion combings in closed oriented 3-manifolds. \begin{corollary} \label{cor_p1Xppara} Let $X$ be a torsion combing of a closed oriented 3-manifold $M$ and let \linebreak $\bar\tau = (N(\gamma);\tau_e,\tau_d)$ be a pseudo-parallelization of $M$. Let $E_1^d$ and $E_1^g$ denote the Siamese sections of $\bar\tau$. If $X$ and $\bar \tau$ are compatible, then $$ \begin{aligned} p_1([X]) = p_1(\bar \tau) &+ 4\cdot lk_M(L_{\bar\tau=X} , L_{\bar\tau=-X}) \\ &- lk_{\b S^2} \left( e_1-(-e_1) \ , \ P_{\b S^2} \circ \tau_d^{-1} \circ X(L_{{E_1}^d={-E_1}^g}) \right). \end{aligned} $$ \end{corollary}
Another special case is when genuine parallelizations can be used. The closed case with genuine parallelizations is nothing but C. Lescop's definition of the Pontrjagin number of torsion combings in closed oriented 3-manifolds stated above. \begin{corollary} \label{cor_compactpara}
Let $(X_1,\sigma_1)$ and $(X_2,\sigma_2)$ be torsion combings of two compact oriented 3-manifolds $M_1$ and $M_2$ with identified boundaries such that $(X_1,\sigma_1)$ and $(X_2,\sigma_2)$ coincide on the boundary. If, for $i \in \lbrace 1 , 2 \rbrace$, $\tau_i \hspace{-1mm}=\hspace{-1mm} (E_1^i,E_2^i,E_3^i)$ is a parallelization of $M_i$ such that $(X_i,\sigma_i)$ and $(E_1^i,{E_2^i}_{|\partial M_i})$ are $\partial$-compatible, then $$ p_1([X_1],[X_2]) = p_1(\tau_1 , \tau_2) + 4 \cdot lk_{M_2}(L_{E_1^{2}=X_2} \ , \ L_{E_1^{2}=-X_2}) - 4 \cdot lk_{M_1}(L_{E_1^{1}=X_1} \ , \ L_{E_1^{1}=-X_1}). $$ \end{corollary}
Finally, for torsion combings defined on a fixed compact oriented 3-manifold (which may have boundary), we have the following simple variation formula, as in the closed case. \begin{theorem} \label{formuleplus} If $(X, \sigma)$ and $(Y,\sigma)$ are $\partial$-compatible torsion combings of a compact oriented 3-manifold $M$, then $$ p_1([X],[Y]) = 4 \cdot lk_M(L_{X=Y}, L_{X=-Y}). $$ \end{theorem}
Let $M$ be a compact connected oriented 3-manifold. For any section $\sigma$ of $TM_{|\partial M}$, let $\mbox{spin$^c$}(M,\sigma)$ denote the \textit{set of $\mbox{spin$^c$}$-structures on $M$ relative to $\sigma$}, \textit{ie}\ the set of homotopy classes on $M \setminus \lbrace \omega \rbrace$ of combings $(X,\sigma)$ of $M$, where $\omega$ is any point in $\mathring M$ (see \cite{gmdeloup} for a detailed presentation of $\mbox{spin$^c$}$-structures). Thanks to Theorem~\ref{formuleplus}, it is possible to classify the torsion combings of a fixed $\mbox{spin$^c$}$-structure up to homotopy, thus generalizing a property of the Gompf invariant in the closed case. I thank Gw\'{e}na\"{e}l Massuyeau for suggesting this statement. \begin{theorem} \label{GM} Let $(X,\sigma)$ and $(Y,\sigma)$ be $\partial$-compatible torsion combings of a compact connected oriented 3-manifold $M$ which represent the same $\mbox{spin$^c$}$-structure. The combings $(X,\sigma)$ and $(Y,\sigma)$ are homotopic relatively to the boundary if and only if $p_1([X],[Y]) = 0$. \end{theorem}
The key tool in the proof of Theorem~\ref{thm_defp1Xb} is the following generalization of the interpretation of the variation of the Pontrjagin numbers of parallelizations as an algebraic intersection of three chains. \begin{theorem} \label{prop_varasint} Let $\tau$ and $\bar \tau$ be two pseudo-parallelizations of a compact oriented 3-manifold $M$ that coincide on $\partial M$ and whose links are disjoint. For any $v \in \b S^2$, there exists a 4-chain $C_4(\tau,\bar\tau ;v)$ of $[0,1]\times UM$, transverse to the boundary of $[0,1] \times UM$, such that $$ \partial C_4(\tau,\bar\tau ;v) = \lbrace 1 \rbrace \times \bar\tau(M\times\lbrace v \rbrace) - \lbrace 0 \rbrace \times \tau(M\times \lbrace v \rbrace) - [0,1]\times \tau(\partial M \times \lbrace v \rbrace). $$ Moreover, for any $x$, $y$ and $z$ in $\b S^2$ with pairwise different distances to $e_1$, and for any triple of pairwise transverse such 4-chains $C_4(\tau,\bar\tau ;x)$, $C_4(\tau,\bar\tau ;y)$ and $C_4(\tau,\bar\tau ;z)$ : $$ p_1(\tau,\bar\tau)= 4 \cdot \langle C_4(\tau,\bar\tau ;x),C_4(\tau,\bar\tau ;y),C_4(\tau,\bar\tau ;z) \rangle_{[0,1]\times UM}. $$ \end{theorem}
Our general variation formula for Pontrjagin numbers of torsion combings reads as follows for all compact oriented 3-manifolds.
\begin{theorem}\label{thm_D2nd} Let $(M,X)$ be a compact oriented 3-manifold equipped with a combing, \linebreak let $\lbrace (\sfrac{B_i}{A_i},X_{B_i}) \rbrace_{i \in \lbrace 1,2 \rbrace}$ be two disjoint LP$_\b Q$-surgeries in $(M,X)$, and, for all $I \subset \lbrace 1 , 2 \rbrace$, \linebreak let $X^I = X(\lbrace \sfrac{B_i}{A_i} \rbrace_{i \in I})$. If $\lbrace X^I \rbrace_{I \subset \lbrace 1,2 \rbrace}$ is a family of torsion combings of the manifolds \linebreak $M_I=M(\lbrace \sfrac{B_i}{A_i} \rbrace_{i \in I})$, then $$ p_1([X^{\lbrace 2 \rbrace}],[X^{\lbrace 1, 2 \rbrace}])-p_1([X],[X^{\lbrace 1 \rbrace}]) = - 2 \cdot lk_M \left(L_{\lbrace X^I \rbrace}(\sfrac{B_1}{A_1}), L_{\lbrace X^I \rbrace}(\sfrac{B_2}{A_2})\right), $$ where the right-hand side is defined as in Theorem~\ref{thm_D2nd0}. \end{theorem}
A direct consequence of this variation formula is that the extended Gompf invariant for torsion combings of compact oriented 3-manifolds is a degree two finite type invariant with respect to LP$_\b Q$-surgeries. \begin{corollary} \label{cor_FTcombings} Let $(M,X)$ be a compact oriented 3-manifold equipped with a combing, let $\lbrace ( \sfrac{B_i}{A_i} , X_{B_i} ) \rbrace_{i\in \lbrace 1, \ldots, k\rbrace}$ be a family of disjoint LP$_\b Q$-surgeries in $(M,X)$, and, for all $I \subset \lbrace 1 , \ldots, k \rbrace$, let $(M_I,X^I)$ be the combed manifold obtained by performing the surgeries associated to the data $\lbrace (\sfrac{B_i}{A_i}, X_{B_i}) \rbrace_{i \in I}$. If $k\geqslant 3$, and if $\lbrace X^I \rbrace_{I \subset \lbrace 1, \ldots, k \rbrace}$ is a family of torsion combings of the $\lbrace M_I \rbrace_{I \subset \lbrace 1, \ldots, k \rbrace}$, then $$ \sum_{I \subset \lbrace 2 , \ldots , k \rbrace} (-1)^{\card(I)} \ p_1 \left( [X^I] , [X^{I\cup\lbrace 1 \rbrace}] \right)=0. $$ If $\partial M=\emptyset$, this reads $$ \sum_{I \subset \lbrace 1 , \ldots , k \rbrace} (-1)^{\card(I)} \ p_1 \left( [X^I] \right)=0. $$ \end{corollary}
In the first section of this article, we give details on Lagrangian-preserving surgeries, combings and pseudo-parallelizations. Then, in the second section, we review the definitions of Pontrjagin numbers of parallelizations and pseudo-parallelizations; this second section ends with a proof of Theorem~\ref{prop_varasint}. The third section is devoted to the proofs of Theorem~\ref{thm_defp1Xb} and Theorem~\ref{GM}. Finally, we study the variations of Pontrjagin numbers with respect to Lagrangian-preserving surgeries, and finish the last section by proving Theorem~\ref{thm_D2nd}.\\
\begin{large}\textbf{Acknowledgments.}\end{large} First, let me thank C. Lescop and J.-B. Meilhan for their thorough guidance and support. I also thank M. Eisermann and G. Massuyeau for their careful reading and their useful remarks.
\renewcommand{\thetheorem}{\arabic{section}.\arabic{theorem}}
\section{More about Lagrangian-preserving surgeries, combings and pseudo-parallelizations} \subsection{Lagrangian-preserving surgeries} \label{ssec_LPsurgeries}
Let us first note three easy lemmas, the proofs of which are left to the reader.
\begin{lemma} \label{prop_redefLPs} Let $(\sfrac{B}{A})$ be an LP$_\b Q$-surgery datum in a compact oriented 3-manifold $M$ and let $L_1$ and $L_2$ be links in $M \hspace{-0.5mm}\setminus\hspace{-0.5mm} \mathring A$. If $L_1$ and $L_2$ are rationally null-homologous in $M$, then they are null-homologous in $M(\sfrac{B}{A})$ and $$ lk_{M(\sfrac{B}{A})}(L_1,L_2) = lk_M(L_1,L_2). $$ \end{lemma} \iffalse \begin{proof} For $i \in \lbrace 1,2 \rbrace$, let $\Sigma_i$ be a 2-chain in $M$ such that $\partial \Sigma_i=L_i$ and $\Sigma_i$ is transverse to $\partial A$. Since $(\sfrac{B}{A})$ is an LP$_\b Q$-surgery, $[\Sigma_i\cap \partial A]= 0$ in $H_1(B; \b Q)$. Therefore, for all $i \in \lbrace 1,2 \rbrace$, there exists a 2-chain $\Sigma'_i$ in $M(\sfrac{B}{A})$ such that $$ \partial \Sigma'_i = L_i \mbox{ \ and \ } \Sigma'_i \cap (M(\sfrac{B}{A})\hspace{-0.5mm}\setminus\hspace{-0.5mm} \mathring B) = \Sigma_i \cap (M \hspace{-0.5mm}\setminus\hspace{-0.5mm} \mathring A). $$ As a consequence $L_1$ and $L_2$ are null-homologous in $M(\sfrac{B}{A})$ and, since $L_2 \subset M \hspace{-0.5mm}\setminus\hspace{-0.5mm} \mathring A$, it follows that $$ lk_{M(\sfrac{B}{A})}(L_1,L_2) = \langle \Sigma'_1 , L_2 \rangle_{M(\sfrac{B}{A})} = \langle \Sigma_1 , L_2 \rangle_M = lk_M(L_1,L_2). $$ \end{proof} \fi
A \textit{rational homology 3-sphere}, or a $\b Q$HS for short, is a closed oriented 3-manifold with the same homology with rational coefficients as $\b S^3$.
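For instance, $\b S^3$ and, more generally, the lens spaces $L(p,q)$ with $p \neq 0$ are rational homology 3-spheres, whereas $\b S^1 \times \b S^2$ is not.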
\begin{lemma} \label{prop-LP2} Let $(\sfrac{B}{A})$ be an LP$_\b Q$-surgery in a compact oriented 3-manifold $M$. If $M$ is a $\b Q$HS, then $M(\sfrac{B}{A})$ is a $\b Q$HS. \end{lemma} \iffalse \begin{proof} Let $M$ be a $\b Q$HS and let $A$ be a $\b Q$HH of genus $g\in \b N$. Using the Mayer-Vietoris sequence associated to $M=A\cup (M\hspace{-0.5mm}\setminus\hspace{-0.5mm}\mathring A)$ shows that $M\hspace{-0.5mm}\setminus\hspace{-0.5mm} \mathring A$ is a $\b Q$HH of genus $g$ and that the inclusions of $\partial A$ into $A$ and into $M\setminus \mathring A$ induce an isomorphism $$ H_1(\partial A ; \b Q) \simeq H_1(A; \b Q)\oplus H_1(M\hspace{-0.5mm}\setminus\hspace{-0.5mm}\mathring A ; \b Q). $$ For details, see \cite[Sublemma 4.6]{moussardFTIQHS}. It follows that $M(\sfrac{B}{A})\hspace{-0.5mm}\setminus\hspace{-0.5mm} \mathring B$ is also a genus $g$ $\b Q$HH and, since $(\sfrac{B}{A})$ is an LP$_\b Q$-surgery, the inclusions of $\partial B$ into $B$ and into $M(\sfrac{B}{A})\setminus \mathring B$ induce an isomorphism $$ H_1(\partial B ; \b Q) \simeq H_1(B; \b Q)\oplus H_1(M(\sfrac{B}{A})\hspace{-0.5mm}\setminus\hspace{-0.5mm}\mathring B ; \b Q). $$ Using this isomorphism in the Mayer-Vietoris sequence associated to the splitting \linebreak $M(\sfrac{B}{A})=B\cup (M(\sfrac{B}{A})\hspace{-0.5mm}\setminus\hspace{-0.5mm}\mathring B)$ shows that $M(\sfrac{B}{A})$ is a $\b Q$HS. \end{proof} \fi
\begin{lemma} \label{phom} If $A$ is a compact connected orientable 3-manifold with connected boundary and if the map $i^A_* : H_1(\partial A ; \b Q) \rightarrow H_1(A; \b Q)$ induced by the inclusion of $\partial A$ into $A$ is surjective, then $A$ is a rational homology handlebody. \end{lemma} \iffalse \begin{proof} First, for such a manifold $A$, we have $H_0(A;\b Q)\simeq \b Q$ and $H_3(A;\b Q)\simeq 0$. Second, using the hypothesis on $i^A_*$ in the exact sequence associated to $(A, \partial A)$, we get that $H_1(A,\partial A;\b Q)= 0$. Using Poincaré duality and the universal coefficient theorem, it follows that $$ H_2(A;\b Q) \simeq H^1(A,\partial A;\b Q) \simeq \mbox{Hom}(H_1(A,\partial A;\b Q),\b Q) = 0. $$ Moreover, we get the following exact sequence from the exact sequence associated to $(A,\partial A)$~: $$ 0\rightarrow H_2(A,\partial A; \b Q) \rightarrow H_1(\partial A;\b Q) \rightarrow H_1(A;\b Q) \rightarrow 0. $$ It follows that $\mbox{dim}(H_2(A,\partial A;\b Q))+\mbox{dim}(H_1(A;\b Q))=\mbox{dim}(H_1(\partial A;\b Q)) = 2g$, where $g$ denotes the genus of $\partial A$. However, $$ H_2(A,\partial A;\b Q) \simeq H^1(A;\b Q) \simeq \mbox{Hom}(H_1(A;\b Q),\b Q), $$ hence $\mbox{dim}(H_1(A;\b Q))=g$. \end{proof} \fi
\begin{proposition} Let $A$ be a compact submanifold with connected boundary of a $\b Q$HS $M$, let $B$ be a compact oriented 3-manifold and let $h:\partial A \rightarrow \partial B$ be a homeomorphism. If the surgered manifold $M(\sfrac{B}{A})$ is a $\b Q$HS and if $$ lk_{M(\sfrac{B}{A})}(L_1,L_2) = lk_M(L_1,L_2) $$ for all disjoint links $L_1$ and $L_2$ in $M \hspace{-0.5mm}\setminus\hspace{-0.5mm} \mathring A$, then $(\sfrac{B}{A})$ is an LP$_\b Q$-surgery. \end{proposition} \begin{proof} Using the Mayer-Vietoris exact sequences associated to $M=A\cup (M\hspace{-0.5mm}\setminus\hspace{-0.5mm}\mathring A)$ and \linebreak $M(\sfrac{B}{A})=B\cup (M(\sfrac{B}{A})\hspace{-0.5mm}\setminus\hspace{-0.5mm}\mathring B)$, we get that the maps $i_*^A : H_1(\partial A ; \b Q) \longrightarrow H_1(A; \b Q)$ and \linebreak $i_*^B : H_1(\partial B ; \b Q) \longrightarrow H_1(B; \b Q)$ induced by the inclusions of $\partial A$ and $\partial B$ into $A$ and $B$ are surjective. Using Lemma~\ref{phom}, it follows that $A$ and $B$ are rational homology handlebodies. Moreover, $A$ and $B$ have the same genus since $h : \partial A \rightarrow \partial B$ is a homeomorphism. \\
Let $P_{\go L_A}$ and $P_{\go L_B}$ denote the projections from $H_1(\partial A;\b Q)$ onto $\go L_A$ and $\go L_B$, respectively, with kernel $\go L_{M\setminus \mathring A}$. Consider a collar $[0,1]\times \partial A$ of $\partial A$ such that $\lbrace 0 \rbrace \times \partial A \simeq \partial A$ and note that for all 1-cycles $x$ and $y$ of $\partial A$ : $$ \langle P_{\go L_A}(y), x \rangle_{\partial A} = lk_M(\lbrace 1 \rbrace \times y, \lbrace 0 \rbrace \times x ) = lk_{M(\sfrac{B}{A})}(\lbrace 1 \rbrace \times y, \lbrace 0 \rbrace \times x) =\langle P_{\go L_B}(y), x \rangle_{\partial B}, $$ so that $P_{\go L_B}=P_{\go L_A}$ and $h_*(\go L_A)=\go L_B$. \end{proof}
\subsection{Combings} \begin{proposition} \label{prop_linksandsigns} If $X$ and $Y$ are $\partial$-compatible combings of a compact oriented 3-manifold $M$, then $$ L_{X=Y}=L_{Y=X} \mbox{ \ and \ } L_{X=Y}= - L_{-X=-Y}. $$ \end{proposition} \begin{proof} First, by definition, the link $L_{X=Y}$ is the projection of the intersection of the sections $X(\mathring M)$ and $Y(\mathring M)$. This intersection is oriented so that $$ NX(\mathring M) \oplus NY(\mathring M) \oplus T(X(\mathring M)\cap Y(\mathring M)) $$ orients $UM$, fiberwise. Since the normal bundles $NX(\mathring M)$ and $NY(\mathring M)$ have dimension 2, the isomorphism permuting them is orientation-preserving so that $L_{X=Y}=L_{Y=X}$. Second, $(-X)(\mathring M)\cap(-Y)(\mathring M)$ is the image of $X(\mathring M)\cap Y(\mathring M)$ under the map $\iota$ from $UM$ to itself which acts on each fiber as the antipodal map. This map reverses the orientation of $UM$ as well as the coorientations of $X(\mathring M)$ and $Y(\mathring M)$, \textit{ie}\ $$ \begin{aligned} &N(-X)(M)=-\iota(NX(M)), \ N(-Y)(M)=-\iota(NY(M)). \\ \end{aligned} $$ Since $N(-X)(M) \oplus N(-Y)(M) \oplus T((-X)(M)\cap (-Y)(M))$ has the orientation of $UM$ $$ T((-X)(M)\cap (-Y)(M)) = -\iota(T(X(M)\cap Y(M))). $$ Hence, $L_{X=Y}= - L_{-X=-Y}$. \end{proof}
\begin{definition} Let $M$ be a compact oriented 3-manifold and let $L$ be a link in $\mathring M$. Define the \textit{blow up of $M$ along $L$} as the 3-manifold $Bl(M,L)$ obtained from $M$ by replacing $L$ with its unit normal bundle in $M$. The 3-manifold $Bl(M,L)$ inherits a canonical differential structure. See \cite[Definition 3.5]{lescopcombing} for a detailed description. \end{definition}
\begin{lemma} \label{lem_phomotopy} Let $X$ and $Y$ be $\partial$-compatible combings of a compact oriented 3-manifold $M$. There exists a 4-chain $\bar F(X,Y)$ of $UM$ with boundary~: $$
\partial \bar F(X,Y) = Y(M) - X(M) + UM_{| L_{X=-Y}}. $$ \end{lemma} \begin{proof} To construct the desired 4-chain, start with the partial homotopy from $X$ to $Y$ $$ \tilde F(X,Y) : \left\lbrace \begin{aligned} \ [0,1] \times (M\setminus L_{X=-Y}) &\longrightarrow UM\\ (s,m) & \longmapsto \left( m , \l H_X^Y(s,m) \right) \end{aligned} \right. $$ where $\l H_X^Y(s,m)$ is the unique point of the shortest geodesic arc from $X(m)$ to $Y(m)$ such that $$ d_{\b S^2}(X(m),\l H_X^Y(s,m)) = s \cdot d_{\b S^2}(X(m),Y(m)) $$ where $d_{\b S^2}$ denotes the usual distance on $\b S^2$. Next, extend the map $$ (s,m) \longmapsto \l H_X^Y(s,m) $$ on the blow up of $M$ along $L_{X=-Y}$. The section $X$ induces a map $$ X : NL_{X=-Y} \longrightarrow -Y^\perp(L_{X=-Y}) $$
which is a diffeomorphism on a neighborhood of $\lbrace 0 \rbrace \times L_{X=-Y}$ since $X$ and $Y$ are $\partial$-compatible combings. Furthermore, this diffeomorphism is orientation-preserving by definition of the orientation on $L_{X=-Y}$. So, for $n \in UN_mL_{X=-Y}$, $\l H_X^Y(s,n)$ can be defined as the unique point at distance $s\pi$ from $X(m)$ on the unique half great circle from $X(m)$ to $Y(m)$ through $T_m X(n)$. Thanks to transversality again, the set $\lbrace \l H_X^Y(s,n) \ | \ s \in [0,1], \ n \in UN_mL_{X=-Y} \rbrace$ is a whole sphere $\b S^2$ for any fixed $m \in L_{X=-Y}$, so that $$ \partial \tilde F(X,Y)([0,1]\times Bl(M,L_{X=-Y})) = Y(M) - X(M) + \partial_{int} $$ where $\partial_{int} \simeq L_{X=-Y} \times \b S^2 $ (see \cite[Proof of Proposition 3.6]{lescopcombing} for the orientation of $\partial_{int}$). Finally, let $\bar F(X,Y)=\tilde F(X,Y)([0,1]\times Bl(M,L_{X=-Y}))$. \end{proof}
If $X$ and $Y$ are $\partial$-compatible combings of a compact oriented 3-manifold $M$ and if $\sigma$ is a nonvanishing section of $X^\perp_{|\partial M}$, let $\l H^{-Y}_{X,\sigma}$ denote the map from $[0,1] \times (M\hspace{-0.5mm}\setminus\hspace{-0.5mm} L_{X=Y})$ to $UM$ such that, for all $(s,m)$ in $[0,1] \times \partial M$, $\l H^{-Y}_{X,\sigma}(s,m)$ is the unique point at distance $s\pi$ from $X(m)$ on the unique geodesic arc starting from $X(m)$ in the direction of $\sigma(m)$ to $-X(m)=-Y(m)$ and, for all $(s,m)$ in $[0,1] \times (\mathring M \setminus L_{X=Y})$, $\l H^{-Y}_{X,\sigma}(s,m)$ is the unique point on the shortest geodesic arc from $X(m)$ to $-Y(m)$ such that $$ d_{\b S^2}(X(m),\l H^{-Y}_{X,\sigma}(s,m)) = s \cdot d_{\b S^2}(X(m),-Y(m)). $$
As in the previous proof, $\l H^{-Y}_{X,\sigma}$ may be extended as a map from $[0,1]\times Bl(M,L_{X=Y})$ to $UM$. In the case $X=Y$, for any section $\sigma$ of $X^\perp$ which is nonvanishing on $\partial M$, let $L_{\sigma=0}$ denote the oriented link $\lbrace m \in M \ | \ \sigma(m)= 0 \rbrace$ and define $\l H_{X,\sigma}^{-X}$ as the map from $[0,1] \times (M\hspace{-0.5mm}\setminus\hspace{-0.5mm} L_{\sigma=0})$ to $UM$ such that, for all $(s,m)$ in $[0,1] \times (M\hspace{-0.5mm}\setminus\hspace{-0.5mm} L_{\sigma=0})$, $\l H_{X,\sigma}^{-X}(s,m)$ is the unique point at distance $s\pi$ from $X(m)$ on the unique geodesic arc from $X(m)$ to $-X(m)$ starting in the direction of $\sigma(m)$. Note that $L_{\sigma=0}\cap\partial M = \emptyset$, and $[L_{\sigma=0}]=P(e_2^M(X^\perp,\sigma_{|\partial M}))$. Here again, $\l H_{X,\sigma}^{-X}$ may be extended as a map from $[0,1]\times Bl(M,L_{\sigma=0})$ to $UM$. \\
In order to simplify notations, if $A$ is a submanifold of a compact oriented 3-manifold $M$, we may implicitly use a parallelization of $M$ to write $UM_{|A}$ as $A\times \b S^2$.
\begin{proposition} \label{prop_links} If $(X,\sigma)$ and $(Y,\sigma)$ are $\partial$-compatible combings of a compact oriented 3-manifold $M$, then, in $H_3(UM;\b Z)$, $$ \begin{aligned} \ [L_{X=-Y} \times \b S^2 ] &= [X(M) - Y(M)] \\ \ [L_{X=Y} \times \b S^2 ] &= [X(M) - (-Y)(M) + \l H^{-Y}_{X,\sigma} ([0,1]\times \partial M)]. \end{aligned} $$ \end{proposition} \begin{proof} The first identity is a direct consequence of Lemma~\ref{lem_phomotopy}. The second one can be obtained using a similar construction. Namely, construct a 4-chain $\bar F(X,-Y)$ using the partial homotopy from $X$ to $-Y$ : $$ \tilde F(X,-Y) : \left\lbrace \begin{aligned}
\ [0,1] \times (M\setminus L_{X=Y}) &\longrightarrow UM\\ (s,m) & \longmapsto \left( m , \l H^{-Y}_{X,\sigma}(s,m) \right). \end{aligned} \right. $$ As in the proof of Lemma~\ref{lem_phomotopy}, $\tilde F(X,-Y)$ can be extended to $[0,1] \times Bl(M,L_{X=Y})$. Finally, we get a 4-chain $\bar F(X,-Y)$ of $UM$ with boundary : $$
\partial \bar F(X,-Y) = (-Y)(M) - X(M) - \l H^{-Y}_{X,\sigma} ([0,1]\times \partial M) + UM_{|L_{X=Y}}. $$ \end{proof}
\begin{proposition} \label{prop_euler} Let $X$ be a combing of a compact oriented 3-manifold $M$ and let \linebreak $P : H^2(M, \partial M;\b Z) \rightarrow H_1(M;\b Z)$ be the Poincaré duality isomorphism. If $M$ is closed, then, in $H_3(UM;\b Z)$, $$ [P(e_2(X^\perp)) \times \b S^2 ] = [X(M) - (-X)(M)], $$ where $[P(.)\times \b S^2]$ abusively denotes the homology class of the preimage of a representative of $P(.)$ under the bundle projection $UM \rightarrow M$. In general, if $\sigma$ is a section of $X^\perp$ such that $L_{\sigma=0}\cap\partial M = \emptyset$, then, in $H_3(UM;\b Z)$, $$
[ P(e_2^M(X^\perp,\sigma_{|\partial M})) \times \b S^2 ] = [X(M) - (-X)(M) +\l H_{X,\sigma}^{-X}([0,1]\times \partial M)]. $$ \end{proposition} \begin{proof}
Recall that $P(e_2^M(X^\perp,\sigma_{|\partial M}))=[L_{\sigma=0}]$. Perturbing $X$ by using $\sigma$, construct a section $Y$ homotopic to $X$ that coincides with $X$ on $\partial M$ and such that $[L_{X=Y}] = P(e_2^M(X^\perp,\sigma_{|\partial M}))$. Using Proposition~\ref{prop_links}, $$
[L_{X=Y} \times \b S^2 ] = [X(M) - (-Y)(M) + \l H^{-Y}_{X,\sigma_{|\partial M}} ([0,1]\times \partial M)], $$ so that, since $Y$ is homotopic to $X$ relatively to $\partial M$ (hence $[(-Y)(M)]=[(-X)(M)]$ in $H_3(UM;\b Z)$ and the boundary terms coincide), $$
[ P(e_2^M(X^\perp,\sigma_{|\partial M})) \times \b S^2 ] = [X(M) - (-X)(M) +\l H_{X,\sigma}^{-X}([0,1]\times \partial M)]. $$ \end{proof}
\begin{proposition} \label{prop_linksinhomologyI} If $(X,\sigma)$ and $(Y,\sigma)$ are $\partial$-compatible combings of a compact oriented 3-manifold $M$, then, in $H_1(M;\b Z)$, $$ \begin{aligned} 2 \cdot [L_{X=-Y}] &= P(e_2^M(X^\perp,\sigma)) - P(e_2^M(Y^\perp,\sigma)), \\ 2 \cdot [L_{X=Y}] &= P(e_2^M(X^\perp,\sigma)) + P(e_2^M(Y^\perp,\sigma)). \end{aligned} $$ \end{proposition} \begin{proof} Extend $\sigma$ as a section $\bar\sigma$ of $X^\perp$. Using Propositions~\ref{prop_linksandsigns},~\ref{prop_links}~and~\ref{prop_euler}, we get, in $H_3(UM;\b Z)$,
$$
\begin{aligned}
2 \ \cdot \ [L_{X=-Y}\times \b S^2] &= [L_{X=-Y} \times \b S^2] - [L_{-X=Y} \times \b S^2] \\
&= [X(M)-Y(M)]-[(-X)(M)-(-Y)(M)] \\
&= [X(M)-Y(M)-(-X)(M)+(-Y)(M) \\
& \hspace{5mm}+\l H_{X,\bar\sigma}^{-X}([0,1]\times \partial M) -\l H_{X,\bar\sigma}^{-X} ([0,1]\times \partial M) ] \\
&= [X(M)-(-X)(M)+\l H_{X,\bar\sigma}^{-X}([0,1]\times \partial M)] \\
&- [Y(M)-(-Y)(M)+\l H_{X,\bar\sigma}^{-X}([0,1]\times \partial M) ] \\
&= [P(e_2^M(X^\perp,\sigma)) \times \b S^2] - [P(e_2^M(Y^\perp,\sigma)) \times \b S^2 ],
\end{aligned}
$$
$$
\begin{aligned}
2 \cdot [L_{X=Y}\times \b S^2] &= [L_{X=Y} \times \b S^2] - [L_{-X=-Y} \times \b S^2] \\
&= [X(M) - (-Y)(M)+\l H_{X,\bar\sigma_{|\partial M}}^{-Y} ([0,1]\times \partial M)]\\
&- [(-X)(M) - Y(M) +\l H_{-X,\bar\sigma_{|\partial M}}^{Y} ([0,1]\times \partial M)] \\
&= [ P(e_2^M(X^\perp,\sigma)) \times \b S^2 ] - [ P(e_2^M((-Y)^\perp,\sigma)) \times \b S^2 ] \\
&= [ P(e_2^M(X^\perp,\sigma)) \times \b S^2 ] + [ P(e_2^M(Y^\perp,\sigma)) \times \b S^2 ].
\end{aligned}
$$
\iffalse &= [X(M) - (-Y)(M)+\l H_{X,\sigma}^{-X} ([0,1]\times \partial M)]\\
&- [(-X)(M) - Y(M) + \l H_{-Y,\sigma}^{Y} ([0,1]\times \partial M)] \\
&= [X(M)-(-X)(M) + \l H_{X,\sigma}^{-X} ([0,1]\times \partial M)] \\
&- [(-Y)(M)-Y(M)+ \l H_{-Y,\sigma}^{Y} ([0,1]\times \partial M)] \\ \fi \end{proof}
\begin{remark}
If $M$ is a compact oriented 3-manifold and if $\sigma$ is a trivialization of $TM_{|\partial M}$, then the set $\mbox{spin$^c$}(M,\sigma)$ is an $H^2(M,\partial M; \b Z)$-affine space and the map $$ c : \left\lbrace \begin{aligned} \mbox{spin$^c$}(M,\sigma) &\longrightarrow H^2(M,\partial M; \b Z) \\
[X]^c &\longmapsto e_2^M(X^\perp, \sigma) \end{aligned} \right. $$ is affine over the multiplication by 2. Moreover, $[X]^c-[Y]^c \in H^2(M,\partial M; \b Z) \simeq H_1(M; \b Z)$ is represented by $L_{X=-Y}$, hence $2 \cdot [L_{X=-Y}] = P(e_2^M(X^\perp,\sigma)) - P(e_2^M(Y^\perp,\sigma))$. See \cite[Section 1.3.4]{gmdeloup} for a detailed presentation using this point of view. Both Proposition \ref{prop_linksinhomologyI} and Corollary \ref{corrplus} below are already-known results. For instance, Corollary \ref{corrplus} is also present in \cite{lescopcombing} (Lemma 2.16). \end{remark}
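Explicitly (this is just an unfolding of the terminology), the fact that $c$ is affine over the multiplication by $2$ means that, for all $h \in H^2(M,\partial M;\b Z)$ : $$ c([X]^c + h) = c([X]^c) + 2 \cdot h. $$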
\begin{corollary} \label{corrplus} If $X$ and $Y$ are transverse combings of a closed oriented 3-manifold $M$, then, in $H_1(M;\b Z)$, $$ \begin{aligned} 2 \cdot [L_{X=-Y}] &= P(e_2(X^\perp)) - P(e_2(Y^\perp)), \\ 2 \cdot [L_{X=Y}] &= P(e_2(X^\perp)) + P(e_2(Y^\perp)). \end{aligned} $$ \end{corollary}
\subsection{Pseudo-parallelizations} \label{ssec_defppara} \hspace{-3mm} A \textit{pseudo-parallelization} $\bar\tau \hspace{-1mm} = \hspace{-1mm} (N(\gamma); \tau_e, \tau_d)$ of a compact oriented 3-manifold $M$ is a triple~where \begin{enumerate}[\textbullet] \setlength{\itemsep}{0pt} \setlength{\parskip}{5pt} \item $\gamma$ is a link in $\mathring{M}$, \item $N(\gamma)$ is a tubular neighborhood of $\gamma$ with a given product structure : $$ N(\gamma)\simeq [a,b] \times \gamma \times [-1,1], $$ \item $\tau_e$ is a genuine parallelization of $\smash{M\setminus\mathring{N(\gamma)}}$, \item $\tau_d$ is a genuine parallelization of $N(\gamma)$ such that $$ \tau_d = \left\lbrace \begin{aligned} & \tau_e &\mbox{ on } \partial (\left[ a , b \right] \times \gamma \times \left[ -1 , 1 \right]) \setminus \lbrace b \rbrace \times \gamma \times \left[ -1 , 1 \right] \\ & \tau_e \circ \l T_\gamma &\mbox{ on } \lbrace b \rbrace \times \gamma \times \left[ -1 , 1 \right] \end{aligned} \right. $$ where $\l T_\gamma$ is $$ \l T_\gamma : \left\lbrace \begin{aligned} ( [a,b] \times \gamma \times \left[ -1 , 1 \right] )\times \b R^3 & \longrightarrow ([a,b] \times \gamma \times \left[ -1 , 1 \right]) \times \b R^3 \\ ((t,c,u),v) &\longmapsto ((t,c,u),R_{e_1, \pi+\theta(u)}(v)). \end{aligned} \right. $$ where $R_{e_1, \pi+\theta(u)}$ is the rotation of axis $e_1$ and angle $\pi+\theta(u)$, and where $\theta : [-1,1] \rightarrow [-\pi,\pi]$ is a smooth increasing map constant equal to $-\pi$ on the interval $[-1,-1+\varepsilon]$ ($\varepsilon \in ]0,\sfrac{1}{2}[$), and such that $\theta(-x)=-\theta(x)$. \end{enumerate}
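As a consistency check on this definition, note that the two prescriptions for $\tau_d$ agree on the corners $\lbrace b \rbrace \times \gamma \times \lbrace \pm 1 \rbrace$ : since $\theta(\mp 1)=\mp\pi$, $$ R_{e_1, \pi+\theta(-1)} = R_{e_1,0} = \mbox{Id}_{SO(3)} \mbox{ \ and \ } R_{e_1, \pi+\theta(1)} = R_{e_1,2\pi} = \mbox{Id}_{SO(3)}, $$ so that $\tau_e \circ \l T_\gamma = \tau_e$ there.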
Note that a pseudo-parallelization whose link is empty is a parallelization.
\begin{lemma} \label{lem_extendpparallelization}
If $M$ is a compact oriented 3-manifold with boundary and if $\rho$ is a trivialization of $TM_{|\partial M}$, there exists a pseudo-parallelization $\bar\tau$ of $M$ that coincides with $\rho$ on $\partial M$. \end{lemma} \begin{proof} The obstruction to extending the trivialization $\rho$ as a parallelization of $M$ can be represented by an element $[\gamma] \in H_1(M;\pi_1(SO(3)))$, where $\gamma$ is a link in $\mathring M$. It follows that $\rho$ can be extended over $M\setminus \mathring {N(\gamma)}$, where $N(\gamma)$ is a tubular neighborhood of $\gamma$. Finally, according to \cite[Lemma 10.2]{lescopcube}, this extension can be completed, over each solid torus of~$N(\gamma)$, into a pseudo-parallelization of $M$. \end{proof}
Thanks to Lemma \ref{lem_extendpparallelization}, an LP$_\b Q$-surgery in a rational homology 3-sphere equipped with a pseudo-parallelization can be seen as a local move. This is not the case for an LP$_\b Q$-surgery in a rational homology 3-sphere equipped with a genuine parallelization. \\
Before we move on to the definition of pseudo-sections \textit{ie}\ the counterpart of sections of parallelizations for pseudo-parallelizations, we need the following. \begin{definition} \label{def_addinner} Let $\bar \tau = (N(\gamma); \tau_e, \tau_d)$ be a pseudo-parallelization of a compact oriented 3-manifold. An \textit{additional inner parallelization} is a map $\tau_g$ such that $$ \tau_g : \left\lbrace \begin{aligned} \ [a,b]\times \gamma \times [-1,1] \times \b R^3 &\longrightarrow TN(\gamma) \\ ((t,c,u),v) & \longmapsto \tau_d \left( \l T_\gamma^{-1} ((t,c,u),\l F(t,u)(v))\right) \end{aligned} \right. $$ where, choosing $\varepsilon \in \ ]0,\sfrac{1}{2}[$, $\l F$ is a map such that $$ \l F : \left\lbrace \begin{aligned} \ [a,b]\times[-1,1] & \longrightarrow SO(3) \\ (t,u) & \longmapsto \left\lbrace
\begin{aligned}
& \mbox{Id}_{SO(3)} & \mbox{for $|u|>1-\varepsilon$} \\
& R_{e_1,\pi + \theta(u)} & \mbox{for $t < a + \varepsilon$} \\
& R_{e_1,-\pi - \theta(u)} & \mbox{for $t > b - \varepsilon$}
\end{aligned} \right. \end{aligned} \right. $$ which exists since $\pi_1(SO(3))\hspace{-1mm}=\sfrac{\b Z}{2 \b Z}$ and which is well-defined up to homotopy since $\pi_2(SO(3))\hspace{-1mm}=~\hspace{-2mm}0$. \end{definition}
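Let us sketch why such a map $\l F$ exists (this is only an expansion of the homotopy-theoretic argument above) : the prescribed values on the boundary of the square $[a,b]\times[-1,1]$ define a loop in $SO(3)$ which is, up to homotopy, the composition of the loop $u \mapsto R_{e_1,\pi+\theta(u)}$ with the inverse of the loop $u \mapsto R_{e_1,-\pi-\theta(u)}$, the two other edges being constant. Each of these loops is a full rotation loop, hence represents the nontrivial element of $\pi_1(SO(3))=\sfrac{\b Z}{2 \b Z}$, so their composition is null-homotopic and the boundary values extend to the whole square.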
From now on, we will always consider pseudo-parallelizations together with an additional inner parallelization. Finally, note that if $\bar \tau=(N(\gamma); \tau_e, \tau_d, \tau_g)$ is a pseudo-parallelization of a compact oriented 3-manifold together with an additional inner parallelization, then : \begin{enumerate}[\textbullet] \setlength{\itemsep}{0pt} \setlength{\parskip}{5pt} \item the parallelizations $\tau_e$, $\tau_d$ and $\tau_g$ agree on $\partial N(\gamma) \setminus \lbrace b \rbrace \times \gamma \times [-1,1]$, \item $\tau_g = \tau_e \circ \l T_\gamma^{-1}$ on $\lbrace b \rbrace \times \gamma \times [-1,1]$. \end{enumerate}
\begin{definition}
A \textit{pseudo-section} of a pseudo-parallelization of a compact oriented 3-manifold $M$ together with an additional inner parallelization, $\bar \tau=(N(\gamma); \tau_e, \tau_d, \tau_g)$, is a 3-cycle of \linebreak $(UM,UM_{|\partial M})$ of the following form : $$ \begin{aligned} \bar \tau (M\times \lbrace v \rbrace ) &= \tau_e((M\setminus \mathring N(\gamma))\times \lbrace v \rbrace ) \\ &+ \frac{ \tau_d(N(\gamma)\times \lbrace v \rbrace) + \tau_g(N(\gamma)\times \lbrace v \rbrace) + \tau_e( \lbrace b \rbrace \times \gamma \times C_2(v)) }{2} \end{aligned} $$ where $v\in \b S^2$ and $C_2(v)$ is the 2-chain of $\left[ -1 , 1 \right] \times \b S^1(v)$ of Figure~\ref{fig_C2v}, where $\b S^1(v)$ stands for the circle of $\b S^2$ that lies on the plane orthogonal to $e_1$ and passes through $v$. Note that : $$ \begin{aligned}
\partial C_2(v) &= \lbrace (u, R_{e_1,\pi+\theta(u)} (v)) \ | u \in \left[ -1 , 1 \right] \rbrace \\
&+ \lbrace (u, R_{e_1,-\pi-\theta(u)} (v)) \ | u \in \left[ -1 , 1 \right] \rbrace - 2 \cdot \left[ -1, 1 \right] \times \lbrace v \rbrace. \end{aligned} $$ \end{definition}
\begin{center} \definecolor{zzttqq}{rgb}{0.6,0.2,0} \begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm] \clip(-2.25,-1) rectangle (13,2.25); \fill[color=zzttqq,fill=zzttqq,fill opacity=0.1] (0,0) -- (4,2) -- (0,2) -- cycle; \fill[color=zzttqq,fill=zzttqq,fill opacity=0.1] (7.5,2) -- (7.5,0) -- (11.5,0) -- cycle; \draw (0,2)-- (4,2); \draw (4,0)-- (4,2); \draw (4,0)-- (0,0); \draw (0,0)-- (0,2); \draw (7.5,0)-- (7.5,2); \draw (7.5,0)-- (11.5,0); \draw (11.5,0)-- (11.5,2); \draw (7.5,2)-- (11.5,2); \draw (0,0)-- (4,2); \draw (0,2)-- (4,0); \draw (7.5,2)-- (11.5,0); \draw (11.5,2)-- (7.5,0); \draw (-2,1.25) node[anchor=north west] {$C_2(v)=$}; \draw (-0.5,-0.25) node[anchor=north west] {$-1$}; \draw (3.75,-0.25) node[anchor=north west] {$1$}; \draw (7,-0.25) node[anchor=north west] {$-1$}; \draw (11.25,-0.25) node[anchor=north west] {$1$}; \draw [->] (0,0) -- (4,0); \draw [->] (7.5,0) -- (11.5,0); \draw [->] (11.5,0) -- (11.5,2); \draw [->] (4,0) -- (4,2); \draw (4,2) node[anchor=north west] {$\b S^1(v)$}; \draw (11.5,2) node[anchor=north west] {$\b S^1(v)$}; \draw [color=zzttqq] (0,0)-- (4,2); \draw [color=zzttqq] (4,2)-- (0,2); \draw [color=zzttqq] (0,2)-- (0,0); \draw [color=zzttqq] (7.5,2)-- (7.5,0); \draw [color=zzttqq] (7.5,0)-- (11.5,0); \draw [color=zzttqq] (11.5,0)-- (7.5,2); \draw (5.58,1.13) node[anchor=north west] {$-$}; \end{tikzpicture} \captionof{figure}{The 2-chain $C_2(v)$ where we cut the annulus $\left[ -1,1 \right] \times \b S^1(v)$ along $\left[ -1, 1 \right] \times \lbrace v \rbrace$.} \label{fig_C2v} \end{center}
\begin{definition} If $\bar\tau = (N(\gamma);\tau_e,\tau_d, \tau_g)$ is a pseudo-parallelization of a compact oriented 3-manifold $M$, let the \textit{Siamese sections} of $\bar\tau$ denote the following sections of $UM$: $$ E_1^{d}: \left\lbrace \begin{aligned}
m \in M\setminus \mathring N(\gamma) & \longmapsto \tau_e(m,e_1) \\
m \in N(\gamma) & \longmapsto \tau_d(m,e_1) \end{aligned} \right. \hspace{3mm}\mbox{ and }\hspace{3mm} E_1^{g} : \left\lbrace \begin{aligned}
m \in M\setminus \mathring N(\gamma) & \longmapsto \tau_e(m,e_1) \\
m \in N(\gamma) & \longmapsto \tau_g(m,e_1). \end{aligned} \right. $$ \end{definition}
As already mentioned in the introduction, note that when $\bar\tau = (N(\gamma);\tau_e,\tau_d, \tau_g)$ is a pseudo-parallelization of a compact oriented 3-manifold $M$, its pseudo-section at $e_1$ reads $$ \bar\tau(M\times \lbrace e_1 \rbrace) = \frac{E_1^d(M)+E_1^g(M)}{2} $$ where $E_1^d$ and $E_1^g$ are the Siamese sections of $\bar\tau$.
\section{From parallelizations to pseudo-parallelizations} \subsection{Pontrjagin numbers of parallelizations} \label{ssec_defpara} In this subsection we review the definition of first relative Pontrjagin numbers for parallelizations of compact connected oriented 3-manifolds. For a detailed presentation of these objects we refer to \cite[Section 5]{lescopEFTI} and \cite[Subsection~4.1]{lescopcombing}. \\
Let $C_1$ and $C_2$ be compact connected oriented 3-manifolds with identified boundaries. Recall that a \textit{cobordism from $C_1$ to $C_2$} is a compact oriented 4-manifold $W$ whose boundary reads $$ \partial W = - C_1 \bigcup_{\partial C_1 \simeq \lbrace 0 \rbrace \times \partial C_1 } -[0,1] \times \partial C_1 \bigcup_{\partial C_2 \simeq \lbrace 1 \rbrace \times \partial C_1} C_2. $$ Moreover, we require $W$ to be identified with $[0,1[\times C_1$ near $C_1$ and with $]0,1]\times C_2$ near $C_2$, on collars of $\partial W$. \\
Recall that any compact oriented 3-manifold bounds a compact oriented 4-manifold, so that a cobordism from $C_1$ to $C_2$ always exists. Also recall that the signature of a 4-manifold is the signature of the intersection form on its second homology group with real coefficients and that any 4-manifold can be turned into a 4-manifold with signature zero by performing connected sums with copies of $\pm \b C P^2$. So let us fix a connected cobordism $W$ from $C_1$ to $C_2$ with signature zero. \\
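For instance, when $C_2 = C_1$, the product $W = [0,1]\times C_1$, with the identifications read off from the product structure, is such a cobordism : its intersection form on $H_2(W;\b R)$ vanishes, since any two classes can be represented by cycles sitting in disjoint slices $\lbrace t \rbrace \times C_1$, so its signature is zero.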
Now consider a parallelization $\tau_1$, resp. $\tau_2$, of $C_1$, resp. $C_2$. Define the vector field $\vec n$ on the collars of $\partial W$ as follows. Let $\vec n$ be the unit tangent vector to $[0,1]\times \lbrace x \rbrace$, where $x \in C_1$ or $C_2$. Define $\tau(\tau_1, \tau_2)$ as the trivialization of $TW \otimes \b C$ over $\partial W$ obtained by stabilizing $\tau_1$ or $\tau_2$ into $\vec n \oplus \tau_1$ or $\vec n \oplus \tau_2$ and tensoring with $\b C$. In general, this trivialization does not extend as a trivialization of $TW \otimes \b C$ over $W$. This leads to a Pontrjagin obstruction class $p_1(W;\tau(\tau_1, \tau_2))$ in $H^4(W, \partial W; \pi_3(SU(4)))$. Since $\pi_3(SU(4)) \simeq \b Z$, there exists $p_1(\tau_1, \tau_2)\in \b Z$ such that $p_1(W;\tau(\tau_1, \tau_2))=p_1(\tau_1, \tau_2)[W,\partial W]$. Let us call $p_1(\tau_1, \tau_2)$ the \textit{first relative Pontrjagin number of $\tau_1$ and $\tau_2$}. \\
Similarly, define the \textit{Pontrjagin number $p_1(\tau)$ of a parallelization $\tau$} of a closed connected oriented 3-manifold $M$, by taking a connected oriented 4-manifold $W$ with signature zero and boundary $M$, a collar of $\partial W$ identified with $]0,1] \times M$, and $\vec n$ as the outward normal vector field over $\partial W$. \\
We have not actually defined the sign of the Pontrjagin numbers. We will not give details here on how to define it; instead, we refer to \cite[\S 15]{MS} or \cite[p.44]{lescopEFTI}. Let us only mention that $p_1$ is the opposite of the second Chern class $c_2$ of the complexified tangent bundle.
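Recall that, with the conventions of \cite[\S 15]{MS}, the Pontrjagin classes of a real vector bundle $E$ are defined by $$ p_i(E) = (-1)^i \, c_{2i}(E \otimes_{\b R} \b C), $$ so that, in particular, $p_1(E) = -c_2(E \otimes_{\b R} \b C)$, as stated above.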
\subsection{Pontrjagin numbers for pseudo-parallelizations} \begin{definition} \label{def_complextriv} Let $\bar \tau=(N(\gamma); \tau_e, \tau_d, \tau_g)$ be a pseudo-parallelization of a compact oriented 3-manifold $M$, a \textit{complex trivialization} $\bar \tau _{\b C}$ associated to $\bar\tau$ is a trivialization of $TM \otimes \b C$ such that~: \begin{enumerate}[\textbullet]
\setlength{\itemsep}{0pt}
\setlength{\parskip}{5pt}
\item $\bar \tau_\b C$ is \textit{special} (\textit{ie}\ its determinant is one everywhere) with respect to the trivialization of the determinant bundle induced by the orientation of $M$,
\item on $M\setminus \mathring{N(\gamma)}$, $\bar \tau_\b C = \tau_e \otimes 1_\b C$,
\item for $m=(t,c,u)\in [a,b] \times \gamma \times [-1,1]$, $\bar \tau_\b C (m,.)= (m,\tau_d (t,c,u)(\l G(t,u)(.)))$, \end{enumerate} where $\l G$ is a map so that : $$ \l G : \left\lbrace \begin{aligned} \ [a,b] \times [-1,1] &\longrightarrow \ SU(3) \\ (t,u) &\longmapsto \left\lbrace
\begin{aligned}
& \mbox{Id}_{SU(3)} &\mbox{for $|u| > 1 - \varepsilon$} \\
& \mbox{Id}_{SU(3)} &\mbox{for $t < a+\varepsilon$} \\
& R_{e_1,-\pi-\theta(u)} &\mbox{for $t>b -\varepsilon$}.
\end{aligned}
\right. \end{aligned} \right. $$ Note that such a smooth map $\l G$ on $[a,b]\times[-1,1]$ exists since $\pi_1(SU(3))=\lbrace 1 \rbrace$. Moreover, $\l G$ is well-defined up to homotopy since $\pi_2(SU(3))=\lbrace 0 \rbrace$. \end{definition}
Pseudo-parallelizations, or pseudo-trivializations, were first used in \cite[Section 4.3]{lescopKT} and were first formally defined in \cite[Section 10]{lescopcube}. Note that our conventions are slightly different.
\begin{definition} Let $\bar\tau_1$ and $\bar\tau_2$ be pseudo-parallelizations of two compact oriented 3-manifolds $M_1$ and $M_2$ with identified boundaries and let $W$ be a cobordism from $M_1$ to $M_2$ with signature zero. As in the case of genuine parallelizations, define a trivialization $\tau(\bar\tau_1, \bar\tau_2)$ of $TW \otimes \b C$ over $\partial W$ using the special complex trivializations $\bar\tau_{1,\b C}$ and $\bar\tau_{2,\b C}$ associated to $\bar\tau_1$ and $\bar\tau_2$, respectively. The \textit{first relative Pontrjagin number of $\bar\tau_1$ and $\bar\tau_2$} is the Pontrjagin obstruction $p_1(\bar\tau_1,\bar\tau_2)$ to extending the trivialization $\tau(\bar\tau_1, \bar\tau_2)$ as a trivialization of $TW\otimes \b C$ over $W$. \end{definition}
Finally, if $\bar\tau$ is a pseudo-parallelization of a closed oriented 3-manifold $M$, then define the \textit{Pontrjagin number $p_1(\bar\tau)$ of the pseudo-parallelization $\bar\tau$} as $p_1(\tau_\emptyset, \bar \tau)$ as before.
\subsection[Variation of $p_1$ as an intersection of three 4-chains]{Variation of $p_1$ as an intersection of three 4-chains} In this subsection, we give a proof of Theorem~\ref{prop_varasint}, which expresses the relative Pon\-trjagin numbers (resp. the variation of Pontrjagin numbers) of pseudo-parallelizations in compact (resp. closed) oriented 3-manifolds as an algebraic intersection of three 4-chains.
\begin{lemma} \label{simppara}
If $\bar \tau=(N(\gamma);\tau_e,\tau_d,\tau_g)$ is a pseudo-parallelization of a compact oriented 3-manifold $M$, if $E_1^d$ and $E_1^g$ are its Siamese sections and if $E_2^e$ denotes the second vector of $\tau_e$, then $P(e_2^M({E_1^d}^\perp,{E_2^e}_{|\partial M})) = -P(e_2^M({E_1^g}^\perp,{E_2^e}_{|\partial M})) = [\gamma]$ in $H_1(M;\b Z)$. \end{lemma} \begin{proof}
Since $E_1^d$ and the first vector $E_1^e=\tau_e(\cdot,e_1)$ of $\tau_e$ coincide on $M\setminus \mathring N(\gamma)$, the obstruction to extending $E_2^e$ as a section of ${E_1^d}^\perp$ is the obstruction to extending $E_2^e$ as a section of ${E_1^d}^\perp_{|N(\gamma)}$. However, parallelizing $N(\gamma)$ with $\tau_d$ and using that $$ \tau_d = \left\lbrace \begin{aligned}
& \tau_e &\mbox{ on } \partial (\left[ a , b \right] \times \gamma \times \left[ -1 , 1 \right]) \setminus \lbrace b \rbrace \times \gamma \times \left[ -1 , 1 \right] \\
& \tau_e \circ \l T_\gamma &\mbox{ on } \lbrace b \rbrace \times \gamma \times \left[ -1 , 1 \right] \end{aligned} \right. $$
we get that $E_2^e$ induces a degree +1 map ${E_2^e}_{|\alpha} : \alpha \rightarrow \b S^1$ on any meridian $\alpha$ of $N(\gamma)$. It follows that $$
P(e_2^M({E_1^d}^\perp,{E_2^e}_{|\partial M})) = + [\gamma]. $$
Similarly, parallelizing $N(\gamma)$ with $\tau_g$, $E_2^e$ induces a degree -1 map ${E_2^e}_{|\alpha} : \alpha \rightarrow \b S^1$ on any meridian $\alpha$ of $N(\gamma)$, so that $$
P(e_2^M({E_1^g}^\perp,{E_2^e}_{|\partial M})) = - [\gamma]. $$ \end{proof}
Recall that for a combing $X$ of a compact oriented 3-manifold $M$ and a pseudo-parallelization $\bar\tau$ of $M$, if $X$ and $\bar\tau$ are compatible, then $$ L_{\bar\tau=X}=\frac{L_{E_1^d=X}+L_{E_1^g=X}}{2} \mbox{ \ \ and \ \ } L_{\bar\tau=-X}=\frac{L_{E_1^d=-X}+L_{E_1^g=-X}}{2} $$ where $E_1^d$ and $E_1^g$ denote the Siamese sections of $\bar\tau$.
\begin{lemma} \label{nulexcep} Let $\bar \tau=(N(\gamma);\tau_e,\tau_d,\tau_g)$ be a pseudo-parallelization of a compact oriented 3-manifold $M$. If $X$ is a torsion combing of $M$ compatible with $\bar \tau$, then $L_{\bar\tau=X}$ and $L_{\bar\tau=-X}$ are rationally null-homologous in $M$. \end{lemma} \begin{proof} Let $E_1^d$ and $E_1^g$ be the Siamese sections of $\bar\tau$. Using Proposition~\ref{prop_linksinhomologyI} and the fact that $X$ is a torsion combing, we get, in $H_1(M;\b Q)$, $$ \begin{aligned}
\ 2 \cdot [L_{X=-E_1^d}+L_{X=-E_1^g} ] &= [-P(e_2^M({E_1^d}^\perp,{E_2^e}_{|\partial M})) -P(e_2^M({E_1^g}^\perp,{E_2^e}_{|\partial M})) ] \\
\ 2 \cdot [L_{X=E_1^d}+L_{X=E_1^g} ] &= [P(e_2^M({E_1^d}^\perp,{E_2^e}_{|\partial M})) + P(e_2^M({E_1^g}^\perp,{E_2^e}_{|\partial M})) ] \end{aligned} $$ where $E_2^e$ is the second vector of $\tau_e$. Conclude with Lemma~\ref{simppara}. \end{proof}
\begin{definition} \label{def_Ft} Let $X$ and $Y$ be $\partial$-compatible combings of a compact oriented 3-manifold $M$. For all $t \in [0,1]$, let $\bar F_t(X,Y)$ denote the 4-chain of $[0,1]\times UM$~: $$ \bar F_t(X,Y) = [0,t] \times X(M) + \lbrace t \rbrace \times \bar F(X,Y) + [t,1] \times Y(M), $$ where $\bar F(X,Y)$ is a 4-chain of $UM$ as in Lemma~\ref{lem_phomotopy}. Note that : $$ \partial \bar F_t(X,Y) = \lbrace 1 \rbrace \times Y(M) - \lbrace 0 \rbrace \times X(M) - [0,1] \times X(\partial M) + \lbrace t \rbrace \times UM_{|L_{X=-Y}}. $$ \end{definition}
\begin{lemma} \label{lem_transfert} Let $\bar \tau=(N(\gamma);\tau_e,\tau_d,\tau_g)$ be a pseudo-parallelization of a compact oriented 3-manifold $M$. If $X$ is a torsion combing of $M$ compatible with $\bar \tau$, then there exist 4-chains of $[0,1]\times UM$, $C_4^\pm(\bar\tau,X)$ and $C_4^\pm(X,\bar\tau)$, with boundaries : $$ \begin{aligned}
\partial C_4^\pm(\bar\tau,X) &= \lbrace 1 \rbrace \times (\pm X)(M) - \lbrace 0 \rbrace \times \bar\tau(M\times \lbrace \pm e_1 \rbrace ) - [0,1] \times (\pm X)(\partial M) \\
\partial C_4^\pm(X,\bar\tau) &= \lbrace 1 \rbrace \times \bar\tau(M\times \lbrace \pm e_1 \rbrace ) - \lbrace 0 \rbrace \times (\pm X)(M) - [0,1] \times (\pm X)(\partial M) \\ \end{aligned} $$ \end{lemma} \begin{proof} Let $E_1^d$ and $E_1^g$ be the Siamese sections of $\bar\tau$ and just set $$ \begin{aligned}
C_4^\pm(\bar\tau,X) &= \sfrac{1}{2} \cdot \left( \bar F_t( \pm E_1^d, \pm X) + \bar F_t( \pm E_1^g, \pm X) - \lbrace t \rbrace \times UM_{|\Sigma(\pm e_1)} \right) \\
C_4^\pm(X,\bar\tau) &= \sfrac{1}{2} \cdot \left( \bar F_t( \pm X, \pm E_1^d) + \bar F_t( \pm X, \pm E_1^g) - \lbrace t \rbrace \times UM_{|-\Sigma(\pm e_1)} \right) \end{aligned} $$ where the 4-chains $\bar F_t$ are as in Definition~\ref{def_Ft} and where $\Sigma(\pm e_1)$ are rational 2-chains of $M$ bounded by $\pm(L_{E_1^d=-X}+L_{E_1^g=-X})$, which are rationally null-homologous according to Lemma~\ref{nulexcep}. \end{proof}
\begin{remark} \label{rmkpcase} Recall that a genuine parallelization $\tau$ of a compact oriented 3-manifold is a pseudo-parallelization whose link is empty. In such a case, $E_1^d$ and $E_1^g$ are the first vector $E_1^\tau$ of the parallelization $\tau$ and the chains $C_4^{\pm}$ can be simply defined as $$ \begin{aligned}
C_4^\pm(\tau,X) &= \bar F_t( \pm E_1^\tau, \pm X) - \lbrace t \rbrace \times UM_{|\Sigma(\pm e_1)} \\
C_4^\pm(X,\tau) &= \bar F_t( \pm X, \pm E_1^\tau) - \lbrace t \rbrace \times UM_{|-\Sigma(\pm e_1)} \end{aligned} $$ where the 4-chains $\bar F_t$ are as in Definition~\ref{def_Ft} and where $\Sigma(\pm e_1)$ are rational 2-chains of $M$ bounded by $\pm L_{E_1^\tau=-X}$. \end{remark}
\begin{lemma} \label{lem_C4pme1} Let $\tau$ and $\bar \tau$ be two pseudo-parallelizations of a compact oriented 3-manifold $M$ that coincide on $\partial M$ and whose links are disjoint. For all $v\in \b S^2$, there exists a 4-chain $C_4(M,\tau,\bar\tau ;v)$ of $[0,1] \times UM$ such that $$
\partial C_4(M,\tau,\bar\tau ;v) = \lbrace 1 \rbrace \times \bar \tau (M \times \lbrace v \rbrace)- \lbrace 0 \rbrace \times \tau (M \times \lbrace v \rbrace) - [0,1] \times \tau (\partial M\times \lbrace v \rbrace). $$ \end{lemma} \begin{proof} Let us write $C_4(\tau,\bar\tau ;v)$ instead of $C_4(M,\tau,\bar\tau ;v)$ when there is no ambiguity. Since the 3-chains $\partial C_4(\tau,\bar\tau ;v)$, where $v \in \b S^2$, are homologous, it is enough to prove the existence of $C_4(\tau,\bar\tau ;e_1)$. First, let $X$ be a combing of $M$ such that $X$ is compatible with $\tau$ and $\bar \tau$. In general, this combing is not a torsion combing. Second, let $E_1^d$ and $E_1^g$ (resp. $\bar E_1^d$ and $\bar E_1^g$) denote the Siamese section of $\tau$, (resp. $\bar\tau$) and set $$ \begin{aligned} \bar F(\tau,X) &= \sfrac{1}{2} \cdot \left( \bar F( E_1^d,X) + \bar F( E_1^g,X) \right) \\ \bar F(X,\bar\tau) &= \sfrac{1}{2} \cdot \left( \bar F(X, \bar E_1^d) + \bar F(X, \bar E_1^g)\right). \end{aligned} $$ These chains have boundaries : $$ \begin{aligned}
\partial \bar F(\tau,X) &= X(M) - \tau(M\times \lbrace e_1 \rbrace) + UM_{|L_{\tau=-X}} \\
\partial \bar F(X,\bar\tau) &= \bar\tau(M\times \lbrace e_1 \rbrace) - X(M) + UM_{|-L_{\bar\tau=-X}}. \end{aligned} $$ Hence, for all $t \in [0,1]$, the 4-chain of $[0,1]\times UM$ $$ \bar F_t(\tau, \bar\tau; e_1) = [0,t] \times \tau(M\times \lbrace e_1 \rbrace) + \lbrace t \rbrace \times \left(\bar F(\tau,X) + \bar F(X,\bar\tau)\right) + [t,1] \times \bar\tau(M\times \lbrace e_1 \rbrace) $$ has boundary : $$ \begin{aligned} \partial \bar F_t(\tau, \bar\tau; e_1) &= \lbrace 1 \rbrace \times \bar\tau(M\times \lbrace e_1 \rbrace) - \lbrace 0 \rbrace \times \tau(M\times \lbrace e_1 \rbrace) \\
&- [0,1] \times \tau(\partial M\times \lbrace e_1 \rbrace) + \lbrace t \rbrace \times UM_{|L_{\tau=-X}\cup - L_{\bar\tau=-X}}. \end{aligned} $$ Thanks to Proposition~\ref{prop_linksinhomologyI} and Lemma~\ref{simppara}, in $H_1(M;\b Q)$ : $$ \begin{aligned}
4 \cdot [ L_{\tau=-X} ] &= 2\cdot [L_{E_1^d=-X} + L_{E_1^g=-X}] \\
&= [-2 \cdot P(e_2^M(X^\perp,{E_2^e}_{|\partial M}))\hspace{-1mm}+\hspace{-1mm}P(e_2^M({E_1^d}^\perp,{E_2^e}_{|\partial M}))\hspace{-1mm}+\hspace{-1mm}P(e_2^M({E_1^g}^\perp,{E_2^e}_{|\partial M}))] \\
&= -2 \cdot[P(e_2^M(X^\perp,{E_2^e}_{|\partial M}))] \end{aligned} $$
where $E_2^e$ is the second vector of $\tau_e$. Similarly, $2\cdot [L_{\bar\tau=-X}]=-[P(e_2^M(X^\perp,{E_2^e}_{|\partial M}))]$ in $H_1(M;\b Q)$. So, the link $L_{\tau=-X}\cup - L_{\bar\tau=-X}$ is rationally null-homologous in $M$, \textit{ie}\ there exists a rational 2-chain $\Sigma(\tau,\bar\tau)$ such that $\partial \Sigma(\tau,\bar\tau) = L_{\tau=-X}\cup - L_{\bar\tau=-X}$. Hence, we get a 4-chain $C_4(\tau,\bar\tau;e_1)$ as desired by setting $$
C_4(\tau,\bar\tau;e_1) = \bar F_t(\tau, \bar\tau; e_1) - \lbrace t \rbrace \times UM_{|\Sigma(\tau,\bar\tau)}. $$ \end{proof}
\begin{lemma} \label{lem_welldefined} Let $\tau$ and $\bar \tau$ be two pseudo-parallelizations of a compact oriented 3-manifold $M$ that coincide on $\partial M$. If $x$, $y$ and $z$ are points in $\b S^2$ with pairwise different distances to $e_1$, then there exist pairwise transverse 4-chains $C_4(\tau,\bar\tau ;x)$, $C_4(\tau,\bar\tau ;y)$ and $C_4(\tau,\bar\tau ;z)$ as in Lemma~\ref{lem_C4pme1} and the algebraic intersection $\langle C_4(\tau,\bar\tau ;x), C_4(\tau,\bar\tau ;y), C_4(\tau,\bar\tau ;z) \rangle_{[0,1]\times UM}$ only depends on $\tau$ and~$\bar\tau$. \end{lemma} \begin{proof} Pick any $x$, $y$ and $z$ in $\b S^2$ with pairwise different distances to $e_1$ and consider some 4-chains $C_4(\tau,\bar\tau ;x)$, $C_4(\tau,\bar\tau ;y)$ and $C_4(\tau,\bar\tau ;z)$ such that, for $v \in \lbrace x , y , z \rbrace$, $$ \partial C_4(\tau,\bar\tau ;v) = \lbrace 1 \rbrace \times \bar\tau(M\times\lbrace v \rbrace) - \lbrace 0 \rbrace \times \tau (M\times \lbrace v \rbrace ) - [0,1] \times \tau (\partial M \times \lbrace v \rbrace ). $$ The intersection of $C_4(\tau,\bar\tau;x)$, $C_4(\tau,\bar\tau;y)$ and $C_4(\tau,\bar\tau;z)$ is in the interior of $[0,1]\times UM$. The algebraic triple intersection of these three 4-chains only depends on the fixed boundaries and on the homology classes of the 4-chains. The space $H_4([0,1]\times UM ; \b Q)$ is generated by the classes of 4-chains $\Sigma \times \b S^2$ where $\Sigma$ is a surface in $M$. If $\Sigma \times \b S^2$ is such a 4-chain, then $$ \begin{aligned}
& \langle C_4(\tau,\bar\tau ;x) + \Sigma\times \b S^2, C_4(\tau,\bar\tau ;y), C_4(\tau,\bar\tau ;z) \rangle - \langle C_4(\tau,\bar\tau ;x) , C_4(\tau,\bar\tau ;y), C_4(\tau,\bar\tau ;z) \rangle \\
&= \langle \Sigma\times \b S^2, C_4(\tau,\bar\tau ;y), C_4(\tau,\bar\tau ;z) \rangle \\
&= \left\lbrace
\begin{aligned}
& \langle \Sigma\times \b S^2, [0,1] \times \tau(M \times \lbrace y \rbrace ) , [0,1] \times \tau(M \times \lbrace z \rbrace) \rangle \mbox{ , pushing $\Sigma \times \b S^2$ near $0$,} \\
& \langle \Sigma\times \b S^2, [0,1] \times \bar\tau(M \times \lbrace y \rbrace ) , [0,1] \times \bar \tau(M \times \lbrace z \rbrace) \rangle \mbox{ , pushing $\Sigma \times \b S^2$ near $1$.} \\
\end{aligned}
\right. \end{aligned} $$
Hence, $\langle \Sigma\times \b S^2, C_4(\tau,\bar\tau ;y), C_4(\tau,\bar\tau ;z) \rangle $ is independent of $\tau$ and $\bar\tau$. So, use Lemma~\ref{lem_extendpparallelization} to extend a trivialization of $TM_{|\Sigma}$ as a pseudo-parallelization $\tau'$ that coincides with $\tau$ and $\bar\tau$ on $\partial M$. Considering this pseudo-parallelization we get $$ \langle \Sigma\times \b S^2, C_4(\tau,\bar\tau ;y), C_4(\tau,\bar\tau ;z) \rangle \hspace{-1mm}=\hspace{-1mm} \langle \Sigma\times \b S^2, [0,1] \times \tau'(M \times \lbrace y \rbrace ) , [0,1] \times \tau'(M \times \lbrace z \rbrace) \rangle \hspace{-1mm}=\hspace{-1mm} 0, $$ so that the algebraic triple intersection of the three chains $C_4(\tau,\bar\tau;x)$, $C_4(\tau,\bar\tau;y)$ and $C_4(\tau,\bar\tau;z)$ only depends on their fixed boundaries. \end{proof}
\begin{proof}[Proof of Theorem~\ref{prop_varasint}] Let $\tau$ and $\bar \tau$ be two pseudo-parallelizations of a compact oriented 3-manifold $M$ that coincide on $\partial M$ and whose links are disjoint. To conclude the proof of Theorem~\ref{prop_varasint}, we have to prove that for any $x$, $y$ and $z$ in $\b S^2$ with pairwise different distances to $e_1$ : $$ p_1(\tau, \bar\tau)= 4 \cdot \langle C_4(M,\tau,\bar\tau ;x),C_4(M,\tau,\bar\tau ;y),C_4(M,\tau,\bar\tau ;z) \rangle_{[0,1]\times UM}. $$
First, we know from \cite[Lemma 10.9]{lescopcube} that this is true if $M$ is a $\b Q$HH of genus 1. Notice that it is also true if $M$ embeds in such a manifold. Indeed, if $\b H$ is a $\b Q$HH of genus 1 and if $M$ embeds in $\b H$ then, using Lemma~\ref{lem_extendpparallelization} and using that $\tau$ and $\bar \tau$ coincide on $\partial M$, there exists a pseudo-parallelization $\check \tau$ of $\b H \setminus \mathring M$ such that $$ \bar \tau_{\b H} : \left\lbrace \begin{aligned} m \in M &\mapsto \bar\tau(m,.) \\ m \in \b H\setminus \mathring M & \mapsto \check \tau(m,.) \end{aligned} \right. \mbox{ \ and \ } \tau_{\b H} : \left\lbrace \begin{aligned} m \in M &\mapsto \tau(m,.) \\ m \in \b H\setminus \mathring M & \mapsto \check \tau(m,.) \end{aligned} \right. $$ are pseudo-parallelizations of $\b H$. Furthermore, for any $v \in \b S^2$, let $C_4(\b H,\tau_{\b H},\bar\tau_{\b H} ;v)$ be the 4-chain of $[0,1]\times U\b H$ defined by $$ C_4(\b H,\tau_{\b H},\bar\tau_{\b H} ;v) = C_4(M,\tau,\bar\tau ;v)\cup [0,1] \times \check \tau( (\b H \setminus \mathring M) \times \lbrace v \rbrace ) $$ where $C_4(M,\tau,\bar\tau ;v)$ is as in Lemma~\ref{lem_C4pme1}. The boundary of $C_4(\b H,\tau_{\b H},\bar\tau_{\b H} ;v)$ is $$ \partial C_4(\b H,\tau_{\b H},\bar\tau_{\b H} ;v) = \lbrace 1 \rbrace \times \bar\tau_{\b H}(\b H\times \lbrace v \rbrace ) - \lbrace 0 \rbrace \times \tau_{\b H}(\b H\times \lbrace v \rbrace ) - [0,1] \times \tau_{\b H}(\partial \b H\times \lbrace v \rbrace ). $$ Using the definition of Pontrjagin numbers of pseudo-parallelizations and the hypothesis on $\b H$, it follows that if $x,y$ and $z$ are points in $\b S^2$ with pairwise different distances to $e_1$, then $$ \begin{aligned} p_1(\tau,\bar\tau) = p_1(\tau_{\b H},\bar\tau_{\b H}) = 4 \cdot \langle C_4(\b H,\tau_{\b H},\bar\tau_{\b H} ;x) , C_4(\b H,\tau_{\b H},\bar\tau_{\b H} ;y) , C_4(\b H,\tau_{\b H},\bar\tau_{\b H} ;z) \rangle. \end{aligned} $$ Now note that $$ \langle [0,1] \times \check \tau ((\b H\setminus \mathring M) \times \lbrace x \rbrace) , [0,1] \times \check \tau ((\b H\setminus \mathring M) \times \lbrace y \rbrace ), [0,1] \times \check \tau ((\b H\setminus \mathring M) \times \lbrace z \rbrace) \rangle = 0. $$ Indeed, if $\check \tau = (N(\check \gamma); \check \tau_e,\check \tau_d,\check \tau_g)$, for all $v\in \b S^2$ the pseudo-section of $\check\tau$ reads $$ \begin{aligned} \check \tau ((\b H\setminus \mathring M)\times \lbrace v \rbrace ) &= \check\tau_e(((\b H\setminus \mathring M)\setminus \mathring N(\check\gamma))\times \lbrace v \rbrace ) \\ &+ \frac{\check \tau_d(N(\check\gamma)\times \lbrace v \rbrace) + \check\tau_g(N(\check\gamma)\times \lbrace v \rbrace) + \check\tau_e( \lbrace b \rbrace \times \check\gamma \times C_2(v)) }{2}. \end{aligned} $$ The 3-chains $\check\tau_e(((\b H\setminus \mathring M)\setminus \mathring N(\check\gamma))\times \lbrace v \rbrace )$, for $v\in \lbrace x,y,z \rbrace$, are pairwise disjoint since $\check\tau_e$ is a genuine parallelization and since $x,y$ and $z$ are pairwise distinct points in $\b S^2$. Moreover, the 3-chains $\check\tau_e( \lbrace b \rbrace \times \check\gamma \times C_2(v))$, for $v\in \lbrace x,y,z\rbrace$, are also pairwise disjoint since they are subsets of the $\check\tau_e(\lbrace b \rbrace \times \check\gamma \times \b S^1(v))$, $v \in \lbrace x,y,z \rbrace$, which are pairwise disjoint since $x,y$ and $z$ have pairwise different distances to $e_1$. 
Finally, we have $$ \begin{aligned} \langle \check\tau_d(N(\check \gamma) \times \lbrace x\rbrace)+\check\tau_g(N(\check \gamma) \times \lbrace x\rbrace) , \check\tau_d(N(\check \gamma) \times \lbrace y\rbrace)+\check\tau_g(N(\check \gamma) \times \lbrace y\rbrace) , \\ \check\tau_d(N(\check \gamma) \times \lbrace z\rbrace)+\check\tau_g(N(\check \gamma) \times \lbrace z\rbrace) \rangle &= 0 \end{aligned} $$ since a triple intersection between the 3-chains $$ \lbrace \check\tau_d(N(\check \gamma) \times \lbrace v\rbrace)+\check\tau_g(N(\check \gamma) \times \lbrace v\rbrace)\rbrace_{v\in \lbrace x,y,z\rbrace} $$ would be contained in an intersection between two of the $\lbrace \check\tau_d(N(\check \gamma)\times \lbrace v\rbrace)\rbrace_{v\in \lbrace x,y,z \rbrace}$ or between two of the $\lbrace \check\tau_g(N(\check \gamma)\times \lbrace v\rbrace)\rbrace_{v\in \lbrace x,y,z \rbrace}$, which must be empty since $\check\tau_d$ and $\check\tau_g$ are genuine parallelizations. It follows that $$ \begin{aligned} p_1(\tau,\bar\tau) &= 4 \cdot\langle C_4(\b H,\tau_{\b H},\bar\tau_{\b H} ;x) , C_4(\b H,\tau_{\b H},\bar\tau_{\b H} ;y) , C_4(\b H,\tau_{\b H},\bar\tau_{\b H} ;z) \rangle \\ &= 4 \cdot\langle C_4(M,\tau,\bar\tau ;x) , C_4(M,\tau,\bar\tau ;y) , C_4(M,\tau,\bar\tau ;z) \rangle. \end{aligned} $$ Using the same construction, note also that it is enough to prove the statement when $M$ is a closed oriented 3-manifold since any compact oriented 3-manifold embeds into a closed one. \\
Let us finally prove Theorem~\ref{prop_varasint} when $M$ is a closed oriented 3-manifold. Consider a Heegaard splitting $M = H_1 \cup_\Sigma H_2$ such that there is a collar $\Sigma\times[0,1] \subset H_2$ of $\Sigma$ satisfying $$ N(\bar \gamma) \cap \left(\Sigma\times[0,1] \right) = \emptyset \mbox{ \ \ and \ \ } N( \gamma) \cap \left(\Sigma\times[0,1] \right) = \emptyset $$ where $\gamma$ and $\bar\gamma$ are the links of $\tau$ and $\bar\tau$, respectively, and such that $\Sigma$ is identified with $\Sigma\times \lbrace 0 \rbrace$. Such a splitting can be obtained by considering a triangulation of $M$ containing $\gamma$ and $\bar\gamma$ in its 1-skeleton, and then defining $H_1$ as a tubular neighborhood of this 1-skeleton.
Using Lemma~\ref{lem_extendpparallelization}, we can construct a pseudo-parallelization $\tau^c$ of $\Sigma\times [0,1]$ such that $\tau^c$ coincides with $\bar\tau$ on $\Sigma \times \lbrace 1 \rbrace$ and with $\tau$ on $\Sigma \times \lbrace 0 \rbrace$. Then, write $H'_1 = H_1 \cup (\Sigma\times[0,1])$ and $H'_2=H_2\setminus( \Sigma\times[0,1[)$ -- see Figure~\ref{figHH} -- and set $$ \check \tau : \left\lbrace \begin{aligned}
(m,v) \in UH_1 &\longmapsto \tau(m,v) \\
(m,v) \in U(\Sigma\times[0,1]) &\longmapsto \tau^c(m,v) \\
(m,v) \in UH'_2 &\longmapsto \bar\tau(m,v). \end{aligned} \right. $$ \begin{center} \definecolor{zzttqq}{rgb}{0.6,0.2,0} \begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=0.75cm] \clip(-3.5,-3.5) rectangle (5.5,3.5); \fill[line width=0pt,color=zzttqq,fill=zzttqq,fill opacity=0.15] (-3,2) -- (0,2) -- (0,-2) -- (-3,-2) -- cycle; \fill[line width=0pt,color=zzttqq,fill=zzttqq,fill opacity=0.05] (0,2) -- (2,2) -- (2,-2) -- (0,-2) -- cycle; \fill[line width=0pt,color=zzttqq,fill=zzttqq,fill opacity=0.15] (2,2) -- (5,2) -- (5,-2) -- (2,-2) -- cycle; \draw (0,2)-- (0,-2); \draw (2,2)-- (2,-2); \draw (0,2.4)-- (0,2.6); \draw (0,2.6)-- (5,2.6); \draw (5,2.4)-- (5,2.6); \draw (2,-2.4)-- (2,-2.6); \draw (2,-2.6)-- (-3,-2.6); \draw (-3,-2.6)-- (-3,-2.4); \begin{scriptsize} \draw (-1.75,0.25) node[anchor=north west] {$H_1$}; \draw (-1.75,-0.5) node[anchor=north west] {$\tau$}; \draw (3.25,0.25) node[anchor=north west] {$H'_2$}; \draw (3.25,-0.5) node[anchor=north west] {$\bar\tau$}; \draw (0.75,-0.5) node[anchor=north west] {$\tau^c$}; \draw (-0.75,-2.75) node[anchor=north west] {$H'_1$}; \draw (2.44,3.11) node[anchor=north west] {$H_2$}; \draw (-0.5,-2) node[anchor=north west] {$\Sigma\times\lbrace 0\rbrace$}; \draw (1.5,2.5) node[anchor=north west] {$\Sigma\times \lbrace 1 \rbrace$}; \end{scriptsize} \end{tikzpicture} \captionof{figure} {}\label{figHH} \end{center} For $v \in \b S^2$, consider some 4-chains $C_4(H_1,\tau,\check\tau ;v)$, $C_4(H_2,\tau,\check\tau ;v)$, $C_4(H'_1,\check\tau,\bar\tau ;v)$ and $C_4(H'_2,\check\tau,\bar\tau ;v)$ of $[0,1]\times UH_1$, $[0,1]\times UH_2$, $[0,1]\times UH'_1$ and $[0,1]\times UH'_2$, respectively, such that : $$ \begin{aligned}
\partial C_4(H_1,\tau,\check\tau ;v) &\hspace{-1mm}=\hspace{-1mm} \lbrace 1 \rbrace \times \check \tau (H_1 \times \lbrace v \rbrace) - \lbrace 0 \rbrace \times \tau (H_1\times \lbrace v \rbrace ) - [0,1] \times \tau (\partial H_1 \times \lbrace v \rbrace) \\
\partial C_4(H_2,\tau,\check\tau ;v) &\hspace{-1mm}=\hspace{-1mm} \lbrace 1 \rbrace \times \check \tau (H_2 \times \lbrace v \rbrace) - \lbrace 0 \rbrace \times \tau (H_2\times \lbrace v \rbrace ) - [0,1] \times \tau (\partial H_2 \times \lbrace v \rbrace) \\ \end{aligned} $$ and $$ \begin{aligned}
\partial C_4(H'_1,\check\tau,\bar\tau ;v) &\hspace{-1mm}=\hspace{-1mm} \lbrace 1 \rbrace \times \bar \tau (H'_1 \times \lbrace v \rbrace) - \lbrace 0 \rbrace \times \check \tau (H'_1\times \lbrace v \rbrace ) - [0,1] \times \check \tau (\partial H'_1 \times \lbrace v \rbrace) \\
\partial C_4(H'_2,\check\tau,\bar\tau ;v) &\hspace{-1mm}=\hspace{-1mm}\lbrace 1 \rbrace \times \bar \tau (H'_2 \times \lbrace v \rbrace) - \lbrace 0 \rbrace \times \check \tau (H'_2\times \lbrace v \rbrace ) - [0,1] \times \check \tau (\partial H'_2 \times \lbrace v \rbrace). \\ \end{aligned} $$ Since $H_1$ and $H_2$ embed in rational homology balls, for any $x,y$ and $z$ in $\b S^2$ with pairwise different distances to $e_1$ $$ \begin{aligned}
p_1(\tau_{|H_1},\check\tau_{|H_1}) &= 4 \cdot\langle C_4(H_1,\tau,\check\tau ;x) , C_4(H_1,\tau,\check\tau ;y) , C_4(H_1,\tau,\check\tau ;z) \rangle_{[0,1]\times UH_1} \\
p_1(\tau_{|H_2},\check\tau_{|H_2}) &= 4 \cdot\langle C_4(H_2,\tau,\check\tau ;x) , C_4(H_2,\tau,\check\tau ;y) , C_4(H_2,\tau,\check\tau ;z) \rangle_{[0,1]\times UH_2} \\ \end{aligned} $$ so that, using $C_4(M,\tau,\check\tau;v) = C_4(H_1,\tau,\check\tau;v)+C_4(H_2,\tau,\check\tau;v)$ for $v\in \b S^2$, $$ p_1(\tau,\check\tau) = 4 \cdot\langle C_4(M,\tau,\check\tau ;x) , \ C_4(M,\tau,\check\tau ;y) , \ C_4(M,\tau,\check\tau ;z) \rangle_{[0,1]\times UM}. $$ Similarly, since $H'_1$ and $H'_2$ embed in rational homology balls, for any $x,y$ and $z$ in $\b S^2$ with pairwise different distances to $e_1$ $$ \begin{aligned}
p_1(\check\tau_{|H'_1},\bar\tau_{|H'_1}) &= 4 \cdot\langle C_4(H'_1,\check\tau,\bar\tau ;x) , C_4(H'_1,\check\tau,\bar\tau ;y) , C_4(H'_1,\check\tau,\bar\tau ;z) \rangle _{[0,1]\times UH'_1} \\
p_1(\check\tau_{|H'_2},\bar\tau_{|H'_2}) &= 4 \cdot\langle C_4(H'_2,\check\tau,\bar\tau ;x) , C_4(H'_2,\check\tau,\bar\tau ;y) , C_4(H'_2,\check\tau,\bar\tau ;z)\rangle_{[0,1]\times UH'_2} \\ \end{aligned} $$ so that, using $C_4(M,\check\tau,\bar\tau;v) = C_4(H'_1,\check\tau,\bar\tau;v)+C_4(H'_2,\check\tau,\bar\tau;v)$ for $v\in \b S^2$, $$ p_1(\check\tau,\bar\tau) = 4 \cdot\langle C_4(M,\check\tau,\bar\tau;x), C_4(M,\check\tau,\bar\tau;y), C_4(M,\check\tau,\bar\tau;z) \rangle_{[0,1]\times UM}. $$ Eventually, reparameterizing and stacking $C_4(M,\tau,\check\tau;v)$ and $C_4(M,\check\tau,\bar\tau;v) $, for all $v \in \b S^2$ we get a 4-chain $C_4(M,\tau,\bar\tau; v)$ of $[0,1]\times UM$ such that $$ \partial C_4(M,\tau,\bar\tau;v) = \lbrace 1 \rbrace \times \bar \tau (M \times \lbrace v \rbrace) - \lbrace 0 \rbrace \times \tau(M\times \lbrace v \rbrace) - [0,1]\times \tau (\partial M \times \lbrace v \rbrace) $$ and such that for any $x,y$ and $z$ in $\b S^2$ with pairwise different distances to $e_1$ $$ p_1(\tau, \bar \tau)= 4 \cdot\langle C_4(M,\tau,\bar\tau;x), C_4(M,\tau,\bar\tau;y), C_4(M,\tau,\bar\tau;z) \rangle_{[0,1]\times UM}. $$ \end{proof}
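For concreteness, here is one explicit way to perform the reparameterizations and stackings used repeatedly in the proof above. The pseudo-parallelizations $\lambda$, $\check\lambda$, $\mu$ (assumed to coincide on $\partial M$) and the maps $\phi_1$, $\phi_2$ are ad hoc notation for this sketch only. Given 4-chains $C_4(M,\lambda,\check\lambda;v)$ and $C_4(M,\check\lambda,\mu;v)$ of $[0,1]\times UM$ with respective boundaries $\lbrace 1 \rbrace \times \check\lambda(M\times \lbrace v \rbrace) - \lbrace 0 \rbrace \times \lambda(M\times \lbrace v \rbrace) - [0,1]\times\lambda(\partial M \times \lbrace v \rbrace)$ and $\lbrace 1 \rbrace \times \mu(M\times \lbrace v \rbrace) - \lbrace 0 \rbrace \times \check\lambda(M\times \lbrace v \rbrace) - [0,1]\times\check\lambda(\partial M \times \lbrace v \rbrace)$, set $\phi_1(s,u)=(\sfrac{s}{2},u)$ and $\phi_2(s,u)=(\sfrac{1+s}{2},u)$ for $(s,u) \in [0,1]\times UM$, and $$ C_4(M,\lambda,\mu;v) = (\phi_1)_*\big(C_4(M,\lambda,\check\lambda;v)\big) + (\phi_2)_*\big(C_4(M,\check\lambda,\mu;v)\big). $$ The interface terms $\pm \lbrace \sfrac{1}{2} \rbrace \times \check\lambda(M\times \lbrace v \rbrace)$ cancel in the boundary of this sum and, since $\lambda$, $\check\lambda$ and $\mu$ coincide on $\partial M$, $$ \partial C_4(M,\lambda,\mu;v) = \lbrace 1 \rbrace \times \mu(M\times \lbrace v \rbrace) - \lbrace 0 \rbrace \times \lambda(M\times \lbrace v \rbrace) - [0,1]\times\lambda(\partial M \times \lbrace v \rbrace), $$ as required.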
\section{From pseudo-parallelizations to torsion combings} \subsection{Variation of $p_1$ as an intersection of two 4-chains} \begin{definition}
Let $M$ be a compact oriented 3-manifold. A trivialization $\rho$ of $TM_{|\partial M}$ is \textit{admissible} if there exists a section $X$ of $UM$ such that $(X,\rho)$ is a torsion combing of $M$. \end{definition}
\begin{lemma} \label{lem_HtMb}
Let $M$ be a compact oriented 3-manifold, let $\rho$ be an admissible trivialization of $TM_{|\partial M}$ and let $S_1, S_2 , \ldots , S_{\beta_1(M)}$ be surfaces in $M$ comprising a basis of $H_2(M;\b Q)$. The subspace $H_T^\rho(M)$ of $H_2(UM;\b Q)$ generated by $\lbrace [X(S_1)], \ldots, [X(S_{\beta_1(M)})] \rbrace$ where $X$ is a section of $UM$ such that $(X,\rho)$ is a torsion combing of $M$, only depends on $\rho$. \end{lemma} \begin{proof} Let $(Y,\rho)$ be another choice of torsion combing of $M$. Assume, without loss of generality, that $(X,\rho)$ and $(Y,\rho)$ are $\partial$-compatible, and let $C(X,Y)$ be the 4-chain of $UM$ $$
C(X,Y) = \bar F(X,Y) - UM_{|\Sigma_{X=-Y}} $$ constructed using Lemma~\ref{lem_phomotopy} and Proposition~\ref{prop_linksinhomologyI}, which provide $\bar F(X,Y)$ and a 2-chain $\Sigma_{X=-Y}$ of $M$ bounded by $L_{X=-Y}$, respectively. For $i \in \lbrace 1,2,\ldots,\beta_1(M)\rbrace$, $$
Y(S_i) - X(S_i) = \partial (C(X,Y) \cap UM_{|S_i}), $$ so that $[X(S_i)]=[Y(S_i)]$ in $H_2(UM;\b Q)$ for all $i \in \lbrace 1,2,\ldots,\beta_1(M)\rbrace$. Hence the subspace generated by the classes $[X(S_i)]$ does not depend on the choice of the torsion combing $(X,\rho)$. \end{proof}
\begin{lemma} \label{lem_evaluationII}
Let $M$ be a compact oriented 3-manifold, let $\rho$ be an admissible trivialization of $TM_{|\partial M}$ and let $(X,\rho)$ and $(Y,\rho)$ be $\partial$-compatible torsion combings of $M$. There exists a 4-chain $C_4(X,Y)$ of $[0,1]\times UM$ such that $$ \partial C_4(X,Y) = \lbrace 1 \rbrace \times Y(M) - \lbrace 0 \rbrace \times X(M) - [0,1] \times X(\partial M). $$ For any such chain $C_4(X,Y)$, if $C$ is a 2-cycle of $[0,1]\times UM$ then,
$$
[C] = \langle C , C_4(X,Y) \rangle_{[0,1] \times UM} [S] \mbox{ \ in $H_2([0,1]\times UM ; \b Q)/H_T^\rho(M)$,}
$$
where $[S]$ is the homology class of the fiber of $UM$ in $H_2([0,1]\times UM ; \b Q)$. \end{lemma} \begin{proof} Observe that $H_2(UM;\b Q)$ is generated by the family $\lbrace [Z(S_1)], \ldots, [Z(S_{\beta_1(M)})], [S] \rbrace$ where $S_1, \ldots , S_{\beta_1(M)}$ are surfaces in $M$ comprising a basis of $H_2(M;\b Q)$ and where $Z$ is a torsion combing of $M$ that coincides with $X$ and $Y$ on $\partial M$. Let $C_4(X,Y)$ be the 4-chain $$
C_4(X,Y) = \bar F_t(X,Y) - \lbrace t \rbrace \times UM_{|\Sigma_{X=-Y}} $$ where $\bar F_t(X,Y)$ is a 4-chain as in Definition~\ref{def_Ft} and $\Sigma_{X=-Y}$ is a 2-chain of $M$ bounded by $L_{X=-Y}$ provided by Proposition~\ref{prop_linksinhomologyI}. The chain $C_4(X,Y)$ has the desired boundary. Note that $\langle [S], C_4(X,Y) \rangle = 1$. Moreover, $\langle [Z(\Sigma)], C_4(X,Y) \rangle = 0$ for any surface $\Sigma$ in $M$. Indeed, notice that $$ \langle [Z(\Sigma)], C_4(X,Y) \rangle = \left\lbrace \begin{aligned} &\langle [Z(\Sigma)], [0,1]\times X(M) \rangle \mbox{ , pushing $Z(\Sigma)$ before $t$,}\\ &\langle [Z(\Sigma)], [0,1]\times Y(M) \rangle \mbox{ , pushing $Z(\Sigma)$ after $t$.} \end{aligned} \right. $$ As a consequence, $\langle [Z(\Sigma)], C_4(X,Y) \rangle$ is independent of $X$ and $Y$. Let us prove that it is possible to construct a torsion combing $Z'$ that coincides with $X$ and $Y$ on $\partial M$ and such that $$ \langle [Z(\Sigma)], [0,1]\times Z'(M) \rangle=0. $$ Using the parallelization $\rho=(E_1^\rho,E_2^\rho,E_3^\rho)$ of $\partial M$ induced by $X$, define a homotopy $$
\l Z : [0,1] \times \partial M \rightarrow [0,1]\times UM_{|\partial M} $$
from $-Z_{|\partial M}$ to $Z_{|\partial M}$ along the unique geodesic arc passing through $E_3^\rho$. Since $\Sigma$ sits in $\mathring{M}$, we can get a collar $\l C\simeq [0,1]\times \partial M$ of $\partial M$ such that $\l C \cap \Sigma = \emptyset$ and $\lbrace 1 \rbrace \times \partial M = \partial M$. Finally set $Z'$ to coincide with $-Z$ on $M\setminus \mathring{\l C}$ and with the homotopy $\l Z$ on the collar. The combing $(Z',\rho)$ is a torsion combing. Indeed, $E_2^\rho$ can be extended as a nonvanishing section of ${Z'}^\perp_{|\l C}$ so that $$ e_2^M(Z'^\perp,E_2^\rho)=e_2^{M\setminus \mathring{\l C}}(Z'^\perp,E_2^\rho)=e_2^M(-Z^\perp,E_2^\rho)=-e_2^M(Z^\perp,E_2^\rho). $$ Finally, using the torsion combing $Z'$, we get $\langle [Z(\Sigma)], [0,1]\times Z'(M) \rangle = 0$. \\
Now, writing the class of a 2-cycle $C$ of $[0,1]\times UM$ as $[C]= a [S] + \sum_{i=1}^{\beta_1(M)} b_i [Z(S_i)]$, the previous computations give $\langle C , C_4(X,Y) \rangle = a$, so that $[C] = \langle C , C_4(X,Y) \rangle [S]$ in $H_2([0,1]\times UM ; \b Q)/H_T^\rho(M)$ for the chain $C_4(X,Y)$ constructed above. To conclude the proof, assume that $C'_4(X,Y)$ is a 4-chain with the same boundary as the chain $C_4(X,Y)$ we constructed, and let $C$ be a 2-cycle of $[0,1]\times UM$. The 2-cycle $C$ is homologous to a 2-cycle in $\lbrace 1 \rbrace \times UM$. Similarly, $(C'_4(X,Y) - C_4(X,Y))$ is homologous to a 4-cycle in $\lbrace 0 \rbrace \times UM$. Hence, $\langle C , \ C'_4(X,Y) - C_4(X,Y) \rangle = 0$. \end{proof}
\begin{lemma} \label{bord} Let $\tau$ and $\bar \tau$ be two pseudo-parallelizations of a compact oriented 3-manifold $M$ that coincide on $\partial M$. Let $C_4(\tau,\bar\tau;\pm e_1)$ denote 4-chains of $[0,1]\times UM$ as in Theorem~\ref{prop_varasint} for $v=\pm e_1$. If the 4-chains $C_4(\tau,\bar\tau;\pm e_1)$ are transverse to each other, then
$$
\begin{aligned}
\partial (C_4(\tau,\bar\tau;e_1)\cap C_4(\tau,\bar\tau;-e_1)) &= \sfrac{1}{4} \cdot \lbrace 1 \rbrace \times \left( \bar E_1^d(L_{\bar E_1^d=-\bar E_1^g}) - (-\bar E_1^d)(L_{\bar E_1^d=-\bar E_1^g}) \right) \\
&- \sfrac{1}{4} \cdot \lbrace 0 \rbrace \times \left( E_1^d(L_{E_1^d=-E_1^g}) - (-E_1^d)(L_{E_1^d=-E_1^g}) \right)
\end{aligned}
$$
where $E_1^d$ and $E_1^g$, resp. $\bar E_1^d$ and $\bar E_1^g$, are the Siamese sections of $\tau$, resp. $\bar\tau$. \end{lemma} \begin{proof}
Since $\tau$ and $\bar \tau$ coincide with a trivialization of $TM_{|\partial M}$ on $\partial M$, we have $$ \begin{aligned} \partial (C_4(\tau,\bar\tau;e_1)\cap C_4(\tau,\bar\tau;-e_1)) &= \lbrace 1 \rbrace \times \left( \bar\tau(M\times \lbrace e_1 \rbrace)\cap \bar\tau(M\times \lbrace -e_1 \rbrace) \right) \\ &- \lbrace 0 \rbrace \times \left( \tau(M\times \lbrace e_1 \rbrace)\cap \tau(M\times \lbrace -e_1 \rbrace) \right) \\ &=\sfrac{1}{4} \cdot \lbrace 1 \rbrace \times \left( \bar E_1^d(L_{\bar E_1^d=-\bar E_1^g}) + \bar E_1^g(L_{\bar E_1^g=-\bar E_1^d}) \right) \\ &- \sfrac{1}{4} \cdot \lbrace 0 \rbrace \times \left( E_1^d(L_{E_1^d=-E_1^g}) + E_1^g(L_{E_1^g=-E_1^d}) \right) . \end{aligned} $$ \iffalse &= ( \lbrace 1 \rbrace \times UM ) \bigcap \left( C_4(\tau,\bar\tau;e_1)\cap C_4(\tau,\bar\tau;-e_1) \right) \\ &- (\lbrace 0 \rbrace \times UM ) \bigcap \left( C_4(\tau,\bar\tau;e_1)\cap C_4(\tau,\bar\tau;-e_1) \right) \\
&=\sfrac{1}{4} \cdot \lbrace 1 \rbrace \times \left( \bar E_1^d(L_{\bar E_1^d=-\bar E_1^g}) - (-\bar E_1^d)(L_{\bar E_1^d=-\bar E_1^g}) \right) \\ &- \sfrac{1}{4} \cdot \lbrace 0 \rbrace \times \left( E_1^d(L_{E_1^d=-E_1^g}) - (-E_1^d)(L_{E_1^d=-E_1^g}) \right). \fi \end{proof}
\begin{definition} \label{omega} Let $\bar\tau=(N(\gamma); \tau_e, \tau_d, \tau_g)$ be a pseudo-parallelization of a compact oriented 3-manifold $M$, and let $E_1^d$ and $E_1^g$ denote its Siamese sections. Recall from Definition \ref{def_addinner} that the map $$ \tau_d^{-1} \circ \tau_g : [a,b] \times \gamma \times [-1,1] \times \b R^3 \rightarrow [a,b] \times \gamma \times [-1,1] \times \b R^3 $$ is such that $$ \begin{aligned} &\forall t \in [a,b], \ u \in [-1,1], \ c \in \gamma, \ v \in \b R^3 : \\ &\tau_d^{-1} \circ \tau_g ((t,c,u),v) = \l T_\gamma^{-1} ((t,c,u),\l F(t,u)(v)). \end{aligned} $$ Hence, $L_{E_1^d=-E_1^g}$ consists of parallels of $\gamma$ of the form $\lbrace t \rbrace \times \gamma \times \lbrace u \rbrace$. For each component $L$ of $L_{E_1^d=-E_1^g}$, there exists a point $e_2^L$ in $\b S^1(e_2)$ such that $L \times \lbrace e_2^L \rbrace =\tau_d^{-1} \circ \tau_g (L \times \lbrace e_2 \rbrace)$. Choose a point $e_2^\Omega$ in $\b S^1(e_2)$ distinct from $e_2$ and from the points $e_2^L$. Finally, set $$ \Omega(\bar\tau)=-\tau_d(L_{E_1^d=-E_1^g}\times[-e_1,e_1]_{e_2^\Omega}) $$ where $[-e_1,e_1]_{e_2^\Omega}$ is the geodesic arc from $-e_1$ to $e_1$ passing through $e_2^\Omega$. The 2-chain $\Omega(\bar\tau)$ can be seen as the projection of a homotopy from $-E_1^d$ to $E_1^d$ over $L_{E_1^d=-E_1^g}$. Note that $$ \partial \Omega(\bar\tau)= (-E_1^d)(L_{E_1^d=-E_1^g})- E_1^d(L_{E_1^d=-E_1^g}). $$ The choice of $e_2^\Omega$ ensures that $\Omega(\bar\tau) \cap \bar\tau(M\times \lbrace e_2 \rbrace)=\emptyset$. Note that $\Omega(\tau)=\emptyset$ when $\tau$ is a genuine parallelization. \end{definition}
\begin{definition} \label{def_pgo}
Let $M$ be a compact oriented 3-manifold and let $\rho$ be an admissible trivialization of $TM_{|\partial M}$. Let $\tau$ and $\bar\tau$ be pseudo-parallelizations of $M$ that coincide with $\rho$ on $\partial M$ and let $C_4(\tau,\bar\tau;\pm e_1)$ denote 4-chains of $[0,1]\times UM$ as in Theorem~\ref{prop_varasint}. Set $$ \go P(\tau,\bar\tau) = \lbrace 0 \rbrace \times \Omega(\tau) + 4 \cdot (C_4(\tau,\bar\tau;e_1) \cap C_4(\tau,\bar\tau;-e_1)) - \lbrace 1 \rbrace \times \Omega(\bar\tau). $$ When $(X,\rho)$ is a torsion combing of $M$, let $C_4^+(X,\bar\tau)$, $C_4^-(X,\bar\tau)$, $C_4^+(\bar\tau,X)$ and $C_4^-(\bar\tau,X)$ be 4-chains of $[0,1]\times UM$ as in Lemma~\ref{lem_transfert} and set $$ \begin{aligned} \go P(X,\bar\tau) = 4\cdot(C_4^+(X,\bar\tau)\cap C_4^-(X,\bar\tau)) - \lbrace 1 \rbrace \times \Omega(\bar\tau), \\ \go P(\bar\tau,X) = \lbrace 0 \rbrace \times \Omega(\bar\tau) + 4\cdot(C_4^+(\bar\tau,X)\cap C_4^-(\bar\tau,X)). \end{aligned} $$ \end{definition}
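In particular, when $\tau$ and $\bar\tau$ are genuine parallelizations, $\Omega(\tau)=\Omega(\bar\tau)=\emptyset$ by Definition~\ref{omega}, so that $\go P(\tau,\bar\tau)$ reduces to $4 \cdot (C_4(\tau,\bar\tau;e_1) \cap C_4(\tau,\bar\tau;-e_1))$.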
According to Lemma~\ref{bord} and Definition~\ref{omega}, the 2-chains $\go P(\lambda,\mu)$ of Definition~\ref{def_pgo} above are cycles. In the remainder of this section, we prove that their classes read $p_1(\lambda,\mu)[S]$ in $H_2([0,1]\times UM;\b Q)/H_T^\rho(M)$.
\begin{proposition} \label{prop_ppara}
Let $M$ be a compact oriented 3-manifold, let $\rho$ be an admissible trivialization of $TM_{|\partial M}$ and let $\tau$ and $\bar \tau$ be two pseudo-parallelizations of $M$ that coincide with $\rho$ on $\partial M$. Under the assumptions of Definition~\ref{def_pgo}, the class of $\go P(\tau,\bar\tau)$ in $H_2([0,1]\times UM ; \b Q)/H_T^\rho(M)$ equals $p_1(\tau,\bar\tau)[S]$ where $[S]$ is the homology class of the fiber of $UM$ in $H_2([0,1]\times UM ; \b Q)$. \end{proposition} \begin{proof} The class of $\go P(\tau,\bar\tau)$ in $H_2([0,1]\times UM ; \b Q)/H_T^\rho(M)$ is $$
\left[ \go P(\tau,\bar\tau) \right] = \langle \go P(\tau,\bar\tau) , C_4(X,Y) \rangle \cdot [S] $$ where $(X,\rho)$ and $(Y,\rho)$ are $\partial$-compatible torsion combings of $M$ and where $C_4(X,Y)$ is any 4-chain of $[0,1]\times UM$ as in Lemma~\ref{lem_evaluationII}. Let us construct a specific $C_4(X,Y)$ as follows. Let $C_4(\tau,\bar\tau;e_2)$ be as in Theorem~\ref{prop_varasint} where $e_2=(0,1,0)$. Since $\partial C_4(\tau,\bar\tau;e_1)$ and $\partial C_4(\tau,\bar\tau;e_2)$ are homologous, it is possible to reparameterize and to stack the 4-chains $C_4^+(X,\tau)$, $C_4(\tau,\bar\tau;e_2)$ and $C_4^+(\bar\tau,Y)$ where the chains $C_4^+(X,\tau)$ and $C_4^+(\bar\tau,Y)$ are as in Lemma~\ref{lem_transfert}. It follows that, in $H_2([0,1]\times UM ; \b Q)/H_T^\rho(M)$, $$ \begin{aligned} \left[ \go P(\tau,\bar\tau) \right] &= \langle \go P(\tau,\bar\tau) , C_4(X,Y) \rangle [S] \\
&= \langle \go P(\tau,\bar\tau) , C_4(\tau,\bar\tau;e_2) \rangle [S] \\
&= 4 \cdot \langle C_4(\tau,\bar\tau;e_1) \cap C_4(\tau,\bar\tau;-e_1) ,C_4(\tau,\bar\tau;e_2) \rangle [S] \\
&+ \langle \lbrace 0 \rbrace \times \Omega(\tau)- \lbrace 1 \rbrace \times \Omega(\bar\tau),C_4(\tau,\bar\tau;e_2) \rangle [S]. \end{aligned} $$ Now, note that $ \langle \lbrace 0 \rbrace \times \Omega(\tau)- \lbrace 1 \rbrace \times \Omega(\bar\tau),C_4(\tau,\bar\tau;e_2) \rangle=0$ since $$ \Omega(\tau) \cap \tau(M\times \lbrace e_2 \rbrace) = \emptyset \mbox{ \ and \ } \Omega(\bar \tau) \cap \bar \tau(M\times \lbrace e_2 \rbrace) = \emptyset. $$ Hence, $$ \left[ \go P(\tau,\bar\tau) \right]= 4 \cdot \langle C_4(\tau,\bar\tau;e_1),C_4(\tau,\bar\tau;-e_1),C_4(\tau,\bar\tau;e_2) \rangle [S] = p_1(\tau,\bar\tau) [S]. $$ \end{proof}
\subsection[Pontrjagin numbers for combings of compact 3-manifolds]{Pontrjagin numbers for combings of compact 3-manifolds \\ Proof of Theorem~\ref{thm_defp1Xb}} \begin{lemma} \label{cor_reformcombingsb} Let $(X,\rho)$ be a torsion combing of a compact oriented 3-manifold $M$. Let $\bar\tau$ be a pseudo-parallelization of $M$ compatible with $X$. Let $\go P(\bar\tau,X)$ be as in Definition~\ref{def_pgo}. The class $[\go P(\bar\tau,X)]$ in $H_2([0,1]\times UM ; \b Q) / H_T^\rho(M)$ only depends on $\bar\tau$ and on the homotopy class of $X$. It will be denoted by $\tilde p_1(\bar\tau,[X])[S]$. \end{lemma} \begin{proof} Let $\tau$ be another pseudo-parallelization of $M$ which is compatible with $X$. Let $C_4^+(X,\tau)$ and $C_4^-(X,\tau)$ be fixed choices of 4-chains of $[0,1]\times UM$ as in Lemma~\ref{lem_transfert}. Using these 4-chains, construct the cycle $\go P(X,\tau)$ as in Definition~\ref{def_pgo}. Then, in the space $H_2([0,1]\times UM ; \b Q) / H_T^\rho(M)$, we have $$ \begin{aligned} [\go P(\bar\tau,X)] + [\go P(X,\tau)] &= [\go P(\bar\tau,X) + \go P(X,\tau)] \\ &= [ \lbrace 0 \rbrace \times \Omega(\bar\tau) + 4\cdot(C_4^+(\bar\tau,X)\cap C_4^-(\bar\tau,X)) \\ & \hspace{7mm} + 4\cdot(C_4^+(X,\tau)\cap C_4^-(X,\tau)) - \lbrace 1 \rbrace \times \Omega(\tau) ]. \end{aligned} $$ By reparameterizing and stacking $C_4^+(\bar\tau,X)$ and $C_4^+(X,\tau)$, resp. $C_4^-(\bar\tau,X)$ and $C_4^-(X,\tau)$, we get a 4-chain $C_4(\bar\tau,\tau,e_1)$, resp. $C_4(\bar\tau,\tau,-e_1)$, as in Lemma~\ref{lem_C4pme1}. It follows that $$ \begin{aligned} [\go P(\bar\tau,X)] + [\go P(X,\tau)] & \hspace{-1mm}=\hspace{-1mm} [ \lbrace 0 \rbrace \times \Omega(\bar\tau) + 4\cdot(C_4(\bar\tau,\tau,e_1) \cap C_4(\bar\tau,\tau,e_1)) - \lbrace 1 \rbrace \times \Omega(\tau) ] \\ &\hspace{-1mm}=\hspace{-1mm} [\go P(\bar\tau, \tau)] \end{aligned} $$ or, equivalently, $[\go P(\bar\tau,X)] = [\go P(\bar\tau, \tau)] - [\go P(X,\tau)]$. This proves the statement since $\go P(X,\tau)$ is independent of the choices made for $C_4^+(\bar\tau,X)$ and $C_4^-(\bar\tau,X)$, and since, according to Proposition~\ref{prop_ppara}, the class $[\go P(\bar\tau, \tau)]$ is independent of the choices for $C_4(\bar\tau,\tau,e_1)$ and $C_4(\bar\tau,\tau,-e_1)$. \end{proof}
\begin{proposition} \label{cor_reformcombings} If $\bar\tau$ is a pseudo-parallelization of a closed oriented 3-manifold $M$ and if $X$ is a torsion combing of $M$ compatible with $\bar\tau$, then $$
\tilde p_1(\bar\tau,[X]) = p_1([X])-p_1(\bar\tau). $$ \end{proposition} \begin{proof} According to Lemma~\ref{cor_reformcombingsb}, $\tilde p_1(\bar\tau,[X])$ is independent of the choices for $C_4^+(\bar\tau,X)$ and $C_4^-(\bar\tau,X)$. Let us construct convenient 4-chains $C_4^+(\bar\tau,X)$ and $C_4^-(\bar\tau,X)$. Let $\tau$ be a ge\-nuine parallelization of $M$. Thanks to Theorem~\ref{prop_varasint}, there exist two 4-chains of $[0,1]\times UM$, $C_4(\bar\tau,\tau;e_1)$ and $C_4(\bar\tau,\tau;-e_1)$, such that $$ \begin{aligned}
\partial C_4(\bar\tau,\tau;e_1) &= \lbrace 1 \rbrace \times \tau(M\times \lbrace e_1 \rbrace) - \lbrace 0 \rbrace \times \bar \tau(M\times \lbrace e_1 \rbrace), \\
\partial C_4(\bar\tau,\tau;-e_1) &= \lbrace 1 \rbrace \times \tau(M\times \lbrace -e_1 \rbrace) - \lbrace 0 \rbrace \times \bar \tau(M\times \lbrace -e_1 \rbrace). \end{aligned} $$ Furthermore, as in Remark~\ref{rmkpcase}, construct two 4-chains $C_4^+(\tau,X)$ and $C_4^-(\tau,X)$ as $$ \begin{aligned}
C_4^+(\tau, X) &= \bar F_{t_1}(E_1^\tau, X) - \lbrace t_1 \rbrace \times UM_{|\Sigma_{E_1^\tau=-X}}\\
C_4^-(\tau, X) &= \bar F_{t_2}(-E_1^\tau, -X) - \lbrace t_2 \rbrace \times UM_{|\Sigma_{-E_1^\tau=X}} \end{aligned} $$ where $E_1^\tau$ stands for the first vector of the parallelization $\tau$, where $t_1, t_2\in \ ]0,1[$, and where $\Sigma_{E_1^\tau=-X}$ and $\Sigma_{-E_1^\tau=X}$ are 2-chains with boundaries $L_{E_1^\tau=-X}$ and $L_{-E_1^\tau=X}$, respectively. Eventually, define $C_4^+(\bar\tau,X)$, resp. $C_4^-(\bar\tau,X)$, by reparameterizing and stacking the chains $C_4(\bar\tau,\tau;e_1)$ and $ C_4^+(\tau, X)$, resp. $C_4(\bar\tau,\tau;-e_1)$ and $ C_4^-(\tau, X)$. \\
Let us finally compute $[\lbrace 0 \rbrace \times \Omega(\bar\tau)+4\cdot(C_4^+(\bar\tau,X) \cap C_4^-(\bar\tau,X))]$. By construction, we have : $$ \begin{aligned} &[\lbrace 0 \rbrace \times \Omega(\bar\tau)+4 \cdot (C_4^+(\bar\tau,X) \cap C_4^-(\bar\tau,X))] \\ &\hspace{-1mm}=\hspace{-1mm} [\lbrace 0 \rbrace \times \Omega(\bar\tau)+4 \cdot (C_4(\bar\tau,\tau;e_1) \cap C_4(\bar\tau,\tau;-e_1))] + 4 [C_4^+(\tau, X) \cap C_4^-(\tau, X)], \end{aligned} $$ so that, using Proposition~\ref{prop_ppara}, $$ \begin{aligned}
&[\lbrace 0 \rbrace \times \Omega(\bar\tau)+4 \cdot (C_4^+(\bar\tau,X) \cap C_4^-(\bar\tau,X))] \\
&= (p_1(\tau)-p_1(\bar\tau))[S] + 4 \cdot [C_4^+(\tau, X) \cap C_4^-(\tau, X)]. \end{aligned} $$ Now, using Definition~\ref{def_Ft}, $$ \begin{aligned}
C_4^+(\tau, X) = \ & \left[ 0 , t_1 \right] \times E_1^\tau(M)
+ \lbrace t_1 \rbrace \times \bar F(E_1^\tau,X) \\
+ \ & \left[ t_1, 1 \right] \times X(M) - \lbrace t_1 \rbrace \times UM_{|\Sigma_{E_1^\tau=-X}}\\
C_4^-(\tau, X) = \ & \left[ 0 , t_2 \right] \times (-E_1^\tau)(M)
+ \lbrace t_2 \rbrace \times \bar F(-E_1^\tau,-X) \\
+ \ & \left[ t_2, 1 \right] \times (-X)(M) - \lbrace t_2 \rbrace \times UM_{|\Sigma_{-E_1^\tau=X}} \end{aligned} $$ so that, assuming $t_1 < t_2$ without loss of generality, $$ \begin{aligned} C_4^+(\tau, X) \cap C_4^-(\tau, X) = - \ & \lbrace t_1 \rbrace \times (-E_1^\tau)\left( \Sigma_{E_1^\tau=-X} \right) \\ + \ & [t_1 , t_2] \times (-E_1^\tau)\left( L_{-E_1^\tau = X} \right) \\ - \ & \lbrace t_2 \rbrace \times X(\Sigma_{-E_1^\tau=X}). \end{aligned} $$ It follows that, using Theorem~\ref{thm_defp1X} and Lemma~\ref{lem_evaluationII} with $C_4(E_1^\tau,E_1^\tau)=[0,1] \times E_1^\tau(M)$, $$ \begin{aligned} 4 \cdot [C_4^+(\tau, X) \cap C_4^-(\tau, X)] & \hspace{-1mm}=\hspace{-1mm} 4 \cdot \langle C_4^+(\tau, X) , C_4^-(\tau, X) , [0,1] \times E_1^\tau(M) \rangle [S] \\ &\hspace{-1mm}=\hspace{-1mm} 4 \cdot lk(L_{E_1^\tau = X},L_{E_1^\tau = - X}) [S] \\ &\hspace{-1mm}=\hspace{-1mm}(p_1([X]) - p_1(\tau))[S] \\ \end{aligned} $$ in $H_2([0,1]\times UM; \b Q)/H_T(M)$, and, eventually, $$ \begin{aligned}
[\lbrace 0 \rbrace \times \Omega(\bar\tau)&+4 \cdot (C_4^+(\bar\tau,X) \cap C_4^-(\bar\tau,X))] \\
&= (p_1(\tau)-p_1(\bar\tau))[S] + (p_1([X]) - p_1(\tau))[S] \\
&=(p_1([X]) - p_1(\bar\tau))[S]. \end{aligned} $$ \end{proof}
\begin{lemma} \label{fond} If $(X,\rho)$ is a torsion combing of a compact oriented 3-manifold $M$ and if \linebreak $\bar\tau=(N(\gamma); \tau_e,\tau_d,\tau_g)$ is a pseudo-parallelization of $M$ compatible with $X$, then $$ \begin{aligned} \tilde p_1(\bar\tau, [X]) &= lk_M\left(L_{E_1^d=X} + L_{E_1^g=X} \ , \ L_{E_1^d=-X} + L_{E_1^g=-X} \right) \\ &- lk_{\b S^2}\left(e_1-(-e_1), \ P_{\b S^2} \circ \tau_d^{-1} \circ X(L_{E_1^d=-E_1^g})\right) \end{aligned} $$ where $E_1^d$ and $E_1^g$ denote the Siamese sections of $\bar\tau$. \end{lemma} \begin{proof} We just have to evaluate the class of the 2-cycle $$ \go P(\bar\tau,X) = \lbrace 0 \rbrace \times \Omega(\bar\tau)+4\cdot(C_4^+(\bar\tau,X)\cap C_4^-(\bar\tau,X)) $$ in $H_2([0,1]\times UM ; \b Q) / H_T^\rho(M)$ for convenient 4-chains $C_4^+(\bar\tau,X)$ and $C_4^-(\bar\tau,X)$ with the prescribed boundaries. Let $t_1$ and $t_2$ be in $]0,1[$, with $t_1 > t_2$, and set $$ \begin{aligned}
C_4^+(\bar\tau,X) &= \sfrac{1}{2} \cdot \left( \bar F_{t_1}(E_1^d,X)+\bar F_{t_1}(E_1^g,X) - \lbrace t_1 \rbrace \times UM_{|\Sigma(e_1)} \right)\\
C_4^-(\bar\tau,X) &= \sfrac{1}{2} \cdot \left( \bar F_{t_2}(-E_1^d,-X)+\bar F_{t_2}(-E_1^g,-X) - \lbrace t_2 \rbrace \times UM_{|\Sigma(-e_1)} \right) \end{aligned} $$ where the chains $\bar F_t$ are as in Definition~\ref{def_Ft} and where, using Lemma~\ref{nulexcep}, $\Sigma(e_1)$ and $\Sigma(-e_1)$ are 2-chains of $M$ so that $$ \partial \Sigma(\pm e_1) = \pm(L_{E_1^d=-X} + L_{E_1^g=-X}). $$ These 4-chains do have the expected boundaries. Let us now describe $C_4^+(\bar\tau,X) \cap C_4^-(\bar\tau,X)$ : \begin{enumerate}[\textbullet] \item on $[0,t_2[$ : The intersection between $C_4^+(\bar\tau,X)$ and $C_4^-(\bar\tau,X)$ is $$ \sfrac{1}{4} \cdot [0,t_2[ \ \times \ E_1^d(L_{E_1^d=-E_1^g}) + \sfrac{1}{4} \cdot [0,t_2[ \ \times \ E_1^g(L_{E_1^g=-E_1^d}). $$
\item on $]t_2 , t_1[$ : The intersection between $C_4^+(\bar\tau,X)$ and $C_4^-(\bar\tau,X)$ is $$
\sfrac{1}{2} \ \cdot \ ]t_2,t_1[ \ \times \ (-X)(L_{E_1^d=-X}) + \sfrac{1}{2} \ \cdot \ ]t_2,t_1[ \ \times \ (-X)(L_{E_1^g=-X}). $$
\item on $]t_1,1]$ : There is no intersection between $C_4^+(\bar\tau,X)$ and $C_4^-(\bar\tau,X)$ since they respectively consist of $]t_1,1]\times X(M)$ and $]t_1,1]\times (-X)(M)$, which are disjoint.
\item at $t_2$ : The intersection between $C_4^+(\bar\tau,X)$ and $C_4^-(\bar\tau,X)$ is $$
\begin{aligned}
&\sfrac{1}{2} \cdot \lbrace t_2 \rbrace \times E_1^d(M) \cap \lbrace t_2 \rbrace \times \bar F(-E_1^g,-X) \\
+ &\sfrac{1}{2} \cdot \lbrace t_2 \rbrace \times E_1^g(M) \cap \lbrace t_2 \rbrace \times \bar F(-E_1^d,-X) \\
- & \sfrac{1}{4} \cdot \lbrace t_2 \rbrace \times E_1^d(\Sigma(-e_1)) - \sfrac{1}{4} \cdot \lbrace t_2 \rbrace \times E_1^g(\Sigma(-e_1))
\end{aligned} $$ \item at $t_1$ : The intersection between $C_4^+(\bar\tau,X)$ and $C_4^-(\bar\tau,X)$ is $$ \begin{aligned}
&\sfrac{1}{2}\cdot \lbrace t_1 \rbrace \times \bar F(E_1^d,X) \cap \lbrace t_1 \rbrace \times (-X)(M) \\
+ &\sfrac{1}{2}\cdot \lbrace t_1 \rbrace \times \bar F(E_1^g,X) \cap \lbrace t_1 \rbrace \times (-X)(M) \\
- & \sfrac{1}{2} \cdot \lbrace t_1 \rbrace \times (-X)(\Sigma(e_1)).
\end{aligned} $$ \end{enumerate} It follows that : $$ \begin{aligned} &\langle \sfrac{1}{4} \cdot \lbrace 0 \rbrace \times \Omega(\bar\tau)+ C_4^+(\bar\tau,X) \cap C_4^-(\bar\tau,X) , [0,1] \times X(M) \rangle_{[0,1]\times UM}\\ &= \sfrac{1}{4} \cdot \langle [0,t_2[ \ \times \ E_1^d(L_{E_1^d=-E_1^g}) + [0,t_2[ \ \times \ E_1^g(L_{E_1^g=-E_1^d}) , [0,1]\times X(M) \rangle_{[0,1]\times UM} \\ &+ \sfrac{1}{2} \cdot \langle \lbrace t_2 \rbrace \times E_1^d(M) \cap \lbrace t_2 \rbrace \times \bar F(-E_1^g,-X) , [0,1] \times X(M) \rangle_{[0,1]\times UM} \\ &+\sfrac{1}{2} \cdot \langle \lbrace t_2 \rbrace \times E_1^g(M) \cap \lbrace t_2 \rbrace \times \bar F(-E_1^d,-X) , [0,1] \times X(M) \rangle_{[0,1]\times UM} \\ &- \sfrac{1}{4} \cdot \langle \lbrace t_2 \rbrace \times E_1^d(\Sigma(-e_1)) + \lbrace t_2 \rbrace \times E_1^g(\Sigma(-e_1)) , [0,1] \times X(M) \rangle_{[0,1]\times UM} \\ &+ \sfrac{1}{4} \cdot \langle \Omega(\bar\tau) , X(M) \rangle_{UM}. \end{aligned} $$ Since $L_{E_1^g=X} \cap L_{E_1^d=-X}$ and $L_{E_1^g=-X} \cap L_{E_1^d=X}$ are empty : $$ \langle [0,t_2[ \ \times \ E_1^d(L_{E_1^d=-E_1^g}) + [0,t_2[ \ \times \ E_1^g(L_{E_1^g=-E_1^d}) , [0,1]\times X(M) \rangle_{[0,1]\times UM} = 0. $$ Furthermore, note that if $(m,v)$ is an intersection point of $$E_1^d(M)\cap \bar F(-E_1^g,-X) \cap X(M) $$ then, in particular, $v=E_1^d(m)=X(m)$ so that $-E_1^g(m)$ and $-X(m)$ are not antipodal since $L_{E_1^d=X} \cap L_{E_1^g=-X}=\emptyset$. It follows that $v=E_1^d(m)=X(m)$ should also sit on the shortest geodesic arc from $-E_1^g(m)$ to $-X(m)$. Since such a configuration is impossible, this triple intersection is empty, thus $$ \langle \lbrace t_2 \rbrace \times E_1^d(M) \cap \lbrace t_2 \rbrace \times \bar F(-E_1^g,-X) , [0,1] \times X(M) \rangle_{[0,1]\times UM} = 0. $$ Similarly, $$ \langle \lbrace t_2 \rbrace \times E_1^g(M) \cap \lbrace t_2 \rbrace \times \bar F(-E_1^d,-X) , [0,1] \times X(M) \rangle_{[0,1]\times UM} = 0. $$ Now, we have $$ \begin{aligned} &\langle \lbrace t_2 \rbrace \times E_1^d(\Sigma(-e_1)) + \lbrace t_2 \rbrace \times E_1^g(\Sigma(-e_1)) , [0,1] \times X(M) \rangle \\ &= - lk_M(L_{E_1^d=X} + L_{E_1^g=X}, \ L_{E_1^d=-X} + L_{E_1^g=-X}). \end{aligned} $$ Furthermore, recall Definition~\ref{omega} $$ \begin{aligned} \langle \Omega(\bar\tau) , X(M) \rangle_{UM} &= \langle \tau_d^{-1} (\Omega(\bar\tau)) , \tau_d^{-1} \circ X(L_{E_1^d=-E_1^g}) \rangle_{L_{E_1^d=-E_1^g} \times \b S^2} \\ &= - \langle L_{E_1^d=-E_1^g} \hspace{-0.5mm} \times \hspace{-0.5mm} [-e_1,e_1]_{e_2^\Omega} \ , \ \tau_d^{-1} \circ X(L_{E_1^d=-E_1^g}) \rangle_{L_{E_1^d=-E_1^g} \times \b S^2} \end{aligned} $$ where $[-e_1,e_1]_{e_2^\Omega}$ is the geodesic arc of $\b S^2$ from $-e_1$ to $e_1$ passing through $e_2^\Omega$. Now, $L_{E_1^d=-E_1^g} \times \b S^2$ is oriented and an intersection $$ (m,v)\hspace{-0.5mm}\in\hspace{-0.5mm}L_{E_1^d=-E_1^g} \hspace{-0.5mm} \times \hspace{-0.5mm} [-e_1,e_1]_{e_2^\Omega} \cap \tau_d^{-1} \circ X(L_{E_1^d=-E_1^g}) $$ is positive when $$ T_{(m,v)}(L_{E_1^d=-E_1^g} \times [-e_1,e_1]_{e_2^\Omega})\oplus T_{(m,v)}(\tau_d^{-1}\circ X(L_{E_1^d=-E_1^g}))=T_{(m,v)}(L_{E_1^d=-E_1^g}\times\b S^2) $$ as an oriented sum, which is equivalent to $$ T_{v}([-e_1,e_1]_{e_2^\Omega})\oplus T_{v}(P_{\b S^2}\circ\tau_d^{-1}\circ X(L_{E_1^d=-E_1^g}))=T_{v}(\b S^2) $$ as an oriented sum, where $P_{\b S^2}$ is the standard projection from $M\times \b S^2$ to $\b S^2$. See Figure \ref{orientation}. 
\begin{center} \definecolor{ccqqtt}{rgb}{0.8,0,0.2} \definecolor{zzttqq}{rgb}{0.6,0.2,0} \definecolor{ccqqqq}{rgb}{0.8,0,0} \begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=0.5cm,scale=0.9] \clip(9,-2.5) rectangle (23,10); \fill[line width=0pt,color=zzttqq,fill=zzttqq,fill opacity=0.1] (13,4) -- (14.5,4) -- (14.5,-1) -- (13,-1) -- cycle; \fill[line width=0pt,dotted,color=zzttqq,fill=zzttqq,fill opacity=0.1] (16,4) -- (20,4) -- (20,-1) -- (16,-1) -- cycle; \fill[line width=0pt,color=zzttqq,fill=zzttqq,fill opacity=0.1] (21.5,4) -- (23,4) -- (23,-1) -- (21.5,-1) -- cycle; \draw [line width=1.4pt] (16,4)-- (20,4); \draw [line width=1.4pt] (20,-1)-- (20,8); \draw [line width=1.4pt] (20,8)-- (22,9); \draw [->] (16,4) -- (20,4); \draw [->,line width=1.4pt] (21.5,4) -- (23,4); \draw [line width=1.4pt] (14.5,-1)-- (14.5,8); \draw [line width=1.4pt] (14.5,8)-- (16.5,9); \draw [->,line width=1.4pt,color=ccqqqq] (21.5,6) -- (23,6); \draw [->,line width=1.4pt,color=ccqqqq] (13,6) -- (14.5,6); \draw [->,line width=1.4pt] (13,4) -- (14.5,4); \draw [->] (16,-1) -- (16,4); \draw [->] (21.5,-1) -- (21.5,4); \draw[line width=1.4pt,color=ccqqqq] (19.12,4.3) -- (19.11,4.24) -- (19.1,4.18) -- (19.1,4.12) -- (19.09,4.05) -- (19.08,3.99) -- (19.07,3.93) -- (19.07,3.87) -- (19.06,3.81) -- (19.05,3.74) -- (19.05,3.68) -- (19.04,3.62) -- (19.04,3.56) -- (19.03,3.49) -- (19.02,3.43) -- (19.02,3.37) -- (19.01,3.31) -- (19.01,3.25) -- (19,3.18) -- (18.99,3.12) -- (18.99,3.06) -- (18.98,3) -- (18.98,2.94) -- (18.97,2.88) -- (18.96,2.82) -- (18.96,2.76) -- (18.95,2.7) -- (18.95,2.65) -- (18.94,2.59) -- (18.93,2.53) -- (18.93,2.48) -- (18.92,2.42) -- (18.91,2.37) -- (18.9,2.31) -- (18.9,2.26) -- (18.89,2.21) -- (18.88,2.16) -- (18.87,2.11) -- (18.86,2.06) -- (18.85,2.01) -- (18.85,1.97) -- (18.84,1.92) -- (18.83,1.88) -- (18.82,1.83) -- (18.81,1.79) -- (18.79,1.75) -- (18.78,1.71) -- (18.77,1.67) -- (18.76,1.64) -- (18.75,1.6) -- (18.73,1.57) -- (18.72,1.54) -- (18.71,1.51) -- (18.69,1.48) -- (18.68,1.45) -- (18.66,1.42) -- (18.65,1.4) -- (18.63,1.38) -- (18.61,1.36) -- (18.59,1.34) -- (18.58,1.32) -- (18.56,1.31) -- (18.54,1.29) -- (18.52,1.28) -- (18.5,1.27) -- (18.48,1.27) -- (18.45,1.26) -- (18.43,1.26) -- (18.41,1.26) -- (18.38,1.26) -- (18.36,1.27) -- (18.33,1.27) -- (18.31,1.28) -- (18.28,1.29) -- (18.25,1.3) -- (18.22,1.32) -- (18.19,1.34) -- (18.16,1.36) -- (18.13,1.38) -- (18.1,1.41) -- (18.07,1.44) -- (18.03,1.47) -- (18.02,1.48) -- (18.01,1.49) -- (18,1.5) -- (18,1.5) -- (18,1.5) -- (18,1.5) -- (18,1.5) -- (18,1.5) -- (18,1.5) -- (18,1.5) -- (18,1.5)(20,6) -- (20,6) -- (20,6) -- (20,6) -- (20,6) -- (20,6) -- (20,6) -- (20,6) -- (20,6) -- (20,6) -- (20,6) -- (20,6) -- (20,6) -- (20,6) -- (20,6) -- (19.99,6) -- (19.99,6) -- (19.98,6) -- (19.95,6) -- (19.92,5.99) -- (19.89,5.99) -- (19.86,5.98) -- (19.82,5.97) -- (19.79,5.96) -- (19.77,5.94) -- (19.74,5.93) -- (19.71,5.91) -- (19.68,5.89) -- (19.66,5.87) -- (19.63,5.84) -- (19.61,5.82) -- (19.58,5.79) -- (19.56,5.76) -- (19.54,5.73) -- (19.52,5.7) -- (19.49,5.67) -- (19.47,5.64) -- (19.45,5.6) -- (19.44,5.56) -- (19.42,5.52) -- (19.4,5.48) -- (19.38,5.44) -- (19.36,5.4) -- (19.35,5.36) -- (19.33,5.31) -- (19.32,5.26) -- (19.3,5.22) -- (19.29,5.17) -- (19.27,5.12) -- (19.26,5.07) -- (19.25,5.02) -- (19.24,4.97) -- (19.22,4.91) -- (19.21,4.86) -- (19.2,4.8) -- (19.19,4.75) -- (19.18,4.69) -- (19.17,4.63) -- (19.16,4.58) -- (19.15,4.52) -- (19.14,4.46) -- (19.13,4.4) -- (19.12,4.34) -- (19.12,4.3); \draw[line width=1.4pt,dash pattern=on 
2pt off 4pt,color=ccqqtt] (17.44,2.52) -- (17.44,2.54) -- (17.43,2.56) -- (17.42,2.58) -- (17.41,2.6) -- (17.4,2.63) -- (17.4,2.65) -- (17.39,2.67) -- (17.38,2.69) -- (17.37,2.72) -- (17.37,2.74) -- (17.36,2.76) -- (17.35,2.78) -- (17.34,2.81) -- (17.33,2.83) -- (17.33,2.85) -- (17.32,2.88) -- (17.31,2.9) -- (17.3,2.92) -- (17.3,2.95) -- (17.29,2.97) -- (17.28,3) -- (17.27,3.02) -- (17.26,3.05) -- (17.26,3.07) -- (17.25,3.1) -- (17.24,3.12) -- (17.23,3.15) -- (17.23,3.18) -- (17.22,3.2) -- (17.21,3.23) -- (17.2,3.25) -- (17.19,3.28) -- (17.19,3.31) -- (17.18,3.33) -- (17.17,3.36) -- (17.16,3.39) -- (17.15,3.42) -- (17.15,3.44) -- (17.14,3.47) -- (17.13,3.5) -- (17.12,3.53) -- (17.12,3.56) -- (17.11,3.59) -- (17.1,3.61) -- (17.09,3.64) -- (17.08,3.67) -- (17.08,3.7) -- (17.07,3.73) -- (17.06,3.76) -- (17.05,3.79) -- (17.05,3.82) -- (17.04,3.85) -- (17.03,3.88) -- (17.02,3.91) -- (17.01,3.94) -- (17.01,3.97) -- (17,3.99) -- (17,4) -- (17,4) -- (17,4) -- (17,4) -- (17,4) -- (17,4) -- (17,4) -- (17,4) -- (17,4) -- (17,4)(17.49,2.04)(18,1.5) -- (18,1.5) -- (18,1.5) -- (18,1.5) -- (18,1.5) -- (18,1.5) -- (18,1.5) -- (18,1.5) -- (18,1.5) -- (18,1.5) -- (18,1.5) -- (18,1.5) -- (18,1.5) -- (18,1.5) -- (18,1.5) -- (18,1.5) -- (18,1.5) -- (18,1.5) -- (17.99,1.51) -- (17.99,1.51) -- (17.98,1.52) -- (17.97,1.53) -- (17.97,1.54) -- (17.96,1.54) -- (17.95,1.55) -- (17.94,1.56) -- (17.93,1.57) -- (17.93,1.58) -- (17.92,1.59) -- (17.91,1.6) -- (17.9,1.61) -- (17.9,1.62) -- (17.89,1.63) -- (17.88,1.64) -- (17.87,1.65) -- (17.86,1.66) -- (17.86,1.67) -- (17.85,1.69) -- (17.84,1.7) -- (17.83,1.71) -- (17.83,1.72) -- (17.82,1.73) -- (17.81,1.74) -- (17.8,1.76) -- (17.79,1.77) -- (17.79,1.78) -- (17.78,1.79) -- (17.77,1.81) -- (17.76,1.82) -- (17.76,1.83) -- (17.75,1.85) -- (17.74,1.86) -- (17.73,1.88) -- (17.72,1.89) -- (17.72,1.9) -- (17.71,1.92) -- (17.7,1.93) -- (17.69,1.95) -- (17.68,1.96) -- (17.68,1.98) -- (17.67,1.99) -- (17.66,2.01) -- (17.65,2.03) -- (17.65,2.04) -- (17.64,2.06) -- (17.63,2.08) -- (17.62,2.09) -- (17.61,2.11) -- (17.61,2.13) -- (17.6,2.14) -- (17.59,2.16) -- (17.58,2.18) -- (17.58,2.19) -- (17.57,2.21) -- (17.56,2.23) -- (17.55,2.25) -- (17.54,2.27) -- (17.54,2.29) -- (17.53,2.3) -- (17.52,2.32) -- (17.51,2.34) -- (17.51,2.36) -- (17.5,2.38) -- (17.49,2.4) -- (17.48,2.42) -- (17.47,2.44) -- (17.47,2.46) -- (17.46,2.48) -- (17.45,2.5) -- (17.44,2.52) -- (17.44,2.52); \draw[line width=1.4pt,color=ccqqtt] (16.44,5.61) -- (16.44,5.62) -- (16.43,5.63) -- (16.42,5.65) -- (16.41,5.66) -- (16.4,5.67) -- (16.4,5.68) -- (16.39,5.7) -- (16.38,5.71) -- (16.37,5.72) -- (16.37,5.73) -- (16.36,5.74) -- (16.35,5.75) -- (16.34,5.77) -- (16.33,5.78) -- (16.33,5.79) -- (16.32,5.8) -- (16.31,5.81) -- (16.3,5.82) -- (16.3,5.83) -- (16.29,5.83) -- (16.28,5.84) -- (16.27,5.85) -- (16.26,5.86) -- (16.26,5.87) -- (16.25,5.88) -- (16.24,5.88) -- (16.23,5.89) -- (16.23,5.9) -- (16.22,5.91) -- (16.21,5.91) -- (16.2,5.92) -- (16.19,5.92) -- (16.19,5.93) -- (16.18,5.94) -- (16.17,5.94) -- (16.16,5.95) -- (16.15,5.95) -- (16.15,5.96) -- (16.14,5.96) -- (16.13,5.97) -- (16.12,5.97) -- (16.12,5.97) -- (16.11,5.98) -- (16.1,5.98) -- (16.09,5.98) -- (16.08,5.99) -- (16.08,5.99) -- (16.07,5.99) -- (16.06,5.99) -- (16.05,5.99) -- (16.05,6) -- (16.04,6) -- (16.03,6) -- (16.02,6) -- (16.01,6) -- (16.01,6) -- (16,6) -- (16,6) -- (16,6) -- (16,6) -- (16,6) -- (16,6) -- (16,6) -- (16,6) -- (16,6) -- (16,6) -- (16,6)(17,4) -- (17,4) -- (17,4) -- (17,4) -- (17,4) -- (17,4) -- (17,4) -- (17,4) -- (17,4) -- (17,4) -- (17,4) -- 
(17,4) -- (17,4) -- (17,4) -- (17,4) -- (17,4) -- (17,4) -- (17,4.01) -- (17,4.01) -- (16.99,4.02) -- (16.99,4.04) -- (16.98,4.07) -- (16.97,4.1) -- (16.97,4.13) -- (16.96,4.16) -- (16.95,4.19) -- (16.94,4.22) -- (16.93,4.25) -- (16.93,4.28) -- (16.92,4.31) -- (16.91,4.34) -- (16.9,4.37) -- (16.9,4.4) -- (16.89,4.42) -- (16.88,4.45) -- (16.87,4.48) -- (16.86,4.51) -- (16.86,4.53) -- (16.85,4.56) -- (16.84,4.59) -- (16.83,4.61) -- (16.83,4.64) -- (16.82,4.66) -- (16.81,4.69) -- (16.8,4.71) -- (16.79,4.74) -- (16.79,4.76) -- (16.78,4.79) -- (16.77,4.81) -- (16.76,4.84) -- (16.76,4.86) -- (16.75,4.88) -- (16.74,4.91) -- (16.73,4.93) -- (16.72,4.95) -- (16.72,4.97) -- (16.71,5) -- (16.7,5.02) -- (16.69,5.04) -- (16.68,5.06) -- (16.68,5.08) -- (16.67,5.1) -- (16.66,5.13) -- (16.65,5.15) -- (16.65,5.17) -- (16.64,5.19) -- (16.63,5.21) -- (16.62,5.23) -- (16.61,5.24) -- (16.61,5.26) -- (16.6,5.28) -- (16.59,5.3) -- (16.58,5.32) -- (16.58,5.34) -- (16.57,5.36) -- (16.56,5.37) -- (16.55,5.39) -- (16.54,5.41) -- (16.54,5.42) -- (16.53,5.44) -- (16.52,5.46) -- (16.51,5.47) -- (16.51,5.49) -- (16.5,5.51) -- (16.49,5.52) -- (16.48,5.54) -- (16.47,5.55) -- (16.47,5.57) -- (16.46,5.58) -- (16.45,5.59) -- (16.44,5.61); \draw (15,10) node[anchor=north west] {$\b S^2$}; \draw (20.5,10) node[anchor=north west] {$\b S^2$}; \draw (9.25,6.8) node[anchor=north west] {$\tau_d^{-1}\circ X(L_{E_1^d = -E_1^g})$}; \draw (14.8,-1.34) node[anchor=north west] {$(-e_1)$}; \draw (20.3,-1.34) node[anchor=north west] {$(-e_1)$}; \draw (15.3,4.4) node[anchor=north west] {$e_1$}; \draw (20.8,4.4) node[anchor=north west] {$e_1$}; \draw (11,4.64) node[anchor=north west] {$L_{E_1^d=-E_1^g}$}; \draw [color=black, line width=1.4pt] (18,1.5)-- ++(-4.5pt,0 pt) -- ++(9.0pt,0 pt) ++(-4.5pt,-4.5pt) -- ++(0 pt,9.0pt); \draw[color=black] (17.5,0.5) node {$(m,v)$}; \end{tikzpicture} \captionof{figure}{A positive intersection $(m,v)$ between the 2-chain $L_{E_1^d=-E_1^g}\times[-e_1,e_1]_{e_2^\Omega}$ and $\tau_d^{-1} \circ X (L_{E_1^d=-E_1^g})$ in $L_{E_1^d=-E_1^g}\times \b S^2$.\\}\label{orientation} \end{center}
\noindent It follows that $$ \begin{aligned} \langle \Omega(\bar\tau) , X(M) \rangle_{UM} &= - \langle [-e_1,e_1]_{e_2^\Omega}, \ P_{\b S^2} \circ \tau_d^{-1} \circ X(L_{E_1^d=-E_1^g}) \rangle_{\b S^2} \\ &= - lk_{\b S^2}\left(e_1-(-e_1), \ P_{\b S^2} \circ \tau_d^{-1} \circ X(L_{E_1^d=-E_1^g})\right), \end{aligned} $$ and, finally, $$ \begin{aligned} \langle \go P(\bar\tau,X), [0,1]\times X(M) \rangle_{[0,1]\times UM} &= lk_M\left(L_{E_1^d=X} + L_{E_1^g=X} \ , \ L_{E_1^d=-X} + L_{E_1^g=-X} \right) \\ &- lk_{\b S^2}\left(e_1-(-e_1), \ P_{\b S^2} \circ \tau_d^{-1} \circ X(L_{E_1^d=-E_1^g})\right). \end{aligned} $$ The statement follows since, by Lemma~\ref{lem_evaluationII} applied with $C_4(X,X)=[0,1]\times X(M)$, the class of $\go P(\bar\tau,X)$ in $H_2([0,1]\times UM ; \b Q) / H_T^\rho(M)$ is $\langle \go P(\bar\tau,X), [0,1]\times X(M) \rangle [S]$. \end{proof}
\begin{proof}[Proof of Lemma~\ref{lem1}] According to Lemmas~\ref{cor_reformcombingsb}~and~\ref{fond}, Lemma~\ref{lem1} is true for $p_1=\tilde p_1$. \end{proof}
From now on, if $X$ is a torsion combing of a compact oriented 3-manifold $M$ and if $\bar\tau$ is a pseudo-parallelization of $M$ compatible with $X$, then set $$ p_1([X],\bar\tau) = \tilde p_1([X],\bar\tau) \mbox{ \ and \ } p_1(\bar\tau,[X]) = \tilde p_1(\bar\tau,[X]). $$ As an obvious consequence, we get the following lemma. \begin{lemma} \label{lempluplus} Under the assumptions of Lemma \ref{cor_reformcombingsb}, in $H_2([0,1]\times UM ; \b Q)/H_T^\rho(M)$, the class of $\go P(\bar\tau,X)$ is $p_1(\bar\tau, [X])[S]$. \end{lemma}
\begin{proof}[Proof of Theorem~\ref{thm_defp1Xb}] Let $X_1$ and $X_2$ be torsion combings of two compact oriented 3-manifolds $M_1$ and $M_2$ with identified boundaries such that $X_1$ and $X_2$ coincide on the boundary. Let also $\bar\tau_1$ and $\bar\tau'_1$ be two pseudo-parallelizations of $M_1$ that extend the trivialization $\rho(X_1)$ and, similarly, let $\bar\tau_2$ be a pseudo-parallelization of $M_2$ that extends the trivialization $\rho(X_2)$. In such a context, let $$ \begin{aligned} p_1([X_1],[X_2])(\bar\tau_1,\bar\tau_2) &= p_1([X_1],\bar\tau_1) + p_1(\bar\tau_1,\bar\tau_2) + p_1(\bar\tau_2, [X_2]) \\ p_1([X_1],[X_2])(\bar\tau'_1,\bar\tau_2) &= p_1([X_1],\bar\tau'_1) + p_1(\bar\tau'_1,\bar\tau_2) + p_1(\bar\tau_2, [X_2]) \\ \end{aligned} $$ and note that $$ p_1([X_1],[X_2])(\bar\tau_1,\bar\tau_2) - p_1([X_1],[X_2])(\bar\tau'_1,\bar\tau_2) = p_1(\bar\tau_1,\bar\tau'_1) - \langle \go P(\bar\tau_1,\bar\tau'_1) , [0,1]\times X_1(M_1) \rangle. $$ Using Proposition~\ref{prop_ppara}, we get $p_1([X_1],[X_2])(\bar\tau_1,\bar\tau_2) - p_1([X_1],[X_2])(\bar\tau'_1,\bar\tau_2)=0$. In other words $p_1([X_1],[X_2])(\bar\tau_1,\bar\tau_2)$ is independent of $\bar\tau_1$. Similarly, it is also independent of $\bar\tau_2$ so that we can drop the pseudo-parallelizations from the notation. Eventually, using Lemma~\ref{lem1}, we get the formula of the statement. \\
For the second part of the statement, if $M_1$ and $M_2$ are closed, conclude with Proposition~\ref{cor_reformcombings}, which ensures that $p_1(\bar\tau_i,[X_i]) = p_1([X_i])-p_1(\bar\tau_i)$ for $i\in \lbrace 1,2 \rbrace$. \end{proof}
Let us now end this section by proving Theorem \ref{formuleplus} and Theorem \ref{GM}, starting with the following lemma.
\begin{lemma} \label{lemmaplus1}
Let $(X,\rho)$ and $(Y,\rho)$ be $\partial$-compatible torsion combings of a compact oriented 3-manifold $M$. Let $C_4(X,Y)$ and $C_4(-X,-Y)$ be 4-chains of $[0,1]\times UM$ as in Lemma \ref{lem_evaluationII}. The class of $\go P(X,Y) \hspace{-1mm}=\hspace{-1mm} 4\big(C_4(X,Y) \cap C_4(-X,-Y)\big)$ in the space $H_2([0,1]\times UM ; \b Q)/H_T^\rho(M)$ reads $p_1([X],[Y]) [S]$ where $[S]$ is the homology class of the fiber of $UM$ in $H_2([0,1]\times UM ; \b Q)$. \end{lemma} \begin{proof} Let $\bar \tau$ be a pseudo-parallelization of $M$ compatible with $X$ and $Y$. By Theorem \ref{thm_defp1Xb}, in $H_2([0,1]\times UM ; \b Q)/H_T^\rho(M)$ : $$
p_1([X],[Y])[S] = \big(p_1([X],\bar \tau) + p_1(\bar \tau,[Y]) \big)[S] $$ Then, using Lemma \ref{lempluplus}, $$ \begin{aligned}
p_1([X],[Y])[S] &= [\go P(X,\bar \tau) + \go P(\bar \tau, Y)]\\
&= [4(C_4^+(X,\bar \tau)\cap C_4^-(X,\bar \tau)) + 4(C_4^+(\bar \tau, Y)\cap C_4^-(\bar \tau, Y)) ]. \end{aligned} $$ Hence, reparameterizing and stacking $C_4^+(X,\bar\tau)$ and $C_4^+(\bar\tau,Y)$, resp. $C_4^-(X,\bar\tau)$ and $C_4^-(\bar\tau,Y)$, we get a 4-chain $C_4(X,Y)$, resp. $C_4(-X,-Y)$, as in Lemma \ref{lem_evaluationII} and $$
p_1([X],[Y])[S]= 4 \cdot [C_4(X,Y)\cap C_4(-X,-Y)]. $$
To conclude the proof, see that if $C'_4(X,Y)$ is a 4-chain of $[0,1]\times UM$ with the same boundary as $C_4(X,Y)$, then $C'_4(X,Y)-C_4(X,Y)$ is homologous to a 4-cycle in $\lbrace 0 \rbrace \times UM$ so that $$ \begin{aligned} &[C'_4(X,Y) \cap C_4(-X,-Y)] - [C_4(X,Y) \cap C_4(-X,-Y)] \\ &= \left[\big(C'_4(X,Y)- C_4(X,Y) \big) \cap C_4(-X,-Y)\right] \end{aligned} $$ sits in $H_T^\rho(M)$. So, the class $[C_4(X,Y)\cap C_4(-X,-Y)]$ in $H_2([0,1]\times UM ; \b Q)/H_T^\rho(M)$ is independent of the choices for $C_4(X,Y)$. Similarly, it is independent of the choices for $C_4(-X,-Y)$. \end{proof}
\begin{proof}[Proof of Theorem \ref{formuleplus}] According to Lemma \ref{lemmaplus1}, it is enough to evaluate the class of the chain $4\big(C_4(X,Y) \cap C_4(-X,-Y)\big)$ in $H_2([0,1]\times UM ; \b Q)/H_T^\rho(M)$ where $\rho = \rho(X)$ and where $C_4(X,Y)$ and $C_4(-X,-Y)$ are 4-chains of $[0,1]\times UM$ as in Lemma~\ref{lem_evaluationII}. Let us consider the 4-chains $$ \begin{aligned}
C_4(X,Y) &= \bar F_{t_1}(X,Y) - \lbrace t_1 \rbrace \times UM_{|\Sigma_{X=-Y}},\\
C_4(-X,-Y) &= \bar F_{t_2}(-X,-Y) - \lbrace t_2 \rbrace \times UM_{|\Sigma_{-X=Y}} , \end{aligned} $$ where $0<t_1<t_2<1$, and where $\bar F_{t_1}(X,Y)$, resp. $\bar F_{t_2}(-X,-Y)$, is a 4-chain as in Definition~\ref{def_Ft} and $\Sigma_{X=-Y}$, resp. $\Sigma_{-X=Y}$, is a 2-chain of $M$ bounded by $L_{X=-Y}$, resp. $L_{-X=Y}$, provided by Proposition~\ref{prop_linksinhomologyI}. With these chains, $$ \begin{aligned} &C_4(X,Y) \cap C_4(-X,-Y) \\ &= -\lbrace t_1 \rbrace \times (-X)(\Sigma_{X=-Y}) + [t_1, t_2] \times (-X)(L_{-X=Y}) - \lbrace t_2 \rbrace \times Y(\Sigma_{-X=Y}). \end{aligned} $$ Hence, using Lemma \ref{lem_evaluationII} with $[0,1]\times X(M)$, in $H_2([0,1]\times UM ; \b Q)/H_T^\rho(M)$ : $$ \begin{aligned}
[C_4(X,Y) \cap C_4(-X,-Y)] &= \langle C_4(X,Y)\cap C_4(-X,-Y) , [0,1] \times X(M) \rangle_{[0,1]\times UM} [S] \\
&= lk(L_{X=Y},L_{X=-Y}) [S]. \end{aligned} $$ \end{proof}
\begin{proof}[Proof of Theorem \ref{GM}] If $X$ and $Y$ are homotopic relative to the boundary, then $p_1([X],[Y])=0$. \linebreak Conversely, consider two combings $X$ and $Y_0$ in the same $\mbox{spin$^c$}$-structure and assume that \linebreak $p_1([X],[Y_0])=0$. Since $Y_0$ is in the same $\mbox{spin$^c$}$-structure as $X$, there exists a homotopy from $Y_0$ to a combing $Y_1$ that coincides with $X$ outside a ball $\l B$ in $\mathring M$. \\
Let $\sigma$ be a unit section of $X^{\perp}_{|\l B}$, and let $(X, \sigma, X\wedge\sigma)$ denote the corresponding parallelization over $\l B$. Extend the unit section $\sigma$ as a generic section of $X^{\perp}$ such that $\sigma_{|\partial M}=\sigma(X)$, and deform $Y_1$ to $Y$ where $$ Y(m)=\frac{Y_1(m) + \chi(m) \sigma(m)}{\parallel Y_1(m) + \chi(m)\sigma(m)\parallel} $$ for a smooth map $\chi$ from $M$ to $[0,\varepsilon]$, such that $\chi^{-1}(0)=\partial M$ and $\chi$ maps the complement of a neighborhood of $\partial M$ to $\varepsilon$, where $\varepsilon$ is a small positive real number. The link $L_{X=Y}$ is the disjoint union of $L_{X=Y}\cap \l B$ and a link $L_2$ of $M \setminus \l B$, the link $L_{X=-Y}$ sits in $\l B$, and $$ 0=p_1(X,Y_0)=p_1(X,Y)= 4lk(L_{X=Y},L_{X=-Y}) $$ where $lk(L_{X=Y},L_{X=-Y})=lk(L_{X=Y}\cap \l B,L_{X=-Y})=0$.\\
The parallelization $(X, \sigma, X\wedge\sigma)$ turns the restriction $Y_{|\l B}$ into a map from the ball $\l B$ to $\b S^2 = \b S(\b R X \oplus \b R \sigma \oplus \b R X\wedge\sigma)$ that is constant on $\partial \l B$, thus into a map from $\l B / \partial \l B \simeq \b S^3$ to $\b S^2$. To prove Theorem \ref{GM}, it suffices to prove that this map is homotopic to the constant map, and hence that it represents $0$ in $\pi_3(\b S^2) \simeq \b Z$. \\
There is a classical isomorphism from $\pi_3(\b S^2)$ to $\b Z$ that maps the class of a map $g$ from $\b S^3$ to $\b S^2$ to the linking number of the preimages under $g$ of two regular values of $g$ (see \cite{hopf} and \cite[Theorem 2]{pontrjagin}). It is easy to check that this map is well-defined, depends only on the homotopy class of $g$, and is a group morphism on $\pi_3(\b S^2)$ that maps the class of the Hopf fibration $\left((z_1,z_2) \in (\b S^3 \subset \b C^2) \mapsto (\sfrac{z_1}{z_2}) \in (\b C P^1=\b S^2) \right)$ to $\pm 1$. Therefore it is an isomorphism from $\pi_3(\b S^2)$ to $\b Z$. Since the class of $Y_{|\l B}$ is in the kernel of this isomorphism, it is homotopically trivial, so that $Y$ is homotopic to a constant on $\l B$ relative to the boundary of $\l B$, and $Y_0$ is homotopic to $X$ on $M$ relative to the boundary of $M$. \end{proof}
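Explicitly, for the Hopf fibration above, the preimages of the two points $0$ and $\infty$ of $\b C P^1$ are the great circles $\b S^3 \cap (\lbrace 0 \rbrace \times \b C)$ and $\b S^3 \cap (\b C \times \lbrace 0 \rbrace)$, respectively. These two fibers form a Hopf link in $\b S^3$, whose linking number is $\pm 1$, the sign depending on the chosen orientation conventions.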
\section[Variation of Pontrjagin numbers under LP$_\b Q$-surgeries]{Variation of Pontrjagin numbers \\ under LP$_\b Q$-surgeries} \subsection{For pseudo-parallelizations} In this subsection we recall the variation formula and the finite type property of Pontrjagin numbers of pseudo-parallelizations, which are contained in \cite[Section 11]{lescopcube}. \begin{proposition} \label{prop_FTIppara} \noindent For $M$ a compact oriented 3-manifold and $(\sfrac{B}{A})$ an LP$_\b Q$-surgery in $M$, if ${\bar\tau}_{M}$ and ${\bar\tau}_{M(\sfrac{B}{A})}$ are pseudo-parallelizations of $M$ and $M(\sfrac{B}{A})$ which coincide on $M\setminus \mathring A$ and coincide with a genuine parallelization on $\partial A$, then $$
p_1({\mbox{$\overline{\tau}$}}_{M(\sfrac{B}{A})},{\mbox{$\overline{\tau}$}}_{M}) = p_1({{\mbox{$\overline{\tau}$}}_{M(\sfrac{B}{A})}}_{|B},{{\mbox{$\overline{\tau}$}}_{M}}_{|A}). $$ \end{proposition} \begin{proof}
Let $W^-$ be a signature zero cobordism from $A$ to $B$. By definition, the obstruction $p_1({{\mbox{$\overline{\tau}$}}_{M(\sfrac{B}{A})}}_{|B},{{\mbox{$\overline{\tau}$}}_{M}}_{|A})$ is the Pontrjagin obstruction to extending the complex trivialization \linebreak $\tau({{\mbox{$\overline{\tau}$}}_{M(\sfrac{B}{A})}}_{|B},{{\mbox{$\overline{\tau}$}}_{M}}_{|A})$ of $TW^-_{\hspace{-1mm}|\partial W^-} \hspace{-1mm} \otimes \b C$ as a trivialization of $TW^- \hspace{-1mm}\otimes \b C$. Let $W^+ \hspace{-1.5mm}=\hspace{-1mm} [0,1]\times (M\setminus \mathring A)$ and let $V=-[0,1]\times\partial A$. As shown in \cite[Proof of Proposition 5.3 item 2]{lescopEFTI}, since $(\sfrac{B}{A})$ is an LP$_\b Q$-surgery in $M$, the manifold $W=W^+\cup_V W^-$ has signature zero. Furthermore, since $W$ has signature zero, $p_1({\mbox{$\overline{\tau}$}}_{M(\sfrac{B}{A})},{\mbox{$\overline{\tau}$}}_{M})$ is the Pontrjagin obstruction to extending the triviali\-zation $\tau({\mbox{$\overline{\tau}$}}_{M(\sfrac{B}{A})},{\mbox{$\overline{\tau}$}}_{M})$ of $TW_{|\partial W} \otimes \b C$ as a trivialization of $TW\otimes \b C$. Finally, it is clear that $\tau({\mbox{$\overline{\tau}$}}_{M(\sfrac{B}{A})},{\mbox{$\overline{\tau}$}}_{M})$ extends as a trivialization of $TW_{|W^+} \otimes \b C$ so that $$
p_1({\mbox{$\overline{\tau}$}}_{M(\sfrac{B}{A})},{\mbox{$\overline{\tau}$}}_{M}) = p_1({{\mbox{$\overline{\tau}$}}_{M(\sfrac{B}{A})}}_{|B},{{\mbox{$\overline{\tau}$}}_{M}}_{|A}). $$
\end{proof}
\begin{corollary} \label{cor_FTppara} Let $M$ be a compact oriented 3-manifold and let $\lbrace \sfrac{B_i}{A_i} \rbrace_{i\in \lbrace 1, \ldots , k\rbrace}$ be a family of disjoint LP$_\b Q$-surgeries in $M$, where $k\geqslant2$. For any family $\lbrace \bar \tau^I \rbrace_{I \subset \lbrace 1, \ldots , k \rbrace}$ of pseudo-parallelizations of the $\lbrace M(\lbrace \sfrac{B_i}{A_i}\rbrace_{i\in I}) \rbrace_{I \subset \lbrace 1, \ldots , k \rbrace}$ whose links sit in $M\setminus(\cup_{i=1}^k \partial A_i)$ and such that, for all subsets $I, J\subset \lbrace 1, \ldots , k \rbrace$, $\bar \tau^I$ and $\bar \tau^J$ coincide on $(M\setminus \cup_{i\in I\cup J}A_i)\cup_{i\in I\cap J} B_i$, the following identity holds: $$ \sum_{I \subset \lbrace 2, \ldots , k \rbrace} (-1)^{\card(I)} p_1 (\bar \tau ^I,\bar \tau ^{I\cup\lbrace 1 \rbrace})=0. $$ \end{corollary}
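For instance, for $k=2$ the identity reduces to $p_1(\bar\tau^{\emptyset},\bar\tau^{\lbrace 1 \rbrace})=p_1(\bar\tau^{\lbrace 2 \rbrace},\bar\tau^{\lbrace 1,2 \rbrace})$, which is the form in which it is used in the proof of Theorem~\ref{thm_D2nd} below.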
\subsection[Lemmas for the proof of Theorem~\ref{thm_D2nd}]{Lemmas for the proof of Theorem~\ref{thm_D2nd}} \begin{lemma} \label{lem_Xdsection}
If $X$ is a combing of a compact oriented 3-manifold $M$ and if $\partial A$ is the connected boundary of a submanifold of $M$ of dimension 3, then the normal bundle $X^\perp_{|\partial A}$ admits a nonvanishing section. \end{lemma} \begin{proof}
Parallelize $M$ so that $X$ induces a map $X_{|\partial A} : \partial A \rightarrow \b S^2$. This map must have degree 0 since $X$ extends this map to $A$ (so that $(X_{|\partial A})_* : H_2(\partial A;\b Q)\rightarrow H_2(\b S^2;\b Q)$ factors through the inclusion $H_2(\partial A ; \b Q) \rightarrow H_2(A;\b Q)$, which is zero). It follows that $X_{|\partial A}$ is homotopic to the map $(m \in\partial A \mapsto e_1 \in \b S^2)$ whose normal bundle admits a nonvanishing section. \end{proof}
\begin{lemma} \label{ind}
Let $(M,X)$ be a compact oriented 3-manifold equipped with a combing, let $(\sfrac{B}{A},X_B)$ be an LP$_\b Q$-surgery in $(M,X)$ and let $\sigma$ be a nonvanishing section of $X^\perp_{|\partial A}$. Let $P$ stand for Poincaré duality isomorphisms and recall the sequence of isomorphisms induced by the inclusions $i^{A}$ and $i^{B}$ $$ H_1(A;\b Q) \stackrel{i^{A}_*}{\longleftarrow} \frac{H_1(\partial A;\b Q)}{\go L_{A}} = \frac{H_1(\partial B;\b Q)}{\go L_{B}} \stackrel{i^{B}_*}{\longrightarrow} H_1(B;\b Q). $$
The class $ \left(i^A_* \circ {(i^B_*)}^{-1}\left(P(e_2^B(X_B^\perp, \sigma))\right) - P(e_2^A(X_{|A}^\perp, \sigma))\right)$ in $H_1( A ; \b Q)$ is independent of the choice of the section $\sigma$. \end{lemma} \begin{proof}
Let us drop the inclusions $i^B_*$ and $i^A_*$ from the notation. According to Proposition~\ref{prop_euler}, the class $P(e_2^A(X_{|A}^\perp, \sigma))$ verifies $$
[ X(A) - (-X)(A) +\l H_{X,\sigma}^{-X}(\partial A \times [0,1]) ] = [ P(e_2^A(X_{|A}^\perp, \sigma)) \times \b S^2 ] \mbox{ \ in \ } H_1(UA ;\b Q). $$
It follows that, for another choice $\sigma'$ of section of $X^\perp_{|\partial A}$, $$ \begin{aligned} [P(e_2^B(X_B^\perp, \sigma)) \times \b S^2 & - P(e_2^B(X_B^\perp, \sigma')) \times \b S^2 ] \\ &= [\l H_{X,\sigma}^{-X}(\partial A\times [0,1])-\l H_{X,\sigma'}^{-X}(\partial A\times [0,1])] \\
&=[P(e_2^A(X_{|A}^\perp, \sigma)) \times \b S^2 - P(e_2^A(X_{|A}^\perp, \sigma')) \times \b S^2 ]. \end{aligned} $$ \end{proof}
\begin{lemma} \label{zero} Let $(M,X)$ be a compact oriented 3-manifold equipped with a combing and let $(\sfrac{B}{A},X_B)$ be an LP$_\b Q$-surgery in $(M,X)$. If $(X,\sigma)$ is a torsion combing then $(X(\sfrac{B}{A}),\sigma)$ is a torsion combing if and only if $$
i_*^{A} \circ (i_*^B)^{-1} \big( P(e_2^B({X_{B}}^\perp, \zeta))\big) - P(e_2^A(X_{|A}^\perp, \zeta)) = 0 \mbox{ \ in $H_1(M; \b Q)$} $$
for some nonvanishing section $\zeta$ of $X^\perp_{|\partial A}$. \end{lemma} \begin{proof} By definition, we have $$ \begin{aligned}
P(e_2(X^\perp,\sigma)) &= P(e_2^A(X_{|A}^\perp, \zeta)) + P(e_2^{M \setminus \mathring A}(X^\perp,\sigma \cup \zeta)) \\
P(e_2({X(\sfrac{B}{A})}^\perp,\sigma)) &= P(e_2^B(X_{B}^\perp, \zeta)) + P(e_2^{M \setminus \mathring A}(X'^\perp,\sigma \cup \zeta)) \\ \end{aligned} $$
where $\zeta$ is any nonvanishing section of $X^\perp_{|\partial A}$. So, it follows that, using appropriate identifications, $$
P(e_2(X(\sfrac{B}{A})^\perp,\sigma)) - P(e_2({X}^\perp,\sigma)) = P(e_2^B(X_{B}^\perp, \zeta)) - P(e_2^A(X_{|A}^\perp, \zeta)). $$ If $X$ is a torsion combing, then $P(e_2({X}^\perp,\sigma))$ is rationally null-homologous in $M$. Hence, $X(\sfrac{B}{A})$ is a torsion combing if and only if $$
i_*^{A} \circ (i_*^B)^{-1} \big( P(e_2^B({X_{B}}^\perp, \zeta))\big) - P(e_2^A(X_{|A}^\perp, \zeta)) = 0 \mbox{ \ in $H_1(M; \b Q)$}. $$ \end{proof}
\begin{lemma} \label{norm} Let $(M,X)$ be a compact oriented 3-manifold equipped with a combing. Let $\lbrace (\sfrac{B_i}{A_i} , X_{B_i}) \rbrace_{i \in \lbrace 1,\ldots,k\rbrace}$ be a family of disjoint LP$_\b Q$-surgeries in $(M,X)$, where $k \in \b N\setminus \lbrace 0 \rbrace$. For all $I\subset \lbrace 1, \ldots, k \rbrace$, let $M_I=M(\lbrace \sfrac{B_i}{A_i} \rbrace_{i \in I})$ and $X^I=X(\lbrace \sfrac{B_i}{A_i}\rbrace_{i \in I})$. There exists a family of pseudo-parallelizations $\lbrace\bar \tau^I\rbrace_{ I \subset \lbrace 1,\ldots,k\rbrace}$ of the $\lbrace M_I \rbrace_{I \subset \lbrace 1, \ldots , k \rbrace}$ such that : \begin{enumerate}[(i)] \item the third vector of $\bar\tau=\bar\tau^\emptyset$ coincides with $X$ on $\cup_{i=1}^k \partial A_i$, \item for all $I \subset \lbrace 1 , \ldots , k \rbrace$, $\bar\tau^I$ is compatible with $X^I$, \item for all $I \subset \lbrace 1,\ldots,k\rbrace$, if $\gamma^I$ denotes the link of $\bar \tau_I$, then $N(\gamma^I) \cap \left(\cup_{i=1}^k \partial A_i \right) = \emptyset$, \item for all $I, J \subset \lbrace 1,\ldots,k\rbrace$, $\bar\tau^I$ and $\bar\tau^J$ coincide on $(M\setminus \cup_{i\in I\cup J} A_i)\cup_{i\in I\cap J}B_i$, \item there exist links $L^\pm_{A_i}$ in $A_i$, $L^\pm_{B_i}$ in $B_i$ and $L^\pm_{ext}$ in $M \setminus \cup_{i=1}^k \mathring A_i$ such that, for all subset $I \subset \lbrace 1, \ldots , k \rbrace$~: $$ \begin{aligned} & 2 \cdot L_{\bar\tau^I=X^I} = L_{{E^d}^I=X^I} + L_{{E^g}^I=X^I} = L^+_{ext} + \sum_{i \in I} L^+_{B_i} + \sum_{i \in \lbrace 1,\ldots , k \rbrace \setminus I} L^+_{A_i} , \\ & 2 \cdot L_{\bar\tau^I=-X^I} = L_{{E^d}^I=-X^I} + L_{{E^g}^I=-X^I} = L^-_{ext} + \sum_{i \in I} L^-_{B_i} + \sum_{i \in \lbrace 1,\ldots , k \rbrace \setminus I} L^-_{A_i}, \end{aligned} $$ where ${E^d}^I$ and ${E^g}^I$ are the Siamese sections of $\bar\tau^I$. \end{enumerate} \end{lemma} \begin{proof}
Let $\l C$ denote a collar of $\cup_{i=1}^k \partial A_i$. Using Lemma~\ref{lem_Xdsection}, construct a trivialization $\tau_e$ of $\cup_{i=1}^k TM_{|\l C}$ so that its third vector coincides with $X$ on $\l C$. Then use Lemma~\ref{lem_extendpparallelization} to extend $\tau_e$ as pseudo-parallelizations of the $\lbrace A_i \rbrace_{i\in \lbrace 1,\ldots , k \rbrace }$, of the $\lbrace B_i \rbrace_{i\in \lbrace 1,\ldots , k \rbrace}$ and of $M\setminus(\cup_{i=1}^k \mathring A_i)$. Finally, use these pseudo-parallelizations to construct the pseudo-parallelizations of the 3-manifolds $\lbrace M_I \rbrace_{I \subset \lbrace 1,\ldots , k \rbrace}$ as in the statement. \end{proof}
\begin{lemma} \label{inter} In the context of Lemma~\ref{norm}, using the sequence of isomorphisms induced by the inclusions $i^{A_i}$ and $i^{B_i}$ $$ H_1(A_i;\b Q) \stackrel{i^{A_i}_*}{\longleftarrow} \frac{H_1(\partial A_i;\b Q)}{\go L_{A_i}} = \frac{H_1(\partial B_i;\b Q)}{\go L_{B_i}} \stackrel{i^{B_i}_*}{\longrightarrow} H_1(B_i;\b Q), $$ for all $i \in \lbrace 1 , \ldots , k \rbrace$, we have that, in $H_1(A_i ; \b Q)$, $$
\left[i_{\ast}^{A_i} \circ \left(i_{\ast}^{B_i}\right)^{-1}\left( L_{B_i}^{\pm}\right)- L_{A_i}^\pm\right] = \pm \left( i^{A_i}_* \circ(i^{B_i}_*)^{-1}\big(P(e_2^{B_i}(X_{B_i}^\perp, \sigma_i))\big) - P(e_2^{A_i}(X_{|A_i}^\perp, \sigma_i)) \right) $$
where $\sigma_i$ is any nonvanishing section of $X^\perp_{|\partial A_i}$. \end{lemma} \begin{proof}
Let us drop the inclusions $i^B_*$ and $i^A_*$ from the notation. Let $i \in \lbrace 1,\ldots,k \rbrace$. According to Lemma~\ref{zero}, it is enough to prove the statement for a particular non vanishing section $\sigma_i$ of $X^\perp_{|\partial A_i}$. Recall that $A_i$ is equipped with a combing $X_{|A_i}$ and a pseudo-parallelization \linebreak $\bar\tau_{|A_i}=(N(\gamma\cap A_i); {\tau_e}_{|A_i}, {\tau_d}_{|A_i}, {\tau_g}_{|A_i})$ such that $X_{|\partial A_i}$ coincides with ${E_3^e}_{|\partial A_i}$ where $\tau_e=(E_1^e,E_2^e,E_3^e)$. Furthermore, $$
L^+_{A_i} = 2\cdot L_{\bar\tau_{|A_i}= X_{|A_i}} \mbox{ \ and \ } L^-_{A_i} = 2\cdot L_{\bar\tau_{|A_i}= -X_{|A_i}}. $$
Construct a pseudo-parallelization $\check\tau = (N(\check\gamma) ; \check \tau_e ,\check \tau_d ,\check \tau_g)$ of $A_i$ by modifying $\bar\tau$ as follows so that $\check\tau$ and $X$ coincide on $\partial A_i$. Consider a collar $\l C=[0,1]\times \partial A_i$ of $\partial A_i$ such that $\lbrace 1 \rbrace \times \partial A_i = \partial A_i$ and $\l C \cap \gamma = \emptyset$. Without loss of generality, assume that $X_{|A_i}$ coincides with $E_3^e$ on the collar $\l C$. Let $\check\tau$ coincide with $\bar\tau_{|A_i}$ on $\overline{A_i \setminus \l C}$. End the construction of $\check\tau$ by requiring $$ \forall (s,b) \in \l C = [0,1] \times \partial A_i, \ \forall v \in \b S^2 \ : \ \check\tau_e( (s,b),v) = \tau_e \left((s,b), R_{e_2, \frac{-\pi s}{2}}(v)\right). $$
Note that $\check\tau$ and $X_{|A_i}$ are compatible and that $$
L^+_{A_i} = 2\cdot L_{\check\tau= X_{|A_i}} \mbox{ \ and \ } L^-_{A_i} = 2\cdot L_{\check\tau= -X_{|A_i}}. $$ Using $\check\tau$ and Proposition~\ref{prop_linksinhomologyI}, we get $$ \begin{aligned}
[L^+_{A_i}] &= P(e_2^{A_i}(X^\perp_{|A_i}, {\check E^e}_{2|\partial A_i})) + \frac{1}{2} P(e_2^{A_i}({\check{E}}^{d \perp}_{1}, {\check E^e}_{2|\partial A_i})) + \frac{1}{2} P(e_2^{A_i}({\check{E}}^{g \perp}_{1}, {\check E^e}_{2|\partial A_i})) \\
[L^-_{A_i}] &= - P(e_2^{A_i}(X^\perp_{|A_i}, {\check E^e}_{2|\partial A_i})) + \frac{1}{2} P(e_2^{A_i}({\check{E}}^{d \perp}_{1}, {\check E^e}_{2|\partial A_i})) + \frac{1}{2} P(e_2^{A_i}({\check{E}}^{g \perp}_{1}, {\check E^e}_{2|\partial A_i})) \end{aligned} $$ where $\check\tau_e=(\check E_1^e,\check E_2^e,\check E_3^e)$ and where ${\check E_1}^d$ and ${\check E_1}^g$ are the Siamese sections of $\check\tau$. By construction, it follows that $$ \begin{aligned}
[L^+_{A_i}] &= P(e_2^{A_i}(X^\perp_{|A_i}, {E_2^e}_{|\partial A_i})) + \frac{1}{2} P(e_2^{A_i}({E_1^d}^\perp_{|A_i}, {E_2^e}_{|\partial A_i})) + \frac{1}{2} P(e_2^{A_i}({E_1^g}^\perp_{|A_i}, {E_2^e}_{|\partial A_i})) \\
[L^-_{A_i}] &= -P(e_2^{A_i}(X^\perp_{|A_i}, {E_2^e}_{|\partial A_i})) + \frac{1}{2} P(e_2^{A_i}({E_1^d}^\perp_{|A_i}, {E_2^e}_{|\partial A_i})) + \frac{1}{2} P(e_2^{A_i}({E_1^g}^\perp_{|A_i}, {E_2^e}_{|\partial A_i})) \end{aligned} $$ where $E_1^d$ and $E_1^g$ are the Siamese sections of $\bar\tau$. Using the same method, we also get that $$ \begin{aligned}
[L^+_{B_i}] &= P(e_2^{B_i}(X^\perp_{B_i}, {E_2^e}_{|\partial A_i})) + \frac{1}{2} P(e_2^{B_i}({E_1^d}^\perp_{|B_i}, {E_2^e}_{|\partial A_i})) + \frac{1}{2} P(e_2^{B_i}({E_1^g}^\perp_{|B_i}, {E_2^e}_{|\partial A_i})) \\
[L^-_{B_i}] &= -P(e_2^{B_i}(X^\perp_{B_i}, {E_2^e}_{|\partial A_i})) + \frac{1}{2} P(e_2^{B_i}({E_1^d}^\perp_{|B_i}, {E_2^e}_{|\partial A_i})) + \frac{1}{2} P(e_2^{B_i}({E_1^g}^\perp_{|B_i}, {E_2^e}_{|\partial A_i})). \end{aligned} $$ Conclude with Lemma~\ref{simppara}. \end{proof}
\subsection[Variation formula for torsion combings]{Variation formula for torsion combings} \begin{proof}[Proof of Theorem~\ref{thm_D2nd}] Let $(M,X)$ be a compact oriented 3-manifold equipped with a combing. Let $\lbrace (\sfrac{B_i}{A_i},X_{B_i}) \rbrace_{i\in \lbrace 1, 2\rbrace}$ be two disjoint LP$_\b Q$-surgeries in $(M,X)$ and assume that, for all subset $I\subset\lbrace 1,2 \rbrace$, $X^I=X(\lbrace\sfrac{B_i}{A_i}\rbrace_{i \in I})$ is a torsion combing of the 3-manifold $M_I = M(\lbrace \sfrac{B_i}{A_i} \rbrace_{i \in I})$. Note that, for all $I,J\subset \lbrace 1,2 \rbrace$, $X^I$ and $X^J$ coincide on $(M\setminus \cup_{i\in I\cup J} A_i)\cup_{i\in I\cap J}B_i$. Finally, let $\lbrace \bar\tau^I \rbrace_{I\subset \lbrace 1,2\rbrace}$ be a family of pseudo-parallelizations as in Lemma~\ref{norm}, let ${E_1^d}^I$ and ${E_1^g}^I$ denote the Siamese sections of $\bar\tau^I$ for all $I\subset \lbrace 1,2 \rbrace$ and let $L_I$ stand for $L_{{E_1^d}^I=-{E_1^g}^I}$. Using Corollary~\ref{cor_FTppara}, we have $$ \begin{aligned} p_1 \left([X^{\lbrace 2 \rbrace}],[X^{\lbrace 1,2 \rbrace}]\right) - p_1\left([X],[X^{\lbrace 1 \rbrace}]\right) &= p_1 \left([X^{\lbrace 2 \rbrace}],[X^{\lbrace 1,2 \rbrace}]\right)- p_1 \left(\bar\tau^{\lbrace 2 \rbrace},\bar\tau^{\lbrace 1,2 \rbrace}\right) \\ & - p_1\left([X],[X^{\lbrace 1 \rbrace}]\right) + p_1\left(\bar\tau,\bar\tau^{\lbrace 1 \rbrace}\right) \\ \end{aligned} $$ which, using Lemma~\ref{lem1} and Theorem~\ref{thm_defp1Xb}, reads $$ \begin{aligned} & 4 \cdot lk_{M_{\lbrace1,2\rbrace}}(L_{\bar\tau^{\lbrace 1,2 \rbrace}=X^{\lbrace 1,2 \rbrace}} \ , \ L_{\bar\tau^{\lbrace 1,2 \rbrace}=-X^{\lbrace 1,2 \rbrace}}) - 4 \cdot lk_{M_{\lbrace 2 \rbrace}}(L_{\bar\tau^{\lbrace 2 \rbrace}=X^{\lbrace 2 \rbrace}} \ , \ L_{\bar\tau^{\lbrace 2 \rbrace}=-X^{\lbrace 2 \rbrace}}) \\ & - lk_{\b S^2} \left(e_1-(-e_1) \ , \ P_{\b S^2} \circ (\tau^{\lbrace 1,2 \rbrace}_d)^{-1} \circ X^{\lbrace 1,2 \rbrace}(L_{\lbrace 1,2 \rbrace}) - P_{\b S^2} \circ (\tau^{\lbrace 2 \rbrace}_d)^{-1} \circ X^{\lbrace 2 \rbrace}(L_{\lbrace 2 \rbrace})\right)\\ &- 4 \cdot lk_{M_{\lbrace 1 \rbrace}}(L_{\bar\tau^{\lbrace 1 \rbrace}=X^{\lbrace 1 \rbrace}} \ , \ L_{\bar\tau^{\lbrace 1 \rbrace}=-X^{\lbrace 1 \rbrace}}) + 4 \cdot lk_{M}(L_{\bar\tau=X} \ , \ L_{\bar\tau=-X}) \\ & + lk_{\b S^2} \left(e_1-(-e_1) \ , \ P_{\b S^2} \circ (\tau^{\lbrace 1 \rbrace}_d)^{-1} \circ X^{\lbrace 1 \rbrace}(L_{\lbrace 1 \rbrace}) - P_{\b S^2} \circ (\tau_d)^{-1} \circ X(L) \right).
\end{aligned} $$ This can further be reduced to the following by using Lemma~\ref{norm}, $$ \begin{aligned} & lk_M \left( L^+_{ext} + L^+_{A_1} + L^+_{A_2}, \ L^-_{ext} + L^-_{A_1} + L^-_{A_2} \right) \\ &- lk_{M_{\lbrace 1 \rbrace}} \left( L^+_{ext} + L^+_{B_1} + L^+_{A_2}, \ L^-_{ext} + L^-_{B_1} + L^-_{A_2} \right) \\ &- lk_{M_{\lbrace 2 \rbrace}} \left( L^+_{ext} + L^+_{A_1} + L^+_{B_2}, \ L^-_{ext} + L^-_{A_1} + L^-_{B_2} \right) \\ &+ lk_{M_{\lbrace 1,2 \rbrace}} \left( L^+_{ext} + L^+_{B_1} + L^+_{B_2}, \ L^-_{ext} + L^-_{B_1} + L^-_{B_2} \right).
\end{aligned} $$
In order to compute these linking numbers, let us construct specific 2-chains and introduce more convenient notation. For all $i \neq j \in \lbrace 1,2 \rbrace$, let $$
L^\pm_{i}=L^\pm_{A_i}, \ L^\pm_{ij}= L^\pm_{A_i} + L^\pm_{A_j}, \ L^\pm_{eij}= L^\pm_{ext} + L^\pm_{A_i} + L^\pm_{A_j}. $$ Set also similar notations with primed indices where a primed index $i'$, $i \in \lbrace 1,2 \rbrace$, indicates that $L^\pm_{A_i}$ should be replaced by $L^\pm_{B_i}$. For instance, $L^\pm_{i'}=L^\pm_{B_i}$, $L^\pm_{i,j'}= L^\pm_{A_i} + L^\pm_{B_j}$, \textit{etc}. Using these notations, $p_1 \left([X^{\lbrace 2 \rbrace}],[X^{\lbrace 1,2 \rbrace}]\right) - p_1\left([X],[X^{\lbrace 1 \rbrace}]\right)$ reads : $$ \begin{aligned}
lk_M(L_{e12}^+,L_{e12}^-)- lk_{M_{\lbrace 1 \rbrace}}(L_{e1'2}^+,L_{e1'2}^-) - lk_{M_{\lbrace 2 \rbrace}}(L_{e12'}^+,L_{e12'}^-) + lk_{M_{\lbrace 1,2 \rbrace}} (L_{e1'2'}^+,L_{e1'2'}^-) . \end{aligned} $$
Recall from Lemma~\ref{nulexcep} that there exist rational 2-chains $\Sigma_{e12}^\pm$ of $M$ which are bounded by the links $L^\pm_{e12}$. Similarly, there exist rational 2-chains $\Sigma^\pm_{e1'2'}$ bounded by $L^\pm_{e1'2'}$ in $M_{\lbrace 1,2\rbrace}$. Note that, for all $i \in \lbrace 1,2 \rbrace$, the 2-chains $\Sigma_{e12}^\pm\cap A_i$ are cobordisms between the $L^\pm_i$ and 1-chains $\ell^\pm_i$ in $\partial A_i$. Similarly, for all $i \in \lbrace 1,2 \rbrace$, the 2-chains $\Sigma_{e1'2'}^\pm\cap B_i$ are cobordisms between the $L^\pm_{i'}$ and 1-chains $\ell^\pm_{i'}$ in $\partial B_i$. Furthermore, according to Lemma~\ref{inter}, for all $i\in \lbrace 1,2 \rbrace$ and for any nonvanishing section $\sigma_i$ of $X^\perp_{|\partial A_i}$, in $H_1(\partial A_i ; \b Q)/\go L_{A_i}$: $$
[\ell^\pm_{i'}-\ell^\pm_{i}]= \pm \big((i_*^{B_i})^{-1}(P(e_2^{B_i}(X_{B_i}^\perp, \sigma_i))) - (i_*^{A_i})^{-1}(P(e_2^{A_i}(X_{|A_i}^\perp, \sigma_i)))\big). $$ So, according to Lemma~\ref{zero}, for all $I \subset \lbrace 1,2 \rbrace$, there exists a 2-chain $S_{\sfrac{B_i}{A_i}}^I$ in $M_I$ which is bounded by $\ell^+_{i'}-\ell^+_i$. Finally, since $(\sfrac{B_1}{A_1})$ and $(\sfrac{B_2}{A_2})$ are LP$_\b Q$-surgeries, we can construct these chains so that $$ \begin{aligned} S_{\sfrac{B_1}{A_1}}^{\lbrace 1\rbrace} \cap (M_{\lbrace 1 \rbrace}\hspace{-0.5mm}\setminus\hspace{-0.5mm} \mathring A_2) &= S_{\sfrac{B_1}{A_1}}^{\lbrace 1,2 \rbrace} \cap (M_{\lbrace 1,2 \rbrace}\hspace{-0.5mm}\setminus\hspace{-0.5mm} \mathring B_2), \\ S_{\sfrac{B_2}{A_2}}^{\lbrace 2\rbrace} \cap (M_{\lbrace 2 \rbrace}\hspace{-0.5mm}\setminus\hspace{-0.5mm} \mathring A_1) &= S_{\sfrac{B_2}{A_2}}^{\lbrace 1,2 \rbrace} \cap (M_{\lbrace 1,2 \rbrace}\hspace{-0.5mm}\setminus\hspace{-0.5mm} \mathring B_1). \end{aligned} $$
Let us now return to the computation of $p_1 \left([X^{\lbrace 2 \rbrace}],[X^{\lbrace 1,2 \rbrace}]\right) - p_1\left([X],[X^{\lbrace 1 \rbrace}]\right)$. Using the 2-chains we constructed, we have : $$ \begin{aligned}
lk_M(L_{e12}^+,L_{e12}^-) &= \langle \Sigma^+_{e12}, L_{e12}^- \rangle \\
lk_{M_{\lbrace 1 \rbrace}}(L_{e1'2}^+,L_{e1'2}^-) &= \langle \Sigma^+_{e12} \cap (M \setminus \mathring A_1) + S_{\sfrac{B_1}{A_1}}^{\lbrace 1 \rbrace} + \Sigma^+_{e1'2'}\cap B_1, L_{e1'2}^- \rangle \\
lk_{M_{\lbrace 2 \rbrace}}(L_{e12'}^+,L_{e12'}^-) &= \langle \Sigma^+_{e12}\cap (M \setminus \mathring A_2) + S_{\sfrac{B_2}{A_2}}^{\lbrace 2 \rbrace} + \Sigma^+_{e1'2'}\cap B_2 , L_{e12'}^- \rangle \\
lk_{M_{\lbrace 1,2 \rbrace}} (L_{e1'2'}^+,L_{e1'2'}^-)&=\langle \Sigma^+_{e12}\cap(M\setminus(\mathring A_1\hspace{-1mm}\cup\hspace{-1mm}\mathring A_2)) +S_{\sfrac{B_1}{A_1}}^{\lbrace 1 , 2 \rbrace} \\
&\hspace{1cm}+S_{\sfrac{B_2}{A_2}}^{\lbrace 1 , 2 \rbrace} + \Sigma^+_{e1'2'}\cap(B_1\cup B_2) , L_{e1'2'}^- \rangle. \\ \end{aligned} $$ So, the contribution of the intersections in $M \hspace{-0.5mm}\setminus\hspace{-0.5mm} (\mathring A_1\cup \mathring A_2)$ is zero since it reads : $$ \begin{aligned} & \langle \Sigma^+_{e12} \cap (M \hspace{-0.5mm}\setminus\hspace{-0.5mm} (\mathring A_1 \cup \mathring A_2)) , L_{e}^- \rangle_{M\setminus (\mathring A_1\cup \mathring A_2)} \\ & -\langle \Sigma^+_{e12} \cap (M \hspace{-0.5mm}\setminus\hspace{-0.5mm} (\mathring A_1 \cup \mathring A_2)) + S_{\sfrac{B_1}{A_1}}^{\lbrace 1 \rbrace}\cap (M \hspace{-0.5mm}\setminus\hspace{-0.5mm} (\mathring A_1 \cup \mathring A_2)) , L_{e}^- \rangle_{M \setminus (\mathring A_1\cup \mathring A_2)} \\ & -\langle \Sigma^+_{e12} \cap (M \hspace{-0.5mm}\setminus\hspace{-0.5mm} (\mathring A_1 \cup \mathring A_2)) + S_{\sfrac{B_2}{A_2}}^{\lbrace 2 \rbrace}\cap (M \hspace{-0.5mm}\setminus\hspace{-0.5mm} (\mathring A_1 \cup \mathring A_2)), L_{e}^- \rangle_{M\setminus (\mathring A_1\cup \mathring A_2)} \\ & +\langle \Sigma^+_{e12} \cap (M \hspace{-0.5mm}\setminus\hspace{-0.5mm} (\mathring A_1 \cup \mathring A_2)) + S_{\sfrac{B_1}{A_1}}^{\lbrace 1 , 2 \rbrace}\cap (M \hspace{-0.5mm}\setminus\hspace{-0.5mm} (\mathring A_1 \cup \mathring A_2)) \\ &\hspace{5cm}+ S_{\sfrac{B_2}{A_2}}^{\lbrace 1 , 2 \rbrace}\cap (M \hspace{-0.5mm}\setminus\hspace{-0.5mm} (\mathring A_1 \cup \mathring A_2)) , L_{e}^- \rangle_{M\setminus (\mathring A_1\cup \mathring A_2)} . \end{aligned} $$ The contribution in $A_1$ is $$ \begin{aligned}
\langle \Sigma^+_{e12} \cap A_1 , L_{1}^- \rangle_{A_1} - \langle\Sigma^+_{e12} \cap A_1 + S_{\sfrac{B_2}{A_2}}^{\lbrace 2 \rbrace}\cap A_1 , L_{1}^- \rangle_{A_1}
= - \langle S_{\sfrac{B_2}{A_2}}^{\lbrace 2 \rbrace}\cap A_1, L_{1}^- \rangle_{A_1}. \end{aligned} $$ In $A_2$, we similarly get $- \langle S_{\sfrac{B_1}{A_1}}^{\lbrace 1 \rbrace}\cap A_2, L_{2}^- \rangle_{A_2}$. The contribution in $B_1$ is $$ \begin{aligned}
&-\hspace{-1mm}\langle S_{\sfrac{B_1}{A_1}}^{\lbrace 1 \rbrace}\cap B_1 \hspace{-1mm}+\hspace{-1mm} \Sigma^+_{e1'2'}\cap B_1 , L_{1'}^- \rangle_{B_1} \hspace{-1mm}+\hspace{-1mm} \langle S_{\sfrac{B_1}{A_1}}^{\lbrace 1 , 2 \rbrace}\cap B_1 \hspace{-1mm} +\hspace{-1mm} S_{\sfrac{B_2}{A_2}}^{\lbrace 1 , 2 \rbrace}\cap B_1 \hspace{-1mm} +\hspace{-1mm} \Sigma^+_{e1'2'}\cap B_1 , L_{1'}^- \rangle_{B_1} \\
&= \langle S_{\sfrac{B_2}{A_2}}^{\lbrace 1 , 2 \rbrace}\cap B_1 , L_{1'}^- \rangle_{B_1} \end{aligned} $$ and, in $B_2$, we get $\langle S_{\sfrac{B_1}{A_1}}^{\lbrace 1 , 2 \rbrace}\cap B_2 , L_{2'}^- \rangle_{B_2}$. Eventually, $L_{\lbrace X^I \rbrace}(\sfrac{B_i}{A_i}) = i_*^{A_i}([\ell_{i'}^+ - \ell_{i}^+])$ for $i \in \lbrace 1,2\rbrace$. Moreover, recall that $[\ell^+_{i'}-\ell^+_{i}]=-[\ell^-_{i'}-\ell^-_{i}]$ in $H_1(\partial A_i ; \b Q)/\go L_{A_i}$, and complete the computations~: $$ \begin{aligned} p_1 & \left([X^{\lbrace 2 \rbrace}],[X^{\lbrace 1,2 \rbrace}]\right) - p_1\left([X],[X^{\lbrace 1 \rbrace}]\right) \\ &= \langle S_{\sfrac{B_2}{A_2}}^{\lbrace 1 , 2 \rbrace}\cap B_1 , L_{1'}^- \rangle_{B_1} + \langle S_{\sfrac{B_1}{A_1}}^{\lbrace 1 , 2 \rbrace}\cap B_2 , L_{2'}^- \rangle_{B_2} \\ &- \langle S_{\sfrac{B_2}{A_2}}^{\lbrace 2 \rbrace}\cap A_1, L_{1}^- \rangle_{A_1} - \langle S_{\sfrac{B_1}{A_1}}^{\lbrace 1 \rbrace} \cap A_2, L_{2}^- \rangle_{A_2} \\ &= \langle S_{\sfrac{B_2}{A_2}}^{\lbrace 1 , 2 \rbrace} , \ell_{1'}^- \rangle_{M_{\lbrace 1,2 \rbrace}} + \langle S_{\sfrac{B_1}{A_1}}^{\lbrace 1 , 2 \rbrace} , \ell_{2'}^- \rangle_{M_{\lbrace 1,2 \rbrace}} \\ &- \langle S_{\sfrac{B_2}{A_2}}^{\lbrace 2 \rbrace}, \ell_{1}^- \rangle_{M_{\lbrace 2 \rbrace}} - \langle S_{\sfrac{B_1}{A_1}}^{\lbrace 1 \rbrace} , \ell_{2}^- \rangle_{M_{\lbrace 1 \rbrace}} \\ &= \langle S_{\sfrac{B_2}{A_2}}^{\lbrace 1 , 2 \rbrace} , \ell_{1'}^- - \ell_{1}^- \rangle_{M_{\lbrace 1,2 \rbrace}} + \langle S_{\sfrac{B_1}{A_1}}^{\lbrace 1 , 2 \rbrace} , \ell_{2'}^- - \ell_{2}^- \rangle_{M_{\lbrace 1,2 \rbrace}} \\ &=- 2 \cdot lk_M \left(L_{\lbrace X^I \rbrace}(\sfrac{B_1}{A_1}) \ , \ L_{\lbrace X^I \rbrace}(\sfrac{B_2}{A_2}) \right). \end{aligned} $$ \iffalse &= lk_{M_{\lbrace 1,2 \rbrace}} \left(L_{\lbrace X^I \rbrace}(\sfrac{B_2}{A_2}) \ , \ - L_{\lbrace X^I \rbrace}(\sfrac{B_1}{A_1})\right) \\ &+ \ lk_{M_{\lbrace 1,2 \rbrace}}\left(L_{\lbrace X^I \rbrace}(\sfrac{B_1}{A_1}) \ , \ - L_{\lbrace X^I \rbrace}(\sfrac{B_2}{A_2})\right) \\ &=- 2 \cdot lk_{M_{\lbrace 1,2 \rbrace}} \left(L_{\lbrace X^I \rbrace}(\sfrac{B_1}{A_1}) \ , \ L_{\lbrace X^I \rbrace}(\sfrac{B_2}{A_2}) \right) \\ \fi \end{proof}
\nocite{hirzebruch,KM,pontrjagin,turaev,lickorish,rolfsen,MR1030042,MR1189008,MR1712769}
\end{document}
\begin{document}
\title{Decomposing a Matrix into two Submatrices with Extremely Small Operator Norm} \begin{abstract} We give sufficient conditions on a matrix $A$ ensuring the existence of a partition of this matrix into two submatrices such that the norm of the image of any vector under each submatrix is extremely small. Under rather weak conditions on a matrix $A$ we also obtain a partition of $A$ whose submatrices have extremely small $(1,q)$--norms. \end{abstract}
\small{Keywords: {\it submatrix, operator norm, partition of a matrix, Lunin's method}} \\
\normalsize This paper is devoted to estimates of operator norms of submatrices. The subject is being actively developed and has various applications. The present work can be viewed as a continuation of \cite{1}, which discusses the $(2,1)$--norm case. That case was studied earlier for matrices with orthonormal columns in \cite{2}, where an analogue of the partition (\ref{partition}) (see below) with extremely small $(2,1)$--norms of the corresponding submatrices was obtained. Using a modified version of Lunin's method we prove an essential strengthening of Assertion $4$ from \cite{1} and a generalization of Assertion $3$ to the case of the $(X, q)$--norm with $1\leq q<\infty$. We study the case of the $(1,q)$--norm in greater detail.
For an $N\times n$ matrix $A$, viewed as an operator from $l_p^n$ to $l_q^N$, we define the $(p, q)$--norm: \begin{equation*}
\left\| A\right\|_{(p,q)}=\sup\limits_{\left\| x \right\|_{l_p^n}\leq 1}\left\| Ax\right\| _{l_q^N}, \ 1\leq p, q\leq \infty. \end{equation*} In fact, Theorem \ref{assertion_norm} is proved here for the more general $(X,q)$--norm, where $X$ is an $n$--dimensional normed space.
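Here the $(X,q)$--norm is defined in the same way, with the unit ball of $l_p^n$ replaced by the unit ball of $X$: $\left\| A\right\|_{(X,q)}=\sup_{\left\| x\right\|_X\leq 1}\left\| Ax\right\|_{l_q^N}$. For $X=l_p^n$ this recovers the $(p,q)$--norm.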
We use the following notation: $\rk(A)$ is the rank of a matrix $A$, $\left< N\right>$ is the set of natural numbers $1, 2,\ldots, N$; $v_i$, $i\in \left< N\right>$, stands for the rows of $A$, and $w_j$, $j\in\left<n\right>$, for its columns. For a subset $\omega\subset\left<N\right>$, $A(\omega)$ denotes the submatrix of $A$ formed by the rows $v_i$, $i\in\omega$, and $\overline{\omega}=\left<N\right>\setminus\omega$.
$( \cdot, \cdot)$ stands for the inner product in $\mathbb R ^n$, and $\left\| x\right\|_p$ is the norm of $x\in\mathbb{R}^n$ in $l_p^n$, $1\leq p\leq\infty$. For a normed space $X$, $\left\|\cdot\right\|_X$ denotes the norm on $X$.
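Note also that, since the function $x\mapsto\left\| Ax\right\|_{l_q^N}$ is convex and the extreme points of the unit ball of $l_1^n$ are the signed standard basis vectors, we have $\left\| A\right\|_{(1,q)}=\max_{j\in\left<n\right>}\left\| w_j\right\|_q$, the largest $l_q^N$--norm of a column; this observation is used repeatedly below.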
The following condition is the counterpart of the condition on a matrix from \cite{1} in the case of an arbitrary $1\leq q<\infty$: \begin{equation}\label{cond}
\forall x\in\mathbb R^n\ \ \forall i_0\in\left<N\right> \ \ |(v_{i_0}, x)|\leq\varepsilon\left(\sum_{i=1}^N|(v_i, x)|^q\right)^{1/q}. \end{equation}
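For instance, if $q=2$ and the columns of $A$ are orthonormal, then $\sum_{i=1}^N|(v_i, x)|^2=\left\| Ax\right\|_2^2=\left\| x\right\|_2^2$, so by the Cauchy--Schwarz inequality condition (\ref{cond}) holds with $\varepsilon=\max_{i\in\left<N\right>}\left\| v_i\right\|_2$; this covers, in particular, the matrices with orthonormal columns considered in \cite{2}.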
\begin{theorem}\label{assertion_pointwise} Assume that an $N\times n$ matrix $A$ satisfies (\ref{cond}) with $0<\varepsilon\leq (\rk(A))^{-1/q}$ and $1\leq q<\infty$. Then there exists a partition \begin{equation}\label{partition} \left<N\right>=\Omega_1\cup\Omega_2,\ \Omega_1\cap\Omega_2=\emptyset, \end{equation} such that \begin{equation}\label{obtained_estimate}
\left\| A(\Omega_k)x\right\|_q\leq \gamma\left\| Ax\right\|_q, \ \ \gamma=\frac{1}{2^{1/q}}+
\frac{2+3\cdot 2^{-1/q}}{q}\left(\rk(A)\varepsilon^q\ln \frac{6q}{(\rk(A)\varepsilon^q)^{1/3}}\right)^{1/3}
\end{equation}
for any $x\in\mathbb{R}^n$ and $k=1,2$. \end{theorem}
\begin{remark} It is not known whether such a partition exists when $\rk(A)\varepsilon^q>1$. \end{remark}
\begin{proof}[Sketch of proof] First, we prove Theorem \ref{assertion_pointwise} in the case $\rk(A)=n$. Denote $$\delta=\frac{(n\varepsilon^q)^{1/3}}{q}.$$
Let $X$ be the space $\mathbb{R}^n$ with the norm $\left\|x\right\|_X=\left\|Ax\right\|_q$ (it is a norm on $\mathbb{R}^n$ because $\rk(A)=n$). Let $S_X=\{x\in \mathbb{R}^n: \left\|x\right\|_X=1 \}$ be the unit sphere of $X$. Let $\mathbb{Y}$ be a $\delta$--net in the norm $\left\|\cdot\right\|_X$ on the sphere $S_X$ with at most $(3/\delta)^n$ elements. Suppose that the statement fails; then for every partition (\ref{partition}) there exists a vector $x_1\in S_X$ such that $$
\left\| A(\Omega_1)x_1\right\|_q>\gamma\left\| Ax_1\right\|_q $$ (in this case let $\omega'=\Omega_1$, $x_{\omega'}=x_1$ ), or there exists a vector $x_2\in S_X$ such that $$
\left\| A(\Omega_2)x_2\right\|_q>\gamma\left\| Ax_2\right\|_q $$ (then we define $\omega'=\Omega_2$, $x_{\omega'}=x_2$ ). For every pair $(\Omega_1, \Omega_2)$ we find $\omega'$ and $x_{\omega'}$.
Let $y_{\omega'}$ be one of the nearest to $x_{\omega'}$ vectors from the net $\mathbb{Y}$. There are $2^{N-1}-1$ different partitions of the set $\left<N\right>$ into two nonempty parts. Therefore there exists a vector $y_0\in \mathbb{Y}$ such that the set $K=\{\omega' : y_0=y_{\omega'}\}$ is large enough: \begin{equation}\label{K_est}
|K|\geq(2^{N-1}-1)\left(\frac{\delta}{3}\right)^n \geq 2^N\left(\frac{\delta}{6}\right)^n. \end{equation} (Here we assume that $n>1$, otherwise Theorem \ref{assertion_pointwise} is obvious.) Therefore there is a vector $y_0\in S_X$ and at least
$2^N(\delta/6)^n$ subsets $\omega'\subset\langle N\rangle$ for which $\left\|A(\omega')x_{\omega'}\right\|_q>\gamma \left\|Ax_{\omega'}\right\|_q$ and $\left\|y_0-x_{\omega'}\right\|_X<\delta$.
Note that for $x\in S_X$ and $\omega\subset\langle N\rangle$ $\left\|A(\omega)x\right\|_q\leq \left\|A(\omega)\right\|_{(X,q)}\leq \left\|A\right\|_{(X,q)}$.
Below we assume that $\gamma<1$, otherwise (\ref{obtained_estimate}) is obviously true. As $\gamma<1$, for $\omega'\in K$ we obtain: \begin{equation*} \begin{aligned} \left\| A(\omega')y_0\right\| _q &\geq \left\| A(\omega')x_{\omega'}\right\| _q-\left\| A(\omega')(x_{\omega'}-y_0)\right\| _q \\ &>\gamma\left\| Ax_{\omega'}\right\| _q-\delta \left\| A(\omega')\left(\frac{x_{\omega'}-y_0}{\left\|x_{\omega'}-y_0\right\|_X}\right)\right\|_q \\ &\geq \gamma\left\| Ay_0\right\| _q-\gamma\left\| A(x_{\omega'}-y_0)\right\| _q -\delta \left\| A\left(\frac{x_{\omega'}-y_0}{\left\|x_{\omega'}-y_0\right\|_X}\right)\right\| _q \\ &\geq \gamma\left\| Ay_0\right\| _q-2\delta \geq \gamma\left\| Ay_0\right\|_q-2\delta \left\| Ay_0\right\|_q= \left\| Ay_0\right\|_q\left(\gamma-2\delta\right). \end{aligned} \end{equation*}
Since $y_0\in S_X$, we have used $\left\|Ay_0\right\|_q=\left\|y_0\right\|_X=1$ in the last inequality. Let $R$ be the number of subsets $\omega\subset\langle N\rangle$ for which $$
\left\| A(\omega)y_0\right\|_q\geq \left(\gamma-2\delta\right)\left\| Ay_0\right\|_q $$ holds. Let $K_1$ be the set of such subsets. Let us show that $R<2^N(\delta/6)^n$; this will give a contradiction and complete the proof of Theorem \ref{assertion_pointwise} in the case $\rk(A)=n$. Denote $M=3\cdot 2^{-1/q}$, $\phi(n, \varepsilon)=\frac{1}{q}\left(n\varepsilon^q\ln \frac{6q}{(n\varepsilon^q)^{1/3}}\right)^{1/3}$ and $S=\left\|Ay_0\right\|_q^q=\sum_{i=1}^N|(v_i,y_0)|^q$. Since $\delta\leq\phi(n, \varepsilon)$, for $\omega'\in K_1$ we have: $$
\sum\limits_{i\in \omega'}{|(v_i,y_0)|^q}>(\gamma-2\delta)^q S> \left(\frac{1}{2^{1/q}}+M\phi(n, \varepsilon)\right)^q S\geq $$ $$ \geq \left(\frac{1}{2}+q\frac{1}{2^{(q-1)/q}}M\phi(n, \varepsilon)\right)S=\left(\frac{1}{2}+q\frac{2^{1/q}}{2}M\phi(n, \varepsilon)\right)S. $$
$R$ can be estimated as in the proof of Assertion $3$ from \cite{1}.
Now let the matrix $A$ have rank $r<n$. Without loss of generality, we can assume that the columns $w_1, \dots, w_{r}$ are linearly independent. It is clear that (\ref{cond}) holds for the matrix $\tilde{A}$ consisting of the first $r$ columns of $A$. We have $\rk{\tilde{A}}=r$, therefore there exists a partition of the form (\ref{partition}) such that (\ref{obtained_estimate}) holds for $\tilde A$. Let $w_j=\sum\limits_{i=1}^r \lambda_j^i w_i$ for $j>r$. For a vector $x\in\mathbb{R}^n$ we construct the vector $\tilde{x}\in\mathbb{R}^r$ with coordinates $\tilde{x}_i=x_i+\sum\limits_{j=r+1}^n \lambda_j^ix_j$; then $Ax=\tilde{A}\tilde{x}$ and $A(\Omega_k)x=\tilde{A}(\Omega_k)\tilde{x}$ for $k=1,2$, so (\ref{obtained_estimate}) also holds for the matrix $A$ with the partition we have found. \end{proof}
\begin{corollary}\label{corollary_pointwise} Assume that an $N\times n$ matrix $A$ satisfies (\ref{cond}) with $0<\varepsilon\leq (\rk(A))^{-1/q}$ and $1\leq q<\infty$. Then there exists a partition (\ref{partition}) such that for any $x\in\mathbb{R}^n$ and $k=1,2$ we have \begin{equation*}
\left(\frac{1}{2}-\psi\right)\sum_{i=1}^N|(v_i, x)|^q\leq \sum_{i\in\Omega_k}|(v_i, x)|^q\leq \left(\frac{1}{2}+\psi\right)\sum_{i=1}^N|(v_i, x)|^q,
\end{equation*}
where
$$ \psi=2^{q+1} \left(\rk(A)\varepsilon^q\ln \frac{6q}{(\rk(A)\varepsilon^q)^{1/3}}\right)^{1/3}.
$$ \end{corollary}
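Let us indicate how this follows from Theorem \ref{assertion_pointwise}: raising (\ref{obtained_estimate}) to the power $q$ gives $\sum_{i\in\Omega_k}|(v_i, x)|^q=\left\| A(\Omega_k)x\right\|_q^q\leq\gamma^q\left\| Ax\right\|_q^q$, while applying the same estimate to the complementary set $\Omega_{3-k}$ gives $\sum_{i\in\Omega_k}|(v_i, x)|^q\geq(1-\gamma^q)\left\| Ax\right\|_q^q$; it then remains to check that $\gamma^q\leq \frac{1}{2}+\psi$ for the value of $\psi$ indicated above.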
The following statement is a simple corollary of Theorem \ref{assertion_pointwise}.
\begin{theorem}\label{assertion_norm} Assume that an $N\times n$ matrix $A$ satisfies (\ref{cond}) for some $0<\varepsilon\leq (\rk(A))^{-1/q}$ and $1\leq q<\infty$. Then there exists a partition (\ref{partition}) such that for $k=1,2$ the following inequality holds $$
\left\| A(\Omega_k)\right\|_{(X,q)}\leq \gamma \left\| A\right\|_{(X,q)}, $$ where $\gamma$ is defined in the formulation of Theorem \ref{assertion_pointwise}. \end{theorem}
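Indeed, it suffices to take the partition provided by Theorem \ref{assertion_pointwise} and to pass to the supremum over the unit ball of $X$ in (\ref{obtained_estimate}).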
The following theorem is an analogue of Theorem \ref{assertion_norm} for the $(1,q)$--norm. Let $e_j$, $j\in\langle n\rangle$, be the standard basis in $\mathbb{R}^n$.
\begin{theorem}\label{assertion_1_q_norm}
If for an $N\times n$ matrix $A$ the
inequality
\begin{equation}\label{a_i_j_cond}
|a_j^i|\leq \varepsilon \left\| w_j \right\|_q
\end{equation}
holds for some $1\leq q<\infty$ and $0<\varepsilon< 1$ and for every $i\in\langle N\rangle$, and $j\in\langle n\rangle$, then there exists a partition (\ref{partition}) such that for $k=1,2$ the following holds:
$
\text{a) }\left\| A(\Omega_k)\right\|_{(1,q)}\leq
\left(\frac{1}{2}+\frac{3}{2}\varepsilon^{q/3}\ln^{1/3}{(4n)}\right)^{1/q}
\left\| A\right\|_{(1,q)},$
$
\text{b) }\left\| A(\Omega_k)\right\|_{(1,q)}\leq
\left(\frac{1}{2}+\frac{1}{2}\varepsilon^{q}\sqrt{N}\left(1+\log\bigl(\frac{n}{N}+1\bigr)\right)^{1/2}\right)^{1/q}
\left\| A\right\|_{(1,q)}, $
$
\text{c) } \left\| A(\Omega_k)\right\|_{(1,q)}\leq\left(\frac{1+n\varepsilon^q}{2}\right)^{1/q}\left\| A\right\|_{(1,q)}.
$
\end{theorem}
\begin{remark} Theorem \ref{assertion_1_q_norm} requires substantially weaker conditions on the entries of the matrix than Theorem \ref{assertion_pointwise}.
\end{remark}
\begin{proof} Since the function $x\mapsto\left\| Ax \right\|_q$ is convex, the $(1, q)$--norm of $A$ is attained at one of the standard basis vectors $e_j$.
The proof of a) closely follows the arguments in the proof of Theorem \ref{assertion_pointwise}, so we only give a sketch. Assume that the statement is false; then for each partition (\ref{partition}) there exists a number $k$ such that $\left\| A(\Omega_k)\right\|_{(1,q)}>\left(1/2+(3/2)\varepsilon^{q/3}\ln^{1/3}{(4n)}\right)^{1/q}\left\| A\right\|_{(1,q)}$. Denote $\omega'=\Omega_k$. The $(1,q)$--norm of the matrix $A(\omega')$ is attained at some vector $e_{j_{\omega'}}$, $j_{\omega'}\in\langle n\rangle$, therefore the following holds:
$ \sum\limits_{i\in \omega'}|a^i_{j_{\omega'}}|^q>\left(1/2+(3/2)\varepsilon^{q/3}\ln^{1/3}{(4n)}\right)\left\|w_{j_{\omega'}}\right\|_q^q$.
As in the proof of Theorem \ref{assertion_pointwise}, there exists $j_0\in \langle n\rangle$ such that the set $K=\{\omega' : j_{\omega'}=j_0\}$ is large enough:
\begin{equation*}
|K|\geq(2^{N-1}-1)/n> 2^{N-2}/n. \eqno(6)
\end{equation*}
It is easy to see that for every $\omega\in K$
\begin{equation*}
\sum\limits_{i\in \omega}|a^i_{j_0}|^q>\left(1/2+(3/2)\varepsilon^{q/3}\ln^{1/3}{(4n)}\right)\left\|w_{j_0}\right\|_q^q. \eqno(7)
\end{equation*}
So, to prove a) it is enough to check that the number $R$ of subsets $\omega\subset\langle N\rangle$ for which (7) holds is less than the right-hand side of (6).
The value $R$ is estimated as in the proof of Assertion $3$ from \cite{1}.
To prove b) we use Corollary $5$ from \cite{3}.
Let $\tilde w_j=(|a_j^1|^q,\dots, |a_j^N|^q)$ be the vector obtained from the $j$--th column of $A$ by raising the moduli of its coordinates to the power $q$.
For all $j\in\langle n\rangle$ we have $\left\|w_j\right\|_q\leq \left\|A\right\|_{(1,q)}$, so (\ref{a_i_j_cond}) implies that $\left\|\tilde w_j\right\|_{\infty}\leq \varepsilon^q\left\|A\right\|_{(1,q)}^q$. Then, due to the corollary from \cite{3} mentioned above, there exists a vector $\xi=(\xi_1,\dots,\xi_N)\in\mathbb R^N$ whose coordinates have modulus $1$ such that for every $j\in\langle n\rangle$ the following inequality holds:
\begin{equation*}
\left|( \tilde w_j, \xi )\right|\leq \varepsilon^q\sqrt{N}\left(1+\log\bigl(\frac{n}{N}+1\bigr)\right)^{1/2}\left\|A\right\|_{(1,q)}^q.
\end{equation*}
Let $\Omega_1=\{i\in\langle N\rangle : \xi_i=1\}$, $\Omega_2=\langle N\rangle\backslash \Omega_1=\{i\in\langle N\rangle : \xi_i=-1\}$. Let us check b). Denote $\theta=\sqrt{N}\left(1+\log\bigl(\frac{n}{N}+1\bigr)\right)^{1/2}$ and note that $2\sum_{i\in\Omega_1}|a_{j}^i|^q=\sum_{i\in\langle N\rangle}|a_{j}^i|^q+( \tilde w_j, \xi )$ and $2\sum_{i\in\Omega_2}|a_{j}^i|^q=\sum_{i\in\langle N\rangle}|a_{j}^i|^q-( \tilde w_j, \xi )$ for every $j\in\langle n\rangle$. For $k=1,2$ there exists $j_0^k\in\langle n\rangle$ such that
\begin{gather*}
\left\|A(\Omega_k)\right\|_{(1,q)}^q=
\sum\limits_{i\in\Omega_k}|a_{j_0^k}^i|^q
\leq
\frac{1}{2}\left(\sum\limits_{i\in\langle N\rangle}|a_{j_0^k}^i|^q+\varepsilon^q\theta\left\|A\right\|_{(1,q)}^q\right)
\leq \left(\frac{1}{2}+\frac{1}{2}\varepsilon^{q}\theta\right)\left\|A\right\|_{(1,q)}^q,
\end{gather*}
as required.
To prove c) we apply the following theorem.
\begin{thh}[\cite{4}, p. 287] Let $A_1,\dots, A_n$ be sets in $\mathbb{R}^n$ with finite Lebesgue measure. Then there exists a hyperplane $\pi$ which divides the measure of each of them in half.
\end{thh}
Let $M=\max\limits_{i,j}\lbrace|a_j^i|^q\rbrace+1$.
One can place in $\mathbb{R}^n$ $N$ cubes with side length $M$ and edges parallel to the axes such that every hyperplane intersects at most $n$ of them. (This follows from the existence of $N$ points in general position in $\mathbb{R}^n$ and the continuity of the equation of a hyperplane.) Let us enumerate these cubes. For $i\in\langle N\rangle$ let $u_i$ be the vertex of the $i$--th cube with the smallest coordinates.
For each entry $a_j^i$ of the matrix we define a parallelepiped $\widetilde{P_j^i}= [0,1]^{j-1}\times [1, 1+|a_j^i|^q]\times [0,1]^{n- j}$. We put the $n$ rectangular parallelepipeds defined by the entries of the row $v_i$ ($P_j^i=u_i+ \widetilde{P_j^i}$) into the cube with number $i$. Note that $\mu(P_j^i)=|a_j^i|^q$. We call the collection of the $P_j^i$, $j\in\langle n \rangle$, for a fixed $i$ the $i$--th ``angle''.
For $j\in\langle n \rangle$ let
$A_j = \bigcup\limits_{i\in\langle N\rangle}{P_j^i}$. Applying the theorem mentioned above to the sets $A_1,\ldots,A_n$, we get a hyperplane $\pi$ which divides the measure of each $A_j$ in half. Let $P_1$ and $P_2$ be the halfspaces into which $\pi$ divides $\mathbb{R}^n$. By construction $\pi$ intersects at most $n$ cubes, and consequently at most $n$ ``angles''. It is now clear how to obtain a partition (\ref{partition}). We put the indices of the ``angles'' which entirely belong to $P_1$ (respectively, to $P_2$) in $\Omega_1$ (respectively, in $\Omega_2$). We put the indices of the ``angles'' which intersect both $P_1$ and $P_2$ in $\Omega_1$. Let $G$ be the set of such indices. Let us show that for every $j\in \langle n \rangle$ and $k=1,2$ we have $\left\|A(\Omega_k)e_j\right\|_q\leq\left(\frac{1+n\varepsilon^q}{2}\right)^{1/q}\left\|w_j\right\|_q$; this will prove c). Since $\pi$ divides the measure of $A_j$ in half, we have
$$\sum\limits_{i\in \Omega_1 \backslash G } |a_j^i|^q + V_1 = \sum\limits_{i\in \Omega_2} |a_j^i|^q + V_2,$$
where $V_k$, $k=1,2$, stands for the volume of $\bigl(\cup_{i\in G}P_j^i\bigr)\cap P_k$.
From (\ref{a_i_j_cond}) and due to the fact that $\pi$ intersects at most $n$ ``angles'', we have the following inequality:
$$V_1+V_2 \leq n\varepsilon^q \sum\limits_{i\in \langle N\rangle}{|a_j^i|^q},$$
so for $k=1,2$
$$
\sum\limits_{i\in \Omega_k} |a_j^i|^q \leq \frac{1+n\varepsilon^q}{2}\sum\limits_{i\in \langle N\rangle} |a_j^i|^q.
$$
Thus, Theorem \ref{assertion_1_q_norm} is proved.
\end{proof}
The following theorem shows that (\ref{a_i_j_cond}) may hold with some $\varepsilon<1$ while for every partition one of the submatrices has the same $(1,q)$--norm as the whole matrix.
\begin{theorem}
For $n=2^{2k-1}$ there exists a $2k\times n$ matrix $A$ for which (\ref{a_i_j_cond}) holds whenever $\varepsilon^q\log_2(2n)\geq 2$, but for every partition of the form (\ref{partition}) the following equality holds:
\begin{gather*}
\max\biggl\{\left\|A(\Omega_1)\right\|_{(1, q)}, \left\|A(\Omega_2)\right\|_{(1, q)} \biggr\} = \left\|A\right\|_{(1, q)}.
\end{gather*}
\end{theorem} \begin{proof}[Sketch of proof] For every pair of complementary subsets $\omega$ and $\langle 2k\rangle\backslash \omega$ of the set $\langle 2k\rangle$ we choose one of the two subsets of the largest cardinality (either one if their cardinalities coincide). Let us enumerate the chosen subsets: $B_1, \ldots, B_{2^{2k-1}}$. We construct a matrix $A$ in the following way: if $i\in B_j$, then
$a_j^i=\frac{1}{|B_j|^{1/q}}$, otherwise $a_j^i=0$. Then $\left\|w_j\right\|_q=1$ and $|a_j^i|\leq k^{-1/q}$ since $|B_j|\geq k$, so (\ref{a_i_j_cond}) holds for $A$ as soon as $\varepsilon^q\geq 1/k=2/\log_2(2n)$. Moreover, for any partition (\ref{partition}) one of the sets $\Omega_1$, $\Omega_2$ contains some $B_j$, and the corresponding submatrix then contains all nonzero entries of the column $w_j$, hence either $\left\|A(\Omega_1)\right\|_{(1,q)}=\left\|A\right\|_{(1,q)}$ or $\left\|A(\Omega_2)\right\|_{(1,q)}=\left\|A\right\|_{(1,q)}$. \end{proof}
Finally, let $q=\infty$ and let $A$ be an arbitrary matrix. Then there is no partition that decreases (even slightly) the $(X,\infty)$--norms of both submatrices. Indeed, one can find a row $v_{\sup}$ of the matrix $A$ such that $
\left\| A\right\|_{(X,\infty)}=\sup\limits_{\left\| x \right\|_{X}\leq 1}{( x, v_{\sup})}, $ and then the norm of the submatrix containing the row $v_{\sup}$ is equal to the norm of $A$.
\thanks{The work was supported by the Russian Federation Government Grant No. 14.W03.31.0031.}
The paper is submitted to Mathematical Notes.
\end{document}
\begin{document}
\tolerance=1000
\begin{abstract}
The purpose of this paper is to introduce a cohomology theory for abelian matched pairs of Hopf algebras and to explore its relationship to Sweedler cohomology, to Singer cohomology and to extension theory. An exact sequence connecting these cohomology theories is obtained for a general abelian matched pair of Hopf algebras, generalizing those of Kac and Masuoka for matched pairs of finite groups and finite dimensional Lie algebras. The morphisms in the low degree part of this sequence are given explicitly, enabling concrete computations.
\end{abstract}
\title{Cohomology of abelian matched pairs and the Kac sequence}
\setcounter{section}{-1}
\section{Introduction}
In this paper we discuss various cohomology theories for Hopf algebras and their relation to extension theory.
It is natural to think of building new algebraic objects from simpler structures, or to get information about the structure of complicated objects by decomposing them into simpler parts. Algebraic extension theories serve exactly that purpose, and the classification problem of such extensions is usually related to cohomology theories.
In the case of Hopf algebras, extension theories are proving to be invaluable tools for the construction of new examples of Hopf algebras, as well as in the efforts to classify finite dimensional Hopf algebras.
Hopf algebras, which occur for example as group algebras, as universal envelopes of Lie algebras, as algebras of representative functions on Lie groups, as coordinate algebras of algebraic groups and as Quantum groups, have many \lq group like\rq\; properties. In particular, cocommutative Hopf algebras are group objects in the category of cocommutative coalgebras, and are very much related to ordinary groups and Lie algebras. In fact, over an algebraically closed field of characteristic zero, such a Hopf algebra is a semi-direct product of a group algebra by a universal envelope of a Lie algebra, hence just a group algebra if finite dimensional (see [MM, Ca, Ko] for the connected case, [Gr1,2, Sw2] for the general case).
In view of these facts it appears natural to try to relate the cohomology of Hopf algebras to that of groups and Lie algebras. The first work in this direction was done by M.E. Sweedler [Sw1] and by G.I. Kac [Kac] in the late 1960's. Sweedler introduced a cohomology theory of algebras that are modules over a Hopf algebra (now called Sweedler cohomology). He compared it to group cohomology, to Lie algebra cohomology and to Amitsur cohomology. In that paper he also shows how the second cohomology group classifies cleft comodule algebra extensions. Kac considered Hopf algebra extensions of a group algebra $kT$ by the dual of a group algebra $k^N$ obtained from a matched pair of finite groups $(N,T)$, and found an exact sequence connecting the cohomology of the groups involved and the group of Hopf algebra extensions $\operatorname{Opext} (kT,k^N)$ $$\begin{array}{l} 0\to H^1(N\bowtie T,k^{\bullet})\to H^1(T,k^{\bullet}) \oplus H^1(N,k^{\bullet})\to \operatorname{Aut} (k^N\# kT) \\ \to H^2(N\bowtie T,k^{\bullet})\to H^2(T,k^{\bullet})\to\operatorname{Opext} (kT,k^N)\to H^3(N\bowtie T,k^{\bullet})\to ... \end{array}$$ which is now known as the Kac sequence. In the work of Kac all Hopf algebras are over the field of complex numbers and also carry the structure of a $C^*$-algebra. Such structures are now called Kac algebras. The generalization to arbitrary fields appears in recent work by A. Masuoka [Ma1,2], where it is also used to show that certain groups of Hopf algebra extensions are trivial. Masuoka also obtained a version of the Kac sequence for matched pairs of Lie bialgebras [Ma3], as well as a new exact sequence involving the group of quasi Hopf algebra extensions of a finite dimensional abelian Singer pair [Ma4].
In this paper we introduce a cohomology theory for general abelian matched pairs $(T,N,\mu,\nu)$, consisting of two cocommutative Hopf algebras acting compatibly on each other with bismash product $H=N\bowtie T$, and obtain a general Kac sequence $$\begin{array}{l} 0\to H^1(H,A)\to H^1(T,A)\oplus H^1(N,A)\to \mathcal H^1(T,N,A) \to H^2(H,A) \\ \to H^2(T,A)\oplus H^2(N,A)\to \mathcal H^2(T,N,A)\to H^3(H,A)\to ... \end{array}$$ relating the cohomology $\mathcal H^*(T,N,A)$ of the matched pair with coefficients in a module algebra $A$ to the Sweedler cohomologies of the Hopf algebras involved. For trivial coefficients the maps in the low degree part of the sequence are described explicitly. If $T$ is finite-dimensional then abelian matched pairs $(T,N,\mu ,\nu )$ are in bijective correspondence with abelian Singer pairs $(N,T^*)$, and we get a natural isomorphism $\mathcal H^*(T,N,k)\cong H^*(N,T^*)$ between the cohomology of the abelian matched pair and that of the corresponding abelian Singer pair. In particular, together with results from [Ho] one obtains $\mathcal{H}^1(T,N,k)\cong H^1(N,T^*)\cong \operatorname{Aut} (T^*\# N)$ and $\mathcal H^2(T,N,k)\cong H^2(N,T^*)\cong \operatorname{Opext} (N,T^*)$. The sequence gives information about extensions of cocommutative Hopf algebras by commutative ones. It can also be used in certain cases to compute the (low degree) cohomology groups of a Hopf algebra.
Such a sequence can of course not exist for non-abelian matched pairs, at least if the sequence is to consist of groups and not just pointed sets as in [Sch].
Together with the five term exact sequence for a smash product of Hopf algebras $H=N\rtimes T$ [M2], generalizing that of K. Tahara [Ta] for a semi-direct product of groups, $$\begin{array}{l} 1\to H^1_{meas}(T,\operatorname{Hom} (N,A))\to \tilde H^2(H,A)\to H^2(N,A)^T \\ \to H^2_{meas}(T,\operatorname{Hom}(N,A))\to \tilde H^3(H,A)\end{array}$$ it is possible in principle to give a procedure to compute the second cohomology group of any abelian matched pair of pointed Hopf algebras over a field of characteristic zero with a finite group of points and a reductive Lie algebra of primitives.
In Section 1 abelian Singer pairs of Hopf algebras are reviewed. In particular we talk about the cohomology of an abelian Singer pair, about Sweedler cohomology and Hopf algebra extensions [Si, Sw1].
In the second section abelian matched pairs of Hopf algebras are discussed. We introduce a cohomology theory for an abelian matched pair of Hopf algebras with coefficients in a commutative module algebra, and in Section 4 we see how it compares to the cohomology of a Singer pair.
The generalized Kac sequence for an abelian matched pair of Hopf algebras is presented in Section 5. The homomorphisms in the low degree part of the sequence are given explicitly, so as to make it possible to use them in explicit calculations of groups of Hopf algebra extensions and low degree Sweedler cohomology groups.
Section 6 examines how the tools introduced combined with some additional observations can be used to describe explicitly the second cohomology group of some abelian matched pairs.
In the appendix some results from (co-)simplicial homological algebra used in the main body of the paper are presented.
Throughout the paper ${_H\mathcal V}$, ${_H\mathcal A}$ and ${_H\mathcal C}$ denote the categories of left $H$-modules, $H$-module algebras and $H$-module coalgebras, respectively, for the Hopf algebra $H$ over the field $k$. Similarly, $\mathcal V^H$, $\mathcal A^H$ and $\mathcal C^H$ stand for the categories of right $H$-comodules, $H$-comodule algebras and $H$-comodule coalgebras, respectively.
We use the Sweedler sigma notation for comultiplication: $\Delta(c)=c_1\otimes c_2$, $(1\otimes\Delta)\Delta(c)=c_1\otimes c_2\otimes c_3$ etc. In the cocommutative setting the indices are clear from the context and we will omit them whenever convenient.
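For instance, coassociativity reads $(\Delta\otimes 1)\Delta(c)=(1\otimes\Delta )\Delta(c)=c_1\otimes c_2\otimes c_3$, and cocommutativity reads $c_1\otimes c_2=c_2\otimes c_1$, which is why the precise order of the indices will not matter in the cocommutative setting.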
If $V$ is a vector space, then $V^n$ denotes its $n$-fold tensor power.
\section{Cohomology of an abelian Singer pair}
\subsection{Singer pairs}
Let $(B,A)$ be a pair of Hopf algebras together with an action $\mu\colon B\otimes A\to A$ and a coaction $\rho\colon B\to B\otimes A$ so that $A$ is a $B$-module algebra and $B$ is an $A$-comodule coalgebra. Then $A\otimes B$ can be equipped with the cross product algebra structure as well as the cross product coalgebra structure. To ensure compatibility of these structures, i.e: to get a Hopf algebra, further conditions on $(B,A,\mu ,\rho)$ are necessary. These are most easily expressed in term of the action of $B$ on $A\otimes A,$ twisted by the coaction of $A$ on $B$, $$\mu_2=(\mu\otimes\operatorname{m}_A(1\otimes\mu ))(14235)((\rho\otimes 1)\Delta_B\otimes 1\otimes 1)\colon B\otimes A\otimes A\to A\otimes A,$$ i.e: $b(a\otimes a')=b_{1B}(a)\otimes b_{1A}\cdot b_2(a')$, and the coaction of $A$ on $B\otimes B,$ twisted by the action of $B$ on $A$, $$\rho_2=(1\otimes 1\otimes\operatorname{m}_A(1\otimes\mu ))(14235)((\rho\otimes 1)\Delta_B\otimes\rho )\colon B\otimes B\to B\otimes B\otimes A,$$ i.e: $\rho_2(b\otimes b')=b_{1B}\otimes b'_B\otimes b_{1A}\cdot b_2(b'_A)$.
Observe that for trivial coaction $\rho\colon B\to B\otimes A$ one gets the ordinary diagonal action of $B$ on $A\otimes A$, and for trivial action $\mu\colon B\otimes A\to A$ the diagonal coaction of $A$ on $B\otimes B$.
\begin{Definition} The pair $(B,A,\mu ,\rho )$ is called an abelian Singer pair if $A$ is commutative, $B$ is cocommutative and the following are satisfied. \begin{enumerate} \item $(A,\mu)$ is a $B$-module algebra (i.e: an object of $_B\mathcal A$), \item $(B,\rho )$ is a $A$-comodule coalgebra (i.e: an object of $\mathcal C^A$), \item $\rho\operatorname{m}_B=(\operatorname{m}_B\otimes 1)\rho_2$, i.e: the diagram $$\begin{CD} B\otimes B @>\operatorname{m}_B >> B \\ @V\rho_2 VV @V\rho VV \\ B\otimes B\otimes A @>\operatorname{m}_B\otimes 1 >> B\otimes A \end{CD}$$ commutes, \item $\Delta_A\mu =\mu_2(1\otimes\Delta_A)$, i.e: the diagram $$\begin{CD} B\otimes A @>\mu >> A \\ @V1\otimes\Delta_A VV @V\Delta_A VV \\ B\otimes A\otimes A @>\mu_2 >> A\otimes A \end{CD}$$ commutes. \end{enumerate} \end{Definition}
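A basic example, already implicit in the introduction, comes from a matched pair of finite groups $(N,T)$: the group algebra $B=kT$ acts on the commutative algebra $A=k^N$ of functions on $N$ via the action of $T$ on $N$, while $k^N$ coacts on $kT$ via the action of $N$ on $T$, and $(kT,k^N,\mu ,\rho )$ is then an abelian Singer pair whose associated Hopf algebra is the bicrossed product $k^N\# kT$ of the introduction.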
The twisted action of $B$ on $A^n$ and the twisted coaction of $A$ on $B^n$ can now be defined inductively: $$\mu_{n+1}=(\mu_n\otimes\operatorname{m}_A(1\otimes\mu ))(14235)((\rho\otimes 1)\Delta_B\otimes 1^n\otimes 1) \colon B\otimes A^{n}\otimes A\to A^{n}\otimes A$$ with $\mu_1=\mu$ and $$\rho_{n+1}=(1\otimes 1^n\otimes\operatorname{m}_A(1\otimes\mu ))(14235)((\rho\otimes 1)\Delta_B\otimes\rho_n)\colon B\otimes B^n\to B\otimes B^n\otimes A$$ with $\rho_1=\rho$.
\subsection{(Co-)modules over Singer pairs}
It is convenient to introduce the abelian category $_B\mathcal V^A$ of triples $(V,\omega ,\lambda)$, where \begin{enumerate} \item $\omega\colon B\otimes V\to V$ is a left $B$-module structure, \item $\lambda\colon V\to V\otimes A$ is a right $A$-comodule structure and \item the two equivalent diagrams $$\begin{CD} B\otimes V @>\omega >> V @. \quad \quad @. B\otimes V @>\omega >> V \\ @V1\otimes\lambda VV @V\lambda VV @. @V\lambda_{B\otimes V} VV @V\lambda VV \\ B\otimes V\otimes A @>\omega_{V\otimes A} >> V\otimes A @. \quad \quad @. B\otimes V\otimes A @>\omega \otimes 1>> V\otimes A \end{CD}$$ commute, where the twisted action $\omega_{V\otimes A}\colon B\otimes V\otimes A\to V\otimes A$ of $B$ on $V\otimes A$ is given by $\omega_{V\otimes A}=(\omega\otimes \operatorname{m}_A(1\otimes\mu ))(14235)((\rho\otimes 1)\Delta_B\otimes 1\otimes 1)$ and the twisted coaction $\lambda_{B\otimes V}\colon B\otimes V\to B\otimes V\otimes A$ of $A$ on $B\otimes V$ by $\lambda_{B\otimes V}=(1\otimes 1\otimes\operatorname{m}_A(1\otimes\mu ))(14235)((\rho\otimes 1)\Delta_B\otimes\lambda )$. \end{enumerate} The morphisms are $B$-linear and $A$-colinear maps. Observe that $(B,\operatorname{m}_B,\rho )$, $(A,\mu ,\Delta_A)$ and $(k, \epsilon_B\otimes 1, 1\otimes\iota_A)$ are objects of ${_B\mathcal V}^A$. Moreover, $( {_B\mathcal V}^A, \otimes , k)$ is a symmetric monoidal category, so that commutative algebras and cocommutative coalgebras are defined in $( {_B\mathcal V}^A, \otimes , k)$.
The free functor $F\colon \mathcal V^A\to {{_B\mathcal V}^A}$, defined by $F(X,\alpha )=(B\otimes X, \alpha_{B\otimes X})$ with twisted $A$-coaction $\alpha_{B\otimes X}=(1\otimes 1\otimes\operatorname{m}_A(1\otimes\mu))(14235)((\rho\otimes 1)\Delta_B\otimes\alpha )$ is left adjoint to the forgetful functor $U\colon {_B\mathcal V^A}\to \mathcal V^A$, with natural isomorphism $\theta\colon {_B\mathcal V^A}(FM,N)\to \mathcal V^A(M,UN)$ given by $\theta(f)(m)=f(1\otimes m)$ and $\theta^{-1}(g)(n\otimes m)=\mu_N(n\otimes g(m))$. The unit $\eta_M\colon M\to UF(M)$ and the counit $\epsilon_N\colon FU(N)\to N$ of the adjunction are given by $\eta_M=\iota_B\otimes 1$ and $\epsilon_N=\mu_N$, respectively, and give rise to a comonad $\mathbf{G}=(FU,\epsilon, \delta=F\eta U).$
Similarly, the cofree functor $L\colon {_B\mathcal V}\to {_B\mathcal V}^A$, defined by $L(Y,\beta )=(Y\otimes A, \beta_{Y\otimes A})$ with twisted $B$-action $\beta_{Y\otimes A}=(\beta\otimes\operatorname{m}_A(1\otimes\mu ))(14235)((\rho\otimes 1)\Delta_B\otimes 1\otimes 1)$ is right adjoint to the forgetful functor $U\colon {_B\mathcal V}^A\to {_B\mathcal V}$, with natural isomorphism $\psi\colon {_B\mathcal V}(UM,N)\to {_B\mathcal V}^A(M,LN)$ given by $\psi (g)=(1\otimes g)\delta_M$ and $\psi^{-1}(f)=(1\otimes\epsilon_A)f$. The unit $\eta_M\colon M\to LU(M)$ and the counit $\epsilon_N\colon UL(N)\to N$ of the adjunction are given by $\eta_M=\delta_M$ and $\epsilon_N=1\otimes\epsilon_A$, respectively. They give rise to a monad (or triple) $\mathbf{T}=(LU,\eta ,\mu =L\epsilon U)$ on $_B\mathcal{V}^A$. The (non-commutative) square of functors $$\begin{CD} \mathcal V @>L>> \mathcal V^A \\ @VFVV @VFVV \\ {_B\mathcal V} @>L>> {_B\mathcal V}^A \end{CD}$$ together with the corresponding forgetful adjoint functors describes the situation. Observe that $ {_B\mathcal V}^A(G(M),T(N))\cong \mathcal V(UM,UN)$. These adjunctions, monads and comonads restrict to coalgebras and algebras.
\subsection{Cohomology of an abelian Singer pair}
The comonad $\mathbf G=(FU,\epsilon ,\delta =F\eta U)$ defined on ${_B\mathcal V}^A$ can be used to construct $B$-free simplicial resolutions $\mathbf X_B(N)$ with $X_n(N)=G^{n+1}N=B^{n+1}\otimes N$, faces and degeneracies $$\partial_i=G^i\epsilon_{G^{n+1-i}(N)}\colon X_{n+1}\to X_n, \quad s_i=G^i\delta_{G^{n-i}(N)}\colon X_n\to X_{n+1}$$ given by $\partial_i =1^i\otimes\operatorname{m}_B\otimes 1^{n+1-i}$ for $0\leq i\leq n$, $\partial_{n+1}=1^{n+1}\otimes\mu_N$, and $s_i=1^{i+1}\otimes\iota_B\otimes 1^{n+1-i}$ for $0\leq i\leq n$.
The monad $\mathbf T=(LU,\eta ,\mu =L\epsilon U)$ on ${_B\mathcal V}^A$ can be used to construct $A$-cofree cosimplicial resolutions $\mathbf{Y}_A(M)$ with $Y_A^n(M)=T^{n+1}M=M\otimes A^{n+1}$, cofaces and codegeneracies $$\partial^i=T^{n+1-i}\eta_{T^i(M)}\colon Y^n\to Y^{n+1} \quad ,\quad s^i=T^{n-i}\mu_{T^i(M)}\colon Y^{n+1}\to Y^{n}$$ given by $\partial^0=\delta_M\otimes 1^{n+1}$, $\partial^i=1^{i}\otimes\Delta_A\otimes 1^{n+1-i}$ for $1\leq i\leq n+1$, and $s^i=1^{i+1}\otimes\epsilon_A\otimes 1^{n+1-i}$ for $0\leq i\leq n$.
The total right derived functor of $$ {_B\operatorname{Reg}^A}=\mathcal{U} {_B\operatorname{Hom}^A}\colon ({_B\mathcal C^A})^{op}\times {_B\mathcal A^A}\to \operatorname{Ab}$$ is now defined by means of the simplicial $\mathbf G$-resolutions $\mathbf X_B(M)=\mathbf G^{*+1}M$ and the cosimplicial $\mathbf T$-resolutions $\mathbf Y_A(N)=\mathbf T^{*+1}N$ as $$R^* {_B\operatorname{Reg}^A}(M,N)=H^*(\operatorname{Tot} {_B\operatorname{Reg}^A}(\mathbf X_B(M),\mathbf Y_A(N))).$$
\begin{Definition}\label{d12} The cohomology of a Singer pair $(B,A,\mu ,\rho)$ is given by $$H^*(B,A)=H^{*+1}(\operatorname{Tot}\mathbf Z_0)$$ where $\mathbf Z_0$ is the double cochain complex obtained from the double cochain complex $\mathbf Z={_B\operatorname{Reg}^A}(\mathbf X(k),\mathbf Y(k))$ by deleting the $0^{th}$ row and the $0^{th}$ column. \end{Definition}
\subsection{The normalized standard complex}
Use the natural isomorphism $${_B\mathcal V}^A(FU(M),LU(N))\cong \mathcal V(UM,UN)$$ to get the standard double complex $$Z^{m,n}=({_B\operatorname{Reg}^A}(G^{m+1}(k), T^{n+1}(k)),\partial',\partial)\cong ( \operatorname{Reg}(B^{m},A^{n}),\partial', \partial).$$ For computational purposes it is useful to replace this complex by the normalized standard complex $Z_+$, where $Z_+^{m,n}=\operatorname{Reg}_+(B^m,A^n)$ is the intersection of the degeneracies, consisting of all convolution invertible maps $f\colon B^m\to A^n$ satisfying $f(1\otimes\ldots\otimes\eta\varepsilon\otimes\ldots\otimes 1)=\eta\varepsilon$ and $(1\otimes\ldots\otimes\eta\varepsilon\otimes\ldots\otimes 1)f=\eta\varepsilon$. In more detail, the normalized standard double complex is of the form \begin{small} $$\begin{CD} \operatorname{Reg}_+(k,k) @>\partial_{0,0}>> \operatorname{Reg}_+(B,k) @>\partial_{1,0}>> \operatorname{Reg}_+(B^2,k) @>\partial_{2,0}>>\operatorname{Reg}_+(B^3,k)\dots \\ @V\partial^{0,0}VV @V\partial^{1,0}VV @V\partial^{2,0}VV @V\partial^{3,0}VV \\ \operatorname{Reg}_+(k,A) @>\partial_{0,1}>>\operatorname{Reg}_+(B,A) @>\partial_{1,1}>>\operatorname{Reg}_+(B^2,A) @>\partial_{2,1}>>\operatorname{Reg}_+(B^3,A)\dots \\ @V\partial^{0,1}VV @V\partial^{1,1}VV @V\partial^{2,1}VV @V\partial^{3,1}VV \\ \operatorname{Reg}_+(k,A^2) @>\partial_{0,2}>>\operatorname{Reg}_+(B,A^2) @>\partial_{1,2}>>\operatorname{Reg}_+(B^2,A^2) @>\partial_{2,2}>>\operatorname{Reg}_+(B^3,A^2)\dots \\ @V\partial^{0,2}VV @V\partial^{1,2}VV @V\partial^{2,2}VV @V\partial^{3,2}VV \\ \operatorname{Reg}_+(k,A^3) @>\partial_{0,3}>>\operatorname{Reg}_+(B,A^3) @>\partial_{1,3}>>\operatorname{Reg}_+(B^2,A^3) @>\partial_{2,3}>>\operatorname{Reg}_+(B^3,A^3)\dots \\ @V\partial^{0,3}VV @V\partial^{1,3}VV @V\partial^{2,3}VV @V\partial^{3,3}VV \\ \vdots @. \vdots @. \vdots @. \vdots \end{CD}$$ \end{small} The coboundary maps $$d_{n,m}^i\colon \operatorname{Reg}_+(B^n,A^m)\to \operatorname{Reg}_+(B^{n+1},A^m)$$ defined by $$d_{n,m}^0\alpha =\mu_m(1_B\otimes \alpha),\ d_{n,m}^i\alpha =\alpha(1_{B^{i-1}}\otimes\operatorname{m}_B\otimes 1_{B^{n-i}}),\ d_{n,m}^{n+1}\alpha =\alpha\otimes\varepsilon,$$ for $1\le i\le n,$ are used to construct the horizontal differentials $$\partial_{n,m}\colon\operatorname{Reg}_+(B^n,A^m)\to \operatorname{Reg}_+(B^{n+1},A^m),$$ given by the \lq alternating' convolution product $$\partial_{n,m}\alpha=d_{n,m}^0\alpha*d_{n,m}^1\alpha^{-1}*d_{n,m}^2\alpha*\ldots *d_{n,m}^{n+1}\alpha^{(-1)^{n+1}}.$$ Dually the coboundaries $${d'}^i_{n,m}\colon\operatorname{Reg}_+(B^n,A^m)\to \operatorname{Reg}_+(B^n,A^{m+1})$$ defined by $${d'}^0_{n,m}\beta =(\beta\otimes 1_A)\rho_n,\ {d'}^i_{n,m}\beta = ( 1_{A^{i-1}}\otimes\Delta_A\otimes 1_{A^{m-i}})\beta,\ {d'}^{m+1}_{n,m}\beta =\eta\otimes\beta,$$ for $1\le i\le m$, determine the vertical differentials $$\partial^{n,m}\colon \operatorname{Reg}_+(B^n,A^m)\to \operatorname{Reg}_+(B^{n},A^{m+1}),$$ where $$\partial^{n,m}\beta={d'}^0_{n,m}\beta *{d'}^1_{n,m}\beta^{-1}*{d'}^2_{n,m}\beta*\ldots *{d'}^{m+1}_{n,m}\beta^{(-1)^{m+1}}.$$
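For orientation we record the lowest bidegree on elements (a sketch in Sweedler notation, obtained directly from the definitions just given). For $\alpha\in\operatorname{Reg}_+(B,A)$, $$(\partial_{1,1}\alpha)(b\otimes b')=\mu\big(b_1\otimes\alpha(b'_1)\big)\,\alpha^{-1}(b_2b'_2)\,\alpha(b_3)\,\varepsilon(b'_3), \qquad (\partial^{1,1}\alpha)(b)=\big((\alpha\otimes 1_A)\rho(b_1)\big)\,\Delta_A\big(\alpha^{-1}(b_2)\big)\,\big(1_A\otimes\alpha(b_3)\big),$$ the second product being taken in $A\otimes A$. A convolution invertible $\alpha$ for which both expressions equal $\eta\varepsilon$ is a $1$-cocycle of the total complex below and thus represents a class in $H^1(B,A)$.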
The cohomology of the abelian Singer pair $(B,A,\mu,\rho)$ is by definition the cohomology of the total complex. $$\begin{array}{l} 0\to\operatorname{Reg}_+(B,A)\to \operatorname{Reg}_+(B^2,A)\oplus\operatorname{Reg}_+(B,A^2)\to\\ \ldots\to\bigoplus_{i=1}^n\operatorname{Reg}_+(B^{n+1-i},A^i)\to \ldots \end{array}$$
There are canonical isomorphisms $H^1(B,A)\simeq \operatorname{Aut}(A\# B)$ and $H^2(B,A)\simeq \operatorname{Opext}(B,A)$ [Ho] (here $\operatorname{Opext}(B,A)=\operatorname{Opext}(B,A,\mu,\rho)$ denotes the abelian group of equivalence classes of those Hopf algebra extensions that give rise to the Singer pair $(B,A,\mu,\rho)$).
\subsection{Special cases} In particular, for $A=k=M$ and $N$ a commutative $B$-module algebra we get Sweedler cohomology of $B$ with coefficients in $N$ [Sw1] $$H^*(B,N)=H^*(\operatorname{Tot} {_B\operatorname{Reg}}(\mathbf X(k),N))=H^*(\operatorname{Tot} {_B\operatorname{Reg}}(\mathbf G^{*+1}(k),N)).$$ In [Sw1] it is also shown that if $G$ is a group and $\mathbf{g}$ is a Lie algebra, then there are canonical isomorphisms $H^n(kG,A)\simeq H^n(G,\mathcal{U}(A))$ for $n\ge 1$ and $H^m(U\mathbf{g},A)\simeq H^m(\mathbf{g},A^+)$ for $m\ge 2$, where $\mathcal{U}(A)$ denotes the multiplicative group of units and $A^+$ denotes the underlying vector space.
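For concreteness (this normalization is standard and is quoted here only for orientation): in low degree a Sweedler $2$-cocycle on $B$ with values in the commutative $B$-module algebra $N$ amounts to a convolution invertible map $\sigma\colon B\otimes B\to N$ with $\sigma(1\otimes x)=\sigma(x\otimes 1)=\varepsilon(x)1$ and $$\big(x_1\cdot\sigma(y_1\otimes z_1)\big)\,\sigma(x_2\otimes y_2z_2)=\sigma(x_1\otimes y_1)\,\sigma(x_2y_2\otimes z),$$ the familiar condition governing crossed products $N\#_\sigma B$.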
For $B=k=N$ and $M$ a cocommutative $A$-comodule coalgebra we get the dual version [Sw1,Do] $$H^*(M,A)=H^*(\operatorname{Tot} {\operatorname{Reg}^A}(M,\mathbf Y(k)))=H^*(\operatorname{Tot} {\operatorname{Reg}^A}(M,\mathbf T^{*+1}(k))).$$
\section{Cohomology of an abelian matched pair}
\subsection{Abelian matched pairs}
Here we consider pairs of cocommutative Hopf algebras $(T,N)$ together with a left action $\mu\colon T\otimes N\to N$, $\mu (t\otimes n)=t(n)$, and a right action $\nu\colon T\otimes N\to T$, $\nu (t\otimes n)=t^n$. Then we have the twisted switch $$\tilde\sigma =(\mu\otimes\nu )\Delta_{T\otimes N}\colon T\otimes N\to N\otimes T$$ or, in shorthand $\tilde\sigma (t\otimes n)=t_1(n_1)\otimes t_2^{n_2}$, which in case of trivial actions reduces to the ordinary switch $\sigma\colon T\otimes N\to N\otimes T$.
\begin{Definition} Such a configuration $(T,N,\mu ,\nu )$ is called an abelian matched pair if \begin{enumerate} \item $N$ is a left $T$-module coalgebra, i.e.\ $\mu\colon T\otimes N\to N$ is a coalgebra map, \item $T$ is a right $N$-module coalgebra, i.e.\ $\nu\colon T\otimes N\to T$ is a coalgebra map, \item $N$ is a left $T$-module algebra with respect to the twisted left action $\tilde\mu =(1\otimes\mu )(\tilde\sigma\otimes 1)\colon T\otimes N\otimes N\to N\otimes N$, in the sense that the diagrams $$\begin{CD} T\otimes N\otimes N @>1\otimes\operatorname{m}_N>> T\otimes N @. \quad \quad @. T\otimes k @>1\otimes\iota_N>> T\otimes N \\ @V\tilde\mu VV @V\mu VV @. @V\epsilon_T\otimes 1VV @V\mu VV \\ N\otimes N @>m_N>> N @. \quad \quad @. k @>\iota_N>> N \end{CD}$$ commute, i.e.\ $\mu (t\otimes nm)=\sum \mu (t_1\otimes n_1)\mu (\nu (t_2\otimes n_2)\otimes m)$ and $\mu (t\otimes 1)=\epsilon (t)1_N$, or in shorthand $t(nm)=t_1(n_1)t_2^{n_2}(m)$ and $t(1_N)=\epsilon (t)1_N$, \item $T$ is a right $N$-module algebra with respect to the twisted right action $\tilde\nu =(\nu\otimes 1 )(1\otimes\tilde\sigma )\colon T\otimes T\otimes N\to T\otimes T$, in the sense that the diagrams $$\begin{CD} T\otimes T\otimes N @>m_T\otimes 1>> T\otimes N @. \quad \quad @. k\otimes N @>\iota_T\otimes 1>> T\otimes N \\ @V\tilde\nu VV @V\nu VV @. @V1\otimes\epsilon_NVV @V\nu VV \\ T\otimes T @>m_T>> T @. \quad \quad @. k @>\iota_T>> T \end{CD}$$ commute, i.e.\ $\nu (ts\otimes n)=\sum \nu (t\otimes\mu (s_1\otimes n_1))\nu (s_2\otimes n_2)$ and $\nu (1_T\otimes n)=\epsilon (n)1_T$, or in shorthand $(ts)^n=t^{s_1(n_1)}s_2^{n_2}$ and $1_T^n=\epsilon (n)1_T$. \end{enumerate} \end{Definition}
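A guiding example (standard, and stated here only for orientation): let $L$ be a group with an exact factorization $L=NT$ by subgroups $N$ and $T$ with $N\cap T=\{1\}$, so that every element of $L$ is uniquely a product $nt$. Writing $tn=t(n)\,t^n$ with $t(n)\in N$ and $t^n\in T$ defines a left action of $T$ on $N$ and a right action of $N$ on $T$, and the linearized maps make $(kT,kN)$ an abelian matched pair: conditions (3) and (4) follow by expanding $t(nm)$ and $(ts)n$ inside $L$, and the bismash product of the next paragraph recovers $kL$.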
The bismash product Hopf algebra $(N\bowtie T,\operatorname{m},\Delta, \iota ,\epsilon ,S )$ is the tensor product coalgebra $N\otimes T$ with unit $\iota_{N\otimes T}\colon k\to N\otimes T$, twisted multiplication $$\operatorname{m}=(\operatorname{m}_N\otimes\operatorname{m}_T)(1\otimes\tilde\sigma\otimes 1)\colon N\otimes T\otimes N\otimes T\to N\otimes T,$$ in short $\tilde\sigma (t\otimes n)=t_1(n_1)\otimes t_2^{n_2}$, $(n\otimes t)(m\otimes s)=nt_1(m_1)\otimes t_2^{m_2}s$, and antipode $$S=\tilde\sigma (S\otimes S)\sigma\colon N\otimes T\to N\otimes T,$$ i.e.\ $S(n\otimes t)=S(t_2)(S(n_2))\otimes S(t_1)^{S(n_1)}$. For a proof that this is a Hopf algebra see [Kas]. To avoid ambiguity we will often write $n\bowtie t$ for $n\otimes t$ in $N\bowtie T$. We also identify $N$ and $T$ with the Hopf subalgebras $N\bowtie k$ and $k\bowtie T$, respectively, i.e.\ $n\equiv n\bowtie 1$ and $t\equiv 1\bowtie t$. In this sense we write $n\bowtie t=nt$ and $tn=t_1(n_1)t_2^{n_2}$.
If the action $\nu\colon T\otimes N\to T$ is trivial, then the bismash product $N\bowtie T$ becomes the smash product (or semi-direct product) $N\rtimes T$. An action $\mu\colon T\otimes N\to N$ is compatible with the trivial action $1\otimes\epsilon\colon T\otimes N\to T$, i.e.\ $(T,N,\mu , 1\otimes\epsilon )$ is a matched pair, if and only if $N$ is a $T$-module bialgebra and $\mu (t_1\otimes n)\otimes t_2=\mu (t_2\otimes n)\otimes t_1$. Note that the last condition is trivially satisfied if $T$ is cocommutative.
To make calculations more transparent we start to use the abbreviated Sweedler sigma notation for the cocommutative setting whenever convenient.
\begin{Lemma}[{[Ma3]}, Proposition 2.3]\label{l22} Let $(T,N,\mu,\nu)$ be an abelian matched pair. \begin{enumerate} \item A left $T$-module, left $N$-module $(V,\alpha ,\beta )$ is a left $N\bowtie T$-module if and only if $t(nv)=t(n)(t^{n}(v))$, i.e.\ if and only if with the twisted action $\tilde\alpha =(1\otimes\alpha )(\tilde\sigma\otimes 1)\colon T\otimes N\otimes V\to N\otimes V$ the square $$\begin{CD} T\otimes N\otimes V @>1\otimes\beta >> T\otimes V \\ @V\tilde\alpha VV @V\alpha VV \\ N\otimes V @> \beta >> V \end{CD}$$ commutes. \item A right $T$-module, right $N$-module $(W,\alpha ,\beta )$ is a right $N\bowtie T$-module if and only if $(v^t)^n=(v^{t(n)})^{t^{n}}$, i.e.\ if and only if with the twisted action $\tilde\beta =(\beta\otimes 1)(1\otimes\tilde\sigma )\colon W\otimes T\otimes N\to W\otimes T$ the square $$\begin{CD} W\otimes T\otimes N @>\alpha\otimes 1 >> W\otimes N \\ @V\tilde\beta VV @V\beta VV \\ W\otimes T @> \alpha >> W \end{CD}$$ commutes. \item Let $(V, \alpha )$ be a left $T$-module and $(W, \beta )$ a right $N$-module. Then \begin{enumerate} \item [(i)] $N\otimes V$ is a left $N\bowtie T$-module with $N$-action on the first factor and $T$-action given by $$\tilde\alpha =(1\otimes\alpha )(\tilde\sigma\otimes 1)\colon T\otimes N\otimes V\to N\otimes V,$$ that is $t(n\otimes v)= t_1(n_1)\otimes t_2^{n_2}(v)$.
\item[(ii)] $W\otimes T$ is a right $N\bowtie T$-module with $T$-action on the right factor and $N$-action given by $$\tilde\beta =(\beta\otimes 1)(1\otimes\tilde\sigma )\colon W\otimes T\otimes N\to W\otimes T,$$ that is $(w\otimes t)^n= w^{t_2(n_2)}\otimes t_1^{n_1}$. Moreover, $W\otimes T$ is a left $N\bowtie T$-module by twisting the action via the antipode of $N\bowtie T$.
\item[(iii)] The map $\psi\colon (N\bowtie T)\otimes V\otimes W\to (W\otimes T)\otimes (N\otimes V)$ defined by $\psi ((n\bowtie t)\otimes v\otimes w)=w^{S(t)(S(n))}\otimes S(t)^{S(n)}\otimes n\otimes tv$, is an $N\bowtie T$-homomorphism, when $N\bowtie T$ is acting on the first factor of $(N\bowtie T)\otimes V\otimes W$ and diagonally on $(W\otimes T)\otimes (N\otimes V)$\\ $(nt)(w\otimes s\otimes m\otimes v)=w^{(sS(t))(S(n))}\otimes (sS(t))^{S(n)}\otimes nt(m)\otimes t^m(v).$\\ In particular, $(W\otimes T)\otimes (N\otimes V)$ is a free left $N\bowtie T$-module in which any basis of the vector space $(W\otimes k)\otimes (k\otimes V)$ is an $N\bowtie T$-free basis. \end{enumerate} \end{enumerate} \end{Lemma}
Observe that the inverse of $\psi\colon (N\bowtie T)\otimes V\otimes W\to (W\otimes T)\otimes (N\otimes V)$ is given by $$\psi^{-1}((w\otimes t)\otimes (n\otimes v))=(n\bowtie S(t^n))\otimes (w^{t(n)}\otimes t^n(v)).$$
The twisted actions can now be extended by induction to higher tensor powers $$\mu_{p+1}=(1\otimes\mu_p)(\tilde\sigma\otimes 1^p)\colon T\otimes N^{p+1}\to N^{p+1}$$ so that $\mu_{p+1}(t\otimes n\otimes\mathbf m)=\mu (t\otimes n)\otimes \mu_p(\nu (t\otimes n)\otimes\mathbf m)$, $t(n\otimes\mathbf m)=t(n)\otimes t^{n}(\mathbf m)$ and $$\nu_{q+1}=(\nu_q\otimes 1)(1^q\otimes\tilde\sigma )\colon T^{q+1}\otimes N\to T^{q+1}$$ so that $\nu_{q+1}(\mathbf t\otimes s\otimes n)=\nu_q(\mathbf t\otimes\mu (s\otimes n))\otimes \nu (s\otimes n)$, $(\mathbf t\otimes s)^n={\mathbf t}^{s(n)}\otimes s^{n}$. Observe that the squares $$\begin{CD} T\otimes N^{p+1} @>\mu_{p+1}>> N^{p+1} @. \quad \quad @. T^{q+1}\otimes N @>\nu_{q+1}>> T^{q+1} \\ @V1\otimes fVV @VfVV @. @Vg\otimes 1VV @VgVV \\
T\otimes N^p @>\mu_p>> N^p @. \quad \quad @. T^q\otimes N @>\nu_q>> T^q \end{CD}$$ commute when $f=1^{i-1}\otimes\operatorname{m}_N\otimes 1^{p-i}$ for $1\leq i\leq p$ and $g=1^{j-1}\otimes\operatorname{m}_T\otimes 1^{q-j}$ for $1\leq j\leq q$, respectively.
By part 3 (iii) of the lemma above $T^{i+1}\otimes N^{j+1}$ can be equipped with the $N\bowtie T$-module structure defined by $(nt)(\mathbf r\otimes s\otimes m\otimes\mathbf k)=\mathbf r^{(sS(t))(S(n))}\otimes (sS(t))^{S(n)}\otimes nt(m)\otimes t^m(\mathbf k)$.
\begin{Corollary}\label{c31} The map $\psi\colon (N\bowtie T)\otimes T^i\otimes N^j\to T^{i+1}\otimes N^{j+1}$, defined by $\psi ((nt)\otimes (\mathbf r\otimes\mathbf k))= \mathbf r^{S(t)(S(n))}\otimes S(t)^{S(n)}\otimes n\otimes t(\mathbf k)$, is an isomorphism of $N\bowtie T$-modules. \end{Corollary}
The content of the Lemma \ref{l22} can be summarized in the square of \lq free\rq\ functors between monoidal categories $$\begin{CD} \mathcal V @> F_T>> {_T\mathcal V} \\ @VF_NVV @V\tilde F_NVV \\ _N\mathcal V @>\tilde F_T>> {_{N\bowtie T}\mathcal V} \end{CD}$$ each with a corresponding tensor preserving right adjoint forgetful functor.
\subsection{The distributive law of a matched pair}
The two comonads on $ {_{N\bowtie T}\mathcal V}$ given by $$\tilde{\mathbf G_T}=(\tilde G_T,\delta_T, \epsilon_T) \quad , \quad \tilde{\mathbf G_N}=(\tilde{G_N},\delta_N, \epsilon_N)$$ with $\tilde{G_T}=\tilde{F_T}\tilde{U_T}$, $\delta_T(t\otimes x)=t\otimes 1\otimes x$, $\epsilon_T(t\otimes x)=tx$, and with $\tilde{G_N}=\tilde{F_N}\tilde{U_N}$, $\delta_N(n\otimes x)=n\otimes 1\otimes x$, $\epsilon_N(n\otimes x)=nx$, satisfy a distributive law [Ba] $$\tilde\sigma\colon \tilde{G_T}\tilde{\mathbf G_N}\to \tilde{\mathbf G_N}\tilde{G_T}$$ given by $\tilde\sigma (t\otimes n\otimes -)=\tilde\sigma (t\otimes n)\otimes - =t_1(n_1)\otimes t_2^{n_2}\otimes - $. The equations for a distributive law $$\tilde{G_N}\delta_T\cdot\tilde\sigma =\tilde\sigma\tilde{G_T}\cdot \tilde{G_T}\tilde\sigma\cdot\delta_T\tilde{G_N} \quad , \quad \delta_N\tilde{G_T}\cdot\tilde\sigma =\tilde{G_N}\tilde\sigma\cdot\tilde\sigma\tilde{G_N}\cdot\tilde{G_T}\delta_N$$ and $$\epsilon_N\tilde{G_T}\cdot\tilde\sigma =\tilde{G_T}\epsilon_N \quad , \quad \tilde{G_N}\epsilon_T\cdot\tilde\sigma =\epsilon_T\tilde{G_N}$$ are easily verified.
\begin{Proposition}[{[Ba]}, Th. 2.2] The composite $$\mathbf G=\mathbf G_N\circ_{\tilde\sigma}\mathbf G_T$$ with $G=G_NG_T$, $\delta =G_N\tilde\sigma G_T\cdot\delta_N\delta_T$ and $\epsilon =\epsilon_N\epsilon_T$ is again a comonad on $ {_{N\bowtie T}\mathcal V}$. Moreover, $\mathbf G=\mathbf G_{N\bowtie T}$. \end{Proposition}
The antipode can be used to define a left action $$\nu_S=S\nu (S\otimes S)\sigma\colon N\otimes T\to T$$
by $n(t)=\nu_S(n\otimes t)=S\nu (S\otimes S)\sigma (n\otimes t)=S(S(t)^{S(n)})$ and a right action $$\mu_S=S\mu (S\otimes S)\sigma\colon N\otimes T\to N$$ by $n^t=\mu_S(n\otimes t)=S\mu (S\otimes S)\sigma (n\otimes t)=S(S(t)(S(n)))$. The inverse of the twisted switch is then $$\tilde\sigma^{-1}=(\nu_S\otimes\mu_S)\Delta_{N\otimes T}\colon N\otimes T\to T\otimes N$$ given by $\tilde\sigma^{-1}(n\otimes t)=n_1(t_1)\otimes n_2^{t_2}$, and induces the inverse distributive law $$\tilde\sigma^{-1}\colon G_NG_T\to G_TG_N.$$ \vskip .5cm
\subsection{Matched pair cohomology}\label{s23}
For every Hopf algebra $H$ the category of $H$-modules ${_H\mathcal V}$ is symmetric monoidal. The tensor product of two $H$-modules $V$ and $W$ has underlying vector space the ordinary vector space tensor product $V\otimes W$ and diagonal $H$-action. Algebras and coalgebras in $_H\mathcal V$ are known as $H$-module algebras and $H$-module coalgebras, respectively. The adjoint functors and comonads of the last section therefore restrict to the situations where $\mathcal V$ is replaced by $\mathcal C$ or $\mathcal A$. In particular, if $(T,N,\mu ,\nu )$ is an abelian matched pair, $H=N\bowtie T$ and $C$ is a $H$-module coalgebra then $\mathbf X_H(C)$ is a canonical simplicial free $H$-module coalgebra resolution of $C$ and by the Corollary \ref{c31} the composite $\mathbf X_N(\mathbf X_T(C))$ is a simplicial double complex of free $H$-module coalgebras.
\begin{Definition}\label{d25} The cohomology of an abelian matched pair $(T,N,\mu ,\nu )$ with coefficients in a commutative $N\bowtie T$-module algebra $A$ is defined by $$\mathcal{H}^*(T,N,A)=H^{*+1}(\operatorname{Tot} (\mathbf B_0)),$$ where $\mathbf B_0$ is the double cochain complex obtained from the double cochain complex $\mathbf B=C({_{N\bowtie T}\operatorname{Reg}}(\mathbf X_N(\mathbf X_T(k)),A))$ by deleting the $0^{th}$ row and the $0^{th}$ column. \end{Definition}
\subsection{The normalized standard complex}
Let $H=N\bowtie T$ be a bismash product of an abelian matched pair of Hopf algebras and let the algebra $A$ be a left $N$ and a right $T$-module such that it is a left $H$-module via $nt(a)=n({a^{S(t)}})$, i.e. $(n(a))^{S(t)}=(t(n))(a^{S(t^n)}).$
Note that $\operatorname{Hom}(T^{p},A)$ becomes a left $N$-module via $n(f)({\mathbf t})=n(f(\nu_p({\mathbf t},n)))$ and $\operatorname{Hom}(N^{q},A)$ becomes a right $T$-module via $f^t({\mathbf n})=(f(\mu_q(t,{\mathbf n})))^t= S(t)(f(\mu_q(t,{\mathbf n})))$.
The simplicial double complex $G_T^pG_N^q(k)=(T^{p}\otimes N^{q})_{p,q}$, $p,q\ge 1$ of free $H$-modules has horizontal face operators $1\otimes d_N^*\colon T^{p}\otimes N^{q+1}\to T^{p}\otimes N^{q}$, vertical face operators $d_T^*\otimes 1\colon T^{p+1}\otimes N^{q}\to T^{p}\otimes N^{q}$, horizontal degeneracies $1\otimes s_N^*\colon T^{p}\otimes N^{q}\to T^{p}\otimes N^{q+1}$ and vertical degeneracies $s_T^*\otimes 1\colon T^{p}\otimes N^{q}\to T^{p+1}\otimes N^{q}$, where $$d_N^i= 1^{i}\otimes\operatorname{m}\otimes 1^{q-i-1},\quad d_N^q= 1^q\otimes\varepsilon,\quad s_N^i= 1^{i}\otimes\eta\otimes 1^{q-i}$$ for $0\le i\le q-1$, and $$d_T^j= 1^{p-j-1}\otimes\operatorname{m}\otimes 1^{j},\quad d_T^p=\varepsilon\otimes 1^p,\quad s_T^j= 1^{p-j}\otimes\eta\otimes 1^{j}$$ for $0\le j\le p-1$.
These maps preserve the $H$-module structure on $T^{p}\otimes N^{q}$. Apply the functor $\operatorname{_HReg}(\_,A)\colon {{_H\mathcal C}}^{op}\to \operatorname{Ab}$ to get a cosimplicial double complex of abelian groups $\mathbf B={\operatorname{_HReg}}(\mathbf X_N(\mathbf X_T(k)),A)$ with $B^{p,q}=\operatorname{_HReg}(T^{p+1}\otimes N^{q+1},A)$, coface operators $\operatorname{_HReg}(d_{N*},A)$, $\operatorname{_HReg}(d_{T*},A)$ and codegeneracies $\operatorname{_HReg}(s_{N*},A)$, $\operatorname{_HReg}(s_{T*},A)$.
The isomorphism described in Corollary \ref{c31} induces an isomorphism of double complexes $\mathbf{B}(T,N,A)\cong\mathbf{C}(T,N,A)$ given by $${\operatorname{_HReg}}(T^{p+1}\otimes N^{q+1},A)\stackrel{\operatorname{_HReg}(\psi,A)}{\longrightarrow} {\operatorname{_HReg}}(H\otimes T^{p}\otimes N^{q},A) \stackrel{\theta}{\longrightarrow} \operatorname{Reg}(T^{p}\otimes N^{q},A)$$ for $p,q\ge 0$, where $C^{p,q}=\operatorname{Reg} (T^p\otimes N^q,A)$ is the abelian group of convolution invertible linear maps $f\colon T^{p}\otimes N^{q}\to A.$
The horizontal differentials ${\delta_N}\colon C^{p,q}\to C^{p,q+1}$ and the vertical differentials ${\delta_T}\colon C^{p,q}\to C^{p+1,q}$ are transported from ${\mathbf B}$ and turn out to be the twisted Sweedler differentials on the $N$ and $T$ parts, respectively. The coface operators are $$ {\delta_N}_i f({\mathbf t}\otimes{\mathbf n})=\begin{cases} f({\mathbf t}\otimes n_1\otimes\ldots \otimes n_in_{i+1}\otimes\ldots\otimes n_{q+1}), \mbox{ for } i=1,\ldots ,q \\ n_1(f(\nu_p({\mathbf t}\otimes n_1)\otimes n_2\otimes\ldots\otimes n_{q+1})), \mbox{ for } i=0 \\ f({\mathbf t}\otimes n_1\otimes\ldots\otimes n_q)\varepsilon(n_{q+1}), \mbox{ for } i=q+1 \end{cases}$$ where ${\mathbf t}\in T^{p}$ and ${\mathbf n}=n_1\otimes\ldots\otimes n_{q+1}\in N^{q+1}$, and similarly $${\delta_T}_j f({\mathbf t}\otimes{\mathbf n})=\begin{cases} f(t_{p+1}\otimes\ldots\otimes t_{j+1}t_{j}\otimes \ldots \otimes t_{1}\otimes {\mathbf n}), \mbox{ for } j=1,\ldots ,p \\ (f(t_{p+1}\otimes\ldots \otimes t_2\otimes \mu_q(t_{1}\otimes{\mathbf n})))^{t_1}, \mbox{ for } j=0 \\ \varepsilon(t_{p+1})f(t_p\otimes\ldots\otimes t_{1}\otimes{\mathbf n}), \mbox{ for } j=p+1 \end{cases}$$ where ${\mathbf t}=t_1\otimes\ldots\otimes t_{p+1}\in T^{p+1}$ and ${\mathbf n}\in N^{q}$. The differentials in the associated double cochain complex are the alternating convolution products $${\delta_N} f={\delta_N}_0f*{\delta_N}_1 f^{-1}*\ldots *{\delta_N}_{q+1}f^{\pm 1}$$ and $${\delta_T} f={\delta_T}_0f*{\delta_T}_1 f^{-1}*\ldots *{\delta_T}_{p+1}f^{\pm 1}.$$
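For orientation, the lowest bidegree with trivial coefficients reads as follows (a sketch, with $p=q=1$, $A=k$ and trivial actions, in the shorthand notation): for $f\in\operatorname{Reg}_+(T\otimes N,k)$, $$({\delta_N} f)(t\otimes n\otimes n')= f\big(t_1^{\,n_1}\otimes n'_1\big)\,f^{-1}\big(t_2\otimes n_2n'_2\big)\,f(t_3\otimes n_3)\,\varepsilon(n'_3),$$ and ${\delta_T} f$ is given by the analogous expression in the $T$-arguments (with the $t_i$ fed into $f$ in the reversed order used above).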
In the associated normalized double complex $\mathbf C_+$, the $(p,q)$ term $C^{p,q}_+=\operatorname{Reg}_+(T^{p}\otimes N^{q},A)$ is the intersection of the degeneracy operators, that is, the abelian group of convolution invertible maps $f\colon T^{p}\otimes N^{q}\to A$ with $f(t_p\otimes\ldots\otimes t_1\otimes n_1\otimes\ldots\otimes n_q)=\varepsilon(t_p)\ldots \varepsilon(n_q)$, whenever one of the $t_i$ or one of the $n_j$ lies in $k$. Then $\mathcal H^*(T,N,A)\cong H^{*+1}(\operatorname{Tot}\mathbf C_0)$, where $\mathbf C_0$ is the double complex obtained from $\mathbf C_+$ by replacing the edges by zero.
The groups of cocycles ${\mathcal Z}^i(T,N,A)$ and the groups of coboundaries ${\mathcal B}^i(T,N,A)$ consist of $i$-tuples of maps $(f_j)_{1\le j\le i}$, $f_j\colon T^{j}\otimes N^{i+1-j}\to A$ that satisfy certain conditions.
We introduce the subgroups ${\mathcal Z}^i_p(T,N,A)\le {\mathcal Z}^i(T,N,A)$, that are spanned by $i$-tuples in which the $f_j$'s are trivial for $j\not= p$ and subgroups ${\mathcal B}_p^i={\mathcal Z}_p^i\cap {\mathcal B}^i\subset {\mathcal B}^i$. These give rise to subgroups of cohomology groups ${\mathcal H}_p^i={\mathcal Z}_p^i/{\mathcal B}_p^i\simeq ({\mathcal Z}_p^i+{\mathcal B}^i)/{\mathcal B}^i\subseteq {\mathcal H}^i$ which have a nice interpretation when $i=2$ and $p=1,2$; see Section \ref{mcp}. \label{s24}
\section{The homomorphism $\pi\colon \mathcal{H}^2(T,N,A)\rightarrow H^{1,2}(T,N,A)$}
If $T$ is a finite group and $N$ is a finite $T$-group, then we have the following exact sequence [M1] $$ H^2(N,k^\bullet)\stackrel{{\delta_T}}{\to}\operatorname{Opext}(kT,k^N)\stackrel{\pi}{\to} H^1(T,H^2(N,k^\bullet)). $$ Here we define a version of the homomorphism $\pi$ for arbitrary smash products of cocommutative Hopf algebras.
We start by introducing the Hopf algebra analogue of $H^i(T,H^j(N,k^\bullet))$. For positive $i,j$ and an abelian matched pair of Hopf algebras $(T,N)$, with the action of $N$ on $T$ trivial, we define \begin{eqnarray*}
Z^{i,j}(T,N,A)&=&\{\alpha\in \operatorname{Reg}_+(T^{i}\otimes N^{j},A)|{\delta_N}\alpha=\varepsilon,\; \mbox{and}\\ &&\exists \beta\in \operatorname{Reg}_+(T^{i+1}\otimes N^{j-1},A):\;{\delta_T}\alpha={\delta_N}\beta\},\\
B^{i,j}(T,N,A)&=&\{\alpha\in \operatorname{Reg}_+(T^{i}\otimes N^{j},A)|\exists \gamma\in \operatorname{Reg}_+(T^{i}\otimes N^{j-1},A),\\ &&\exists \gamma'\in \operatorname{Reg}_+(T^{i-1}\otimes N^{j},A):\; \alpha={\delta_N} \gamma*{\delta_T}\gamma'\}.\\ H^{i,j}(T,N,A)&=&Z^{i,j}(T,N,A)/B^{i,j}(T,N,A). \end{eqnarray*}
\begin{Remark} If $j=1$, then $$H^{i,1}(T,N,A) \simeq \mathcal{H}^i_i(T,N,A)\simeq H_{meas}^i(T,\operatorname{Hom}(N,A)),$$ where $H^i_{meas}$ denotes the measuring cohomology [M2]. \end{Remark}
\begin{Proposition} If $T=kG$ is a group algebra, then there is an isomorphism $$H^i(G,H^j(N,A))\simeq H^{i,j}(kG,N,A).$$ \end{Proposition}
\begin{Remark} Here the \textbf{right} action of $G$ on $H^j(N,A)$ is given by precomposition. We can obtain symmetric results in case we start with a \textbf{right} action of $T=kG$ on $N$, hence a \textbf{left} action of $G$ on $H^j(N,A)$. \end{Remark}
\begin{proof}[Proof (of the proposition above)] By inspection we have \begin{eqnarray*} Z^i(G,H^j(N,A))&=&Z^{i,j}(kG,N,A)/\{\alpha\colon G\to B^j(N,A)\},\\ B^i(G,H^j(N,A))&=&B^{i,j}(kG,N,A)/\{\alpha\colon G\to B^j(N,A)\}. \end{eqnarray*} Here we identify regular maps from $(kG)^{i}\otimes N^{j}$ to $A$ with set maps from $G^{\times i}$ to $\operatorname{Reg}(N^{j},A)$ in the obvious way. \end{proof}
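Concretely, the identification used in the proof is the following: the grouplike elements $g_1\otimes\cdots\otimes g_i$ form a basis of $(kG)^{i}$, so a linear map $(kG)^{i}\otimes N^{j}\to A$ is the same as a family of linear maps $f_{(g_1,\ldots,g_i)}\colon N^{j}\to A$ indexed by $G^{\times i}$, and such a map is convolution invertible exactly when every member of the family is.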
The following is a straightforward generalization of Theorem 7.1 in [M2].
\begin{Theorem}\label{pi} The homomorphism $\pi\colon {\mathcal H}^2(T,N,A)\to H^{1,2}(T,N,A)$, induced by $(\alpha,\beta)\mapsto \alpha$, makes the following sequence $$ H^2(N,A)\oplus {\mathcal H}^2_2(T,N,A)\stackrel{{\delta_T}+\iota}{\longrightarrow}{\mathcal H}^2(T,N,A) \stackrel{\pi}{\to} H^{1,2}(T,N,A) $$ exact. \end{Theorem}
\begin{proof} It is clear that $\pi{\delta_T}=0$ and obviously also $\pi({\mathcal H}^2_2)=0$.
Suppose a cocycle pair $(\alpha,\beta)\in {\mathcal Z}^2(T,N,A)$ is such that $\alpha\in B^{1,2}(T,N,A)$. Then for some $\gamma\colon T\otimes N\to A$ and some $\gamma'\colon N\otimes N\to A$ we have $\alpha={\delta_N}\gamma*{\delta_T}\gamma'$, and hence $(\alpha,\beta)=({\delta_N}\gamma,\beta)*({\delta_T}\gamma',\varepsilon)\sim ({\delta_N}\gamma^{-1},{\delta_T}\gamma)*({\delta_N}\gamma,\beta)*({\delta_T}\gamma',\varepsilon)= (\varepsilon,{\delta_T}\gamma*\beta)*({\delta_T}\gamma',\varepsilon)\in {\mathcal Z}_2^2(T,N,A)*{\delta_T}(Z^2(N,A))$. \end{proof}
\section{Comparison of Singer pairs and matched pairs}
\subsection{Singer pairs vs. matched pairs}\label{s41}
In this section we sketch a correspondence from matched pairs to Singer pairs. For more details we refer to [Ma3]. \begin{Definition}
We say that an action $\mu\colon A\otimes M\to M$ is locally finite, if every orbit $A(m)=\{a(m)|a\in A\}$ is finite dimensional. \end{Definition}
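Two simple illustrations: every action of a finite dimensional algebra $A$ is locally finite, since $A(m)$ is the image of $A\otimes km$ under $\mu$ and so has dimension at most $\dim A$; the regular action of $k[x]$ on itself is not locally finite, since the orbit of $1$ is all of $k[x]$.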
\begin{Lemma}[{[Mo1]}, Lemma 1.6.4]\label{Mo164} Let $A$ be an algebra and $C$ a coalgebra. \begin{enumerate} \item If $M$ is a right $C$-comodule via $\rho\colon M\to M\otimes C$, $\rho(m)= m_0\otimes m_1$, then $M$ is a left $C^*$-module via $\mu\colon C^*\otimes M\to M$, $\mu(f\otimes m)=f(m_1)m_0$.
\item Let $M$ be a left $A$-module via $\mu\colon A\otimes M\to M.$ Then there is (a unique) comodule structure $\rho\colon M\to M\otimes A^\circ$, such that $\mu(a\otimes m)=(1\otimes\operatorname{ev}_a)\rho(m)$ for all $a\in A$ (where $\operatorname{ev}_a(f)=f(a)$), if and only if the action $\mu$ is locally finite. The coaction is then given by $\rho(m)=\sum m_i\otimes f_i$, where $\{m_i\}$ is a basis for $A(m)$ and $f_i\in A^{\circ}\subseteq A^*$ are coordinate functions of $a(m)$, i.e. $a(m)=\sum f_i(a)m_i$. \end{enumerate} \end{Lemma}
Let $(T,N,\mu,\nu)$ be an abelian matched pair and suppose $\mu\colon T\otimes N\to N$ is locally finite. Then the Lemma above gives a coaction $\rho\colon N\to N\otimes T^{\circ}$, $\rho(n)=n_N\otimes n_{T^\circ}$, such that $t(n)=\sum n_N\cdot n_{T^\circ}(t)$.
There is a left action $\nu'\colon N\otimes T^*\to T^*$ given by pre-composition, i.e. $\nu'(n\otimes f)(t)=f(t^n)$. If $\mu$ is locally finite, it is easy to see that $\nu'$ restricts to $T^\circ\subseteq T^*$.
\begin{Lemma}[{[Ma3]}, Lemma 4.1]
If $(T,N,\mu,\nu)$ is an abelian matched pair with $\mu$ locally finite then the quadruple $(N,T^\circ,\nu',\rho)$ forms an abelian Singer pair.
\end{Lemma}
\begin{Remark} There is also a correspondence in the opposite direction [M3].\end{Remark}
\subsection{Comparison of Singer and matched pair cohomologies}
Let\break $(T,N,\mu,\nu)$ be an abelian matched pair of Hopf algebras, with $\mu$ locally finite and $(N,T^\circ,\nu',\rho)$ the Singer pair associated to it as above.
The embedding $\operatorname{Hom}(N^{i},(T^\circ)^{j})\subseteq \operatorname{Hom}(N^{i},(T^{j})^*)\simeq\operatorname{Hom}(T^{j}\otimes N^{i},k)$ induced by the inclusion ${T^\circ}^{j}=(T^{j})^\circ \subseteq (T^{j})^*$ restricts to the embedding $\operatorname{Reg}_+(N^{i},(T^\circ)^{j})\subseteq \operatorname{Reg}_+(T^{j}\otimes N^{i},k)$. A routine calculation shows that it preserves the differentials, i.e. that it gives an embedding of double complexes, which is an isomorphism in case $T$ is finite dimensional.
There is no apparent reason for the embedding of complexes to induce an isomorphism of cohomology groups in general. It is our conjecture that this is not always the case.
In some cases we can compare the multiplication part of $H^2(N,T^\circ)$ (see the following section) and ${\mathcal H}^2_2(N,T,k)$. We use the following lemma for this purpose.
\begin{Lemma}\label{compare} Let $(T,N,\mu,\nu)$ be an abelian matched pair with the action $\mu$ locally finite. If $f\colon T\otimes N^{i}\to k$ is a convolution invertible map, such that ${\delta_T} f=\varepsilon$, then for each ${\bf n}\in N^{i}$, the map $f_{\bf n}=f(\_,{\bf n})\colon T\to k$ lies in the finite dual $T^\circ\subseteq T^*$. \end{Lemma}
\begin{proof} It suffices to show that the orbit of $f_{\bf n}$ under the action of $T$ (given by $s(f_{\bf n})(t)=f_{\bf n}(ts)$) is finite dimensional (see [DNR], [Mo1] or [Sw2] for the description of finite duals). Using the fact that ${\delta_T} f=\varepsilon$ we get $s(f_{\bf n})(t)= f_{\bf n}(ts)= \sum f_{{\bf n}_1}(s_1)f_{\mu_i(s_2\otimes {\bf n}_2)}(t)$.
Let $\Delta({\bf n})=\sum_j {\bf n'}_j\otimes {\bf n''}_j$. The action $\mu_i\colon T\otimes N^{i}\to N^{i}$ is locally finite, since $\mu\colon T\otimes N\to N$ is, and hence we can choose a finite basis
$\{{\bf m}_p\}$ for $\operatorname{Span}\{\mu_i(s\otimes {\bf n''}_j)|s\in T\}$. Now note that $\{f_{{\bf m}_p}\}$ is a finite set which spans $T(f_{\bf n})$. \end{proof}
\begin{Corollary}\label{cor1} If $(T,N,\mu,\nu)$ is an abelian matched pair, with $\mu$ locally finite and $(N,T^\circ,\omega,\rho)$ is the corresponding Singer pair, then ${\mathcal H}^1(T,N,k)=H^1(N,T^\circ)$. \end{Corollary}
\subsection{The multiplication and comultiplication parts of the second cohomology group of a Singer pair}\label{mcp}
Here we discuss in more detail the Hopf algebra extensions that have an \lq\lq unperturbed" multiplication and those that have an \lq\lq unperturbed" comultiplication, more precisely we look at two subgroups $\H_m^2(B,A)$ and $\H_c^2(B,A)$ of $H^2(B,A)\simeq \operatorname{Opext}(B,A)$, one generated by the cocycles with a trivial multiplication part and the other generated by the cocycles with a trivial comultiplication part [M1]. Let $$
\Z_c^2(B,A)=\{\beta\in \Reg_+(B,A\otimes A)|(\eta\varepsilon,\beta)\in Z^2(B,A)\}. $$ We shall identify $\Z_c^2(B,A)$ with a subgroup of $Z^2(B,A)$ via the injection $\beta\mapsto(\eta\varepsilon,\beta).$ Similarly let $$
\Z_m^2(B,A)=\{\alpha\in \Reg_+(B\otimes B,A)|(\alpha,\eta\varepsilon)\in Z^2(B,A)\}. $$ If $$ \B_c^2(B,A)=B^2(B,A)\cap \Z_c^2(B,A)\;\mbox{and}\; \B_m^2(B,A)=B^2(B,A)\cap \Z_m^2(B,A) $$ then we define $$ \H_c^2(B,A)=\Z_c^2(B,A)/\B_c^2(B,A)\; \mbox{and}\; \H_m^2(B,A)=\Z_m^2(B,A)/\B_m^2(B,A). $$ The identification of $\H_c^2(B,A)$ with a subgroup of $H^2(B,A)$ is given by $$ \H_c^2(B,A)\stackrel{\sim}{\to} (\Z_c^2(B,A)+B^2(B,A))/B^2(B,A) \le H^2(B,A),$$ and similarly for $\H_m^2\le H^2$.
Note that in case $T$ is finite dimensional $\H_c^2(N,T^*)\simeq {\mathcal H}^2_2(T,N,k)$ and \break $\H_m^2(N,T^*)\simeq {\mathcal H}^2_1(T,N,k)$ with $\mathcal{H}_p^i(T,N,k)$ as defined in Section \ref{s24}.
\begin{Proposition}\label{cor2} Let $(T,N,\mu,\nu)$ be an abelian matched pair, with $\mu$ locally finite and let $(N,T^\circ,\omega,\rho)$ be the corresponding Singer pair. Then $$\H_m^2(N,T^\circ)\simeq {\mathcal H}^2_1(T,N,k).$$ \end{Proposition}
\begin{proof} Observe that we have an inclusion
$\Z_m^2(N,T^\circ)= \{\alpha\colon N\otimes N\to T^\circ|\partial\alpha=\varepsilon,
\partial'\alpha=\varepsilon\}\subseteq \{\alpha\colon T\otimes N\otimes N\to k|{\delta_T}\alpha=\varepsilon, {\delta_N}\alpha=\varepsilon\}={\mathcal Z}^2_1(T,N,k)$. The inclusion is in fact an equality by Lemma \ref{compare}. Similarly the inclusion $\B_m^2(N,T^\circ)\subseteq {\mathcal B}^2_1(T,N,k)$ is an equality as well. \end{proof}
\section{The generalized Kac sequence}
\subsection{The Kac sequence of an abelian matched pair}
We now start by sketching a conceptual way to obtain a generalized version of the Kac sequence for an arbitrary abelian matched pair of Hopf algebras relating the cohomology of the matched pair to Sweedler cohomology. Since it is difficult to describe the homomorphisms involved in this manner, we then proceed in the next section to give an explicit description of the low degree part of this sequence.
\begin{Theorem} Let $H=N\bowtie T$, where $(T,N,\mu ,\nu )$ is an abelian matched pair of Hopf algebras, and let $A$ be a commutative left $H$-module algebra. Then there is a long exact sequence of abelian groups $$\begin{array}{l} 0\to H^1(H,A)\to H^1(T,A)\oplus H^1(N,A)\to \mathcal H^1(T,N,A)\to H^2(H,A) \\ \to H^2(T,A)\oplus H^2(N,A)\to \mathcal H^2(T,N,A)\to H^3(H,A)\to ... \end{array}$$ Moreover, if $T$ is finite dimensional then $(N,T^*)$ is an abelian Singer pair, $H^*(T,k)\cong H^*(k,T^*)$ and $\mathcal H^*(T,N,k)\cong H^*(N,T^*)$. \end{Theorem}
\begin{proof} The short exact sequence of double cochain complexes $$0\to\mathbf B_0\to \mathbf B\to \mathbf B_1\to 0,$$ where $\mathbf B_1$ is the edge double cochain complex of $\mathbf B= {_H\operatorname{Reg}}(\mathbf{X}_T\mathbf{X}_N(k),A)$ as in Section \ref{s23}, induces a long exact sequence in cohomology $$\begin{array}{l} 0\to H^1(\operatorname{Tot} (\mathbf B))\to H^1(\operatorname{Tot} (\mathbf B_1))\to H^2(\operatorname{Tot} (\mathbf B_0))\to H^2(\operatorname{Tot} (\mathbf B)) \\ \to H^2(\operatorname{Tot} (\mathbf B_1))\to H^3(\operatorname{Tot} (\mathbf B_0))\to H^3(\operatorname{Tot} (\mathbf B))\to H^3(\operatorname{Tot} (\mathbf B_1))\to ... \end{array}$$ where $H^0(\operatorname{Tot} (\mathbf B_0))=0=H^1(\operatorname{Tot} (\mathbf B_0))$ and $H^0(\operatorname{Tot} (\mathbf B))=H^0(\operatorname{Tot} (\mathbf B_1))$ have already been taken into account. By Definition \ref{d25} $H^{*+1}(\operatorname{Tot} (\mathbf B_0))=\mathcal H^*(T,N,A)$ is the cohomology of the matched pair $(T,N,\mu ,\nu )$ with coefficients in $A$. Moreover, $H^*(\operatorname{Tot} (\mathbf B_1))\cong H^*(T,A)\oplus H^*(N,A)$ is a direct sum of Sweedler cohomologies.
From the cosimplicial version of the Eilenberg-Zilber theorem (see Appendix) it follows that $H^*(\operatorname{Tot} (\mathbf B))\cong H^*(\operatorname{Diag} (\mathbf B))$. On the other hand, Barr's theorem [Ba, Th. 3.4] together with Corollary \ref{c31} says that $\operatorname{Diag} \mathbf X_T(\mathbf X_N(k))\simeq \mathbf X_H(k)$, and gives an equivalence $${_H\operatorname{Reg}}(\operatorname{Diag} \mathbf X_T(\mathbf X_N(k)),A)\simeq \operatorname{Diag} ({_H\operatorname{Reg}}(\mathbf X_T(\mathbf X_N(k)),A))=\operatorname{Diag} (\mathbf B).$$ Thus, we get $$H^*(H,A)=H^*({_H\operatorname{Reg}}(\mathbf X_H(k),A))\cong H^*(\operatorname{Diag} (\mathbf B))\cong H^*(\operatorname{Tot} (\mathbf B)),$$ and the proof is complete. \end{proof}
\subsection{Explicit description of the low degree part}
The aim of this section is to define explicit homomorphisms that make the following sequence exact: \begin{eqnarray*} 0&\to& H^1(H,A)\stackrel{\operatorname{res}_2}{\longrightarrow}H^1(T,A)\oplus H^1(N,A) \stackrel{{\delta_N}*{\delta_T}}{\longrightarrow}{\mathcal H}^1(T,N,A) \stackrel{\phi}{\to}H^2(H,A)\\[0pt] &\stackrel{\operatorname{res}_2}{\longrightarrow}& H^2(T,A)\oplus H^2(N,A)\stackrel{{\delta_N}*{\delta_T}^{-1}}{\longrightarrow}{\mathcal H}^2(T,N,A)\stackrel{\psi}{\to}H^3(H,A). \end{eqnarray*} This is the low degree part of the generalized Kac sequence. Here $H=N\bowtie T$ is the bismash product Hopf algebra arising from a matched pair $\mu\colon T\otimes N\to N$, $\nu\colon T\otimes N\to T$. Recall that we abbreviate $\mu(t,n)=t(n)$, $\nu(t,n)=t^n$. We shall also assume that $A$ is a trivial $H$-module.
We define $\operatorname{res}_2=\operatorname{res}_2^i\colon H^i(H,A)\to H^i(T,A)\oplus H^i(N,A)$ to be the map $(\operatorname{res}_T,\operatorname{res}_N)\Delta$, more precisely if
$f\colon H^{i}\to A$ is a cocycle, then it gets sent to a pair of cocycles $(f_{T},f_{N})$, where $f_T=f|_{T^{i}}$ and
$f_N=f|_{N^{i}}$.
By ${\delta_N}*{\delta_T}^{(-1)^{i+1}}$, we denote the composite $$\begin{array}{l} H^i(T,A)\oplus H^i(N,A)\stackrel{{\delta_N}\oplus{\delta_T}^{\pm 1}}{\longrightarrow} \mathcal{H}^i_i(T,N,A)\oplus \mathcal{H}^i_1(T,N,A)
\\
\stackrel{\iota\oplus\iota}{\longrightarrow} {\mathcal H}^i(T,N,A)\oplus {\mathcal H}^i(T,N,A) \stackrel{*}{\to} {\mathcal H}^i(T,N,A). \end{array}$$ When $i=1$, the map just defined sends a pair of cocycles $a\in Z^1(T,A)$, $b\in Z^1(N,A)$ to a map ${\delta_N} a*{\delta_T} b\colon T\otimes N\to A$ and if $i=2$ a pair of cocycles $a\in Z^2(T,A)$, $b\in Z^2(N,A)$ becomes a cocycle pair $({\delta_N} a,\varepsilon)*(\varepsilon,{\delta_T} b^{-1})=({\delta_N} a,{\delta_T} b^{-1}) \colon (T\otimes T\otimes N)\oplus (T\otimes N\otimes N)\to A$. Here ${\delta_N}$ and ${\delta_T}$ are the differentials for computing the cohomology of a matched pair described in Section \ref{s24}.
The map $\phi\colon {\mathcal H}^1(T,N,A)\to H^2(H,A)$ assigns to a cocycle $\gamma\colon T\otimes N\to A$, a map $\phi(\gamma)\colon H\otimes H\to A$, which is characterized by $\phi(\gamma)(nt,n't')=\gamma(t,n').$
The homomorphism $\psi\colon {\mathcal H}^2(T,N,A)\to H^3(H,A)$ is induced by a map that sends a cocycle pair $(\alpha,\beta)\in {\mathcal Z}^2(T,N,A)$ to the cocycle $f=\psi(\alpha,\beta)\colon H\otimes H\otimes H\to A$ given by $$ f(nt,n't',n''t'')=\varepsilon(n)\varepsilon(t'')\alpha(t^{n'},t',n'')\beta(t,n',t'(n'')). $$
A direct, but lengthy computation shows that the maps just defined induce homomorphisms that make the sequence above exact [M3]. The most important tool in computations is the following lemma about the structure of the second cohomology group ${\mathcal H}^2(H,A)$ [M3].
\begin{Lemma}\label{mainlemma} Let $f\colon H\otimes H\to A$ be a cocycle. Define maps $g_f\colon H\to A$, $h\colon H\otimes H\to A$ and $f_c\colon T\otimes N\to A$ by $g_f(nt)=f(n\otimes t)$, $h=f*\delta g_f$ and $f_c(t\otimes n)=f(t\otimes n)f^{-1}(t(n)\otimes t^n)$. Then \begin{enumerate} \item $ h(nt,n't')=f_T(t^{n'},t')f_N(n,t(n'))f_c(t,n') $
\item $ h_T=f_T,\; h_N=f_N,\; h|_{N\otimes T}=\varepsilon,\; h|_{T\otimes N}=h_c=f_c,\; g_h=\varepsilon $
\item the maps $f_T$ and $f_N$ are cocycles and ${\delta_N} f_T={\delta_T} f_c^{-1}$, ${\delta_T} f_N= {\delta_N} f_c^{-1}$
\item If $a\colon T\otimes T\to A$, $b\colon N\otimes N\to A$ are cocycles and $\gamma\colon T\otimes N\to A$ is a convolution invertible map, such that ${\delta_N} a={\delta_T}\gamma$ and ${\delta_T} b={\delta_N}\gamma$, then the map $f=f_{a,b,\gamma}\colon H\otimes H\to A$, defined by $$ f(nt,n't')=a(t^{n'},t')b(n,t(n'))\gamma^{-1}(t,n') $$
is a cocycle and $f_T=a$, $f_N=b$, $f_c=f|_{T\otimes N}=\gamma^{-1}$
and $f|_{N\otimes T}=\varepsilon$.
\end{enumerate} \end{Lemma}
\subsection{The locally finite case}
Suppose that the action $\mu\colon T\otimes N\to N$ is locally finite and let $(N,T^\circ,\omega,\rho)$ be the Singer pair corresponding to the matched pair $(T,N,\mu,\nu)$ as in Section \ref{s41}.
By Corollary \ref{cor1} we have ${\mathcal H}^1(T,N,k)=H^1(N,T^\circ)$.
From the explicit description of the generalized Kac sequence, we see that $({\delta_N}*{\delta_T}^{-1})|_{H^2(T,A)}={\delta_N}\colon H^2(T,A)\to {\mathcal H}^2_2(T,N,A)$ and similarly that
$({\delta_N}*{\delta_T}^{-1})|_{H^2(N,A)}={\delta_T}^{-1}\colon H^2(N,A)\to {\mathcal H}^2_1(T,N,A)$. By Proposition \ref{cor2} we have the equality ${\mathcal H}_1^2(T,N,k)=\H_m^2(N,T^\circ)$. Recall that $\H_m^2(N,T^\circ)\subseteq H^2(N,T^\circ)\simeq \operatorname{Opext}(N,T^\circ)$.
If the action $\nu$ is locally finite as well, then there is also a (right) Singer pair $(T,N^\circ,\omega',\rho')$. By \lq{right\rq} we mean that we have a right action $\omega'\colon N^\circ\otimes T\to N^\circ$ and a right coaction $\rho'\colon T\to N^\circ\otimes T$. In this case we get that ${\mathcal H}^2_2(T,N,k)\simeq {{ \H_m^2}}'(T,N^\circ)\subseteq \operatorname{Opext}'(T,N^\circ)$. The prime refers to the fact that we have a right Singer pair.
Define ${H^2_{mc}}=\H_m^2\cap \H_c^2$ and ${H^2_{mc}}'= {H^2_{m}}'\cap {H^2_{c}}'$ and note ${H^2_{mc}}(N,T^\circ)\simeq{\mathcal H}^2_2(T,N,k)\cap {\mathcal H}^2_1(T,N,k)\simeq {H^2_{mc}}'(T,N^\circ)$. Hence \begin{eqnarray*} \operatorname{im}({\delta_N}*{\delta_T}^{-1})&\subseteq& {\mathcal H}^2_1(T,N,k)+{\mathcal H}^2_2(T,N,k) \simeq \frac{{\mathcal H}^2_1(T,N,k)\oplus {\mathcal H}^2_2(T,N,k)}{{\mathcal H}^2_1(T,N,k)\cap {\mathcal H}^2_2(T,N,k)}\\ &=&\frac{\H_m^2(N,T^\circ)\oplus {\H_m^2}'(T,N^\circ)}{\langle {H^2_{mc}}(N,T^\circ)\equiv{H^2_{mc}}'(T,N^\circ)\rangle}. \end{eqnarray*} In other words, $\operatorname{im}({\delta_N}*{\delta_T}^{-1})$ is contained in a subgroup of ${\mathcal H}^2(T,N,k)$ that is isomorphic to the pushout $$\begin{CD} {{H^2_{mc}}'(T,N^\circ)\simeq {H^2_{mc}}(N,T^\circ)} @> >> {{H^2_{m}}(N,T^\circ)} \\ @V VV @V VV \\ {{H^2_{m}}'(T,N^\circ)} @> >> X. \end{CD}$$ Hence if both actions $\mu$ and $\nu$ of the abelian matched pair $(T,N,\mu,\nu)$ are locally finite then we get the following version of the low degree part of the Kac sequence: \begin{eqnarray*} 0&\to& H^1(H,k)\stackrel{\operatorname{res}_2}{\longrightarrow}H^1(T,k)\oplus H^1(N,k) \stackrel{{\delta_N}*{\delta_T}}{\longrightarrow}{H}^1(N,T^\circ) \stackrel{\phi}{\to}H^2(H,k)\\ &\stackrel{\operatorname{res}_2}{\longrightarrow}& H^2(T,k)\oplus
H^2(N,k)\stackrel{{\delta_N}*{\delta_T}^{-1}}{\longrightarrow}X\stackrel{\psi|_X}{\longrightarrow}H^3(H,k). \end{eqnarray*}
\subsection{The Kac sequence of an abelian Singer pair}
Here is a generalization of the Kac sequence relating Sweedler and Doi cohomology to Singer cohomology.
\begin{Theorem} For any abelian Singer pair $(B,A,\mu,\rho)$ there is a long exact sequence $$\begin{array}{l} 0\to H^1(\operatorname{Tot} Z)\to H^1(B,k)\oplus H^1(k,A)\to H^1(B,A)\to H^2(\operatorname{Tot} Z)\\ \to H^2(B,k)\oplus H^2(k,A)\to H^2(B,A)\to H^3(\operatorname{Tot} Z)\to\ldots, \end{array}$$ where $Z$ is the double complex from Definition \ref{d12}. Moreover, we always have $H^1(B,A)\cong \operatorname{Aut}(A\# B)$, $H^2(B,A)\cong\operatorname{Opext} (B,A)$ and $H^*(\operatorname{Tot} Z)\cong H^*(\operatorname{Diag} Z)$. If $A$ is finite dimensional then $H^*(\operatorname{Tot} Z)=H^*(A^*\bowtie B,k)$. \end{Theorem}
\begin{proof} The short exact sequence of double cochain complexes $$0\to Z_0\to Z\to Z_1\to 0,$$ where $Z_1$ is the edge subcomplex of $Z= {_B\operatorname{Reg}^A}(\mathbf{X}_B(k),\mathbf{Y}_A(k))$, induces a long exact sequence $$\begin{array}{l} 0\to H^1(\operatorname{Tot} Z)\to H^1(\operatorname{Tot} Z_1)\to H^2(\operatorname{Tot} Z_0)\to H^2(\operatorname{Tot} Z)\\ \to H^2(\operatorname{Tot} Z_1)\to H^3(\operatorname{Tot} Z_0)\to H^3(\operatorname{Tot} Z)\to H^3(\operatorname{Tot} Z_1)\to\ldots \end{array}$$ where $H^0(\operatorname{Tot} Z_0)=0=H^1(\operatorname{Tot} Z_0)$ and $H^0(\operatorname{Tot} Z)=H^0(\operatorname{Tot} Z_1)$ have already been taken into account. By definition $H^*(\operatorname{Tot} Z_0)=H^*(B,A)$ is the cohomology of the abelian Singer pair $(B,A,\mu ,\rho )$, and by [Ho] we have $H^1(B,A)\cong\operatorname{Aut} (A\# B)$ and $H^2(B,A)\cong\operatorname{Opext} (B,A)$. Moreover, we clearly have $H^*(\operatorname{Tot} Z_1)\cong H^*(B,k)\oplus H^*(k,A)$, where the summands are Sweedler and Doi cohomologies. By the cosimplicial Eilenberg-Zilber theorem (see appendix) there is a natural isomorphism $H^*(\operatorname{Tot} (\mathbf Z))\cong H^*(\operatorname{Diag} (\mathbf Z))$. Finally, if $A$ is finite dimensional then $\mathbf Z={_B\operatorname{Reg}^A}(\mathbf X(k),\mathbf Y(k))\cong {_{A^*\bowtie B}\operatorname{Reg}} (\mathbf{B}(k),k)$, where $\mathbf B(k)=\mathbf X_{A^*}(\mathbf X_B(k))$. \end{proof}
\section{On the matched pair cohomology of pointed cocommutative Hopf algebras over fields of zero characteristic} In this section we describe a method which gives information about the second cohomology group ${\mathcal H}^2(T,N,A)$ of an abelian matched pair.
\subsection{The method}
Let $(T,N)$ be an abelian matched pair of pointed Hopf algebras, and $A$ a trivial $N\bowtie T$-module algebra.
\begin{enumerate} \item Since $\operatorname{char} k=0$ and $T$ and $N$ are pointed we have $T\simeq UP(T) \rtimes kG(T)$ and $N\simeq UP(N)\rtimes kG(N)$ and $N\bowtie T\simeq U(P(T)\bowtie P(N)) \rtimes k(G(T)\bowtie G(N))$ [Gr1,2]. If $H$ is a Hopf algebra then $G(H)$ denotes the group of points and $P(H)$ denotes the Lie algebra of primitives.
\item We can use the generalized Tahara sequence [M2] (see introduction) to compute $H^2(T)$, $H^2(N)$, $H^2(N\bowtie T)$. In particular if $G(T)$ is finite then the cohomology group ${H_{meas}^2}(kG(T),\operatorname{Hom}(UP(T),A))= H^{2,1}(kG(T),UP(T),A)= {\mathcal H}^2_2(kG(T),UP(T),A)$ is trivial and there is a direct sum decomposition $H^2(T)=H^2(P(T))^{G(T)}\oplus H^2(G(T))$; we get a similar decomposition for $H^2(N)$ if $G(N)$ is finite and for $H^2(N\bowtie T)$ in the case $G(T)$ and $G(N)$ are both finite.
\item Since the Lie algebra cohomology groups $H^i({\bf g})$ admit a vector space structure, the cohomology groups $H^{1,2}(G,{\bf g},A)\simeq H^1(G,H^2({\bf g},A))$ are trivial if $G$ is finite (any additive group of a vector space over a field of zero characteristic is uniquely divisible).
\item The exactness of the sequence from Theorem \ref{pi} implies that the maps ${\delta_T}\colon H^2(G(\_))\to {\mathcal H}^2(kG(\_),UP(\_),A)$ are surjective if $G(\_)$ is finite, hence by the generalized Kac sequence the kernels of the maps $\operatorname{res}_2^3\colon H^3(\_)\to H^3(P(\_))\oplus H^3(G(\_))$ are trivial. This then gives information about the kernel of the map $\operatorname{res}_2^3\colon H^3(N\bowtie T)\to H^3(T)\oplus H^3(N)$.
\item Now use the exactness of the generalized Kac sequence \begin{eqnarray*} H^2(N\bowtie T)&\stackrel{\operatorname{res}_2^2}{\longrightarrow}&H^2(T)\oplus H^2(N)\stackrel{{\delta_T}+{\delta_N}^{-1}}{\longrightarrow} {\mathcal H}^2(T,N,A)\\ &\to& H^3(N\bowtie T)\stackrel{\operatorname{res}_2^3}{\longrightarrow}H^3(T)\oplus H^3(N) \end{eqnarray*} to get information about ${\mathcal H}^2(T,N,A)$. \end{enumerate}
\subsection{Examples}
Here we describe how the above procedure works on concrete examples.
In the first three examples we restrict ourselves to a case in which one of the Hopf algebras involved is a group algebra.
Let $T=UP(T)\rtimes kG(T)$ and $N=kG(N)$ and suppose that the matched pair of $T$ and $N$ arises from actions $G(T)\times G(N)\to G(N)$ and $(G(N)\rtimes G(T))\times P(T)\to P(T)$. If the groups $G(T)$ and $G(N)$ are finite and their orders are relatively prime, then the generalized Kac sequence shows that there is an injective homomorphism $$\Phi\colon \frac{H^2(P(T))^{G(T)}}{H^2(P(T))^{G(N)\rtimes G(T)}}\oplus \frac{H^2(G(N))}{H^2(G(N))^{G(T)}}\to {\mathcal H}^2(T,N,A).$$ Theorem \ref{pi} guarantees that the map $H^3(N\bowtie T)=H^3(U(P(T))\rtimes k(G(N)\rtimes G(T)))\to H^3(P(T))\oplus H^3(G(N)\rtimes G(T))$ is injective. Since the orders of $G(T)$ and $G(N)$ are assumed to be relatively prime the map $H^3(G(N)\rtimes G(T))\to H^3(G(N))\oplus H^3(G(T))$ is also injective. Hence the map $$\operatorname{res}_2^3\colon H^3(N\bowtie T)\to H^3(N)\oplus H^3(T)$$ must be injective as well, since the composite $H^3(N\bowtie T)\to H^3(N)\oplus H^3(T)\to H^3(G(N))\oplus H^3(P(T))\oplus H^3(G(T))$ is injective. Hence by the exactness of the generalized Kac sequence $\Phi$ is an isomorphism.
\begin{Example}Let ${\bf g}=k\times k$ be the abelian Lie algebra of dimension 2 and let $G=C_2=\langle a \rangle$ be the cyclic group of order two. Furthermore assume that $G$ acts on ${\bf g}$ by switching the factors, i.e. $a(x,y)=(y,x)$. Recall that $U{\bf g}=k[x,y]$ and that $H^i_{Sweedler}(U{\bf g},A)=H^i_{Hochschild}(U{\bf g},A)$ for $i\ge 2$ and that $H^i_{Hochschild}(k[x,y],k)=k^{\oplus {2\choose i}}$. A computation shows that $G$ acts on $k\simeq H^2(k[x,y],k)$ by $a(t)=-t$ and hence $H^2(k[x,y],k)^G=0$. Thus the homomorphism $\pi$ (Theorem \ref{pi}) is the zero map and the homomorphism $k\simeq H^2(k[x,y],k)\stackrel{{\delta_T}}{\to}{\mathcal H}^2(kC_2,k[x,y],k)$ is an isomorphism.\end{Example}
\begin{Example}[symmetries of a triangle] Here we describe an example arising from the action of the dihedral group $D_3$ on the abelian Lie algebra of dimension $3$ (basis consists of vertices of a triangle). More precisely let ${\bf g}=k\times k\times k$, $G=C_2=\langle a\rangle$, $H=C_3=\langle b \rangle$, the actions $G\times {\bf g}\to {\bf g}$, $H\times {\bf g}\to {\bf g}$ and $H\times G\to H$ are given by $a(x,y,z)=(z,y,x)$, $b(x,y,z)=(z,x,y)$ and $b^a=b^{-1}$ respectively. A routine computation reveals the following \begin{itemize} \item $C_2$ acts on $k\times k\times k\simeq H^2(k[x,y,z],k)$ by $a(u,v,w)=(-w,-v,-u)$, hence the $G$ stable part is $$H^2(k[x,y,z],k)^G=\{(u,0,-u)\}\simeq k.$$
\item $H=C_3$ acts on $k\times k\times k$ by $b(u,v,w)=(w,u,v)$ and the $H$ stable part is $H^2(k[x,y,z],k)^H= \{(u,u,u)\}\simeq k$.
\item The $D_3=C_3\rtimes C_2$ stable part $H^2(k[x,y,z],k)^{D_3}$ is trivial.
\end{itemize}
Thus we have an isomorphism $k\times k^\bullet/(k^\bullet)^3\simeq {\mathcal H}^2(k[x,y,z]\rtimes kC_2,kC_3,k)$.\end{Example}
\begin{Remark}The above also shows that there is an isomorphism $$k\times k\times k\simeq \mathcal{H}^2(k[x,y,z],kD_3,k).$$\end{Remark}
\begin{Example} Let ${\bf g}=sl_n$, $G=C_2=\langle a\rangle$, $H=C_n=\langle b \rangle$, where $a$ is a matrix that has $1$'s on the skew diagonal and zeroes elsewhere and $b$ is the standard permutation matrix of order $n$. Let $H$ and $G$ act on $sl_n$ by conjugation in ${\mathcal M}_n$ and let $G$ act on $H$ by conjugation inside $GL_n$. Furthermore assume that $A$ is a finite dimensional trivial $U{\bf g}\rtimes k(H\rtimes G)$-module algebra. By Whitehead's second lemma $H^2({\bf g},A)=0$ and hence we get an isomorphism $\mathcal{U} A/(\mathcal{U} A)^n\simeq {\mathcal H}^2(Usl_n\rtimes kC_2,kC_n,A)$ if $n$ is odd. \end{Example}
\begin{Example} Let $H=U{\bf g}\rtimes kG$, where ${\bf g}$ is an abelian Lie algebra and $G$ is a finite abelian group and assume the action of $H$ on itself is given by conjugation, i.e. $h(k)=h_1 kS(h_2)$. In this case it is easy to see that $H^2(H,A)^H=H^2(H,A)$ for any trivial $H$-module algebra $A$ and hence the homomorphism in the generalized Kac sequence $\delta_{H,1}\oplus\delta_{H,2}\colon H^2(H,A)\oplus H^2(H,A)\to {\mathcal H}^2(H,H,A)$ is trivial. Hence ${\mathcal H}^2(H,H,A)\simeq \ker(H^3(H\rtimes H,A)\to H^3(H,A)\oplus H^3(H,A)).$\end{Example}
\begin{appendix} \section{Simplicial homological algebra}
This is a collection of notions and results from simplicial homological algebra used in the main text. The emphasis is on the cohomology of cosimplicial objects, but the considerations are similar to those in the simplicial case [We].
\subsection{Simplicial and cosimplicial objects}
Let $\mathbf\Delta$ denote the simplicial category [Mc]. If $\mathcal A$ is a category then the functor category $\mathcal A^{\mathbf\Delta^{op}}$ is the category of simplicial objects while $\mathcal A^{\mathbf\Delta}$ is the category of cosimplicial objects in $\mathcal A$. Thus a simplicial object in $\mathcal A$ is given by a sequence of objects $\{ X_n\}$ together with, for each $n\geq 0$, face maps $\partial_i\colon X_{n+1}\to X_n$ for $0\leq i\leq n+1$ and degeneracies $\sigma_j\colon X_n\to X_{n+1}$ for $0\leq j\leq n$ such that
$\partial_i\partial_j=\partial_{j-1}\partial_i$ for $i<j$,
$\sigma_i\sigma_j=\sigma_{j+1}\sigma_i$ for $i\leq j$,
$\partial_i\sigma_j=\begin{cases} \sigma_{j-1}\partial_i,& \mbox{ if } i<j;\\ 1,& \mbox{ if } i=j, j+1;\\ \sigma_j\partial_{i-1},& \mbox{ if } i>j+1. \end{cases}$
A cosimplicial object in $\mathcal A$ is a sequence of objects $\{ X^n\}$ together with, for each $n\geq 0$, coface maps $\partial^i\colon X^n\to X^{n+1}$ for $0\leq i\leq n+1$ and codegeneracies $\sigma^j\colon X^{n+1}\to X^n$ for $0\leq j\leq n$ such that
$\partial^j\partial^i=\partial^i\partial^{j-1}$ for $i<j$,
$\sigma^j\sigma^i=\sigma^i\sigma^{j+1}$ for $i\leq j$,
$\sigma^j\partial^i=\begin{cases} \partial^i\sigma^{j-1},& \mbox{ if } i<j;\\ 1,& \mbox{ if } i=j,j+1;\\ \partial^{i-1}\sigma^j,& \mbox{ if } i>j+1. \end{cases}$
Two cosimplicial maps $f,g\colon X\to Y$ are homotopic if for each $n\geq 0$ there is a family of maps $\{ h^i\colon X^{n+1}\to Y^n|0\leq i\leq n\}$ in $\mathcal A$ such that
$h^0\partial^0=f$, $h^n\partial^{n+1}=g$,
$h^j\partial^i= \begin{cases} \partial^ih^{j-1},& \mbox{ if } i<j;\\ h^{i-1}\partial^i, & \mbox{ if } i=j\ne 0;\\ \partial^{i-1}h^j, & \mbox{ if } i>j+1, \end{cases}$
$h^j\sigma^i= \begin{cases} \sigma^ih^{j+1}, & \mbox{ if } i\leq j;\\ \sigma^{i-1}h^j, & \mbox{ if } i>j. \end{cases}$ \\Clearly, homotopy of cosimplicial maps is an equivalence relation.
If ${X}$ is a cosimplicial object in an abelian category $\mathcal{A}$, then $C({X})$ denotes the associated cochain complex in $\mathcal{A}$, i.e. an object of the category of cochain complexes $\operatorname{Coch}(\mathcal{A})$.
\begin{Lemma} For a cosimplicial object $X$ in the abelian category $\mathcal{A}$ let $N^n(X)=\cap_{i=0}^{n-1}\ker\sigma^i$ and $D^n(X)=\sum_{j=0}^{n-1}\operatorname{im}\partial^j$. Then $C(X)\cong N(X)\oplus D(X)$. Moreover, ${X}/{D(X)}\cong N(X)$ is a cochain complex with differentials given by $\partial^n\colon {X^n}/{D^n}\to {X^{n+1}}/{D^{n+1}}$, and $\pi^*(X)= H^*(N^*(X))$ is the sequence of cohomotopy objects of $X$. \end{Lemma}
\begin{Theorem}[Cosimplicial Dold-Kan correspondence, {[We, 8.4.3]}] If $\mathcal A$ is an abelian category then \begin{enumerate}
\item $N\colon\mathcal{A}^{\mathbf\Delta}\to \operatorname{Coch} (\mathcal A)$ is an equivalence and $N(X)$ is a summand of $C(X)$;
\item $\pi^*(X)=H^*(N(X))\cong H^*(C(X))$.
\item If $\mathcal A$ has enough injectives, then $\pi^*=H^*N\colon \mathcal A^{\mathbf\Delta}\to \mathcal A$ and $H^*C\colon \mathcal A^{\mathbf\Delta}\to \mathcal A$ are the sequences of right derived functors of $\pi^0=H^0N\colon \mathcal A^{\mathbf\Delta}\to \mathcal A$ and $H^0C\colon \mathcal A^{\mathbf\Delta}\to \mathcal A$, respectively.
\end{enumerate} \end{Theorem}
\begin{proof} (1) If $y\in N^n(X)\cap D^n(X)$ then $y=\sum_{i=0}^{n-1}\partial^i(x_i)$, where each $x_i\in X^{n-1}$. Suppose that $y=\partial^0(x)$ and $y\in N^n(X)$, then $0=\sigma^0(y)=\sigma^0\partial^0(x)=x$ and hence $y=\partial^0(x)=0$. Now proceed by induction on the largest $j$ such that $\partial^j(x_j)\neq 0$. So let $y=\sum_{i=0}^j\partial^i(x_i)$ be such that $\partial^j(x_j)\neq 0$, i.e.\ $y\notin \sum_{i<j}\operatorname{im}\partial^i$, and $y\in N^n(X)$. Then $0=\sigma^j(y)=\sum_{i\leq j}\sigma^j\partial^i(x_i) =x_j+\sum_{i<j}\sigma^j\partial^i(x_i)=x_j+\sum_{i<j}\partial^i\sigma^{j-1}(x_i)$. This implies that $x_j=-\sum_{i<j}\partial^i\sigma^{j-1}(x_i)$ and hence $\partial^j(x_j)=-\sum_{i<j}\partial^j\partial^i\sigma^{j-1}(x_i) =-\sum_{i<j}\partial^i\partial^{j-1}\sigma^{j-1}(x_i)\in \sum_{i<j}\operatorname{im}\partial^i$, a contradiction. Thus, $N^n(X)\cap D^n(X)=0$.
Now let us show that $D^n(X)+N^n(X)=C^n(X)$. Let $y\in C^n(X)=X^n$. If $y\in N^n(X)=\cap_{i=0}^{n-1}\ker\sigma^i$ there is nothing to prove, so let $i$ be the largest index with $\sigma^i(y)\neq 0$. If $y'=y-\partial^i\sigma^i(y)$ then $y-y'\in D^n(X)$. For $i<j$ we get $\sigma^j(y')=\sigma^j(y)-\sigma^j\partial^i\sigma^i(y) =\sigma^j(y)-\partial^i\sigma^{j-1}\sigma^i(y) =\sigma^j(y)-\partial^i\sigma^i\sigma^j(y)=0$, since $\sigma^j(y)=0$ by the maximality of $i$. Moreover, $\sigma^i(y')=\sigma^i(y)-\sigma^i\partial^i\sigma^i(y)=\sigma^i(y)-\sigma^i(y)=0$, so that the largest index $j$ with $\sigma^{j}(y')\neq 0$ is at most $i-1$. By induction on this largest index, there is a $z\in D^n(X)$ such that $y-z\in N^n(X)$, and hence $y\in D^n(X)+N^n(X)$.
It now follows that $\cap_{i=0}^{n-1}\ker\sigma^i =N^n(X)\cong X^n/{D^n(X)} =X^n/{\sum_{i=0}^{n-1}\operatorname{im}\partial^i}$. The differential $\partial^n\colon N^n(X)\to N^{n+1}(X)$ is given by $\partial^n(x+D^n(X))=\partial^n(x)+D^{n+1}(X)$.
(2) By definition, see [We, 8.4.3].
(3) The functors $N\colon \mathcal A^{\mathbf\Delta}\to\operatorname{Coch}(\mathcal{A})$ and $C\colon\mathcal{A}^{\mathbf\Delta}\to \operatorname{Coch} (\mathcal{A})$ are exact, so $\pi^*=H^*N$ and $H^*C$ are cohomological $\delta$-functors. They vanish in positive degrees on injective cosimplicial objects, hence they are universal and are therefore the right derived functors of $\pi^0$ and $H^0C$. \end{proof}
The inverse equivalence $K\colon \operatorname{Coch} (\mathcal A)\to \mathcal A^{\mathbf\Delta}$ has a description, similar to that for the simplicial case [We, 8.4.4].
\subsection{Cosimplicial bicomplexes}
The category of cosimplicial bicomplexes in the abelian category $\mathcal A$ is the functor category $\mathcal A^{\mathbf\Delta\times\mathbf\Delta}=(\mathcal A^{\mathbf\Delta})^{\mathbf\Delta}$. In particular, in a cosimplicial bicomplex $X=\{ X^{p,q}\}$ in $\mathcal A$ \begin{enumerate} \item Horizontal and vertical cosimplicial identities are satisfied; \item Horizontal and vertical cosimplicial operators commute. \end{enumerate}
The associated (unnormalized) cochain bicomplex $C(X)$ with $C(X)^{p,q}=X^{p,q}$ has horizontal and vertical differentials $$d_h=\sum_{i=0}^{p+1}(-1)^i\partial_h^i\colon X^{p,q}\to X^{p+1,q}\quad ,\quad d_v=\sum_{j=0}^{q+1}(-1)^{p+j}\partial_v^j\colon X^{p,q}\to X^{p,q+1}$$ so that $d_hd_v+d_vd_h=0$. The normalized cochain bicomplex $N(X)$ is obtained from $X$ by taking the normalized cochain complex of each row and each column. It is a summand of $C(X)$. The cosimplicial Dold-Kan theorem then says that $H^{**}(C(X))\cong H^{**}(N(X))$ for every cosimplicial bicomplex.
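For illustration (this sign check is ours and not taken from [We]), write $\tilde d_v=\sum_{j=0}^{q+1}(-1)^{j}\partial_v^j$ for the vertical differential without the extra sign, so that $d_v=(-1)^p\tilde d_v$ on $X^{p,q}$. Since the horizontal and vertical cosimplicial operators commute, $d_h\tilde d_v=\tilde d_v d_h$, and hence on $X^{p,q}$ $$d_vd_h=(-1)^{p+1}\tilde d_vd_h=(-1)^{p+1}d_h\tilde d_v=-d_hd_v,$$ which is exactly what is needed for $\operatorname{Tot}(X)$, with total differential $d_h+d_v$, to be a cochain complex.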
The diagonal $\operatorname{diag}\colon \Delta\to \Delta\times\Delta$ induces the diagonalization functor $\operatorname{Diag} =\mathcal A^{\operatorname{diag}}\colon \mathcal A^{\Delta\times\Delta}\to\mathcal A^{\Delta}$, where $\operatorname{Diag}^p(X)=X^{p,p}$ with coface maps $\partial^i=\partial_h^i\partial_v^i\colon X^{p,p}\to X^{p+1,p+1}$ and codegeneracies $\sigma^j=\sigma_h^j\sigma_v^j\colon X^{p+1,p+1}\to X^{p,p}$ for $0\leq i\leq p+1$ and $0\leq j\leq p$, respectively.
\begin{Theorem}[The cosimplicial Eilenberg-Zilber Theorem.] Let $\mathcal A$ be an abelian category with enough injectives. There is a natural isomorphism $$\pi^*(\operatorname{Diag} X)=H^*(C\operatorname{Diag} (X))\cong H^*(\operatorname{Tot} (X)),$$ where $\operatorname{Tot}(X)$ denotes the total complex associated to the double cochain complex $CX$. Moreover, there is a convergent first quadrant cohomological spectral sequence $$E_1^{p,q}=\pi_v^q(X^{p,*})\quad ,\quad E_2^{p,q}=\pi_h^p\pi_v^q(X)\Rightarrow \pi^{p+q}(\operatorname{Diag} X).$$ \end{Theorem}
\begin{proof} It suffices to show that $\pi^0\operatorname{Diag} \cong H^0(\operatorname{Tot} X)$, and that $$\pi^*\operatorname{Diag},\; H^*\operatorname{Tot} \colon\mathcal A^{\Delta\times\Delta }\to \mathcal A^{\mathbf N}$$ are sequences of right derived functors.
First observe that $\pi^0(\operatorname{Diag} X)=\operatorname{eq} (\partial_h^0\partial_v^0, \partial_h^1\partial_v^1\colon X^{0,0}\to X^{1,1})$, while $H^0(\operatorname{Tot} (X))=\ker ((\partial_h^0-\partial_h^1, \partial_v^0-\partial_v^1)\colon X^{0,0}\to X^{1,0}\oplus X^{0,1})$. But $\partial_h^0\partial_v^0x=\partial_h^1\partial_v^1x$ implies that $\partial_v^0x=\sigma_h^0\partial_h^0\partial_v^0x =\sigma_h^0\partial_h^1\partial_v^1x =\partial_v^1x$, since $\sigma_h^0\partial_h^0 =1=\sigma_h^0\partial_h^1$, and similarly $\partial_h^0x=\sigma_v^0\partial_h^0\partial_v^0x =\sigma_v^0\partial_h^1\partial_v^1x =\partial_h^1x$, since $\sigma_v^0\partial_v^0 =1=\sigma_v^0\partial_v^1$, so that $\pi^0(\operatorname{Diag} X)\subseteq H^0(\operatorname{Tot} (X))$.
Conversely, if $\partial_h^0x=\partial_h^1x$ and $\partial_v^0x=\partial_v^1x$ then $\partial_h^0\partial_v^0x=\partial_h^0\partial_v^1x =\partial_v^1\partial_h^0x=\partial_v^1\partial_h^1x =\partial_h^1\partial_v^1x$, and hence $H^0(\operatorname{Tot} (X))\subseteq \pi^0(\operatorname{Diag} X)$.
The additive functors $\operatorname{Diag}\colon \mathcal A^{\Delta\times\Delta}\to \mathcal A^{\Delta}$ and $\operatorname{Tot}\colon\mathcal A^{\Delta\times\Delta}\to \operatorname{Coch} (\mathcal A)$ are obviously exact, while $\pi^*, H^*$ are cohomological $\delta$-functors, so that both $\pi^*\operatorname{Diag} ,H^*\operatorname{Tot}\colon\mathcal A^{\Delta\times\Delta}\to \operatorname{Coch} (\mathcal A)$ are cohomological $\delta$-functors.
The claim is that these cohomological $\delta$-functors are universal, i.e.\ the right derived functors of $\pi^0\operatorname{Diag} ,H^0\operatorname{Tot}\colon \mathcal A^{\Delta\times\Delta}\to \mathcal A$, respectively. Since $\mathcal A$ has enough injectives, so does $\operatorname{Coch} (\mathcal A)$ by [We, Ex. 2.3.4], and hence by the Dold-Kan equivalence $\mathcal A^{\Delta}$ and $\mathcal A^{\Delta\times\Delta}$ have enough injectives. Moreover, by the next lemma, both $\operatorname{Diag}$ and $\operatorname{Tot} $ preserve injectives. It therefore follows that $$\begin{array}{l} \pi^*\operatorname{Diag} =(R^*\pi^0)\operatorname{Diag} =R^*(\pi^0\operatorname{Diag} ),\\ H^*\operatorname{Tot} =(R^*H^0)\operatorname{Tot} =R^*(H^0\operatorname{Tot} ). \end{array}$$
The canonical cohomological first quadrant spectral sequence associated with the cochain bicomplex $C(X)$ has $$E_1^{p,q}=H_v^q(C^{p,*}(X))=\pi_v^q(X^{p,*})\quad ,\quad E_2^{p,q}=H_h^p(C(\pi_v^q(X)))=\pi_h^p\pi_v^q(X)$$ and converges finitely to $H^{p+q}(\operatorname{Tot} (X))\cong \pi^{p+q}(\operatorname{Diag} X)$. \end{proof}
\begin{Lemma} The functors $\operatorname{Diag}\colon \mathcal A^{\Delta\times\Delta}\to\mathcal A^{\Delta}$ and $\operatorname{Tot}\colon\mathcal A^{\Delta\times\Delta}\to\operatorname{Coch}\mathcal A$ preserve injectives. \end{Lemma}
\begin{proof} A cosimplicial bicomplex $J$ is an injective object in $\mathcal A^{\Delta\times\Delta}$ if and only if \begin{enumerate} \item each $J^{p,q}$ is an injective object of $\mathcal A$, \item each row and each column is cosimplicially null-homotopic, i.e.\ the identity map is cosimplicially homotopic to the zero map, \item the vertical homotopies $h_v^j\colon J^{*,q}\to J^{*,q-1}$ for $0\leq j\leq q-1$ are cosimplicial maps. \end{enumerate}
It then follows that $\operatorname{Diag} (J)$ is an injective object in $\mathcal A^{\Delta}$, since $J^{p,p}$ is injective in $\mathcal A$ for every $p\geq 0$ and the maps $h^i=h_h^ih_v^i\colon J^{p,p}\to J^{p-1,p-1}$, $0\leq i\leq p-1$ and $p>0$, form a contracting cosimplicial homotopy, i.e.\ the identity map of $\operatorname{Diag} J$ is cosimplicially null-homotopic.
On the other hand $\operatorname{Tot} (J)$ is a non-negative cochain complex of injective objects in $\mathcal A$, so it is injective in $\operatorname{Coch} (\mathcal A)$ if and only if it is split-exact, that is if and only if it is exact. But every column of the associated cochain bicomplex $C(J)$ is acyclic, since $H_v^*(J^{p,*})=\pi^*(J^{p,*})=0$. The exactness of $\operatorname{Tot} (J)$ now follows from the convergent spectral sequence with $E_1^{p,q}=H^q(C^{p,*}(J))=0$ and $E_2^{p,q}=H_h^p(H_v^q(C(J))) \Rightarrow H^{p+q}(\operatorname{Tot} (J))$. \end{proof} \vskip .5cm
\subsection{The cosimplicial Alexander-Whitney map}
The cosimplicial Alexander-Whitney map gives an explicit formula for the isomorphism in the Eilenberg-Zilber theorem. For $p+q=n$ let $$g^{p,q}=\partial^n_h\partial^{n-1}_h\ldots \partial^{p+1}_h\partial^0_v\ldots \partial^0_v\colon X^{p,q}\to X^{n,n}$$ and $g^n=(g^{p,q})\colon \operatorname{Tot}^n (X)\to X^{n,n}$. This defines a natural cochain map $g\colon \operatorname{Tot} (X)\to C(\operatorname{Diag} X)$, which induces a morphism of universal $\delta$-functors $$g^*\colon H^*(\operatorname{Tot} (X))\to H^*(C(\operatorname{Diag} X))=\pi^*(\operatorname{Diag} X).$$ Moreover, $g^0\colon \operatorname{Tot}^0(X)=X^{0,0}=C^0(\operatorname{Diag} X)$ is the identity, and hence $$g^0\colon H^0(\operatorname{Tot} (X))\to H^0(C(\operatorname{Diag} X))=\pi^0(\operatorname{Diag} X)$$ is an isomorphism. The cosimplicial Alexander-Whitney map is therefore (up to equivalence) the unique cochain map inducing the isomorphism in the Eilenberg-Zilber theorem. The inverse map $f\colon C(\operatorname{Diag} X)\to \operatorname{Tot} (X)$ is given by the shuffle coproduct formula $$f^{p,q}=\sum_{(p,q)-\mbox{shuffles}}(-1)^{\mu}\sigma^{\mu (n)}_h\ldots \sigma^{\mu (p+1)}_h\sigma^{\mu (p)}_v\ldots \sigma^{\mu (1)}_v\colon X^{n,n}\to X^{p,q},$$ and is a natural cochain map. It induces a natural isomorphism $\pi^0(\operatorname{Diag} X)=H^0(C(\operatorname{Diag} X))\cong H^0(\operatorname{Tot} (X))$, and thus
$$f^*\colon \pi^*(\operatorname{Diag} X)=H^*(C(\operatorname{Diag} X))\cong H^*(\operatorname{Tot} (X))$$ is the unique isomorphism of universal $\delta$-functors given in the cosimplicial Eilenberg-Zilber theorem. In particular, $f^*$ is the inverse of $g^*$.
\end{appendix}
\end{document}
\begin{document}
\title{Intercomparison of Machine Learning Methods for Statistical Downscaling: The Case of Daily and Extreme Precipitation}
\abstract{
Statistical downscaling of global climate models (GCMs) allows researchers to study local climate change effects decades into the future. A wide range of statistical models have been applied to downscaling GCMs but recent advances in machine learning have not been explored. In this paper, we compare four fundamental statistical methods, Bias Correction Spatial Disaggregation (BCSD), Ordinary Least Squares, Elastic-Net, and Support Vector Machine, with three more advanced machine learning methods, Multi-task Sparse Structure Learning (MSSL), BCSD coupled with MSSL, and Convolutional Neural Networks to downscale daily precipitation in the Northeast United States. Metrics evaluating each method's ability to capture daily anomalies, large scale climate shifts, and extremes are analyzed. We find that linear methods, led by BCSD, consistently outperform non-linear approaches. The direct application of state-of-the-art machine learning methods to statistical downscaling does not provide improvements over simpler, longstanding approaches. }
\section{Introduction}
The sustainability of infrastructure, ecosystems, and public health depends on a predictable and stable climate. Key infrastructure allowing society to function, including power plants and transportation systems, is built to sustain specific levels of climate extremes and perform optimally in its expected climate. Studies have shown that the changing climate has had, and will continue to have, significant impacts on critical infrastructure~\cite{ganguli2015water,neumann2015climate}. Furthermore, climate change is having dramatic negative effects on ecosystems, from aquatic species to forest ecosystems, caused by increases in greenhouse gases and temperatures~\cite{walther2002ecological,parmesan2006ecological,hansen2013high}. Increases in the frequency and duration of heat waves, droughts, and flooding are damaging public health~\cite{haines2006climate,frumkin2008climate}.
Global Circulation Models (GCMs) are used to understand the effects of the changing climate by simulating known physical processes up to two hundred years into the future. The computational resources required to simulate the global climate on a large scale are enormous, limiting models to coarse spatial and temporal scale projections. While the coarse scale projections are useful in understanding climate change at a global and continental level, regional and local understanding is limited. Most often, the critical systems society depends on exist at the regional and local scale, where projections are most limited. Downscaling techniques are applied to provide climate projections at finer spatial scales, exploiting GCMs to build higher resolution outputs. Statistical and dynamical downscaling are the two classes of techniques used. The statistical downscaling (SD) approach aims to learn a statistical relationship between coarse scale climate variables (i.e., GCMs) and high resolution observations. The other approach, dynamical downscaling, joins the coarse grid GCM projections with known local and regional processes to build Regional Climate Models (RCMs). RCMs are unable to generalize from one region to another as the parameters and physical processes are tuned to specific regions. Though RCMs are useful for hypothesis testing, their lack of generality across regions and the extensive computational resources they require are strong disadvantages.
\subsection{Statistical Downscaling}
Statistical downscaling methods are further categorized into three approaches, weather generators, weather typing, and transfer functions. Weather generators are typically used for temporal downscaling, rather than spatial downscaling. Weather typing, also known as the method of analogues, searches for a similar historical coarse resolution climate state that closely represents the current state. Though this method has shown reasonable results~\cite{frost2011comparison}, in most cases, it is unable to satisfy the non-stationarity assumption in SD. Lastly, transfer functions, or regression methods, are commonly used for SD by learning functional relationships between historical precipitation and climate variables to project high resolution precipitation.
A wide variety of regression methods have been applied to SD, ranging from Bias Correction Techniques to Artificial Neural Networks. Traditional methods, such as Bias Correction Spatial Disaggregation (BCSD)~\cite{wood2002long} and Automated Regression Based Downscaling (ASD)~\cite{hessami2008automated}, are the most widely used. BCSD assumes that the climate variable being downscaled is well simulated by GCMs, which often is not the case with variables such as precipitation~\cite{schiermeier2010real}. Rather than relying on projections of the climate variable being downscaled, regression methods can be used to estimate the target variable. For instance, precipitation can be projected using a regression model with variables such as temperature, humidity, and sea level pressure over large spatial grids. The high dimensionality of the covariates, stemming from a range of climate variables over three-dimensional space, leads to multicollinearity and overfitting in statistical models. ASD improves upon multiple linear regression by selecting covariates implicitly, using covariate selection techniques such as backward stepwise regression and partial correlations. The Least Absolute Shrinkage and Selection Operator (Lasso), a widely used method for high dimensional regression problems through the utilization of an $l_1$ penalty term, is analogous to ASD and has shown superior results in SD~\cite{tibshirani1996regression,hammami2012predictor}. Principal component analysis (PCA) is another popular approach to dimensionality reduction in SD~\cite{tatli2004statistical,Ghosh2010,jakob2011empirical}, decomposing the features into a lower dimensional space to minimize multicollinearity between covariates. PCA is disadvantaged by the inability to infer which covariates are most relevant to the problem, steering many away from the method. Other methods for SD include Support Vector Machines (SVM)~\cite{Ghosh2010}, Artificial Neural Networks (ANNs)~\cite{taylor2000quantile,Coulibaly2005}, and Bayesian Model Averaging~\cite{Zhang2015}.
Many studies have aimed to compare and quantify a subset of the SD models presented above by downscaling averages and/or extremes at a range of temporal scales. For instance, Burger et al. presented an intercomparison on five state-of-the-art methods for downscaling temperature and precipitation at a daily temporal resolution to quantify extreme events~\cite{Burger2012}. Another recent study by Gutmann et al. presented an intercomparison of methods on daily and monthly aggregated precipitation~\cite{gutmann2014intercomparison}. These studies present a basis for comparing SD models by downscaling at a daily temporal resolution to estimate higher level climate statistics, such as extreme precipitation and long-term droughts. In this paper we follow this approach to test the applicability of more advanced machine learning models to downscaling.
\subsection{Multi-task Learning for Statistical Downscaling}
Traditionally, SD has focused on downscaling locations independently without accounting for clear spatial dependencies in the system. Fortunately, numerous machine learning advances may aid SD in exploiting such dependencies. Many of these advancements focus on an approach known as multi-task learning, aiming to learn multiple tasks simultaneously rather than in isolation. A wide variety of studies have shown that exploiting related tasks through multi-task learning (MTL) greatly outperforms single-task models, from computer vision~\cite{zhang2012robust} to biology~\cite{kim2010tree}. Consider the work presented by~\cite{evgeniou2007multi} in which increasing the number of tasks leads to more significant feature selection and lower test error through the inclusion of task relatedness and regularization terms in the objective function. MTL has also displayed the ability to uncover and exploit structure between task relationships~\cite{zhang2012convex,chen2011integrating,argyriou2007spectral}.
Recently Goncalves et al. presented a novel method, Multi-task Sparse Structure Learning (MSSL), ~\cite{goncalves2014multi} and applied it to GCM ensembles in South America. MSSL aims to exploit sparsity in both the set of covariates as well as the structure between tasks, such as set of similar predictands, through alternating optimization of weight and precision (inverse covariance) matrices. The results showed significant improvements in test error over Linear Regression and Multi-model Regression with Spatial Smoothing (a special case of MSSL with a pre-defined precision matrix). Along with a lower error, MSSL captured spatial structure including long range teleconnections between some coastal cities. The ability to harness this spatial structure and task relatedness within a GCM ensembles drives our attention toward MTL in other climate applications.
Consider, in SD, each location in a region as a task with an identical set of possible covariates. These tasks are related through strong unknown spatial dependencies which can be harnessed for SD projections. In the common high dimensional cases of SD, sparse features learned will provide greater significance as presented by~\cite{evgeniou2007multi}. Furthermore, the structure between locations will be learned and may aid projections. MSSL, presented by \cite{goncalves2014multi}, accounts for sparse feature selection and structure between tasks.
In this study we aim to compare traditional statistical downscaling approaches, BCSD, Multiple Linear Regression, Lasso, and Support Vector Machines, against new approaches in machine learning, Multi-task Sparse Structure Learning and Convolutional Neural Networks (CNNs). During experimentation we apply common training architectures as part of the automated statistical downscaling framework. Results are then analyzed with a variety of metrics including root mean square error (RMSE), bias, skill of estimating underlying distributions, correlation, and extreme indices.
\section{Statistical Downscaling Methods}
\subsection{Bias Corrected Spatial Disaggregation}
BCSD~\cite{wood2002long} is widely used in the downscaling community due to its simplicity~\cite{abatzoglou2012comparison,Burger2012,wood2004hydrologic,maurer2010utility}. Most commonly, GCM data is bias corrected followed by spatial disaggregation on monthly data and then temporally disaggregated to daily projections. Temporal disaggregation is performed by selecting a month at random and adjusting the daily values to reproduce its statistical distribution, ignoring daily GCM projections. Thrasher et al. presented a process applying BCSD directly to daily projections~\cite{thrasher2012technical}, removing the step of temporal disaggregation. We apply the following steps to an overlapping reanalysis dataset and gridded observation data.
1) Bias correction of daily projections using observed precipitation. Observed precipitation is first remapped to match the reanalysis grid. For each day of the year, values are pooled, $\pm$ 15 days, from the reanalysis and observed datasets to build a quantile mapping. With the quantile mapping computed, the reanalysis data points are mapped (bias corrected) to the same distribution as the observed data. When applying this method to daily precipitation, detrending the data is not necessary because of the lack of trend and is therefore omitted.
2) Spatial disaggregation of the bias-corrected reanalysis data. Coarse resolution reanalysis is then bilinearly interpolated to the same grid as the observation dataset. To preserve spatial details of the fine-grained observations, the average precipitation of each day of the year is computed from the observations and used as scaling factors. These scaling factors are then multiplied with the daily interpolated GCM projections to provide downscaled GCM projections.
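The bias-correction step can be made concrete with a short code sketch. The following Python fragment is our own minimal illustration for a single grid cell: the function names, the 100-quantile discretization, and the linear interpolation between empirical quantiles are our choices, and the many tied zero-precipitation values would need extra care in a full implementation.
\begin{verbatim}
import numpy as np

def pooled_mask(doy, target_doy, window=15):
    """Select samples within +/- `window` calendar days of `target_doy`,
    wrapping around the year boundary."""
    return np.abs((doy - target_doy + 182) % 365 - 182) <= window

def quantile_map(model_pool, obs_pool, values, n_quantiles=100):
    """Map `values` from the model's empirical distribution onto the
    observed one (the quantile-mapping step of BCSD)."""
    q = np.linspace(0.0, 1.0, n_quantiles)
    model_q = np.quantile(model_pool, q)   # empirical quantiles, reanalysis pool
    obs_q = np.quantile(obs_pool, q)       # empirical quantiles, remapped obs pool
    return np.interp(values, model_q, obs_q)

# Usage for one grid cell and one day of year d, with `reanalysis` and
# `observed` aligned daily series over the training period and `doy`
# holding the day of year (1..365) of each sample:
# corrected = quantile_map(reanalysis[pooled_mask(doy, d)],
#                          observed[pooled_mask(doy, d)],
#                          reanalysis[doy == d])
\end{verbatim}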
\subsection{Automated Statistical Downscaling}
ASD is a general framework for statistical downscaling incorporating covariate selection and prediction~\cite{hessami2008automated}. Downscaling of precipitation using ASD requires two essential steps: 1. Classify rainy/non-rainy days ($\geq$ 1mm), 2. Predict precipitation totals for rainy days. The predicted precipitation can then be written as:
\begin{equation} \begin{aligned} \label{eq:asd}
\textbf{E}[Y] = R \cdot \textbf{E}[Y \mid R] \text{ where }
R =
\begin{cases}
0, & \text{if}\ \textbf{P}(Rainy) < 0.5 \\
1, & \text{otherwise}
\end{cases} \end{aligned} \end{equation} Formulating $R$ as a binary variable preserves rainy and non-rainy days. We test this framework using five pairs of classification and regression techniques.
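A minimal sketch of how the two stages combine as in (\ref{eq:asd}) is given below; it is our own illustration, with \texttt{clf} assumed to expose scikit-learn's \texttt{predict\_proba} (e.g.\ a logistic regression) and \texttt{reg} any fitted regressor for rainy-day amounts.
\begin{verbatim}
import numpy as np

def asd_predict(clf, reg, X):
    """Combine occurrence and amount models: R = 1 iff P(rainy) >= 0.5."""
    rainy = clf.predict_proba(X)[:, 1] >= 0.5
    y_hat = np.zeros(X.shape[0])
    y_hat[rainy] = reg.predict(X[rainy])   # amounts predicted on rainy days only
    return y_hat
\end{verbatim}
The classifier/regressor pairs described in the following subsections plug directly into this helper.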
\subsubsection{Multiple Linear Regression}
The simplicity of Multiple Linear Regression (MLR) motivated its use in SD, particularly as part of SDSM~\cite{wilby2002sdsm} and ASD~\cite{hessami2008automated}. To provide a baseline relative to the following methods, we apply a variation of MLR using PCA. As discussed previously, PCA is implemented to reduce the dimensionality of a high dimensional feature space by selecting the components that account for a percentage (98\% in our implementation) of variance in the data. These principal components, $X$, are used as inputs to classify and predict precipitation totals. We apply a logistic regression model to classify rainy versus non-rainy days. MLR is then applied to rainy days to predict precipitation amounts, $Y$:
\begin{equation} \begin{aligned} \label{eq:mlp} & \hat{\beta} = \arg\!\min_{\beta} \parallel Y - X\beta \parallel \\ \end{aligned} \end{equation}
This particular formulation will aid in comparison to ~\cite{Ghosh2010} where PCA is coupled with an SVM.
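A hedged scikit-learn sketch of this baseline follows; the helper name, the standardization step, and the use of pipelines are our own choices, while the 98\% variance threshold and the 1\,mm wet-day threshold come from the text.
\begin{verbatim}
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def fit_pca_ols(X_train, y_train, wet_threshold=1.0):
    """PCAOLS: components keeping 98% of the variance feed a logistic
    occurrence model and an ordinary least-squares amount model."""
    rainy = y_train >= wet_threshold
    clf = make_pipeline(StandardScaler(), PCA(n_components=0.98),
                        LogisticRegression(max_iter=1000))
    reg = make_pipeline(StandardScaler(), PCA(n_components=0.98),
                        LinearRegression())
    clf.fit(X_train, rainy)
    reg.fit(X_train[rainy], y_train[rainy])   # amounts fit on rainy days only
    return clf, reg
\end{verbatim}
The fitted pair can then be passed to the \texttt{asd\_predict} helper sketched above.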
\subsubsection{Elastic-Net}
Covariate selection can be done in a variety of ways, such as backward stepwise regression and partial correlations. Alternatively, covariates can be selected automatically through regularization terms, such as the $L_1/L_2$ norms used in the statistical methods Lasso~\cite{tibshirani1996regression}, Ridge~\cite{hoerl1970ridge}, and Elastic-Net~\cite{zou2005regularization}. Elastic-Net uses a linear combination of $L_1/L_2$ norms, which we apply in this intercomparison. Given a set of covariates $X$ and observations $Y$, Elastic-Net is defined as:
\begin{equation} \begin{aligned} \label{eq:elnet} & \hat{\beta} = \arg\!\min_{\beta} \big( \parallel Y - X\beta \parallel_2^2 + \lambda_1 \parallel \beta \parallel_1 + \lambda_2 \parallel \beta \parallel_2^2 \big) \\ \end{aligned} \end{equation}
The $L_1$ norm forces uninformative covariate coefficients to zero, while the $L_2$ norm enforces smoothness and allows correlated covariates to persist. Cross-validation is applied with a grid-search to find the optimal parameter values for $\lambda_1$ and $\lambda_2$. High-dimensional Elastic-Net is much less computationally expensive than stepwise regression techniques and most often leads to more generalizable models. A similar approach is applied to the classification step by using a logistic regression with an $L_1$ regularization term. Previous studies have considered the use of Lasso for SD~\cite{hammami2012predictor} but, to our knowledge, none have considered Elastic-Net.
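A corresponding sketch for the ELNET pair is given below; the $l_1$-ratio grid, the liblinear solver, and the five-fold cross-validation are illustrative choices of ours.
\begin{verbatim}
from sklearn.linear_model import ElasticNetCV, LogisticRegression

def fit_elnet(X_train, y_train, wet_threshold=1.0):
    """ELNET: L1-penalised logistic occurrence model and cross-validated
    Elastic-Net amount model (grid over the penalty mix and strength)."""
    rainy = y_train >= wet_threshold
    clf = LogisticRegression(penalty="l1", solver="liblinear")
    reg = ElasticNetCV(l1_ratio=[0.1, 0.5, 0.7, 0.9, 1.0], n_alphas=50, cv=5)
    clf.fit(X_train, rainy)
    reg.fit(X_train[rainy], y_train[rainy])
    return clf, reg
\end{verbatim}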
\subsubsection{Support Vector Machine Regression}
Ghosh et al. introduced a coupled approach of PCA and Support Vector Machine Regression (SVR) for statistical downscaling~\cite{ghosh2008statistical,Ghosh2010}. The use of SVR for downscaling aims to capture non-linear effects in the data. As discussed previously, PCA is implemented to reduce the dimensionality of a high dimensional feature space by selecting the components that account for a percentage (98\% in our implementation) of variance in the data. Following dimensionality reduction, SVR is used to define the transfer function between the principal components and observed precipitation. Given a set of covariates (the chosen principal components) $X \in \mathbb{R}^{n \times d}$ and $Y \in \mathbb{R}^n$ with $d$ covariates and $n$ samples, the support vector regression is defined as~\cite{smola1997support}:
\begin{equation} \begin{aligned} & f(x) = \sum_{i=1}^n w_i \times K(x_i, x) + b \end{aligned} \end{equation}
where $K(x_i, x)$ and $w_i$ are the kernel functions and their corresponding weights with a bias term $b$. The support vectors are selected during training by optimizing the number of points from the training data used to define the relationship between the predictand ($Y$) and predictors ($X$). Parameters $C$ and $\epsilon$, corresponding to regularization and loss sensitivity, are set to $1.0$ and $0.1$ respectively. A linear kernel function is applied to limit overfitting to the training set. Furthermore, a support vector classifier is used for classification of rainy versus non-rainy days.
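The PCASVR pair can be sketched analogously; only the kernel, $C$, $\epsilon$, and the 98\% variance threshold are taken from the text, while the standardization and the \texttt{probability=True} option for the classifier are our own assumptions.
\begin{verbatim}
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC, SVR

def fit_pca_svr(X_train, y_train, wet_threshold=1.0):
    """PCASVR: linear-kernel support vector models on principal components."""
    rainy = y_train >= wet_threshold
    clf = make_pipeline(StandardScaler(), PCA(n_components=0.98),
                        SVC(kernel="linear", probability=True))
    reg = make_pipeline(StandardScaler(), PCA(n_components=0.98),
                        SVR(kernel="linear", C=1.0, epsilon=0.1))
    clf.fit(X_train, rainy)
    reg.fit(X_train[rainy], y_train[rainy])
    return clf, reg
\end{verbatim}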
\subsubsection{Multi-task Sparse Structure Learning}
Recent work in Multi-task Learning aims to exploit structure in the set of predictands while keeping a sparse feature set. Multi-task Sparse Structure Learning (MSSL) in particular learns the structure between predictands while enforcing sparse feature selection~\cite{goncalves2014multi}. Goncalves et al. presented MSSL's exceptional ability to predict temperature through ensembles of GCMs while learning interesting teleconnections between locations~\cite{goncalves2014multi}. Moreover, the generalized framework of MSSL allows for implementation of classification and regression models. Applying MSSL to downscaling with least squares regression (logistic regression for classification), we denote $K$ as the number of tasks (observed locations), $n$ as the number of samples, and $d$ as the number of covariates with predictor $X \in \mathbb{R}^{n \times d}$, and predictand $Y \in \mathbb{R}^{n \times K}$. As proposed in \cite{goncalves2014multi}, the joint optimization over the weight matrix $\boldsymbol{W}$ and the precision matrix $\boldsymbol{\Omega}$ is defined as
\begin{equation} \label{eq:MSSL} \begin{aligned}
\min_{\boldsymbol{W},\boldsymbol{\Omega} \succ 0} \bigg\{ \dfrac{1}{2} \sum_{k=1}^K \parallel X \boldsymbol{W}_k - Y_k \parallel_2^2 - \dfrac{K}{2} \text{log}|\boldsymbol{\Omega}| + Tr(\boldsymbol{W} \boldsymbol{\Omega} \boldsymbol{W}^T) + \lambda \parallel \boldsymbol{\Omega} \parallel_1 + \gamma \parallel \boldsymbol{W} \parallel_1 \bigg\}\\ \end{aligned} \end{equation}
\noindent where $\boldsymbol{W} \in \mathbb{R}^{d \times K}$ is the weight matrix and $\boldsymbol{\Omega} \in \mathbb{R}^{K \times K}$ is the precision (inverse covariance) matrix. The $L_1$ regularization parameters $\lambda$ and $\gamma$ enforce sparsity over $\boldsymbol{\Omega}$ and $\boldsymbol{W}$. $\boldsymbol{\Omega}$ represents the structure contained between the high resolution observations. Alternating minimization is applied to (\ref{eq:MSSL}):
1. Initialize $\boldsymbol{\Omega}^0 = I_K$, $\boldsymbol{W}^0 = \mathbf{0}_{d\times K}$ \\
2. for $t=1,2,3,\ldots$ \textbf{do}
\begin{equation}
\boldsymbol{W}^{t+1} | \boldsymbol{\Omega}^t = \min_{\boldsymbol{W}} \bigg\{ \dfrac{1}{2}\sum_{k=1}^K \parallel X_k \boldsymbol{W}_k - Y_k \parallel_2^2 + Tr(\boldsymbol{W} \boldsymbol{\Omega} \boldsymbol{W}^T) + \gamma \parallel \boldsymbol{W} \parallel_1 \bigg\} \\ \label{eq:MSSL-W} \end{equation}
\begin{equation}
\boldsymbol{\Omega}^{t+1} | \boldsymbol{W}^{t+1} = \min_{\boldsymbol{\Omega}} \bigg\{ Tr(\boldsymbol{W} \boldsymbol{\Omega} \boldsymbol{W}^T) - \dfrac{K}{2} log|\boldsymbol{\Omega}| + \lambda \parallel \boldsymbol{\Omega} \parallel_1 \bigg\} \\ \label{eq:MSSL-Omega} \end{equation}
\noindent Subproblems (\ref{eq:MSSL-W}) and (\ref{eq:MSSL-Omega}) are solved independently through the Alternating Direction Method of Multipliers (ADMM). Furthermore, since the predictors of each task are identical (as they are for SD), (\ref{eq:MSSL-W}) is updated using distributed ADMM across the feature space \cite{boyd2011distributed}.
MSSL enforces similarity between rows of $\boldsymbol{W}$ by learning the structure $\boldsymbol{\Omega}$. For example, two locations which are nearby in space may tend to exhibit similar properties. MSSL will then exploit these properties and impose similarity in their corresponding linear weights. By enforcing similarity in linear weights, we encourage smoothness of SD projections between highly correlated locations. $L_1$ regularization over $\boldsymbol{W}$ and $\boldsymbol{\Omega}$ jointly encourages sparsity without forcing a particular structure. The parameters encouraging sparsity, $\gamma$ and $\lambda$, are chosen on a validation set using the grid-search technique. These steps are applied for both regression and classification.
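The alternating scheme can be sketched compactly. The code below is a simplified illustration rather than the authors' implementation: the $\boldsymbol{W}$-step in (\ref{eq:MSSL-W}) is solved with proximal gradient descent (ISTA) instead of distributed ADMM, and the $\boldsymbol{\Omega}$-step in (\ref{eq:MSSL-Omega}) is delegated to scikit-learn's graphical lasso, which solves a problem of the same form after rescaling by $2/K$ (up to its treatment of the diagonal penalty).
\begin{verbatim}
import numpy as np
from sklearn.covariance import graphical_lasso

def soft_threshold(A, t):
    return np.sign(A) * np.maximum(np.abs(A) - t, 0.0)

def mssl_fit(X, Y, lam=0.1, gam=0.1, n_outer=10, n_inner=200):
    """Alternating minimisation of the MSSL objective; X is n x d, Y is n x K."""
    d, K = X.shape[1], Y.shape[1]
    W, Omega = np.zeros((d, K)), np.eye(K)
    for _ in range(n_outer):
        # W-step: min 0.5||XW - Y||_F^2 + tr(W Omega W^T) + gam*||W||_1 via ISTA
        lip = np.linalg.norm(X, 2) ** 2 + 2.0 * np.linalg.norm(Omega, 2)
        for _ in range(n_inner):
            grad = X.T @ (X @ W - Y) + 2.0 * W @ Omega
            W = soft_threshold(W - grad / lip, gam / lip)
        # Omega-step: min tr(Omega W^T W) - (K/2) log det Omega + lam*||Omega||_1,
        # i.e. a graphical-lasso problem with "covariance" (2/K) W^T W
        S = (2.0 / K) * W.T @ W + 1e-6 * np.eye(K)   # jitter keeps S well conditioned
        _, Omega = graphical_lasso(S, alpha=2.0 * lam / K)
    return W, Omega
\end{verbatim}
When the learned $\boldsymbol{\Omega}$ is close to diagonal the tasks decouple into independent regularized regressions, which is consistent with the observation in the conclusion that the cross-validated parameters attributed no structure aiding prediction.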
\subsubsection{Convolutional Neural Networks}
\begin{figure}
\caption{Given a set of GCM inputs $\boldsymbol{Y}$, the first layer extracts a set of feature maps followed by a pooling layer. A second convolution layer is then applied to the reduced feature space and pooled one more time. The second pooling layer is then flattened and fully connected to the high resolution observations.}
\label{fig:cnn-framework}
\end{figure}
Artificial Neural Networks (ANN) have been widely applied to SD with mixed results~\cite{taylor2000quantile,schoof2001downscaling,Burger2012}, to name a few. In the past, ANNs had difficulty converging to good local minima. Recent progress in deep learning has renewed interest in ANNs, which are beginning to produce impressive results in many applications, including image classification and speech recognition~\cite{krizhevsky2012imagenet,hinton2012deep,basu2015learning}. In particular, Convolutional Neural Networks (CNNs) have greatly impacted computer vision applications by extracting, representing, and condensing spatial properties of the image~\cite{krizhevsky2012imagenet}. SD may benefit from CNN advances by learning spatial representations of GCMs. Though CNNs rely on a high number of samples to reduce overfitting, dropout has been shown to be an effective method of reducing overfitting with limited samples~\cite{srivastava2014dropout}. We note that the limited number of observations available for daily statistical downscaling may cause overfitting.
CNNs rely on two types of layers, a convolution layer and a pooling layer. In the convolution layer, a patch of size $3 \times 3$ is chosen and slid with a stride of 1 around the image. A non-linear transformation is applied to each patch resulting in 8 filters. Patches of size $2 \times 2$ are then pooled by selecting the maximum unit with a stride of $2$. A second convolution layer with a $3 \times 3$ patch to $2$ filters is followed by a max pooling layer of size $3 \times 3$ with stride $3$. The increase of pooling size decreases the dimensionality further. The last pooling layer is then vectorized and densely connected to each high resolution location. This architecture is presented in Figure 1.
Multiple variables and pressure levels from our reanalysis dataset are represented as channels in the CNN input. Our CNN is trained using the traditional back propagation optimization with a decreasing learning rate. During training, dropout with probability 0.5 is applied to the densely connected layer. This method aims to exploit the spatial structure contained in the GCM. A sigmoid function is applied to the output layer for classification. To our knowledge, this is the first application of CNNs to statistical downscaling.
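A Keras rendering of the architecture in Figure 1 is sketched below; the paper specifies the filter counts, patch sizes, strides, pooling sizes, dropout rate, and the sigmoid output for classification, while the framework, padding, activations, optimizer, and losses are our own (hypothetical) choices.
\begin{verbatim}
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn(height, width, n_channels, n_locations, classify=False):
    """Two conv/pool stages, flatten, dropout, dense output per location."""
    model = models.Sequential([
        tf.keras.Input(shape=(height, width, n_channels)),  # channels = variables x levels
        layers.Conv2D(8, (3, 3), strides=1, padding="same", activation="relu"),
        layers.MaxPooling2D(pool_size=(2, 2), strides=2),
        layers.Conv2D(2, (3, 3), strides=1, padding="same", activation="relu"),
        layers.MaxPooling2D(pool_size=(3, 3), strides=3),
        layers.Flatten(),
        layers.Dropout(0.5),                 # dropout on the dense connection
        layers.Dense(n_locations,
                     activation="sigmoid" if classify else "linear"),
    ])
    model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-3),
                  loss="binary_crossentropy" if classify else "mse")
    return model
\end{verbatim}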
\subsection{Bias Corrected Spatial Disaggregation with MSSL}
To further understand the use of BCSD in statistical downscaling, we propose a technique to estimate the errors introduced in BCSD. As presented above, BCSD utilizes a relatively simple quantile mapping approach to statistical downscaling followed by interpolation and spatial scaling. Following the BCSD estimates of the observed climate, we compute the resulting errors, which may be systematic and carry a predictive signal. Modeling such errors using the transfer function approaches above, such as MSSL, may uncover this signal and improve BCSD projections. To apply this technique, the following steps are taken:
\begin{enumerate} \item Apply BCSD to the coarse scale climate variable and compute the errors. \item Excluding a hold out dataset, use MSSL where the predictand is the computed errors and the predictors are a different set of climate variables, such as Temperature, Wind, Sea Level Pressure, etc. \item Subtract the expected errors modeled by step 2 from the BCSD projections in step 1. \end{enumerate} \noindent The transfer function learned in step 2 is then applicable to future observations; a schematic sketch of these steps is given below.
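The sketch is our own Python illustration; \texttt{error\_model} stands in for MSSL or any multi-output regressor, and all variable names are hypothetical placeholders.
\begin{verbatim}
import numpy as np

def bcsd_mssl_correct(bcsd_train, obs_train, X_train,
                      bcsd_future, X_future, error_model):
    """Fit a model of the BCSD residuals and subtract its prediction."""
    residuals = bcsd_train - obs_train           # step 1: errors of plain BCSD
    error_model.fit(X_train, residuals)          # step 2: transfer function for errors
    return bcsd_future - error_model.predict(X_future)   # step 3: corrected output
\end{verbatim}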
\section{Data}
The Northeastern United States endures highly variable seasonal and annual weather patterns. Variable climate and weather patterns combined with diverse topography make regional climate projection difficult. Precipitation in particular varies heavily in frequency and intensity seasonally and annually~\cite{karl1998secular}. We choose this region to provide an in-depth comparison of statistical downscaling techniques for daily precipitation and extremes.
\subsection{United States Unified Gauge-Based Analysis of Precipitation}
High resolution gridded precipitation datasets often provide high uncertainties due to a lack of gauge based observations, poor quality control, and interpolation procedures. Fortunately, precipitation gauge data in the continental United States is dense with high temporal resolution (hourly and daily). The NOAA Climate Prediction Center CPC Unified Gauge-Based Analysis of Precipitation exploits the dense network of rain gauges to provide a quality controlled high resolution (0.25$^{\circ}$ by 0.25$^{\circ}$) gridded daily precipitation dataset from 1948 to the current date. State of the art quality control~\cite{chen2008assessing} and interpolation~\cite{xie2007gauge} techniques are applied giving us high confidence in the data. We select all locations within the northeastern United States watershed.
\subsection{NASA Modern-Era Retrospective Analysis for Research and Applications 2 (MERRA-2)}
Reanalysis datasets are often used as proxies to GCMs for statistical downscaling when comparing methods due to their low resolution gridded nature with a range of pressure levels and climate variables. Uncertainties and biases occur in each dataset, but state-of-the-art reanalysis datasets attempt to mitigate these issues. NASA's MERRA-2 reanalysis dataset~\cite{rienecker2011merra} was chosen after consideration of the NCEP Reanalysis I/II~\cite{kalnay1996ncep} and ERA-Interim~\cite{dee2011era} datasets. \cite{kossin2015validating} showed the reduced bias of MERRA and ERA-Interim over NCEP Reanalysis II, which is most often used in SD studies. MERRA-2 provides a significant temporal resolution from 1980 to present with relatively high spatial resolution (0.50$^{\circ}$ by 0.625$^{\circ}$). Satellite data are assimilated with NASA's GEOS-5 data assimilation system to produce MERRA-2~\cite{rienecker2011merra}.
Only variables available from the CCSM4 GCM are selected as covariates for our SD models. Temperature, vertical wind, horizontal wind, and specific humidity are chosen from pressure levels 500hpa, 700hpa, and 850hpa. At the surface level, temperature, sea level pressure, and specific humidity are chosen as covariates. To most closely resemble CCSM4, each variable is spatially upscaled to 1.00$^{\circ}$ by 1.25$^{\circ}$ at a daily resolution. A large box centered on the northeastern region, ranging from 35$^{\circ}$ to 50$^{\circ}$ latitude and 270$^{\circ}$ to 310$^{\circ}$ longitude, is used for each variable. When applying the BCSD model, we use a spatially upscaled Land Precipitation MERRA-2 Reanalysis dataset at a daily temporal resolution. Bilinear interpolation is applied over the coast to allow for quantile mapping of coastal locations as needed.
\section{Experiments and Evaluation}
In-depth evaluation of downscaling techniques is crucial in testing and understanding their credibility. The implicit assumptions in SD must be clearly understood and tested when applicable. Firstly, SD models assume that the predictors chosen credibly represent the variability in the predictands. This assumption is partially validated through the choice of predictors presented above, which physically represent the variability of precipitation. The remainder of the assumption must be tested through experimentation and statistical tests between downscaled projections and observations. The second assumption requires the statistical attributes of predictands and predictors to be valid outside of the data used for statistical modeling. A hold out set will be used to test the feasibility of this assumption at daily, monthly, and annual temporal resolutions. Third, the climate change signal must be incorporated in the predictors through GCMs. Predictors chosen for this experiment are available through CMIP5 CCSM4 simulations. It is understood that precipitation is not well simulated by GCMs and therefore not used in ASD models~\cite{schiermeier2010real}.
To test these assumptions, we provide in-depth experiments, analysis, and statistical metrics for each method presented above. The years 1980-2004 are used for training and years 2005-2014 are used for testing, taken from the overlapping time period of MERRA-2 and CPC Precipitation. For each method (excluding the special case of BCSD), we choose all covariates from each variable, pressure level, and grid point presented above, totaling 12,781 covariates. Each method applies either dimensionality reduction or regularization techniques to reduce the complexity of this high dimensional dataset. Separate models are trained for each season (DJF, MAM, JJA, SON) and used to project the corresponding observations.
\noindent Analysis and evaluation of downscaled projections aim to cover three themes: \begin{enumerate} \item Ability to capture daily anomalies. \item Ability to respond to large scale climate trends on monthly and yearly temporal scales. \item Ability to capture extreme precipitation events. \end{enumerate}
Similar evaluation techniques were applied in recent intercomparison studies of SD~\cite{Burger2012,gutmann2014intercomparison}. Evaluation of daily anomalies is carried out through comparison of bias (Projected - Observed), Root Mean Square Error (RMSE), correlations, and a skill score~\cite{perkins2007evaluation}. The skill score presented by~\cite{perkins2007evaluation} measures how similar two probability density functions are on a scale from 0 to 1, where 1 corresponds to identical distributions. Statistics are presented for winter (DJF), summer (JJA), and annually to understand seasonal credibility. Statistics for spring and fall are computed but not presented in order to minimize overlapping climate states and simplify results. Each of the measures is computed independently in space and then averaged to a single metric. Large scale climate trends are tested by aggregating daily precipitation to monthly and annual temporal scales. The aggregated projections are then compared using Root Mean Square Error (RMSE), correlations, and a skill score as presented in~\cite{perkins2007evaluation}. Due to the limited number of data points in the monthly and yearly projections, we estimate each measure using the entire set of projections and observations.
Climate indices are used for evaluation of SD models' ability to estimate extreme events. Four metrics from ClimDEX (http://www.clim-dex.org), chosen to encompass a range of extremes, will be utilized for evaluation, as presented by B\"{u}rger~\cite{Burger2012}. \begin{enumerate} \item CWD - Consecutive wet days $\geq$ 1mm \item R20 - Very heavy wet days $\geq$ 20mm \item RX5day - Monthly consecutive maximum 5 day precip \item SDII - Daily intensity index = Annual total / precip days $\geq$ 1mm \end{enumerate} Metrics will be computed on observations and downscaled estimates followed by annual (or monthly) comparisons. For example, correlating the maximum number of consecutive wet days per year between observations and downscaled estimates measures each SD model's ability to capture yearly anomalies. A skill score will also be utilized to understand abilities of reproducing statistical distributions.
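Two of these measures are simple to compute; the sketches below are our own numpy illustrations, with the bin count for the skill score an arbitrary choice.
\begin{verbatim}
import numpy as np

def perkins_skill(obs, sim, bins=50):
    """Perkins et al. (2007) skill score: overlap of the two binned PDFs
    (1 means identical binned distributions)."""
    edges = np.histogram_bin_edges(np.concatenate([obs, sim]), bins=bins)
    p_obs = np.histogram(obs, bins=edges)[0] / len(obs)
    p_sim = np.histogram(sim, bins=edges)[0] / len(sim)
    return float(np.minimum(p_obs, p_sim).sum())

def cwd(daily_precip, wet_threshold=1.0):
    """CWD index: longest run of consecutive days with >= 1 mm."""
    run = best = 0
    for value in daily_precip:
        run = run + 1 if value >= wet_threshold else 0
        best = max(best, run)
    return best
\end{verbatim}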
\section{Results}
Results presented below are evaluated using a hold-out set, years 2005-2014. Each model's ability to capture daily anomalies, large scale climate trends, and extreme events is presented. Our goal is to understand an SD model's overall ability to provide credible projections rather than one versus one comparisons; therefore statistical significance was not computed when comparing statistics.
\subsection{Daily Anomalies}
\begin{figure}
\caption{Each map presents the spatial bias, or directional error, of the model. White represents no bias produced by the model while red and blue respectively show positive and negative biases.}
\label{fig:bias}
\end{figure}
\begin{figure}
\caption{Root mean square error (RMSE) is computed for each downscaling location and method. Each boxplot presents the distribution of all RMSEs for the respective method. The box shows the quartiles while the whiskers show the remaining distribution, with outliers displayed by points.}
\label{fig:daily-rmse}
\end{figure}
\begin{table} \small
\noindent\makebox[\textwidth]{
\begin{tabularx}{1.2\textwidth}{>{\em}l|XXX|XXX|XXX|XXX}
\toprule
{} & \multicolumn{3}{c|}{Bias (mm)} & \multicolumn{3}{c|}{Correlation} & \multicolumn{3}{c|}{RMSE (mm)} & \multicolumn{3}{c}{Skill Score} \\
Season & Annual & DJF & JJA & Annual & DJF & JJA & Annual & DJF & JJA & Annual & DJF & JJA \\
\midrule
BCSD & -0.44 & -0.36 & -0.36 & 0.52 & 0.49 & 0.46 & 0.75 & 0.65 & 0.81 & 0.93 & 0.92 & 0.89 \\
PCAOLS & -0.89 & -0.71 & -1.16 & 0.55 & 0.60 & 0.49 & 0.70 & 0.55 & 0.75 & 0.82 & 0.81 & 0.76 \\
PCASVR & 0.37 & 0.04 & 0.20 & 0.33 & 0.39 & 0.31 & 1.10 & 0.79 & 1.05 & 0.91 & 0.87 & 0.87 \\
ELNET & -0.88 & -0.66 & -1.16 & 0.64 & 0.69 & 0.55 & 0.65 & 0.50 & 0.72 & 0.84 & 0.85 & 0.78 \\
MSSL & -1.58 & -1.20 & -2.05 & 0.62 & 0.64 & 0.54 & 0.68 & 0.55 & 0.74 & 0.92 & 0.90 & 0.88 \\
BCSD-MSSL & -0.16 & -0.10 & -0.02 & 0.58 & 0.60 & 0.50 & 0.69 & 0.56 & 0.77 & 0.79 & 0.80 & 0.74 \\
CNN & -3.27 & -2.72 & -3.68 & 0.58 & 0.63 & 0.55 & 0.87 & 0.69 & 0.90 & 0.73 & 0.74 & 0.67 \\
\bottomrule
\end{tabularx}} \caption{Daily statistical metrics averaged over space for annual, winter, and summer projections. Bias measures the directional error of each model. Correlation (larger is better) and RMSE (lower is better) describe the models' ability to capture daily fluctuations in precipitation. The skill score statistic measures the model's ability to estimate the observed probability distribution.} \label{tab:daily-stats} \end{table}
Evaluation of daily anomalies depends on a model's ability to estimate daily precipitation given the state of the system. This is equivalent to analyzing the error between projections and observations. Four statistical measures are used to evaluate these errors: bias, Pearson correlation, skill score, and root mean square error (RMSE), as presented in Figure 2, Figure 3, and Table 1. All daily precipitation measures are computed independently in space and averaged to provide a single value. This approach is taken to summarize the measures as simply as possible. Figure 2 shows the spatial representation of the annual bias in Table 1.
Overall, methods tend to underestimate precipitation annually and seasonally, with only PCASVR overestimating. BCSD-MSSL shows the lowest annual and summer bias and second lowest winter bias. BCSD consistently under-projects daily precipitation, but by modeling the possible error with MSSL, bias is reduced. PCAOLS and ELNET are less biased compared to MSSL. CNN has a strong tendency to underestimate precipitation. Figure 2 shows consistent negative bias through space for BCSD, ELNET, PCAOLS, MSSL, and CNN while PCASVR shows no discernible pattern.
Correlation measures in Table 1 present a high linear relationship between projections and observations for the models ELNET (0.64 annually) and MSSL (0.62 annually). We find that BCSD has a lower correlation even in the presence of error correction in BCSD-MSSL. PCASVR provides low correlations, averaging 0.33 annually, but PCAOLS performs substantially better at 0.55.
The skill score is used to measure a model's ability to reproduce the underlying distribution of observed precipitation where a higher value is better between 0 and 1. BCSD, MSSL, and PCASVR have the largest skill scores, 0.93, 0.92, and 0.91 annually. We find that modeling the errors of BCSD decreases the ability to replicate the underlying distribution. The more basic linear models, PCAOLS and ELNET, present lower skill scores. The much more complex CNN model has difficulty replicating the distribution.
RMSE, presented in Figure 3 and Table 1, measures the overall ability of prediction by squaring the absolute errors. The boxplot in Figure 3, where the boxes present the quartiles and the whiskers the remaining distributions with outliers as points, shows the distribution of RMSE annually over space. The regularized models ELNET and MSSL have similar error distributions and outperform the others. CNN, similar to its underperformance in bias, shows a poor ability to minimize error. The estimation of error produced by BCSD-MSSL aids in lowering the RMSE of plain BCSD. PCAOLS reasonably minimizes RMSE while PCASVR severely under-performs compared to all other models. The regression models minimize error during optimization while BCSD does not. Seasonally, winter is easier to project, with summer being a bit more challenging.
\subsection{Large Climate Trends}
\begin{figure}
\caption{The average root mean square error for each month, with each line representing a single downscaling model.}
\label{fig:monthly-rmse}
\end{figure}
\begin{table} \centering \small
\begin{tabular}{>{\em}l|rr|rr|rr} \toprule
{} & \multicolumn{2}{c|}{RMSE (mm)} & \multicolumn{2}{c|}{Skill Score} & \multicolumn{2}{c}{Correlation} \\ Time-frame & Month & Year & Month & Year & Month & Year \\ \midrule BCSD & 31.97 & 204.78 & 0.88 & 0.63 & 0.85 & 0.64 \\ PCAOLS & 50.01 & 362.73 & 0.75 & 0.27 & 0.63 & 0.41 \\ PCASVR & 92.17 & 414.40 & 0.83 & 0.69 & 0.29 & 0.15 \\ ELNET & 46.96 & 353.67 & 0.76 & 0.27 & 0.71 & 0.50 \\ MSSL & 62.63 & 597.80 & 0.56 & 0.05 & 0.67 & 0.40 \\ BCSD-MSSL & 31.24 & 155.04 & 0.88 & 0.87 & 0.82 & 0.60 \\ CNN & 112.21 & 1,204.27 & 0.01 & 0.00 & 0.59 & 0.54 \\ \bottomrule \end{tabular} \caption{Large Scale Projection Results: After aggregating daily downscaled estimates to monthly and yearly time scales, RMSE and Skill are computed per location and averaged.} \label{tab:res-largscale} \end{table}
Analysis of an SD model's ability to capture large scale climate trends can be done by aggregating daily precipitation to monthly and annual temporal scales. To increase the confidence in our measures, presented in Table 2 and Figures 4 and 5, we compare all observations and projections in a single computation, rather than separating by location and averaging.
Table 2 and Figure 4 show a wide range of RMSE. A clear difficulty in projecting precipitation in the fall, October in particular, is presented by each time-series in Figure 4. The difference in overall predictability relative to RMSE between the models is evident. BCSD and BCSD-MSSL have significantly lower monthly RMSEs compared to the others. Annually, BCSD-MSSL reduced RMSE by 25\% compared to plain BCSD. The linear models, ELNET, MSSL, and PCAOLS, have similar predictability while the non-linear models suffer, CNN being considerably worse.
The skill scores in Table 2 show more difficulty in estimating the annual distribution versus monthly distribution. On a monthly scale BCSD and BCSD-MSSL skill scores outperform all other models but BCSD suffers slightly on an annual basis. However, BCSD-MSSL does not lose any ability to estimate the annual distribution. PCAOLS annual skill score is remarkably higher than the monthly skill score. Furthermore, the three linear models outperform BCSD on an annual basis. PCASVR's skill score suffers on an annual scale and CNN has no ability to estimate the underlying distribution.
Correlation measures between the models and temporal scales show much of the same. BCSD has the highest correlations on both monthly ($\sim$0.85) and yearly ($\sim$0.64) scales while BCSD-MSSL is slightly lower. CNN correlations fall just behind BCSD and BCSD-MSSL. PCASVR fails with correlation values of 0.22 and 0.18. ELNET has slightly higher correlations in relation to MSSL and PCAOLS.
\subsection{Extreme Events}
\begin{figure}
\caption{Annual precipitation observed (x-axis) and projected (y-axis) for each model is presented along with the corresponding Pearson Correlation. Each point represents a single location and year.}
\label{fig:annual-corr}
\end{figure}
\begin{figure}
\caption{The daily intensity index (Annual Precipitation/Number of Precipitation Days) averaged per year.}
\label{fig:sdii}
\end{figure}
\begin{table}
\centering
\small
\begin{tabular}{>{\em}l|rrrr|rrrr}
\toprule
{} & \multicolumn{4}{c|}{Correlation} & \multicolumn{4}{c}{Skill Score} \\
{Metric} & CWD & R20 & RX5day & SDII & CWD & R20 & RX5day & SDII \\
\midrule
model & & & & & & & & \\
BCSD & \textbf{0.43} & \textbf{0.83} & \textbf{0.73} & \textbf{0.70} & 0.71 & 0.80 & \textbf{0.84} & 0.44 \\
PCAOLS & 0.25 & 0.65 & 0.44 & 0.67 & 0.69 & 0.60 & 0.65 & 0.44 \\
PCASVR & 0.24 & 0.81 & 0.19 & 0.25 & 0.78 & \textbf{0.89} & 0.80 & \textbf{0.65} \\
ELNET & 0.36 & 0.71 & 0.57 & 0.64 & 0.79 & 0.62 & 0.63 & 0.35 \\
MSSL & 0.33 & \textbf{0.84} & 0.56 & 0.52 & \textbf{0.90} & 0.63 & 0.57 & 0.16 \\
BCSD-MSSL & 0.25 & \textbf{0.83} & 0.70 & \textbf{0.69} & 0.41 & 0.75 & \textbf{0.84} & 0.08 \\
CNN & 0.07 & --- & 0.33 & -0.30 & 0.05 & 0.59 & 0.01 & 0.00 \\
\bottomrule
\end{tabular}
\caption{Statistics for ClimDEX Indices: For each model's downscaled estimate we compute four extreme indices, consecutive wet days (CWD), very heavy wet days (R20), maximum 5 day precipitation (RX5day), and daily intensity index (SDII), for each location. We then compare these indices to those extracted from observations to compute correlation and skill metrics.}
\label{tab:climdex} \end{table}
An SD model's ability to downscale extremes from reanalysis depends on both the response to observed anomalies and the ability to reproduce the underlying distribution. The resulting correlation measures present the response to observed anomalies, shown in Figure 6 and Table 3. We find that BCSD has higher correlations for three metrics, namely consecutive wet days, very heavy wet days, and the daily intensity index, along with a similar result for 5-day maximum precipitation. Furthermore, modeling BCSD's expected errors with BCSD-MSSL decreases the ability to estimate the chosen extreme indices. Non-linear methods, PCASVR and CNN, suffer greatly in comparison to more basic bias correction and linear approaches. The linear methods, PCAOLS, ELNET, and MSSL, provide similar correlative performance.
A skill score is used to quantify each method's ability to estimate an index's statistical distribution, presented in Table 3. Contrary to the correlative results, PCASVR outperforms the other methods on two metrics, very heavy wet days and the daily intensity index, with better than average scores on the other two metrics. BCSD also performs reasonably well in terms of skill scores while BCSD-MSSL suffers from the added complexity. MSSL estimates the number of consecutive wet days well but is less skilled on other metrics. The much more complex CNN model has little ability to recover such distributions.
Figure 6 displays a combination of correlative power and magnitude estimates of the daily intensity index. The SDII metric is computed from total annual precipitation and the number of wet days. A low SDII metric corresponds to either a relatively large number of estimated wet days or low annual precipitation. We find that, on average, methods underestimate this intensity. Based on Figure 5, we see that CNN severely underestimates annual precipitation, causing a low SDII. In contrast, PCASVR overestimates annual precipitation and intensity.
The inconsistent results of PCASVR and CNN indicate that capturing non-linear relationships is outweighed by overfitting. However, BCSD and the linear methods are more consistent throughout each metric.
\section{Discussion and Conclusion}
The ability of statistical downscaling methods to produce credible results is necessary for a multitude of applications. Despite numerous studies experimenting with a wide range of models for statistical downscaling, none have clearly outperformed others. In our study, we experiment with the off-the-shelf applicability of machine learning advances to statistical downscaling in comparison to traditional approaches.
Multi-task Sparse Structure Learning, an approach that exploits similarity between tasks, was expected to increase accuracy beyond automated statistical downscaling approaches. We find that MSSL does not provide improvements beyond ELNET, an ASD approach. Furthermore, the parameters estimated through cross-validation attributed no structure that aided prediction.
The recent popularity of deep learning, along with its ability to capture spatial information, namely through Convolutional Neural Networks, motivated us to experiment with basic architectures for statistical downscaling. CNNs benefit greatly by implicitly learning abstract non-linear spatial features based on the target variable. This approach produced poor downscaled estimates relative to simpler methods. We hypothesize that implicitly learning abstract features rather than preserving the granular feature space caused the poor performance. More experimentation with CNNs in a different architecture may still provide valuable results.
BCSD, a popular approach to statistical downscaling, outperformed the more complex models in estimating underlying statistical distributions and climate extremes. In many cases, correcting BCSD's error with MSSL increased daily correlative performance but decreased skill of estimating the distribution. From this result, we can conclude that a signal aiding in prediction was lost during quantile mapping, interpolation, or spatial scaling. Future work may study and improve each step independently to increase overall performance.
Of the seven statistical downscaling approaches studied, the traditional BCSD and ASD methods outperformed the non-linear methods, namely Convolutional Neural Networks and Support Vector Regression, while downscaling daily precipitation. We find that BCSD is skilled at estimating the statistical distribution of daily precipitation, generating better estimates of extreme events. The expectation that CNN and MSSL, two recent machine learning advances which we found most applicable to statistical downscaling, would outperform basic models proved false. Improvements and customization of machine learning methods are needed to provide more credible projections.
\section*{Acknowledgments} This work was funded by NSF CISE Expeditions in Computing award 1029711, NSF CyberSEES award 1442728, and NSF BIGDATA award 1447587.
MERRA-2 climate reanalysis datasets used were provided by the Global Modeling and Assimilation Office at NASA's Goddard Space Flight Center. The CPC Unified Gauge-Based Analysis was provided by NOAA Climate Prediction Center.
\end{document}
\begin{document}
\title{ Restrained double Roman domination of a graph} \author{{\small Doost Ali Mojdeh$^{a}$\thanks{Corresponding author} , Iman Masoumi$^{b}$ Lutz Volkmann$^{c}$}\\ \\ {\small $^{a}$Department of Mathematics, Faculty of Mathematical Sciences}\\{\small University of Mazandaran, Babolsar, Iran}\\{\small email: [email protected]}\\ \\ {\small $^{b}$Department of Mathematics, University of Tafresh}\\{\small Tafresh, Iran}\\ {\small email: i\[email protected]}\\ \\ {\small $^{c}$Lehrstuhl II f\"{u}r Mathematik, RWTH Aachen University}\\ {\small 52056 Aachen, Germany}\\ {\small email: [email protected]} } \date{} \maketitle
\begin{abstract} For a graph $G=(V,E)$, a restrained double Roman dominating function is a function $f:V\rightarrow\{0,1,2,3\}$ having the property that if $f(v)=0$, then the vertex $v$ must have at least two neighbors assigned $2$ under $f$ or one neighbor $w$ with $f(w)=3$, and if $f(v)=1$, then the vertex $v$ must have at least one neighbor $w$ with $f(w)\geq2$, and at the same time, the subgraph $G[V_0]$ induced by the vertices with label zero has no isolated vertex. The weight of a restrained double Roman dominating function $f$ is the sum $f(V)=\sum_{v\in V}f(v)$, and the minimum weight of a restrained double Roman dominating function on $G$ is the restrained double Roman domination number of $G$. We initiate the study of restrained double Roman domination by proving that the problem of computing this parameter is $NP$-hard. Then we present an upper bound on the restrained double Roman domination number of a connected graph $G$ in terms of the order of $G$ and characterize the graphs attaining this bound. We study the restrained double Roman domination versus the restrained Roman domination. Finally, we characterize all trees $T$ attaining the exhibited bound. \end{abstract}
\textbf{2010 Mathematical Subject Classification:} 05C69
\textbf{Keywords}: Domination, restrained Roman domination, restrained double Roman domination.
\section{Introduction} Throughout this paper, we consider $G$ as a finite simple graph with vertex set $V=V(G)$ and edge set $E=E(G)$. We use \cite{west} as a reference for terminology and notation which are not explicitly defined here. The open neighborhood of a vertex $v$ is denoted by $N(v)$, and its closed neighborhood is $N[v]=N(v)\cup \{v\}$. The minimum and maximum degrees of $G$ are denoted by $\delta(G)$ and $\Delta(G)$, respectively. Given subsets $A,B \subseteq V(G)$, by $[A,B]$ we mean the set of all edges with one end point in $A$ and the other in $B$. For a given subset $S\subseteq V(G)$, by $G[S]$ we represent the subgraph induced by $S$ in $G$. A tree $T$ is a double star if it contains exactly two vertices that are not leaves. A double star with $p$ and $q$ leaves attached to each support vertex, respectively, is denoted by $S_{p,q}$. A wounded spider is a tree obtained from subdividing at most $n-1$ edges of a star $K_{1,n}$. A wounded spider obtained by subdividing $t \le n-1$ edges of $K_{1,n}$, is denoted by $ws(1,n, t)$.\\
A set $S\subseteq V(G)$ is called a dominating set if every vertex not in $S$ has a neighbor in $S$. The domination number $\gamma(G)$ of $G$ is the minimum cardinality among all dominating sets of $G$. A restrained dominating set ($RD$ set) in a graph $G$ is a dominating set $S$ in $G$ for which every vertex in $V(G)-S$ is adjacent to another vertex in $V(G)-S$. The restrained domination number ($RD$ number) of $G$, denoted by $\gamma_r(G)$, is the smallest cardinality of an $RD$ set of $G$. This concept was formally introduced in \cite{domke} (albeit, it was indirectly introduced in \cite{hattingh, haynes}).
Several variants of restrained domination have already been studied. For instance, a total restrained dominating set of a graph $G$ is an $RD$ set of $G$ for which the subgraph induced by the dominating set has no isolated vertex; see \cite{chen}. A secure restrained dominating set $(SRDS)$ is a set $S \subseteq V(G)$ such that $S$ is a restrained dominating set and, for every $u \in V\setminus S$, there exists $v \in S\cap N(u)$ such that $(S\setminus \{v\})\cup \{u\}$ is a restrained dominating set \cite{roushini}.
The restrained Roman dominating function is a Roman dominating function $f: V(G) \to \{0,1,2\}$ such that the subgraph induced by the set $\{v\in V(G): f(v)=0\}$ has no isolated vertex, \cite{roushini1}. The restrained Italian dominating function ($RIDF$) is an Italian dominating function $f: V(G) \to \{0,1,2\}$ such that the subgraph induced by the set $\{v\in V(G): f(v)=0\}$ has no isolated vertex, \cite{samadi}.
These results motivate us to consider a double Roman dominating function $f$ for which the subgraph induced by $V_0^f$ has no isolated vertex. This is the new parameter, called restrained double Roman domination, that is investigated in this paper.
Beeler \emph{et al}. (2016) \cite{bhh} introduced the concept of double Roman domination of a graph.\\
If $ f:V(G)\rightarrow \{0,1,2,3\}$ is a function, then let $(V_0,V_1,V_2,V_3)$ be the ordered partition of $V(G)$ induced by $f$, where
$V_i=\{v\in V(G):f(v)=i\}$ for $i=0,1,2,3$. There is a 1-1 correspondence between the function $f$ and the ordered partition $(V_0,V_1,V_2,V_3)$. So we will write $f=(V_0,V_1,V_2,V_3)$. A double Roman dominating function (DRD function for short) of a graph $G$ is a function $ f:V(G)\rightarrow \{0,1,2,3\}$ for which the following conditions are satisfied. \begin{itemize}
\item[(a)] If $f(v)=0$, then the vertex $v$ must have at least two neighbors in $V_2$ or one neighbor in $V_3$.
\item[(b)] If $f(v)=1$ , then the vertex $v$ must have at least one neighbor in $V_2\cup V_3$. \end{itemize} This parameter was also studied in \cite{al}, \cite{jr}, \cite{mojdeh} and \cite{zljs}.
Accordingly, a restrained double Roman dominating function ({$RDRD$} function for short) is a double Roman dominating function
$f:V\rightarrow\{0,1,2,3\}$ having the property that:
the subgraph induced by $V_0$ (the vertices with zero labels under $f$) $G[V_0]$ has no isolated vertex. The restrained double Roman domination
number ($RDRD$ number) $\gamma_{rdR}(G)$ is the minimum weight of an $RDRD$ function $f$ of $G$. For the sake of convenience, an $RDRD$
function $f$ of a graph $G$ with weight $\gamma_{rdR}(G)$ is called a $\gamma_{rdR}(G)$-function.
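To make the definition concrete, the following short Python routine is an illustrative brute-force sketch only (it is not part of our results, and the adjacency-dictionary input format is an assumption): it enumerates all labelings of a small graph, checks the two double Roman conditions together with the restrained condition on $G[V_0]$, and returns the minimum weight. For the path $P_4$ it returns $6$, in agreement with Theorem \ref{the-path} below.
\begin{verbatim}
from itertools import product

def rdrd_number(adj):
    # adj: dict mapping each vertex to the set of its neighbours.
    vertices = list(adj)
    best = None
    for labels in product((0, 1, 2, 3), repeat=len(vertices)):
        f = dict(zip(vertices, labels))
        valid = True
        for v in vertices:
            nbrs = [f[u] for u in adj[v]]
            if f[v] == 0:
                # double Roman condition: one neighbour 3 or two neighbours 2
                if not (3 in nbrs or nbrs.count(2) >= 2):
                    valid = False
                    break
                # restrained condition: a neighbour that also has label 0
                if 0 not in nbrs:
                    valid = False
                    break
            elif f[v] == 1 and not any(x >= 2 for x in nbrs):
                valid = False
                break
        if valid and (best is None or sum(labels) < best):
            best = sum(labels)
    return best

# The path P_4: rdrd_number returns 6 (= n + 2).
print(rdrd_number({1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}))
\end{verbatim}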
This paper is organized as follows. We prove that the restrained double Roman domination problem is $NP$-hard even for general graphs. Then,
we present an upper bound on the restrained double Roman domination number of a connected graph $G$ in terms of the order of $G$ and characterize
the graphs attaining this bound. We study the restrained double Roman domination versus the restrained Roman domination. Finally, we characterize trees $T$ by the given restrained double Roman domination number of $T$.
\section{Complexity and computational issues} We consider the problem of deciding whether a graph $G$ has an $RDRD$ function of weight at most a given integer; this problem is stated formally below, after we recall the vertex cover decision problem used in our reduction.\\
We shall prove the $NP$-completeness by reducing the following vertex cover decision problem, which is known to be $NP$-complete.
\framebox{ \parbox{1\linewidth}{
VERTEX COVER DECISION PROBLEM INSTANCE: A graph $G = (V,E)$ and a positive integer $p \le |V (G)|$. QUESTION: Does there exist a subset $C \subseteq V (G)$ of size at most $p$ such that for each edge $xy \in E(G)$ we have $x \in C$ or $y \in C$?}}
\begin{theorem}
\emph{(Karp \cite{karp} )}\label{the-karp} Vertex cover decision problem is $NP$-complete for general graphs. \end{theorem}
\framebox{ \parbox{1\linewidth}{
RESTRAINED DOUBLE ROMAN DOMINATION problem ($RDRD$ problem)\\
INSTANCE: A graph $G$ and an integer $p\leq |V(G)|$.\\ QUESTION: Is there an $RDRD$ function $f$ for $G$ of weight at most $p$?}}\\
\begin{theorem}\label{the-NP} The restrained double Roman domination problem is $NP$-complete for general graphs. \end{theorem}
\begin{proof}
We transform the vertex cover decision problem for general graphs to the restrained double Roman domination decision problem for general graphs. For a given graph $G = (V(G), E(G))$, let $ m= 3|V (G)| + 4$ and construct a graph $H = (V(H),E(H))$ as follows. Let $V(H) = \{x_i : 1 \le i \le m\} \cup \{y\} \cup V (G) \cup \{u_{j_i} : 1 \le i \le m\ \mbox{for\ each}\ e_j \in E(G)\}$, and let $$E(H)=\{x_ix_{i+1}: (\mbox{mod}\ m)\ 1\le i\le m\}$$ $$\ \ \ \cup \{x_iy: 1\le i\le m\} \cup \{vy: v\in V(G)\}$$ $$\ \ \ \cup \{vu_{j_i}: v\ \mbox{is\ the\ vertex\ of\ edge}\ e_j\in E(G)\ \mbox{and}\ 1\le i \le m \}$$ $$\ \ \ \cup \{u_{j_i}u_{j_{(i+1)}}\ (\mbox{mod}\ m) : 1 \le i \le m\}.$$
Figure 1 shows the graph $H$ obtained from $G = P_4=a_1a_2a_3a_4$ by the above procedure. Note that, since $m= 3|V (G)| + 4=16$ for this example and $G$
\begin{figure}
\caption{The graph $G=P_4$ and $H$.}
\label{fig:g1-g2-g3}
\end{figure}
has three edges $e_1, e_2, e_3$, $$H[\{x_i: 1\le i\le 16\}] \cong H[\{u_{1_i}: 1\le i\le 16\}] \cong H[\{u_{2_i}: 1\le i\le 16\}]\cong H[\{u_{3_i}: 1\le i\le 16\}]\cong C_{16}$$ $y$ is adjacent to $x_i$ for $1\le i \le 16$ and $a_l$ for $1\le l\le 4$; $u_{j_i}$ is adjacent to both $a_j$ and $a_{j+1}$ for $1\le j\le 3$ and $1\le i\le 16$.
We claim that $G$ has a vertex cover of size at most $k$ if and only if $H$ has an RDRDF with weight at most $3k+3$. Hence the $NP$-hardness of the restrained double Roman domination problem for general graphs follows from the $NP$-completeness of the vertex cover problem. First, if $G$ has a vertex cover $C$ of size at most $k$, then the function $f$ defined on $V(H)$ by $f(v) = 3$ for $v \in C \cup \{y\}$ and $f(v) = 0$ otherwise, is an RDRDF with weight at most $3k + 3$. On the other hand, suppose that $g$ is an RDRDF on $H$ with weight at most $3k + 3$. If $g(y)\ne 3$, then there are two cases.
Case 1. Let $g(y)\in \{0,1\}$. Then
$$\sum_{i=1}^{m}g(x_i)\ge \gamma_{rdR}(C_m) \ge \gamma_{dR}(C_m)\ge m >3|V(G)|+3 \ge 3k+3$$ that is a contradiction.
Case 2. Let $g(y)=2$ and $C_m=\{x_ix_{i+1}: (\mod\ m)\ 1\le i\le m\}$. Then $g(C_m)\ge 2m/3$ and $g(H)\ge 2m/3 +2k+2 = 2(3|V (G)| + 4)/3 +2k+2\ge 4k+14/3> 3k+3$ which is a contradiction. Thus $g(y) = 3$. Similarly, we have $g(u) = 3$ or $g(v) = 3$ for any $e = uv \in E(G)$. Therefore $C = \{v \in V : g(v) = 3\}$ is a vertex cover of $G$ and
$3|C| + 3 \le w(g) \le 3k + 3$. Consequently, $|C| \le k$. \end{proof}
\section{$RDRD$ number of some graphs} In this section we investigate the exact value of the restrained double Roman domination number of some graphs. \begin{observation}\label{the-com-par} For complete graph $K_n$ and complete bipartite graph $K_{m,n}$,\\
\emph{(i)} $\gamma_{rdR}(K_n)=3$ for $n\ge 2$.\\
\emph{(ii)} $\gamma_{rdR}(K_{n,m})=6$ for $m,n\ge 2$.\\
\emph{(iii)} $\gamma_{rdR}(K_{1,m})=m+2.$
\emph{(iv)} $\gamma_{rdR}(K_{n_1,n_2,\cdots, n_m})=\left\{
\begin{array}{ll}
3, & \mbox{if}\ \min\{n_1 ,n_2,\cdots, n_m\}=1,\\
6, & \hbox{otherwise.}
\end{array}
\right.$ \end{observation}
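As a quick illustration of (ii), consider $K_{2,2}$ with parts $\{a_1,a_2\}$ and $\{b_1,b_2\}$ and assign $3$ to $a_1$ and $b_1$ and $0$ to $a_2$ and $b_2$: each vertex of weight $0$ has a neighbor of weight $3$, and the two vertices of weight $0$ are adjacent. Hence this is an RDRD function of weight $6$, showing $\gamma_{rdR}(K_{2,2})\le 6$.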
\begin{theorem}\label{the-path} For a path $P_n$ $(n\geq 4)$, $\gamma_{rdR}(P_n)=n+2$.\\ \end{theorem} \begin{proof} Assume that $n\ge 4$ and $P_n=v_1v_2\cdots v_n$. Define $h:V(P_n) \to \{0,1,2,3\}$ by $h(v_{3i+2})=3$ for $0\le i \le n/3-1,\ h(v_{1})=h(v_n)=1$ and $h(v)=0$ otherwise, whenever $n \equiv 0 \,({\rm mod}\, 3)$.\\ Define $h:V(P_n) \to \{0,1,2,3\}$ by $h(v_{3i+1})=3$ for $0\le i \le (n-1)/3$ and $h(v)=0$ otherwise, whenever $n \equiv 1\, ({\rm mod}\, 3)$. \\ Define $h:V(P_n) \to \{0,1,2,3\}$ by $h(v_{3i+2})=3$ for $0\le i \le (n-2)/3,\ h(v_{1})=1$ and $h(v)=0$ otherwise, whenever $n \equiv 2\, ({\rm mod}\, 3)$. Therefore $\gamma_{rdR}(P_n)\le n+2$ for $n\ge 4$.
Now we prove the reverse inequality. It is straightforward to verify that $\gamma_{rdR}(P_n)=n+2$ for $4\le n\le 6$. For $n\ge 7$ we proceed by induction on $n$. Let $n\ge 7$ and let the reverse inequality be true for every path of order less than $n$. Assume that $f = (V_0, V_1, V_2, V_3)$ is a $\gamma_{rdR}$-function of $P_n$. Note that $f(v_n)\ne 0$, since otherwise the only neighbor $v_{n-1}$ of $v_n$ would have to have weight $3$ and, by the restrained condition, weight $0$ at the same time, which is impossible.
If $f(v_n)=1$, then $f(v_{n-1}) \ge 2$. Define $g: P_{n-1} \to \{0,1,2,3\}$, $g(v_i)=f(v_i)$ for $1\le i\le n-1$. Clearly, $g$ is an RDRD-function
of $P_{n-1}$. It follows from the induction hypothesis that $$\gamma_{rdR}(P_n)=w(f)=w(g)+1\ge \gamma_{rdR}(P_{n-1})+1\ge (n-1)+2 +1\ge n+2.$$ If $f(v_{n}) =2$, then $f(v_{n-1})= 1$ and $f(v_{n-2})\ge 1$. Define $g: P_{n-2} \to \{0,1,2,3\}$, $g(v_i)=f(v_i)$ for $1\le i\le n-2$. Clearly, $g$ is a $RDRD$-function of $P_{n-2}$. As above we obtain, $$\gamma_{rdR}(P_n)=w(f)=w(g)+3\ge \gamma_{rdR}(P_{n-2})+3\ge (n-2)+2 +3=n+3.$$
If $f(v_{n}) =3$, then $f(v_{n-1})= 0$, $f(v_{n-2})= 0$ and $f(v_{n-3})= 3$. Define $g: P_{n-3} \to \{0,1,2,3\}$, $g(v_i)=f(v_i)$ for $1\le i\le n-3$.
Clearly, $g$ is a $RDRD$-function of $P_{n-3}$. It also follows from the induction hypothesis that $$\gamma_{rdR}(P_n)=w(f)=w(g)+3\ge \gamma_{rdR}(P_{n-3})+3\ge (n-3)+2 +3=n+2.$$
Thus the proof is complete.\\ \end{proof}
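For instance, for $P_7$ the construction above (case $n \equiv 1\, ({\rm mod}\, 3)$) assigns $h(v_1)=h(v_4)=h(v_7)=3$ and $h(v)=0$ otherwise; every vertex of weight $0$ has a neighbor of weight $3$ and a neighbor of weight $0$, and the weight is $9=n+2$.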
\begin{theorem}\label{the-cycle} For a cycle $C_n$, $(n\ge 3)$,
$\gamma_{rdR}(C_n)=\left\{
\begin{array}{ll}
n, & \mbox{if}\ n \equiv 0\ (\mbox{mod}\ 3), \\
n+2, & \hbox{otherwise.}
\end{array}
\right.$\\ \end{theorem}
\begin{proof}
Assume that $n\ge 3$ and $C_n=v_1v_2\cdots v_nv_1$. Define $h:V(C_n) \to \{0,1,2,3\}$ by $h(v_{3i})=3$ for $1\le i \le n/3$ and
$h(v)=0$ otherwise, whenever $n \equiv 0\, ({\rm mod}\, 3)$.\\ Define $h:V(C_n) \to \{0,1,2,3\}$ by $h(v_{3i+1})=3$ for $0\le i \le (n-1)/3$ and $h(v)=0$ otherwise, whenever $n \equiv 1\, ({\rm mod}\, 3)$. \\ Define $h:V(C_n) \to \{0,1,2,3\}$ by $h(v_{3i+2})=3$ for $0\le i \le (n-2)/3,\ h(v_{1})=1$ and $h(v)=0$ otherwise, whenever $n \equiv 2\, ({\rm mod}\, 3)$. Therefore $$\gamma_{rdR}(C_n)\le \left\{
\begin{array}{ll}
n, & \mbox{if}\ n \equiv 0\ (\mbox{mod}\ 3), \\
n+2, & \hbox{otherwise.}
\end{array}
\right.$$
Now we prove the inverse inequality. For $n \equiv 0 \,({\rm mod}\, 3)$, since $\gamma_{rdR}(C_n)\ge \gamma_{dR}(C_n)=n$, (see \cite{al, bhh}),
clearly the result holds.
Let $n \not \equiv 0\ (\mbox{mod}\ 3)$ and let $f = (V_0, V_1, V_2, V_3)$ be a $\gamma_{rdR}$-function of $C_n$. Since every vertex of weight $0$ has a neighbor of weight $3$ and a neighbor of weight $0$, and since $n \not \equiv 0\ (\mbox{mod}\ 3)$, there are two adjacent vertices $v_i, v_{i+1}$ in $C_n$ whose weights are both positive. Now, if $f(v_i)\ge 2$ and $f(v_{i+1})\ge 2$, then by removing the edge $v_iv_{i+1}$, the resulting graph is $P_n$. Define $g: P_{n} \to \{0,1,2,3\}$, $g(v_j)=f(v_j)$ for $1\le j\le n$. Clearly, $g$ is an RDRD-function of $P_{n}$ with $w(g)=w(f)$. Since $w(g)\ge n+2$, we have $w(f)\ge n+2$.\\ Let $f(v_i)\ge 2$ and $f(v_{i+1})= 1$. Then $f(v_{i+2})\ge 1$. Now remove the edge $v_{i+1}v_{i+2}$ and obtain a $P_n$. Define $g: P_{n} \to \{0,1,2,3\}$, $g(v_j)=f(v_j)$ for $1\le j\le n$. Clearly, $g$ is an RDRD-function of $P_{n}$ with $w(g)=w(f)$. Thus $w(f)\ge n+2$. \\ Let $f(v_i)=f(v_{i+1})= 1$. As above, we remove the edge $v_iv_{i+1}$, and the resulting graph $P_n$ has an RDRD-function $g$ of weight $w(f)$. That is, $w(f)\ge n+2$. Therefore the proof is complete.
\section{Upper bounds on the $RDRD$ number} In this section we obtain sharp upper bounds on the restrained double Roman domination number of a graph. \begin{proposition}\label{2n-1} Let $G$ be a connected graph of order $n\ge 2$. Then $\gamma_{rdR}(G) \le 2n-1$, with equality if and only if $n=2$. \end{proposition} \begin{proof} If $w$ is a vertex of $G$, then define the function $f$ by $f(w)=1$ and $f(x)=2$ for $x\in V(G)\setminus\{w\}$. Since $G$ is connected of order $n\ge 2$, we observe that $f$ is an RDRD function of $G$ of weight $2n-1$ and thus $\gamma_{rdR}(G) \le 2n-1$. If $n\ge 3$, then $G$ contains a vertex $w$ with at least two neighbors $u$ and $v$. Now define the function $g$ by $g(u)=g(v)=1$ and $g(x)=2$ for $x\in V(G)\setminus\{u,v\}$. Then $g$ is an RDRD function of $G$ of weight $2n-2$ and so $\gamma_{rdR}(G) \le 2n-2$ in this case. Since $\gamma_{rdR}(K_2)=3=2\cdot 2-1$, the proof is complete. \end{proof}
\begin{proposition}\label{diam} Let $G$ be a connected graph of order $n\ge 2$. Then $\gamma_{rdR}(G) \le 2n+1 - diam(G)$ and this bound is sharp for the path $P_n$ ($n\ge 4$). \end{proposition}
\begin{proof} By Theorem \ref{the-path}, $\gamma_{rdR}(P_n) \le n+2$. Let $P=v_1v_2\cdots v_{diam(G)+1}$ be a diametrical path in $G$. Let $g$ be a $\gamma_{rdR}$-function of $P$. Then $w(g)\le diam(G)+3$. Now we define an RDRD-function $f$ as:\\ $$f(x)=\left\{
\begin{array}{ll}
2, & x \notin V(P),\\
g(x), & \hbox{otherwise.}
\end{array}
\right.$$
It is clear that $f$ is an RDRD-function of $G$ of weight $w(f) \le 2(n-(diam(G)+1)) + diam (G)+3$. Therefore $\gamma_{rdR}(G) \le 2n+1 - diam(G)$.\\
Theorem \ref{the-path} shows the sharpness of this bound. \end{proof}
\begin{proposition} Let $G$ be a connected graph of order $n$ and circumference $c(G)<\infty$. Then $\gamma_{rdR}(G) \le 2n +2 - c(G)$, and this bound is sharp for each cycle $C_n$ with $3 \nmid n$. \end{proposition}
\begin{proof} Let $C$ be a longest cycle of $G$, that means $|V(C)|=c(G)$. By Theorem \ref{the-cycle}, $\gamma_{rdR}(C) \le c(G)+2$. Let $h$ be a $\gamma_{rdR}$-function on $C$. Then $w(h)\le c(G)+2$. Now we define an RDRD-function $f$ as:\\ $$f(x)=\left\{
\begin{array}{ll}
2, & x\notin V(C), \\
h(x), & \hbox{otherwise.}
\end{array}
\right.$$\\
It is clear that $f$ is an RDRD-function of $G$ of weight $w(f) \le 2(n-c(G)) + c(G)+2$. Therefore $\gamma_{rdR}(G) \le 2n+2 - c(G)$.\\
For sharpness, if $G=C_n$ and $3\nmid n$, then $\gamma_{rdR}(C_n)=n+2= 2n+2 - n=2n+2-c(G)$. \end{proof}
\begin{observation}\label{1}
Let $G$ be a graph and $f=(V_0,V_1,V_2)$ a $\gamma_{rR}$-function of $G$. Then $\gamma_{rdR}(G)\leq 2|V_1|+3|V_2|$. \end{observation}
\begin{proof} Let $G$ be a graph and $f=(V_0,V_1,V_2)$ a $\gamma_{rR}$-function of $G$. We define a function $g=(V_0',V_1',V_2',V_3')$ as follows:
$V_0'=V_0$, $V_1'=\emptyset$, $V_2'=V_1$, $V_3'=V_2$. Note that under $g$, every vertex with label $0$ has a neighbor assigned $3$ (namely, a neighbor assigned $2$ under $f$), every vertex with label $1$ under $f$ becomes a vertex with label $2$, and $G[V_0']$ has no isolated vertex. Hence, $g$ is a restrained double Roman dominating function. Thus, $\gamma_{rdR}(G)\leq 2|V_2'|+3|V_3'|=2|V_1|+3|V_2|$.
Clearly, the bound of Observation \ref{1} is sharp, as can be seen with the path $G=P_4$, where $\gamma_{rR}(G)=4$ and $\gamma_{rdR}(G)=6$. We also note that strict inequality in the bound can be achieved by the subdivided star $G=S(K_{1,k})$, which is formed by subdividing each edge of the star $K_{1,k}$, for $k\geq 3$, exactly once. Then it is simple to check that $\gamma_{rR}(G)=2k+1$ and $\gamma_{rdR}(G)=3k$. Hence, $|V_1|=1$
and $|V_2|=k$, and so, $3k=\gamma_{rdR}(G)<2|V_1|+3|V_2|=2+3k.$
\begin{lemma}\label{lem1} If a graph $G$ has a non-pendant edge, then there is a $\gamma_{rdR}(G)$-function $f = (V_0, V_1, V_2, V_3)$ such that $V_0\cup V_1 \ne \emptyset$. \end{lemma}
\begin{proof}
If $\gamma_{rdR}(G)<2n$, then obviously $V_0\cup V_1 \ne \emptyset$. Now we show that $\gamma_{rdR}(G)<2n$. Let $uw$ be a non-pendant edge, so that $\deg(u)\ge 2$ and $\deg(w)\ge 2$.\\ Assume that $N_{G}(u)\cap N_{G}(w)\ne \emptyset$, and let $v$ be a vertex in $N_{G}(u)\cap N_{G}(w)$. Then the function $f=(V_0 =\{u,w\}, V_1 =\emptyset, V_2= V(G) \setminus \{u,w,v\}, V_3=\{v\})$ is an RDRD-function of $G$ with $w(f)\le 2n-3$.\\
Assume that $N_{G}(u)\cap N_{G}(w)= \emptyset$, and let $a\in N_G(u)\setminus \{w\}$ and $b\in N_G(w)\setminus \{u\}$. Then the function $f=(V_0= \{u,w\}, V_1=\emptyset, V_2= V(G) \setminus \{u,w,a,b\}, V_3=\{a,b\})$ is an RDRD-function of $G$ with $w(f)\le 2n-2$. This completes the proof.
\end{proof}
\begin{figure}
\caption{The graph $H_{10},\ F_{9}$ .}
\label{fig:g1-g2-g3}
\end{figure}
For any integer $n \ge 3$, let $H_n$ be the graph obtained from $(n-2)/2$ copies of $K_2$ and a copy of $K_1$ by adding a new vertex and joining it to both vertices of each $K_2$ and to the vertex of the given $K_1$, and let $F_n$ be the graph obtained from $(n-1)/2$ copies of $K_2$ by adding a new vertex and joining it to both vertices of each $K_2$. Thus
for $n \ge 4$, $H_n$ has a vertex of degree $n-1$, a vertex of degree $1$ and all other vertices of degree two, and for $n \ge 3$, $F_n$ has a vertex of degree $n-1$ and all other vertices of degree two. Figure 2 shows the graphs $H_{10}$ and $F_9$. Let $\mathcal{H} = \{H_n :n \ge 4\ \mbox{is\ even}\}$ and $\mathcal{F} = \{F_n :n \ge 3\ \mbox{is\ odd}\}$.
\begin{theorem}\label{the} For every connected graph $G$ of order $n \ge 3$ with $m$ edges, $\gamma_{rdR}(G) \ge 2n + 1- \lceil(4m-1)/3\rceil$, with equality if and only if $G \in \mathcal{H}\cup \mathcal{F}$ or $G\in\{K_{1,2},K_{1,3},K_{1,4}\}$. \end{theorem}
\begin{proof}
If $G=K_{1,n-1}$ is a star, then $\gamma_{rdR}(G)=n+1$ and $m=n-1$. Now it is easy to see that
$\gamma_{rdR}(K_{1,n-1})= 2n + 1- \lceil(4m-1)/3\rceil$ for $3\le n\le 5$ and $\gamma_{rdR}(K_{1,n-1})>2n + 1- \lceil(4m-1)/3\rceil$ for $n\ge 6$. Next assume that $G$ is not a star. By Lemma \ref{lem1} there is a $\gamma_{rdR}(G)$-function of $f= (V_0, V_1, V_2, V_3)$ such that $V_0\cup V_1\ne \emptyset$. It is well known that,
the induced subgraph $G[V_0]$ has no isolated vertex. Therefore, $|E(G[V_0])| \ge |V_0|/2$. Let $V'_0=\{v\in V_0: N(v) \subseteq V_2\}$ and
$V''_0=\{v\in V_0: v \,\,{has\,\,a\,\,neighbor\,\,in}\,\,V_3\}$. Then $|E(V_0,V_2)| \ge 2|V'_0|$, $|E(V_0,V_3)| \ge |V''_0|$ and
$|E(V_1,V_2\cup V_3)| \ge |V_1|$.
Therefore
$$|E(G)|=m\ge |V_0|/2+ 2|V'_0|+ |V''_0|+ |V_1|.$$
Since $|V_0|=|V'_0|+|V''_0|$, we deduce that
\begin{equation}\label{EQ11}
(4m-1)/3\ge 2|V_0|+ 4/3|V'_0|+ 4/3|V_1|-1/3
\end{equation}
and thus
\begin{equation}\label{EQ12}
2n+1 - \lceil(4m-1)/3\rceil \le 2n+1 - (4m-1)/3 \le 2n+1 -2|V_0|-4/3|V'_0|- 4/3|V_1|+1/3.
\end{equation}
Since $\gamma_{rdR}(G) = |V_1|+2|V_2|+3|V_3|$, $|V_0|+|V_1|+|V_2|+|V_3|=n$ and $2n+1=2|V_0|+2|V_1|+2|V_2|+2|V_3|+1$, we obtain \begin{eqnarray*}
2n+1 -2|V_0|-4/3|V'_0|- 4/3|V_1|+1/3 & = & -4/3|V'_0|+ 2/3|V_1|+2|V_2|+2|V_3|+4/3\\
& = & \gamma_{rdR}(G)-4/3|V'_0|-1/3|V_1|-|V_3|+4/3. \end{eqnarray*}
Next we will show that \begin{equation}\label{EQ13}
\gamma_{rdR}(G)-4/3|V'_0|-1/3|V_1|-|V_3|+4/3\le \gamma_{rdR}(G) \end{equation}
or $\gamma_{rdR}(G) \ge 2n + 1- \lceil(4m-1)/3\rceil$. If $|V'_0|\ge 1$, then $-4/3|V'_0|-1/3|V_1|-|V_3|+4/3\le 0$ and so $\gamma_{rdR}(G)-4/3|V'_0|-1/3|V_1|-|V_3|+4/3\le \gamma_{rdR}(G)$.\\
Let now $|V'_0|=0$. Note that the condition $V_0\cup V_1\ne \emptyset$ implies $V''_0 \cup V_1\ne \emptyset$.\\
Assume next that $V_1= \emptyset$. We deduce that $|V''_0|\ge 1$ and therefore $|V_3|\ge 1$. If there are at least two vertices of weight $3$, then $\gamma_{rdR}(G)-4/3|V'_0|-1/3|V_1|-|V_3|+4/3<\gamma_{rdR}(G)$.\\ If there is only one vertex of weight $3$, then $m\ge n-1+\frac{n-1}{2}=\frac{3(n-1)}{2}$. We deduce that $\gamma_{rdR}(G)\ge 3 \ge 2n+1 -\left\lceil \frac{6(n-1)-1}{3}\right\rceil
\ge 2n+1 -\left\lceil \frac{4m-1}{3}\right\rceil$, with equality if and only if $|V_2|=0$, $n$ is odd and $m=\frac{3(n-1)}{2}$, that means $G\in{\cal F}$.\\
Now assume that $|V_1|\ge 1$. If $|V''_0|\ge 1$, then $|V_3|\ge 1$ and thus $\gamma_{rdR}(G)-4/3|V'_0|-1/3|V_1|-|V_3|+4/3\le \gamma_{rdR}(G)$. Next let $|V''_0|=0$. If $|V_3|\ge 1$, then $\gamma_{rdR}(G)-4/3|V'_0|-1/3|V_1|-|V_3|+4/3\le \gamma_{rdR}(G)$. Now assume that $|V_3|=0$. This implies that all vertices have weight $1$ or $2$. If $3\le n\le 5$, then it is easy to see that
$\gamma_{rdR}(G)> 2n+1-\left\lceil\frac{4m-1}{3}\right\rceil$. Let now $n\ge 6$. If $|V_1|\ge 5$, then $\gamma_{rdR}(G)-4/3|V'_0|-1/3|V_1|-|V_3|+4/3<\gamma_{rdR}(G)$. Otherwise $|V_1|\le 4$, $|V_2|\ge n-4$ and $m\ge n-1$. This implies $$\gamma_{rdR}(G)\ge 2(n-4)+4=2n-4>2n+1-\left\lceil\frac{4(n-1)-1}{3}\right\rceil\ge 2n+1-\left\lceil\frac{4m-1}{3}\right\rceil.$$
Thus $\gamma_{rdR}(G) \ge 2n+1 -(4m-1)/3\ge 2n+1 -\lceil(4m-1)/3\rceil$.\\
For equality: If $G\in \mathcal{H}$, then $G=H_n$ for $n\ge 4$ even and $|E(H_n)|=3(n-2)/2+1$. Thus $2n+1- (4(3(n-2)/2+1)-1)/3= 2n+1- \lceil (4(3(n-2)/2+1)-1)/3 \rceil = 2n+1 -2(n-2)-1=4= \gamma_{rdR}(H_n)$. If $G\in \mathcal{F}$, then $G=F_n$ for $n\ge 3$ odd and $|E(F_n)|=3(n-1)/2$. Thus $2n+1- \lceil(4(3(n-1)/2)-1)/3\rceil = 2n+1 -2(n-1)=3= \gamma_{rdR}(F_n)$.
Conversely, assume that $\gamma_{rdR}(G)=2n+1-\lceil(4m-1)/3\rceil$. Then all inequalities occurring in the proof become equalities. In the case $|V_1|=0$, we have seen above that we have equality if and only if $G\in{\cal F}$. In the case $|V_1|\ge 1$, we have seen above that $|V_3|\ge 1$. Therefore the equality in Inequality
(\ref{EQ13}) leads to $|V_3|=|V_1|=1$ and $|V'_0|=0$. Hence $V_0=V''_0$. Thus equality in Inequality (\ref{EQ11}) or equivalently, in the inequality $|E(G)|=m\ge |V_0|/2+ 2|V'_0|+ |V''_0|+ |V_1|$
leads to $m=3/2|V''_0|+1$. Now let the vertices $v, u$ be of weight $3,1$ respectively. Then $m=|E(G)| \ge |E(\{v\}, V''_0)| + |E(G[V''_0])| +1 \ge |V''_0|+ 1/2|V''_0|+1=3/2|V''_0|+1$. If $|V_2|\ne 0$, then the connectivity of $G$ leads to the contradiction $m\ge 3/2|V''_0|+2$. Consequently, $|V_2|=0, |V_0|=(2m-2)/3$ and $u$ and $v$ are adjacent. Since $G$ is connected, $G\in \mathcal{H}$. \\ \end{proof}
\section{$RDRD$-set versus $RRD$-set} One aim of studying these parameters is to see the relation between them and to compare them with each other.
\begin{proposition}\label{cor1} For any graph $G$, $\gamma_{rdR}(G)\leq 2\gamma_{rR}(G)$ with equality if and only if $G=\overline{K_n}$. \end{proposition}
\begin{proof}
Let $f=(V_0,V_1,V_2)$ be a $\gamma_{rR}$-function of $G$. Since $\gamma_{rR}(G)=|V_1|+2|V_2|$, by Observation \ref{1}, we have that
$\gamma_{rdR}(G)\leq 2|V_1|+3|V_2|=\gamma_{rR}(G)+|V_1|+|V_2|\leq 2\gamma_{rR}(G)$. If $\gamma_{rdR}(G)=2\gamma_{rR}(G)=2|V_1|+4|V_2|$, then since $\gamma_{rdR}(G)\leq 2|V_1|+3|V_2|$, we must have $V_2=\emptyset$. Hence, $V_0=\emptyset$ must hold, and so $V=V_1$. By definition of $\gamma_{rR}$-function, we deduce that no two vertices in $G$ are adjacent, for otherwise, if $u$ and $v$ are adjacent, then only one of them in every $\gamma_{rdR}$-function on $G$ has a label of $2$ which contradicts with $\gamma_{rdR}(G)=2\gamma_{rR}(G)$. \end{proof}
The proof of Lemma \ref{lem1} shows the next proposition.
\begin{proposition} If $G$ contains a triangle, then $\gamma_{rdR}(G) \le 2n-3$. \end{proposition}
\begin{theorem}\label{the-111} For every graph $G$, $\gamma_{rR}(G) < \gamma_{rdR}(G$). \end{theorem} \begin{proof} Let $f=(V_0,V_1,V_2,V_3)$ be a $\gamma_{rdR}(G)$-function. If $V_3 \ne \emptyset$, then $(V'_0=V_0, V'_1=V_1 , V'_2=V_2\cup V_3)$ is an RRD-function $g$ such that $w(g)< w(f)$. Let $V_3=\emptyset$. If $V_0=\emptyset$, then, since $V_2 \ne \emptyset$,
$g=(\emptyset, V'_1=V_1\cup V_2, \emptyset)$ is an RRD-function such that $w(g)< w(f)$. If $V_0 \ne \emptyset$, then $|V_2|\ge 2$. Let $f(v)=2$ for a vertex $v$. Then $g=(V'_0=V_0, V'_1=V_1\cup \{v\}, V'_2=V_2-\{v\})$ is an RRD-function $g$ for which $w(g)< w(f)$. Therefore $\gamma_{rR}(G) < \gamma_{rdR}(G)$. \end{proof}
As an immediate consequence of Proposition \ref{cor1}, we have the following. \begin{corollary} For any nontrivial connected graph $G$, $\gamma_{rdR}(G) < 2\gamma_{rR}(G)$. \end{corollary}
\begin{theorem}\label{the-222} Let $G$ be a graph of order $n$. Then $\gamma_{rdR}(G)=\gamma_{rR}(G)+1$ if and only if $G$ is one of the following graphs.\\ \emph{1}. $G$ has a vertex of degree $n-1$.\\ \emph{2}. There exists a subset $S$ of $V(G)$ such that:\\ \emph{2.1}. every vertex of $V-S$ is adjacent to a vertex in $S$,\\ \emph{2.2}. there are two subsets $A_0$ and $A_1$ of $V-S$ with $A_0\cup A_1=V-S$ such that $A_0$ is the set of non-isolated vertices in $N(S)$ and each vertex in $A_0$ has at least two neighbors in $S$,\\ \emph{2.3}. for any $2$-subset $\{a,b\}$ of $S$, $N(\{a,b\})\cup A_0 \ne \emptyset$ and for a $3$-subset $\{x,y,z\}$ of $S$, if $\{x,y,z\} \cap A_0 \ne \emptyset$, then there are three vertices $u,v,w$ in $A_0$ such that $N(u)\cup S=\{x,y\}$, $N(v)\cup S=\{x,z\}$ and $N(w)\cup S=\{y,z\}$. \end{theorem}
\begin{proof} Let $\gamma_{rdR}(G)=\gamma_{rR}(G)+1$ with a $\gamma_{rdR}(G)$-function $f=(V_0,V_1,V_2,V_3)$ and a $\gamma_{rR}(G)$-function
$g=(U_0,U_1,U_2)$. If $V_3\ne \emptyset$, then $|V_3|=1$. Indeed, if $|V_3|\ge 2$, then by changing each label $3$ to $2$ we obtain an RRD-function $h$ with $w(h)< w(g)$, a contradiction. Let $V_3=\{v\}$.
In addition, we note that $|V_2|=0$. If we suppose that $|V_2|\ge 1$, then let $u\in V_2$. Then $h=(V'_0=V_0, V'_1=V_1\cup \{u\}, V'_2=(V_2\setminus\{u\})\cup \{v\})$ is an RRD-function for which $w(h)< w(g)$, a contradiction. Thus all vertices different from $v$ are adjacent to the vertex $v$, the non-isolated vertices in $N(v)$ are assigned $0$, and the isolated vertices in $N(v)$ are assigned $1$. In this case $U_0=V_0, U_1=V_1$ and $U_2=V_3$.\\
If $V_3= \emptyset$, then $V_2\ne \emptyset$ and $|V_2| \ge 2$. In this case, there must exist a vertex $v\in V_2$ such that $U_0= V_0, U_1=V_1\cup \{v\}$ and $U_2=V_2-\{v\}$. Such a function $f$ exists if there is a subset $S$ of $V(G)$ whose vertices all receive weight $2$ and such that every vertex of $V-S$ is adjacent to a vertex of $S$, that is, condition 2.1 holds.\\ Since we can only change one of the vertices of weight $2$ in $f$ to a vertex of weight $1$ in $g$, there must exist two subsets $A_0$ and $A_1$ of $V-S$ such that conditions 2.2 and 2.3 hold.\\
Conversely, if the condition 1 holds, then $f=(V_0, V_1, \emptyset, V_3=\{v\})$ and $g=(U_0=V_0, U_1=V_1, U_2=\{v\})$ are $\gamma_{rdR}(G)$-function and $\gamma_{rR}(G)$-function respectively where $V_0$ is the set of non-isolated vertices in $N(v)$ and $V_1$ is the set of isolated vertices in $N(v)$. Thus $\gamma_{rdR}(G)=\gamma_{rR}(G)+1$.\\ If the condition 2 holds, then we can have only one vertex of weight $2$ in $G$ under $f$ such that it changes to the weight $1$ in $G$ under $g$. Thus $\gamma_{rdR}(G)=\gamma_{rR}(G)+1$. \end{proof}
We showed that for any graph $G$, $\gamma_{rdR}(G)\le 2\gamma_{rR}(G)$ and that equality holds if and only if $G$ is the trivial graph $\overline{K_n}$. Hence, for any nontrivial graph $G$, $\gamma_{rdR}(G)\le 2\gamma_{rR}(G)-1$. Now we characterize the graphs $G$ with the property $\gamma_{rdR}(G)= 2\gamma_{rR}(G)-1$.
\begin{theorem} If $G$ is a nontrivial graph, then $\gamma_{rdR}(G)\le 2\gamma_{rR}(G)-1$. If $\gamma_{rdR}(G)=2\gamma_{rR}(G)-1$, then $G$ consists of a $K_2$ and $n-2$ isolated vertices or $G$ consists of a vertex $h$ and two disjoint vertex sets $H$ and $R$ such that $H=N(h)$, $G[H]$ does not have isolated vertices, $G[R]$ is trivial, there is no edge between $h$ and $R$ and $N(h)\cap N(R)\neq N(h)$. \end{theorem}
\begin{proof} Since $G$ is a nontrivial graph, Proposition \ref{cor1} implies $\gamma_{rdR}(G)\le 2\gamma_{rR}(G)-1$. Now we investigate the equality.\\ Let $\gamma_{rdR}(G)= 2\gamma_{rR}(G)-1$, where $f=(V_0,V_1,V_2,V_3)$ is a $\gamma_{rdR}(G)$-function and $g=(U_0,U_1,U_2)$ is a
$\gamma_{rR}(G)$-function. Then $2|U_1|+4|U_2|-1=|V_1|+2|V_2|+3|V_3|$. On the other hand, since $2|U_1|+4|U_2|-1=|V_1|+2|V_2|+3|V_3|=\gamma_{rdR}(G) \le 2|U_1|+3|U_2|$, it follows that $|U_2|\le 1$.
If $U_2=\emptyset$, then $|U_0|=0$ and therefore $|U_1|=n$. Using the inequality above, we obtain
$$2n-1=2|U_1|-1\le\gamma_{rdR}(G)\le 2|U_1|=2n.$$ If $\gamma_{rdR}(G)=2n$, then $G$ is trivial, a contradiction. If $\gamma_{rdR}(G)=2n-1$, then Proposition \ref{2n-1} shows that $G$ consists of a $K_2$ and $n-2$ isolated vertices.
Let now $|U_2|= 1$, say $U_2=\{h\}$, and set $H=N(h)$ and $R=V(G)\setminus N[h]=\{u_1,u_2,\ldots,u_p\}$. Clearly, $U_0\subseteq H$ and $R\subseteq U_1$.
If $H$ contains exactly $s\ge 1$ isolated vertices, then $\gamma_{rR}(G)=2+s+p$ and therefore $\gamma_{rdR}(G)\le 3+s+2p\le 2\gamma_{rR}(G)-2$, a contradiction. Hence $H=N(h)$ does not contain isolated vertices and thus $\gamma_{rR}(G)=p+2$.
If $G[R]$ contains an edge, then we obtain the contradiction $\gamma_{rdR}(G)\le 3+2p-1=2p+2\le 2\gamma_{rR}(G)-2$. Thus $G[R]$ is trivial.
If there is an edge between $h$ and $R$, then we also obtain the contradiction $\gamma_{rdR}(G)\le 3+2p-1=2p+2\le 2\gamma_{rR}(G)-2$.
If $N(h)\cap N(R)=N(h)$, then $f=(H,\emptyset,\{h\}\cup R,\emptyset)$ is an RDRD function of $G$, and hence $\gamma_{rdR}(G)\le 2p+2\le 2\gamma_{rR}(G)-2$, a contradiction. \end{proof}
\section{Trees} In this section we study the restrained double Roman domination of trees.\\
\begin{theorem}\label{the-tree1} If $T$ is a tree of order $n\geq2$, then $\gamma_{rdR}(T)\leq \lceil\frac{3n-1}{2}\rceil$. The equality holds if $T\in\{P_2,P_3,P_4,P_5, S_{1,2}, ws(1,n, n-1), ws(1,n, n-2)\}$. \end{theorem}
\begin{proof}
Let $T$ be a tree of order $n\geq2$. We will proceed by induction on $n$. If $n=2$, then $\gamma_{rdR}(T)=3=\lceil\frac{3n-1}{2}\rceil$. If $n\geq3$,
then $diam(T)\geq2$. If $diam(T)=2$, then $T$ is the star $K_{1,n-1}$ for $n\geq3$ and $\gamma_{rdR}(T)=n+1\leq \lceil\frac{3n-1}{2}\rceil$. If $diam(T)=3$, then $T$ is a double star $S_{r,s}$ for $1\leq r\leq s$. Hence, $n=r+s+2\geq4$. If $r=1=s$, then $T=P_4$ and $\gamma_{rdR}(T)=6\leq\lceil\frac{12-1}{2}\rceil$. If $r=1, s\ge 2$, then $n=s+3$ and $\gamma_{rdR}(T)=s+5\leq\lceil\frac{3(s+3)-1}{2}\rceil$. If $r\ge 2, s\ge 2$, then $n=r+s+2$ and $\gamma_{rdR}(T)=r+s+4\leq\lceil\frac{3(r+s+2)-1}{2}\rceil$.\\ Hence, we may assume that $diam(T)\geq4$. This implies that $n\geq5$. Assume that any tree $T'$ with order $2\leq n'<n$ has $\gamma_{rdR}(T')\leq \lceil\dfrac{3n'-1}{2}\rceil$. Among all longest paths in $T$, choose $P$ to be one that maximizes the degree of its next-to-last vertex $v$, and let $w$ be a leaf neighbor of $v$. Note that by our choice of $v$, every child of $v$ is a leaf. Since $deg(v)\geq2$, the vertex $v$ has at least one leaf as a child. Now we put $T'=T-T_v$ where the order of the substar $T_v$ is $k+1$ with $k\geq1$. Note that since $diam(T)\geq4$, $T'$ has at least three vertices, that is, $n'\geq3$. Let $f'$ be a $\gamma_{rdR}$-function of $T'$. Form $f$ from $f'$ by letting $f(x)=f'(x)$ for all $x\in V(T')$, $f(v)=2$, and $f(z)=1$ for all leaf neighbors of $v$. Thus $f$ is a restrained double Roman dominating function of $T$, implying that $\gamma_{rdR}(T)\leq \gamma_{rdR}(T')+k+2 \le \lceil\dfrac{3(n-k-1)-1}{2}\rceil+k+2=\lceil\dfrac{3n-k}{2}\rceil \leq \lceil\dfrac{3n-1}{2}\rceil$.\\
If $T\in \{P_2,P_3,P_4,P_5, S_{1,2}, ws(1,n, n-1), ws(1,n, n-2)\}$, then clearly $\gamma_{rdR}(T)=\lceil\dfrac{3n-1}{2}\rceil$. \end{proof}
\begin{theorem}\label{the-tree2}
For every tree $T$ of order $n\geq 3$, with $l$ leaves and $s$ support vertices, we have $\gamma_{rdR}(T)\leq\dfrac{4n+2s-l}{3}$, and this
bound is sharp for the family of stars ($K_{1,n-1}$, $n\geq 3$),
double stars, caterpillars in which each vertex is a leaf or a support vertex and either all support vertices have even degree
$2m$, or at most two end support vertices have degree $2m-1$ and the other support vertices have degree $2m$, and wounded spiders in which
the central vertex is adjacent to at least two leaves. \end{theorem} \begin{proof} Let $T$ be a tree with order $n\geq3$. Since $n\geq3$, $diam(T)\geq2$. If $diam(T)=2$, then $T$ is the star $K_{1,n-1}$ for $n\geq3$ and $\gamma_{rdR}(T)=n+1\leq\dfrac{4n+2-(n-1)}{3}=\dfrac{3n+3}{3}=n+1$. If $diam(T)=3$, then $T$ is a double star $S_{r,t}$ for $1\leq r\leq t$. We have $\gamma_{rdR}(T)=n+2=\dfrac{4n+2s-l}{3}$. Hence, we may assume $diam(T)\geq4$. Thus, $n\geq5$. Assume that any tree $T'$ with order $3\leq n'<n$, $l'$ leaves and $s'$ support vertices has $\gamma_{rdR}(T')\leq\dfrac{4n'+2s'-l'}{3}$. Among all longest paths in $T$, choose $P$ to be one that maximizes the degree of its next-to-last vertex $u$, and let $x$ be a leaf neighbor of $u$, $w$ be a parent vertex of $v$ and $v$ be a parent vertex of $u$. Note that by our choice of $u$, every child of $u$ is a leaf. Since $t=deg(u)\geq2$, the vertex $u$ has at least one leaf child. We now consider the following two cases:\\ \textbf{Case 1}. $deg(v)\geq3$. In this case, we put $T'=T-T_u$, where the order of the star $T_u$ is $t$ with $t\geq2$. Note that since $diam(T)\geq4$, $T'$ has at least three vertices, that is, $n'\geq3$. Let $f'$ be a $\gamma_{rdR}$-function of $T'$. Thus we have $n'=n-t$, $l'=l-(t-1)$ and $s'=s-1$. Clearly, $\gamma_{rdR}(T)\leq \gamma_{rdR}(T')+t+1\leq\dfrac{4(n-t)+2(s-1)-(l-(t-1))}{3}+t+1=\dfrac{4n+2s-l}{3}$.\\
\textbf{Case 2}. $deg(v)=2$. We now consider the following two subcases.\\ \textbf{i}. $deg(w)>2$. Then we put $T'=T-T_v$ where order of subtree $T_v$ is $t+1$. Clearly, we have $n'=n-(t+1)$, $s'=s-1$ and $l'=l-(t-1)$. Thus, $\gamma_{rdR}(T)\leq \gamma_{rdR}(T')+t+2\leq \dfrac{4(n-t-1)+2(s-1)-(l-(t-1))}{3}+t+2=\dfrac{4n+2s-l-1}{3}\leq \dfrac{4n+2s-l}{3}$.\\ \textbf{ii}. $deg(w)=2$. Then we put $T'=T-T_v$, where the order of the subtree $T_v$ is $t+1$. Thus in this case, $w$ in the subtree $T'$ becomes a leaf and we have $n'=n-(t+1)$, $s'\le s$ and $l'= l-(t-1)+1$. Thus, $\gamma_{rdR}(T)\leq \gamma_{rdR}(T')+t+2\leq \dfrac{4(n-t-1)+2(s)-(l-(t-1)+1)}{3}+t+2=\dfrac{4n+2s-l}{3}$. \end{proof}
\begin{theorem}\label{the-tree3} If $T$ is a tree, then $\gamma_r(T)+1\le \gamma_{rdR}(T)\le 3\gamma_r(T)$, and equality for the lower bound holds if and only if $T$ is a star. The upper bound is sharp for the paths $P_{m}$ ($m\equiv 1\ \mbox{mod}\ 3$), the cycles $C_n$ ($n\equiv 0,\,1\ \mbox{mod}\ 3$), the complete graphs $K_n$, the complete bipartite graphs $K_{n,m}\ (m,n \ge 2)$ and the multipartite graphs $K_{n_1,n_2,\cdots, n_m},\ (m\ge 3)$. \end{theorem} \begin{proof} Let $T$ be a tree. Since at least one vertex has value at least $2$ under any $RDRD$ function of $T$, we see that $\gamma_r(T)+1 \le \gamma_{rdR}(T)$. If we assign the value $3$ to the vertices in a $\gamma_r(T)$-set, then we obtain an RDRD function of $T$. Therefore $\gamma_{rdR}(T)\le 3\gamma_r(T)$.\\ The sharpness of the upper bound is deduced from Propositions 1-7 of \cite{domke} and Observation \ref{the-com-par}, Theorem \ref{the-path} and Theorem \ref{the-cycle}.\\ For equality of the lower bound, if $T=K_{1,n-1}$ is a star, then it is clear that $\gamma_{rdR}(T)=n+1$ and $\gamma_{r}(T)=n$. If $T$ is a tree and $\gamma_{rdR}(T)=\gamma_{r}(T)+1$, then we have only one vertex of value $2$ in any $\gamma_{rdR}(T)$-function and the other vertices of positive weight have value $1$. In addition, the vertices of value 1 are adjacent to the vertex of value 2, and therefore $T$ is a star.
\end{proof}
The following result gives the $RDRD$ number of $G$ in terms of the size and the order of $G$. \begin{proposition}\label{prop-tree4} Let $G$ be a connected graph of order $n\ge 2$ with $m$ edges. Then $\gamma_{rdR}(G) \le 4m-2n+3$, with equality if and only if $G$ is a tree with $\gamma_{rdR}(G) = 2n-1$. \end{proposition} \begin{proof} For the given connected graph, $m\ge n-1$ and, according to Proposition \ref{2n-1}, $\gamma_{rdR}(G) \le 2n-1 =4n-4 -2n +3\le 4m-2n+3$.\\ If $\gamma_{rdR}(G) = 4m-2n+3$, then $m=n-1$ and $G$ is a tree with $\gamma_{rdR}(G) = 2n-1$.\\ Conversely, assume that $G$ is a tree with $\gamma_{rdR}(G) = 2n-1$. Hence $\gamma_{rdR}(G) = 4m-2n+3$. \end{proof}
\section{Conclusions and problems} The concept of restrained double Roman domination in graphs was initially investigated in this paper. We studied the computational complexity of this concept and proved some bounds on the $RDRD$ number of graphs. In the case of trees, we characterized all trees attaining the exhibited bound. We now conclude the paper with some problems suggested by this research.
\\ $\bullet$ Provide characterizations of the graphs $G$ with small or large $RDRD$ numbers.\\ $\bullet$ It is also worthwhile to prove other nontrivial sharp bounds on $\gamma_{rdR}(G)$ for general graphs $G$ or for some well-known families such as chordal, planar, triangle-free, or claw-free graphs.
\\ $\bullet$ The decision problem RESTRAINED DOUBLE ROMAN DOMINATION is NP-complete for general graphs, as proved in Theorem \ref{the-NP}. Nevertheless, there might be families of graphs for which the $RDRD$ problem remains NP-complete, and there might be polynomial-time algorithms for computing the $RDRD$ number of some well-known families of graphs, for instance, trees. Identifying such families is an open problem.\\ $\bullet$ In Theorems \ref{the-tree1} and \ref{the-tree2} we showed upper bounds for $\gamma_{rdR}(T)$. Determining sufficient and necessary conditions for equality in these bounds is another open problem.
\end{document}
\begin{document}
\title{A note on the $\mathbb Z_2$-equivariant Montgomery-Yang correspondence} \author{Yang Su} \address{Hua Loo-Keng Key Laboratory of Mathematics \newline \indent Chinese Academy of Sciences \newline\indent Beijing, 100190, China} \email{suyang{@}math.ac.cn}
\date{August 19, 2009}
\begin{abstract} In this paper, a classification of free involutions on $3$-dimensional homotopy complex projective spaces is given. By the $\mathbb Z_2$-equivariant Montgomery-Yang correspondence, we obtain all smooth involutions on $S^6$ with fixed-point set an embedded $S^3$. \end{abstract}
\maketitle
\section{Introduction}\label{sec:one} In \cite{M-Y}, Montgomery and Yang established a $1:1$ correspondence between the set of isotopy classes of smooth embeddings $S^3 \hookrightarrow S^6$, $C_3^3$, and the set of diffeomorphism classes of smooth manifolds homotopy equivalent to the $3$-dimensional complex projective space $\mathbb C \mathrm P^3$ ( these manifolds are called homotopy $\mathbb C \mathrm P^3$). It is known that the latter set is identified with the set of integers by the first Pontrjagin class of the manifold. Therefore so is the set $C_3^3$.
In a recent paper \cite{Lv-Li}, Bang-he Li and Zhi L\"u established a $\mathbb Z_2$-equivariant version of the Montgomery-Yang correspondence. Namely, they proved that there is a $1:1$ correspondence between the set of smooth involutions on $S^6$ with fixed-point set an embedded $S^3$ and the set of smooth free involutions on homotopy $\mathbb C \mathrm P^3$. This correspondence gives a way of studying involutions on $S^6$ with fixed-point set an embedded $S^3$ by looking at free involutions on homotopy $\mathbb C \mathrm P^3$. As an application, by combining this correspondence and a result of Petrie \cite{Petrie}, saying that there are infinitely many homotopy $\mathbb C \mathrm P^3$'s which admit
free involutions, the authors constructed infinitely many counter examples for the Smith conjecture, which says that only the unknotted $S^3$ in $S^6$ can be the fixed-point set of an involution on $S^6$.
In this note we study the classification of the orbit spaces of free involutions on homotopy $\mathbb C \mathrm P^3$. As a consequence, we get the classification of free involutions on homotopy $\mathbb C \mathrm P^3$, and further by the $\mathbb Z_2$-equivariant Montgomery-Yang correspondence, the classification of involutions on $S^6$ with fixed-point set an embedded $S^3$.
The manifolds $X^6$ homotopy equivalent to $\mathbb C \mathrm P^3$ are classified up to diffeomorphism by the first Pontrjagin class $p_1(X) =(24j+4)x^2$, $j \in \mathbb Z$, where $x^2$ is the canonical generator of $H^4(X;\mathbb Z)$ (c.~f.~\cite{M-Y}, \cite{Wall6}). We denote the manifold with $p_1 =(24j+4)x^2$ by $H\cp^3_j$.
\begin{theorem}\label{thm:one} The manifold $H\cp^3_j$ admits a (orientation reversing) smooth free involution if and only if $j$ is even. On every $H\cp^3_{2k}$, there are exactly two free involutions up to conjugation.\footnote{The same result was also obtained independently by Bang-he Li (unpublished).} \end{theorem}
\begin{corollary}\label{coro:one} An embedded $S^3$ in $S^6$ is the fixed-point set of an involution on $S^6$ if and only if its Montgomery-Yang correspondence is $H\cp^3_{2k}$. For each embedding satisfying the condition, there are exactly two such involutions up to conjugation. \end{corollary}
Theorem \ref{thm:one} is a consequence of a classification theorem (Theorem \ref{thm:two}) of the orbit spaces. Theorem \ref{thm:two} will be shown in Section $3$ by the classical surgery exact sequence of Browder-Novikov-Sullivan-Wall. In Section $2$ we show some topological properties of the orbit spaces, which will be needed in the solution of the classification problem.
\section{Topology of the Orbit Space}
In this section we summarize the topological properties of the orbit space of a smooth free involution on a homotopy $\mathbb C \mathrm P^3$. Some of the properties are also given in \cite{Lv-Li}. Here we give shorter proofs from a different point of view and in a different logical order.
Let $\tau$ be a smooth free involution on $H\cp^3$, a homotopy $\mathbb C \mathrm P^3$. Denote the orbit manifold by $M$.
\begin{example} The $3$-dimensional complex projective space $\mathbb C \mathrm P^3$ can be viewed as the sphere bundle of a $3$-dimensional real vector bundle over $S^4$. The fiberwise antipodal map $\tau_0$ is a free involution on $\mathbb C \mathrm P^3$ (c.~f.~\cite[A.1]{Petrie}). Denote the orbit space by $M_0$. \end{example}
As a consequence of the Lefschetz fixed-point theorem and the multiplicative structure of $H^*(H\mathbb C \mathrm P^3)$, we have
\begin{lemma}\cite[Theorem 1.4]{Lv-Li} $\tau$ must be orientation reversing. \end{lemma}
\begin{lemma}\label{lemma:cohom} The cohomology ring of $M$ with $\mathbb Z_2$-coefficients is
$$H^*(M;\mathbb Z_2)=\mathbb Z_2[t,q]/(t^3=0, q^2=0),$$ where $|t|=1$, $|q|=4$. \end{lemma} \begin{proof} Note that the fundamental group of $M$ is $\mathbb Z_2$. There is a fibration $\widetilde{M} \to M \to \mathbb R \mathrm P^{\infty}$, where $M \to \mathbb R \mathrm P^{\infty}$ is the classifying map of the covering. We apply the Leray-Serre spectral sequence. Since $\widetilde{M}$ is homotopy equivalent to $\mathbb C \mathrm P^3$, the nontrivial $E_2$-terms are $E_2^{p,q}=H^p(\mathbb R \mathrm P^{\infty};\mathbb Z_2)$ for $q=0,2,4,6$. Therefore all differentials $d_2$ are trivial, and hence $E_2=E_3$. Now the differential $d_3 \colon E_3^{0,2} \to E_3^{3,0}$ must be an isomorphism. For otherwise the multiplicative structure of the spectral sequence implies that the spectral sequence collapses at the $E_3$-page, which implies that $H^*(M;\mathbb Z_2)$ is nontrivial for $* >6$, a contradiction. Then it is easy to see that $M$ has the claimed cohomology ring. \end{proof}
\begin{remark} There is an exact sequence (cf.~\cite{Brown}) $$H_3(\mathbb Z/2) \to \mathbb Z \otimes_{\mathbb Z[\mathbb Z/2]} \mathbb Z_- \to H_2(M) \to H_2(\mathbb Z/2),$$ where $\mathbb Z_-$ is the nontrivial $\mathbb Z[\mathbb Z_2]$-module. By this exact sequence, $H_2(M)$ is either $\mathbb Z_2$ or trivial. $H^2(M;\mathbb Z_2)\cong \mathbb Z_2$ implies that $H_2(M) =0$. This was shown in \cite[Lemma 2.1]{Lv-Li} by geometric arguments. \end{remark}
Now let's consider the Postnikov system of $M$. Since $\pi_1(M) \cong \mathbb Z_2$, $\pi_2(M) \cong \mathbb Z$ and the action of $\pi_1(M)$ on $\pi_2(M)$ is nontrivial, following \cite{Baues}, there are two candidates for the second space $P_2(M)$ of the Postnikov system, which are distinguished by their homology groups in low dimensions. See \cite[pp.265]{Olbermann} and \cite[Section 2A]{Su}.
Let $\lambda$ be the free involution on $\cp^{\infty}$, mapping $[z_0, z_1, z_2, z_3, \cdots]$ to $[-z_1, z_0, -z_3, z_2, \cdots]$. Let $Q = (\cp^{\infty} \times S^{\infty})/(\lambda, -1)$, where $-1$ denotes the antipodal map on $S^{\infty}$, then there is a fibration $\cp^{\infty} \to Q \to \rp^{\infty}$ which corresponds to the nontrivial $k$-invariant. Lemma \ref{lemma:cohom} implies that $P_2(M)=Q$ since $Q$ has the same homology as $M$ in low dimensions.
Let $f_2 \colon M \to Q$ be the second Postnikov map, since $\pi_i(M) \cong \pi_i(\mathbb C \mathrm P^3)=0$ for $3 \le i \le 6$, $f_2$ is actually a $7$-equivalence and $Q$ is the $6$-th space of the Postnikov system of $M$. By the formality of constructing the Postnikov system, all the orbit spaces have the same Postnikov system. Therefore we have shown \begin{proposition}\cite[Lemma 3.2]{Lv-Li} \label{prop:hmtp} The orbit spaces of free involutions on homotopy $\mathbb C \mathrm P^3$ are all homotopy equivalent. \end{proposition}
Now let us consider the characteristic classes of $M$.
\begin{lemma} The total Stiefel-Whitney class of $M$ is $w(M)=1+t+t^2$, where $t \in H^1(M;\mathbb Z_2)$ is the generator. \end{lemma} \begin{proof} The involution $\tau$ is orientation reversing, therefore $M$ is nonorientable and $w_1(M)=t$. The Steenrod square $Sq^2 \colon H^4(M;\mathbb Z_2) \to H^6(M;\mathbb Z_2)$ is trivial, this can be seen by looking at $M_0$, whose $4$-dimensional cohomology classes are pulled back from $S^4$. Therefore the second Wu class is $v_2(M)=0$. Thus by the Wu formula $w(M)=Sqv(M)$ it is seen that the total Stiefel-Whitney class of $M$ is $w(M)=1+t+t^2$. \end{proof}
Let $\pi \colon H\cp^3 \to M$ be the projection map; then $\pi^*p_1(M)=p_1(H\cp^3)$.
\begin{lemma} The induced map $\pi^* \colon H^4(M) \to H^4(H\cp^3)$ is an isomorphism. \end{lemma} \begin{proof} Apply the Leray-Serre spectral sequence for integral cohomology to the fibration $\widetilde{M} \to M \to \rp^{\infty}$, the $E_2$-terms are $E_2^{p,q}=H^p(\rp^{\infty};\underline{H^q(\widetilde{M})})$. It is known that $H^3(M)=H^3(Q)=0$ and $H^5(M)=H^5(Q)=0$ (for $H^*(Q)$, see \cite[pp.265]{Olbermann}), therefore $E_{\infty}^{0,4}=E_2^{0,4}=H^4(\widetilde{M})$ is the only nonzero term in the line $p+q=4$. This shows that the edge homomorphism is an isomorphism, which is just $\pi^*$. \end{proof}
Therefore the first Pontrjagin class of $M$ is $p_1(M)=(24j+4)u$ ($j \in \mathbb Z$), where $u=\pi^*(x^2)$ is the canonical generator of $H^4(M)$.
\section{Classification of the orbit spaces} By Proposition \ref{prop:hmtp}, every orbit space $M$ is homotopy equivalent to $M_0$. Thus the set of conjugation classes of free involutions on homotopy $\mathbb C \mathrm P^3$ is in $1:1$ correspondence to the set of diffeomorphism classes of smooth manifolds homotopy equivalent to $M_0$. Denote the latter by $\mathcal M (M_0)$. Let $\mathscr{S}(M_0)$ be the smooth structure set of $M_0$, $Aut(M_0)$ be the set of homotopy classes of self-equivalences of $M_0$. There is an action of $Aut(M_0)$ on $\mathscr S(M_0)$ with orbit set $\mathcal M(M_0)$. (Since the Whitehead group of $\mathbb Z_2$ is trivial, we omit the decoration $s$ all over.)
The surgery exact sequence for $M_0$ is $$L_7(\mathbb Z_2^-) \to \mathscr S(M_0) \to [M_0, G/O] \to L_6(\mathbb Z_2^-).$$ By \cite[Theorem 13A.1]{Wall}, $L_7(\mathbb Z_2^-)=0$, $L_6(\mathbb Z_2^-) \stackrel{c}{\cong} \mathbb Z_2$, where $c$ is the Kervaire-Arf invariant. Since $\dim M_0=6$ and $PL/O$ is $6$-connected, we have an isomorphism $[M_0, G/O] \cong [M, G/PL]$. For a given surgery classifying map $g \colon M_0 \to G/PL$, the Kervaire-Arf invariant is given by the Sullivan formula (\cite{Sullivan}, \cite[Theorem 13B.5]{Wall}) \begin{eqnarray*} c(M_0, g) & = & \langle w(M_0) \cdot g^*\kappa, [M_0] \rangle \\
& = & \langle (1+t+t^2) \cdot g^*(1+Sq^2+Sq^2Sq^2)(k_2+k_6),
[M_0] \rangle \\
& = & \langle g^*k_6, [M_0] \rangle . \end{eqnarray*}
Now since $M_0$ has only $2$-torsion, and modulo the groups of odd order we have $$G/PL \simeq Y \times \prod_{i \ge 2}(K(\mathbb Z_2, 4i-2) \times K(\mathbb Z,4i)),$$ where $Y=K(\mathbb Z_2,2) \times_{\delta Sq^2} K(\mathbb Z,4)$, we have $[M_0, G/PL]=[M_0,Y] \times [M_0, K(\mathbb Z_2,6)]$. $k_6$ is the fundamental class of $K(\mathbb Z_2,6)$. Therefore the surgery exact sequence implies
\begin{lemma}\label{lemma:str} $\mathscr S (M_0) \cong [M_0, Y]$. \end{lemma}
The projection $\pi \colon \mathbb C \mathrm P^3 \to M_0$ induces a homomorphism $\pi^* \colon [M_0, Y] \to [\mathbb C \mathrm P^3, Y]$, and $[\mathbb C \mathrm P^3,Y]$ is isomorphic to $\mathbb Z$ through the splitting invariant $s_4$ (\cite[Lemma 14C.1]{Wall}). Let $\Phi=s_4 \circ \pi^*$ be the composition.
\begin{lemma}\label{lemma:exact} There is a short exact sequence $\mathbb Z_2 \to [M_0,Y] \stackrel{\Phi}{\rightarrow} 2\mathbb Z$. \end{lemma} \begin{proof} We have $[\mathbb C \mathrm P^3, Y]=[\mathbb C \mathrm P^2, Y]$, and according to Sullivan \cite{Sullivan}, the exact sequence $$L_4(1) \stackrel{\cdot 2}{\rightarrow} [\mathbb C \mathrm P^2, Y] \to [\mathbb C \mathrm P^1,Y]$$ is non-splitting. Let $p \colon Y \to K(\mathbb Z_2,2)$ be the projection map, then for any $f \in [\mathbb C \mathrm P^3, Y]$, $s_4(f) \in 2\mathbb Z$ if and only if $p \circ f \colon \mathbb C \mathrm P^3 \to K(\mathbb Z_2,2)$ is null-homotopic. Now by Lemma \ref{lemma:cohom}, the homomorphism $H^2(M_0;\mathbb Z_2) \to H^2(\mathbb C \mathrm P^3;\mathbb Z_2)$ is trivial. Therefore for any $g \in [M_0, Y]$, the composition $p \circ g \circ \pi$ is null-homotopic, thus $\mathrm{Im} \Phi \subset 2\mathbb Z$. On the other hand, since $\pi^* \colon H^4(M_0;\mathbb Z) \to H^4(\mathbb C \mathrm P^3)$ is an isomorphism, any map $f \colon \mathbb C \mathrm P^3 \to K(\mathbb Z,4)$ factors through some $g' \colon M_0 \to K(\mathbb Z,4)$. Let $i \colon K(\mathbb Z,4) \to Y$ be the fiber inclusion, since $s_4(i\circ f)$ takes any value in $2\mathbb Z$, so does $\Phi(i \circ g')$.
Let $h \colon M_0 \to K(\mathbb Z_2,2)$ be a map corresponding to the nontrivial cohomology class in $H^2(M_0;\mathbb Z_2)$. By obstruction theory, there is a lifting $g \colon M_0 \to Y$. By the previous argument, there is also a map $g' \colon M_0 \to Y$ such that $\Phi(g)=\Phi(g')$, but $ p \circ g' \colon M_0 \to K(\mathbb Z_2,2)$ is null-homotopic. Therefore the kernel of $\Phi$ consists of two elements. \end{proof}
\begin{remark} In \cite{Petrie} Petrie showed that every homotopy $\mathbb C \mathrm P^3$ admits a free involution. It was pointed out by Dovermann, Masuda and Schultz \cite[pp.~4]{DMS} that since the class $G$ is in fact twice the generator of $H^4(S^4)$, Petrie's computation actually shows that every $H\cp^3_{2k}$ admits a free involution, which is consistent with Lemma \ref{lemma:exact}. \end{remark}
The set of diffeomorphism classes of manifolds homotopy equivalent to $M_0$, $\mathcal M(M_0)$, is the orbit set $\mathscr S(M_0)/Aut(M_0)$. In general, the determination of the action of $Aut(M_0)$ on the structure set is very difficult. But in our case, the situation is quite simple, since
\begin{lemma}\label{lemma:action} The group of self-equivalences $Aut(M_0)$ is the trivial group. \end{lemma} \begin{proof} A special CW-complex structure of $M_0$ was given in \cite[pp.~885]{Lv-Li}: $M_0$ is a $\mathbb R \mathrm P^2$-bundle over $S^4$, therefore it is the union of two copies of $\mathbb R \mathrm P^2 \times D^4$, glued along boundaries. Choose a CW-complex structure of $\mathbb R \mathrm P^2$, we have a product CW-structure on one copy of $ \mathbb R \mathrm P^2 \times D^4$, and by shrinking the other copy of $\mathbb R \mathrm P^2 \times D^4$ to the core $\mathbb R \mathrm P^2$, we get a CW-complex structure on $M_0$, whose $2$-skeleton is $\mathbb R \mathrm P^2$.
Let $\varphi \in Aut(M_0)$ be a self-homotopy equivalence of $M_0$. By cellular approximation, we may assume that $\varphi$ maps $\mathbb R \mathrm P^2$ to $\mathbb R \mathrm P^2$. It is easy to see that $\varphi|_{\mathbb R \mathrm P^2}$ is homotopic to $\mathrm{id}_{\mathbb R \mathrm P^2}$. Therefore, by homotopy extension, we may further assume that $\varphi|_{\mathbb R \mathrm P^2}=\mathrm{id}_{\mathbb R \mathrm P^2}$. The obstructions to constructing a homotopy between $\varphi$ and $\mathrm{id}_{M_0}$ that restricts to the identity on $\mathbb R \mathrm P^2$ lie in $H^i(M_0,\mathbb R \mathrm P^2;\pi_i(M_0))$. Since $\pi_i(M_0)=0$ for $3 \le i \le 6$ and $H^1(M_0,\mathbb R \mathrm P^2;\mathbb Z_2)=H^2(M_0,\mathbb R \mathrm P^2;\mathbb Z)=0$, all the obstruction groups are zero. Therefore $\varphi \simeq \mathrm{id}_{M_0}$. \end{proof}
Combining Lemmas \ref{lemma:str}, \ref{lemma:exact} and \ref{lemma:action}, we obtain a classification of manifolds homotopy equivalent to $M_0$.
\begin{theorem}\label{thm:two} Let $M^6$ be a smooth manifold homotopy equivalent to $M_0$. Then $p_1(M)=(48j+4)u$ for some $j \in \mathbb Z$, where $u\in H^4(M;\mathbb Z)$ is the canonical generator; for each $j \in \mathbb Z$, up to diffeomorphism, there are two such manifolds with the same $p_1=(48j+4)u$. \end{theorem}
Theorem \ref{thm:one} and Corollary \ref{coro:one} are direct consequences of this theorem.
\end{document}
\begin{document}
\begin{singlespace} \title{Sample Path Large Deviations \\ for Stochastic Evolutionary Game Dynamics\thanks{We thank Michel Bena\"im for extensive discussions about this paper and related topics, and two anonymous referees and an Associate Editor for helpful comments. Financial support from NSF Grants SES-1155135 and SES-1458992, U.S. Army Research Office Grant MSN201957, and U.S. Air Force OSR Grant FA9550-09-0538 is gratefully acknowledged.}}
\begin{abstract} We study a model of stochastic evolutionary game dynamics in which the probabilities that agents choose suboptimal actions are dependent on payoff consequences. We prove a sample path large deviation principle, characterizing the rate of decay of the probability that the sample path of the evolutionary process lies in a prespecified set as the population size approaches infinity. We use these results to describe excursion rates and stationary distribution asymptotics in settings where the mean dynamic admits a globally attracting state, and we compute these rates explicitly for the case of logit choice in potential games. \end{abstract}
\end{singlespace}
\section{Introduction}
Evolutionary game theory concerns the dynamics of aggregate behavior of populations of strategically interacting agents. To define these dynamics, one specifies the population size $N$, the game being played, and the revision protocol agents follow when choosing new actions. Together, these objects define a Markov chain $\mathbf{X}^N$ over the set of population states---that is, of distributions of agents across actions.
From this common origin, analyses of evolutionary game dynamics generally proceed in one of two directions. One possibility is to consider deterministic dynamics, which describe the evolution of aggregate behavior using an ordinary differential equation. More precisely, a \emph{deterministic evolutionary dynamic} is a map that assigns population games to dynamical systems on the set of population states. The replicator dynamic (\cite{TayJon78}), the best response dynamic (\cite{GilMat91}), and the logit dynamic (\cite{FudLev98}) are prominent examples.
In order to derive deterministic dynamics from the Markovian model of individual choice posited above, one can consider the limiting behavior of the Markov chains as the population size $N$ approaches infinity. \cite{Kur70}, \cite{Ben98}, \cite{BenWei03,BenWei09}, and \cite{RotSan13} show that if this limit is taken, then over any finite time span, it becomes arbitrarily likely that the Markov chain is very closely approximated by solutions to a differential equation---the \emph{mean dynamic}---defined by the Markov chain's expected increments. Different revision protocols generate different deterministic dynamics: for instance, the replicator dynamic can be obtained from a variety of protocols based on imitation, while the best response and logit dynamics are obtained from protocols based on exact and perturbed optimization, respectively. \footnote{For imitative dynamics, see \cite{Hel92}, \cite{Wei95}, \cite{BjoWei96}, \cite{Hof95im}, and \cite{Sch98}; for exact and perturbed best response dynamics, see \cite{RotSan13} and \cite{HofSan07}, respectively. For surveys, see \cite{SanPGED,SanHB}.} These deterministic dynamics describe typical behavior in a large population, specifying how the population settles upon a stable equilibrium, a stable cycle, or a more complex stable set. They thus provide theories of how equilibrium behavior is attained, or of how it may fail to be attained.
At the same time, if the process $\mathbf{X}^N$ is ergodic---for instance, if there is always a small chance of a revising player choosing any available action---then any stable state or other stable set of the mean dynamic is only temporarily so: equilibrium must break down, and new equilibria must emerge. Behavior over very long time spans is summarized by the stationary distribution of the process. This distribution is typically concentrated near a single stable set, the identity of which is determined by the relative probabilities of transitions between stable sets.
This last question is the subject of the large literature on stochastic stability under evolutionary game dynamics. \footnote{Key early contributions include \cite{FosYou90}, \cite{KanMaiRob93}, and \cite{You93}; for surveys, see \cite{You98} and \cite{SanPGED}.} The most commonly employed framework in this literature is that of \cite{KanMaiRob93} and \cite{KanRob95, KanRob98}. These authors consider a population of fixed size, and suppose that agents employ the \emph{best response with mutations} rule: with high probability a revising agent plays an optimal action, and with the complementary probability chooses an action uniformly at random. They then study the long run behavior of the stochastic game dynamic as the probability of mutation approaches zero. The assumption that all mistakes are equally likely makes the question of equilibrium breakdown simple to answer, as the unlikelihood of a given sample path depends only on the number of suboptimal choices it entails. This eases the determination of the stationary distribution, which is accomplished by means of the well-known Markov chain tree theorem. \footnote{See \pgcite{FreWen98}{Lemma 6.3.1} or \pgcite{You93}{Theorem 4}.}
To connect these two branches of the literature, one can consider the questions of equilibrium breakdown and stochastic stability in the large population limit, describing the behavior of the processes $\mathbf{X}^N$ when this behavior differs substantially from that of the mean dynamic. Taking a key first step in this direction, this paper establishes a \emph{sample path large deviation principle}: for any prespecified set of sample paths $\Phi$ of a fixed duration, we characterize the rate of decay of the probability that the sample path of $\mathbf{X}^N$ lies in $\Phi$ as $N$ grows large. This large deviation principle is the basic preliminary to obtaining characterizations of the expected waiting times before transitions between equilibria and of stationary distribution asymptotics.
As we noted earlier, most work in stochastic evolutionary game theory has focused on evolution under the best response with mutations rule, and on properties of the small noise limit. In some contexts, it seems more realistic to follow the approach taken here, in which the probabilities of mistakes depend on their costs. \footnote{Important early work featuring this assumption includes the logit model of \cite{Blu93,Blu97} and the probit model of \cite{MyaWal03}. For more recent work and references, see \cite{SanSIM,SanORDERS}, \cite{Sta12}, and \cite{SanSta16}.} Concerning the choice of limits, \cite{BinSam97} argue that the large population limit is more appropriate than the small noise limit for most economic modeling. However, the technical demands of this approach have restricted previous analyses to the two-action case. \footnote{See \cite{BinSamVau95}, \cite{BinSam97}, \cite{Blu93}, and \cite{SanSIM,SanORDERS}.} This paper provides a necessary first step toward obtaining tractable analyses of large population limits in many-action environments.
In order to move from the large deviation principle to statements about the long run behavior of the stochastic process, one can adapt the analyses of \cite{FreWen98} of diffusions with vanishing noise parameters to our setting of sequences of Markov chains running on increasingly fine grids in the simplex. In Section \ref{sec:App}, we explain how the large deviation principle can be used to estimate the waiting times to reach sets of states away from an attractor and to describe the asymptotic behavior of the stationary distribution in cases where the mean dynamic admits a globally attracting state. We prove that when agents playing a potential game make decisions using the logit choice rule, the control problems in the statement of the large deviation principle can be solved explicitly. We illustrate the implications of these results by using them to characterize long run behavior in a model of traffic congestion.
Our work here is closely connected to developments in two branches of the stochastic processes literature. Large deviation principles for environments quite close to those considered here have been established by \cite{AzeRug77}, \cite{Dup88}, and \cite{DupEll97}. In these works, the sequences of processes under consideration are defined on open sets in $\mathbb{R}^n$, and have transition laws that allow for motion in all directions from every state. These results do not apply directly to the evolutionary processes considered here, which necessarily run on a compact set. Thus relative to these works, our contribution lies in addressing behavior at and near boundary states.
There are also close links to work on interacting particle systems with long range interactions. In game-theoretic terms, the processes studied in this literature describe the \emph{individual choices} of each of $N$ agents as they evolve in continuous time, with the stochastic changes in each agent's action being influenced by the aggregate behavior of all agents. Two large deviation principles for such systems are proved by \cite{Leo95LD}.
The first describes large deviations of the sequence of probability distributions on the set of empirical distributions, where the latter distributions anonymously describe the $N$ agents' sample paths through the finite set of actions $\scrA = \{1, \ldots , n\}$. \footnote{In more detail, each sample path of the $N$ agent particle system specifies the action $i \in \scrA$ played by each agent as a function of time $t \in [0, T]$. Each sample path generates an empirical distribution $D^N$ over the set of paths $\mathscr{I} = \{\iota \colon [0, T] \to \scrA\}$, where with probability one, $D^N$ places mass $\frac1N$ on $N$ distinct paths in $\mathscr{I}$. The random draw of a sample path of the particle system then induces a probability distribution $\mathscr{P}^N$ over empirical distributions $D^N$ on the set of paths $\mathscr{I}$. The large deviation principle noted above concerns the behavior of the probability distributions $\mathscr{P}^N$ as $N$ grows large.} The second describes large deviations of the sequence of probability distributions over paths on discrete grids $\mathscr{X}^N$ in the $n$-simplex, paths that represent the evolution of \emph{aggregate} behavior in the $N$ agent particle system. \footnote{In the parlance of the particle systems literature, the first result concerns the ``empirical distributions'' (or ``empirical measures'') of the system, while the latter concerns the ``empirical process''.} The Freidlin-Wentzell theory for particle systems with long range interactions has been developed by \cite{BorSun12}, who provide many further references to this literature.
The large deviation principle we prove here is a discrete-time analogue of the second result of \cite{Leo95LD} noted above. Unlike \cite{Leo95LD}, we allow individuals' transition probabilities to depend in a vanishing way on the population size, as is natural in our game-theoretic context (see Examples \ref{ex:MatchNF}--\ref{ex:Clever} below). Also, our discrete-time framework obviates the need to address large deviations in the arrival of revision opportunities. \footnote{Under a continuous-time process, the number of revision opportunities received by $N$ agents over a short but fixed time interval $[t, t+dt]$ follows a Poisson distribution with mean $N\, dt$. As the population size grows large, the number of arrivals per agent over this interval becomes almost deterministic. However, a large deviation principle for the evolutionary process must account for exceptional realizations of the number of arrivals. For a simple example illustrating how random arrivals influence large deviations results, see \pgcite{DemZei98}{Exercise 2.3.18}.} But the central advantage of our approach is its simplicity. Describing the evolution of the choices of each of $N$ individual agents requires a complicated stochastic process. Understanding the proofs (and even the statements) of large deviation principles for these processes requires substantial background knowledge. Here our interest is in aggregate behavior.
By making the aggregate behavior process our primitive, we are able to state our large deviation principle with a minimum of preliminaries. Likewise, our proof of this principle, which follows the weak convergence approach of \cite{DupEll97}, is relatively direct, and in Section \ref{sec:LDP} we explain its main ideas in a straightforward manner. These factors may make the work to follow accessible to researchers in economics, biology, engineering, and other fields.
This paper is part of a larger project on large deviations and stochastic stability under evolutionary game dynamics with payoff-dependent mistake probabilities and arbitrary numbers of actions. In \cite{SanSta16}, we considered the case of the small noise double limit, in which the noise level in agents' decisions is first taken to zero, and then the population size to infinity. The initial analysis of the small noise limit concerns a sequence of Markov chains on a fixed finite state space; the relevant characterizations of large deviations properties in terms of discrete minimization problems are simple and well-known.
Taking the second limit as the population size grows large turns these discrete minimization problems into continuous optimal control problems. We show that the latter problems possess a linear structure that allows them to be solved analytically.
The present paper begins the analysis of large deviations and stochastic stability when the population size is taken to infinity for a fixed noise level. This analysis concerns a sequence of Markov chains on ever finer grids in the simplex, making the basic large deviations result---our main result here---considerably more difficult than its small-noise counterpart. Future work will provide a full development of Freidlin-Wentzell theory for the large population limit, allowing for mean dynamics with multiple stable states. It will then introduce the second limit as the noise level vanishes, and determine the extent to which the two double limits agree. Further discussion of this research agenda is offered in Section \ref{sec:Disc}.
\section{The Model} We consider a model in which all agents are members of a single population. The extension to multipopulation settings only requires more elaborate notation.
\subsection{Finite-population games}\label{sec:FPopGames} We consider games in which the members of a population of $N$ agents choose actions from the common finite action set $\scrA= \{1, \ldots , n\}$. We describe the population's aggregate behavior by a \emph{population state} $x$, an element of the simplex $X = \{x \in \mathbb{R}^n_+\colon \sum_{i=1}^n x_i = 1\}$, or more specifically, the grid $\mathscr{X}^N = X \cap \frac1N \mathbb{Z}^n =\brace{x \in X\colon Nx \in {\mathbb{Z}}^n}$. The standard basis vector $e_i \in X \subset \mathbb{R}^n$ represents the \emph{pure population state} at which all agents play action $i$.
We identify a \emph{finite-population game} with its payoff function $F^N\colon \mathscr{X}^N \to \mathbb{R}^n$, where $F^N_i(x)\in \mathbb{R}$ is the payoff to action $i$ when the population state is $x \in \mathscr{X}^N$.
\begin{example}[Matching in normal form games]\label{ex:MatchNF} Assume that agents are matched in pairs to play a symmetric two-player normal form game $A \in \mathbb{R}^{n \times n}$, where $A_{ij}$ is the payoff obtained by an $i$ player who is matched with a $j$ player. If each agent is matched with all other agents (but not with himself), then average payoffs in the resulting population game are given by
$F^N_i(x) =\frac1{N-1} (A(Nx - e_i))_i=(Ax)_i + \tfrac1{N-1}((Ax)_i - A_{ii})\,$. \ensuremath{\hspace{4pt}\Diamondblack} \end{example}
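As a purely illustrative aside (not part of the formal development), the matching payoffs above can be evaluated in a few lines of Python; the function name and the vectorization below are our own choices.
\begin{verbatim}
import numpy as np

def matching_payoffs(x, A, N):
    # F^N_i(x) = (A(Nx - e_i))_i / (N - 1): average payoff of an action-i
    # player matched against all other N-1 agents at state x.
    x = np.asarray(x, dtype=float)
    return (N * (A @ x) - np.diag(A)) / (N - 1)
\end{verbatim}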
\begin{example}[Congestion games] To define a \emph{congestion game} (\cite{BecMcGWin56}, \cite{Ros73}), one specifies a collection of facilities $\Lambda$ (e.g., links in a highway network), and associates with each facility $\lambda \in \Lambda$ a function $\ell^N_\lambda \colon \{0, \frac1N, \ldots , 1\} \to \mathbb{R}$ describing the cost (or benefit, if $\ell^N_\lambda <0$) of using the facility as a function of the fraction of the population that uses it. Each action $i \in \scrA$ (a path through the network) requires the facilities in a given set $\Lambda_i \subseteq \Lambda$ (the links on the path), and the payoff to action $i$ is the negative of the sum of the costs accruing from these facilities. Payoffs in the resulting population game are given by $F^N_i(x) = -\sum_{\lambda \in \Lambda_i}\ell^N_\lambda(u_\lambda(x))$, where $u_\lambda(x)=\sum_{i:\: \lambda\in \Lambda_i}x_i$ denotes the total utilization of facility $\lambda$ at state $x$. \ensuremath{\hspace{4pt}\Diamondblack} \end{example}
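The congestion payoffs admit an equally short sketch (again illustrative; the names \texttt{paths} and \texttt{costs} are our own, standing in for the sets $\Lambda_i$ and the functions $\ell^N_\lambda$).
\begin{verbatim}
import numpy as np

def congestion_payoffs(x, paths, costs):
    # F_i(x) = -sum over facilities l on path i of costs[l](utilization of l),
    # where the utilization of l is the total mass on actions whose path uses l.
    utilization = {}
    for i, facilities in enumerate(paths):
        for l in facilities:
            utilization[l] = utilization.get(l, 0.0) + x[i]
    return np.array([-sum(costs[l](utilization[l]) for l in paths[i])
                     for i in range(len(paths))])

# Example: three parallel links with delay functions u, 2u, and 1 + u:
# congestion_payoffs([0.5, 0.3, 0.2], paths=[[0], [1], [2]],
#                    costs=[lambda u: u, lambda u: 2*u, lambda u: 1 + u])
\end{verbatim}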
Because the population size is finite, the payoff vector an agent considers when revising may depend on his current action. To allow for this possibility, we let $F^N_{i \to \cdot}\colon \mathscr{X}^N \to \mathbb{R}^n$ denote the payoff vector considered at state $x$ by an action $i$ player.
\begin{example}[Simple payoff evaluation]\label{ex:Simple} Under \emph{simple payoff evaluation}, all agents' decisions are based on the current vector of payoffs: $F^N_{i\to j}(x)=F^N_j(x)$ for all $i, j \in \scrA$. \ensuremath{\hspace{4pt}\Diamondblack} \end{example}
\begin{example}[Clever payoff evaluation]\label{ex:Clever} Under \emph{clever payoff evaluation}, an action $i$ player accounts for the fact that by switching to action $j$ at state $x$, he changes the state to the adjacent state $y =x + \frac1N(e_j - e_i)$. To do so, he evaluates payoffs according to the \emph{clever payoff vector} $F^N_{i\to j}(x)=F^N_j(x +\tfrac1N(e_j-e_i))$. \footnote{This adjustment is important in finite population models: see \pgcite{SanPGED}{Section 11.4.2} or \cite{SanSta16}.
}
\ensuremath{\hspace{4pt}\Diamondblack} \end{example}
As the assumptions in Section \ref{sec:Processes} will make clear, our results are the same whether simple or clever payoff evaluation is assumed.
\subsection{Revision protocols}\label{sec:RP}
In our model of evolution, each agent occasionally receives opportunities to switch actions. At such moments, an agent decides which action to play next by employing a \emph{protocol} $\rho^N \colon \mathbb{R}^n \times \mathscr{X}^N \to X^n$, with the choice probabilities of a current action $i$ player being described by $\rho^N_{i\, \cdot}\colon \mathbb{R}^n \times \mathscr{X}^N \to X$.
Specifically, if a revising action $i$ player faces payoff vector $\pi \in \mathbb{R}^n$ at population state $x \in \mathscr{X}^N$, then the probability that he proceeds by playing action $j$ is $\rho^N_{i j}(\pi, x)$. We will assume shortly that this probability is bounded away from zero, so that there is always a nonnegligible probability of the revising agent playing each of the actions in $\scrA$; see condition \eqref{eq:LimSPBound} below.
\begin{example}[The logit protocol]\label{ex:Logit} A fundamental example of a revision protocol with positive choice probabilities is the \emph{logit protocol}, defined by \begin{equation}\label{eq:LogitProtocol} \rho^N_{ij}(\pi, x) = \frac{\exp(\eta^{-1}\pi_j)}{\sum_{k \in \scrA}\exp(\eta^{-1}\pi_k)} \end{equation} for some \emph{noise level} $\eta>0$. When $\eta$ is small, an agent using this protocol is very likely to choose an optimal action, but places positive probability on every action, with lower probabilities being placed on worse-performing actions. \ensuremath{\hspace{4pt}\Diamondblack} \end{example}
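For concreteness, the logit rule \eqref{eq:LogitProtocol} can be computed as in the following sketch (Python, illustrative only); subtracting the maximum before exponentiating is a standard numerical device and does not change the probabilities.
\begin{verbatim}
import numpy as np

def logit_choice(payoffs, eta):
    # exp(pi_j / eta) / sum_k exp(pi_k / eta)
    v = np.asarray(payoffs, dtype=float) / eta
    v -= v.max()               # numerical stability; the ratios are unchanged
    w = np.exp(v)
    return w / w.sum()

# logit_choice([1.0, 0.0], eta=0.25) is approximately [0.982, 0.018]
\end{verbatim}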
\begin{example}[Perturbed best response protocols] One can generalize \eqref{eq:LogitProtocol} by assuming that agents' choice probabilities maximize the difference between their expected base payoff and a convex penalty: \begin{equation*} \rho^N_{i\cdot}(\pi, x)=\argmax_{y\in \Int(X)}\paren{\sum_{k\in\scrA}\pi_k y_k -h(y)}, \end{equation*}
where $h \colon \Int(X) \to \mathbb{R}$ is strictly convex and steep, in the sense that $|\nabla h(y)|$ approaches infinity whenever $y$ approaches the boundary of $X$. The logit protocol \eqref{eq:LogitProtocol} is recovered when $h$ is the scaled negated entropy function $h(y)=\eta\sum_{k \in \scrA}y_k \log y_k$. \ensuremath{\hspace{4pt}\Diamondblack} \end{example}
\begin{example}[The pairwise logit protocol]\label{ex:PLogit} Under the \emph{pairwise logit protocol}, a revising agent chooses a candidate action distinct from his current action at random, and then applies the logit rule \eqref{eq:LogitProtocol} only to his current action and the candidate action: \begin{equation*} \rho^N_{ij}(\pi, x) = \begin{cases} \frac1{n-1}\cdot\frac{\exp(\eta^{-1}\pi_j)}{\exp(\eta^{-1}\pi_i)+\exp(\eta^{-1}\pi_j)}&\text{if }j \ne i\\ \frac1{n-1}\sum_{k\ne i}\frac{\exp(\eta^{-1}\pi_i)}{\exp(\eta^{-1}\pi_i)+\exp(\eta^{-1}\pi_k)}&\text{if }j = i.\;\ensuremath{\hspace{4pt}\Diamondblack} \end{cases} \end{equation*}
\end{example}
\begin{example}[Imitation with ``mutations''] Suppose that with probability $1-\varepsilon$, a revising agent picks an opponent at random and switches to her action with probability proportional to the opponent's payoff, and that with probability $\varepsilon>0$ the agent chooses an action at random. If payoffs are normalized to take values between 0 and 1, the resulting protocol takes the form \begin{equation*} \rho^N_{ij}(\pi, x) = \begin{cases} (1-\varepsilon)\,\frac{N}{N-1}x_j\,\pi_j + \frac{\varepsilon}n&\text{if }j \ne i\\ (1-\varepsilon)\paren{\frac{Nx_i -1}{N-1}+\sum_{k\ne i}\frac{N}{N-1}x_k(1 - \pi_k)} + \frac{\varepsilon}n&\text{if }j = i. \end{cases} \end{equation*} The positive mutation rate ensures that all actions are chosen with positive probability. \ensuremath{\hspace{4pt}\Diamondblack} \end{example}
For many further examples of revision protocols, see \cite{SanPGED}.
\subsection{The stochastic evolutionary process}\label{sec:TheSEP}
Together, a population game $F^N$ and a revision protocol $\rho^N$ define a discrete-time stochastic process $\mathbf{X}^N = \{X^{N}_{k}\}_{k=0}^\infty$, which is defined informally as follows: During each period, a single agent is selected at random and given a revision opportunity. The probabilities with which he chooses each action are obtained by evaluating the protocol $\rho^N$ at the relevant payoff vector and population state. Each period of the process $\mathbf{X}^N$ takes $\frac1N$ units of clock time, as this fixes at one the expected number of revision opportunities that each agent receives during one unit of clock time.
More precisely, the process $\mathbf{X}^N$ is a Markov chain with initial condition $X^N_0 \in \mathscr{X}^N$ and transition law \begin{equation}\label{eq:PNEta} \mathbb{P}\paren{X^{N}_{k+1}=y \,\big\vert\, X^{N}_{k}=x}= \begin{cases} x_{i}\, \rho^N_{ij}(F^N_{i\to\cdot}(x),x) & \text{if}\;y=x+\frac1N(e_{j}-e_{i})\text{ and }j\ne i,\\ \sum_{i=1}^{n}x_{i}\, \rho^N_{ii}(F^N_{i\to\cdot}(x),x) & \text{if}\;y=x,\\ 0 & \text{otherwise}. \end{cases} \end{equation} When a single agent switches from action $i$ to action $j$, the population state changes from $x$ to $y=x+\frac1N(e_{j}-e_{i})$. This requires that the revision opportunity be assigned to a current action $i$ player, which occurs with probability $x_i$, and that this player choose to play action $j$, which occurs with probability $\rho^N_{ij}(F^N_{i\to\cdot}(x),x) $. Together these yield the law \eqref{eq:PNEta}.
\begin{example} Suppose that $N$ agents are matched to play the normal form game $A\in \mathbb{R}^{n\times n}$, using clever payoff evaluation and the logit choice rule with noise level $\eta>0$. If the state in period $k$ is $x \in \mathscr{X}^N$, then by Examples \ref{ex:MatchNF}, \ref{ex:Clever}, and \ref{ex:Logit} and equation \eqref{eq:PNEta}, the probability that the state in period $k+1$ is $x+\frac1N(e_{j}-e_{i}) \ne x$ is equal to \footnote{Since the logit protocol is parameterized by a noise level and since clever payoff evaluation is used, this example satisfies the assumptions of our analysis of the small noise double limit in \cite{SanSta16}.} \[ \mathbb{P}\paren{X^{N}_{k+1}=x+\tfrac1N(e_{j}-e_{i}) \,\big\vert\, X^{N}_{k}=x}= x_i \cdot \frac{\exp\paren{\eta^{-1}\cdot\frac1{N-1}(A (Nx-e_i))_j}}{\sum\limits_{\ell \in \scrA}\exp\paren{\eta^{-1}\cdot\frac1{N-1}(A (Nx-e_i))_\ell}} .\;\; \ensuremath{\hspace{4pt}\Diamondblack} \] \end{example}
\subsection{A class of population processes}\label{sec:Processes}
It will be convenient to consider an equivalent class of Markov chains defined using a more parsimonious notation. All Markov chains to come are defined on a probability space $(\Omega, \mathscr{F}, \mathbb{P})$, and we sometimes use the notation $\mathbb{P}_{x}$ to indicate that the Markov chain $\mathbf{X}^N$ under consideration is run from initial condition $x\in\mathscr{X}^N$.
The Markov chain $\mathbf{X}^N = \{X^{N}_{k}\}_{k=0}^\infty$ runs on the discrete grid $\mathscr{X}^N = X \cap \frac1N \mathbb{Z}^n$, with each period taking $\frac1N$ units of clock time, so that each agent expects to receive one revision opportunity per unit of clock time (cf.~Section \ref{sec:MDDA}). We define the law of $\mathbf{X}^N$ by setting an initial condition $X^N_0 \in \mathscr{X}^N$ and specifying subsequent states via the recursion
\begin{equation}\label{eq:RecursiveDef} X^{N}_{k+1}=X^{N}_{k}+\tfrac{1}{N}\zeta^{N}_{k+1}. \end{equation}
The normalized increment $\zeta^{N}_{k+1}$ follows the conditional law $\nu^{N}(\,\cdot\,|X^{N}_{k})$, defined by \begin{equation}\label{eq:CondLaw}
\nu^{N}(\mathscr{z}|x)= \begin{cases} x_{i}\, \sigma^N_{ij}(x) & \text{if}\;\mathscr{z}=e_{j}-e_{i}\text{ and }j\ne i,\\ \sum_{i=1}^{n}x_{i}\, \sigma^N_{ii}(x) & \text{if}\;\mathscr{z}=\mathbf{0},\\ 0 &\text{otherwise}, \end{cases} \end{equation} where the function $\sigma^N\colon \mathscr{X}^N \to \mathbb{R}^{n\times n}_+$ satisfies $\sum_{j \in \scrA}\sigma^N_{ij}(x) = 1$ for all $i \in \scrA$ and $x \in \mathscr{X}^N$. The \emph{switch probability} $\sigma^N_{ij}(x)$ is the probability that an agent playing action $i $ who receives a revision opportunity proceeds by playing action $j$. The model described in the previous sections is the case in which $\sigma^N_{ij}(x) =\rho^N_{ij}(F^N_{i\to\cdot}(x),x)$.
We observe that the support of the transition measure $\nu^{N}(\,\cdot\,|x)$ is contained in the set of \emph{raw increments} $\mathscr{Z} = \{e_j - e_i\colon i, j \in \scrA\}$. Since an unused action cannot become less common, the support of $\nu^{N}(\,\cdot\,|x)$ is contained in $\mathscr{Z}(x) =\{\mathscr{z} \in \mathscr{Z} \colon x_i = 0 \Rightarrow \mathscr{z}_i \geq 0\}$.
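The recursion \eqref{eq:RecursiveDef}--\eqref{eq:CondLaw} translates directly into a simulation routine. The sketch below is our own illustration: \texttt{switch\_probs} is a hypothetical function returning the matrix $\sigma^N(x)$, with rows summing to one, and \texttt{rng} is a NumPy random generator.
\begin{verbatim}
import numpy as np

def one_period(x, N, switch_probs, rng):
    # One period of the aggregate process: the revision opportunity goes to a
    # current action-i player with probability x_i, who then plays action j
    # with probability sigma^N_ij(x); the state moves by (e_j - e_i)/N.
    n = len(x)
    i = rng.choice(n, p=x)
    j = rng.choice(n, p=switch_probs(x)[i])
    y = np.array(x, dtype=float)
    y[i] -= 1.0 / N
    y[j] += 1.0 / N
    return y

# Iterating one_period N*T times simulates T units of clock time.
\end{verbatim}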
Our large deviations results concern the behavior of sequences $\{\mathbf{X}^N\}_{N=N_0}^\infty$ of Markov chains defined by \eqref{eq:RecursiveDef} and \eqref{eq:CondLaw}. To allow for finite population effects, we permit the switch probabilities $\sigma^N_{ij}(x)$ to depend on $N$ in a manner that becomes negligible as $N$ grows large. Specifically, we assume that there is a Lipschitz continuous function $\sigma \colon X \to \mathbb{R}^{n\times n}_+$ that describes the limiting switch probabilities, in the sense that \begin{equation}\label{eq:LimSPs}
\lim_{N\to \infty}\max_{x \in \mathscr{X}^N}\max_{i,j\in\scrA}|\sigma^N_{ij}(x)-\sigma_{ij}(x) |=0. \end{equation} In the game model, this assumption holds when the sequences of population games $F^N$ and revision protocols $\rho^N$ have at most vanishing finite population effects, in that they converge to a limiting population game $F \colon X \to \mathbb{R}^n$ and a limiting revision protocol $\rho \colon \mathbb{R}^n \times X \to X^n$, both of which are Lipschitz continuous.
In addition, we assume that limiting switch probabilities are bounded away from zero: there is a $\varsigma >0$ such that \begin{equation}\label{eq:LimSPBound} \min_{x \in X}\min_{i,j \in \scrA}\sigma_{ij}(x) \geq \varsigma. \end{equation} This assumption is satisfied in the game model when the choice probabilities $\rho^N_{ij}(\pi, x)$ are bounded away from zero. \footnote{More specifically, the bound on choice probabilities must hold uniformly over
the payoff vectors $\pi$ that may arise in the population games $F^N$.} This is so under all of the revision protocols from Section \ref{sec:RP}. Assumption \eqref{eq:LimSPBound} and the transition law \eqref{eq:CondLaw} imply that the Markov chain $\mathbf{X}^N$ is aperiodic and irreducible for $N$ large enough. Thus for such $N$, $\mathbf{X}^N$ admits a unique stationary distribution, $\mu^{N}$, which is both the limiting distribution of the Markov chain and its limiting empirical distribution along almost every sample path.
Assumptions \eqref{eq:LimSPs} and \eqref{eq:LimSPBound} imply that the transition kernels \eqref{eq:CondLaw} of the Markov chains $\mathbf{X}^N$ approach a limiting kernel $\nu \colon X \to \Delta(\mathscr{Z})$, defined by \begin{equation}\label{eq:CondLawLimit}
\nu(\mathscr{z}|x)= \begin{cases} x_{i}\, \sigma_{ij}(x) & \text{if}\;\mathscr{z}=e_{j}-e_{i}\text{ and }j\ne i\\ \sum_{i\in\scrA}x_{i}\, \sigma_{ii}(x) & \text{if}\;\mathscr{z}=\mathbf{0},\\ 0 &\text{otherwise}. \end{cases} \end{equation} Condition \eqref{eq:LimSPs} implies that the convergence of $\nu^N$ to $\nu$ is uniform: \begin{equation}\label{eq:LimTrans}
\lim_{N\to \infty}\max_{x \in \mathscr{X}^N}\max_{\mathscr{z} \in \mathscr{Z}}\abs{\nu^N(\mathscr{z}|x)-\nu(\mathscr{z}|x) }=0. \end{equation}
The probability measures $\nu(\,\cdot \,|x)$ depend Lipschitz continuously on $x$, and by virtue of condition \eqref{eq:LimSPBound}, each measure $\nu(\,\cdot \,|x)$ has support $\mathscr{Z}(x)$.
\section{Sample Path Large Deviations}\label{sec:SPLD}
\subsection{Deterministic approximation}\label{sec:MDDA}
Before considering the large deviations properties of the processes $\mathbf{X}^N$, we describe their typical behavior. By definition, each period of the process $\mathbf{X}^N=\{X^{N}_{k}\}_{k=0}^\infty$ takes $\frac1N$ units of clock time, and leads to a random increment of size $\frac1N$. Thus when $N$ is large, each brief interval of clock time contains a large number of periods during which the transition measures $\nu^{N}(\,\cdot\,|X^{N}_{k})$ vary little. Intuition from the law of large numbers then suggests that over this interval, and hence over any finite concatenation of such intervals, the Markov chain $\mathbf{X}^N$ should follow an almost deterministic trajectory, namely the path determined by the process's expected motion.
To make this statement precise, note that the expected increment of the process $\mathbf{X}^N$ from state $x$ during a single period is \begin{equation}\label{eq:ExpInc}
\mathbb{E}(X^N_{k+1} - X^N_k | X^N_k = x) = \frac1N \mathbb{E}(\zeta^N_k | X^N_k = x)=\frac1N \sum_{\mathscr{z} \in \mathscr{Z}}\mathscr{z}\, \nu^N(\mathscr{z} | x) . \end{equation} Since there are $N$ periods per time unit, the expected increment per time unit is obtained by multiplying \eqref{eq:ExpInc} by $N$. Doing so and taking $N$ to its limit defines the \emph{mean dynamic}, \begin{subequations}\label{eq:MD} \begin{equation}\label{eq:MDZeta} \dot x
= \sum_{\mathscr{z} \in \mathscr{Z}}\mathscr{z}\, \nu(\mathscr{z} | x)= \mathbb{E}\zeta_x, \end{equation}
where $\zeta_x$ is a random variable with law $\nu(\cdot| x)$. Substituting in definition \eqref{eq:CondLawLimit} and simplifying yields the coordinate formula \begin{equation}\label{eq:MDCoor} \dot x_i = \sum_{j \in \scrA} x_j \,\sigma_{ji}(x)- x_i. \end{equation} \end{subequations} Assumption \eqref{eq:LimSPBound} implies that the boundary of the simplex is repelling under \eqref{eq:MD}. Since the right hand side of \eqref{eq:MD} is Lipschitz continuous, it admits a unique forward solution $\{x_t\}_{t\ge 0}$ from every initial condition $x_0 =x$ in $X$, and this solution does not leave $X$.
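The coordinate formula \eqref{eq:MDCoor} is easily integrated numerically; the following forward Euler sketch (again our own illustration, reusing the hypothetical \texttt{switch\_probs} map for the limiting $\sigma$) can be compared with simulated sample paths of $\hat\mathbf{X}^N$ to visualize the deterministic approximation stated below.
\begin{verbatim}
import numpy as np

def mean_dynamic_path(x0, switch_probs, T, dt=1e-3):
    # Forward Euler approximation of x_i' = sum_j x_j sigma_ji(x) - x_i.
    x = np.asarray(x0, dtype=float)
    path = [x.copy()]
    for _ in range(int(T / dt)):
        sigma = switch_probs(x)            # n-by-n matrix, rows summing to one
        x = x + dt * (sigma.T @ x - x)     # coordinate formula above
        path.append(x.copy())
    return np.array(path)
\end{verbatim}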
A version of the deterministic approximation result to follow was first proved by \cite{Kur70}, with the exponential rate of convergence established by \cite{Ben98}; see also \cite{BenWei03,BenWei09}. To state the result, we let $|\cdot|$ denote the $\ell^1$ norm on $\mathbb{R}^n$, and we
define $\hat\mathbf{X}^N=\{\hat {X}_t^N \}_{t\ge 0}$ to be the piecewise affine interpolation of the process $\mathbf{X}^N$: \[ \hat {X}_t^N =X_{\lfloor Nt \rfloor}^N + (Nt-\lfloor Nt \rfloor)(X_{\lfloor Nt \rfloor+1}^N -X_{\lfloor Nt \rfloor}^N). \]
\begin{theorem}\label{thm:DetApprox} Suppose that $\{X^{N}_{k}\}$ has initial condition $x^N \in \mathscr{X}^N$, and let $\lim_{N\to \infty}x^N = x \in X$. Let $\{x_t\}_{t \geq 0}$ be the solution to \eqref{eq:MD} with $x_{0}=x$. For any $T< \infty$ there exists a constant $c > 0$ independent of $x$ such that for all $\varepsilon > 0$ and $N$ large enough, \[
\mathbb{P}_{x^N}\!\left( {\sup\limits_{t\in [0,T]} \left|
{\hat {X}_t^N -x_t } \right|\ge \varepsilon } \right)\le 2n\exp (-c\varepsilon ^2N). \] \end{theorem}
\subsection{The Cram\'{e}r transform and relative entropy}\label{sec:Cramer}
Stating our large deviations results requires some additional machinery. \footnote{For further background on this material,
see \cite{DemZei98} or \cite{DupEll97}.} Let $\mathbb{R}^n_{0}=\{z\in\mathbb{R}^n\colon\sum_{i\in \scrA}z_{i}=0\}$ denote the set of vectors tangent to the simplex.
The \textit{Cram\'{e}r transform} $L(x, \cdot):\mathbb{R}^n_{0}\to[0,\infty]$ of probability distribution $\nu(\cdot|x)\in\Delta(\mathscr{Z})$ is defined by \begin{equation}\label{eq:CramerTr}
L(x, z)=\sup \limits_{ u\in \mathbb{R}^n_0 }\left(\abrack{ u, z}-H(x, u)\right), \text{ where }\:H(x,u)= \log\paren{\sum_{\mathscr{z} \in \mathscr{Z}}\mathrm{e}^{\langle u,\mathscr{z} \rangle}\,\nu(\mathscr{z}|x)}. \end{equation}
In words, $L(x, \cdot)$ is the convex conjugate of the log moment generating function of $\nu(\cdot|x)$. It is well known that $L(x, \cdot)$ is convex, lower semicontinuous, and nonnegative, and that $L(x, z) =0$ if and only if $z= \mathbb{E} \zeta_x$; moreover, $L(x, z) < \infty$ if and only if $z \in Z(x)$, where $Z(x) = \conv(\mathscr{Z}(x))$ is the convex hull of the support of $\nu(\,\cdot\,|x)$.
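Since $\mathscr{Z}$ is finite, the supremum in \eqref{eq:CramerTr} can be evaluated numerically. Because every increment in $\mathscr{Z}$ sums to zero, the objective is unchanged when a constant is added to each coordinate of $u$, so an unconstrained optimizer over $\mathbb{R}^n$ suffices. The sketch below is illustrative only and is reliable only for $z$ in the relative interior of $Z(x)$, where the supremum is attained.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def cramer_transform(z, increments, probs):
    # L(x, z) = sup_u <u, z> - H(x, u), where nu(.|x) puts mass probs[k] on the
    # row increments[k].  The optimizer diverges if z lies outside conv(increments).
    Z = np.asarray(increments, dtype=float)    # shape (m, n)
    p = np.asarray(probs, dtype=float)         # shape (m,)
    z = np.asarray(z, dtype=float)

    def negative_objective(u):
        H = np.log(np.sum(p * np.exp(Z @ u)))  # log moment generating function
        return -(u @ z - H)

    res = minimize(negative_objective, np.zeros(Z.shape[1]))
    return -res.fun
\end{verbatim}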
To help interpret what is to come, we recall \emph{Cram\'{e}r's theorem}:
Let $\{\zeta_x^k\}_{k=1}^\infty$ be a sequence of i.i.d.\ random variables with law $\nu(\,\cdot\,|x)$, and let $\bar \zeta_x^N $ be the sequence's $N$th sample mean. Then for any set $V\subseteq \mathbb{R}^n$, \begin{subequations} \begin{gather} \limsup \limits_{N\to \infty } \tfrac 1N \log \mathbb{P}(\bar \zeta_x^N \in V)\leq-\inf \limits_{ z\in \cl( V)} L (x, z),\;\text{ and}\label{eq:LDPIIDUpper}\\ \liminf \limits_{N\to \infty } \tfrac 1N \log \mathbb{P}(\bar \zeta_x^N \in V)\geq-\inf \limits_{ z\in \Int( V)} L (x, z).\label{eq:LDPIIDLower} \end{gather} \end{subequations} Thus for ``nice'' sets $V$, those for which the right-hand sides of the upper and lower bounds \eqref{eq:LDPIIDUpper} and \eqref{eq:LDPIIDLower} are equal, this common value is the exponential rate of decay of the probability that $\bar \zeta_x^N$ lies in $V$.
Our analysis relies heavily on a well-known characterization of the Cram\'er transform as a constrained minimum of relative entropy, a characterization that also provides a clear intuition for Cram\'{e}r's theorem. Recall that the \emph{relative entropy} of probability measure $\lambda\in\Delta(\mathscr{Z})$ given probability measure $\pi\in\Delta(\mathscr{Z})$ is the extended real number \begin{equation*}
R(\lambda||\pi)=\sum_{\mathscr{z}\in\mathscr{Z}}\lambda(\mathscr{z})\log\frac{\lambda(\mathscr{z})}{\pi(\mathscr{z})}, \end{equation*} with the conventions that $0\log 0 = 0\log\frac00=0$.
It is well known that $R(\cdot||\cdot)$ is convex, lower semicontinuous, and nonnegative, that
$R(\lambda||\pi)= 0$ if and only if $\lambda=\pi$, and that
$R(\lambda||\pi)<\infty$ if and only if $\support(\lambda)\subseteq\support(\pi)$.
A basic interpretation of relative entropy is provided by \emph{Sanov's theorem}, which concerns the asymptotics of the empirical distributions $\mathscr{E}^N_x$ of the sequence $\{\zeta_x^k\}_{k=1}^\infty$, defined by $\mathscr{E}^N_x(\mathscr{z}) = \frac1N\sum_{k=1}^N 1(\zeta_x^k =\mathscr{z})$.
This theorem says that for every set of distributions $\Lambda \subseteq \Delta(\mathscr{Z})$, \begin{subequations} \begin{gather}
\limsup \limits_{N\to \infty } \tfrac 1N \log \mathbb{P}(\mathscr{E}^N_x \in \Lambda)\leq-\:\inf_{\lambda \in \Lambda}\;R(\lambda||\nu(\,\cdot\,|x)),\;\text{ and}\label{eq:SanovUpper}\\
\liminf \limits_{N\to \infty } \tfrac 1N \log \mathbb{P}(\mathscr{E}^N_x \in \Lambda)\geq-\inf_{\lambda \in \Int(\Lambda)} R(\lambda||\nu(\,\cdot\,|x)).\label{eq:SanovLower} \end{gather} \end{subequations} Thus for ``nice'' sets $\Lambda$, the probability that the empirical distribution lies in $\Lambda$ decays at an exponential rate given by the minimal value of relative entropy on $\Lambda$.
The intuition behind Sanov's theorem and relative entropy is straightforward. We can express the probability that the $N$th empirical distribution is the feasible distribution $\lambda\in \Lambda$
as the product of the probability of obtaining a particular realization of $\{\zeta_x^k\}_{k=1}^\infty$ with empirical distribution $\lambda$ and the number of such realizations: \[
\mathbb{P}(\mathscr{E}^N_x = \lambda) = \prod_{\mathscr{z} \in \mathscr{Z}}\nu(\mathscr{z}|x)^{N\lambda(\mathscr{z})} \times \frac{N!}{\prod_{\mathscr{z} \in \mathscr{Z}}\,(N\lambda(\mathscr{z}))!}. \] Then applying Stirling's approximation $n! \approx n^n e^{-n}$ yields \[
\frac 1N \log \mathbb{P}(\mathscr{E}^N_x = \lambda) \approx \sum_{\mathscr{z} \in \mathscr{Z}}\lambda(\mathscr{z}) \log \nu(\mathscr{z}|x) - \sum_{\mathscr{z} \in \mathscr{Z}}\lambda(\mathscr{z}) \log \lambda(\mathscr{z}) = -R(\lambda||\nu(\,\cdot\,|x)). \] The rate of decay of $\mathbb{P}(\mathscr{E}^N_x \in \Lambda)$ is then determined by the ``most likely'' empirical distribution in $\Lambda$: that is, by the one whose relative entropy is smallest.
\footnote{Since the number of empirical distributions for sample size $N$ grows polynomially (it is less than $(N+1)^{|\mathscr{Z}|}$), the rate of decay cannot be determined by a large set of distributions in $\Lambda$ with higher relative entropies.}
The representation of the Cram\'er transform in terms of relative entropy is obtained by a variation on the final step above: given Sanov's theorem, the rate of decay of obtaining a sample mean $\bar \zeta_x^N $ in $V \subset \mathbb{R}^n$ should be determined by the smallest relative entropy associated with a probability distribution whose mean lies in $V$. \footnote{This general idea---the preservation of large deviation principles under continuous functions---is known as the \emph{contraction principle}. See \cite{DemZei98}.} Combining this idea with \eqref{eq:LDPIIDUpper} and \eqref{eq:LDPIIDLower} suggests the representation \footnote{See \cite{DemZei98}, Section 2.1.2 or \cite{DupEll97}, Lemma 6.2.3(f). } \begin{equation}\label{eq:CramerRep}
L(x, z)=\min_{\lambda\in\Delta(\mathscr{Z})}\left\{R(\lambda||\nu(\cdot|x))\colon \sum_{\mathscr{z}\in\mathscr{Z}}\mathscr{z}\lambda(\mathscr{z})=z\right\}. \end{equation} If $z \in Z(x)$, so that $L(x,z) < \infty$, then the minimum in \eqref{eq:CramerRep} is attained uniquely.
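The representation \eqref{eq:CramerRep} suggests a second numerical route: minimize relative entropy over distributions on $\mathscr{Z}$ whose mean is $z$. The sketch below (illustrative, and again assuming $z$ lies in the relative interior of $Z(x)$) uses a generic constrained solver; up to numerical tolerance, its value agrees with that of the conjugate computation above.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def cramer_via_entropy(z, increments, probs):
    # Minimize R(lambda || nu(.|x)) over distributions lambda on the increments,
    # subject to the mean constraint sum_k lambda_k * increments[k] = z.
    Z = np.asarray(increments, dtype=float)    # shape (m, n)
    nu = np.asarray(probs, dtype=float)        # shape (m,)
    z = np.asarray(z, dtype=float)
    m = len(nu)

    def rel_entropy(lam):
        lam = np.clip(lam, 1e-12, None)        # 0 log 0 = 0 by convention
        return float(np.sum(lam * np.log(lam / nu)))

    constraints = (
        {'type': 'eq', 'fun': lambda lam: lam.sum() - 1.0},
        # the last mean coordinate is implied, since increments and z sum to zero
        {'type': 'eq', 'fun': lambda lam: Z.T[:-1] @ lam - z[:-1]},
    )
    res = minimize(rel_entropy, nu, method='SLSQP',
                   bounds=[(0.0, 1.0)] * m, constraints=constraints)
    return res.fun
\end{verbatim}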
\subsection{Path costs}
To state the large deviation principle for the sequence of interpolated processes $\{\hat\mathbf{X}^{N}\}_{N=N_0}^\infty$, we must introduce a function that characterizes the rates of decay of the probabilities of sets of sample paths through the simplex $X$.
Doing so requires some preliminary definitions. For $T \in (0, \infty)$, let $\mathscr{C}[0, T]$ denote the set of continuous paths $\phi\colon [0,T]\to X$ through $X$ over time interval $[0, T]$, endowed with the supremum norm. Let $\mathscr{C}_x[0, T]$ denote the set of such paths with initial condition $\phi_{0} = x$, and let $\scrA\scrC_{x}[0, T]$ be the set of absolutely continuous paths in $\mathscr{C}_x[0, T]$.
We define the \emph{path cost function} (or \emph{rate function}) $c_{x,T}\colon \mathscr{C}[0, T]\to [0, \infty ]$ by \begin{equation}\label{eq:PathCost} c_{x,T} (\phi )= \begin{cases} \int_{0}^{T} L (\phi _t ,\dot {\phi }_t )\,\mathrm{d} t & \text{if }\phi\in\scrA\scrC_{x}[0,T],\\ \infty & \text{otherwise}. \end{cases} \end{equation} By Cram\'er's theorem, $L (\phi _t ,\dot {\phi }_t )$ describes the ``difficulty'' of proceeding from state $\phi_t$
in direction $\dot {\phi }_t$ under the transition laws of our Markov chains. Thus the path cost $c_{x,T} (\phi )$ represents the ``difficulty'' of following the entire path $\phi$. Since $L(x, z) = 0$ if and only if $z = \mathbb{E} \zeta_x$, path $\phi\in \mathscr{C}_x[0, T]$ satisfies $c_{x,T}(\phi) = 0$ if and only if it is the solution to the mean dynamic \eqref{eq:MD} from state $x$. In light of definition \eqref{eq:PathCost}, we sometimes refer to the function $L\colon X \times \mathbb{R}^n \to [0, \infty]$ as the \emph{running cost function}.
As illustrated by Cram\'er's theorem, the rates of decay described by large deviation principles are defined in terms of the smallest value of a function over the set of outcomes in question. This makes it important for such functions to satisfy lower semicontinuity properties. The following result, which follows from Proposition 6.2.4 of \cite{DupEll97}, provides such a property.
\begin{proposition}\label{prop:GoodRF} The function $c_{x,T}$ is a $($\emph{good}$)$ \emph{rate function}: its lower level sets $\{\phi\in\mathscr{C}\colon c_{x,T}(\phi)\leq M\}$ are compact. \end{proposition}
\subsection{A sample path large deviation principle}\label{sec:LD}
Our main result, Theorem \ref{thm:SPLDP}, shows that the sample paths of the interpolated processes $\hat\mathbf{X}^N$ satisfy a large deviation principle with rate function \eqref{eq:PathCost}. To state this result, we use the notation $\hat\mathbf{X}^{N}_{[0,T]}$ as shorthand for $\{\hat {X}_t^{N} \}_{t\in [0,T]}$.
\begin{theorem} \label{thm:SPLDP}\text{} Suppose that the processes $\{\hat\mathbf{X}^{N}\}_{N=N_0}^\infty$ have initial conditions $x^N \in \mathscr{X}^N$ satisfying $\lim\limits_{N\to\infty}x^N = x \in X$. Let $\Phi \subseteq \mathscr{C}[0, T]$ be a Borel set. Then \begin{subequations} \begin{gather}\label{eq:SPLDUpper} \limsup \limits_{N\to \infty } \frac 1N \log \mathbb{P}_{x^N}\!\paren{\hat\mathbf{X}^{N}_{[0,T]}\in \Phi } \le -\inf \limits_{\phi \in \cl(\Phi)} c_{x,T} (\phi ),\,\text{ and}\\ \label{eq:SPLDLower} \liminf\limits_{N\to \infty } \frac 1N \log \mathbb{P}_{x^N}\!\paren{\hat\mathbf{X}^{N}_{[0,T]}\in \Phi } \ge -\inf \limits_{\phi \in \Int(\Phi)} c_{x,T} (\phi ). \end{gather} \end{subequations} \end{theorem} We refer to inequality \eqref{eq:SPLDUpper} as the \emph{large deviation principle upper bound}, and to \eqref{eq:SPLDLower} as the \emph{large deviation principle lower bound}.
While Cram\'{e}r's theorem concerns the probability that the sample mean of $N$ i.i.d. random variables lies in a given subset of $\mathbb{R}^n$ as $N$ grows large, Theorem \ref{thm:SPLDP} concerns the probability that the sample path of the process $\hat\mathbf{X}^{N}_{[0,T]}$ lies in a given subset of $\mathscr{C}[0, T]$ as $N$ grows large. If $\Phi \subseteq \mathscr{C}[0, T]$ is a set of paths for which the infima in \eqref{eq:SPLDUpper} and \eqref{eq:SPLDLower} are equal and attained at some path $\phi^{\mathlarger *}$, then Theorem \ref{thm:SPLDP} shows that the probability that the sample path of $\hat\mathbf{X}^{N}_{[0,T]}$ lies in $\Phi$ is of order $\exp(-Nc_{x,T}(\phi^{\mathlarger *}))$.
\subsection{Uniform results}
Applications to Freidlin-Wentzell theory require uniform versions of the previous two results, allowing for initial conditions $x^N \in \mathscr{X}^N$ that take values in compact subsets of $X$. We therefore note the following extensions of Proposition \ref{prop:GoodRF} and Theorem \ref{thm:SPLDP}.
\begin{proposition}\label{prop:UniformGoodRF} For any compact set $K\subseteq X$ and any $M<\infty$, the sets $\bigcup_{x \in K}\{\phi\in\mathscr{C}[0,T]\colon c_{x,T}(\phi)\leq M\}$ are compact. \end{proposition}
\begin{theorem} \label{thm:USPLDP} Let $\Phi \subseteq \mathscr{C}[0, T]$ be a Borel set. For every compact set $K\subseteq X$,
\begin{subequations} \begin{gather}\label{eq:uniformSPLDUpper} \limsup \limits_{N\to \infty } \frac 1N\log\paren{\sup_{x^N\in K \cap \mathscr{X}^N}\mathbb{P}_{x^N}\!\paren{\hat\mathbf{X}^{N}_{[0,T]}\in \Phi }}\le -\inf_{x\in K}\inf \limits_{\phi \in \cl(\Phi)} c_{x,T} (\phi ),\,\text{ and}\\ \label{eq:uniformSPLDLower} \liminf\limits_{N\to \infty } \frac 1N \log\paren{\inf_{x^N\in K\cap \mathscr{X}^N}\mathbb{P}_{x^N}\!\paren{\hat\mathbf{X}^{N}_{[0,T]}\in \Phi } }\ge -\sup_{x\in K}\inf \limits_{\phi \in \Int(\Phi)} c_{x,T} (\phi ). \end{gather} \end{subequations} \end{theorem}
The proof of Proposition \ref{prop:UniformGoodRF} is an easy extension of that of Proposition 6.2.4 of \cite{DupEll97}; compare p.~165 of \cite{DupEll97}, where the property being established is called \emph{compact level sets uniformly on compacts}.
Theorem \ref{thm:USPLDP}, the \emph{uniform large deviation principle}, follows from Theorem \ref{thm:SPLDP} and an elementary compactness argument; compare the proof of Corollary 5.6.15 of \cite{DemZei98}.
\section{Applications}\label{sec:App}
To be used in most applications, the results above must be combined with ideas from Freidlin-Wentzell theory. In this section, we use the large deviation principle to study the frequency of excursions from a globally stable rest point and the asymptotics of the stationary distribution, with a full analysis of the case of logit choice in potential games. We then remark on future applications to games with multiple stable equilibria, and the wider scope for analytical solutions that may arise by introducing a second limit.
\subsection{Excursions from a globally attracting state and stationary distributions}\label{sec:Exit}
In this section, we describe results on excursions from stable rest points that can be obtained by combining the results above with the work of \cite{FreWen98} and refinements due to \cite{DemZei98}, which consider this question in the context of diffusion processes with a vanishingly small noise parameter. To prove the results in our setting, one must adapt arguments for diffusions to sequences of processes running on increasingly fine finite state spaces. As an illustration of the difficulties involved, observe that while a diffusion process is a Markov process with continuous sample paths, our original process $\mathbf{X}^{N}$ is Markov but with discrete sample paths, while our interpolated process $\hat\mathbf{X}^{N}$ has continuous sample paths but is not Markov. \footnote{When the interpolated process $\hat\mathbf{X}^{N}$ is halfway between adjacent states $x$ and $y$ at time $\frac{k+1/2}N$, its position at time $\frac{k}N$ determines its position at time $\frac{k+1}N$.} Since neither process possesses both desirable properties, a complete analysis of the problem is quite laborious. We therefore only sketch the main arguments here, and will present a detailed analysis in future work.
Consider a sequence of processes $\mathbf{X}^{N}$ satisfying the assumptions above and whose mean dynamic \eqref{eq:MD} has a globally stable rest point $x^{\mathlarger *}$. We would like to estimate the time until the process exits a given open set $O\subset X$ containing $x^{\mathlarger *}$. By the large deviations logic described in Section \ref{sec:Cramer}, we expect this time should be determined by the cost of the least cost path that starts at $x^{\mathlarger *}$ and leaves $O$. With this in mind, let $\partial O$ denote the boundary of $O$ relative to $X$, and define \begin{gather} C_y = \inf_{T>0}\:\inf_{\phi\in\mathscr{C}_{x^{\mathlarger *}}[0,T]:\:\phi(T)=y}c_{x^{\mathlarger *}\!,T}(\phi)\;\text{ for } y \in X,\label{eq:yCost}\\ C_{\partial O}=\inf_{y \in \partial O} C_y.\label{eq:ExitCost} \end{gather} Thus $C_y$ is the lowest cost of a path from $x^{\mathlarger *}$ to $y$, and the \emph{exit cost} $C_{\partial O}$ is the lowest cost of a path that leaves $O$.
Now define
$\hat\tau^{N}_{\partial O}=\inf\{t\geq 0\colon\hat{X}^{N}_{t}\in \partial O\}$
to be the random time at which the interpolated process $\hat\mathbf{X}^{N}$ hits the boundary $\partial O$ of $O$. If this boundary satisfies a mild regularity condition,
\footnote{The condition requires that for all $\delta >0$ small enough, there is a nonempty closed set $K_\delta \subset X$ disjoint from $\cl(O)$ such that for all $x \in \partial O$, there exists a $y \in K_\delta$ satisfying $|x-y|=\delta$.
} we can show that for all $\varepsilon>0$ and all sequences of $x^N \in \mathscr{X}^N$ converging to some $x \in O$, we have \begin{gather} \lim_{N\to\infty}\mathbb{P}_{x^N}\!\paren{C_{\partial O}-\varepsilon < \tfrac1N\log\hat\tau^{N}_{\partial O} <C_{\partial O}+\varepsilon}=1 \:\text{ and}\label{eq:ETBound}\\ \lim_{N\to\infty}\tfrac{1}{N}\log\mathbb{E}_{x^N}\hat\tau^{N}_{\partial O}=C_{\partial O}.\label{eq:ETExBound} \end{gather}
That is, the time until exit from $O$ is of approximate order $\exp(NC_{\partial O})$ with probability near 1, and the expected time until exit from $O$ is of this order as well. Since stationary distribution weights are inversely proportional to expected return times, equation \eqref{eq:ETExBound} can be used to show that the rates of decay of stationary distribution weights are also determined by minimal costs of paths. If we let $B_\delta(y) = \{x \in X \colon |y-x|< \delta\}$, then for all $y \in X$ and $\varepsilon>0$, there is a $\delta$ sufficiently small that \begin{equation}\label{eq:LDSD} -C_y - \varepsilon \leq \tfrac{1}{N}\log \mu^{N}(B_\delta(y))\leq -C_y + \varepsilon \end{equation} for all large enough $N$.
The main ideas of the proofs of \eqref{eq:ETBound} and \eqref{eq:ETExBound} are as follows. To prove the upper bounds, we use the LDP lower bound to show there is a finite duration $T$ such that the probability of reaching $\partial O$ in $T$ time units starting from any state in $O$ is at least $q^N_T=\exp(-N(C_{\partial O}+\varepsilon))$. It then follows from the strong Markov property
that the probability of failing to reach $\partial O$ within $kT$ time units is at most $(1-q^N_T)^k$. Put differently, if we define the random variable $R^N_T$ to equal $k$ if $\partial O$ is reached between times $(k-1)T$ and $kT$, then the distribution of $R^N_T$ is stochastically dominated by the geometric distribution with parameter $q^N_T$. It follows that the expected time until $\partial O$ is reached is at most $T\cdot\mathbb{E} R^N_T \leq T/q^N_T = T\exp(N(C_{\partial O}+\varepsilon))$, yielding the upper bound in \eqref{eq:ETExBound}. The upper bound in \eqref{eq:ETBound} then follows from Chebyshev's inequality.
To prove the lower bounds in \eqref{eq:ETBound} and \eqref{eq:ETExBound}, we again view the process $\hat\mathbf{X}^N$ as making a series of attempts to reach $\partial O$.
Each attempt requires at least $\delta>0$ units of clock time, and the LDP upper bound implies that for $N$ large enough, an attempt succeeds with probability less than $\exp(-N(C_{\partial O} - \frac\eps2))$.
Thus to reach $\partial O$ within $k\delta$ time units, one of the first $k$ attempts must succeed, and this has probability less than $k \exp(-N(C_{\partial O} - \frac\eps2))$. Choosing $k \approx \delta^{-1}\exp(N(C_{\partial O} -\varepsilon))$, we conclude that the probability of exiting $O$ in $\exp(N(C_{\partial O}-\varepsilon))$ time units is less than $k \exp(-N(C_{\partial O} - \frac\eps2)) \approx \delta^{-1}\exp(-N\varepsilon/2)$. This quantity vanishes as $N$ grows large, yielding the lower bound in \eqref{eq:ETBound}; then Chebyshev's inequality gives the lower bound in \eqref{eq:ETExBound}.
\subsection{Logit evolution in potential games}
We now apply the results above in a context for which the exit costs $C_O$ can be computed explicitly: that of evolution in potential games under the logit choice rule. Consider a sequence of stochastic evolutionary processes $\{\mathbf{X}^{N}\}_{N=N_0}^\infty$ derived from population games $F^N$ and revision protocols $\rho^N$ that converge uniformly to Lipschitz continuous limits $F$ and $\rho$ (see Section \ref{sec:Processes}), where $\rho$ is the logit protocol with noise level $\eta >0$ (Example \ref{ex:Logit}). Theorem \ref{thm:DetApprox} implies that when $N$ is large, the process $\hat\mathbf{X}^{N}$ is well-approximated over fixed time spans by solutions to the mean dynamic \eqref{eq:MD}, which in the present case is the \emph{logit dynamic} \begin{equation}\label{eq:LogitDyn} \dot x = M^\eta(F(x)) - x,\;\text{ where }\;M^\eta_j(\pi) = \frac{\exp(\eta^{-1}\pi_j)}{\sum_{k \in \scrA}\exp(\eta^{-1}\pi_k)}. \end{equation}
Now suppose in addition that the limiting population game $F$ is a \emph{potential game} (\cite{San01}), meaning that there is a function $f \colon \mathbb{R}^n_+ \to \mathbb{R}$ such that $\nabla f(x) = F(x)$ for all $x \in X$. \footnote{The analysis to follow only requires the limiting game $F$ to be a potential game. In particular, there is no need for the convergent sequence $\{F^N\}$ of finite-population games to consist of potential games (as defined in \cite{MonSha96}), or to assume that any of the processes $\hat\mathbf{X}^N$ are reversible (cf.~\cite{Blu97}).} In this case, \cite{HofSan07} establish the following global convergence result.
\begin{proposition}\label{prop:LogPotGC} If $F$ is a potential game, then the \emph{logit potential function} \begin{equation}\label{eq:LogitPotential} f^\eta(x) = \eta^{-1}f(x) - h(x), \quad h(x)=\sum\nolimits_{i \in \scrA}x_i \log x_i. \end{equation} is a strict global Lyapunov function for the logit dynamic \eqref{eq:LogitDyn}. Thus solutions of \eqref{eq:LogitDyn} from every initial condition converge to connected sets of rest points of \eqref{eq:LogitDyn}. \end{proposition}
\noindent We provide a concise proof of this result in Section \ref{sec:LogPotProofs}. Together, Theorem \ref{thm:DetApprox} and Proposition \ref{prop:LogPotGC} imply that for large $N$, the typical behavior of the process $\hat\mathbf{X}^{N}$ is to follow a solution of the logit dynamic \eqref{eq:LogitDyn}, ascending the function $f^\eta$ and approaching a rest point of \eqref{eq:LogitDyn}.
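A quick numerical illustration of Proposition \ref{prop:LogPotGC} (not a substitute for the proof in Section \ref{sec:LogPotProofs}): for matching in a symmetric normal form game $A=A^\prime$, the limit game $F(x)=Ax$ has potential $f(x)=\frac12 x^\prime Ax$, and one can check that $f^\eta$ increases, up to discretization error, along Euler steps of the logit dynamic started in the interior of $X$. The step size and tolerance in the sketch below are arbitrary choices of ours.
\begin{verbatim}
import numpy as np

def logit_potential(x, A, eta):
    # f^eta(x) = eta^{-1} * (x'Ax / 2) - sum_i x_i log x_i   (A symmetric)
    x = np.asarray(x, dtype=float)
    return (x @ A @ x) / (2.0 * eta) - np.sum(x * np.log(x))

def potential_increases(A, eta, x0, steps=2000, dt=1e-3):
    # Euler steps of the logit dynamic x' = M^eta(Ax) - x, recording f^eta.
    x = np.asarray(x0, dtype=float)
    values = [logit_potential(x, A, eta)]
    for _ in range(steps):
        v = (A @ x) / eta
        M = np.exp(v - v.max()); M /= M.sum()
        x = x + dt * (M - x)
        values.append(logit_potential(x, A, eta))
    return bool(np.all(np.diff(values) >= -1e-9))
\end{verbatim}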
We now use the large deviation principle to describe the excursions of the process $\hat\mathbf{X}^N$ from stable rest points, focusing as before on cases where the mean dynamic \eqref{eq:LogitDyn} has a globally attracting rest point $x^{\mathlarger *}$, which by Proposition \ref{prop:LogPotGC} is the unique local maximizer of $f^\eta$ on $X$.
According to the results from the previous section, the time required for the process to exit an open set $O$ containing $x^{\mathlarger *}$ is characterized by the exit cost \eqref{eq:ExitCost}, which is the infimum over paths from $x^{\mathlarger *}$ to some $y \notin O$ of the path cost \begin{equation}\label{eq:PathCost2} c_{x^{\mathlarger *}\!,T} (\phi )=\int_{0}^{T} \!L (\phi _t ,\dot {\phi }_t )\,\mathrm{d} t = \int_{0}^{T} \!\sup_{u_t \in \mathbb{R}^n_0}\paren{u_t^\prime \dot\phi_t - H (\phi _t ,u_t)}\mathrm{d} t, \end{equation} where the expression of the path cost in terms of the log moment generating functions $H(x, \cdot)$ follows from definition \eqref{eq:CramerTr} of the Cram\'er transform. In the logit/potential model, we are able to use properties of $H$ to compute the minimal costs \eqref{eq:yCost} and \eqref{eq:ExitCost} exactly.
\begin{proposition}\label{prop:LPLD} In the logit/potential model, when \eqref{eq:LogitDyn} has a globally attracting rest point $x^{\mathlarger *}$, we have $c^{\mathlarger *}_y =f^\eta(x^{\mathlarger *})-f^\eta(y) $, and so $C^{\mathlarger *}_{\partial O} = \min\limits_{y \in \partial O}\,(f^\eta(x^{\mathlarger *})-f^\eta(y))$. \end{proposition}
In words, the minimal cost of a path from $x^{\mathlarger *}$ to state $y \ne x^{\mathlarger *}$ is equal to the decrease in the logit potential. Combined with equations \eqref{eq:ETBound}--\eqref{eq:LDSD}, Proposition \ref{prop:LPLD} implies that the waiting time $\tau^{N}_{\partial O}$ to escape the set $O$ is described by the smallest decrease in $f^\eta$ required to reach the boundary of $O$, and that $f^\eta$ also governs the rates of decay of the stationary distribution weights $\mu^N(B_\delta(y))$.
\textit{Proof. } We prove the proposition using tools from the calculus of variations (cf.~\pgcite{FreWen98}{Section 5.4}) and these two basic facts about the function $H$ in the logit/potential model, which we prove in Section \ref{sec:LogPotProofs}:
\begin{lemma}\label{lem:NewHLemma} Suppose $x \in \Int(X)$. Then in the logit/potential model, \begin{gather} H(x,-\nabla f^\eta (x))= 0\;\text{ and}\label{eq:HJ}\\ \nabla _u H(x,-\nabla f^\eta(x))= -(M^\eta(F(x)) - x).\label{eq:HFOC} \end{gather} \end{lemma}
Equation \eqref{eq:HJ} is the Hamilton-Jacobi equation associated with path cost minimization problem \eqref{eq:yCost}, and shows that changes in potential provide a lower bound on the cost of reaching any state $y$ from state $x^{\mathlarger *}$ along an interior path $\phi\in\mathscr{C}_{x^{\mathlarger *}}[0,T]$ with $\phi_T=y$: \begin{equation}\label{eq:CostBound} c_{x^{\mathlarger *}\!,T} (\phi ) = \int_{0}^{T} \!\sup_{u_t \in \mathbb{R}^n_0}\paren{u_t^\prime \dot\phi_t - H (\phi _t ,u_t)}\mathrm{d} t \geq \int_{0}^{T}\!\!\!-\nabla f^\eta(\phi_t)^\prime \dot\phi_t\,\mathrm{d} t = f^\eta(x^{\mathlarger *}) - f^\eta(y) . \end{equation} In Section \ref{sec:LogPotProofs}, we prove a generalization of \eqref{eq:HJ} to boundary states which lets us extend inequality \eqref{eq:CostBound} to paths with boundary segments---see equation \eqref{eq:CostBoundBd}. These inequalities give us the lower bound \begin{equation}\label{eq:cstaryLB} c^{\mathlarger *}_y \geq f^\eta(x^{\mathlarger *})-f^\eta(y) . \end{equation}
Equation \eqref{eq:HFOC} is the first order condition for the first integrand in \eqref{eq:CostBound} for paths that are reverse-time solutions to the logit dynamic \eqref{eq:LogitDyn}. Thus if $\psi\colon (-\infty, 0] \to X$ satisfies $\psi_0 = y$ and $\dot \psi_t = -(M^\eta(F(\psi_t)) - \psi_t)$ for all $ t\leq 0$, then Proposition \ref{prop:LogPotGC} and the assumption that $x^{\mathlarger *}$ is globally attracting imply that $\lim_{t\to-\infty}\psi_t = x^{\mathlarger *}$, which with \eqref{eq:HJ} and \eqref{eq:HFOC} yields \begin{equation}\label{eq:CostAttained} \int_{-\infty}^{0} \;\sup_{u_t \in \mathbb{R}^n_0}\paren{u_t^\prime \dot\psi_t - H (\psi _t ,u_t)}\mathrm{d} t = \int_{-\infty}^{0}\!\!\!-\nabla f^\eta(\psi_t)^\prime \dot\psi_t\,\mathrm{d} t = f^\eta(x^{\mathlarger *}) - f^\eta(y) . \end{equation} This equation and a continuity argument imply that lower bound \eqref{eq:cstaryLB} is tight. \hspace{4pt}\ensuremath{\blacksquare}
Congestion games are the most prominent example of potential games appearing in applications, and the logit protocol is a standard model of decision making in this context (\cite{BenLer85}). We now illustrate how the results above can be used to describe excursions of the process $\mathbf{X}^N$ from the stationary state of the logit dynamic and the stationary distribution $\mu^N$ of the process. We consider a network with three parallel links in order to simplify the exposition, as our analysis can be conducted just as readily in any congestion game with increasing delay functions.
\begin{example} Consider a network consisting of three parallel links with delay functions $\ell_1(u) = 1 + 8u$, $\ell_2(u) = 2+4u$, and $\ell_3(u) = 4$. The links are numbered in increasing order of congestion-free travel times (lower-numbered links are shorter in distance), but in decreasing order of congestion-based delays (higher-numbered links have greater capacity). The corresponding continuous-population game has payoff functions $F_i(x) = -\ell_i(x_i)$ and concave potential function \[ f(x) = -\sum_{i\in\scrA}\int_0^{x_i}\ell_i(u)\,\mathrm{d} u = -\!\paren{x_1 + 4 (x_1)^2 + 2x_2 + 2(x_2)^2 +4x_3 }. \] The unique Nash equilibrium of this game, $x^{\mathlarger *} = (\frac38, \frac12, \frac18)$, is the state at which travel times on each link are equal ($\ell_1(x^{\mathlarger *}_1)=\ell_2(x^{\mathlarger *}_2)=\ell_3(x^{\mathlarger *}_3)=4$), and it is the maximizer of $f$ on $X$.
Suppose that a large but finite population of agents repeatedly play this game, occasionally revising their strategies by applying the logit rule $M^\eta$ with noise level $\eta$. Then in the short term, aggregate behavior evolves according to the logit dynamic \eqref{eq:LogitDyn}, ascending the logit potential function $f^\eta = \eta^{-1} f - h$ until closely approaching its global maximizer $x^\eta$. Thereafter, \eqref{eq:ETBound} and \eqref{eq:ETExBound} imply that excursions from $x^\eta$ to other states $y$ require times on the order of $\exp(N(f^\eta(x^\eta)-f^\eta(y)))$. The values of $N$ and $f^\eta(y)$ also describe the proportions of time spent at each state: by virtue of \eqref{eq:LDSD}, $\mu^N(B_\delta(y)) \approx \exp(-N(f^\eta(x^\eta)-f^\eta(y)))$.
Figure \ref{fig:Congestion} presents solutions of the logit dynamic \eqref{eq:LogitDyn} and level sets of the logit potential function $f^\eta$ in the congestion game above for noise levels $\eta = .25$ (panel (i)) and $\eta = .1$ (panel (ii)). In both cases, all solutions of \eqref{eq:LogitDyn} ascend the logit potential function and converge to its unique maximizer, $x^{(.25)} \approx (.3563, .4482, .1956)$ in (i), and $x^{(.1)} \approx (.3648, .4732, .1620)$ in (ii). The latter rest point is closer to the Nash equilibrium on account of the smaller amount of noise in agents' decisions.
\begin{figure}
\caption{Solution trajectories of the logit dynamics and level sets of $f^\eta$ in a congestion game. In both panels, lighter shades represent higher values of $f^\eta$, and increments between level sets are $.5$ units. For any point $y$ on a solution trajectory, the most likely excursion path from the rest point to a neighborhood of $y$ follows the trajectory backward from the rest point. The values of $f^\eta$ also describe the rates of decay of mass in the stationary distribution.}
\label{fig:Congestion}
\end{figure}
In each panel, the ``major axes'' of the level sets of $f^\eta$ correspond to exchanges of agents playing strategy 3 for agents playing strategies 2 and 1 in fixed shares, with a slightly larger share for strategy 2. That deviations of this sort are the most likely is explained by the lower sensitivity of delays on higher numbered links to fluctuations in usage. In both panels, the increments between the displayed level sets of $f^\eta$ are $.5$ units. Many more level sets are drawn in panel (ii) than in panel (i): \footnote{In panel (i), the size of the range of $f^\eta$ is $f^{(.25)}(x^{(.25)}) - f^{(.25)}(e_1)\approx -10.73 - (-20) = 9.27$, while in panel (ii) it is $f^{(.1)}(x^{(.1)}) - f^{(.1)}(e_1)\approx -28.38 - (-50) = 21.62$.} when there is less noise in agents' decisions, excursions from equilibrium of a given unlikelihood are generally smaller, and excursions of a given size and direction are less common. \ensuremath{\hspace{4pt}\Diamondblack}
\end{example}
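The quantities reported in the example above are straightforward to reproduce numerically. The following Python sketch solves $x = M^\eta(F(x))$ for the logit rest points (the root-finder and its starting point are implementation choices, not part of the model) and evaluates the logit potential \eqref{eq:LogitPotential}.
\begin{verbatim}
import numpy as np
from scipy.optimize import fsolve

def F(x):
    """Payoffs F_i(x) = -ell_i(x_i) in the three-link congestion game."""
    return -np.array([1 + 8 * x[0], 2 + 4 * x[1], 4.0])

def M(pi, eta):
    """Logit choice rule M^eta."""
    w = np.exp((pi - pi.max()) / eta)
    return w / w.sum()

def logit_potential(x, eta):
    """f^eta(x) = eta^{-1} f(x) - sum_i x_i log x_i."""
    f = -(x[0] + 4 * x[0]**2 + 2 * x[1] + 2 * x[1]**2 + 4 * x[2])
    return f / eta - np.sum(x * np.log(x))

def rest_point(eta):
    """Solve x = M^eta(F(x)) in the (x_1, x_2) coordinates of the simplex."""
    def residual(v):
        x = np.array([v[0], v[1], 1 - v[0] - v[1]])
        return (M(F(x), eta) - x)[:2]
    v = fsolve(residual, np.array([0.375, 0.5]))   # start at the Nash equilibrium
    return np.array([v[0], v[1], 1 - v[0] - v[1]])

for eta in (0.25, 0.1):
    x_eta = rest_point(eta)
    print(eta, np.round(x_eta, 4), round(float(logit_potential(x_eta, eta)), 2))
# Up to rounding, this returns (.3563, .4482, .1956) with f^eta = -10.73 at eta = .25,
# and (.3648, .4732, .1620) with f^eta = -28.38 at eta = .1.
\end{verbatim}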
\subsection{Discussion}\label{sec:Disc}
The analyses above rely on the assumption that the mean dynamic \eqref{eq:MD} admits a globally stable state. If instead this dynamic has multiple attractors, then the time $\hat\tau^N_{\partial O}$ to exit $O$ starting from a stable rest point $x^{\mathlarger *} \in O$ need only satisfy properties \eqref{eq:ETBound} and \eqref{eq:ETExBound} when the set $O$ is contained in the basin of attraction of $x^{\mathlarger *}$. Beyond this case, the most likely amount of time required to escape $O$ may disagree with the expected amount of time to do so, since the latter may be driven by a small probability of becoming stuck near another attractor in $O$. Likewise, when the global structure of \eqref{eq:MD} is nontrivial, the asymptotics of the stationary distribution are more complicated, being driven by the relative likelihoods of transitions between the different attractors. To study these questions in our context, one must not only address the complications noted in Section \ref{sec:Exit}, but must also employ the graph-theoretic arguments developed by \pgcite{FreWen98}{Chapter 6} to capture the structure of transitions among the attractors. Because the limiting stationary distribution is the basis for the approach to equilibrium selection discussed in the introduction, carrying out this analysis is an important task for future work.
We have shown that the control problems appearing in the statement of the large deviation principle can be solved explicitly in the case of logit choice in potential games. They can also be solved in the context of two-action games, in which the state space $X$ is one-dimensional. Beyond these two cases, the control problems do not appear to admit analytical solutions.
To contend with this, and to facilitate comparisons with other analyses in the literature, one can consider the \emph{large population double limit}, studying the behavior of the large population limit as the noise level in agents' decisions is taken to zero.
There are strong reasons to expect this double limit to be analytically tractable. In \cite{SanSta16}, we study the reverse order of limits, under which the noise level $\eta$ is first taken to zero, and then the population size $N$ to infinity. For this order of limits, we show that large deviations properties are determined by the solutions to piecewise linear control problems, and that these problems can be solved analytically. Moreover, \cite{SanORDERS} uses birth-death chain methods to show that in the two-action case, large deviations properties under the two orders of limits are identical. These results and our preliminary analyses suggest that the large population double limit is tractable, and that in typical cases, conclusions for the two orders of limits will agree. While we are a number of steps away from reaching these ends, the analysis here provides the tools required for work on this program to proceed.
\section{Analysis}\label{sec:LDP}
The proof of Theorem \ref{thm:SPLDP} follows the weak convergence approach of \cite{DupEll97} (henceforth DE). As noted in the introduction, the main novelty we must contend with is the fact that our processes run on a compact set $X$. This necessitates a delicate analysis of the behavior of the process on and near the boundary of $X$. At the same time, the fact that the conditional laws \eqref{eq:CondLaw}
have finite support considerably simplifies a number of the steps from DE's approach. Proofs of auxiliary results that would otherwise interrupt the flow of the argument are relegated to Sections \ref{sec:LSCProof} and \ref{sec:RIOP}.
Some technical arguments that mirror those from DE are provided in Section \ref{sec:AD}.
Before entering into the details of our analysis, we provide an overview. In Section \ref{sec:JC}, we use the representation \eqref{eq:CramerRep} of the Cram\'er transform to establish joint continuity properties of the running cost function $L(\cdot, \cdot)$. To start, we provide examples of discontinuities that this function exhibits at the boundary of $X$. We then show that excepting these discontinuities, the running cost function is ``as continuous as possible'' (Proposition \ref{prop:Joint2}).
The remaining sections follow the line of argument in DE, with modifications that use Proposition \ref{prop:Joint2} to contend with boundary issues. Section \ref{sec:laplace} describes how the large deviation principle upper and lower bounds can be deduced from corresponding Laplace principle upper and lower bounds.
The latter bounds concern the limits of expectations of continuous functions, making them amenable to analysis using weak convergence arguments. Section \ref{sec:SOCP} explains how the expectations appearing in the Laplace principle can be expressed as solutions to stochastic optimal control problems \eqref{eq:VNSeqEqInitial}, the running costs of which are relative entropies defined with respect to the transition laws $\nu^N(\cdot | x)$ of $\mathbf{X}^N$. Section \ref{sec:LCP} describes the limit properties of the controlled processes as $N$ grows large.
Finally, Sections \ref{sec:ProofUpper} and \ref{sec:ProofLower} use the foregoing results to prove the Laplace principle upper and lower bounds; here the main novelty is in Section \ref{sec:ProofLower}, where we show that the control problem appearing on the right hand side of the Laplace principle admits $\varepsilon$-optimal solutions that initially obey the mean dynamic and remain in the interior of the simplex thereafter (Proposition \ref{prop:interiorpaths}).
\subsection{Joint continuity of running costs}\label{sec:JC}
Representation \eqref{eq:CramerRep} implies that for each $x \in X$, the Cram\'er transform $L(x, \cdot)$ is continuous on its domain $Z(x)$ (see the beginning of the proof of Proposition \ref{prop:Joint2} below). The remainder of this section uses this representation to establish joint continuity properties of the running cost function $L(\cdot, \cdot)$.
The difficulty lies in establishing these properties at states on the boundary of $X$. Fix $x \in X$, and let $i \in \support(x)$ and $j \ne i$. Since $e_j-e_i$ is an extreme point of $Z(x)$, the point mass $\delta_{e_j-e_i}$ is the only distribution in $\Delta(\mathscr{Z})$ with mean $e_j - e_i$. Thus representation \eqref{eq:CramerRep} implies that \begin{equation}\label{eq:SimpleLBound0}
L(x, e_j - e_i)=R(\delta_{e_j-e_i}||\nu(\cdot|x)) = -\log\paren{x_i \sigma_{ij}(x)}\geq -\log x_i. \end{equation} Thus $L(x, e_j - e_i)$ grows without bound as $x$ approaches the face of $X$ on which $x_i=0$, and $L(x, e_j-e_i) = \infty$ when $x_i=0$.
Intuitively, reducing the number of action $i$ players reduces the probability that such a player is selected to revise; when there are no such players, selecting one becomes impossible.
A more serious difficulty is that running costs are not continuous at the boundary of $X$ even when they are finite. For example, suppose that $n \ge 3$, let $x$ be in the interior of $X$, and let $z_\alpha = e_3 - (\alpha e_1 + (1-\alpha) e_2)$. Since the unique $\lambda$ with mean $z_\alpha$ has $\lambda(e_3 -e_1) = \alpha$ and $\lambda(e_3 -e_2) = 1-\alpha$, equation \eqref{eq:CramerRep} implies that \[ L(x,z_\alpha)= \alpha \log\frac{\alpha}{x_1\sigma_{13}(x)}+(1-\alpha)\log\frac{1-\alpha}{x_2\sigma_{23}(x)}. \] If we set $\alpha(x) = -(\log (x_1\sigma_{13}(x)))^{-1}$ and let $x$ approach some $x^{\mathlarger *}$ with $x^{\mathlarger *}_1=0$, then $L(x,z_{\alpha(x)})$ approaches $1 - \log (x^{\mathlarger *}_2 \sigma_{23}(x^{\mathlarger *}))$; however, $z_{\alpha(x)}$ approaches $e_3-e_2$, and $L(x^{\mathlarger *}, e_3-e_2) =- \log (x^{\mathlarger *}_2 \sigma_{23}(x^{\mathlarger *}))$.
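The following Python sketch makes the discontinuity concrete by evaluating the closed-form expression for $L(x,z_{\alpha(x)})$ along a sequence with $x_1 \to 0$. The constant values used for $\sigma_{13}$ and $\sigma_{23}$ are placeholders; in the model these switch probabilities depend on $x$ through the revision protocol.
\begin{verbatim}
import numpy as np

def L_z_alpha(x1, x2, alpha, sigma13=0.5, sigma23=0.5):
    """Closed-form running cost L(x, z_alpha) from the display above;
    sigma13 and sigma23 are placeholder constants."""
    return (alpha * np.log(alpha / (x1 * sigma13))
            + (1 - alpha) * np.log((1 - alpha) / (x2 * sigma23)))

x2 = 0.4
for x1 in (1e-2, 1e-4, 1e-8, 1e-16):
    alpha = -1.0 / np.log(x1 * 0.5)        # alpha(x) as defined in the text
    print(x1, L_z_alpha(x1, x2, alpha))    # tends to 1 - log(x2 * sigma23) ~ 2.61
print(-np.log(x2 * 0.5))                   # L(x*, e3 - e2) ~ 1.61: a gap of one unit
\end{verbatim}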
These observations leave open the possibility that the running cost function is continuous as a face of $X$ is approached, provided that one restricts attention to displacement directions $z\in Z=\conv(\mathscr{Z}) = \conv(\{e_j - e_i\colon i, j \in \scrA\})$ that remain feasible on that face. Proposition \ref{prop:Joint2} shows that this is indeed the case.
For any nonempty $I \subseteq \scrA$, define $X(I) = \{x\in X\colon I\subseteq \support(x)\}$, $\mathscr{Z}(I) = \{\mathscr{z} \in \mathscr{Z} \colon \mathscr{z}_j \geq 0 \text{ for all }j \notin I\}$, and $Z(I)=\conv(\mathscr{Z}(I)) = \{z \in Z \colon z_j \geq 0 \text{ for all }j \notin I\}$.
\begin{proposition}\label{prop:Joint2} \begin{mylist} \item $L(\cdot, \cdot)$ is continuous on $\Int(X) \times Z$. \item For any nonempty $I \subseteq \scrA$, $L(\cdot, \cdot)$ is continuous on $X(I) \times Z(I)$. \end{mylist} \end{proposition}
\emph{Proof}. For any $\lambda \in \Delta(\mathscr{Z}(I))$ and $x\in X(I) $, we have $\support(\lambda) \subseteq \mathscr{Z}(I) \subseteq \support(\nu(\cdot|x)) $. Thus by the definition of relative entropy,
the function $\mathscr{L} \colon X(I) \times \Delta(\mathscr{Z}(I)) \to [0,\infty]$ defined by $
\mathscr{L}(x,\lambda) = R(\lambda||\nu(\cdot|x))$ is real-valued and continuous.
Let \begin{equation}\label{eq:Lambdaz} \Lambda_{\mathscr{Z}(I)}(z)= \brace{\lambda \in \Delta(\mathscr{Z}) \colon \support(\lambda) \subseteq\mathscr{Z}(I), \sum\nolimits_{\mathscr{z} \in \mathscr{Z}}\mathscr{z} \lambda(\mathscr{z})=z} \end{equation} be the set of distributions on $\mathscr{Z}$ with support contained in $\mathscr{Z}(I)$ and with mean $z$. Then the correspondence $\Lambda_{\mathscr{Z}(I)} \colon Z(I) \Rightarrow \Delta(\mathscr{Z}(I))$ defined by \eqref{eq:Lambdaz} is clearly continuous and compact-valued. Thus if we define $L_I\colon X(I) \times Z(I) \to [0, \infty)$ by \begin{equation}\label{eq:LAgain2}
L_I(x,z) = \min\{R(\lambda||\nu(\cdot|x))\colon \lambda \in \Lambda_{\mathscr{Z}(I)}(z) \},
\end{equation} then the theorem of the maximum (\cite{Ber63}) implies that $L_I$ is continuous.
By representation \eqref{eq:CramerRep}, \begin{equation}\label{eq:LAgain}
L(x,z) = \min\{R(\lambda||\nu(\cdot|x))\colon \lambda \in \Lambda_\mathscr{Z}(z) \}.
\end{equation} Since $\mathscr{Z}(\scrA)=\mathscr{Z}$, \eqref{eq:LAgain2} and \eqref{eq:LAgain} imply that $L_{\scrA}(x,z)=L(x,z)$, establishing part (i).
To begin the proof of part (ii), we eliminate redundant cases using an inductive argument on the cardinality of $I$. Part (i) establishes the base case in which $\#I = n$. Suppose that the claim in part (ii) is true when $\#I > k \in \{1, \ldots , n -1\}$; we must show that this claim is true when $\#I = k$.
Suppose that $\support(x) = J \supset I$, so that $\#J > k$. Then the inductive hypothesis implies that the restriction of $L$ to $X(J) \times Z(J)$ is continuous at $(x, z)$. Since $X(J)\subset X(I)$ is open relative to $X$ and since $Z(J)\supset Z(I)$, the restriction of $L$ to $X(I) \times Z(I)$ is also continuous at $(x, z)$.
It remains to show that the restriction of $L$ to $X(I) \times Z(I)$ is continuous at all $(x, z) \in X(I) \times Z(I)$ with $ \support(x) = I$.
Since $\mathscr{Z}(I)\subset\mathscr{Z}$, \eqref{eq:LAgain2} and \eqref{eq:LAgain} imply that for all $(x, z) \in X(I) \times Z(I)$, \begin{equation}\label{eq:LIneq} L(x,z)\le L_I(x,z). \end{equation} If in addition $\support(x) = I$, then $\mathscr{L}(x,\lambda) = \infty$ whenever $\support(\lambda) \not\subseteq\mathscr{Z}(I)$, so \eqref{eq:LAgain2} and \eqref{eq:LAgain} imply that inequality \eqref{eq:LIneq} binds. Since $L_I$ is continuous, our remaining claim follows directly from this uniform approximation:
\begin{lemma}\label{lem:RevIneq} For any $\varepsilon>0$, there exists a $\delta>0$ such that for any $x \in X$ with $\max_{k \in \scrA \smallsetminus I}x_k < \delta$ and any $z \in Z(I)$, we have \begin{equation}\label{eq:LIneq2} L(x,z)\ge L_I(x,z)-\varepsilon. \end{equation} \end{lemma}
\noindent A constructive proof of Lemma \ref{lem:RevIneq} is provided in Section \ref{sec:LSCProof}.
\subsection{The large deviation principle and the Laplace principle} \label{sec:laplace}
While Theorem \ref{thm:SPLDP} is stated for the finite time interval $[0,T]$, we assume without loss of generality that $T=1$. In what follows, $\mathscr{C}$ denotes the set of continuous functions from $[0,1]$ to $X$ endowed with the supremum norm,
$\mathscr{C}_{x}\subset \mathscr{C}$ denotes the set of paths in $\mathscr{C}$ starting at $x$, and $\scrA\scrC\subset \mathscr{C}$ and $\scrA\scrC_{x}\subset \mathscr{C}_{x}$ are the subsets consisting of absolutely continuous paths.
Following DE, we deduce Theorem \ref{thm:SPLDP} from the \emph{Laplace principle}.
\begin{theorem} \label{thm:laplace} Suppose that the processes $\{\hat\mathbf{X}^{N}\}_{N=N_0}^\infty$ have initial conditions $x^N \in \mathscr{X}^N$ satisfying $\lim\limits_{N\to\infty}x^N = x \in X$. Let $h\colon \mathscr{C} \to \mathbb{R}$ be a bounded continuous function. Then \begin{subequations} \begin{gather} \label{eq:LPUpper} \limsup_{N\rightarrow\infty}\frac{1}{N}\log\mathbb{E}_{x^N}\!\left[\exp\left(-Nh(\hat{\mathbf{X}}^{N})\right)\right]\leq-\inf_{\phi\in\mathscr{C}}\paren{c_{x}(\phi)+h(\phi)},\,\text{ and}\\
\liminf_{N\rightarrow\infty}\frac{1}{N}\log\mathbb{E}_{x^N}\!\left[\exp\left(-Nh(\hat{\mathbf{X}}^{N})\right)\right]\geq -\inf_{\phi\in\mathscr{C}}\paren{c_{x}(\phi)+h(\phi)}.\label{eq:LPLower} \end{gather} \end{subequations} \end{theorem}
\noindent Inequality \eqref{eq:LPUpper} is called the \emph{Laplace principle upper bound}, and inequality \eqref{eq:LPLower} is called the \emph{Laplace principle lower bound}.
Because $c_x$ is a rate function (Proposition \ref{prop:GoodRF}), the large deviation principle (Theorem \ref{thm:SPLDP}) and the Laplace principle (Theorem \ref{thm:laplace}) each imply the other. The forward implication is known as \emph{Varadhan's integral lemma} (DE Theorem 1.2.1). For intuition, express the large deviation principle as $\mathbb{P}_{x^N}(\hat{\mathbf{X}}^N \approx \phi) \approx \exp(-N c_x(\phi))$, and argue heuristically that \begin{align*} \mathbb{E}_{x^N}\left[\exp(-Nh(\hat{\mathbf{X}}^{N}))\right] &\approx \int_{\mathscr{C}} \exp(-Nh(\phi))\, \mathbb{P}_{x^N}(\hat{\mathbf{X}}^N \approx \phi)\,\mathrm{d}\phi\\ & \approx \int_{\mathscr{C}} \exp(-N(h(\phi)+ c_x(\phi)))\,\mathrm{d}\phi\\ & \approx \exp\paren{-N\inf_{\phi\in\mathscr{C}}\paren{c_{x}(\phi)+h(\phi)}}, \end{align*} where the final approximation uses the Laplace method for integrals (\cite{Bru70}).
Our analysis requires the reverse implication (DE Theorem 1.2.3), which can be derived heuristically as follows. Let $\Phi \subset \mathscr{C}$, and let the (extended real-valued, discontinuous) function $h_\Phi$ be the indicator function of $\Phi \subset \mathscr{C}$ in the convex analysis sense: \[ h_\Phi(\phi)= \begin{cases}
0 & \text{if }\phi\in\Phi,\\ +\infty & \text{otherwise}. \end{cases} \] Then plugging $h_\Phi$ into \eqref{eq:LPUpper} and \eqref{eq:LPLower} yields \[ \lim_{N\to\infty}\frac{1}{N}\log\mathbb{P}_{x^N}(\hat{\mathbf{X}}^N\in\Phi) = -\inf_{\phi\in\Phi}c_x(\phi), \] which is the large deviation principle. The proof of DE Theorem 1.2.3 proceeds by considering well-chosen approximations of $h_\Phi$ by bounded continuous functions.
The statements that form the Laplace principle concern limits of expectations of continuous functions, and so can be evaluated by means of weak convergence arguments. We return to this point at the end of the next section.
\subsection{The stochastic optimal control problem}\label{sec:SOCP}
For a given function $h\colon \mathscr{C}\to\mathbb{R}$, we define \begin{equation}\label{eq:VN} V^{N}(x^N)=-\frac{1}{N}\log\mathbb{E}_{x^N}\!\left[\exp(-Nh(\hat{\mathbf{X}}^{N}))\right] \end{equation} to be the negation of the expression from the left hand side of the Laplace principle. This section, which follows DE Sections 3.2 and 4.3, shows how $V^N$ can be expressed as the solution of a stochastic optimal control problem. The running costs of this problem are relative entropies, and its terminal costs are determined by the function $h$.
For each $k\in\{0,1,2,\ldots,N\}$ and sequence $(x_{0},\ldots,x_{k})\in (\mathscr{X}^{N})^{k+1}$, we define the period $k$ value function \begin{equation}\label{eq:VNk}
V^{N}_k(x_{0},\ldots,x_{k})=-\frac{1}{N}\log\mathbb{E}\!\left[\exp(-Nh(\hat{\mathbf{X}}^{N}))\big|X^{N}_{0}=x_{0},\ldots,X^{N}_{k}=x_{k}\right]. \end{equation} Note that $V^N_0 \equiv V^N$. If we define the map $\hat\phi \:(\:= \hat\phi^N)$ from sequences $x_0, \ldots , x_N$ to paths in $\mathscr{C}$ by \begin{equation}\label{eq:PhiHat} \hat\phi_t(x_0, \ldots , x_N)=x_{k}+(Nt-k)(x_{k+1}-x_{k})\;\;\text{for all }t\in[\tfrac{k}{N},\tfrac{k+1}{N}], \end{equation} then \eqref{eq:VNk} implies that \begin{equation}\label{eq:VNTerminal} V^{N}_N(x_{0},\ldots,x_N)=h(\hat\phi(x_{0},\ldots,x_N)). \end{equation} Note also that $\hat X^N_t = \hat\phi_t(X^{N}_{0}, \ldots , X^{N}_{N})$; this can be expressed concisely as $\hat{\mathbf{X}}^{N}=\hat\phi({\mathbf{X}}^{N})$.
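For concreteness, here is a minimal Python sketch of the interpolation map \eqref{eq:PhiHat}; the short two-strategy sample path used to exercise it is hypothetical.
\begin{verbatim}
import numpy as np

def interpolate(states, t):
    """Piecewise affine interpolation of a sample path (eq. PhiHat);
    `states` is an (N+1) x n array of states of the Markov chain."""
    N = len(states) - 1
    k = min(int(np.floor(N * t)), N - 1)   # t lies in [k/N, (k+1)/N]
    return states[k] + (N * t - k) * (states[k + 1] - states[k])

path = np.array([[1.0, 0.0], [0.9, 0.1], [0.9, 0.1], [0.8, 0.2]])   # N = 3
print(interpolate(path, 0.2))   # 60% of the way from the first state to the second
\end{verbatim}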
Proposition \ref{prop:DPFE} shows that the value functions $V^N_k$ satisfy a dynamic programming functional equation, with running costs given by relative entropy functions and with terminal costs given by $h(\hat\phi(\cdot))$. To read equation \eqref{eq:VNFunctionalEq}, recall that $\nu^N$ is the transition kernel for the Markov chain $\{X^{N}_{k}\}$.
\begin{proposition}\label{prop:DPFE} For $k\in\{0,1,\ldots,N-1\}$ and $(x_{0},\ldots,x_{k})\in(\mathscr{X}^{N})^{k+1}$, we have \begin{equation}\label{eq:VNFunctionalEq}
V^{N}_k(x_{0},\ldots,x_{k})=\inf_{\lambda\in\Delta(\mathscr{Z})}\paren{\frac{1}{N}R(\lambda\,||\,\nu^{N}(\hspace{1pt}\cdot\hspace{1pt}|x_{k}))+\sum_{\mathscr{z}\in\mathscr{Z}}V^{N}_{k+1}(x_{0},\ldots,x_{k},x_{k}+\tfrac{1}{N}\mathscr{z})\,\lambda(\mathscr{z})}. \end{equation} \end{proposition}
\noindent For $k = N$, $V^N_N$ is given by the terminal condition \eqref{eq:VNTerminal}.
The key idea behind Proposition \ref{prop:DPFE} is the following observation (DE Proposition 1.4.2), which provides a variational formula for expressions like \eqref{eq:VN} and \eqref{eq:VNk}.
\begin{observation}\label{obs:VarRep} For any probability measure $\pi \in \Delta(\mathscr{Z})$ and function $\gamma\colon\mathscr{Z}\rightarrow\mathbb{R}$, we have \begin{equation}\label{eq:variational}
-\log\sum_{\mathscr{z}\in\mathscr{Z}}\mathrm{e}^{-\gamma(\mathscr{z})}\pi(\mathscr{z})=\!\min_{\lambda\in\Delta(\mathscr{Z})}\paren{R(\lambda||\pi)+\sum_{\mathscr{z}\in\mathscr{Z}}\gamma(\mathscr{z})\lambda(\mathscr{z})}. \end{equation} The minimum is attained at $\lambda^{\mathlarger *}(\mathscr{z})=\pi(\mathscr{z})\,\mathrm{e}^{-\gamma(\mathscr{z})}/\sum_{\mathscr{y}\in\mathscr{Z}} \pi(\mathscr{y})\,\mathrm{e}^{-\gamma(\mathscr{y})}$. In particular, $\lambda^{\mathlarger *} \ll \pi$. \end{observation}
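Formula \eqref{eq:variational} and its minimizer are easy to check numerically. The following Python sketch does so for an arbitrary measure $\pi$ and function $\gamma$ on a small finite set, both generated at random purely for illustration.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
m = 5                                     # size of the finite set (hypothetical)
pi = rng.random(m); pi /= pi.sum()        # an arbitrary probability measure
gamma = rng.normal(size=m)                # an arbitrary real-valued function

lhs = -np.log(np.sum(np.exp(-gamma) * pi))                    # left-hand side
lam_star = pi * np.exp(-gamma); lam_star /= lam_star.sum()    # stated minimizer

def objective(lam):
    """Relative entropy R(lam || pi) plus the expectation of gamma under lam."""
    return np.sum(lam * np.log(lam / pi)) + np.sum(gamma * lam)

print(lhs, objective(lam_star))           # the two values agree
print(objective(np.ones(m) / m) >= lhs)   # another distribution does no better
\end{verbatim}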
Equation \eqref{eq:variational} expresses the log expectation on its left hand side as the minimized sum of two terms: a relative entropy term that only depends on the probability measure $\pi$, and an expectation that only depends on the function $\gamma$. This additive separability and the Markov property lead to equation \eqref{eq:VNFunctionalEq}. Specifically,
observe that \begin{gather*} \exp\paren{\!-N V^N_k(x_0, \ldots , x_k)}
=\mathbb{E}\!\brack{\exp(-Nh(\hat\phi({\mathbf{X}}^{N})))\,\big|\,X^{N}_{0}=x_{0},\ldots,X^{N}_{k}=x_{k}}\\
\qquad=\mathbb{E}\!\brack{\mathbb{E}\!\brack{\exp(-Nh(\hat\phi({\mathbf{X}}^{N})))\,\big|\, X^{N}_{0},\ldots,X^{N}_{k+1}}\,\big|\,X^{N}_{0}=x_{0},\ldots,X^{N}_{k}=x_{k}}\\
\qquad=\mathbb{E}\!\brack{\exp\paren{-N V^N_{k+1}(X^{N}_{0},\ldots,X^{N}_{k+1})}\,\big|\,X^{N}_{0}=x_{0},\ldots,X^{N}_{k}=x_{k}}\\
\qquad=\sum_{\mathscr{z} \in \mathscr{Z}}\exp\paren{-N V^N_{k+1}(x_0, \ldots , x_k,x_k + \tfrac1N \mathscr{z})}\hspace{1pt}\nu^{N}(\mathscr{z}|x_{k}), \end{gather*} where the last line uses the Markov property. This equality and Observation \ref{obs:VarRep} yield \begin{align*}
V^N_k(x_0, \ldots , x_k) &= -\frac1N\log\sum_{\mathscr{z} \in \mathscr{Z}}\exp\paren{\!-N V^N_{k+1}(x_0, \ldots , x_k,x_k + \tfrac1N \mathscr{z})}\hspace{1pt}\nu^{N}(\mathscr{z}|x_{k})\\
&=\frac1N\inf_{\lambda\in\Delta(\mathscr{Z})}\paren{R(\lambda\,||\,\nu^{N}(\hspace{1pt}\cdot\hspace{1pt}|x_{k}))+\sum_{\mathscr{z}\in\mathscr{Z}}NV^{N}_{k+1}(x_{0},\ldots,x_{k},x_{k}+\tfrac{1}{N}\mathscr{z})\,\lambda(\mathscr{z})}, \end{align*} which is equation \eqref{eq:VNFunctionalEq}.
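The one-step identity used in the display above, $\exp(-NV^N_k) = \sum_{\mathscr{z}}\exp(-NV^N_{k+1})\,\nu^N(\mathscr{z}|x_k)$, can be checked by brute force on a toy chain. The Python sketch below does so for a hypothetical one-dimensional chain with three increments; it illustrates only the recursion, not the population processes studied here.
\begin{verbatim}
import itertools
import numpy as np

N = 6
Z = np.array([-1.0, 0.0, 1.0])           # increment set of a hypothetical toy chain

def nu(x):
    """Hypothetical smooth transition law over the three increments."""
    w = np.exp(np.array([-x, 0.0, x]))
    return w / w.sum()

def h(path):
    """A bounded continuous functional of the path (here: its maximum)."""
    return float(np.max(path))

def V_direct(x0):
    """-(1/N) log E[exp(-N h(path))], by enumerating all 3^N paths."""
    total = 0.0
    for incs in itertools.product(range(3), repeat=N):
        path, prob, x = [x0], 1.0, x0
        for j in incs:
            prob *= nu(x)[j]
            x = x + Z[j] / N
            path.append(x)
        total += np.exp(-N * h(np.array(path))) * prob
    return -np.log(total) / N

def V_recursive(x0):
    """Backward recursion exp(-N V_k) = sum_z exp(-N V_{k+1}) nu(z | x_k)."""
    def W(k, path):
        if k == N:
            return np.exp(-N * h(np.array(path)))
        x = path[-1]
        return sum(nu(x)[j] * W(k + 1, path + [x + Z[j] / N]) for j in range(3))
    return -np.log(W(0, [x0])) / N

print(V_direct(0.0), V_recursive(0.0))   # the two values agree
\end{verbatim}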
Since the value functions $V^N_k$ satisfy the dynamic programming functional equation \eqref{eq:VNFunctionalEq},
they can also be represented by describing the same dynamic program in sequence form. To do so, we define for $k \in \{0, \ldots , N-1\}$ a \emph{period $k$ control} $\lambda^N_k\colon (\mathscr{X}^N)^{k+1} \to \Delta(\mathscr{Z})$, which for each sequence of states $(x_0, \ldots , x_{k})$ specifies a probability distribution $\lambda^N_k(\hspace{1pt}\cdot\hspace{1pt}|x_0, \ldots , x_{k})$ over increments. Given the sequence of controls $\{\lambda^N_k\}_{k=0}^{N-1}$ and an initial condition $x^N \in \mathscr{X}^N$, we define the \emph{controlled process} $\boldsymbol{\xi}^N=\{\xi^N_k\}_{k=0}^N$ by $\xi^N_0 = x^N$ and the recursive formula \begin{equation}\label{eq:ControlledProcess} \xi^N_{k+1} = \xi^N_k + \frac1N\zeta^N_k, \end{equation}
where $\zeta^N_k$ has law $\lambda^N_k(\hspace{1pt}\cdot\hspace{1pt}|\xi^N_0, \ldots , \xi^N_k)$. We also define the piecewise affine interpolation $\smash{\hat\boldsymbol{\xi}}^N=\{\hat\xi^N_t\}_{t\in[0,1]}$ by $\hat \xi^N_t = \hat\phi_t(\boldsymbol{\xi}^N)$, where $\hat\phi$ is the interpolation function \eqref{eq:PhiHat}. We then have
\begin{proposition}\label{prop:VNSeqEq} For $k\in\{0,1,\ldots,N-1\}$ and $(x_{0},\ldots,x_{k})\in(\mathscr{X}^{N})^{k+1}$, $V^N_k(x_0, \ldots , x_k)$ equals \begin{equation}\label{eq:VNSeqEq} \inf_{\lambda^N_k, \ldots , \lambda^N_{N-1}}\!\!
\mathbb{E}\Bigg[\frac1N\sum_{j=k}^{N-1}R\paren{\lambda^N_j(\hspace{1pt}\cdot\hspace{1pt}|\xi^N_0, \ldots , \xi^N_j)\,||\, \nu^N(\hspace{1pt}\cdot\hspace{1pt}|\xi^N_j)} + h(\smash{\hat\boldsymbol{\xi}}^N)\hspace{1pt}\bigg|\hspace{1pt}\xi^{N}_{0}=x_{0},\ldots,\xi^{N}_{k}=x_{k}\Bigg]. \end{equation} \end{proposition}
\noindent Since Observation \ref{obs:VarRep} implies that the infimum in the functional equation \eqref{eq:VNFunctionalEq} is always attained, Proposition \ref{prop:VNSeqEq} follows from standard results (cf.~DE Theorem 1.5.2), and moreover, the infimum in \eqref{eq:VNSeqEq} is always attained.
Since $V^N_0 \equiv V^N$ by construction, Proposition \ref{prop:VNSeqEq} yields the representation \begin{equation}\label{eq:VNSeqEqInitial} V^N(x^N) = \inf_{\lambda^N_0, \ldots , \lambda^N_{N-1}}\!\!
\mathbb{E}_{x^N}\Bigg[\frac1N\sum_{j=0}^{N-1}R\paren{\lambda^N_j(\hspace{1pt}\cdot\hspace{1pt}|\xi^N_0, \ldots , \xi^N_j)\,||\, \nu^N(\hspace{1pt}\cdot\hspace{1pt}|\xi^N_j)} + h(\smash{\hat\boldsymbol{\xi}}^N)\Bigg]. \end{equation} The running costs in \eqref{eq:VNSeqEqInitial} are relative entropies of control distributions with respect to transition distributions of the Markov chain $\mathbf{X}^N$, and so reflect how different the control distribution is from the law of the Markov chain at the relevant state. Note as well that the terminal payoff $h(\smash{\hat\boldsymbol{\xi}}^N)$ may depend on the entire path of the controlled process $\boldsymbol{\xi}^N$.
With this groundwork in place, we can describe \apcite{DupEll97} weak convergence approach to large deviations as follows: Equation \eqref{eq:VNSeqEqInitial} represents the expression $V^N(x^N)$ from the left-hand side of the Laplace principle as the value of a stochastic optimal control problem. For any given sequence of pairs of control sequences $\{\lambda^N_k\}_{k=0}^{N-1}$ and controlled processes $\boldsymbol{\xi}^N$, Section \ref{sec:LCP} shows that suitably chosen subsequences converge in distribution to some limits $\{\lambda_t\}_{t \in [0,1]}$ and $\boldsymbol{\xi}$ satisfying the averaging property \eqref{eq:LimControlled}.
This weak convergence and the continuity of $h$ allow one to obtain limit inequalities for $V^N(x^N)$ using Fatou's lemma and the dominated convergence theorem. By considering the optimal control sequences for \eqref{eq:VNSeqEqInitial}, one obtains both the candidate rate function $c_x(\cdot)$ and the Laplace principle upper bound \eqref{eq:LPUB2}. The Laplace principle lower bound is then obtained by choosing a path $\psi$ that approximately minimizes $c_x(\cdot) + h(\cdot)$, constructing controlled processes that mirror $\psi$, and using the weak convergence of the controlled processes and the continuity of $L$ and $h$ to establish the limit inequality \eqref{eq:LPLB2}.
\subsection{Convergence of the controlled processes}\label{sec:LCP}
The increments of the controlled process $\boldsymbol{\xi}^N$ are determined in two steps: first, the history of the process determines the measure $\lambda^N_k(\hspace{1pt}\cdot\hspace{1pt}|\xi^N_0, \ldots , \xi^N_k) \in \Delta(\mathscr{Z})$, and then the increment itself is determined by a draw from this measure.
With some abuse of notation, one can write $\lambda^N_k(\cdot) = \lambda^N_k(\hspace{1pt}\cdot\hspace{1pt}|\xi^N_0, \ldots , \xi^N_k)$, and thus view $\lambda^N_k$ as a random measure.
Then, using compactness arguments, one can show that as $N$ grows large, certain subsequences of the random measures $\lambda^N_k$ on $\Delta(\mathscr{Z})$ converge in a suitable sense to limiting random measures. Because the increments of $\boldsymbol{\xi}^N$ become small as $N$ grows large (cf.~\eqref{eq:ControlledProcess}), intuition from the law of large numbers---specifically Theorem \ref{thm:DetApprox}---suggests that the idiosyncratic part of the randomness in these increments should be averaged away. Thus in the limit, the evolution of the controlled process should still depend on the realizations of the random measures, but it should only do so by way of their means.
To make this argument precise, we introduce continuous-time interpolations of the controlled processes $\boldsymbol{\xi}^N=\{\xi^N_k\}_{k=0}^N$ and the sequence of controls $\{\lambda^N_k\}_{k=0}^{N-1}$. The piecewise affine interpolation $\smash{\hat\boldsymbol{\xi}}^N=\{\hat\xi^N_t\}_{t\in[0,1]}$ was introduced above; it takes values in the space $\mathscr{C} = \mathscr{C}([0,1]:X)$, which we endow with the topology of uniform convergence. The piecewise constant interpolation $\smash{\bar\boldsymbol{\xi}}^N=\{\bar\xi^N_t\}_{t\in[0,1]}$ is defined by \begin{equation*} \bar{\xi}^{N}_{t}=\left\{\begin{array}{ll}\xi^{N}_{k} & \text{if}\, t\in[\frac{k}{N},\frac{k+1}{N})\text{ and }k=0,1,\ldots,N-2,\\ \xi^{N}_{N-1} & \text{if}\; t\in[\frac{N-1}{N},1]. \end{array}\right. \end{equation*} The process $\smash{\bar\boldsymbol{\xi}}^N$ takes values in the space $\mathscr{D} = \mathscr{D}([0, 1]: X)$ of right-continuous functions with left limits, which we endow with the Skorokhod topology. Finally, the piecewise constant control process $\{\bar\lambda^N_t\}_{t\in[0,1]}$ is defined by \begin{equation*} \bar{\lambda}^{N}_{t}(\cdot)= \begin{cases}
\lambda^{N}_{k}(\cdot\hspace{1pt}|\hspace{1pt}\xi^{N}_{0},\ldots,\xi^{N}_{k}) & \text{if}\, t\in[\frac{k}{N},\frac{k+1}{N})\text{ and }k=0,1,\ldots,N-2,\\
\lambda^{N}_{N-1}(\cdot\hspace{1pt}|\hspace{1pt}\xi^{N}_{0},\ldots,\xi^{N}_{N-1}) & \text{if}\; t\in[\frac{N-1}{N},1]. \end{cases} \end{equation*} Using these definitions, we can rewrite formulation \eqref{eq:VNSeqEqInitial} of $V^N(x^N)$ as \begin{equation}\label{eq:VNInt}
V^{N}(x^N)=\inf_{\lambda^N_0, \ldots , \lambda^N_{N-1}}\!\!\mathbb{E}_{x^N}\paren{\int_{0}^{1}R(\bar{\lambda}^{N}_{t}\,||\,\nu^{N}(\cdot\hspace{1pt}|\hspace{1pt}\bar{\xi}^{N}_{t}))\,\mathrm{d} t+h(\smash{\hat{\boldsymbol{\xi}}}^{N})}. \end{equation} As noted after Proposition \ref{prop:VNSeqEq}, the infimum in \eqref{eq:VNInt} is always attained by some choice of the control sequence $\{\lambda^N_k\}_{k=0}^{N-1}$.
Let $\mathscr{P}(\mathscr{Z}\times[0,1])$ denote the space of probability measures on $\mathscr{Z}\times[0,1]$. For a collection $\{\pi_t\}_{t\in[0,1]}$ of measures $\pi_t \in \Delta(\mathscr{Z})$ that is Lebesgue measurable in $t$, we define the measure $\pi_t \otimes dt \in \mathscr{P}(\mathscr{Z} \times [0, 1])$ by \[ (\pi_t \otimes dt)(\{z\} \times B) = \int_B \pi_t(z)\, \mathrm{d} t \] for all $z \in \mathscr{Z}$ and all Borel sets $B$ of $[0,1]$. Using this definition, we can represent the piecewise constant control process $\{\bar\lambda^N_t\}_{t\in[0,1]}$ as the \emph{control measure} $\Lambda^{\!N} = \bar\lambda^N_t \otimes dt$. Evidently, $\Lambda^{\!N}$ is a random measure taking values in $\mathscr{P}(\mathscr{Z}\times[0,1])$, a space we endow with the topology of weak convergence.
Proposition \ref{prop:converge}, a direct consequence of DE Theorem 5.3.5 and p.~165, formalizes the intuition expressed in the first paragraph above. It shows that along certain subsequences, the control measures $\Lambda^{N}$ and the interpolated controlled processes $\smash{\hat\boldsymbol{\xi}}^N$ and $\smash{\bar\boldsymbol{\xi}}^N$ converge in distribution to a random measure $\Lambda$ and a random process $\boldsymbol{\xi}$, and moreover, that the evolution of $\boldsymbol{\xi}$ is almost surely determined by the means of $\Lambda$.
\begin{proposition}\label{prop:converge} Suppose that the initial conditions $x^{N}\in\mathscr{X}^{N}$ converge to $x \in X$, and that the control sequence $\{\lambda^N_k\}_{k=0}^{N-1}$ is such that $\sup_{N\geq N_0} \!V^N(x^N)<\infty$. \begin{mylist} \item Given any subsequence of $\{(\Lambda^{N},\smash{\hat\boldsymbol{\xi}}^N,\smash{\bar\boldsymbol{\xi}}^N)\}_{N=N_0}^\infty$, there exists a $\mathscr{P}(\mathscr{Z}\times[0,1])$-valued random measure $\Lambda$ and $\mathscr{C}$-valued random process $\boldsymbol{\xi}$ $($both possibly defined on a new probability space$)$ such that some subsubsequence converges in distribution to $(\Lambda,\boldsymbol{\xi},\boldsymbol{\xi})$ in the topologies specified above. \item There is a collection of $\Delta(\mathscr{Z})$-valued random measures $\{\lambda_t\}_{t \in [0,1]}$, measurable with respect to $t$, such that with probability one, the random measure $\Lambda$ can be decomposed as $\Lambda =\lambda_t \otimes\mathrm{d} t$. \item With probability one, the process $\boldsymbol{\xi}$ satisfies $\xi_{t}=x+\int_{0}^{t}\paren{\sum_{\mathscr{z}\in\mathscr{Z}}\mathscr{z}\lambda_{s}(\mathscr{z})}\mathrm{d} s$ for all $t\in[0,1]$, and is absolutely continuous in $t$. Thus with probability one, \end{mylist}
\begin{equation}\label{eq:LimControlled} \dot{\xi}_{t}=\sum\nolimits_{\mathscr{z}\in\mathscr{Z}}\mathscr{z}\lambda_{t}(\mathscr{z}) \end{equation}
\hspace{.5in}almost surely with respect to Lebesgue measure. \end{proposition}
\subsection{Proof of the Laplace principle upper bound} \label{sec:ProofUpper}
In this section we consider the Laplace principle upper bound \eqref{eq:LPUpper}, which definition \eqref{eq:VN} allows us to express as \begin{equation}\label{eq:LPUB2} \liminf_{N\rightarrow\infty} V^{N}(x^{N})\geq \inf_{\phi\in\mathscr{C}}\paren{c_{x}(\phi)+h(\phi)}. \end{equation} The argument here follows DE Section 6.2. Let $\{\lambda^N_k\}_{k=0}^{N-1}$ be the optimal control sequence in representation \eqref{eq:VNSeqEqInitial}, and let $\boldsymbol{\xi}^N$ be the corresponding controlled process. Define the triples $\{(\Lambda^{N},\smash{\hat\boldsymbol{\xi}}^N,\smash{\bar\boldsymbol{\xi}}^N)\}_{N=N_0}^\infty$ of interpolated processes as in Section \ref{sec:LCP}. Proposition \ref{prop:converge} shows that for any subsequence of these triples, there is a subsubsequence that converges in distribution to some triple $(\lambda_t \otimes\mathrm{d} t,\boldsymbol{\xi},\boldsymbol{\xi})$ satisfying \eqref{eq:LimControlled}. Then one argues that along this subsubsequence, \begin{align*} \liminf_{N\rightarrow\infty}V^{N}(x^{N}) &\geq
\mathbb{E}_x\paren{\int_{0}^{1}R\paren{\vphantom{I^N}{\lambda}_{t}\hspace{1pt}||\hspace{1pt} \nu(\cdot|\xi_{t})}\mathrm{d} t+h(\boldsymbol{\xi})}\\ &\geq\mathbb{E}_x\paren{\int_{0}^{1}L\left(\xi_{t},\sum_{\mathscr{z}\in\mathscr{Z}}\mathscr{z} \lambda_{t}(\mathscr{z})\right)\mathrm{d} t+h(\boldsymbol{\xi})}\\ &= \mathbb{E}_x\paren{\int_{0}^{1}L(\xi_{t},\dot{\xi}_{t})\,\mathrm{d} t+h(\boldsymbol{\xi})}\\ &\geq \inf_{\phi\in\mathscr{C}}\paren{c_{x}(\phi)+h(\phi)}. \end{align*} The key ingredients needed to establish the initial inequality are equation \eqref{eq:VNSeqEqInitial}, Skorokhod's theorem, equation \eqref{eq:UnifConvR} below, the lower semicontinuity of relative entropy, and Fatou's lemma. Then the second inequality follows from representation \eqref{eq:CramerRep} of the Cram\'er transform, the equality from equation \eqref{eq:LimControlled}, and the final inequality from the definition \eqref{eq:PathCost} of the cost function $c_x$. Since the subsequence chosen initially was arbitrary, inequality \eqref{eq:LPUB2} is proved.
The details of this argument can be found in Section \ref{sec:ProofUpperApp}, which largely follows DE Section 6.2.
But while in DE the transition kernels $\nu^N(\cdot | x)$ of the Markov chains are assumed to be independent of $N$, here we allow for a vanishing dependence on $N$ (cf.~equation \eqref{eq:LimTrans}). Thus we require a simple additional argument, Lemma \ref{lem:RelEnt}, that uses lower bound \eqref{eq:LimSPBound} to establish the uniform convergence of relative entropies: namely, that if
$\lambda^N \colon \mathscr{X}^N \to \Delta(\mathscr{Z})$ are transition kernels satisfying $\lambda^N(\cdot |x) \ll \nu^N(\cdot |x) $ for all $x \in \mathscr{X}^N$, then \begin{equation}\label{eq:UnifConvR}
\lim_{N\to\infty}\max_{x\in\mathscr{X}^N}\abs{R(\lambda^N(\cdot|x)\hspace{1pt}||\hspace{1pt} \nu^N(\cdot|x)) - R(\lambda^N(\cdot|x)\hspace{1pt}||\hspace{1pt} \nu(\cdot|x))}=0. \end{equation}
\subsection{Proof of the Laplace principle lower bound} \label{sec:ProofLower}
Finally, we consider the Laplace principle lower bound \eqref{eq:LPLower}, which definition \eqref{eq:VN} lets us express as \begin{equation}\label{eq:LPLB2} \limsup_{N\rightarrow\infty} V^{N}(x^{N})\leq \inf_{\phi\in\mathscr{C}}\paren{c_{x}(\phi)+h(\phi)}. \end{equation} The argument here largely follows DE Sections 6.2 and 6.4. Their argument begins by choosing a path that is $\varepsilon$-optimal in the minimization problem from the right-hand side of \eqref{eq:LPLB2}. To account for our processes running on a set with a boundary, we show that this path can be chosen to start with a brief segment that follows the mean dynamic, and then stays in the interior of $X$ thereafter (Proposition \ref{prop:interiorpaths}). With this choice of path, the joint continuity properties of the running costs $L(\cdot, \cdot)$ established in Proposition \ref{prop:Joint2} are sufficient to complete the dominated convergence argument in display \eqref{eq:DomConvArgument}, which establishes that inequality \eqref{eq:LPLB2} is violated by no more than $\varepsilon$. Since $\varepsilon$ was arbitrary, \eqref{eq:LPLB2} follows.
For a path $\phi \in \mathscr{C}=\mathscr{C}([0,1]:X)$ and an interval $I \subseteq[0,1]$, write $\phi_I$ for $\{\phi_t\colon t \in I\}$. Define the set of paths \[ \tilde\mathscr{C} = \{\phi \in \mathscr{C} \colon \text{ for some }\alpha\in(0, 1], \phi_{[0,\alpha]}\text{ solves }\eqref{eq:MD}\text{ and }\phi_{[\alpha, 1]} \subset \Int(X)\}. \] Let $\tilde \mathscr{C}_x$ denote the set of such paths that start at $x$.
\begin{proposition}\label{prop:interiorpaths} For all $x \in X$, $ \inf\limits_{\phi\in\mathscr{C}}\paren{c_{x}(\phi)+h(\phi)}=\inf\limits_{\phi\in\tilde\mathscr{C}}\paren{c_{x}(\phi)+h(\phi)}. $ \end{proposition}
\noindent The proof of this result is rather involved, and is presented in Section \ref{sec:RIOP}.
The next proposition, a version of DE Lemma 6.5.5, allows us to further restrict our attention to paths having convenient regularity properties. We let $\mathscr{C}^{{\mathlarger *}}\subset \tilde\mathscr{C}$ denote the set of paths $\phi \in \tilde\mathscr{C}$ such that, after the time $\alpha>0$ for which $\phi_{[0,\alpha]}$ solves \eqref{eq:MD}, the derivative $\dot{\phi}$ is piecewise constant and takes values in $Z$.
\begin{proposition}\label{prop:interiorpaths2} $ \inf\limits_{\phi\in\tilde\mathscr{C}}\paren{c_{x}(\phi)+h(\phi)}=\inf\limits_{\phi\in\mathscr{C}^{{\mathlarger *}}}\paren{c_{x}(\phi)+h(\phi)}. $ \end{proposition}
\noindent The proof of Proposition \ref{prop:interiorpaths2} mimics that of DE Lemma 6.5.5; see Section \ref{sec:PfInteriorpaths2} for details.
Now fix $\varepsilon > 0$. By the previous two propositions, we can choose an $\alpha > 0$ and a path $\psi \in \mathscr{C}^{\mathlarger *}$ such that $\psi_{[0,\alpha]}$ solves \eqref{eq:MD} and \begin{equation}\label{eq:EpsOpt} c_{x}(\psi) + h(\psi)\leq \inf_{\phi\in\mathscr{C}}\paren{c_{x}(\phi)+h(\phi)}+\varepsilon. \end{equation}
We now introduce a controlled process $\boldsymbol\chi^N$ that follows $\psi$ in expectation as long as it remains in a neighborhood of $\psi$. Representation \eqref{eq:CramerRep} implies that for each $k \in \{0, \ldots, N-1\}$ and $x \in \mathscr{X}^N$, there is a transition kernel $\pi^{N}_{k}(\cdot\hspace{1pt}|x)$ that minimizes relative entropy with respect to $\nu(\cdot|x)$ subject to the aforementioned constraint on its expectation: \begin{equation}\label{eq:AnotherREForL}
R(\pi_{k}^{N}(\cdot\hspace{1pt}|x)\hspace{1pt}||\hspace{1pt}\nu(\cdot|x))=L(x,\dot\psi_{k/N})\;\text{ and }\;\sum_{\mathscr{z}\in\mathscr{Z}}\mathscr{z} \pi_{k}^{N}(\mathscr{z}|x)=\dot\psi_{k/N}. \end{equation} To ensure that this definition makes sense for all $k$, we replace the piecewise continuous function $\dot \psi$ with its right continuous version.
Since $\psi_{[0,\alpha]}$ solves \eqref{eq:MD}, it follows from property \eqref{eq:NewPhi0}
that there is an $\hat \alpha \in (0, \alpha]$ such that \begin{equation}\label{eq:Growing} \dot\psi_t \in Z(x) = \{z \in Z\colon z_j \geq 0 \text{ for all }j \notin\support(x)\}\:\text{ whenever }t \in [0,\hat\alpha].
\end{equation} Property \eqref{eq:NewPhi0} also implies that $(\psi_t)_i \geq x_i \wedge \varsigma$ for all $t \in [0, \alpha]$ and $i \in \support(x)$. Now choose a $\delta >0$ satisfying \begin{equation}\label{eq:DeltaInequality} \delta < \min\paren{\{\varsigma\} \cup \{x_i\colon i \in \support(x)\} \cup \{\tfrac12 \dist(\psi_t, \partial X) \colon t \in [\hat \alpha, 1]\}}. \end{equation}
For future reference, note that if $y \in X$ satisfies $|y - \psi_t|< \delta$, then $|y_i - (\psi_t)_i|< \frac\delta2$ for all $i \in \scrA$ (since $y$ and $\psi_t$ both lie in the simplex, the positive and negative parts of $y - \psi_t$ each have total $\ell^1$ mass less than $\frac\delta2$), and so if $t \in [0, \alpha]$ we also have \begin{equation}\label{eq:XBarX}
y \in \bar X_x \equiv \{\hat x \in X\colon \hat x_i \geq \tfrac12(x_i \wedge \varsigma)\text{ for all }i \in \support(x)\}. \end{equation}
For each $(x_0,\ldots,x_k)\in (\mathscr{X}^N)^{k+1}$, define the sequence of controls $\{\lambda^N_k\}_{k=0}^{N-1}$ by \begin{equation}\label{eq:NewLambda}
\lambda_{k}^{N}(\mathscr{z}|x_0,\ldots,x_k)= \begin{cases}
\pi_{k}^{N}(\mathscr{z}|x_k) & \text{if }\max_{0\leq j\leq k}|x_{j}-\psi_{j/N}|\leq\delta,\\
\nu^N(\mathscr{z}|x_k) & \text{if }\max_{0\leq j\leq k}|x_{j}-\psi_{j/N}|>\delta. \end{cases} \end{equation} Finally, define the controlled process $\boldsymbol\chi^N=\{\chi_{k}^{N}\}_{k=0}^{N}$ by setting $\chi^N_0 = x^N$ and using the recursive formula $ \chi^N_{k+1} = \chi^N_k + \frac1N\zeta^N_k, $
where $\zeta^N_k$ has law $\lambda^N_k(\hspace{1pt}\cdot\hspace{1pt}|\chi^N_0, \ldots , \chi^N_k)$. Under this construction, the process $\boldsymbol\chi^N$ evolves according to the transition kernels $\pi^N_k$, and so follows $\psi$ in expectation, so long as it stays $\delta$-synchronized with $\psi$. If this ever fails to be true, the evolution of $\boldsymbol\chi^N$ proceeds according to the kernel $\nu^N$ of the original process $\mathbf{X}^N$. This implies that until the stopping time \begin{equation}\label{eq:TauN}
\tau^{N}=\frac{1}{N}\min\brace{k\in\{0,1,\ldots,N\}\colon|\chi^{N}_{k}-\psi_{k/N}|>\delta}\wedge 1, \end{equation} the relative entropies of transitions are given by \eqref{eq:AnotherREForL}, and that after $\tau^N$ these relative entropies are zero.
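A schematic Python sketch of this construction is given below. The entropy-minimizing kernels $\pi^N_k$ and the original kernel $\nu^N$ are passed in as user-supplied functions, so the sketch records only the switching rule \eqref{eq:NewLambda} and the stopping time \eqref{eq:TauN}; it is not a complete implementation of the model.
\begin{verbatim}
import numpy as np

def simulate_controlled_process(x0, psi, psi_dot, pi_kernel, nu_kernel,
                                increments, N, delta, rng):
    """Follow the entropy-minimizing kernels pi_kernel(x, target_mean) while the
    process stays delta-close (in the l1 norm) to psi, and revert to the
    original kernel nu_kernel(x) afterwards.  All kernels are user-supplied."""
    chi = [np.array(x0, dtype=float)]
    tau, on_track = 1.0, True
    for k in range(N):
        x = chi[-1]
        if on_track and np.abs(x - psi(k / N)).sum() > delta:
            on_track, tau = False, k / N            # the stopping time tau^N
        law = pi_kernel(x, psi_dot(k / N)) if on_track else nu_kernel(x)
        z = increments[rng.choice(len(increments), p=law)]
        chi.append(x + z / N)
    return np.array(chi), tau
\end{verbatim}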
Define the pair $\{(\Lambda^{N},\smash{\hat\boldsymbol\chi}^N)\}_{N=N_0}^\infty$ of interpolated processes as in Section \ref{sec:LCP}, with $\boldsymbol\chi^N$ in place of $\boldsymbol{\xi}^N$. Proposition \ref{prop:converge} shows that for any subsequence of these pairs, there is a subsubsequence that converges in distribution to some pair $(\lambda_t \otimes\mathrm{d} t,\boldsymbol\chi)$ satisfying \eqref{eq:LimControlled}.
By Prokhorov's theorem, $\tau^N$ can be assumed to converge in distribution on this subsubsequence to some $[0,1]$-valued random variable $\tau$. Finally, DE Lemma 6.4.2 and Proposition 5.3.8 imply that $\tau = 1$ and $\boldsymbol\chi = \psi$ with probability one.
For the subsubsequence specified above, we argue as follows: \begin{align} \limsup_{N\to\infty}&\,V^N(x^N)
\leq\lim_{N\to\infty}\mathbb{E}_{x^N}\Bigg[\frac1N\sum_{j=0}^{N-1}R\paren{\lambda^N_j(\hspace{1pt}\cdot\hspace{1pt}|\chi^N_0, \ldots , \chi^N_j)\,||\, \nu^N(\hspace{1pt}\cdot\hspace{1pt}|\chi^N_j)} + h(\smash{\hat\boldsymbol\chi}^N)\Bigg]\notag\\
&=\lim_{N\to\infty}\mathbb{E}_{x^N}\Bigg[\frac1N\sum_{j=0}^{N\tau^N-1}L(\chi^N_j,\dot\psi_{j/N}) + h(\smash{\hat\boldsymbol\chi}^N)\Bigg]\notag\\ &=\lim_{N\to\infty}\mathbb{E}_{x^N}\Bigg[\frac1N\!\!\sum_{j=0}^{(N\tau^N \wedge \lfloor N\hat\alpha\rfloor)-1}\hspace{-2ex}L(\hat\chi^N_{j/N},\dot\psi_{j/N})+\frac1N\!\sum_{j=N\tau^N \wedge \lfloor N\hat\alpha\rfloor}^{N\tau^N-1}\hspace{-2ex}L(\hat\chi^N_{j/N},\dot\psi_{j/N})+ h(\smash{\hat\boldsymbol\chi}^N)\Bigg]\label{eq:DomConvArgument}\\
&=\int_0^{\hat\alpha} L(\psi_t,\dot\psi_t)\,\mathrm{d} t + \int_{\hat\alpha}^1L(\psi_t,\dot\psi_t)\,\mathrm{d} t +h(\psi)\notag\\
&=c_x(\psi)+h(\psi).\notag \end{align} The initial inequality follows from representation \eqref{eq:VNSeqEqInitial}, the second line from the uniform convergence in \eqref{eq:UnifConvR}, along with equations \eqref{eq:AnotherREForL}, \eqref{eq:NewLambda}, and \eqref{eq:TauN}, and the fifth line from the definition of $c_x$. The fourth line is a consequence of the continuity of $h$, the convergence of $\tau^N$ to $\tau$, the uniform convergence of $\smash{\hat\boldsymbol\chi}^N$ to $\psi$, the piecewise continuity and right continuity of $\dot\psi$, Skorokhod's theorem, and the dominated convergence theorem. For the application of dominated convergence to the first sum, we use the fact that when $ j/N < \tau^N \wedge \hat \alpha$, we have $\hat\chi^N_{j/N} \in \bar X_x$ (see \eqref{eq:XBarX}) and $\dot\psi_{j/N} \in Z(x) $ (see \eqref{eq:Growing}), along with the fact that $L(\cdot,\cdot)$ is continuous, and hence uniformly continuous and bounded, on $\bar X_x \times Z(x)$, as follows from Proposition \ref{prop:Joint2}(ii). For the application of dominated convergence to the second sum, we use the fact that when $\hat \alpha \leq j/N < \tau^N$, we have $\dist(\hat\chi^N_{j/N}, \partial X) \geq \frac\delta2$ (see \eqref{eq:DeltaInequality}), and the fact that $L(\cdot,\cdot)$ is continuous and bounded on $Y \times Z$ when $Y$ is a closed subset of $\Int(X)$, as follows from Proposition \ref{prop:Joint2}(i).
Since every subsequence has a subsubsequence that satisfies \eqref{eq:DomConvArgument}, the sequence as a whole must satisfy \eqref{eq:DomConvArgument}. Therefore, since $\varepsilon>0$ was arbitrary, \eqref{eq:DomConvArgument}, \eqref{eq:EpsOpt}, and \eqref{eq:VN} establish inequality \eqref{eq:LPLB2}, and hence the lower bound \eqref{eq:LPLower}.
\section{Proof of Lemma \ref{lem:RevIneq}}\label{sec:LSCProof}
Lemma \ref{lem:RevIneq} follows from equation \eqref{eq:LAgain} and Lemma \ref{lem:Approx2}, which in turn requires Lemma \ref{lem:Approx1}. Lemma \ref{lem:Approx1} shows that for any distribution $\lambda$ on $\mathscr{Z}$ with mean $z \in Z(I)$, there is a distribution $\bar \lambda$ on $\mathscr{Z}(I)$ whose mean is also $z$, with the variational distance between $\bar\lambda$ and $\lambda$ bounded by a fixed multiple of the mass that $\lambda$ places on components outside of $\mathscr{Z}(I)$. The lemma also specifies some equalities that $\lambda$ and $\bar\lambda$ jointly satisfy. Lemma \ref{lem:Approx2} shows that if $x$ puts little mass on actions outside $I$, then the reduction in relative entropy obtained by switching from $\bar \lambda$ to $\lambda$ is small at best, uniformly over the choice of displacement vector $z \in Z(I)$.
Both lemmas require additional notation. Throughout what follows we write $K$ for $\scrA\smallsetminus I$. For $\lambda \in \Delta(\mathscr{Z})$, write $\lambda_{ij}$ for $\lambda(e_j - e_i)$ when $j \ne i$. Write $\lambda_{[i]} = \sum_{j \ne i}\lambda_{ij}$ for the $i$th ``row sum'' of $\lambda$ and $\lambda^{[j]} = \sum_{i \ne j}\lambda_{ij}$ for the $j$th ``column sum''. (Remember that $\lambda$ has no ``diagonal components'', but instead has a single null component $\lambda_{\mathbf{0}} = \lambda(\mathbf{0})$.) For $I \subseteq \scrA$, write $\lambda_{[I]} = \sum_{i \in I}\lambda_{[i]}$ for the sum over all elements of $\lambda$ from rows in $I$. If $\lambda, \bar\lambda \in \Delta(\mathscr{Z})$, we apply the same notational devices to $\Delta\lambda = \bar\lambda - \lambda$ and to $|\Delta\lambda|$, the latter of which is defined by $|\Delta\lambda|_{ij} = |(\Delta\lambda)_{ij}|$. Finally, if $\chi \in \mathbb{R}^{I}_+$, we write $\chi_{[I]}$ for $\sum_{i \in I} \chi_i$.
\begin{lemma}\label{lem:Approx1} Fix $z \in Z(I)$ and $\lambda \in \Lambda_{\mathscr{Z}}(z)$. Then there exist a distribution $\bar\lambda \in \Lambda_{\mathscr{Z}(I)}(z)$ and a vector $\chi \in \mathbb{R}^{I}_+$ satisfying
\begin{mylist} \item $\Delta\lambda_{[i]} = \Delta\lambda^{[i]} = -\chi_i$ for all $i \in I$, \item $\Delta\lambda_{\mathbf{0}} = \lambda_{[K]}+\chi_{[I]}$, \item $\chi_{[I]} \leq \lambda_{[K]}$, \text{ and}
\item $|\Delta\lambda|_{[\scrA]} \leq 3\lambda_{[K]}$, \end{mylist}
\noindent where $\Delta\lambda = \bar\lambda - \lambda$. \end{lemma}
\begin{lemma}\label{lem:Approx2} Fix $\varepsilon>0$. There exists a $\delta>0$ such that for any $x \in X$ with $\bar x_K \equiv\max_{k \in K}x_k < \delta$, any $z \in Z(I)$, and any $\lambda \in \Lambda_{\mathscr{Z}(\support(x))}(z)$, we have \begin{equation*}
R(\lambda||\nu(\cdot|x))\ge R(\bar\lambda||\nu(\cdot|x))-\varepsilon, \end{equation*} where $\bar\lambda \in \Lambda_{\mathscr{Z}(I)}(z)$ is the distribution determined for $\lambda$ in Lemma \ref{lem:Approx1}. \end{lemma}
To see that Lemma \ref{lem:Approx2} implies Lemma \ref{lem:RevIneq}, fix $x \in X$ with $\bar x_K < \delta$ and $z \in Z(I)$, and let $\lambda\in \Lambda_\mathscr{Z}(z) $ and $\lambda^I\in \Lambda_{\mathscr{Z}(I)}(z) $ be the minimizers in \eqref{eq:LAgain} and \eqref{eq:LAgain2}, respectively; then since $\bar\lambda \in \Lambda_{\mathscr{Z}(I)}(z)$, \[
L(x,z) = R(\lambda||\nu(\cdot|x)) \geq R(\bar\lambda||\nu(\cdot|x))-\varepsilon \geq R(\lambda^I||\nu(\cdot|x))-\varepsilon = L_I(x,z)-\varepsilon. \]
\subsection{Proof of Lemma \ref{lem:Approx1}} To prove Lemma \ref{lem:Approx1}, we introduce an algorithm that incrementally constructs the pair $(\bar\lambda, \chi) \in \Lambda_{\mathscr{Z}(I)}(z) \times \mathbb{R}^I_+$ from the pair $(\lambda,\mathbf{0}) \in \Lambda_{\mathscr{Z}}(z) \times \mathbb{R}^I_+$. All intermediate states $(\nu, \pi)$ of the algorithm are in $\Lambda_{\mathscr{Z}}(z) \times \mathbb{R}^I_+$.
Suppose without loss of generality that $K = \{1, \ldots , \bar k\}$. The algorithm first reduces all elements of the first row of $\lambda$ to 0, then all elements of the second row, and eventually all elements of the $\bar k$th row.
Suppose that at some stage of the algorithm, the state is $(\nu, \pi)\in \Lambda_{\mathscr{Z}}(z) \times \mathbb{R}^I_+$, rows $1$ through $k-1$ have been zeroed, and row $k$ has not been zeroed: \begin{gather} \nu_{[h]} = 0\text{ for all }h < k, \text{ and }\label{eq:AlgDag}\\ \nu_{[k]} > 0.\label{eq:AlgDagAlt} \end{gather} Since $\nu \in \Lambda_{\mathscr{Z}}(z)$ and $z \in Z(I)$, \begin{gather} \nu^{[i]} - \nu_{[i]} =\sum\nolimits_{\mathscr{z} \in \mathscr{Z}}\mathscr{z}_i \nu(\mathscr{z}) = z_i\text{ for all }i \in I,\text{ and}\label{eq:AlgStar}\\ \nu^{[\ell]} - \nu_{[\ell]} =\sum\nolimits_{\mathscr{z} \in \mathscr{Z}}\mathscr{z}_\ell \nu(\mathscr{z}) = z_\ell \geq 0\text{ for all }\ell \in K.\label{eq:Alg2Star} \end{gather} \eqref{eq:AlgDagAlt} and \eqref{eq:Alg2Star} together imply that $\nu^{[k]}>0$. Thus there exist $j\ne k$ and $i \ne k$ such that \begin{equation}\label{eq:AlgInc} \nu_{kj} \wedge\nu_{ik} \equiv c > 0. \end{equation}
Using \eqref{eq:AlgInc}, we now construct the algorithm's next state $(\hat\nu, \hat\pi)$ from the current state $(\nu, \pi)$ using one of three mutually exclusive and exhaustive cases, as described next; only the components of $\nu$ and $\pi$ whose values change are noted explicitly. \begin{alignat}{3} &\text{Case 1: }\;i \ne j &&\hspace{.5in}\text{Case 2: }\;i = j\in K&&\hspace{.5in}\text{Case 3: }\;i = j\in I \notag\\ &\hat\nu_{kj} = \nu_{kj} - c&&\hspace{.5in}\hat\nu_{kj} = \nu_{kj} - c&&\hspace{.5in}\hat\nu_{kj} = \nu_{kj} - c\notag\\ &\hat\nu_{ik} = \nu_{ik} - c&&\hspace{.5in}\hat\nu_{jk} = \nu_{jk} - c&&\hspace{.5in}\hat\nu_{jk} = \nu_{jk} - c\notag\\ &\hat\nu_{ij} = \nu_{ij} + c&&&&\notag\\ &\hat\nu_{\mathbf{0}} = \nu_{\mathbf{0}} + c&&\hspace{.5in}\hat\nu_{\mathbf{0}} = \nu_{\mathbf{0}} + 2c&&\hspace{.5in}\hat\nu_{\mathbf{0}} = \nu_{\mathbf{0}} + 2c\notag\\ &&& &&\hspace{.5in}\hat\pi_{i} = \pi_{i} + c\notag \end{alignat} In what follows, we confirm that following the algorithm to completion leads to a final state $(\bar\lambda, \chi)$ with the desired properties.
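To make the bookkeeping concrete, one step of the construction can be written as the following short Python sketch; it is purely illustrative (the names and the data layout are not part of the proof), and it assumes that $\nu$ is stored as a dictionary over ordered pairs $(i,j)$ with the null mass kept as a separate scalar and $\pi$ as a dictionary over $I$:
\begin{verbatim}
def algorithm_step(nu, nu0, pi, k):
    # nu : dict sending each ordered pair (i, j), i != j, to its mass nu_{ij}
    # nu0: mass of the null component nu_0
    # pi : dict sending each i in I to its accumulated mass (eventually chi_i)
    # k  : the row in K currently being zeroed; assumes nu_[k] > 0
    j = next(b for (a, b), m in nu.items() if a == k and m > 0)  # some nu_{kj} > 0
    i = next(a for (a, b), m in nu.items() if b == k and m > 0)  # some nu_{ik} > 0
    c = min(nu[(k, j)], nu[(i, k)])
    if i != j:                                  # Case 1
        nu[(k, j)] -= c
        nu[(i, k)] -= c
        nu[(i, j)] = nu.get((i, j), 0.0) + c
        nu0 += c
    else:                                       # Cases 2 and 3 (i = j)
        nu[(k, j)] -= c
        nu[(j, k)] -= c
        nu0 += 2 * c
        if j in pi:                             # Case 3: i = j lies in I
            pi[j] += c
    return nu, nu0, pi
\end{verbatim}
Iterating this step row by row until every row in $K$ is exhausted produces the terminal pair $(\bar\lambda, \chi)$ analyzed below.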
Write $\Delta\nu = \hat\nu - \nu$ and $\Delta\pi = \hat\pi - \pi$, and define $|\Delta\nu|$ componentwise as described above. The following statements are immediate: \begin{subequations} \begin{alignat}{3} &\text{Case 1: }\;i \ne j &&\hspace{.3in}\text{Case 2: }\;i = j\in K&&\hspace{.3in}\text{Case 3: }\;i = j\in I \notag\\ &\Delta\nu_{[k]} = \Delta\nu^{[k]} =- c&&\hspace{.3in}\Delta\nu_{[k]} = \Delta\nu^{[k]} =- c&&\hspace{.3in}\Delta\nu_{[k]} = \Delta\nu^{[k]} =- c\label{eq:chartl1}\\ &&&\hspace{.3in}\Delta\nu_{[j]} = \Delta\nu^{[j]} =- c&&\hspace{.3in}\Delta\nu_{[j]} = \Delta\nu^{[j]} =- c\label{eq:chartl2}\\ & \Delta\nu_{[\ell]} = \Delta\nu^{[\ell]} = 0, \ell \ne k&&\hspace{.3in}\Delta\nu_{[\ell]} = \Delta\nu^{[\ell]} = 0, \ell \notin \{k,j\}&&\hspace{.3in}\Delta\nu_{[\ell]} = \Delta\nu^{[\ell]} = 0, \ell \notin \{k,j\}\label{eq:chartl3}\\ &\Delta\nu_{\mathbf{0}} = c&&\hspace{.3in}\Delta\nu_{\mathbf{0}} = 2c &&\hspace{.3in}\Delta\nu_{\mathbf{0}} = 2c\label{eq:chartl4}\\ &\Delta\pi_{[I]} = 0&&\hspace{.3in}\Delta\pi_{[I]} = 0 &&\hspace{.3in}\Delta\pi_{[I]} = c\label{eq:chartl5}\\ &\Delta\nu_{[K]} = -c&&\hspace{.3in}\Delta\nu_{[K]} = -2 c &&\hspace{.3in}\Delta\nu_{[K]} = -c\label{eq:chartl6}\\
&|\Delta\nu|_{[\scrA]} = 3 c&&\hspace{.3in}|\Delta\nu|_{[\scrA]} = 2 c &&\hspace{.3in}|\Delta\nu|_{[\scrA]} = 2 c\label{eq:chartl7} \end{alignat} \end{subequations}
The initial equalities in \eqref{eq:chartl1}--\eqref{eq:chartl3} imply that if $\nu$ is in $\Lambda_{\mathscr{Z}}(z)$, then so is $\hat\nu$. \eqref{eq:chartl1}--\eqref{eq:chartl3} also imply that no step of the algorithm increases the mass in any row of $\nu$. Moreover, \eqref{eq:AlgInc} and the definition of the algorithm imply that during each step, no elements of the $k$th row or the $k$th column of $\nu$ are increased, and that at least one such element is zeroed. It follows that row 1 is zeroed in at most $2n-3$ steps, followed by row 2, and ultimately followed by row $\bar k$. Thus a terminal state with $\bar\lambda_{[K]}=0$, and hence with $\bar\lambda\in\Lambda_{\mathscr{Z}(I)}(z)$, is reached in a finite number of steps. For future reference, we note that \begin{equation}\label{eq:AlgFrown} \Delta\lambda_{[K]} = \bar\lambda_{[K]} - \lambda_{[K]} = -\lambda_{[K]}. \end{equation}
We now verify conditions (i)--(iv) from the statement of the lemma. First, for any $i \in I$, \eqref{eq:chartl2}, \eqref{eq:chartl3}, \eqref{eq:chartl5}, and the definition of the algorithm imply that in all three cases, $\Delta\nu_{[i]} = \Delta\nu^{[i]} = -\Delta\pi_{i}$: the common value is 0 in Cases 1 and 2, and in Case 3 it is $-c$ for the index $i = j$ and 0 for all other $i \in I$. Thus aggregating
over all steps of the algorithm yields $\Delta\lambda_{[i]} = \Delta\lambda^{[i]} = -\chi_i$, which is condition (i).
Second, \eqref{eq:chartl4}--\eqref{eq:chartl6} imply that in all three cases, $\Delta\nu_{\mathbf{0}} = -\Delta\nu_{[K]} +\Delta\pi_{[I]}$. Aggregating over all steps of the algorithm yields $\Delta\lambda_{\mathbf{0}} = -\Delta\lambda_{[K]} +\chi_{[I]}$. Then substituting \eqref{eq:AlgFrown} yields $\Delta\lambda_{\mathbf{0}} = \lambda_{[K]} +\chi_{[I]}$ which is condition (ii).
Third, \eqref{eq:chartl5} and \eqref{eq:chartl6} imply that in all three cases, $\Delta\pi_{[I]} \leq -\Delta\nu_{[K]} $. Aggregating and using \eqref{eq:AlgFrown} yields $\chi_{[I]} \leq -\Delta\lambda_{[K]} =\lambda_{[K]}$, establishing (iii).
Fourth, \eqref{eq:chartl6} and \eqref{eq:chartl7} imply that in all three cases, $|\Delta\nu|_{[\scrA]} \le -3 \Delta\nu_{[K]} $. Aggregating and using \eqref{eq:AlgFrown} yields $|\Delta\lambda|_{[\scrA]} \le -3\Delta\lambda_{[K]} =3\lambda_{[K]}$, establishing (iv).
This completes the proof of Lemma \ref{lem:Approx1}.
\subsection{Proof of Lemma \ref{lem:Approx2}} To prove Lemma \ref{lem:Approx2}, it is natural to introduce the notation $d = \lambda - \bar \lambda = -\Delta \lambda$ to represent the increment generated by a move from distribution $\bar\lambda \in \Lambda_{\mathscr{Z}(I)}(z)$ to distribution $\lambda\in \Lambda_{\mathscr{Z}}(z)$. We will show that when $\bar x_K =\max_{k \in K}x_k$ is small, such a move can only result in a slight reduction in relative entropy.
To start, observe that \begin{gather} d_{[i]} = d^{[i]} = \chi_i\text{ for all }i \in I,\label{eq:d1} \\ d_{[k]} = d^{[k]} = \lambda_{[k]}\text{ for all }k \in K,\label{eq:d2} \\ d_{\mathbf{0}} = -\lambda_{[K]}-\chi_{[I]}\geq -2\lambda_{[K]},\text{ and}\label{eq:d3}\\
|d|_{[\scrA]} \leq 3\lambda_{[K]}\label{eq:d4}. \end{gather} Display \eqref{eq:d1} follows from part (i) of Lemma \ref{lem:Approx1}, display \eqref{eq:d3} from parts (ii) and (iii), and display \eqref{eq:d4} from part (iv). For display \eqref{eq:d2}, note first that since $\lambda$ and $\bar\lambda$ are both in $\Lambda_{\mathscr{Z}}(z)$, we have \[ \lambda^{[k]} - \lambda_{[k]} =\sum\nolimits_{\mathscr{z} \in \mathscr{Z}}\mathscr{z}_k \lambda(\mathscr{z}) = z_k =\sum\nolimits_{\mathscr{z} \in \mathscr{Z}}\mathscr{z}_k \bar\lambda(\mathscr{z})= \bar\lambda^{[k]} - \bar\lambda_{[k]}, \] and hence \[ d^{[k]} = \lambda^{[k]} - \bar\lambda^{[k]} =\lambda_{[k]} - \bar\lambda_{[k]} = d_{[k]}; \] then \eqref{eq:d2} follows from the fact that $\bar\lambda_{[k]}=0$, which is true since $\bar\lambda \in \Lambda_{\mathscr{Z}(I)}(z)$.
By definition, \begin{equation*}
R(\lambda||\nu(\cdot|x)) = \sum_{i \in \scrA}\sum_{j\ne i}\paren{\lambda_{ij}\log \lambda_{ij} - \lambda_{ij} \log x_i \sigma_{ij}} + \lambda_{\mathbf{0}}\log\lambda_{\mathbf{0}} - \lambda_{\mathbf{0}} \log \Sigma, \end{equation*} where $\sigma_{ij} = \sigma_{ij}(x)$ and $\Sigma = \sum_{j \in \scrA}x_j\sigma_{jj}$. Thus, writing $\ell(p) = p\log p$ for $p\in (0,1]$ and $\ell(0) = 0$, we have \begin{align}\label{eq:REDiff}
R(\lambda||\nu(\cdot|x))- R(\bar\lambda||\nu(\cdot|x)) &=\sum_{i \in \scrA}\sum_{j\ne i}\paren{\ell(\lambda_{ij}) - \ell(\bar\lambda_{ij}) }+\paren{ \ell(\lambda_{\mathbf{0}}) - \ell(\bar\lambda_{\mathbf{0}}) }\\ &\qquad - \paren{\sum_{i \in \scrA}\sum_{j\ne i} d_{ij} \log x_i\sigma_{ij} + d_{\mathbf{0}} \log\Sigma}.\notag \end{align}
We first use \eqref{eq:d1}--\eqref{eq:d3}, Lemma \ref{lem:Approx1}(iii), and the facts that $\chi \geq 0$ and $\Sigma \geq \varsigma$ to obtain a lower bound on the final term of \eqref{eq:REDiff}: \begin{align*} -\Bigg(\sum_{i \in \scrA}&\sum_{j\ne i} d_{ij} \log x_i\sigma_{ij} + d_{\mathbf{0}} \log\Sigma\Bigg)\\
&= -\sum_{i \in I}\chi_i\log x_i -\sum_{k \in K}\lambda_{[k]}\log x_k-\sum_{i \in \scrA}\sum_{j\ne i} d_{ij} \log \sigma_{ij} + \paren{\sum_{i \in I}\chi_i +\sum_{k \in K}\lambda_{[k]}}\log\Sigma\\
&\geq\sum_{i \in I}\chi_i\log \Sigma +\sum_{k \in K}\lambda_{[k]} \log \frac{\Sigma}{x_k }+\sum_{i \in \scrA}\sum_{j\ne i} |d_{ij}| \log \sigma_{ij} \\
&\geq \chi_{[I]}\log\varsigma + \lambda_{[K]}\log \frac{\varsigma}{\bar x_K } +|d|_{[\scrA]}\log{\varsigma}\\ &\geq \lambda_{[K]}\log\varsigma + \lambda_{[K]}\log \frac{\varsigma}{\bar x_K } +3\lambda_{[K]}\log{\varsigma}\\ &\geq \lambda_{[K]}\log \frac{\varsigma^5}{\bar x_K }. \end{align*}
To bound the initial terms on the right-hand side of \eqref{eq:REDiff}, observe that the function $\ell\colon[0,1] \to \mathbb{R}$ is convex, nonpositive, and minimized at $\mathrm{e}^{-1}$, where $\ell(\mathrm{e}^{-1}) = -\mathrm{e}^{-1}$. Define the convex function $\hat\ell \colon [0,1] \to \mathbb{R}$ by $\hat\ell(p)=\ell(p)$ if $p \leq \mathrm{e}^{-1}$ and $\hat\ell(p)=-\mathrm{e}^{-1}$ otherwise.
We now argue that for any $p,q \in [0,1]$, we have \begin{equation}\label{eq:HatEllIneq}
-|\ell(p) -\ell(q)|\geq \hat\ell(|p-q|). \end{equation} Since $\ell$ is nonpositive with minimum $-\mathrm{e}^{-1}$,
$-|\ell(p) -\ell(q)|\geq -\mathrm{e}^{-1}$ always holds. If $|p-q| \leq \mathrm{e}^{-1}$, then $-|\ell(p) -\ell(q)|\geq \min\{\ell(|p-q|), \ell(1-|p-q|)\} = \ell(|p-q|)$; the inequality follows from the convexity of $\ell$, and the equality obtains because $\ell(r)-\ell(1-r) \leq 0$ for $r \in [0,\frac12]$, which is true because $r \mapsto \ell(r)-\ell(1-r)$ is convex on $[0,\frac12]$ and equals $0$ at the endpoints. Together these claims yield \eqref{eq:HatEllIneq}.
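As a quick numerical sanity check (with values chosen only for illustration), take $p = 0.5$ and $q = 0.1$: then $\ell(p) \approx -0.347$ and $\ell(q) \approx -0.230$, so $-|\ell(p) - \ell(q)| \approx -0.117$, while $|p-q| = 0.4 > \mathrm{e}^{-1}$ gives $\hat\ell(|p-q|) = -\mathrm{e}^{-1} \approx -0.368$, in agreement with \eqref{eq:HatEllIneq}.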
Together, inequality \eqref{eq:HatEllIneq}, Jensen's inequality, and inequality \eqref{eq:d4} imply that \begin{align*} \sum_{i \in \scrA}\sum_{j\ne i}\paren{\ell(\lambda_{ij}) - \ell(\bar\lambda_{ij}) } &\geq-\sum_{i \in \scrA}\sum_{j\ne i}\abs{\ell(\lambda_{ij}) - \ell(\bar\lambda_{ij}) }\\
&\geq\sum_{i \in \scrA}\sum_{j\ne i}\hat\ell(|\lambda_{ij} -\bar\lambda_{ij}|)\\
&\geq(n^2-n)\,\hat\ell\paren{\frac{\sum_{i \in \scrA}\sum_{j\ne i}|\lambda_{ij} -\bar\lambda_{ij}|}{n^2-n}}\\ &\geq(n^2-n)\,\hat\ell\paren{\tfrac{3}{n^2-n}\lambda_{[K]}}. \end{align*}
Finally, display \eqref{eq:d3} implies that $d_{\mathbf{0}}=\lambda_{\mathbf{0}} -\bar \lambda_{\mathbf{0}} \in [-2\lambda_{[K]}, 0]$. Since $\ell\colon[0,1] \to \mathbb{R}$ is convex with $\ell(1) = 0$ and $\ell^\prime(1)=1$, it follows that \begin{equation*} \ell(\lambda_{\mathbf{0}}) - \ell( \bar \lambda_{\mathbf{0}}) \geq \ell( 1 + d_{\mathbf{0}}) -\ell(1) \geq d_{\mathbf{0}}\geq -2\lambda_{[K]}. \end{equation*} Substituting three of the last four displays into \eqref{eq:REDiff}, we obtain \[
R(\lambda||\nu(\cdot|x))- R(\bar\lambda||\nu(\cdot|x)) \geq (n^2 -n)\,\hat\ell\paren{\tfrac{3}{n^2-n}\lambda_{[K]}}+\lambda_{[K]}\paren{\log \frac{\varsigma^5}{\bar x_K }-2}. \]
To complete the proof of the lemma, it is enough to show that if $\bar x_K \leq \varsigma^5\mathrm{e}^{-2}$, then \begin{equation}\label{eq:RELast} (n^2 -n)\,\hat\ell\paren{\tfrac{3}{n^2-n}\lambda_{[K]}}+\lambda_{[K]}\paren{\log \frac{\varsigma^5}{\bar x_K }-2}\geq -(n^2 -n)\paren{\frac {\bar x_K}{\mathrm{e}\varsigma^5}}^{1/3}. \end{equation} We do so by computing the minimum value of the left-hand side of \eqref{eq:RELast} over $\lambda_{[K]}\geq 0$. Let $a= n^2 - n$ and $c = \log \frac{\varsigma^5}{\bar x_K }-2$.
The assumption that $\bar x_K \leq \varsigma^5\mathrm{e}^{-2}$ then becomes $c \geq 0$. Thus if $s > \frac{a}{3\mathrm{e}}$, then \[ a\hat\ell(\tfrac3a s)+cs = -a\mathrm{e}^{-1} + cs \geq -a \mathrm{e}^{-1}+c\cdot\tfrac{a}{3\mathrm{e}} = a \ell(\tfrac3a \cdot\tfrac{a}{3\mathrm{e}} )+c\cdot\tfrac{a}{3\mathrm{e}} . \] Hence if $s\leq \frac{a}{3\mathrm{e}}$ minimizes $a \ell(\frac3a s)+cs$ over $s \geq 0$, it also minimizes $a \hat\ell(\frac3a s)+cs$, and the minimized values are the same. To minimize $a \ell(\frac3a s)+cs$, note that it is convex in $s$; solving the first-order condition yields the minimizer, $s^{\mathlarger *}= \frac{a}3\exp(-\frac{c}3-1)$. This is less than or equal to $\frac{a}{3\mathrm{e}}$ when $c \geq 0$. Plugging $s^{\mathlarger *}$ into the objective function yields $-a \exp(-\frac{c}3-1)$, and substituting in the values of $a$ and $c$ and simplifying yields the right-hand side of \eqref{eq:RELast}.
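In detail, with $a = n^2 - n$ and $c = \log \frac{\varsigma^5}{\bar x_K} - 2$, the final simplification reads \[ -a \exp\paren{-\tfrac{c}3-1} = -(n^2-n)\exp\paren{-\tfrac13\log \frac{\varsigma^5}{\bar x_K}-\tfrac13} = -(n^2 -n)\paren{\frac {\bar x_K}{\mathrm{e}\varsigma^5}}^{1/3}, \] which is the right-hand side of \eqref{eq:RELast}.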
This completes the proof of Lemma \ref{lem:Approx2}.
\section{Proof of Proposition \ref{prop:interiorpaths}} \label{sec:RIOP}
It remains to prove Proposition \ref{prop:interiorpaths}. Inequality \eqref{eq:LimSPBound} implies that solutions $\bar\phi$ to the mean dynamic \eqref{eq:MD} satisfy \begin{equation}\label{eq:NewPhi0} \varsigma -\bar\phi_i \leq \dot{\bar\phi}_i \leq 1 \end{equation} for every action $i \in \scrA$.
Thus if $(\bar\phi_0)_i \leq \frac\varsigma2$, then for all $t \in [0, \frac\varsigma4]$, the upper bound in \eqref{eq:NewPhi0} yields $(\bar\phi_t)_i \leq \frac{3\varsigma}4$. Then the lower bound yields $ (\dot{\bar\phi}_t)_i \geq \varsigma - \frac{3\varsigma}4= \frac\varsigma4$, and thus \begin{equation}\label{eq:NewPhi1} (\bar\phi_0)_i \leq \tfrac\varsigma2\;\text{ implies that }\;(\bar\phi_t)_i-(\bar\phi_0)_i \geq \tfrac\varsigma4 t\;\text{ for all }t \in [0, \tfrac\varsigma4]. \end{equation} In addition, it follows easily from \eqref{eq:NewPhi0} and \eqref{eq:NewPhi1} that solutions $\bar\phi$ of \eqref{eq:MD} from every initial condition in $X$ satisfy \begin{equation}\label{eq:NewPhi2} (\bar\phi_t)_i \geq \tfrac\varsigma4 t\;\text{ for all }t \in [0, \tfrac\varsigma4]. \end{equation}
Fix $\phi \in \mathscr{C}_x$ with $c_x(\phi) < \infty$, so that $\phi$ is absolutely continuous, with $\dot\phi_t \in Z$ at all $t \in [0,1]$ where $\phi$ is differentiable. Let $\bar\phi \in \mathscr{C}_x$ be the solution to \eqref{eq:MD} starting from $x$. Then for $\alpha \in (0, \tfrac\varsigma4]$, define the trajectory $\phi^\alpha \in \tilde \mathscr{C}_x$ as follows: \begin{equation}\label{eq:DefNewPhi} \phi^\alpha_t= \begin{cases} \bar\phi_t &\text{ if }t\leq\alpha,\\ \bar\phi_\alpha + (1 - \frac{2}{\varsigma}\alpha )(\phi_{t -\alpha}- x) &\text{ if }t>\alpha. \end{cases} \end{equation} Thus $\phi^\alpha$ follows the solution to the mean dynamic from $x$ until time $\alpha$; thereafter, the increments of $\phi^\alpha$ starting at time $\alpha$ mimic those of $\phi$ starting at time 0, but are slightly scaled down.
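Note that the two branches of \eqref{eq:DefNewPhi} agree at $t = \alpha$: since $\phi \in \mathscr{C}_x$ starts at $\phi_0 = x$, the second branch evaluates to $\bar\phi_\alpha$ there, so $\phi^\alpha$ is continuous.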
The next lemma describes the key properties of $\phi^\alpha$. In part ($ii$) and hereafter, $|\cdot|$ denotes the $\ell^1$ norm on $\mathbb{R}^n$.
\begin{lemma}\label{lem:NewPhiProps} If $\alpha \in (0, \tfrac\varsigma4]$ and $t \in [\alpha,1]$, then \begin{mylist} \item $\dot\phi^\alpha_t = (1 - \tfrac{2}{\varsigma}\alpha )\dot\phi_{t -\alpha}$.
\item $|\phi^\alpha_t - \phi_{t-\alpha}| \leq (2+\frac4\varsigma) \alpha$. \item for all $i \in \scrA$, $(\phi^\alpha_t)_i \geq \tfrac\varsigma4\alpha$. \item for all $i \in \scrA$, $(\phi^\alpha_t)_i \geq \tfrac\varsigma{12}(\phi_{t -\alpha})_i$. \end{mylist} \end{lemma}
\textit{Proof. } Part (i) is immediate. For part (ii), combine the facts that $\bar\phi_t$ and $\phi_t$ move at $\ell^1$ speed at most $2$ (since both have derivatives in $Z$) and the identity $\phi_{t-\alpha} = \phi_0 + (\phi_{t-\alpha} -\phi_0)$ with the definition of $\phi^\alpha_t$ to obtain \[ \abs{\phi^\alpha_t - \phi_{t-\alpha}} \leq \abs{ \bar\phi_\alpha - x} + \tfrac{2}{\varsigma}\alpha \abs{\phi_{t -\alpha}- x}\leq 2\alpha + \tfrac{2}{\varsigma}\alpha \cdot 2(t-\alpha) \leq (2+\tfrac4\varsigma) \alpha. \]
We turn to part (iii). If $(\phi_{t -\alpha}- x)_i \geq 0$, then it is immediate from definition \eqref{eq:DefNewPhi} and inequality \eqref{eq:NewPhi2} that $(\phi^\alpha_t)_i \geq \tfrac\varsigma4\alpha$. So suppose instead that $(\phi_{t -\alpha}- x)_i < 0$. Then \eqref{eq:DefNewPhi} and the fact that $(\phi_{t -\alpha})_i\geq 0$ imply that \begin{equation}\label{eq:NewPhi3} (\phi^\alpha_t)_i\geq(\bar\phi_\alpha)_i - (1 - \tfrac{2}{\varsigma}\alpha )x_i=((\bar\phi_\alpha)_i -x_i) + \tfrac{2}{\varsigma}\alpha x_i. \end{equation} If $x_i \leq \frac\varsigma2$, then \eqref{eq:NewPhi3} and \eqref{eq:NewPhi1} yield \[ (\phi^\alpha_t)_i\geq ((\bar\phi_\alpha)_i -x_i) + \tfrac{2}{\varsigma}\alpha x_i \geq \tfrac\varsigma4\alpha + 0 = \tfrac\varsigma4\alpha. \] If $x_i \in[ \frac\varsigma2, \varsigma]$, then \eqref{eq:NewPhi3} and \eqref{eq:NewPhi0} yield \[ (\phi^\alpha_t)_i\geq((\bar\phi_\alpha)_i -x_i) + \tfrac{2}{\varsigma}\alpha x_i \geq 0 + \tfrac{2}{\varsigma}\alpha \cdot \tfrac\varsigma2\geq\alpha. \] And if $x_i \geq \varsigma$, then \eqref{eq:NewPhi3}, \eqref{eq:NewPhi0}, and the fact that $\dot{\bar\phi}_i \geq -1$ yield \[ (\phi^\alpha_t)_i\geq((\bar\phi_\alpha)_i -x_i) + \tfrac{2}{\varsigma}\alpha x_i \geq -\alpha + \tfrac{2}{\varsigma}\alpha \cdot \varsigma\geq\alpha. \]
It remains to establish part (iv). If $(\phi_{t -\alpha})_i=0$ there is nothing to prove. If $(\phi_{t -\alpha})_i\in (0, 3\alpha]$, then part (iii) implies that \[ \frac{(\phi^\alpha_t)_i}{(\phi_{t -\alpha})_i}\geq \frac{\tfrac\varsigma4\alpha}{3\alpha}=\frac\varsigma{12}. \] And if $(\phi_{t -\alpha})_i\geq 3\alpha$, then definition \eqref{eq:DefNewPhi} and the facts that $\dot{\bar\phi}_i \geq -1$ and $\alpha \leq \frac\varsigma4$ imply that \[ \frac{(\phi^\alpha_t)_i}{(\phi_{t -\alpha})_i} \geq \frac{(\bar\phi_\alpha - x)_i + (1 - \frac{2}{\varsigma}\alpha )(\phi_{t -\alpha})_i+\frac{2}{\varsigma}\alpha x_i}{(\phi_{t -\alpha})_i} \geq -\frac{\alpha}{3\alpha}+ 1 - \frac{2}{\varsigma}\alpha \geq \frac23 - \frac12 = \frac16. \;\hspace{4pt}\ensuremath{\blacksquare}
\]
Each trajectory $\phi^\alpha$ is absolutely continuous, and Lemma \ref{lem:NewPhiProps}(ii) and the fact that \eqref{eq:MD} is bounded imply that $\phi^\alpha$ converges uniformly to $\phi$ as $\alpha$ approaches 0. This uniform convergence implies that \begin{equation}\label{eq:hConv} \lim_{\alpha\to 0}h(\phi^\alpha)= h(\phi). \end{equation} Since $\phi^\alpha_{[0,\alpha]}$ is a solution to \eqref{eq:MD}, and thus has cost zero, it follows from Lemma \ref{lem:NewPhiProps}(i) and the convexity of $L(x, \cdot)$ that \begin{align} c_{x}(\phi^\alpha) &= \int_{\alpha}^{1}L(\phi^{\alpha}_t,\dot{\phi}^{\alpha}_t)\,\mathrm{d} t\notag\\ &\leq \int_{\alpha}^{1}L(\phi^{\alpha}_t,\dot{\phi}_{t-\alpha})\,\mathrm{d} t +\int_{\alpha}^{1}\tfrac{2}{\varsigma}\alpha \paren{L(\phi^{\alpha}_t,\mathbf{0})-L(\phi^{\alpha}_t,\dot{\phi}_{t-\alpha})}\mathrm{d} t.\label{ex:CostDecomp} \end{align}
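In more detail, Lemma \ref{lem:NewPhiProps}(i) allows us to write $\dot\phi^\alpha_t = (1 - \tfrac{2}{\varsigma}\alpha)\dot\phi_{t-\alpha} + \tfrac{2}{\varsigma}\alpha\,\mathbf{0}$, so the convexity of $L(\phi^\alpha_t, \cdot)$ gives \[ L(\phi^{\alpha}_t,\dot{\phi}^{\alpha}_t) \leq (1 - \tfrac{2}{\varsigma}\alpha)\,L(\phi^{\alpha}_t,\dot{\phi}_{t-\alpha}) + \tfrac{2}{\varsigma}\alpha\, L(\phi^{\alpha}_t,\mathbf{0}); \] integrating this bound over $[\alpha, 1]$ yields \eqref{ex:CostDecomp}.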
To handle the second integral in \eqref{ex:CostDecomp}, fix $t \in [\alpha, 1]$. Since $\phi^\alpha_t\in \Int(X)$, $\nu(\cdot|\phi^\alpha_t)$ has support $\mathscr{Z}$, and $Z$ is a set with extreme points $\ext(Z) =\{e_j-e_i\colon j\neq i \}=\mathscr{Z} \smallsetminus\{\mathbf{0}\}$. Therefore, the convexity of $L(x, \cdot)$, the final equality in \eqref{eq:SimpleLBound0}, the lower bound \eqref{eq:LimSPBound}, and Lemma \ref{lem:NewPhiProps}(iii) imply that for all $z \in Z$, \begin{align*} L(\phi^\alpha_t,z) \leq \max_{i\in \scrA}\max_{j\ne i}L(\phi^\alpha_t, e_j-e_i) \leq -\!\log\paren{\varsigma \min_{i\in \scrA}\hspace{1pt}(\phi^\alpha_t)_i} \leq -\!\log \paren{\tfrac{\varsigma^2}4 \alpha}. \end{align*} Thus since $L$ is nonnegative and since $\lim_{\alpha\to0}\alpha \log{\alpha} = 0$, the second integrand in \eqref{ex:CostDecomp} converges uniformly to zero, and so \begin{equation}\label{eq:SecondIntBound} \lim_{\alpha \to 0}\int_{\alpha}^{1}\tfrac{2}{\varsigma}\alpha \paren{L(\phi^{\alpha}_t,\mathbf{0})-L(\phi^{\alpha}_t,\dot{\phi}_{t-\alpha})}\mathrm{d} t = 0. \end{equation}
To bound the first integral in \eqref{ex:CostDecomp}, note first that by representation \eqref{eq:CramerRep}, for each $t \in [0, 1]$ there is a probability measure $\lambda_t \in \Delta(\mathscr{Z})$ with $\lambda_t \ll \nu(\cdot|\phi_{t})$ such that \begin{gather}
L(\phi_{t},\dot{\phi}_{t})=R(\lambda_{t}||\nu(\cdot|\phi_{t}))\;\text{ and}\label{eq:LNew1}\\
L(\phi^{\alpha}_{t+\alpha},\dot{\phi}_{t})\leq R(\lambda_{t}||\nu(\cdot|\phi^{\alpha}_{t+\alpha}))\;\text{ for all }\alpha \in (0,1].\label{eq:LNew2} \end{gather} DE Lemma 6.2.3 ensures that $\{\lambda_t\}_{t\in[0,1]}$ can be chosen to be a measurable function of $t$. (Here and below, we ignore the measure zero set on which either $\dot{\phi}_{t}$ is undefined or $L(\phi_{t},\dot{\phi}_{t})=\infty$.)
Lemma \ref{lem:RelEnt2} and expressions \eqref{eq:LNew1} and \eqref{eq:LNew2} imply that \begin{align*} \limsup_{\alpha \to 0}\int_{\alpha}^{1}L(\phi^{\alpha}_t,\dot{\phi}_{t-\alpha})\,\mathrm{d} t &=\limsup_{\alpha \to 0}\int_{0}^{1-\alpha}L(\phi^{\alpha}_{t+\alpha},\dot{\phi}_{t})\,\mathrm{d} t \\
&\leq \limsup_{\alpha \to 0}\paren{\int_{0}^{1-\alpha}R(\lambda_{t}||\nu(\cdot|\phi^{\alpha}_{t+\alpha}))\,\mathrm{d} t +\int_{1-\alpha}^1 0\,\mathrm{d} t}\\
&= \int_{0}^{1}R(\lambda_{t}||\nu(\cdot|\phi_{t}))\,\mathrm{d} t \\ &=\int_{0}^{1}L(\phi_t,\dot{\phi}_t)\,\mathrm{d} t \\ &= c_x(\phi), \end{align*} where the third line follows from the dominated convergence theorem and Lemma \ref{lem:RelEnt2} below. Combining this inequality with \eqref{eq:hConv}, \eqref{ex:CostDecomp}, and \eqref{eq:SecondIntBound}, we see that \[ \inf_{\phi\in\mathscr{C}^{\circ}}\paren{c_{x}(\phi)-h(\phi)}\leq \limsup_{\alpha\to0}\paren{c_{x}(\phi^\alpha)-h(\phi^\alpha)} \leq c_{x}(\phi)-h(\phi). \] Since $\phi \in \mathscr{C}_x$ with $c_x(\phi) < \infty$ was arbitrary, and the bound is trivial when $c_x(\phi) = \infty$, the result follows.
It remains to prove the following lemma:
\begin{lemma}\label{lem:RelEnt2}
Write $R^\alpha_{t+\alpha} = R(\lambda_{t}||\nu(\cdot|\phi^{\alpha}_{t+\alpha}))$ and $R_t = R(\lambda_{t}||\nu(\cdot|\phi_{t}))$. Then \begin{mylist} \item for all $t \in [0,1)$, $\lim_{\alpha \to 0} R^\alpha_{t+\alpha} = R_t$; \item for all $\alpha >0$ small enough and $t \in [0, 1- \alpha]$, $R^\alpha_{t+\alpha} \leq R_t + \log \frac{12}\varsigma + 1$. \end{mylist} \end{lemma}
\textit{Proof. } Definition \eqref{eq:CondLawLimit} implies that \begin{gather}\label{eq:RelEntCalc2}
\hspace{-.25in}R^\alpha_{t+\alpha} - R_t=\sum_{\mathscr{z} \in \mathscr{Z}(\phi_t)}\lambda_{t}(\mathscr{z})\log\frac{\nu(\mathscr{z}|\phi_t)}{\nu(\mathscr{z}|\phi^\alpha_{t+\alpha})}\\ =\sum_{i \in \support(\phi_t)}\sum_{j \in \scrA\smallsetminus\{i\}}\lambda_{t}(e_j-e_i)\paren{\log\frac{(\phi_t)^{}_i}{(\phi^\alpha_{t+\alpha})^{}_i} +\log \frac{\sigma_{ij}(\phi_t)}{\sigma_{ij}(\phi^\alpha_{t+\alpha})}} + \lambda_t(\mathbf{0})\log\frac{\sum_{i\in \scrA}(\phi_t)^{}_i\hspace{1pt}\sigma_{ii}(\phi_t)}{\sum_{i\in \scrA}(\phi^\alpha_{t+\alpha})^{}_i\hspace{1pt}\sigma_{ii}(\phi^\alpha_{t+\alpha})}.\notag \end{gather} The uniform convergence from Lemma \ref{lem:NewPhiProps}(ii) and the continuity of $\sigma$ imply that for each $t \in [0, 1)$, the denominators of the fractions in \eqref{eq:RelEntCalc2} converge to their numerators as $\alpha$ vanishes, implying statement (i).
The lower bound \eqref{eq:LimSPBound} then implies that the second and third logarithms in \eqref{eq:RelEntCalc2} themselves converge uniformly to zero as $\alpha$ vanishes; in particular, for $\alpha$ small enough and all $t \in [0, 1-\alpha]$, these logarithms are bounded above by 1. Moreover, Lemma \ref{lem:NewPhiProps}(iv) implies that when $\alpha$ is small enough and $t \in [0, 1-\alpha]$ is such that $i \in \support(\phi_t)$, the first logarithm is bounded above by $\log \frac{12}\varsigma$. Together these claims imply statement (ii).
This completes the proof of the lemma, and hence the proof of Proposition \ref{prop:interiorpaths}.
\section{Proofs and Auxiliary Results for Section \ref{sec:App}}\label{sec:LogPotProofs}
In the analyses in this section, we are often interested in the action of a function's derivative in directions $z \in \mathbb{R}^n_0$ that are tangent to the simplex. With this in mind, we let $\mathbf{1} \in \mathbb{R}^n$ be the vector of ones, and let $P = I - \frac1n\mathbf{1}\mathbf{1}'$ be the matrix that orthogonally projects $\mathbb{R}^n$ onto $\mathbb{R}^n_0$. Given a function $g\colon \mathbb{R}^n \to \mathbb{R}$, we define the \emph{gradient of} $g$ \emph{with respect to} $\mathbb{R}^n_0$ by $\nabla_{\!0} g(x) = P \nabla g(x)$, so that for $z \in \mathbb{R}^n_0$, we have $\nabla g(x)^\prime z = \nabla g(x)^\prime P z = (P \nabla g(x))^\prime z= \nabla_{\!0} g(x)^\prime z$.
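Concretely, any $z \in \mathbb{R}^n_0$ satisfies $\mathbf{1}'z = 0$, and hence $Pz = z - \frac1n\mathbf{1}(\mathbf{1}'z) = z$; this is what justifies the chain of equalities above.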
\noindent\emph{Proof of Proposition \ref{prop:LogPotGC}}.
It is immediate from the definition of $M^\eta$ that $M^\eta(\pi) = M^\eta(P\pi)$ for all $\pi \in \mathbb{R}^n$, leading us to introduce the notation $\bar M^\eta \equiv M^\eta|_{\mathbb{R}^n_0}$. Recalling that $h(x) = \sum_{k \in S} x_k\log x_k$ denotes the negated entropy function, one can verify by direct substitution that \begin{equation}\label{eq:Inverses} \bar M^\eta\colon \mathbb{R}^n_0 \to \Int(X)\text{ and }\eta \nabla_{\!0} h\colon \Int(X) \to \mathbb{R}^n_0\text{ are inverse functions}. \end{equation}
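To sketch the verification (writing the logit choice function coordinatewise as $M^\eta_i(\pi) = \exp(\eta^{-1}\pi_i)/\sum_{j\in S}\exp(\eta^{-1}\pi_j)$, the form used in the computations below, and letting $\log$ act coordinatewise): for $\pi \in \mathbb{R}^n_0$, \[ \eta \nabla_{\!0} h(\bar M^\eta(\pi)) = \eta P\paren{\mathbf{1} + \log \bar M^\eta(\pi)} = \eta P\paren{\mathbf{1} + \eta^{-1}\pi - \mathbf{1}\log \sum\nolimits_{j\in S}\exp(\eta^{-1}\pi_j)} = P\pi = \pi, \] since $P\mathbf{1} = \mathbf{0}$ and $P\pi = \pi$ on $\mathbb{R}^n_0$; the composition in the opposite order is checked in the same way.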
Now let $x_t \in \Int(X)$ and $y_t=M^\eta(F(x_t)) = M^\eta(P F(x_t))$. Then display \eqref{eq:Inverses} implies that $\eta \nabla_{\!0} h(y_t)=P F(x_t)$. Since $f^\eta(x) = \eta^{-1}f(x) - h(x)$, $\nabla f(x) = F(x)$, and $\dot x_t = M^\eta(F(x_t ))-x_t\in \mathbb{R}^n_0$, we can compute as follows: \begin{align*} \tfrac {\mathrm{d}}{\mathrm{d} t} f^\eta(x_t ) &=\nabla_{\!0} f^\eta(x_t)'\dot x_t = (\eta ^{-1}P F(x_t )-\nabla_{\!0} h(x_t ))'(M^\eta(F(x_t ))-x_t ) \\ &= (\nabla_{\!0} h(y_t )-\nabla_{\!0} h(x_t ){)}'(y_t -x_t ) \ge 0, \end{align*} with strict inequality whenever $M^\eta(F(x_{t})) \ne x_{t}$, by the strict convexity of $h$. Since the boundary of $X$ is repelling under \eqref{eq:LogitDyn}, the proof is complete. \hspace{4pt}\ensuremath{\blacksquare}
We now turn to Lemma \ref{lem:NewHLemma} and subsequent claims. We start by stating the generalization of the Hamilton-Jacobi equation \eqref{eq:HJ}. For each $R \subseteq S$ with $\#R \geq 2$, let $X_{R} = \{x \in X: \support(x)\subseteq R\}$, and define $f^\eta_{R}\colon X \to \mathbb{R}$ by \[ f^\eta_{R} (x)=\eta ^{-1}f(x)-\left( {\sum\limits_{i\in R} {x_i \log x_i } +\sum\limits_{j\in S\smallsetminus R} {x_j } } \right). \] Evidently, \begin{equation}\label{eq:fetaR} f^\eta_{R}(x)=f^\eta(x)\text{ when }\support(x) = R. \end{equation} Our generalization of equation \eqref{eq:HJ} is \begin{equation}\label{eq:HJ2} H(x,-\nabla f^\eta_{R} (x))\leq 0\text{ when }\support(x) = R\text{, with equality if and only if }R=S. \end{equation} To use \eqref{eq:fetaR} and \eqref{eq:HJ2} to establish the lower bound $c^{\mathlarger *}_y \geq -f^\eta(y)$ for paths $\phi\in\mathscr{C}_{x^{\mathlarger *}}[0,T]$, $\phi(T)=y$ that include boundary segments, define $S_t = \support(\phi_t)$. At any time $t$ at which $\phi$ is differentiable, $\dot\phi_t$ is tangent to the face of $X$ corresponding to $S_t$, and so \eqref{eq:fetaR} implies that $\tfrac{\mathrm{d}}{\mathrm{d} t} f^\eta(\phi_t) = \nabla f^\eta_{S_t}(\phi_t)'\dot\phi_t$. We therefore have \begin{align} c_{x^{\mathlarger *}\!,T} (\phi ) &= \int_{0}^{T} {\sup _{u_t \in \mathbb{R}^n_0 } \left( {{u}'_t \dot {\phi }_t -H(\phi _t ,u_t )} \right)\mathrm{d} t} \ge \int_{0}^{T} {\left( {-\nabla f^\eta_{S_t} (\phi _t )'\dot {\phi }_t -H(\phi _t ,-\nabla f^\eta_{S_t} (\phi _t ))} \right)\mathrm{d} t}\notag \\ &\geq \int_{0}^{T} \!\!\! -\nabla f^\eta_{S_t}(\phi _t )'\dot \phi _t \,\mathrm{d} t = f^\eta(x^{\mathlarger *}) -f^\eta(y ) = -f^\eta(y ),\label{eq:CostBoundBd} \end{align} establishing the lower bound.
\noindent\emph{Derivation of property \eqref{eq:HJ2}}.
Let $x \in X$ have support $R \subseteq S$, $\#R \geq 2$. Then since $P\mathbf{1} = \mathbf{0}$, \begin{equation}\label{eq:nablofeta} \nabla_{\!0} f^\eta_{R} (x)= P \left( {\eta ^{-1}F(x)-\sum\limits_{i\in R} {e_i (1+\log x_i )-\sum\limits_{j\in S\smallsetminus R} {e_j } } } \right) = P \left( {\eta ^{-1}F(x)-\sum\limits_{i\in R} {e_i \log x_i } } \right). \end{equation}
Recalling definition \eqref{eq:CramerTr} of $H$, letting $\zeta_x$ be a random variable with distribution $\nu(\cdot|x)$, and using the fact that $P(e_{j} - e_{i})=e_{j} - e_{i}$, we compute as follows: \begin{align*} \exp&(H(x,-\nabla f^\eta_{R} (x)))= \mathbb{E}\exp(-\nabla f^\eta_{R} (x{)}'\zeta _x )\\ &= \sum\limits_{i\in S} {\sum\limits_{j\ne i} {\exp (-\nabla f^\eta_{R} (x{)}'(e_j -e_i ))\mathbb{P}(\zeta _x =e_j -e_i )} } +\mathbb{P}(\zeta_{x} = 0) \\ &= \sum\limits_{i\in R} {\sum\limits_{j\in S\smallsetminus \{i\}} {\exp (-\eta ^{-1}F_j (x)+\eta ^{-1}F_i (x)+\log x_j -\log x_i )\;x_i \frac{\exp (\eta ^{-1}F_j (x))}{\sum\nolimits_{k\in S} {\exp (\eta ^{-1}F_k (x))} }} } \\ &\hspace{2em}+ \sum\limits_{i\in S} {x_i \frac{\exp (\eta ^{-1}F_i (x))}{\sum\nolimits_{k\in S} {\exp (\eta ^{-1}F_k (x))} }} \\ &= \sum\limits_{i\in R} {\frac{\exp (\eta ^{-1}F_i (x))}{\sum\nolimits_{k\in S} {\exp (\eta ^{-1}F_k (x))} }(1-x_i)} +\sum\limits_{i\in R} {\frac{\exp (\eta ^{-1}F_i (x))}{\sum\nolimits_{k\in S} {\exp (\eta ^{-1}F_k (x))} }\;x_i } \\ &= \frac{\sum\nolimits_{i\in R}\exp (\eta ^{-1}F_i (x))}{\sum\nolimits_{k\in S} {\exp (\eta ^{-1}F_k (x))} } . \end{align*} Since the final expression equals 1 when $R=S$ and is less than 1 when $R \subset S$, property \eqref{eq:HJ2} follows. \hspace{4pt}\ensuremath{\blacksquare}
\noindent\emph{Derivation of equation \eqref{eq:HFOC}}.
Let $x \in \Int(X)$, and observe that \begin{equation}\label{eq:PartialH} \frac{\partial H}{\partial u_i } (x,u) = \frac{\sum\nolimits_{j\ne i} \left( \exp (u_i -u_j )x_j \exp (\eta ^{-1}F_i (x))-\exp (u_j -u_i )x_i \exp (\eta ^{-1}F_j (x)) \right)}{\mathbb{E}\exp (u'\zeta _x )\sum\nolimits_{k\in S} \exp (\eta ^{-1}F_k (x))} . \end{equation} Recall from the previous derivation that $\mathbb{E}\exp(-\nabla f^\eta (x{)}'\zeta _x )=1$. Thus since $u_i-u_j=(e_i-e_j)^\prime u = (P (e_i-e_j))^\prime u$, it follows from \eqref{eq:PartialH} that $\frac{\partial H}{\partial u_i } (x,u)=\frac{\partial H}{\partial u_i } (x,P u)$, so we can use equation \eqref{eq:nablofeta} with $R=S$ to compute as follows: \begin{align*} \frac{\partial H}{\partial u_i }&(x,-\nabla f^\eta(x))=\frac{\partial H}{\partial u_i }(x,-\nabla_{\!0} f^\eta(x))\\ &= \frac{1}{\sum\limits_{k\in\scrA} {\exp (\eta ^{-1}F_k (x))} }\sum\limits_{j\ne i} {\left( {\exp (-\eta ^{-1}F_i (x)+\eta ^{-1}F_j (x)+\log x_i -\log x_j )\,x_j \exp (\eta ^{-1}F_i (x))} \right.} \\ &\hspace{2em}-\left. {\exp (-\eta ^{-1}F_j (x)+\eta ^{-1}F_i (x)+\log x_j -\log x_i )\,x_i \exp (\eta ^{-1}F_j (x))} \right) \\ &= \frac{1}{\sum\limits_{k\in\scrA} {\exp (\eta ^{-1}F_k (x))} }\sum\limits_{j\ne i} {\left( {x_i \exp (\eta ^{-1}F_j (x))-x_j \exp (\eta ^{-1}F_i (x))} \right)} \\ &= x_i \frac{\sum\nolimits_{j\ne i} {\exp (\eta ^{-1}F_j (x))} }{\sum\nolimits_{k\in\scrA} {\exp (\eta ^{-1}F_k (x))} }-(1-x_i )\frac{\exp (\eta ^{-1}F_i (x))}{\sum\nolimits_{k\in\scrA} {\exp (\eta ^{-1}F_k (x))} } \\ &= x_{i} (1 -M^\eta_i (F(x))) - (1 - x_{i}) M^\eta_i (F(x)) \\ &= x_{i} - M^\eta_i (F(x)). \hspace{4pt}\ensuremath{\blacksquare}
\end{align*}
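In other words, evaluated at $u = -\nabla f^\eta(x)$, the $u$-gradient of $H$ equals $x - M^\eta(F(x))$, which is the negative of the vector field of the logit dynamic \eqref{eq:LogitDyn} used in the proof of Proposition \ref{prop:LogPotGC}.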
\section{Additional Details}\label{sec:AD}
\subsection{Proof of the Laplace principle upper bound: further details} \label{sec:ProofUpperApp}
Here we give a detailed proof of the Laplace principle upper bound. The argument follows DE Section 6.2.
By equation \eqref{eq:VNInt}, and the remarks that follow it, there is an optimal control sequence $\{\lambda^N_k\}_{k=0}^{N-1}$ that attains the infimum in \eqref{eq:VNInt}: \begin{equation}\label{eq:VNAgain}
V^{N}(x^N)=\mathbb{E}_{x^N}\paren{\int_{0}^{1}R(\bar{\lambda}^{N}_{t}\,||\,\nu^{N}(\cdot\hspace{1pt}|\hspace{1pt}\bar{\xi}^{N}_{t}))\,\mathrm{d} t+h(\smash{\hat{\boldsymbol{\xi}}}^{N})}. \end{equation} The control measure $\Lambda^N$ and the interpolated processes $\smash{\hat\boldsymbol{\xi}}^N$ and $\smash{\bar\boldsymbol{\xi}}^N$ induced by the control sequence are defined in the previous section. Proposition \ref{prop:converge} implies that there is a random measure $\Lambda$ and a random process $\boldsymbol{\xi}$ such that some subsubsequence of $(\Lambda^{N},\smash{\hat\boldsymbol{\xi}}^N,\smash{\bar\boldsymbol{\xi}}^N)$ converges in distribution to $(\Lambda,\boldsymbol{\xi},\boldsymbol{\xi})$. By the Skorokhod representation theorem, we can assume without loss of generality that $(\Lambda^{N},\smash{\hat\boldsymbol{\xi}}^N,\smash{\bar\boldsymbol{\xi}}^N)$ converges almost surely to $(\Lambda,\boldsymbol{\xi},\boldsymbol{\xi})$ (again along the subsubsequence, which hereafter is fixed).
The next lemma establishes the uniform convergence of relative entropies generated by the transition probabilities $\nu^N(\cdot | x)$ of the stochastic evolutionary process. \begin{lemma}\label{lem:RelEnt}
For each $N \geq N_0$, let $\lambda^N \colon \mathscr{X}^N \to \Delta(\mathscr{Z})$ be a transition kernel satisfying $\lambda^N(\cdot |x) \ll \nu^N(\cdot |x) $ for all $x \in \mathscr{X}^N$. Then \[
\lim_{N\to\infty}\max_{x\in\mathscr{X}^N}\abs{R\paren{\lambda^N(\cdot|x)\hspace{1pt}||\hspace{1pt} \nu^N(\cdot|x)} - R\paren{\lambda^N(\cdot|x)\hspace{1pt}||\hspace{1pt} \nu(\cdot|x)}}=0. \] \end{lemma}
\textit{Proof. } Definitions \eqref{eq:CondLaw} and \eqref{eq:CondLawLimit} imply that \begin{equation}\label{eq:RelEntCalc} \begin{split}
R&\paren{\lambda^N(\cdot|x)\hspace{1pt}||\hspace{1pt} \nu(\cdot|x)} - R\paren{\lambda^N(\cdot|x)\hspace{1pt}||\hspace{1pt} \nu^N(\cdot|x)}
=\sum_{\mathscr{z} \in \mathscr{Z}(x)}\lambda^N(\mathscr{z}|x)\log\frac{\nu^N(\mathscr{z}|x)}{\nu(\mathscr{z}|x)}\\
&=\sum_{i \in \support(x)}\sum_{j \in S\smallsetminus\{i\}}\lambda^N(e_j-e_i|x)\log\frac{\sigma^N_{ij}(x)}{\sigma_{ij}(x)}
+ \lambda^N(\mathbf{0}|x)\log\frac{\sum_{i\in\support(x)}x_i\sigma^N_{ii}(x)}{\sum_{i\in\support(x)}x_i\sigma_{ii}(x)}. \end{split} \end{equation} By the uniform convergence in \eqref{eq:LimSPs}, there is a vanishing sequence $\{\varepsilon^N\}$ such that \begin{gather*}
\max_{x \in \mathscr{X}^N}\max_{i,j \in S}|\sigma_{ij}^N(x) - \sigma_{ij}( x)| \leq \varepsilon^N.
\end{gather*} This inequality, the lower bound \eqref{eq:LimSPBound}, and display \eqref{eq:RelEntCalc} imply that for each $\mathscr{z} \in \mathscr{Z}(x)$ and $x \in \mathscr{X}^N$, we can write \[
\log\frac{\nu^N(\mathscr{z}|x)}{\nu(\mathscr{z}|x)} = \log\paren{1+\frac{\varepsilon^N(x,\mathscr{z})}{\varsigma(x,\mathscr{z})}}, \]
where $|\varepsilon^N(x,\mathscr{z})|\leq \varepsilon^N$ and $\varsigma(x,\mathscr{z})\geq \varsigma$. This fact and display \eqref{eq:RelEntCalc} imply the result. \hspace{4pt}\ensuremath{\blacksquare}
To proceed, we introduce a more general definition of relative entropy and an additional lemma. For $\alpha, \beta \in \mathscr{P}(\mathscr{Z} \times [0, 1])$ with $\beta \ll \alpha$, let $\frac{\mathrm{d} \beta}{\mathrm{d} \alpha}\colon \mathscr{Z} \times [0, 1] \to \mathbb{R}_+$ be the Radon-Nikodym derivative of $\beta$ with respect to $\alpha$. The \emph{relative entropy} of $\beta$ with respect to $\alpha$ is defined by \[
\mathscr{R}( \beta || \alpha) = \int_{\mathscr{Z} \times [0, 1]} \log\paren{\frac{\mathrm{d} \beta}{\mathrm{d} \alpha}(\mathscr{z},t)}\mathrm{d} \beta(\mathscr{z},t). \] We then have the following lemma (DE Theorem 1.4.3(f)):
\begin{lemma}\label{lem:RECR} Let $\{\pi_t\}_{t\in[0,1]}$, $\{\hat\pi_t\}_{t\in[0,1]}$ with $\pi_t, \hat\pi_t \in \Delta(\mathscr{Z})$ be Lebesgue measurable in $t$, and suppose that $\hat\pi_t \ll \pi_t$ for almost all $t \in [0, 1]$. Then \begin{equation}\label{eq:scrRandR}
\mathscr{R}( \hat\pi_t \otimes dt \,||\, \pi_t \otimes dt) = \int_0^1 R( \hat\pi_t \hspace{1pt}||\hspace{1pt} \pi_t)\,\mathrm{d} t. \end{equation} \end{lemma}
\noindent This result is an instance of the \emph{chain rule for relative entropy}, which expresses the relative entropy of two probability measures on a product space as the sum of two terms: the expected relative entropy of the conditional distributions of the first component given the second, and the relative entropy of the marginal distributions of the second component (see DE Theorem C.3.1). In Lemma \ref{lem:RECR}, the marginal distributions of the second component are both Lebesgue measure; thus the second summand is zero, yielding formula \eqref{eq:scrRandR}.
We now return to the main line of argument. For a measurable function $\phi\colon[0,1]\to X$, define the collection $\{\nu_t^{\phi}\}_{t\in[0,1]}$ of measures in $\Delta(\mathscr{Z})$ by \begin{equation}\label{eq:AnotherNu}
\nu_t^{{\phi}}(\mathscr{z})=\nu(\mathscr{z}|{\phi}_{t}). \end{equation}
Focusing still on the subsubsequence from Proposition \ref{prop:converge}, we begin our computation as follows: \begin{align} \liminf_{N\rightarrow\infty}V^{N}(x^{N})
&= \liminf_{N\rightarrow\infty}\mathbb{E}_{x^N}\paren{\int_{0}^{1}R\paren{\bar{\lambda}^{N}_{t}\hspace{1pt}||\hspace{1pt} \nu^{N}(\cdot|\bar{\xi}^{N}_{t})}\mathrm{d} t+h(\smash{\hat{\boldsymbol{\xi}}}^{N})}\notag\\
&=\liminf_{N\rightarrow\infty}\mathbb{E}_{x^N}\paren{\int_{0}^{1}R\paren{\bar{\lambda}^{N}_{t}\hspace{1pt}||\hspace{1pt} \nu(\cdot|\bar{\xi}^{N}_{t})}\mathrm{d} t+h(\smash{\hat{\boldsymbol{\xi}}}^{N})}\notag\\
&=\liminf_{N\rightarrow\infty}\mathbb{E}_{x^N}\paren{\mathscr{R}( \bar\lambda^N_t \otimes dt \,||\, \nu_t^{\hspace{1pt}\smash{\bar\boldsymbol{\xi}}^N}\! \otimes dt) +h(\smash{\hat{\boldsymbol{\xi}}}^{N})}\notag \end{align}
The first line is equation \eqref{eq:VNAgain}. The second line follows from Lemma \ref{lem:RelEnt}, using Observation \ref{obs:VarRep} to show that the optimal control sequence $\{\lambda^N_k\}$ satisfies $\lambda^N_k(\hspace{1pt}\cdot\hspace{1pt}|x_0, \ldots , x_{k}) \ll \nu^N(\cdot|x_k)$ for all $(x_{0},\ldots,x_{k})\in(\mathscr{X}^{N})^{k+1}$ and $k\in\{0,1,2,\ldots,N-1\}$. The third line follows from equation \eqref{eq:AnotherNu} and Lemma \ref{lem:RECR}.
We specified above that $(\Lambda^{N}= \bar\lambda^N_t\otimes dt,\smash{\hat\boldsymbol{\xi}}^N,\smash{\bar\boldsymbol{\xi}}^N)$ converges almost surely to $(\Lambda= \lambda_t \otimes dt ,\boldsymbol{\xi},\boldsymbol{\xi})$ in the topology of weak convergence, the topology of uniform convergence, and the Skorokhod topology, respectively. The last of these implies that $\smash{\bar\boldsymbol{\xi}}^N$ also converges to $\boldsymbol{\xi}$ almost surely in the uniform topology (DE Theorem A.6.5(c)).
Thus, since $x \mapsto \nu( \cdot | x)$ is continuous, $\nu_t^{\smash{\bar\boldsymbol{\xi}}^N}$ converges weakly to $\nu_t^{\boldsymbol{\xi}}$ for all $t \in [0, 1]$ with probability one. This implies in turn that $\nu_t^{\hspace{1pt}\smash{\bar\boldsymbol{\xi}}^N}\! \otimes dt$ converges weakly to $\nu_t^{\boldsymbol{\xi}}\! \otimes dt$ with probability one (DE Theorem A.5.8). Finally, $\mathscr{R}$ is lower semicontinuous (DE Lemma 1.4.3(b)) and $h$ is continuous. Thus DE Theorem A.3.13(b), an extension of Fatou's lemma, yields \[
\liminf_{N\rightarrow\infty}\mathbb{E}_{x^N}\paren{\mathscr{R}( \bar\lambda^N_t \otimes dt \,||\, \nu_t^{\hspace{1pt}\smash{\bar\boldsymbol{\xi}}^N}\! \otimes dt) +h(\smash{\hat{\boldsymbol{\xi}}}^{N})}
\geq \mathbb{E}_{x}\paren{\mathscr{R}( \lambda_t \otimes dt \,||\, \nu_t^{\boldsymbol{\xi}}\! \otimes dt) +h({\boldsymbol{\xi}})}. \]
To conclude, we argue as follows: \begin{align*} \liminf_{N\rightarrow\infty}V^{N}(x^{N})
&\geq \mathbb{E}_{x}\paren{\mathscr{R}( \lambda_t \otimes dt \,||\, \nu_t^{\boldsymbol{\xi}}\! \otimes dt) +h({\boldsymbol{\xi}})}\\
&=\mathbb{E}_x\paren{\int_{0}^{1}R\paren{\vphantom{I^N}{\lambda}_{t}\hspace{1pt}||\hspace{1pt} \nu(\cdot|\xi_{t})}\mathrm{d} t+h(\boldsymbol{\xi})}\\ &\geq\mathbb{E}_x\paren{\int_{0}^{1}L\left(\xi_{t},\sum_{\mathscr{z}\in\mathscr{Z}}\mathscr{z} \lambda_{t}(\mathscr{z})\right)\mathrm{d} t+h(\boldsymbol{\xi})}\\ &= \mathbb{E}_x\paren{\int_{0}^{1}L(\xi_{t},\dot{\xi}_{t})\,\mathrm{d} t+h(\boldsymbol{\xi})}\\ &\geq \inf_{\phi\in\mathscr{C}}\paren{c_{x}(\phi)+h(\phi)}. \end{align*} Here the second line follows from equation \eqref{eq:AnotherNu} and Lemma \ref{lem:RECR}, the third from representation \eqref{eq:CramerRep}, the fourth from Proposition \ref{prop:converge}(iii), and the fifth from the definition \eqref{eq:PathCost} of the cost function $c_x$.
Since every subsequence has a further subsequence that satisfies the last string of inequalities, the sequence as a whole must satisfy it as well. This establishes the upper bound \eqref{eq:LPUpper}.
\subsection{Proof of Proposition \ref{prop:interiorpaths2}}\label{sec:PfInteriorpaths2}
Finally, we prove Proposition \ref{prop:interiorpaths2}, adding only minor modifications to the proof of DE Lemma 6.5.5.
Fix an $\alpha > 0$ and an absolutely continuous path $\phi^\alpha\in\tilde\mathscr{C}$ such that $\phi^\alpha_{[0,\alpha]}$ solves \eqref{eq:MD} and $\phi^\alpha_{[\alpha,1]}\subset\Int(X)$. For $\beta\in(0,1)$ with $\frac{1}{\beta}\in\mathbb{N}$, we define the path $\phi^{\beta}\in\mathscr{C}^{\mathlarger *}$ as follows: On $[0, \alpha]$, $\phi^\beta$ agrees with $\phi^\alpha$. If $k \in \mathbb{Z}_+$ satisfies $\alpha + (k + 1)\beta \leq 1$, then for $t \in (\alpha +k\beta,\alpha +(k+1)\beta]$, we define the derivative $\dot\phi^\beta_t$ by \begin{align} \dot{\phi}^{\beta}_t&=\frac{1}{\beta}\int_{\alpha +k\beta}^{\alpha +(k+1)\beta}\dot{\phi}^\alpha_s\,\mathrm{d} s\notag\\ &=\frac{1}{\beta}(\phi^\alpha_{\alpha +(k+1)\beta}-\phi^\alpha_{\alpha +k\beta}).\label{eq:PWDef} \end{align} Similarly, if there is an $\ell \in \mathbb{Z}_+$ such that $\alpha +\ell\beta< 1<\alpha +(\ell+1)\beta$, then for $t \in (\alpha +\ell\beta, 1]$ we set $\dot\phi^\beta_t = \frac{1}{1-(\alpha +\ell\beta)}(\phi^\alpha_1-\phi^\alpha_{\alpha +\ell\beta})$. Evidently, $\phi^\beta \in \mathscr{C}^{\mathlarger *}$, and because $\phi^\alpha$ is absolutely continuous, applying the definition of the derivative to expression \eqref{eq:PWDef} shows that \begin{align*} \lim_{\beta\to 0}\dot{\phi}^{\beta}_t=\dot{\phi}^\alpha_t\;\text{ for almost every }t\in[0,1]. \end{align*}
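Equivalently, on $[\alpha, 1]$ the path $\phi^\beta$ is the piecewise linear interpolation of $\phi^\alpha$ at the knots $\alpha, \alpha+\beta, \alpha+2\beta, \ldots$ and $1$: the two paths agree at every knot, and $\phi^\beta$ has constant derivative between consecutive knots.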
We now prove that $\phi^\beta$ converges uniformly to $\phi^\alpha$. To start, note that $\phi^\beta_t = \phi^\alpha_t$ if $t \in [0, \alpha]$, if $t = \alpha + k \beta \leq 1$ for some $k \in \mathbb{Z}_+$, or if $t = 1$. To handle the in-between times, fix $\delta>0$, and choose $\beta$ small enough that $|\phi^\alpha_t - \phi^\alpha_s| \leq \frac\delta2$ whenever $|t-s|\leq \beta$. If $t \in (\alpha + k\beta, \alpha + (k+1)\beta)$, then \begin{align*}
|\phi^{\beta}_t-\phi^\alpha_t| =\abs{\int_{\alpha + k\beta}^{t}(\dot{\phi}^{\beta}_s-\dot{\phi}^\alpha_s)\,\mathrm{d} s} \leq \frac{t-(\alpha + k\beta)}{\beta}\abs{\phi^\alpha_{\alpha + (k+1)\beta}-\phi^\alpha_{\alpha + k\beta}}+\abs{\phi^\alpha_t-\phi^\alpha_{\alpha + k\beta}} \leq \delta. \end{align*}
A similar argument shows that $|\phi^{\beta}_t-\phi^\alpha_t|\leq \delta$ if $t \in (\alpha +\ell\beta, 1)$. Since $\delta > 0$ was arbitrary, we have established the claim.
Since $h$ is continuous, the uniform convergence of $\phi^\beta$ to $\phi^\alpha$ implies that $\lim_{\beta\to 0}h(\phi^\beta)=h(\phi^\alpha)$. Moreover, this uniform convergence, the convergence of the $Z$-valued functions $\dot\phi^\beta$ to $\dot\phi^\alpha$, the fact that $\phi^\alpha_{[\alpha,1]}\subset\Int(X)$, the continuity (and hence uniform continuity and boundedness) of $L(\cdot,\cdot)$ on closed subsets of $\Int(X)\times Z$ (see Proposition \ref{prop:Joint2}(i)), and the bounded convergence theorem imply that \[ \lim_{\beta\to 0}c_{x}(\phi^{\beta})=\lim_{\beta\to 0}\paren{\int_{0}^{\alpha}L(\phi^{\alpha}_t,\dot{\phi}^{\alpha}_t)\,\mathrm{d} t+\int_{\alpha}^{1}L(\phi^{\beta}_t,\dot{\phi}^{\beta}_t)\,\mathrm{d} t}=\int_{0}^{1}L(\phi^\alpha_t,\dot{\phi}^\alpha_t)\,\mathrm{d} t=c_{x}(\phi^\alpha). \] Since $\phi^\alpha$ was an arbitrary absolutely continuous path in $\tilde\mathscr{C}$, the proof is complete. \hspace{4pt}\ensuremath{\blacksquare}
\mybibliography
\end{document}
\begin{document}
\begin{center}\begin{large} New approach to Minkowski fractional inequalities using generalized k-fractional integral operator \end{large}\end{center} \begin{center}
Vaijanath L. Chinchane\\
Department of Mathematics,\\ Deogiri Institute of Engineering and Management\\ Studies Aurangabad-431005, INDIA\\
[email protected] \end{center} \begin{abstract}
In this paper, we obtain new results related to the Minkowski fractional integral inequality using the generalized k-fractional integral operator, which is defined in terms of the Gauss hypergeometric function. \end{abstract} \textbf{Keywords:} Minkowski fractional integral inequality, generalized k-fractional integral operator, Gauss hypergeometric function.\\ \textbf{Mathematics Subject Classification:} 26D10, 26A33, 05A30.\\ \section{Introduction}
\paragraph{} In the last decades, many researchers have worked on fractional integral inequalities using the Riemann-Liouville, generalized Riemann-Liouville, Hadamard and Saigo fractional integral operators; see \cite{C1,C2,C3,D1,D2,D3,D4}. W. Yang \cite{YA} proved Chebyshev- and Gr$\ddot{u}$ss-type integral inequalities for the Saigo fractional integral operator. S. Mubeen and S. Iqbal \cite{MU} proved Gr$\ddot{u}$ss-type integral inequalities for the generalized k-fractional integral. In \cite{BA1,C5,KI2,YI} the authors studied some fractional integral inequalities using the generalized k-fractional integral operator (in terms of the Gauss hypergeometric function). Recently, many researchers have developed fractional integral inequalities associated with hypergeometric functions; see \cite{SH1,KI2,P1,R1,S1,SA,V1,W1,YI}. Also, in \cite{C2,D1} the authors established reverse Minkowski fractional integral inequalities using the Hadamard and Riemann-Liouville integral operators, respectively. \paragraph{}In the literature, few results have been obtained on fractional integral inequalities using the Saigo fractional integral operator; see \cite{C4,K4,P1,P2,YI}. Motivated by \cite{C1,C5,D1,KI2}, our purpose in this paper is to establish some new results using the generalized k-fractional integral in terms of the Gauss hypergeometric function. The paper is organized as follows. In Section 2, we give the basic definitions related to the generalized k-fractional integral. In Section 3, we present reverse Minkowski fractional integral inequalities using the generalized k-fractional integral, and in Section 4 we give some other related inequalities. \section{Preliminaries} \paragraph{} In this section, we give some necessary definitions which will be used later. \begin{definition} \cite{KI2,YI} A function $f(x)$, defined for all $x>0$, is said to be in the space $L_{p,k}[0,\infty)$ if \begin{equation}
L_{p,k}[0,\infty)=\left\{f: \|f\|_{L_{p,k}[0,\infty)}=\left(\int_{0}^{\infty}|f(x)|^{p}x^{k}dx\right)^{\frac{1}{p}} < \infty,\ 1 \leq p < \infty,\ k \geq 0\right\}. \end{equation} \end{definition} \begin{definition} \cite{KI2,SAO,YI} Let $f \in L_{1,k}[0,\infty)$. The generalized Riemann-Liouville fractional integral $I^{\alpha,k}f(x)$ of order $\alpha>0$, $k \geq 0$, is defined by \begin{equation} I^{\alpha,k}f(x)= \frac{(k+1)^{1-\alpha}}{\Gamma (\alpha)}\int_{0}^{x}(x^{k+1}-t^{k+1})^{\alpha-1}t^{k} f(t)dt. \end{equation} \end{definition} \begin{definition} \cite{KI2,YI} Let $k\geq 0$, $\alpha>0$, $\mu >-1$ and $\beta, \eta \in \mathbb{R}$. The generalized k-fractional integral $I^{\alpha,\beta,\eta,\mu}_{x,k}$ (in terms of the Gauss hypergeometric function) of order $\alpha$ for a real-valued continuous function $f(t)$ is defined by \begin{equation} \begin{split} I^{\alpha,\beta,\eta,\mu}_{x,k}[f(x)]& = \frac{(k+1)^{\mu+\beta+1}x^{(k+1)(-\alpha-\beta-2\mu)}}{\Gamma (\alpha)}\int_{0}^{x}\tau^{(k+1)\mu}(x^{k+1}-\tau^{k+1})^{\alpha-1} \times \\ & _{2}F_{1} (\alpha+ \beta+\mu, -\eta; \alpha; 1-(\frac{\tau}{x})^{k+1})\tau^{k} f(\tau)d\tau. \end{split} \end{equation} \end{definition} Here the function $_{2}F_{1}(\cdot)$ on the right-hand side of (2.3) is the Gauss hypergeometric function defined by
\begin{equation}
_{2}F_{1} (a, b; c; x)=\sum_{n=0}^{\infty}\frac{(a)_{n}(b)_{n}}{(c)_{n}} \frac{x^{n}}{n!}, \end{equation} and $(a)_{n}$ is the Pochhammer symbol\\ $$(a)_{n}=a(a+1)\cdots(a+n-1)=\frac{\Gamma(a+n)}{\Gamma(a)}, \,\,\,(a)_{0}=1.$$ Consider the function \begin{equation} \begin{split} F(x,\tau)&= \frac{(k+1)^{\mu+\beta+1}x^{(k+1)(-\alpha-\beta-2\mu)}}{\Gamma (\alpha)}\tau^{(k+1)\mu}\\ &\quad\times(x^{k+1}-\tau^{k+1})^{\alpha-1} \, _{2}F_{1} (\alpha+ \beta+\mu, -\eta; \alpha; 1-(\tfrac{\tau}{x})^{k+1})\\ &=\sum_{n=0}^{\infty}\frac{(\alpha+\beta+\mu)_{n}(-\eta)_{n}}{\Gamma(\alpha+n)\,n!}(k+1)^{\mu+\beta+1}x^{(k+1)(-\alpha-\beta-2\mu-n)}\tau^{(k+1)\mu}(x^{k+1}-\tau^{k+1})^{\alpha-1+n}\\ &=\frac{\tau^{(k+1)\mu}(x^{k+1}-\tau^{k+1})^{\alpha-1}(k+1)^{\mu+\beta+1}}{x^{(k+1)(\alpha+\beta+2\mu)}\Gamma(\alpha)}\\ &\quad+\frac{\tau^{(k+1)\mu}(x^{k+1}-\tau^{k+1})^{\alpha}(k+1)^{\mu+\beta+1}(\alpha+\beta+\mu)(-\eta)}{x^{(k+1)(\alpha+\beta+2\mu+1)}\Gamma(\alpha+1)}\\ &\quad+\frac{\tau^{(k+1)\mu}(x^{k+1}-\tau^{k+1})^{\alpha+1}(k+1)^{\mu+\beta+1}(\alpha+\beta+\mu)(\alpha+\beta+\mu+1)(-\eta)(-\eta+1)}{x^{(k+1)(\alpha+\beta+2\mu+2)}\Gamma(\alpha+2)\,2!}+\cdots \end{split} \end{equation} It is clear that $F(x,\tau)$ is positive for all $\tau \in (0, x)$, $x>0$, since, under the parameter restrictions imposed in the theorems below, each term of the series in (2.5) is positive. \section{Reverse Minkowski fractional integral inequality } \paragraph{}In this section, we establish reverse Minkowski fractional integral inequalities using the generalized k-fractional integral operator (in terms of the Gauss hypergeometric function).
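\paragraph{} The operator in Definition 2.3 can also be approximated numerically by direct quadrature, which is convenient for sanity-checking the inequalities established below. The following Python sketch is one possible implementation (using SciPy); the function name, the test functions $f(t)=t+1$ and $g(t)=1$ (so that $m=1$ and $M=2$ on $(0,1)$), and the parameter values are chosen only for illustration and satisfy the restrictions appearing in the theorems. It evaluates $I^{\alpha,\beta,\eta,\mu}_{x,k}$ and prints both sides of inequality (3.1) with $p=2$.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, hyp2f1

def I_k_frac(f, x, alpha, beta, eta, mu, k):
    """Quadrature approximation of I^{alpha,beta,eta,mu}_{x,k}[f](x)
    from Definition 2.3 (illustrative sketch only)."""
    c = ((k + 1) ** (mu + beta + 1)
         * x ** ((k + 1) * (-alpha - beta - 2 * mu)) / gamma(alpha))
    def integrand(tau):
        return (tau ** ((k + 1) * mu)
                * (x ** (k + 1) - tau ** (k + 1)) ** (alpha - 1)
                * hyp2f1(alpha + beta + mu, -eta, alpha,
                         1.0 - (tau / x) ** (k + 1))
                * tau ** k * f(tau))
    val, _ = quad(integrand, 0.0, x)
    return c * val

# Parameters satisfying k >= 0, alpha > max(0, -beta - mu), beta < 1,
# mu > -1, beta - 1 < eta < 0.
pars = dict(x=1.0, alpha=1.5, beta=-0.5, eta=-0.2, mu=-0.3, k=1.0)
p, m, M = 2.0, 1.0, 2.0
lhs = (I_k_frac(lambda t: (t + 1) ** p, **pars) ** (1 / p)
       + I_k_frac(lambda t: 1.0, **pars) ** (1 / p))
rhs = ((1 + M * (m + 2)) / ((m + 1) * (M + 1))
       * I_k_frac(lambda t: (t + 2) ** p, **pars) ** (1 / p))
print(lhs, "<=", rhs)
\end{verbatim}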
\begin{theorem} Let $p\geq1$ and let $f$, $g$ be two positive functions on $[0, \infty)$ such that, for all $x>0$, $I^{\alpha,\beta,\eta,\mu}_{x,k}[f^{p}(x)]<\infty$ and $I^{\alpha,\beta,\eta,\mu}_{x,k}[g^{p}(x)]<\infty$. If $0<m\leq \frac{f(\tau)}{g(\tau)}\leq M$ for $\tau \in (0,x)$, then we have
\begin{equation} \left[I^{\alpha,\beta,\eta,\mu}_{x,k}[f^{p}(x)]\right]^{\frac{1}{p}}+\left[I^{\alpha,\beta,\eta,\mu}_{x,k}[g^{p}(x)]\right]^{\frac{1}{p}}\leq \frac{1+M(m+2)}{(m+1)(M+1)}\left[I^{\alpha,\beta,\eta,\mu}_{x,k}[(f+g)^{p}(x)]\right]^{\frac{1}{p}}, \end{equation}
for all $k \geq 0,$ $\alpha > \max\{0,-\beta-\mu\}$, $\beta < 1,$ $\mu >-1,$ $\beta -1< \eta <0.$
\end{theorem} \textbf{Proof}: Using the condition $\frac{f(\tau)}{g(\tau)}\leq M$, $\tau \in (0,x)$, $x>0$, we can write \begin{equation} (M+1)^{p}f^{p}(\tau)\leq M^{p}(f+g)^{p}(\tau). \end{equation} Multiplying both sides of (3.2) by $F(x,\tau)$ and integrating the resulting inequality with respect to $\tau$ from $0$ to $x$, we get \begin{equation} \begin{split} &(M+1)^{p}\frac{(k+1)^{\mu+\beta+1}x^{(k+1)(-\alpha-\beta-2\mu)}}{\Gamma (\alpha)}\int_{0}^{x}\tau^{(k+1)\mu}(x^{k+1}-\tau^{k+1})^{\alpha-1} \times \\ & _{2}F_{1} (\alpha+ \beta+\mu, -\eta; \alpha; 1-(\frac{\tau}{x})^{k+1})\tau^{k} f^{p}(\tau)d\tau\\ &\leq M^{p}\frac{(k+1)^{\mu+\beta+1}x^{(k+1)(-\alpha-\beta-2\mu)}}{\Gamma (\alpha)}\int_{0}^{x}\tau^{(k+1)\mu}(x^{k+1}-\tau^{k+1})^{\alpha-1} \times \\ & _{2}F_{1} (\alpha+ \beta+\mu, -\eta; \alpha; 1-(\frac{\tau}{x})^{k+1})\tau^{k} (f+g)^{p}(\tau)d\tau, \end{split} \end{equation} \noindent which is equivalent to \begin{equation} I^{\alpha,\beta,\eta,\mu}_{x,k}[f^{p}(x)] \leq \frac{M^{p}}{(M+1)^{p}} \left[I^{\alpha,\beta,\eta,\mu}_{x,k}[(f+g)^{p}(x)]\right], \end{equation} \noindent hence, we can write \begin{equation} \left[I^{\alpha,\beta,\eta,\mu}_{x,k}[f^{p}(x)] \right]^{\frac{1}{p}} \leq \frac{M}{(M+1)} \left[I^{\alpha,\beta,\eta,\mu}_{x,k}[(f+g)^{p}(x)]\right]^{\frac{1}{p}}. \end{equation} On the other hand, using the condition $m\leq \frac{f(\tau)}{g(\tau)}$, we obtain \begin{equation} (1+\frac{1}{m})g(\tau)\leq \frac{1}{m}(f(\tau)+g(\tau)), \end{equation} therefore, \begin{equation} (1+\frac{1}{m})^{p}g^{p}(\tau)\leq(\frac{1}{m})^{p}(f(\tau)+g(\tau))^{p}. \end{equation} Now, multiplying both sides of (3.7) by $F(x,\tau)$ ($\tau \in(0,x)$, $x>0$), where $F(x,\tau)$ is defined by (2.5), integrating the resulting inequality with respect to $\tau$ from $0$ to $x$, and then raising both sides to the power $\frac{1}{p}$, we have \begin{equation} \left[I^{\alpha,\beta,\eta,\mu}_{x,k}[g^{p}(x)]\right]^{\frac{1}{p}} \leq \frac{1}{(m+1)} \left[I^{\alpha,\beta,\eta,\mu}_{x,k}[(f+g)^{p}(x)]\right]^{\frac{1}{p}}. \end{equation} Inequality (3.1) now follows by adding the inequalities (3.5) and (3.8). \paragraph{}Our second result is as follows. \begin{theorem} Let $p\geq1$ and $f$, $g$ be two positive functions on $[0, \infty)$ such that, for all $x>0$, $I^{\alpha,\beta,\eta,\mu}_{x,k}[f^{p}(x)]<\infty$ and $I^{\alpha,\beta,\eta,\mu}_{x,k}[g^{p}(x)]<\infty$. If $0<m\leq \frac{f(\tau)}{g(\tau)}\leq M$ for $\tau \in (0,x)$, then we have \begin{equation} \begin{split} \left[I^{\alpha,\beta,\eta,\mu}_{x,k}[f^{p}(x)] \right]^{\frac{2}{p}}+\left[I^{\alpha,\beta,\eta,\mu}_{x,k}[g^{p}(x)] \right]^{\frac{2}{p}}\geq &\left(\frac{(M+1)(m+1)}{M}-2\right)\left[I^{\alpha,\beta,\eta,\mu}_{x,k}[f^{p}(x)] \right]^{\frac{1}{p}}\\ &\times\left[I^{\alpha,\beta,\eta,\mu}_{x,k}[g^{p}(x)] \right]^{\frac{1}{p}}, \end{split} \end{equation} for all $k \geq 0,$ $\alpha > \max\{0,-\beta-\mu\}$, $\beta < 1,$ $\mu >-1,$ $\beta -1< \eta <0.$ \end{theorem} \textbf{Proof}: Multiplying the inequalities (3.5) and (3.8), we obtain \begin{equation} \frac{(M+1)(m+1)}{M}\left[I^{\alpha,\beta,\eta,\mu}_{x,k}[f^{p}(x)]\right]^{\frac{1}{p}}\times \left[I^{\alpha,\beta,\eta,\mu}_{x,k}[g^{p}(x)]\right]^{\frac{1}{p}}\leq \left([I^{\alpha,\beta,\eta,\mu}_{x,k}[(f(x)+g(x))^{p}]]^{\frac{1}{p}}\right)^{2}. \end{equation} Applying the Minkowski inequality to the right-hand side of (3.10), we have
\begin{equation}
\left(\left[I^{\alpha,\beta,\eta,\mu}_{x,k}[(f(x)+g(x))^{p}]\right]^{\frac{1}{p}}\right)^{2}\leq \left(\left[I^{\alpha,\beta,\eta,\mu}_{x,k}[f^{p}(x)]\right]^{\frac{1}{p}}+\left[I^{\alpha,\beta,\eta,\mu}_{x,k}[g^{p}(x)]\right]^{\frac{1}{p}}\right)^{2}, \end{equation} which implies that \begin{equation} \begin{split}
\left[I^{\alpha,\beta,\eta,\mu}_{x,k}[(f(x)+g(x))^{p}]\right]^{\frac{2}{p}}\leq & \left[I^{\alpha,\beta,\eta,\mu}_{x,k}[f^{p}(x)]\right]^{\frac{2}{p}}+
\left[I^{\alpha,\beta,\eta,\mu}_{x,k}[g^{p}(x)]\right]^{\frac{2}{p}}\\
&+2\left[I^{\alpha,\beta,\eta,\mu}_{x,k}[f^{p}(x)]\right]^{\frac{1}{p}} \left[I^{\alpha,\beta,\eta,\mu}_{x,k}[g^{p}(x)]\right]^{\frac{1}{p}}. \end{split} \end{equation} Hence, from (3.10) and (3.12), we obtain (3.9). Theorem 3.2 is thus proved. \section{Other fractional integral inequalities related to the Minkowski inequality} \paragraph{}In this section, we establish some new integral inequalities related to the Minkowski inequality using the generalized k-fractional integral operator (in terms of the Gauss hypergeometric function). \begin{theorem} Let $p>1$, $\frac{1}{p}+\frac{1}{q}=1$, and let $f$, $g$ be two positive functions on $[0, \infty)$ such that $I^{\alpha,\beta,\eta,\mu}_{x,k}[f(x)]<\infty$ and $I^{\alpha,\beta,\eta,\mu}_{x,k}[g(x)]<\infty$. If $0<m\leq \frac{f(\tau)}{g(\tau)}\leq M < \infty$ for $\tau \in [0,x]$, then we have \begin{equation} \left[I^{\alpha,\beta,\eta,\mu}_{x,k}[f(x)]\right]^{\frac{1}{p}} \left[I^{\alpha,\beta,\eta,\mu}_{x,k}[g(x)]\right]^{\frac{1}{q}} \leq \left(\frac{M}{m}\right)^{\frac{1}{pq}}\left[I^{\alpha,\beta,\eta,\mu}_{x,k}[[f(x)]^{\frac{1}{p}}[g(x)]^{\frac{1}{q}}]\right], \end{equation}
for all $k \geq 0,$ $\alpha > max\{0,-\beta-\mu\}$, $\beta < 1,$ $\mu >-1,$ $\beta -1< \eta <0.$ \end{theorem} \textbf{Proof:-} Since $\frac{f(\tau)}{g(\tau)}\leq M $, $\tau \in[0,x]$ $x> 0$, therefore \\ \begin{equation} [g(\tau)]^{\frac{1}{p}}\geq M^{\frac{-1}{q}}[f(\tau)]^{\frac{1}{q}}, \end{equation} and also, \begin{equation} \begin{split} [f(\tau)]^{\frac{1}{p}}[g(\tau)]^{\frac{1}{q}}&\geq M^{\frac{-1}{q}}[f(\tau)]^{\frac{1}{q}}[f(\tau)]^{\frac{1}{p}}\\ &\geq M^{\frac{-1}{q}}[f(\tau)]^{\frac{1}{q}+\frac{1}{q}}\\ &\geq M^{\frac{-1}{q}}[f(\tau)]. \end{split} \end{equation} Multiplying both side of (4.3) by $F(x,\tau)$, ( $\tau \in(0,x)$, $x>0$), where $F(x,\tau)$ is defined by (2.5). Then integrating resulting identity with respect to $\tau$ from $0$ to $x$, we have \begin{equation} \begin{split} &\frac{(k+1)^{\mu+\beta+1}x^{(k+1)(-\alpha-\beta-2\mu)}}{\Gamma (\alpha)}\int_{0}^{x}\tau^{(k+1)\mu}(x^{k+1}-\tau^{k+1})^{\alpha-1} \times \\ & _{2}F_{1} (\alpha+ \beta+\mu, -\eta; \alpha; 1-(\frac{\tau}{x})^{k+1})\tau^{k} f(\tau)^{\frac{1}{p}}g(\tau)^{\frac{1}{q}}d\tau \\ &\leq M^{\frac{-1}{q}}\frac{(k+1)^{\mu+\beta+1}x^{(k+1)(-\alpha-\beta-2\mu)}}{\Gamma (\alpha)}\int_{0}^{x}\tau^{(k+1)\mu}(x^{k+1}-\tau^{k+1})^{\alpha-1} \times \\ & _{2}F_{1} (\alpha+ \beta+\mu, -\eta; \alpha; 1-(\frac{\tau}{x})^{k+1})\tau^{k} f(\tau)d\tau, \end{split} \end{equation} which implies that \begin{equation} I^{\alpha,\beta,\eta,\mu}_{x,k}\left[[f(x)]^{\frac{1}{p}}[g(x)]^{\frac{1}{q}} \right] \leq M^{\frac{-1}{q}} \left[I^{\alpha,\beta,\eta,\mu}_{x,k}f(x)\right]. \end{equation} Consequently, \begin{equation} \left(I^{\alpha,\beta,\eta,\mu}_{x,k}\left[[f(x)]^{\frac{1}{p}}[g(x)]^{\frac{1}{q}} \right]\right)^{\frac{1}{p}} \leq M^{\frac{-1}{pq}} \left[I^{\alpha,\beta,\eta,\mu}_{x,k}f(x)\right]^{\frac{1}{p}}, \end{equation} on other hand, since $m g(\tau)\leq f(\tau)$, \, $\tau \in[0,x)$, $x>0$, then we have \begin{equation} [f(\tau)]^{\frac{1}{p}}\geq m^{\frac{1}{p}}[g(\tau)]^{\frac{1}{p}}, \end{equation} multiplying equation (4.7) by $[g(\tau)]^{\frac{1}{q}}$, we have \begin{equation} [f(\tau)]^{\frac{1}{p}}[g(\tau)]^{\frac{1}{q}}\geq m^{\frac{1}{p}}[g(\tau)]^{\frac{1}{q}}[g(\tau)]^{\frac{1}{p}}= m^{\frac{1}{p}}[g(\tau)]. \end{equation} Multiplying both side of (4.8) by $F(x,\tau)$, ( $\tau \in(0,x)$, $x>0$), where $F(x,\tau)$ is defined by (2.5). Then integrating resulting identity with respect to $\tau$ from $0$ to $x$, we have \begin{equation} \begin{split} &\frac{(k+1)^{\mu+\beta+1}x^{(k+1)(-\alpha-\beta-2\mu)}}{\Gamma (\alpha)}\int_{0}^{x}\tau^{(k+1)\mu}(x^{k+1}-\tau^{k+1})^{\alpha-1} \times \\ & _{2}F_{1} (\alpha+ \beta+\mu, -\eta; \alpha; 1-(\frac{\tau}{x})^{k+1})\tau^{k} f(\tau)^{\frac{1}{p}}g(\tau)^{\frac{1}{q}}d\tau \\ &\leq M^{\frac{1}{p}}\frac{(k+1)^{\mu+\beta+1}x^{(k+1)(-\alpha-\beta-2\mu)}}{\Gamma (\alpha)}\int_{0}^{x}\tau^{(k+1)\mu}(x^{k+1}-\tau^{k+1})^{\alpha-1} \times \\ & _{2}F_{1} (\alpha+ \beta+\mu, -\eta; \alpha; 1-(\frac{\tau}{x})^{k+1})\tau^{k} g(\tau)d\tau, \end{split} \end{equation}
which implies that
\begin{equation}
I^{\alpha,\beta,\eta,\mu}_{x,k}\left[[f(x)]^{\frac{1}{p}}[g(x)]^{\frac{1}{q}} \right] \geq m^{\frac{1}{p}} \left[I^{\alpha,\beta,\eta,\mu}_{x,k}g(x)\right].
\end{equation}
Hence, we can write
\begin{equation}
\left(I^{\alpha,\beta,\eta,\mu}_{x,k}\left[[f(x)]^{\frac{1}{p}}[g(x)]^{\frac{1}{q}} \right]\right)^{\frac{1}{q}} \geq m^{\frac{1}{pq}} \left[I^{\alpha,\beta,\eta,\mu}_{x,k}g(x)\right]^{\frac{1}{q}}.
\end{equation}
Multiplying (4.6) and (4.11), we obtain the result (4.1). This completes the proof.
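\paragraph{} As an informal numerical illustration (not part of the proof), one can evaluate the operator $I^{\alpha,\beta,\eta,\mu}_{x,k}$ by direct quadrature of the kernel $F(x,\tau)$ used above and test inequality (4.1) on sample data. In the following Python sketch the helper name \texttt{frac\_integral}, the sample functions $f(\tau)=2+\tau$ and $g(\tau)=1+\tau$ (so that $m=3/2$, $M=2$ on $[0,1]$), the choice $p=q=2$ and the particular admissible parameter values are our own assumptions; SciPy is assumed to be available.
\begin{verbatim}
# Minimal numerical sanity check of inequality (4.1); assumes SciPy is installed.
from scipy.integrate import quad
from scipy.special import gamma, hyp2f1

def frac_integral(f, x, alpha, beta, eta, mu, k):
    """Evaluate the generalized k-fractional integral of f by quadrature of the kernel."""
    pref = (k + 1) ** (mu + beta + 1) * x ** ((k + 1) * (-alpha - beta - 2 * mu)) / gamma(alpha)
    def integrand(tau):
        return (tau ** ((k + 1) * mu)
                * (x ** (k + 1) - tau ** (k + 1)) ** (alpha - 1)
                * hyp2f1(alpha + beta + mu, -eta, alpha, 1.0 - (tau / x) ** (k + 1))
                * tau ** k * f(tau))
    value, _ = quad(integrand, 0.0, x)
    return pref * value

p = q = 2.0
m, M = 1.5, 2.0                       # bounds of f/g for the sample functions below
f = lambda t: 2.0 + t
g = lambda t: 1.0 + t
par = dict(x=1.0, alpha=1.5, beta=-0.5, eta=-0.1, mu=0.0, k=1)   # admissible parameters

lhs = frac_integral(f, **par) ** (1 / p) * frac_integral(g, **par) ** (1 / q)
rhs = (M / m) ** (1 / (p * q)) * frac_integral(lambda t: f(t) ** (1 / p) * g(t) ** (1 / q), **par)
print(lhs, "<=", rhs, lhs <= rhs)     # expected: True (up to numerical error)
\end{verbatim}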
\begin{theorem} Let $p>1$, $\frac{1}{p}+\frac{1}{q}=1$, and let $f$ and $g$ be two positive functions on $[0, \infty)$ such that $I^{\alpha,\beta,\eta,\mu}_{x,k}[f^{p}(x)]<\infty$ and $I^{\alpha,\beta,\eta,\mu}_{x,k}[g^{q}(x)]<\infty$ for $x>0$. If $0<m\leq \frac{f(\tau)^{p}}{g(\tau)^{q}}\leq M < \infty$ for all $\tau \in [0,x]$, then we have
\begin{equation*} \left[I^{\alpha,\beta,\eta,\mu}_{x,k}f^{p}(x)\right]^{\frac{1}{p}} \left[I^{\alpha,\beta,\eta,\mu}_{x,k}g^{q}(x)\right]^{\frac{1}{q}}\leq (\frac{M}{m})^{\frac{1}{pq}}\left[I^{\alpha,\beta,\eta,\mu}_{x,k}(f(x)g(x))\right] \end{equation*} for all $k \geq 0$, $\alpha > \max\{0,-\beta-\mu\}$, $\beta < 1$, $\mu >-1$, $\beta -1< \eta <0$. \end{theorem} \textbf{Proof:-} Replacing $f(\tau)$ and $g(\tau)$ by $f(\tau)^{p}$ and $g(\tau)^{q}$, $\tau \in [0,x]$, $x>0$, in Theorem 4.1, we obtain the required inequality. \paragraph{} We now present a fractional integral inequality related to the Minkowski inequality. \begin{theorem} Let $f$ and $g$ be two positive integrable functions on $[0, \infty)$ such that $\frac{1}{p}+\frac{1}{q}=1$, $p>1$, and $0<m<\frac{f(\tau)}{g(\tau)}<M$ for all $\tau \in (0,x)$, $x>0$. Then we have \begin{equation} I^{\alpha,\beta,\eta,\mu}_{x,k}[f(x)g(x)]\leq \frac{2^{p-1}M^{p}}{p(M+1)^{p}}\left(I^{\alpha,\beta,\eta,\mu}_{x,k}[f^{p}+g^{p}](x)\right)+\frac{2^{q-1}}{q(m+1)^{q}}\left(I^{\alpha,\beta,\eta,\mu}_{x,k}[f^{q}+g^{q}](x)\right), \end{equation}
for all $k \geq 0$, $\alpha > \max\{0,-\beta-\mu\}$, $\beta < 1$, $\mu >-1$, $\beta -1< \eta <0$. \end{theorem} \textbf{Proof:-} Since $\frac{f(\tau)}{g(\tau)}<M$ for $\tau \in (0,x)$, $x>0$, we have \begin{equation} (M+1)f(\tau)\leq M(f+g)(\tau). \end{equation} Taking the $p$-th power on both sides, multiplying the resulting inequality by $F(x,\tau)$ and integrating with respect to $\tau$ from $0$ to $x$, we obtain \begin{equation} \begin{split} &(M+1)^{p}\frac{(k+1)^{\mu+\beta+1}x^{(k+1)(-\alpha-\beta-2\mu)}}{\Gamma (\alpha)}\int_{0}^{x}\tau^{(k+1)\mu}(x^{k+1}-\tau^{k+1})^{\alpha-1} \times \\ & _{2}F_{1} (\alpha+ \beta+\mu, -\eta; \alpha; 1-(\frac{\tau}{x})^{k+1})\tau^{k} f^{p}(\tau)d\tau\\
&\leq M^{p} \frac{(k+1)^{\mu+\beta+1}x^{(k+1)(-\alpha-\beta-2\mu)}}{\Gamma (\alpha)}\int_{0}^{x}\tau^{(k+1)\mu}(x^{k+1}-\tau^{k+1})^{\alpha-1} \times \\ & _{2}F_{1} (\alpha+ \beta+\mu, -\eta; \alpha; 1-(\frac{\tau}{x})^{k+1})\tau^{k} (f+g)^{p}(\tau)d\tau, \end{split} \end{equation} therefore, \begin{equation} I^{\alpha,\beta,\eta,\mu}_{x,k}[f^{p}(x)]\leq \frac{M^{p}}{(M+1)^{p}}I^{\alpha,\beta,\eta,\mu}_{x,k}[(f+g)^{p}(x)]. \end{equation} On the other hand, since $0<m<\frac{f(\tau)}{g(\tau)}$ for $\tau \in (0,x)$, $x>0$, we can write \begin{equation} (m+1)g(\tau)\leq (f+g)(\tau), \end{equation} and therefore, taking the $q$-th power on both sides, multiplying by $F(x,\tau)$ and integrating as above, \begin{equation} \begin{split} &(m+1)^{q}\frac{(k+1)^{\mu+\beta+1}x^{(k+1)(-\alpha-\beta-2\mu)}}{\Gamma (\alpha)}\int_{0}^{x}\tau^{(k+1)\mu}(x^{k+1}-\tau^{k+1})^{\alpha-1} \times \\ & _{2}F_{1} (\alpha+ \beta+\mu, -\eta; \alpha; 1-(\frac{\tau}{x})^{k+1})\tau^{k} g^{q}(\tau)d\tau\\
&\leq \frac{(k+1)^{\mu+\beta+1}x^{(k+1)(-\alpha-\beta-2\mu)}}{\Gamma (\alpha)}\int_{0}^{x}\tau^{(k+1)\mu}(x^{k+1}-\tau^{k+1})^{\alpha-1} \times \\ & _{2}F_{1} (\alpha+ \beta+\mu, -\eta; \alpha; 1-(\frac{\tau}{x})^{k+1})\tau^{k} (f+g)^{q}(\tau)d\tau, \end{split} \end{equation} consequently, we have \begin{equation} I^{\alpha,\beta,\eta,\mu}_{x,k}[g^{q}(x)]\leq \frac{1}{(m+1)^{q}}I^{\alpha,\beta,\eta,\mu}_{x,k}[(f+g)^{q}(x)]. \end{equation} Now, by the Young inequality, \begin{equation} f(\tau)g(\tau)\leq \frac{f^{p}(\tau)}{p}+\frac{g^{q}(\tau)}{q}. \end{equation} Multiplying both sides of (4.19) by $F(x,\tau)$, which is positive because $\tau \in(0,x)$, $x>0$, and then integrating the resulting inequality with respect to $\tau$ from $0$ to $x$, we get \begin{equation} I^{\alpha,\beta,\eta,\mu}_{x,k}[f(x)g(x)]\leq \frac{1}{p}\,I^{\alpha,\beta,\eta,\mu}_{x,k}[f^{p}(x)]+\frac{1}{q}\,I^{\alpha,\beta,\eta,\mu}_{x,k}[g^{q}(x)]. \end{equation} From (4.15), (4.18) and (4.20) we get \begin{equation} I^{\alpha,\beta,\eta,\mu}_{x,k}[f(x)g(x)]\leq \frac{M^{p}}{p(M+1)^{p}}\,I^{\alpha,\beta,\eta,\mu}_{x,k}[(f+g)^{p}(x)]+\frac{1}{q(m+1)^{q}}\,I^{\alpha,\beta,\eta,\mu}_{x,k}[(f+g)^{q}(x)]. \end{equation} Now, using the inequality $(a+b)^{r}\leq 2^{r-1}(a^{r}+b^{r})$, $r>1$, $a,b \geq 0$, we have \begin{equation} I^{\alpha,\beta,\eta,\mu}_{x,k}[(f+g)^{p}(x)] \leq 2^{p-1}I^{\alpha,\beta,\eta,\mu}_{x,k}[(f^{p}+g^{p})(x)], \end{equation} and \begin{equation} I^{\alpha,\beta,\eta,\mu}_{x,k}[(f+g)^{q}(x)] \leq 2^{q-1}I^{\alpha,\beta,\eta,\mu}_{x,k}[(f^{q}+g^{q})(x)]. \end{equation} Injecting (4.22) and (4.23) in (4.21), we get the required inequality (4.12). This completes the proof.
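\paragraph{} In the same spirit as before, inequality (4.12) can be tested numerically. The short Python fragment below is again only an illustrative sketch: the operator is evaluated by direct quadrature of the kernel, and the sample data $f(\tau)=2+\tau$, $g(\tau)=1+\tau$, $p=q=2$, $m=3/2$, $M=2$ together with the admissible parameter values are assumptions made here, not taken from the text.
\begin{verbatim}
# Numerical sanity check of inequality (4.12) with p = q = 2; assumes SciPy.
from scipy.integrate import quad
from scipy.special import gamma, hyp2f1

def I_op(f, x, alpha, beta, eta, mu, k):
    # generalized k-fractional integral of f, evaluated by quadrature of the kernel
    pref = (k + 1) ** (mu + beta + 1) * x ** ((k + 1) * (-alpha - beta - 2 * mu)) / gamma(alpha)
    kern = lambda t: (t ** ((k + 1) * mu) * (x ** (k + 1) - t ** (k + 1)) ** (alpha - 1)
                      * hyp2f1(alpha + beta + mu, -eta, alpha, 1.0 - (t / x) ** (k + 1)) * t ** k)
    return pref * quad(lambda t: kern(t) * f(t), 0.0, x)[0]

p = q = 2.0
m, M = 1.5, 2.0
f = lambda t: 2.0 + t
g = lambda t: 1.0 + t
par = dict(x=1.0, alpha=1.5, beta=-0.5, eta=-0.1, mu=0.0, k=1)

lhs = I_op(lambda t: f(t) * g(t), **par)
rhs = (2 ** (p - 1) * M ** p / (p * (M + 1) ** p) * I_op(lambda t: f(t) ** p + g(t) ** p, **par)
       + 2 ** (q - 1) / (q * (m + 1) ** q) * I_op(lambda t: f(t) ** q + g(t) ** q, **par))
print(lhs, "<=", rhs, lhs <= rhs)    # expected: True (up to numerical error)
\end{verbatim}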
\begin{theorem} Let $f$, $g$ be two positive functions on $[0, \infty)$ such that $f$ is non-decreasing and $g$ is non-increasing. Then, for any $\gamma>0$, $\delta>0$,
\begin{equation}
\begin{split}
I^{\alpha,\beta,\eta,\mu}_{x,k}[f^{\gamma}(x) g^{\delta}(x)]&\leq (k+1)^{-\mu-\beta}x^{(k+1)(\mu+\beta)}\frac{\Gamma(1-\beta)\Gamma(1+\mu+\eta+1)}{\Gamma(1-\beta+\eta)\Gamma(\mu+1)} \\
&\times I^{\alpha,\beta,\eta,\mu}_{x,k}[f^{\gamma}(x)]I^{\alpha,\beta,\eta,\mu}_{x,k}[g^{\delta}(x)],
\end{split}
\end{equation}
for all $k \geq 0$, $\alpha > \max\{0,-\beta-\mu\}$, $\beta < 1$, $\mu >-1$, $\beta -1< \eta <0$. \end{theorem} \textbf{Proof:-} Let $\tau,\rho \in [0,x]$, $x>0$. For any $\delta>0$, $\gamma>0$, we have
\begin{equation}
\left(f^{\gamma}(\tau)-f^{\gamma}(\rho)\right)\left(g^{\delta}(\rho)-g^{\delta}(\tau)\right) \geq 0,
\end{equation}
that is,
\begin{equation}
f^{\gamma}(\tau)g^{\delta}(\rho)-f^{\gamma}(\tau)g^{\delta}(\tau)- f^{\gamma}(\rho)g^{\delta}(\rho)+f^{\gamma}(\rho)g^{\delta}(\tau) \geq 0,
\end{equation}
and therefore
\begin{equation}
f^{\gamma}(\tau)g^{\delta}(\tau)+f^{\gamma}(\rho)g^{\delta}(\rho)\leq f^{\gamma}(\tau)g^{\delta}(\rho)+f^{\gamma}(\rho)g^{\delta}(\tau).
\end{equation}
Now, multiplying both sides of (4.27) by $F(x,\tau)$ ($\tau \in(0,x)$, $x>0$), where $F(x,\tau)$ is defined by (2.5), and then integrating the resulting inequality with respect to $\tau$ from $0$ to $x$, we have
\begin{equation}
\begin{split}
&\frac{(k+1)^{\mu+\beta+1}x^{(k+1)(-\alpha-\beta-2\mu)}}{\Gamma (\alpha)}\int_{0}^{x}\tau^{(k+1)\mu}(x^{k+1}-\tau^{k+1})^{\alpha-1} \times \\ & _{2}F_{1} (\alpha+ \beta+\mu, -\eta; \alpha; 1-(\frac{\tau}{x})^{k+1})\tau^{k}[f^{\gamma}(\tau)g^{\delta}(\tau)]d\tau\\
&+ f^{\gamma}(\rho)g^{\delta}(\rho)\frac{(k+1)^{\mu+\beta+1}x^{(k+1)(-\alpha-\beta-2\mu)}}{\Gamma (\alpha)}\int_{0}^{x}\tau^{(k+1)\mu}(x^{k+1}-\tau^{k+1})^{\alpha-1} \times \\ & _{2}F_{1} (\alpha+ \beta+\mu, -\eta; \alpha; 1-(\frac{\tau}{x})^{k+1})\tau^{k}[1]d\tau \\
&\leq g^{\delta}(\rho)\frac{(k+1)^{\mu+\beta+1}x^{(k+1)(-\alpha-\beta-2\mu)}}{\Gamma (\alpha)}\int_{0}^{x}\tau^{(k+1)\mu}(x^{k+1}-\tau^{k+1})^{\alpha-1} \times \\ & _{2}F_{1} (\alpha+ \beta+\mu, -\eta; \alpha; 1-(\frac{\tau}{x})^{k+1})\tau^{k}f^{\gamma}(\tau)d\tau\\
&+f^{\gamma}(\rho)\frac{(k+1)^{\mu+\beta+1}x^{(k+1)(-\alpha-\beta-2\mu)}}{\Gamma (\alpha)}\int_{0}^{x}\tau^{(k+1)\mu}(x^{k+1}-\tau^{k+1})^{\alpha-1} \times \\ & _{2}F_{1} (\alpha+ \beta+\mu, -\eta; \alpha; 1-(\frac{\tau}{x})^{k+1})\tau^{k} g^{\delta}(\tau)d\tau,
\end{split}
\end{equation}
that is,
\begin{equation}
\begin{split}
&I^{\alpha,\beta,\eta,\mu}_{x,k}[f^{\gamma}(x)g^{\delta}(x)]+f^{\gamma}(\rho)g^{\delta}(\rho)I^{\alpha,\beta,\eta,\mu}_{x,k}[1]\\
&\leq g^{\delta}(\rho)I^{\alpha,\beta,\eta,\mu}_{x,k}[f^{\gamma}(x)]+f^{\gamma}(\rho)I^{\alpha,\beta,\eta,\mu}_{x,k}[g^{\delta}(x)].
\end{split}
\end{equation}
Again, multiplying both sides of (4.29) by $F(x,\rho)$ ($\rho \in(0,x)$, $x>0$), where $F(x,\rho)$ is defined by (2.5), and then integrating the resulting inequality with respect to $\rho$ from $0$ to $x$, we have
\begin{equation*}
\begin{split}
&I^{\alpha,\beta,\eta,\mu}_{x,k}[f^{\gamma}(x)g^{\delta}(x)]I^{\alpha,\beta,\eta,\mu}_{x,k}[1]+I^{\alpha,\beta,\eta,\mu}_{x,k}[f^{\gamma}(x)g^{\delta}(x)] I^{\alpha,\beta,\eta,\mu}_{x,k}[1]\\
&\leq I^{\alpha,\beta,\eta,\mu}_{x,k}[g^{\delta}(x)]I^{\alpha,\beta,\eta,\mu}_{x,k}[f^{\gamma}(x)] +I^{\alpha,\beta,\eta,\mu}_{x,k}[f^{\gamma}(x)]I^{\alpha,\beta,\eta,\mu}_{x,k}[g^{\delta}(x)],
\end{split}
\end{equation*}
so that we can write
\begin{equation*}
2I^{\alpha,\beta,\eta,\mu}_{x,k}[f^{\gamma}(x)g^{\delta}(x)] \leq \frac{2}{I^{\alpha,\beta,\eta,\mu}_{x,k}[1]}\, I^{\alpha,\beta,\eta,\mu}_{x,k}[f^{\gamma}(x)]I^{\alpha,\beta,\eta,\mu}_{x,k}[g^{\delta}(x)].
\end{equation*}
This proves the result (4.24).\\
\textbf{Competing interests}\\
The authors declare that they have no competing interests.
\end{document}
\begin{document}
\begin{abstract} In this paper, we compute the number of two-term tilting complexes for an arbitrary symmetric algebra with radical cube zero over an algebraically closed field. Firstly, we give a complete list of symmetric algebras with radical cube zero having only finitely many isomorphism classes of two-term tilting complexes in terms of their associated graphs. Secondly, we enumerate the number of two-term tilting complexes for each case in the list. \end{abstract}
\maketitle
\section{Introduction} Tilting theory plays an important role in the study of many areas of mathematics. A central notion of tilting theory is a tilting complex which is a generalization of a progenerator in Morita theory. Indeed, its endomorphism algebra is derived equivalent to the original algebra \cite{Rickard89der}. Hence it is a natural problem to give a classification of tilting complexes for a given algebra.
In this paper, we study a classification of two-term tilting complexes for an arbitrary symmetric algebra with radical cube zero over an algebraically closed field $\mathbf{k}$. Symmetric algebras with radical cube zero have been studied by Okuyama \cite{Okuyama86}, Benson \cite{Benson08} and Erdmann--Solberg \cite{ES}, and also appear in several areas such as \cite{CL,HK,Seidel08}. Recently, Green--Schroll \cite{GSa} showed that this class is precisely the Brauer configuration algebras with radical cube zero.
The study of symmetric algebras $A$ with radical cube zero can be reduced to that of algebras with radical square zero. For example, as an application of $\tau$-tilting theory (\cite{AIR}), we find in Proposition \ref{reduction} that the functor $-\otimes_A A/\operatorname{soc}\nolimits A$ gives a bijection \begin{align} \mathop{2\text{-}\mathsf{tilt}}\nolimits A \longrightarrow \mathop{2\text{-}\mathsf{silt}}\nolimits \, (A/\operatorname{soc}\nolimits A). \notag \end{align} Here, we denote by $\mathop{2\text{-}\mathsf{tilt}}\nolimits A$ (respectively, $\mathop{2\text{-}\mathsf{silt}}\nolimits A$) the set of isomorphism classes of basic two-term tilting (respectively two-term silting) complexes for $A$. Notice that tilting complexes coincide with silting complexes for a symmetric algebra $A$ (\cite[Example 2.8]{AI}).
Two-term silting theory (or equivalently, $\tau$-tilting theory) for algebras with radical square zero is studied in \cite{Adachi16b,Aoki18,Zhang13}. The first author (\cite{Adachi16b}) gives a characterization of algebras with radical square zero which are $\tau$-tilting finite (i.e., having only finitely many isomorphism classes of basic two-term silting complexes) by using the notion of single quivers, see Proposition \ref{RSZ}(2). Using this result, we give a complete list of $\tau$-tilting finite symmetric algebras with radical cube zero as follows.
Now, let $A$ be a basic connected finite dimensional symmetric $\mathbf{k}$-algebra with radical cube zero. Let $Q$ be the Gabriel quiver of $A$ and $Q^{\circ}$ the quiver obtained from $Q$ by deleting all loops. We show in Definition-Proposition \ref{graphA} that $Q^{\circ}$ is the double quiver $Q_G$ (see Definition \ref{def:double quiver}) of a finite connected (undirected) graph $G$ with no loops, i.e., $Q^{\circ}=Q_G$. We call $G$ the graph of $A$.
\begin{theorem} \label{theorem1} Let $A$ be a basic connected finite dimensional symmetric $\mathbf{k}$-algebra with radical cube zero. Then the following conditions are equivalent. \begin{enumerate}[\rm (1)] \item $A$ is $\tau$-tilting finite (or equivalently, $\mathop{2\text{-}\mathsf{tilt}}\nolimits A$ is finite). \item The graph of $A$ is one of graphs in the following list. \end{enumerate}
$$ \begin{xy} (6, -4)*{(\mathbb{A}_n)}, (-2,-12)*+{1}="A1", (6,-12)*+{2}="A2", (25, -12)*+{n}="An", (15, -12)*{\cdots}="dot", { "A1" \ar@{-} "A2"}, {"A2" \ar@{-} (11,-12)}, { (19, -12) \ar@{-} "An" }, (48, -4)*{(\mathbb{D}_n)\ 4 \leq n}, (33,-7)*+{1}="D1", (33, -17)*+{2}="D2", (41,-12)*+{3}="D3", (60, -12)*+{n}="Dn", (50, -12)*{\cdots}="dot", { "D1" \ar@{-} "D3"}, { "D2" \ar@{-} "D3"}, {"D3" \ar@{-} (46, -12)}, { (54, -12) \ar@{-} "Dn"}, (78, -4)*{(\mathbb{E}_6)}, (70, -14)*+{1}="E61", (78, -14)*+{2}="E62", (86, -14)*+{3}="E63", (94, -14)*+{5}="E64", (102, -14)*+{6}="E65", (86, -6)*+{4}="E66", {"E61" \ar@{-} "E62"}, {"E62" \ar@{-} "E63"}, {"E63" \ar@{-} "E64"}, {"E64" \ar@{-} "E65"}, {"E63" \ar@{-} "E66"}, (6, -20)*{(\mathbb{E}_7)}, (-2, -31)*+{1}="E71", (6, -31)*+{2}="E72", (14, -31)*+{3}="E73", (22, -31)*+{5}="E74", (30, -31)*+{6}="E75", (38, -31)*+{7}="E76", (14, -23)*+{4}="E77", {"E71" \ar@{-} "E72"}, {"E72" \ar@{-} "E73"}, {"E73" \ar@{-} "E74"}, {"E74" \ar@{-} "E75"}, {"E75" \ar@{-} "E76"}, {"E73" \ar@{-} "E77"}, (58, -20)*{(\mathbb{E}_8)}, (50, -31)*+{1}="E81", (58, -31)*+{2}="E82", (66, -31)*+{3}="E83", (74, -31)*+{5}="E84", (82, -31)*+{6}="E85", (90, -31)*+{7}="E86", (98, -31)*+{8}="E87", (66, -23)*+{4}="E88", {"E81" \ar@{-} "E82"}, {"E82" \ar@{-} "E83"}, {"E83" \ar@{-} "E84"}, {"E84" \ar@{-} "E85"}, {"E85" \ar@{-} "E86"}, {"E86" \ar@{-} "E87"}, {"E83" \ar@{-} "E88"}, (125, -4)*{(\widetilde{\mathbb{A}}_{n-1}) \ n\colon \mathrm{odd}}, (125, -12)*+{1}="b1", (115, -18)*+{2}="b2", (115, -28)*+{3}="b3",(125, -32)*+{4}="b4", (135, -18)*+{n}="bn", (135, -26)*{\rotatebox{90}{$\cdots$}}="ddot", {"b1" \ar@{-} "b2"}, {"b2" \ar@{-} "b3"}, {"b3" \ar@{-} "b4"}, {"b1" \ar@{-} "bn"}, {"bn" \ar@{-} (135, -22)}, { (133, -30) \ar@{-} "b4"}, \end{xy}
\noindent $$
$$ \begin{xy} (26, 0)*{({\rm I}_n) \ 4 \leq n}, (28, -7)*+{1}="c1", (21,-15)*+{2}="c2", (36, -15)*+{3}="c3", (36, -23)*+{4}="c4", (36, -28)="c5", (36, -35)="c6", (36.5, -31)*{\rotatebox{90}{$\cdots$}}="ddot", (36, -39)*+{n}="c7", {"c1" \ar@{-} "c2"}, {"c1" \ar@{-} "c3"}, {"c2" \ar@{-} "c3"}, {"c3" \ar@{-} "c4"}, {"c4" \ar@{-} "c5"}, {"c6" \ar@{-} "c7"}, (53, 0)*{({\rm II}_n) \ 5 \leq n \leq8}, (52, -7)*+{1}="d1", (45,-15)*+{2}="d2", (59, -15)*+{3}="d3", (45, -23)*+{4}="d4", (59, -23)*+{5}="d5", (59, -28)="d6", (59, -35)="d7", (59.5, -31)*{\rotatebox{90}{$\cdots$}}="edot", (59, -39)*+{n}="d8", {"d1" \ar@{-} "d2"}, {"d1" \ar@{-} "d3"}, {"d2" \ar@{-} "d3"}, {"d2" \ar@{-} "d4"}, {"d3" \ar@{-} "d5"}, {"d5" \ar@{-} "d6"}, {"d7" \ar@{-} "d8"}, (72, 0)*{({\rm III})}, (79,-7)*+{1}="e1", (79, -15)*+{2}="e2", (72,-22)*+{3}="e3", (72, -30)*+{4}="e4", (86,-22)*+{5}="e5", (86,-30)*+{6}="e6", {"e1" \ar@{-} "e2"}, {"e2" \ar@{-} "e3"}, {"e2" \ar@{-} "e5"}, {"e3" \ar@{-} "e4"}, {"e4" \ar@{-} "e6"}, {"e5" \ar@{-} "e6"}, (97, 0)*{({\rm IV})}, (104, -7)*+{1}="f1", (104, -15)*+{2}="f2", (97,-23)*+{3}="f3", (111, -23)*+{4}="f4", (97, -31)*+{5}="f5", (111, -31)*+{6}="f6", {"f1" \ar@{-} "f2"}, {"f2" \ar@{-} "f3"}, {"f2" \ar@{-} "f4"}, {"f3" \ar@{-} "f4"}, {"f3" \ar@{-} "f5"}, {"f4" \ar@{-} "f6"}, (121, 0)*{({\rm V})}, (128, -7)*+{1}="g1", (121,-15)*+{2}="g2", (135, -15)*+{3}="g3", (121, -23)*+{4}="g4", (121, -31)*+{5}="g5", (135, -23)*+{6}="g6", (135, -31)*+{7}="g7", {"g1" \ar@{-} "g2"}, {"g1" \ar@{-} "g3"}, {"g2" \ar@{-} "g3"}, {"g2" \ar@{-} "g4"}, {"g3" \ar@{-} "g6"}, {"g4" \ar@{-} "g5"}, {"g6" \ar@{-} "g7"}, \end{xy} $$ \end{theorem}
The second author (\cite{Aoki18}) classifies two-term silting complexes for an arbitrary algebra with radical square zero by using tilting modules over a path algebra (see Proposition \ref{RSZ}(1)). Since the cardinality of the set of isomorphism classes of tilting modules over a path algebra is well known, this provides an explicit way to compute their number. We use this result to determine the number $\#\mathop{2\text{-}\mathsf{tilt}}\nolimits A$ for each graph $G$ in the list of Theorem \ref{theorem1}.
\begin{theorem} \label{theorem2} In Theorem \ref{theorem1}, the number $\#\mathop{2\text{-}\mathsf{tilt}}\nolimits A$ depends only on the graph $G$ of $A$ and is given as follows.
{\fontsize{9pt}{0.4cm}\selectfont \begin{table}[h] {\renewcommand\arraystretch{1.3}
\begin{tabular}{|c||c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline $G$ & $\mathbb{A}_n$& $\mathbb{D}_{n}$&$\mathbb{E}_6 $&$ \mathbb{E}_7$&$ \mathbb{E}_8$& $\tilde{\mathbb{A}}_{n-1}$ & $ {\rm I}_n$ & ${\rm II}_5$& ${\rm II}_6$& ${\rm II}_7$ & ${\rm II}_8$ & ${\rm III}$ & ${\rm IV}$ & ${\rm V}$ \\ \hline $\# \mathop{2\text{-}\mathsf{tilt}}\nolimits A$ &$\binom{2n}{n}$ & $a_n$ & $1700$ & $8872$ & $54066$ & $2^{2n-1}$ & $b_n$ & $632$ & $2936$ & $11306$ & $75240$ & $3108$& $4056$& $17328$ \\ \hline \end{tabular} } \end{table}} \noindent Here, for any $n\ge 4$, let $a_n:= 6\cdot 4^{n-2}-2\binom{2(n-2)}{n-2}$ and $b_n:=6\cdot 4^{n-2} + 2\binom{2n}{n} -4\binom{2(n-1)}{n-1}-4\binom{2(n-2)}{n-2}$. \end{theorem}
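For concreteness, the closed-form expressions for $a_n$ and $b_n$ are easy to evaluate; the short Python fragment below (an illustration added here, not part of the statement) prints the first few values.
\begin{verbatim}
from math import comb

a = lambda n: 6 * 4 ** (n - 2) - 2 * comb(2 * (n - 2), n - 2)
b = lambda n: (6 * 4 ** (n - 2) + 2 * comb(2 * n, n)
               - 4 * comb(2 * (n - 1), n - 1) - 4 * comb(2 * (n - 2), n - 2))

print([a(n) for n in range(4, 8)])   # [84, 344, 1396, 5640]
print([b(n) for n in range(4, 8)])   # e.g. b_4 = 132
\end{verbatim}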
We remark that the numbers for Dynkin graphs of type $\mathbb{A}$, $\mathbb{D}$ and $\mathbb{E}$ in the list are precisely the biCatalan numbers introduced by \cite{BR} in the context of Coxeter-Catalan combinatorics. Our results for Dynkin graphs are independently obtained by \cite{DIRRT} in the study of biCambrian lattices for preprojective algebras.
We also remark that we can generalize our results for Brauer configuration algebras in terms of multiplicities. A Brauer configuration algebra is defined by a configuration and a multiplicity function. The configuration of a Brauer configuration algebra with radical cube zero corresponds to a graph \cite{GSa}. By \cite{EJR}, one can show that the number of two-term tilting complexes over Brauer configuration algebras is independent of the multiplicity. Therefore, we can also apply our results to any Brauer configuration algebra obtained by replacing the multiplicity of a Brauer configuration associated with a graph in the list of Theorem \ref{theorem1}.
This paper is organized as follows. In Section \ref{sec:preliminaries}, we recall the definition of algebras with radical square zero and their two-term silting theory, which are needed in this paper. In Section \ref{sec:RCZ}, we study symmetric algebras with radical cube zero together with the corresponding algebras with radical square zero. Our main results are Theorem \ref{reduced ver} and Corollary \ref{number by graph}, which provide an explicit way to compute the number of two-term tilting complexes for a given symmetric algebra with radical cube zero. In Section \ref{main theorem}, we prove Theorems \ref{theorem1} and \ref{theorem2} by using the results shown in the previous section.
\section{Preliminaries} \label{sec:preliminaries}
Throughout this paper, $\mathbf{k}$ is an algebraically closed field. We recall that any basic connected finite dimensional $\mathbf{k}$-algebra $A$ is isomorphic to a bound quiver algebra $A\cong \mathbf{k}Q/I$, where $Q$ is a finite connected quiver and $I$ is an admissible ideal in the path algebra $\mathbf{k}Q$ of the quiver $Q$. We call $Q_A:=Q$ the \emph{Gabriel quiver} of $A$.
\subsection{Silting complexes}
Let $A$ be a basic (not necessarily connected) finite dimensional $\mathbf{k}$-algebra. We denote by $\moduleCategory A$ the category of finitely generated right $A$-modules and by $\proj A$ the category of finitely generated projective right $A$-modules. Let $\mathsf{K}^{\rm b}(\proj A)$ denote the homotopy category of bounded complexes of objects of $\proj A$. For a complex $X\in \mathsf{K}^{\rm b}(\proj A)$, we say that $X$ is \emph{basic} if it is a direct sum of pairwise non-isomorphic indecomposable objects.
\begin{definition} A complex $T$ in $\mathsf{K}^{\rm b}(\proj A)$ is said to be \emph{presilting} if it satisfies \begin{align} \operatorname{Hom}\nolimits_{\mathsf{K}^{\rm b}(\proj A)}(T,T[i])=0 \notag \end{align} for all positive integers $i$. A presilting complex $T$ is called a \emph{silting complex} if it satisfies $\thick T=\mathsf{K}^{\rm b}(\proj A)$, where $\thick T$ is the smallest triangulated full subcategory which contains $T$ and is closed under taking direct summands. In addition, a silting complex $T$ is called a \emph{tilting complex} if $\operatorname{Hom}\nolimits_{\mathsf{K}^{\rm b}(\proj A)}(T,T[i])=0$ for all non-zero integers $i$. \end{definition}
We restrict our interest to the set of two-term silting complexes. Here, a complex $T=(T^{i},d^{i})$ in $\mathsf{K}^{\rm b}(\proj A)$ is said to be \emph{two-term} if it is isomorphic to a complex concentrated only in degrees $0$ and $-1$, i.e., \begin{align} (T^{-1}\overset{d^{-1}}{\rightarrow} T^0) = \cdots \to 0 \to T^{-1} \overset{d^{-1}}{\longrightarrow} T^0 \to 0 \to \cdots \notag \end{align} We denote by $\mathop{2\text{-}\mathsf{silt}}\nolimits A$ (respectively, $\mathop{2\text{-}\mathsf{tilt}}\nolimits A$) the set of isomorphism classes of basic two-term silting (respectively, two-term tilting) complexes for $A$.
Now, we call $M\in \moduleCategory A$ a \emph{tilting module} if all the following conditions are satisfied: (i) the projective dimension of $M$ is at most $1$, (ii) $\operatorname{Ext}\nolimits_A^1(M,M)=0$, and (iii) $|M|=|A|$, where $|M|$ denotes the number of pairwise non-isomorphic indecomposable direct summands of $M$. We denote by $\mathop{\mathsf{tilt}}\nolimits A$ the set of isomorphism classes of basic tilting $A$-modules. By definition, we can naturally regard a tilting $A$-module $M$ as a tilting complex. More precisely, by taking a minimal projective presentation $P_1 \overset{f}{\to} P_0\to M \to 0$ of $M$ in $\moduleCategory A$, the two-term complex $(P_{1} \overset{f}{\to} P_0)$ provides a tilting complex in $\mathsf{K}^{\rm b}(\proj A)$.
The number of tilting modules over a path algebra of a Dynkin quiver is well known.
\begin{proposition}{\rm (see \cite{ONFR} for example)} \label{tilting number} Let $Q$ be a quiver whose underlying graph $\Delta$ is one of Dynkin graphs of type $\mathbb{A}$, $\mathbb{D}$ and $\mathbb{E}$. Then the number $\#\mathop{\mathsf{tilt}}\nolimits \mathbf{k}Q$ is given by the following table and does not depend on the orientation of $Q$. \begin{table}[h] \begin{center} {\renewcommand\arraystretch{1.3}
\begin{tabular}{|c||c|c|c|c|c|c|} \hline $\Delta$ & $\mathbb{A}_n \, (n\geq 1)$ &$\ \mathbb{D}_n \,(n\geq4)$& $\mathbb{E}_6$ & $\mathbb{E}_7$ & $\mathbb{E}_8$ \\ \hline $\# \mathop{\mathsf{tilt}}\nolimits \mathbf{k}Q$ & $\frac{1}{n+1}\binom{2n}{n}$ & $\frac{3n-4}{2n}\binom{2(n-1)}{n-1}$ & $418$ & $2431$ & $17342$\\ \hline \end{tabular}} \end{center} \end{table} \end{proposition}
More generally, if $Q$ is a disjoint union of Dynkin quivers $Q_{\lambda}$ ($\lambda \in \Lambda$), then we have \begin{equation} \label{disjoint Dynkin}
\#\mathop{\mathsf{tilt}}\nolimits \mathbf{k}Q = \prod_{\lambda\in \Lambda} \#\mathop{\mathsf{tilt}}\nolimits \mathbf{k}Q_{\lambda} \end{equation} and this number is completely determined by a collection of the underlying graphs $\Delta_{\lambda}$ of $Q_{\lambda}$ for all $\lambda\in \Lambda$ as in Proposition \ref{tilting number}.
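As an aside, Proposition \ref{tilting number} together with the product formula (\ref{disjoint Dynkin}) is straightforward to encode; the following Python helper (our own illustrative sketch, with the hypothetical function names \texttt{dynkin\_tilt} and \texttt{tilt\_disjoint}) returns the number of basic tilting modules for a disjoint union of Dynkin quivers described by a list of pairs (type, rank).
\begin{verbatim}
from math import comb
from functools import reduce

def dynkin_tilt(kind, n):
    # Proposition `tilting number': the count is independent of the orientation.
    if kind == "A":
        return comb(2 * n, n) // (n + 1)            # Catalan number
    if kind == "D":
        return (3 * n - 4) * comb(2 * (n - 1), n - 1) // (2 * n)
    return {6: 418, 7: 2431, 8: 17342}[n]           # kind == "E"

def tilt_disjoint(components):
    # product over the connected components of a disjoint union of Dynkin quivers
    return reduce(lambda acc, c: acc * dynkin_tilt(*c), components, 1)

print(tilt_disjoint([("A", 3), ("D", 5)]))          # 5 * 77 = 385
\end{verbatim}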
\subsection{Algebras with radical square zero}
Let $A$ be a basic connected finite dimensional $\mathbf{k}$-algebra. We say that $A$ is an algebra with \emph{radical square zero} (respectively, \emph{radical cube zero}) if $J^2=0$ but $J\neq 0$ (respectively, $J^3=0$ but $J^2\neq 0$), where $J$ is the Jacobson radical of $A$. For simplicity, we abbreviate an algebra with radical square zero (respectively, radical cube zero) by a RSZ (respectively, RCZ) algebra.
We first recall that any basic connected finite dimensional RSZ $\mathbf{k}$-algebra $A$ is isomorphic to a bound quiver algebra $\mathbf{k}Q/I$, where $Q:=Q_A$ is the Gabriel quiver of $A$ and $I$ is the two-sided ideal in $\mathbf{k}Q$ generated by all paths of length $2$.
Next, let $Q=(Q_{0},Q_{1})$ be a finite connected quiver, where $Q_{0}$ is the vertex set and $Q_{1}$ is the arrow set. We denote by $Q^{\rm op}$ the opposite quiver of $Q$. For a map $\epsilon\colon Q_{0}\to \{ \pm 1\}$, we define a quiver $Q_{\epsilon}$, called a {\it single quiver} of $Q$, as follows: \begin{itemize} \item The set of vertices is $Q_0$. \item We draw an arrow $a \colon i\to j$ in $Q_{\epsilon}$ whenever there exists an arrow $a\colon i\to j$ with $\epsilon(i)=+1$ and $\epsilon(j)=-1$. \end{itemize} Note that $Q_{\epsilon}$ is bipartite (i.e., each vertex is either a sink or a source), but not connected in general. Since it has no loops by definition, we have $Q_{\epsilon}=(Q^{\circ})_{\epsilon}$, where $Q^{\circ}$ denotes the quiver obtained from $Q$ by deleting all loops.
We give a connection between two-term silting complexes for a RSZ algebra and tilting modules over path algebras.
\begin{proposition}\label{RSZ} Let $A$ be a basic connected finite dimensional RSZ $\mathbf{k}$-algebra and $Q_A$ the Gabriel quiver of $A$. Let $Q:=(Q_A)^{\circ}$ be the quiver obtained from $Q_A$ by deleting all loops. Then the following statements hold. \begin{enumerate}[\rm (1)] \item \textnormal{(\cite[Theorem 1.1]{Aoki18})} There is a bijection \begin{align} \mathop{2\text{-}\mathsf{silt}}\nolimits A \longrightarrow \bigsqcup_{\epsilon \colon Q_{0} \rightarrow\{\pm 1\}} \mathop{\mathsf{tilt}}\nolimits \mathbf{k}(Q_{\epsilon})^{\rm op}. \notag \end{align} \item \textnormal{(\cite{Adachi16b,Aoki18})} The following conditions are equivalent. \begin{enumerate}[\rm (a)] \item $\mathop{2\text{-}\mathsf{silt}}\nolimits A$ is finite. \item For every map $\epsilon\colon Q_{0}\to \{ \pm 1\}$, the underlying graph of the single quiver $Q_{\epsilon}$ is a disjoint union of Dynkin graphs of type $\mathbb{A}$, $\mathbb{D}$ and $\mathbb{E}$. \end{enumerate} \item If one of equivalent conditions of \textnormal{(2)} holds, we have \begin{align} \#\mathop{2\text{-}\mathsf{silt}}\nolimits A = \sum_{\epsilon \colon Q_0 \to \{\pm1\}} \# \mathop{\mathsf{tilt}}\nolimits \mathbf{k}(Q_{\epsilon})^{\rm op}.\notag \end{align} \end{enumerate} \end{proposition}
We remark that we can replace the quiver $Q$ with the Gabriel quiver $Q_A$ of $A$ in Proposition \ref{RSZ} since we have $(Q_A)_{\epsilon} = Q_{\epsilon}$ for any map $\epsilon\colon Q_0 \to \{\pm1\}$.
\section{Two-term tilting complexes over symmetric RCZ algebras} \label{sec:RCZ}
Let $A$ be a basic connected finite dimensional symmetric RCZ $\mathbf{k}$-algebra. Then $\overline{A}:=A/\operatorname{soc}\nolimits A$ is a RSZ algebra by definition. Moreover, the Gabriel quiver of $\overline{A}$ coincides with the Gabriel quiver of $A$ since $\operatorname{soc}\nolimits A$ is contained in the square of the Jacobson radical of $A$.
The following is basic. Here, we remember that silting complexes coincide with tilting complexes for a symmetric algebra $A$ (\cite[Example 2.8]{AI}). In particular, $\mathop{2\text{-}\mathsf{tilt}}\nolimits A=\mathop{2\text{-}\mathsf{silt}}\nolimits A$.
\begin{proposition} \cite[Theorem 3.3]{Adachi16a} \label{reduction} Let $A$ be a basic connected finite dimensional symmetric RCZ $\mathbf{k}$-algebra and $\overline{A}:=A/\operatorname{soc}\nolimits A$. Then the functor $-\otimes_A \overline{A}$ gives a bijection \begin{align} \mathop{2\text{-}\mathsf{tilt}}\nolimits A \longrightarrow \mathop{2\text{-}\mathsf{silt}}\nolimits \overline{A}. \notag \end{align} \end{proposition}
Next, the following observations provide a combinatorial framework for studying two-term tilting complexes over symmetric RCZ algebras.
\begin{definition} \label{def:double quiver} For a finite connected graph $G$ with no loops, we define a quiver $Q_G$ as follows. \begin{itemize} \item The set of vertices of $Q_G$ is the set of vertices of $G$. \item We draw two arrows $a^{\ast} \colon i\to j$ and $a^{\ast\ast} \colon j\to i$ whenever there exists an edge $a$ of $G$ connecting $i$ and $j$. \end{itemize} We call $Q_G$ the \emph{double quiver} of $G$. Notice that $Q_G$ has no loops since so does $G$. \end{definition}
\begin{defprop} \label{graphA} Let $A$ be a basic connected finite dimensional symmetric RCZ $\mathbf{k}$-algebra. Let $Q_A$ be the Gabriel quiver of $A$ and $Q:=(Q_A)^{\circ}$ the quiver obtained from $Q_A$ by deleting all loops. Then $Q$ is the double quiver $Q_G$ of a finite connected (undirected) graph $G$ with no loops. We call $G$ the graph of $A$. \end{defprop}
\begin{proof} For the Gabriel quiver $Q_A$ of $A$, let $\pi \colon \mathbf{k}Q_A \to A$ be a canonical surjection. For any vertex $i$ of $Q_A$, let $P_i$ be the indecomposable projective $A$-module corresponding to $i$. By definition, $P_i$ has Loewy length $3$ and its simple socle is isomorphic to the simple top $S_{i}:=P_{i}/P_{i}J$.
We recall from \cite[Proposition 5.6]{GSa} that our algebra $A$ is special multiserial (we refer to \cite[Definition 2.2]{GSa} for the definition of special multiserial algebras). Then each arrow $a\colon i\to j$ of $Q_A$ determines the unique arrow $\sigma(a)$ such that $\pi(a\sigma(a))\neq 0$, and the correspondence $\sigma$ gives a permutation of the set of arrows of $Q_A$, see \cite[Definition 4.8]{GSa}. In addition, the element $\pi(a\sigma(a)\sigma^2(a)\cdots\sigma^{m-1}(a))$ lies in the socle of $P_i$, where $m$ is the length of the $\sigma$-orbit containing the arrow $a$. Since $P_i$ has Loewy length $3$, $m=2$ must hold. In particular, $\sigma(a)$ is the unique arrow $\sigma(a)\colon j\to i$ such that $\pi(\sigma(a)a)\neq 0$.
Now, we can restrict the permutation $\sigma$ to the subset consisting of all arrows which are not loops. Then we define a finite undirected graph $G$ as follows: The set of vertices of $G$ bijectively corresponds to the set of vertices of $Q_A$, and the set of edges of $G$ is naturally given by the set of unordered pairs $\{a,\sigma(a)\}$ for all arrows $a$ of $Q_A$ which are not loops. Then $G$ is the desired one as $(Q_A)^{\circ}=Q_G$ from our construction. \end{proof}
As we mentioned before, the algebras $A$ and $\overline{A}:=A/\operatorname{soc}\nolimits A$ have the same Gabriel quiver $Q_A = Q_{\overline{A}}$. Therefore, $(Q_A)^{\circ}= (Q_{\overline{A}})^{\circ}$ is the double quiver $Q_G$ of a common finite connected graph $G$ with no loops by Definition-Proposition \ref{graphA}.
\begin{theorem} \label{reduced ver} Let $A$ be a basic connected finite dimensional symmetric RCZ $\mathbf{k}$-algebra and $\overline{A}:=A/\operatorname{soc}\nolimits A$. Let $Q_A$ be the Gabriel quiver of $A$ and $Q:=(Q_A)^{\circ}$ the quiver obtained from $Q_A$ by deleting all loops. \begin{enumerate}[\rm (1)] \item The following conditions are equivalent. \begin{enumerate}[\rm (a)] \item $\mathop{2\text{-}\mathsf{tilt}}\nolimits A$ is finite. \item $\mathop{2\text{-}\mathsf{silt}}\nolimits \overline{A}$ is finite. \item For every map $\epsilon \colon Q_{0} \rightarrow \{\pm 1\}$, the underlying graph of the single quiver $Q_{\epsilon}$ is a disjoint union of Dynkin graphs of type $\mathbb{A}$, $\mathbb{D}$ and $\mathbb{E}$. \end{enumerate} \item Fix any vertex $v\in Q_{0}$. If one of the equivalent conditions in \textnormal{(1)} is satisfied, then the following equalities hold. \begin{align} \# \mathop{2\text{-}\mathsf{tilt}}\nolimits A = \# \mathop{2\text{-}\mathsf{silt}}\nolimits \overline{A} = 2 \cdot \sum_{\substack{\epsilon\colon Q_{0} \rightarrow \{\pm 1\} \\ \epsilon(v)=+1}} \# \mathop{\mathsf{tilt}}\nolimits \mathbf{k}{Q}_{\epsilon}. \notag \end{align} \end{enumerate} \end{theorem} \begin{proof} (1) It follows from Propositions \ref{RSZ}(2) and \ref{reduction}.
(2) By Proposition \ref{reduction}, we have $\# \mathop{2\text{-}\mathsf{tilt}}\nolimits A = \# \mathop{2\text{-}\mathsf{silt}}\nolimits \overline{A}$. We show the second equality. Let $v$ be a vertex in $Q$. By Proposition \ref{RSZ}(1), we have \begin{align} \# \mathop{2\text{-}\mathsf{silt}}\nolimits \overline{A} = \sum_{\epsilon \colon Q_{0} \rightarrow\{\pm 1\}} \#\mathop{\mathsf{tilt}}\nolimits \mathbf{k}(Q_{\epsilon})^{\rm op} = \sum_{\substack{\epsilon\colon Q_{0} \rightarrow \{\pm 1\} \\ \epsilon(v)=+1}} \# \mathop{\mathsf{tilt}}\nolimits \mathbf{k}(Q_{\epsilon})^{\rm op} + \sum_{\substack{\epsilon\colon Q_{0} \rightarrow \{\pm 1\} \\ \epsilon(v)=-1}} \# \mathop{\mathsf{tilt}}\nolimits \mathbf{k}(Q_{\epsilon})^{\rm op}. \notag \end{align} For a map $\epsilon\colon Q_{0}\to \{ \pm 1\}$, we define a map $-\epsilon\colon Q_{0}\to \{ \pm 1\}$ by $(-\epsilon)(i):=-\epsilon(i)$ for all $i\in Q_{0}$. Since $Q$ is the double quiver of the graph $G$ of $A$, we have $Q_{-\epsilon}=(Q_{\epsilon})^{\mathrm{op}}$. This implies that $Q_{\epsilon}$ and $Q_{-\epsilon}$ have the same underlying graph $\Delta$. By our assumption, $\Delta$ is a disjoint union of Dynkin graphs. Thus we obtain $\# \mathop{\mathsf{tilt}}\nolimits \mathbf{k}Q_{\epsilon}= \# \mathop{\mathsf{tilt}}\nolimits \mathbf{k}Q_{-\epsilon}$ because the number of non-isomorphic tilting modules over a path algebra of Dynkin type does not depend on orientation, see Proposition \ref{tilting number}. Hence we have \begin{align} \sum_{\substack{\epsilon\colon Q_{0} \rightarrow \{\pm 1\} \\ \epsilon(v)=+1}} \# \mathop{\mathsf{tilt}}\nolimits \mathbf{k}Q_{\epsilon} =\sum_{\substack{\epsilon\colon Q_{0} \rightarrow \{\pm 1\} \\ \epsilon(v)=+1}} \# \mathop{\mathsf{tilt}}\nolimits \mathbf{k}(Q_{\epsilon})^{\rm op} = \sum_{\substack{\epsilon\colon Q_{0} \rightarrow \{\pm 1\} \\ \epsilon(v)=-1}} \# \mathop{\mathsf{tilt}}\nolimits \mathbf{k}(Q_{\epsilon})^{\rm op}. \notag \end{align} This finishes the proof. \end{proof}
For our convenience, we restate Theorem \ref{reduced ver} in terms of undirected graphs. Let $G=(G_0,G_1)$ be a finite connected graph with no loops, where $G_0$ is the set of vertices and $G_1$ is the set of edges. For each map $\epsilon \colon G_0\to \{\pm1\}$, let $G_{\epsilon}$ be the graph obtained from $G$ by removing all edges between vertices $i,j$ with $\epsilon(i)=\epsilon(j)$. From our construction, $G_{\epsilon}$ is precisely the underlying graph of the quiver $Q_{\epsilon}$, where $Q:=Q_G$ is the double quiver of $G$ with vertex set $Q_0=G_0$. In particular, $Q_{\epsilon}$ is a disjoint union of Dynkin quivers if and only if $G_{\epsilon}$ is a disjoint union of Dynkin graphs.
Now, we recall that, for a quiver $Q$ whose underlying graph $\Delta$ is a disjoint union of Dynkin graphs, the number $\#\mathop{\mathsf{tilt}}\nolimits \mathbf{k}Q$ does not depend on orientation of $Q$ and given by (\ref{disjoint Dynkin}). Then, we set $|\Delta| := \# \mathop{\mathsf{tilt}}\nolimits \mathbf{k}Q$.
\begin{corollary} \label{number by graph} Let $A$ be a basic connected finite dimensional symmetric RCZ $\mathbf{k}$-algebra and $G$ the graph of $A$. \begin{enumerate}[\rm (1)] \item The following conditions are equivalent. \begin{enumerate}[\rm (a)] \item $\mathop{2\text{-}\mathsf{tilt}}\nolimits A$ is finite. \item For every map $\epsilon \colon G_0 \to \{\pm1\}$, the graph $G_{\epsilon}$ is a disjoint union of Dynkin graphs of type $\mathbb{A}$, $\mathbb{D}$ and $\mathbb{E}$. \end{enumerate} \item Assume that, for any $\epsilon \colon G_0\to \{\pm1\}$, the graph $G_{\epsilon}$ is a disjoint union of Dynkin graphs $\Delta_{\epsilon,\lambda}$ ($\lambda \in \Lambda_{\epsilon}$). Then for a fixed vertex $v$ of $G$, the number $\#\mathop{2\text{-}\mathsf{tilt}}\nolimits A$ is equal to \end{enumerate} \begin{align}\label{number G}
2\cdot \sum_{\substack{\epsilon\colon G_0 \to \{\pm1\} \\ \epsilon(v)=+1}} |G_{\epsilon}| \ = \
2\cdot \sum_{\substack{\epsilon\colon G_0 \to \{\pm1\} \\ \epsilon(v)=+1}} \prod_{\lambda\in \Lambda_{\epsilon}} |\Delta_{\epsilon,\lambda}|. \end{align} \end{corollary} \begin{proof} Let $Q:=(Q_A)^{\circ}$, where $Q_A$ is the Gabriel quiver of $A$. Then $Q=Q_G$ holds by Definition-Proposition \ref{graphA}. Then the assertion follows from Theorem \ref{reduced ver} since $G_{\epsilon}=Q_{\epsilon}$ for any map $\epsilon \colon G_0\to \{\pm1\}$. \end{proof}
\begin{definition} \label{number GG}
Keeping the notations in Corollary \ref{number by graph}(2), we write $||G||$ for the number given by the left hand side of (\ref{number G}). \end{definition}
\begin{figure}
\caption{A half of single quivers of the double quiver of $\mathbb{E}_6$.}
\label{Fig.E6}
\end{figure}
\begin{example} \label{example-E6} \begin{enumerate}[\rm (1)] \item Let $Q$ be a quiver whose underlying graph $\Delta$ is one of the Dynkin graphs of type $\mathbb{A}$, $\mathbb{D}$ and $\mathbb{E}$. Let $A$ be the trivial extension of the path algebra $\mathbf{k}Q$ of $Q$ by a minimal co-generator. It is easy to see that $A$ is a symmetric RCZ algebra if $Q$ is bipartite. In this case, the Gabriel quiver of $A$ is precisely the double quiver $Q_{\Delta}$ of $\Delta$, in other words, the graph of $A$ is $\Delta$. On the other hand, $Q^{\rm op}$ also determines a symmetric RCZ algebra, which is naturally isomorphic to $A$. \item Let $\Delta=\mathbb{E}_{6}$ and let $A$ be the symmetric RCZ algebra obtained as in (1). In Figure \ref{Fig.E6}, we describe single quivers of $Q:=Q_{\mathbb{E}_6}$ associated to maps $\epsilon$ with $\epsilon(6)=+1$. Here, the notation $i^{\sigma}$ denotes the vertex $i$ with $\epsilon(i)=\sigma \in \{\pm1\}$. Using Corollary \ref{number by graph}, we find that there are $1700$ isomorphism classes of basic two-term tilting complexes over $A$ as in the list of Theorem \ref{theorem2}. \end{enumerate} \end{example}
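We remark that the number in Example \ref{example-E6}(2) can also be reproduced by a brute-force evaluation of the right hand side of (\ref{number G}). The following Python sketch (our own illustration; the function names and the encoding of a graph by vertex and edge lists are assumptions made here) enumerates the maps $\epsilon$ with $\epsilon(v)=+1$, splits each $G_{\epsilon}$ into connected components, recognizes the Dynkin type of every component and multiplies the corresponding numbers of tilting modules from Proposition \ref{tilting number}; for $G=\mathbb{E}_6$ it should return the value $1700$ from the table of Theorem \ref{theorem2}.
\begin{verbatim}
from math import comb
from itertools import product

def tilt_count_tree(vertices, edges):
    # #tilt kQ for a quiver whose underlying graph is the Dynkin tree (vertices, edges)
    n = len(vertices)
    adj = {v: [] for v in vertices}
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    branch = [v for v in vertices if len(adj[v]) >= 3]
    if not branch:                                   # a path: type A_n
        return comb(2 * n, n) // (n + 1)
    arms = []
    for start in adj[branch[0]]:                     # arm lengths at the branch vertex
        length, prev, cur = 1, branch[0], start
        nxt = [w for w in adj[cur] if w != prev]
        while nxt:
            prev, cur = cur, nxt[0]
            length += 1
            nxt = [w for w in adj[cur] if w != prev]
        arms.append(length)
    arms.sort()
    if arms[1] == 1:                                 # arms (1,1,*): type D_n
        return (3 * n - 4) * comb(2 * (n - 1), n - 1) // (2 * n)
    return {6: 418, 7: 2431, 8: 17342}[n]            # types E6, E7, E8

def two_term_tilt(vertices, edges):
    # right hand side of the formula: 2 * sum over eps with eps(v0) = +1
    v0, rest = vertices[0], vertices[1:]
    total = 0
    for signs in product([1, -1], repeat=len(rest)):
        eps = dict(zip(rest, signs)); eps[v0] = 1
        kept = [e for e in edges if eps[e[0]] != eps[e[1]]]
        seen, prod = set(), 1
        for v in vertices:                           # connected components of G_eps
            if v in seen:
                continue
            comp, stack = {v}, [v]
            seen.add(v)
            while stack:
                u = stack.pop()
                for a, b in kept:
                    w = b if a == u else (a if b == u else None)
                    if w is not None and w not in seen:
                        seen.add(w); comp.add(w); stack.append(w)
            prod *= tilt_count_tree(list(comp), [e for e in kept if e[0] in comp])
        total += prod
    return 2 * total

E6 = ([1, 2, 3, 4, 5, 6], [(1, 2), (2, 3), (3, 4), (3, 5), (5, 6)])
print(two_term_tilt(*E6))                            # expected: 1700
\end{verbatim}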
\section{Proof of Main Theorem} \label{main theorem}
In this section, we prove Theorems \ref{theorem1} and \ref{theorem2}. Throughout this section, $G$ is a finite connected graph with no loops.
\subsection{Proof of Theorem \ref{theorem1}}
By Corollary \ref{number by graph}(1), the proof is completed with the following proposition.
\begin{proposition}\label{tothm1} Let $G$ be a connected finite graph with no loops. Then the graph $G_{\epsilon}$ is a disjoint union of Dynkin graphs of type $\mathbb{A}$, $\mathbb{D}$ and $\mathbb{E}$ for every map $\epsilon \colon G_{0} \to\{\pm 1\}$ if and only if $G$ is one of the list in Theorem \ref{theorem1}. \end{proposition}
In the following, we give a proof of Proposition \ref{tothm1} by removing extended Dynkin graphs from the collection of subgraphs $G_{\epsilon}$ of $G$. We start with removing extended Dynkin graphs of type $\widetilde{\mathbb{A}}$. A graph is called an \emph{$n$-cycle} if it is a cycle with exactly $n$ vertices. In particular, it is called an \emph{odd-cycle} if $n$ is odd, and an \emph{even-cycle} if $n$ is even.
\begin{lemma}\label{remove-ext-A} The following statements are equivalent: \begin{enumerate}[\rm (1)] \item There exists a map $\epsilon \colon G_{0} \to\{\pm 1\}$ such that $G_{\epsilon}$ contains an extended Dynkin graph of type $\widetilde{\mathbb{A}}$ as a subgraph. \item $G$ contains an even-cycle as a subgraph. \end{enumerate} \end{lemma} \begin{proof} (2)$\Rightarrow$(1): Let $G'$ be a subgraph of $G$ which is an even-cycle. Since an even-cycle is a bipartite graph, there exists a map $\epsilon \colon G_{0} \to \{\pm 1\}$ such that the underlying graph of $G_{\epsilon}$ contains $G'$ as a subgraph. Hence the assertion follows.
(1)$\Rightarrow$(2): Assume that for some map $\epsilon\colon G_{0}\to \{ \pm 1\}$, the graph $G_{\epsilon}$ contains an extended Dynkin graph $G'$ of type $\widetilde{\mathbb{A}}$. Since $G_{\epsilon}$ is bipartite, so is $G'$. Hence $G'$ is an even-cycle and a subgraph of $G$. This finishes the proof. \end{proof}
By Lemma \ref{remove-ext-A}, we may assume that $G$ contains no even-cycle as a subgraph. In particular, $G$ has no multiple edges. We give a connection between our graphs $G_{\epsilon}$ and subtrees of $G$. Recall that a \emph{subtree} of $G$ is a connected subgraph of $G$ without cycles.
\begin{proposition}\label{subtree-bipartite} Assume that $G$ contains no even-cycle as a subgraph. Let $G'$ be a connected graph. Then the following statements are equivalent. \begin{enumerate}[\rm (1)] \item There exists a map $\epsilon \colon G_{0} \to\{ \pm 1\}$ such that $G_{\epsilon}$ contains $G'$ as a subgraph. \item $G'$ is a subtree of $G$. \end{enumerate}
In particular, there exists a natural two-to-one correspondence between the set of connected graphs of the form $G_{\epsilon}$ and the set of subtrees of $G$. \end{proposition} \begin{proof} (2)$\Rightarrow$(1) is clear. We show (1)$\Rightarrow$(2). Since $G$ contains no even-cycle as a subgraph, the graph $G_{\epsilon}$ contains no cycle at all by Lemma \ref{remove-ext-A} (every cycle of the bipartite graph $G_{\epsilon}$ is an even-cycle), that is, $G_{\epsilon}$ is a forest. Hence the connected subgraph $G'$ of $G_{\epsilon}$ is a tree, and therefore a subtree of $G$. \end{proof}
For a tree, we have the following result. \begin{corollary}\label{Dynkin-case} Assume $G$ is a tree. Then the graph $G_{\epsilon}$ is a disjoint union of Dynkin graphs of type $\mathbb{A}$, $\mathbb{D}$ and $\mathbb{E}$ for each map $\epsilon \colon G_{0} \to \{\pm 1\}$ if and only if $G$ is a Dynkin graph of type $\mathbb{A}$, $\mathbb{D}$ and $\mathbb{E}$. \end{corollary} \begin{proof} It is well known that $G$ is a Dynkin graph if and only if all subtrees of $G$ are Dynkin graphs. The assertion follows from Proposition \ref{subtree-bipartite}. \end{proof}
We remove extended Dynkin graphs of type $\widetilde{\mathbb{D}}$. Assume that $G$ contains at least two odd-cycles. Then there exists a subtree $G'$ of $G$ such that $G'$ is an extended Dynkin graph of type $\widetilde{\mathbb{D}}$. Moreover, by Proposition \ref{subtree-bipartite}, there exists a map $\epsilon \colon G_{0} \to \{\pm 1\}$ such that $G_\epsilon$ contains an extended Dynkin graph of type $\widetilde{\mathbb{D}}$ as a subgraph. Hence we may assume that $G$ contains at most one odd-cycle. By Corollary \ref{Dynkin-case}, it is enough to consider the case where $G$ contains exactly one odd-cycle. Namely, $G$ consists of an odd-cycle such that each vertex $v$ in the odd-cycle is attached to a tree $T_{v}$. \begin{align} \xymatrix@C=4mm@R=3mm{ \bullet\ar@{-}[r]&\bullet\ar@{-}[r]&v_{1}\ar@{-}[dr]\ar@{-}[dl]\ar@{-}[r]&\bullet&\bullet\ar@{-}[d]&\\ &v_{2}\ar@{-}[rr]&&v_{3}\ar@{-}[r]&\bullet\ar@{-}[r]&\bullet\\ }\notag \end{align}
\begin{lemma}\label{remove-ext-D} Fix an integer $k\ge 1$ and $n:=2k+1$. Assume that $G$ consists of an $n$-cycle such that each vertex $v$ in the $n$-cycle is attached to a tree $T_{v}$. Then the following statements are equivalent: \begin{enumerate}[\rm (1)] \item There exists a map $\epsilon \colon G_{0} \to\{ \pm 1\}$ such that $G_{\epsilon}$ contains an extended Dynkin graph of type $\widetilde{\mathbb{D}}$ as a subgraph. \item $G$ contains an extended Dynkin graph of type $\widetilde{\mathbb{D}}$ as a subgraph. \item $G$ satisfies one of the following conditions. \begin{enumerate}[\rm (a)] \item There is a vertex $v$ in the $n$-cycle whose degree is at least four. \item There is a vertex $v$ in the $n$-cycle whose degree is exactly three and such that $T_{v}$ is not a Dynkin graph of type $\mathbb{A}$. \item $k\ge 2$ and there are at least two vertices in the $n$-cycle whose degrees are at least three. \end{enumerate} \end{enumerate} \end{lemma} \begin{proof} (1)$\Leftrightarrow$(2) follows from Proposition \ref{subtree-bipartite}. Moreover, we can easily check (2)$\Leftrightarrow$(3) because $\widetilde{\mathbb{D}}_{4}$ has exactly one vertex whose degree is exactly four and $\widetilde{\mathbb{D}}_{l}$ ($l\ge 5$) has exactly two vertices whose degrees are exactly three. \end{proof}
Fix an integer $k \ge 1$ and $n:=2k+1$. By Lemma \ref{remove-ext-D}, we may assume that $G$ is one of the following graphs:
\begin{align} \begin{picture}(400,200)(0,0) \put(95,5){$k=1$} \put(290,5){$k\ge 2$} \put(70,200){\xymatrix@C=3mm@R=3mm{ &1_{l_{1}}\ar@{-}[d]&&\\ &\vdots\ar@{-}[d]&&\\ &1_{1}\ar@{-}[d]&&\\ &1\ar@{-}[rd]\ar@{-}[ld]&&\\ 2\ar@{-}[rr]\ar@{-}[d]&&3\ar@{-}[d]\\ 2_{1}\ar@{-}[d]&&3_{1}\ar@{-}[d]\\ \vdots\ar@{-}[d]&&\vdots\ar@{-}[d]\\ 2_{l_{2}}&&3_{l_{3}} }} \put(270,170){\xymatrix@C=4mm@R=4mm{ &1_{l_{1}}\ar@{-}[d]&\\ &\vdots\ar@{-}[d]&\\ &1_{1}\ar@{-}[d]&\\ &1\ar@{-}[rd]\ar@{-}[ld]&\\ 2\ar@{-}[d]&&n\ar@{-}[d]\\ 3\ar@{.}[rr]&&{n-1} }} \end{picture}\notag \end{align}
Finally, we remove extended Dynkin graphs of type $\widetilde{\mathbb{E}}$.
\begin{lemma}\label{remove-ext-E} Fix an integer $k \ge 1$ and $n:=2k+1$. \begin{enumerate}[\rm (1)] \item Assume that $k=1$. The following graphs $(\mathrm{i})$, $(\mathrm{ii})$ and $(\mathrm{iii})$ are the minimal graphs containing an extended Dynkin graph of type $\widetilde{\mathbb{E}}$. \begin{align} \begin{picture}(400,160)(0,0) \put(25,150){\textnormal{(i)}} \put(30,150){\xymatrix@C=3mm@R=3mm{ &1_{1}\ar@{-}[d]&\\ &1\ar@{-}[rd]\ar@{-}[ld]&\\ 2\ar@{-}[d]\ar@{-}[rr]&&3\ar@{-}[d]\\ 2_{1}&&3_{1}\ar@{-}[d]\\ &&3_{2} }} \put(145,150){\textnormal{(ii)}} \put(265,150){\textnormal{(iii)}} \put(150,150){\xymatrix@C=3mm@R=3mm{ &1\ar@{-}[rd]\ar@{-}[ld]&\\ 2\ar@{-}[rr]\ar@{-}[d]&&3\ar@{-}[d]\\ 2_{1}\ar@{-}[d]&&3_{1}\ar@{-}[d]\\ 2_{2}&&3_{2}\ar@{-}[d]\\ &&3_{3} }} \put(270,150){\xymatrix@C=3mm@R=3mm{ &1\ar@{-}[rd]\ar@{-}[ld]&\\ 2\ar@{-}[rr]\ar@{-}[d]&&3\ar@{-}[d]\\ 2_{1}&&3_{1}\ar@{-}[d]\\ &&3_{2}\ar@{-}[d]\\ &&3_{3}\ar@{-}[d]\\ &&3_{4}\ar@{-}[d]\\ &&3_{5} }} \end{picture}\notag \end{align} \item Assume that $k\ge 2$. The following graphs $(\mathrm{iv})$ and $(\mathrm{v})$ are the minimal graphs containing an extended Dynkin graph of type $\widetilde{\mathbb{E}}$. \begin{align} \begin{picture}(300,145)(0,0) \put(60,5){\textnormal{(iv)} $k= 2$} \put(210,5){\textnormal{(v)} $k\ge 3$} \put(45,130){\xymatrix@C=4mm@R=4mm{ &1_{2}\ar@{-}[d]&\\ &1_{1}\ar@{-}[d]&\\ &1\ar@{-}[rd]\ar@{-}[ld]&\\ 2\ar@{-}[d]&&n\ar@{-}[d]\\ 3\ar@{.}[rr]&&{n-1} }} \put(195,130){\xymatrix@C=4mm@R=4mm{ &1_{1}\ar@{-}[d]&\\ &1\ar@{-}[rd]\ar@{-}[ld]&\\ 2\ar@{-}[d]&&n\ar@{-}[d]\\ 3\ar@{-}[d]&&n-1\ar@{-}[d]\\ 4\ar@{.}[rr]&&n-2 }} \end{picture}\notag \end{align} \end{enumerate} \end{lemma} \begin{proof} We can easily find extended Dynkin graphs $\widetilde{\mathbb{E}}_{6}$, $\widetilde{\mathbb{E}}_{7}$ and $\widetilde{\mathbb{E}}_{8}$ in the graphs above. \end{proof}
Now we are ready to prove Proposition \ref{tothm1}.
\begin{proof}[Proof of Proposition \ref{tothm1}] If $G$ is a tree, then the assertion follows from Corollary \ref{Dynkin-case}. We assume that $G$ is not a tree. By Lemma \ref{remove-ext-A}, we may assume that $G$ does not contain even-cycles as subgraphs. Then $G$ does not contain extended Dynkin graphs as subgraphs if and only if $G$ is one of the following classes: \begin{itemize} \item $(\mathrm{I}_{n})_{n\ge 4}$ in Theorem \ref{theorem1}(2), \item proper connected non-tree subgraphs appearing in Lemma \ref{remove-ext-E}(i)--(v). \end{itemize} The second class coincides with the graphs $(\widetilde{\mathbb{A}}_{n-1})_{n:{\rm odd}}$, $(\mathrm{I}_{n})_{4\le n \le 8}$, $(\mathrm{II}_{n})_{5\le n\le 8}$, (III), (IV) and (V) in Theorem \ref{theorem1}(2). Hence the assertion follows from Proposition \ref{subtree-bipartite}. \end{proof}
We finish this subsection with proof of Theorem \ref{theorem1}.
\begin{proof}[Proof of Theorem \ref{theorem1}] The result follows from Corollary \ref{number by graph}(1) and Proposition \ref{tothm1}. \end{proof}
\subsection{Proof of Theorem \ref{theorem2}} We compute the number of two-term tilting complexes for each graph in the list of Theorem \ref{theorem1}. Our calculation is based on Theorem \ref{reduced ver} and Corollary \ref{number by graph}. For our purpose, we assume that $G$ is a graph appearing in the list of Theorem \ref{theorem1} and let $A$ be a basic connected finite dimensional symmetric RCZ algebra whose graph is $G$.
Keeping above notations, we determine the number $\#\mathop{2\text{-}\mathsf{tilt}}\nolimits A$, or equivalently, $||G||$ in Definition \ref{number GG}. First, for types $\mathbb{A}$ and $\widetilde{\mathbb{A}}$, the number is already computed by \cite{Aoki18}:
\begin{proposition} \cite[Theorem 1.2]{Aoki18} \label{type A} The following equality holds. \begin{align} \# \mathop{2\text{-}\mathsf{tilt}}\nolimits A = \begin{cases} \binom{2n}{n} &\text{if $G=\mathbb{A}_{n}$,}\\ 2^{2n-1} &\text{if $G=\widetilde{\mathbb{A}}_{n-1}$ for odd $n$.} \end{cases}\notag \end{align} \end{proposition}
Secondly, we consider the case where $G$ is a Dynkin graph of type $\mathbb{D}$. For simplicity, let $c_{0}=1$, $c_{l}:=\binom{2l}{l}$ for each $l\ge 1$.
Then we have $||\mathbb{A}_l|| = c_l$ for all $l\geq 1$ by Proposition \ref{type A}. In addition, let $||\mathbb{A}_0||:=2$.
\begin{proposition}\label{type D} Let $n\ge 4$ and $G=\mathbb{D}_n$. Then we have \begin{align} \# \mathop{2\text{-}\mathsf{tilt}}\nolimits A = 6\cdot 4^{n-2} - 2 c_{n-2}.\notag \end{align} \end{proposition} \begin{proof} Let $G$ be a graph as follows. \begin{align} \xymatrix@R=1mm{ 1&&&&&&&\\ &3\ar@{-}[lu]\ar@{-}[ld]\ar@{-}[r]&4\ar@{-}[r]&\cdots\ar@{-}[r]&n.\\ 2&&&&&&& }\notag \end{align} By Corollary \ref{number by graph}, we have \begin{equation} \label{for D}
\# \mathop{2\text{-}\mathsf{tilt}}\nolimits A =2 \cdot \sum_{\substack{\epsilon\colon Q_{0} \rightarrow \{ \pm 1\} \\ \epsilon(3)=+1}} |{G}_{\epsilon}|. \end{equation} We study the right hand side of (\ref{for D}). Let $M$ be the set of maps $\epsilon \colon G_{0} \rightarrow \{\pm1\}$ such that $\epsilon(3)=+1$. Clearly, $M$ is a disjoint union of the following subsets: \begin{itemize} \item $M_{1}:=\{ \epsilon\in M \mid \epsilon(1)=\epsilon(2)=\epsilon(3) \}$. \item $M_{2}:=\{ \epsilon\in M \mid \epsilon(1)=-\epsilon(2)=\epsilon(3) \}$. \item $M_{3}:=\{ \epsilon\in M \mid -\epsilon(1)=\epsilon(2)=\epsilon(3) \}$. \item $M_{4}:=\{ \epsilon\in M \mid -\epsilon(1)=-\epsilon(2)=\epsilon(3)=\epsilon(4) \}$. \item $M_{5}:=\{ \epsilon\in M \mid -\epsilon(1)=-\epsilon(2)=\epsilon(3)=-\epsilon(4) \}=\bigsqcup_{t=4}^{n}M_{5}(t)$, where \begin{align}
M_{5}(t):=\Bigl\{ \epsilon\in M_{5}\ \Bigl.\Bigr|\ t=\min\{ 4 \le j \le n \mid \epsilon(j)=\epsilon(j+1)\} \Bigr\}. \notag \end{align} \end{itemize}
From now on, we compute $\mathsf{n}(i):=\sum_{\epsilon \in M_i} |G_{\epsilon}|$ for each $i\in\{ 1,\ldots, 5\}$. In the following, the notation $\xymatrix{i\ar@{~}[r]&j}$ stands for an edge connecting $i$ and $j$ if $\epsilon(i)\neq\epsilon(j)$, and for no edge between them otherwise.
(i) Let $\epsilon \in M_{1}$. Then $G_{\epsilon}$ is given by \begin{align} \xymatrix@R=1mm{ 1&&&&&\\ &3\ar@{~}[r]&4\ar@{~}[r]&\cdots\ar@{~}[r]&n-1\ar@{~}[r]&n.\\ 2&&&&& }\notag \end{align}
Let $G'$ be the subgraph of $G$ obtained by removing the vertices $\{1,2\}$. Then we have $|G_{\epsilon}| = |G'_{\epsilon|_{\{3,\ldots,n\}}}|$. Since $G'$ is a Dynkin graph of type $\mathbb{A}_{n-2}$, we obtain \[
2\mathsf{n}(1)= 2 \cdot \sum_{\substack{\epsilon\colon G_0' \to \{\pm1\} \\ \epsilon(3)=+1}} |G'_{\epsilon}| =||\mathbb{A}_{n-2}|| = c_{n-2} \] where the last equality follows from Proposition \ref{type A}.
By an argument similar to (i), we can calculate the other cases.
(ii) For each $\epsilon\in M_{2}$, the graph $G_{\epsilon}$ is given by \begin{align} \xymatrix@R=1mm{ 1&&&&&\\ &3\ar@{-}[ld]\ar@{~}[r]&4\ar@{~}[r]&\cdots\ar@{~}[r]&n-1\ar@{~}[r]&n.\\ 2&&&&& }\notag \end{align}
Then we can check $2\mathsf{n}(2)=||\mathbb{A}_{n-1}||-||\mathbb{A}_{n-2}||=c_{n-1}-c_{n-2}$.
(iii) By the symmetry of $G$, we have $\mathsf{n}(3)=\mathsf{n}(2)$.
(iv) Let $\epsilon\in M_{4}$. Then $G_{\epsilon}$ is described as \begin{align} \xymatrix@R=1mm{ 1&&&&&\\ &3\ar@{-}[lu]\ar@{-}[ld]&4\ar@{~}[r]&\cdots\ar@{~}[r]&n-1\ar@{~}[r]&n.\\ 2&&&&& }\notag \end{align}
Thus we find that $2\mathsf{n}(4)= |\mathbb{A}_3| \cdot ||\mathbb{A}_{n-3}|| = 5c_{n-3}$.
(v) For $\epsilon\in M_{5}(t)$, the graph $G_{\epsilon}$ is given by \begin{align} \xymatrix@R=1mm{ 1&&&&&&&\\ &3\ar@{-}[lu]\ar@{-}[ld]\ar@{-}[r]&4& \ar@{-}[l]\cdots\ar@{-}[r]&t&t+1\ar@{~}[r]&\cdots\ar@{~}[r]&n.\\ 2&&&&&&& }\notag \end{align} Then we obtain \begin{align} 2\mathsf{n}(5)
&=\sum_{t=4}^{n}|\mathbb{D}_t| \cdot ||\mathbb{A}_{n-t}|| =\frac{3n-4}{2n}c_{n-1}\cdot 2c_{0}+\sum_{t=4}^{n-1}\frac{3t-4}{2t}c_{t-1}c_{n-t}\notag\\ &=\frac{3n-4}{2n}c_{n-1}+\sum_{t=4}^{n}\frac{3t-4}{2t}c_{t-1}c_{n-t}.\notag \end{align}
To finish the proof, we need the following lemma. \begin{lemma} \label{binom numbers} For any positive integer $n$, the following equalities hold: \begin{enumerate}[(1)] \item $\displaystyle{\sum_{t=1}^{n}c_{t-1}c_{n-t}=4^{n-1}}$. \item $\displaystyle{\sum_{t=1}^{n}\frac{1}{t}c_{t-1}c_{n-t}=\frac{1}{2}c_{n}}$. \end{enumerate} \end{lemma} \begin{proof} The equality (1) is well-known. The equality (2) is obtained by \begin{align} \sum_{t=1}^{n}\frac{1}{t}c_{t-1}c_{n-t} =\frac{n+1}{2}\sum_{t=1}^{n}C_{t-1}C_{n-t} =\frac{n+1}{2}C_{n}=\frac{1}{2}c_{n},\notag \end{align} where $C_{n} := \frac{1}{n+1}c_{n}$ is the $n$-th Catalan number. \end{proof}
By Lemma \ref{binom numbers}, we obtain the equality \begin{align} \sum_{t=4}^{n}\frac{3t-4}{2t}c_{t-1}c_{n-t} &=\frac{3}{2}\sum_{t=4}^{n} c_{t-1}c_{n-t}-2\sum_{t=4}^{n}\frac{1}{t}c_{t-1}c_{n-t}\notag\\ &=\frac{3}{2}(4^{n-1} -c_{n-1} -2 c_{n-2} -6 c_{n-3}) -2(\frac{1}{2}c_{n}-c_{n-1}-c_{n-2}-2c_{n-3})\notag\\ &= 6\cdot 4^{n-2}-c_{n}+\frac{1}{2}c_{n-1}-c_{n-2}-5c_{n-3}.\notag \end{align} By (i)--(v), we have \begin{align} \# \mathop{2\text{-}\mathsf{tilt}}\nolimits A &=c_{n-2}+2(c_{n-1}-c_{n-2})+5c_{n-3}+6\cdot 4^{n-2}-c_{n}+\frac{2n-2}{n}c_{n-1}-c_{n-2}-5c_{n-3}\notag\\ &=6\cdot 4^{n-2}-c_{n}+\frac{4n-2}{n}c_{n-1}-2c_{n-2}\notag\\ &=6\cdot 4^{n-2}-2c_{n-2},\notag \end{align} where the last equality follows from $c_{n}=\frac{2(2n-1)}{n}c_{n-1}$. \end{proof}
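As a quick consistency check, the identities of Lemma \ref{binom numbers} and the final simplification above can be verified by exact rational arithmetic; the following small Python fragment (added here purely as an illustration) does this for small values of $n$.
\begin{verbatim}
from fractions import Fraction
from math import comb

c = lambda l: comb(2 * l, l)

for n in range(1, 20):   # Lemma `binom numbers'(1) and (2)
    assert sum(c(t - 1) * c(n - t) for t in range(1, n + 1)) == 4 ** (n - 1)
    assert sum(Fraction(c(t - 1) * c(n - t), t) for t in range(1, n + 1)) == Fraction(c(n), 2)

for n in range(4, 20):   # the closing computation in the proof of Proposition `type D'
    s = sum(Fraction((3 * t - 4) * c(t - 1) * c(n - t), 2 * t) for t in range(4, n + 1))
    lhs = (c(n - 2) + 2 * (c(n - 1) - c(n - 2)) + 5 * c(n - 3)
           + Fraction((3 * n - 4) * c(n - 1), 2 * n) + s)
    assert lhs == 6 * 4 ** (n - 2) - 2 * c(n - 2)
\end{verbatim}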
Thirdly, we give an enumeration for type ($\mathrm{I}$). The number is obtained by using the result on type $\mathbb{D}$.
\begin{proposition} \label{type I} If $G=\mathrm{I}_{n}$, then we have \begin{align} \#\mathop{2\text{-}\mathsf{tilt}}\nolimits A = 6\cdot 4^{n-2} + 2c_{n} -4c_{n-1}-4c_{n-2}.\notag \end{align} \end{proposition} \begin{proof} Let $G$ be a graph as follows. \begin{align} \xymatrix@R=1mm{ 1\ar@{-}[dd]&&&&&&&\\ &3\ar@{-}[lu]\ar@{-}[ld]\ar@{-}[r]&4\ar@{-}[r]&\cdots\ar@{-}[r]&n.\\ 2&&&&&&& }\notag \end{align} By using a method similar to that of the proof of Proposition \ref{type D}, we calculate the right-hand side of \begin{align}
\# \mathop{2\text{-}\mathsf{tilt}}\nolimits A = 2 \sum_{\substack{\epsilon\colon G_{0} \rightarrow \{ \pm 1\} \\ \epsilon(3)=+1}} |G_{\epsilon}|.\notag \end{align}
Let $M$ and $M_{i}$ $(1\leq i \leq 5)$ be sets of maps given in the proof of Proposition \ref{type D} and $\mathsf{m}(i):=\sum_{\epsilon\in M_{i}} |G_{\epsilon}|$. For each map $\epsilon\in M_{1}\sqcup M_{4}\sqcup M_{5}$, we have $G_{\epsilon}=(\mathbb{D}_n)_{\epsilon}$. Hence for each $i\in \{1,4,5\}$, we have \begin{align}
\mathsf{m}(i)=\sum_{\epsilon\in M_{i}} |G_{\epsilon}|=\sum_{\epsilon\in M_{i}} |(\mathbb{D}_n)_{\epsilon}| = \mathsf{n}(i).\notag \end{align} Since $\mathsf{m}(2)=\mathsf{m}(3)$ holds by the symmetry of $G$, we have only to calculate $\mathsf{m}(2)$. For each map $\epsilon\in M_{2}$, the graph $G_{\epsilon}$ is given by \begin{align} \xymatrix@R=1mm{ 1\ar@{-}[dd]&&&&&\\ &3\ar@{-}[ld]\ar@{~}[r]&4\ar@{~}[r]&\cdots\ar@{~}[r]&n-1\ar@{~}[r]&n.\\ 2&&&&& }\notag \end{align} Then the calculation of $\mathsf{m}(2)$ is reduced to that of Dynkin graphs of type $\mathbb{A}$. In fact, let $G'$ be the Dynkin graph $\mathbb{A}_{n}$. Then we have \begin{align} \mathsf{m}(2)
&= \sum_{\substack{\epsilon\colon G'_{0}\to \{ \pm 1\}\\ \epsilon(3)=+1}} |G'_{\epsilon}|- \sum_{\substack{\epsilon\colon G'_{0}\to \{ \pm 1\}\\ \epsilon(2)=\epsilon(3)=+1}} |G'_{\epsilon}| -
\sum_{\substack{\epsilon\colon G'_{0}\to \{ \pm 1\} \\ -\epsilon(1)=-\epsilon(2)=\epsilon(3)=+1}} |G'_{\epsilon}|\notag \\
&=\frac{1}{2}||\mathbb{A}_{n}|| - \frac{1}{4} ||\mathbb{A}_{2}|| \cdot ||\mathbb{A}_{n-2}|| - \mathsf{n}(2) \notag\\ &=\frac{1}{2}c_{n}-\frac{3}{2}c_{n-2}-\mathsf{n}(2).\notag \end{align} Therefore we obtain \begin{align} \#\mathop{2\text{-}\mathsf{tilt}}\nolimits A &= 2(\mathsf{m}(1)+\mathsf{m}(2)+\mathsf{m}(3)+\mathsf{m}(4)+\mathsf{m}(5))\notag\\ &= 2(\mathsf{n}(1)+2\mathsf{n}(2)+\mathsf{n}(4)+\mathsf{n}(5))-4\mathsf{n}(2)+4\mathsf{m}(2)\notag\\
&= || \mathbb{D}_{n}|| - 4\mathsf{n}(2) + 4\mathsf{m}(2) \notag\\ &= 6\cdot 4^{n-2} - 2c_{n-2} -4\mathsf{n}(2) + 2c_{n}-6c_{n-2}-4\mathsf{n}(2) \notag\\ &= 6\cdot 4^{n-2} + 2c_{n} - 4c_{n-1} - 4c_{n-2}. \notag \end{align} This finishes the proof. \end{proof}
For the remaining finite series $\mathbb{E}$, (II), (III), (IV) and (V), we just compute the number by using formula (\ref{number G}) in Corollary \ref{number by graph}(2).
\begin{proposition} \label{type sporadic} For each case \textnormal{$\mathbb{E}$, (II), (III), (IV)} and \textnormal{(V)}, the number $\#\mathop{2\text{-}\mathsf{tilt}}\nolimits A_G$ is given by the table of Theorem \ref{theorem2}. \end{proposition} \begin{proof} The number for $\mathbb{E}_6$ is shown in Example \ref{example-E6}(2) and the others are similar. The details are left to the reader. \end{proof}
\end{document}
\begin{document}
\title{Noncommutative Local Systems} \setlength{\parindent}{0pt} \begin{center} \author{ {\textbf{Petr R. Ivankov*}\\ e-mail: * [email protected] \\ } } \end{center}
\noindent
\paragraph{} The Gelfand--Na\u{i}mark theorem supplies a one-to-one correspondence between commutative $C^*$-algebras and locally compact Hausdorff spaces, so any noncommutative $C^*$-algebra can be regarded as a generalization of a topological space. Generalizations of several topological invariants may be defined by algebraic methods. For example, the Serre--Swan theorem \cite{karoubi:k} states that complex topological $K$-theory coincides with the $K$-theory of $C^*$-algebras. This article is concerned with a generalization of local systems. The classical construction of a local system relies on the existence of a path groupoid; however, noncommutative geometry does not contain this object. On the other hand, there is a construction of local systems which uses covering projections, and the classical (commutative) notion of a covering projection has a noncommutative generalization. Noncommutative covering projections therefore supply a generalization of local systems.
\tableofcontents
\section{Motivation. Preliminaries}
\paragraph{} Examples of local systems arise geometrically from vector bundles with flat connections, and topologically by means of linear representations of the fundamental group. A generalization of local systems requires a generalization of topological spaces, which is given by the Gelfand--Na\u{i}mark theorem \cite{arveson:c_alg_invt}; it states a correspondence between locally compact Hausdorff topological spaces and commutative $C^*$-algebras.
\begin{thm}\label{gelfand-naimark}\cite{arveson:c_alg_invt} Let $A$ be a commutative $C^*$-algebra and let $\mathcal{X}$ be the spectrum of $A$. There is a natural $*$-isomorphism $\gamma:A \to C_0(\mathcal{X})$. \end{thm}
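As a standard illustration of this correspondence: the commutative $C^*$-algebra $\mathbb{C}^{n}$ has as its spectrum the discrete space with $n$ points, and the algebra $c_{0}$ of complex sequences vanishing at infinity has as its spectrum the countable discrete space $\mathbb{N}$; in both cases the isomorphism $\gamma$ is the Gelfand transform, sending an element to its evaluation function on characters.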
\paragraph{} So any (noncommutative) $C^*$-algebra may be regarded as a generalized (noncommutative) locally compact Hausdorff topological space. We would like to generalize the notion of a local system. The classical notion of a local system uses the fundamental groupoid. \begin{thm} \cite{spanier:at} For each topological space $\mathcal{X}$ there is a category $\mathscr{P}(\mathcal{X})$ whose objects are points of $\mathcal{X}$, whose morphisms from $x_0$ to $x_1$ are the path classes with $x_0$ as origin and $x_1$ as end, and whose composite is the product of path classes. \end{thm} \begin{defn} \cite{spanier:at} The category $\mathscr{P}(\mathcal{X})$ is called the {\it category of path classes} of $\mathcal{X}$ or the {\it fundamental groupoid}. \end{defn} \begin{defn} \cite{spanier:at} A {\it local system} on a space $\mathcal{X}$ is a covariant functor from the fundamental groupoid of $\mathcal{X}$ to some category. For any category $\mathscr{C}$ there is a category of local systems on $\mathcal{X}$ with values in $\mathscr{C}$. Two local systems are said to be {\it equivalent} if they are equivalent objects in this category. \end{defn} \paragraph{} On the other hand, it is known that any connected groupoid is equivalent to a category with a single object, i.e. to a group regarded as a category. Any groupoid can be decomposed into connected components, therefore any local system corresponds to representations of groups. It means that in the case of a path-connected space $\mathcal{X}$ local systems can be defined by representations of the fundamental group $\pi_1(\mathcal{X})$. Moreover, there is an interrelation between the fundamental group and covering projections. This circumstance supplies the following Definition \ref{borel_const_comm} and Lemma \ref{borel_local_system_app}, which do not explicitly use the fundamental groupoid. \begin{defn}\label{borel_const_comm}\cite{davis_kirk_at} Let $p : \mathcal{P} \to \mathcal{B}$ be a principal $G$-bundle. Suppose $G$ acts on the left on a space $\mathcal{F}$, i.e. an action $G \times \mathcal{F} \to \mathcal{F}$ is given. Define the {\it Borel construction} \begin{equation*} \mathcal{P} \times_G \mathcal{F} \end{equation*}
to be the quotient space $\mathcal{P} \times \mathcal{F} / \approx$ where \begin{equation*} \left(x, f\right) \approx \left(xg, g^{-1}f\right). \end{equation*}
\end{defn} We next give one application of the Borel construction. Recall that a local coefficient system is a fiber bundle over $\mathcal{B}$ with fiber $A$ and structure group $G$, where $A$ is a (discrete) abelian group and $G$ acts via a homomorphism $G \to \mathrm{Aut}(A)$. \begin{lem}\label{borel_local_system_app}\cite{davis_kirk_at} Every local coefficient system over a path-connected (and semilocally simply connected) space $\mathcal{B}$ is of the form
\begin{tikzpicture}\label{borel_local_comm}
\matrix (m) [matrix of math nodes,row sep=3em,column sep=4em,minimum width=2em]
{
A & \widetilde{\mathcal{B}} \times_{\pi_1(\mathcal{B})}A \\
& \mathcal{B} \\};
\path[-stealth]
(m-1-1) edge node [left] {} (m-1-2)
(m-1-2) edge node [right] {$q$} (m-2-2);
\end{tikzpicture}
i.e., is associated to the principal $\pi_1(\mathcal{B})$-bundle given by the universal cover $\widetilde{\mathcal{B}}$ of $\mathcal{B}$ where the action is given by a homomorphism $\pi_1(\mathcal{B}) \to \mathrm{Aut}(A)$. \end{lem}
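For instance, take $\mathcal{B}=S^{1}$, so that $\widetilde{\mathcal{B}}=\mathbb{R}$ and $\pi_1(\mathcal{B})\cong\mathbb{Z}$, and let $A=\mathbb{Z}$ with the homomorphism $\mathbb{Z}\to\mathrm{Aut}(\mathbb{Z})$ sending $1\mapsto -\mathrm{id}$; then $\mathbb{R}\times_{\mathbb{Z}}\mathbb{Z}$ is the nontrivial (sign-twisted) local coefficient system on the circle, while the trivial homomorphism yields the constant system $S^{1}\times\mathbb{Z}$.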
In Lemma \ref{borel_local_system_app}, $\mathcal{B}$ is a topological space, $\widetilde{\mathcal{B}}$ denotes the universal covering space of $\mathcal{B}$, and $\pi = \pi_1(\mathcal{B})$ is the fundamental group of $\mathcal{B}$. The group $\pi$ equals the group of covering transformations $G(\widetilde{\mathcal{B}}\ |\ \mathcal{B})$ of the universal covering space. So the above construction does not need the fundamental groupoid; it uses a covering projection and a group of covering transformations. Noncommutative generalizations of these notions are developed in \cite{ivankov:infinite_cov_pr}, so local systems can be generalized. We may summarize several properties of the Gelfand--Na\u{i}mark correspondence with the following dictionary. \newline \break
\begin{tabular}{|c|c|} \hline TOPOLOGY & ALGEBRA\\ \hline Locally compact space & $C^*$-algebra\\ Covering projection & Noncommutative covering projection \\ Group of covering transformations & Noncommutative group of covering transformations \\ Local system & ? \\ \hline \end{tabular} \newline \newline \break
This article assumes elementary knowledge of the following subjects: \begin{enumerate} \item Set theory \cite{halmos:set}, \item Category theory \cite{spanier:at}, \item Algebraic topology \cite{spanier:at}, \item $C^*$-algebras and operator theory \cite{pedersen:ca_aut}, \item Differential geometry \cite{koba_nomi:fgd}, \item Spectral triples and their connections \cite{connes:c_alg_dg,connes:ncg94,varilly:noncom,varilly_bondia}. \end{enumerate}
The terms ``set'', ``family'' and ``collection'' are synonyms. The following table contains the notation used in this paper. \newline
\begin{tabular}{|c|c|} \hline Symbol & Meaning\\ \hline \\
$A^G$ & Algebra of $G$-invariants, i.e. $A^G = \{a\in A \ | \ ga=a, \forall g\in G\}$\\ $\mathrm{Aut}(A)$ & Group of *-automorphisms of $C^*$-algebra $A$\\ $B(H)$ & Algebra of bounded operators on Hilbert space $H$\\
$\mathbb{C}$ (resp. $\mathbb{R}$) & Field of complex (resp. real) numbers \\
$C(\mathcal{X})$ & $C^*$ - algebra of continuous complex valued \\
& functions on topological space $\mathcal{X}$\\ $C_0(\mathcal{X})$ & $C^*$ - algebra of continuous complex valued \\
& functions on topological space $\mathcal{X}$ vanishing at infinity\\
$G(\widetilde{\mathcal{X}} | \mathcal{X})$ & Group of covering transformations of covering projection \\
& $\widetilde{\mathcal{X}} \to \mathcal{X}$ \cite{spanier:at}\\ $H$ &Hilbert space \\
$M(A)$ & A multiplier algebra of $C^*$-algebra $A$\\
$\mathscr{P}(\mathcal{X})$ & Fundamental groupoid of a topological space $\mathcal{X}$\\
$U(H) \subset \mathcal{B}(H) $ & Group of unitary operators on Hilbert space $H$\\ $U(A) \subset A $ & Group of unitary operators of algebra $A$\\ $U(n) \subset GL(n, \mathbb{C}) $ & Unitary subgroup of general linear group\\
$\mathbb{Z}$ & Ring of integers \\
$\mathbb{Z}_m$ & Ring of integers modulo $m$ \\ $\Omega$ & Natural contravariant functor from category of commutative \\ & $C^*$ - algebras, to category of Hausdorff spaces\\
\hline \end{tabular}
\section{Noncommutative covering projections} \paragraph{} In this section we recall the construction of a noncommutative covering projection described in \cite{ivankov:infinite_cov_pr}. Instead of the obsolete ``rigged space'' notion we use the ``Hilbert module'' one.
\subsection{Hermitian modules and functors}
\begin{defn} \cite{rieffel_morita} Let $B$ be a $C^*$-algebra. By a (left) {\it Hermitian $B$-module} we will mean the Hilbert space $H$ of a non-degenerate *-representation $B \rightarrow B(H)$. Denote by $\mathbf{Herm}(B)$ the category of Hermitian $B$-modules.
\end{defn} \paragraph{}
Let $A$, $B$ be $C^*$-algebras. In this section we will study some general methods for the construction of functors from $\mathbf{Herm}(B)$ to $\mathbf{Herm}(A)$.
\begin{defn} \cite{rieffel_morita} Let $B$ be a $C^*$-algebra. By a (right) {\it pre-$B$-Hilbert module} we mean a vector space, $X$, over the complex numbers on which $B$ acts by means of linear transformations in such a way that $X$ is a right $B$-module (in the algebraic sense), and on which there is defined a $B$-valued sesquilinear form $\langle,\rangle_X$, conjugate linear in the first variable, such that \begin{enumerate} \item $\langle x, x \rangle_X \ge 0$ \item $\left(\langle x, y \rangle_X\right)^* = \langle y, x \rangle_X$ \item $\langle x, yb \rangle_X = \langle x, y \rangle_Xb$. \end{enumerate} \end{defn} \begin{empt}
It is easily seen that if we factor a pre-$B$-Hilbert module by the subspace of the elements $x$ for which $\langle x, x \rangle_X = 0$, the quotient becomes in a natural way a pre-$B$-Hilbert module having the additional property that the inner product is definite, i.e. $\langle x, x \rangle_X > 0$ for any non-zero $x\in X$. On a pre-$B$-Hilbert module with definite inner product we can define a norm $\|\cdot\|$ by setting \begin{equation}\label{rigged_norm_eqn}
\|x\|=\|\langle x, x \rangle_X\|^{1/2}. \end{equation} From now on we will always view a pre-$B$-Hilbert module with definite inner product as being equipped with this norm. The completion of $X$ with this norm is easily seen to become again a pre-$B$-Hilbert module. \end{empt} \begin{defn} \cite{rieffel_morita} Let $B$ be a $C^*$-algebra. By a {\it Hilbert $B$-module} we will mean a pre-$B$-Hilbert module, $X$, satisfying the following conditions: \begin{enumerate} \item If $\langle x, x \rangle_X = 0$ then $x = 0$, for all $x \in X$ \item $X$ is complete for the norm defined in (\ref{rigged_norm_eqn}). \end{enumerate} \end{defn} \begin{exm}\label{fin_rigged_exm} Let $A$ be a $C^*$-algebra on which a finite group $G$ acts, and let $A^G$ be the algebra of $G$-invariants. Then $A$ is a Hilbert $A^G$-module with the following $A^G$-valued form
\begin{equation}\label{inv_scalar_product}
\langle x, y \rangle_A = \frac{1}{|G|} \sum_{g \in G} g(x^*y).
\end{equation}
Since the sum given by \eqref{inv_scalar_product} is $G$-invariant, we have $ \langle x, y \rangle_A \in A^G$. \end{exm} \paragraph{} Viewing a Hilbert $B$-module as a generalization of an ordinary Hilbert space, we can define what we mean by bounded operators on a Hilbert $B$-module.
\begin{defn}\cite{rieffel_morita} Let $X$ be a Hilbert $B$-module. By a {\it bounded operator} on $X$ we mean a linear operator, $T$, from $X$ to itself which satisfies the following conditions: \begin{enumerate} \item for some constant $k_T$ we have \begin{equation}\nonumber \langle Tx, Tx \rangle_X \le k_T \langle x, x \rangle_X, \ \forall x\in X, \end{equation} or, equivalently, $T$ is continuous with respect to the norm of $X$. \item there is a continuous linear operator, $T^*$, on $X$ such that \begin{equation}\nonumber \langle Tx, y \rangle_X = \langle x, T^*y \rangle_X, \ \forall x, y\in X. \end{equation} \end{enumerate} It is easily seen that any bounded operator on a Hilbert $B$-module will automatically commute with the action of $B$ on $X$ (because it has an adjoint). We will denote by $\mathcal{L}(X)$ (or $\mathcal{L}_B(X)$ if there is a chance of confusion) the set of all bounded operators on $X$. Then it is easily verified that with the operator norm $\mathcal{L}(X)$ is a $C^*$-algebra. \end{defn} \begin{defn}\cite{pedersen:ca_aut} If $X$ is a Hilbert $B$-module then denote by $\theta_{\xi, \zeta} \in \mathcal{L}_B(X)$ the operator such that \begin{equation}\nonumber \theta_{\xi, \zeta} (\eta) = \zeta \langle\xi, \eta \rangle_X , \ (\xi, \eta, \zeta \in X). \end{equation} The norm closure of the ideal generated by such endomorphisms is said to be the {\it algebra of compact operators}, which we denote by $\mathcal{K}(X)$. The algebra $\mathcal{K}(X)$ is an ideal of $\mathcal{L}_B(X)$. Also we shall use the following notation: $\xi\rangle \langle \zeta \stackrel{\text{def}}{=} \theta_{\xi, \zeta}$. \end{defn}
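For orientation, two standard examples: if $X=B$ is regarded as a Hilbert $B$-module over itself with $\langle x, y\rangle_B = x^{*}y$, then $\mathcal{K}(X)\cong B$; and if $B=\mathbb{C}$, so that $X$ is an ordinary Hilbert space, then $\mathcal{K}(X)$ and $\mathcal{L}(X)$ are the usual algebras of compact and bounded operators on $X$.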
\begin{defn}\cite{rieffel_morita}\label{corr_defn} Let $A$ and $B$ be $C^*$-algebras. By a {\it Hilbert $B$-$A$-correspondence} we mean a Hilbert $B$-module $X$ which is a left $A$-module by means of a *-homomorphism of $A$ into $\mathcal{L}_B(X)$. \end{defn}
\begin{empt}\label{herm_functor_defn} Let $X$ be a Hilbert $B$-$A$-correspondence. If $V\in \mathbf{Herm}(B)$ then we can form the algebraic tensor product $X \otimes_{B_{\mathrm{alg}}} V$, and equip it with an ordinary pre-inner-product which is defined on elementary tensors by \begin{equation}\nonumber \langle x \otimes v, x' \otimes v' \rangle = \langle \langle x',x \rangle_B v, v' \rangle_V. \end{equation} Completing the quotient of $X \otimes_{B_{\mathrm{alg}}} V$ by the subspace of vectors of length zero, we obtain an ordinary Hilbert space, on which $A$ acts (by $a(x \otimes v)=ax\otimes v$) to give a *-representation of $A$. We will denote the corresponding Hermitian module by $X \otimes_{B} V$. The above construction defines a functor $X \otimes_{B} -: \mathbf{Herm}(B)\to \mathbf{Herm}(A)$ if for $V,W \in \mathbf{Herm}(B)$ and $f\in \mathrm{Hom}_B(V,W)$ we define $f\otimes X \in \mathrm{Hom}_A(V\otimes X, W\otimes X)$ on elementary tensors by $(f \otimes X)(x \otimes v)=x \otimes f(v)$. We can define an action of $B$ on $V\otimes X$, given on elementary tensors by \begin{equation}\nonumber b(x \otimes v)= (x \otimes bv) = x b \otimes v. \end{equation} \end{empt}
\subsection{Galois correspondences} \begin{defn}\label{herm_a_g_defn}
Let $A$ be a $C^*$-algebra and let $G$ be a finite or countable group which acts on $A$. We say that $H \in \mathbf{Herm}(A)$ is an {\it $A$-$G$ Hermitian module} if \begin{enumerate} \item The group $G$ acts on $H$ by unitary $A$-linear isomorphisms, \item There is a subspace $H^G \subset H$ such that \begin{equation}\label{g_act} H = \bigoplus_{g\in G}gH^G. \end{equation} \end{enumerate} Let $H$, $K$ be $A$-$G$ Hermitian modules; a morphism $\phi: H\to K$ is said to be an $A$-$G$-morphism if $\phi(gx)=g\phi(x)$ for any $g \in G$. Denote by $\mathbf{Herm}(A)^G$ the category of $A$-$G$ Hermitian modules and $A$-$G$-morphisms. \end{defn} \begin{rem}
Condition 2 in the above definition is introduced because for any topological covering projection $\widetilde{\mathcal{X}} \to \mathcal{X}$ the commutative $C^*$-algebras $C_0\left(\widetilde{\mathcal{X}}\right)$, $C_0\left(\mathcal{X}\right)$ satisfy it with respect to the group of covering transformations $G(\widetilde{\mathcal{X}}| \mathcal{X})$. \end{rem}
\begin{defn} Let $H$ be an $A$-$G$ Hermitian module and let $B\subset M(A)$ be a sub-$C^*$-algebra such that $(ga)b = g(ab)$, $b(ga) = g(ba)$, for any $a\in A$, $b \in B$, $g \in G$. There is a functor $(-)^G: \mathbf{Herm}(A)^G \to\mathbf{Herm}(B)$ defined in the following way: \begin{equation} H \mapsto H^G. \end{equation} This functor is said to be the {\it invariant functor}. \end{defn}
\begin{defn} Let $_AX_B$ be a Hilbert $B$-$A$-correspondence and let $G$ be a finite or countable group such that \begin{itemize} \item $G$ acts on $A$ and $X$, \item The action of $G$ is equivariant, i.e. $g (a\xi) = (ga) (g\xi)$, and $B$-invariant, i.e. $g(\xi b)=(g\xi)b$ for any $\xi \in X$, $b \in B$, $a\in A$, $g \in G$, \item The inner product is $G$-equivariant, i.e. $\langle g\xi, g \zeta\rangle_X = \langle\xi, \zeta\rangle_X$ for any $\xi, \zeta \in X$, $g \in G$. \end{itemize} Then we say that $_AX_B$ is a {\it $G$-equivariant Hilbert $B$-$A$-correspondence}. \end{defn} \paragraph{} Let $_AX_B$ be a $G$-equivariant Hilbert $B$-$A$-correspondence. Then for any $H\in \mathbf{Herm}(B)$ there is an action of $G$ on $X\otimes_B H$ such that \begin{equation*} g \left(x \otimes \xi\right) = \left(gx \otimes \xi\right). \end{equation*}
\begin{defn}\label{inf_galois_defn} Let $_AX_B$ be a $G$-equivariant Hilbert $B$-$A$-correspondence. We say that $_AX_B$ is a {\it $G$-Galois Hilbert $B$-$A$-correspondence} if it satisfies the following conditions: \begin{enumerate} \item $X \otimes_B H$ is an $A$-$G$ Hermitian module, for any $H \in \mathbf{Herm}(B)$, \item The pair $\left(X \otimes_B -, \left(-\right)^G\right)$ of functors \begin{equation}\nonumber X \otimes_B -: \mathbf{Herm}(B) \to \mathbf{Herm}(A)^G, \end{equation} \begin{equation}\nonumber (-)^G: \mathbf{Herm}(A)^G \to \mathbf{Herm}(B) \end{equation}
is a pair of inverse equivalences.
\end{enumerate}
\end{defn} The following theorem is an analogue of theorems described in \cite{miyashita_infin_outer_gal}, \cite{takeuchi:inf_out_cov}. \begin{thm}\cite{ivankov:infinite_cov_pr}\label{main_lem} Let $A$ and $\widetilde{A}$ be $C^*$-algebras and let $_{\widetilde{A}}X_A$ be a $G$-equivariant Hilbert $A$-$\widetilde{A}$-correspondence. Let $I$ be a finite or countable set of indices, and let $\{e_i\}_{i\in I} \subset M(A)$, $\{\xi_i\}_{i\in I} \subset \ _{\widetilde{A}}X_A$ be such that \begin{enumerate} \item \begin{equation}\label{1_mb} 1_{M(A)} = \sum_{i\in I}^{}e^*_ie_i, \end{equation} \item \begin{equation}\label{1_mkx} 1_{M(\mathcal{K}(X))} = \sum_{g\in G}^{} \sum_{i \in I}^{}g\xi_i\rangle \langle g\xi_i , \end{equation} \item \begin{equation}\label{ee_xx} \langle \xi_i, \xi_i \rangle_X = e_i^*e_i, \end{equation} \item \begin{equation}\label{g_ort} \langle g\xi_i, \xi_i\rangle_X=0, \ \text{for any nontrivial} \ g \in G. \end{equation} \end{enumerate} Then $_{\widetilde{A}}X_A$ is a $G$-Galois Hilbert $A$-$\widetilde{A}$-correspondence.
\end{thm}
\begin{defn} Consider the situation of Theorem \ref{main_lem}. We distinguish two specific cases: \begin{enumerate} \item $e_i \in A$ for any $i \in I$, \item $\exists i \in I \ e_i \notin A$. \end{enumerate}
The norm completion of the algebra generated by the operators \begin{equation*} g\xi_i^* \rangle \langle g \xi_i \ a; \ g \in G, \ i \in I, \ \begin{cases}
a \in M(A), & \text{in case 1},\\
a \in A, & \text{in case 2}.
\end{cases} \end{equation*} is said to be the {\it algebra subordinated to $\{\xi_i\}_{i \in I}$}. If $\widetilde{A}$ is subordinated to $\{\xi_i\}_{i \in I}$ then \begin{enumerate} \item $G$ acts on $\widetilde{A}$ in the following way: \begin{equation*} g \left( \ g'\xi_i \rangle \langle g' \xi_i \ a \right) = gg'\xi_i \rangle \langle gg' \xi_i \ a; \ a \in M(A). \end{equation*} \item $X$ is a left $A$-module; moreover $_{\widetilde{A}}X_A$ is a $G$-Galois Hilbert $A$-$\widetilde{A}$-correspondence. \item There is a natural *-homomorphism $\varphi: A \to M\left(\widetilde{A}\right)$ which is $G$-equivariant, i.e. \begin{equation}
\varphi(a)(g\widetilde{a})= g \varphi(a)(\widetilde{a}); \ a \in A, \ \widetilde{a}\in \widetilde{A}. \end{equation} \end{enumerate}
A quadruple $\left(A, \widetilde{A}, _{\widetilde{A}}X_A, G\right)$ is said to be a {\it Galois quadruple}. The group $G$ is said to be a {\it group of Galois transformations} and shall be denoted by $G\left(\widetilde{A}\ | \ A\right)=G$. \end{defn} \begin{rem} Henceforth only subordinated algebras are regarded as noncommutative generalizations of covering projections. \end{rem} \begin{defn} If $G$ is finite then the bimodule $_{\widetilde{A}}X_A$ can be replaced with $_{\widetilde{A}}\widetilde{A}_A$, where the product $\langle \ , \ \rangle_{\widetilde{A}}$ is given by \eqref{inv_scalar_product}. In this case a Galois quadruple $\left(A, \widetilde{A}, _{\widetilde{A}}X_A, G\right)=\left(A, \widetilde{A}, _{\widetilde{A}}\widetilde{A}_A, G\right)$ can be replaced with a {\it Galois triple} $\left(A, \widetilde{A}, G\right)$. \end{defn}
\subsection{Infinite noncommutative covering projections} \paragraph{} In the case of commutative $C^*$-algebras, Definition \ref{inf_galois_defn} supplies an algebraic formulation of infinite covering projections of topological spaces. However, I think that the above definition is not a quite satisfactory analogue of covering projections. Noncommutative algebras contain inner automorphisms, and inner automorphisms are gauge transformations \cite{gross_gauge} rather than geometrical ones, so I think that inner automorphisms should be excluded. The importance of outer automorphisms was noted by Miyashita \cite{miyashita_fin_outer_gal,miyashita_infin_outer_gal}. It is reasonable to take into account outer automorphisms only; I have set an even stronger condition. \begin{defn}\label{gen_in_def}\cite{rieffel_finite_g} Let $A$ be a $C^*$-algebra. A *-automorphism $\alpha$ is said to be {\it generalized inner} if it is given by conjugating with unitaries from the multiplier algebra $M(A)$. \end{defn} \begin{defn}\label{part_in_def}\cite{rieffel_finite_g} Let $A$ be a $C^*$-algebra. A *-automorphism $\alpha$ is said to be {\it partly inner} if its restriction to some non-zero $\alpha$-invariant two-sided ideal is generalized inner. We call an automorphism {\it purely outer} if it is not partly inner. \end{defn} Instead of Definitions \ref{gen_in_def} and \ref{part_in_def}, the following definitions are used. \begin{defn} Let $\alpha \in \mathrm{Aut}(A)$ be an automorphism. A representation $\rho : A\rightarrow B(H)$ is said to be {\it $\alpha$-invariant} if the representation $\rho_{\alpha}$ given by \begin{equation*} \rho_{\alpha}(a)= \rho(\alpha(a)) \end{equation*} is unitarily equivalent to $\rho$. \end{defn} \begin{defn} An automorphism $\alpha \in \mathrm{Aut}(A)$ is said to be {\it strictly outer} if for any $\alpha$-invariant representation $\rho: A \rightarrow B(H)$, $\rho_{\alpha}$ is not a generalized inner automorphism. \end{defn} \begin{defn}\label{nc_fin_cov_pr_defn} A Galois quadruple $\left(A, \widetilde{A}, _{\widetilde{A}}X_A, G\right)$ (resp. a triple $\left(A, \widetilde{A}, G\right)$) with countable (resp. finite) $G$ is said to be a {\it noncommutative infinite (resp. finite) covering projection} if the action of $G$ on $\widetilde{A}$ is strictly outer. \end{defn}
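A simple observation in the commutative case: conjugation by a unitary of a commutative multiplier algebra is trivial, so an automorphism of a commutative $C^*$-algebra is generalized inner only if it is the identity; in particular no nontrivial covering transformation of a commutative covering projection is generalized inner.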
\section{Noncommutative generalization of local systems}
\begin{defn}\label{loc_sys_defn} Let $A$ be a $C^*$-algebra, and let $\mathscr{C}$ be a category. A {\it noncommutative local system} consists of the following ingredients: \begin{enumerate} \item A noncommutative covering projection $\left(A, \widetilde{A}, _{\widetilde{A}}X_A, G\right)$ (or $\left(A, \widetilde{A}, G\right)$), \item A covariant functor $F: G \to \mathscr{C}$, \end{enumerate} where $G$ is regarded as a category with a single object $e$, which is the unity of $G$. Indeed a local system is a group homomorphism $G \to \mathrm{Aut}(F(e))$. \end{defn}
\begin{exm}
If $\mathcal{X}$ is a path-connected space then there is an equivalence of categories $\mathscr{P}(\mathcal{X}) \approx \pi_1(\mathcal{X})$. If $F: \mathscr{P}(\mathcal{X})\to \mathscr{C}$ is a local system then there is an object $A$ in $\mathscr{C}$ such that $F$ is uniquely defined by a group homomorphism $f: \pi_1(\mathcal{X}) \to \mathrm{Aut}(A)$. Let $G=\pi_1(\mathcal{X})/\ker f$ be the factor group and let $\widetilde{\mathcal{X}} \to \mathcal{X}$ be a covering projection such that $G(\widetilde{\mathcal{X}} | \mathcal{X})\approx G$. Then there is a natural group homomorphism $G \to \mathrm{Aut}(A)$ which can be regarded as a covariant functor $G \to \mathscr{C}$. If $\mathcal{X}$ is locally compact and Hausdorff then from \cite{ivankov:infinite_cov_pr} it follows that there is a noncommutative covering projection $\left(C_0(\mathcal{X}), C_0(\widetilde{\mathcal{X}}), \ _{C_0(\widetilde{\mathcal{X}})}X_{C_0(\mathcal{X})} \ , G\right)$. So a noncommutative local system is a generalization of a commutative one.
\end{exm}
\section{Noncommutative bundles with flat connections} \subsection{Cotensor products}
\begin{empt} {\it Cotensor products associated with Hopf algebras}. Let $H$ be a Hopf algebra over a commutative ring $k$, with bijective antipode $S$. We use the Sweedler notation \cite{karaali:ha} for the comultiplication on $H$: $\Delta(h)= h_{(1)}\otimes h_{(2)}$. $\mathcal{M}^H$ (respectively ${}^H\mathcal{M}$) is the category of right (respectively left) $H$-comodules. For a right $H$-coaction $\rho$ (respectively a left $H$-coaction $\lambda$) on a $k$-module $M$, we denote $$\rho(m)=m_{[0]}\otimes m_{[1]}\quad \ \mathrm{and} \ \quad\lambda(m)=m_{[-1]}\otimes m_{[0]}.$$ Let $M$ be a right $H$-comodule, and $N$ a left $H$-comodule. The cotensor product $M\square_H N$ is the $k$-module \begin{equation}\label{cotensor_hopf}
M\square_H N= \left\{\sum_i m_i\otimes n_i\in M\otimes N~|~\sum_i \rho(m_i)\otimes n_i= \sum_i m_i\otimes \lambda(n_i)\right\}. \end{equation} If $H$ is cocommutative, then $M\square_H N$ is also a right (or left) $H$-comodule.
\end{empt} \begin{empt} {\it Cotensor products associated with groups}. Let $G$ be a finite group. The set $H = \mathrm{Map}(G, \mathbb{C})$ has a natural structure of a commutative Hopf algebra (see \cite{hajac:toknotes}). Addition (resp. multiplication) on $H$ is pointwise addition (resp. pointwise multiplication). Let $\delta_g\in H$ $( g \in G)$ be such that \begin{equation}\label{group_hopf_action_rel} \delta_g(g')=\left\{ \begin{array}{c l}
1 & g'=g\\
0 & g' \ne g \end{array}\right. \end{equation}
The comultiplication $\Delta: H \rightarrow H \otimes H$ is induced by the group multiplication, i.e. under the natural identification $H \otimes H \cong \mathrm{Map}(G\times G, \mathbb{C})$ one has \begin{equation}\nonumber \Delta f(g_1, g_2) = f(g_1 g_2); \ \forall f \in \mathrm{Map}(G, \mathbb{C}), \ \forall g_1, g_2\in G, \end{equation} equivalently \begin{equation}\nonumber \Delta \delta_g = \sum_{g_1 g_2 = g} \delta_{g_1} \otimes \delta_{g_2}; \ \forall g\in G. \end{equation}
Let $M$ (resp. $N$) be a linear space with right (resp. left) action of $G$ then
\begin{equation}\label{cotensor_g}
M\square_{\mathrm{Map}(G,\mathbb{C})}N = \left\{\sum_i m_i\otimes n_i\in M\otimes N~|~\sum_i m_i g\otimes n_i= \sum_i m_i\otimes gn_i;~\forall g\in G\right\}. \end{equation}
Henceforth we denote by $M\square_GN$ the cotensor product $M\square_{\mathrm{Map}(G,\mathbb{C})}N$.
\end{empt}
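As an elementary illustration of \eqref{cotensor_g}: if the action of $G$ on $N$ is trivial, then the defining condition becomes $\sum_i m_i g\otimes n_i= \sum_i m_i\otimes n_i$ for all $g\in G$, and therefore $M\square_G N = M^{G}\otimes N$, where $M^{G}$ is the subspace of elements of $M$ fixed by the (right) action of $G$.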
\subsection{Bundles with flat connections in differential geometry}\label{fvb_dg} \paragraph{} I follow \cite{koba_nomi:fgd} in the explanation of differential geometry and flat bundles.
\begin{prop}\label{comm_cov_mani}(Proposition 5.9 \cite{koba_nomi:fgd}) \begin{enumerate} \item Given a connected manifold $M$ there is a unique (unique up to isomorphism) universal covering manifold, which will be denoted by $\widetilde{M}$. \item The universal covering manifold $\widetilde{M}$ is a principal fibre bundle over $M$ with group $\pi_1(M)$ and projection $p: \widetilde{M} \to M$, where $\pi_1(M)$ is the first homotopy group of $M$. \item The isomorphism classes of covering spaces over $M$ are in 1:1 correspondence with the conjugate classes of subgroups of $\pi_1(M)$. The correspondence is given as follows. To each subgroup $H$ of $\pi_1(M)$, we associate $E=\widetilde{M}/H$. Then the covering manifold $E$ corresponding to $H$ is a fibre bundle over $M$ with fibre $\pi_1(M)/H$ associated with the principal bundle $\widetilde{M}(M, \pi_1(M))$. If $H$ is a normal subgroup of $\pi_1(M)$, $E=\widetilde{M}/H$ is a principal fibre bundle with group $\pi_1(M)/H$ and is called a regular covering manifold of $M$. \end{enumerate} \end{prop} \paragraph{}Let $\Gamma$ be a flat connection in $P(M, G)$, where $M$ is connected and paracompact. Let $u_0\in P$; $M^*=P(u_0)$, the holonomy bundle through $u_0$; $M^*$ is a principal fibre bundle over $M$ whose structure group is the holonomy group $\Phi(u_0)$. In \cite{koba_nomi:fgd} it is explained that $\Phi(u_0)$ is discrete, and since $M^*$ is connected, $M^*$ is a covering space of $M$. Set $x_0=\pi(u_0)\in M$. Every closed curve of $M$ starting from $x_0$ defines, by means of the parallel displacement along it, an element of $\Phi(u_0)$. In \cite{koba_nomi:fgd} it is explained that closed curves representing the same element of the first homotopy group $\pi_1(M, x_0)$ give rise to the same element of $\Phi(u_0)$. Thus we obtain a homomorphism of $\pi_1(M, x_0)$ onto $\Phi(u_0)$. Let $N$ be a normal subgroup of $\Phi(u_0)$ and set $M'=M^*/N$. Then $M'$ is a principal fibre bundle over $M$ with structure group $\Phi(u_0)/N$. In particular $M'$ is a covering space of $M$. Let $P'(M',G)$ be the principal fibre bundle induced by the covering projection $M'\to M$. There is a natural homomorphism $f: P' \to P$ \cite{koba_nomi:fgd}. \begin{prop}\label{flat_dg_prop} (Proposition 9.3 \cite{koba_nomi:fgd}) There exists a unique connection $\Gamma'$ in $P'(M',G)$ which is mapped into $\Gamma$ by the homomorphism $f: P'\to P$. The connection $\Gamma'$ is flat. If $u'_0$ is a point of $P'$ such that $f(u'_0)=u_0$, then the holonomy group $\Phi(u'_0)$ of $\Gamma'$ with reference point $u'_0$ is isomorphically mapped onto $N$ by $f$. \end{prop} \begin{empt}\label{dg_fl_con_ingr}{\it Construction of flat connections}. Let $M$ be a manifold. Proposition \ref{flat_dg_prop} supplies a construction of a flat bundle $P(M,G)$ from the following ingredients: \begin{enumerate} \item A covering projection $M'\to M$. \item A principal bundle $P'(M', G)$ with a flat connection $\Gamma$. \end{enumerate} \end{empt} \begin{empt}\label{can_fl_conn}{\it Associated vector bundle}. A principal bundle $P(M,G)$ and a flat connection $\Gamma$ are given by these ingredients. If $G$ acts on $\mathbb{C}^n$ then there is a vector fibre bundle $\mathcal{F}$ associated with $P(M,G)$, with standard fibre $\mathbb{C}^n$. The space $F$ of continuous sections of $\mathcal{F}$ is a finitely generated projective $C(M)$-module. See \cite{koba_nomi:fgd}. \end{empt} \begin{empt}\label{can_fl_bundle}{\it Canonical flat connection and flat bundles}. 
There is a specific case of a flat principal bundle such that $P'=M'\times G$ and $\Gamma$ is the canonical flat connection \cite{koba_nomi:fgd}. In this case the existence of $P(M,G)$ depends only on $\pi_1(M)$ and does not depend on the differential structure of $M$. \end{empt}
\begin{empt}\label{comm_fund_k}{\it Local systems and $K$-theory}. If $R(G)$ is the representation ring of the group $G$ and $R_0(G)$ is the subgroup of elements of virtual dimension zero, then there is a natural homomorphism $R_0(G) \to K^0(M)$ described in \cite{gilkey:odd_space,wolf:const_curv}. \end{empt}
\subsection{Topological noncommutative bundles with flat connections} \paragraph{} There are noncommutative generalizations of the constructions described in \S\ref{fvb_dg}. According to the Serre--Swan theorem \cite{karoubi:k}, any vector bundle over a space $\mathcal{X}$ corresponds to a projective $C_0(\mathcal{X})$-module. \begin{defn} Let $\left(A, \widetilde{A}, G\right)$ be a finite noncommutative covering projection. According to Definition \ref{loc_sys_defn} any group homomorphism $G \to U(n)$ is a local system. There is a natural linear action of $G$ on $\mathbb{C}^n$, and $\widetilde{A}\square_G\mathbb{C}^n$ is a left $A$-module which is said to be a {\it topological noncommutative bundle with flat connection}. \end{defn} \begin{lem} Let $\left(A, \widetilde{A}, G\right)$ be a finite noncommutative covering projection, and let $P = \widetilde{A}\square_G\mathbb{C}^n$ be a topological noncommutative bundle with flat connection. Then $P$ is a finitely generated projective left and right $A$-module. \end{lem} \begin{proof} By definition $\widetilde{A}$ is a left finitely generated projective $A$-module. The left $A$-module $\widetilde{A}\otimes_{\mathbb{C}}\mathbb{C}^n$ is also finitely generated and projective because $\widetilde{A}\otimes_{\mathbb{C}}\mathbb{C}^n \approx \widetilde{A}^n$. There is a projection $p: \widetilde{A}\otimes_{\mathbb{C}}\mathbb{C}^n \to \widetilde{A}\otimes_{\mathbb{C}}\mathbb{C}^n$ given by \begin{equation*}
p(a \otimes x) = \frac{1}{|G|}\sum_{g\in G} ag \otimes g^{-1}x. \end{equation*} The image of $p$ is $P$, therefore $P$ is a projective left $A$-module. Similarly we can prove that $P$ is a finitely generated projective right $A$-module. \end{proof}
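As a quick check of the above argument, note that $p$ is indeed idempotent: for $a\in\widetilde{A}$ and $x\in\mathbb{C}^{n}$, \begin{equation*} p^{2}(a \otimes x) = \frac{1}{|G|^{2}}\sum_{g,h\in G} agh \otimes h^{-1}g^{-1}x = \frac{1}{|G|}\sum_{k\in G} ak \otimes k^{-1}x = p(a \otimes x), \end{equation*} where the second equality follows by substituting $k=gh$.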
\begin{exm} Let $M$ be a differentiable manifold, let $M'\to M$ be a covering projection, and let $P'=M' \times U(n)$ be a principal bundle with the canonical flat connection $\Gamma'$. So all the ingredients of \ref{dg_fl_con_ingr} are present, and we obtain a principal bundle $P(M, U(n))$ with a flat connection $\Gamma$. There is a noncommutative covering projection $\left(C(M), C(M'), _{C(M')}X_{C(M)}, G\right)$. Let $\mathcal{F}$ (resp. $\mathcal{F'}$) be a vector bundle associated with $P(M,U(n))$ (resp. $P(M',U(n))$), and let $F$ (resp. $F'$) be a projective finitely generated $C(M)$ (resp. $C(M')$) module which corresponds to $\mathcal{F}$ (resp. $\mathcal{F'}$). Then we have $F = C(M')\square_GF'$, i.e. $F$ is a topological flat bundle. \end{exm} \begin{rem} Since the existence of $P(M, U(n))$ depends only on the topology of $M$, the notion ``topological noncommutative bundle with flat connection'' is used for its noncommutative generalization. \end{rem} \begin{exm}\label{nc_torus_fin_cov} Let $A_{\theta}$ be a noncommutative torus and let $\left(A_{\theta}, A_{\theta'}, \mathbb{Z}_m\times\mathbb{Z}_n\right)$ be a Galois triple described in \cite{ivankov:infinite_cov_pr}. Any group homomorphism $\mathbb{Z}_m\times\mathbb{Z}_n\to U(1)$ induces a topological noncommutative flat bundle. \end{exm} \subsection{General noncommutative bundles with flat connections} \begin{empt}\label{n_f_b_constr} A vector fibre bundle with a flat connection is not necessarily a topological bundle with flat connection, since \S\ref{fvb_dg} and construction \ref{dg_fl_con_ingr} do not require it. However, the general case of \S\ref{fvb_dg} and construction \ref{dg_fl_con_ingr} has a noncommutative analogue. The analogue requires a noncommutative generalization of differentiable manifolds with flat connections. The generalization of a spin manifold is a spectral triple \cite{connes:c_alg_dg,connes:ncg94,varilly:noncom,varilly_bondia}. First of all we generalize Proposition \ref{comm_cov_mani}.
Suppose that there is a spectral triple $(\mathcal{B}, H, D)$ such that \begin{itemize} \item $\mathcal{B} \subset B$ is a pre-$C^*$-algebra which is a dense subalgebra of $B$. \item There is a faithful representation $B \to B(H)$. \end{itemize} Let $\left(B, A, G\right)$ be a finite noncommutative covering projection. According to 8.2 of \cite{ivankov:infinite_cov_pr} there is a spectral triple $(\mathcal{A}, A \otimes_BH, \widetilde{D} )$ such that
\begin{itemize} \item $\mathcal{A} \subset A$ is a pre-$C^*$-algebra which is a dense subalgebra of $A$. \item $\widetilde{D}gh = g\widetilde{D}h$, for any $g \in G$, $h \in \Dom \widetilde{D}$. \end{itemize}
Let $\mathcal{F}$ be a finite projective right $\mathcal{B}$-module with a flat connection $\nabla: \mathcal{F} \to \mathcal{F} \otimes_{\mathcal{B}} \Omega^1(\mathcal{B})$. Let $\mathcal{E} = \mathcal{F}\otimes_{\mathcal{B}} \mathcal{A}$ be a projective finitely generated $\mathcal{A}$-module, where the action of $G$ on $\mathcal{E}$ is induced by the action of $G$ on $\mathcal{A}$. According to \cite{ivankov:infinite_cov_pr} the connection $\nabla$ can be naturally lifted to $\widetilde{\nabla}: \mathcal{E}\to \mathcal{E} \otimes_{\mathcal{B}} \Omega^1(\mathcal{B})$. Let $\mathcal{E}'$ be isomorphic to $\mathcal{E}$ as an $\mathcal{A}$-module and equipped with an action of $G$ such that \begin{equation}\label{twisted_act} g(xa)=(gx)(ga); \ \forall x \in \mathcal{E}, \ \forall a \in \mathcal{A}, \ \forall g \in G. \end{equation} Different actions of $G$ give different $\mathcal{B}$-modules $\mathcal{F} = \mathcal{E}\square_G \mathcal{A}$, $\mathcal{F}' = \mathcal{E}'\square_G \mathcal{A}$. Both $\mathcal{F}$ and $\mathcal{F}'$ can be included into the following sequences \begin{equation}\label{seqf}
\mathcal{F} \xrightarrow{i} \mathcal{E} \xrightarrow{p} \mathcal{F}, \end{equation} \begin{equation*}
\mathcal{F}' \xrightarrow{i'} \mathcal{E}' \xrightarrow{p'} \mathcal{F}'. \end{equation*} These sequences induce following \begin{equation}\label{seqfo}
\mathcal{F} \otimes_{\mathcal{B}} \Omega^1(\mathcal{B}) \xrightarrow{i \otimes \mathrm{Id}_{\Omega^1(\mathcal{B})}} \mathcal{E}\otimes_{\mathcal{B}}\Omega^1(\mathcal{B}) \xrightarrow{p \otimes \mathrm{Id}_{\Omega^1(\mathcal{B})}} \mathcal{F}\otimes_{\mathcal{B}}\Omega^1(\mathcal{B}), \end{equation} \begin{equation*}
\mathcal{F}' \otimes_{\mathcal{B}} \Omega^1(\mathcal{B}) \xrightarrow{i' \otimes \mathrm{Id}_{\Omega^1(\mathcal{B})}} \mathcal{E}'\otimes_{\mathcal{B}}\Omega^1(\mathcal{B}) \xrightarrow{p' \otimes \mathrm{Id}_{\Omega^1(\mathcal{B})}} \mathcal{F}'\otimes_{\mathcal{B}}\Omega^1(\mathcal{B}). \end{equation*} The connection $\nabla$ is given by \begin{equation*} \nabla p(x) = \left(p \otimes \mathrm{Id}_{\Omega^1(\mathcal{B})}\right)\left(\widetilde{\nabla}(x)\right); \ x \in \mathcal{E}. \end{equation*} From \eqref{seqf} and \eqref{seqfo} it follows that if $y\in \mathcal{F}$ then $\nabla y$ does not depend on $x \in \mathcal{E}$ such that $y=p(x)$. Similarly there is a flat connection $\nabla': \mathcal{F}' \to \mathcal{F}' \otimes_{\mathcal{B}} \Omega^1(\mathcal{B})$ given by \begin{equation*} \nabla' p'(x) = \left(p' \otimes \mathrm{Id}_{\Omega^1(\mathcal{B})}\right)\left(\widetilde{\nabla'}(x)\right); \ x \in \mathcal{E}'. \end{equation*} The following table explains the correspondence between Proposition \ref{comm_cov_mani} and the above construction. \newline \newline \break
\begin{tabular}{|c|c|} \hline DIFFERENTIAL GEOMETRY & SPECTRAL TRIPLES\\ \hline Manifold $M$ & Spectral triple $(\mathcal{B}, H, D)$\\ The covering manifold $E$ & Spectral triple $(\mathcal{A}, \mathcal{A} \otimes_{\mathcal{B}}H, \widetilde{D} )$ \\ A regular covering projection $E\to M$ & A noncommutative covering projection $(B, A, G)$ \\ Group of covering transformations $\pi_1(M)/H$ & Group of noncommutative covering transformations $G$ \\ A connection on vector fibre bundle $F\to M$ & An operator $\nabla: \mathcal{F} \to \mathcal{F} \otimes_{\mathcal{B}} \Omega^1(\mathcal{B})$ \\ \hline \end{tabular} \newline \newline \break \end{empt} \begin{exm} Let $(\mathcal{A}_{\theta}, H, D)$ be a spectral triple associated to a noncommutative torus $A_{\theta}$ generated by unitary elements $u,v\in A_{\theta}$. Let $\mathcal{F} = \mathcal{A}^4_{\theta}$ be a free module and let $e_1,..., e_4 \in \mathcal{F}$ be its generators. Let $\nabla: \mathcal{F} \to \mathcal{F} \otimes \Omega^1(\mathcal{A}_{\theta})$ be a connection given by \begin{equation*} \nabla e_1 = c_u e_2 \otimes du, \ \nabla e_2 = -c_u e_1 \otimes du, \ \nabla e_3 = c_v e_4 \otimes dv, \ \nabla e_4 = -c_v e_3 \otimes dv, \end{equation*} where $c_u, c_v \in \mathbb{R}$. According to \cite{ivankov:nc_wilson_lines} the connection $\nabla$ is flat. Let $\left(A_{\theta}, A_{\theta'}, \mathbb{Z}_m\times\mathbb{Z}_n\right)$ be a Galois triple from Example \ref{nc_torus_fin_cov}. This data induces a spectral triple $\left(\mathcal{A}_{\theta'}, \mathcal{A}_{\theta'}\otimes_{\mathcal{A}_{\theta}}H, D\right)$. If $\mathcal{E} = \mathcal{F} \otimes_{\mathcal{A}_{\theta}}\mathcal{A}_{\theta'}$ then \begin{equation} \mathcal{E} \approx \mathcal{A}_{\theta'}\otimes \mathbb{C}^{4} \approx \mathcal{A}_{\theta}\otimes \mathbb{C}^{4nm} \end{equation} and there is a natural connection $\widetilde{\nabla}: \mathcal{E} \to \mathcal{E}\otimes_{\mathcal{A}_{\theta}}\Omega^1\left(\mathcal{A}_{\theta}\right)$. Let $\rho : \mathbb{Z}_m\times\mathbb{Z}_n \to U(4)$ be a nontrivial representation. There is an action of $\mathbb{Z}_m\times\mathbb{Z}_n$ on $\mathcal{E}' = \mathcal{A}_{\theta'}\otimes \mathbb{C}^{4}$ given by \begin{equation*} g (a \otimes x) = ga \otimes \rho(g)x; \ a \in \mathcal{A}_{\theta'}, \ x \in \mathbb{C}^4. \end{equation*} which satisfies \eqref{twisted_act}. Then $\mathcal{F}' = \mathcal{A}_{\theta'}\square_{\mathbb{Z}_m\times\mathbb{Z}_n}\mathcal{E}'$ is a finitely generated $A_{\theta}$-module with a connection $\nabla': \mathcal{F}' \to \mathcal{F}' \otimes \Omega^1(\mathcal{A}_{\theta})$ given by construction \ref{n_f_b_constr}. \end{exm}
\subsection{Noncommutative bundles with flat connections and $K$-theory} \paragraph{}The homomorphism $R_0(G) \to K^0(M)$ from \ref{comm_fund_k} can be generalized. Let $\left(A, \widetilde{A}, G\right)$ be a finite noncommutative covering projection, let $\rho: G \to U(n)$ be a representation, and let $\mathrm{triv}_n: G \to U(n)$ be the trivial representation. Suppose that the action of $G$ on $\mathbb{C}^n$ is given by $\rho$. Then a homomorphism $R_0(G) \to K(A)$ is given by \begin{equation*} [\rho] - \left[\mathrm{triv}_n\right] \mapsto \left[\widetilde{A}\square_G\mathbb{C}^n\right] - \left[A^n\right]. \end{equation*}
\section{Noncommutative generalization of Borel construction}
\begin{empt} There is a noncommutative generalization of the Borel construction \ref{borel_const_comm}. \end{empt} \begin{defn}\label{borel_const_ncomm} Let $A$, $B$ be $C^*$-algebras and let $G$ be a group which acts on both $A$ and $B$. Let $A\otimes_{\mathbb{C}} B$ be any tensor product such that $A\otimes_{\mathbb{C}} B$ is a $C^*$-algebra. The norm closure of the subalgebra generated by \begin{equation*}
C = \left\{\sum_i a_i\otimes b_i\in A\otimes_{\mathbb{C}} B~|~\sum_i a_i g\otimes b_i= \sum_i a_i\otimes gb_i;~\forall g\in G\right\}. \end{equation*}
is said to be a {\it cotensor product of $C^*$-algebras}. Denote by $A \square_G B$ the cotensor product. \end{defn} \begin{rem} We do not fix the type of the tensor product because different applications can use different tensor products (see \cite{bruckler:tensor}). \end{rem}
\begin{exm} Let $\mathcal{X}$, $\mathcal{Y}$ be locally compact Hausdorff spaces and let $G$ be a finite or countable group which acts on both $\mathcal{X}$ and $\mathcal{Y}$. Suppose that the action on $\mathcal{X}$ (resp. $\mathcal{Y}$) is right (resp. left). Then there is a natural right (resp. left) action of $G$ on $C_0(\mathcal{X})$ (resp. $C_0(\mathcal{Y})$). From \cite{bruckler:tensor} it follows that the minimal and the maximal norm on $C_0(\mathcal{X}) \otimes_{\mathbb{C}} C_0(\mathcal{Y})$ coincide. It is well known that $C_0(\mathcal{X}\times\mathcal{Y}) \approx C_0(\mathcal{X}) \otimes_{\mathbb{C}} C_0(\mathcal{Y})$. Let $\mathcal{Z} = \mathcal{X}\times\mathcal{Y} / \approx$ where $\approx$ is given by \begin{equation*} (xg, y) \approx (x, g^{-1}y). \end{equation*} It is clear that $C_0(\mathcal{Z}) \approx C_0(\mathcal{X}) \square_G C_0(\mathcal{Y})$. \end{exm}
\begin{defn} Let $\left(A, \widetilde{A}, _{\widetilde{A}}X_A, G\right)$ be a Galois quadruple such that there is a right action of $G$ on $\widetilde{A}$ and a left action of $G$ on a $C^*$-algebra $B$. The cotensor product $\widetilde{A}\square_G B$ is said to be a {\it noncommutative Borel construction}. \end{defn}
\begin{exm}
Let $p: \widetilde{\mathcal{B}}\to \mathcal{B}$ be a normal topological covering projection of locally compact topological spaces, and let $G = G(\widetilde{\mathcal{B}}| \mathcal{B})$ be the group of covering transformations. Then $p$ is a principal $G(\widetilde{\mathcal{B}}| \mathcal{B})$-bundle. Let $\mathcal{F}$ be a locally compact topological space with an action of $G$ on it. Then there is a natural isomorphism with the $C^*$-algebra of the topological Borel construction \begin{equation*} C_0(\widetilde{\mathcal{B}}\times_G\mathcal{F})\approx C_0(\widetilde{\mathcal{B}})\square_GC_0(\mathcal{F}). \end{equation*}
\end{document}
\begin{document}
\title{Discrete dynamical systems in group theory}
\abstract{ In this expository paper we describe the unifying approach for many known entropies in Mathematics developed in \cite{DGV1}.
First we give the notion of semigroup entropy $h_{\mathfrak{S}}:{\mathfrak{S}}\to\mathbb R_+$ in the category ${\mathfrak{S}}$ of normed semigroups and contractive homomorphisms, recalling also its properties from \cite{DGV1}. For a specific category $\mathfrak X$ and a functor $F:\mathfrak X\to {\mathfrak{S}}$ we have the entropy $h_F$, defined by the composition $h_F=h_{\mathfrak{S}}\circ F$, which automatically satisfies the same properties proved for $h_{\mathfrak{S}}$. This general scheme permits to obtain many of the known entropies as $h_F$, for appropriately chosen categories $\mathfrak X$ and functors $F:\mathfrak X\to {\mathfrak{S}}$.
In the last part we recall the definition and the fundamental properties of the algebraic entropy for group endomorphisms, noting how its deeper properties depend on the specific setting. Finally we discuss the notion of growth for flows of groups, comparing it with the classical notion of growth for finitely generated groups.}
\section{Introduction}
This paper covers the series of three talks given by the first named author at the conference ``Advances in Group Theory and Applications 2011'' held in June, 2011 in Porto Cesareo. It is a survey about entropy in Mathematics, the approach is the categorical one adopted in \cite{DGV1} (and announced in \cite{D}, see also \cite{LoBu}).
We start by recalling that a \emph{flow} in a category $\mathfrak X$ is a pair $(X,\phi)$, where $X$ is an object of $\mathfrak X$ and $\phi: X\to X$ is a morphism in $\mathfrak X$. A morphism between two flows $\phi: X\to X$ and $\psi: Y\to Y$ is a morphism $\alpha: X \to Y$ in $\mathfrak X$ such that the diagram $$\xymatrix{X\ar[r]^{\alpha} \ar[d]_{\phi}&Y\ar[d]^{\psi}\\X\ar[r]^{\alpha}& Y.}$$ commutes. This defines the category $\mathbf{Flow}_{\mathfrak X}$ of flows in $\mathfrak X$.
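For instance, for a compact space $K$ the one-sided shift $(x_0,x_1,x_2,\dots)\mapsto(x_1,x_2,\dots)$ on $K^{\mathbb N}$ is a flow in the category of compact spaces and continuous maps (a Bernoulli shift), while every object $X$ of any category carries the trivial flow $(X,\mathrm{id}_X)$.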
To classify flows in $\mathfrak X$ up to isomorphisms one uses invariants, and entropy is roughly a numerical invariant associated to flows. Indeed, letting $\mathbb R_{\geq 0} = \{r\in \mathbb R: r\geq 0\}$ and $\mathbb R_+= \mathbb R_{\geq 0}\cup \{\infty\}$, by the term \emph{entropy} we intend a function \begin{equation}\label{dag} h: \mathbf{Flow}_{\mathfrak X}\to \mathbb R_+, \end{equation} obeying the invariance law $h(\phi) = h(\psi)$ whenever $(X,\phi)$ and $(Y,\psi)$ are isomorphic flows.
The value $h(\phi)$ is supposed to measure the degree to which $X$ is ``scrambled" by $\phi$, so for example an entropy should assign $0$ to all identity maps. For simplicity and with some abuse of notations, we adopt the following
\noindent\textbf{Convention.} If $\mathfrak X$ is a category and $h$ an entropy of $\mathfrak X$, writing $h: {\mathfrak X}\to \mathbb R_+$ we always mean $h: \mathbf{Flow}_{\mathfrak X}\to \mathbb R_+$ as in \eqref{dag}.
The first notion of entropy in Mathematics was the measure entropy $h_{mes}$ introduced by Kolmogorov \cite{K} and Sinai \cite{Sinai} in 1958 in Ergodic Theory. The topological entropy $h_{top}$ for continuous self-maps of compact spaces was defined by Adler, Konheim and McAndrew \cite{AKM} in 1965. Another notion of topological entropy $h_B$ for uniformly continuous self-maps of metric spaces was given later by Bowen \cite{B} (it coincides with $h_{top}$ on compact metric spaces). Finally, entropy was taken also in Algebraic Dynamics by Adler, Konheim and McAndrew \cite{AKM} in 1965 and Weiss \cite{W} in 1974; they defined an entropy $\mathrm{ent}$ for endomorphisms of torsion abelian groups. Then Peters \cite{P} in 1979 introduced its extension $h_{alg}$ to automorphisms of abelian groups; finally $h_{alg}$ was defined in \cite{DG} and \cite{DG-islam} for any group endomorphism. Recently also a notion of algebraic entropy for module endomorphisms was introduced in \cite{SZ}, namely the algebraic $i$-entropy $\mathrm{ent}_i$, where $i$ is an invariant of a module category. Moreover, the adjoint algebraic entropy $\mathrm{ent}^\star$ for group endomorphisms was investigated in \cite{DGV} (and its topological extension in \cite{G}). Finally, one can find in \cite{AZD} and \cite{DG-islam} two ``mutually dual'' notions of entropy for self-maps of sets, namely the covariant set-theoretic entropy $\mathfrak h$ and the contravariant set-theoretic entropy $\mathfrak h^*$.
The above mentioned specific entropies determined the choice of the main cases considered in this paper. Namely, $\mathfrak X$ will be one of the following categories (other examples can be found in \S\S \ref{NewSec1} and \ref{NewSec2}): \begin{itemize} \item[(a)] $\mathbf{Set}$ of sets and maps and its non-full subcategory $\mathbf{Set}_{\mathrm{fin}}$ of sets and finite-to-one maps (set-theoretic entropies $\mathfrak h$ and $\mathfrak h^*$ respectively); \item[(b)] $\mathbf{CTop}$ of compact topological spaces and continuous maps (topological entropy $h_{top}$); \item[(c)] $\mathbf{Mes}$ of probability measure spaces and measure preserving maps (measure entropy $h_{mes}$); \item[(d)] $\mathbf{Grp}$ of groups and group homomorphisms and its subcategory $\mathbf{AbGrp}$ of abelian groups (algebraic entropy $\mathrm{ent}$, algebraic entropy $h_{alg}$ and adjoint algebraic entropy $\mathrm{ent}^\star$); \item[(e)] $\mathbf{Mod}_R$ of right modules over a ring $R$ and $R$-module homomorphisms (algebraic $i$-entropy $\mathrm{ent}_i$). \end{itemize} Each of these entropies has its specific definition, usually given by limits computed on some ``trajectories'' and by taking the supremum of these quantities (we will see some of them explicitly). The proofs of the basic properties take into account the particular features of the specific categories in each case too. It appears that all these definitions and basic properties share a lot of common features. The aim of our approach is to unify them in some way, starting from a general notion of entropy of an appropriate category. This will be the semigroup entropy $h_{\mathfrak{S}}$ defined on the category ${\mathfrak{S}}$ of normed semigroups.
In Section \ref{sem-sec} we first introduce the category ${\mathfrak{S}}$ of normed semigroups and related basic notions and examples mostly coming from \cite{DGV1}. Moreover, in \S \ref{preorder-sec} (which can be avoided at a first reading) we add a preorder to the semigroup and discuss the possible behavior of a semigroup norm with respect to this preorder. Here we include also the subcategory $\mathfrak L$ of ${\mathfrak{S}}$ of normed semilattices, as the functors given in Section \ref{known-sec} often have as a target actually a normed semilattice.
In \S \ref{hs-sec} we define explicitly the semigroup entropy $h_{\mathfrak{S}}: {\mathfrak{S}} \to \mathbb R_+$ on the category ${\mathfrak{S}}$ of normed semigroups. Moreover we list all its basic properties, clearly inspired by those of the known entropies, such as Monotonicity for factors, Invariance under conjugation, Invariance under inversion, Logarithmic Law, Monotonicity for subsemigroups, Continuity for direct limits, Weak Addition Theorem and Bernoulli normalization.
Once defined the semigroup entropy $h_{\mathfrak{S}}:{\mathfrak{S}}\to \mathbb R_+$, our aim is to obtain all known entropies $h:{\mathfrak X} \to \mathbb R_+$ as a composition $h_F:=h_{\mathfrak{S}} \circ F$ of a functor $F: \mathfrak X\to {\mathfrak{S}}$ and $h_{\mathfrak{S}}$: \begin{equation*} \xymatrix@R=6pt@C=37pt {\mathfrak{X}\ar[dd]_{F}\ar[rrd]^{h=h_F} & & \\ & & \mathbb R_+ \\ {\mathfrak{S}}\ar[rru]_{h_{\mathfrak{S}}} & & } \end{equation*} This is done explicitly in Section \ref{known-sec}, where all specific entropies listed above are obtained in this scheme. We dedicate to each of them a subsection, each time giving explicitly the functor from the considered category to the category of normed semigroups. More details and complete proofs can be found in \cite{DGV1}. These functors and the entropies are summarized by the following diagram: \begin{equation*} \xymatrix@-1pc{
&&&\mathbf{Mes}\ar@{-->}[ddddr]|-{\mathfrak{mes}}\ar[ddddddr]|-{h_{mes}}& &\mathbf{AbGrp}\ar[ddddddl]|-{\mathrm{ent}}\ar@{-->}[ddddl]|-{\mathfrak{sub}}&&\\
& &\mathbf{CTop}\ar@{-->}[dddrr]|-{\mathfrak{cov}}\ar[dddddrr]|-{h_{top}}& & & & \mathbf{Grp}\ar@{-->}[dddll]|-{\mathfrak{pet}}\ar[dddddll]|-{h_{alg}} & &\\
& \mathbf{Set}\ar@{-->}[ddrrr]|-{\mathfrak{atr}}\ar[ddddrrr]|-{\mathfrak h} && & & & & \mathbf{Grp}\ar@{-->}[ddlll]|-{\mathfrak{sub}^\star}\ar[ddddlll]|-{\mathrm{ent}^\star} \\
\mathbf{Set}_\mathrm{fin}\ar@{-->}[drrrr]|-{\mathfrak{str}}\ar[dddrrrr]|-{\mathfrak h^*} && && & & & &\mathbf{Mod}_R\ar@{-->}[dllll]|-{\mathfrak{sub}_i}\ar[dddllll]|-{\mathrm{ent}_i} \\
& && & \mathfrak S \ar[dd]|-{h_\mathfrak S} && & \\
\\ & && &{\mathbb R_+} & & & } \end{equation*} In this way we obtain a simultaneous and uniform definition of all entropies and uniform proofs (as well as a better understanding) of their general properties, namely the basic properties of the specific entropies can be derived directly from those proved for the semigroup entropy.
The last part of Section \ref{known-sec} is dedicated to what we call Bridge Theorem (a term coined by L. Salce), that is roughly speaking a connection between entropies $h_1:\mathfrak X_1 \to \mathbb R_+$ and $h_2:\mathfrak X_2 \to \mathbb R_+$ via functors $\varepsilon: \mathfrak X_1 \to \mathfrak X_2$. Here is a formal definition of this concept:
\begin{Definition}\label{BTdef} Let $\varepsilon: \mathfrak X_1 \to \mathfrak X_2$ be a functor and let $h_1:\mathfrak X_1 \to \mathbb R_+$ and $h_2:\mathfrak X_2 \to \mathbb R_+$ be entropies of the categories $\mathfrak X_1 $ and $ \mathfrak X_2$, respectively (as in the diagram below). \begin{equation*}\label{Buz} \xymatrix@R=6pt@C=37pt {\mathfrak{X}_1\ar[dd]_{\varepsilon}\ar[rrd]^{h_{1}} & & \\ & & \mathbb R_+ \\ \mathfrak{X}_2\ar[rru]_{h_{2}} & & } \end{equation*}We say that the pair $(h_1, h_2)$ satisfies the \emph{weak Bridge Theorem} with respect to the functor $\varepsilon$ if there exists a positive constant $C_\varepsilon$, such that for every endomorphism $\phi$ in $\mathfrak X_1$ \begin{equation}\label{sBT} h_2(\varepsilon(\phi)) \leq C_\varepsilon h_1(\phi). \end{equation} If equality holds in \eqref{sBT} we say that $(h_1,h_2)$ satisfies the \emph{Bridge Theorem} with respect to $\varepsilon$, and we shortly denote this by $(BT_\varepsilon)$. \end{Definition}
In \S \ref{BTsec} we discuss the Bridge Theorem passing through the category ${\mathfrak{S}}$ of normed semigroups and so using the new semigroup entropy. This approach permits for example to find a new and transparent proof of Weiss Bridge Theorem (see Theorem \ref{WBT}) as well as for other Bridge Theorems.
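As a concrete instance, the classical Weiss Bridge Theorem recalled above fits this scheme (up to the contravariance of Pontryagin duality) with $\mathfrak X_1$ the category of torsion abelian groups, $\mathfrak X_2$ the category of totally disconnected compact abelian groups, $\varepsilon$ the Pontryagin duality functor, $h_1=\mathrm{ent}$, $h_2=h_{top}$ and $C_\varepsilon=1$: that is, $\mathrm{ent}(\phi)=h_{top}(\widehat\phi)$ for every endomorphism $\phi$ of a torsion abelian group, where $\widehat\phi$ denotes the dual endomorphism.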
A first limitation of this very general setting is the loss of some of the deeper properties that a specific entropy may have. So in the last Section \ref{alg-sec}, for the algebraic entropy, we recall the definition and those fundamental properties which cannot be deduced from the general scheme.
We start Section \ref{alg-sec} by recalling the Algebraic Yuzvinski Formula (see Theorem \ref{AYF}), recently proved in \cite{GV}, which gives the values of the algebraic entropy of linear transformations of finite-dimensional rational vector spaces in terms of the Mahler measure. In particular, this theorem provides a connection of the algebraic entropy with the famous Lehmer Problem. Two important applications of the Algebraic Yuzvinski Formula are the Addition Theorem and the Uniqueness Theorem for the algebraic entropy in the context of abelian groups.
In \S \ref{Growth-sec} we describe the connection of the algebraic entropy with the classical topic of growth of finitely generated groups in Geometric Group Theory. The notion of growth was introduced independently by Schwarz \cite{Sch} and Milnor \cite{M1}, and after the publication of \cite{M1} it was intensively investigated; several fundamental results were obtained by Wolf \cite{Wolf}, Milnor \cite{M2}, Bass \cite{Bass}, Tits \cite{Tits} and Adyan \cite{Ad}. In \cite{M3} Milnor proposed his famous problem (see Problem \ref{Milnor-pb} below); the question about the existence of finitely generated groups with intermediate growth was answered positively by Grigorchuk in \cite{Gri1,Gri2,Gri3,Gri4}, while the characterization of finitely generated groups with polynomial growth was given by Gromov in \cite{Gro} (see Theorem \ref{GT}).
Here we introduce the notion of finitely generated flows $(G,\phi)$ in the category of groups and define the growth of $(G,\phi)$. When $\phi=\mathrm{id}_G$ is the identity endomorphism, $G$ is a finitely generated group and we recover exactly the classical notion of growth. In particular we recall a recent significant result from \cite{DG0} extending Milnor's dichotomy (between polynomial and exponential growth) to finitely generated flows in the abelian case (see Theorem \ref{DT}). We also leave several open problems and questions about the growth of finitely generated flows of groups.
The last part of the section, namely \S \ref{aent-sec}, is dedicated to the adjoint algebraic entropy. As for the algebraic entropy, we recall its original definition and its main properties, which cannot be derived from the general scheme. In particular, the adjoint algebraic entropy can take only the values $0$ and $\infty$ (no finite positive value is attained) and we see that the Addition Theorem holds only restricting to bounded abelian groups.
A natural side-effect of the wealth of nice properties of the entropy $h_F=h_{\mathfrak{S}}\circ F$, obtained from the semigroup entropy $h_{\mathfrak{S}}$ through functors $F:\mathfrak X\to {\mathfrak{S}}$, is that some entropies which do not enjoy all these properties are left out. For example, Bowen's entropy $h_B$ cannot be obtained as $h_F$ since $h_B(\phi^{-1})= h_B(\phi)$ fails even for the automorphism $\phi: \mathbb R \to \mathbb R$ defined by $\phi(x)= 2x$; see \S \ref{NewSec2} for an extended comment on this issue, where we also discuss the possibility of obtaining Bowen's topological entropy of measure preserving topological automorphisms of locally compact groups in the framework of our approach. For the same reason other entropies that cannot be covered by this approach are the intrinsic entropy for endomorphisms of abelian groups \cite{DGSV} and the topological entropy for automorphisms of locally compact totally disconnected groups \cite{DG-tdlc}. This occurs also for the function $\phi \mapsto \log s(\phi)$, where $s(\phi)$ is the scale function defined by Willis \cite{Willis,Willis2}. The question about the relation of the scale function to the algebraic or topological entropy was posed by T. Weigel at the conference; these non-trivial relations are discussed for the topological entropy in \cite{BDG}.
\section{The semigroup entropy}\label{sem-sec}
\subsection{The category ${\mathfrak{S}}$ of normed semigroups}
We start this section by introducing the category ${\mathfrak{S}}$ of normed semigroups, as well as other notions that are fundamental in this paper.
\begin{Definition}\label{Def1} Let $(S,\cdot)$ be a semigroup. \begin{itemize} \item[(i)] A \emph{norm} on $S$ is a map $v: S \to \mathbb R_{\geq 0}$ such that \begin{equation*} v(x \cdot y) \leq v(x) + v(y)\ \text{for every}\ x,y\in S. \end{equation*} A \emph{normed semigroup} is a semigroup provided with a norm.
If $S$ is a monoid, a \emph{monoid norm} on $S$ is a semigroup norm $v$ such that $v(1)=0$; in such a case $S$ is called \emph{normed monoid}.
\item[(ii)] A semigroup homomorphism $\phi:(S,v)\to (S',v')$ between normed semigroups is \emph{contractive} if $$v'(\phi(x))\leq v(x)\ \text{for every}\ x\in S.$$ \end{itemize} \end{Definition}
Let ${\mathfrak{S}}$ be the category of normed semigroups, which has as morphisms all contractive semigroup homomorphisms. In this paper, when we say that $S$ is a normed semigroup and $\phi:S\to S$ is an endomorphism, we will always mean that $\phi$ is a contractive semigroup endomorphism. Moreover, let $\mathfrak M$ be the non-full subcategory of ${\mathfrak{S}}$ with objects all normed monoids, where the morphisms are all (necessarily contractive) monoid homomorphisms.
We give now some other definitions.
\begin{Definition} A normed semigroup $(S,v)$ is: \begin{itemize} \item[(i)] \emph{bounded} if there exists $C\in \mathbb N_+$ such that $v(x) \leq C$ for all $x\in S$; \item[(ii)]\emph{arithmetic} if for every $x\in S$ there exists a constant $C_x\in \mathbb N_+$ such that $v(x^n) \leq C_x\cdot \log (n+1)$ for every $n\in\mathbb N$. \end{itemize} \end{Definition}
Obviously, bounded semigroups are arithmetic.
\begin{Example}\label{Fekete} Consider the monoid $S = (\mathbb N, +)$. \begin{itemize} \item[(a)] Norms $v$ on $S$ correspond to {subadditive sequences} $(a_n)_{n\in\mathbb N}$ in $ \mathbb R_+$ (i.e., $a_{n + m}\leq a_n + a_m$) via $v \mapsto (v(n))_{n\in\mathbb N}$. Then $\lim_{n\to \infty} \frac{a_n}{n}= \inf_{n\in\mathbb N} \frac{a_n}{n}$ exists by Fekete Lemma \cite{Fek}. \item[(b)] Define $v: S \to \mathbb R_+$ by $v(x) = \log (1+ x)$ for $x\in S$. Then
$v$ is an arithmetic semigroup norm.
\item[(c)] Define $v_1: S \to \mathbb R_+$ by $v_1(x) = \sqrt x$ for $x\in S$. Then $v_1$ is a semigroup norm, but $(S, + , v_1)$ is not arithmetic. \item[(d)] For $a\in \mathbb N$, $a>1$ let $v_a(n) = \sum_i b_i$, when $n= \sum_{i=0}^k b_ia^i$ and $0\leq b_i < a$ for all $i$. Then $v_a$ is an arithmetic norm on $S$ making the map $x\mapsto ax$ an endomorphism in ${\mathfrak{S}}$. \end{itemize} \end{Example}
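Concerning item (d), the claimed properties of $v_a$ can be verified directly; we sketch the elementary estimates, writing the $n$-th power of $x$ in the additive monoid $S$ as $nx$. Each carry occurring in the base-$a$ addition of two numbers decreases the total digit sum, so $v_a(m+m')\le v_a(m)+v_a(m')$ and $v_a$ is a semigroup norm. Moreover, for every $m\in\mathbb N_+$,
$$v_a(m)\le (a-1)\left(\lfloor \log_a m\rfloor +1\right),$$
hence for $x\in\mathbb N_+$ and $n\in\mathbb N_+$ one gets $v_a(nx)\le (a-1)(\log_a(nx)+1)\le C_x\cdot\log(n+1)$ for a suitable $C_x\in\mathbb N_+$ (the case $x=0$ being trivial), so $v_a$ is arithmetic. Finally, $v_a(ax)=v_a(x)$, as multiplication by $a$ only shifts the base-$a$ digits of $x$, so the map $x\mapsto ax$ is contractive and hence an endomorphism in ${\mathfrak{S}}$.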
\subsection{Preordered semigroups and normed semilattices}\label{preorder-sec}
A triple $(S,\cdot,\leq)$ is a \emph{preordered semigroup} if the semigroup $(S,\cdot)$ admits a preorder $\leq$ such that $$x\leq y\ \text{implies}\ x \cdot z \leq y \cdot z\ \text{and}\ z \cdot x \leq z \cdot y\ \text{for all}\ x,y,z \in S.$$ Write $x\sim y$ when $x\leq y$ and $y\leq x$ hold simultaneously. Moreover, the \emph{positive cone} of $S$ is $$P_+(S)=\{a\in S:x\leq x \cdot a \ \text{and}\ x\leq a\cdot x\ \text{for every}\ x\in S\}.$$
A norm $v$ on the preordered semigroup $(S,\cdot,\leq)$ is \emph{monotone} if $x\leq y$ implies $v(x) \leq v(y)$ for every $x,y \in S$. Clearly, $v(x) = v(y)$ whenever $x \sim y$ and the norm $v$ of $S$ is monotone.
Now we propose another notion of monotonicity for a semigroup norm which does not require the semigroup to be explicitly endowed with a preorder.
\begin{Definition} Let $(S,v)$ be a normed semigroup. The norm $v$ is \emph{s-monotone} if $$\max\{v(x), v(y)\}\leq v(x \cdot y)\ \text{for every}\ x,y \in S.$$ \end{Definition}
This inequality may become too stringent a condition when $S$ is close to being a group; indeed, if $S$ is a group, then it implies that $v(S) = \{v(1)\}$, i.e., $v$ is constant.
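For the reader's convenience we spell out the short argument behind this last assertion: if $S$ is a group with identity $1$ and $v$ is s-monotone, then for every $x\in S$
$$v(x)\le \max\{v(x),v(x^{-1})\}\le v(x\cdot x^{-1})=v(1) \quad\text{and}\quad v(1)\le \max\{v(1),v(x)\}\le v(1\cdot x)=v(x),$$
so $v(x)=v(1)$.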
If $(S,+,v)$ is a commutative normed monoid, it admits a preorder $\leq^a$ defined for every $x,y\in S$ by $x\leq^a y$ if and only if there exists $z\in S$ such that $x+z=y$. Then $(S,+,\leq^a)$ is a {preordered semigroup} and the norm $v$ is s-monotone if and only if $v$ is monotone with respect to $\leq^a$.
The following connection between monotonicity and s-monotonicity is clear.
\begin{Lemma} Let $S$ be a preordered semigroup. If $S=P_+(S)$, then every monotone norm of $S$ is also s-monotone. \end{Lemma}
A \emph{semilattice} is a commutative semigroup $(S,\vee)$ such that $x\vee x=x$ for every $x\in S$.
\begin{Example} \begin{itemize} \item[(a)] Each lattice $(L, \vee, \wedge)$ gives rise to two semilattices, namely $(L, \vee)$ and $(L, \wedge)$. \item[(b)] A filter $\mathcal F$ on a given set $X$ is a semilattice with respect to the intersection, with zero element the set $X$. \end{itemize} \end{Example}
Let ${\mathfrak{L}}$ be the full subcategory of ${\mathfrak{S}}$ with objects all normed semilattices.
Every normed semilattice $(L,\vee)$ is trivially arithmetic, moreover the canonical partial order defined by $$x\leq y\ \text{if and only if}\ x\vee y=y,$$ for every $x,y\in L$, makes $L$ also a partially ordered semigroup.
Neither preordered semigroups nor normed semilattices are formally needed for the definition of the semigroup entropy. Nevertheless, they provide significant and natural examples, as well as useful tools in the proofs, which justifies our attention to these notions.
\subsection{Entropy in ${\mathfrak{S}}$}\label{hs-sec}
For a normed semigroup $(S,v)$, an endomorphism $\phi:S\to S$, $x\in S$ and $n\in\mathbb N_+$ consider the \emph{$n$-th $\phi$-trajectory of $x$} $$T_n(\phi,x) = x \cdot\phi(x)\cdot\ldots \cdot\phi^{n-1}(x)$$ and let $$c_n(\phi,x) = v(T_n(\phi,x)).$$ Note that $c_n(\phi,x) \leq n\cdot v(x)$, since $v$ is subadditive and $\phi$ is contractive. Hence the growth of the function $n \mapsto c_n(\phi,x)$ is at most linear.
\begin{Definition} Let $S$ be a normed semigroup. An endomorphism $\phi:S\to S$ is said to have \emph{logarithmic growth}, if for every $x\in S$ there exists $C_x\in\mathbb N_+$ with $c_n(\phi,x) \leq C_x\cdot \log (n+1)$ for all $n\in\mathbb N_+$. \end{Definition}
Obviously, a normed semigroup $S$ is arithmetic if and only if $\mathrm{id}_{S}$ has logarithmic growth.
The following theorem from \cite{DGV1} is fundamental in this context, as it witnesses the existence of the semigroup entropy; so we reproduce its proof here for the reader's convenience.
\begin{Theorem}\label{limit} Let $S$ be a normed semigroup and $\phi:S\to S$ an endomorphism. Then for every $x \in S$ the limit \begin{equation}\label{hs-eq} h_{{\mathfrak{S}}}(\phi,x):= \lim_{n\to\infty}\frac{c_n(\phi,x)}{n} \end{equation}
exists and satisfies $h_{{\mathfrak{S}}}(\phi,x)\leq v(x)$. \end{Theorem} \begin{proof} The sequence $(c_n(\phi,x))_{n\in\mathbb N_+}$ is subadditive. Indeed, \begin{align*} c_{n+m}(\phi,x)&= v(x\cdot\phi(x)\cdot\ldots\cdot\phi^{n-1}(x)\cdot\phi^{n}(x)\cdot\ldots\cdot\phi^{n+m-1}(x))\\ &=v((x\cdot\phi(x)\cdot\ldots\cdot\phi^{n-1}(x))\cdot\phi^{n}(x\cdot\ldots\cdot\phi^{m-1}(x))) \\ &\leq c_n(\phi,x)+v(\phi^{n}(x\cdot\ldots\cdot\phi^{m-1}(x))) \\ &\leq c_n(\phi,x)+v(x\cdot\ldots\cdot\phi^{m-1}(x))=c_n(\phi,x)+c_m(\phi,x). \end{align*} By Fekete Lemma (see Example \ref{Fekete} (a)), the limit $\lim_{n\to\infty} \frac{c_n(\phi,x)}{n}$ exists and coincides with $\inf_{n\in\mathbb N_+} \frac{c_n(\phi,x)}{n}$. Finally, $h_{{\mathfrak{S}}}(\phi,x)\leq v(x)$ follows from $c_n(\phi,x) \leq n v(x)$ for every $n\in\mathbb N_+$. \end{proof}
\begin{Remark} \begin{itemize} \item[(a)] The proof of the existence of the limit defining $h_{{\mathfrak{S}}}(\phi,x)$ exploits the subadditivity of the semigroup norm and also the condition that $\phi$ be contractive. For an extended comment on what can be done in case the function $v: S \to \mathbb R_+$ fails to have that property see \S \ref{NewSec1}. \item[(b)] With $S = (\mathbb N,+)$, $\phi = \mathrm{id}_\mathbb N$ and $x=1$ in Theorem \ref{limit} we obtain exactly item (a) of Example \ref{Fekete}. \end{itemize} \end{Remark}
\begin{Definition}\label{SEofEndos} Let $S$ be a normed semigroup and $\phi:S\to S$ an endomorphism. The \emph{semigroup entropy} of $\phi$ is $$h_{{\mathfrak{S}}}(\phi)=\sup_{x\in S}h_{{\mathfrak{S}}}(\phi,x).$$ \end{Definition}
If an endomorphism $\phi:S\to S$ has logarithmic growth, then $h_{{\mathfrak{S}}}(\phi) = 0$. In particular, $h_{{\mathfrak{S}}}(\mathrm{id}_S)=0$ if $S$ is arithmetic.
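On the other hand, arithmetic semigroups may well admit endomorphisms of positive semigroup entropy. For a simple instance, worked out from Example \ref{Fekete}(d), take $S=(\mathbb N,+)$ with the arithmetic norm $v_a$ and let $\phi:S\to S$ be the endomorphism $x\mapsto ax$ (recall that $v_a(ax)=v_a(x)$, so $\phi$ is contractive). Writing the semigroup operation additively, the $n$-th $\phi$-trajectory of $1$ is
$$T_n(\phi,1)=1+a+\ldots+a^{n-1},$$
whose base-$a$ expansion consists of $n$ digits equal to $1$; hence $c_n(\phi,1)=n$, so $h_{\mathfrak{S}}(\phi,1)=1$ and in particular $h_{\mathfrak{S}}(\phi)\geq 1$, while $h_{\mathfrak{S}}(\mathrm{id}_S)=0$.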
Recall that an endomorphism $\phi:S\to S$ of a normed semigroup $S$ is \emph{locally quasi periodic} if for every $x\in S$ there exist $n,k\in\mathbb N$, $k>0$, such that $\phi^n(x)=\phi^{n+k}(x)$. If $S$ is a monoid and $\phi(1)=1$, then $\phi$ is \emph{locally nilpotent} if for every $x\in S$ there exists $n\in\mathbb N_+$ such that $\phi^n(x)=1$.
\begin{Lemma}\label{locally} Let $S$ be a normed semigroup and $\phi:S\to S$ an endomorphism. \begin{itemize} \item[(a)] If $S$ is arithmetic and $\phi$ is locally quasi periodic, then $h_{\mathfrak{S}}(\phi)=0$. \item[(b)] If $S$ is a monoid and $\phi(1)=1$ and $\phi$ is locally nilpotent, then $h_{\mathfrak{S}}(\phi)=0$. \end{itemize} \end{Lemma} \begin{proof} (a) Let $x\in S$, and let $l,k\in\mathbb N_+$ be such that $\phi^l(x)=\phi^{l+k}(x)$. For every $m\in\mathbb N_+$ one has $$T_{l+mk}(\phi,x)=T_l(\phi,x)\cdot T_m(\mathrm{id}_S,y) = T_l(\phi,x)\cdot y^m,$$ where $y=\phi^l(T_k(\phi,x))$. Since $S$ is arithmetic, there exists $C_x\in \mathbb N_+$ such that \begin{equation*}\begin{split} v(T_{l+mk}(\phi,x)) = v(T_l(\phi,x)\cdot y^m) \leq \\ v(T_l(\phi,x)) + v( y^m) \leq v(T_l (\phi,x)) + C_x\cdot\log (m+1), \end{split}\end{equation*}
so $\lim_{m\to \infty} \frac{v(T_{l+mk}(\phi,x))}{l+mk}=0$. Therefore we have found a subsequence of $\left(\frac{c_n(\phi,x)}{n}\right)_{n\in\mathbb N_+}$ converging to $0$; since the limit in \eqref{hs-eq} exists, it follows that $h_{\mathfrak{S}}(\phi,x)=0$. Hence $h_{\mathfrak{S}}(\phi)=0$.
(b) For $x\in S$, there exists $n\in\mathbb N_+$ such that $\phi^n(x)=1$. Therefore $T_{n+k}(\phi,x)=T_n(\phi,x)$ for every $k\in\mathbb N$, hence $h_{\mathfrak{S}}(\phi,x)=0$. \end{proof}
We discuss now a possible different notion of semigroup entropy. Let $(S,v)$ be a normed semigroup, $\phi:S\to S$ an endomorphism, $x\in S$ and $n\in\mathbb N_+$. One could define also the ``left'' $n$-th $\phi$-trajectory of $x$ as $$T_n^{\#}(\phi,x)=\phi^{n-1}(x)\cdot\ldots\cdot\phi(x)\cdot x,$$ changing the order of the factors with respect to the above definition. With these trajectories it is possible to define another entropy letting $$h_{\mathfrak{S}}^{\#}(\phi,x)=\lim_{n\to\infty}\frac{v(T_n^{\#}(\phi,x))}{n},$$ and $$h_{\mathfrak{S}}^{\#}(\phi)=\sup\{h_{\mathfrak{S}}^{\#}(\phi,x):x\in S\}.$$ In the same way as above, one can see that the limit defining $h_{\mathfrak{S}}^{\#}(\phi,x)$ exists.
Obviously $h_{\mathfrak{S}}^{\#}$ and $h_{\mathfrak{S}}$ coincide on the identity map and on commutative normed semigroups, but now we see that in general they do not always take the same values. Item (a) in the following example shows that it may happen that they do not coincide ``locally'', while they coincide ``globally''. Moreover, by appropriately modifying the norm in item (a), J. Spev\'ak found the example in item (b), for which $h_{\mathfrak{S}}^{\#}$ and $h_{\mathfrak{S}}$ do not coincide even ``globally''.
\begin{Example} Let $X=\{x_n\}_{n\in\mathbb Z}$ be a faithfully enumerated countable set and let $S$ be the free semigroup generated by $X$. An element $w\in S$ is a word $w=x_{i_1}x_{i_2}\ldots x_{i_m}$ with $m\in\mathbb N_+$ and $i_j\in\mathbb Z$ for $j= 1,2, \ldots, m$. In this case $m$ is called the {\em length} $\ell_X(w)$ of $w$, and a subword of $w$ is any $w'\in S$ of the form $w'=x_{i_k}x_{i_{k+1}}\ldots x_{i_l}$ with $1\le k\le l\le m$.
Consider the automorphism $\phi:S\to S$ determined by $\phi(x_n)=x_{n+1}$ for every $n\in\mathbb Z$.
\begin{itemize}\label{ex-jan} \item[(a)] Let $s(w)$ be the number of adjacent pairs $(i_k,i_{k+1})$ in $w$ such that $i_k<i_{k+1}$. The map $v:S\to\mathbb R_+$ defined by $v(w)=s(w)+1$ is a semigroup norm. Then $\phi:(S,v)\to (S,v)$ is an automorphism of normed semigroups.
It is straightforward to prove that, for $w=x_{i_1}x_{i_2}\ldots x_{i_m}\in S$, \begin{itemize} \item[(i)] $h_{\mathfrak{S}}^\#(\phi,w)=h_{\mathfrak{S}}(\phi,w)$ if and only if $i_1>i_m+1$; \item[(ii)] $h_{\mathfrak{S}}^\#(\phi,w)=h_{\mathfrak{S}}(\phi,w)-1$ if and only if $i_m=i_1$ or $i_m=i_1-1$. \end{itemize} Moreover, \begin{itemize} \item[(iii)]$h_{\mathfrak{S}}^\#(\phi)=h_{\mathfrak{S}}(\phi)=\infty$. \end{itemize} In particular, $h_{\mathfrak{S}}(\phi,x_0)=1$ while $h_{\mathfrak{S}}^\#(\phi,x_0)=0$.
\item[(b)] Define a semigroup norm $\nu: S\to \mathbb R_+$ as follows. For $w=x_{i_1}x_{i_2}\ldots x_{i_n}\in S$ consider its subword $w'=x_{i_k}x_{i_{k+1}}\ldots x_{i_l}$ with maximal length satisfying $i_{j+1}=i_j+1$ for every $j\in\mathbb Z$ with $k\le j\le l-1$ and let $\nu(w)=\ell_X(w')$. Then $\phi:(S,\nu)\to (S,\nu)$ is an automorphism of normed semigroups.
It is possible to prove that, for $w\in S$, \begin{enumerate} \item[(i)] if $\ell_X(w)=1$, then $\nu(T_n(\phi,w))=n$ and $\nu(T^\#_n(\phi,w))=1$ for every $n\in\mathbb N_+$; \item[(ii)] if $\ell_X(w)=k$ with $k>1$, then $\nu(T_n(\phi,w))< 2k$ and $\nu(T^\#_n(\phi,w))< 2k $ for every $n\in\mathbb N_+$. \end{enumerate} From (i) and (ii) and from the definitions we immediately obtain that \begin{itemize} \item[(iii)] $h_\mathfrak{S}(\phi)=1\neq 0=h^\#_\mathfrak{S}(\phi)$. \end{itemize} \end{itemize} \end{Example}
We list now the main basic properties of the semigroup entropy. For complete proofs and further details see \cite{DGV1}.
\begin{Lemma}[Monotonicity for factors] Let $S$, $T$ be normed semigroups and $\phi: S \to S$, $\psi:T\to T$ endomorphisms. If $\alpha:T\to S$ is a surjective homomorphism such that $\alpha \circ \psi = \phi \circ \alpha$, then $h_{{\mathfrak{S}}}(\phi) \leq h_{{\mathfrak{S}}}(\psi)$. \end{Lemma} \begin{proof} Fix $x\in S$ and find $y \in T$ with $x= \alpha(y)$. Since $\alpha\circ\psi=\phi\circ\alpha$ and $\alpha$ is contractive, $\alpha(T_n(\psi,y))=T_n(\phi,x)$ and hence $c_n(\phi, x) \leq c_n(\psi, y)$.
Dividing by $n$ and taking the limit gives $h_{{\mathfrak{S}}}(\phi,x) \leq h_{{\mathfrak{S}}}(\psi,y)$. So $h_{{\mathfrak{S}}}(\phi,x)\leq h_{{\mathfrak{S}}}(\psi)$. As $x$ runs over $S$, we conclude that $h_{{\mathfrak{S}}}(\phi) \leq h_{{\mathfrak{S}}}(\psi)$. \end{proof}
\begin{Corollary}[Invariance under conjugation] Let $S$ be a normed semigroup and $\phi: S \to S$ an endomorphism. If $\alpha:S\to T$ is an isomorphism of normed semigroups, then $h_{{\mathfrak{S}}}(\phi)=h_{{\mathfrak{S}}}(\alpha\circ\phi\circ\alpha^{-1})$. \end{Corollary}
\begin{Lemma}[Invariance under inversion]\label{inversion} Let $S$ be a normed semigroup and $\phi:S\to S$ an automorphism. Then $h_{{\mathfrak{S}}}(\phi^{-1})=h_{{\mathfrak{S}}}(\phi)$. \end{Lemma}
\begin{Theorem}[Logarithmic Law] Let $(S,v)$ be a normed semigroup and $\phi:S\to S$ an endomorphism. Then $$ h_{{\mathfrak{S}}}(\phi^{k})\geq k\cdot h_{{\mathfrak{S}}}(\phi) $$
for every $k\in \mathbb N_+$. Furthermore, equality holds if $v$ is s-monotone. Moreover, if $\phi:S\to S$ is an automorphism, then $$h_{{\mathfrak{S}}}(\phi^k) = |k|\cdot h_{{\mathfrak{S}}}(\phi)$$ for all $k \in \mathbb Z\setminus\{0\}$. \end{Theorem} \begin{proof} Fix $k \in \mathbb N_+$, $x\in S$ and let $y= x\cdot\phi(x)\cdot\ldots\cdot\phi^{k-1}(x)$. Then \begin{align*} h_{\mathfrak{S}}(\phi^k)\geq h_{\mathfrak{S}}(\phi^k, y)&=\lim_{n\to\infty} \frac{c_{n}(\phi^k,y)}{n}=\lim_{n\to \infty} \frac{v (y\cdot \phi^k(y)\cdot\ldots \cdot\phi^{(n-1)k}(y)) }{n}=\\
&= k \cdot \lim_{n\to \infty} \frac{c_{nk}(\phi,x)}{nk}=k\cdot h_{\mathfrak{S}}(\phi,x). \end{align*} This yields $h_{\mathfrak{S}}(\phi^k)\geq k\cdot h_{\mathfrak{S}}(\phi,x)$ for all $x\in S$, and consequently, $h_{\mathfrak{S}}(\phi^k)\geq k\cdot h_{\mathfrak{S}}(\phi)$.
Suppose $v$ to be s-monotone, then \begin{equation*}\begin{split} h_{\mathfrak{S}}(\phi,x)=\lim_{n\to \infty} \frac{v(x\cdot\phi (x)\cdot\ldots\cdot\phi^{nk-1}(x))}{n\cdot k} \geq \\
\lim_{n\to\infty} \frac{ v(x\cdot\phi^k(x)\cdot\ldots\cdot(\phi^k)^{n-1}(x))}{n\cdot k}= \frac{h_{\mathfrak{S}}(\phi^k,x)}{k} \end{split}\end{equation*} Hence, $k\cdot h_{\mathfrak{S}}(\phi)\geq h_{\mathfrak{S}}(\phi^k,x)$ for every $x\in S$. Therefore, $k\cdot h_{\mathfrak{S}}(\phi)\geq h_{\mathfrak{S}}(\phi^k)$.
If $\phi$ is an automorphism and $k\in\mathbb Z\setminus\{0\}$, apply the previous part of the theorem and Lemma \ref{inversion}. \end{proof}
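To see the first part of the proof at work in a concrete case, take again $S=(\mathbb N,+)$ with the norm $v_a$ of Example \ref{Fekete}(d) and the endomorphism $\phi:x\mapsto ax$. For $y=T_k(\phi,1)=1+a+\ldots+a^{k-1}$ one has
$$T_n(\phi^k,y)=y+a^ky+\ldots+a^{(n-1)k}y=1+a+\ldots+a^{nk-1},$$
so $c_n(\phi^k,y)=nk$ and $h_{\mathfrak{S}}(\phi^k,y)=k=k\cdot h_{\mathfrak{S}}(\phi,1)$, in accordance with the inequality $h_{{\mathfrak{S}}}(\phi^{k})\geq k\cdot h_{{\mathfrak{S}}}(\phi)$ (note that $v_a$ is not s-monotone, as $v_a(a^2)=1<v_a(a^2-1)=2(a-1)$, so the equality case of the theorem need not apply here).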
The next lemma shows that monotonicity is available not only under taking factors:
\begin{Lemma}[Monotonicity for subsemigroups] Let $(S,v)$ be a normed semigroup and $\phi:S\to S$ an endomorphism. If $T$ is a $\phi$-invariant normed subsemigroup of $(S,v)$, then $h_{{\mathfrak{S}}}(\phi)\geq h_{{\mathfrak{S}}}(\phi\restriction_{T})$. Equality holds if $S$ is ordered, $v$ is monotone and $T$ is cofinal in $S$. \end{Lemma}
Note that $T$ is equipped with the induced norm $v\restriction_T$. The same applies to the subsemigroups $S_i$ in the next corollary:
\begin{Corollary}[Continuity for direct limits] Let $(S,v)$ be a normed semigroup and $\phi:S\to S$ an endomorphism. If $\{S_i: i\in I\}$ is a directed family of $\phi$-invariant normed subsemigroups of $(S,v)$ with $ S =\varinjlim S_i$, then $h_{{\mathfrak{S}}}(\phi)=\sup_{i\in I} h_{{\mathfrak{S}}}(\phi\restriction_{S_i})$. \end{Corollary}
We consider now products in ${\mathfrak{S}}$. Let $\{(S_i,v_i):i\in I\}$ be a family of normed semigroups and let $S=\prod_{i \in I}S_i$ be their direct product in the category of semigroups.
In case $I$ is finite, then $S$ becomes a normed semigroup with the $\max$-norm $v_{\prod}$, so $(S,v_{\prod})$ is the product of the family $\{S_i:i\in I\}$ in the category ${\mathfrak{S}}$; in such a case one has the following
\begin{Theorem}[Weak Addition Theorem - products]\label{WAT} Let $(S_i,v_i)$ be a normed semigroup and $\phi_i:S_i\to S_i$ an endomorphism for $i=1,2$. Then the endomorphism $\phi_1 \times \phi_2$ of $ S _1 \times S_2$ has $h_{\mathfrak{S}}(\phi_1 \times \phi_2)= \max\{ h_{\mathfrak{S}}(\phi_1),h_{\mathfrak{S}}(\phi_2)\}$. \end{Theorem}
If $I$ is infinite, $S$ need not carry a semigroup norm $v$ such that every projection $p_i: (S,v) \to (S_i,v_i)$ is a morphism in ${\mathfrak{S}}$. This is why the product of the family $\{(S_i,v_i):i\in I\}$ in ${\mathfrak{S}}$ is actually the subset $$S_{\mathrm{bnd}}=\{x=(x_i)_{i\in I}\in S: \sup_{i\in I}v_i(x_i)\in\mathbb R\}$$ of $S$ with the norm $v_{\prod}$ defined by $$v_{\prod}(x)=\sup_{i\in I}v_i(x_i)\ \text{for any}\ x=(x_i)_{i\in I}\in S_{\mathrm{bnd}}.$$ For further details in this direction see \cite{DGV1}.
\subsection{Entropy in $\mathfrak M$}
We collect here some additional properties of the semigroup entropy in the category $\mathfrak M$ of normed monoids where also coproducts are available. If $(S_i,v_i)$ is a normed monoid for every $i\in I$, the direct sum
$$S= \bigoplus_{i\in I} S_i =\{(x_i)\in \prod_{i\in I}S_i: |\{i\in I: x_i \ne 1\}|<\infty\}$$ becomes a normed monoid with the norm $$v_\oplus(x) = \sum_{i\in I} v_i(x_i)\ \text{for any}\ x = (x_i)_{i\in I} \in S.$$ This definition makes sense since $v_i$ are monoid norms, so $v_i(1) = 0$. Hence, $(S,v_\oplus)$ becomes a coproduct of the family $\{(S_i,v_i):i\in I\}$ in $\mathfrak M$.
We consider now the case when $I$ is finite, so assume without loss of generality that $I=\{1,2\}$. In other words we have two normed monoids $(S_1,v_1)$ and $(S_2,v_2)$. The product and the coproduct have the same underlying monoid $S=S_1\times S_2$, but the norms $v_\oplus$ and $v_{\prod}$ in $S$ are different and give different values of the semigroup entropy $h_{\mathfrak{S}}$; indeed, compare Theorem \ref{WAT} and the following one.
\begin{Theorem}[Weak Addition Theorem - coproducts] Let $(S_i,v_i)$ be a normed monoid and $\phi_i:S_i\to S_i$ an endomorphism for $i=1,2$. Then the endomorphism $\phi_1 \oplus \phi_2$ of $S _1 \oplus S_2$ has $h_{\mathfrak{S}}(\phi_1 \oplus \phi_2)= h_{\mathfrak{S}}(\phi_1) + h_{\mathfrak{S}}(\phi_2)$. \end{Theorem}
For a normed monoid $(M,v) \in \mathfrak M$ let $B(M)= \bigoplus_\mathbb N M$, equipped with the above coproduct norm $v_\oplus(x) = \sum_{n\in\mathbb N} v(x_n)$ for any $x=(x_n)_{n\in\mathbb N}\in B(M)$. The \emph{right Bernoulli shift} is defined by $$\beta_M:B(M)\to B(M), \ \beta_M(x_0,\dots,x_n,\dots)=(1,x_0,\dots,x_n,\dots),$$ while the \emph{left Bernoulli shift} is $${}_M\beta:B(M)\to B(M),\ {}_M\beta(x_0,x_1,\dots,x_n,\dots)=(x_1,x_2, \dots,x_n,\dots).$$
\begin{Theorem}[Bernoulli normalization] Let $(M,v)$ be a normed monoid. Then: \begin{itemize} \item[(a)] $h_{\mathfrak{S}} (\beta_M)=\sup_{x\in M}v(x)$; \item[(b)] $h_{\mathfrak{S}}({}_M\beta) = 0$. \end{itemize} \end{Theorem} \begin{proof} (a) For $x\in M$ consider $\underline{x}=(x_n)_{n\in\mathbb N}\in B(M)$ such that $x_0=x$ and $x_n=1$ for every $n\in\mathbb N_+$. Then $v_\oplus(T_n(\beta_M,\underline{x}))=n\cdot v(x)$, so $h_{\mathfrak{S}}(\beta_M,\underline{x})=v(x)$. Hence $h_{\mathfrak{S}}(\beta_M)\geq \sup_{x\in M}v(x)$. Let now $\underline{x}=(x_n)_{n\in\mathbb N}\in B(M)$ and let $k\in\mathbb N$ be the greatest index such that $x_k\neq 1$; then \begin{equation*}\begin{split} v_\oplus(T_n(\beta_M,\underline{x}))= \sum_{i=0}^{k+n} v(T_n(\beta_M,\underline{x})_i)\leq\\ \sum_{i=0}^{k-1} v(x_0\cdot\ldots\cdot x_i) + (n-k)\cdot v(x_1\cdot\ldots\cdot x_k)+\sum_{i=1}^{k} v(x_i\cdot\ldots\cdot x_k). \end{split}\end{equation*} Since the first and the last summand do not depend on $n$, after dividing by $n$ and letting $n$ converge to infinity we obtain $$h_{\mathfrak{S}}(\beta_M,\underline{x})=\lim_{n\to \infty} \frac{v_\oplus(T_n(\beta_M,\underline{x}))}{n}\leq v(x_1\cdot\ldots\cdot x_k)\leq \sup_{x\in M}v(x).$$
(b) Note that ${}_M\beta$ is locally nilpotent and apply Lemma \ref{locally}. \end{proof}
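A minimal concrete instance of item (a): let $M=(\{0,1\},\vee)$ with the monoid norm $v(0)=0$, $v(1)=1$. For $\underline x=(1,0,0,\dots)\in B(M)$ one has $T_n(\beta_M,\underline x)=(1,1,\dots,1,0,\dots)$ with $n$ entries equal to $1$, so $v_\oplus(T_n(\beta_M,\underline x))=n$ and $h_{\mathfrak{S}}(\beta_M,\underline x)=1=\sup_{x\in M}v(x)$; on the other hand ${}_M\beta$ is locally nilpotent, so $h_{\mathfrak{S}}({}_M\beta)=0$.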
\subsection{Semigroup entropy of an element and pseudonormed semigroups}\label{NewSec1}
One can notice a certain asymmetry in Definition \ref{SEofEndos}.
Indeed, for $S$ a normed semigroup, the local semigroup entropy defined in \eqref{hs-eq} is a two variable function $$h_{{\mathfrak{S}}}: \mathrm{End}(S) \times S \to \mathbb R_+.$$ Taking $h_{{\mathfrak{S}}}(\phi)=\sup_{x\in S}h_{{\mathfrak{S}}}(\phi,x)$ for an endomorphism $\phi\in\mathrm{End}(S)$, we obtained the notion of semigroup entropy of $\phi$. But one can obviously exchange the roles of $\phi$ and $x$ and discuss instead the entropy of an element $x\in S$. This can be done in two ways. Indeed, in Remark \ref{Asymm} we consider what seems to be the natural counterpart of $h_{{\mathfrak{S}}}(\phi)$, while here we discuss a particular case that could appear almost trivial; actually this is not the case, as it permits a uniform approach to some entropies which are not defined by using trajectories. So, by taking $\phi=\mathrm{id}_S$ in \eqref{hs-eq}, we obtain a map $h_{\mathfrak{S}}^0:S\to\mathbb R_+$:
\begin{Definition} Let $S$ be a normed semigroup and $x\in S$. The \emph{semigroup entropy} of $x$ is $$ h_{{\mathfrak{S}}}^0(x):=h_{{\mathfrak{S}}}(\mathrm{id}_S,x) = \lim_{n\to\infty} \frac{v(x^n)}{n}. $$ \end{Definition}
We shall see now that the notion of semigroup entropy of an element is supported by many examples. On the other hand, since some of the examples given below cannot be covered by our scheme, we propose first a slight extension that covers those examples as well.
Let ${\mathfrak S}^*$ be the category having as objects all pairs $(S,v)$, where $S$ is a semigroup and $v:S \to \mathbb R_+$ is an \emph{arbitrary} map. A morphism in the category ${\mathfrak S}^*$ is a semigroup homomorphism $\phi: (S,v) \to (S',v')$ that is contracting with respect to the pair $v,v'$, i.e., $v'(\phi(x)) \leq v(x)$ for every $x\in S$. Note that our starting category ${\mathfrak S}$ is simply a full subcategory of ${\mathfrak S}^*$, having as objects those pairs $(S,v)$ such that $v$ satisfies (i) from Definition \ref{Def1}. These pairs were called normed semigroups and $v$ was called a semigroup norm. For the sake of convenience, and in order to keep close to the current terminology, let us call the function $v$ in the larger category ${\mathfrak S}^*$ a \emph{semigroup pseudonorm} (although we are imposing no condition on $v$ whatsoever).
So, in this setting, one can define a local semigroup entropy $h_{{\mathfrak{S}}^*}: \mathrm{End}(S) \times S \to \mathbb R_+$ following the pattern of \eqref{hs-eq}, replacing the limit by $$h_{{\mathfrak{S}}^*} (\phi,x)=\limsup_{n\to \infty}\frac{v(T_n(\phi,x))}{n}.$$ In particular, $$h_{{\mathfrak{S}}^*}^0(x)=\limsup_{n\to \infty}\frac{v(x^n)}{n}.$$ Let us note that in order for the last $\limsup$ to be a limit, one does not need $(S,v)$ to be in ${\mathfrak{S}}$; it suffices to have the semigroup norm condition (i) from Definition \ref{Def1} fulfilled only for products of powers of the same element.
We consider here three different entropies, respectively from \cite{MMS}, \cite{Silv} and \cite{FFK}, that can be described in terms of $h_{\mathfrak{S}}^0$ or its generalized version $h_{{\mathfrak{S}}^*}^0$. We do not go into the details, but we give the idea of how to capture them using the notion of semigroup entropy of an element of the semigroup of all endomorphisms of a given object, equipped with an appropriate semigroup (pseudo)norm.
\begin{itemize} \item[(a)] Following \cite{MMS}, let $R$ be a Noetherian local ring and $\phi:R\to R$ an endomorphism of finite length; moreover, $\lambda(\phi)$ is the length of $\phi$, which is a real number $\geq 1$. In this setting the entropy of $\phi$ is defined by $$h_\lambda(\phi)=\lim_{n\to \infty}\frac{\log\lambda(\phi^n)}{n}$$ and it is proved that this limit exists.
Then the set $S=\mathrm{End}_{\mathrm{fl}}(R)$ of all finite-length endomorphisms of $R$ is a semigroup and $\log\lambda(-)$ is a semigroup norm on $S$. For every $\phi\in S$, we have $$ h_\lambda(\phi)=h_{\mathfrak{S}}(\mathrm{id}_S,\phi)=h_{{\mathfrak{S}}}^0(\phi). $$ In other words, $h_\lambda(\phi)$ is nothing else but the semigroup entropy of the element $\phi$ of the normed semigroup $S=\mathrm{End}_{\mathrm{fl}}(R)$.
\item[(b)] We recall now the entropy considered in \cite{Silv}, which was already introduced in \cite{BV}. Let $t\in\mathbb N_+$ and $\varphi:\mathbb P^t\to\mathbb P^t$ be a dominant rational map of degree $d$. Then the entropy of $\varphi$ is defined as the logarithm of the dynamical degree, that is $$ h_\delta (\varphi)=\log \delta_\varphi=\limsup_{n\to \infty}\frac{\log\deg(\varphi^n)}{n}. $$ Consider the semigroup $S$ of all dominant rational maps of $\mathbb P^t$ and the function $\log\deg(-)$. In general this is only a semigroup pseudonorm on $S$ and $$h_{{\mathfrak{S}}^*}^0(\varphi)=h_\delta(\varphi).$$ Note that $\log\deg(-)$ is a semigroup norm when $\varphi$ is an endomorphism of the variety $\mathbb P^t$.
\item[(c)] We consider now the growth rate for endomorphisms introduced in \cite{Bowen} and recently studied in \cite{FFK}. Let $G$ be a finitely generated group, $X$ a finite symmetric set of generators of $G$, and $\varphi:G\to G$ an endomorphism. For $g\in G$, denote by $\ell_X(g)$ the length of $g$ with respect to the alphabet $X$. The growth rate of $\varphi$ with respect to $x\in X$ is $$\log GR(\varphi,x)=\lim_{n\to \infty}\frac{\log \ell_X(\varphi^n(x))}{n}$$ (and the growth rate of $\varphi$ is $\log GR(\varphi)=\sup_{x\in X} \log GR(\varphi,x)$).
Consider $S=\mathrm{End}(G)$ and, for a fixed $x\in X$, the map $\log GR(-,x)$. As in item (b), this is in general only a semigroup pseudonorm on $S$. Nevertheless, also in this case we have $$\log GR(\varphi,x)=h_{{\mathfrak{S}}^*}^0(\varphi).$$ \end{itemize}
\begin{Remark}\label{Asymm}
For a normed semigroup $S$, let $h_{{\mathfrak{S}}}: \mathrm{End}(S) \times S \to \mathbb R_+$ be the local semigroup entropy defined in \eqref{hs-eq}.
Exchanging the roles of $\phi\in \mathrm{End}(S)$ and $x\in S$, define the \emph{global semigroup entropy} of an element $x\in S$ by
$$
h_{{\mathfrak{S}}}(x)=\sup_{\phi \in \mathrm{End}(S)}h_{{\mathfrak{S}}}(\phi,x).
$$ Obviously, $h_{{\mathfrak{S}}}^0(x) \leq h_{{\mathfrak{S}}}(x)$ for every $x\in S$. \end{Remark}
\section{Obtaining known entropies}\label{known-sec}
\subsection{The general scheme}
Let $\mathfrak X$ be a category and let $F:\mathfrak X\to {\mathfrak{S}}$ be a functor. Define the entropy $$h_{F}:\mathfrak X\to \mathbb R_+$$ on the category $\mathfrak X$ by $$h_{F}(\phi)=h_{{\mathfrak{S}}}(F(\phi)),$$ for any endomorphism $\phi: X \to X$ in $\mathfrak X$. Recall that with some abuse of notation we write $h_{F}:\mathfrak X\to \mathbb R_+$ in place of $h_{F}:\mathrm{Flow}_\mathfrak X\to \mathbb R_+$ for simplicity.
Since the functor $F$ preserves commutative squares and isomorphisms, the entropy $h_{F}$ has the following properties, that automatically follow from the previously listed properties of the semigroup entropy $h_{\mathfrak{S}}$. For the details and for properties that need a further discussion see \cite{DGV1}.
Let $X$, $Y$ be objects of $\mathfrak X$ and $\phi:X\to X$, $\psi:Y\to Y$ endomorphisms in $\mathfrak X$. \begin{itemize} \item[(a)][Invariance under conjugation] If $\alpha:X\to Y$ is an isomorphism in $\mathfrak X$, then $h_{F}(\phi)=h_{F}(\alpha\circ\phi\circ\alpha^{-1})$. \item[(b)][Invariance under inversion] If $\phi:X\to X$ is an automorphism in $\mathfrak X$, then $h_{F}(\phi^{-1})=h_{F}(\phi)$. \item[(c)][Logarithmic Law] If the norm of $F(X)$ is $s$-monotone, then $h_{F}(\phi^{k})=k\cdot h_{F}(\phi)$ for all $k\in \mathbb N_+$. \end{itemize} Other properties of $h_{F}$ depend on properties of the functor $F$. \begin{itemize} \item[(d)][Monotonicity for invariant subobjects] If $F$ sends subobject embeddings in $\mathfrak X$ to embeddings in ${\mathfrak{S}}$ or to surjective maps in ${\mathfrak{S}}$, then, if $Y$ is a $\phi$-invariant subobject of $X$, we have $h_{F}(\phi\restriction_Y)\leq h_{F}(\phi)$. \item[(e)][Monotonicity for factors] If $F$ sends factors in $\mathfrak X$ to surjective maps in ${\mathfrak{S}}$ or to embeddings in ${\mathfrak{S}}$, then, if $\alpha:Y\to X$ is an epimorphism in $\mathfrak X$ such that $\alpha \circ \psi = \phi \circ \alpha$, we have $h_F(\phi) \leq h_F(\psi)$. \item[(f)][Continuity for direct limits] If $F$ is covariant and sends direct limits to direct limits, then $h_F(\phi)=\sup_{i\in I} h_F(\phi\restriction_{X_i})$ whenever $X=\varinjlim X_i$ and $X_i$ is a $\phi$-invariant subobject of $X$ for every $i\in I$. \item[(g)][Continuity for inverse limits] If $F$ is contravariant and sends inverse limits to direct limits, then $h_F(\phi)=\sup_{i\in I} h_F(\phi_i)$ whenever $X=\varprojlim X_i$ and $(X_i,\phi_i)$ is a factor of $(X,\phi)$ for every $i\in I$.
\end{itemize}
In the following subsections we describe how the known entropies can be obtained from this general scheme. For all the details we refer to \cite{DGV1}.
\subsection{Set-theoretic entropy}
In this section we consider the category $\mathbf{Set}$ of sets and maps and its (non-full) subcategory $\mathbf{Set}_\mathrm{fin}$ having as morphisms all the finitely many-to-one maps. We construct a functor $\mathfrak{atr}:\mathbf{Set}\to{\mathfrak{S}}$ and a functor $\mathfrak{str}: \mathbf{Set}_\mathrm{fin} \to {\mathfrak{S}}$, which give the set-theoretic entropy $\mathfrak h$ and the covariant set-theoretic entropy $\mathfrak h^*$, introduced in \cite{AZD} and \cite{DG-islam} respectively. We also recall that they are related to invariants for self-maps of sets introduced in \cite{G0} and \cite{AADGH} respectively.
A natural semilattice with zero, arising from a set $X$, is the family $({\mathcal S}(X),\cup)$ of all finite subsets of $X$ with neutral element $\emptyset$. Moreover the map defined by $v(A) = |A|$ for every $A\in\mathcal S(X)$ is an s-monotone norm. So let $\mathfrak{atr}(X)=(\mathcal S(X),\cup,v)$. Consider now a map $\lambda:X\to Y$ between sets and define $\mathfrak{atr}(\lambda):\mathcal S(X)\to \mathcal S(Y)$ by $A\mapsto \lambda(A)$ for every $A\in\mathcal S(X)$. This defines a covariant functor $$\mathfrak{atr}: \mathbf{Set} \to {\mathfrak{S}}$$ such that $$h_{\mathfrak{atr}}=\mathfrak h.$$
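As a quick illustration of how this functor computes $\mathfrak h$, consider the successor map $\sigma:\mathbb N\to\mathbb N$, $\sigma(x)=x+1$ (the computation below is carried out directly from the definitions). For $A=\{0\}$ one has $T_n(\mathfrak{atr}(\sigma),A)=\{0\}\cup\sigma(\{0\})\cup\ldots\cup\sigma^{n-1}(\{0\})=\{0,1,\ldots,n-1\}$, so $c_n(\mathfrak{atr}(\sigma),A)=n$ and $h_{\mathfrak{S}}(\mathfrak{atr}(\sigma),A)=1$. On the other hand, for every non-empty $A\in\mathcal S(\mathbb N)$ one has $T_n(\mathfrak{atr}(\sigma),A)\subseteq\{\min A,\ldots,\max A+n-1\}$, so $c_n(\mathfrak{atr}(\sigma),A)\leq \max A+n$ and $h_{\mathfrak{S}}(\mathfrak{atr}(\sigma),A)\leq 1$. Hence $\mathfrak h(\sigma)=h_{\mathfrak{atr}}(\sigma)=1$.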
Consider now a finite-to-one map $\lambda:X\to Y$. As above let $\mathfrak{str}(X)=(\mathcal S(X),\cup,v)$, while $\mathfrak{str}(\lambda):\mathfrak{str}(Y)\to\mathfrak{str}(X)$ is given by $A \mapsto\lambda^{-1}(A)$ for every $A\in\mathcal S(Y)$. This defines a contravariant functor $$ \mathfrak{str}: \mathbf{Set}_\mathrm{fin}\to{\mathfrak{S}} $$ such that $$ h_{\mathfrak{str}}=\mathfrak h^*. $$
\subsection{Topological entropy for compact spaces}
In this subsection we consider in the general scheme the topological entropy $h_{top}$ introduced in \cite{AKM} for continuous self-maps of compact spaces. So we specify the general scheme for the category $\mathfrak X=\mathbf{CTop}$ of compact spaces and continuous maps, constructing the functor $\mathfrak{cov}:\mathbf{CTop}\to{\mathfrak{S}}$.
For a topological space $X$ let $\mathfrak{cov}(X)$ be the family of all open covers $\mathcal U$ of $X$, where $\emptyset\in\mathcal U$ is allowed. For ${\cal U}, {\cal V}\in \mathfrak{cov}(X)$ let ${\cal U} \vee {\cal V}=\{U\cap V: U\in {\cal U}, V\in {\cal V}\}\in \mathfrak{cov}(X)$. One can easily prove commutativity and associativity of $\vee$; moreover, let $\mathcal E=\{X\}$ denote the trivial cover. Then \begin{center} $(\mathfrak{cov}(X), \vee, \mathcal E)$ is a commutative monoid. \end{center}
For a topological space $X$, one has a natural preorder ${\cal U} \prec{\cal V} $ on $\mathfrak{cov}(X)$; indeed, ${\cal V}$ \emph{refines} ${\cal U}$ if for every $V \in {\cal V}$ there exists $U\in {\cal U}$ such that $V\subseteq U$. Note that this preorder has bottom element $\mathcal E$, and that it is not an order. In general, ${\cal U} \vee {\cal U} \ne {\cal U} $, yet ${\cal U} \vee {\cal U} \sim {\cal U} $, and more generally \begin{equation}\label{vee} {\cal U} \vee {\cal U} \vee \ldots \vee {\cal U} \sim {\cal U}. \end{equation}
For $X$, $Y$ topological spaces, a continuous map $\phi:X\to Y$ and ${\cal U}\in \mathfrak{cov} (Y)$, let $\phi^{-1}({\cal U})=\{\phi^{-1}(U): U\in {\cal U}\}$. Then, as $\phi^{-1}({\cal U} \vee {\cal V})= \phi^{-1}({\cal U})\vee \phi^{-1}({\cal V})$, we have that $\mathfrak{cov} (\phi): \mathfrak{cov} (Y)\to \mathfrak{cov} (X)$, defined by ${\cal U} \mapsto \phi^{-1}({\cal U})$, is a semigroup homomorphism. This defines a contravariant functor $\mathfrak{cov}$ from the category of all topological spaces to the category of commutative semigroups.
To get a semigroup norm on $\mathfrak{cov}(X)$ we restrict this functor to the subcategory ${\mathbf{CTop}}$ of compact spaces. For a compact space $X$ and ${\cal U}\in \mathfrak{cov}(X)$, let $$M({\cal U})=\min\{|{\cal V}|: {\cal V}\mbox{ a finite subcover of }{\cal U}\}\ \text{and}\ v({\cal U})=\log M({\cal U}).$$ Now \eqref{vee} gives $v({\cal U} \vee {\cal U} \vee \ldots \vee {\cal U}) = v({\cal U})$, so \begin{center} $(\mathfrak{cov}(X), \vee, v)$ is an arithmetic normed semigroup. \end{center}
For every continuous map $\phi:X\to Y$ of compact spaces and ${\cal W}\in \mathfrak{cov}(Y)$, the inequality $v(\phi^{-1}({\cal W}))\leq v({\cal W})$ holds. Consequently \begin{center} $\mathfrak{cov}(\phi): \mathfrak{cov} (Y)\to \mathfrak{cov} (X)$, defined by ${\cal W}\mapsto\phi^{-1}({\cal W})$, is a morphism in ${\mathfrak{S}}$. \end{center} Therefore the assignments $X \mapsto \mathfrak{cov}(X)$ and $\phi\mapsto\mathfrak{cov}(\phi)$ define a contravariant functor $$\mathfrak{cov}:\mathbf{CTop}\to {\mathfrak{S}}.$$
Moreover, $$h_{\mathfrak{cov}}=h_{top}.$$
Since the functor $\mathfrak{cov}$ takes factors in ${\mathbf{CTop}}$ to embeddings in ${\mathfrak{S}}$, embeddings in ${\mathbf{CTop}}$ to surjective morphisms in ${\mathfrak{S}}$, and inverse limits in ${\mathbf{CTop}}$ to direct limits in ${\mathfrak{S}}$, we have automatically that the topological entropy $h_{top}$ is monotone for factors and restrictions to invariant subspaces, continuous for inverse limits, is invariant under conjugation and inversion, and satisfies the Logarithmic Law.
\subsection{Measure entropy}
In this subsection we consider the category ${\mathbf{MesSp}}$ of probability measure spaces $(X, \mathfrak B, \mu)$ and measure preserving maps, constructing a functor $\mathfrak{mes}:{\mathbf{MesSp}}\to{\mathfrak{S}}$ in order to obtain from the general scheme the measure entropy $h_{mes}$ from \cite{K} and \cite{Sinai}.
For a measure space $(X,\mathfrak{B},\mu)$ let $\mathfrak{P}(X)$ be the family of all finite measurable partitions $\xi=\{A_1,A_2,\ldots,A_k\}$ of $X$. For $\xi, \eta\in \mathfrak{P}(X)$ let $\xi \vee \eta=\{U\cap V: U\in \xi, V\in \eta\}$. As $\xi \vee \xi = \xi$, with zero the trivial partition $\xi_0=\{X\}$, \begin{center} $(\mathfrak{P}(X),\vee)$ is a semilattice with $0$. \end{center} Moreover, for $\xi=\{A_1,A_2,\ldots,A_k\}\in \mathfrak{P}(X)$ the \emph{entropy} of $\xi$ is given by Boltzmann's Formula $$ v(\xi)=-\sum_{i=1}^k \mu(A_i)\log \mu(A_i). $$ This is a monotone semigroup norm making $\mathfrak{P}(X)$ a normed semilattice and a normed monoid.
Consider now a measure preserving map $T:X\to Y$. For a partition $\xi=\{A_i\}_{i=1}^k\in \mathfrak{P}(Y)$ let $T^{-1}(\xi)=\{T^{-1}(A_i)\}_{i=1}^k$. Since $T$ is measure preserving, one has $T^{-1}(\xi)\in \mathfrak{P}(X)$ and $\mu (T^{-1}(A_i)) = \mu(A_i)$ for all $i=1,\ldots,k$. Hence, $v(T^{-1}(\xi)) = v(\xi)$ and so \begin{center} $\mathfrak{mes}(T):\mathfrak{P}(Y)\to\mathfrak{P}(X)$, defined by $\xi\mapsto T^{-1}(\xi)$, is a morphism in ${\mathfrak{L}}$. \end{center} Therefore the assignments $X \mapsto\mathfrak{P}(X)$ and $T\mapsto\mathfrak{mes}(T)$ define a contravariant functor $$\mathfrak{mes}:{\mathbf{MesSp}}\to{\mathfrak{L}}.$$
Moreover, $$h_{\mathfrak{mes}}=h_{mes}.$$
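The general scheme recovers the familiar computations. For instance, let $X=\{0,1\}^{\mathbb N}$ with the product measure $\mu$ giving each coordinate the two values with probability $\frac12$, and let $T:X\to X$ be the one-sided Bernoulli shift $(x_0,x_1,\ldots)\mapsto(x_1,x_2,\ldots)$, which is measure preserving. For the partition $\xi=\{A_0,A_1\}$ with $A_j=\{x\in X:x_0=j\}$, the $n$-th trajectory of $\xi$ under $\mathfrak{mes}(T)$ is $\xi\vee T^{-1}(\xi)\vee\ldots\vee T^{-(n-1)}(\xi)$, namely the partition of $X$ into the $2^n$ cylinders determined by the first $n$ coordinates, each of measure $2^{-n}$; its norm is $n\log 2$, so $h_{\mathfrak{S}}(\mathfrak{mes}(T),\xi)=\log 2$ and $h_{mes}(T)\geq\log 2$ (classically, the Kolmogorov--Sinai entropy of this shift is exactly $\log 2$).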
The functor $\mathfrak{mes}:{\mathbf{MesSp}}\to{\mathfrak{L}}$ is contravariant; it sends embeddings in ${\mathbf{MesSp}}$ to surjective morphisms in ${\mathfrak{L}}$ and surjective maps in ${\mathbf{MesSp}}$ to embeddings in ${\mathfrak{L}}$. Hence, similarly to $h_{top}$, also the measure-theoretic entropy $h_{mes}$ is monotone for factors and restrictions to invariant subspaces, continuous for inverse limits, invariant under conjugation and inversion, and satisfies the Logarithmic Law and the Weak Addition Theorem.
In the next remark we briefly discuss the connection between measure entropy and topological entropy.
\begin{Remark} \begin{itemize} \item[(a)] If $X$ is a compact metric space and $\phi: X \to X$ is a continuous surjective self-map,
by Krylov-Bogolioubov Theorem \cite{BK} there exist some $\phi$-invariant Borel probability measures $\mu$ on $X$ (i.e., making $\phi:(X,\mu) \to (X,\mu)$ measure preserving). Denote by $h_\mu$ the measure entropy with respect to the measure $\mu$. The inequality $h_{\mu}(\phi)\leq h_{top}(\phi)$ for every $\mu \in M(X,\phi)$ is due to Goodwyn \cite{Goo}. Moreover the \emph{variational principle} (see \cite[Theorem 8.6]{Wa}) holds true: $$h_{top}(\phi)= \sup \{h_{\mu}(\phi): \mu\ \text{$\phi$-invariant measure on $X$}\}.$$
\item[(b)] In the computation of the topological entropy it is possible to reduce to surjective continuous self-maps of compact spaces. Indeed, for a compact space $X$ and a continuous self-map $\phi:X\to X$, the set $E_\phi(X)=\bigcap_{n\in\mathbb N}\phi^n(X)$ is closed and $\phi$-invariant, the map $\phi\restriction_{E_\phi(X)}:E_\phi(X)\to E_\phi(X)$ is surjective and $h_{top}(\phi)=h_{top}(\phi\restriction_{E_\phi(X)})$ (see \cite{Wa}). \item[(c)] In the case of a compact group $K$ and a continuous surjective endomorphism $\phi:K\to K$, the group $K$ has its unique Haar measure and so $\phi$ is measure preserving as noted by Halmos \cite{Halmos}. In particular both $h_{top}$ and $h_{mes}$ are available for surjective continuous endomorphisms of compact groups and they coincide as proved in the general case by Stoyanov \cite{S}.
In other terms, denote by $\mathbf{CGrp}$ the category of all compact groups and continuous homomorphisms, and by $\textbf{CGrp}_e$ the non-full subcategory of $\textbf{CGrp}$, having as morphisms all epimorphisms in $\textbf{CGrp}$. So in the following diagram we consider the forgetful functor $V: \textbf{CGrp}_e\to \textbf{Mes}$, while $i$ is the inclusion of $\textbf{CGrp}_e$ in $\textbf{CGrp}$ as a non-full subcategory and $U:\mathbf{CGrp}\to \mathbf{Top}$ is the forgetful functor: \begin{equation*} \xymatrix{ \textbf{CGrp}_e\ar[r]^{i}\ar[d]^V & \textbf{CGrp}\ar[r]^{U}& \textbf{Top} \\ \textbf{Mes} } \end{equation*} For a surjective endomorphism $\phi$ of the compact group $K$, we have then $h_{mes}(V(\phi))=h_{top}(U(\phi))$.
\end{itemize} \end{Remark}
\subsection{Algebraic entropy}
Here we consider the category $\mathbf{Grp}$ of all groups and their homomorphisms and its subcategory $\mathbf{AbGrp}$ of all abelian groups. We construct two functors $\mathfrak{sub}:\mathbf{AbGrp}\to{\mathfrak{L}}$ and $\mathfrak{pet}:\mathbf{Grp}\to{\mathfrak{S}}$ that permits to find from the general scheme the two algebraic entropies $\mathrm{ent}$ and $h_{alg}$. For more details on these entropies see the next section.
Let $G$ be an abelian group and let $({\cal F}(G),\cdot)$ be the semilattice consisting of all finite subgroups of $G$. Letting $v(F) = \log|F|$ for every $F \in {\cal F}(G)$, then \begin{center} $({\cal F}(G),\cdot,v)$ is a normed semilattice \end{center} and the norm $v$ is monotone.
For every group homomorphism $\phi: G \to H$, \begin{center} the map ${\cal F}(\phi): {\cal F}(G) \to {\cal F}(H)$, defined by $F\mapsto \phi(F)$, is a morphism in ${\mathfrak{L}}$. \end{center} Therefore the assignments $G\mapsto {\cal F}(G)$ and $\phi\mapsto {\cal F}(\phi)$ define a covariant functor $$\mathfrak{sub}: {\mathbf{AbGrp}} \to {\mathfrak{L}}.$$ Moreover $$h_{\mathfrak{sub}}=\mathrm{ent}.$$
Since the functor $\mathfrak{sub}$ takes factors in $\mathbf{AbGrp}$ to surjective morphisms in ${\mathfrak{S}}$, embeddings in $\mathbf{AbGrp}$ to embeddings in ${\mathfrak{S}}$, and direct limits in $\mathbf{AbGrp}$ to direct limits in ${\mathfrak{S}}$, we have automatically that the algebraic entropy $\mathrm{ent}$ is monotone for factors and restrictions to invariant subgroups, continuous for direct limits, invariant under conjugation and inversion, and satisfies the Logarithmic Law.
For a group $G$ let $\mathcal{H}(G)$ be the family of all finite non-empty subsets of $G$. Then $\mathcal{H}(G)$ with the operation induced by the multiplication of $G$ is a monoid with neutral element $\{1\}$.
Moreover, letting $v(F) = \log |F|$ for every $F \in \mathcal{H}(G)$ makes $\mathcal{H}(G)$ a normed semigroup. For an abelian group $G$ the monoid $\mathcal{H}(G)$ is arithmetic since for any $F\in \mathcal{H}(G)$ the sum of $n$ summands satisfies $|F + \ldots + F|\leq (n+1)^{|F|}$. Moreover, $(\mathcal{H}(G),\subseteq)$ is an ordered semigroup and the norm $v$ is $s$-monotone.
For every group homomorphism $\phi:G \to H$, \begin{center} the map $\mathcal{H}(\phi): \mathcal{H}(G) \to \mathcal{H}(H)$, defined by $F\mapsto \phi(F)$, is a morphism in ${\mathfrak{S}}$. \end{center} Consequently the assignments $G \mapsto (\mathcal{H}(G),v)$ and $\phi\mapsto \mathcal{H}(\phi)$ give a covariant functor $$\mathfrak{pet}:\mathbf{Grp}\to {\mathfrak{S}}.$$ Hence $$h_{\mathfrak{pet}}=h_{alg}.$$ Note that the functor $\mathfrak{sub}$ is a subfunctor of $\mathfrak{pet}: {\mathbf{AbGrp}} \to{\mathfrak{S}}$ as $ {\cal F}(G) \subseteq {\cal H}(G)$ for every abelian group $G$.
As for the algebraic entropy $\mathrm{ent}$, since the functor $\mathfrak{pet}$ takes factors in $\mathbf{Grp}$ to surjective morphisms in ${\mathfrak{S}}$, embeddings in $\mathbf{Grp}$ to embeddings in ${\mathfrak{S}}$, and direct limits in $\mathbf{Grp}$ to direct limits in ${\mathfrak{S}}$, we have automatically that the algebraic entropy $h_{alg}$ is monotone for factors and restrictions to invariant subgroups, continuous for direct limits, invariant under conjugation and inversion, and satisfies the Logarithmic Law.
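A standard example makes these entropies concrete. Let $G=\bigoplus_{n\in\mathbb N}\mathbb Z/2\mathbb Z$, with canonical generators $e_n$, and let $\beta:G\to G$ be the right Bernoulli shift determined by $e_n\mapsto e_{n+1}$. For $F=\{0,e_0\}\in\mathcal{H}(G)$ the $n$-th trajectory is
$$T_n(\mathcal{H}(\beta),F)=F+\beta(F)+\ldots+\beta^{n-1}(F)=\Big\{\textstyle\sum_{i<n}\varepsilon_i e_i:\varepsilon_i\in\{0,1\}\Big\},$$
which has $2^n$ elements, so $c_n=n\log 2$ and $h_{alg}(\beta)\geq\log 2$; the same computation with the finite subgroup $\langle e_0\rangle\in\mathcal F(G)$ gives $\mathrm{ent}(\beta)\geq\log 2$. It is well known that in both cases the value is exactly $\log 2$.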
\subsection{$h_{top}$ and $h_{alg}$ in locally compact groups}\label{NewSec2}
As mentioned above, Bowen introduced topological entropy for uniformly continuous self-maps of metric spaces in \cite{B}. His approach turned out to be especially efficient in the case of locally compact spaces provided with some Borel measure with good invariance properties, in particular for {continuous endomorphisms of locally compact groups provided with their Haar measure}. Later Hood in \cite{hood} extended Bowen's definition to uniformly continuous self-maps of arbitrary uniform spaces and in particular to continuous endomorphisms of (not necessarily metrizable) locally compact groups.
On the other hand, Virili \cite{V} extended the notion of algebraic entropy to continuous endomorphisms of locally compact abelian groups, inspired by Bowen's definition of topological entropy (based on the use of Haar measure). As mentioned in \cite{DG-islam}, his definition can be extended to continuous endomorphisms of arbitrary locally compact groups.
Our aim here is to show that both entropies can be obtained from our general scheme in the case of measure preserving topological automorphisms of locally compact groups. To this end we recall first the definitions of $h_{top}$ and $h_{alg}$ in locally compact groups. Let $G$ be a locally compact group, let $\mathcal C(G)$ be the family of all compact neighborhoods of $1$ and $\mu$ be a right Haar measure on $G$. For a continuous endomorphism $\phi: G \to G$, $U\in\mathcal C(G)$ and a positive integer $n$, the $n$-th cotrajectory $C_n(\phi,U)=U\cap \phi^{-1}(U)\cap\ldots\cap\phi^{-n+1}(U)$ is still in $\mathcal C(G)$. The topological entropy $h_{top}$ is intended to measure the rate of decay of the $n$-th cotrajectory $C_n(\phi,U)$. So let \begin{equation} H_{top}(\phi,U)=\limsup _{n\to \infty} - \frac{\log \mu (C_n(\phi,U))}{n}, \end{equation} which does not depend on the choice of the Haar measure $\mu$. The \emph{topological entropy} of $\phi$ is $$ h_{top}(\phi)=\sup\{H_{top}(\phi,U):U\in\mathcal C(G)\}. $$
If $G$ is discrete, then $\mathcal C(G)$ is the family of all finite subsets of $G$ containing $1$, and $\mu(A) = |A|$ for subsets $A$ of $G$. So $H_{top}(\phi,U)= 0$ for every $U \in \mathcal C(G)$, hence $h_{top}(\phi)=0$.
To define the algebraic entropy of $\phi$ with respect to $U\in\mathcal C(G)$ one uses the {$n$-th $\phi$-trajectory} $T_n(\phi,U)=U\cdot \phi(U)\cdot \ldots\cdot \phi^{n-1}(U)$ of $U$, that still belongs to $\mathcal C(G)$. It turns out that the value \begin{equation}\label{**} H_{alg}(\phi,U)=\limsup_{n\to \infty} \frac{\log \mu (T_n(\phi,U))}{n} \end{equation} does not depend on the choice of $\mu$. The \emph{algebraic entropy} of $\phi$ is $$ h_{alg}(\phi)=\sup\{H_{alg}(\phi,U):U\in\mathcal C(G)\}. $$ The term ``algebraic'' is motivated by the fact that the definition of $T_n(\phi,U)$ (unlike $C_n(\phi,U)$) makes use of the group operation.
As we saw above \eqref{**} is a limit when $G$ is discrete. Moreover, if $G$ is compact, then $h_{alg}(\phi)=H_{alg}(\phi,G)=0$.
In the sequel, $G$ will be a locally compact group. We fix also a measure preserving topological automorphism $\phi: G \to G$.
To obtain the entropy $h_{top}(\phi)$ via semigroup entropy fix some $V\in \mathcal C(G)$ with $\mu(V)\leq 1$. Then consider the subset $$ \mathcal C_0(G)=\{U\in \mathcal C(G): U \subseteq V\}. $$ Obviously, $\mathcal C_0(G)$ is a monoid with respect to intersection, having as neutral element $V$. To obtain a pseudonorm $v$ on $\mathcal C_0(G)$ let $v(U) = - \log \mu (U)$ for any $U \in \mathcal C_0(G)$. Then $\phi$ defines a semigroup isomorphism $\phi^\#: \mathcal C_0(G)\to \mathcal C_0(G)$ by $\phi^\#(U) = \phi^{-1}(U)$ for any $U\in \mathcal C_0(G)$. It is easy to see that $\phi^\#: \mathcal C_0(G)\to \mathcal C_0(G)$ is an automorphism in ${\mathfrak{S}}^*$ and the semigroup entropy $h_{{\mathfrak{S}}^*}(\phi^\#)$ coincides with $h_{top}(\phi)$ since $H_{top}(\phi,U) \leq H_{top}(\phi,U')$ whenever $U \supseteq U'$.
To obtain the entropy $h_{alg}(\phi)$ via semigroup entropy fix some $W\in \mathcal C(G)$ with $\mu(W)\geq 1$. Then consider the subset $$ \mathcal C_1(G)=\{U\in \mathcal C(G): U \supseteq W\} $$ of the set $\mathcal C(G)$. Note that for $U_1, U_2 \in \mathcal C_1(G)$ also $U_1U_2 \in \mathcal C_1(G)$. Thus $\mathcal C_1(G)$ is a semigroup. To define a pseudonorm $v$ on $\mathcal C_1(G)$ let $v(U) = \log \mu (U)$ for any $U \in \mathcal C_1(G)$. Then $\phi$ defines a semigroup isomorphism $\phi_\#: \mathcal C_1(G)\to \mathcal C_1(G)$ by $\phi_\#(U) = \phi(U)$ for any $U\in \mathcal C_1(G)$. It is easy to see that $\phi_\#: \mathcal C_1(G)\to \mathcal C_1(G)$ is a morphism in ${\mathfrak{S}}^*$ and
the semigroup entropy $h_{{\mathfrak{S}}^*}(\phi_\#)$ coincides with $h_{alg}(\phi)$,
since $ \mathcal C_1(G)$ is cofinal in $ \mathcal C(G)$ and $H_{alg}(\phi,U) \leq H_{alg}(\phi,U')$ whenever $U \subseteq U'$.
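To give a concrete instance of the quantity \eqref{**}, consider the measure preserving topological automorphism $\phi$ of $G=\mathbb R^2$ (with the Lebesgue measure $\mu$) defined by $\phi(x,y)=(2x,y/2)$, and the compact neighborhood $U=[-1,1]^2\in\mathcal C(G)$. Then
$$T_n(\phi,U)=U+\phi(U)+\ldots+\phi^{n-1}(U)=[-(2^n-1),2^n-1]\times[-s_n,s_n],\qquad s_n=\sum_{j=0}^{n-1}2^{-j}<2,$$
so $\log\mu(T_n(\phi,U))=n\log2+O(1)$, giving $H_{alg}(\phi,U)=\log 2$ and therefore $h_{alg}(\phi)\geq\log 2$.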
\begin{Remark} We asked above that the automorphism $\phi$ be ``measure preserving''. In this way one rules out many interesting cases of topological automorphisms that are not measure preserving (e.g., all automorphisms of $\mathbb R$ other than $\pm \mathrm{id}_\mathbb R$). This condition is imposed in order to respect the definition of the morphisms in ${\mathfrak{S}}^*$. If one further relaxes this condition on the morphisms in ${\mathfrak{S}}^*$ (without asking them to be contracting maps with respect to the pseudonorm),
then one can obtain a semigroup entropy that covers the topological and the algebraic entropy of arbitrary topological automorphisms of locally compact groups (see \cite{DGV} for more details).
\end{Remark}
\subsection{Algebraic $i$-entropy}
For a ring $R$ we denote by $\mathbf{Mod}_R$ the category of right $R$-modules and $R$-module homomorphisms. We consider here the algebraic $i$-entropy introduced in \cite{SZ}, giving a functor ${\mathfrak{sub}_i}:\mathbf{Mod}_R\to {\mathfrak{L}}$, to find $\mathrm{ent}_i$ from the general scheme. Here $i: \mathbf{Mod}_R \to \mathbb R_+$ is an invariant of $\mathbf{Mod}_R$ (i.e., $i(0)=0$ and $i(M) = i(N)$ whenever $M\cong N$). Consider the following conditions: \begin{itemize} \item[(a)] $i(N_1 + N_2)\leq i(N_1) + i(N_2)$ for all submodules $N_1$, $N_2$ of $M$; \item[(b)] $i(M/N)\leq i(M)$ for every submodule $N$ of $M$; \item[(b$^*$)] $i(N)\leq i(M)$ for every submodule $N$ of $M$. \end{itemize} The invariant $i$ is called \emph{subadditive} if (a) and (b) hold, and it is called \emph{preadditive} if (a) and (b$^*$) hold.
For $M\in\mathbf{Mod}_R$ denote by ${\cal L}(M)$ the lattice of all submodules of $M$. The operations are intersection and sum of two submodules, the bottom element is $\{0\}$ and the top element is $M$. Now fix a subadditive invariant $i$ of $\mathbf{Mod}_R$ and for a right $R$-module $M$ let $${\cal F}_i(M)=\{\mbox{submodules $N$ of $M$ with }i(N)< \infty\},$$ which is a subsemilattice of ${\cal L}(M)$ ordered by inclusion. Define a norm on ${\cal F}_i(M)$ by setting $$v(H)=i(H)$$ for every $H \in {\cal F}_i(M)$. The norm $v$ is not necessarily monotone (it is monotone if $i$ is both subadditive and preadditive).
For every homomorphism $\phi: M \to N$ in $\mathbf{Mod}_R$, \begin{center} ${\cal F}_i(\phi): {\cal F}_i(M) \to {\cal F}_i(N)$, defined by ${\cal F}_i(\phi)(H) =\phi(H)$, is a morphism in ${\mathfrak{L}}$. \end{center} Moreover the norm $v$ makes the morphism ${\cal F}_i(\phi)$ contractive by the property (b) of the invariant. Therefore, the assignments $M \mapsto {\cal F}_i(M)$ and $\phi\mapsto {\cal F}_i(\phi)$ define a covariant functor $$\mathfrak{sub}_i:\mathbf{Mod} _R\to {\mathfrak{L}}.$$ We can conclude that, for a ring $R$ and a subadditive invariant $i$ of $\mathbf{Mod}_R$,
$$h_{\mathfrak{sub}_i}=\mathrm{ent}_i.$$
If $i$ is preadditive, the functor ${\mathfrak{sub}_i}$ sends monomorphisms to embeddings and so $\mathrm{ent}_i$ is monotone under taking submodules. If $i$ is both subadditive and preadditive then for every $R$-module $M$ the norm of ${\mathfrak{sub}_i}(M)$ is s-monotone, so $\mathrm{ent}_{i}$ satisfies also the Logarithmic Law. In general this entropy is not monotone under taking quotients, but this can be obtained with stronger hypotheses on $i$ and with some restriction on the domain of ${\mathfrak{sub}_i}$.
A clear example is given by vector spaces; the algebraic entropy $\mathrm{ent}_{\dim}$ for linear transformations of vector spaces was considered in full detail in \cite{GBS}:
\begin{Example} Let $K$ be a field. For every $K$-vector space $V$ let ${\cal F}_d(V)$ be the set of all finite-dimensional subspaces of $V$.
Then $({\cal F}_d(V),+)$ is a subsemilattice of $({\cal L}(V),+)$ and $v(H)=\dim H$ defines a monotone norm on ${\cal F}_d(V)$. For every morphism $\phi: V \to W$ in $\mathbf{Mod}_K$ \begin{center} the map ${\cal F}_d(\phi): {\cal F}_d(V) \to {\cal F}_d(W)$, defined by $H\mapsto\phi(H)$, is a morphism in ${\mathfrak{L}}$. \end{center}
Therefore, the assignments $V \mapsto {\cal F}_d(V)$ and $\phi\mapsto {\cal F}_d(\phi)$ define a covariant functor $$\mathfrak{sub}_d:\mathbf{Mod}_K\to {\mathfrak{L}}.$$ Then $$h_{\mathfrak{sub}_d}=\mathrm{ent}_{\dim}.$$ Note that this entropy can be computed as follows. Every flow $\phi: V \to V$ of $\mathbf{Mod}_K$ can be considered as a $K[X]$-module $V_\phi$ by letting $X$ act on $V$ as $\phi$. Then $h_{\mathfrak{sub}_d}(\phi)$ coincides with the rank of the $K[X]$-module $V_\phi$. \end{Example}
\subsection{Adjoint algebraic entropy}
We consider now again the category $\mathbf{Grp}$ of all groups and their homomorphisms, giving a functor $\mathfrak{sub}^\star:\mathbf{Grp}\to {\mathfrak{L}}$ such that the entropy defined using this functor coincides with the adjoint algebraic entropy $\mathrm{ent}^\star$ introduced in \cite{DGS}.
For a group $G$ denote by ${\cal C}(G)$ the family of all subgroups of finite index in $G$. It is a subsemilattice of $({\cal L}(G), \cap)$. For $N\in{\cal C}(G)$, let $$v(N) = \log[G:N];$$ then \begin{center} $({\cal C}(G),v)$ is a normed semilattice, \end{center} with neutral element $G$; moreover the norm $v$ is monotone.
For every group homomorphism $\phi: G \to H$ \begin{center} the map ${\cal C}(\phi): {\cal C}(H) \to {\cal C}(G)$, defined by $N\mapsto \phi^{-1}(N)$, is a morphism in ${\mathfrak{L}}$. \end{center} Then the assignments $G\mapsto{\cal C}(G)$ and $\phi\mapsto{\cal C}(\phi)$ define a contravariant functor $$\mathfrak{sub}^\star:\mathbf{Grp}\to {\mathfrak{L}}.$$ Moreover $$h_{\mathfrak{sub}^\star}=\mathrm{ent}^\star.$$
There exists also a version of the adjoint algebraic entropy for modules, namely the adjoint algebraic $i$-entropy $\mathrm{ent}_i^\star$ (see \cite{Vi}), which can be treated analogously.
\subsection{Topological entropy for totally disconnected compact groups}
Let $(G,\tau)$ be a totally disconnected compact group and consider the filter base ${\cal V}_G(1)$ of open subgroups of $G$. Then \begin{center} $({\cal V}_G(1), \cap)$ is a normed semilattice \end{center} with neutral element $G \in {\cal V}_G(1)$ and norm defined by $v_o(V)=\log [G:V]$ for every $V\in{\cal V}_G(1)$.
For a continuous homomorphism $\phi: G\to H$ between compact groups, \begin{center} the map ${\cal V}_H(1)\to {\cal V}_G(1)$, defined by $V \mapsto \phi^{-1}(V)$, is a morphism in ${\mathfrak{L}}$. \end{center} This defines a contravariant functor $$\mathfrak{sub}_o^\star:\mathbf{TdCGrp}\to{\mathfrak{L}},$$ which is a subfunctor of $\mathfrak{sub}^\star$.
Then the entropy $h_{\mathfrak{sub}^\star_o}$ coincides with the restriction to $\mathbf{TdCGrp}$ of the topological entropy $h_{top}$.
This functor is related also to the functor $\mathfrak{cov}:\mathbf{TdCGrp} \to{\mathfrak{S}}$. Indeed, let $G$ be a totally disconnected compact group. Each $V\in {\cal V}_G(1)$ defines a cover ${\cal U}_V=\{x\cdot V\}_{x\in G}$ of $G$ with $v_o(V)=v({\cal U}_V)$. So the map $V \mapsto {\cal U}_V$ defines an isomorphism between the normed semilattice $\mathfrak{sub}_o^\star(G)={\cal V}_G(1)$ and the subsemigroup $\mathfrak{cov}_s(G)=\{{\cal U}_V:V \in {\cal V}_G(1)\}$ of $\mathfrak{cov}(G)$.
\subsection{Bridge Theorem}\label{BTsec}
In Definition \ref{BTdef} we have formalized the concept of Bridge Theorem between entropies $h_1:\mathfrak X_1 \to \mathbb R_+$ and $h_2:\mathfrak X_2 \to \mathbb R_+$ via functors $\varepsilon: \mathfrak X_1 \to \mathfrak X_2$. Obviously, the Bridge Theorem with respect to the functor $\varepsilon$ is available when each $h_i$ has the form $h_i= h_{F_i}$ for appropriate functors $F_i: \mathfrak{X}_i \to {\mathfrak{S}}$ ($i= 1,2$) that commute with $\varepsilon$ (i.e., $F_1 = F_2 \varepsilon$), that is $$ h_2(\varepsilon(\phi))= h_1(\phi)\ \mbox{ for all morphisms $\phi$ in }\ \mathfrak X_1. $$ Actually, it is sufficient that $F_i$ commute with $\varepsilon$ ``modulo $h_{\mathfrak{S}}$" (i.e., $h_{\mathfrak{S}} F_1 = h_{\mathfrak{S}} F_2 \varepsilon$) to obtain this conclusion: \begin{equation}\label{Buzz} \xymatrix@R=6pt@C=37pt { \mathfrak{X}_1\ar[dd]_{\varepsilon}\ar[dr]^{F_1}\ar@/^2pc/[rrd]^{h_{1}} & & \\
& {{\mathfrak{S}}}\ar[r]|-{ {h_{\mathfrak{S}}}}&\mathbb R^+ \\ \mathfrak{X}_2\ar[ur]_{F_2}\ar@/_2pc/[rru]_{h_{2}} & & } \end{equation}
In particular, the Pontryagin duality functor {$\ \widehat{}: {\mathbf{AbGrp}} \to {\mathbf{CAbGrp}}$} connects the category of abelian groups with that of compact abelian groups, and so it connects the respective entropies $h_{alg}$ and $h_{top}$ by a Bridge Theorem. Restricting to torsion abelian groups and totally disconnected compact abelian groups one obtains:
\begin{Theorem}[Weiss Bridge Theorem]\emph{\cite{W}}\label{WBT} Let $K$ be a totally disconnected compact abelian group and $\phi: K\to K$ a continuous endomorphism. Then $h_{top}(\phi) = \mathrm{ent}(\widehat \phi)$. \end{Theorem} \begin{proof} Since totally disconnected compact groups are zero-dimensional, every finite open cover $\mathcal U$ of $K$ admits a refinement consisting of clopen sets in $K$. Moreover, since $K$ admits a local base at $0$ formed by open subgroups, it is possible to find a refinement of $\mathcal U$ of the form $\mathcal U_V$ for some open subgroup $V$ of $K$. This proves that $\mathfrak{cov}_s(K)$ is cofinal in $\mathfrak{cov}(K)$. Hence, we have $$ h_{top}(\phi)=h_{\mathfrak{S}}(\mathfrak{cov}(\phi))=h_{\mathfrak{S}}(\mathfrak{cov}_s(\phi)). $$ Moreover, we have seen above that $\mathfrak{cov}_s(K)$ is isomorphic to $\mathfrak{sub}^\star_o(K)$, so one can conclude that $$h_{\mathfrak{S}}(\mathfrak{cov}_s(\phi))=h_{\mathfrak{S}}(\mathfrak{sub}^\star_o (\phi)).$$ Now the semilattice isomorphism $\mathfrak{sub}^\star_o(K)={\cal V}_K(1)\to \mathcal F(\widehat K)$ given by $N \mapsto N^\perp$ preserves the norms, so it is an isomorphism in ${\mathfrak{S}}$. Hence $$h_{\mathfrak{S}}(\mathfrak{sub}^\star_o (\phi))=h_{\mathfrak{S}}(\mathfrak{sub}(\widehat \phi))$$ and consequently $$h_{top}(\phi)= h_{\mathfrak{S}}(\mathfrak{sub}(\widehat \phi))=\mathrm{ent}(\widehat \phi).$$ \end{proof}
The proof of the Weiss Bridge Theorem can be summarized by the following diagram. \begin{equation*} \xymatrix@R=6pt@C=37pt {
(\widehat K,\widehat\phi)\ar[r]^{\mathfrak{sub}}\ar@/^4.5pc/[rrrddd]^{h_{\mathfrak{sub}}}&(({\cal F}(\widehat K),+);\mathfrak{sub}(\widehat \phi))\ar[dd]_{\widehat{}}\ar[dddrr]|-{h_{\mathfrak{S}}}& &\\ & & &\\ &((\mathfrak{sub}^\star_o(K),\cap);\mathfrak{sub}^\star_o(\phi))\ar[dd]_{\gamma} & & \\ & & & \mathbb R^+ \\ &((\mathfrak{cov}_{s}(K),\vee);\phi)\ar@{^{(}->}[dd]_{\iota} & & \\ & & &\\
(K,\phi)\ar@/^3pc/[uuuuuu]^{\widehat{}\;\;}\ar[r]_{\mathfrak{cov}}\ar@/_4.5pc/[rrruuu]_{h_{\mathfrak{cov}}}&((\mathfrak{cov}(K),\vee);\mathfrak{cov}(\phi))\ar[uuurr]|-{h_{\mathfrak{S}}} & & } \end{equation*}
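As a concrete instance of the Weiss Bridge Theorem (a standard example, recalled here only to fix ideas), take a prime $p$, let $K=(\mathbb Z/p\mathbb Z)^{\mathbb N}$ and let $\phi$ be the one-sided shift $(x_0,x_1,x_2,\ldots)\mapsto(x_1,x_2,\ldots)$, a continuous endomorphism of the totally disconnected compact abelian group $K$ with $h_{top}(\phi)=\log p$. The Pontryagin dual of $K$ is $\widehat K\cong\bigoplus_{\mathbb N}\mathbb Z/p\mathbb Z$, and a direct computation with the duality pairing shows that $\widehat\phi$ is the right Bernoulli shift $\beta_{\mathbb Z/p\mathbb Z}$, whose algebraic entropy is $\log p$ (see Example \ref{shift} below); so indeed $h_{top}(\phi)=\mathrm{ent}(\widehat\phi)$.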
Similar Bridge Theorems hold for other known entropies; they can be proved using analogous diagrams (see \cite{DGV1}). The first one that we recall concerns the algebraic entropy $\mathrm{ent}$ and the adjoint algebraic entropy $\mathrm{ent}^\star$:
\begin{Theorem} Let $\phi: G\to G$ be an endomorphism of an abelian group. Then $\mathrm{ent}^\star(\phi) = \mathrm{ent}(\widehat\phi)$. \end{Theorem}
The other two Bridge Theorems that we recall here connect respectively the set-theoretic entropy $\mathfrak h$ with the topological entropy $h_{top}$ and the contravariant set-theoretic entropy $\mathfrak h^*$ with the algebraic entropy $h_{alg}$.
We first need to recall the notion of generalized shift, which extends the Bernoulli shifts. For a map $\lambda:X\to Y$ between two non-empty sets and a fixed non-trivial group $K$, define $\sigma_\lambda:K^Y \to K^X$ by $\sigma_\lambda(f) = f\circ \lambda $ for $f\in K^Y$. For $Y = X$, $\lambda$ is a self-map of $X$ and $\sigma_\lambda$ was called the \emph{generalized shift} of $K^X$ (see \cite{AADGH,AZD}). In this case $\bigoplus_X K$ is a $\sigma_\lambda$-invariant subgroup of $K^X$ precisely when $\lambda$ is finite-to-one. We denote $\sigma_\lambda\restriction_{\bigoplus_XK}$ by $\sigma_\lambda^\oplus$.
Item (a) in the next theorem was proved in \cite{AZD} (see also \cite[Theorem 7.3.4]{DG-islam}) while item (b) is \cite[Theorem 7.3.3]{DG-islam} (in the abelian case it was obtained in \cite{AADGH}).
\begin{Theorem} \emph{\cite{AZD}} Let $K$ be a non-trivial finite group, let $X$ be a set and $\lambda:X\to X$ a self-map. \begin{itemize}
\item[(a)]Then $h_{top}(\sigma_\lambda)=\mathfrak h(\lambda)\log|K|$.
\item[(b)] If $\lambda$ is finite-to-one, then $h_{alg}(\sigma_\lambda^\oplus)=\mathfrak h^*(\lambda)\log|K|$. \end{itemize} \end{Theorem}
In terms of functors, for a fixed non-trivial finite group $K$, let $\mathcal F_K: \mathbf{Set}\to \mathbf{TdCGrp}$ be the functor defined on flows, sending a non-empty set $X$ to $K^X$, $\emptyset$ to $0$, and a self-map $\lambda:X\to X$ to $\sigma_\lambda:K^X\to K^X$
when $X\ne \emptyset$. Then the pair $(\mathfrak h, h_{top})$ satisfies $(BT_{\mathcal F_K})$ with constant $\log |K|$.
Analogously, let $\mathcal G_K: \mathbf{Set}_\mathrm{fin}\to \mathbf{Grp}$ be the functor defined on flows sending $X$ to $\bigoplus_X K$ and a finite-to-one self-map $\lambda:X\to X$ to $\sigma_\lambda^\oplus:\bigoplus_X K\to \bigoplus_X K$. Then the pair $(\mathfrak h^*, h_{alg})$ satisfies $(BT_{\mathcal G_K})$ with constant $\log |K|$.
\begin{Remark} At the conference held in Porto Cesareo, R. Farnsteiner posed the following question related to the Bridge Theorem: has $h_{top}$ been studied for non-Hausdorff compact spaces?
The question was motivated by the fact that the prime spectrum $\mathrm{Spec}(A)$ of a commutative ring $A$ is usually a non-Hausdorff compact space. Related to this question and to the entropy $h_\lambda$ defined for endomorphisms $\phi$ of local Noetherian rings $A$ (see \S \ref{NewSec1}), one may ask if there is any relation (e.g., a weak Bridge Theorem) between these two entropies and the functor $\mathrm{Spec}$; more precisely, one can ask whether there is any stable relation between $h_{top}(\mathrm{Spec}(\phi))$ and $h_\lambda(\phi)$. \end{Remark}
\section{Algebraic entropy and its specific properties}\label{alg-sec}
In this section we give an overview of the basic properties of the algebraic entropy and the adjoint algebraic entropy. Indeed, we have seen that they satisfy the general scheme presented in the previous section, but on the other hand they were defined for specific group endomorphisms, and these definitions make it possible to prove specific features, as we briefly describe below. For further details and examples see \cite{DG}, \cite{DGS} and \cite{DG-islam}.
\subsection{Definition and basic properties}
Let $G$ be a group and $\phi:G\to G$ an endomorphism. For a finite subset $F$ of $G$, and for $n\in\mathbb N_+$, the \emph{$n$-th $\phi$-trajectory} of $F$ is \begin{equation*}\label{T_n} T_n(\phi,F)=F\cdot\phi(F)\cdot\ldots\cdot\phi^{n-1}(F); \end{equation*} moreover let \begin{equation}\label{gamma}
{\gamma_{\phi,F}(n)}=|T_n(\phi,F)|. \end{equation} The \emph{algebraic entropy of $\phi$ with respect to $F$} is \begin{equation*}\label{H} H_{alg}(\phi,F)={\lim_{n\to \infty}\frac{\log \gamma_{\phi,F}(n) }{n}}; \end{equation*}
This limit exists as $H_{alg}(\phi,F)=h_{\mathfrak{S}}(\mathcal H(\phi),F)$ and so Theorem \ref{limit} applies (see also \cite{DG-islam} for a direct proof of the existence of this limit and \cite{DG} for the abelian case). The \emph{algebraic entropy} of $\phi:G\to G$ is $$ h_{alg}(\phi)=\sup\{H_{alg}(\phi,F): F\ \text{finite subset of}\ G\}=h_{\mathfrak{S}}(\mathcal H(\phi)). $$ Moreover $$ \mathrm{ent}(\phi)=\sup\{H_{alg}(\phi,F): F\ \text{finite subgroup of}\ G\}. $$ If $G$ is abelian, then $\mathrm{ent}(\phi)=\mathrm{ent}(\phi\restriction_{t(G)})= h_{alg}(\phi\restriction_{t(G)})$.
Moreover, $h_{alg}(\phi) = \mathrm{ent}(\phi)$ if $G$ is locally finite, that is, every finite subset of $G$ generates a finite subgroup; note that every locally finite group is obviously torsion, while the converse holds true under the hypothesis that the group is abelian (the solution of the Burnside Problem shows that even groups of finite exponent may fail to be locally finite).
For every abelian group $G$, the identity map has $h_{alg}(\mathrm{id}_G)=0$ (as the normed semigroup $\mathcal H(G)$ is arithmetic, as seen above). Another basic example is given by the endomorphisms of $\mathbb Z$, indeed if $\phi: \mathbb Z \to \mathbb Z$ is given by $\phi(x) = mx$ for some positive integer $m$, then $h_{alg}(\phi) = \log m$. The fundamental example for the algebraic entropy is the right Bernoulli shift:
\begin{Example}\label{shift}(Bernoulli normalization) Let $K$ be a group. \begin{itemize} \item[(a)] The \emph{right Bernoulli shift} $\beta_K:K^{(\mathbb N)}\to K^{(\mathbb N)}$ is defined by $$(x_0,\ldots,x_n,\ldots)\mapsto (1,x_0,\ldots,x_n,\ldots).$$
Then $h_{alg}(\beta_K)=\log|K|$, with the usual convention that $\log|K|=\infty$ when $K$ is infinite. \item[(b)] The \emph{left Bernoulli shift} ${}_K\beta:K^{(\mathbb N)}\to K^{(\mathbb N)}$ is defined by $$(x_0,\ldots,x_n,\ldots)\mapsto (x_1,\ldots,x_{n+1},\ldots).$$ Then $h_{alg}({}_K\beta)=0$, as ${}_K\beta$ is locally nilpotent. \end{itemize} \end{Example}
The following basic properties of the algebraic entropy are consequences of the general scheme and were proved directly in \cite{DG-islam}.
\begin{fact}\label{properties} \emph{Let $G$ be a group and $\phi:G\to G$ an endomorphism. \begin{itemize} \item[(a)]\emph{[Invariance under conjugation]} If $\phi=\xi^{-1}\psi\xi$, where $\psi:H\to H$ is an endomorphism and $\xi:G\to H$ isomorphism, then $h_{alg}(\phi) = h_{alg}(\psi)$. \item[(b)]\emph{[Monotonicity]} If $H$ is a $\phi$-invariant normal subgroup of the group $G$, and $\overline\phi:G/H\to G/H$ is the endomorphism induced by $\phi$, then $h_{alg}(\phi)\geq \max\{h_{alg}(\phi\restriction_H),h_{alg}(\overline{\phi})\}$.
\item[(c)]\emph{[Logarithmic Law]} For every $k\in\mathbb N$ we have $h_{alg}(\phi^k) = k \cdot h_{alg}(\phi)$; if $\phi$ is an automorphism, then $h_{alg}(\phi)=h_{alg}(\phi^{-1})$, so $h_{alg}(\phi^k)=|k|\cdot h_{alg}(\phi)$ for every $k\in\mathbb Z$. \item[(d)]\emph{[Continuity]} If $G$ is direct limit of $\phi$-invariant subgroups $\{G_i : i \in I\}$, then $h_{alg}(\phi)=\sup_{i\in I}h_{alg}(\phi\restriction_{G_i}).$ \item[(e)]\emph{[Weak Addition Theorem]} If $G=G_1\times G_2$ and $\phi_i:G_i\to G_i$ is an endomorphism for $i=1,2$, then $h_{alg}(\phi_1\times\phi_2)=h_{alg}(\phi_1)+h_{alg}(\phi_2)$. \end{itemize} } \end{fact}
As described for the semigroup entropy in the previous section, and as noted in \cite[Remark 5.1.2]{DG-islam}, for group endomorphisms $\phi:G\to G$ it is possible to define also a ``left'' algebraic entropy, letting for a finite subset $F$ of $G$, and for $n\in\mathbb N_+$, $$T_n^\#(\phi,F)=\phi^{n-1}(F)\cdot\ldots\cdot\phi(F)\cdot F,$$
$$H^\#_{alg}(\phi,F)={\lim_{n\to \infty}\frac{\log |T^\#_n(\phi,F)|}{n}}$$ and $$h_{alg}^\#(\phi)=\sup\{H^\#_{alg}(\phi,F): F\ \text{finite subset of}\ G\}.$$ Answering a question posed in \cite[Remark 5.1.2]{DG-islam}, we see now that $$h_{alg}(\phi)=h_{alg}^\#(\phi).$$ Indeed, every finite subset of $G$ is contained in a finite subset $F$ of $G$ such that $1\in F$ and $F={F^{-1}}$; for such $F$ we have $$H_{alg}(\phi,F)=H_{alg}^\#(\phi,F),$$ since, for every $n\in\mathbb N_+$, \begin{equation*}\begin{split} T_n(\phi,F)^{-1}=\phi^{n-1}(F)^{-1}\cdot\ldots\cdot\phi(F)^{-1}\cdot F^{-1}=\\ \phi^{n-1}(F^{-1})\cdot\ldots\cdot\phi(F^{-1})\cdot F^{-1}=T_n^\#(\phi,F) \end{split}\end{equation*}
and so $|T_n(\phi,F)|=|T_n(\phi,F)^{-1}|=|T_n^\#(\phi,F)|$.
\subsection{Algebraic Yuzvinski Formula, Addition Theorem and Uni\-que\-ness}\label{ab-sec}
We recall now some of the main deep properties of the algebraic entropy in the abelian case. They are not consequences of the general scheme and are proved using the specific features of the algebraic entropy coming from the definition given above. We give here the references to the papers where these results were proved, for a general exposition on algebraic entropy see the survey paper \cite{DG-islam}.
The next proposition shows that the study of the algebraic entropy for torsion-free abelian groups can be reduced to the case of divisible ones. It was announced for the first time by Yuzvinski \cite{Y1}, for a proof see \cite{DG}.
\begin{Proposition}\label{AA_} Let $G$ be a torsion-free abelian group, $\phi:G\to G$ an endomorphism and denote by $\widetilde\phi$ the (unique) extension of $\phi$ to the divisible hull $D(G)$ of $G$. Then $h_{alg}(\phi)=h_{alg}(\widetilde\phi)$. \end{Proposition}
Let $f(t)=a_nt^n+a_{n-1}t^{n-1}+\ldots+a_0\in\mathbb Z[t]$ be a primitive polynomial and let $\{\lambda_i:i=1,\ldots,n\}\subseteq\mathbb C$ be the set of all roots of $f(t)$. The \emph{(logarithmic) Mahler measure} of $f(t)$ is $$m(f(t))= \log|a_n| + \sum_{|\lambda_i|>1}\log |\lambda_i|.$$ The Mahler measure plays an important role in number theory and arithmetic geometry and is involved in the famous Lehmer Problem, asking whether $\inf\{m(f(t)):f(t)\in\mathbb Z[t]\ \text{primitive}, m(f(t))>0\}>0$ (for example see \cite{Ward0} and \cite{Hi}).
If $g(t)\in\mathbb Q[t]$ is monic, then there exists a smallest positive integer $s$ such that $sg(t)\in\mathbb Z[t]$; in particular, $sg(t)$ is primitive. The Mahler measure of $g(t)$ is defined as $m(g(t))=m(sg(t))$. Moreover, if $\phi:\mathbb Q^n\to \mathbb Q^n$ is an endomorphism, its characteristic polynomial $p_\phi(t)\in\mathbb Q[t]$ is monic, and the Mahler measure of $\phi$ is $m(\phi)=m(p_\phi(t))$.
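The Mahler measure is straightforward to approximate numerically from the complex roots of the polynomial. The following minimal sketch (our own illustration, not part of the original text; it assumes that Python with NumPy is available) evaluates $m(f(t))$ for the polynomial $2t-3$, whose Mahler measure is $\log 2+\log\frac32=\log3$, and for Lehmer's polynomial, which has the smallest known positive Mahler measure ($\approx0.1624$).
\begin{verbatim}
import numpy as np

def mahler_measure(coeffs):
    # coeffs = [a_n, a_{n-1}, ..., a_0]: integer coefficients, leading one first
    roots = np.roots(coeffs)
    return float(np.log(abs(coeffs[0]))
                 + sum(np.log(abs(r)) for r in roots if abs(r) > 1))

print(mahler_measure([2, -3]))      # log 3 ~ 1.0986
# Lehmer's polynomial t^10 + t^9 - t^7 - t^6 - t^5 - t^4 - t^3 + t + 1
print(mahler_measure([1, 1, 0, -1, -1, -1, -1, -1, 0, 1, 1]))   # ~ 0.1624
\end{verbatim}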
The formula \eqref{yuzeq} below was recently given a direct proof in \cite{GV}; it is the algebraic counterpart of the so-called Yuzvinski Formula for the topological entropy \cite{Y1} (see also \cite{LW}). It gives the values of the algebraic entropy of linear transformations of finite-dimensional rational vector spaces in terms of the Mahler measure, and so it connects the algebraic entropy with the Lehmer Problem.
\begin{Theorem}[Algebraic Yuzvinski Formula] \label{AYF}\emph{\cite{GV}} Let $n\in\mathbb N_+$ and $\phi:\mathbb Q^n\to\mathbb Q^n$ an endomorphism. Then \begin{equation}\label{yuzeq} h_{alg}(\phi)=m(\phi). \end{equation} \end{Theorem}
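As a quick illustration of how the formula is applied (a routine computation, included here only for the reader's convenience), let $\phi:\mathbb Q\to\mathbb Q$ be multiplication by $p/q$, where $p,q$ are coprime integers with $q\geq1$ and $p\neq0$. Then $p_\phi(t)=t-\tfrac pq$, the integer clearing denominators is $s=q$, and $sp_\phi(t)=qt-p$ has the single root $p/q$, so $$h_{alg}(\phi)=m(\phi)=\log q+\max\Big\{\log\Big|\frac pq\Big|,0\Big\}=\log\max\{|p|,q\}.$$ In particular, multiplication by $3/2$ has algebraic entropy $\log3$, and multiplication by a non-zero integer $m$ has algebraic entropy $\log|m|$, in accordance (via Proposition \ref{AA_}) with the value recalled above for $\phi(x)=mx$ on $\mathbb Z$.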
The next property of additivity of the algebraic entropy was first proved for torsion abelian groups in \cite{DGSZ}, while the proof of the general case was given in \cite{DG} applying the Algebraic Yuzvinski Formula.
\begin{Theorem}[Addition Theorem]\emph{\cite{DG}}\label{AT} Let $G$ be an abelian group, $\phi:G\to G$ an endomorphism, $H$ a $\phi$-invariant subgroup of $G$ and $\overline\phi:G/H\to G/H$ the endomorphism induced by $\phi$. Then $$h_{alg}(\phi)=h_{alg}(\phi\restriction_H)+ h_{alg}(\overline\phi).$$ \end{Theorem}
Moreover, uniqueness is available for the algebraic entropy in the category of all abelian groups. As in the case of the Addition Theorem, also the Uniqueness Theorem was proved in general in \cite{DG}, while it was previously proved in \cite{DGSZ} for torsion abelian groups.
\begin{Theorem}[Uniqueness Theorem]\label{UT}\emph{\cite{DG}} The algebraic entropy $$h_{alg}:\mathrm{Flow}_\mathbf{AbGrp}\to\mathbb R_+$$ is the unique function such that: \begin{itemize} \item[(a)] $h_{alg}$ is invariant under conjugation; \item[(b)] $h_{alg}$ is continuous on direct limits; \item[(c)] $h_{alg}$ satisfies the Addition Theorem;
\item[(d)] for $K$ a finite abelian group, $h_{alg}(\beta_K)=\log|K|$; \item[(e)] $h_{alg}$ satisfies the Algebraic Yuzvinski Formula. \end{itemize} \end{Theorem}
\subsection{The growth of a finitely generated flow in $\mathbf{Grp}$}\label{Growth-sec}
In order to measure and classify the growth rate of maps $\mathbb N \to \mathbb N$, one needs the relation $\preceq$ defined as follows. For $\gamma, \gamma': \mathbb N \to \mathbb N$ let $\gamma \preceq \gamma'$ if there exist $n_0,C\in\mathbb N_+$ such that $\gamma(n) \leq \gamma'(Cn)$ for every $n\geq n_0$. Moreover, $\gamma\sim\gamma'$ if $\gamma\preceq\gamma'$ and $\gamma'\preceq\gamma$ (so $\sim$ is an equivalence relation), and $\gamma\prec\gamma'$ if $\gamma\preceq\gamma'$ but $\gamma\not\sim\gamma'$.
For example, for every $\alpha, \beta\in\mathbb R_{\geq0}$, $n^\alpha\sim n^\beta$ if and only if $\alpha=\beta$; if $p(t)\in\mathbb Z[t]$ and $p(t)$ has degree $d\in\mathbb N$, then $p(n)\sim n^d$. On the other hand, $a^n\sim b^n$ for every $a,b\in\mathbb R$ with $a,b>1$, so in particular all exponentials are equivalent with respect to $\sim$.
So a map $\gamma: \mathbb N \to \mathbb N$ is called: \begin{itemize} \item[(a)] \emph{polynomial} if $\gamma(n) \preceq n^d$ for some $d\in\mathbb N_+$; \item[(b)] \emph{exponential} if $\gamma(n) \sim 2^n$; \item[(c)] \emph{intermediate} if $\gamma(n)\succ n^d$ for every $d\in\mathbb N_+$ and $\gamma(n)\prec 2^n$. \end{itemize}
Let $G$ be a group, $\phi:G\to G$ an endomorphism and $F$ a non-empty finite subset of $G$. Consider the function, already mentioned in \eqref{gamma}, $$
\gamma_{\phi,F}:\mathbb N_+\to\mathbb N_+\ \text{defined by}\ \gamma_{\phi,F}(n)=|T_n(\phi,F)|\ \text{for every}\ n\in\mathbb N_+. $$ Since $$
|F|\leq\gamma_{\phi,F}(n)\leq|F|^n\mbox{ for every }n\in\mathbb N_+, $$
the growth of $\gamma_{\phi,F}$ is always at most exponential; moreover, $H_{alg}(\phi,F)\leq \log |F|$. So, following \cite{DG0} and \cite{DG-islam}, we say that $\phi$ has \emph{polynomial} (respectively, \emph{exponential}, \emph{intermediate}) \emph{growth at $F$} if $\gamma_{\phi,F}$ is polynomial (respectively, exponential, intermediate).
Before proceeding further, let us make an important point. All the properties considered above essentially concern the $\phi$-invariant subgroup $G_{\phi,F}$ of $G$ generated by the trajectory $T(\phi, F) = \bigcup_{n\in\mathbb N_+} T_n(\phi, F)$ and the restriction $\phi\restriction_{G_{\phi,F}}$.
\begin{Definition} We say that the flow $(G, \phi)$ in $\mathbf{Grp}$ is \emph{finitely generated} if $G = G_{\phi,F}$ for some finite subset $F$ of $G$. \end{Definition}
Hence, all properties listed above concern finitely generated flows in $\mathbf{Grp}$. We conjecture the following, knowing that it holds true when $G$ is abelian or when $\phi=\mathrm{id}_G$: if the flow $(G,\phi)$ is finitely generated, and if $G = G_{\phi,F}$ and $G = G_{\phi,F'}$ for some finite subsets $F$ and $F'$ of $G$, then $\gamma_{\phi,F}$ and $\gamma_{\phi,F'}$ have the same type of growth. In this case the growth of a finitely generated flow $G_{\phi,F}$ would not depend on the specific finite set of generators $F$ (so $F$ can always be taken symmetric). In particular, one could speak of growth of a finitely generated flow without any reference to a specific finite set of generators. Nevertheless, one can give in general the following
\begin{Definition} Let $(G,\phi)$ be a finitely generated flow in $\mathbf{Grp}$. We say that $(G,\phi)$ has \begin{itemize} \item[(a)] \emph{polynomial growth} if $\gamma_{\phi,F}$ is polynomial for every finite subset $F$ of $G$; \item[(b)] \emph{exponential growth} if there exists a finite subset $F$ of $G$ such that $\gamma_{\phi,F}$ is exponential; \item[(c)] \emph{intermediate growth} otherwise. \end{itemize} We denote by $\mathrm{Pol}$ and $\mathrm{Exp}$ the classes of finitely generated flows in $\mathbf{Grp}$ of polynomial and exponential growth respectively. Moreover, $\mathcal M=\mathrm{Pol}\cup\mathrm{Exp}$ is the class of finitely generated flows of non-intermediate growth. \end{Definition}
This notion of growth generalizes the classical one of growth of a finitely generated group, given independently by Schwarz \cite{Sch} and Milnor \cite{M1}. Indeed, if $G$ is a finitely generated group and $X$ is a finite symmetric set of generators of $G$, then $\gamma_X=\gamma_{\mathrm{id}_G,X}$ is the classical \emph{growth function} of $G$ with respect to $X$. To connect the terminology coming from the theory of algebraic entropy with the classical one, note that for $n\in\mathbb N_+$ we have $T_n(\mathrm{id}_G,X)=\{g\in G:\ell_X(g)\leq n\}$, where $\ell_X(g)$
is the length of the shortest word $w$ in the alphabet $X$ such that $w=g$ (see \S \ref{NewSec1} (c)). Since $\ell_X$ is a norm on $G$, $T_n(\mathrm{id}_G,X)$ is the ball of radius $n$ centered at $1$ and $\gamma_X(n)$ is the cardinality of this ball.
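For instance (standard computations, recorded here only for illustration), let $G=\mathbb Z^d$ and $X=\{0,\pm e_1,\ldots,\pm e_d\}$; then $T_n(\mathrm{id}_G,X)$ is the set of lattice points of $\ell_1$-norm at most $n$, so $\gamma_X(n)\sim n^d$ and $G$ has polynomial growth. For the free group $F_k$ ($k\geq2$) and $X=\{1,x_1^{\pm1},\ldots,x_k^{\pm1}\}$, where $x_1,\ldots,x_k$ are free generators, $T_n(\mathrm{id}_{F_k},X)$ is the set of reduced words of length at most $n$; since there are exactly $2k(2k-1)^{\ell-1}$ reduced words of length $\ell\geq1$, $$\gamma_X(n)=1+\sum_{\ell=1}^n 2k(2k-1)^{\ell-1}=1+\frac{k\big((2k-1)^n-1\big)}{k-1},$$ so $F_k$ has exponential growth and $H_{alg}(\mathrm{id}_{F_k},X)=\log(2k-1)$; for $k=2$ this is the value $\log3$ appearing in Example \ref{exaAugust}(a) below.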
Milnor \cite{M3} proposed the following problem on the growth of finitely generated groups.
\begin{problem}[Milnor Problem]\label{Milnor-pb}{\cite{M3}} Let $G$ be a finitely generated group and $X$ a finite set of generators of $G$. \begin{itemize} \item[(i)] Is the growth function $\gamma_X$ necessarily equivalent either to a power of $n$ or to the exponential function $2^n$? \item[(ii)] In particular, is the {growth exponent} $\delta_G=\limsup_{n\to \infty}\frac{\log\gamma_X(n)}{\log n}$ either a well defined integer or infinity? For which groups is $\delta_G$ finite? \end{itemize} \end{problem}
Part (i) of Problem \ref{Milnor-pb} was solved negatively by Grigorchuk in \cite{Gri1,Gri2,Gri3,Gri4}, where he constructed his famous examples of finitely generated groups $\mathbb G$ with intermediate growth. For part (ii) Milnor conjectured that $\delta_G$ is finite if and only if $G$ is virtually nilpotent (i.e., $G$ contains a nilpotent finite-index subgroup). The same conjecture was formulated by Wolf \cite{Wolf} (who proved that a nilpotent finitely generated group has polynomial growth) and Bass \cite{Bass}. Gromov \cite{Gro} confirmed Milnor's conjecture:
\begin{Theorem}[Gromov Theorem]\label{GT}\emph{\cite{Gro}} A finitely generated group $G$ has polynomial growth if and only if $G$ is virtually nilpotent. \end{Theorem}
The following two problems on the growth of finitely generated flows of groups are inspired by Milnor Problem.
\begin{problem} Describe the permanence properties of the class $\mathcal M$. \end{problem}
Some stability properties of the class $\mathcal M$ are easy to check. For example, stability under taking finite direct products is obviously available, while stability under taking subflows (i.e., invariant subgroups) and factors fails even in the classical case of identical flows. Indeed, Grigorchuk's group $\mathbb G$ is a quotient of a finitely generated free group $F$, that has exponential growth; so $(F,\mathrm{id}_F) \in \mathcal M$, while $(\mathbb G, \mathrm{id}_{\mathbb G})\not \in\mathcal M$. Furthermore, letting $G = \mathbb G \times F$, one has $(G,\mathrm{id}_G) \in \mathcal M$, while $(\mathbb G, \mathrm{id}_{\mathbb G})\not \in\mathcal M$, so $\mathcal M$ is not stable even under taking direct summands. On the other hand, stability under taking powers is available since $(G,\phi) \in \mathcal M$ if and only if $(G,\phi^n) \in \mathcal M$ for $n\in\mathbb N_+$.
\begin{problem}\label{Ques4} \begin{itemize} \item[(i)] Describe the finitely generated groups $G$ such that $(G,\phi)\in\mathcal M$ for every endomorphism $\phi:G\to G$. \item[(ii)] Does there exist a finitely generated group $G$ such that $(G,\mathrm{id}_G)\in\mathcal M$ but $(G,\phi)\not\in\mathcal M$ for some endomorphism $\phi:G\to G$? \end{itemize} \end{problem}
In item (i) of the above problem we are asking to describe all finitely generated groups $G$ of non-intermediate growth such that $(G,\phi)$ has still non-intermediate growth for every endomorphism $\phi:G\to G$. On the other hand, in item (ii) we ask to find a finitely generated group $G$ of non-intermediate growth that admits an endomorphism $\phi:G\to G$ of intermediate growth.
The basic relation between the growth and the algebraic entropy is given by Proposition \ref{exp} below. For a finitely generated group $G$, an endomorphism $\phi$ of $G$ and a pair $X$ and $X'$ of finite sets of generators of $G$, one has $\gamma_{\phi,X}\sim\gamma_{\phi,X'}$. Nevertheless, $H_{alg}(\phi,X)\neq H_{alg}(\phi,X')$ may occur; in this case $(G,\phi)$ necessarily has exponential growth. We give two examples to this effect:
\begin{Example}\label{exaAugust} \begin{itemize} \item[(a)] {\cite{DG-islam}} Let $G$ be the free group with two generators $a$ and $b$; then $X=\{a^{\pm 1},b^{\pm 1}\}$ gives $H_{alg}(\mathrm{id}_G,X)=\log 3$ while for $X'=\{a^{\pm 1},b^{\pm 1},(ab)^{\pm 1}\}$ we have $H_{alg}(\mathrm{id}_G,X')=\log 4$.
\item[(b)] Let $G = \mathbb Z$ and $\phi: \mathbb Z \to \mathbb Z$ defined by $\phi(x) = mx$ for every $x\in \mathbb Z$ and with $m>3$. Let also $X= \{0,\pm 1\}$ and $X'= \{0,\pm 1, \ldots \pm m\}$. Then $H_{alg}(\phi,X) \leq \log |X| =\log 3$, while $H_{alg}(\phi,X')= h_{alg}(\phi)= \log m$. \end{itemize} \end{Example}
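The dependence on the generating set exhibited in Example \ref{exaAugust}(b) can also be observed numerically. The following minimal sketch (our own illustration; plain Python, written additively since $\mathbb Z$ is abelian) computes $\gamma_{\phi,F}(n)=|T_n(\phi,F)|$ for $\phi(x)=5x$ on $\mathbb Z$ and prints the ratios $\frac{\log\gamma_{\phi,F}(n)}{n}$: for $X=\{0,\pm1\}$ they are constantly equal to $\log3\approx1.099$, while for $X'=\{0,\pm1,\ldots,\pm5\}$ they decrease towards $\log5\approx1.609$.
\begin{verbatim}
import math

def trajectory_sizes(m, F, N):
    # |T_n(phi, F)| for n = 1..N, where phi(x) = m*x on the additive group Z and
    # T_n(phi, F) = F + phi(F) + ... + phi^{n-1}(F).
    sizes, T = [], {0}
    for n in range(N):
        T = {t + (m ** n) * f for t in T for f in F}   # T_{n+1} = T_n + phi^n(F)
        sizes.append(len(T))
    return sizes

m, N = 5, 7
for name, F in [("X ", [0, 1, -1]), ("X'", list(range(-m, m + 1)))]:
    rates = [math.log(s) / (n + 1)
             for n, s in enumerate(trajectory_sizes(m, F, N))]
    print(name, [round(r, 3) for r in rates])
\end{verbatim}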
\begin{Proposition}\label{exp}\emph{\cite{DG-islam}} Let $(G,\phi)$ be a finitely generated flow in {\bf Grp}. \begin{itemize} \item[(a)]Then $h_{alg}(\phi)>0$ if and only if $(G,\phi)$ has exponential growth. \item[(b)]If $(G,\phi)$ has polynomial growth, then $h_{alg}(\phi)=0$. \end{itemize} \end{Proposition}
In general the converse implication in item (b) is not true even for the identity. Indeed, if $(G,\phi)$ has intermediate growth, then $h_{alg}(\phi)=0$ by item (a). So for Grigorchuk's group $\mathbb G$, the flow $(\mathbb G,\mathrm{id}_\mathbb G)$ has intermediate growth yet $h_{alg}(\mathrm{id}_\mathbb G)=0$. This motivates the following
\begin{Definition}\label{MPara} Let $\mathcal G$ be a class of groups and $\Phi$ be a class of morphisms. We say that the pair $(\mathcal G, \Phi)$ satisfies Milnor Paradigm (briefly, MP) if no finitely generated flow $(G,\phi)$ with $G\in\mathcal G$ and $\phi\in\Phi$ can have intermediate growth. \end{Definition}
In terms of the class $\mathcal M$, $$(\mathcal G, \Phi)\ \text{satisfies MP if and only if }\ (G,\phi)\in \mathcal M\ \text{ for every finitely generated flow $(G,\phi)$ with $G\in\mathcal G$ and $\phi\in\Phi$}.$$ Equivalently, $(\mathcal G, \Phi)$ satisfies MP when $h_{alg}(\phi)=0$ always implies that $(G,\phi)$ has polynomial growth for finitely generated flows $(G,\phi)$ with $G\in\mathcal G$ and $\phi\in\Phi$.
In these terms Milnor Problem \ref{Milnor-pb} (i) is asking whether the pair $(\mathbf{Grp},\mathcal{I}d)$ satisfies MP, where $\mathcal I d$ is the class of all identical endomorphisms. So we give the following general open problem.
\begin{problem}\label{PB0} \begin{itemize}
\item[(i)] Find pairs $(\mathcal G,\Phi)$ satisfying MP.
\item[(ii)] For a given $\Phi$ determine the properties of the largest class $\mathcal G_\Phi$ such that $(\mathcal G_\Phi, \Phi)$ satisfies MP.
\item[(iii)] For a given $\mathcal G$ determine the properties of the largest class $\Phi_{\mathcal G}$ such that $(\mathcal G, \Phi_{\mathcal G})$ satisfies MP.
\item[(iv)] Study the Galois correspondence between classes of groups $\mathcal G$ and classes of endomorphisms $\Phi$ determined by MP. \end{itemize} \end{problem}
According to the definitions, the class $\mathcal G_{\mathcal{I} d}$ coincides with the class of finitely generated groups of non-intermediate growth.
The following result solves Problem \ref{PB0} (iii) for $\mathcal G=\mathbf{AbGrp}$, showing that $\Phi_\mathbf{AbGrp}$ coincides with the class $\mathcal E$ of all endomorphisms.
\begin{Theorem}[Dichotomy Theorem]\emph{\cite{DG0}}\label{DT} There exist no finitely generated flows of intermediate growth in $\mathbf{AbGrp}$. \end{Theorem}
Actually, one can extend the validity of this theorem to nilpotent groups. This leaves open the following particular case of Problem \ref{PB0}. We shall see in Theorem \ref{osin} that the answer to (i) is positive when $\phi=\mathrm{id}_G$.
\begin{question}\label{Ques1} Let $(G,\phi)$ be a finitely generated flow in $\mathbf{Grp}$.
\begin{itemize} \item[(i)] If $G$ is solvable, does $(G,\phi)\in\mathcal M$? \item[(ii)] If $G$ is a free group, does $(G,\phi)\in\mathcal M$? \end{itemize} \end{question}
We state now explicitly a particular case of Problem \ref{PB0}, inspired by the fact that the right Bernoulli shifts have no non-trivial quasi-periodic points and they have uniform exponential growth (see Example \ref{bern}). In \cite{DG0} group endomorphisms $\phi:G\to G$ without non-trivial quasi-periodic points are called algebraically ergodic for their connection (in the abelian case and through Pontryagin duality) with ergodic transformations of compact groups.
\begin{question}\label{Ques2} Let $\Phi_0$ be the class of endomorphisms without non-trivial quasi-periodic points. Is it true that the pair $(\mathbf{Grp},\Phi_0)$ satisfies MP? \end{question}
For a finitely generated group $G$, the \emph{uniform exponential growth rate} of $G$ is defined as $$ \lambda(G)=\inf\{H_{alg}(\mathrm{id}_G,X):X\ \text{finite set of generators of}\ G\} $$
(see for instance \cite{dlH-ue}). Moreover, $G$ has \emph{uniform exponential growth} if $\lambda(G)>0$. Gromov \cite{GroLP} asked whether every finitely generated group of exponential growth is also of uniform exponential growth. This problem was recently solved by Wilson \cite{Wilson} in the negative.
Since the algebraic entropy of a finitely generated flow $(G,\phi)$ in $\mathbf{Grp}$ can be computed as $$ h_{alg}(\phi)=\sup\{H_{alg}(\phi,F): F\ \text{finite subset of $G$ such that $G=G_{\phi,F}$}\}, $$ one can give the following counterpart of the uniform exponential growth rate for flows:
\begin{Definition} For a finitely generated flow $(G,\phi)$ in $\mathbf{Grp}$ let $$ \lambda(G,\phi)=\inf\{H_{alg}(\phi,F): F\ \text{finite subset of $G$ such that $G=G_{\phi,F}$} \}. $$ The flow $(G,\phi)$ is said to have \emph{uniformly exponential growth} if $\lambda(G,\phi)>0$.
Let $\mathrm{Exp}_\mathrm u$ be the subclass of $\mathrm{Exp}$ of all finitely generated flows in $\mathbf{Grp}$ of uniform exponential growth. \end{Definition}
Clearly $\lambda(G,\phi)\leq h_{alg}(\phi)$, so one has the obvious implication
\begin{equation}\label{GP} h_{alg}(\phi)=0\ \Rightarrow\ \lambda(G,\phi)=0. \end{equation}
To formulate the counterpart of Gromov's problem on uniformly exponential growth it is worthwhile to isolate also the class $\mathcal W$ of finitely generated flows in $\mathbf{Grp}$ of exponential but not uniformly exponential growth
(i.e., $\mathcal W=\mathrm{Exp}\setminus \mathrm{Exp}_\mathrm u$). Then $\mathcal W$ is the class of finitely generated flows $(G,\phi)$ in $\mathbf{Grp}$ for which \eqref{GP} cannot be inverted, namely $h_{alg}(\phi)> 0=\lambda(G,\phi)$.
We start stating the following problem.
\begin{problem} Describe the permanence properties of the classes $\mathrm{Exp}_\mathrm u$ and $\mathcal W$. \end{problem}
It is easy to check that $\mathrm{Exp}_\mathrm u$ and $\mathcal W$ are stable under taking direct products. On the other hand, stability of $\mathrm{Exp}_\mathrm u$ under taking subflows (i.e., invariant subgroups) and factors fails even in the classical case of identical flows. Indeed, Wilson's group $\mathbb W$ is a quotient of a finitely generated free group $F$, that has uniform exponential growth (see \cite{dlH-ue}); so $(F,\mathrm{id}_F)\in \mathrm{Exp}_\mathrm u$, while $(\mathbb W, \mathrm{id}_{\mathbb W})\in\mathcal W$. Furthermore, letting $G = \mathbb W \times F$, one has $(G,\mathrm{id}_G)\in \mathrm{Exp}_\mathrm u$, while $(\mathbb W, \mathrm{id}_{\mathbb W})\in\mathcal W$, so $\mathrm{Exp}_\mathrm u$ is not stable even under taking direct summands.
In the line of MP, introduced in Definition \ref{MPara}, we can formulate also the following
\begin{Definition}\label{GPara} Let $\mathcal G$ be a class of groups and $\Phi$ be a class of morphisms. We say that the pair $(\mathcal G, \Phi)$ satisfies Gromov Paradigm (briefly, GP) if every finitely generated flow $(G,\phi)$ of exponential growth with $G\in\mathcal G$ and $\phi\in\Phi$ has uniform exponential growth. \end{Definition}
In terms of the class $\mathcal W$, $$ (\mathcal G, \Phi)\ \text{satisfies GP if and only if }\ (G,\phi)\not\in \mathcal W\ \text{ for every finitely generated flow $(G,\phi)$ with $G\in\mathcal G$ and $\phi\in\Phi$}. $$ In these terms, Gromov's problem on uniformly exponential growth asks whether the pair $(\mathbf{Grp}, \mathcal I d)$ satisfies GP. In analogy with the general Problem \ref{PB0}, one can consider the following obvious counterpart for GP:
\begin{problem}\label{PB1} \begin{itemize} \item[(i)] Find pairs $(\mathcal G, \Phi)$ satisfying GP. \item[(ii)] For a given $\Phi$ determine the properties of the largest class $\mathcal G_\Phi$ such that $(\mathcal G_\Phi,\Phi)$ satisfies GP. \item[(iii)] For a given $\mathcal G$ determine the properties of the largest class $\Phi_{\mathcal G}$ such that $(\mathcal G,\Phi_\mathcal G)$ satisfies GP. \item[(iv)] Study the Galois correspondence between classes of groups $\mathcal G$ and classes of endomorphisms $\Phi$ determined by GP. \end{itemize} \end{problem}
We see now in item (a) of the next example a particular class of finitely generated flows for which $\lambda$ coincides with $h_{alg}$ and they are both positive, so in particular these flows are all in $\mathrm{Exp}_\mathrm u$. In item (b) we leave an open question related to Question \ref{Ques2}.
\begin{Example}\label{bern} \begin{itemize}
\item[(a)] For a finite group $K$, consider the flow $(\bigoplus_\mathbb N K,\beta_K)$. We have seen in Example \ref{shift} that $h_{alg}(\beta_K)=\log|K|$. In this case we have $\lambda(\bigoplus_\mathbb N K,\beta_K)=\log|K|$, since a subset $F$ of $\bigoplus_\mathbb N K$ generating the flow $(\bigoplus_\mathbb N K,\beta_K)$ must contain the first copy $K_0$ of $K$ in $\bigoplus_\mathbb N K$, and $H_{alg}(\beta_K,K_0)=\log|K|$. \item[(b)] Is it true that $\lambda(G,\phi) = h_{alg}(\phi) > 0$ for every finitely generated flow $(G,\phi)$ in $\mathbf{Grp}$ such that $\phi \in \Phi_0$? In other terms, we are asking whether all finitely generated flows $(G,\phi)$ in $\mathbf{Grp}$ with $\phi\in\Phi_0$ have uniform exponential growth (i.e., are contained in $\mathrm{Exp}_\mathrm u$). \end{itemize} \end{Example}
One can also consider the pairs $(\mathcal G, \Phi)$ satisfying the conjunction MP \& GP.
For any finitely generated flow $(G,\phi)$ in $\mathbf{Grp}$ one has \begin{equation}\label{osin-eq} (G,\phi)\ \text{has polynomial growth}\ \ \buildrel{(1)}\over\Longrightarrow\ h_{alg}(\phi)=0\ \ \buildrel{(2)}\over\Longrightarrow\ \lambda(G,\phi)=0. \end{equation} The converse implication of (1) (respectively, (2)) holds for all $(G,\phi)$ with $G\in\mathcal G$ and $\phi\in\Phi$ precisely when the pair $(\mathcal G, \Phi)$ satisfies MP (respectively, GP).
Therefore, the pair $(\mathcal G, \Phi)$ satisfies the conjunction MP \& GP precisely when
the three conditions in \eqref{osin-eq} are all equivalent (i.e., $\lambda(G,\phi)=0 \Rightarrow (G,\phi)\in \mathrm{Pol}$) for all finitely generated flows $(G,\phi)$ with $G\in\mathcal G$ and $\phi\in\Phi$.
A large class of groups $\mathcal G$ such that $(\mathcal G, \mathcal I d)$ satisfies MP \& GP was found by Osin \cite{O} who proved that a finitely generated solvable group $G$ of zero uniform exponential growth is virtually nilpotent, and recently this result was generalized in \cite{O1} to elementary amenable groups. Together with Gromov Theorem and Proposition \ref{exp}, this gives immediately the following
\begin{Theorem}\label{osin} Let $G$ be a finitely generated
elementary amenable group. The following conditions are equivalent: \begin{itemize} \item[(a)] $h_{alg}(\mathrm{id}_G)=0$; \item[(b)] $\lambda(G)=0$; \item[(c)] $G$ is virtually nilpotent; \item[(d)] $G$ has polynomial growth. \end{itemize} \end{Theorem}
This theorem shows that the pair $(\mathcal G,\Phi)$ with $\mathcal G=\{\mbox{elementary amenable groups}\}$ and $\Phi =\mathcal{I} d$ satisfies MP and GP simultaneously. In other words, it proves that the three conditions in \eqref{osin-eq} are all equivalent when $G$ is a finitely generated elementary amenable group and $\phi=\mathrm{id}_G$.
\subsection{Adjoint algebraic entropy}\label{aent-sec}
We recall here the definition of the adjoint algebraic entropy $\mathrm{ent}^\star$ and we state some of its specific features that are not deducible from the general scheme, that is, beyond the ``package'' of general properties coming from the equality $\mathrm{ent}^\star=h_{\mathfrak{sub}^\star}$, such as Invariance under conjugation and inversion, the Logarithmic Law and Monotonicity for factors (these properties were proved in \cite{DG-islam} in the general case, and previously in \cite{DGS} in the abelian case, applying the definition).
In analogy to the algebraic entropy $\mathrm{ent}$, in \cite{DGS} the adjoint algebraic entropy of endomorphisms of abelian groups $G$ was introduced ``replacing" the family $\mathcal F(G)$ of all finite subgroups of $G$ with the family $\mathcal C(G)$ of all finite-index subgroups of $G$. The same definition was extended in \cite{DG-islam} to the more general setting of endomorphisms of arbitrary groups as follows. Let $G$ be a group and $N\in \mathcal C(G)$. For an endomorphism $\phi:G\to G$ and $n\in\mathbb N_+$, the \emph{$n$-th $\phi$-cotrajectory of $N$} is $$C_n(\phi,N)=N\cap\phi^{-1}(N)\cap\ldots\cap\phi^{-n+1}(N).$$
The \emph{adjoint algebraic entropy of $\phi$ with respect to $N$} is $$ H^\star(\phi,N)={\lim_{n\to \infty}\frac{\log[G:C_n(\phi,N)]}{n}}. $$ This limit exists as $H^\star(\phi,N)=h_{\mathfrak{S}}(\mathcal C(\phi),N)$ and so Theorem \ref{limit} applies. The \emph{adjoint algebraic entropy of $\phi$} is $$\mathrm{ent}^\star(\phi)=\sup\{H^\star(\phi,N):N\in\mathcal C(G)\}.$$
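To see the definition at work in the simplest case (a direct computation, included here only for illustration), let $\phi:\mathbb Z\to\mathbb Z$ be multiplication by an integer $m\neq0$ and let $N=k\mathbb Z\in\mathcal C(\mathbb Z)$. Then $\phi^{-j}(k\mathbb Z)=\frac{k}{\gcd(k,m^j)}\mathbb Z\supseteq k\mathbb Z$ for every $j\in\mathbb N$, so $C_n(\phi,k\mathbb Z)=k\mathbb Z$ for every $n$ and $$H^\star(\phi,k\mathbb Z)=\lim_{n\to\infty}\frac{\log k}{n}=0.$$ Hence $\mathrm{ent}^\star(\phi)=0$, in contrast with $h_{alg}(\phi)=\log|m|$.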
The values of the adjoint algebraic entropy of the Bernoulli shifts were calculated in \cite[Proposition 6.1]{DGS} applying \cite[Corollary 6.5]{G0} and the Pontryagin duality; a direct computation can be found in \cite{G}. So, in contrast with what occurs for the algebraic entropy, we have:
\begin{Example}[Bernoulli shifts]\label{beta*} For $K$ a non-trivial group, $$\mathrm{ent}^\star(\beta_K)=\mathrm{ent}^\star({}_K\beta)=\infty.$$ \end{Example}
As proved in \cite{DGS}, the adjoint algebraic entropy satisfies the Weak Addition Theorem, while the Monotonicity for invariant subgroups fails even for torsion abelian groups; in particular, the Addition Theorem fails in general. On the other hand, the Addition Theorem holds for bounded abelian groups:
\begin{Theorem}[Addition Theorem]\label{AT*} Let $G$ be a bounded abelian group, $\phi:G\to G$ an endomorphism, $H$ a $\phi$-invariant subgroup of $G$ and $\overline\phi:G/H\to G/H$ the endomorphism induced by $\phi$. Then $$\mathrm{ent}^\star(\phi)=\mathrm{ent}^\star(\phi\restriction_H)+\mathrm{ent}^\star(\overline\phi).$$ \end{Theorem}
The following is one of the main results on the adjoint algebraic entropy proved in \cite{DGS}. It shows that the adjoint algebraic entropy takes values only in $\{0,\infty\}$, while clearly the algebraic entropy may take also finite positive values.
\begin{Theorem}[Dichotomy Theorem]\label{dichotomy}\emph{\cite{DGS}} Let $G$ be an abelian group and $\phi:G\to G$ an endomorphism. Then \begin{center} either $\mathrm{ent}^\star(\phi)=0$ or $\mathrm{ent}^\star(\phi)=\infty$. \end{center} \end{Theorem}
Applying the Dichotomy Theorem and the Bridge Theorem (stated in the previous section) to the compact dual group $K$ of $G$ one gets that for a continuous endomorphism $\psi$ of a compact abelian group $K$ either $\mathrm{ent} (\psi)=0$ or $\mathrm{ent}(\psi)=\infty$. In other words:
\begin{Corollary} If $K$ is a compact abelian group, then every endomorphism $\psi:K\to K$ with $0 < \mathrm{ent} (\psi) < \infty$ is discontinuous. \end{Corollary}
\end{document}
\begin{document}
\begin{abstract} In recent work, Cuntz, Deninger and Laca have studied the Toeplitz type C*-algebra associated to the affine monoid of algebraic integers in a number field, under a time evolution determined by the absolute norm. The KMS equilibrium states of their system are parametrized by traces on the C*-algebras of the semidirect products $J_\gamma \rtimes \ok^*$ resulting from the multiplicative action of the units $\ok^*$ on integral ideals $J_\gamma$ representing each ideal class $\gamma \in \Cl_K$. At each fixed inverse temperature $\beta >2$, the extremal equilibrium states correspond to extremal traces of $C^*(J_\gamma \rtimes \ok^*)$. Here we undertake the study of these traces using the transposed action of $\ok^*$ on the duals $\hat{J}_\gamma$ of the ideals and the recent characterization of traces on transformation group C*-algebras due to Neshveyev.
We show that the extremal traces of $C^*(J_\gamma \rtimes \ok^*)$ are parametrized by pairs consisting of an ergodic invariant measure for the action of $\ok^*$ on $\hat{J}_\gamma$ together with a
character of the isotropy subgroup associated to the support of this measure. For every class $\gamma$, the dual group $\hat{J}_\gamma$ is a $d$-torus on which $\ok^*$ acts by linear toral automorphisms. Hence, the problem of classifying all extremal traces is a generalized version of Furstenberg's celebrated $\times_2$ $\times_3$ conjecture.
We organize the results for various number fields in terms of ideal class group, degree, and unit rank, and we point out along the way the trivial, the intractable, and the conjecturally classifiable cases. At the topological level, it is possible to characterize the number fields for which infinite $\ok^*$-invariant sets are dense in $\hat{J}_\gamma$, thanks to a theorem of Berend; as an application we give a description of the primitive ideal space of $C^*(J_\gamma \rtimes \ok^*)$ for those number fields. \end{abstract}
\maketitle
\section{Introduction}
Let $K$ be an algebraic number field and let $\OO_{\! K}$ denote its ring of integers.
The associated multiplicative monoid $\OO_{\! K}^\times := \OO_{\! K} \setminus \{0\}$ of nonzero integers acts by injective endomorphisms on the additive group of $\OO_{\! K}$ and gives rise to the semi-direct product $\ok\rtimes \ok^\times$, the affine monoid (or `$b+ax$ monoid') of algebraic integers in $K$.
Let $\{\xi_{(x,w)}: (x,w) \in \ok\rtimes \ok^\times\}$ be the standard orthonormal basis of the Hilbert space $\ell^2(\ok\rtimes \ok^\times)$. The left regular representation $L$ of $ \ok\rtimes \ok^\times$ by isometries on $\ell^2(\ok\rtimes \ok^\times)$ is determined by $L_{(b,a)} \xi_{(x,w)} = \xi_{(b+ax,aw)}$. In \cite{CDL}, Cuntz, Deninger and Laca studied the Toeplitz-like C*-algebra
$\mathfrak{T} [\OO_{\! K}] := C^*(L_{(b,a)}: (b,a) \in \ok\rtimes \ok^\times)$ generated by this representation and analyzed the equilibrium states of the natural time evolution $\sigma$ on $\mathfrak{T} [\OO_{\! K}]$ determined by the absolute norm $N_a := | \OO_{\! K}/(a)|$ via \[ \sigma _t (L_{(b,a)}) = N_a^{it} L_{(b,a)} \qquad a\in \OO_{\! K}^\times, \ \ t\in \mathbb R. \]
One of the main results of \cite{CDL} is a characterization of the simplex of KMS equilibrium states of this dynamical system at each inverse temperature $\beta \in (0,\infty]$. Here we will be interested in the low-temperature range of that classification. To describe the result briefly, let $\ok^*$ be the group of units, that is, the elements of $\OO_{\! K}^\times$ whose inverses are also integers, and recall that by a celebrated theorem of Dirichlet, $\ok^* \cong W_K \times \mathbb Z^{r+s-1}$, where $W_K$ (the group of roots of unity in $\ok^*$) is finite, $r$ is the number of real embeddings of $K$, and $s$ is equal to half the number of complex embeddings of $K$. Let $\Cl_K $ be the ideal class group of $K$, which, by definition, is the quotient of the group of all fractional ideals in $K$ modulo the principal ones, and is a finite abelian group. For each ideal class $\gamma \in \Cl_K$ let $J_\gamma \in \gamma$ be an integral ideal representing $\gamma$.
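To fix ideas with two familiar examples (standard number-theoretic facts, recalled here only for orientation): for $K=\mathbb Q(\sqrt{-5})$ one has $r=0$ and $s=1$, so the unit rank $r+s-1$ is $0$ and $\ok^*=W_K=\{\pm1\}$, while $\Cl_K\cong\mathbb Z/2\mathbb Z$, with representatives given for instance by $\OO_{\! K}$ itself and the nonprincipal ideal $(2,1+\sqrt{-5})$; for $K=\mathbb Q(\sqrt 2)$ one has $r=2$, $s=0$ and trivial class group, and $\ok^*=\{\pm(1+\sqrt2)^n:n\in\mathbb Z\}$ is infinite of rank $1$.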
By \cite[Theorem 7.3]{CDL}, for each $\beta > 2$ the KMS$_\beta$ states of $C^*(\ok\rtimes \ok^\times)$ are parametrized by
the tracial states of the direct sum of group C*-algebras $\bigoplus_{\gamma\in\Cl_K} C^*(J_\gamma\rtimes \ok^*)$, where the units act by multiplication on each ideal viewed as an additive group. It is intriguing that exactly the same direct sum of group C*-algebras also plays a role in the computation of the $K$-groups of the semigroup C*-algebras of algebraic integers in the work of Cuntz, Echterhoff and Li, see e.g. \cite[Theorem 8.2.1]{CEL}. Considering as well that the group of units and the ideals representing different ideal classes are a measure of the failure of unique factorization into primes in $\OO_{\! K}$, we feel it is of interest to investigate the tracial states of the
C*-algebras $C^*(J_\gamma\rtimes \ok^*)$ that arise as a natural parametrization of KMS equilibrium states of $C^*(\ok\rtimes \ok^\times)$.
This work is organized as follows. In Section \ref{FromKMS} we review the phase transition from \cite{CDL} and apply a theorem of Neshveyev's to show in \thmref{thm:nesh} that the extremal KMS states arise from ergodic invariant probability measures and characters of their isotropy subgroups for the actions $\ok^* \mathrel{\reflectbox{$\righttoleftarrow$}} \hat J_\gamma$ of units on the duals of integral ideals.
We begin Section \ref{unitaction} by showing that for imaginary quadratic fields, the orbit space of the action of units is a compact Hausdorff space that parametrizes the ergodic invariant probability measures. All other number fields have infinite groups of units leading to `bad quotients' for which noncommutative geometry provides convenient tools of analysis. Units act by toral automorphisms and so the classification of equilibrium states is intrinsically related to the higher-dimensional, higher-rank version of the question, first asked by H. Furstenberg, of whether Lebesgue measure is the only nonatomic ergodic invariant measure for the pair of transformations $\times 2$ and $\times 3$ on $\mathbb R/\mathbb Z$. Once in this framework, it is evident from work of Sigmund \cite{Sig} and of Marcus \cite{Mar} on partially hyperbolic toral automorphisms and from the properties of the Poulsen simplex \cite{LOS}, that for fields whose unit rank is $1$, which include real quadratic fields, there is an abundance of ergodic measures, \proref{poulsen}, and hence of extremal equilibrium states, see also \cite{katz}. We also show in this section that there is solidarity among integral ideals with respect to the ergodicity properties of the actions of units, \proref{oneidealsuffices}.
In Section \ref{berendsection}, we look at the topological version of the problem and we identify the number fields for which \cite[Theorem 2.1]{B} can be used to give a complete description of the invariant closed sets. In \thmref{conjecturalclassification} we summarize the consequences, for extremal equilibrium at low temperature, of the current knowledge on the generalized Furstenberg conjecture. For fields of unit rank at least $2$ that are not complex multiplication fields, i.e. that have no proper subfields of the same unit rank, we show that if there is an extremal KMS state that does not arise from a finite orbit or from Lebesgue measure, then it must arise from a zero-entropy, nonatomic ergodic invariant measure; it is not known whether such a measure exists. For complex multiplication fields of unit rank at least $2$, on the other hand, it is known that there are other measures, arising from invariant subtori.
As a byproduct, we also provide in \proref{ZWclaim} a proof of an interesting fact stated in \cite{ZW}, namely the units acting on algebraic integers are generic among toral automorphism groups that have Berend's ID property.
We conclude our analysis in Section \ref{prim} by computing the topology of the quasi-orbit space of the action $\ok^*\mathrel{\reflectbox{$\righttoleftarrow$}} \widehat{\OO}_{\! K}$ for number fields satisfying Berend's conditions. As an application we also obtain an explicit description of the primitive ideal space of the C*-algebra $C^*(\OO_{\! K} \rtimes \ok^*)$, \thmref{primhomeom}. For the most part, Sections \ref{unitaction} and \ref{berendsection} do not depend on operator algebra considerations other than for the motivation and the application, which are discussed in Sections \ref{FromKMS} and \ref{prim}.
\noindent{\sl Acknowledgments:} This research started as an Undergraduate Student Research Award project and the authors acknowledge the support from the Natural Sciences and Engineering Research Council of Canada. We would like to thank Martha {\L}\k{a}cka for pointing us to \cite{LOS}, and we also especially thank Anthony Quas for bringing Z. Wang's work \cite{ZW} to our attention, and for many helpful comments, especially those leading to Lemmas \ref{partition} and \ref{Anthony'sLemma1}.
\section{From KMS states to invariant measures and isotropy}\label{FromKMS} Our approach to describing the tracial states of the C*-algebras $\bigoplus_{\gamma\in\Cl_K} C^*(J_\gamma\rtimes \ok^*)$ is shaped by the following three observations. First, the tracial states of a group C*-algebra form a Choquet simplex \cite{thoma}, so it suffices to focus our attention on the {\em extremal traces}. Second, there is a canonical isomorphism $C^*(J\rtimes \ok^*) \cong C^*(J)\rtimes \ok^* $, which we may combine with the Gelfand transform for $C^*(J)$, thus obtaining an isomorphism of $C^*(J\rtimes \ok^*)$ to the transformation group C*-algebra $C(\hat{J})\rtimes \ok^*$, associated to the transposed action of
$\ok^*$ on the continuous complex-valued functions on the compact dual group $\hat{J}$.
Specifically, the action of $\ok^*$ on $\hat{J}$ is determined by
\begin{equation}\label{actiononjhat} (u \cdot \chi )(j):= \chi( u j), \qquad u \in \ok^*, \ \ \chi \in \hat{J}, \ \ j\in J, \end{equation}
or by $\langle j, u \cdot \chi \rangle = \langle u j, \chi\rangle$, if we use
$\langle \ , \ \rangle$ to denote the duality pairing of $J$ and $\hat J$. Third, this puts the problem of describing the tracial states squarely in the context of
Neshveyev's characterization of traces on crossed products, so our task is to identify and describe the relevant ingredients of this characterization. In brief terms, when \cite[Corollary 2.4]{nes} is interpreted in the present situation, it says that for each integral ideal $J$, the extremal traces on $C(\hat{J})\rtimes \ok^*$ are parametrized by triples $(H,\chi,\mu)$ in which
$H$ is a subgroup of $\ok^*$, $\chi$ is a character of $H$, and $\mu$ is an ergodic $\ok^*$-invariant measure on $\hat{J}$ such that the set of points in $\hat{J}$ whose isotropy subgroups for the action of $\ok^*$ are equal to $H$ has full $\mu$ measure.
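To make these ingredients concrete in the simplest case with infinitely many units (an illustration of our own, not taken from \cite{CDL} or \cite{nes}), take $K=\mathbb Q(\sqrt2)$ and $J=\OO_{\! K}=\mathbb Z[\sqrt2]$. In the $\mathbb Z$-basis $\{1,\sqrt2\}$, multiplication by the fundamental unit $1+\sqrt2$ is given by an integer matrix of determinant $-1$, and its transpose implements the dual action \eqref{actiononjhat} on $\hat J\cong\mathbb R^2/\mathbb Z^2$ as a hyperbolic toral automorphism. The sketch below (Python with NumPy assumed) exhibits this matrix and the finite orbit of a rational point under the dual action of $1+\sqrt2$; the normalized counting measure on the (still finite) $\ok^*$-orbit of such a point is an atomic ergodic $\ok^*$-invariant measure, while Haar (Lebesgue) measure on $\hat J$ is the basic nonatomic one.
\begin{verbatim}
import numpy as np

# Multiplication by the fundamental unit 1 + sqrt(2) on Z[sqrt(2)],
# written in the Z-basis {1, sqrt(2)}.
M = np.array([[1, 2],
              [1, 1]])
A = M.T                               # dual (transposed) action on R^2 / Z^2

print(int(round(np.linalg.det(M))))   # -1 = norm of the unit, so M is in GL_2(Z)
print(np.linalg.eigvals(A))           # 1 + sqrt(2), 1 - sqrt(2): hyperbolic

# The rational point (1/5, 2/5) of the torus has a finite orbit; we track it
# exactly as an integer vector modulo the denominator q = 5.
q, v = 5, np.array([1, 2])
orbit = set()
while tuple(v) not in orbit:
    orbit.add(tuple(v))
    v = (A @ v) % q
print(len(orbit))                     # size of the orbit of (1/5, 2/5)
\end{verbatim}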
Recall that, by definition, an $\ok^*$-invariant probability measure $\mu$ on $\widehat{J}$ is \emph{ergodic invariant} for the action of $\ok^*$ if $\mu(A) \in\{0,1\}$ for every $\ok^*$-invariant Borel set $A\subset \hat{J} $. Our first simplification is that the action of $\ok^*$ on $\hat J$ automatically has $\mu$-almost everywhere constant isotropy with respect to each ergodic invariant probability measure $\mu$.
\begin{lemma}\label{automaticisotropy} Let $K$ be an algebraic number field with ring of integers $\OO_{\! K}$ and group of units $\ok^*$ and let $J$ be a nonzero ideal in $\OO_{\! K}$. Suppose $\mu$ is an ergodic $\ok^*$-invariant probability measure on $\hat J$. Then there exists a unique subgroup $H_\mu$ of $\ok^*$ such that the isotropy group $(\ok^*)_\chi:= \{u \in \ok^*: u\cdot \chi = \chi\}$ is equal to $H_\mu$ for $\mu$-a.a. characters $\chi\in \hat J$. \end{lemma} \begin{proof} For each subgroup $H \leq \ok^*$, let $M_H := \{\chi \in \hat J \mid (\ok^*)_\chi = H\}$ be the set of characters of $J$ with isotropy equal to $H$. Since the isotropy is constant on orbits, each $M_H$ is $\ok^*$-invariant, and clearly the $M_H$ are mutually disjoint.
By Dirichlet's unit theorem $\ok^* \cong W_K \times \mathbb Z^{r+s-1}$ with $W_K$ finite, and $r$ and $2s$ the number of real and complex embeddings of $K$, respectively. Thus every subgroup of $\ok^*$ is generated by at most $|W_K| + (r+s-1)$ generators, and hence $\ok^*$ has only countably many subgroups. Thus $\{M_H: H \leq \ok^*\}$ is a countable partition of $\hat J$ into subsets of constant isotropy.
We claim that each $M_H$ is a Borel measurable set in $\hat J$. To see this, observe:
\begin{align*} M_H =&\{ \chi \in \hat J: u\cdot \chi = \chi \text{ for all }u \in H\text{ and } u\cdot \chi \neq \chi \text{ for all }u \in \ok^*\setminus H\}\\ =& \Big(\bigcap\limits_{u \in H} \{\chi \in \hat J \mid \chi^{-1}(u\cdot\chi)=1\} \Big)\bigcap \Big( \bigcap\limits_{u \in \ok^*\setminus H} \{\chi \in \hat J \mid \chi^{-1}(u\cdot \chi)\ne 1\}\Big) \end{align*} because $u\cdot \chi = \chi$ iff $\chi^{-1}(u\cdot\chi)=1$. Since the map $\chi \mapsto\chi^{-1}(u\cdot \chi)$ is continuous on $\hat J$, the sets in the first intersection are closed and those in the second one are open. By above, the intersection is countable, so $M_H$ is Borel-measurable, as desired.
For every Borel measure $\mu$ on $\hat J$, we have
\[ \sum\limits_{H \leq \ok^*} \mu (M_H) = \mu\Big(\bigcup\limits_{H \leq \ok^*} M_H \Big) = 1, \]
so at least one $M_H$ has positive measure. Since $\mu$ is ergodic $\ok^*$-invariant and the sets $M_H$ are $\ok^*$-invariant and mutually disjoint, there exists a unique $H_\mu \leq \ok^*$ such that $\mu(M_{H_\mu}) = 1$, and thus $H_\mu$ is the (constant) isotropy group of $\mu$-a.a.\ points $\chi \in \hat J$.
\end{proof}
Since each ergodic invariant measure determines an isotropy subgroup, the characterization of extremal traces from \cite[Corollary 2.4]{nes} simplifies as follows.
\begin{theorem}\label{thm:nesh} Let $K$ be an algebraic number field with ring of integers $\OO_{\! K}$ and group of units $\ok^*$ and let $J$ be a nonzero ideal in $\OO_{\! K}$. Denote the standard generating unitaries of $C^*(J\rtimes \ok^*)$ by $\delta_j$ for $j\in J$ and $\nu_u$ for $u \in \ok^*$. Then for each extremal trace $\tau$ on $C^*(J\rtimes \ok^*)$ there exists a unique probability measure $\mu_\tau$ on $\hat J$ such that \begin{equation} \label{mufromtau} \int_{\hat J} \<j, x\rangle d\mu_\tau(x) = \tau(\delta_j) \quad \text{ for } j \in J. \end{equation} The probability measure $\mu_\tau $ is ergodic $\ok^*$-invariant, and if we denote by $H_{\mu_\tau}$ its associated isotropy subgroup from \lemref{automaticisotropy}, then the function $\chi_\tau$ defined by $\chi_\tau(h):= \tau(\nu_h)$ for $h \in H_{\mu_\tau}$ is a character on $H_{\mu_\tau}$.
Furthermore, the map $\tau \mapsto (\mu_\tau,\chi_\tau)$ is a bijection of the set of extremal traces of $C^*(J\rtimes \ok^*)$ onto the set of pairs $(\mu, \chi)$ consisting of an ergodic $\ok^*$-invariant probability measure $\mu$ on $\hat J$ and a character $\chi \in \widehat H_\mu$. The inverse map $(\mu,\chi) \mapsto \tau_{(\mu,\chi)}$ is determined by \begin{equation} \label{muchi-parameters} \tau_{(\mu,\chi)}(\delta_j \nu_u) = \begin{cases}\displaystyle\chi(u)\int_{\hat J} \<j, x\>d\mu(x) &\text{ if $u \in H_\mu$}\\ 0 &\text{ otherwise,}\end{cases} \end{equation} for $j\in J$ and $u\in \ok^*$. \end{theorem}
\begin{proof} Recall that equation \eqref{actiononjhat} gives the continuous action of $\ok^*$ by automorphisms of the compact abelian group $\hat J$ obtained on transposing the multiplicative action of $\ok^*$ on $J$. There is a corresponding action $\alpha$ of $\ok^*$ by automorphisms of the C*-algebra $C(\hat J)$ of continuous functions on $\hat{J}$; it is given by $\alpha_u(f) (\chi) = f(u^{-1} \cdot \chi)$.
The characterization of traces \cite[Corollary 2.4]{nes} then applies to the crossed product $C(\hat J) \rtimes_\alpha \ok^*$ as follows. For a given extremal tracial state $\tau$ of $C^*(J\rtimes \ok^*)$ there is a probability measure $\mu_\tau$ on $\hat J$ that arises, via the Riesz representation theorem, from the restriction
of $\tau$ to $C^*(J) \cong C(\hat J)$ and is characterized by its Fourier coefficients in equation \eqref{mufromtau}. By \lemref{automaticisotropy}, there is a subset of $\hat J$ of full $\mu_\tau$ measure on which the isotropy subgroup is automatically constant, and is denoted by $H_{\mu_\tau}$. The unitary elements $\nu_u$ generate a copy of $C^*(\ok^*)$ inside $C(\hat J) \rtimes_\alpha \ok^*$ and the restriction of $\tau$ to these generators determines a character $\chi_\tau$ of $H_{\mu_\tau}$ given by $\chi_\tau(u) := \tau(\nu_u)$. See the proof of \cite[Corollary 2.4]{nes} for more details.
By \lemref{automaticisotropy}, the condition of almost constant isotropy is automatically satisfied for every ergodic invariant measure on $\hat J$, hence every ergodic invariant measure arises as $\mu_\tau$ for some extremal trace $\tau$. The parameter space for extremal tracial states
is thus the set of all pairs $(\mu,\chi)$ consisting of an ergodic $\ok^*$-invariant probability measure $\mu$ on $\hat J$ and a character $\chi$ of the isotropy subgroup $H_\mu$ of $\mu$. Formula \eqref{muchi-parameters} is a particular case of the formula in \cite[Corollary 2.4]{nes} with $f$ equal to the character function $f(\cdot) = \<j,\cdot\rangle$ on $\hat J$ associated to $j\in J$.
Since for a fixed $u \in \ok^*$ the right hand side of \eqref{muchi-parameters} is a continuous linear functional of the integrand
and the character functions span a dense subalgebra, this particular case is enough to imply
\begin{equation} \tau_{(\mu,\chi)}(f\nu_u) = \begin{cases}\displaystyle\chi(u)\int_{\hat J} f(x)d\mu(x) &\text{ if $u \in H_\mu$}\\ 0 &\text{ otherwise,}\end{cases} \end{equation}
for every $f\in C(\hat J)$.
\end{proof}
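\begin{remark}
Two special cases of \eqref{muchi-parameters} may help to fix ideas; they are recorded here only as illustrations and are not used in the sequel. First, let $\mu$ be the point mass at the trivial character of $J$. This character is fixed by every unit, so $\mu$ is ergodic invariant with $H_\mu = \ok^*$, and for each character $\chi$ of $\ok^*$ the corresponding extremal trace is
\[
\tau_{(\mu,\chi)}(\delta_j \nu_u) = \chi(u), \qquad j\in J,\ u\in\ok^*,
\]
that is, the trace obtained by composing $\chi$ with the quotient map $J\rtimes\ok^* \to \ok^*$. Second, let $\mu = \lambda_{\hat J}$ be normalized Haar measure on $\hat J$, which is ergodic invariant whenever $\ok^*$ is infinite, see \proref{orbitsandisotropy} below. For each $u\neq 1$ the set of characters fixed by $u$ is the annihilator of $(u-1)J$, which is finite because $(u-1)J$ has finite index in $J$; hence $H_{\lambda_{\hat J}}$ is trivial and the associated extremal trace is the canonical trace, namely $\tau_{(\lambda_{\hat J},1)}(\delta_j\nu_u) = 1$ if $j=0$ and $u=1$, and $=0$ otherwise.
\end{remark}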
\section{The action of units on integral ideals}\label{unitaction} Combining \cite[Theorem 7.3]{CDL} with \thmref{thm:nesh} above, we see that for $\beta> 2$, the extremal KMS$_\beta$ equilibrium states of the system $(\mathfrak{T} [\OO_{\! K}], \sigma)$ are indexed by pairs $(\mu,\kappa)$ consisting of an ergodic invariant probability measure $\mu$ and a character $\kappa$ of its isotropy subgroup relative to the action of the unit group $\ok^*$ on a representative of each ideal class.
If the field $K$ is imaginary quadratic, that is, if $r=0$ and $s=1$, then the group of units is finite, consisting exclusively of roots of unity. In this case, things are easy enough to describe because the space of $\ok^*$-orbits in $\hat{J}$ is a compact Hausdorff topological space.
\begin{proposition} \label{imaginaryquadratic}Suppose $K$ is an imaginary quadratic number field, let $J \subset \OO_{\! K}$ be an integral ideal
and write $W_K$ for the group of units. Then the orbit space $W_K \backslash \hat J $ is a compact Hausdorff space
and the closed invariant sets in $\hat J$ are indexed by the closed sets in $W_K \backslash \hat J $. Moreover, the ergodic invariant probability measures on $\hat J$ are the equiprobability measures on the orbits and correspond to unit point masses on $W_K\backslash \hat J$.
\end{proposition}
\begin{proof}
Since $W_K$ is finite, distinct orbits are separated by disjoint invariant open sets, so the quotient space $W_K \backslash \hat J $ is a compact Hausdorff space. Since $\hat J$ is compact, the quotient map $q: \hat J \to W_K \backslash \hat J $ given by $q(\chi ) := W_K \cdot \chi$ is a closed map by the closed map lemma, and so invariant closed sets in $\hat J$ correspond to closed sets in the quotient.
For each probability measure $\mu$ on $\hat J$, there is a probability measure $\tilde\mu$ on $W_K \backslash \hat J$ defined by \[\tilde\mu(E) := \mu(q^{-1}(E))\quad \text{ for each measurable } E\subseteq W_K \backslash \hat J.\] This maps the set of $W_K$-invariant probability measures on $\hat J$ onto the set of
all probability measures on $W_K \backslash \hat J $. Ergodic invariant measures correspond to unit point masses on $W_K\backslash \hat J$, and their $W_K$-invariant lifts are equiprobability measures on single orbits in $\hat J$.
\end{proof}
As a result we obtain the following characterization of extremal KMS equilibrium states. \begin{corollary} Suppose $K$ is an imaginary quadratic algebraic number field and let $J_\gamma$ be an integral ideal representing the ideal class $\gamma\in \Cl_K$. For $\beta>2$, the extremal KMS$_\beta$ states of the system $(\mathfrak T[\OO_{\! K}], \sigma)$ are parametrized by the triples $(\gamma, W_K\cdot \chi, \kappa)$, where $\gamma\in \Cl_K$, $W_K\cdot \chi$ is the orbit of a point $\chi \in \hat J_\gamma$, and $\kappa$ is a character of the isotropy subgroup of $\chi$. \end{corollary}
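\begin{remark}
For a concrete illustration (an example only), take $K=\mathbb Q(i)$, so that $\OO_{\! K}=\mathbb Z[i]$ has class number one and $\ok^* = W_K = \{1,i,-1,-i\}\cong \mathbb Z/4$. Identifying $\widehat{\mathbb Z[i]}$ with $\mathbb R^2/\mathbb Z^2$ via $\chi_t(a+bi)=e^{2\pi i(at_1+bt_2)}$, the unit $i$ acts by $(t_1,t_2)\mapsto (t_2,-t_1)$ and $-1$ acts by $t\mapsto -t$. The character $\chi_{(1/2,1/2)}$ is fixed by every unit, so its orbit is a single point with isotropy $W_K$, and it gives rise, for each $\beta>2$, to four extremal KMS$_\beta$ states, one for each character of $W_K$; a generic orbit $\{t,\, i\cdot t,\, -t,\, -i\cdot t\}$ consists of four distinct points with trivial isotropy and contributes a single extremal KMS$_\beta$ state.
\end{remark}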
Before we discuss invariant measures and isotropy for fields with infinite group of units, we need to revisit a few general facts about the multiplicative action of units on the algebraic integers and, more generally, on the integral ideals. The concise discussion in \cite{ZW} is particularly convenient for our purposes. As is customary, we let $d = [K:\mathbb Q]$ be the {\em degree} of $K$ over $\mathbb Q$. The number $r$ of real embeddings and the number $2s$ of complex embeddings satisfy $r+2s=d$. We also let $n = r+s -1$ be the {\em unit rank} of $K$, namely, the free abelian rank of $\ok^*$ according to Dirichlet's unit theorem. We shall denote the real embeddings of $K$ by
$\sigma_j:K \to \mathbb R$ for $j = 1, 2, \cdots r$ and the conjugate pairs of complex embeddings of $K$ by $\sigma_{r +j}, \sigma_{r+s+j} : K \to \mathbb C$ for $ j = 1, \cdots, s$. Thus, there is an isomorphism \[ \sigma: K \otimes_\mathbb Q \mathbb R \to \mathbb R^r \times \mathbb C^s \] such that \[ \sigma (k\otimes x) = (\sigma_1( k) x, \sigma_2(k) x, \cdots, \sigma_r(k) x;\, \sigma_{r+1}(k) x, \cdots, \sigma_{r+s}(k) x ). \] The ring of integers $\OO_{\! K}$ is a free $\mathbb Z$-module of rank $d$, and thus $\OO_{\! K}\otimes_\mathbb Z \mathbb R \cong \mathbb R^d \cong \mathbb R^r \oplus \mathbb C^s$. We temporarily fix an integral basis for $\OO_{\! K}$, which fixes an isomorphism $\theta: \OO_{\! K} \to \mathbb Z^d$. Then, at the level of $\mathbb Z^d$, the action of each $u \in \ok^*$ is implemented as left multiplication by a matrix $A_u \in GL_d(\mathbb Z)$. Moreover, once this basis has been fixed, the usual duality pairing $\langle \mathbb Z^d, \mathbb R^d/\mathbb Z^d \rangle $ given by $\langle n, t \rangle = \exp{2\pi i (n \cdot t)} $, with $n\in \mathbb Z^d$, $t\in \mathbb R^d$ and $n\cdot t = \sum_{j=1}^d n_j t_j$, gives an isomorphism of $\mathbb R^d/\mathbb Z^d $ to $\widehat{\OO}_{\! K}$, in which the character $\chi_t\in \widehat{\OO}_{\! K}$ corresponding to $t\in \mathbb R^d/\mathbb Z^d$ is given by $\chi_t(x) = \exp{2\pi i (\theta(x) \cdot t)} $ for $x\in \OO_{\! K}$. Thus, the action of a unit $u\in \ok^*$ is \[(u\cdot \chi_t)(x) = \chi_t(u\cdot x) = \exp{2\pi i (A_u \theta(x) \cdot t)} = \exp{2\pi i ( \theta(x) \cdot A_u^T t)}.\]
This implies that the action $\ok^* \mathrel{\reflectbox{$\righttoleftarrow$}} \widehat{\OO}_{\! K}$ is implemented, at the level of $\mathbb R^d/\mathbb Z^d$, by the representation
$\rho: \ok^* \to GL_d(\mathbb Z)$ defined by $\rho(u) = A_u^{T}$, cf. \cite[Theorem 0.15]{Wal}.
Similar considerations apply to the action of $\ok^*$ on $\hat J$ for each integral ideal $J \subset \OO_{\! K}$, giving a representation $\rho_J: \ok^* \to GL_d(\mathbb Z)$. For ease of reference we state the following fact about this matrix realization $\rho_J$ of the action of $\ok^*$ on $\hat{J}$. \begin{proposition}\label{diagonalization} The collection of matrices $\{\rho(u) : u \in \ok^*\}$ is simultaneously diagonalizable over $\mathbb C$, and for each $u\in \ok^*$ the eigenvalue list of $\rho(u)$ is the list of its archimedean embeddings $\sigma_k(u)$, $k = 1, 2, \ldots, r+2s$. \end{proposition} See e.g. the discussion in \cite[Section 2.1]{LW}, and \cite[Section 2.1]{ZW} for the details. Multiplication of complex numbers in each complex embedding is regarded as the action of $2\times 2$ matrices on $\mathbb R+i\mathbb R \cong \mathbb R^2$, and the $2\times 2$ blocks corresponding to complex roots simultaneously diagonalize over $\mathbb C^d$. The self duality of $\mathbb R^r\oplus \mathbb C^s $ can be chosen to be compatible with the isomorphism mentioned right after (2.1) in \cite{LW} and with multiplication by units.
See also \cite[Ch7]{KlausS}.
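\begin{remark}
As a simple illustration of \proref{diagonalization}, included only as an example, take $K=\mathbb Q(\sqrt 2)$, for which $\ok^* = \{\pm(1+\sqrt2)^n : n\in\mathbb Z\}$. With respect to the integral basis $\{1,\sqrt2\}$ of $\OO_{\! K}=\mathbb Z[\sqrt2]$, multiplication by the fundamental unit $u=1+\sqrt2$ is implemented by
\[
A_u=\begin{pmatrix} 1 & 2\\ 1 & 1\end{pmatrix}, \qquad \rho(u)=A_u^T=\begin{pmatrix} 1 & 1\\ 2 & 1\end{pmatrix},
\]
and the characteristic polynomial of $\rho(u)$ is $\lambda^2-2\lambda-1$, whose roots $1\pm\sqrt2$ are exactly the two real embeddings $\sigma_1(u)$ and $\sigma_2(u)$; note that $|\sigma_1(u)|>1>|\sigma_2(u)|$, consistent with $\sigma_1(u)\sigma_2(u)=N_{K/\mathbb Q}(u)=-1$.
\end{remark}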
When the number field $K$ is not imaginary quadratic, then $\ok^*$ is infinite and so the analysis of orbits and invariant measures is much more subtle; for instance, most orbits are infinite, some are dense, and the orbit space does not have a Hausdorff topology. We summarize for convenience of reference the known basic general properties in the next proposition.
\begin{proposition}\label{orbitsandisotropy} Let $K$ be a number field with $\operatorname{rank}(\ok^*) \geq 1$, and let $J$ be an ideal in $\OO_{\! K}$. Then normalized Haar measure on $\hat J$ is ergodic $\ok^*$-invariant, and for each $\chi \in \hat J$, \begin{enumerate} \item the orbit $\ok^*\cdot \chi$ is finite if and only if $\chi$ corresponds to a point with rational coordinates in the identification $\hat J \cong \mathbb R^d /\mathbb Z^d$; in this case the corresponding isotropy subgroup is a full-rank subgroup of $\ok^*$; \item the orbit $\ok^*\cdot \chi$ is infinite if and only if $\chi $ corresponds to a point with at least one irrational coordinate in $\mathbb R^d /\mathbb Z^d$; \item the characters $\chi$ corresponding to points $(w_1,w_2,\ldots,w_d) \in \mathbb R^d$ such that the numbers $1, w_1, w_2, \ldots w_d$ are rationally independent have trivial isotropy. \end{enumerate} \end{proposition} \begin{proof} By \proref{diagonalization}, for each $u\in \ok^*$, the eigenvalues of the matrix $\rho(u)$ encoding the action of $u$ at the level of $\mathbb R^d/\mathbb Z^d$ are precisely the various embeddings of $u$ in the archimedean completions of $K$. Since $\operatorname{rank}(\ok^*) \geq 1$, there exists a non-torsion element $u \in \ok^*$, whose eigenvalues are not roots of unity. Hence normalized Haar measure is ergodic for the action of $\{\rho(u): u\in\ok^*\}$ by \cite[Corollary 1.10.1]{Wal}
and the first assertion now follows from \cite[Theorem 5.11]{Wal}. The isotropy is a full rank subgroup of $\ok^*$ because $|\ok^*/(\ok^*)_x| = |\ok^*\cdot x| < \infty$.
Let $w = (w_1, w_2, \cdots, w_d)$ be a point in $\mathbb R^d/\mathbb Z^d$ such that $1, w_1, \ldots, w_d$ are rationally independent. Suppose $w$ is a fixed point for the matrix $\rho(u) \in GL_d(\mathbb Z)$ acting on $\mathbb R^d/\mathbb Z^d$.
Then $\rho(u) w = w$ (mod $\mathbb Z^d$) and hence $(\rho(u) - I)w \in \mathbb Z^d$, i.e.
\[[(\rho(u)-I)w]_i = \sum\limits_{j=1}^d (\rho(u)-I)_{ij}w_j \in \mathbb Z\]
for all $1 \le i \le d$. Since $(\rho(u)-I)_{ij} \in \mathbb Z$ for all $i,j$, the rational independence of $1, w_1, \ldots, w_d$ implies that $\rho(u) = I$, so $u=1$, as desired.
\end{proof}
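\begin{remark}
Continuing the example of $K=\mathbb Q(\sqrt2)$ above (again only as an illustration), consider the point $t=(\tfrac12,\tfrac12)\in\mathbb R^2/\mathbb Z^2\cong\widehat{\OO}_{\! K}$. Then $\rho(u)t\equiv(0,\tfrac12)$ and $\rho(u)^2 t\equiv t \pmod{\mathbb Z^2}$, while $-1$ fixes both points; hence the $\ok^*$-orbit of the corresponding character has exactly two elements and its isotropy subgroup is $\{\pm u^{2n}: n\in\mathbb Z\}$, a subgroup of index two and of full rank in $\ok^*$, as in part (1). By contrast, any point with an irrational coordinate, such as $(\alpha,0)$ with $\alpha\notin\mathbb Q$, has infinite orbit by part (2).
\end{remark}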
We see next that for the number fields with unit rank 1 there are many more ergodic invariant probability measures on $\widehat{\OO}_{\! K}$ than just Haar measure and measures supported on finite orbits. In fact, a smooth parametrization of these measures and of the corresponding KMS equilibrium states of $(\mathfrak T[\OO_{\! K}], \sigma)$ seems unattainable.
\begin{proposition}\label{poulsen} Suppose the number field $K$ has unit-rank equal to $1$, namely, $K$ is real quadratic, mixed cubic, or complex quartic. Then the simplex of $\ok^*$-invariant probability measures on $\widehat{\OO}_{\! K}$, whose extreme points are the ergodic invariant measures, is isomorphic to the Poulsen simplex \cite{LOS}. \end{proposition}
\begin{proof} The fundamental unit gives a partially hyperbolic toral automorphism of $\widehat{\OO}_{\! K}$, for which Haar measure is ergodic invariant. By \cite{Mar,Sig}, the invariant probability measures of such an automorphism that are supported on finite orbits are dense in the space of all invariant probability measures. This remains true when we include the torsion elements of $\ok^*$. Since these equiprobabilities supported on finite orbits are obviously ergodic invariant and hence extremal among invariant measures, it follows from \cite[Theorem 2.3]{LOS} that the simplex of invariant probability measures on $\widehat{\OO}_{\! K}$ is isomorphic to the Poulsen simplex. \end{proof}
For fields with unit rank at least $2$, whether normalized Haar measure and equiprobabilities supported on finite orbits are the only ergodic $\ok^*$-invariant probability measures is a higher-dimensional version of the celebrated Furstenberg conjecture, according to which Lebesgue measure is the only non-atomic probability measure on $\mathbb T = \mathbb R/\mathbb Z$ that is jointly ergodic invariant for the transformations $\times 2$ and $\times 3$ on $\mathbb R$ modulo $\mathbb Z$. As stated, this remains open, however, Rudolph and Johnson have proved that if $p$ and $q$ are multiplicatively independent positive integers, then the only probability measure on $\mathbb R/\mathbb Z$ that is ergodic invariant for
$\times p$ and $ \times q$ and has non-zero entropy is indeed Lebesgue measure \cite{Rud,Joh}. Number fields always give rise to automorphisms of tori of dimension at least $2$, so, strictly speaking the problem in which we are interested does not contain Furstenberg's original formulation as a particular case. Nevertheless, the higher-dimensional problem is also interesting and open as stated in general, and there is significant recent activity on it and on closely related problems \cite{KS,KK, KKS}. In particular, see \cite{EL1} for a summary of the history and also a positive entropy result for higher dimensional tori along the lines of the Rudolph--Johnson theorem. We show next that the toral automorphism groups arising from different integral ideals have a solidarity property with respect to the generalized Furstenberg conjecture.
\begin{proposition} \label{oneidealsuffices}
If for some integral ideal $J$ in $\OO_{\! K}$ the only ergodic $\ok^*$-invariant probability
measure on $\hat J$ having infinite support is normalized Haar measure, then the same is true for every integral ideal in $\OO_{\! K}$. \end{proposition} The proof depends on the following lemmas.
\begin{lemma}\label{partition} Let $J\subseteq I$ be two integral ideals in $\OO_{\! K}$ and let $r: \hat I \to \hat J$ be the restriction map. Denote by $\lambda_{\hat I}$ normalized Haar measure on $\hat I$.
For each $\gamma \in \hat J$, there exists a neighborhood $N$ of $\gamma$ in $\hat J$ and homeomorphisms
$h_j$ of $N$ into $\hat I$ for $j = 1, 2, \ldots , | I/J |$, with mutually disjoint images and such that \begin{enumerate}
\item $\lambda_{\hat I} (h_j(E)) = \lambda_{\hat I} ( h_k(E))$ for every measurable $E\subseteq N$ and $1\leq j, k \leq | I/J |$; \item $r\circ h_j = \operatorname {id}_N$; \item $r^{-1} ( E) = \bigsqcup_j h_j(E)$ for all $E \subseteq N$, that is, the $h_j$'s form a complete system of local inverses of $r$ on $N$. \end{enumerate} \end{lemma} \begin{proof} Let $J^\perp:= \{\kappa\in \hat I: \kappa (j) = 1, \forall j\in J\}$ be the kernel of the restriction map $r: \hat I \to \hat J$. Since
$J^\perp$ is a subgroup of order $| I/J | <\infty$, and since $\hat I$ is Hausdorff, we may choose a collection $\{ A_\kappa: \kappa \in J^\perp\}$ of mutually disjoint open subsets of $\hat I$ such that $\kappa \in A_\kappa$ for each $\kappa\in J^\perp$. Define $B_1:= \bigcap_{\kappa \in J^\perp} \kappa^{-1} A_\kappa$ and for each $\kappa\in J^\perp$ let $B_\kappa := \kappa B_1$. Then $\{B_\kappa : \kappa \in J^\perp\}$ is a collection of mutually disjoint open sets such that $\kappa \in B_\kappa$ and $r(B_\kappa) = r(B_1)$ for every $\kappa \in J^\perp$. We claim that the restrictions $r: B_\kappa \to \hat J$ are homeomorphisms onto their image. Since the $B_\kappa$ are translates of $B_1$ and since $r$ is continuous and open, it suffices to verify that $r$ is injective on $B_1$. This is easy to see because if
$r(\xi_1) = r(\xi_2)$ for two distinct elements $\xi_1,\xi_2$ of $B_1$, then $\xi_2 = \kappa \xi_1$ for some $\kappa \in J^\perp \setminus \{1\}$, and this would contradict $B_1 \cap \kappa B_1 = \emptyset$. This proves the claim. We may then choose $\tilde\gamma \in \hat I$ with $r(\tilde\gamma) = \gamma$, take $N := \gamma \, r(B_1)$, and define $h_\kappa := (r |_{\tilde\gamma B_\kappa})^{-1}$ for $\kappa \in J^\perp$, relabelled as $h_1, \ldots, h_{|I/J|}$; properties (1)-(3) are now easily verified, using that the sets $\tilde\gamma B_\kappa$ are disjoint translates of one another with union $r^{-1}(N)$. \end{proof}
\begin{lemma}\label{Anthony'sLemma1} Let $X$ be a measurable space and let $T: X \to X$ be measurable. Suppose that $\lambda$ is an ergodic $T$-invariant probability measure on $X$. If $\mu$ is a $T$-invariant probability measure on $X$ such that $\mu \ll \lambda$, then $\mu = \lambda$. \end{lemma} \begin{proof}
Fix $f \in L^\infty(\lambda)$ and define $(A_n f)(x) = \frac{1}{n} \sum\limits_{k=0}^{n-1} f(T^{k}x)$. Let $S = \{x \in X: (A_nf)(x) \to \int_X f d\lambda\}$. By the Birkhoff ergodic theorem, we have that $\lambda(S^c)=0$, and so $\mu(S^c)=0$ as well, that is, $(A_nf)(x) \to \int_X f d\lambda$ $\mu$-a.e. Since $f \in L^\infty(\lambda)$ and $\mu \ll \lambda$, we have that $f \in L^\infty(\mu)$ as well, with $\|f\|_\infty^\mu \le \|f\|_\infty^\lambda$. Observe that $|(A_nf)(x)| \le \|f\|_\infty^\lambda$ for $\mu$-a.e. $x$, and so by the dominated convergence theorem, $\int_X A_n f d\mu \to \int_X\left(\int_X f d\lambda\right) d\mu = \int_X f d\lambda$, with the last equality because $\mu(X)=1$.
Because $\mu$ is $T$-invariant, we have that $\int_X A_n f d\mu = \int_X f d\mu$ for all $n$. Combining this with the above implies that $\int_X f d\lambda = \int_X f d\mu$ for all $f \in L^\infty(\lambda)$. In particular, this holds for the indicator function of each measurable set, and so $\mu = \lambda.$ \end{proof}
\begin{lemma} \label{fromItoJ} Let $J \subseteq I$ be two integral ideals in $\OO_{\! K}$ and let $r: \hat I \to \hat J$ be the restriction map. If $\mu$ is an ergodic $\OO_{\! K}^*$-invariant probability measure on $\hat I$, then $\tilde{\mu}:= \mu \circ r^{-1}$ is an ergodic invariant probability measure on $\hat J$. Moreover, the support of $\mu$ is finite if and only if the support of $\tilde \mu$ is finite. \end{lemma}
\begin{proof} Assume $\mu$ is ergodic invariant on $\hat I$ and let $E \subseteq \hat J$ be an $\ok^*$-invariant measurable set. Since $r$ is $\ok^*$-equivariant, $r^{-1}(E)$ is also $\ok^*$-invariant so $\tilde{\mu}(E):=\mu(r^{-1}(E)) \in \{0,1\}$ because $\mu$ is ergodic invariant. Thus, $\tilde{\mu}$ is also ergodic invariant. The statement about the support follows immediately because $r$ has finite fibers. \end{proof}
\begin{lemma}\label{liftingF} Suppose $J \subseteq I$ are integral ideals in $\OO_{\! K}$, and let $\lambda_{\hat J}, \lambda_{\hat I}$ be normalized Haar measures on $\hat J, \hat I$, respectively. If the only ergodic $\OO_{\! K}^*$-invariant probability measure on $\hat J$ with infinite support is $\lambda_{\hat J}$, then the only ergodic $\OO_{\! K}^*$-invariant probability measure on $\hat I$ with infinite support is $\lambda_{\hat I}$. \end{lemma} \begin{proof} Let $\mu$ be an ergodic $\OO_{\! K}^*$-invariant probability measure on $\hat I$ with infinite support. By \lemref{fromItoJ}, $\mu \circ r^{-1}$ is an ergodic $\OO_{\! K}^*$-invariant probability measure on $\hat J$ with infinite support, and so by assumption must equal $\lambda_{\hat J}$. In particular, $\lambda_{\hat I} \circ r^{-1} = \lambda_{\hat J}$.
Since $\hat J$ is compact, the open cover $\{N_\gamma: \gamma \in \hat J\}$ given by the sets constructed in \lemref{partition}
has a finite subcover, that is, there exist $\gamma_1, \ldots, \gamma_n \in \hat J$ so that $\hat J = \bigcup\limits_{k=1}^n N_{\gamma_k}$, where $N_{\gamma_k}$ is a neighborhood of $\gamma_k \in \hat J$ satisfying the conditions stated in \lemref{partition}, with corresponding maps $h_{\gamma_k}^{(j)}$, for $1 \le j \le |I/J|$ and $1 \le k \le n$.
We will first show that if $B \subseteq \hat I$ is such that $r|_B$ is a homeomorphism with $r(B) \subseteq N_{\gamma_k}$ for some $k$, and if $\lambda_{\hat I}(B) = 0$, then $\mu(B) = 0$. Suppose $B$ is such a set and $\lambda_{\hat I}(B)=0$. By part (3) of \lemref{partition}, $r^{-1}(r(B)) = \bigsqcup\limits_{j=1}^{|I/J|} h_{\gamma_k}^{(j)}(r(B))$, so $r^{-1}(r(B))$ is a disjoint union of $|I/J|$ sets, all having the same measure under $\lambda_{\hat I}$. Moreover, there exists some $1 \le j \le |I/J|$ such that $h_{\gamma_k}^{(j)}(r(B)) = B$, because the $h_{\gamma_k}^{(j)}$'s form a complete set of local inverses for $r$, and $r$ is injective on $B$. Putting these together yields
\[\lambda_{\hat I}(r^{-1}(r(B))) = |I/J| \lambda_{\hat I}(h_{\gamma_k}^{(j)}(r(B))) = |I/J|\lambda_{\hat I}(B) =0.\]
Since $\mu \circ r^{-1} = \lambda_{\hat J} = \lambda_{\hat I} \circ r^{-1}$, this implies that $\mu(r^{-1}(r(B))) = 0$ as well, and since $B \subseteq r^{-1}(r(B))$, we have that $\mu(B) = 0$.
Now, since $r: \hat I \to \hat J$ is a covering map, for each $\chi \in \hat I$, there exists an open neighbourhood $U_\chi$ of $\chi$ such that $r|_{U_\chi}$ is a homeomorphism. Let $1 \le k \le n$ be such that $r(\chi) \in N_{\gamma_k}$, and let $W_\chi:=U_\chi \cap r^{-1}(N_{\gamma_k})$. This forms another open cover of $\hat I$, and so by compactness of $\hat I$, there exists a finite subcover $W_1, \ldots, W_m$.
Finally, let $A \subseteq \hat I$ be such that $\lambda_{\hat I}(A)=0$. Then $A \cap W_i$ is a set on which $r$ acts as a homeomorphism, and there exists $1 \le k \le n$ such that $r(A \cap W_i) \subseteq N_{\gamma_k}$. Thus, by the above, we conclude $\mu(A \cap W_i) = 0$ for all $1 \le i \le m$. Since these sets cover $A$, we have that $\mu(A)=0$, and hence $\mu \ll \lambda_{\hat I}$, as desired. By \lemref{Anthony'sLemma1} it follows that $\mu = \lambda_{\hat I}$. \end{proof}
\begin{proof}[Proof of \proref{oneidealsuffices}:] Suppose $J$ is an integral ideal such that the only ergodic $\ok^*$-invariant probability measure on $\hat J$ with infinite support is normalized Haar measure. By \lemref{liftingF} applied to the inclusion $J\subset \OO_{\! K}$, the only ergodic $\OO_{\! K}^*$-invariant probability measure on $\widehat{\OO}_{\! K}$ with infinite support is normalized Haar measure.
Suppose now $I \subseteq \OO_{\! K}$ is an arbitrary integral ideal. Since the ideal class group is finite, a power of $I$ is principal and thus we may choose $q \in \OO_{\! K}^\times$ such that $q \OO_{\! K} \subseteq I$. The action of $\OO_{\! K}^*$ on $\widehat{\OO}_{\! K}$ is conjugate to the action of $\OO_{\! K}^*$ on $\widehat{q\OO_{\! K}}$, and so the only ergodic $\OO_{\! K}^*$-invariant probability measure on $\widehat{q\OO_{\! K}}$ with infinite support is normalized Haar measure. Thus, by \lemref{liftingF} again, applied to the inclusion $q\OO_{\! K} \subseteq I$, we conclude that the only ergodic $\OO_{\! K}^*$-invariant probability measure on $\hat I$ with infinite support is $\lambda_{\hat I}$. \end{proof}
In order to understand the situation for number fields with unit rank higher than $1$, we review in the next section the topological version of the problem of ergodic invariant measures, namely, the classification of closed invariant sets.
\section{Berend's theorem and number fields}\label{berendsection} An elegant generalization to higher-dimensional tori of Furstenberg's characterization \cite[Theorem IV.1]{F} of closed invariant sets for semigroups of transformations of the circle was obtained by Berend \cite[Theorem 2.1]{B}. The fundamental question investigated by Berend is whether an infinite invariant set is necessarily dense, and his original formulation is for semigroups of endomorphisms of a torus. Here we are interested in the specific situation arising from an algebraic number field $K$ in which the units $\ok^*$ act by automorphisms on $\hat J$ for integral ideals $J \subseteq \OO_{\! K}$ representing each ideal class, so we paraphrase Berend's Property ID for the special case of a group action on a compact space. \begin{definition} (cf. \cite[Definition 2.1]{B}.) Let $G$ be a group acting on a compact space $X$ by homeomorphisms. We say that the action $G\mathrel{\reflectbox{$\righttoleftarrow$}} X$ {\em satisfies the ID property}, or that it has the {\em infinite invariant dense property}, if the only closed infinite $G$-invariant subset of $X$ is $X$ itself. \end{definition} The first observation is a topological version of the measure-theoretic solidarity proved in \proref{oneidealsuffices}; namely, if $K$ is a given number field, then the action $\ok^* \mathrel{\reflectbox{$\righttoleftarrow$}} \hat J$ has the ID property either for all integral ideals $J$, or for none.
\begin{proposition}\label{IDforallornone} Suppose $K$ is an algebraic number field, and let $J$ be an ideal in $\OO_{\! K}$. Then the action of $\ok^*$ on $\hat J$ is ID if and only if the action of $\ok^*$ on $\hat{\OO_{\! K}}$ is ID. \end{proposition} \begin{proof} Suppose first that $J_1 \subseteq J_2$ are ideals in $\OO_{\! K}$ and assume that the action of $\ok^*$ on $\hat{J}_2$ is ID. The restriction map $r: \hat J_2 \to \hat J_1$ is $\ok^*$-equivariant, continuous, surjective, and has finite fibers. Thus, if $E$ were a closed, proper, infinite $\ok^*$-invariant subset of $\hat{J}_1$, then $r^{-1}(E)$ would be a closed, proper, infinite $\ok^*$-invariant subset of $\hat{J}_2$, contradicting the assumption that the action of $\ok^*$ on $\hat{J}_2$ is ID. So no such set $E$ exists, proving that the action of $\ok^*$ on $\hat{J}_1$ is also ID.
In particular, if the action of $\ok^*$ on $\hat{\OO_{\! K}}$ is ID, then the action on $\hat{J}$ is also ID for every integral ideal $J\subset \OO_{\! K}$. For the converse, recall that, as in the proof of \proref{oneidealsuffices}, there exists an integer $q \in \OO_{\! K}^\times$ such that $q\OO_{\! K} \subseteq J$, so we may apply the preceding paragraph to this inclusion. Since the action of $\ok^*$ on $\widehat{q\OO_{\! K}}$ is conjugate to that on $\widehat{\OO}_{\! K}$, this completes the proof. \end{proof}
In order to decide for which number fields the action of units on the integral ideals is ID, we need to recast Berend's necessary and sufficient conditions in terms of properties of the number field. Recall that, by definition, a number field is called a {\em complex multiplication (or CM) field} if it is a totally imaginary quadratic extension of a totally real subfield. These fields were studied by Remak \cite{rem}, who observed that they are exactly the fields that have a {\em unit defect}, in the sense that they contain a proper subfield $L$ with the same unit rank.
\begin{theorem}\label{berend4units}
Let $K$ be an algebraic number field and let $J$ be an ideal in $\OO_{\! K}$.
The action of $\ok^*$ on $\hat J$ is ID if and only if $K$ is not a CM field and $\operatorname{rank} \ok^* \geq 2$. \end{theorem}
For the proof we shall need a few number theoretic facts. We believe these are known but we include the relatively straightforward proofs below for the convenience of the reader.
\begin{lemma}\label{99percent} Suppose $\mathcal F$ is a finite family of subgroups of $\mathbb Z^d$ such that $\operatorname{rank}(F) < d$ for every $F\in \mathcal F$. Then there exists $m\in \mathbb Z^d$ such that $m+F$ is nontorsion in $\mathbb Z^d/F$ for every $F \in \mathcal F$. \end{lemma} \begin{proof} Recall that for each subgroup $F$ there exists a basis $\{n^F_j\}_{j= 1, 2, \ldots , d}$ of $\mathbb Z^d$ and integers $a_1, a_2, \ldots, a_{\operatorname{rank}(F)}$ such that \[ F = \textstyle\big\{ \sum_{i=1}^{\operatorname{rank}(F)} k_i n^F_i:\ k_i \in a_i\mathbb Z, \ 1\leq i \leq \operatorname{rank}(F)\big\}. \]
The associated vector subspaces $S_F := \operatorname{span}_\mathbb R\{n^F_1, \ldots, n^F_{\operatorname{rank}(F)}\} $ of $\mathbb R^d$ are proper and closed so $\mathbb R^d \setminus \cup_F S_F$ is a nonempty open set, see e.g. \cite[Theorem 1.2]{rom}. Let $r$ be a point in $\mathbb R^d \setminus \cup_F S_F$ with rational coordinates. If $k$ denotes the l.c.m. of all the denominators of the coordinates of $r$, then $m := kr \in \mathbb Z^d$ and its image $m+F \in \mathbb Z^d/F$ is of infinite order for every $F$ because $m \notin S_F$. \end{proof}
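\begin{remark}
For instance (a toy example only), take $d=2$ and $\mathcal F=\{F_1,F_2\}$ with $F_1=\mathbb Z\times\{0\}$ and $F_2=\{(k,k): k\in\mathbb Z\}$. The associated subspaces are the horizontal axis and the diagonal, and $m=(1,2)$ lies on neither; its images in $\mathbb Z^2/F_1\cong\mathbb Z$ and $\mathbb Z^2/F_2\cong\mathbb Z$ are $2$ and $-1$, respectively, so $m+F$ is nontorsion for both subgroups.
\end{remark}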
\begin{proposition}\label{unitdefect}
Let $K$ be an algebraic number field. Then there exists a unit $u\in \ok^*$ such that
$K = \mathbb Q(u^k)$ for every $k\in \N^\times$ if and only if $K$ is not a CM field. \end{proposition} \begin{proof}Assume first $K$ is not a CM field. Then $\operatorname{rank} \OO_{\!F}^* < \operatorname{rank} \ok^*$ for every proper subfield $F$ of $K$. Since there are only finitely many proper subfields $F$ of $K$, \lemref{99percent} gives a unit $u\in \ok^*$ with nontorsion image in $\ok^*/\OO_{\!F}^*$ for every $F$. Thus $u^k \notin F$ for every proper subfield $F$ of $K$ and every $k\in \mathbb N$.
Assume now $K$ is a CM field, and let $F$ be a totally real subfield with the same unit rank as $K$ \cite{rem}. Then the quotient $\ok^*/\OO_{\!F}^*$ is finite and there exists a fixed integer $m$ such that $u^m \in F$ for every $u\in \ok^*$. \end{proof}
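\begin{remark}
To illustrate \proref{unitdefect} with two examples only: for the non-CM field $K=\mathbb Q(\sqrt2)$ the fundamental unit $u=1+\sqrt2$ satisfies $\mathbb Q(u^k)=K$ for every $k\geq1$, because $u^k$ is irrational; for the CM field $K=\mathbb Q(i)$ every unit is a root of unity, so $u^4=1\in\mathbb Q$ for all $u\in\ok^*$ and no unit has the stated property.
\end{remark}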
\begin{lemma} \label{friday} Let $k$ be an algebraic number field with $\operatorname{rank} \ok^* \geq 1$. Then for every embedding
$\sigma: k \to \mathbb C$, there exists $u \in \ok^* $ such that $|\sigma(u)| > 1$. \end{lemma} \begin{proof} Assume for contradiction that $\sigma$ is an embedding of $k$ in $\mathbb C$
such that $\sigma(\ok^* ) \subseteq \{z \in \mathbb C: |z|=1\}$. Let $K = \sigma(k)$ and let $U_K = \sigma(\ok^* )$. Then $K \cap \mathbb R$ is a real subfield of $K$ with $ U_{K\cap \mathbb R} = \{\pm1\}$, so $K \cap \mathbb R= \mathbb Q$. Also $K \cap \mathbb R$ is the maximal real subfield of $K$, and since we are assuming $\operatorname{rank} \ok^* \geq 1$, $K$ cannot be a CM field. To see this, suppose that $k$ were CM. Let $\ell \subseteq k$ be a totally real subfield such that $[k:\ell] = 2$. Since $\ell$ is totally real, $\sigma(\ell)\subseteq \mathbb R$, and since $K \cap \mathbb R= \mathbb Q$, it must be that $\sigma(\ell) = \mathbb Q$. Then $\ell = \mathbb Q$, so $k$ is quadratic imaginary, contradicting $\operatorname{rank} \ok^* \ge 1$.
By \proref{unitdefect}, there exists $u \in U_K$ such that $K = \mathbb Q(u)$. Since $|u|=1$, we have that $\overline K =\mathbb Q(\overline u) = \mathbb Q(u^{-1})= \mathbb Q(u) = K$, so $K$ is closed under complex conjugation. Write $u = a + ib$. Then $u + \overline u = 2a \in K \cap \mathbb R= \mathbb Q$, so $a \in \mathbb Q$. Thus, $K = \mathbb Q(u) = \mathbb Q(ib).$ Since $|u|=1$, $a^2+b^2=1$, and so we have that \[\mathbb Q(ib) \cong \mathbb Q(\sqrt{-b^2}) \cong \mathbb Q(\sqrt{a^2-1}) \cong \mathbb Q\left(\sqrt{\frac{m^2-n^2}{n^2}}\right) \cong \mathbb Q(\sqrt{m^2-n^2}),\] where $a = m/n \in \mathbb Q$.
Thus, $K$ is a quadratic field. But it cannot be quadratic imaginary because $\operatorname{rank} U_K \ge 1$, and it cannot be quadratic real because all the units lie on the unit circle. This proves there can be no such embedding. \end{proof}
\begin{proof} [Proof of \thmref{berend4units}] By \proref{IDforallornone}, it suffices to prove the case $J = \OO_{\! K}$. Let $d = [K:\mathbb Q]$ and recall that $\widehat{\OO}_{\! K} \cong \mathbb T^d$. All we need to do is verify that Berend's necessary and sufficient conditions for ID \cite[Theorem 2.1]{B}, when interpreted for the automorphic action of $\ok^*$ on $\widehat{\OO}_{\! K}$, characterize non-CM fields of unit rank $2$ or higher. Since the action of $\ok^*$ by linear toral automorphisms $\rho(u)$ with $u\in \ok^*$ is faithful by \cite[p. 729]{KKS}, Berend's conditions are:
\begin{enumerate}
\item (totally irreducible)
there exists a unit $u$ such that the characteristic polynomial of $\rho(u^n)$ is irreducible for all $n\in \mathbb N$;
\item (quasi-hyperbolic)
for every common eigenvector of $\{\rho(u): u\in \ok^*\}$, there is a unit $u\in \ok^*$ such that the corresponding eigenvalue
of $\rho(u)$ is outside the unit disc; and
\item (not virtually cyclic) there exist units $u,v\in \ok^*$ such that if $m,n \in \mathbb N$ satisfy $\rho(u^m) = \rho(v^n)$, then $m = n = 0$. \end{enumerate}
Suppose first that the action of $\ok^*$ on $\widehat{\OO}_{\! K}$ is ID. By \cite[Theorem 2.1]{B} conditions (1) and (3) above hold, i.e. the action of $\ok^*$ on $\widehat{\OO}_{\! K}$ is totally irreducible and not virtually cyclic. Condition (1) gives a unit $u$ for which the characteristic polynomial of $\rho(u^n)$ is irreducible for all $n$, so that $\mathbb Q(u^n) = K$ for all $n$ and hence $K$ is not a CM field by \proref{unitdefect}; and since $\rho:\ok^* \to GL_d(\mathbb Z)$ is faithful, (3) is a restatement of $\operatorname{rank}\ok^* \geq 2$.
Suppose now that $K$ is not CM and has unit-rank at least $2$.
By \proref{unitdefect}, there exists $u \in \ok^*$ such that $\mathbb Q(u^n) = K$ for every $n \in \mathbb N$. Hence the
minimal polynomial of $\rho(u^n)$ has degree $d$, and so it coincides with the characteristic polynomial. This proves that condition (1) holds,
i.e. the action of $\rho(u)$ is totally irreducible. We have already observed that condition
(3) holds iff the unit rank of $K$ is at least $2$, so it remains to see that the hyperbolicity condition (2) holds too. In the simultaneous diagonalization of the matrix group $\rho(\ok^*)$, the diagonal entries
of $\rho(u)$ are the embeddings of $u$ into $\mathbb R$ or $\mathbb C$, see e.g. \cite[p.729]{KKS}. Then condition (2) follows from \lemref{friday}. \end{proof} \begin{remark} Notice that for units acting on algebraic integers, Berend's hyperbolicity condition (2) is automatically implied by the rank condition (3). \end{remark}
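\begin{remark}
By way of example (standard facts recorded only for illustration): the cyclotomic field $\mathbb Q(\zeta_7)$ has degree $6$ and unit rank $2$, but it is a CM field because its maximal real subfield $\mathbb Q(\zeta_7+\zeta_7^{-1})$ is totally real of degree $3$ with the same unit rank; by \thmref{berend4units} the action of its units on $\mathbb T^6$ is therefore not ID, despite the rank condition. On the other hand, the totally real field $\mathbb Q(\sqrt2,\sqrt3)$ has degree $4$ and unit rank $3$ and cannot be CM, since CM fields are totally imaginary, so the action of its units on $\widehat{\OO}_{\! K}\cong\mathbb T^4$ has the ID property.
\end{remark}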
\begin{remark} Since the matrices representing the actions of $\ok^*$ on $\hat J$ and on $\hat \OO_{\! K}$ are conjugate over $\mathbb Q$, \proref{IDforallornone} can be derived from the implication (1)$\implies$(3) in \cite[Proposition 2.1]{KKS}. We may also see that the matrices implementing the action on $\hat J$ and on $\hat \OO_{\! K}$ have the same sets of characteristic polynomials, so the questions of expansive eigenvalues (condition (2)) and of total irreducibility are equivalent for the two actions. The third condition is independent of whether we look at $\hat J$ or $\hat \OO_{\! K}$, so this yields yet another proof of Proposition \ref{IDforallornone}. \end{remark}
By \thmref{berend4units}, for each non-CM algebraic number field $K$ with unit rank at least $2$, the action $\ok^*$ on $\widehat{\OO}_{\! K}$, transposed as $\{\rho(u): u\in \ok^*\}$ acting on $\mathbb R^d/\mathbb Z^d$,
is an example of an abelian toral automorphism group for which one may hope
to prove that normalized Haar measure is the only ergodic invariant probability measure with infinite support. So it
is natural to ask which groups of toral automorphisms arise this way. A striking observation of Z. Wang \cite[Theorem 2.12]{ZW}, see also \cite[Proposition 2.2]{LW}, states that every finitely generated abelian group of automorphisms of $\mathbb T^d$ that contains a totally irreducible element and whose rank is maximal and greater than or equal to $2$ arises, up to conjugacy, from a finite index subgroup of units acting on the integers of a non-CM field of degree $d$ and unit rank at least $2$, cf. \cite[Condition 1.5]{ZW}. We wish next to give a proof of the converse, which was also stated in \cite{ZW}. \begin{proposition} \label{ZWclaim} Suppose $G$ is an abelian subgroup of $SL_d(\mathbb Z)$ satisfying \cite[Condition 2.8]{ZW}. Specifically, suppose there exist \begin{itemize} \item a non-CM number field $K$ of degree $d$ and unit rank at least $2$; \item an embedding $\phi:G \to \ok^*$ of $G$ into a finite index subgroup of $\ok^*$; \item a co-compact lattice $\Gamma$ in $K\subset K\otimes_\mathbb Q \mathbb R\cong \mathbb R^d$ invariant under multiplication by $\phi(G)$; and \item a linear isomorphism $\psi: \mathbb R^d \to K\otimes_\mathbb Q \mathbb R\cong \mathbb R^d$ mapping $\mathbb Z^d$ onto $\Gamma$ that intertwines
the actions $G\mathrel{\reflectbox{$\righttoleftarrow$}}\mathbb R^d$ and $\phi(G) \mathrel{\reflectbox{$\righttoleftarrow$}} (K\otimes_\mathbb Q \mathbb R)/\Gamma$. \end{itemize}
Then $G $ satisfies \cite[Condition 1.5]{ZW}, namely
\begin{enumerate}
\item $\operatorname{rank}(G)\geq 2$;
\item the action $g\mathrel{\reflectbox{$\righttoleftarrow$}} \mathbb R^d/\mathbb Z^d \cong \mathbb T^d$ is totally irreducible for some $g\in G$;
\item $\operatorname{rank} G_1 = \operatorname{rank} G$ for each abelian subgroup $G_1 \subset SL_d(\mathbb Z)$ containing~$G$.
\end{enumerate} \end{proposition}
\begin{proof} Suppose $K$ is a non-CM algebraic number field of degree $d$ with unit rank at least $2$, and assume $G$ is a subgroup of $SL_d(\mathbb Z)$ that satisfies the assumptions with respect to $K$. Part (1) of \cite[Condition 1.5]{ZW} is immediate, because $\phi(G)$ is of full rank in $\ok^*$.
By \proref{unitdefect}, there exists a unit $u\in \ok^*$ such that the characteristic polynomial of $\rho(u^m)$ is irreducible over $\mathbb Q$ for all $m \in \mathbb N$. This is equivalent to the
action of $u^m$ on $\widehat{\OO}_{\! K}$ being irreducible for all $m \in \mathbb N$, see, e.g. \cite[Proposition 3.1]{KKS}. Since $\phi(G)$ is of finite index in $\ok^*$, there exists $N\in \mathbb N$ such that $u^N \in \phi(G)$. We claim that $g := \phi^{-1}(u^N)$ is a totally irreducible element in $G\mathrel{\reflectbox{$\righttoleftarrow$}} \mathbb R^d/\mathbb Z^d$. To see this, it suffices to show that the characteristic polynomial of $g^k$ is irreducible over $\mathbb Q$ for every positive integer $k$. Since the linear isomorphism $\psi$ intertwines the actions $g^k\mathrel{\reflectbox{$\righttoleftarrow$}} \mathbb T^d$ and $\rho(\phi(g))^k \mathrel{\reflectbox{$\righttoleftarrow$}} (K\otimes_\mathbb Q\mathbb R)/\Gamma$, the characteristic polynomial of $g^k$ equals the characteristic polynomial of $\rho(\phi(g))^k = \rho(u^{kN})$, which is irreducible because it coincides with the characteristic polynomial of $u^{kN}$ as an element of the ring $\OO_{\! K}$. This proves part (2) of Condition 1.5.
Suppose now that $G_1$ is an abelian subgroup of $SL_d(\mathbb Z)$ containing $G$ and apply the construction from \cite[Proposition 2.13]{ZW} (see also \cite{KlausS,EL1}) to the irreducible element $g\in G\subset G_1 \mathrel{\reflectbox{$\righttoleftarrow$}} \mathbb T^d$. Up to an automorphism, the resulting number field arising from this construction is $K= \mathbb Q(u^N)$, and the embedding $\phi_1: G_1 \to \ok^*$ is an extension of $\phi:G\to \ok^*$. Since $\phi(G)\subset \phi_1(G_1) \subset \ok^*$ and $\phi(G)$ is of finite index in $\ok^*$, \[\operatorname{rank} (G_1) = \operatorname{rank}\phi_1(G_1) = \operatorname{rank} \ok^* = \operatorname{rank} \phi(G) = \operatorname{rank} G,\] and this proves part (3) of Condition 1.5. \end{proof} As a consequence, we see that the actions of units on the algebraic integers of number fields are generic for group actions with Berend's ID property in the following sense, cf. \cite{ZW,LW}. \begin{corollary} If $G$ is a finitely generated abelian subgroup of $SL_d(\mathbb Z)$ of torsion-free rank at least 2 that contains a totally irreducible element and is maximal among abelian subgroups of $SL_d(\mathbb Z)$ containing $G$, then $G$ is conjugate to a finite-index toral automorphism subgroup of the action of $\ok^* \mathrel{\reflectbox{$\righttoleftarrow$}} \widehat{\OO}_{\! K}$ for a non-CM algebraic number field $K$ of degree $d$ and unit rank at least $2$. \end{corollary}
Finally, we summarize what we can say at this point about equilibrium states of C*-algebras associated to number fields with unit rank strictly higher than one. If the generalized Furstenberg conjecture were verified, the following result would complete the classification started in \proref{imaginaryquadratic} and \proref{poulsen}.
Let $K$ be a number field and for each $\gamma \in \Cl_K$ define $F_\gamma $ to be the set of all pairs $(\mu, \chi)$ with $\mu$ an equiprobability measure on a finite orbit of the action of $\ok^*$ in $\hat {J}_\gamma$, and $\chi \in \hat{H}_\mu$, where the $\mu$-a.e. isotropy group $H_\mu$ is a finite index subgroup of $\ok^*$. Also let $(\lambda_J, 1) $ denote the pair consisting of normalized Haar measure on $\hat{J}$ and the trivial character of its trivial a.e. isotropy group. Then the map $(\mu,\chi) \mapsto \tau_{\mu,\chi}$ from \thmref{thm:nesh} gives an extremal tracial state of $C^*(J_\gamma \rtimes \ok^*)$ for each pair $(\mu,\chi) \in F_\gamma \sqcup \{(\lambda_{J_\gamma}, 1) \}$.
Recall that the map $\tau \mapsto \varphi_\tau$ from \cite[Theorem 7.3]{CDL} is an affine bijection of all tracial states of $\bigoplus_{\gamma\in \Cl_K} C^*(J_\gamma\rtimes \ok^*)$ onto $\mathcal K_\beta$, the simplex of KMS$_\beta$ equilibrium states of the system $(\mathfrak T[\OO_{\! K}], \sigma)$ studied in \cite{CDL}. \begin{theorem}\label{conjecturalclassification} Suppose $K$ is an algebraic number field with unit rank at least $2$ and define $\Phi:(\mu,\chi) \mapsto \varphi_{\tau_{\mu,\chi}}$ to be the composition of the maps from \thmref{thm:nesh} and from \cite[Theorem 7.3]{CDL}, assigning a state $\varphi_{\tau_{\mu,\chi}} \in \operatorname{Extr}(\mathcal K_\beta)$ to each pair $(\mu,\chi)$ consisting of an ergodic invariant probability measure $\mu$ in one of the $\hat{J}_\gamma$ and an associated character of the $\mu$-almost constant isotropy $H_\mu$. Let
\[ F_K:= \bigsqcup_{\gamma \in \Cl_K} \big(F_\gamma \sqcup \{(\lambda_{J_\gamma}, 1) \}\big) \]
be the set of pairs whose measure $\mu$ has finite support or is Haar measure. Then \begin{enumerate} \item if $K$ is a CM field, then the inclusion $\Phi(F_K) \subset \operatorname{Extr}(\mathcal K_\beta)$ is proper; and \item if $K$ is not a CM field, and if there exists $ \phi \in \operatorname{Extr}(\mathcal K_\beta) \setminus \Phi(F_K) $ then the measure $\mu$ on $\hat{J}_\gamma$ arising from $\phi$ has zero-entropy and infinite support. \end{enumerate} \end{theorem} \begin{proof} To prove assertion (1), recall that when $K$ is a CM field Berend's theorem implies that there are invariant subtori, which have ergodic invariant probability measures on the fibers, cf. \cite{KK,KS}. These measures give rise to tracial states and to KMS states not accounted for in $\Phi(F_K)$. Assertion (2) follows from \cite[Theorem 1.1]{EL1}. \end{proof}
\section{Primitive ideal space}\label{prim}
The computation of the primitive ideal spaces of the C*-algebras $C^*(J \rtimes \ok^*)$ associated to the action of units on integral ideals
lies within the scope of Williams' characterization in \cite{DW}. We briefly review the general setting next. Let $G$ be a countable, discrete, abelian group acting continuously on a second countable compact Hausdorff space $X$. We
define an equivalence relation on $X$ by saying that {\em $x$ and $y$ are equivalent} if $x$ and $y$ have the same orbit closure, i.e. if $\overline{G\cdot x} = \overline{G \cdot y}$. The equivalence class of $x$, denoted by $[x]$, is called the {\em quasi-orbit} of $x$, and the quotient space, which in general is not Hausdorff, is denoted by $\mathcal{Q}(G \mathrel{\reflectbox{$\righttoleftarrow$}} X)$ and is called the {\em quasi-orbit space}. It is important to distinguish the quasi-orbit of a point from the closure of its orbit, as the latter may contain other points with strictly smaller orbit closure.
Let $\epsilon_x$ denote evaluation at $x\in X$, viewed as a one-dimensional representation of $C(X)$. For each character $\kappa \in \hat G_x$, the pair $(\epsilon_x,\kappa)$ is clearly covariant for the
transformation group $(C(X), G_x)$, and the corresponding representation $\epsilon_x\times \kappa$ of $C(X) \rtimes G_x$ gives rise to an induced representation $\operatorname{Ind}_{{G_x}}^G(\epsilon_x\times \kappa)$ of $C(X) \rtimes G$, which is irreducible because $\epsilon_x\times \kappa$ is.
Since $G$ is abelian and the action is continuous, whenever $x$ and $y$ are in the same quasi-orbit, $[x] = [y]$, the corresponding isotropy subgroups coincide: $G_x = G_y$. Thus, we may consider an equivalence relation
on the product $\mathcal{Q}(G \mathrel{\reflectbox{$\righttoleftarrow$}} X) \times \hat G$ defined by
\[
([x],\kappa) \sim ([y],\lambda) \quad \iff \quad [x]=[y] \text{ and } \kappa\restr{G_x} = \lambda\restr{G_x} . \]
By \cite[Theorem 5.3]{DW}, the map $(x,\kappa) \mapsto \ker \operatorname{Ind}_{{G_x}}^G(\epsilon_x\times \kappa)$ induces a homeomorphism of
$(\mathcal{Q}(G \mathrel{\reflectbox{$\righttoleftarrow$}} X) \times \hat G)/_{\!\sim}$ onto the primitive ideal space of the crossed product
$C(X)\rtimes G$, see e.g. \cite[Theorem 1.1]{primbc} for more details on this approach.
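As a sanity check, not needed in what follows, consider the degenerate case in which $G$ acts trivially on $X$: then each point is its own quasi-orbit, $G_x = G$ for every $x$, the relation $\sim$ is trivial, and the parametrization reduces to the familiar identification of the primitive ideal space of $C(X)\rtimes G \cong C(X)\otimes C^*(G)\cong C(X\times \hat G)$ with $X\times\hat G$.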
We wish to apply the above result to actions $\ok^* \mathrel{\reflectbox{$\righttoleftarrow$}} \hat J$ for integral ideals $J$
of non-CM number fields with unit rank at least $2$, as in \thmref{berend4units}.
Notice that by \proref{orbitsandisotropy} if the orbit $\ok^*\cdot \chi$ is finite, then it is equal to the quasi-orbit $[\chi]$. The first step is to describe the quasi-orbit space for the action of units. We focus on the case $J = \OO_{\! K}$; ideals representing nontrivial classes behave similarly because of the solidarity established in \proref{IDforallornone}.
\begin{proposition}\label{quasiorbitset} Suppose $K$ is a non-CM algebraic number field with unit rank at least $2$. Then the quasi-orbit space of the action $\ok^* \mathrel{\reflectbox{$\righttoleftarrow$}} \widehat{\OO}_{\! K}$ is \[
\mathcal{Q}(\ok^* \mathrel{\reflectbox{$\righttoleftarrow$}} \widehat{\OO}_{\! K}) = \{[x]: |\ok^*\cdot x | < \infty\}\cup \{\omega_\infty\}. \] The point $\omega_\infty$ is the unique infinite quasi-orbit $[\alpha]$ of any $\alpha \in \widehat{\OO}_{\! K}\cong \mathbb R^d/\mathbb Z^d$ having at least one irrational coordinate. The closed proper subsets are the finite subsets all of whose points are finite (quasi-)orbits.
Infinite subsets and subsets that contain the infinite quasi-orbit $\omega_\infty$ are dense in the whole space. \end{proposition}
\begin{proof} By \thmref{berend4units}, the closure of each infinite orbit is the whole space. Thus, the points with infinite orbits collapse into a single quasi-orbit \[
\omega_\infty := \{x\in \widehat{\OO}_{\! K}: |\ok^* \cdot x| =\infty\} = \{x\in \widehat{\OO}_{\! K}: \overline{\ok^* \cdot x} = \widehat{\OO}_{\! K}\}. \] That this is the set of points with at least one irrational coordinate is immediate from \cite[Theorem 5.11]{Wal}. When the orbit of $x$ is finite, it is itself a quasi-orbit, which we view as a point in $\mathcal{Q}(\ok^* \mathrel{\reflectbox{$\righttoleftarrow$}} \widehat{\OO}_{\! K})$. In this case $x\in \widehat{\OO}_{\! K}$ has all rational coordinates.
To describe the topology, recall that the quotient map $q: \widehat{\OO}_{\! K} \to \mathcal{Q}(\ok^* \mathrel{\reflectbox{$\righttoleftarrow$}} \widehat{\OO}_{\! K})$ is surjective, continuous and open by the Lemma on page 221 of \cite{PG}, see also the proof of Proposition 2.4 in \cite{primbc}.
Any two different finite quasi-orbits $[x]$ and $[y]$ are finite, mutually disjoint subsets of $\widehat{\OO}_{\! K}$ and as such can be separated by disjoint open sets $V$ and $W$, so that $[x]\subset V$ and $[y] \subset W$. Passing to the quotient space, we have
$[x]\notin q(W)$ and $[y] \notin q(V)$, so $[x]$ and $[y]$ are $T_1$-separated, which implies that finite sets of finite quasi-orbits are closed in $\mathcal Q(\ok^* \mathrel{\reflectbox{$\righttoleftarrow$}} \widehat{\OO}_{\! K})$.
The singleton $\{\omega_\infty\}$ is dense in $\mathcal Q(\ok^* \mathrel{\reflectbox{$\righttoleftarrow$}} \widehat{\OO}_{\! K})$ because every infinite orbit in $\widehat{\OO}_{\! K}$ is dense by \thmref{berend4units}. If $A$ is an infinite subset of $\mathcal Q(\ok^* \mathrel{\reflectbox{$\righttoleftarrow$}} \widehat{\OO}_{\! K})$ consisting of finite quasi-orbits, then $\bigcup_{[x]\in A} [x]$ is an infinite invariant set in $\widehat{\OO}_{\! K}$, hence is dense by \thmref{berend4units}. This implies that $\omega_\infty$ is in the closure of $A$, and hence $A$ is dense in $\mathcal Q(\ok^* \mathrel{\reflectbox{$\righttoleftarrow$}} \widehat{\OO}_{\! K})$. \end{proof}
\begin{theorem} \label{primhomeom} Let $K$ be a non-CM algebraic number field with unit rank at least 2, and let $G = \ok^*$. The primitive ideal space of $C(\widehat{\OO}_{\! K})\rtimes G$ is homeomorphic to the space \begin{equation}\label{primdef}
\bigsqcup\limits_{[x]} \left(\{[x]\} \times \hat{G}_x \right) \end{equation} in which a net $([x_\iota], \gamma_\iota)$ converges to $([x],\gamma)$ iff $[x_\iota] \to [x]$ in $\mathcal{Q}(G \mathrel{\reflectbox{$\righttoleftarrow$}} \widehat{\OO}_{\! K})$ and $\gamma_\iota|_{G_x} \to \gamma|_{G_x}$ in $\hat{G}_x$.
Notice that if $[x]$ is a finite quasi-orbit, then the net $\{[x_\iota] \}$ is eventually constant equal to $[x]$, and if $[x] = \omega_\infty$, then the condition $\gamma_\iota|_{G_{\omega_\infty}} \to \gamma|_{G_{\omega_\infty}}$ is trivially true because $G_{\omega_\infty} = \{1\}$. \end{theorem} \begin{proof}
Consider the diagram below, where $f$ is the quotient map and the vertical map $g$ is defined by $g([([x],\gamma)]) = ([x], \gamma|_{G_x})$, where $[([x],\gamma)]$ denotes the equivalence class of $([x],\gamma)$ with respect to $\sim$. \[
\begin{tikzcd}
\mathcal{Q}(G\mathrel{\reflectbox{$\righttoleftarrow$}} \widehat{\OO}_{\! K}) \times \hat G \arrow{r}{f} \arrow[swap]{dr}{g\circ f} & \mathcal{Q}(G\mathrel{\reflectbox{$\righttoleftarrow$}} \widehat{\OO}_{\! K}) \times \hat G/\sim \arrow{d}{g} \\
& \bigsqcup\limits_{[x]} \left(\{[x]\} \times \hat{G}_x \right)
\end{tikzcd} \] By the fundamental property of the quotient map, see e.g. \cite[Theorem 9.4]{wil}, $g\circ f$ is continuous if and only if $g$ is continuous.
It is clear that $g$ is a bijection. We show next that $g \circ f$ is continuous. Suppose that $([x_\iota], \gamma_\iota)$ is a net in $\mathcal{Q}(G\mathrel{\reflectbox{$\righttoleftarrow$}} \widehat{\OO}_{\! K}) \times \hat G$ converging to $([x],\gamma)$. Then $[x_\iota] \to [x]$ in $\mathcal{Q}(G\mathrel{\reflectbox{$\righttoleftarrow$}} \widehat{\OO}_{\! K})$ and $\gamma_\iota \to \gamma$ in $\hat G$, so that $\gamma_\iota|_{G_x} \to \gamma|_{G_x}$ in $\hat{G}_x$. Hence the net $g\circ f([x_\iota], \gamma_\iota)$ converges to $g\circ f([x],\gamma) = ([x], \gamma|_{G_x})$, as desired.
It remains to show that $g^{-1}$ is continuous, or equivalently, that $g$ is a closed map. Suppose that $W \subseteq \mathcal{Q}(G\mathrel{\reflectbox{$\righttoleftarrow$}} \widehat{\OO}_{\! K}) \times \hat G / \sim$ is closed, and suppose that $([x_\iota], \gamma_\iota)$ is a net in $g(W)$ converging to $([x], \gamma)$.
Consider any net $(\tilde{\gamma_\iota})$ in $\hat{G}$ such that $\tilde{\gamma_\iota}|_{G_x} = \gamma_\iota$. By the compactness of $\hat{G}$, there exists a convergent subnet $\tilde{\gamma}_{\iota_\eta}$ with limit $\tilde{\gamma}$. Then $\tilde{\gamma}_{\iota_\eta}|_{G_x} \to \tilde{\gamma}|_{G_x}$ as well, so $\gamma_{\iota_\eta} \to \tilde{\gamma}|_{G_x}$. Since $\hat{G}_x$ is Hausdorff, limits are unique, and hence $\tilde{\gamma}|_{G_x} = \gamma$.
The net $([x_{\iota_\eta}], \tilde{\gamma}_{\iota_\eta})$ converges to $([x], \tilde{\gamma})$ in $\mathcal{Q}(G \mathrel{\reflectbox{$\righttoleftarrow$}} \widehat{\OO}_{\! K}) \times \hat{G}$, and since $f$ is continuous, $f([x_{\iota_\eta}], \tilde{\gamma}_{\iota_\eta}) \to f([x],\tilde{\gamma})$. Moreover, $f([x_{\iota_\eta}], \tilde{\gamma}_{\iota_\eta}) = [([x_{\iota_\eta}], \tilde{\gamma}_{\iota_\eta})] \in W$ because $g$ is injective and $g([([x_{\iota_\eta}],\tilde{\gamma}_{\iota_\eta})]) = ([x_{\iota_\eta}],\gamma_{\iota_\eta}) \in g(W)$ by assumption. Since $W$ is closed, $[([x],\tilde{\gamma})] \in W$, and so its image $([x], \gamma) \in g(W)$, as desired. \end{proof}
\begin{remark} Recall that $G \cong W \times \mathbb Z^n$ with $W$ the roots of unity in $G$ and $n = r+s-1$ the unit rank of $K$, and that the isotropy subgroup $G_x$ is constant on the quasi-orbit $[x]$ of $x$. If $[x]$ is finite, then $G_x$ is of full rank in $G$, and thus $G_x \cong V_x \times \mathbb Z^n$, with $V_x \subset W$ the torsion part of $G_x$ (which depends only on $[x]$). Hence, for every finite quasi-orbit $[x]$, we have $\hat{G_x} \cong \hat V_x \times \mathbb T^n$. Notice that $\hat V_x \cong V_x$ (noncanonically) because $V_x$ is finite. \end{remark}
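\begin{remark}
For instance (illustration only), if $K=\mathbb Q(\sqrt2,\sqrt3)$, which is totally real of degree $4$ and unit rank $3$, then $W=\{\pm1\}$ and $G\cong\{\pm1\}\times\mathbb Z^3$; the trivial character of $\OO_{\! K}$ is fixed by every unit, so it is a finite quasi-orbit with $G_x=G$ and $\hat G_x\cong \mathbb Z/2\times\mathbb T^3$.
\end{remark}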
\end{document}
\begin{document}
\title{Thermal effect on mixed state geometric phases for neutrino propagation in a magnetic field}
\author{Da-Bao Yang}
\email{[email protected]}
\affiliation{Department of fundamental physics, School of Science, Tianjin Polytechnic University, Tianjin 300387, People's Republic of China}
\author{Ji-Xuan Hou}
\affiliation{Department of Physics, Southeast University, Nanjing 211189, People's Republic of China}
\author{Ku Meng}
\affiliation{Department of fundamental physics, School of Science, Tianjin Polytechnic University, Tianjin 300387, People's Republic of China}
\date{\today} \begin{abstract} In astrophysical environments, neutrinos may propagate over long distances in a magnetic field. In the presence of a rotating magnetic field, the neutrino spin can flip from left-handed to right-handed. Smirnov demonstrated that the pure state geometric phase due to the neutrino spin precession may cause resonant spin conversion inside the Sun. However, in general, the neutrinos may be in a thermal ensemble. In this article, the corresponding mixed state geometric phases will be formulated, including the off-diagonal cases and the diagonal ones. Their specific features with respect to temperature will be analyzed. \end{abstract}
\pacs{03.65.Vf, 75.10.Pq, 31.15.ac}
\maketitle
\section{Introduction}
\label{sec:introduction}
Geometric phase was discovered by Berry \cite{berry1984quantal} in the setting of adiabatic evolution. It was then generalized by Wilczek and Zee \cite{Wilczek1984appearance}, Aharonov and Anandan \cite{aharonov1987phase,anandon1988nonadiabatic}, and Samuel and Bhandari \cite{samuel1988general} in the context of pure states. Moreover, it was also extended to mixed state counterparts. An operationally well-defined notion was proposed by Sj$\ddot{o}$qvist \emph{et al.} \cite{sjoqvist2000mixed} based on interferometry. Subsequently, it was generalized to the degenerate case by Singh \emph{et al.} \cite{singh2003geometric} and to nonunitary evolution by Tong \emph{et al.} \cite{tong2004kinematic} by use of a kinematic approach. In addition, when the final state is orthogonal to the initial state, the above geometric phase is undefined, so a complementary notion was put forward by Manini and Pistolesi \cite{manini2000off}. The new phase is called the off-diagonal geometric phase, which was generalized to the non-abelian case by Kult \emph{et al.} \cite{kult2007nonabelian}. It was also extended to mixed states by Filipp and Sj$\ddot{o}$qvist \cite{filipp2003PRL,filipp2003offdiagonalPRA} for unitary evolution, and further to the non-degenerate case by Tong \emph{et al.} \cite{tong2005offdiagonal} by the kinematic approach. Finally, there are excellent review articles \cite{xiao2010Berry} and monographs \cite{shapere1989book,bohm2003book,chru2004book} discussing its influence and applications in physics and other natural sciences.
As is well known, the neutrino plays an important role in particle physics and astronomy. Smirnov investigated the effect of resonant spin conversion of solar neutrinos induced by the geometric phase \cite{smirnov1991solarneutrino}. Joshi and Jain worked out the geometric phase of a neutrino propagating in a rotating transverse magnetic field \cite{joshi2015neutrino}. However, their discussions are confined to the pure state case. In this article, we discuss the mixed state geometric phases of the neutrino, both off-diagonal and diagonal.
This paper is organised as follows. In the next section, the off-diagonal geometric phase for mixed states is reviewed, as well as the usual mixed state geometric phase, and the equation governing the propagation of the two helicity components of the neutrino is recalled. In Sec. III, both the off-diagonal and diagonal mixed state geometric phases for a neutrino in a thermal state are calculated. Finally, a conclusion is drawn in the last section.
\section{Review of off-diagonal phase}
\label{sec:reviews}
Suppose a non-degenerate density matrix takes the form \begin{equation}
\rho_{1}=\lambda_{1}|\psi_{1}\rangle\langle\psi_{1}|+\cdots+\lambda_{N}|\psi_{N}\rangle\langle\psi_{N}|.\label{eq:DenstiyMatrix1} \end{equation} Moreover, a family of density operators that cannot interfere with $\rho_{1}$ is introduced \cite{filipp2003offdiagonalPRA}, namely \[ \rho_{n}=W^{n-1}\rho_{1}(W^{\dagger})^{n-1},\quad n=1,...,N, \]
where \[
W=|\psi_{1}\rangle\langle\psi_{N}|+|\psi_{N}\rangle\langle\psi_{N-1}|+\cdots+|\psi_{2}\rangle\langle\psi_{1}|. \] Under unitary evolution, besides the usual mixed state geometric phase, there exists a so-called mixed state off-diagonal phase, which reads \cite{filipp2003offdiagonalPRA} \begin{equation} \gamma_{\rho_{j_{1}}...\rho_{j_{l}}}^{(l)}=\Phi[Tr(\prod_{a=1}^{l}U^{\parallel}(\tau)\sqrt[l]{\rho_{j_{a}}})],\label{eq:OffdiagonlaGeometricPhase} \end{equation}
where $\Phi[z]\equiv z/|z|$ for a nonzero complex number $z$ and \cite{tong2005offdiagonal} \begin{equation} U^{\parallel}(t)=U(t)\sum_{k=1}^{N}e^{-i\delta_{k}}|\psi_{k}\rangle\langle\psi_{k}|,\label{eq:ParallelUnitaryEvolution} \end{equation} in which \begin{equation}
\delta_{k}=-i\int_{0}^{t}\langle\psi_{k}|U^{\dagger}(t^{\prime})\dot{U}(t^{\prime})|\psi_{k}\rangle dt^{\prime}\label{eq:DynamicalPhase} \end{equation} and $U(t)$ is the time evolution operator of this system. Moreover $U^{\parallel}$ satisfies the parallel transport condition, which is \[
\langle\psi_{k}|U^{\parallel\dagger}(t)\dot{U}^{\parallel}(t)|\psi_{k}\rangle=0,\ k=1,\cdots,N. \]
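Indeed, this condition follows directly from Eqs. \eqref{eq:ParallelUnitaryEvolution} and \eqref{eq:DynamicalPhase}: since $\dot{\delta}_{k}(t)=-i\langle\psi_{k}|U^{\dagger}(t)\dot{U}(t)|\psi_{k}\rangle$, a short computation gives \[
\langle\psi_{k}|U^{\parallel\dagger}(t)\dot{U}^{\parallel}(t)|\psi_{k}\rangle=\langle\psi_{k}|U^{\dagger}(t)\dot{U}(t)|\psi_{k}\rangle-i\dot{\delta}_{k}(t)=0.
\]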
In addition, the usual mixed state geometric phase factor \cite{tong2004kinematic} takes the following form \begin{equation}
\gamma=\Phi\left[\sum_{k=1}^{N}\lambda_{k}\langle\psi_{k}|U(\tau)|\psi_{k}\rangle e^{-i\delta_{k}}\right]\label{eq:DiagonalGeometricPhase} \end{equation}
The propagating helicity components $\left(\begin{array}{cc} \nu_{R} & \nu_{L}\end{array}\right)^{T}$ of a neutrino in a magnetic field obey the following equation \cite{smirnov1991solarneutrino}
\begin{equation} i\frac{d}{dt}\left(\begin{array}{c} \nu_{R}\\ \nu_{L} \end{array}\right)=\left(\begin{array}{cc} \frac{V}{2} & \mu_{\nu}Be^{-i\omega t}\\ \mu_{\nu}Be^{i\omega t} & -\frac{V}{2} \end{array}\right)\left(\begin{array}{c} \nu_{R}\\ \nu_{L} \end{array}\right),\label{eq:Schrodinger} \end{equation} where $T$ denotes the matrix transpose operation, the transverse magnetic field satisfies $B_{x}+iB_{y}=Be^{i\omega t}$, $\mu_{\nu}$ represents the magnetic moment of a massive Dirac neutrino, and $V$ is a term due to the neutrino mass as well as its interaction with matter. The instantaneous eigenvalues and eigenvectors of the Hamiltonian take the following form \cite{joshi2015neutrino} \[ E_{1}=+\sqrt{\left(\frac{V}{2}\right)^{2}+(\mu_{\nu}B)^{2}} \] \begin{equation}
|\psi_{1}\rangle=\frac{1}{N}\left(\begin{array}{c} \mu_{\nu}B\\ -e^{i\omega t}\left(\frac{V}{2}-E_{1}\right) \end{array}\right)\label{eq:EigenVector1} \end{equation} and \[ E_{2}=-\sqrt{\left(\frac{V}{2}\right)^{2}+(\mu_{\nu}B)^{2}} \] \begin{equation}
|\psi_{2}\rangle=\frac{1}{N}\left(\begin{array}{c} e^{-i\omega t}\left(\frac{V}{2}-E_{1}\right)\\ \mu_{\nu}B \end{array}\right),\label{eq:EigenVector2} \end{equation} where the normalization factor is \[ N=\sqrt{\left(\frac{V}{2}-E_{1}\right)^{2}+\left(\mu_{\nu}B\right)^{2}}. \] If this system is in a thermal state, the density operator can be written as \begin{equation}
\rho=\lambda_{1}|1\rangle\langle1|+\lambda_{2}|2\rangle\langle2|\label{eq:DensityMatrix} \end{equation} where \[ \lambda_{1}=\frac{e^{-\beta E_{1}}}{e^{-\beta E_{1}}+e^{-\beta E_{2}}} \] and \[ \lambda_{2}=\frac{e^{-\beta E_{2}}}{e^{-\beta E_{1}}+e^{-\beta E_{2}}}. \] In addition, $\beta=1/(kT)$, where $k$ is the Boltzmann constant and $T$ represents the temperature. In the next section, both the off-diagonal and the diagonal mixed state geometric phases will be calculated.
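Before proceeding, note the limiting behaviour that follows directly from the definitions of $\lambda_{1}$ and $\lambda_{2}$: since $E_{1}>0>E_{2}$, \[
\lambda_{1}\rightarrow0,\quad\lambda_{2}\rightarrow1\qquad(T\rightarrow0),\qquad\qquad\lambda_{1},\lambda_{2}\rightarrow\frac{1}{2}\qquad(T\rightarrow\infty),
\] so the infinite-temperature limit corresponds to the maximally mixed state; this limit will reappear in the discussion of the diagonal phase below.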
\section{Mixed state geometric phase}
\label{sec:Nonadiabatic}
The differential equation Eq. \eqref{eq:Schrodinger} can be exactly solved by the following transformation \begin{equation} \left(\begin{array}{c} \nu_{R}\\ \nu_{L} \end{array}\right)=e^{-i\sigma_{z}\frac{1}{2}\omega t}\left(\begin{array}{c} a\\ b \end{array}\right),\label{eq:TransformedState} \end{equation} where $\sigma_{z}$ is a Pauli matrix along $z$ direction whose explicit form is \[ \sigma_{z}=\left(\begin{array}{cc} 1 & 0\\ 0 & -1 \end{array}\right). \] By substituting Eq. \eqref{eq:TransformedState} into Eq. \eqref{eq:Schrodinger}, one can obtain \begin{equation} i\frac{d}{dt}\left(\begin{array}{c} a\\ b \end{array}\right)=\tilde{H}\left(\begin{array}{c} a\\ b \end{array}\right),\label{eq:TransformedSchodinger} \end{equation} where \[ \tilde{H}=\mu_{\nu}B\sigma_{x}+\frac{1}{2}(V-\omega)\sigma_{z}. \] Furthermore, it can be written in this form \begin{equation} \tilde{H}=\frac{1}{2}\Omega\left(\begin{array}{ccc} \frac{2\mu_{\nu}B}{\Omega} & 0 & \frac{V-\omega}{\Omega}\end{array}\right)\centerdot\left(\begin{array}{ccc} \sigma_{x} & \sigma_{y} & \sigma_{z}\end{array}\right),\label{eq:TransformedHamiltonian} \end{equation} where $\Omega=\sqrt{\left(2\mu_{\nu}B\right)^{2}+\left(V-\omega\right)^{2}}$. Because $\tilde{H}$ is independent of time, Eq. \ref{eq:TransformedSchodinger} can be exactly solved, whose time evolution operator takes the form \[ \tilde{U}=e^{-i\tilde{H}t}. \] Associating with Eq. \eqref{eq:TransformedState}, the time evolution operator for Eq. \eqref{eq:Schrodinger} is \begin{equation} U=e^{-i\tilde{H}t}e^{i\sigma_{z}\frac{1}{2}\omega t}.\label{eq:UnitaryEvolution} \end{equation} By substituting Eq. \eqref{eq:TransformedHamiltonian} into Eq. \ref{eq:UnitaryEvolution}, the above operator can be written in an explicit form, which is \[ U=\left(\begin{array}{cc} \cos\frac{\Omega}{2}t-i\frac{V-\omega}{\Omega}\sin\frac{\Omega}{2}t & -i\frac{2\mu_{\nu}B}{\Omega}\sin\frac{\Omega}{2}t\\ -i\frac{2\mu_{\nu}B}{\Omega}\sin\frac{\Omega}{2}t & \cos\frac{\Omega}{2}t+i\frac{V-\omega}{\Omega}\sin\frac{\Omega}{2}t \end{array}\right)\left(\begin{array}{cc} e^{i\frac{\omega t}{2}} & 0\\ 0 & e^{-i\frac{\omega t}{2}} \end{array}\right) \] In order to calculate off-diagonal phase \eqref{eq:OffdiagonlaGeometricPhase}, by use of Eq. \eqref{eq:ParallelUnitaryEvolution}, we can work out \[ \begin{array}{ccc}
U_{11}^{\parallel} & \equiv & \langle\psi_{1}|U(t)\left(e^{-i\delta_{1}}|\psi_{1}\rangle\langle\psi_{1}|+e^{-i\delta_{2}}|\psi_{2}\rangle\langle\psi_{2}|\right)|\psi_{1}\rangle\\
& = & U_{11}e^{-i\delta_{1}}, \end{array} \]
where $U_{11}=\langle\psi_{1}|U(t)|\psi_{1}\rangle$. In order to simplify the result, let us consider a simpler case: when $t=\tau=2\pi/\Omega$, \begin{equation} U_{11}=-\frac{1}{N^{2}}\left[\mu_{\nu}^{2}B^{2}e^{i\frac{\omega\tau}{2}}+\left(\frac{V}{2}-E_{1}\right)^{2}e^{-i\frac{\omega\tau}{2}}\right]=U_{22}^{*},\label{eq:UDiagonal} \end{equation} where $*$ denotes complex conjugation. By similar calculations, one obtains \begin{equation} U_{12}=\frac{2}{N^{2}}\mu_{\nu}B\left(\frac{V}{2}-E_{1}\right)\sin\left(\frac{\omega\tau}{2}\right)e^{-i(\omega\tau+\frac{\pi}{2})}=-U_{21}^{*}.\label{eq:UOffDiagonal} \end{equation} Furthermore, $\delta_{1}$ can be calculated explicitly by substituting Eq. \eqref{eq:UnitaryEvolution} and Eq. \eqref{eq:EigenVector1} into Eq. \eqref{eq:DynamicalPhase}, which gives \begin{equation} \delta_{1}=\frac{1}{N^{2}}\left[2\mu_{\nu}^{2}B^{2}\left(\frac{V}{2}-E_{1}\right)+\left(\frac{V}{2}-E_{1}\right)^{2}\left(\frac{V}{2}-\omega\right)-\mu_{\nu}^{2}B^{2}\left(\frac{V}{2}-\omega\right)\right]\tau.\label{eq:DynamicalPhase1} \end{equation}
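Eq. \eqref{eq:UDiagonal} can also be obtained directly: since $\tilde{H}=\frac{\Omega}{2}\,\hat{n}\cdot\vec{\sigma}$ with $\hat{n}=\left(\frac{2\mu_{\nu}B}{\Omega},0,\frac{V-\omega}{\Omega}\right)$ a unit vector, at $t=\tau=2\pi/\Omega$ one has \[
e^{-i\tilde{H}\tau}=\cos\pi-i\left(\hat{n}\cdot\vec{\sigma}\right)\sin\pi=-1
\] (minus the identity operator), so that $U(\tau)=-e^{i\sigma_{z}\frac{\omega\tau}{2}}$, from which Eq. \eqref{eq:UDiagonal} follows immediately.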
By a similar calculation, one gets \begin{equation} \delta_{2}=-\delta_{1}.\label{eq:DynamicalPhase2} \end{equation} Hence Eq. \eqref{eq:ParallelUnitaryEvolution} can be written explicitly as \begin{equation} \left(\begin{array}{cc} U_{11}^{\parallel} & U_{12}^{\parallel}\\ U_{21}^{\parallel} & U_{22}^{\parallel} \end{array}\right)=\left(\begin{array}{cc} U_{11} & U_{12}\\ U_{21} & U_{22} \end{array}\right)\left(\begin{array}{cc} e^{-i\delta_{1}} & 0\\ 0 & e^{-i\delta_{2}} \end{array}\right).\label{eq:RelationsUnitaryEvolutions} \end{equation}
Now, let us calculate the mixed state off-diagonal phase \begin{equation} \gamma_{\rho_{1}\rho_{2}}^{(2)}=\Phi\left[Tr\left(\prod_{a=1}^{2}U^{\parallel}(\tau)\sqrt{\rho_{a}}\right)\right],\label{eq:OffDiagonalGeometricPhaseForNeutrino} \end{equation}
where $\rho_{1}=\lambda_{1}|1\rangle\langle1|+\lambda_{2}|2\rangle\langle2|$
and $\rho_{2}=\lambda_{1}|2\rangle\langle2|+\lambda_{2}|1\rangle\langle1|$. In the basis of $|\psi_{1}\rangle$ and $|\psi_{2}\rangle$, \begin{equation} \begin{array}{ccc}
Tr\left(\prod_{a=1}^{2}U^{\parallel}(\tau)\sqrt{\rho_{a}}\right) & = & \sum_{b=1}^{2}\langle\psi_{b}|\prod_{a=1}^{2}U^{\parallel}(\tau)\sqrt{\rho_{a}}|\psi_{b}\rangle\\
& = & \sqrt{\lambda_{1}\lambda_{2}}\left[\left(U_{11}^{\parallel}\right)^{2}+\left(U_{22}^{\parallel}\right)^{2}\right]+U_{12}^{\parallel}U_{21}^{\parallel}. \end{array}\label{eq:TraceOffDiagonal} \end{equation} By substituting Eq. \eqref{eq:RelationsUnitaryEvolutions} into Eq. \eqref{eq:TraceOffDiagonal}, we obtain the simpler result \[ Tr\left(\prod_{a=1}^{2}U^{\parallel}(\tau)\sqrt{\rho_{a}}\right)=\sqrt{\lambda_{1}\lambda_{2}}\left[\left(U_{11}e^{-i\delta_{1}}\right)^{2}+\left(U_{22}e^{-i\delta_{2}}\right)^{2}\right]+U_{12}U_{21}e^{-i\left(\delta_{1}+\delta_{2}\right)}. \] By substituting Eq. \eqref{eq:UDiagonal} and Eq. \eqref{eq:UOffDiagonal} into the above equation, the off-diagonal geometric phase \eqref{eq:OffDiagonalGeometricPhaseForNeutrino} can be calculated explicitly, \begin{equation} \begin{array}{ccc} \gamma_{\rho_{1}\rho_{2}}^{(2)} & = & \Phi\{\left(\frac{V}{2}-E_{1}\right)^{2}\mu_{\nu}^{2}B^{2}\left(\cos\omega\tau-1\right)+\sqrt{\lambda_{1}\lambda_{2}}\times\\
& & [\left(\frac{V}{2}-E_{1}\right)^{4}\cos\left(\omega\tau+2\delta_{1}\right)+\mu_{\nu}^{4}B^{4}\cos\left(\omega\tau+2\delta_{1}\right)\\
& & +2\mu_{\nu}^{2}B^{2}\left(\frac{V}{2}-E_{1}\right)^{2}\cos2\delta_{1}]\}. \end{array}\label{eq:OffDiagonalPhaseFinalResult} \end{equation} Since the argument of $\Phi$ here is real, the corresponding phase is either $\pi$ or $0$; which of the two values is taken depends on the temperature and the magnetic field, but the phase cannot vary continuously. In this sense the off-diagonal phase is rather insensitive to temperature.
By substituting Eq. \eqref{eq:UDiagonal}, Eq. \eqref{eq:DynamicalPhase1} and Eq. \eqref{eq:DynamicalPhase2} into Eq. \eqref{eq:DiagonalGeometricPhase}, the diagonal geometric phase for the mixed state reads \begin{equation} \begin{array}{ccc} \gamma & = & \Phi\{\left[\lambda_{1}e^{i(\frac{\omega\tau}{2}-\delta_{1})}+\lambda_{2}e^{-i(\frac{\omega\tau}{2}-\delta_{1})}\right]\mu_{\nu}^{2}B^{2}+\\
& & \left[\lambda_{1}e^{-i(\frac{\omega\tau}{2}+\delta_{1})}+\lambda_{2}e^{i(\frac{\omega\tau}{2}+\delta_{1})}\right]\left(\frac{V}{2}-E_{1}\right)^{2}\}. \end{array}\label{eq:DiagonalPhaseFinalResult} \end{equation} From the above result we conclude that if $\lambda_{1}=\lambda_{2}$, in other words $T\rightarrow\infty$, the corresponding phase can only be $\pi$ or $0$. In other circumstances it may vary continuously in an interval. In contrast to the off-diagonal one, the diagonal phase is thus more sensitive to temperature.
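To make the infinite-temperature statement explicit, set $\lambda_{1}=\lambda_{2}=\frac{1}{2}$ in Eq. \eqref{eq:DiagonalPhaseFinalResult}; the argument of $\Phi$ then reduces to \[
\cos\left(\frac{\omega\tau}{2}-\delta_{1}\right)\mu_{\nu}^{2}B^{2}+\cos\left(\frac{\omega\tau}{2}+\delta_{1}\right)\left(\frac{V}{2}-E_{1}\right)^{2},
\] which is real, so the phase factor is $\pm1$ and the phase is $0$ or $\pi$, as claimed.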
\section{Conclusions and Acknowledgements }
\label{sec:discussion}
In this article, the time evolution operator of the neutrino spin in the presence of a uniformly rotating magnetic field is obtained. Under this time evolution operator, a thermal state of the neutrino evolves, and both the off-diagonal and the diagonal mixed state geometric phases exist. They have been calculated respectively, and analytic forms have been obtained. In addition, we conclude that the diagonal phase is more sensitive to temperature than the off-diagonal one.
D.B.Y. is supported by the NSF (Natural Science Foundation) of China under Grant No. 11447196. J.X.H. is supported by the NSF of China under Grant No. 11304037, the NSF of Jiangsu Province, China under Grant No. BK20130604, as well as the Ph.D. Programs Foundation of the Ministry of Education of China under Grant No. 20130092120041. K. M. is supported by the NSF of China under Grant No. 11447153.
\end{document}
\begin{document}
\title{On Carlson's depth conjecture in group cohomology}
\author[D.~J. Green]{David J. Green} \address{Dept of Mathematics \\ Univ.\@ of Wuppertal \\ D--42097 Wuppertal \\ Germany} \email{[email protected]}
\subjclass{Primary 20J06} \date{5 June 2002}
\begin{abstract} We establish a weak form of Carlson's conjecture on the depth of the mod-$p$ cohomology ring of a $p$-group. In particular, Duflot's lower bound for the depth is tight if and only if the cohomology ring is not detected on a certain family of subgroups. The proofs use the structure of the cohomology ring as a comodule over the cohomology of the centre via the multiplication map. We demonstrate the existence of systems of parameters (so-called polarised systems) which are particularly well adapted to this comodule structure. \end{abstract}
\maketitle
\section*{Introduction} \noindent Let $G$ be a finite $p$-group and $C$ its greatest central elementary abelian subgroup. Write $k$ for the prime field $\f$. Cohomology will always be with coefficients in~$k$. Denote by $r$ the $p$-rank of~$G$, and by $z$ the $p$-rank of $C$.
Following Broto and Henn~\cite{BrHe:Central} we shall exploit the fact that the multiplication map $\mu \colon G \times C \rightarrow G$, $(g,c) \mapsto g.c$ is a group homomorphism. The main result of this paper is as follows:
\begin{theorem} \label{theorem:introED1} Suppose that $G$ is a $p$-group whose centre has $p$-rank $z$. Then the following statements are equivalent: \begin{enumerate} \item The mod-$p$ cohomology ring $\coho{G}$ is not detected on the centralizers of its rank $z+1$ elementary abelian subgroups. \item There is an associated prime $\mathfrak{p}$ such that $\coho{G}/\mathfrak{p}$ has dimension $z$. \item The depth of $\coho{G}$ equals $z$. \end{enumerate} \end{theorem}
\noindent This is a special case of a conjecture due to Carlson~\cite{Carlson:DepthTransfer}, reproduced here as Conjecture~\ref{conjecture:ED}. Recall that Duflot proved in~\cite{Duflot:Depth} that $z$ is a lower bound for the depth. So this result characterises the cases where Duflot's lower bound is tight.
Theorem~\ref{theorem:introED1} is proved as Theorem~\ref{theorem:ED1}. The proof rests upon the existence of \emph{polarised systems}: homogeneous systems of parameters for $\coho{G}$ which are particularly well adapted to the $\coho{C}$-comodule structure. There are two extreme types of behaviour which a cohomology class $x \in \coho{G}$ can demonstrate under the comodule structure map $\mu^*$: one extreme is that $\operatorname{Res}_C(x)$ is nonzero, and so $\mu^*(x) = 1 \otimes \operatorname{Res}_C(x) + \text{other terms}$. The other extreme is that $x$ is primitive, meaning that $\mu^*(x) = x \otimes 1$. Roughly speaking, a polarised system of parameters is one consisting solely of elements which each exhibit one or the other of these extreme kinds of behaviour. The precise definition, which ensures that each such system is a detecting sequence for the depth of $\coho{G}$, is given in Definition~\ref{definition:polarised}\@. Polarised systems of parameters always exist, as is proved in Theorem~\ref{theorem:existence}\@.
\noindent In addition to Theorem~\ref{theorem:introED1} we also prove a weak form of the general case of Carlson's conjecture. This is done in Theorem~\ref{theorem:polarisedEqualities}, which includes the following statement:
\begin{theorem} \label{theorem:introMyEDgen} Let $\zeta_1$, \dots, $\zeta_z$, $\kappa_1$, \dots, $\kappa_{r-z}$ be a polarised system of parameters for $\coho{G}$. Then $\coho{G}$ has depth $z + \sa$, where $\sa \in \{0,\ldots,r-z\}$ is defined by \[ \sa = \max \{ s \leq r-z \mid \text{$\kappa_1,\ldots,\kappa_s$ is a regular sequence in $\coho{G}$} \} \, . \] \end{theorem}
\section{Primitive comodule elements} \label{section:primitive} \noindent Group multiplication $\mu \colon G \times C \rightarrow G$ is a group homomorphism. As observed by Broto and Henn~\cite{BrHe:Central}, this means that $\coho{G}$ inherits the structure of a comodule over the coalgebra $\coho{C}$.
Recall that $x \in \coho{G}$ is called a primitive comodule element if $\mu^*(x) = x \otimes 1 \in \coho{G \times C} \cong \coho{G} \otimes_k \coho{C}$. As the comodule structure map $\mu^*$ is simultaneously a ring homomorphism, it follows that the primitives form a subalgebra $\PH{G}$ of $\coho{G}$. As the quotient map $G \rightarrow G/C$ coequalises $\mu$ and projection onto the first factor of $G \times C$, it follows that the image of inflation from $\coho{G/C}$ is contained in $\PH{G}$.
\begin{lemma} \label{lemma:quotientComodule} Suppose that $I$ is a homogeneous ideal in $\coho{G}$ which is generated by primitive elements. Then \[ \mu^*(I) \subseteq I \otimes_k \coho{C} \, , \] and so $\mu^*$ induces a ring homomorphism \[ \lambda \colon \coho{G}/I \rightarrow \coho{G}/I \otimes_k \coho{C} \] which induces an $\coho{C}$-comodule structure on $\coho{G}/I$. \end{lemma}
\begin{proof} For $x \in I$ and $y \in \coho{G}$ one has $\mu^*(xy) = \mu^*(x) \mu^*(y) = (x \otimes 1) \mu^*(y) \in I \otimes_k \coho{C}$. \end{proof}
\begin{lemma} \label{lemma:myBrotoCarlsonHenn} Suppose that $\zeta_1, \ldots, \zeta_t$ is a sequence of homogeneous elements of $\coho{G}$ whose restrictions form a regular sequence in $\coho{C}$, and suppose that $I$~is an ideal in $\coho{G}$ generated by primitive elements. Then $\zeta_1, \ldots, \zeta_t$ is a regular sequence for the quotient ring $\coho{G}/I$. \end{lemma}
\begin{proof} Carlson's proof for the case $I=0$ (Proposition 5.2 of~\cite{Carlson:Problems}) generalises easily. Denote by $R$ the polynomial algebra $k[\zeta_1, \ldots, \zeta_t]$. The map $\lambda$~of Lemma~\ref{lemma:quotientComodule} induces an $R$-module structure on $\coho{G}/I \otimes_k \coho{C}$, and $\lambda$~is a split monomorphism of $R$-modules, the splitting map being induced by projection onto the first factor $G \times C \rightarrow G$. So as an $R$-module $\coho{G}/I$ is a direct summand of $\coho{G}/I \otimes_k \coho{C}$. The result will therefore follow if we can show that $\coho{G}/I \otimes_k \coho{C}$ is a free $R$-module.
To see that $\coho{G}/I \otimes_k \coho{C}$ is indeed a free $R$-module set \[ F_i := \sum_{j \geq i} \coho[j]{G}/(I \cap \coho[j]{G}) \] and observe that $F_i \otimes_k \coho{C}$ is an $R$-submodule of $\coho{G}/I \otimes_k \coho{C} = F_0 \otimes_k \coho{C}$. Projection $G \times C \rightarrow C$ makes $F_i \otimes_k \coho{C} / F_{i+1} \otimes_k \coho{C}$ a free $\coho{C}$-module. Now for $x \in F_i$, $y \in \coho{C}$ and $\theta \in R$ we have $\theta.(x \otimes y) \in x \otimes (\operatorname{Res}^G_C \theta). y + F_{i+1} \otimes_k \coho{C}$. So the $R$-module structure on $F_i \otimes_k \coho{C} / F_{i+1} \otimes_k \coho{C}$ is induced by the restriction map $R \rightarrow \coho{G} \rightarrow \coho{G}/I \rightarrow \coho{C}$ from the free $\coho{C}$-structure. But $\coho{C}$ is a free $R$-module by Theorem 10.3.4 of \cite{Evens:book}, because the restrictions of $\zeta_1,\ldots,\zeta_t$ form a regular sequence. So $F_i \otimes_k \coho{C} / F_{i+1} \otimes_k \coho{C}$ is a free $R$-module for all~$i$, whence it follows that $F_0 \otimes_k \coho{C} / F_i \otimes_k \coho{C}$ is a free $R$-module for all~$i$. As the degree of each homogeneous element of $F_i \otimes_k \coho{C}$ is at least~$i$, it follows that $\coho{G}/I \otimes_k \coho{C}$ is itself a free $R$-module. \end{proof}
\begin{corollary} \label{coroll:myBrotoCarlsonHenn} Let $G$ be a $p$-group whose centre has $p$-rank~$z$. Suppose that there is a length~$s$ regular sequence in $\coho{G}$ which consists entirely of primitive elements. Then the depth of $\coho{G}$ is at least $z + s$. \end{corollary}
\begin{proof} Let $I$ be the ideal generated by the primitive elements in the regular sequence. Let $\zeta_1,\ldots,\zeta_z$ be elements of $\coho{G}$ whose restrictions form a homogeneous system of parameters for $\coho{C}$: it is well known that such sequences exist. Now apply Lemma~\ref{lemma:myBrotoCarlsonHenn}.
One concrete example of such classes $\zeta_i$ is obtained as follows. Let $\rho_G$ be the regular representation of~$G$, and $\zeta_i$ the Chern class $c_{p^n - p^{n-i}}(\rho_G)$ for $1 \leq i \leq z$, where $p^n$ is the order of~$G$. Then $\rho_G$ restricts to~$C$
as $|G:C|$ copies of the regular representation $\rho_C$, whence $\operatorname{Res}_C(\zeta_i) = c_{p^z - p^{z-i}}(\rho_C)^{|G:C|}$. But the $c_{p^z - p^{z-i}}(\rho_C)$ are (up to a sign) the Dickson invariants. See the proof of Theorem~\ref{theorem:existence} for more details. \end{proof}
\section{Polarised systems of parameters} \label{section:polarised} \noindent We shall now give the definition of a polarised system of parameters, the key definition of this paper. In fact we shall introduce two closely related concepts: the axioms for a polarised system (Definition~\ref{definition:polarised}) are easily checked in practice, whereas the special polarised systems of Definition~\ref{definition:specialPolarised} have precisely the properties we shall need to investigate depth. Lemma~\ref{lemma:polarisedDefinitions} shows that the two concepts are more or less interchangeable.
\begin{definition} \label{definition:ACG} Let $G$ be a $p$-group of $p$-rank $r$ whose centre has $p$-rank $z$. Denote by $C$ the greatest central elementary abelian subgroup of~$G$, and set \begin{align*} \mathcal{A}^C(G) & := \{ V \leq G \mid \text{$V$ is elementary abelian and contains~$C$} \} \, , \\ \mathcal{A}^C_d(G) & := \{ V \in \mathcal{A}^C(G) \mid \text{$V$ has $p$-rank $d$} \} \, \\ \mathcal{H}^C_d(G) & := \{ C_G(V) \mid V \in \mathcal{A}^C_d(G) \} \, . \end{align*} So $\mathcal{A}^C(G)$ is the disjoint union of the $\mathcal{A}^C_{z+s}(G)$ for $0 \leq s \leq r-z$. \end{definition}
\begin{definition} \label{definition:polarised} Let $G$ be a $p$-group of $p$-rank $r$ whose centre has $p$-rank $z$. Recall that inflation map $\operatorname{Inf} \colon \coho{V/C} \rightarrow \coho{V}$ is a split monomorphism for each $V \in \mathcal{A}^C(G)$, and so its image $\operatorname{Im} \operatorname{Inf}$ is isomorphic to $\coho{V/C}$.
A system $\zeta_1,\ldots,\zeta_z$, $\kappa_1,\ldots,\kappa_{r-z}$ of homogeneous elements of $\coho{G}$ shall be called a polarised system of parameters if the following four axioms are satisfied. \begin{description} \item[(PS1)] $\operatorname{Res}_C(\zeta_1)$, \dots, $\operatorname{Res}_C(\zeta_z)$ is a system of parameters for $\coho{C}$. \item[(PS2)] $\operatorname{Res}_V(\kappa_j)$ lies in $\operatorname{Im} \operatorname{Inf}$ for each $1 \leq j \leq r-z$ and for each $V \in \mathcal{A}^C(G)$. \item[(PS3)] For each $V \in \mathcal{A}^C(G)$, the restrictions $\operatorname{Res}_V(\kappa_1), \ldots, \operatorname{Res}_V(\kappa_s)$ constitute a system of parameters for $\operatorname{Im} \operatorname{Inf}$. Here $z+s$ is the rank of~$V$. \item[(PS4)] $\operatorname{Res}_V(\kappa_j) = 0$ for $V \in \mathcal{A}^C_{z+s}(G)$ with $0 \leq s < j \leq r-z$. \end{description} \end{definition}
\begin{remark} Polarised systems of parameters always exist, as we shall see in Theorem~\ref{theorem:existence}\@. Observe that Axiom (PS1) involves only the $\zeta_i$, whereas the remaining axioms involve only the $\kappa_j$. Basically Axiom (PS1) says that $\zeta_1,\ldots,\zeta_z$ is a regular sequence which can be detected on the centre, (PS2) says that the $\kappa_j$ are primitive after raising to suitably high $p$th powers, and (PS3) says that the $\kappa_j$ together with the $\zeta_i$ will form a detecting sequence for the depth of $\coho{G}$. Axiom (PS4) is a more technical condition which we shall only use once: it is needed in Lemma~\ref{lemma:polarisedDefinitions} to show that, after raising to a suitably high $p$th power, each $\kappa_j$ is a sum of transfer classes as required by Axiom (PS5) below. \end{remark}
\begin{lemma} Polarised systems of parameters for $\coho{G}$ are indeed systems of parameters. \end{lemma}
\begin{proof} Let $V \in \mathcal{A}^C_{z+s}(G)$. The restrictions of $\zeta_1,\ldots,\zeta_z, \kappa_1,\ldots, \kappa_s$ constitute a system of parameters for~$\coho{V}$ by (PS1) and (PS3)\@. Hence $\zeta_1,\ldots,\zeta_z, \kappa_1, \ldots, \kappa_{r-z}$ are algebraically independent over~$k$, for we may choose $V$ to have $p$-rank~$r$.
Now let $\gamma$ be a homogeneous element of $\coho{G}$. For $V \in \mathcal{A}^C(G)$ there is a monic polynomial $f_V(x)$ with coefficients in $k[\zeta_1,\ldots,\zeta_z, \kappa_1, \ldots, \kappa_{r-z}]$ such that $f_V(\gamma)$ has zero restriction to~$V$. Taking the product of all such polynomials one obtains a polynomial $f(x)$ such that $f(\gamma)$ has zero restriction to each maximal elementary abelian subgroup of~$G$. So $f(\gamma)$ is nilpotent by a well-known result of Quillen. \end{proof}
\begin{definition} \label{definition:specialPolarised} Let $G$ be a $p$-group of $p$-rank $r$ whose centre has $p$-rank $z$. A system $\zeta_1,\ldots,\zeta_z$, $\kappa_1,\ldots,\kappa_{r-z}$ of homogeneous elements of $\coho{G}$ shall be called a special polarised system of parameters if it satisfies the following five axioms: (PS1), (PS3), (PS4) and \begin{description} \item[(PS$\mathbf{2'}$)] $\kappa_j$ is a primitive element of the $\coho{C}$-comodule $\coho{G}$ for each $1 \leq j \leq r-z$. \item[(PS5)] $\kappa_j$ lies in $\sum_{H \in \mathcal{H}^C_{z+i}(G)} \operatorname{Tr}^G_H(\coho{H})$ for each $1 \leq i \leq j \leq r-z$. \end{description} \end{definition}
\begin{lemma} \label{lemma:polarisedDefinitions} Axiom (PS$2'$) implies Axiom (PS2), and so every special polarised system is a polarised system of parameters. Conversely for each polarised system $\zeta_1,\ldots,\zeta_z$, $\kappa_1, \ldots, \kappa_{r-z}$ there is a nonnegative integer $N$ such that $\zeta_1,\ldots,\zeta_z$, $\kappa_1^{p^N}, \ldots, \kappa_{r-z}^{p^N}$ is a special polarised system of parameters for $\coho{G}$. \end{lemma}
\begin{proof} Let $V \in \mathcal{A}^C(G)$. Restriction from $\coho{G}$~to $\coho{V}$ is a map of $\coho{C}$-comodules and so sends primitive elements to primitive elements. But the subalgebra of primitive elements of $\coho{V}$ coincides with the image of inflation from $V/C$. So (PS$2'$) implies (PS2).
Now suppose $\zeta_1,\ldots,\zeta_z$, $\kappa_1, \ldots, \kappa_{r-z}$ is a polarised system for $\coho{G}$. For each $1 \leq j \leq r-z$ the restriction of $\kappa_j$ to each $V \in \mathcal{A}^C(G)$ is primitive by (PS2)\@. Hence the element $\mu^*(\kappa_j) - \kappa_j \otimes 1$ of $\coho{G} \otimes \coho{C}$ has zero restriction to every maximal elementary abelian subgroup and is therefore nilpotent.
For fixed $1 \leq i \leq r-z$, denote by $\mathcal{K}$ the set consisting of those subgroups $K$~of $G$ such that $C_G(K)$ is not $G$-conjugate to any subgroup of any $H \in \mathcal{H}^C_{z+i}$. Following Carlson (Proof of Corollary 2.2~of \cite{Carlson:DepthTransfer}), observe that every $K \in \mathcal{K}$ has $p$-rank less than $z+i$. Moreover every $K \in \mathcal{K}$ is contained in $K_C = \langle K,C \rangle$, and $K_C$~itself lies in~$\mathcal{K}$. So $\operatorname{Res}_{K_C}(\kappa_j) = 0$ for all $j \geq i$ by (PS4)\@. Hence each such $\kappa_j$ lies in the radical ideal $\sqrt{J'}$, where $J'$ is the ideal $\bigcap_{K \in \mathcal{K}} \ker \operatorname{Res}_K$. So by Benson's result on the image of the transfer map (Theorem~1.1 of \cite{Benson:ImTr}) the $\kappa_j$ also lie in $\sqrt{J}$, where $J$ is the ideal $\sum_{H \in \mathcal{H}^C_{z+i}(G)} \operatorname{Tr}^G_H(\coho{H})$. \end{proof}
\begin{remark} The above proof is the only time we shall make use of the Axiom (PS4)\@. In particular, the results of \S\ref{section:specialPolarisedDepth} do not depend on (PS4)\@. I do not know whether or not (PS4) is a consequence of the remaining axioms for a special polarised system of parameters.
Axiom (PS5) will be used in Lemma~\ref{lemma:kappaAssocPrime} to prove the existence of an associated prime ideal with desirable properties. \end{remark}
\begin{lemma} \label{lemma:genBrotoHenn} Suppose that $\zeta_1,\ldots,\zeta_z$, $\kappa_1,\ldots,\kappa_{r-z}$ is a polarised system of parameters for $\coho{G}$. Let $0 \leq s \leq r-z$. Then the sequence $\zeta_1, \ldots, \zeta_z, \kappa_1, \ldots, \kappa_s$ is regular in $\coho{G}$ if and only if the sequence $\kappa_1, \ldots, \kappa_s$ is regular in $\coho{G}$. \end{lemma}
\begin{proof} Recall that regular sequences may be permuted at will. Moreover, replacing one element of a sequence by its $p$th power has no effect on whether the sequence is regular or not. Hence by Lemma~\ref{lemma:polarisedDefinitions} it suffices to prove the result for special polarised systems.
So we may assume that the given sequence is a special polarised system of parameters. Let $I$ be the homogeneous ideal $I$ in $\coho{G}$ generated by $\kappa_1, \ldots, \kappa_s$. By Axiom (PS$2'$) this ideal is generated by primitive elements. Also, the restrictions of $\zeta_1,\ldots,\zeta_z$ form a regular sequence in $\coho{C}$ by Axiom~(PS1)\@. Therefore Lemma~\ref{lemma:myBrotoCarlsonHenn} tells us that $\zeta_1,\ldots,\zeta_z$ constitute a regular sequence for $\coho{G}/I$. \end{proof}
\section{Three depth-related numbers} \label{section:threeNumbers} \noindent In this section we shall introduce three numbers $\tauH$, $\taua$~and $\tauaH$, each of which is an approximation to the depth $\tau$ of $\coho{G}$.
\begin{definition} Let $G$ be a $p$-group of $p$-rank $r$ whose centre has $p$-rank $z$. Write $\tau$ for the depth of $\coho{G}$ and set \[ \tauH := \max \{ d \in \{z, \ldots, r\} \mid \text{The family $\mathcal{H}^C_d(G)$ detects $\coho{G}$} \} \, . \] \end{definition}
\noindent In~\cite{Carlson:DepthTransfer} Carlson formulates the following conjecture: \begin{conjecture}[Carlson] \label{conjecture:ED} The number~$\tauH$ coincides with the depth $\tau$~of $\coho{G}$. Moreover, $\coho{G}$ has an associated prime $\mathfrak{p}$ such that $\dim \coho{G}/\mathfrak{p} = \tau$. \end{conjecture}
\noindent In fact, Carlson formulates the conjecture not just for $p$-groups, but for arbitrary finite groups. In this article however we only consider $p$-groups. In Theorem~\ref{theorem:ED1} we shall prove a special case of this conjecture, after deriving a partial result for the general case in Theorem~\ref{theorem:polarisedEqualities}. For this we need two more depth-related numbers.
\begin{definition} Let $G$ be a $p$-group of $p$-rank $r$ whose centre has $p$-rank $z$, and let $\mathfrak{a} = (\zeta_1,\ldots,\zeta_z, \kappa_1,\ldots,\kappa_{r-z})$ be a polarised system of parameters for $\coho{G}$\@.
Define $\taua$ to be $z + \sa$, where $\sa$ is the largest $s \in \{0, \ldots, r-z\}$ such that $\kappa_1,\ldots, \kappa_s$ is a regular sequence in $\coho{G}$.
Let $\SaH$~be the subset of $\{0, \ldots, r-z\}$ such that $s$~lies in $\SaH$ if and only if the restriction map \[ \coho{G}/(\kappa_1,\ldots,\kappa_{s-1}) \rightarrow \prod_{H \in \mathcal{H}^C_{z+s}} \coho{H}/(\operatorname{Res}_H \kappa_1,\ldots, \operatorname{Res}_H \kappa_{s-1}) \] is injective\footnote{So $\SaH$ always contains~$0$, and $1$~lies in $\SaH$ if and only if the family $\mathcal{H}^C_{z+1}$ detects $\coho{G}$.}. Define $\saH := \max \SaH$ and $\tauaH := z + \saH$. \end{definition}
\begin{lemma} \label{lemma:tauCdash} Let $\mathfrak{a} = (\zeta_1,\ldots,\zeta_z$, $\kappa_1,\ldots,\kappa_{r-z})$ be a polarised system of parameters for $\coho{G}$\@. If $s > 0$ lies in $\SaH$ then the family $\mathcal{H}^C_{z + s}(G)$ detects $\coho{G}$ and $s-1$ lies in~$\SaH$. Therefore $\tauH \geq \tauaH$ and $\SaH = \{ 0, \ldots, \saH \}$. \end{lemma}
\noindent For the proof we shall need an elementary fact about regular sequences.
\begin{lemma} \label{lemma:genGrComm} Suppose that $R,S$ are connected graded commutative $k$-algebras and that $f \colon R \rightarrow S$ is an algebra homomorphism which respects the grading. Suppose further that $\zeta_1,\ldots,\zeta_d$ is a family of homogeneous positive-degree elements of $R$ satisfying the following conditions: \begin{enumerate} \item $f(\zeta_1),\ldots,f(\zeta_d)$ is a regular sequence in~$S$. \item The induced map $f_d \colon R/(\zeta_1,\ldots,\zeta_d) \rightarrow S/(f(\zeta_1),\ldots,f(\zeta_d))$ is an injection. \end{enumerate} Then $f \colon R \rightarrow S$ is an injection. \end{lemma}
\begin{proof} It suffices to prove the case $d=1$. Write $\zeta$~for $\zeta_1$. Let $a \not = 0$ be an element of $\ker(f)$ whose degree is as small as possible. Since $f(a) = 0$ in $S/(f(\zeta))$ it follows that there is an $a' \in R$ with $a = a' \zeta$. Since $f(a) = 0$ and $f(\zeta)$ is regular it follows that $a' \in \ker(f)$, contradicting the minimality of $\deg(a)$. \end{proof}
\begin{proof}[Proof of Lemma~\ref{lemma:tauCdash}]
Apply Lemma~\ref{lemma:genGrComm} to the family $\kappa_1,\ldots, \kappa_{s-1}$ with $R = \coho{G}$, $S = \prod_{\mathcal{H}^C_{z + s}} \coho{H}$ and $f$ the product of the restriction maps. Because $s \in \SaH$ the induced map of quotients is an injection. By Axiom~(PS3) the restrictions of $\kappa_1,\ldots, \kappa_{s-1}$ form a regular sequence in $\coho{V}$ for each $V \in \mathcal{A}^C_{z + s}(G)$, and so by \cite[Prop.~5.2]{Carlson:Problems} they form a regular sequence in $\coho{H}$ for each $H \in \mathcal{H}^C_{z + s}$. Hence the restrictions form a regular sequence in~$S$, and so the family $\mathcal{H}^C_{z + s}$ detects $\coho{G}$. If instead we just invoke the first step in the proof of Lemma~\ref{lemma:genGrComm}, we see that the $\coho{H}/(\operatorname{Res} \kappa_1,\ldots,\operatorname{Res} \kappa_{s-2})$ with $H \in \mathcal{H}^C_{z + s}$ detect $\coho{G}/(\kappa_1,\ldots,\kappa_{s-2})$. \end{proof}
\section{Depth and special polarised systems} \label{section:specialPolarisedDepth} \noindent The following fact from commutative algebra is well known.
\begin{lemma} \label{lemma:assocPrime} Let $A$ be a graded commutative ring and $M$ a Noetherian graded $A$-module. Suppose that $\mathfrak{p}$ is an associated prime of~$M$, and that $\zeta_1,\ldots,\zeta_d$ is a regular sequence for $M$. Then $M/(\zeta_1,\ldots,\zeta_d)M$ has an associated prime~$\mathfrak{q}$ containing $\mathfrak{p}$. \end{lemma}
\begin{proof} It suffices to prove the case $d=1$. Write $\zeta$~for $\zeta_1$. Pick $x \in M$ with $\operatorname{Ann}_A(x) = \mathfrak{p}$. If $x$ lies in $\zeta M$ then there is an $x' \in M$ with $\zeta x' = x$. As $\zeta$ is regular it follows that $\operatorname{Ann}_A(x') = \mathfrak{p}$ too, so replace $x$~by $x'$. This can only happen finitely often, as $M$ is Noetherian and $Ax$ is strictly contained in $Ax'$. So we may assume that $x$~does not lie in $\zeta M$, which means that the image of $x$~in $M/\zeta M$ is nonzero and annihilated by $\mathfrak{p}$. \end{proof}
\begin{lemma} \label{lemma:kappaAssocPrime} Let $\mathfrak{a} = (\zeta_1,\ldots,\zeta_z, \kappa_1,\ldots,\kappa_{r-z})$ be a special polarised system of parameters for $\coho{G}$, and $I$ the ideal generated by the $\kappa_j$ with $j \leq \saH$.
Then the $\coho{G}$-module $\coho{G}/I$ has an associated prime $\mathfrak{p}$ which contains $\kappa_1$, \dots, $\kappa_{r-z}$. \end{lemma}
\begin{proof} Set $s = \saH + 1$. The result is trivial if $\saH = r-z$, so we may assume that $s \leq r-z$. By definition of $\saH$, the family $\mathcal{H} = \mathcal{H}^C_{z + s}$ does not detect the quotient $\coho{G}/I$. Pick a class $x \in \coho{G}$ which does not lie in the ideal $I$, but whose restriction to each $H \in \mathcal{H}$ does lie in the ideal $\coho{H} . \operatorname{Res}_H(I)$. Let $A$ be the ideal of classes in $\coho{G}$ which annihilate the image of $x$~in the quotient $\coho{G}/I$.
For any $j \geq s$ we have $\kappa_j \in \sum_{H \in \mathcal{H}} \operatorname{Tr}^G_H \coho{H}$ by Axiom~(PS5), say $\kappa_j = \sum_H \operatorname{Tr}_H \gamma_H$. So $\kappa_j x = \sum_H \operatorname{Tr}_H (\gamma_H \operatorname{Res}_H(x))$ by Frobenius reciprocity. Now by assumption $\operatorname{Res}_H(x)$ lies in the ideal generated by $\operatorname{Res}_H(\kappa_1)$, \dots, $\operatorname{Res}_H(\kappa_{s-1})$; and this by a second application of Frobenius reciprocity means that $\kappa_j x$ lies in the ideal $I$. So $\kappa_j \in A$ for all $j \geq s$, which means that the $\coho{G}$-module $\coho{G}/I$ has an associated prime $\mathfrak{p}$ containing $\kappa_1,\ldots,\kappa_{r-z}$. \end{proof}
\begin{theorem} \label{theorem:specialPolarisedEqualities} Let $G$ be a $p$-group of $p$-rank $r$ whose centre has $p$-rank $z$, and let $ \mathfrak{a} = (\zeta_1,\ldots,\zeta_z, \kappa_1,\ldots,\kappa_{r-z})$ be a special polarised system of parameters for $\coho{G}$. Then the numbers $\taua$~and $\tauaH$ both coincide with the depth $\tau$~of $\coho{G}$. \end{theorem}
\begin{proof} We shall prove that $\tau \geq \taua \geq \tauaH = \tau$. Each $\kappa_j$ is primitive by Axiom~(PS$2'$), so $\tau \geq \taua$ by Corollary~\ref{coroll:myBrotoCarlsonHenn}.
$\taua \geq \tauaH$: Suppose that $s \in \SaH$ and $\kappa_1$, \dots, $\kappa_{s-1}$ is a regular sequence in $\coho{G}$. If $\kappa_1,\ldots,\kappa_s$ is not a regular sequence then there is some nonzero $x \in \coho{G}/(\kappa_1,\ldots,\kappa_{s-1})$ annihilated by $\kappa_s$. Since $s \in \SaH$ there is some $H \in \mathcal{H}^C_{z + s}$ such that $\operatorname{Res}_H(x)$ is nonzero in $\coho{H}/(\operatorname{Res}_H \kappa_1,\ldots,\operatorname{Res}_H \kappa_{s-1})$. But this is a contradiction since (as in the proof of Lemma~\ref{lemma:tauCdash}) the restrictions of $\kappa_1,\ldots,\kappa_s$ form a regular sequence in $\coho{H}$. By induction on~$s$ we deduce that $\taua \geq \tauaH$.
$\tauaH = \tau$: Set $s = \saH$ and denote by $I$ the ideal $(\kappa_1,\ldots,\kappa_s)$ of $\coho{G}$. By Lemma~\ref{lemma:kappaAssocPrime} the $\coho{G}$-module $\coho{G}/I$ has an associated prime~$\mathfrak{p}$ which contains $\kappa_1$, \dots, $\kappa_{r-z}$. As $\taua \geq \tauaH$ the sequence $\kappa_1,\ldots,\kappa_s$ is regular in $\coho{G}$, so by Lemma~\ref{lemma:genBrotoHenn} the sequence $\zeta_1,\ldots,\zeta_z$, $\kappa_1$, \dots, $\kappa_s$ is regular in $\coho{G}$. Therefore the sequence $\zeta_1,\ldots,\zeta_z$ is regular for $\coho{G}/I$, and so (by Lemma~\ref{lemma:assocPrime}) the $\coho{G}$-module $\coho{G}/(\zeta_1,\ldots,\zeta_z,\kappa_1,\ldots,\kappa_s)$ has an associated prime $\mathfrak{q}$ containing all elements of the homogeneous system of parameters $\zeta_1$, \dots, $\zeta_z$, $\kappa_1$, \dots, $\kappa_{r-z}$ for $\coho{G}$. So the depth of this quotient module is zero. But every regular sequence in $\coho{G}$ can be extended to a length~$\tau$ regular sequence (see \cite[\S4.3--4]{Benson:PolyInvts}, for example). So $\tau = z + s = \tauaH$. \end{proof}
\section{Depth and polarised systems} \label{section:polarisedDepth} \noindent In this section we shall remove the requirement in Theorem~\ref{theorem:specialPolarisedEqualities} that the polarised system be special.
\begin{theorem} \label{theorem:polarisedEqualities} Let $G$ be a $p$-group of $p$-rank $r$ whose centre has $p$-rank $z$, and let $ \mathfrak{a} = (\zeta_1,\ldots,\zeta_z, \kappa_1,\ldots,\kappa_{r-z})$ be a polarised system of parameters for $\coho{G}$. Then the numbers $\taua$~and $\tauaH$ both coincide with the depth $\tau$~of $\coho{G}$. \end{theorem}
\noindent For the proof we shall need one further fact about regular sequences.
\begin{lemma} \label{lemma:liftingInjections} Suppose that $R,S$ are connected graded commutative $k$-algebras and that $f \colon R \rightarrow S$ is an algebra homomorphism which respects the grading. Suppose further that $\zeta_1,\ldots,\zeta_d$ is a family of homogeneous positive-degree elements of $R$ satisfying the following conditions: \begin{enumerate} \item $\zeta_1,\ldots,\zeta_d$ is a regular sequence in~$R$. \item
The induced map $\fui[n] \colon R/(\zeta_1^{n_1},\ldots,\zeta_d^{n_d}) \rightarrow S/(f(\zeta_1)^{n_1},\ldots,f(\zeta_d)^{n_d})$ is an injection for certain positive integers $n_1,\ldots,n_d$. \end{enumerate} Then the induced map $\fui[1] \colon R/(\zeta_1,\ldots,\zeta_d) \rightarrow S/(f(\zeta_1), \ldots, f(\zeta_d))$ is an injection. \end{lemma}
\begin{proof}
Pick $x \in R$ with $\fui[1](x) = 0$.
Then $\fui[n](\zeta_1^{n_1-1} \ldots \zeta_d^{n_d-1} x) = 0$ and so \begin{equation} \label{eqn:peelOff} \text{$\zeta_1^{n_1-1} \ldots \zeta_d^{n_d-1} x$ lies in the ideal $(\zeta_1^{n_1}, \ldots, \zeta_d^{n_d})$ of $R$.} \end{equation} Set $\zeta' := \zeta_1^{n_1-1} \ldots \zeta_{d-1}^{n_{d-1}-1}$. Then there are $a_1, \ldots, a_d \in R$ such that $\zeta' \zeta_d^{n_d-1} x = \zeta_1^{n_1} a_1 + \cdots + \zeta_d^{n_d} a_d$, whence $\zeta_d^{n_d-1} (\zeta' x - \zeta_d a_d) \in (\zeta_1^{n_1}, \ldots, \zeta_{d-1}^{n_{d-1}})$. As the sequence $\zeta_1^{n_1}, \ldots, \zeta_{d-1}^{n_{d-1}}, \zeta_d^s$ is regular in~$R$ for $s \geq 1$ we deduce that $\zeta' x \in (\zeta_1^{n_1}, \ldots, \zeta_{d-1}^{n_{d-1}}, \zeta_d)$. So we have reduced Eqn.~\eqref{eqn:peelOff} to the case $n_d = 1$ without altering the remaining $n_t$. As regular sequences may be permuted at will we deduce that $x \in (\zeta_1, \ldots, \zeta_d)$. \end{proof}
\begin{proof}[Proof of Theorem~\ref{theorem:polarisedEqualities}] Arguing exactly as in the proof of Theorem~\ref{theorem:specialPolarisedEqualities} one shows that $\taua \geq \tauaH$. By Lemma~\ref{lemma:polarisedDefinitions} there is an integer $N \geq 0$ such that the system of parameters $\mathfrak{b} = (\zeta_1,\ldots,\zeta_z, \kappa_1^{p^N}, \ldots, \kappa_{r-z}^{p^N})$ is special polarised. From the definition one sees that $\taua = \taua[b]$. So by Theorem~\ref{theorem:specialPolarisedEqualities} it suffices to prove that $\tauaH \geq \tauaH[b]$. But this is a consequence of Lemma~\ref{lemma:liftingInjections} applied to the sequence $\kappa_1, \ldots, \kappa_{\saH[b]-1}$ with $R = \coho{G}$, $S = \prod_{\mathcal{H}} \coho{H}$ and $f$~the restriction map, where $\mathcal{H} = \mathcal{H}^C_{z+\saH[b]}(G)$ and $n_i = p^N$ for all~$i$. \end{proof}
\section{Dickson invariants} \label{section:Dickson} \noindent Let $V$ be an $m$-dimensional $k$-vector space. We shall make extensive use of the Dickson invariants, the polynomial generators of the ring of $\text{\sl GL}(V)$-invariants in $k[V]$. See Benson's book~\cite{Benson:PolyInvts} for proofs of the properties of these invariants.
Denote by $f_V$ the polynomial in $k[V][X]$ defined as follows: \begin{equation} \label{eqn:fVdef} f_V(X) = \prod_{v \in V} (X - v) \, . \end{equation} Recall that Dickson proved there are homogeneous polynomials $D_s(V)$ for $1 \leq s \leq m$ such that \begin{equation} \label{eqn:fV} f_V(X) = \sum_{s = 0}^m (-1)^s D_s(V) X^{p^{m-s}} \, , \end{equation} where $D_0(V) = 1$. The sequence $D_1(V), \ldots, D_m(V)$ is regular in $k[V]$, and the invariant ring $k[V]^{\text{\sl GL}(V)}$ is the polynomial algebra $k[D_1(V), \ldots, D_m(V)]$.
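For instance, in the smallest case $m=1$ with basis vector $v$, Eqn.~\eqref{eqn:fVdef} gives \[ f_V(X) = \prod_{c \in k} (X - cv) = X^p - v^{p-1} X \, , \] so that $D_1(V) = v^{p-1}$; already here one sees the $\text{\sl GL}(V)$-invariance, since $\alpha^{p-1} = 1$ for every $\alpha \in k^\times$.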
If $\pi \colon V \rightarrow U$ is projection onto a codimension~$\ell$ subspace, then the induced map $k[V] \rightarrow k[U]$ sends \begin{equation} \label{eqn:DicksonRes} D_s(V) \mapsto \begin{cases} D_s(U)^{p^{\ell}} & \text{if $s \leq \dim(U)$,} \\ 0 & \text{otherwise.} \end{cases} \end{equation}
\section{Existence of polarised systems} \label{section:existence} \noindent For each elementary abelian $p$-group $V$ we shall embed $k[V^*]$ in $\coho{V}$ by identifying $V^*$ with the image of the Bockstein map $\coho[1]{V} \rightarrow \coho[2]{V}$.
\subsection{A construction using the norm map} Let $G$ be a $p$-group of order $p^n$ and $p$-rank $r$ whose centre has $p$-rank~$z$. We shall only be interested in the case $r > z$. Let $U_1, \ldots, U_K$ be representatives of the $G$-orbits in $\mathcal{A}^C_{z+1}(G)$, which is a $G$-set via the conjugation action. For each $U \in \mathcal{A}^C_{z+1}(G)$ pick a basis element $x_U$ for the one-dimensional subspace $\operatorname{Ann}(C)$ of $U^*$, and observe that $x_U^{p-1}$ is independent of the basis element chosen. As before, view $U^*$ as a subspace of $\coho[2]{U}$.
Define $\Theta \in \coho{G}$ by \begin{equation} \label{eqn:ThetaDef} \Theta = \prod_{i = 1}^K \mathcal{N}^G_{U_i}
\left(1 + x_{U_i}^{p-1}\right)^{|G : N_G(U_i)|} \, . \end{equation} Now consider the restriction $\operatorname{Res}_V(\Theta)$ for $V \in \mathcal{A}^C(G)$. By the Mackey formula \[ \operatorname{Res}_V (\Theta) = \prod_{i = 1}^K \prod_{g \in U_i \setminus G / V} \mathcal{N}^V_{U_i^g \cap V} \, g^* \operatorname{Res}^{U_i}_{U_i \cap {}^g V}
\left(1 + x_{U_i}^{p-1}\right)^{|G : N_G(U_i)|} \, . \] The intersection $U_i \cap {}^g V$ always contains $C$, the largest central elementary abelian subgroup of~$G$. Conversely the intersection equals $C$ (and $x_{U_i}$ therefore restricts to zero) unless $U' = U_i^g$ lies in~$V$, in which case $g^* x_{U_i}^{p-1} = x_{U'}^{p-1}$. Moreover, the number of double cosets
$U_i g V$ leading to this $U'$ is $|N_G(U_i)|/|V|$ and every $U'$~in $\mathcal{A}^C_{z+1}(V) := \{ U \in \mathcal{A}^C_{z+1}(G) \mid U \leq V \}$ occurs for some~$i$. So \begin{equation} \label{eqn:ResV_Theta} \operatorname{Res}_V(\Theta) = \prod_{U' \in \mathcal{A}^C_{z+1}(V)}
\mathcal{N}_{U'}^V (1 + x_{U'}^{p-1})^{|G : V|} \, . \end{equation} In particular for $U \in \mathcal{A}^C_{z+1}(G)$ one has \begin{equation} \label{eqn:ResU_Theta} \operatorname{Res}_U(\Theta) = 1 + x_U^{(p-1)p^{n-(z+1)}} \, . \end{equation} Let $\eta \in \coho[2(p-1)p^{n-(z+1)}]{G}$ be the homogeneous component of $\Theta$ in this degree. As the norm map from $\coho{U'}$~to $\coho{V}$ is a ring homomorphism (see \cite[Proposition~6.3.3]{Evens:book}), we deduce from Eqn.~\eqref{eqn:ResV_Theta} that \begin{equation} \label{eqn:ResV_eta}
\operatorname{Res}_V(\eta) = \hat{\eta}^{|G:V|} \quad \text{for} \quad \hat{\eta} = \sum_{U' \in \mathcal{A}^C_{z+1}(V)} \mathcal{N}_{U'}^V x_{U'}^{p-1} \, . \end{equation} Denote by $W = W(V)$ the subspace $\operatorname{Ann}(C)$~of $V^*$. Then $\hat{\eta}$ lies in $k[W]$, since $\mathcal{N}_{U'}^V (x_{U'})$ is the product of all $\phi \in V^*$ with $\operatorname{Res}_{U'} (\phi) = x_{U'}$. Moreover $\hat{\eta}$ is by construction a $\text{\sl GL}(W)$-invariant, so a scalar multiple of $D_1(W)$ for degree reasons. By considering the restriction to any $U \in \mathcal{A}^C_{z+1}(V)$, we deduce from Eqn.~\eqref{eqn:DicksonRes} that \begin{equation} \label{eqn:ResV_eta_Dickson}
\operatorname{Res}^G_V(\eta) = D_1(W)^{|G : V|} \, . \end{equation}
\subsection{The existence proof}
\begin{theorem} \label{theorem:existence} Let $G$ be a $p$-group of order $p^n$ and $p$-rank $r$ whose centre has $p$-rank~$z$. For $1 \leq i \leq z$ define $\zeta_i \in \coho{G}$ by $\zeta_i = c_{p^n - p^{n-i}}(\rho_G)$, a Chern class of the regular representation of~$G$.
If $z < r$ define $\eta \in \coho{G}$ as above and homogeneous elements $\kappa_1, \kappa_2, \ldots, \kappa_{r-z} \in \coho{G}$ as follows: \[ \kappa_j := \mathcal{P}^{p^{n - z - j}} \mathcal{P}^{p^{n - z - j + 1}} \cdots \mathcal{P}^{p^{n - z - 2}} (\eta) \in \coho[2(p^{n - z} - p^{n - z - j})]{G} \] for $1 \leq j \leq r - z$. Then for each $1 \leq j \leq r-z$ and for each $V \in \mathcal{A}^C_{z+s}(G)$ one has \begin{equation} \label{eqn:existence} \operatorname{Res}^G_V(\kappa_j) = \begin{cases}
D_j(W)^{|G:V|} & \text{if $j \leq s$, and} \\ 0 & \text{otherwise.} \end{cases} \end{equation} Here, $W$~is the subspace of $V^*$ which annihilates~$C$.
Then $\zeta_1, \ldots, \zeta_z, \kappa_1, \ldots, \kappa_{r-z}$ is a polarised system of parameters for $\coho{G}$. So $\coho{G}$ has both polarised and special polarised systems of parameters. \end{theorem}
\begin{proof} Equation~\eqref{eqn:existence} holds for $\kappa_1 = \eta$ by Eqn.~\eqref{eqn:ResV_eta_Dickson}. The general case of Eqn.~\eqref{eqn:existence} follows from Eqn.~\eqref{eqn:DicksonRes} and the action of the Steenrod algebra on the Dickson invariants (see~\cite{Wilkerson:Dickson}).
Axioms (PS2) and (PS4) follow immediately from Eqn.~\eqref{eqn:existence}. Axiom (PS3) holds because the Dickson invariants form a regular sequence. Observe that $\rho_G$ restricts to~$C$ as $p^{n-z}$ copies of the regular representation $\rho_C$. So the total Chern class $c(\rho_G)$ restricts to $C$ as $c(\rho_C)^{p^{n-z}}$, meaning that
$\operatorname{Res}_C(\zeta_i) = c_{p^z - p^{z-i}}(\rho_C)^{p^{n-z}}$ for $1 \leq i \leq z$. In view of Eqn.~\eqref{eqn:fVdef} one has $c(\rho_C) = f_{C^*}(1)$ and hence $c_{p^z - p^{z - i}}(\rho_C) = (-1)^i D_i(C^*)$
by Equation~\eqref{eqn:fV}, a well known observation due originally to Milgram. So $\operatorname{Res}_C(\zeta_i) = (-1)^i D_i(C^*)^{|G:C|}$, which means that Axiom (PS1) is satisfied and so a polarised system of parameters has been constructed. Hence special polarised systems of parameters exist too, by Lemma~\ref{lemma:polarisedDefinitions}. \end{proof}
\section{Tightness of Duflot's lower bound} \label{section:DuflotTight} \noindent Recall that Duflot's Theorem~\cite{Duflot:Depth} states that the depth of $\coho{G}$ is at least~$z$.
\begin{theorem} \label{theorem:ED1} Let $G$ be a $p$-group of $p$-rank $r$ whose centre has $p$-rank~$z$. Then the following statements are equivalent: \begin{enumerate} \item \label{enum:notDetected} The mod-$p$ cohomology ring $\coho{G}$ is not detected on the family $\mathcal{H}^C_{z+1}(G)$. \item \label{enum:assocPrime} $\coho{G}$ has an associated prime $\mathfrak{p}$ such that the
dimension of $\coho{G}/\mathfrak{p}$ is $z$. \item \label{enum:oneK1} There is a polarised system of parameters $\zeta_1,\ldots,\zeta_z$, $\kappa_1,\ldots,\kappa_{r-z}$ for $\coho{G}$ such that $\kappa_1$ is a zero divisor in $\coho{G}$. \item \label{enum:allK1} If $\zeta_1,\ldots,\zeta_z$, $\kappa_1,\ldots,\kappa_{r-z}$ is a polarised system of parameters for $\coho{G}$ then $\kappa_1$ is a zero divisor in $\coho{G}$. \item \label{enum:depthz} The depth of $\coho{G}$ equals $z$. \end{enumerate} \end{theorem}
\begin{proof} Carlson proved in~\cite{Carlson:DepthTransfer} that \eqref{enum:notDetected} implies \eqref{enum:depthz}\@. A standard commutative algebra argument shows that \eqref{enum:assocPrime} implies \eqref{enum:depthz}\@. We saw in Lemma~\ref{lemma:genBrotoHenn} that \eqref{enum:depthz} implies \eqref{enum:allK1}, and \eqref{enum:oneK1} follows from \eqref{enum:allK1} by the existence result Theorem~\ref{theorem:existence}.
Now let $\mathfrak{a}$ be a polarised system of parameters satisfying Statement~\eqref{enum:oneK1}, which is equivalent to $\taua = z$. So $\tauaH = z$ by Theorem~\ref{theorem:polarisedEqualities}\@. As in the proof of that theorem there is a special polarised system of parameters~$\mathfrak{b}$ which satisfies $\tauaH[b] = \tauaH = z$, so \eqref{enum:assocPrime} follows by Lemma~\ref{lemma:kappaAssocPrime}. Finally consider the definition of~$\tauaH$. If $\tauaH = z$ then $\mathcal{H}^C_{z+1}(G)$ does not detect $\coho{G}$, yielding~\eqref{enum:notDetected}\@. \end{proof}
\section{An example} Let $G$ be the extraspecial $p$-group $p^{1+2n}_+$ with $n \geq 1$. This group has order $p^{2n+1}$ and $p$-rank $n+1$. Its centre has $p$-rank~$1$. If $p$~is odd $G$ has exponent~$p$. The mod-$p$ cohomology ring $\coho{G}$ is Cohen-Macaulay for $p=2$ by Theorem~4.6 of~\cite{Quillen:Extraspecial}, for Quillen computes the cohomology ring as the quotient of a polynomial algebra by a regular sequence. Also, Milgram and Tezuka showed in~\cite{MilgramTezuka} that the cohomology ring is Cohen-Macaulay for $G = 3^{1+2}_+$.
From now on assume that $p$ is odd, with $n \geq 2$ if $p = 3$. Then by a result of Minh~\cite{Minh:EssExtra}, there are essential classes. For such groups the centre $C$ is cyclic of order~$p$, and the set $\mathcal{H}^C_2(G)$ of centralisers coincides with the set of maximal subgroups. Consequently $\mathcal{H}^C_{z+1}(G)$ does not detect $\coho{G}$, and so $\coho{G}$ has depth~$1$ by the part of Theorem~\ref{theorem:ED1} proved by Carlson in~\cite{Carlson:DepthTransfer}.
Now let $V$ be a rank $n+1$ elementary abelian subgroup of~$G$, and let $\hat{\rho}$ be a one-dimensional ordinary representation of~$V$ whose restriction to $C$ is not trivial. Let $\rho$ be the induced representation of~$G$. Then $\rho$ is an irreducible representation of degree~$p^n$, and its character restricts to each $U \in \mathcal{A}^C_{n+1}(G)$ as the sum of all degree one characters whose restrictions to $C$ coincide with the restriction of the character of~$\hat{\rho}$. Set $\zeta_1 := c_{p^n}(\rho)$ and $\kappa_j := c_{p^n - p^{n-j}}(\rho)$ for $1 \leq j \leq n$. Then $\zeta_1, \kappa_1, \ldots, \kappa_n$ satisfies the axioms for a polarised system of parameters for $\coho{G}$. So combining Minh's result with Theorem~\ref{theorem:ED1} one deduces that $\kappa_1$ has nontrivial annihilator in $\coho{G}$. Conversely a direct proof of this fact would yield a new proof of Minh's result. If $n=1$ and $p > 3$ it is known (see~\cite{Leary:integral}) that $c_2(\rho)$ is a nonzero essential class which annihilates~$\kappa_1$.
\end{document}
\begin{document}
\begin{abstract} Our main result is elementary and concerns the relationship between the multiplicative groups of the coordinate and endomorphism rings of the formal additive group over a field of characteristic $p>0$. The proof involves the combinatorics of base $p$ representations of positive integers in a striking way. We apply the main result to construct a canonical quotient of the module of Coleman power series over the Iwasawa algebra when the base local field is of characteristic $p$. This gives information in a situation which apparently has never previously been investigated. \end{abstract} \title{Digit patterns and Coleman power series} \section{Introduction} \subsection{Overview and motivation} Our main result (Theorem~\ref{Theorem:MainResult} below) concerns the relationship between the multiplicative groups of the coordinate and endomorphism rings of the formal additive group over a field of characteristic $p>0$. Our result is elementary and does not require a great deal of apparatus for its statement. The proof of the main result involves the combinatorics of base $p$ representations of positive integers in a striking way. We apply our main result (see Corollary~\ref{Corollary:ColemanApp} below) to construct a canonical quotient of the module of Coleman power series over the Iwasawa algebra when the base local field is of characteristic $p$. By {\it Coleman power series} we mean the telescoping power series introduced and studied in Coleman's classical paper \cite{Coleman}.
Apart from Coleman's \cite{ColemanLocModCirc} complete results in the important special case of the formal multiplicative group over ${\mathbb{Z}}_p$, little is known about the structure of the module of Coleman power series over the Iwasawa algebra, and, so far as we can tell, the characteristic $p$ situation has never previously been investigated. We undertook this research in an attempt to fill the gap in characteristic $p$. Our results are far from being as complete as Coleman's, but they are surprising on account of their ``digital'' aspect, and they raise further questions worth investigating.
\subsection{Formulation of the main result} The notation introduced under this heading is in force throughout the paper.
\subsubsection{Rings and groups of power series} Fix a prime number $p$ and a field $K$ of characteristic $p$. Let $q$ be a power of $p$. Consider: the (commutative) power series ring
$$K[[X]]=\left\{\left.\sum_{i=0}^\infty a_i X^i\right| a_i\in K\right\};$$ the (in general noncommutative) ring
$$R_{q,K}=\left\{\left.\sum_{i=0}^\infty a_i X^{q^i}\right| a_i\in K\right\},$$ in which by definition multiplication is power series composition; and the subgroup
$$\Gamma_{q,K}=\left\{\left.X+\sum_{i=1}^\infty a_i X^{q^i}\right| a_i\in K\right\}\subset R_{q,K}^\times,$$ where in general $A^\times$ denotes the group of units of a ring $A$ with unit. Note that $K[[X]]^\times$ is a right $\Gamma_{q,K}$-module via composition of power series.
\subsubsection{Logarithmic differentiation} Given $F=F(X)\in K[[X]]^\times$, put $${\mathbf{D}}[F](X)=XF'(X)/F(X)\in XK[[X]].$$ Note that \begin{equation}\label{equation:Homogeneity} {\mathbf{D}}[F(\alpha X)]={\mathbf{D}}[F](\alpha X) \end{equation} for all $\alpha\in K^\times$. Note that the sequence \begin{equation}\label{equation:Factoid} 1\rightarrow K[[X^p]]^\times\subset K[[X]]^\times\xrightarrow{{\mathbf{D}}}
\left\{\left.\sum_{i=1}^\infty a_i X^i\in XK[[X]]\right| a_{pi}=a_i^p\;\mbox{for all}\;i\in {\mathbb{N}} \right\}\rightarrow 0 \end{equation} is exact, where ${\mathbb{N}}$ denotes the set of positive integers.
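For instance, if $F(X)=1-X$ then $${\mathbf{D}}[F](X)=\frac{-X}{1-X}=-\sum_{i=1}^\infty X^i,$$ so $a_i=-1$ for every $i$, and indeed $a_{pi}=-1=(-1)^p=a_i^p$ in $K$, in accordance with the description of the image of ${\mathbf{D}}$ in (\ref{equation:Factoid}).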
\subsubsection{$q$-critical integers} Given $c\in {\mathbb{N}}$, let $$ O_q(c)=\{n\in {\mathbb{N}}\vert (n,p)=1\;\mbox{and} \;n\equiv p^ic\bmod{q-1}\;\mbox{for some $i\in {\mathbb{N}}\cup\{0\}$}\}. $$ Given $n\in {\mathbb{N}}$, let $\ord_p n$ denote the exact order with which $p$ divides $n$. We define $$
C^0_q=\left\{c\in {\mathbb{N}}\cap(0,q)\left| (c,p)=1\;\mbox{and}\;\frac{c+1}{p^{\ord_p(c+1)}}=\min_{n\in O_q(c)\cap(0,q)} \frac{n+1}{p^{\ord_p(n+1)}}\right.\right\}, $$ and we call elements of this set {\em $q$-critical integers}. In the simplest case $p=q$ one has $C^0_p=\{1,\dots,p-1\}$, but in general the set $C^0_q$ is somewhat complicated.
Put $$ C_q=\bigcup_{c\in C_q^0}\{q^i(c+1)-1\vert i\in {\mathbb{N}}\cup\{0\}\}, $$ noting that the union is disjoint, since the sets in the union are contained in different congruence classes modulo $q-1$. See below for informal ``digital'' descriptions of the sets $C_q^0$ and $C_q$.
\subsubsection{The homomorphism $\psi_q$} We define a homomorphism $$\psi_q:XK[[X]]\rightarrow X^2K[[X]]$$ as follows: given $F=F(X)=\sum_{i=1}^\infty a_iX^i\in XK[[X]]$, put $$ \psi_{q}[F]=X\cdot \sum_{k\in C_q}a_kX^{k}. $$ Note that the composite map $$\psi_q\circ {\mathbf{D}}:K[[X]]^\times\rightarrow
\left\{\left.\sum_{k\in C_q}a_k X^{k+1}\right| a_k\in K\right\}$$ is surjective by exactness of sequence (\ref{equation:Factoid}). Further, since the set $\{k+1\vert k\in C_q\}$ is stable under multiplication by $q$, the target of $\psi_q\circ {\mathbf{D}}$ comes equipped with the structure of left $R_{q,K}$-module. More precisely, the target of $\psi_q\circ {\mathbf{D}}$ is a free left $R_{q,K}$-module for which the set $\{X^{k+1}\vert k\in C_q^0\}$ is a basis.
The following is the main result of the paper.
\begin{Theorem}\label{Theorem:MainResult} The formula \begin{equation}\label{equation:MainResult} \psi_{q}[{\mathbf{D}}[F\circ \gamma]]=\gamma^{-1}\circ \psi_{q}[{\mathbf{D}}[F]] \end{equation} holds for all $\gamma\in \Gamma_{q,K}$ and $F\in K[[X]]^\times$. \end{Theorem} \noindent
In \S\S\ref{section:Reduction}--\ref{section:DigitMadness2} we give the proof of the theorem. More precisely, we first explain in \S\ref{section:Reduction} how to reduce the proof of the theorem to a couple of essentially combinatorial assertions (Theorems~\ref{Theorem:DigitMadness} and \ref{Theorem:DigitMadness2}), and then we prove the latter in \S\ref{section:DigitMadness} and \S\ref{section:DigitMadness2}, respectively. In \S\ref{section:ColemanApp} we make the application (Corollary~\ref{Corollary:ColemanApp}) of Theorem~\ref{Theorem:MainResult} to Coleman power series. The application does not require any of the apparatus of the proof of Theorem~\ref{Theorem:MainResult}.
\subsection{Informal discussion} \subsubsection{``Digital'' description of $C_q^0$} The definition of $C_q^0$ can readily be understood in terms of simple operations on digit strings.
For example, to verify that $39$ is $1024$-critical, begin by writing out the base $2$ representation of $39$ thus:
$$39=100111_2$$ Then put enough place-holding $0$'s on the left so as to represent $39$ by a digit string of length $\ord_2 1024=10$: $$39=0000100111_2$$ Then calculate as follows: $$\begin{array}{cclr} \mbox{permute cyclically} &0000100111_2&\xrightarrow{\mbox{\tiny strike trailing $1$'s and leading $0$'s}}&100_2\\ &0001001110_2&\mbox{ignore: terminates with a $0$}\\ &0010011100_2&\mbox{ignore: terminates with a $0$}\\ \downarrow&0100111000_2&\mbox{ignore: terminates with a $0$}\\ &1001110000_2&\mbox{ignore: terminates with a $0$}\\ &0011100001_2&\xrightarrow{\mbox{\tiny strike trailing $1$'s and leading $0$'s}}&1110000_2\\ &0111000010_2&\mbox{ignore: terminates with a $0$}\\ &1110000100_2&\mbox{ignore: terminates with a $0$}\\ \downarrow &1100001001_2&\xrightarrow{\mbox{\tiny strike trailing $1$'s and leading $0$'s}}&110000100_2\\ &1000010011_2&\xrightarrow{\mbox{\tiny strike trailing $1$'s and leading $0$'s}}&10000100_2\\ \end{array}$$ Finally, conclude that $39$ is $1024$-critical because the first entry of the last column is the smallest in that column. This numerical example conveys some of the flavor of the combinatorial considerations coming up in the proof of Theorem~\ref{Theorem:MainResult}. \subsubsection{``Digital'' description of $C_q$} Given $c\in C_q^0$, let $[c_1,\cdots,c_m]_p$ be a string of digits representing $c$ in base $p$. (The digit string notation is defined below in \S\ref{section:Reduction}.) Then each digit string of the form $$[c_1,\dots,c_m,\underbrace{p-1,\dots,p-1}_{ n\ord_p q}]_p$$ represents an element of $C_q$. Moreover, each element of $C_q$ arises this way, for unique $c\in C_q^0$ and $n\in {\mathbb{N}}\cup\{0\}$.
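The membership test just illustrated is easy to mechanize. The following Python sketch (ours, not part of the original argument; the function names are invented for this illustration) computes $C_q^0$ directly from the definition and, in particular, reconfirms that $39\in C^0_{1024}$.
\begin{verbatim}
def vp(n, p):                     # exact power of p dividing n
    e = 0
    while n % p == 0:
        n //= p
        e += 1
    return e

def tau(n, p):                    # (n+1) / p^{ord_p(n+1)}
    return (n + 1) // p ** vp(n + 1, p)

def O_q(c, p, q):                 # O_q(c) intersected with (0, q)
    res, r = set(), c % (q - 1)
    while r not in res:
        res.add(r)
        r = (r * p) % (q - 1)
    return [n for n in range(1, q) if n % p and n % (q - 1) in res]

def C_q0(p, q):                   # the q-critical integers, straight from the definition
    return [c for c in range(1, q)
            if c % p and tau(c, p) == min(tau(n, p) for n in O_q(c, p, q))]

assert C_q0(3, 3) == [1, 2]       # C_p^0 = {1, ..., p-1} when q = p
assert 39 in C_q0(2, 1024)        # the worked example above
\end{verbatim}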
\subsubsection{Miscellaneous remarks} $\;$
(i) The set $C_q$ is a subset of the set of {\em magic numbers} (relative to the choice of $q$) as defined and studied in \cite[\S8.22, p.\ 309]{Goss}. For the moment we do not understand this connection
on any basis other than ``numerology'', but we suspect that it runs much deeper.
(ii) A well-ordering of the set of positive integers distinct from the usual one, which we call the {\em $p$-digital well-ordering}, plays a key role in the proof of Theorem~\ref{Theorem:MainResult}, via Theorems~\ref{Theorem:DigitMadness} and \ref{Theorem:DigitMadness2} below. In particular, Theorem~\ref{Theorem:DigitMadness2}, via Proposition~\ref{Proposition:DigitMadness2}, characterizes the sets $C_q^0$ and $C_q$ in terms of the $p$-digital well-ordering and congruences modulo $q-1$.
(iii) The results of this paper were discovered by extensive
computer experimentation with base $p$ expansions
and binomial coefficients modulo $p$. No doubt refinements of our results can be discovered
by continuing such experiments.
(iv) It is an open problem to find a minimal set of generators
for $K[[X]]^\times$ as a topological right $\Gamma_{q,K}$-module, the topologies here being the $X$-adically induced ones. It seems very likely that the module is always infinitely generated, even when $K$ is a finite field. Computer experimentation (based on the method of proof of Proposition~\ref{Proposition:Reduction} below) with the simplest case of the problem (in which $K$ is the two-element field and $p=q=2$) has revealed some interesting patterns. But still we are unable to hazard any detailed guess about the solution.
\section{Application to Coleman power series} \label{section:ColemanApp} We assume that the reader already knows about Lubin-Tate formal groups and Coleman power series, and is familiar with their applications. We refer the less well-versed reader to \cite{LubinTate}, \cite{Coleman}, \cite{ColemanLocModCirc} and \cite{ColemanAI} to get started up the mountain of literature.
\subsection{Background} We review \cite{LubinTate}, \cite{Coleman} and \cite{ColemanLocModCirc} just far enough to fix a consistent system of notation and to frame precisely the general structure problem motivating our work. \subsubsection{The setting} Let $k$ be a nonarchimedean local field with maximal compact subring ${\mathcal{O}}$ and uniformizer $\pi$. Let $q$ and $p$ be the cardinality and characteristic, respectively, of the residue field ${\mathcal{O}}/\pi$. Let $\bar{k}$ be an algebraic closure of $k$. Let $H$ be a complete unramified extension of $k$ in the completion of $\bar{k}$, let $\varphi$ be the arithmetic Frobenius automorphism of $H/k$, and let ${\mathcal{O}}_H$ be the ring of integers of $H$. Let the action of $\varphi$ be extended coefficient-by-coefficient to the power series ring ${\mathcal{O}}_H[[X]]$. \subsubsection{Lubin-Tate formal groups} We say that formal power series with coefficients in ${\mathcal{O}}$ are {\em congruent modulo $\pi$} if they are so coefficient-by-coefficient, and we say they are {\em congruent modulo degree $2$} if the constant and linear terms agree. Let ${\mathcal{F}}_\pi$ be the set of one-variable power series $f=f(X)$ such that $$f(X)\equiv \pi X\bmod{\deg 2},\;\;\;f(X)\equiv X^q\bmod{\pi}.$$ The simplest example of an element of ${\mathcal{F}}_\pi$ is $\pi X+X^q$. The general example of an element of ${\mathcal{F}}_\pi$ is $\pi X+X^q+\pi X^2e(X)$, where $e(X)\in {\mathcal{O}}[[X]]$ is arbitrary. Given $f\in {\mathcal{F}}_\pi$, there exists unique $F_f=F_f(X,Y)\in {\mathcal{O}}[[X,Y]]$ such that $$F_f(X,Y)\equiv X+Y\bmod{\deg 2},\;\;\;f(F_f(X,Y))=F_f(f(X),f(Y)).$$ The power series $F_f(X,Y)$ is a {\em commutative formal group law}. Given $a\in {\mathcal{O}}$ and $f,g\in {\mathcal{F}}_\pi$, there exists unique $[a]_{f,g}=[a]_{f,g}(X)\in {\mathcal{O}}[[X]]$ such that $$[a]_{f,g}(X)\equiv aX\bmod{\deg 2},\;\;f([a]_{f,g}(X))=[a]_{f,g}(g(X)).$$ We write $[a]_f=[a]_f(X)=[a]_{f,f}(X)$ to abbreviate notation. The family \linebreak $\{[a]_f(X)\}_{a\in {\mathcal{O}}}$ is a system of {\em formal complex multiplications} for the formal group law $F_f(X,Y)$. For each fixed $f\in {\mathcal{F}}_\pi$, the package $$({\mathcal{F}}_f(X,Y),\{[a]_f(X)\}_{a\in {\mathcal{O}}})$$ is a {\em Lubin-Tate formal group}. The formal properties of the ``big package'' $$\left(\{F_f(X,Y)\}_{f\in {\mathcal{F}}_\pi}, \{[a]_{f,g}(X)\}_{\begin{subarray}{c} a\in {\mathcal{O}}\\ f,g\in {\mathcal{F}}_\pi \end{subarray}}\right)$$ are detailed in \cite[Thm.\ 1, p.\ 382]{LubinTate}. In particular, one has \begin{equation}\label{equation:LTformal} [\pi]_f(X)=f(X),\;\;\;[1]_f(X)=X,\;\;\; [a]_{f,g}\circ[b]_{g,h}=[ab]_{f,h} \end{equation} for all $a,b\in {\mathcal{O}}$ and $f,g,h\in {\mathcal{F}}_\pi$. We remark also that \begin{equation}\label{equation:LTformalbis} [\omega]_{\pi X+X^q}(X)=\omega X \end{equation} for all roots $\omega$ of unity in $k$ of order prime to $p$.
\subsubsection{Coleman power series}
By Coleman's theory \cite{Coleman} there exists for each $f\in {\mathcal{F}}_\pi$ a unique group homomorphism $${\mathcal{N}}_f:{\mathcal{O}}_H[[X]]^\times\rightarrow {\mathcal{O}}_H[[X]]^\times$$ such that $$ {\mathcal{N}}_f[h](f(X))=\prod_{\begin{subarray}{c} \lambda\in \bar{k}\\ f(\lambda)=0 \end{subarray}}h(F_f(X,\lambda)) $$ for all $h\in {\mathcal{O}}_H[[X]]^\times$.
Let $${\mathcal{M}}_f=\{h\in {\mathcal{O}}_H[[X]]^\times\vert {\mathcal{N}}_f[h]=\varphi h\}.$$ We refer to elements of ${\mathcal{M}}_f$ as {\em Coleman power series}.
\subsubsection{Natural operations on Coleman power series} The group ${\mathcal{M}}_f$ comes equipped with the structure of right ${\mathcal{O}}^\times$-module by the rule \begin{equation}\label{equation:OOModuleRule} ((h,a)\mapsto h\circ [a]_f):{\mathcal{M}}_f\times {\mathcal{O}}^\times\rightarrow {\mathcal{M}}_f, \end{equation} and we have at our disposal a canonical isomorphism \begin{equation}\label{equation:Cformal} (h\mapsto h\circ [1]_{g,f}):{\mathcal{M}}_g\rightarrow {\mathcal{M}}_f \end{equation} of right ${\mathcal{O}}^\times$-modules for all $f,g\in {\mathcal{F}}_\pi$, as one verifies by applying the formal properties (\ref{equation:LTformal}) of the big Lubin-Tate package in a straightforward way. We also have at our disposal a canonical group isomorphism \begin{equation}\label{equation:CBijection} (h\mapsto h\bmod{\pi}):{\mathcal{M}}_f\rightarrow ({\mathcal{O}}_H/\pi)[[X]]^\times \end{equation} as one verifies by applying \cite[Lemma 13, p.\ 103]{Coleman} in a straightforward way. The {\em Iwasawa algebra} (completed group ring) ${\mathbb{Z}}_p[[{\mathcal{O}}^\times]]$ associated to $k$ acts naturally on the slightly modified version $${\mathcal{M}}_f^0=\{h\in {\mathcal{O}}_H[[X]]^\times \vert h\in {\mathcal{M}}_f,\;h(0)\equiv 1\bmod{\pi}\}$$ of ${\mathcal{M}}_f$.
\subsubsection{The structure problem} Little seems to be known in general about the structure of the ${\mathcal{O}}^\times$-module ${\mathcal{M}}_f$. To determine this structure is a fundamental problem in local class field theory, and the problem remains open. Essentially everything we do know about the problem is due to Coleman. Namely, in the special case
$$k={\mathbb{Q}}_p=H,\;\;\;\pi=p,\;\;\;f(X)=(1+X)^p-1\in {\mathcal{F}}_\pi,$$ Coleman showed \cite{ColemanLocModCirc} that ${\mathcal{M}}^0_f$ is ``nearly'' a free ${\mathbb{Z}}_p[[{\mathcal{O}}^\times]]$-module of rank $1$, and in the process recovered Iwasawa's result on the structure of local units modulo circular units. Moreover, Coleman's methods are strong enough to analyze ${\mathcal{M}}^0_f$ completely in the case of general $H$, even though this level of generality is not explicitly considered in \cite{ColemanLocModCirc}. So in the case of the formal multiplicative group over ${\mathbb{Z}}_p$ we have a complete and satisfying description of structure. Naturally one wishes for so complete a description in the general case. We hope with the present work to contribute to the solution of the structure problem.
Here is the promised application of Theorem~\ref{Theorem:MainResult}, which makes the ${\mathcal{O}}^\times$-module structure of a certain quotient of ${\mathcal{M}}_f$ explicit when $k$ is of characteristic $p$.
\begin{Corollary}\label{Corollary:ColemanApp} Assume that $k$ is of characteristic $p$ and fix $f\in {\mathcal{F}}_\pi$. Then there exists a surjective group homomorphism
$$\Psi_f:{\mathcal{M}}_f\rightarrow\left\{\left.\sum_{k\in C_q}a_kX^{k+1}\right| a_k\in {\mathcal{O}}_H/\pi\right\}$$ such that \begin{equation}\label{equation:ColemanApp} \Psi_f[h\circ [\omega u]_f]\equiv [(\omega u)^{-1}]_{\pi X+X^q}\circ \Psi_f[h]\circ[\omega]_{\pi X+X^q}\bmod{\pi} \end{equation} for all $h=h(X)\in {\mathcal{M}}_{f}$, $u\in 1+\pi{\mathcal{O}}$ and roots of unity $\omega\in {\mathcal{O}}^\times$.
\end{Corollary} \proof If we are able to construct $\Psi_{\pi X+X^q}$ with the desired properties, then in the general case the map $$\Psi_f=(h\mapsto \Psi_{\pi X+X^q}[h\circ [1]_{f,\pi X+X^q}])$$ has the desired properties by (\ref{equation:LTformal}) and (\ref{equation:Cformal}). We may therefore assume without loss of generality that $$f=\pi X+X^q,$$
in which case
$F_f(X,Y)=X+Y$,
i.~e., the formal group underlying the Lubin-Tate formal group attached to $f$ is additive.
By (\ref{equation:LTformalbis}) and the definitions, given
$a\in {\mathcal{O}}$ and writing
$$a=\sum_{i=0}^\infty \alpha_i \pi^i\;\;\;(\alpha_i^q=\alpha_i),$$
in the unique possible way, one has
$$[a]_f\equiv \sum_{i=0}^\infty \alpha_i X^{q^i}\bmod{\pi},$$
and hence the map
$a\mapsto [a]_{f}\bmod \pi$ gives rise to an isomorphism $$\theta:{\mathcal{O}}\iso R_{q,{\mathcal{O}}/\pi}$$ of rings. Let $$\rho:{\mathcal{M}}_f\rightarrow ({\mathcal{O}}_H/\pi)[[X]]^\times$$ be the isomorphism (\ref{equation:CBijection}). We claim that $$\Psi_f=\psi_q\circ {\mathbf{D}}\circ \rho$$ has all the desired properties. In any case, since $\rho$ is an isomorphism and $\psi_q\circ {\mathbf{D}}$ by (\ref{equation:Factoid}) is surjective, $\Psi_f$ is surjective, too. To verify (\ref{equation:ColemanApp}), we calculate as follows: $$\begin{array}{rcl} \psi_q[{\mathbf{D}}[\rho(h\circ [\omega u]_f)]]&=&\psi_q[{\mathbf{D}}[\rho(h\circ [\omega]_f\circ[ u]_f)]]\\ &=& \psi_q[{\mathbf{D}}[\rho(h)\circ \theta(\omega)\circ \theta(u)]]\\ &=&\theta(u^{-1})\circ \psi_q[{\mathbf{D}}[\rho(h)\circ \theta(\omega)]]\\ &=&\theta(u^{-1})\circ \psi_q[{\mathbf{D}}[\rho(h)]\circ \theta(\omega)]\\ &=&\theta(u^{-1})\circ\theta(\omega^{-1})\circ \psi_q[{\mathbf{D}}[\rho(h)]]\circ \theta(\omega)\\ &=&\theta((u\omega)^{-1})\circ \psi_q[{\mathbf{D}}[\rho(h)]]\circ \theta(\omega) \end{array} $$ The third and fourth steps are justified by (\ref{equation:MainResult}) and (\ref{equation:Homogeneity}), respectively.
The remaining steps are clear. The claim is proved, and with it the corollary.
\qed
\section{Reduction of the proof}\label{section:Reduction} We put Coleman power series behind us for the rest of the paper. We return to the elementary point of view taken in the introduction. In this section we explain how to reduce the proof of Theorem~\ref{Theorem:MainResult} to a couple of combinatorial assertions. \subsection{Digital apparatus}
\subsubsection{Base $p$ expansions} Given an additive decomposition $$n=\sum_{i=1}^s n_ip^{s-i}\;\;\;(n_i\in {\mathbb{Z}}\cap [0,p),\;n\in {\mathbb{N}}),$$ we write $$n=[n_1,\dots,n_s]_p,$$ we call the latter a {\em base $p$ expansion} of $n$ and we call the coefficients $n_i$ {\em digits}. Note that we allow base $p$ expansions to have leading $0$'s. We say that a base $p$ expansion is {\em minimal} if the first digit is positive. For convenience, we set the empty base $p$ expansion $[]_p$ equal to $0$ and declare it to be minimal.
We always read base $p$ expansions left-to-right, as though they were words spelled in the alphabet $\{0,\dots,p-1\}$. In this notation the well-known theorem of Lucas takes the form $$\left(\begin{array}{c} \;[a_1,\dots,a_n]_p\\ \;[b_1,\dots,b_n]_p \end{array}\right)\equiv \left(\begin{array}{c} a_1\\ b_1 \end{array}\right) \cdots \left(\begin{array}{c} a_n\\ b_n \end{array}\right)\bmod{p}.$$ (For all $n\in {\mathbb{N}}\cup\{0\}$ and $k\in {\mathbb{Z}}$ we set $\left(\begin{subarray}{c} n\\ k\end{subarray}\right)= \frac{n!}{k!(n-k)!}$ if $0\leq k\leq n$ and $\left(\begin{subarray}{c} n\\ k\end{subarray}\right)=0$ otherwise.) The theorem of Lucas implies that for all integers $k,\ell,m\geq 0$ such that $m=k+\ell$, the binomial coefficient $\left(\begin{subarray}{c} m\\ k \end{subarray}\right)$ does not vanish modulo $p$ if and only if the addition of $k$ and $\ell$ in base $p$ requires no ``carrying''. \subsubsection{The $p$-core function $\kappa_p$} Given $n\in {\mathbb{N}}$, we define $$ \kappa_p(n)=(n/p^{\ord_p n}+1)/p^{\ord_p(n/p^{\ord_p n}+1)}-1. $$ We call $\kappa_p(n)$ the {\em $p$-core}
of $n$. For example, $\kappa_p(n)=0$ iff $n=p^{k-1}(p^\ell-1)$ for some $k,\ell\in {\mathbb{N}}$. The meaning of the $p$-core function is easiest to grasp in terms of minimal base $p$ expansions. One calculates $\kappa_p(n)$ by discarding trailing $0$'s and then discarding trailing $(p-1)$'s. For example, to calculate the $3$-core of $963=[1,0,2,2,2,0,0]_3$, first discard trailing $0$'s to get $[1,0,2,2,2]_3=107$, and then discard trailing $2$'s to get $\kappa_3(963)=[1,0]_3=3$. \subsubsection{The $p$-defect function $\delta_p$} For each $n\in{\mathbb{N}}$, let $\delta_p(n)$ be the length of the minimal base $p$ representation of $\kappa_p(n)$. We call $\delta_p(n)$ the {\em $p$-defect} of $n$. For example, since as noted above $\kappa_3(963)=[1,0]_3$, one has $\delta_3(963)=2$.
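Both quantities are simple to compute. The short Python sketch below (ours, purely illustrative) implements $\kappa_p$ and $\delta_p$ and reproduces the example $\kappa_3(963)=3$, $\delta_3(963)=2$.
\begin{verbatim}
def vp(n, p):                     # exact power of p dividing n
    e = 0
    while n % p == 0:
        n //= p
        e += 1
    return e

def p_core(n, p):                 # kappa_p(n): drop trailing 0's, then trailing (p-1)'s
    m = n // p ** vp(n, p) + 1
    return m // p ** vp(m, p) - 1

def p_defect(n, p):               # delta_p(n): number of digits of kappa_p(n)
    c, d = p_core(n, p), 0
    while c:
        c //= p
        d += 1
    return d

assert p_core(963, 3) == 3 and p_defect(963, 3) == 2
\end{verbatim}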
\subsubsection{The $p$-digital well-ordering} We equip the set of positive integers with a well-ordering
$\leq_p$ by declaring $m\leq_{p}n$
if
$$ \kappa_p(m)<\kappa_p(n)$$
or
$$\kappa_p(m)=\kappa_p(n)\;\mbox{and}\;m/p^{\ord_p m}<n/p^{\ord_pn}$$
or
$$\kappa_p(m)=\kappa_p(n)\;\mbox{and}\;m/p^{\ord_p m}=n/p^{\ord_p n}\;\mbox{and} \;m\leq n.$$
In other words, to verify $m\leq _p n$, first compare $p$-cores of $m$ and $n$, then in case of a tie compare numbers of $(p-1)$'s trailing the $p$-core, and in case of another tie compare numbers of trailing $0$'s. We call $\leq_p$ the {\em $p$-digital well-ordering}.
In the obvious way we derive order relations $<_p$, $\geq_{p}$ and $>_{p}$
from $\leq_{p}$. We remark that
$$\delta_p(m)<\delta_p(n)\Rightarrow m<_p n,\;\;\;
m\leq_p n\Rightarrow \delta_p(m)\leq \delta_p(n);$$ in other words, the function $\delta_p$ gives a reasonable if rough approximation
to the $p$-digital well-ordering. \subsubsection{The function $\mu_q$} Given $c\in {\mathbb{N}}$, let $\mu_q(c)$ be the unique element of the set $$\{n\in {\mathbb{N}}\vert n\equiv p^i c\bmod{q-1}\;\mbox{for some}\;i\in {\mathbb{N}}\cup\{0\}\} $$ minimal with respect to the $p$-digital well-ordering. Note that $\mu_q(c)$ cannot be divisible by $p$. Consequently $\mu_q(c)$ may also be characterized as the unique element of the set $O_q(c)$ minimal with respect to the $p$-digital well-ordering. \subsubsection{$p$-admissibility} We say that a quadruple $(j,k,\ell,m)\in {\mathbb{N}}^4$ is {\em $p$-admissible} if $$(m,p)=1,\;\;\;m=k+j(p^\ell-1),\;\;\;\left(\begin{array}{c} k-1\\ j \end{array}\right)\not\equiv 0\bmod{p}.$$ This is the key technical definition of the paper. Let ${\mathcal{A}}_p$ denote the set of $p$-admissible quadruples.
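For machine experimentation of the kind mentioned in the introduction, it is convenient to encode the well-ordering as a lexicographic sort key and $p$-admissibility as a predicate. The Python sketch below (ours; the names are invented) does this, using the built-in \texttt{math.comb} for the binomial coefficient.
\begin{verbatim}
from math import comb

def vp(n, p):                     # exact power of p dividing n
    e = 0
    while n % p == 0:
        n //= p
        e += 1
    return e

def p_core(n, p):                 # kappa_p(n)
    m = n // p ** vp(n, p) + 1
    return m // p ** vp(m, p) - 1

def digital_key(n, p):            # lexicographic key realising <=_p
    return (p_core(n, p), n // p ** vp(n, p), n)

def p_admissible(j, k, l, m, p):  # is the quadruple (j, k, l, m) p-admissible?
    return (m % p != 0 and m == k + j * (p ** l - 1)
            and comb(k - 1, j) % p != 0)

assert p_admissible(1, 2, 1, 3, 2) and not p_admissible(2, 5, 1, 7, 2)
assert digital_key(2, 2) < digital_key(3, 2)      # i.e. 2 <_2 3
\end{verbatim}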
\begin{Theorem}\label{Theorem:DigitMadness} For all $(j,k,\ell,m)\in {\mathcal{A}}_p$, one has (i) $k<_p m$, and moreover, (ii) if \linebreak $\kappa_p(k)=\kappa_p(m)$, then
$j=(p^{\ord_p k}-1)/(p^\ell-1)$.
\end{Theorem} \noindent We will prove this result in \S\ref{section:DigitMadness}. Note that the conclusion of part (ii) of the theorem implies $\ord_pk>0$ and $\ell\vert \ord_p k$.
\begin{Theorem}\label{Theorem:DigitMadness2} One has \begin{equation}\label{equation:DigitMadness2} \max_{c\in {\mathbb{N}}}\mu_q(c)<q, \end{equation} \begin{equation}\label{equation:DigitMadness2bis} \begin{array}{cl} &\displaystyle\{(\mu_q(c)+1)q^{i}-1\;\vert\; i\in {\mathbb{N}}\cup\{0\},\;c\in {\mathbb{N}}\}\\\\
=&\displaystyle\left\{c\in {\mathbb{N}}\left|(c,p)=1,\; \kappa_p(c)=\min_{n\in O_q(c)}\kappa_p(n)\right.\right\}. \end{array} \end{equation} \end{Theorem} \noindent
We will prove this result in \S\ref{section:DigitMadness2}. We have phrased the result in a way emphasizing the $p$-digital well-ordering.
But perhaps it is not clear what the theorem means in the context of Theorem~\ref{Theorem:MainResult}. The next result provides an explanation.
\begin{Proposition}\label{Proposition:DigitMadness2}
Theorem~\ref{Theorem:DigitMadness2} granted,
one has
\begin{equation}\label{equation:DigitMadness2quad}
C_q^0=\{\mu_q(c)\vert c\in {\mathbb{N}}\},
\end{equation}
\begin{equation}\label{equation:DigitMadness2ter}
C_q=\left\{c\in {\mathbb{N}}\left|(c,p)=1,\; \kappa_p(c)=\kappa_p(\mu_q(c))\right.\right\}.
\end{equation}
\end{Proposition}
\proof
The definition of $C^0_q$ can be rewritten $$C^0_q=\left\{c\in {\mathbb{N}}\cap(0,q)\left|
(c,p)=1,\;\kappa_p(c)=\min_{n\in O_q(c)\cap(0,q)}\kappa_p(n)\right.\right\}.$$ Therefore relation (\ref{equation:DigitMadness2}) implies containment $\supset$ in (\ref{equation:DigitMadness2quad}) and moreover, supposing failure of equality in (\ref{equation:DigitMadness2quad}), there exist $c,c'\in C_q^0$ such that $$c=\mu_q(c)\neq c',\;\;\;\kappa_p(c)=\kappa_p(c').$$ But $c'=q^i(c+1)-1$ for some $i\in {\mathbb{N}}$ by (\ref{equation:DigitMadness2bis}), hence $c'\geq q$, and hence $c'\not\in C_q^0$. This contradiction establishes equality in (\ref{equation:DigitMadness2quad})
and in turn containment $\subset$ in (\ref{equation:DigitMadness2ter}). Finally, (\ref{equation:DigitMadness2bis})
and (\ref{equation:DigitMadness2quad}) imply equality in (\ref{equation:DigitMadness2ter}). \qed
The following is the promised reduction of the proof of Theorem~\ref{Theorem:MainResult}.
\begin{Proposition}\label{Proposition:Reduction} If Theorems~\ref{Theorem:DigitMadness} and \ref{Theorem:DigitMadness2} hold, then Theorem~\ref{Theorem:MainResult} holds, too. \end{Proposition} \noindent Before turning to the proof, we pause to discuss the groups in play. \subsection{Generators for $K[[X]]^\times$, ${\mathbf{D}}[K[[X]]^\times]$ and $\Gamma_{q,K}$} \label{subsection:Convenient} Equip $K[[X]]^\times$ with the topology for which the family $\{1+X^nK[[X]]\vert n\in {\mathbb{N}}\}$ is a neighborhood base at the origin. Then the set $$\{1+\alpha X^k\vert \alpha \in K^\times,\;k\in {\mathbb{N}}\}\cup K^\times$$ generates $K[[X]]^\times$ as a topological group. Let ${\mathbb{F}}_p$ be the residue field of ${\mathbb{Z}}_p$. Let $E_p=E_p(X)\in {\mathbb{F}}_p[[X]]$ be the reduction modulo $p$ of the Artin-Hasse exponential $$\exp\left(\sum_{i=0}^\infty \frac{X^{p^i}}{p^i}\right)\in ({\mathbb{Q}}\cap{\mathbb{Z}}_p)[[X]],$$ noting that $${\mathbf{D}}[E_p]=\sum_{i=0}^\infty X^{p^i}.$$ Since $E_p(X)=1+X+O(X^2)$, the set $$\{E_p(\alpha X^k)\;\vert \;\alpha\in K^\times,\;k\in {\mathbb{N}},\;(k,p)=1\}\cup K[[X^p]]^\times$$ generates $K[[X]]^\times$ as a topological group. For each $k\in {\mathbb{N}}$ such that $(k,p)=1$ and $\alpha\in K^\times$, put $$W_{k,\alpha}=W_{k,\alpha}(X)=k^{-1}{\mathbf{D}}[E_p(\alpha X^k)]=\sum_{i=0}^\infty \alpha^{p^i}X^{kp^i}\in XK[[X]]. $$ Equip ${\mathbf{D}}[K[[X]]^\times]$ with the relative $X$-adic topology. The set $$\{W_{k,\alpha}\vert k\in {\mathbb{N}},\;(k,p)=1,\;\alpha\in K^\times\}$$ generates ${\mathbf{D}}[K[[X]]^\times]$ as a topological group, cf.\ exact sequence (\ref{equation:Factoid}). Equip $\Gamma_{q,K}$ with the relative $X$-adic topology. Note that $$ (X+\beta X^{q^\ell})^{-1}=\sum_{i=0}^\infty (-1)^{i}\beta^{\frac{q^{\ell i}-1}{q^\ell-1}}X^{q^{\ell i}}\in \Gamma_{q,K} $$ for all $\ell\in {\mathbb{N}}$ and $\beta \in K^\times$. The inverse operation here is of course understood in the functional rather than multiplicative sense. The set $$\{X+\beta X^{q^\ell}\;\vert\; \beta\in K^\times,\;\ell\in {\mathbb{N}}\}$$ generates $\Gamma_{q,K}$ as a topological group. \subsection{Proof of the proposition} It is enough to verify (\ref{equation:MainResult}) with $F$ and $\gamma$ ranging over sets of generators for the topological groups $K[[X]]^\times$ and $\Gamma_{q,K}$, respectively. The generators mentioned in the preceding paragraph are the convenient ones. So fix $\alpha,\beta\in K^\times$ and $k,\ell\in {\mathbb{N}}$ such that $(k,p)=1$. It will be enough to verify that \begin{equation}\label{equation:Nuff} \psi_{q}[M_{k,\alpha,\ell,\beta}]= \left\{\begin{array}{rl}\displaystyle \alpha X^{k+1}+\sum_{\ell\vert f\in {\mathbb{N}}} (-1)^{f/\ell}\alpha^{q^f}\beta^{\frac{q^f-1}{q^\ell-1}}X^{q^f(k+1)}&\mbox{if $k\in C_q$,}\\\\ 0&\mbox{otherwise,} \end{array}\right. 
\end{equation} where \begin{equation}\label{equation:MExpansion} \begin{array}{cl} &M_{k,\alpha,\ell,\beta}=M_{k,\alpha,\ell,\beta}(X)=k^{-1}{\mathbf{D}}[E_p(\alpha(X+\beta X^{q^\ell})^k)]\\\\ =&\displaystyle W_{k,\alpha}+\sum_{i=0}^\infty \sum_{j=1}^\infty \left(\begin{array}{c} p^ik-1\\ j \end{array}\right)\alpha^{p^i}\beta^{j}X^{p^ik+j(q^\ell-1)}.\end{array} \end{equation} By Theorem~\ref{Theorem:DigitMadness}, many terms on the right side of (\ref{equation:MExpansion}) vanish, and more precisely, we can rewrite (\ref{equation:MExpansion}) as follows: \begin{equation}\label{equation:Nuff2} \begin{array}{rcl} M_{k,\alpha,\ell,\beta}&\equiv&\displaystyle\alpha X^k+ \sum_{\begin{subarray}{c} m\in O_q(k)\\ m>_pk \end{subarray}} \left(\sum_{\begin{subarray}{c} i\in {\mathbb{N}}\cup\{0\}, j\in{\mathbb{N}}\\ (j,p^ik,\ord_p q^\ell,m)\in {\mathcal{A}}_p \end{subarray}} \left(\begin{array}{c} p^ik-1\\ j \end{array}\right)\alpha^{p^i}\beta^j\right)X^m\\\\ &&\hskip 5cm\bmod{X^pK[[X^p]]}. \end{array} \end{equation} By Theorem~\ref{Theorem:DigitMadness2} as recast in the form of Proposition~\ref{Proposition:DigitMadness2}, along with formula (\ref{equation:Nuff2}) and the definitions, both sides of (\ref{equation:Nuff}) vanish unless $k\in C_q$. So now fix $c\in C_q^0$ and $g\in {\mathbb{N}}\cup\{0\}$ and put $$k=(c+1)q^g-1\in C_q$$ for the rest of the proof of the proposition. Also fix $f\in {\mathbb{N}}\cup\{0\}$ and put $$m=q^f(k+1)-1=(c+1)q^{f+g}-1\in C_q$$ for the rest of the proof. It is enough to evaluate the coefficient of $X^m$ in (\ref{equation:Nuff2}). By part (ii) of Theorem~\ref{Theorem:DigitMadness}, there is no term in the sum on the right side of (\ref{equation:Nuff2}) of degree $m$ unless $\ell\vert f$, in which case there is exactly one term, namely $$\left(\begin{array}{c} q^fk-1\\ \frac{q^f-1}{q^\ell-1} \end{array}\right)\alpha^{q^f}\beta^{\frac{q^f-1}{q^\ell-1}}X^m,$$ and by the theorem of Lucas, the binomial coefficient mod $p$ evaluates
to $(-1)^{f/\ell}$. Therefore (\ref{equation:Nuff}) does indeed hold. \qed \subsection{Remarks} $\;$
(i) By formula (\ref{equation:Nuff2}), the $p$-digital well-ordering actually gives rise to a $\Gamma_{q,K}$-stable complete separated filtration of the quotient $K[[X]]^\times/K[[X^p]]^\times$ distinct from the $X$-adically induced one. Theorem~\ref{Theorem:MainResult} merely describes the structure of $K[[X]]^\times/K[[X^p]]^\times$ near the top of the ``$p$-digital filtration''.
(ii) Computer experimentation based on formula (\ref{equation:MExpansion}) was helpful in making the discoveries detailed in this paper. We believe that continuation of such experiments could lead to further progress, e.g., to the discovery of a minimal set of generators for $K[[X]]^\times$ as a topological right $\Gamma_{q,K}$-module.
\section{Proof of Theorem~\ref{Theorem:DigitMadness}}\label{section:DigitMadness}
\begin{Lemma}\label{Lemma:DigitGames} Fix $(j,k,\ell,m)\in {\mathcal{A}}_p$. Put $$e=\ord_p(m+1),\;\;\;f=\ord_p k,\;\;\;g=\ord_p(k/p^f+1).$$ Then there exists a unique integer $r$ such that \begin{equation}\label{equation:DigitGames0} 0\leq r\leq e+\ell-1,\;\; r\equiv 0\bmod{\ell},\;\; j\equiv \frac{p^r-1}{p^\ell-1}\bmod{p^{e}}, \end{equation} and moreover \begin{equation}\label{equation:DigitGames1} f+g\geq e, \end{equation} \begin{equation}\label{equation:DigitGames2} \kappa_p(m)\geq \kappa_p(k). \end{equation} \end{Lemma} \noindent This lemma is the key technical result of the paper.
\subsection{Completion of the proof of the theorem, granting the lemma} Fix $(j,k,\ell,m)\in {\mathcal{A}}_p$. Let $e,f,g,r$ be as defined in Lemma~\ref{Lemma:DigitGames}. Since the number of digits in the minimal base $p$ expansion of $k$ cannot exceed the number of digits in the minimal base $p$ expansion of $m$, one has \begin{equation}\label{equation:DigitGames3} \delta_p(k)+f+g\leq \delta_p(m)+e. \end{equation} Combining (\ref{equation:DigitGames1}) and (\ref{equation:DigitGames3}), one has \begin{equation}\label{equation:DigitGames4} \delta_p(k)=\delta_p(m)\Rightarrow f+g=e. \end{equation} Now in general one has $$m+1=(\kappa_p(m)+1)p^e,\;\;\; k+p^f=(\kappa_p(k)+1)p^{f+g},$$ and hence $$ \kappa_p(k)=\kappa_p(m)\Rightarrow \left(j=\frac{p^{f}-1}{p^\ell-1}\;\mbox{and}\;e>g\right) $$ via (\ref{equation:DigitGames4}). Theorem~\ref{Theorem:DigitMadness} now follows via (\ref{equation:DigitGames2}) and the definition of the $p$-digital well-ordering. \qed
\subsection{Proof of Lemma~\ref{Lemma:DigitGames}} Since $e$ is the number of trailing $(p-1)$'s in the minimal base $p$ expansion of $m$, the lemma is trivial in the case $e=0$. We therefore assume that $e>0$ for the rest of the proof.
Let $$m=[m_1,\dots,m_t]_p\;\;\;(t>0,\;m_1>0,\;\;m_t>0)$$ be the minimal base $p$ expansion of $m$. For convenience, put $$d=\delta_p(m)\geq 0,\;\;\;m_\nu=0\;\mbox{for $\nu<1$}.$$ Then $$t=e+d,\;\;\;m_{d+1}=\cdots=m_{d+e}=p-1,\;\;\;m_{d}< p-1.$$ By hypothesis $$\left(\begin{array}{c} k-1\\ j \end{array}\right)=\left(\begin{array}{c} m-jp^\ell-1+j\\ m-jp^\ell-1 \end{array}\right)>0,$$ hence $$m>jp^\ell,$$ and hence the number of digits in the minimal base $p$ expansion of $jp^\ell$ does not exceed that of $m$. Accordingly, $$t> \ell$$ and one has a base $p$ expansion for $j$ of the form $$j=[j_1,\dots,j_{t-\ell}]_p,$$ which perhaps is not minimal. For convenience, put $$j_\nu=0\;\mbox{for $\nu<1$ and also for $\nu>t-\ell$}.$$ This state of affairs is summarized by the ``snapshot'' $$m=[m_1,\dots,m_t]=[m_{1},\dots,m_{d},\underbrace{p-1,\dots,p-1}_e]_p,\;\;\kappa_p(m)=[m_{1},\dots,m_{d}]_p,$$ $$jp^\ell=[j_1,\dots,j_t]_p=[j_1,\dots,j_{t-\ell},\underbrace{0,\dots,0}_\ell]_p ,$$ which the reader should keep in mind as we proceed.
We are ready now to prove the existence and uniqueness of $r$. One has $$m-jp^\ell-1=k-1-j=[m_1',\dots,m_d',p-1-j_{d+1},\dots, p-1-j_{t-1},p-2]_p,$$ where the digits $m'_1,\dots,m'_d$ are defined by the equation \begin{equation}\label{equation:Swivel} \kappa_p(m)-[j_{1},\dots,j_{d}]_p=[m'_1,\dots,m'_d]_p. \end{equation} By hypothesis and the theorem of Lucas, the addition of $k-1-j$ and $j$ in base $p$ requires no ``carrying'', and hence \begin{equation}\label{equation:BigDigit} k-1= \left\{\begin{array}{ll} \;[m_1'+j_{1-\ell},\dots,m_d'+j_{d-\ell},\\ \;p-1-j_{d+1}+j_{d+1-\ell},\dots, p-1-j_{d+e-1}+j_{d+e-1-\ell},p-2+j_{d+e-\ell}]_p. \end{array}\right. \end{equation} From the system of inequalities for the last $e+\ell$ digits of the base $p$ expansion of $jp^\ell$ implicit in (\ref{equation:BigDigit}), it follows that there exists $r_0\in {\mathbb{N}}\cup\{0\}$ such that \begin{equation}\label{equation:BigDigitBis} jp^\ell=[j_{1-\ell},\dots,j_{d-\ell}, \overbrace{0,\dots,0,\underbrace{\underbrace{1,0,\dots,0}_{\ell},\dots,\underbrace{1,0,\dots,0}_{\ell}}_{\mbox{\tiny $r_0$ blocks}},0}^{e+\ell}]_p. \end{equation} Therefore $r=r_0\ell$ has the required properties (\ref{equation:DigitGames0}). Uniqueness of $r$ is clear. For later use, note the relation \begin{equation}\label{equation:HeadScratcher} r\geq e\Leftrightarrow [j_{d-\ell+1},\dots,j_{d}]_p\neq 0\Rightarrow [j_{d-\ell+1},\dots,j_{d}]_p=p^{r-e}, \end{equation} which is easy to see from the point of view adopted here to prove (\ref{equation:DigitGames0}).
By (\ref{equation:DigitGames0}) one has \begin{equation}\label{equation:DigitGames2.2} k+p^r-(m+1)+j' p^e (p^\ell-1)=0\;\;\;\mbox{for some $j'\in {\mathbb{N}}\cup\{0\}$,} \end{equation} and hence one has \begin{equation}\label{equation:DigitGames2.5} r\geq \min(f,e),\;\;\;f\geq \min(r,e). \end{equation} This proves (\ref{equation:DigitGames1}), since either one has $f\geq e$, in which case (\ref{equation:DigitGames1}) holds trivially, or else $f<e$, in which case $r=f$ by (\ref{equation:DigitGames2.5}), and hence (\ref{equation:DigitGames1}) holds by (\ref{equation:DigitGames2.2}).
Put $$k-1=[k'_1,\dots,k'_{d+e}]_p,\;\;\;\;{\mathbf{1}}_{r\geq e}=\left\{\begin{array}{ll} 1&\mbox{if $r\geq e$,}\\ 0&\mbox{if $r<e$.} \end{array}\right. $$ Comparing (\ref{equation:BigDigit}) and (\ref{equation:BigDigitBis}), we see that
the digits $k'_{d+1},\dots,k'_{d+e}$ are all $(p-1)$'s with at most one exception, and the exceptional digit if it exists is a $p-2$. Further, one has $$k'_{d+1}=\dots=k'_{d+e}=p-1\Leftrightarrow f\geq e\Leftrightarrow {\mathbf{1}}_{r\geq e}=1$$ by (\ref{equation:DigitGames2.5}). Therefore one has $$\kappa_p(k)\leq[k'_1,\dots,k'_d]+{\mathbf{1}}_{r\geq e}.$$ Finally, via (\ref{equation:Swivel}), (\ref{equation:BigDigit}) and (\ref{equation:HeadScratcher}), it follows that $$ \begin{array}{rcl} \kappa_p(k)&\leq &[m_1'+j_{1-\ell},\dots,m_d'+j_{d-\ell}]_p+{\mathbf{1}}_{r\geq e}\\ &=&\kappa_p(m)-[j_1,\dots,j_d]_p+[j_{1-\ell},\dots,j_{d-\ell}]_p+{\mathbf{1}}_{r\geq e}\\ &=&\kappa_p(m)-[j_{1-\ell},\dots,j_d]_p+[j_{1-\ell},\dots,j_{d-\ell}]_p+{\mathbf{1}}_{r\geq e}\\ &=&\kappa_p(m)-[j_{d-\ell+1},\dots,j_{d}]_p+{\mathbf{1}}_{r\geq e} \\ &&-[j_{1-\ell},\dots,j_{d-\ell},\underbrace{0,\dots,0}_\ell]_p+[j_{1-\ell},\dots,j_{d-\ell}]_p\\ &=&\kappa_p(m)-{\mathbf{1}}_{r\geq e}(p^{r-e}-1)-(p^\ell-1)[j_{1-\ell},\dots,j_{d-\ell}]_p\\ &\leq &\kappa_p(m). \end{array} $$ Thus (\ref{equation:DigitGames2}) holds and the proof of the lemma is complete. \qed
\section{Proof of Theorem~\ref{Theorem:DigitMadness2}}\label{section:DigitMadness2}
\subsection{Further digital apparatus} Put $\lambda=\ord_p q$. For each $c\in {\mathbb{N}}$, let $$\bracket{c}_q=\min\{n\in {\mathbb{N}}\vert n\equiv c\bmod{q-1}\},\;\;\;\tau_p(c)=c/p^{\ord_pc}. $$ Note that $$0<\bracket{c}_q<q,\;\;\; \bracket{c}_q=\bracket{c'}_q\Leftrightarrow c\equiv c'\bmod{q-1}$$ for all $c,c'\in {\mathbb{N}}$. Given $c\in {\mathbb{N}}$, and writing $\bracket{c}_q=[c_1,\dots,c_\lambda]_p$, note that $$\{c_1,\dots,c_\lambda\}\neq \{0\},\;\; \bracket{pc}_q=[c_2,\dots,c_\lambda,c_1]_p,$$ $$\langle c\rangle_q\geq \tau_p(\bracket{c}_q)=[c_1,\dots,c_{\max\{i\vert c_i\neq 0\}}]_p\geq \kappa_p(\langle c\rangle_q).$$
\begin{Lemma}\label{Lemma:Necklace2} $\langle p^ic\rangle_q\leq p^i-1\Rightarrow \tau_p(\langle c\rangle_q)\leq\langle p^ic\rangle_q$ for $c\in {\mathbb{N}}$ and $i\in {\mathbb{N}}\cap(0,\lambda)$. \end{Lemma} \begin{Lemma}\label{Lemma:Necklace3} $\displaystyle \min_{i=0}^{\lambda-1}\tau_p(\langle p^ic+1\rangle_q)=1+\min_{i=0}^{\lambda-1}\kappa_p(\langle p^ic\rangle_q)=1+\kappa_p(\mu_q(c))$ for $c\in {\mathbb{N}}$. \end{Lemma} \begin{Lemma}\label{Lemma:Necklace4}
$i\not\equiv 0\bmod{\lambda}\Rightarrow p^i(\mu_q(c)+1)-1\not\in O_q(c)$ for $i,c\in {\mathbb{N}}$.
\end{Lemma} \subsection{Completion of the proof of the theorem, granting the lemmas} Relation (\ref{equation:DigitMadness2}) holds by Lemma~\ref{Lemma:Necklace3}. Relation (\ref{equation:DigitMadness2bis}) holds by Lemma~\ref{Lemma:Necklace4}. \qed
\subsection{Proof of Lemma~\ref{Lemma:Necklace2}} Write $\langle c\rangle_q=[c_1,\dots,c_\lambda]_p$. By hypothesis $$\langle p^ic\rangle_q= [\underbrace{0,\dots,0}_{\lambda-i},c_1,\dots,c_i]_p,\;\;\; \langle c\rangle_q=[c_1,\dots,c_i,\underbrace{0,\dots,0}_{\lambda-i}]_p, $$ and hence $\tau_p(\langle c\rangle_q)\leq \langle p^ic\rangle_q$. \qed
\subsection{Proof of Lemma~\ref{Lemma:Necklace3}} Since $$\mu_q(c)=(\kappa_p(\mu_q(c))+1)p^g-1\in O_q(c),$$ for some $g\in {\mathbb{N}}\cup\{0\}$, one has $$\kappa_p(\mu_q(c))+1\geq \min_{i=0}^{\lambda-1} \min_{j=0}^{\lambda-1} \langle p^i(p^jc+1)\rangle_q.$$ One has $$ \tau_p(\langle n+1\rangle_q)\geq 1+\kappa_p(\langle n\rangle_q) $$ for all $n\in {\mathbb{N}}$, as can be verified by a somewhat tedious case analysis which we omit. Clearly, the inequalities $\geq$ hold in the statement we are trying to prove. Therefore it will be enough to prove that $$ \min_{i=0}^{\lambda-1} \min_{j=0}^{\lambda-1} \langle p^i(p^jc+1)\rangle_q\geq \min_{j=0}^{\lambda-1} \tau_p(\langle p^jc+1\rangle_q). $$ Fix $i=1,\dots,\lambda-1$ and $j=0,\dots,\lambda-1$. It will be enough just to prove that \begin{equation}\label{equation:AlmostLastNuff} \langle p^i(p^jc+1)\rangle_q <\tau_p(\langle p^jc+1\rangle_q) \Rightarrow \langle p^i(p^jc+1)\rangle_q \geq \tau_p(\langle p^{i+j}c+1\rangle_q). \end{equation} But by the preceding lemma, under the hypothesis of (\ref{equation:AlmostLastNuff}), one has $$p^i-1<\langle p^i(p^jc+1)\rangle_q$$ and hence $$\langle p^i(p^jc+1)\rangle_q= \langle p^{i+j}c+1\rangle_q+p^i-1\geq \tau_p( \langle p^{i+j}c+1\rangle_q).$$ Thus (\ref{equation:AlmostLastNuff}) is proved, and with it the lemma.
\qed
\subsection{Proof of Lemma~\ref{Lemma:Necklace4}} We may assume without loss of generality that \linebreak $0<i<\lambda$ and $c=\mu_q(c)$. By the preceding lemma $c<q$. Write $c=[c_1,\dots,c_\lambda]_p$ and define $c_k$ for all $k$ by enforcing the rule $c_{k+\lambda}=c_k$. Supposing that the desired conclusion does not hold, one has $$p^{\lambda-i}[c_1,\dots,c_\lambda,\underbrace{p-1,\dots,p-1}_{i}]_p\equiv [c_1,\dots,c_\lambda,\underbrace{p-1,\dots,p-1}_{i},\underbrace{0,\dots,0}_{\lambda-i}]_p$$ $$ \equiv [c_1,\dots,c_\lambda]_p+[\underbrace{p-1,\dots,p-1}_{i},\underbrace{0,\dots,0}_{\lambda-i}]_p $$ $$ \equiv [c_1,\dots,c_\lambda]_p-[\underbrace{0,\dots,0}_{i},\underbrace{p-1,\dots,p-1}_{\lambda-i}]_p$$ $$\equiv [c_{1+m},\dots,c_{\lambda+m}]_p=\bracket{p^mc}_q$$ for some integer $m$, where all the congruences are modulo $q-1$. It is impossible to have $c_1=\cdots=c_i=0$ since this would force the frequency of occurrence of the digit $0$ to differ in the digit strings $c_1,\dots,c_\lambda$ and $c_{1+m},\dots,c_{\lambda+m}$, which after all are just cyclic permutations one of the other. Similarly we can rule out the possibility $c_{i+1}=\cdots=c_\lambda=p-1$. Thus the base $p$ expansion of $c$ takes the form $$c=[\underbrace{0,\dots,0}_{\alpha}, \underbrace{\bullet,\dots,\bullet}_{\beta}, \underbrace{p-1,\dots,p-1}_{\gamma} ]_p,$$ where $$\alpha< i,\;\;\beta>0,\;\;\;\gamma<\lambda-i,\;\;\; \alpha+\beta+\gamma=\lambda,$$ and the bullets hold the place of a digit string not beginning with a $0$ and not ending with a $p-1$. Then one has $$\begin{array}{rcl} 1+\kappa_p(c)&=&(c+1)/p^\gamma\\ &>&(c+1-p^{\lambda-i})/p^\gamma+1\;\;(\mbox{strict inequality!})\\ &\geq &\tau_p(c+1-p^{\lambda-i})+1\\ &=&\tau_p(\langle p^mc\rangle_q)+1\\ &\geq &\kappa_p(\langle p^mc\rangle_q)+1 \end{array} $$ in contradiction to the preceding lemma. This contradiction finishes the proof. \qed
\end{document}
\begin{document}
\twocolumn[ \icmltitle{Greedy Column Subset Selection: \\
New Bounds and Distributed Algorithms}
\icmlauthor{Jason Altschuler}{[email protected]} \icmladdress{Princeton University,
Princeton, NJ 08544} \icmlauthor{Aditya Bhaskara}{[email protected]} \icmladdress{School of Computing, 50 S. Central Campus Drive,
Salt Lake City, UT 84112} \icmlauthor{Gang (Thomas) Fu}{[email protected]} \icmlauthor{Vahab Mirrokni}{[email protected]} \icmlauthor{Afshin Rostamizadeh}{[email protected]} \icmlauthor{Morteza Zadimoghaddam}{[email protected]} \icmladdress{Google, 76 9th Avenue, New York, NY 10011}
\icmlkeywords{column selection, greedy algorithms, coresets}
\vskip 0.3in ]
\begin{abstract} The problem of column subset selection has recently attracted a large body of research, with feature selection serving as one obvious and important application. Among the techniques that have been applied to solve this problem, the greedy algorithm has been shown to be quite effective in practice. However, theoretical guarantees on its performance have not been explored thoroughly, especially in a distributed setting. In this paper, we study the greedy algorithm for the column subset selection problem from a theoretical and empirical perspective and show its effectiveness in a distributed setting. In particular, we provide an improved approximation guarantee for the greedy algorithm which we show is tight up to a constant factor, and present the first distributed implementation with provable approximation factors. We use the idea of randomized composable core-sets, developed recently in the context of submodular maximization. Finally, we validate the effectiveness of this distributed algorithm via an empirical study. \end{abstract}
\section{Introduction} Recent technological advances have made it possible to collect unprecedented amounts of data. However, extracting patterns of information from these high-dimensional massive datasets is often challenging. How do we automatically determine, among millions of measured features (variables), which are informative, and which are irrelevant or redundant? The ability to select such features from high-dimensional data is crucial for computers to recognize patterns in complex data in ways that are fast, accurate, and even human-understandable \cite{Guyon}.
An efficient method for feature selection receiving increasing attention is Column Subset Selection (CSS). CSS is a constrained low-rank-approximation problem that seeks to approximate a matrix (e.g. instances by features matrix) by projecting it onto a space spanned by only a few of its columns (features). Formally, given a matrix $A$ with $n$ columns, and a target rank $k < n$, we wish to find a size-$k$ subset $S$ of $A$'s columns such that each column $A_i$ of $A$ ($i \in \{1, \dots, n\}$) is contained as much as possible in the subspace $\text{span}(S)$, in terms of the Frobenius norm: \begin{align*}
\text{arg max}_{S \text{ contains $k$ of $A$'s columns}} \sum_{i=1}^n \|\text{proj}(A_i \; | \; \text{span}(S))\|_2^2 \end{align*}
While similar in spirit to general low-rank approximation, CSS offers several advantages, including flexibility, interpretability and efficiency during inference. CSS is an unsupervised method and does not require labeled data, which is especially useful when labeled data is sparse. We note, on the other hand, that unlabeled data is often very abundant, and therefore scalable methods, like the one we present, are often needed.
Furthermore, by subselecting features, as opposed to generating new features via an arbitrary function of the input features, we keep the semantic interpretation of the features intact. This is especially important in applications that require interpretable models. A third important advantage is the efficiency of applying the solution of the CSS feature selection problem during inference. Compared to PCA or other methods that require a matrix-matrix multiplication to project input features into a reduced space during inference time, CSS only requires selecting a subset of feature values from a new instance vector. This is especially useful for latency-sensitive applications and when the projection matrix itself may be prohibitively large, for example in restricted memory settings.
While there have been significant advances in CSS~\cite{Boutsidis1,Boutsidis2,Guruswami}, most of the algorithms are either impractical and not applicable in a distributed setting for large datasets, or they do not have good (multiplicative $1 - \varepsilon$) provable error bounds. Among efficient algorithms studied for the CSS problem is the simple {\em greedy algorithm}, which iteratively selects the best column and keeps it. Recent work shows that it does well in practice and even in a distributed setting~\cite{Farahat1, Farahat2} and admits a performance guarantee \cite{Civril1}. However, the known guarantees depend on an arbitrarily large matrix-coherence parameter, which is unsatisfactory. Also, even though the algorithm is relatively fast, additional optimizations are needed to scale it to datasets with millions of features and instances.
\subsection{Our contributions} Let $A \in \mathbb{R}^{m \times n}$ be the given matrix, and let $k$ be the target number of columns. Let $OPT_k$ denote the {\em optimal} set of columns, i.e., one that {\em covers} the maximum Frobenius mass of $A$. Our contributions are as follows.
{\em Novel analysis of Greedy.} For any $\varepsilon > 0$, we show that the natural greedy algorithm (Section~\ref{section-2}), after $r = \frac{k}{\sigma_{\min}(OPT_k) \varepsilon}$ steps, gives an objective value that is within a $(1-\varepsilon)$ factor of the optimum. We also give a matching lower bound, showing that $\frac{k}{\sigma_{\min}(OPT_k) \varepsilon}$ is tight up to a constant factor. Here $\sigma_{\min}(OPT_k)$ is the smallest squared singular value of the {\em optimal} set of columns (after scaling to unit vectors).
Our result is similar in spirit to those of~\cite{Civril1, Liberty}, but with an important difference. Their bound on $r$ depends on the {\em least} $\sigma_{\min}(S)$ over \textit{all} $S$ of size $k$, while ours depends on $\sigma_{\min}(OPT_k)$. Note that these quantities can differ significantly. For instance, if the data has even a little bit of redundancy (e.g. few columns that are near duplicates), then there exist $S$ for which $\sigma_{\min}$ is tiny, but the optimal set of columns could be reasonably well-conditioned (in fact, we would {\em expect} the optimal set of columns to be fairly well conditioned).
{\em Distributed Greedy.} We consider a natural distributed implementation of the greedy algorithm (Section~\ref{section-2}). Here, we show that an interesting phenomenon occurs: even though partitioning the input does not work in general (as in coreset based algorithms), {\em randomly} partitioning works well. This is inspired by a similar result on submodular maximization~\cite{Mirrokni}.
Further, our result implies a $2$-pass streaming algorithm for the CSS problem in the {\em random arrival} model for the columns.
We note that if the columns each have sparsity $\phi$,~\cite{Boutsidis2015} gives an algorithm with total communication of $O(\frac{sk\phi}{\varepsilon} + \frac{sk^2}{\varepsilon^4})$. Their algorithm works for ``worst case'' partitioning of the columns into machines and is much more intricate than the greedy algorithm. In contrast, our algorithm is very simple, and for a random partitioning, the communication is just the first term above, along with an extra $\sigma_{\min}(OPT)$ term. Thus depending on $\sigma_{\min}$ and $\varepsilon$, each of the bounds could be better than the other.
{\em Further optimizations.} We also present techniques to speed up the implementation of the greedy algorithm. We show that the recent result of~\cite{Mirzasoleiman} (once again, on submodular optimization) can be extended to the case of CSS, improving the running time significantly.
We then compare our algorithms (in accuracy and running times) to various well-studied CSS algorithms. (Section 6.)
\subsection{Related Work} The CSS problem is one of the central problems related to matrix approximation. Exact solution is known to be UG-hard~\cite{Civril2}, and several approximation methods have been proposed over the years. Techniques such as importance sampling \cite{Drineas1, Frieze}, adaptive sampling \cite{Deshpande1}, volume sampling \cite{Deshpande2, Deshpande4}, leverage scores \cite{Drineas-Leverage}, and projection-cost preserving sketches \cite{Cohen} have led to a much better understanding of the problem. \cite{Guruswami} gave the optimal dependence between column sampling and low-rank approximation. Due to the numerous applications, much work has been done on the implementation side, where adaptive sampling and leverage scores have been shown to perform well. A related, extremely simple algorithm is the greedy algorithm, which turns out to perform well and be scalable \cite{Farahat1, Farahat2}. This was first analyzed by~\cite{Civril1}, as we discussed.
There is also substantial literature about distributed algorithms for CSS \cite{Pi, Feldman, Cohen, Farahat3, Farahat4, Boutsidis2015}. In particular, \cite{Farahat3, Farahat4} present distributed versions of the greedy algorithm based on MapReduce. Although they do not provide theoretical guarantees, their experimental results are very promising.
The idea of composable coresets has been applied explicitly or implicitly to several problems~\cite{FeldmanSS13,BalcanEL13,VahabPODS2014}. Quite recently, for some problems in which coreset methods do not work in general, surprising results have shown that randomized variants of them give good approximations~\cite{BarbosaENW15,Mirrokni}. We extend this framework to the CSS problem.
\subsection{Background and Notation} We use the following notation throughout the paper. The set of integers $\{1, \dots, n\}$ is denoted by $[n]$. For a matrix $A \in \mathbb{R}^{m \times n}$, $A_j$ denotes the $j$th column ($A_j \in \mathbb{R}^m$). Given $S \subseteq [n]$, $A[S]$ denotes the submatrix of $A$ containing columns indexed by $S$. The projection matrix $\Pi_A$ projects onto the column span of $A$.
Let $\norm{A}_F$ denote the Frobenius norm, i.e., $\sqrt{\sum_{i,j} A_{i, j}^2}$. We write $\sigma_{\min}(A)$ to denote the minimum \textit{squared} singular value, i.e., $\inf_{x:\norm{x}_2 = 1} \frac{\|Ax\|_2^2}{\|x\|_2^2}$. We abuse notation slightly, and for a set of vectors $V$, we write $\sigma_{\min}(V)$ for the $\sigma_{\min}$ of the matrix with columns $V$.
\iffalse
{\bf Submodular optimization.} Given a finite set $\Omega$ and a set function $f : 2^{\Omega} \to \mathbb{R}$, define the marginal gain of adding an element $x \in \Omega$ to a set $S \subseteq \Omega$ by $\Delta(x | S) = f(S \cup \{x\}) - f(S)$. $f$ is said to be submodular if $\Delta(x | S) \geq \Delta(x | T)$ for any subsets $S \subseteq T \subseteq \Omega$ and any element $x \in \Omega \setminus T$. This is a formalization of the well-known economic principle of decreasing marginal utility. $f$ is further said to be nonnegative if $f(S) \geq 0$ for any $S \subseteq \Omega$, and monotonically nondecreasing if $f(S) \leq f(T)$ for any $S \subseteq T \subseteq \Omega$. The theory of maximizing submodular functions subject to a cardinality constraint has been well studied, and has been shown to be NP-hard [Nemhauser and Wolsey 1978; Feige 1998]. However, it is a key result in combinatorial optimization that a simple greedy algorithm to this problem for nonnegative, monotone nondecreasing submodular functions admits a $1 - \frac{1}{e}$ constant factor approximation [Nemhauser '78]. \fi
\begin{defin}\label{defn:css-problem} Given a matrix $A \in \mathbb{R}^{m \times n}$ and an integer $k \le n$, the \textbf{Column Subset Selection (CSS) Problem} asks to find
\[ \text{arg max}_{S \subseteq [n], |S| = k} \norm{\Pi_{A[S]}A}_F^2, \] i.e., the set of columns that {\em best explain} the full matrix $A$. \end{defin}
We note that it is also common to cast this as a minimization problem, with the objective being $\norm{A - \Pi_{A[S]} A}_F^2$. While the exact optimization problems are equivalent, obtaining multiplicative approximations for the minimization version could be harder when the matrix is low-rank.
For a set of vectors $V$ and a matrix $M$, we denote \[ f_M(V) = \norm{\Pi_V M}_F^2. \]
Also, the case when $M$ is a single vector will be important. For any vector $u$ and a set of vectors $V$, we write \[ f_u(V) = \norm{\Pi_V u}_2^2. \]
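For concreteness, here is a small NumPy sketch (ours, not part of the original development) that evaluates $f_M(V)$ by orthonormalizing $V$ and using $\norm{\Pi_V M}_F^2 = \norm{Q^\top M}_F^2$ for an orthonormal basis $Q$ of $\text{span}(V)$; the single-vector case $f_u(V)$ corresponds to $M$ having a single column. The columns of $V$ are assumed linearly independent so that QR yields a basis of their span.
\begin{verbatim}
import numpy as np

def f(M, V):
    # f_M(V) = || Pi_V M ||_F^2.
    # M: (m, n) matrix to be covered; V: (m, t) matrix of covering columns,
    # assumed linearly independent so that QR gives a basis of span(V).
    if V.shape[1] == 0:
        return 0.0
    Q, _ = np.linalg.qr(V)                 # orthonormal basis of span(V)
    return float(np.linalg.norm(Q.T @ M, 'fro') ** 2)
\end{verbatim}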
\begin{remark} \label{rem:not-submodular} Note that $f_M (V)$ can be viewed as the extent to which we can {\em cover} matrix $M$ using vectors $V$. However, unlike combinatorial covering objectives, our definition is not submodular, or even subadditive. As an example, consider the matrix $A$ below and take $u = A_3$. Then $f_u(\{A_1, A_2\}) = \norm{A_3}_2^2 = 2$, while $f_u(\{A_1\}) + f_u(\{A_2\}) = \frac{1}{2} + \frac{1}{2} = 1$. \[ A = \left( \begin{array}{ccc} 1 & 0 & 1 \\ 1 & -1 & 0 \\ 0 & 1 & 1 \end{array} \right)\]
\end{remark}
\section{Greedy Algorithm for Column Selection} \label{section-2}
Let us state our algorithm and analysis in a slightly general form. Suppose we have two matrices $A, B$ with the same number of rows and $n_A$, $n_B$ columns respectively. The $\textsf{GCSS}(A, B, k)$ problem is that of finding a subset $S$ of columns of $B$ that maximizes $f_A(S)$ subject to $|S|=k$. Clearly, if $B = A$, we recover the CSS problem stated earlier. Also, note that scaling the columns of $B$ will not affect the solution, so let us assume that the columns of $B$ are all unit vectors. The greedy procedure iteratively picks columns of $B$ as follows:
\begin{algorithm} \label{alg:greedy} \caption{$\textsc{Greedy}$($A \! \in \! \mathbb{R}^{m \times n_A}$, $B \! \in\! \mathbb{R}^{m \times n_B}$, $k \leq n_B$)} \begin{algorithmic}[1] \STATE $S \leftarrow \emptyset$ \FOR{$i = 1:k$} \STATE Pick column $B_j$ that maximizes $f_A(S \cup B_j)$ \STATE $S \leftarrow S \cup \{B_j\}$ \ENDFOR \STATE Return $S$ \end{algorithmic} \end{algorithm}
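As an illustration, a direct transcription of this pseudocode into NumPy (ours, not an optimized implementation) is given below; it reuses the helper \texttt{f} from the earlier sketch and recomputes the objective from scratch in every step, which is precisely the cost addressed by the optimizations that follow.
\begin{verbatim}
def greedy(A, B, k):
    # Naive GREEDY: k passes, each scanning every remaining column of B
    # and recomputing f_A(S + {j}) from scratch.
    S = []
    for _ in range(k):
        best_j, best_val = None, -1.0
        for j in range(B.shape[1]):
            if j in S:
                continue
            val = f(A, B[:, S + [j]])
            if val > best_val:
                best_j, best_val = j, val
        S.append(best_j)
    return S
\end{verbatim}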
Step (3) is the computationally intensive step in $\textsc{Greedy}$ -- we need to find the column that gives the most {\em marginal gain}, i.e., $f_A(S \cup B_j) - f_A(S)$. In Section~\ref{section-5}, we describe different techniques to speed up the calculation of marginal gain, while obtaining a $1-\varepsilon$ approximation to the optimum $f(\cdot)$ value. Let us briefly mention them here.
{\em Projection to reduce the number of rows.} We can left-multiply both $A$ and $B$ by an $r \times m$ Gaussian random matrix. For $r \ge \frac{k\log n}{\varepsilon^2}$, this process is well-known to preserve $f_A(\cdot)$, for any $k$-subset of the columns of $B$ (see~\cite{Sarlos} or Appendix Section~\ref{app:random-projections} for details).
{\em Projection-cost preserving sketches.} Using recent results from \cite{Cohen}, we can project each {\em row} of $A$ onto a random $O(\frac{k}{\varepsilon^2})$ dimensional space, and then work with the resulting matrix. Thus we may assume that the number of columns in $A$ is $O(\frac{k}{\varepsilon^2})$. This allows us to efficiently compute $f_A(\cdot)$.
{\em Lazier-than-lazy greedy.} \cite{Mirzasoleiman} recently proposed the first algorithm that achieves a constant factor approximation for maximizing submodular functions with a {\em linear} number of marginal gain evaluations. We show that a similar analysis holds for $\textsf{GCSS}$, even though the cost function is not submodular.
We also use some simple yet useful ideas from \cite{Farahat2} to compute the marginal gains (see Section~\ref{section-5}).
\subsection{Distributed Implementation} We also study a distributed version of the greedy algorithm, shown below (Algorithm~\ref{alg:cs-greedy}). $\ell$ is the number of machines.
\begin{algorithm} \label{alg:cs-greedy} \caption{$\textsc{Distgreedy}$($A$, $B$, $k$, $\ell$)} \begin{algorithmic}[1] \STATE {\em Randomly} partition the columns of $B$ into $T_1, \dots, T_{\ell}$ \STATE (Parallel) compute $S_i \leftarrow \textsc{Greedy}(A, T_i, \frac{32k}{\sigma_{\min}(OPT)})$ \STATE (Single machine) aggregate the $S_i$, and compute $S \leftarrow \textsc{Greedy}(A, \cup_{i=1}^{\ell}S_i, \frac{12k}{\sigma_{\min}(OPT)})$ \STATE Return $\text{arg max}_{S' \in \{S, S_1,\dots, S_{\ell}\}} f_A(S')$ \end{algorithmic} \end{algorithm}
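The following single-machine simulation of $\textsc{Distgreedy}$ (ours; the ``parallel'' phase is executed sequentially here, and $\sigma_{\min}(OPT)$ is treated as a known parameter) reuses \texttt{greedy} and \texttt{f} from the earlier sketches:
\begin{verbatim}
import numpy as np

def distgreedy(A, B, k, ell, sigma_min_opt, seed=0):
    rng = np.random.default_rng(seed)
    parts = np.array_split(rng.permutation(B.shape[1]), ell)  # random split
    k1 = int(np.ceil(32 * k / sigma_min_opt))
    k2 = int(np.ceil(12 * k / sigma_min_opt))
    # "Parallel" phase: greedy on each random part, mapped to global indices.
    S_i = [[int(part[j]) for j in greedy(A, B[:, part], min(k1, len(part)))]
           for part in parts]
    pool = [j for Si in S_i for j in Si]
    # Aggregation phase: greedy on the union of the per-machine solutions.
    S = [pool[j] for j in greedy(A, B[:, pool], min(k2, len(pool)))]
    return max([S] + S_i, key=lambda C: f(A, B[:, C]))
\end{verbatim}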
As mentioned in the introduction, the key here is that the partitioning is done {\em randomly}, in contrast to most results on {\em composable summaries}. We also note that machine $i$ only sees columns $T_i$ of $B$, but requires evaluating $f_A(\cdot)$ on the full matrix $A$ when running \textsc{Greedy}.\footnote{It is easy to construct examples in which splitting both $A$ and $B$ fails badly.} The way to implement this is again by using projection-cost preserving sketches. (In practice, keeping a small sample of the columns of $A$ works as well.) The sketch is first passed to all the machines, and they all use it to evaluate $f_A(\cdot)$.
We now turn to the analysis of the single-machine and the distributed versions of the greedy algorithm.
\section{Performance analysis of GREEDY} \label{section-3}
The main result we prove is the following, which shows that by taking only slightly more than $k$ columns, we are within a $1 -\varepsilon$ factor of the optimal solution of size $k$.
\begin{thm} \label{thm:greedy-main}
Let $A \in \mathbb{R}^{m \times n_A}$ and $B \in \mathbb{R}^{m \times n_B}$. Let $OPT_k$ be a set of columns from $B$ that maximizes $f_A(S)$ subject to $|S| = k$. Let $\varepsilon > 0$ be any constant, and let $T_r$ be the set of columns output by $\textsc{Greedy}(A, B, r)$, for $r = \frac{16k}{\varepsilon \sigma_{\min}(OPT_k)}$. Then we have $$f_A(T_r) \geq (1 - \varepsilon) f_A(OPT_k).$$
\end{thm}
We show in Appendix Section \ref{app:tight-ex} that this bound is tight up to a constant factor, with respect to $\varepsilon$ and $\sigma_{\min}(OPT_k)$. Also, we note that $\textsf{GCSS}$ is a harder problem than $\textsf{MAX-COVERAGE}$, implying that if we can choose only $k$ columns, it is impossible to approximate to a ratio better than $(1-\frac{1}{e}) \approx 0.63$, unless P=NP. (In practice, $\textsc{Greedy}$ does much better, as we will see.)
The basic proof strategy for Theorem~\ref{thm:greedy-main} is similar to that of maximizing submodular functions, namely showing that in every iteration, the value of $f(\cdot)$ increases significantly. The key lemma is the following.
\begin{lemma} \label{lem:large-gain} Let $S, T$ be two sets of columns, with $f_A(S) \ge f_A(T)$. Then there exists $v \in S$ such that
\[ f_A(T \cup v) - f_A(T) \ge \sigma_{\min}(S) \frac{\big(f_A(S) - f_A(T)\big)^2}{4|S|f_A(S)}.\] \end{lemma}
Theorem~\ref{thm:greedy-main} follows easily from Lemma~\ref{lem:large-gain}, which we show at the end of the section. Thus let us first focus on proving the lemma. Note that for submodular $f$, the analogous lemma simply has $\frac{f(S) - f(T)}{|S|}$ on the right-hand side (RHS). The main ingredient in the proof of Lemma~\ref{lem:large-gain} is its {\em single vector} version: \begin{lemma}\label{lem:one-vector} Let $S, T$ be two sets of columns, with $f_u(S) \ge f_u(T)$. Suppose $S=\{v_1, \dots, v_k\}$. Then \[ \sum_{i=1}^k \Big( f_u(T \cup v_i) - f_u(T) \Big) \ge \sigma_{\min}(S) \frac{\big(f_u(S) - f_u(T)\big)^2}{4f_u(S)}.\] \end{lemma}
Let us first see why Lemma~\ref{lem:one-vector} implies Lemma~\ref{lem:large-gain}. Observe that for any set of columns $T$, $f_A (T) = \sum_{j} f_{A_j} (T)$ (sum over the columns), by definition. For a column $j$, let us define $\delta_j = \min \{ 1, \frac{f_{A_j}(T)}{f_{A_j}(S)}\}$. Now, using Lemma~\ref{lem:one-vector} and plugging in the definition of $\delta_j$, we have \begin{align}
& \frac{1}{\sigma_{\min}(S)} \sum_{i=1}^k \big( f_A(T \cup v_i) - f_A(T) \big) \label{eq:start}\\
& \quad = \frac{1}{\sigma_{\min}(S)} \sum_{j = 1}^n \sum_{i=1}^k \big( f_{A_j}(T \cup v_i) - f_{A_j}(T) \big) \notag\\ & \quad \geq \sum_{j=1}^n \frac{ (1-\delta_j)^2 f_{A_j}(S)}{4} \label{eq:temp3}\\ & \quad = \frac{f_A(S)}{4} \sum_{j=1}^n (1 - \delta_j)^2 \frac{f_{A_j}(S)}{f_A(S)} \label{eq:temp4}\\ & \quad \geq \frac{f_A(S)}{4} \left( \sum_{j=1}^n (1 - \delta_j) \frac{f_{A_j}(S)}{f_A(S)}\right)^2 \label{eq:temp5} \\ & \quad = \frac{1}{4 f_A(S)} \Big(\sum_{j=1}^n \max\{ 0, f_{A_j}(S) - f_{A_j}(T) \}\Big)^2 \label{eq:temp6} \\
& \quad \ge \frac{1}{4 f_A(S)} \Big(f_A(S) - f_A(T) \Big)^2 \label{eq:temp8} \end{align}
To get \eqref{eq:temp5}, we used Jensen's inequality ($\mathbb{E}[X^2] \geq ( \mathbb{E}[X])^2$) treating $\frac{f_{A_j}(S)}{f_{A}(S)}$ as a probability distribution over indices $j$. Thus there exists an index $i$ for which the gain $f_A(T \cup v_i) - f_A(T)$ is at least a $\frac{1}{|S|}$ fraction of this total, proving Lemma~\ref{lem:large-gain}.
\begin{proof}[Proof of Lemma~\ref{lem:one-vector}] Let us first analyze the quantity $f_u(T \cup v_i) - f_u(T)$, for some $v_i \in S$. As mentioned earlier, we may assume the $v_i$ are normalized. If $v_i \in \text{span}(T)$, this quantity is $0$. Thus we can assume that such $v_i$ have been removed from $S$. Now, adding $v_i$ to $T$ gives a gain because of the component of $v_i$ orthogonal to $T$, i.e., $v_i - \Pi_T v_i$, where $\Pi_T$ denotes the projector onto $\text{span}(T)$. Define \[ v_i' = \frac{v_i - \Pi_T v_i}{\norm{v_i - \Pi_T v_i}_2}.\] By definition, $\text{span}(T \cup v_i) = \text{span}(T \cup v_i')$. Thus the projection of a vector $u$ onto $\text{span}(T \cup v_i')$ is $\Pi_T u + \iprod{u, v_i'} v_i'$, which is a vector whose squared length is $\norm{\Pi_T u}^2 + \iprod{u, v_i'}^2 = f_u(T) + \iprod{u, v_i'}^2$. This implies that \begin{equation}\label{eq:gain-single} f_u(T \cup v_i ) - f_u(T) = \iprod{u, v_i'}^2. \end{equation}
Thus, to show the lemma, we need a lower bound on $\sum_i \iprod{u, v_i'}^2$. Let us start by observing that a more explicit definition of $f_u(S)$ is the squared-length of the projection of $u$ onto $\text{span}(S)$, i.e. $f_u(S) = \max_{x \in \text{span}(S), \norm{x}_2 = 1} \iprod{u, x}^2$. Let $x = \sum_{i=1}^k \alpha_i v_i$ be a maximizer. Since $\norm{x}_2=1$, by the definition of the smallest squared singular value, we have $\sum_i \alpha_i^2 \le \frac{1}{\sigma_{\min}(S)}$. Now, decomposing $x = \Pi_T x + x'$, we have \[ f_u(S) = \langle x, u \rangle^2 = \langle x' + \Pi_T x, \; u \rangle^2 = (\langle x', u \rangle + \langle \Pi_T x, u \rangle)^2.\] Thus (since the worst case is when all signs align), \begin{align}
|\iprod{ x', u}| &\ge \sqrt{f_u(S)} - |\langle \Pi_T x, u \rangle|
\ge \sqrt{f_u(S)} - \sqrt{f_u(T)} \notag \\ &= \frac{f_u(S) - f_u(T)}{\sqrt{f_u(S)}+ \sqrt{f_u(T)}} \ge \frac{f_u(S) - f_u(T)}{2\sqrt{f_u(S)}}. \label{eq:dotprod-lb2} \end{align}
where we have used the fact that $|\iprod{\Pi_T x, u}|^2 \le f_u(T)$, which is true from the definition of $f_u(T)$ (and since $\Pi_T x$ is a vector of length $\le 1$ in $\text{span}(T)$).
Now, because $x = \sum_i \alpha_i v_i$, we have $x' = x - \Pi_T x = \sum_i \alpha_i(v_i - \Pi_T v_i) = \sum_i \alpha_i\norm{v_i - \Pi_Tv_i}_2v_i'$. Thus, \begin{align*} \iprod{x', u}^2 &= \big( \sum_i \alpha_i \norm{v_i - \Pi_Tv_i}_2 \iprod{v_i' , u} \big)^2 \\ &\le \big( \sum_i \alpha_i^2 \norm{v_i - \Pi_Tv_i}_2^2 \big) \big( \sum_i \iprod{v_i', u}^2 \big) \\ &\le \big( \sum_i \alpha_i^2 \big) \big( \sum_i \iprod{v_i', u}^2 \big). \end{align*}
where we have used Cauchy-Schwarz, and then the fact that $\|v_i - \Pi_Tv_i\|_2 \leq 1$ (because $v_i$ are unit vectors). Finally, we know that $\sum_i \alpha_i^2 \le \frac{1}{\sigma_{\min}(S)}$, which implies \[ \sum_i \iprod{v_i', u}^2 \ge \sigma_{\min}(S) \iprod{x', u}^2 \ge \sigma_{\min}(S) \frac{(f_u(S)- f_u(T))^2}{4f_u(S)}.\] Combined with~\eqref{eq:gain-single}, this proves the lemma. \end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:greedy-main}]
For notational convenience, let $\sigma = \sigma_{min}(OPT_k)$ and $F = f_A(OPT_k)$. Define $\Delta_0 = F$, $\Delta_1 = \frac{\Delta_0}{2}$, $\dots$, $\Delta_{i+1} = \frac{\Delta_i}{2}$ until $\Delta_N \leq \varepsilon F$. Note that the gap $f_A(OPT_k) - f_A(T_0) = \Delta_0$. We show that it takes at most $\frac{8kF}{\sigma \Delta_i}$ iterations (i.e. additional columns selected) to reduce the gap from $\Delta_i$ to $\frac{\Delta_i}{2} = \Delta_{i+1}$. To prove this, we invoke Lemma \ref{lem:large-gain} to see that the gap filled by $\frac{8kF}{\sigma \Delta_i}$ iterations is at least $\frac{8kF}{\sigma \Delta_i} \cdot \sigma \frac{(\frac{\Delta_i}{2})^2}{4kF}
= \frac{\Delta_i}{2} = \Delta_{i+1}$. Thus the total number of iterations $r$ required to get a gap of at most $\Delta_N \leq \varepsilon F$ is:
\[
r \leq \sum_{i=0}^{N-1} \frac{8kF}{\sigma \Delta_i} = \frac{8kF}{\sigma} \sum_{i=0}^{N-1} \frac{2^{i-N+1}}{\Delta_{N-1}}
< \frac{16k}{\varepsilon \sigma} .\]
where the last step is due to $\Delta_{N-1} > \varepsilon F$ and $\sum_{i=0}^{N-1} 2^{i-N+1} < 2$. Therefore, after $r < \frac{16k}{\varepsilon \sigma}$ iterations, we have $f_A(OPT_k) - f_A(T_r) \leq \varepsilon f_A(OPT_k)$; rearranging proves the theorem.
\end{proof}
\section{Distributed Greedy Algorithm} \label{section-4} We will now analyze the distributed version of the greedy algorithm that was discussed earlier. We show that in one {\em round}, we will find a set of size $O(k)$ as before, that has an objective value $\Omega(f(OPT_k)/\kappa)$, where $\kappa$ is a condition number (defined below). We also combine this with our earlier ideas to say that if we perform $O(\kappa / \varepsilon)$ {\em rounds} of \textsc{Distgreedy}, we get a $(1-\varepsilon)$ approximation (Theorem~\ref{thm:core-set-2}).
\subsection{Analyzing one round}
We consider an instance of $\textsf{GCSS}(A, B, k)$, and let $OPT$ denote an optimum set of $k$ columns.
Let $\ell$ denote the number of machines available.
The columns (of $B$) are partitioned across machines, such that machine $i$ is given columns $T_i$. It runs $\textsc{Greedy}$ as explained earlier and outputs $S_i \subset T_i$ of size $k' = \frac{32k}{\sigma_{\min}(OPT)}$. Finally, all the $S_i$ are moved to one machine and we run $\textsc{Greedy}$ on their union and output a set $S$ of size $k'' = \frac{12k}{\sigma_{\min}(OPT)}$. Let us define the condition number $\kappa(OPT) = \frac{\sigma_{\max}(OPT)}{\sigma_{\min}(OPT)}$, where $\sigma_{\max}$ denotes the maximum squared singular value (mirroring our convention for $\sigma_{\min}$).
\begin{thm}\label{thm:distributed-main} Consider running $\textsc{Distgreedy}$ on an instance of $\textsf{GCSS}(A, B, k)$. We have \[ \mathbb{E}[ \max\{ f_A(S), \max_i \{f_A(S_i)\}\}] \ge \frac{f(OPT)}{8\cdot \kappa(OPT)}.\] \end{thm} The following definitions are key to our proof: \begin{align*} OPT_i^S &= \{x \in OPT \; : \; x \in \textsc{Greedy}(A, T_i \cup x, k') \}\\ OPT_i^{NS} &= \{x \in OPT \; : \; x \not \in \textsc{Greedy}(A, T_i \cup x, k') \} \end{align*} In other words, $OPT_i^S$ contains all the vectors in $OPT$ that would have been selected by machine $i$ if they had been added to the input set $T_i$. By definition, the sets $(OPT_i^S, OPT_i^{NS})$ form a partition of $OPT$ for every $i$.
{\bf Proof outline.} Consider any partitioning $T_1, \dots, T_\ell$, and consider the sets $OPT_i^{NS}$. Suppose one of them (say the $i$th) had a large value of $f_A(OPT_i^{NS})$. Then, we claim that $f_A(S_i)$ is also large. The reason is that the greedy algorithm does {\em not} choose to pick the elements of $OPT_i^{NS}$ (by definition) -- this can only happen if it ended up picking vectors that are ``at least as good''. This is made formal in Lemma~\ref{lem:opt-ns}. Thus, we can restrict to the case when {\em none} of $f_A(OPT_i^{NS})$ is large. In this case, Lemma~\ref{lem:additivity} shows that $f_A(OPT_i^{S})$ needs to be large for each $i$. Intuitively, it means that most of the vectors in $OPT$ will, in fact, be picked by $\textsc{Greedy}$ (on the corresponding machines), and will be considered when computing $S$. The caveat is that we might be unlucky, and for every $x \in OPT$, it might have happened that it was sent to machine $j$ for which it was not part of $OPT_j^{S}$. We show that this happens with low probability, and this is where the random partitioning is crucial (Lemma~\ref{lem:opt-s}). This implies that either $S$, or one of the $S_i$ has a large value of $f_A(\cdot)$.
Let us now state two lemmas, and defer their proofs to Sections~\ref{app:opt-ns} and~\ref{app:additivity} respectively.
\begin{lemma} \label{lem:opt-ns} For $S_i$ of size $k' = \frac{32 k}{\sigma_{\min}(OPT)}$, we have \[ f_A(S_i) \geq \frac{f_A(OPT_i^{NS})}{2} ~\text{ for all $i$.}\]
\end{lemma}
\begin{lemma} \label{lem:additivity} For any matrix $A$, and any partition $(I, J)$ of $OPT$: \begin{equation}\label{eq:to-prove-additivity} f_A(I) + f_A(J) \geq \frac{f_A(OPT)}{2\kappa(OPT)}. \end{equation} \end{lemma}
Our final lemma is relevant when none of $f_A(OPT_i^{NS})$ are large and, thus, $f_A(OPT_i^{S})$ is large for {\em all} $i$ (due to Lemma~\ref{lem:additivity}). In this case, Lemma~\ref{lem:opt-s} will imply that the expected value of $f(S)$ is large.
Note that $\{T_i\}$ is a random partition, so the $T_i$, the $OPT_i^{S}$, $OPT_i^{NS}$, $S_i$, and $S$ are all random variables. However, all of these values are fixed given a partition $\{T_i\}$. In what follows, we will write $f(\cdot)$ to mean $f_A(\cdot)$.
\begin{lemma}\label{lem:opt-s} For a random partitioning $\{T_i\}$, and $S$ of size $k'' = \frac{12 k}{\sigma_{\min}(OPT)}$, we have \begin{equation}\label{eq:main-lem-to-show} \mathbb{E}[f(S)] \ge \frac{1}{2}\mathbb{E} \left[ \frac{\sum_{i=1}^{\ell} f(OPT_i^S)}{\ell}\right]. \end{equation} \end{lemma} \begin{proof} At a high level, the intuition behind the analysis is that many of the vectors in $OPT$ are selected in the first phase, i.e., in $\cup_i S_i$. For an $x \in OPT$, let $I_x$ denote the indicator for $x \in \cup_i S_i$.
Suppose we have a partition $\{T_i\}$. Then if $x$ had gone to a machine $i$ for which $x \in OPT_i^{S}$, then by definition, $x$ will be in $S_i$. Now the key is to observe (see definitions) that the event $x \in OPT_i^S$ does not depend on where $x$ is in the partition! In particular, we could think of partitioning all the elements except $x$ (and at this point, we know if $x \in OPT_i^S$ for all $i$), and {\em then} randomly place $x$. Thus \begin{equation}\label{eq:n14}
\mathbb{E}[ I_x ] = \mathbb{E} \left[ \frac{1}{\ell} \sum_{i=1}^{\ell} [[x \in OPT_i^S]] \right], \end{equation} where $[[~\cdot~]]$ denotes the indicator.
We now use this observation to analyze $f(S)$. Consider the execution of the greedy algorithm on $\cup_i S_i$, and suppose $V^t$ denotes the set of vectors picked at the $t$th step (so $V^t$ has $t$ vectors). The main idea is to give a lower bound on \begin{equation} \label{eq:diff-expectations}
\mathbb{E}[ f(V^{t+1}) - f(V^t) ], \end{equation} where the expectation is over the partitioning $\{T_i\}$. Let us denote by $Q$ the RHS of \eqref{eq:main-lem-to-show}, for convenience. Now, the trick is to show that for {\em any} $V^t$ such that $f(V^t) \le Q$, the expectation in~\eqref{eq:diff-expectations} is large. One lower bound on $f(V^{t+1}) - f(V^t)$ is (where $I_x$ is the indicator as above) \[\frac{1}{k} \sum_{x\in OPT} I_x \big( f(V^t \cup x) - f(V^t) \big).\] Now for every $V$, we can use~\eqref{eq:n14} to obtain \begin{align}
\mathbb{E}[ &f(V^{t+1}) - f(V^t) | V^t = V] \notag \\ &\ge \frac{1}{k\ell} \!\! \sum_{x \in OPT} \!\!\!\! \mathbb{E} \left[ \sum_{i=1}^\ell [[x \in OPT_i^S]]\right] \big( f(V \cup x) \!-\! f(V) \big) \notag \\ &= \frac{1}{k\ell} \mathbb{E} \left[ \sum_{i=1}^\ell \sum_{x \in OPT_i^S} \big( f(V \cup x) - f(V) \big) \right] \,. \notag \end{align}
Now, using~\eqref{eq:start}-\eqref{eq:temp6}, we can bound the inner sum by \[ \sigma_{\min}(OPT_i^S) \frac{ ( \max\{ 0, f(OPT_i^S) - f(V)\} )^2}{4f(OPT_i^S)} \,.\] Next, we use $\sigma_{\min}(OPT_i^S) \ge \sigma_{\min}(OPT)$ and the identity that for any two nonnegative reals $a, b$ with $a > 0$: $(\max\{ 0, a-b \})^2/a \ge a/2 - 2b/3$. (To verify the identity: for $b \ge a$ the left side is zero and the right side is nonpositive, while for $b < a$ it is equivalent to $\frac{a^2}{2} - \frac{4ab}{3} + b^2 \ge 0$, a quadratic in $a$ with negative discriminant.)
Together, these imply \begin{align*}
\mathbb{E}[ & f(V^{t+1}) - f(V^t) | V^t = V ] \\ &\ge \frac{\sigma_{\min}(OPT)}{4k\ell} \mathbb{E}\left[ \sum_{i=1}^\ell \frac{f(OPT_i^S)}{2} - \frac{2 f(V)}{3} \right]. \end{align*} and consequently:
$\mathbb{E}[ f(V^{t+1}) - f(V^t)] \ge \alpha \big(Q - \frac{2}{3}\mathbb{E}[f(V^t)]\big)$
for $\alpha = \sigma_{\min}(OPT)/4k$. If for some $t$, we have $\mathbb{E}[f(V^t)] \geq Q$, the proof is complete because $f$ is monotone, and $V^t \subseteq S$. Otherwise, $\mathbb{E}[f(V^{t+1}) - f(V^t)]$ is at least $\alpha Q/3$ for each of the $k'' = 12 k/\sigma_{\min}(OPT) = 3/\alpha$ values of $t$. We conclude that $\mathbb{E}[f(S)]$ should be at least $(\alpha Q/3) \times (3/\alpha) = Q$ which completes the proof.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:distributed-main}] If $f_A( OPT_i^{NS}) \ge \frac{ f(OPT)}{4\kappa(OPT)}$ for some $i$, then we are done, because Lemma~\ref{lem:opt-ns} implies that $f_A(S_i)$ is large enough. Otherwise, by Lemma~\ref{lem:additivity}, $f_A(OPT_i^{S}) \ge \frac{ f(OPT)}{4\kappa(OPT)}$ for all $i$. Now we can use Lemma~\ref{lem:opt-s} to conclude that $\mathbb{E}[ f_A(S) ] \ge \frac{ f(OPT)}{8\kappa(OPT)}$, completing the proof. \end{proof}
\subsection{Multi-round algorithm} We now show that repeating the above algorithm helps achieve a $(1-\varepsilon)$-factor approximation.
We propose a framework with $r$ epochs for some integer $r>0$. In each epoch $t \in [r]$, we run the $\textsc{Distgreedy}$ algorithm to select set $S^t$. The only thing that changes in different epochs is the objective function: in epoch $t$, the algorithm selects columns based on the function $f^t$ which is defined to be: $f^t(V) = f_A(V \cup S^1 \cup S^2 \cdots \cup S^{t-1})$ for any $t$. We note that function $f^1$ is indeed the same as $f_A$. The final solution is the union of solutions: $\cup_{t=1}^r S^t$.
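One way to realize the functions $f^t$ in code (an implementation choice of ours, not prescribed by the description above) is to keep residuals: after each epoch, project the already-selected columns out of both $A$ and $B$, so that covering the residual of $A$ is equivalent to increasing $f^t$. A sketch, reusing \texttt{distgreedy} from before (and assuming the selected columns remain linearly independent for the QR step):
\begin{verbatim}
import numpy as np

def multi_round_distgreedy(A, B, k, ell, sigma_min_opt, rounds, seed=0):
    A_res, B_res, selected = A.copy(), B.copy(), []
    for t in range(rounds):
        S_t = distgreedy(A_res, B_res, k, ell, sigma_min_opt, seed=seed + t)
        selected.extend(j for j in S_t if j not in selected)
        Q, _ = np.linalg.qr(B[:, selected])   # basis of all chosen columns
        A_res = A - Q @ (Q.T @ A)             # part of A not yet explained
        B_res = B - Q @ (Q.T @ B)             # residuals of candidate columns
    return selected
\end{verbatim}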
\begin{thm}\label{thm:core-set-2} For any $\varepsilon <1$, the expected value of the solution of the $r$-epoch $\textsc{Distgreedy}$ algorithm, for $r = O(\kappa(OPT)/\varepsilon)$, is at least $(1-\varepsilon)f(OPT)$.
\end{thm}
The proof is provided in Section~\ref{app:core-set-2} of the appendix.
{\em Necessity of Random Partitioning.} We point out that the random partitioning step of our algorithm is crucial for the $\textsf{GCSS}(A, B, k)$ problem. We adapt the instance from~\cite{VahabPODS2014} and show that even if each machine can compute $f_A(\cdot)$ exactly, and is allowed to output $\text{poly}(k)$ columns, it cannot compete with the optimum. Intuitively, this is because an adversarial partition of the columns of $B$ could ensure that, in each part $i$, the best way of covering $A$ involves picking some vectors $S_i$, but the $S_i$'s for different $i$ could overlap heavily, while the global optimum needs columns that capture different {\em parts} of the space to be covered. (See Theorem \ref{thm:rand-part} in Appendix~\ref{app:rand-part} for details.)
\section{Further optimizations for \textsc{Greedy}} \label{section-5} We now elaborate on some of the techniques discussed in Section~\ref{section-2} for improving the running time of $\textsc{Greedy}$. We first assume that we left-multiply both $A$ and $B$ by a random Gaussian matrix of dimension $r \times m$, for $r \approx k \log n/\varepsilon^2$. Working with the new instance suffices for the purposes of $(1-\varepsilon)$ approximation to CSS (for picking $O(k)$ columns). (Details in the Appendix, Section~\ref{app:random-projections})
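A minimal sketch of this preprocessing step follows (ours; the explicit constant hidden in the choice of $r$ is an assumption):
\begin{verbatim}
import numpy as np

def reduce_rows(A, B, k, eps, seed=0):
    # Left-multiply A and B by the same r x m Gaussian matrix.
    rng = np.random.default_rng(seed)
    m = A.shape[0]
    n = max(A.shape[1], B.shape[1])
    r = int(np.ceil(k * np.log(n) / eps ** 2))
    G = rng.normal(size=(r, m)) / np.sqrt(r)
    return G @ A, G @ B
\end{verbatim}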
\subsection{Projection-Cost Preserving Sketches} Marginal gain evaluations of the form $f_A(S \cup v) - f_A(S)$ require summing the marginal gain of $v$ onto each column of $A$. When $A$ has a large number of columns, this can be very expensive. To deal with this, we use a {\em sketch} of $A$ instead of $A$ itself. This idea has been explored in several recent works; we use the following notation and result:
\begin{defin}[\cite{Cohen}] \label{defin:pcps} For a matrix $A \in \mathbb{R}^{m \times n}$, $A' \in \mathbb{R}^{m \times n'}$ is a \emph{rank-$k$ Projection-Cost Preserving Sketch (PCPS)} with error $0 \leq \varepsilon < 1$ if for any set of $k$ vectors $S$, we have: $(1 - \varepsilon) f_A(S) \leq f_{A'}(S) + c \leq (1 + \varepsilon) f_A(S)$
where $c \geq 0$ is a constant that may depend on $A$ and $A'$ but is independent of $S$. \end{defin}
\begin{thm}[Theorem 12 of \cite{Cohen}]\label{thm:pcps} Let $R$ be a random matrix with $n$ rows and $n' = O(\frac{k + \log\frac{1}{\delta}}{\varepsilon^2})$ columns, where each entry is set independently and uniformly to $\pm \sqrt{\frac{1}{n'}}$. Then for any matrix $A \in \mathbb{R}^{m \times n}$, with probability at least $1 - O(\delta)$, $AR$ is a rank-$k$ PCPS for $A$. \end{thm}
Thus, we can use PCPS to sketch the matrix $A$ to have roughly $k/\varepsilon^2$ columns, and use it to compute $f_A(S)$ to a $(1\pm \varepsilon)$ accuracy for any $S$ of size $\le k$. This is also used in our distributed algorithm, where we send the sketch to every machine.
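In code, forming and using such a sketch is straightforward; the following is a minimal version (ours; the explicit constant hidden in $n'$ is an assumption):
\begin{verbatim}
import numpy as np

def pcps(A, k, eps, delta=0.1, seed=0):
    # Right-multiply A by a random +/- 1/sqrt(n') matrix, as in the theorem.
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    n_prime = int(np.ceil((k + np.log(1.0 / delta)) / eps ** 2))
    R = rng.choice([-1.0, 1.0], size=(n, n_prime)) / np.sqrt(n_prime)
    return A @ R   # f_{AR}(S) tracks f_A(S) up to (1 +/- eps) and a shift c
\end{verbatim}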
\subsection{Lazier-than-lazy Greedy} The natural implementation of $\textsc{Greedy}$ requires $O(nk)$ evaluations of $f(\cdot)$ since we compute the marginal gain of all $n$ candidate columns in each of the $k$ iterations. For submodular functions, one can do better: the recently proposed $\textsc{Lazier-than-lazy Greedy}$ algorithm obtains a $1 - \frac{1}{e} - \delta$ approximation with only a linear number $O(n \log (1/\delta))$ of marginal gain evaluations \cite{Mirzasoleiman}. We show that a similar result holds for \textsf{GCSS}, even though our cost function $f(\cdot)$ is not submodular.
The idea is as follows. Let $T$ be the current solution set. To find the next element to add to $T$, we draw a subset of size $\frac{n_B \log (1/\delta)}{k}$ uniformly at random from the columns in $B \setminus T$. We then take from this set the column with the largest marginal gain, add it to $T$, and repeat. We show this gives the following guarantee (details in Appendix Section~\ref{app:thm-lazier-than-lazy}).
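A sketch of the resulting selection loop (ours, reusing \texttt{f} from the earlier sketch; sample sizes are rounded up):
\begin{verbatim}
import numpy as np

def lazier_than_lazy_greedy(A, B, r, k, delta, seed=0):
    rng = np.random.default_rng(seed)
    n_B = B.shape[1]
    s = int(np.ceil(n_B * np.log(1.0 / delta) / k))   # sample size per step
    T = []
    for _ in range(r):
        rest = np.array([j for j in range(n_B) if j not in T])
        R = rng.choice(rest, size=min(s, len(rest)), replace=False)
        gains = [f(A, B[:, T + [int(j)]]) for j in R]
        T.append(int(R[int(np.argmax(gains))]))
    return T
\end{verbatim}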
\begin{thm} \label{thm-lazier-than-lazy:main}
Let $A\in \mathbb{R}^{m \times n_A}$ and $B \in \mathbb{R}^{m \times n_B}$. Let $OPT_k$ be the set of columns from $B$ that maximizes $f_A(S)$ subject to $|S| = k$. Let $\varepsilon, \delta > 0$ be any constants such that $\varepsilon + \delta \leq 1$. Let $T_r$ be the set of columns output by $\textsc{Lazier-than-lazy Greedy} (A, B, r)$, for $r = \frac{16k}{\varepsilon \sigma_{\min}(OPT_k)}$. Then we have: $$\mathbb{E}[f_A(T_r)] \geq (1 - \varepsilon - \delta) f_A(OPT_k)$$ Further, this algorithm evaluates marginal gains only $\frac{16n_B \log (1/\delta)}{\varepsilon \sigma_{\min}(OPT_k)}$ times, i.e., a number linear in $n_B$. \end{thm}
Note that this guarantee is nearly identical to our analysis of $\textsc{Greedy}$ in Theorem \ref{thm:greedy-main}, except that it is in expectation.
The proof strategy is very similar to that of Theorem \ref{thm:greedy-main}, namely showing that the value of $f(\cdot)$ increases significantly in every iteration (see Appendix Section~\ref{app:thm-lazier-than-lazy}).
{\bf Calculating marginal gain faster.} We defer the discussion to Appendix Section~\ref{sec:app:marginal}.
\section{Experimental results}\label{section-6}
\begin{figure*}
\caption{A comparison of reconstruction accuracy, model classification accuracy and runtime of various column selection methods (with PCA provided as an upper bound). The runtime plot shows the relative speedup over the naive GREEDY algorithm.}
\label{fig}
\end{figure*}
In this section we present an empirical investigation of the GREEDY, GREEDY++ and \textsc{Distgreedy}\ algorithms. Additionally, we will compare with several baselines:
\\
{\bf Random:}
The simplest imaginable baseline, this method selects
columns randomly. \\
{\bf 2-Phase:}
The two-phased algorithm of \cite{Boutsidis2}, which operates by first sampling $\Theta(k \log k)$ columns based on properties of the top-$k$ right singular space of the input matrix (this requires computing a top-$k$ SVD) and then selecting exactly $k$ columns via a deterministic procedure. The overall complexity is dominated by the top-$k$ SVD, which is $O( \min\{mn^2, m^2n\})$. \\
{\bf PCA:}
The columns of the rank-$k$ PCA projection matrix serve as an upper bound on performance, as they explicitly minimize the Frobenius reconstruction criterion. Note that this method only serves as an upper bound and does not fall into the framework of column subset selection.
We evaluate these algorithms on two datasets: one with a small set of columns (mnist), used to compare both scalable and non-scalable methods, and a sparse dataset with a large number of columns (news20.binary), meant to demonstrate the scalability of the GREEDY core-set algorithm.\footnote{Both datasets can be found at: www.csie.ntu.edu.tw/$\sim$cjlin/libsvmtools/datasets/multiclass.html.}
Finally, we are also interested in the effect of column selection as a preprocessing step for supervised learning methods. To that end, we will train a linear SVM model, using the LIBLINEAR library \citep{fan2008}, with the subselected columns (features) and measure the effectiveness of the model on a held-out test set. For both datasets we report test error for the best choice of regularization parameter $c \in \{10^{-3}, \ldots, 10^4\}$. We run GREEDY++ and \textsc{Distgreedy}\ with $\frac{n}{k}\log(10)$ marginal gain evaluations per iteration, and the distributed algorithm uses $s = \sqrt{\frac{n}{k}}$ machines with each machine receiving $\frac{n}{s}$ columns.
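For reference, the evaluation loop amounts to something like the following (a sketch using scikit-learn's LIBLINEAR-backed \texttt{LinearSVC} rather than the authors' exact pipeline; \texttt{S} denotes the list of selected column indices):
\begin{verbatim}
import numpy as np
from sklearn.svm import LinearSVC   # LIBLINEAR-backed linear SVM

def eval_selected_columns(X_tr, y_tr, X_te, y_te, S):
    best = 0.0
    for c in 10.0 ** np.arange(-3, 5):          # c in {1e-3, ..., 1e4}
        clf = LinearSVC(C=c).fit(X_tr[:, S], y_tr)
        best = max(best, clf.score(X_te[:, S], y_te))
    return best
\end{verbatim}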
\subsection{Small scale dataset (mnist)} We first consider the MNIST digit recognition task, which is a ten-class classification problem. There are $n$ = 784 input features (columns) that represent pixel values from the $28\times28$-pixel images. We use $m$ = 60,000 instances to train with and 10,000 instances for our test set.
From Figure~\ref{fig} we see that all column sampling methods, apart from Random, select columns that approximately provide the same amount of reconstruction and are able to reach within 1\% of the performance of PCA after sampling 300 columns. We also see a very similar trend with respect to classification accuracy. It is notable that, in practice, the core-set version of GREEDY incurs almost no additional error (apart from at the smallest values of $k$) when compared to the standard GREEDY algorithm.
Finally, we also show the relative speed up of the competitive methods over the standard GREEDY algorithm. In this small dataset regime, we see that the core-set algorithm does not offer an improvement over the single machine GREEDY++ and in fact the 2-Phase algorithm is the fastest. This is primarily due to the overhead of the distributed core-set algorithm and the fact that it requires two greedy selection stages (e.g.\ map and reduce). Next, we will consider a dataset that is large enough that a distributed model is in fact necessary.
\subsection{Large scale dataset (news20.binary)}
\begin{table} \begin{small} \begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
\bf $k$ & \bf Rand & \bf 2-Phase & \bf $\textsc{Distgreedy}$ & \bf PCA \\
\hline
\hline
500 & 54.9 & 81.8 (1.0) & 80.2 (72.3) & 85.8 (1.3) \\
\hline
1000 & 59.2 & 84.4 (1.0) & 82.9 (16.4) & 88.6 (1.4)\\
\hline
2500 & 67.6 & 87.9 (1.0) & 85.5 (2.4) & 90.6 (1.7) \\
\hline
\end{tabular} \end{center}
\end{small}
\caption{A comparison of the classification accuracy of selected features. Also, the relative speedup over the 2-Phase algorithm for selecting features is shown in parentheses.
} \label{table} \end{table}
In this section, we show that the $\textsc{Distgreedy}$ algorithm can indeed scale to a dataset with a large number of columns.
The news20.binary dataset is a binary class text classification problem, where we start with $n$ = 100,000 sparse features (0.033\% non-zero entries) that represent text trigrams, use $m$ = 14,996 examples to train with and hold-out 5,000 examples to test with.
We compare the classification accuracy and column selection runtime of the naive random method, 2-Phase algorithm
as well as PCA (that serves as an upper bound on performance) to the $\textsc{Distgreedy}$ algorithm.
The results are presented in Table~\ref{table}, which shows that $\textsc{Distgreedy}$ and 2-Phase both perform significantly better than random sampling and come relatively close to the PCA upper bound in terms of accuracy. However, we also find that $\textsc{Distgreedy}$ can be orders of magnitude faster than the 2-Phase algorithm. This is in large part because the 2-Phase algorithm suffers from the bottleneck of computing a top-$k$ SVD. We note that an approximate SVD method could be used instead; however, this was outside the scope of this preliminary empirical investigation.
In conclusion, we have demonstrated that \textsc{Distgreedy}\ is able to scale to larger sized datasets while still selecting effective features.
\appendix
\section{Appendix}
\subsection{Proof of Lemma~\ref{lem:opt-ns}}\label{app:opt-ns} \begin{proof} Let us fix some machine $i$. The main observation is that running greedy with $T_i$ is the same as running it with $T_i \cup OPT_i^{NS}$ (because by definition, the added elements are not chosen). Applying Theorem~\ref{thm:greedy-main}\footnote{To be precise, Theorem~\ref{thm:greedy-main} is presented as comparing against the optimum set of $k$ columns. However, an identical argument (simply stop at the last line in the proof of Theorem~\ref{thm:greedy-main}) shows the same bounds for any (potentially non-optimal) set of $k$ columns. This is the version we use here.} with $B = T_i \cup OPT_i^{NS}$ and $\varepsilon = \frac{1}{2}$, we have that for
$ k' \ge \frac{32 |OPT_i^{NS}|}{\sigma_{\min}(OPT_i^{NS})}$, it holds that $f_A(S_i) \geq \frac{f_A(OPT_i^{NS})}{2}$. Now since $OPT_i^{NS}$ is a subset of $OPT$, we have that $OPT_i^{NS}$ is of size at most $|OPT| = k$, and also $\sigma_{\min}(OPT_i^{NS}) \ge \sigma_{\min}(OPT)$. Thus the above bound certainly holds whenever $k' \ge \frac{32 k}{\sigma_{\min}(OPT)}$. \end{proof}
\subsection{Proof of Lemma~\ref{lem:additivity}} \label{app:additivity}
\begin{proof}[Proof of Lemma~\ref{lem:additivity}] As before, we first prove the inequality for a single column $u$ instead of $A$; adding over the columns then gives the result. Suppose $OPT = \{v_1, \dots, v_k\}$, and let us abuse notation slightly and use $I, J$ to also denote the subsets of indices they correspond to. Now, by the definition of $f$, there exists an $x = \sum_i \alpha_i v_i$ such that $\norm{x}=1$ and $\iprod{x, u}^2 = f_u(OPT)$.
Let us write $x = x_I + x_J$, where $x_I = \sum_{i \in I} \alpha_i v_i$. Then, \begin{align*} \iprod{x, u}^2 &= (\iprod{x_I, u} + \iprod{x_J, u})^2 \le 2(\iprod{x_I, u}^2 + \iprod{x_J, u}^2 ) \\ &\le 2(\norm{x_I}^2 f_u(I) + \norm{x_J}^2 f_u(J) ) \\ &\le 2(\norm{x_I}^2 + \norm{x_J}^2) (f_u(I) + f_u(J)). \end{align*} Now, we have \[ \norm{x_I}^2 \le \sigma_{\max}(I)(\sum_{i \in I} \alpha_i^2),\] from the definition of $\sigma_{\max}$, and we clearly have $\sigma_{\max}(I)\le \sigma_{\max}(OPT)$, as $I$ is a subset. Using the same argument with $J$, we have \[ \norm{x_I}^2 + \norm{x_J}^2 \le \sigma_{\max}(OPT) (\sum_i \alpha_i^2). \] Now, since $\norm{x}=1$, the definition of $\sigma_{\min}$ gives us that $\sum_i \alpha_i^2 \le 1/\sigma_{\min}(OPT)$, thus completing the proof. \end{proof}
\subsection{Tight example for the bound in Theorem~\ref{thm:greedy-main}} \label{app:tight-ex}
We show an example in which we have a collection of (nearly unit) vectors such that: \enum{ \item Two of them can exactly represent a target vector $u$ (i.e., $k=2$). \item The $\sigma_{\min}$ of the matrix with these two vectors as columns is $\sim \theta^2$, for some parameter $\theta <1$. \item The greedy algorithm, to achieve an error $\le \epsilon$ in the squared-length norm, requires $\Omega\big(\frac{1}{\theta^2 \epsilon}\big)$ steps. }
The example also shows that using the greedy algorithm, we cannot expect to obtain a multiplicative guarantee on the {\em error}. In the example, the optimal error is zero, but as long as the full set of vectors is not picked, the error of the algorithm will be non-zero.
\paragraph{The construction.} Suppose $e_0, e_1, \dots, e_n$ are orthogonal vectors. The vectors in our collection are the following: $e_1$, $\theta e_0 +e_1$, and $2\theta e_0 + e_j$, for $j \ge 2$. Thus we have $n+1$ vectors. The target vector $u$ is $e_0$. Clearly we can write $e_0$ as a linear combination of the first two vectors in our collection.
Let us now see what the greedy algorithm does. In the first step, it picks the vector that has maximum squared-inner product with $e_0$. This will be $2\theta e_0 + e_2$ (breaking ties arbitrarily). We claim inductively that the algorithm never picks the first two vectors of our collection. This is clear because $e_1, e_2, \dots, e_n$ are all orthogonal, and the first two vectors have a strictly smaller component along $e_0$, which is what matters for the greedy choice (it is an easy calculation to make this argument formal).
Thus after $t$ iterations, we will have picked $2\theta e_0 + e_2, 2\theta e_0 + e_3, \dots, 2\theta e_0 + e_{t+1}$. Let us call them $v_1, v_2, \dots v_t$ resp. Now, what is the unit vector in the span of these vectors that has the largest squared dot-product with $e_0$?
It is a simple calculation to find the best linear combination of the $v_i$ -- all the coefficients need to be equal. Thus the best unit vector is a normalized version of $(1/t) (v_1 + \dots v_t)$, which is \[ v = \frac{ 2\theta e_0 + \frac{1}{t}(e_2 + e_3 +\dots e_{t+1})}{\sqrt{\frac{1}{t} + 4\theta^2}}. \]
For this $v$, to have $\iprod{u, v}^2 \ge 1-\epsilon$, we must have \[\frac{4\theta^2}{\frac{1}{t} + 4\theta^2} \ge 1- \epsilon, \] which rearranges to $t \ge \frac{1-\epsilon}{4\theta^2\epsilon}$; in particular, for $\epsilon \le 1/2$, greedy needs $t \ge \frac{1}{8\theta^2\epsilon}$ steps.
\subsection{Proof of Theorem~\ref{thm-lazier-than-lazy:main}} \label{app:thm-lazier-than-lazy} The key ingredient in our argument is that in every iteration, we obtain large marginal gain in expectation. This is formally stated in the following lemma.
\begin{lemma} \label{lem:large-gain-lazier-than-lazy}
Let $S, T$ be two sets of columns from $B$, with $f_A(S) \geq f_A(T)$. Let $|S| \leq k$, and let $R$ be a size $\frac{n_B \log \frac{1}{\delta}}{k}$ subset drawn uniformly at random from the columns of $B \setminus T$. Then the expected gain in an iteration of $\textsc{Lazier-than-lazy Greedy}$ is at least $(1 - \delta) \sigma_{\min}(S) \frac{(f_A(S) - f_A(T))^2}{4kf_A(S)}$. \end{lemma}
\begin{proof}[Proof of Lemma~\ref{lem:large-gain-lazier-than-lazy}] The first part of the proof is nearly identical to the proof of Lemma 2 in \cite{Mirzasoleiman}. We repeat the details here for the sake of completeness.
Intuitively, we would like the random sample $R$ to include vectors we have not seen in $S \setminus T$. In order to lower bound the probability that $R \cap (S \setminus T) \neq \emptyset$, we first upper bound the probability that $R \cap (S \setminus T) = \emptyset$. \begin{align} \mathbb{P}\{R \cap (S \setminus T) = \emptyset\}
&= \Big(1 - \frac{|S \setminus T|}{n_B - |T|}\Big)^{\frac{n_B \log(\frac{1}{\delta})}{k}} \label{Stochastic greedy lemma eq 1}
\\ &\leq e^{-\frac{n_B \log(\frac{1}{\delta})}{k} \frac{|S \setminus T|}{n_B - |T|}}
\\ &\leq e^{-\frac{\log(\frac{1}{\delta})}{k} |S \setminus T|} \end{align}
where we have used the fact that $1 - x \leq e^{-x}$ for $x \in \mathbb{R}$. Recalling that $\frac{|S\setminus T|}{k} \in [0, 1]$, we have: \begin{align} \mathbb{P}\{R \cap (S \setminus T) \neq \emptyset\}
&\geq 1 - e^{-\frac{\log(\frac{1}{\delta})}{k} |S \setminus T|}
\\ &\geq (1 - e^{-\log(\frac{1}{\delta})}) \frac{|S \setminus T|}{k}
\\ &= (1 - \delta) \frac{|S \setminus T|}{k} \label{Stochastic greedy lemma eq 2} \end{align}
The next part of the proof relies on techniques developed in the proof of Theorem 1 in \cite{Mirzasoleiman}. For notational convenience, define $\Delta(v|T) = f_A(T \cup v) - f_A(T)$ to be the marginal gain of adding $v$ to $T$. Using the above calculations, we may lower bound the expected gain $\mathbb{E}[\max_{v \in R} \Delta(v | T)]$ in an iteration of $\textsc{Lazier-than-lazy Greedy}$ as follows: \begin{align}
& \mathbb{E} \big[\max_{v \in R} \Delta(v | T)\big]
\\ &\geq (1 - \delta) \frac{|S \setminus T|}{k} \cdot \mathbb{E}[\max_{v \in R} \Delta(v | T) \; \Big| \; R \cap (S \setminus T) \neq \emptyset] \label{Stochastic greedy lemma eq 3}
\\ &\geq (1 - \delta) \frac{|S \setminus T|}{k} \cdot \mathbb{E}[\max_{v \in R \cap (S \setminus T)} \Delta(v | T) \; \Big| \; R \cap (S \setminus T) \neq \emptyset] \label{Stochastic greedy lemma eq 4}
\\ &\geq (1 - \delta) \frac{|S \setminus T|}{k} \cdot \frac{\sum_{v \in S \setminus T} \Delta(v | T)}{|S \setminus T|} \label{Stochastic greedy lemma eq 5} \\ &\geq (1 - \delta) \sigma_{min}(S) \frac{(f_A(S)- f_A(T))^2}{4kf_A(S)} \label{Stochastic greedy lemma eq 6} \end{align}
Equation \eqref{Stochastic greedy lemma eq 3} is due to conditioning on the event that $R \cap (S \setminus T) \neq \emptyset$, and lower bounding the probability that this happens with Equation \eqref{Stochastic greedy lemma eq 2}. Equation \eqref{Stochastic greedy lemma eq 5} is due to the fact each element of $S$ is equally likely to be in $R$, since $R$ is chosen uniformly at random. Equation \eqref{Stochastic greedy lemma eq 6} is a direct application of equation \eqref{eq:temp8} because $\sum_{v \in S \setminus T} \Delta(v | T) = \sum_{v \in S} \Delta(v | T)$. \end{proof}
We are now ready to prove Theorem~\ref{thm-lazier-than-lazy:main}. The proof technique is similar to that of Theorem~\ref{thm:greedy-main}. \begin{proof}[Proof of Theorem~\ref{thm-lazier-than-lazy:main}] For each $i \in \{0, \dots, r\}$, let $T_i$ denote the set of $i$ columns output by $\textsc{Lazier-than-lazy Greedy}(A, B, i)$. We adopt the same notation for $F$ as in the proof of Theorem \ref{thm:greedy-main}. We also use a similar construction of $\{\Delta_0, \dots, \Delta_N\}$ except that we stop when $\Delta_N \leq \frac{\varepsilon}{1 - \delta}F$.
We first demonstrate that it takes at most $\frac{8kF}{(1 - \delta) \sigma_{min}(OPT_k) \Delta_i}$ iterations to reduce the gap from $\Delta_i$ to $\frac{\Delta_i}{2} = \Delta_{i+1}$ in expectation. To prove this, we invoke Lemma \ref{lem:large-gain-lazier-than-lazy} on each $T_i$ to see that the expected gap filled by $\frac{8kF}{(1 - \delta) \sigma_{min}(OPT_k) \Delta_i}$ iterations is lower bounded by $\frac{8kF}{(1 - \delta) \sigma_{min}(OPT_k) \Delta_i} \cdot (1 - \delta) \frac{\sigma_{min}(OPT_k)(\frac{\Delta_i}{2})^2}{4kF} = \frac{\Delta_i}{2} = \Delta_{i+1}$. Thus the total number of iterations $r$ required to decrease the gap to at most $\Delta_N \leq \frac{\varepsilon}{1 - \delta} F$ in expectation is: \begin{align} r &\leq \sum_{i=0}^{N-1} \frac{8kF}{(1 - \delta)\sigma_{min}(OPT_k) \Delta_i} \\ &= \frac{8kF}{(1 - \delta)\sigma_{min}(OPT_k)} \sum_{i=0}^{N-1} \frac{2^{i-N+1}}{\Delta_{N-1}} \label{Stochastic greedy theorem eq 2} \\ &< \frac{16k}{\varepsilon \sigma_{min}(OPT_k)} \label{Stochastic greedy theorem eq 3} \end{align} where equation \eqref{Stochastic greedy theorem eq 3} is because $\Delta_{N-1} > \frac{\varepsilon}{1 - \delta} F$ and $\sum_{i=0}^{N-1} 2^{i-N+1} < 2$. Therefore, after $r \geq \frac{16k}{\varepsilon \sigma_{min}(OPT_k)}$ iterations, we have that: \begin{align} f_A(OPT_k) - \mathbb{E}[f_A(T_r)] &\leq \frac{\varepsilon}{1 - \delta} f_A(OPT_k) \\ &\leq (\varepsilon + \delta)f_A(OPT_k) \end{align} because $\varepsilon + \delta \leq 1$. Rearranging proves the theorem. \end{proof}
\subsection{Random Projections to reduce the number of rows} \label{app:random-projections} Suppose we have a set of vectors $A_1, A_2, \dots, A_n$ in $\mathbb{R}^m$, and let $\varepsilon, \delta$ be given accuracy parameters. For an integer $1 \le k \le n$, we say that a vector $x$ is in the $k$-span of $A_1, \dots, A_n$ if we can write $x = \sum_j \alpha_j A_j$, with at most $k$ of the $\alpha_j$ non-zero. Our main result of this section is the following.
\begin{thm} \label{thm:random-projections} Let $1\le k \le n$ be given, and set $d = O^*(\frac{k \log (\frac{n}{\delta \varepsilon})}{\varepsilon^2})$, where we use $O^*(\cdot)$ to omit $\log \log$ terms. Let $G \in \mathbb{R}^{d \times m}$ be a matrix with entries drawn independently from $\mathcal{N}(0,1)$. Then with probability at least $1-\delta$, for {\em all} vectors $x$ that are in the $k$-span of $A_1, A_2, \dots, A_n$, we have \[ (1-\varepsilon)\norm{x}_2 \le \frac{1}{\sqrt{d}} \norm{Gx}_2 \le (1+\varepsilon) \norm{x}_2. \] \end{thm}
The proof is a standard $\varepsilon$-net argument that is similar to the proof of Lemma 10 in \cite{Sarlos}. Before giving the proof, we first state the celebrated lemma of Johnson and Lindenstrauss. \begin{thm}\label{thm:jl-classic}\cite{Johnson} Let $x \in \mathbb{R}^m$, and let $G \in \mathbb{R}^{d\times m}$ be a matrix with entries drawn independently from $\mathcal{N}(0,1)$. Then for any $\varepsilon > 0$, we have \[ \Pr\left[ (1-\varepsilon)\norm{x}_2 \le \frac{1}{\sqrt{d}} \norm{Gx}_2 \le (1+\varepsilon) \norm{x}_2 \right] \ge 1- e^{-\varepsilon^2d/4}. \] \end{thm}
Now we prove Theorem~\ref{thm:random-projections}. \begin{proof}[Proof of Theorem~\ref{thm:random-projections}] The proof is a simple `net' argument for unit vectors in the $k$-span of $A_1, \dots, A_n$. The details are as follows.
First note that since the statement is scaling invariant, it suffices to prove it for {\em unit} vectors $x$ in the $k$-span. Next, note that it suffices to prove it for vectors in a $\gamma$-net for the unit vectors in the $k$-span, for a small enough $\gamma$. To recall, a $\gamma$-net for a set of vectors $S$ is a finite subset $\mathcal{N}_\gamma$ with the property that for all $x \in S$, there exists a $u \in \mathcal{N}_\gamma$ such that $\norm{x - u}_2 \le \gamma$.
Suppose we fix a $\gamma$-net $\mathcal{N}_\gamma$ for the set of unit vectors in the $k$-span of $A_1, \dots, A_n$, and suppose we have shown that for all $u \in \mathcal{N}_\gamma$, we have \begin{equation}
(1-\varepsilon/2)\norm{u}_2 \le \frac{1}{\sqrt{d}} \norm{Gu}_2 \le (1+\varepsilon/2) \norm{u}_2.\label{eq:eps-net-needed} \end{equation} Now consider any unit vector $x$ in the $k$-span. By the definition of the net, we can write $x = u + w$, where $u \in \mathcal{N}_\gamma$ and $\norm{w}_2 \le \gamma$. Thus we have \begin{equation}
\norm{Gu}_2 - \gamma \norm{G}_{2} \le \norm{Gx}_2 \le \norm{Gu}_2 + \gamma \norm{G}_2, \label{eq:eps-net-bound} \end{equation} where $\norm{G}_2$ is the spectral norm of $G$. From now on, let us set $\gamma = \frac{\varepsilon}{4\sqrt{d} \log(4/\delta)}$. Now, whenever $\norm{G}_2 < 2\sqrt{d}\log (4/\delta)$, equation~\eqref{eq:eps-net-bound} implies \[ \norm{Gu}_2 - \frac{\varepsilon}{2} \le \norm{Gx}_2 \le \norm{Gu}_2 + \frac{\varepsilon}{2}.\] Dividing by $\sqrt{d}$ and combining with \eqref{eq:eps-net-needed} (using $d \ge 1$ and $\norm{x}_2 = 1$) then gives $(1-\varepsilon)\norm{x}_2 \le \frac{1}{\sqrt{d}} \norm{Gx}_2 \le (1+\varepsilon) \norm{x}_2$, as required.
The proof follows from showing the following two statements: (a) there exists a net $\mathcal{N}_\gamma$ (for the above choice of $\gamma$) such that Eq.~\eqref{eq:eps-net-needed} holds for all $u \in \mathcal{N}_\gamma$ with probability $\ge 1- \delta/2$, and (b) we have $\norm{G}_2 < 2\sqrt{d}\log(4/\delta)$ w.p. at least $1-\delta/2$.
Once we have (a) and (b), the discussion above completes the proof. We also note that (b) follows from the concentration inequalities on the largest singular value of random matrices \cite{Rudelson}. Thus it only remains to prove (a).
For this, we use the well known result that for every $k$-dimensional subspace, the set of unit vectors in the subspace has a $\gamma$-net (in $\ell_2$ norm, as above) of size at most $(4/\gamma)^k$ \cite{Vershynin}. In our setting, there are $\binom{n}{k}$ such subspaces to consider (namely, the span of every possible $k$-subset of $A_1, \dots, A_n$). Thus there exists a $\gamma$-net for the unit vectors in the $k$-span, which has a size at most \[ \binom{n}{k} \cdot \left( \frac{4}{\gamma} \right)^k < \left( \frac{4n}{\gamma} \right)^k, \] where we used the crude bound $\binom{n}{k} < n^k$.
Now Theorem~\ref{thm:jl-classic} implies, for any (given) $u \in \mathcal{N}_\gamma$ (replacing $\varepsilon$ by $\varepsilon/2$ and noting $\norm{u}_2 = 1$), that \[ \Pr \left[ 1- \frac{\varepsilon}{2} \le \frac{1}{\sqrt{d}} \norm{Gu}_2 \le 1+\frac{\varepsilon}{2} \right] \ge 1- e^{-\varepsilon^2 d/16}.\]
Thus by a union bound, the above holds for all $u \in \mathcal{N}_\gamma$
with probability at least $1- |\mathcal{N}_\gamma|e^{-\varepsilon^2 d/16}$. For our choice of $d$, it is easy to verify that this quantity is at least $1-\delta/2$. This completes the proof of the theorem. \end{proof}
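For concreteness, the following short Python sketch illustrates the statement of Theorem~\ref{thm:random-projections} numerically. The dimensions chosen below are arbitrary illustrative values, not the constants prescribed by the theorem, and sampling random vectors of the $k$-span only probes the event that the theorem controls uniformly.
\begin{verbatim}
# Illustrative sketch of the random projection guarantee.  The sizes m, n, k, d
# are arbitrary choices; the theorem prescribes d = O*(k log(n/(delta*eps))/eps^2).
import numpy as np

rng = np.random.default_rng(0)
m, n, k, d = 500, 40, 5, 400

A = rng.standard_normal((m, n))      # the vectors A_1, ..., A_n as columns
G = rng.standard_normal((d, m))      # Gaussian projection matrix

ratios = []
for _ in range(2000):
    support = rng.choice(n, size=k, replace=False)
    alpha = rng.standard_normal(k)
    x = A[:, support] @ alpha        # a random vector in the k-span
    ratios.append(np.linalg.norm(G @ x) / (np.sqrt(d) * np.linalg.norm(x)))

print("min and max of ||Gx||_2 / (sqrt(d)||x||_2):", min(ratios), max(ratios))
\end{verbatim}
The printed ratios concentrate around $1$, in line with the $(1\pm\varepsilon)$ guarantee of the theorem.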
\subsection{Efficient Calculation of Marginal Gain}\label{sec:app:marginal}
A naive implementation of calculating the marginal gain $f_A(S \cup v) - f_A(S)$ takes $O(mk^2 + kmn_A)$ floating-point operations (FLOPs) where $|S| = O(k)$. The first term is from performing the Gram-Schmidt orthonormalization of $S$, and the second term is from calculating the projection of each of $A$'s columns onto span$(S)$.
However, it is possible to significantly reduce the marginal gain calculations to $O(mn_A)$ FLOPs in $\textsc{Greedy}$ by introducing $k$ updates, each of which takes $O(mn_A + mn_B)$ FLOPs. This idea was originally proven correct by \cite{Farahat2}, but we discuss it here for completeness.
The simple yet critical observation is that $\textsc{Greedy}$ permanently keeps a column $v$ once it selects it. So when we select $v$, we immediately update all columns of $A$ and $B$ by removing their projections onto $v$. This allows us to calculate marginal gains in future iterations without having to consider $v$ again.
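For illustration, the update can be sketched as follows. The code below is only a schematic rendering of the idea, with $f_A(S)$ read as the total squared projection of $A$'s columns onto span$(S)$; the function and variable names are illustrative. With the residuals maintained, the marginal gain of a single candidate column costs $O(mn_A)$, and the update after each selection costs $O(m(n_A + n_B))$.
\begin{verbatim}
# Sketch of the residual-update trick: once a column is selected, its
# (normalized) residual direction is removed from every column of A and B.
import numpy as np

def greedy_residual(A, B, k, eps=1e-12):
    A_res = np.array(A, dtype=float, copy=True)
    B_res = np.array(B, dtype=float, copy=True)
    selected = []
    for _ in range(k):
        col_norms = (B_res ** 2).sum(axis=0)
        # marginal gain of column j of B: ||A_res^T b_j||^2 / ||b_j||^2
        gains = ((A_res.T @ B_res) ** 2).sum(axis=0) / np.maximum(col_norms, eps)
        gains[col_norms < eps] = 0.0      # ignore columns already in the span
        j = int(np.argmax(gains))
        selected.append(j)
        v = B_res[:, j] / np.sqrt(max(col_norms[j], eps))
        A_res -= np.outer(v, v @ A_res)   # remove the new direction from A ...
        B_res -= np.outer(v, v @ B_res)   # ... and from the remaining columns of B
    return selected
\end{verbatim}
Since the selected directions are orthonormalized on the fly, the gains computed this way agree (up to round-off) with recomputing $f_A(S \cup v) - f_A(S)$ from scratch in each round.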
\subsection{Proof of Theorem~\ref{thm:core-set-2}} \label{app:core-set-2}
\begin{proof}
Let $C^t$ be the union of the first $t$ solutions: $\cup_{j=1}^t S^{j}$. The main observation is that to compute $f^{t+1}(V)$, we can think of first subtracting off the components of the columns of $A$ along $C^t$ to obtain $A'$, and simply computing $f_{A'}(V)$. Now, a calculation identical to the one in Eq.~\eqref{eq:dotprod-lb2} followed by the ones in Eq.~\eqref{eq:start}-\eqref{eq:temp8} (to go from one vector to the matrix) implies that $f_{A'}(OPT) \ge \big( \sqrt{f_{A}(OPT)} - \sqrt{f_{A}(C^t)}\big)^2$. Now we can complete the proof as in the proof of Theorem~\ref{thm:greedy-main}. \iffalse For any $t \geq 1$, we use $OPT$ as the benchmark, and note that $f^t(OPT) = f^t(OPT \cup A^{t-1}) \geq f(OPT)$. On the other hand, $f(\emptyset)$ is equal to $f(A^{t-1})$. So there is a gap of at least $f(OPT) - f(A^{t-1})$ to be exploited. Using Theorem~\ref{thm:core-set-1}, we know that in epoch $t$ we find a set $S^t$ with expected $\mathbb{E}[f^t(S^t)]$ at least $\Omega(\sigma_{min}(OPT) (f(OPT) - f(A^{t-1})))$. This can be rewritten as:
$$ \mathbb{E}[f(A^t)] \geq (1-\sigma)\mathbb{E}[f(A^{t-1})] + \sigma f(OPT) $$
where $\sigma$ is $\Omega(\sigma_{min}(OPT))$. By monotonicity of $f$ and induction, we can prove that $\mathbb{E}[f(A^t)]$ is at least $[1 - (1 - \Omega(\sigma_{min}(OPT)))^r] f(OPT)$ which completes the proof. \fi \end{proof}
\subsection{Necessity of Random Partitioning} \label{app:rand-part} We will now make the intuition formal. We consider the $\textsf{GCSS}(A, B, k)$ problem, in which we wish to cover the columns of $A$ using columns of $B$. Our lower bound holds not just for the greedy algorithm, but for any {\em local} algorithm -- i.e., any algorithm that is not aware of the entire matrix $B$, works only with the set of columns it is given, and outputs a poly$(k)$-sized subset of them.
\begin{thm} \label{thm:rand-part} For any perfect square $k \geq 1$ and constants $\beta, c >0$, there exist two matrices $A$ and $B$, and a partitioning of $B$ into $(B_1, B_2, \dots, B_\ell)$ with the following property. Consider any local, possibly randomized, algorithm that takes input $B_i$ and outputs $O(k^{\beta})$ columns $S_i$. Now pick $ck$ elements $S^*$ from $\cup_i S_i$ to maximize $f_A(S^*)$. Then the expected value of $f_A (S^*)$ (over the randomization of the algorithm) is at most $O\left( \frac{c\beta \log k}{\sqrt{k}} \right) f_A(OPT_k)$. \end{thm}
\begin{proof} Let $k = a^2$, for some integer $a$. We consider matrices with $a^2 + a^3$ rows. Our target matrix $A$ will be a single vector containing all $1$'s. The coordinates (rows) are divided into sets as follows: $X = \{1, 2, \cdots, a^2\}$, and for $1 \leq i \leq a^2$, $Y_i$ is the set $\{a^2 + (i-1) \cdot a + 1, a^2 + (i-1) \cdot a + 2, \cdots, a^2 + i \cdot a\}$. Thus we have $a^2 + 1$ blocks, $X$ of size $a^2$ and the rest of size $a$.
Now, let us describe the matrix $B_i$ that is sent to machine $i$. It consists of the indicator vectors of all possible $a$-sized subsets of $X \cup Y_i$.\footnote{As described, it has an exponential in $a$ number of columns, but there are ways to deal with this.} Thus we have $\ell = a^2$ machines, each of which gets $B_i$ as above.
Let us consider what a local algorithm would output given $B_i$. Since the instance is extremely symmetric, it will simply pick $O(k^{\beta})$ sets, such that all the elements of $X \cup Y_i$ are {\em covered}, i.e., the vector $A$ restricted to these coordinates is spanned by the vectors picked. But the key is that the algorithm cannot distinguish $X$ from $Y_i$! Thus we have that any set in the cover has at most $O(\beta \log a)$ overlap with the elements of $Y_i$.\footnote{To formalize this, we need to use Yao's minmax lemma and consider the uniform distribution.}
Now, we have sets $S_i$, all of which have $O(\beta \log a)$ overlap with the corresponding $Y_i$. It is now easy to see that if we select at most $ck$ sets from $\cup_i S_i$, we can cover at most $ck \cdot O(\beta \log a) = O(c \beta a^2 \log a)$ of the $a^3$ coordinates in $\cup_i Y_i$. The optimal way to span $A$ is to pick precisely the indicator vectors for $Y_i$, which will cover a $(1-(1/a))$ fraction of the mass of $A$. Noting that $k = a^2$, we have the desired result. \iffalse We construct the set of columns $A_i$ based on $X$ and $Y_i$ as follows. We put $a^2 + a^3$ rows in $A$ one for each number in $X \cup Y_1 \cup Y_2 \cdots Y_{a^2}$. We set the number of machines $m$ to be equal to $k = a^2$. For any size $a$ subset $S \subset X \cup Y_i$, we put an indicator column $1_S$ in $A_i$ with $a$ ones in entries that belong to $S$, and zeros elsewhere. Finally we set $B$ to be just one column with all its entries equal to one. The symmetry we observe in each machine makes any algorithm unable to distinguish between entries in $X$ and entries in $Y_i$. Therefore in expectation every selected column will have at most $log(a)$ non-zero entries in $Y_i$. Therefore we can say with high probability each column in $S_i$ has at most $log(a)$ non-zero entries in $Y_i$ since $S_i$ has size at most $k^{\beta}$ (polynomial in $k$). So in the pool of selected columns $\cup_{i=1}^m S_i$ with high probability each column has at most $log(a) \leq log(k)$ non-zero entries in $\cup_{i=1}^m Y_i$. So any subset of $c \cdots k$ columns among the selected columns will not cover more than $c \cdot k log(k)$ entries of the $a^3 = k\sqrt{k}$ entries of $\cup_{i=1}^m Y_i$. The proof completes by observing that $1-\frac{1}{k}$ fraction of entries of $B$ are in $\cup_{i=1}^m Y_i$. \fi \end{proof}
\end{document}
\begin{document}
\title{A slicing obstruction from the $\frac{10}{8}$ theorem}
\begin{abstract} From Furuta's $\frac{10}{8}$ theorem, we derive a smooth slicing obstruction for knots in $S^3$ using a spin $4$-manifold whose boundary is $0$-surgery on a knot. We show that this obstruction is able to detect torsion elements in the smooth concordance group and find topologically slice knots which are not smoothly slice.
\end{abstract} \maketitle
\section{Introduction}\label{section:1}
A knot $K$ in $S^3$ is \emph{smoothly slice} if it bounds a disk that is smoothly embedded in the four-ball. Although detecting whether or not a knot is slice is not typically an easy task to do, there are various known ways to obstruct sliceness. For instance, the Alexander polynomial of a slice knot factors, up to a unit, as $f(t)f(t^{-1})$ and the averaged signature function of the knot vanishes (see, for instance, \cite[Chapter~8]{Lickorish1997}). Also in recent years, modern techniques in low-dimensional topology have been applied to produce obstructions. Examples include the $\tau$-invariant \cite{Ozsvath2003, Rasmussen2003}, $\epsilon$ \cite{Hom2014} and $\Upsilon$ \cite{Ozsvath2014} invariants, all coming from Heegaard Floer homology \cite{Ozsvath2004a, Ozsvath2013}, and the $s$-invariant \cite{Rasmussen2010} from Khovanov homology \cite{Khovanov2000}. In this paper we introduce a new obstruction using techniques in handlebody theory. We call a $4$-manifold a \emph{2-handlebody} if it may be obtained by attaching $2$-handles to $D^4$. The main ingredient is the following: {\thm\label{thm:slicing} Let $K \subset S^3$ be a smoothly slice knot and $X$ be a spin 2-handlebody with $\partial X = S^3_0(K)$. Then either $b_2(X)=1$ or \[
4b_2(X) \ge 5|\sigma(X)|+12. \]}
A key tool used in the proof of Theorem~\ref{thm:slicing} is Furuta's $10/8$ theorem \cite{Furuta2001}. Our theorem can be regarded as an analogous version of his theorem for manifolds with certain types of boundary. Similar ideas to this paper have been used by Bohr and Lee in \cite{Lee2001}, using the branched double cover of a knot.
Given a knot $K$, we construct a spin 4-manifold $X$ such that $\partial X = S^3_0(K)$. If we think of $0$-surgery on $K$ as the boundary of the manifold given by a single 2-handle attached to $\partial D^4$, the spin structures on $S^3_0(K)$ are in one-to-one correspondence with characteristic sublinks in this Kirby diagram. (See Section~\ref{section:2} for the relevant definitions.) The 0-framed knot $K$ represents a spin structure which does not extend over this $4$-manifold. We may alter the $4$-manifold, without changing the boundary $3$-manifold, by a sequence of blow ups, blow downs and handle slides, until the characteristic link corresponding to this spin structure is the empty sublink. The manifold we obtain is a spin 4-manifold. Now if $b_2$ and $\sigma$ of the resulting four-manifold violate the inequality of Theorem~\ref{thm:slicing}, $K$ is not smoothly slice.
The reason we are interested in the obstruction obtained from Theorem~\ref{thm:slicing} is twofold. First, we show in Section~\ref{sec:4} that our obstruction is able to detect torsion elements in the concordance group; in particular, the obstruction detects the non-sliceness of the figure eight knot. Second, we show that the obstruction is capable of detecting the smooth non-sliceness of topologically slice knots. We remind the reader that a topologically slice knot is a knot in $S^3$ which bounds a locally flat disk in $D^4$. All the algebraic concordance invariants (e.g. the signature function) vanish for a topologically slice knot.
\section{The Slicing Obstruction}\label{section:2} In this section we prove Theorem \ref{thm:slicing} and describe how to produce the spin manifolds used to give slicing obstructions. The argument uses Furuta's $10/8$ Theorem.
{\thm\cite[Theorem~1]{Furuta2001}
Let $W$ be a closed, spin, smooth 4-manifold with an indefinite intersection form. Then \[4b_2(W) \geq 5 |\sigma(W)| +8.\]}
Note that, by Donaldson's diagonalisation theorem \cite{Donaldson1987}, a closed, smooth, spin manifold $W$ can have a definite intersection form only if $b_2(W)=0$.
\begin{proof}[Proof of Theorem \ref{thm:slicing}] We start by noting that when $K$ is smoothly slice, $S^3_0(K)$ smoothly embeds in $S^4$. (See \cite{Gilmer-Livingston}, for example.) The embedding splits $S^4$ into two spin manifolds $U$ and $V$ with common boundary $S^3_0(K)$. Since $S^3_0(K)$ has the same integral homology as $S^1 \times S^2$, a straightforward argument using the Mayer-Vietoris sequence shows manifolds $U$ and $V$ will have the same homology as $S^2 \times D^2$ and $S^1 \times D^3$ respectively. In particular both spin structures on the three-manifold extend over $V$.
Now, as in \cite[Lemma~5.6]{Donald2015}, if $X$ is a spin 2-handlebody with $\partial X = \partial V$, let $W=X \cup_{S^3_0(K)} -V$. This will be spin and $\sigma(W) = \sigma(X)$ since $\sigma(V)=0$. In addition, we have $\chi(W)=\chi(X)=1+b_2(X)$. Since $H_1(W,X;\mathbb{Q}) \cong H_1(V,\partial V;\mathbb{Q})=0$ it follows from the exact sequence for the pair $(W,X)$ that $b_1(W)=b_3(W)=0$. Therefore $b_2(W) = b_2(X) -1$. The result follows by applying Furuta's theorem in the case $b_2(X) >1$.
\end{proof}
The rest of this section provides the background needed to apply the obstruction of Theorem~\ref{thm:slicing}. We refer the reader to \cite{Gompf1999} for a more detailed discussion on spin manifolds and characteristic links.
{\defn \label{spin} A manifold $X$ has a spin structure if its stable tangent bundle $TX\oplus \epsilon^k$, where $\epsilon^k$ denotes a trivial bundle, admits a trivialization over the 1-skeleton of $X$ which extends over the 2-skeleton. A spin structure is a homotopy class of such trivializations. } \\
It can be shown that the definition does not depend on $k$ for $k \ge 1$. An oriented manifold $X$ admits a spin structure if and only if its second Stiefel-Whitney class vanishes, that is $\omega_2(X)=0$. An oriented 3-manifold always admits a spin structure, since its tangent bundle is trivial. We remind the reader that any closed, connected, spin $3$-manifold $(Y, \mathfrak{s})$ is the spin boundary of a $4$-dimensional spin $2$-handlebody. A constructive proof is given in \cite{Kaplan1979}.
As described in Section~\ref{section:1} we are interested in $0$-surgery on knots. The resulting three-manifold is spin with two spin structures $\mathfrak{s}_0, \mathfrak{s}_1$. Note that one of the spin structures, $\mathfrak{s}_0$, extends to the 4-manifold obtained by attaching a $0$-framed $2$-handle to $D^4$ along the knot. There is another $2$-handlebody (not the one with one $2$-handle that $\mathfrak{s}_0$ extends over) that is also bounded by $S^3_0(K)$ and $\mathfrak{s}_1$ extends over it. We explain how to construct such a four-manifold in what follows. {\defn\label{charactersitic} Let $L=\{K_1, ..., K_m\}$ be a framed, oriented link in $S^3$. The linking number $lk(K_i, K_j)$ is defined as the linking number of the two components if $i \neq j$ and is the framing on $K_i$ if $i=j$. A characteristic link $L^{'}\subset L$ is a sublink such that for each $K_i$ in $L$, $lk(K_i, K_i)$ is congruent mod 2 to the total linking number $lk(K_i, L^{'})$.} \\
Note that the characteristic links are independent of the choice of orientation of $L$. A framed link is a Kirby diagram for a $2$-handlebody $X$ and the characteristic links are in one-to-one correspondence with spin structures on $\partial X$. The link components form a natural basis for $H_2(X)$ and the intersection form is given by the linking numbers $lk$. The empty link is characteristic if and only if this form is even and, since $2$-handlebodies are simply connected, this occurs if and only if $X$ is spin. A non-empty characteristic link corresponds to a spin structure on the boundary which does not extend. We can remove a characteristic link by modifying the Kirby diagram by handle-slide, blow up and blow down moves until it becomes the empty sublink. These do not change the boundary $3$-manifold, but the latter two change the $4$-manifold. This process produces a spin $4$-manifold where the given spin structure extends.
For convenience, we briefly recall how these moves change the framings in the link and the effect on a characteristic link. When a component $K_1$ with framing $n_1$ is slid over $K_2$ with framing $n_2$, the new component will be a band sum of $K_1$ and a parallel copy of $K_2$. It will have framing $n_1 + n_2 + 2 lk(K_1,K_2)$, where this linking number is computed using orientations on $K_1$ and $K_2$ induced by the band. The new component will represent the class of $K_1 + K_2$ in $H_2(X)$. Consequently, if $K_1$ and $K_2$ were part of a characteristic link before the slide, the new component will replace them in the new diagram. The most basic blow up move adds a split unknot with framing $\pm 1$. Each characteristic link will change simply by adding this extra component. A general blow up across $r$ parallel strands consists of first adding a split component and then sliding each of the $r$ strands over it. Therefore, when blowing up positively (respectively negatively), if the linking number of the blow up circle with a component of the Kirby diagram is $p$, the framing of that component changes by $p^2$ (respectively $-p^2$). If a blow up curve links a characteristic link non-trivially mod 2 then it does not add any components to the characteristic link. However, if the blow up curve circles $2k$ strands of a characteristic link, it will be added to the characteristic link. Example~\ref{eg:figureeight} (more specifically, Figure~\ref{fig8b} and Figure~\ref{fig8c}) illustrates this. A blow down is the reverse move. Blowing down a component of a characteristic link removes it.
Note that during the process of removing a characteristic link, we do not need to keep track of the whole Kirby diagram. Instead, we need only keep the information about the characteristic link and its framings, along with $b_2$ and $\sigma$. This is straightforward to do by counting the number of blow ups and blow downs with their signs. \subsection{Obtaining a spin 4-manifold bounded by $S^3_0(K)$}
The argument above suggests that Theorem~\ref{thm:slicing} can give slicing obstructions for a knot $K$ that can be ``efficiently'' unknotted by a sequence of blow-ups. If the characteristic link is an unknot, the framing can be transformed to $\pm 1$ by further blow ups (along meridians) and then we may blow down to get an empty characteristic link.
We finish this section by showing how Theorem~\ref{thm:slicing} can be used to prove that positive $(p, kp\pm1)$ torus knots are not smoothly slice for odd $p \ge 3$ \footnote{There are many ways to show that positive torus knots are not smoothly slice. Our goal in presenting this example is to show that our obstruction works well with \emph{generalized twisted torus knots}, which are, roughly speaking, torus knots where there are full-twists between adjacent strands. See Figure~\ref{fig:k6} for an example of a generalized twisted torus knot.} . Given a zero framed positive $(p, kp\pm1)$ torus knot, we first blow up $k$ times negatively around $p$ parallel strands. Each will introduce a negative full twist and, since $p$ is odd, the characteristic link will be a $-kp^2$ framed unknot. Blowing up $kp^2-1$ times positively along meridians and blowing down once negatively will give us a spin manifold $X$. This sequence used $k$ negative blow ups, $kp^2-1$ positive blow ups and one negative blow down so we see $b_2(X)= 1+k+kp^2-1-1= kp^2+k-1$ and $\sigma(X) = -k+kp^2-1+1=kp^2-k$. Now, $4b_2(X)-5|\sigma(X)|-12= -kp^2+9k-16<0$, and so such knots are not slice.
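The bookkeeping above is mechanical and can be checked by a short script. The following Python sketch (illustrative only) tracks $b_2$ and $\sigma$ through the moves just described and evaluates the quantity $4b_2(X)-5|\sigma(X)|-12$ appearing in Theorem \ref{thm:slicing}.
\begin{verbatim}
# Track b_2 and sigma through the blow ups/blow downs used for a positive
# (p, kp +/- 1) torus knot with p odd, then evaluate 4 b_2 - 5|sigma| - 12.
# A negative value (with b_2 > 1) means the slicing theorem obstructs sliceness.
def torus_knot_obstruction(p, k):
    b2, sigma = 1, 0                       # the 0-framed 2-handle along the knot
    b2 += k; sigma -= k                    # k negative blow ups on p strands
    b2 += k*p*p - 1; sigma += k*p*p - 1    # positive blow ups along meridians
    b2 -= 1; sigma += 1                    # blow down the -1-framed unknot
    assert (b2, sigma) == (k*p*p + k - 1, k*p*p - k)
    return 4*b2 - 5*abs(sigma) - 12        # equals -k*p*p + 9*k - 16

for p in (3, 5, 7):
    for k in (1, 2, 3):
        print(p, k, torus_knot_obstruction(p, k))
\end{verbatim}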
\section{Examples}\label{sec:4}
The obstruction from Theorem \ref{thm:slicing} is able to detect knots with order two in the smooth concordance group and can also be used to obstruct topologically slice knots from being smoothly slice. This section describes examples which illustrate each of these properties.
\subsection{Figure eight knot}
\begin{examp}\label{eg:figureeight} The knot $4_1$ is not slice. \end{examp}
This knot is shown in Figure \ref{fig8a}. Start with the manifold obtained by attaching a $0$-framed $2$-handle to $D^4$ along $4_1$. Blow up the manifold twice as indicated in Figure \ref{fig8b}. Sliding one of the two blow up curves over the other results in the diagram in Figure~\ref{fig8c}. The characteristic link is a split link whose components are a $0$-framed trefoil and a $-2$-framed unknot.
Figure \ref{fig8d} shows just the characteristic link. Blowing up negatively once more changes the characteristic link to a $2$-component unlink with framings $-2$ and $-9$ as in Figure \ref{fig8e}. This is inside a $4$-manifold with signature $-3$ and second Betti number $4$. Positively blowing up meridians nine times changes both framings in the characteristic link to $-1$ and blowing down each of them results in a spin manifold. Counting blow-up and blow-down moves, we see that the signature of this spin manifold is $+8$ and the second Betti number is $11$. Theorem \ref{thm:slicing} then applies.
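Explicitly, with $b_2(X)=11$ and $\sigma(X)=8$ we have \[ 4b_2(X) - 5|\sigma(X)| - 12 = 44 - 40 - 12 = -8 < 0, \] and $b_2(X) \neq 1$, so Theorem \ref{thm:slicing} shows that $4_1$ is not smoothly slice.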
\begin{figure}
\caption{A sequence of blow up and blow downs showing that $S^3_0(4_1)$ bounds a spin manifold with $b_2=11$ and $\sigma=8$. The characteristic link at each stage is specified by darker curves.}
\label{fig8}
\label{fig8a}
\label{fig8b}
\label{fig8c}
\label{fig8d}
\label{fig8e}
\end{figure}
This example shows that Theorem \ref{thm:slicing} may obstruct sliceness of $K$ but not of $K\#K$. The following result describes how the obstruction behaves with respect to connected sums. For any knot $K$, let $\mathfrak{s}_1$ denote the spin structure on $S^3_0(K)$ which does not extend over the $4$-manifold produced by attaching a $0$-framed $2$-handle to $D^4$ along $K$.
\begin{prop}\label{prop:sums}
Let $K_1, K_2$ be knots and $X_i$ be a smooth spin 2-handlebody with boundary $(S^3_0(K_i), \mathfrak{s}_1)$ for $i=1,2$.
There is a smooth spin 2-handlebody $X$ with $\partial X = (S^3_0(K_1\# K_2), \mathfrak{s}_1)$, $\sigma(X) = \sigma(X_1)+\sigma(X_2)$ and $b_2(X) = b_2(X_1) + b_2(X_2) + 1$.
\end{prop}
\begin{proof} Let $W$ be the 2-handle cobordism from $Y=S^3_0(K_1) \# S^3_0(K_2)$ to $S^3_0(K_1\#K_2)$ illustrated in Figure \ref{nicecobordism}. Let $X$ be the manifold constructed by attaching $W$ to $X_1 \natural X_2$ along $Y$.
\begin{figure}
\caption{ {$2$-handle cobordism $W:S^3_0(K_1) \# S^3_0(K_2) \to S^3_0(K_1\#K_2)$.}}
\label{nicecobordism}
\end{figure}
The characteristic link for the spin structure $\mathfrak{s}_1$ in $Y$ is the knot $K_1 \# K_2$ and, since the new 2-handle has linking zero with this component, there is a spin structure on $W$ which restricts to $\mathfrak{s}_1 \# \mathfrak{s}_1$ on $Y$ and $\mathfrak{s}_1$ on $S^3_0(K_1\#K_2)$. Consequently, $X$ extends the correct spin structure on its boundary.
It is easy to see that $\sigma(W) = 0$ and so $\sigma(X) = \sigma(X_1) + \sigma(X_2)$. Since $X_1, X_2$ and $X$ are all 2-handlebodies \[b_2(X) = \chi(X) -1 = \chi(X_1 \natural X_2) +\chi(W) -1 = 1+b_2(X_1) + b_2(X_2).\]
\end{proof}
\begin{rmk} The signature of any spin manifold with spin boundary $(S^3_0(K),\mathfrak{s}_1)$ is $8 \operatorname{Arf} K \mod 16$, where $\operatorname{Arf} K$ is the Arf invariant of the knot $K$. (See \cite{Saveliev2002}.) Note that after removing the characteristic link, to get to a spin manifold bounded by the $0$-surgery on $K$, the signature must be a multiple of $8$. \end{rmk}
\subsection{A topologically slice example}
Let $K$ be the knot shown in Figure \ref{fig:k6}. A straightforward calculation of the Alexander polynomial shows that $\Delta_K(t)=1$ and so $K$ is topologically slice. See~\cite[11.7B~Theorem]{Freedman1990}, \cite[Theorem~7]{Freedman1984}. See also~\cite[Appendix~A]{Garoufalidis2004} and \cite{Cha2014}.
\begin{figure}\label{fig:k6}
\end{figure}
\begin{examp} $K$ is not smoothly slice. \end{examp}
Add a $0$-framed $2$-handle to $\partial D^4$ along $K$ and then blow up three times around the sets of strands indicated in Figure \ref{fig:k6BU}. Blow up negatively across nine strands on the top and positively across five and seven strands on the bottom of the diagram. This gives a manifold with signature $1$ and second Betti number $4$. The characteristic link has one component, as shown in Figure \ref{fig:k6BU2}, with framing $-7$. An isotopy verifies that this knot is $4_1$.
\begin{figure}
\caption{ {$K$ can be simplified by blowing up along the blue curves with appropriate signs. Note that none of the blue curves will be part of the characteristic link.}}
\label{fig:k6BU}
\end{figure}
\begin{figure}
\caption{ {Characteristic link is a $-7$-framed figure-eight. }}
\label{fig:k6BU2}
\end{figure}
Following the procedure from Example \ref{eg:figureeight}, we may blow up negatively three times to produce a characteristic link which is a two-component unlink with framings $-2$ and $-16$ in a manifold with $\sigma=-2$ and $b_2=7$. Blow up meridional curves of this unlink until the framing coefficients are both $-1$, then blow down the resulting $-1$-framed unlink. This yields a spin manifold with signature $16$ and second Betti number $21$. Therefore, by Theorem~\ref{thm:slicing}, $K$ is non-slice.
Note that Figure \ref{fig:k6} presents $K$ as a generalized twisted torus knot. It is the closure of a braid formed by taking a $(9,8)$ torus knot and then adding negative full twists on seven strands, then on non-adjacent sets of three strands and finally a pair of negative clasps. The obstruction from Theorem \ref{thm:slicing} is generally easier to apply to knots like this because they can be unknotted efficiently by blowing up to remove full twists. For many twisted torus knots this provides a slicing obstruction which is often more easily computable than the signature function.
It would be interesting to find other examples where this obstruction applies. It may be able to obstruct smooth sliceness for Whitehead doubles. To apply Theorem~\ref{thm:slicing}, we need the sequence of blow-up moves to predominantly involve blow-ups of the same sign. However, at least for the standard diagrams of Whitehead doubles, it is not easy to see how to do this. Similarly, it should be possible to detect other torsion elements of the knot concordance group. Example~\ref{eg:figureeight} demonstrates this in principle but it would be interesting to obtain new examples of torsion elements.
\end{document}
\begin{document}
\title{Generalized perfect numbers}
\twoauthors{ Antal Bege }{ Sapientia--Hungarian University of Transilvania\\ Department of Mathematics and Informatics,\\ T\^argu Mure\c{s}, Romania }{[email protected] }{ Kinga Fogarasi } { Sapientia--Hungarian University of Transilvania\\ Department of Mathematics and Informatics,\\ T\^argu Mure\c{s}, Romania }{[email protected] }
\short{ A. Bege, K. Fogarasi }{ Generalized perfect numbers
}
\begin{abstract} Let $\sigma(n)$ denote the sum of positive divisors of the natural number $n$. A natural number is perfect if $\sigma(n) = 2n$. This concept was already generalized in form of superperfect numbers $\sigma^2(n) = \sigma (\sigma (n)) = 2n$ and hyperperfect numbers $\sigma(n) = \frac{k+1}{k} n + \frac{k-1}{k}.$ \\ In this paper some new ways of generalizing perfect numbers are investigated, numerical results are presented and some conjectures are established.
\end{abstract}
\section{Introduction}
For the natural number $n$ we denote the sum of positive divisors by \[ \sigma(n)=\sum\limits_{d\mid n} d. \]
\begin{definition} A positive integer $n$ is called a perfect number if it is equal to the sum of its proper divisors. Equivalently: \[ \sigma (n) = 2n. \] \end{definition}
\begin{example} The first few perfect numbers are: $6, 28, 496, 8128, \dots$ (Sloane's A000396 \cite{11}), since \begin{eqnarray} 6 &=& 1 + 2 + 3 \nonumber\\ 28 &=& 1 + 2 + 4 + 7 + 14\nonumber\\ 496 &=& 1 + 2 + 4 + 8 + 16 + 31 + 62 + 124 + 248\nonumber \end{eqnarray} Euclid discovered that the first four perfect numbers are generated by the formula $2^{n-1} (2^n -1)$. He also noticed that $2^n-1$ is a prime number in each of these instances, and in Proposition IX.36 of ``Elements'' he gave a proof that the formula yields an even perfect number whenever $2^n-1$ is prime. \\ Several wrong assumptions were made, based on the four known perfect numbers:
\begin{itemize}
\item[$\bullet$] Since the formula $2^{n-1} (2^n-1)$ gives the first four perfect numbers for $n = 2, 3, 5,$ and 7 respectively, the fifth perfect number would be obtained when $n = 11$. However $2^{11} - 1 = 23 \cdot 89$ is not prime, therefore this doesn't yield a perfect number.
\item[$\bullet$] The fifth perfect number would have five digits, since the first four have 1, 2, 3, and 4 digits respectively; in fact it has 8 digits.
\item[$\bullet$] The perfect numbers would end alternately in 6 and 8. The fifth perfect number does indeed end in a 6, but so does the sixth, so the alternation breaks down.
\end{itemize} \end{example}
In order for $2^n-1$ to be a prime, $n$ must itself be a prime.
\begin{definition} A \textbf{Mersenne prime} is a prime number of the form: \[ M_n = 2^{p_n} - 1 \] where $p_n$ must also be a prime number. \end{definition}
Perfect numbers are intimately connected with these primes, since there is a concrete one-to-one association between \emph{even} perfect numbers and Mersenne primes. The fact that Euclid's formula gives all possible even perfect numbers was proved by Euler two millennia after the formula was discovered. \\ Only 46 Mersenne primes are known by now (November, 2008 \cite{9}), which means there are 46 known even perfect numbers. There is a conjecture that there are infinitely many perfect numbers. The search for new ones is the goal of a distributed search program via the Internet, named GIMPS (Great Internet Mersenne Prime Search) in which hundreds of volunteers use their personal computers to perform pieces of the search. \\ It is not known if any \emph{odd} perfect numbers exist, although numbers up to $10^{300}$ (R. Brent, G. Cohen, H. J. J. te Riele \cite{brent1}) have been checked without success. There is also a distributed searching system for this issue of which the goal is to increase the lower bound beyond the limit above. Despite this lack of knowledge, various results have been obtained concerning the odd perfect numbers:
\begin{itemize} \item[$\bullet$] Any odd perfect number must be of the form $12 m + 1$ or $36m + 9$.
\item[$\bullet$] If $n$ is an odd perfect number, it has the following form: \[ n = q^\alpha p_1^{2e_1} \dots p_k^{2e_k}, \] where $q, p_1, \dots, p_k$ are distinct primes and $q \equiv \alpha \equiv 1 \pmod{4}$. (see L. E. Dickson \cite{dickson1})
\item[$\bullet$] In the above factorization, $k$ is at least 8, and if 3 does not divide $n$, then $k$ is at least 11.
\item[$\bullet$] The largest prime factor of odd perfect number $n$ is greater than $10^8$ (see T. Goto, Y. Ohno \cite{goto1}), the second largest prime factor is greater than $10^4$ (see D. Ianucci \cite{ianucci1}), and the third one is greater than $10^2$ (see D. Iannucci \cite{ianucci2}).
\item[$\bullet$] If any odd perfect numbers exist in the above form \[ n = q^\alpha p_1^{2e_1} \dots p_k^{2e_k}, \] they would have at least 75 prime factors in total, that is, $\alpha + 2 \sum\limits_{i=1}^k e_i \ge 75$ (see K. G. Hare \cite{4}). \end{itemize}
D. Suryanarayana introduced the notion of superperfect numbers in 1969 \cite{8}; here is the definition. \begin{definition} A positive integer $n$ is called a \textbf{superperfect number} if \[ \sigma (\sigma (n)) = 2n. \] \end{definition}
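\begin{example} The first few superperfect numbers are $2, 4, 16, 64, \dots$ For instance, \[ \sigma(\sigma(16)) = \sigma(31) = 32 = 2 \cdot 16. \] \end{example}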
Some properties concerning superperfect numbers:
\begin{itemize} \item[$\bullet$] Even superperfect numbers are $2^{p-1}$, where $2^p -1$ is a Mersenne prime.
\item[$\bullet$] If any odd superperfect numbers exist, they are square numbers (G. G. Dandapat \cite{dandapat1}) and either $n$ or $\sigma(n)$ is divisible by at least three distinct primes. (see H. J. Kanold \cite{5}) \end{itemize}
\section{Hyperperfect numbers}
Minoli and Bear \cite{minoli1} introduced the concept of $k$-hyperperfect number and they conjecture that there are $k$-hyperperfect numbers for every $k$.
\begin{definition}
A positive integer $n$ is called \textbf{$k$-hyperperfect number} if \[ n = 1 + k[\sigma (n) - n-1] \] rearranging gives: \[ \sigma (n) = \frac{k+1}{k} n + \frac{k-1}{k}. \] \end{definition}
\noindent We remark that a number is perfect iff it is 1-hyperperfect. In the paper of J. S. McCranie \cite{6} all hyperperfect numbers less than $10^{11}$ have been computed.
\begin{example} The table below shows some $k$-hyperperfect numbers for different $k$ values: \\
\begin{center}
\begin{tabular}{|c|l|}
\hline
$\textbf{k}$ & $\mathbf{k}$-\textbf{hyperperfect} number \\ \hline
1 & 6 ,28, 496, 8128, ... \\
2 & 21, 2133, 19521, 176661, ... \\
3 & 325, ... \\
4 & 1950625, 1220640625, ... \\
6 & 301, 16513, 60110701, ... \\
10 & 159841, ... \\
12 & 697, 2041, 1570153, 62722153, ... \\
\hline \end{tabular}
\end{center}
\end{example}
\noindent Some results concerning hyperperfect numbers: \begin{itemize} \item[$\bullet$] If $k > 1$ is an odd integer and $p = (3k+1)/2$ and $q = 3k + 4$ are prime numbers, then $p^2q$ is $k$-hyperperfect; J. S. McCranie \cite{6} conjectured in 2000 that all $k$-hyperperfect numbers for odd $k > 1$ are of this form, but the hypothesis has not been proved so far.
\item[$\bullet$] If $p$ and $q$ are distinct odd primes such that $k (p+q) = pq - 1$ for some integer $k$, then $n = pq$ is $k$-hyperperfect.
\item[$\bullet$] If $k > 0$ and $p = k+1$ is prime, then for all $i > 1$ such that $q = p^i - p +1$ is prime, $n = p^{i-1} q$ is $k$-hyperperfect (see H. J. J. te Riele \cite{teriele1}, J. C. M. Nash \cite{7}). \end{itemize}
We propose some other forms of generalization, different from $k$-hyperperfect numbers, and we also examine \textbf{super-hyperperfect numbers} (``super'' in the same sense as for superperfect numbers):
\begin{eqnarray} && \sigma (\sigma (n)) = \frac{k+1}{k} n + \frac{k-1}{k} \nonumber \\ && \sigma (n) = \frac{2k-1}{k} n + \frac{1}{k} \nonumber \\ && \sigma (\sigma (n)) = \frac{2k-1}{k} n + \frac{1}{k} \nonumber \\
&& \sigma (n) = \frac{3}{2} (n+1) \nonumber \\ && \sigma (\sigma (n)) = \frac{3}{2} (n+1) \nonumber \end{eqnarray}
\section{Numerical results}
To obtain numerical results for the above equalities we used the ANSI C programming language together with Maple and Octave. Small programs written in C were used to scan the numbers up to $10^7$, and for larger ranges we used the other two programs. In this section the small numerical results are presented only in the cases where solutions were found.
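A brute-force search of this kind fits in a few lines. The following Python sketch (an illustrative re-implementation, not the original C/Maple/Octave code, with an arbitrary search bound) recovers the small solutions listed in the tables below; replacing the tested equality by any of the others listed above reproduces the corresponding tables.
\begin{verbatim}
# Illustrative brute-force search; sigma(n) is computed by trial division,
# and the bound LIMIT is an arbitrary choice.
def sigma(n):
    total, d = 0, 1
    while d * d <= n:
        if n % d == 0:
            total += d
            if d != n // d:
                total += n // d
        d += 1
    return total

LIMIT = 50000
for n in range(2, LIMIT + 1):
    s = sigma(n)
    if 2 * sigma(s) == 3 * n + 1:      # sigma(sigma(n)) = (3/2) n + 1/2
        print("sigma(sigma(n)) = (3n+1)/2 :", n)
    if 2 * s == 3 * (n + 1):           # sigma(n) = (3/2) (n + 1)
        print("sigma(n) = 3(n+1)/2      :", n)
\end{verbatim}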
3.1. Super-hyperperfect numbers. The table below shows the results we have reached:
\begin{table}[htbp]
\centering
\begin{tabular}{|c|l|}
\hline
$\textbf{k}$ & \hspace{2cm} \textbf{n} \\ \hline
1 & $2, 2^2, 2^4, 2^6, 2^{12}, 2^{16}, 2^{18}$ \\
2 & $3^2, 3^6, 3^{12}$ \\
4 & $5^2$ \\
\hline \end{tabular}
\end{table}
3.2. $\sigma(n) = \frac{2k-1}{k} n + \frac{1}{k}$
For $k = 2:$
\begin{table}[htbp]
\centering
\begin{tabular}{|c|l|}
\hline
$\textbf{n}$ & \textbf{prime factorization} \\ \hline
21 & $3 \cdot 7 = 3(3^2 - 2)$ \\
2133 & $3^3 \cdot 79 = 3^3 \cdot (3^4 - 2)$ \\
19521 & $3^4 \cdot 241 = 3^4 \cdot (3^5 - 2)$ \\
176661 & $3^5 \cdot 727 = 3^5 \cdot (3^6 - 2)$ \\
\hline
\end{tabular}
\end{table}
We have performed searches for $k = 3$ and $k = 5$ too, but we have not found any solutions.
3.3. $\sigma (\sigma (n)) = \frac{2k-1}{k} n + \frac{1}{k}$
For $k=2:$
\begin{table}[htbp]
\centering
\begin{tabular}{|c|l|}
\hline
$\textbf{n}$ & \textbf{prime factorization} \\ \hline
9 & $3^2$ \\
729 & $3^6$ \\
531441 & $3^{12}$ \\
\hline \end{tabular}
\end{table}
We have performed searches for $k = 3$ and $k = 5$ too, but we have not found any solutions.
3.4. $\sigma (n) = \frac{3}{2} (n+1)$
\begin{table}[htbp]
\centering
\begin{tabular}{|c|l|}
\hline
$\textbf{n}$ & \textbf{prime factorization} \\ \hline
15 & $3\cdot 5$ \\
207 & $3^2 \cdot 23$ \\
1023 & $3\cdot 11 \cdot 31$ \\
2975 & $5^2 \cdot 7 \cdot 17$ \\
19359 & $3^4 \cdot 239$ \\
147455 & $5\cdot 7 \cdot 11 \cdot 383$ \\
1207359 & $3^3 \cdot 97 \cdot 461$ \\
5017599 & $3^3 \cdot 83 \cdot 2239$\\
\hline \end{tabular}
\end{table}
\section{Results and conjectures}
\begin{proposition}
If $n = 3^{k-1} (3^k - 2)$ where $3^k - 2$ is prime, then $n$ is a 2-hyperperfect number. \end{proposition}
\begin{proof} Since the divisor function $\sigma$ is multiplicative and for a prime $p$ and a prime power $p^\alpha$ we have \[ \sigma (p) = p+1 \] and \[ \sigma (p^\alpha) = \frac{p^{\alpha +1} - 1}{p-1}, \] it follows that \begin{eqnarray} \sigma (n) &=& \sigma (3^{k-1} (3^k - 2)) = \sigma (3^{k-1}) \cdot \sigma (3^k - 2) = \frac{3^{(k-1)+1} -1}{3-1} \cdot (3^k - 2+1) = \nonumber\\ &=& \frac{(3^k - 1) \cdot (3^k -1 )}{2} = \frac{3^{2k} - 2\cdot 3^k + 1}{2} = \frac{3}{2} 3^{k-1} (3^k - 2) + \frac{1}{2}.\nonumber \end{eqnarray} \end{proof}
\begin{conjecture} All 2-hyperperfect numbers are of the form $n = 3^{k-1} (3^k - 2),$ where $3^k - 2$ is prime. \end{conjecture}
\noindent To gather numerical evidence for this conjecture, we searched for primes that can be written in the form $3^k - 2$. We have reached the following results:
\begin{center}
\begin{tabular}{|c|c|}
\hline
\textbf{ \# }& $k$ \textbf{ for which }$3^k - 2$ \textbf{is prime} \\ \hline
1 & 2 \\
2 & 4 \\
3 & 5\\
4 & 6 \\
5 & 9 \\
6 & 22 \\
7 & 37 \\
8 & 41 \\
9 & 90 \\
\hline \end{tabular} \end{center}
\begin{center}
\begin{tabular}{|c|c|}
\hline
\textbf{ \# }& $k$ \textbf{ for which }$3^k - 2$ \textbf{is prime} \\ \hline
10 & 102 \\
11 & 105 \\
12 & 317 \\
13 & 520 \\
14 & 541 \\
15 & 561 \\
16 & 648 \\
17 & 780 \\
18 & 786 \\
19 & 957 \\
20 & 1353 \\
21 & 2224 \\
22 & 2521 \\
23 & 6184 \\
24 & 7989 \\
25 & 8890 \\
26 & 19217 \\
27 & 20746 \\
\hline \end{tabular} \end{center}
\noindent Therefore the last result we reached is $3^{20745} (3^{20746} - 2)$, which has 19797 digits.
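The corresponding $2$-hyperperfect numbers can also be generated directly from these primes. The following Python sketch (illustrative only; trial division limits it to small $k$) builds $n = 3^{k-1}(3^k - 2)$ for the first few values of $k$ in the table above and verifies the defining equation $\sigma(n) = \frac{3}{2}n + \frac{1}{2}$.
\begin{verbatim}
# Generate n = 3^(k-1) (3^k - 2) for the first few k with 3^k - 2 prime and
# check the 2-hyperperfect equation sigma(n) = (3/2) n + 1/2 directly.
def sigma(n):
    total, d = 0, 1
    while d * d <= n:
        if n % d == 0:
            total += d
            if d != n // d:
                total += n // d
        d += 1
    return total

def is_prime(q):
    if q < 2:
        return False
    d = 2
    while d * d <= q:
        if q % d == 0:
            return False
        d += 1
    return True

for k in range(2, 10):
    q = 3**k - 2
    if is_prime(q):
        n = 3**(k - 1) * q
        assert 2 * sigma(n) == 3 * n + 1
        print(k, n)   # k = 2, 4, 5, 6, 9 give n = 21, 2133, 19521, 176661, ...
\end{verbatim}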
\noindent If we consider the super-hyperperfect numbers in the special case $\sigma (\sigma (n)) = \frac{3}{2} n + \frac{1}{2}$, we can prove the following result.
\begin{proposition} If $n = 3^{p-1}$ where $p$ and $(3^p -1)/2$ are primes, then $n$ is a super-hyperperfect number. \end{proposition}
\begin{proof} \begin{eqnarray*} \sigma (\sigma(n)) &=& \sigma (\sigma (3^{p-1})) = \sigma \left(\frac{3^p -1}{2} \right) = \frac{3^p -1}{2} + 1 =\\ &=& \frac{3}{2} \cdot 3^{p-1} + \frac{1}{2}=\frac{3}{2}n+\frac{1}{2}. \end{eqnarray*} \end{proof}
\begin{conjecture} All solutions of this generalization are of the form $3^{p-1}$, where $p$ and $(3^p -1)/2$ are primes. \end{conjecture}
To gather numerical evidence for this conjecture, we searched for primes $p$ for which $(3^p-1)/2$ is also prime. We have reached the following results:
\begin{center}
\begin{tabular}{|c|c|}
\hline
\textbf{ \# }& $p-1$ \textbf{ for which } $p$ and $(3^p - 1)/2$ \textbf{ are primes} \\ \hline
1 & 2 \\
2 & 6 \\
3 & 12\\
4 & 540 \\
5 & 1090 \\
6 & 1626 \\
7 & 4176 \\
8 & 9010 \\
9 & 9550 \\
\hline \end{tabular} \end{center}
\noindent Therefore the last result we reached is $3^{9550}$, which has 4557 digits.
\rightline{\emph{Received: November 9, 2008}}
\end{document}